A couple months ago I wrote about a note-taking system built on plain markdown and 50 lines of bash. The thesis: when your entire system is transparent to AI, it can write whatever tooling you need. No plugins, no APIs — just files and scripts.
That thesis was correct. But it undersold what happens when you keep going.
The system isn’t notes anymore. It’s an operational layer. AI agents run morning routines, process meeting transcripts, triage inboxes, maintain their own memory across sessions, and flag when I’ve dropped a commitment. The same markdown files, the same transparency principle — but the ceiling turned out to be much higher than “better notes.”
The Cold Start Problem
The original system had a gap. Every conversation with AI started from zero. It didn’t know what I was working on, what I’d decided last week, or what mattered to me. I’d spend the first few messages re-establishing context, or worse — get advice that contradicted a decision I’d already made.
Notes existed, but they were organized for me to find things. Not for AI to orient itself.
The fix was a folder called compass/ with four files:
compass/
├── goals.md # What I'm trying to achieve
├── context.md # What's in flight right now
├── preferences.md # How I work and communicate
└── decisions.md # What's been decided and why
context.md — the session handoff
This is the most important file. It’s a queue of open threads — not a project wiki, not a journal. Three sections: In Flight, Blocked/Waiting, Needs Attention. Each entry is a one-liner: what it is plus current status.
## In Flight
- API redesign — open question on auth model blocks shipping
- Onboarding flow rewrite — new design approved, implementation started
- Hiring — backend role, two candidates in pipeline
## Blocked / Waiting
- Enterprise pilot — blocked on their internal security review
## Needs Attention
- Monitoring gaps — no alerting on payment failures, needs setup
When AI reads this at the start of a session, it knows what I’m dealing with. It can prioritize, notice conflicts, and avoid suggesting things I’ve already rejected. The file acts as a handoff between sessions — like briefing notes for an incoming shift.
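Because the compass files are plain markdown with predictable headings, they are readable by any tool, not just by AI. As a minimal sketch (the file path and section headings follow the layout above; the sample content is illustrative), pulling just the In Flight entries is one awk rule:

```shell
# Sample compass/context.md in the format shown above; in practice
# this file already exists. Entries here are illustrative.
mkdir -p compass
cat > compass/context.md <<'EOF'
## In Flight
- API redesign - auth model question blocks shipping
- Hiring - backend role, two candidates in pipeline

## Blocked / Waiting
- Enterprise pilot - blocked on their internal security review
EOF

# Enter the section at its heading, leave at the next heading,
# and print only the list items in between.
in_flight=$(awk '
  /^## In Flight/ { s = 1; next }
  /^## /          { s = 0 }
  s && /^- /      { print }
' compass/context.md)
echo "$in_flight"
```

The same pattern works for any section in any compass file, which is part of why plain text with simple conventions pays off.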
preferences.md — behavioral memory
When I tell AI “don’t do X” or “I prefer Y,” that correction lasts exactly one session. Next time, same mistake. preferences.md makes corrections permanent:
## Communication
- Prefers brief, direct answers — no filler, no excessive caveats
- No emojis unless asked
## Decision Making
- Pragmatic over perfect — "good enough now" beats "ideal later"
## Behavioral Corrections
- 2026-01-15: "Don't summarize what I just said back to me" — context
The AI reads this file, adjusts its behavior, and the correction sticks. Over time the file becomes a fairly accurate profile of how you work.
decisions.md — the decision log
This one prevents the most insidious problem: re-litigating settled questions. When a significant decision gets made, it’s logged with reasoning and alternatives considered. Two weeks later when I’m second-guessing a technical or business decision, the AI can point to the entry and say “here’s why you decided this.”
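An entry can stay light: a dated heading, the reasoning, and the alternatives that were rejected. The fields and content below are my illustration, not a prescribed schema:

```markdown
## 2026-01-20: Keep auth on session cookies, defer OAuth
- Decision: ship v1 with cookie sessions; revisit OAuth after the pilot
- Why: one consumer today; OAuth adds a week of work with no current payoff
- Alternatives considered: OAuth2 now (too slow), API keys (weak fit for browser use)
```

The point is not the format. It is that the reasoning is written down somewhere the AI reads by default.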
References: Stable Context
Notes are dated and temporal. But some context is durable — it applies across sessions and doesn’t change with each meeting.
references/
├── professional-self.md # Who I am, background
├── products.md # Product descriptions
├── projects/ # Living project briefs
└── people/
├── internal/ # Team members
├── partners/ # External partners
├── candidates/ # Hiring pipeline
└── customers/ # Customer accounts
Project briefs get updated in place as understanding evolves — architecture decisions, shifted strategy, changed ownership. They’re not notes about a project. They’re the canonical source of truth for what the project is.
People files are the same idea. When I’m about to meet someone, the AI can read their file and tell me: here’s who they are, here’s what we discussed last time, here’s what’s pending between us. No prep work on my part.
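A person file can be very small and still produce a useful brief. This shape is entirely illustrative, name included:

```markdown
# Jane Doe (illustrative)
- Role: engineering manager, Platform team (internal)
- Working style: prefers written proposals before meetings

## Last discussed
- 2026-01-12: migration timeline; she flagged staffing risk for Q2

## Pending
- I owe her a draft of the on-call rotation change
```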
Skills: Workflows as Markdown
This is where it gets interesting.
A “skill” is a markdown file that teaches AI a multi-step workflow. Not code. Not a plugin. Just instructions that reference tools.
Here’s the structure of the morning kickoff skill:
# Morning Kickoff
Create a daily "today" page with an overview of the day ahead.
## Steps
1. Get today's date and determine the day of week
2. Create the daily file at `daily/YYYYMMDD--dayofweek.md`
3. Gather information using the relevant skills:
- Check work calendar and personal calendar
- Check today's tasks, upcoming tasks, deadlines
- Check email for anything unread or requiring attention
4. Read all compass files to fully load context
That’s it. The AI reads this, follows the steps, calls the appropriate tools (email, calendar, task manager), and produces a daily briefing. No code to maintain. No API integration to debug.
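Steps 1 and 2 of that skill are the kind of thing the AI carries out with ordinary shell commands. A minimal sketch, where the `daily/YYYYMMDD--dayofweek.md` path comes from the skill and the page template is my assumption:

```shell
#!/usr/bin/env bash
# Steps 1-2 of the morning kickoff: compute today's filename and
# create the daily page if it is missing. Template is an assumption.
set -euo pipefail

notes_dir="${NOTES_DIR:-.}"
date_part=$(date +%Y%m%d)                          # e.g. 20260120
day_part=$(date +%A | tr '[:upper:]' '[:lower:]')  # e.g. tuesday
daily_file="$notes_dir/daily/${date_part}--${day_part}.md"

mkdir -p "$notes_dir/daily"
if [ ! -f "$daily_file" ]; then
  printf '# %s (%s)\n\n## Schedule\n\n## Tasks\n\n## Notes\n' \
    "$(date +%F)" "$(date +%A)" > "$daily_file"
fi
echo "$daily_file"
```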
I have about 20 of these now:
- Morning kickoff — daily briefing with calendar, tasks, emails, and rehydrated context
- Weekly review — audit progress against goals, clean up stale threads, identify stuck items
- Inbox triage — process work email, personal email, and task inbox in sequence
- Process meetings — pull transcripts from Notion, extract tasks, create meeting notes, flag Zettelkasten connections
- People prep — briefing about a specific person before a 1:1
- Follow-ups — scan notes and emails for commitments that aren’t tracked as tasks
- Wrap-up — end-of-session routine that saves context and logs decisions
The realization that made this click: most operational work is a checklist. And checklists are exactly what AI is good at following — especially when the tools and context are transparent.
Why markdown instead of code? Because I can edit a workflow in 30 seconds. Adding a step, changing the order, adjusting what gets checked — it’s just editing a text file. And the AI reads it natively. No parsing, no compilation, no framework.
The Session-End Hook
The skills above all require me to invoke them. The session-end hook doesn’t.
It’s an extension that fires automatically when a session ends. It spawns a background AI agent that reads the session transcript and decides what’s worth persisting. It updates compass files, project briefs, and people references — without me asking.
The prompt for this agent is specific about what to look for:
- Status changes — did a project move forward, get blocked, or resolve?
- Decisions — was something significant decided?
- Preferences — did I correct the AI’s behavior?
- People info — did new context surface about someone I work with?
And specific about what to ignore: routine debugging, trivial file edits, implementation details.
This closes a loop that matters: the system maintains itself. I don’t have to remember to update context after each session. I don’t have to decide whether something was worth logging. The agent handles triage, and I review the results.
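The exact wiring depends on what hosts the session, but the hook itself is small. In this sketch, `AGENT_CMD` is a stand-in for whatever CLI actually spawns the triage agent; the command name and log path are assumptions:

```shell
# Sketch of a session-end hook. AGENT_CMD is a stand-in for whatever
# CLI launches the background triage agent; paths are assumptions.
session_end_hook() {
  local transcript="$1"
  local agent_cmd="${AGENT_CMD:-agent}"          # hypothetical agent CLI
  local log="${HOOK_LOG:-logs/session-end.log}"

  mkdir -p "$(dirname "$log")"
  echo "hook fired for $transcript" >> "$log"

  # Background the agent so the session can exit immediately; the
  # agent reads the transcript and updates compass/reference files.
  "$agent_cmd" "$transcript" >> "$log" 2>&1 &
}
```

Backgrounding matters: the hook should never make ending a session feel slow.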
A Typical Day
Morning. I run /morning-kickoff. AI checks both calendars, pulls tasks and deadlines, scans email for anything urgent, reads compass files to rehydrate context. Produces a daily page I can glance at over coffee.
During work. When I ask for advice — on priorities, architecture decisions, how to handle a conversation — the AI already knows what I’m working on, what’s been decided, and how I prefer to operate. The advice is grounded, not generic.
After meetings. /process-meetings pulls transcripts from Notion, creates summary notes, extracts follow-up tasks, and checks them against my task manager. If a discussion maps to an existing Zettelkasten concept, it flags the connection.
End of day. Either I run /wrap-up explicitly, or the session-end hook captures context automatically. Open threads get updated. Decisions get logged. Nothing falls through the cracks — or at least, fewer things do.
Weekly. /weekly-review compares what actually happened against goals. Cleans up stale context. Surfaces stuck items I’ve been unconsciously avoiding.
The Principle
The original post argued that transparency enables tooling. That’s still true, but it’s the small version. The bigger version: transparency enables delegation.
When AI can see the full system, has persistent memory, and has procedures to follow — it’s not a chat assistant anymore. It’s an operational layer. It maintains its own context. It runs routines. It catches things you’d miss.
The building blocks are the same: plain text, simple conventions, no proprietary formats. But the capability ceiling is determined by what you teach it to do, not by what the tool vendor ships.
Tradeoffs
Same ones as before, plus a few new ones:
- Terminal-native. If you don’t live in a terminal, there’s friction. I’m building a Telegram bot to extend this outside the IDE, but it’s not there yet.
- Designing for AI is a skill. You’re not just organizing notes for yourself anymore. You’re thinking about what context an AI agent needs, how to structure handoffs, what’s worth persisting. It’s a different discipline.
- Trust calibration. The session-end hook updates files autonomously. Mostly it’s accurate. Occasionally it overpersists or misreads significance. You need to review its work, especially early on.
- Tool surface area. This works because I have CLI tools for email, calendar, tasks, and meeting transcripts. Each new integration is a new tool to build or configure. The system is only as capable as its access.
What’s Next
The gap I’m most interested in closing: proactive behavior. Right now, AI acts when I invoke it. The morning kickoff runs because I type the command. But the information is all there — calendar, tasks, deadlines, commitments. An agent could notice that I have a meeting in an hour with someone I haven’t prepped for, or that a deadline is approaching with no progress on the task.
Moving from “AI that helps when asked” to “AI that notices when something needs attention” is the next step. The foundation — transparent files, persistent memory, structured workflows — is already in place. The trigger just needs to shift from manual to event-driven.
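One plausible first version of that trigger is nothing fancier than cron: poll on a schedule, run a check skill, and surface output only when there is something to say. Everything in this fragment is an assumption, including the script name:

```
# Hypothetical crontab entry: every 15 minutes, run a check skill and
# notify only if it printed anything. Script and paths are assumptions.
*/15 * * * * cd "$HOME/notes" && out=$(./skills/check-upcoming.sh) && [ -n "$out" ] && notify-send "Heads up" "$out"
```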
Fifty lines of bash was the starting point. The system it grew into is unrecognizable from those origins. But the principle hasn’t changed: keep everything transparent, and AI will surprise you with what it can do.