After months of experimenting with AI assistants across different projects, I've developed what I call a "modular AI context stack" – a systematic approach to managing knowledge that transforms how I work with AI. It's not just about better prompts; it's about building a reusable infrastructure for thought.
But let me be clear upfront: this is messy, manual, and very much a work in progress. I'm not selling you a polished system – I'm sharing an evolving experiment that's been valuable despite its rough edges.
The Multi-Assistant Advantage
I previously wrote about this in "LLMs as Thought Partners."
I explored how AI can help us think through complex decisions. But here's what I've discovered since: using multiple AI assistants isn't just helpful – it's transformative. I regularly pit Claude, ChatGPT, and Gemini against each other on the same problems, not because I can't pick a favorite, but because their different biases and approaches create productive friction.
Think of it like assembling your own advisory board, where each member brings distinct perspectives. Claude might offer nuanced analysis, ChatGPT provides creative alternatives, and Gemini excels at technical precision. The magic happens when you compare their responses – suddenly you're seeing blind spots and opportunities you'd miss with a single assistant.
The reality? This means having three browser tabs open, manually copying prompts between them, and doing the synthesis work myself. There's no elegant orchestration layer here – just good old-fashioned manual labor. But the insights are worth the friction.
The Atomic Document Revolution
The breakthrough came when I started treating context like code – modular, reusable, and version-controlled. Instead of dumping everything into massive prompts, I create focused, single-purpose documents that live in Google Docs.
For Blazer, my SERP analysis tool, this looks like:
Brand Guidelines: Visual identity, tone, and messaging principles
Product Strategy: Market positioning, competitive analysis, and roadmap
Technical Specifications: API design patterns and data structures
Customer Insights: Interview summaries and usage patterns
Each document is atomic – it stands alone, focuses on one thing, and acts as the "Directly Responsible Individual" (DRI) for that knowledge domain. Just as microservices revolutionized software architecture, this approach transforms AI context management.
Except, unlike microservices, there's no automatic discovery or loading. I'm manually sharing Google Doc links with Claude, or copy-pasting entire documents when link sharing doesn't work. Sometimes I forget which document has what, leading to a frantic search through my Drive. It's inelegant, but it beats starting from scratch every time.
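To make "context like code" concrete, here is a minimal sketch of what that manual selection step does. The module names and contents are hypothetical stand-ins for the actual Google Docs; in reality this logic runs in my head, one copy-paste at a time:

```python
# A minimal sketch of selective context loading. The module names and
# contents below are invented stand-ins for the real Google Docs.
CONTEXT_MODULES = {
    "brand": "Brand Guidelines: friendly, precise, minimal jargon.",
    "strategy": "Product Strategy: SERP analysis for independent SEO work.",
    "tech": "Technical Specifications: REST endpoints returning JSON.",
    "customers": "Customer Insights: users want faster CSV exports.",
}

def build_context(topics: list[str], task: str) -> str:
    """Assemble a prompt preamble from only the relevant modules."""
    selected = [CONTEXT_MODULES[t] for t in topics if t in CONTEXT_MODULES]
    return "\n\n".join(selected + [f"Task: {task}"])

# Working on brand design plus API docs? Load only those two modules.
prompt = build_context(["brand", "tech"], "Draft the intro section of the API docs.")
```

The point isn't the code; it's the shape: named modules, loaded selectively per task, which is exactly what I do by hand today.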
From Documentation Avoider to Context Architect
Here's a confession: as a manager, writing documentation was always my Achilles' heel. I'd rather build ten prototypes than write one process document. But AI has completely flipped this script.
Why? Because the payback is immediate and multiplicative. When I document a decision framework today, tomorrow's AI conversation starts from that foundation rather than square one. It's like compound interest for knowledge work – every well-crafted document becomes a force multiplier across all future AI interactions.
This shift has been profound. I've gone from avoiding documentation to actively seeking opportunities to memorialize insights. The key difference? I'm not writing for some hypothetical future reader – I'm building tools for my AI collaborators to help me think better, faster, and more consistently.
That said, maintaining this system is its own burden. Documents get stale. I'll reference a strategy doc in an AI conversation only to realize it's three iterations behind my current thinking. The overhead of keeping everything updated is real, and I don't have a good solution yet.
The Power of Portable Context
Storing these atomic documents in Google Docs might seem mundane, but it's strategically brilliant. The documents live in a format that's broadly accessible – I can share them with any AI assistant, paste them into any chat interface, or reference them across platforms. No vendor lock-in, no migration headaches – just pure portability.
This approach also solves the context window problem elegantly. Instead of cramming everything into a single conversation, I can selectively load only the relevant modules. Working on brand design? Pull in the visual guidelines. Discussing technical architecture? Load the API specifications. It's precision context at scale.
The manual nature is both a feature and a bug. On one hand, I have complete control over what context each AI assistant sees. On the other, I'm the integration layer – constantly copying, pasting, and managing which documents go where. When I'm on mobile, the friction multiplies.
Looking ahead, protocols like MCP (Model Context Protocol) could transform this manual process into something more automated – imagine AI assistants directly querying your document repository based on conversation context. But for now, I'm the human API between my knowledge base and AI tools.
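To make that concrete without inventing MCP details, here's a toy sketch of conversation-driven module selection using plain keyword overlap. The keyword sets are invented for illustration, and this is not the actual MCP protocol, just the rough shape of what automated routing could replace:

```python
# A toy illustration (not real MCP) of routing a message to context
# modules by keyword overlap; the keyword sets are invented.
MODULE_KEYWORDS = {
    "brand": {"logo", "tone", "voice", "color"},
    "tech": {"api", "schema", "endpoint", "latency"},
    "customers": {"interview", "churn", "feedback"},
}

def route(message: str) -> list[str]:
    """Return the modules whose keywords overlap with the message."""
    words = set(message.lower().split())
    return sorted(m for m, kw in MODULE_KEYWORDS.items() if words & kw)

route("What should the API endpoint return?")  # → ['tech']
```

Real keyword matching would be too brittle for production use, but even this crude version shows why I'd rather be the human API for now: I can route on meaning, not string overlap.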
Building Your Own Context Stack
The principles for effective atomic artifacts are surprisingly simple:
Single Focus: Each document should have one clear purpose
Self-Contained: Minimize dependencies between documents
Broadly Accessible: Use formats that work everywhere
Living Documents: Update them as you learn
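I don't have tooling for the "Living Documents" principle yet, but even a trivial staleness check would help. A sketch, with invented document names and dates:

```python
from datetime import datetime, timedelta

# A sketch of a staleness check for the "Living Documents" principle.
# Document names and timestamps are invented for illustration.
def stale_docs(last_updated: dict, now: datetime, max_age_days: int = 30) -> list:
    """Return documents not touched within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return sorted(doc for doc, ts in last_updated.items() if ts < cutoff)

now = datetime(2025, 6, 1)
updated = {
    "Brand Guidelines": datetime(2025, 5, 28),
    "Product Strategy": datetime(2025, 3, 2),
}
stale_docs(updated, now)  # → ['Product Strategy']
```

A real version could read last-modified timestamps from Drive, but the principle is the same: make staleness visible instead of discovering it mid-conversation.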
What makes this approach powerful isn't the individual documents – it's the system. You're essentially building a knowledge management OS that amplifies AI's capabilities while maintaining your unique perspective and expertise.
But here's what they don't tell you: figuring out the right granularity is an art. Too atomic, and you're drowning in documents. Too broad, and you lose the modularity benefits. I'm constantly refactoring my document structure, splitting some, merging others. It's very much a learning process.
The Meta-Skill of Context Management
As AI tools proliferate, the differentiating skill isn't knowing which model to use or how to craft clever prompts. It's becoming a master of context – knowing what information to provide, when to provide it, and how to structure it for maximum leverage.
This modular approach has transformed how I work. Complex projects that once required lengthy briefings now start with a few strategic document references. Consistency across projects has improved dramatically because core principles live in reusable modules rather than my fallible memory.
More importantly, it's made AI feel less like a tool and more like a true collaborator. When your AI assistant has access to your accumulated wisdom, structured thoughtfully, the conversations become richer and the outputs more aligned with your goals.
The Honest Truth
This system is held together with digital duct tape and determination. It's manual where it should be automated, fragile where it should be robust, and constantly evolving as I discover better approaches. Some days I wonder if I'm overengineering something that should be simple.
But despite all the friction, it works. The manual copy-pasting, the constant document updates – they're all worth it for the leverage this approach provides. I'm building a personal knowledge infrastructure one Google Doc at a time, and while it's not pretty, it's mine.
The future of AI isn't just about better models – it's about better collaboration between human expertise and AI capabilities. Building your own modular context stack is the first step toward that future. Start small, document one process or framework, and watch how it transforms your next AI interaction. The compound effects might surprise you.
Just don't expect it to be elegant. Expect it to be useful.
Great article. I'm on a similar journey.
I'm a non-coding knowledge worker and recently started exploring how I can treat my work as code.
I want to manage my work inputs and outputs as code - modular, interconnected, organized in a system...
And I want to build workflows similar to coding in Cursor: based on the prompt, I or my AI will select relevant context and produce the outputs, without having to switch apps.
My v0 is Obsidian as the context storage and Obsidian Cannoli as the interface for interacting with LLMs.
I'm currently exploring v1 where I want to replace Cannoli with a more robust system - maybe VS Code with some LLM plugin, maybe some homegrown system on top of n8n.