My AI workflow has a fallback now. Here's how I built it in an afternoon.
Claude went down mid-morning. Not a big deal for most people. For me it was: I had a whole daily system running through it.
I use Claude Code from the terminal inside Obsidian - it reads my vault, processes my brain dump, sets up my day, and runs a set of custom skills I’ve built up over time. Things like /morning to review what’s open, /reflect to route notes into the right places, /evening to close out the day. When Claude went down, all of that stopped working.
I had two options. Wait for it to come back, or figure out a fallback.
I’m glad it went down.
If this is the first time you’re hearing about running Claude Code inside Obsidian, start here first: I Built an Obsidian Second Brain That Thinks Alongside My Mind. This post builds on that setup.
What I did
I installed Codex - OpenAI’s terminal AI agent, equivalent to Claude Code - and ran it from the same place I always run Claude: the Terminal plugin inside Obsidian, opened at the vault root.
That last part mattered. Because I was already inside the vault when I started Codex, it had the same context Claude does. It could see my files, my folder structure, my CLAUDE.md. I didn’t have to re-explain anything.
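If you want to reproduce this, the setup is minimal. A sketch, assuming the npm install route from the Codex docs (check developers.openai.com/codex/cli if the package name has changed):

```bash
# Install the Codex CLI globally (package name per OpenAI's docs at time of writing)
npm install -g @openai/codex

# Start it from the vault root so it sees the same files Claude Code does.
# Obsidian's Terminal plugin already opens there; from a plain terminal, cd in first:
cd /path/to/your-vault
codex
```

The working directory is the whole trick: the agent's context is whatever folder you start it in.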
Then I gave it one prompt:
“Look through this vault. Find my Claude skills, agents, and commands setup. Read how they’re structured. Tell me what you find and confirm you can run them.”
It found everything. It understood the structure. It read a couple of skills to verify it could follow them.
I ran /evening - or the equivalent, telling Codex to read and execute that skill file - and it worked. Not identically to Claude, but close enough.
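Concretely, "running /evening" in Codex just means typing out the instruction the slash command encodes - something like this (paraphrased, not the literal wrapper text):

```
Read the skill at 99 System/Skills/evening.md and execute the
full prompt under the "## Full prompt" heading.
```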
Why it worked: skills as plain text
The reason this wasn’t a painful migration is that all my skills are just markdown files.
In the Obsidian setup, there are three directories at play:
```
your-vault/
  .claude/
    commands/      <- Claude Code slash commands (vault-level)
  99 System/
    Skills/        <- the actual prompt content (source of truth)

~/.claude/
  commands/        <- Claude Code slash commands (global)
```

The Skills/ folder is where the real content lives - each skill is a markdown file with the full prompt written out. The .claude/commands/ folders don't copy any of that. They're thin wrappers, one per skill, that just say: "read the skill file at this path and execute it."
Like this:
```
---
description: Close out your day
allowed-tools: Read, Glob, Bash, Write, Edit
---

Read the skill definition at `99 System/Skills/evening.md`
and execute the full prompt under the "## Full prompt" heading.
```

That's the whole command file. One line of instruction.
When Codex reads that, it knows exactly what to do - same as Claude does. The skills aren’t Claude-specific. They’re just instructions written in plain language. Any model that can read markdown can run them.
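To make that concrete, here's the shape of a skill file. This is a simplified stand-in for illustration, not the actual evening.md:

```markdown
# Evening

## Full prompt

Review today's daily note. Summarize what got done, carry any
unfinished tasks forward to tomorrow's note, and append a short
end-of-day reflection. Write the changes back into the vault.
```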
Toggling between them
I now have three AI runners that all pull from the same 99 System/Skills/ folder:
- Claude Code - run `claude` in the terminal. Slash commands work natively.
- Codex - run `codex` in the terminal. No native slash commands, so you tell it directly which skill file to read. Setup and docs at developers.openai.com/codex/cli. I signed in with ChatGPT login rather than an API key - that option is available during setup.
- Gemini CLI - same pattern. Docs at ai.google.dev/gemini-api/docs/gemini-cli.
Switching is just a matter of which command I type to start a session. The skills don’t move. The logic doesn’t change. The model is just the runner.
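In practice, kicking off the same skill with each runner looks something like this (the invocation shapes and the Gemini `-p` flag are from each CLI's docs as I understand them - double-check against the links above, since flags change):

```bash
# Claude Code: native slash command inside the session
claude
# then type: /evening

# Codex: pass the wrapper's instruction as the opening prompt
codex "Read '99 System/Skills/evening.md' and execute the full prompt under the '## Full prompt' heading."

# Gemini CLI: same instruction, non-interactive via -p
gemini -p "Read '99 System/Skills/evening.md' and execute the full prompt under the '## Full prompt' heading."
```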
The one difference: Claude handles my multi-step skills more smoothly. Some of the more complex ones - the ones that chain 6+ steps across multiple files - need a bit more steering with Codex or Gemini. For simpler skills, they’re nearly identical.
The honest bit
I didn’t plan any of this. I happened to have a portable setup, and the outage forced me to discover that.
The portability came from one decision made early: write skills as plain markdown, not inside any tool. If I’d built this inside a platform that locked in the format, I would have been stuck waiting for Claude to come back.
The fallback is intentional now. The setup never was.
Takeaway
If your AI workflow only runs on one model, it’s not a workflow - it’s a dependency.
Write your prompts as plain text. Store them where you can edit them. Any model can run them.
The AI is the runner. You own the logic.