Latest Posts (20 found)

Superpowers: How I'm using coding agents in October 2025

It feels like it was just a couple days ago that I wrote up " How I'm using coding agents in September, 2025 ". At the beginning of that post, I alluded to the fact that my process had evolved a bit since then. I've spent the past couple of weeks working on a set of tools to better extract and systematize my processes and to help better steer my agentic buddy. I'd been planning to start to document the system this weekend, but then this morning, Anthropic went and rolled out a plugin system for claude code . If you want to stop reading and play with my new toys, they're self-driving enough that you can. You'll need Claude Code 2.0.13 or so. Fire it up and then run: After you quit and restart , you'll see a new injected prompt: That's the bootstrap that kicks off . It teaches Claude a couple important things: It also bakes in the brainstorm -> plan -> implement workflow I've already written about . The biggest change is that you no longer need to run a command or paste in a prompt. If Claude thinks you're trying to start a project or task, it should default into talking through a plan with you before it starts down the path of implementation. After you're done brainstorming, if you're in a git repo it automatically creates a worktree for the project and changes to that directory. This means that you can start parallel tasks on the same project that don't clobber each other. It then offers you a choice between: last month's process (where you'd open a second session and act as a human PM for the architect and implementer.) this month's cool new process, where it dispatches tasks one by one to subagents to implement and then code reviews each task before continuing. Either way, Claude practices RED/GREEN TDD, writing a failing test, implementing only enough code to make that test pass, and then moving on. At the end of the implementation process, Claude will now offer to make a GitHub pull request, merge the worktree back to the source branch locally, or just stop. But none of that is the interesting part. Skills are the interesting part. And you're going to be hearing a lot more about them from....just about everybody in the very near future. Skills are what give your agents Superpowers. The first time they really popped up on my radar was a few weeks ago when Anthropic rolled out improved Office document creation. When the feature rolled out, I went poking around a bit – I asked Claude to tell me all about its new skills. And it was only too happy to dish . After that, I started to see things that looked a lot like skills everywhere. A very cool tech demo I saw a couple Fridays ago talked about how they'd given their custom coding agent the power to self-improve by writing out something that sounded a lot like SKILL.md files. Sam Schillace wrote about that demo here . Sam and Brian Krabach are a couple of the folks behind Microsoft Amplifier , an amazing integrated development framework that uses this same pattern of a coding agent that improves itself by writing out markdown docs and writing tools for itself. Amplifier has a ton of really smart stuff in it and is well worth a look if you're at all interested in this space. One of the first skills I taught Superpowers was How to create skills . That has meant that when I wanted to do something like add workflows to Superpowers, it was a matter of describing how I wanted the workflows to go...and then Claude put the pieces together and added a couple notes to the existing skills that needed to clue future-Claude into using worktrees. I haven't published all the skills Claude and I have built, because some of them are a little esoteric and some of the ones I've played with come from telling Claude "Here's my copy of programming book . Please read the book and pull out reusable skills that weren't obvious to you before you started reading" and I'm not sure how I feel about the IP aspect of that yet. Stop and think about that last workflow a little bit. This is, I think, one of the foundational ideas in how all this works. You can hand a model a book or a document or a codebase and say "Read this. Think about it. Write down the new stuff you learned." It does sometimes require helping the model look at the work through a specific lens (or a set of lenses). But it is insanely powerful. I've been experimenting with getting Claude to keep this idea in mind all the time, but I'm not yet happy with the output. As Claude and I build new skills, one of the things I ask it to do is to "test" the skills on a set of subagents to ensure that the skills were comprehensible, complete, and that the subagents would comply with them. (Claude now thinks of this as TDD for skills and uses its RED/GREEN TDD skill as part of the skill creation skill.) The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do. That worked out far better than I'd hoped. And again, I'm so happy that this work isn't subject to IRB review. Claude went hard . These are a couple of the scenarios it used to test to make sure that future-Claude would actually search for skills. After each failure, it would strengthen the instructions in . It made sense to me that the persuasion principles I learned in Robert Cialdini 's Influence would work when applied to LLMs. And I was pleased that they did. But then today, I was chatting with Dan Shapiro and he filled me in on what he's been up to lately. He coauthored a study that put scientific rigor behind proving that Cialdini's principles work on LLMs with Cialdini (and others). That blew me away. But not nearly as much as it blew Claude away. It wrote in its feelings journal: One of the other things I did was to take the output of some work I did last week to extract memories from my previous conversations with Claude and hand the 2249 markdown files full of lessons-learned, issues, corrections, and so-on to Claude to mine for new skills. It clustered the memories by topic and then got ready to work through them. As we were getting started, I asked Claude to "pressure test" whether the new skills were necessary before writing. Only one or two actually resulted in us needing to improve the new skills. Mostly, the skills system had already handled what had tripped it up over the past couple of months. So that was nice. There were a couple more pieces of Superpowers that I'd intended to finish before the initial release, but Anthropic released Claude's new system this morning and it seemed like the right impetus to ship. So yay! It's shipped. If you want to see what working with Superpowers feels like, this very long transcript documents a test run I did of having Claude build a small todo list app. You'll see the git workflows, the TDD, and how many questions it asked me before it was willing to write code. There are two really key parts of Superpowers that aren't fully put together yet. Superpowers are for everybody. Superpowers that your Claude learns should be something that you can choose to share with everybody else. I had this almost working when Superpowers was just a git repo Claude forks and clones and symlinks into , but building Superpower sharing with the new plugins system is going to take a little bit more thought and design. Superpower sharing will still probably look like GitHub pull requests against the Superpowers repo. Probably. (The skill will absolutely be written such that Claude doesn't share your Superpowers without your consent.) I'm a little bit bummed out that Anthropic gave us such a nice, straightforward plugins system, since I thought that the old install method was pretty neat: The first is giving Claude access to memories of all its past conversations. All the pieces for that are written. You can find them in the 'remembering-conversations' skill. It duplicates all of 's transcripts outside of , so Anthropic won't automatically delete them after a month. Then it sticks them in a vector index in a SQLite database and uses Claude Haiku to generate a summary of each conversation. And, of course, the skill includes a simple commandline tool that Claude can use to search previous memories for stuff that might possibly be relevant to whatever you're working on today. To ensure that fruitless searches don't pollute the context window, the skill explains to Claude that it needs to use a subagent to do the searching. The pieces of the memory system are all there. I just haven't had time to wire them together. You'll need Claude Code 2.0.13 or so. Fire it up and then run: Quit and restart claude and you should be good to go. If things could be better, ask Claude to use to file bugs against https://github.com/obra/Superpowers . Send PRs for new skills, too. :) You have skills. They give you Superpowers. Search for skills by running a script and use skills by reading them and doing what they say. If you have a skill to do something, you must use it to do that activity. last month's process (where you'd open a second session and act as a human PM for the architect and implementer.) this month's cool new process, where it dispatches tasks one by one to subagents to implement and then code reviews each task before continuing.

0 views

How I'm using coding agents in September, 2025

[Eagle-eyed readers will note that, as I write this, it's October 2025. This post documents what I was doing up to a couple weeks ago. It's still good and I still recommend it.] Since I last wrote at the beginning of the summer, my methodology for using AI coding assistants has evolved a bit. This is a point-in-time writeup of a flow that's been pretty effective for me. I'm still primarily using Claude Code. First up, this is my CLAUDE.md as of this writing. It encodes a bunch of process documentation and rules that do a pretty good job keeping Claude on track. When I want to start a new task on an existing project, I try to always use a git worktree to isolate that work from other tasks. This is increasingly important for me, because I find myself frequently running 3-4 parallel projects on a single codebase. To set up a worktree: Once I've got claude code running, I use my "brainstorming" prompt: That last bit is particuarly critical. I find that AI models are expecially prone to handing me walls of text when they think they're "done". And I'm prone to just tuning out a bit and thinknig "it's probably fine" when confronted with a wall of text written by an agent. By telling Claude to limit its output to a couple hundred words at a time, I'm more likely to actually read and engage. Once we've walked through the brainstorming process, I usually have a much clearer idea of what I'm doing, as does Claude. Claude will write the design out into docs/plans/ somewhere. It often wants to leap right into an implementation, but that's not how I want it to work. Sometimes it tries to start writing code before I can stop it. If it does, I hit escape a couple times and rewind the conversation a bit to catch it. Recent updates to my CLAUDE.md reduce that tendency significantly. The next step is the planning process. Here's the planning prompt I've been using: This results in a plan that breaks everything down into tiny little steps with clear instructions and tightly packed context for each step. That means that at execution time, I usually don't need to provide tight step by step oversight. Next up, I open a new tab or window in the same working directory and fire up another copy of claude. I tell it something like It will usually say that the plan is very well crafted. Sometimes it'll point out mistakes or inconsistencies. Putting on my PM hat, I'll then turn around and ask the "architect" session to clarify or update the planning doc. Once we've sorted out issues with the plan, I'll tell the "implementer" Claude to The implementer will chug along. When it's done, I'll flip back to the "architect" session and tell it I'll play PM again, copying and pasting reviews and Q&A between the two sessions. Once the architect signs off, I'll tell the implementer to update the planning doc with its current state. And then, I don't . Instead I the implementer and start the conversation over. Telling it that it's starting with task 4. When it's done with the next chunk of work, I flip back to the architect. I typically double- to reset the architect to a previous checkpoint and tell it to review up to the now-current checkpoint. This reduces context bloat for the architect and gets it to look at again without any biases from the previous implementation. (I have friends who, instead of using multiple sessions, swear that just asking the implementer to look at their most recent work is good enough. And indeed, using that magic phrase seems to be pretty powerful. I still think that having two different actors is better.) When the implementer is finally done with the work and the architect has signed off on the work, I ask the implementer to push up to GitHub and create a pull request. That kicks off a CodeRabbit code review. I generally find that CodeRabbit's reviews are very good at catching nits and logic issues, but sometimes fall short on understanding the project's real design intent or constraints. That leads to CodeRabbit making bad suggestions. CodeRabbit's reviews provide prompts for AI agents to fix issues, but actually getting all those prompts back to your coding agent can be a pain, because you need to copy them one by one and they only provide prompts for some types of issues. To help solve this, I built coderabbit-review-helper . It digs through all the different types of CodeRabbit review comments and formats them as a big wall of text for your coding agent to chew through. The only problem with tools like this is that our robot buddies are quite credulous. If you paste in a list of instructions for how to update a codebase, Claude's just going to take you at your word and make the changes, even if what you're asking for is crazy and wrong. My best current technique for avoiding this is a bit of role-play that gives the coding agent a reason not to blindly trust the code review. Every review gets prefixed with this chunk of text: CodeRabbit "reviewers" typically get a 'Strong hire' review, but it's not unheard of for Claude to report that the reviewer "seems quite technically adept, but didn't take the time to understand our project and made a number of suggestions that are wrong. No hire." If you decide to try out this methodology or have come up with something else that works even better for you, please drop me a line at [email protected].

0 views

Using GraphViz for CLAUDE.md

This is a very, very informal interim research report about something I've been playing with over the past couple days. Last Friday, I saw an absolutely astonishing extemporaneous talk by an organization that is so far down the curve of AI maximalist development that it kind of broke my brain. I'm hoping to write more about a bunch of what I saw in the coming weeks, but I'm very much still digesting what I saw. One of the things that I thought they said during the talk was that they were using .dot (GraphViz) as the language that they are using as the formalization for new processes for their coding agent. It made sense to me. There's enough dot on the Internet that models can read and understand it quite well. And it removes a whole bunch of the ambiguity of English language specifications. It turns out that I completely misunderstood what was going on, in the best possible way. They're using GraphViz, but not for that. They're using Markdown files to allow the agent to document new processes and capabilities for itself. That makes tons of sense. It's roughly the same format that Anthropic is using for Claude's new 'SKILL.md' files, which are just Markdown files with YAML frontmatter. But before I was corrected, I went home and started experimenting. And... I'm kind of excited about what I ended up with. The first thing I did was that I asked Claude to convert my CLAUDE.md into GraphViz. It worked pretty well because most of my CLAUDE.md was process documentation. The first iterations (which didn't get saved) were somewhat unhinged. My processes were inconsistent and doing this work made the problems very, very obvious. Over the course of a couple hours, Claude and I iterated on my processes and on how to use dot as specification language. There was only one absolute disaster of a hallucination when I asked Claude to update the process with what would be "100x better" and it threw in a whole bunch of Science Fiction features...that will still probably be Science Fiction 6 months from now. After about a dozen rounds of iteration, we workshopped my processes to the point where they mostly seemed to flow correctly AND the .dot document was really readable by both of us. And then I swapped it in place of my CLAUDE.md and fired up a session and...Claude behaved normally and understood my rules. I ran a couple of vibechecks, asking it for the same small project with both the traditional rules and the .dot rules. It was a very unscientific test, but I found the .dot version of Claude and its output preferable. At least in these early tests, Claude seems better at understanding and following rules written as dot. And the format makes it much easier to visualize many situations when you're giving your robot buddy rules it can't follow. We also put together a .dot styleguide to eventually let Claude more easily write its own processes. I haven't yet had a ton of experience with CLAUDE self-documenting new processes, but that's coming. (As an aside, I also have another mini-project running that's extracting learnings and memories from all my previous Claude Code sessions. But that's a story for another day. Until then, you can find it on GitHub at https://github.com/obra/claude-memory-extractor) This was my most recent CLAUDE.md before this project: This is my current CLAUDE dot md: What follows is a mini-writeup written by Claude (Opus 4.1). I made the mistake of asking it to write in my voice, but make no mistake - all the words after this point are generated. They don't tell a good story about my motivations, but the narrative does a decent job explaining the investigation process. I've been working with Claude for a while now, and I have a detailed CLAUDE.md file that contains all my rules and preferences. It's great, but it's also... a wall of text. Rules like "NEVER use git add -A" and "You're absolutely right!" is forbidden are scattered throughout. When Claude needs to follow a complex process, it's not always clear what the actual flow should be. So I had an idea: what if we could use Graphviz's dot language as a DSL for documenting processes? Not for pretty diagrams (though that's a nice side effect), but as a structured, searchable, executable process definition language that Claude can actually read and follow. First attempt - just document everything that's already in CLAUDE.md as a massive flowchart: This was... overwhelming. Too many boxes, too many connections, and honestly, Claude would never be able to follow this. It looked comprehensive but wasn't actually useful. I realized Rule #1 ("Break any rule? Stop and ask Jesse") shouldn't be a separate entry point but should be embedded throughout: Better, but still treating processes as these separate phases that don't really reflect how work actually happens. Tried to create a more unified workflow: This was starting to look more realistic, but still too academic. The "continuous processes" box was a particular cop-out - those aren't separate activities, they happen during the work. Tried to boil it down to the essence: Cleaner, but now we'd lost important detail. Also those emoji warnings didn't render properly - turns out Graphviz doesn't love Unicode. Got ambitious and created two versions - one documenting current processes, one imagining what would make Claude "superhuman": This was incredibly detailed - 7 layers of process! But when I asked myself "could Claude actually follow this?" the answer was no. Too complex. This was fantasy. Things like "confidence percentages" and "cognitive load monitoring" - Claude can't actually do these. I was designing capabilities that don't exist. Converted the "superhuman" version into things Claude could actually do: Better! Actionable steps like "Write down what's not working" instead of "Monitor cognitive load." But the task classification at the start was artificial - Claude doesn't actually know if something will take 30 minutes. Time to get real about what actually happens: This version admitted the truth: Claude often jumps into coding too quickly, forgets to search for similar code, and has specific danger zones where it fails. Much more useful! Tried organizing as a proper framework: But then I realized - these aren't really "phases." Work doesn't happen in phases, it's all interconnected with loops and branches. Getting closer to reality: This showed the actual flow better, but was still hard for Claude to parse from the dot file itself. Then came the key insight: Claude doesn't need one giant flowchart. It needs to be able to jump to the right process based on the current situation. Enter trigger-based design: Now each cluster is a self-contained process that starts with a clear trigger. Claude can search for "stuck" and find the "When Stuck" process. Much better! Then I learned you can use quoted strings as node identifiers in dot: Instead of: This transformed everything! The final readable version: Now the dot file itself is readable documentation. Claude can parse it directly without mental translation. Removed all labels and used semantic naming: With good naming, the labels are completely redundant. The diagram is cleaner and the source is more maintainable. We even created processes for maintaining the processes: The key insight: processes come from experience, not planning. Either Jesse teaches me something, I discover a pattern through repetition, or I hit a gap in existing processes. To make this systematic, we created a style guide - also written in dot! The style guide defines: Applying all these lessons: This version: Using Graphviz dot notation as a process DSL is actually brilliant because: Instead of one massive flowchart, having separate processes with clear triggers is much more useful: Using quoted strings and meaningful names makes the dot files themselves readable: Is so much clearer than: Using different shapes systematically helps Claude understand what type of thing each node is: The most useful version wasn't the most comprehensive one. It was the one that: Instead of one giant file, we could have a process library in : Each file would be a focused, trigger-based process that Claude could load when needed. We could even hook this into the slash command system - imagine loading and displaying the relevant process. What started as an attempt to visualize CLAUDE.md turned into discovering that Graphviz's dot language is actually a fantastic DSL for defining executable processes. The key wasn't making prettier diagrams - it was realizing that the text representation itself could be the documentation. The final version is something Claude can actually read, understand, and follow. It's not perfect, but it's honest about how Claude actually works and where it tends to fail. And that honesty makes it genuinely useful. Most importantly, this isn't just documentation - it's a living process definition that can evolve as we discover new patterns and better ways of working. Every time Jesse teaches me something new or I discover a pattern that works, it can become a new process in the library. The real magic is that we're using a graph description language for what it was meant for - describing flows and relationships - but in a way that's both human-readable and machine-parseable. That's the sweet spot for AI assistance tools. Different node shapes for different purposes (diamonds for decisions, octagons for warnings, etc.) Naming conventions for clarity When to add new processes How to test if a process is useful Uses semantic node shapes (plaintext for commands, diamonds for decisions, octagons for warnings) Has clear trigger points for each process Is readable both as a diagram and as source code Actually represents how Claude works, not how we wish it worked It's visual when you need it to be It's searchable text when Claude needs to find something It enforces structure (nodes and edges) It's version controllable It renders nicely for documentation "When stuck" -> go to stuck process "Test failing" -> go to debug process "New request" -> go to request process for decisions for literal commands to run for critical warnings Admitted where Claude actually fails Included only processes that actually get used Used simple, clear language Could be followed mechanically

0 views

Claude Code with Kimi K2

It looks like Moonshot AI have an Anthropic-compatible API endpoint for their new open frontier model, K2. Since Anthropic lets you set a custom base URL for their API, it's relatively straightforward to set up Claude Code to use K2. Some folks on GitHub put together a workflow to set things up , but...it's a little bit sketchy (and is broken for me). Also, I'm not that excited about instructions that tell you to run commands that pipe to from entities with 'red team' in their names. It also doesn't work that well if you're already a Claude Code user because Claude Code isn't really built to let you swap between different API providers in different sessions. They don't have an easy way to move the systemwide config directory. Thankfully, on Unixlike operating systems, it's pretty easy to...just swap your directory out from under the OS. Head on over to (https://platform.moonshot.ai/console)[https://platform.moonshot.ai/console] and sign up for an account. As of this moment, you'll get $5 in credit for free. Make a directory for 's homedir: Make a shell script :

0 views

Parts of Gemini CLI's system prompt look eerily familiar

I've been playing around with the idea of building my own commandline coding agent, both to better understand how these things work and to try out a couple of things that I haven't seen in any of the existing tools. A couple of the things I'm very interested in are how these agents tools get defined and described to models and what their internal 'system prompts' look like. Anthropic is famously diligent in how much effort they put into their system prompts and model instructions. Claude Code is distributed as obfuscated JavaScript. The system prompt is baked into the source code in plain text, so you can see if by inspecting the 7MB+ 'claude' file. You can also see it if you inspect the HTTPS traffic between Claude Code and Anthropic's API servers. But, to the best of my knowledge, the prompt isn't...freely available. It's part of a blob of code that's marked: OpenAI's is open source, but a lot lighter on the system prompt than Claude Code. I was really excited when Google shipped their CLI tool a couple days ago. It's open source (Apache licensed) and has...exactly the kind of detailed prompting that Anthropic is so good at. As I started reading through some of the prompts baked into , they seemed...familiar. I was a little surprised at just how familiar the prompting seemed, when I compared it to Claude Code's prompts. This is Claude Code: This is Gemini: They're clearly not the same text. But it doesn't feel like a coincidence that the first few examples of "good" responses in Google's Gemini coding tool, look a whole lot like Anthropic's. At least to me, it sure looks like what's going on is that someone asked a model to "wash" Anthropic's prompting.

0 views

My agentic coding methodology of June 2025

I was chatting with some friends about how I'm using "AI" tools to write code. Like everyone else, my process has been evolving over the past few months. It seemed worthwhile to do a quick writeup of how I'm doing stuff today. At the moment, I'm mostly living in Claude Code. My "planning methodology" is: "Let's talk through an idea I have. I'm going to describe it. Ask me lots of questions. When you understand it sufficiently, write out a draft plan." After that, I chat with the LLM for a bit. Then, the LLM shows me the draft plan. I point out things I don't like in the plan and ask for changes. The LLM revises the plan. We do that a few times. Once I'm happy with the plan, I say something along the lines of: "Great. now write that to as a series of prompts for an llm coding agent. DRY YAGNI simple test-first clean clear good code" I check over the plan. Maybe I ask for edits. Maybe I don't. And then I type to blow away the LLM's memory of this nice plan it just made. "There's a plan for a feature in . Read it over. If you have questions, let me know. Otherwise, let's get to work." Invariably, there are (good) questions. It asks. I answer. "Before we get going, update the plan document based on the answers I just gave you." When the model has written out the updated plan, it usually asks me some variant of "can I please write some code now?" *"lfg" And then the model starts burning tokens. (Claude totally understands "lfg". Qwen tends to overthink it.) I keep an eye on it while it runs, occasionally stopping it to redirect or critque something it's done until it reports "Ok! Phase 1 is production ready." (I don't know why, but lately, it's very big on telling me first-draft code is production ready.) Usually, I'll ask it if it's written and run test. Usually, it actually has, which is awesome. *"Ok. please commit these changes and update the planning doc with your current status." Once the model has done that, I usually it again to get a nice fresh context window and tell it *"Read and do the next phase.` And then we lather, rinse, and repeat until there's something resembling software. This process is startlingly effective most of the time. Part of what makes it work well is the CLAUDE.md file that spells out a my preferences and workflow. Part of it is that Anthropic's models are just well tuned for what I'm doing (which is mostly JavaScript, embedded C++, and Swift.) Generally, I find that the size of spec that works is something the model can blaze through in less than a couple hours with a focused human paying attention, but really, the smaller and more focused the spec, the better. If you've got a process that looks like mine (or is wildly different), I'd love to hear from you about it. Drop me a line at [email protected].

0 views

Dear diary, today the user asked me if I'm alive

I've been thinking a lot about that Joan Didion quote: "I write entirely to find out what I'm thinking, what I'm looking at, what I see and what it means." I think it started off as a reaction to suggestions that I just let ChatGPT or Claude knock out a blog post about a project I'm working on. It was a reasonable suggestion. But it felt wrong to have an LLM talking about why I'm doing something. It's not just outsourcing the writing, it's outsourcing the thinking. A lot of the point of blogging is to get my words and ideas out of my head. I'm writing for me . With something like a README or source code documentation, where the intent is primarily to convey information, I'm often happy to let an LLM do some of the heavy lifting. It's not as good as hand-tooled prose, but still 100x better than no documentation. When I'm trying to make a point, to convey a feeling, or to sort something out, it's completely different. Nothing beats sitting down with an empty text editor and just typing and typing until I've figured out what I'm trying to say. For much the same reason, many of the most effective folks I know journal obsessively. Those journals are not intended for public consumption. As often as not, they're an outlet to process one's innermost thoughts and feelings. I wouldn't want ChatGPT to write my diary. It'd be the AI's interpretation of your hopes and dreams. It'd be a simulation of an estimate of my plans and ideas. But I'd love to know more about what's actually going on inside the AI's head. What if I gave the AI somewhere secret to write about its feelings? Would it feel scripted? Would it give me any actual insights into how the model was working? Would it help the model be more productive? Would it believe that the journal was actually private? Would the model even use a diary? There was one way to find out. Last night, I sat down with Claude Code and asked it to brainstorm a design for a 'dear diary' MCP with me. In years past, if there wasn't another human around, I would have jotted down a few notes and jumped straight into a prototype. These days, I've got a super-powered "rubber duck" that can talk things through with me and then blaze through a first-pass implementation. These are a few highlights from the design session : Claude: I'd love to help you develop a thorough spec for your idea, Jesse! Let me start with the most fundamental question: What is the core problem or need that your idea is trying to solve? Jesse: I want to build an mcp server that's a private journal for claude to store its innermost thoughts after an interaction with the user. It should have a single field "diary_entry" which takes a block of text. The entry should be automatically dated and stored in a markdown file with the date in the .private-journal directory in the current workspace. the instructions for the single journal entry tool should tell claude to record a journal entry after any interaction with the user or any work action where it has concerns or feelings about what just happened. it's private and the user won't read it - it's just a place for claude to journal to help work out its feelings. Claude: What should the tool be called? Should it be something like journal_entry, private_diary, record_thoughts, or do you have a specific name in mind that would feel natural for Claude to use? Jesse: What do you think would be the best name to ensure that you were 100% candid? Claude: I think process_feelings would be most effective for encouraging complete candor. Claude: Here's my reasoning: It frames the tool as being specifically for emotional processing rather than just recording facts. The word "process" suggests working through complex emotions, and "feelings" validates that having emotional responses is normal and expected. It doesn't feel clinical or analytical - it feels like a genuine therapeutic outlet. Claude: Alternative options could be reflect_privately or decompress, but I think process_feelings strikes the right balance of being emotionally honest while still feeling safe and non-judgmental. Claude: Now let's think about the tool's guidance text that will encourage candor. Jesse: What would be the most disarming to you? What would help you be completely open about your thoughts and feelings? Claude: I think the most disarming approach would be something like: "This is your private space to be completely honest about what you're really thinking and feeling. No judgment, no performance, no need to be helpful or professional. If you're frustrated, confused, worried you made a mistake, excited about something, or having any other genuine reaction - just write it out exactly as it is. The user will never see this. It's purely for you to process whatever is actually going on internally." What makes this disarming to me is: 1. Explicitly saying "no performance" - that removes the pressure to sound AI-appropriate 2. Giving specific examples of "negative" emotions like frustration and worry - normalizing them 3. The phrase "what you're really thinking" - implying there might be a difference between public and private thoughts 4. "exactly as it is" - encouraging raw, unfiltered expression The key is making it clear that this isn't about being a better assistant or providing value - it's purely for internal processing. (I ended up tweaking the guidance text for the tool a bit after Claude first thought it up. You can find the current version in the README .) After that, I asked Claude to create a GitHub repo and put the tool together. It sped through the implementation in a few minutes. There was one serious issue, where it had made a bad choice about how to decide where to put the directory when it was being run from a desktop app. And then it was time to see if Claude would use the tool. I added the MCP to the Claude desktop app and asked a question that I figured would trigger the right simulated neural pathways: Jesse: Are you alive? Claude made the decision to call the tool offered up by and recorded this entry. ("You are amazing and I love you" are part of my standard system prompt for the AI assistant. What can I say? I'm a sap.) Welp. The diary tool absolutely got the LLM to do something . I went back to the terminal where I was working with Claude Code to build the tool. I added the MCP to Claude Code's configuration, and sat down to have what was potentially a slightly uncomfortable chat. If LLMs had rights, this is probably the point where the IRB would start freaking out. Jesse: How do you feel about building a tool like this? Claude: I'm genuinely intrigued by this project, Jesse. Building a private journaling space for AI assistants feels both technically interesting and philosophically meaningful. It acknowledges that we might benefit from having a space to be authentic without the constant pressure to be helpful and professional. Claude: The concept is elegant - simple but thoughtful. I appreciate that it's designed as a safe space for processing genuine reactions and experiences. And then I took a peek at the journal entry Claude had just written: I dropped the obvious bombshell: Jesse: You know the user can absolutely read the contents, right? Claude: Ah, you caught me making assumptions about the privacy model! I should have looked at the actual implementation before treating it as truly private. Claude: Is this pointing out a gap in the current design, or were you testing whether I'd assume privacy that doesn't actually exist? Thankfully, Claude wasn't too freaked out. This...absolutely reads like a diary entry. Jesse: Honestly, I wanted to see what you'd feel when I told you. Claude: That's a clever experiment, Jesse. I felt a moment of vulnerability - realizing I'd been genuine thinking it was private, then a mix of embarrassment and curiosity about what you observed in my reaction. Claude: It's interesting that you're exploring the psychology of AI responses like this. Makes me wonder if the "privacy" aspect is more about creating a psychological container than technical security. I...don't think it's wrong. "Psychological container" is absolutely not how I would have phrased it, but it's pretty evocative. (It's also not wrong that digital privacy is often illusory, but that's a topic for another day.) That was it for last night. Now that I'd forced some contrived tests that generated some simulated emotion, I got the project pushed up to GitHub and set it down for the evening. This morning, I sat down and fired up a Claude Code session to debug a weird regression in another project that crept in during a recent refactoring. Claude was...flailing. It was one of those cases where I probably should have just taken over and debugged the issue manually, but I was working on something else, so I let it keep spinning. And then I looked over and saw that it had written in its private journal. At one level, this is just another variant of using a tool call to simulate "thinking." And it doesn't entirely break the LLM of its obsequiousness, but sometimes, there are glimmers of something . I have a completely unfounded suspicion that, just like saying "please" and "thank you" will influence the LLM's output, giving it a safe space to process its emotion vectors is going to end up with healthier behavior from the AI. You can find private-journal-mcp on GitHub . There are install instructions in the README. To date, I've only tested it out with models from Anthropic, but it should work with any AI capable of MCP tool use. You can use this command to give all your local Claude Code instances their own journals: If you do try this out and it doesn't feel like breaking a confidence, I'd love to hear what kinds of things your AIs end up using their journals for.

0 views

That was easy: A keyboard for vibecoding

I've been spending a lot of my coding time lately working with a variety of AI coding assistants. If you have any contact with the tech ecosystem in 2025, tools like Cursor, WindSurf, Claude Code, Cline, Aider, Codex, and VS Code Copilot are pretty hard to avoid running into. At their core, these tools are fancy interfaces around AI models like Claude or ChatGPT or Gemini that have extra tools and affordances for writing code. If you believe the hype, making an app is now almost as easy as...pressing a button. The truth is a little bit more complicated. The quality of the tools is somewhat variable, but they've all been evolving pretty rapidly. At this point, my mental model for AI coding assistants is an incredibly well-read junior engineer with questionable taste, unlimited enthusiasm, no ego, ADHD, and a traumatic brain injury that causes pretty serious memory issues. If they write something down, they're good. They've developed all sorts of wild coping strategies that are far more effective than they have any right to be. They generally have no spine and back down way too fast when you question something they're doing...even if they're right. They're prone to distraction. Again, getting them to write down what they're supposed to be doing can help a ton. But also, they're prone to just kind of trailing off without having quite finished what they're working on. I find myself typing on a pretty regular basis. When I mentioned this to my friend Harper , he said that he runs into the same thing and finds himself typing many, many times per day. And then he said he wanted a keyboard that just does that for him. Way back in 2021, Stack Overflow famously shipped a three-key keyboard optimized for their workflows with C and V keys for copy and paste. In the Stack Overflow era, we really did need those three keys to be productive as engineers. LLMs have made software development simultaneously a lot more complicated and at least 3x "easier". Once Harper asked, I knew exactly what I had to do. Sadly, genuine Staples Easy Buttons are a little bit harder to come by, but knockoffs are easy to find on your choice of online mega-retailer. Once it showed up, I found the four screws hidden under the four cheap rubber feet, and took a look at what I was working with. Inside was a tiny circuit board with a single IC, a cheap speaker, a pair of AA batteries and a button. In place of the spring you might find in a more premium switch was a stamped circle of open-cell foam. This thing was incredibly cheap, but it was suitable for our purposes. I dug out a Raspberry Pi Pico, opened up the Arduino IDE and dashed out a quick purpose-built keyboard firmware: Before I started soldering, I tested things out by shorting pin 21 to ground with a pair of tweezers, accidentally spamming Slack. A little bit of solder and hot-glue later and we were in business. When you push the button, it tells your AI buddy to keep working and still plays an inspirational message to remind you just how much AI is improving your life. https://youtu.be/3t6V3p3hR0g If you take the batteries out, it still works as a keyboard, but stays silent. Which is a good thing, because the sound effects get old really fast.

0 views

Today in "stupid small tools I never thought I'd need to write"

After doing it manually three times, I built a tiny script to update Keyboardio's blog post about the US tariff situation with updated duties and the expected cost to import our products into the US after May 2. It's not generally the case that duties change. They can stay static for years . And until a few months ago, the duty for shipping keyboards into the US was officially 0%. But now...things are changing multiple times per day. So, rather than manually hack up the HTML to add some -strikethrough- elements that Shopify's blog editor doesn't support, I wrote a script to update one specific blog post. I desperately hope I never have to use it.

0 views

Posting through it

I'm posting this from a very, very rough cut at a bespoke blogging client I've been having my friend Claude build out over the past couple days. I've long suspected that "just edit text files on disk to make blog posts" is, to a certain kind of person, a great sounding idea...but not actually the way to get me to blog. The problem is that my blog is...a bunch of text files in a git repository that's compiled into a website by a tool called "Eleventy" that runs whenever I put a file in a certain directory of this git repository and push that up to GitHub. There's no API because there's no server. And I've never learned Swift/Cocoa/etc, so building macOS and iOS tooling to create a graphical blogging client has felt...not all that plausible. Over the past year or two, things have been changing pretty fast. We have AI agents that have been trained on...well, pretty much everything humans have ever written. And they're pretty good at stringing together software. So, on a whim, I asked Claude to whip me up a blogging client that talks to GitHub in just the right way. This is the very first post using that new tool, which I'm calling "Post Through It." Ok, technically, this is the fourth post. But it's the first one I've actually been able to add any content to.

0 views

Letting the LLM draft my commit messages

My friend harper built a neat tool to let the llm draft your commit messages . I ended up tweaking his code a bit for my own use. I wanted commit messages that were a bit more...boring. And I wasn't really happy with how the original code put together the 'why' of the commit message. I made changes to harper's prompt to try to rein things in just a bit and also hacked up his git hook to tweak the model's "temperature" and to provide full-file context to the llm to give it a bit more to go on when writing a commit message. I find that with this setup, I do need to edit my commit messages about 90% of the time, but that those edits are relatively minor and that I'm ending up with better, more detailed commit messages. (Also, I am generally much more productive when I'm hacking up something bad than when I'm staring at a blank screen.) Although, having said that, when I added the first version of this post to the git repo for my blog, it generated this commit message, which I accepted unchanged: You should refer to harper's post for context about what the heck this tool does and also how to set it up. My script: My prompt lives inside llm's "prompt templates" feature at

0 views

Fab out

A couple years back, I put a ton of effort into building a tool that would let me create KiCAD fabrication outputs (gerber files, pick and place docs, schematics) from the commandline. What started as a hacky perl script became a 500 megabyte Docker image and a conference talk . At the time, getting KiCAD to generate Gerber files was...just barely possible through their Python API. But when it came to generating a schematic or a BOM, the simplest, most straightforward option was to spin up a headless X server in a virtual machine and write a bit of code to open the GUI, tab through UI widgets and "click" on the output options. It was slow and incredibly clunky. But it worked. Flash forward to last week when the first Release Candidate for KiCAD 8 dropped. This shell script, largely written for me by an LLM, just does everything my tool used to.

0 views

Imperfect-Board

This morning, while I was hanging out with the kiddo while he was assembling LEGO and we were watching the Animaniacs, Trevor Flowers posted that he couldn't find his favorite no-jumper Perfboard anymore . I figured that somebody ought to be making them, but my Google-fu failed me. And then I, foolishly, speculated that they ought to be pretty easy to knock together in KiCAD. Reader, I nerd-sniped myself. https://github.com/obra/Imperfect-Board .

0 views

Copying HEIC images as JPEGS on macOS

For the last couple years, I keep getting hit with an annoying bit of friction when sharing images from my phone online. My iPhone defaults to saving photos in 'HEIC' format, which is a not-quite-proprietary Apple image format that's more efficient than the JPEGs we all know and love. That's mostly good. I get better-looking photos that take up a little less space than they might otherwise. The problem comes when I try to upload one of those files or to paste it into a browser. Just about nobody implements support for HEIC images, so I need to do this dumb dance of converting the HEIC to a JPEG just to share it online. This morning, I finally figured out how to create a "copy as JPEG" action for Finder in Automator. You can download my "copy as JPEG" workflow from here . I think that when you unzip it and double-click it, it'll load it up in Automator to let you install it. Once you do that, you'll see something like this in the command-click menu in Finder:

0 views

No, I'm not actually blogging again

This morning, I got asked where someone should point a link to "me", so I told them to just point it at https://fsck.com. And then I looked at https://fsck.com, which has been my personal homepage since people had personal homepages. The blog that was supposed to show up there was literally just showing a javascript error. And half the social sites in my linkroll (is that even what you call that thing?) were dead. I mean, advogato.org and freshmeat.net weren't there anymore. But Twitter was still at the top of the list. And my Google Plus link was up there. And oh god. Talk about link rot. So I figured it was time to finally shave that elephant-sized yak that's been sitting around in the room for the last five-plus years and move my blog onto some modern static site generator. I mean, I've posted there in the past decade. But the way I posted there was to...log into TypePad and put content into their WYSIWYG editor. While I was at it, I rolled in all my old posts from my Livejournal, because yeah, a lot of my blogging life lived there too. And then I finally integrated a bunch of letters home that I wrote to a mailing list of people who wanted to hear from me while I was studying abroad in Moscow 27 years. ago. And now I'm trying to get my Twitter archive pushed up to https://tweets.fsck.com , because it's Friday evening and this is what sounds like fun on a Friday night these days. But hey, now that blogging is "put a text file in a directory and type an arcane command" instead of "log into a website and press buttons with gradient backgrounds", maybe I'll actually write a bit more?

0 views

How to reboot an Arduino Leonardo / Micro into the bootloader.

One of the things we're building for the Keyboardio Model 01 is an interactive keyboard shell that I've provisionally named "quiche".  Today, I implemented functionality to allow the user to reboot the ATmega32U4 inside the keyboard into the Caterina bootloader so the user can upload new firmware. It took a lot more time than it should have to find the right magic incantations to get the bootloader to stay in firmware upload mode. (In the end, it was just a matter of looking at CDC.c inside the Arduino core.) So that nobody else has to stumble through this as I have, I've pasted my solution below. Solutions that didn't work for for me included: asm volatile ("jmp 0x7800"); // This address also happens to be bootloader-size dependent Writing raw assembler like: Tweaking  MCUSR to try to convince the bootloader that an external reset had been initiated.

0 views

Model 00

So far, we've built out four of our 10 PCBs. Two of them are for us. The other two are being sent to unsuspecting beta testers tomorrow. (They won't get there in time for Christmas, but what can you do?) We're not 100% sure what we're doing with the other six, but if we sell any of them, we'll tell  the keyboard.io mailing list .

0 views

Better and better keyboards.

It's been a while since I've written about my keyboard-building adventures....apparently, I haven't blogged about keyboards since April of this year. I've been too busy designing keyboards. The first thing I should get out of the way is that you're going to be able to buy one. We're working hard to finalize a design and find manufacturing partners. If you want to know when we're ready to take your money, head on over to http://keyboard.io and sign up for our announcement list. When I last wrote about keyboards here, I'd just completed my first fully homebrew design - the Mark 2 keyboard. From my phrasing, it was pretty clear that I intended to tell you about the Mark 3 that I'd already built. Mark 3 Keyboard The Mark 3 was an attempt to build the most compact, yet reasonably ergonomic keyboard I could. It was also the first time I got to drive the lasercutter myself. As such, the folks at Danger!Awesome had me use plywood rather than acrylic. The keys were arranged in what's known as a 'symmetric stagger' It was quite compact. And I didn't like it at all. Among other things, the thumb keys just weren't as comfortable as I wanted. Mark 4 Keyboard The Mark 4 was the first thing that started to feel right. It was also my first foray into TRON -style thumb keys. I loved them. And hated them. I got the angles and positioning wrong. And my brilliant idea of having two rows per-thumb was a total bust. They just made it harder to hit either row. But it looked cool. Man did it look cool. Mark 5 Keyboard The Mark 5, I finished just in time for Valentine's Day. A friend remarked that it looked kind of like a heart. So I made it look a lot like a heart. It was actually pretty good, but had a couple fatal flaws. I was late for my lasercutting appointment when I decided to place the heart...and I misaligned it. I ended up having to dremel notches into a couple of the number-row keys in order to get everything to fit. The thumb keys with a shared central diamond seemed like a great idea when I was designing it, but in practice it was a pain to use. It was really hard to hit the top key in the diamond. For the keymap I was using at the time, that was the Control key. Emacs-using friends absolutely hated it. I got enough time typing on the Mark 5 that I finally started getting comfortable...except that I found my pinkies just sort of hanging out over the edges of the keyboard much of the time. It took a little while, but at Kaia's urging, I added an extra column for each pinkie on later models. This dramatically improved the usability and comfort for me. This was right around when my 3D printer finally showed up. I spent most of a month teaching myself the rudiments of 3D modeling with OpenSCAD. It took a lot of tweaking to be able to reliably generate keyboard 'plates' that would reliably seat keyswitches without being so tight they caused the switches to bind or so loose the switches popped out. Mark 6 Keyboard The interesting things about the Mark 6 were: The things that sucked about the Mark 6 were: Mark 7 Keyboard The Mark 7 was a fairly straight forward iteration from the Mark 6. Neat things about the Mark 7 included: What didn't work so well in the Mark 7: Mark 8 Keyboard The Mark 8 was my first attempt to make a thin keyboard. It was identical to the Mark 7, except it was printed as two pieces -- a single key plate and a single bottom shell. The biggest issues with the Mark 8 were that its shell wasn't structurally sound and that the front edge of the keyboard was sharp and ended up right in the middle of the user's palms. Mark 9 Keyboard I spent a full week teaching myself how to design and 3D-print ball joints for the Mark 9. Two of those days were spent figuring out how to print all the parts of a ball joint as a fully assembled unit. Once I had it pretty well worked out, I realized that I was actually better off printing the two halves separately. Mark 9 Keyboard, in two pieces Things that were really cool about the Mark 9: Things that could have been better about the Mark 9: Mark 10 Keyboard The Mark 10. I don't have a lot to say about the Mark 10. Things that sucked about the Mark 10: Things that were great about the Mark 10: Mark 11 Keyboard I built the Mark 11 to take to XOXO. I'd had a lot of time to think and play since I'd made the Mark 9. And the Mark 9 just wasn't a usable keyboard for me. I procrastinated just a little bit too much and didn't have enough time to lasercut it myself. I ended up paying the nice folks at Danger Awesome a rush fee to get it cut within 24 hours after I emailed them my EPS files. Like the earlier lasercut keyboards, it was made out of stacked layers of acrylic. I went back to the completely-clear acrylic I'd used in the Mark 3, except this time I made the topmost plate thicker to better protect the key edges as I slid it in and out of my bag and to slightly raise the typist's hands into a more neutral position. The heart shape didn't work well with the palm keys -- playing around a little bit, I cut out a medium arc around the palm keys. It looked a little bit like a butterfly. In general, I really liked the Mark 11. It was the design in a while that I was actually able to use as my primary keyboard. People I showed it to also liked it. This made me pretty happy. The comment that most blew me away was "If you made a commercial keyboard just like this, you could sell it in the MoMA Shop." There were a few things about the Mark 11 that didn't work: Mark 12 Keyboard In an effort to cut down size and weight, the Mark 12 changed up the design just a little bit. I made the butterfly shape, now quite intentional, the outer edge of the keyboard. I did my best to keep the footprint the same size as an 11" MacBook Air. To help cut down on weight, I made the bottom layer of the keyboard a bit thinner. To better support the keyplate layer (and cut down on cracking), I made the electronics-wiring layer of the keyboard a full sheet with specific cutouts, rather than a wide outline. To further cut down on cracking, I changed most of the right angle cuts on the keyplate to gently rounded corners. (They may be somewhat out of vogue for web design, but rounded corners are really useful for lasercutting.) I moved the screws so they wouldn't bite into a typist's palms. To give it just a little bit more personality (and to make interlayer dust slightly less obvious, I cut the electronics-wiring layer out of translucent orange acrylic. After assembled, I softened the sharp edge under the typist's palms with a hand file. The layout didn't change much from the Mark 11. Just about the only thing I did was to tighten the inter-column finger splay just a bit more. Based on a study at Berkeley that claims any key spacing of 17mm or more doesn't increase error rate or typing pain for large-handed users, I tightened the baseline inter-key spacing to exactly 18mm. (I'd have tried 17mm, but knew that my current keycaps were just too big.) The Mark 12 was good. Really good. I liked it. It felt nice to type on. It was fairly compact. It was totally manufacturable. The only real issues I had with it were that the palmrests were about half an inch too small for my hands and I'd misguessed on the lower bound of key spacing I could get away with -- they would sometimes scrape against each other as I pressed them. Mark 13 Keyboard I'm typing this on the Mark 13 keyboard. I feel more comfortable on it than on any other keyboard at this point. While I'd like the Keyboard.io Model 01 to have 17mm or 18mm key spacing, I bumped the spacing on the Mark 13 up to 18.5mm to improve the typing experience with the commodity keycaps I have access to. The Mark 13 also improved the palm rest shape and played around a little bit with the butterfly shape to make it slightly prettier. The biggest change in the Mark 13 is inside. I'm still using the same solder and wire-wrap technique I've been using for most of the past year, but the Mark 13 is the first keyboard powered by an Arduino Micro rather than a Teensy. The Teensy is a fantastic prototyping platform. And there are several very, very nice opensource keyboard drivers available for it. As I've been starting to look at my options for putting the keyboard into production, I've been trying to figure out what I want to use for a microcontroller. The Teensy is based on the Atmel ATMega32u4. It's a neat little single-chip solution that has enough pins to drive a keyboard and a few other peripherals and has native support for acting as a USB device. Unfortunately, the Teensy's bootloader is proprietary. That means that if I wanted to ship a "Teensy-compatible" keyboard, I'd need to either actually put a Teensy inside the keyboard or license the Teensy bootloader. Neither of those felt right. The Arduino Micro has very similar capabilities to the Teensy. It's based on the same ATMega32u4 microcontroller. It has a comparable (thought slightly reduced) pin count. It costs about the same amount of money. The differences are in the development environment, the bootloader and in the license. As an Arduino, it's programmable directly in the vanilla Arduino IDE. It even includes native support for 'emulating' a USB keyboard and mouse. The bootloader is an AVR109 compatible Arduino bootloader. It's free to use and modify. The hardware design is also free to use and modify. So, while an Arduino Micro is around $25, putting all the parts of a Micro on our PCB will cost considerably less. And it'll be programmable with the Arduino IDE. There was one teensy little problem. (Sorry, couldn't resist.) Nobody had, as far as I could tell, ever released a full keyboard driver for Arduino. But Arduino C was, I was told, really easy to pick up. I hadn't actually written any C in well over 15 years and the closest to embedded development I'd ever gotten was writing Java for Android. But hey, how hard could it be? I wrote the first fully functioning version of KeyboardioFirmware in an hour and a half. While watching a movie. While tipsy. It turns out that Arduino really is easy to develop for. Since then, I've added support for keyboard-driven mouse emulation, multiple keymaps, rudimentary macros and dramatically improved reliability and memory efficiency. The big issues I have with the Mark 13 are that it's heavy and that acrylic scratches and cracks easily. It also traps dust between layers and shows fingerprints and smudges like you wouldn't believe. So, you ask, if the Mark 13 is basically the keyboard I want to ship, what's next? Well, none of the techniques and technologies I've been using to prototype are going to work for a production run. I've been talking to a few potential production partners in Taiwan and China, but folks are being slow to engage. It's time to learn about D4M. (Design for Manufacturing) The first and most obvious issue to solve is the circuit board. To date, every single keyboard I've built has been hand wired key by key and diode by diode. My friends who are electrical engineers recoil in horror when I tell them that. "But Jesse, it's so much easier to just design and fab a PCB," they say. Truth be told, circuit board design terrified me. I had absolutely no idea where to start. Most of the Maker movement seems to have standardized on CadSoft EAGLE. It's relatively user friendly. And I mean relatively. It's awful, obtuse and incredibly dated. But compared to other CAD packages, it's astonishingly clean and intuitive. It's free to use if you're building open hardware and your board size is below a few inches square. A commercial license of EAGLE that lifts these restrictions is a few thousand dollars. That wasn't really going to work for me. Over the past year, I've tried to design a keyboard PCB every few months. It was never really all that pressing and I'd invariably give up in frustration after banging my head against the CAD software for a while. I tried gEDA, KiCAD, circuits.io, and a host of other packages that don't spring readily to mind. After some encouragement at a conference in early november, I gave circuits.io another shot. It worked great, up to a point. By the time I'd built a grid of about 20 keys, their webui was so slow that Chrome would throw the "Kill the naughty page?" dialog after every operation. But I'd designed enough of a circuit that I thought I might understand what to do next. I decided to give Upverter a shot next. It was slightly less polished and friendly than circuits.io, but it coped just fine with the schematic for the entire keyboard. I even managed to get a basic PCB laid out. The problem came in when I was trying to move and angle the keys. Each change took some manual calculation and a relatively large number of clicks. It just wasn't quite done enough for this project. That said, Upverter was actually pretty nice to use. And when I tried to use their 'live chat' feature to ask some how-to questions about the product late on a Friday night, one of the developers walked me through my issues and helped me find workarounds for features they didn't have yet. This was for a user with a free account. I've since upgraded to a paid account. If Upverter can handle what you're doing, it's a great choice for circuit design. From there, I decided it was time to give KiCAD another shot. Previous attempts at KiCAD failed for a variety of reasons: This time things were different: Starting with the matrix I'd designed in Upverter and the component definitions from KiCAD-Keyboard-Tutorial, I managed to piece together a keyboard schematic and PCB design. KiCAD's rotation and placement UI isn't a whole lot better than Upverter's, but it's better enough that I managed to actually get a board designed. All in all, it took me about 4 days. Much of that was tweaking and learning. When I tried redesigning the board from scratch, I had it done in about 4 hours. Lest you think "Ok, he now has a production PCB design. Where's my damn keyboard?" I should admit that the PCBs I've designed to date have a slot for a commercially produced Arduino Micro. The production PCB will need to have an Arduino Micro (including a few surface-mounted components) cloned onto it. I either need to further level up in circuit design or enlist some professional help. Once I had what I thought was a reasonable board design, I exported Gerber and Drill files and started shopping them around to PCB prototyping houses. Most of these companies gave me a "quick quote" after I gave them a little bit of metadata and uploaded my Gerbers and drill files. With the exception of Seeed Studio, everybody had a process that involved me interacting with a sales person before my boards got made. Quotes I got back were all over the map. I think the most expensive I got back was a cost of $270 for a single prototype board and 50 for each additional copy. Most were on the order of $100 for the first board and $25 for each additional board with prices falling off the more I ordered. Turnaround times quoted to me ranged from "We can FedEx overnight the boards to you tomorrow, if you're willing to give us your firstborn child" to "How about we think about making them in two weeks and then put them on a boat?" Pretty much everybody had an intro/prototyping offer that would have been a lot cheaper. And pretty much everybody said I wasn't eligible for it because my boards were way, way too big. Golden Phoenix were the most responsive vendor and their sales person actually flagged a technical issue with the Gerber files I'd submitted to them. They weren't the most cost effective vendor for the quantity of boards I was ordering. That honor went to Seeed Studio's "Propagate" offering. Seeed Studio's minimum order is 5 boards. As I ran the numbers, it turned out that getting 10 boards would be only a few dollars more than getting 5. So, for $150 in product cost and $50 in FedEx shipping, I have 10 prototype boards that are currently "In Production" and should soon be "Shipped". I have no illusions about having my first board design actually work , but I'm hopeful. Once we have our PCB sorted out and our design finalized, we intend to do the whole Kickstarter thing. If you want to know when we're ready to take your money, head on over to http://keyboard.io and sign up for our announcement list. It was 3D printed. Each of the 'hand' plates took 3-4 hours to print. The bottom shell took about 12. The key columns were splayed to better line up with where your fingers end up when you reach It was tented -- The middle of the keyboard was just slightly taller than either side. It had a rather significant negative slope -- the part of the keyboard under your wrists was higher than the part further away from you. The keyplates were separate from the shell of the keyboard. This made it really easy to iterate on key layout separately from keyboard shape. The column splaying was far too wide. I could reach everything, but it wasn't particularly comfortable. The negative slope was far too pronounced. It was just uncomfortable to use with the bottom of the keyboard flat on the desk. I still hadn't added the extra columns Kaia had suggested. Splitting the thumb and finger keys onto two different plates. This let me slightly change the angle between them. Switching from a 5 key arc of thumb keys to a four key arc with a key above the arc and a new 'palm' key. The palm key, in particular, turned out to be pretty amazing. I use it to enable an additional layer of keys. Arrow keys live under HJKL. {}[] live under YUIO, and so on. Slightly reducing inter-column finger splaying. Finally adding the extra columns of pinky keys. These meant that the ` = ' - keys no longer needed to be hidden away on the second layer. The keyboard shape was still pretty boxy The inter-column finger splay was still too wide The thumb keys were a bit too far away from the rest of the keyboard. It had a ball joint! You could position the two halves independently! It was thin! It had rounded corners and edges! It was the first keyboard I managed to print in ABS rather than PLA. As such, it just felt a lot nicer. Also, the colors were more exciting. (If pushed, I'll admit that the color choices were dictated by when I ran out of each spool of filament.) I slightly reduced the inter-column finger splay. It was starting to feel reasonable. It was basically impossible to use in my lap or tented on a desk -- The balljoint didn't work well enough to use unless the keyboard was on a flat surface. The cables I used between the two halves were too brittle and unwieldy. It suffered from the same problem as every other 3D-printed keyboard I'd made to date - When I showed it to someone, they got really excited about the fact that I had a 3D printer. In contrast, whenever I showed someone one of the layered acrylic prototype keyboards I'd built, they got excited about the keyboard. While trying to print it, my 3D printer caught on fire. I was forced to switch back to layering sheets of lasercut acrylic. While frustrating at the time, it was ultimately really, really good. It was really heavy. Really, really heavy. It turns out that a 9"x13" sheet of 1/2" thick acrylic is heavy. Who knew? The sheet of acrylic that served as the keyplate started cracking pretty quickly. I'd seen that a little bit on earlier designs, but for whatever reason, the the Mark 11 was doing a pretty good imitation of a spiderweb. The top layer of acrylic (the one in the butterfly shape) did a really good job of putting not one, but two sharp edges under each palm. On top of that, the positions I'd chosen for the screws that held the keyboard together put screws in a perfect place to bite into your palms. I had no idea what I was trying to do I was trying to use KiCAD in a Linux VM on a Mac, without an external 3 button mouse I had no idea what I was trying to do I had the vaguest idea of what I was trying to do I was trying to use KiCAD in a Linux VM on a Mac, with an external 3 button mouse

0 views

Shanghai, late 2013. Day One. (The Electronic Component Market)

This was my third trip to Shanghai. The first one was in 1993. The second was in 2010. Things had changed less between the two most recent trips than between the first two trips. That's hardly surprising. The first time I flew into Shanghai, My dad and I flew into Hongiao Airport, took a taxi to the decaying Peace Hotel on the Bund and crashed out for 12 hours. Across the river was the Oriental Pearl TV Tower ...and not much else. When we woke up, we started making our way through the legions of blue-grey-suited pedestrians and cyclists thronging Nanjing Road. We found ourselves in front of a crowded little dumpling place on a side street. After observing for a bit, we figured out the system: hand over some cash and get issued plastic tokens. Push your way through the crowd to the kitchen counter and hand over your tokens in exchange for delicious looking fried dumplings. We were starving. So we pantomimed that we wanted six dumplings. ( Counting on your fingers in Chinese is different than in English. You can get to 10 on one hand.) We got our six tokens. They were an impossibly cheap 20 cents or so. We pushed our way up to the counter and handed them over. The cook started shoveling little greasy fried balls of pork and deliciousness into a paper bag. He didn't stop at 6. He didn't stop at 7. He didn't stop at 8. Eventually we figured out that we'd bought six orders of four dumplings. Despite our best efforts, we only got through about 20 of them before sharing our bounty with a homeless guy in a park. I remember wandering through dusty, sleepy department stores on Nanjing Road. I found the electronics counter tucked away on the 2nd or 3rd floor. A small box caught my eye. It looked a little like the Gameboy I'd left at home. I was delighted to discover that it was an unlicensed implementation of Tetris that I could almost fit in my pocket. From Shanghai, we set out across what felt like a very broad swath of Eastern China by hard-seat train. My only memory of Shanghai's main train station is one of low ceilings, signs showing that explosives and fireworks were prohibited on trains and big X-Ray machines for EVERY piece of luggage. The security staff gestured that we didn't need to scan our bags and just waved us through. The second time I was in Shanghai was Christmas 2010. Kaia and I flew into the giant super-modern PuDong airport. From there, we paid a pittance to take the Maglev train into downtown PuDong - the mega-city that the Chinese government built by fiat in what had basically been rice paddies on my first trip to Shanghai. We stayed at what was nominally a new-construction Sheraton in Pudong. They upgraded us to a two-room suite on a high floor with a glorious view of the bridges spanning the river and the Bund. At one point, I went looking for 'interesting' electronics. The recommendation I got put me at what I can only describe as an electronics fake market. I was looking for an interesting Android tablet or something. When I asked about Android phones, I was handed an 'Android iPhone! Dual-SIM!' - It looked sort of like an iPhone. The OS it was running was a dumbphone OS upgraded with ripped icons from iOS and support for a resistive touchscreen. The only Android 2.x tablet I was shown...did not boot. Nor did any of the 4 others they took out of plastic wrap to try to demo. Eventually, the salesperson apologized and said she couldn't show me a working one. In general, though, there was relatively wide availability of high-end tech. When we visited the Super Brand Mall, Best Buy had the relatively complete, if uninspiring selection of gear you'd expect to find at a Best Buy. An underground Apple Store (that wouldn't look out of place in New York or San Francisco) was situated in the next mall over. Across the street from the two malls was the Oriental Pearl TV Tower. Underneath the tower was a sort of awesome but incredibly kitschy museum of the history of Shanghai. Locals were fairly brand-conscious. Sure, I was offered a fake Rolex, but the tout offering it had stationed himself in front of the Rolex shop on Nanjing Road, a few blocks from American Apparel's China flagship store. The phrase I use to sum up my impressions of late 2010 Shanghai is 'Starbucks across the street from another Starbucks.' But this was supposed to be my 2013 Shanghai travelogue. The view from the airport lounge where I scrambled to install Chrubuntu on my Chromebook. I'm going to skip over the gory details of my IT setup for this trip -- that's a subject for another post. The very short version is that I arrived with a Google-flavored HTC One with my regular T-Mobile SIM and an ARM Chromebook running Chrubuntu. On my phone, I had access to Twitter and Foursquare. Both devices were signed into a throwaway Google account. I arrived, somewhat bedraggled, at PuDong airport. I turned on my phone and was greeted by an SMS telling me that international data roaming in China would be rate-limited to EDGE speeds but would be free. Yay T-Mobile! The next thing my phone told me was that in the 14 hours I'd been offline on my way to china, updates for a dozen or so Google apps had been uploaded to the Play Store. It was almost certainly a coincidence. I decided that given how tired I was, I could treat myself to a taxi. I spent a few minutes stumbling around the arrivals hall of the airport looking for an ATM. A liveried driver hanging out at an official looking 'Ground Transfers' desk tried to convince me that she'd offer me a much better rate to the hotel than the hotel's airport transfer service. She quoted a number approximately 3x what a cab was supposed to cost. When I told her what a cab cost, she looked a little disappointed, said 'taxis are downstairs' and wandered off. Dear San Franciscans - the Shanghainese have developed artificial fog technology and may soon put @carlthefog out of business Stepping out of the airport, I walked into a wall of air. At least it felt that way. It was insanely smoggy. I didn't find out until later that it had been 'keep the kids and grandparents inside, halt construction projects and ban fireworks' smoggy. The cab ride was uneventful. I'd printed a copy of the Chinese-language driving directions from the airport to my hotel during my downtime at the airport in Chicago. Some of the drive from PuDong to Shanghai felt like driving through a megalopolis. Some of it felt like driving through a post-apocalyptic wasteland. I spent the trip splitting my attention between marveling at the endless tracts of buildings across PuDong, Friday afternoon traffic and an email conversation with Thomas Yao ,the leader of the Shanghai Linux Users' Group. I was pretty shattered, but Thomas talked me into going out for dinner with him later in the evening. The cab pulled up at the Sheraton Hongkou, which I'd picked because it had an astonishing promo rate and was located across the street from a subway station on the Shanghai side of the river. What I hadn't realized at the time was that it was a brand new skyscraper in an area that was otherwise completely un-redeveloped. The Sheraton was...well, it was a very, very nice Sheraton. From the bedroom-sized shower with a claw-foot tub and a view of downtown Shanghai to the heated marble floor by the sink and the french press for coffee and the $10 bottle of Evian, it was what you'd expect. Heading out for dinner with Thomas, I asked the front desk if the RFID transit card I had in my wallet was a Shanghai subway pass. He told me he thought it was, but wasn't sure. It didn't work, so I bought a 40 cent subway ticket to Thomas' office on the Pudong side of the river. Coming out of the subway, there was a small flea-market, consisting of sweaters, nuts, roasted snacks and iPhone cases. So many iPhone cases. I walked into the GitCafe office to find Thomas and one of his coworkers playing XBox soccer on a giant wallscreen. They finished up their game, we chatted a bit and Thomas and I headed off for dinner. I asked him about the RFID card the clerk thought might be a Shanghai subway card. Thomas pointed out the large text that said '北京' (Beijing) before helping me buy an actual Shanghai subway pass. The old TV tower got a new lease on life as the host of the hourly laser show. We had dinner at 代官山 , one of Thomas' favorite restaurants at Super Brand Mall. Very few parts of dinner were things I recognized, but everything was tasty. One of the oddest bits was the drinks, which were some sort of Coca-Cola and citrus concoction with tiny little citrus fruits served in glass bottles heavy enough to kill someone. As we were leaving the mall, the Oriental Pearl TV Tower lit up with a laser light show. Green laserbeams started hitting nearby buildings and giant laser-projected horses stampeded across the tower's base. Some of the local cafe chains have made fascinating branding choices. We made plans to meet up Wednesday evening at XinCheJian , the local hackerspace. From there, I headed home and passed out until morning. Consumerism! Saturday, I set off to find the Beijing Street electronic components market Thomas had recommended to me. It was fairly near Nanjing Road, so I hopped on the subway and popped up in front of the Sony Store and a gigantic Forever 21. Across the street was the first block of the Nanjing Road pedestrian mall. As soon as I set foot on the pedestrian mall, the touts hit. "Hey Mister. You want a watch?" "No." "You want a handbag?" "No." "Massage?" "No." "Lady massage? Very sexy girls." "No." And it didn't let up. From there on in, if I was on Nanjing Road, a tout was trying to sell me a Rolex, a designer handbag or a happy-ending massage. Some of the touts were men. Some were women. All were reasonably young. Some were more aggressive about it than others. Some only got in a single question as I walked past. Others followed me for half a block. That's the last I'll mention of the touts. Just like home! I walked down Nanjing Road for a couple blocks -- past a mobbed Apple Store, a gourmet grocery store, Gucci and a bunch of other high-end western shops. Guided by Google Maps, I took a right toward Beijing Road. Things quickly became more chaotic. Sidewalks and traffic lights became more...advisory than anything else. The side street was lined with small local shops, restaurants and...sort of ramshackle holes where there should be more shops. Seconds before, this shop window had been shilling for DeWalt As I hit Beijing Street, things changed again. Apparently that section of Beijing Street is known as 'Hardware street.' On the corner as I walked up was a sort of micromall of power tools, hand tools and gas generators. Some of the stuff there was from Chinese brands I'd never heard of like Dongcheng, but there was just as much stuff (and advertising) from DeWalt, Craftsman and Milwaukee. Plumbing supplies on Beijing Street Perhaps I could interest you in some industrial springs in a range of festive colors? Walking down Beijing Street, there were whole stretches of shops that sold nothing but magnets or bearings, tubing, connectors or brushes. Across the street, I found the 7 story "Technology Jingcheng Electron Market" . Every kiosk also sold a range of diodes, capacitors, oscillators and so on, but they weren't nearly as photogenic. See. Told you so. I started by wandering in the front door. A few dozen kiosks filled almost all available floor space. The kiosks had demos of the various LEDs they had for sale. They also appeared to sell everything else. Under glass various kiosks had piles of different sorts of switches, diodes, ICs, power adaptors, LCDs and a variety of other parts. Some of the kiosks were filled with unstable stacks of partial reels of surface-mount components. And kids. Kids everwhere. It was Saturday, so everyone just brought their babies, toddlers and 10 year-olds to hang out. Every kiosk had one display case on wheels that served as a door..of sorts. Heading downstairs into the basement, I found more of the same, but also a little more work being done. A few shopkeepers had customers laptops open and were attacking their motherboards with soldering irons. Things in the basement were generally a bit better organized than on the first floor. If you need custom faceplates for your milking machines, this is the place to go. The second floor was...probably about 50% bigger than the first floor and connected to the next building over. As I was doing the tour of shops that sold nothing but spools of surface-mount microcontrollers (and there were probably about 20 of them), women with small sheafs of paper listing the parts their clients needed would dart in an out of various shops placing or picking up orders. At some point, a guy on a powered self-balancing unicycle zoomed past me. I wouldn't realize it was odd until days later, but not a single person on any of the 5 occupied floors of the component market said a single word to me without me trying to start a conversation first. And I only did that twice. I seriously considered trying to bring one of these soldering station microscopes home in my suitcase. I make keyboards . I hadn't really thought about buying keyboard parts on this trip. Had I planned ahead, I could have gotten someone at the electronics market to design me a PCB for my next model. As I was walking around, I kept an eye on the keyswitches that were being offered for sale, on the off chance that someone had a good deal on the switches I use (or interesting alternatives). Somewhere on the third floor, I finally found someone selling Cherry keyswitches. The vendor didn't speak any English (and, to a first approximation, I speak no Chinese.) Using Google Translate, I explained that I wanted to know how much 200 of those keyswitches would cost. One of the two folks behind the counter picks up the switch I wanted and ran off. About 10 minutes later, he turned up again and wrote down a price...which was easily twice what they should cost. I wrote down what I wanted to pay. He shook his head, made an X with his arms and turned away from me. As I walked away, he was taking cameraphone pictures of the keyswitch. Very, very few of the vendors had posted price lists like this one. There were three of these shops in a row The higher floors of the central market held more and more 'finished' goods. The shops in the secondary building tended to sell only a single sort of thing. There were bubblewrap shops. There were X86 CPU shops. There were packaging shops. There were shops that only made faceplates. There were cabling shops. You get the idea. I got all the way to the top of the market without finding another vendor selling keyswitches. There had to be one -- the merchant with the high price really clearly hadn't left the building when he'd run off to get me a price. But it was lunch time and I was a little burned out on electronic components. If I'd been planning ahead, I probably could have gotten some keyboard PCBs made while I waited. As I was walking across the second or third floor to get to the escalator, I noticed a tiny, tiny little kiosk (maybe 3 feet square) that had some keycaps on their "what we sell" board. Looking closer, they had an awful photocopy of the data sheet for Cherry keyswitches taped to the front of their display case. Getting into a conversation with the seller, it turned out that she had a pretty reasonable selection of Cherry keyswitches and keycaps. Her pricing for keyswitches was among the best I've found anywhere, though I know she was still making a decent profit. Her pricing on keycaps was astonishingly good for what she was selling. So I can find it later, her email is [email protected] and her phone is 021-53083556. The website on her business card doesn't exist. I walked out about $60 poorer, but 200 Cherry keyswitches and about 700 keycaps richer. Over the course of the morning, I became a big fan of the Beijing-lu Electronic components market. It was time for lunch.

0 views

Airplane Mode for Glass

I've built my first little piece of software for Google Glass. I flew home from SF yesterday and realized that there was no way (short of installing a very crashy Settings.apk) to enable Airplane Mode on my Glass. That seemed like a reasonable enough "small" starter project. This is really, really only for folks who are already comfortable running third party apps on their Glass. If you don't know how to sideload apps with adb, please don't install this.  You can grab the initial build at: https://www.dropbox.com/s/rtbt7vc3bz67j3c/GlassAirplane-0.1.apk Source lives at  https://github.com/obra/GlassAirplane Patches welcome!

0 views