Pinky and the Brain: my agent/subagent duo

OpenCode’s agents configuration and Beads. A match made in heaven.

My flow until now

Working on a task starts with the Plan agent. I provide an initial prompt of what I want to achieve and the agent responds with a plan. If the plan needs adjustments I ask the agent to update it. When I’m satisfied with the final outcome I move to the execution of the plan. This will happen in one of two ways:

  1. If the context window is still small I simply change agents¹ (move to Build) and ask it to proceed with the execution.
  2. If the context window is already big I ask the agent to save the plan in a markdown file, start a new session and ask the Build agent to read the file and execute the plan.

This flow works but there are a few drawbacks that bother me:

  • There is no human in the loop. I end up reviewing all changes at the end of the execution.
  • Even in a new session, depending on the size of the plan, the context window can grow large and cause the agent to misbehave, especially in changes that must be repeated.
  • I usually use Opus for planning and Haiku for execution. There are times, though, that I forget to switch and end up using Opus for everything. Opus is good but it is also expensive!
  • You can’t easily pause the flow and continue from where you stopped.

My new flow

My new flow is based on one agent, one subagent and a database. In particular:

  1. Like before, I start with an agent that helps me build a detailed plan that consists of a number of tasks.
  2. When I’m happy with the plan I ask the agent to use Beads and save each task under an epic.
  3. Then I ask the agent to start the execution loop.

Execution loop

  1. The agent uses Beads to figure out which task must be executed next. It changes the task’s status to in_progress and asks the subagent to execute it.
  2. The subagent reads the task, makes the necessary changes and informs the agent that it finished.
  3. The agent asks me to review the changes and approve them or not.
  4. If I approve the changes, the agent commits them, closes the task and moves to the next one.
  5. If I request changes, the agent asks the subagent to make them and we return to step 2. We remain in this inner loop until I give my approval.
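The shape of this loop can be sketched in code. This is a hypothetical model only, with injected callbacks (`execute` standing in for Pinky, `review` for my approval, `commit` for Brain’s git work); the real loop is of course driven by prompts, not Kotlin:

```kotlin
// Hypothetical sketch of the execution loop. Task, runEpic and the callback
// names are made up for illustration; they are not Brain's actual implementation.
data class Task(val id: String, var status: String = "open")

fun runEpic(
    tasks: List<Task>,
    execute: (Task) -> Unit,      // Pinky: applies the task's changes
    review: (Task) -> Boolean,    // me: approve (true) or request changes (false)
    commit: (Task) -> Unit,       // Brain: commits the approved changes
) {
    for (task in tasks) {
        task.status = "in_progress"
        do {
            execute(task)         // inner loop: rework until I approve
        } while (!review(task))
        commit(task)
        task.status = "closed"    // the task is marked done in Beads
    }
}
```

The point of the sketch is the inner do/while: a task is re-executed until it is approved, and only then committed and closed.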

Pinky and the Brain²

If you haven’t made the connection yet, Brain is the name I gave to the agent (type primary) and Pinky is the subagent.

I did not create them on my own. I asked OpenCode to help me by describing the flow I wanted. OpenCode read its own docs, asked me a couple of clarifying questions and came up with these:
Brain: https://gist.github.com/le0nidas/aae1c9f1b35110a00b7157b6c2437444
Pinky: https://gist.github.com/le0nidas/b8a3a89131a639e39e42f7aaf794cf33
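Roughly, such a pair lives in `opencode.json`. The sketch below is illustrative only: the model identifiers are placeholders and the exact field names should be double-checked against the OpenCode docs; the real definitions are in the gists above.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "brain": {
      "mode": "primary",
      "model": "anthropic/claude-opus-4",
      "description": "Plans epics in Beads and orchestrates task execution"
    },
    "pinky": {
      "mode": "subagent",
      "model": "anthropic/claude-haiku-4",
      "description": "Executes a single Beads task and reports back"
    }
  }
}
```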

Benefits

  • I am finally in the loop. I review fewer changes at a time and sooner!
  • Using a subagent for each task keeps both the agent’s and the subagent’s context windows smaller and cleaner, resulting in few, if any, misbehaviors.
  • Brain is tied to Opus and Pinky to Haiku. No need to remember to change anything!
  • Best of all, with Beads I can pause and resume whenever I want. The agent knows where to pick up from!

PS: if you are part of a team and don’t want to pollute the codebase with various configurations, you can (a) init beads in stealth mode and (b) exclude the .opencode folder from git.

  1. According to the docs, all primary agents share the main conversation and hence the same context window ↩︎
  2. https://www.imdb.com/title/tt0112123/ ↩︎

n: the command-line tool that I wrote without writing any code

There are three reasons I wanted to write this blog post:

  1. Just because I’m still excited! Excited about writing a tool in an afternoon. Excited about the way I did it and the flow I was in.
  2. Because I wanted to talk about beads.
  3. Because I wanted to mention n.

A tool in an afternoon

The idea of a command-line tool that would fit my needs was planted quite a while ago. I couldn’t find the time or didn’t have the energy to act on it, so it remained just that: an idea.

Until yesterday, when I saw this video. At some point beads gets mentioned. I was so curious about what exactly it does that I decided it was now or never:
I created a repo, added beads to it, bought some credits in warp.dev and vibe coded the tool I had in mind!

This was the initial prompt:

I am an engineering manager and I want to create a terminal tool, named n, that will allow me to take quick notes about my reports.
The idea is that whenever I want to note something down about one of my reports I will open a terminal and write: n <reports name> <my note>.
The notes must be organized in markdown files named after the report name. For example if I write "n leonidas a simple note" the tool must create a file named "leonidas.md".
The note must be added to the file under a header with the current datetime in the format YYYY/MM/DD hh:mm:ss.
For example:
"n leonidas one note" and "n leonidas another note" will result in:
## 2026/01/03 14:16:03
one note
## 2026/01/03 18:20:00
another note
The tool must be configurable. It will read its configuration from "~/.n/config". Its first configurable setting is the folder where it saves the markdown files. By default it saves them in "~/.n/db/".
The first time the user runs the tool it checks if config exists and if not it creates it alongside the db folder.
I don't know which technology to use so I want you to help me decide and then create a plan for implementing the tool using the selected technology.
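The core behaviour the prompt asks for — append a note under a datetime header in `<report>.md` — fits in a few lines. A hedged Kotlin sketch of it, assuming the one-file-per-report layout from the prompt (the tool itself ended up in Go, and `appendNote`/`headerFormat` are names I am making up for illustration):

```kotlin
import java.io.File
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// The datetime header format from the prompt: YYYY/MM/DD hh:mm:ss
val headerFormat: DateTimeFormatter = DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss")

// Appends a note to <dbDir>/<report>.md under a datetime header,
// creating the file on first use.
fun appendNote(dbDir: File, report: String, note: String, now: LocalDateTime) {
    val file = File(dbDir, "$report.md")
    file.appendText("## ${now.format(headerFormat)}\n$note\n")
}
```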

Warp provided a few options with their pros and cons. I chose Go and asked it to provide an initial plan. It did, I accepted it and after a couple of minutes I had a first version of my tool.

Watching the tool work was a pivotal moment. The “features gate” opened and all those ideas just poured out. Here is where beads saved the day.

Beads

The description in its repo is:

A memory upgrade for your coding agent

The idea is to give your coding agent a way to create tasks while keeping its context window small and its concentration on the task at hand. The initial setup is provided by the tool and leverages conventions like AGENTS.md, CLAUDE.md, etc.

I guess I didn’t push the agent enough to make it create tasks on its own, but the fact that I had a tool to save all those ideas and could then prompt the agent with “Let’s work on the next task” put me in a great flow:

  1. I was asking the agent to work on the next task
  2. While it was working I was playing around with n, adding tasks to beads for either bugs or new features
  3. Go to step 1

That cycle kept me going for a couple of hours!

A simple command-line tool for taking quick notes about your reports.

I’ve always been a terminal guy. Seeing how coding agents increased even more the time I spend in it made me realize that the way I keep notes about my reports had to migrate to the terminal too.

The idea is simple. Every time I want to note something down I write:

n name note

this creates a folder, name, and in that folder a file named after the current date, which contains the note.

The tool provides a way to tag, edit and delete notes, a way to back up your notes using GitHub, and a few ways to view your notes about a report.

It’s not much, and I know there are better tools out there, but this one is exactly the way I want it and was implemented in an afternoon!

PS1: I vibe coded it, which means that I did not look at the code at all. Don’t judge it 😛 !
PS2: The idea of having all notes in files is to feed them later to an LLM and have a conversation to figure out the report’s growth.

3 weeks of agentic coding

use gh and tell me how many PRs I have authored the last 3 weeks. from these PRs how many are merged? what was the total of added and removed lines?

Your PR Statistics (Last 3 Weeks)
 - Total PRs authored: 31
 - Merged PRs: 27 (87% merge rate)
 - Total lines added: 4,095
 - Total lines removed: 2,758
 - Total lines changed: 6,853
You've been quite productive! Most of your PRs from the last 3 weeks are merged, with a solid mix of refactoring and feature work.

Up until now my main way of using LLMs was through chat. Every time I wanted to do a deep dive on a technical issue I started a conversation through which I got a better understanding of both the code and the way it should be used.

I had played with the agent tab of Copilot in Android Studio but never given it much time. That was the reason I believed it was not that good!

So, I decided to give it a real shot and use it full time, on production code, in a project that is important. The goal was to write as little code as possible.

The workflow I ended up having

  1. Understand the task at hand and create a mental model of the solution.
  2. Figure out the steps I need to take in order to implement the solution. If the steps are too many I break them into groups.
  3. Start writing these [groups of] steps in a prompt where I ask the agent to provide me with a plan of the intended changes.
  4. Review the plan, ask the agent to make adjustments (repeat this step as many times as needed).
  5. Ask the agent to save the plan in a markdown file.
  6. Ask the agent to execute the plan.
  7. Review the changes.
  8. If something trivial needs to be changed I do it myself; if the change cascades through many files I tell the agent to do it.
  9. In the second case I also request an update to the plan.
  10. Final review, commit and push.

The prompts must not be too detailed but also not too general. For example:

Take a look at <file #1> and <file #2> and give me a plan with all needed changes in order to:
1. Start <component #1> as disabled
2. Enable it every time the user selects an address (<component #2>) or
3. Enable it every time the user is typing a new zipcode (<component #3>)

Mistakes

It goes without saying that to end up with the above workflow I made many mistakes. Here are the big ones.

Provide the outcome

At first my prompts were a simple description of the outcome I wanted. I thought the agent would figure things out, make the necessary connections and write exactly what we needed. Nope. The agent knows only what you allow it to know, and when it can’t find something it simply makes up random solutions.

Straight to the execution

My interaction with the agent started with asking it to do something. No plan at all. In simple cases this might be fine, but for a change that touches many components a simple adjustment after the agent’s work might end up requiring a lot of code updates or further adjustments.

Getting greedy, asking too much

After making some progress and seeing how effective I was, I got greedy. I started asking for too much from the start and ended up with massive PRs that included changes often unrelated to each other.

Tips

Always have a plan first

  • For me, having a plan brings ease. I am more certain that things will be done as intended because they will be done the way I want them to be!
  • Through the process of making the plan you will often come to understand the code at hand better and figure out missing cases.
  • Especially for repetitive tasks, a plan speeds things up tremendously:
    I had to migrate a few screens from one pattern to another. I did the first migration using the agent (through a plan etc.) and, when finished, I asked it to change the plan in such a way that it would accept “parameters”. After that I just fed the updated plan, with the next screen, to the agent.
  • It is a memory that can be fed in any agent, in a clean context window, at any time.

Use the agent to figure things out

Sometimes, in order to build the mental model for the solution, you need to understand the code better. Use the agent to do that. See how it articulates things and then ask it to save its findings in a file. That file can become part of the plan:

see how component A works by reading file <name>

Always review the code

Perhaps the most important tip of all. Don’t add code to the project without knowing what it does. Always review what the agent did. Make sure it follows the project’s conventions and standards. The fact that it was written by an agent does not mean that it is not your code. You are responsible for it. It is your solution; you just used a different medium to implement it.

Explore more, it is fast now

The benefit of having a tool that implements your thoughts much faster than you can is that you get to explore multiple solutions! Use git to create different branches/checkpoints and try every approach you thought of.

Keep things small

You can use an agent to implement an entire task, but if you break it up and do groups of changes your reviews will be easier and quicker, which means your understanding of the changes will be better.

Bonus

I keep a repo with the Gilded Rose kata. Every now and then I create a new branch and practice on the kata.

This time the practice required using only an agent. You can see the branch here and the prompts I used here (I asked the agent to save them to a file).

Using Ollama and Kotlin to migrate multiple files into a new library

At work, there is a need to migrate our project from using LoganSquare to kotlinx.serialization.

Part of the work involves replacing the annotations the first library uses with the ones from the second. Unfortunately some cases are not as simple as replacing foo with boo. For example, a property must be annotated with @JsonField(name = ["a_name_here"]) in LoganSquare but with @SerialName("a_name_here") in kotlinx.

So, I had to decide:

  1. Do I spend 2-3 hours and migrate 100+ files manually, one by one?
  2. Do I cut the hours in half by using the search-and-replace tool and then fix anything the tool couldn’t manage?
  3. Do I start a journey of an unknown number of hours to figure out how to perform the migration using a local LLM?

Ollama

Yep, I chose to go with number three! And to do that I started by installing Ollama. Ollama is a tool that allows you to download open-source LLMs and start playing with them locally. All prompts are handled on your device and there is no network activity.

You can either download it from its site or, if you are on macOS, use brew: brew install ollama.

After that you can run one of the models it provides, e.g. ollama run llama3.2,
or fire up the server it comes with and start playing with its API: ollama serve.

Kotlin

The flow is simple:

  • Load in memory, one by one, the contents of the files that must be migrated
  • Provide each file’s contents, alongside a prompt, to an LLM
  • Store the LLM’s result to the file
  • (optional) Start dancing for building your first LLM based workflow

Reading the contents and writing them back to the files is easy with Kotlin. Communicating with the Ollama server is also easy when using OkHttp and kotlinx.serialization. Believe it or not, the most time-consuming part was figuring out the prompt!
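For reference, the round trip to the server can be sketched as below. I am using only the JDK’s HttpClient and hand-rolled JSON here so the sketch stays dependency-free (my actual code used OkHttp and kotlinx.serialization); the /api/generate endpoint and the model/prompt/stream fields come from Ollama’s API documentation.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Builds the JSON body for Ollama's /api/generate endpoint.
// Hand-rolled escaping for the sketch; real code should use a JSON library.
fun buildGenerateBody(model: String, prompt: String): String {
    val escaped = prompt
        .replace("\\", "\\\\")
        .replace("\"", "\\\"")
        .replace("\n", "\\n")
    return """{"model":"$model","prompt":"$escaped","stream":false}"""
}

// POSTs the body to a locally running Ollama server and returns the raw response.
fun generate(body: String): String {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/generate"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```

generate obviously requires a running ollama serve on its default port 11434.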

After a lot of attempts, the prompt that produced the best result was the one where I listed the steps I would have performed manually:

We have a file written in Kotlin and we need to migrate it from LoganSquare to KotlinX Serialization.

To do that we have to replace:
- "import com.bluelinelabs.logansquare.annotation.JsonField" with "import kotlinx.serialization.SerialName"
- "import com.bluelinelabs.logansquare.annotation.JsonObject" with "import kotlinx.serialization.Serializable"
- "@JsonObject\ninternal class <class name>" with "@Serializable\ninternal class <class name>"
- "@JsonObject\nclass <class name>" with "@Serializable\nclass <class name>"
- "@JsonField(name = ["<property name>"])" with "@SerialName("<property name>")"

Everything else in the file should be copied without any changes.

Please migrate the following file:
$contents

We just want the file. Don't comment on the result.

and even then, small details mattered a lot.

For example, at the beginning of the prompt I refer to a file, but later in the text I was saying “Please migrate the following class”. That alone resulted in various weird migrations where a class was either missing completely or had only half of its initial code. Same results when I wasn’t using \n after the annotations.

The code

import gr.le0nidas.kotlin.ollama.OllamaClient
import gr.le0nidas.kotlin.ollama.request.GenerateRequest
import gr.le0nidas.kotlin.ollama.request.parameter.Model
import gr.le0nidas.kotlin.ollama.request.parameter.Prompt
import java.io.File

fun main(args: Array<String>) {
    val ollamaClient = OllamaClient()
    val requestBuilder = GenerateRequest.Builder(Model("llama3.2"))
    val files = getAllFilePaths(args[0])
    files.forEach { file ->
        println("- Migrating file $file…")
        val content = getFileContents(file)
        val request = requestBuilder.build(prompt = createPrompt(content))
        val response = ollamaClient.generate(request)
        response
            .onSuccess { saveFileContents(file, it.value) }
            .onFailure { println(it.message) }
    }
}

fun createPrompt(contents: String) = Prompt(
    """
    We have a file written in Kotlin and we need to migrate it from LoganSquare to KotlinX Serialization.
    To do that we have to replace:
    - "import com.bluelinelabs.logansquare.annotation.JsonField" with "import kotlinx.serialization.SerialName"
    - "import com.bluelinelabs.logansquare.annotation.JsonObject" with "import kotlinx.serialization.Serializable"
    - "@JsonObject\ninternal class <class name>" with "@Serializable\ninternal class <class name>"
    - "@JsonObject\nclass <class name>" with "@Serializable\nclass <class name>"
    - "@JsonField(name = ["<property name>"])" with "@SerialName("<property name>")"
    Everything else in the file should be copied without any changes.
    Please migrate the following file:
    $contents
    We just want the file. Don't comment on the result.
    """.trimIndent()
)

fun getAllFilePaths(directoryPath: String): List<String> {
    val directory = File(directoryPath)
    if (!directory.exists() || !directory.isDirectory) {
        println("Directory does not exist or is not a directory")
        return emptyList()
    }
    return directory.listFiles()
        ?.filter { it.isFile && it.name.endsWith(".kt") }
        ?.map { it.path }
        ?.toList()
        ?: emptyList()
}

fun getFileContents(filePath: String): String {
    return try {
        File(filePath).readText()
    } catch (e: Exception) {
        println("Error reading file: ${e.message}")
        ""
    }
}

fun saveFileContents(filePath: String, contents: String) {
    val file = File(filePath)
    file.writeText(contents)
}

Conclusion

Was I faster than choice number two? I didn’t try it, but I guess not. Too many things to learn, figure out and write.
Do I regret it? No! I now have a new tool in my belt and I’m pretty sure it will pay off, time wise, in the future.

ollama-kotlin-playground

One more thing that came out of this endeavour is ollama-kotlin-playground. A very simple library that does only one thing: generate a completion without even supporting all possible parameters. It is my way of not copying code from one tool/experiment to another.