Working on a task starts with the Plan agent. I provide an initial prompt of what I want to achieve and the agent responds with a plan. If the plan needs adjustments I ask the agent to update it. When I’m satisfied with the final outcome I move to the execution of the plan. This will happen in one of two ways:
If the context window is still small I simply change agents¹ (move to Build) and ask it to proceed with the execution.
If the context window is already big I ask the agent to save the plan in a markdown file, start a new session and ask the Build agent to read the file and execute the plan.
This flow works but there are a few drawbacks that bother me:
There is no human in the loop. I end up reviewing all changes at the end of the execution.
Even if I start a new session, depending on the size of the plan, the context window might still get big, causing the agent to misbehave, especially with changes that must be repeated.
I usually use Opus for planning and Haiku for execution. There are times, though, that I forget to make the switch and end up using Opus for everything. Opus is good but it is also expensive!
You can’t easily pause the flow and continue from where you stopped.
My new flow
My new flow is based on one agent, one subagent and a database. In particular:
Like before, I start with an agent that helps me build a detailed plan that consists of a number of tasks.
When I’m happy with the plan I ask the agent to use Beads and save each task under an epic.
Then I ask the agent to start the execution loop.
Execution loop
The agent uses Beads to figure out which task must be executed. It changes the task’s status to in_progress and asks the subagent to execute it.
The subagent reads the task, makes the necessary changes and informs the agent that it finished.
The agent asks me to review the changes and approve them or not.
If I approve the changes, the agent commits them, closes the task and moves to the next one.
If I request changes, the agent asks the subagent to make them. At this point we go back to step 2. We stay in this inner loop until I give my approval.
I am finally in the loop. I review fewer changes at a time and sooner!
Using subagents for each task keeps both the agent’s and the subagent’s context window smaller and cleaner, resulting in fewer (if any) misbehaviors.
Brain (the agent) is tied to Opus and Pinky (the subagent) to Haiku. No need to remember to change anything!
Best of all, with Beads I can pause and resume whenever I want. The agent knows where to start from!
PS: if you are part of a team and don’t want to pollute the codebase with various configurations, you can (a) init beads in stealth mode and (b) exclude the .opencode folder from git
¹ According to the docs, all primary agents share the main conversation and hence the same context window. ↩︎
The idea of a command line tool that would fit my needs was planted quite a while ago. I couldn’t find the time or didn’t have the energy to act on it, so it remained just that, an idea.
Until yesterday, when I saw this video. At some point beads gets mentioned, and I was curious enough about what exactly it does that I decided it was now or never: I created a repo, added beads to it, bought some credits in warp.dev and vibe coded the tool I had in mind!
This was the initial prompt:
I am an engineering manager and I want to create a terminal tool, named n, that will allow me to take quick notes about my reports.
The idea is that whenever I want to note something down about one of my reports I will open a terminal and write: n <reports name> <my note>.
The notes must be organized in markdown files named after the report name. For example if I write "n leonidas a simple note" the tool must create a file named "leonidas.md".
The note must be added to the file under a header with the current datetime in the format YYYY/MM/DD hh:mm:ss.
For example:
"n leonidas one note" and "n leonidas another note" will result in:
## 2026/01/03 14:16:03
one note
## 2026/01/03 18:20:00
another note
The tool must be configurable. It will read its configuration from "~/.n/config". Its first configurable setting is the folder where it saves the markdown files. By default it saves them in "~/.n/db/".
The first time the user runs the tool it checks if the config exists and, if not, creates it alongside the db folder.
I don't know which technology to use so I want you to help me decide and then create a plan for implementing the tool using the selected technology.
Warp provided a few options with their pros and cons. I chose Go and asked it to provide an initial plan. It did, I accepted it and after a couple of minutes I had a first version of my tool.
Watching the tool work was a pivotal moment. The “features gate” opened and all those ideas just poured out. Here is where beads saved the day.
Beads
The description in its repo is
A memory upgrade for your coding agent
The idea is to give your coding agent a way to create tasks while keeping its context window small and its concentration on the task at hand. The initial setup is provided by the tool and leverages conventions like AGENTS.md, CLAUDE.md etc.
I guess I didn’t push the agent enough to make it create tasks on its own, but the fact that I had a tool to save all those ideas and then prompt the agent with “Let’s work on the next task” put me in a great flow:
I was asking the agent to work on the next task
While it was working I was playing around with n and adding tasks to beads with either bugs or new features
Go to step 1
That cycle kept me going for a couple of hours!
A simple command-line tool for taking quick notes about your reports.
I’ve always been a terminal guy. Seeing how coding agents increased even more the time I spend in it made me realize that the way I keep notes about my reports had to migrate to the terminal too.
The idea is simple. Every time I want to note something down I write:
n name note
This creates a folder named after the report and, in that folder, a file with the current date which contains the note.
The tool provides a way to tag, edit and delete notes. It provides a way to back up your notes using GitHub and a few ways to see your notes about a report.
It’s not much, and I know there are better tools out there, but this one is exactly how I want it and it was implemented in an afternoon!
PS1: I vibe coded it, which means that I did not look at the code at all. Don’t judge it 😛 ! PS2: The idea of having all notes in files is to feed them later on to an LLM and have a conversation to figure out the report’s growth.
use gh and tell me how many PRs I have authored the last 3 weeks. from these PRs how many are merged? what was the total of added and removed lines?
Your PR Statistics (Last 3 Weeks)
- Total PRs authored: 31
- Merged PRs: 27 (87% merge rate)
- Total lines added: 4,095
- Total lines removed: 2,758
- Total lines changed: 6,853
You've been quite productive! Most of your PRs from the last 3 weeks are merged, with a solid mix of refactoring and feature work.
Up until now my main way of using LLMs was through chat. Every time I wanted to do a deep dive on a technical issue I would start a conversation through which I got a better understanding of both the code and the way it should be used.
I had played with the agent tab of Copilot in Android Studio but never gave it much time. That was probably why I believed it was not that good!
So, I decided to give it a real shot and use it full time, on production code, in a project that is important. The goal was to write as little code as possible.
The workflow I ended up having
Understand the task at hand and create a mental model of the solution.
Figure out the steps I need to take in order to implement the solution. If the steps are too many I break them into groups.
Start writing these [group of] steps in a prompt where I ask the agent to provide me a plan with the intended changes.
Review the plan, ask the agent to make adjustments (repeat this step as many times as needed).
Ask the agent to save the plan in a markdown file.
Ask the agent to execute the plan.
Review the changes.
If something trivial needs to be changed I do it myself, if the change cascades through many files I tell the agent to do it.
In the second case I also request an update to the plan.
Final review, commit and push.
The prompts must not be too detailed but also not too general. For example:
Take a look at <file #1> and <file #2> and give me a plan with all needed changes in order to: 1. Start <component #1> as disabled 2. Enable it every time the user selects an address (<component #2>) or 3. Enable it every time the user is typing a new zipcode (<component #3>)
Mistakes
It goes without saying that to end up with the above workflow I made many mistakes. Here are the big ones.
Provide the outcome
At first my prompts were a simple description of the outcome I wanted. I thought it would figure things out, make the necessary connections and write exactly what we need. Nope. The agent knows only what you allow it to know, and when it can’t find something it simply makes up random solutions.
Straight to the execution
My interaction with the agent started by asking it to do something. No plan at all. In simple cases this might be fine, but for a change that touches many components, a simple adjustment after the agent’s work might end up requiring a lot of code updates or even more adjustments.
Getting greedy, asking too much
After making some progress and seeing how effective I was, I got greedy. I started asking for too much from the start and ended up with massive PRs that included changes often unrelated to each other.
Tips
Always have a plan first
For me, having a plan gives me peace of mind. I am more certain that things will be done as intended because they will be done the way I want!
Through the process of making the plan there will be times when you understand the code at hand better and figure out missing cases.
Especially for repetitive tasks the plan speeds things up tremendously: I had to migrate a few screens from one pattern to another. I did the first migration using the agent (through a plan etc) and when finished I asked it to change the plan in such a way that it would accept “parameters”. After that I just fed the updated plan, with the next screen, to the agent.
It is a memory that can be fed in any agent, in a clean context window, at any time.
Use the agent to figure things out
Sometimes, in order to build the mental model for the solution, you need to understand the code better. Use the agent to do that. See how it articulates things and then ask it to save its findings in a file. That file can be part of the plan:
see how component A works by reading file <name>
Always review the code
Perhaps the most important tip of all. Don’t add code to the project without knowing what it does. Always review what the agent did. Make sure that it follows the project’s conventions and standards. The fact that it was written by an agent does not mean that it is not your code. You are responsible for it. It is your solution, you just used a different medium to implement it.
Explore more, it is fast now
The benefit of having a tool that implements your thoughts way faster than you is that you can explore multiple solutions! Use git to make different branches/checkpoints and try every approach you thought of.
Keep things small
You can use an agent to implement an entire task, but if you break it up and do groups of changes then your reviews will be easier and quicker, which means that your understanding of the changes will be better.
Bonus
I keep a repo with the Gilded Rose kata. Every now and then I create a new branch and practice on the kata.
This time the practice required using only an agent. You can see the branch here and the prompts I used here (I asked the agent to save them to a file).
We have an app that relies heavily on fragments. Every screen is an activity that hosts a fragment that hosts a recycler view.
We want to start migrating the screens to compose by keeping the screen’s content and placing it in a compose environment. This means that our activities will call setContent { } instead of setContentView() and that our top and bottom bars will be written in compose. Also, a must have requirement is that the top bar has to react to the user’s scroll by collapsing and expanding.
Using Scaffold and AndroidFragment
Scaffold offers, among other things, slots for both bars. It also provides wiring between the screen’s components so that we can easily add support for various interactions. In our case we want to leverage the top bar’s “enter always” scroll behavior. AndroidFragment is a composable that takes care of the instantiation and hosting of a fragment.
Putting them together we start with something like this:
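A minimal sketch of that starting point (assuming material3 and fragment-compose; ItemListActivity and ItemListFragment are placeholder names, not the actual project’s):

import android.os.Bundle
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.BottomAppBar
import androidx.compose.material3.ExperimentalMaterial3Api
import androidx.compose.material3.Scaffold
import androidx.compose.material3.Text
import androidx.compose.material3.TopAppBar
import androidx.compose.material3.TopAppBarDefaults
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.nestedscroll.nestedScroll
import androidx.fragment.app.FragmentActivity
import androidx.fragment.compose.AndroidFragment

@OptIn(ExperimentalMaterial3Api::class)
class ItemListActivity : FragmentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            val scrollBehavior = TopAppBarDefaults.enterAlwaysScrollBehavior()
            Scaffold(
                // The wiring: the top bar's connection participates in nested scrolling.
                modifier = Modifier.nestedScroll(scrollBehavior.nestedScrollConnection),
                topBar = {
                    TopAppBar(title = { Text("Items") }, scrollBehavior = scrollBehavior)
                },
                bottomBar = { BottomAppBar { /* bottom bar content */ } },
            ) { padding ->
                // ItemListFragment stands in for the existing fragment that hosts the RecyclerView.
                AndroidFragment<ItemListFragment>(
                    modifier = Modifier
                        .fillMaxSize()
                        .padding(padding),
                )
            }
        }
    }
}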
As you can see the list renders fine but, even though we’ve set up the wiring, the top bar does not collapse/expand when we scroll.
NestedScrollConnection and NestedScrollDispatcher
Scrolling in compose is a bit different from the view system’s. When a scroll event happens, instead of just one composable handling it, multiple composables can participate and decide how much of that scroll they want to consume.
To achieve that, compose provides three components:
NestedScrollConnection which is for parents to listen and consume
NestedScrollDispatcher which is for children to dispatch events and
the .nestedScroll() modifier which is the way to integrate these into the hierarchy
This means that if we want a parent composable to react to scroll events we need to provide a connection to it. If we want one of its children to emit those events we need to provide that same connection and a dispatcher to it. .nestedScroll internally creates a NestedScrollNode which is used to couple the connection and the dispatcher together.
So, for our case, we have to create a dispatcher, couple it with the scroll behavior’s connection and provide it to our recycler view. Then the view will use it to dispatch its scroll events.
fun RecyclerView.setNestedScrollDispatcher()
Looking at the dispatcher’s API we can see that it provides two methods to dispatch scroll events. The first one, dispatchPreScroll, informs the parent that a scroll is about to take place and how much distance is about to be consumed. The second one, dispatchPostScroll, informs the parent that a scroll took place, how much distance was consumed and how much is left for consumption.
In the compose world all that makes sense. The scrollable modifier handles a scroll delta and communicates it properly in pre and post events. In the view world we don’t have anything similar. We could implement the logic using gesture detectors and by intercepting touch events, but we can start simpler with an OnScrollListener:
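Sketching the idea (illustrative, not the exact original code): the extension forwards every scroll the RecyclerView reports to compose as a pre/post nested-scroll pair.

import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.input.nestedscroll.NestedScrollDispatcher
import androidx.compose.ui.input.nestedscroll.NestedScrollSource
import androidx.recyclerview.widget.RecyclerView

fun RecyclerView.setNestedScrollDispatcher(dispatcher: NestedScrollDispatcher) {
    addOnScrollListener(object : RecyclerView.OnScrollListener() {
        override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
            // The y delta is negated; see the note below about sign conventions.
            val delta = Offset(x = 0f, y = -dy.toFloat())
            // Announce the delta and let the parent (the top bar) consume its share.
            val consumedByParent = dispatcher.dispatchPreScroll(
                available = delta,
                source = NestedScrollSource.UserInput,
            )
            // Report what the list itself ended up consuming.
            dispatcher.dispatchPostScroll(
                consumed = delta - consumedByParent,
                available = Offset.Zero,
                source = NestedScrollSource.UserInput,
            )
        }
    })
}

And on the compose side, inside the Scaffold content of the earlier sketch, the dispatcher is coupled with the top bar’s connection and handed to the fragment’s RecyclerView (onListReady is a hypothetical hook exposing the fragment’s RecyclerView):

val dispatcher = remember { NestedScrollDispatcher() }
AndroidFragment<ItemListFragment>(
    modifier = Modifier
        .fillMaxSize()
        .padding(padding)
        .nestedScroll(scrollBehavior.nestedScrollConnection, dispatcher),
) { fragment ->
    fragment.onListReady { recyclerView ->
        recyclerView.setNestedScrollDispatcher(dispatcher)
    }
}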
When dispatching the y delta we need to negate its sign because RecyclerView and Compose use opposite sign conventions for scroll direction.
We coupled the connection with the dispatcher in the AndroidFragment because the created NestedScrollNode will try to find a parent node to dispatch its events to.
As you can see from the video the implementation works. The top bar collapses and expands accordingly. The only problem is that when the user starts scrolling slowly the top bar jiggles creating an unpleasant UX.
After adding some logs we can see that the scrolled dy is not always positive or negative throughout the gesture:
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=3
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=1
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=6
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=-3
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=6
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=-4
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=10
NestedScrollDispatcher gr.le0nidas.fragmentincompose D onScrolled: dy=-7
The slow movement is making the framework think that we are constantly trying to move a little bit up and immediately a little bit down. This causes the jiggle since the top bar toggles constantly between collapsing and expanding.
VerticalMovementDetector
We can prevent this by determining what kind of movement we have and then ignoring any deltas that are not part of that movement.
To do that we need a window of n deltas and then see if the majority of them are positive or negative, which means the user is scrolling down or up respectively. Once we know that, we simply ignore the deltas we don’t want.
A couple of things that help in the UX:
Until we fill that window we do not dispatch anything.
After filling it we make sure that we keep the last n deltas. That way we can determine the movement even if the user does one continuous gesture.
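A minimal sketch of such a detector (illustrative; the original implementation may differ):

class VerticalMovementDetector(private val windowSize: Int = 5) {

    private val deltas = ArrayDeque<Int>()

    // Returns the delta to dispatch, or null if it should be ignored.
    fun filter(dy: Int): Int? {
        deltas.addLast(dy)
        if (deltas.size < windowSize) return null          // window not filled yet: dispatch nothing
        if (deltas.size > windowSize) deltas.removeFirst() // keep only the last n deltas

        val positives = deltas.count { it > 0 }
        val negatives = deltas.count { it < 0 }
        return when {
            positives > negatives && dy > 0 -> dy // scrolling down: keep positive deltas
            negatives > positives && dy < 0 -> dy // scrolling up: keep negative deltas
            else -> null                          // against (or without) a clear movement: ignore
        }
    }
}

In onScrolled the delta goes through the detector first and gets dispatched only if it survives the filtering.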
The complete implementation also adds a sampling switch on top of this, in order to keep things cleaner and avoid weird edge cases.
Part of the work involves replacing the annotations the first library is using with the ones from the second. Unfortunately some cases are not as simple as replacing foo with boo. For example, a property must be annotated with @JsonField(name = ["a_name_here"]) in LoganSquare but with @SerialName("a_name_here") in kotlinx.
So, I had to decide:
Do I spend 2-3 hours and migrate 100+ files manually, one by one?
Do I cut the hours in half by using the search and replace tool and then fix anything the tool couldn’t manage?
Do I start a journey of an unknown number of hours to figure out how to perform the migration using a local LLM?
Ollama
Yeap, I chose to go with number three! And to do that I started by installing Ollama. Ollama is a tool that allows you to download open source LLMs and start playing with them locally. All prompts are handled on your device and there is no network activity.
You can either download it from its site or, if you are on macOS, use brew: brew install ollama.
After that you can run one of the models it provides, e.g. ollama run llama3.2, or fire up the server it comes with and start playing with its API: ollama serve
Kotlin
The flow is simple:
Load in memory, one by one, the contents of the files that must be migrated
Provide each file’s contents, alongside a prompt, to an LLM
Store the LLM’s result back to the file
(optional) Start dancing for having built your first LLM-based workflow
Reading the contents and writing them back to the files is easy with Kotlin. Communicating with the ollama server is also easy when using OkHttp and kotlinx.serialization. Believe it or not the most time consuming part was figuring out the prompt!
After a lot of attempts, the one prompt that managed to produce the best results was the one where I listed the steps I would have followed manually:
We have a file written in Kotlin and we need to migrate it from LoganSquare to KotlinX Serialization.
To do that we have to replace:
- "import com.bluelinelabs.logansquare.annotation.JsonField" with "import kotlinx.serialization.SerialName"
- "import com.bluelinelabs.logansquare.annotation.JsonObject" with "import kotlinx.serialization.Serializable"
- "@JsonObject\ninternal class <class name>" with "@Serializable\ninternal class <class name>"
- "@JsonObject\nclass <class name>" with "@Serializable\nclass <class name>"
- "@JsonField(name = ["<property name>"])" with "@SerialName("<property name>")"
Everything else in the file should be copied without any changes.
Please migrate the following file: $contents
We just want the file. Don't comment on the result.
and even then, small details did matter a lot.
For example, at the beginning of the prompt I refer to a file but later in the text I was saying “Please migrate the following class”. That alone resulted in various weird migrations where a class was either missing completely or had only half of its initial code. Same results when I wasn’t using \n after the annotations.
The code
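The core of it is small. Here is a condensed sketch (not the original code; the type and function names are mine) that uses OkHttp and kotlinx.serialization against Ollama’s /api/generate endpoint:

import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import java.io.File
import java.util.concurrent.TimeUnit

@Serializable
data class GenerateRequest(val model: String, val prompt: String, val stream: Boolean)

@Serializable
data class GenerateResponse(val response: String)

val json = Json { ignoreUnknownKeys = true }
val client = OkHttpClient.Builder()
    .readTimeout(5, TimeUnit.MINUTES) // local models can take a while
    .build()

fun buildPrompt(contents: String): String =
    // The migration prompt shown above, with the file contents interpolated at the end.
    "We have a file written in Kotlin and we need to migrate it from LoganSquare to KotlinX Serialization.\n" +
        "<the replacement rules listed above>\n" +
        "Please migrate the following file: $contents\n" +
        "We just want the file. Don't comment on the result."

fun migrate(contents: String): String {
    val payload = GenerateRequest(model = "llama3.2", prompt = buildPrompt(contents), stream = false)
    val request = Request.Builder()
        .url("http://localhost:11434/api/generate")
        .post(json.encodeToString(payload).toRequestBody("application/json".toMediaType()))
        .build()
    client.newCall(request).execute().use { response ->
        return json.decodeFromString<GenerateResponse>(response.body!!.string()).response
    }
}

fun main(args: Array<String>) {
    File(args.first())               // the folder that contains the files to migrate
        .walkTopDown()
        .filter { it.extension == "kt" }
        .forEach { file -> file.writeText(migrate(file.readText())) }
}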
Was I faster than choice number two? I didn’t try it, but I guess not. Too many things to learn, figure out and write. Do I regret it? No! I now have a new tool in my belt and I’m pretty sure it will pay off, time wise, in the future.
ollama-kotlin-playground
One more thing that came out of this endeavour is ollama-kotlin-playground. A very simple library that does only one thing: generate a completion without even supporting all possible parameters. It is my way of not copying code from one tool/experiment to another.
It took me two and a half years to come to terms with my new role. So, why now? I guess it’s a process like everything else. You start something new and feel completely out of your league. You educate yourself and try to apply your learnings. You make mistakes and feel awful. You try something else, make progress and feel better. A roller coaster of feelings! But you keep on pushing and some day you wake up knowing exactly what your job is.
The individual contributor to engineering manager transition
I don’t know about other industries but in software development the transition from an IC to an EM is a bit hard. You have to leave a role where everything can be solved with code and where you can showcase your progress and your skills, and move to a role where you might not write code for weeks!
So it took me all that time to accept the transition and most importantly to embrace the fact that it is OK to do or not to do a few things.
It’s OK to…
It’s OK not knowing everything and not having all the answers. My job is not to be a dictionary. My job is to be there, help in the research and assist on picking the more suitable solution.
It’s OK not solving everything myself. My job is not being a 10x developer. My job is building a 10x team. So when a problem occurs, I have to define it properly, set a few guidelines and let someone else do the solving.
It’s OK not being the first figuring out that there is a problem. Being a leader does not mean that I have to monitor everything and prevent problems before they occur. My job is to listen when someone brings an issue, assess the situation and prioritise any work that needs to be done while having the best interest of both the team and the codebase.
It’s OK not writing code that much. My job is not measured by the features I implement any more. My job is measured by the features my team is implementing and the level of quality the project has. I might not write code but I do write everything else. Goals, guidelines and documents that will help the team align and work towards the same end.
It’s OK not to be the first to speak. I’m not here to be heard. My job is to foster an environment where everyone has a say.
It’s OK not to be right, it’s OK to ask for help. Being in a leading position does not mean you have to do everything perfectly. My job is to build a healthy team whose members support each other. What better way to do that than by setting the example that we can all be wrong from time to time and that we will sometimes need someone to guide us.
Changing my status
I truly believe that a title on LinkedIn, or anywhere else, does not mean a thing on its own. Having said that, I will be updating my title on LinkedIn because (a) I finally feel more comfortable with my role and (b) it is a psychological hack to help me invest even more in my new craft.
PS: I will still answer software engineer when someone asks me what I do for a living 😛
Kata is a Japanese word meaning “form”. It refers to a detailed choreographed pattern of martial arts movements. It can also be reviewed within groups and in unison when training. It is practiced in Japanese martial arts as a way to memorize and perfect the movements being executed. — Wikipedia
I don’t have a relationship with martial arts so I’ve learned the meaning of kata through code katas. A series of exercises that, through repetition, help in learning and understanding a pattern, an approach or even a technology/tool.
New benefits each time
In my mind a code kata is a list of specific steps that you apply one after the other in order to implement something. You do it enough times and the steps become muscle memory.
The red-green-refactor cycle, when trying to develop something test first, is one example. Write the test, watch it fail, write the code to make it pass, watch it pass, refactor the code to make it good! My favorite kata for practicing TDD is the Bank.
Another list of steps is when you want to refactor a piece of code. First, you write tests to ensure that your changes will not break the code’s behaviour. Then you start making changes by extracting code to new methods or classes. Each change is followed by an execution of the test suite to make sure that everything still works. Repeat until you are done. My favorite kata for practicing these steps is the Gilded Rose¹.
a quick note here: both links above, apart from the exercise, contain a great video that showcases the kata
Practicing a code kata results in building your knowledge bit by bit. You know that each try helps in building that muscle memory and you keep on doing it. The first time you simply apply the steps and enjoy the result, but from that point on, each new practice helps you understand why each step is needed. You start feeling more comfortable and begin to experiment by tweaking the steps a bit. Until one day you just know how and when to use the steps.
Be patient Daniel-san
On the other hand, Mr. Miyagi’s wax on – wax off approach gives you no immediate knowledge or satisfaction. You simply repeat a small and tedious exercise feeling that there is no meaning in all that. What you don’t realize is that each repetition adds a small brick in building that memory. Until one day you wake up and your reaction is perfect, without you even thinking about it!
Why am I telling you all that? Because this week I realized that sometimes we need to be Mr. Miyagi. We need to push ourselves and our colleagues to do, and keep on doing, the small and tedious change. The change that does not offer an immediate great value, but whose repetition helps in creating a muscle memory that contributes, at all times, to keeping the project clean and the team focused on the same goal.
“Act as if the API was providing it”
An example of such a change is a request I made in a PR where the UI code looked like this (Jetpack Compose):
if (items.size > 2) {
Button(...)
}
Here 2 is a hard coded magic number, given by the business and known only by the mobile client. The request was to move this “threshold” to the model that gets created from the API’s response deserialization. Then change both our domain and presentation objects and act as if the API was providing the value all along.
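Sketching the idea with hypothetical models (ItemsResponse and ItemsUiState are illustrative names, not the project’s actual classes):

data class ItemsResponse(
    val items: List<String>,
    // The former magic number, now part of the deserialized model. It is defaulted
    // here, so nothing breaks until the backend actually starts sending it.
    val showButtonThreshold: Int = 2,
)

data class ItemsUiState(
    val items: List<String>,
    val showButton: Boolean,
)

fun ItemsResponse.toUiState() = ItemsUiState(
    items = items,
    // The decision is made once, while mapping; the composable just consumes the boolean.
    showButton = items.size > showButtonThreshold,
)

The composable then becomes if (uiState.showButton) { Button(...) } and stops knowing anything about the threshold.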
The repetition of this particular wax on/off has three advantages:
It keeps the project’s architecture clean by moving all hard coded values to the outer layer.
It cultivates the thinking that the mobile client must always be dynamic and have everything provided by the API.
If the business decides to play with this threshold, the change in the mobile client will be quick and trivial
“Lets keep using this boolean property”
Another example comes from a discussion I had with a colleague of mine regarding a presentation object that looked like this:
class Foo(
val title: String,
val showTitle: Boolean
)
The question was if there is a need to keep the boolean property. Is there value in having presentation objects that detailed? My answer was yes, there is, and the repetition of such implementations has its own advantages:
It maintains the project’s convention which dictates that a view must be as dumb as possible. The view will not have to use the title to figure out if it has to hide it or not. It just consumes the boolean property.
It cultivates a certain approach, when designing such presentation objects, that makes the entire presentation layer more powerful since it handles many aspects of the UI.
Having such a layer allows testing to indirectly assert the UI behaviour (see the sketch after this list).
That on its own is a good motivation for the devs to write more tests and grow their skills and thinking even more.
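As a sketch of how that plays out (the mapper and the test are hypothetical, just to illustrate the point):

import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

class Foo(
    val title: String,
    val showTitle: Boolean
)

// The decision lives in the presentation layer; the view never reasons about the title.
fun presentTitle(title: String?): Foo = Foo(
    title = title.orEmpty(),
    showTitle = !title.isNullOrBlank(),
)

class FooTest {
    @Test
    fun `hides the title when it is blank`() {
        assertFalse(presentTitle("  ").showTitle)
    }

    @Test
    fun `shows the title when it has content`() {
        assertTrue(presentTitle("Quarterly report").showTitle)
    }
}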
Be both Mr. Miyagi and Daniel
Be like Miyagi and push for the repetition. Be like Daniel and keep on doing the exercise even if you think there is no point in it. There will be a time when the change comes naturally and the entire team has the same mindset. This will keep the project at a great level.
The team will kick a$#!
¹ If you want to practice the Gilded Rose in Kotlin, I have a repo to get you started ↩︎
WordPress informed me this morning that my subscription will be auto-renewed in a couple of weeks. That got me thinking. Am I spending money for no reason? It’s been a while since I last wrote a post and even longer since I was writing regularly.
The benefits
Has this blog helped me somehow? Of that I am sure: it has!
Better communicator
Back in 2019, when I started writing, I already had ten years of experience in programming. I knew how to do things but I was having trouble passing that knowledge to my co-workers. Why? I was never providing the entire context. My mind was skipping bits just because it was taking them for granted. I know them, so my listeners most probably know them too!
It was blogging, and the fact that I did not have someone in front of me to ask a question or to be confused, that forced me into always building a proper context. Into taking the time to organize my thoughts, breaking them into small pieces, putting them in the proper order and giving all the necessary examples.
A skill that started here but soon enough moved into emails, messaging and finally talking live (I’m pretty sure I get fewer confused looks :P).
Deeper knowledge
Writing to a public blog makes you feel exposed. So I started to second-guess myself and think that a mistake in my examples would bring thousands of years of shame upon my family!
So I started researching. I read articles, books and anything that was related to what I was about to write! I dove into GitHub and any public code of the framework/language I was using.
In the end, I was spending many hours on a ten-line blog post, but I knew a lot more about the post’s subject!
Personal branding
I started writing because I wanted to express a few of my beliefs and approaches to programming. Watching how others were doing it, I mimicked them and after each post I was also making a tweet and a LinkedIn post.
That got me new connections, a couple of cool discussions and a few job offers.
In other words, I now have a small place in the internet that anyone can visit and see a few things about me and my work.
Small dopamine doses
I won’t lie. I like it when a post of mine gets featured, or gets a thank you comment. I like it when I see in my stats a team’s tool as a referrer. There is a team out there that uses one of my posts as learning material!
Just start writing
So, am I spending money for no reason? Nah. I might write less often but I like having this place. At the end of the day it helps me grow and that is enough.
Should you do it?
Don’t worry if someone else has written about the topic you want. Don’t worry if you don’t have a specific style at first. Don’t worry if no one will read it. Just write about something you want. Something you learned, something you discovered, something that puzzles you. Just start writing!
Open for extension, closed for modification. This means that our code is written in such a way that a class can gain new behaviour without changing its structure.
Let’s say that we have a Task and this task can have a description, a status and a couple of on/off characteristics, like the fact that it can be edited, shared or tracked.
Using flags
One way to depict this in code is by using flags in the form of booleans:
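A sketch of the flag-based version (illustrative; Status is a placeholder):

enum class Status { OPEN, IN_PROGRESS, DONE }

class Task(
    val description: String,
    val status: Status,
    val canBeEdited: Boolean,
    val canBeShared: Boolean,
    val canBeTracked: Boolean, // ...and one more boolean for every new characteristic
)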
This means that every time we need to add a new characteristic we have to modify the Task class and add a new boolean property. This violates the open-closed principle, making the code difficult to scale.
Using a list of enums
Another way is to replace all these flags with a list of enums:
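A sketch of the enum-based version (illustrative; Status as in the previous sketch):

enum class Characteristic { EDITABLE, SHAREABLE, TRACKABLE }

class Task(
    val description: String,
    val status: Status,
    private val characteristics: List<Characteristic>,
) {
    fun hasCharacteristic(characteristic: Characteristic): Boolean =
        characteristic in characteristics
}

// usage: task.hasCharacteristic(Characteristic.EDITABLE)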
Now if we need to add a new characteristic we simply add a new enum entry and the entire project can keep using the hasCharacteristic method to query a task for its characteristics.
Bonus benefit: this way we also avoid the wall of properties that sometimes hides information instead of revealing it!
Another year of Advent of Code and this time I decided to participate. Not only that, but I thought it would be a great way to teach myself Ruby. This means that my train of thought gets interrupted quite often since, for every little thing I come up with, I have to stop and search for how Ruby does it.
Day 3 – Gear Ratios
In today’s challenge the target is to calculate the sum of all numbers that are adjacent, even diagonally, to a symbol. The input is a grid of digits, dots and symbols, where a symbol is everything that is not a number or a dot (.).
Head first
Seeing this input, combined with the fact that I don’t know Ruby, threw me into a crazy rabbit hole where I was researching two-dimensional arrays at one point and string parsing at another.
Then I remembered that the adjacency includes diagonals too, so I dropped everything and started thinking about how I would combine numbers from one line with symbols from another.
This is getting big. Should I start smaller? Should I try to approach this in a 2×2 array? Should I do this or that? Chaos!
Start from the business logic
Thankfully, after taking a break and drinking some water, I realized that my need to answer all my unknowns had gotten the best of me and I was viewing things wrong.
It does not matter how the input looks. It is just that, an input. I shouldn’t start from there. It does not matter what/how Ruby does things. It is just a tool.
What matters is the business logic and in this case it’s quite simple:
Our business entities are Numbers and Symbols.
Our business logic dictates that a Number is next to a Symbol if the symbol lies in the area that surrounds the number.
Translating this to code:
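The original solution is in Ruby; the same idea, sketched in Kotlin here for illustration, looks roughly like this:

data class Symbol(val row: Int, val column: Int)

data class Number(val value: Int, val row: Int, val columns: IntRange) {
    // A Number is next to a Symbol if the symbol lies anywhere in the rectangle
    // that surrounds the number (one row above/below, one column left/right).
    fun isNextTo(symbol: Symbol): Boolean =
        symbol.row in (row - 1)..(row + 1) &&
            symbol.column in (columns.first - 1)..(columns.last + 1)
}

fun sumOfPartNumbers(numbers: List<Number>, symbols: List<Symbol>): Int =
    numbers.filter { number -> symbols.any(number::isNextTo) }.sumOf { it.value }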
After writing and testing the business logic, all I had left to do was write the code that produces our lists of numbers and symbols. In this case the input just happens to be a two-dimensional array of string values.
Start from what matters
Being overwhelmed or not, relax, take a step back and start from what matters.
Presentation is important but it shouldn’t drive an approach since it might change often. Input is also important but it shouldn’t matter if we are dealing with a database, a web service or the file system.
Start from the business, make it work and then try to figure out how everything else can be plugged in.