<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss/feed-style.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Andy Carlberg | All Posts</title><description>The latest updates, articles, and thoughts on strategy and systems.</description><link>https://www.andycarlberg.com/</link><language>en-us</language><item><title>Analog Workflow for a Digital Persona</title><link>https://www.andycarlberg.com/posts/analog-workflow-digital-persona/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/analog-workflow-digital-persona/</guid><description>Why a technologist traded digital apps for a physical GTD system. Exploring the engineering behind a low-latency, analog productivity workflow.</description><pubDate>Wed, 11 Feb 2026 21:30:00 GMT</pubDate><content:encoded>I live behind a screen. I build software. I work with remote teams. I&apos;m glued to my phone as much
as anyone. When it comes to organization, the digital tools designed to meet us where we&apos;re at have
a fatal flaw: They are too easy to ignore. You can’t lose a Trello board behind a GitHub tab if the
board is a physical wall in your office.

Coming off a long client project, I realized my &quot;high-availability&quot; digital system was actually
high-friction and low-visibility. I needed a system that stayed in my face and removed the &quot;out of
sight, out of mind&quot; tax on my focus.

## The Basis: Getting Things Done

I’ve used GTD for years. The &quot;mind like water&quot; philosophy resonates because, in my world, if a task
isn&apos;t logged, it doesn&apos;t exist. My life runs on Google Tasks reminders, but my strategy was getting
buried.

I tried Obsidian. I tried Trello. I tried automation. They all failed because they lacked **tactile
presence**. Now, I’ve covered a blank wall in my office with sticky notes. It’s the first thing I see
when I walk in. It’s an information radiator that I can’t minimize.

## The Schema

I’m not revolutionizing the workflow; I’m just making it **impossible to dodge**. Left-to-right, the
wall mirrors a standard pipeline:

* **FUTURE**: Cold storage for ideas that aren&apos;t a priority.
* **INBOX**: The landing zone for fleeting ideas.
* **ACTIVE**: The WIP (Work In Progress) lane, capped at 3–5 projects.
* **HOLD**: Async dependencies.

I don’t over-document. I only write the very next action. This reduces the documentation load and
forces agility. If a project is waiting on someone else, it moves to &quot;HOLD&quot; to free up a slot. It’s
a physical representation of system throughput.

## The Hardware: Tickler Files and Daily Notes

For tasks that aren&apos;t &quot;now&quot; but have a &quot;when,&quot; I use a 3&quot;x5&quot; index card Tickler File. It’s a set of
43 dividers (31 days, 12 months) that contains sticky notes and index cards. Since most of my work
is digital, these aren&apos;t projects or tasks themselves; they&apos;re &quot;pointers.&quot; The &quot;pointer&quot; tells me
exactly which digital resource needs an update and when. It’s a **physical calendar with a digital
backup**.

For the day-to-day grind, I’ve adopted a disc-bound notebook. This is my high-speed scratchpad.

* **The Daily Note**: I start each morning by identifying the top 1–3 high-impact tasks.
* **The Scratchpad**: A catch-all for meeting notes and fleeting thoughts.
* **The Timesheet**: The left margin acts as a manual log, making it easy to reconcile billable time
  into our digital tools later without &quot;guessing&quot; where the day went.

## The &quot;Harvest&quot;

The lynchpin of this system is the evening shutdown. For a WFH professional, this is the **literal
firewall** between &quot;Work&quot; and &quot;Home.&quot;

* **Clear the cache**: Review the day’s paper notes. Transcribe what matters to the digital source
  of truth; shred the rest.
* **Update the wall**: Ensure every active project has a clear requirement for tomorrow.
* **Shut it down**: Close the tabs. Clear the desk. End the day with a &quot;mind like water&quot;.

## Why this works

When I&apos;m away from the desk, I still use digital tools like Google Keep. But the &quot;Harvest&quot; ensures
that mobile data is synced back to the physical command center.

By moving from a digital system that gets lost to a physical one that doesn&apos;t, I’ve engineered out
the cognitive load of &quot;searching for work.&quot; I don&apos;t look for my tasks anymore. **They’re just
there** - guarding my focus so I can spend it on the work that actually moves the needle.</content:encoded></item><item><title>From Surveillance Boards to Sovereign Teams</title><link>https://www.andycarlberg.com/posts/scrum-board-layout-and-team-sovereignty/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/scrum-board-layout-and-team-sovereignty/</guid><description>Stop using your Sprint Board for surveillance. Learn how shifting from developer swimlanes to story-centric lanes reclaims team sovereignty and transforms your stand-ups.</description><pubDate>Mon, 02 Feb 2026 20:00:00 GMT</pubDate><content:encoded>import { Image } from &quot;astro:assets&quot;;
import surveillanceBoardImage from &quot;../../assets/developer-swimlanes-board.png&quot;;
import sovereignBoardImage from &quot;../../assets/scrum-sprint-board-sutherland.png&quot;;

Many teams implement Scrum as a rigid, prescribed process - a &quot;by-the-book&quot; checklist that rarely
delivers the promised agility. When Scrum is treated this way, it&apos;s a sign that the *why* behind
Scrum has not been understood. This type of implementation rests on a flawed assumption: that
simply copying the ceremonies of other teams will produce the same results.

The result is a common, growing frustration: developers feeling micromanaged as daily stand-ups
devolve into rote, individual status updates. This breakdown isn&apos;t just a process failure; it’s a
symptom of a lack of **team sovereignty**. Scrum, at its core, is designed to give a team the
agency to manage its own work, but that sovereignty is often undermined by the very tool meant to
facilitate it: the Sprint Board. To fix the culture, we have to rethink how we map **Story and Task
ownership**.

## The Surveillance Board

The most common sprint board layout I see today is a series of columns representing a card&apos;s
journey to &quot;Done,&quot; but with one critical flaw: the horizontal swimlanes are categorized by
developer names.

&lt;Image
  src={surveillanceBoardImage}
  alt={`A sprint board showing developer names (Alice, Bob, Charlie) as horizontal swimlanes,
  illustrating a surveillance-style tracking layout.`}
  widths={[320, 640, 960, 1200]}
  sizes=&quot;(max-width: 640px) 90vw, 75vw&quot;
  class=&quot;max-h-[60vh] object-contain&quot;
  loading=&quot;lazy&quot;
/&gt;

This layout betrays the board’s true purpose. Instead of a tool for delivery, it becomes a tool for
tracking individual status. When the board is structured this way, individuals own the card,
individuals move it across the board, and *individuals* are solely on the hook for its delivery.
This is the antithesis of Scrum, where the *team* collectively commits to delivering value.

I suspect this stems from project-management-style leads who are more comfortable with traditional
hierarchies than with agile self-organization. Tools like Jira don&apos;t mandate this layout, but they
easily support its implementation because it satisfies a management desire to see &quot;who is doing
what&quot; at a glance. Unfortunately, that desire for individual visibility often overrides the
collaborative, non-hierarchical structure a high-performing Scrum team needs to thrive.

## Value vs. Labor

The surveillance board fails because it does not distinguish between **Value** and **Labor**. By
assigning stories to individuals and tracking their status as a row, we conflate the two. The labor
—the specific actions required to build a feature—is mixed into the produced value of the story
itself. A team truly gains sovereignty when they **collectively own the Value** and **individually
own the Labor**.

In *Scrum: The Art of Doing Twice the Work in Half the Time*, Sutherland shares an anecdote from
Eelco Rustenburg regarding a home renovation project. During the daily stand-up, Rustenburg
gathered the specialists - carpenters, electricians, and plumbers - to facilitate a discussion
focused on clearing blockers and making progress. This strategy kept a complex renovation on time,
a notoriously difficult feat, as any homeowner knows.

I like this story because it perfectly illustrates what happens when a Scrum team delivers value.
Sutherland doesn&apos;t detail the specific user stories, but we can assume most required multiple
specialties. To install a kitchen sink, you need a carpenter to cut the counter and a plumber to
run the lines - and perhaps an electrician if the sink has a disposal unit. None of the specialties
delivers &quot;Value&quot; to the homeowner in isolation. The Value—a functional kitchen sink—was owned by
the team, while the Labor—the wiring, the plumbing, the carpentry—was owned by the individuals.

## The Story-Centric Board

Returning to the Sprint Board, we already have a layout that visually reinforces team ownership of
Value. Sutherland published this format in *Scrum: The Art of Doing Twice the Work in Half the
Time*, yet it is surprisingly rare to see it in the wild.

&lt;Image
  src={sovereignBoardImage}
  alt={`A story-centric sprint board where user stories are horizontal swimlanes, illustrating team
   ownership of value.`}
  widths={[320, 640, 960, 1200]}
  sizes=&quot;(max-width: 640px) 90vw, 75vw&quot;
  class=&quot;max-h-[60vh] object-contain&quot;
  loading=&quot;lazy&quot;
/&gt;
&lt;small&gt;Image adapted from Jeff Sutherland, &quot;Scrum: The Art of Doing Twice the Work in Half the
Time.&quot;&lt;/small&gt;

In this layout, the **User Stories are the swimlanes**. This simple shift ensures the board
represents the team’s commitment to Value. Within each lane are the individual tasks—the
Labor—which move through the process independently.

A story is binary: it is either &quot;Done&quot; or &quot;Not Done.&quot; You can judge its progress by looking at the
remaining task notes in the lane. Tasks move through statuses and may be blocked, but the focus
remains on delivering that vertical slice of value. In this format, the board ceases to be a
surveillance tool and becomes a **communication hub**. It facilitates the critical conversations
that actually drive a team forward, transforming the stand-up from a series of individual reports
into a collective alignment session.

## Passing the Baton

There is a subtle, key phrase Sutherland uses that reinforces this distinction:

 &gt; When someone **signs out a story**, everyone knows who’s working on it.

&lt;small&gt;(Sutherland, 2014, p. 155, emphasis added)&lt;/small&gt;

While he doesn&apos;t dwell on the mechanics of &quot;signing out,&quot; the phrasing is telling. It implies that
while the Value belongs to the team, the responsibility for moving it forward is a baton passed
between individuals.

This has been a core tenet of Scrum since its inception, yet so many teams have accidentally
replaced the baton with a tether, tying stories to developers in a 1:1 relationship. By reclaiming
the board, teams can reclaim their sovereignty—shifting the focus from &quot;Who is busy?&quot; to &quot;How do we
deliver this value together?&quot;

## The Goal is Sovereignty

The Sprint Board is a choice made by the self-organizing team. Its architecture influences how the
team processes the information provided. If your board layout is designed for individual status
check-ins, your developers will act accordingly - as individuals instead of a cohesive team.

By shifting the swimlanes from individuals to Stories, we change the **orientation of the board**.
We stop asking &quot;What did you do?&quot; and start asking &quot;What does the Story need?&quot; That shift is a big
step towards team sovereignty - turning a group of individuals into a **team that owns their
success**.</content:encoded></item><item><title>Story Points as Categorical Data: Embracing the Cone of Uncertainty</title><link>https://www.andycarlberg.com/posts/story-points-categorical-data/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/story-points-categorical-data/</guid><description>Stop treating Story Points like fixed units of time. Learn how points function as categorical data and how plotting them against velocity reveals your team&apos;s size-specific Cone of Uncertainty for realistic estimation.</description><pubDate>Thu, 11 Dec 2025 22:00:00 GMT</pubDate><content:encoded>import { Image } from &quot;astro:assets&quot;;
import scatterPlotImage from &quot;../../assets/points-time-scatter.png&quot;;
import coneOfUncertaintyImage from &quot;../../assets/points-time-scatter-uncertainty.png&quot;;

## Introduction

**Points** are one of the most *misunderstood parts* of any Scrum-like Agile process and are often
a stumbling block for teams new to Agile. They&apos;re often described as a **level of complexity**, but
this description is vague and confusing when you don&apos;t have experience (and, for some, even when
you do). Naturally, we want to estimate in time - &quot;How long will this task take?&quot;

Scrum and similar agile processes use points to address a specific concern, but the *intentional
ambiguity* of Points makes them confusing for teams that have been working with time estimates up
to this point. This fundamental misunderstanding comes from the **false quantitative nature of the
Point value**.

The core of the issue is this: Story Points are fundamentally **categorical data** - labels of
*relative size* - that we *incorrectly* treat as **quantitative, interval data** for calculating
velocity. We should think of Points as **qualitative, categorical data** to grasp the true intent
and why, when visualized correctly, they reveal the inherent **size-specific Cone of Uncertainty**
within your team&apos;s historical performance.

---

## Why Points at All? The Tension Between Categories and Numbers

The origin of Points is somewhat up for debate but Ron Jeffries (of Extreme Programming fame) has
tentatively laid claim to coining the term. Early versions of Extreme Programming estimated in
&quot;**Ideal Days**&quot;: the number of days it would take a developer to complete a task given no other
work. The Ideal Days was then multiplied by some factor to account for meta tasks and
administrative work. Managers were confused by this process so the word Days was changed to
**Points** to attempt to fix the assumption that the number of Ideal Days was exactly how long the
task would take to complete. Other Agile processes - notably Scrum - adopted the term along with the
concept of Stories. However, Scrum and most other modern Agile processes care more about **relative
size** than absolute size.

Estimating correctly in absolute time is, itself, time-consuming and almost certainly produces a
*wrong estimate* - it is slow and delivers little value. To speed up the process, Scrum teams
estimate in **relative size**, with tasks estimated relative to each other. One of the most common
examples is T-shirt sizes. In the book *Scrum: The Art of Doing Twice the Work in Half the Time*,
Jeff and J.J. Sutherland reference &quot;**Dog Points**&quot;: using dog breeds to correspond to relative
sizes (interestingly, this is one of only two uses of the word &quot;Points&quot; that I have found in the
book). These **categorical sizes** provide useful relative comparisons but aren&apos;t very
valuable for predicting how long a project will take. At the end of the day, we need to talk about
**time**.

This is where &quot;**Velocity**&quot; enters the process. Velocity is a metric showing us how much work the
team can complete in a Sprint based on historical values. Velocity is Points-per-Sprint. If we have
a suitable value for Velocity and an estimated backlog in Points, we can calculate the estimated
remaining number of sprints:

$$
\text{Remaining Points} \div \text{Velocity} = \text{Remaining Sprints}
\\
\text{Remaining Sprints} \times \text{Sprint Length} = \text{Remaining Time}
$$
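
For a concrete (entirely made-up) example: with 120 remaining Points, a historical Velocity of 20
Points-per-Sprint, and two-week Sprints, the math works out to

$$
120 \div 20 = 6\:\text{Sprints}
\\
6 \times 2\:\text{weeks} = 12\:\text{weeks}
$$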

**This is the crux of the problem:** Since we&apos;re doing math now, we require numbers - **quantitative
data** - instead of qualitative categories. Velocity **forces** our system of qualitative relative
sizing to adopt numerical, quantitative values. This necessity is how we get to Points, but the
**tension** between the quantitative requirement and the desire for relative sizing is the source
of the fundamental misunderstanding.

---

## Visualizing Uncertainty as a Size-Specific Cone

So we know we need actual numbers but we still want to have easy, relative sizing. This is where
the **Fibonacci Sequence** comes in. The Fibonacci Sequence is the set of numbers following this
formula:
$$
F_{n} = F_{n-1} + F_{n-2}
\\
\text{where}\:F_{0} = 0\:\text{and}\:F_{1} = 1
$$
So the first few values (ignoring the initial 0 and 1) are: 1, 2, 3, 5, 8. This is usually where
most teams will stop because anything bigger than this should probably be split up. It&apos;s possible
to use more values but this would be a team preference and outside our scope here.

The Fibonacci Sequence is recommended in *Scrum: The Art of Doing Twice the Work in Half the Time*
as well as many other books, articles, blog posts, videos, and any other content discussing points.
This is for a good reason. They are numerical values that provide a *nice relative scale* - they are
different enough that the difference in size between them is intuitive. If we used a straight
count, most teams would get hung up on the difference between sizes - is it a 4 or a 5? Think about
when someone asks you to rate something on a ten-point scale. Most people will usually say a couple
of adjacent numbers - &quot;eh, it&apos;s a 6 or 7.&quot; The Fibonacci Sequence removes this hurdle while giving us
quantitative values.

Since we now have quantitative values for use in Velocity, we can also plot this data and examine
what points actually represent. Here is an *ideal* scatter plot for a team that has completed a
large number of stories estimated as points. Since it&apos;s made-up data, I&apos;ve chosen to remove the
outliers that a real team would certainly have - no team is perfect. This more clearly shows the
relationship between point values.

&lt;Image
  src={scatterPlotImage}
  alt={`Scatter plot showing Story Points (1, 2, 3, 5, 8) on the X-axis plotted against Actual
  Completion Time on the Y-axis.`}
  widths={[320, 640, 960, 1200]}
  sizes=&quot;(max-width: 640px) 90vw, 75vw&quot;
  class=&quot;max-h-[60vh] object-contain&quot;
  loading=&quot;lazy&quot;
/&gt;

We can see this produces a *nice grouping* of completion times for each Point value: each group&apos;s
spread is roughly twice as wide, and its mean roughly twice as large, as the previous point
value&apos;s. This would be a common spread, but ultimately the exact relationships depend on the
team. The key piece of information here is **our numerical Points represent categorical data**.

### What does this tell us about Points?

I think this visualization of points for a given team is very valuable, yet I rarely see it. I
first found it in an add-on for Jira - it&apos;s not one of Jira&apos;s built-in graphs. It tells
us a lot about how the team values tasks against each other - **relative sizing**. We can take it
further, though.

**Crucially**, this scatter plot shows that the Point values are acting as **distinct categories**
(1, 2, 3, 5, 8) that possess a wide, *unique range* of completion times. They are not points on a
smooth, predictable quantitative line.

To isolate the inherent uncertainty within each size category, we can **normalize the data**: If we
adjust the scatter plot, aligning the mean completion time for each point value to a horizontal
center line, the Y-axis now represents the **variance or uncertainty**.

&lt;Image
  src={coneOfUncertaintyImage}
  alt={`Normalized scatter plot showing a Size-Specific Cone of Uncertainty. Story Point categories
  are on the X-axis, and time Variance from the mean is on the Y-axis, clearly demonstrating that
  larger point categories (e.g., 8) have a wider spread of uncertainty than smaller categories
  (e.g., 1).`}
  widths={[320, 640, 960, 1200]}
  sizes=&quot;(max-width: 640px) 90vw, 75vw&quot;
  class=&quot;max-h-[60vh] object-contain&quot;
  loading=&quot;lazy&quot;
/&gt;

You may already see where I&apos;m going with this from the curves I&apos;ve added to the graph. This is the
**Cone of Uncertainty** commonly referenced when discussing estimation for projects. At a high
level, the Cone of Uncertainty is a common visualization of how accurate our estimate can be
relative to the work completed and the information we have. In other words, the more of a project
we have completed, the more information we have - and the more information we have, the more
accurate we can be.

We can see in this graph that large Stories take longer, we have less information about them, and
the **variance on completion time is wider**. The smaller the story, the more information we have
(or the less we need), and the narrower the window of completion time. Thus we can take the
relative sizing of Points and use it to see the **level of complexity** for a given story.
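
If you want to reproduce this view from your own tracker&apos;s history, the normalization step is
easy to sketch. Here is a minimal TypeScript example, assuming you&apos;ve already exported completed
stories as point/duration pairs (the type and function names are illustrative, not from any
particular tool):

```typescript
type CompletedStory = { points: number; days: number };

// Group completion times by Point category, then center each category on its
// own mean. What remains on the Y-axis is pure variance - the cone.
function varianceBySize(stories: CompletedStory[]): Map&lt;number, number[]&gt; {
  const groups = new Map&lt;number, number[]&gt;();
  for (const story of stories) {
    const bucket = groups.get(story.points) ?? [];
    bucket.push(story.days);
    groups.set(story.points, bucket);
  }
  const centered = new Map&lt;number, number[]&gt;();
  for (const [points, days] of groups) {
    const mean = days.reduce((sum, d) =&gt; sum + d, 0) / days.length;
    centered.set(points, days.map((d) =&gt; d - mean));
  }
  return centered;
}
```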

---

## Conclusion: Embracing Uncertainty with Categorical Points

The confusion surrounding Story Points stems from a simple mismatch: we use a **qualitative tool
(relative sizing)** for a **quantitative task (time prediction)**. By forcing categorical labels
(1, 2, 3, 5, 8) into numerical slots to calculate Velocity, we lose sight of their true nature.

The solution is to use historical data to **reclaim the true nature of points** as relative
categories of complexity and variance. By plotting your team&apos;s historical **Points against actual
Completion Time**, you create a powerful visualization that bypasses the flawed notion of points as
linear time units. Instead of trying to force every 5-point story into a narrow time box, the
resulting scatter plot reveals the authentic, size-dependent **variance** inherent in your team&apos;s
work.

This variance *is* your **size-specific Cone of Uncertainty**.

Stop treating a 5-point story as a fixed unit of time. Start using this
historical scatter data to set realistic **range-based expectations** for every size category.
Encourage your stakeholders and product owners to discuss the *range* of possible completion times
when reviewing estimates for larger items, acknowledging that a **3-point story is inherently less
certain than a 1-point story.** By recognizing points as categorical buckets of uncertainty, you
can finally use them as the honest and valuable estimation tool they were always intended to be.</content:encoded></item><item><title>I Killed the AoC Architecture Resilience Project on Day 4 (And Why It Was Good Governance)</title><link>https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-retrospective/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-retrospective/</guid><description>The ultimate resilience strategy? Knowing when to stop. This AoC retrospective explores TDD as governance, strategic prioritization, and true architectural discipline.</description><pubDate>Mon, 08 Dec 2025 21:53:00 GMT</pubDate><content:encoded>I set out to prove a point: that **architectural discipline** - like **Test-Driven
Development (TDD)**, **Make It Work, Make It Right, Make It Fast** iteration, and overall
**smart governance** - is the best way to manage a software project. For the first four
days of **Advent of Code**, we absolutely nailed it.

The premise was an experiment in **resilience**, stress-testing our governance against the
pressure cooker of daily puzzles. The goal wasn&apos;t just to solve the puzzle; it was to
**prove the strategy.**

As of Day 4, the experiment is **formally concluded**. I don&apos;t see this as a failure of
discipline. Instead, I see this as a **pragmatic strategic decision.** Here is the
rationale for the pivot, and the core lessons we successfully locked in.

---

## The Rationale for the Pivot

### Prioritization is the Ultimate Resilience Strategy

The most **resilient system** has a strong firewall between &apos;fun side project&apos; and
&apos;real-life priorities.&apos; While AoC was a great proving ground, **real-world events like
family holiday commitments and end-of-year business demands took precedence.** It&apos;s hard
to justify fun but ultimately low-value coding challenges over attending holiday parties
with my kids.

Besides, the overall point was made. In any enterprise context, the moment you choose to
allocate **precious resources** (like my time) to a non-critical experiment after its key
learning objectives are met, you’ve committed a failure in **strategic governance.**
Stopping is the **most strategically sound choice** I could make right now. If we have
already learned and demonstrated what we can, continuing would just be a **waste of time**
we could better apply elsewhere.

### The Return on Strategy Taps Out

As the challenges progressed past Day 4, the nature of the solution shifted dramatically.
It went from requiring solid **system stability and change management** (where TDD
shines) to purely **algorithmic identification** (where your brain is the most important
tool).

The problems became less about *how to build a resilient system* and more about
recognizing a specific graph theory challenge. When that happens, the overhead of
maintaining enterprise-grade TDD starts generating **friction** rather than mitigating it.
I found the **strategic overhead** was actively impeding the **velocity of discovery** -
it proved its own limits in this specific domain.

---

## The Core Learnings Locked In

Despite the early pivot, the experiment successfully **validated several core
principles**. We got the data we came for:

1. **TDD as Governance, Always:** Our practice confirmed that **TDD’s primary value** is
  not driving the initial algorithm, but serving as a **robust governance mechanism** for
  change. The first pass (**&quot;Make It Work&quot;**) usually solved the challenge, and the rigorous
  **&quot;Make It Right&quot;** step consistently identified and squashed subtle bugs in either
  the code or the test logic - demonstrating its power in **mitigating technical
  debt before it even felt real.**

2. **The Predictable Success of the Iterative Loop:** The **Make It Work, Make It Right,
  Make It Fast** process worked exactly as designed. The results were visible and
  reliable. We proved the strategy&apos;s validity against **early-stage complexity**.
  Continuing would have only reiterated the success for the same reasons. **We proved the
  hypothesis, case closed.**

### An Unintentional Study: The LLM in the Engine Room

The series unexpectedly became a useful test for integrating **AI** into my workflow. I used
**Gemini**, a general-purpose LLM, which was excellent for **boilerplate generation** and
structural code, but required **significant guidance and human verification** to ensure the
actual logic was correct.

This finding is **critical**: **Governance and validation remain human, architectural
responsibilities.** The AI is a powerful assistant, but the architect is still the only one
with the **strategic map.**

---

## Conclusion: The Ultimate Resilient Architecture

The experiment proved the strength of **enterprise discipline** against initial complexity.
The pivot proved the strength of **strategic prioritization** against real-world
constraints.

The **resilient strategist** is the one who chooses the right tool for the job. And the
ultimate **resilient architecture**? Knowing exactly when to clock out.</content:encoded></item><item><title>Architecture &amp; Resilience: AoC 2025 - Day 4: Printing Department</title><link>https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day4/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day4/</guid><description>Day 4 required solving an iterative spatial problem. I used spatial indexing for efficient neighbor counting and a BFS simulation to model the cascading removal of paper rolls.</description><pubDate>Thu, 04 Dec 2025 21:53:00 GMT</pubDate><content:encoded>## The Challenge: Spatial Indexing and Iterative Collapse

Day 4 presented a **spatial problem** involving rolls of paper (`@`) stored in a warehouse grid,
requiring me to count how many rolls are **accessible**. A roll is accessible if it has **fewer
than four** adjacent rolls (checking all 8 surrounding positions).

---

### Part 1: Initial Accessibility

The first part required a simple snapshot: how many rolls are accessible *right
now*?

### Part 2: Cascading Removal

The second part introduced a simulation: Accessible rolls are removed, potentially making *more*
rolls accessible. I had to calculate the **total** number of rolls that could be removed through
this iterative, cascading process. This transformed the problem from a simple count into a complex
**wave propagation** simulation.

---

## TDD and the Architectural Process

My test-driven development (TDD) process highlighted the inherent difficulty of these spatial logic
puzzles, even for advanced AI tools.

### LLM Struggles with Logic

As in previous days, I leveraged Google Gemini to generate a test suite. However, the LLM
consistently provided tests with **incorrect logic or expected values** for edge cases.

I noticed that the LLM often tried to apply a simplified, non-spatial rule or simply miscounted the
number of accessible rolls. For example, in a **dense block**, the LLM failed to identify that many
rolls on the perimeter were still blocked by **4 or more neighbors**.

**I think this is a very important thing to point out:** Even though Gemini could correctly process
the main example (likely by simply extracting the known correct result), it fundamentally struggled
to generate **reliable test cases** for the subtle boundary conditions. I ultimately used the
structural boilerplate from Gemini but had to write **all the inputs and expected values myself**
to ensure accuracy. This reinforced the need for a **human-in-the-loop** to validate core domain
logic.

---

## Part 1: From Naive Loops to Spatial Indexing

I continued working through the classic Make it Work, Make it Right, Make it Fast development
loop that I have used for previous days in this year&apos;s challenge.

### 1. Make it Work: The Naive Solution

My initial solution used nested `for` loops to iterate through the entire grid, and for every roll
(`@`), it ran **eight separate checks** for adjacent positions.

While this was functional, it had two problems: it wasted time iterating over empty spaces (`.`),
and the eight-way check logic was verbose.

### 2. Make it Right: Encapsulation and Constants

To clean up the code, I consolidated the eight separate checks into a single constant array of
**neighbor offsets** (`[-1, -1]`, `[0, 1]`, etc.) and used a loop to apply them. This resulted in
significantly cleaner and more maintainable code by replacing long, repetitive checks.

### 3. Make it Fast: Pre-calculating Adjacency (Spatial Indexing)

The true performance optimization came from avoiding the repeated calculation of neighbor counts.
Instead of checking a roll&apos;s neighbors every time, I pre-calculated the neighbor count for *every*
cell in a **single pass** over the input. This is a form of **spatial indexing**.

1.  **Iterate Over All Rolls:** In one pass, I iterate only over the rolls (`@`).

2.  **Propagate Count:** For every roll found, I iterate over its 8 neighbors and **increment a
    count grid** at that neighbor&apos;s coordinate.

3.  **Final Tally:** After the single pass, the count grid holds the exact number of adjacent rolls
    for every cell. I simply iterate one last time to count how many cells with an `@` have a final
    count of `&lt;4`.

This approach replaces repeated per-roll neighbor checks with a single `O(R * C)` pass that
computes every cell&apos;s count once, leaving only constant-time lookups afterwards.
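
Here is a condensed sketch of that single pass, reusing the `NEIGHBOR_OFFSETS` constant from the
sketch above (again illustrative, not the verbatim repo code):

```typescript
function countAccessible(grid: string[]): number {
  const rows = grid.length;
  const cols = grid[0]?.length ?? 0;
  const counts = Array.from({ length: rows }, () =&gt; new Array(cols).fill(0));

  // Single pass: every roll increments the count of each of its neighbors.
  for (let r = 0; r &lt; rows; r++) {
    for (let c = 0; c &lt; cols; c++) {
      if (grid[r][c] !== &apos;@&apos;) continue;
      for (const [dRow, dCol] of NEIGHBOR_OFFSETS) {
        const nr = r + dRow;
        const nc = c + dCol;
        if (nr &gt;= 0 &amp;&amp; nr &lt; rows &amp;&amp; nc &gt;= 0 &amp;&amp; nc &lt; cols) counts[nr][nc]++;
      }
    }
  }

  // Final tally: a roll is accessible if fewer than four rolls surround it.
  let accessible = 0;
  for (let r = 0; r &lt; rows; r++) {
    for (let c = 0; c &lt; cols; c++) {
      if (grid[r][c] === &apos;@&apos; &amp;&amp; counts[r][c] &lt; 4) accessible++;
    }
  }
  return accessible;
}
```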

---

## Part 2: Breadth-first Search

Part 2 required a simulation. My optimized Part 1 solution provided the perfect starting point: the
ability to quickly determine initial accessibility.

### The BFS Simulation

The cascade effect is best solved using a **Breadth-First Search (BFS)**.

1.  **Initial Queue:** I use the pre-calculated counts from Part 1 to populate a **Set** with the
    coordinates of all initially accessible rolls. I use a **Set** because it guarantees
    **uniqueness**, preventing a single roll from being added to the queue multiple times if freed
    by several neighbors simultaneously.

2.  **The Loop:** While the queue contains rolls to remove:

* **Process Roll:** I dequeue one roll.
* **Safety Check:** I use a separate **Set** (`removedRolls`) to ensure I **never double-count**
    a roll that was processed earlier.
* **Decrement Counts:** For all 8 neighbors of the removed roll, I **decrement** their count in
    the adjacency grid.
* **Propagation:** If a neighbor is an unremoved roll and its new count drops to **`&lt;4`**, it
    becomes the newest candidate and is immediately added to the `removableQueue` (a condensed
    sketch follows this list).
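
A condensed sketch of that loop, with coordinates encoded as `&quot;r,c&quot;` strings and reusing the
Part 1 structures (illustrative names, not the verbatim repo code):

```typescript
function totalRemovable(
  grid: string[],
  counts: number[][], // pre-calculated adjacency counts from Part 1
  initiallyAccessible: Set&lt;string&gt;,
): number {
  const queue = [...initiallyAccessible];
  const removed = new Set&lt;string&gt;();
  let head = 0;

  while (head &lt; queue.length) {
    const key = queue[head++];
    if (removed.has(key)) continue; // safety check: never double-count
    removed.add(key);

    const [r, c] = key.split(&apos;,&apos;).map(Number);
    for (const [dRow, dCol] of NEIGHBOR_OFFSETS) {
      const nr = r + dRow;
      const nc = c + dCol;
      const neighborKey = `${nr},${nc}`;
      if (grid[nr]?.[nc] !== &apos;@&apos; || removed.has(neighborKey)) continue;
      counts[nr][nc]--; // the removed roll no longer blocks this neighbor
      if (counts[nr][nc] === 3) queue.push(neighborKey); // just dropped below 4
    }
  }
  return removed.size;
}
```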

This iterative process continues until the wave of removal stops, leaving me with the final,
correct total. By starting from the &quot;Make it Fast&quot; solution from Part 1, I had a clean foundation
to build this resilient, high-performance simulation.

---

## View the Full Codebase

The complete, final, and tested TypeScript solution for Day 4 is available for review on GitHub,
demonstrating the implementation of the TDD and architectural principles discussed here.

**[View the Advent of Code 2025 Repository on GitHub](https://github.com/andycarlberg/advent-of-code-2025)**

Day 4 reinforced that **resilience** isn&apos;t just about handling large numbers; it&apos;s about choosing
the right **data structure** and **algorithmic strategy** (like spatial indexing and BFS) to model
complex, real-world dependencies. The lesson from the LLM struggle is clear: human intuition is
still paramount when validating the subtle, spatial logic required to build a robust system.</content:encoded></item><item><title>Architecture &amp; Resilience: Advent of Code 2025 Day 3 – Maximizing Battery Joltage</title><link>https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day3/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day3/</guid><description>The Day 3 challenge demanded maximizing a 12-digit sequence, testing our system&apos;s architectural resilience. See the journey from flawed assumptions to implementing the optimized stack, ensuring robustness against extreme scale via BigInt.</description><pubDate>Wed, 03 Dec 2025 21:53:00 GMT</pubDate><content:encoded>Day 3 presented a fascinating sequence manipulation problem. My initial assumptions about data
structures led to a flawed implementation, but TDD and persistent debugging eventually pointed the
way to a highly optimized solution for both parts. The biggest challenge? Realizing the **O(N)**
algorithm for Part 1, and then correctly applying a different **O(N)** algorithm (the stack) for
Part 2.

---

## Testing the Boundaries (TDD Setup)

As with the previous days, I used Gemini to generate a comprehensive test suite. This proved
valuable, as it immediately surfaced important edge cases:

* **Implicit Constraints:** The initial tests assumed the input should be handled robustly, even
  including non-digit characters and lines with too few batteries. I decided to filter
  nonstandard input, focusing on lines that met the minimal requirements (at least two batteries for
  Part 1).
* **The BigInt Revelation:** For Part 2, Gemini&apos;s generated tests immediately incorporated **BigInt**
  for the expected values, alerting me early that the final sum would exceed JavaScript&apos;s safe
  integer limit. This saved significant debugging time later.
* **Catching Test Issues:** Ironically, the generated suite itself had two flaws (misplaced comments
  being included as input and incorrect expected values). This experience reinforced the key lesson
  from Day 2: while the AI is a **powerful TDD accelerant**, it still requires **careful human
  validation** of its output, especially concerning specific input formats and core problem logic,
  to ensure architectural **robustness**.

---

## Part 1: Finding the Max Two-Digit Joltage

The initial goal was to find the largest two-digit number formed by any pair of digits in the bank
sequence, where the tens digit must appear *before* the ones digit in the sequence. This seemingly
simple task became an early test of my ability to resist premature complexity and simplify my
state management.

### Make it Work: The Flawed Queue and the Logical Breakthrough

My initial approach was driven by a premature assumption about the data structure:

1.  **Initial Misstep (The Queue):** I believed a custom `Queue` was the solution, allowing me to
  maintain the last two digits and check their resulting joltage. This failed because I was only
  comparing the incoming digit to the *front* of the queue, completely missing the comprehensive
  check needed to find the maximum possible pair across the entire sequence.

2.  **Debugging &amp; Realization:** After several failing tests, I realized the core problem wasn&apos;t
  queue management; it was determining the best possible result from the entire bank in a single
  pass. The solution needed a **greedy algorithm** that tracked the best overall result, not just
  the last two digits.
  
3.  **The O(N) Breakthrough:** The necessary logic was realized: at any point, the maximum
  joltage is either the `currentMax` found so far, OR the new joltage formed by pairing the
  **highest preceding digit found so far** (`bestTensDigit`) with the current incoming digit. This
  single-pass comparison was the key to an efficient **O(N)** solution.
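
A minimal sketch of that single pass, assuming the bank is a string of at least two digits
(illustrative, not the verbatim repo code):

```typescript
function maxTwoDigitJoltage(bank: string): number {
  let bestTensDigit = Number(bank[0]); // highest digit seen so far
  let currentMax = 0;                  // best pair found so far

  for (let i = 1; i &lt; bank.length; i++) {
    const digit = Number(bank[i]);
    // The max is either what we already have, or the best preceding digit
    // paired with the current digit in the ones place.
    currentMax = Math.max(currentMax, bestTensDigit * 10 + digit);
    bestTensDigit = Math.max(bestTensDigit, digit);
  }
  return currentMax;
}
```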

### Make it Right: Refactoring for Clarity and Resilience

Once the logic was functionally correct, the focus shifted to code hygiene and architectural
clarity. This phase strongly echoed lessons learned in previous days regarding **separation of
concerns** and **state simplification**.

1.  **Lessons in Abstraction and State:** My initial custom `Queue` class was clumsy. It was doing
  too much, tracking an underlying array while trying to manage the overall maximum. This
  highlights the need to start with the simplest, clearest state representation possible.
  
2.  **Refactoring to Minimal State:** The most significant clarity improvement came from
  refactoring the &quot;queue-like&quot; logic to track only two essential pieces of state:

* `currentMax`: The best result found globally so far.
* `bestTensDigit`: The single most valuable digit seen previously (the best candidate for the
    tens place).

3.  **Encapsulation of Business Logic:** By focusing the class only on these two pieces of state
  and implementing the two-step greedy logic, the solution became transparent and resilient. The
  complex multi-case comparison was elegantly replaced by two simple, ordered checks, making the
  logic much easier to understand.

### Make it Fast: Optimization and Efficiency

With the correct **O(N)** algorithm in place, the solution was already as fast as possible in terms
of **Big O complexity**, as there&apos;s no way to avoid iterating over every digit in the input bank.

* **Optimal Time Complexity:** The reliance on the single-pass **greedy algorithm** ensured the
  complexity remained linear, **O(N)**.
* **Clarity Trade-off:** Any additional changes would provide minimal benefit while reducing
  clarity, unless you&apos;re very familiar with JavaScript&apos;s idiosyncrasies.

---

## Part 2: Maximizing the Twelve-Digit Sequence

The constraint changed from finding the largest 2-digit number to finding the largest 12-digit
number by dropping any extra digits.

### Make it Work: The Stack

My initial assumption was that this was a queue problem, but it turns out that the true required
structure is a **stack** (LIFO: Last-In, First-Out).

1.  **Stack Logic:** To form the largest number, every digit must be as large as possible, placed
  as far to the left as possible.

2.  **The Greedy Rule:** When a new digit arrives, if it is larger than the digit currently at the
  top of our sequence (stack), I **pop** the smaller digit off the stack, effectively deleting it,
  because putting the larger digit to the left yields a bigger number. This process continues until
  the stack top is at least as large as the new digit, or I run out of allowed drops.

3.  **No Prefill:** I initially tried to prefill the stack but realized that this could add digits
  that should have been greedily dropped earlier. The solution must rely on the **always-add**
  strategy: always push the `currentDigit` after the deletions, ensuring it&apos;s available for
  comparison with future digits.

The final structure was a single loop using a simple array as a stack, relying on `pop()` and
`push()`.

```javascript
// Simplified logic for greedy deletion: while drops remain and the incoming
// digit beats the digit on top of the stack, evict the smaller digit.
while (digitsToDrop &gt; 0 &amp;&amp; stack.length &gt; 0 &amp;&amp; currentDigit &gt; stack[stack.length - 1]) {
    stack.pop();
    digitsToDrop--;
}
stack.push(currentDigit); // always add, so it can compete with future digits
```

A final debug session was necessary to fix a regex error that excluded &apos;0&apos; from valid banks, which
caused one of the new test cases to fail. Once fixed, all tests passed.

### Make it Right

The main cleanup focused on clarity:

* Constants: Defined `SEQUENCE_LENGTH = 12` to eliminate **magic numbers**.
* Naming: Cleaned up several variable names for clarity.

### Make it Fast

As in Part 1, no further optimization was possible without sacrificing clarity.

---

## View the Full Codebase

The complete, final, and tested TypeScript solution for Day 3 is available for review on
GitHub, demonstrating the implementation of the TDD and architectural principles discussed
here.

**[View the Advent of Code 2025 Repository on GitHub](https://github.com/andycarlberg/advent-of-code-2025)**

This process, driven by TDD, proved that even when initial assumptions are wrong, persistent
testing and refactoring lead to the most correct and optimized solution.</content:encoded></item><item><title>Discipline of Resilience: AoC Day 2 - Stress-Testing TDD and Resilience Architecture</title><link>https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day2/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day2/</guid><description>A Principal Architect&apos;s breakdown of AoC Day 2. See how TDD and architectural discipline hold up under new requirements, and the limits of AI in generating governed tests.</description><pubDate>Tue, 02 Dec 2025 21:53:00 GMT</pubDate><content:encoded>I continue my series focusing on the deliberate application of **enterprise architectural**
**discipline** to the simple, low-stakes problems presented by Advent of Code.

Day 2, the **Gift Shop Challenge**, presented a puzzle that didn&apos;t directly build on Day 1&apos;s
logic, but instead offered a new opportunity to stress-test my strategic foundation:
**Test-Driven Development (TDD)** and **Architectural Separation of Concerns**.

-----

## The Strategic Rationale

My goal remains to **deliberately stretch technical muscles** that focus on governance and
system resilience. Today&apos;s solution would determine if my Day 1 architectural decisions held
up when faced with a wholly different problem set.

I applied my core methodology - **Make it Work, Make it Right, Make it Fast** - within my
established architectural principles (**TypeScript** for type safety and **Governance**). This
allowed me to focus my strategic questions for Day 2:

- **Continuity Check**: Could I re-use the wrapper (`index.ts`) architecture?

- **Advanced TDD**: Could I strategically leverage AI to accelerate the test definition phase
  while still maintaining ownership of the governance?

- **Optimization Discipline**: Could I execute the full process, even when the initial
  implementation seemed deceptively simple?

-----

## Day 2 Solve: The Gift Shop Challenge

### The Problem Description (Part 1)

The **Gift Shop Challenge** required me to analyze a set of ID ranges and sum all *invalid
product IDs*. An invalid ID is defined as any number made up of some sequence of digits
repeated exactly twice (e.g., 6464 is 64 repeated twice). The complexity lay in parsing the
input, handling ranges, and mathematically determining these repetitive patterns across
varying number lengths.

### 1\. Initial Architecture Application

To apply the strategic principle of **Architectural Separation of Concerns**, I isolated the
core logic from external, messy dependencies. This ensures high testability and component
reusability:

- `index.ts` (The Wrapper): This is a simple CLI layer dedicated only to handling messy I/O,
  such as file system operations (`fs`) and argument parsing (`process.argv`). I keep this
  simple and **deliberately exclude it from unit testing.**

- `solution.ts` (The Core Logic): This file is dedicated solely to the arithmetic logic of the
  puzzle. By isolating the business logic here, I remove the need for complex mocking in my tests
  and greatly improve the testability of the algorithm. This separation is fundamental to building
  scalable, low-friction systems.

### 2\. Advanced TDD: Leveraging AI as a Strategic Accelerator

To test the next level of TDD automation, I provided **Gemini** with the full problem prompt and
asked it to scaffold the entire test suite.

This strategic outsourcing immediately yielded a **significant reduction in manual effort** and
provided a complete suite that implicitly defined the necessary interface for my solution:

```typescript
class Solution {
    // run solution for complete input
    solve(input: string): number;
    // Detect if a given number is an invalid ID
    isInvalid(input: number | string): boolean;
}
```

#### The Strategic Critique: AI for Governance

While the AI provided exceptional speed and rigor - identifying edge cases, boundary conditions,
and invalid inputs I hadn&apos;t yet considered - it also demonstrated key limitations that highlight
the continued necessity of human oversight:

- Logical Confusion: The AI added test cases based on confusion with the problem logic
  (e.g., testing `1111`&apos;s validity or handling leading zeros, which the prompt explicitly
  rules out).

- AI Transparency: Fascinatingly, the AI&apos;s internal confusion was sometimes visible within
  its own generated comments, reinforcing the need for **governance**: AI is an exceptional
  tool for generating boilerplate, but the architect&apos;s primary role remains critically
  validating the *intent* of the test cases against the business requirements.

### 3\. Implementation Phase: Make It Work

With the interface defined, I began implementation:

- Initial solve: I started by implementing the `isInvalid` helper function. Though the
  generated interface expected a number, I initially defaulted to a simpler string
  comparison (splitting the number string in half; sketched after this list). This covered
  all the helper tests but felt like a naive solution, which is acceptable for this phase.

- Parsing Friction: The `solve` function required gnarly parsing logic for the nested data
  string (multiple comma-separated, dash-delineated ranges). While I strategically trimmed
  whitespace (assuming generous input handling), the resulting parsing logic was complex and
  difficult to read.
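
A minimal sketch of that naive string-split check (hypothetical code matching the generated
interface, not the final repo implementation):

```typescript
// Naive &quot;Make It Work&quot; pass: an ID is invalid when its decimal string is
// some digit sequence repeated exactly twice (e.g. 6464 = &quot;64&quot; + &quot;64&quot;).
function isInvalid(input: number | string): boolean {
  const s = String(input);
  if (s.length % 2 !== 0) return false; // odd lengths can never split evenly
  const half = s.length / 2;
  return s.slice(0, half) === s.slice(half);
}
```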

### 4\. Architecture Phase: Make It Right

I focused on eliminating the friction introduced by the parsing logic:

- Code Governance: I extracted the complex parsing into its own function and added explicit
  error handling for invalid range formats (generally ignoring them).

- Type Clarity: I defined a more explicit `Range` type, replacing the generic array return
  value to ensure stronger **type safety** and readability across the codebase.

### 5\. Optimization Phase: Make It Fast

As anticipated, the initial &quot;Make it Work&quot; solution was inefficient due to iterating over
*every single number* in the given ranges.

- The Strategic Pivot: The goal shifted from iterative searching to **mathematical
  optimization**. I was able to use math trickery to surgically locate the few numbers that
  are actual invalid IDs, rather than iterating over millions of valid numbers (one plausible
  construction is sketched after this list).

- TDD Validation: This complex math was prone to off-by-one errors. The comprehensive test
  suite - even the initial AI-generated cases - proved invaluable here, immediately
  highlighting flaws in the calculation logic.

- Architectural Cost: Due to this **mathematical optimization**, the helper function
  `isInvalid` and its entire test suite became obsolete and were removed, a clear case where
  **performance needs dictated a change to the initial architecture.**
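
The repository holds the real math, but one plausible version of the trick: a number that is a
k-digit block repeated twice equals `block * (10^k + 1)`, so invalid IDs can be *generated*
directly instead of searched for. A hedged sketch for a single range:

```typescript
// Enumerate every &quot;k-digit block repeated twice&quot; candidate inside [lo, hi]
// and sum the hits. Blocks start at 10^(k-1), so leading zeros never occur.
function sumInvalidIds(lo: number, hi: number): number {
  let sum = 0;
  for (let k = 1; k &lt;= 7; k++) { // 2k-digit IDs, up to 14 digits
    const factor = 10 ** k + 1; // block repeated twice = block * factor
    for (let block = 10 ** (k - 1); block &lt;= 10 ** k - 1; block++) {
      const candidate = block * factor; // e.g. 64 * 101 = 6464
      if (candidate &gt; hi) break; // candidates only grow within this k
      if (candidate &gt;= lo) sum += candidate;
    }
  }
  return sum;
}
```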

-----

## Part 2: The Payoff of Architecture

### The Critical Change in Requirements

Part 2 expanded the definition of an invalid ID to any sequence of digits repeated **at least**
**twice** (e.g., `123|123|123` or `1|1|1|1|1`).

Because my solution followed the principles of separation and optimization:

1. **TDD Resilience:** The automated test cases were efficiently updated for the new
  requirement. Importantly, 8 of the original tests still passed, proving my underlying
  input handling and structural logic was robust.

2. **Core Logic Update:** I updated the mathematical logic to account for sequences repeating
  2, 3, 4, or more times. Although I again required some research to correctly apply the
  new math, my disciplined use of TDD meant I had a safety net to validate the updated
  optimization logic instantly.

-----

## View the Full Codebase

The complete, final, and tested TypeScript solution for Day 2 is available for review on
GitHub, demonstrating the implementation of the TDD and architectural principles discussed
here.

**[View the Advent of Code 2025 Repository on GitHub](https://github.com/andycarlberg/advent-of-code-2025)**

Day 2 demonstrated the meaning of resilient architecture. While the need for **mathematical**
**optimization** forced an intentional break from the initial code design, the established
boundaries (TDD and Architectural Separation) ensured the pivot was **safe, governed, and**
**low-friction**, proving that process outweighs initial implementation choices.</content:encoded></item><item><title>Discipline of Resilience: Enterprise Architecture in AoC Day 1</title><link>https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day1/</link><guid isPermaLink="true">https://www.andycarlberg.com/posts/architecture-resilience-aoc-2025-day1/</guid><description>Architects don&apos;t just solve problems; we prevent them. See why applying enterprise-grade TDD and strategic overkill to Advent of Code is the only way to build resilience. Day 1.</description><pubDate>Mon, 01 Dec 2025 16:00:00 GMT</pubDate><content:encoded>I&apos;m returning to **Advent of Code (AoC)** this year, with the new, more manageable
**12-day format**. In the past, end-of-year deadlines and holiday season pressures
consistently caused me to drop out so the shorter commitment is attractive. This year,
I&apos;m treating AoC not just as a technical challenge, but as a deliberate exercise in
applying large-scale **architectural discipline** - Test-Driven Development (TDD) and
strategic optimization - to small, contained problems. The goal is to build solutions that
are robust, maintainable, and built on a foundation of sound engineering principles that
will carry us through the list of challenges.

---

## The Strategic Rationale

The fundamental idea behind this project is to practice skills that scale. **Yes, this
level of TDD and architectural separation is overkill for a 12-day coding challenge - and
that is precisely the point.** These simple, contained problems provide a low-stakes
environment to exercise and refine the critical discipline needed for massive enterprise
systems.

This project is a dedicated effort to **deliberately stretch technical muscles** that
often get sidelined in executive work:

- **Practicing TDD:** Creating an environment where tests are non-negotiable - a core
  strategy for **risk mitigation**.
- **Structured Problem-Solving:** Applying the deliberate **&quot;Make it work, make it
  right, make it fast&quot;** methodology to walk through optimization from the ground up.
- **Refining Code Governance:** Ensuring the final solution is not just correct, but
  robust and maintainable through careful language and architectural choices.

---

## Architectural Strategy: Reliability and Governance

My approach prioritizes creating a foundation that is easy to modify, test, and
understand - the core tenets of resilient **technology governance**.

### 1. Methodology: Make it Work, Make it Right, Make it Fast

This simple sequence ensures we prioritize functionality, clean architecture, and
performance in that order:

1. **Functionality:** Solve the immediate problem.
2. **Readability/Architecture:** Refactor for clean separation of concerns and maintainability.
3. **Performance:** Optimize the solution only after it is correct and cleanly structured.

### 2. Technology Choice: TypeScript

I chose **TypeScript** to prioritize **governance and reliability** in the codebase.
TypeScript provides superior **type safety** and robust tooling, allowing me to focus
mental energy on complex logic and architecture rather than chasing avoidable runtime
errors - a low-friction engineering environment. Plus, I&apos;m already familiar with the syntax
and can focus on the other architectural patterns I&apos;m targeting for practice.

---

## Day 1 Solve: The Safe Dial Challenge

### The Problem Description (Part 1)

Day 1 presented the **Safe Dial** challenge. The goal was to calculate the final position
of a dial after following a series of instructions (e.g., `R10`, `L5`). The dial has
positions &apos;0&apos; through &apos;99&apos;. The core requirement was to count the number of instructions
that resulted in the dial **stopping exactly at position 0**. This scenario is perfect
for practicing TDD and handling modular arithmetic.
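
Before diving into the architecture, here is a minimal sketch of the wrapping arithmetic itself,
assuming `R` increases the position (illustrative, not the repo&apos;s exact code). JavaScript&apos;s `%`
operator can return negative values, so left turns need a corrective second modulo:

```typescript
const DIAL_SIZE = 100; // positions 0 through 99

function move(position: number, direction: &apos;L&apos; | &apos;R&apos;, ticks: number): number {
  const delta = direction === &apos;R&apos; ? ticks : -ticks;
  // The double modulo keeps the result in [0, DIAL_SIZE) even when
  // (position + delta) is negative.
  return ((position + delta) % DIAL_SIZE + DIAL_SIZE) % DIAL_SIZE;
}
```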

### 1. Initial Architecture Application

To apply the strategic principle of **Architectural Separation of Concerns**, we isolated
the core logic from external, messy dependencies. This ensures high testability and
component reusability:

- **`index.ts` (The Wrapper):** This is a simple CLI layer dedicated only to handling messy
  I/O, such as file system operations (`fs`) and argument parsing (`process.argv`). We keep
  this simple and **deliberately exclude it from unit testing.**
- **`safedial.ts` (The Core Logic):** This file is dedicated solely to the arithmetic and
  movement logic of the puzzle. By isolating the business logic here, we remove the need
  for complex mocking in our tests and greatly improve the testability of the algorithm.
  This separation is fundamental to building scalable, low-friction systems.

### 2. TDD Phase: Building Robust Test Cases

Adhering to TDD, I wrote the comprehensive test suite first. This created a complete
specification for the code before it was ever written, ensuring **governance over the
required behavior**.

Crucially, I leveraged an **AI tool** (in this case, Google&apos;s Gemini) to generate the
initial set of test cases and associated Jest boilerplate. While I maintain a healthy
skepticism regarding the long-term architectural implications of generative AI, its
immediate utility as a tactical tool to **eliminate manual friction** was undeniable here.
This provided a **significant reduction in manual effort**, immediately allowing me to
shift my focus from the repetitive mechanics of testing to the strategic identification of
complex scenarios. I was able to spend my limited time defining:

- The puzzle&apos;s official example.
- Rigorous **Boundary Conditions** (e.g., testing wrapping behavior at the dial limits).
- **Edge Cases** to ensure the system handles invalid input gracefully (e.g., empty strings,
  whitespace, negative ticks, and invalid starting letters - all of which must throw clear
  errors).

By offloading the repetitive scaffolding, the AI tool helped ensure the final architecture
was robust and minimized the initial friction in the development process.

### 3. Implementation Phase: Make It Work

The initial solution used regex to split instructions and the **modulo function** for
movement wrapping. Initial test failures immediately required us to address real-world
input problems, specifically handling **leading/trailing whitespace** and empty lines.
Robust systems must anticipate and mitigate this kind of input friction.

### 4. Architecture Phase: Make It Right

With the logic validated, we focused on **readability and governance**: we replaced &quot;magic
strings&quot; for direction with a proper TypeScript `enum` and split the complex instruction
parsing regex into its own dedicated function. This clean separation ensures the
instruction processing is isolated from the main control flow, improving maintainability.

### 5. Optimization Phase: Make It Fast

For a low-complexity problem, optimization was an architectural choice. Instead of
repeatedly parsing the instruction string inside the main execution loop (heavy string
work), we extracted the parsing step to happen *once* before the loop begins. While this
doesn&apos;t technically improve the Big-O complexity or anything, it isolates the simple
arithmetic inside the loop, allowing the JavaScript JIT compiler to focus on optimizing
the number crunching, thereby reducing system overhead.

---

## Part 2: The Payoff of Architecture

### The Critical Change in Requirements

Part 2 introduced a small but critical change to the counting requirement. Instead of
counting when the dial **stops exactly at 0**, the new instruction was to count every
instance where the dial **passes through or ends on 0** during a move. This update
increased the complexity of the movement arithmetic, requiring us to calculate &quot;crossings&quot;
rather than just the final modulo remainder.
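
One way to count crossings without simulating every tick (a sketch under the same assumptions as
the `move` helper above, not necessarily the repo&apos;s `doInstruction` logic) is to find the
distance to the first time the dial reads 0, then add one crossing per full lap after that:

```typescript
function zeroCrossings(position: number, direction: &apos;L&apos; | &apos;R&apos;, ticks: number): number {
  // Ticks until the dial first reads 0 (a full lap if we start on 0).
  const distanceToZero = direction === &apos;R&apos;
    ? (DIAL_SIZE - position) % DIAL_SIZE || DIAL_SIZE
    : position % DIAL_SIZE || DIAL_SIZE;
  if (ticks &lt; distanceToZero) return 0;
  return 1 + Math.floor((ticks - distanceToZero) / DIAL_SIZE);
}
```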

Because our architecture followed the principles of separation of concerns and TDD:

1. **Test Update:** Only the *expected values* in our existing test suite needed to be
   calculated and updated. No structural changes to the tests were required.
2. **Code Update:** The complex new logic (especially for handling left turns and crossings)
   was contained entirely within the isolated `doInstruction` function. The rest of the
   system (the execution loop, the input parsing, and the I/O wrapper) remained
   untouched, demonstrating the **resilience of the architecture** against changing
   requirements.

---

### View the Full Codebase

The complete, final, and tested TypeScript solution for Day 1 is available for review on
GitHub, demonstrating the implementation of the TDD and architectural principles discussed
here.

[**View the Advent of Code 2025 Repository on GitHub**](https://github.com/andycarlberg/advent-of-code-2025)

By applying foundational engineering discipline, even a simple puzzle is transformed into a
robust, modifiable, and low-friction solution.</content:encoded></item></channel></rss>