EN 39: Shifting complexity

Complexity is everywhere. We can't escape it; we can only hope to manage it.

All but the smallest things are complex. It's a fact of life. From the simplest building blocks, we can get ever more complex structures by combining them in different ways: from atoms to a cell, from a cell to a human body.

Software cannot escape complexity; it itself sits on top of layers of it. We see the same idea of combining building blocks in object composition and in functional programming, where we combine objects or functions to create sophisticated structures.

We developers are no strangers to this. In fact, an intrinsic part of software is managing complexity. Think of the times we abstract or encapsulate, putting a simpler interface or facade on top of an intricate piece of functionality, with the risk of leaky abstractions. What about modularity (composition again) and separation of concerns?

I had the first law of thermodynamics in mind (energy cannot be created or destroyed, only transformed from one form to another), but quickly realised that we even create new complexity to manage our current complexity, or to solve new problems.

In all this managing and juggling, there's a fine balance between doing too little and doing too much, or "overengineering". There's also the risk of simply ignoring the issues that a more complex pattern introduces.

A typical example of ignoring the added complexity is the microservices architecture. Everyone loves this pattern and wants to implement it everywhere and for everything. It delivers many benefits, but there's a catch: it only delivers the promised goods if you use it in a context it works for, to solve the problems it addresses. Every pattern comes with trade-offs and new challenges.

Moving from a monolith to microservices can make your app more manageable, easier to reason about, more scalable, and so on, but it comes at a steep price. Even the seemingly simple jump from a single machine to many machines over the network adds tons of complexity. You'll have to solve the new challenges that come with distributed systems.

I'm using events, but what if I get duplicates, and what if I lose important ones? My service depends on or communicates with other services: how can it know where they are, and what happens if they're down? How do I make sure that transactions spanning services, which need to occur in order, actually go through, or revert if something goes wrong? Now I need to aggregate data from multiple services, doing inefficient, expensive in-memory joins in one of them… So many new things to contemplate, and with the same old risks, like creating big balls of mud.
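Take just the first of those problems, duplicate events. A common answer is to make consumers idempotent: track which event IDs have already been applied and skip redeliveries. A minimal sketch of the idea, with made-up names and an in-memory store standing in for what would be durable storage in a real system:

```python
# Hypothetical idempotent event consumer: each event carries a unique ID,
# and we record processed IDs so that a redelivered duplicate has no effect.

processed_ids: set[str] = set()  # in production: a durable store, not memory
applied_payloads: list[str] = []

def handle_event(event: dict) -> bool:
    """Apply an event exactly once; return True if it was applied."""
    event_id = event["id"]
    if event_id in processed_ids:
        return False  # duplicate delivery: ignore it
    applied_payloads.append(event["payload"])  # the actual side effect
    processed_ids.add(event_id)
    return True

# The broker redelivers "e1", but it is only applied the first time.
events = [
    {"id": "e1", "payload": "order-created"},
    {"id": "e2", "payload": "order-updated"},
    {"id": "e1", "payload": "order-created"},  # duplicate
]
results = [handle_event(e) for e in events]
```

The catch, of course, is the sketch's own caveat: the dedup store must survive crashes and be updated atomically with the side effect, which is exactly the kind of new complexity the pattern shifts onto you.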

A thought is that complex things need energy to be maintained:

Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.

Which is to say that without conscious effort (and even with it), codebases eventually tend towards big balls of mud. The path of least resistance and least effort to get the feature out is not the one that takes you away from the big ball of mud. A bit like the heat death of the universe, but for the application; only, the application makes you money and still needs to be maintained.

Complexity is unavoidable; we can only manage it: shift it around, compartmentalise it, shape it differently, abstract it, sweep it under the rug… Managing it also means choosing which flavour of complexity you're more comfortable with, given your needs at the moment.
