The Buzz Lightyear Syndrome and our misunderstandings about what a good enough software project is — Essays on Software Engineering

For some time now I have been telling my students that we usually make two basic mistakes when designing software: first, we try to predict the future, and second, we really do love to build things that scale “to infinity and beyond”, as Buzz Lightyear, the Toy Story character, says. And that is where the pitfalls begin.

Marvin Ferreira
Dec 21, 2020

At the end of the 17th chapter of my favourite book, the 20th anniversary edition of “The Mythical Man-Month”, Frederick P. Brooks Jr. quotes an excerpt from a 1988 article by Robert L. Glass, saying that it accurately summarized his own views in 1995:

So what, in retrospect, have Parnas and Brooks said to us? That software development is a conceptually tough business. That magic solutions are not just around the corner. That it is time for the practitioner to examine evolutionary improvements rather than to wait — or hope — for revolutionary ones. Some in the software field find this to be a discouraging picture. They are the ones who still thought breakthroughs were near at hand. But some of us — those of us crusty enough to think that we are realists — see this as a breath of fresh air. At last, we can focus on something a little more viable than pie in the sky. Now, perhaps, we can get on with the incremental improvements to software productivity that are possible, rather than waiting for the breakthroughs that are not likely to ever come.

With this quote in mind, I start this essay by asking you: did software development get any easier in the 45 years since Brooks presented us with the concepts of Accidental and Essential complexity in software development?

Accidental: programming languages, frameworks, and the tech stack.
Essential: design, modelling, architecture, and the project itself.

Think about the days when computing power was scarce and really expensive. A day-to-day task would be to write a trace table before any code and to design the functions and their relations; after that, you would check whether the running time was asymptotically reasonable enough not to waste processing, and pray that everything ran perfectly, without any “butterfly effect”, before moving on to the next task (always trying to “document” something to keep track of the progress and the knowledge).
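
To make that exercise concrete, here is a minimal sketch of what tracing by hand looked like; the routine and its trace table are illustrative examples of mine, written in Python for convenience rather than anything from that era:

    # A toy routine of the kind you would trace by hand before ever
    # touching the (expensive) machine. O(n) time, O(1) extra space.
    def sum_to(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    # Hand-written trace table for sum_to(3), checked before running:
    #
    #   step | i | total
    #   -----+---+------
    #     0  | - |   0
    #     1  | 1 |   1
    #     2  | 2 |   3
    #     3  | 3 |   6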

And what about these days? What do we usually think about when we are about to create a new software project? Let me guess: it all starts with deciding which cloud will host our software, which framework we will use to create our “microservices hive”, and all the stuff that will be the foundation of the micro-whatever we will build to be extremely elastic and scale as much as possible (even if we will have only a dozen users).

Looking at these “steps”, which are what we usually hear and see everywhere (events, lectures, podcasts, articles and so on), they look more like a syndrome than like a process or method for creating good enough software.

And what exactly do we know about syndromes?

Cambridge Dictionary defines Syndrome (noun) as “a combination of medical problems that shows the existence of a particular disease or mental condition”.

If a syndrome is a combination of problems that shows the existence of a particular condition, what are the problems we are talking about here?

Let’s begin with the first one, which I also consider one of the most important breakthroughs of the last decade in computing: the commoditization of computing power, or what we call Cloud Computing.

Cloud Computing gives us the power to provision with a “click” what in the past we had to plan and dimension, calculating exactly the amount of computing power we would need, or else over-provisioning and underusing infrastructure that was expensive and complex to administer by ourselves.
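
To give an idea of how literal that “click” is, here is a minimal sketch using AWS’s boto3 SDK; it assumes credentials are already configured, and the ImageId is a placeholder, not a real AMI. What once took weeks of procurement is now one API call:

    # Provisioning a virtual machine with a single API call (sketch).
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI id
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)  # the machine exists seconds later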

Nowadays it is rare to see people discussing how much memory or how many CPUs they would need to process their computing tasks; buying computing power is as easy as picking a PaaS (Platform as a Service), allocating it, and putting your code inside it to do the work you need.

This has a direct effect: high bills caused by misusing computing power inside cloud providers, or by using the wrong stack (PaaS) for the type of problem we are trying to address. Think about the overuse of microservices to solve specific problems such as building a large ETL, or workloads that really need High-Performance Computing (HPC). Do you think microservices are the best solution for this kind of problem?
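
For many ETL workloads, a single scheduled batch script does the job with none of that overhead. A minimal sketch, assuming a CSV source; the file name, the column, and the print-as-load step are placeholders for a real source and warehouse sink:

    # A whole “large ETL” as one cron-scheduled script: extract,
    # transform, load. No gateways, no service mesh, no clusters.
    import csv

    def extract(path):
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        for row in rows:
            row["amount"] = float(row["amount"])  # normalize types
            yield row

    def load(rows):
        for row in rows:
            print(row)  # placeholder for a real warehouse insert

    if __name__ == "__main__":
        load(transform(extract("sales.csv")))  # hypothetical input file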

Picking up that hook, I’ll talk about the second problem: microservices.

Let’s make things clear: you have a small team of not-so-experienced developers, and you have to build a simple API that extracts some rarely used data and passes it to the front-end. Why do you think that all the complexity related to creating and administering microservices would be the best, fastest, and simplest solution for this?
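
For contrast, here is roughly the entire “good enough” version of such an API as one small Flask app; the route, the data shape, and the in-memory store are hypothetical stand-ins for a real query:

    # The whole “simple API” in one deployable unit (sketch).
    from flask import Flask, jsonify

    app = Flask(__name__)

    REPORTS = {"2020-12": {"total": 42}}  # rarely-used data, stubbed

    @app.route("/reports/<month>")
    def get_report(month):
        report = REPORTS.get(month)
        if report is None:
            return jsonify(error="not found"), 404
        return jsonify(report)

    if __name__ == "__main__":
        app.run(port=8000)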

Just to start, you would have to be aware of your CI/CD process: is it easy to deploy? Does your design make sense? Are you breaking some rule, mixing business responsibilities in a way that doesn’t make sense? Do you get feedback when things go wrong during a deploy? Do you need to prepare a specific network to group context-related microservices? How will you monitor, debug and trace these microservices? Is all of that testable? Are you prepared to handle all the specific things (and problems) that come with following this strategy?

Instant microservices solutions are easy, aren’t they? I don’t think so. If your API Gateway tool has a problem, could your programmers fix it, or will you always need some DevOps-ish expert around to help your team?

And then there is the third problem, the center of our syndrome, the Buzz Lightyear effect: scaling everything from day zero “to infinity and beyond”, capable of receiving millions of requests per minute without getting a scratch.

Keeping it real: is your application really critical, about to receive tons of requests at the same time? Or will it just handle some requests, keep some independent cache, and, who knows, maybe someday have a problem there that makes you refactor something?

Is it really necessary to pay a future bill right now? Why don’t you first assess whether you have a loosely coupled architecture, with components (or services) in the right context, keeping at least the Single Responsibility Principle, well aggregated by context, and so agnostic that you could avoid vendor lock-in by just migrating your core business logic to any provider?
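
As one way to picture that, here is a minimal ports-and-adapters (Hexagonal) sketch: the business rule depends only on an abstract port, so moving providers means writing one new adapter, not rewriting the core. All the names here are hypothetical:

    # The domain owns the port; adapters are the only provider-specific code.
    from abc import ABC, abstractmethod

    class OrderRepository(ABC):  # port: defined by the business, not the vendor
        @abstractmethod
        def save(self, order: dict) -> None: ...

    class PlaceOrder:  # core business rule, single responsibility
        def __init__(self, repo: OrderRepository) -> None:
            self.repo = repo

        def execute(self, order: dict) -> None:
            if order.get("total", 0) <= 0:
                raise ValueError("order total must be positive")
            self.repo.save(order)

    class InMemoryOrderRepository(OrderRepository):  # swap one per provider
        def __init__(self) -> None:
            self.orders = []

        def save(self, order: dict) -> None:
            self.orders.append(order)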

Am I dreaming? I don’t think so.

What if we, before talking about any implementation details, got to the whiteboard (or a virtual one) and reviewed what the real problems we are facing actually are? Is it possible to create an independent architecture, with good design principles in mind, that brings us this kind of flexibility? Can we validate our business rules without depending on any specific framework?
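
On that last question: reusing the hypothetical PlaceOrder and InMemoryOrderRepository from the sketch above, validating a business rule takes nothing but the standard library. No framework, no container, no cloud:

    # Business rules validated with plain unittest (assumes PlaceOrder and
    # InMemoryOrderRepository from the earlier sketch are in scope).
    import unittest

    class PlaceOrderTest(unittest.TestCase):
        def test_rejects_non_positive_total(self):
            use_case = PlaceOrder(InMemoryOrderRepository())
            with self.assertRaises(ValueError):
                use_case.execute({"total": 0})

        def test_persists_valid_order(self):
            repo = InMemoryOrderRepository()
            PlaceOrder(repo).execute({"total": 10})
            self.assertEqual(len(repo.orders), 1)

    if __name__ == "__main__":
        unittest.main()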

Is it easy? Of course not.

Making software isn’t the easy thing some people say it is. Maybe it’s easy to start coding, but that’s not the same as creating high-quality enterprise software; that’s a really different game.

Despite this chaotic and alarming view, we have practices and techniques that can help us achieve at least good enough software:

  1. Think about how extensible and reusable (not to say componentizable) your software could be. You can separate things by business rules, domains, contexts, or whatever works best for you to keep things at least organized. Try models like the Onion, Clean, and Hexagonal architectures (they are not new; the ports-and-adapters sketch earlier in this essay shows the Hexagonal idea).
  2. Always assess the capacity your application will need, and plan your software to perform safely and comfortably at your peak usage (see the back-of-the-envelope sketch after this list). It is your responsibility to monitor the growth of your application’s usage and to plan when it is time to evolve it or even refactor something. If you want to hike a mountain, you have to begin from the bottom, step by step.
  3. Analyze all the computing power flavors your provider has and pick the right tool for the right job. Putting Kubernetes into everything may cost a lot in the present, and even more in the future, if you don’t know what you are doing.
  4. Don’t criminalize well-made monoliths; sometimes they are exactly the good enough software you need to solve a problem and meet your time to market.
  5. Think about being evolutionary, not revolutionary. Remember the “boiled frogs” story from The Pragmatic Programmer: start small, build some cases, show successful use, and evolve things step by step.
  6. And last but not least, never try to predict the future; nobody can do it, and it will probably change anyway. Solve the problem of today, don’t suffer over the problem of tomorrow, and remember to make decisions (if possible) at the last responsible moment, with the most information you can have.
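
On point 2, a back-of-the-envelope calculation is often enough to start; every number below is hypothetical and should be replaced with your own measurements:

    # Rough capacity check: workers needed to absorb peak traffic.
    import math

    peak_rps = 50          # expected peak requests per second (measure this!)
    avg_latency_s = 0.08   # average handling time per request, in seconds
    headroom = 1.5         # safety margin for bursts

    # Each worker handles ~1/avg_latency_s requests per second (~12.5 here),
    # so about peak_rps * avg_latency_s workers are busy at peak (~4 here).
    workers = math.ceil(peak_rps * avg_latency_s * headroom)
    print(f"provision {workers} workers")  # -> provision 6 workers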

These are some starting points for addressing this syndrome; there are many techniques and many “medicines” that can help us create good enough software without paying more than the “sufficient and necessary” effort to make things better.

Making software is not an easy task, but if we do not neglect its intrinsic complexity, it can be well planned (and this includes agile) and built in a healthy, evolutionary way. To paraphrase Frederick P. Brooks: we can all fall into the tar pit; whether we fall, and whether we keep going, depends on how each of us takes on our professional responsibilities.

Professor Marvin Ferreira — December, 2020

