See the first post in The Pragmatic Programmer 20th Anniversary Edition series for an introduction.
Challenge 1
Look at the software tools and operating systems that you use regularly. Can you find any evidence that these organizations and/or developers are comfortable shipping software they know is not perfect? As a user, would you rather (1) wait for them to get all the bugs out, (2) have complex software and accept some bugs, or (3) opt for simpler software with fewer defects?
Although very stable, Ubuntu (and likely any Linux distribution) is not perfect. This is clearly indicated by the regular patches/updates made available throughout the life cycle of a given release (LTS or interim, in Ubuntu’s case).
Ideally I would opt for option 3, simpler software with fewer defects, as getting all the bugs out of complex software (option 1) is incredibly difficult, if not impossible. The simpler the software, the more likely it is that defects can be prevented. Furthermore, it follows the Unix philosophy of ‘do one thing and do it well’, which I aim for wherever possible for exactly that reason. Many common Linux software tools take this approach and have very few defects (grep, curl, wget, rsync, ssh, etc.).
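To make that idea concrete, here is a minimal Python sketch of ‘do one thing and do it well’: two tiny single-purpose functions (my own hypothetical fetch and grep helpers, loosely mimicking the spirit of curl and grep) that are easy to test in isolation and compose into larger behaviour.

```python
import re
from urllib.request import urlopen


def fetch(url: str) -> str:
    """Download a page and return its text (a stand-in for what curl/wget do)."""
    with urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")


def grep(pattern: str, text: str) -> list[str]:
    """Return the lines of `text` that match `pattern` (the spirit of grep)."""
    return [line for line in text.splitlines() if re.search(pattern, line)]


if __name__ == "__main__":
    # Small single-purpose pieces compose into larger behaviour,
    # much like `curl https://example.com | grep href` on the command line.
    for line in grep(r"href=", fetch("https://example.com")):
        print(line)
```

Each piece is small enough that its defects are easy to find, and neither needs to know anything about the other.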
However, some software (in my opinion) is just inherently more complex. An operating system such as Ubuntu, for example, has to encompass a large range of functionality across many different domains. In this case I prefer option 2, provided that the organisations/developers behind the software fix bugs as they are discovered by users. Since option 1 is essentially impossible for complex software, releasing it as ‘good enough’ and then incrementally improving it over time is a far more pragmatic approach.
Challenge 2
Consider the effect of modularization on the delivery of software. Will it take more or less time to get a tightly coupled monolithic block of software to the required quality compared with a system designed as very loosely coupled modules or microservices? What are the advantages or disadvantages of each approach?
Before I give my answer, I would like to stress a nuance present in this challenge. It is asking for a comparison of tightly coupled monolithic software and very loosely coupled modules or microservices, not monolithic vs microservice architecture in general.
My answer to this depends on the definition of required quality and on the scope of the software. If required quality refers to a fully working, defect-free first version, then the tightly coupled monolithic approach is likely to be delivered in less time: there are fewer ‘moving parts’ to manage and no time needs to be spent identifying the bounded contexts of the system and splitting it up accordingly. However, as we know, software evolves and must be changed over time. If required quality refers to a maintained level of quality over the life cycle of evolving software, the loosely coupled module/microservice approach will take less time. This approach (in theory) allows individual components of functionality to be changed, replaced, removed or added without affecting the rest of the software, allowing the required quality to be delivered faster.
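As a rough illustration of what I mean by ‘without affecting the rest of the software’, here is a minimal Python sketch of a loosely coupled boundary. The PaymentGateway interface and its implementations are hypothetical names of my own, not anything from the book.

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """The only thing the rest of the system needs to know about payments."""
    def charge(self, amount_pence: int) -> bool: ...


class StripeGateway:
    """Would call the real payment provider's API in practice."""
    def charge(self, amount_pence: int) -> bool:
        return True  # placeholder for a real API call


class FakeGateway:
    """Stand-in used in tests, or while the real provider is being replaced."""
    def charge(self, amount_pence: int) -> bool:
        return True


def checkout(gateway: PaymentGateway, amount_pence: int) -> str:
    # The caller depends only on the interface, so an implementation can be
    # changed, replaced, removed or added without touching this code.
    return "paid" if gateway.charge(amount_pence) else "failed"


print(checkout(StripeGateway(), 499))  # production wiring
print(checkout(FakeGateway(), 499))    # test wiring
```

The same idea scales up to module or service boundaries: as long as the contract between components stays stable, either side can be developed, tested and replaced independently.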
If the software has limited scope, the up-front cost of splitting it into loosely coupled microservices may outweigh the potential gain in development speed. On the other hand, for large and complex software, the increase in development speed is likely to outweigh the initial ‘investment’, as individual components can easily be developed and tested separately.
Challenge 3
Can you think of popular software that suffers from feature bloat? That is, software containing far more features than you would ever use, each feature introducing more opportunity for bugs and security vulnerabilities, and making the features you do use harder to find and manage. Are you in danger of falling into this trap yourself?
I think that Steam suffers from some feature bloat: over time it has grown into a store, a social network (community, messaging, etc.), a development platform (Steam Workshop) and a game launcher. I’m not saying that Steam is in any way a bad example of software; I and many others use it to great success. However, it is also not known for being bug-free by any means!
I think every major modern software project is in danger of feature bloat. We live in a culture that promotes the notion of ‘not moving forwards is moving backwards’, which inevitably leads to new features/capabilities being added to software projects. I do not wish to imply that continual improvement is bad, but with software there is a fine line between growing the feature set of a system and bloating it. As engineers, we should be conscious of the amount of ‘stuff’ in a software project and consider whether creating additional software to provide new features is more appropriate (coming back to the Unix philosophy again). Facebook Messenger is an interesting example of this. Facebook introduced Facebook Chat in 2008 as part of the main Facebook application. As the feature set and usage of Chat grew, it was split off into a standalone iOS/Android Facebook Messenger app in 2011, with a separate web application introduced later on. This reduced the amount of bloat in the core Facebook application (though this isn’t to say Facebook isn’t bloated anyway, but that’s another discussion).