Critical thinking in software development, the word ‘should’, and why you shouldn’t listen to Martin Fowler
Before I begin, I’ll just point out that I’m not actually going to argue that you should never listen to Martin Fowler. That title was a trick; I’ve probably been reading too much click-bait.
That being said, consider the following conversation:
Engineer 1: “We should use architecture/design pattern X to build our new project”
Engineer 2: “Should we? Why?”
Engineer 1: “Because architecture/design pattern X is really powerful. It’s the way software should be built.”
Engineer 2: “That doesn’t sound like good reasoning. What are the benefits for us?”
Engineer 1: “It solves problem-we-don’t-have Y.”
Engineer 2: “Do I really need to explain what was wrong with that sentence?”
Engineer 1: “But household name company Z used it to build their system that in no way resembles ours.”
Engineer 2: “Really??”
Engineer 1: “But Martin Fowler said it was good in a blog post once. It is definitely the way things are done these days.”
Continue ad nauseam...
I’ve noticed myself having slightly less exaggerated variants of this conversation throughout my time as a software engineer. The sad truth is that at some point or another, most of us have been both engineer 1 and engineer 2; I know I have.
The problem is that although the illogicality in the discussion above is obvious, these pernicious styles of argument, when combined with the right irrational attachment to an idea, charisma, authority, or rhetorical skill, can actually be quite convincing.
Leading or following someone up the garden path towards a bad design decision is a lot easier than people might generally think. As a result, as engineers we always need to be on our toes when it comes to critical thinking and be aware of the kind of good and bad arguments that might crop up when we need to make decisions.
Below are four of my personal favourite logical fallacies and cognitive biases that can appear in software engineering discussions, with examples based on conversations I have had.
Common logical fallacies and cognitive biases in software design decisions
1. Argument from authority
Using an authority as evidence in your argument when the authority is not really an authority on the facts relevant to the argument. As the audience, allowing an irrelevant authority to add credibility to the claim being made.
Martin Fowler says domain driven design is good on his blog, let’s use that. DHH says TDD is dead. Troy Hunt says don’t take security advice from psychics (actually, I’d probably always listen to this one…).
I don’t mean to say that these authority figures don’t know what they are talking about; they certainly do. It is just that the facts relevant to the argument are the technical and domain details of the project in question, not the abstract discussion of the generic application of solutions offered by the experts. Abstract discussions about best practices are interesting and informative inputs to your problem-solving process, but they are only indirectly relevant. Martin Fowler does not know your technical or business problems.
Example: Let’s use Domain Driven Design because well known expert software developer Martin Fowler advocates it on his blog.
If you are unaware: domain driven design is a software development approach for modelling your business domain in your code and creating a shared language between tech staff and business experts.
Martin Fowler clearly evangelises domain driven techniques on his blog and he sure knows his stuff, so choosing it for your project must be a no-brainer right?
This might be the case, but a good problem solving process will need to consider other factors than the expertise of the person recommending the approach. For example, for domain driven design you would probably want to ask the following:
- The techniques are a solution to the problem of creating reliable, business-rules-heavy software. Does your app fall into this category?
- There will be a learning overhead (your whole team must understand the underlying concepts). Do you have time for this?
- Code overhead: there is much more boilerplate (domain models, factories, repositories etc.) required than in conventional CRUD-style apps. Would adding this additional code be worth the extra time?
If the answers to these questions are “no”, then it may well be the case that adopting this approach will cause more problems than it solves. Citing an authority as an advocate cannot answer these questions for you, and doing so should therefore not be weighted as an important factor in the discussion.
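To make the “code overhead” point concrete, here is a minimal sketch (with hypothetical names like `Order` and `OrderRepository`) contrasting the layers a DDD-style design adds with a bare CRUD-style update. Neither version is “correct”; the point is the extra machinery you take on, which only pays for itself in rules-heavy domains.

```python
# DDD-style sketch vs. plain CRUD update (hypothetical domain, for illustration).
from dataclasses import dataclass


# --- DDD style: the business rule lives inside the domain entity ---
@dataclass
class Order:
    """Domain entity: invariants are enforced here, not in the storage layer."""
    order_id: int
    total: float

    def apply_discount(self, percent: float) -> None:
        # Business rule: discounts over 50% are never allowed.
        if not 0 <= percent <= 50:
            raise ValueError("discount must be between 0 and 50 percent")
        self.total = round(self.total * (1 - percent / 100), 2)


class OrderRepository:
    """Repository boilerplate: mediates between entities and storage."""
    def __init__(self) -> None:
        self._store: dict[int, Order] = {}

    def add(self, order: Order) -> None:
        self._store[order.order_id] = order

    def get(self, order_id: int) -> Order:
        return self._store[order_id]


# --- CRUD style: the same change is a bare field update on a row ---
def crud_apply_discount(row: dict, percent: float) -> dict:
    row["total"] = round(row["total"] * (1 - percent / 100), 2)
    return row


repo = OrderRepository()
repo.add(Order(order_id=1, total=100.0))
repo.get(1).apply_discount(20)
print(repo.get(1).total)                                    # 80.0
print(crud_apply_discount({"total": 100.0}, 20)["total"])   # 80.0
```

If your app has rules like the discount invariant on every entity, the entity/repository layering earns its keep; if it is mostly moving rows in and out of a database, the CRUD version is all you need.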
Example 2: Let’s use micro-services because Netflix, Uber and Soundcloud do.
Many household name companies make heavy use of micro-service architectures, a point that any dogmatic micro-services enthusiast will beat you over the head with at any given opportunity. The implication is that because this pattern has been used at multiple large, successful organisations like Netflix, it follows that micro-services are all sunshine and rainbows and anyone with any sense should jump straight on the bandwagon. This is a dangerous non sequitur.
For example, Netflix happens to have a product that lends itself to the benefits of micro-services while being hit by few of the drawbacks. They are a business where throughput and availability are everything. They know and understand their domain boundaries well, and concerns like eventual consistency and the overhead of managing complex deployments are not a problem. They are also likely to see benefits from teams being able to work on services in isolation. Many of these benefits do not apply to most businesses (especially startups). Netflix is the exception, not the rule.
The traditional monolithic architecture, despite getting a bad name in recent years, has served many businesses well for a long time and does not necessarily prevent you from scaling effectively. There may be no need to reinvent the wheel.
2. No True Scotsman
The No True Scotsman (NTS) fallacy is a logical fallacy that occurs when a debater defines a group such that every group member possesses some quality. For example, it is common to argue that “all members of [my religion] are fundamentally good”, and then to dismiss all bad individuals as “not true [my-religion]-people”.
A common form of this argument in software development is as follows:
It ‘should’ be done this way because it is the ‘correct’ way to do it and any exceptions to this are not true instances of the ‘way’.
You might recognise these:
- “Good code-bases always have 80% test coverage (mine doesn’t, though, because it’s just a prototype)”.
- “All good developers follow TDD (apart from this controller I am writing, as it requires a lot of mocks due to external dependencies)”.
Example: Good code should always be DRY.
“Do not repeat yourself” seems like a perfectly logical principle to apply across the board. What could be the downside to only writing code once rather than twice?
When you actually give it some thought, though, you realise that there are already many exceptions to the “good code should be DRY” rule that you follow every day. For example, take a look at pretty much any implementation of an MVC controller method. You will probably immediately see repetition in calls to validation and view-rendering logic across similar controller methods. Even the convention of keeping one method per route when pages are virtually identical is very anti-DRY and boilerplate-heavy, but we (rightly) don’t feel the need to refactor away this kind of repetition, as we know it would not be useful to do so.
Here, we are likely to invoke our ‘no true Scotsman’ mentality and say that these instances are ‘not true instances of repetition’, because code fragments just ‘happen’ to be the same, and that we need to ‘keep flexibility’ for code changes in the future.
Rather than thinking this way perhaps the need for exceptions is evidence that the original ‘always be DRY’ premise is flawed (or at least misunderstood).
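The controller repetition described above can be sketched as follows (hypothetical handlers, no real web framework; `validate` and `render` are stand-ins for framework calls). The two handlers repeat the same validation and rendering shape, yet forcing them through a shared abstraction would couple two pages that merely happen to look alike today.

```python
# Two near-identical "controller" handlers: deliberate, harmless repetition.

def validate(params: dict, required: list[str]) -> list[str]:
    """Return error messages for any missing required fields."""
    return [f"missing field: {f}" for f in required if f not in params]


def render(template: str, context: dict) -> str:
    """Stand-in for a template-engine call."""
    return f"<{template}>{context}</{template}>"


def show_article(params: dict) -> str:
    errors = validate(params, ["id"])               # repeated in both handlers
    if errors:
        return render("error", {"errors": errors})  # repeated in both handlers
    return render("article", {"id": params["id"]})


def show_author(params: dict) -> str:
    errors = validate(params, ["id"])               # same shape as above
    if errors:
        return render("error", {"errors": errors})
    return render("author", {"id": params["id"]})


print(show_article({"id": 7}))   # <article>{'id': 7}</article>
print(show_author({}))           # error page: id is missing
```

The duplication is structural, not accidental: if the article page later needs different validation from the author page, the two handlers can diverge freely, which is exactly the flexibility a premature DRY refactor would have destroyed.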
3. Wishful thinking (confirmation bias)
When the desire for something to be true is used in place of, or as, evidence for the truth of a claim.
Example: This large code-base we’ve inherited is written in foo-script, we should rewrite it in our preferred language, bar-lang, because that language is better.
A proponent of this idea might cite arguments along the lines of the following:
- bar-lang has static typing, so we will see fewer errors in production.
- Our team is already more familiar with bar-lang.
- I’ve been talking to my friends who also know bar-lang, and they agree that foo-script is well known for being a bad language.
These arguments seem good on their own, but the desire for the claim to be true might overlook much stronger arguments against the idea, such as:
- A rewrite of a large project will require a huge amount of effort and time, most likely with unpredictable timescales.
- The project already works in its current form (even if it is in a way that the proponent of bar-lang would consider objectionable) and a rewrite adds no additional business value. The value created compared to time spent would be atrociously poor.
The reality of web development is that most of the common n-tier stacks are actually pretty mature and more than up to the task of producing maintainable, performant and reliable software. Whether you end up with a clean, easily maintainable code base or a nightmare horror show of spaghetti nonsense therefore has far more to do with code architecture decisions and the skill of the developers involved. Only in very domain-specific cases is the choice of language going to be the primary cause of issues, and the idea that a complete rewrite in a new language will be productive is almost certainly ‘wishful thinking’.
4. Appeal to novelty
Claiming that something that is new or modern is superior to the status quo, based exclusively on its newness.
Otherwise known as ‘shiny things’ syndrome:
“I saw [library, AWS offering, or architecture] on hacker news today; it looks awesome, let’s use that.”
Example: “I’ve been learning Docker and Kubernetes in my spare time. I like them very much, and we should deploy our app using them, because they are the new industry standard way that businesses are deploying their apps.”
There can be no doubt that Docker and Kubernetes are fantastic tools, but the fact that they are new and becoming accepted as an industry standard does not necessarily mean you should adopt them. On its own, the statement above is a weak argument, because once again there are much more important, more heavily weighted considerations:
- Assuming they don’t already have experience, your team is now going to have to learn both Docker and Kubernetes. Is it worth it compared to simpler approaches?
- When you make new hires in the future have you considered that you are either going to have to teach them, or require them to know this already?
- There are certainly much simpler ways of deploying software than using this strategy. Are you sure you want to commit the time and resources to this?
- Are you able to empirically demonstrate that you are likely to see enough benefits from the switch to the new technology to compensate for the costs of the above points?
Again, if the answers to these questions are “no”, perhaps this needs more detailed consideration. Many developers, particularly skilled ones, tend to get bored with what they are currently doing and relish opportunities to stretch their wings and try out new and different things. It is always worth being aware that this can cloud their judgement when making key decisions, and worth remembering that often the simplest solutions are the best.