Due to the high number of stakeholder conflicts, the project suffered from a severe case of design by committee. This case was particularly heinous: requirements for the most important features were also the most likely to contain contradictions, because different actors would pull harder in their own preferred directions. Every once in a while a batch of requirements would come in that made the developers think: “this will be a debugging nightmare in three months.” Naturally, the developers who had been on the project the longest developed a defensive coding strategy: they would insert additional business logic into the gaps between the business logic of the official requirements to keep the product from failing. Because these safeguards were officially classified as platform-specific implementation details, they were undocumented except for code comments.
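To make this concrete, here is a purely hypothetical sketch (in Python for readability, although the real product was an embedded system; the function, the scenario, and the clamping rule are all invented) of what such an undocumented safeguard wedged between official requirements can look like:

```python
# Purely hypothetical illustration; the function and the clamp rule are invented.
def apply_discount(price: float, discount_pct: float) -> float:
    """Official requirement: apply the marketing-supplied discount percentage."""
    # Unofficial safeguard, documented only in this comment: clamp the input,
    # because marketing once exported "150" instead of "15" and prices went
    # negative. No requirement specifies this behaviour.
    discount_pct = max(0.0, min(discount_pct, 100.0))
    return price * (1 - discount_pct / 100.0)
```

The clamp quietly papers over bad upstream data, and nobody outside the code comment knows it exists.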
The high number of stakeholder conflicts led to a low level of interdisciplinary trust. Nobody (and this really means nobody: not the team, not the external service providers, not the operating department, not the testers) trusted anyone else to do a proper job. But programmers happen to be the bottleneck of the software production pipeline, and their distrust can be very disruptive. Let me take you a bit deeper into the topic to illustrate what I mean. If a programmer experiences a consistent stream of bad requirements, any behaviour can start to look like undesired behaviour, subject only to the instincts of the implementer. Maybe the marketing people messed up some arbitrary data input once and a particular programmer had a really bad time because of it? Or maybe the business partner liaison insisted on too many additional business rules to dance around some legal and contractual obligations while still maintaining a competitive edge, three of those rules became contradictory in a subtle, hard-to-catch corner case, and five people had to mob program to figure it out? Did that last sentence give you a headache? That’s what it feels like.
One developer noticed that a combination of two features would force the app into a blocking loop under some conditions, so he introduced a very long timeout to protect the app from this state. That is a reasonable instinct: the product was an embedded system, and embedded systems are very unforgiving when it comes to bad interrupt management. However, the two features were designed for exactly this outcome. Some applications of the product were very sensitive to fraud, and the legal-side stakeholders had actively sought the ability to block the app on potential markers of fraud. In the end, this minor piece of gold plating, which would normally pass as an implementation detail, broke a regression test. That triggered a group email including at least one very important person, who then wondered: “Why is this particular feature broken today, many weeks after we actually shipped it? Maybe a rotation of people on the project is in order?”
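A hypothetical reconstruction of the incident (the class, names, and timeout value are invented, and the timeout is shrunk so the demo runs quickly): the defensive timeout silently releases a lockout that the requirements intended to hold, which is exactly the kind of divergence a regression test catches.

```python
import time

# Purely hypothetical reconstruction: names and the timeout value are invented.
UNOFFICIAL_TIMEOUT_S = 0.05  # the developer's "very long" timeout, shrunk for demo


class App:
    def __init__(self):
        self.locked_since = None  # set when the fraud lockout engages

    def flag_fraud(self):
        # The two features were *designed* to drive the app into this blocked state.
        self.locked_since = time.monotonic()

    def is_blocked(self):
        if self.locked_since is None:
            return False
        # Unofficial safeguard: silently release the "stuck" state after a timeout.
        # This defeats the fraud lockout the stakeholders explicitly asked for.
        if time.monotonic() - self.locked_since > UNOFFICIAL_TIMEOUT_S:
            self.locked_since = None
        return self.locked_since is not None


app = App()
app.flag_fraud()
was_blocked = app.is_blocked()    # right after the fraud marker fires
time.sleep(0.06)
blocked_later = app.is_blocked()  # a regression test expects the lockout to hold
```

The safeguard and the requirement are individually defensible; they only collide because neither side knows the other exists.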
The human mechanism behind all this is simple. Competent programmers are good at picking up patterns; they form a mental list of recurrent failure modes of the upstream teams. As programmers work on dysfunctional projects like the ones described above, the lists get longer and the programmers grow more cynical over time. At some point, a programmer learns that the very forces that introduce defects into the requirements also corrupt the feedback loops for correcting them. Depending on their level of cynicism, programmers will sooner or later begin to proactively and silently "fix" the perceived problems by adding unspecified code. Of course, this is no real solution, because the programmer is simply guessing. It is also quite possible that a programmer will misidentify an accepted behaviour as problematic and introduce a bug by trying to fix a defect that does not exist.
The way I have explained it does not sound that bad, does it? Programmers implement incorrect fail-safes for some corner cases; corner cases are infrequent by definition. How bad can the impact be? Isn’t programmer cynicism what keeps products running in the field? Well, what if the ad hoc fix strategy accidentally proves successful over a long period, and the programmers grow ever bolder, more cynical, and more zealous in their quest to protect the product from the stakeholders? I have seen a couple of such cases in actual commercial, very expensive projects. The products eventually devolved into heaps of hacks and became unmaintainable. The one developer I mentioned above also curated a list of unofficial feature flags designed to manage a set of requirements that get turned on and off again every six months.
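What such an unofficial feature flag might look like, purely as an invented illustration: both code paths stay alive behind an undocumented switch, so the next reversal of the requirement is a one-line change instead of a rewrite.

```python
# Purely hypothetical sketch of "unofficial feature flags"; names are invented.
# Each flag guards a requirement that historically gets reversed every few
# months, so both code paths are kept alive behind an undocumented switch.
UNOFFICIAL_FLAGS = {
    "legacy_rounding": True,  # reverted twice already; comment is the only doc
}


def round_price(cents: int) -> int:
    if UNOFFICIAL_FLAGS["legacy_rounding"]:
        # Old behaviour stakeholders keep asking for: truncate to the dime.
        return (cents // 10) * 10
    # "New" behaviour: round to the nearest dime.
    return round(cents, -1)


price = round_price(1997)
```

Flipping the flag is trivial; explaining to an auditor why the flag exists is not.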
So what to do? The one-line answer is: build up trust. But as I mentioned, the absence of trust in this class of problems is actually justified. Entertaining ice-breaker activities and trust falls won't help you here. There has to be some sort of structural reform, and programmers must be able to observe that requirements are being properly managed.
The smallest structural reform possible is to assign one person to act as a product owner as defined in Scrum. The Scrum PO role is specifically designed to abstract the social and political aspects of the requirements process away from developers. (In summary: a single person solicits opinions from all stakeholders, distills them into a list of priorities, and is empowered to make decisions for the app. Developers only need to talk to one person and can hopefully get coherent answers from them.) You don’t need all the bells and whistles of Scrum to have a PO. A PO simply needs the institutional power to say no to important people without getting fired.
My personal solution would be to get rid of toxicity in the workplace. But that is a totally different can of worms.