It’s a well-known rule of thumb in software development that 80% of a system’s features can be implemented in 20% of the time, with the remaining 20% of features consuming the bulk of the development effort. In reality I believe it’s more of a sliding scale, with effort increasing almost exponentially as the most difficult and unique features are tackled. The problem is that it’s usually those unique features that differentiate one client’s processes from another’s, and automating those unique processes is where the client gets real value from the software.
So the question becomes: where does one draw the line? It’s hard to explain, but after listening to requirements and developing software for a while, one tends to acquire a sixth sense. Call it a complexity detector, maybe. A software developer’s brain naturally tries to sort things into boxes, with rules and processes connecting them. A sudden increase in the number of boxes, or in the complexity of the rules, is a sure-fire sign that you’re entering the steep part of the complexity curve.
In many cases you can change the parameters of a box, or perhaps slacken a rule, to reduce the complexity. But sometimes the most effective approach is simply to ask: “Hey, what happens if we just don’t allow that to happen? What does it break? How often would we actually hit that constraint in reality?” Strangely enough, about 80% of the time this approach works wonders for routing around 20% of the complexity. 🙂