When is Redundancy Okay?
A common theme that comes up in architecture discussions is the elimination of redundancy. Simply stated, it’s about finding systems that are doing the same thing and getting rid of all of them except one. While it’s easily argued that there are cost savings just waiting to be realized, does this mean that organizations should always strive to eliminate all redundancy from their technology architectures? I think such a principle is too restrictive. If you agree, then what should the principle be?
The principle that I have used is that if I’m going to have two or more solutions that appear to provide the same set of capabilities, then I must have clear and unambiguous policies on when to use each of those solutions. Those policies should be objective, not subjective. So, a policy that says “Use Windows Server and .NET if your developer’s preferred language is C#, and use Linux and Java if their preferred language is Java” isn’t really a policy at all, because it hinges on personal preference rather than on objective characteristics of the project.
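To make that concrete, here is a minimal sketch of what an objective policy could look like if you wrote it down as a decision rule. The platforms and criteria below are purely illustrative assumptions, not policies from any real organization; the point is that every branch tests a verifiable fact about the project rather than someone’s preference.

```python
# Hypothetical sketch: an objective platform-selection policy expressed as code.
# The criteria (packaged-application vendor, target OS) are illustrative
# assumptions, not real policies.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProjectProfile:
    packaged_app_vendor: Optional[str]  # e.g. "SAP", "Microsoft", or None for custom builds
    target_os: str                      # e.g. "Windows", "Linux"


def select_platform(profile: ProjectProfile) -> str:
    """Return the mandated platform based on objective project attributes.

    Note that nothing here asks what language the developers *prefer*;
    every input is a verifiable fact about the project.
    """
    if profile.packaged_app_vendor == "SAP":
        return "SAP NetWeaver"
    if profile.target_os == "Windows":
        return ".NET on Windows Server"
    return "Java on Linux"


# Example: a custom-built application targeting Linux lands on the Java stack.
print(select_platform(ProjectProfile(packaged_app_vendor=None, target_os="Linux")))
```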
On the operational model/organizational structure side of things, there may be a temptation to align technology choices with the organizational structure. While this may work for development, the engineering and operations teams are frequently centralized, supporting all of the different development organizations. If each development group is free to choose its own technology, this adds cost to the engineering and operations teams, because they need expertise in every platform involved. If the engineering and operations functions are not centralized, then basing technology decisions on the org chart may not be as problematic. If you do this, however, keep in mind that organizations change. An internal reorganization or a broader merger or acquisition could completely change the foundation on which the policies were defined.
On the development side of things, the common examples where this comes into play are environments that involve Microsoft or SAP. Both of these vendors, while certainly capable of operating in a heterogeneous environment, provide significant value when you stay within their ecosystems. In the consumer space, Apple fits into this category as well. The model works best when it’s all Microsoft, all SAP, or all Apple from top to bottom. There are certainly other examples; these are just the ones people associate with this more strongly than others.

Using SAP as an example, they provide both middleware (NetWeaver) and applications that leverage that middleware. Is it possible to have SAP applications run on non-SAP middleware? Certainly. Is there significant value-add if you use SAP’s middleware? Very likely, yes. If your entire infrastructure is SAP, there’s no decision to be made. If not, you now have to decide whether you want both SAP middleware and your other middleware, or not. Likewise, if you’ve gone through a merger and have both Microsoft middleware and Java middleware, you’re faced with the same decision. The SAP scenario is a bit more complicated because of the applications piece. If we were only talking about custom development, the more likely choice is to go all Java, all C#, or all -insert your language of choice-, along with the appropriate middleware; any argument about the value-add of one over the other is effectively a wash. When we’re dealing with out-of-the-box applications, it’s a different scenario. Deploying an SAP application that automatically leverages SAP middleware must be compared against deploying that same application and then manually configuring it to use the non-SAP middleware. In effect, I create additional work by not using the SAP middleware, and that work chips away at the cost reductions I may have gained by going with only a single source of middleware.
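As a rough, back-of-the-envelope illustration of that trade-off, the decision comes down to weighing the savings from a single source of middleware against the extra integration work created by running the SAP applications off their native middleware. The figures below are invented placeholders, purely to show the shape of the comparison.

```python
# Hypothetical back-of-the-envelope comparison -- all figures are invented
# placeholders, only meant to show the shape of the trade-off described above.

annual_savings_single_middleware = 200_000  # retiring the second middleware stack
extra_integration_cost_per_app   = 60_000   # manual config of an SAP app on non-SAP middleware
sap_apps_deployed_per_year       = 4

net_benefit = annual_savings_single_middleware - (
    extra_integration_cost_per_app * sap_apps_deployed_per_year
)

# With these made-up numbers, the "eliminate redundancy" option actually loses
# money, because the integration rework outweighs the consolidation savings.
print(f"Net annual benefit of consolidating on non-SAP middleware: {net_benefit:,}")
```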
So, the gist of this post is that a broad principle that says “Eliminate all redundancy” may not be well thought out. Rather, strive to reduce redundancy where it makes sense, and where it doesn’t, make sure that you have clear and unambiguous policies that tell project teams how to choose among the options. Make sure you consider all use cases, such as when a solution spans domains. Your policies may say “use X in domain A and use Y in domain B,” but you also need to give direction on how X and Y should work together when a solution requires communication across domains A and B. If you don’t, projects will either choose what they want (subjective, bad) or come back to you for direction anyway.
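One way to keep such a policy unambiguous, including the cross-domain case, is to write it down in a form that project teams can apply mechanically. The sketch below uses hypothetical domain names and platform choices purely to illustrate the structure: every single-domain case maps to one answer, and the spanning case has an explicit rule of its own rather than being left to interpretation.

```python
# Hypothetical sketch of a domain-based policy, including an explicit rule for
# solutions that span domains. Domain and platform names are illustrative only.

PLATFORM_BY_DOMAIN = {
    "finance": "SAP NetWeaver",
    "web":     "Java middleware",
}

# The cross-domain rule is stated up front, so project teams never have to guess.
CROSS_DOMAIN_RULE = (
    "Each domain keeps its own platform; integration happens over "
    "agreed service interfaces rather than by picking one side's middleware."
)


def platform_for(domains: set) -> str:
    """Resolve the platform (or integration rule) for the domains a solution touches."""
    if len(domains) == 1:
        return PLATFORM_BY_DOMAIN[next(iter(domains))]
    return CROSS_DOMAIN_RULE


print(platform_for({"finance"}))          # single-domain case
print(platform_for({"finance", "web"}))   # cross-domain case gets explicit direction
```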
We decided to accept some redundancy because, while eliminating it would lower the cost of a single project, looked at from the perspective of the complete project portfolio it would ultimately result in higher costs and a worse cumulative time to market.
I must say that for many IT people, that was hard to swallow.
When the business strategy is growth through M&A and having the ability to quickly sell off or spin off units, redundancy might not only be okay, it might be a required approach.
Todd,
There are a couple of key issues here that you didn’t mention, both completely relevant to the decision to allow redundancy.
1. Redundancy of data – costs skyrocket when there’s redundancy of data. If you have redundancy in your application portfolio but a single “gold” source from which all of these applications obtain their data, then the relative impact is minimal. Moreover, I would suggest that the redundancy was introduced as an outgrowth of usability.
2. Redundant application infrastructure – if you’re maintaining two distinct application infrastructures to achieve the same goal, then this is something that should be targeted for consolidation. In your scenario, I would relate this to having both SAP and Oracle. This situation tends to happen when one suite has better support for a particular function than the other, and the belief is that a COTS approach is less costly than custom development, without looking at the TCO of supporting both SAP and Oracle simultaneously.