Very often, IT professionals have to pick up where many others have left off. Every system started somewhere, but very few of us actually experience the ground floor or understand the mentality of the time when a system was first conceived and created, especially in large organizations. What began as a novel idea or approach may now be a hodge-podge monolithic system of bolted-on (or loosely tied together) features held together by the technical equivalent of duct tape and first-aid bandages. This happens everywhere, all the time: from the smallest mom-and-pop shop to the largest federal government agency. When dealing with technology, it seems to be human nature to avoid starting over, but that avoidance often comes at the cost of endless complexity later on.
Sometimes it’s best to just rip the bandage off straight away, unfurl the reams of tape binding all the little pieces together, and start anew. This doesn’t mean, however, that you lose your data or the precious knowledge built up over years or decades. What it does mean is that you find a way to port that information to a newer, faster, and simpler solution, either in bulk or as a phased migration. This time, the new solution needs to be built with modularity in mind, in anticipation of being replaced (or wholly upgraded) one day in the not-too-distant future.
Modularity in IT systems is the key to success (and future cost savings), and with the proliferation of web services (how computers “talk” to each other over the internet) there is no need for proprietary protocols and needlessly complex interfaces. All that should remain constant is your data; the format (how it is stored) and the logic (how it is processed) can and should change to keep pace with modern advancements. I’m not saying to do this for every passing technological fancy, but at least once every 7 to 10 years a formerly shiny “novel” technology becomes so outdated that the cost of maintaining it skyrockets, as does the inherent risk of supporting it. This is because your vendor has probably long since moved on to something more modern and no longer provides adequate patching or security fixes (or does so only at an extreme, untenable cost).
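The idea that the data outlives the logic can be sketched in a few lines. This is a hypothetical illustration, not a real system: the record format, field names, and the two interchangeable "logic modules" below are all invented for the example. The point is that as long as the data contract stays stable, the processing behind it can be replaced piece by piece.

```python
import json

# Hypothetical stable data contract: a customer record serialized as JSON.
# The field names here are invented for illustration.
CUSTOMER_RECORD = json.dumps({"id": 42, "name": "Acme Co", "balance": 120.0})

def legacy_discount(record_json: str) -> float:
    """Original logic module: flat 5% discount on the balance."""
    record = json.loads(record_json)
    return round(record["balance"] * 0.05, 2)

def modern_discount(record_json: str) -> float:
    """Replacement logic module: tiered rate, but the same interface
    (JSON string in, float out), so it can be swapped in without
    touching the data or the callers."""
    record = json.loads(record_json)
    rate = 0.10 if record["balance"] > 100 else 0.05
    return round(record["balance"] * rate, 2)

# Either module can serve the same data; the contract never changed.
for module in (legacy_discount, modern_discount):
    print(module(CUSTOMER_RECORD))
```

In a real deployment the "contract" would more likely be a versioned web-service API rather than a function signature, but the principle is the same: upgrades swap out the logic behind the interface while the data, and everything that depends on it, stays put.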
So, all that being said, we must plan and budget with replacement in mind from the start, no matter how large or small the technical infrastructure may be. The simplest way to do so is to build IT systems of limited complexity but high modularity, as previously stated. This makes replacement comparatively easy, because loosely coupled technology components can be upgraded one by one over a period of years, without having to throw thousands (even millions) at a monolithic system upgrade all at once. The mega-upgrades of yesteryear have proven to be a recipe for failure. Building with the entire lifecycle in mind, in a modular, standards-based manner that focuses on the data and not only the logic, is what adds value rather than complexity, and thus saves tremendous costs over the lifetime of the system.