Software development often requires compromises: deciding which parts must be built rigorously from the start and which can be resolved quickly and cheaply for the time being, knowing that sooner or later they will have to be dealt with properly. The hope is that, of the many problems solved with little effort, only a few will actually require expensive rework later, and none will cause damage to the business activity the software supports.
In computer science, the rework cost associated with a solution is often called technical debt, or code debt: every choice carries both a cost and a debt. The cost is paid immediately, while the debt is passed on to whoever will maintain the system in the future.
Legacy system management is a classic example in which the debt is much greater than the cost, and in which the risk of a system collapse, with loss of assets and the need for disaster recovery, cannot be ruled out.
This reality has become even more evident with the advent of development through CI/CD pipelines. Constant updating and the introduction of increasingly modern concerns (think of security or AI/ML optimizations) often make the comparison between legacy system management and modern system management a merciless one.
It is, of course, possible to bring legacy systems into a current context by evaluating a strategy for each specific subsystem. This can be done at several levels, always leaving part of the debt to the future. Particular care must be taken with solutions that, although old, are perfectly functional: they map precisely onto the business, and a complete rewrite could introduce mismatches that are organizational rather than technical.
An assessment of the legacy code
To reduce technical debt at least in part, the reference paradigm is the move toward microservices and APIs.
Analysing the legacy code and the infrastructure that runs it is the first phase of a modernization process. An assessment by an external auditor is needed here: the company is generally too tied to its past choices to be objective about the future ones.
The software should be divided into functional blocks, together with the communications between them, and the infrastructure components that run each block should be identified. If desired, the overall business process that the software implements can also be documented. Modernization can then be total or partial (in several steps) across code, infrastructure, and process; the sketch below shows one way to record the result of such an assessment.
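One way to keep the result of such an assessment is a simple machine-readable inventory of blocks, their communications, and the infrastructure that runs them. The following is a minimal sketch in Python; every block, component, and strategy name is purely hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class FunctionalBlock:
        """One functional block of the legacy system, as identified during the assessment."""
        name: str
        runs_on: list[str]                                   # infrastructure components that run this block
        talks_to: list[str] = field(default_factory=list)    # blocks it communicates with
        modernization: str = "keep"                          # keep | encapsulate | rewrite

    # Purely illustrative inventory: all names and choices are hypothetical.
    inventory = [
        FunctionalBlock("order-entry", runs_on=["mainframe-batch"], talks_to=["billing"], modernization="encapsulate"),
        FunctionalBlock("billing", runs_on=["legacy-app-server"], talks_to=["reporting"], modernization="rewrite"),
        FunctionalBlock("reporting", runs_on=["on-prem-db"], modernization="keep"),
    ]

    # A simple report of what the modernization plan touches, block by block.
    for block in inventory:
        print(f"{block.name}: {block.modernization} "
              f"(infra: {', '.join(block.runs_on)}; talks to: {', '.join(block.talks_to) or 'none'})")

An inventory of this kind can then drive the choice of total or partial modernization, block by block.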
At this point, it is possible to identify the risks of not addressing the three major problem areas of legacy systems today:
- an obsolete infrastructure
- excessive codebase variability
- the lack of security by design
The infrastructure must be modern
Old infrastructure costs nothing as long as it works, but when it stops working it can be slow or even impossible to replace, bringing the solution’s operation to a halt.
The remedy is to migrate the old infrastructure and have it progressively converge toward the new one. This will certainly require migrating some of the code to runtimes that target the new infrastructure.
The associated costs and time are limited, as a 3-5 year total cost of ownership (TCO) analysis can show, with reduced personnel costs and likely overall benefits; the example below illustrates how such a comparison can be framed.
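As a rough illustration of how such a comparison can be set up, the sketch below contrasts keeping the old infrastructure with migrating it over 3- and 5-year horizons. Every figure is a hypothetical placeholder, not a benchmark.

    def tco(initial: float, yearly_run: float, yearly_personnel: float, years: int) -> float:
        """Simple total cost of ownership: one-off cost plus recurring costs over the period."""
        return initial + years * (yearly_run + yearly_personnel)

    # Hypothetical figures, purely for illustration (arbitrary currency units).
    for years in (3, 5):
        keep_legacy = tco(initial=0, yearly_run=120_000, yearly_personnel=200_000, years=years)
        migrate = tco(initial=250_000, yearly_run=80_000, yearly_personnel=120_000, years=years)
        print(f"{years}-year TCO  keep legacy: {keep_legacy:,.0f}  migrate: {migrate:,.0f}")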
Reduce codebase variability
Depending on the type of legacy codebase, the problems can vary widely in scale. Slowly but steadily, developers experienced in certain languages become fewer and fewer, are always the same people (with a corresponding skill gap), and are increasingly difficult to integrate into modern pipelines. The related technical debt therefore becomes harder and harder to deal with.
The solution is to carry out a code analysis and then perform the corresponding complete rewrite. In some cases, automatic tools can be relied on that directly generate the new code and the related documentation. A minimal analysis sketch follows.
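As a minimal illustration of what such an analysis can produce, the sketch below inventories the functions of a source tree and ranks them by size, a crude proxy for rewrite effort. It uses Python’s ast module purely for convenience; a real legacy codebase (COBOL, PL/I, old Java, and so on) would need its own parser, and the directory path is an assumption.

    import ast
    from pathlib import Path

    def inventory_functions(source_dir: str) -> list[tuple[str, str, int]]:
        """List (file, function, rough size in lines) for every function in a Python source tree."""
        results = []
        for path in Path(source_dir).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    size = (node.end_lineno or node.lineno) - node.lineno + 1
                    results.append((str(path), node.name, size))
        # Largest functions first: the most likely rewrite candidates.
        return sorted(results, key=lambda item: item[2], reverse=True)

    if __name__ == "__main__":
        for file, name, size in inventory_functions("."):
            print(f"{size:4d} lines  {name}  ({file})")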
Implementing security by design
The technical and regulatory complexity of today’s connected world demands code written with security in mind from the outset, according to the principle of security by design. Old systems were developed differently and, although they are often quite solid, they do not meet modern requirements. An attacker, but curiously also the legislator, can put a non-modern subsystem under stress.
Security management today means a codebase and related infrastructure managed according to SecDevOps criteria, without exception.
The ideal solution is always to rewrite the code from scratch, preceded by process analysis and reengineering. Alternatively, the effort can be limited to a code analysis, rewriting the most stressed services as microservices and encapsulating the most intricate parts behind APIs, as in the sketch below. Whatever the final choice, the code must be manageable by modern pipelines and the infrastructure must be easy to maintain.
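As a minimal sketch of the encapsulation option, the following example wraps a hypothetical legacy routine (legacy_credit_check) behind a thin HTTP API that validates input at the boundary, using only the Python standard library. A real implementation would sit behind TLS, authentication, and a proper framework, and would call the actual legacy code instead of a stub.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def legacy_credit_check(customer_id: str) -> dict:
        """Stand-in for a call into the intricate legacy routine (hypothetical name and logic)."""
        return {"customer_id": customer_id, "credit_ok": True}

    class CreditCheckAPI(BaseHTTPRequestHandler):
        """Thin API facade over the legacy routine, validating input before it reaches old code."""

        def do_GET(self):
            prefix = "/credit-check/"
            if not self.path.startswith(prefix):
                self.send_error(404)
                return
            customer_id = self.path[len(prefix):]
            # Security by design at the boundary: reject anything the legacy code never expected.
            if not (customer_id.isalnum() and len(customer_id) <= 16):
                self.send_error(400, "invalid customer id")
                return
            body = json.dumps(legacy_credit_check(customer_id)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Listen locally only; a real deployment would add TLS and authentication in front.
        HTTPServer(("127.0.0.1", 8080), CreditCheckAPI).serve_forever()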

