Thursday, December 1, 2022

 

Application Modernization Field Guide

Part 7 discussed modernization via incremental approaches and tooling.

This section summarizes the field guide for general application modernization.

Application modernization is about reverse engineering, restructuring and forward engineering. It is also an opportunity to optimize the user experience. Customers are vital to business, and applications have become the gateway to more impactful and rewarding experiences for internal stakeholders and customers. Modernization is not just one way but sometimes the only way for applications to embrace agility, fuel growth and remain competitive.

The points listed here are all elements of a robust hybrid cloud strategy and are essential for a full modernization experience. They can be used to accelerate digital transformation by building new capabilities and delivering them quickly. Cloud-native architectures and containerization are priorities. Delivery must be accelerated with a culture of automation and transformation, and deployments must be friendly to hybrid clouds.

One of the most critical aspects of application modernization is the application readiness assessment. A cloud-native microservices approach can bring the scalability and flexibility inherent to the hybrid cloud, but it relies on an evaluation of the existing application. It also brings the opportunity to tailor the application to the business needs.

Build once, deploy on any cloud begins with assessing the applications. Some can undergo lift-and-shift while others will require refactoring. Even if applications are deployed with few changes, preparing them for containerization is essential; containers bring scalability, openness and portability. Automating deployments via a CI/CD pipeline is next, and DevOps pipelines are very welcome here. Applications must be easy to run and manage if customers are to truly embrace them.

Accelerators and tools can certainly help, but recognizing the disciplines in which they will be used is just as important. For example, innovation drives refactoring, which can deliver a cloud-native application. Agile delivery supports replatforming, which, when deployed through modern DevOps pipelines and run on newer runtimes, can deliver a cloud-ready application. Cost reduction is another distinct area, where repackaging can save costs when a traditional application is delivered. Cloud migration, which requires VMs in the cloud, when combined with migration accelerators and common operations, can help deliver complex traditional applications in the new world.

There is no single formula that holds; the approach is unique to the business goals and needs. The modernization goals of agile delivery, transform-and-innovate, cost reduction, replacement with SaaS, and cloud migration can be planned for by analyzing for insights and applying one or more modernization patterns, including pivoting from monolithic strategies, adding new capabilities as microservices, refactoring the monolith to microservices, exposing APIs, and migrating the monolith to a cloud runtime. With these, applications can be deployed to both a public cloud and a private cloud.

A trusted foundation helps. Infrastructure platforms like Kubernetes and cloud technologies like AWS developer tools provide a consistent platform to leverage for this purpose.

 

Wednesday, November 30, 2022

Towards architecture driven modernization

 

Architecture driven application modernization involves meta-modeling and model transformations that can help optimize system evaluation costs by automating the modernization process. This is done in three phases: reverse engineering, restructuring and forward engineering. Reverse engineering technologies analyze a legacy software system, identify its components and their interconnections, and create a representation of it at a higher level of abstraction based on the extracted information. Some requirements for modernization tools can be called out here: a tool must allow extracting domain classes according to a Concrete Syntax Tree meta-model along with semantic and graphical information, then analyze the extracted information to lift it to a higher level of abstraction as a knowledge-discovery model.

A software modernization approach becomes a necessity for creating new business value from legacy applications. Modernization tools are required to extract a model from source-code text conforming to a grammar by manipulating the concrete syntax tree of the source. For example, there is a tool that converts Java Swing applications to the Android platform using two Object Management Group standards: the Abstract Syntax Tree (AST) for representing data extracted from the Java Swing code in the reverse engineering phase, and the Knowledge Discovery Metamodel (KDM) as a platform-independent model. Some tools go further and propose a Rich Internet Application graphical user interface. The three phases articulated by this tool can be separated into stages: the reverse engineering phase, which uses the Eclipse JDT API to parse the Java Swing code and fill in an AST and a graphical user interface model; the restructuring phase, a model transformation that generates an abstract KDM model; and the forward phase, which includes the elaboration of the target model and its graphical user interface.

The overall process can be described as the following transitions:

Legacy system –parsing-> AST metamodel –restructuring algorithm-> abstract knowledge model (KDM) –forward engineering-> GUI metamodel.
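As a minimal sketch, the three phases can be modeled as a pipeline of pure functions. Everything below is illustrative: the toy "widget" source format, the dictionary-shaped models, and the function names are assumptions, not the API of any real ADM tool.

```python
# Minimal sketch of the three ADM phases as a pipeline of pure functions.
# The "widget" source format and model shapes are invented for illustration.

def parse(legacy_source: str) -> dict:
    """Reverse engineering: extract a crude AST-like model from source text."""
    widgets = [line.split()[1] for line in legacy_source.splitlines()
               if line.startswith("widget ")]
    return {"kind": "ast", "widgets": widgets}

def restructure(ast_model: dict) -> dict:
    """Restructuring: lift the AST model to a platform-independent knowledge model."""
    return {"kind": "kdm",
            "elements": [{"name": w, "role": "ui-element"}
                         for w in ast_model["widgets"]]}

def forward_engineer(kdm_model: dict) -> dict:
    """Forward engineering: project the knowledge model onto a target GUI metamodel."""
    return {"kind": "gui", "views": [e["name"] for e in kdm_model["elements"]]}

legacy = "widget LoginButton\nwidget UserTable\nprocedural code"
gui = forward_engineer(restructure(parse(legacy)))
```

Each stage consumes only the previous stage's model, which is what makes the phases amenable to independent automation and tooling.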

The reverse engineering phase is dedicated to the extraction and representation of information. It defines the first phase of reengineering in the Architecture Driven Modernization process. It is about parsing and representing the extracted information in the form of a model. Parsing can focus on the structural aspects of header and source files, and then there is the presentation layer, which determines the layout of functionality such as widgets.

The restructuring phase aims at deriving an enriched, technology-independent conceptual specification of the legacy system in a knowledge model (KDM) from the information stored in the models generated in the previous phase. KDM is an OMG standard and can involve up to four layers: the Infrastructure layer, the Program Elements layer, the Resource layer and the Abstractions layer. Each layer is dedicated to a particular application viewpoint.

Forward engineering is the process of moving from high-level abstractions, by means of transformational techniques, to automatically obtain a representation on a new platform, such as microservices, or as constructs in a programming language, such as interfaces and classes. Even the user interface can go through forward engineering into a Rich Internet Application model with a new representation describing the organization and positioning of widgets.

Automation is key to developing a tool that enables these transitions via reverse engineering, restructuring and forward engineering.

 

 

Tuesday, November 29, 2022

Application modernization continued

 

This section of the article discusses a case study on the incremental code-migration strategy of a large monolithic codebase used in a supply system. The code migration strategy considers a set of factors that includes scaffolding code, balancing iterations, and grouping related functionality.

Incremental migration only works when it is progressive. Care must be taken to ensure that progress is measured by means of key indicators such as tests, the percentage of code migrated, and signoffs. Correspondingly, the backlog of code in the legacy system that remains to be migrated must decrease.
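As a hedged sketch of such indicators, assuming migrated lines of code and an iteration-by-iteration backlog count as the measures (both are illustrative choices, not prescribed metrics):

```python
def migration_progress(migrated_loc: int, total_loc: int) -> float:
    """Percentage of code migrated, one possible key indicator."""
    return 100.0 * migrated_loc / total_loc

def backlog_is_shrinking(backlog_history: list[int]) -> bool:
    """The legacy backlog should never grow across iterations."""
    return all(later <= earlier
               for earlier, later in zip(backlog_history, backlog_history[1:]))

progress = migration_progress(40_000, 160_000)          # 25.0 percent migrated
healthy = backlog_is_shrinking([120, 90, 75, 75, 60])   # backlog trending down
```

Plateaus are tolerated here (75 followed by 75); only an increase in the backlog signals that the migration has stopped being progressive.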

Since modernized components are being deployed prior to the completion of the entire system, it is necessary to combine elements from the legacy system with the modernized components to maintain the existing functionality during the development period. Adapters and other wrapping techniques may be needed to provide a communication mechanism between the legacy system and the modernized systems, when dependencies exist.
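One common wrapping technique is the Adapter pattern. The sketch below is illustrative, assuming a hypothetical legacy inventory component with a pipe-delimited record format; the class and method names are invented:

```python
class LegacyInventory:
    """Pre-existing component with an old-style interface (illustrative)."""
    def fetch_stock_record(self, sku: str) -> str:
        # Legacy pipe-delimited record: "<sku>|<quantity>"
        return f"{sku}|42"

class InventoryAdapter:
    """Wraps the legacy component so modernized code sees the new interface.

    The adapter is scaffolding: it must be built, tested and maintained
    during the migration, and can be deleted once the legacy side is gone."""
    def __init__(self, legacy: LegacyInventory):
        self._legacy = legacy

    def stock_level(self, sku: str) -> int:
        record = self._legacy.fetch_stock_record(sku)
        return int(record.split("|")[1])

adapter = InventoryAdapter(LegacyInventory())
```

Modernized components depend only on `InventoryAdapter`, keeping the legacy dependency behind one seam.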

There is no downtime in the incremental modernization approach; this kind of modernization effort tries to keep the system fully operational at all times while reducing the amount of rework and technical risk. One way to overcome challenges in this regard is to plan the efforts involved in the modernization. A modernization plan must also include the order in which the functionality is going to be modernized.

Another way to overcome challenges is to build and use adapters, bridges and other scaffolding code. This represents an added expense, as the code must be designed, developed, tested and maintained during the development period, but it eventually reduces the overall development and deployment costs.

Supporting an aggressive and yet predictable schedule also helps in this regard. The componentization strategy should seek to minimize the time required to develop and deploy the modernized system.

This does not necessarily trade off against quality: both the interim and final stages must be tested, and the gates for release and progression through revisions only help the overall predictability and timeline of the new system.

Risk occurs in different forms, and some risk is acceptable if it is managed and mitigated properly. Due to the overall size and investment required to complete a system migration, it is important that the overall risk be kept low. Setting clear expectations around the system, including its performance, helps to mitigate these risks.

 

Monday, November 28, 2022

Application modernization for massively parallel applications

  Part 6 of this article on Application Modernization covered the migration process. This section focuses on specialty applications.

Every organization must determine its own roadmap to application modernization. Fortunately, patterns and best practices continue to help and provide guidance. This section describes the application modernization for a representative case study for those applications that do not conform to cookie cutter web applications.

Consider a specialty application involving massively parallel, compute-intensive workloads that produce predictions. The initial approach is to treat the model as a black box and work around its dependencies. But the modernization effort need not remain constrained by the technology stack those dependencies represent. Instead, this is an opportunity to refine the algorithm and describe it with a class and an interface that lend themselves to isolation and testing, with the added benefit of testability beyond what was available before. The algorithm can also be implemented with design patterns such as the Bridge pattern, so that the abstraction and the implementation can vary independently, or the Strategy pattern, which facilitates a family of interchangeable algorithms.
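A minimal Strategy-pattern sketch, with invented predictor names standing in for the real prediction algorithms:

```python
from abc import ABC, abstractmethod

class PredictionStrategy(ABC):
    """Interface that isolates the algorithm and makes it testable on its own."""
    @abstractmethod
    def predict(self, values: list[float]) -> float: ...

class MeanPredictor(PredictionStrategy):
    """Illustrative algorithm: predict the mean of the observed values."""
    def predict(self, values: list[float]) -> float:
        return sum(values) / len(values)

class LastValuePredictor(PredictionStrategy):
    """Illustrative algorithm: naive forecast repeating the last value."""
    def predict(self, values: list[float]) -> float:
        return values[-1]

class Forecaster:
    """Context object: strategies are interchangeable without touching callers."""
    def __init__(self, strategy: PredictionStrategy):
        self.strategy = strategy

    def run(self, values: list[float]) -> float:
        return self.strategy.predict(values)
```

Swapping `MeanPredictor` for `LastValuePredictor` changes the algorithm under comparison while every caller of `Forecaster.run` stays untouched, which is what enables the standalone fine-tuning described above.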

Microservices developed in an agile manner with Continuous Integration and Continuous Deployment pipeline provide an unprecedented opportunity to compare algorithms and fine tune them in a standalone manner where the investments for infrastructure and data preparation need to be considered only once.

Algorithms for massively parallel systems often involve some variation of a batched map-reduce summation form or a continuous, one-record-at-a-time streaming form. In either case, the stateless form of the microservice demonstrates superior scalability and reduced execution time compared with conventional forms of software applications. The leap from microservices to serverless computing can be taken for lightweight processing, or where the model has already been trained so that it can be hosted with few resources.
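A hedged sketch of the batched map-reduce summation form, assuming a hypothetical record shape with an "amount" field:

```python
from functools import reduce

def mapper(record: dict) -> float:
    """Stateless map step: extract the value to aggregate from one record."""
    return record["amount"]

def reducer(acc: float, value: float) -> float:
    """Stateless reduce step: summation."""
    return acc + value

def handle_batch(records: list[dict]) -> float:
    """Process one batch end to end. Because no state survives between calls,
    any number of replicas can handle disjoint batches in parallel, and their
    partial sums can themselves be combined with the same reducer."""
    return reduce(reducer, map(mapper, records), 0.0)
```

Statelessness is the property doing the work here: scaling out is just running more copies of `handle_batch`.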

Parallel computing works on immense volumes of data, and the considerations for data modernization continue to apply independently of the application modernization.

Sunday, November 27, 2022

Part 6: Application Migration process

There are at least two frequently encountered patterns in the migration process itself. In some cases, the migration towards a microservices architecture is organized in small increments rather than one big overall migration project; the migration is then implemented as an iterative and incremental process, sometimes referred to as phased adoption. This has been the practice even for migration towards Service Oriented Architecture. In other cases, the migration has a predefined starting point but not necessarily an endpoint defined upfront.

Agility is a very relevant aspect when moving towards a microservices architecture. New functionalities are often added during the migration, which clearly shows that the preexisting system was hindering development and improvement. New functionalities are added as microservices, and existing functionalities are reimplemented as microservices. The main difficulty is in getting the infrastructure ready for adding microservices; domain-driven design practices can certainly help here.

Not all the existing functionality is migrated, and the same holds for data. Leaving data in the legacy store does not align with the "hide the internal implementation detail" principle of microservices, nor with the typical MSA characteristic of decentralized data management. If the data is not migrated, it may hinder the evolution of independent services, and both service and data scalability are hindered as well. If scalability is not a concern, the data migration can be avoided altogether.

The main challenges in architecture transformation are (i) the high level of coupling, (ii) the difficulty of identifying the boundaries of services, and (iii) system decomposition. There could be more improvement and visibility in this area with the use of architecture recovery tools, so that the services are well defined at the architectural level.

Some good examples of microservices have consistently shown a pattern of following the “model around business concepts”.

The general rule of thumb inferred from various microservices migrations continues to be: first, build and share reusable technical competence and knowledge, which includes kickstarting an MSA and reusing solutions; second, check business-IT alignment, which is a key concern during the migration; and third, monitor the development effort and migrate when it grows too much, since there is a high correlation between migration to microservices and increasingly prohibitive effort in implementing new functionality in the monolith.


Saturday, November 26, 2022

Part 5: Application Modernization and the migration towards Microservices architecture

 The path towards a microservice-based architecture is anything but straightforward in many companies. There are plenty of challenges to address from both technical and organizational perspectives. The performed activities and the challenges faced during the migration process are both included in this section. 

The migration to microservices is sometimes referred to as the “horseshoe model” comprising three steps: reverse engineering, architectural transformations, and forward engineering. The system before the migration is the pre-existing system. The system after the migration is the new system. The transitions between the pre-existing system and the new system can be described via pre-existing architecture and microservices architecture. 

The reverse engineering step comprises the analysis, by means of code analysis tools or existing documentation, that identifies the legacy elements which are candidates for transformation to services. The transformation step involves the restructuring of the pre-existing architecture into a microservice-based one, such as reshaping the design elements, restructuring the architecture, and altering business models and business strategies. Finally, in the forward engineering step, the design of the new system is finalized.

Many companies say they are in the early stages of the migration process because the number and size of legacy elements in their software portfolio continue to be a challenge to work through. That said, these companies also deploy anywhere from a handful to hundreds of microservices while the migration is still in progress. Some migrations require several months, even a couple of years. Management is usually supportive of migrations, and the business-IT alignment of technical solutions with business strategies is even more overwhelmingly supportive.

Microservices are implemented as small services by small teams, suiting Amazon's definition of a two-pizza team. Migration activities begin with an understanding of both the low-level and the high-level sources of information. The source code and test suites comprise the low level. The high level comprises textual documents, architectural documents, data models or schemas, and box-and-lines diagrams. Relevant knowledge about the system also resides with people, in some extreme cases as tribal knowledge. Less common but useful sources of information include UML diagrams, contracts with customers, architecture recovery tools for information extraction, and performance data. Rare, but also found, are cases where the pre-existing system is considered so bad that its owners do not look at the source code.

Such an understanding can also be used towards determining whether it is better to implement new functionalities in the pre-existing system or in the new system. This could also help with improving documentation, or for understanding what to keep or what to discard in the new system. 

Friday, November 25, 2022

 

Part 3 discussed microservices. This one focuses on maintainability, performance, and security. The maintainability of microservices is somewhat different from that of conventional software. When conventional software is finished, it is handed over to the maintenance team; this model is not favored for microservices. Instead, a common practice is for the owning team to continue owning a microservice for its lifecycle, an idea inspired by Amazon's "you build it, you run it" philosophy. Developers working daily with their software and communicating with their customers create a feedback loop for the improvement of the microservice.

Microservices suffer a performance weakness in that their communication happens over a network. Microservices often send requests to one another, and performance depends on these external request-responses. A microservice with well-defined bounded contexts will experience a smaller performance hit. Issues related to microservice connectivity can be mitigated in two ways: making less frequent, more batched calls, and converting the calls to be asynchronous. Asynchronous calls can be issued in parallel, and the overall latency is then that of the slowest call.
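The asynchronous fan-out can be sketched with asyncio; the service names and delays below are invented stand-ins for real network calls:

```python
import asyncio

async def call_service(name: str, delay: float) -> str:
    """Stand-in for a network request to another microservice."""
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def fan_out() -> list[str]:
    # Issued concurrently, so total latency tracks the slowest call (~0.03s
    # here), not the sum of all three (~0.06s).
    return await asyncio.gather(
        call_service("inventory", 0.01),
        call_service("pricing", 0.02),
        call_service("shipping", 0.03),
    )

results = asyncio.run(fan_out())
```

The same three calls made sequentially would pay each delay in full, which is exactly the performance hit the asynchronous conversion avoids.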

Microservices have the same security vulnerabilities as any other distributed software. They can always be targeted by a denial-of-service attack, so endpoint protection, rate limits and retries should be included with the microservices. Requests and responses can be encrypted so that the data is never in the clear. If "east-west" security cannot be guaranteed, at least the edge-facing microservices must be protected with a firewall, a proxy, a load balancer or some combination of these. East-west security refers to traffic between services inside the network perimeter, as opposed to north-south traffic that crosses the edge. Another significant security concern is that breaking a monolithic application into many microservices can increase the attack surface significantly. It is best to perform threat modeling of each microservice independently, for example with STRIDE, an acronym for the following threats:

Spoofing identity – a user can impersonate another user.

Tampering with data – a user can access resources or modify the contents of security artifacts.

Repudiation – a user can perform an illegal action that the microservice cannot trace or prove.

Information disclosure – a guest user, say, can access resources as if the guest were the owner.

Denial of service – a crucial component in the operations of the microservice is overwhelmed by requests so that others experience an outage.

Elevation of privilege – a user gains access to components within the trust boundary, compromising the system.
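As one illustrative form of endpoint protection, a simple token-bucket rate limiter; the capacity and refill rate are arbitrary example parameters, not recommended values:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: one way to blunt denial-of-service pressure
    at a microservice endpoint. Each request spends a token; tokens refill
    at a steady rate up to a fixed capacity."""
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With no refill, a burst drains the bucket: the first 3 requests pass,
# the rest are rejected.
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
decisions = [bucket.allow() for _ in range(5)]
```

In practice such a limiter sits in the edge proxy or API gateway rather than in each service, but the mechanism is the same.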

Migration to microservices comes with three challenges: multitenancy, statefulness and data consistency. The best way to address them involves removing statefulness from migrated legacy code, implementing multitenancy, and paying increased attention to data consistency.