Tuesday, December 20, 2022

Application Modernization Readiness Assessment

This checklist evaluates an application's dependencies to help with its modernization. It strives to collect all the information about the application needed to give a detailed and precise picture of its readiness to be built and run in the cloud.


1. The drive for the modernization of this application comes from:


Changing business requirements


Technical debt


Pending deadline


Budgetary considerations


None of the above

2. My application is an N-tier web application and has a customer-facing portal.


Yes


No

3. My application modernization journey requires all three: planning, executing, and monitoring.


Yes


No

4. My application is accessible through: (Select all that apply.)


Web Interface


Command line


Scripts


SDK


None of the above

5. I would like my cloud adoption strategy to be:


Retain/Retire


Lift-and-shift


Lift-and-reshape


Replace, drop and shop


Refactor (Rewriting/De-coupling applications)

6. My application has specific requirements from: (Select all that apply.)


Programming Languages


Operating systems


Databases


Services


Application Frameworks

7. My application must maintain the same programming language.


Yes


No

8. My application is sensitive to the flavor and/or version of the operating system.


Yes


No

9. My application requires the database to be the same as before:


Yes


No

10. My application is dependent on other services that are not available in the cloud.


Yes


No

11. My application requires profiling to generate:


a mapping of system components


topology maps


coverage of the technology stack


automations


a baseline


a view/simulation of real-world conditions


full stress testing

12. I am fine with a phased migration, with phases for:


service-by-service migration


improving performance and scalability


Integration and full DevOps support


meeting SLAs required from the application

13. I can point to SLAs for the application.


Yes


No

14. I need CI/CD enhancements for:


visibility into the migration strategy and roadmap


investing in quality controls


display on dashboards


integration with my monitoring solution

15. I need investment in fault detection for:


maintaining availability and performance


leveraging my monitoring investment


improving visibility


reducing false alerts

16. I have specific queries or demands regarding my application's behavior over time.


Yes


No

17. The number of web requests to my application is in the range:


< 100 per hour


< 100 per minute


< 100 per second


100 - 1000 per second


> 1000 per second

18. The size of data stored in the database increases by:


a few hundred bytes per day


a few hundred kilobytes per day


a few hundred megabytes per day


a few hundred gigabytes per day


a few terabytes per day


greater than a few terabytes a day



Monday, December 19, 2022

 

The application of data mining and machine learning techniques to Reverse Engineering.

An earlier article [1] introduced the notion and purpose of reverse engineering. This article focuses on the transition from text to model for source code so that the abstract knowledge discovery model (KDM) can be enhanced.

The premise for doing this is similar to what a compiler does in creating a symbol table and maintaining dependencies. In particular, treating the symbols as nodes and their dependencies as edges presents a rich graph on which relationships can be superimposed and queried for different insights. These insights help with a better representation of the KDM. Specifically, some queries can be based on well-known architectural designs, such as model-view-controller, that leverage both the functionality and the layout of source code. The purpose of this article, however, is to leverage well-known data mining algorithms to glean more insights. Even a basic linear or non-linear ranking of the symbols, with thresholding, can be very useful for representing the architecture.
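
As a hedged illustration of that premise, the following Java sketch builds such a graph with symbols as nodes and dependencies as directed edges, and then ranks symbols by fan-in against a threshold. The class and method names are hypothetical, not part of any KDM tooling.

import java.util.*;
import java.util.stream.Collectors;

// A minimal sketch: symbols are nodes, dependencies are directed edges,
// and a basic ranking by fan-in with a threshold surfaces the symbols
// that many others depend on.
public class SymbolGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Rank symbols by fan-in (number of incoming edges) and keep those
    // at or above the threshold, highest first.
    List<String> rankByFanIn(int threshold) {
        Map<String, Integer> fanIn = new HashMap<>();
        for (Set<String> targets : edges.values())
            for (String t : targets) fanIn.merge(t, 1, Integer::sum);
        return fanIn.entrySet().stream()
                .filter(e -> e.getValue() >= threshold)
                .sorted((a, b) -> b.getValue() - a.getValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}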

We cover just a few of the data mining algorithms to begin with and close with a discussion of machine learning methods, including SoftMax classification, which can make excellent use of co-occurrence data. Finally, we suggest that this does not need to be a one-pass KDM builder and that the use of pipelines and metrics can be helpful for incrementally or continually enhancing the KDM. The symbol and dependency graph is merely the persistence of the information learned, which can be leveraged for analysis and reporting, such as rendering a KDM.

Classification algorithms

This is useful for finding similar groups based on discrete variables.

It is used for true/false binary classification; multiple-label classifications are also supported. There are many techniques, but the data should either have distinct regions on a scatter plot with their own centroids or, when that is hard to tell, be scanned breadth-first for neighbors within a given radius, forming trees, or leaves if they fall short.

 

This is useful for categorizing symbols beyond their nomenclature. The primary use case is to see clusters of symbols that match based on features. By translating the symbols to a vector space and assessing the quality of a cluster with a sum of squared errors, it is easy to analyze a large number of symbols as belonging to specific clusters from a management perspective.
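
A minimal k-means sketch of this follows, assuming each symbol has already been translated to a small feature vector (illustrative features might be fan-in, fan-out, and size; these are assumptions, not prescribed choices). The sum of squared errors reported at the end is the cluster-quality measure mentioned above.

// A minimal k-means sketch over symbol feature vectors; cluster quality
// is assessed with the sum of squared errors (SSE).
public class SymbolClustering {
    static double distSq(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return s;
    }

    static int[] kMeans(double[][] points, int k, int iterations) {
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) centroids[c] = points[c].clone(); // seed with first k points
        int[] assign = new int[points.length];
        for (int iter = 0; iter < iterations; iter++) {
            // assignment step: attach each point to its nearest centroid
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (distSq(points[p], centroids[c]) < distSq(points[p], centroids[best])) best = c;
                assign[p] = best;
            }
            // update step: recompute each centroid as the mean of its cluster
            double[][] sums = new double[k][points[0].length];
            int[] counts = new int[k];
            for (int p = 0; p < points.length; p++) {
                counts[assign[p]]++;
                for (int d = 0; d < points[p].length; d++) sums[assign[p]][d] += points[p][d];
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0)
                    for (int d = 0; d < sums[c].length; d++) centroids[c][d] = sums[c][d] / counts[c];
        }
        double sse = 0; // sum of squared errors: lower means tighter clusters
        for (int p = 0; p < points.length; p++) sse += distSq(points[p], centroids[assign[p]]);
        System.out.println("SSE: " + sse);
        return assign;
    }
}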

Decision tree

This is probably one of the most heavily used and easiest to visualize mining algorithms. The decision tree is both a classification and a regression tree. A function divides the rows into two datasets based on the value of a specific column. The two lists of rows that are returned are such that one set matches the criteria for the split while the other does not. When the attribute to be chosen is clear, this works well.
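
A minimal sketch of that split function, assuming rows of string attributes and an illustrative column index, might look like this:

import java.util.*;

// Divides rows into two datasets based on the value of a specific column:
// the first list matches the split criterion, the second does not.
static List<List<String[]>> divideSet(List<String[]> rows, int column, String value) {
    List<String[]> matching = new ArrayList<>();
    List<String[]> rest = new ArrayList<>();
    for (String[] row : rows) {
        if (value.equals(row[column])) matching.add(row);
        else rest.add(row);
    }
    return Arrays.asList(matching, rest);
}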

A decision tree algorithm uses the attributes of the service symbols to make a prediction, such as whether a set of symbols representing a component can be included or excluded. The ease of visualizing the split at each level helps throw light on the importance of those sets. This information becomes useful for pruning the tree and for drawing it.


Sunday, December 18, 2022

 Writing a book:

The journey for writing and publishing a book has many milestones. This article captures some of the essential steps.

The first step for a good book is to prepare the manuscript. Many authors will vouch for the effort that this takes, and it is seldom a collaboration, unlike the other milestones on the roadmap to realizing a book. It is also an opportunity for authors to find their true voice as they articulate it in the book. Planning for the book writing is an essential step because it consumes many hours at the very least. There are quite a few incentives available to write a book. The famous author Alexander Chee described his love of writing while on the train and his wish that Amtrak would provide residencies for that very purpose. Amtrak established its residency program to that effect in 2014.

The next step for the author is to decide between self-publishing and utilizing a white-glove full service from a publishing house. This affects planning for such things as an ISBN, which must be purchased and can be used with a book only once it has been applied for and paid for. Many publishing companies offer several advantages, such as working with graphic designers to create the book cover.

The beginning and end of a book have several sections and language necessary for proper branding and compliance that can only come from experience. It is better to hire a publisher, at least for the first book.

The choices between publishing houses vary quite a bit. Some can be based on reputation, precedent, or simply affiliation, but there are also negotiations involved that can often make one better than another among equals. The art of negotiation is in full swing when it comes to compensation.

There are shops that help authors self-publish their book and allow them to keep 100% of the royalties, but these take a while.

Editing, book design, and selling are all essential considerations for publishing the book. These include several aspects that publishers can help with, and some might take authors more time if they were to do it themselves. Selling channels and advertising are immensely dependent on how sellers perceive the book, more than on what the author may want to say. Positioning and branding the book for the proper selling channels is just as important as the royalty discussion.

There are many creative aspects in which competitors differentiate themselves from each other in the book publishing industry but following the well-trodden path comes with some predictability.

Lastly, authors must explore the opportunities to read the book for audiobooks and books on tape, because these sell just as well as printed books.

 

Friday, December 16, 2022

Reverse engineering of applications

Security experts, DevOps, and SRE personnel often find themselves in situations where they are given an application that they do not own but must know enough about to do their job. Their concerns overlap in the discipline of reverse engineering. It is not enough to know what an application is through static analysis, whenever that is possible; it is also necessary to know what it does at runtime. Observability of an application does not come easily, even with the sophisticated learning tools that are now so ubiquitous on-premises and in the cloud. System center agents for public clouds, third-party monitoring tools, on-premises support for telemetry, logging, and tracing, and cloud-based x-rays of workloads can help in many cases. But they do not adequately cover standalone application-database pairs that are remnants of an age gone by. While arguably every organization can claim to have only a finite number of these applications, container images have more recently proliferated by mutation to the point where organizations might not even bother to make an inventory of all usages.

There are many runtime analysis tools that are designed to closely monitor the behavior of an application or container image. Companies like Twistlock and Tenable.IO have opened new ways of investigating them for the purposes of vulnerability assessment. Some tools insert instrumentation into the source code, providing dynamic analysis of the application while it is running on either a native or an embedded target platform. Tools also vary by purpose. For example, code coverage tools perform code coverage analysis, memory profilers analyze memory usage and detect memory leaks, performance profiling provides performance and load monitoring, and runtime tracing draws a real-time UML sequence diagram of the application. Tools can be used alone or together. When the source code is run with any of the runtime analysis tools engaged, the resulting instrumented code is executed, and the result is dynamically displayed in the corresponding reports.

Among these, runtime tracing is perhaps the most comprehensive way of observing the real-time dynamic interactions of the source code: it generates trace data, which can then be used to create UML diagrams.
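
As a hedged sketch of what such instrumentation can look like in Java, a JDK dynamic proxy can log method entry and exit to produce trace data from which a sequence diagram could be drawn. The Service interface here is hypothetical, not taken from any of the tools named above.

import java.lang.reflect.Proxy;

// A minimal runtime-tracing sketch: wrap an interface implementation in a
// dynamic proxy that records method entry and exit.
interface Service { String fetch(String id); }

public class Tracing {
    @SuppressWarnings("unchecked")
    static <T> T traced(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (proxy, method, args) -> {
                    System.out.println("-> " + iface.getSimpleName() + "." + method.getName());
                    Object result = method.invoke(target, args);
                    System.out.println("<- " + iface.getSimpleName() + "." + method.getName());
                    return result;
                });
    }

    public static void main(String[] args) {
        Service s = traced(id -> "row-" + id, Service.class);
        s.fetch("42"); // prints the entry/exit trace for this call
    }
}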

Model-driven software discovery evolves existing systems and facilitates the creation of new software systems.

The salient features of model-driven software discovery include:

1. Domain-specific languages (DSLs) that express models at different abstraction levels.

2. DSL notation syntaxes that are collected separately.

3. Model transformations for generating code from models, either directly by model-to-text transformations or indirectly by intermediate model-to-model transformations (a sketch of the model-to-text case follows).
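
To make the third feature concrete, here is a hedged, minimal model-to-text sketch in Java. The Entity and Attribute model elements are illustrative stand-ins for whatever the metamodel actually defines.

import java.util.List;

// A minimal model-to-text transformation: walk a tiny model element and
// emit source text from it.
public class ModelToText {
    record Attribute(String name, String type) {}
    record Entity(String name, List<Attribute> attributes) {}

    static String generate(Entity e) {
        StringBuilder sb = new StringBuilder("public class " + e.name() + " {\n");
        for (Attribute a : e.attributes())
            sb.append("    private ").append(a.type()).append(' ').append(a.name()).append(";\n");
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        Entity customer = new Entity("Customer",
                List.of(new Attribute("id", "long"), new Attribute("name", "String")));
        System.out.print(generate(customer)); // emits a Java class from the model
    }
}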

An abstract syntax is defined by a metamodel that uses a metamodeling language to describe a set of concepts and their relationships. These languages use object-oriented constructs to build metamodels. The relationship between a model and a metamodel can be described by a “conforms-to” relationship. 

KDM helps to represent semantic information about a software system, ranging from source code to higher levels of abstraction. KDM is the language of architecture and provides a common interchange format intended for representing software assets and for tool interoperability. The platform, the user interface, or the data can each have its own KDM, organized as packages. These packages are grouped into four abstract layers to improve modularity and separation of concerns: infrastructure, program elements, runtime resources, and abstractions. SMM is the metamodel that can represent both metrics and measurements; it includes a set of elements to describe the metrics in KDM models and their measurements.
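
As a loose illustration of the SMM idea, and not the OMG SMM API, a metric definition and a measurement binding it to a KDM element could be modeled like this; all names here are hypothetical.

// Illustrative stand-ins for an SMM metric and its measurement against a
// KDM element; real SMM defines these as metamodel elements.
public class SmmSketch {
    record Metric(String name, String unit) {}
    record Measurement(Metric metric, String kdmElement, double value) {}

    public static void main(String[] args) {
        Metric coupling = new Metric("UIStatementCount", "statements");
        Measurement m = new Measurement(coupling, "trigger:WHEN-VALIDATE-ITEM", 7);
        System.out.println(m.metric().name() + " of " + m.kdmElement() + " = " + m.value());
    }
}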

These are some of the ways reverse engineering is evolving, fueled by the requirements called out above.

Thursday, December 15, 2022

Overview of a modernization tool and process in architecture-driven modernization.

This section is explained in the context of the modernization of a database forms application to a Java platform. An important part of the migration could involve PL/SQL triggers in legacy Forms code. In a Forms application, the sets of SQL statements corresponding to triggers are tightly coupled to the User Interface. The cost of the migration project is proportional to the number and complexity of these couplings. The reverse engineering process involves extracting KDM models from the SQL code.  

An extractor that generates the KDM model from SQL code can be automated. A framework that provides domain-specific languages for model extraction is available, and it can be used to create a model that conforms to a target KDM from a program that conforms to a grammar. Dedicated parsers can help with this code-to-model transformation.

 

A major factor that determines the time and effort required for the migration of a trigger is its coupling to the user interface, which includes the number and kind of statements for accessing the user interface. A tool to analyze this coupling helps to estimate the modernization costs. Several metrics can be defined to measure the coupling that influences the effort of migrating triggers. For example, these metrics can be based on the UI statements' count, location, and type, such as whether they read or write. The couplings can be classified as reflective, declarative, and imperative. The extracted KDM models can then be transformed into Software Measurement Metamodel (SMM) models.
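
A hedged sketch of one such metric follows: it counts UI item references of the form :BLOCK.ITEM in trigger text and classifies assignments as writes and other references as reads. The regular expression and the sample trigger are deliberate simplifications, not a PL/SQL grammar or real Forms code.

import java.util.regex.*;

// Counts UI statements in trigger source and classifies them as reads or
// writes, as a rough proxy for UI coupling.
public class TriggerCoupling {
    private static final Pattern UI_REF = Pattern.compile("(:\\w+\\.\\w+)(\\s*:=)?");

    public static void main(String[] args) {
        String trigger = ":emp.salary := :emp.salary * 1.1; IF :emp.dept = 10 THEN NULL; END IF;";
        int reads = 0, writes = 0;
        Matcher m = UI_REF.matcher(trigger);
        while (m.find()) {
            if (m.group(2) != null) writes++; // assignment to a UI item
            else reads++;                     // any other UI item reference
        }
        System.out.println("UI reads: " + reads + ", UI writes: " + writes);
    }
}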

With the popularity of machine learning techniques and SoftMax classification, extracting domain classes according to a syntax tree metamodel and semantic graph information has become more meaningful. The two-step process of parsing to yield an abstract syntax tree metamodel and restructuring to express the abstract Knowledge Discovery Model becomes enhanced with collocation and dependency information. This results in classifications at code organization units that were previously omitted. For example, code organization and call graphs can be used for such learning, as shown in reference 1. The discovery of KDM and SMM can also be broken down into independent learning mechanisms, with dependency complexity being one of them.
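
For reference, a minimal SoftMax sketch in Java is shown below: raw class scores are turned into a probability distribution over candidate labels. The logits in the example are illustrative, not learned values.

// SoftMax: exponentiate each score (shifted by the max for numerical
// stability) and normalize so the outputs sum to 1.
public class SoftMax {
    static double[] softmax(double[] scores) {
        double max = Double.NEGATIVE_INFINITY;
        for (double s : scores) max = Math.max(max, s);
        double sum = 0;
        double[] exps = new double[scores.length];
        for (int i = 0; i < scores.length; i++) { exps[i] = Math.exp(scores[i] - max); sum += exps[i]; }
        for (int i = 0; i < exps.length; i++) exps[i] /= sum;
        return exps;
    }

    public static void main(String[] args) {
        // hypothetical logits for three candidate component labels
        double[] probs = softmax(new double[]{2.0, 1.0, 0.1});
        System.out.println(java.util.Arrays.toString(probs));
    }
}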

Wednesday, December 14, 2022

 

Networking Modernization

Networking and storage are taken for granted, but their modernization is as important to the enterprise as that of applications and databases. When businesses embrace the cloud, they must consider whether their on-premises network will scale up to the cloud traffic. The cloud acts like a massive aggregator of traffic, and with a hybrid cloud, the on-premises network can get overwhelmed because it was not designed for cloud capacity. This section of the book deals with these considerations for multi-cloud adoption and hybrid computing.

Networking modernization is essential to digital transformation. When networks age, they don’t just pose a higher risk as a fault domain; they also increase complexity by sitting at some of the lowest levels of virtualization and compute. When a single network interface card failed on the corporate network associated with a production system, it was easy to diagnose, given the reservations made and the stack that was dedicated to it. In a hybrid world, customers have gone well beyond the traditional application/database landscape to more modular applications with deep divisions and even segregated hardware. Communication is assumed to be a resource as free as storage and one that does not factor in beyond the latency of a single call. With cloud traffic, application usage and its management via a single pane of glass have elevated customers from on-premises to the cloud. The public cloud supports rich monitoring that even spans on-premises systems with the help of agents running on enterprise hosts, but this does not help in determining the root cause of failure when the symptoms become scattered, sparse, and even random or non-deterministic.

Newer networks have become software-defined, and rightfully so, although this adds an abstraction layer over the hardware. Software-defined networking (SDN) is an architectural approach to data center networking in the cloud era, bringing the flexibility and economy of software to datacenter hardware. It helps enterprise network infrastructure meet the needs of application workloads by providing: 1. automated orchestration and agile provisioning, 2. programmatic network management, 3. application-oriented, network-wide visibility, and 4. direct integration with cloud orchestration platforms. SDN is even built into operating systems. When IT wants the ability to deploy applications quickly, SDN and a network controller can be used, and policy can be managed with scripts. Hyper-V and a network controller can be used to create virtual local area network overlays, which do not require the reassignment of IP addresses. Hybrid SDN gateways can be used to assign and manage resources independently.

There is greater security and isolation of workloads with the use of network security groups and distributed firewalls for micro-segmentation. North-south internet traffic and east-west intranet traffic can be handled differently. User-defined routing can be configured with service chains established with third-party appliances such as firewalls, load balancers, or content inspection. Cost is driven down by converging storage and networking on Ethernet and activating Remote Direct Memory Access (RDMA).

Network modernization might seem like an overwhelming challenge by virtue of the number of entities impacted by the effort. It can even be a struggle to get a clear picture of the evolving application environment or to document the changing requirements on the infrastructure and operations. Many organizations that don’t know where to begin can do so by identifying gaps that might hinder SDN deployment, determining automation needs, defining an orchestration strategy, and developing a roadmap.

A strategy for orchestration and automation becomes critical to such implementation plans. Some of these activities of network modernization include enabling self-service functions for development teams, reducing risk through integrated governance and management, preventing vendor lock-ins on hardware-based platforms, saving time by orchestrating and automating integration complexities and boosting overall quality through intelligent and aware operations such as self-healing.

Tuesday, December 13, 2022

 

Summary of the book “Project Management for the Unofficial Project Manager” by Kory Kogon, Suzette Blakemore, and James Wood.

More than ever, employees wear different hats at work. They are routinely expected to coordinate and manage projects even though they might not have formal training. This is what the authors refer to as the role of the unofficial project manager.

The authors make it clear that this book emphasizes leadership in project completion and explain that people are crucial in the formula for success. The book offers practical, real-world insights for effective project management and guides us through the essentials of people and the project management process. This includes the steps to initiate, plan, execute, monitor/control, and close. If we are struggling to keep our projects organized, this book certainly helps.

A project is a temporary endeavor with a start and finish, undertaken to create a unique product, service, or result. Project failures are evidenced by the following facts: only 8 percent of organizations are “high performers” in managing projects; 45 percent of projects are either overdue or canceled altogether; only 45 percent of projects actually meet the goals they are supposed to meet; and for every US$100 invested in projects worldwide, there is a net loss of US$13.50, “lost forever – unrecoverable.”

This book provides hope against such failure in two ways. First, it says that everyone is a project manager. Second, project management is no longer just about managing a process; it is also about leading people, which means tapping into the potential of the people on the team, then engaging with and inspiring them to offer their best to the project.

The most common reasons for failure are cited as one or more of the following: lack of commitment/support, unrealistic timelines, too many competing priorities, unclear outcomes/expectations, unrealistic resources, people pulled away from the project, politics/legislation, lack of a “big picture” for the team, poor planning, lack of leadership, changing standards, and a lack of or mismanaged budget. Failure is expensive, and there are even greater costs than the budget when measured by lost opportunities, dissatisfied customers, loss of innovation, and employee morale. A successful project meets or exceeds expectations, optimizes resources, and builds team confidence and morale for future projects. This definition goes beyond the popular notion of project success as merely meeting deadlines or budget; doing more with less and maximizing the human, technical, and budgetary resources are only part of it. The true formula for winning at projects is therefore expressed in the equation PEOPLE + PROCESS = SUCCESS.

A good project manager is often known as one who values the project members. While some are nervous about leading people, others may be the opposite: great with people but anxious about the process part. In such a case, simple is good. It might even be better to do without the vast machinery that the project management profession uses.

Founded in 1969, the Project Management Institute sets standards for the project management profession. It has 454,000 members in 180 countries and defines five process groups. These are: 1. Initiate, 2. Plan. 3. Execute, 4. Monitor and control and 5. Close.

Some people thrive on the operations side and others thrive on the people side. The modern thinking is that managing the process with excellence is important but being a good leader is essential. Informal authority inspires people to want to play on your team and win. The flip side is also clear. Formal authority comes from a title or a position. Giving people titles doesn’t necessarily make them good leaders.

The authors have worked with hundreds of clients and come up with four foundational behaviors to focus on: 1. demonstrate respect, 2. listen first, 3. clarify expectations, and 4. practice accountability.

The first behavior values respect as its own reward. The more respected team members feel, even when having a tough conversation, the more engaged they will be. Listening first lets the other person talk first; it also avoids impatience or immaturity in decision-making. The key principle here is empathy. Clarifying expectations brings focus to a cacophony of voices; the cause of almost all relationship difficulties is rooted in conflicting or ambiguous expectations around roles and goals. Accountability, on the other hand, is about walking your talk. It also means transparency, because covering up the truth is what hurts in the long run.

Among the five process groups, initiate, monitor and control, and close are progressive; plan and execute are cyclical. None of these steps should be skipped, and every successfully completed project runs through all five process groups: initiating processes authorize the project, planning processes define and refine objectives, executing processes coordinate people and resources to carry out the plan, monitoring/controlling processes ensure that objectives are met, and closing processes formalize acceptance of the project.

When there is pressure, it takes work, discipline and practice to keep one’s head and inspire others to keep theirs.

Each of the intermediary chapters in the book focuses on one of these five process groups.

The 'initiate' process group is about skillset and toolset.

For the skill to identify all stakeholders, the tool of group brainstorming helps.

For a skill to identify key stakeholders, the tool to perform the key stakeholder D.A.N.C.E. is relevant. The acronym stands for Decisions, Authority, Need, Connections, and Energy, where the risks against each of them are called out. For example, key stakeholders make the decisions that control or influence the project budget; have the authority to grant permission to proceed with the project; directly benefit from or are impacted by the project and consequently need to know all about it; are connected to the people, money, or resources required to remove roadblocks or exert influence to ensure project success; and have positive or negative energy that could affect project success.

For a skill to interview key stakeholders, the tool could be the key stakeholder interview or the question funnel.

For a skill to document project scope statements, the tool could be the project scope statement.

In short, frontloading is the basic principle of project success.

The next set of iterative process groups are planning and executing.  A similar breakup for skills and tools for planning can be drawn as follows:

For a skill to perform effective planning, a risk matrix could be helpful.

For a skill to plan a risk management strategy, the tools to tame the risks and the risk management plan help.

For a skill to create a project schedule, these tools help: 1. the mind map, 2. linear lists, 3. the Post-it Note method, and 4. the Gantt chart.

For a skill to develop a communication plan, the choice among tools is hands down the project communication plan.

Similarly, the breakup of skills and tools for the execute process group is as follows:

For a skill to create a cadence of accountability, a team accountability session could be a tool.

For a skill to hold performance conversations, a conversation planner could be helpful.

Above all, the four foundational behaviors must be kept in mind during the execution phase.

Monitoring and control require skills and tools as follows:

For a skill to keep stakeholders informed about the project status, the tool is to create a project status report.

For a skill to manage scope change effectively, the tool is to create a process change request.

Lastly, the closure demands the following skills and tools:

The skills include evaluating the task list, confirming fulfillment of the project scope, completing procurement closure, documenting lessons learned, submitting the final status report to stakeholders and obtaining required signatures, archiving project documents, publishing successes, and celebrating project close with rewards and recognition; the corresponding tool is the closing checklist.

The authors conclude by saying that developing the skills in this book will be worth the effort, and not just for managing projects; it has positive side effects. It complements and augments critical time- and life-management skills. Those skills, applied correctly, will have far-reaching effects in all areas of our lives.

 #codingexercise

Get the count of strings that share a given prefix:

import java.util.Arrays;

// Counts the strings in strs that start with the given prefix.
static int getCountOfStringsWithPrefix(String[] strs, String prefix) {
    return (int) Arrays.stream(strs)
                       .filter(x -> x.startsWith(prefix))
                       .count();
}
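
For example, getCountOfStringsWithPrefix(new String[]{"app", "apple", "bat"}, "ap") returns 2.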