Monday, November 23, 2020

Network Engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

            1. Public and hybrid cloud storage are discussed on many forums, including this one. A hybrid storage provider focuses on letting the public cloud serve as the front end that harnesses traffic from users, while applying storage and networking best practices to the on-premises data. 


            2. Data can be pushed from the source or pulled by the destination. Where pulling is possible, it shifts the pacing work to the consuming process and relieves the source. 
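As a minimal sketch of the pull model, an in-process queue can stand in for the network here: the destination pulls at its own pace, so the source never blocks on, or retries for, a slow consumer. The queue and threads are illustrative stand-ins, not any particular product's transport.

```python
import queue
import threading

def producer(q: queue.Queue, items):
    # The source only enqueues; pacing is not its problem.
    for item in items:
        q.put(item)
    q.put(None)  # sentinel: no more data

def consumer(q: queue.Queue, results):
    # The destination pulls at its own pace, relieving the
    # producer of flow control and retry logic.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, range(5))
t.join()
```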


            3. Lower-level data transfers are favored over higher-level ones, such as those involving HTTP, because each additional protocol layer adds overhead. 


            4. The smaller the individual transfers, the larger their number, which makes the traffic chattier and more fault-prone. This concerns requests that each carry only a very small amount of data. 


            5. Large reads and writes are best served in multiple parts, as opposed to long-running requests that must restart from the beginning on failure.  
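The multipart idea above can be sketched as follows; `split_parts`, the `send` callback, and the flaky sender are hypothetical stand-ins for a real transfer API. The point is that only the failed part is retried, never the whole transfer.

```python
def split_parts(data: bytes, part_size: int):
    # Split a large payload into independently transferable parts.
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def transfer(parts, send, max_retries=3):
    received = []
    for part in parts:
        for _ in range(max_retries):
            try:
                received.append(send(part))
                break  # this part succeeded; move on
            except IOError:
                continue  # retry only this part, not the whole transfer
        else:
            raise IOError("part failed after retries")
    return b"".join(received)

# A sender that fails once with a transient error, then succeeds.
calls = {"n": 0}
def flaky_send(part: bytes) -> bytes:
    calls["n"] += 1
    if calls["n"] == 1:
        raise IOError("transient network error")
    return part

data = bytes(range(256)) * 4          # 1024-byte payload
result = transfer(split_parts(data, 300), flaky_send)
```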


            6. Traversing up and down the layers of the stack is an expensive operation and needs to be curtailed. 

            7. The more hops data must cross, the longer it takes to arrive. Local data wins hands down. Similarly, partitioned data on local storage is preferred over shared data on a remote host. 


            8. The number of times the network is traversed also factors into the overall cost of data. Data is cheapest when it is at rest rather than in transit. 

Sunday, November 22, 2020

Network engineering continued ...

 This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

          1. Sometimes the trade-offs come not from the business at all but from compliance and regulatory considerations around housing and securing data. The public cloud is great for harnessing traffic to the data stores, but there are constraints when data must remain on-premises. 


          2. Customers have a genuine problem anticipating growth and planning for capacity. An implementation done right enables future prospects, but implementations don't always follow the design, and getting the design right is hard in the first place. 


          3. Similarly, customers cannot predict which technologies will hold up in the near and long term and which won't. They are more concerned about the investments they make and the choices they must face. 

          4. Traffic, usage, and access patterns are good indicators for prediction once the implementation is ready to scale. 


          5. Topology changes for deploying instances are easier to manage when the instances are part of a cluster or a technology that allows elastic growth. This is one reason the public cloud is popular. The caveat with initial topology design is that it is usually a top-down approach; findings from bottom-up studies also help here. 


          6. There is a need to classify and segregate traffic in order to get more efficiency from the resources. Data in transit is not all the same, yet often there is no classification in terms of quality of service. Frequently only the production support folks know the data flows; once known, these flows can be streamlined for high performance and availability. 
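A minimal sketch of such classification follows; the class names and port numbers are illustrative assumptions, not a standard quality-of-service scheme.

```python
# Hypothetical classifier: map flows to quality-of-service classes so
# that not all traffic competes equally for the same resources.
QOS_CLASSES = {
    "control": {22, 179},       # e.g. SSH, BGP
    "latency_sensitive": {53},  # e.g. DNS
    "bulk": {80, 443},          # e.g. HTTP(S) data transfer
}

def classify(dst_port: int) -> str:
    # Anything not explicitly classified falls back to best effort.
    for qos_class, ports in QOS_CLASSES.items():
        if dst_port in ports:
            return qos_class
    return "best_effort"
```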

Saturday, November 21, 2020

Network engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

        1. Networking products each solve a piece of the puzzle, and customers don't always have boilerplate problems. 

        2. This calls for integration work between the customer's solution and the product.


        3. Customers also prefer the ability to switch products and stacks. They are willing to try out new solutions but have become increasingly wary of binding to any one product and of the encumbrances that come with it.  


        4. Customers have a genuine problem with data being sticky: once it accumulates in one place, they cannot keep up with the transfers needed to move it.  

        5. Customers want the expedient solution and are not willing to pay for a redesign.


        6. Customers need to evaluate even the cost of transferring data over the network; what matters most to them is the priority and severity of their own workloads. 


        7. Customers are concerned with the cost per resource, whether it is network, compute, or storage. They must retain ownership of their data and yet have it spread across geographical regions. This means their trade-offs come from business perspectives rather than technical ones.

Friday, November 20, 2020

Network Engineering continued ...

 This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

      1. Almost every packet of user data on the network is sandwiched between a header and a footer in some container, and the data segments are read by offset and length. This mechanism is repeated at various layers and becomes even more useful when the data is encrypted. 
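The header/footer mechanism above can be sketched as a tiny framing scheme; the magic value, field widths, and CRC footer here are illustrative assumptions, not any real protocol's layout.

```python
import struct
import zlib

MAGIC = 0xC0DE  # hypothetical marker identifying our frames

def frame(payload: bytes) -> bytes:
    # Header: magic and payload length; footer: CRC32 of the payload.
    header = struct.pack("!HI", MAGIC, len(payload))
    footer = struct.pack("!I", zlib.crc32(payload))
    return header + payload + footer

def unframe(packet: bytes) -> bytes:
    # The payload is recovered by offset and length, exactly as the
    # header describes it; the footer verifies integrity.
    magic, length = struct.unpack_from("!HI", packet, 0)
    assert magic == MAGIC, "not one of our frames"
    offset = struct.calcsize("!HI")
    payload = packet[offset:offset + length]
    (crc,) = struct.unpack_from("!I", packet, offset + length)
    assert crc == zlib.crc32(payload), "corrupted payload"
    return payload
```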


      2. Similarly, data entries are interspersed with routine markers and indicators for packaging and processing purposes. Background jobs frequently stamp what's relevant to them in between data segments so that they can continue their processing progressively. 
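As a sketch of such interspersed markers, a job can stamp a checkpoint after every few entries so a later pass resumes from the last marker instead of the start. The marker byte sequence and log layout are hypothetical.

```python
MARKER = b"\x00CKPT\x00"  # hypothetical checkpoint stamp

def append_with_checkpoints(log: bytearray, entries, every: int = 2):
    # A background job stamps a marker between data segments so its
    # processing can continue progressively.
    for i, entry in enumerate(entries, 1):
        log += entry + b"\n"
        if i % every == 0:
            log += MARKER

def resume_offset(log: bytes) -> int:
    # Restart just after the last marker rather than from offset 0.
    pos = log.rfind(MARKER)
    return 0 if pos < 0 else pos + len(MARKER)

log = bytearray()
append_with_checkpoints(log, [b"a", b"b", b"c"])
```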


      3. The data formats and reserved content of certain networking applications are proprietary and at times internal to the product; they may not be readable in raw form. A command-line tool to dump and parse the contents offline can prove very helpful. 
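The core of such an offline dump tool can be sketched in a few lines: a classic hex dump showing offset, raw bytes, and printable ASCII, so opaque content becomes inspectable without the product running.

```python
def hexdump(data: bytes, width: int = 16) -> str:
    # Render opaque/proprietary content as offset, hex bytes, and
    # printable ASCII per line, for offline inspection.
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)
```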


      4. The argument above also holds for message passing between shared libraries inside a networking product. While logs help capture the conversations, their entries may end up truncated. An offline tool to fully record, replay, and interpret large messages would be helpful for troubleshooting. 
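A minimal sketch of such record-and-replay follows: length-prefixing each message means replay never truncates it, unlike fixed-width log lines. The framing is an assumption for illustration, not any product's wire format.

```python
import io
import struct

def record(stream, message: bytes):
    # Length-prefix each message so replay can recover it in full,
    # however large it is.
    stream.write(struct.pack("!I", len(message)))
    stream.write(message)

def replay(stream):
    # Yield every recorded message, untruncated, in order.
    while True:
        prefix = stream.read(4)
        if not prefix:
            return
        (length,) = struct.unpack("!I", prefix)
        yield stream.read(length)

buf = io.BytesIO()
record(buf, b"hello")
record(buf, b"x" * 5000)  # far larger than a typical log line
buf.seek(0)
messages = list(replay(buf))
```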


      5. Most newer networking products have embraced APIs in one form or another. Their use for protocols with external agents, internal diagnostics, and manageability is valuable as an online tool and merits the same appreciation as scripts and offline tools, if not more.