Tuesday, November 24, 2020

Network Engineering Continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

1. The choice between a faster processor, larger storage, or both is flexible when the dollar cost is the same. In such cases, the processing strategy can be sequential, streaming, or batched. Once a strategy is in place, however, the total cost of ownership rises significantly when business needs change.


2. From supercomputers to large-scale clusters, the size of compute, storage, and network can vary considerably. The need to own or manage such capability diminishes significantly once it is commoditized and outsourced.


3. Some tasks are high priority, and they are usually far fewer in number than the general class of tasks. If they arrive uncontrolled, the cost can be significant. Most networking products therefore try to control the upstream workload they are designed for. For example, if high-priority tasks can be clearly distinguished from the rest, they can be admitted and served separately, which is advantageous; a sketch of such admission control appears after this list.


4. Scheduling policies vary from scheduler to scheduler. Usually, a simple policy scales much better than a complicated one. For example, if every task holds a share of a pie that represents the scheduler's capacity, it is simpler to expand the pie than to re-adjust the slices dynamically to accommodate new tasks.


5. The weights associated with tasks are set statically and then used to compute the scheduling order. Execution can be measured in quanta of time, and a task that takes more than its expected quantum is called a quantum thief. A scheduler uses tallying to detect a quantum thief and make it yield to other tasks, as the scheduler sketch after this list illustrates.
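
A minimal sketch of the admission-control idea from point 3, assuming a cooperative dispatcher; the PriorityAdmission name, the one-second window, and the rate cap are illustrative choices, not from the original post:

```python
import heapq
import itertools
import time

class PriorityAdmission:
    """Admit high-priority tasks ahead of bulk tasks, but cap their rate
    so an uncontrolled burst cannot impose unbounded cost."""

    def __init__(self, high_per_sec=100):
        self._queue = []                # min-heap of (class, seq, task)
        self._seq = itertools.count()   # tie-breaker: FIFO within a class
        self._high_per_sec = high_per_sec
        self._high_admitted = 0
        self._window_start = time.monotonic()

    def submit(self, task, high_priority=False):
        now = time.monotonic()
        if now - self._window_start >= 1.0:   # start a new one-second window
            self._window_start, self._high_admitted = now, 0
        if high_priority and self._high_admitted < self._high_per_sec:
            self._high_admitted += 1
            cls = 0                           # admitted as high priority
        else:
            cls = 1                           # bulk, or a demoted excess burst
        heapq.heappush(self._queue, (cls, next(self._seq), task))

    def next_task(self):
        return heapq.heappop(self._queue)[2] if self._queue else None
```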
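
And a minimal sketch of static weights with quantum tallying from point 5, assuming each task reports how long it actually ran; the debt-based penalty is one plausible reading of how a scheduler could make a quantum thief yield:

```python
class WeightedScheduler:
    """Static weights translate to time quanta; a tally of overruns marks
    quantum thieves, which are deprioritized until they repay the debt."""

    QUANTUM_MS = 10

    def __init__(self, weights):
        self.weights = dict(weights)                 # task -> static weight
        self.debt_ms = {t: 0 for t in self.weights} # tally of stolen time

    def order(self):
        # Quantum thieves (positive debt) sort after well-behaved tasks;
        # within each group, heavier weights run first.
        return sorted(self.weights,
                      key=lambda t: (self.debt_ms[t] > 0, -self.weights[t]))

    def account(self, task, ran_ms):
        allowed = self.QUANTUM_MS * self.weights[task]
        if ran_ms > allowed:
            self.debt_ms[task] += ran_ms - allowed   # record the theft
        else:
            # Repay the debt by running under quantum in later rounds.
            self.debt_ms[task] = max(0, self.debt_ms[task] - (allowed - ran_ms))

s = WeightedScheduler({"a": 2, "b": 1})
s.account("a", 35)   # ran 35 ms against a 20 ms quantum: "a" is a thief
print(s.order())     # ['b', 'a'] until "a" repays the 15 ms debt
```

Expanding the pie here means adding capacity (more workers or a larger quantum) while the static weights stay untouched; re-slicing would mean recomputing every task's share at runtime.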



Monday, November 23, 2020

Network Engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

1. Public cloud and hybrid cloud storage are discussed on many forums, including this one. The hybrid storage provider focuses on letting the public cloud act as the front end, to harness user traffic, while applying storage and networking best practices to the on-premise data.


2. Data can be pushed or pulled from source to destination. Where a pull is possible, it shifts the transfer workload onto the consuming process, which can then proceed at its own pace; see the pull-based sketch after this list.


3. Lower-level data transfers are favored over higher-level transfers that involve, say, HTTP, because each additional protocol layer adds overhead per request.


4. The smaller the data transfers, the larger their number, which results in chattier and more fault-prone traffic. This concerns very small amounts of data per request, which are better coalesced into batches; see the transfer-sizing sketch after this list.


5. Larger reads and writes are best served as multiple parts, as opposed to long-running requests that suffer frequent restarts; a failed part is retried on its own instead of restarting the whole transfer.


6. Traversing up and down the layers of the stack is an expensive operation and needs to be curtailed.

7. The more hops to cross, the longer it takes for data to arrive. Local data wins hands down. Similarly, shared data on a remote host is less preferable than partitioned data on the local one.


8. The number of times the network is traversed also matters to the overall cost of data. Data is cheapest at rest rather than in transit.
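
A minimal sketch of the pull model from point 2, using Python's standard queue; the bounded queue size and the sentinel are illustrative choices:

```python
import queue
import threading

def process(item):
    pass  # placeholder for the destination-side work

def producer(q: queue.Queue, items):
    # The producer only enqueues; it is never blocked on a slow destination.
    for item in items:
        q.put(item)
    q.put(None)  # sentinel: no more data

def consumer(q: queue.Queue):
    # The consumer pulls at its own pace, absorbing the transfer workload.
    while (item := q.get()) is not None:
        process(item)

q = queue.Queue(maxsize=1024)  # bounded, so backpressure is explicit
threading.Thread(target=producer, args=(q, range(10_000))).start()
consumer(q)
```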
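
A sketch of the transfer-sizing points 4 and 5, assuming a hypothetical send() primitive for the actual wire transfer; the byte thresholds are illustrative, and the per-part retry is what makes multipart cheaper than restarting one long request:

```python
BATCH_BYTES = 64 * 1024        # coalesce requests smaller than this
PART_BYTES = 8 * 1024 * 1024   # split requests larger than this
MAX_RETRIES = 3

def send(payload: bytes):
    """Placeholder for the actual wire transfer of one request."""
    ...

def send_with_retry(part: bytes):
    for attempt in range(MAX_RETRIES):
        try:
            return send(part)
        except OSError:
            if attempt == MAX_RETRIES - 1:
                raise  # only this part failed; the rest are unaffected

def flush(batch: bytearray):
    if batch:
        send_with_retry(bytes(batch))
        batch.clear()

def transfer(payloads):
    batch = bytearray()
    for p in payloads:
        if len(p) >= PART_BYTES:
            flush(batch)  # preserve ordering before the large transfer
            # Large transfer: send in parts so a failure retries one part,
            # not the whole long-running request.
            for i in range(0, len(p), PART_BYTES):
                send_with_retry(p[i:i + PART_BYTES])
        else:
            batch.extend(p)  # small transfer: coalesce to cut chatter
            if len(batch) >= BATCH_BYTES:
                flush(batch)
    flush(batch)
```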

Sunday, November 22, 2020

Network engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

1. Sometimes the trade-offs come not from the business at all but from compliance and regulatory considerations around housing and securing data. The public cloud is great for harnessing traffic to the data stores, but there are constraints when data must remain on-premise.


2. Customers have a genuine problem anticipating growth and planning for capacity. An implementation done right opens up prospects, but implementations don't always follow the design, and the design itself is hard to get right.


3. Similarly, customers cannot predict which technologies will hold up and which won't, in the near term or the long term. They are more concerned with the investments they make and the choices they must face.

4. Traffic, usage, and access patterns are good predictors once the implementation is ready to scale.


5. Topology changes for deploying instances are easier to manage when the instances are part of a cluster or a technology that allows elastic growth. This is one reason the public cloud is popular. The caveat with initial topology design is that it is usually a top-down approach; findings from bottom-up studies also help here.


6. Traffic must be classified and segregated to get more efficiency from the resources. Data in transit is not all the same, yet there is often no quality-of-service classification either. Frequently only the production support folks know the actual data flows; once identified, these flows can be streamlined for high performance and availability, as the sketch below suggests.
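
A minimal sketch of such classification, assuming flows can be tagged by destination port; the class names and the port mapping are illustrative, not a standard:

```python
from enum import Enum

class QosClass(Enum):
    INTERACTIVE = 0   # user-facing, latency-sensitive
    BULK = 1          # backups, replication
    BACKGROUND = 2    # metrics, logs

# Illustrative mapping from destination port to service class.
PORT_CLASS = {443: QosClass.INTERACTIVE, 873: QosClass.BULK, 9090: QosClass.BACKGROUND}

def classify(dst_port: int) -> QosClass:
    return PORT_CLASS.get(dst_port, QosClass.BULK)

def dispatch(packet, queues):
    # Segregate traffic into per-class queues so interactive flows are not
    # stuck behind bulk transfers on the same resource.
    queues[classify(packet["dst_port"])].append(packet)
```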

Saturday, November 21, 2020

Network engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

1. Networking products solve one piece of the puzzle, and customers don't always have boilerplate problems.

2. This calls for integration work between the customer's solution and the product.


3. Customers also prefer the ability to switch products and stacks. They are willing to try new solutions but have become increasingly wary of binding to any one product and its growing encumbrances.


4. Customers have a genuine problem with data being sticky. They cannot keep up with data transfers.

5. Customers want the expedient solution, and they are not willing to pay for a redesign.


6. Customers must evaluate even the cost of data transfer over the network. Priority and severity matter most to them.


7. Customers are concerned with the dollar cost per resource, whether network, compute, or storage. They must retain ownership of data and yet have it spread across geographical regions. This means their trade-offs come from business perspectives rather than technical ones.