Sunday, June 13, 2021

Networking Techniques Continued ...

Introduction: This is a continuation of the earlier article on networking technologies in the cloud, specifically the Azure public cloud.

 

Description: Some networking techniques drive down costs significantly. For example, Switch Embedded Teaming (SET) can combine two 10 Gbps ports into a single 20 Gbps team, boosting capacity with little or no additional load on the CPU. Such techniques have a long history in the networking industry. Modems that provided point-to-point connectivity allowed bandwidth to be increased by cumulatively adding more modems: multilink capability let all the modems be active at once, while the Bandwidth Allocation Protocol let modems be added to or removed from the bundle one at a time. These techniques allowed IT administrators to extend the service life of existing infrastructure and hardware, so that new equipment could be purchased when the organization was ready instead of when incidents forced the issue. Overall throughput could be improved so that high-priority applications got the network access they needed.

Service levels could be added on top of the default best-effort behavior with Quality-of-Service (QoS) protocols that prioritize traffic to and from critical applications. The IntServ and DiffServ paradigms boosted the adoption of certain techniques and technologies over others. IntServ (Integrated Services) and DiffServ (Differentiated Services) are two models for providing QoS. In the IntServ model, QoS is applied on a per-flow basis, which also addresses business-model and charging concerns; even in mobile phone networks this is evident when certain billing options are desirable but not possible. In the DiffServ model, the emphasis is on scalability, a flexible service model, and simpler signaling. Scalability here means we do not track resources for each of a potentially huge number of flows; the service model means we provide bands of service such as platinum, gold, and silver.
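To make the DiffServ idea concrete, here is a minimal Python sketch of serving traffic in bands such as platinum, gold, and silver with a weighted round-robin scheduler. The band names, weights, and packet labels are illustrative assumptions, not part of any RFC or of the Azure stack; real DiffServ marks packets with DSCP code points and leaves scheduling to the routers.

```python
from collections import deque

class BandScheduler:
    """Toy DiffServ-style scheduler: packets are queued per service band
    and drained in weighted round-robin order, so higher bands get
    proportionally more of the link without starving the lower ones."""

    def __init__(self, weights):
        # weights is a dict like {"platinum": 3, "gold": 2, "silver": 1}
        self.weights = weights
        self.queues = {band: deque() for band in weights}

    def enqueue(self, band, packet):
        self.queues[band].append(packet)

    def drain(self):
        """Repeat passes over the bands; each pass sends up to `weight`
        packets from a band before moving to the next, until empty."""
        sent = []
        while any(self.queues.values()):
            for band, weight in self.weights.items():
                for _ in range(weight):
                    if self.queues[band]:
                        sent.append(self.queues[band].popleft())
        return sent

sched = BandScheduler({"platinum": 3, "gold": 2, "silver": 1})
for i in range(4):
    sched.enqueue("silver", f"s{i}")
    sched.enqueue("gold", f"g{i}")
    sched.enqueue("platinum", f"p{i}")
order = sched.drain()
```

In the first pass the scheduler sends three platinum packets, two gold, and one silver, so the platinum backlog clears first even though all bands arrived together; no band is ever starved outright.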

 

Queuing is another technique that helps improve service levels. Virtual Machine Multi-Queue (VMMQ) on Windows Server improves throughput by spreading a virtual machine's traffic across multiple hardware queues. A built-in software load balancer can distribute incoming requests, with the distribution policies defined through the network controller. Putting this all together, the network controller sits between a gateway and a Hyper-V vSwitch and works with the load balancer to distribute traffic to other Hyper-V vSwitches, which in turn connect several virtual machines. The gateway works with a Hyper-V vSwitch that routes internal as well as external traffic to enterprise sites and Microsoft Azure using the IP Security protocol (IPSec) or the Generic Routing Encapsulation (GRE) protocol. The Hyper-V vSwitches in the hybrid cloud are also able to send and receive Layer 3 traffic via the gateway, which rounds out connectivity for the entire hybrid cloud while leaving room for near-limitless internal expansion.

This mode of connecting the hybrid cloud to the public cloud is here to stay for a while, because public cloud customers have significant investments in their hybrid clouds, and the public cloud cannot absorb their applications, services, devices, and workloads without requiring code changes. Networking is probably the easiest layer at which to bridge the two: new applications can be written directly on the public cloud, and traffic can gradually shift there as investments in the hybrid cloud are scaled back.
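The distribution step described above can be sketched in a few lines of Python. This is a toy model of a policy-driven software load balancer, not the actual Windows Server Software Load Balancer or its network-controller API; the backend names, the policy labels, and the 5-tuple flow key are all illustrative assumptions.

```python
import hashlib
from itertools import cycle

class SoftwareLoadBalancer:
    """Toy policy-driven distributor: round-robin by default, or a
    deterministic 5-tuple hash when the policy asks for flow affinity
    (so packets of one connection always land on the same backend)."""

    def __init__(self, backends, policy="round_robin"):
        self.backends = backends        # e.g. names of Hyper-V vSwitch ports / VMs
        self.policy = policy
        self._rr = cycle(backends)      # round-robin iterator

    def pick(self, flow=None):
        """Choose a backend; `flow` is a (src_ip, src_port, dst_ip,
        dst_port, proto) tuple when hash affinity is wanted."""
        if self.policy == "hash" and flow is not None:
            digest = hashlib.sha256(repr(flow).encode()).digest()
            return self.backends[digest[0] % len(self.backends)]
        return next(self._rr)

lb = SoftwareLoadBalancer(["vm-a", "vm-b", "vm-c"])
spread = [lb.pick() for _ in range(4)]   # cycles vm-a, vm-b, vm-c, vm-a

sticky = SoftwareLoadBalancer(["vm-a", "vm-b"], policy="hash")
flow = ("10.0.0.1", 5000, "10.0.0.2", 80, "tcp")
```

The hash policy is the interesting one for stateful traffic such as the IPSec and GRE tunnels mentioned above: because the backend choice depends only on the flow tuple, every packet of a connection is routed consistently without the balancer keeping per-flow state.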

 

 
