Wednesday, September 23, 2020

Network engineering continued ...

 This is a continuation of the article at http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

    1. Deduplication – As data ages, it is accessed less and less often. It can be packed and stored in a format that reduces space. Networking makes efficient use of bits, and if content repeats across packets, the data can be viewed as segments whose delineation makes redundancy easy to detect. Redundant segments then simply need not be stored at all, which yields a more manageable form of accumulated raw data. Deduplication lightens the load on the network. 
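The idea above can be sketched as content-addressed, fixed-size segment deduplication. This is a minimal illustration, not a production design; the segment size and helper names are assumptions.

```python
import hashlib

def dedupe(data: bytes, segment_size: int = 4):
    """Split data into fixed-size segments and store each unique
    segment only once; the original stream is kept as a recipe of hashes."""
    store = {}   # hash -> segment bytes (each unique segment stored once)
    recipe = []  # ordered hashes needed to rebuild the original data
    for i in range(0, len(data), segment_size):
        seg = data[i:i + segment_size]
        h = hashlib.sha256(seg).hexdigest()
        store.setdefault(h, seg)
        recipe.append(h)
    return store, recipe

def rebuild(store, recipe):
    return b"".join(store[h] for h in recipe)

store, recipe = dedupe(b"ABCDABCDABCDXYZ!")
# 16 bytes of input, but only 2 unique segments need to be stored
assert len(store) == 2
assert rebuild(store, recipe) == b"ABCDABCDABCDXYZ!"
```

Real systems typically use variable-size, content-defined segment boundaries so that a small insertion does not shift every subsequent segment.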


    2. Encryption – Encryption is probably the only technique that truly protects data against unwanted or undesirable access. The scope of encryption may be limited to sensitive fields if the rest of the raw data can be tolerated unencrypted. 


    3. Data flow – Data flows into stores, and stores grow in size. Businesses and applications that generate data often find that it becomes sticky once it accumulates. Consequently, a lot of attention is paid to early estimation of size and of the treatment the data will need. Determining the flows helps determine the network. 


    4. Protocols – Nothing facilitates communication between peers, or between master and slave, better than a protocol. Even a description of the payload plus the generic operations create, update, list, and delete is sufficient to handle network-relevant operations at all levels.  
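To make the point concrete, here is a minimal sketch of a protocol defined by nothing more than a payload description and the four generic verbs. The request/response field names are illustrative assumptions, not part of any standard.

```python
# Hypothetical in-memory service: a payload plus four generic verbs
# is enough to define the whole wire protocol.
def handle(state: dict, request: dict) -> dict:
    op, key = request["op"], request.get("key")
    if op == "create":
        state[key] = request["payload"]
        return {"status": "ok"}
    if op == "update":
        if key not in state:
            return {"status": "not_found"}
        state[key] = request["payload"]
        return {"status": "ok"}
    if op == "list":
        return {"status": "ok", "keys": sorted(state)}
    if op == "delete":
        state.pop(key, None)
        return {"status": "ok"}
    return {"status": "bad_request"}

db = {}
handle(db, {"op": "create", "key": "a", "payload": 1})
assert handle(db, {"op": "list"})["keys"] == ["a"]
```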


    5. Layering – Finally, network solutions have taught us that appliances can be stacked, services can be hierarchical, and data may be tiered. A problem solved in one domain with a particular solution may be equally applicable to a similar problem in a different domain. This means we can use layers to compose the overall solution. 

Tuesday, September 22, 2020

Network engineering continued...

 This is a continuation of the article at http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

  1. Protection against loss – Data at rest may get corrupted. To make sure the data does not change, we keep additional information; this is called erasure coding. With additional information about the data, we can not only validate the existing data but may even be able to recreate the original data, tolerating a certain amount of loss. How we store the data and the erasure code also determines the level of redundancy we can use. If the data is in transit, it can be made immutable and uninterpretable with encryption. 
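The simplest instance of erasure coding is single-parity XOR: store one extra block so that any one lost data block can be recreated from the survivors. This is a toy sketch; production systems use stronger codes such as Reed-Solomon to tolerate multiple losses.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(blocks):
    """Append one parity block: the XOR of all data blocks."""
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_blocks(parity, blk)
    return blocks + [parity]

def recover(blocks_with_parity, lost_index):
    """Rebuild the one lost block by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(blocks_with_parity) if i != lost_index]
    rebuilt = survivors[0]
    for blk in survivors[1:]:
        rebuilt = xor_blocks(rebuilt, blk)
    return rebuilt

data = [b"netw", b"orki", b"ng!!"]
stored = add_parity(data)
assert recover(stored, 1) == b"orki"  # lost block recreated from the rest
```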


  2. Hot, warm, cold – Data differs in treatment based on access. Hot data is actively read and written; warm and cold indicate progressive inactivity. Each of these labels allows different leeway in the treatment of the data and in the cost of the network flow. 


  3. The organizational unit of data – Networking is always layered, owing to the separation of concerns within each layer and its communication with a peer at the same level across a hybrid network. 


  4. Seal your packet – Every packet has a header and a payload start and length. Even if the data is chunked, each packet has to be well-formed so that any tool or application can validate the packet from its representation.  
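A sealed, self-validating packet can be sketched with a fixed header carrying a declared payload length. The 8-byte header layout here (version, flags, length) is an illustrative assumption, not a real wire format.

```python
import struct

# Hypothetical 8-byte header: 2-byte version, 2-byte flags,
# 4-byte payload length, all big-endian, followed by the payload.
HEADER = struct.Struct("!HHI")

def seal(payload: bytes, version: int = 1, flags: int = 0) -> bytes:
    return HEADER.pack(version, flags, len(payload)) + payload

def validate(packet: bytes) -> bytes:
    """Any tool can check well-formedness from the representation alone."""
    if len(packet) < HEADER.size:
        raise ValueError("truncated header")
    version, flags, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:]
    if len(payload) != length:
        raise ValueError("length mismatch: packet is not well-formed")
    return payload

pkt = seal(b"hello")
assert validate(pkt) == b"hello"
```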


  5. Versions and policy – As with most libraries, packet headers can be versioned, and versions can be managed with policies. Headers may be static, but policies can be dynamic. When a software-defined network is viewed as a series of revisions, users can go back in time and track them. 


Monday, September 21, 2020

Network Engineering (continued) ...

 This is a continuation of the article at http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html


  1. WebSocket – facilitates duplex communication; after the initial HTTP handshake, the connection is independent of HTTP. Both the client and the server can be a producer as well as a consumer, and either side can push events. 
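The duplex channel rides on a small frame format defined in RFC 6455. Below is a minimal sketch of encoding and decoding a single, unfragmented, masked client text frame with a payload under 126 bytes; the fixed mask key is an illustrative assumption (real clients use a random mask per frame).

```python
def decode_frame(frame: bytes) -> str:
    fin = bool(frame[0] & 0x80)
    opcode = frame[0] & 0x0F          # 0x1 = text frame
    masked = bool(frame[1] & 0x80)    # client-to-server frames are masked
    length = frame[1] & 0x7F
    assert fin and opcode == 0x1 and masked and length < 126
    mask = frame[2:6]
    payload = frame[6:6 + length]
    return bytes(b ^ mask[i % 4] for i, b in enumerate(payload)).decode()

def encode_client_frame(text: str, mask: bytes = b"\x01\x02\x03\x04") -> bytes:
    payload = text.encode()
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    # 0x81 = FIN bit + text opcode; 0x80 = mask bit + 7-bit length
    return bytes([0x81, 0x80 | len(payload)]) + mask + masked

assert decode_frame(encode_client_frame("ping")) == "ping"
```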


  2. Address – Universal addressing without exhaustion is possible with IPv6 connectivity. This is independent of the existing IPv4 connectivity that powers the internet as we know it. 
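The scale difference is easy to demonstrate with Python's stdlib `ipaddress` module: a single IPv6 /64 subnet holds more addresses than the entire IPv4 internet.

```python
import ipaddress

v4 = ipaddress.ip_network("192.168.0.0/24")   # 256 addresses
v6 = ipaddress.ip_network("2001:db8::/64")    # 2**64 addresses in ONE subnet

assert v4.num_addresses == 256
assert v6.num_addresses == 2**64
assert ipaddress.ip_address("2001:db8::1") in v6
```

(`2001:db8::/32` is the prefix reserved for documentation examples.)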


  3. Binding – Bindings can be of three types, as in WCF: TCP binding, HTTP binding, and netMsmq binding; each describes a different way for an endpoint to be set up. 


  4. Contract – A contract is a descriptor for the service, just like the address and binding, and gives the client information on how to connect to the endpoint of the service. Contracts can support stateful protocols, but they are verbose, static, and brittle, and they became less popular in the face of growing competition from stateless designs that use a small set of predetermined, well-accepted verbs. 


  5. Stateful and stateless design – In a stateless design, each request is individually authenticated, authorized, audited, and optionally encrypted. Resource usage is cleaned up after each request-response exchange. The well-established protocols foster a community of developers, tools, and ecosystems. 
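Per-request authentication in a stateless design can be sketched with an HMAC signature carried on every request, so the server keeps no session between calls. The shared key and status strings are illustrative assumptions.

```python
import hmac
import hashlib

SECRET = b"shared-secret"  # hypothetical key, provisioned out of band

def sign(body: bytes) -> str:
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def handle(body: bytes, signature: str) -> str:
    # Authenticate this request entirely on its own; no state
    # survives the request-response exchange.
    if not hmac.compare_digest(sign(body), signature):
        return "401 Unauthorized"
    return "200 OK"

body = b'{"op": "list"}'
assert handle(body, sign(body)) == "200 OK"
assert handle(body, "bad-signature") == "401 Unauthorized"
```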

Sunday, September 20, 2020

Best practice from networking


Introduction: Networking is one of the three pillars of any commercial software; the other two are compute and storage. The three are included directly as products to implement solutions, as components to build products, as perspectives on the implementation details of a feature within a product, and so on. Every algorithm that is implemented pays attention to these three perspectives in order to be efficient and correct. We cannot think of distributed or parallel algorithms without the network, of efficiency without storage, or of convergence without compute. Therefore, these disciplines bring certain best practices from the industry. 

  

We list a few in this article from the networking perspective: 

  1. Not a singleton – Most network vendors know that networking is about data communications. Data cannot be lost or corrupted. Therefore, network industry vendors go to great lengths to keep data safe in transit by not allowing a single point of failure, such as the failure of a single hop. If the data is written to the wire, it is eventually relayed to the recipient.   
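The "written to the wire, eventually delivered" guarantee can be sketched as stop-and-wait retransmission over a lossy link. The loss rate, seed, and helper names are illustrative assumptions.

```python
import random

def lossy_send(packet, inbox, loss_rate=0.5, rng=random.Random(42)):
    """Simulate one hop that drops packets; return True when acked."""
    if rng.random() >= loss_rate:     # the packet survived the hop
        inbox.append(packet)
        return True                   # ack received
    return False                      # no ack: caller must retransmit

def reliable_send(packet, inbox, max_tries=100):
    """Retransmit until the link acknowledges delivery."""
    for _ in range(max_tries):
        if lossy_send(packet, inbox):
            return True
    return False

inbox = []
assert reliable_send(b"data", inbox)
assert inbox == [b"data"]
```

Real transports (e.g. TCP) add sequence numbers, timeouts, and sliding windows on top of this basic retransmit-until-acked loop.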

  2. Chunked data – Packets form the core unit of transmission in any network. If a frame is too long, it is more likely to suffer a transmission failure and require a retry of the whole frame. If the data is chunked instead, a fault affects only one chunk, and only that chunk needs to be re-sent. 
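Chunking can be sketched as splitting a payload into fixed-size, sequence-numbered pieces and reassembling them in order on arrival. The chunk size is an illustrative assumption.

```python
CHUNK_SIZE = 4  # bytes per chunk; real MTUs are on the order of 1500

def chunk(payload: bytes):
    """Split payload into (sequence_number, data) pairs."""
    return [(seq, payload[i:i + CHUNK_SIZE])
            for seq, i in enumerate(range(0, len(payload), CHUNK_SIZE))]

def reassemble(chunks):
    # Chunks may arrive out of order; sequence numbers restore it.
    return b"".join(data for _, data in sorted(chunks))

chunks = chunk(b"hello, networking")
chunks.reverse()                      # simulate out-of-order arrival
assert reassemble(chunks) == b"hello, networking"
```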

  3. Global connectivity – The public cloud has taught us that it is a massive sponge for global traffic, allowing data to be consolidated in the datacenters behind the cloud. This makes networking popular for application connectivity and universal reach. 

  4. Mobile IP – The ability to appear as if working off the office computer, with the same address, while roaming across different networks gives unparalleled mobility that only networking makes possible. 

  5. Tunneling – The ability to wrap an existing packet with another header in the same IP protocol lets packets safely cross a public network while the endpoints on either end remain part of a secure network. The virtual private network protocols help with this.
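Encapsulation in the IP-in-IP style can be sketched as carrying the inner packet, addressed with private addresses, as the payload of an outer packet addressed between the public tunnel endpoints. The dictionary representation and field names are illustrative assumptions, not a wire format.

```python
def encapsulate(inner: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap the inner (private) packet inside an outer (public) packet."""
    return {"src": tunnel_src, "dst": tunnel_dst, "payload": inner}

def decapsulate(outer: dict) -> dict:
    """At the far tunnel endpoint, unwrap and forward the inner packet."""
    return outer["payload"]

inner = {"src": "10.0.0.5", "dst": "10.0.1.9", "payload": b"private data"}
outer = encapsulate(inner, "203.0.113.1", "198.51.100.7")

# The public network only ever sees the outer addresses.
assert outer["src"] == "203.0.113.1"
assert decapsulate(outer) == inner
```

A VPN additionally encrypts the inner packet before encapsulation, so the public network sees neither the private addresses nor the payload.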