Tuesday, February 2, 2016

We continue discussing the paper "Shielding Applications from an Untrusted Cloud with Haven," written by Andrew Baumann, Marcus Peinado, and Galen Hunt. Last time we discussed the SGX optimizations that the authors identified in their study and that could be attempted in newer releases. We continue reviewing the paper's notes on SGX hardware. SGX is the first commodity hardware that permits efficient multiplexing among multiple isolated programs without relying on trusted software.

While SGX isolates a virtual address space, it would be even more inclusive to isolate full virtual machines, and there are several benefits to this. First, the operating system would be included altogether, which expands the capabilities significantly. By supporting a complete guest operating system, we could run more than one process instead of isolating only a portion of the user address space as we do today. This comes with a large trade-off: the attack surface increases. But it is not the same as a Type II hypervisor today, because we are in fact encrypting and redefining the trusted computing base. The fundamental assumption that the host is untrusted can be expanded and used in different contexts, which leads to more challenges in building such hardware; for example, it involves multiple levels of address translation, privileged instructions, and virtual devices. Since the assumption is the same, the architecture of the shield module would apply at this scope and level as well.

Take the example of virtual or pseudo devices. Whether a device needs to be shielded depends on whether it communicates across the interface we want to protect. Virtualization makes porting easy because software is no longer bound to physical resources, but even a virtual device needs to be shielded when it communicates across the trust boundary. Virtual resources also enable multiplexing over the same physical resources.
Not all devices need to be shielded individually; some can be grouped together for isolation. Encryption/decryption and the ability to resume from a saved state are key to implementing this solution. The design is based on the principle of policy/mechanism separation: the guest controls policy for virtual resources, while the host manages policy only for physical resources. Resources are different from the devices we discussed earlier. Virtual resources include such things as virtual address allocation and threads; physical resources include such things as memory and CPU time. This policy/mechanism separation lets the host merely allocate and bookkeep resources, while the shield manages the policies. #codingexercise
We were discussing the Hamiltonian cycles in a complete graph. The problem was stated this way.
In a complete undirected graph, there are n nodes numbered 1 to n and there are k forbidden edges. Find the number of Hamiltonian cycles in this graph that don't contain the forbidden edges. Assume that order of traversal does not matter.
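As a baseline sanity check (my own addition, not part of the original exercise), the complete graph on n nodes with no forbidden edges has (n - 1)!/2 distinct Hamiltonian cycles once rotations and direction of traversal are ignored. A one-line helper makes the expected counts easy to verify; it is written in Python rather than the C#-style pseudocode used below:

```python
from math import factorial

def total_hamiltonian_cycles(n):
    """Hamiltonian cycles in the complete graph K_n, counting each
    cycle once regardless of starting node or direction: (n-1)!/2."""
    return factorial(n - 1) // 2
```

For example, K_4 has 3 such cycles and K_5 has 12, which is a useful bound to check any enumeration against.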

We gave the solution as follows:

We permute the nodes, represented as characters, and treat each adjacent pair in the resulting sequence as an edge traversed.
// requires System.Linq; since the graph is undirected, the forbidden
// list should contain each edge in both orientations, e.g. "12" and "21"
void Permute(string a, StringBuilder b, bool[] used, List<string> forbidden)
{
    if (b.Length == a.Length)
    {
        // append the starting node so the closing edge of the cycle is also checked
        string cycle = b.ToString() + b[0];
        if (!forbidden.Any(x => cycle.Contains(x)))
            Console.WriteLine(b.ToString());
        return;
    }
    for (int i = 0; i < a.Length; i++)
    {
        if (used[i]) continue;
        used[i] = true;
        b.Append(a[i]);
        Permute(a, b, used, forbidden);
        b.Length -= 1; // backtrack: remove the last character
        used[i] = false;
    }
}
One optimization we suggested was to bail early from a permutation as soon as the most recently added edge appears in the forbidden list.
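A minimal sketch of that early-bail idea, written in Python rather than the C#-style pseudocode above (the function and variable names are my own): fix node 1 as the start to avoid counting rotations, prune a branch the moment its newest edge is forbidden, check the closing edge when the path is complete, and halve the total because each cycle is enumerated once per direction.

```python
def count_cycles(n, forbidden):
    """Count Hamiltonian cycles in the complete graph K_n that avoid
    the given forbidden edges (pairs of node labels 1..n)."""
    bad = {frozenset(e) for e in forbidden}  # normalize undirected edges
    count = 0

    def extend(path, used):
        nonlocal count
        if len(path) == n:
            # the closing edge back to the start must also be allowed
            if frozenset((path[-1], path[0])) not in bad:
                count += 1
            return
        for v in range(2, n + 1):  # node 1 is fixed as the start
            # bail early: skip used nodes and forbidden next edges
            if v in used or frozenset((path[-1], v)) in bad:
                continue
            used.add(v)
            path.append(v)
            extend(path, used)
            path.pop()
            used.remove(v)

    extend([1], {1})
    return count // 2  # each cycle was counted in both directions
```

For instance, K_4 has 3 Hamiltonian cycles, and forbidding the single edge (1, 2) leaves only 1, since that edge lies on two of the three cycles.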
Another way, we could optimize this is to generate all possible forbidden edge combinations using the given ones.
Then we list possible permutations of the Hamiltonian cycles. 
Finally we exclude the edges that contain any of the combinations.  
The key to improving performance this way lies in the patterns to match and the cycles to run them against. The latter are fixed in number, but the former, if comprehensive and attempted with longer patterns first, might be cheaper to execute. The idea is that if we sort the patterns and the occurrences, we can do better than the naive O(n^2) check.
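One concrete way to beat the pairwise pattern scan — a sketch of my own, assuming edges can be normalized and hashed, which is not necessarily what the original post had in mind: represent each undirected edge in a canonical form, build the forbidden edges into a set once, and test a candidate cycle with a set intersection, so one cycle checks against k forbidden edges in roughly O(n + k) instead of O(n * k).

```python
def cycle_edges(cycle):
    """Undirected edge set of a cycle given as a node sequence,
    including the wrap-around edge from the last node to the first."""
    return {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))}

def is_allowed(cycle, forbidden_set):
    # Hashing each normalized edge lets set intersection replace
    # the pairwise substring scan over every forbidden pattern.
    return not (cycle_edges(cycle) & forbidden_set)
```

Usage: with `forbidden_set = {frozenset((1, 2))}`, the cycle 1-3-2-4 is allowed while 1-2-3-4 is not, since the latter traverses the edge between nodes 1 and 2.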
