Friday, January 30, 2015

Today we continue our discussion of two-layered access control in Storage Area Networks. We mentioned the matching rule used in mature detector selection and access request inspection: the substrings in the initial detector and the legal access request are compared bit by bit, and this comparison accounts for most of the cost in both time and space. To reduce that cost, the substring in the detector can be extracted and converted to a single integer value. A matching threshold, say r, is defined as the length of the substring. The conversion is then used in the two main processes of access request inspection: analyzing the legal access request and selecting the integer value for the number-type detector.

The extraction and conversion to a single integer value uses a formula that takes the location of the substring relative to the left end of the binary string and enumerates the possible choices for that position. Then, for each bit up to the length of the substring, it counts the permutations possible when the current bit is set and the remaining bits vary, and cumulates these counts over the r bits of the substring. This enumeration of permutations yields a unique integer index. All such integer values fall in the range from 0 to the number of permutations possible with the remainder of the binary string beyond r. This is a one-dimensional bounded interval and can be efficiently indexed with a B-Tree. The B-Tree is looked up for an entry that does not coincide with any existing number-type detector.

We now review the detector distribution algorithm. It was designed keeping in mind the discordance between the metadata server and the intelligent disk, such as their differences in processing capacity and function, even though they may cooperate on a single access request. The negative selection algorithm was improved based on this relation between the metadata server and the intelligent disk. The metadata server has strong processing capacity, so the overhead of access control causes some loss of I/O performance there, while the intelligent disk has poor processing capacity, so the same overhead causes a large loss of I/O performance. The strategy used was to divide a file into several segments and store them on different intelligent disks, so that the lower access control module of any one of these disks can complete the access control function for that file. Using the file segmentation information, a shortest-distance algorithm is used to cluster the intelligent disks.
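As a rough sketch of this indexing idea (under assumptions, not the paper's exact permutation formula), the snippet below encodes each r-bit substring together with its starting position into one integer, and uses a SortedSet as a stand-in for the B-Tree over that bounded interval. The encoding position * 2^r + value, the sample strings, and the class and method names are all illustrative assumptions.

using System;
using System.Collections.Generic;

class DetectorIndexSketch
{
    // Encode an r-bit substring starting at 'position' within a binary string
    // into a single integer. This simple scheme (position * 2^r + value) is an
    // illustrative assumption rather than the paper's permutation formula, but
    // it shares the key property: every (position, substring) pair maps to a
    // unique index inside one bounded, one-dimensional interval.
    static long Encode(string binary, int position, int r)
    {
        long value = 0;
        for (int i = 0; i < r; i++)
            value = (value << 1) | (binary[position + i] == '1' ? 1L : 0L);
        return (long)position * (1L << r) + value;
    }

    static void Main()
    {
        int r = 4;                            // matching threshold: substring length
        string detector = "1011001110100101"; // hypothetical number-type detector

        // A SortedSet stands in for the B-Tree that indexes the bounded interval
        // of encoded detector values.
        var numberTypeDetectors = new SortedSet<long>();
        for (int pos = 0; pos + r <= detector.Length; pos++)
            numberTypeDetectors.Add(Encode(detector, pos, r));

        // Inspection of a legal access request: its substrings are encoded the
        // same way and looked up, so a candidate is accepted only if its index
        // does not collide with any existing number-type detector.
        string request = "0110101100011010";
        for (int pos = 0; pos + r <= request.Length; pos++)
        {
            long index = Encode(request, pos, r);
            bool taken = numberTypeDetectors.Contains(index);
            Console.WriteLine($"position {pos}: index {index} {(taken ? "matches an existing detector" : "is free")}");
        }
    }
}

With an index structure like this, access request inspection reduces to a containment check on integer keys instead of a bit-by-bit substring comparison.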
#codingexercise
Double GetAlternateEvenNumberRangeSumProductPower(Double[] A, int n)
{
    if (A == null) return 0;
    // extension method defined in an earlier post
    return A.AlternateEvenNumberRangeSumProductPower(n);
}
One more
#codingexercise

Double GetAlternateEvenNumberRangeSqrtSumProductPower(Double[] A, int n)
{
    if (A == null) return 0;
    // extension method defined in an earlier post
    return A.AlternateEvenNumberRangeSqrtSumProductPower(n);
}

Now back to the discussion we were having on the detector distribution algorithm. The majority of the detectors are placed in the metadata server and the remainder in the access control modules of the intelligent disks. A data segmentation strategy is used in the storage area network: one file is divided into several segments and stored on different disks, so the access control module of any one of these disks can make the access control decision for the file. Since disks share the same file descriptors for common files, they can be clustered based on a similarity measure that works out to the fraction of common file descriptors over the total file descriptors. If the top layer access control module stores num detectors, then, assuming we cluster into k categories, we select num/k detectors and store them in the top layer. The rest are distributed to the lower access control modules in the corresponding categories.
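A minimal sketch of that similarity measure and the num/k split, assuming each intelligent disk is represented by the set of integer file descriptors it stores; the detector ids, the round-robin cluster assignment, and all names here are hypothetical stand-ins for the shortest-distance clustering described above.

using System;
using System.Collections.Generic;
using System.Linq;

class DetectorDistributionSketch
{
    // Similarity between two intelligent disks: the fraction of file descriptors
    // they have in common relative to the total distinct descriptors across both.
    static double Similarity(HashSet<int> diskA, HashSet<int> diskB)
    {
        int common = diskA.Intersect(diskB).Count();
        int total = diskA.Union(diskB).Count();
        return total == 0 ? 0.0 : (double)common / total;
    }

    static void Main()
    {
        // Hypothetical descriptor sets: segments of the same files land on
        // different disks, so disks serving those files share descriptors.
        var disks = new List<HashSet<int>>
        {
            new HashSet<int> { 1, 2, 3, 4 },
            new HashSet<int> { 3, 4, 5 },
            new HashSet<int> { 7, 8, 9 },
        };

        for (int i = 0; i < disks.Count; i++)
            for (int j = i + 1; j < disks.Count; j++)
                Console.WriteLine($"similarity(disk{i}, disk{j}) = {Similarity(disks[i], disks[j]):F2}");

        // Detector split: with num detectors and k clusters, keep num/k detectors
        // in the top-layer (metadata server) module and hand the rest to the
        // lower-layer modules; the cluster assignment here is a simple round-robin
        // placeholder for the real clustering result.
        int num = 12, k = 3;
        var detectors = Enumerable.Range(0, num).ToList();  // hypothetical detector ids
        var topLayer = detectors.Take(num / k).ToList();
        var lowerLayers = detectors.Skip(num / k)
                                   .Select((d, idx) => new { d, cluster = idx % k })
                                   .GroupBy(x => x.cluster, x => x.d)
                                   .ToDictionary(g => g.Key, g => g.ToList());

        Console.WriteLine($"top layer holds {topLayer.Count} detectors");
        foreach (var kv in lowerLayers)
            Console.WriteLine($"cluster {kv.Key} lower layer holds {kv.Value.Count} detectors");
    }
}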
