Saturday, January 31, 2015

Today we continue our discussion on the detector distribution algorithm. If there are N intelligent disks, they are clustered into k categories as follows:
Step 1: Initialize the N disks as N individual clusters, each cluster's center pointing to the disk itself. Set the cluster count K = N.
Step 2: Stop if the cluster count K has reached the target k.
Step 3: Merge the two clusters with the shortest distance into a new cluster and adjust its center.
Step 4: Decrement K and repeat from Step 2.
The distance between clusters is derived from the fraction of file descriptors the disks have in common over the total. If the top layer stores num detectors, then num/k of them are randomly selected from each of the k categories and kept in the top access control module; the rest are stored in the lower access control modules of the corresponding categories.
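As a rough sketch, the four steps above can be written as an agglomerative clustering loop. The distance function here is an assumption for illustration (one minus the fraction of shared file descriptors); the paper bases the actual metric on the file segmentation information.

```python
def cluster_disks(disk_descriptors, k):
    """disk_descriptors: list of sets of file descriptors, one per disk."""
    # Step 1: every disk starts as its own cluster; a cluster's "center"
    # is modeled here as the union of its members' descriptors.
    clusters = [set(d) for d in disk_descriptors]

    def distance(a, b):
        # Illustrative metric: lower when clusters share more descriptors.
        total = len(a | b)
        return 1.0 - (len(a & b) / total if total else 0.0)

    # Step 2: stop once k clusters remain.
    while len(clusters) > k:
        # Step 3: find the pair at the shortest distance and merge it.
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: distance(clusters[ij[0]], clusters[ij[1]]),
        )
        merged = clusters[i] | clusters[j]  # adjust the center
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
        clusters.append(merged)             # Step 4: K is decremented, repeat
    return clusters

disks = [{"f1", "f2"}, {"f1", "f2", "f3"}, {"f7"}, {"f7", "f8"}]
print(len(cluster_disks(disks, 2)))  # 2
```

With the sample descriptors, the two disks sharing f1 and f2 merge first, then the two sharing f7, leaving the two intended categories.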
Let us take a closer look at the access control function distribution. This is computed not just by the metadata server but also by the intelligent disks, thereby avoiding a single point of failure and a performance bottleneck. However, the number-type detectors are generated only by the metadata server. This helps with the accuracy of the access request inspection. The detectors are distributed among the top and lower access control modules, and the two layers can inspect access requests in parallel. The lower layer stores a smaller number of detectors so that it does not get in the way of I/O. During detector generation, all substrings in the legal access requests are extracted once and each is converted to a single integer value. The numerical values are then indexed with a B-Tree. This avoids converting the same substring repeatedly when it appears in different access requests and improves the detector selection. The numerical interval found this way avoids mistakenly judging a legal access request as an abnormal one: anything that lies outside this numerical range is neither a detector nor a valid access request.
The move from comparing binary strings to comparing numerical values reduces the inspection overhead considerably. Further, with clustering, the detectors are distributed in a way that improves the accuracy of the inspection without adding overhead.
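A minimal sketch of this numerical inspection idea, with a sorted list standing in for the B-Tree and an illustrative encoding (offset times 2^r plus the substring's value) rather than the paper's exact formula:

```python
import bisect

R = 4  # matching threshold: substring length in bits (illustrative)

def to_numbers(binary_string):
    """Map every r-bit substring of binary_string to one integer."""
    return {
        offset * (1 << R) + int(binary_string[offset:offset + R], 2)
        for offset in range(len(binary_string) - R + 1)
    }

# Extract and convert each legal substring once, then index the values.
legal_requests = ["10110010", "10011100"]
index = sorted(set().union(*(to_numbers(req) for req in legal_requests)))

def contains(v):
    # Binary search in the sorted index (stand-in for a B-Tree lookup).
    i = bisect.bisect_left(index, v)
    return i < len(index) and index[i] == v

def is_legal(request):
    # A value outside the indexed interval marks the request abnormal.
    return all(contains(v) for v in to_numbers(request))

print(is_legal("10110010"))  # True
print(is_legal("00000000"))  # False
```

Each inspection is then a handful of integer lookups instead of bit-by-bit string comparisons.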

Friday, January 30, 2015

Today we continue our discussion on two layered access control in Storage Area Networks. We mentioned the matching rule used in mature detector selection and access request inspection. The substrings in the initial detector and the legal access request are compared bit by bit, and this accounts for most of the cost in terms of time and space. Instead, the substring in the detector can be extracted and converted to a single integer value. A matching threshold, say r, is defined for the length of the substring. This is then used in the two main processes of access request inspection: analyzing the legal access request and selecting the integer value for the number-type detector. The extraction and conversion to a single integer value is done with a formula that takes the location of the substring relative to the left end of the binary string, enumerates the possible choices for the length of the substring, and, for each following segment up to the length of the binary string, counts the possible choices by setting the current bit and the permutations possible with the rest, accumulating this for a length equal to that of the substring. This calculation of permutations yields a unique integer index. All these integer values fall in the range from 0 to the number of permutations possible with the remainder of the binary string from r. This is a one-dimensional limited interval and can be efficiently indexed with a B-Tree. The B-Tree is looked up for an entry that does not coincide with any number-type detector. We now review the detector distribution algorithm. It was designed keeping in mind the discordance between the metadata server and the intelligent disks, such as their differing processing capacity and function, even though they may co-operate on a single access request. The negative selection algorithm was improved based on this relation between the metadata server and the intelligent disks.
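Number-type detector selection against that indexed interval might be sketched as below, under the assumption that the legal substrings have already been encoded as integers; the interval bound and values are made up for illustration.

```python
import bisect
import random

INTERVAL = 32                              # size of the one-dimensional interval
legal_values = sorted({3, 7, 11, 19, 27})  # illustrative encodings of legal substrings

def in_index(v):
    # Sorted-list lookup standing in for the B-Tree search.
    i = bisect.bisect_left(legal_values, v)
    return i < len(legal_values) and legal_values[i] == v

def generate_detectors(count, seed=0):
    """Randomly pick integers in the interval that match no legal value."""
    rng = random.Random(seed)
    detectors = set()
    while len(detectors) < count:
        candidate = rng.randrange(INTERVAL)
        if not in_index(candidate):  # negative selection: keep non-matches
            detectors.add(candidate)
    return detectors

print(sorted(generate_detectors(5)))
```

Every surviving integer is, by construction, distinct from every legal value, so a later match against a request's encoded substrings flags that request as abnormal.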
The processing capacity of the metadata server is strong, so the overhead of access control causes only a small loss of I/O performance there. The processing capacity of an intelligent disk is poor, so the same overhead would cause a large loss of I/O performance. The strategy used was to divide a file into several segments stored on different intelligent disks, so that the lower access control module of any one of these disks can complete the access control function for the file. Using the file segmentation information, a shortest-distance algorithm is used to cluster the intelligent disks.
#codingexercise
Double GetAlternateEvenNumberRangeSumProductPower(Double[] A, int n)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeSumProductPower(n);
}
One more
#codingexercise
Double GetAlternateEvenNumberRangeSqRtSumProductPower(Double[] A, int n)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeSqRtSumProductPower(n);
}

Now back to the discussion we were having on the detector distribution algorithm. The majority of the detectors reside in the metadata server and the remainder in the access control modules of the intelligent disks. A data segmentation strategy is used in the storage area network: one file is divided into several segments and stored on different disks, so the access control module of any one of those disks can make the access control decision for the file. Since disks share the same file descriptors for common files, they can be clustered based on a similarity measure that works out to the fraction of the common file descriptors over the total file descriptors. If the top layer access control module stores num detectors, then, assuming we cluster into k categories, we select num/k detectors and store them in the top layer. The rest are distributed to the lower access control modules of the corresponding categories.
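The similarity measure and the num/k split can be sketched as follows; the descriptor sets and detector values are illustrative.

```python
import random

def similarity(disk_a, disk_b):
    """Fraction of common file descriptors over the total descriptors."""
    total = disk_a | disk_b
    return len(disk_a & disk_b) / len(total) if total else 0.0

def split_detectors(detectors, k, rng=None):
    """Randomly keep num/k detectors in the top layer; the rest go lower."""
    rng = rng or random.Random(0)
    top = rng.sample(list(detectors), len(detectors) // k)
    lower = [d for d in detectors if d not in top]
    return top, lower

print(similarity({"f1", "f2"}, {"f2", "f3"}))
top, lower = split_detectors(list(range(12)), k=3)
print(len(top), len(lower))  # 4 8
```

With 12 detectors and k = 3, the top layer keeps 4 and the lower modules share the remaining 8.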

Thursday, January 29, 2015

Today we continue to discuss the paper Two layered access control for Storage Area Network by Tao, DeJiao, ShiGuang. In this paper they describe it in the context of an artificial immune algorithm that can efficiently detect abnormal access. The two layers consist of a central metadata server, which forms the top layer, and the intelligent disks, numbering 1 to n, which form the lower layer. Detectors are used in both layers to receive, analyze and inspect the access request. The inspection is the new addition to the tasks of the metadata server and intelligent disk: it intercepts the access request prior to the execution of the command and the return of the data. The immune algorithm first performs negative selection to generate detectors for access request inspection. If a detector matches a request, the request is considered abnormal and denied. The algorithm also decides the generation and distribution of detectors to intercept the access requests. We now look at the detector generation algorithm. It uses the notion of an antigen and a detector, both represented by binary strings, where the latter represents a space vector. All non-repeating binary strings are generated as the initial detectors. The initial detectors that did not match any of the legal access requests were selected to be mature detectors. There is more than one detector generation algorithm, namely the enumeration generation algorithm, the linear generation algorithm, and the greedy generation algorithm. In these algorithms, the initial detectors are enumerated and the mature detectors are randomly selected, which incurs a large time and space overhead. For selecting mature detectors and for inspecting access requests, matching is done based on a set of matching rules. These can be the r-contiguous matching rule, the r-chunk matching rule and the Hamming distance matching rule.
Matching involves comparing binary substrings up to r bits between the detectors and the legal access requests. All substrings with more than r bits in the legal access requests were traversed, and there was no index for them. This study used the Hamming distance matching rule. The binary string matching could be improved: a number-type detector and a matching threshold of r bits are defined for the length of the substring, and the substring in the detector is converted to a single integer value. Access request inspection then involves analyzing the legal access request and selecting the integer value for the number-type detector.
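A minimal sketch of negative selection under the Hamming distance matching rule, where a detector is taken to match when at least r bits agree; the threshold and bit strings are illustrative.

```python
R = 6  # matching threshold (illustrative)

def matches(detector, string, r=R):
    """Hamming-distance style rule: match when >= r bit positions agree."""
    agree = sum(1 for a, b in zip(detector, string) if a == b)
    return agree >= r

def select_mature(initial_detectors, legal_requests):
    """Negative selection: keep only detectors matching no legal request."""
    return [
        d for d in initial_detectors
        if not any(matches(d, req) for req in legal_requests)
    ]

legal = ["10110010"]
candidates = ["10110011", "01001101", "01001100"]
print(select_mature(candidates, legal))
```

Here the first candidate agrees with the legal request in 7 of 8 bit positions, so it is discarded; the other two survive as mature detectors.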
#codingexercise
Double GetAlternateEvenNumberRangeCubePowerRtProductPower(Double[] A, int n, int m)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeCubePowerRtProductPower(n, m);
}

Wednesday, January 28, 2015

Today we continue our discussion on Access Control. We were reviewing RBAC models, specifically the IRBAC2000 model. We now review the MDRBAC model. We noticed that it was an improvement over IRBAC2000 because it introduced the notion of time-restricted mappings and minimized the role mapping restrictions. Still, it suffered from the problem that each participant of the grid had to implement this model.
We next look at the CBAC model. This is also a mutual operation model, based on the premise that the associated multi-domain can be dynamically defined. The dynamic relationship is called an alliance, and information can be exchanged over an alliance relationship. The shortcoming of this model is that the authorization is not clear, because the dynamic relationship only helps with the role mapping.
These are some of the models that were presented in the paper.
We next look at two layered access control for Storage Area Network as written by Tao et al. This is a slightly different topic from the access control models we have been discussing, but it is informative to take a look at access control in storage area networking. First, access control is relevant in storage area networking, and second, an immune algorithm is applied in this case. However, such an algorithm incurs a large space and time overhead, which has performance implications for large I/O. The structure of two layered access control is as already given: the top layer maintains the metadata and the lower layer maintains the disks. A distribution strategy for two layer access control is presented: the top layer generates all the detectors and preserves a majority of them, while the lower layer maintains a small number of detectors. A network access request is inspected with the help of the top layer access control module. The problem of protecting the storage area network comprises several parts such as data and communication encryption, certification and access control. To block illegal requests and to pass valid requests are the two main functions. Numerical detectors are used, with their indices found using a B-Tree. The detectors are used to inspect the access request; if a detector matches the access request, the control module will deny the request. The distribution of the detectors is the main concern here.
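The two-layer inspection flow might be sketched as below, with detector matching reduced to simple set membership for brevity; the module contents and request names are made up.

```python
class AccessControlModule:
    """One layer's detector store; a match with any detector denies access."""
    def __init__(self, detectors):
        self.detectors = set(detectors)

    def inspect(self, request):
        # Returns True when the request matches no detector (i.e. passes).
        return request not in self.detectors

# Top layer (metadata server) holds most detectors; a lower layer
# (intelligent disk) holds a small subset.
top_layer = AccessControlModule({"evil-op", "overflow-op"})
lower_layer = AccessControlModule({"evil-op"})

def allow(request):
    # The two layers can inspect in parallel; both must pass the request.
    return top_layer.inspect(request) and lower_layer.inspect(request)

print(allow("read-block-42"))  # True
print(allow("evil-op"))        # False
```

A request is denied as soon as either layer's detectors match it, which is the "prevent the illegal, pass the valid" behavior described above.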
#codingexercise
Double GetAlternateEvenNumberRangeCubeRtProductPower(Double[] A, int n)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeCubeRtProductPower(n);
}

Tuesday, January 27, 2015

#codingexercise
Double GetAlternateEvenNumberRangeCubeRtProductSquares(Double[] A)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeCubeRtProductSquares();
}
Today we continue the discussion on Access Control.  We alluded to how RBAC does not adequately address a grid computing environment.  We now look at a few specific models of RBAC.
IRBAC2000 was proposed by Kapadia. It is called a role-based multi-domain mutual operation model. By the dynamic mapping of roles among domains, it can solve the mutual operation between two reciprocal domains to some extent. Roles are mapped to the corresponding local system definitions. The mapping is dynamic, and this provides flexibility. The shortcoming is that once the mapping is done, access cannot be restricted further, because doing so could violate mutually exclusive roles: if an external access got mapped to one role and we then added another, mutually exclusive role to the same external access, the system would be left in a confused state.
The MDRBAC was introduced to solve this specific problem. It introduces notions such as domain agency, role mapping with time attribute, and minimized role mapping  restriction and it is applied in the access control among reciprocal domains.
The time attribute is helpful to restrict the duration for which a mapping holds, so that mappings can be renewed or refreshed.
#codingexercise
Double GetAlternateEvenNumberRangeCubeRtProduct(Double[] A)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeCubeRtProduct();
}

Monday, January 26, 2015

Today we continue discussing the RBAC model. We discussed that it could implement both DAC and MAC. It is based on the premise of roles, so user migrations are easier to handle. Further, a user has to be assigned a role, and the role has to be active and authorized. Permission for the object must also be authorized.
RBAC supports three security principles, i.e. the minimum authority principle, the responsibility separation principle, and the data abstraction principle. The minimum authority principle means that the system assigns only the minimum running authority to a role. The responsibility separation principle means that mutually exclusive roles must be activated jointly to complete a sensitive task, so that no single user can complete it alone. The data abstraction principle means that authority is abstracted: it is expressed in terms of meaningful operations rather than only explicit low-level operations such as read, write, create, and delete.
RBAC became popular in the enterprise world for its ease of deployment and the controls it gave. For example, the user, role, access authority, role class, mutual exclusion, and restriction of roles simplified deployment and management. RBAC provides flexibility, convenience and security.
Access control in the grid computing environment differs from enterprise access control in that there is no longer a centralized entity that can support a unified and central access control mechanism. Grid computing might involve peer-to-peer networks or other distributed technologies where a decentralized multi-domain management mode may be better suited. Therefore the access control strategy should be studied based on the traditional access control model.
#codingexercise
Double GetAlternateEvenNumberRangeSqRtProductSquares(Double[] A)
{
    if (A == null) return 0;
    return A.AlternateEvenNumberRangeSqRtProductSquares();
}

Sunday, January 25, 2015

We continue our reading on the Study of Access Control Model in Information Security by Qing-Hai, Ying et al. Today we review its coverage of Role Based Access Control (RBAC). DAC and MAC were not flexible enough to handle business changes such as adding, canceling, and merging departments, employee promotions, duty changes, etc. Instead, role based access control was favored. In RBAC, authorization is mapped to roles, and a user can take different roles. This effectively handles changes in the organization. Since users are not assigned rights directly but only acquire them through roles, management of individual user rights becomes a matter of assigning appropriate roles to user accounts. The roles are classified based on the set of stable duties and responsibilities in the management. There are three primary rules for RBAC.
Role assignment - A subject can exercise a permission only if the subject has been selected or assigned a role.
Role authorization - A subject's active role must be authorized for the subject, i.e. a user cannot take any or all roles.
Permission authorization - A subject can exercise a permission only if the permission is authorized for the subject's active role, i.e. the user can exercise only those permissions assigned to the role.
Roles can be hierarchical in which a higher level role assumes all that comes with the lower level role.
With hierarchical roles and constraints, RBAC can be controlled to simulate Lattice Based Access Control. Thus RBAC can also be used to implement DAC and MAC.
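The three rules plus role hierarchy can be sketched as a small lookup model; all role and permission names here are illustrative.

```python
ROLE_PERMS = {"employee": {"read"}, "manager": {"approve"}}
ROLE_PARENT = {"manager": "employee"}        # manager inherits from employee
USER_ROLES = {"alice": {"manager"}, "bob": {"employee"}}

def effective_perms(role):
    """A senior role assumes all that comes with its junior roles."""
    perms = set()
    while role is not None:
        perms |= ROLE_PERMS.get(role, set())
        role = ROLE_PARENT.get(role)
    return perms

def can(user, active_role, permission):
    # Role assignment + role authorization: the active role must be one
    # of the user's assigned roles.
    if active_role not in USER_ROLES.get(user, set()):
        return False
    # Permission authorization: the permission must come with the role.
    return permission in effective_perms(active_role)

print(can("alice", "manager", "read"))    # True (inherited)
print(can("bob", "employee", "approve"))  # False
```

Note that alice exercises "read" only through the hierarchy, and bob cannot activate "manager" at all: each of the three rules gates the check.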
A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles.
RBAC is not the same as ACLs. RBAC differs from access control lists in that it assigns permissions to specific operations with meaning in the organization, rather than to low-level data objects. An ACL may control whether a file can be read or written, but it cannot say how the file can be changed. RBAC lends more meaning to the operations in the organization. It can be used to achieve a separation of duties (SoD), which ensures that two or more people must be involved in authorizing critical operations. SoD is used where no individual should be able to effect a breach of security through dual privilege.