Wednesday, July 3, 2013

Resource records used by Active Directory

When a server is promoted to a domain controller, the default resource records are written to the netlogon.dns file in the system root directory. The first record is for the domain itself and lists the name of the domain, the type of the record, the IP address, and the weight. Each DC attempts to register an A record for its IP address in the domain it belongs to, similar to the preceding record. Next is an alias, or canonical name (CNAME), record: its name contains the GUID of the server and it is an alias for the server itself. DCs use this record when they know the GUID of a server and want to determine its IP address. If the DC is a global catalog server, there is an additional A record. The remaining records are of type SRV, which specifies the location of servers for specific protocols. These records allow you to remap the port numbers for individual protocols or adjust the priority with which certain servers are used.
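The exact contents of netlogon.dns vary by environment, but for a hypothetical domain example.com with a single DC dc1 (IP 192.0.2.10, GUID abbreviated), the registered records take roughly this shape (SRV data fields are priority, weight, port, target):

```
example.com. 600 IN A 192.0.2.10
e99e82d5-deed-4f7d-bc06-1e1e4cd54abc._msdcs.example.com. 600 IN CNAME dc1.example.com.
_ldap._tcp.example.com. 600 IN SRV 0 100 389 dc1.example.com.
_kerberos._tcp.example.com. 600 IN SRV 0 100 88 dc1.example.com.
_ldap._tcp.dc._msdcs.example.com. 600 IN SRV 0 100 389 dc1.example.com.
_gc._tcp.example.com. 600 IN SRV 0 100 3268 dc1.example.com.
```

The domain name, IP address, and GUID here are made up; only the record shapes and well-known service names are meant to be representative.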
Sites that do not have a domain controller located within them can be covered by DCs in other sites that have site links defined to them. This is called automatic site coverage. The covering DC adds site-specific records for the site it covers, so that it can handle queries from clients in that site. To see the list of sites a particular DC covers, run the NLTest command. Automatic site coverage can be toggled on or off with a registry value on the domain controllers.
These records can be queried for information such as:
all the global catalogs in a forest or a particular site
all Kerberos servers in a domain or a particular site
all domain controllers in a domain or a particular site
the PDC emulator for a domain.
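These lookups are ordinary DNS SRV queries against well-known names under the domain and under the _msdcs subdomain. A minimal sketch of how those query names are built (the `srv_query_name` helper is hypothetical; the name patterns follow the standard `_service._protocol` convention used by the DC locator):

```python
def srv_query_name(role, domain, site=None):
    """Build the DNS SRV name a client would query to locate AD services.

    role: 'gc' (global catalog), 'kerberos' (KDC), 'dc' (domain controller),
          or 'pdc' (PDC emulator). site optionally narrows to one site.
    """
    patterns = {
        # Global catalogs register under the forest root's _msdcs zone.
        "gc": "_ldap._tcp.{site}gc._msdcs.{domain}",
        # Kerberos KDCs for a domain.
        "kerberos": "_kerberos._tcp.{site}{domain}",
        # Writable domain controllers for a domain.
        "dc": "_ldap._tcp.{site}dc._msdcs.{domain}",
        # The PDC emulator is unique per domain, so it is never site-scoped.
        "pdc": "_ldap._tcp.pdc._msdcs.{domain}",
    }
    site_part = f"{site}._sites." if site else ""
    return patterns[role].format(site=site_part, domain=domain)

print(srv_query_name("dc", "example.com"))
# _ldap._tcp.dc._msdcs.example.com
print(srv_query_name("gc", "example.com", site="Headquarters"))
# _ldap._tcp.Headquarters._sites.gc._msdcs.example.com
```

Resolving one of these names returns the SRV records for every server offering that role, which the client then sorts by priority and weight.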
For domain controllers that should be dedicated to an application, such as Microsoft Exchange, and should not publish certain records, there are two options for configuring which SRV records are registered: the DnsAvoidRegisterRecords registry entry can be used, or the Net Logon settings in the administrative templates of Group Policy can be applied to those domain controllers.
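As a sketch of the registry route (the key path is the standard Net Logon parameters key; the mnemonic values shown are illustrative examples of record types to suppress, not a complete or verified list):

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters ^
    /v DnsAvoidRegisterRecords /t REG_MULTI_SZ /d "Gc\0GcIpAddress"
```

Each mnemonic in the multi-string value names one category of record that the Net Logon service will skip when it registers the DC's records.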

Tuesday, July 2, 2013

NDIS drivers

Protocol drivers write packets onto the wire using a network adapter. Network adapter vendors write proprietary drivers for their hardware, and there can be a large number of these. So that protocol drivers would not have to know the nuances of each network adapter, Windows developed the Network Driver Interface Specification (NDIS). Network adapter drivers are now expected to conform to NDIS; drivers that do so are called NDIS miniport drivers.
The NDIS library implements the boundary between NDIS drivers and the Transport Driver Interface (TDI).
The NDIS library helps NDIS driver clients format the commands they send to NDIS drivers, and NDIS drivers interface with the library to receive requests and send back responses. NDIS IRPs are intercepted by the library at the NDIS protocol interface and forwarded through any NDIS intermediate driver to the NDIS miniport driver before reaching the hardware abstraction layer (HAL).
The NDIS library was designed not just to provide NDIS boundary helper routines but to provide an entire execution environment, so that driver code can be moved between client and server. Thus the NDIS library does not accept and process IRPs directly; it translates IRPs into calls into the NDIS driver. NDIS drivers do not have to handle re-entrancy, because the library guarantees that outstanding requests complete before new requests are issued. This lets an NDIS driver avoid synchronization, which grows complex on multiprocessors.
On the other hand, this serialization can hamper scalability, so in subsequent versions drivers can indicate to the NDIS library that they do not want to be serialized. In that case the library forwards requests as fast as the IRPs arrive, and the NDIS driver is expected to queue and manage multiple simultaneous requests. Other features include reporting whether the network medium is active. TCP/IP task offloading allows a miniport to offload packet checksums and IPsec processing to the adapter. Fast packet forwarding allows forwarding of incoming packets that are not destined for the host without processing them. Wake-on-LAN introduces power management capabilities. Connection-oriented NDIS allows NDIS drivers to manage connection-oriented media. The functions on the interfaces an NDIS driver uses to interface with the network adapter hardware translate directly to corresponding functions in the HAL.
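The serialization contract can be illustrated with a toy model (plain Python, nothing NDIS-specific): the "library" queues submissions and guarantees the "driver" routine sees one request at a time, even when completing a request triggers a nested send.

```python
from collections import deque

class SerializedLibrary:
    """Toy model of the classic NDIS contract: the library ensures each
    request completes before the driver sees the next, so the driver body
    needs no re-entrancy handling or locking of its own."""
    def __init__(self, handler):
        self.handler = handler          # the "driver" routine
        self.queue = deque()
        self.busy = False

    def send(self, packet):
        self.queue.append(packet)
        if self.busy:                   # a request is already in flight:
            return                      # just queue; never re-enter handler
        self.busy = True
        while self.queue:               # drain one request at a time
            self.handler(self.queue.popleft())
        self.busy = False

def driver(packet):
    log.append(packet)
    if packet == "frame-1":
        lib.send("frame-1-ack")         # nested send is safely deferred

log = []
lib = SerializedLibrary(driver)
lib.send("frame-1")
lib.send("frame-2")
print(log)
```

A deserialized driver would instead receive every request immediately and have to implement the queueing and locking itself, which is the trade made for scalability.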

Monday, July 1, 2013

Dynamic DNS

DDNS is a method for clients to send requests to a DNS server to add or delete resource records in a zone. Before DDNS, records were updated either directly in a text-based zone file or through a vendor-supplied GUI, such as the Windows DNS MMC snap-in. Active Directory takes full advantage of DDNS to relieve administrators of maintaining resource records by hand.
DNSSEC was introduced to secure dynamic updates using public key-based methods. The approach Microsoft takes to secure dynamic updates is to use access control lists in AD: AD-integrated zones store their DNS data in AD, and by default authenticated computers in a forest can create new entries in a zone. This enables an authenticated user or computer to add personal computers to the network directly.
The GlobalNames zone (GNZ) was introduced in Windows Server 2008 to ease migration from WINS. WINS uses short, single-label names, whereas DNS uses hierarchical names. DNS does support short names through DNS suffix search orders on clients: the DNS resolver attempts to resolve the short name by appending each configured suffix, one at a time in the order listed. In a large organization with numerous DNS namespaces this suffix list can be quite long, and such lookups are potentially time-consuming, difficult to maintain, and cause a significant increase in network traffic during short-name resolution. GNZ supports short-name resolution without requiring a suffix search list on the client; any client that supports DNS resolution can use the GlobalNames zone functionality without additional configuration. A Windows Server 2008 DNS server first tries to resolve a queried name in its local zones and, if that fails, tries the GlobalNames zone. The caveat is that the names are registered statically rather than dynamically, so the zone must be maintained by hand. GNZ is also useful for IPv6 deployments. CNAME records are placed in the GlobalNames zone, aliasing each short name to the record for the specific server in the relevant forward lookup zone.
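The lookup order can be sketched as follows (zone contents and names are hypothetical; suffix devolution against local zones is omitted for brevity):

```python
# Sketch of the order a Windows Server 2008 DNS server follows for a
# single-label name: authoritative local zones first, then the GlobalNames
# zone, where statically created CNAME records alias short names to FQDNs.

local_zones = {
    "dc1.corp.example.com": "192.0.2.10",
}

# GlobalNames zone: short name -> CNAME target in a forward lookup zone.
globalnames = {
    "intranet": "webserver7.emea.example.com",
}

def resolve(name):
    # 1) Try the local zones first.
    if name in local_zones:
        return local_zones[name]
    # 2) Fall back to the GlobalNames zone for single-label names.
    if "." not in name and name in globalnames:
        return ("CNAME", globalnames[name])
    return None

print(resolve("intranet"))   # resolves via the GlobalNames CNAME
```

The point of the sketch is the ordering: the GlobalNames zone is consulted only after local zone resolution fails, and it holds nothing but statically maintained aliases.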
 

Sunday, June 30, 2013

How are replication conflicts resolved?
There can be conflicts during a replication cycle. For example, server A creates an object with a particular name at roughly the same time that server B creates an object with the same name. The conflict reconciliation process kicks in at the next replication cycle. The server compares the version numbers of the updates, and the higher version wins the conflict. If the version numbers are the same, the attribute that was changed at the later time wins. If the timestamps are also equal, the GUIDs are compared and the higher one wins.
If an object is moved under a parent that has since been deleted, that object is placed in the LostAndFound container.
If two objects with the same RDN are created, then after the usual resolution process described above, the conflicting attribute of the loser is modified with a known unique value. Between three servers, when one server receives updates from the other two, conflict resolution is worked out at that server and repeated at the other two when they request the updates from it.
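The precedence described above (version, then timestamp, then GUID) can be expressed as a simple tuple comparison; a minimal sketch with made-up values:

```python
from uuid import UUID

def resolve_conflict(update_a, update_b):
    """Pick the winner between two conflicting attribute writes.

    Each update is (version, timestamp, originating_guid): the higher
    version wins, then the later timestamp, then the higher GUID.
    Python's tuple comparison applies exactly that precedence order.
    """
    return max(update_a, update_b)

# Versions and timestamps tie, so the GUID breaks the tie.
a = (3, 1372600000, UUID("11111111-1111-1111-1111-111111111111"))
b = (3, 1372600000, UUID("22222222-2222-2222-2222-222222222222"))
assert resolve_conflict(a, b) is b

# A higher version wins regardless of an older timestamp.
c = (4, 1372500000, UUID("11111111-1111-1111-1111-111111111111"))
assert resolve_conflict(c, b) is c
```

Because every DC applies the same deterministic comparison, all replicas converge on the same winner without any coordination.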
Active Directory uses DNS for name resolution; WINS is no longer required. DNS is a hierarchical name resolution system and one of the largest directory services in existence. DNS nomenclature involves zones, resource records, and dynamic DNS. Zones are delegated portions of the DNS namespace that a name server maintains. A resource record is the unit of information in DNS, and a zone is essentially a collection of resource records. Common record types include the address (A) record, pointer (PTR) record, alias (CNAME) record, mail exchange (MX) record, name server (NS) record, start of authority (SOA) record, and service (SRV) record. The last is used by DCs and AD clients to locate servers for a particular service. Service records depend on address records: the list of computers running a service is maintained as A records.

Saturday, June 29, 2013

Active Directory replication continued ...
When replicating a naming context, a domain controller maintains a high-watermark table to pick up where it left off. There is one table for every naming context, which totals three if we count the schema, configuration, and domain NCs. Each table stores the highest USN of the updates received from each replication partner, so that only new information is requested.
This is different from the up-to-dateness vector, another table the DC maintains to assist in efficient replication of a naming context by removing redundancies in replication and preventing endless replication loops. The two tables are used together to improve replication efficiency.
By filtering out the same changes arriving from multiple sources, only the updates that have not yet been applied are made. This is called propagation dampening. Thus we have seen that Active Directory is split into separate naming contexts, each of which is replicated independently, and that within each naming context a variety of metadata is held. Up-to-dateness vector entries consist of an originating-DSA GUID, an originating USN, and a timestamp indicating the last successful replication with the originating domain controller. These values are updated only during a replication cycle.
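Propagation dampening can be sketched as a filter over incoming updates (DSA names and USNs below are invented):

```python
# Before requesting changes, the destination DC presents its up-to-dateness
# vector (originating-DSA GUID -> highest originating USN applied). The
# source filters out updates the destination already has, so the same
# change is never replayed from multiple replication partners.

def filter_updates(updates, utd_vector):
    """updates: list of (originating_dsa, originating_usn, change).
    utd_vector: dict originating_dsa -> highest originating USN applied."""
    return [
        u for u in updates
        if u[1] > utd_vector.get(u[0], 0)   # keep only genuinely new changes
    ]

utd = {"dsa-A": 1105, "dsa-B": 790}
incoming = [
    ("dsa-A", 1100, "old change, already applied"),
    ("dsa-A", 1106, "new change from A"),
    ("dsa-C", 5, "first change ever seen from C"),
]
print(filter_updates(incoming, utd))
```

Any update whose originating USN falls at or below the vector entry for its originating DSA is dropped, which is what breaks replication loops.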
As an example of replication, we take the following example from the book: Step 1) a user is created on DC A. Step 2) that object is replicated to DC B. Step 3) the object is subsequently modified on DC B. Step 4) the new changes to that object are replicated back to DC A. The Active Directory database transaction representing step 1 consists of a USN and a timestamp. Replicating the originating write to DC B allocates a different USN there, and the user's uSNCreated and uSNChanged attributes are updated. In step 3, the password change for the user on DC B again modifies the user's uSNChanged attribute; in addition, the password attribute is modified and the corresponding USN and timestamp updated. Step 4 is similar to step 2: a change transaction is issued and the attributes are updated. To look at how the replication occurs, there are five steps: Step 1) replication with a partner is initiated. Step 2) the partner works out what updates to send. Step 3) the partner sends the updates to the initiating server. Step 4) the initiating server processes the updates. Step 5) the initiating server checks whether it is up to date.
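The USN bookkeeping in the four-step example can be modeled with a toy class (the class and attribute layout are a simplification for illustration; only uSNCreated/uSNChanged and the originating metadata mirror real AD):

```python
# Each DC has its own USN counter. A replicated write consumes a *local*
# USN on the destination and bumps the object's uSNChanged there, while the
# per-attribute metadata preserves the originating DC and originating USN.

class DC:
    def __init__(self, name):
        self.name, self.usn, self.objects = name, 0, {}

    def _next_usn(self):
        self.usn += 1
        return self.usn

    def originating_write(self, dn, attr, value):
        usn = self._next_usn()
        obj = self.objects.setdefault(dn, {"uSNCreated": usn, "attrs": {}})
        obj["attrs"][attr] = (value, self.name, usn)   # originating metadata
        obj["uSNChanged"] = usn

    def replicated_write(self, dn, attr, value, orig_dsa, orig_usn):
        usn = self._next_usn()                  # local USN differs from origin's
        obj = self.objects.setdefault(dn, {"uSNCreated": usn, "attrs": {}})
        obj["attrs"][attr] = (value, orig_dsa, orig_usn)  # metadata preserved
        obj["uSNChanged"] = usn

a, b = DC("A"), DC("B")
a.originating_write("cn=user", "password", "p1")      # step 1: create on DC A
meta = a.objects["cn=user"]["attrs"]["password"]
b.replicated_write("cn=user", "password", *meta)      # step 2: replicate to DC B
b.originating_write("cn=user", "password", "p2")      # step 3: change on DC B
print(a.objects["cn=user"]["uSNChanged"], b.objects["cn=user"]["uSNChanged"])
```

After step 3, DC B's copy carries a newer originating USN than DC A's, which is exactly what step 4 (replication back to A) picks up.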

Friday, June 28, 2013

Active Directory Site topology and replication

While data is usually replicated from a single master server to subordinate servers, Active Directory offers multimaster replication. Single-master replication has the following drawbacks: a single point of failure, geographic distance from the master for clients performing updates, and less efficient replication because all updates originate from a single location. With multimaster replication you avoid these, but you first have to create a site topology and define how the domain controllers replicate with each other. The Knowledge Consistency Checker (KCC) sets up and manages the replication connections. Subnets, defined with 32-bit (IPv4) or 128-bit (IPv6) addressing, are added to sites to determine relative locations on a network. AD sites are defined as collections of well-connected AD subnets. Replication flows are set up between sites, and DFS shares or DCs are located using sites. Sites are also used to perform DNS queries via the DC locator service, which finds the nearest DC or global catalog. Most members of a domain dynamically determine their site when they start up. By default, one site is created automatically; multiple sites can be defined for a single location to segregate resources.
Site links allow you to define which sites are connected to each other and the cost associated with each connection. Site links are used for replication and can use IP or SMTP as the transport; replication normally happens over IP, but SMTP can be used in cases where connectivity is poor or unreliable. Site links also help a DC determine which other sites to cover in addition to its own, so that clients in sites without a DC can still log on. Note that if the network is not fully routed, i.e., not every site can reach every other site directly, site link bridges have to be defined between sites. Connection objects specify which DCs replicate with which other DCs and are generally managed by the DCs themselves, although it isn't always possible to let AD manage all of these connections.
The Knowledge Consistency Checker (KCC) automatically generates and maintains the connection objects, deciding what to replicate and when. It uses two algorithms, one intrasite and one intersite. The intrasite algorithm is designed to create a minimal-latency ring topology that guarantees no more than three hops between any two DCs in a site. The intersite algorithm keeps the sites connected via a spanning tree so that replication can occur, using the site link costs to choose the connections. RepAdmin is a command-line tool for administering replication; Replmon is a graphical utility for managing and monitoring it.
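The intersite idea can be sketched by treating sites as graph nodes and site links as weighted edges, then keeping a least-cost spanning tree. This sketch uses Kruskal's algorithm with union-find; the real KCC is considerably more involved (schedules, bridgehead selection, and so on), and the site names and costs are invented:

```python
def spanning_tree(sites, links):
    """links: list of (cost, site1, site2). Returns the chosen connections."""
    parent = {s: s for s in sites}

    def find(s):                            # union-find representative
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path compression
            s = parent[s]
        return s

    chosen = []
    for cost, a, b in sorted(links):        # cheapest links first
        ra, rb = find(a), find(b)
        if ra != rb:                        # link joins two disconnected fragments
            parent[ra] = rb
            chosen.append((a, b, cost))
    return chosen

sites = ["HQ", "Branch1", "Branch2"]
links = [(100, "HQ", "Branch1"), (100, "HQ", "Branch2"),
         (300, "Branch1", "Branch2")]
print(spanning_tree(sites, links))
# keeps the two cost-100 links; the cost-300 link is redundant
```

Lowering a site link's cost is how an administrator steers the KCC toward preferred replication paths.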
Replication progress is tracked with update sequence numbers (USNs) and timestamps. Each DC maintains its highest committed USN across all naming contexts.
Constructing a web image graph uses the VIPS algorithm, which segments a web page into visual blocks organized in a tree. Different blocks relate to different topics, so we use the hyperlinks from block to page rather than from page to page. The page-to-block and block-to-page relationships are extracted first. The block-to-page matrix Z, with dimensions n x k, is constructed first; each cell holds the inverse of the number of pages to which the block links, or zero otherwise. The page-to-block matrix X, with dimensions k x n, is populated with a normalized importance value based on each block's size and distance from the center, or zero otherwise. These two matrices are combined to form a new web page graph W = ZX. Let Y be a block-to-image matrix with dimensions n x m, where each cell holds the inverse of the number of images contained in the block, or zero otherwise. Using block-level link analysis, the block graph is defined as Wb = (1 - t)ZX + tD^-1 U, where t is a constant, U holds the pairwise block coherence (zero if the two blocks are contained in different web pages, otherwise the default degree of coherence from the VIPS algorithm), and D is a diagonal normalization matrix. The block graph attempts to capture the probability of jumping from one block to another. The image graph is then constructed as Wi = (Y^T)WbY. This image graph better reflects the semantic relationships between images and can be used with the data mining techniques discussed earlier.
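A numeric sketch of the matrix constructions with toy values (the matrices here are invented two-block, two-page, two-image examples; a pure-Python multiply keeps it dependency-free):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# 2 blocks, 2 pages, 2 images.
Z = [[1.0, 0.0],    # block 0 links only to page 0
     [0.5, 0.5]]    # block 1 links to both pages
X = [[0.7, 0.3],    # page 0's importance split across its blocks
     [0.0, 1.0]]    # page 1 consists of block 1 only
Y = [[1.0, 0.0],    # block 0 contains image 0
     [0.0, 1.0]]    # block 1 contains image 1

W_b = matmul(Z, X)                           # block graph W = ZX
W_i = matmul(matmul(transpose(Y), W_b), Y)   # image graph Wi = (Y^T) Wb Y
print(W_i)
```

With Y as the identity here, the image graph simply inherits the block graph; with real data, Y spreads each block's weight across the images it contains.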