Sunday, June 16, 2013

Microsoft Exchange Server Architecture

Microsoft Exchange Server roles are deployed to Active Directory sites, except for the perimeter role. Active Directory must have at least one domain controller in each domain and at least one global catalog server. Generally, there should be a 4:1 ratio of Exchange processors to global catalog server processors. Exchange Server includes several different roles.
These are:
1) Edge Transport Server Role:
The Edge Transport server is deployed in the perimeter network, beyond which lies the Internet (where the Exchange hosted services reside). It provides message hygiene and security for mail that crosses untrusted networks. The EdgeSync service pushes configuration information to the Edge server using secure LDAP, and the Edge server sends messages using SMTP with TLS. The Edge Transport role filters messages using antispam and antivirus agents, which include the connection filter, address rewriting agent, edge rule agent, Sender ID agent, recipient and sender filters, content filter, attachment filter, and virus scanning.
2) Hub Transport Server Role:
The Hub Transport server role handles all e-mail flow inside the organization, applies transport rules, applies journaling policies, and delivers messages to recipients' mailboxes.
3) Client Access Server Role: The Client Access server role supports the Outlook Web Access and ActiveSync client applications, as well as the POP3 and IMAP4 protocols.
4) Mailbox Server Role :
The Mailbox server role hosts mailbox and public folder databases. It also provides advanced scheduling services for Microsoft Office Outlook users, generates the offline address book, and provides the services that calculate e-mail address policies and address lists for recipients.
5) Unified Messaging Server Role:
The Unified Messaging server role combines voice, fax, and e-mail messaging into a single infrastructure. All of these are delivered to the user's inbox, where they can be accessed from a variety of devices. Unified Messaging servers are placed in a central location, and IP gateways are enabled in each branch office.

Management and monitoring components: Exchange management and monitoring are made easy with tools such as the Exchange Management Shell and the Exchange Management Console. These help answer questions such as: Are all Exchange services running? Are all databases active? Do the disks have enough space? Can clients connect with reasonable performance? Are the servers performing efficiently and reliably? Are they configured correctly, and are they secure?

High availability: Microsoft Exchange Server includes built-in features that provide quick recovery, high availability, and site resiliency for Mailbox servers. Availability can be improved with no replication, using the single copy cluster (SCC), or shared storage cluster, feature; with replication within a cluster, using the cluster continuous replication (CCR) feature; with replication to a standby server, using standby continuous replication (SCR); and with replication to a local disk set, using local continuous replication (LCR).
 

Saturday, June 15, 2013

Finding the closest pair of points
Two points are closest based on their Euclidean distance, which is computed as sqrt((x1 - x2)^2 + (y1 - y2)^2). The naive approach would be to compare two points at a time and exhaust all C(n, 2) pairs, taking O(n^2) time.
Instead we can use a divide-and-conquer algorithm whose running time satisfies T(n) = 2T(n/2) + O(n), which solves to O(n log n).
Let us take a subset P of the points in Q and maintain two arrays, each holding all the points in P: array X is sorted by monotonically increasing x-coordinate and array Y is sorted by monotonically increasing y-coordinate. We keep the arrays presorted.
First, the point set P is divided into two sets, PL and PR, by a vertical line such that each side contains roughly half the points. These are stored as two subarrays within X. Similarly, Y is stored as two subarrays, each containing the corresponding points sorted by monotonically increasing y-coordinate.
Next, we find the closest pair of points recursively, first in the left half and then in the right half. The inputs to the first call are the subset PL and the arrays XL and YL, and this is repeated for the right half. The minimum delta of the two closest distances, left and right, is chosen.
Combine: the closest pair is either the pair at distance delta found by one of the recursive calls, or it is a pair of points with one point in PL and the other in PR. To find such points, we create an array Y' with only those points in Y that lie within the 2-delta-wide vertical strip centered on the dividing line, that is, within delta of the line; delta is the minimum distance found in the earlier step. For each point p in Y', we try to find points in Y' that are less than delta apart from p, keeping track of the smallest distance delta' found. Each point has to be compared with only seven others in its delta-by-2-delta rectangle. If delta' < delta, we have found a closer pair that spans the dividing line, and it is the result of the search; otherwise the recursive calls give the points with the closest distance. A sketch of the whole algorithm follows below.
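Here is a compact C# sketch of the algorithm. For brevity it re-sorts the strip by y-coordinate at each level, which gives O(n log^2 n); maintaining the presorted Y array as described above removes the extra log factor. It assumes at least two input points.

using System;
using System.Linq;

static class ClosestPair
{
    static double Distance((double X, double Y) a, (double X, double Y) b)
        => Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));

    public static double Find((double X, double Y)[] points)
    {
        var byX = points.OrderBy(p => p.X).ToArray();   // the presorted array X
        return Solve(byX, 0, byX.Length - 1);
    }

    static double Solve((double X, double Y)[] pts, int lo, int hi)
    {
        if (hi - lo < 3)   // few points: brute force over all pairs
        {
            double best = double.MaxValue;
            for (int i = lo; i <= hi; i++)
                for (int j = i + 1; j <= hi; j++)
                    best = Math.Min(best, Distance(pts[i], pts[j]));
            return best;
        }

        // Divide at the median x-coordinate and recurse on both halves.
        int mid = (lo + hi) / 2;
        double midX = pts[mid].X;
        double delta = Math.Min(Solve(pts, lo, mid), Solve(pts, mid + 1, hi));

        // Combine: collect the points within delta of the dividing line (Y').
        var strip = pts.Skip(lo).Take(hi - lo + 1)
                       .Where(p => Math.Abs(p.X - midX) < delta)
                       .OrderBy(p => p.Y)
                       .ToArray();

        // Each strip point needs comparison only with the points (at most
        // seven) that follow it in y order within distance delta.
        for (int i = 0; i < strip.Length; i++)
            for (int j = i + 1; j < strip.Length && strip[j].Y - strip[i].Y < delta; j++)
                delta = Math.Min(delta, Distance(strip[i], strip[j]));

        return delta;
    }
}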

Finding the convex hull

The convex hull of a set Q of points is the smallest convex polygon for which each point in Q is either on the boundary or inside the polygon. We discuss two techniques for computing it, called Graham's scan and Jarvis's march.
Graham's approach is based on the following steps:
1) Choose the point with the lowest y-coordinate, taking the leftmost such point in case of a tie, as the starting point.
2) Sort the remaining points by polar angle around the starting point, and push the starting point and the first two points in this counterclockwise order onto a stack S.
3) For each of the remaining points in sorted order, while the angle formed by the next-to-top point, the top point, and the candidate point makes a non-left turn, pop the top from the stack;
then push the candidate point onto the stack and proceed.
When all points have been processed, the stack contains the convex hull vertices in counterclockwise order.
The complexity is O(n log n); a sketch follows below.
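Here is a minimal C# sketch of the scan. It assumes integer coordinates, no duplicate points, and at least three non-collinear points; collinear ties in the angular sort are broken by distance so that the farthest point on a ray survives. Cross(o, a, b) > 0 means a left (counterclockwise) turn at a when walking o -> a -> b.

using System;
using System.Collections.Generic;
using System.Linq;

static class GrahamScan
{
    static long Cross((long X, long Y) o, (long X, long Y) a, (long X, long Y) b)
        => (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    static long Dist2((long X, long Y) a, (long X, long Y) b)
        => (a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y);

    public static List<(long X, long Y)> Hull((long X, long Y)[] points)
    {
        // 1) The lowest point (leftmost on a tie) is the starting point.
        var start = points.OrderBy(p => p.Y).ThenBy(p => p.X).First();

        // 2) Sort the remaining points by polar angle around the start,
        //    breaking ties by distance (nearest first).
        var sorted = points.Where(p => !p.Equals(start))
            .OrderBy(p => Math.Atan2(p.Y - start.Y, p.X - start.X))
            .ThenBy(p => Dist2(start, p))
            .ToList();

        // 3) Pop non-left turns; push each candidate point.
        var hull = new List<(long X, long Y)> { start };
        foreach (var p in sorted)
        {
            while (hull.Count >= 2 &&
                   Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0)
                hull.RemoveAt(hull.Count - 1);
            hull.Add(p);
        }
        return hull;   // hull vertices in counterclockwise order
    }
}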
Jarvis's approach, also known as package wrapping (or gift wrapping), is based on the following steps:
We start with the lowest point and go around the hull, building a sequence of vertices such that the next vertex in the convex hull has the smallest polar angle with respect to the previous point; in case of ties we pick the point farthest from the previous point. This builds the right chain up to the highest vertex, and the left chain is then built the same way from the highest vertex back down to the start, again breaking ties by choosing the farthest such point.
The complexity is O(nh), where h is the number of hull vertices; a sketch follows below.
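A minimal C# sketch of the march, under the same assumptions as above (no duplicate points, not all points collinear). Rather than tracking the two chains explicitly, the inner loop keeps the candidate with no point to its right, which wraps the hull counterclockwise; collinear ties go to the farther point.

using System;
using System.Collections.Generic;
using System.Linq;

static class JarvisMarch
{
    static long Cross((long X, long Y) o, (long X, long Y) a, (long X, long Y) b)
        => (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    static long Dist2((long X, long Y) a, (long X, long Y) b)
        => (a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y);

    public static List<(long X, long Y)> Hull((long X, long Y)[] points)
    {
        // The lowest point (leftmost on a tie) is certainly on the hull.
        var start = points.OrderBy(p => p.Y).ThenBy(p => p.X).First();
        var hull = new List<(long X, long Y)>();
        var on = start;
        do
        {
            hull.Add(on);
            // Wrap: replace the candidate whenever some point lies to its
            // right; on a collinear tie, prefer the farther point.
            var candidate = points[0];
            foreach (var p in points)
            {
                if (candidate.Equals(on)) { candidate = p; continue; }
                long c = Cross(on, candidate, p);
                if (c < 0 || (c == 0 && Dist2(on, p) > Dist2(on, candidate)))
                    candidate = p;
            }
            on = candidate;
        } while (!on.Equals(start));
        return hull;   // hull vertices in counterclockwise order
    }
}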

Friday, June 14, 2013

Computational geometry is the branch of computer science that studies algorithms for solving geometric problems. The input to a problem in this area is typically a set of geometric objects, such as a set of points, a set of line segments, or the vertices of a polygon in counterclockwise order.
To determine whether a set of n line segments contains any intersections, we review a technique called sweeping.
Before we discuss sweeping, let us use the following primitives:
Two consecutive segments p0p1 and p1p2 turn right or left at p1 according to whether the cross product (p2 - p0) x (p1 - p0) is positive or negative.
Two line segments intersect each other if each segment straddles the line containing the other, or if an endpoint of one segment lies on the other segment.
segments-intersect(p1, p2, p3, p4)
We determine whether the line segments straddle each other by computing the direction of each endpoint relative to the other segment and checking that the two directions for each segment have opposite signs. If any of the directions is zero, the corresponding endpoint is collinear with the other segment, and we then check whether it actually lies on that segment. A sketch follows below.
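A direct C# rendering of this test (a sketch). Direction computes the cross product (pk - pi) x (pj - pi), matching the turn test above, and OnSegment handles the collinear boundary cases.

using System;

static class Geometry
{
    static double Direction((double X, double Y) pi, (double X, double Y) pj,
                            (double X, double Y) pk)
        => (pk.X - pi.X) * (pj.Y - pi.Y) - (pj.X - pi.X) * (pk.Y - pi.Y);

    static bool OnSegment((double X, double Y) pi, (double X, double Y) pj,
                          (double X, double Y) pk)
        => Math.Min(pi.X, pj.X) <= pk.X && pk.X <= Math.Max(pi.X, pj.X)
        && Math.Min(pi.Y, pj.Y) <= pk.Y && pk.Y <= Math.Max(pi.Y, pj.Y);

    public static bool SegmentsIntersect(
        (double X, double Y) p1, (double X, double Y) p2,
        (double X, double Y) p3, (double X, double Y) p4)
    {
        double d1 = Direction(p3, p4, p1);
        double d2 = Direction(p3, p4, p2);
        double d3 = Direction(p1, p2, p3);
        double d4 = Direction(p1, p2, p4);

        // Each segment straddles the line containing the other.
        if (((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0)) &&
            ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0)))
            return true;

        // Boundary cases: an endpoint is collinear with and lies on the other segment.
        if (d1 == 0 && OnSegment(p3, p4, p1)) return true;
        if (d2 == 0 && OnSegment(p3, p4, p2)) return true;
        if (d3 == 0 && OnSegment(p1, p2, p3)) return true;
        if (d4 == 0 && OnSegment(p1, p2, p4)) return true;
        return false;
    }
}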
Now we look at the sweeping technique that describes whether any two line segments in a set of segments intersect.
In sweeping, an imaginary vertical sweep line passes through the given set of geometric objects, usually from the left to the right.  This technique provides a way of ordering the geometric objects by placing them in a dynamic data structure. We further assume that no input segment is vertical and that no three input segments intersect at a single point.
The first assumption tells us that any segment crossing a vertical sweep line intersects it at only one point.
Where the line segments intersect the sweep line, the intersection points are comparable and are ordered by increasing y-coordinate, which gives a total preorder on the segments crossing that sweep line. Where two segments intersect each other, this order reverses: for any sweep line just to the left of their intersection point one segment lies below the other, and for any sweep line just to the right of it the two segments swap places in the order.
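Putting these pieces together, here is a simplified C# sketch of the any-segments-intersect sweep under the stated assumptions (no vertical segments, general position). For clarity a plain sorted list stands in for the balanced search tree normally used for the sweep-line status, so each insertion costs O(n) rather than O(log n); the sketch reuses the SegmentsIntersect routine from the sketch above.

using System;
using System.Collections.Generic;
using System.Linq;

class Seg
{
    public double Ax, Ay, Bx, By;   // stored left endpoint first: Ax < Bx

    // y-coordinate where this segment crosses the sweep line x = sx
    public double YAt(double sx) => Ay + (By - Ay) * (sx - Ax) / (Bx - Ax);

    public bool Intersects(Seg o) =>
        Geometry.SegmentsIntersect((Ax, Ay), (Bx, By), (o.Ax, o.Ay), (o.Bx, o.By));
}

static class SweepLine
{
    public static bool AnySegmentsIntersect(Seg[] segs)
    {
        // Event points: all endpoints, swept left to right, with left
        // endpoints processed before right endpoints at equal x.
        var events = segs
            .SelectMany(s => new[] { (x: s.Ax, left: true, seg: s),
                                     (x: s.Bx, left: false, seg: s) })
            .OrderBy(e => e.x).ThenBy(e => e.left ? 0 : 1)
            .ToList();

        // Sweep-line status: segments crossing the line, bottom to top.
        var status = new List<Seg>();
        foreach (var e in events)
        {
            if (e.left)
            {
                int i = 0;
                while (i < status.Count && status[i].YAt(e.x) < e.seg.YAt(e.x)) i++;
                status.Insert(i, e.seg);
                // A new segment can only first meet an immediate neighbor.
                if (i > 0 && status[i - 1].Intersects(e.seg)) return true;
                if (i + 1 < status.Count && status[i + 1].Intersects(e.seg)) return true;
            }
            else
            {
                int i = status.IndexOf(e.seg);
                // Removing a segment makes its neighbors adjacent; check them.
                if (i > 0 && i + 1 < status.Count &&
                    status[i - 1].Intersects(status[i + 1])) return true;
                status.RemoveAt(i);
            }
        }
        return false;
    }
}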
 
Review of Text analytics 2011 talk by Seth Grimes
Text analysis adds value where transactional information stops. From the information retrieval perspective, people want to publish, manage, archive, index and search, categorize and classify, and extract metadata. Text analytics adds semantic understanding of named entities, pattern-based entities, concepts, facts and relationships, concrete and abstract attributes, subjectivity, and so on. Text analytics applications let users search terms, retrieve material from large-scale structures, search features such as entities or topics, retrieve materials such as facts and relationships, group results based on topics, and visually explore information. Some examples are SiloBreaker, FirstRain, Bing, Google, etc. Text analytics also includes metadata and metadata population. Search results are measured based on precision and recall. Accuracy is measured with the combination of the two in a term called the f-score, which is defined as 2 * (precision * recall) / (precision + recall). Typical steps in text analytics include: identify and retrieve documents for analysis; apply techniques to discern, tag, and extract entities; and apply techniques to classify documents and organize extracted features. BeyeNetwork and Ranks.NL are some examples of these. Applications such as Connexor and VisuWords display part-of-speech tagging and ontology.
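As a quick illustration of the formula, a small helper (hypothetical, not from the talk) can compute the f-score, treating precision and recall as fractions in [0, 1]:

static double FScore(double precision, double recall)
{
    // Harmonic mean of precision and recall; defined as zero when both are zero.
    if (precision + recall == 0) return 0;
    return 2 * (precision * recall) / (precision + recall);
}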

Thursday, June 13, 2013

Slide review of text analytics user perspectives on solutions and providers by Seth Grimes (continued)
Text analysis involves statistical methods for a relative measure of the significance of words, first for individual words and then for sentences. The vector space model is used to represent documents for information retrieval, classification, and other tasks. The text content of a document is viewed as an unordered bag of words, and weights such as TF-IDF (term frequency-inverse document frequency) give each term its coordinate in the vector space, so that distances between document vectors can be compared. Additional analytic techniques to group the text are used to identify the salient topics.
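As a toy illustration of the bag-of-words model (a sketch; the whitespace tokenizer and this particular TF-IDF variant are simplifying assumptions, not from the slides):

using System;
using System.Collections.Generic;
using System.Linq;

static class Tfidf
{
    // TF-IDF weight vector for one document, assumed to be a member of corpus
    // (so every term's document frequency is at least 1).
    public static Dictionary<string, double> Vector(string doc, string[] corpus)
    {
        var terms = doc.ToLower().Split(' ');
        return terms.GroupBy(t => t).ToDictionary(
            g => g.Key,
            g =>
            {
                double tf = (double)g.Count() / terms.Length;
                int df = corpus.Count(d => d.ToLower().Split(' ').Contains(g.Key));
                return tf * Math.Log((double)corpus.Length / df);
            });
    }

    // Cosine similarity between two sparse term-weight vectors.
    public static double CosineSimilarity(Dictionary<string, double> a,
                                          Dictionary<string, double> b)
    {
        double dot = a.Keys.Intersect(b.Keys).Sum(k => a[k] * b[k]);
        double na = Math.Sqrt(a.Values.Sum(v => v * v));
        double nb = Math.Sqrt(b.Values.Sum(v => v * v));
        return na == 0 || nb == 0 ? 0 : dot / (na * nb);
    }
}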
However, the limitation of statistical methods is that they have a hard time making sense of nuanced human language. Hence natural language processing is applied, where one step or a sequence (a pipeline) of resolving steps is applied to the text. These include:
tokenization - identification of distinct elements
stemming - identification of variants of word bases
lemmatization - use of stemming together with analysis of context and parts of speech
entity recognition - lookup in lexicons and gazetteers, and use of pattern matching
tagging - XML markup of distinct elements
Software using the above approaches has found applications in business, scientific, and research problems. The application domains include brand management, competitive intelligence, content management, customer service, e-discovery, financial services, compliance, insurance, law enforcement, life sciences, product/service design, research, voice of the customer, and so on. Text analytics solution providers include young and mature software vendors as well as software giants. Early adopters have very high expectations for the return on investment from text analytics. Among the findings of a survey of adopters are the following:
The bulk of text analytics users have been using it for four years or more.
Primary uses include brand management, competitive intelligence, customer experience, voice of the customer, and research; together these represent more than 50% of all applications.
Textual information sources are primarily blogs, news articles, e-mails, forums, surveys and technical literature.
Return on investment is measured by increased sales to existing customers, higher satisfaction ratings, new-customer acquisition, and higher customer retention.
A third of all spenders had budgets below $50,000, and a quarter used open source.
Software users also expressed likes and dislikes, primarily around flexibility, effectiveness, accuracy, and ease of use.
More than 70% of the users wanted the ability to extend the analytics to named entities such as people, companies, geographic locations, brands, ticker symbols, etc.
The ability to use specialized dictionaries, taxonomies, or extraction rules was considered more important than other capabilities.
This study was conducted in 2009.
 

A moving point within a bounded square

Q: If you are given a bounded square with Cartesian coordinates of 100 x 100 and a point within the square that moves in straight lines and rebounds off the edges, write a function that gives the new position of the point on each update.
A: Here's the code for the update method, where previous and current are two points on the board.

Point Update ( Point previous, Point current )
{
   var next = new Point();

   // A stationary point has no direction yet, so start it off at a random
   // position. (Random is constructed here for brevity; in real code reuse
   // one instance rather than creating it on every call.)
   if (previous.X == current.X && previous.Y == current.Y)
   {
      var random = new Random();
      next.X = random.Next(100);   // Next(100) yields 0..99
      next.Y = random.Next(100);
      return next;
   }

   // Continue along the current direction; the step components may be
   // negative, and a zero step on an axis leaves that coordinate unchanged.
   int dx = current.X - previous.X;
   int dy = current.Y - previous.Y;
   next.X = current.X + dx;
   next.Y = current.Y + dy;

   // Rebound off the left or right edge by reversing the x step, and off
   // the top or bottom edge by reversing the y step. This assumes the step
   // is small enough that the reflected position lands back on the board.
   if (next.X > 99 || next.X < 0) next.X = current.X - dx;
   if (next.Y > 99 || next.Y < 0) next.Y = current.Y - dy;

   return next;
}
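For instance, a hypothetical driver, assuming Point is a simple value type with integer X and Y properties (such as System.Drawing.Point), might advance the point for a few ticks like this:

var previous = new Point { X = 0, Y = 0 };
var current = new Point { X = 1, Y = 1 };
for (int i = 0; i < 10; i++)
{
    var next = Update(previous, current);
    previous = current;
    current = next;
}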