Wednesday, June 19, 2013

Cluster analysis:
A cluster is a collection of data objects that are similar to one another and dissimilar to objects in other clusters. The quality of a clustering can be measured by the dissimilarity of objects, which can be computed for different types of data such as interval-scaled, binary, categorical, ordinal, and ratio-scaled variables. Vector data can be compared with the cosine measure or the Tanimoto coefficient. Cluster analysis can serve as a stand-alone data mining tool. Clustering methods can be grouped into partitioning methods, hierarchical methods, density-based methods, grid-based methods, model-based methods, methods for high-dimensional data (including frequent-pattern-based methods), and constraint-based methods.
A partitioning method creates an initial set of k partitions and assigns the data points to the nearest clusters; it then iteratively recomputes the cluster centers and reassigns the points. Examples are k-means, k-medoids, and CLARANS. A hierarchical method groups data points into a hierarchy by either bottom-up (agglomerative) or top-down (divisive) construction; iterative relocation can also be applied to the sub-clusters. A density-based method clusters objects according to a density function or the density of neighboring objects; examples are DBSCAN, DENCLUE, and OPTICS. A grid-based method first allocates the objects to a finite number of cells that form a grid structure and then performs clustering on that structure; STING is a grid-based example in which the cells store statistical information. A model-based method hypothesizes a model for each cluster and finds the best fit of the data to the model; self-organizing feature maps are one example. Clustering high-dimensional data has applications such as text documents; there are three approaches: dimension-growth subspace clustering, dimension-reduction projected clustering, and frequent-pattern-based clustering. A constraint-based clustering method groups objects based on application-dependent or user-specified constraints. Outlier detection and analysis can be useful for applications such as fraud detection; outliers are detected based on statistical distribution, distance, density, or deviation.
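As a rough illustration of the partitioning approach, here is a minimal one-dimensional k-means sketch in Python (the function name and the naive seeding are my own, not from the book):

```python
def kmeans_1d(points, k, iters=100):
    # Naive seeding: take the first k distinct values as initial centers.
    centers = sorted(set(points))[:k]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: recompute each center as the mean of its cluster.
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # assignments stabilized
            break
        centers = new_centers
    return centers
```

For example, kmeans_1d([1, 2, 10, 11], 2) converges to the centers 1.5 and 10.5.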
Classification and prediction are forms of data analysis that extract models describing the data. Classification predicts categorical class labels, while prediction models continuous-valued functions.
The same data preprocessing steps discussed earlier apply here as well.
ID3, C4.5, and CART are greedy algorithms for decision tree induction. Each algorithm uses an attribute selection measure to choose the splitting attribute at every non-leaf node.
Naïve Bayes classification and Bayesian belief networks are based on Bayes' theorem. The former assumes that attributes are conditionally independent given the class, while the latter allows conditional independence to hold only among subsets of variables.
A rule-based classifier uses a set of IF-THEN rules for classification. Rules can be generated directly from training data using sequential covering algorithms or associative classification algorithms.
Associative classification uses association mining techniques that search for frequently occurring patterns in large databases.
Classifiers that use training data to build a generalization model up front are called eager learners. By contrast, lazy learners (instance-based learners) simply store the training tuples in pattern space and wait until test data arrive before generalizing. Lazy learners therefore benefit from efficient indexing techniques.
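A lazy learner can be illustrated with a tiny 1-nearest-neighbor sketch in Python (the names and the squared-distance choice are mine; real implementations add indexing to speed up the query-time scan):

```python
def knn_predict(train, query):
    # train: list of (feature_vector, label) pairs; query: a feature vector.
    def dist(a, b):
        # Squared Euclidean distance (square root is unnecessary for ranking).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # All the work happens at query time: scan the stored tuples,
    # no model is built up front.
    _, label = min(train, key=lambda t: dist(t[0], query))
    return label
```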
Linear, non-linear, and generalized linear regression models can be used for prediction. Some non-linear models can be converted to linear problems by transforming the predictor variables.
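A minimal Python sketch of linear regression by least squares, fitting y = a + b*x (the helper name is mine):

```python
def fit_line(xs, ys):
    # Closed-form least squares for simple linear regression y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept makes the line pass through (mx, my)
    return a, b
```

For example, fit_line([1, 2, 3], [2, 4, 6]) recovers the exact line y = 2x.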

Tuesday, June 18, 2013

Some of the common techniques for finding patterns with data mining include association rule mining, which consists of finding frequent itemsets from which strong association rules are generated. Associations can be further analyzed to uncover correlation rules, which convey statistical correlation information.
Frequent pattern mining can be categorized by the completeness of patterns mined, the levels and dimensions of the data, the types of values handled, and the kinds of rules and patterns produced. It can also be classified into frequent itemset mining, sequential pattern mining, structured pattern mining, and so on. Algorithms for frequent itemset mining fall into three types:
1) Apriori-like algorithms: the Apriori algorithm mines frequent itemsets for Boolean association rules. Based on the property that every non-empty subset of a frequent itemset must also be frequent, the kth iteration forms candidate frequent k-itemsets from the frequent (k-1)-itemsets.
2) Frequent-pattern-growth algorithms: FP-growth generates no candidates at all; instead it builds a highly compact data structure (the FP-tree) and mines by fragment growth.
3) Algorithms using the vertical data format: these transform a data set of transactions from the horizontal format (TID-itemset) into the vertical format (item-TID_set), then mine using the Apriori property and additional optimization techniques such as diffsets.
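The Apriori level-wise idea above, join followed by subset-based pruning, can be sketched in Python (function names and the naive support counting are mine; a real implementation would use hash trees or similar):

```python
from itertools import combinations

def apriori(transactions, min_support):
    # transactions: list of frozensets; returns all frequent itemsets.
    items = {i for t in transactions for i in t}
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)
    # Level 1: frequent 1-itemsets.
    frequent = [frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support]
    all_frequent = list(frequent)
    k = 2
    while frequent:
        # Join step: combine frequent (k-1)-itemsets into k-candidates.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # Prune step (Apriori property): drop any candidate that has
        # an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in set(frequent)
                             for s in combinations(c, k - 1))}
        frequent = [c for c in candidates if support(c) >= min_support]
        all_frequent.extend(frequent)
        k += 1
    return all_frequent
```

On the transactions {a,b}, {a,b,c}, {a,c} with min_support 2, this yields {a}, {b}, {c}, {a,b}, and {a,c}; {b,c} is pruned because it appears only once.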
These same methods can be extended to mine closed frequent itemsets, from which the complete set of frequent itemsets can easily be derived; they add techniques such as item merging, sub-itemset pruning, and item skipping.
These techniques can be extended to multilevel association rules and multidimensional association rules.
Techniques for mining multidimensional association rules can be categorized by how they treat quantitative attributes. For example, attributes can be discretized statically using predefined concept hierarchies, or quantitative association rules can be mined by discretizing quantitative attributes dynamically via binning or clustering.
Association rules should be augmented with a correlation measure such as lift, all_confidence, or cosine.
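Lift is simple to compute from counts; a small Python sketch (my own helper, with the counts assumed given): lift(A,B) = P(A and B) / (P(A) * P(B)), where lift > 1 indicates positive correlation, lift < 1 negative correlation, and lift = 1 independence.

```python
def lift(n_total, n_a, n_b, n_ab):
    # n_a, n_b: transactions containing A, B; n_ab: containing both.
    p_a, p_b, p_ab = n_a / n_total, n_b / n_total, n_ab / n_total
    return p_ab / (p_a * p_b)
```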
Constraint-based rule mining refines the search for rules by letting users provide metarules and constraints, which can be antimonotonic, monotonic, succinct, convertible, or inconvertible. Association rules should not be used for prediction without further training.
 

book review

Data Mining: Concepts and Techniques by Jiawei Han and Micheline Kamber

This book mentions the data preprocessing steps as descriptive data summarization, data cleaning, data integration and transformation, data reduction, data discretization and automatic generation of concept hierarchies.
Descriptive data summarization provides the analytical foundation for data preprocessing, using statistical measures such as mean, weighted mean, median, and mode for central tendency; range, quartiles, interquartile range, variance, and standard deviation for dispersion; and histograms, boxplots, quantile plots, scatter plots, and scatter-plot matrices for visual representation.
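For instance, the five-number summary (minimum, Q1, median, Q3, maximum) that underlies a boxplot can be sketched in Python as follows (helper names and the quartile convention, excluding the median from each half, are mine):

```python
def five_number_summary(data):
    def median(xs):
        # Median of a sorted list: middle value, or mean of the two middles.
        n = len(xs)
        mid = n // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2
    xs = sorted(data)
    mid = len(xs) // 2
    q1 = median(xs[:mid])                     # lower half
    q3 = median(xs[mid + (len(xs) % 2):])     # upper half
    return min(xs), q1, median(xs), q3, max(xs)
```

The interquartile range is then simply Q3 - Q1.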
Data cleaning routines fill in missing values, smooth out noise, and identify outliers and inconsistencies in the data.
Data integration combines data from multiple sources into a coherent data store, resolving data value conflicts and semantic heterogeneity along the way.
Data transformation routines convert the data into forms appropriate for mining, involving steps such as normalization.
Data reduction techniques such as data cube aggregation, dimensionality reduction, subset selection and discretization can be used to obtain a reduced representation of data.
Data discretization can involve techniques such as binning, histogram analysis, entropy-based discretization, cluster analysis, and intuitive partitioning. Data preprocessing methods continue to evolve with the size and complexity of the problem.

Data Mining is the discovery of knowledge by finding hidden patterns and associations, constructing analytical models, performing classification and prediction, and presenting the results with visualization tools. Data warehousing helps by providing summarized data. A data warehouse is defined in this book as a subject-oriented, integrated, time-variant, and non-volatile collection of data organized for decision making. A multidimensional data model is used to design the data warehouse; it consists of a data cube with a large set of facts (measures) and a number of dimensions. A data cube consists of a lattice of cuboids. Concept hierarchies organize dimension values into levels of abstraction.
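The lattice of cuboids grows quickly: for n dimensions, where dimension i has L_i concept-hierarchy levels (excluding the virtual top level "all"), a full data cube contains the product of (L_i + 1) cuboids. A tiny Python sketch of that count:

```python
def total_cuboids(levels):
    # levels[i] = number of levels in dimension i's concept hierarchy,
    # not counting the virtual top level "all".
    total = 1
    for L in levels:
        total *= L + 1
    return total
```

For three dimensions of four levels each, this already gives 5 * 5 * 5 = 125 cuboids.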
Data mining can use OLAP queries as well as on-line analytical mining (OLAM).

Monday, June 17, 2013

Q: Find the second largest number in an integer array
A:
int GetSecondLargest(int[] array, uint length)
{
    // Second largest among distinct values; throws when it does not exist.
    if (array == null || length < 2) throw new ArgumentException("need at least two elements");
    int max = array[0];
    int secondmax = 0;
    bool found = false; // becomes true once secondmax holds a real value
    for (uint i = 1; i < length; i++)
    {
        if (array[i] > max)
        {
            // New maximum: the old maximum becomes the second largest.
            secondmax = max;
            max = array[i];
            found = true;
        }
        else if (array[i] < max && (!found || array[i] > secondmax))
        {
            // A value below max that beats the current runner-up
            // (duplicates of max are skipped).
            secondmax = array[i];
            found = true;
        }
    }
    if (!found) throw new InvalidOperationException("all elements are equal");
    return secondmax;
}
 
Question: Given an arbitrary two dimensional matrix of integers where the elements are sorted in increasing order along rows and columns, find a number in the matrix closest to and less than or equal to a given number.
Answer:
// The sorted rows and columns admit a staircase search: start at the
// top-right corner. A value <= number is a candidate, and nothing to its
// left in the row can beat it, so move down; a value > number is too
// large, and so is everything below it in that column, so move left.
// Runs in O(rows + cols).
int? GetClosestLessOrEqual(int[,] matrix, int number)
{
    int rows = matrix.GetLength(0);
    int cols = matrix.GetLength(1);
    int row = 0, col = cols - 1;
    int? best = null; // largest value <= number seen so far
    while (row < rows && col >= 0)
    {
        int current = matrix[row, col];
        if (current <= number)
        {
            if (best == null || current > best) best = current;
            row++; // look for a larger candidate further down
        }
        else
        {
            col--; // too large: discard this column from here down
        }
    }
    return best; // null when every element exceeds number
}

Sunday, June 16, 2013

Microsoft Exchange Architecture continued

Microsoft Exchange Architecture
We will look at the Exchange Server Architecture in detail here:
1) Unified Messaging: The unified messaging server role enables unified messaging for an Exchange Server organization.
Features included are :
a) Outlook Voice Access: the user logs on to the mailbox and accesses it via a voice user interface. An associated UM server checks Active Directory for addresses and access information.
b) Call answering: the UM server plays the individual's greeting and records the voice mail message, which is then sent to a Hub Transport server for delivery.
c) Play on Phone: the user receives a voice mail message and selects Play. Outlook uses HTTPS to communicate with the UM web services and fetch the appropriate message.
d) One Inbox: unified messaging puts voice, video, and audio content in the same mailbox so users can pull it from different devices.
e) The Active Directory UM objects have a dial plan comprising an auto-attendant and user-dictated mailbox policies. A UM IP gateway communicates with the external PBX switchboard.

2) The Mailbox server role: includes the following features:
a) Resource Booking Attendant: enables conference-room booking; enforces duration limits, who can book, and delegates for approval; and provides conflict information. Auto-accept policies and resources are configured using OWA or the Exchange Management Shell.
b) Generate Offline Address Book: OAB files are generated, compressed, and placed on a local share. Administrators can configure how the address books are distributed.
c) Outlook client connection: clients on the intranet can access the mailbox server directly to send and retrieve messages.
d) Exchange administration: an administrator-only computer retrieves Active Directory topology information from the corresponding AD service.
e) Mailbox and public folder databases: private user databases as well as public folder information are stored in Exchange databases as logical containers.
f) Exchange Search: generates a full-text index and indexes new messages and attachments automatically.
g) Calendar Attendant: automatically marks new meeting requests as tentative appointments and deletes out-of-date meeting requests.
h) Messaging records management: implemented through managed folders.

3) Client Access Server Role: includes the following features:
1) Exchange Web Services: these comprise the Autodiscover service, Exchange data service, Availability service, synchronization service, notification service, and managed folder service. Clients using EWS communicate over HTTPS using SOAP, and the services are hosted in IIS. The Autodiscover service lets clients find the Exchange server via AD or DNS. The Exchange data service provides read/write access to mailbox and public folder mail, contact, task, and calendar data. The synchronization and notification services alert clients to mailbox changes and synchronize public folders. The Availability service retrieves free/busy information and meeting-time suggestions.
2) Exchange ActiveSync: used mainly by handheld devices, this service pushes messages from the intranet to devices over cellular/wireless networks secured with SSL. A remote device wipe can also be initiated.
3) Outlook Web Access: a variety of authentication schemes, together with light and full-featured clients, enable the mailbox to be accessed via a browser.
4) CAS proxying and redirection: proxying lets another CAS server handle requests when a user's own is unavailable; redirection informs the user of the correct OWA URL when they try to access a different one.
5) Intranet features include SharePoint and file-share integration, conversion of PDF and Office attachments to HTML, and single sign-on for mailbox server access; most OWA configuration settings are stored in Active Directory.

4) High Availability includes features such as
1) No replication: a failover cluster built using a shared storage array and the Microsoft Cluster service.
2) Replication to a local disk set: partitions data for performance and recovery, adding data redundancy without service redundancy.
3) Replication to a standby server: the source server can be stand-alone; the target must be stand-alone and passive. Logs are copied, verified, and replayed, and the database is copied. There is a built-in delay for log replay activity.
4) Replication within a cluster: a failover cluster built using the Microsoft Windows cluster service, with log copying and replay of the database copy. The Hub Transport server acts as the file share witness.

5) The Hub Transport server role includes features such as:
1) Direct delivery of messages between the source server and the target server, reducing the number of hops. Internally it has a pickup/replay directory, a submission queue, and a categorizer that takes messages from the submission queue, resolves recipients, routes and converts content, processes routed messages, and packages each message before handing it to the delivery queue.
Courtesy : MSDN