Wednesday, January 8, 2014

Some T-SQL queries
SELECT t.name AS tour_name, COUNT(*) AS falls_count
FROM upfall u INNER JOIN trip t
ON u.id = t.stop
GROUP BY t.name
HAVING COUNT(*) > 6
Aggregate functions - AVG(), MAX(), MIN(), COUNT(), SUM(), STDEV(), VAR(); MEDIAN() and VARIANCE() are Oracle names - T-SQL offers PERCENTILE_CONT(0.5) and VAR() instead.

--Summarizing rows with rollup
SELECT t.name AS tour_name, c.name AS county_name, COUNT(*) AS falls_count
FROM upfall u INNER JOIN trip t
ON u.id = t.stop
INNER JOIN county c ON u.county_id = c.id
GROUP BY t.name, c.name WITH ROLLUP

SELECT t.name AS tour_name,
c.name AS county_name,
COUNT(*) AS falls_count,
GROUPING(t.name) AS n1, -- 1 when CUBE has aggregated away tour_name
GROUPING(c.name) AS n2 -- 1 when CUBE has aggregated away county_name
FROM upfall u INNER JOIN trip t
ON u.id = t.stop
INNER JOIN county c
ON u.county_id = c.id
WHERE t.name = 'Munising'
GROUP BY t.name, c.name WITH CUBE

--RECURSIVE QUERIES
WITH recursiveGov
(level, id, parent_id, name, type) AS
(SELECT 1, parent.id, parent.parent_id, parent.name, parent.type
FROM gov_unit parent
WHERE parent.parent_id IS NULL
UNION ALL
SELECT parent.level + 1, child.id, child.parent_id, child.name, child.type
FROM recursiveGov parent INNER JOIN gov_unit child
ON child.parent_id = parent.id)
SELECT level, id, parent_id, name, type
FROM recursiveGov

CREATE TABLE country(
id INT IDENTITY(1,1),
name VARCHAR(15) NOT NULL,
code VARCHAR(2) NOT NULL DEFAULT 'CA'
CONSTRAINT code_check
CHECK (code IN ('CA', 'US')),
indexed_name VARCHAR(15),
CONSTRAINT country_pk
PRIMARY KEY (id),
CONSTRAINT country_fk01
FOREIGN KEY (name, code)
REFERENCES parent_example (name, country),
CONSTRAINT country_u01
UNIQUE (name, code),
CONSTRAINT country_index_upper
CHECK (indexed_name = UPPER(name))
);

Tuesday, January 7, 2014

This post is about an interview question that is a coding problem:
bool IsMatch(string input, string pattern);
string input can be "ABCDBDXYZ"
string pattern can be "A*B?D*Z"
* and ? are the usual wildcards: * matches zero or more characters and ? matches exactly one.
Here are some possible implementations:
using System.Text.RegularExpressions;

bool IsMatch(string input, string pattern)
{
    // The wildcards must be translated to regex syntax first: '*' => ".*", '?' => ".".
    string regex = "^" + Regex.Escape(pattern).Replace(@"\*", ".*").Replace(@"\?", ".") + "$";
    return Regex.IsMatch(input, regex);
}
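An alternative that avoids Regex entirely is a minimal recursive sketch (the Match helper name is illustrative):

bool IsMatch(string input, string pattern)
{
    return Match(input, 0, pattern, 0);
}

bool Match(string s, int i, string p, int j)
{
    if (j == p.Length) return i == s.Length;     // pattern consumed: input must be too
    if (p[j] == '*')
    {
        // '*' can absorb zero or more characters: try every split point.
        for (int k = i; k <= s.Length; k++)
            if (Match(s, k, p, j + 1)) return true;
        return false;
    }
    if (i == s.Length) return false;             // literals remain but input is exhausted
    // '?' matches any single character; otherwise the characters must be equal.
    return (p[j] == '?' || p[j] == s[i]) && Match(s, i + 1, p, j + 1);
}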

A brief review of the Programming Interviews Exposed book.
Bitwise operations - OR(any), AND(both) and XOR(same=0,different=1)
Shift operations - in base 2, a right shift divides by 2 and a left shift multiplies by 2
Rectangle overlap (with ul = upper-left, lr = lower-right, y increasing upward) is written as a.ul.x <= b.lr.x && a.ul.y >= b.lr.y && a.lr.x >= b.ul.x && a.lr.y <= b.ul.y
union {
    int theInteger;
    char singleByte;
} endianTest;
endianTest.theInteger = 1;
return endianTest.singleByte; /* 1 on little-endian, 0 on big-endian */
Permutation generation proceeds with an array of booleans, one per element, denoting whether that element is already used.
Combination generation proceeds by varying the start index, as in the sketch below.
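A minimal C# sketch of both ideas (the Permute and Combine helper names are illustrative):

using System.Collections.Generic;

static void Permute(char[] items, bool[] used, List<char> current, List<string> results)
{
    if (current.Count == items.Length) { results.Add(new string(current.ToArray())); return; }
    for (int i = 0; i < items.Length; i++)
    {
        if (used[i]) continue;      // skip elements already placed
        used[i] = true;
        current.Add(items[i]);
        Permute(items, used, current, results);
        current.RemoveAt(current.Count - 1);
        used[i] = false;            // backtrack
    }
}

static void Combine(char[] items, int start, List<char> current, List<string> results)
{
    if (current.Count > 0) results.Add(new string(current.ToArray()));
    for (int i = start; i < items.Length; i++)   // the varying start prevents revisiting earlier elements
    {
        current.Add(items[i]);
        Combine(items, i + 1, current, results);
        current.RemoveAt(current.Count - 1);
    }
}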
Tackle graphical and spatial problems by drawing pictures and stepping through them over time.
Tackle nodes and lists with previous, current and next variables during traversal.
Trees and Graphs are best tackled with traversals.

Monday, January 6, 2014

The red-black tree insert is very much like an ordinary tree insert, except that the new node is colored red before the fix-up.
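A minimal C# sketch of that insert path, following the CLRS case analysis (names are illustrative; nulls serve as black leaves):

class RBNode
{
    public int Key;
    public bool Red;
    public RBNode Parent, Left, Right;
}

class RBTree
{
    RBNode root;

    static bool IsRed(RBNode n) => n != null && n.Red;

    public void Insert(int key)
    {
        // Ordinary BST insert; the new node starts out red, then we fix up.
        var z = new RBNode { Key = key, Red = true };
        RBNode y = null, x = root;
        while (x != null) { y = x; x = key < x.Key ? x.Left : x.Right; }
        z.Parent = y;
        if (y == null) root = z;
        else if (key < y.Key) y.Left = z;
        else y.Right = z;
        InsertFixup(z);
    }

    void InsertFixup(RBNode z)
    {
        while (IsRed(z.Parent))
        {
            RBNode g = z.Parent.Parent;   // exists: the parent is red, so it is not the root
            if (z.Parent == g.Left)
            {
                RBNode uncle = g.Right;
                if (IsRed(uncle))         // case 1: red uncle - recolor and move up
                { z.Parent.Red = false; uncle.Red = false; g.Red = true; z = g; }
                else
                {
                    if (z == z.Parent.Right)          // case 2: rotate z into line
                    { z = z.Parent; LeftRotate(z); }
                    z.Parent.Red = false;             // case 3: recolor, rotate grandparent
                    g.Red = true;
                    RightRotate(g);
                }
            }
            else                          // mirror image with left and right exchanged
            {
                RBNode uncle = g.Left;
                if (IsRed(uncle))
                { z.Parent.Red = false; uncle.Red = false; g.Red = true; z = g; }
                else
                {
                    if (z == z.Parent.Left)
                    { z = z.Parent; RightRotate(z); }
                    z.Parent.Red = false;
                    g.Red = true;
                    LeftRotate(g);
                }
            }
        }
        root.Red = false;                 // the root is always black
    }

    void LeftRotate(RBNode x)
    {
        RBNode y = x.Right;               // y moves into x's place
        x.Right = y.Left;
        if (y.Left != null) y.Left.Parent = x;
        y.Parent = x.Parent;
        if (x.Parent == null) root = y;
        else if (x == x.Parent.Left) x.Parent.Left = y;
        else x.Parent.Right = y;
        y.Left = x;
        x.Parent = y;
    }

    void RightRotate(RBNode x)            // mirror of LeftRotate
    {
        RBNode y = x.Left;
        x.Left = y.Right;
        if (y.Right != null) y.Right.Parent = x;
        y.Parent = x.Parent;
        if (x.Parent == null) root = y;
        else if (x == x.Parent.Right) x.Parent.Right = y;
        else x.Parent.Left = y;
        y.Right = x;
        x.Parent = y;
    }
}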
The red-black tree delete fix-up considers four cases for the node x being fixed:
x's sibling w is red => color w black, color x's parent red, and left-rotate the parent
x's sibling w is black and both of w's children are black => color w red (and move x up to its parent)
x's sibling w is black, w's left child is red and its right child is black => exchange the colors of w and its left child and right-rotate w
x's sibling w is black and w's right child is red => give w its parent's color, color the parent and w's right child black, and left-rotate the parent
Now on to networking technologies:
SNMP - manages state such as address translation tables, routing tables, TCP connection state, etc. using the MIB
Resolution occurs with different levels of identifiers: domain names, IP addresses, and physical network addresses. First, users specify domain names when interacting with the application. Second, the application engages DNS to translate domain names to IP addresses. Lastly, IP engages ARP to translate the next-hop IP address to a physical address.
A TLS session involves client and server communication: the server sends its certificate, containing its public key, to the client; the client sends session keys, initialization vectors, etc. encrypted with that public key; and the server decrypts those messages with its private key.
A certificate is a document with a digital signature and is signed by a Certification Authority. Keyed MD5 produces a cryptographic checksum for a message as m + MD5(m + k)
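As a sketch, that checksum can be computed with .NET's MD5 (the helper name is illustrative):

using System.Linq;
using System.Security.Cryptography;

static byte[] KeyedMd5(byte[] m, byte[] k)
{
    using (var md5 = MD5.Create())
    {
        // The sender transmits m followed by MD5(m + k); only a holder of k can recompute it.
        return md5.ComputeHash(m.Concat(k).ToArray());
    }
}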
Public key authentication happens with A sending E(x, Public-B) to B and B sending back the decrypted x.
Kerberos provides trusted third-party authentication, mediating the handshake between the client and the server.
Transmission Control Protocol provides ordered, reliable, error-free transmission with flow control and congestion management. It does this with sequence numbers, a sliding window and window scaling. Sequence numbers, selective acknowledgements, the receive window field and persist timers help manage the flow.

Sunday, January 5, 2014

I'm preparing for an interview, so I will post frequently as a recap of the things I revised.
A revisit of Active Directory configurations, DNS and networking technologies.
Active Directory site topology and replication -
Replication is usually from a single master server to subordinate servers.
Active Directory offers multimaster replication, which avoids a single point of failure.
The KCC (Knowledge Consistency Checker) tool sets up and manages the replication connections.
KCC uses two modes - intrasite and intersite.
intrasite is designed to create a minimum-latency ring topology between DCs;
intersite uses a spanning tree algorithm with site link metrics.
Replication flows are set up between sites and DFS shares.
By default there's one site created automatically.
Multiple sites can be defined for a single location to segregate resources.
AD sites are defined in terms of a collection of well-connected AD subnets.
Site links connect sites, and a DC uses them to determine which sites it covers (including its own) for user logons.
If not all sites are connected by site links, site link bridges are used instead.
Naming contexts are replicated by a domain controller by maintaining a high-watermark table
- one each for the schema, configuration and domain NCs.
This is based on the highest USNs of the updates.
Conditional Forwarding, delegation options and Dynamic DNS.
Conditional forwarding is the feature that lets name resolution for specific domains be passed to DNS servers other than the local one.
DNS servers can be primary or secondary
the primary stores all the records;
the secondary gets its contents from the primary.
The contents of a zone file are stored hierarchically.
This structure can be replicated among all the DCs.
It is updated via LDAP operations or DDNS (which requires AD integration).
A common misconfiguration issue is the island issue, which occurs when the IP address for a DNS server changes
and the update registers only locally. To get a global update instead, DCs must point to a DNS server other than themselves.
Delegation options are granted to DNS servers or DCs.
Simple delegation is when DNS namespaces are delegated to DCs and the DC hosts a DNS zone.
The records in a DNS server, as opposed to a DC, are autonomously managed.
DNS servers need to allow DDNS updates by the DC;
the DC performs DDNS so that its records in the server stay current.
Support and maintenance are minimal with DDNS.
A standalone AD is used to create test or lab networks.
A forest is created, a DC is assigned, and the DNS service is installed.
A DNS zone is added, and unresolved requests are forwarded to an existing corporate server.
The primary DNS for all clients points to the DC.
Background loading of DNS zones makes it even easier to load large zones while keeping each zone available for DNS updates and queries.

Algorithms and data structures:
1) Quicksort  - defined as
Partition
 Quicksort one side
 Quicksort other side

Partition works something like this:
x is the value of the partition candidate A[r] in A[p..r]
i and j indexes are maintained
j iterates from the first element to the last but one
i lags behind j
i and j bound the values higher than the partition candidate x
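A C# sketch of this scheme (the Lomuto partition, with A[r] as the candidate):

static void Quicksort(int[] A, int p, int r)
{
    if (p >= r) return;
    int q = Partition(A, p, r);
    Quicksort(A, p, q - 1);      // quicksort one side
    Quicksort(A, q + 1, r);      // quicksort the other side
}

static int Partition(int[] A, int p, int r)
{
    int x = A[r];                    // partition candidate
    int i = p - 1;                   // i lags behind j
    for (int j = p; j < r; j++)      // j runs from the first element to the last but one
        if (A[j] <= x)
        {
            i++;
            (A[i], A[j]) = (A[j], A[i]);   // grow the region of values <= x
        }
    (A[i + 1], A[r]) = (A[r], A[i + 1]);   // drop the candidate between the two regions
    return i + 1;
}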

Radix sort - based on significant digits, working from the least significant (rightmost) digit to the most significant.
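A short sketch for non-negative integers, using a stable counting sort per decimal digit (assumes using System and System.Linq):

static void RadixSort(int[] a)
{
    if (a.Length == 0) return;
    int max = a.Max();
    for (int exp = 1; max / exp > 0; exp *= 10)
    {
        int[] output = new int[a.Length];
        int[] count = new int[10];
        foreach (int v in a) count[(v / exp) % 10]++;            // histogram of the current digit
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];   // prefix sums give end positions
        for (int i = a.Length - 1; i >= 0; i--)                  // walking backwards keeps it stable
            output[--count[(a[i] / exp) % 10]] = a[i];
        Array.Copy(output, a, a.Length);
    }
}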

Insertion sort - think sorted list or arranging a deck of cards.
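For reference, the card-arranging loop in C#:

static void InsertionSort(int[] a)
{
    for (int i = 1; i < a.Length; i++)
    {
        int key = a[i];              // the next card to place
        int j = i - 1;
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];         // shift larger cards right
            j--;
        }
        a[j + 1] = key;              // insert into the sorted prefix
    }
}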
Merge sort -
Mergesort(A, p, r):
q = (p + r) / 2
Mergesort(A, p, q)
Mergesort(A, q+1, r)
Merge(A, p, q, r)
Merge copies A[p..q] into L and A[q+1..r] into R (with sentinels at their ends), then merges bottom up, at each step taking the smaller head:
for k from p to r
if L[i] <= R[j]
A[k] = L[i]; i = i + 1
else
A[k] = R[j]; j = j + 1

HeapSort O(nlogn)
uses a heap
Parent(i) = i/2
Left(i) = 2i
Right(i) = 2i + 1
for i from length(A)/2 downto 1
do Max-Heapify(A, i) -- builds the max-heap bottom-up
Max-Heapify is recursive
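A C# sketch with the array shifted to 0-based indexing, so Left(i) = 2i + 1 and Right(i) = 2i + 2:

static void MaxHeapify(int[] A, int i, int heapSize)
{
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heapSize && A[l] > A[largest]) largest = l;
    if (r < heapSize && A[r] > A[largest]) largest = r;
    if (largest != i)
    {
        (A[i], A[largest]) = (A[largest], A[i]);
        MaxHeapify(A, largest, heapSize);    // recurse into the disturbed subtree
    }
}

static void HeapSort(int[] A)
{
    for (int i = A.Length / 2 - 1; i >= 0; i--)   // build the max-heap
        MaxHeapify(A, i, A.Length);
    for (int end = A.Length - 1; end > 0; end--)
    {
        (A[0], A[end]) = (A[end], A[0]);          // move the max to its final slot
        MaxHeapify(A, 0, end);                    // restore the heap on the remainder
    }
}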

Tree-Successor: return the minimum of the right subtree, or keep climbing the parents until the given node is descended from the left.

Tree-Predecessor: return the maximum of the left subtree, or keep climbing until the given node is descended from the right.
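A sketch of Tree-Successor following that description (Tree-Predecessor is the mirror image), assuming nodes carry parent links:

class TreeNode
{
    public int Key;
    public TreeNode Left, Right, Parent;
}

static TreeNode Minimum(TreeNode x)
{
    while (x.Left != null) x = x.Left;
    return x;
}

static TreeNode Successor(TreeNode x)
{
    if (x.Right != null) return Minimum(x.Right);  // minimum of the right subtree
    TreeNode y = x.Parent;
    while (y != null && x == y.Right)              // climb until we arrive from a left child
    {
        x = y;
        y = y.Parent;
    }
    return y;
}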

Tree-Delete uses Tree-Successor.
Tree-Insert walks down the tree to find the leaf position where the key belongs, then inserts there.
Tree-Delete depends on how many children the target z has. If z has no children, we just remove it. If z has only one child, we splice out z. If z has two children, we splice out its successor y, which has at most one child, and copy its key into z.
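A sketch of that case analysis, reusing the TreeNode and Successor helpers above; it returns the (possibly new) root:

static TreeNode Delete(TreeNode root, TreeNode z)
{
    // With two children, splice out the successor instead; it has at most one child.
    TreeNode y = (z.Left == null || z.Right == null) ? z : Successor(z);
    TreeNode x = y.Left ?? y.Right;                 // y's only child, if any
    if (x != null) x.Parent = y.Parent;
    if (y.Parent == null) root = x;                 // y was the root
    else if (y == y.Parent.Left) y.Parent.Left = x;
    else y.Parent.Right = x;
    if (y != z) z.Key = y.Key;                      // copy the spliced successor's key into z
    return root;
}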
Red-black tree insert and delete are even more interesting.

We cover the data warehouse design review checklist in this post. Design reviews are very helpful for ensuring quality in the operational environment. They identify errors before coding and save costs. A design review considers such things as transaction performance, batch window adequacy, system availability, capacity, project readiness and user requirements satisfaction. The benefits of a design review become obvious when there is less code churn. A design review is applicable to both operational systems and the data warehouse, but there are some differences.
In the data warehouse case, the system is not built using an SDLC as with operational systems.
In the operational environment, development is done one application at a time. In the data warehouse environment, they are built a subject area at a time.
In the operational environment, there are firm requirements whereas in the data warehouse environment, the processing requirements are not known at the outset of DSS development.
In the operational environment, transaction response time is critical.
In the operational environment, the input comes from external systems
In the operational environment, the data is current-value, whereas in the warehouse it is time-variant.
A design review in the data warehouse is done as soon as a major subject area has been designed.
Participants of a design review typically include data administration, database administration, programmers, DSS analysts, end users other than DSS analysts, operations, system support and management. End users and DSS analysts matter more than others.
The design review can table any item for discussion, especially controversial ones. The design review helps ensure success.
The data warehouse design review should result in the following:
A list of the issues encountered, and recommendations for actions
A documentation of where the system is in the design, as of the moment of the review.
A list of action items that are specific and precise.
Typically a review includes both a facilitator and a recorder. The facilitator is not the leader, so that the review can have maximum input. The facilitator brings in an external perspective and can offer criticism constructively.
The checklist for the design review includes all of the points discussed above and more. The complete list is available in the Building the Data Warehouse book.

Saturday, January 4, 2014

We continue our posts on data warehouses with a discussion of the end user community. The end users have a lot of say in how the data warehouse shapes up. They have a lot of diversity, so we recognize four types - the farmers, the explorers, the miners and the tourists. The farmer is the most predominant type of user found in the data warehouse environment. This is a predictable user in that the queries submitted are short, go directly for the data, recur at the same time of the week, and usually succeed in finding the data.
The explorer is the user community that does not know what it wants before the exploration process begins, and hence takes more time and covers a larger volume of data in its searches. The exploration proceeds in a heuristic mode. In many cases the exploration looks for something and never finds it, but there are also cases when the discoveries are especially interesting.
The miner is the user community that digs into piles of data to test assertions. Assertions are tested for the strength of their support in the data. Usually this user community uses statistical tools. The miner may work closely with the explorer: the explorer creates assertions and hypotheses, and the miner determines their strength. Usually this community has to have mathematical skills.
The tourist is the user community that knows what to find where. This user has a breadth of knowledge as opposed to the depth of the knowledge. This user is familiar with both formal and informal systems. He or she knows the metadata and the indexes, the structured data and the unstructured data, the source code and how to read and interpret it.
There are different types of data targeted by these end users. If data were placed in bands by probability of use in the data warehouse, the farmers would predictably target only the small top band of this data, while the explorers would reach all over the data.
Cost justification and ROI analysis could be described for these user communities as follows:
The farmer's value and probability of success are very high. His queries are useful in decision support. The explorer's success rate is not that high, although his finds are much more valuable than the regular queries performed by the farmers. The warehouse therefore should present the ROI from the farmer community instead of the explorers.