I frankly don't get the drive towards in-memory applications. Please don't mistake me: I admire the thinking and the advantages. Faster running times, the ability to use code generation, a variety of data stores to choose from, code-only applications, support from solid-state devices, and so on are all great, and I have no objections to any of them. My confusion is only over whether there should be an ecosystem to support such applications. We already have an ecosystem of applications using external storage and processing, and yes, we have .NET. Unless there is value only for very specific dedicated technology, such as for niche segments, should there be something else to support these?
Saturday, January 19, 2013
Friday, January 18, 2013
Asp.net membership providers
Asp.Net uses a provider model design pattern: a different back-end provider can be "plugged in" to change the mechanism used to save and retrieve data. .NET abstracts the actual storage mechanism from the classes that manipulate the data. The provider class is the one that stores the data on behalf of the other classes that manipulate it. For example, the Membership class uses a secondary class called a membership provider that actually knows the details of a particular data store and implements all the supporting logic to read and write data to/from it. Two built-in providers are available for the membership system: SqlMembershipProvider and ActiveDirectoryMembershipProvider. The latter uses the LDAP protocol to communicate with the domain controller (PDC). This protocol is used for accessing and maintaining distributed directory information services over the Internet Protocol, and directory services provide a hierarchical organization of members. LDAPv3 allows the use of Transport Layer Security for a secure connection. The alternative mechanism of securing an LDAP connection, an SSL tunnel, dates from LDAPv2 and has since been retired. LDAP supports operations such as Bind, Search, Compare, Add, Delete, Modify, Abandon, Extend and Unbind. LDAP itself is a binary protocol, and entries are specified with the LDAP Data Interchange Format (LDIF).
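As a hedged sketch of how the plug-in works, a SQL-backed membership provider might be registered in web.config along these lines. The provider name "SqlProvider", the connection string name "MembershipDb", and the connection string itself are all placeholders, not values from any particular application:

```xml
<configuration>
  <connectionStrings>
    <!-- "MembershipDb" is a placeholder; point it at your own membership database -->
    <add name="MembershipDb"
         connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True" />
  </connectionStrings>
  <system.web>
    <!-- defaultProvider picks which plugged-in provider the Membership class delegates to -->
    <membership defaultProvider="SqlProvider">
      <providers>
        <clear />
        <add name="SqlProvider"
             type="System.Web.Security.SqlMembershipProvider"
             connectionStringName="MembershipDb"
             enablePasswordRetrieval="false"
             requiresQuestionAndAnswer="false" />
      </providers>
    </membership>
  </system.web>
</configuration>
```

Swapping the `type` to ActiveDirectoryMembershipProvider (with an LDAP connection string) changes the back-end store without touching the code that calls the Membership class.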
Thursday, January 17, 2013
Hadoop
Hadoop consists of a data storage service and a data processing service. The former is named HDFS, short for Hadoop Distributed File System. The latter is named MapReduce and uses a high-performance parallel data processing technique. On this framework are built a database named HBase, a data warehouse named Hive, and a query language named Pig. Hadoop scales horizontally in that additional commodity hardware can be added without interruption, and node failures can be compensated for with redundant data. It does not guarantee ACID properties and supports forward-only parsing. Hadoop is used to store unstructured data in delimited flat files, where the column names, column count and column data types don't matter. Data is retrieved with code in two steps: a Map function and a Reduce function. The Map function selects keys from each line along with the values to hold, resulting in a big hashtable; the Reduce function then aggregates the results. Together these operations give a blob of mapped and reduced data. Writing this code for MapReduce is easier with Pig queries. This key-value set is stored in the HBase store, and this is the NoSQL (read as 'not only SQL') side. HBase stores key-values as columns in a column family, and each row can have more than one column family; each row need not have the same number of columns. Hive is a data warehouse system. It uses a Hive query language on joins of HBase tables. SQL Server has a Sqoop connector to Hadoop which makes data transfer easy between HDFS and the RDBMS. SCOM, AD and BI tools are also being integrated with Hadoop. Hadoop uses a user account named Isotope on all the Windows nodes for running jobs.
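As a sketch of the Map and Reduce steps described above, here is a minimal word count in Python in the style of Hadoop Streaming. On a real cluster the two functions would be separate scripts that the framework pipes lines through; everything here is illustrative and run locally:

```python
# Word count as a Map step and a Reduce step, simulated in one process.
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    # Map: emit (key, value) pairs -- here, (word, 1) for every word on every line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    # Hadoop sorts mapper output by key before the reduce phase;
    # sorted() + groupby mimics that shuffle-and-sort here.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (key, sum(v for _, v in group))

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog"]
    print(dict(reducer(mapper(lines))))  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

The same mapper/reducer pair, expressed as a Pig script, would shrink to a LOAD, a GROUP BY and a COUNT, which is the point of writing it in Pig instead.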
Thursday, January 10, 2013
Signed requests for Amazon Web Services API
Amazon AWS APIs require requests to be signed. With signing, each request carries a non-readable signature, or hash. The hash is computed from the operation and a timestamp so as to differentiate each API call, and it is computed with a secret by an algorithm that is specified up front. Typically this is a Hash-based Message Authentication Code (HMAC); as an example, the SHA256 hash function, which produces a 256-bit hash, can be used. Both the AccessKeyId and the secret are issued separately to each user at the time of his or her account registration for use of these APIs. The signature, the timestamp and the AccessKeyId are all specified in the SOAP header or the REST URI. They are included in the SOAP header by a message inspector, which is registered with the endpoint behavior for the client; the endpoint behavior is in turn returned by a behavior extension element. All of these are System.ServiceModel namespace types and can be specified in the configuration file itself along with the address, binding and contract. The contract can be created by the WCF service utility by pointing it to the WSDL of the service, which should be available online as the API. This configuration helps in instantiating a proxy and in making direct calls to the API.
Here’s an example of REST AWS API
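A hedged sketch in Python of how such a signed REST (query) call might be assembled, in the style of AWS signature version 2. The access key, secret, endpoint and action below are placeholders, not real credentials:

```python
# Sign a query-style REST request with HMAC-SHA256 (AWS signature v2 style).
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(secret, host, path, params):
    # Canonicalize: sort the query parameters and build the string to sign.
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical_query])
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {
    "Action": "DescribeInstances",
    "AWSAccessKeyId": "AKIDEXAMPLE",        # placeholder access key id
    "Timestamp": "2013-01-10T12:00:00Z",    # the timestamp makes each call distinct
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
}
signature = sign_request("placeholder-secret", "ec2.amazonaws.com", "/", params)
url = f"https://ec2.amazonaws.com/?{urlencode({**params, 'Signature': signature})}"
print(url)
```

The server recomputes the same HMAC from its copy of the secret and rejects the call if the signatures differ, which is why neither the secret nor the unsigned request needs to be hidden from the transport.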
Wednesday, January 9, 2013
Tor project
If you want to be anonymous on the web by using a different IP address, encrypting your internet traffic, and routing through dynamic relays and proxies, download the Tor browser from https://www.torproject.org and start using it.
Saturday, January 5, 2013
difference between ip spoofing and web proxy
IP spoofing lets you conceal the IP address of the sender or impersonate another host in a system. If you want to hide your IP address from your ISP, you could try IP spoofing, but more than likely your ISP will reject these packets.
A web proxy on the other hand sits in between the ISP and the website you are connecting to. You are more likely to be anonymous to the website you connect if you use a web proxy.
Thursday, January 3, 2013
XmlReader
The Read method of the XmlReader reads the next node from the stream
The MoveToElement method of the XmlReader moves to the element that contains the current attribute node
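XmlReader is a .NET API; as an illustration of the same forward-only, pull-style read loop, here is a rough analogue using Python's xml.etree.ElementTree.iterparse. This is a stand-in for the concept, not the .NET type, and the sample document is invented:

```python
# Pull-parse an XML stream node by node, analogous to looping on XmlReader.Read().
import io
import xml.etree.ElementTree as ET

xml_doc = '<books><book id="1">A</book><book id="2">B</book></books>'

ids = []
for event, elem in ET.iterparse(io.StringIO(xml_doc), events=("start",)):
    # Each iteration advances to the next node, like a call to Read().
    if elem.tag == "book":
        # Attributes hang off the element here, so no MoveToElement step is
        # needed to get from an attribute back to its containing element.
        ids.append(elem.get("id"))

print(ids)  # ['1', '2']
```

In the .NET reader, by contrast, MoveToAttribute positions the reader on an attribute node, and MoveToElement is what returns it to the containing element before the next Read.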