Monday, May 13, 2013

To get stack frames from streams instead of dump files

Dump files can be arbitrarily large and are generally stored in compressed format along with other satellite files. File operations such as extraction and copying over a remote network can be expensive, and if we are interested only in a stack trace, we would rather avoid them. Besides, we rely on debuggers to give us the stack trace. Debuggers can attach to a process, launch an executable, and open the three different kinds of dump files to produce a stack trace, but they don't work with compressed files or with sections of them. While debuggers have to support a large set of user commands, retrieving a specific stack trace requires access only to specific ranges of offsets in the crash dump file, and the stack trace comes from a single thread. Unless all the thread stacks have to be analyzed, we can retrieve a specific stack trace using a stream instead of a file, and that is what we look at here.
Note that getting a stack trace as described here does not require symbols. Symbols only make the frames user friendly, and that step can be done separately from getting the stack trace. Program database (PDB) files and the raw stack frames are sufficient to pretty print a stack.
The dump files we are talking about are Microsoft proprietary, but the format is helpful for debugging. Retrieving a physical address in a memory dump is easy: the TEB holds the top and bottom of the stack, and a memory dump of that range gives us the stack.
Using streams is an improvement over using files for retrieving this information.
Streams can be written to a local file so we don't lose any feature we currently have.
Streams allow you to work with specific ranges of offsets so you don't need the whole file.
With a stream, we can read just the offsets we need, on demand.
The Debugger SDK that ships with the Debugging Tools has both managed and unmanaged APIs to get a stack trace. These APIs instantiate a debugging client which can produce a stack trace. However, there is no API for supporting a stream yet. This is probably because most debuggers prefer to work on local files: the round trips for an entire debugging session over a low-bandwidth, high-latency network are just not preferable. However, for specific operations such as getting a stack trace, this is not a bad idea. In fact, what stream support for GetStackTrace buys us is the ability to save a few more round trips for extraction, save on local storage as well as archive locations, and reduce the file and database footprint. A minimal sketch of reading just such a range from a stream follows.
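As a rough illustration of the idea, the sketch below reads an arbitrary byte range out of a dump exposed as a seekable Stream instead of copying the whole file. The ReadRange helper and the offsets a caller would pass are hypothetical; a real reader would first locate the thread and memory ranges from the minidump directory and only then pull the raw stack bytes.

using System;
using System.IO;

// A minimal sketch: read only the byte range we need from a dump exposed as a
// seekable stream, instead of copying the whole file. ReadRange and the
// offsets a caller supplies are hypothetical for this illustration.
static class DumpRangeReader
{
    public static byte[] ReadRange(Stream dump, long offset, int length)
    {
        if (!dump.CanSeek)
            throw new ArgumentException("A seekable stream is required", "dump");

        var buffer = new byte[length];
        dump.Seek(offset, SeekOrigin.Begin);

        int total = 0;
        while (total < length)
        {
            int read = dump.Read(buffer, total, length - total);
            if (read == 0) break;   // end of stream reached early
            total += read;
        }
        return buffer;
    }
}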
Both 32-bit and 64-bit dumps require similar operations to retrieve the stack trace. There is additional information in the 64-bit dump files that helps with parsing.
Once retrieved, the stack trace can be made user friendly by looking up symbols. These symbols are parsed from the program database (PDB) files. Modules and offsets are matched with the symbol text, and the stack frames can then be printed more readably. This information need not be pulled from the PDB files by hand; it can be retrieved with the Debug Interface Access (DIA) SDK, which is documented on MSDN.
Lastly, with a streamlined, read-only operation for retrieving the stack trace, with no file copy and no data or metadata maintained locally, stack trace parsing and reporting can be an entirely in-memory operation.

Assembly Settings

In writing applications and libraries using C#, we frequently encounter a need to define configuration data as settings. We define these with a settings file kept under the Properties folder of the assembly source, so they live in the Properties namespace. As different libraries are loaded by an application, each assembly may define its own settings that can be used as is or overridden by the calling application. The settings are compiled into the assembly's resources, which can be viewed from the assembly. When more than one assembly is referenced in the current application, these settings are resolved by first looking them up in the local settings file and then in any other settings provider that derives from the abstract SettingsProvider class. The provider that a wrapper class uses is determined by decorating the wrapper class with the SettingsProviderAttribute. A minimal sketch is shown below.
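Below is a minimal sketch of this arrangement, assuming a hypothetical in-memory provider and a DemoSettings wrapper with a single ServerUrl setting; a real provider would persist values to a store of its own.

using System.Collections.Generic;
using System.Collections.Specialized;
using System.Configuration;

// A minimal sketch of a custom provider that keeps values in memory only.
// InMemorySettingsProvider, DemoSettings and ServerUrl are illustrative names.
public class InMemorySettingsProvider : SettingsProvider
{
    private readonly Dictionary<string, object> store = new Dictionary<string, object>();
    private string appName = "DemoApp";

    public override string ApplicationName
    {
        get { return appName; }
        set { appName = value; }
    }

    public override void Initialize(string name, NameValueCollection config)
    {
        base.Initialize(string.IsNullOrEmpty(name) ? GetType().Name : name, config);
    }

    public override SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection collection)
    {
        var values = new SettingsPropertyValueCollection();
        foreach (SettingsProperty property in collection)
        {
            var value = new SettingsPropertyValue(property);
            object stored;
            if (store.TryGetValue(property.Name, out stored))
                value.PropertyValue = stored;        // otherwise the default applies
            values.Add(value);
        }
        return values;
    }

    public override void SetPropertyValues(
        SettingsContext context, SettingsPropertyValueCollection collection)
    {
        foreach (SettingsPropertyValue value in collection)
            store[value.Name] = value.PropertyValue;
    }
}

// The wrapper class chooses its provider with SettingsProviderAttribute.
[SettingsProvider(typeof(InMemorySettingsProvider))]
public sealed class DemoSettings : ApplicationSettingsBase
{
    [UserScopedSetting]
    [DefaultSettingValue("http://localhost")]
    public string ServerUrl
    {
        get { return (string)this["ServerUrl"]; }
        set { this["ServerUrl"] = value; }
    }
}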

Sunday, May 12, 2013

Compiler design review

Programs that are written in a high level programming language by programmers need to be translated to a language that machines can understand. A compiler translates this high level programming language into the low level machine language that is required by the computers.
This translation involves the following:
1) Lexical analysis: This is the part where the compiler divides the text of the program into tokens, each of which corresponds to a symbol such as a variable name, keyword, or number.
2) Syntax analysis: This is the part where the tokens generated in the previous step are 'parsed' and arranged in a tree structure (called the syntax tree) that reflects the structure of the program.
3) Type checking: This is the part where the syntax tree is analyzed to determine if the program violates certain consistency requirements, for example if a variable is used in a context where the type of the variable doesn't permit it.
4) Intermediate code generation: This is the part where the program is translated to a simple machine-independent intermediate language.
5) Register allocation: This is the part where the symbolic variable names are translated to numbers, each of which corresponds to a register in the target machine code.
6) Machine code generation: This is the part where the intermediate language is translated to assembly language for a specific architecture.
7) Assembly and linking: This is the part where the assembly language code is translated to a binary representation and the addresses of variables, functions, etc. are determined.
The first three parts are called the frontend and the last three parts form the backend.
There are checks and transformations at each step of the processing, in the order listed above, such that each step passes stronger invariants to the next. The type checker, for instance, can assume the absence of syntax errors.
Lexical analysis is done with regular expressions and precedence rules. Precedence rules are similar to algebraic conventions. Regular expressions are transformed into efficient programs by way of non-deterministic finite automata (NFAs), each of which consists of a set of states, including a starting state and a subset of accepting states, with transitions from one state to another on an input symbol. Because NFAs are non-deterministic, compilers convert them to a more restrictive form, the deterministic finite automaton (DFA). This conversion from a language description written as a regular expression into an efficiently executable representation, a DFA, is done by the lexer generator. A hand-written sketch of such a DFA is shown below.
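As a rough illustration, here is a hand-coded version of the kind of DFA a lexer generator would produce for two token classes, identifiers and integer literals. The token names and the tiny token set are assumptions for the sketch, not output from any particular tool.

using System;
using System.Collections.Generic;

// Hand-written DFA for identifiers ([a-zA-Z][a-zA-Z0-9]*) and integers ([0-9]+).
enum TokenKind { Identifier, Number }

static class TinyLexer
{
    public static IEnumerable<Tuple<TokenKind, string>> Tokenize(string input)
    {
        int i = 0;
        while (i < input.Length)
        {
            char c = input[i];
            if (char.IsWhiteSpace(c)) { i++; continue; }
            int start = i;
            if (char.IsLetter(c))
            {
                // identifier accepting state: loop on letters and digits
                while (i < input.Length && char.IsLetterOrDigit(input[i])) i++;
                yield return Tuple.Create(TokenKind.Identifier, input.Substring(start, i - start));
            }
            else if (char.IsDigit(c))
            {
                // number accepting state: loop on digits
                while (i < input.Length && char.IsDigit(input[i])) i++;
                yield return Tuple.Create(TokenKind.Number, input.Substring(start, i - start));
            }
            else
            {
                throw new Exception("no transition from the start state on '" + c + "'");
            }
        }
    }
}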
Syntax analysis recombines the tokens that lexical analysis split the input into. This results in a syntax tree which has the tokens as its leaves, with their left-to-right sequence the same as in the input text. As in lexical analysis, we rely on building automata; in this case the context-free grammars we write can be converted to recursive programs based on stack (pushdown) automata. There are two common ways to generate such parsers: the LL parser (the first L indicates the reading direction and the second L the derivation order) and the SLR parser (S stands for simple). A recursive-descent sketch for a toy grammar follows.
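The following is a minimal recursive-descent (LL-style) sketch for the toy grammar Expr -> Term { "+" Term }, Term -> number. The grammar, the class name and the choice to evaluate directly instead of building a syntax tree are all simplifications for illustration.

using System;

// Recursive-descent parser for sums of integer literals, e.g. "12+3+40".
class ExprParser
{
    private readonly string text;
    private int pos;

    public ExprParser(string text) { this.text = text; }

    public int ParseExpr()
    {
        int value = ParseTerm();
        while (pos < text.Length && text[pos] == '+')
        {
            pos++;                   // consume '+'
            value += ParseTerm();    // left-to-right, leftmost derivation
        }
        return value;
    }

    private int ParseTerm()
    {
        int start = pos;
        while (pos < text.Length && char.IsDigit(text[pos])) pos++;
        if (pos == start) throw new Exception("number expected at position " + pos);
        return int.Parse(text.Substring(start, pos - start));
    }
}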
Symbol tables are used to track the scope and binding of all named objects. A symbol table supports operations such as: initialize an empty symbol table, bind a name to an object, look up a name, enter a new scope and exit a scope, as in the sketch below.
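This sketch implements those operations with a stack of per-scope dictionaries; the class and method names are illustrative.

using System.Collections.Generic;

// Scoped symbol table: innermost scope wins on lookup.
class SymbolTable<T>
{
    private readonly List<Dictionary<string, T>> scopes = new List<Dictionary<string, T>>();

    public SymbolTable() { EnterScope(); }              // initialize an empty table

    public void EnterScope() { scopes.Add(new Dictionary<string, T>()); }

    public void ExitScope() { scopes.RemoveAt(scopes.Count - 1); }

    public void Bind(string name, T obj) { scopes[scopes.Count - 1][name] = obj; }

    public bool TryLookup(string name, out T obj)
    {
        // search from the innermost scope outward
        for (int i = scopes.Count - 1; i >= 0; i--)
            if (scopes[i].TryGetValue(name, out obj)) return true;
        obj = default(T);
        return false;
    }
}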
Bootstrapping a compiler is interesting because the compiler itself is a program. We resolve this with a quick and dirty compiler or intermediate compilers.
from the textbook on compiler design by Mogensen

Saturday, May 11, 2013

File System Interface

Let us quickly review the operating system design principles of a file system.
We want to discuss the functions of file systems, describe the interfaces to file systems, discuss file system design tradeoffs and access methods, and explore file-system protection.
Files are mapped by the operating system onto physical devices. A file is usually the smallest allotment of logical secondary storage. A file can have attributes such as name, identifier, type, location, size and protection.
File operations include creating, writing, reading, repositioning, deleting and truncating a file. These six basic operations comprise the minimal set of required file operations.
Most of the file operations mentioned involve searching the directory for the entry associated with the named file. To avoid this search, many systems require that an open() call is made prior to file use, and the file system maintains an open-file table containing information about all open files. The open system call typically returns a pointer to the entry in the open-file table. This pointer, not the actual file name, is used in all subsequent file operations.
Similarly, the operating system maintains a file-open count so that it can reuse open-file table entries; otherwise it could run out of space in the table. The file system also has to locate the file on the disk.
Each process opens a file in an access mode so that subsequent I/O requests can be allowed or denied. File systems also provide facilities for locking an open file. File locks are usually of the same kind as reader-writer locks: a shared lock is similar to a reader lock in that several processes can acquire it concurrently, and an exclusive lock is like a writer lock, allowing only one process to acquire it. A sketch of the two modes follows.
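The sketch below shows how the shared and exclusive modes surface in .NET through the FileShare flags; the file name report.log is a made-up example.

using System.IO;

class FileLockDemo
{
    static void Main()
    {
        // Exclusive ("writer") open: no other process may open the file
        // while this handle is held.
        using (var writer = new FileStream("report.log", FileMode.OpenOrCreate,
                                           FileAccess.ReadWrite, FileShare.None))
        {
            // ... exclusive update here ...
        }

        // Shared ("reader") open: other readers may open the file concurrently.
        using (var reader = new FileStream("report.log", FileMode.Open,
                                           FileAccess.Read, FileShare.Read))
        {
            // ... read-only work here ...
        }
    }
}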
File types are also important to the file system. File extensions inform the file system of the programs associated with the file type and the actions to be taken.
File types can be used to indicate the internal structure of the file. For example, an executable might have a specific file structure so that the operating system can determine where in memory to load the file and the starting point of the instructions. The Macintosh operating system, for example, requires that files have a resource fork and a data fork. The resource fork contains information of interest to the user. The data fork contains program code or data - the traditional file contents.
Access methods determine how the information stored in the file is accessed. The sequential access method uses rewind, read next and write next operations. Editors and compilers use files in this manner, and reads and writes make up the bulk of the operations on a file. Another method is direct access. Here the file is made up of fixed-length logical records that allow programs to read and write in no particular order. This method supports operations such as read n and write n, where n is the relative block number, as in the sketch below.
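Here is a small sketch of the read n and write n operations, assuming fixed-length records of 128 bytes; the record size and class names are illustrative.

using System.IO;

// Direct access: record n lives at byte offset n * RecordSize.
static class DirectAccessFile
{
    private const int RecordSize = 128;

    public static byte[] ReadBlock(string path, long n)
    {
        var record = new byte[RecordSize];
        using (var file = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            file.Seek(n * RecordSize, SeekOrigin.Begin);   // jump straight to record n
            file.Read(record, 0, RecordSize);              // a production version would loop until full
        }
        return record;
    }

    public static void WriteBlock(string path, long n, byte[] record)
    {
        // record is assumed to be exactly RecordSize bytes long
        using (var file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
        {
            file.Seek(n * RecordSize, SeekOrigin.Begin);
            file.Write(record, 0, RecordSize);
        }
    }
}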
File systems also have to keep track of their layout, referred to as the storage structure. Directories are a useful way of organizing files and support operations such as searching for a file, creating a file, deleting a file, listing a directory and renaming a file. Traversal of the file system to list all directories and files is another common operation, as in the sketch below, and the layout of files and directories could have a backup copy.
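A traversal sketch using the standard directory enumeration API; the root path is whatever the caller supplies.

using System;
using System.IO;

// List every directory and file beneath a starting point.
static class FileSystemWalk
{
    public static void List(string root)
    {
        foreach (var entry in Directory.EnumerateFileSystemEntries(root, "*",
                                                                   SearchOption.AllDirectories))
        {
            Console.WriteLine(entry);
        }
    }
}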
A file system is usually mounted before it can be used. The mount point defines the location within the file structure where the file system is to be attached. The file system is then checked to verify that it is valid.
A file system can also be remote, as in the case of a distributed file system where remote file systems are visible from the local machine.
 

Friday, May 10, 2013

Today I had to do some debugging and it reminded me of a book I had read earlier. I'm writing a review of the same. This book was titled "Debugging: The 9 indispensable rules for finding even the most elusive software and hardware problems" by David J Agans.
This book advocates a set of rules that will make your debugging experience more methodical, save you from hours of arduous labor, help you avoid wild goose chases, and refine your analytical skills. If you play by these rules, success in every case isn't guaranteed, but you will avoid the common pitfalls. The toughest challenge with debugging has always been that every issue (we call them bugs) is a case-by-case affair, and the worst drawback is the assumptions and generalizations that creep into the investigation. These rules hone your investigation skills, as is evident both from the real-world examples quoted in the book and from documented investigations gone wrong.
These rules are not that hard and are listed below for a read:
Rule 1: "Understand the system" You should read the manual, know the expected behaviors, know the road map and your tools. Ignorance can lead you to frustrations and the system cannot be controlled if you don't know how it should behave.
Rule 2: "Make it fail" A big part of understanding what is wrong is finding out what it takes to make it fail. If you know the steps to repeat a symptom to the point that the failure happens deterministically and repeatedly, you have armed yourself with the knowledge of what you want to investigate. You can start at the beginning, meaning you use a baseline or a control as a starting point, and then find the steps from that point on. The toughest part of this rule is that some bugs are not easy to repeat. Often these are labeled as timing issues, bad luck, lies and statistics, etc. But this rule encourages you to persist first in knowing how to make it fail.
Rule 3: "Quit thinking and look" The very first thing we do when we get a bug is put on our thinking caps. However infallible our reasoning may seem, we don't know until we have taken a look. What if there is data that surfaces only when we look? Although looking is hard, it is well worth a rule of its own. Looking can be hard because systems may not be transparent or may be complex (we call them black boxes), but even so we can often open up logs or traces or, in some cases, add instrumentation to learn what the system is doing.
Rule 4: "Divide and Conquer" The divide and conquer approach is required to narrow the search space so that we spend less time on that which doesn't matter and more on the ones that matter. You can also fix the bugs you know about first and silence the noise so that you can take a hard look at the bad bug. You can even  narrow down the layers vertically or the components horizontally to get to the bug.
Rule 5: "Change one thing at a time" While the rules above are about observing the system and learning from it even as it includes a method to get more information from the system than what it does, this rule is about your prudence in making one change at a time. The equivalent phrases for describing this are "use a rifle not a shotgun" and "grab the brass bar with both hands". The recommendation is to change and compare one thing at a time.
Rule 6: "Keep an audit trail" Write down every change to the system you make, in that order, and the results. Doing this will help you correlate, rediscover the covered ground and learn from it. The equivalent phrase for this is "the shortest pencil is longer than the longest memory".
Rule 7: "Check the plug"  This rule helps you avoid making assumptions or, even better, question them. You should not start from square three but from square one. If the power switch is not on, you are not going to see any response. "Check the plug" is a rule that says go about your steps methodically starting from the beginning, and the starting point includes a check of the basics.
Rule 8: "Get a fresh view" As in the game of chess, sometimes we pigeon hole ourselves into the very thing we want to solve. And sometimes we are too shy to get help or ask an expert. And we don't have to be sure, we can report symptoms and get a fresh view.
Rule 9: "If you didn't fix it, it ain't fixed". This rule says that if you thought you made a fix and the symptoms stopped, you cannot rest until you know that it's really fixed and that it's your fix that fixed it. Ignoring this rule means you are pretending that the problem has gone away. It never just goes away by itself. Fix the cause and the process, but be doubly sure that it's your fix that fixed it.

Inverted file, TF-IDFs and Shannon information associated with terms

An inverted file is a data structure that is useful for retrieving all documents that contain the terms in a query. For each word, an ordered list of identifiers of the documents containing that word is maintained. All query terms are also organized in a second index structure, such as a B+ tree or hash index, referred to as the vocabulary index. For a single term, the vocabulary index is looked up to find the inverted list, and the rids in that list identify the documents containing the term. For a conjunction of several terms, the process is repeated one word at a time and the lists are intersected. For a query with a disjunction of terms, the inverted lists are merged. The sketch below illustrates both cases.
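The sketch below keeps the postings in memory, with a Dictionary standing in for the vocabulary index; class and method names are illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal in-memory inverted file: for each word, an ordered list of doc ids.
class InvertedIndex
{
    private readonly Dictionary<string, SortedSet<int>> postings =
        new Dictionary<string, SortedSet<int>>();

    public void Add(int docId, string text)
    {
        foreach (var term in text.ToLowerInvariant()
                                 .Split(new[] { ' ', ',', '.' }, StringSplitOptions.RemoveEmptyEntries))
        {
            SortedSet<int> list;
            if (!postings.TryGetValue(term, out list))
                postings[term] = list = new SortedSet<int>();
            list.Add(docId);
        }
    }

    // Conjunctive query: intersect the inverted lists one term at a time.
    public IEnumerable<int> And(params string[] terms)
    {
        IEnumerable<int> result = null;
        foreach (var term in terms)
        {
            SortedSet<int> list;
            var ids = postings.TryGetValue(term, out list) ? list : Enumerable.Empty<int>();
            result = result == null ? ids : result.Intersect(ids);
        }
        return result ?? Enumerable.Empty<int>();
    }

    // Disjunctive query: merge the inverted lists.
    public IEnumerable<int> Or(params string[] terms)
    {
        var merged = new SortedSet<int>();
        foreach (var term in terms)
        {
            SortedSet<int> list;
            if (postings.TryGetValue(term, out list)) merged.UnionWith(list);
        }
        return merged;
    }
}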
Term frequency - inverse document frequency (Salton & Yang 1973) gives a weight for a given term relative to other terms and can be useful for such things as keyword extraction. TF-IDF is defined as N(w) log(D / D(w)), where N(w) is the number of times the word w occurs in the text, D(w) is the number of documents containing the word w in the training data, and D is the total number of documents in the training data.
Shannon information similarly gives a code for encoding the words in a set W. We can use this code to, say, compress or transmit the document efficiently. Then l(w) = -log P(w) gives the length required for the code of a given word. We can use this length to compute the Shannon information as -N(w) log P(w), where N(w) is the number of times the word w occurs in the text. The Hang and Yamanishi paper further compares Shannon information to TF-IDF by estimating the word probabilities with maximum likelihood, which gives -N(w) log(F(w)/F), where F(w) is the frequency of the word w in the training data and F is the total frequency of all words in the training data. Note that Shannon information now looks similar to TF-IDF and improves on the weights associated with words for keyword extraction.
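The sketch below computes both weights for a word from a small, pre-tokenized corpus, following the formulas above; the corpus and method names are assumptions for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

// For a word w:
//   TF-IDF(w)  = N(w) * log(D / D(w))
//   Shannon(w) = -N(w) * log(F(w) / F)   (maximum-likelihood estimate)
static class TermWeights
{
    public static void Score(IList<string[]> documents, string w)
    {
        int D = documents.Count;                                   // total documents
        int Dw = documents.Count(d => d.Contains(w));              // documents containing w
        int F = documents.Sum(d => d.Length);                      // total word occurrences
        int Fw = documents.Sum(d => d.Count(t => t == w));         // occurrences of w
        int Nw = Fw;                                               // term frequency of w

        double tfIdf = Dw == 0 ? 0 : Nw * Math.Log((double)D / Dw);
        double shannon = Fw == 0 ? 0 : -Nw * Math.Log((double)Fw / F);

        Console.WriteLine("{0}: tf-idf={1:F3}, shannon={2:F3}", w, tfIdf, shannon);
    }
}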

Thursday, May 9, 2013

Indexing for text search

A text search database is a collection of documents and is referred to as a text database. Let us think of this database as a single relation schema with one field of type document. Each record in this relation contains exactly one document.
Now, let us look at a class of queries based on keyword search that lets us ask for all documents containing a given keyword. This is the most common kind of query on the web today. A more complex query is: find all documents that have keyword 1 or keyword 2 and not keyword 3. For such queries we may also want to look for synonyms, such as automobile instead of car. We can rank retrieved documents by the proximity of the query keywords in the document.
There are two types of such queries for text databases: Boolean queries and ranked queries. In a Boolean query, the user supplies a Boolean expression like the one mentioned above, typically in conjunctive normal form. Such a query consists of j conjuncts, each of which consists of several disjuncts. Each conjunct corresponds to one concept, and the different words within each conjunct correspond to different terms for the same concept. In a ranked query, the user also specifies a list of words, but the result of the query is a list of documents ranked by their relevance to the list of user terms.

Determining when and how a document is relevant to the set of user terms is a different problem. The algorithms for that problem belong to information retrieval, which is closely related to database management. Information retrieval systems, unlike database systems, don't address updates, concurrency control and recovery because the data is largely static. The criteria used to evaluate such retrieval systems include precision and recall, defined in that order as the percentage of retrieved documents that are relevant and the percentage of relevant documents retrieved in response to a query; a small sketch follows below.

For Boolean queries, there are two index schemas: the inverted file index and the signature file index. The first is widely used due to its simplicity and good performance, but it has a huge space overhead. The second has a small space overhead and enables a quick search for eliminating non-matching documents. Words are stemmed and analyzed for a common root during index creation. - as read from the textbook on Database Management Systems.
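A small sketch of the two evaluation measures, assuming the sets of retrieved and relevant document ids are already known; names are illustrative.

using System.Collections.Generic;
using System.Linq;

// precision = relevant retrieved / retrieved
// recall    = relevant retrieved / relevant
static class RetrievalMetrics
{
    public static void Evaluate(ISet<int> retrieved, ISet<int> relevant)
    {
        int hits = retrieved.Count(relevant.Contains);
        double precision = retrieved.Count == 0 ? 0 : (double)hits / retrieved.Count;
        double recall = relevant.Count == 0 ? 0 : (double)hits / relevant.Count;
        System.Console.WriteLine("precision={0:P0}, recall={1:P0}", precision, recall);
    }
}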