Thursday, April 17, 2014

Next we look at the WinPCap architecture. WinPCap adds to Windows functionality similar to what libpcap and tcpdump provide on flavors of Unix. There have been other capture modules on Windows, some with available APIs and each with its own kernel-mode driver, but they suffer from severe limitations. The Netmon API, for example, is not freely available, its extensibility is limited, and it does not support sending packets. In this architecture review we cover some of these functionalities as well.
WinPCap was the first open system for packet capture on Win32 and it fills an important gap between Unix and Windows. Furthermore, WinPCap puts performance first. WinPCap consists of a kernel-mode component to select packets and a user-mode library to deliver them to applications. The library also provides low-level network access and allows programmers to avoid kernel-level programming. WinPCap includes an optimized kernel-mode driver called the Netgroup Packet Filter and a set of user-level libraries that are libpcap compatible. From the outset, libpcap compatibility was important to WinPCap so that Unix applications could be ported over.
We now look at the BSD capturing components. Getting and sending data over the low-level network interfaces was an important objective in BSD. There are three components. The first block, the Berkeley Packet Filter (BPF), is the kernel-level component; it contains the kernel buffer used to store packets coming from the kernel. The Network Tap is designed to snoop all packets flowing through the network and reads them through the interface driver; it is followed by the filter, which analyzes incoming packets. The libpcap library is the third component. A packet satisfying the filter at the Network Tap is copied to the kernel buffer in BPF. The user has direct access to each of these three layers: user code can reach the Network Interface Card driver alongside the other protocol stacks to send and receive data, it can access BPF directly, and applications can simply call libpcap.
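To make the third layer concrete, here is a minimal sketch of an application calling libpcap to receive the packets that BPF has copied up from the kernel. The interface name "eth0", the snapshot length and the timeout are illustrative assumptions; link with -lpcap.

#include <pcap/pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    (void)user; (void)bytes;  /* unused in this sketch */
    /* hdr->caplen bytes of the packet were copied up by BPF */
    printf("captured %u bytes (original length %u)\n", hdr->caplen, hdr->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Open the interface in promiscuous mode; the kernel-side copying is done by BPF */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Hand every captured packet to the callback; -1 means loop until an error or break */
    pcap_loop(handle, -1, on_packet, NULL);
    pcap_close(handle);
    return 0;
}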
We now look at the optimization techniques with the Berkeley Packet Filter in TcpDump. Optimizing the code generation was required because users could specify compound queries, and for such queries the generated code was redundant and highly inefficient. As a comparison, if we had taken a simple query such as tcp src port 100, then the resulting code would have been a linear decision tree with steps evaluating down from IP, frag, TCP, sport, 100 to true, progressing downward only on true.
If we wanted the same for both directions, we would have two such trees with an OR in between, evaluating one below the other. If the query had been both directions of two ports, such as tcp port 100 and 200, then we would have both pairs of trees, one evaluating into the other, and this combination evaluating one below the other. In this case, we would have significant code bloat; the generated code is highly redundant and inefficient.

This is where optimization techniques are used which have their roots in compiler design. One such technique is called the dominator technique. What it does is eliminate common subexpressions: if the exit from one is the same as the entry to another, we can replace that subexpression with a single shared one. While this traditional technique does optimize the code, it does not address the branching, because data could flow into the shared subexpression from both sides. If we look at the edge relationships instead of the node relationships, we can do even more optimization. When we know a particular subexpression has already been evaluated by the expression above, we can bypass that subexpression and move directly to its outcome. This creates new opportunities, and when we repeat the cycle for all subexpressions, we eliminate redundancies at each stage. An interesting observation here is that this exercise in optimizing the code can also help us detect unreachable code and simplify the graph.

Now we take the example above, remove the redundant copies of the opcode nodes, and instead redirect the edges to move directly to the outcomes of the subexpressions above. In this case we had the linear decision tree of IP, frag, TCP in common, so we remove the three duplicated copies and add edges from those nodes directly to the outcomes. We also add edges from src port 100 to dest port 100 and dest port 200, as well as an edge from dest port 100 to src port 200, and complete the branches from the remaining nodes to the outcomes. We have only two outcomes, true and false, at the leaf level, and all the nodes connect to them via edges. This covers the optimization in the Berkeley Packet Filter.
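One way to see this optimizer at work from user code is the optimize flag that pcap_compile() takes; a rough sketch follows. The filter string, the Ethernet link type and the zero netmask are illustrative choices, and error checking is abbreviated.

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    const char *expr = "tcp port 100 or tcp port 200";

    /* A dead handle is enough to compile a filter without opening an interface */
    pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);

    struct bpf_program raw, opt;
    pcap_compile(p, &raw, expr, 0, 0);  /* optimize = 0: redundant decision trees */
    pcap_compile(p, &opt, expr, 1, 0);  /* optimize = 1: common checks folded together */

    /* The optimized program is shorter because the shared IP/frag/TCP checks
       and the duplicated port comparisons have been merged. */
    printf("unoptimized: %u instructions, optimized: %u instructions\n",
           raw.bf_len, opt.bf_len);

    pcap_freecode(&raw);
    pcap_freecode(&opt);
    pcap_close(p);
    return 0;
}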


Wednesday, April 16, 2014

We now look at the compiler/optimizer in TcpDump. It initially introduced two layers of logic:
- a lower layer that would handle predicates with multiple values
- an upper layer that would handle the combinations of the lower layer expressions.
The lower layer sees key-value pairs, i.e. atomic predicates. For example, it sees ip host x or y. It could also see the predicate tcp port 80 or 1024.
The upper layer sees it as ip host x or y and (tcp port 80 or 1024)
But this was not working well. Parentheses were introduced for grouping, but this was still hard on the user.
The solution instead was to have a single level of logic, i.e. predicates and values can both be part of the expression. An expression could be a predicate, a value, or both.
This made the grammar easy but it made code generation tricky.
The BPF parser maintained a stack of symbol, field and code.
An expression was either a predicate, an expression followed by an operator and a predicate, or a unary expression.
A predicate was a field together with a value.
The code generation now takes the field and value and updates the stack as it goes deeper through the expression evaluation. At each step it generates the corresponding code.
To evaluate an expression "ip src host x or y and tcp dst port z",
it would push ip one level down onto the stack, followed by src, followed by host.
When it comes to the value x, it would push the protocol selector as the field, followed by a wrapper for the value. These two would be popped and pushed back as a predicate with a field and a code value.
Since we have an 'or', that would get pushed on top of this existing expression, followed by a wrapper for the value y.
These three levels would then be replaced by a predicate, field and code corresponding to the same protocol selector with a different value.
Having parsed the expression for the ip address and its values, we now push 'and' onto the stack and parse the 'tcp dst port z' part similarly.
Finally we have pushed the following items on the stack: 1) expr as sym, ISH as fld and C2 as code, followed by 2) AND as sym, followed by 3) field as sym with TDP as fld, and lastly 4) val(z) as sym.
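The following is a hypothetical sketch in C of the kind of stack described here. The names, types and the ISH/TDP/C2 placeholders are illustrative only and are not the actual tcpdump parser data structures.

#include <stdio.h>

enum sym_kind { SYM_EXPR, SYM_FIELD, SYM_VAL, SYM_AND, SYM_OR };

struct stack_entry {
    enum sym_kind sym;     /* expr, field, val, or a boolean operator          */
    const char   *fld;     /* protocol selector, e.g. "ip src host"            */
    const char   *code;    /* placeholder for the generated filter code block  */
};

#define MAX_DEPTH 64
static struct stack_entry stack[MAX_DEPTH];
static int top = 0;

static void push(enum sym_kind sym, const char *fld, const char *code)
{
    stack[top].sym  = sym;
    stack[top].fld  = fld;
    stack[top].code = code;
    top++;
}

int main(void)
{
    /* Mirrors the final state described above for
       "ip src host x or y and tcp dst port z" */
    push(SYM_EXPR,  "ISH", "C2");   /* expr with code for host x or y    */
    push(SYM_AND,   NULL,  NULL);   /* the 'and' operator                */
    push(SYM_FIELD, "TDP", NULL);   /* tcp dst port field awaiting value */
    push(SYM_VAL,   NULL,  "z");    /* wrapper for the value z           */

    printf("stack depth: %d\n", top);
    return 0;
}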

Tuesday, April 15, 2014

Today we look at how libpcap/tcpdump works. Libpcap is a system-independent, user-mode packet capture library. Libpcap is open source and can be used by other applications. Note that this library captures traffic unicast to an interface, as is the case with TCP; all such traffic to and from the computer can be monitored. That is why, if the interface is plugged into a switch, it may not capture traffic destined for other hosts. Even if the machine appears to be connected to a hub, the hub could really be part of a switched network and capture may not work. When a switch replicates all the traffic from all ports to a single port, an analyzer on that port can capture packets and sniff all traffic.
What distinguishes tcpdump is that it "filters" packets before they come up the stack.
Tcpdump compiles a high-level filter specification into low-level code that filters packets at the driver level. The kernel module used is called the Berkeley Packet Filter. The Berkeley Packet Filter sits right between the NIC (network interface card) and the TCP stack in the kernel. This packet filter copies matching packets to tcpdump and blocks traffic that would otherwise appear as noise to tcpdump. BPF can be considered a virtual machine: it has an architecture with an accumulator (A) and an index register (X), a packet-based memory model, and arithmetic and conditional logic. For a packet capture of a TCP flow, the filter works something like this:
Is the ethernet packet type IP? (Load the ether type into A)
Is the IP src address 10.0.0.20? (Load the IP src address into A)
Is the IP dest address 10.0.0.20? (Load the IP dest address into A)
Is the IP protocol TCP? (Load the protocol into A)
Is it the first or only frag? (Load the frag field into A)
Is the TCP src port FTP? (Load port 20 into index register X, then load the src port from the packet into A)
Is the TCP dest port FTP? (Load port 20 into index register X, then load the dest port from the packet into A)
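To make this concrete, below is a hand-written classic BPF program in C that mirrors a subset of these steps (ether type, protocol, fragment and port checks; the address comparisons are omitted for brevity) for FTP data traffic on port 20. The BPF_STMT and BPF_JUMP macros come from pcap/bpf.h, the offsets assume an Ethernet link layer, and this is only an illustrative sketch rather than what tcpdump itself emits.

#include <pcap/bpf.h>

/* Accept IPv4/TCP packets whose source or destination port is 20 (ftp-data).
   Offsets assume a 14-byte Ethernet header in front of the IP header. */
struct bpf_insn ftp_filter[] = {
    BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),              /* A = ether type          */
    BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 0x0800, 0, 10),  /* IP? else drop           */
    BPF_STMT(BPF_LD + BPF_B + BPF_ABS, 23),              /* A = IP protocol         */
    BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 6, 0, 8),        /* TCP? else drop          */
    BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 20),              /* A = flags + frag offset */
    BPF_JUMP(BPF_JMP + BPF_JSET + BPF_K, 0x1fff, 6, 0),  /* not first frag? drop    */
    BPF_STMT(BPF_LDX + BPF_B + BPF_MSH, 14),             /* X = IP header length    */
    BPF_STMT(BPF_LD + BPF_H + BPF_IND, 14),              /* A = TCP src port        */
    BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 20, 2, 0),       /* src port 20? accept     */
    BPF_STMT(BPF_LD + BPF_H + BPF_IND, 16),              /* A = TCP dst port        */
    BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 20, 0, 1),       /* dst port 20? accept     */
    BPF_STMT(BPF_RET + BPF_K, 65535),                    /* accept: copy the packet */
    BPF_STMT(BPF_RET + BPF_K, 0),                        /* drop                    */
};

Such an array would be wrapped in a struct bpf_program (bf_len, bf_insns) and handed to the BPF device; in practice we never write these by hand, which is exactly the point of the next paragraph.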
This virtual model is flexible but we don't want to write low-level filters. So a higher level filter language is available.
We specify rules in terms of src IP, src port, dest IP and dest port and let the compiler and optimizer translate them into the code.
The BPF filter language starts from a basic predicate which is true if and only if the specified packet field equals the indicated value.
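As an illustration of the path from a high-level predicate to an installed kernel filter, here is a brief sketch using libpcap; the interface name, the filter string and the netmask of zero are assumptions, and error handling is abbreviated. Link with -lpcap.

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* The compiler turns the predicate language into BPF instructions ... */
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "ip host 10.0.0.20 and tcp port 20", 1, 0) == -1) {
        fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(handle));
        return 1;
    }

    /* ... and pcap_setfilter() pushes the compiled program down to the
       BPF device so non-matching packets never cross into user space. */
    if (pcap_setfilter(handle, &prog) == -1) {
        fprintf(stderr, "pcap_setfilter: %s\n", pcap_geterr(handle));
        return 1;
    }

    pcap_freecode(&prog);
    pcap_close(handle);
    return 0;
}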
Courtesy : libpcap : an architecture by Steve McCanne

Sunday, April 13, 2014

We read the topics on TCP monitoring from the Splunk docs today. Splunk monitors the TCP port specified; it listens for packets from one or all machines on that port. We use the host restriction field to specify this, and the host can be specified by IP address, DNS name or a custom label. On a Unix system, Splunk requires root access to listen on ports under 1024. SourceType is a default field added to events, and so is the index. SourceType is used to determine processing characteristics such as timestamps and event boundaries. Index is where the events are stored.
Note that when Splunk starts listening on a port, it establishes connections in both directions. It spawns two workers: the first is the forward data receiver thread and the second is the replication data receiver thread. Both workers act on TCP input and hence share similar functionality.
The forward data receiver thread creates the input pipeline for data. Therefore it manages routines such as setting up the queue names, updating bookkeeping, maintaining stats, and cleaning up all associated data structures and file descriptors. The replication data receiver thread is responsible for creating an acceptor port, scheduling memory reclamation and handling shutdowns.
Note that in a cluster configuration there may be multiple peers or forwarders, and all of their data must be handled. There is only one endpoint at which the data arrives, and the streams are consolidated there.
The interface that deals with the TCP channel is the one that registers a data callback, consumes the data and sends acknowledgements.
The data callback function does the following:
It processes forwarder info and it sends acknowledgements.
The way TCP monitoring works is very similar to how tcpdump works. Raw packets read are dumped to a file. Tcpdump is written on top of the libpcap packet capture library, which works on different operating systems. It works on the principle that a TCP flow between a source IP address and port and a destination IP address and port can be read just like any other file. Tcpdump can log to a file that can then be parsed with the tcptrace tool. The use of such tools makes it unnecessary for Splunk to monitor all TCP connections to a computer directly.
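As a rough sketch of that dump-to-file step, libpcap's savefile routines do what tcpdump -w does; the interface name, packet count and output filename below are illustrative, and error handling is abbreviated.

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Open a pcap savefile; pcap_dump() itself is usable as a pcap_loop callback */
    pcap_dumper_t *dumper = pcap_dump_open(handle, "capture.pcap");
    if (dumper == NULL) {
        fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(handle));
        return 1;
    }

    /* Write the next 100 packets to the file, then close it; the resulting
       file can be read back with tcpdump -r or analyzed with tcptrace. */
    pcap_loop(handle, 100, pcap_dump, (u_char *)dumper);
    pcap_dump_close(dumper);
    pcap_close(handle);
    return 0;
}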
As we are discussing proxies, we will evaluate both the relay behavior and most of the other functionality a proxy can support. In terms of monitoring and filtering, a web proxy server can do content filtering. It is used in both industrial and educational institutions, primarily to prevent traffic to sites that don't conform to acceptable use. Such a proxy also does user authentication and provides detailed logs of all the websites visited. The logs generated by a content-filtering proxy are an example of the kind of information we can index with Splunk.
This means that if we have existing proxies, they already produce logs. The second implication is that there could be a chain of proxies involved in accessing the internet, and each of these provides a level of abstraction and anonymity. This is often seen in the case of connection laundering, where government investigators find it hard to follow the trail of where the connections originated unless they go hop by hop through the chain of proxies.
One of the things that this kind of filtering supports is the use of whitelists and blacklists.  As we may see from the configuration specification that Splunk provides, there are several entries that can be mentioned in both lists. A whitelist is one that allows access to those mentioned in the list. A blacklist is one that denies access to those mentioned in the list. Together, we can use these two lists to fully express what traffic is permitted and what isn't because they are mutually exclusive.
One caveat that goes with the use of a proxy for content filtering is that, if the filtering rules are based on the origin server, another proxy could be used to bypass them. Therefore, these rules are effective only when the origin server is not spoofed. At the same time, rules based on the destination are more effective.
Proxies can also be used to enhance performance. A caching proxy server accelerates service requests by retrieving content saved from a previous request. Since these requests may have been made earlier by the same or another client, the time it takes to serve these resources is reduced, thereby increasing performance. When there is a high volume of requests for duplicate resources, these can be served with higher efficiency. Finally, a proxy can also be used for translation.
I've read up on this from Wikipedia and StackExchange; however, I will cover some differences with Splunk monitoring later.
In Splunk TCP monitoring, we are able to monitor a single address:port endpoint.

Friday, April 11, 2014

In today's post we continue our discussion of a Fiddler-like application with a modular input to Splunk. One of the things we have to consider is that on production machines the type of network traffic may be very different from a desktop. So the first thing to do is to determine the use cases. There is more inbound traffic on a production system than there is outbound, and while there is a lot of information to gather on inbound traffic, such services are already being outsourced to third-party proxy providers. What this app does is put a monitoring tool in the hands of individual users or administrators that they can run on the desktop. Note that even the lightweight forwarder is only deployed to a handful of machines for each instance of an Enterprise-class Splunk server. What we are talking about can scale to several thousands, with one instance on each machine, and at the same time be ubiquitous, as in they can be on mobile devices as well.
Furthermore, we could argue that packet capture tools can be turned on by admins on all desktops and these could be logged to a common location from which Splunk can read and populate the events. In practice, we seldom enable such applications by default without user opt-in, on grounds of privacy and security, even for employees of an organization. Besides, it leads to more maintenance overhead with very little benefit for governance or security. It is more common, on the other hand, to selectively control the intranet and internet zones, proxies, etc. across the organization and not go after individual computers, with the exception of software updates and publishing. That said, central log appending from multiple sources is also not common, because it introduces a central point of failure and possibly slower responsiveness on the individual user's computer. That is why it is better to separate this overall workflow into two separate workflows - one for pushing an app onto the individual user's computer through software updates and push mechanisms - and another to collect the events/logs gathered by this app into a common file or database index. The former is where our app or Fiddler will come in useful. The latter is what Splunk is very good at with its own collect and index mechanism. Our app goes an extra step over Fiddler in that it collects the packet captures and forwards them to Splunk. This way we have no problem utilizing the best of both for any number of individual user computers in an organization.

We will now look at all the things a proxy does. We will try to see not only the relay behavior but also the filtering it can do. Proxies support promiscuous mode listening. In our case we have a transparent proxy that does not modify the requests or responses. Proxies can also be forward or reverse. A forward proxy helps with anonymity in that it retrieves resources from the web on behalf of the users behind the proxy. A reverse proxy is one that secures the resources of the corporation from outside access. This comes in helpful to maintain quarantine lists to stage access to the protected resources. Network address translation is a useful concept in this regard; it is also referred to as fencing when used with virtual machines. A reverse proxy can do several things such as load balancing, authentication, decryption or caching. By treating the proxy as another server, the clients don't know which server processed the request. The reverse proxy can distribute the load to several servers, which means the configuration is not just a failover but a load-balancing one.
SSL acceleration is another option, where the proxy provides hardware acceleration and a central place for SSL connectivity for clients.
The proxy can also choose to serve/cache static content and facilitate compression, or to communicate with the firewall server, since the rules on the firewall server could then be simplified.