Finding stack frames from a dump can be solved at multiple levels.
1) Manually walk through the dump. This method reads the dump file header, lists the number of streams, finds the exception stream, reads the thread context and extracts the stack pointer. Next it iterates through the streams to find the memory list stream, dumps the list of memory ranges, finds the range containing the stack pointer, reads the memory pages for that range, and walks the memory at the stack pointer to recover the stack frames. For each stack frame, verify that the code corresponding to the current entry actually makes a call to the next frame. If any of these steps fails, display a message that the stack frames could not be fully resolved. To find the source corresponding to each entry, go to the stream with the module list information and, for each stack frame, resolve the module, file, function and offset. This is exactly what a debugger does. The benefit of walking through the dump manually instead of using a debugger is that it can be an entirely in-memory, stream-based operation: dump files come packaged in a ZipArchive, and ZipArchive has methods to read entries as streams without extracting the files, whereas the debugger and its SDK do not support reading from a stream. This is not a marginal benefit, because when we are dealing with thousands of dumps we avoid wasting time copying large files over the network, maintaining the lifetime of the copies, or keeping track of archived locations in our system. This approach is efficient and doable, but expensive to re-implement outside the debugger.
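The first steps of the manual walk can be sketched against the documented minidump layout: a 32-byte header starting with the `MDMP` signature, followed by a directory of streams identified by type ids (module list = 4, memory list = 5, exception = 6). The sketch below parses just the header and stream directory from any seekable stream, which is what makes the in-memory ZipArchive scenario possible; resolving the context, memory ranges and frames would build on top of this.

```python
import struct
from io import BytesIO

# Stream type ids from the documented MINIDUMP_STREAM_TYPE enum
MODULE_LIST_STREAM = 4
MEMORY_LIST_STREAM = 5
EXCEPTION_STREAM = 6

def read_stream_directory(stream):
    """Parse a minidump header and return {stream_type: (rva, size)}.

    Works on any seekable stream -- e.g. a ZipArchive entry read into
    memory -- so no extraction to disk is needed."""
    header = stream.read(32)
    signature, version, num_streams, dir_rva = struct.unpack_from(
        "<IIII", header, 0)
    if signature != 0x504D444D:        # b'MDMP' as a little-endian u32
        raise ValueError("not a minidump")
    stream.seek(dir_rva)
    directory = {}
    for _ in range(num_streams):
        # Each directory entry: StreamType, DataSize, Rva (all u32)
        stype, size, rva = struct.unpack("<III", stream.read(12))
        directory[stype] = (rva, size)
    return directory
```

From the returned directory, the exception stream gives the context record (and thus the stack pointer), and the memory list stream gives the ranges to search for it.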
2) Use the debugger SDK. Here we programmatically call the debugger to read the stack frames for us, against a dump that we have extracted and made a local copy of. The SDK has the benefit that it can be imported directly into a PowerShell cmdlet, improving the automation that is desired for reading the dumps. The caveat is that the SDK requires full trust, and either requires registration in the GAC of the system on which it runs or a setting to skip verification; since this is AppDomain based, it must be done early on. This is not a problem in a test or automation environment. Further, a dedicated service can be written that takes the location of each dump as input and reads the stack trace using the pre-loaded debugger SDK. In addition to portability, using the SDK has the advantage that exception handling and propagation are easier since everything stays in process. Moreover, the SDK comes with definitions of stack frames and validation logic, which obviates the string-based parsing and re-interpretation required when invoking the debugger from a shell. At this level of solution, the changes are not as expensive: we reuse the debugger without having to rewrite the functionality we need from that layer.
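The "structured frames instead of string parsing" point can be illustrated with a small sketch. The `sdk` object, its `open_dump()` method and the shape of its frame records are all hypothetical stand-ins here; the real debugger SDK exposes its own types, but the idea is the same: the caller receives typed frame objects and in-process exceptions rather than text to re-parse.

```python
from dataclasses import dataclass

@dataclass
class StackFrame:
    """Typed frame record -- no re-parsing of debugger console output."""
    module: str
    function: str
    offset: int

def frames_from_sdk(dump_path, sdk):
    """Read the faulting thread's frames from a local dump copy.

    `sdk` is assumed to expose open_dump() returning a target whose
    exception_thread_frames() yields records with module/function/offset
    fields -- a hypothetical shape, not the real SDK's API. Any failure
    surfaces as an ordinary in-process exception to the caller."""
    with sdk.open_dump(dump_path) as target:
        return [StackFrame(f.module, f.function, f.offset)
                for f in target.exception_thread_frames()]
```

A dedicated service or a cmdlet wrapper would call a function like this once per dump, with the SDK loaded up front.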
3) Use a singleton service that watches a folder for new dumps, reads them using a debugger process or the SDK layer mentioned just above, and stores the stack frames in a data store accessible to all. The service abstracts the implementation and provides APIs that can be used by different clients; automation clients can call the APIs directly for their tasks. This approach has the advantage of providing a single point of maintenance for all the usages.
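A minimal sketch of such a service, assuming a polling loop over the watched folder and an in-memory dict as the shared store (a real deployment would use a filesystem watcher and a durable data store). The `read_frames` callable is whichever reader from option 1 or 2 is plugged in.

```python
import os

class DumpWatcherService:
    """Singleton-style watcher: scan a folder, hand each new dump to a
    pluggable reader, and cache the resulting frames so every client
    gets the same answer from one place."""

    def __init__(self, folder, read_frames):
        self.folder = folder
        self.read_frames = read_frames   # e.g. an SDK-based reader
        self.store = {}                  # dump name -> stack frames

    def poll_once(self):
        """Process any dumps that appeared since the last poll."""
        for name in sorted(os.listdir(self.folder)):
            if name.endswith(".dmp") and name not in self.store:
                path = os.path.join(self.folder, name)
                self.store[name] = self.read_frames(path)

    # API surface exposed to automation clients
    def get_frames(self, name):
        return self.store.get(name)
```

Because clients only see `get_frames`, the reading strategy can later switch from a debugger process to the SDK (or the manual walk) without touching any client.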