Unit-testing data provider calls requires substituting the underlying DataContext with a test double, that is, mocking it. The trouble is that no fake or mock can be generated for the DataContext type. This is primarily because the DataContext object derives from a base DbContext object rather than implementing an interface; if it implemented an interface, it could be mocked or faked. However, hand-adding an interface that calls out the salient methods for mocking stubs is not a one-time fix. The DataContext object is itself auto-generated, so each time it is regenerated the interface would likely have to be added again or the tests won't pass. That adds a lot of maintenance for otherwise unchanging code. There is a solution with Text Template Transformation Toolkit (T4) templates, which come with the Entity Data Model tooling in EF 4.0: a T4 template can create the interface we desire.
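As an illustration, such a generated interface typically exposes each entity set as an IObjectSet&lt;T&gt; property along with a SaveChanges method. The names below (IStackTraceEntities, Dump) are hypothetical placeholders for whatever the generator would emit for a given model:

```csharp
using System.Data.Objects; // IObjectSet<T>, from System.Data.Entity.dll

// Hypothetical shape of the interface a mocking-context T4 template
// would generate for a model with a single "Dumps" entity set.
public interface IStackTraceEntities
{
    IObjectSet<Dump> Dumps { get; }
    int SaveChanges();
}

// Hypothetical entity class, normally generated from the model.
public class Dump
{
    public int Id { get; set; }
    public string StackTrace { get; set; }
}
```

Because the interface is regenerated alongside the context, regenerating the model no longer breaks the tests.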
We generate this the same way we generate our data objects from EF: right-click the Entity Framework model, set its code generation strategy to None, then add a new code generation item and select the ADO.NET Mocking Context Generator.
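To make the interface usable from tests, a common pattern is a small in-memory implementation of IObjectSet&lt;T&gt; that stands in for the real entity sets. A minimal sketch, assuming no change tracking is needed:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// Minimal in-memory IObjectSet<T> for unit tests. AsQueryable defers
// enumeration, so additions and removals are visible to LINQ queries.
public class FakeObjectSet<T> : IObjectSet<T> where T : class
{
    private readonly HashSet<T> _data = new HashSet<T>();
    private readonly IQueryable<T> _query;

    public FakeObjectSet() { _query = _data.AsQueryable(); }

    public void AddObject(T entity) { _data.Add(entity); }
    public void Attach(T entity) { _data.Add(entity); }
    public void DeleteObject(T entity) { _data.Remove(entity); }
    public void Detach(T entity) { _data.Remove(entity); }

    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }
    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}
```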
This enables testing the data provider classes by mocking the context classes underneath, so the tests stay limited to this layer without polluting the layers beneath or reaching the database.
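A data provider test can then stub the generated interface, shown here with Moq as one choice of mocking library. DumpDataProvider and its GetStackTrace method are illustrative names building on the sketches above, not part of the generated code:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical provider under test: it queries stack traces through the
// generated interface rather than the concrete context.
public class DumpDataProvider
{
    private readonly IStackTraceEntities _context;
    public DumpDataProvider(IStackTraceEntities context) { _context = context; }

    public string GetStackTrace(int id)
    {
        return _context.Dumps.Where(d => d.Id == id)
                             .Select(d => d.StackTrace)
                             .FirstOrDefault();
    }
}

[TestClass]
public class DumpDataProviderTests
{
    [TestMethod]
    public void GetStackTrace_ReturnsTraceForKnownDump()
    {
        // Arrange: back the stubbed entity set with the in-memory fake.
        var dumps = new FakeObjectSet<Dump>();
        dumps.AddObject(new Dump { Id = 1, StackTrace = "ntdll!RtlRaiseException" });

        var context = new Mock<IStackTraceEntities>();
        context.Setup(c => c.Dumps).Returns(dumps);

        // Act: no database is touched anywhere in this test.
        var provider = new DumpDataProvider(context.Object);
        var trace = provider.GetStackTrace(1);

        // Assert
        Assert.AreEqual("ntdll!RtlRaiseException", trace);
    }
}
```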
Testing the data provider class pays off when we build the business logic layer on top of it. The business layer can then assume that the layers beneath it know how to send the data across.
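In the same spirit, the business layer can take a dependency on an interface to the data provider so that it, too, can be tested with the layer beneath stubbed out. IDumpDataProvider and DumpAnalyzer are illustrative names:

```csharp
// Illustrative seam between the business layer and the data layer.
public interface IDumpDataProvider
{
    string GetStackTrace(int dumpId);
}

public class DumpAnalyzer
{
    private readonly IDumpDataProvider _provider;

    public DumpAnalyzer(IDumpDataProvider provider) { _provider = provider; }

    // Business logic only; how the trace is fetched is the lower layer's concern.
    public bool IsCrashInModule(int dumpId, string module)
    {
        var trace = _provider.GetStackTrace(dumpId);
        return trace != null && trace.Contains(module);
    }
}
```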
In the earlier example, we considered a server and a client for reading stack traces from dumps. If that logic were implemented, both the server and the client side could use EF and the data provider classes described above. The interfaces also make the code more testable, at which point the common code can be factored out into a core project shared by the server and the client. The server and client code are written separately so that they can be developed independently.
As discussed earlier, the server code handles all the data for populating the tables. The server presents the same interface whether the caller is a PowerShell client, a UI, or a file watcher service. By exposing the object that reads the stack trace from a dump directly in PowerShell, we improve automation. The file watcher service invokes the same interface for each of the files it watches, as sketched below. The code could also keep the data local in a local database file that ASP.NET and EF understand. That way we can do away with the consumer side entirely in the first phase and add it in a second phase, at which point the local database can be promoted to a shared database server.
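For instance, the file watcher service could be little more than a FileSystemWatcher that funnels every new dump file through the same processing interface the PowerShell client and the UI call. IDumpProcessor and its Process method are assumed names:

```csharp
using System.IO;

// Sketch of a watcher that routes every new dump file through the same
// interface used by the other clients. IDumpProcessor is hypothetical.
public interface IDumpProcessor
{
    void Process(string dumpFilePath);
}

public class DumpWatcher
{
    private readonly FileSystemWatcher _watcher;
    private readonly IDumpProcessor _processor;

    public DumpWatcher(string dropFolder, IDumpProcessor processor)
    {
        _processor = processor;
        _watcher = new FileSystemWatcher(dropFolder, "*.dmp");
        _watcher.Created += (s, e) => _processor.Process(e.FullPath);
        _watcher.EnableRaisingEvents = true;
    }
}
```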
Finally, processing the dump files may involve launching a debugger process, so process failures must be diagnosable and appropriate messages must reach the invoker. Since the process invoking the debugger might handle all exceptions itself and not pass the exception information across the process boundary, failures to read a dump can be hard to diagnose. If the number of such failures is high enough to build a backlog of unprocessed or partly processed dumps, the overall success rate of the solution suffers. A simple way to handle this is to stream all exceptions to the error output stream and read that stream from the invoking process.
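A sketch of that arrangement with System.Diagnostics.Process; the debugger executable and its command line are placeholders (cdb.exe shown as an example):

```csharp
using System;
using System.Diagnostics;

public static class DumpProcessRunner
{
    // Launch the debugger with its error stream redirected so that failures
    // it reports surface in the invoker instead of being swallowed.
    // The executable name and arguments are placeholders.
    public static void RunDebugger(string dumpPath)
    {
        var psi = new ProcessStartInfo("cdb.exe", "-z \"" + dumpPath + "\" -c \"k; q\"")
        {
            UseShellExecute = false,      // required for stream redirection
            RedirectStandardError = true  // capture the child's error output
        };

        using (var debugger = Process.Start(psi))
        {
            string errors = debugger.StandardError.ReadToEnd();
            debugger.WaitForExit();

            if (debugger.ExitCode != 0 || errors.Length > 0)
            {
                // Attach the child's error text to the failure reported upstream.
                throw new InvalidOperationException("Dump processing failed: " + errors);
            }
        }
    }
}
```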