Wednesday, September 11, 2013

OAuth testing continued

One more test I need to add to my earlier list: a token's expiry time can be validated by waiting out the expiration window and then trying the token again. In addition, we could test that refresh tokens are issued only for non-expired access tokens. A token issued to one client should not be usable by another client.
The spoofing client could even use the same API key as the spoofed client. If the same user authorizes two clients, both of which have requested access tokens, then those tokens should look similar and behave the same, but they should not be transferable or interchangeable. A client that requested user authorization should not be able to use the same token against a non-user-privileged API on behalf of another user.
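A hedged sketch of such a transferability test follows; the endpoint, header names, and token values are hypothetical stand-ins, and the expected rejection status could be 401 or 403 depending on the provider:

using System;
using System.Net;

// Present a token issued to client A while identifying as client B.
// A compliant provider should reject the call.
var request = (HttpWebRequest)WebRequest.Create("https://api.example.com/v1/me"); // placeholder endpoint
request.Headers["Authorization"] = "Bearer " + "token_issued_to_client_A";        // hypothetical token
request.Headers["X-Api-Key"] = "client_B_api_key";                               // hypothetical key header

try
{
    using (var response = (HttpWebResponse)request.GetResponse())
        Console.WriteLine("FAIL: expected a rejection, got " + (int)response.StatusCode);
}
catch (WebException ex)
{
    var response = ex.Response as HttpWebResponse;
    if (response != null && ((int)response.StatusCode == 401 || (int)response.StatusCode == 403))
        Console.WriteLine("PASS: token was not transferable across clients");
    else
        throw;
}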
In the previous post, there was a mention of the different services hosted by Team Foundation Server. We will explore them in detail now. These services are:
1. ITeamFoundationRegistry - Gets or reads user entries or values
2. IIdentityManagementService - Manage application groups and memberships.
3. ITeamFoundationJobService
4. IPropertyService
5. IEventService
6. ISecurityService
7. ILocationService
8. TswaClientHyperlinkService
9. ITeamProjectCollectionService
10. IAdministrationService
11. ICatalogService
12. VersionControlServer
13. WorkItemStore
14. IBuildServer
15. ITestManagementService
16. ILinking
17. ICommonStructureService3
18. IServerStatusService
19. IProcessTemplates

Tuesday, September 10, 2013

In today's post we want to cover the TFS client object model library in detail. We will describe the hierarchy. The TeamProjectCollection is the root object, and we can instantiate a WorkItemStore with this root object. A WorkItemStore has a collection of projects. We can look up the project we want by name: when we find it using the Projects property of WorkItemStore, the corresponding Project object is returned. This Project item has a set of properties we are interested in. The project has AreaRootNodes, which gets the collection of area root nodes. Recall that area nodes can have path qualifiers, so these are tree nodes. The next property is the Categories property, which gets us the collection of work item type categories that belong to this project.
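A minimal sketch of this traversal, assuming a collection at a placeholder URL and a project named "API":

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// TeamProjectCollection is the root object; the WorkItemStore hangs off of it.
var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://server:8080/tfs/DefaultCollection"));
var store = collection.GetService<WorkItemStore>();

// Look up the project by name via the Projects property.
Project project = store.Projects["API"];

// Area nodes are tree nodes; AreaRootNodes returns the roots.
foreach (Node area in project.AreaRootNodes)
    Console.WriteLine(area.Path);

// Categories groups the work item types defined in this project.
foreach (Category category in project.Categories)
    Console.WriteLine(category.Name);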
Take, for example, the QueryHierarchy property. This is a replacement for the obsolete StoredQueries type and lets you get all the query items, whether they are query folders or query definitions. Note that the query definition files are viewable only: there is a property called QueryText that gives you the text of the WIQL stored in these files, but no way of executing that WIQL directly. You can, however, use this text to instantiate a Query object and invoke RunLinkQuery. You need both the WorkItemStore and the WIQL to instantiate the Query object. Thus, given a URI to the TFS server, we have programmatically traversed the object model to find the query and execute it. Whenever you make or change the QueryItems in this tree, you can simply save the QueryHierarchy; this will save any changes anywhere in the tree. If you were to look for an item in this tree, you may have to implement a recursive search method that enumerates the contents of the current QueryFolder. However, if you have a GUID, you can use the find method to retrieve that specific item. There is a FindNodeInSubTree method that can do this recursion, and it accepts a lookup based on a specified ID or a path. In most cases this works well, because when we create, update, or delete TFS work items in Visual Studio, we can get their GUIDs by using copy full path or by a previous client object model call. There is mention of a hash of all the names of the items that can be looked up via an Item property on the tree node, but it doesn't seem to be available with VS2010.
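A sketch of such a recursive search, and of re-executing a stored query's WIQL, continuing with the store and project from the sketch above (the query name is hypothetical, and WIQL that uses macros such as @project may need extra context supplied to the Query constructor):

// Depth-first search of a QueryFolder for a QueryDefinition by name.
static QueryDefinition FindByName(QueryFolder folder, string name)
{
    foreach (QueryItem item in folder)
    {
        var definition = item as QueryDefinition;
        if (definition != null && definition.Name == name)
            return definition;

        var child = item as QueryFolder;
        if (child != null)
        {
            var match = FindByName(child, name);
            if (match != null)
                return match;
        }
    }
    return null;
}

// QueryHierarchy is itself a QueryFolder, so it can seed the search.
var definition = FindByName(project.QueryHierarchy, "All Bugs"); // hypothetical query name
if (definition != null)
{
    // Both the store and the WIQL are needed to instantiate the Query;
    // RunQuery handles flat queries, RunLinkQuery handles link queries.
    var query = new Query(store, definition.QueryText);
    WorkItemCollection results = query.RunQuery();
}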
TFS also provides a set of services that can be individually called for access to the resources they represent. You can get a reference to any service using the GetService method, which takes the type of the service you want to use as a parameter.
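For instance, a couple of the services from the list in the previous post could be fetched like this (a sketch; the URL is a placeholder):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;

var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://server:8080/tfs/DefaultCollection"));

// GetService takes the service type as its type parameter.
var identityService = collection.GetService<IIdentityManagementService>();
var registry = collection.GetService<ITeamFoundationRegistry>();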
The TFS client object model provides a nice hierarchy of objects to use to our advantage. For example, you can navigate from the server scope to the project scope and then to the work items scope, and at any level you can enumerate the resources. This provides a convenient mechanism for exploring the length and breadth of the organization's data.
The client object model is not the only one. There's also the server object model, for use on the server side, and the build process object model on the build machine.
OData, on the other hand, gives the same flexibility outside the programming context. It is accessible over the web.
The way to navigate the OData from TFS is to use the OData model to find the work item we are interested in. Here's how it might look:
var proxy = CreateTFSServiceProxy(out baseUrl, out teamProject, out workItemType);
var workItems = proxy.Execute<WorkItem>(new Uri(string.Format(CultureInfo.InvariantCulture,
                                                "{0}/Projects('{1}')/WorkItems?$filter=Type eq '{2}' and Title eq '{3}'&$orderby=Id desc",
                                                baseUrl,
                                                teamProject,
                                                workItemType,
                                                workItemTitle)))
                  .First();
This lets us specify the filter in the Uri to get the results we want. The results can then be processed or reported in any way.
Another way to execute the queries could be as follows:
var queryProxy = new TFSQueryProxy(uri, credentials);
var queries = queryProxy.GetQueriesByProjectKey(projectKey); // projectKey is a placeholder


Monday, September 9, 2013

OData and TFS

TFS has a client object model, available via the Microsoft.TeamFoundation.Common and Microsoft.TeamFoundation.Client libraries for programmatic access. Using these libraries, a query can be executed by instantiating the Query class, which takes the store and the WIQL as parameters.
The store can be found as follows:
1) Instantiate the TfsTeamProjectCollection with the Uri for the TFS server, something like http://server:port/tfs/web
2) Get the work item store from 1) with the GetService method
3) Get the project from the work item store using workItemStore.Projects["API"]
The Query class represents a query to the work item store. An executed query returns a WorkItemCollection. These and other objects can be browsed from Microsoft.TeamFoundation.WorkItemTracking.Client.dll, which is available from \Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0 on computers where Team Explorer is installed.
Authenticated credentials may need to be used with the Team Foundation Server. An ICredentials object can be created to connect to the server; the password is required to create this object. The Team Foundation Server also provides IdentityDescriptors for impersonation, which means that you need not use usernames and passwords.
Both the Uri and the ICredentials can be passed to the constructor of the TfsConfigurationServer object. The constructor also allows for mixed-mode authentication, where both the credentials used to connect and the Team Foundation identity to impersonate are supplied.
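A sketch of both styles of construction (the account names and SID are placeholders, and the exact constructor overloads should be verified against your TFS SDK version):

using System;
using System.Net;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

var uri = new Uri("http://server:8080/tfs");

// Explicit credentials: the password is required to build the ICredentials.
ICredentials credentials = new NetworkCredential("user", "password", "DOMAIN");
var server = new TfsConfigurationServer(uri, credentials);
server.EnsureAuthenticated();

// Impersonation: an IdentityDescriptor stands in for the user, so no
// password for the impersonated account is needed.
var descriptor = new IdentityDescriptor(IdentityConstants.WindowsType, "S-1-5-21-...-1234"); // placeholder SID
var impersonating = new TfsConfigurationServer(uri, credentials, descriptor);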
Once the TFSConfigurationServer object is constructed, we can drill down to the objects we are interested in using the object model hierarchy or using search queries.
Queries can be executed by navigating to the QueryFolder for a QueryDefinition.
So the code looks like the following:
var server = TfsConfigurationServerFactory.GetConfigurationServer(uri, credentials);
var projectCollection = server.GetTeamProjectCollection(collectionNameOrGuid);
var store = projectCollection.GetService<WorkItemStore>();
var teamProject = store.Projects["your_project_name"];
Debug.Assert(teamProject != null);
var queryResults = store.Query("your_WIQL_here");
or
var folder = teamProject.QueryHierarchy as QueryFolder;
foreach (QueryItem queryItem in folder)
{
    // iterate over the query folders and query definitions
}
There is another way to get this data.
OData exposes a way to work with this data over the web. It is accessible from any device or application that supports HTTP requests. Think of OData as a web catalog browser for the client object model. For example, if you can enumerate some work item types with the client object model, then you can view them in a browser with OData. Scripts and programs can now work off of HTTP requests.
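As a quick illustration, that enumeration can be scripted with a plain HTTP GET (the host below is a placeholder for wherever the OData service for TFS is deployed):

using System;
using System.IO;
using System.Net;

var url = "https://odata.example.com/DefaultCollection/Projects('API')/WorkItems?$top=10"; // placeholder host
var request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = new NetworkCredential("user", "password");

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    // The response is an Atom feed by default; many OData services also honor $format=json.
    Console.WriteLine(reader.ReadToEnd());
}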

Saturday, September 7, 2013

One of the tests for an OAuth provider could be to use the access tokens with an API that takes user information as a parameter. If none of the APIs use a user parameter, and only the access token, this test does not apply. However, using the user parameter for the current user, whose access token has been retrieved, should work; using that of another user with the same access token should not.
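A sketch of this pair of checks (the endpoint, parameter name, and tokens are all hypothetical):

using System;
using System.Net;

// Returns the HTTP status for a call made with the given token and user parameter.
static int CallApi(string token, string user)
{
    var request = (HttpWebRequest)WebRequest.Create("https://api.example.com/v1/profile?user=" + user);
    request.Headers["Authorization"] = "Bearer " + token;
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
            return (int)response.StatusCode;
    }
    catch (WebException ex)
    {
        var response = ex.Response as HttpWebResponse;
        return response != null ? (int)response.StatusCode : -1;
    }
}

var aliceToken = "access_token_for_alice"; // hypothetical token
Console.WriteLine(CallApi(aliceToken, "alice") == 200 ? "PASS" : "FAIL"); // token's own user works
var other = CallApi(aliceToken, "bob");
Console.WriteLine(other == 401 || other == 403 ? "PASS" : "FAIL");        // another user is rejected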
The same user could sign in from multiple clients. Since tokens, once issued, are usually valid for a duration of an hour, the provider does not limit the number of calls made in that duration. For the same reason, the number of clients used by the user should not be limited. Since the APIs in discussion are stateless, the number of such calls doesn't matter. That said, a single client may be hogging the provider. Throttling in both cases could be helpful, but it should not be done on a per-user or per-client basis; it could be done on an API-by-API basis. This is one of the functions of the proxy provider. Authorization denials are severe measures and should generally be a last resort.
Performance testing of an OAuth provider is also important. If the existing load tests cover authorizations and revokes from a user on a client, repeated say a thousand times at one-minute and five-minute intervals, it should work. Token expiry time is not specified by the user, so a test that involves a revoke prior to a re-authorization should work as an equivalent. Load tests could include a variation of the different grant types for authorization. The functional tests or the build verification tests cover these, and they might have other tests that could be thrown into the mix. However, a single authorization and revoke of a token should be targeted in a separate run if possible, involving the authorization grant type that is most common. The test run that includes other kinds of tests could include not only hand-picked cases from the existing test suite but also a capture of peak-load traffic from external clients.
Tests could also target authorization denials from the API provider and the proxy independently. This should be visible from the responses to the authorization requests: the server property carries the source of the response, which is useful to know whether the token is invalidated because of an invalid user or an invalid client. Status code testing is not enough; the error message, if any, should also be tested. In the case of OAuth, providing an error message that mentions an invalid user or an invalid client could be helpful. A common error message is a "developer inactive" message. This one is interesting because there seems to be an activation step involved.
Tests could cover spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege (the STRIDE categories).
One of the weaknesses of this mechanism is that the APIs have to comply in a certain way. For example, none of the APIs should expose the userId parameter directly. If APIs expose a user parameter, it should be enforced with an alias for client usage, even if those aliases are translated internally. Separating the user parameter of the API from the security mechanism that validates the user is important, because security is generally considered a declarative aspect and not part of the code of the API.
If the two were tied together, where the user information for the API is looked up via security token translation in the API implementation instead of outside it as a parameter, each API requiring that lookup may need to do the same. Instead, it is probably more convenient to maintain a list of APIs secured by privileged access tokens. For example, if an endpoint is marked internal, that should be enforced; it could be enforced by making sure that the callers are internal, or that it is packaged in an assembly that is not exposed, etc. Tests should verify that all APIs are marked for use only with an access token, even if the tokens are not user-privileged.