Saturday, August 31, 2013

The APIs we mentioned in the previous post for OAuth have resource qualifiers based on clients, users and tokens since there can be several of each. We have not talked about claims. Claims provide access to different scopes such as account, payment, rewards etc. We leave this as an internal granularity that we can expose subsequently without any revisions to the initial set. For the first cut of the APIs we want to keep it simple and exhaust the mapping between the clients, the users and their tokens. The tokens are themselves hashes, and they have an expiry time of around an hour. So we keep track of the tokens issued for the different clients and user authorizations. The table for the tokens could have a mapping to the user table and the client table based on the user and client ids respectively. The table is populated for every authorization grant. When tokens are issued, their issue time is recorded so that on every access to the token we can check whether it has expired. Since the tokens are hashes or strings, it's useful to index them to help lookup time. An alternative would be to look them up by date of issue so that we can retrieve only the tokens issued in the last hour and check for the presence of the token presented. Tokens will have different privileges depending on whether the userId and the clientId fields are populated. There could be a field for, say, a "bearer" token. These tokens are important since OAuth treats them differently; RFC 6750 describes them in detail. The lookup of the token needs to be fast since tokens will most likely be used more times than they are issued. A cryptographic hash is sufficient for the token since we don't want to tie it to any information other than the mapping we have internally, and we do want to make it hard for hackers to generate or break the hash. The .NET libraries make it easy to generate such a hash.
A token, once generated, should not be removed from the table unless the user requests it. Revoking only on user request keeps the table consistent internally and externally because the APIs return the list of tokens that the clients can keep track of. Revoking the token is important because the user or client can choose to revoke one or more tokens simultaneously, so this operation is different from the insert or the lookup. The same token should not be generated more than once, so retries have to be idempotent.
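As a minimal sketch of this bookkeeping, assuming a hypothetical in-memory dictionary stands in for the real token table (all names here are illustrative, not the provider's actual schema):

```python
import secrets
import time

TOKEN_TTL_SECONDS = 3600  # roughly one hour, per the expiry discussed above

class TokenStore:
    def __init__(self):
        # token string -> record; the dict itself acts like an index on the token
        self._tokens = {}

    def issue(self, client_id, user_id=None):
        # A cryptographically random string, hard for an attacker to guess.
        token = secrets.token_urlsafe(32)
        self._tokens[token] = {
            "client_id": client_id,
            "user_id": user_id,        # None => client-only, lower privilege
            "issued_at": time.time(),  # issue time recorded for expiry checks
        }
        return token

    def lookup(self, token):
        record = self._tokens.get(token)
        if record is None:
            return None
        if time.time() - record["issued_at"] > TOKEN_TTL_SECONDS:
            return None  # expired; every access re-checks the issue time
        return record

    def revoke(self, token):
        # Idempotent: revoking an already-revoked token is not an error,
        # so user-initiated retries are harmless.
        self._tokens.pop(token, None)
```

A client-only token simply leaves `user_id` as `None`, which is how the lower-privilege case can be distinguished at lookup time.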

Friday, August 30, 2013

In the previous post, we began with a mention of how the proxy and the API interact as of today and moved on to how we want it to be. Here we re-visit the current interactions. The proxy may work across several API providers. However, we will keep our discussion to the API implementation. In the API implementation, we have the following methods provided by the OAuth provider:
1) Get the description of a client
2) Create an access token when the API calls the provider
3) Create an authorization code when the API calls the provider
4) Get the applications for a user
5) Revoke an access token
When the tokens or codes are created, a user context is passed in that is used to track the client authorizations and the responses. This is how the tokens and codes are associated with a client and user. Depending on whether the provider is implemented by a proxy or by the retail company, the methods mentioned above are required to establish the mapping between the user and the client and their tokens/codes. The methods also take additional parameters that make the token effective, such as grantType, scope, responseType, inclusion of a refresh token in the response, a URI and a state. These parameters shape the tokens that are issued.
However, let's take a closer look at the APIs to see what we could add or improve. The first thing is testability. Do we have all the information exposed so that the data can be tested? Not quite: we could do with a list of tokens associated with each user. If we have a list of tokens issued for a client and a list of tokens issued for a user, we should be able to see which tokens are issued for clients only, and that the tokens associated with clients only are for low-privilege uses as opposed to those issued with a user context. This test helps us know the set of tokens that have more privilege than others. By separating out the tokens this way and knowing the full set of tokens issued, we should be able to ensure that all access is accounted for.
Next we could do with a list of clients authorized by user and a list of users that a client has been authorized for. This could help us determine whether the authorization updates both lists. This is relevant because a token for user resources should be issued only when both are present.
Lastly, the list of authorized clients for a user and the list of users for a client should be updated with corresponding revokes. When all the tokens for a user with a given client are revoked, it could be considered equivalent to revoking the client authorization, so that the client can request the user to re-authorize via OAuth for the next session. That is up to the discretion of the client and could be facilitated for the user with a checkbox for keeping the client authorized the first time the user signs in with the OAuth WebUI.
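The pairing invariant described above can be sketched as two mirrored lists that are always updated together; this is an illustrative in-memory model, not the provider's actual implementation:

```python
from collections import defaultdict

# Two mirrored mappings: clients authorized by each user, and users who
# have authorized each client. Names are hypothetical.
clients_by_user = defaultdict(set)
users_by_client = defaultdict(set)

def authorize(user_id, client_id):
    # Both lists must be updated together on an authorization grant.
    clients_by_user[user_id].add(client_id)
    users_by_client[client_id].add(user_id)

def revoke(user_id, client_id):
    # Revokes must also update both lists, keeping them consistent.
    clients_by_user[user_id].discard(client_id)
    users_by_client[client_id].discard(user_id)

def can_issue_user_token(user_id, client_id):
    # A token for user resources should be issued only when both lists agree.
    return (client_id in clients_by_user[user_id]
            and user_id in users_by_client[client_id])
```

Testing then reduces to asserting that every grant or revoke leaves `can_issue_user_token` consistent with both lists.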

Wednesday, August 28, 2013

Discussion continued

In the previous post we discussed interactions for OAuth. In this post we focus on the provider side interactions. The proxy and the API both participate in token and code grants. The proxy maintains the token database and the API maintains the user database. When the client calls in without a user context, the grants have lesser privileges. When the client calls in with a user context, the grants have more privileges. The API implements the OAuth endpoints that the WebUI or clients call, and internally there are handshakes between the proxy and the API before a grant is issued. As you can see, there are several interactions between the proxy, the client, the user and the API, including cross communication. First, the client talks to the token endpoint, then the API talks to the token provider, then the API returns the token, as in the case of client credentials. If the user is involved, the user first talks to the WebUI, then the WebUI talks to the API, which then talks to the token provider, and then the token is passed to the client.
When the user signs on at the WebUI through any client, the API gets a call to an internal or external endpoint for OAuth. At that point, it is a mere convenience for the API to rely on an external token provider to return a token. It is better for the API to own the mapping between userId and clientId since such provisioning is easy and brings in control over which clients can selectively be treated differentially. For example, the company's mobile application might be a client that should have higher availability than other clients. Also the company's mobile application could have access to internal endpoints. Internal endpoints are different from external in many ways but I would like to draw attention to one call-out item. External endpoints could be hosted by any proxy or in a third-party cloud hosted by yet another network. There could be significant delays in the responsiveness of the APIs. This is not just a maintenance issue, it's actually a significant usability issue. User attention span can be assumed to span a handful of seconds, and even on mobile devices, frequent round trips may need to be avoided.
Network delays, heterogeneous networks and cloud services provide a significant challenge to the responsiveness of the APIs to various clients.
When we talk about the company's mobile application, we could consider diverting the traffic to networks within direct control of the company or a direct lease with a cloud provider.
This argument is made in favor of differentiating clients, but in general the point here is that the network responsiveness of the APIs may be even more important than resource management such as CPU or memory for the API providers. CDNs could also help but they serve a very different purpose from APIs.
Moreover statistics and call history of APIs are better grouped with clients rather than users.

Monday, August 26, 2013

membership providers

membership providers have their own validation routines. For example, MembershipProvider.ValidateUser checks if the specified user is valid. With OAuth, the membership provider changes, and hence the corresponding validations should occur in the OAuth provider or, in this case, the website. In OAuth, the sessions are maintained by the other website that requests OAuth. This causes the user to reauthenticate after a specified time. For the most part, refresh tokens are sufficient to keep accessing protected resources.
UserId lookup is associated with the WebUI login process. All callers to the APIs could use the alias "me" for the user Id. This way they don't need to know the user Id. However, internally for APIs that are secured through OAuth, the access token is associated with a user and client. The OAuth provider should be able to find out what the userId is based on the token and grant access to protected resources, or flag the token as inappropriate.
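A sketch of this token-to-user resolution, with a made-up record table standing in for the provider's token database:

```python
# Hypothetical token database: each record carries the user and client
# the token was issued for. Values here are invented for illustration.
token_records = {
    "tok-abc": {"user_id": "u-123", "client_id": "c-9"},
}

def resolve_user(path_user_id, access_token):
    """Resolve the 'me' alias from the presented access token."""
    record = token_records.get(access_token)
    if record is None or record["user_id"] is None:
        # Unknown token, or a client-only token with no user context.
        raise PermissionError("inappropriate token")
    if path_user_id == "me":
        return record["user_id"]
    if path_user_id != record["user_id"]:
        # A caller must not reach another user's resources via its own token.
        raise PermissionError("token does not match the requested user")
    return path_user_id
```

With this in place, a call such as `/v1/users/me/clients` never needs the raw userId from the caller.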
So let's consider the access between each quadrant of players:

Client                   |    User (Vivian)
(Ravi.com)               |
----------------------------------------------
Provider (Proxy)         |    API (Retail.com)


I made the User directly access the API through the retail company's primary website or over the company's mobile application. Users are able to access their resources.
All calls crossing the dotted line must carry a registration so that the implementation knows who the caller is. The bottom quadrants constitute the implementation components, things we assume we can control, and are therefore private in nature. The user and the client can be considered public. The public region is largely where security attacks originate, typically launched by a malevolent user or client. The interaction between a user and a client is one of authorization. Without both of them, there is no access for the client to the protected resources in the API. For the non-protected resources, the client has to get tokens from the OAuth provider, and for that the client has to register with the OAuth provider. The retailer has no knowledge of who the client is and does not need to keep track of the client. This can be made strict: virtually all calls to the API are for some user or a guest, and the API need not even bother about the clients. If that strictness were enforced, the retailer would expect some kind of userId from the proxy for all client calls. Therefore, if the proxy does not present a token that maps to a userId, there is no access to protected resources through the proxy.
Notice that Retail.com could become a client by itself. This is a scenario we can talk about in the improvements. It does not change the fact that the API is the true representation of the Retail company.
When all websites, including the retailer's, access data through the API as registered clients, the API implementation is guaranteed that all accesses are unified: they are made with a token that represents both the client and the user. The retailer's website does not have to be a special client. It just needs to keep its client Id and secret confidential, just like any other web server. In practice, a .com website for a retailer carries considerable legacy; registering itself as a client adds no customer win and moreover poses risks during the change which the business may not allow. Furthermore, the .com website and the API provide mechanisms to compare the data between the two.



We will also talk about proxy and/or API sharing user and client mapping.
Finally, I want to list what I see as candidate improvements I can consider for OAuth
1) Token and management endpoints to be redesigned such that one provides token or code by all grant methods and another provides management functionalities
2) the management portion could be exclusively for the WebUI presented to the user for management of the client registrations. Grants and revokes of clients or new user registrations and redirects are what the WebUI provides
3) API implementation should not rely on OAuth merely as user-based access that could otherwise have been established with a single-sign-on web server. OAuth was meant to enable richer experiences customized for different users and clients. It was also not merely about using tokens and codes in place of passwords and sessions, but about letting different clients provide seamless sessions.
4) when the clients are fully supported and are treated the same as users, then we could even do away with the notion of users in the implementation.
5) resource management becomes easier with client only management and representing users as groups.

We can then think of a stack such as the following:
User
Client
Proxy
API
and eliminate the proxy with better functionalities within the API

OAuth testing discussion continued

In this post we look at performance considerations for OAuth testing. Much of this testing targets the OAuth provider. The authorization endpoints, both token and management, rely on the token database maintained by the OAuth provider.

Saturday, August 24, 2013

OAuth testing continued

9) landing page for user authorization
a) users must be able to see client description
b) user acceptance must result in return url with token/code
c) user denial must result in return url with error parameters in query string
d) response type, user_id, client_id, redirect_uri, scope and state parameters should be validated.
e) tokens retrieved should exist in the provider database.
10) user and client mapping
a) client access provisioned on a user by user basis, otherwise only client credential provisioning possible
b) check against cross user profile access via common clients
c) check against admin access clients
d) check correctness of user list maintained by client
e) check correctness of clients authorized by user
11) resource management policies enforcement
a) provision minimal scope authorization and check for external access
b) check against all scope parameters or access range.
c) specify full access range and bearer token to see if different if card balances can be read.
d) set the state and callbacks to see if scope changes
e) check which apis or methods are to be protected with access tokens and if they are all enforced.
f) check mashery or OAuth providers api for token to user or client mapping
12) security validations
a) check for phishing attacks
b) check the http headers for leak of securables
c) check that TLS is required for all APIs
d) check that the server authentication by way of certficates is provisioned.
e) check that client ids and secrets are not leaked
f) check that cross site forgery attacks can be thwarted by callbacks and state.



Friday, August 23, 2013

oauth testing continued

Let's close on the OAuth test matrix here:
1) Implicit Grant
a. missing user id
b. missing client id
c. any user id but Valid client id 
d. Valid user id and client id
e. Valid user id but invalid client id
f. Error codes – 400, 403, 404, invalid_request, invalid_token and insufficient_scope
g. Use invalid uri to not get 302 (new)
h. Performance (new)
i. XML and JSON responses
2) Authorization Code Grant
a. Similar to Implicit grant but responseType=code so 1a to 1i will be repeated. (new)
b. Code will be translated to token.
c. Code expiry will not be tested but code revoke will be tested to validate token
3) Client credentials grant
a. Targets token endpoint to get token using client id, client secret, scope (new)
b. Checks for error message for invalid grant (new)
4) Revoke access
a. Revoke token will be tested but not revoke client
b. Revoke an already revoked token
c. Revoke an already revoked client (new)
d. Revoke all tokens for a client ( Get all tokens and validate each) (new)
5) Claim information
a) Get claims based on default scope (null)
b) Get claims based on specific scope (not null)
6) Client Information
a) Get name of client application and check access tokens
b) Get client without name, description or image to see the default rendered to the user
c) Get all access tokens and add or remove tokens to see if the client information is updated
d) Check if revoke all removes all access tokens.
7) Get allowed clients for a user
a) Check if all the clients are listed for the user.
b) Add or remove a client to see the corresponding update to the list
c) Authorize a client for the user but delete the client to check for orphaned entries
8) Check response types
a) check the code
b) check the token


In the previous post we looked at some OAuth security considerations. In this post we review the IANA considerations from RFC.
First, the token types have to be registered with the OAuth external review mailing list.
Examples could be included with the registration request. Registry updates must only be done by designated experts.
Registration templates include type name, authentication scheme, change controller and specification document.
Initial registry contents include

  • client_id, 
  • client_secret, 
  • response_type, 
  • redirect_uri, 
  • scope, 
  • state,
  • code,
  • error_description,
  • error_uri,
  • grant_type,
  • access_token,
  • token_type,
  • expires_in,
  • username,
  • password
  • refresh_token
Response types are also registered. Again these include response type name, change controller and specification document

OAuth testing should cover these registrations. For example, tests should cover the different response types and token types.

OAuth testing requires the clients and users mapping to be tested. In addition, resource management policies should be tested. For example, resource management testing should verify that public clients don't have access to user profiles. Similarly, user-profile-based access should also reach the non-user-specific information.

A specially designated admin client is useful for offloading troubleshooting to non-development teams. However, that may be for internal use and should ideally be built without modifying the existing protocol implementation.

Thursday, August 22, 2013

OAuth security considerations from RFC continued.
1) resource owner password credentials - this is probably the one that grants maximum access to the client. However client and authorization server should limit the scope. This method in general is different from the pattern that this RFC proposes and is used for backward compatibility. The vulnerability here is that the resource owner does not have control over how the credentials are used.
2) tokens, codes, passwords, and secrets should not be transmitted in the clear. HTTP headers and URIs can appear as clear text. State and scope could appear in the clear, so they should not have any sensitive information.
3) ensure endpoint authenticity by requesting TLS with server authentication. TLS certificates must be validated
4) tokens, passwords etc. should not be guessable. The probability of an attacker guessing one should be less than 1 in 2^128.
5) due to the use of redirects there could be phishing attacks possible. Websites that ask for credentials should be authenticated.
6) Cross site request forgery should be prevented such as when  a  user-agent is made to follow a malicious URI to a trusting server.  In such security attacks, the attacker injects his own authorization code. CSRF protection is achieved by including a value in the URI that hints at the authenticated state.
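Point 6 above, state-based CSRF protection, might be sketched as follows; the session dictionary stands in for whatever session storage the client actually uses:

```python
import hmac
import secrets

def new_state(session):
    # Bind an unguessable value to the user-agent session before
    # redirecting to the authorization endpoint.
    session["oauth_state"] = secrets.token_urlsafe(16)
    return session["oauth_state"]

def check_state(session, returned_state):
    # On the callback, the returned state must match the stored one;
    # pop() makes each state single-use.
    expected = session.pop("oauth_state", None)
    if expected is None:
        return False
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, returned_state)
```

A callback arriving with a missing, reused, or attacker-injected state is simply rejected, which defeats the injected-authorization-code attack described above.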

URL shortening for REST API

REST APIs are easy to document, and their use often involves small variations in the URI. Therefore, URI shortening and parameter abbreviation help, particularly on mobile devices. The difference between regular URL shortening and this kind of smaller URI is that these will not necessarily be a hash. The URIs can have meaningful abbreviations given that we control which APIs they map to and because the APIs are limited. The shortened URIs could be stored in a database and the URIs they map to could still be available independently. This means that we can develop the MVC route-based APIs as usual and have this service as an add-on. The service merely redirects to the actual APIs with HTTP 302 and does not require any static or dynamic pages assigned to the short URIs. This service could even live in the HTTP proxy, if the API uses one. The proxy can be transparent on this redirection.
API documentation has to be kept in sync with this short URI mapping. Documentation can be generated from the config routes, in which case the short APIs may need their own listing and corresponding API documentation redirects. If the framework were to allow such short URI services, then there is no need for an external service or database, and short URIs could be provisioned in the config files themselves. This will also help keep the documentation together.
Query string or request body parameter abbreviation requires changes in the API implementation. Parameters need not be specified with their full names but by their first initial or some other shortening convention. Only the parameter lookup changes, so the abbreviations can be kept in sync with their corresponding actual parameters. Together, abbreviations, short URIs and documentation provide compelling convenience. Statistics could be gathered for both short and regular URIs to warrant continued usage.
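Putting the pieces together, here is a sketch of the short-URI redirect table and a first-initial parameter abbreviation scheme; all routes and abbreviations are invented examples:

```python
# Meaningful abbreviations, not hashes: we control which APIs they map to.
SHORT_ROUTES = {
    "/u/me": "/v1/users/me",
    "/u/me/c": "/v1/users/me/clients",
}

# First-initial parameter abbreviations, kept in sync with the full names.
PARAM_ABBREVIATIONS = {"g": "grant_type", "s": "scope", "r": "redirect_uri"}

def resolve_short(path):
    """Redirect (302) a short URI to its real route; no page is bound
    to the short URI itself."""
    target = SHORT_ROUTES.get(path)
    if target is None:
        return 404, path
    return 302, target

def expand_params(params):
    """Only the parameter lookup changes; full names pass through as-is."""
    return {PARAM_ABBREVIATIONS.get(k, k): v for k, v in params.items()}
```

Since the service only emits redirects, it could sit in the HTTP proxy in front of the API, leaving the MVC routes untouched.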

Wednesday, August 21, 2013

This is a quick overview of the OAuth RFC 6749 and will cover some salient features mentioned there.
Implicit workflow is described as follows:
The client is issued an access token directly instead of an authorization code, on the basis of the resource owner's authorization. The client is not verified. The access token is sent back in the URI fragment, so unauthorized parties could gain access to it. In some cases the client can be verified with the redirect URI. Access tokens are issued with minimal scope. Phishing attacks are possible, where the access token is switched with another token previously issued to the attacker. Due to such security considerations, when the authorization code grant is available, implicit grant might not be provisioned.
Other security scenarios include the following:
1) A malicious client can impersonate another client and obtain access to protected resources. This can be avoided with registration of the redirect URI used for receiving authorization responses. The authorization server must authenticate the client whenever possible.
2) Access tokens must be kept confidential in transit and storage. TLS can be used for transit and database could be encrypted.
3) Authorization servers may issue refresh tokens to both web application and native application clients. Refresh tokens must be issued only to the client that was previously issued the token. The authorization server could also use refresh token rotation, where clients that use a previous token can be identified.
4) Authorization codes must be short lived since it is used for the exchange of a token.
5) The redirect_uri parameter can be manipulated.


Tuesday, August 20, 2013


In the previous post, there was a mention of cross-user access or admin access; in this post we talk about infrastructure support for mobile devices. One of the things we discussed was expiration of access. With mobile devices and other applications, convenience can be provided to the user so that her login efforts are minimized. For example, the authorization website where the user logs in to grant access to a client may choose to keep her signed in for a session longer than the token expiry time. Clients whose tokens expire need not ask the user to log in again; that can be maintained by the site. Further, the user may grant access indefinitely, or until an explicit revoke at the website, so that the client could continue to have the website send redirects. Strictly speaking this is not OAuth; it's just a convenience provisioned outside OAuth.

Monday, August 19, 2013

OAuth testing discussion continued.

In the previous posts we discussed that OAuth testing depends on the nature of the token. The token is granted to a client for access to a user's resources. Therefore, OAuth APIs are largely qualified by user first and then client, so APIs are resource qualified in the form '/v1/users/me/clients/clientId'. The reverse order of '/v1/clients' generally does not need to be maintained because the clients don't have any access unless a user authorizes a resource.
That said, it is important to consider admin access such as one that can reach across users. Is it necessary to have admin access ? or should users manage their own client authorizations ? This is the topic we will briefly consider.
Resources that are not user-specific are generally less demanding on security than resources that are user-owned. For example, the reward cards, and the balances on them, that a user has are considered confidential, and so no other user should have access to them. However, what if there is governance required, such as when law enforcement agencies want to know how many coffee drinks you need to have before you get your free one? Or, even more pertinent, where your last purchase was made in case you died? Governance has a notorious connotation of big-brother snooping, but in our technical discussion here we refrain from that and consider the case where it is important for another user or system to know the information of other users. This could be as benign as friends sharing card information to see which card needs to be used first for the free drink. The OAuth spec places no limitation against admin access or access to other users' information. In OAuth there is no mention of grouping users or clients. Nor is there any mention of API aliasing, shortening, or parameter abbreviation for ease of use by one or more users, especially from handheld devices. This is left to individual vendors to determine.
In the case where vendors choose to have cross user access or admin client access, the nature of the token granted is relevant. This could be scoped to include certain information such as last store while excluding certain information as credit cards or social security number. Basically, the protocol does not and should not prevent granularity of access and accessors be it the client or the user.
This definitely calls for OAuth tests so that security, functionality and performance are not affected. The RFC discusses several security implications from the user and client point of view however testing may need to be customized to the implementation. The implementation can be as simple as requiring all authorizations owned and delegated to the end user. 
Bearer token recommendations from RFC
1) Bearer tokens must be safeguarded since they give access to whoever bears them. They should not appear in the clear, such as in headers or cookies, and should be passed only over secured traffic.
2) TLS certificate chains should be validated otherwise DNS hijackers can steal the token and gain unintended access
3) https should be used for all OAuth communications. The transport layer security is necessary for encrypting the traffic and securing the endpoints.
4) Bearer tokens should not be stored in cookies. Implementations must not store bearer tokens in cookies because that can lead to cross-site forgery attacks.
5) Bearer tokens should only be issued as short lived. One hour or less is recommended. Using short lived bearer tokens means that very few will gain access to it to misuse it.
6) Bearer tokens should always be scoped, scoping their use to the designated user or party. This is important because we don't want to grant universal access.
7) Bearer tokens should not be passed in page URLs since browsers, web servers, and other software may not adequately secure URLs. The token may appear in web server logs and other data structures.
Appropriate error codes must be returned to deny specific requests. These include:
1) invalid_request - a resource access error response. This covers cases where the request is to be denied on grounds of an invalid user or client. HTTP status code is 400.
2) invalid_token - also a resource access error response. This covers the case where the token may have expired or the client has used a fabricated token. HTTP status code is 401 (unauthorized).
3) insufficient_scope - also a resource access error response. This covers the case where the token has not been scoped appropriately and granting access could cause a security vulnerability. HTTP status code is 403 (forbidden).
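For the expired-token case above, an error response in the RFC 6750 style might look like:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="example",
                  error="invalid_token",
                  error_description="The access token expired"
```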
An example of a successful response could be
     HTTP/1.1 200 OK
     Content-Type: application/json;charset=UTF-8
     Cache-Control: no-store
     Pragma: no-cache

     {
       "access_token":"mF_9.B5f-4.1JqM",
       "token_type":"Bearer",
       "expires_in":3600,
       "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
     }


Sunday, August 18, 2013

OAuth testing discussion continued

The previous post on OAuth testing discussed the mapping of user and client for the grant of a token. Therefore the testing discussion focused mostly on the setup of a user and client for the purpose of granting a token. The token has a certain expiry time. There was some mention of scope and state. In this post, we discuss revokes.
When the authorization website grants access to a client from a user, the client can get a "bearer" token to access the protected resources. A bearer token can be used by whoever presents it, for any of the APIs. The authorization server must implement TLS so that the passwords and bearer tokens are protected in transit.
OAuth only mandates that the tokens are issued for less than an hour. However, the client, the user and the authorization website could establish a trust relationship the first time the authorization occurs. This is outside the scope of OAuth and may be provided as a mere convenience to spare the user from having to sign on each time for the client.
If this relationship were to be persisted, it would merely be a table mapping the client to the user on this server. Records could be added when the authorization completes and revoked when the user explicitly wants to clean up the clients she has authorized. The authorization website identifies the user by the user id and the client by the client id. When the client requests authorization again subsequently, the website may have the user signed on from a previous session and hence uses that session to generate a token for the client. The user does not specify the password again, nor does the client request anything different other than accepting the redirect URI response with the token it is expecting.
Care should be taken to ensure that retries by the user do not create duplicate rows in the table maintained by the authorization website. In addition to explicit user revokes, the website may grant more session time than the usual token expiry time. This can be facilitated with the use of a timestamp.
Lastly, the website may do periodic cleanups and archival of the trust relationships. The default could be to not store the trust relationships at all and have the token expiry time force re-authentication.
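A sketch of this optional trust-relationship table, with a retry-safe insert and a timestamp that supports both the extended session window and periodic cleanup; all names here are illustrative:

```python
import time

# (user_id, client_id) -> authorized_at timestamp. A dict keyed on the
# pair guarantees at most one row per user/client combination.
trust = {}

def remember_authorization(user_id, client_id):
    # setdefault makes retries harmless: no duplicate rows, and the
    # original authorization timestamp is preserved.
    trust.setdefault((user_id, client_id), time.time())

def revoke_authorization(user_id, client_id):
    # Explicit user cleanup of an authorized client; idempotent.
    trust.pop((user_id, client_id), None)

def cleanup(older_than_seconds, now=None):
    # Periodic cleanup/archival of stale trust relationships.
    now = now if now is not None else time.time()
    stale = [k for k, t in trust.items() if now - t > older_than_seconds]
    for key in stale:
        del trust[key]
```

The same timestamp can be compared against a longer site-session window than the token TTL, which is exactly the convenience described above.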
In the previous post, there was sample code in Objective-C. The Xcode project comes with a test project that can be used for testing the code in the project. Here is an example of the code in the test:

//
//  Starbucks_API_ExplorerTests.m
//  Starbucks API ExplorerTests


#import "Starbucks_API_ExplorerTests.h"

@implementation Starbucks_API_ExplorerTests

- (void)setUp
{
    [super setUp];
}

- (void)tearDown
{
    // Tear-down code here.
    
    [super tearDown];
}

- (void)testExample
{
    STFail(@"Unit tests are not implemented yet in Starbucks API ExplorerTests");
}

@end



Saturday, August 17, 2013

sample Objective-C code


//
//  ViewController.m


#import "ViewController.h"

@interface ViewController ()

@end

@implementation ViewController
@synthesize userInput;
@synthesize userParams;
@synthesize userOutput;
@synthesize username;
@synthesize password;

- (void)viewDidLoad
{
    [super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)Submit:(id)sender {
    ResponseViewer.text =  [self makeRequest];
}


- (NSString*) getAccessToken {
    NSString *urlString = @"https://test.openapi.starbucks.com/v1/oauth/token";
    NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
    [request setURL:[NSURL URLWithString:urlString]];
    [request setHTTPMethod:@"POST"];
    
    NSMutableData *body = [NSMutableData data];
    NSString *requestBody = [NSString stringWithFormat:@"grant_type=password&username=%@&password=%@&client_id=my_client_id&client_secret=my_client_secret&scope=test_scope&api_key=my_api_key", username.text, password.text]; // note the '&' between every pair

    [body appendData:[[NSString stringWithFormat:@"%@\r\n", requestBody] dataUsingEncoding:NSUTF8StringEncoding]];
    
    // set request body
    [request setHTTPBody:body];
    [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
    [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
    
    //return and test
    NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
    NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
    
    NSLog(@"%@", returnString);
    
    NSData *JSONData = [returnString dataUsingEncoding:NSUTF8StringEncoding];
    
    userOutput.text = returnString;
    
    NSDictionary* json = [NSJSONSerialization JSONObjectWithData:JSONData options:kNilOptions error:nil];
    
    return [json objectForKey:@"access_token"];
    
}

- (NSString*) makeRequest {
    NSString *urlString = [NSString stringWithFormat:@"https://test.openapi.starbucks.com/%@", userInput.text];
    NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
    [request setURL:[NSURL URLWithString:urlString]];
    [request setHTTPMethod:[NSString stringWithFormat:@"%@", _method.accessibilityValue]];
    
    // NSComparisonResult is NSOrderedSame (0) on a match, so comparing to TRUE was a bug
    if ([_outputType.accessibilityValue compare:@"Json" options:NSCaseInsensitiveSearch] == NSOrderedSame)
        [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
    
    if ([_outputType.accessibilityValue compare:@"XML" options:NSCaseInsensitiveSearch] == NSOrderedSame)
        [request setValue:@"application/xml" forHTTPHeaderField:@"Accept"];
    
    
    if ([_method.accessibilityValue compare:@"POST" options:NSCaseInsensitiveSearch] == NSOrderedSame)
    {
        NSMutableData *body = [NSMutableData data];
        NSString *requestBody = userParams.text;
    
        [body appendData:[[NSString stringWithFormat:@"%@\r\n", requestBody] dataUsingEncoding:NSUTF8StringEncoding]];
    
        // set request body
        [request setHTTPBody:body];
    }
    
    //return and test
    NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
    NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
    
    NSLog(@"%@", returnString);
    userOutput.text = returnString;
    return returnString;
    
}
    
@end
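The same password-grant call can be sketched in Python for comparison. The host and field names mirror the Objective-C sample above; `my_client_id`, `my_client_secret` and `my_api_key` are placeholders, and the network call itself is shown only as a comment so the helpers can be exercised offline.

```python
import json
from urllib.parse import urlencode

# Host from the sample above; credentials are placeholders
TOKEN_URL = "https://test.openapi.starbucks.com/v1/oauth/token"

def build_token_request_body(username, password):
    """Form-encode the password-grant parameters. urlencode joins every
    pair with '&', which a hand-built format string can easily miss."""
    return urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
        "client_id": "my_client_id",
        "client_secret": "my_client_secret",
        "scope": "test_scope",
        "api_key": "my_api_key",
    })

def extract_access_token(response_text):
    """Parse the JSON token response and pull out the access token."""
    return json.loads(response_text).get("access_token")

# To actually send it (not executed here):
# import urllib.request
# req = urllib.request.Request(TOKEN_URL,
#         data=build_token_request_body("test", "password").encode(),
#         headers={"Accept": "application/json"})
# token = extract_access_token(urllib.request.urlopen(req).read().decode())
```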

In the previous post we talked about transforming test results from the database and mailing them out. This is easy to implement by applying an XSLT-style transform in T-SQL. The resulting procedure, using cursors, looks something like this:

DECLARE @results table(
TestRunId uniqueidentifier NOT NULL,
SubmittedBy varchar(30) NOT NULL,
TestOwner varchar(30) NOT NULL,
TestName varchar(255) NOT NULL,
Environment varchar(10) NOT NULL,
TestCategory varchar(255) NOT NULL,
Submitted varchar(255) NOT NULL,
Status varchar(10) NOT NULL,
Elapsed varchar(255) NOT NULL,
Total int NOT NULL,
Passed int NOT NULL,
Fail int NOT NULL,
Inconclusive int NOT NULL,
Error int NOT NULL,
PercentPass varchar(10) NOT NULL,
ResultLabelWidth varchar(10) NOT NULL);

DECLARE @TestRunId uniqueidentifier ;
DECLARE @SubmittedBy varchar(30) ;
DECLARE @TestOwner varchar(30) ;
DECLARE @TestName varchar(255) ;
DECLARE @Environment varchar(10) ;
DECLARE @TestCategory varchar(255) ;
DECLARE @Submitted varchar(255) ;
DECLARE @Status varchar(10) ;
DECLARE @Elapsed varchar(255) ;
DECLARE @Total int ;
DECLARE @Passed int ;
DECLARE @Fail int ;
DECLARE @Inconclusive int ;
DECLARE @Error int ;
DECLARE @PercentPass varchar(10) ;
DECLARE @ResultLabelWidth varchar(10);


INSERT into @results SELECT * FROM dbo.Results;  -- or exec stored_proc
DECLARE @msg nvarchar(MAX);
SET @msg = '<head></head><body>
<div>Test Run Report</div><table id="ResultContainer" border="1">
<tr>
<th>Service Name</th>
<th>Environment</th>
<th>Test Category</th>
<th>Submitted By</th>
<th>Date</th>
<th>Status</th>
<th>Elapsed</th>
<th>Total</th>
<th>Pass</th>
<th>Fail</th>
<th>Incon</th>
<th>Error</th>
<th>Result</th>
<th>Test Owner</th>
</tr>'
DECLARE Results_Cursor CURSOR FOR  (SELECT * From @results); -- exec [ms01806.sbweb.prod].TestRunner.dbo.usp_GetSubmittedTestStatus @SubmittedBy='Ravi Rajamani (CW)',@TodayOnly=1,@TestType=N'Coded';
OPEN Results_Cursor;
FETCH NEXT FROM Results_Cursor INTO @TestRunId, @SubmittedBy , @TestOwner , @TestName, @Environment , @TestCategory , @Submitted , @Status , @Elapsed , @Total , @Passed , @Fail , @Inconclusive , @Error , @PercentPass , @ResultLabelWidth ;
WHILE @@FETCH_STATUS = 0
BEGIN
SET @msg = @msg + '<tr>
                            <td>' +  @TestName + '
                            </td>
                            <td> ' + @Environment +  '
                            </td>
                            <td>' + @TestCategory + '
                            </td>
                            <td>'  + @SubmittedBy + '
                            </td>
                            <td>' + @Submitted + '
                            </td>
                            <td> '  +  @Status + '
                            </td>
                            <td>' + @Elapsed + '</td>
                            <td>' + Convert(nvarchar, @Total) + '
                            </td>
                            <td>' + Convert(nvarchar, @Passed) + '
                            </td>
                            <td>'  + Convert(nvarchar, @Fail) + '
                            </td>
                            <td> ' + Convert(nvarchar, @Inconclusive) + '
                            </td>
                            <td>' + Convert(nvarchar, @Error) + '
                            </td>
                            <td>' + @PercentPass + '
                            </td>
                            <td> ' + @TestOwner + '
                            </td>
                        </tr>';
FETCH NEXT FROM Results_Cursor INTO @TestRunId, @SubmittedBy , @TestOwner , @TestName, @Environment , @TestCategory , @Submitted , @Status , @Elapsed , @Total , @Passed , @Fail , @Inconclusive , @Error , @PercentPass , @ResultLabelWidth ;
END
SET @msg = @msg + '</table></body>';
CLOSE Results_Cursor;
DEALLOCATE Results_Cursor;

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Mail Profile',
    @recipients = 'recipients@where.com',
@body_format = 'HTML',
    @body = @msg,
    @subject = 'Test Results Reports' ;


Friday, August 16, 2013

automating test results publishing from the database.

Test runs generate results that are often important for keeping a daily pulse on software quality. Typically such data is published to a database with details such as test run id, submitted by, test owner, test name, environment, test category, submission time, status, elapsed time, counts of total, passed, failed, inconclusive and errored tests, percentage pass, etc. These records give sufficient detail about the tests that were run, and they just need an XSLT-style transform to make pretty reports. Both the transform and any associated CSS form the text template with which the data is presented. The individual spans for each column may need distinct identifiers so that the transform can be rendered as HTML; these identifiers could all use the row number as a suffix. A header row followed by one or more data rows is sufficient to display the test run results. The table itself could use a CSS class to differentiate alternate rows, done with a class id and a style borrowed from the CSS. The transform can be wrapped in a stored procedure which retrieves the results from the database and enumerates them. This stored procedure has several advantages. First, the logic stays as close to the data as possible. Second, it can be invoked by many different clients on a scheduled or on-demand basis. Third, it can be registered as a trigger for inserts of results from well-marked runs. Runs can be marked by attributes such as a dedicated login for automated test execution, which separates regular test invocations by users from the runs that are candidates for reporting. The stored procedure may need to enumerate through the records; this can be done with a cursor, which is also useful for working with one record at a time. 
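For comparison, the same row-by-row HTML assembly that the cursor performs can be sketched in Python. The field names follow the @results table and the header labels match the report above; this is an illustration of the transform, not a replacement for the stored procedure.

```python
# Render one test-run record (a dict keyed like the @results table) as a row
def result_row_html(r):
    cells = [r["TestName"], r["Environment"], r["TestCategory"],
             r["SubmittedBy"], r["Submitted"], r["Status"], r["Elapsed"],
             str(r["Total"]), str(r["Passed"]), str(r["Fail"]),
             str(r["Inconclusive"]), str(r["Error"]),
             r["PercentPass"], r["TestOwner"]]
    return "<tr>" + "".join(f"<td>{c}</td>" for c in cells) + "</tr>"

def report_html(rows):
    """Header row plus one row per record, as in the cursor loop."""
    header = ("<tr><th>Service Name</th><th>Environment</th>"
              "<th>Test Category</th><th>Submitted By</th><th>Date</th>"
              "<th>Status</th><th>Elapsed</th><th>Total</th><th>Pass</th>"
              "<th>Fail</th><th>Incon</th><th>Error</th><th>Result</th>"
              "<th>Test Owner</th></tr>")
    body = "".join(result_row_html(r) for r in rows)
    return f'<table id="ResultContainer" border="1">{header}{body}</table>'
```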
The stored procedure will also have its plans cached, giving improved performance over ad hoc T-SQL. Finally, the database itself can send out mail with this HTML text via a Database Mail profile set up with an SMTP server. Setting up automated e-mails requires sysadmin or other appropriately scoped server privileges.
If transforms are not needed, the results from the queries or stored procedures can be sent out directly. This may be the quick and easy option since the results can be output to a text format and imported into Excel and other BI applications.
Also, test results are often associated with Team Foundation Server work items or queries, such as active and resolved bug counts. While it is easier to use Excel to open and format such TFS queries, it is also possible to issue HTTP requests directly from the database. This consolidates the results and the reporting logic in one place. 

REST API documentation

There are several tools to manage REST API documentation. Apiary.io lets you create documentation from a terse blueprint written in its syntax. Inputs and outputs can be listed in a spreadsheet, and scripts can then generate the HTML pages.
HOST: https://testhost.openapi.starbucks.com

--- Sample API v2 ---
---
Welcome to our sample API documentation. All comments can be written in (support [Markdown](http://daringfireball.net/projects/markdown/syntax) syntax)
---

--
OAuth workflow
The following is a section of resources related to the OAuth access grant methods
--
Request an OAuth access token with the password grant. (comment block again in Markdown)
POST /v1/token
{ "grant_type":"password", "client_id":"abcdefg8hijklmno16", "client_secret":"some_secret", "username":"test", "password":"password", "scope":"test_scope"}
< 200
< Content-Type: application/json
{"return_type":"json","access_token":"nmhs7y7qmmh8mngyq2ncn3gp","token_type":"bearer","expires_in":3600,"refresh_token":"5skg9u4qdybksete8gaauvk3","scope":"test_scope","state":null,"uri":null,"extended":null}



Thursday, August 15, 2013

Test automation framework for API testing:
The following post attempts to enumerate some of the salient features that make an automation framework easier to use for API testing. APIs such as those available over the web as RESTful APIs are great candidates for automation because there is a lot of repetition and overlap in the test matrix. The test matrix consists of variations in protocol such as http or https and their versions; verbs such as GET, PUT, POST, DELETE, etc.; endpoints and their ports for test, stage and production environments; endpoint versions; resource qualifiers and their query strings; response types such as xml or json; the request body in the case of POST; and the content headers and their various options. The request and response sequences for each API alone constitute the bulk of the test matrix.
Note that the testing largely relies on default values so that the test scripts are easier to write. The Python requests library really helps with making the scripts easier to write. One of the primary goals of an automation framework is to make it easy to add tests. The less code required to write the tests, the fewer the chances of making mistakes, the less the frustration, the easier the maintenance, and hence the more popular the framework. Python enables succinct and terse test scripts. It's cool to write Python scripts.
Besides, UI-based tools such as SoapUI or TestRunner are nice to have, but consider that the primary input and output for REST API testing is all text. Variations in the text, and its comparison, serialization and deserialization into objects for comparison with the data access layer, are tedious and may need to be specified in the tests. While load testing, performance and call inspectors, and capture and replay are undeniably UI features, the automation framework can separate the UI tests from the scripts, which in my case argues for a script-based framework.
The object model library should be comprehensive because one is already available in the corresponding development code. In fact, tests should refrain from rewriting or redefining any of the objects that the requests and responses use. These objects should all be instantiated via reflection and can live in properly namespaced development assemblies. Unifying the object model for development and test is not just a convenience; it enforces a single-maintenance policy and reuse. Code may be tweaked for different purposes on either the test side or the development side, but the object model has no reason to differ between test and dev code. To illustrate this point: in well-written development code, if you add strongly typed views in a model-view-controller framework, expanding the drop-down to choose the model for the view should show all (read: dozens) of the objects to choose from. The same or similar should be available in test. The data and object model mapping is also common. When the object model changes and the development code changes, the corresponding tests will break; the more tightly they are tied, the more visible the regressions.
Testing often relies on interpreting data from the layers below the API implementation to compare the expected and actual results of the API. With default values for test scripts and a unified collection of object model assemblies, the test automation will be easier to use, more consistent and better organized.
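To make the reliance on default values concrete, here is a hypothetical Python sketch of a default-heavy test-case builder: every knob in the matrix above (host, version, verb, accepted format) has a default, so each test states only what it varies. The host and resource names are illustrative, not a real endpoint contract.

```python
# Defaults for every axis of the test matrix
DEFAULTS = {
    "host": "https://test.openapi.starbucks.com",  # illustrative test host
    "version": "v1",
    "method": "GET",
    "accept": "application/json",
}

def build_case(resource, **overrides):
    """Merge a test case's overrides onto the defaults and compute the URL."""
    case = {**DEFAULTS, **overrides}
    case["url"] = f"{case['host']}/{case['version']}/{resource}"
    return case

# A test case now reads as a one-liner per matrix cell:
cases = [
    build_case("profile"),                            # default GET, json
    build_case("token", method="POST"),               # vary the verb
    build_case("profile", accept="application/xml"),  # vary the response type
]
```

Each dict can then be handed to the requests library (or any HTTP client) in a single loop, which keeps each test down to the one line that names its variation.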