Improvements to OAuth 2.0, if I could venture a guess:
1) Tokens are not hashes but encrypted values carried in the request and response body. A token is the end result of an agreement between the user, the client, and one or more parties. A given token could be like a title on an asset, immediately indicating the interests vested in the asset, be it a bank, an individual, or a third party. However, the token must be interpretable by the authorization server alone and nobody else. It should be opaque to both the user and the client and understood only by the issuer, who can then hold the caller and the approver to their agreement. The client and the user can use their state and session properties, respectively, to hold the server accountable; in both cases, this has nothing to do with the token. Such token processing obviates persisting tokens in a store for lookup on each API call, moving the performance overhead from storage plus CPU to mostly CPU, which should be welcome on server machines (see the first sketch after this list).

That said, almost every API already makes a call to a data provider or database, and yet another call to associate a hash with a user and a client is not only simpler but also archivable and available for audit later. Testers will find this option easy, more maintainable, and common to both dev and test. This option also scales up and out just like any other API call, and since the token population is forward-only and rolling, it can be kept finite and the tokens can even be recycled. The internal representation of the user and client has nothing to do with the information exchanged in the query string, so the mapping is almost guaranteed to be safe and secure. This approach also has the conventional merits of good bookkeeping with database technologies, such as change data capture, archival, and prepared plan execution. In the encryption-based token scenario, the entire request and response capture may need to be taken for audit and then parsed to isolate the tokens and discover their associations, and each discovery may need to be repeated over and over in different workflows. Besides, an encrypted string is not as easy to cut and paste as a hash in cleartext over HTTP. That said, encryption of request parameters and of API call signatures is already used in practice, so an encryption-based token should not face a high barrier to adoption. Besides, the tokens are cheap to issue and revoke.
2) Better consolidation of the offered grant methods. This was somewhat alluded to in my previous post; it would simplify the endpoints and the mechanisms to what is easiest to propagate. For example, the authorization code grant and the implicit grant need not be served from an endpoint different from the token-granting endpoint, since they are semantically the same. Code-to-token exchange and refresh could be considered different operations, but in the end, internal and external identifiers will be maintained anyway, be it for the user, the client, or the token. Hence, treating all clients as one, issuing access tokens with or without user privileges, and handling all requests and responses as pipelined activities would make this more streamlined (see the second sketch after this list). Finally, an OAuth provider does not need to be distributed between the proxy and the retailer; in some sense, these can all be consolidated in the same stack.
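To make the first point concrete, here is a minimal sketch contrasting an encryption-based, self-contained token with a hash-based, store-backed one. It assumes Python's cryptography package for the symmetric encryption; every function and variable name here is hypothetical, not part of any OAuth specification.

```python
# Minimal sketch, assuming Python's "cryptography" package is installed.
# All names (issue_encrypted_token, TOKEN_STORE, ...) are hypothetical.
import json
import time
import hashlib
import secrets
from cryptography.fernet import Fernet, InvalidToken

# Symmetric key held by the authorization server alone, so the token
# stays opaque to both the user and the client.
SERVER_KEY = Fernet.generate_key()
fernet = Fernet(SERVER_KEY)

def issue_encrypted_token(user_id, client_id, scope):
    """Self-contained token: validation needs no store lookup, only CPU."""
    claims = {"sub": user_id, "cid": client_id,
              "scope": scope, "exp": time.time() + 3600}
    return fernet.encrypt(json.dumps(claims).encode()).decode()

def validate_encrypted_token(token):
    """Only the issuer holds the key, so only the issuer can interpret it."""
    try:
        claims = json.loads(fernet.decrypt(token.encode()))
    except InvalidToken:
        raise PermissionError("token was not issued by this server")
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    return claims

# The hash-based alternative: an extra store call per API request, but the
# association of token, user, and client is archivable and auditable.
TOKEN_STORE = {}

def issue_hash_token(user_id, client_id, scope):
    token = secrets.token_urlsafe(32)
    digest = hashlib.sha256(token.encode()).hexdigest()
    TOKEN_STORE[digest] = {"sub": user_id, "cid": client_id, "scope": scope}
    return token

def validate_hash_token(token):
    digest = hashlib.sha256(token.encode()).hexdigest()
    claims = TOKEN_STORE.get(digest)
    if claims is None:
        raise PermissionError("unknown token")
    return claims
```

Either style keeps the token opaque in transit; the trade-off is purely between CPU-only validation and the bookkeeping benefits of the store.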
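For the second point, a sketch of what a consolidated grant surface might look like: a single token endpoint that dispatches on grant_type instead of splitting semantically identical flows across endpoints. The handler names and the request and response shapes are my own assumptions, not anything prescribed by the OAuth 2.0 specification.

```python
# Minimal sketch of a single consolidated token endpoint; the handler
# names and request/response shapes are hypothetical.

def handle_authorization_code(request):
    # Exchange a previously issued code for an access token (stubbed).
    return {"access_token": "at-" + request.get("code", ""),
            "token_type": "bearer", "expires_in": 3600}

def handle_refresh_token(request):
    # Rotate an existing token; internally just another issuance.
    return {"access_token": "at-refreshed",
            "token_type": "bearer", "expires_in": 3600}

def handle_client_credentials(request):
    # Access token without user privileges: the client acts for itself.
    return {"access_token": "at-" + request.get("client_id", ""),
            "token_type": "bearer", "expires_in": 3600}

# One endpoint, one pipeline: every grant is just a different handler.
GRANT_HANDLERS = {
    "authorization_code": handle_authorization_code,
    "refresh_token": handle_refresh_token,
    "client_credentials": handle_client_credentials,
}

def token_endpoint(request):
    handler = GRANT_HANDLERS.get(request.get("grant_type"))
    if handler is None:
        return {"error": "unsupported_grant_type"}
    return handler(request)

# Example: the same endpoint serves both a code exchange and a refresh.
print(token_endpoint({"grant_type": "authorization_code", "code": "xyz"}))
print(token_endpoint({"grant_type": "refresh_token", "refresh_token": "rt"}))
```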