In today's post I want to continue the discussion of generating tokens by encryption. Previously we mentioned encrypting the UserId and ClientId and circulating the base64-encoded string as a token, relying on the strength of the encryption to protect the information as it is passed around. However, we missed mentioning a magic number to add to the UserId and ClientId before we encrypt. This is important for a couple of reasons. First, we want to be able to vary the tokens for the same UserId and ClientId; second, we want to make it hard for an attacker to guess how we vary them. One way to do this, for example, is to use the current time, say in hhmmss format, along with an undisclosed constant increment.
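As a minimal sketch of that idea, the plaintext below mixes an hhmmss timestamp and a secret constant increment into the UserId and ClientId before encryption. The field layout, the `SECRET_INCREMENT` value, and the function name are illustrative assumptions; the actual encryption step (e.g. AES) is elided.

```python
import base64

# Placeholder for the undisclosed server-side constant increment.
SECRET_INCREMENT = 73

def token_plaintext(user_id: int, client_id: int, hhmmss: str) -> str:
    """Build the plaintext to encrypt: UserId, ClientId, and a magic
    number derived from the current time plus a secret increment."""
    magic = int(hhmmss) + SECRET_INCREMENT
    return f"{user_id}:{client_id}:{magic}"

# The plaintext would then be encrypted and base64-encoded; the
# encryption step is omitted here for brevity.
plain = token_plaintext(12345, 678, "142530")
token = base64.urlsafe_b64encode(plain.encode()).decode()
```

Because the magic number changes with the clock, two tokens issued to the same UserId and ClientId at different times differ, and without knowing the increment an attacker cannot predict how.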
Another way to vary the tokens without adding another number to the input is to rotate the UserId and ClientId: the string formed by concatenating the UserId and the ClientId is split, and the two substrings exchange positions. After decryption, we swap them again to recover the original string. Since client ids are not expected to grow as large as the integer maximum, the leftmost zero-padding of the ClientId can serve as the delimiter.
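The rotation can be sketched as follows. The fixed padding width of 10 digits is an assumption (client ids are expected to stay well below the integer maximum), as are the function names:

```python
# Assumed fixed width for the zero-padded ClientId.
CLIENT_WIDTH = 10

def rotate(user_id: str, client_id: str) -> str:
    """Concatenate with the two substrings exchanged: the left-padded
    ClientId moves in front of the UserId."""
    padded = client_id.zfill(CLIENT_WIDTH)  # leftmost zero padding
    return padded + user_id

def unrotate(rotated: str) -> tuple:
    """Reverse the swap after decryption: the first CLIENT_WIDTH
    characters are the padded ClientId, the rest is the UserId."""
    padded, user_id = rotated[:CLIENT_WIDTH], rotated[CLIENT_WIDTH:]
    return user_id, padded.lstrip("0")
```

The leading zeros make the ClientId's extent unambiguous, so the original order is always recoverable.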
Another way to do this would be to use the Luhn algorithm that is used for generating credit card check digits. Here every second digit in the original sequence is doubled (subtracting 9 from any doubled value greater than 9), the digits are summed, and the sum is multiplied by 9 and taken modulo 10. This gives the check number to add to the end.
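A straightforward implementation of that check-digit computation, assuming the id sequence is given as a string of digits:

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check digit for a string of digits."""
    total = 0
    # Walk right to left, doubling every second digit starting with
    # the rightmost, and subtracting 9 when the doubled value exceeds 9.
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    # Multiplying the sum by 9 modulo 10 yields the digit that brings
    # the total to a multiple of 10 -- the check digit.
    return (total * 9) % 10
```

For the classic example payload `7992739871`, this returns `3`, giving the full number `79927398713`.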
However we choose to vary the token's contents before generation, we can emit the result as a base64-encoded token.
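Whatever bytes the chosen scheme produces, the final encoding step is the same; the URL-safe base64 alphabet avoids characters that would need escaping in query strings:

```python
import base64

# Stand-in for the bytes produced by any of the schemes above.
raw = b"\x01\x9f example token bytes \xff"

token = base64.urlsafe_b64encode(raw).decode("ascii")
recovered = base64.urlsafe_b64decode(token)
```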
The OAuth spec does not require tokens to be a hash; the tokens could be anything. Tokens that store information about the user and the client, for validation during API calls, are not prohibited. The OAuth spec goes so far as to mention such possibilities.
When considering the tradeoffs between a hash and encrypted content for a token, the main caveat is whether the tokens are interpretable. If a malicious user can decrypt the tokens, that is a severe security vulnerability: tokens can then be forged, and the OAuth server will end up compromising the protected resources. Since the token is the primary means of gaining access to the user's protected resources, there's no telling when a token will be forged and misused, or by whom. Tokens therefore need to be as tamper-proof as the user's password.
If the tokens are not encrypted content but merely hashes with no significance in their contents, their meaning private to the server, then we are relying on a simpler security model. This means we don't have to keep upgrading the token-generation technology as more and more ways are discovered to break it. It does, however, add a persistence requirement to the OAuth server, so that these hashes can be tied back to the user and the client.
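That simpler model can be sketched as follows: the token is an opaque random value, and a server-side table maps it back to the user and client. The in-memory dict here stands in for the persistent token table; the function names are illustrative.

```python
import secrets

# Stand-in for the persistent token table on the OAuth server.
token_table = {}

def issue_token(user_id: int, client_id: int) -> str:
    """Issue an opaque token with no interpretable content and record
    its association with the user and client server-side."""
    token = secrets.token_urlsafe(32)  # meaningless outside the server
    token_table[token] = (user_id, client_id)
    return token

def resolve_token(token: str):
    """Tie a presented token back to its (user_id, client_id), or None."""
    return token_table.get(token)
```

Because the token carries no content, there is nothing for an attacker to decrypt; the cost is the lookup and the storage.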
Even though I have been advocating a token database, and a hash is a conveniently simple security model, I firmly believe that we need both: a token table, and tokens that carry user and client information in a form that only the server can generate.
The simpler security model does not by itself buy us improvements in scale and performance in the API implementations where token validation is desirable. All API calls should have token validation. Some could have user validation, but all should have client validation.
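The validation policy above can be sketched as a per-request check; the function shape and parameter names are assumptions, with the lookup standing in for whatever resolves a token to its user and client:

```python
def validate_call(lookup, token, expected_client, expected_user=None):
    """Validate an API call: token and client are always checked;
    the user is checked only when the endpoint requires it."""
    entry = lookup(token)
    if entry is None:                       # token validation: all calls
        return False
    user_id, client_id = entry
    if client_id != expected_client:        # client validation: all calls
        return False
    if expected_user is not None and user_id != expected_user:
        return False                        # user validation: some calls
    return True
```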