We were discussing the STRIDE model of threat modeling for Keycloak deployments on Kubernetes.
The service broker listens on port 9090 over HTTP. Since this traffic is internal, it carries no TLS requirement. When the token crosses the trust boundary, we rely on the kubectl interface to secure the communication with the API server. As long as clients communicate through kubectl or the API server, this technique works well. In general, if the server and the clients communicate over TLS and have verified the certificate chain, there is little chance of the token falling into the wrong hands. URL logging or an HTTPS proxy remain vulnerabilities, but a man-in-the-middle attack is less of an issue if the client and the server exchange a session ID and each keeps track of the other's. From the API's point of view, session IDs are largely a site or application concern rather than the API's own, but it is good to validate against a session ID when one is available.
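As a minimal sketch of the client side of this, the call below verifies the server's certificate chain against a CA bundle and sends the application session ID alongside the bearer token. The endpoint URL, CA bundle path, and the `X-Session-Id` header name are all assumptions for illustration, not part of the actual deployment.

```python
import requests

API_SERVER = "https://api-server.internal:6443/apis/example"  # hypothetical internal endpoint
CA_BUNDLE = "/etc/ssl/certs/internal-ca.pem"                   # hypothetical CA bundle path

def call_api(token: str, session_id: str | None = None) -> requests.Response:
    headers = {"Authorization": f"Bearer {token}"}
    # Pass the application session id as an extra header when one is available,
    # so the server can cross-check it against its own record.
    if session_id:
        headers["X-Session-Id"] = session_id                   # hypothetical header name
    # verify=CA_BUNDLE makes requests validate the server certificate chain
    # against the internal CA before the token is ever transmitted.
    return requests.get(API_SERVER, headers=headers, verify=CA_BUNDLE, timeout=10)
```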
Sessions are unique to the application; even the client uses refresh tokens or re-authorizations to keep its session alive. If the API tracked sessions itself, that state would not be tied to OAuth revocations and re-authorizations, so relying on the session ID alone is not preferable. At the same time, using the session ID as an additional parameter to confirm alongside each authorization helps tighten security. It is safe to assume the same session persists until the next authorization or an explicit revocation. By tying the checks exclusively to the token, we keep this streamlined to the protocol.
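The server-side version of that check might look like the sketch below: the token remains the authoritative credential, and the session ID, when supplied, only serves as an extra confirmation. The in-memory token and session lookups here are hypothetical stand-ins for real introspection and a real session store.

```python
from typing import Optional

ACTIVE_TOKENS = {"token-abc": {"active": True, "sub": "user-1"}}  # hypothetical introspection results
SESSION_STORE = {"user-1": "sess-42"}                             # hypothetical application session store

def authorize(token: str, session_id: Optional[str]) -> bool:
    claims = ACTIVE_TOKENS.get(token)
    if not claims or not claims.get("active"):
        return False                               # the token check alone decides rejection
    if session_id is not None:
        expected = SESSION_STORE.get(claims["sub"])
        if expected is not None and expected != session_id:
            return False                           # a mismatch fails the request; absence of a session does not
    return True

print(authorize("token-abc", "sess-42"))   # True
print(authorize("token-abc", "sess-99"))   # False: session id contradicts the store
```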
In the absence of a session, we can use refresh tokens after the access token expires. Since the refresh token is intrinsic to the OAuth 2.0 protocol (RFC 6749), it is already a safe way to prolong access beyond the token's expiry time. Repeatedly exercising the refresh token is effectively the same as keeping a session alive. The threat mitigation above therefore works regardless of how, or whether, a notion of session is actually implemented.
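For reference, exercising the refresh token against Keycloak is the standard OAuth 2.0 refresh_token grant against the realm's token endpoint. In this sketch the Keycloak host, realm name, client ID, and client secret are placeholders; newer Keycloak versions expose the endpoint under /realms/&lt;realm&gt;/..., while older versions prefix it with /auth.

```python
import requests

# Hypothetical host and realm; adjust to the actual deployment.
KEYCLOAK_TOKEN_URL = "https://keycloak.example.com/realms/demo/protocol/openid-connect/token"

def refresh_access_token(refresh_token: str) -> dict:
    resp = requests.post(
        KEYCLOAK_TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": "nautilus-ui",       # hypothetical client id
            "client_secret": "change-me",     # hypothetical secret for a confidential client
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The response contains a new access_token and usually a new refresh_token.
    return resp.json()
```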
Applications and clients could potentially redirect to each other for the authorization of the same user, either directly or via the Keycloak service broker. This lets the user sign in far less often than before: a user already signed in to a few sites can use that existing signed-in status to gain access to others. This is not merely a convenience; it lets the same user move between sites and lets applications integrate and share user profile information for a richer experience. In our deployment, the only user-interface application is nautilus-ui/Keycloak and we have no cross-application usages. All internal components maintain their own service instances (clients) and service bindings (roles), and a token used with one client is not reused with another; it is simply renewed specifically for that client.
Recommendation: add a configuration to the Keycloak service broker that reads the issuing host from the request parameters and verifies it against the host in the redirect.
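A minimal sketch of that check follows, assuming the issuing host arrives as a request parameter and the redirect target is available as a full URI; the parameter handling around it is hypothetical, only the host comparison is shown.

```python
from urllib.parse import urlparse

def issuer_matches_redirect(issuer: str, redirect_uri: str) -> bool:
    # Compare only the host portions of the issuing URL and the redirect URI.
    issuer_host = urlparse(issuer).hostname
    redirect_host = urlparse(redirect_uri).hostname
    # Reject the redirect unless both hosts are present and identical.
    return issuer_host is not None and issuer_host == redirect_host

print(issuer_matches_redirect("https://keycloak.example.com/realms/demo",
                              "https://keycloak.example.com/app/callback"))   # True
print(issuer_matches_redirect("https://keycloak.example.com/realms/demo",
                              "https://attacker.example.net/app/callback"))   # False
```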