Permissions and Access Control:
Troubleshooting an application is often easier when access controls are relaxed and the code runs with full permissions, because the problem is easier to observe. Production applications, however, enforce many access controls and permission checks before code is allowed to execute. This calls for prudence: narrow the investigation to specific snippets of the application rather than replacing the whole code with a version that has full permissions.
The stream store supports two different permission sets on the controller: READ and READ-WRITE. These permissions separate read-only access from read-write access on the controller methods and are determined by the credentials passed in via ClientConfig. This configuration forms the basis for using the StreamManager, Controller, and EventStreamClientFactory, which are used to create and manage streams, seal or truncate a stream, and create readers and writers, respectively.
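As a minimal sketch of how these pieces fit together, the following assumes a locally reachable controller at tcp://localhost:9090 and a hypothetical read-write user; the user name, password, and the package location of DefaultCredentials are placeholders and vary by client version.

import java.net.URI;
import io.pravega.client.ClientConfig;
import io.pravega.client.EventStreamClientFactory;
import io.pravega.client.admin.StreamManager;
import io.pravega.shared.security.auth.DefaultCredentials;

public class ClientConfigSketch {
    public static void main(String[] args) {
        // The credentials set on ClientConfig decide whether the controller grants
        // READ or READ-WRITE access to this client; URI, user and password are placeholders.
        ClientConfig clientConfig = ClientConfig.builder()
                .controllerURI(URI.create("tcp://localhost:9090"))
                .credentials(new DefaultCredentials("P@ssw0rd", "admin"))
                .build();

        // StreamManager performs control-plane operations such as create, seal
        // and truncate, which need READ-WRITE on the controller.
        try (StreamManager streamManager = StreamManager.create(clientConfig)) {
            streamManager.createScope("testscope");
        }

        // EventStreamClientFactory creates readers and writers under the same credentials.
        try (EventStreamClientFactory factory =
                EventStreamClientFactory.withScope("testscope", clientConfig)) {
            // writers need READ-WRITE on the stream, readers need READ
        }
    }
}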
The standalone stream store specifies users in the ‘./conf/passwd’ file, for example “nonadmin:P@ssw0rd:*,READ;”, but it bypasses authentication in emulation mode. On the stream analytics platform hosted on a Kubernetes cluster, the equivalent change is made to /etc/auth-passwd-volume/password-file.txt in the stream store controller deployment.
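For illustration only, a read-only and a read-write entry follow the same user:password:acl pattern as the example above; READ_UPDATE is assumed to be the ACL spelling of the READ-WRITE permission, and a real deployment would normally store a hashed password rather than plaintext.

nonadmin:P@ssw0rd:*,READ;
admin:P@ssw0rd:*,READ_UPDATE;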
We can also test the code paths to see if read-only or read-write access is required:
[main] INFO io.pravega.client.ClientConfig - Client credentials were extracted using the explicitly supplied credentials object.
[main] INFO io.pravega.client.admin.impl.ReaderGroupManagerImpl - Creating reader group: test-reader-group-1 for streams: [StreamImpl(scope=testscope, streamName=teststream)] with configuration: ReaderGroupConfig(groupRefreshTimeMillis=0, automaticCheckpointIntervalMillis=-1, startingStreamCuts={StreamImpl(scope=testscope, streamName=teststream)=UNBOUNDED}, endingStreamCuts={StreamImpl(scope=testscope, streamName=teststream)=UNBOUNDED}, maxOutstandingCheckpointRequest=3)
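A sketch of such a probe is below, assuming a locally reachable controller and the testscope/teststream pair from the log output. Creating a reader group exercises the READ path, while sealing the stream exercises the READ-WRITE path; the exact exception raised on an authorization failure, and the DefaultCredentials package, depend on the client version.

import java.net.URI;
import io.pravega.client.ClientConfig;
import io.pravega.client.admin.ReaderGroupManager;
import io.pravega.client.admin.StreamManager;
import io.pravega.client.stream.ReaderGroupConfig;
import io.pravega.client.stream.Stream;
import io.pravega.shared.security.auth.DefaultCredentials;

public class AccessProbe {
    public static void main(String[] args) {
        ClientConfig clientConfig = ClientConfig.builder()
                .controllerURI(URI.create("tcp://localhost:9090")) // placeholder URI
                .credentials(new DefaultCredentials("P@ssw0rd", "nonadmin")) // credentials under test
                .build();

        // READ path: creating a reader group only needs READ on the stream,
        // mirroring the ReaderGroupManagerImpl log line above.
        try (ReaderGroupManager rgm = ReaderGroupManager.withScope("testscope", clientConfig)) {
            rgm.createReaderGroup("test-reader-group-1",
                    ReaderGroupConfig.builder()
                            .stream(Stream.of("testscope", "teststream"))
                            .build());
            System.out.println("READ path allowed");
        } catch (RuntimeException e) {
            System.out.println("READ path denied: " + e.getMessage());
        }

        // READ-WRITE path: sealing a stream is a control-plane mutation and
        // is rejected unless the credentials carry READ-WRITE permission.
        try (StreamManager streamManager = StreamManager.create(clientConfig)) {
            streamManager.sealStream("testscope", "teststream");
            System.out.println("READ-WRITE path allowed");
        } catch (RuntimeException e) {
            System.out.println("READ-WRITE path denied: " + e.getMessage());
        }
    }
}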
When deployed in production, the application typically has only the minimal permissions necessary for it to execute.
The configuration is not the only place where credentials come into play. In fact, the Flink application is not even required to pass in client credentials. Since the application is hosted within a project of the stream analytics platform, it is automatically permitted to use the streams in the stream store that the project has access to.
This internal access is granted through the service account and service binding in the Kubernetes catalog, which are registered by the analytics platform.
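As a sketch of that difference, a client built inside a platform project can omit explicit credentials entirely; the controller URI below is a placeholder, and the actual identity is supplied by the platform through the service account and binding rather than by the code.

import java.net.URI;
import io.pravega.client.ClientConfig;
import io.pravega.client.EventStreamClientFactory;

public class InClusterClientSketch {
    public static void main(String[] args) {
        // No .credentials(...) call here: within the platform project the
        // service account and service binding grant access to the project's streams.
        ClientConfig clientConfig = ClientConfig.builder()
                .controllerURI(URI.create("tcp://pravega-controller:9090")) // placeholder URI
                .build();

        try (EventStreamClientFactory factory =
                EventStreamClientFactory.withScope("testscope", clientConfig)) {
            // readers and writers created here operate with the project-level access
        }
    }
}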