Revision 19 (Peter Amstutz, 04/04/2025 01:47 AM) → Revision 20/27 (Peter Amstutz, 04/04/2025 01:48 AM)

h1. Credential storage 

 h2. Background 

 As part of implementing "a-c-r natively supports S3 inputs just like HTTP/S":https://dev.arvados.org/issues/20650 and [[Objects as pseudo-blocks in Keep]], we would like a way to store credentials so that Arvados can authenticate to other systems, e.g. AWS S3. 

The current system for managing secrets is specific to workflows and deletes the secret as soon as the workflow is finished. For this external data access feature, we require a more persistent credential storage system that can be accessed by keepstore.

 User perspective: 

Users want to be able to manage credentials in Workbench, and then Arvados services that need them can look them up. The motivating use case is AWS credentials, which have a key id/key secret pair (much like an Arvados API key uuid/secret), so that we can easily access objects in external S3 buckets.

 h2. Requirements 

* Secrets should have an id for what type of thing they are, e.g. AWS credentials
* Secrets should have an optional scope. E.g., we want to be able to provide different credentials for different resources, buckets, etc.
* Should the secret material itself be a simple text column or a JSON object? For example, an AWS secret id/secret is a pair.
* Different users should have different views of what secrets are available based on Arvados permissions. Users should be able to share secrets at different levels of access, e.g.
** can_read -- system services can fetch the credential on behalf of the user, but the user cannot fetch it directly through the API
** can_write -- user can update the credential, but still not read it back
** can_manage -- user can grant permissions to the credential, but still not read it back
* Secrets should be write-only as much as possible: system services can retrieve secrets, but users cannot, except in special circumstances
** We want a way to use secrets in workflows, which means they can be exposed if developers are careless. This is true of our current secrets support as well (it's inherently impossible to prevent a secret from being leaked by user-provided code if someone is really trying, but we'll at least be able to keep a record of which workflows accessed those secrets).

 h2. Security 

 Start with our threat model. 

These are not passwords that we need to validate ourselves; they are credentials that will be provided to other services on behalf of the user, which means we have to be able to get them in the clear, so we can't hash them. Unfortunately, a Google search for "how to store secrets in a database" comes up with dozens of pages telling you not to store cleartext passwords and how to hash passwords, and not so much advice on how to do what we need to do.

Ways credentials could leak:

 * Attacker uses Arvados API as a normal user 
** Should be restricted from accessing credentials by normal access controls.
** As previously noted, if we want to provide credentials to user-supplied workflows, this is impossible to defend against, so we have to exclude from the threat model users who are authorized to use the credentials doing anything they want with those credentials.
 * Attacker uses Arvados API as a superuser 
 ** Admins can already mostly access anything 
** The existing secret_mounts only makes it inconvenient for admins: if they can access the container's runtime token, they can fetch secret mounts.
** Boxing out admins via the API is probably possible but may require sealing additional holes (e.g. placing stricter limits on admins accessing API tokens of other users).
 * Attacker gains access to the database 
** Would be able to use SQL to read any column. E.g., currently secret_mounts is not encrypted, so it would be vulnerable.
 ** To block this, columns need to be encrypted. 
 * Attacker gains access to the node the database is running on 
 ** Same as remote database access, except attacker additionally has access to the /etc/arvados/config.yml and any credentials kept in there. 
 * Attacker can intercept communications with the database and/or API server 
** This is probably game over for our entire security model, not just secrets handling. We rely on TLS to prevent this.

 h3. Analysis 

I think we basically have to trust the integrity of the database and network. Storing credentials in the database, making them effectively write-only through the default API, and making them retrievable only by containers adds some friction that makes it less likely to leak secrets, without going too far into security theater. This is similar to the model we are using with secret_mounts, and similar to the model used by Jenkins.

If we do decide we care about protecting secrets in database dumps or from people using the psql console (who for some reason don't also have access to the decryption key), we could separately encrypt the database column. There is a "pgcrypto":https://www.postgresql.org/docs/13/pgcrypto.html module that provides functions that can be used directly in the query (not sure how that plays with Rails). This would require having the decryption key on hand in the API server or controller. I feel like if an attacker has access to a database dump or the psql console, we probably have bigger problems.

 

 h2. Implementation 

New group_class "credential". This avoids introducing a new table and leverages the fact that groups are already nodes in the permission graph.

 We add a new column to the @groups@ table: 

 |_. field|_. type|_. description| 
 |secret_value|string|The secret part of the credentials, e.g. AWS_SECRET_ACCESS_KEY| 

In addition, we define the following fields in @properties@ to have special meaning:

 |_. field|_. type|_. description| 
|credential_type|string|The type of thing this credential is used to access, for example "aws_access_key"|
 |credential_scope|array of string|If non-empty, the specific resources this credential should be used for, for example ["s3://mybucket1", "s3://mybucket2"]| 
|credential_id|string|The non-secret part of the credential, e.g. AWS_ACCESS_KEY_ID|
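
Putting the new column and properties together, a credential record might look like the following sketch. All identifiers and key values here are made up for illustration:

```python
# Hypothetical credential record combining the new groups column and the
# special properties fields described above. All UUIDs and keys are made up.
credential = {
    "uuid": "zzzzz-j7d0g-0123456789abcde",
    "group_class": "credential",
    "name": "mybucket-readonly",
    "owner_uuid": "zzzzz-tpzed-0123456789abcde",  # must be a user, not a project
    "secret_value": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # never returned by GET
    "properties": {
        "credential_type": "aws_access_key",
        "credential_scope": ["s3://mybucket1", "s3://mybucket2"],
        "credential_id": "AKIAIOSFODNN7EXAMPLE",
    },
}
```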

The owner_uuid for a credential must be a user. Credentials cannot be owned by a project; this is to reduce the chance of accidental sharing. Sysadmins can create credentials owned by the system user. Credential names are unique for a given owner_uuid. Clients select which credential to use for a particular resource based on type, name, and scope. If this produces an ambiguous result, the client should raise an error.

Access is granted to credentials using inbound permission links (they cannot have outbound permission links; that wouldn't make sense).

GET requests do not return @secret_value@ and the column cannot be used in list filters. The reason for making this a distinct column and not a subproperty is to minimize the chance of making stupid implementation errors, as it is much easier to mask out a top-level field than a subproperty.

New API endpoint @/arvados/v1/groups/<uuid>/credential_secret@ permits fetching the secret, but only if the token is a container runtime token and the user has read access. Reading a secret like this should also emit an audit log!
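
A client running inside a container could assemble the request like this minimal sketch. The endpoint shape follows the proposal above; the base URL and token values are placeholders, and a real client would go through the Arvados SDK:

```python
def credential_secret_request(base_url, credential_uuid, runtime_token):
    """Build the URL and headers for the proposed credential_secret endpoint.

    The API server would refuse this request unless the token is a container
    runtime token whose user has read access to the credential.
    """
    url = f"{base_url}/arvados/v1/groups/{credential_uuid}/credential_secret"
    headers = {"Authorization": f"Bearer {runtime_token}"}
    return url, headers

# Placeholder cluster URL, credential UUID, and runtime token:
url, headers = credential_secret_request(
    "https://zzzzz.example.org", "zzzzz-j7d0g-0123456789abcde", "v2/ztoken/secret")
```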

 h3. Examples of resolving credentials 

First use case: credentials associated with a resource. The client wants to download a file @s3://mybucket2/file1.txt@

 # Client fetches all credentials readable by this user with @[["owner_uuid","=","self"],["properties.credential_type","=","aws_access_key"]]@ 
 # Client looks through "credential_scope" for credentials with the longest prefix matching the resource (e.g. "s3://mybucket2/") 
 ## If it finds the prefix, and there's exactly one match, it uses that. 
## If it doesn't find the prefix, but there's exactly one match with empty scope, it uses that.
## In either case, if there's more than one match, it throws an error.
 # If there's zero matches, try again but with @[["owner_uuid","!=","self"],["properties.credential_type", "=", "aws_access_key"]]@ 
 # If there's still zero matches, throw an error. 
 # Client either makes immediate use of the credential UUID (running inside a container), or passes it along to workflow launch (to be resolved later) 
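
The prefix-matching steps above could be sketched in Python roughly as follows, operating on the list of credential records returned by the first query. Function and field names follow the tables above; the retry against shared credentials is left to the caller:

```python
def resolve_credential(credentials, resource, credential_type="aws_access_key"):
    """Pick the credential whose scope has the longest prefix match on resource.

    Returns None if nothing matches (caller should retry with shared
    credentials); raises ValueError if the result is ambiguous.
    """
    typed = [c for c in credentials
             if c["properties"].get("credential_type") == credential_type]
    best_len, matches = 0, []
    for c in typed:
        # Longest scope prefix of this credential that matches the resource.
        lens = [len(s) for s in c["properties"].get("credential_scope", [])
                if resource.startswith(s)]
        if not lens:
            continue
        longest = max(lens)
        if longest > best_len:
            best_len, matches = longest, [c]
        elif longest == best_len:
            matches.append(c)
    if not matches:
        # No scope prefix matched; fall back to credentials with empty scope.
        matches = [c for c in typed if not c["properties"].get("credential_scope")]
    if len(matches) > 1:
        raise ValueError(f"ambiguous credentials for {resource}")
    return matches[0] if matches else None
```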

Second use case: named credentials.

 # Client fetches all credentials readable by this user with @[["owner_uuid","=","self"],["properties.credential_type","=","aws_access_key"],["name","=",credential_name]]@ 
 # If there's zero matches, try again but with @[["owner_uuid","!=","self"],["properties.credential_type","=","aws_access_key"],["name","=",credential_name]]@ 
 # If there's still zero matches, throw an error. 
 # Client either makes immediate use of the credential UUID (running inside a container), or passes it along to workflow launch (to be resolved later) 
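
The named-credential lookup, including the fall-back from the user's own credentials to shared ones, might be sketched like this. Here @api_list@ is a stand-in for a groups list call taking the filter triples shown above:

```python
def find_named_credential(api_list, credential_name, credential_type="aws_access_key"):
    """Look up a credential by name: first the user's own, then shared ones."""
    for owner_op in ("=", "!="):  # own credentials first, then shared
        filters = [["owner_uuid", owner_op, "self"],
                   ["properties.credential_type", "=", credential_type],
                   ["name", "=", credential_name]]
        matches = api_list(filters)
        if len(matches) > 1:
            # Names are unique per owner, but shared credentials from
            # different owners could still collide.
            raise ValueError(f"ambiguous credential name {credential_name!r}")
        if matches:
            return matches[0]
    raise LookupError(f"no credential named {credential_name!r}")
```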

 h3. Using "group_class: credential" objects to indirectly grant access to external resources 

I think this scheme can also be used to grant permission to external resources that the Arvados system has access to, but where it needs to control user access. For example, instead of using an AWS access key and secret, granting an AWS role.

 In this case, we would do something like: 

 credential_type: aws_role 
 credential_id: the id of the role 

 There's no need for scope or secret here, but the Arvados service would check that the user has @can_read@ on the credential to determine if it can grant the role to the user. 

 To prevent just anyone from creating aws_role objects, the service granting the AWS roles would only honor them if the @aws_role@ credential is owned by the system user.
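
The service-side check described above might look like this sketch. The system user UUID shown is illustrative (each cluster has its own), and @user_can_read@ stands in for a real permission check:

```python
SYSTEM_USER_UUID = "zzzzz-tpzed-000000000000000"  # illustrative system user UUID

def may_grant_aws_role(credential, user_can_read):
    """Honor an aws_role credential only if it is system-owned and readable.

    user_can_read is the result of a permission check: does the requesting
    user have can_read (or better) on this credential?
    """
    return (credential["properties"].get("credential_type") == "aws_role"
            and credential["owner_uuid"] == SYSTEM_USER_UUID
            and user_can_read)
```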