Story #16535
[keep-web] Minimal implementation of S3 API
Status: Closed
Added by Tom Clegg over 4 years ago. Updated about 4 years ago.
100%
Description
Including a test suite that anticipates support for more S3 API features.
Supporting:
- Accessing a collection as a bucket, getting index, reading/writing/deleting files (including range requests)
- Accessing a project as a bucket, getting index of subprojects and collections, reading/writing/deleting files
- (?) Mounting "shared with me" as a bucket, getting index of subprojects and collections, reading/writing/deleting files
Updated by Tom Clegg over 4 years ago
- Related to Story #16360: Keep-web supports S3 compatible interface added
Updated by Peter Amstutz over 4 years ago
- Target version changed from Arvados Future Sprints to 2020-07-15
Updated by Tom Clegg over 4 years ago
- Target version changed from 2020-07-15 to 2020-08-12 Sprint
Updated by Tom Clegg over 4 years ago
- Supports ListObjects v1 (we should also support v2, which uses continuation-token/start-after instead of next-marker)
- Supports GetObject and PutObject
- Bucket name must be a project UUID or a collection UUID (in a project bucket, objects can be named "collection name/file.txt" or "subproject name/collection name/file.txt" but not "top level file.txt")
- Accepts V2 signatures (we should also accept v4)
- Expects an Arvados token as the "Access Key", doesn't check request signature
- TODO: test ListObjects on project bucket
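To make the marker-based ListObjects v1 pagination above concrete, here is a minimal client-side sketch in Go. It is not part of the branch or its test suite; the base URL, bucket name, and token in main() are placeholders, and the Authorization header is the simplified V2-style form that works while keep-web only checks the access key (see the note above about not verifying signatures).

package main

import (
    "encoding/xml"
    "fmt"
    "net/http"
    "net/url"
    "os"
)

type listResult struct {
    IsTruncated bool
    NextMarker  string
    Contents    []struct {
        Key  string
        Size int64
    }
}

// listAll pages through a bucket using the ListObjects V1 marker/NextMarker
// protocol (V2 would use continuation-token/start-after instead).
func listAll(baseURL, bucket, token string) error {
    marker := ""
    for {
        q := url.Values{
            "delimiter": {"/"},
            "marker":    {marker},
            "max-keys":  {"1000"},
        }
        req, err := http.NewRequest("GET", baseURL+"/"+bucket+"/?"+q.Encode(), nil)
        if err != nil {
            return err
        }
        // V2-style header: the Arvados token is used as the access key.
        req.Header.Set("Authorization", "AWS "+token+":x")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        var lr listResult
        err = xml.NewDecoder(resp.Body).Decode(&lr)
        resp.Body.Close()
        if err != nil {
            return err
        }
        for _, c := range lr.Contents {
            fmt.Println(c.Key, c.Size)
        }
        if !lr.IsTruncated {
            return nil
        }
        marker = lr.NextMarker
    }
}

func main() {
    // Placeholder endpoint, bucket (collection or project UUID), and token.
    if err := listAll("https://keep-web.example.com", "zzzzz-4zz18-zzzzzzzzzzzzzzz", "exampletoken"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}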
Updated by Lucas Di Pentima over 4 years ago
Reviewing 9f904db
- File services/keep-web/s3.go:
  - Line 110: Is that deferred Close() call unnecessary because of what's done on line 117?
  - Line 119: I think it would be better for debugging if the Close() error message differed from the io.Copy() error message.
  - Line 186: maxKeys defaults to 100, but the docs say it should default to 1000 (also used as the upper limit, which I think we don't enforce); should we honor that? I guess some clients may assume that behaviour. https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html#API_ListObjects_RequestSyntax
- On services/keep-web/s3_test.go, the teardown() func doesn’t delete the project created at s3setup(); is that on purpose?
- Do you think it would be useful to have a way to disable the S3 API from the config? WebDAV being always “on” makes sense because wb2 uses it, but as S3 isn’t needed by any of our components, maybe it would be a good idea to offer the admin a knob to disable it (or enable it, if the default state would be “off”).
- WebDAV has cache config knobs that also apply to S3 AFAICT, so it’s getting confusing to say WebDAV when we really mean keep-web, but I’m not sure if it’s worth changing. (Also not 100% related to this branch, but wanted to point it out.)
- Documentation is missing, but I suppose this will be covered in another story, right?
- Tried mounting the bucket using “Mountain Duck”, providing the user’s token as Key ID & Key Secret, and got the V4 signature error.
2020-07-23_18:35:51.34366 {"RequestID":"req-12n5c4306j8xjrzrgh1j","level":"info","msg":"request","remoteAddr":"127.0.0.1:33350","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3","reqMethod":"GET","reqPath":"","reqQuery":"","time":"2020-07-23T18:35:51.343556262Z"}
2020-07-23_18:35:51.34371 &{{0xc000196870 {0xc000196810 {0xc000196810} {0xc000196810} {0xc000196810} {0xc000196810}}} 400 0 <nil> []} V4 signature is not supported
2020-07-23_18:35:51.34372 {"RequestID":"req-12n5c4306j8xjrzrgh1j","level":"info","msg":"response","remoteAddr":"127.0.0.1:33350","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3","reqMethod":"GET","reqPath":"","reqQuery":"","respBody":"","respBytes":0,"respStatus":"Bad Request","respStatusCode":400,"time":"2020-07-23T18:35:51.343679101Z","timeToStatus":0.000079,"timeTotal":0.000119,"timeWriteBody":0.000040}
2020-07-23_18:35:51.34834 {"RequestID":"req-zl9arjhqzxndoxlug71n","level":"info","msg":"request","remoteAddr":"127.0.0.1:33352","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3","reqMethod":"GET","reqPath":"","reqQuery":"encoding-type=url\u0026max-keys=1000\u0026prefix\u0026delimiter=%2F","time":"2020-07-23T18:35:51.348202254Z"}
2020-07-23_18:35:51.34838 &{{0xc000196d20 {0xc000196cf0 {0xc000196cf0} {0xc000196cf0} {0xc000196cf0} {0xc000196cf0}}} 400 0 <nil> []} V4 signature is not supported
2020-07-23_18:35:51.34846 {"RequestID":"req-zl9arjhqzxndoxlug71n","level":"info","msg":"response","remoteAddr":"127.0.0.1:33352","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3","reqMethod":"GET","reqPath":"","reqQuery":"encoding-type=url\u0026max-keys=1000\u0026prefix\u0026delimiter=%2F","respBody":"","respBytes":0,"respStatus":"Bad Request","respStatusCode":400,"time":"2020-07-23T18:35:51.348377186Z","timeToStatus":0.000115,"timeTotal":0.000171,"timeWriteBody":0.000056}
… luckily I found an installable “server profile” that allowed the app to use AWS2 signatures: https://trac.cyberduck.io/wiki/help/en/howto/s3#AuthenticationwithsignatureversionAWS2
- Once I’m able to authenticate, I’m having an issue trying to mount a collection bucket, for example https://10.1.1.3:9002/x3mew-4zz18-ou52urrv6t9z559 (this is arvbox); the client reports the response as “unknown”, and the keep-web log shows:
2020-07-23_18:53:42.38253 {"RequestID":"req-1foyrk01glli10006mfk","level":"info","msg":"request","remoteAddr":"127.0.0.1:42656","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3:9002","reqMethod":"GET","reqPath":"x3mew-4zz18-ou52urrv6t9z559/","reqQuery":"versioning","time":"2020-07-23T18:53:42.382019682Z"}
2020-07-23_18:53:42.40383 {"RequestID":"req-1foyrk01glli10006mfk","level":"info","msg":"response","remoteAddr":"127.0.0.1:42656","reqBytes":0,"reqForwardedFor":"10.1.1.2","reqHost":"10.1.1.3:9002","reqMethod":"GET","reqPath":"x3mew-4zz18-ou52urrv6t9z559/","reqQuery":"versioning","respBytes":1303,"respStatus":"OK","respStatusCode":200,"time":"2020-07-23T18:53:42.403766541Z","timeToStatus":0.021737,"timeTotal":0.021742,"timeWriteBody":0.000005}
...maybe I'm making some mistake?
Updated by Tom Clegg over 4 years ago
- Line 110: Is that deferred Close() call unnecessary because of what's done on line 117?
It does mean an extra superfluous close, but it ensures f.Close() is called even if we return early (e.g., io.Copy() fails). I think this pattern is more reliable than inserting f.Close() into every early-return case, even when there's currently only one such case.
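For reference, a generic illustration of that pattern (not the actual s3.go code):

package sketch

import (
    "fmt"
    "io"
    "os"
)

// copyToFile shows the idiom: the deferred Close() covers every early
// return, while the explicit Close() lets the success path report errors.
func copyToFile(path string, src io.Reader) error {
    f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
    if err != nil {
        return err
    }
    defer f.Close() // superfluous extra close on the success path, but safe
    if _, err := io.Copy(f, src); err != nil {
        return fmt.Errorf("copy: %w", err)
    }
    if err := f.Close(); err != nil {
        return fmt.Errorf("close: %w", err)
    }
    return nil
}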
- Line 119: I think it would be better for debugging if the Close() error message differed from the io.Copy() error message.
Agreed; inserted "close:" into the close error message.
- Line 186: maxKeys defaults to 100, but the docs say it should default to 1000 (also used as the upper limit, which I think we don't enforce); should we honor that? I guess some clients may assume that behaviour. https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html#API_ListObjects_RequestSyntax
Yes, changed default to 1000 and used it as a maximum too.
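For illustration, a sketch of that default-and-cap behaviour (not necessarily the branch's exact code):

package sketch

import "strconv"

// s3MaxKeys is both the ListObjects default and the upper bound, per the
// AWS documentation linked above.
const s3MaxKeys = 1000

// parseMaxKeys applies the default when max-keys is missing or invalid and
// caps any larger value at s3MaxKeys.
func parseMaxKeys(param string) int {
    if n, err := strconv.Atoi(param); err == nil && n > 0 && n < s3MaxKeys {
        return n
    }
    return s3MaxKeys
}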
- On services/keep-web/s3_test.go, the teardown() func doesn’t delete the project created at s3setup(); is that on purpose?
Not on purpose. Fixed.
- Do you think it would be useful to have a way to disable the S3 API from the config? WebDAV being always “on” makes sense because wb2 uses it, but as S3 isn’t needed by any of our components, maybe it would be a good idea to offer the admin a knob to disable it (or enable it, if the default state would be “off”).
I think it would make sense only if there's a reason to disable it (it prevents something else from working, or creates a security weakness in some situations). "You should disable the S3 API if you don't intend to use it, because ..."?
- WebDAV has cache config knobs that also apply to S3 AFAICT, so it’s getting confusing to say WebDAV when we really mean keep-web, but I’m not sure if it’s worth changing. (Also not 100% related to this branch, but wanted to point it out.)
Currently the cache settings aren't used but I see your point. I don't think saying "keep-web" would be any better, though -- we'd still need to say "these settings apply to both webdav and s3" somewhere. Might as well just leave them in webdav and (when we actually use them) say "these settings also apply to s3."
As an aside, I'm thinking we should change the cache behavior so we have (at most) one sitefs in memory for each token. But this will require some new atomic write operations to handle concurrent "write file" requests, so request-A can commit changes to a collection without accidentally including a half-written file from concurrent request-B.
- Documentation is missing, but I suppose this will be covered in another story, right?
...or another branch on this ticket, yes.
- Tried mounting the bucket using “Mountain Duck”, providing the user’s token as Key ID & Key Secret, and got the V4 signature error.
[...]
Yes, I think we'll need to support V4 signatures.
I've added a first attempt at this -- might be worth a try. You'll have to use a v1 Arvados token (aka just the "secret" part of a v2 token) as your access key, because the V4 signature header format uses "/" as an "end of key" delimiter.
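For context, the V4 Authorization header carries Credential=<access-key>/<date>/<region>/<service>/aws4_request, so "/" effectively ends the access key. Here is a sketch (illustrative only, not the keep-web parser) of pulling the key out, which is why a bare secret works but a full v2 Arvados token ("v2/<uuid>/<secret>") can't be used:

package sketch

import (
    "fmt"
    "strings"
)

// extractV4AccessKey returns everything before the first "/" of the
// Credential field in an AWS Signature V4 Authorization header.
func extractV4AccessKey(authHeader string) (string, error) {
    const prefix = "AWS4-HMAC-SHA256 "
    if !strings.HasPrefix(authHeader, prefix) {
        return "", fmt.Errorf("not a V4 Authorization header")
    }
    for _, field := range strings.Split(authHeader[len(prefix):], ",") {
        field = strings.TrimSpace(field)
        if strings.HasPrefix(field, "Credential=") {
            cred := strings.TrimPrefix(field, "Credential=")
            if i := strings.Index(cred, "/"); i > 0 {
                return cred[:i], nil
            }
            return "", fmt.Errorf("malformed Credential %q", cred)
        }
    }
    return "", fmt.Errorf("no Credential field found")
}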
… luckily I found an installable “server profile” that allowed the app to use AWS2 signatures: https://trac.cyberduck.io/wiki/help/en/howto/s3#AuthenticationwithsignatureversionAWS2
- Once I’m able to authenticate, I’m having an issue trying to mount a collection bucket, for example https://10.1.1.3:9002/x3mew-4zz18-ou52urrv6t9z559 (this is arvbox); the client reports the response as “unknown”, and the keep-web log shows:
[...]
...maybe I'm making some mistake?
I think that means we need to support the GetBucketVersioning API. TBC...
16535-s3 @ 8b1a79bbcd7c461d5b4b6e56092b6734942bdf24 -- developer-run-tests: #1971
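For reference, the ?versioning request in the earlier log is the S3 GetBucketVersioning call. A minimal handler sketch (illustrative shape and names, not the keep-web code) for a bucket with versioning disabled just returns an empty VersioningConfiguration document:

package sketch

import (
    "encoding/xml"
    "net/http"
)

// handleGetBucketVersioning answers GET <bucket>?versioning for a bucket
// that has never had versioning enabled.
func handleGetBucketVersioning(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/xml")
    w.Write([]byte(xml.Header +
        `<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"></VersioningConfiguration>`))
}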
Updated by Tom Clegg over 4 years ago
With GetBucketVersioning API, and tests fixed:
16535-s3 @ 481d2dd74f2323347ccfbc8009420dfea239287b -- developer-run-tests: #1972
Updated by Lucas Di Pentima over 4 years ago
As commented on standup, I've been trying to use arvbox with a couple of clients:
- I’m having issues making Cyberduck/MountainDuck clients work using v2 or v4 signatures:
- Using the secret part (i.e., the v1 version) of the v2 token and a collection’s UUID as the path, it seems to authenticate correctly, but both clients (from the same developer) return the following exception when trying to list the bucket: org.xml.sax.SAXNotSupportedException.
- The keep-web log shows:
2020-07-30_14:20:30.93400 {"RequestID":"req-1sx3doplqcpbq1ovk9dh","level":"info","msg":"request","remoteAddr":"127.0.0.1:44028","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.116","reqMethod":"GET","reqPath":"x3mew-j7d0g-vnd8i71xs4sv2m7/","reqQuery":"encoding-type=url\u0026max-keys=1000\u0026prefix\u0026delimiter=%2F","time":"2020-07-30T14:20:30.933917541Z"}
2020-07-30_14:20:31.17556 {"RequestID":"req-1sx3doplqcpbq1ovk9dh","level":"info","msg":"response","remoteAddr":"127.0.0.1:44028","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.116","reqMethod":"GET","reqPath":"x3mew-j7d0g-vnd8i71xs4sv2m7/","reqQuery":"encoding-type=url\u0026max-keys=1000\u0026prefix\u0026delimiter=%2F","respBytes":232,"respStatus":"OK","respStatusCode":200,"time":"2020-07-30T14:20:31.175481081Z","timeToStatus":0.241549,"timeTotal":0.241558,"timeWriteBody":0.000008}
- Also tried using another S3-compatible client: backup software from my home NAS (that I’m already using with Backblaze B2). When I try to set up a “storage space” (an S3-compatible storage profile) using a v2 signature and the collection’s UUID as the bucket name, it attempts to make a HEAD request that fails on keep-web’s side with “Method not allowed”. It seems that HEAD is a legal request method that we may need to support: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html
- Also tried some more clients that failed to work and didn’t provide any useful information: DragonDisk, CrossFTP & ExpanDrive... do you have a recommendation on what to use to validate that I'm providing the correct credential data?
Updated by Lucas Di Pentima over 4 years ago
I've been able to sniff the traffic between CyberDuck and Arvbox using mitmproxy in "reverse mode".
Request:
Date: Thu, 30 Jul 2020 17:38:28 GMT
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Host: 192.168.1.116
x-amz-date: 20200730T173828Z
Authorization: AWS4-HMAC-SHA256 Credential=336xo5ivyd4h12br1g3lc7d6ov3n2r5fz2w5g6e7x2nkxd320a/20200730/us-east-1/s3/aws4_request,SignedHeaders=date;host;x-amz-content-sha256;x-amz-date,Signature=296ca3d1c26f9d1a5304c701acad098a5681f7dcc4b9f3f9219c5ee5e39e83ea
Connection: Keep-Alive
User-Agent: Cyberduck/7.4.1.33065 (Mac OS X/10.15.5) (x86_64)
Query:
encoding-type: url
max-keys: 1000
prefix:
delimiter: /
Response:
Server: nginx/1.10.3
Date: Thu, 30 Jul 2020 17:38:42 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 232
Connection: keep-alive
XML:
<ListResp>
  <Name>x3mew-j7d0g-vnd8i71xs4sv2m7</Name>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes></CommonPrefixes>
  <NextMarker></NextMarker>
</ListResp>
Updated by Lucas Di Pentima over 4 years ago
Follow-up: I added a collection inside the project being used as a bucket and got this response:
<ListResp>
  <Name>x3mew-j7d0g-vnd8i71xs4sv2m7</Name>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes>
    <Prefix>New collection/</Prefix>
  </CommonPrefixes>
  <NextMarker></NextMarker>
</ListResp>
...but the client keeps giving the same exception.
Updated by Lucas Di Pentima over 4 years ago
One more test: changed the path to /project_uuid/New collection/ and got what seems to be a valid response, but the client reports the same error:
Request:
Date: Thu, 30 Jul 2020 17:55:51 GMT
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Host: 192.168.1.116
x-amz-date: 20200730T175551Z
Authorization: AWS4-HMAC-SHA256 Credential=336xo5ivyd4h12br1g3lc7d6ov3n2r5fz2w5g6e7x2nkxd320a/20200730/us-east-1/s3/aws4_request,SignedHeaders=date;host;x-amz-content-sha256;x-amz-date,Signature=9432283ef04b2bec1aaa663a5701bb3937b5059b3ff8aa04530129338f2621f5
Connection: Keep-Alive
User-Agent: Cyberduck/7.4.1.33065 (Mac OS X/10.15.5) (x86_64)
Query:
encoding-type: url
max-keys: 1000
prefix: New collection/
delimiter: /
Response:
Server: nginx/1.10.3
Date: Thu, 30 Jul 2020 17:56:05 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 471
Connection: keep-alive
XML:
<ListResp>
  <Name>x3mew-j7d0g-vnd8i71xs4sv2m7</Name>
  <Prefix>New collection/</Prefix>
  <Delimiter>/</Delimiter>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>New collection/kubernetes-for-full-stack-developers.epub</Key>
    <LastModified></LastModified>
    <Size>0</Size>
    <ETag></ETag>
    <StorageClass></StorageClass>
    <Owner>
      <ID></ID>
      <DisplayName></DisplayName>
    </Owner>
  </Contents>
  <CommonPrefixes></CommonPrefixes>
  <NextMarker></NextMarker>
</ListResp>
Updated by Lucas Di Pentima over 4 years ago
Could it be that the response should use a different Content-Type header, or maybe that the response body should be enclosed in proper XML document markup (an XML declaration)?
Updated by Tom Clegg over 4 years ago
- set Content-Type response header
- add XML header
- fix outer XML tag on ListBucket response (see "ListResp" tag in #16535#note-14 above)
- populate Size field in list response
- config knob to accept/return folder objects (0-byte objects with content-type application/x-directory and name ending in "/"), making it possible to express empty directories, and work with clients that expect to be able to create them
- test case for HEAD
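As a companion to the folder-object and HEAD items above, here is a sketch (hypothetical names, not the keep-web code) of the convention from the server's point of view: a zero-byte PUT whose key ends in "/" and whose content type is application/x-directory stands in for an empty directory, and a client can HEAD the same key afterwards to confirm it exists.

package sketch

import (
    "net/http"
    "strings"
)

// isFolderObjectPut reports whether a request looks like a "create empty
// directory" operation under the folder-object convention described above.
func isFolderObjectPut(r *http.Request) bool {
    return r.Method == "PUT" &&
        strings.HasSuffix(r.URL.Path, "/") &&
        r.ContentLength == 0 &&
        strings.HasPrefix(r.Header.Get("Content-Type"), "application/x-directory")
}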
Updated by Lucas Di Pentima over 4 years ago
Some observations:
- Tried creating an “Empty” directory with the Cyberduck client and got status 500 with message:
mkdir "by_id/x3mew-j7d0g-vnd8i71xs4sv2m7/Empty" failed: file does not exist
This only happens when mounting a project bucket and trying to write in the root directory. If this is expected (the corresponding test prefixes all writes with a collection name), can the error message be clearer?
- Don’t know how “minimal” the S3 implementation should be for this story, but just wanted to point out that DELETE isn’t supported yet.
- Other than the above comments, the rest LGTM.
Updated by Peter Amstutz over 4 years ago
- Target version changed from 2020-08-12 Sprint to 2020-08-26 Sprint
Updated by Tom Clegg over 4 years ago
- Description updated (diff)
- Tried creating an “Empty” directory with the Cyberduck client and got status 500 with message:
mkdir "by_id/x3mew-j7d0g-vnd8i71xs4sv2m7/Empty" failed: file does not exist
This only happens when mounting a project bucket and trying to write in the root directory. If this is expected (the corresponding test prefixes all writes with a collection name), can the error message be clearer?
Error response is now 400. This involved fixing collectionfs code, which was returning ENOENT instead of EINVAL for that case.
16535-s3 @ 62edf6175986bf062076b42f89ef472446d0d18e -- developer-run-tests: #2013
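For reference, a generic sketch of that error-to-status mapping (not the actual collectionfs/keep-web code):

package sketch

import (
    "errors"
    "net/http"
    "os"
    "syscall"
)

// errorStatus maps filesystem errors to HTTP statuses: an invalid operation
// (e.g., mkdir directly in a project bucket's root) is the caller's mistake
// (400), while a missing object is 404.
func errorStatus(err error) int {
    switch {
    case err == nil:
        return http.StatusOK
    case errors.Is(err, syscall.EINVAL):
        return http.StatusBadRequest
    case errors.Is(err, os.ErrNotExist):
        return http.StatusNotFound
    default:
        return http.StatusInternalServerError
    }
}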
- Don’t know how “minimal” the S3 implementation should be for this story, but just wanted to point out that DELETE isn’t supported yet.
Yes, good point, I think delete should be part of "minimal".
Updated by Tom Clegg over 4 years ago
Add DeleteObject API (but not DeleteObjects):
16535-s3 @ 7556d0ea3265a898d0170bf32bab82e8d9920dde -- developer-run-tests: #2018
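A minimal client-side sketch of exercising DeleteObject with plain net/http (endpoint, bucket, key, and token are placeholders; the simplified V2 Authorization header follows the earlier listing sketch):

package sketch

import (
    "fmt"
    "net/http"
)

// deleteObject issues DELETE /<bucket>/<key> and accepts either 204 (what
// S3 normally returns) or 200.
func deleteObject(endpoint, bucket, key, token string) error {
    req, err := http.NewRequest("DELETE", endpoint+"/"+bucket+"/"+key, nil)
    if err != nil {
        return err
    }
    req.Header.Set("Authorization", "AWS "+token+":x")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status %s", resp.Status)
    }
    return nil
}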
Updated by Lucas Di Pentima over 4 years ago
- The DELETE method looks good
- I've accidentally stumbled into an issue uploading a big image file (~120 MB) using the "Cyberduck" S3 client; the keep-web logs show:
2020-08-24_14:32:06.13970 {"RequestID":"req-qpee3dp5fcmu1wilpa9e","level":"info","msg":"request","remoteAddr":"127.0.0.1:45162","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"GET","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/","reqQuery":"prefix=20200821%20rho%20ophiuchi%20processed.tif\u0026delimiter=%2F\u0026uploads","time":"2020-08-24T14:32:06.139620528Z"}
2020-08-24_14:32:06.16798 {"RequestID":"req-qpee3dp5fcmu1wilpa9e","level":"info","msg":"response","remoteAddr":"127.0.0.1:45162","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"GET","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/","reqQuery":"prefix=20200821%20rho%20ophiuchi%20processed.tif\u0026delimiter=%2F\u0026uploads","respBytes":370,"respStatus":"OK","respStatusCode":200,"time":"2020-08-24T14:32:06.167903578Z","timeToStatus":0.028249,"timeTotal":0.028274,"timeWriteBody":0.000026}
2020-08-24_14:32:06.17495 {"RequestID":"req-1va6wnkm3tfyq5gqva0z","level":"info","msg":"request","remoteAddr":"127.0.0.1:45166","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"GET","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/","reqQuery":"prefix=20200821%20rho%20ophiuchi%20processed.tif\u0026delimiter=%2F\u0026uploads","time":"2020-08-24T14:32:06.174829195Z"}
2020-08-24_14:32:06.20335 {"RequestID":"req-1va6wnkm3tfyq5gqva0z","level":"info","msg":"response","remoteAddr":"127.0.0.1:45166","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"GET","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/","reqQuery":"prefix=20200821%20rho%20ophiuchi%20processed.tif\u0026delimiter=%2F\u0026uploads","respBytes":370,"respStatus":"OK","respStatusCode":200,"time":"2020-08-24T14:32:06.203280146Z","timeToStatus":0.028422,"timeTotal":0.028444,"timeWriteBody":0.000022}
2020-08-24_14:32:06.20971 {"RequestID":"req-118gy7eltcy9xtf89n2m","level":"info","msg":"request","remoteAddr":"127.0.0.1:45170","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"POST","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/20200821 rho ophiuchi processed.tif","reqQuery":"uploads","time":"2020-08-24T14:32:06.209352423Z"}
2020-08-24_14:32:06.20974 {"RequestID":"req-118gy7eltcy9xtf89n2m","level":"info","msg":"response","remoteAddr":"127.0.0.1:45170","reqBytes":0,"reqForwardedFor":"192.168.1.139","reqHost":"192.168.1.170","reqMethod":"POST","reqPath":"x3fs5-4zz18-k7v02acnjck2a6l/20200821 rho ophiuchi processed.tif","reqQuery":"uploads","respBody":"method not allowed\n","respBytes":19,"respStatus":"Method Not Allowed","respStatusCode":405,"time":"2020-08-24T14:32:06.209484697Z","timeToStatus":0.000122,"timeTotal":0.000129,"timeWriteBody":0.000007}
...and the client shows this error message:
Request Error: Header "x-amz-content-sha256" set to the hex-encoded SHA256 hash of the request payload is required for AWS Version 4 request signing, please set this on: PUT https://192.168.1.170:9002/x3fs5-4zz18-k7v02acnjck2a6l/20200821%20rho%20ophiuchi%20processed.tif HTTP/1.1. Please contact your web hosting service provider for assistance.
This doesn't happen if I use the V2 signature connection, or if I re-scale the image and upload a smaller version (tried with ~7 MB).
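For what it's worth, the POST ...?uploads request in the log above is the S3 CreateMultipartUpload call, which keep-web doesn't implement, hence the 405. A purely illustrative sketch (not something this branch does) of failing more clearly would be to detect multipart-upload requests and answer with an S3-style error document:

package sketch

import (
    "encoding/xml"
    "net/http"
)

type s3Error struct {
    XMLName xml.Name `xml:"Error"`
    Code    string
    Message string
}

// rejectMultipart returns true (after writing an error response) if the
// request is part of the multipart upload API (?uploads or ?uploadId).
func rejectMultipart(w http.ResponseWriter, r *http.Request) bool {
    q := r.URL.Query()
    _, isCreate := q["uploads"]
    _, isPart := q["uploadId"]
    if !isCreate && !isPart {
        return false
    }
    w.Header().Set("Content-Type", "application/xml")
    w.WriteHeader(http.StatusNotImplemented)
    xml.NewEncoder(w).Encode(s3Error{Code: "NotImplemented", Message: "multipart upload is not supported"})
    return true
}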
Updated by Tom Clegg over 4 years ago
- Status changed from In Progress to Resolved
Updated by Tom Clegg over 4 years ago
- Status changed from Resolved to In Progress
- Target version changed from 2020-08-26 Sprint to 2020-09-09 Sprint
Reopening to fix a bug in the ListObjects response affecting s3cmd:
$ s3cmd --host=download.ce8i5.arvadosapi.com --host-bucket=download.ce8i5.arvadosapi.com ls s3://ce8i5-4zz18-j45o88d58u7js60/CMU-1/
...
Traceback (most recent call last):
  File "/usr/bin/s3cmd", line 2919, in <module>
    rc = main()
  File "/usr/bin/s3cmd", line 2841, in main
    rc = cmd_func(args)
  File "/usr/bin/s3cmd", line 120, in cmd_ls
    subcmd_bucket_list(s3, uri)
  File "/usr/bin/s3cmd", line 173, in subcmd_bucket_list
    "uri": uri.compose_uri(bucket, prefix["Prefix"])})
KeyError: 'Prefix'
This fix (removing the CommonPrefixes tag from the response when there are no common prefixes being returned) is similar to the strategy used in the goamz s3test server implementation, and worked when tested on ce8i5.
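For illustration, one way (not necessarily the branch's exact change) to model this with encoding/xml: when CommonPrefixes is a slice, an empty result writes no CommonPrefixes element at all, and each non-empty entry carries its own Prefix child, which is the shape s3cmd expects.

package sketch

import "encoding/xml"

type commonPrefix struct {
    Prefix string
}

// listBucketResult is an illustrative response struct; field order and
// names are simplified.
type listBucketResult struct {
    XMLName        xml.Name `xml:"ListBucketResult"`
    Name           string
    Prefix         string
    Marker         string
    MaxKeys        int
    IsTruncated    bool
    CommonPrefixes []commonPrefix
}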
I noticed the previous version was needlessly sorting the CommonPrefixes list N times instead of just once. That's fixed too.
16535-s3 @ 065aa362326aae3ec05958436053c72299bdad7d -- developer-run-tests: #2049
Updated by Anonymous over 4 years ago
- Status changed from In Progress to Resolved
- % Done changed from 50 to 100
Applied in changeset arvados|bee95c1cdbc3859f47a0a95940680ebaa2a4c9a5.
Updated by Tom Clegg over 4 years ago
- Related to Support #16668: Load demo data from openslide added