h1. Running tests
{{toc}}
The arvados git repository has a @run-tests.sh@ script which tests (nearly) all of the components in the source tree. Jenkins at https://ci.arvados.org uses this exact script, so running it _before pushing a new main_ is a good way to predict whether Jenkins will fail your build and start pestering you on IRC.
h2. Background
These instructions assume you have the arvados source tree at @~/arvados@, possibly with local modifications.
h2. Install prerequisites
Follow instructions at [[Hacking prerequisites]]: "Install dev environment", etc. Don't miss creating the Postgres database, docker groups, etc.
h2. Environment
Your locale must use UTF-8. If necessary, set the environment variable @LANG=C.UTF-8@ so that the @locale@ command reports UTF-8.
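For example, a minimal check in your shell (@C.UTF-8@ is one commonly available UTF-8 locale; any other UTF-8 locale works too):
<pre>
locale                # the reported charmap values should say UTF-8
export LANG=C.UTF-8   # if they don't, override for the current shell
</pre>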
h2. If you're Jenkins
Run all tests from a clean slate (slow, but more immune to leaks):
<pre>
WORKSPACE=~/arvados ~/arvados/build/run-tests.sh
</pre>
h2. If you're a developer
Cache everything needed to run test suites:
<pre>
mkdir ~/.cache/arvados-build
WORKSPACE=~/arvados ~/arvados/build/run-tests.sh --temp ~/.cache/arvados-build --only install
</pre>
Start interactive mode:
<pre>
WORKSPACE=~/arvados ~/arvados/build/run-tests.sh --temp ~/.cache/arvados-build --interactive
</pre>
Start interactive mode with debug output enabled:
<pre>
WORKSPACE=~/arvados ~/arvados/build/run-tests.sh ARVADOS_DEBUG=1 --temp ~/.cache/arvados-build --interactive
</pre>
When prompted, choose a test suite to run:
<pre>
== Interactive commands:
test TARGET
test TARGET:py3 (test with python3)
test TARGET -check.vv (pass arguments to test)
install TARGET
install env (go/python libs)
install deps (go/python libs + arvados components needed for integration tests)
reset (...services used by integration tests)
exit
== Test targets:
cmd/arvados-client lib/dispatchcloud/container sdk/go/auth sdk/pam:py3 services/fuse tools/crunchstat-summary
cmd/arvados-server lib/dispatchcloud/scheduler sdk/go/blockdigest sdk/python services/fuse:py3 tools/crunchstat-summary:py3
lib/cli lib/dispatchcloud/ssh_executor sdk/go/crunchrunner sdk/python:py3 services/health tools/keep-block-check
lib/cloud lib/dispatchcloud/worker sdk/go/dispatch services/arv-git-httpd services/keep-balance tools/keep-exercise
lib/cloud/azure lib/service sdk/go/health services/crunch-dispatch-local services/keepproxy tools/keep-rsync
lib/cloud/ec2 sdk/cwl sdk/go/httpserver services/crunch-dispatch-slurm services/keepstore tools/sync-groups
lib/cmd sdk/cwl:py3 sdk/go/keepclient services/crunch-run services/keep-web
lib/controller sdk/go/arvados sdk/go/manifest services/crunchstat services/nodemanager
lib/crunchstat sdk/go/arvadosclient sdk/go/stats services/dockercleaner services/nodemanager:py3
lib/dispatchcloud sdk/go/asyncbuf sdk/pam services/dockercleaner:py3 services/ws
What next?
</pre>
Example: testing lib/dispatchcloud/container, showing verbose/debug logs:
<pre>
What next? test lib/dispatchcloud/container/ -check.vv
======= test lib/dispatchcloud/container
START: queue_test.go:99: IntegrationSuite.TestCancelIfNoInstanceType
WARN[0000] cancel container with no suitable instance type ContainerUUID=zzzzz-dz642-queuedcontainer error="no suitable instance type"
WARN[0000] cancel container with no suitable instance type ContainerUUID=zzzzz-dz642-queuedcontainer error="no suitable instance type"
START: queue_test.go:37: IntegrationSuite.TearDownTest
PASS: queue_test.go:37: IntegrationSuite.TearDownTest 0.846s
PASS: queue_test.go:99: IntegrationSuite.TestCancelIfNoInstanceType 0.223s
START: queue_test.go:42: IntegrationSuite.TestGetLockUnlockCancel
INFO[0001] adding container to queue ContainerUUID=zzzzz-dz642-queuedcontainer InstanceType=testType Priority=1 State=Queued
START: queue_test.go:37: IntegrationSuite.TearDownTest
PASS: queue_test.go:37: IntegrationSuite.TearDownTest 0.901s
PASS: queue_test.go:42: IntegrationSuite.TestGetLockUnlockCancel 0.177s
OK: 2 passed
PASS
ok git.arvados.org/arvados.git/lib/dispatchcloud/container 2.150s
======= test lib/dispatchcloud/container -- 3s
Pass: lib/dispatchcloud/container tests (3s)
All test suites passed.
</pre>
h3. Running individual test cases
h4. Golang
Most Go packages use gocheck. Pass gocheck command-line arguments like @-check.f@ to select individual test cases.
<pre>
What next? test lib/dispatchcloud/container -check.vv -check.f=LockUnlock
======= test lib/dispatchcloud/container
START: queue_test.go:42: IntegrationSuite.TestGetLockUnlockCancel
INFO[0000] adding container to queue ContainerUUID=zzzzz-dz642-queuedcontainer InstanceType=testType Priority=1 State=Queued
START: queue_test.go:37: IntegrationSuite.TearDownTest
PASS: queue_test.go:37: IntegrationSuite.TearDownTest 0.812s
PASS: queue_test.go:42: IntegrationSuite.TestGetLockUnlockCancel 0.184s
OK: 1 passed
PASS
ok git.arvados.org/arvados.git/lib/dispatchcloud/container 1.000s
======= test lib/dispatchcloud/container -- 2s
</pre>
h4. Python
To focus on failing or newly-added tests, pass the appropriate pytest switches:
<pre>
-x, --exitfirst Exit instantly on first error or failed test
--lf, --last-failed Rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first Run all tests, but run the last failures first. This
may re-order tests and thus lead to repeated fixture
setup/teardown.
--nf, --new-first Run tests from new files first, then the rest of the
tests sorted by file mtime
</pre>
If you want to manually select tests:
<pre>
FILENAME Run tests from FILENAME, relative to the source root
FILENAME::CLASSNAME Run tests from CLASSNAME
FILENAME::FUNCNAME, FILENAME::CLASSNAME::FUNCNAME
Run only the named test function
-k EXPRESSION Only run tests which match the given substring
expression. An expression is a Python evaluable
expression where all names are substring-matched
against test names and their parent classes.
Example: -k 'test_method or test_other' matches all
test functions and classes whose name contains
'test_method' or 'test_other', while -k 'not
test_method' matches those that don't contain
'test_method' in their names. -k 'not test_method
and not test_other' will eliminate the matches.
Additionally keywords are matched to classes and
functions containing extra names in their
'extra_keyword_matches' set, as well as functions
which have names assigned directly to them. The
matching is case-insensitive.
-m MARKEXPR Only run tests matching given mark expression. For
example: -m 'mark1 and not mark2'.
</pre>
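For example, a hypothetical invocation that uses @-k@ to select tests by keyword (the substring @disk_cache@ is illustrative; substitute a name that matches tests in the actual suite):
<pre>
What next? test sdk/python:py3 -k disk_cache
</pre>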
For even more options, refer to the "pytest command line reference":https://docs.pytest.org/en/stable/reference/reference.html#command-line-flags.
Example:
<pre>
What next? test sdk/python:py3 --disable-warnings --tb=no --no-showlocals tests/test_keep_client.py::KeepDiskCacheTestCase
======= test sdk/python
[…pip output…]
========================================================== test session starts ==========================================================
platform linux -- Python 3.8.19, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/brett/Curii/arvados/sdk/python
configfile: pytest.ini
collected 9 items
tests/test_keep_client.py F........ [100%]
======================================================== short test summary info ========================================================
FAILED tests/test_keep_client.py::KeepDiskCacheTestCase::test_disk_cache_cap - AssertionError: True is not false
====================================================== 1 failed, 8 passed in 0.16s ======================================================
======= sdk/python tests -- FAILED
======= test sdk/python -- 2s
Failures (1):
Fail: sdk/python tests (2s)
What next? test sdk/python:py3 --disable-warnings --tb=no --no-showlocals --lf
======= test sdk/python
[…pip output…]
========================================================== test session starts ==========================================================
platform linux -- Python 3.8.19, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/brett/Curii/arvados/sdk/python
configfile: pytest.ini
testpaths: tests
collected 964 items / 963 deselected / 1 selected
run-last-failure: rerun previous 1 failure
tests/test_keep_client.py F [100%]
======================================================== short test summary info ========================================================
FAILED tests/test_keep_client.py::KeepDiskCacheTestCase::test_disk_cache_cap - AssertionError: True is not false
============================================= 1 failed, 963 deselected, 1 warning in 0.43s ==============================================
======= sdk/python tests -- FAILED
======= test sdk/python -- 2s
Failures (1):
Fail: sdk/python tests (2s)
</pre>
h4. RailsAPI
<pre>
What next? test services/api TESTOPTS=--name=/.*signed.locators.*/
[...]
# Running:
....
Finished in 1.080084s, 3.7034 runs/s, 461.0751 assertions/s.
</pre>
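@--name@ also accepts a literal test method name instead of a regex (standard minitest behavior; @test_signed_locators@ below is a hypothetical name):
<pre>
What next? test services/api TESTOPTS=--name=test_signed_locators
</pre>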
h3. Restarting services for integration tests
If you have changed services/api code and want to check whether it breaks the lib/dispatchcloud/container integration tests:
<pre>
What next? reset # teardown the integration-testing environment
What next? install services/api # (only needed if you've updated dependencies)
What next? test lib/dispatchcloud/container # bring up the integration-testing environment and run tests
What next? test lib/dispatchcloud/container # leave the integration-testing environment up and run tests
</pre>
h3. Updating cache after pulling main
Always quit interactive mode and restart after modifying run-tests.sh (via git pull, git checkout, editing, etc.).
When you start, run "install all" to get the latest gem/python dependencies, install updated versions of Arvados services used by integration tests, etc.
Then you can resume your cycle of "test lib/controller", etc.
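For example, a typical post-pull sequence in interactive mode:
<pre>
What next? install all
What next? test lib/controller
</pre>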
h3. Controlling test order (Rails)
Rails tests start off with a line like this:
<pre>
Run options: -v -d --seed 57089
</pre>
The seed value determines the order in which tests run. To reproduce an order-dependent test failure, specify the same seed as the previous failed run:
<pre>
What next? test services/api TESTOPTS="-v -d --seed 57089"
</pre>
h3. Other options
For more usage info, try:
<pre>
~/arvados/build/run-tests.sh --help
</pre>
h2. Running workbench diagnostics tests
You can run workbench diagnostics tests against any production server.
Update your workbench @application.yml@ to add a "diagnostics" section with the login token and pipeline details. The example configuration below runs the @qr1hi-p5p6p-ftcb0o61u4yd2zr@ pipeline in the qr1hi environment.
<pre>
diagnostics:
secret_token: useanicelongrandomlygeneratedsecrettokenstring
arvados_workbench_url: https://workbench.qr1hi.arvadosapi.com
user_tokens:
active: yourcurrenttokenintheenvironmenttowhichyouarepointing
pipelines_to_test:
pipeline_1:
template_uuid: qr1hi-p5p6p-ftcb0o61u4yd2zr
input_paths: []
max_wait_seconds: 300
</pre>
You can now run the "qr1hi" diagnostics tests using the following commands:
<pre>
cd $ARVADOS_HOME
RAILS_ENV=diagnostics bundle exec rake TEST=test/diagnostics/pipeline_test.rb
</pre>
h2. Running workbench2 tests
React uses a lot of filesystem watchers (via inotify), and the default limit of 8192 watched files is relatively low. Increase it with:
<pre>
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
</pre>
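You can confirm the new value took effect with standard sysctl usage:
<pre>
sysctl fs.inotify.max_user_watches
</pre>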
h3. Docker
The integration tests can be run on non-Debian systems using Docker. The workbench2 subfolder includes a Makefile target that preinstalls the necessary dependencies in a Docker container using Ansible.
With Docker and Ansible installed, run this command from within the @arvados/services/workbench2@ directory:
<pre>
make workbench-docker-image
</pre>
You can verify the Docker image was built by looking for @arvados/workbench@ in the output of @docker image ls@.
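For example (standard Docker CLI usage):
<pre>
docker image ls | grep arvados/workbench
</pre>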
Then, start the interactive tests with this command:
<pre>
make interactive-tests-in-docker
</pre>
h4. Docker Troubleshooting
h5. No version of Cypress is installed / other error starting Cypress
Recreate the home volume, which re-installs Cypress and other persisted dependencies, by running:
<pre>
make clean-docker-volume
make workbench-docker-volume
</pre>
h3. Debian Host System
These instructions assume a Debian 10 (buster) host system.
Install the Arvados test dependencies:
<pre>
echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get install -y --no-install-recommends golang -t buster-backports
apt-get install -y --no-install-recommends build-essential ca-certificates git libpam0g-dev
</pre>
Install a few more dependencies for workbench2:
<pre>
apt-get update
apt-get install -y --no-install-recommends gnupg2 sudo curl
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
apt-get update
apt-get install -y --no-install-recommends yarn libgbm-dev
# we need the correct version of node to install cypress
# use arvados-server install to get it (and all other dependencies)
# All this will then not need to be repeated by ./tools/run-integration-tests.sh
# so we are not doing double work.
cd /usr/src/arvados
go mod download
cd cmd/arvados-server
go install
~/go/bin/arvados-server install -type test
cd <your-wb2-directory>
yarn run cypress install
</pre>
Make sure you have both the arvados and arvados-workbench2 source trees available, then run the following commands from your workbench2 source tree (adjusting the arvados source tree path if necessary).
h3. Running Tests
Run the unit tests with:
<pre>
make unit-tests
# or
yarn test
</pre>
Run the cypress integration tests with:
<pre>
./tools/run-integration-tests.sh -i -a /usr/src/arvados
</pre>