h1. Port a Pipeline

Like any other tool, Arvados requires time to learn. Thus, we don't encourage using Arvados for initial development of analysis pipelines or exploratory research on small subsets of data, when each quick-and-dirty iteration takes minutes on a single machine. But for any computationally intensive work, Arvados offers a lot of benefits.

Okay, cool: provenance, reproducibility, easily scaling to gigabytes of data and mucho RAM, quickly evaluating existing pipelines like lobSTR.

But what if you want these benefits when running your own pipelines?

In other words, how do you **port a pipeline** to Arvados?

{{toc}}

h2. 1. Quick Way

First, do you just want to parallelize a single bash script?

Check if you can use @arv-run@. Take this @arv-run@ example, which searches multiple FASTA files in parallel and saves the results to Keep through shell redirection:

<pre>
$ arv-run grep -H -n GCTACCAAGTTT \< *.fa \> output.txt
</pre>

Or this example, which runs a shell script:

<pre>
$ echo 'echo hello world' > hello.sh
$ arv-run /bin/sh hello.sh
</pre>

(Lost? Check out http://doc.arvados.org/user/topics/arv-run.html )
26 | |||
27 | 1 | Nancy Ouyang | h3. 1.1 Install arv-run |
28 | |||
29 | 32 | Brett Smith | (You can skip this step if you're working on an Arvados shell node. @arv run@ is already installed and configured for you there.) |
30 | |||
31 | 1 | Nancy Ouyang | See: http://doc.arvados.org/sdk/python/sdk-python.html and http://doc.arvados.org/user/reference/api-tokens.html, or in short below: |
32 | <pre> |
||
33 | 32 | Brett Smith | $ sudo apt-get install python-pip python-dev python-yaml |
34 | 1 | Nancy Ouyang | $ sudo pip install --pre arvados-python-client |
35 | </pre> |
||
36 | (Lost? See http://doc.arvados.org/sdk/python/sdk-python.html ) |
||
37 | |||
38 | If you try to use arv run right now, it will complain about some settings your missing. To fix that, |
||
39 | |||
40 | 13 | Nancy Ouyang | # Go to http://cloud.curoverse.com |
41 | 1 | Nancy Ouyang | # Login with any Google account (you may need to click login a few times if you hit multiple redirects from Google) |
42 | # Click in the upper right on your account name -> Manage Account |
||
43 | 15 | Nancy Ouyang | ... !{height:10em}manage_account.png!:manage_account.png |
44 | 1 | Nancy Ouyang | # Optional: While you're here, click "send request for shell access", since that will give you shell access to a VM with all of the Arvados tools pre-installed. |
45 | 14 | Nancy Ouyang | 1) !{height:10em}send_request.png!:send_request.png 2) !{height:10em}request_sent.png!:request_sent.png 3) !{height:10em}access_granted.png!:access_granted.png |
46 | 12 | Nancy Ouyang | # Copy the lines of text into your terminal, something like |
47 | 1 | Nancy Ouyang | <pre> |
48 | HISTIGNORE=$HISTIGNORE:'export ARVADOS_API_TOKEN=*' |
||
49 | export ARVADOS_API_TOKEN=sekritlongthing |
||
50 | 12 | Nancy Ouyang | export ARVADOS_API_HOST=qr1hi.arvadosapi.com |
51 | 1 | Nancy Ouyang | unset ARVADOS_API_HOST_INSECURE |
52 | 17 | Nancy Ouyang | </pre> ... !{height:5em}terminal_ssh.png!:terminal_ssh.png |
53 | 22 | Nancy Ouyang | # If you want this to persist across reboot, add the above lines to @~/.bashrc@ or your @~/.bash_profile@ |
54 | 1 | Nancy Ouyang | |
55 | (Lost? See http://doc.arvados.org/user/reference/api-tokens.html ) |
||
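
To double-check that the settings took effect in your current shell, a quick sanity check might look like this (the values shown are just the placeholders from above):

<pre>
$ env | grep ARVADOS_
ARVADOS_API_TOKEN=sekritlongthing
ARVADOS_API_HOST=qr1hi.arvadosapi.com
</pre>

If both variables show up, @arv-run@ should stop complaining about missing settings.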
56 | |||
57 | h3. 1.2 Submit job to Arvados |
||
58 | |||
59 | First, check: Does your command work locally? |
||
60 | |||
61 | 25 | Nancy Ouyang | $ grep -H -n TGGAAGT *.fa |
62 | 1 | Nancy Ouyang | |
63 | 25 | Nancy Ouyang | ... !{width:20em}grep-fasta.png!:grep-fasta.png |
64 | |||
65 | (If you want to follow along and don't have fasta files -- use the ones here: https://workbench.qr1hi.arvadosapi.com/collections/qr1hi-4zz18-0o2bt8216d7trrw) |
||
66 | |||
67 | 1 | Nancy Ouyang | If so, then submit it to arvados using @arv run@ |
68 | |||
69 | 25 | Nancy Ouyang | $ arv-run grep -H -n TGGAAGT \< *.fa \> output.txt |
70 | 1 | Nancy Ouyang | |
71 | 21 | Nancy Ouyang | * This bash command stores the results as @output.txt@ |
72 | 25 | Nancy Ouyang | * Note that due to the particulars of grep, Arvados will report this pipeline as **failed** if grep does not find anything, and no output will appear on Arvados |
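
Why can "no matches" count as a failure? @grep@ exits with a nonzero status when it finds nothing, and Crunch treats a nonzero exit status as a failed task. A quick local illustration (plain bash, nothing Arvados-specific; @tiny.fa@ is a made-up file):

<pre>
$ printf '>seq1\nAAAACCCC\n' > tiny.fa      # a FASTA file that doesn't contain the pattern
$ grep -H -n TGGAAGT tiny.fa > output.txt
$ echo $?                                   # 1 means "nothing found" -- a nonzero exit, hence a failed task
1
</pre>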

Your dataset is uploaded to Arvados if it isn't there already (which may take a while if it's a large dataset), your @grep@ job is submitted to run on the Arvados cluster, and you get the results in a few minutes (stored inside @output.txt@ in Arvados). If you go to Workbench at http://cloud.curoverse.com, you will see the pipeline running. It may take a few minutes for Arvados to spool up a node, provision it, and run your job. The robots are working hard for you -- grab a cup of coffee.

(Lost? See http://doc.arvados.org/user/topics/arv-run.html )

h3. 1.3 However

If your pipeline looks more like this...

... !{width: 50%}https://arvados.org/attachments/download/428/provenance_graph_full.png!:https://arvados.org/attachments/download/428/provenance_graph_full.png
... _yes, that is a screenshot of an actual pipeline graph auto-generated by Arvados_

...then @arv-run@ is not powerful enough. Here we gooooo.

h2. 2. In Short

**Estimated reading time: 1 hour.**

You must write a **pipeline template** that describes your pipeline to Arvados.

h3. 2.1 VM (Virtual Machine) Access

Note: You'll need the Arvados set of command-line tools to follow along. The easiest way to get started is to get access to a Virtual Machine (VM) with all the tools pre-installed.

# Go to http://cloud.curoverse.com
# Log in with a Google account (you may need to click login a few times if you hit multiple redirects)
# Click in the upper right on your account name -> Manage Account
# Hit the "Request shell access" button under Manage Account in Workbench.

h3. 2.2 Pipeline Template Example

Here is what a simple pipeline template looks like, where the output of program A is provided as input to program B. We'll explain what it all means shortly, but first, don't worry -- you'll never be creating a pipeline template from scratch. You'll always copy and modify an existing boilerplate one (yes, a template for the pipeline template! :])

**pipelinetemplate.json**
<pre>
{
    "name": "Tiny Bash Script",
    "components": {
        "Create Two Files": {
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/createtwofiles.sh"
                ]
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        },
        "Merge Files": {
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/mergefiles.sh",
                    "$(input)"
                ],
                "input": {
                    "output_of": "Create Two Files"
                }
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        }
    }
}
</pre>

h2. 3. Simple and Sweet Port-a-Pipeline Example

Okay, let's dig into what's going on.

h3. 3.1 The Setup

We'll port an artificially simple pipeline which involves just two short bash scripts, described as "A" and "B" below:

**Script A. Create two files**
Input: nothing
Output: two files (@out1.txt@ and @out2.txt@)

**Script B. Merge two files into a single file**
Input: output of step A
Output: a single file (@output.txt@)

Or visually (ignore the long strings of gibberish in the rectangles for now):

... !{height:30em}choose_inputs-small.png!:choose_inputs-small.png

Here's what we've explained so far in the pipeline template:

**pipelinetemplate.json**
{
    **"name": "Tiny Bash Script",**
    "components": {
        **"Create Two Files": {**
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/ *createtwofiles.sh* "
                ]
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        },
        **"Merge Files": {**
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/ *mergefiles.sh* ",
                    "$(input)"
                ],
                **"input": {**
                    **"output_of": "Create Two Files"**
                }
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        }
    }
}

h3. **3.2 arv-what?**

Before we go further, let's take a quick step back. Arvados consists of two parts.

**Part 1. Keep** - I have all your files in the cloud!

You can access your files through your browser, using **Workbench**, or using the Arvados command-line (CLI) tools (link: http://doc.arvados.org/sdk/cli/index.html ).

Visually, in Workbench, the built-in Arvados web interface, this looks like
... !{height:15em}port-a-pipeline-workbench-collection.png!:port-a-pipeline-workbench-collection.png

Or via the command-line interface:
... !{height:10em}CLI-keep.png!:CLI-keep.png
209 | |||
210 | |||
211 | 1 | Nancy Ouyang | **Part 2. Crunch** - I run all your scripts in the cloud! |
212 | |||
213 | Crunch both dispatches jobs and provides version control for your pipelines. |
||
214 | |||
215 | You describe your workflow to Crunch using **pipeline templates**. Pipeline templates describe a pipeline ("workflow"), the type of inputs each step in the pipeline requires, and . You provide a high-level description of how data flows through the pipeline—for example, the outputs of programs A and B are provided as input to program C—and let Crunch take care of the details of starting the individual programs at the right time with the inputs you specified. |
||
216 | |||
217 | 28 | Nancy Ouyang | ... !{width:20em}provenance_graph_detail.png!:provenance_graph_detail.png |
218 | ... _Each task starts when all its inputs have been created_ |
||
219 | 1 | Nancy Ouyang | |
220 | Once you save a pipeline template in Arvados, you run it by creating a pipeline instance that lists the specific inputs you’d like to use. The pipeline’s final output(s) will be saved in a project you specify. |
||
221 | |||
222 | Concretely, a pipeline template describes |
||
223 | |||
224 | * **data inputs** - specified as Keep content addresses |
||
225 | * **job scripts** - stored in a Git version control repository and referenced by a commit hash |
||
226 | * **parameters** - which, along with the data inputs, can have default values or can be filled in later when the pipeline is actually run |
||
227 | * **the execution environment** - stored in Docker images and referenced by Docker image name |
||
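
For instance, instead of hard-wiring @$(input)@ to the output of another step, a component can declare a user-supplied data input in its @script_parameters@. A rough sketch of what such a fragment might look like (the @required@/@dataclass@/@default@ fields follow the pipeline template documentation at http://doc.arvados.org/user/tutorials/running-external-program.html ; the locator is a made-up placeholder):

<pre>
"script_parameters": {
    "input": {
        "required": true,
        "dataclass": "Collection",
        "default": "0123456789abcdef0123456789abcdef+1234"
    }
}
</pre>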
228 | |||
229 | **What is Docker**? Docker allows Arvados to replicate the execution environment your tools need. You install whatever bioinformatics tools (bwa-mem, vcftools, etc.) you are using inside a Docker image, upload it to Arvados, and Arvados will recreate your environment for computers in the cloud. |
||
230 | |||
231 | 34 | Bryan Cosca | **Protip:** Install stable external tools in Docker. Put your own scripts in a Git repository. This is because each docker image is about 1-5 GB, so each new docker image takes a while to upload (30 minutes) if you are not using Arvados on a local cluster. In the future, we hope to use small diff files describing just the changes made to Docker image instead of the full Docker image. [Last updated 19 Feb 2015] |
232 | 1 | Nancy Ouyang | |
h3. 3.3 In More Detail

Alright, let's put that all together.

**pipelinetemplate.json**
{
    "name": "Tiny Bash Script",
    "components": {
        "Create Two Files": {
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/createtwofiles.sh" **#[1]**
                ]
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        },
        "Merge Files": {
            "script": "run-command",
            "script_version": "master",
            "repository": "nancy",
            "script_parameters": {
                "command": [
                    "$(job.srcdir)/crunch_scripts/mergefiles.sh", **#[2]**
                    "$(input)"
                ],
                "input": {
                    "output_of": "Create Two Files" **#[3]**
                }
            },
            "runtime_constraints": {
                "docker_image": "nancy/cgatools-wormtable"
            }
        }
    }
}

**Explanation**

[1] **$(job.srcdir)** references the git repository "in the cloud". Even though **run-command** is in nancy/crunch_scripts/ and is "magically found" by Arvados, INSIDE run-command you can't reference other files in the same repo as run-command without this magic variable.

Any output files created by this run-command step will be automagically stored in Keep as an auto-named collection (which you can think of as a folder for now).

[2] Okay, so how does the next script know where to find the output of the previous job? run-command keeps track of the collections it has created, so we can feed that in as an argument to our next script. In this "command" section under "run-command", you can think of the commas as spaces. Thus, what this line is saying is "run mergefiles.sh on the output of the previous step", or

<pre>
$ mergefiles.sh [directory with output of previous command]
</pre>

[3] Here we set the variable "input" to point to the directory with the output of the previous command, "Create Two Files".

(Lost? Try the hands-on example in the next section, or read more detailed documentation on the Arvados website:

* http://doc.arvados.org/user/tutorials/running-external-program.html
* http://doc.arvados.org/user/topics/run-command.html
* http://doc.arvados.org/api/schema/PipelineTemplate.html )
285 | |||
286 | h3. 3.4 All hands on deck! |
||
287 | |||
288 | Okay, now that we know the rough shape of what's going on, let's get our hands dirty. |
||
289 | |||
290 | 2 | Nancy Ouyang | *From your local machine, login to Arvados virtual machine* |
291 | 1 | Nancy Ouyang | |
292 | 4 | Nancy Ouyang | Single step: |
293 | 1 | Nancy Ouyang | |
294 | 4 | Nancy Ouyang | nrw@ *@nrw-local@* $ ssh nancy@lightning-dev4.shell.arvados |
295 | |||
296 | 1 | Nancy Ouyang | (Lost? See "SSH access to machine with Arvados commandline tools installed" http://doc.arvados.org/user/getting_started/ssh-access-unix.html ) |
297 | |||
298 | **In VM, create pipeline template** |
||
299 | |||
300 | A few steps: |
||
301 | |||
302 | 30 | Nancy Ouyang | nancy@ *@lightning-dev4.qr1hi@* :~$ arv create pipeline_template |
303 | 1 | Nancy Ouyang | Created object qr1hi-p5p6p-3p6uweo7omeq9e7 |
304 | $ arv edit qr1hi-p5p6p-3p6uweo7omeq9e7 #Create the pipeline template as described above! [[Todo: concrete thing]] |
||
305 | |||
306 | (Lost? See "Writing a pipeline template" http://doc.arvados.org/user/tutorials/running-external-program.html ) |
||
307 | |||
308 | *In VM, set up git repository with run_command and our scripts* |
||
309 | |||
310 | 33 | Brett Smith | A few steps: |
311 | 1 | Nancy Ouyang | |
312 | 2 | Nancy Ouyang | $ mkdir @~@/projects |
313 | $ cd @~@/projects |
||
314 | 33 | Brett Smith | ~/projects $ git clone git@git.qr1hi.arvadosapi.com:nancy.git |
315 | 1 | Nancy Ouyang | |
316 | 30 | Nancy Ouyang | (Lost? Find your own git URL by going to https://workbench.qr1hi.arvadosapi.com/manage_account ) |
317 | 1 | Nancy Ouyang | |
318 | 2 | Nancy Ouyang | ⤷Copy run_command & its dependencies into this crunch_scripts |
319 | 33 | Brett Smith | $ git clone https://github.com/curoverse/arvados.git |
320 | 1 | Nancy Ouyang | |
321 | 2 | Nancy Ouyang | (Lost? Visit https://github.com/curoverse/arvados ) |
322 | |||
323 | @ @$ cd ./nancy |
||
324 | *@~/projects/nancy@* $ mkdir crunch_scripts |
||
325 | *@~/projects/nancy@* $ cd crunch_scripts |
||
326 | *@~/projects/nancy/crunch_scripts@* $ cp @~@/projects/arvados/crunch_scripts/run_command . #trailing dot! |
||
327 | 37 | Joshua Randall | ~/projects/nancy/crunch_scripts$ cp -r ~/projects/arvados/crunch_scripts/crunchutil . #trailing dot! |
328 | 2 | Nancy Ouyang | |
329 | @ @$ cd ~/projects/nancy/crunch_scripts |
||
330 | |||
331 | @ @$ vi createtwofiles.sh |
||
332 | ⤷ $cat createtwofiles.sh |
||
333 | #!/bin/bash |
||
334 | echo "Hello " > out1.txt |
||
335 | echo "Arvados!" > out2.txt |
||
336 | |||
337 | @ @$ vi mergefiles.sh |
||
338 | 1 | Nancy Ouyang | ⤷$cat mergefiles.sh |
339 | 5 | Nancy Ouyang | #!/bin/bash *#[1]* |
340 | PREVOUTDIR=$1 *#[2]* |
||
341 | echo $TASK_KEEPMOUNT/by_id/$PREVOUTDIR *#[3]* |
||
342 | 1 | Nancy Ouyang | cat $TASK_KEEPMOUNT/by_id/$PREVOUTDIR/*.txt > output.txt |
343 | 33 | Brett Smith | |
344 | 5 | Nancy Ouyang | ⤷ *Explanations* |
345 | 6 | Nancy Ouyang | *[1]* We use the @#!@ syntax to let bash know what to execute this file with |
346 | 5 | Nancy Ouyang | |
347 | ⤷To find the location of any particular tool, try using **which** |
||
348 | $ which python |
||
349 | /usr/bin/python |
||
350 | $ which bash |
||
351 | /bin/bash |
||
352 | 33 | Brett Smith | |
353 | 29 | Nancy Ouyang | *[2]* Here we give a human-readable name, @PREVOUTDIR@, to the first argument (referenced using the dollar-sign syntax ala @$1@), given to @mergefiles.sh@, which (referring back to the pipeline template) we defined as the directory containing the output of the previous job (which ran @createtwofiles.sh@). |
354 | |||
355 | (Lost about @$1@? Google "passing arguments to the bash script"). |
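
If positional arguments are new to you, here's a tiny stand-alone illustration (plain bash; the script name is made up):

<pre>
$ cat show_first_arg.sh
#!/bin/bash
FIRSTARG=$1                       # give the first command-line argument a readable name
echo "first argument: $FIRSTARG"
$ bash show_first_arg.sh hello
first argument: hello
</pre>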

*[3]* Using the environment variable @TASK_KEEPMOUNT@ allows us not to make assumptions about where **Keep** is mounted. @TASK_KEEPMOUNT@ will be replaced by Arvados automatically with the location where **Keep** is mounted on each worker node. (Lost? Visit http://doc.arvados.org/user/tutorials/tutorial-keep-mount.html )

<pre>$ chmod +x createtwofiles.sh mergefiles.sh # make these files executable</pre>

**Commit changes and push to remote**

A few steps:

<pre>
$ git status # check that everything looks ok
$ git add *
$ git commit -m "hello world-of-arvados scripts!"
$ git push
</pre>

**Create Docker image with Arvados command-line tools and other tools we want**

> *Note:* This section assumes that you have Docker installed and usable under your user account. However, because users with Docker access can defeat a lot of system security, it's not available on all Arvados shells. If your Arvados VM doesn't provide you access to Docker, you have two options. You can ask the site administrator to grant you access; or you can install Docker on your own GNU/Linux workstation and upload the image to Arvados from there. To learn how to do that, see the installation guides for "Docker Engine":https://docs.docker.com/ and the "Arvados Python SDK":http://doc.arvados.org/sdk/python/sdk-python.html, which includes the @arv-keepdocker@ tool to upload an image.

A few steps:

<pre>
$ docker pull arvados/jobs
$ docker run -ti -u root arvados/jobs /bin/bash
</pre>

Now we are inside the Docker container.

<pre>
root@4fa648c759f3:/# apt-get update
</pre>

⤷ In the Docker image, install external tools that you don't expect to need to update often. For instance, we can install the wormtable python tool in this Docker image:

<pre>
root@4fa648c759f3:/# apt-get install libdb-dev
root@4fa648c759f3:/# pip install wormtable
</pre>

⤷ Note: If you're installing from binaries, you should either
1) Install in existing places where bash looks for programs (e.g. install in /usr/local/bin/cgatools). To see where bash looks, inspect the PATH variable:
<pre>
root@4fa648c759f3:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
</pre>
2) If you put them in a custom directory, remember to reference them by their full path in your scripts (e.g. spell out /home/nrw/local/bin/cgatools). Arvados will not respect modifying the $PATH variable via the ~/.bashrc configuration file in the Docker image.
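
For example, option 1 might look like this inside the container (a sketch -- the @cgatools@ binary and its path are placeholders for however you obtained your tool):

<pre>
root@4fa648c759f3:/# cp cgatools-1.8.0/bin/cgatools /usr/local/bin/   # put the binary somewhere already on PATH
root@4fa648c759f3:/# which cgatools
/usr/local/bin/cgatools
</pre>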
396 | |||
397 | 6 | Nancy Ouyang | (Lost? See http://doc.arvados.org/user/topics/arv-docker.html ) |
398 | 33 | Brett Smith | |
399 | 6 | Nancy Ouyang | root@4fa648c759f3:/# exit |
400 | 2 | Nancy Ouyang | |
401 | 1 | Nancy Ouyang | *Commit Docker image* |
402 | <pre> |
||
403 | $ docker commit 4fa648c759f3 nancy/cgatools-wormtable #Label the image thoughtfully |
||
404 | 2 | Nancy Ouyang | $ #For instance here I used the name of key tools I installed: cgatools & wormtable |
405 | 1 | Nancy Ouyang | </pre> |
406 | 2 | Nancy Ouyang | |
407 | 6 | Nancy Ouyang | *Upload Docker image from your VM to Keep* |
408 | 33 | Brett Smith | |
409 | > *Note:* @arv-keepdocker@ saves the Docker image in @~/.cache/arvados/docker@ before uploading, so it can resume in case of interruption. If the @/home@ partition is not big enough to hold the Docker image, you may get strange I/O errors about pipe closed or stdin full. You can prevent this by making @~/.cache/arvados/docker@ a symlink to another directory you control where enough space is available. An example command for that might look like: @ln -s /scratch/MYNAME/docker ~/.cache/arvados/docker@ |
||
410 | |||
411 | 2 | Nancy Ouyang | <pre> |
412 | 33 | Brett Smith | $ arv-keepdocker nancy/cgatools-wormtable #this takes a few minutes |
413 | $ arv-keepdocker #lists docker images in the cloud, so you can double-check what was uploaded </pre> |
||
414 | 2 | Nancy Ouyang | |
**Run this pipeline!**
Go to Workbench and hit **Run**.
<pre>$ firefox https://workbench.qr1hi.arvadosapi.com</pre>
[!image: workbench with 'tiny bash script']
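
If you'd rather stay in the terminal, you can also start a pipeline instance from your saved template with the @arv@ CLI. A sketch (substitute the template UUID that @arv create pipeline_template@ printed earlier):

<pre>
$ arv pipeline run --template qr1hi-p5p6p-3p6uweo7omeq9e7
</pre>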

*Note: If no worker nodes are already provisioned, your job may take up to 10 minutes to queue up and start.* Behind the scenes, Arvados is requesting compute nodes for you, installing your Docker image, and otherwise setting up the environment on those nodes. Then Arvados will be ready to run your job. Be patient -- the wait may seem frustrating for a trivial pipeline like this, but Arvados really excels at handling long and complicated pipelines with built-in data provenance and pipeline reproducibility.

h3. 3.5 Celebrate

Whew! Congratulations on porting your first pipeline to Arvados! Check out http://doc.arvados.org/user/topics/crunch-tools-overview.html to learn more about the different ways to port pipelines to Arvados and how to take full advantage of Arvados's features, like restarting pipelines from where they failed instead of from the beginning.

h2. 4. Debugging Tips and Pro-Tips

h3. **4.1 Pro-tips**

**Keep mounts are read-only right now. [19 March 2015]**
Need to 1) make some temporary directories or 2) change directories away from wherever you started out in, but still upload the results to Keep?

For 1, explicitly use the $HOME directory and make the temporary files there.
For 2, capture the present working directory, $(pwd), at the beginning of your script to record the directory where run-command will look for files to upload to Keep.

Here's an example:
<pre>
$ cat mergefiles.sh
TMPDIR=$HOME/tmp # directory to make temporary files in
OUTDIR=$(pwd) # directory to put output files in
mkdir -p $TMPDIR
touch $TMPDIR/sometemporaryfile.txt # this file is deleted when the worker node is stopped
touch $OUTDIR/someoutputfile.txt # this file will be uploaded to Keep by run-command
</pre>

* Make sure you point to the right repository, your own or arvados.
* Make sure you pin the versions of your Python SDK, Docker image, and script, or you will not get reproducibility (see the fragment after this list).
* If you have a file you want to use as a crunch script, make sure it's in a crunch_scripts directory; otherwise, Arvados will not find it. I.e. ~/path/to/git/repo/crunch_scripts/foo.py
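
For instance, instead of the floating references used in the walkthrough, a pinned component would name a specific commit (and keep using the exact Docker image you uploaded). A rough sketch, with a made-up commit hash:

<pre>
"script_version": "5f1d8a2c0d1e9f3b7a6c4e2d8b0a9c7e5f3d1b2a",
"runtime_constraints": {
    "docker_image": "nancy/cgatools-wormtable"
}
</pre>

Run @git log -1@ in your crunch_scripts repository to find the commit hash to pin, and @arv-keepdocker@ (with no arguments) to see exactly which Docker images you've uploaded.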
450 | |||
451 | h3. 4.2 Common log errors and reasons for pipelines to fail |
||
452 | |||
453 | Todo. |
||
454 | |||
455 | h3. 4.3 Miscellaneous Notes |
||
456 | |||
457 | Other ways to avoid the read-only keep mount problem is to use task.vwd which uses symlinks from the output directory which is writable to the colelction in keep. If you can change your working directory to the output directory and do all your work there, you'll avoid the keep read only issue. (lost? see http://doc.arvados.org/user/topics/run-command.html ) |
||
458 | |||
459 | When indexing, i.e. tabix, bwa index, etc. The index file tends to be created in the same directory as your fastq file. In order to avoid this, use ^. There is no way to send the index file to another directory. If you figure out a way, please tell me. |
||
460 | |||
461 | "bash" "-c" could be your friend, it works sometimes, sometimes it doesnt. I don't have a good handle on why this works sometimes. |
||
462 | |||
463 | if you're trying to iterate over >1 files using the task.foreach, its important to know that run-command uses a m x n method of making groups. I dont think I can explain it right now, but it may not be exactly what you want and you can trip over it. (lost? see http://doc.arvados.org/user/topics/run-command.html ) |
||
464 | |||
465 | When trying to pair up reads, its hard to use run-command. You have to manipulate basename and hope your file names are foo.1 foo.2. base name will treat the group as foo (because you'll regex the groups as foo) and you can glob for foo.1 and foo.2. but if the file names are foo_1 and foo_2, you cant regex search them for foo becuase you'll get both names into a group and you'll be iterating through both of them twice, because of m x n. |
||
466 | |||
467 | Your scripts need to point to the right place where the file is. Its currently hard to figure out how to grep the file names, you have to do some magic through the collection api. |
||
468 | |||
469 | h2. 5. Learn More |
||
470 | |||
471 | To learn more, head over to the Arvados User Guide documentation online: http://doc.arvados.org/user/ |