pacificbiosciences / pypeflow

a simple lightweight workflow engine for data analysis scripting

License: BSD 3-Clause Clear License

pypeflow's Introduction

What is pypeFLOW

pypeFLOW is a lightweight and reusable make/flow-like data-processing library written in Python.

Most bioinformatics analyses, and data analyses in general, involve various steps: combining data files, transforming files between formats, and calculating statistics with a variety of tools. Ian Holmes has a great summary of, and opinions on, bioinformatics workflows at http://biowiki.org/BioinformaticsWorkflows. Interestingly, such an analysis workflow is quite similar to building software without an IDE. Using a "makefile" to manage a bioinformatics analysis workflow is actually great for producing a reproducible and reusable analysis procedure. Combined with a proper version control tool, one can manage a divergent set of data and tools over the lifetime of a project, especially when there are complicated dependencies between the data, the tools, and the customized code for the analysis tasks.

However, using "make" and a "makefile" implies that every data-analysis step is carried out by a command line tool. If you have customized analysis tasks, you have to write scripts and turn them into command line tools. In my personal experience, it is convenient to bypass that burden and to combine those quick and simple steps in a single script. The only caveat is that if an analyst does not save the results of the intermediate steps, he or she has to repeat the computation from the beginning for every step, which wastes a lot of computation cycles and personal time. The solution is simple: just as in the traditional software build process, one has to track and analyze the dependencies and reprocess only the parts necessary to produce the most up-to-date final results.
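
The core idea, sketched independently of the pypeFLOW API (all names below are hypothetical): re-run a step only when one of its inputs is newer than its outputs, so saved intermediate results are reused.

import os

def needs_rebuild(inputs, outputs):
    # A step is out of date if any output is missing or older than the newest input.
    if not all(os.path.exists(fn) for fn in outputs):
        return True
    newest_input = max(os.path.getmtime(fn) for fn in inputs)
    oldest_output = min(os.path.getmtime(fn) for fn in outputs)
    return newest_input > oldest_output

def run_step(inputs, outputs, func):
    # Like a make rule: recompute only the parts needed for up-to-date results.
    if needs_rebuild(inputs, outputs):
        func(inputs, outputs)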

General Design Principles

  • Explicitly model data and task dependencies
  • Support a declarative programming style within Python while keeping what imperative programming does best
  • Utilize an RDF metadata framework
  • Keep it simple if possible

Features

  • Schedule and run multiple concurrent tasks
  • Support tasks as simple shell scripts (with cluster job submission in mind)
  • Reasonably simple interface for declarative programming

General Installation

pypeFLOW uses the standard Python setup.py for installation:

python setup.py install

Once installed, brief documentation can be generated by:

cd doc
make html

The generated Sphinx HTML documentation can be viewed by pointing your web browser to _build/html/index.html in the doc directory.

DISCLAIMER

THIS WEBSITE AND CONTENT AND ALL SITE-RELATED SERVICES, INCLUDING ANY DATA, ARE PROVIDED "AS IS," WITH ALL FAULTS, WITH NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, NON-INFRINGEMENT OR FITNESS FOR A PARTICULAR PURPOSE. YOU ASSUME TOTAL RESPONSIBILITY AND RISK FOR YOUR USE OF THIS SITE, ALL SITE-RELATED SERVICES, AND ANY THIRD PARTY WEBSITES OR APPLICATIONS. NO ORAL OR WRITTEN INFORMATION OR ADVICE SHALL CREATE A WARRANTY OF ANY KIND. ANY REFERENCES TO SPECIFIC PRODUCTS OR SERVICES ON THE WEBSITES DO NOT CONSTITUTE OR IMPLY A RECOMMENDATION OR ENDORSEMENT BY PACIFIC BIOSCIENCES.

pypeflow's People

Contributors

bredelings, cdunn2001, cschin, pb-dseifert, pb-isovic, pb-jchin, raj76

pypeflow's Issues

Create multi-threaded, blocking process-watcher

Given a string like one of these (from PacificBiosciences/FALCON-pbsmrtpipe#31):

  • qsub -S /bin/bash -sync y -V -q production -N ${JOB_ID} \\\n -o "${STDOUT_FILE}" \\\n -e "${STDERR_FILE}" \\\n -pe smp ${NPROC} \\\n "${CMD}"
  • runjmscmd ... (same variables)
  • bash "${CMD}" > ${STDOUT_FILE} 2> ${STDERR_FILE}

We need a process-watcher which uses a thread per job. Each thread will block on the qsub call. (A sketch of this design follows after the lists below.)

We need to ensure that the calls are killed and the whole pwatcher is killed when the main program dies.

  • Submit
    • Start a thread for each job.
    • When a thread dies, it must update the State using a Lock. Or it must use a queue or notification.
  • Query
    • With a lock, examine the current State.
  • Delete
    • Kill all on any exit, including Ctrl-C.

These will wait for another day:

  • Delete
    • Kill the corresponding running thread and (with a lock) update the State, carefully.
    • Kill all on any exit, including Ctrl-C.
  • Restart
    • Upon restart, ensure that finished tasks are not re-run, even if they failed before.
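
A minimal sketch of the thread-per-job, blocking design described above. All names here are hypothetical illustrations, not the actual pwatcher interface; the real implementation would also need the Restart behavior listed above.

import subprocess
import threading

class BlockingWatcher(object):
    # Sketch only: one thread per job, each blocking on its submit call
    # (e.g. 'qsub -sync y ...'), with a Lock protecting the shared State.
    def __init__(self):
        self.lock = threading.Lock()
        self.state = {}   # jobid -> 'RUNNING' or 'EXIT <rc>'
        self.procs = {}   # jobid -> Popen handle, so we can kill on exit

    def submit(self, jobid, cmd):
        def run():
            proc = subprocess.Popen(cmd, shell=True)
            with self.lock:
                self.procs[jobid] = proc
                self.state[jobid] = 'RUNNING'
            rc = proc.wait()  # the thread blocks here until the job finishes
            with self.lock:
                self.state[jobid] = 'EXIT {}'.format(rc)
        thread = threading.Thread(target=run)
        thread.daemon = True  # threads must not outlive the main program
        thread.start()

    def query(self, jobid):
        with self.lock:
            return self.state.get(jobid, 'UNKNOWN')

    def delete_all(self):
        # Kill everything on any exit, including Ctrl-C; call from a finally block.
        with self.lock:
            for proc in self.procs.values():
                if proc.poll() is None:
                    proc.kill()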

Drop html5lib, used via rdflib

In the Python Package Index (PyPI), html5lib was recently updated from 0.9999999 to 0.999999999. That worked fine in most places, but it can no longer be installed by pip in TravisCI. So we have pinned the previous version. Unfortunately, that can break users who already have the later version installed in their site-packages/. So now we have an even better reason to reduce our dependency on rdflib.

One idea: Copy rdflib, rename our copy of the library, and delete the bits we do not need. The bits we need will not import html5lib.

job_start.py must be "executable"

We still need to ensure that job_start.py is installed with its executable bit set, or we will need to find another way to install it.
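
One way to get the executable bit, assuming job_start.py can be listed as a script in setup.py (the path below is hypothetical): distutils/setuptools install entries in scripts= with execute permission. A chmod step at install time would be an alternative.

from setuptools import setup

setup(
    name='pypeflow',
    # ... existing arguments unchanged ...
    # Files listed here are installed with the executable bit set.
    scripts=['pwatcher/mains/job_start.py'],  # hypothetical location
)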

Job completion not being detected

I updated FALCON-integrate a few days ago to pull in all related submodules, because of the recent pypeFLOW fix to the job_queue variable for SLURM. However, now when a job completes in SLURM, it is still reported as "RUNNING" in the logs.

2016-12-02 23:00:39,848 - pypeflow.simple_pwatcher_bridge - INFO - sleep 10
2016-12-02 23:00:49,859 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 6
2016-12-02 23:00:49,859 - pwatcher.fs_based - DEBUG - query(which='list', jobids=<6>)
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P272970845b783b
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-Pf43bab5f570d27
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P5c2c511e4cdb43
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P3ad1be394d1e22
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P93d2d6890055a6
2016-12-02 23:00:49,860 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-Pfcd2bd1698688a
2016-12-02 23:00:49,860 - pypeflow.simple_pwatcher_bridge - INFO - sleep 10
2016-12-02 23:00:59,871 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 6
2016-12-02 23:00:59,871 - pwatcher.fs_based - DEBUG - query(which='list', jobids=<6>)
2016-12-02 23:00:59,872 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P272970845b783b
2016-12-02 23:00:59,872 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-Pf43bab5f570d27
2016-12-02 23:00:59,872 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P5c2c511e4cdb43
2016-12-02 23:00:59,872 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P3ad1be394d1e22
2016-12-02 23:00:59,873 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P93d2d6890055a6
2016-12-02 23:00:59,873 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-Pfcd2bd1698688a
2016-12-02 23:00:59,873 - pypeflow.simple_pwatcher_bridge - INFO - sleep 10
2016-12-02 23:01:09,883 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 6

However, at this point the subjobs had already finished running. If I go to one of the job directories, assembly/mypwatcher/jobs/P5c2c511e4cdb43, I can see from the stdout and stderr files that the job finished.

...clip..
Comparing raw_reads.51 to raw_reads.44

   Capping mutual k-mer matches over 10000 (effectively -t100)
   Hit count = 480,279,248
   Highwater of 20.27Gb space

===========================================================================
Job Finished

Name                : P5c2c511e4cdb43
User                : hansvgdub
Partition           : bigmemh
Nodes               : bigmem3
Cores               : 8
State               : TIMEOUT
Submit              : 2016-12-01T13:44:26
Start               : 2016-12-01T13:44:27
End                 : 2016-12-01T22:44:46

What would be the best way to further debug this issue?

Bad argument to sbatch with SLURM

After installing falcon through FALCON-integrate and successfully running 'make test', I tried to run the same test with SLURM. This failed, but the log showed that a bad "-q arg" argument was being supplied to sbatch.

I was able to get synth0 to run to completion by removing this argument from the 'sge_cmd' in pwatcher/fs_based.py and pwatcher/network_based.py. However, I think that (assuming SLURM 'partitions' are analogous to SGE 'queues') the "-q" should simply be changed to "-p" for sbatch.

See also readme.slurm.md, which has a "-p" option, but not a "-q" option.
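
A sketch of the suggested change, assuming the submit command is assembled from a scheduler-specific queue flag (hypothetical helper, not the actual code in fs_based.py):

def queue_flag(job_type, job_queue):
    # SLURM calls them partitions and uses -p; SGE/Torque queues use -q.
    if job_type == 'slurm':
        return '-p {}'.format(job_queue)
    elif job_type in ('sge', 'torque', 'pbs'):
        return '-q {}'.format(job_queue)
    return ''

With that, sbatch would receive, for example, '-p production' instead of '-q production'.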

No git tags for each version

Looking at setup.py, the current version is 2.0.2, and the history would let us track back and find specific version bumps - for example, cefc4ac#diff-2eeaed663bd0d25b7e608891384b7298 jumped from 1.1.0 to 2.0.0.

However, without explicit tags, it is hard to be sure whether that commit itself represented the 2.0.0 release, or whether it was a slightly later commit.

`jobStatusMap` inside `_graphvizDot()` not initialized

jobStatusMap used to be an instance variable in controller.py. The recent refactoring made it a local variable inside _refreshTargets(). The local variable jobStatusMap inside _graphvizDot() therefore needs initialization now. This is a minor issue; I only use _graphvizDot() for some debugging.
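
A minimal illustration of the kind of fix implied, assuming _graphvizDot() only needs some per-task status for labeling nodes (the real signature in controller.py may differ):

def _graphvizDot(taskURLs, jobStatusMap=None):
    # jobStatusMap used to be an instance attribute; after the refactoring it
    # must either be passed in or default-initialized here.
    if jobStatusMap is None:
        jobStatusMap = dict((url, 'unknown') for url in taskURLs)
    # ... emit DOT nodes, labeling each task with jobStatusMap[url] ...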

network_based slow for job_type=local

On synth5k, the blocking pwatcher with job_type=local and job_queue=bash -C ${CMD} takes about 20 seconds; fs_based takes 1.5 minutes; network_based takes 3 minutes.

newly created directories not always available for remote jobs

This is similar to an earlier issue I commented on, and it still appears to be a problem. The wrapper script created by simple_pwatcher_bridge.py tries to cd {wdir}, and it fails because that directory has not yet propagated from the program's head node to the remote node the job runs on. This one has a very simple fix:

--- simple_pwatcher_bridge.py 2017-05-31 10:57:47.317485000 -0500
+++ simple_pwatcher_bridge.py 2017-05-31 10:58:59.469657000 -0500
@@ -350,6 +350,7 @@
rel_actual_script_fn = os.path.relpath(actual_script_fn, wdir)
wrapper = """#!/bin/sh
set -vex
+mkdir -p {wdir}
cd {wdir}
bash {rel_actual_script_fn}
touch {sentinel_done_fn}

mkdir -p will return success if the directory already exists, and the attempt to create the directory forces it to propagate to the node the job is running on.

DB to track files?

@pb-jchin wrote:

If we can add some metadata files (a thin database layer) to track the files people might be interested in, so they can figure out where those files are without depending on the file system, it could be useful.

There is pwatcher/state.py, but maybe you really want a forward link from the run-dir into pwatcher.
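
One possible shape for such a forward link: a small JSON file written into each run directory, pointing back at the pwatcher job directory. This is entirely hypothetical, not an existing pypeflow feature:

import json
import os

def write_pwatcher_link(run_dir, jobid, pwatcher_dir):
    # Record, next to the task's outputs, where pwatcher keeps its state for the job.
    link = {
        'jobid': jobid,
        'pwatcher_job_dir': os.path.join(pwatcher_dir, 'jobs', jobid),
    }
    with open(os.path.join(run_dir, 'pwatcher_link.json'), 'w') as out:
        json.dump(link, out, indent=2)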

FALCON qsub issues (part II)

After several weeks using other tools, I re-installed the most recent version of FALCON on my Torque/qsub system (installation is now a lot friendlier, thanks!), but I encountered problems with Torque.

I can run FALCON on the provided example synth0 using job_type = local, but FALCON fails when job_type = torque. As I did before, I changed the sge_option_... settings and job_type = torque in the file /FALCON-examples/run/synth0/fc_run.cfg as follows (I removed the commented lines below):

use_tmpdir = true
job_type = torque
#stop_all_jobs_on_failure = true
input_fofn = input.fofn
input_type = raw
genome_size = 5000
seed_coverage = 20
length_cutoff_pr = 1
sge_option_da = -l nodes=1:ppn=8:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
sge_option_la = -l nodes=1:ppn=2:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
sge_option_pda = -l nodes=1:ppn=8:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
sge_option_pla = -l nodes=1:ppn=2:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
sge_option_fc = -l nodes=1:ppn=24:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
sge_option_cns = -l nodes=1:ppn=8:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]
pa_concurrent_jobs = 32
cns_concurrent_jobs = 32
ovlp_concurrent_jobs = 32
pa_HPCdaligner_option =   -v -B4 -t50 -h1 -e.99 -w1 -l1 -s1000
ovlp_HPCdaligner_option = -v -B4 -t50 -h1 -e.99 -l1 -s1000
pa_DBsplit_option =   -a -x5 -s.065536
ovlp_DBsplit_option = -a -x5 -s50
falcon_sense_option = --output_multi --min_idt 0.70 --min_cov 1 --max_n_read 20000 --n_core 0
overlap_filtering_setting = --max_diff 10000 --max_cov 100000 --min_cov 1 --min_len 1 --bestn 1000 --n_core 0

and this is what happens:

[stelo@head FALCON-integrate]$ make test
make -C ./FALCON-make/ test
make[1]: Entering directory `/home/stelo/FALCON-integrate/FALCON-make'
make -C /home/stelo/FALCON-integrate/FALCON-examples test
make[2]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples'
python -c 'import pypeflow.common; print pypeflow.common'
<module 'pypeflow.common' from '/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/common.pyc'>
python -c 'import falcon_kit; print falcon_kit.falcon'
<CDLL '/home/stelo/FALCON-integrate/FALCON/ext_falcon.so', handle e181c0 at 2b4a54c677d0>
make run-synth0
make[3]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples'
git-sym update run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
-> in dir '/home/stelo/FALCON-integrate/FALCON-examples/.git/git-sym-local/links'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
git-sym show run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
. run/synth0/data/ref.fasta     .git-sym/synth0.ref.fasta
. run/synth0/data/synth0.fasta  .git-sym/synth0-circ-20.pb.fasta
git-sym check run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
cd run/synth0; fc_run.py fc_run.cfg logging.ini
2016-09-11 21:04:47,717[INFO] Setup logging from file "logging.ini".
2016-09-11 21:04:47,718[INFO] fc_run started with configuration fc_run.cfg
2016-09-11 21:04:47,719[INFO]  No target specified, assuming "assembly" as target
2016-09-11 21:04:48,616[INFO] # of tasks in complete graph: 1
2016-09-11 21:04:48,616[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
2016-09-11 21:04:48,619[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
2016-09-11 21:04:48,619[INFO] Running task from function task_make_fofn_abs_raw()
2016-09-11 21:04:48,619[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
2016-09-11 21:04:48,721[INFO] Queued 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_make_fofn_abs_raw' ...
2016-09-11 21:04:48,721[INFO] Success ('done'). Joining 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_make_fofn_abs_raw'...
2016-09-11 21:04:48,722[INFO] _refreshTargets() finished with no thread running and no new job to submit
2016-09-11 21:04:48,735[INFO] # of tasks in complete graph: 2
2016-09-11 21:04:48,735[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
2016-09-11 21:04:48,744[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
2016-09-11 21:04:48,745[INFO] Running task from function task_build_rdb()
2016-09-11 21:04:48,747[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/prepare_rdb.sh'
2016-09-11 21:04:48,747[INFO] jobid=J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9
2016-09-11 21:04:48,748[INFO] starting job Job(jobid='J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9', cmd='/bin/bash prepare_rdb.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads', options={'job_queue': None, 'sge_option': '-l nodes=1:ppn=8:nogpu,walltime=24:00:00,mem=30000mb -M [email protected]', 'job_type': None})
2016-09-11 21:04:48,749[CRITICAL] Any exception caught in RefreshTargets() indicates an unrecoverable error. Shutting down...
/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py:537: UserWarning:
            "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
            "! Please wait for all threads / processes to terminate !"
            "! Also, maybe use 'ps' or 'qstat' to check all threads,!"
            "! processes and/or jobs are terminated cleanly.        !"
            "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"

  warnings.warn(shutdown_msg)
2016-09-11 21:04:48,750[WARNING] #tasks=1, #alive=1
2016-09-11 21:04:48,752[WARNING] Now, #tasks=1, #alive=0
Traceback (most recent call last):
  File "/home/stelo/FALCON-integrate/fc_env/bin/fc_run.py", line 6, in <module>
    exec(compile(open(__file__).read(), __file__, 'exec'))
  File "/home/stelo/FALCON-integrate/FALCON/src/py_scripts/fc_run.py", line 5, in <module>
    main(sys.argv)
  File "/home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 576, in main
    main1(argv[0], args.config, args.logger)
  File "/home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 353, in main1
    setNumThreadAllowed=PypeProcWatcherWorkflow.setNumThreadAllowed)
  File "/home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 410, in run
    wf.refreshTargets([rdb_build_done])
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 548, in refreshTargets
    raise Exception('Caused by:\n' + tb)
Exception: Caused by:
Traceback (most recent call last):
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 523, in refreshTargets
    rtn = self._refreshTargets(task2thread, objs = objs, callback = callback, updateFreq = updateFreq, exitOnFailure = exitOnFailure)
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 657, in _refreshTargets
    numAliveThreads = self.thread_handler.alive(task2thread.values())
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/pwatcher_bridge.py", line 182, in alive
    result = watcher.run(**watcher_args)
  File "/home/stelo/FALCON-integrate/pypeFLOW/pwatcher/fs_based.py", line 683, in run
    return cmd_run(self.state, jobids, job_type, job_queue)
  File "/home/stelo/FALCON-integrate/pypeFLOW/pwatcher/fs_based.py", line 486, in cmd_run
    bjob = MetaJobTorque(mjob)
  File "/home/stelo/FALCON-integrate/pypeFLOW/pwatcher/fs_based.py", line 381, in __init__
    super(MetaJobTorque, self).__init__(mjob)
TypeError: object.__init__() takes no parameters

make[3]: *** [run-synth0] Error 1
make[3]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples'
make[2]: *** [test] Error 2
make[2]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples'
make[1]: *** [test] Error 2
make[1]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-make'
make: *** [test] Error 2

qsub jobs submission error

I have another problem with job submission. The job submission section is as follows, and my submit command is:
qsub -S /bin/bash -sync y -V
-q all.q
-N Assemble
-o Assembleoutput
-e AssembleSTDERR
-pe smp 10
-l h_vmem=1000M
/home/wuxiaopei/miniconda2/bin/fc_run.py /data/wuxiaopei/fc_run.cfg
Immediately, the error occurs:
[screenshot of the error attached]

I don't know how to deal with it. Can you tell me? Thank you very much.

FALCON qsub issues

This is a follow-up on an issue I opened in FALCON. I was finally able to run FALCON on the provided examples synth0 and ecoli using job_type = local, but I could not run it with job_type = torque. I decided first to see whether I could run synth0 on Torque, so I changed the sge_option_... settings, job_type = torque, and *_concurrent_jobs = 1 in the file ./FALCON-integrate/FALCON-examples/run/synth0/fc_run.cfg (I removed the commented lines before posting it below):

[General]
use_tmpdir = true
job_type = torque
input_fofn = input.fofn
input_type = raw
genome_size = 5000
seed_coverage = 20
length_cutoff_pr = 1
sge_option_da = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
sge_option_la = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
sge_option_pda = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
sge_option_pla = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
sge_option_fc = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
sge_option_cns = -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]
pa_concurrent_jobs = 1
cns_concurrent_jobs = 1
ovlp_concurrent_jobs = 1
pa_HPCdaligner_option =   -v -B4 -t50 -h1 -e.99 -w1 -l1 -s1000
ovlp_HPCdaligner_option = -v -B4 -t50 -h1 -e.99 -l1 -s1000
pa_DBsplit_option =   -a -x5 -s.065536
ovlp_DBsplit_option = -a -x5 -s50
falcon_sense_option = --output_multi --min_idt 0.70 --min_cov 1 --max_n_read 20000 --n_core 0
overlap_filtering_setting = --max_diff 10000 --max_cov 100000 --min_cov 1 --min_len 1 --bestn 1000 --n_core 0

This is what happens when I run make test using torque.

(fc_env) [stelo@head FALCON-integrate]$ make test
ln -sf ../../git-sym/git-sym /home/stelo/FALCON-integrate/fc_env/bin/git-sym
. /home/stelo/FALCON-integrate/fc_env/bin/activate; make -C ./FALCON-make/ test
make[1]: Entering directory `/home/stelo/FALCON-integrate/FALCON-make'
make -C /home/stelo/FALCON-integrate/FALCON-examples test
make[2]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples'
python -c 'import pypeflow.common; print pypeflow.common'
<module 'pypeflow.common' from '/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/common.pyc'>
python -c 'import falcon_kit; print falcon_kit.falcon'
<CDLL '/home/stelo/FALCON-integrate/FALCON/ext_falcon.so', handle e7b060 at 2b68a52b8650>
make run-synth0
make[3]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples'
git-sym update run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
-> in dir '/home/stelo/FALCON-integrate/FALCON-examples/.git/git-sym-local/links'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
git-sym show run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
. run/synth0/data/ref.fasta .git-sym/synth0.ref.fasta
. run/synth0/data/synth0.fasta  .git-sym/synth0-circ-20.pb.fasta
git-sym check run/synth0
git: 'check-ignore' is not a git command. See 'git --help'.
-> in dir 'run/synth0'
<- back to dir '/home/stelo/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/ref.fasta'
symlink: 'run/synth0/data/synth0.fasta'
cd run/synth0; fc_run.py fc_run.cfg logging.ini
[INFO] fc_run started with configuration fc_run.cfg
[INFO]  No target specified, assuming "assembly" as target
[INFO] # of tasks in complete graph: 1
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_make_fofn_abs_raw()
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_make_fofn_abs_raw' ...
[INFO] Success ('done'). Joining 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_make_fofn_abs_raw'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 2
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_build_rdb()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/prepare_rdb.sh'
[INFO] jobid=J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9
[INFO] starting job Job(jobid='J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9', cmd='/bin/bash prepare_rdb.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J50b0079619ed1e -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9.bash
102909.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J50b0079619ed1e0485d3adbd235ca121ef78997c49605faf74557e25fdf6d9f9', cmd='/bin/bash prepare_rdb.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_build_rdb' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.500000
[INFO] Success ('done'). Joining 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run1.py/task_build_rdb'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 5
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_daligner()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/rj_0000.sh'
[INFO] jobid=Je0ddc6198e5f9fba0a27cf615c1a6d1207f2d978a55ab3cfc9fd367beeacb564
[INFO] starting job Job(jobid='Je0ddc6198e5f9fba0a27cf615c1a6d1207f2d978a55ab3cfc9fd367beeacb564', cmd='/bin/bash rj_0000.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N Je0ddc6198e5f9f -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/Je0ddc6198e5f9fba0a27cf615c1a6d1207f2d978a55ab3cfc9fd367beeacb564 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-Je0ddc6198e5f9fba0a27cf615c1a6d1207f2d978a55ab3cfc9fd367beeacb564.bash
102910.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='Je0ddc6198e5f9fba0a27cf615c1a6d1207f2d978a55ab3cfc9fd367beeacb564', cmd='/bin/bash rj_0000.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/d_0000_raw_reads' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.500000
[INFO] Success ('done'). Joining 'task://localhost/d_0000_raw_reads'...
[INFO] Running task from function task_run_daligner()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001/rj_0001.sh'
[INFO] jobid=Jc217ef7ca3b34c549739e5cdffe99f8e95b4829db4d0b564615e9525389c117f
[INFO] starting job Job(jobid='Jc217ef7ca3b34c549739e5cdffe99f8e95b4829db4d0b564615e9525389c117f', cmd='/bin/bash rj_0001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N Jc217ef7ca3b34c -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/Jc217ef7ca3b34c549739e5cdffe99f8e95b4829db4d0b564615e9525389c117f -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-Jc217ef7ca3b34c549739e5cdffe99f8e95b4829db4d0b564615e9525389c117f.bash
102911.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='Jc217ef7ca3b34c549739e5cdffe99f8e95b4829db4d0b564615e9525389c117f', cmd='/bin/bash rj_0001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/d_0001_raw_reads' ...
[INFO] Success ('done'). Joining 'task://localhost/d_0001_raw_reads'...
[INFO] tick: 16, #updatedTasks: 2, sleep_time=0.000000
[INFO] Running task from function task_daligner_gather()
[INFO] Symlink .las files for further merging:
{'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00001': ['/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/L1.1.1.las',
                                                                                '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001/L1.1.2.las'],
 '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00002': ['/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001/L1.2.1.las',
                                                                                '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0001/L1.2.2.las']}
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task://localhost/rda_check' ...
[INFO] Success ('done'). Joining 'task://localhost/rda_check'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 7
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_las_merge()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00002/rp_00002.sh'
[INFO] jobid=J6e6e88137a47843df341207664607ae5d0eb5ec4712e591bf100b965013fc6ff
[INFO] starting job Job(jobid='J6e6e88137a47843df341207664607ae5d0eb5ec4712e591bf100b965013fc6ff', cmd='/bin/bash rp_00002.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00002', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J6e6e88137a4784 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J6e6e88137a47843df341207664607ae5d0eb5ec4712e591bf100b965013fc6ff -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J6e6e88137a47843df341207664607ae5d0eb5ec4712e591bf100b965013fc6ff.bash
102912.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J6e6e88137a47843df341207664607ae5d0eb5ec4712e591bf100b965013fc6ff', cmd='/bin/bash rp_00002.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00002', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/m_00002_raw_reads' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] Success ('done'). Joining 'task://localhost/m_00002_raw_reads'...
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.000000
[INFO] Running task from function task_run_las_merge()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00001/rp_00001.sh'
[INFO] jobid=J53c0cde6868f9d2b7ac4c7833461dd3b3c5a245ee70deb86d965712fefb601d1
[INFO] starting job Job(jobid='J53c0cde6868f9d2b7ac4c7833461dd3b3c5a245ee70deb86d965712fefb601d1', cmd='/bin/bash rp_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J53c0cde6868f9d -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J53c0cde6868f9d2b7ac4c7833461dd3b3c5a245ee70deb86d965712fefb601d1 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J53c0cde6868f9d2b7ac4c7833461dd3b3c5a245ee70deb86d965712fefb601d1.bash
102913.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J53c0cde6868f9d2b7ac4c7833461dd3b3c5a245ee70deb86d965712fefb601d1', cmd='/bin/bash rp_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/m_00001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/m_00001_raw_reads' ...
[INFO] Success ('done'). Joining 'task://localhost/m_00001_raw_reads'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 11
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_consensus()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads/c_00002.sh'
[INFO] jobid=Jf1f05abdbfdecf1271387ca867421bad6d8136f8660e94bc14f4a1bb10fa2c20
[INFO] starting job Job(jobid='Jf1f05abdbfdecf1271387ca867421bad6d8136f8660e94bc14f4a1bb10fa2c20', cmd='/bin/bash c_00002.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N Jf1f05abdbfdecf -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/Jf1f05abdbfdecf1271387ca867421bad6d8136f8660e94bc14f4a1bb10fa2c20 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-Jf1f05abdbfdecf1271387ca867421bad6d8136f8660e94bc14f4a1bb10fa2c20.bash
102914.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='Jf1f05abdbfdecf1271387ca867421bad6d8136f8660e94bc14f4a1bb10fa2c20', cmd='/bin/bash c_00002.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/ct_00002' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.500000
[INFO] Success ('done'). Joining 'task://localhost/ct_00002'...
[INFO] Running task from function task_run_consensus()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads/c_00001.sh'
[INFO] jobid=J04c86fbdb788b39fb6eb0b1382bd5513f3d1cb6a09b9f0ba8fdb0c91d4d007e1
[INFO] starting job Job(jobid='J04c86fbdb788b39fb6eb0b1382bd5513f3d1cb6a09b9f0ba8fdb0c91d4d007e1', cmd='/bin/bash c_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J04c86fbdb788b3 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J04c86fbdb788b39fb6eb0b1382bd5513f3d1cb6a09b9f0ba8fdb0c91d4d007e1 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J04c86fbdb788b39fb6eb0b1382bd5513f3d1cb6a09b9f0ba8fdb0c91d4d007e1.bash
102915.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J04c86fbdb788b39fb6eb0b1382bd5513f3d1cb6a09b9f0ba8fdb0c91d4d007e1', cmd='/bin/bash c_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/preads', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/ct_00001' ...
[INFO] tick: 16, #updatedTasks: 2, sleep_time=0.100000
[INFO] Success ('done'). Joining 'task://localhost/ct_00001'...
[INFO] Running task from function check_r_cns_task()
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task://localhost/cns_check' ...
[INFO] Success ('done'). Joining 'task://localhost/cns_check'...
[INFO] Running task from function task_report_pre_assembly()
[INFO] length_cutoff=2000 from '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/length_cutoff'
[INFO] Report inputs: {'length_cutoff': 2000, 'i_raw_reads_fofn_fn': '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/input.fofn', 'i_preads_fofn_fn': '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/input_preads.fofn', 'genome_length': 5000}
[INFO] stats for raw reads:       FastaStats(nreads=50, total=100000, n50=2000, p95=2000)
[INFO] stats for seed reads:      FastaStats(nreads=50, total=100000, n50=2000, p95=2000)
[INFO] stats for corrected reads: FastaStats(nreads=50, total=99447, n50=1989, p95=1989)
[INFO] Report stats:
{
    "genome_length": 5000,
    "length_cutoff": 2000,
    "preassembled_bases": 99447,
    "preassembled_coverage": 19.8894,
    "preassembled_mean": 1988.94,
    "preassembled_n50": 1989,
    "preassembled_p95": 1989,
    "preassembled_reads": 50,
    "preassembled_yield": 0.99447,
    "raw_bases": 100000,
    "raw_coverage": 20.0,
    "raw_mean": 2000.0,
    "raw_n50": 2000,
    "raw_p95": 2000,
    "raw_reads": 50,
    "seed_bases": 100000,
    "seed_coverage": 20.0,
    "seed_mean": 2000.0,
    "seed_n50": 2000,
    "seed_p95": 2000,
    "seed_reads": 50
}
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task://localhost/report_pre_assembly' ...
[INFO] Success ('done'). Joining 'task://localhost/report_pre_assembly'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 11
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_build_pdb()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/prepare_pdb.sh'
[INFO] jobid=J6520f84e07ad07d6d1ca5c2457459f716986fd3229c6b60d9342264bce371423
[INFO] starting job Job(jobid='J6520f84e07ad07d6d1ca5c2457459f716986fd3229c6b60d9342264bce371423', cmd='/bin/bash prepare_pdb.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J6520f84e07ad07 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J6520f84e07ad07d6d1ca5c2457459f716986fd3229c6b60d9342264bce371423 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J6520f84e07ad07d6d1ca5c2457459f716986fd3229c6b60d9342264bce371423.bash
102916.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J6520f84e07ad07d6d1ca5c2457459f716986fd3229c6b60d9342264bce371423', cmd='/bin/bash prepare_pdb.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/build_pdb' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.500000
[INFO] Success ('done'). Joining 'task://localhost/build_pdb'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 14
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_daligner()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/job_0000/rj_0000.sh'
[INFO] jobid=Jb4e17fb75d5c877c9ee6ac5c90d0b4699513e122e9b72d86a0f972d747007736
[INFO] starting job Job(jobid='Jb4e17fb75d5c877c9ee6ac5c90d0b4699513e122e9b72d86a0f972d747007736', cmd='/bin/bash rj_0000.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/job_0000', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N Jb4e17fb75d5c87 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/Jb4e17fb75d5c877c9ee6ac5c90d0b4699513e122e9b72d86a0f972d747007736 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-Jb4e17fb75d5c877c9ee6ac5c90d0b4699513e122e9b72d86a0f972d747007736.bash
102917.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='Jb4e17fb75d5c877c9ee6ac5c90d0b4699513e122e9b72d86a0f972d747007736', cmd='/bin/bash rj_0000.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/job_0000', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/d_0000_preads' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] Success ('done'). Joining 'task://localhost/d_0000_preads'...
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.000000
[INFO] Running task from function task_daligner_gather()
[INFO] Symlink .las files for further merging:
{'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/m_00001': ['/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/job_0000/preads.1.las']}
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task://localhost/pda_check' ...
[INFO] Success ('done'). Joining 'task://localhost/pda_check'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 16
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_las_merge()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/m_00001/rp_00001.sh'
[INFO] jobid=J44d2d2b3515e36ba842d68d1e636f111cebf156c1f4b54b0119ae4885fb3a3f1
[INFO] starting job Job(jobid='J44d2d2b3515e36ba842d68d1e636f111cebf156c1f4b54b0119ae4885fb3a3f1', cmd='/bin/bash rp_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/m_00001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J44d2d2b3515e36 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J44d2d2b3515e36ba842d68d1e636f111cebf156c1f4b54b0119ae4885fb3a3f1 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J44d2d2b3515e36ba842d68d1e636f111cebf156c1f4b54b0119ae4885fb3a3f1.bash
102918.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J44d2d2b3515e36ba842d68d1e636f111cebf156c1f4b54b0119ae4885fb3a3f1', cmd='/bin/bash rp_00001.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/m_00001', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/m_00001_preads' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] Success ('done'). Joining 'task://localhost/m_00001_preads'...
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.000000
[INFO] Running task from function check_p_merge_check_task()
[WARNING] Missing taskObj.generated_script_fn for task. Maybe we did not need it? Skipping and continuing.
[INFO] Queued 'task://localhost/pmerge_check' ...
[INFO] Success ('done'). Joining 'task://localhost/pmerge_check'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 18
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] tick: 2, #updatedTasks: 0, sleep_time=0.100000
[INFO] Running task from function task_run_db2falcon()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl/run_db2falcon.sh'
[INFO] jobid=J87fa279521da8203556c24b573441f6e10a97f583be9104082b202112b45ce00
[INFO] starting job Job(jobid='J87fa279521da8203556c24b573441f6e10a97f583be9104082b202112b45ce00', cmd='/bin/bash run_db2falcon.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N J87fa279521da82 -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/J87fa279521da8203556c24b573441f6e10a97f583be9104082b202112b45ce00 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-J87fa279521da8203556c24b573441f6e10a97f583be9104082b202112b45ce00.bash
102919.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='J87fa279521da8203556c24b573441f6e10a97f583be9104082b202112b45ce00', cmd='/bin/bash run_db2falcon.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/1-preads_ovl', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/db2falcon' ...
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.100000
[INFO] Success ('done'). Joining 'task://localhost/db2falcon'...
[INFO] tick: 8, #updatedTasks: 1, sleep_time=0.100000
[INFO] Running task from function task_run_falcon_asm()
[INFO] script_fn:'/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/2-asm-falcon/run_falcon_asm.sh'
[INFO] jobid=Jb93e27ec197b3d278a8f4357145e082e29404b21bd2b9db29623016696a8fca3
[INFO] starting job Job(jobid='Jb93e27ec197b3d278a8f4357145e082e29404b21bd2b9db29623016696a8fca3', cmd='/bin/bash run_falcon_asm.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/2-asm-falcon', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None})
[INFO] !qsub -N Jb93e27ec197b3d -l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected] -V -d /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/Jb93e27ec197b3d278a8f4357145e082e29404b21bd2b9db29623016696a8fca3 -o stdout -e stderr -S /bin/bash /home/stelo/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/wrappers/run-Jb93e27ec197b3d278a8f4357145e082e29404b21bd2b9db29623016696a8fca3.bash
102920.head
[INFO] Submitted backgroundjob=MetaJobTorque(MetaJob(job=Job(jobid='Jb93e27ec197b3d278a8f4357145e082e29404b21bd2b9db29623016696a8fca3', cmd='/bin/bash run_falcon_asm.sh', rundir='/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/2-asm-falcon', options={'sge_option': '-l nodes=1:ppn=16:nogpu,walltime=4:00:00,mem=50000mb -M [email protected]', 'job_type': None}), lang_exe='/bin/bash'))
[INFO] Queued 'task://localhost/falcon' ...
[INFO] tick: 16, #updatedTasks: 2, sleep_time=0.700000
[INFO] Success ('done'). Joining 'task://localhost/falcon'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
make[3]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples'
make -C run/synth0 test
make[3]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
./check.py
shifted by 3269 (rc)
make[3]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
make -C run/synth0 clean
make[3]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
\rm -rf 0-*/ 1-*/ 2-*/ *.log mypwatcher/
make[3]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
make -C run/synth0 go0 # still test the old pypeflow too, for now
make[3]: Entering directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
fc_run0 fc_run.cfg logging.ini
[INFO] fc_run started with configuration fc_run.cfg
[INFO]  No target specified, assuming "assembly" as target
[INFO] # of tasks in complete graph: 1
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] Running task from function task_make_fofn_abs_raw()
[INFO] Queued 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py/task_make_fofn_abs_raw' ...
[INFO] tick: 2, #updatedTasks: 1, sleep_time=0.000000
[INFO] Success ('done'). Joining 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py/task_make_fofn_abs_raw'...
[INFO] _refreshTargets() finished with no thread running and no new job to submit
[INFO] # of tasks in complete graph: 2
[INFO] tick: 1, #updatedTasks: 0, sleep_time=0.000000
[INFO] Running task from function task_build_rdb()
[INFO] Queued 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py/task_build_rdb' ...
[INFO] tick: 2, #updatedTasks: 1, sleep_time=0.000000
[INFO] (torque) '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/prepare_rdb.sh'
102921.head
[INFO] tick: 4, #updatedTasks: 1, sleep_time=0.200000
[INFO] '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done.exit' found.
[WARNING] '/home/stelo/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done' is missing. job: 'prepare_rdb.sh-task_build_rdb-task_build_rdb' failed!
[INFO] Failure ('fail'). Joining 'task:///home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py/task_build_rdb'...
[CRITICAL] Any exception caught in RefreshTargets() indicates an unrecoverable error. Shutting down...
/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py:537: UserWarning:
            "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
            "! Please wait for all threads / processes to terminate !"
            "! Also, maybe use 'ps' or 'qstat' to check all threads,!"
            "! processes and/or jobs are terminated cleanly.        !"
            "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"

  warnings.warn(shutdown_msg)
[WARNING] #tasks=1, #alive=0
Traceback (most recent call last):
  File "/home/stelo/FALCON-integrate/fc_env/bin/fc_run0", line 9, in <module>
    load_entry_point('falcon-kit', 'console_scripts', 'fc_run0')()
  File "/home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py", line 681, in main
    main1(argv[0], args.config, args.logger)
  File "/home/stelo/FALCON-integrate/FALCON/falcon_kit/mains/run0.py", line 512, in main1
    wf.refreshTargets([rdb_build_done])
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 548, in refreshTargets
    raise Exception('Caused by:\n' + tb)
Exception: Caused by:
Traceback (most recent call last):
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 523, in refreshTargets
    rtn = self._refreshTargets(task2thread, objs = objs, callback = callback, updateFreq = updateFreq, exitOnFailure = exitOnFailure)
  File "/home/stelo/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 740, in _refreshTargets
    raise TaskFailureError("Counted %d failure(s) with 0 successes so far." %failedJobCount)
TaskFailureError: 'Counted 1 failure(s) with 0 successes so far.'
make[3]: *** [run0] Error 1
make[3]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples/run/synth0'
make[2]: *** [test] Error 2
make[2]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-examples'
make[1]: *** [test] Error 2
make[1]: Leaving directory `/home/stelo/FALCON-integrate/FALCON-make'
make: *** [test] Error 2

Two levels of directory hierarchy

As noted in PacificBiosciences/FALCON#334, pwatcher uses os.listdir() to check for the existence of job-done files ("exit" files). That probably uses readdir(), so it should be fast as long as the number of files is < 100k.

However, some filesystems place a low limit on the number of sub-directories in a directory -- e.g. 32k in ext3 -- so the current pwatcher is near the limit. For genomes > 5 GB, we will need to use 2 levels of directory naming, similar to object-file naming in git. E.g.

# ls ../.git/modules/pypeFLOW/objects/
00  09  12  1a  22  28  30  36  41  4b  54  5d  63  6d  73  7e  84  8b  93  9b  a4  ab  b1  ba  c1  c7  d0  d7  e1  e8  f1  f9  info
01  0a  13  1b  23  2a  31  39  43  4c  55  5e  64  6e  76  7f  85  8c  94  9d  a5  ac  b2  bb  c2  c8  d1  d8  e2  ea  f2  fa  pack
...
# $ ls ../.git/modules/pypeFLOW/objects/1f
373199fd8940c9338fb698d27a2ec9fa5622c5  f22aa0033b51faac5283ad37acc4c695887308  f37c0e9f149fac5bd6eed9636478c9ae063cf6

See the idea? With just 2 hex digits, we have 1/256 as many files (or sub-directories, in our case) per directory. We could also use this for the heartbeat/exit files if necessary, but the place where I know we'll have the problem is the pwatcher/jobs directory, where every uniquely named job has its own directory (based on a checksum of the job description).

Not urgent, but I don't want to forget about this.
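
A sketch of the two-level layout, assuming the job name (a checksum) can be split on its first two characters, like git object files (hypothetical helper, not current pwatcher code):

import os

def job_dir(root, jobid):
    # e.g. mypwatcher/jobs/P5/P5c2c511e4cdb43 instead of mypwatcher/jobs/P5c2c511e4cdb43,
    # so each directory holds roughly 1/256 of the job sub-directories.
    sub = jobid[:2]
    path = os.path.join(root, 'jobs', sub, jobid)
    if not os.path.isdir(path):
        os.makedirs(path)
    return path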

PBS needs jobid for qdel

PacificBiosciences/FALCON#507

@yingzhang121 wrote:

I tried the kill-string setting, but it didn't work.
In a PBS system, it is easy to get the job id. You can use ${PBS_JOBID}, or more simply, if you define submit=$(qsub -S /bin/bash script.sh), the value of $submit is actually the job id, and you can run "qdel ${submit}".

In my PBS, there is not even a -W block=T option.
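
A sketch of the approach suggested in the quote: capture the job id that qsub prints on stdout, and hand that same id to qdel when the run is torn down (hypothetical helpers, not the current pwatcher code):

import subprocess

def submit(script_fn):
    # On PBS/Torque, qsub prints the job id (e.g. '102909.head') on stdout.
    out = subprocess.check_output(['qsub', '-S', '/bin/bash', script_fn])
    return out.decode().strip()

def kill(jobid):
    subprocess.check_call(['qdel', jobid])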

Problem with network_based pwatcher on OSX

Traceback (most recent call last):
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/fc_env/bin/fc_run", line 11, in <module>
    load_entry_point('falcon-kit', 'console_scripts', 'fc_run')()
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 668, in main
    main1(argv[0], args.config, args.logger)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 432, in main1
    watcher_directory=config['pwatcher_directory'])
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pypeflow/simple_pwatcher_bridge.py", line 553, in PypeProcWatcherWorkflow
    watcher = pwatcher_impl.get_process_watcher(watcher_directory)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pwatcher/network_based.py", line 905, in get_process_watcher
    state = get_state(directory)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pwatcher/network_based.py", line 402, in get_state
    watcher_state.initialize(directory, hostname, port)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pwatcher/network_based.py", line 373, in initialize
    self.top['auth'], self.top['server'] = start_server(self.get_server_directories(), hostname, port)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pwatcher/network_based.py", line 238, in start_server
    hostname = get_localhost_ipaddress(hostname, port)
  File "/Users/cdunn2001/repo/gh/FALCON-integrate/pypeFLOW/pwatcher/network_based.py", line 193, in get_localhost_ipaddress
    list = socket.getaddrinfo(socket.gethostname(), port, socket.AF_INET, socket.SOCK_STREAM)
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

Completely outside pypeflow:

>>> import socket
>>> socket.gethostname()
'MacBook-Air.local'
>>> port = 0
>>> socket.getaddrinfo(socket.gethostname(), port, socket.AF_INET, socket.SOCK_STREAM)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
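
A minimal sketch of a fallback that would avoid this gaierror, assuming it is acceptable to fall back to the loopback address when the local hostname has no DNS entry (this is only an illustration, not the current network_based code):

import socket

def get_localhost_ipaddress(hostname, port):
    """Resolve the local hostname, falling back to 127.0.0.1 if it does not resolve."""
    try:
        infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
        return infos[0][4][0]   # sockaddr is (ip, port) for AF_INET
    except socket.gaierror:
        return '127.0.0.1'

# get_localhost_ipaddress(socket.gethostname(), 0) then succeeds even on a Mac
# whose '*.local' hostname is not resolvable, which is what the traceback shows.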

Heartbeat files not being created

While running the synth0 test under SLURM, I get a lot of messages like this:

2016-11-23 17:51:03,872 - pwatcher.fs_based - DEBUG - Unable to remove heartbeat '/home/bredelings/pacbio/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/heartbeats/hea
Traceback (most recent call last):
  File "/home/bredelings/pacbio/FALCON-integrate/pypeFLOW/pwatcher/fs_based.py", line 553, in get_status
    os.remove(heartbeat_path)
OSError: [Errno 2] No such file or directory: '/home/bredelings/pacbio/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/heartbeats/heartbeat-Pdc6b30580a8be9'

Also, I see a lot of lines like this:
[INFO]sleep 0.7s
And there is almost no CPU usage.
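
The removal error itself looks harmless: os.remove() is racing with a heartbeat file that is already gone. A hedged sketch of a tolerant cleanup (illustrative only, not the actual fs_based code):

import errno
import os

def remove_if_exists(path):
    """Remove a heartbeat/exit file, ignoring the case where it is already missing."""
    try:
        os.remove(path)
    except OSError as exc:
        if exc.errno != errno.ENOENT:
            raise   # a real failure (permissions, I/O error, ...)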

Problem with HPC pro and Falcon

Hi,
I have got the following error with install_unzip_180312.sh:

[5711]$(u'lfs setstripe -c 12 /lustre/scratch/waterhouse_team/benth/falcon')
[5711]Call u'lfs setstripe -c 12 /lustre/scratch/waterhouse_team/benth/falcon' returned 0.
[INFO]Setup logging from file "None".
[INFO]fc_run started with configuration fc_run.cfg
[ERROR]Failed to parse config "fc_run.cfg".
Traceback (most recent call last):
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 136, in main1
    support.parse_config(input_config_fn))
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/lib/python2.7/site-packages/falcon_kit/run_support.py", line 167, in get_dict_from_old_falcon_cfg
    sge_option = config.get(section, 'sge_option_da')
  File "/work/waterhouse_team/miniconda2/envs/falcon_unzip/lib/python2.7/ConfigParser.py", line 618, in get
    raise NoOptionError(option, section)
NoOptionError: No option u'sge_option_da' in section: u'General'
Traceback (most recent call last):
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/bin/fc_run", line 11, in <module>
    load_entry_point('falcon-kit==1.0+git.6bb3daa96931aece9bd3742bccc77ad257b7bb65', 'console_scripts', 'fc_run')()
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 644, in main
    main1(argv[0], args.config, args.logger)
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 136, in main1
    support.parse_config(input_config_fn))
  File "/lustre/work-lustre/waterhouse_team/apps/falcon_unzip/fc_env_180905/lib/python2.7/site-packages/falcon_kit/run_support.py", line 167, in get_dict_from_old_falcon_cfg
    sge_option = config.get(section, 'sge_option_da')
  File "/work/waterhouse_team/miniconda2/envs/falcon_unzip/lib/python2.7/ConfigParser.py", line 618, in get
    raise NoOptionError(option, section)
ConfigParser.NoOptionError: No option u'sge_option_da' in section: u'General'

with the following cfg file:

[General]
input_fofn = ../fasta2DB_input.fofn
input_type = raw

seed_coverage = 40 #(for Banana Diploid) and 40 (for Nbenth)
genome_size = 320000000 #(for Banana Diploid) and 320000000 (for Nbenth)

length_cutoff = -1
length_cutoff_pr = -1

job_type = string
pwatcher_type = blocking
job_queue = qsub -S /bin/bash -V -q lyra -N ${JOB_ID} -o "${STDOUT_FILE}" -e "${STDERR_FILE}" -l nodes=1:ppn=${NPROC},mem=25gb -W umask=0007,block=true "${CMD}"

pa_concurrent_jobs = 96
cns_concurrent_jobs = 96
ovlp_concurrent_jobs = 96

pa_HPCdaligner_option =  -v -B128 -M32 -e.70 -l4800 -s100 -k18 -h480 -w8 
ovlp_HPCdaligner_option = -v -B128 -M32 -h1024 -e.96 -l2400 -s100 -k18

pa_DBsplit_option = -a -x500 -s200
ovlp_DBsplit_option = -s400

falcon_sense_option = --output_multi --min_idt 0.70 --min_cov 2 --max_n_read 100 --n_core 8
falcon_sense_skip_contained = True

overlap_filtering_setting = --max_diff 80 --max_cov 80 --min_cov 2 --n_core 12

How would I convert sge to HPC pro?

Thank you in advance.

Michal
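
As the traceback shows, this falcon_kit version reads sge_option_da (and the related sge_option_* keys) directly from the [General] section via ConfigParser, so parsing fails as soon as one is absent. A hedged sketch for checking a cfg file up front; the option list below is taken from the PBS example later on this page and may differ between releases:

try:
    import configparser                     # Python 3
except ImportError:
    import ConfigParser as configparser     # Python 2, as in the traceback above

REQUIRED = ['sge_option_da', 'sge_option_la', 'sge_option_pda',
            'sge_option_pla', 'sge_option_fc', 'sge_option_cns']

def missing_sge_options(cfg_path):
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    return [opt for opt in REQUIRED if not cfg.has_option('General', opt)]

# print(missing_sge_options('fc_run.cfg'))
# Add any missing keys to [General], e.g.
#   sge_option_da = -l nodes=1:ppn=12,mem=25gb,walltime=24:00:00
# with whatever resource syntax your HPC/PBS Pro queue expects.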

`isSatisfied()` needs to wait and re-try on missing outputs.

Traceback (most recent call last):
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/controller.py", line 523, in refreshTargets
    rtn = self._refreshTargets(task2thread, objs = objs, callback = callback, updateFreq = updateFreq, exitOnFailure = exitOnFailure)
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/controller.py", line 637, in _refreshTargets
    if not (set(prereqJobURLs) & updatedTaskURLs) and taskObj.isSatisfied():
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/task.py", line 149, in isSatisfied
    return not self._getRunFlag()
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/task.py", line 143, in _getRunFlag
    return any( [ f(self.inputDataObjs, self.outputDataObjs, self.parameters) for f in self._compareFunctions] )
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/task.py", line 854, in timeStampCompare
    inputDataObjsTS.append((f.timeStamp, 'A', f))
  File "/home/UNIXHOME/cdunn/repo/pb/pf/smrtanalysis/bioinformatics/ext/pi/pypeFLOW/pypeflow/data.py", line 115, in timeStamp
    raise FileNotExistError("No such file:%r on %r" % (self.localFileName, platform.node()) )
FileNotExistError: "No such file:'/home/UNIXHOME/cdunn/repo/pb/smrtanalysis-client/smrtanalysis/siv/testkit-jobs/sa3_pipelines/hgap5_fake/synth5k/job_output/tasks/falcon_ns.tasks.task_hgap_run-0/run-fasta2referenceset/asm.referenceset.xml' on 'vm1004-def14'"

But that file does exist!
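
A minimal sketch of the wait-and-retry the title asks for, assuming the stale view comes from something like NFS attribute caching and that a few re-checks with a short sleep are enough (the helper name and timings are hypothetical):

import os
import time

def wait_for_file(path, retries=6, delay=5.0):
    """Return True once path exists, re-checking a few times before giving up."""
    for _ in range(retries):
        if os.path.exists(path):
            return True
        time.sleep(delay)
    return os.path.exists(path)

isSatisfied() (or timeStampCompare) could call something like this before raising FileNotExistError, so a file that was just written on another node gets a chance to become visible.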

Sometimes, the exit file is written late

pwatcher.fs_based - DEBUG - Status EXIT 0 for heartbeat:heartbeat-J75fc47f525b4da4c2f26b876613517ecd4ecee362572f2eaf391cdec58e4579b

But sometimes we see

pwatcher.fs_based - DEBUG - Status EXIT  for heartbeat:heartbeat-Jf5e95c43e2eaec88a3616083cac729abc967efb72b38bd78adce8e88a8d6453e

The 0 after EXIT is missing. We then get an exception:

Exception: Caused by:
Traceback (most recent call last):
  File "/lustre/hpcprod/cdunn/repo/gh/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 523, in refreshTargets
    rtn = self._refreshTargets(task2thread, objs = objs, callback = callback, updateFreq = updateFreq, exitOnFailure = exitOnFailure)
  File "/lustre/hpcprod/cdunn/repo/gh/FALCON-integrate/pypeFLOW/pypeflow/controller.py", line 657, in _refreshTargets
    numAliveThreads = self.thread_handler.alive(task2thread.values())
  File "/lustre/hpcprod/cdunn/repo/gh/FALCON-integrate/pypeFLOW/pypeflow/pwatcher_bridge.py", line 197, in alive
    fred.endrun(status)
  File "/lustre/hpcprod/cdunn/repo/gh/FALCON-integrate/pypeFLOW/pypeflow/pwatcher_bridge.py", line 83, in endrun
    code = int(status.split()[1])
IndexError: list index out of range

That's because we split EXIT 0 to learn the exit-code. Apparently, pwatcher is reading the exit file before the exit code has been written to it. (The exit file contains the expected 0 when I view it now.)

pypeflow.pwatcher_bridge - DEBUG - In alive(), updated result of query:{'jobids': {
  'J5d1ec9ee6da2fafc1dfc7bbaa84a8a1df8aa588423fbea1acdf8a960653f7f6f':
    'RUNNING',
  'Jf5e95c43e2eaec88a3616083cac729abc967efb72b38bd78adce8e88a8d6453e':
    'EXIT ',
  'Jf38d0c8a5aed4a499e1976659c2546d2044913288c65af093e6b213dd11351a6':
    'RUNNING'
}}

We could prevent that by writing to a different file, and then performing an atomic mv operation. Alternatively, pwatcher could just wait a few seconds and retry.
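
A minimal sketch of the write-then-rename idea, assuming the heartbeat wrapper writes the exit code to a temporary name in the same directory and then renames it into place; a rename within one filesystem is atomic, so a reader either sees no exit file at all or one that already holds the complete code:

import os
import tempfile

def write_exit_file_atomically(exit_path, exit_code):
    """Write the exit code to a temp file in the same directory, then rename it over exit_path."""
    dirname = os.path.dirname(exit_path)
    fd, tmp = tempfile.mkstemp(prefix='.exit-', dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write('%d\n' % exit_code)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, exit_path)
    except BaseException:
        os.unlink(tmp)
        raise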

slurm command does not use MB, NPROC from config file

We successfully ran the small 200kb test case through fc_run and fc_unzip, under slurm using srun, with falcon-kit 1.4.4 & pypeflow 2.3.0.

But a larger data set fails with "out of memory" because the first srun command uses --mem-per-cpu=4000M --cpus-per-task=1, rather than the 'NPROC': '6', 'MB': '30000' specified in the cfg file.

How do we make the first call of srun use 30000MB? There is no mention of 4000MB anywhere in the cfg file.

"job.defaults": {
"JOB_QUEUE": "standard",
"MB": "30000",
"NPROC": "6",
"job_type": "slurm",
"njobs": "8",
"pwatcher_type": "blocking",
"submit": "srun --wait=0 -p ${JOB_QUEUE} -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}",
"use_tmpdir": false
},
"job.step.asm": {},
"job.step.cns": {},
"job.step.da": {},
"job.step.dust": {},
"job.step.la": {},
"job.step.pda": {},
"job.step.pla": {}
}
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.blocking' from '/home/data/bioinf_resources/programming_tools/miniconda3/envs/denovo_asm5/lib/python3.7/site-packages/pwatcher/blocking.py'>
[INFO]job_type='slurm', (default)job_defaults={'job_type': 'slurm', 'pwatcher_type': 'blocking', 'JOB_QUEUE': 'standard', 'njobs': '8', 'NPROC': '6', 'MB': '30000', 'submit': 'srun --wait=0 -p ${JOB_QUEUE} -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}', 'use_tmpdir': False}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 8; was None
[INFO]Num unsatisfied: 2, graph: 2
[INFO]About to submit: Node(0-rawreads/build)
[INFO]Popen: 'srun --wait=0 -p standard -J P26a7bf2afdd410 -o /home/data/pest_genomics/DH_test/falcon_example5/out/0-rawreads/build/run-P26a7bf2afdd410.bash.stdout -e /home/data/pest_genomics/DH_test/falcon_example5/out/0-rawreads/build/run-P26a7bf2afdd410.bash.stderr --mem-per-cpu=4000M --cpus-per-task=1 /home/data/bioinf_resources/programming_tools/miniconda3/envs/denovo_asm5/lib/python3.7/site-packages/pwatcher/mains/job_start.sh'
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.30000000000000004s -- another 2 loop iterations)
[...]
[...]

[INFO](slept for another 180.0s -- another 18 loop iterations)
[INFO](slept for another 190.0s -- another 19 loop iterations)
[INFO](slept for another 200.0s -- another 20 loop iterations)
srun: error: rothhpc402: task 0: Out Of Memory
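
For reference, the submit string is only a template whose ${MB} and ${NPROC} placeholders are filled in per job, so the --mem-per-cpu=4000M --cpus-per-task=1 in the Popen line above means this particular job received MB=4000 and NPROC=1 rather than the job.defaults values. A hedged illustration of that substitution (string.Template is only an analogy for whatever pypeflow does internally; where the 4000/1 actually comes from is exactly what needs tracking down):

from string import Template

submit = ('srun --wait=0 -p ${JOB_QUEUE} -J ${JOB_NAME} -o ${JOB_STDOUT} '
          '-e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}')

# With the values from job.defaults we would expect 30000M and 6 cpus:
print(Template(submit).safe_substitute(
    JOB_QUEUE='standard', JOB_NAME='P26a7bf2afdd410',
    JOB_STDOUT='run.stdout', JOB_STDERR='run.stderr',
    MB='30000', NPROC='6', JOB_SCRIPT='run.sh'))

# The observed command corresponds to MB='4000', NPROC='1' being substituted
# for this job instead.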

Not able to detect any error

I have successfully completed the ecoli and ecoli2 example genomes on a PBS cluster. But now, when I try to run FALCON on my actual data, which is a corrected FASTA file (from Canu), it gives neither an error nor any output: none of the directories produce any output or error files, and only all.log keeps growing, with the same four lines repeating over and over. It has been running like this for ages.
Will you please help me figure out what I am doing wrong here?
This is my cfg file:


 [General]
job_type = PBS

# list of files of the initial bas.h5 files
input_fofn = input.fofn
#input_fofn = preads.fofn

#input_type = raw
input_type = preads

# The length cutoff used for seed reads used for initial mapping
length_cutoff = 12000

# The length cutoff used for seed reads used for pre-assembly
length_cutoff_pr = 12000


job_queue = batch
sge_option_da = -l nodes=8:ppn=28,walltime=720:00:00
sge_option_la = -l nodes=8:ppn=28,walltime=720:00:00
sge_option_pda = -l nodes=8:ppn=28,walltime=720:00:00
sge_option_pla = -l nodes=8:ppn=28,walltime=720:00:00
sge_option_fc = -l nodes=8:ppn=28,walltime=720:00:00
sge_option_cns = -l nodes=8:ppn=28,walltime=720:00:00

pa_concurrent_jobs = 26
ovlp_concurrent_jobs = 26

pa_HPCdaligner_option =  -v -B128 -t16 -e.70 -l1000 -s1000
ovlp_HPCdaligner_option = -v -B128 -t32 -h60 -e.96 -l500 -s1000

pa_DBsplit_option = -x500 -s200
ovlp_DBsplit_option = -x500 -s200

falcon_sense_option = --output_multi --min_idt 0.70 --min_cov 4 --max_n_read 200 --n_core 6

overlap_filtering_setting = --max_diff 100 --max_cov 100 --min_cov 20 --bestn 10 --n_core 24

Here is all.log

2017-04-11 16:28:18,769 - fc_run - INFO - Setup logging from file "None".
2017-04-11 16:28:18,770 - fc_run - INFO - fc_run started with configuration fc_run.cfg
2017-04-11 16:28:18,771 - fc_run - INFO -  No target specified, assuming "assembly" as target 
2017-04-11 16:28:18,771 - pypeflow.simple_pwatcher_bridge - WARNING - In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/home/scbb/FALCON-integrate/pypeFLOW/pwatcher/fs_based.pyc'>
2017-04-11 16:28:18,771 - pypeflow.simple_pwatcher_bridge - INFO - In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/home/scbb/FALCON-integrate/pypeFLOW/pwatcher/fs_based.pyc'>
2017-04-11 16:28:18,819 - pypeflow.simple_pwatcher_bridge - INFO - job_type='PBS', job_queue='batch', sge_option='-l nodes=8:ppn=28,walltime=720:00:00', use_tmpdir=False, squash=False
2017-04-11 16:28:18,862 - pypeflow.simple_pwatcher_bridge - DEBUG - Created PypeTask('0-rawreads/raw-fofn-abs', '/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', "{'o_fofn': PLF('0-rawreads/raw-fofn-abs/input.fofn', None)}", "{'i_fofn': PLF('input.fofn', None)}")
2017-04-11 16:28:18,862 - pypeflow.simple_pwatcher_bridge - DEBUG - Added PRODUCERS['0-rawreads/raw-fofn-abs'] = PypeTask('0-rawreads/raw-fofn-abs', '/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', "{'o_fofn': PLF('0-rawreads/raw-fofn-abs/input.fofn', None)}", "{'i_fofn': PLF('input.fofn', None)}")
2017-04-11 16:28:18,863 - pypeflow.simple_pwatcher_bridge - DEBUG - Built PypeTask('0-rawreads/raw-fofn-abs', '/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', "{'o_fofn': PLF('input.fofn', '0-rawreads/raw-fofn-abs')}", "{'i_fofn': PLF('input.fofn', None)}")
2017-04-11 16:28:18,863 - pypeflow.simple_pwatcher_bridge - DEBUG - New Node(0-rawreads/raw-fofn-abs) needs set([])
2017-04-11 16:28:18,864 - pypeflow.simple_pwatcher_bridge - INFO - Num unsatisfied: 1, graph: 1
2017-04-11 16:28:18,864 - pypeflow.simple_pwatcher_bridge - INFO - About to submit: Node(0-rawreads/raw-fofn-abs)
2017-04-11 16:28:18,864 - pypeflow.simple_pwatcher_bridge - DEBUG - enque nodes:
set([Node(0-rawreads/raw-fofn-abs)])
2017-04-11 16:28:19,020 - pypeflow.simple_pwatcher_bridge - DEBUG - In rundir='/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', sge_option=None, __sge_option='-l nodes=8:ppn=28,walltime=720:00:00'
2017-04-11 16:28:19,020 - pwatcher.fs_based - DEBUG - run(jobids=<1>, job_type=PBS, job_queue=batch)
2017-04-11 16:28:19,020 - pwatcher.fs_based - DEBUG - jobs:
{'P2d1aed37957c1c': Job(jobid='P2d1aed37957c1c', cmd='/bin/bash run.sh', rundir='/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', options={'job_queue': 'batch', 'sge_option': '-l nodes=8:ppn=28,walltime=720:00:00', 'job_type': 'PBS'})}
2017-04-11 16:28:19,020 - pwatcher.fs_based - INFO - starting job Job(jobid='P2d1aed37957c1c', cmd='/bin/bash run.sh', rundir='/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', options={'job_queue': 'batch', 'sge_option': '-l nodes=8:ppn=28,walltime=720:00:00', 'job_type': 'PBS'})
2017-04-11 16:28:19,021 - pwatcher.fs_based - DEBUG - Wrapped "python2.7 -m pwatcher.mains.fs_heartbeat --directory=/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs --heartbeat-file=/home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/heartbeats/heartbeat-P2d1aed37957c1c --exit-file=/home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/exits/exit-P2d1aed37957c1c --rate=10.0 /bin/bash run.sh || echo 99 >| /home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/exits/exit-P2d1aed37957c1c"
2017-04-11 16:28:19,021 - pwatcher.fs_based - DEBUG - Writing wrapper "/home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/wrappers/run-P2d1aed37957c1c.bash"
2017-04-11 16:28:19,070 - pwatcher.fs_based - DEBUG - CD: '/home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/jobs/P2d1aed37957c1c' <- '/home/scbb/FALCON-integrate/FALCON-examples/run/picro'
2017-04-11 16:28:19,095 - pwatcher.fs_based - INFO - !qsub -N P2d1aed37957c1c -q batch -l nodes=8:ppn=28,walltime=720:00:00 -V -o stdout -e stderr -S /bin/bash /home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/wrappers/run-P2d1aed37957c1c.bash
2017-04-11 16:28:19,161 - pwatcher.fs_based - DEBUG - CD: '/home/scbb/FALCON-integrate/FALCON-examples/run/picro/mypwatcher/jobs/P2d1aed37957c1c' -> '/home/scbb/FALCON-integrate/FALCON-examples/run/picro'
2017-04-11 16:28:19,162 - pwatcher.fs_based - INFO - Submitted backgroundjob=MetaJobPbs(MetaJob(job=Job(jobid='P2d1aed37957c1c', cmd='/bin/bash run.sh', rundir='/home/scbb/FALCON-integrate/FALCON-examples/run/picro/0-rawreads/raw-fofn-abs', options={'job_queue': 'batch', 'sge_option': '-l nodes=8:ppn=28,walltime=720:00:00', 'job_type': 'PBS'}), lang_exe='/bin/bash'))
2017-04-11 16:28:19,162 - pypeflow.simple_pwatcher_bridge - DEBUG - Result of watcher.run()={'submitted': ['P2d1aed37957c1c']}
2017-04-11 16:28:19,162 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 1 (max_jobs=8)
2017-04-11 16:28:19,162 - pwatcher.fs_based - DEBUG - query(which='list', jobids=<1>)
2017-04-11 16:28:19,163 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P2d1aed37957c1c
2017-04-11 16:28:19,163 - pypeflow.simple_pwatcher_bridge - INFO - sleep 0.1s
2017-04-11 16:28:19,264 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 1 (max_jobs=8)
2017-04-11 16:28:19,264 - pwatcher.fs_based - DEBUG - query(which='list', jobids=<1>)
2017-04-11 16:28:19,264 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P2d1aed37957c1c
2017-04-11 16:28:19,264 - pypeflow.simple_pwatcher_bridge - INFO - sleep 0.2s
2017-04-11 16:28:19,465 - pypeflow.simple_pwatcher_bridge - DEBUG - N in queue: 1 (max_jobs=8)
2017-04-11 16:28:19,465 - pwatcher.fs_based - DEBUG - query(which='list', jobids=<1>)
2017-04-11 16:28:19,465 - pwatcher.fs_based - DEBUG - Status RUNNING for heartbeat:heartbeat-P2d1aed37957c1c
2017-04-11 16:28:19,466 - pypeflow.simple_pwatcher_bridge - INFO - sleep 0.3s
[... the same four lines (N in queue / query / Status RUNNING for heartbeat-P2d1aed37957c1c / sleep) keep repeating, with the sleep interval growing from 0.4s up to the 10s cap -- trimmed ...]

This is the PBS script:

#!/bin/bash
#PBS -N falcon
#PBS -q batch
#PBS -o out.log
#PBS -e err.log

# load dependencies

# source build
cd /home/scbb/FALCON-integrate
source env.sh

# navigate to job directory
cd /home/scbb/FALCON-integrate/FALCON-examples/run/picro

# run it!
fc_run.py fc_run.cfg
#echo $PBS_NODEFILE
