galaxyproject / ansible-galaxy-extras

Ansible roles to configure assorted components for an Ubuntu VM or container configured with https://github.com/galaxyproject/ansible-galaxy with production services including nginx, uwsgi, supervisor, proftpd, and slurm.

License: Other


ansible-galaxy-extras's Introduction

Ansible Galaxy Extras

This Ansible role is for building out some production services on top of Galaxy - the so-called @natefoo stack - uWSGI, NGINX, Proftpd, and supervisor.

Requirements

The role has been developed and tested on Ubuntu 14.04. It requires sudo access.

Dependencies

This role assumes Galaxy has already been installed and configured (for instance with the Galaxy role).

Role variables

All of the listed variables are stored in defaults/main.yml. Individual variables can be set or overridden by setting them directly in a playbook for this role. Alternatively, they can be set by creating a group_vars directory in the root directory of the playbook used to execute this role and placing a file with the variables there. Note that the name of this file must match the value of the hosts setting in the corresponding playbook.
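For example, the group_vars approach can be sketched as follows (the variable name galaxy_extras_install_packages appears elsewhere in this role; the hosts value galaxyservers is assumed purely for illustration):

```shell
# Sketch: overriding a role default via group_vars. The file name must
# match the `hosts` value of the playbook (assumed here: galaxyservers).
mkdir -p group_vars
cat > group_vars/galaxyservers <<'EOF'
galaxy_extras_install_packages: false
EOF
cat group_vars/galaxyservers
# prints: galaxy_extras_install_packages: false
```

The same override could also be passed on the command line with `--extra-vars "galaxy_extras_install_packages=false"`.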

Additional Documentation

Much of the functionality of these Ansible roles can be gleaned by reading through defaults/main.yml; however, we've also provided some additional documentation under docs/.

Example Usage

See planemo-machine for an example of how to use this role.

Code of Conduct

Please note that this project follows the Galaxy Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Project Organization

See the Project Organization document for a description of project governance.

ansible-galaxy-extras's People

Contributors

abretaud, afgane, angellandros, bgruening, chambm, dannon, drosofff, fabiorjvieira, hexylena, ilveroluca, jmchilton, jvanbraekel, kellrott, luke-c-sargent, manabuishii, matthdsm, mr-c, mvdbeek, natefoo, pcm32, pvanheus, rhpvorderman, thobalose, thomaswollmann


ansible-galaxy-extras's Issues

additional support for pbs-clusters

Hi,

I made some changes to this role to include support for pbs-torque clusters; would you like me to send a pull request so these changes can be checked out and imported into the project?

Greetings
Matthias

Conflict for PYTHONHOME with conda

In a fairly standard supervisor setup we get the following handler section

[program:handler]
command         = /home/galaxy/galaxy/.venv/bin/python ./lib/galaxy/main.py -c /home/galaxy/galaxy/config/galaxy.ini --server-name=handler%(process_num)s --log-file=/home/galaxy/galaxy/handler%(process_num)s.log
directory       = /home/galaxy/galaxy
process_name    = handler%(process_num)s
numprocs        = 1
umask           = 022
autostart       = true
autorestart     = true
startsecs       = 20
user            = galaxy
environment     = PYTHON_EGG_CACHE=/home/galaxy/.python-eggs, PYTHONHOME=/home/galaxy/galaxy.venv
startretries    = 15

PYTHONHOME is in conflict with conda, so when trying to resolve a conda module we get:

Traceback (most recent call last):
  File "/home/galaxy/conda/bin/conda", line 3, in <module>
    from conda.cli import main
ImportError: No module named conda.cli

(the same traceback is repeated several more times)

When removing PYTHONHOME we get

Traceback (most recent call last):
  File "/home/galaxy/galaxy/database/jobs/000/15/set_metadata_Smnwjn.py", line 1, in <module>
    from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()
  File "/home/galaxy/galaxy/lib/galaxy_ext/metadata/set_metadata.py", line 23, in <module>
    from sqlalchemy.orm import clear_mappers
ImportError: No module named sqlalchemy.orm

I'll try to see if we can satisfy this just by installing sqlalchemy into the conda root environment, but even if that works it's quite hacky. Suggestions, @bgruening @natefoo @jmchilton ?
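One possible direction (a sketch only, not the adopted fix; paths illustrative) would be to keep PYTHONHOME in supervisor's environment for the Galaxy handlers but clear it just for conda invocations, e.g. via a small wrapper:

```shell
# PYTHONHOME is set globally (as in the supervisor config above)...
export PYTHONHOME=/home/galaxy/galaxy/.venv
# ...but removed from the environment only for the conda call:
env -u PYTHONHOME sh -c 'echo "conda would see PYTHONHOME=${PYTHONHOME:-unset}"'
# prints: conda would see PYTHONHOME=unset
```

Wrapping /home/galaxy/conda/bin/conda in such an `env -u PYTHONHOME` shim would avoid the ImportError without dropping PYTHONHOME for set_metadata jobs.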

Task fails when installing requirements.txt

task

TASK [galaxyprojectdotorg.galaxyextras : Run common_startup.sh to create welcome.html from sample]

package

Could not find a version that satisfies the requirement repoze.lru==0.6 (from -r requirements.txt (line 19)) (from versions: )", "No matching distribution found for repoze.lru==0.6 (from -r requirements.txt (line 19))

Install autofs system package breaks GalaxyKickStart

@rhpvorderman @afgane
Just to let you know that PR #160 broke the GalaxyKickStart Travis tests.
Stricto sensu the PR works, except that the supervisor autofs program is in a FATAL state. However, the test fails because we build a Docker container starting from artbio/ansible-galaxy-os: there, the task [galaxyprojectdotorg.galaxy-extras : Install autofs system package] fails due to a failure in the autofs package installation (see https://travis-ci.org/ARTbio/GalaxyKickStart/builds/261895456#L1443).

I'll try to work on a fix for this. In the meantime, I will pin the ansible-galaxy-extras role to commit 03e894f. What would be cool in addition would be to condition cvmfs_client.yml execution on a variable other than galaxy_extras_install_packages. That way, GKS could easily skip the cvmfs stuff that is not currently required.

nginx - check for apache first ?

Hi,

a couple of people, myself included, have had problems with this role (using GalaxyKickStart) in that nginx does not install cleanly.

It seems this is related to nginx-extras not being configurable properly, so that nginx-core etc. cannot be installed due to missing dependencies. The error message from Ansible is convoluted and difficult to decipher (see the first post in the issue below).

ARTbio/GalaxyKickStart#230

I think it might make sense to check for apache2 and set state to stopped before attempting to install nginx. This would help for all of those without sufficient resources to start mint VMs for each install.
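The proposed pre-check could be sketched like this (the apache2ctl binary and apache2 service names are the usual ones on Ubuntu, assumed here; the actual role task would presumably use Ansible's service module):

```shell
# Detect an existing Apache install and stop it before nginx is installed,
# so the two don't fight over port 80.
if command -v apache2ctl >/dev/null 2>&1; then
    echo "apache2 detected - stop it before installing nginx"
    # sudo service apache2 stop
else
    echo "apache2 not detected - nothing to stop"
fi
```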

Any thoughts ?

Thanks

Fix role testing

Hi @bgruening
I am trying to fix up the tests of the extras role a bit.
What I am currently trying to figure out is how the galaxy-postgresql Ansible role is downloaded during ci_test_docker_galaxy.sh execution, as a requirement for the Docker build of the galaxy-docker/test image.
In fact, I see that the Docker build fails when the playbook task "Stop and remove postgresql" is run

TASK [galaxyprojectdotorg.galaxyextras : Stop and remove postgresql.] **********
failed: [localhost] (item=postgresql) => {"changed": false, "item": "postgresql", "msg": "Could not find the requested service postgresql: host"}

RUNNING HANDLER [galaxyprojectdotorg.galaxyextras : restart nginx] *************
	to retry, use: --limit @/ansible/provision.retry

PLAY RECAP *********************************************************************
localhost                  : ok=43   changed=34   unreachable=0    failed=1

And I can also see that this is to be expected, since the role is empty in the container, maybe (though I am not sure at all) because the GitHub submodule galaxy-postgresql had not been explicitly included in the test code.

munge installation on Ubuntu 16.04

As far as I can test, the task https://github.com/galaxyproject/ansible-galaxy-extras/blob/master/tasks/slurm.yml#L7 fails on Ubuntu 16.04.
The issue I see comes from a failure in apt-get install munge and is related to https://bugs.launchpad.net/ubuntu/+source/munge/+bug/1287624/comments/11.

If, before running the playbook, I manually install munge (which fails) and fix it as indicated:

1. sudo systemctl edit --system --full munge.service
   While in the editor, append " --syslog" to the ExecStart line:
   ExecStart=/usr/sbin/munged --syslog

2. sudo systemctl enable munge.service

3. sudo systemctl start munge.service

Then, the task https://github.com/galaxyproject/ansible-galaxy-extras/blob/master/tasks/slurm.yml#L7 can run to completion.

Let me know if I missed something. If not, I can work on a fix in the playbook.
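The manual steps above could be scripted roughly as follows. `systemctl edit --system --full munge.service` is interactive, so this sketch applies the same one-line change to a local copy of the unit file to show the edit (on a real host the unit lives at a systemd path such as /lib/systemd/system/munge.service):

```shell
# Minimal stand-in for the stock unit file:
cat > munge.service <<'EOF'
[Service]
ExecStart=/usr/sbin/munged
EOF

# Append --syslog to the ExecStart line, as the Launchpad workaround suggests:
sed -i 's|^ExecStart=/usr/sbin/munged.*|ExecStart=/usr/sbin/munged --syslog|' munge.service
grep '^ExecStart=' munge.service
# prints: ExecStart=/usr/sbin/munged --syslog
```

On the real host this would be followed by `systemctl daemon-reload`, `systemctl enable munge.service`, and `systemctl start munge.service`, as in the steps above.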

Releases

I would like to start making releases for all of the galaxyproject ansible roles, so that we can reference a release instead of a commit when specifying these roles as dependencies.
Is that OK for everyone? ping @afgane @bgruening @jmchilton @natefoo @dannon

Reducing container startup time in non-export mode

Container startup time largely depends on the size of the data that is copied to /export. This gets more and more important for containers with many tools. For the use-once-remove-later use case this copy step is not needed and just slows down startup. During Travis testing it can also take up to 10 min and is useless.

I'm searching for the most elegant and stable way to detect whether something is mounted in /export. If nothing is mounted, I would like to symlink everything into /export rather than copy it. We could check the UID/GID of /export, but this is probably not stable enough. What about putting an empty file into /export at build time? If the file exists, nothing is mounted; if it does not exist, something is mounted.

Any comment or implementation ;) is appreciated.
Thanks,
Bjoern
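The build-time marker idea could be sketched like this (the marker file name is hypothetical, and a temporary directory stands in for /export so the snippet is self-contained): the image creates the file at build time, so if it is missing at startup a volume must have been mounted over /export and hidden it.

```shell
# Stand-in for /export in this demo:
EXPORT_DIR=$(mktemp -d)

# Done once at image build time:
touch "$EXPORT_DIR/.in-image-marker"

# Done at container startup:
if [ -e "$EXPORT_DIR/.in-image-marker" ]; then
    echo "nothing mounted: symlink into $EXPORT_DIR instead of copying"
else
    echo "volume mounted: copy data into $EXPORT_DIR as before"
fi
```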

Postgres as NONUSE?

Would it be possible to consider postgres as NONUSE? For example, on installations that already have a centralized Postgres server, this would come in handy. This would probably require changing both galaxy.ini and proftpd.conf...

Cannot build with ie_proxy.yml

It works ok if I turn off ie_proxy.

TASK [galaxyprojectdotorg.galaxyextras : Install proxy dependencies (>=19.01).] ***
fatal: [localhost]: FAILED! => {
   "changed":true,
   "cmd":". /galaxy_venv/bin/activate && npm config set strict-ssl false && npm install && deactivate",
   "delta":"0:01:15.761056",
   "end":"2021-12-13 19:54:13.628163",
   "msg":"non-zero return code",
   "rc":1,
   "start":"2021-12-13 19:52:57.867107",
   "stderr":"npm WARN deprecated [email protected]: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future\nnode-pre-gyp WARN Using needle for node-pre-gyp https download \nnode-pre-gyp WARN Tried to download(403): https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v4.2.0/node-v83-linux-x64.tar.gz \nnode-pre-gyp WARN Pre-built binaries not found for [email protected] and [email protected] (node-v83 ABI, glibc) (falling back to source compile with node-gyp) \ngyp ERR! build error \ngyp ERR! stack Error: not found: make\ngyp ERR! stack     at getNotFoundError (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:13:12)\ngyp ERR! stack     at F (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:68:19)\ngyp ERR! stack     at E (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:80:29)\ngyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:89:16\ngyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/isexe/index.js:42:5\ngyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/isexe/mode.js:8:5\ngyp ERR! stack     at FSReqCallback.oncomplete (fs.js:183:21)\ngyp ERR! System Linux 4.15.0-45-generic\ngyp ERR! command \"/galaxy_venv/bin/node\" \"/galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"build\" \"--fallback-to-build\" \"--module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node\" \"--module_name=node_sqlite3\" \"--module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64\" \"--napi_version=7\" \"--node_abi_napi=napi\" \"--napi_build_version=0\" \"--node_napi_label=node-v83\"\ngyp ERR! cwd /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3\ngyp ERR! node -v v14.15.0\ngyp ERR! node-gyp -v v5.1.0\ngyp ERR! not ok \nnode-pre-gyp ERR! 
build error \nnode-pre-gyp ERR! stack Error: Failed to execute '/galaxy_venv/bin/node /galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64 --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)\nnode-pre-gyp ERR! stack     at ChildProcess.<anonymous> (/galaxy-central/lib/galaxy/web/proxy/js/node_modules/node-pre-gyp/lib/util/compile.js:83:29)\nnode-pre-gyp ERR! stack     at ChildProcess.emit (events.js:315:20)\nnode-pre-gyp ERR! stack     at maybeClose (internal/child_process.js:1048:16)\nnode-pre-gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)\nnode-pre-gyp ERR! System Linux 4.15.0-45-generic\nnode-pre-gyp ERR! command \"/galaxy_venv/bin/node\" \"/galaxy-central/lib/galaxy/web/proxy/js/node_modules/.bin/node-pre-gyp\" \"install\" \"--fallback-to-build\"\nnode-pre-gyp ERR! cwd /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3\nnode-pre-gyp ERR! node -v v14.15.0\nnode-pre-gyp ERR! node-pre-gyp -v v0.11.0\nnode-pre-gyp ERR! not ok \nnpm ERR! code ELIFECYCLE\nnpm ERR! errno 1\nnpm ERR! [email protected] install: `node-pre-gyp install --fallback-to-build`\nnpm ERR! Exit status 1\nnpm ERR! \nnpm ERR! Failed at the [email protected] install script.\nnpm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR!     /home/galaxy/.npm/_logs/2021-12-13T19_54_13_605Z-debug.log",
   "stderr_lines":[
      "npm WARN deprecated [email protected]: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future",
      "node-pre-gyp WARN Using needle for node-pre-gyp https download ",
      "node-pre-gyp WARN Tried to download(403): https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v4.2.0/node-v83-linux-x64.tar.gz ",
      "node-pre-gyp WARN Pre-built binaries not found for [email protected] and [email protected] (node-v83 ABI, glibc) (falling back to source compile with node-gyp) ",
      "gyp ERR! build error ",
      "gyp ERR! stack Error: not found: make",
      "gyp ERR! stack     at getNotFoundError (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:13:12)",
      "gyp ERR! stack     at F (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:68:19)",
      "gyp ERR! stack     at E (/galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:80:29)",
      "gyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/which/which.js:89:16",
      "gyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/isexe/index.js:42:5",
      "gyp ERR! stack     at /galaxy_venv/lib/node_modules/npm/node_modules/isexe/mode.js:8:5",
      "gyp ERR! stack     at FSReqCallback.oncomplete (fs.js:183:21)",
      "gyp ERR! System Linux 4.15.0-45-generic",
      "gyp ERR! command \"/galaxy_venv/bin/node\" \"/galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"build\" \"--fallback-to-build\" \"--module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node\" \"--module_name=node_sqlite3\" \"--module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64\" \"--napi_version=7\" \"--node_abi_napi=napi\" \"--napi_build_version=0\" \"--node_napi_label=node-v83\"",
      "gyp ERR! cwd /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3",
      "gyp ERR! node -v v14.15.0",
      "gyp ERR! node-gyp -v v5.1.0",
      "gyp ERR! not ok ",
      "node-pre-gyp ERR! build error ",
      "node-pre-gyp ERR! stack Error: Failed to execute '/galaxy_venv/bin/node /galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64 --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)",
      "node-pre-gyp ERR! stack     at ChildProcess.<anonymous> (/galaxy-central/lib/galaxy/web/proxy/js/node_modules/node-pre-gyp/lib/util/compile.js:83:29)",
      "node-pre-gyp ERR! stack     at ChildProcess.emit (events.js:315:20)",
      "node-pre-gyp ERR! stack     at maybeClose (internal/child_process.js:1048:16)",
      "node-pre-gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)",
      "node-pre-gyp ERR! System Linux 4.15.0-45-generic",
      "node-pre-gyp ERR! command \"/galaxy_venv/bin/node\" \"/galaxy-central/lib/galaxy/web/proxy/js/node_modules/.bin/node-pre-gyp\" \"install\" \"--fallback-to-build\"",
      "node-pre-gyp ERR! cwd /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3",
      "node-pre-gyp ERR! node -v v14.15.0",
      "node-pre-gyp ERR! node-pre-gyp -v v0.11.0",
      "node-pre-gyp ERR! not ok ",
      "npm ERR! code ELIFECYCLE",
      "npm ERR! errno 1",
      "npm ERR! [email protected] install: `node-pre-gyp install --fallback-to-build`",
      "npm ERR! Exit status 1",
      "npm ERR! ",
      "npm ERR! Failed at the [email protected] install script.",
      "npm ERR! This is probably not a problem with npm. There is likely additional logging output above.",
      "",
      "npm ERR! A complete log of this run can be found in:",
      "npm ERR!     /home/galaxy/.npm/_logs/2021-12-13T19_54_13_605Z-debug.log"
   ],
   "stdout":"\n> [email protected] install /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3\n> node-pre-gyp install --fallback-to-build\n\nFailed to execute '/galaxy_venv/bin/node /galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64 --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)",
   "stdout_lines":[
      "",
      "> [email protected] install /galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3",
      "> node-pre-gyp install --fallback-to-build",
      "",
      "Failed to execute '/galaxy_venv/bin/node /galaxy_venv/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/binding/node-v83-linux-x64 --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)"
   ]
}

RUNNING HANDLER [galaxyprojectdotorg.galaxyextras : restart nginx] *************
        to retry, use: --limit @/ansible/provision.retry

PLAY RECAP *********************************************************************
localhost                  : ok=45   changed=36   unreachable=0    failed=1

The command '/bin/sh -c mkdir -p /shed_tools $EXPORT_DIR/ftp/     && chown $GALAXY_USER:$GALAXY_USER /shed_tools $EXPORT_DIR/ftp     && ln -s /tool_deps/ $EXPORT_DIR/tool_deps     && chown $GALAXY_USER:$GALAXY_USER $EXPORT_DIR/tool_deps     && apt update -qq && apt install --no-install-recommends -y ansible     && ansible-playbook /ansible/provision.yml     --extra-vars galaxy_venv_dir=$GALAXY_VIRTUAL_ENV     --extra-vars galaxy_log_dir=$GALAXY_LOGS_DIR     --extra-vars galaxy_user_name=$GALAXY_USER     --extra-vars galaxy_config_file=$GALAXY_CONFIG_FILE     --extra-vars galaxy_config_dir=$GALAXY_CONFIG_DIR     --extra-vars galaxy_job_conf_path=$GALAXY_CONFIG_JOB_CONFIG_FILE     --extra-vars galaxy_job_metrics_conf_path=$GALAXY_CONFIG_JOB_METRICS_CONFIG_FILE     --extra-vars supervisor_manage_slurm=""     --extra-vars galaxy_extras_config_condor=True     --extra-vars galaxy_extras_config_condor_docker=True     --extra-vars galaxy_extras_config_rabbitmq=True     --extra-vars galaxy_extras_config_cvmfs=True     --extra-vars galaxy_extras_config_uwsgi=False     --extra-vars proftpd_db_connection=galaxy@galaxy     --extra-vars proftpd_files_dir=$EXPORT_DIR/ftp     --extra-vars proftpd_use_sftp=True     --extra-vars galaxy_extras_docker_legacy=False     --extra-vars galaxy_minimum_version=19.01     --extra-vars supervisor_postgres_config_path=$PG_CONF_DIR_DEFAULT/postgresql.conf     --extra-vars supervisor_postgres_autostart=false     --extra-vars nginx_use_remote_header=True     --tags=galaxyextras,cvmfs -c local     && . $GALAXY_VIRTUAL_ENV/bin/activate     && pip install WeasyPrint     && deactivate     && cd $GALAXY_ROOT && ./scripts/common_startup.sh     && cd config && find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +     && find . 
-name '.cache' -type d -prune -exec rm -rf '{}' +     && cd /     && rm $PG_DATA_DIR_DEFAULT -rf     && python /usr/local/bin/setup_postgresql.py --dbuser galaxy --dbpassword galaxy --db-name galaxy --dbpath $PG_DATA_DIR_DEFAULT --dbversion $PG_VERSION     && service postgresql start     && service postgresql stop     && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && rm -rf ~/.cache/     && find $GALAXY_ROOT/ -name '*.pyc' -delete | true     && find /usr/lib/ -name '*.pyc' -delete | true     && find /var/log/ -name '*.log' -delete | true     && find $GALAXY_VIRTUAL_ENV -name '*.pyc' -delete | true     && rm -rf /tmp/* /root/.cache/ /var/cache/* $GALAXY_ROOT/client/node_modules/ $GALAXY_VIRTUAL_ENV/src/ /home/galaxy/.cache/ /home/galaxy/.npm' returned a non-zero code: 2

npm proxy dependencies installation failing

Hi all!

Currently installation of npm packages for proxy is failing like this:

TASK [galaxyprojectdotorg.galaxyextras : Install proxy dependencies.] **********
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "cd /export/galaxy-central/lib/galaxy/web/proxy/js && npm install", "delta": "0:01:10.787209", "end": "2018-04-28 08:15:51.105353", "msg": "non-zero return code", "rc": 1, "start": "2018-04-28 08:14:40.318144", "stderr": "npm http GET https://registry.npmjs.org/sqlite3/3.1.8\nnpm http GET https://registry.npmjs.org/http-proxy/1.6.0\nnpm http GET https://registry.npmjs.org/commander\nnpm http GET https://registry.npmjs.org/eventemitter3/0.1.6\nnpm http GET https://registry.npmjs.org/eventemitter3/0.1.6\nnpm http GET https://registry.npmjs.org/sqlite3/3.1.8\nnpm http GET https://registry.npmjs.org/http-proxy/1.6.0\nnpm http GET https://registry.npmjs.org/commander\nnpm http GET https://registry.npmjs.org/eventemitter3/0.1.6\nnpm http GET https://registry.npmjs.org/sqlite3/3.1.8\nnpm http GET https://registry.npmjs.org/http-proxy/1.6.0\nnpm http GET https://registry.npmjs.org/commander\nnpm ERR! Error: CERT_UNTRUSTED\nnpm ERR!     at SecurePair.<anonymous> (tls.js:1370:32)\nnpm ERR!     at SecurePair.EventEmitter.emit (events.js:92:17)\nnpm ERR!     at SecurePair.maybeInitFinished (tls.js:982:10)\nnpm ERR!     at CleartextStream.read [as _read] (tls.js:469:13)\nnpm ERR!     at CleartextStream.Readable.read (_stream_readable.js:320:10)\nnpm ERR!     at EncryptedStream.write [as _write] (tls.js:366:25)\nnpm ERR!     at doWrite (_stream_writable.js:223:10)\nnpm ERR!     at writeOrBuffer (_stream_writable.js:213:5)\nnpm ERR!     at EncryptedStream.Writable.write (_stream_writable.js:180:11)\nnpm ERR!     at write (_stream_readable.js:583:24)\nnpm ERR!     at flow (_stream_readable.js:592:7)\nnpm ERR!     at Socket.pipeOnReadable (_stream_readable.js:624:5)\nnpm ERR! If you need help, you may report this log at:\nnpm ERR!     <http://github.com/isaacs/npm/issues>\nnpm ERR! or email it to:\nnpm ERR!     <[email protected]>\n\nnpm ERR! System Linux 4.9.87-linuxkit-aufs\nnpm ERR! 
command \"/usr/bin/nodejs\" \"/usr/bin/npm\" \"install\"\nnpm ERR! cwd /galaxy-export/galaxy-central/lib/galaxy/web/proxy/js\nnpm ERR! node -v v0.10.25\nnpm ERR! npm -v 1.3.10\nnpm ERR! \nnpm ERR! Additional logging details can be found in:\nnpm ERR!     /galaxy-export/galaxy-central/lib/galaxy/web/proxy/js/npm-debug.log\nnpm ERR! not ok code 0", "stderr_lines": ["npm http GET https://registry.npmjs.org/sqlite3/3.1.8", "npm http GET https://registry.npmjs.org/http-proxy/1.6.0", "npm http GET https://registry.npmjs.org/commander", "npm http GET https://registry.npmjs.org/eventemitter3/0.1.6", "npm http GET https://registry.npmjs.org/eventemitter3/0.1.6", "npm http GET https://registry.npmjs.org/sqlite3/3.1.8", "npm http GET https://registry.npmjs.org/http-proxy/1.6.0", "npm http GET https://registry.npmjs.org/commander", "npm http GET https://registry.npmjs.org/eventemitter3/0.1.6", "npm http GET https://registry.npmjs.org/sqlite3/3.1.8", "npm http GET https://registry.npmjs.org/http-proxy/1.6.0", "npm http GET https://registry.npmjs.org/commander", "npm ERR! Error: CERT_UNTRUSTED", "npm ERR!     at SecurePair.<anonymous> (tls.js:1370:32)", "npm ERR!     at SecurePair.EventEmitter.emit (events.js:92:17)", "npm ERR!     at SecurePair.maybeInitFinished (tls.js:982:10)", "npm ERR!     at CleartextStream.read [as _read] (tls.js:469:13)", "npm ERR!     at CleartextStream.Readable.read (_stream_readable.js:320:10)", "npm ERR!     at EncryptedStream.write [as _write] (tls.js:366:25)", "npm ERR!     at doWrite (_stream_writable.js:223:10)", "npm ERR!     at writeOrBuffer (_stream_writable.js:213:5)", "npm ERR!     at EncryptedStream.Writable.write (_stream_writable.js:180:11)", "npm ERR!     at write (_stream_readable.js:583:24)", "npm ERR!     at flow (_stream_readable.js:592:7)", "npm ERR!     at Socket.pipeOnReadable (_stream_readable.js:624:5)", "npm ERR! If you need help, you may report this log at:", "npm ERR!     <http://github.com/isaacs/npm/issues>", "npm ERR! 
or email it to:", "npm ERR!     <[email protected]>", "", "npm ERR! System Linux 4.9.87-linuxkit-aufs", "npm ERR! command \"/usr/bin/nodejs\" \"/usr/bin/npm\" \"install\"", "npm ERR! cwd /galaxy-export/galaxy-central/lib/galaxy/web/proxy/js", "npm ERR! node -v v0.10.25", "npm ERR! npm -v 1.3.10", "npm ERR! ", "npm ERR! Additional logging details can be found in:", "npm ERR!     /galaxy-export/galaxy-central/lib/galaxy/web/proxy/js/npm-debug.log", "npm ERR! not ok code 0"], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/ansible/provision.retry

PLAY RECAP *********************************************************************
localhost                  : ok=16   changed=14   unreachable=0    failed=1  

It seems that their certificate has expired or something...

People have apparently been reporting it here (although I have builds from after that date that did work; only a recent build from 18 hours ago failed due to this):
npm/npm#20203

In the meantime, a dirty workaround seems to be to do npm config set strict-ssl false. If someone PRs this, please do it in a very clean PR, so that I can then cherry-pick that commit onto our 17.09-based branch. Thanks!

Issue with PR #127

Hi all,

the last PR #127 breaks nginx restarts
in GalaxyKickStart https://github.com/ARTbio/GalaxyKickStart

RUNNING HANDLER [galaxyprojectdotorg.galaxy-extras : restart nginx] ************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/sbin/nginx", "-s", "reload"], "delta": "0:00:00.005885", "end": "2017-04-23 16:18:23.930299", "failed": true, "rc": 1, "start": "2017-04-23 16:18:23.924414", "stderr": "nginx: [emerg] open() \"/etc/nginx/conf.d/reports_auth.conf\" failed (2: No such file or directory) in /etc/nginx/nginx.conf:36", "stdout": "", "stdout_lines": [], "warnings": []}
...ignoring

This may be an issue in templates/startup.sh.j2 (and the cp of reports_auth.conf.source); I am investigating.

Should there be an SMTP server ansible role/tasks in this repo?

Galaxy has hooks in galaxy.ini for an SMTP server to send email relating to, for example, resetting passwords. I'm not sure how usegalaxy does this, but for @bgruening's Docker image, I'd love it if there were an option to have an SMTP server painlessly. I've taken a look at https://github.com/debops/ansible-postfix and I think we may be able to integrate postfix into this repo fairly easily, either as a parallel role or just as a bunch of tasks. Would that be appropriate for this repo? @jmchilton @natefoo @martenson

SFTP versus FTP, and what is the policy on this repo?

Hello ansible-galaxy-extras developers!
I am building a "flavor" of Galaxy, working off the great work of @bgruening, and for our purposes we want SFTP instead of FTP. I've introduced a PR (#83) which does this. I'd love it if you had a look.

But the real reason I'm raising this issue is that there are some other modifications I'd like to make, which aren't built into this "galaxy extras" repo yet, and I was wondering whether it would be appropriate to open those as pull requests or not. For example, I'd like to have an ansible playbook built in which configures HTTPS using letsencrypt/certbot so that when someone does docker run on @bgruening 's galaxy image or my own derivative image, their galaxy uses HTTPS by default (if they've configured it that way, using a GALAXY_CONFIG flag... see Bjoern's README for details).

Another modification I'd like to try is Globus/GridFTP which I think fits here, since FTP is already here.

Which brings me to my question: if I want to make a couple of modifications like these, do they belong in this repo, perhaps turned off by default? The README of this repo says this is the "@natefoo" stack, and if these technologies are not part of your stack I should probably do this somewhere else, right? @jmchilton

Thanks!

Default to more traditional SLURM configuration

docker-galaxy-stable and planemo-machine both need to dynamically configure SLURM at runtime, but this isn't appropriate for a more traditional cluster. Separating out the traditional SLURM setup from the crazy runtime SLURM stuff would make the role more useful.

User created directly through database connection generates errors when using it for API operations

I'm testing the use of ephemeris inside ansible-galaxy-extras in the context of the compose docker containers, using the user created in Galaxy through:

python /usr/local/bin/create_galaxy_user.py --user "$GALAXY_DEFAULT_ADMIN_USER" --password "$GALAXY_DEFAULT_ADMIN_PASSWORD" -c "$GALAXY_CONFIG_FILE" --key "$GALAXY_DEFAULT_ADMIN_KEY"

for executing ephemeris's workflow-install -g http://127.0.0.1 -u <user-email> -p <password> -w <path-to-workflows>. I get the following error on this last execution:

bioblend.ConnectionError: Unexpected HTTP status code: 500: {"err_msg": "Uncaught exception in exposed API method:", "err_code": 0}

In the Galaxy logs, you see the following error:

galaxy.web.framework.decorators ERROR 2018-02-05 21:08:07,654 Uncaught exception in exposed API method:
Traceback (most recent call last):
  File "lib/galaxy/web/framework/decorators.py", line 281, in decorator
    rval = func(self, trans, *args, **kwargs)
  File "lib/galaxy/webapps/galaxy/api/workflows.py", line 411, in import_new_workflow_deprecated
    return self.__api_import_new_workflow(trans, payload, **kwd)
  File "lib/galaxy/webapps/galaxy/api/workflows.py", line 535, in __api_import_new_workflow
    workflow, missing_tool_tups = self._workflow_from_dict(trans, data, **from_dict_kwds)
  File "lib/galaxy/web/base/controller.py", line 1266, in _workflow_from_dict
    exact_tools=exact_tools,
  File "lib/galaxy/managers/workflows.py", line 235, in build_workflow_from_dict
    trans.app.tag_handler.set_tags_from_list(user=trans.user, item=stored, new_tags_list=workflow_tags)
  File "lib/galaxy/managers/tags.py", line 60, in set_tags_from_list
    self.sa_session.flush()
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 157, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2019, in flush
    self._flush(objects)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2137, in _flush
    transaction.rollback(_capture_exception=True)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2101, in _flush
    flush_context.execute()
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 373, in execute
    rec.execute(self)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 532, in execute
    uow
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 174, in save_obj
    mapper, table, insert)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 800, in _emit_insert_statements
    execute(statement, params)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/export/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
IntegrityError: (psycopg2.IntegrityError) null value in column "user_id" violates not-null constraint
DETAIL:  Failing row contains (21, 2018-02-05 21:08:07.648718, 2018-02-05 21:08:07.648727, null, null, Fluxomics stationary 13C-MS iso2flux with visualization, f, f, null, t).
 [SQL: 'INSERT INTO stored_workflow (create_time, update_time, user_id, latest_workflow_id, name, deleted, importable, slug, published) VALUES (%(create_time)s, %(update_time)s, %(user_id)s, %(latest_workflow_id)s, %(name)s, %(deleted)s, %(importable)s, %(slug)s, %(published)s) RETURNING stored_workflow.id'] [parameters: {'update_time': datetime.datetime(2018, 2, 5, 21, 8, 7, 648727), 'user_id': None, 'importable': False, 'latest_workflow_id': None, 'deleted': False, 'create_time': datetime.datetime(2018, 2, 5, 21, 8, 7, 648718), 'published': True, 'slug': None, 'name': u'Fluxomics stationary 13C-MS iso2flux with visualization'}]

However, if you create another user manually on the Galaxy UI, and execute the same command with this second user, the command goes through and, in this case, workflows get imported.

My take on this is that /usr/local/bin/create_galaxy_user.py interacts with the database directly through SQLAlchemy (or some other Python layer) and might be skipping some setup that Galaxy requires for other API operations, like the workflow import shown here.

On a different container, we use the master API key to create the admin user through BioBlend, and then we make very similar API calls to create workflows, which go through without trouble (as with the second user here). Any ideas? I realise that this is not purely an ansible-galaxy-extras issue, of course. Should we aim to change the method used in /usr/local/bin/create_galaxy_user.py so that it does not interact with the database directly, bypassing Galaxy? Or is there a programmatic call that could somehow "rescue" the database-created user so that it works with the API calls mentioned? Or is it completely unrelated?
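For reference, a minimal sketch of the master-API-key approach mentioned above, using BioBlend. This is not the existing create_galaxy_user.py; the function name and all credentials/URLs are placeholders, and it assumes BioBlend is installed and a Galaxy with a configured master API key is reachable.

```python
def create_admin_via_api(url, master_key, username, email, password):
    """Create a user through Galaxy's API (so Galaxy runs its own
    user-creation code paths) instead of writing to the database.
    Returns the new user dict and a fresh API key for that user."""
    from bioblend.galaxy import GalaxyInstance

    master = GalaxyInstance(url=url, key=master_key)
    user = master.users.create_local_user(username, email, password)
    # Give the new user an API key so later ephemeris/BioBlend calls
    # (e.g. workflow-install) can authenticate as that user.
    user_key = master.users.create_user_apikey(user["id"])
    return user, user_key
```

A user created this way should behave like one created through the UI, since Galaxy itself performs the creation.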

Any comments @jmchilton @bgruening?

Thanks!

booleans not evaluated correctly in _when_ statements

If there is an and operator inside a when statement, booleans apparently get evaluated as strings. This shows up when setting galaxy_extras_config_slurm and supervisor_manage_slurm both to false and running with the default provision.yml used in bgruening/docker-galaxy-stable:

TASK [galaxy-extras : Stop and remove munge.] **********************************
failed: [localhost] (item=munge) => {"failed": true, "item": "munge", "msg": "no service or tool found for: munge"}
    to retry, use: --limit @provision.retry

PLAY RECAP *********************************************************************
localhost                  : ok=19   changed=2    unreachable=0    failed=1 

This is produced by:

- name: Stop and remove munge.
  service: name={{ item }} state=stopped enabled=no
  with_items:
    - munge
  when: galaxy_extras_config_slurm and supervisor_manage_slurm

The undesired effect (wrong evaluation of the complete boolean expression) is only seen when at least one of the two variables is false, as true/true seems to evaluate as defined, non-empty strings, and hence true. Unfortunately, there is little documentation on the Ansible site about this. It is solved by applying a |bool filter to each boolean in the when statement.
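Concretely, the fix amounts to changing only the when line of the task above:

```yaml
- name: Stop and remove munge.
  service: name={{ item }} state=stopped enabled=no
  with_items:
    - munge
  when: galaxy_extras_config_slurm|bool and supervisor_manage_slurm|bool
```

With the |bool filters in place, string values like "false" are coerced to the boolean false before the and is evaluated.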

This is when running on Ubuntu 14.04 and Ansible 2.1, as installed in bgruening/docker-galaxy-stable. I have tested the suggested changes, the final outcome is as expected (no Ansible failure, and installation according to the booleans set), and I will submit a PR soon.

Post-startup script

Hi there! I would like an option to provide an optional post-flight (post-startup) script, which admins could use for all sorts of things related to configuring the live instance (or anything else that requires a running, API-accessible Galaxy instance).

I would suggest adding some logic like:

if [[ -x "$POST_START_SCRIPT" ]]; then
    # ephemeris is installed already; maybe check for this
    galaxy-wait -g http://127.0.0.1 -v --timeout 120 > {{ galaxy_log_dir }}/post_start_actions.log && \
        "$POST_START_SCRIPT" >> {{ galaxy_log_dir }}/post_start_actions.log &
    # This waits for Galaxy to start without delaying the execution of the
    # final tail commands that run after the user setup section.
fi

after or within

# In case the user wants the default admin to be created, do so.

as you might need user authentication and an API key.

Then admins would have the liberty to set a $POST_START_SCRIPT and do things like adding users (for a course for instance), adding workflows, setting quotas, etc.

I realise that this opens the door for trouble, but that is the price of flexibility. For added security, we could have a standard location for the post-start script and refer to it directly (admins would then have to create the file with the proper content).

Would this be well received as a contribution? Alternatively, I can just PR the exact thing I need done optionally (which is to add workflows to a user and make them publishable).

Way of testing (ba)sh (boolean) parameter

The current convention for testing parameter values looks like this: if [ "x$PARAM_NAME" != "x" ]; then (see ansible-galaxy-extras/templates/startup.sh.j2, for instance). This hack probably stems from the lack of a proper boolean type in bash. I don't agree with this way of testing parameters: it is a 'not empty' test.

In particular, for boolean values it can lead to counter-intuitive situations: with command -PARAM_NAME=false, the 'if' branch will still be executed. I would suggest using a more specific test, like if [ "$PARAM_NAME" = true ]; then.

A convention should also be adopted for the format of boolean values, for consistency and to avoid situations like: command -PARAM1=True -Param2=false -Param3=0 -Param4=yes

Additionally, to avoid command -PARAM_NAME="IPutHereWhateverIWant", a proper error message (plus usage?) should be emitted when the parameter value doesn't match the expected ones.
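A small helper could implement both suggestions, the strict test and the error message. This is a sketch of the proposed convention, not existing code from the role; the names check_bool and NGINX_USE_SSL are illustrative.

```shell
# Accept only the literal strings "true" or "false"; fail loudly otherwise.
check_bool() {
    name="$1"
    value="$2"
    case "$value" in
        true|false) ;;
        *)
            echo "error: $name must be 'true' or 'false' (got '$value')" >&2
            return 1
            ;;
    esac
}

# Example use: validate first, then do a strict equality test instead of
# the 'not empty' test.
NGINX_USE_SSL="${NGINX_USE_SSL:-false}"
check_bool NGINX_USE_SSL "$NGINX_USE_SSL" || exit 1
if [ "$NGINX_USE_SSL" = true ]; then
    echo "SSL enabled"
fi
```

With this in place, -PARAM=True, -PARAM=0, and -PARAM=yes would all be rejected with a clear message instead of being silently treated as non-empty strings.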

Fetch DRMAA egg for Galaxy is now failing

see https://travis-ci.org/galaxyproject/ansible-galaxy-extras/jobs/395730003#L3753

On the command line I could reproduce the issue, with the following output:

gks:~/galaxy$ .venv/bin/python scripts/fetch_eggs.py -e drmaa -c galaxy/config/galaxy.ini

Eggs in this release of Galaxy have been replaced by Python's newer packaging
format, wheels. Please use scripts/common_startup.sh to set up your
environment:

cd /home/galaxy/galaxy && ./scripts/common_startup.sh

This will create a Python virtualenv and install Galaxy's dependencies into it.

If you start Galaxy using means other than run.sh (as you probably do if you
are seeing this message), be sure to activate the virtualenv before starting,
using:

. /home/galaxy/galaxy/.venv/bin/activate

If you already run Galaxy in its own virtualenv, you can reuse your existing
virtualenv with:

cd /home/galaxy/galaxy && ./scripts/common_startup.sh --skip-venv

revivifying Galaxy-related ansible playbook

Hi,
I am a bit disappointed that unmerged PRs accumulate in this repo and other related repos.

Is there a general issue that could be discussed with contributors?

Is there a better way to contribute to a collective effort on Galaxy Ansible playbooks in general? I would be happy to help, rather than letting our forks diverge forever.

LetsEncrypt updated their license -- automagic HTTPS signing in this repo no longer works.

In November of 2017, LetsEncrypt updated their license. The new license is here.

The letsencrypt.sh script no longer works. Replacing one line of it (updating the license) causes it to work again.

The ansible role in this repo just downloads the script and runs it. Changing one line in that file might be a little bit of a pain:

- name: Get letsencrypt script from github repository
  get_url:
    url: https://raw.githubusercontent.com/lukas2511/letsencrypt.sh/d81eb58536e3ae1170de3eda305688ae28d0575b/letsencrypt.sh
    dest: /usr/bin/letsencrypt.sh
    mode: "u=rwx,g=rx,o=r"
    force: no
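One possible approach to the one-line change: a follow-up task that patches the downloaded script in place. This is a hypothetical sketch, assuming the script declares the subscriber agreement URL in a line starting with LICENSE= (the regexp and the agreement URL below would need to be verified against the pinned script version).

```yaml
- name: Point letsencrypt.sh at the updated subscriber agreement
  lineinfile:
    path: /usr/bin/letsencrypt.sh
    regexp: '^LICENSE='
    line: 'LICENSE="https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf"'
```

This avoids vendoring the whole script just to change one line, though it would silently stop working if a future version renames the variable.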

We could upgrade our version of letsencrypt.sh, but unfortunately it has gone through major changes lately. It's now called Dehydrated and has some breaking changes: the entire way of getting certificates has changed somewhat, which means actually doing a full upgrade would take a non-trivial amount of work.

(cc @bgruening @mvdbeek)

Squashing commits in Pull Requests should be policy

Hello @jmchilton @dannon @natefoo @martenson @bgruening

I've merged a couple of PRs here and am hoping to merge more, in this repo as well as the main galaxy repo and others. I see that when my pull request was merged it was not squashed, meaning there are some intermediary commits in this repo's history. I personally expected my commits to be squashed automatically using GitHub's feature for doing so: https://help.github.com/articles/about-pull-request-merge-squashing/

I'm of the opinion that PRs should squash commits, but if you all disagree strongly, I can adapt my commit strategy. Please let me know what your thinking is.

How should we set variables at runtime -- variables without defaults...

Hello! @bgruening @martenson @jmchilton @mvdbeek @afgane

Sorry for raising so many issues. Today I figured out that the question I raised in #95 isn't my real issue, per se. What I want to do is run jobs which rely on variables that don't have defaults, and I was not sure how to do that.

In Ansible, there are two ways to get variables at runtime: a vars_prompt or a conditional include_vars. Neither of those exists in the repo yet.

A vars_prompt seems distasteful, because it would require hand-holding while Ansible is running. include_vars doesn't necessarily help, since I would still need to list defaults for the code to be self-documenting. The only upside is that those variables wouldn't exist unless some boolean flag is turned on.

I think I've decided to just follow what seems to be common practice and list, in main.yml, some variables that should always be overridden with --extra-vars. See #88 to see what I mean.
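The pattern amounts to something like the following sketch (variable names are illustrative, not the role's actual variables): give the must-override variables obviously invalid placeholder values in defaults/main.yml,

```yaml
# These defaults are placeholders and MUST be overridden at runtime.
galaxy_admin_user: CHANGEME
galaxy_admin_password: CHANGEME
```

and then supply the real values when running the playbook:

```
ansible-playbook provision.yml --extra-vars "galaxy_admin_user=admin@example.org galaxy_admin_password=s3cret"
```

This keeps the variables self-documenting without prompting during the run.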

How to launch an ansible task/role from startup.sh

@jmchilton, @mvdbeek, @afgane, @martenson
So for both my pull requests, #86 and #88, the tasks essentially rely on specifics of the Galaxy being launched. In #86, a username/password unique to the Galaxy admin is required, and in #88, a certificate specific to the Galaxy FQDN is created.

I'm using these tasks through @bgruening's beautiful docker image, but the problem is that the username/pw and certificate can't be bundled into the image -- they need to be given at docker run time. That means the tasks need to be called from startup.sh I believe.

Should I establish a new Ansible role for each of these, or is a set of tasks enough? How do I pass the variables (Globus username/password for #86 and Galaxy FQDN for #88) to the role? Or should I skip Ansible roles and just use shell scripts for each, called from startup.sh?
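For the role-based option, startup.sh could run a small playbook locally and pass the docker-run-time values through as extra vars. This is only a sketch; the playbook path, environment variable, and variable name below are assumptions, not anything that exists in the repo.

```shell
# Only run when the value was provided at `docker run` time.
if [ -n "$GALAXY_DOMAIN" ]; then
    ansible-playbook -c local -i localhost, /ansible/configure_https.yml \
        --extra-vars "galaxy_fqdn=$GALAXY_DOMAIN"
fi
```

The -c local connection avoids SSH inside the container, and guarding on the environment variable keeps the task optional.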

Thanks!

NGINX byte-range support is not configurable

JavaScript-whitelisted tools (with client-side-only visualizations) and some interactive environments might benefit from byte-range support through NGINX for some requests (I'm working on one that requires it). This is currently not configurable; in NGINX it is controlled through http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges

This is turned off by default.

If there is no opposition, I would like to add an optional setting for this.
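The optional setting could amount to a template snippet along these lines (the location and upstream names are illustrative, not taken from the role's actual nginx template):

```nginx
# Enable byte-range support on proxied responses for a specific location.
location /plugins/ {
    proxy_pass http://galaxy_app;
    # Adds Accept-Ranges: bytes even when the upstream does not; off by default.
    proxy_force_ranges on;
}
```

Gating the proxy_force_ranges line behind a boolean role variable would keep current behaviour unchanged by default.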
