girder / girder_worker
Distributed task execution engine with Girder integration, developed by Kitware
Home Page: http://girder-worker.readthedocs.io/
License: Apache License 2.0
[celery]
app_main=girder_worker
# broker=mongodb://127.0.0.1/girder_worker
broker=amqp://[email protected]/
[girder_worker]
# Root dir where temp files for jobs will be written
tmp_root=/tmp
# tmp_keep=true
# Comma-separated list of plugins to enable
plugins_enabled=r,girder_io,docker
# Colon-separated list of additional plugin loading paths
plugin_load_path=
If we need to move away from RabbitMQ, it might be possible to implement a results backend that puts the results directly into Girder.
http://docs.celeryproject.org/en/latest/userguide/tasks.html#task-result-backends
Mocking is only going to get us so far. We are going to need tests that actually run Celery tasks. Particularly for more complex workflows.
Hi,
I was interested to know whether girder_worker can use an XML or JSON spec to populate and run a workflow. I haven't seen anything like that in the docs so far.
Rather than hard-coding all the tasks of the workflow, an XML representation would make it more dynamic.
Thanks.
See #47 (comment).
This issue involves developing a way to enable girder_worker tasks to report progress using the named pipes that are currently used to stream read/write inputs/outputs.
It would be nice to abstract away the creation of progress reporting pipes in task code by providing an API function that can simply be called to report progress.
This would just be an additional metadata field on a Girder output binding that would get set on the item after any data upload.
Running girder_worker with the docker plugin enabled when dockerd was started with the flag --selinux-enabled results in errors relating to file access and chmod when attempting to run a container. Cf. the output below. Starting dockerd without this flag results in a clean run.
INFO:root:Created LRU Cache for 'tilesource' with 1934 maximum size
WARNING:ctk_cli.module:'reference' attribute of 'file' is not part of the spec yet (CTK issue #623)
>> CLI Parameters ...
Namespace(analysis_mag=20.0, analysis_roi=[14504.0, 17107.0, 767.0, 811.0], analysis_tile_size=4096.0, foreground_threshold=60.0, inputImageFile='/mnt/girder_worker/data/TCGA-02-0010-01Z-00-DX4.07de2e55-a8fe-40ee-9e98-bcb78050b9f7.svs/TCGA-02-0010-01Z-00-DX4.07de2e55-a8fe-40ee-9e98-bcb78050b9f7.svs', local_max_search_radius=10.0, max_radius=30.0, min_fgnd_frac=0.5, min_nucleus_area=80.0, min_radius=20.0, outputNucleiAnnotationFile='/mnt/girder_worker/data/output.anot', reference_mu_lab=[8.63234435, -0.11501964, 0.03868433], reference_std_lab=[0.57506023, 0.10403329, 0.01364062], scheduler_address='', stain_1='hematoxylin', stain_2='eosin', stain_3='null')
Traceback (most recent call last):
File "NucleiDetection/NucleiDetection.py", line 368, in <module>
main(CLIArgumentParser().parse_args())
File "NucleiDetection/NucleiDetection.py", line 182, in main
raise IOError('Input image file does not exist.')
IOError: Input image file does not exist.
[2017-04-25 10:44:36,516] ERROR: Error setting perms on docker tempdir /home/neal/work/DSA-dev/tmp/tmpYxc3a3.
STDOUT:
STDERR:chmod: /mnt/girder_worker/data: Permission denied
chmod: /mnt/girder_worker/data: Permission denied
Exception: Docker tempdir chmod returned code 1.
File "/home/neal/work/DSA-dev/virtualenv/lib/python2.7/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/neal/work/DSA-dev/virtualenv/lib/python2.7/site-packages/celery/app/trace.py", line 622, in __protected_call__
return self.run(*args, **kwargs)
File "/home/neal/work/DSA-dev/girder_worker/girder_worker/tasks.py", line 17, in run
return core.run(*pargs, **kwargs)
File "/home/neal/work/DSA-dev/girder_worker/girder_worker/core/utils.py", line 122, in wrapped
return fn(*args, **kwargs)
File "/home/neal/work/DSA-dev/girder_worker/girder_worker/core/__init__.py", line 366, in run
events.trigger('run.finally', info)
File "/home/neal/work/DSA-dev/girder_worker/girder_worker/core/events.py", line 73, in trigger
handler['handler'](e)
File "/home/neal/work/DSA-dev/girder_worker/girder_worker/plugins/docker/__init__.py", line 99, in task_cleanup
raise Exception('Docker tempdir chmod returned code %d.' % p.returncode)
This references the exception raised in the docker plugin when starting on a non-Linux platform. The native docker binaries for Windows and Mac have been out for a while now. I've been using the docker plugin on Mac without any issues. Perhaps we could replace the exception with a warning suggesting an update to native docker instead?
Right now we have no way of knowing if our packaging process works. We should do something akin to what we do in girder to test our sdist package works.
In the README, it says the name of the role is girder.girder_worker, but in Ansible Galaxy the role name is girder.girder-worker. Easy enough to fix in the README, but since this is already under the girder namespace, would it be better to name the role girder.worker?
In order to construct workflows more easily from separate JSON components, enable task_uri as an alternative to the task key in workflow steps. These would be resolved to task specs within girder_worker.load().
The Slicer CLI specification supports boolean input types, but does not allow them to be passed in the form of --flag=false, only --flag or nothing. We currently don't support this if the flag is a task input, but we could do so using a new token syntax in container_args, e.g.
$flag{--bool-flag-name}
which would be bound to a boolean input that would cause --bool-flag-name to be passed to the CLI if true, and omitted if false.
This is a silly inconsistency on my part when putting together the initial types and formats. The other basic types all use the type name as the default format (number, integer, boolean).
The least breaking way to do this is probably to duplicate the small number of converters to add a string format that behaves exactly like text, including a back-and-forth noop converter between the two formats. We can then remove text from the docs and eventually deprecate it.
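A sketch of what the duplicated converters might amount to. The converter-spec shape below is simplified for illustration; girder_worker's actual registry uses converter specs with scripts, so a plain function stands in here.

```python
# Simplified illustration of a noop converter pair between the 'text'
# and 'string' formats of the string type.
NOOP_CONVERTERS = [
    {'type': 'string', 'from': 'text', 'to': 'string', 'convert': lambda d: d},
    {'type': 'string', 'from': 'string', 'to': 'text', 'convert': lambda d: d},
]


def convert(type_name, from_fmt, to_fmt, data):
    if from_fmt == to_fmt:
        return data
    for c in NOOP_CONVERTERS:
        if (c['type'], c['from'], c['to']) == (type_name, from_fmt, to_fmt):
            return c['convert'](data)
    raise ValueError('no converter from %s to %s' % (from_fmt, to_fmt))
```

Since both directions are identity functions, the deprecation can happen gradually without any data-level breakage.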
If a task times out (via Celery), the task failure hook should be able to tell that the exception that caused the task to fail was a time limit exception. GW should then mark the Girder job as timed out rather than failed; this would require a new "Timed out" status to be added to Girder.
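A sketch of the failure-hook logic. Celery does raise celery.exceptions.SoftTimeLimitExceeded (and TimeLimitExceeded) on timeouts, but a stand-in exception class is defined below so the sketch is self-contained, and the "timed out" status value is the hypothetical new Girder status, not an existing one.

```python
# Map a task-failure exception to a Girder job status. In a real worker
# this would check celery.exceptions.SoftTimeLimitExceeded /
# TimeLimitExceeded; a stand-in class is used here.
class SoftTimeLimitExceeded(Exception):
    pass


JOB_ERROR = 'error'
JOB_TIMED_OUT = 'timed out'  # hypothetical new Girder job status


def job_status_for(exc):
    if isinstance(exc, SoftTimeLimitExceeded):
        return JOB_TIMED_OUT
    return JOB_ERROR
```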
This probably means adopting girder's style 😸
Currently, a bad type combined with a missing format will report only that the format field is missing. It would be better to tell the user which of the type or format is bad or missing.
This is a placeholder for a possible integration I wanted to capture. It currently has Python and R bindings, so would fit in well here.
This was introduced as an issue in PR #96.
This happens both with implicit pulls and explicit pulls, including running BusyBox.
Specifically, there is no equivalent of not specifying --all-tags for the pull command when accessed through Python. For BusyBox, we can fix this by adding :latest to the image name. For the general case, before we pull or run an image, we would need to check if there is a tag or digest. If not, we would need to query whether there is already a local image of the specified name, and, if not, ask the remote server (probably hub.docker.com) what tags are available, and then use one of them (latest being preferred).
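The tag/digest check described above could look something like this. This is a sketch, not an existing helper in the docker plugin; note the subtlety that a colon before the last slash is a registry port, not a tag.

```python
# Decide whether an image reference already carries a tag or digest, and
# append ':latest' when it does not. A ':' before the last '/' is a
# registry port (e.g. localhost:5000/busybox), not a tag.
def normalize_image_name(name):
    if '@' in name:          # pinned by digest, e.g. busybox@sha256:...
        return name
    last_part = name.rsplit('/', 1)[-1]
    if ':' in last_part:     # already has an explicit tag
        return name
    return name + ':latest'
```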
To activate plugins, they must be added to the config file line plugins_enabled=.
I've fixed this for my own case by adding the following to the bottom of pip.yml. Two problems with this approach: it changes worker.dist.cfg in place when it should instead alter worker.local.cfg, and it may not form the config line correctly when multiple plugins are enabled.
- lineinfile:
    dest: "{{ girder_worker_path }}/girder_worker/worker.dist.cfg"
    regexp: '^plugins_enabled='
    line: 'plugins_enabled={{ girder_worker_plugins }}'
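A sketch of how the same task might instead target worker.local.cfg and join a list variable into one comma-separated line (the girder_worker_plugins list variable is assumed, and this is untested against the actual role):

```yaml
# Hypothetical alternative: write to worker.local.cfg instead of the
# dist config, and join a list of plugins into one comma-separated line.
- lineinfile:
    dest: "{{ girder_worker_path }}/girder_worker/worker.local.cfg"
    regexp: '^plugins_enabled='
    line: "plugins_enabled={{ girder_worker_plugins | join(',') }}"
    create: yes
```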
For general updates and specifically to address girder/girder#1654 (comment)
I'd want to be able to enable logging for the girder_worker system service installed by the Ansible role.
Provide a set of volumes that can be mounted in all docker containers. This could be in the config file, for instance.
Specifically, this could be used to allow docker containers to access a list of volumes in a standardized way.
One possible format would be a JSON-encoded list:
[docker]
volumes=["/home/ubuntu/files:/opt/files:ro","/home/ubuntu/data:/opt/data:ro"]
I'd recommend defaulting mounted volumes to read-only unless they are explicitly specified as read-write.
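A sketch of parsing such a config value and applying the read-only default. The [docker] volumes key is the proposal above, not an existing option, and the returned dict shape simply mirrors docker-py's volume binding convention.

```python
# Parse a JSON-encoded list of "host:container[:mode]" volume strings,
# defaulting the mode to read-only when it is not given explicitly.
import json


def parse_volume_config(value):
    volumes = {}
    for entry in json.loads(value):
        parts = entry.split(':')
        host, container = parts[0], parts[1]
        mode = parts[2] if len(parts) > 2 else 'ro'  # default to read-only
        volumes[host] = {'bind': container, 'mode': mode}
    return volumes
```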
I have a job that requires Pillow version >= 3.4, but girder_worker has Pillow 3.2.0 as a hard requirement. The core plugins will NOT load with a different version of Pillow. Requirements should allow a version range or specify only a minimum version unless there is a known reason to pin a fixed version.
Since requirements.txt should be the frozen version used for testing, setup.py should either specify packages explicitly (rather than reading a file) or, when reading requirements.txt, change simple == version requirements to ~=.
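The ==-to-~= rewrite when reading requirements.txt could be as simple as the sketch below. This is not what setup.py currently does; the function names are made up for illustration.

```python
# Relax exact pins read from requirements.txt: "Pillow==3.2.0" becomes
# "Pillow~=3.2.0", while ranges like "Pillow>=3.4" pass through unchanged.
import re


def relax_pin(requirement):
    return re.sub(r'==', '~=', requirement, count=1)


def read_requirements(lines):
    return [relax_pin(line.strip()) for line in lines
            if line.strip() and not line.startswith('#')]
```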
In Travis testing, we try to download https://www.apache.org/dist/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.4.tgz, but this fails.
We rate limit the flushing of output up to the job log, but we do so synchronously, so undesirable things can happen. Namely, if a bunch of output is written all at once while the timer is still waiting, but then nothing is written for a very long time, that output just sits there unseen by the user. We should use a mechanism akin to the twisted reactor's callLater(), where each call to write() pushes a timeout a couple of seconds into the future that will perform the flush if nothing else is written.
The task underlying this issue is to implement the functionality necessary for copying output files written by a docker task back to girder.
(task1.s () | task2.s()).delay()
Similar to #91, this would provide the JSON metadata for girder inputs to the processing task if requested.
Hi,
Does girder_worker support parallel execution of tasks inside a workflow?
For instance, let's say I have a workflow having tasks T1, T2, T3, T4... Is there a way I can run tasks T2 and T3 in parallel after task T1 has completed?
Thanks.
Edit project description on Github and add http://girder.readthedocs.org/ as website URL
I was interested in having dynamic text input (let's say the filename where I would like to save the blurred image, in the context of the example given in the girder_worker documentation). I created the outputFileName object and fed that to the execution of the save_image task.
I am following the convention of the input image (the lenna object in the code below) to create the outputFileName.
However, I got the error posted below. Can you explain whether this is a bug, or point out the way to handle this? I read the whole girder_worker doc but couldn't figure it out.
---------------------python code-----------girderWorkerStandAlone.py--------------
import girder_worker
from girder_worker.specs import Workflow
wf = Workflow()
#create an input object
lenna = {
'type': 'image',
'format': 'png',
'url': 'https://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png'
}
'''
task to save the image
'''
save_image = {
'inputs': [
{'name': 'the_image', 'type': 'image', 'format': 'pil'},
{'name': 'file_Name', 'type': 'string', 'format': 'text'}
],
'outputs': [],
'script': '''
from datetime import datetime
#file_Name = '/home/vagrant/tangelo/tangelo_demo/proj/lenna__'+ datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]+'.jpg'
the_image.save(file_Name)
'''
}
outputFileName = {
'type': 'string',
'format': 'jpg',
'text': "/home/vagrant/tangelo/tangelo_demo/proj/lennaimage.jpg"
}
#running the single task
output = girder_worker.run(save_image, {'the_image': lenna, 'file_Name': outputFileName})
-----------error----------
Traceback (most recent call last):
File "girderWorkerStandAlone.py", line 37, in <module>
output = girder_worker.run(save_image, {'the_image': lenna, 'file_Name': outputFileName})
File "/home/vagrant/girder_env/local/lib/python2.7/site-packages/girder_worker/utils.py", line 291, in wrapped
return fn(*args, **kwargs)
File "/home/vagrant/girder_env/local/lib/python2.7/site-packages/girder_worker/__init__.py", line 251, in run
d, **dict({'task_input': task_input}, **kwargs))
File "/home/vagrant/girder_env/local/lib/python2.7/site-packages/girder_worker/io/__init__.py", line 104, in fetch
return _fetch_map[mode](spec, **kwargs)
File "/home/vagrant/girder_env/local/lib/python2.7/site-packages/girder_worker/io/__init__.py", line 31, in _inline_fetch
return spec['data']
KeyError: 'data'
worker.format.converter_path raises NetworkXNoPath but does not specify the source and target types between which the conversion is being attempted.
Below is my task spec:
{
"auto_convert": true,
"cleanup": true,
"inputs": {
"foreground_threshold": {
"data": "160",
"format": "json",
"mode": "inline",
"type": "number"
},
"inputImageFile": {
"api_url": "http://localhost:8080/api/v1",
"format": "string",
"id": "573cc705a848737a396e529e",
"mode": "girder",
"name": "Easy1.png",
"resource_type": "item",
"token": "NOWAfxaQe3Es7SfGMa4VbEDFhnamHhKyJCjBX54nh0Yv1yGNd1eJxZcN6Gh0EplM",
"type": "string"
},
"local_max_search_radius": {
"data": "10",
"format": "json",
"mode": "inline",
"type": "number"
},
"max_radius": {
"data": "7",
"format": "json",
"mode": "inline",
"type": "number"
},
"min_nucleus_area": {
"data": "80",
"format": "json",
"mode": "inline",
"type": "number"
},
"min_radius": {
"data": "4",
"format": "json",
"mode": "inline",
"type": "number"
},
"stain_1": {
"data": "hematoxylin",
"format": "json",
"mode": "inline",
"type": "string"
},
"stain_2": {
"data": "eosin",
"format": "json",
"mode": "inline",
"type": "string"
},
"stain_3": {
"data": "null",
"format": "json",
"mode": "inline",
"type": "string"
}
},
"jobInfo": {
"headers": {
"Girder-Token": "S3N0OqmQ5dOHXW4YMpNKT8PE5I1jnxWlBMacouZ0PkuVby9VgM5G7tFZ6VNYEnM1"
},
"logPrint": true,
"method": "PUT",
"reference": "573cca08a8487305d18b1a4f",
"url": "http://localhost:8080/api/v1/job/573cca08a8487305d18b1a4f"
},
"outputs": {
"outputNucleiAnnotationFile": {
"api_url": "http://localhost:8080/api/v1",
"format": "string",
"mode": "girder",
"name": "Easy1_nuclei.anot",
"parent_id": "573cc72ca848737a396e52a0",
"parent_type": "folder",
"token": "NOWAfxaQe3Es7SfGMa4VbEDFhnamHhKyJCjBX54nh0Yv1yGNd1eJxZcN6Gh0EplM",
"type": "string"
},
"outputNucleiMaskFile": {
"api_url": "http://localhost:8080/api/v1",
"format": "string",
"mode": "girder",
"name": "Easy1_seg.png",
"parent_id": "573cc72ca848737a396e52a0",
"parent_type": "folder",
"token": "NOWAfxaQe3Es7SfGMa4VbEDFhnamHhKyJCjBX54nh0Yv1yGNd1eJxZcN6Gh0EplM",
"type": "string"
}
},
"task": {
"container_args": [
"NucleiSegmentation",
"--foreground_threshold",
"160",
"--local_max_search_radius",
"10",
"--max_radius",
"7",
"--min_nucleus_area",
"80",
"--min_radius",
"4",
"--stain_1",
"hematoxylin",
"--stain_2",
"eosin",
"--stain_3",
"null",
"/mnt/girder_worker/data/Easy1.png",
"/mnt/girder_worker/data/Easy1_seg.png",
"/mnt/girder_worker/data/Easy1_nuclei.anot"
],
"docker_image": "dsarchive/histomicstk:dev",
"inputs": [
{
"format": "string",
"id": "inputImageFile",
"name": "Input Image",
"target": "filepath",
"type": "string"
},
{
"default": {
"data": 160,
"format": "number"
},
"format": "number",
"id": "foreground_threshold",
"type": "number"
},
{
"default": {
"data": 10,
"format": "number"
},
"format": "number",
"id": "local_max_search_radius",
"type": "number"
},
{
"default": {
"data": 7,
"format": "number"
},
"format": "number",
"id": "max_radius",
"type": "number"
},
{
"default": {
"data": 80,
"format": "number"
},
"format": "number",
"id": "min_nucleus_area",
"type": "number"
},
{
"default": {
"data": 4,
"format": "number"
},
"format": "number",
"id": "min_radius",
"type": "number"
},
{
"default": {
"data": "hematoxylin",
"format": "string"
},
"format": "string",
"id": "stain_1",
"type": "string"
},
{
"default": {
"data": "eosin",
"format": "string"
},
"format": "string",
"id": "stain_2",
"type": "string"
},
{
"default": {
"data": "null",
"format": "string"
},
"format": "string",
"id": "stain_3",
"type": "string"
}
],
"mode": "docker",
"name": "NucleiSegmentation",
"outputs": [
{
"format": "string",
"id": "outputNucleiMaskFile",
"name": "Output Nuclei Segmentation Mask",
"path": "Easy1_seg.png",
"target": "filepath",
"type": "string"
},
{
"format": "string",
"id": "outputNucleiAnnotationFile",
"name": "Output Nuclei Annotation File",
"path": "Easy1_nuclei.anot",
"target": "filepath",
"type": "string"
}
],
"pull_image": true
},
"validate": false
}
which raises the following exception:
<class 'networkx.exception.NetworkXNoPath'>:
File "/media/common/EmoryImageAnnotationPlatform/code/girder_worker/srclnx/girder_worker/__main__.py", line 28, in run
retval = girder_worker.run(*pargs, **kwargs)
File "girder_worker/utils.py", line 295, in wrapped
return fn(*args, **kwargs)
File "girder_worker/__init__.py", line 277, in run
{'task_input': task_input, 'fetch': False}, **kwargs))
File "girder_worker/__init__.py", line 150, in convert
Validator(type, output['format'])):
File "girder_worker/format/__init__.py", line 113, in converter_path
raise NetworkXNoPath
Create this converter. Without it, certain conversions (e.g. csv to jsonlines) are not possible.
see #97
At least provide the option to set up cron jobs at some scheduled interval to call Docker GC.
Running girder-worker gives the following warning. It seems we should programmatically set the specified option to get rid of this warning.
[2016-06-10 09:05:18,289: WARNING/MainProcess] /Users/jeff/.virtualenvs/girder_worker/lib/python2.7/site-packages/celery/apps/worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
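One way to silence this is to set the setting the warning itself names, CELERY_ACCEPT_CONTENT, explicitly in girder_worker's Celery configuration. Whether pickle is actually still needed by girder_worker is an assumption here; dropping it entirely would be the safer choice if everything already goes through JSON.

```python
# Celery configuration fragment: accept only the serializers we use.
# CELERY_ACCEPT_CONTENT is the setting named by the deprecation warning;
# including 'pickle' is an assumption about girder_worker's needs.
CELERY_ACCEPT_CONTENT = ['json', 'pickle']
```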
Currently, the job status can change to a completed state (error, success, or cancelled) without the logs being fully flushed.
When I run a docker task that uses matplotlib, I get the following error:
Pulling docker image: dsarchive/histomicstk
Running container: "docker run -u 1000 -v /media/common/EmoryImageAnnotationPlatform/code/girder_worker/srclnx/tmp/tmp8mAMxi:/data dsarchive/histomicstk StandaloneColorDeconvolution --stainColor_3 "0.0, 0.0, 0.0" /data/A.png "0.65, 0.7, 0.29" "0.07, 0.99, 0.11" /data/stain_1.png /data/stain_2.png /data/stain_3.png"
Traceback (most recent call last):
File "StandaloneColorDeconvolution/StandaloneColorDeconvolution.py", line 3, in <module>
import skimage.io
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/__init__.py", line 15, in <module>
reset_plugins()
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/manage_plugins.py", line 93, in reset_plugins
_load_preferred_plugins()
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/manage_plugins.py", line 73, in _load_preferred_plugins
_set_plugin(p_type, preferred_plugins['all'])
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/manage_plugins.py", line 85, in _set_plugin
use_plugin(plugin, kind=plugin_type)
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/manage_plugins.py", line 255, in use_plugin
_load(name)
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/manage_plugins.py", line 299, in _load
fromlist=[modname])
File "/build/miniconda/lib/python2.7/site-packages/skimage/io/_plugins/matplotlib_plugin.py", line 3, in <module>
import matplotlib.pyplot as plt
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 1131, in <module>
rcParams = rc_params()
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 965, in rc_params
fname = matplotlib_fname()
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 794, in matplotlib_fname
configdir = _get_configdir()
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 649, in _get_configdir
return _get_config_or_cache_dir(_get_xdg_config_dir())
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 626, in _get_config_or_cache_dir
return _create_tmp_config_dir()
File "/build/miniconda/lib/python2.7/site-packages/matplotlib/__init__.py", line 555, in _create_tmp_config_dir
tempdir = os.path.join(tempdir, 'matplotlib-%s' % getpass.getuser())
File "/build/miniconda/lib/python2.7/getpass.py", line 158, in getuser
return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000'
A fix for the same error is reported here on the matplotlib repository, but that won't be available until matplotlib 1.5.2 is released.
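Until that release lands in the image, a common workaround is to point matplotlib at a writable config directory via the MPLCONFIGDIR environment variable before the import happens, e.g. at the top of the CLI script or in the container entrypoint. A sketch:

```python
# Workaround: give matplotlib a writable, user-independent config dir so
# it never calls getpass.getuser(). Must run before `import matplotlib`.
import os
import tempfile

os.environ.setdefault('MPLCONFIGDIR', tempfile.mkdtemp())
```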
Deliverables:
Basic Celery task execution API (e.g. .delay(), .apply_async()) generates a girder job model which is correctly updated with success/failure states
See: #126
Just talked with @cdeepakroy about this and wanted to dump the proposal here so people could comment on it before I actually get started.
As of right now, IO for docker mode is limited to the following:
All of this is done synchronously due to the design of our execution model, i.e. the output must be completely gathered by the worker before it is sent on its way. I propose adding two new features:
Please provide any feedback on these features, as well as ideas on how to structure the API/spec for them.
Docker-gc is too aggressive in what it removes. Specifically, I have installed docker images via the slicer_cli_web plugin, and then those images get removed by docker-gc, which breaks the cli enumeration.
This issue concerns the development of a mechanism to instruct girder_worker to optionally download/copy the whole parent item of input file parameters into the docker container.
I faced this need while developing an infrastructure to automatically generate REST end-points for slicer execution model CLIs. A large part of this infrastructure is the conversion from slicer execution model xml spec to girder_worker task spec along with the input/output bindings.
For girder_worker tasks where one of the inputs is a girder item, if the item contains a single file then girder_worker will copy the file to the worker machine and provide a path to this file to the docker task. If instead the input girder item contains multiple files, then girder_worker will copy all files of the item into a newly created directory on the worker machine and give the path of this directory to the docker task.
With image files, the following two scenarios occur:
- The image header is stored in a .mhd file, and the image data is stored in DICOM files located adjacent to the .mhd file.
- .mha, .nrrd, and .png formats, wherein both the header and the image data are stored in a single file.
To support both of the aforementioned scenarios, I would like to map input parameters of type image in the Slicer XML spec to a file on Girder and have the ability to instruct girder_worker to copy the whole parent item into a directory on the worker machine and get a path to the specified file within it for use in the docker task.
When running girder_worker inside of a docker container, running docker tasks can be problematic.
For reference, we have three components we need to refer to: (1) host - the host machine, (2) worker docker - girder_worker running in a docker container, and (3) task docker - the task we want to run in a docker container via the girder_worker docker plugin.
There are two basic approaches to run a docker container from within another docker container.
Approach 1: run the task docker on the host, which can be done by mounting two volumes on the worker docker: -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock (assuming host and worker docker are appropriate flavors of linux).
Approach 2: https://github.com/jpetazzo/dind , where a bash script does a bunch of magic as part of the start up of the task docker.
I don't like approach 2. We already run a script in the task docker which assumes that the task docker is debian-like and has bash. This doesn't work right in Alpine or BusyBox, where there are no groupadd and useradd commands. This would further limit what can be run as a task.
In Approach 1, the task docker can run, but the mount points for the temporary directory and scripts directory end up referring to locations on the host when they were intended to refer to locations on the worker docker. A crude workaround would be to copy our script to the temp directory (requiring a change in the docker plugin) and to set tmp_root to a path that is identical on the host and the worker docker and is volume-mounted between the two.
I'm not sure if Approach 2 would work easily, as I haven't actually tried it out.
Any other recommendations on how to accomplish running docker tasks in a docker worker?
Right now it assumes the path provided is a file. If it's a directory, we should recursively upload it into the given parent.
There is no flake8 rule for this, but we can at least go through and convert all double quotes to single quotes where possible/reasonable. Currently the codebase looks a bit disjointed.
Move away from popen to https://github.com/docker/docker-py