
biocontainers / specs


BioContainers specifications

Home Page: http://biocontainers.pro

License: Apache License 2.0

specifications docker biocontainers-specification biocontainers bioinformatics bioinformatics-containers biocontainers-architecture rkt bioinformatics-analysis

specs's Introduction

The latest information about BioContainers is available via https://BioContainers.pro

Join the chat at https://gitter.im/BioContainers/biocontainers

Containers

Repository of approved bioinformatics containers

Links:

Web Page : http://biocontainers.pro/

Project Definition : https://github.com/BioContainers/specs

Contribution Rules : https://github.com/BioContainers/specs/blob/master/CONTRIBUTING.md

Containers : https://github.com/BioContainers/containers

Email : [email protected]

License

Apache 2

Contents

  1. Essentials
    1.1. What is BioContainers?
    1.2. Objectives and Goals
  2. Containers
    2.1. What is a container?
    2.2. Why do I need to use a container?
    2.3. How to use a BioContainer
    2.4. BioContainers Architecture
    2.4.1 How to Request a Container
    2.4.2 Add a container
    2.4.3 Use a BioContainer
  3. Developing containers
    3.1. How to build BioContainers
    3.2. What do I need to develop?
    3.3. How to create a Docker-based BioContainer?
    3.4. How to create an rkt-based BioContainer?
  4. Support
    4.1. Get involved

1. Essentials


1.1. What is BioContainers?

The BioContainers project came from the idea of using container-based technologies such as Docker or rkt for bioinformatics software. Having a common and controllable environment for running software can help to address some of the current problems in software development and distribution. BioContainers is a community-driven project that provides the infrastructure and basic guidelines to create, manage and distribute bioinformatics containers, with a special focus on omics fields such as proteomics, genomics, transcriptomics and metabolomics. The main containers already implemented in BioContainers (https://github.com/BioContainers/containers) are discussed in detail, including examples of how to use them. The currently available BioContainers facilitate the use and reproducibility of software and algorithms, and can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktops, cloud environments or HPC clusters). We also present the guidelines and specifications on how to create new containers and how to contribute to the BioContainers project.

1.2. Objectives and Goals

  • Provide a base specification and images to easily build and deploy new bioinformatics/proteomics software including the source and examples.

  • Provide a series of containers ready to be used by the bioinformatics community (https://github.com/BioContainers/containers).

  • Define a set of guidelines and specifications to build a standardized container that can be used in combination with other containers and bioinformatics tools.

  • Define a complete infrastructure to develop, deploy and test new bioinformatics containers using continuous integration suites such as Travis CI (https://travis-ci.org/), Shippable (https://app.shippable.com/) or manually built solutions.

  • Provide support and help to the bioinformatics community to deploy new containers for researchers that do not have bioinformatics support.

  • Provide guidelines and help on how to create reproducible pipelines by defining, reusing and reporting specific container versions, which will consistently produce the exact same results and will always be available in the history of the container.

  • Coordinate and integrate developers and bioinformaticians to produce best practices for documentation and software development.

2. Containers


2.1. What is a container?

Containers are built from existing operating systems. They differ from virtual machines in that they do not contain an entire guest OS; instead, they are built using optimized system libraries and use the host OS's memory management and process controls. A container is normally centred on a specific piece of software, and it is made executable by instantiating a container from its image.

(Figure: what is a container)

2.2. Why do I need to use a container?

Most of the time when a bioinformatics analysis is performed, several bioinformatics tools and pieces of software have to be installed and configured. This process can take several hours and demands a lot of effort, including the installation of multiple dependencies and tools. BioContainers provides ready-to-use packages and tools that can easily be deployed and used on local machines, HPC clusters and cloud architectures.

2.3. How to use a BioContainer

BioContainers are listed in two main registries:

  • Docker Hub: Docker-based containers that can be run with the Docker infrastructure.
  • Quay.io: Docker- and rkt-based containers that can be run with the rkt infrastructure.

For full documentation about how to use BioContainers to perform bioinformatics analyses, please check the Full Documentation.
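As a minimal hedged sketch of what using a published container looks like (the tool name, tag and registry path below are placeholders, not a specific BioContainers image), you pull the image from one of the registries and run the packaged tool against data mounted from the host:

docker pull biocontainers/mytool:v1.0.0
docker run --rm -v $(pwd):/data biocontainers/mytool:v1.0.0 mytool --help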

2.4. BioContainers Architecture

BioContainers is a community-driven project that allows bioinformaticians to request, build and deploy bioinformatics tools using containers. The following figure presents the general BioContainers workflow:

(Figure: the BioContainers workflow)

The next sections explain the presented workflow in detail:

  • (i) How to request a container
  • (ii) Use a BioContainer
  • (iii) Developing containers

2.4.1 How to Request a Container

Users can request a container by opening an issue in the containers repository (http://github.com/BioContainers/containers/issues) (in the workflow above this is the first step, performed by the user henrik). The issue should contain the name of the software, the URL of the code or binary to be packaged, and information about the software (see the BioContainers specification). When the container is deployed and fully functional, the issue is closed by the developer or the contributor to BioContainers.

2.4.2 Add a container

Fork the repository and add a container. To add a container:

  • create a directory name_of_software/version_of_upstream_software
  • in this directory, add a Dockerfile following the BioContainers specification
  • optionally (but recommended) add a test-cmds.txt file to automatically test the container (a list of one-line bash commands)

Example test-cmds.txt

mytool -v
mytool -h

Test the container recipe in the directory where your Dockerfile is:

docker build .

If it builds correctly, commit and push your changes to your fork and open a pull request against our repository.
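For a slightly fuller local check, here is a hedged sketch (the tool name, version and tag are placeholders) that builds a tagged image and replays the test-cmds.txt commands inside it:

cd mytool/1.0.0
docker build -t mytool:1.0.0 .
# run every line of test-cmds.txt inside the freshly built image
while read -r cmd; do
  docker run --rm mytool:1.0.0 $cmd || exit 1
done < test-cmds.txt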

2.4.3 Use a BioContainer

When a container is deployed and the developer closes the issue in GitHub, the user (henrik) receives a notification that the container is ready. The user can then use docker or rkt to pull or fetch the corresponding container.
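For example (the image name and tag are placeholders), the corresponding commands would look like this; the --insecure-options flag is typically needed with rkt because Docker images are not signed:

docker pull biocontainers/mytool:v1.0.0
rkt fetch --insecure-options=image docker://biocontainers/mytool:v1.0.0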

3. Developing containers


3.1. How to build BioContainers

There are two different ways to build a container.

  • Go to the GitHub repository with the recipe of the software you want, clone it, and build it yourself on your machine.
  • Use the docker daemon to search for a ready-to-use version of the containerized software you want.

The central repository contains a list of software packages with Docker recipes; there you can find more information about how to work with them.
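As a hedged sketch of the first option above (the tool and version directory names are placeholders; real recipes live under name_of_software/version_of_upstream_software as described in section 2.4.2):

git clone https://github.com/BioContainers/containers.git
cd containers/mytool/1.0.0
docker build -t mytool:1.0.0 .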

3.2. What do I need to develop?

BioContainers are based on Linux, so you will need a computer with Linux installed. You will also need the docker or rkt daemon and the software you want to containerize.
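A quick way to check that the prerequisites are in place (either runtime is enough; you do not need both):

docker --version && docker info   # Docker client installed and daemon reachable
rkt version                       # or, alternatively, the rkt runtime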

3.3. How to create a Docker-based BioContainer?

With everything in hand, you now need to create a Dockerfile. Dockerfiles are simple recipes that instruct the daemon on how to set up an appropriate OS and how to download, install and give access to the software inside.
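As an illustration only (the base image, metadata labels and packaged tool below are assumptions, not the official template; the BioContainers specification defines the exact fields and base image to use), a minimal Dockerfile might look like this:

FROM ubuntu:16.04

# Metadata describing the packaged software (illustrative field names)
LABEL software="bwa" \
      software.version="0.7.12" \
      about.summary="Burrows-Wheeler Aligner, packaged as an example" \
      about.home="http://bio-bwa.sourceforge.net" \
      about.license="GPL-3.0"

# Install the tool from the distribution repositories and clean up
RUN apt-get update && \
    apt-get install -y --no-install-recommends bwa && \
    rm -rf /var/lib/apt/lists/*

# Run as an unprivileged user in a mountable data directory
RUN useradd -m biodocker
USER biodocker
WORKDIR /data

CMD ["bwa"]

Built with docker build -t <name> ., such an image can then be submitted following section 2.4.2.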

You can check the Docker documentation for more information.

Once the container is ready, get in touch with us so we can make it available to everyone in the community through the automated build system.

3.4. How to create an rkt-based BioContainer?

With everything in hand, you now need to create an rkt image. Like Dockerfiles, rkt image recipes instruct the runtime on how to set up an appropriate OS and how to download, install and give access to the software inside.
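A hedged note and sketch: rkt can consume the Docker images built as in section 3.3 directly, converting them on fetch, so in many cases no separate recipe is needed (the image name and tag below are placeholders; the --insecure-options flag is required because Docker images are not signed). Native rkt images (ACIs) can instead be built with the acbuild tool described in the rkt documentation.

rkt run --insecure-options=image docker://biocontainers/mytool:v1.0.0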

You can check the rkt documentation for more information.

Once the container is ready, get in touch with us so we can make it available to everyone in the community through the automated build system.

4. Support


4.1. Get involved

Whether you want to make your own software available to others as a container, use containers in your pipelines and analyses, or just give your opinion, you are most welcome. This is a community-driven project, which means everyone has a voice.

Here are some general ideas:

  • Browse our list of containers
  • Propose your own ideas or software
  • Interact with others if you think there is something missing.

specs's People

Contributors

gitter-badger, hroest, laurentgatto, manabuishii, mr-c, osallou, raynooo, sauloal, stianlagstad, vsivsi, ypriverol


specs's Issues

Domain Configuration

@ypriverol I have an active Mailgun account. Since you have the domains now, I can set up an e-mail with the @biodocker domain. I can route the incoming e-mail to both of us at the same time, or create individual e-mails.

dev dockerhub channel

I propose that, since new pull requests will be added to the dev channel (as described in CONTRIBUTING.md), we should have a separate dev channel on DockerHub so that dev and stable builds do not mix.

OpenMS 2.0

This is to mark the request for the OpenMS 2.0, for the Galaxy project

One single repository for containers

If we agree to follow #6 we could think about having one repository for all Dockerfiles.
From our experience this has many advantages for a developer, like bulk-updates, unified maintenance (travis, contribution.md, issues) and a grep-able big repository.

pipeline ballooning the number of builds

Hey guys, we need to talk. There's a serious issue with our pipelining: whenever someone pushes to the container/dev repository, ALL containers are rebuilt. This is unacceptable. With ~60 repos, it means that 60 builds are issued. Worse still, the number of different builds for each container also grows, so it becomes terribly hard to differentiate between one build and the next (most of the time there will be no difference at all). Added to that, DockerHub makes 2 builds for each request because of the tagging system (program version + latest). So now, at each pull request/push, we are creating 120 (40 + 80) new images.

Spec for a unified ENTRYPOINT

If we have a spec for the ENTRYPOINT and the container name, this would be a first step towards running arbitrary containers without looking up the documentation every time :)

For example:
docker run bioboxes/openms:2.0 FileMerger --help vs. docker run bioboxes/openms:2.0 OpenMS FileMerger --help

or

docker run bioboxes/searchgui:0.27 --help vs. docker run bioboxes/searchgui:0.27 searchgui --help

Should we allow privileged containers? docker inside docker?

A security issue.

To make things simpler for some applications, users could be allowed privileged containers and docker inside docker.

Should this be allowed despite the risk?

Docker inside docker is really useful to package databases and code in a single image.
It can be done by calling the host's docker (right way) or by installing docker inside the image (wrong way).
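For reference, a hedged sketch of the "right way" mentioned above: the container talks to the host's Docker daemon through its socket instead of running a second daemon inside the image (the image name is a placeholder and is assumed to already contain a docker client):

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mypipeline-with-docker:latest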

dockerfile template missing repository name and how to build it

I propose that the Dockerfile template should require the name under which the image will be registered (e.g. biodocker/program1) as one of the header fields.

Also, it should contain the build command (e.g. docker build -t biodocker/program1 .)

Should we have an extended dockerfile header and skip the readme ?

The last few days I've been busy creating new genomics packages in the sandbox and I came to a conclusion.

Instead of having a README.md file for each container repeating the information that is already available in the header of the dockerfile, why not extend the header with more information and skip the readme?

The advantages are:

  • you don't have to update the readme every time you update a package
  • the header of a dockerfile is always synchronized with its own content
  • if the header always follows the same format we can easily parse it
  • we can create a "dynamic" (global) readme in the root of the repository containing the header of all dockerfiles present

I've created 2 more experimental fields in the header of the Dockerfile:
provides - list of provided programs (important if more than one)
tags - tags such as genomics, proteomics, etc

I propose to add more fields:
software website: source website
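As an illustration of the idea (all field names and values below are made up for the example, not an agreed format), such an extended header could look like this at the top of a Dockerfile:

# Base Image: ubuntu:16.04
# Software: mytool
# Software Version: 1.0.0
# Software Website: https://example.org/mytool
# Provides: mytool, mytool-index, mytool-report
# Tags: proteomics, genomics
# Description: example entry; a parser could collect these fields into a global README
# Build Command: docker build -t biodocker/mytool .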

/cc @BioDocker/contributors

Should we start using Quay.io ?

Following @bgruening's suggestions, I did some tests with Quay.io and it is really much better than the native DockerHub service. So here are some points for us to consider:

Advantages

  • Better API
  • Faster (my test image was created and shipped within 10 minutes, compared to DockerHub, which is taking from 2 to 6 hours)
  • Better namespace (here we can have the biodocker namespace, instead of biodckr)

Disadvantages

  • It looks like the build system is 'repository-centric'; I'm not sure how to provide our containers with different versions. In DockerHub I create an Automated Build entry in which I can specify every Dockerfile located in each folder.
  • Looks like you need an account to use images from Quay.io (see below).

The good thing about this, @ypriverol @bgruening, is that I figured out how to use the native docker client to search and download images from quay.io (you need to create an account at quay.io):

  1. docker login quay.io
  2. docker search quay.io/comet or docker search quay.io/biodocker

simple as that!

Biodocker base image fails during build

I'm getting the following error while trying to build the base image:

Err http://mirror.hmc.edu/ubuntu/ precise-backports/main amd64 Packages
404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]
Err http://mirror.hmc.edu/ubuntu/ precise-backports/restricted amd64 Packages
404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]
Err http://mirror.hmc.edu/ubuntu/ precise-backports/universe amd64 Packages
404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]
Err http://mirror.hmc.edu/ubuntu/ precise-backports/multiverse amd64 Packages
404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]
Fetched 32.5 MB in 6s (4681 kB/s)
W: Failed to fetch mirror://mirrors.ubuntu.com/mirrors.txt/dists/precise-backports/main/binary-amd64/Packages 404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]

W: Failed to fetch mirror://mirrors.ubuntu.com/mirrors.txt/dists/precise-backports/restricted/binary-amd64/Packages 404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]

W: Failed to fetch mirror://mirrors.ubuntu.com/mirrors.txt/dists/precise-backports/universe/binary-amd64/Packages 404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]

W: Failed to fetch mirror://mirrors.ubuntu.com/mirrors.txt/dists/precise-backports/multiverse/binary-amd64/Packages 404 Not Found [Mirror: http://mirror.scalabledns.com/ubuntu/]

E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get clean all && apt-get update && apt-get upgrade -y && apt-get install -y autotools-dev automake cmake curl fuse git wget zip openjdk-7-jre build-essential pkg-config python python-dev python-pip zlib1g-dev && apt-get clean && apt-get purge && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*' returned a non-zero code: 100

The same error prevents the building of other tools, so we must address this with urgency.

Compulsory test in travis?

Should all packages have a test suite created, and should their tests be compulsorily run in Travis?

E.g.:
if the package contains a "make test", Travis should not only test the compilation but also run the "make test" routine.
if the package does not contain a test suite, a small test should be created and run in Travis.

Spec for helper scripts

Regarding the helper scripts discussed in #8, what should they be?

biodocker - in the host, runs biodockr/
help - in the container, shows the help on how to run the command in docker
version - shows the version information for both the program and the dockerfile

any more?

Official channel for communication

We need to establish an official communication channel; here are the options (good and bad ones):

  • IRC channel
  • E-mail group list
  • Skype group
  • hangouts group
  • Gitter
  • Whatsapp
  • Telegram

Please @BioDocker/contributors, give me your feedback on this.

Should we use shippable for testing?

Well,

While DockerHub gives us the errors in building, it doesn't test the programs with real data.
For that, a CI would be nice, and the only one I've found that is really compatible with Docker is Shippable.

With shippable we can define the image to pull and create a yml file (just like all CIs) with commands to run on it.

This means we can have an automatic build from github and have shippable download a dataset and run the container on it.

Furthermore, Shippable can pull the git repository, build the image, run the tests and, if the container passes, submit the built container to DockerHub.

The disadvantage of the latter is that Shippable requires the Dockerfile to be in the root of the git repository. Maybe they will change that in the future.

What do you think? Should we recommend the use of Shippable and the creation of datasets for testing?

/cc @BioDocker/contributors

Some comments in the current guidelines

These are some questions from a user:

In many of your docker files I see that you change to a biodocker user or that you set the tool's directory as the working directory, but I was wondering if those behaviours are documented somewhere, whether they are necessary requirements, etc. As a sideline to this, in one of our important use cases (running galaxy with dockerized tools within a container orchestration environment), having the containerised tool bound to run in a defined working directory breaks things. I'm not saying that you're actively doing this, but if that is the reason for setting the working directory in the docker file, then it would be a problem for us. This is because galaxy needs to control working directories to detect variably named output files. The only case where we see this being needed is when the main entrypoint is something like Rscript myScript.R (receiving arguments from the outside). Our current solution to this is to wrap this call in a shell file, pass arguments to the wrapped call, leave this wrapper in the PATH and make it executable.

Another question. We are currently building docker files within our project for a number of metabolomics tools. However, for project reporting statistics and deliverables, we need to keep these under the ownership of our GitHub organization. Do you see a way in which we could host these docker files in our GitHub repo while also making them available in the biodocker repo? Would git submodules work for this?

Travis failures

I noticed that some repositories are failing the Travis test. The docker images build correctly, the software inside works OK, and DockerHub also reports no error. Investigating a little further, I found that those repositories failed because Travis ran out of disk space during the test; see below.

This is part of the log from Travis while building MSAmanda container:

Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...
Untar re-exec error: exit status 1: output: write /usr/share/mime/packages/freedesktop.org.xml: no space left on device
+echo 1
+save_and_shutdown
+'[' '!' -f /home/travis/build/BioDocker/MSAmanda/.exit_code ']'
+set +e
++jobs -p
+kill -9 104
+halt -f
[  390.840000] System halted.
The command "./run 'docker build -t test 1.0.0.5242/.'" exited with 1.

Shared environment use case?

Hi everyone, I just came across this today and it sounds exactly like something we were hoping to do where I work.

Do you have any plans to support using BioDocker in a shared user environment? As far as I can see from your base image, all the data needs to be owned by the 'biodocker' user with UID/GID 1000. In a shared environment we have multiple users on the same servers, all of whom should continue to own their own data.
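For context, a common hedged workaround in shared environments is to override the container user at run time so that output files end up owned by the invoking user (the image and tool names are placeholders):

docker run --rm -u $(id -u):$(id -g) -v $(pwd):/data biodocker/mytool:latest mytool input.txt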

One folder for each version?

from the wiki:
https://github.com/BioDocker/biodocker/wiki/Project-organization

Inside, each version of the software will have its own directory containing the Dockerfile and the necessary files.

This will force us, when setting up the autobuild on DockerHub, to create a different image for each version instead of a different tag for each version.

e.g.:

https://github.com/BioDocker/containers/blob/master/Comet/2014011/Dockerfile -> biodockr/comet2014011:latest

instead of:

https://github.com/BioDocker/containers/blob/master/Comet/2014011/Dockerfile -> biodockr/comet:2014011

Adopting Conda for binary management

I want to have a space here so we can discuss the pros and cons of adopting Conda for binary management; please leave your opinion here.

should we have our own base image?

Should we have our own base image?
Although its use would not be compulsory, we could provide an image that is always kept up to date (monthly?) and that contains the helper scripts (help, version, etc.) inside.

Users could use:
FROM biodocker/biodocker:latest

and have the latest ubuntu LTS with all updates already installed and apt cache cleaned.

Complementarily, we could have a biodocker/biodockerdev containing gcc, a biodocker/biodockerjava containing Java, etc.

Don't ship binaries and tarballs in repositories

I would like to put the binaries/tarballs etc. into a public location, separated from the source (Dockerfile), or use the original source (if it is trustworthy).
This will improve usability for us developers and dramatically shrink the download time. Moreover, a few packages like PeptideShaker are too big to store on GitHub.

Maybe the EBI can sponsor some storage or we can try to get some Google-Drive running?
The Galaxy project is running this service: http://depot.galaxyproject.org/. Most of our target tools should be there, or we can put them into the depot if you agree to work more closely with the Galaxy community.

BioDocker GUI

I recently talked with someone interested in using Docker and our containers with proteomics tools. The person told me that the possibility of having a full pipeline, or some complex-to-install software in a ready-to-use form, is very interesting, but there is one point that makes things "a little bit complicated": the CLI. The ideal scenario this person described to me is something like a desktop icon where he can open a window and just select one container from a list. That way the container is downloaded and installed for him.

I want to get your opinion on this: what do you guys @BioDocker/contributors think about it? Do we have such a solution around somewhere?

goals for publication

I think we should define some goals for the publication; let's define them here so we can accelerate the process.

A common container for all the containers

@hroest @BioDocker/contributors @leprevost One of the contributors sent me an email, attached below:

Some docker containers

  • pyProphet, see here https://github.com/hroest/docker_files -- the
    pyprophet algorithm is a redevelopment of mProphet, our original
    algorithm for scoring targeted proteomics data.
    It would be nice to have some standardized "boilerplate docker code" around for this
    project which would make the interaction with the docker containers in
    the library somewhat standardized.
    In terms of standardization I think it would be really helpful and it would be
    much appreciated if there were some sort of "how-to" with a few things that the docker containers
    should provide -- or maybe a template docker file with a section "ENTER YOUR CODE HERE"
    which would make it easier for people to migrate their projects to docker.

    I have seen https://github.com/BioDocker/OpenMS/blob/master/1.11.1-3/Dockerfile which seems to
    contain some of that -- maybe you could add a "template.docker" file to
    https://github.com/BioDocker/specifications
    @leprevost It would be good to discuss these ideas.

Upgrade base image

I'm considering, as @julianu mentioned today, upgrading the base image to the latest Ubuntu LTS version, 16.04. @BioDocker/contributors, any comments on that?

should we accept puppet and other orchestration programs

Should puppet and other orchestration programs be accepted? (http://www.quora.com/What-is-the-best-Docker-Linux-Container-orchestration-tool - http://brunorocha.org/python/dealing-with-linked-containers-dependency-in-docker-compose.html)

Some users might want to use compose (https://blog.docker.com/2015/06/compose-1-3-swarm-0-3-machine-0-3/), puppet, chef, saltstack or fig. These are programs which allow the automatic definition of a set of docker containers to run, their command lines, their links and their dependencies.

For example, a package (galaxy) might need to be run in parallel with, and after, a LAMP stack. A user would need to first run a MySQL container and wait for it to initialize, then run galaxy with the ports from MySQL forwarded, then run apache for caching and static content with the folders from galaxy mapped inside.

Orchestration tools can take care of all of this complexity.
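To make the example concrete, this is roughly the manual sequence such a tool would automate (the galaxy and apache image names are placeholders; only mysql:5.7 is a real public image):

docker run -d --name galaxy-db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name galaxy --link galaxy-db:mysql -p 8080:8080 example/galaxy
docker run -d --name frontend --link galaxy:galaxy -p 80:80 example/apache-cache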

/cc @BioDocker/contributors

DeNovoGUI

Currently not working:

Command line:
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -splash:resources/conf/denovogui-splash.png -Xms128M -Xmx4096M -cp /home/biodocker/bin/DeNovoGUI-1.5.2/DeNovoGUI-1.5.2.jar com.compomics.denovogui.gui.DeNovoGUI

Exception in thread "main" java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:207)
at java.awt.Window.(Window.java:535)
at java.awt.Frame.(Frame.java:420)
at java.awt.Frame.(Frame.java:385)
at javax.swing.JFrame.(JFrame.java:174)
at com.compomics.denovogui.gui.DeNovoGUI.(DeNovoGUI.java:222)
at com.compomics.denovogui.gui.DeNovoGUI.main(DeNovoGUI.java:2022)
Process exitValue: 1

Unknown error: exception in thread "main" java.awt.headlessexception: no x11 display variable was set, but this program performed an operation which requires it. at java.awt.graphicsenvironment.checkheadless(graphicsenvironment.java:207) at java.awt.window.(window.java:535) at java.awt.frame.(frame.java:420) at java.awt.frame.(frame.java:385) at javax.swing.jframe.(jframe.java:174) at com.compomics.denovogui.gui.denovogui.(denovogui.java:222) at com.compomics.denovogui.gui.denovogui.main(denovogui.java:2022)
java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:207)
at java.awt.Window.(Window.java:535)
at java.awt.Frame.(Frame.java:420)
at java.awt.Frame.(Frame.java:385)
at javax.swing.SwingUtilities$SharedOwnerFrame.(SwingUtilities.java:1756)
at javax.swing.SwingUtilities.getSharedOwnerFrame(SwingUtilities.java:1831)
at javax.swing.JOptionPane.getRootFrame(JOptionPane.java:1697)
at javax.swing.JOptionPane.showOptionDialog(JOptionPane.java:863)
at javax.swing.JOptionPane.showMessageDialog(JOptionPane.java:667)
at javax.swing.JOptionPane.showMessageDialog(JOptionPane.java:638)
at com.compomics.software.CompomicsWrapper.launch(CompomicsWrapper.java:360)
at com.compomics.software.CompomicsWrapper.launchTool(CompomicsWrapper.java:154)
at com.compomics.denovogui.DeNovoGUIWrapper.(DeNovoGUIWrapper.java:66)
at com.compomics.denovogui.DeNovoGUIZipFileChecker.(DeNovoGUIZipFileChecker.java:61)
at com.compomics.denovogui.DeNovoGUIZipFileChecker.main(DeNovoGUIZipFileChecker.java:71)

Specifications Refinements

Hi all:
The specifications should be changed to reflect several things. Some of them should be more prominent.

  • How to contribute to BioDocker.
  • How to generate a BioDocker compliant container.
  • How to report a broken container and ask for a new container.
  • LICENSE

Should we create folders with symlinks to themed repositories?

Should we create folders for specific fields (proteomics, genomics, metabolomics, etc) containing symlinks to the related tools to facilitate search?

Positives:

  • Easier to find tools
  • Easier to find related tools

Negatives:

  • One more thing to maintain
  • There's such a thing as Ctrl+F

/cc @BioDocker/contributors

Users must have a way to know where the programs are inside containers

I'm setting this as a bug because it prevents users from running the container without knowing the folder structure inside it beforehand.
For some cases, like DIA_Umpire, the user must know beforehand where the software is (/home/biodocker/bin/DIA-Umpire/). This is not acceptable as it is, so we must figure out how to deal with scenarios where the executable can't be added to the PATH.

New sections to Wiki

We should have a specific page for each of the following items:

  • Container Versioning
  • Container Testing
  • Container Interfaces
  • Documentation
  • License

USER biodocker does not have permission to access /data

The /data/ folder set up as WORKDIR does not give the biodocker user proper write permissions. I did some tests with the DIA_Umpire program and the software could not write its outputs to /data/. After removing the USER biodocker instruction, everything went back to normal.
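One possible fix, as a hedged Dockerfile sketch (assuming the base image's /data WORKDIR and biodocker user described above), is to hand ownership of /data to the unprivileged user before switching to it:

RUN mkdir -p /data && chown -R biodocker:biodocker /data
USER biodocker
WORKDIR /data
VOLUME ["/data"]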

we need a logo

I think it would be great if we had a custom logo for this project.

biodocker and bioboxes

Hi,
is this project still active or did it migrate completely to bioboxes?
I was using the basic image from biodocker when I ran into a discussion about merging the two projects. They both have cool sides, but I'd prefer to stay with this one at the moment, unless it is abandoned...

Also, if it is still active, maybe we could have a talk about #44 in the future?

Cheers

Adopting CMD for simple containers

I remember that we discussed this at some point in the past, but I want to bring it up for discussion again.

My proposal is to follow the 'best practices' guidelines from Docker Inc. and start using the CMD instruction for simple containers (i.e. containers with only one binary).

so this:

docker run biodckr/comet comet -p

becomes this:

docker run biodckr/comet -p
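For illustration, such argument pass-through is usually wired with ENTRYPOINT (CMD then only supplies a default argument), as in this hedged Dockerfile fragment:

ENTRYPOINT ["comet"]
CMD ["--help"]

With this, docker run biodckr/comet -p executes comet -p, and docker run biodckr/comet with no arguments executes comet --help.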
