galaxyproteomics / tools-galaxyp

Galaxy Tool Shed repositories maintained and developed by the GalaxyP community

License: MIT License

CSS 7.83% JavaScript 59.85% Mako 0.62% Python 11.64% Shell 0.86% R 8.26% PHP 0.06% HTML 7.12% Perl 2.45% TeX 0.69% HCL 0.63%

tools-galaxyp's People

Contributors

abretaud, bebatut, bernt-matthias, bgruening, blankclemens, caleb-easterly, carloshorro, cat-bro, chambm, dglaetzer, eschen42, foellmelanie, fwsadler, glormph, iracooke, jj-umn, jmchilton, jraysajulga, jvolkening, marenstillger, mariecrane, maux82, npinter, pravs3683, reid-wagner, stortebecker, tdudgeon, tmcgowan, trevor, zimmera

tools-galaxyp's Issues

DeNovoGUI

Analogous to SearchGUI for library search engines, can we include DeNovoGUI for de novo sequencing?

OpenMS MSGFPlusAdapter and XTandemAdapter take only one fixed modification

The OpenMS tools MSGFPlusAdapter and XTandemAdapter accept only one input for param_fixed_modifications and for param_variable_modifications. After adding the first one by clicking + Insert param_fixed_modifications, the button is still displayed, but nothing happens when it is clicked again. The same is true for the variable modifications.

The Galaxy wrappers for MS-GF+ / XTandem allow adding more than one modification; there, the input is handled differently.

Add citations to all tools

Recent Galaxy versions support a citations tag. This issue will track the process of adding citations to all tools.

  • appendfdr
  • blast_plus_remote_blastp
  • dbbuilder
  • decoyfasta
  • digestdb
  • directag_and_tagrecon
  • fasta_merge_files_and_filter_unique_sequences
  • filter_by_fasta_ids
  • idpqonvert
  • ltq_iquant_cli
  • maxquant
  • mgf_formatter
  • msconvert
  • myrimatch
  • pepxml_to_xls
  • proteinpilot
  • protxml_to_xls
  • psm_eval
  • scaffold

Omega2 & Sipros3 (Metagenome Assembly & Metaproteome ID/Quant)

All: I am rather new to this group & would like to introduce open-source applications that I frequently use in our metagenome/metaproteome workflows (primarily on HPC systems):

  1. Omega2: https://bitbucket.org/omicsbio/omega2 & accompanying instructions: http://omega.omicsbio.org/instructions
    [Purpose: metagenomics assembler that applies an overlap-graph approach rather than de Bruijn graphs. Works best for Illumina reads.]
    [Presently undergoing significant development & may be worth introducing at any upcoming events]
    Omega2 data preprocessing prerequisites:
    a. Sickle: https://github.com/najoshi/sickle
    b. ecc.sh (an error correction component of BBMap): https://sourceforge.net/projects/bbmap/ & http://jgi.doe.gov/data-and-tools/bbtools/

  2. Canu: https://github.com/marbl/canu (a fork of the Celera Assembler for MinION reads)
    [Purpose: assembly of Oxford Nanopore Technologies MinION reads; documentation: https://github.com/marbl/canu]

  3. Sipros3: https://github.com/Omics-Bio/Sipros3
    [Purpose: utilizes OpenMPI/MPI to search very large FASTA files (e.g. those from metagenome assemblies with millions of entries).]
    [Sipros/ProRata is quite flexible for integration of protein ID with protein quantification, stable isotope probing, and PTM searches: http://sipros.omicsbio.org/ ]
    [Under significant development & may also be of interest at any upcoming events.]

  4. UniFam: https://github.com/chaij/UniFam
    [Purpose: Enables large-scale protein annotation with UniProt-based families.]

If any of these are potentially interesting applications to others in the group, please LMK-- I'd be pleased to be able to field questions on the tools and/or work to get them incorporated into the shed.

Thanks!

OpenMS status

Can anyone comment on the status of OpenMS in Galaxy? The Tool Shed package is owned by galaxyp but is several years old, and I don't see it in the GitHub repo. @bgruening has it in his own tool repo and it looks active, but it is in the .tt_blacklist. Is either of these options suitable for a production server?

Functional analysis of (meta)proteomics datasets

A next step for Unipept is adding support for functional analysis alongside the existing taxonomic analysis. While the problem sounds similar to the taxonomic one, there are a few complications:

  • Available data: each UniProt entry has an associated taxon, but the same isn't true for functional annotations. Only 12% of entries have an annotation with an EC number, while GO terms are present for 60% of the entries.
  • UniProt proteins are associated with a single taxon, but can have multiple functions and thus multiple GO terms (2.8 terms per entry on average). These could be aggregated at the UniProt level or at the peptide level. The quality of the annotations is also sometimes disappointing.
  • We can't reuse the LCA algorithm from the taxonomic analysis. GO consists of three disjoint parts that are directed acyclic graphs, whereas the taxonomy is a tree; a GO node can therefore have multiple parents, so aggregation must be done differently (see the sketch after this list).
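
To make the contrast concrete, here is a toy Python sketch of tree-based LCA (hypothetical data structure: a dict mapping each node to its single parent) that works for the taxonomy but not for GO:

    def tree_lca(parent, a, b):
        # Assumes a tree where every node has exactly one parent and
        # all nodes share a root: collect a's ancestors, then walk up
        # from b until we hit one of them.
        ancestors = set()
        node = a
        while node is not None:
            ancestors.add(node)
            node = parent.get(node)
        node = b
        while node not in ancestors:
            node = parent[node]
        return node

In GO, a node would map to several parents, so "walk up until the paths meet" is no longer well defined and a different aggregation strategy is needed.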

Finally, the question of "what is the expected output of a functional analysis" remains. Many articles pick a type of functional annotation and simply include a pie chart. If you ask a biologist whether they learned something from the pie chart, the answer is almost always "no". We should be able to do better than that. Suggestions or good examples of such data visualisations are always welcome.

move COPYING file into readme.md file

Can we move the COPYING files into ~, or put their contents into every readme.md file, to get rid of them?
Now that we have one single galaxyp repository this would make much sense, imho.

@jmchilton ok with you? You are probably the copyright holder?

unipept "advanced missed cleavage" support

Answers from Bart Mesuere to questions from Pratik Jagtap:

  • Peptides only found with the API and not using the web interface:
    Is it possible that your results from the web interface date from before March 26 and the API results from after? On that day, we updated our database to use a new UniProt release, which increased the number of protein entries from 50 million to 90 million. If I search for those peptides on the website today, I get a result for each of them.
  • Peptides only found using the web interface and not using the API:
    You are correct that the API has no "advanced missed cleavage" support (for now), but this is only half the story. When you submit a peptide to the web interface, we always do an in silico tryptic digest on the peptides to account for missed cleavages. This comes down to running a regular expression on the input peptides that splits them after every K or R, except when followed by a P (in Ruby, this is something like peptides.gsub(/([KR])([^P])/, "\\1\n\\2").gsub(/([KR])([^P])/, "\\1\n\\2"), applied twice because a single pass cannot match overlapping cleavage sites). By default, we search those split peptides and return each of them as a separate hit. If you enable the advanced missed cleavage handling, we try to piece the separate parts back together and return them as one result. This is significantly slower than the default, because we can't precompute it.

So, the reason you don't get a result for the listed peptides is that they aren't tryptic peptides. And while we don't offer the advanced missed cleavage handling in the API, you can still get some results for them by "making them tryptic". You could simply add a processing step to the Galaxy workflow that applies the regular expression to split them and then runs them through the Unipept API. As an extra preprocessing step, you could also filter out peptides longer than 50 or shorter than 5 residues, since we only have peptides with a length in the range [5,50] in our database. I did this for the unmatched peptides in your Excel file and attached the results. As you can see, this gives a result for each of the peptides, and in many cases the result is equally specific as the web interface's. While this approach is not perfect, it's definitely better than reporting nothing at all, plus it's several times faster than the advanced missed cleavage handling.
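
A minimal Python sketch of this preprocessing (function names are illustrative; the regex mirrors the Ruby snippet above):

    import re

    def tryptic_fragments(peptide):
        # Split after every K or R unless followed by P; the substitution
        # is applied twice because one pass cannot handle overlapping
        # cleavage sites (e.g. "KKA").
        split = re.sub(r'([KR])([^P])', '\\1\n\\2', peptide)
        split = re.sub(r'([KR])([^P])', '\\1\n\\2', split)
        return split.split('\n')

    def preprocess(peptides):
        # Unipept only stores peptides of length 5 to 50.
        return [p for pep in peptides
                for p in tryptic_fragments(pep)
                if 5 <= len(p) <= 50]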

  • "I generally check all options except 'Equate I and L'.":
    If your workflow handles peptides resulting from metaproteomics experiments, I would definitely encourage you to enable this option. Since Leucine and Isoleucine are isobaric, search engines like mascot and sequest can't distinguish between them. This means that a matched peptide like "AALER" could equally be "AAIER". Checking this option has no performance penalty.
  • "we would like to see if we get equivalent parameters as the 'metaproteomics web' options.":
    If you're asking whether the advanced missed cleavage handling will become available in the API any time soon, I'll have to disappoint you. It is on our todo list, but it's not a high-priority item. The main reason for this is performance. Returning the LCA for a tryptic peptide is just fetching a row from the database and returning it. For the advanced missed cleavage handling, we potentially have to retrieve thousands of rows from the database and run our algorithm on them. Such a performance penalty is unsuited for an API that is meant for high-throughput data analysis.
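
For completeness, the "Equate I and L" option amounts to a trivial normalisation, sketched here in Python:

    def equate_il(peptide):
        # Isoleucine and leucine have identical mass, so map both to L
        # before matching.
        return peptide.replace('I', 'L')

    # The example from above: "AALER" and "AAIER" become identical.
    assert equate_il('AALER') == equate_il('AAIER')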

updating tools in public toolshed

I have a general question about bugfixes to tools in the public toolshed.

I'm setting up a local Galaxy instance for proteomics, and I've run into some issues with the versions of the wrappers maintained by 'galaxyp' that are in the main public toolshed. For instance, the 'myrimatch' wrapper is from 2014 and seems to have fatal issues (backspaces inserted into the command line, which cause failure). These issues have been fixed in the version in the sandbox and in this repository, but don't seem to have made it back into the "stable" toolshed.

My short-term solution is to install the latest version from here (which also had some bugs, but I can submit PRs for those directly). Given that the Galaxy docs recommend avoiding sandboxed tools on production servers, I'm wondering if there's a system in place for getting critical bug fixes into tools in the main toolshed even when the latest code might not be well-tested enough to put up there.

myrimatch installation error (OpenMS)

When installing OpenMS, a single dependency (myrimatch) gives an error (seemingly a wrong link):

File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 121, in install_and_build_package_via_fabric
    tool_dependency = self.install_and_build_package( install_environment, tool_dependency, actions_dict )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 79, in install_and_build_package
    initial_download=True )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/tool_dependencies/recipe/recipe_manager.py", line 31, in execute_step
    initial_download=initial_download )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 665, in execute_step
    dir = self.url_download( work_dir, downloaded_filename, url, extract=True, checksums=checksums )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 165, in url_download
    raise Exception( err_msg )

Error downloading from URL http://getgalaxyp.msi.umn.edu/downloads/myrimatch-bin-linux-x86_64-gcc41-release-2_1_131.tar.bz2 : <urlopen error [Errno -2] Name or service not known>

Make Tools Collection-Aware

@jmchilton's recent work on workflow scheduling and data collections will change a lot for the galaxyp project, and we should make our wrappers collection-aware.
Soon we may even have workflows within workflows and loop-like structures. This issue should track our progress.
Please add all tools that need to be adapted, plus one really complex example workflow.

  • fasta_merge_files_and_filter_unique_sequences

readme should be extended

Our readme should be extended a little to highlight the community project, a few tools, our aims, etc.

Generating proteomic databases from 16S rRNA / taxonomy data.

TOOL IDEA:
Given that most metagenomics studies are based on 16S ribosomal RNA-based taxonomy identification, a tool that takes species names as input and retrieves the corresponding proteomes (if available) from the UniProt website would be desirable. In our discussions with researchers working in the field of metaproteomics, this came up as a useful tool. Any ideas on the effort that would be required to build it?

Suggestions (from emails in November 2014):

A) Suggestion by Ira Cooke (@iracooke Australia):

UniProt has a great API … so if you know the species identifier (or a list of them) you can get a customized database directly from UniProt by downloading via a special URL that contains all the taxonomic identifiers. This negates the need for a merge step.

This is an example (dog and mouse):

http://www.uniprot.org/uniprot/?query=taxonomy%3a9615+OR+taxonomy%3a10090&force=yes&format=fasta

I guess the trick would be to go from species names to taxon IDs … since this is inherently fuzzy (species might be listed under a different name from what you expect). For my purposes I just do this by hand using UniProt via the NCBI taxonomy database … but if you have a bulk list of species names I wouldn't be sure how to do it in an automated way (unless all the species names had a perfect match in the database).

I believe this is the best option as it is simple (just a galaxy tool), it doesn’t require storing data locally and it will always give the latest data. It is also precise as there is no reliance on parsing names.

One missing piece is the "Species -> TaxonID" tool, but this could be done using a local download of the NCBI Taxonomy data (or a web API .. I haven't looked, but UniProt might even provide this too). I'd actually say that you're better off getting away from using species names if possible … to be precise you need the taxon IDs at some point anyway.
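
A rough Python sketch of this suggestion, built on the legacy URL pattern from the dog-and-mouse example above (the current UniProt site may require its newer REST API):

    import urllib.parse
    import urllib.request

    def uniprot_fasta_for_taxa(taxon_ids):
        # Build a query like "taxonomy:9615 OR taxonomy:10090",
        # mirroring the example URL above, and fetch FASTA directly.
        query = ' OR '.join('taxonomy:%s' % t for t in taxon_ids)
        params = urllib.parse.urlencode(
            {'query': query, 'force': 'yes', 'format': 'fasta'})
        url = 'http://www.uniprot.org/uniprot/?' + params
        with urllib.request.urlopen(url) as response:
            return response.read().decode()

    # fasta = uniprot_fasta_for_taxa(['9615', '10090'])  # dog + mouse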

B) Suggestion by Lennart Martens (Belgium):

DBToolkit can do this from the local, complete UniProt file (in .dat format) for species as well as for entire taxa, specified as either a text string ('homo sapiens') or a TaxID (9606). As stated above, it does require a local version of the file, however.

Conclusion:

Most 16S rRNA studies provide lists of identified species (and strains). It would be a good idea to take such a list, either (a) convert it into taxonomy identifiers or (b) submit the species names directly, and feed it through the UniProt API or some features of DBToolkit to generate a FASTA file of the available proteomes.
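
For the species-to-TaxonID step, a sketch assuming a local copy of names.dmp from the NCBI Taxonomy dump (whose columns are pipe-separated):

    def load_name_to_taxid(names_dmp_path):
        # names.dmp columns: tax_id | name_txt | unique name | name class
        mapping = {}
        with open(names_dmp_path) as handle:
            for line in handle:
                fields = [field.strip() for field in line.split('|')]
                tax_id, name, _unique, name_class = fields[:4]
                if name_class == 'scientific name':
                    mapping[name.lower()] = tax_id
        return mapping

As noted above, species listed under unexpected synonyms would still need manual curation.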

msconvert for Wiff

@jmchilton @iracooke I need to add support for converting Wiff files to the msconvert tools.

ProteoWizard Reader_ABI.cpp checks for the existence of a wiff scan file by appending ".scan" to the given input wiff file and searching for that filename in the same directory.

msconvert_wrapper.py, as currently coded, doesn't have an option to copy in the .scan file without also adding it explicitly to the inputs on the command line.

Any thoughts on how best to add wiff conversion support?

If no one has used the wiff datatype as yet, would it be acceptable to add explicit Metadata fields for the .wiff and .wiff.scan files, similar to BowtieIndex in lib/galaxy/datatypes/ngsindex.py or SnpSiftDbNSFP in lib/galaxy/datatypes/text.py, allowing direct use of ${input.extra_files_path}/${input.metadata.wiff} and ${input.extra_files_path}/${input.metadata.scan} in command-line generation? This would allow original filenames to be used in the extra_files_path.

class Wiff( Binary ):
    """Class for wiff files."""
    MetadataElement( name='reference_name', default='ABSCIEX', desc='Reference Name', readonly=True, visible=True, set_in_upload=True, no_value='ABSCIEX' )
    MetadataElement( name="wiff", default=None, desc="reference_name.wiff", readonly=True, visible=True, no_value=None )
    MetadataElement( name="scan", default=None, desc="reference_name.wiff.scan", readonly=True, visible=True, no_value=None )

Quantitation tools for stable isotope labelling

Hi,
I have a question regarding current options in GalaxyP for protein quantitation, especially with stable isotope labels (e.g. SILAC). While there are a lot of good possibilities for peptide identification and protein inference, I do not see many options for quantitation. There are ProteinPilot and Scaffold, but neither is freeware. The integration of MaxQuant was discontinued, as @bgruening told me. I am not sure whether XPRESS and ASAPRatio from the Trans-Proteomic Pipeline have been integrated, but I would rather not recommend them to anybody. (That's why.)

So the only possibilities I see come from the OpenMS project, e.g. FeatureFinderMultiplex and ERPairFinder. Did I get this right, or am I overlooking quantitation functions of other (free) tools?

DIA Swath Workflow

Tools needed for a Data-Independent Acquisition (SWATH) workflow, using DIA-Umpire SE for signal extraction:

  • msconvert: .wiff -> mzXML
  • DIA-Umpire SE: mzXML -> *.mgf
  • msconvert: .mgf -> mzXML
  • X!Tandem: mzXML -> pep.xml
  • PeptideProphet
  • Xinteract-iProphet
  • Mayu
  • SpectraST
  • Spectrast2Spectrast_iRT
  • SpectraST_cons
  • Spectrast2tsv
  • TSVtoTRAML
  • OpenSwath

Metagenomic sequence processing

Hi everybody,

the idea is to have a suite of tools able to carry out quality filtering, assembly, clustering and ORF finding on metagenomic sequences. This would be really useful for generating (sample-matched) metagenome-based databases for metaproteomics, but also, more generally, for all microbiome/metagenome scientists.

Other issues and the Google SpreadSheet already mention SixGill and Omega3, the former being ready for testing. IMO it would be worth considering other tools as well. Sequence clustering, for instance, is almost mandatory in certain cases to minimize sequence redundancy and reduce database size.

These are the tools we're currently using in our lab:

  1. fastq_mergepairs (http://drive5.com/usearch/manual/cmd_fastq_mergepairs.html) and fastq_filter (http://drive5.com/usearch/manual/cmd_fastq_filter.html) from the USEARCH suite for merging of paired reads and quality filtering
  2. Velvet (https://www.ebi.ac.uk/~zerbino/velvet/) as sequence assembler, which evolved into MetaVelvet and MetaVelvet-SL (http://metavelvet.dna.bio.keio.ac.jp/) (the above-mentioned Omega3 might be better than all of these)
  3. uclust (http://drive5.com/usearch/manual/uclust_algo.html) and CD-HIT (https://github.com/weizhongli/cdhit) for sequence clustering
  4. FragGeneScan (https://sourceforge.net/projects/fraggenescan/) for ORF finding, as an (often better) alternative to six-frame translation.

I'm not a developer, so I don't know whether these tools can be packaged for Galaxy (or are already present on other Galaxy servers).
Do you think this could be useful? Can anybody have a look at these tools and/or propose better or simpler tools that do similar things?

Thanks!
Alessandro

Metaproteomics Hackathon: December 15th and 16th.

Call to software developers and users:
Help the metaproteomics community to improve tools, documentation and workflows for metaproteomics research. If you ever wanted to contribute to a vibrant community, this is your event. We will get you started and explain everything. No coding skills are necessary to contribute.

Google SpreadSheet: http://z.umn.edu/metaproteomics

https://gitter.im/GalaxyProteomics/Lobby#
https://github.com/galaxyproteomics/tools-galaxyp/issues

December 15th start time: 9:00 AM Central European Time.
Bjoern will be on the gitter at 9 AM Central European Time for European developers. He plans to have one session for ALL developers at 4 PM CET (10 AM EST, 9 AM Minneapolis, 7 AM Pacific Standard Time).

ProteinPilot has datatypes defined

Datatypes are defined in their own repository. We should remove them from the ProteinPilot repository.
Moreover, I think they can be replaced with something already available in Galaxy, so there is no need to introduce yet another datatype.
The group datatype seems to be special, and there is a special wrapper to convert group to xml. If we include group2xml.exe XML $input $output in the ProteinPilot wrapper directly, we can get rid of the datatype and one tool.
A TPP expert is needed here :)

planemo lint all the tools

planemo recently gained some features that strengthen its linting support, and we should lint all our tools.

This sounds like a really good hackathon task and does not involve much programming.

MetaProSIP parameter -r_executable seems to be rendered incorrectly

The parameter r_executable (type="data" format="txt" value="R") renders as a selection field in Galaxy that only allows selecting datasets from the history.

This refers to revision a25d96e0d837, which should be the newest; we are using Galaxy release 17.05.

Is it necessary to have an R installation available? On our systems R is not available by default, but needs to be "loaded" with a command (we use http://modules.sourceforge.net/). Are there R packages needed to run the tool?

Hackathon on conda packaging: September 27th and 28th

Hi GalaxyP Community,

I would like to invite you to a hackathon on September 27th and 28th. We will package as many dependencies as possible into conda packages.

Conda is a new packaging system that Galaxy can use and that finally enables Travis tool testing! It would be great if someone could join!

We will meet tomorrow here:

https://hangouts.google.com/call/35zl5pahj5hkppxokhuxvszx24e
and on gitter here:
https://gitter.im/GalaxyProteomics/Lobby

Some slides for a Conda quick-start can be found here: https://galaxy.slides.com/bgruening/conda-quick-start/live#/

A tool list that needs help can be found here: https://docs.google.com/spreadsheets/d/1C9p_XLiLyrbMoRVKS_H582TL1I5v9FKAv2dl5VAYxjw/edit#gid=0

Crying for help - TPP needs a conda package

Most of the tools without tests and a conda package are now TPP-based.
I started to work on this here: https://github.com/bioconda/bioconda-recipes/compare/tpp?expand=1

So far I have not succeeded. Any brave soul who can help me with this gets free beer at the next GCC.
We could also think about splitting the different binaries into separate packages instead of building the entire thing. We might also want to build the latest TPP version and upgrade our tools.

This is the last big step missing for a completely tested GalaxyP repository - I think it's worth the pain!

PeptideShaker & SearchGUI updates

So we now have an updated version of PeptideShaker that works with the latest beta release. As far as I can tell it works, and we could get rid of our ugly workarounds.
Many thanks to Marc, who fixed a lot of bugs on the PeptideShaker side today.

A few things still need to be addressed:

  • Help needs to be updated.
  • searchgui_tinydb1.fasta is not in the desired format
  • We need to document the FASTA format in our help, as this can cause trouble down the road
  • Gene annotation is only available for UniProt-formatted FASTA files, as far as I understood; we need to find a way to communicate this
  • As soon as the new version is out, we need to create new tool_dependencies.xml files

Mycoplasma proteomes in Protein Database Downloader

Several mycoplasma strains are very common contaminants in cell culture. Protein Database Downloader does not support mycoplasma so far.

Could the following strains be added to Protein Database Downloader:

  • Mycoplasma orale
  • Mycoplasma hyorhinis
  • Mycoplasma arginini
  • Mycoplasma fermentans
  • Mycoplasma hominis
  • Acholeplasma laidlawii

As these would often be downloaded together and merged afterwards, it might make sense to allow downloading all of them at once, perhaps as Taxonomy: "Common mycoplasma contaminants (M. orale, M. arginini [...])"?

Add .shed.yml files

Add .shed.yml files to every repository to make use of planemo's upload feature (a minimal example follows the checklist below).

  • appendfdr
  • blast_plus_remote_blastp
  • dbbuilder
  • decoyfasta
  • digestdb
  • directag_and_tagrecon
  • fasta_merge_files_and_filter_unique_sequences
  • filter_by_fasta_ids
  • idpqonvert
  • ltq_iquant_cli
  • maxquant
  • mgf_formatter
  • msconvert
  • myrimatch
  • peptideshaker
  • pepxml_to_xls
  • proteinpilot
  • protxml_to_xls
  • psm_eval
  • scaffold
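
For orientation, a minimal .shed.yml might look like this (values illustrative, using msconvert as an example):

    name: msconvert
    owner: galaxyp
    description: msconvert from the ProteoWizard suite, wrapped for Galaxy
    categories:
      - Proteomics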

sixgill metaproteomics database for LC-MS/MS

https://github.com/dhmay/sixgill/wiki
Sixgill (Six-frame Genome-Inferred Libraries for LC-MS/MS) is a tool for using shotgun metagenomics sequencing reads to construct databases of 'metapeptides': short protein fragments for database search of LC-MS/MS metaproteomics data.
Tasks:

  • Conda recipe for sixgill
  • Datatype: metapeptides.tsv.gz - tab-delimited gzipped files
  • Datatype: MetaGeneAnnotator.txt -
  • sixgill galaxy tools

SearchGUI -prec_ppm / "Precursor Ion Tolerance Units" issue

Dear Galaxy-P colleagues,

Testing the galaxy-p Galaxy flavour with the SearchGUI version 2.9.0 tools, I changed the default "Precursor Ion Tolerance Units" to Da and got the following error: Error parsing the prec_ppm option: Found 2 where 0 or 1 was expected.

Cheers,

Yvan

MVP plugin for IGV

@PratikDJagtap presented a Multi-omics Visualization Platform (MVP) plugin for the Integrative Genomics Viewer (IGV) at ASMS. I did not find any documentation online. How can it be integrated into Galaxy?

Proteogenomics workflow

Can someone (@PratikDJagtap) point me to the Galaxy-P proteogenomic workflow into which I should integrate my Omicron tools, e.g. CustomProDB and PSM2SAM? I checked the "Published workflows" section of the public Galaxy-P site and it's not there. We can discuss any design considerations for the fused workflow here.

I see there's a "Tool needed" label; that raises the question, why is there no "Workflow needed" label? Pinging @bgruening because I don't know who better to ask. ;)

Metaproteomic public repositories (from Protein database downloader)

Links to metaproteomic public repositories

IDEA:
It would be a good idea to add a few common publicly available metaproteomics databases (see the HOMD database for an example below) to the Protein Database Downloader tool in Galaxy (https://github.com/galaxyproteomics/tools-galaxyp/tree/master/tools/dbbuilder).

Please suggest links for some useful metaproteomics databases.

HOMD database: ftp://ftp.homd.org/HOMD_annotated_genomes_archive/oral_microbiome_dynamic.aa.zip

move "convert windows newlines" tool into an other place

This tool looks strange to me. If the purpose of this tool is to convert Windows newlines only for ProteinPilot, as the name suggests, we should include the tr magic in the ProteinPilot wrapper directly.

If such a tool is really missing in general, we should put it into a text-manipulation repository, e.g. here:
https://github.com/bgruening/galaxytools/tree/master/text_processing/text_processing
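
For reference, the "tr magic" is presumably just stripping carriage returns; a minimal Python equivalent (assuming that is all the tool does):

    def strip_carriage_returns(path_in, path_out):
        # Equivalent of `tr -d '\r'`: turn Windows (CRLF) newlines
        # into Unix (LF) newlines.
        with open(path_in, 'rb') as src, open(path_out, 'wb') as dst:
            dst.write(src.read().replace(b'\r', b''))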

Anyone know if this is still a problem or does Galaxy take care of this?
@jmchilton @jj-umn ?
