
mbrukman / nemo


This project is forked from nvidia/nemo.


NeMo: a toolkit for conversational AI

Home Page: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/

License: Apache License 2.0

Python 30.11% Shell 0.10% Dockerfile 0.06% Jupyter Notebook 69.73%


Project Status: Active – the project has reached a stable, usable state and is being actively developed.

NVIDIA NeMo

Introduction

NeMo is a toolkit for creating Conversational AI applications.

The NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI from reusable components called Neural Modules. Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
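The typed-inputs/typed-outputs idea can be sketched as follows. This is a conceptual illustration only: the class and type names here are illustrative assumptions, not NeMo's actual API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NeuralType:
    """A label for the kind of tensor a module consumes or produces."""
    kind: str  # e.g. "AudioSignal", "EncodedRepresentation", "LogProbs"


class NeuralModule:
    """A conceptual block with declared input and output types."""
    input_types: dict = {}
    output_types: dict = {}


class Encoder(NeuralModule):
    input_types = {"audio": NeuralType("AudioSignal")}
    output_types = {"encoded": NeuralType("EncodedRepresentation")}


class Decoder(NeuralModule):
    input_types = {"encoded": NeuralType("EncodedRepresentation")}
    output_types = {"log_probs": NeuralType("LogProbs")}


def can_connect(producer, consumer) -> bool:
    """Type-check a connection: the producer's outputs must cover the consumer's inputs."""
    return all(t in producer.output_types.values()
               for t in consumer.input_types.values())


print(can_connect(Encoder, Decoder))  # True: encoder output feeds decoder input
print(can_connect(Decoder, Encoder))  # False: types do not line up
```

Because each module declares its types, mismatched connections can be rejected before any training starts, which is what makes the modules safely composable.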

The toolkit comes with extendable collections of pre-built modules and ready-to-use models for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Speech Synthesis (TTS).

Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.

NeMo product page.

Introductory video.

Requirements

NeMo works with:

  1. Python 3.6 or 3.7
  2. PyTorch 1.6 or above
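When scripting an environment check for requirements like "PyTorch 1.6 or above", note that version strings must be compared numerically rather than lexically ("1.10" sorts before "1.6" as a string). A minimal sketch, assuming plain dotted version strings with no local suffixes like "+cu102":

```python
def version_ok(version: str, minimum: str) -> bool:
    """Numerically compare dotted version strings, e.g. '1.10.0' >= '1.6'.

    Assumes purely numeric components (no '1.6.0+cu102' local suffixes).
    """
    def to_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)


print(version_ok("1.10.0", "1.6"))  # True: 1.10 is newer than 1.6
print("1.10.0" >= "1.6")            # False: lexicographic comparison gets it wrong
```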

Docker containers:

The easiest way to start training with NeMo is by using NeMo's container.

It has all requirements and NeMo 1.0.0b3 already installed.

docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 --device=/dev/snd nvcr.io/nvidia/nemo:1.0.0b3

If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 20.09-py3.

docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:20.09-py3

Installation

If you are not inside the NVIDIA Docker container, please install Cython first. If you wish to use either the ASR or TTS collections, please install libsndfile1 and ffmpeg as well.

  • pip install Cython
  • apt-get update && apt-get install -y libsndfile1 ffmpeg (If you want to install the TTS or ASR collections)

Once requirements are satisfied, simply install using pip:

  • pip install nemo_toolkit[all]==1.0.0b2 (latest version)

Or if you want the latest (or particular) version from GitHub:

  • python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all] - where {BRANCH} should be replaced with the branch you want. This is the recommended route if you are testing the latest WIP version of NeMo.
  • ./reinstall.sh - run from NeMo's git root. This installs the version from the current branch in development mode.
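After installing by any of the routes above, a quick smoke test can confirm the package is visible to Python. This is a generic check using only the standard library, not a NeMo-specific tool:

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be found, without actually importing it."""
    return importlib.util.find_spec(package) is not None


# After `pip install nemo_toolkit[all]`, this should report True:
print("nemo installed:", is_installed("nemo"))
```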

Examples

The <nemo_github_folder>/examples/ folder contains various example scripts. Many of them look similar and share the same arguments because we use Facebook's Hydra for configuration.

Here is an example command that trains an ASR model (QuartzNet15x5) on LibriSpeech using 4 GPUs and mixed-precision training. (It assumes you are inside the container with NeMo installed.)

root@987b39669a7e:/NeMo# python examples/asr/speech_to_text.py --config-name=quartznet_15x5 \
model.train_ds.manifest_filepath=<PATH_TO_DATA>/librispeech-train-all.json \
model.validation_ds.manifest_filepath=<PATH_TO_DATA>/librispeech-dev-other.json \
trainer.gpus=4 trainer.max_epochs=128 model.train_ds.batch_size=64 \
+trainer.precision=16 +trainer.amp_level=O1  \
+model.validation_ds.num_workers=16  \
+model.train_ds.num_workers=16 \
+model.train_ds.pin_memory=True

#(Optional) Tensorboard:
tensorboard --bind_all --logdir nemo_experiments
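The `+key=value` arguments in the command above are Hydra override syntax: a plain `key=value` overrides an existing config entry, while `+key=value` adds a new one. The toy parser below is only a sketch of that convention, written for illustration; it is an assumption, not Hydra's or NeMo's actual implementation:

```python
def apply_overrides(cfg: dict, overrides: list) -> dict:
    """Apply Hydra-style dotted overrides to a nested dict config.

    'a.b=v' overrides an existing key; '+a.b=v' adds a new key.
    """
    for item in overrides:
        add = item.startswith("+")        # '+key=value' adds a new entry
        key, value = item.lstrip("+").split("=", 1)
        node = cfg
        *parents, leaf = key.split(".")
        for parent in parents:            # walk/create the nested sections
            node = node.setdefault(parent, {})
        if not add and leaf not in node:
            raise KeyError(f"{key} not in config; use +{key} to add it")
        node[leaf] = value
    return cfg


cfg = {"trainer": {"gpus": "1"}}
apply_overrides(cfg, ["trainer.gpus=4", "+trainer.precision=16"])
print(cfg)  # {'trainer': {'gpus': '4', 'precision': '16'}}
```

The distinction matters in practice: misspelling an existing key with plain `key=value` fails loudly instead of silently training with a stray new option.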

Documentation

Version  Description
Latest   Documentation of the latest (i.e. main) branch
Stable   Documentation of the stable (i.e. 0.11.1) branch
Main     Documentation of the main branch
v0.11.1  Documentation of the v0.11.1 release
v0.11.0  Documentation of the v0.11.0 release

Tutorials

The best way to get started with NeMo is to check out one of our tutorials.

Most NeMo tutorials can be run on Google Colab.

To run tutorials:

  • Click on Colab link (see table below)
  • Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
Domain      Title
NeMo        Simple Application with NeMo (voice swap app)
NeMo        Exploring NeMo Fundamentals (NeMo primer)
NeMo Models Exploring NeMo Model Construction
ASR         ASR with NeMo
ASR         Speech Commands
ASR         Speaker Recognition and Verification
ASR         Online Noise Augmentation
ASR         Beam Search and External Language Model Rescoring
NLP         Using Pretrained Language Models for Downstream Tasks
NLP         Exploring NeMo NLP Tokenizers
NLP         Text Classification (Sentiment Analysis) with BERT
NLP         Question Answering with SQuAD
NLP         Token Classification (Named Entity Recognition)
NLP         Joint Intent Classification and Slot Filling
NLP         GLUE Benchmark
NLP         Punctuation and Capitalization
NLP         Named Entity Recognition - BioMegatron
NLP         Relation Extraction - BioMegatron
TTS         Speech Synthesis (TTS inference)
Tools       CTC Segmentation
Tools       Text Normalization for Text-to-Speech

Contributing

We welcome community contributions! Please refer to the CONTRIBUTING.md for the process.

License

NeMo is released under the Apache 2.0 license.

nemo's People

Contributors

alexgrinch, blisc, borisdayma, borisfom, chiphuyen, drnikolaev, ekmb, ericharper, fayejf, harisankarh, hkelly33, jbalam-nv, khcs, kkersten, muyangdu, nithinraok, okuchaiev, pchitale1, purn3ndu, qijiaxing, redoctopus, ryanleary, seovchinnikov, stasbel, titu1994, tkornuta-nvidia, vahidoox, vladgets, vsl9, yzhang123

