Texar-PyTorch

Toolkit for Machine Learning and Text Generation, in PyTorch

Home Page: https://texar.io

License: Apache License 2.0


Introduction


(Note: This is the alpha release of Texar-PyTorch.)

Texar-PyTorch is an open-source toolkit based on PyTorch, aiming to support a broad set of machine learning tasks, especially text generation tasks such as machine translation, dialog, summarization, content manipulation, language modeling, and more. Texar is designed for both researchers and practitioners for fast prototyping and experimentation.

If you work with TensorFlow, be sure to check out Texar (TensorFlow), which has (mostly) the same functionalities and interfaces.

With the design goals of modularity, versatility, and extensibility in mind, Texar extracts the common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and facilitates arbitrary model architectures and algorithmic paradigms, e.g.,

  • encoder(s) to decoder(s), sequential- and self-attentions, memory, hierarchical models, classifiers, ...
  • maximum likelihood learning, reinforcement learning, adversarial learning, probabilistic modeling, ...

With Texar, cutting-edge complex models can be easily constructed, freely enriched with best modeling/training practices, readily fitted into standard training/evaluation pipelines, and rapidly experimented with and evolved by, e.g., plugging in and swapping out different modules.



Key Features

  • Versatility. Texar contains a wide range of modules and functionalities for composing arbitrary model architectures and implementing various learning algorithms, as well as for data processing, evaluation, prediction, etc.
  • Modularity. Texar decomposes diverse, complex machine learning models/algorithms into a set of highly reusable modules. In particular, model architectures, losses, and learning processes are fully decomposed.
    Users can construct their own models at a high conceptual level, just like assembling building blocks. It is convenient to plug in or swap out modules, and to configure the rich options of each module. For example, switching between maximum likelihood learning and reinforcement learning involves changing only a few lines of code (see the sketch after the Library API Example below).
  • Extensibility. It is straightforward to integrate any user-customized, external modules. Texar is also fully compatible with the native PyTorch interfaces, so models can take advantage of the rich PyTorch features and the resources of the vibrant open-source community.
  • Interfaces at different functionality levels. Users can customize a model through 1) simple Python/YAML configuration files for the provided model templates/examples, or 2) programming with the Python library APIs for maximal customizability (a configuration sketch follows this list).
  • Easy-to-use APIs; rich configuration options for each module, all with default values.
  • Pretrained Models such as BERT, GPT2, and more!
  • Well-structured, high-quality code with uniform design patterns and a consistent style.
  • Clean, detailed documentation and rich examples.
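
For instance, each module is configured through an `hparams` dictionary (or an equivalent YAML file), and any omitted entry falls back to that module's documented default. Below is a minimal sketch of the configuration dicts used in the Library API Example; the specific keys shown are illustrative and should be checked against each module's documentation:

# Illustrative `hparams` for the modules used in the example below; any key
# left out takes the module's documented default value.
hparams_emb = {'dim': 512}                            # word embedding size
hparams_encoder = {'num_blocks': 6, 'dim': 512}       # Transformer depth/width
hparams_decoder = {'rnn_cell': {'type': 'LSTMCell'}}  # decoder RNN cell type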

Library API Example

A code snippet that builds a (self-)attentional sequence encoder-decoder model:

import texar as tx

class Seq2Seq(tx.ModuleBase):
  def __init__(self, data):
    super().__init__()  # required for tx.ModuleBase / nn.Module subclasses
    self.embedder = tx.modules.WordEmbedder(data.target_vocab.size, hparams=hparams_emb)
    self.encoder = tx.modules.TransformerEncoder(hparams=hparams_encoder)  # config through `hparams`
    self.decoder = tx.modules.AttentionRNNDecoder(
        input_size=self.embedder.dim,
        encoder_output_size=self.encoder.output_size,
        vocab_size=data.target_vocab.size,
        hparams=hparams_decoder)

  def forward(self, batch): 
    outputs_enc = self.encoder(
        inputs=self.embedder(batch['source_text_ids']),
        sequence_length=batch['source_length'])
     
    outputs, _, _ = self.decoder(
        memory=outputs_enc,  # encoder outputs computed above
        memory_sequence_length=batch['source_length'],
        helper=self.decoder.get_helper(decoding_strategy='train_greedy'), 
        inputs=self.embedder(batch['target_text_ids']),
        sequence_length=batch['target_length']-1)

    # Loss for maximum likelihood learning
    loss = tx.losses.sequence_sparse_softmax_cross_entropy(
        labels=batch['target_text_ids'][:, 1:],
        logits=outputs.logits,
        sequence_length=batch['target_length']-1) # Automatic masking

    return loss


data = tx.data.PairedTextData(hparams=hparams_data)  # paired source/target text data
iterator = tx.data.DataIterator(data)

model = Seq2Seq(data)
for batch in iterator.get_iterator():
    loss = model(batch)
    # ...
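
To run the loop above end to end, a standard PyTorch optimizer can drive the parameter updates (a minimal sketch; the optimizer choice and learning rate are illustrative):

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for batch in iterator.get_iterator():
    optimizer.zero_grad()
    loss = model(batch)   # maximum likelihood loss from Seq2Seq.forward
    loss.backward()
    optimizer.step()

As noted under Key Features, switching from maximum likelihood to reinforcement learning mainly means sampling from the decoder instead of teacher-forcing it, and weighting sequence log-likelihoods by a task reward. A hedged REINFORCE-style sketch of the changed lines inside `forward`, assuming the decoder output exposes `sample_id` as Texar's RNN decoder outputs do; `compute_reward` is a hypothetical task metric (e.g., sentence BLEU), not a Texar API:

import torch.nn.functional as F

# Sample instead of teacher forcing, e.g. by passing
#   helper=self.decoder.get_helper(decoding_strategy='infer_sample')
# to self.decoder above, then form the policy-gradient surrogate loss:
log_probs = F.log_softmax(outputs.logits, dim=-1)
sampled_log_probs = log_probs.gather(
    -1, outputs.sample_id.unsqueeze(-1)).squeeze(-1)   # log p of sampled ids
pg_loss = -(compute_reward(outputs.sample_id)          # hypothetical reward fn
            * sampled_log_probs.sum(dim=-1)).mean()    # REINFORCE surrogate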

Many more examples are available in the examples directory of the repository.

Installation

git clone https://github.com/asyml/texar-pytorch.git 
cd texar-pytorch
pip install -e .
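
A quick sanity check that the package is importable (assuming the top-level package name `texar`, as used in the example above):

python -c "import texar"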

Getting Started

The examples in the repository and the documentation at https://texar.io are good places to start.

Reference

If you use Texar, please cite the tech report with the following BibTeX entry:

Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation
Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wanrong Zhu, Devendra Sachan and Eric Xing
2018

@article{hu2018texar,
  title={Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation},
  author={Hu, Zhiting and Shi, Haoran and Tan, Bowen and Wang, Wentao and Yang, Zichao and Zhao, Tiancheng and He, Junxian and Qin, Lianhui and Wang, Di and others},
  journal={arXiv preprint arXiv:1809.00794},
  year={2018}
}

License

Apache License 2.0
