
Comments (2)

freewym commented on July 16, 2024

Looks like your numbers are 0.3-0.4 worse than what I got.
The final train/validation losses shown in my LM's train.log are:

| epoch 025 | loss 3.956 | ppl 15.52 | wps 19483 | ups 1 | wpb 27910.561 | bsz 1782.919 | num_updates 33875 | lr 9.76563e-07 | gnorm 0.160 | clip 0.021 | oom 0.000 | wall 48813 | train_wall 43183
| epoch 025 | valid on 'valid' subset | loss 4.119 | ppl 17.37 | num_updates 33875 | best_loss 4.11477

and those in my attention model's train.log are:

| epoch 035 | loss 1.970 | nll_loss 0.649 | ppl 1.57 | wps 560 | ups 0 | wpb 1644.168 | bsz 69.317 | num_updates 97300 | lr 1e-05 | gnorm 0.341 | clip 0.000 | oom 0.000 | wall 32856 | train_wall 116514
| epoch 035 | valid on 'valid' subset | loss 6.214 | nll_loss 5.392 | ppl 41.99 | num_updates 97300 | best_wer 14.2055 | wer 14.2757 | cer 15.1188

LM on eval2000:
| Evaluated 60411 tokens in 3.4s (17886.71 tokens/s) | Loss: 3.5959, Perplexity: 36.45

LM on rt03:
| Evaluated 109828 tokens in 5.5s (19909.00 tokens/s) | Loss: 3.7694, Perplexity: 43.35
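
As a quick sanity check (nothing toolkit-specific here), the loss and perplexity columns above are consistent with each other: the train.log lines report base-2 loss (ppl = 2**loss), while the evaluation output reports natural-log loss (ppl = exp(loss)). A minimal Python sketch using only the numbers printed above:

```python
import math

# train.log lines: perplexity matches 2 ** loss (base-2 loss)
print(2 ** 3.956)        # ~15.52 (LM train)
print(2 ** 4.119)        # ~17.37 (LM valid)
print(2 ** 0.649)        # ~1.57  (attention model train nll_loss)
print(2 ** 5.392)        # ~41.99 (attention model valid nll_loss)

# LM evaluation output: perplexity matches exp(loss) (natural-log loss)
print(math.exp(3.5959))  # ~36.45 (eval2000)
print(math.exp(3.7694))  # ~43.35 (rt03)
```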

Anyway, it's common to get slightly different results, since the type/number of GPUs and the PyTorch/CUDA versions yield slightly different numbers. For my experiments I was using 2 GeForce GTX 1080 Ti GPUs with the following environment (you can obtain this info by following the instructions shown on the PyTorch GitHub page when creating a new issue there; see the command after the listing below):

Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Debian GNU/Linux 9.4 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: version 3.7.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti

Nvidia driver version: 387.26
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.12.1
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl-service 1.1.2 py37h90e4bf4_5
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch
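
For completeness, a listing like the one above can be reproduced with PyTorch's bundled environment-collection script (this is what the PyTorch issue template asks you to run); a minimal sketch, assuming PyTorch is installed:

```python
import subprocess
import sys

# Invoke PyTorch's bundled collect_env module, which prints the
# PyTorch/CUDA/OS/library versions in the format shown above.
subprocess.run([sys.executable, "-m", "torch.utils.collect_env"], check=True)
```

Equivalently, run `python -m torch.utils.collect_env` from the shell.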


roshansh-cmu commented on July 16, 2024

Thanks!

