pytorch / pytorch.github.io

The website for PyTorch

Home Page: https://pytorch.org

License: BSD 3-Clause "New" or "Revised" License

HTML 86.19% JavaScript 2.28% Ruby 0.01% Makefile 0.04% Shell 0.13% Jupyter Notebook 6.14% Python 0.45% SCSS 4.76%

pytorch.github.io's Introduction

pytorch.org site

https://pytorch.org

The static website for PyTorch and its tutorials and documentation, built with Jekyll and Bootstrap.

Prerequisites

Install the following packages before attempting to set up the project:

On OSX, you can use:

brew install rbenv ruby-build nvm

Setup

Install required Ruby version:

#### You only need to run these commands if you are missing the needed Ruby version.

rbenv install `cat .ruby-version`
gem install bundler -v 1.16.3
rbenv rehash

####

bundle install
rbenv rehash

Install required Node version

nvm install
nvm use

Install Yarn

brew install yarn --ignore-dependencies
yarn install

Local Development

To run the website locally for development:

make serve

Then navigate to localhost:4000.

Note that the serve task is defined in a Makefile in the root directory. We use make instead of the standard jekyll serve because we also need to run yarn, which Jekyll does not handle by default.

Building the Static Site

To build the static website from source:

make build

This will build the static site at ./_site. This directory is not tracked in git.

Deployments

The website is hosted on GitHub Pages at https://pytorch.org.

To deploy changes, merge your latest code into the site branch. The site will then be built automatically and committed to the master branch by a CircleCI job.

To view the status of the build visit https://circleci.com/gh/pytorch/pytorch.github.io.

Contributing to PyTorch Documentation and Tutorials

  • You can find information about contributing to PyTorch documentation in the PyTorch repo README.md file.
  • Information about contributing to PyTorch Tutorials can be found in the tutorials README.md.
  • Additional contribution information can be found in PyTorch CONTRIBUTING.md.

pytorch.github.io's People

Contributors

alenacal, andresruizfacebook, arielmoguillansky, atalman, brianjo, brsoff, brucejlin, brucejlin1, cjyabraham, ericnakagawa, hhsuk, hydroxyhelium, joelmarcey, jspisak, malfet, msaroufim, osalpekar, patmellon, pytorchbot, pytorchsam, ritaiglesias-96, seemethere, sofiaguerraber, soumith, subramen, svekars, terryce, woo-kim, wookim3, zou3519


pytorch.github.io's Issues

Confusing description of torch.multiprocessing and nn.DataParallel() in torch.distributed

📚 Documentation

The torch.distributed package provides PyTorch support and communication primitives for multiprocess parallelism across several...... This differs from the kinds of parallelism provided by Multiprocessing package - torch.multiprocessing and torch.nn.DataParallel() in that it supports multiple network-connected machines and in that the user must explicitly launch a separate copy of the main training script for each process.
source

What do the last two "that"s in "in that" refer to: torch.multiprocessing and torch.nn.DataParallel(), or torch.distributed?

Cannot reach "Get Started" page

🕋 Website

To Reproduce

Steps to reproduce the behavior (if applicable):

Just click the white "Get Started" button on pytorch.org.

Expected behavior

https://pytorch.org/get-started/ should show the installation instructions, but it only leads to a dead-end page.

Screenshots

Screenshot from 2019-05-17 03-32-31

Desktop (please complete the following information):

  • OS: Ubuntu 16.04 LTS
  • Browser: Firefox
  • Version 66.0.5(64-bit)

Problem with Python 3.6.0

First I installed PyTorch on Windows 8.1 with CUDA 10 and Python 3.6.0, and it failed with a DLL-not-found error. After upgrading to Python 3.6.7, the problem was solved. But the download guide doesn't say that PyTorch does not support Python 3.6.0, so the download guide should be updated. Thanks!

Website layout suggestion regarding the navigation bar

I have two small suggestions for improving the PyTorch website:

  1. It may be a good idea to include a link to the Tutorials in the navigation pane and rename "Docs" to "API Docs" or similar. This would make it easier for new users to find, and for returning users it would be easier than scrolling through the main page.

Alternatively, the "Docs" button could be a nested button with "API, Tutorials" under it that opens up on hover (like, e.g., in the MkDocs templates).

  2. It would be nice to include a link to the http://pytorch.org main page on the http://pytorch.org/docs/ page. Sometimes I am browsing the API docs and want to navigate back to the main page to get to and search the forum.

https redirection is not correct

To Reproduce

Steps to reproduce the behavior (if applicable):

  1. Go to 'https://pytorch.org/'
  2. Click on 'Docs'
  3. We are going to 'https://pytorch.org/docs'
  4. We are redirected to 'http://pytorch.org/docs/'
  5. See error 'pytorch.org refused to connect'
  6. Manually typing 'https://pytorch.org/docs/' redirects us to 'https://pytorch.org/docs/stable/index.html' and it works.

Desktop (please complete the following information):

  • OS: macOS Mojave
  • Browser: Chrome
  • Version: 69

Additional context

Not only 'Docs', many other sections have the same issue.

JSON Parse Error on Book Download Page

🕋 Website

When I go to https://pytorch.org/deep-learning-with-pytorch and click "Download Book" after entering my email, I get a JSON parse error. I am using Firefox (language is Turkish).

To Reproduce

Steps to reproduce the behavior (if applicable):

  1. Go to 'https://pytorch.org/deep-learning-with-pytorch' using Firefox
  2. Click on 'Download Book'
  3. See error

Expected behavior

I expected to get a download link to the book.

Screenshots

Desktop (please complete the following information):

  • OS: Linux Mint XFCE 18.2
  • Browser: Firefox ( Language is Turkish if necessary)
  • Version: 70.0.1

Additional context

I was able to download it without any errors using Chromium.

Keep cuda 10.0 button in quickstart

📚 Documentation

c9b5b33#diff-7ea25fd69a9113c141f8c799665f10f8
After this commit, the CUDA 10.0 option was removed from the quickstart UI.

Many users are still using CUDA 10.0 instead of 10.1, and there are still several options available for using 10.0 instead of 10.1. However, the quickstart UI only shows the 10.1 option.

In my opinion, this may mislead users into installing 10.1 again even if they already have 10.0 on their server. (Actually, I did.) What do you think about keeping 10.0 in the quickstart UI?

image

Question about Korean translation

📚 Documentation

Hello, I am working on a project to translate the official PyTorch tutorials into (unofficial) Korean.

The unofficial Korean tutorial site related to the above project is visited by more than 2,000 (Korean) visitors every week to learn about PyTorch.

Although this repository is licensed under BSD-3, I would still like to ask whether there are any other provisions for translation. (Or I wonder if there are plans to offer it in other languages; for the tutorials, here's a similar question.)

Thank you,
Junghwan.

Could not open doc from the Docs link on pytorch.org

🕋 Website

  1. I could not open the Docs directly from the front page of pytorch.org, but I can open it from Tutorials --> Docs; I see they point to different links.

  2. I could not open torch._C from the link https://pytorch.org/docs/stable/_modules/torch/_C.html; I got an 'Oops!' error.


Potential problem with example code on extending the autograd

In the example code for extending the autograd (link), in the backward function, grad_output is a Variable, but all the saved variables (input, weight, bias) are Tensors, since they were converted before the call to forward. If we invoke mm with a mixture of Tensor and Variable, we get a RuntimeError: mm(): argument 'mat2' (position 1) must be Variable, not torch.FloatTensor. Should we convert grad_output to a Tensor first and then convert the results back to Variables in backward?
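For context, the Variable/Tensor split this refers to was removed in PyTorch 0.4, and later versions of the docs example use the ctx-based static-method API, in which grad_output and the saved values are all plain tensors. A minimal sketch in that style (an illustration of the newer API, not the page's original code):

import torch

class MyLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight):
        # Save the tensors needed by the backward pass.
        ctx.save_for_backward(input, weight)
        return input.mm(weight.t())

    @staticmethod
    def backward(ctx, grad_output):
        # grad_output and the saved values are all ordinary tensors here,
        # so no Variable/Tensor conversion is needed.
        input, weight = ctx.saved_tensors
        grad_input = grad_output.mm(weight)
        grad_weight = grad_output.t().mm(input)
        return grad_input, grad_weight

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(2, 3, requires_grad=True)
MyLinear.apply(x, w).sum().backward()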

Repository size is ~1GB

The Git repository size is around 1 GB; it should probably be reset.
I guess it's because of @pytorchbot commits, but why does it keep committing when there isn't a change in the docs?

I can't view the source code of torchvision on the "pytorch.org/docs/stable/_modules/torchvision"

📚 Documentation

Excuse me, I am learning how to construct some traditional models with PyTorch, and I used to read the source code of the relevant models in torchvision at "https://pytorch.org/docs/stable/_modules/torchvision", but now I can't reach it anymore. I am wondering whether PyTorch has stopped hosting torchvision's source code on its official website "https://pytorch.org/docs/stable/_modules/torchvision".
Thank you!

Minor error in instructions for installing previous version of pytorch

📚 Documentation

There is a minor error in the instructions for installing a previous version of PyTorch using pip. The command should read pip install torch==1.0.1 -f https://download.pytorch.org/whl/cpu/stable (double equals sign), not pip install torch=1.0.1 -f https://download.pytorch.org/whl/cpu/stable.

Legacy version wrong link

📚 Documentation

On the previous-versions download page, for v1.2, Linux and Windows, CUDA 10.0, it shows:
pip install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
but this will actually download the build for CUDA 9.2.
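For reference, the CUDA build that a wheel actually installed can be verified from Python (an illustrative check, not part of the original report):

import torch

print(torch.__version__)    # e.g. '1.2.0'
print(torch.version.cuda)   # reportedly shows a 9.2 build here instead of 10.0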

Missing source code of deeplabv3_resnet101 in torchvision.models.segmentation.deeplabv3_resnet101

🕋 Website

The link to the source of torchvision.models.segmentation.deeplabv3_resnet101 on https://pytorch.org/docs/stable/torchvision/models.html is broken. The link points to https://pytorch.org/docs/stable/_modules/torchvision/models/segmentation/segmentation.html#deeplabv3_resnet101, but that page doesn't have any content.

To Reproduce

Steps to reproduce the behavior (if applicable):

  1. Go to https://pytorch.org/docs/stable/torchvision/models.html
  2. Click on [Source] of torchvision.models.segmentation.deeplabv3_resnet101
  3. See error

Requesting more precise information about installing "Previous versions"

📚 Documentation


Could you please add the following to the descriptions?

  1. https://pytorch.org/get-started/previous-versions/#from-source
    I tried to follow the instructions here to install a previous version, but I failed again and again.
    Then I found that git submodule update --recursive is essential.

  2. https://pytorch.org/get-started/previous-versions/
    I tried to use Anaconda to install a previous version as written here, but I don't think conda install pytorch=0.3.0 cuda92 -c pytorch is working.

image
I tried to install the stable version of PyTorch from source, but I found that if I follow these instructions, it installs the nightly version. Could you kindly document this somewhere?

Unclear class parameters description in torchvision.transforms

📚 Documentation

https://pytorch.org/docs/stable/torchvision/transforms.html#transforms-on-pil-image

For many of the "Transforms on PIL Image" classes in torchvision.transforms, there are optional 'interpolation' and 'resample' class parameters that take a mysterious int input.
Ex: torchvision.transforms.Resize
I believe the below is the mapping of int values to image interpolation filters for torchvision.transforms.Resize, which I found in the PIL.Image.resize documentation:
NEAREST = NONE = 0
LANCZOS = ANTIALIAS = 1
BILINEAR = LINEAR = 2
BICUBIC = CUBIC = 3
BOX = 4
HAMMING = 5

I think the torchvision.transforms docs should explicitly list the mapping of interpolation/resample filter types for better clarity. I had to look through the PIL.Image.resize() source code to find the mapping for the 'interpolation' parameter in torchvision.transforms.Resize, which is harder than it needs to be.
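Until the docs list the mapping, one illustrative workaround is to pass the named PIL constants rather than bare ints, since they compare equal to the integers above (a small sketch; it assumes a torchvision version that still accepts PIL filter values for interpolation):

from PIL import Image
from torchvision import transforms

# PIL's filter constants are the ints the torchvision docs refer to.
assert Image.NEAREST == 0 and Image.LANCZOS == 1 and Image.BILINEAR == 2
assert Image.BICUBIC == 3 and Image.BOX == 4 and Image.HAMMING == 5

# Passing the named constant is clearer than the bare int.
resize = transforms.Resize(224, interpolation=Image.BICUBIC)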

pytorch blog the-road-to-1_0 has example code that failed to work

This is regarding sample at:
https://pytorch.org/blog/the-road-to-1_0/:

from torch.jit import script

@script
def rnn_loop(x):
    hidden = None
    for x_t in x.split(1):
        x, hidden = model(x, hidden)
    return x

I cannot make it work. Here is my code:

import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np

def test_ScriptModelRNN():
    class SimpleRNNCell(nn.Module):
        def __init__(self, input_size, hidden_size):
            super(SimpleRNNCell, self).__init__()
            self.linear_h = nn.Linear(input_size, hidden_size)

        def forward(self, inp, h_0):
            h = self.linear_h(inp)
            return h + h_0, h

    with torch.no_grad():
        sequence_len, input_size, hidden_size = 4, 3, 2
        model = SimpleRNNCell(input_size, hidden_size)
        hidden = torch.zeros(1, hidden_size)

        # # test cell
        # cell_input = torch.randn(input_size)
        # cell_output, hidden = model(cell_input, hidden)
        # import pdb; pdb.set_trace()
        # #

        @torch.jit.script
        def rnn_loop(x):
            hidden = None
            for x_t in x.split(1):
                x, hidden = model(x_t, hidden)
            return x

        input = torch.randn(sequence_len, input_size)
        output = rnn_loop(input)

I am getting:

Exception has occurred: RuntimeError

for operator (Tensor 0, Tensor 1) -> (Tensor, Tensor):
expected a value of type Tensor for argument '1' but found Tensor?
@torch.jit.script
def rnn_loop(x):
    hidden = None
    for x_t in x.split(1):
        x, hidden = model(x_t, hidden)
                              ~~~~~~ <--- HERE
    return x
:
@torch.jit.script
def rnn_loop(x):
    hidden = None
    for x_t in x.split(1):
        x, hidden = model(x_t, hidden)
                    ~~~~~ <--- HERE
    return x
File "/home/liqun/pytorch/torch/jit/__init__.py", line 751, in script
    _jit_script_compile(mod, ast, _rcb, get_default_args(obj))
File "/home/liqun/Untitled Folder/test_onnx_export.py", line 218, in test_ScriptModelRNN
    @torch.jit.script
File "/home/liqun/Untitled Folder/test_onnx_export.py", line 282, in <module>
    test_ScriptModelRNN()
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)

I found an HTML error in Get Started

🕋 Website


Additional context

Snipaste_2019-09-10_21-28-26

As you can see, there is a stray '/a>' in the page.

torchvision code is gone

The page https://pytorch.org/docs/stable/_modules/ says it lists "All modules for which code is available".

Torchvision is listed here (and was previously available) but now it is just a broken link.

Specifically I was previously looking at the code for the vgg network setup:
https://pytorch.org/docs/stable/_modules/torchvision/models/vgg.html#vgg16

I think this should be re-added, since it is very useful for understanding what is actually set up when you call:
vgg16 = models.vgg16(pretrained=True)

Windows font rendering on Chrome looks bad

🕋 Website

To Reproduce

Steps to reproduce the behavior (if applicable):

  1. Go to https://pytorch.org/cppdocs/notes/tensor_basics.html
  2. View a heading (I'm on Chrome on Windows); different letters look like they have different weights.

image

Expected behavior

It looks fine with the font-weight: normal rule disabled:

image

This is probably related to ClearType, but it should look normal on Windows.

Desktop (please complete the following information):

  • OS: Windows
  • Browser Chrome
  • Version 71

Additional context

Something similar happens when zoomed to 125%, but with normal paragraph text (note the "p" below) from the same page:

image

Incorrect URL for python37 pip wheels

🕋 Website

The URLs to download whl files are incorrect, giving errors:

$ pip install http://download.pytorch.org/whl/cu100/torch-1.0.0.post2-cp37-cp37m-linux_x86_64.whl
Could not install packages due to an EnvironmentError: 403 Client Error: Forbidden for url: http://download.pytorch.org/whl/cu100/torch-1.0.0.post2-cp37-cp37m-linux_x86_64.whl

I was able to install by removing the .post2 from the URL.

Expected behavior

The package should install without issues.

Screenshots

image

Desktop (please complete the following information):

  • OS: Arch
  • Browser FF & Chrome

Additional context

Install worked without .post2

Information on the page for graphics cards with CUDA capability < 3.0

Please provide information for people with older graphics cards that PyTorch (with CUDA) won't work for them.

I spent literally hours installing everything until I had the glorious torch.cuda.is_available() => True popping up in my console window.

But then I realized that I can't use it at all, because with my old graphics card (GeForce GM630m => CUDA capability 2.1) I always get the cuda runtime error (8) : invalid device function error.

Searching around the forum, I found that these cards are simply not supported and wouldn't be faster than the CPU version anyway.

It would be very helpful if this were visible at the top of the Get Started page, where one selects the PyTorch preferences for the install command.
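Until such a note exists on the page, the compute capability can be checked from Python before investing time in a CUDA setup (an illustrative snippet, not part of the original report):

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"compute capability {major}.{minor}")
    if (major, minor) < (3, 0):
        print("GPU is below the compute capability 3.0 minimum discussed above")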

Request: provide TracedModel/ScriptModel API and structure details

📚 Documentation

Dear Contributors,

TracedModel and ScriptModel are hard to work with in Python, because there is only a little documentation describing these structures.
Could you provide API and structure documentation for TracedModel and ScriptModel, for both Python and C++?
Then users could access them easily and deploy them to their expected target platform.

Thanks,
8086
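For reference, the basic tracing and serialization workflow the request is about looks roughly like this (a minimal sketch using the public torch.jit API; not the requested structure documentation):

import torch

model = torch.nn.Linear(3, 2)
example = torch.randn(1, 3)

# Trace the module to get a ScriptModule that can run without Python.
traced = torch.jit.trace(model, example)
traced.save("linear.pt")

# The saved archive can be loaded back in Python (or via torch::jit::load in C++).
loaded = torch.jit.load("linear.pt")
print(loaded(example))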

Repo license?

What's the license on this repo? I couldn't find any info.

We're thinking of modernizing the website of our open source project; the current one is too retro. The PyTorch website seems nice and clean, so we're thinking of using it as a starting point for customization and tweaking.

Broken Link to Faster R-CNN Source

The link https://pytorch.org/docs/stable/_modules/torchvision/models/detection/faster_rcnn.html#fasterrcnn_resnet50_fpn, linked from https://pytorch.org/docs/stable/torchvision/models.html#faster-r-cnn, is broken and brings me to this page:

image

https://pytorch.org/cppdocs/: torch::ones instead of at::ones

The page currently reads:

For example, while a tensor created with torch::ones will not be differentiable, a tensor created with torch::ones will be.

This should probably be written as:

For example, while a tensor created with at::ones will not be differentiable, a tensor created with torch::ones will be.

How to replace the website in the install command after -f ?

I am installing PyTorch on Windows 7 using pip. I got the command from the official website, as the picture shows.
pytorch
The command is pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html.
But it is too slow because of the -f https://download.pytorch.org/whl/torch_stable.html.
How can I replace this? My pip is already using a different mirror, but it is of no use. Thanks for the help.

conda installs 0.1.12 silently if mkl < 2018

I tried to use conda install -c pytorch pytorch on Ubuntu 16.04, and I found that it installed 0.1.12 without any warning. Eventually I found that PyTorch 0.3 requires mkl >= 2018, but conda will not warn you at all if the requirement is not satisfied.

I am curious why conda doesn't update mkl automatically or warn users during the installation.

pip installs wrong version of cuda with previous versions of pytorch

📚 Documentation

The pip install commands for older PyTorch versions described here don't actually install the corresponding version of CUDA with PyTorch. For example, running the following command on Ubuntu 16.04 with Python 3.7.3

pip3 install torch==1.0.0 -f https://download.pytorch.org/whl/cu100/stablebuild

results in

$ python
Python 3.7.3 (default, Apr  3 2019, 19:16:38)
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.0.0'
>>> torch.version.cuda
'9.0.176'

Expected behavior

I expected cuda 10.0.

Other information

I'm reporting this as a documentation error since the instructions in the old documentation worked. For example

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp37-cp37m-linux_x86_64.whl

results in

$ python
Python 3.7.3 (default, Apr  3 2019, 19:16:38)
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.0.0'
>>> torch.version.cuda
'10.0.130'
