fastai / docker-containers

Docker images for fastai

Home Page: https://hub.docker.com/u/fastai

License: Apache License 2.0

Dockerfile 4.22% Python 5.08% Shell 85.14% Ruby 5.55%
conda data-science docker fastai machine-learning mlops

docker-containers's Introduction

Welcome to fastai


Installing

You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook - click “Open in colab” at the top of any page to open it (be sure to change the Colab runtime to “GPU” to have it run fast!). See the fast.ai documentation on Using Colab for more information.

You can install fastai on your own machines with conda (highly recommended), as long as you’re running Linux or Windows (NB: Mac is not supported). For Windows, please see “Running on Windows” for important notes.

We recommend using miniconda (or miniforge). First install PyTorch using the conda line shown here, and then run:

conda install -c fastai fastai

To install with pip, use: pip install fastai.

If you plan to develop fastai yourself, or want to be on the cutting edge, you can use an editable install (if you do this, you should also use an editable install of fastcore to go with it). First install PyTorch, and then:

git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"

Learning fastai

The best way to get started with fastai (and deep learning) is to read the book, and complete the free course.

To see what’s possible with fastai, take a look at the Quick Start, which shows how to use around 5 lines of code to build an image classifier, an image segmentation model, a text sentiment model, a recommendation system, and a tabular model. For each of the applications, the code is much the same.

Read through the Tutorials to learn how to train your own models on your own datasets. Use the navigation sidebar to look through the fastai documentation. Every class, function, and method is documented here.

To learn about the design and motivation of the library, read the peer reviewed paper.

About fastai

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:

  • A new type dispatch system for Python along with a semantic type hierarchy for tensors
  • A GPU-optimized computer vision library which can be extended in pure Python
  • An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
  • A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training
  • A new data block API
  • And much more…
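To make the callback idea above concrete, here is a minimal sketch of a 2-way callback system in plain Python. All names here (Callback, Scaler, Learner) are hypothetical illustrations of the concept, not fastai's actual Callback/Learner API:

```python
class Callback:
    # hooks a callback may override; each receives the learner, so it can
    # both *read* and *mutate* training state (the "2-way" part)
    def before_batch(self, learn): pass
    def after_batch(self, learn): pass

class Scaler(Callback):
    # a callback that modifies the batch before the training step
    def before_batch(self, learn): learn.xb = [x * 2 for x in learn.xb]

class Learner:
    def __init__(self, cbs): self.cbs, self.xb, self.losses = cbs, None, []
    def _call(self, name):
        for cb in self.cbs: getattr(cb, name)(self)
    def fit(self, batches):
        for xb in batches:
            self.xb = xb
            self._call('before_batch')
            self.losses.append(sum(self.xb))   # stand-in for forward/backward pass
            self._call('after_batch')

learn = Learner([Scaler()])
learn.fit([[1, 2], [3, 4]])
print(learn.losses)  # [6, 14] -- each batch was doubled by the callback
```

Because every callback sees the learner itself, any part of the data, model, or optimizer can be inspected or changed at any point during training.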

fastai is organized around two main design goals: to be approachable and rapidly productive, while also being deeply hackable and configurable. It is built on top of a hierarchy of lower-level APIs which provide composable building blocks. This way, a user wanting to rewrite part of the high-level API or add particular behavior to suit their needs does not have to learn how to use the lowest level.

Layered API

Migrating from other libraries

It’s very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with other libraries. Generally, you’ll be able to use all your existing data processing code, but will be able to reduce the amount of code you require for training, and more easily take advantage of modern best practices. Here are migration guides from some popular libraries to help you on your way:

Windows Support

Due to Python multiprocessing issues on Jupyter and Windows, num_workers of DataLoader is reset to 0 automatically to avoid Jupyter hanging. This makes tasks such as computer vision in Jupyter on Windows many times slower than on Linux. This limitation doesn’t exist if you use fastai from a script.

See this example to fully leverage the fastai API on Windows.

We recommend using Windows Subsystem for Linux (WSL) instead – if you do that, you can use the regular Linux installation approach, and you won’t have any issues with num_workers.

Tests

To run the tests in parallel, launch:

nbdev_test

For all the tests to pass, you’ll need to install the dependencies specified as part of dev_requirements in settings.ini:

pip install -e .[dev]

Tests are written using nbdev, for example see the documentation for test_eq.

Contributing

After you clone this repository, make sure you have run nbdev_install_hooks in your terminal. This installs Jupyter and git hooks that automatically clean, trust, and fix merge conflicts in notebooks.

After making changes in the repo, you should run nbdev_prepare and make any necessary changes in order to pass all the tests.

Docker Containers

Official Docker containers for this project can be found here.

docker-containers's People

Contributors

alisondavey · amy-tabb · chusc123 · feynmanliang · hamelsmu · jph00 · keeran · rtmlp


docker-containers's Issues

CUDA broken?

Hi

I recently updated my Docker image to use the latest (March 2021) build and it seems to have broken my CUDA (11.3). I am able to run fastai inside Docker, but it complains that PyTorch hasn't been compiled for CUDA.

I checked whether CUDA was available, and it is no longer working with the (March 2021) build.

My previous Docker image (Dec 2020) still works fine.

There are a few changes in the new version:

  • replacing pytorch/pytorch with fastai/fastai (imports).
  • apt-get and pip removal.

I am using Amy-Tabb's Dockerfile, which builds on the fastdotai Dockerfile.

I was wondering if you could provide some advice on what may be happening.

Thanks

05_pet_breeds.ipynb: RuntimeError: stack expects each tensor to be equal size

I'm running the following Docker container:

REPOSITORY                                               TAG                             IMAGE ID
fastdotai/fastai-course                                  latest                          020ece1e8fca

The notebook notebooks/fastbook/05_pet_breeds.ipynb produces an error when running the following cell:

#hide_output
pets1 = DataBlock(blocks = (ImageBlock, CategoryBlock),
                 get_items=get_image_files, 
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
pets1.summary(path/"images")

The error is:

Setting-up type transforms pipelines
Collecting items from /root/.fastai/data/oxford-iiit-pet/images
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}

Building one sample
  Pipeline: PILBase.create
    starting from
      /root/.fastai/data/oxford-iiit-pet/images/english_cocker_spaniel_145.jpg
    applying PILBase.create gives
      PILImage mode=RGB size=391x500
  Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
    starting from
      /root/.fastai/data/oxford-iiit-pet/images/english_cocker_spaniel_145.jpg
    applying partial gives
      english_cocker_spaniel
    applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives
      TensorCategory(18)

Final sample: (PILImage mode=RGB size=391x500, TensorCategory(18))


Collecting items from /root/.fastai/data/oxford-iiit-pet/images
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
Setting up after_item: Pipeline: ToTensor
Setting up before_batch: Pipeline: 
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}

Building one batch
Applying item_tfms to the first sample:
  Pipeline: ToTensor
    starting from
      (PILImage mode=RGB size=391x500, TensorCategory(18))
    applying ToTensor gives
      (TensorImage of size 3x500x391, TensorCategory(18))

Adding the next 3 samples

No before_batch transform to apply

Collating items in a batch
Error! It's not possible to collate your items in a batch
Could not collate the 0-th members of your tuples because got the following shapes
torch.Size([3, 500, 391]),torch.Size([3, 374, 500]),torch.Size([3, 375, 500]),torch.Size([3, 279, 300])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_45/2530811607.py in <module>
      4                  splitter=RandomSplitter(seed=42),
      5                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
----> 6 pets1.summary(path/"images")

/opt/conda/lib/python3.8/site-packages/fastai/data/block.py in summary(self, source, bs, show_batch, **kwargs)
    188         why = _find_fail_collate(s)
    189         print("Make sure all parts of your samples are tensors of the same size" if why is None else why)
--> 190         raise e
    191 
    192     if len([f for f in dls.train.after_batch.fs if f.name != 'noop'])!=0:

/opt/conda/lib/python3.8/site-packages/fastai/data/block.py in summary(self, source, bs, show_batch, **kwargs)
    182     print("\nCollating items in a batch")
    183     try:
--> 184         b = dls.train.create_batch(s)
    185         b = retain_types(b, s[0] if is_listy(s) else s)
    186     except Exception as e:

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in create_batch(self, b)
    141         elif s is None:  return next(self.it)
    142         else: raise IndexError("Cannot index an iterable dataset numerically - must use `None`.")
--> 143     def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
    144     def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
    145     def to(self, device): self.device = device

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in fa_collate(t)
     48     b = t[0]
     49     return (default_collate(t) if isinstance(b, _collate_types)
---> 50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))
     52 

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in <listcomp>(.0)
     48     b = t[0]
     49     return (default_collate(t) if isinstance(b, _collate_types)
---> 50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))
     52 

/opt/conda/lib/python3.8/site-packages/fastai/data/load.py in fa_collate(t)
     47     "A replacement for PyTorch `default_collate` which maintains types and handles `Sequence`s"
     48     b = t[0]
---> 49     return (default_collate(t) if isinstance(b, _collate_types)
     50             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     51             else default_collate(t))

/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py in default_collate(batch)
     53             storage = elem.storage()._new_shared(numel)
     54             out = elem.new(storage)
---> 55         return torch.stack(batch, 0, out=out)
     56     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
     57             and elem_type.__name__ != 'string_':

/opt/conda/lib/python3.8/site-packages/fastai/torch_core.py in __torch_function__(self, func, types, args, kwargs)
    338         convert=False
    339         if _torch_handled(args, self._opt, func): convert,types = type(self),(torch.Tensor,)
--> 340         res = super().__torch_function__(func, types, args=args, kwargs=kwargs)
    341         if convert: res = convert(res)
    342         if isinstance(res, TensorBase): res.set_meta(self, as_copy=True)

/opt/conda/lib/python3.8/site-packages/torch/tensor.py in __torch_function__(cls, func, types, args, kwargs)
    960 
    961         with _C.DisableTorchFunction():
--> 962             ret = func(*args, **kwargs)
    963             return _convert(ret, cls)
    964 

RuntimeError: stack expects each tensor to be equal size, but got [3, 500, 391] at entry 0 and [3, 374, 500] at entry 1
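The collate failure happens because the DataBlock above has no item_tfms, so each image keeps its native size and torch.stack cannot batch tensors of different shapes (the book later adds a resize item transform for exactly this reason). The labelling half of the pipeline is fine, as the summary log shows; the same regex can be checked in plain Python (illustrative, using re directly rather than fastai's RegexLabeller):

```python
import re

# The get_y step extracts the breed from the file name; RegexLabeller applies
# a pattern like this one -- checked here outside fastai for clarity:
pattern = r'(.+)_\d+.jpg$'
name = 'english_cocker_spaniel_145.jpg'
label = re.match(pattern, name).group(1)
print(label)  # english_cocker_spaniel
```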

Could not find kaggle.json. Make sure it's located in /root/.kaggle. Or use the environment method.

I'm running the following Docker container:

REPOSITORY                                               TAG                             IMAGE ID
fastdotai/fastai-course                                  latest                          020ece1e8fca

The notebook notebooks/fastbook/09_tabular.ipynb produces an error when running the cell:

#hide
from fastbook import *
from kaggle import api
from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from dtreeviz.trees import *
from IPython.display import Image, display_svg, SVG

pd.options.display.max_rows = 20
pd.options.display.max_columns = 8

The error is:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
/tmp/ipykernel_126/1277492647.py in <module>
      1 #hide
      2 from fastbook import *
----> 3 from kaggle import api
      4 from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
      5 from fastai.tabular.all import *

/opt/conda/lib/python3.8/site-packages/kaggle/__init__.py in <module>
     21 
     22 api = KaggleApi(ApiClient())
---> 23 api.authenticate()

/opt/conda/lib/python3.8/site-packages/kaggle/api/kaggle_api_extended.py in authenticate(self)
    162                 config_data = self.read_config_file(config_data)
    163             else:
--> 164                 raise IOError('Could not find {}. Make sure it\'s located in'
    165                               ' {}. Or use the environment method.'.format(
    166                                   self.config_file, self.config_dir))

OSError: Could not find kaggle.json. Make sure it's located in /root/.kaggle. Or use the environment method.

course-v4 and fastbook folders inside fastcore

Hi,

In the fastdotai/fastai-course Docker image, the course-v4 and fastbook folders are located inside the fastcore directory instead of the /workspace directory.

If the decision is to keep the folders inside fastcore, then please close this issue.

If not, the issue is because of this line in the dockerfile located in fastai-build

RUN /bin/bash -c "if [[ $BUILD == 'course' ]] ; then echo \"Course Build\" && cd fastai && pip install . && cd ../fastcore && pip install . && git clone https://github.com/fastai/fastbook --depth 1 && git clone https://github.com/fastai/course-v4 --depth 1; fi"
RUN echo '#!/bin/bash\njupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --no-browser' >> run_jupyter.sh
COPY download_testdata.py ./
COPY extract.sh ./
RUN chmod u+x extract.sh
RUN chmod u+x run_jupyter.sh

After the pip install of the fastcore library, the script doesn't navigate back to the /workspace directory. I can make the change and raise a PR if you think this is an issue.

Thanks
Ravi

Please Review

@muellerzr / @jph00 can you please review this README and let me know if you have any thoughts or comments?

@giacomov please take a look and let me know your thoughts. Many thanks for your prior art which I built on top of. I put a note with a link to your name in the README.

How to use the pre-trained optimal model to predict my own dataset?

Hello, thank you for sharing your code and results. I would like to ask how to use your optimal model to predict on my own dataset. I have converted my dataset from a csv file to a pt file, but when I try to use it for prediction, I encounter an error saying the dataset cannot be input to the model. Could you please give me some guidance on how to solve this problem? Thank you very much!

fastai/fastai-course and RuntimeError: DataLoader worker (pid 47) is killed by signal

Hello, I am trying to set up a fastai learning environment on my Linux computer, which has an Nvidia 1080 Ti GPU, using the Docker containers pointed to on the fast.ai site. When I tried to bring up fastai/fastai-course and run the cell to download data files in 01_intro in the fastbook folder, it erred with the following message: "RuntimeError: DataLoader worker (pid 47) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit." Any help would be appreciated. I suspect it may be a limitation of the virtual storage size in the image. Should I use the fastdotai/fastai image and -v to a folder on the host computer to circumvent this error?

Containers cannot access network resources, causing example code to fail

When going through the fastbook, I cannot execute commands because of connection issues.

The command I used to launch the docker container:

docker run --gpus all -p 8888:8888 fastdotai/fastai-course ./run_jupyter.sh


ERRORS:

# 01_intro.ipynb
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

causes this error:

WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f52023b2130>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/fastbook/
ERROR: Could not find a version that satisfies the requirement fastbook (from versions: none)
ERROR: No matching distribution found for fastbook

further down

#id first_training
#caption Results from the first training
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

causes

ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /fast-ai-imageclas/oxford-iiit-pet.tgz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fd3306788b0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))

fastdotai versus fastai username at Docker Hub

I'm playing around with these Docker images, by mistake I pulled fastai/fastai instead of fastdotai/fastai .

There are links to both from this repo: to fastdotai in the README, and, on the right, to fastai; but fastdotai occurs more often in the README.

Which is the preferred username to pull from? Thanks.

https://hub.docker.com/r/fastdotai/fastai-course:latest not pulling fastbook

When I pull the docker image from:

https://hub.docker.com/r/fastdotai/fastai-course:latest

The fastbook directory and all its contents are missing. Am I misunderstanding that it should have that directory?

Edit: Please disregard. I used "sudo docker pull fastdotai/fastai-course" to pull the Docker image, but then to start the container I copy-pasted "docker run --gpus all -p 8888:8888 fastdotai/fastai ./run_jupyter.sh", which pulled the non-course image and started that, which explains why there was no course material. Apologies!

Jekyll image uses Ruby 3.0, which is not compatible with Jekyll

I was getting the following errors when using https://github.com/fastai/nbdev/blob/master/docker-compose.yml

jekyll_1    | Thank you for installing html-pipeline!
jekyll_1    | You must bundle Filter gem dependencies.
jekyll_1    | See html-pipeline README.md for more details.
jekyll_1    | https://github.com/jch/html-pipeline#dependencies
jekyll_1    | -------------------------------------------------
jekyll_1    | Configuration file: /data/docs/_config.yml
jekyll_1    |             Source: /data/docs
jekyll_1    |        Destination: /data/docs/_site
jekyll_1    |  Incremental build: disabled. Enable with --incremental
jekyll_1    |       Generating... 
jekyll_1    |       Remote Theme: Using theme fastai/nbdev-jekyll-theme
jekyll_1    |    GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data.
jekyll_1    |                     done in 1.239 seconds.
jekyll_1    | jekyll 3.9.0 | Error:  no implicit conversion of Hash into Integer
jekyll_1    | /var/lib/gems/3.0.0/gems/pathutil-0.16.2/lib/pathutil.rb:502:in `read': no implicit conversion of Hash into Integer (TypeError)
jekyll_1    |   from /var/lib/gems/3.0.0/gems/pathutil-0.16.2/lib/pathutil.rb:502:in `read'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/utils/platforms.rb:75:in `proc_version'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/utils/platforms.rb:40:in `bash_on_windows?'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/build.rb:77:in `watch'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/build.rb:43:in `process'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `block in start'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `each'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:93:in `start'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/lib/jekyll/commands/serve.rb:75:in `block (2 levels) in init_with_program'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `block in execute'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `each'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/command.rb:220:in `execute'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary/program.rb:42:in `go'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/mercenary-0.3.6/lib/mercenary.rb:19:in `program'
jekyll_1    |   from /var/lib/gems/3.0.0/gems/jekyll-3.9.0/exe/jekyll:15:in `<top (required)>'
jekyll_1    |   from /usr/local/bin/jekyll:25:in `load'
jekyll_1    |   from /usr/local/bin/jekyll:25:in `<main>'

Based on this post, Ruby 3 is not compatible with Jekyll.

https://talk.jekyllrb.com/t/error-no-implicit-conversion-of-hash-into-integer/5890

Exact version of Ruby that fastai/jekyll was using:

 docker run -it --rm  fastai/jekyll:latest
root@fddbcff7d3ad:~# ruby --version
ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-linux-gnu]
root@fddbcff7d3ad:~#

fastai/fastai:latest, apt-get update, GitHub PPA missing Release file for Ubuntu Jammy

Hello!

I'm using the Docker container fastai/fastai:latest for a project I'm working on. I need to install the onnxruntime-gpu and opencv-python-headless packages using pip. I thought I needed some extra dependencies from apt, so I attempted to update the package list before installing, but this threw an error.

Dockerfile:

FROM fastai/fastai:latest

RUN apt-get update

ENTRYPOINT ["/bin/bash", "-c"]

This fails as so:

Terminal Output
user@computer:/mnt/c/Users/user/Desktop/test$ docker build -t test .
[+] Building 5.8s (6/6) FINISHED
 => [internal] load build definition from Dockerfile                                                              0.0s
 => => transferring dockerfile: 204B                                                                              0.0s
 => [internal] load .dockerignore                                                                                 0.0s
 => => transferring context: 2B                                                                                   0.0s
 => [internal] load metadata for docker.io/fastai/fastai:latest                                                   0.7s
 => CACHED [1/3] FROM docker.io/fastai/fastai:latest@sha256:2cf8a1564a65c14dd90670e4a5796cb24352f9d27676932dc2c3  0.0s
 => [2/3] WORKDIR /app                                                                                            0.0s
 => ERROR [3/3] RUN apt-get update                                                                                5.1s
------
 > [3/3] RUN apt-get update:
#5 0.445 Ign:1 https://cli.github.com/packages jammy InRelease
#5 0.464 Err:2 https://cli.github.com/packages jammy Release
#5 0.464   404  Not Found [IP: 185.199.111.153 443]
#5 0.481 Get:3 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
#5 0.603 Get:4 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
#5 0.750 Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
#5 0.814 Get:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
#5 0.877 Get:7 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
#5 0.936 Get:8 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB]
#5 1.147 Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [322 kB]
#5 1.151 Get:10 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
#5 2.293 Get:11 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [4644 B]
#5 2.303 Get:12 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [132 kB]
#5 2.628 Get:13 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
#5 2.640 Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [650 kB]
#5 2.695 Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [265 kB]
#5 2.717 Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [363 kB]
#5 2.748 Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [7791 B]
#5 2.749 Get:18 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [6909 B]
#5 3.658 Get:19 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [340 kB]
#5 4.567 Reading package lists...
#5 5.019 E: The repository 'https://cli.github.com/packages jammy Release' does not have a Release file.
------
executor failed running [/bin/sh -c apt-get update]: exit code: 100

The build fails because the GitHub CLI PPA has no Release file for Ubuntu 22.04 LTS ‘Jammy Jellyfish’. In the Dockerfile below, I remove this repository before running apt-get update, which fixes the build error.

Dockerfile:

FROM fastai/fastai:latest

RUN add-apt-repository -r https://cli.github.com/packages
RUN apt-get update

ENTRYPOINT ["/bin/bash", "-c"]

I had one of those moments where the original problem was fixed while I worked on this nested problem. I thought I should bring it to your attention nonetheless.

I'd also like to say: I've been using fastai for over a year now. Many thanks for developing such a great module along with fastbook and nbdev. They've all been great assets as I've learned about neural networks and modern software development.

How to get a fastai Docker image with PyTorch 1.8

Hi,

I tried pulling fastdotai/fastai:2021-02-11, which gave torch 1.7.1, while the latest tag gave torch 1.10.0.

Are there any tags that can be used to get version 1.8 specifically?

Command:
docker run --gpus 1 fastdotai/fastai:2021-02-11 python -c "import torch;print(torch.__version__)"

Thank you!

Graphviz and Course v4 Notebooks

I'm using these containers to follow along with the v4 course and had the following issues:

  1. Graphviz is not included. If you use the v4 course notebooks, you get an error telling you to install graphviz. Please consider adding it to the image; currently I have to open a terminal in Jupyter and type conda install graphviz every time to get the notebooks to work.
  2. Previous course notebooks are included, not current ones. This container should include the commented notebooks from https://github.com/fastai/fastbook and the uncommented notebooks from https://github.com/fastai/course-v4 instead of the course v3 notebooks.

fastai/fastai image no longer comes with fastai installed

Ironically, the current fastai/fastai Docker image doesn't come with the fastai library. The same is probably true for the fastai/codespaces image, but I haven't checked it.

This is due to lines introduced in commits ec9a4cb and a6ac6ff, where script.sh attempts to install a package called neptune using pip:

pip install ipywidgets fastai neptune wandb pydicom tensorboard captum

PyPI doesn't contain a package called neptune (maybe the intention was neptune-cli?), so none of the packages are installed.

fastai/fastai:latest, CUDA & cuDNN Unavailable

Hello (again). I'm using the fastai:latest container for some neural network inferencing. (I see it was updated yesterday.) I seem to be unable to access my GPU through the container. I'm on a laptop with an RTX 3070 Max-Q and a Ryzen 9 5900HS, with Docker running on WSL2 Debian. Here is a sample Dockerfile:

FROM fastai/fastai:latest

RUN pip install --no-cache-dir --upgrade pip \
 && pip install --no-cache-dir onnxruntime-gpu opencv-python-headless

ENTRYPOINT ["/bin/bash", "-c"]

I test this using nvidia-smi and python -c "import torch; print(torch.cuda.is_available(), torch.backends.cudnn.is_available())". The Nvidia tool properly detects my hardware and the correct versions of the drivers as they are on Windows 11, but the Python statements both return False. (The pip installs may be omitted, producing the same result.)

The following containers work without modification:

docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker run -it --rm --gpus all --entrypoint /bin/bash pytorch/torchserve:latest-gpu

These show the same nvidia-smi output with the Python statements returning True.

After some searching yesterday, I figure that my fastai container has duplicate versions of some Nvidia driver(s). I will update this if I find a solution. Any suggestions or tips are appreciated.
