
fastai / course20


Deep Learning for Coders, 2020, the website

Home Page: https://book.fast.ai

License: Apache License 2.0

Makefile 0.29% Python 3.31% Jupyter Notebook 83.79% HTML 0.90% CSS 0.75% JavaScript 10.85% Shell 0.11%
deep-learning machine-learning teaching python jupyter-notebook

course20's Introduction

Practical Deep Learning for Coders

Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD - the book and the course

Welcome to Practical Deep Learning for Coders. This website covers the book and the 2020 version of the course, which are designed to work closely together. If you haven't yet got the book, you can buy it here. It's also freely available as interactive Jupyter Notebooks; read on to learn how to access them.

How do I get started?

If you're ready to dive in right now, here's how to get started. If you want to know more about this course, read the next sections, and then come back here.

To watch the videos, click on the Lessons section in the navigation sidebar. The lessons all have searchable transcripts; click "Transcript Search" in the top right panel to search for a word or phrase, then click a result to jump straight to the point in the video where it appears in the transcript. The videos are all captioned and also translated into Chinese (简体中文) and Spanish; while watching a video, click the "CC" button to turn captions on and off, and the settings button to change the language.

Each video covers a chapter from the book. The entirety of every chapter of the book is available as an interactive Jupyter Notebook. Jupyter Notebook is the most popular tool for doing data science in Python, for good reason. It is powerful, flexible, and easy to use. We think you will love it! Since the most important thing for learning deep learning is writing code and experimenting, it's important that you have a great platform for experimenting with code.

To get started, we recommend using a Jupyter Server from one of the recommended online platforms (click the links for instructions on how to use these for the course):

  • Gradient: Unlike Colab, this is a "real" Jupyter Notebook, so everything in the course works. It also provides space to save your notebooks and models. However, the free servers sometimes get overloaded, and when that happens it's impossible to connect.
  • Colab: A popular free service from Google. Google have changed the notebook platform quite a lot, so keyboard shortcuts are different, and not everything works (e.g. much of chapter 2 doesn't work because Colab doesn't support creating web app GUIs).

If you are interested in the experience of running a full Linux server, you can consider Google Cloud (an extremely popular and very reliable service, but the fastest GPUs are far more expensive). We strongly suggest using one of the recommended online platforms for running the notebooks, and not using your own computer, unless you're very experienced with Linux system administration and handling GPU drivers, CUDA, and so forth.

If you need help, there's a wonderful online community ready to help you at forums.fast.ai. Before asking a question on the forums, search carefully to see if your question has been answered before. (The forum system won't let you post until you've spent a few minutes on the site reading existing topics.) One bit that many students find tricky is getting signed up for the Bing API for the image download task in lesson 2; here's a helpful forum post explaining how to get the Bing API key you'll need for downloading images.

Is this course for me?

Thank you for letting us join you on your deep learning journey, however far along that you may be! Previous fast.ai courses have been studied by hundreds of thousands of students, from all walks of life, from all parts of the world. Many students have told us how they've become multiple gold medal winners of international machine learning competitions, received offers from top companies, and had research papers published. For instance, Isaac Dimitrovsky told us that he had "been playing around with ML for a couple of years without really grokking it... [then] went through the fast.ai part 1 course late last year, and it clicked for me". He went on to achieve first place in the prestigious international RA2-DREAM Challenge competition! He developed a multistage deep learning method for scoring radiographic hand and foot joint damage in rheumatoid arthritis, taking advantage of the fastai library.

It doesn't matter if you don't come from a technical or a mathematical background (though it's okay if you do too!); we wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course. The first three chapters have been explicitly written in a way that will allow executives, product managers, etc. to understand the most important things they'll need to know about deep learning -- if that's you, just skip over the code in those sections.

Deep learning is a computer technique to extract and transform data, with use cases ranging from human speech recognition to animal imagery classification, by using multiple layers of neural networks. A lot of people assume that you need all kinds of hard-to-find stuff to get great results with deep learning, but as you'll see in this course, those people are wrong. Here are a few things you absolutely don't need to do world-class deep learning:

Myth (don't need)            Truth
Lots of math                 Just high school math is sufficient
Lots of data                 We've seen record-breaking results with <50 items of data
Lots of expensive computers  You can get what you need for state-of-the-art work for free

Deep learning has power, flexibility, and simplicity. That's why we believe it should be applied across many disciplines. These include the social and physical sciences, the arts, medicine, finance, scientific research, and many more. Here's a list of some of the thousands of tasks in different areas at which deep learning, or methods heavily using deep learning, is now the best in the world:

  • Natural language processing (NLP) Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept
  • Computer vision Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles
  • Medicine Finding anomalies in radiology images, including CT, MRI, and X-ray images; counting features in pathology slides; measuring features in ultrasounds; diagnosing diabetic retinopathy
  • Biology Folding proteins; classifying proteins; many genomics tasks, such as tumor-normal sequencing and classifying clinically actionable genetic mutations; cell classification; analyzing protein/protein interactions
  • Image generation Colorizing images; increasing image resolution; removing noise from images; converting images to art in the style of famous artists
  • Recommendation systems Web search; product recommendations; home page layout
  • Playing games Chess, Go, most Atari video games, and many real-time strategy games
  • Robotics Handling objects that are challenging to locate (e.g., transparent, shiny, lacking texture) or hard to pick up
  • Other applications Financial and logistical forecasting, text to speech, and much more...

Who we are

We are Sylvain Gugger and Jeremy Howard, your guides on this journey. We're the co-authors of fastai, the software that you'll be using throughout this course.

Jeremy has been using and teaching machine learning for around 30 years. He started using neural networks 25 years ago. During this time, he has led many companies and projects that have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of President and Chief Scientist of the world's largest machine learning community, Kaggle. He is the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization that built the course on which this one is based.

Sylvain has written 10 math textbooks, covering the entire advanced French maths curriculum! He is now a researcher at Hugging Face, and was previously a researcher at fast.ai.

We care a lot about teaching. In this course, we start by showing how to use a complete, working, very usable, state-of-the-art deep learning network to solve real-world problems, using simple, expressive tools. Then we gradually dig deeper and deeper into understanding how those tools are made, and how the tools that make those tools are made, and so on… We always teach through examples. We ensure that there is a context and a purpose that you can understand intuitively, rather than starting with algebraic symbol manipulation.

The software you will be using

In this course, you'll be using PyTorch and fastai.

We've completed hundreds of machine learning projects using dozens of different packages, and many different programming languages. At fast.ai, we have written courses using most of the main deep learning and machine learning packages used today. We spent over a thousand hours testing PyTorch before deciding that we would use it for future courses, software development, and research. PyTorch is now the world's fastest-growing deep learning library and is already used for most research papers at top conferences.

PyTorch works best as a low-level foundation library, providing the basic operations for higher-level functionality. The fastai library is the most popular library for adding this higher-level functionality on top of PyTorch. In this course, as we go deeper and deeper into the foundations of deep learning, we will also go deeper and deeper into the layers of fastai. This course covers version 2 of the fastai library, which is a from-scratch rewrite providing many unique features.

What you will learn

After finishing this course you will know:

  • How to train models that achieve state-of-the-art results in:
    • Computer vision, including image classification (e.g., classifying pet photos by breed), and image localization and detection (e.g., finding where the animals in an image are)
    • Natural language processing (NLP), including document classification (e.g., movie review sentiment analysis) and language modeling
    • Tabular data (e.g., sales prediction) with categorical data, continuous data, and mixed data, including time series
    • Collaborative filtering (e.g., movie recommendation)
  • How to turn your models into web applications, and deploy them
  • Why and how deep learning models work, and how to use that knowledge to improve the accuracy, speed, and reliability of your models
  • The latest deep learning techniques that really matter in practice
  • How to implement stochastic gradient descent and a complete training loop from scratch
  • How to think about the ethical implications of your work, to help ensure that you're making the world a better place and that your work isn't misused for harm
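To give a flavor of the from-scratch training loop mentioned above, here is a minimal, generic sketch of stochastic gradient descent in plain Python. It is an illustration only (fitting a single weight on made-up data with a squared-error loss), not the fastai or course implementation, which builds this up with PyTorch tensors and autograd:

```python
# Minimal SGD sketch: fit w so that the prediction w*x matches y, by
# stepping against the gradient of a squared-error loss, one example
# at a time (that per-example update is the "stochastic" part).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs; true w is 3

w = 0.0    # initialization
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of (w*x - y)**2 w.r.t. w
        w -= lr * grad             # the parameter update step

print(round(w, 3))  # converges close to 3.0
```

The same three ingredients (predictions, a loss gradient, and a small step against it) underlie the full training loops you will write in the course.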

Here are some of the techniques covered (don't worry if none of these words mean anything to you yet -- you'll learn them all soon):

  • Random forests and gradient boosting
  • Affine functions and nonlinearities
  • Parameters and activations
  • Random initialization and transfer learning
  • SGD, Momentum, Adam, and other optimizers
  • Convolutions
  • Batch normalization
  • Dropout
  • Data augmentation
  • Weight decay
  • Image classification and regression
  • Entity and word embeddings
  • Recurrent neural networks (RNNs)
  • Segmentation
  • And much more
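As a taste of the first few items in the list, here is a tiny pure-Python illustration (with made-up numbers, not course code) of how an affine function composed with a nonlinearity forms one unit of a neural network, producing an "activation" from inputs and "parameters":

```python
# One unit of a neural net layer: an affine function (weighted sum plus
# bias) followed by a nonlinearity such as ReLU.
def affine(xs, weights, bias):
    return sum(x * w for x, w in zip(xs, weights)) + bias

def relu(a):
    return max(0.0, a)  # the nonlinearity: negative values become zero

x = [1.0, -2.0, 0.5]  # input activations
w = [0.3, 0.8, -0.4]  # parameters (weights)
b = 0.1               # parameter (bias)

activation = relu(affine(x, w, b))  # 0.3 - 1.6 - 0.2 + 0.1 = -1.4, ReLU -> 0.0
```

Stacking many such units, and many such layers, with the parameters found by gradient descent, is what the course means by a deep learning model.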

course20's People

Contributors

abeomor, bxbrenden, dangro, datacrunchio, dependabot[bot], ezeeetm, gopitk, hamelsmu, isaac-flath, janvdp, jchapman, joe-bender, joedockrill, joshua-paperspace, jph00, kerrickstaley, mathemage, micstn, mone27, nikhilmaddirala, prosoitos, rpasricha, svishnu88, theisshe, v-ahuja, velaia, yubozhao, zerotosingularity, zzweig


course20's Issues

Instructions in CONTRIBUTING.md state to run nbdev_install_git_hooks as a first step, but it is not found

The instructions in CONTRIBUTING.md state:

Before anything else, please install the git hooks that run automatic scripts during each commit and merge to strip the notebooks of superfluous metadata (and avoid merge conflicts). After cloning the repository, run the following command inside it:

The command in question is nbdev_install_git_hooks. It is the only instance of this string in the repository, and I cannot find any script or file with that name. I also have found no issue mentioning this. Am I reading the instructions incorrectly?

Resources mentioned in the book

As I discussed with @jph00 on Discord, there are several resources mentioned in the book that are supposed to be added to the book website (originally said to be https://book.fast.ai in the book; now it's the course link).

There are several mentions of resources that would be there on the website. Some of the things I found by searching the fastbook repo include:

  • solutions to the questionnaires
  • bonus chapter on transformers
  • bonus chapter on generative image models
  • recommended tutorials that will be present for each chapter

There may be more mentions of resources, and I can update the issue if I find any additional ones.

Add docs on how to use iko.ai to go through the course

iko.ai

There are several services that people can use to go through the fast.ai course, from notebook servers to execute notebooks, to Linux servers that give more control, to model deployment services.

This issue aims to add another option, https://iko.ai, that can be immensely helpful for those taking the course.

iko.ai offers real-time collaborative notebooks to train, track, package, deploy, and monitor your machine learning models. It lowers the barrier to entry and gives you leverage to do things that typically require a team of experts.

Here's a list of features that describe the platform. The guide on how to use it for the fast.ai course will be in a pull request. Feedback is welcome.

No-setup notebooks on your own Kubernetes clusters:

iko-cluster-choice-notebook

You can start a fresh notebook server on your Kubernetes clusters from several Docker images that just work. You don't need to troubleshoot environments, deal with NVIDIA drivers, or lose hours or days fixing your laptop, VMs, or broken environments. People without strong systems skills become autonomous instead of relying on others to fix their systems; for those with the necessary skills, it is work they no longer need to do.

Real-time collaborative notebooks:

rtc-attempt-2

You can share a notebook with other users and edit it together, seeing each other's cursors and selections. This is ideal for pair programming, troubleshooting, and tutoring, and a must when working remotely. We even use it for our meetings: each team member adds agenda items to the same notebook before the call, on their own time. During the call, we go over them and edit, answer questions, and add snippets of code to reproduce bugs or implement proofs of concept.

Long-running notebooks that don't lose output, executed in the background:

async-notebook-on-cluster

Regular notebooks lose computation results when there is a disconnection, which often happens during long-running training jobs. You can schedule your notebooks in a fire-and-forget fashion, right from the notebook interface, without interrupting your flow, and watch their output stream from other devices, even your phone. Choose the Docker image, choose the Kubernetes cluster, and just fire and forget. You can even save the resulting notebook as another notebook, but even if you do not, the notebooks have checkpoints (a feature we contributed back to the Jupyter community: jupyterlab/jupyterlab#5923).

AppBooks:

publish

You can run, or have others run, your notebooks with different values without changing your code. Click "Publish" and you get a parametrized version without editing cell metadata or adding tags. One use case is having a subject-matter expert tweak some domain-specific parameters to train a model without being overwhelmed by the code or the notebook interface. It becomes a form on top of the notebook, exposing just the parameters you want.

Bring your own Kubernetes clusters:

clusters

You can use your own existing Kubernetes clusters from Google, Amazon, Microsoft, or DigitalOcean on iko.ai, which will use them for your notebook servers, scheduled notebooks, and other workloads. You don't have to worry about metered billing in yet another service. You can control access right from your cloud provider's console or interface, and grant your own customers or users access to your clusters. We also automatically shut down inactive notebooks that have no computation running, to save on cloud spend.

Bring your own S3 buckets:

mounting-external-s3-buckets

You do not need to upload data to iko.ai to start working: you can add an external S3 bucket and access it as if it were a local filesystem, as if your files were on disk. This keeps your code free of S3-specific calls (boto3, tensorflow_io) and reduces friction. It also ensures people work on the same data, avoiding the common errors that come from local copies changed by accident.

Automatic Experiment Tracking:

You can't improve what you cannot measure and track. Model training is an iterative process involving varying approaches, parameters, and hyperparameters, which yield models with different metrics. You can't keep all of this in your head or rely on ad-hoc notes. Your experiments are automatically tracked on iko.ai; everything is saved. This also makes collaboration easier, because team members can compare results and choose the best approach.

One-click model deployment:

deploy

You can deploy your model by clicking a button, instead of relying on a colleague to do it for you. You get a REST endpoint to invoke your model with requests, which makes it easy for developers to use these models without knowing anything about machine learning. You also get a convenience page to upload a CSV and get predictions, or to enter JSON values and get predictions from that model. This is often needed by non-technical users who want a graphical interface on top of the model. The page also contains all the information about how that model was produced, including which experiment produced it, so you know which parameters and metrics were involved.

One-click model packaging:

model_docker_image

You don't need to worry about sending pickles and weights around, or about using ssh and scp. You can click a button and iko.ai will package your model in a Docker image and push it to a registry of your choice. You can then take your model anywhere. If you have a client for which you do on-premises work, you can simply docker pull your image there, start the container, and expose the model to other internal systems.

Model monitoring

model-monitoring

For each model you deploy, you get a live dashboard showing how it is behaving: how many requests, how many errors, and so on. This lets you become aware as soon as something is wrong and fix it. Right now, we're adding data logging so you can see data distributions and outliers. We're also adding alerts that fire when something changes beyond a threshold, and we're exposing model grading, so you can score your model when it gets something wrong or right and visualize its decaying performance. Automatic retraining is on the short-term roadmap.

Integrations with Streamlit, Voilà, and Superset

You can create dashboards and applications right from the notebook interface, as opposed to having someone provision a virtual machine on GCP, create the environment, push the code, start a web server, add authentication, and remember the IP address for a demo. You can also create Superset dashboards to work on your data.

APIs everywhere

You can use most of these features with an HTTP request, as opposed to going through the web interface. This is really important for instrumentation and integrations. We're also adding webhooks soon, so iko.ai can emit and receive events to and from other external systems. One application of this is Slack alerts, for example, or automatic retraining based on events you choose.

FastaiSageMakerStack template (https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml) fails updating conda due to conda env update dependencies

Issue

Recently, a few people have had problems getting set up on AWS SageMaker, due to a bug in the FastaiSageMakerStack template's conda environment update step. The update leaves the conda environment in an inconsistent state, which manifests as the fastai kernel being unavailable for running notebooks.

Fixing this issue may help prevent new students from abandoning the valuable AWS SageMaker platform for running their fastai course notebooks.

The issue and the solution history is elaborated here:

https://forums.fast.ai/t/sagemaker-notebook-deployment-problem-no-fastai-kernel/88806/10

The current stack template (template URL):

https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml

The list of CloudFormation stack template links:

https://course.fast.ai/start_sagemaker#creating-the-sagemaker-notebook-instance

A quick-create link using the stack template, which I modified as shown below to resolve the issue:

https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?filter=active&templateURL=https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml&stackName=FastaiSageMakerStack

Solution

Update the stack template to the version below (it adds a 'conda update --force-reinstall conda -y' line after echo "Updating conda" in the OnCreate script, and removes the conda-update section from the OnStart script):

Parameters:
  InstanceType:
    Type: String
    Default: ml.p2.xlarge
    AllowedValues:
      - ml.p3.2xlarge
      - ml.p2.xlarge
    Description: Enter the SageMaker Notebook instance type
  VolumeSize:
    Type: Number
    Default: 50
    Description: Enter the size of the EBS volume attached to the notebook instance
    MaxValue: 17592
    MinValue: 5
Resources:
  Fastai2SagemakerNotebookfastaiv4NotebookRoleA75B4C74:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: sagemaker.amazonaws.com
        Version: "2012-10-17"
      ManagedPolicyArns:
        - Fn::Join:
            - ""
            - - "arn:"
              - Ref: AWS::Partition
              - :iam::aws:policy/AmazonSageMakerFullAccess
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4NotebookRole/Resource
  Fastai2SagemakerNotebookfastaiv4LifecycleConfigD72E2247:
    Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
    Properties:
      NotebookInstanceLifecycleConfigName: fastai-v4LifecycleConfig
      OnCreate:
        - Content:
            Fn::Base64: >-
              #!/bin/bash


              set -e


              echo "Starting on Create script"


              sudo -i -u ec2-user bash <<EOF

              touch /home/ec2-user/SageMaker/.create-notebook

              EOF


              cat > /home/ec2-user/SageMaker/.fastai-install.sh <<\EOF

              #!/bin/bash

              set -e

              echo "Creating dirs and symlinks"

              mkdir -p /home/ec2-user/SageMaker/.cache

              mkdir -p /home/ec2-user/SageMaker/.fastai

              [ ! -L "/home/ec2-user/.cache" ] && ln -s /home/ec2-user/SageMaker/.cache /home/ec2-user/.cache

              [ ! -L "/home/ec2-user/.fastai" ] && ln -s /home/ec2-user/SageMaker/.fastai /home/ec2-user/.fastai


              echo "Updating conda"

              conda update --force-reinstall conda -y
            
              conda update -n base -c defaults conda -y

              conda update --all -y

              echo "Starting conda create command for fastai env"

              conda create -mqyp /home/ec2-user/SageMaker/.env/fastai python=3.6

              echo "Activate fastai conda env"

              conda init bash

              source ~/.bashrc

              conda activate /home/ec2-user/SageMaker/.env/fastai

              echo "Install ipython kernel and widgets"

              conda install ipywidgets ipykernel -y

              echo "Installing fastai lib"

              pip install -r /home/ec2-user/SageMaker/fastbook/requirements.txt

              pip install fastbook sagemaker

              echo "Installing Jupyter kernel for fastai"

              python -m ipykernel install --name 'fastai' --user

              echo "Finished installing fastai conda env"

              echo "Install Jupyter nbextensions"

              conda activate JupyterSystemEnv

              pip install jupyter_contrib_nbextensions

              jupyter contrib nbextensions install --user

              echo "Restarting jupyter notebook server"

              pkill -f jupyter-notebook

              rm /home/ec2-user/SageMaker/.create-notebook

              echo "Exiting install script"

              EOF


              chown ec2-user:ec2-user /home/ec2-user/SageMaker/.fastai-install.sh

              chmod 755 /home/ec2-user/SageMaker/.fastai-install.sh


              sudo -i -u ec2-user bash <<EOF

              nohup /home/ec2-user/SageMaker/.fastai-install.sh &

              EOF


              echo "Finishing on Create script"
      OnStart:
        - Content:
            Fn::Base64: >-
              #!/bin/bash


              set -e


              echo "Starting on Start script"


              sudo -i -u ec2-user bash << EOF

              if [[ -f /home/ec2-user/SageMaker/.create-notebook ]]; then
                  echo "Skipping as currently installing conda env"
              else
                  # create symlinks to EBS volume
                  echo "Creating symlinks"
                  ln -s /home/ec2-user/SageMaker/.fastai /home/ec2-user/.fastai
                  echo "Updating conda skipped in the fixed FastaiSageMakerStack template"
                  echo "Activate fastai conda env"
                  conda init bash
                  source ~/.bashrc
                  conda activate /home/ec2-user/SageMaker/.env/fastai
                  echo "Updating fastai packages"
                  pip install fastai fastcore sagemaker --upgrade
                  echo "Installing Jupyter kernel"
                  python -m ipykernel install --name 'fastai' --user
                  echo "Install Jupyter nbextensions"
                  conda activate JupyterSystemEnv
                  pip install jupyter_contrib_nbextensions
                  jupyter contrib nbextensions install --user
                  echo "Restarting jupyter notebook server"
                  pkill -f jupyter-notebook
                  echo "Finished setting up Jupyter kernel"
              fi

              EOF


              echo "Finishing on Start script"
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4LifecycleConfig
  Fastai2SagemakerNotebookfastaiv4NotebookInstance7C46E7E0:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType:
        Ref: InstanceType
      RoleArn:
        Fn::GetAtt:
          - Fastai2SagemakerNotebookfastaiv4NotebookRoleA75B4C74
          - Arn
      DefaultCodeRepository: https://github.com/fastai/fastbook
      LifecycleConfigName: fastai-v4LifecycleConfig
      NotebookInstanceName: fastai-v4
      VolumeSizeInGB:
        Ref: VolumeSize
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4NotebookInstance
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=1.60.0,@aws-cdk/aws-iam=1.60.0,@aws-cdk/aws-sagemaker=1.60.0,@aws-cdk/cloud-assembly-schema=1.60.0,@aws-cdk/core=1.60.0,@aws-cdk/cx-api=1.60.0,@aws-cdk/region-info=1.60.0,jsii-runtime=node.js/v14.8.0
    Condition: CDKMetadataAvailable
Conditions:
  CDKMetadataAvailable:
    Fn::Or:
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ca-central-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-northwest-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-central-1
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-2
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-3
          - Fn::Equals:
              - Ref: AWS::Region
              - me-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - sa-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-2
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-2

Paperspace Gradient instructions seem incomplete - Unable to use notebooks

I set up the environment to use Paperspace Gradient using the instructions in https://course.fast.ai/start_gradient.

However, when attempting to run the first notebook (clean/01_intro), the very first cell throws the following error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-2-2b820b2b946f> in <module>
      1 #hide
      2 get_ipython().system('pip install -Uqq fastbook')
----> 3 import fastbook
      4 fastbook.setup_book()

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastbook/__init__.py in <module>
     15 except ModuleNotFoundError:
     16     warn("Missing Azure SDK - please run `pip install azure-cognitiveservices-search-imagesearch`")
---> 17 try: from nbdev.showdoc import *
     18 except ModuleNotFoundError: warn("Missing `nbdev` - please install it")
     19 try:

/opt/conda/envs/fastai/lib/python3.8/site-packages/nbdev/__init__.py in <module>
      5 if IN_IPYTHON:
      6     from .flags import *
----> 7     from .showdoc import show_doc
      8     #from .export import notebook2script

/opt/conda/envs/fastai/lib/python3.8/site-packages/nbdev/showdoc.py in <module>
     12 from nbconvert import HTMLExporter
     13 
---> 14 if IN_NOTEBOOK:
     15     from IPython.display import Markdown,display
     16     from IPython.core import page

NameError: name 'IN_NOTEBOOK' is not defined
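The traceback shows nbdev's showdoc.py using the name IN_NOTEBOOK without it being available in that module's namespace, which suggests a version mismatch between fastbook and nbdev. A minimal reproduction of the same failure pattern:

```python
# The failing pattern: a name is referenced without being imported into the
# current namespace, so Python raises NameError when the line runs.
def check_flag():
    if IN_NOTEBOOK:  # never defined here, mirroring the broken import
        return True

try:
    check_flag()
    error_message = ''
except NameError as e:
    error_message = str(e)
```

A common workaround for mismatches like this is upgrading both packages together, e.g. `pip install -U fastbook nbdev` (hedged: the exact versions involved may have changed since this report).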

image_cat() not defined error

Hi,

What is image_cat() in the following code (given in lesson 1)?

img = PILImage.create(image_cat())
img.to_thumb(192)

It gives following error after running it:

NameError                                 Traceback (most recent call last)
in
----> 1 img = PILImage.create(image_cat())
      2 img.to_thumb(192)

NameError: name 'image_cat' is not defined
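As far as I can tell, `image_cat` is a small helper exported by the fastbook package that returns the path to a bundled example cat image, so the name is undefined when the fastbook import failed. A hedged stand-in, assuming any local image path (the path below is hypothetical, not the package's copy):

```python
from pathlib import Path

def image_cat():
    # Stand-in for fastbook's helper: return a path to an example cat image.
    # 'images/cat.jpg' is a hypothetical local file.
    return Path('images/cat.jpg')
```

With fastbook installed correctly, `from fastbook import *` should make the real `image_cat` available.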

A few mistakes in the readme.md text

  1. Under Who we are, at the end of the 2nd paragraph it is written:
    "He is the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization that built the course this course is based on."
    Was this intentional? It feels like a mistake to me.

  2. Under Who we are, at the last paragraph:
    " We always teaching through examples."

Add images to fastbook

Some images are opened directly from the repo. These should be added to fastbook and made available using attributes.
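One way to implement the suggestion, sketched under assumptions (the directory name and helper below are hypothetical, not fastbook's actual API):

```python
from pathlib import Path

# Hypothetical location of images bundled inside the package rather than
# loaded directly from the repo.
IMAGE_DIR = Path('images')

def image_path(name):
    """Return the path to a bundled image, exposed as a package attribute."""
    return IMAGE_DIR / name
```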

FastaiSageMakerStack Template does not work

Hi,

The instructions in https://course.fast.ai/start_sagemaker do not work. The file https://s3-eu-west-1.amazonaws.com/mmcclean-public-files/sagemaker-fastai-notebook/sagemaker-cfn-course-v4.yaml returns AccessDenied or does not exist. I have tried to apply the fixes suggested in #60 to the original template (https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml), but nothing worked: either the fastai kernel was not created, or I was not able to connect to it.

Thanks!

Cheers,

Peter

Google Colab now supporting Chapter 2 widgets

I remember from last year's course that the widgets you use in chapter 2 (whatever chapter it was then) did not work under Google Colab. They now do, so you might want to take a look for yourselves.

Gradient how-to is no longer current

https://course.fast.ai/start_gradient describes the process to set up a Paperspace Gradient instance. However, the Gradient interface has most likely changed since the instructions were written, so they no longer match. Moreover, it looks like free access to the terminal is gone; it now requires the "Pro" plan at $8/month. You might want to update the instructions.

Bracket Missing line 45

A bracket is missing in line 45: def search_images_bing(key,earch_images_bing(key, term, min_sz=128, max_images=150):
I think it is causing an error.
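For reference, the well-bracketed definition presumably intended there (signature as quoted in the report; the body below is a stub, not fastbook's implementation):

```python
def search_images_bing(key, term, min_sz=128, max_images=150):
    # Stub with the corrected, fully bracketed signature; the real helper
    # calls the Bing Image Search API and returns result objects.
    return []
```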

Unclear text

The first three chapters have been explicitly written in a way that will allow executives, product managers, etc. to understand the most important things they'll need to know about deep learning -- if that's you, just skip over the code in those sections.

Shouldn't this say, "if that's not you just skip over..." or better "if you're an experienced coder just skip over..."?

Gradient instructions don't update the course materials

I went through the instructions for using Paperspace Gradient and set up notebooks etc as required.

Currently the instructions have you update the fastbook code with a git pull command in the terminal.

This fails to update the course-v4 files, which are in a separate folder. This matters because if you navigate into the course-v4 folder (to get the slimmed-down notebooks used in the course videos), the code provided automatically by Paperspace on initialisation of your machine currently references fastai2 instead of fastai.

So the documentation should probably be changed from:

Once you click on ‘Terminal’ a new window should open with a terminal. Type:

pip install fastai fastcore --upgrade
then

cd fastbook
git pull

Now you should close the terminal window.

to this:

Once you click on ‘Terminal’ a new window should open with a terminal. Type:

pip install fastai fastcore --upgrade
then

cd fastbook
git pull
cd ../course-v4
git pull
Now you should close the terminal window.
