
Welcome to CSCI E-82A

Real-world machine intelligence and machine learning operate in an uncertain world. Probabilistic programming encompasses a range of algorithms for making decisions and inferences under uncertainty, with applications throughout machine learning and artificial intelligence, including scheduling, robotics, natural language processing, and image understanding. The focus of this course is on developing an understanding of the theory and gaining hands-on experience with probabilistic representation, learning, and inference methods for planning and classification. Hands-on exercises use the Python APIs of several powerful packages.

The course is built around the three pillars of machine learning and artificial intelligence: representation, learning, and inference. It surveys a number of powerful probabilistic programming methods across these areas:

  1. Review of probability and inference.
  2. Representations for probabilistic models.
  3. Learning in probabilistic models.
  4. Bayesian graphical models.
  5. Markov decision processes and planning.
  6. Partially observable Markov decision processes.
  7. Unsupervised probabilistic models, time permitting.
  8. Reinforcement learning methods.
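
Topic 1, the review of probability and inference, can be previewed in a few lines of plain Python. The numbers below (a test with 99% sensitivity and 95% specificity for a condition with 1% prevalence) are illustrative, not course material:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01              # prior P(H): prevalence of the condition
p_e_given_h = 0.99      # likelihood P(E|H): test sensitivity
p_e_given_not_h = 0.05  # false positive rate, 1 - specificity

# Total probability of a positive test, marginalizing over H
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of the condition given a positive test
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # ~0.167
```

Even with an accurate test, the low prior keeps the posterior well below 50% — the kind of result that makes careful probabilistic reasoning worthwhile.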

Students completing this course will:

  1. Develop the ability to apply probabilistic programming methods to machine intelligence and machine learning applications.
  2. Have an understanding of the theory that connects various probabilistic programming methods.
  3. Have hands-on experience applying probabilistic programming algorithms to various machine intelligence and machine learning problems.

Note: For specific University policy information please see the course syllabus on the course Canvas page.

Some course mechanics

Meeting time: Tuesdays, 5:30-7:30 pm US Eastern time, online. Students are expected to attend and participate actively in class sessions.

Mandatory On-Campus Weekend Session: Saturday and Sunday December 8-9, 9am-5pm, Harvard Hall 202. Students must attend the entire weekend session to receive credit for the course.

Course materials: Course lecture material is in the form of Jupyter notebooks in the GitHub repository at https://github.com/StephenElston/CSCI_E_82A_Probabalistic_Programming. As this is a new course, I am still developing the material.

Technical Requirements: You are required to have a computer/laptop and Internet connection capable of performing the course work. Specifically:

  • High-speed Internet connection for watching videos.
  • Up-to-date web browser for watching videos and working with Microsoft Azure Machine Learning.
  • A modern multi-core CPU (4 or more cores recommended), and ideally a GPU.
  • At least 50 GB of free disk space.
  • At least 8 GB of RAM; 16 GB is better.
  • Windows, macOS, or Linux.
  • The ability to install the Python Anaconda stack (https://www.continuum.io/downloads), including Jupyter notebooks.
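
A quick way to confirm an Anaconda install is ready for the course is to run a short check at the Python prompt or in a notebook. The package list below is a guess at the usual Anaconda defaults, not an official course requirement:

```python
import sys
import importlib.util

# The course notebooks assume Python 3
assert sys.version_info.major == 3

# Packages the Anaconda distribution normally bundles; report any missing
for pkg in ("numpy", "scipy", "matplotlib", "pandas", "jupyter"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```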

Supplementary reading material

These texts are sources used for preparing the course. Students may wish to refer to these books for supplementary readings:

  1. Bayesian Reasoning and Machine Learning, David Barber, Cambridge University Press, 2012 – I find this book a useful source for both theory and algorithms for a wide range of topics.
  2. Artificial Intelligence, A Modern Approach, Stuart Russell and Peter Norvig, Prentice Hall, Third edition, 2010 – The go-to introductory AI textbook with introductory treatment of probabilistic models.
  3. Probabilistic Graphical Models, Principles and Techniques, Daphne Koller and Nir Friedman, MIT Press, 2009 – Comprehensive but quite theoretical text. I mostly use this book as a reference.
  4. Reinforcement Learning, An Introduction (Adaptive Computation and Machine Learning), Richard Sutton, Andrew Barto, MIT Press, Second edition, 2018 – Introductory text on reinforcement learning. You can download the draft of the second edition: https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
  5. Decision Making Under Uncertainty: Theory and Application, Kochenderfer et al., MIT Press, 2015 – A nice introduction to Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). This book also contains a nice introduction to reinforcement learning.
  6. Machine Learning: A Probabilistic Perspective, Murphy, MIT Press, 2012 – A good reference for the theory of many probabilistic machine learning algorithms, though not the ideal book for learning the material. Make sure you get the latest printing; see the preface for details on the printings.
  7. The Book of Why: The New Science of Cause and Effect, Pearl and Mackenzie, Basic Books, 2018 – An extended essay on causal models written for a broad audience.
  8. Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, MIT Press, 2016 – The definitive (and more or less only) text on deep learning theory. Most of this material is not within the scope of this course.

Approximate Schedule

This preliminary lecture schedule is subject to change as the course progresses:

  1. Week 1 – Sep 4: Review of probability
    • Basics of probability
    • Conditional probability
    • Independence and conditional independence
  2. Week 2 – Sep 11: Probabilistic reasoning
    • Conditional probability, Priors, likelihood and posterior
    • Basic graph concepts
    • Introduction to probabilistic graphical models
  3. Week 3 – Sep 18
    • Bayesian belief networks
    • Representation in belief networks
    • Independence and separation in graphical models
  4. Week 4 – Sep 25
    • Markov properties and Markov networks
    • Efficient inference algorithms for graphical models
    • Message algorithms for inference in graphical models
  5. Week 5 – Oct 2
    • Overview of approximate inference in graphical models
    • MCMC methods
    • Variational methods
  6. Week 6 – Oct 9
    • Decision making and utility functions
    • Markov decision processes
    • Decision tree models
  7. Week 7 – Oct 16
    • Decisions with partially observable Markov processes
    • The EM algorithm
  8. Week 8 – Oct 23
    • Time dependent processes
    • Hidden Markov Models
    • Filtering and smoothing with HMMs
    • Kalman filters
  9. Week 9 – Oct 30
    • The bandit problem
    • Dynamic programming
    • Limits on tabular MDP models
  10. Week 10 – Nov 6
    • Introduction to temporal difference learning
    • Introduction to Q learning
    • Actor-critic methods
  11. Week 11 – Nov 13
    • Introduction to deep learning
    • Representation in deep networks
    • Back propagation
      Note: No class week of Nov 20, unless needed for make-up.
  12. Week 12 – Nov 27
    • Deep Q learning
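
As a small preview of the Markov decision process and dynamic programming material (weeks 6 and 9), value iteration on a toy MDP fits in a few lines of NumPy. The two-state, two-action transition probabilities and rewards below are made up purely for illustration:

```python
import numpy as np

# Toy MDP: 2 states, 2 actions. P[a, s, s'] are transition probabilities,
# R[a, s] are expected immediate rewards (illustrative numbers only).
P = np.array([[[0.9, 0.1],    # action 0
               [0.4, 0.6]],
              [[0.2, 0.8],    # action 1
               [0.1, 0.9]]])
R = np.array([[1.0, 0.0],     # reward for action 0 in states 0, 1
              [0.0, 2.0]])    # reward for action 1 in states 0, 1
gamma = 0.9                   # discount factor

# Value iteration: V(s) <- max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * (P @ V)   # Q[a, s], using batched matrix-vector products
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)     # greedy action for each state
print(V, policy)
```

Because the update is a gamma-contraction, the loop converges to the unique fixed point of the Bellman optimality equation, and the greedy policy read off from Q is optimal for this toy model.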

