drkostas / data-science-methods

A playground repo for the DSE-512 course

License: MIT License

Languages: Python 24.21%, Makefile 0.47%, Shell 1.84%, E 8.70%, Jupyter Notebook 59.30%, Eiffel 5.48%

data-science-methods's Introduction


  • 📖 Pursuing a PhD in Data Science & Engineering @ The University of Tennessee.

  • 🎓 Conducting research on AI and Computer Vision @ the AICIP Lab.

  • 💻 Currently building Masked Image Modeling models for Remote Sensing data.

🖥️ Open-Source Projects

Machine Learning
Title | Technologies
Minecraft-AI | TensorFlow
3D Semantic Segmentation | PyTorch, OpenCV
Bert Rinehart Novels | PyTorch, spaCy
Car Accidents Pred. | Pandas, SciPy
Hybrid Girvan Newman | PySpark, MySQL
COVID19 Vacc. Pred. | TensorFlow
Instagram Likes Pred. | TensorFlow, OpenCV
RL Value Iteration | NumPy
Vanilla Numpy CNN | NumPy
Vanilla Numpy NN | NumPy

PyPi Packages
Title | Technologies
High SQL | MySQL, CircleCI
Cloud File Manager | Dropbox, CircleCI
YAML Wrapper | CircleCI
Color Logger | CircleCI
Email Sender | Gmail, CircleCI
Benchmark Tools | CircleCI

Bots
Title | Technologies
Youtube Comment Bot | YouTube, Gmail, Dropbox, MySQL, RDS, CircleCI, Heroku
Job Application Bot | Gmail, Dropbox, MySQL, RDS, CircleCI, Heroku

Misc Projects
Title | Technologies
Spotify Button Presser | Raspberry Pi, Spotify, Switchbot
Cross The Floor | Sankey Diagram, Wiki
2D Shooter Game | p5
Quantum Mechanics Quiz App | Android

data-science-methods's People

Contributors

drkostas


data-science-methods's Issues

Assignment05 - Q1

(25 points) Ephemeral containers

We will launch a shell in an “ephemeral container” that is downloaded on demand and deleted when we are done. This is useful when trying out an image and you are unsure whether you will use it long-term. First, from an ISAAC login node, run the
following commands and note the output:
grep PRETTY_NAME /etc/os-release
ls /

Next, run the following command to drop into an ephemeral Singularity container. This may take a few
minutes.
singularity shell docker://dceoy/pydata:dnn-cpu

Re-run the first two commands from inside this shell and note the differences. You are inside a container which
has applied an overlay to the true filesystem. Press Ctrl-D to exit.

Assignment05 - Extra

More exercises (0 points)

There you have it: a Singularity container that has pandas, numpy, and other machine learning packages already installed, without your having to manage a conda environment. Note that mpi4py is missing; if you’d like to install it, you would need to extend this container. However, doing so is difficult, since you must build new containers on another computer on which you have root access, such as an Ubuntu virtual machine. That is not part of this assignment, but it is the next step in understanding and
using Singularity, so if you are looking for more to learn, try modifying this container on a local machine (using the singularity bootstrap command and editing a Singularity file), then copying it back onto ISAAC to verify that it runs.

Assignment05 - Q4

(25 points) Run kmeans_vectorized.py in the container

Now, use the skills you developed in the first few problems to run your kmeans_vectorized.py script from previous assignments. Verify that the output matches what we saw in Assignment 01.

Assignment03

For this assignment, you will extend the code we created in class, located at /lustre/haven/proj/UTK0150/jhinkl13/kmeans.
The submission you return to us should be a brief report formatted as HTML, DOCX, or PDF.
You do not need to format the report elaborately; you can simply list your responses
to each of the problems below.

Please do the following using 4 clusters (-k 4) on the TCGA dataset:

  • (25 points) Starting from the kmeans repository developed in class, which you extended in the last
    two assignments, refactor kmeans.py to contain the following subfunctions: compute_distances(),
    expectation_step(), and maximization_step(), called at the appropriate places inside the kmeans()
    function. Ensure that the program still runs. Do this for kmeans_vectorized.py as well; a sketch of
    the refactored structure appears after this list. In your report, indicate that you’ve completed
    problem 1 and provide the path to your code on ISAAC (or GitHub, if you choose to use it).
  • (25 points) Profile your newly refactored kmeans.py and report the time spent in each of the
    three new functions, both in seconds and as a percentage of the total runtime. Do the same
    for kmeans_vectorized.py. Recall that kmeans_vectorized.py attempted to speed up the
    compute_distances() portion. Use Amdahl’s Law to compute the theoretical maximum speedup
    possible by optimizing and parallelizing compute_distances(); a sketch of this computation
    appears after this list. What percentage of that speedup have we actually obtained by
    vectorizing with numpy?
  • (25 points) Visualize an icicle plot of the profiling output for kmeans.py. You may use SnakeViz
    or VizTracer along with cProfile, as Todd demonstrated in Lecture 15. Do the same for
    kmeans_vectorized.py. You may include these as screenshots in your report; please rescale the
    figures to ensure that we can see the main function names in these plots.
  • (25 points) Using the profiling output from Problem 2, determine the maximum speedup you could
    obtain by optimizing expectation_step() and maximization_step() (note that you may need
    to refactor further to measure the runtime of the main kmeans loop). In a new file, kmeans_numba.py,
    apply the @numba.jit decorator (install and import numba in your code), re-profile your code, and
    compare your runtime to the ideal speedup given by Amdahl’s Law. Report the new profiled
    runtimes for these three functions and the total runtime using Numba in your report.
  • Full run for the TCGA dataset
  • Measure times and apply Amdahl's law; create the report, gather results and screenshots, and submit.
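
A minimal sketch of the refactored structure for problems 1 and 4, assuming a NumPy data matrix X of shape (n_samples, n_features); the function bodies and the fixed seed are illustrative, not the exact code from the class repository:

import numpy as np
import numba

@numba.jit(nopython=True)  # problem 4: JIT-compile the hot loop (omit for problem 1)
def compute_distances(X, centroids):
    # Squared Euclidean distance from every point to every centroid
    n, k = X.shape[0], centroids.shape[0]
    dists = np.empty((n, k))
    for i in range(n):
        for j in range(k):
            dists[i, j] = np.sum((X[i] - centroids[j]) ** 2)
    return dists

def expectation_step(dists):
    # Assign each point to its nearest centroid
    return np.argmin(dists, axis=1)

def maximization_step(X, assignments, k):
    # Recompute each centroid as the mean of its assigned points
    return np.array([X[assignments == j].mean(axis=0) for j in range(k)])

def kmeans(X, k=4, max_iter=100):
    rng = np.random.default_rng(0)  # fixed seed, so runs are reproducible
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        dists = compute_distances(X, centroids)
        assignments = expectation_step(dists)
        new_centroids = maximization_step(X, assignments, k)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return assignments, centroids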
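
And a sketch of problem 2's measurement, assuming kmeans() and X from the sketch above are defined in the running module; cProfile and pstats are standard library, and the fraction f below is a placeholder for your measured value, not a real result:

import cProfile
import pstats

# Profile a full run and dump the stats to a file
cProfile.run("kmeans(X, k=4)", "kmeans.prof")
stats = pstats.Stats("kmeans.prof")
stats.sort_stats("cumulative").print_stats(
    "compute_distances|expectation_step|maximization_step")

# Amdahl's Law: if a fraction f of the runtime is spent in the part being
# optimized, the speedup from making that part s times faster is
#   1 / ((1 - f) + f / s),
# and the theoretical maximum (s -> infinity) is 1 / (1 - f).
f = 0.85  # placeholder: fraction of runtime measured in compute_distances()
print(f"Theoretical maximum speedup: {1.0 / (1.0 - f):.2f}x")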

Assignment05 - Q3

(25 points) Bind mounting directories

By default, Singularity containers mount /home/$USER, /tmp, and $PWD, meaning you can see them inside your container. Let’s also mount your user directory in the project area and print its contents from within the container. If you named your user directory something other than your username, you will need to edit this command accordingly.
singularity exec --bind /lustre/haven/proj/UTK0150/$USER:/myproj pydatacpu.sif ls -l /myproj

Assignment05 - Q2

(25 points) Pulling containers

Containers can also be persisted to disk by “pulling”. Run the following command to pull the docker image into a singularity image file, and verify that you can shell into it.
singularity build pydatacpu.sif docker://dceoy/pydata:dnn-cpu

Assignment01

  • Problem 1
  • Problem 2
  • Problem 3
  • Add detailed comments
  • Deploy and test on ISAAC
  • Do the additional programming challenges

Assignment04 - Q2

Question 2

After training the model for 30 epochs for each number of ranks p from Problem 1,
let’s compare both the models’ accuracy and the total runtime for each run.
Create two plots:

a. A line plot showing the epoch accuracy for each of the runs
b. A second line plot that shows the total runtime by the number of processes used

What is the effect on training time as we increase the number of processes available?
How does this line up with your expectations of the scalability laws that we discussed?
Which scalability law is most applicable in this case? How is our model accuracy
affected by the increase in the number of processes running? Present your figures and
answers to these questions in either a short document or a Jupyter notebook in your
repository.
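
A minimal plotting sketch for both figures, assuming the per-epoch accuracies from Question 1 were saved as acc_p{p}.npy files and the total runtimes were recorded separately; the file names and placeholder runtimes are hypothetical:

import numpy as np
import matplotlib.pyplot as plt

procs = [1, 2, 4, 8]
runtimes = {1: 0.0, 2: 0.0, 4: 0.0, 8: 0.0}  # placeholder: fill in measured totals (s)

# (a) epoch accuracy for each run
for p in procs:
    acc = np.load(f"acc_p{p}.npy")  # hypothetical file name, one array per run
    plt.plot(range(1, len(acc) + 1), acc, label=f"p = {p}")
plt.xlabel("Epoch")
plt.ylabel("Training accuracy")
plt.legend()
plt.savefig("accuracy_per_epoch.png")
plt.clf()

# (b) total runtime by number of processes
plt.plot(procs, [runtimes[p] for p in procs], marker="o")
plt.xlabel("Number of processes")
plt.ylabel("Total runtime (s)")
plt.savefig("runtime_vs_processes.png")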

Assignment02

  • (25 points) Clone the kmeans repository into your own area at /lustre/haven/proj/UTK0150/$USER.
  • (25 points) Write a job script that will use a single node and a single process per node (so only one process total). Ensure the job runs on a compute node, and run the non-distributed kmeans (kmeans_vectorized.py). Make a note of the output directory and commit the job script to your cloned repo.
  • (25 points) Write another job script to run the distributed kmeans script on two compute nodes using 20 processes, using the same iris data we've been looking at; submit the job, noting the output directory. This job should finish in a very short amount of time, so requesting a walltime of 5 minutes will help you get through the queue quicker. Don't forget that you must launch your processes with mpirun inside the script...
  • (25 points) Modify the script to use the TCGA data in the /lustre/haven/proj/UTK0150/data directory (see the README for a refresher on how to load the data). Run another job on ISAAC using 20 processes and time how long the script takes to run, using 10 clusters. Make a note of the time it takes. Also run with a single process and one node, and verify that both jobs output identical cluster assignments and centroids by saving the outputs of each job, loading them once complete, and verifying that they match; see the sketch below. (Hint: success at this requires identical initialization.)
  • Submit a message here with the following information:
    • path to your code on ISAAC.
    • paths and a brief description of the relevant output log directories for ISAAC jobs that succeeded (please don't make us sort through your main output directory and failed job IDs ourselves).
    • Timings for k-means on Iris and TCGA data, with single process vs twenty. Do you achieve a 20x speedup in each case?

Your assignment will not be graded unless you submit it here on Canvas; no exceptions.
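
A minimal verification sketch for problem 4, assuming each job saved its results with np.save under the names shown; the file names are hypothetical:

import numpy as np

# Outputs saved by the single-process job and the 20-process job
assign_1p = np.load("assignments_1proc.npy")
cent_1p = np.load("centroids_1proc.npy")
assign_20p = np.load("assignments_20proc.npy")
cent_20p = np.load("centroids_20proc.npy")

# Assignments are integer labels, so exact equality is expected;
# centroids are floats, so compare within a tolerance.
assert np.array_equal(assign_1p, assign_20p), "cluster assignments differ"
assert np.allclose(cent_1p, cent_20p), "centroids differ"
print("Outputs match: identical assignments and matching centroids.")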

Assignment04 - Q1

Question 1

Let’s write a data-parallel convolutional net to train on the MNIST dataset. Our goal
is to see how data-parallel training affects two things: our model’s accuracy, and its
runtime. Due to some limitations on ISAAC, we are going to train this model using a
single node, but we will increase the number of processes per node to see how things
scale. Use the number of processes p ∈ {1, 2, 4, 8} and train the model for 30 epochs
each time. We want to keep track of two things:

a. Our training accuracy at every epoch
b. The total time it takes the model to train

Note: I recommend saving your training accuracy at every epoch as a numpy
array. You should have one numpy array for each run, where p processes were
used. In the next step, we will plot these accuracy curves for each of the runs
to compare how our model accuracy changes as we vary the number of processes
(and, as a result, the effective batch size). A minimal sketch of the training
loop appears after the steps below.

Steps:

  • Create a class that will include all the assignment operations
  • Load train and test MNIST
  • Pick CNN architecture and create Model Class
  • Create Training Part
  • Create Testing Part
  • Store the run statistics
  • Create a Data-Parallel version
  • Test with different numbers of processes
  • Create .pbs script and run it on ISAAC
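
A minimal sketch of the data-parallel loop, assuming the mpi4py gradient-averaging approach from the earlier assignments and a torchvision MNIST loader; the tiny CNN, hyperparameters, and output file name are placeholders, not a required architecture:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from mpi4py import MPI
from torchvision import datasets, transforms

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
torch.manual_seed(0)  # same initial weights on every rank

class Net(nn.Module):  # placeholder architecture; pick your own CNN
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3)
        self.fc = nn.Linear(8 * 13 * 13, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)
        return self.fc(x.flatten(1))

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
# Each rank trains on a disjoint shard of the data (data parallelism)
shard = torch.utils.data.Subset(train, range(rank, len(train), size))
loader = torch.utils.data.DataLoader(shard, batch_size=64, shuffle=True)

model = Net()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
accs = []
for epoch in range(30):
    correct, total = 0, 0
    for x, y in loader:
        opt.zero_grad()
        out = model(x)
        F.cross_entropy(out, y).backward()
        # Average gradients across ranks so every rank takes the same step
        for p in model.parameters():
            g = p.grad.numpy()
            avg = np.empty_like(g)
            comm.Allreduce(g, avg, op=MPI.SUM)
            p.grad = torch.from_numpy(avg / size)
        opt.step()
        correct += (out.argmax(1) == y).sum().item()
        total += len(y)
    accs.append(correct / total)  # training accuracy on this rank's shard

if rank == 0:
    np.save(f"acc_p{size}.npy", np.array(accs))  # per-epoch accuracy for Q2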

Assignment05 - Q0

(0 points) Setup

We will be downloading containers, which will exhaust your home directory disk
quota. So first, create a new cache directory for singularity, and export the following environment
variable in your shell session.

export SINGULARITY_CACHEDIR=/lustre/haven/proj/UTK0150/$USER/singularity_cache
mkdir $SINGULARITY_CACHEDIR

Now, any singularity commands run within this terminal will use the given directory to store the large
image files we download, and you should not run out of disk space.
