Reverse Engineer the brain using odsc gain of compute functions.


assimilate_odsc_bootcamp

https://odsc.com/boston/2024-prereqs

Day 1 - Virtual class day.

class01_data_wrangling_with_python_with_Sheamus_McGovern.md

class02_introduction_to_math_for_data_science_by_thomas_nield.md

class03_practical_introduction_data_viz_for_data_scientists_robert_kosara.md

class04_introduction_machine_learning_python_Sudip_Shrestha.md

Day 2

State of the art open source AI with Hugging Face

class05_state_of_the_art_open_source_ai_hugging_face_julien_simon.md

Get a Rebuff API key; get a usage hook to chat.openai.com. class06_api_for_gpt_andras_zsom.md

class07_data_science_biotech_reearch_pharma_eric_ma_phd.md

Dive into the Lightning AI open source stack and Lightning Studios to unlock reproducible AI development on the cloud, by Luca Antiga, CTO, Lightning AI. https://www.linkedin.com/in/lantiga/ PyTorch Lightning is the leading open source framework.

LLM-native products: industry best practices and what's ahead. By Ivan Lee, Datasaur.ai CEO/Founder. The next generation of LLM-powered products. VIP, platinum, gold, silver, bootcamp.

!!! Assimilate all this: https://www.kaggle.com/code trending

Pipe his .ipynb files into Google Colab and reproduce his work: class08_ben_needs_a_friend_llvm_benjamin_batorsky.md

Good: (flatten to class09) Machine Learning: Jon Krohn's https://www.linkedin.com/in/jonkrohn At Nebula: https://www.linkedin.com/company/nebula-io

Generative A.I. with Open-Source LLMs: From Training to Deployment with Hugging Face and PyTorch Lightning. Parts of this training will be accessible to anyone who would like to understand how to develop commercially-successful data products in the new paradigm unleashed by LLMs like GPT-4. To make the most of this training, attendees should be proficient in deep learning and Python programming. Jon Krohn's "Deep Learning with PyTorch and TensorFlow" training on April 23rd provides the neural-network foundations for this generative A.I. training.

Tools/Languages utilized: Google Colab and Paperspace, Python. Code can be found in the aptly named code directory: https://github.com/jonkrohn/NLP-with-LLMs/tree/main/code Jupyter Notebooks are directly supported for execution in Google Colab: https://colab.research.google.com/ .py files are for running at the command line (see instructions). N.B.: Code is intended to be accompanied by live instructions and so it will not necessarily be self-explanatory.

https://github.com/jonkrohn/NLP-with-LLMs
https://github.com/jonkrohn/DLTFpT/blob/master/notebooks/softmax_demo.ipynb
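The last link above is Jon Krohn's softmax demo notebook. A minimal stand-alone sketch of the same idea (my own implementation, not the notebook's code):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution.

    Subtracting the max first is the standard numerical-stability
    trick; it does not change the result.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # probabilities sum to 1; largest logit gets the largest share
```

The outputs always sum to 1, which is why softmax sits at the end of a classifier.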

Idiomatic Pandas, 2pm to 3pm, with Matt Harrison, Metasnake. Python and data science corporate trainer and consultant. ODSC East 2024 Prerequisites to download:

  1. Install pandas on your machine (using Anaconda or pip).
  2. Install Jupyter on your machine.
  3. Launch Jupyter and run the following: https://github.com/mattharrison/effective_pandas_book/blob/main/02-install-code.ipynb

Github: https://github.com/mattharrison/effective_pandas_book

class10_using_graphs_for_large_feature_engineering_pipelines_wes_madrigal.md

How to Practice Data-Centric AI and Have AI Improve its Own Dataset (DE Summit) with Jonas Mueller , Chief Scientist and Co-Founder | Cleanlab In Machine Learning projects, one starts by exploring the data and training an initial baseline model. While it’s tempting to experiment with different modeling techniques right after that, an emerging science of data-centric AI introduces systematic techniques to utilize the baseline model to find and fix dataset issues. Improving the dataset in this manner, one can drastically improve the initial model’s performance without any change to the modeling code at all! These techniques work with any ML model and the improved dataset can be used to train any type of model (allowing modeling improvements to be stacked on top of dataset improvements). Such automated data curation has been instrumental to the success of AI organizations like OpenAI and Tesla.
While data scientists have long been improving data through manual labor, data-centric AI studies algorithms to do this automatically. This tutorial will teach you how to operationalize fundamental ideas from data-centric AI across a wide variety of datasets (image, text, tabular, etc). We will cover recent algorithms to automatically identify common issues in real-world data (label errors, bad data annotators, outliers, low-quality examples, and other dataset problems that once identified can be easily addressed to significantly improve trained models). Open-source code to easily run these algorithms within end-to-end Data Science projects will also be demonstrated. After this tutorial, you will know how to use models to improve your data, in order to immediately retrain better models (and iterate this data/model improvement in a virtuous cycle).
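The "use the baseline model to find label issues" idea above can be sketched crudely in plain Python. This is a toy stand-in for what confident-learning algorithms (e.g. in Cleanlab's open-source library) do; the helper name and the fixed threshold are my own simplifications:

```python
def flag_label_issues(labels, pred_probs, threshold=0.5):
    """Flag examples whose given label the baseline model finds implausible.

    labels: list of integer class labels
    pred_probs: per-example lists of class probabilities from a trained model
    Returns indices where the model's confidence in the given label
    falls below `threshold`, a crude proxy for confident learning.
    """
    return [i for i, (y, probs) in enumerate(zip(labels, pred_probs))
            if probs[y] < threshold]

labels = [0, 1, 0]
pred_probs = [[0.90, 0.10],   # consistent with label 0
              [0.20, 0.80],   # consistent with label 1
              [0.05, 0.95]]   # label 0 looks wrong here
print(flag_label_issues(labels, pred_probs))  # [2]
```

Real data-centric AI tooling calibrates per-class thresholds from the model's own confidence rather than hard-coding 0.5, but the loop is the same: score every (label, prediction) pair and surface the suspicious ones for review.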

Linear Algebra with Jon Krohn is good

  • get Assimilate

https://github.com/jonkrohn/DLTFpT/blob/master/notebooks/deep_net_in_pytorch.ipynb

website: https://www.jonkrohn.com/resources

Programming: All code demos will be in Python so experience with it or another object-oriented programming language would be helpful for following along with the code examples.

Mathematics: Familiarity with secondary school-level mathematics will make the class easier to follow along with. If you are comfortable dealing with quantitative information -- such as understanding charts and rearranging simple equations -- then you should be well-prepared to follow along with all of the mathematics.

Github repository for the original Notebook https://github.com/jonkrohn/ML-foundations/tree/master/notebooks Linear Algebra Colab 1 https://colab.research.google.com/github/jonkrohn/ML-foundations/blob/master/notebooks/1-intro-to-linear-algebra.ipynb Linear Algebra Colab 2 https://colab.research.google.com/github/jonkrohn/ML-foundations/blob/master/notebooks/2-linear-algebra-ii.ipynb

Google Drive to Files https://drive.google.com/drive/folders/1HPiLTf4u-hXJd7ISTHrI28IBls8n4UxG?usp=sharing

Jupyter Notebooks Guide https://docs.google.com/document/d/1vpASB2kjn_XUGTJNwknhIEDWfK8cMhq7v8vyoggmw5M/edit#heading=h.wp9t8n7j3og

Data Cleaning

Eric Callahan , Principal, Data Solutions | Pickaxe Foundry https://www.linkedin.com/in/ericcallahan

Cross-entropy loss. Calculates the loss between two probability distributions, the prior and the model's output. https://en.wikipedia.org/wiki/Cross-entropy
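A minimal sketch of the discrete cross-entropy formula from the Wikipedia link above, H(p, q) = -Σ p_i log q_i (the small eps guard is my addition to avoid log(0)):

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i * log(q_i).

    p: true/prior distribution, q: model distribution.
    Equals the entropy of p when q == p, and grows as q diverges from p.
    """
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

p = [0.5, 0.5]
print(cross_entropy(p, p))           # entropy of p: ln 2, about 0.693
print(cross_entropy(p, [0.9, 0.1]))  # larger, since q diverges from p
```

In training, p is usually a one-hot label and q the softmax output, so the loss reduces to -log of the probability assigned to the true class.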

Clean as You Go: Basic Hygiene in the Modern Data Stack (DE Summit)

When my children walk around the house, they generally leave a trail of mess behind them. They sometimes realize that they shouldn't be doing this, but they’re so excited to move on to the next thing that catches their eye that they’ll say “Oh, I’ll clean it up later.”

As grown adults with wisdom gained from experience, my wife and I know that this means either:

  1. They’ve just signed themselves up for a massive future cleaning job, or
  2. that someone else will have to clean up after them.

We know that this is not good behavior for a child, so why do we so often do this as Data Engineers?

The culture of “Move Fast and Break Things” has pressured us into closing tickets as quickly as possible, frequently pushing us towards the “Oh, I’ll clean it up later” mindset. While this may save us a few minutes in the short-term, we are creating long term headaches such as:

  • Piles of small cleanup tasks for later
  • Confusion among peers who try to use incomplete data assets
  • Lack of metadata to activate throughout the Modern Data Stack

Unlock safety and savings: mastering a secure, cost-effective cloud data lake

Ori Nakar, Imperva, principal engineer, threat research

Jonathan Azaria, Imperva, data science tech lead.

Day 3

Trust, Transparency and Secured Generative AI 9:00AM to 9:25AM Ballroom B Kate Soule IBM program director generative AI research https://www.linkedin.com/in/katesoule 1 person startup: "IBM": https://www.linkedin.com/company/ibm/ Jon Krohn will give an LLM class from 11AM to 4PM.

Learning from mistakes: Empowering Humans to use AI the right way in high stakes decision making 9:30AM to 9:55AM Ballroom B Hilke Schellmann author of "The algorithm" assistant professor of journalism, new york university https://www.linkedin.com/in/hilkeschellmann Hachette Book Group: https://www.linkedin.com/company/hachette-book-group

How to Scale Trustworthy AI 10:00AM to 10:30AM Ballroom B Paul Hake Principal AI Engineer, IBM https://www.linkedin.com/in/paul-hake

From Code to Trust: Embedding Trustworthy Practices across the AI Lifecycle 10:00AM to 10:30 Ballroom A Vrushali Sawant SAS Data Scientist Data Ethics Practice https://www.linkedin.com/in/vrushalipsawant/

Choose: Book Author: Quick Start Guide to Large Language Models 10:30AM to 11:00AM Expo Hall Networking Area Sinan Ozdemir LoopGenius, AI and LLM expert, author and cofounder/CTO https://www.linkedin.com/in/sinan-ozdemir

Who wants to live forever? Reliability engineering and mortality. 11AM to 11:30 Allen Downey Phd Brilliant.org https://www.linkedin.com/in/allendowney Ballroom B

Choose:

Generative AI with open-source LLMs: from training to deployment with Hugging Face and PyTorch Lightning. Dr Jon Krohn. 11:00AM to 4PM https://www.linkedin.com/in/jonkrohn/ Room 312 SLIDES https://docs.google.com/presentation/d/1fyPAj5NTR7jGaONnOJr9kSjm-uCTi3DEjgcBpeZISeo/edit#slide=id.p3 CODE:

ReRead and reimplement:

https://arxiv.org/pdf/1802.05365.pdf

Best language models: Anthropic's Claude 3 Opus (see slides, SOTA) https://www.anthropic.com/news/claude-3-family Google's Gemini 1.0 Ultra (see slides, SOTA) OpenAI's GPT-4 (see slides, SOTA)

See Mixtral 8x7B and 8x22B, ELMo things, class 11: class11_generative_ai_opensource_llms_hugging_face_pytorch_jon_krohn.md

Developing Credit Scoring Models for Banking and Beyond Room 304 11AM to 1PM Aric LaBarr PhD, Institute for Advanced Analytics at NC State University

LLMs Meet Google Cloud: a new frontier in big data analytics. 11AM to 1PM Room 302 Mohammad Soltanieh-ha Rohan Johar

WOW! 20 years as a quant at Bloomberg. Imputation of financial data using collaborative filtering and generative machine learning Ballroom A 11:00AM to 11:30 Arun Verma PhD, Bloomberg, head of quant research. https://www.linkedin.com/in/arun-verma-0858b8/
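The talk's collaborative-filtering and generative imputation methods are far more sophisticated than anything shown here; as a point of reference, the classic baseline such methods start from (fill a missing entry with row mean + column mean - global mean; a toy implementation of mine, not the speaker's code) looks like:

```python
def impute_baseline(matrix):
    """Fill None entries with row mean + column mean - global mean,
    a standard baseline used before fancier collaborative filtering."""
    obs = [(i, j, v) for i, row in enumerate(matrix)
           for j, v in enumerate(row) if v is not None]
    g = sum(v for _, _, v in obs) / len(obs)  # global mean of observed cells
    rows, cols = len(matrix), len(matrix[0])
    rmean, cmean = [g] * rows, [g] * cols
    for i in range(rows):
        vs = [v for v in matrix[i] if v is not None]
        if vs:
            rmean[i] = sum(vs) / len(vs)
    for j in range(cols):
        vs = [matrix[i][j] for i in range(rows) if matrix[i][j] is not None]
        if vs:
            cmean[j] = sum(vs) / len(vs)
    return [[matrix[i][j] if matrix[i][j] is not None
             else rmean[i] + cmean[j] - g
             for j in range(cols)] for i in range(rows)]

# rows = assets, columns = dates; one price is missing
prices = [[10.0, 12.0],
          [11.0, None]]
print(impute_baseline(prices))  # missing cell becomes 11 + 12 - 11 = 12.0
```

Collaborative filtering then models the residuals left over after this baseline, typically via low-rank matrix factorization.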

Applied to bloomberg job application steps lol.

VIRTUAL1: Build GenAI Systems, Not Models,, Hugo Bowne-Anderson , Head of Developer Relations | Outerbounds 10AM to 10:30 This talk explores a framework for how data scientists can deliver value with Generative AI: How can you embed LLMs and foundation models into your pre-existing software stack? How can you do so using Open Source Python? What changes about the production machine learning stack and what remains the same? We motivate the concepts through generative AI examples in domains such as text-to-image (Stable Diffusion) and text-to-speech (Whisper) applications. Moreover, we’ll demonstrate how workflow orchestration provides a common scaffolding to ensure that your Generative AI and classical Machine Learning workflows alike are robust and ready to move safely into production systems. This talk is aimed squarely at (data) scientists and ML engineers who want to focus on the science, data, and modeling, but want to be able to access all their infrastructural, platform, and software needs with ease!

VIRTUAL2: Build GenAI Systems, Not Models. Hugo Bowne-Anderson, Head of Developer Relations | Outerbounds (same abstract as VIRTUAL1).

VIRTUAL3: Causal AI: from Data to Action Dr. Andre Franca , CTO | connectedFlow In this talk, we will explore and demystify the world of Causal AI for data science practitioners, with a focus on understanding cause-and-effect relationships within data to drive optimal decisions. We will focus on: from Shapley to DAGs: the dangers of using post-hoc explainability methods as tools for decision making, and how traditional ML isn't suited to situations where we want to perform interventions on the system; discovering causality: how do we figure out what is causal and what isn't, with a brief introduction to methods of structure learning and causal discovery; optimal decision making: by understanding causality, we can now accurately estimate the impact we can make on our system, and use this knowledge to derive the best possible actions. This talk is aimed at both data scientists and industry practitioners who have a working knowledge of traditional statistics and basic ML. This talk will also be practical: we will provide you with guidance to immediately start implementing some of these concepts in your daily work.

VIRTUAL4: Advancing Ethical Natural Language Processing: Towards Culture-Sensitive Language Models Gopalan Oppiliappan , Head, AI Centre of Excellence | Intel India Natural Language Processing (NLP) systems play a pivotal role in various applications, from virtual assistants to content generation. However, the potential for biases and insensitivity in language models has raised concerns about equitable representation and cultural understanding. This talk explores the development of Culture-Sensitive Language Models (LLMs) as a progressive step towards addressing these issues. The core principles involve diversifying training data to encompass a wide range of cultures, implementing bias detection and mitigation strategies, and fostering collaboration with cultural experts to enhance contextual understanding. Our approach emphasizes the importance of ethical guidelines that guide the development and deployment of LLMs, focusing on principles such as avoiding stereotypes, respecting cultural diversity, and handling sensitive topics responsibly. The models are designed to be customizable, allowing users to fine-tune them according to specific cultural requirements, fostering inclusivity and adaptability. The incorporation of multilingual capabilities ensures that the models cater to global linguistic diversity, acknowledging the richness of different languages and cultural expressions. Moreover, we propose a feedback mechanism where users can report instances of cultural insensitivity, establishing a continuous improvement loop. Transparency and explainability are prioritized to enable users to comprehend the decision-making process of the models, promoting accountability. Through this multidimensional approach, we aim to advance the field of NLP by developing culture-sensitive LLMs that not only understand and respect diverse cultural nuances but also contribute to a more inclusive and ethical use of language technology.

VIRTUAL5: Advancing Ethical Natural Language Processing: Towards Culture-Sensitive Language Models. Gopalan Oppiliappan, Head, AI Centre of Excellence | Intel India (same abstract as VIRTUAL4).

VIRTUAL6 Everything About Large Language Models: Pre-training, Fine-tuning, RLHF & State of the Art Chandra Khatri , VP, Head of AI | Krutrim Generative Large Language Models like GPT4 have revolutionized the entire tech ecosystem. But what makes them so powerful? What are the secret components which make them generalize to a variety of tasks? In this talk, I will present how these foundation models are trained. What are the steps and core-components behind these LLMs? I will also cover how smaller, domain-specific models can outperform general purpose foundation models like ChatGPT on target use cases

VIRTUAL7: Machine Learning using PySpark for Text Data Analysis Bharti Motwani , Clinical Associate Professor | University of Maryland, USA In this session, unsupervised machine learning algorithms like cluster analysis and recommendation systems, and supervised machine learning algorithms like random forest, decision tree, bagging, and boosting, will be discussed for doing analysis using PySpark. The main feature of this workshop will be the implementation of these algorithms on text data. Considering the importance of reviews and text data available on social media platforms, the availability and importance of text data analysis has grown manifold. The session will be particularly helpful for startups and existing businesses that want to use AI to improve performance.

VIRTUAL8: Everything About Large Language Models: Pre-training, Fine-tuning, RLHF & State of the Art. Chandra Khatri, VP, Head of AI | Krutrim (same abstract as VIRTUAL6).

Strategies for implementing responsible AI governance and risk management Room 306 Beatrice Botti, DoubleVerify https://www.linkedin.com/in/beatricebotti/ Guardrails for data teams: embracing a platform approach for workflow management 11AM to 11:30 Bill Palombi https://www.linkedin.com/in/billpalombi Jeff Hale

HPCC Systems: the definitive big data open source platform. Demo theater, expo hall. Bob Foreman https://www.linkedin.com/in/bobforeman/ LexisNexis Risk Solutions.

Beyond MLOps: building AI systems with Metaflow. Ballroom A. Ville Tuulos, co-founder, Outerbounds.

Accelerating the LLM lifecycle on the cloud 11:35 to 12:05 Luca Antiga

Data pipeline architecture: stop building monoliths. Elliott Cordo, founder/CEO, Data Futures.

The unreasonable effectiveness of an asset graph. Sean Lopp, Dagster Labs.

Machine learning with XGBoost 12:05 to 1:05 Matt Harrison, Metasnake

Flyte: a production-ready open source AI platform 12:10PM to 12:40PM Ballroom A Thomas J Fan, Union.ai

AI as an Engineering Discipline 12:10 to 12:40 Yucheng Low, PhD, co-founder, XetHub

YES: https://www.youtube.com/watch?v=kCc8FmEb1nY

Day 4

Algorithmic Auditing 9:00AM to 9:25 Ballroom B She says Orcaa.ai is hiring, apply. Cathy O'Neil, CEO, data scientist, author of the NYT best seller Weapons of Math Destruction and The Shame Machine!!! Fairness engine. Racism machine. Discrimination machines. Blame machines. https://cathyoneil.org/ https://www.linkedin.com/company/orcaa-ai/ Is the FICO score legitimate? Can the racism and race agenda from on high be furthered by using AI on FICO to confirm or deny systematic unfairness? Explainable Fairness. Really calculate it out: is 1/8th black worse than 1/4th woman by these ranks of disability insurance? Outcome of interest as decider. Dispassionate, non-religious.

Accelerating AI Adoption for DoD Decision Advantage 9:30AM to 9:55AM Ballroom B https://www.linkedin.com/in/bill-streilein-084a78 Dr William "Bill" W Streilein, DoD Chief Digital and AI Office, CTO. WOW, MIT Lincoln Labs prior experience. ENHANCED BATTLESPACE AWARENESS. --He recommends: "https://www.tradewindai.com/" oh shit, shield. Share the best models, don't just hide and silo them. Joint warfighting, army navy airforce, how they come together to project force. Provide interoperation capability, ecosystem. Expanded digital talent management. MLOPS = bring the data in, iterate on the data, monitor it. Sanitized data. DoD isn't where the innovation comes from. Develop abilities outside the DoD and they will put it into the bots. "Task Force Lima", we're hiring. We can't get this code shit to work and we need you to build it. Accelerate the use cases and how to bring generative AI to the DoD space.

Large Language Models as Building Blocks. Jay Alammar, Cohere, director, engineering fellow, NLP https://www.linkedin.com/in/jalammar/ Cohere AI. https://www.linkedin.com/company/cohere-ai/

Accelerating AI Adoption for DoD Decision Advantage https://www.linkedin.com/in/bill-streilein-084a78/ National security JAIC Center: https://en.wikipedia.org/wiki/Joint_Artificial_Intelligence_Center AI isn't AI anymore when you know how it works and you can do it. He likes IBM Watson, which beat the then-winner on Jeopardy. AlphaFold predicting the structure of proteins given information; an algorithm able to figure it out. Conversational AI and ChatGPT, super excited.

All models great and small: it's 2024, you don't have to use GPT-4 for everything anymore. John Dickerson, PhD, author, co-founder, chief scientist https://www.linkedin.com/in/john-dickerson

Being, Training, and Employing Data Scientists: wisdoms and warnings from Harvard Data Science Review 10:00AM to 10:30AM Dr Xiao-Li Meng, Harvard Data Science Review founding editor-in-chief, Whipple V. N. Jones Professor of Statistics. What does it mean to be a successful data scientist?

Data Morph: a cautionary tale of summary statistics 11AM to 11:30AM Room 302 Stefanie Molin, Bloomberg, data scientist, software engineer, author of Hands-On Data Analysis with Pandas https://www.linkedin.com/in/stefanie-molin Get a handle on statistics. https://www.linkedin.com/company/bloomberg/

10 quick wins to expedite your job search 11AM to 11:30AM Adam Ross Nelson, Up Level Data LLC, data scientist and career coach. https://www.linkedin.com/in/arnelson/

  1. Mindset, make life better for others is the meaning of life.
  2. Use a second strategy, not just one like you're doing
  3. Online job posting is the biggest way someone got a job, next is through a friend.
  4. Do more of what is working and less of what isn't working.
  5. Professionalism is just a way to control others. Show up on time, speak in full sentences.
  6. Reconnect with an early boss I don't care how long it's been.
  7. Tell your friends and family what your career goals are; they can make something come through. Referrals.
  8. Chat with Recruiters.
  9. Post job opportunities for others, network at networking.
  10. Chronological, not functional.
  11. Refreshing portfolio impressions.

Professionalism means: "Be less like you and more like me".

Training an openAI quality text embedding model from scratch 11AM to 11:30AM Room 306 Andriy Mulyar Nomic AI Founder and CTO https://www.linkedin.com/in/andriymulyar

Trial, Error, Triumph lessons learned using LLMs for creating machine learning training data 11AM to 11:30 Ballroom B Matt Dzugan muck rack director of data https://www.linkedin.com/in/mattdzugan/ Company: https://www.linkedin.com/company/muckrack/

End to End deep learning for time series forecasting and analysis 11AM to 12:00PM Room 312 Isaac Godfried simspace senior data scientist. https://www.linkedin.com/in/isaac-godfried-70874466/ Kaggle link to files: https://www.kaggle.com/code/isaacmg/avocado-price-forecasting-with-flow-forecast-ff AI Camp tutorial: https://www.youtube.com/watch?v=dZzs0EdFkiE Speaker: Isaac Godfried https://github.com/AIStream-Peelout/flow-forecast
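The session above covers deep-learning forecasters (the flow-forecast library); for contrast, the naive moving-average baseline any such model must beat can be sketched in a few lines (a hypothetical helper of mine, not code from the linked repo):

```python
from collections import deque

def moving_average_forecast(series, window=3, horizon=2):
    """Naive baseline: forecast each future step as the mean of the
    last `window` values, feeding each forecast back into the window."""
    hist = deque(series[-window:], maxlen=window)
    out = []
    for _ in range(horizon):
        nxt = sum(hist) / len(hist)
        out.append(nxt)
        hist.append(nxt)  # recursive multi-step forecasting
    return out

print(moving_average_forecast([1.0, 2.0, 3.0], window=3, horizon=2))
# first step is mean(1, 2, 3) = 2.0; second step averages (2, 3, 2)
```

Reporting a deep model's error alongside this kind of baseline is standard practice in time-series work: if the network can't beat the moving average, the extra complexity isn't paying for itself.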

Practical applications of Bayesian statistics for business data science teams 11:35AM to 12:05 Room 304 Matt DiNauta, principal applied scientist, Zillow. https://www.linkedin.com/in/matt-dinauta/ Applied to Zillow https://www.linkedin.com/company/zillow/

--good contact Generative AI Guardrails for enterprise LLM solutions 11:35 to 12:05PM Room 302 Preethi Raghavan Fidelity Vice president NLP https://www.linkedin.com/in/preethi-raghavan-26669a2/ Apply to Fidelity, ooh green pyramid symbology. https://www.linkedin.com/company/fidelity-investments/

Harnessing GPT Assistants for superior model ensembles: A beginner's guide to AI stacked classifiers 12:05PM to 1:05PM Room 312. Jason Merwin, PHD https://www.linkedin.com/in/jason-merwin-7bb05013/ Instructions: https://drive.google.com/file/d/1PoSa9h490a00LxUjaBOr58NnwTtLsCXA/view Github with ipynb's https://github.com/jrmerwin/ODSC_East_4.25.24/tree/main
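A true stacked classifier trains a meta-model on the base models' outputs; the simplest ensemble combination, majority voting, can be sketched as follows (a toy example of mine, not code from the linked notebooks):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model prediction lists by majority vote.

    predictions: one list of class labels per base model.
    A stacked classifier would instead feed these columns of votes
    (or probabilities) into a trained meta-model.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

model_a = [0, 1, 1]
model_b = [0, 0, 1]
model_c = [1, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # [0, 1, 1]
```

Swapping the vote for a logistic regression (or, per the talk, a GPT-assisted meta-learner) over the same per-model outputs is what turns this into stacking.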

Model Evaluation in LLM-enhanced products 12:10PM to 12:40PM Ballroom B Sebastian Gehrmann PhD. Head of NLP Bloomberg. https://www.linkedin.com/in/sebastiangehrmann/

[email protected]

Conversational Data Intelligence: transforming data interaction and analysis 2pm to 2:30PM Room 302 Kevin Rohling, Presence product group, head of AI Engineering. https://www.linkedin.com/in/krohling Company Presence https://www.linkedin.com/company/presence-llc-/ github.com/krohling kevinrohling.com

Bringing precision medicine to the field of mental healthcare through LLM AI and psychedelics Room 306 Gregory Rysilk PhD

LangChain on Kubernetes: cloud native LLM deployment made easy and efficient 2:35PM to 3:05PM Room 304 Ezequiel Lanza, Intel open source evangelist. https://www.linkedin.com/in/ezelanza/

Slide downloads

Great link to all the slides, githubs and things, get them all:

https://docs.google.com/spreadsheets/d/1Xmhh1zfVuWgdyS6O-aKnvVJzDA4RVOvrmzDLoMuWOXM/edit#gid=0

Videos

They said that all videos would be available next week. Download the videos and get them into my terminal.
