
artificial-intelligence's Introduction

Hi there 👋

  • 🔭 I'm currently working on Situational Awareness, Drones and Unmanned Surface Vehicles (USV), and Computer Vision applications.
  • 🌱 I'm currently learning how to optimize local LLM deployments.
  • 👯 I'm looking to collaborate on developing SaaS AI-enabled applications.
  • 🤔 I'm looking for help with Web UI frameworks.
  • 💬 Ask me about AI and ML.
  • 📫 How to reach me: Twitter: monogioudis, LinkedIn: pantelis
  • ⚡ Fun fact: Every August I effectively disconnect for two weeks, vacationing on the island of Chios, Greece.

Bio

I blog at aegean.ai and you can read my short bio here.


artificial-intelligence's Issues

Issue on page /aiml-common/lectures/pgm/bayesian-inference/_index.html

Clarify the Bayesian formula given here by David MacKay.

"In this formula H is the overall hypothesis space, so there is no point in putting this as a p(H|xyz). I would read the formula as given in the lecture where a portion of the hypothesis space refers to the prior model of w (isotropic gaussian) and another to the linear relationship between x and y manifested via the conditional mean of p(y|x)=wo+w1*x."

Issue on page /aiml-common/lectures/nlp/rnn-language-models/_index.html

To train an LSTM language model

We start with a big corpus of text, which is a sequence of tokens $\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(T)}$, where $T$ is the number of words/tokens in the corpus.

At every time step $t$ we feed one word to the LSTM and compute the output probability distribution $\hat{\mathbf{y}}^{(t)}$, which is, by construction, a conditional probability distribution over every word in the dictionary given the words we have seen so far: $\hat{\mathbf{y}}^{(t)} = p\left(\mathbf{x}^{(t+1)} \mid \mathbf{x}^{(t)}, \dots, \mathbf{x}^{(1)}\right)$.

The loss function at time step $t$ is the classic cross-entropy loss between the predicted probability distribution $\hat{\mathbf{y}}^{(t)}$ and the distribution $\mathbf{y}^{(t)}$ that corresponds to the one-hot encoded true next token: $J^{(t)}(\theta) = \mathrm{CE}\left(\mathbf{y}^{(t)}, \hat{\mathbf{y}}^{(t)}\right) = -\sum_{w \in V} \mathbf{y}^{(t)}_w \log \hat{\mathbf{y}}^{(t)}_w = -\log \hat{\mathbf{y}}^{(t)}_{x^{(t+1)}}$.

Average all the $T$ per-step losses to obtain the overall training objective: $J(\theta) = \frac{1}{T} \sum_{t=1}^{T} J^{(t)}(\theta)$.
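As a sketch of these steps end to end, here is a minimal PyTorch training step for an LSTM language model; the vocabulary size, hyperparameters, and random batch are illustrative assumptions, not from the lecture:

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions for this sketch, not from the lecture).
vocab_size, embed_dim, hidden_dim, seq_len, batch_size = 1000, 64, 128, 32, 16

class LSTMLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)  # logits over the dictionary

    def forward(self, tokens):
        # tokens: (batch, T) -> logits: (batch, T, vocab_size); row t is the
        # distribution over the next word given the prefix x^(1), ..., x^(t).
        hidden, _ = self.lstm(self.embed(tokens))
        return self.proj(hidden)

model = LSTMLanguageModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()  # averages the per-step losses J^(t)

# Random stand-in for one corpus batch: input x^(t) must predict x^(t+1).
tokens = torch.randint(vocab_size, (batch_size, seq_len + 1))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

optimizer.zero_grad()
logits = model(inputs)                            # (batch, T, vocab_size)
loss = criterion(logits.reshape(-1, vocab_size),  # cross entropy against the
                 targets.reshape(-1))             # one-hot true next token
loss.backward()
optimizer.step()
print(f"average cross-entropy over all T steps: {loss.item():.3f}")
```

Note that `nn.CrossEntropyLoss` applied to the flattened logits computes exactly the average of the per-step losses $J^{(t)}(\theta)$ over all $T$ positions in the batch.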
