My repository of solutions for the second edition of the AI Devs course.
Chapter/Lesson | Name | Topics |
---|---|---|
C01L01 | helloapi | Intro to AI Devs API |
C01L04 | moderation | OpenAI Moderation API |
C01L04 | blogger | Langchain + output parsers; ensuring correct format of response |
C01L05 | liar | Guard mechanism for LLM response |
C02L02 | inprompt | LLM response based on dynamic context |
C02L03 | embedding | OpenAI Embedding API |
C02L04 | whisper | OpenAI Whisper API |
C02L05 | functions | OpenAI Functions Calling |
C03L01 | rodo | Placeholders in prompts to improve privacy |
C03L02 | scraper | Scrape article and use it as dynamic context + guard mechanism |
C03L03 | whoami | Build dynamic context during consecutive API calls |
C03L04 | search | Vector DB + similarity search |
C03L05 | people | Vector DB + similarity search + Traditional DB == dynamic context |
C04L01 | knowledge | Choosing tool to call based on input |
C04L02 | tools | Intent detection |
C04L03 | gnome | OpenAI Vision API - image recognition |
C04L04 | ownapi | Dedicated backend for your AI assistant; experiments with ngrok |
C04L05 | ownapiapi | Extended ownapi with keeping conversation context |
C05L01 | meme | Generating a meme using RenderForm API |
C05L02 | optimaldb | Summarizing facts about people to optimize DB |
C05L03 | google | Searching Google using SerpAPI to provide GPT with dynamic context |
C05L04 | md2html | Fine-tuning of GPT-3.5 Turbo for converting Markdown to HTML |
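Several exercises above (C01L05 liar, C03L02 scraper) rely on a guard mechanism: ask a model whether the draft answer actually addresses the question, and only accept it on an explicit YES. A minimal sketch of that idea — the prompt wording and helper names are illustrative, not the repo's actual code:

```typescript
// Illustrative guard check, not the repo's actual implementation.
// Send the question plus the draft answer to a cheap model with a prompt
// like this, then accept the answer only on an explicit YES.
const guardPrompt = (question: string, answer: string): string =>
  `Does the ANSWER actually address the QUESTION? Reply with YES or NO only.\n` +
  `QUESTION: ${question}\nANSWER: ${answer}`;

// Parse the guard model's verdict defensively: anything that is not a
// clear YES counts as a rejection.
const isOnTopic = (guardReply: string): boolean =>
  guardReply.trim().toUpperCase().startsWith("YES");

console.log(isOnTopic("YES")); // true
console.log(isOnTopic("no, it drifts off topic")); // false
```

The defensive parse matters: models sometimes pad the verdict ("Yes, it does."), so the check normalizes the reply instead of comparing for strict equality.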
```bash
# Install dependencies
bun install

# Create env file and fill it with your data
cp .env.example .env

# Generate new exercise file
bun new helloapi
bun new people --dir  # create a directory for the exercise

# Run exercises (works for both single file and directory)
bun ex helloapi       # runs ./exercises/helloapi.ts
bun ex google         # runs ./exercises/google/google.ts
bun ex google/server  # runs ./exercises/google/server.ts
```
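Most exercise files follow the same token → task → answer flow against the AI Devs task API. A hypothetical skeleton of such a file (the endpoint paths, payload shapes, and `AIDEVS_API_KEY` variable are my assumptions, shown only to illustrate the flow):

```typescript
// Hypothetical skeleton of an exercise file such as ./exercises/helloapi.ts.
// Endpoint paths and payload shapes are assumptions about the AI Devs task API.
const API = "https://tasks.aidevs.pl";

// Exchange your personal API key for a one-time task token.
async function getToken(taskName: string, apiKey: string): Promise<string> {
  const res = await fetch(`${API}/token/${taskName}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apikey: apiKey }),
  });
  const data = await res.json();
  return data.token;
}

// Fetch the task description for that token.
async function getTask(token: string): Promise<unknown> {
  return (await fetch(`${API}/task/${token}`)).json();
}

// Submit the computed answer.
async function sendAnswer(token: string, answer: unknown): Promise<unknown> {
  const res = await fetch(`${API}/answer/${token}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ answer }),
  });
  return res.json();
}

// Typical flow — invoke main() from the exercise body with the API running
// and AIDEVS_API_KEY set in .env:
async function main() {
  const token = await getToken("helloapi", process.env.AIDEVS_API_KEY ?? "");
  const task = await getTask(token);
  console.log(task);
}
```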
Activate debug mode and enable detailed logging by setting these variables in your .env file:
```bash
# Enable verbose output
LANGCHAIN_VERBOSE=true

# Activate enhanced tracing
LANGCHAIN_TRACING_V2=true
```
I'm using Qdrant for the vector database exercises. You can run it locally using Docker:
```bash
# Pull the Docker image and run it
docker pull qdrant/qdrant
docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
```
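Once the container is up, Qdrant exposes a REST API on port 6333. A hedged sketch of talking to it with plain `fetch` — the repo may use a client library instead, and the collection name, vector size, and payloads here are illustrative:

```typescript
// Hedged sketch: local Qdrant over its REST API, no client library.
const QDRANT = "http://localhost:6333";

// Config for a collection of 1536-dimensional vectors (the size of OpenAI's
// text-embedding-ada-002 embeddings), compared by cosine similarity.
const collectionConfig = { vectors: { size: 1536, distance: "Cosine" } };

// PUT /collections/{name} creates the collection.
async function createCollection(name: string) {
  return fetch(`${QDRANT}/collections/${name}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(collectionConfig),
  });
}

// POST /collections/{name}/points/search returns the `limit` nearest points
// to the query vector, with their payloads.
async function search(name: string, vector: number[], limit = 3) {
  const res = await fetch(`${QDRANT}/collections/${name}/points/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ vector, limit, with_payload: true }),
  });
  return (await res.json()).result;
}
```

The mounted `qdrant_storage` volume in the `docker run` command above keeps collections across container restarts.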