- 39 Sections
- 223 Lessons
- 10 Weeks
- Module 1: Generative AI Fundamentals (9 lessons)
- 1.1 What is Generative AI & Why It Matters in 2025
- 1.2 Generative AI vs Traditional AI vs Machine Learning
- 1.3 Types of Generative AI — Text, Image, Audio, Video, Code
- 1.4 Real-World Generative AI Use Cases
- 1.5 Generative AI Market — Tools, Platforms & Companies
- 1.6 Generative AI Career Roadmap — Roles & Skills 2025
- 1.7 Risks & Limitations — Hallucinations, Bias, Security
- 1.8 Responsible AI — Ethics & Governance
- 1.9 Lab: Explore ChatGPT, Claude, Gemini — Prompt Comparison
- Module 2: Python for Generative AI (9 lessons)
- 2.1 Python Refresher for AI Engineers
- 2.2 Virtual Environments — venv & conda
- 2.3 Essential Libraries Overview
- 2.4 Working with JSON & Structured Outputs
- 2.5 File Handling — PDF, DOCX, TXT, CSV for AI Pipelines
- 2.6 Python Async — asyncio for AI Applications
- 2.7 Error Handling & Logging in AI Applications
- 2.8 Lab: Build Simple Python Script to Call OpenAI API
- 2.9 Lab: Parse PDF & Text Files for AI Processing
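As a preview of lesson 2.4's structured-output handling, here is a minimal stdlib-only sketch of a common pattern: pulling a JSON object out of a chat model's reply, which often arrives wrapped in prose or a code fence. The `extract_json` helper and the sample reply are illustrative, not taken from the course materials.

```python
import json
import re

def extract_json(llm_output: str) -> dict:
    """Pull the first JSON object out of a model reply that may
    wrap it in prose or a ```json fence."""
    match = re.search(r"\{.*\}", llm_output, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

reply = 'Sure! Here is the result:\n```json\n{"name": "Ada", "score": 9}\n```'
print(extract_json(reply))  # {'name': 'Ada', 'score': 9}
```

In production code you would pair this with a retry loop: if `json.loads` fails, re-ask the model with the parse error appended to the prompt.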
- Module 3: Large Language Models (LLM) — Core Concepts (12 lessons)
- 3.1 What is a Large Language Model (LLM)
- 3.2 How LLMs Work — Transformer Architecture (Overview)
- 3.3 Tokens — What They Are & Why They Matter
- 3.4 Context Window — Limits & Strategies
- 3.5 LLM Parameters — Temperature, Top-P, Top-K, Max Tokens
- 3.6 LLM Families & Models
- 3.7 Closed Source vs Open Source LLMs — When to Use What
- 3.8 LLM Benchmarks — MMLU, HumanEval, HellaSwag
- 3.9 Multimodal LLMs — Text + Image + Audio
- 3.10 LLM APIs — OpenAI, Anthropic, Google, Hugging Face
- 3.11 Lab: Compare LLM Outputs — GPT-4o vs Claude vs Gemini
- 3.12 Quiz: LLM Fundamentals — 15 Questions
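Lessons 3.3 and 3.4 cover tokens and context-window budgeting. A rough, self-contained sketch of the idea (the 4-characters-per-token heuristic is a rule of thumb for English text, not an exact count; real counts come from the model's own tokenizer, e.g. tiktoken):

```python
def rough_token_count(text: str) -> int:
    # Rule of thumb for English text: roughly 4 characters per token.
    # Real counts require the model's own tokenizer.
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_window: int = 8192, reserve: int = 1024) -> bool:
    # Budgeting strategy: reserve room for the model's answer so the
    # prompt never fills the whole window.
    return rough_token_count(prompt) <= context_window - reserve

print(fits_context("hello world"))  # True
print(fits_context("x" * 40_000))   # False: ~10,000 tokens > 8192 - 1024
```

The `context_window` and `reserve` defaults here are illustrative; each model family publishes its own limits.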
- Module 4: Prompt Engineering — Complete Guide (13 lessons)
- 4.1 What is Prompt Engineering & Why It Matters
- 4.2 Anatomy of a Good Prompt
- 4.3 Prompting Techniques
- 4.4 Prompt Templates — Reusable Prompt Patterns
- 4.5 Avoiding Hallucinations with Better Prompts
- 4.6 Prompt Injection Attacks & Defense
- 4.7 Prompt Chaining — Connect Multiple Prompts
- 4.8 Output Formatting — JSON, Markdown, Tables
- 4.9 Prompt Versioning & Management
- 4.10 Lab: Prompt Engineering for Data Extraction Task
- 4.11 Lab: Chain of Thought — Complex Reasoning Tasks
- 4.12 Lab: Build Reusable Prompt Template Library
- 4.13 Quiz: Prompt Engineering — 20 Questions
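In the spirit of labs 4.10 and 4.12, a minimal sketch of a reusable prompt-template registry using only the standard library. The registry name, template, and fields are hypothetical examples, not the course's actual library.

```python
from string import Template

# Hypothetical template registry; names and fields are illustrative.
TEMPLATES = {
    "extract": Template(
        "Extract these fields as JSON: $fields\n"
        "Return only valid JSON, no commentary.\n\n"
        "Text:\n$text"
    ),
}

def render(name: str, **kwargs: str) -> str:
    # Template.substitute raises KeyError if a placeholder is missing,
    # which catches broken prompts before they ever reach the model.
    return TEMPLATES[name].substitute(**kwargs)

prompt = render("extract", fields="name, date", text="Invoice from Ada, 2025-01-02")
```

Keeping templates in one registry also makes lesson 4.9's versioning straightforward: the registry key can carry a version suffix.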
- Module 5: LLM APIs & Integration (8 lessons)
- Module 6: LangChain — LLM Application Framework (16 lessons)
- 6.1 What is LangChain & Why Use It
- 6.2 LangChain Architecture — Components Overview
- 6.3 LangChain Models — LLMs & Chat Models
- 6.4 LangChain Prompts
- 6.5 LangChain Chains
- 6.6 LangChain Memory
- 6.7 LangChain Document Loaders
- 6.8 LangChain Text Splitters
- 6.9 LangChain Output Parsers
- 6.10 LangChain Expression Language (LCEL)
- 6.11 LangChain Tools & Toolkits
- 6.12 LangSmith — Tracing & Debugging LangChain Apps
- 6.13 Lab: Build Conversational Chatbot with Memory
- 6.14 Lab: Multi-Step Document Processing Chain
- 6.15 Lab: LangSmith — Debug & Trace LLM Application
- 6.16 Quiz: LangChain — 20 Questions
- Module 7: RAG — Retrieval Augmented Generation (18 lessons)
- 7.1 What is RAG & Why It Solves the Hallucination Problem
- 7.2 RAG Architecture — Complete Flow
- 7.3 Embeddings — What They Are & How They Work
- 7.4 Embedding Models Comparison
- 7.5 Vector Databases — Complete Guide
- 7.6 Chunking Strategies — Critical for RAG Quality
- 7.7 Similarity Search — Cosine, Dot Product, Euclidean
- 7.8 RAG Pipeline with LangChain
- 7.9 RAG Pipeline with LlamaIndex
- 7.10 Advanced RAG Techniques
- 7.11 RAG Evaluation — Metrics & Tools
- 7.12 RAG on Structured Data — SQL + LLM
- 7.13 Multimodal RAG — Images + Text
- 7.14 Filtering Extraneous Source-Document Content That Degrades RAG Quality
- 7.15 Lab: Build RAG Chatbot on Your Own PDF Documents
- 7.16 Lab: Advanced RAG with Hybrid Search & Reranking
- 7.17 Lab: Evaluate RAG Pipeline with RAGAs
- 7.18 Quiz: RAG Concepts — 20 Questions
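Lesson 7.7 compares similarity measures for retrieval. A self-contained sketch of cosine similarity driving a toy nearest-document lookup (the two-dimensional vectors and document names are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
docs = {"doc_a": [0.9, 0.1], "doc_b": [0.0, 1.0]}

# Retrieval step of RAG, reduced to its core: rank by similarity.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # doc_a
```

Note that for embeddings normalized to unit length (as many embedding models produce), cosine similarity and dot product give the same ranking.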
- Module 8: Vector Databases — Deep Dive (11 lessons)
- 8.1 Vector Database Architecture — HNSW & IVF Indexes
- 8.2 Pinecone — Complete Guide
- 8.3 Chroma — Local & Production Setup
- 8.4 Weaviate — Schema & GraphQL
- 8.5 Qdrant — Collections & Payload Filtering
- 8.6 pgvector — Vector Search in PostgreSQL
- 8.7 Databricks Vector Search
- 8.8 Choosing the Right Vector Database — Decision Guide
- 8.9 Vector DB Performance — Tuning & Scaling
- 8.10 Lab: Build & Compare RAG with Pinecone vs Chroma
- 8.11 Lab: Databricks Vector Search — End-to-End RAG
- Module 9: LlamaIndex — Data Framework for LLMs (11 lessons)
- 9.1 What is LlamaIndex & How It Differs from LangChain
- 9.2 LlamaIndex Core Concepts
- 9.3 Index Types in LlamaIndex
- 9.4 LlamaIndex Data Connectors — 100+ Sources
- 9.5 LlamaIndex Response Synthesizers
- 9.6 LlamaIndex with Databricks
- 9.7 Advanced LlamaIndex
- 9.8 LlamaIndex Evaluation — Faithfulness & Relevancy
- 9.9 LlamaIndex vs LangChain — When to Use What
- 9.10 Lab: Multi-Document Q&A System with LlamaIndex
- 9.11 Lab: Sub-Question Query Engine for Complex Queries
- Module 10: Fine-Tuning LLMs (13 lessons)
- 10.1 What is Fine-Tuning & When to Use It vs RAG
- 10.2 Fine-Tuning vs RAG vs Prompt Engineering — Decision Guide
- 10.3 Types of Fine-Tuning
- 10.4 Fine-Tuning Dataset Preparation
- 10.5 Fine-Tuning with OpenAI API
- 10.6 Fine-Tuning Open Source Models
- 10.7 Fine-Tuning on Databricks
- 10.8 Model Evaluation — BLEU, ROUGE, Perplexity
- 10.9 Quantization — GPTQ, GGUF, AWQ
- 10.10 Deploying Fine-Tuned Models
- 10.11 Lab: Fine-Tune LLaMA 3 with LoRA on Custom Dataset
- 10.12 Lab: Fine-Tune OpenAI Model on Domain-Specific Data
- 10.13 Quiz: Fine-Tuning — 15 Questions
- Module 11: Agentic AI — AI Agents Deep Dive
- SECTION A — Agent Fundamentals (5 lessons)
- SECTION B — Agent Planning Algorithms
- SECTION C — Agent Memory Architecture
- SECTION D — Tools & Tool Selection Strategies
- SECTION E — Failure Recovery Strategies
- SECTION F — Agent Cost Control
- SECTION G — Agent Evaluation Metrics
- SECTION H — Agent State Management
- SECTION I — Agent Safety Guardrails
- SECTION J — LangChain Agents — Implementation
- SECTION K — Hands-On Labs
- Module 12: Multi-Agent Systems
- SECTION A — Multi-Agent Fundamentals
- SECTION B — Agent Orchestration Patterns
- SECTION C — LangGraph — Stateful Agent Workflows
- SECTION D — CrewAI — Multi-Agent Framework
- SECTION E — AutoGen — Microsoft Multi-Agent Framework
- SECTION F — Advanced Multi-Agent Topics
- SECTION G — Multi-Agent Use Cases
- SECTION H — Hands-On Labs
- Module 13: Databricks for Generative AI (18 lessons)
- 13.1 Databricks AI — Complete Platform for GenAI
- 13.2 Mosaic AI — Databricks GenAI Stack
- 13.3 Databricks Runtime ML — Optimized for AI
- 13.4 Writing Chunked Text into Delta Lake — Unity Catalog
- 13.5 MLflow PyFunc Chain
- 13.6 Databricks Model Serving
- 13.7 Databricks Foundation Model APIs
- 13.8 Databricks AI Playground — Interactive LLM Testing
- 13.9 Databricks Vector Search — Enterprise RAG
- 13.10 Databricks AI Functions — SQL + LLM
- 13.11 MLflow for GenAI
- 13.12 Build RAG on Databricks
- 13.13 Databricks GenAI Pipeline — End to End
- 13.14 Pipeline Flow: Ingest Documents → Delta Lake → Chunk & Embed → Vector Search → Query → Retrieve → Generate → Serve
- 13.15 Lab: Build RAG with Databricks Vector Search
- 13.16 Lab: Use AI Functions in SQL — Classify & Summarize
- 13.17 Lab: Full GenAI Pipeline on Databricks Lakehouse
- 13.18 Quiz: Databricks GenAI — 20 Questions
- Module 14: Advanced RAG — Production Techniques (14 lessons)
- 14.1 RAG Failure Modes — Why RAG Fails in Production
- 14.2 Advanced Chunking
- 14.3 Advanced Retrieval Techniques
- 14.4 Query Transformation
- 14.5 GraphRAG — Microsoft Knowledge Graph + RAG
- 14.6 Corrective RAG (CRAG)
- 14.7 Self-RAG — LLM Decides When to Retrieve
- 14.8 Adaptive RAG — Dynamic Strategy Selection
- 14.9 RAG Caching — Semantic Cache
- 14.10 RAG with Structured Data
- 14.11 Multimodal RAG
- 14.12 Lab: GraphRAG — Build Knowledge Graph from Documents
- 14.13 Lab: Hybrid Search + Reranking Pipeline
- 14.14 Lab: Text-to-SQL Agent with RAG Fallback
- Module 15: LLM Evaluation & Observability (16 lessons)
- 15.1 Why LLM Evaluation is Critical in Production
- 15.2 Types of Evaluation
- 15.3 RAG Evaluation Metrics
- 15.4 RAGAs Framework — Automated RAG Evaluation
- 15.5 LLM as Judge — Use GPT-4 to Evaluate Outputs
- 15.6 DeepEval — LLM Testing Framework
- 15.7 Promptfoo — Prompt Testing & Comparison
- 15.8 LangSmith — Tracing & Observability
- 15.9 Arize Phoenix — Open Source LLM Observability
- 15.10 MLflow LLM Evaluate — Databricks Integration
- 15.11 A/B Testing Prompts in Production
- 15.12 Inference Tables in Databricks — Track Live LLM Endpoints
- 15.13 Monitoring LLMs in Production
- 15.14 Lab: Evaluate RAG Pipeline with RAGAs
- 15.15 Lab: LangSmith Tracing — Debug Multi-Agent System
- 15.16 Lab: MLflow Evaluate — LLM Quality Metrics on Databricks
- Module 16: Building Production GenAI Applications (15 lessons)
- 16.1 Production GenAI Architecture — Best Practices
- 16.2 FastAPI for LLM Applications
- 16.3 Streamlit for GenAI Demos
- 16.4 Gradio for AI Applications
- 16.5 LLM Caching — Reduce Cost & Latency
- 16.6 Async LLM Calls — Handle High Throughput
- 16.7 Token Optimization Strategies
- 16.8 Security for GenAI Applications
- 16.9 Cost Management
- 16.10 Containerize GenAI App — Docker
- 16.11 Legal & Licensing Requirements for Data Sources — Copyright, Fair Use, Open Source Licenses
- 16.12 Deploy GenAI App — Cloud Options
- 16.13 Lab: Build Production RAG API with FastAPI
- 16.14 Lab: Streamlit Chat Application with File Upload
- 16.15 Lab: Add Guardrails to LLM Application
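The LLM-caching lesson in this module is about skipping a paid API call when the same prompt arrives twice. A minimal exact-match cache sketch, with `call_model` standing in for any real LLM client (the names here are illustrative only; production systems often extend this to semantic caching over embeddings):

```python
import hashlib

_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_model) -> str:
    # Exact-match cache: identical prompts hit the cache and skip
    # the (paid, slow) model call. `call_model` is any callable
    # taking a prompt string and returning the completion.
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]

calls = []
def fake_model(p: str) -> str:
    calls.append(p)        # record how often the "API" is hit
    return p.upper()

first = cached_completion("hello", fake_model)
second = cached_completion("hello", fake_model)  # served from cache
print(first, len(calls))  # HELLO 1
```

Hashing the prompt keeps cache keys fixed-size regardless of prompt length; for nondeterministic settings (temperature > 0) caching changes behavior, which is why caching is usually applied to deterministic or retrieval-style calls.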
- Module 17: Open Source LLMs & Local Deployment (12 lessons)
- 17.1 Why Run LLMs Locally — Privacy & Cost
- 17.2 Popular Open Source LLMs
- 17.3 Ollama — Run LLMs Locally
- 17.4 LM Studio — GUI for Local LLMs
- 17.5 Hugging Face Transformers — Load & Run Models
- 17.6 GGUF Format — Quantized Models for CPU
- 17.7 vLLM — High Throughput LLM Serving
- 17.8 Llama.cpp — Efficient CPU Inference
- 17.9 Open Source Embeddings — Nomic, BGE, E5
- 17.10 Build Private RAG with Ollama + Chroma
- 17.11 Lab: Run LLaMA 3 Locally with Ollama
- 17.12 Lab: Build Fully Private RAG — No Cloud APIs
- Module 18: Generative AI for Data Engineering (10 lessons)
- 18.1 GenAI Use Cases in Data Engineering
- 18.2 Text-to-SQL — Natural Language to SQL Queries
- 18.3 Data Pipeline Code Generation
- 18.4 Data Quality with LLMs
- 18.5 Document Processing Pipelines
- 18.6 LLM for Data Catalog
- 18.7 Databricks AI Functions for Data Teams
- 18.8 Lab: Text-to-SQL Agent on Data Warehouse
- 18.9 Lab: Auto-Generate Column Descriptions using LLMs
- 18.10 Lab: Process 1000 PDFs with LLM Pipeline on Databricks
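The Text-to-SQL lessons in this module pair a model-generated query with execution guardrails. A minimal sketch of the execution side using stdlib sqlite3; the schema, data, and read-only check are illustrative assumptions, not the course's actual lab code (real systems also parameterize, sandbox, and limit result sizes).

```python
import sqlite3

def run_generated_sql(db: sqlite3.Connection, sql: str) -> list:
    # Guardrail: only allow read-only SELECT statements produced by
    # the model; everything else is refused before execution.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return db.execute(sql).fetchall()

# Toy warehouse table for the demo.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

# Imagine this string came back from the LLM for the question
# "how many orders, and what do they total?"
rows = run_generated_sql(db, "SELECT COUNT(*), SUM(amount) FROM orders")
print(rows)  # [(2, 35.5)]
```

A prefix check like this is deliberately crude; production guardrails typically parse the SQL and run against a read-only connection or role.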
- Module 19: End-to-End Generative AI Projects (3 lessons)
- Module 20: Generative AI Certification Preparation (10 lessons)
- 20.1 AWS Certified Machine Learning Specialty — GenAI Topics
- 20.2 Google Cloud Professional ML Engineer — GenAI Topics
- 20.3 Databricks Generative AI Engineer Associate — Full Guide
- 20.4 Microsoft AI-102 Azure AI Engineer — Overview
- 20.5 DeepLearning.AI Courses — Recommended Path
- 20.6 Practice Questions — Full Mock Tests (100 Questions)
- 20.7 Top 60 Generative AI Interview Questions & Answers
- 20.8 Resume Building for GenAI Engineer Roles
- 20.9 LinkedIn Profile Optimization
- 20.10 Salary Negotiation — GenAI Engineer Roles
