The AI Engineering Bootcamp
Build production Agent applications in 2025 with leading frameworks and best practices, from prototype to production, cloud to on-prem.
Prototype agentic apps and get them into production. Ship fast, then scale.


Dr. Greg" Loughnane and Chris "The Wiz πͺ" Alexiuk
Co-founders of AI Makerspace. Teaching weekly LLM concepts + code to thousands.
Industry-leading tooling, LLMs, & infra












Our Community
🏗️ 🚢 🚀 AI Makerspace is on a mission to create the world's leading community for people who want to build, ship, and share production LLM applications.
Welcome! We invite you to join our close-knit AI Makerspace community on YouTube, where every Wednesday (10am PST / 1pm EST) we host a free, live event covering bleeding-edge tools, concepts, and code. See all of our upcoming free events!
Join peers from around the world who are building, shipping, and sharing every day in our Gen AI Discord community.
Our Ethos
Build 🏗️, Ship 🚢, Share 🚀!
This is the secret to success in the 21st-century workforce, and you will live it in every class, every week of the cohort!
Specifically, you will get hands-on experience building and sharing complex LLM and agentic applications weekly, receiving constant feedback from instructors, peer supporters, and your fellow cohort members during live sessions and on your homework submissions.
What are the prerequisites?
You must be somewhat familiar with Python programming, prompt engineering, and data science. However, most of all, you must be driven to succeed in Generative AI in 2025.
How will I know if it's right for me?
Check out a detailed curriculum & schedule!
Watch some of the amazing Demo Day presentations from previous cohorts!
People just like you have transformed in amazing ways; hear their stories in our Transformation of the Week video series!
Read more about the student experience from Richard Gower of Cohort 3, and check out Ches Roldan's article, "20 Tips to Survive AI Makerspace's AI Engineering Bootcamp (as a non-technical person)".
How much do the tools cost?
Don't worry! This course is for the GPU-poor among us. It is built around industry-standard open-source software and compute hardware, is cloud-platform agnostic, and we strive to keep your additional costs low (below ~$100 total). Check the Q&A below for more info.
Who is this course for?
1. Data scientists and machine learning engineers aiming to become full-stack AI practitioners.
2. Software engineers who want to build complex LLM and agentic applications.
3. Technical product managers or execs who guide their teams on what to build, why, and how.
What you'll get out of this course
- Demo Day! You will present a unique project live to a cohort of your peers and the public!
- Hiring Opportunities: Certified AI Engineers get direct access to job opportunities from our network.
- Peer-Supported Live Coding: Work with certified AI Engineers in dedicated small groups to discuss and code throughout the cohort!
- 1:1 Career Coaching: Student Success will help you achieve your goals before and after certification!
- Prompt Engineering: Leverage in-context learning as an engineer for prototyping, RAG applications, agents, and more!
- Retrieval Augmented Generation (RAG): Build RAG applications and ground LLM outputs in your own reference data!
- Fine-Tuning: Efficiently fine-tune LLM and embedding models to adapt them to your task or domain!
- Agents: Build complex LLM applications capable of reasoning, taking action, and working with external tools!
- Reasoning & Test-Time Compute: From o1 to agentic conversation programming, we'll cover the latest in LLM reasoning and frameworks.
- Evaluation: Instrument agent applications for metrics-driven development to show your quality improvements quantitatively.
- Observability: Debug, test, and monitor applications built using any leading LLM framework!
- Inference Optimization: Learn the leading tools and quantization techniques to accelerate inference for the latency and throughput you need.
- On-Prem Serving: Learn to efficiently leverage resources so you can keep your data super private in the years to come.

Cohort 6
Apr 1 - June 5, 2025
Cohort 7
June 17 - Aug 21, 2025
Every class is taught live and led by Dr. Greg and The Wiz every Tuesday and Thursday at 7pm EST/4pm PST, along with professional peer supporters assigned to you based on skill set.
** This class has limited seating **
Apply Now
Course Curriculum
Get the most comprehensive curriculum from the highest-rated bootcamp on Maven!
Week 1: April 1 - April 6
- Pre-Work, Session 1
- The Four Prototyping Patterns: Prompting, Fine-Tuning, RAG, Agentic Reasoning
- The LLM Application Stack
- Prompt Engineering: Best Practices
- Prompt Iteration through a User Interface
- LLM API Roles: System, User, Assistant (see the sketch after this list)
- Building and Sharing Your First Use Case Prototype
- GenAI Toolbox
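To make the System/User/Assistant roles above concrete, here is a minimal sketch using the OpenAI Python SDK; the model name and prompts are illustrative placeholders, not course materials:

```python
# Minimal sketch of chat roles with the OpenAI Python SDK (openai>=1.0).
# The model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        # "system" sets behavior, "user" carries the request,
        # "assistant" holds prior model turns in multi-turn chats.
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain retrieval augmented generation in one sentence."},
    ],
)

print(response.choices[0].message.content)
```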
- Pre-Work, Session 2
- Introduction to Embeddings and Similarity Search
- Vector Databases
- Embedding Models vs. LLM Chat Models
- Introduction to Retrieval Augmented Generation (RAG)
- Building Your First RAG Application from Scratch in Python (see the sketch after this list)
- RAG Builder Toolbox
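As a rough preview of the "RAG from scratch" flow above (embed documents, retrieve by similarity, ground the answer in context), here is a minimal sketch; the model names and the tiny in-memory "vector store" are illustrative assumptions:

```python
# Minimal RAG-from-scratch sketch: embed docs, retrieve by cosine similarity,
# then ground the LLM's answer in the retrieved context.
# Model names and the in-memory store are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "AI Makerspace classes run live on Tuesdays and Thursdays.",
    "Retrieval Augmented Generation grounds LLM outputs in reference data.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def retrieve(query, k=1):
    q = embed([query])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

question = "What does RAG do?"
context = "\n".join(retrieve(question))
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```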
Week 2: April 7 - April 13
- Pre-Work, Session 3
- Demo Day Overview
- Industry and Cohort Use Cases
- Building Simple User Interfaces with Generative AI
- PDF Parsing 101
- LLM Rate Limits (see the retry sketch after this list)
- Shipping and Sharing a Rate-Unlimited, PDF-Upload-Ready RAG Application
- End-to-End RAG Toolbox
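Because every hosted LLM API enforces rate limits, a common pattern is retrying with exponential backoff. Here is a minimal sketch using the tenacity library; the retry settings and model name are illustrative assumptions:

```python
# Minimal sketch: retry an LLM call with exponential backoff when rate-limited.
# The retry settings and model name are illustrative assumptions.
from openai import OpenAI, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

client = OpenAI()

@retry(
    retry=retry_if_exception_type(RateLimitError),        # only retry 429-style errors
    wait=wait_exponential(multiplier=1, min=1, max=30),   # 1s, 2s, 4s, ... capped at 30s
    stop=stop_after_attempt(5),
)
def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Summarize this PDF chunk: ..."))
```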
- Pre-Work, Session 4
- LangChain Core Constructs
- LangChain Expression Language: Chains and Runnables (see the sketch after this list)
- LangChain vs. LangGraph
- Monitoring, Visibility, and Observability with LangSmith
- Evaluation Best Practices for RAG
- Building and Sharing a Production-Grade RAG Application
- Production RAG Toolbox
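For a taste of LangChain Expression Language, here is a minimal prompt | model | parser chain; the model and prompt are placeholders, and a real production RAG chain would also pipe in a retriever:

```python
# Minimal LCEL sketch: compose runnables with the | operator.
# Model and prompt are illustrative; a real RAG chain would add a retriever step.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the provided context."),
    ("human", "Context: {context}\n\nQuestion: {question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | StrOutputParser()  # each step is a Runnable

print(chain.invoke({
    "context": "RAG grounds LLM outputs in your own reference data.",
    "question": "Why use RAG?",
}))
```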
Week 3: April 14 - April 20
- Pre-Work, Session 5
- Agents and Agentic Reasoning: Three Introductions
- The Reasoning-Action (ReAct) Framework
- Enhancing Search and Retrieval with Function Calling
- Reflection, Tool Use, and Planning
- Directed Cyclic Graphs (see the LangGraph sketch after this list)
- Building a Production-Grade Agentic RAG Application
- Production Agent Application Toolbox
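To show why directed cyclic graphs matter for agents, here is a minimal LangGraph sketch of a retrieve-generate loop that can cycle back when a draft answer is not grounded; the state fields, node bodies, and grading rule are simplified assumptions:

```python
# Minimal LangGraph sketch: an agentic RAG loop as a directed cyclic graph.
# State fields, node bodies, and the grading rule are simplified assumptions.
from typing import TypedDict
from langgraph.graph import END, StateGraph

class AgentState(TypedDict):
    question: str
    context: str
    answer: str
    attempts: int

def retrieve(state: AgentState) -> dict:
    # In a real app this would call a retriever / vector store.
    return {"context": f"docs about: {state['question']}", "attempts": state["attempts"] + 1}

def generate(state: AgentState) -> dict:
    # In a real app this would call an LLM with the retrieved context.
    return {"answer": f"Answer based on {state['context']}"}

def grade(state: AgentState) -> str:
    # Loop back to retrieval until the answer looks grounded (or after 3 tries).
    grounded = "docs" in state["answer"]
    return "done" if grounded or state["attempts"] >= 3 else "retry"

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_conditional_edges("generate", grade, {"retry": "retrieve", "done": END})

app = graph.compile()
print(app.invoke({"question": "What is agentic RAG?", "context": "", "answer": "", "attempts": 0}))
```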
- Pre-Work, Session 6
- Multi-Agent Architectures: Hierarchical, Collaboration, Supervision
- Agent Supervisors as Routers
- Event-Driven vs. Graph Traversal Frameworks
- Building a Multi-Agent Application with LangGraph
- Multi-Agent Application Toolbox
Week 4: April 21 - April 27
- Pre-Work, Session 7
- Synthetic Data Generation (SDG) for Fine-Tuning & Alignment of LLMs
- SDG for Domain Adaptation
- SDG for State-of-the-Art Small Language Models
- SDG in Industry
- SDG for RAG Evaluation
- Evolving Instructions to Increase Depth and Breadth (see the sketch after this list)
- Custom Synthetic Test Data Generation for RAG Evaluation
- SDG Toolkit
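One way to evolve instructions for synthetic data generation is simply to ask an LLM to rewrite a seed instruction with more depth or breadth; a minimal sketch in that spirit, with the evolution prompt and model name as assumptions:

```python
# Minimal sketch of instruction evolution for synthetic data generation:
# ask an LLM to rewrite a seed instruction to be deeper or broader.
# The evolution prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

EVOLVE_PROMPT = (
    "Rewrite the following instruction so it requires deeper reasoning or "
    "covers a broader scenario, while staying answerable:\n\n{instruction}"
)

def evolve(instruction: str, rounds: int = 2) -> list[str]:
    evolved = [instruction]
    for _ in range(rounds):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": EVOLVE_PROMPT.format(instruction=evolved[-1])}],
        )
        evolved.append(resp.choices[0].message.content)
    return evolved

for step in evolve("Summarize the attached quarterly report."):
    print(step, "\n---")
```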
- Pre-Work, Session 8
- Metrics-Driven Development
- RAG Assessment (RAGAS) Framework (see the sketch after this list)
- RAG Metrics: Context Precision/Recall, Faithfulness, etc.
- Agent and Tool Use Metrics: Topic Adherence, etc.
- General-Purpose Evaluation: Aspect Critiques
- Assessing the Accuracy of Retrieval and Generation in Agentic RAG Applications
- RAG Evaluation Toolbox
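Here is a minimal sketch of metrics-driven RAG evaluation with the RAGAS library; note that import paths and dataset column names vary across ragas versions, so treat this as the general shape rather than a drop-in script:

```python
# Minimal RAGAS sketch: score a small evaluation set on core RAG metrics.
# Import paths and column names vary by ragas version; treat as an outline.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness

eval_set = Dataset.from_dict({
    "question": ["What does RAG do?"],
    "contexts": [["RAG grounds LLM outputs in your own reference data."]],
    "answer": ["RAG grounds model answers in retrieved reference data."],
    "ground_truth": ["RAG grounds LLM outputs in reference data."],
})

results = evaluate(
    eval_set,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(results)  # per-metric scores you can track release over release
```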
Week 5: April 28 - May 4
- Pre-Work, Session 9
- Massive Text Embedding Benchmark (MTEB)
- Downloading Open-Source Model Weights
- Loading LLMs on GPUs
- Hugging Face Sentence Transformers
- Fine-Tuning Embedding Models for RAG using HF Sentence Transformers (see the sketch after this list)
- Embedding Fine-Tuning Toolkit
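As a rough sketch of embedding fine-tuning with Sentence Transformers, the classic fit() loop with an in-batch-negatives loss looks like this; the base model, training pairs, and hyperparameters are placeholders (newer library versions also offer a Trainer-style API):

```python
# Minimal sketch: fine-tune an embedding model on (query, relevant passage) pairs.
# Base model, data, and hyperparameters are illustrative placeholders.
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

train_examples = [
    InputExample(texts=["What time are classes?", "Classes run Tuesdays and Thursdays at 7pm EST."]),
    InputExample(texts=["What is RAG?", "RAG grounds LLM outputs in reference data."]),
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives: other passages in the batch act as negatives for each query.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=10)
model.save("fine-tuned-embeddings")
```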
- Pre-Work, Session 10
- The Primary Roles of Fine-Tuning
- Parameter-Efficient Fine-Tuning (PEFT)
- Quantization
- Low-Rank Adaptation (LoRA/QLoRA)
- Fine-Tuning Llama 3.1 with PEFT-QLoRA (see the sketch after this list)
- LLM Fine-Tuning Toolkit
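The core of PEFT-QLoRA is loading the base model in 4-bit and attaching low-rank adapters; here is a minimal configuration sketch in which the model ID, target modules, and ranks are illustrative assumptions (the training loop itself is omitted):

```python
# Minimal PEFT-QLoRA configuration sketch: 4-bit base model + LoRA adapters.
# Model ID, target modules, and hyperparameters are illustrative assumptions;
# the actual training loop (e.g. with a supervised fine-tuning trainer) is omitted.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",  # assumed model ID; requires access approval
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```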
Week 6: May 5 - May 11
- Given data, build and share an end-to-end RAG application (submit by May 14)
- Pre-Work, Session 11
- Use Cases Across Industries
- Sharing Cohort Case Studies
- Journey + Destination Group Networking Mixer
- Submit Demo Day Project Presentation
- Ideation Toolbox
Week 7: May 12 - May 18
- Pre-Work, Session 13
- Semantic Chunking
- Reranking
- Parent-Document Retrieval
- Best Matching 25 (Okapi BM25)
- Reciprocal Rank Fusion (RRF) (see the sketch after this list)
- Ensemble Retrieval
- Building, Evaluating, and Improving a RAG Application with Advanced Techniques
- Advanced RAG Toolbox
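Reciprocal Rank Fusion is simple enough to show in a few lines: each retriever contributes 1 / (k + rank) for every document it returns, and documents are re-sorted by the summed score. A minimal sketch, with k = 60 as the commonly used constant:

```python
# Minimal Reciprocal Rank Fusion sketch: fuse ranked lists from multiple
# retrievers (e.g. BM25 + dense) by summing 1 / (k + rank) per document.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_3", "doc_1", "doc_7"]   # keyword retriever
dense_hits = ["doc_1", "doc_5", "doc_3"]  # embedding retriever
print(reciprocal_rank_fusion([bm25_hits, dense_hits]))  # doc_1 and doc_3 rise to the top
```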
- Pre-Work, Session 14
- AG2: The AutoGen Framework
- Reasoning LLMs: OpenAI's o1 Models
- Conversations vs. Reasoning
- Agent Framework Comparison: LangGraph, AG2, LlamaIndex, CrewAI
- Building a Multi-Agent Report Generation Application with AG2
- Advanced Agentic Toolbox
Week 8: May 19 - May 25
- Pre-Work, Session 15
- Baseline KPIs: Latency, Throughput, Number of Tokens
- Production KPIs: Time to First Token, Inter-Token Latency, etc.
- Production-Ready Methods: Calling Chains, Functions, Tools, APIs
- Prompt and Embedding Caching
- Asynchronous Requests (see the sketch after this list)
- Parallel vs. Serialized Chains
- Scalable Vector Databases
- Building a Scalable Agent Application
- Production-Ready Agent Application Toolbox
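Asynchronous requests are often the easiest latency win when you need many independent LLM calls; here is a minimal sketch with the async OpenAI client and asyncio.gather, where the model and prompts are placeholders:

```python
# Minimal sketch: fire several independent LLM calls concurrently with asyncio.
# Model name and prompts are illustrative placeholders.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main() -> None:
    prompts = ["Summarize chunk 1 ...", "Summarize chunk 2 ...", "Summarize chunk 3 ..."]
    # All three requests run concurrently instead of back to back.
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for a in answers:
        print(a)

asyncio.run(main())
```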
- Pre-Work, Session 16
- Production KPIs: Latency, Number of Tokens, Time to First Token, Inter-Token Latency, etc.
- How to Choose Open LLMs and Embedding Models
- LLM Serving Tools vs. Cloud Service Providers
- HF Inference Endpoints
- LLM Serving Engine Comparison: HF Text Generation Inference, NVIDIA NIM, vLLM (see the sketch after this list)
- Building, Shipping, Sharing, and Stress-Testing an Open-Source Agent Application
- Open-Source Scalable LLM Endpoints Toolbox
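Most serving engines in that comparison expose an OpenAI-compatible endpoint, so client code barely changes when you swap backends. Here is a minimal sketch against a locally hosted server such as vLLM; the host, port, and model name are assumptions about your deployment:

```python
# Minimal sketch: call a self-hosted, OpenAI-compatible endpoint (e.g. vLLM's
# API server, commonly started via `python -m vllm.entrypoints.openai.api_server`).
# Host, port, and model name are assumptions about your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your serving engine's endpoint
    api_key="not-needed-for-local",       # most local servers ignore the key
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whatever model the server loaded
    messages=[{"role": "user", "content": "Give me one tip for stress testing an endpoint."}],
)
print(resp.choices[0].message.content)
```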
Week 9: May 26 - June 1
- Pre-Work, Session 17
- The Business Value of On-Prem: From Prototyping to Production
- On-Prem Hardware Considerations
- Local LLM & Embedding Model Hosting Comparison: vLLM, Ollama
- LLM Application Hosting Through API Comparison: LangServe, Llama-Deploy (see the sketch after this list)
- Building On-Prem Agents with LangGraph, LangServe, and Ollama
- On-Prem Agentic RAG in Production Toolbox
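Hosting a chain or agent behind an API with LangServe is mostly one call; here is a minimal sketch in which a trivial chain stands in for your agent graph, and the route path and port are assumptions:

```python
# Minimal LangServe sketch: expose a runnable (here a trivial chain) as a REST API.
# The chain is a stand-in for your agent graph; path and port are assumptions.
from fastapi import FastAPI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

chain = (
    ChatPromptTemplate.from_template("Answer briefly: {question}")
    | ChatOpenAI(model="gpt-4o-mini")   # on-prem, this could point at a local endpoint
    | StrOutputParser()
)

app = FastAPI(title="On-Prem Agent API")
add_routes(app, chain, path="/agent")   # adds /agent/invoke, /agent/stream, etc.

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```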
- Pre-Work, Session 18
- GPU Hardware: A Primer
- Generative Pre-Trained Transformer Quantization (GPTQ)
- Activation-Aware Weight Quantization (AWQ)
- Vector Post-Training Quantization (VPTQ)
- Quantization Technique Comparison: GPTQ, AWQ, and VPTQ (see the memory sketch after this list)
- Comparing Quantization Methods for Serving a State-of-the-Art Llama Model
- Inference Optimization Toolbox
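The motivation for these quantization techniques comes down to simple arithmetic on weight memory; here is a rough back-of-the-envelope sketch that ignores activation memory, the KV cache, and per-method overhead:

```python
# Back-of-the-envelope weight-memory estimate for different quantization levels.
# Ignores activation memory, KV cache, and per-method overhead.
def weight_memory_gb(n_params_billion: float, bits_per_weight: float) -> float:
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for label, bits in [("FP16", 16), ("INT8", 8), ("4-bit (GPTQ/AWQ/NF4)", 4)]:
    print(f"8B model @ {label}: ~{weight_memory_gb(8, bits):.1f} GB of weights")
# Roughly: ~14.9 GB at FP16, ~7.5 GB at INT8, ~3.7 GB at 4-bit
```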
Week 10: June 2 - June 5
- Code Freeze
- Demo Day Rehearsals
- Demo Day
- Graduation and Certification Ceremony!
Instructors
Meet the crew who teach every class, live!

"Dr. Greg" Loughnane
Co-Founder & CEO @ AI Makerspace
In 2023, we created the LLM Engineering: The Foundations and LLM Ops: LLMs in Production courses on Maven!
From 2021-2023, I led the product & curriculum team at FourthBrain (backed by Andrew Ng's AI Fund) to build industry-leading online bootcamps in ML Engineering and ML Operations (MLOps).
Previously, I worked as an AI product manager, university professor, data science consultant, AI startup advisor, and ML researcher. I've also been a TEDx and keynote speaker, and I've been lecturing since 2013.

Chris "The Wiz πͺ" Alexiuk
Co-Founder & CTO @ AI Makerspace
In 2023, we created the LLM Engineering: The Foundations and LLM Ops: LLMs in Production courses on Maven!
During the day, I work as a Developer Advocate at NVIDIA. Previously, I worked with Greg at FourthBrain (backed by Andrew Ng's AI Fund) on MLE and MLOps courses, and on a few DeepLearning.AI events!
A former founding MLE and data scientist, I now spend my days cranking out Machine Learning and LLM content! My motto is "Build, build, build!", and I'm excited to get building with all of you!
$2,999
AIE 06
April 1 - June 5
March 28
People Are Talking
From non-programming data scientists to Fortune 500 CTOs, students are seeing real returns on their investments! Graduates are getting promoted, starting new careers, launching companies, and working on real-world Gen AI projects every day!

Julien de Lambilly
Lead AI Architect
Digit8 Group

Jimmy
Data scientist
BMO

Monalisha Singh
Member of Technical Staff
NetApp

Robert Fitoš
AI Tribe Lead
Hrvatski Telekom

Colin Davis
Head Of Marketing

Alex
Data Scientist
Fonterra

Jithin James
Co-Founder and Maintainer
RAGAS - YC W24 Batch

Vinod
Financial Economist

Mike
Professor of Pediatrics
University of Utah
Free Prep Course
Do you want to really understand how LLMs work "under the hood" from concepts to code?
Ready to master the fundamentals of LLMs?
Whether you're looking to nail AI Engineer interviews or lead an entire AI Engineering team, positioning yourself as the LLM expert in your context is just five days away.
Day 1 - Transformers: Attention Is NOT All You Need, but what else?
Day 2 - Attention: Are LLMs magical, or do they just pay attention?
Day 3 - Embeddings: Layers, models, representations, or all of the above?
Day 4 - Training: Let's train a GPT from scratch! Loss is the key.
Day 5 - Inference: Understand how transformers predict the next token (see the sketch below)
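If you want a preview of Days 2 and 5, scaled dot-product attention fits in a few lines of NumPy; here is a minimal sketch in which tiny random matrices stand in for real learned query/key/value projections:

```python
# Minimal scaled dot-product attention sketch: softmax(Q K^T / sqrt(d)) V.
# Tiny random matrices stand in for real learned projections.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                      # 4 tokens, 8-dimensional head
Q = rng.normal(size=(seq_len, d))
K = rng.normal(size=(seq_len, d))
V = rng.normal(size=(seq_len, d))

scores = Q @ K.T / np.sqrt(d)                                          # token-to-token similarity
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
output = weights @ V                                                   # weighted mix of value vectors

print(weights.round(2))  # each row sums to 1: how much each token attends to the others
print(output.shape)      # (4, 8): one contextualized vector per token
```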
Changing Lives
From community members to graduates, people from every imaginable background and skillset are becoming AI Engineers, and thriving! Hear their stories in our Transformation of the Week series!
See more amazing Transformation of the Week stories on our YouTube page.
F.A.Q.
Answers to our most asked questions.
Yes! Your instructors, Dr. Greg and The Wiz, run each and every class LIVE. Nothing pre-recorded!
Yes! You can easily catch up asynchronously on the course if you need to miss a class or two, or if you're in a timezone that makes attending class live very difficult.
This course is designed for people with full-time jobs. If you're an aspiring AI Engineer, it is important to complete the weekly coding exercises (an additional 2-4 hours/week outside of class and other sessions). If you're an AI Engineering Leader, you will be able to get a lot out of the class by simply attending the sessions!
The course focuses on best-practice industry tools, so we will leverage Hugging Face Inference Endpoints to deploy scalable open-source models. We will use AWS and Amazon SageMaker during the course, although similar functionality also exists in MS Azure.
For team seat packages, please contact lusk@aimakerspace.io
You will need to set up billing for the following tools:
- ChatGPT Plus (to create your own GPT on the GPT Store)
- OpenAI API access (for building with OpenAI GPT models)
- GPU access through Google Colab Pro (for Fine-Tuning)
- Hugging Face Spaces (for hosting deployed apps)
Recommended budget: ~$100 total
Please read more about our deferral policy here.
Next cohort
Apr 1 - June 5, 2025