The AI Engineering Bootcamp

4.9 (48) · 10 Weeks · Cohort-based Course

Build production agent applications in 2025 with leading frameworks and best practices, from prototype to production, cloud to on-prem.

Prototype agentic apps and get them into production. Ship fast, then scale.

Dr. Greg" Loughnane and Chris "The Wiz πŸͺ„" Alexiuk
Co-founders of AI Makerspace. Teaching weekly LLM concepts + code to thousands.

Industry-leading tooling, LLMs, & infra

LangChain · OpenAI · RAGAS · Meta AI · LlamaIndex · Hugging Face · Cursor · UV
Our Community

πŸ—οΈ 🚒 πŸš€ AI Makerspace is on a mission to create the world's leading community for people who want to build, ship, and share production LLM applications.

Welcome! We invite you to join our close-knit AI Makerspace community on YouTube, where every Wednesday (10am PST / 1pm EST) we host a free, live event covering bleeding-edge tools, concepts, and code. See all of our upcoming free events!

Join your fellow peers from around the world who are building, shipping, and sharing every day in our Gen AI Discord community.

Our Ethos

Build πŸ—οΈ, Ship 🚒, Share πŸš€!

This is the secret to success in a 21st-century workforce, and you will live this every class, in every week of the cohort!

πŸ§‘β€πŸ€β€πŸ§‘ Specifically, you will get hands-on experience building and sharing complex LLM and agentic applications every week, with constant feedback from instructors, peer supporters, and your fellow cohort members during live sessions and on your homework submissions.

What are the prerequisites?

You must be somewhat familiar with Python programming, prompt engineering, and data science. However, most of all, you must be driven to succeed in Generative AI in 2025.

How will I know if it's right for me?

πŸ“œ Check out a detailed curriculum & schedule!

🀩 Watch some of the amazing Demo Day presentations from previous cohorts!

🎀 People just like you have transformed in amazing ways, hear their stories in our Transformation of The Week video series!

Read more about the student experience from Richard Gower of Cohort 3, and check out Ches Roldan's great article, "20 Tips to Survive AI Makerspace's AI Engineering Bootcamp (as a non-technical person)".

How much do the tools cost?

Don't worry! This course is for the GPU-poor among us! It is built on industry-standard open-source software, adapts to the compute you have, and is cloud-platform agnostic. We strive to keep your additional costs low (below ~$100 total). Check the F.A.Q. below for more info.


Who is this course for?
1. Data scientists and machine learning engineers aiming to become full-stack AI practitioners.

2. Software engineers who want to build complex LLM and agentic applications.

3. Technical product managers or execs who guide their teams on what to build, why, and how.


What you’ll get out of this course
  • 🀩 Demo Day!
    You will present a unique project live to a cohort of your peers and the public!
  • πŸ’Ό Hiring Opportunities
    Certified AI Engineers get direct access to job opportunities from our network
  • πŸ§‘β€πŸ€β€πŸ§‘ Peer-Supported Live Coding
    Work with certified AI Engineers in dedicated small groups to discuss and code throughout the cohort!
  • 🀝 1:1 Career Coaching
    Our student success team will help you achieve your goals before and after certification!
  • πŸ§‘β€πŸ’» Prompt Engineering
    Leverage in-context learning as an engineer for prototyping, RAG applications, agents, and more!
  • πŸ™‹ Retrieval Augmented Generation (RAG)
    Build Retrieval Augmented Generation applications and ground LLM outputs in your own reference data!
  • βš–οΈ Fine-Tuning
    Efficiently fine-tune LLM and embedding models to adapt to your task or domain!
  • πŸ•΄οΈ Agents
    Build complex LLM applications capable of reasoning, action, and working with external tools!
  • πŸ€” Reasoning & Test-Time Compute
    From o1 to agentic conversation programming, we'll cover the latest in LLM reasoning and frameworks.
  • πŸ“ˆ Evaluation
    Instrument agent applications for metrics-driven development to show your quality improvements quantitatively.
  • πŸ”Ž Observability
    Debug, test, and monitor applications built using any leading LLM framework!
  • πŸ“Š Inference Optimization
    Learn the leading tools and quantization techniques to accelerate inference for the latency and throughput you need.
  • 🏒 On-Prem Serving
    Learn to efficiently leverage resources so you can keep your data super private in the years to come.

Cohort 6

Apr 1 - June 5, 2025

Cohort 7

June 17 - Aug 21, 2025

Every class is taught live and led by Dr. Greg and The Wiz every Tuesday and Thursday at 7pm EST/4pm PST, along with professional peer supporters assigned to you based on skill set.

** This class has limited seating **

Apply Now

Need to purchase multiple seats?

Course Curriculum

Get the most comprehensive curriculum from the highest-rated bootcamp on Maven!

Week 1: April 1 - April 6

  • πŸ’» Pre-Work, Session 1
  • 🦾 The Four Prototyping Patterns: Prompting, Fine-Tuning, RAG, Agentic Reasoning
  • 🧱 The LLM Application Stack
  • πŸ—¨οΈ Prompt Engineering: Best Practices
  • πŸ”₯ Prompt Iteration through a User Interface
  • πŸ§‘β€πŸ’» LLM API Roles: System, User, Assistant
  • πŸš€ Building and Sharing Your First Use Case Prototype
  • 🧰 GenAI Toolbox
  • πŸ’» Pre-Work, Session 2
  • πŸ”’ Introduction to Embeddings and Similarity Search
  • ↗️ Vector Databases
  • πŸ€– Embedding Models vs. LLM Chat Models
  • πŸ™‹ Introduction to Retrieval Augmented Generation (RAG)
  • πŸ—οΈ Building Your First RAG Application from Scratch in Python (see the sketch below)
  • 🧰 RAG Builder Toolbox

Week 2: April 7 - April 13

  • πŸ’» Pre-Work, Session 3
  • 🀩 Demo Day Overview
  • πŸ’Ό Industry and Cohort Use Cases
  • πŸ’¬ Building Simple User Interfaces with Generative AI
  • πŸ“ PDF Parsing 101
  • πŸ“ˆ LLM Rate Limits
  • πŸš€ Shipping and Sharing a Rate-Unlimited, PDF-Upload-Ready RAG Application
  • 🧰 End-to-End RAG Toolbox
  • πŸ’» Pre-Work, Session 4
  • ⛓️ LangChain Core Constructs
  • πŸ’» LangChain Expression Language: Chains and Runnables (see the sketch below)
  • πŸ”„ LangChain vs. LangGraph
  • πŸ” Monitoring, Visibility, and Observability with LangSmith
  • πŸ“Š Evaluation Best-Practices for RAG
  • πŸš€ Building and Sharing a Production-Grade RAG Application
  • 🧰 Production RAG Toolbox

Week 3: April 14 - April 20

  • πŸ’» Pre-Work, Session 5
  • πŸ•΄οΈ Agent and Agentic Reasoning: Three Introductions
  • πŸ€” The Reasoning-Action (ReAct) Framework
  • πŸ› οΈ Enhancing Search and Retrieval with Function Calling
  • β™ŸοΈ Reflection, Tool Use, and Planning
  • πŸ”„ Directed Cyclic Graphs
  • πŸ— Building a production-grade Agentic RAG Application
  • 🧰 Production Agents Application Toolbox
  • πŸ’» Pre-Work, Session 6
  • πŸ§‘β€πŸ€β€πŸ§‘ Multi-Agent Architectures: Hierarchical, Collaboration, Supervision
  • πŸ•΄οΈ Agent Supervisors as Routers
  • πŸ†š Event-Driven vs. Graph Traversal Frameworks
  • ✍️ Building a Multi-Agent Application with LangGraph
  • 🧰 Multi-Agent Application Toolbox

Week 4: April 21 - April 27

  • πŸ’» Pre-Work, Session 7
  • πŸ§ͺ Synthetic Data Generation (SDG) for Fine-Tuning & Alignment of LLMs
  • βš–οΈ SDG for Domain Adaptation
  • 🌌 SDG for State-of-the-Art Small Language Models
  • πŸ’Ό SDG in Industry
  • πŸ—ƒοΈ SDG for RAG Evaluation
  • 🧬 Evolving Instructions to Increase Depth and Breadth
  • πŸ—οΈ Custom Synthetic Test Data Generation for RAG Evaluation
  • 🧰 SDG Toolkit
  • πŸ’» Pre-work, Session 8
  • πŸ“Š Metrics-Driven Development
  • πŸ“ RAG ASsessment (RAGAS) Framework
  • πŸ“ RAG Metrics: Context Precision/Recall, Faithfulness, etc. (see the sketch below)
  • πŸ“„ Agents or Tool Use Metrics: Topic Adherence, etc.
  • πŸ“„ General Purpose Evaluation: Aspect Critiques
  • πŸ—οΈ Assessing the accuracy of retrieval and generation in Agentic RAG applications
  • 🧰 RAG Evaluation Toolbox

Week 5: April 28 - May 4

  • πŸ’» Pre-Work, Session 9
  • πŸ“š Massive Text Embedding Benchmark (MTEB)
  • πŸ‹οΈ Downloading Open-Source Model Weights
  • 🧠 Loading LMs on GPU
  • πŸ€— Hugging Face Sentence Transformers
  • πŸ—οΈ Fine-Tuning Embedding Models for RAG using HF Sentence Transformers
  • 🧰 Embedding Fine-Tuning Toolkit
  • πŸ’» Pre-Work, Session 10
  • βš–οΈ The Primary Roles of Fine-Tuning
  • πŸ’Έ Parameter Efficient Fine-Tuning (PEFT)
  • βš›οΈ Quantization
  • πŸŽ–οΈ Low-Rank Adaptation (LoRA/QLoRA)
  • πŸ—οΈ Fine-Tuning Llama 3.1 with PEFT-QLoRA (see the sketch below)
  • 🧰 LLM Fine-Tuning Toolkit

Week 6: May 5 - May 11

  • πŸ§‘β€πŸ’» Given data, build and share an end-to-end RAG application (submit by May 14)
  • πŸ’» Pre-work, Session 11
  • πŸ•΄οΈ Use Cases Across Industries
  • πŸ—£οΈ Sharing Cohort Case Studies
  • πŸ§‘β€πŸ€β€πŸ§‘ Journey + Destination Group Networking Mixer
  • πŸ§‘β€πŸ’» Submit Demo Day Project Presentation
  • 🧰 Ideation Toolbox

Week 7: May 12 - May 18

  • πŸ’» Pre-Work, Session 13
  • 🧱 Semantic Chunking
  • πŸ”’ Reranking
  • πŸ‘¨β€πŸ‘¦ Parent-Document Retrieval
  • πŸ«₯ Best Matching 25 (Okapi BM25)
  • πŸ’₯ Reciprocal Rank Fusion (RRF); see the sketch below
  • 2️⃣ Ensemble Retrieval
  • ✍️ Building, evaluating, and improving a RAG application with advanced techniques
  • 🧰 Advanced RAG Toolbox
  • πŸ’» Pre-Work, Session 14
  • πŸ‘Ύ AG2: The AutoGen Framework
  • πŸ€” Reasoning LLMs: OpenAI's o1 models
  • πŸ—£οΈ Conversations vs. Reasoning
  • πŸ†š Agent Framework Comparison: LangGraph, AG2, LlamaIndex, CrewAI
  • πŸ§‘β€πŸ’» Building a multi-agent report generation application with AG2
  • 🧰 Advanced Agentic Toolbox

Week 8: May 19 - May 25

  • πŸ’» Pre-Work, Session 15
  • πŸ“ˆ Baseline KPIs: Latency, Throughput, No. Tokens
  • πŸͺ™ Prod KPIs: Time To First Token, Inter-Token Latency, etc.
  • πŸ€™ Production-Ready Methods: Calling Chains, Functions, Tools, APIs
  • πŸ’Έ Prompt and Embedding Caching
  • πŸ•°οΈ Asynchronous Requests (see the sketch below)
  • πŸ“Ά Parallel vs. Serialized Chains
  • ↗️ Scalable Vector Databases
  • ✍️ Building a Scalable Agent application
  • 🧰 Production-Ready Agent Application Toolbox
  • πŸ’» Pre-work, Session 16
  • πŸ“Š Prod KPIs: Latency, No. Tokens, Time To First Token, Inter-Token Latency, etc.
  • πŸ™‹ How to Choose Open LLMs and Embedding Models
  • ☁️ LLM Serving Tools vs. Cloud Service Providers
  • πŸ€— HF Inference Endpoints
  • πŸ†š LLM Serving Engine Comparison: HF Text Generation Inference, NVIDIA NIM, vLLM
  • πŸ“ˆ Building, Shipping, Sharing, and Stress Testing an Open-Source Agent Application
  • 🧰 Open-Source Scalable LM Endpoints Toolbox

Week 9: May 26 - June 1

  • πŸ’» Pre-work, Session 17
  • πŸ’² The Business Value of On-Prem: From Prototyping to Production
  • πŸ–₯️ On-Prem Hardware Considerations
  • πŸ¦™ Local LLM & Embedding Model Hosting Comparison: vLLM, ollama
  • ↔️ LLM Application Hosting Through API Comparison: LangServe, Llama-Deploy
  • 🏒 Building On-Prem Agents with LangGraph, LangServe, and ollama
  • 🧰 On-Prem Agentic RAG in Production Toolbox
  • πŸ’» Pre-Work, Session 18
  • 🧱 GPU Hardware: A Primer
  • πŸ€– Generative Pre-Trained Transformer Quantization (GPTQ)
  • πŸ’₯ Activation-aware Weight Quantization (AWQ)
  • ↗️ Vector Post-Training Quantization (VPTQ)
  • πŸ†š Quantization Technique Comparison: GPTQ, AWQ, and VPTQ
  • πŸ—οΈ Comparing Quantization Methods for serving a state-of-the-art Llama model (see the sketch below)
  • 🧰 Inference Optimization Toolbox

Week 10: June 2 - June 5

  • 🧊 Code Freeze
  • πŸ§‘β€πŸ’» Demo Day Rehearsals
  • 🀩 Demo Day
  • πŸŽ“ Graduation and Certification Ceremony!

AIM graduates come from companies of all industries, sizes, and locations.

Instructors

Meet the crew who teach every class, live!

"Dr. Greg" Loughnane

Co-Founder & CEO @ AI Makerspace

In 2023, we created the LLM Engineering: The Foundations and LLM Ops: LLMs in Production courses on Maven!

From 2021-2023 I led the product & curriculum team at FourthBrain (backed by Andrew Ng's AI Fund) to build industry-leading online bootcamps in ML Engineering and ML Operations (MLOps).

Previously, I worked as an AI product manager, university professor, data science consultant, AI startup advisor, and ML researcher. I'm also a TEDx and keynote speaker and have been lecturing since 2013.

Chris "The Wiz πŸͺ„" Alexiuk

Co-Founder & CTO @ AI Makerspace

In 2023, we created the LLM Engineering: The Foundations and LLM Ops: LLMs in Production courses on Maven!

During the day, I work as a Developer Advocate for NVIDIA. Previously, I worked with Greg at FourthBrain (backed by Andrew Ng's AI Fund) on MLE and MLOps courses, and on a few DeepLearning.AI events!

I'm a former founding MLE and data scientist, and these days you can find me cranking out Machine Learning and LLM content! My motto is "Build, build, build!", and I'm excited to get building with all of you!

Cost
$2,999
Upcoming
AIE 06
Dates
April 1 - June 5
Deadline
March 28

People Are Talking

From non-programming data scientists to Fortune 500 CTOs, students are seeing real returns on their investments! Graduates are getting promoted, starting new careers, launching companies, and working on real-world Gen AI projects every day!

Julien de Lambilly

Lead AI Architect
Digit8 Group

Bravo for successfully finding the right balance between assignments, quizzes, and activities to complete in the notebooks. It's very well thought out and balanced! I joined the cohort Demo Day and said to myself, "WOW this is what I need!", so I joined the next cohort!

Jimmy

Data scientist
BMO

Amazing course! Learned a lot in 10 weeks. It was tough but totally worth it. I can't believe I was able to build the product in the end.

Monalisha Singh

Member of Technical Staff
NetApp

I didn't know what Google Colab was before I joined this course! I was so new to this domain. For an upcoming NetApp Hackathon I'm proposing to fine-tune LLMs, so you can see the difference 7 weeks has made! It was fun learning hands-on rather than textbook or 'strategy for AI implementation' stuff!

Robert Fitoš

AI Tribe Lead
Hrvatski Telekom

The reactions from the team were overwhelmingly positive. They learned a lot and liked the way your Bootcamp is structured. Most importantly, they felt pride in completing the challenges... There was blood, sweat, and tears, but a huge sense of accomplishment! I will explore opportunities to work together in the future, so please keep me in the loop on what you're building over there.

Colin Davis

Head Of Marketing

The AI Engineering bootcamp... has significant advantages for enterprise data teams. After having multiple team members complete the bootcamp, our approach to Generative AI projects has become more streamlined and effective, leading to faster prototyping. I strongly recommend this course to any data professional or team eager to upgrade their Gen AI skills and drive impactful innovation within their organization.

Alex

Data Scientist
Fonterra

Great course, awesome instructors, up-to-date and relevant material. You will get relevant skills to get you started in your AI Engineering career.

Jithin James

Co-Founder and Maintainer
RAGAS - YC W24 Batch

Your team [does] a really good job - even better than us at explaining how RAGAS (RAG ASsessment) metrics work!

Vinod

Financial Economist

This course is not for the faint of heart or the casual hobbyist in AI Engineering! Make no mistake - it is a rigorous bootcamp. But for those with the strong desire to meaningfully upskill in this area, I can't think of a better way to spend 10 weeks.

Mike

Professor of Pediatrics
University of Utah

Spectacular amount of information delivered in ten weeks. Lectures are extremely well prepared, and the combined expertise of Dr. Greg and The Whiz works magically. ...resources were plentiful, and our cohort developed a camaraderie and I believe some long lasting future friendships. Highly recommended.

Free Prep Course

Do you want to really understand how LLMs work "under the hood" from concepts to code?

Ready to master the fundamentals of LLMs?

Whether you're looking to nail AI Engineer interviews or lead an entire AI Engineering team, positioning yourself as the LLM expert in your context is just five days away.

Day 1 - Transformers: Attention Is NOT All You Need, but what else?

Day 2 - Attention: Are LLMs magical, or do they just pay attention?

Day 3 - Embeddings: Layers, models, representations, or all of the above?

Day 4 - Training: Let's train a GPT from scratch! Loss is the key.

Day 5 - Inference: Understand how transformers predict the next token (see the sketch below)
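
As a small taste of Day 5, here is a sketch of next-token prediction with a pretrained GPT: take the logits at the last position, softmax them into a probability distribution over the vocabulary, and look at the most likely tokens. It assumes the transformers and torch packages and uses GPT-2 only because it is small; the prep course's own notebooks may use different models and code.

```python
# Next-token prediction sketch: logits at the last position -> probabilities -> top tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The transformer predicts the next", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # shape: (batch, seq_len, vocab_size)

next_token_logits = logits[0, -1]         # scores over the whole vocabulary
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10}  p={p:.3f}")
```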

Changing Lives

From community members to graduates, people from every imaginable background and skill set are becoming AI Engineers, and thriving! Hear their stories in our Transformation of the Week series!

See more amazing Transformation of the Week stories on our YouTube page.

F.A.Q.

Answers to our most asked questions.

Are classes taught live or pre-recorded?

Yes! Your instructors, Dr. Greg and The Wiz, run each and every class LIVE. Nothing pre-recorded!

What if I can't attend a session live?

Yes! You can easily catch up asynchronously if you need to miss a class or two, or if you're in a timezone that makes attending class live very difficult.

How much time does the course take each week?

This course is designed for people with full-time jobs. If you're an aspiring AI Engineer, it is important to complete the weekly coding exercises (plan on 2-4 hours/week outside of class and other sessions). If you're an AI Engineering Leader, you will be able to get a lot out of the class by simply attending the sessions!

Which cloud platforms and tools does the course use?

The course focuses on industry best-practice tools, so we will leverage Hugging Face Inference Endpoints to deploy scalable open-source models. We will use AWS and Amazon SageMaker during the course, although similar functionality also exists in MS Azure.

What if I need to purchase multiple seats for my team?

For team seat packages, please contact lusk@aimakerspace.io.

How much do the tools cost?

You will need to set up billing for the following tools:

  • ChatGPT Plus (to create your own GPT on the GPT Store)
  • OpenAI API access (for building with OpenAI GPT models)
  • GPU access through Google Colab Pro (for Fine-Tuning)
  • Hugging Face Spaces (for hosting deployed apps)

Recommended budget: ~$100 total

What if I need to defer to a later cohort?

Please read more about our deferral policy here.

Next cohort

Apr 1 - June 5, 2025