2026

You already know how LLMs work — text into tokens, tokens into math, predict the next one. Image generation uses the same broad ideas but flips the training game: instead of predicting the next token, the model learns to predict and remove noise. Starting from pure static, it chips away — step by step — until a coherent image emerges. What does Michelangelo have to do with any of this? More than you’d think. This is how image diffusion models work, in 20 minutes.
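The "chip away at static" idea from the episode can be sketched in a few lines. This is a toy illustration, not a real diffusion model: `predict_noise` stands in for the trained neural network, and we cheat by pointing it at a known target so the loop visibly converges from pure noise.

```python
import numpy as np

def predict_noise(x, target):
    # A real model is a trained neural net; this hypothetical stand-in
    # just reports how far x is from a known target.
    return x - target

def denoise(target, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=target.shape)   # start from pure static
    for _ in range(steps):
        eps = predict_noise(x, target)  # model's noise estimate
        x = x - 0.2 * eps               # chip away a fraction per step
    return x

target = np.array([0.1, 0.5, 0.9])      # stand-in "image"
result = denoise(target)
```

Each step removes only a fraction of the estimated noise, which is why real samplers run for many iterations rather than jumping straight to the answer.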

Full shownotes at fragmentedpodcast.com.

The hard part of AI coding isn’t generating code — it’s controlling quality, safety, and drift. Drawing from OpenAI’s Codex case study, Stripe’s Minions project, and real-world experience, Kaushik and Iury break down harness engineering: the five pillars for shaping an agent’s environment, and what it looks like when teams build custom harnesses from scratch.

Full shownotes at fragmentedpodcast.com.

AGENTS.md is becoming the common language for AI coding tools, but keeping repo rules, personal rules, and tool-specific files in sync is still messy. In this episode, Kaushik and Iury break down the sync problem, compare their own setups, and unpack what the latest AGENTS.md research actually says.

Full shownotes at fragmentedpodcast.com.

Subagents are becoming a core primitive for serious AI-assisted development. In this episode, Kaushik and Iury disambiguate “agent” terminology, unpack plan mode vs subagents, and explain how parallel, scoped workers improve research quality without polluting the main thread.

Full shownotes at fragmentedpodcast.com.

Agent Skills look simple, but they are one of the most powerful building blocks in modern AI coding workflows. In this episode, Kaushik and Iury break down when to use skills, how progressive disclosure works, and how skills compare with commands, instructions, and MCPs.

Full shownotes at fragmentedpodcast.com.

Ever get asked “how do LLMs work?” at a party and freeze? We walk through the full pipeline: tokenization, embeddings, inference — so you understand it well enough to explain it. Walk away with a mental model that you can use for your next dinner party.
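The pipeline named in the blurb can be sketched end to end in toy form. Everything here is a made-up stand-in (a four-word vocabulary, random embeddings, dot-product scoring), meant only to show the shape of tokenize, embed, then predict the next token, not how any real model works.

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
inv_vocab = {i: w for w, i in vocab.items()}

rng = np.random.default_rng(42)
embed = rng.normal(size=(len(vocab), 4))  # one vector per token

def tokenize(text):
    # Real tokenizers split into subwords; here, whole words.
    return [vocab[w] for w in text.split()]

def next_token(tokens):
    # Toy "inference": average the context embeddings, score every
    # vocabulary entry by dot product, pick the highest-scoring one.
    context = embed[tokens].mean(axis=0)
    scores = embed @ context
    return int(scores.argmax())

tokens = tokenize("the cat")
prediction = inv_vocab[next_token(tokens)]
```

A real LLM replaces the averaging step with many transformer layers, but the overall flow of text to token IDs to vectors to a score over the vocabulary is the same.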

Full shownotes at fragmentedpodcast.com.

MCPs are everywhere, but are they worth the token cost? We break down what Model Context Protocol actually is, how it differs from just using CLIs, the tradeoffs you should know about, and when MCPs actually make sense for your workflow.

Full shownotes at fragmentedpodcast.com.

Most folks reference “AI coding” like it’s one thing. It’s really not. In this foundational episode, Kaushik and Iury walk through (at least) four paradigms — from super autocomplete to agent orchestration — each with different workflows, expectations, and mental models.

What do most developers follow today? Where is the frontier? What’s coming in the future?

Listen to the episode and find out!

Full shownotes at fragmentedpodcast.com.

2025

Join us as we talk with Vinay Gaba, Android GDE and leading voice in Android development, about the future of the field. Vinay shares insights from interviews with top Android devs on their three-year predictions, and offers his own perspective. We cover AI’s impact, evolving development roles, and crucial future skills.

Full shownotes at fragmentedpodcast.com.