You already know how LLMs work — text into tokens, tokens into math, predict the next one. Image generation uses the same broad ideas but flips the training game: instead of predicting the next token, the model learns to predict and remove noise. Starting from pure static, it chips away — step by step — until a coherent image emerges. What does Michelangelo have to do with any of this? More than you’d think. This is how image diffusion models work, in 20 minutes.
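The "predict and remove noise" loop described above can be sketched in a few lines. This is a toy illustration only, assuming a 4-pixel "image" and a stand-in noise predictor that knows the clean target; a real diffusion model replaces that oracle with a trained neural network.

```python
import random

# Toy sketch of the diffusion idea: start from pure static (random noise)
# and repeatedly predict and remove a fraction of the noise until the
# "image" emerges. TARGET and predict_noise are illustrative stand-ins.

TARGET = [0.2, 0.8, 0.5, 0.1]  # the clean "image" (4 pixels)

def predict_noise(x):
    # A real model is trained to estimate the noise present in x.
    # This toy oracle computes it exactly: noise = current - clean.
    return [xi - ti for xi, ti in zip(x, TARGET)]

def denoise(x, steps=50):
    # Reverse process: chip away 20% of the predicted noise per step.
    for _ in range(steps):
        eps = predict_noise(x)
        x = [xi - 0.2 * ei for xi, ei in zip(x, eps)]
    return x

random.seed(0)
static = [random.gauss(0.0, 1.0) for _ in TARGET]  # pure static
image = denoise(static)
print([round(v, 3) for v in image])  # converges toward TARGET
```

Each step removes only a fraction of the estimated noise, which is why the process takes many iterations rather than one big jump; that gradual chipping away is the Michelangelo analogy from the episode.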

Full shownotes at fragmentedpodcast.com.

Show Notes

Get in touch

We’d love to hear from you. Email is the best way to reach us, or you can check our contact page for other ways.

We want to hear all the feedback: what’s working, what’s not, and topics you’d like to hear more about.

Co-hosts

We transitioned from Android development to AI starting with Ep. #300. Listen to that episode for the full story behind our new direction.