Ever Wondered How AI Knows Exactly What to Say Next? 🤯
Picture this: You’re texting your friend, and your phone magically suggests the perfect next word. Or you’re chatting with ChatGPT, and it crafts an entire paragraph as if it knows what you’re thinking.
How does AI do this?
Is it reading your mind? (Spoiler: No, but it sure feels like it!)
This magic happens because of probability distributions. In this article, we'll break down, step by step, how Large Language Models (LLMs) predict the next word, without the complicated jargon. If you stick around, you'll never look at AI-generated text the same way again.
📝 Step 1: Input Prompt — Setting the Stage
Let’s say we give an AI model this prompt:
“The cat sat on the”
At this moment, the AI has no idea what you want next, but it has seen millions (or even billions) of similar sentences. Now it's time for some seriously educated guesswork.
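If you'd like to follow along in code, here's a minimal sketch of setting that stage. The Hugging Face transformers library and the small GPT-2 model are just assumptions for illustration (the article doesn't prescribe a toolkit); the idea is the same for any LLM.

```python
# A minimal sketch of Step 1: load a small language model and hand it our prompt.
# Assumption: the Hugging Face `transformers` library and GPT-2, chosen only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical choice of model for this walkthrough
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The cat sat on the"  # the prompt from the article
```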
🔡 Step 2: Tokenization — Breaking It Down
Before the AI can predict anything, it needs to process your words. It doesn't see words the way we do; it sees tokens, which can be whole words or even smaller pieces of words.
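To see this for yourself, here's a short sketch that asks a tokenizer how it splits our prompt. Again, GPT-2's tokenizer is just an assumption; every model has its own vocabulary, so your exact tokens and IDs may differ.

```python
# Ask the tokenizer how it breaks the prompt into tokens and token IDs.
# Assumption: GPT-2's tokenizer, as in the sketch above; other models split text differently.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "The cat sat on the"
token_ids = tokenizer.encode(prompt)                 # list of integer token IDs
tokens = tokenizer.convert_ids_to_tokens(token_ids)  # the text piece behind each ID

for token, token_id in zip(tokens, token_ids):
    print(f"{token!r} -> {token_id}")
```

The table below shows the kind of word-to-token-ID mapping this produces: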
Word | Token ID