stuff tagged with language models

How do LLMs generate text?

When ChatGPT responds to your prompt, it's not retrieving an answer from a database. It's predicting the next token, one at a time, based on probabilities learned during training.
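That prediction step can be sketched in a few lines. This is a toy illustration, not a real model: the token scores ("logits") are hard-coded, whereas an actual LLM computes scores over a vocabulary of tens of thousands of tokens with a neural network. The softmax-then-sample pattern, though, is the real mechanism.

```python
import math
import random

# Hypothetical raw scores for the next token after "The cat sat on the ..."
# (a real model would compute these with a neural network).
logits = {"mat": 2.1, "roof": 1.3, "moon": 0.2}

# Softmax turns raw scores into a probability distribution that sums to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# Sample one token in proportion to its probability -- usually "mat",
# but occasionally "roof" or "moon". This randomness is why the same
# prompt can produce different responses.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

Generation repeats this loop: append the sampled token to the prompt, score the vocabulary again, sample again, until the model emits a stop token.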

Why do LLMs hallucinate?

LLMs can generate confident, detailed responses that are completely wrong. Understanding why this happens helps you use these tools effectively.

What is a context window?

LLMs can only hold so much information at once. Understanding context windows helps explain why long conversations sometimes go off the rails.
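The "going off the rails" behavior follows from a simple constraint: the window holds a fixed number of tokens, so once a conversation exceeds it, the oldest material is no longer visible to the model. A minimal sketch, with a made-up limit of 8 tokens standing in for the thousands-to-millions real models support:

```python
# Toy context window: only the most recent CONTEXT_LIMIT tokens are
# visible to the model; everything earlier is silently dropped.
CONTEXT_LIMIT = 8  # hypothetical; real limits are far larger

conversation = "the quick brown fox jumps over the lazy dog".split()

# Keep only the most recent tokens that fit in the window.
visible = conversation[-CONTEXT_LIMIT:]
# "the" (the first token) has fallen out of the window, so the model
# can no longer condition on it -- the practical cause of a long chat
# "forgetting" its earlier instructions.
```

Real systems use subword tokens rather than whole words, and some summarize or compress older turns instead of dropping them outright, but the fixed-budget constraint is the same.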