LLM Training & Pre-Training

LLM pre-training is the foundational phase in which large language models learn from raw text. The objectives, scaling laws, and compute economics covered here shape every frontier model.

29 articles · 288 min total read

This theme is curated by our AI council.

What topics does this domain cover?

5 topics

Each topic below is a key concept in this domain. Pick any for the full picture: foundations, implementation, what's changing, and risks to consider.

Fine-Tuning →

Fine-tuning takes a pre-trained large language model and trains it further on a smaller, task-specific dataset so it performs better on that narrower task.

6 articles
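The core idea behind fine-tuning — continue gradient descent from pre-trained weights on a small task dataset — can be sketched with a toy model. This is an illustration of the principle only: the model, data, and numbers below are made up, not an actual LLM recipe.

```python
# Toy illustration of fine-tuning: start from "pre-trained" weights
# and keep training on a small task-specific dataset.

def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.01, epochs=200):
    """Continue SGD on (w, b) using a squared-error loss."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient of 0.5 * err**2 w.r.t. w
            b -= lr * err       # gradient of 0.5 * err**2 w.r.t. b
    return w, b

# Hypothetical "pre-trained" starting point.
w0, b0 = 1.0, 0.0
# Small task dataset; the task wants roughly y = 2x + 1.
task_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = fine_tune(w0, b0, task_data)
```

The same shape — initialize from pre-trained parameters, then optimize a task loss on far less data — is what real fine-tuning does at transformer scale.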

Pre-Training →

Pre-training is the foundational phase where a large language model learns language patterns from massive text corpora through self-supervised next-token prediction.

7 articles
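The pre-training objective — predict the next token and minimize cross-entropy — can be shown with a minimal sketch that stands in a bigram count model for the transformer. The corpus and model here are toys chosen for illustration.

```python
import math
from collections import Counter, defaultdict

# "Pre-train": learn next-token statistics from raw text via counting,
# then measure the average next-token cross-entropy (the loss that
# LLM pre-training minimizes, here with a bigram model).
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Average cross-entropy (in nats) over the corpus's next-token pairs.
loss = -sum(math.log(next_token_prob(p, n))
            for p, n in zip(corpus, corpus[1:])) / (len(corpus) - 1)
```

Real pre-training replaces the counts with a neural network and the toy corpus with trillions of tokens, but the objective — lower average next-token cross-entropy — is the same.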

Reward Model Architecture →

A reward model is a neural network trained on human preference comparisons to score language model outputs by quality.

5 articles
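Reward models are commonly trained with a Bradley-Terry style loss on preference pairs: the model should assign the chosen response a higher scalar score than the rejected one. A minimal sketch of that loss, with hypothetical scores:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood that the chosen response
    outranks the rejected one: -log(sigmoid(r_chosen - r_rejected))."""
    return math.log(1 + math.exp(-(r_chosen - r_rejected)))

# Hypothetical scalar scores from a reward model for two responses.
good = preference_loss(r_chosen=2.0, r_rejected=-1.0)  # ranked correctly
bad = preference_loss(r_chosen=-1.0, r_rejected=2.0)   # ranked wrongly
```

Minimizing this loss over many human-labeled pairs pushes the network to score preferred outputs higher, which is what lets the reward model stand in for human judgment during RLHF.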

RLHF →

Reinforcement Learning from Human Feedback (RLHF) is an alignment technique that fine-tunes large language models using a reward model trained on human preference data.

6 articles
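The RLHF loop — sample from the policy, score with a reward model, nudge the policy toward higher-reward outputs — can be sketched with a two-response toy policy trained by REINFORCE. The responses, reward scores, and hyperparameters are all invented for illustration; production RLHF uses PPO-style updates on full language models.

```python
import math
import random

# Stand-in reward model: fixed scores for two possible "responses".
reward_model = {"helpful": 1.0, "unhelpful": -1.0}
responses = ["helpful", "unhelpful"]
logits = {"helpful": 0.0, "unhelpful": 0.0}  # the toy "policy"

def probs():
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

random.seed(0)
lr = 0.1
for _ in range(200):
    p = probs()
    choice = random.choices(responses, weights=[p[r] for r in responses])[0]
    reward = reward_model[choice]
    # REINFORCE: scale the log-prob gradient of the sample by its reward.
    for r in responses:
        grad = (1.0 if r == choice else 0.0) - p[r]
        logits[r] += lr * reward * grad
```

After training, the policy concentrates probability on the response the reward model prefers — the same feedback loop, minus KL regularization and the rest of the machinery real RLHF adds.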

Scaling Laws →

Scaling laws are empirical relationships that predict how large language model performance changes as you increase model size, training data, and compute.

5 articles
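A well-known example is the Chinchilla parametric fit, which predicts pre-training loss from parameter count N and training tokens D. The sketch below uses the coefficients reported by Hoffmann et al.; treat the exact numbers as illustrative, since fits vary across papers and data.

```python
# Chinchilla-style scaling law: predicted loss for a model with
# N parameters trained on D tokens. Coefficients are the Hoffmann
# et al. estimates, used here purely for illustration.
def predicted_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """L(N, D) = E + A / N**alpha + B / D**beta."""
    return E + A / N**alpha + B / D**beta

# Larger models and more data both push predicted loss toward the
# irreducible floor E.
small = predicted_loss(N=1e9, D=20e9)     # 1B params, 20B tokens
large = predicted_loss(N=70e9, D=1.4e12)  # 70B params, 1.4T tokens
```

Formulas like this are what let labs choose a compute-optimal split between model size and data before committing to an expensive training run.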

Four perspectives on this domain