
LoRA

Low-Rank Adaptation. Fine-tune a model without updating its pretrained weights: freeze the weight matrix W and learn a small low-rank correction A·B on top, so the adapted layer computes W + A·B. Cuts trainable parameters by ~100×.
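
A minimal PyTorch sketch of the idea, not a reference implementation: the `LoRALinear` wrapper, the rank, and the `alpha` scaling are illustrative choices. It freezes the wrapped layer's W and trains only the two small factors, following the paper's trick of initializing the product A·B to zero so training starts from the frozen model's behavior.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update A @ B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze W (and bias): no gradients flow here

        d, k = base.out_features, base.in_features
        # A starts at zero so A @ B = 0 and the adapted model initially
        # matches the frozen one; B gets a small random init. (The paper
        # zeroes one factor and randomizes the other, same effect.)
        self.A = nn.Parameter(torch.zeros(d, rank))
        self.B = nn.Parameter(torch.randn(rank, k) * 0.01)
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scale * x (A B)^T  -- only A and B are trainable
        return self.base(x) + self.scale * (x @ self.B.t() @ self.A.t())
```

The parameter count checks out: a 4096×4096 layer has ~16.8M weights, while rank-8 factors have 8×(4096+4096) ≈ 65K, a ~256× reduction, consistent with the ~100× figure above.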