LoRA
Low-Rank Adaptation. Fine-tune a model without updating its original weights: freeze the pretrained matrix W and learn a small low-rank update ΔW = A·B, where the shared rank r of A and B is much smaller than the dimensions of W. Cuts trainable parameters by roughly 100×.
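A minimal NumPy sketch of the idea (the layer size 1024 and rank r = 4 are illustrative assumptions, not from the card): W stays frozen, only the small factors A and B would be trained, and the effective weight is W + A·B.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 1024, 4  # layer dimension and low rank, r << d (illustrative values)

W = rng.standard_normal((d, d))        # pretrained weight, frozen
A = rng.standard_normal((d, r)) * 0.01 # trainable low-rank factor
B = np.zeros((r, d))                   # trainable; zero init, so ΔW = A·B starts at 0

def forward(x):
    # Effective weight is W + A @ B, but we never materialize the d×d update:
    # the low-rank path costs two thin matmuls instead.
    return W @ x + A @ (B @ x)

# Parameter count: d*d frozen vs. 2*d*r trainable.
full, lora = W.size, A.size + B.size
print(full, lora, full // lora)  # 1048576 8192 128 — ~100× fewer trainable params
```

Because B starts at zero, the adapted model is exactly the pretrained model at initialization; training then moves only the 2·d·r entries of A and B.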