LoRA (Low-Rank Adaptation)
Technique
Definition
Low-Rank Adaptation — a parameter-efficient fine-tuning technique that adds trainable low-rank matrices to frozen model weights, significantly reducing memory and computation costs for fine-tuning.
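The idea can be sketched in a few lines: keep the pretrained weight matrix W frozen and learn only a low-rank update ΔW = (α/r)·B·A. A minimal NumPy sketch follows; the class and parameter names (`rank`, `alpha`) are illustrative assumptions, not the API of any particular library. B is zero-initialized, so the adapted layer starts out identical to the frozen base model.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA-adapted linear layer (not a reference implementation)."""

    def __init__(self, W, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                       # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = W.shape
        # Only the low-rank factors A and B would be trained.
        self.A = rng.normal(0.0, 0.01, size=(rank, in_dim))   # (r, in)
        self.B = np.zeros((out_dim, rank))                    # (out, r), zero init
        self.scale = alpha / rank

    def forward(self, x):
        # y = x W^T + scale * x A^T B^T, i.e. delta-W = scale * B @ A
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

With rank r much smaller than the layer dimensions, A and B together hold far fewer trainable parameters than W, which is where the memory and compute savings come from.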
In French
LoRA (Low-Rank Adaptation) — An efficient fine-tuning technique for adapting a large model by modifying only a fraction of its parameters. LoRA greatly reduces the memory and compute costs of fine-tuning, making model adaptation accessible to small teams.