LoRA, or Low-Rank Adaptation, is one of many fine-tuning techniques for LLMs. The idea is to freeze the pretrained weights and inject trainable low-rank matrices alongside them.
Recall that the rank of a matrix A is the dimension of the vector space spanned by its columns. This in turn corresponds to the number of linearly independent columns of A.
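To make this concrete, here is a minimal NumPy sketch of the low-rank update. The dimensions, the rank `r`, and the scaling factor `alpha` are hypothetical choices for illustration; the adapted weight is `W + (alpha / r) * B @ A`, where `B @ A` has rank at most `r`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 512x512 pretrained weight matrix, adapter rank r = 8.
d, r = 512, 8
W = rng.standard_normal((d, d))          # frozen pretrained weight

# LoRA factors: B starts at zero, so the adapted model
# initially computes exactly the same function as the base model.
A = rng.standard_normal((r, d)) * 0.01   # r x d, small random init
B = np.zeros((d, r))                     # d x r, zero init

def adapted_forward(x, alpha=16.0):
    """Forward pass with the low-rank update W + (alpha / r) * B @ A."""
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, d))

# With B = 0 the update vanishes, so outputs match the frozen model.
assert np.allclose(adapted_forward(x), x @ W.T)

# The product B @ A is d x d but has rank at most r,
# which is why only (d*r + r*d) parameters need training.
assert np.linalg.matrix_rank(rng.standard_normal((d, r)) @ A) <= r
```

During training only `A` and `B` receive gradients, so the number of trainable parameters is `2*d*r` instead of `d*d`; with these illustrative sizes that is 8,192 versus 262,144.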