INT4 LoRA fine-tuning vs QLoRA: A user inquired about the differences between INT4 LoRA fine-tuning and QLoRA in terms of precision and speed. Another member explained that QLoRA with HQQ keeps the quantized base weights frozen, does not use tinygemm, and instead dequantizes the weights and applies torch.matmul.
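The pattern described above (frozen quantized base weights, dequantize-then-matmul, trainable low-rank adapters) can be sketched roughly as follows. This is a minimal illustration, not HQQ's actual API: the int8-plus-scale storage here stands in for HQQ's real 4-bit grouped quantization, and `QLoRALinear` is a hypothetical class name.

```python
import torch


class QLoRALinear(torch.nn.Module):
    """Sketch of a QLoRA-style layer: frozen quantized base weight,
    dequantized on the fly and applied with a plain torch.matmul
    (no tinygemm / fused int4 kernel); only the low-rank A/B
    adapters receive gradients."""

    def __init__(self, in_features, out_features, rank=8, scale=1.0):
        super().__init__()
        # Stand-in for an HQQ-style frozen quantized weight: int8
        # values plus a per-tensor scale (real HQQ uses 4-bit packing
        # with grouped scales and zero-points). Buffers are not trained.
        w = torch.randn(out_features, in_features)
        self.register_buffer("w_q", (w / w.abs().max() * 127).round().to(torch.int8))
        self.register_buffer("w_scale", w.abs().max() / 127)
        # Trainable LoRA adapters; B starts at zero so the adapter
        # initially contributes nothing.
        self.A = torch.nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_features, rank))
        self.scale = scale

    def forward(self, x):
        # Dequantize the frozen base weight, then use a plain matmul.
        w = self.w_q.to(x.dtype) * self.w_scale
        base = torch.matmul(x, w.t())
        lora = torch.matmul(torch.matmul(x, self.A.t()), self.B.t())
        return base + self.scale * lora


layer = QLoRALinear(16, 32)
out = layer(torch.randn(4, 16))
```

The dequantize-then-matmul path trades the speed of a fused int4 kernel like tinygemm for simplicity and full-precision accumulation, which is the precision/speed trade-off the question was getting at.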