INT4 LoRA fine-tuning vs QLoRA: A user asked about the differences between INT4 LoRA fine-tuning and QLoRA in terms of precision and speed. Another member explained that QLoRA with HQQ keeps the quantized weights frozen, does not use tinygemm, and instead dequantizes the weights and uses torch.matmul.
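The explanation above can be sketched as follows. This is a minimal, hypothetical illustration (not the actual HQQ or QLoRA implementation): the base weight stays frozen in quantized form and is dequantized on the fly for a plain torch.matmul, rather than going through a fused INT4 kernel like tinygemm; only the low-rank LoRA factors train. The function names, quantization scheme, and shapes here are assumptions for demonstration.

```python
import torch

def dequantize(q_weight, scale, zero_point):
    # Simple affine dequantization: recover approximate float weights
    # from the frozen integer representation (stand-in for HQQ's scheme).
    return (q_weight.float() - zero_point) * scale

def qlora_style_forward(x, q_weight, scale, zero_point, lora_a, lora_b, alpha=16.0):
    # Base path: dequantize the frozen weight, then a plain matmul
    # (no fused INT4 kernel such as tinygemm).
    w = dequantize(q_weight, scale, zero_point)
    base_out = torch.matmul(x, w.t())
    # LoRA path: trainable low-rank update, scaled by alpha / rank.
    lora_out = torch.matmul(torch.matmul(x, lora_a), lora_b)
    return base_out + (alpha / lora_a.shape[1]) * lora_out

# Toy shapes: in_features=8, out_features=4, LoRA rank r=2.
q_w = torch.randint(-8, 8, (4, 8), dtype=torch.int8)  # stand-in for packed INT4 values
scale, zp = 0.1, 0.0
a = torch.zeros(8, 2, requires_grad=True)  # LoRA A (trainable)
b = torch.zeros(2, 4, requires_grad=True)  # LoRA B (trainable)
x = torch.randn(3, 8)

y = qlora_style_forward(x, q_w, scale, zp, a, b)
print(y.shape)  # torch.Size([3, 4])
```

The speed trade-off follows directly from this structure: every forward pass pays the dequantization cost before the matmul, whereas an INT4 kernel would compute on the quantized representation directly.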