
Tutorial for fine-tuning LLMs

This is a tutorial for fine-tuning open source LLMs using QLoRA on your custom private data that is formatted in raw text for free on Google Colab.

🔗 Google Colab notebook · 📄 Fine-tuning guide · 🧠 Memory requirements


Open-source LLMs like Llama-2 7B Chat are useful for applications that involve conversations and chatbot-like dialogue. However, these pre-trained models lack recent information due to knowledge cutoffs, and they know nothing about your private data. Fine-tuning is a great way to "teach" an LLM your private data while keeping memory requirements low.

We'll be using data about the Hawaii wildfires in August 2023, sourced from the Maui Police Department's report found here. We've copied the PDF's contents into multiple text files without any additional formatting. This tutorial runs on the Nvidia T4 GPU with 16 GB of VRAM offered in the free tier of Google Colab. To fit within that budget, we'll use QLoRA, a technique that quantizes the model's parameter weights to 4 bits, reducing memory requirements and increasing training speed so that GPU memory does not become a bottleneck.
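To make the setup concrete, here is a minimal sketch of 4-bit QLoRA loading with the Hugging Face transformers, peft, and bitsandbytes libraries. The model name is the real gated checkpoint; the LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not values taken from the notebook.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-chat-hf"

# Quantize the frozen base weights to 4-bit NF4 so the 7B model fits in ~16 GB of VRAM.
# float16 compute is chosen because the T4 does not support bfloat16 efficiently.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these are updated during fine-tuning,
# while the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections for Llama-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the adapter weights are trained, the optimizer state stays tiny, which is what keeps the whole run within a single T4's memory.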

Getting started

Download the notebook here to run it locally, or click here to load it in Google Colab. Then run all the cells sequentially to get your fine-tuned model!

Since Llama-2 is a gated model, complete the following steps to get access to it (a short authentication sketch follows the list):

  1. Create a Hugging Face account here.
  2. Request access to the Llama-2-7b-chat model here.
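
Once access is granted, the notebook needs to authenticate with Hugging Face before it can download the gated weights. A minimal sketch, assuming you have generated an access token in your Hugging Face account settings (the token value below is a placeholder):

```python
from huggingface_hub import login

# In a script: pass your access token directly (placeholder shown).
login(token="hf_...")

# In Colab/Jupyter, an interactive prompt is usually more convenient:
# from huggingface_hub import notebook_login
# notebook_login()

# Once authenticated, the gated checkpoint downloads like any other model.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```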

Authors and Credits

Sri Ranganathan Palaniappan, CS undergraduate student at Georgia Tech.

Mansi Phute, CS master's student at Georgia Tech.

Seongmin Lee, CS PhD student at Georgia Tech.

Polo Chau, Associate Professor at Georgia Tech.

Contact

If you have any questions, please feel free to reach out to Sri Ranganathan Palaniappan (BS CS @ Georgia Tech).