Capstone Project: Building a Medical Domain Expert Model with Meta Llama 2 on AWS SageMaker JumpStart
The Meta Llama 2 7B foundation model, pre-trained for text-generation tasks, was fine-tuned using Amazon SageMaker and related AWS tooling. The goal is to adapt the model to my selected domain (medical), enhancing its ability to understand and generate domain-specific text, and then to verify the deployed endpoint by prompting it with questions drawn from the training dataset.
- Data Preparation: Loading a medical dataset derived from a research paper, ensuring it covers the wide range of medical specialties and concepts the paper addresses (a data-formatting sketch follows this list).
- AWS SageMaker Setup: Using AWS SageMaker JumpStart to streamline deployment of the Meta Llama 2 model and to set up the development environment with the necessary endpoints (see the deployment sketch below).
- Fine-Tuning Process: Implementing fine-tuning techniques to adapt Llama 2 to the medical domain, focusing on preserving its general knowledge while enhancing its medical expertise (see the fine-tuning sketch below).
- Results: The deployed model was tested with specific prompts taken from the medical dataset used to fine-tune Llama 2, confirming that it returns relevant, domain-specific output (see the inference sketch below).
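
A minimal sketch of the data-preparation step, assuming the medical corpus has already been reduced to question/answer records (the example record below is hypothetical, not from the actual dataset). SageMaker JumpStart's instruction fine-tuning for Llama 2 expects a JSONL training file plus a `template.json` describing how record fields are assembled into prompt/completion pairs; the template shown follows AWS's published examples:

```python
import json

import sagemaker

# Hypothetical medical Q&A record; in practice these would be extracted
# from the research paper's dataset rather than written inline.
records = [
    {
        "instruction": "What are the common symptoms of type 2 diabetes?",
        "context": "",
        "response": "Increased thirst, frequent urination, fatigue, and blurred vision.",
    },
]

# One JSON object per line, as the JumpStart fine-tuning job expects.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# template.json tells JumpStart how each record becomes a prompt/completion pair.
template = {
    "prompt": (
        "Below is a medical question. Write a response that answers it.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n"
    ),
    "completion": " {response}",
}
with open("template.json", "w") as f:
    json.dump(template, f)

# Upload both files under one S3 prefix; the training job points at this prefix.
session = sagemaker.Session()
session.upload_data("train.jsonl", key_prefix="llama2-medical")
session.upload_data("template.json", key_prefix="llama2-medical")
train_data_location = f"s3://{session.default_bucket()}/llama2-medical"
```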
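
A sketch of the JumpStart setup, deploying the pre-trained Llama 2 7B model to a real-time endpoint so its out-of-the-box answers can later be compared against the fine-tuned version. The model ID comes from the public JumpStart catalog, and Llama 2 requires accepting Meta's EULA at deployment time:

```python
from sagemaker.jumpstart.model import JumpStartModel

# Pre-trained Llama 2 7B text-generation model from the JumpStart catalog.
base_model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")

# Creates a real-time inference endpoint; accept_eula acknowledges
# Meta's license terms for Llama 2. The instance type defaults to the
# catalog's recommendation for this model.
base_predictor = base_model.deploy(accept_eula=True)
```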
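
A sketch of the fine-tuning step, assuming `train_data_location` from the data-preparation sketch above. JumpStart exposes Llama 2 fine-tuning through `JumpStartEstimator`; the hyperparameter names (`instruction_tuned`, `epoch`) follow AWS's published Llama 2 fine-tuning examples, and the values shown are illustrative rather than the exact settings used in this project:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Fine-tune the 7B model on the instruction-formatted medical data.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},  # Meta's EULA must also be accepted for training
    instance_type="ml.g5.12xlarge",       # illustrative; any supported GPU instance works
)
estimator.set_hyperparameters(
    instruction_tuned="True",  # use the instruction-tuning recipe with template.json
    epoch="3",                 # illustrative value
)

# The training channel is the S3 prefix holding train.jsonl and template.json.
estimator.fit({"training": train_data_location})

# Deploy the fine-tuned weights to their own endpoint.
medical_predictor = estimator.deploy()
```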
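
Finally, a sketch of how the deployed endpoint is exercised with prompts from the medical dataset. The payload shape (`inputs` plus generation `parameters`) matches the JumpStart Llama 2 text-generation interface; the prompt below is a hypothetical stand-in for the dataset prompts, and since the response-parsing key can vary by model version, the raw response is printed as-is:

```python
# Hypothetical medical prompt; the real tests used prompts taken from the dataset.
payload = {
    "inputs": "What are the first-line treatments for hypertension?",
    "parameters": {"max_new_tokens": 256, "temperature": 0.6, "top_p": 0.9},
}

# custom_attributes re-confirms the EULA on each invocation, which some
# JumpStart Llama 2 endpoint versions require.
response = medical_predictor.predict(payload, custom_attributes="accept_eula=true")
print(response)
```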