dicode04/MedicAI


Introducing-GenerativeAI-with-AWS

Capstone Project: Building a Medical Domain Expert Model with Meta Llama 2 on AWS SageMaker JumpStart

The Meta Llama 2 7B foundation model was fine-tuned using Amazon SageMaker and related AWS tools. The base model is trained for text-generation tasks; the goal of this project is to adapt it to a selected domain (medical), improving its ability to understand and generate domain-specific text and to return accurate responses to prompts drawn from the dataset.

Key Project Components:

  1. Data Preparation: Loading a medical fine-tuning dataset derived from a research paper, ensuring it covers the wide range of medical specialties and concepts discussed in that paper.

  2. AWS SageMaker Setup: Utilizing AWS SageMaker JumpStart to streamline deployment of the Meta Llama 2 model and to set up the development environment with the necessary endpoints.

  3. Fine-Tuning Process: Implementing fine-tuning techniques to adapt Llama 2 to the medical domain, preserving its general knowledge while enhancing its medical expertise.

  4. Results: The fine-tuned Meta Llama 2 model was tested using specific prompts from the medical dataset to verify that it returns relevant, domain-appropriate output.
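The data-preparation step (1) can be sketched as a small preprocessing script. The field names (`instruction`/`context`/`response`) follow the JSON-lines instruction format commonly used by JumpStart text-generation fine-tuning recipes, and the sample Q&A pair is illustrative only; verify the exact schema against the model's training documentation.

```python
import json

def to_jsonl_records(qa_pairs):
    """Convert (question, answer) pairs into the instruction-tuning
    JSON-lines format commonly expected by JumpStart recipes.
    Field names here are an assumption; check the model's docs."""
    records = []
    for question, answer in qa_pairs:
        records.append({
            "instruction": question,
            "context": "",
            "response": answer,
        })
    return records

def write_jsonl(records, path):
    """Write one JSON object per line, the format uploaded to S3."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Illustrative sample pair, not taken from the actual dataset.
pairs = [("What is hypertension?",
          "Hypertension is persistently elevated arterial blood pressure.")]
write_jsonl(to_jsonl_records(pairs), "train.jsonl")
```

The resulting `train.jsonl` is what gets uploaded to S3 as the training channel for the fine-tuning job.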
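The SageMaker setup in step (2) can be sketched with the SageMaker Python SDK. The model id `meta-textgeneration-llama-2-7b` and the `inputs`/`parameters` payload shape are the usual JumpStart conventions for Llama 2, but both should be checked against your SDK version; the deployment function requires AWS credentials and quota, so it is defined but not invoked here.

```python
def build_payload(prompt, max_new_tokens=256, temperature=0.2):
    """Build the JSON payload shape Llama 2 JumpStart endpoints
    commonly accept; parameter names are assumptions to verify."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def deploy_llama2(model_id="meta-textgeneration-llama-2-7b",
                  instance_type="ml.g5.2xlarge"):
    """Deploy the pretrained model via JumpStart. Requires AWS
    credentials and instance quota, so it is not executed here;
    the instance type is an assumption."""
    from sagemaker.jumpstart.model import JumpStartModel
    model = JumpStartModel(model_id=model_id)
    # Llama 2 is a gated model, so the EULA must be accepted.
    return model.deploy(initial_instance_count=1,
                        instance_type=instance_type,
                        accept_eula=True)
```

`deploy_llama2` returns a predictor whose `predict(build_payload(...))` call sends prompts to the live endpoint.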
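The fine-tuning step (3) can be sketched with `JumpStartEstimator`. The hyperparameter names (`instruction_tuned`, `epoch`, `learning_rate`) and the string-valued convention follow the documented Llama 2 JumpStart recipe, but should be verified for your SDK version; the launch function needs AWS credentials and an S3 URI, so it is not run here.

```python
def finetune_hyperparameters(epochs=3, learning_rate=1e-4):
    """Hyperparameters for instruction fine-tuning. JumpStart expects
    string values; the knob names are assumptions to verify against
    the Llama 2 recipe documentation."""
    return {
        "instruction_tuned": "True",
        "epoch": str(epochs),
        "learning_rate": str(learning_rate),
    }

def finetune(train_s3_uri, model_id="meta-textgeneration-llama-2-7b"):
    """Launch a JumpStart fine-tuning job on the prepared dataset.
    Requires AWS credentials; defined but not executed here."""
    from sagemaker.jumpstart.estimator import JumpStartEstimator
    estimator = JumpStartEstimator(
        model_id=model_id,
        environment={"accept_eula": "true"},
        hyperparameters=finetune_hyperparameters(),
    )
    # The training channel points at the uploaded train.jsonl in S3.
    estimator.fit({"training": train_s3_uri})
    return estimator
```

After `fit` completes, `estimator.deploy(...)` hosts the domain-adapted model behind an endpoint for evaluation.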
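For the evaluation in step (4), responses from the deployed endpoint need to be parsed before inspection. JumpStart Llama 2 endpoints typically return a list of dicts keyed by `generation` or `generated_text`; handling both, as below, is an assumption, and the sample response is illustrative rather than captured from a live endpoint.

```python
def extract_generation(response):
    """Pull the generated text out of an endpoint response. The
    'generation'/'generated_text' keys and the list wrapper are
    common JumpStart shapes, assumed here rather than guaranteed."""
    item = response[0] if isinstance(response, list) else response
    return item.get("generation") or item.get("generated_text", "")

# Illustrative response shape, not output from a real endpoint.
sample = [{"generation": "Dysphagia is difficulty swallowing."}]
print(extract_generation(sample))
```

In practice the extracted text is compared against the reference answers from the medical dataset to judge whether the fine-tuned model produces relevant output.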
