
---
layout: default
title: Lab 3. LLMOps for SLM with Azure AI Studio
nav_order: 6
has_children: true
---

日本語

# Lab 3. LLMOps for SLM with Azure AI Studio

This end-to-end (E2E) example is for users who have just adopted Azure OpenAI and want to build an LLM evaluation pipeline with Prompt flow for quality assurance from scratch. It introduces the end-to-end process of experimentation, model quality evaluation, deployment, and performance monitoring with Prompt flow and other tools after fine-tuning your LLMs.

## Overview

In this lab, you will learn how to set up, test, deploy, evaluate, and monitor the models you fine-tuned in the previous labs for your use cases. By leveraging Azure AI Studio and Prompt flow, you will establish an LLMOps pipeline for deploying and utilizing custom AI models. This E2E example is divided into the following scenarios based on your current situation:

- Scenario 1: Set up Azure AI Studio for LLMOps
- Scenario 2: Basic LLMOps for your first generative AI app with Prompt flow
- Scenario 3: Evaluate your models with Prompt flow to keep optimizing
- Scenario 4: Content safety with Azure AI Studio before production
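To give a flavor of the kind of quality check the evaluation scenario automates, the sketch below scores model answers against reference answers with a simple token-overlap F1 metric. This is an illustrative stand-in only: `token_f1` and `evaluate_batch` are hypothetical helper names, not part of the Prompt flow API, which provides its own built-in and custom evaluators.

```python
# Toy batch evaluation of model answers (token-level F1) -- an
# illustration of the aggregate quality scores an LLMOps pipeline
# tracks across runs, not the actual Prompt flow evaluator API.
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def evaluate_batch(rows):
    """Average F1 over (prediction, reference) pairs."""
    scores = [token_f1(pred, ref) for pred, ref in rows]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    rows = [
        ("Paris is the capital of France", "The capital of France is Paris"),
        ("about 42", "42"),
    ]
    print(f"mean F1 = {evaluate_batch(rows):.2f}")  # -> mean F1 = 0.83
```

In a real pipeline, a metric like this would run against a fixed evaluation dataset on every run, so regressions in answer quality surface before deployment rather than in production.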

## 🗑️ Clean up resources

TBD

## Reference

- LLMOps Prompt flow template (GitHub)
- GenAIOps (GitHub)
- Phi-3CookBook
- https://github.com/just-the-docs/just-the-docs?tab=readme-ov-file#user-content-fn-2-6204df4f8c0dad5766232d4558ca98cf
- https://serverspace.io/support/help/install-ruby-on-rails-ubuntu-20-04/
- https://jekyllrb.com/