updated code for genai models #662

Open · wants to merge 3 commits into base: main
515 changes: 515 additions & 0 deletions applications/genai-on-gke/falcon/README.md

Large diffs are not rendered by default.

23 changes: 23 additions & 0 deletions applications/genai-on-gke/falcon/docker_image/Dockerfile
@@ -0,0 +1,23 @@
FROM python:3.10-slim-bullseye

ENV HOST=0.0.0.0

ENV LISTEN_PORT=8080

ENV HUGGINGFACEHUB_API_TOKEN="hf_your_hugging_face_token_here"

EXPOSE 8080

RUN apt-get update && apt-get install -y git

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

WORKDIR /app

COPY ./main.py /app/main.py
#COPY ./langchain_helper.py /app/langchain_helper.py
#COPY ./secret_key.py /app/secret_key.py

CMD ["streamlit", "run", "main.py", "--server.port", "8080"]
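Baking an API token into the image with `ENV` leaks it to anyone who can pull the image. A safer pattern is to inject the token at run time (e.g. `docker run -e HUGGINGFACEHUB_API_TOKEN=hf_... <image>`) and have the app fail fast if it is missing. A minimal stdlib sketch of such a check (the helper name is hypothetical, not part of this PR):

```python
import os


def require_hf_token(var: str = "HUGGINGFACEHUB_API_TOKEN") -> str:
    """Return the Hugging Face token from the environment, or fail fast.

    Intended to run at container startup so a missing token produces a
    clear error instead of a confusing failure deep inside the LLM call.
    """
    token = os.getenv(var)
    if not token:
        raise RuntimeError(f"{var} is not set; pass it with `docker run -e {var}=...`")
    return token
```

This keeps the Dockerfile free of secrets while giving the same behavior once the variable is supplied.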
40 changes: 40 additions & 0 deletions applications/genai-on-gke/falcon/docker_image/main.py
@@ -0,0 +1,40 @@
import os

import streamlit as st
from langchain import HuggingFaceHub, LLMChain, PromptTemplate

# Fail fast if the Hugging Face token is missing; assigning None to
# os.environ would otherwise raise a confusing TypeError here.
key = "HUGGINGFACEHUB_API_TOKEN"
value = os.getenv(key)
if value is None:
    raise RuntimeError(f"{key} environment variable is not set")
os.environ[key] = value


# Set Hugging Face Hub API token
# Make sure to store your API token in the `apikey_huggingface.py` file
#os.environ["HUGGINGFACEHUB_API_TOKEN"] = apikey_huggingface

# Set up the language model using the Hugging Face Hub repository
repo_id = "tiiuae/falcon-7b-instruct"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0.3, "max_new_tokens": 2000})

# Set up the prompt template
template = """
You are an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's question.
Question: {question}\n\nAnswer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

# Create the Streamlit app
def main():
    st.title("FALCON LLM Question-Answer App")

    # Get user input
    question = st.text_input("Enter your question")

    # Generate the response
    if st.button("Get Answer"):
        with st.spinner("Generating Answer..."):
            response = llm_chain.run(question)
        st.success(response)


if __name__ == "__main__":
    main()
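The `PromptTemplate` in main.py is plain variable substitution, so the prompt the chain sends to Falcon can be illustrated with the stdlib alone. A sketch (for illustration only; the real chain goes through langchain):

```python
# Approximates what LLMChain renders from the PromptTemplate in main.py
# before calling the hosted Falcon model.
TEMPLATE = (
    "\nYou are an artificial intelligence assistant.\n"
    "The assistant gives helpful, detailed, and polite answers to the user's question.\n"
    "Question: {question}\n\nAnswer: Let's think step by step."
)


def build_prompt(question: str) -> str:
    """Render the full prompt for a given user question."""
    return TEMPLATE.format(question=question)


print(build_prompt("What is Falcon-7B?"))
```

Inspecting the rendered prompt this way is a quick sanity check that the template's `{question}` slot and the "think step by step" suffix land where intended.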
3 changes: 3 additions & 0 deletions applications/genai-on-gke/falcon/docker_image/requirements.txt
@@ -0,0 +1,3 @@
langchain
huggingface_hub
streamlit