
Adding chat history to RAG app and refactor to better utilize LangChain #648

Open · wants to merge 61 commits into base: main

Commits (61)
dda40b9
Also introduced a basic session history mechanism in the browser to k…
alpha-amundson May 3, 2024
5cc85b9
tflint formatting fixes
alpha-amundson May 3, 2024
6898666
TPU Provisioner: JobSet related fixes (#645)
nstogner May 6, 2024
1d6c052
Updated image to use code in this branch
alpha-amundson May 6, 2024
981e777
making tflint happy
alpha-amundson May 6, 2024
d1d1211
Working on improvements for rag application (#731)
german-grandas Jul 12, 2024
5a16b54
Rag langchain chat history (#747)
german-grandas Jul 22, 2024
9e416c8
Rag langchain chat history (#755)
german-grandas Jul 29, 2024
e750d12
Fixing issues and updating chat history on frontend
german-grandas Jul 31, 2024
a000c46
Fixing files on working tree
german-grandas Jul 31, 2024
0d853ea
Ignoring test rag, to review how the rag application is working
german-grandas Aug 1, 2024
386c437
ignoring unit test to review cloud build process
german-grandas Aug 1, 2024
be1839d
refactoring cloud sql connection helper
german-grandas Aug 6, 2024
7f081ff
Merge branch 'main' into rag-langchain-chat-history
german-grandas Aug 6, 2024
35f67e4
Change TPU Metrics Source for Autoscaling (#770)
Bslabe123 Aug 8, 2024
0022053
Refactor: move workload identity service account out of kuberay-opera…
genlu2011 Aug 15, 2024
48f655b
updating branch
german-grandas Aug 20, 2024
a9895d6
fixing conflicts with remote branch
german-grandas Aug 20, 2024
cd95c98
fixing conflicts with remote branch
german-grandas Aug 20, 2024
bc8d745
fixing conflicts with remote branch
german-grandas Aug 20, 2024
e9beeef
fixing conflicts applying rebase
german-grandas Aug 20, 2024
eb9ab02
Updating files based on reviewer comments
german-grandas Aug 20, 2024
dff8d94
reverting change on cloudbuild.yaml file
german-grandas Aug 20, 2024
138920f
Reverting comment of line
german-grandas Aug 26, 2024
c8e5d35
Updating length of variable
german-grandas Aug 26, 2024
c437736
updating branch with main
german-grandas Aug 30, 2024
8b4f55d
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 2, 2024
4261818
Updating rag frontend image.
german-grandas Sep 4, 2024
a8258f1
updating rag frontend images with the latest changes
german-grandas Sep 9, 2024
4e1a4c5
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 9, 2024
863ee72
updating branch
german-grandas Sep 9, 2024
4f02546
Fixing issue with database connection
german-grandas Sep 9, 2024
324abf7
Merge branch 'rag-langchain-chat-history' of github.com:GoogleCloudPl…
german-grandas Sep 9, 2024
88ee300
Updating Rag application test.
german-grandas Sep 9, 2024
e209b46
Merge branch 'rag-langchain-chat-history' of https://github.com/Googl…
german-grandas Sep 9, 2024
cf0a447
Adding exceptions to test
german-grandas Sep 9, 2024
bf2f990
Fixing bug on unit test
german-grandas Sep 9, 2024
74b6e9d
fixing unit test
german-grandas Sep 9, 2024
e94cab0
updating notebook to use the PostgresVectorStore instead of the custo…
german-grandas Sep 10, 2024
329417b
fixing issue with notebook
german-grandas Sep 10, 2024
f1bf05a
Fixing issue with missing environment variables on notebook
german-grandas Sep 11, 2024
88fe07d
Refactoring example notebooks to handle new cloudsql vector store
german-grandas Sep 11, 2024
e38a101
Adding missing package to notebook
german-grandas Sep 11, 2024
799c8db
Creating a notebook for testing rag with a sample of the data
german-grandas Sep 12, 2024
14ff203
updating notebook to test rag
german-grandas Sep 12, 2024
a73f987
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 12, 2024
c01cff6
Reverting changes on files, updating database model on notebook
german-grandas Sep 12, 2024
0a04782
Fixing name with column on notebook query
german-grandas Sep 12, 2024
68a11f7
Merge branch 'main' of github.com:GoogleCloudPlatform/ai-on-gke into …
german-grandas Sep 13, 2024
d872679
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 16, 2024
e9a79ce
Merge branch 'rag-langchain-chat-history' of github.com:GoogleCloudPl…
german-grandas Sep 16, 2024
7fe461c
resolving conflicts
german-grandas Sep 17, 2024
c489d73
Delete applications/rag/example_notebooks/ingest_database.ipynb
german-grandas Sep 17, 2024
aa44fcb
updating Embedding model with missing column
german-grandas Sep 25, 2024
5882e28
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 25, 2024
e1b8e50
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Sep 30, 2024
0ef245f
Updating packages, improving chain prompt
german-grandas Oct 2, 2024
8558a46
updating rag frontend sha
german-grandas Oct 2, 2024
44b3e72
updating column name
german-grandas Oct 8, 2024
ab06c07
updating max tokens length for inference service
german-grandas Oct 24, 2024
4209af8
Merge branch 'main' of https://github.com/GoogleCloudPlatform/ai-on-g…
german-grandas Oct 24, 2024
Files changed
@@ -7,6 +7,7 @@
"source": [
"# RAG-on-GKE Application\n",
"\n",
"\n",
"This is a Python notebook for generating the vector embeddings used by the RAG on GKE application. For full information, please checkout the GitHub documentation [here](https://github.com/GoogleCloudPlatform/ai-on-gke/blob/main/applications/rag/README.md).\n",
"\n",
"\n",
@@ -283,7 +284,7 @@
"outputs": [],
"source": [
"from sqlalchemy.ext.declarative import declarative_base\n",
"from sqlalchemy import Column, String, Text, text\n",
"from sqlalchemy import Column, String, Text, text, JSON\n",
"from sqlalchemy.orm import scoped_session, sessionmaker, mapped_column\n",
"from pgvector.sqlalchemy import Vector\n",
"\n",
@@ -327,9 +328,10 @@
"\n",
"class TextEmbedding(Base):\n",
" __tablename__ = TABLE_NAME\n",
" id = Column(String(255), primary_key=True)\n",
" text = Column(Text)\n",
" text_embedding = mapped_column(Vector(384))\n",
" langchain_id = Column(String(255), primary_key=True)\n",
" content = Column(Text)\n",
" embedding = mapped_column(Vector(384))\n",
" langchain_metadata = Column(JSON) \n",
"\n",
"with pool.connect() as conn:\n",
" conn.execute(text(\"CREATE EXTENSION IF NOT EXISTS vector\"))\n",
@@ -342,7 +344,7 @@
"rows = []\n",
"for r in results:\n",
" id = uuid.uuid4() \n",
" rows.append(TextEmbedding(id=id, text=r[0], text_embedding=r[1]))\n",
" rows.append(TextEmbedding(langchain_id=id, content=r[0], embedding=r[1]))\n",
"\n",
"DBSession.bulk_save_objects(rows)\n",
"DBSession.commit()"
@@ -368,7 +370,7 @@
" transformer = SentenceTransformer(SENTENCE_TRANSFORMER_MODEL)\n",
" query_text = \"During my holiday in Marmaris we ate here to fit the food. It's really good\" \n",
" query_emb = transformer.encode(query_text).tolist()\n",
" query_request = \"SELECT id, text, text_embedding, 1 - ('[\" + \",\".join(map(str, query_emb)) + \"]' <=> text_embedding) AS cosine_similarity FROM \" + TABLE_NAME + \" ORDER BY cosine_similarity DESC LIMIT 5;\" \n",
" query_request = \"SELECT langchain_id, content, embedding, 1 - ('[\" + \",\".join(map(str, query_emb)) + \"]' <=> embedding) AS cosine_similarity FROM \" + TABLE_NAME + \" ORDER BY cosine_similarity DESC LIMIT 5;\" \n",
" query_results = db_conn.execute(sqlalchemy.text(query_request)).fetchall()\n",
" db_conn.commit()\n",
" \n",
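The net effect of the hunks above is that the hand-rolled embeddings table now matches the default schema of `PostgresVectorStore` from `langchain-google-cloud-sql-pg`: `id`/`text`/`text_embedding` become `langchain_id`/`content`/`embedding`, plus a new `langchain_metadata` JSON column. A minimal sketch of the model this converges on, assuming the 384-dimension sentence transformer used in the notebook (the table name here is illustrative):

```python
# Sketch of the LangChain-compatible schema this diff converges on.
# Column names are PostgresVectorStore defaults; Vector(384) matches the
# intfloat/multilingual-e5-small embeddings used in this notebook.
from pgvector.sqlalchemy import Vector
from sqlalchemy import JSON, Column, String, Text
from sqlalchemy.orm import declarative_base, mapped_column

Base = declarative_base()

class TextEmbedding(Base):
    __tablename__ = "netflix_reviews_db"  # assumption: table name used later in this PR
    langchain_id = Column(String(255), primary_key=True)  # one UUID per text chunk
    content = Column(Text)                                # the raw chunk
    embedding = mapped_column(Vector(384))                # pgvector embedding column
    langchain_metadata = Column(JSON)                     # per-document metadata
```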
206 changes: 76 additions & 130 deletions applications/rag/example_notebooks/rag-kaggle-ray-sql-latest.ipynb
@@ -30,7 +30,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install ray[default]==2.9.3 kaggle==1.6.6"
"!pip install ray[default]==2.9.3 kaggle==1.6.6 langchain-google-cloud-sql-pg"
]
},
{
@@ -73,57 +73,62 @@
"\n",
"import os\n",
"import uuid\n",
"\n",
"import ray\n",
"from langchain.document_loaders import ArxivLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from sentence_transformers import SentenceTransformer\n",
"from typing import List\n",
"import torch\n",
"from datasets import load_dataset_builder, load_dataset, Dataset\n",
"from huggingface_hub import snapshot_download\n",
"from google.cloud.sql.connector import Connector, IPTypes\n",
"import sqlalchemy\n",
"from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings\n",
"\n",
"from langchain_google_cloud_sql_pg import PostgresEngine, PostgresVectorStore\n",
"from google.cloud.sql.connector import IPTypes\n",
"\n",
"# initialize parameters\n",
"INSTANCE_CONNECTION_NAME = os.environ[\"CLOUDSQL_INSTANCE_CONNECTION_NAME\"]\n",
"INSTANCE_CONNECTION_NAME = os.environ.get(\"CLOUDSQL_INSTANCE_CONNECTION_NAME\")\n",
"print(f\"Your instance connection name is: {INSTANCE_CONNECTION_NAME}\")\n",
"DB_NAME = \"pgvector-database\"\n",
"\n",
"db_username_file = open(\"/etc/secret-volume/username\", \"r\")\n",
"DB_USER = db_username_file.read()\n",
"db_username_file.close()\n",
"\n",
"db_password_file = open(\"/etc/secret-volume/password\", \"r\")\n",
"DB_PASS = db_password_file.read()\n",
"db_password_file.close()\n",
"\n",
"# initialize Connector object\n",
"connector = Connector()\n",
"\n",
"# function to return the database connection object\n",
"def getconn():\n",
" conn = connector.connect(\n",
" INSTANCE_CONNECTION_NAME,\n",
" \"pg8000\",\n",
"cloud_variables = INSTANCE_CONNECTION_NAME.split(\":\")\n",
"\n",
"GCP_PROJECT_ID = os.environ.get(\"GCP_PROJECT_ID\", cloud_variables[0])\n",
"GCP_CLOUD_SQL_REGION = os.environ.get(\"CLOUDSQL_INSTANCE_REGION\", cloud_variables[1])\n",
"GCP_CLOUD_SQL_INSTANCE = os.environ.get(\"CLOUDSQL_INSTANCE\", cloud_variables[2])\n",
"\n",
"DB_NAME = os.environ.get(\"INSTANCE_CONNECTION_NAME\", \"pgvector-database\")\n",
"VECTOR_EMBEDDINGS_TABLE_NAME = os.environ.get(\"EMBEDDINGS_TABLE_NAME\", \"netflix_reviews_db\")\n",
"CHAT_HISTORY_TABLE_NAME = os.environ.get(\"CHAT_HISTORY_TABLE_NAME\", \"message_store\")\n",
"\n",
"VECTOR_DIMENSION = os.environ.get(\"VECTOR_DIMENSION\", 384)\n",
"\n",
"try:\n",
" db_username_file = open(\"/etc/secret-volume/username\", \"r\")\n",
" DB_USER = db_username_file.read()\n",
" db_username_file.close()\n",
"\n",
" db_password_file = open(\"/etc/secret-volume/password\", \"r\")\n",
" DB_PASS = db_password_file.read()\n",
" db_password_file.close()\n",
"except:\n",
" DB_USER = os.environ.get(\"DB_USERNAME\", \"postgres\")\n",
" DB_PASS = os.environ.get(\"DB_PASS\", \"postgres\")\n",
"\n",
"engine = PostgresEngine.from_instance(\n",
" project_id=GCP_PROJECT_ID,\n",
" region=GCP_CLOUD_SQL_REGION,\n",
" instance=GCP_CLOUD_SQL_INSTANCE,\n",
" database=DB_NAME,\n",
" user=DB_USER,\n",
" password=DB_PASS,\n",
" db=DB_NAME,\n",
" ip_type=IPTypes.PRIVATE\n",
" ip_type=IPTypes.PRIVATE,\n",
")\n",
"\n",
"try:\n",
" engine.init_vectorstore_table(\n",
" VECTOR_EMBEDDINGS_TABLE_NAME,\n",
" vector_size=VECTOR_DIMENSION,\n",
" overwrite_existing=True,\n",
" )\n",
" return conn\n",
"except Exception as err:\n",
" print(f\"Error: {err}\")\n",
"\n",
"# create connection pool with 'creator' argument to our connection object function\n",
"pool = sqlalchemy.create_engine(\n",
" \"postgresql+pg8000://\",\n",
" creator=getconn,\n",
")\n",
"\n",
"SHARED_DATA_BASEPATH='/data/rag/st'\n",
"SENTENCE_TRANSFORMER_MODEL = 'intfloat/multilingual-e5-small' # Transformer to use for converting text chunks to vector embeddings\n",
"SENTENCE_TRANSFORMER_MODEL_PATH_NAME='models--intfloat--multilingual-e5-small' # the downloaded model path takes this form for a given model name\n",
"SENTENCE_TRANSFORMER_MODEL_SNAPSHOT=\"ffdcc22a9a5c973ef0470385cef91e1ecb461d9f\" # specific snapshot of the model to use\n",
"SENTENCE_TRANSFORMER_MODEL_PATH = SHARED_DATA_BASEPATH + '/' + SENTENCE_TRANSFORMER_MODEL_PATH_NAME + '/snapshots/' + SENTENCE_TRANSFORMER_MODEL_SNAPSHOT # the path where the model is downloaded one time\n",
"\n",
"# the dataset has been pre-dowloaded to the GCS bucket as part of the notebook in the cell above. Ray workers will find the dataset readily mounted.\n",
"SHARED_DATASET_BASE_PATH=\"/data/netflix-shows/\"\n",
"REVIEWS_FILE_NAME=\"netflix_titles.csv\"\n",
@@ -135,40 +140,18 @@
"DIMENSION = 384 # Embeddings size\n",
"ACTOR_POOL_SIZE = 1 # number of actors for the distributed map_batches function\n",
"\n",
"class Embed:\n",
"class Splitter:\n",
" def __init__(self):\n",
" print(\"torch cuda version\", torch.version.cuda)\n",
" device=\"cpu\"\n",
" if torch.cuda.is_available():\n",
" print(\"device cuda found\")\n",
" device=\"cuda\"\n",
"\n",
" print (\"reading sentence transformer model from cache path:\", SENTENCE_TRANSFORMER_MODEL_PATH)\n",
" self.transformer = SentenceTransformer(SENTENCE_TRANSFORMER_MODEL_PATH, device=device)\n",
" self.splitter = RecursiveCharacterTextSplitter(chunk_size=CHUNK_SIZE, chunk_overlap=CHUNK_OVERLAP, length_function=len)\n",
"\n",
" def __call__(self, text_batch: List[str]):\n",
" def __call__(self, text_batch):\n",
" text = text_batch[\"item\"]\n",
" # print(\"type(text)=\", type(text), \"type(text_batch)=\", type(text_batch))\n",
" chunks = []\n",
" for data in text:\n",
" splits = self.splitter.split_text(data)\n",
" # print(\"len(data)\", len(data), \"len(splits)=\", len(splits))\n",
" chunks.extend(splits)\n",
"\n",
" embeddings = self.transformer.encode(\n",
" chunks,\n",
" batch_size=BATCH_SIZE\n",
" ).tolist()\n",
" print(\"len(chunks)=\", len(chunks), \", len(emb)=\", len(embeddings))\n",
" return {'results':list(zip(chunks, embeddings))}\n",
"\n",
"\n",
"# prepare the persistent shared directory to store artifacts needed for the ray workers\n",
"os.makedirs(SHARED_DATA_BASEPATH, exist_ok=True)\n",
"\n",
"# One time download of the sentence transformer model to a shared persistent storage available to the ray workers\n",
"snapshot_download(repo_id=SENTENCE_TRANSFORMER_MODEL, revision=SENTENCE_TRANSFORMER_MODEL_SNAPSHOT, cache_dir=SHARED_DATA_BASEPATH)\n",
" return {'results':chunks}\n",
"\n",
"# Process the dataset first, wrap the csv file contents into a Ray dataset\n",
"ray_ds = ray.data.read_csv(SHARED_DATASET_BASE_PATH + REVIEWS_FILE_NAME)\n",
@@ -184,81 +167,44 @@
"}])\n",
"print(ds_batch.schema)\n",
"\n",
"# Distributed map batches to create chunks out of each row, and fetch the vector embeddings by running inference on the sentence transformer\n",
"ds_embed = ds_batch.map_batches(\n",
" Embed,\n",
"# Distributed map batches to create chunks out of each row.\n",
"ds_splitted = ds_batch.map_batches(\n",
" Splitter,\n",
" compute=ray.data.ActorPoolStrategy(size=ACTOR_POOL_SIZE),\n",
" batch_size=BATCH_SIZE, # Large batch size to maximize GPU utilization.\n",
" num_gpus=1, # 1 GPU for each actor.\n",
" # num_cpus=1,\n",
")\n",
"\n",
"# Use this block for debug purpose to inspect the embeddings and raw text\n",
"# print(\"Embeddings ray dataset\", ds_embed.schema)\n",
"# for output in ds_embed.iter_rows():\n",
"# # restrict the text string to be less than 65535\n",
"# data_text = output[\"results\"][0][:65535]\n",
"# # vector data pass in needs to be a string \n",
"# data_emb = \",\".join(map(str, output[\"results\"][1]))\n",
"# data_emb = \"[\" + data_emb + \"]\"\n",
"# print (\"raw text:\", data_text, \", emdeddings:\", data_emb)\n",
"\n",
"# print(\"Embeddings ray dataset\", ds_embed.schema)\n",
"\n",
"data_text = \"\"\n",
"data_emb = \"\"\n",
"\n",
"with pool.connect() as db_conn:\n",
" db_conn.execute(\n",
" sqlalchemy.text(\n",
" \"CREATE EXTENSION IF NOT EXISTS vector;\"\n",
" )\n",
" )\n",
" db_conn.commit()\n",
"print(\"torch cuda version\", torch.version.cuda)\n",
"device=\"cpu\"\n",
"if torch.cuda.is_available():\n",
" print(\"device cuda found\")\n",
" device=\"cuda\"\n",
" \n",
"embeddings_service = HuggingFaceEmbeddings(model_name=SENTENCE_TRANSFORMER_MODEL, model_kwargs=dict(device=device))\n",
"vector_store = PostgresVectorStore.create_sync(\n",
" engine=engine,\n",
" embedding_service=embeddings_service,\n",
" table_name=VECTOR_EMBEDDINGS_TABLE_NAME,\n",
")\n",
"\n",
" create_table_query = \"CREATE TABLE IF NOT EXISTS \" + TABLE_NAME + \" ( id VARCHAR(255) NOT NULL, text TEXT NOT NULL, text_embedding vector(384) NOT NULL, PRIMARY KEY (id));\"\n",
" db_conn.execute(\n",
" sqlalchemy.text(create_table_query)\n",
" )\n",
" # commit transaction (SQLAlchemy v2.X.X is commit as you go)\n",
" db_conn.commit()\n",
" print(\"Created table=\", TABLE_NAME)\n",
" \n",
" query_text = \"INSERT INTO \" + TABLE_NAME + \" (id, text, text_embedding) VALUES (:id, :text, :text_embedding)\"\n",
" insert_stmt = sqlalchemy.text(query_text)\n",
" for output in ds_embed.iter_rows():\n",
" # print (\"type of embeddings\", type(output[\"results\"][1]), \"len embeddings\", len(output[\"results\"][1]))\n",
" # restrict the text string to be less than 65535\n",
" data_text = output[\"results\"][0][:65535]\n",
" # vector data pass in needs to be a string \n",
" data_emb = \",\".join(map(str, output[\"results\"][1]))\n",
" data_emb = \"[\" + data_emb + \"]\"\n",
" # print(\"text_embedding is \", data_emb)\n",
"for output in ds_splitted.iter_rows():\n",
" id = uuid.uuid4()\n",
" db_conn.execute(insert_stmt, parameters={\"id\": id, \"text\": data_text, \"text_embedding\": data_emb})\n",
" splits = output[\"results\"]\n",
" vector_store.add_texts(splits, id)\n",
"\n",
" # batch commit transactions\n",
" db_conn.commit()\n",
"\n",
" # query and fetch table\n",
" query_text = \"SELECT * FROM \" + TABLE_NAME\n",
" results = db_conn.execute(sqlalchemy.text(query_text)).fetchall()\n",
" # for row in results:\n",
" # print(row)\n",
"#Validate results\n",
"query = \"List the cast of squid game\"\n",
"query_vector = embeddings_service.embed_query(query)\n",
"docs = vector_store.similarity_search_by_vector(query_vector, k=4)\n",
"\n",
" # verify results\n",
" transformer = SentenceTransformer(SENTENCE_TRANSFORMER_MODEL)\n",
" query_text = \"During my holiday in Marmaris we ate here to fit the food. It's really good\" \n",
" query_emb = transformer.encode(query_text).tolist()\n",
" query_request = \"SELECT id, text, text_embedding, 1 - ('[\" + \",\".join(map(str, query_emb)) + \"]' <=> text_embedding) AS cosine_similarity FROM \" + TABLE_NAME + \" ORDER BY cosine_similarity DESC LIMIT 5;\" \n",
" query_results = db_conn.execute(sqlalchemy.text(query_request)).fetchall()\n",
" db_conn.commit()\n",
" print(\"print query_results, the 1st one is the hit\")\n",
" for row in query_results:\n",
" print(row)\n",
"\n",
"# cleanup connector object\n",
"connector.close()\n",
"for i, document in enumerate(docs):\n",
" print(f\"Result #{i+1}\")\n",
" print(document.page_content)\n",
" print(\"-\" * 100)\n",
" \n",
"print (\"end job\")"
]
},
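This rewrite drops the manual `pg8000` connection pool, the hand-written `CREATE TABLE`/`INSERT` statements, and the in-actor `SentenceTransformer` in favor of `PostgresEngine`, `HuggingFaceEmbeddings`, and `PostgresVectorStore`. Note that `CHAT_HISTORY_TABLE_NAME` (`message_store`) is defined here but not consumed in this notebook; per the PR title it backs the chat-history side of the app. A minimal sketch of how the same engine can persist chat history, assuming the `PostgresChatMessageHistory` API of `langchain-google-cloud-sql-pg` (the session id is illustrative):

```python
# Hedged sketch: persisting per-session chat history in the same Cloud SQL
# instance, reusing the PostgresEngine created above. session_id is a
# hypothetical key (e.g. the browser session this PR introduces).
from langchain_google_cloud_sql_pg import PostgresChatMessageHistory

# One-time setup of the message_store table.
engine.init_chat_history_table(table_name=CHAT_HISTORY_TABLE_NAME)

history = PostgresChatMessageHistory.create_sync(
    engine,
    session_id="example-browser-session",
    table_name=CHAT_HISTORY_TABLE_NAME,
)
history.add_user_message("List the cast of Squid Game")
history.add_ai_message("Lee Jung-jae, Park Hae-soo, ...")  # placeholder model reply
print(history.messages)  # both turns, replayable into the chain's prompt
```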
7 changes: 6 additions & 1 deletion applications/rag/frontend/container/Dockerfile
@@ -19,4 +19,9 @@ WORKDIR /workspace/frontend

RUN pip install -r requirements.txt

CMD ["python", "main.py"]
EXPOSE 8080

ENV FLASK_APP=/workspace/frontend/main.py
ENV PYTHONPATH=/workspace/frontend/
# Run the application with Gunicorn
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8080", "main:app"]
@@ -11,5 +11,3 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file is required to make Python treat the subfolder as a package
24 changes: 24 additions & 0 deletions applications/rag/frontend/container/application/__init__.py
@@ -0,0 +1,24 @@
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

from flask import Flask

def create_app():
app = Flask(__name__, static_folder='static', template_folder='templates')
app.jinja_env.trim_blocks = True
app.jinja_env.lstrip_blocks = True
app.config['SECRET_KEY'] = os.environ.get("APPLICATION_SECRET_KEY")

return app
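`create_app` follows Flask's application-factory pattern. One caveat worth testing: if `APPLICATION_SECRET_KEY` is unset, `SECRET_KEY` ends up `None` and any session-backed feature (such as the browser chat history this PR adds) will fail at runtime. A quick smoke test, with an assumed dev-only key and a hypothetical route:

```python
# Smoke test for the factory; the secret key here is a dev-only stand-in
# for the value the deployment injects (e.g. from a Kubernetes secret).
import os

os.environ.setdefault("APPLICATION_SECRET_KEY", "dev-only-secret")

from application import create_app

app = create_app()
assert app.secret_key, "APPLICATION_SECRET_KEY must be set for sessions"

with app.test_client() as client:
    resp = client.get("/")  # hypothetical route; adjust to the app's routes
    print(resp.status_code)
```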
@@ -0,0 +1,13 @@
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.