Pipeshift llama index integration #16610

Open · wants to merge 2 commits into `main`
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/pipeshift.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
::: llama_index.llms.pipeshift
    options:
      members:
        - Pipeshift
384 changes: 384 additions & 0 deletions docs/docs/examples/llm/pipeshift.ipynb

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions docs/docs/module_guides/models/llms/modules.md
@@ -40,6 +40,7 @@ We support integrations with OpenAI, Anthropic, Hugging Face, PaLM, and more.
- [OpenRouter](../../../examples/llm/openrouter.ipynb)
- [PaLM](../../../examples/llm/palm.ipynb)
- [Perplexity](../../../examples/llm/perplexity.ipynb)
- [Pipeshift](../../../examples/llm/pipeshift.ipynb)
- [PremAI](../../../examples/llm/premai.ipynb)
- [Portkey](../../../examples/llm/portkey.ipynb)
- [Predibase](../../../examples/llm/predibase.ipynb)
4 changes: 4 additions & 0 deletions docs/mkdocs.yml
@@ -367,6 +367,7 @@ nav:
- ./examples/llm/paieas.ipynb
- ./examples/llm/palm.ipynb
- ./examples/llm/perplexity.ipynb
- ./examples/llm/pipeshift.ipynb
- ./examples/llm/portkey.ipynb
- ./examples/llm/predibase.ipynb
- ./examples/llm/premai.ipynb
@@ -1017,6 +1018,7 @@ nav:
- ./api_reference/llms/paieas.md
- ./api_reference/llms/palm.md
- ./api_reference/llms/perplexity.md
- ./api_reference/llms/pipeshift.md
- ./api_reference/llms/portkey.md
- ./api_reference/llms/predibase.md
- ./api_reference/llms/premai.md
@@ -2262,6 +2264,7 @@ plugins:
- ../llama-index-integrations/readers/llama-index-readers-document360
- ../llama-index-integrations/llms/llama-index-llms-gaudi
- ../llama-index-integrations/llms/llama-index-llms-zhipuai
- ../llama-index-integrations/llms/llama-index-llms-pipeshift
- redirects:
redirect_maps:
./api/llama_index.vector_stores.MongoDBAtlasVectorSearch.html: api_reference/storage/vector_store/mongodb.md
@@ -2548,6 +2551,7 @@ plugins:
./examples/llm/openrouter.html: https://docs.llamaindex.ai/en/stable/examples/llm/openrouter/
./examples/llm/palm.html: https://docs.llamaindex.ai/en/stable/examples/llm/palm/
./examples/llm/perplexity.html: https://docs.llamaindex.ai/en/stable/examples/llm/perplexity/
./examples/llm/pipeshift.html: https://docs.llamaindex.ai/en/stable/examples/llm/pipeshift/
./examples/llm/portkey.html: https://docs.llamaindex.ai/en/stable/examples/llm/portkey/
./examples/llm/predibase.html: https://docs.llamaindex.ai/en/stable/examples/llm/predibase/
./examples/llm/premai.html: https://docs.llamaindex.ai/en/stable/examples/llm/premai/
1 change: 1 addition & 0 deletions llama-index-cli/llama_index/cli/upgrade/mappings.json
@@ -1013,6 +1013,7 @@
"SyncOpenAI": "llama_index.llms.openai",
"AsyncOpenAI": "llama_index.llms.openai",
"LMStudio": "llama_index.llms.lmstudio",
"Pipeshift": "llama_index.llms.pipeshift",
"GradientBaseModelLLM": "llama_index.llms.gradient",
"GradientModelAdapterLLM": "llama_index.llms.gradient",
"EntityExtractor": "llama_index.extractors.entity",
@@ -1012,6 +1012,7 @@
"SyncOpenAI": "llama_index.llms.openai",
"AsyncOpenAI": "llama_index.llms.openai",
"LMStudio": "llama_index.llms.lmstudio",
"Pipeshift": "llama_index.llms.pipeshift",
"GradientBaseModelLLM": "llama_index.llms.gradient",
"GradientModelAdapterLLM": "llama_index.llms.gradient",
"EntityExtractor": "llama_index.extractors.entity",
153 changes: 153 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-pipeshift/.gitignore
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
@@ -0,0 +1,3 @@
poetry_requirements(
    name="poetry",
)
18 changes: 18 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-pipeshift/Makefile
@@ -0,0 +1,18 @@
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help: ## Show all Makefile targets.
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format: ## Run code autoformatters (black).
	pre-commit install
	git ls-files | xargs pre-commit run black --files

lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy
	pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

TEST_FILE ?= tests
test: ## Run tests via pytest.
	poetry run pytest ${TEST_FILE}

watch-docs: ## Build and watch documentation.
	sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
112 changes: 112 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-pipeshift/README.md
@@ -0,0 +1,112 @@
# LlamaIndex Llms Integration: Pipeshift

## Installation

1. Install the required Python packages:

```bash
pip install llama-index-llms-pipeshift
pip install llama-index
```

2. Set the `PIPESHIFT_API_KEY` environment variable, or pass the key directly to the class constructor.
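
For example, a small helper can surface a missing key early instead of letting the first request fail with an opaque authentication error (a minimal sketch; `get_pipeshift_api_key` is our own illustrative name, not part of the integration):

```python
import os


def get_pipeshift_api_key() -> str:
    """Return the Pipeshift API key from the environment.

    Raises a clear error up front rather than letting the client
    fail later with a less obvious authentication message.
    """
    key = os.environ.get("PIPESHIFT_API_KEY")
    if not key:
        raise RuntimeError(
            "PIPESHIFT_API_KEY is not set; export it (e.g. "
            "`export PIPESHIFT_API_KEY=...`) or pass api_key= to Pipeshift."
        )
    return key
```

You can then construct the client with `Pipeshift(model=..., api_key=get_pipeshift_api_key())` so the failure mode is explicit.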

## Usage

### Basic Completion

To generate a simple completion, use the `complete` method:

```python
from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    # api_key="YOUR_API_KEY",  # or set the PIPESHIFT_API_KEY environment variable
)
res = llm.complete("supercars are ")
print(res)
```

Example output:

```
Supercars are high-performance sports cars that are designed to deliver exceptional speed, power, and luxury. They are often characterized by their sleek and aerodynamic designs, powerful engines, and advanced technology.
```

### Basic Chat

To simulate a chat with multiple messages:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.pipeshift import Pipeshift

messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="Why should I pick the Porsche 911 GT3 RS?"),
]
res = Pipeshift(
    model="mistralai/Mistral-7B-Instruct-v0.3", max_tokens=50
).chat(messages)
print(res)
```

Example output:

```
assistant: 1. Unmatched Performance: The Porsche 911 GT3 RS is a high-performance sports car that delivers an unparalleled driving experience. It boasts a powerful 4.0-liter flat
```

### Streaming Completion

To stream a response in real-time using `stream_complete`:

```python
from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(model="mistralai/Mistral-7B-Instruct-v0.3")
resp = llm.stream_complete("porsche GT3 RS is ")

for r in resp:
    print(r.delta, end="")
```

Example output (partial):

```
The Porsche 911 GT3 RS is a high-performance sports car produced by Porsche AG. It is part of the 911 (991 and 992 generations) series and is
```

### Streaming Chat

For a streamed conversation, use `stream_chat`:

```python
from llama_index.llms.pipeshift import Pipeshift
from llama_index.core.llms import ChatMessage

llm = Pipeshift(model="mistralai/Mistral-7B-Instruct-v0.3")
messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="How fast can the Porsche GT3 RS go?"),
]
resp = llm.stream_chat(messages)

for r in resp:
    print(r.delta, end="")
```

Example output (partial):

```
The Porsche 911 GT3 RS is an incredible piece of engineering. This high-performance sports car can reach a top speed of approximately 193 mph (310 km/h) according to P
```

### LLM implementation example

For a complete walkthrough, see the example notebook: https://docs.llamaindex.ai/en/stable/examples/llm/pipeshift/
@@ -0,0 +1,4 @@
from llama_index.llms.pipeshift.base import Pipeshift


__all__ = ["Pipeshift"]