Update install
YoanSallami committed Jul 12, 2024
1 parent 51cfc21 commit 5cc9c6f
Showing 2 changed files with 9 additions and 24 deletions.
4 changes: 2 additions & 2 deletions docs/dspy.md
@@ -6,7 +6,7 @@ sidebar_position: 2

## Programming not Prompting

**[DSPy](https://dspy-docs.vercel.app/)** is a machine learning framework for building LLM applications in a systematic way by abstracting prompt engineering techniques and offering ways to evaluate your pipelines. Often, when tackling a difficult task, you will have to provide examples to your LLM. Besides being time-consuming, handcrafting these examples can make your application brittle, especially if it involves numerous calls to LMs (like HybridAGI). When you change the LMs, you will have to re-design the prompts for that model. DSPy changes that by emphasizing small multi-input/multi-output prompts with automatic building of examples, allowing better control.
**[DSPy](https://dspy-docs.vercel.app/)** is a machine learning framework for building LLM applications in a systematic way by abstracting prompt engineering techniques and offering ways to evaluate your pipelines. Often, when tackling a difficult task, you will have to provide examples to your LLM. Besides being time-consuming, handcrafting these examples can make your application brittle, especially if it involves numerous calls to LMs (like HybridAGI). When you change the pipeline, you will have to re-design the examples. DSPy changes that by emphasizing small multi-input/multi-output prompts with automatic building of examples, allowing better control and faster iteration.
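
To make these small multi-input/multi-output prompts concrete, here is a minimal DSPy signature and module. This is an illustrative sketch only: the class name, field names, and example strings are assumptions for demonstration and are not taken from HybridAGI.

```python
import dspy

# Assumed setup: a local Ollama server with the `mistral` model,
# as suggested in the install instructions.
lm = dspy.OllamaLocal(model="mistral")
dspy.settings.configure(lm=lm)

class AnswerQuestion(dspy.Signature):
    """Answer the question using the given context."""
    context = dspy.InputField(desc="relevant facts retrieved from memory")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="a short factual answer")

# A module wraps the signature; a DSPy optimizer (e.g. BootstrapFewShot)
# can later build the few-shot examples for it automatically.
answer_question = dspy.Predict(AnswerQuestion)
prediction = answer_question(
    context="HybridAGI is a neuro-symbolic agent framework built on top of DSPy.",
    question="What framework is HybridAGI built on?",
)
print(prediction.answer)
```

Because the prompt is generated from the signature, swapping the underlying LM or pipeline does not require rewriting handcrafted examples.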

## DSPy language models

@@ -148,7 +148,7 @@ class CheckHelpfullnessSignature(dspy.Signature):
prefix="Helpful[Yes/No:]",
)

def check_helpfulness(example, prediction):
def check_helpfulness(example, prediction, teacher_lm: Optional[dspy.LM] = None):
# This line means that we discard the example if the agent reached the max iterations
# Meaning it was probably stuck in a loop
if prediction.finish_reason == "max iters":
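
The diff is truncated here. As a hedged sketch only, the metric might continue roughly as below; the input/output field names (`objective`, `answer`, `helpful`) and the teacher-LM handling are assumptions for illustration, not code from the repository:

```python
from typing import Optional
import dspy

class CheckHelpfullnessSignature(dspy.Signature):
    """Check whether the answer is helpful with respect to the objective."""
    objective = dspy.InputField()  # assumed input field
    answer = dspy.InputField()     # assumed input field
    helpful = dspy.OutputField(
        prefix="Helpful[Yes/No:]",  # prefix shown in the diff above
    )

def check_helpfulness(example, prediction, teacher_lm: Optional[dspy.LM] = None):
    # Discard the example if the agent reached the max iterations,
    # meaning it was probably stuck in a loop.
    if prediction.finish_reason == "max iters":
        return False
    # Optionally judge helpfulness with a stronger "teacher" model.
    lm = teacher_lm if teacher_lm is not None else dspy.settings.lm
    with dspy.context(lm=lm):
        judged = dspy.Predict(CheckHelpfullnessSignature)(
            objective=example.objective,
            answer=prediction.answer,
        )
    return judged.helpful.strip().lower().startswith("yes")
```
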
29 changes: 7 additions & 22 deletions docs/install.md
@@ -10,36 +10,21 @@ sidebar_position: 4

Additionally, you can install [Ollama](https://ollama.com/) to run the examples: pull the `mistral` model and start the Ollama server.

### 1. Install
<!--
#### From pip
```
virtualenv venv
source venv/bin/activate
pip install git+https://github.com/SynaLinks/HybridAGI
```
#### From sources -->
### 1. Install using pip

```bash
git clone https://github.com/SynaLinks/HybridAGI
cd HybridAGI
virtualenv venv
source venv/bin/activate
pip install poetry # If you don't have it already
poetry install
pip install hybridagi
```

Note: Before installing `hybridagi`, we recommend creating a virtual environment using virtualenv, conda, or your preferred environment manager.

### 2. Set up the Knowledge Base (needed for the system to work)

<!-- ```
docker run -p 6379:6379 -it --rm falkordb/falkordb:edge
``` -->
Then set up the knowledge base using Docker:

```bash
docker compose up
docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:edge
```

### 3. Start programming your Agent

You can learn more about graph prompt programming in this **[first tutorial](tutorials/basics/graph-prompt-programming.md)** or in this **[Python notebook](https://github.com/SynaLinks/HybridAGI/blob/main/notebooks/first_steps.ipynb)**.
