Merge pull request #157 from SylphAI-Inc/main
[release 0.1.0.b6]
Sylph-AI authored Jul 24, 2024
2 parents b4493d7 + c52e0f2 commit 0e93812
Showing 6 changed files with 104 additions and 40 deletions.
13 changes: 13 additions & 0 deletions docs/source/tutorials/component.rst
@@ -1,3 +1,16 @@
+.. raw:: html
+
+   <div style="display: flex; justify-content: flex-start; align-items: center; margin-bottom: 20px;">
+      <a href="https://colab.research.google.com/drive/1aD0C8-iMB8quIn8FKhrtFAGcrboRNg2C?usp=sharing" target="_blank" style="margin-right: 10px;">
+         <img alt="Try Quickstart in Colab" src="https://colab.research.google.com/assets/colab-badge.svg" style="vertical-align: middle;">
+      </a>
+      <a href="https://github.com/SylphAI-Inc/LightRAG/blob/main/lightrag/lightrag/core/component.py" target="_blank" style="display: flex; align-items: center;">
+         <img src="https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" alt="GitHub" style="height: 20px; width: 20px; margin-right: 5px;">
+         <span style="vertical-align: middle;"> Open Source Code</span>
+      </a>
+   </div>
+
+
Component
============

19 changes: 16 additions & 3 deletions docs/source/tutorials/generator.rst
@@ -1,5 +1,18 @@
.. _generator:

+.. raw:: html
+
+   <div style="display: flex; justify-content: flex-start; align-items: center; margin-bottom: 20px;">
+      <a href="https://colab.research.google.com/drive/1gmxeX1UuUxZDouWhkLGQYrD4hAdt9IVX?usp=sharing" target="_blank" style="margin-right: 10px;">
+         <img alt="Try Quickstart in Colab" src="https://colab.research.google.com/assets/colab-badge.svg" style="vertical-align: middle;">
+      </a>
+      <a href="https://github.com/SylphAI-Inc/AdalFlow/blob/main/tutorials/generator_note.py" target="_blank" style="display: flex; align-items: center;">
+         <img src="https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" alt="GitHub" style="height: 20px; width: 20px; margin-right: 5px;">
+         <span style="vertical-align: middle;"> Open Source Code</span>
+      </a>
+   </div>
+
+
Generator
=========

@@ -276,7 +289,7 @@ We will use the `JsonParser` to parse the output back to a `dict` object.
output_format_str = r"""Your output should be formatted as a standard JSON object with two keys:
{
"explaination": "A brief explaination of the concept in one sentence.",
"explanation": "A brief explanation of the concept in one sentence.",
"example": "An example of the concept in a sentence."
}
"""
@@ -304,7 +317,7 @@ The final prompt is:
<OUTPUT_FORMAT>
Your output should be formatted as a standard JSON object with two keys:
{
"explaination": "A brief explaination of the concept in one sentence.",
"explanation": "A brief explanation of the concept in one sentence.",
"example": "An example of the concept in a sentence."
}
@@ -320,7 +333,7 @@ The above printout is:
.. code-block::
<class 'dict'>
-{'explaination': 'LLM stands for Large Language Model, which are deep learning models trained on enormous amounts of text data.', 'example': 'An example of a LLM is GPT-3, which can generate human-like text based on the input provided.'}
+{'explanation': 'LLM stands for Large Language Model, which are deep learning models trained on enormous amounts of text data.', 'example': 'An example of a LLM is GPT-3, which can generate human-like text based on the input provided.'}
Please refer to :doc:`output_parsers` for a more comprehensive guide on the `Parser` components.
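
A minimal sketch of the parsing step this section describes, assuming the `JsonParser` import path from the LightRAG layout and a hand-written string in place of a real model response:

from lightrag.core.string_parser import JsonParser

parser = JsonParser()
raw_response = '{"explanation": "LLM stands for Large Language Model.", "example": "GPT-3 is an LLM."}'
data = parser(raw_response)  # extracts the JSON and loads it into a dict
print(type(data))  # <class 'dict'>
print(data["explanation"])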

85 changes: 61 additions & 24 deletions lightrag/lightrag/components/model_client/ollama_client.py
@@ -1,7 +1,16 @@
"""Ollama ModelClient integration."""

import os
-from typing import Dict, Optional, Any, TypeVar, List, Type, Generator, Union
+from typing import (
+    Dict,
+    Optional,
+    Any,
+    TypeVar,
+    List,
+    Type,
+    Generator as GeneratorType,
+    Union,
+)
import backoff
import logging
import warnings
@@ -21,6 +30,27 @@
T = TypeVar("T")


+def parse_stream_response(completion: GeneratorType) -> Any:
+    """Parse the completion to a str. We use the generate with prompt instead of chat with messages."""
+    for chunk in completion:
+        log.debug(f"Raw chunk: {chunk}")
+        yield chunk["response"] if "response" in chunk else None
+
+
+def parse_generate_response(completion: GenerateResponse) -> Any:
+    """Parse the completion to a str. We use the generate with prompt instead of chat with messages."""
+    if "response" in completion:
+        log.debug(f"response: {completion}")
+        return completion["response"]
+    else:
+        log.error(
+            f"Error parsing the completion: {completion}, type: {type(completion)}"
+        )
+        raise ValueError(
+            f"Error parsing the completion: {completion}, type: {type(completion)}"
+        )
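
A quick way to see what the new `parse_stream_response` yields, driving it with a hand-built stand-in for the Ollama stream (the real SDK yields one dict per chunk; chunks without a `response` key, such as a final status chunk, come back as None here):

fake_stream = iter([{"response": "Paris"}, {"response": " is the capital."}, {"done": True}])
for text in parse_stream_response(fake_stream):
    print(repr(text))  # 'Paris', ' is the capital.', None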


class OllamaClient(ModelClient):
__doc__ = r"""A component wrapper for the Ollama SDK client.
@@ -146,25 +176,16 @@ def init_async_client(self):

        self.async_client = ollama.AsyncClient(host=self._host)

+    # NOTE: do not put yield and return in the same function, thus we separate the functions
    def parse_chat_completion(
-        self, completion: Union[GenerateResponse, Generator]
+        self, completion: Union[GenerateResponse, GeneratorType]
    ) -> Any:
        """Parse the completion to a str. We use the generate with prompt instead of chat with messages."""
-        log.debug(f"completion: {completion}, {isinstance(completion, Generator)}")
-        if isinstance(completion, Generator):  # streaming
-            for chunk in completion:
-                log.debug(f"Raw chunk: {chunk}")
-                yield chunk["response"] if "response" in chunk else None
+        log.debug(f"completion: {completion}, {isinstance(completion, GeneratorType)}")
+        if isinstance(completion, GeneratorType):  # streaming
+            return parse_stream_response(completion)
        else:
-            if "response" in completion:
-                return completion["response"]
-            else:
-                log.error(
-                    f"Error parsing the completion: {completion}, type: {type(completion)}"
-                )
-                raise ValueError(
-                    f"Error parsing the completion: {completion}, type: {type(completion)}"
-                )
+            return parse_generate_response(completion)
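
The NOTE above is the motivation for extracting the two helpers: any `def` that contains `yield` is compiled as a generator function, so a `return` inside it ends iteration instead of handing back a value. A tiny standalone illustration of the pitfall (toy function, not part of the client):

def parse_both(completion, streaming):
    if streaming:
        for chunk in completion:
            yield chunk  # this yield turns the whole function into a generator
    else:
        return completion  # ends iteration; the caller never receives the value

print(parse_both("hello", streaming=False))  # prints a generator object, not "hello"

With the logic split out, `parse_chat_completion` can use a plain `return` on both branches.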

    def parse_embedding_response(
        self, response: Dict[str, List[float]]
@@ -264,18 +285,34 @@ def to_dict(self, exclude: Optional[List[str]] = None) -> Dict[str, Any]:
return output


+# TODO: add tests to stream and non stream case
# if __name__ == "__main__":
#     from lightrag.core.generator import Generator
-#     from lightrag.components.model_client import OllamaClient
+#     from lightrag.components.model_client import OllamaClient, OpenAIClient
#     from lightrag.utils import setup_env, get_logger

-#     # log = get_logger(level="DEBUG")
+#     log = get_logger(level="DEBUG")

#     setup_env()

-#     model_client = OllamaClient()
-#     model_kwargs = {"model": "phi3", "stream": True}
-#     generator = Generator(model_client=model_client, model_kwargs=model_kwargs)
-#     output = generator({"input_str": "What is the capital of France?"})
-#     for chunk in output.data:
-#         print(chunk)
+#     ollama_ai = {
+#         "model_client": OllamaClient(),
+#         "model_kwargs": {
+#             "model": "qwen2:0.5b",
+#             "stream": True,
+#         },
+#     }
+#     open_ai = {
+#         "model_client": OpenAIClient(),
+#         "model_kwargs": {
+#             "model": "gpt-3.5-turbo",
+#             "stream": False,
+#         },
+#     }
+#     # generator = Generator(**open_ai)
+#     # output = generator({"input_str": "What is the capital of France?"})
+#     # print(output)
+
+#     # generator = Generator(**ollama_ai)
+#     # output = generator({"input_str": "What is the capital of France?"})
+#     # print(output)
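
An uncommented sketch of the streaming case the TODO asks to cover, assuming a running local Ollama server with the named model pulled (any small model works):

from lightrag.core.generator import Generator
from lightrag.components.model_client import OllamaClient

generator = Generator(
    model_client=OllamaClient(),
    model_kwargs={"model": "qwen2:0.5b", "stream": True},
)
output = generator({"input_str": "What is the capital of France?"})
for chunk in output.data:  # with stream=True, data is a generator of text chunks
    print(chunk, end="", flush=True)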
15 changes: 8 additions & 7 deletions lightrag/lightrag/core/generator.py
@@ -168,7 +168,6 @@ def _post_call(self, completion: Any) -> GeneratorOutputType:
            response = self.model_client.parse_chat_completion(completion)
        except Exception as e:
            log.error(f"Error parsing the completion {completion}: {e}")
-            # response = str(completion)
            return GeneratorOutput(raw_response=str(completion), error=str(e))

        # the output processors operate on the str, the raw_response field.
@@ -227,20 +226,22 @@ def call(
log.debug(f"api_kwargs: {api_kwargs}")
output: GeneratorOutputType = None
# call the model client
completion = None
try:
completion = self.model_client.call(
api_kwargs=api_kwargs, model_type=self.model_type
)

except Exception as e:
log.error(f"Error calling the model: {e}")
output = GeneratorOutput(error=str(e))
# process the completion
if completion:
try:
output = self._post_call(completion)

try:
output = self._post_call(completion)
except Exception as e:
log.error(f"Error processing the output: {e}")
output = GeneratorOutput(raw_response=str(completion), error=str(e))
except Exception as e:
log.error(f"Error processing the output: {e}")
output = GeneratorOutput(raw_response=str(completion), error=str(e))

log.info(f"output: {output}")
return output
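
With the `completion = None` guard, a failed client call can no longer reach `_post_call` with an undefined variable; the caller always receives one `GeneratorOutput` whose `error` field is set. A caller-side sketch of the resulting contract:

output = generator({"input_str": "What is an LLM?"})
if output.error:
    print(f"generation failed: {output.error}")  # client or parsing error message
else:
    print(output.data)  # result after the output processors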
8 changes: 4 additions & 4 deletions lightrag/lightrag/core/types.py
@@ -68,17 +68,17 @@ class ModelClientType:
    .. code-block:: python

        from lightrag.components.model_client import CohereAPIClient, TransformersClient, AnthropicAPIClient, GroqAPIClient, OpenAIClient

        model_client = OpenAIClient()

    To this:

    .. code-block:: python

        from lightrag.core.types import ModelClientType

        model_client = ModelClientType.OPENAI()
    """
    COHERE = CohereAPIClient
    TRANSFORMERS = TransformersClient
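
The shorthand composes directly with `Generator`; a minimal sketch, with the model choice as an assumption:

from lightrag.core.generator import Generator
from lightrag.core.types import ModelClientType

generator = Generator(
    model_client=ModelClientType.OPENAI(),
    model_kwargs={"model": "gpt-3.5-turbo"},
)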
4 changes: 2 additions & 2 deletions tutorials/generator_note.py
@@ -34,8 +34,8 @@ async def acall(self, query):

@dataclass
class QAOutput(DataClass):
-    explaination: str = field(
-        metadata={"desc": "A brief explaination of the concept in one sentence."}
+    explanation: str = field(
+        metadata={"desc": "A brief explanation of the concept in one sentence."}
    )
    example: str = field(metadata={"desc": "An example of the concept in a sentence."})
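
After the rename, code that consumes the structured output reads the corrected field name; a minimal sketch:

qa = QAOutput(
    explanation="An LLM is a deep learning model trained on large text corpora.",
    example="GPT-3 is an example of an LLM.",
)
print(qa.explanation)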

