diff --git a/docs/source/index.rst b/docs/source/index.rst
index 3e9a7bc4..5ab66d1a 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -249,4 +249,4 @@ We are building a library that unites the two worlds, forming a healthy LLM appl
 .. :caption: Contributors
 .. :hidden:
 
-.. contributor/index
\ No newline at end of file
+.. contributor/index
diff --git a/docs/source/tutorials/text_splitter.rst b/docs/source/tutorials/text_splitter.rst
index 989a7a33..3e001bca 100644
--- a/docs/source/tutorials/text_splitter.rst
+++ b/docs/source/tutorials/text_splitter.rst
@@ -191,7 +191,7 @@ Choosing the proper chunking strategy involves considering several key factors:
 
 - **Application of Results**: The application, whether it be semantic search, question answering, or summarization, dictates the appropriate chunking method, especially considering the limitations of content windows in large language models (LLMs).
 - **System Integration**: Efficient chunking aligns with system capabilities. For example, `Full-Text Search:` Use larger chunks to allow algorithms to explore broader contexts effectively. For example, search books based on extensive excerpts or chapters. `Granular Search Systems:` Employ smaller chunks to precisely retrieve information relevant to user queries, such as retrieving specific instructions directly in response to a user’s question. For example, if a user asks, "How do I reset my password?". The system can retrieve a specific sentence or paragraph addressing that action directly.
- 
+
 Chunking Strategies
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pyproject.toml b/pyproject.toml
index a4a8effb..2a1a6f53 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -49,4 +49,4 @@ lightrag = { path = "lightrag", develop = true }
 
 [build-system]
 requires = ["poetry-core>=1.0.0"]
-build-backend = "poetry.core.masonry.api"
\ No newline at end of file
+build-backend = "poetry.core.masonry.api"
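
The `text_splitter.rst` hunk above contrasts larger chunks (full-text search over broad context) with smaller chunks (granular retrieval of a specific sentence, e.g. for "How do I reset my password?"). A minimal sketch of that tradeoff, using a generic word-based splitter — this is an illustration only and is not LightRAG's `TextSplitter` API; the function name and parameters are hypothetical:

```python
def split_by_words(text: str, chunk_size: int, overlap: int = 0) -> list[str]:
    """Split text into chunks of `chunk_size` words; adjacent chunks share `overlap` words."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    # Slide a window of chunk_size words across the text, advancing by `step`.
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

doc = ("To reset your password open Settings then Security "
       "then Reset Password and follow the emailed link")

coarse = split_by_words(doc, chunk_size=12)           # fewer, broader chunks for full-text search
fine = split_by_words(doc, chunk_size=4, overlap=1)   # many small chunks for granular retrieval
```

With the 16-word sample document, the coarse setting yields 2 chunks while the fine setting yields 6, so a granular query can match a single short chunk rather than a whole excerpt.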