
Commit

deploy: dff36b0
github-actions[bot] committed Jul 8, 2024
1 parent c810c20 commit 5f7251c
Showing 7 changed files with 64 additions and 76 deletions.
Binary file modified .doctrees/environment.pickle
Binary file modified .doctrees/index.doctree
79 changes: 27 additions & 52 deletions _sources/index.rst.txt
@@ -1,47 +1,11 @@
.. =======================
.. Introduction
.. =======================

.. image:: https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/LightRAG-logo-doc.jpeg
:width: 100%
:alt: LightRAG Logo


.. .. |GitHub| image:: https://img.shields.io/github/stars/SylphAI-Inc/LightRAG?style=flat-square
.. :target: https://github.com/SylphAI-Inc/LightRAG


.. .. |PyPI Version| image:: https://img.shields.io/pypi/v/lightRAG?style=flat-square
.. :target: https://pypi.org/project/lightRAG/
.. .. |Discord| image:: https://dcbadge.vercel.app/api/server/zt2mTPcu?compact=true&style=flat
.. :target: https://discord.gg/zt2mTPcu
.. .. |License| image:: https://img.shields.io/github/license/SylphAI-Inc/LightRAG
.. :target: https://opensource.org/license/MIT
.. .. |PyPI Downloads| image:: https://img.shields.io/pypi/dm/lightRAG?style=flat-square
.. :target: https://pypistats.org/packages/lightRAG
.. .. |GitHub Stars| image:: https://img.shields.io/github/stars/SylphAI-Inc/LightRAG?style=flat-square
.. :target: https://star-history.com/#SylphAI-Inc/LightRAG
.. .. raw:: html
.. <div style="text-align: center; margin-bottom: 20px;">
.. <a href="https://github.com/SylphAI-Inc/LightRAG"><img src="https://img.shields.io/github/repo-size/SylphAI-Inc/LightRAG?style=flat-square" alt="GitHub Repo"></a>
.. <a href="https://pypi.org/project/lightRAG/"><img src="https://img.shields.io/pypi/v/lightRAG?style=flat-square" alt="PyPI Version"></a>
.. <a href="https://star-history.com/#SylphAI-Inc/LightRAG"><img src="https://img.shields.io/github/stars/SylphAI-Inc/LightRAG?style=flat-square" alt="GitHub Stars"></a>
.. <a href="https://discord.gg/zt2mTPcu"><img src="https://dcbadge.vercel.app/api/server/zt2mTPcu?compact=true&style=flat" alt="Discord"></a>
.. <a href="https://opensource.org/license/MIT"><img src="https://img.shields.io/github/license/SylphAI-Inc/LightRAG" alt="License"></a>
.. </div>
.. raw:: html

<div style="text-align: center; margin-bottom: 20px;">
@@ -67,8 +31,6 @@
</p>
</div>

.. *LightRAG* helps developers with both building and optimizing *Retriever-Agent-Generator (RAG)* pipelines.
.. It is *light*, *modular*, and *robust*.



@@ -139,15 +101,15 @@
.. and Customizability
Maxium Customizability & Composability
Light
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We provide developers with fundamental building blocks of *100% clarity and simplicity*.

Developers who are building real-world Large Language Model (LLM) applications are the real heroes.
As a library, we provide them with the fundamental building blocks with 100% clarity and simplicity.
- Only two fundamental but powerful base classes: `Component` for the pipeline and `DataClass` for data interaction with LLMs.
- A highly readable codebase and less than two levels of class inheritance. :doc:`developer_notes/class_hierarchy`.
- We maximize the library's tooling and prompting capabilities to minimize the reliance on LLM API features such as tools and JSON format.
- The result is a library with bare minimum abstraction, providing developers with *maximum customizability*.

- Two fundamental and powerful base classes: `Component` for the pipeline and `DataClass` for data interaction with LLMs.
- We end up with less than two levels of class inheritance. :doc:`developer_notes/class_hierarchy`.
- The result is a library with bare minimum abstraction, providing developers with *maximum customizability and composability*.
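The `DataClass` idea above — data objects that can describe themselves to an LLM — can be sketched roughly as follows. The names `PromptData`, `QARecord`, and both methods are hypothetical illustrations, not the actual LightRAG API:

```python
from dataclasses import dataclass, fields


# Hypothetical sketch (not the actual LightRAG API): a DataClass-style
# base that renders its own schema and instances as plain text, so an
# LLM can read and produce the structure without provider JSON modes.
@dataclass
class PromptData:
    def to_schema_str(self) -> str:
        # Describe each field as "name: type" for inclusion in a prompt.
        return "\n".join(f"{f.name}: {f.type.__name__}" for f in fields(self))

    def to_example_str(self) -> str:
        # Render this instance as "name: value" lines, e.g. a few-shot example.
        return "\n".join(f"{f.name}: {getattr(self, f.name)}" for f in fields(self))


@dataclass
class QARecord(PromptData):
    question: str
    answer: str


record = QARecord(question="What is RAG?", answer="Retriever-Augmented Generation.")
print(record.to_schema_str())
# question: str
# answer: str
```

The point of the sketch is the single responsibility: one small base class owns the data-to-prompt boundary, so the rest of the pipeline never needs to know how a provider formats structured data.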

.. - We use 10X less code than other libraries to achieve 10X more robustness and flexibility.
@@ -156,7 +118,19 @@ As a library, we provide them with the fundamental building blocks with 100% clarity and simplicity.
.. Each developer has unique data needs to build their own models/components, experiment with In-context Learning (ICL) or model finetuning, and deploy the LLM applications to production. This means the library must provide fundamental lower-level building blocks and strive for clarity and simplicity:
Similar to the `PyTorch` module, our ``Component`` provides excellent visualization of the pipeline structure.
Modular
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

LightRAG resembles PyTorch in that it provides a modular and composable structure for developers to build and optimize their LLM applications.

- `Component` and `DataClass` are to LightRAG for LLM Applications what `module` and `Tensor` are to PyTorch for deep learning modeling.
- `ModelClient` bridges the gap between LLM APIs and the LightRAG pipeline.
- `Orchestrator` components like `Retriever`, `Embedder`, `Generator`, and `Agent` are all model-agnostic (you can use the same component with different models from different providers).
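The PyTorch-like modularity described above can be sketched in a few lines. This is a hypothetical minimal implementation, not the actual LightRAG API: a `Component` base class that registers nested components and prints the pipeline structure, much like `torch.nn.Module` does for layers.

```python
# Hypothetical sketch (not the actual LightRAG API): a minimal Component
# base class that, like torch.nn.Module, registers child components on
# attribute assignment and renders the nested pipeline structure.
class Component:
    def __init__(self):
        self._children = {}

    def __setattr__(self, name, value):
        # Auto-register nested components, mirroring PyTorch's submodule registry.
        if isinstance(value, Component):
            self.__dict__.setdefault("_children", {})[name] = value
        object.__setattr__(self, name, value)

    def __repr__(self):
        if not self._children:
            return f"{type(self).__name__}()"
        lines = [f"{type(self).__name__}("]
        for name, child in self._children.items():
            child_repr = repr(child).replace("\n", "\n  ")
            lines.append(f"  ({name}): {child_repr}")
        return "\n".join(lines) + "\n)"


class Retriever(Component):
    pass


class Generator(Component):
    pass


class SimpleQA(Component):
    def __init__(self):
        super().__init__()
        self.retriever = Retriever()
        self.generator = Generator()


print(SimpleQA())
# SimpleQA(
#   (retriever): Retriever()
#   (generator): Generator()
# )
```

The design choice mirrors PyTorch: because registration happens in `__setattr__`, the printed tree always reflects how the pipeline is actually composed.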


Similar to the PyTorch `module`, our `Component` provides excellent visualization of the pipeline structure.

.. code-block::
@@ -178,12 +152,12 @@
.. and Robustness
Maximum Control and Robustness
Robust
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our simplicity does not come from doing less.
On the contrary, we do more, going deeper and wider on every topic, to offer developers *maximum control and robustness*.

- LLMs are sensitive to the prompt. We allow developers full control over their prompts without relying on API features such as tools and JSON format with components like ``Prompt``, ``OutputParser``, ``FunctionTool``, and ``ToolManager``.
- LLMs are sensitive to the prompt. We allow developers full control over their prompts without relying on LLM API features such as tools and JSON format with components like ``Prompt``, ``OutputParser``, ``FunctionTool``, and ``ToolManager``.
- Our goal is not to optimize for integration, but to provide a robust abstraction with representative examples. See this in ``ModelClient`` and ``Retriever``.
- All integrations, such as different API SDKs, are formed as optional packages but all within the same library. You can easily switch to any models from different providers that we officially support.
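The first bullet above — full prompt-level control instead of relying on provider JSON modes or tool-calling features — can be sketched with a small parser. The class and method names here are illustrative assumptions, not LightRAG's actual ``OutputParser`` API:

```python
import json
import re


# Hypothetical sketch: recover a structured object from raw LLM text,
# so structured output does not depend on a provider's JSON mode or
# tool-calling features. Names are illustrative, not the LightRAG API.
class JsonOutputParser:
    def format_instructions(self) -> str:
        # Injected into the prompt so the model knows the expected shape.
        return "Reply with a single JSON object and nothing else."

    def parse(self, text: str) -> dict:
        # Grab the first {...} span even if the model added prose around it.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            raise ValueError("no JSON object found in model output")
        return json.loads(match.group(0))


parser = JsonOutputParser()
raw = 'Sure! Here you go:\n{"answer": "Paris", "confidence": 0.9}'
print(parser.parse(raw))  # {'answer': 'Paris', 'confidence': 0.9}
```

Because parsing happens on the library side, the same pipeline works against any provider, including those without native structured-output support.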

@@ -197,13 +171,14 @@
.. It is the future of LLM applications
Unites both Research and Production
Unites Research and Production
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our team has experience in both AI research and production.
We are building a library that unites the two worlds, forming a healthy LLM application ecosystem.

On top of the easiness to use, we in particular optimize the configurability of components for researchers to build their solutions and to benchmark existing solutions.
Like how PyTorch has united both researchers and production teams, it enables smooth transition from research to production.
With researchers building on LightRAG, production engineers can easily take over the method and test and iterate on their production data.
Researchers will want their code to be adapted into more products too.
- Resembling the PyTorch library makes it easier for LLM researchers to adopt and use the library.
- Researchers building on LightRAG enable production engineers to easily adopt, test, and iterate on their production data.
- Our 100% control and clarity of the source code further make it easy for product teams to build on and for researchers to extend their new methods.


.. toctree::
4 changes: 2 additions & 2 deletions get_started/index.html
@@ -53,7 +53,7 @@
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="Installation" href="installation.html" />
<link rel="prev" title="Maxium Customizability &amp; Composability" href="../index.html" />
<link rel="prev" title="Light" href="../index.html" />
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<meta name="docsearch:language" content="en"/>
</head>
@@ -448,7 +448,7 @@ <h1>Get Started<a class="headerlink" href="#get-started" title="Link to this heading">#</a></h1>
<i class="fa-solid fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Maxium Customizability &amp; Composability</p>
<p class="prev-next-title">Light</p>
</div>
</a>
<a class="right-next"
55 changes: 34 additions & 21 deletions index.html
@@ -8,7 +8,7 @@
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="viewport" content="width=device-width, initial-scale=1" />

<title>Maxium Customizability &amp; Composability &#8212; LightRAG documentation</title>
<title>Light &#8212; LightRAG documentation</title>



@@ -464,16 +464,25 @@
</div>
</div>
</div>
<section id="maxium-customizability-composability">
<h1>Maxium Customizability &amp; Composability<a class="headerlink" href="#maxium-customizability-composability" title="Link to this heading">#</a></h1>
<p>Developers who are building real-world Large Language Model (LLM) applications are the real heroes.
As a library, we provide them with the fundamental building blocks with 100% clarity and simplicity.</p>
<section id="light">
<h1>Light<a class="headerlink" href="#light" title="Link to this heading">#</a></h1>
<p>We provide developers with fundamental building blocks of <em>100% clarity and simplicity</em>.</p>
<ul class="simple">
<li><p>Two fundamental and powerful base classes: <cite>Component</cite> for the pipeline and <cite>DataClass</cite> for data interaction with LLMs.</p></li>
<li><p>We end up with less than two levels of class inheritance. <a class="reference internal" href="developer_notes/class_hierarchy.html"><span class="doc">Class Hierarchy</span></a>.</p></li>
<li><p>The result is a library with bare minimum abstraction, providing developers with <em>maximum customizability and composability</em>.</p></li>
<li><p>Only two fundamental but powerful base classes: <cite>Component</cite> for the pipeline and <cite>DataClass</cite> for data interaction with LLMs.</p></li>
<li><p>A highly readable codebase and less than two levels of class inheritance. <a class="reference internal" href="developer_notes/class_hierarchy.html"><span class="doc">Class Hierarchy</span></a>.</p></li>
<li><p>We maximize the library’s tooling and prompting capabilities to minimize the reliance on LLM API features such as tools and JSON format.</p></li>
<li><p>The result is a library with bare minimum abstraction, providing developers with <em>maximum customizability</em>.</p></li>
</ul>
<p>Similar to the <cite>PyTorch</cite> module, our <code class="docutils literal notranslate"><span class="pre">Component</span></code> provides excellent visualization of the pipeline structure.</p>
</section>
<section id="modular">
<h1>Modular<a class="headerlink" href="#modular" title="Link to this heading">#</a></h1>
<p>LightRAG resembles PyTorch in that it provides a modular and composable structure for developers to build and optimize their LLM applications.</p>
<ul class="simple">
<li><p><cite>Component</cite> and <cite>DataClass</cite> are to LightRAG for LLM Applications what <cite>module</cite> and <cite>Tensor</cite> are to PyTorch for deep learning modeling.</p></li>
<li><p><cite>ModelClient</cite> bridges the gap between LLM APIs and the LightRAG pipeline.</p></li>
<li><p><cite>Orchestrator</cite> components like <cite>Retriever</cite>, <cite>Embedder</cite>, <cite>Generator</cite>, and <cite>Agent</cite> are all model-agnostic (you can use the same component with different models from different providers).</p></li>
</ul>
<p>Similar to the PyTorch <cite>module</cite>, our <cite>Component</cite> provides excellent visualization of the pipeline structure.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">SimpleQA</span><span class="p">(</span>
<span class="p">(</span><span class="n">generator</span><span class="p">):</span> <span class="n">Generator</span><span class="p">(</span>
<span class="n">model_kwargs</span><span class="o">=</span><span class="p">{</span><span class="s1">&#39;model&#39;</span><span class="p">:</span> <span class="s1">&#39;llama3-8b-8192&#39;</span><span class="p">},</span>
@@ -491,22 +500,25 @@ <h1>Maxium Customizability &amp; Composability<a class="headerlink" href="#maxium-customizability-composability" title="Link to this heading">#</a></h1>
</pre></div>
</div>
</section>
<section id="maximum-control-and-robustness">
<h1>Maximum Control and Robustness<a class="headerlink" href="#maximum-control-and-robustness" title="Link to this heading">#</a></h1>
<section id="robust">
<h1>Robust<a class="headerlink" href="#robust" title="Link to this heading">#</a></h1>
<p>Our simplicity does not come from doing less.
On the contrary, we do more, going deeper and wider on every topic, to offer developers <em>maximum control and robustness</em>.</p>
<ul class="simple">
<li><p>LLMs are sensitive to the prompt. We allow developers full control over their prompts without relying on API features such as tools and JSON format with components like <code class="docutils literal notranslate"><span class="pre">Prompt</span></code>, <code class="docutils literal notranslate"><span class="pre">OutputParser</span></code>, <code class="docutils literal notranslate"><span class="pre">FunctionTool</span></code>, and <code class="docutils literal notranslate"><span class="pre">ToolManager</span></code>.</p></li>
<li><p>LLMs are sensitive to the prompt. We allow developers full control over their prompts without relying on LLM API features such as tools and JSON format with components like <code class="docutils literal notranslate"><span class="pre">Prompt</span></code>, <code class="docutils literal notranslate"><span class="pre">OutputParser</span></code>, <code class="docutils literal notranslate"><span class="pre">FunctionTool</span></code>, and <code class="docutils literal notranslate"><span class="pre">ToolManager</span></code>.</p></li>
<li><p>Our goal is not to optimize for integration, but to provide a robust abstraction with representative examples. See this in <code class="docutils literal notranslate"><span class="pre">ModelClient</span></code> and <code class="docutils literal notranslate"><span class="pre">Retriever</span></code>.</p></li>
<li><p>All integrations, such as different API SDKs, are formed as optional packages but all within the same library. You can easily switch to any models from different providers that we officially support.</p></li>
</ul>
</section>
<section id="unites-both-research-and-production">
<h1>Unites both Research and Production<a class="headerlink" href="#unites-both-research-and-production" title="Link to this heading">#</a></h1>
<p>On top of the easiness to use, we in particular optimize the configurability of components for researchers to build their solutions and to benchmark existing solutions.
Like how PyTorch has united both researchers and production teams, it enables smooth transition from research to production.
With researchers building on LightRAG, production engineers can easily take over the method and test and iterate on their production data.
Researchers will want their code to be adapted into more products too.</p>
<section id="unites-research-and-production">
<h1>Unites Research and Production<a class="headerlink" href="#unites-research-and-production" title="Link to this heading">#</a></h1>
<p>Our team has experience in both AI research and production.
We are building a library that unites the two worlds, forming a healthy LLM application ecosystem.</p>
<ul class="simple">
<li><p>Resembling the PyTorch library makes it easier for LLM researchers to adopt and use the library.</p></li>
<li><p>Researchers building on LightRAG enable production engineers to easily adopt, test, and iterate on their production data.</p></li>
<li><p>Our 100% control and clarity of the source code further make it easy for product teams to build on and for researchers to extend their new methods.</p></li>
</ul>
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
@@ -552,9 +564,10 @@ <h1>Unites both Research and Production<a class="headerlink" href="#unites-both-research-and-production" title="Link to this heading">#</a></h1>
</div>
<nav class="bd-toc-nav page-toc" aria-labelledby="pst-page-navigation-heading-2">
<ul class="visible nav section-nav flex-column">
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#">Maxium Customizability &amp; Composability</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#maximum-control-and-robustness">Maximum Control and Robustness</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#unites-both-research-and-production">Unites both Research and Production</a><ul class="visible nav section-nav flex-column">
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#">Light</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#modular">Modular</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#robust">Robust</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#unites-research-and-production">Unites Research and Production</a><ul class="visible nav section-nav flex-column">
</ul>
</li>
</ul>
Binary file modified objects.inv
Binary file not shown.
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.
