diff --git a/Writerside/s.tree b/Writerside/s.tree
index 9463096..6e9e5a6 100644
--- a/Writerside/s.tree
+++ b/Writerside/s.tree
@@ -8,15 +8,20 @@
+
-
-
+
+
+
+
+
+
@@ -37,8 +42,6 @@
-
-
diff --git a/Writerside/topics/ConversationMemory.md b/Writerside/topics/ConversationMemory.md
index 016b634..fe1d025 100644
--- a/Writerside/topics/ConversationMemory.md
+++ b/Writerside/topics/ConversationMemory.md
@@ -12,4 +12,11 @@ Future development:
 [ ]- Token Buffer (keep a buffer of messages that is less than `n` tokens)
 [ ]- Summary Buffer (keep a buffer of messages that is less than `n` tokens, but also summarize the messages)
 [ ]- Time weighted Vectorstore (keep a vectorstore of messages, weighted by time)
+  - I would want to implement something like this:
+    - All messages tied to the [User](User.md) are in the vector store
+    - The retriever accesses this vector store based on similarity to the current [Couplet](Couplet.md) [Context](ContextRoute.md), inversely weighted by time (recent messages advantaged)
+    - Messages within the [Chat](Chat.md) context get a (very?) big advantage over messages outside of context (tunable via the [UserInterface](User-Interface.md))
+    - Give a (very strong) advantage to 'in-chat' messages (ensuring they will be selected in the [RAG](Retrieval-Augmented-Generation.md) model),
+
+or use a sigmoid or whatever, so that recent messages within the conversation are weighted more heavily than older messages.
 - [https://js.langchain.com/docs/modules/data_connection/retrievers/time_weighted_vectorstore](https://js.langchain.com/docs/modules/data_connection/retrievers/time_weighted_vectorstore)
diff --git a/Writerside/topics/Foam.md b/Writerside/topics/Foam.md
index 2919c66..9e8e3ac 100644
--- a/Writerside/topics/Foam.md
+++ b/Writerside/topics/Foam.md
@@ -1,3 +1,5 @@
 # Foam
 
-A way to make [VsCode](VsCode.md) interact with Markdown like [Obsidian](Obsidian.md) does
\ No newline at end of file
+A way to make [VsCode](VsCode.md) interact with Markdown like [Obsidian](Obsidian.md) does
+
+Not technically an AI tool, but it's well suited to become one (like [Obsidian](Obsidian.md) is)
\ No newline at end of file
diff --git a/Writerside/topics/Retrieval-Augmented-Generation.md b/Writerside/topics/Retrieval-Augmented-Generation.md
new file mode 100644
index 0000000..b604ce2
--- /dev/null
+++ b/Writerside/topics/Retrieval-Augmented-Generation.md
@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+https://python.langchain.com/docs/use_cases/question_answering/
\ No newline at end of file
diff --git a/Writerside/topics/User-Interface.md b/Writerside/topics/User-Interface.md
new file mode 100644
index 0000000..d452659
--- /dev/null
+++ b/Writerside/topics/User-Interface.md
@@ -0,0 +1,23 @@
+# User Interface
+
+The stuff we want to let the [User](User.md) control about the [ChatBot](ChatBot.md)
+
+## LLM Config
+- [ ] ModelName (default = "gpt-4-1106-preview" for now, "gpt-4-turbo" when it becomes available)
+- [ ] Temperature (default = 0.7, I think)
+- [ ] **OtherModelParameters
+
+## Conversation Memory
+- [ ] Type, window, etc.
+
+## [Vectorstore](VectorStore.md) config
+- [ ] Which [Vectorstore](VectorStore.md) [contexts](ContextRoute.md) to make available:
+  - [ ] [FileSystem](FileSystem.md) interface kinda thing
+  - [ ] Individual documents (think textbooks, websites, etc.)
+
+- [ ] Retrieval
+  - [ ] By Tag (human applied)
+  - [ ] By Topic (AI extracted)
+  - [ ] By Summary (AI generated)
+    - Generate summaries at multiple scales, like I did with the [PaperSummary](https://github.com/jonmatthis/chatbot/blob/main/chatbot/ai/workers/green_check_handler/parse_green_check_messages.py#L20) thing in OG [ClassBot](ClassBot.md)
+
diff --git a/Writerside/topics/VectorStore.md b/Writerside/topics/VectorStore.md
index d2bae1b..3a28586 100644
--- a/Writerside/topics/VectorStore.md
+++ b/Writerside/topics/VectorStore.md
@@ -2,12 +2,13 @@
 - [ ] Ingestion Method
   - [ ] Ingestion interface
     - [ ] [FileSystem](FileSystem.md)
-- [ ] Embedding method #Langchain
-  - [ ] #OpenAI
-  - [ ] #HuggingFace
+- [ ] Embedding method
+  - [ ] [OpenAI](OpenAI.md)
+  - [ ] HuggingFace
   - [ ] Local
 - [ ] Persistence Method
-  - local #Chroma
+  - local
+    - [Chroma](Chroma.md)
   - hosted #MongoDB?
 - [ ] Retriever Method ?
 - [ ] Sources
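
The ConversationMemory hunk above proposes retrieval by similarity, inversely weighted by time (sigmoid "or whatever"), with a strong tunable advantage for messages inside the current Chat context. A minimal sketch of that scoring, outside the diff — the function names, the sigmoid shape, the half-life, and the boost value are all assumptions for illustration, not code from this repo:

```python
import math

def recency_weight(age_seconds: float, half_life_seconds: float = 3600.0) -> float:
    """Sigmoid-style decay: ~1.0 for fresh messages, ~0.0 for old ones.
    half_life_seconds is the age at which the weight crosses 0.5."""
    return 1.0 / (1.0 + math.exp((age_seconds - half_life_seconds) / (half_life_seconds / 4)))

def score(similarity: float, age_seconds: float, in_chat: bool,
          in_chat_boost: float = 10.0) -> float:
    """Combine vector similarity with recency; messages from the current
    Chat context get a large (tunable) multiplicative advantage."""
    base = similarity * recency_weight(age_seconds)
    return base * in_chat_boost if in_chat else base

# Hypothetical retrieved messages: (label, similarity, age in seconds, in current chat?)
candidates = [
    ("old in-chat msg",    0.70, 7200.0, True),
    ("fresh in-chat msg",  0.60,   60.0, True),
    ("fresh external msg", 0.95,   60.0, False),
]
ranked = sorted(candidates, key=lambda c: score(c[1], c[2], c[3]), reverse=True)
```

With these (made-up) numbers the fresh in-chat message outranks a more-similar external one, and the stale in-chat message falls to the bottom — the two behaviors the notes ask for.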
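
The User-Interface hunk mentions generating summaries "at multiple scales". One way to read that is a summary pyramid: raw chunks at the bottom, group summaries above, a single top summary at the apex. A sketch under that assumption — `summarize_at_scales`, `fan_in`, and the stand-in summarizer are all hypothetical, and a real version would call an LLM where the stub truncates:

```python
from typing import Callable

def summarize_at_scales(chunks: list[str],
                        summarize: Callable[[str], str],
                        fan_in: int = 4) -> list[list[str]]:
    """Level 0 is the raw chunks; each higher level summarizes groups of
    `fan_in` items from the level below, until one top summary remains."""
    levels = [chunks]
    while len(levels[-1]) > 1:
        below = levels[-1]
        grouped = [below[i:i + fan_in] for i in range(0, len(below), fan_in)]
        levels.append([summarize("\n".join(group)) for group in grouped])
    return levels

# Stand-in summarizer (a real one would call an LLM); it just truncates.
fake_summarize = lambda text: text[:40]
pyramid = summarize_at_scales([f"paragraph {i}" for i in range(8)], fake_summarize, fan_in=4)
# 8 chunks -> 2 group summaries -> 1 top summary
```

Retrieval could then pull from whichever level matches the scale of the question (broad questions hit the top, specific ones the bottom).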