feat: support for multiple providers (#93)
* chore: wip backup

* chore: wip backup

* chore: formating

* feat: working copilot agents

* chore: wip backup

* chore: wip backup + lmstudio test

* chore: fmt

* chore: wip

* fix: toggle GpChatNew popup (resolve #112)

* chore: auto-generate vimdoc

* chore: strip whitespace in buf target resolve

* fix: handle symlinks in chat checks (issue: #104)

* feat: ollama with openAI endpoint + gemini

* chore: fix typo

* feat: show current agent in chat

* chore: copilot bearer refreshing

* feat: dynamic model for googleai

* feat: support for perplexity and anthropic

* docs: vim-plug setup snippet (issue: #123)

* chore: auto-generate vimdoc

* feat: update defaults to use latest model versions: gpt-4o and gpt-3.5-turbo

* chore: auto-generate vimdoc

* feat: filter out empty lines in picker

* fix: set the popup window to markdown filetype

* chore: handle old user chat prefix

* chore: newer copilot headers

* chore: GPT4 => GPT4o rename

* chore: default sys prompt vars

* chore: disable all but openAI before merging to main

* chore: better migration message

* docs: readme section about multi provider support

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Shalom Yiblet <[email protected]>
Co-authored-by: Tan Long <[email protected]>
Co-authored-by: XXiaoA <[email protected]>
5 people authored Jul 9, 2024
1 parent 486a061 commit 08bcfb6
Showing 4 changed files with 701 additions and 82 deletions.
78 changes: 75 additions & 3 deletions README.md
@@ -36,7 +36,7 @@ Trying to keep things as native as possible - reusing and integrating well with
- properly working undo (response can be undone with a single `u`)
- **Infinitely extensible** via hook functions specified as part of the config
- hooks have access to everything in the plugin and are automatically registered as commands
- see [4. Configuration](#4-configuration) and [Extend functionality](#extend-functionality) sections for details
- see [5. Configuration](#5-configuration) and [Extend functionality](#extend-functionality) sections for details
- **Minimum dependencies** (`neovim`, `curl`, `grep` and optionally `sox`)
- zero dependencies on other lua plugins to minimize chance of breakage
- **ChatGPT like sessions**
@@ -121,7 +121,79 @@ The OpenAI API key can be passed to the plugin in multiple ways:

If `openai_api_key` is a table, Gp runs it asynchronously to avoid blocking Neovim (password managers can take a second or two).
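
For example, the table form runs the given command and uses its output as the key (both commands below are illustrative, mirroring the examples in the default config comments):

```lua
-- read the key from a password manager (resolved asynchronously)
openai_api_key = { "bw", "get", "password", "OPENAI_API_KEY" },
-- or from a plain file:
-- openai_api_key = { "cat", "path_to/openai_api_key" },
```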

## 3. Dependencies
## 3. Multiple providers
The following LLM providers are currently supported besides OpenAI:

- [Ollama](https://github.com/ollama/ollama) for local/offline open-source models. The plugin assumes you have the Ollama service up and running with configured models available (the default Ollama agent uses Llama3).
- [GitHub Copilot](https://github.com/settings/copilot) with a Copilot license ([zbirenbaum/copilot.lua](https://github.com/zbirenbaum/copilot.lua) or [github/copilot.vim](https://github.com/github/copilot.vim) for autocomplete). This gives you access to the underlying GPT-4 model without paying anything extra on top of the license (essentially unlimited GPT-4 access).
- [Perplexity.ai](https://www.perplexity.ai/pro) Pro users have $5/month free API credits available (the default PPLX agent uses Mixtral-8x7b).
- [Anthropic](https://www.anthropic.com/api) to access Claude models, which currently outperform GPT-4 in some benchmarks.
- [Google Gemini](https://ai.google.dev/) with a quite generous free tier but some geo-restrictions (notably in the EU).
- Any other "OpenAI chat/completions" compatible endpoint (Azure, LM Studio, etc.)

Below is an example of the relevant part of the configuration enabling some of these providers. The `secret` field has the same capabilities as `openai_api_key` (which is still supported for backward compatibility).

```lua
providers = {
openai = {
endpoint = "https://api.openai.com/v1/chat/completions",
secret = os.getenv("OPENAI_API_KEY"),
},

-- azure = {...},

copilot = {
endpoint = "https://api.githubcopilot.com/chat/completions",
secret = {
"bash",
"-c",
"cat ~/.config/github-copilot/hosts.json | sed -e 's/.*oauth_token...//;s/\".*//'",
},
},

pplx = {
endpoint = "https://api.perplexity.ai/chat/completions",
secret = os.getenv("PPLX_API_KEY"),
},

ollama = {
endpoint = "http://localhost:11434/v1/chat/completions",
},

googleai = {
endpoint = "https://generativelanguage.googleapis.com/v1beta/models/{{model}}:streamGenerateContent?key={{secret}}",
secret = os.getenv("GOOGLEAI_API_KEY"),
},

anthropic = {
endpoint = "https://api.anthropic.com/v1/messages",
secret = os.getenv("ANTHROPIC_API_KEY"),
},
},
```
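
The `sed` pipeline above extracts the Copilot OAuth token with plain text matching, which is fragile if the file layout shifts. A JSON-aware sketch using `jq` instead (it assumes `jq` is installed and that `hosts.json` has a top-level `github.com` object holding `oauth_token` - verify the layout on your machine):

```lua
copilot = {
	endpoint = "https://api.githubcopilot.com/chat/completions",
	-- hypothetical jq-based variant of the token extraction above
	secret = {
		"jq",
		"-r",
		'.["github.com"].oauth_token',
		vim.fn.expand("~/.config/github-copilot/hosts.json"),
	},
},
```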

Each of these providers has some agents preconfigured. Below is an example of how to disable the predefined ChatGPT3-5 agent and create a custom one. If the `provider` field is missing, OpenAI is assumed for backward compatibility.

```lua
agents = {
{
name = "ChatGPT3-5",
disable = true,
},
{
name = "MyCustomAgent",
provider = "copilot",
chat = true,
command = true,
model = { model = "gpt-4-turbo" },
system_prompt = "Answer any query with just: Sure thing..",
},
},
```

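
Once agents are defined, you can switch between them at runtime; a convenience mapping sketch (it assumes the plugin's `:GpNextAgent` command for cycling agents):

```lua
-- cycle through the agents available for the current buffer
vim.keymap.set("n", "<leader>ga", "<cmd>GpNextAgent<cr>", { desc = "Next Gp agent" })
```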
## 4. Dependencies

The core plugin only needs `curl` installed to make calls to the OpenAI API and `grep` for ChatFinder, so Linux, BSD and macOS should be covered.
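
A quick way to check that the binaries are reachable from Neovim (a convenience sketch, not part of the plugin; `sox` only matters for the voice commands):

```lua
-- paste into :lua or a scratch buffer and source it
for _, bin in ipairs({ "curl", "grep", "sox" }) do
	if vim.fn.executable(bin) == 0 then
		vim.notify("gp.nvim dependency missing: " .. bin, vim.log.levels.WARN)
	end
end
```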

@@ -133,7 +205,7 @@ Voice commands (`:GpWhisper*`) depend on `SoX` (Sound eXchange) to handle audio
- Redhat/CentOS: `yum install sox`
- NixOS: `nix-env -i sox`

## 4. Configuration
## 5. Configuration

Below is a linked snippet with the default values, but I suggest starting with the minimal config possible (just `openai_api_key` if you don't have the `OPENAI_API_KEY` env variable set). Defaults change over time to improve things and options might get deprecated, so it's better to change only the things where the default doesn't fit your needs.
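
For instance, a minimal setup sketch (assuming the standard `require("gp").setup` entry point called from your plugin manager's config function):

```lua
-- everything not listed here falls back to the plugin defaults
require("gp").setup({
	openai_api_key = os.getenv("OPENAI_API_KEY"),
})
```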

225 changes: 196 additions & 29 deletions lua/gp/config.lua
@@ -5,6 +5,20 @@
-- Default config
--------------------------------------------------------------------------------

local default_chat_system_prompt = "You are a general AI assistant.\n\n"
.. "The user provided the additional info about how they would like you to respond:\n\n"
.. "- If you're unsure don't guess and say you don't know instead.\n"
.. "- Ask question if you need clarification to provide better answer.\n"
.. "- Think deeply and carefully from first principles step by step.\n"
.. "- Zoom out first to see the big picture and then zoom in to details.\n"
.. "- Use Socratic method to improve your thinking and coding skills.\n"
.. "- Don't elide any code from your output if the answer requires coding.\n"
.. "- Take a deep breath; You've got this!\n"

local default_code_system_prompt = "You are an AI working as a code editor.\n\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n"
.. "START AND END YOUR ANSWER WITH:\n\n```"

local config = {
-- Please start with minimal config possible.
-- Just openai_api_key if you don't have OPENAI_API_KEY env set up.
@@ -17,9 +31,51 @@ local config = {
-- openai_api_key = "sk-...",
-- openai_api_key = os.getenv("env_name.."),
openai_api_key = os.getenv("OPENAI_API_KEY"),
-- api endpoint (you can change this to azure endpoint)
openai_api_endpoint = "https://api.openai.com/v1/chat/completions",
-- openai_api_endpoint = "https://$URL.openai.azure.com/openai/deployments/{{model}}/chat/completions?api-version=2023-03-15-preview",

-- at least one working provider is required
-- to disable a provider set it to empty table like openai = {}
providers = {
-- secrets can be strings or tables with command and arguments
-- secret = { "cat", "path_to/openai_api_key" },
-- secret = { "bw", "get", "password", "OPENAI_API_KEY" },
-- secret = "sk-...",
-- secret = os.getenv("env_name.."),
openai = {
endpoint = "https://api.openai.com/v1/chat/completions",
-- secret = os.getenv("OPENAI_API_KEY"),
},
azure = {
-- endpoint = "https://$URL.openai.azure.com/openai/deployments/{{model}}/chat/completions",
-- secret = os.getenv("AZURE_API_KEY"),
},
copilot = {
-- endpoint = "https://api.githubcopilot.com/chat/completions",
-- secret = {
-- "bash",
-- "-c",
-- "cat ~/.config/github-copilot/hosts.json | sed -e 's/.*oauth_token...//;s/\".*//'",
-- },
},
ollama = {
-- endpoint = "http://localhost:11434/v1/chat/completions",
},
lmstudio = {
-- endpoint = "http://localhost:1234/v1/chat/completions",
},
googleai = {
-- endpoint = "https://generativelanguage.googleapis.com/v1beta/models/{{model}}:streamGenerateContent?key={{secret}}",
-- secret = os.getenv("GOOGLEAI_API_KEY"),
},
pplx = {
-- endpoint = "https://api.perplexity.ai/chat/completions",
-- secret = os.getenv("PPLX_API_KEY"),
},
anthropic = {
-- endpoint = "https://api.anthropic.com/v1/messages",
-- secret = os.getenv("ANTHROPIC_API_KEY"),
},
},

-- prefix for all commands
cmd_prefix = "Gp",
-- optional curl parameters (for proxy, etc.)
@@ -36,58 +92,164 @@ local config = {
-- agents = { { name = "ChatGPT4" }, ... },
agents = {
{
name = "ChatGPT4",
name = "ChatGPT4o",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.\n\n"
.. "The user provided the additional info about how they would like you to respond:\n\n"
.. "- If you're unsure don't guess and say you don't know instead.\n"
.. "- Ask question if you need clarification to provide better answer.\n"
.. "- Think deeply and carefully from first principles step by step.\n"
.. "- Zoom out first to see the big picture and then zoom in to details.\n"
.. "- Use Socratic method to improve your thinking and coding skills.\n"
.. "- Don't elide any code from your output if the answer requires coding.\n"
.. "- Take a deep breath; You've got this!\n",
system_prompt = default_chat_system_prompt,
},
{
provider = "openai",
name = "ChatGPT3-5",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-3.5-turbo", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.\n\n"
.. "The user provided the additional info about how they would like you to respond:\n\n"
.. "- If you're unsure don't guess and say you don't know instead.\n"
.. "- Ask question if you need clarification to provide better answer.\n"
.. "- Think deeply and carefully from first principles step by step.\n"
.. "- Zoom out first to see the big picture and then zoom in to details.\n"
.. "- Use Socratic method to improve your thinking and coding skills.\n"
.. "- Don't elide any code from your output if the answer requires coding.\n"
.. "- Take a deep breath; You've got this!\n",
system_prompt = default_chat_system_prompt,
},
{
provider = "copilot",
name = "ChatCopilot",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-4", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = default_chat_system_prompt,
},
{
provider = "googleai",
name = "ChatGemini",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gemini-pro", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = default_chat_system_prompt,
},
{
provider = "pplx",
name = "ChatPerplexityMixtral",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "mixtral-8x7b-instruct", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = default_chat_system_prompt,
},
{
provider = "anthropic",
name = "ChatClaude-3-Haiku",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "claude-3-haiku-20240307", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = default_chat_system_prompt,
},
{
provider = "ollama",
name = "ChatOllamaLlama3",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = {
model = "llama3",
num_ctx = 8192,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.",
},
{
name = "CodeGPT4",
provider = "lmstudio",
name = "ChatLMStudio",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = {
model = "dummy",
temperature = 0.97,
top_p = 1,
num_ctx = 8192,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.",
},
{
provider = "openai",
name = "CodeGPT4o",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are an AI working as a code editor.\n\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n"
.. "START AND END YOUR ANSWER WITH:\n\n```",
system_prompt = default_code_system_prompt,
},
{
provider = "openai",
name = "CodeGPT3-5",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-3.5-turbo", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are an AI working as a code editor.\n\n"
system_prompt = default_code_system_prompt,
},
{
provider = "copilot",
name = "CodeCopilot",
chat = false,
command = true,
-- string with the Copilot engine name or table with engine name and parameters if applicable
model = { model = "gpt-4", temperature = 0.8, top_p = 1, n = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = default_code_system_prompt,
},
{
provider = "googleai",
name = "CodeGemini",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gemini-pro", temperature = 0.8, top_p = 1 },
system_prompt = default_code_system_prompt,
},
{
provider = "pplx",
name = "CodePerplexityMixtral",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "mixtral-8x7b-instruct", temperature = 0.8, top_p = 1 },
system_prompt = default_code_system_prompt,
},
{
provider = "anthropic",
name = "CodeClaude-3-Haiku",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "claude-3-haiku-20240307", temperature = 0.8, top_p = 1 },
system_prompt = default_code_system_prompt,
},
{
provider = "ollama",
name = "CodeOllamaLlama3",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = {
model = "llama3",
temperature = 1.9,
top_p = 1,
num_ctx = 8192,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are an AI working as a code editor providing answers.\n\n"
.. "Use 4 SPACES FOR INDENTATION.\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n"
.. "START AND END YOUR ANSWER WITH:\n\n```",
},
@@ -96,7 +258,7 @@ local config = {
-- directory for storing chat files
chat_dir = vim.fn.stdpath("data"):gsub("/$", "") .. "/gp/chats",
-- chat user prompt prefix
chat_user_prefix = "🗨:",
chat_user_prefix = "💬:",
-- chat assistant prompt prefix (static string or a table {static, template})
-- first string has to be static, second string can contain template {{agent}}
-- just a static string is legacy and the [{{agent}}] element is added automatically
@@ -106,7 +268,6 @@ chat_topic_gen_prompt = "Summarize the topic of our conversation above"
chat_topic_gen_prompt = "Summarize the topic of our conversation above"
.. " in two or three words. Respond only with those words.",
-- chat topic model (string with model name or table with model name and parameters)
chat_topic_gen_model = "gpt-3.5-turbo-16k",
-- explicitly confirm deletion of a chat file
chat_confirm_delete = true,
-- conceal model parameters in chat
@@ -301,6 +462,12 @@ local config = {
local copy = vim.deepcopy(plugin)
local key = copy.config.openai_api_key
copy.config.openai_api_key = key:sub(1, 3) .. string.rep("*", #key - 6) .. key:sub(-3)
for provider, _ in pairs(copy.providers) do
local s = copy.providers[provider].secret
if s and type(s) == "string" then
copy.providers[provider].secret = s:sub(1, 3) .. string.rep("*", #s - 6) .. s:sub(-3)
end
end
local plugin_info = string.format("Plugin structure:\n%s", vim.inspect(copy))
local params_info = string.format("Command params:\n%s", vim.inspect(params))
local lines = vim.split(plugin_info .. "\n" .. params_info, "\n")