📝 Add description of how to contribute a new provider
sestinj committed Jan 25, 2024
1 parent 52f5c87 commit 2a1d541
Showing 2 changed files with 15 additions and 2 deletions.
CONTRIBUTING.md (14 additions, 0 deletions)
@@ -11,6 +11,7 @@
- [Environment Setup](#environment-setup)
- [Writing Slash Commands](#writing-slash-commands)
- [Writing Context Providers](#writing-context-providers)
- [Adding an LLM Provider](#adding-an-llm-provider)
- [Adding Models](#adding-models)
- [📐 Continue Architecture](#-continue-architecture)
- [Continue VS Code Extension](#continue-vs-code-extension)
@@ -88,6 +89,19 @@ A Step can be used as a custom slash command, or called otherwise in a `Policy`.

A `ContextProvider` is a Continue plugin that lets you type '@' to quickly select documents as context for the language model. The simplest way to create a `ContextProvider` is to implement the `provide_context_items` method. You can find a great example of this in [GitHubIssuesContextProvider](./server/continuedev/plugins/context_providers/github.py), which allows you to search GitHub Issues in a repo.

### Adding an LLM Provider

Continue supports more than a dozen LLM "providers", making it easy to use models served by OpenAI, Ollama, Together, LM Studio, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:

1. Create a new file in the `core/llm/llms` directory, named after the provider, that exports a class extending `BaseLLM`. At minimum, the class should implement the following (a sketch follows this list); we recommend reviewing the pre-existing providers for more detail. The [LlamaCpp Provider](./core/llm/llms/LlamaCpp.ts) is a good, simple example.

- `providerName` - the identifier for your provider
- At least one of `_streamComplete` or `_streamChat` - the function that makes the request to the API and returns the streamed response. You only need to implement one of the two, because Continue can automatically convert between "chat" and "raw completion".

2. Add your provider to the `LLMs` array in [core/llm/llms/index.ts](./core/llm/llms/index.ts) (see the second sketch below).
3. If your provider supports images, add it to the `PROVIDER_SUPPORTS_IMAGES` array in [core/llm/index.ts](./core/llm/index.ts).
4. Add the necessary JSON Schema types to [`config_schema.json`](./extensions/vscode/config_schema.json). This ensures that IntelliSense shows users which options are available for your provider while they edit `config.json`.
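
To make step 1 concrete, below is a minimal sketch of a hypothetical `core/llm/llms/MyProvider.ts`, assuming a backend that streams plain-text completion chunks over HTTP. The import paths, type names, and request format are illustrative assumptions; mirror an existing provider such as [LlamaCpp](./core/llm/llms/LlamaCpp.ts) for the canonical interface.

```typescript
// Hypothetical file: core/llm/llms/MyProvider.ts
// Import paths and type names are assumptions; copy them from an existing
// provider such as LlamaCpp.ts.
import { BaseLLM } from "..";
import { CompletionOptions } from "../..";

class MyProvider extends BaseLLM {
  // The identifier users put in the "provider" field of config.json.
  static providerName = "my-provider";

  // Implementing _streamComplete alone is enough: Continue can derive
  // "chat" behavior from raw completions (and vice versa for _streamChat).
  protected async *_streamComplete(
    prompt: string,
    options: CompletionOptions,
  ): AsyncGenerator<string> {
    // `this.apiBase` and the "/completion" path are placeholders for your
    // provider's real endpoint.
    const resp = await fetch(`${this.apiBase}/completion`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        prompt,
        temperature: options.temperature,
        stream: true,
      }),
    });
    if (!resp.ok || !resp.body) {
      throw new Error(`Provider request failed: ${resp.status}`);
    }

    // Assumes the endpoint streams plain text; most real APIs stream SSE or
    // JSON lines, which you would parse here instead.
    const reader = resp.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield decoder.decode(value, { stream: true });
    }
  }
}

export default MyProvider;
```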
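
Steps 2 and 3 are then one-line registrations, sketched together here with the surrounding arrays abridged:

```typescript
// core/llm/llms/index.ts (abridged): register the class so Continue can
// construct it from config.json.
import MyProvider from "./MyProvider";

const LLMs = [
  // ...existing providers...
  MyProvider,
];

// core/llm/index.ts (abridged): list your provider here only if the
// underlying API accepts image input.
const PROVIDER_SUPPORTS_IMAGES = [
  // ...existing providers...
  "my-provider",
];
```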

### Adding Models

While any model that works with a supported provider can be used with Continue, we keep a list of recommended models that can be automatically configured from the UI or `config.json`. The following files should be updated when adding a model:
extensions/vscode/CONTRIBUTING.md (1 addition, 2 deletions)
@@ -4,15 +4,14 @@ This is the Continue VS Code Extension. Its primary jobs are

1. Implement the IDE side of the Continue IDE protocol, allowing a Continue server to interact natively in an IDE. This happens in `src/continueIdeClient.ts`.
2. Open the Continue React app in a side panel. The React app's source code lives in the `gui` directory. The panel is opened by the `continue.openContinueGUI` command, as defined in `src/commands.ts`.
- 3. Run a Continue server in the background, which connects to both the IDE protocol and the React app. The server is launched in `src/activation/environmentSetup.ts` by calling Python code that lives in `server/` (unless extension settings define a server URL other than localhost:65432, in which case the extension will just connect to that).

# How to run the extension

See [Environment Setup](../CONTRIBUTING.md#environment-setup)

# How to run and debug tests

- After following the setup in [Environment Setup](../CONTRIBUTING.md#environment-setup) you can run `npm run test` in the command line or the `Server + Tests (VSCode)` launch configuration in VS Code to debug tests + server.
+ After following the setup in [Environment Setup](../CONTRIBUTING.md#environment-setup) you can run the `Extension (VSCode)` launch configuration in VS Code.

## Notes

