📝 update autocomplete docs
sestinj committed Feb 29, 2024
1 parent 8a82097 commit 056c5fa
Showing 1 changed file with 28 additions and 9 deletions.
37 changes: 28 additions & 9 deletions docs/docs/walkthroughs/tab-autocomplete.md
@@ -1,13 +1,13 @@
-# Tab Autocomplete (Experimental)
+# Tab Autocomplete (beta)

-Continue now provides experimental support for tab autocomplete in VS Code. You can enable it by downloading the pre-release version of the extension, opening VS Code settings, and then checking the box that says "Enable Tab Autocomplete". There is also a button on the bottom/status bar for fast access. We will be greatly improving the experience over the next few releases, and it is always helpful to hear feedback. If you have any problems or suggestions, please let us know in our [Discord](https://discord.gg/vapESyrFmJ).
+Continue now provides support for tab autocomplete in VS Code (make sure to download the pre-release). We will be greatly improving the experience over the next few releases, and it is always helpful to hear feedback. If you have any problems or suggestions, please let us know in our [Discord](https://discord.gg/vapESyrFmJ).

## Setting up with Ollama (default)

We recommend setting up tab-autocomplete with a local Ollama instance. To do this, first download the latest version of Ollama from [here](https://ollama.ai). Then, run the following command to download our recommended model:

```bash
-ollama run deepseek-coder:1.3b-base
+ollama run starcoder:3b
```

Once it has been downloaded, you should begin to see completions in VS Code.
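
If completions don't show up, you can sanity-check that Ollama is serving the model by hitting its API directly (a minimal sketch, assuming Ollama's default local address of `localhost:11434`):

```bash
# Request a single non-streaming completion from the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "starcoder:3b",
  "prompt": "def fib(n):",
  "stream": false
}'
```

If this returns a JSON response containing generated text, the model is running and the extension should be able to reach it.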
@@ -16,12 +16,12 @@ Once it has been downloaded, you should begin to see completions in VS Code.

All of the configuration options available for chat models are available to use for tab-autocomplete. For example, if you wanted to use a remote Ollama instance you would edit your `config.json` like this:

-```json
+```json title=~/.continue/config.json
{
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "provider": "ollama",
-    "model": "deepseek-coder:1.3b-base",
+    "model": "starcoder:3b",
    "apiBase": "https://<my endpoint>"
  },
  ...
@@ -49,6 +49,25 @@ This object allows you to customize the behavior of tab-autocomplete. The availa
- `maxSuffixPercentage`: The maximum percentage of the prompt that can be dedicated to the suffix. (Number)
- `prefixPercentage`: The percentage of the input that should be dedicated to the prefix. (Number)
- `template`: An optional template string to be used for autocomplete. It will be rendered with the Mustache templating language, and is passed the 'prefix' and 'suffix' variables. (String)
+- `multilineCompletions`: Whether to enable multiline completions ("always", "never", or "auto"). Defaults to "auto".
+
+### Full example
+
+```json title=~/.continue/config.json
+{
+  "tabAutocompleteModel": {
+    "title": "Tab Autocomplete Model",
+    "provider": "ollama",
+    "model": "starcoder:3b",
+    "apiBase": "https://<my endpoint>"
+  },
+  "tabAutocompleteOptions": {
+    "useCopyBuffer": false,
+    "maxPromptTokens": 400,
+    "prefixPercentage": 0.5
+  }
+}
+```
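
Since `template` is rendered with Mustache and receives the `prefix` and `suffix` variables, a custom fill-in-the-middle prompt can be expressed directly in `config.json`. A minimal sketch, assuming a model that uses StarCoder-style FIM control tokens (the exact tokens depend on your model, so check its documentation):

```json title=~/.continue/config.json
{
  "tabAutocompleteOptions": {
    "template": "<fim_prefix>{{{prefix}}}<fim_suffix>{{{suffix}}}<fim_middle>"
  }
}
```

The triple braces tell Mustache to insert the code verbatim rather than HTML-escaping it.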

## Troubleshooting

@@ -57,21 +76,21 @@ This object allows you to customize the behavior of tab-autocomplete. The availa
Follow these steps to ensure that everything is set up correctly:

1. Make sure you have the pre-release version of the extension installed.
-2. Make sure you have the "Enable Tab Autocomplete" setting checked.
+2. Make sure you have the "Enable Tab Autocomplete" setting checked (you can toggle it by clicking the "Continue" button in the status bar).
3. Make sure you have downloaded Ollama.
-4. Run `ollama run deepseek-coder:1.3b-base` to verify that the model is downloaded.
+4. Run `ollama run starcoder:3b` to verify that the model is downloaded.
5. Make sure that any other completion providers are disabled (e.g. Copilot), as they may interfere.
6. Make sure that you aren't also using another Ollama model for chat. This will cause Ollama to constantly load and unload the models from memory, resulting in slow responses (or none at all) for both.
7. Check the output of the logs to find any potential errors (cmd/ctrl+shift+p -> "Toggle Developer Tools" -> "Console" tab).
8. If you are still having issues, please let us know in our [Discord](https://discord.gg/vapESyrFmJ) and we'll help as soon as possible.

### Completions are slow

-We are working on this! You can expect improvements in the time frame of the next few days. For now, you will probably see faster completions in smaller files where there isn't as much context above the cursor.
+Depending on your hardware, you may want to try a smaller, faster model. If the 3b model isn't working for you, we recommend trying `deepseek-coder:1.3b-base`.
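
For example, after pulling the smaller model with `ollama run deepseek-coder:1.3b-base`, you would swap it into the same config shape shown above:

```json title=~/.continue/config.json
{
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "provider": "ollama",
    "model": "deepseek-coder:1.3b-base"
  }
}
```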

### Completions don't know about my code

-We are working on this as well! Right now Continue only sees your current file surrounding the cursor. We're already partway through building a full-text search feature that will help Continue know about other files. You can expect to see improvements in the time frame of roughly a week.
+We are working on this! Right now Continue uses the Language Server Protocol to add definitions to the prompt, as well as similarity search over recently edited files. We will be greatly improving the accuracy of this system over the next few weeks.

### Completions contain formatting errors

