Merge branch 'main' into preview
sestinj committed Jul 29, 2024
2 parents 9a9d776 + 9961f85 commit 59f54e9
Showing 34 changed files with 278 additions and 176 deletions.
8 changes: 8 additions & 0 deletions .changes/extensions/intellij/0.0.53.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
## 0.0.53 - 2024-07-10
### Added
* Support for .prompt files
* New onboarding experience
### Fixed
* Indexing fixes from VS Code versions merged into IntelliJ
* Improved codebase indexing reliability and testing
* Fixes for autocomplete text positioning and timing
6 changes: 6 additions & 0 deletions .changes/extensions/intellij/0.0.54.md
@@ -0,0 +1,6 @@
## 0.0.54 - 2024-07-13
### Added
* Partial autocomplete acceptance
* Autocomplete status bar spinner
### Fixed
* Fixed a duplicate completion bug, among other fixes
12 changes: 0 additions & 12 deletions .changes/extensions/vscode/0.8.40.md

This file was deleted.

16 changes: 16 additions & 0 deletions .changes/extensions/vscode/0.8.42.md
@@ -0,0 +1,16 @@
## 0.8.42 - 2024-07-02

### Added

- Support for Gemini 1.5 Pro
- Link to code in the sidebar when using codebase retrieval
- Smoother onboarding experience
- .prompt files, a way of saving and sharing slash commands
- Support for Claude 3.5 Sonnet, DeepSeek Coder v2, and other new models
- Support for comments in config.json
- Specify multiple autocomplete models and switch between them
- Improved bracket matching strategy reduces noisy completions
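  The "comments in config.json" item above can be illustrated with a short sketch (hypothetical field values; this assumes Continue tolerates JSONC-style `//` comments when parsing the file):

  ```json title="~/.continue/config.json"
  {
    // Comments like this are now tolerated in config.json
    "models": [
      {
        "title": "Claude 3.5 Sonnet", // shown in the model dropdown
        "provider": "anthropic",
        "model": "claude-3-5-sonnet-20240620",
        "apiKey": "<API_KEY>"
      }
    ]
  }
  ```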

### Fixed

- Numerous reliability upgrades to codebase indexing
4 changes: 4 additions & 0 deletions .changes/extensions/vscode/0.8.43.md
@@ -0,0 +1,4 @@
## 0.8.43 - 2024-07-08
### Added
* Improved indexing reliability and testing
* Quick Actions: use CodeLens to quickly take common actions like adding docstrings
9 changes: 0 additions & 9 deletions .changes/server/v0.1.104.md

This file was deleted.

5 changes: 0 additions & 5 deletions .changes/server/v0.1.99.md

This file was deleted.

3 changes: 2 additions & 1 deletion .prettierignore
@@ -1,2 +1,3 @@
extensions/vscode/continue_rc_schema.json
-**/.continueignore
+**/.continueignore
+CHANGELOG.md
2 changes: 1 addition & 1 deletion core/autocomplete/completionProvider.ts
@@ -456,7 +456,7 @@ export class CompletionProvider {
) {
shownGptClaudeWarning = true;
throw new Error(
-`Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
+`Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/features/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
);
}

2 changes: 1 addition & 1 deletion core/config/promptFile.ts
@@ -34,7 +34,7 @@ export async function getPromptFiles(
const DEFAULT_PROMPT_FILE = `# This is an example ".prompt" file
# It is used to define and reuse prompts within Continue
# Continue will automatically create a slash command for each prompt in the .prompts folder
-# To learn more, see the full .prompt file reference: https://docs.continue.dev/walkthroughs/prompt-files
+# To learn more, see the full .prompt file reference: https://docs.continue.dev/features/prompt-files
temperature: 0.0
---
{{{ diff }}}
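For context on the template being edited above, a complete `.prompt` file follows the same shape — optional YAML-style options, a `---` separator, then a templated prompt body (the body text here is a hypothetical example):

```
temperature: 0.0
---
{{{ diff }}}

Please review the changes above and point out any bugs or style issues.
```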
2 changes: 1 addition & 1 deletion docs/docs/changelog.mdx
@@ -8,7 +8,7 @@ hide_table_of_contents: true
import VSCodeChangelog from "../../extensions/vscode/CHANGELOG.md";
import IntelliJChangelog from "../../extensions/intellij/CHANGELOG.md";

-Below are the changelogs for the latest releases of the VSCode and IntelliJ extensions.
+Below are the changelogs for the latest releases of the VSCode and IntelliJ extensions. Prior to 2024-07-02, the changelogs do not reflect all updates.

import TOCInline from "@theme/TOCInline";

2 changes: 1 addition & 1 deletion docs/docs/customization/context-providers.md
@@ -90,7 +90,7 @@ Type '@open' to reference the contents of all of your open files. Set `onlyPinned`

### Codebase Retrieval

-Type '@codebase' to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](../walkthroughs/codebase-embeddings.md).
+Type '@codebase' to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](../features/codebase-embeddings.md).

```json
{ "name": "codebase" }
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/docs/setup/model-providers.md
@@ -83,7 +83,7 @@ You can use commercial LLMs via APIs using:
- [Azure OpenAI Service](../reference/Model%20Providers/openai.md)
- [Google Gemini API](../reference/Model%20Providers/geminiapi.md)
- [Mistral API](../reference/Model%20Providers/mistral.md)
-- [Voyage AI API](../walkthroughs/codebase-embeddings.md#openai)
+- [Voyage AI API](../features/codebase-embeddings.md#openai)
- [Cohere API](../reference/Model%20Providers/cohere.md)

**In addition to selecting providers, you will need to figure out [what models to use](./select-model.md).**
14 changes: 7 additions & 7 deletions docs/docs/setup/select-model.md
@@ -21,10 +21,10 @@ _We currently recommend the following open-source models:_
- Unlimited GPU: `llama3-70b`
- Limited GPU: `llama3-8B`

-#### DeepSeek Coder from DeepSeek
+#### DeepSeek Coder v2 from DeepSeek

-- Unlimited GPU: `deepseek-coder-33b`
-- Limited GPU: `deepseek-coder-6.7b`
+- Unlimited GPU: `deepseek-coder-v2:236b`
+- Limited GPU: `deepseek-coder-v2:16b`

_You can also use other open-source chat models by adding them to your `config.json`._

@@ -61,10 +61,10 @@ Our current recommendation for autocomplete, if you are able to choose any model

_We currently recommend the following open-source models:_

-#### DeepSeek Coder from DeepSeek
+#### DeepSeek Coder v2 from DeepSeek

-- Unlimited GPU: `deepseek-coder-6.7b`
-- Limited GPU: `deepseek-coder-1.3b`
+- Unlimited GPU: `deepseek-coder-v2:16b`
+- Limited GPU: `deepseek-coder:6.7b` or `deepseek-coder:1.3b`

#### StarCoder 2 from Hugging Face

@@ -75,7 +75,7 @@ _You can also use other autocomplete models by adding them to your `config.json`

## Embeddings

-We recommend the following embeddings models, which are used for codebase retrieval as described [here](../walkthroughs/codebase-embeddings.md#embeddings-providers)
+We recommend the following embeddings models, which are used for codebase retrieval as described [here](../features/codebase-embeddings.md#embeddings-providers)

### Open-source models

103 changes: 0 additions & 103 deletions docs/docs/walkthroughs/codellama.md

This file was deleted.

94 changes: 94 additions & 0 deletions docs/docs/walkthroughs/llama3.1.md
@@ -0,0 +1,94 @@
---
title: Using Llama 3.1 with Continue
description: How to use Llama 3.1 with Continue
keywords: [llama, meta, togetherai, ollama, replicate]
---

# Using Llama 3.1 with Continue

Continue makes it easy to code with the latest open-source models, including the entire Llama 3.1 family of models.

If you haven't already installed Continue, you can do that [here for VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) or [here for JetBrains](https://plugins.jetbrains.com/plugin/22707-continue). For more general information on customizing Continue, read [our customization docs](../customization/overview.md).

Below we share some of the easiest ways to get up and running, depending on your use case.

## Ollama

Ollama is the fastest way to get up and running with local language models. We recommend trying Llama 3.1 8b, which is impressive for its size and will perform well on most hardware.

1. Download Ollama [here](https://ollama.ai/) (it should walk you through the rest of these steps)
2. Open a terminal and run `ollama run llama3.1:8b`
3. Change your Continue config file like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Llama 3.1 8b",
"provider": "ollama",
"model": "llama3.1-8b"
}
]
}
```

## Groq

Groq provides the fastest available inference for open-source language models, including the entire Llama 3.1 family.

1. Obtain an API key [here](https://console.groq.com/keys)
2. Update your Continue config file like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Llama 3.1 405b",
"provider": "groq",
"model": "llama3.1-405b",
"apiKey": "<API_KEY>"
}
]
}
```

## Together AI

Together AI provides fast and reliable inference of open-source models. You'll be able to run the 405b model with good speed.

1. Create an account [here](https://api.together.xyz/signup)
2. Copy your API key that appears on the welcome screen
3. Update your Continue config file like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Llama 3.1 405b",
"provider": "together",
"model": "llama3.1-405b",
"apiKey": "<API_KEY>"
}
]
}
```

## Replicate

Replicate makes it easy to host and run open-source AI with an API.

1. Get your Replicate API key [here](https://replicate.ai/)
2. Change your Continue config file like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Llama 3.1 405b",
"provider": "replicate",
"model": "llama3.1-405b",
"apiKey": "<API_KEY>"
}
]
}
```
16 changes: 16 additions & 0 deletions docs/docusaurus.config.js
@@ -189,6 +189,22 @@ const config = {
from: "/setup/select-provider",
to: "/setup/model-providers",
},
{
from: "/walkthroughs/codebase-embeddings",
to: "/features/codebase-embeddings",
},
{
from: "/walkthroughs/tab-autocomplete",
to: "/features/tab-autocomplete",
},
{
from: "/walkthroughs/prompt-files",
to: "/features/prompt-files",
},
{
from: "/walkthroughs/quick-actions",
to: "/features/quick-actions",
},
],
},
],
