
Commit

merge main
RomneyDa committed Jan 7, 2025
2 parents 5b31f75 + eba6479 commit 8bc73bd
Showing 175 changed files with 2,875 additions and 1,586 deletions.
9 changes: 9 additions & 0 deletions .changes/extensions/vscode/0.8.66.md
@@ -0,0 +1,9 @@
## 0.8.66 - 2024-12-20
### Changed
* Improved autocomplete in untitled files
### Fixed
* Display more mid-line completions
* Restored syntax highlighting
* Fixed tool use bug for models that don't support tools
* Autodetect Mistral API key type
* Fixed Azure OpenAI regressions
4 changes: 0 additions & 4 deletions .changes/unreleased/Changed-20241213-091505.yaml

This file was deleted.

4 changes: 0 additions & 4 deletions .changes/unreleased/Fixed-20241213-091454.yaml

This file was deleted.

4 changes: 0 additions & 4 deletions .changes/unreleased/Fixed-20241213-091519.yaml

This file was deleted.

4 changes: 0 additions & 4 deletions .changes/unreleased/Fixed-20241216-220802.yaml

This file was deleted.

4 changes: 0 additions & 4 deletions .changes/unreleased/Fixed-20241216-220818.yaml

This file was deleted.

4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -188,7 +188,7 @@ After you've written your context provider, make sure to complete the following:

### Adding an LLM Provider

Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:
Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, Novita AI, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:

1. Create a new file in the `core/llm/llms` directory. The name of the file should be the name of the provider, and it should export a class that extends `BaseLLM`. This class should contain the following minimal implementation. We recommend viewing pre-existing providers for more details. The [LlamaCpp Provider](./core/llm/llms/LlamaCpp.ts) is a good simple example.
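
   A rough sketch of the shape such a provider can take is below. This is an illustrative sketch, not the exact Continue API: the base-class member names (`providerName`, `defaultOptions`, `_streamComplete`), the request shape, and the endpoint are assumptions here, and existing providers such as the LlamaCpp one remain the authoritative reference.

```typescript
// Illustrative sketch only — see core/llm/llms/LlamaCpp.ts for a real minimal implementation.
import { BaseLLM } from "..";
import { CompletionOptions, LLMOptions } from "../..";

class ExampleProvider extends BaseLLM {
  static providerName = "example";
  static defaultOptions: Partial<LLMOptions> = {
    apiBase: "https://api.example.com/v1/", // hypothetical endpoint
  };

  // Yield completion text as it comes back from the provider's API.
  protected async *_streamComplete(
    prompt: string,
    options: CompletionOptions,
  ): AsyncGenerator<string> {
    const response = await fetch(new URL("completions", this.apiBase), {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt, max_tokens: options.maxTokens }),
    });
    const data = await response.json();
    yield data.choices?.[0]?.text ?? "";
  }
}

export default ExampleProvider;
```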

@@ -209,7 +209,7 @@ While any model that works with a supported provider can be used with Continue,
1. Add a `ModelPackage` entry for the model into [configs/models.ts](./gui/src/pages/AddNewModel/configs/models.ts), following the lead of the many examples near the top of the file
2. Add the model within its provider's array to [AddNewModel.tsx](./gui/src/pages/AddNewModel/AddNewModel.tsx) (add provider if needed)
- [index.d.ts](./core/index.d.ts) - This file defines the TypeScript types used throughout Continue. You'll find a `ModelName` type. Be sure to add the name of your model to this.
- LLM Providers: Since many providers use their own custom strings to identify models, you'll have to add the translation from Continue's model name (the one you added to `index.d.ts`) and the model string for each of these providers: [Ollama](./core/llm/llms/Ollama.ts), [Together](./core/llm/llms/Together.ts), and [Replicate](./core/llm/llms/Replicate.ts). You can find their full model lists here: [Ollama](https://ollama.ai/library), [Together](https://docs.together.ai/docs/inference-models), [Replicate](https://replicate.com/collections/streaming-language-models).
- LLM Providers: Since many providers use their own custom strings to identify models, you'll have to add the translation from Continue's model name (the one you added to `index.d.ts`) and the model string for each of these providers: [Ollama](./core/llm/llms/Ollama.ts), [Together](./core/llm/llms/Together.ts), [Novita AI](./core/llm/llms/Novita.ts), and [Replicate](./core/llm/llms/Replicate.ts). You can find their full model lists here: [Ollama](https://ollama.ai/library), [Together](https://docs.together.ai/docs/inference-models), [Novita AI](https://novita.ai/llm-api?utm_source=github_continuedev&utm_medium=github_readme&utm_campaign=github_link), [Replicate](https://replicate.com/collections/streaming-language-models).
- [Prompt Templates](./core/llm/index.ts) - In this file you'll find the `autodetectTemplateType` function. Make sure that for the model name you just added, this function returns the correct template type. This is assuming that the chat template for that model is already built in Continue. If not, you will have to add the template type and corresponding edit and chat templates.
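
As a hedged illustration of that last step: the branch and names below are hypothetical (the real `autodetectTemplateType` in `core/llm/index.ts` and the `TemplateType` union in `core/index.d.ts` are the source of truth), but adding a model typically amounts to one more case like this:

```typescript
// Hypothetical excerpt — existing branches elided, model and template names illustrative.
function autodetectTemplateType(model: string): TemplateType | undefined {
  const lower = model.toLowerCase();

  // ...existing branches for llama, mistral, etc. ...

  if (lower.includes("my-new-model")) {
    return "chatml"; // whichever chat template the model was trained with
  }

  return undefined;
}
```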

### Adding Pre-indexed Documentation
28 changes: 5 additions & 23 deletions binary/build.js
@@ -118,22 +118,6 @@ async function installNodeModuleInTempDirAndCopyToCurrent(packageName, toCopy) {
}

(async () => {
// Informs of where to look for node_sqlite3.node https://www.npmjs.com/package/bindings#:~:text=The%20searching%20for,file%20is%20found
// This is only needed for our `pkg` command
fs.writeFileSync(
"out/package.json",
JSON.stringify(
{
name: "binary",
version: "1.0.0",
author: "Continue Dev, Inc",
license: "Apache-2.0",
},
undefined,
2,
),
);

console.log("[info] Downloading prebuilt lancedb...");
for (const target of targets) {
if (targetToLanceDb[target]) {
@@ -276,21 +260,19 @@ async function installNodeModuleInTempDirAndCopyToCurrent(packageName, toCopy) {
`node_modules/${targetToLanceDb[target]}/index.node`,
`${targetDir}/index.node`,
);
}

// Our dummy `package.json` is no longer needed so we can remove it.
// If it isn't removed, then running locally via `node out/index.js` will fail
// with a `Failed to locate bindings` error
fs.unlinkSync("out/package.json");
// Informs the `continue-binary` of where to look for node_sqlite3.node
// https://www.npmjs.com/package/bindings#:~:text=The%20searching%20for,file%20is%20found
fs.writeFileSync(`${targetDir}/package.json`, "");
}

const pathsToVerify = [];
for (target of targets) {
for (const target of targets) {
const exe = target.startsWith("win") ? ".exe" : "";
const targetDir = `bin/${target}`;
pathsToVerify.push(
`${targetDir}/continue-binary${exe}`,
`${targetDir}/index.node`, // @lancedb
"package.json", // Informs of where to look for node_sqlite3.node https://www.npmjs.com/package/bindings#:~:text=The%20searching%20for,file%20is%20found
`${targetDir}/build/Release/node_sqlite3.node`,
);
}
24 changes: 12 additions & 12 deletions binary/test/binary.test.ts
@@ -1,8 +1,8 @@
import { SerializedContinueConfig } from "core";
// import Mock from "core/llm/llms/Mock.js";
import { FromIdeProtocol, ToIdeProtocol } from "core/protocol/index.js";
import FileSystemIde from "core/util/filesystem";
import { IMessenger } from "core/protocol/messenger";
import FileSystemIde from "core/util/filesystem";
import fs from "fs";
import {
ChildProcessWithoutNullStreams,
@@ -186,10 +186,11 @@ describe("Test Suite", () => {
});

it("should return valid config object", async () => {
const { config } = await messenger.request(
const { result } = await messenger.request(
"config/getSerializedProfileInfo",
undefined,
);
const { config } = result;
expect(config).toHaveProperty("models");
expect(config).toHaveProperty("embeddingsProvider");
expect(config).toHaveProperty("contextProviders");
@@ -229,18 +230,17 @@ describe("Test Suite", () => {
await messenger.request("config/addModel", {
model,
});
const { config } = await messenger.request(
"config/getSerializedProfileInfo",
undefined,
);
expect(config.models.some((m) => m.title === model.title)).toBe(true);
const {
result: { config },
} = await messenger.request("config/getSerializedProfileInfo", undefined);

expect(config!.models.some((m) => m.title === model.title)).toBe(true);

await messenger.request("config/deleteModel", { title: model.title });
const { config: configAfterDelete } = await messenger.request(
"config/getSerializedProfileInfo",
undefined,
);
expect(configAfterDelete.models.some((m) => m.title === model.title)).toBe(
const {
result: { config: configAfterDelete },
} = await messenger.request("config/getSerializedProfileInfo", undefined);
expect(configAfterDelete!.models.some((m) => m.title === model.title)).toBe(
false,
);
});
6 changes: 2 additions & 4 deletions core/autocomplete/CompletionProvider.ts
@@ -3,11 +3,9 @@ import { TRIAL_FIM_MODEL } from "../config/onboarding.js";
import { IDE, ILLM } from "../index.js";
import OpenAI from "../llm/llms/OpenAI.js";
import { DEFAULT_AUTOCOMPLETE_OPTS } from "../util/parameters.js";
import { PosthogFeatureFlag, Telemetry } from "../util/posthog.js";

import { shouldCompleteMultiline } from "./classification/shouldCompleteMultiline.js";
import { ContextRetrievalService } from "./context/ContextRetrievalService.js";

// @prettier-ignore

import { BracketMatchingService } from "./filtering/BracketMatchingService.js";
import { CompletionStreamer } from "./generation/CompletionStreamer.js";
@@ -123,10 +121,10 @@ export class CompletionProvider {
}

private async _getAutocompleteOptions() {
const config = await this.configHandler.loadConfig();
const { config } = await this.configHandler.loadConfig();
const options = {
...DEFAULT_AUTOCOMPLETE_OPTS,
...config.tabAutocompleteOptions,
...config?.tabAutocompleteOptions,
};
return options;
}
8 changes: 4 additions & 4 deletions core/autocomplete/README.md
@@ -7,7 +7,7 @@ Continue now provides support for tab autocomplete in [VS Code](https://marketpl
We recommend setting up tab-autocomplete with a local Ollama instance. To do this, first download the latest version of Ollama from [here](https://ollama.ai). Then, run the following command to download our recommended model:

```bash
ollama run qwen2.5-coder:1.5b
ollama run qwen2.5-coder:1.5b-base
```

Once it has been downloaded, you should begin to see completions in VS Code.
@@ -101,7 +101,7 @@ This object allows you to customize the behavior of tab-autocomplete. The availa
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "ollama",
"model": "qwen2.5-coder:1.5b",
"model": "qwen2.5-coder:1.5b-base",
"apiBase": "https://<my endpoint>"
},
"tabAutocompleteOptions": {
@@ -123,9 +123,9 @@ Follow these steps to ensure that everything is set up correctly:

1. Make sure you have the "Enable Tab Autocomplete" setting checked (in VS Code, you can toggle by clicking the "Continue" button in the status bar).
2. Make sure you have downloaded Ollama.
3. Run `ollama run qwen2.5-coder:1.5b` to verify that the model is downloaded.
3. Run `ollama run qwen2.5-coder:1.5b-base` to verify that the model is downloaded.
4. Make sure that any other completion providers are disabled (e.g. Copilot), as they may interfere.
5. Make sure that you aren't also using another Ollama model for chat. This will cause Ollama to constantly load and unload the models from memory, resulting in slow responses (or none at all) for both.
5. If you are also using another Ollama model for chat and your resources are limited, Ollama will constantly load and unload the two models from memory, resulting in slow responses (or none at all) for both. One resolution is to use the same model for both chat and autocomplete.
6. Check the output of the logs to find any potential errors (cmd/ctrl+shift+p -> "Toggle Developer Tools" -> "Console" tab in VS Code, ~/.continue/logs/core.log in JetBrains).
7. If you are still having issues, please let us know in our [Discord](https://discord.gg/vapESyrFmJ) and we'll help as soon as possible.

13 changes: 13 additions & 0 deletions core/autocomplete/filtering/streamTransforms/lineStream.ts
@@ -316,6 +316,19 @@ export async function* skipLines(stream: LineStream): LineStream {
}
}

/**
* Handles cases where original lines have trailing whitespace, but new lines do not.
* @param {LineStream} stream - The input stream of lines.
* @yields {string} Filtered lines that are stripped of trailing whitespace
*/
export async function* removeTrailingWhitespace(
stream: LineStream,
): LineStream {
for await (const line of stream) {
yield line.trimEnd();
}
}
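
// Illustrative usage sketch (not part of this diff; the names below are made up
// for demonstration): piping a small stream through removeTrailingWhitespace
// strips trailing spaces from each yielded line.
async function* exampleLines(): LineStream {
  yield "const x = 1;   ";
  yield "return x;";
}

async function demoRemoveTrailingWhitespace() {
  for await (const line of removeTrailingWhitespace(exampleLines())) {
    console.log(JSON.stringify(line)); // "const x = 1;" then "return x;"
  }
}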

/**
* Filters and processes lines from a code block, removing unnecessary markers and handling edge cases.
*
2 changes: 1 addition & 1 deletion core/commands/slash/commit.ts
@@ -5,7 +5,7 @@ const CommitMessageCommand: SlashCommand = {
name: "commit",
description: "Generate a commit message for current changes",
run: async function* ({ ide, llm, params }) {
const includeUnstaged = params?.includeUnstaged ?? true;
const includeUnstaged = params?.includeUnstaged ?? false;
const diff = await ide.getDiff(includeUnstaged);

if (diff.length === 0) {
4 changes: 2 additions & 2 deletions core/config/ConfigHandler.test.ts
@@ -18,8 +18,8 @@ describe.skip("Test the ConfigHandler and E2E config loading", () => {
});

test("should load the default config successfully", async () => {
const config = await testConfigHandler.loadConfig();
expect(config.models.length).toBe(defaultConfig.models.length);
const result = await testConfigHandler.loadConfig();
expect(result.config!.models.length).toBe(defaultConfig.models.length);
});

test.skip("should add a system message from config.ts", async () => {
35 changes: 16 additions & 19 deletions core/config/ConfigHandler.ts
@@ -25,6 +25,7 @@ import {
ProfileDescription,
ProfileLifecycleManager,
} from "./ProfileLifecycleManager.js";
import { pathToFileURL } from "url";

export type { ProfileDescription };

@@ -60,13 +61,13 @@

// Always load local profile immediately in case control plane doesn't load
try {
this.loadConfig();
void this.loadConfig();
} catch (e) {
console.error("Failed to load config: ", e);
}

// Load control plane profiles
this.fetchControlPlaneProfiles();
void this.fetchControlPlaneProfiles();
}

// This will be the local profile
@@ -88,7 +89,7 @@ async openConfigProfile(profileId?: string) {
async openConfigProfile(profileId?: string) {
let openProfileId = profileId || this.selectedProfileId;
if (openProfileId === "local") {
await this.ide.openFile(getConfigJsonPath());
await this.ide.openFile(pathToFileURL(getConfigJsonPath()).toString());
} else {
await this.ide.openUrl(
"https://app.continue.dev/",
@@ -146,12 +147,8 @@

async setSelectedProfile(profileId: string) {
this.selectedProfileId = profileId;
const newConfig = await this.loadConfig();
this.notifyConfigListeners({
config: newConfig,
errors: undefined,
configLoadInterrupted: false,
});
const result = await this.loadConfig();
this.notifyConfigListeners(result);
const selectedProfiles =
this.globalContext.get("lastSelectedProfileForWorkspace") ?? {};
selectedProfiles[await this.getWorkspaceId()] = profileId;
@@ -170,7 +167,7 @@
// Automatically refresh config when Continue-related IDE (e.g. VS Code) settings are changed
updateIdeSettings(ideSettings: IdeSettings) {
this.ideSettingsPromise = Promise.resolve(ideSettings);
this.reloadConfig();
void this.reloadConfig();
}

updateControlPlaneSessionInfo(
@@ -236,24 +233,24 @@
return this.profiles.map((p) => p.profileDescription);
}

async loadConfig(): Promise<ContinueConfig> {
return (
await this.currentProfile.loadConfig(this.additionalContextProviders)
).config!; // <-- TODO
async loadConfig(): Promise<ConfigResult<ContinueConfig>> {
return await this.currentProfile.loadConfig(
this.additionalContextProviders,
);
}

async llmFromTitle(title?: string): Promise<ILLM> {
const config = await this.loadConfig();
const model = config.models.find((m) => m.title === title);
const { config } = await this.loadConfig();
const model = config?.models.find((m) => m.title === title);
if (!model) {
if (title === ONBOARDING_LOCAL_MODEL_TITLE) {
// Special case, make calls to Ollama before we have it in the config
const ollama = new Ollama({
model: LOCAL_ONBOARDING_CHAT_MODEL,
});
return ollama;
} else if (config.models.length > 0) {
return config.models[0];
} else if (config?.models?.length) {
return config?.models[0];
}

throw new Error("No model found");
@@ -264,6 +261,6 @@

registerCustomContextProvider(contextProvider: IContextProvider) {
this.additionalContextProviders.push(contextProvider);
this.reloadConfig();
void this.reloadConfig();
}
}

