
Reduce language server latency #1003

Merged: 3 commits merged into master from reduce-ls-latency on Jun 11, 2024
Conversation

@zth (Collaborator) commented Jun 4, 2024

This reduces latency in the language server by caching a few project config related things, so that we most often don't need to do any potentially expensive lookups before we call the analysis, or invoke incremental typechecking.

Known issue with this approach: we need to recompute the ReScript version when it changes. However, I don't believe we're currently watching anything that would tell us when to recompute it, so I'll need to think a bit about how to do that. It's not a big problem, though, because the ReScript version of a project rarely changes, and restarting the language server (or closing all project files) fixes it.
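A minimal sketch of the caching idea the PR describes, assuming hypothetical names (the actual field names `rescriptVersion`, `bscBinaryLocation`, and `namespaceName` appear in the diff below; the cache and helper shown here are illustrative, not the PR's implementation):

```typescript
// Cache expensive per-project config lookups so the language server
// doesn't repeat them (e.g. spawning processes, re-reading config
// files) on every request before calling the analysis binary.
type ProjectConfigCache = {
  rescriptVersion: string | undefined;
  bscBinaryLocation: string | null;
  namespaceName: string | null;
};

const cacheByProjectRoot = new Map<string, ProjectConfigCache>();

function getProjectConfig(
  projectRoot: string,
  compute: () => ProjectConfigCache // the expensive lookup
): ProjectConfigCache {
  let cached = cacheByProjectRoot.get(projectRoot);
  if (cached === undefined) {
    // Only pay the expensive cost on the first lookup per project.
    cached = compute();
    cacheByProjectRoot.set(projectRoot, cached);
  }
  return cached;
}
```

The trade-off, as noted above, is staleness: a cached `rescriptVersion` survives until the cache entry is dropped, e.g. on language-server restart.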

@zth zth force-pushed the reduce-ls-latency branch from d26c253 to 3b1d2a9 Compare June 4, 2024 12:32
@zth zth marked this pull request as ready for review June 6, 2024 08:02
@zth zth requested a review from cristianoc June 6, 2024 08:03
@zth (Collaborator, Author) commented Jun 6, 2024

cc @cometkim

@zth (Collaborator, Author) commented Jun 7, 2024

Cc @cristianoc

@cristianoc (Collaborator) left a comment

What's the perf difference in practice?

In case we're worried about things changing infrequently such as compiler version, one possibility would be to refresh the data from time to time (e.g. based on an interval). Which should have no impact on latency.

Anything else to pay attention to in this PR?

Comment on lines +12 to +14
rescriptVersion: string | undefined;
bscBinaryLocation: string | null;
namespaceName: string | null;
Collaborator:

I guess these 3 are the added cached values.

Collaborator (Author):

Exactly. rescriptVersion was already cached for incremental compilation; I just moved that cache to a more central place.

@zth (Collaborator, Author) commented Jun 7, 2024

What's the perf difference in practice?

It seems to vary wildly depending on how the system handles frequently spawning child processes. For me there's no noticeable difference, but for @cometkim and @JonoPrest, for example, there's a pretty massive difference at times.

In case we're worried about things changing infrequently such as compiler version, one possibility would be to refresh the data from time to time (e.g. based on an interval). Which should have no impact on latency.

Yeah this is a good approach. I think what I want us to do soon is what you mentioned in another thread - figure out and write down all of the inputs that might change which the LS needs to react to, and then figure out which of those are worth caring about, and what then to do about them.

Anything else to pay attention to in this PR?

Not really, no. This should be fairly straightforward.

@JonoPrest

I certainly experience delays. I've increased my LSP timeout to 5 seconds and still often hit timeouts with the ReScript LS. It could be a configuration or setup issue, but I don't get these delays with other LSPs. rust-analyzer, for example, also has to index a project before it can be used in the editor (which can take minutes), but it doesn't freeze the editor or hit any timeouts.

@fhammerschmidt (Member) commented

@JonoPrest did you try with the VSIX of this PR?
https://github.com/rescript-lang/rescript-vscode/actions/runs/9397264258

@JonoPrest

@JonoPrest did you try with the VSIX of this PR?

https://github.com/rescript-lang/rescript-vscode/actions/runs/9397264258

I did, yes 👍🏼. We've been discussing it on Discord. I think there was an improvement, but I haven't been able to try the project cache configuration yet.

// First look in project files
let foundRootFromProjectFiles: string | null = null;

for (const rootPath of projectsFiles.keys()) {
Member:

If I understand correctly here, the most common operation is to find the project info associated with the current open file.

I see you make a { [project root path]: [project info] } index above, but it seems like it's missing the { [open file]: [project root path] } or { [open file]: [project info] } inverted index.

I think we wouldn't need this loop if we used an inverted index in the same way:

export const projectRoots = new Map<p.DocumentUri, string>();
export const projects = new Map<string, projectFiles>();
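A minimal sketch of how a lookup through that inverted index could work. Names and value shapes are simplified stand-ins: in the actual code the key would be a `p.DocumentUri` and the value the `projectFiles` record from the `projects` map:

```typescript
// Simplified stand-in for the project info record.
type ProjectInfo = { root: string };

// Inverted index: open file URI -> project root path.
const projectRootByFile = new Map<string, string>();
// Existing index: project root path -> project info.
const projectsByRoot = new Map<string, ProjectInfo>();

// O(1) lookup: no scan over all known project roots per request.
function projectForFile(fileUri: string): ProjectInfo | undefined {
  const root = projectRootByFile.get(fileUri);
  return root !== undefined ? projectsByRoot.get(root) : undefined;
}
```

The cost of this design is keeping the file-to-root map in sync as files are opened and closed.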

Collaborator (Author):

We could do that, but this feels like a good trade-off to me. That loop is cheap, and it's also resilient if roots change for some reason while the file is open (unlikely, but still).
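The loop approach being defended here can be sketched as a longest-prefix scan over the known project roots (illustrative names, not the PR's code). Because it re-derives the root on every lookup rather than trusting a stored mapping, it stays correct even if a file's project root changes while the file is open:

```typescript
// Find the project owning `filePath` by scanning all known project
// roots and picking the longest root that is a path prefix of the
// file. Cheap in practice: editors rarely have many projects open.
function findProjectRoot(
  filePath: string,
  projectRoots: Iterable<string>
): string | null {
  let best: string | null = null;
  for (const root of projectRoots) {
    // Require a path-separator boundary so "/a/bc" doesn't match "/a/b".
    if (filePath.startsWith(root + "/")) {
      if (best === null || root.length > best.length) {
        best = root; // prefer the most deeply nested matching root
      }
    }
  }
  return best;
}
```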

@zth zth merged commit aed1e8a into master Jun 11, 2024
6 checks passed
@zth zth deleted the reduce-ls-latency branch June 11, 2024 16:46
jfrolich pushed a commit to jfrolich/rescript-vscode that referenced this pull request Sep 3, 2024
* avoid recomputing project roots and ReScript versions as much as possible

* more precomputation

* changelog