# Model choice is now possible at command level

I want to be able to choose the model at the command level, so that different commands can use different models. If the command-level model is not set (nil or an empty string), the global model must be used; if it is set, the command-level model must be used.
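A minimal sketch of that fallback, assuming a hypothetical `get_model(opts, prompt_def, key)` helper; the names `opts`, `prompt_def`, and `key` are illustrative, not the plugin's actual API:

```lua
-- Hypothetical helper illustrating the intended fallback;
-- not the plugin's actual implementation.
local function get_model(opts, prompt_def, key)
  -- key is "gemini_model" or "chatgpt_model"
  local command_model = prompt_def[key]
  if command_model ~= nil and command_model ~= "" then
    return command_model -- command-level model is set: use it
  end
  return opts[key] -- nil or empty: fall back to the global model
end
```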

Example configuration for setting the model at the command level:

```lua
{
    "rakotomandimby/code-ai.nvim",
    opts = {
        gemini_model = "gemini-exp-1206",
        chatgpt_model = "gpt-4o-mini",
        -- other opts
        prompts = {
            generate_readme = {
                command = "AIGenerateReadme",
                instruction_tpl = "You are a developer who is writing a README.md file for this project",
                prompt_tpl = "${input}",
                result_tpl = "${output}",
                loading_tpl = "Working on it...",
                gemini_model = "gemini-1.5-pro-latest",
                chatgpt_model = "gpt-4o",
                require_input = true,
            },
            javascript_vanilla = {
                command = "AIJavascriptVanilla",
                instruction_tpl = "Act as a Vanilla JavaScript developer. Format your answer with Markdown.",
                prompt_tpl = "${input}",
                result_tpl = "${output}",
                loading_tpl = "Loading...",
                gemini_model = "gemini-2.0-flash-exp",
                chatgpt_model = "o1-mini-preview",
                require_input = true,
            },
            php_bare = {
                command = "AIPhpBare",
                instruction_tpl = "Act as a PHP developer. Format your answer with Markdown.",
                prompt_tpl = "${input}",
                result_tpl = "${output}",
                loading_tpl = "Loading...",
                require_input = true,
            },
        }
    }
}
```

With this configuration (the usage sketch below walks through the resolution):

- the command `AIGenerateReadme` will use the model `gemini-1.5-pro-latest`
- the command `AIJavascriptVanilla` will use the model `gemini-2.0-flash-exp`
- the command `AIPhpBare` will use the global models, because no command-level model is set
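For the configuration above, the hypothetical `get_model` helper from the earlier sketch would resolve the Gemini model as follows:

```lua
-- Illustrative only; `opts` stands for the table passed as `opts` above.
get_model(opts, opts.prompts.generate_readme, "gemini_model")
-- -> "gemini-1.5-pro-latest" (command-level override)
get_model(opts, opts.prompts.javascript_vanilla, "gemini_model")
-- -> "gemini-2.0-flash-exp"  (command-level override)
get_model(opts, opts.prompts.php_bare, "gemini_model")
-- -> "gemini-exp-1206"       (global fallback: no command-level model set)
```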

I implemented the feature in `./lua/ai/init.lua`; tell me whether this is the right way to do it.