
Can't start wren-ai-service for a qdrant interaction error #1121

Open
nemesisq3a opened this issue Jan 15, 2025 · 22 comments
Labels
bug Something isn't working

Comments

@nemesisq3a

Describe the bug
wren-ai-service fails to start when brought up via docker compose.

To Reproduce
Download the docker-compose.yaml, .env, and config.yaml examples from https://github.com/Canner/WrenAI/tree/main/docker and configure them to use a self-hosted Ollama LLM. I'm using a Docker image of Ollama running on a machine on my network with the Qwen2.5-coder LLM loaded.

Expected behavior
wren-ai-service starts successfully.

Desktop (please complete the following information):

  • OS: Ubuntu Server 22.04 LTS
  • Docker manually installed (no Docker Desktop)

Wren AI Information

  • Version: 0.14.0
  • LLM_PROVIDER= # ollama_embedder
  • GENERATION_MODEL= # Qwen2.5-coder:7b

Additional context
ERROR LOG:

wrenai-wren-ai-service-1  | INFO:     Started server process [8]
wrenai-wren-ai-service-1  | INFO:     Waiting for application startup.
wrenai-wren-ai-service-1  | I0115 21:43:39.189 8 wren-ai-service:42] Imported Provider: src.providers.document_store
wrenai-wren-ai-service-1  | I0115 21:43:39.460 8 wren-ai-service:66] Registering provider: openai_embedder
wrenai-wren-ai-service-1  | I0115 21:43:39.460 8 wren-ai-service:66] Registering provider: qdrant
wrenai-wren-ai-service-1  | I0115 21:43:39.460 8 wren-ai-service:42] Imported Provider: src.providers.document_store.qdrant
wrenai-wren-ai-service-1  | I0115 21:43:39.460 8 wren-ai-service:42] Imported Provider: src.providers.embedder
wrenai-wren-ai-service-1  | I0115 21:43:39.461 8 wren-ai-service:66] Registering provider: azure_openai_embedder
wrenai-wren-ai-service-1  | I0115 21:43:39.461 8 wren-ai-service:42] Imported Provider: src.providers.embedder.azure_openai
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:66] Registering provider: ollama_embedder
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:42] Imported Provider: src.providers.embedder.ollama
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:42] Imported Provider: src.providers.embedder.openai
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:42] Imported Provider: src.providers.engine
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:66] Registering provider: wren_ui
wrenai-wren-ai-service-1  | I0115 21:43:39.462 8 wren-ai-service:66] Registering provider: wren_ibis
wrenai-wren-ai-service-1  | I0115 21:43:39.463 8 wren-ai-service:66] Registering provider: wren_engine
wrenai-wren-ai-service-1  | I0115 21:43:39.463 8 wren-ai-service:42] Imported Provider: src.providers.engine.wren
wrenai-wren-ai-service-1  | I0115 21:43:39.463 8 wren-ai-service:42] Imported Provider: src.providers.llm
wrenai-wren-ai-service-1  | I0115 21:43:39.468 8 wren-ai-service:66] Registering provider: azure_openai_llm
wrenai-wren-ai-service-1  | I0115 21:43:39.468 8 wren-ai-service:42] Imported Provider: src.providers.llm.azure_openai
wrenai-wren-ai-service-1  | /app/.venv/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
wrenai-wren-ai-service-1  | * 'fields' has been removed
wrenai-wren-ai-service-1  |   warnings.warn(message, UserWarning)
wrenai-wren-ai-service-1  | I0115 21:43:40.139 8 wren-ai-service:66] Registering provider: litellm_llm
wrenai-wren-ai-service-1  | I0115 21:43:40.139 8 wren-ai-service:42] Imported Provider: src.providers.llm.litellm
wrenai-wren-ai-service-1  | I0115 21:43:40.141 8 wren-ai-service:66] Registering provider: ollama_llm
wrenai-wren-ai-service-1  | I0115 21:43:40.141 8 wren-ai-service:42] Imported Provider: src.providers.llm.ollama
wrenai-wren-ai-service-1  | I0115 21:43:40.176 8 wren-ai-service:66] Registering provider: openai_llm
wrenai-wren-ai-service-1  | I0115 21:43:40.176 8 wren-ai-service:42] Imported Provider: src.providers.llm.openai
wrenai-wren-ai-service-1  | I0115 21:43:40.176 8 wren-ai-service:42] Imported Provider: src.providers.loader
wrenai-wren-ai-service-1  | I0115 21:43:40.177 8 wren-ai-service:18] initializing provider: ollama_embedder
wrenai-wren-ai-service-1  | I0115 21:43:40.177 8 wren-ai-service:93] Getting provider: ollama_embedder from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:118] Ollama model qwen2.5-coder:7b already exists
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:180] Using Ollama Embedding Model: qwen2.5-coder:7b
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:181] Using Ollama URL: http://192.168.200.111:11434
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:18] initializing provider: litellm_llm
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:93] Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:18] initializing provider: litellm_llm
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:93] Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:18] initializing provider: qdrant
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:93] Getting provider: qdrant from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
wrenai-qdrant-1           | 2025-01-15T21:43:40.231810Z  INFO actix_web::middleware::logger: 172.20.0.4 "GET /collections/Document/exists HTTP/1.1" 200 57 "-" "python-httpx/0.27.2" 0.000223    
wrenai-qdrant-1           | 2025-01-15T21:43:40.233073Z  INFO storage::content_manager::toc::collection_meta_ops: Deleting collection Document    
wrenai-qdrant-1           | 2025-01-15T21:43:40.236175Z  INFO actix_web::middleware::logger: 172.20.0.4 "DELETE /collections/Document HTTP/1.1" 200 54 "-" "python-httpx/0.27.2" 0.003125    
wrenai-qdrant-1           | 2025-01-15T21:43:40.238040Z  INFO storage::content_manager::toc::collection_meta_ops: Creating collection Document    
wrenai-qdrant-1           | 2025-01-15T21:43:40.286497Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/Document HTTP/1.1" 200 52 "-" "python-httpx/0.27.2" 0.048509    
wrenai-qdrant-1           | 2025-01-15T21:43:40.306545Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/Document/index?wait=true HTTP/1.1" 200 72 "-" "python-httpx/0.27.2" 0.019139    
wrenai-wren-ai-service-1  | I0115 21:43:40.307 8 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
wrenai-qdrant-1           | 2025-01-15T21:43:40.341330Z  INFO actix_web::middleware::logger: 172.20.0.4 "GET /collections/table_descriptions/exists HTTP/1.1" 200 56 "-" "python-httpx/0.27.2" 0.000173    
wrenai-qdrant-1           | 2025-01-15T21:43:40.341859Z  INFO storage::content_manager::toc::collection_meta_ops: Deleting collection table_descriptions    
wrenai-qdrant-1           | 2025-01-15T21:43:40.345164Z  INFO actix_web::middleware::logger: 172.20.0.4 "DELETE /collections/table_descriptions HTTP/1.1" 200 53 "-" "python-httpx/0.27.2" 0.003324    
wrenai-qdrant-1           | 2025-01-15T21:43:40.345903Z  INFO storage::content_manager::toc::collection_meta_ops: Creating collection table_descriptions    
wrenai-qdrant-1           | 2025-01-15T21:43:40.392021Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/table_descriptions HTTP/1.1" 200 54 "-" "python-httpx/0.27.2" 0.046174    
wrenai-qdrant-1           | 2025-01-15T21:43:40.414087Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/table_descriptions/index?wait=true HTTP/1.1" 200 72 "-" "python-httpx/0.27.2" 0.021094    
wrenai-wren-ai-service-1  | I0115 21:43:40.415 8 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
wrenai-qdrant-1           | 2025-01-15T21:43:40.449300Z  INFO actix_web::middleware::logger: 172.20.0.4 "GET /collections/view_questions/exists HTTP/1.1" 200 58 "-" "python-httpx/0.27.2" 0.000162    
wrenai-qdrant-1           | 2025-01-15T21:43:40.449824Z  INFO storage::content_manager::toc::collection_meta_ops: Deleting collection view_questions    
wrenai-qdrant-1           | 2025-01-15T21:43:40.453089Z  INFO actix_web::middleware::logger: 172.20.0.4 "DELETE /collections/view_questions HTTP/1.1" 200 54 "-" "python-httpx/0.27.2" 0.003286    
wrenai-qdrant-1           | 2025-01-15T21:43:40.453791Z  INFO storage::content_manager::toc::collection_meta_ops: Creating collection view_questions    
wrenai-qdrant-1           | 2025-01-15T21:43:40.499653Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/view_questions HTTP/1.1" 200 54 "-" "python-httpx/0.27.2" 0.045910    
wrenai-qdrant-1           | 2025-01-15T21:43:40.521487Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/view_questions/index?wait=true HTTP/1.1" 200 72 "-" "python-httpx/0.27.2" 0.020818    
wrenai-wren-ai-service-1  | I0115 21:43:40.522 8 wren-ai-service:18] initializing provider: wren_ui
wrenai-wren-ai-service-1  | I0115 21:43:40.522 8 wren-ai-service:93] Getting provider: wren_ui from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.522 8 wren-ai-service:24] Using Engine: wren_ui
wrenai-wren-ai-service-1  | I0115 21:43:40.528 8 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
wrenai-qdrant-1           | 2025-01-15T21:43:40.568094Z  INFO actix_web::middleware::logger: 172.20.0.4 "GET /collections/Document/exists HTTP/1.1" 200 58 "-" "python-httpx/0.27.2" 0.000186    
wrenai-qdrant-1           | 2025-01-15T21:43:40.568834Z  INFO actix_web::middleware::logger: 172.20.0.4 "GET /collections/Document HTTP/1.1" 200 428 "-" "python-httpx/0.27.2" 0.000277    
wrenai-qdrant-1           | 2025-01-15T21:43:40.575790Z  INFO actix_web::middleware::logger: 172.20.0.4 "PUT /collections/Document/index?wait=true HTTP/1.1" 200 72 "-" "python-httpx/0.27.2" 0.002349    
wrenai-wren-ai-service-1  | ERROR:    Traceback (most recent call last):
wrenai-wren-ai-service-1  |   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
wrenai-wren-ai-service-1  |     async with self.lifespan_context(app) as maybe_state:
wrenai-wren-ai-service-1  |   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
wrenai-wren-ai-service-1  |     return await anext(self.gen)
wrenai-wren-ai-service-1  |            ^^^^^^^^^^^^^^^^^^^^^
wrenai-wren-ai-service-1  |   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
wrenai-wren-ai-service-1  |     async with original_context(app) as maybe_original_state:
wrenai-wren-ai-service-1  |   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
wrenai-wren-ai-service-1  |     return await anext(self.gen)
wrenai-wren-ai-service-1  |            ^^^^^^^^^^^^^^^^^^^^^
wrenai-wren-ai-service-1  |   File "/src/__main__.py", line 30, in lifespan
wrenai-wren-ai-service-1  |     app.state.service_container = create_service_container(pipe_components, settings)
wrenai-wren-ai-service-1  |                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wrenai-wren-ai-service-1  |   File "/src/globals.py", line 69, in create_service_container
wrenai-wren-ai-service-1  |     "db_schema": indexing.DBSchema(
wrenai-wren-ai-service-1  |                  ^^^^^^^^^^^^^^^^^^
wrenai-wren-ai-service-1  |   File "/src/pipelines/indexing/db_schema.py", line 346, in __init__
wrenai-wren-ai-service-1  |     "embedder": embedder_provider.get_document_embedder(),
wrenai-wren-ai-service-1  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wrenai-wren-ai-service-1  | AttributeError: 'NoneType' object has no attribute 'get_document_embedder'
wrenai-wren-ai-service-1  | 
wrenai-wren-ai-service-1  | ERROR:    Application startup failed. Exiting.
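For context, `AttributeError: 'NoneType' object has no attribute 'get_document_embedder'` is the typical symptom of a registry lookup that returns `None` for an unknown key, with the failure only surfacing later when a method is called on the result. A minimal, hypothetical reproduction of the pattern (not WrenAI's actual code):

```python
class OllamaEmbedderProvider:
    """Stand-in for a registered embedder provider."""

    def get_document_embedder(self):
        return "document-embedder"


# Hypothetical registry keyed by "provider.model" identifiers.
providers = {"ollama_embedder.qwen2.5-coder:7b": OllamaEmbedderProvider()}

# A pipe referencing an identifier that was never registered gets None back...
embedder_provider = providers.get("openai_embedder.text-embedding-3-large")

# ...and the mistake only surfaces later, as an AttributeError on NoneType.
try:
    embedder_provider.get_document_embedder()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'get_document_embedder'
```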
@nemesisq3a nemesisq3a added the bug Something isn't working label Jan 15, 2025
@nemesisq3a nemesisq3a changed the title Can't start wren-ai-service for a qdran interaction error Can't start wren-ai-service for a qdrant interaction error Jan 15, 2025
@cyyeh
Member

cyyeh commented Jan 16, 2025

@nemesisq3a could you provide your config.yaml? Thanks

@wisonzhu

me too
(screenshot attached)

@nemesisq3a
Author

@nemesisq3a could you provide your config.yaml? Thanks

Sure! Thanks

type: llm
provider: litellm_llm
timeout: 120
models:
- model: gpt-4o-mini-2024-07-18
  api_base: https://api.openai.com/v1
  api_key_name: LLM_OPENAI_API_KEY
  kwargs:
    temperature: 0
    n: 1
    # for better consistency of llm response, refer: https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
    seed: 0
    max_tokens: 4096
    response_format:
      type: json_object
- model: gpt-4o-2024-08-06
  api_base: https://api.openai.com/v1
  api_key_name: LLM_OPENAI_API_KEY
  kwargs:
    temperature: 0
    n: 1
    # for better consistency of llm response, refer: https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
    seed: 0
    max_tokens: 4096
    response_format:
      type: json_object

---
type: embedder
provider: ollama_embedder
models:
  - model: qwen2.5-coder:7b
    dimension: 768
url: http://192.168.200.111:11434
timeout: 120

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_indexing
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: table_description_indexing
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: sql_answer
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: sql_regeneration
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: relationship_recommendation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    engine: wren_ui
  - name: intent_classification
    llm: litellm_llm.gpt-4o-mini-2024-07-18
    embedder: openai_embedder.text-embedding-3-large
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: openai_embedder.text-embedding-3-large
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: openai_embedder.text-embedding-3-large 
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: openai_embedder.text-embedding-3-large
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: preprocess_sql_data
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: chart_adjustment
    llm: litellm_llm.gpt-4o-mini-2024-07-18
  - name: sql_question_generation
    llm: litellm_llm.gpt-4o-mini-2024-07-18

---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
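For what it's worth, every pipe in the pipeline section references `openai_embedder.text-embedding-3-large`, while the only embedder defined in this file is `ollama_embedder` with model `qwen2.5-coder:7b`. Assuming each pipe must reference a defined `provider.model` pair, the embedder lines would need to point at the Ollama model instead, e.g.:

```yaml
# Sketch (assumption): pipe embedder references should match the embedder
# block defined above (ollama_embedder + qwen2.5-coder:7b).
  - name: db_schema_indexing
    embedder: ollama_embedder.qwen2.5-coder:7b
    document_store: qdrant
```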

@wisonzhu

:39:04 return await val
2025-01-16 15:39:04 ^^^^^^^^^
2025-01-16 15:39:04 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
2025-01-16 15:39:04 await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
2025-01-16 15:39:04 ^^^^^^^^^^^^^^^^^^^^^
2025-01-16 15:39:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
2025-01-16 15:39:04 self._handle_exception(observation, e)
2025-01-16 15:39:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 517, in _handle_exception
2025-01-16 15:39:04 raise e
2025-01-16 15:39:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
2025-01-16 15:39:04 result = await func(*args, **kwargs)
2025-01-16 15:39:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 15:39:04 File "/src/pipelines/generation/sql_answer.py", line 66, in generate_answer
2025-01-16 15:39:04 return await generator(prompt=prompt.get("prompt"), query_id=query_id)
2025-01-16 15:39:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 15:39:04 File "/src/providers/llm/azure_openai.py", line 66, in call
2025-01-16 15:39:04 return await self.run(*args, **kwargs)
2025-01-16 15:39:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 15:39:04 File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
2025-01-16 15:39:04 ret = await target(*args, **kwargs)
2025-01-16 15:39:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 15:39:04 File "/src/providers/llm/azure_openai.py", line 110, in run
2025-01-16 15:39:04 for chunk in completion:
2025-01-16 15:39:04 TypeError: 'LangfuseResponseGeneratorAsync' object is not iterable
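The `TypeError: 'LangfuseResponseGeneratorAsync' object is not iterable` is what Python raises when a plain `for` loop is used on an async generator, which only supports `async for`. A minimal sketch of the failure mode (hypothetical names, not the actual Langfuse wrapper):

```python
import asyncio


async def stream_chunks():
    # Async generator, analogous to a streamed LLM completion.
    yield "SELECT "
    yield "1;"


async def main():
    completion = stream_chunks()

    # Sync iteration fails: async generators are not (synchronously) iterable.
    try:
        for chunk in completion:
            pass
    except TypeError as exc:
        print(exc)  # 'async_generator' object is not iterable

    # Correct: iterate with `async for`.
    return [chunk async for chunk in stream_chunks()]


print(asyncio.run(main()))  # ['SELECT ', '1;']
```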

@cyyeh
Member

cyyeh commented Jan 16, 2025

@wisonzhu could u try to use litellm_llm as provider? azure_openai will be deprecated soon
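If switching, a litellm-style model entry for an Azure deployment would look roughly like the following (the placeholders and the API key variable name are assumptions; litellm routes on the `azure/` model prefix):

```yaml
type: llm
provider: litellm_llm
timeout: 120
models:
  - model: azure/<your-deployment-name>  # "azure/" prefix tells litellm which backend to call
    api_base: https://<your-resource>.openai.azure.com  # placeholder
    api_key_name: LLM_AZURE_OPENAI_API_KEY  # assumed env var name
    kwargs:
      temperature: 0
```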

@cyyeh
Member

cyyeh commented Jan 16, 2025

Since Wren AI is running in containers, you may need to add Ollama to the same Docker network as WrenAI's and change the embedding model URL accordingly.
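A minimal compose sketch of that suggestion (the service and network names are assumptions; adjust them to match your WrenAI compose file):

```yaml
# docker-compose.override.yaml (sketch): put Ollama on WrenAI's network so
# wren-ai-service can reach it by service name instead of a LAN IP.
services:
  ollama:
    image: ollama/ollama
    networks:
      - wren
networks:
  wren:
    external: true  # must match the network name created by WrenAI's compose file
```

The embedder `url` in config.yaml would then become `http://ollama:11434`.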

@wisonzhu

16:59:04 I0116 08:59:04.712 12 wren-ai-service:263] Question Recommendation pipeline is running...
2025-01-16 16:59:04
2025-01-16 16:59:04 Provider List: https://docs.litellm.ai/docs/providers
2025-01-16 16:59:04
2025-01-16 16:59:04
2025-01-16 16:59:04 ********************************************************************************
2025-01-16 16:59:04 > generate [src.pipelines.generation.question_recommendation.generate()] encountered an error<
2025-01-16 16:59:04 > Node inputs:
2025-01-16 16:59:04 {'generator': '<function LitellmLLMProvider.get_generator.<locals...',
2025-01-16 16:59:04 'prompt': "<Task finished name='Task-106' coro=<AsyncGraphAda..."}
2025-01-16 16:59:04 ********************************************************************************
2025-01-16 16:59:04 Traceback (most recent call last):
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
2025-01-16 16:59:04 await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
2025-01-16 16:59:04 self._handle_exception(observation, e)
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 517, in _handle_exception
2025-01-16 16:59:04 raise e
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
2025-01-16 16:59:04 result = await func(*args, **kwargs)
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/src/pipelines/generation/question_recommendation.py", line 48, in generate
2025-01-16 16:59:04 return await generator(prompt=prompt.get("prompt"))
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/src/providers/llm/litellm.py", line 71, in _run
2025-01-16 16:59:04 completion: Union[ModelResponse] = await acompletion(
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1234, in wrapper_async
2025-01-16 16:59:04 raise e
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1085, in wrapper_async
2025-01-16 16:59:04 result = await original_function(*args, **kwargs)
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 416, in acompletion
2025-01-16 16:59:04 _, custom_llm_provider, _, _ = get_llm_provider(
2025-01-16 16:59:04 ^^^^^^^^^^^^^^^^^
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 351, in get_llm_provider
2025-01-16 16:59:04 raise e
2025-01-16 16:59:04 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 328, in get_llm_provider
2025-01-16 16:59:04 raise litellm.exceptions.BadRequestError( # type: ignore
2025-01-16 16:59:04 litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=devops-gpt-4o-50K-1st
2025-01-16 16:59:04 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
2025-01-16 16:59:04 -------------------------------------------------------------------
2025-01-16 16:59:04 Oh no an error! Need help with Hamilton?
2025-01-16 16:59:04 Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
2025-01-16 16:59:04 -------------------------------------------------------------------
2025-01-16 16:59:04
2025-01-16 16:59:04 E0116 08:59:04.770 12 wren-ai-service:60] An error occurred during question recommendation generation: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=devops-gpt-4o-50K-1st
2025-01-16 16:59:04 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
2025-01-16 16:59:05 INFO: 172.25.0.6:60492 - "GET /v1/question-recommendations/b3c12663-cdde-4efc-bd34-c7c2b1674f79 HTTP/1.1" 200 OK
2025-01-16 16:59:06 INFO: 172.25.0.6:60502 - "POST /v1/sql-answers HTTP/1.1" 200 OK
2025-01-16 16:59:06 I0116 08:59:06.093 12 wren-ai-service:73] Preprocess SQL Data pipeline is running...
2025-01-16 16:59:06 I0116 08:59:06.124 12 wren-ai-service:141] Sql_Answer Generation pipeline is running...
2025-01-16 16:59:06
2025-01-16 16:59:06 Provider List: https://docs.litellm.ai/docs/providers
2025-01-16 16:59:06
2025-01-16 16:59:06
2025-01-16 16:59:06 ********************************************************************************
2025-01-16 16:59:06 > generate_answer [src.pipelines.generation.sql_answer.generate_answer()] encountered an error<
2025-01-16 16:59:06 > Node inputs:
2025-01-16 16:59:06 {'generator': '<function LitellmLLMProvider.get_generator.<locals...',
2025-01-16 16:59:06 'prompt': "<Task finished name='Task-123' coro=<AsyncGraphAda...",
2025-01-16 16:59:06 'query_id': '1a7637a4-5360-47f4-84c7-64612d481968'}
2025-01-16 16:59:06 ********************************************************************************
2025-01-16 16:59:06 Traceback (most recent call last):
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
2025-01-16 16:59:06 await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
2025-01-16 16:59:06 self._handle_exception(observation, e)
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 517, in _handle_exception
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
2025-01-16 16:59:06 result = await func(*args, **kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/src/pipelines/generation/sql_answer.py", line 66, in generate_answer
2025-01-16 16:59:06 return await generator(prompt=prompt.get("prompt"), query_id=query_id)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/src/providers/llm/litellm.py", line 71, in _run
2025-01-16 16:59:06 completion: Union[ModelResponse] = await acompletion(
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1234, in wrapper_async
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1085, in wrapper_async
2025-01-16 16:59:06 result = await original_function(*args, **kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 416, in acompletion
2025-01-16 16:59:06 _, custom_llm_provider, _, _ = get_llm_provider(
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 351, in get_llm_provider
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 328, in get_llm_provider
2025-01-16 16:59:06 raise litellm.exceptions.BadRequestError( # type: ignore
2025-01-16 16:59:06 litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=devops-gpt-4o-50K-1st
2025-01-16 16:59:06 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
2025-01-16 16:59:06 -------------------------------------------------------------------
2025-01-16 16:59:06 Oh no an error! Need help with Hamilton?
2025-01-16 16:59:06 Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
2025-01-16 16:59:06 -------------------------------------------------------------------
2025-01-16 16:59:06
2025-01-16 16:59:06 Task exception was never retrieved
2025-01-16 16:59:06 future: <Task finished name='Task-122' coro=<SQLAnswer.run() done, defined at /app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py:204> exception=litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=devops-gpt-4o-50K-1st
2025-01-16 16:59:06 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers>
2025-01-16 16:59:06 Traceback (most recent call last):
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
2025-01-16 16:59:06 self._handle_exception(observation, e)
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 517, in _handle_exception
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
2025-01-16 16:59:06 result = await func(*args, **kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/src/pipelines/generation/sql_answer.py", line 142, in run
2025-01-16 16:59:06 return await self._pipe.execute(
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 375, in execute
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 366, in execute
2025-01-16 16:59:06 outputs = await self.raw_execute(_final_vars, overrides, display_graph, inputs=inputs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 326, in raw_execute
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 321, in raw_execute
2025-01-16 16:59:06 results = await await_dict_of_tasks(task_dict)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
2025-01-16 16:59:06 coroutines_gathered = await asyncio.gather(*coroutines)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
2025-01-16 16:59:06 return await val
2025-01-16 16:59:06 ^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
2025-01-16 16:59:06 await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
2025-01-16 16:59:06 self._handle_exception(observation, e)
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 517, in _handle_exception
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
2025-01-16 16:59:06 result = await func(*args, **kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/src/pipelines/generation/sql_answer.py", line 66, in generate_answer
2025-01-16 16:59:06 return await generator(prompt=prompt.get("prompt"), query_id=query_id)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/src/providers/llm/litellm.py", line 71, in _run
2025-01-16 16:59:06 completion: Union[ModelResponse] = await acompletion(
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1234, in wrapper_async
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1085, in wrapper_async
2025-01-16 16:59:06 result = await original_function(*args, **kwargs)
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 416, in acompletion
2025-01-16 16:59:06 _, custom_llm_provider, _, _ = get_llm_provider(
2025-01-16 16:59:06 ^^^^^^^^^^^^^^^^^
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 351, in get_llm_provider
2025-01-16 16:59:06 raise e
2025-01-16 16:59:06 File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 328, in get_llm_provider
2025-01-16 16:59:06 raise litellm.exceptions.BadRequestError( # type: ignore
2025-01-16 16:59:06 litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=devops-gpt-4o-50K-1st
2025-01-16 16:59:06 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
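(For reference: this BadRequestError means the model name handed to LiteLLM has no provider prefix, so LiteLLM cannot tell which backend to call. A hedged sketch of the likely config fix — the api_base value here is a placeholder, not taken from this setup:)

```yaml
type: llm
provider: litellm_llm
models:
  - model: openai/devops-gpt-4o-50K-1st  # <litellm_provider>/<model_name>
    api_base: http://your-endpoint:8000/v1  # placeholder OpenAI-compatible URL
    kwargs:
      n: 1
      temperature: 0
```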

@nemesisq3a
Author

Since Wren AI is running in containers, you may need to add Ollama to the same Docker network as WrenAI's, and change the embedding model URL accordingly

Ollama is running on a different server on the same LAN (a dedicated GPU server). I configured the URL http://192.168.200.111:11434 (the Ollama server's LAN IP) in config.yaml and in the .env file, following the guide I found here.
The Docker network of the Wren AI containers is in bridged mode, so they can communicate with the Ollama API server.
Even the log seems to indicate that the Ollama instance is reachable from Wren AI:

wrenai-wren-ai-service-1  | I0115 21:43:40.177 8 wren-ai-service:93] Getting provider: ollama_embedder from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:118] Ollama model qwen2.5-coder:7b already exists
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:180] Using Ollama Embedding Model: qwen2.5-coder:7b
wrenai-wren-ai-service-1  | I0115 21:43:40.195 8 wren-ai-service:181] Using Ollama URL: http://192.168.200.111:11434

From my perspective, the problem seems to be between wren-ai-service and qdrant itself.
Can you suggest anything specific I should verify in the configuration files?
Thanks

@cyyeh
Member

cyyeh commented Jan 16, 2025

@nemesisq3a could you join Discord? Let's schedule a time so you can share your screen with me and we can look at the issue together.

https://discord.gg/5DvshJqG8Z

@nemesisq3a
Author

@nemesisq3a could you join Discord? Let's schedule a time so you can share your screen with me and we can look at the issue together.

https://discord.gg/5DvshJqG8Z

All the containers are running on a corporate cloud, so I can't share this with you. Let me see if I can replicate this on my home desktop PC; I have a Pop!_OS installation and an RTX 3080, so everything should work. When I'm ready I'll join you on Discord. Thanks for now.

@thiner

thiner commented Jan 19, 2025

I am getting the exact same error. It's raised from:

wrenai-wren-ai-service-1  |   File "/src/pipelines/indexing/db_schema.py", line 346, in __init__
wrenai-wren-ai-service-1  |     "embedder": embedder_provider.get_document_embedder(),

My WrenAI info is as follows:

version: 0.13.2
embedder provider: ollama_embedder

Configured env vars:

CONFIG_PATH
EMBEDDER_OLLAMA_URL
EMBEDDING_MODEL

I listed only the relevant vars above. The AI service's initialization logs are below:

 I0119 06:13:17.609 7 wren-ai-service:93] Getting provider: ollama_embedder from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}

 I0119 06:13:17.699 7 wren-ai-service:109] Pulling Ollama model bge-m3

 I0119 06:13:19.478 7 wren-ai-service:116] Pulling Ollama model bge-m3: 100%

 I0119 06:13:19.502 7 wren-ai-service:180] Using Ollama Embedding Model: bge-m3

 I0119 06:13:19.502 7 wren-ai-service:181] Using Ollama URL: http://ollama:11434

All of these configurations and settings are verified and working.

@kyyz147

kyyz147 commented Jan 20, 2025

How can this problem be solved?
2025-01-20 14:23:03 I0120 06:23:03.457 12 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
2025-01-20 14:23:03 I0120 06:23:03.565 12 wren-ai-service:135] Loading Helpers for DB Schema Indexing Pipeline: src.pipelines.indexing.utils
2025-01-20 14:23:03 I0120 06:23:03.567 12 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
2025-01-20 14:23:03 I0120 06:23:03.643 12 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
2025-01-20 14:23:03 I0120 06:23:03.752 12 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
2025-01-20 14:23:03 I0120 06:23:03.986 12 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 768
2025-01-20 14:23:04 ERROR:    Traceback (most recent call last):
2025-01-20 14:23:04   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
2025-01-20 14:23:04     async with self.lifespan_context(app) as maybe_state:
2025-01-20 14:23:04   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2025-01-20 14:23:04     return await anext(self.gen)
2025-01-20 14:23:04            ^^^^^^^^^^^^^^^^^^^^^
2025-01-20 14:23:04   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
2025-01-20 14:23:04     async with original_context(app) as maybe_original_state:
2025-01-20 14:23:04   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2025-01-20 14:23:04     return await anext(self.gen)
2025-01-20 14:23:04            ^^^^^^^^^^^^^^^^^^^^^
2025-01-20 14:23:04   File "/src/__main__.py", line 30, in lifespan
2025-01-20 14:23:04     app.state.service_container = create_service_container(pipe_components, settings)
2025-01-20 14:23:04                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-20 14:23:04   File "/src/globals.py", line 87, in create_service_container
2025-01-20 14:23:04     "data_assistance": generation.DataAssistance(
2025-01-20 14:23:04                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-20 14:23:04   File "/src/pipelines/generation/data_assistance.py", line 93, in __init__
2025-01-20 14:23:04     "generator": llm_provider.get_generator(
2025-01-20 14:23:04                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-01-20 14:23:04 AttributeError: 'NoneType' object has no attribute 'get_generator'
2025-01-20 14:23:04 ERROR:    Application startup failed. Exiting.

@cyyeh
Member

cyyeh commented Jan 20, 2025

@nemesisq3a @thiner @wisonzhu @kyyz147

Could you try to follow the config.yaml examples here and adapt to your usecases first? feel free to reach out to me if there are further issues. Thank you!

https://github.com/Canner/WrenAI/blob/chore/ai-service/add-llm-configs/wren-ai-service/docs/config_examples/config.ollama.yaml

@nemesisq3a
Author

nemesisq3a commented Jan 20, 2025

OK, I replicated the configuration on my desktop PC and got the same get_document_embedder error.
This evening, after coming back from the office, I'll try the linked example.
Just one question first. Reading the example, I can see:

- api_base: http://host.docker.internal:11434/v1  # change this to your ollama host, api_base should be <ollama_url>/v1
  model: openai/phi4:14b  # openai/<ollama_model_name>

In this case, do I have to keep the openai prefix even though it's not an OpenAI model?
Should I write something like

model: openai/qwen2.5-coder:7b

right?

@cyyeh
Member

cyyeh commented Jan 20, 2025

@nemesisq3a yes, as I said above, as long as you are using ollama LLMs, please follow the format: openai/<ollama_model_name>.

@cyyeh
Member

cyyeh commented Jan 20, 2025

@nemesisq3a by the way, the reason is that I found the performance of ollama/<ollama_model_name> and ollama_chat/<ollama_model_name> to be worse. These names all follow LiteLLM's model naming rules: openai/<ollama_model_name> means you want to access the model through the OpenAI API-compatible endpoint.
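The naming rule above can be illustrated with a small sketch. This is not LiteLLM's actual implementation, just a hedged illustration of the provider-prefix convention and the error you get without it:

```python
# Illustrative sketch of LiteLLM's "<provider>/<model>" naming convention
# (not LiteLLM's real code).
def split_model_name(model: str) -> tuple[str, str]:
    """Split 'openai/qwen2.5-coder:7b' into ('openai', 'qwen2.5-coder:7b')."""
    if "/" not in model:
        # Mirrors the BadRequestError seen in this thread: no provider prefix.
        raise ValueError(f"LLM Provider NOT provided. You passed model={model}")
    provider, name = model.split("/", 1)
    return provider, name

print(split_model_name("openai/qwen2.5-coder:7b"))
# -> ('openai', 'qwen2.5-coder:7b')
# An unprefixed name such as "devops-gpt-4o-50K-1st" would raise ValueError.
```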

@nemesisq3a
Author

As I wrote on Discord, I tried using the provided example.
Here's the config:

type: llm
provider: litellm_llm
timeout: 120
models:
- api_base: http://ollama:11434/v1  # change this to your ollama host, api_base should be <ollama_url>/v1
  model: openai/qwen2.5-coder:7b  # openai/<ollama_model_name>
  kwargs:
    n: 1
    temperature: 0

---
type: embedder
provider: ollama_embedder
models:
  - model: qwen2.5-coder:7b
    dimension: 7168
url: http://ollama:11434
timeout: 120

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 7168  # put your embedding model dimension here
timeout: 120
recreate_index: false

---
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: historical_question_indexing
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: table_description_indexing
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.openai/qwen2.5-coder:7b
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: sql_answer
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: sql_regeneration
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: relationship_recommendation
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.openai/qwen2.5-coder:7b
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.openai/qwen2.5-coder:7b
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: chart_adjustment
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: intent_classification
    llm: litellm_llm.openai/qwen2.5-coder:7b
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: ollama_embedder.openai/qwen2.5-coder:7b 
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: ollama_embedder.openai/qwen2.5-coder:7b
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: preprocess_sql_data
    llm: litellm_llm.openai/qwen2.5-coder:7b
  - name: sql_executor
    engine: wren_ui
  - name: sql_question_generation
    llm: litellm_llm.openai/qwen2.5-coder:7b

---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true

Still the same error as before:

wren-ai-service-1 | AttributeError: 'NoneType' object has no attribute 'get_document_embedder'
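One consistent explanation, assuming pipeline components are resolved by the exact <provider>.<model> string from the config: the embedder model is declared as qwen2.5-coder:7b, but the pipes reference ollama_embedder.openai/qwen2.5-coder:7b, so the lookup finds nothing and the provider comes back as None. A minimal sketch of that failure mode (the dictionary keying is an assumption, not WrenAI's actual code):

```python
# Assumed lookup pattern: components keyed by "<provider>.<model>".
declared = {"ollama_embedder.qwen2.5-coder:7b": object()}  # from the embedder block

requested = "ollama_embedder.openai/qwen2.5-coder:7b"  # from the pipes section
embedder_provider = declared.get(requested)

print(embedder_provider)  # None; calling .get_document_embedder() on it
                          # would raise exactly this AttributeError
```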

@thiner

thiner commented Jan 27, 2025

I am still getting the same error. My ai-service configuration is as below:

type: embedder
provider: openai_embedder
timeout: 120
models:
  - model: openai/bce-embedding-base
    dimension: 768
url: http://bce-embedding.localai:8080/v1

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_llm.openai/bce-embedding-base

The error message:

 ERROR:    Traceback (most recent call last):

   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan

     async with self.lifespan_context(app) as maybe_state:

   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
     async with original_context(app) as maybe_original_state:
   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/src/__main__.py", line 30, in lifespan
     app.state.service_container = create_service_container(pipe_components, settings)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/src/globals.py", line 69, in create_service_container
     "db_schema": indexing.DBSchema(
                  ^^^^^^^^^^^^^^^^^^
   File "/src/pipelines/indexing/db_schema.py", line 346, in __init__
     "embedder": embedder_provider.get_document_embedder(),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 AttributeError: 'NoneType' object has no attribute 'get_document_embedder'
 ERROR:    Application startup failed. Exiting.

@cyyeh
Member

cyyeh commented Jan 27, 2025


I am still getting the same error. My ai-service configuration as below:

type: embedder
provider: openai_embedder
timeout: 120
models:
  - model: openai/bce-embedding-base
    dimension: 768
url: http://bce-embedding.localai:8080/v1

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_llm.openai/bce-embedding-base

The error message:

 ERROR:    Traceback (most recent call last):

   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan

     async with self.lifespan_context(app) as maybe_state:

   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
     async with original_context(app) as maybe_original_state:
   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/src/__main__.py", line 30, in lifespan
     app.state.service_container = create_service_container(pipe_components, settings)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/src/globals.py", line 69, in create_service_container
     "db_schema": indexing.DBSchema(
                  ^^^^^^^^^^^^^^^^^^
   File "/src/pipelines/indexing/db_schema.py", line 346, in __init__
     "embedder": embedder_provider.get_document_embedder(),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 AttributeError: 'NoneType' object has no attribute 'get_document_embedder'
 ERROR:    Application startup failed. Exiting.

Please use api_base for openai_embedder instead of url. Thanks!
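Applied to the config quoted above, that would look like the following sketch (only the url key renamed; everything else unchanged):

```yaml
type: embedder
provider: openai_embedder
timeout: 120
models:
  - model: openai/bce-embedding-base
    dimension: 768
api_base: http://bce-embedding.localai:8080/v1  # was `url`
```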

@thiner

thiner commented Jan 27, 2025

please use api_base for openai_embedder instead of url, thanks

The error remains the same. Here is the whole log:

 Waiting for qdrant to start...

 qdrant has started.

 Waiting for wren-ai-service to start...

 INFO:     Started server process [7]

 INFO:     Waiting for application startup.

 I0127 05:33:37.553 7 wren-ai-service:42] Imported Provider: src.providers.document_store

 I0127 05:33:38.204 7 wren-ai-service:66] Registering provider: openai_embedder

 I0127 05:33:38.204 7 wren-ai-service:66] Registering provider: qdrant

 I0127 05:33:38.204 7 wren-ai-service:42] Imported Provider: src.providers.document_store.qdrant

 I0127 05:33:38.205 7 wren-ai-service:42] Imported Provider: src.providers.embedder

 I0127 05:33:38.206 7 wren-ai-service:66] Registering provider: azure_openai_embedder

 I0127 05:33:38.207 7 wren-ai-service:42] Imported Provider: src.providers.embedder.azure_openai

 I0127 05:33:38.211 7 wren-ai-service:66] Registering provider: ollama_embedder

 I0127 05:33:38.211 7 wren-ai-service:42] Imported Provider: src.providers.embedder.ollama

 I0127 05:33:38.211 7 wren-ai-service:42] Imported Provider: src.providers.embedder.openai

 I0127 05:33:38.212 7 wren-ai-service:42] Imported Provider: src.providers.engine

 I0127 05:33:38.213 7 wren-ai-service:66] Registering provider: wren_ui

 I0127 05:33:38.214 7 wren-ai-service:66] Registering provider: wren_ibis

 I0127 05:33:38.214 7 wren-ai-service:66] Registering provider: wren_engine


@cyyeh (Member) commented Jan 27, 2025

Please use api_base for openai_embedder instead of url, thanks.
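
For reference, the suggested embedder block with api_base would look roughly like this (a sketch mirroring the config later in this thread; the model name and host are taken from the logs above, not verified):

```yaml
type: embedder
provider: openai_embedder
timeout: 120
models:
  - model: openai/bce-embedding-base
    dimension: 768
    api_base: http://bce-embedding.localai:8080/v1  # api_base, not url
```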

The error remains the same. Here is the whole log:

 Waiting for qdrant to start...
 qdrant has started.
 Waiting for wren-ai-service to start...
 INFO:     Started server process [7]
 INFO:     Waiting for application startup.
 I0127 05:33:37.553 7 wren-ai-service:42] Imported Provider: src.providers.document_store
 I0127 05:33:38.204 7 wren-ai-service:66] Registering provider: openai_embedder
 I0127 05:33:38.204 7 wren-ai-service:66] Registering provider: qdrant
 I0127 05:33:38.204 7 wren-ai-service:42] Imported Provider: src.providers.document_store.qdrant
 I0127 05:33:38.205 7 wren-ai-service:42] Imported Provider: src.providers.embedder
 I0127 05:33:38.206 7 wren-ai-service:66] Registering provider: azure_openai_embedder
 I0127 05:33:38.207 7 wren-ai-service:42] Imported Provider: src.providers.embedder.azure_openai
 I0127 05:33:38.211 7 wren-ai-service:66] Registering provider: ollama_embedder
 I0127 05:33:38.211 7 wren-ai-service:42] Imported Provider: src.providers.embedder.ollama
 I0127 05:33:38.211 7 wren-ai-service:42] Imported Provider: src.providers.embedder.openai
 I0127 05:33:38.212 7 wren-ai-service:42] Imported Provider: src.providers.engine
 I0127 05:33:38.213 7 wren-ai-service:66] Registering provider: wren_ui
 I0127 05:33:38.214 7 wren-ai-service:66] Registering provider: wren_ibis
 I0127 05:33:38.214 7 wren-ai-service:66] Registering provider: wren_engine
 I0127 05:33:38.214 7 wren-ai-service:42] Imported Provider: src.providers.engine.wren
 I0127 05:33:38.214 7 wren-ai-service:42] Imported Provider: src.providers.llm
 I0127 05:33:38.239 7 wren-ai-service:66] Registering provider: azure_openai_llm
 I0127 05:33:38.239 7 wren-ai-service:42] Imported Provider: src.providers.llm.azure_openai
 /app/.venv/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
 * 'fields' has been removed
   warnings.warn(message, UserWarning)
 I0127 05:33:40.274 7 wren-ai-service:66] Registering provider: litellm_llm
 I0127 05:33:40.274 7 wren-ai-service:42] Imported Provider: src.providers.llm.litellm
 I0127 05:33:40.279 7 wren-ai-service:66] Registering provider: ollama_llm
 I0127 05:33:40.279 7 wren-ai-service:42] Imported Provider: src.providers.llm.ollama
 I0127 05:33:40.394 7 wren-ai-service:66] Registering provider: openai_llm
 I0127 05:33:40.394 7 wren-ai-service:42] Imported Provider: src.providers.llm.openai
 I0127 05:33:40.394 7 wren-ai-service:42] Imported Provider: src.providers.loader
 I0127 05:33:40.394 7 wren-ai-service:18] initializing provider: openai_embedder
 I0127 05:33:40.394 7 wren-ai-service:93] Getting provider: openai_embedder from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
 I0127 05:33:40.394 7 wren-ai-service:203] Initializing OpenAIEmbedder provider with API base: http://bce-embedding.localai:8080/v1
 I0127 05:33:40.394 7 wren-ai-service:209] Using OpenAI API-compatible Embedding Model: openai/bce-embedding-base
 I0127 05:33:40.394 7 wren-ai-service:18] initializing provider: litellm_llm
 I0127 05:33:40.394 7 wren-ai-service:93] Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
 I0127 05:33:40.394 7 wren-ai-service:18] initializing provider: litellm_llm
 I0127 05:33:40.394 7 wren-ai-service:93] Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
 I0127 05:33:40.395 7 wren-ai-service:18] initializing provider: qdrant
 I0127 05:33:40.395 7 wren-ai-service:93] Getting provider: qdrant from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
 I0127 05:33:40.395 7 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 3072
 I0127 05:33:40.559 7 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 3072
 I0127 05:33:40.710 7 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 3072
 I0127 05:33:40.865 7 wren-ai-service:18] initializing provider: wren_ui
 I0127 05:33:40.866 7 wren-ai-service:93] Getting provider: wren_ui from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>}
 I0127 05:33:40.866 7 wren-ai-service:24] Using Engine: wren_ui
 I0127 05:33:40.869 7 wren-ai-service:370] Using Qdrant Document Store with Embedding Model Dimension: 3072
 ERROR:    Traceback (most recent call last):
   File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
     async with self.lifespan_context(app) as maybe_state:
   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
     async with original_context(app) as maybe_original_state:
   File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
     return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
   File "/src/__main__.py", line 30, in lifespan
     app.state.service_container = create_service_container(pipe_components, settings)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/src/globals.py", line 69, in create_service_container
     "db_schema": indexing.DBSchema(
                  ^^^^^^^^^^^^^^^^^^
   File "/src/pipelines/indexing/db_schema.py", line 346, in __init__
     "embedder": embedder_provider.get_document_embedder(),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 AttributeError: 'NoneType' object has no attribute 'get_document_embedder'

 ERROR:    Application startup failed. Exiting.
 Timeout: wren-ai-service did not start within 60 seconds
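
The shape of the final error is worth noting: a minimal sketch (not WrenAI's actual code) of how a pipeline referencing an embedder id that is not in the provider registry ends in exactly this AttributeError:

```python
# Registered embedder providers, keyed by provider name (illustrative stand-in).
providers = {"openai_embedder": object()}

# Id taken from a pipeline's "embedder:" entry; it names an LLM provider,
# so the embedder lookup finds nothing and dict.get returns None.
embedder_provider = providers.get("litellm_llm")

try:
    embedder_provider.get_document_embedder()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'get_document_embedder'
```

This suggests the traceback is a symptom of a pipeline/provider name mismatch in config.yaml rather than a Qdrant problem.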

Please share your config.yaml; there must be some config issue.

@thiner commented Jan 27, 2025

@cyyeh The config.yaml file is as below:

type: llm
provider: litellm_llm
timeout: 120
models:
- model: openai/qwen2.5-72b-instruct
  api_base: https://llm.localai/v1
  api_key_name: LLM_OPENAI_API_KEY
  kwargs:
    temperature: 0
    n: 1
    # for better consistency of llm response, refer: https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
    seed: 0
    max_tokens: 32768
    response_format:
      type: json_object
- model: openai/qwen2.5-coder-instruct
  api_base: https://llm.localai/v1
  api_key_name: LLM_OPENAI_API_KEY
  kwargs:
    temperature: 0
    n: 1
    # for better consistency of llm response, refer: https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
    seed: 0
    max_tokens: 16384
    response_format:
      type: json_object

---
type: embedder
provider: openai_embedder
timeout: 120
models:
  - model: openai/bce-embedding-base
    dimension: 768
    api_base: http://bce-embedding.localai:8080/v1


---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 3072
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.openai/qwen2.5-72b-instruct
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: sql_answer
    llm: litellm_llm.openai/qwen2.5-72b-instruct
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: sql_regeneration
    llm: litellm_llm.openai/qwen2.5-coder-instruct
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: relationship_recommendation
    llm: litellm_llm.openai/qwen2.5-72b-instruct
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: intent_classification
    llm: litellm_llm.openai/qwen2.5-72b-instruct
    embedder: litellm_llm.openai/bce-embedding-base
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: sql_pairs_preparation
    document_store: qdrant
    embedder: litellm_llm.openai/bce-embedding-base
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: litellm_llm.openai/bce-embedding-base
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_llm.openai/bce-embedding-base
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: preprocess_sql_data
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.openai/qwen2.5-72b-instruct
  - name: chart_adjustment
    llm: litellm_llm.openai/qwen2.5-72b-instruct
---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: http://langfuse.dify:3000
  langfuse_enable: true
  logging_level: DEBUG
  development: false
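
Two details in this config plausibly explain the NoneType traceback above (a hypothesis, assuming pipeline component ids follow the provider.model pattern): the pipes reference litellm_llm.openai/bce-embedding-base as an embedder, but the only embedder provider defined is openai_embedder, so the embedder lookup resolves to nothing; and embedding_model_dim: 3072 in the document_store section disagrees with the embedder model's dimension: 768, which is why the log reports "Embedding Model Dimension: 3072". A corrected sketch of the affected sections:

```yaml
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768  # match the embedder model's dimension, not 3072
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: openai_embedder.openai/bce-embedding-base  # embedder provider, not litellm_llm
    document_store: qdrant
  # ...use the same embedder id in every pipe that currently references
  # litellm_llm.openai/bce-embedding-base
```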


5 participants