
ollama/llama3.2 maximum sequence length #882

Closed

AceCodePt opened this issue Jan 12, 2025 · 2 comments

@AceCodePt

graph_config = {
    "llm": {
        "model": "ollama/llama3.2",
        "temperature": 0,
        "model_tokens": 10000,  # context window the graph should assume for this model
        "format": "json",  # Ollama needs the format to be specified explicitly
        # "base_url": "http://localhost:11434",  # set the Ollama URL explicitly if it is not the default
    },
    "verbose": True,
    "headless": False
}

Running the graph produces the following warning:
Token indices sequence length is longer than the specified maximum sequence length for this model (1385 > 1024). Running this sequence through the model will result in indexing errors

smart_scraper_graph = SmartScraperGraph(
    prompt="List all links",
    source="https://www.google.com/search?q=why+it+doesn%27t+work",
    schema=Properties,
    config=graph_config
)

Originally posted by @AceCodePt in #875 (comment)
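
For reference, a self-contained version of this reproduction would look roughly like the sketch below. The Properties schema is not defined in the issue, so a hypothetical Pydantic placeholder is used here; the import path and run() call follow the ScrapeGraphAI documentation.

# Self-contained sketch of the reproduction.
# Note: Properties is a hypothetical stand-in; the original schema
# definition was not included in the issue.
from pydantic import BaseModel
from scrapegraphai.graphs import SmartScraperGraph


class Properties(BaseModel):
    # Hypothetical schema: list of links extracted from the page.
    links: list[str]


graph_config = {
    "llm": {
        "model": "ollama/llama3.2",
        "temperature": 0,
        "model_tokens": 10000,
        "format": "json",
    },
    "verbose": True,
    "headless": False,
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List all links",
    source="https://www.google.com/search?q=why+it+doesn%27t+work",
    schema=Properties,
    config=graph_config,
)

result = smart_scraper_graph.run()
print(result)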


dosubot bot commented Jan 12, 2025

Hey there, @AceCodePt! 👋 I'm here to help with any bugs, questions, or contributions you may have while waiting for a human maintainer. Let's tackle this together!

There is a similar closed issue regarding the sequence length problem with the ollama/llama3.2 model. The issue was resolved in a pull request that made the model_tokens dictionary key accessible for all models. This fix was included in version 1.27.0-beta.13 and later confirmed to be resolved in version 1.28.0. Updating to version 1.28.0 or later should resolve the issue [1][2].
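
For background on the warning text itself: it is the generic Hugging Face transformers tokenizer message, emitted when an encoded input exceeds the tokenizer's model_max_length (1024 here), so it presumably originates in the token-counting step rather than in Ollama itself, which is why making model_tokens configurable resolves it. A standalone sketch that triggers the same message, assuming the transformers package with its GPT-2 tokenizer (whose default model_max_length is 1024), looks like this:

# Standalone sketch (assumes the transformers package is installed).
# The gpt2 tokenizer's model_max_length defaults to 1024, so encoding a
# longer text reproduces the same warning quoted in this issue.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
token_ids = tokenizer.encode("link " * 2000)  # well over 1024 tokens
# Logs: "Token indices sequence length is longer than the specified maximum
# sequence length for this model (... > 1024). Running this sequence through
# the model will result in indexing errors"
print(len(token_ids))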


Collaborator

PeriniM commented Jan 12, 2025

Hey @AceCodePt this issue has just been fixed in v1.36.0, try it out! #856

PeriniM closed this as completed Jan 12, 2025