[Question] Does groq not work with tool call? #1501

Open
2 of 3 tasks
JINO-ROHIT opened this issue Jan 24, 2025 · 6 comments
Labels
question Further information is requested

Comments

@JINO-ROHIT
Collaborator

Required prerequisites

Questions

I'm trying to get Groq working with a tool call, but it returns an empty response.

Sample snippet (imports added for completeness)

from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.toolkits import MathToolkit, SearchToolkit
from camel.types import ModelPlatformType, ModelType

tools_list = [
    *MathToolkit().get_tools(),
    *SearchToolkit().get_tools(),
]

model = ModelFactory.create(
    model_platform=ModelPlatformType.GROQ,
    model_type=ModelType.GROQ_MIXTRAL_8_7B,
)

# Set message for the assistant
assistant_sys_msg = """You are a helpful assistant to do search task."""


# Set the agent
agent = ChatAgent(
    assistant_sys_msg,
    model=model,
    tools=tools_list
)

# Set prompt for the search task
prompt_search = ("""When was University of Oxford set up""")
# Set prompt for the calculation task
prompt_calculate = ("""Assume now is 2024 in the Gregorian calendar, University of Oxford was set up in 1096, estimate the current age of University of Oxford""")

# Convert the two prompt as message that can be accepted by the Agent
user_msg_search = BaseMessage.make_user_message(role_name="User", content=prompt_search)
user_msg_calculate = BaseMessage.make_user_message(role_name="User", content=prompt_calculate)

# Get response
assistant_response_search = agent.step(user_msg_search)
assistant_response_calculate = agent.step(user_msg_calculate)

Response generated:

msgs=[BaseMessage(role_name='Assistant', role_type=<RoleType.ASSISTANT: 'assistant'>, meta_dict={}, content='', video_bytes=None, image_list=None, image_detail='auto', video_detail='low', parsed=None)] terminated=False info={'id': 'chatcmpl-33ecaaeb-b6cd-4a8c-813f-e83f71ff06b6', 'usage': {'completion_tokens': 217, 'prompt_tokens': 11391, 'total_tokens': 11608, 'completion_tokens_details': None, 'prompt_tokens_details': None, 'queue_time': 0.03222505100000006, 'prompt_time': 0.537353879, 'completion_time': 0.337716099, 'total_time': 0.875069978}, 'termination_reasons': ['tool_calls'], 'num_tokens': 3819, 'tool_calls': [], 'external_tool_request': None}
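For reference, a minimal plain-Python sketch (the helper name is hypothetical, not part of camel) of how this failure mode can be spotted in the `info` dict returned alongside the response: the combination of `termination_reasons == ['tool_calls']` with an empty `tool_calls` list is the signature of a tool call that was requested by the model but never executed.

```python
def diagnose_tool_call(info: dict) -> str:
    """Classify a response info dict (hypothetical helper, for illustration).

    Returns a short label describing what happened to the tool call.
    """
    reasons = info.get("termination_reasons", [])
    calls = info.get("tool_calls", [])
    if "tool_calls" in reasons and not calls:
        # The model stopped in order to call a tool, but no call was
        # recorded -- the empty-response symptom reported above.
        return "tool call requested but not executed"
    if calls:
        return "tool call executed"
    return "no tool call"


# The info dict from the Groq response above (abridged):
info = {"termination_reasons": ["tool_calls"], "tool_calls": []}
print(diagnose_tool_call(info))  # -> tool call requested but not executed
```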
@JINO-ROHIT JINO-ROHIT added the question Further information is requested label Jan 24, 2025
@Wendong-Fan
Member

Hey @JINO-ROHIT, thanks for raising this issue. Groq tool calling isn't supported in CAMEL yet; below are the platforms we support native tool calling for at the moment. We will add Groq support soon and let you know once the feature is ready~

[Image: table of model platforms with native tool-calling support]

@JINO-ROHIT
Collaborator Author

Hey, thanks for writing back. I tried the same thing with the Gemini Flash model:

model = ModelFactory.create(
    model_platform=ModelPlatformType.GEMINI,
    model_type=ModelType.GEMINI_1_5_FLASH,
)

This returns an error; I'm attaching my entire traceback below.
I have verified that my API key is valid and works.

2025-01-26 20:12:26,494 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.gemini_model.GeminiModel object at 0x00000132B18BB2C0>
2025-01-26 20:12:26,494 - camel.agents.chat_agent - ERROR - An error occurred while running model gemini-1.5-flash, index: 0
Traceback (most recent call last):
  File "E:\open-source\camel stuff\camel\camel\agents\chat_agent.py", line 1100, in _step_model_response
    response = self.model_backend.run(openai_messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\open-source\camel stuff\camel\camel\models\model_manager.py", line 211, in run
    raise exc
  File "E:\open-source\camel stuff\camel\camel\models\model_manager.py", line 201, in run
    response = self.current_model.run(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\open-source\camel stuff\camel\camel\models\gemini_model.py", line 109, in run
    response = self._client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\open-source\camel stuff\camel\.venv\Lib\site-packages\openai\_utils\_utils.py", line 279, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "e:\open-source\camel stuff\camel\.venv\Lib\site-packages\openai\resources\chat\completions.py", line 859, in create
    return self._post(
           ^^^^^^^^^^^
  File "e:\open-source\camel stuff\camel\.venv\Lib\site-packages\openai\_base_client.py", line 1283, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\open-source\camel stuff\camel\.venv\Lib\site-packages\openai\_base_client.py", line 960, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "e:\open-source\camel stuff\camel\.venv\Lib\site-packages\openai\_base_client.py", line 1064, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]

@Wendong-Fan
Member

Hey @JINO-ROHIT, thanks for reporting this issue!

We just fixed the Gemini tool calling issue and also added tool call support for Groq and SGLang in PR #1512.
Feel free to try CAMEL's latest master branch; we will also release a new CAMEL version next week.

@JINO-ROHIT
Collaborator Author

cool, will do!

@JINO-ROHIT
Collaborator Author

The Gemini tool call issue still persists!
Groq seems to work well, though I found one small nit:

from camel.agents import ChatAgent
from camel.toolkits import MathToolkit, SearchToolkit

sys_msg = 'You are a curious stone wondering about the universe.'
agent = ChatAgent(
    system_message=sys_msg,
    model=model,  # same Groq model as in the snippet above
    tools=[
        *MathToolkit().get_tools(),
        *SearchToolkit().get_tools(),
    ],
)
response = agent.step("What is CAMEL AI?")

The tool_calls object is empty, although we get an answer.

Sample response

ChatAgentResponse(msgs=[BaseMessage(role_name='Assistant', role_type=<RoleType.ASSISTANT: 'assistant'>, meta_dict={}, content="CAMEL AI is a cutting-edge company that specializes in artificial intelligence and machine learning technologies. They create innovative solutions to help businesses and individuals make the most of their data. Their work includes natural language processing, computer vision, and predictive analytics, among other areas. CAMEL AI's goal is to make AI accessible and useful for everyone.", video_bytes=None, image_list=None, image_detail='auto', video_detail='low', parsed=None)], terminated=False, info={'id': 'chatcmpl-ec239188-76dc-4706-81a7-159889279f5a', 'usage': {'completion_tokens': 74, 'prompt_tokens': 6714, 'total_tokens': 6788, 'completion_tokens_details': None, 'prompt_tokens_details': None, 'queue_time': 0.02417765700000002, 'prompt_time': 0.370985381, 'completion_time': 0.113781916, 'total_time': 0.484767297}, 'termination_reasons': ['stop'], 'num_tokens': 30, 'tool_calls': [], 'external_tool_request': None})
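This response differs from the original failure: here `termination_reasons` is `['stop']` and `content` is populated, so the model answered directly from its own knowledge and never invoked a tool, which is why `tool_calls` is empty. A minimal plain-Python check for that case (the helper name is illustrative, not a camel API):

```python
def answered_without_tools(info: dict, content: str) -> bool:
    """True when the model produced a direct answer and skipped tools."""
    return (
        info.get("termination_reasons") == ["stop"]
        and not info.get("tool_calls")
        and bool(content.strip())
    )


# Values from the Groq response above (abridged):
info = {"termination_reasons": ["stop"], "tool_calls": []}
print(answered_without_tools(info, "CAMEL AI is a cutting-edge..."))  # -> True
```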

@Wendong-Fan
Member

Hey @JINO-ROHIT, it works on my side. Could you double-check whether you have pulled the latest code from the master branch? We can also schedule a quick call to look into this together: https://cal.com/wendong-fan-5yu7x5/30min?date=2025-01-28&month=2025-01
