
run_async's response_stream is a generator and not an async generator #67

Closed
jonchun (Contributor) opened this issue Jan 27, 2025 · 1 comment

jonchun commented Jan 27, 2025

Note: Running Python 3.12 on Ubuntu (WSL)

Problem:
Attempting to use the run_async method gives me the following error:

 line 208, in run_async
    async for partial_response in response_stream:
TypeError: 'async for' requires an object with __aiter__ method, got generator
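
For context, this is the generic error Python raises whenever async for is applied to a plain (synchronous) generator; a minimal illustration:

def gen():
    yield 1

async def consume():
    # Raises TypeError: 'async for' requires an object with __aiter__ method
    async for item in gen():
        print(item)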

After digging around for a bit, here's what I found in the instructor documentation:
https://python.useinstructor.com/concepts/partial/?h=partial#understanding-partial-responses

# Supporting setup: instructor-patched OpenAI client and rich Console.
# MeetingInfo (a Pydantic model) and text_block (the input text) are
# defined earlier on the linked docs page.
import instructor
from openai import OpenAI
from rich.console import Console

client = instructor.from_openai(OpenAI())

extraction_stream = client.chat.completions.create_partial(
    model="gpt-4",
    response_model=MeetingInfo,
    messages=[
        {
            "role": "user",
            "content": f"Get the information about the meeting and the users {text_block}",
        },
    ],
    stream=True,
)
console = Console()

for extraction in extraction_stream:
    obj = extraction.model_dump()
    console.clear()
    console.print(obj)

It's clear from the docs that

When specifying a create_partial and setting stream=True, the response from instructor becomes a Generator[T]. As the generator yields results, you can iterate over these incremental updates. The last value yielded by the generator represents the completed extraction!

this is supposed to return a Generator rather than an AsyncGenerator. With the above fixes in place, the async for therefore needs to be used OUTSIDE of the agent:

e.g.

async for partial in agent.run_async(user_input):
    console.clear()
    console.print(partial)
KennyVaneetvelde (Member) commented

I actually opened a new issue: #76

Reason being that the changes from the PR broke other implementations, so I reverted it and made a new ticket to better support it.

That being said, your error likely was not a true error but happened due to using the wrong client.

You actually should have used openai.AsyncOpenAI, as in the examples.
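
For reference, a minimal sketch of that pattern (assuming the same MeetingInfo model and text_block input as in the docs snippet quoted above):

import asyncio

import instructor
from openai import AsyncOpenAI
from rich.console import Console

# Async client, as in the examples (instead of the synchronous openai.OpenAI)
client = instructor.from_openai(AsyncOpenAI())


async def main() -> None:
    console = Console()
    # With an async client, create_partial yields partial results asynchronously,
    # so the stream is consumed with `async for`.
    extraction_stream = client.chat.completions.create_partial(
        model="gpt-4",
        response_model=MeetingInfo,  # same Pydantic model as in the docs snippet
        messages=[
            {
                "role": "user",
                "content": f"Get the information about the meeting and the users {text_block}",
            },
        ],
        stream=True,
    )
    async for extraction in extraction_stream:
        console.clear()
        console.print(extraction.model_dump())


asyncio.run(main())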
