
Gemini 2.0 API Key Configuration Unclear #479

Open
6p5ra opened this issue Jan 23, 2025 · 7 comments

Comments

@6p5ra

6p5ra commented Jan 23, 2025

Unable to determine the correct location and variable name for inserting the Gemini 2.0 API key in the project configuration.
To Reproduce

Attempt to integrate the Gemini 2.0 API
Search through the configuration files
Fail to identify the correct key insertion point or variable name

Additional Details

Specific questions:

Where should the Gemini 2.0 API key be placed?
What is the exact variable name for the API key?

@abi
Owner

abi commented Jan 24, 2025

Hi there, you can set GEMINI_API_KEY in backend/.env.

Then, in generate_code.py, set variant_models = [Llm.GPT_4O_2024_11_20, Llm.GEMINI_2_0_FLASH_EXP] (or whatever LLM combo you want to have). Make sure to set this right before this line: for index, model in enumerate(variant_models):
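The two steps above can be sketched as a self-contained snippet (the Llm enum here is a stand-in for the repo's real enum in backend/llm.py, with illustrative values; the member names follow the comment above):

```python
import os
from enum import Enum

class Llm(Enum):
    # Stand-in for the repo's Llm enum; values are illustrative model ids.
    GPT_4O_2024_11_20 = "gpt-4o-2024-11-20"
    GEMINI_2_0_FLASH_EXP = "gemini-2.0-flash-exp"

# Step 1: the key is read from backend/.env, e.g. a line like
#   GEMINI_API_KEY=your-key-here
# (restart the backend server after editing .env).
gemini_api_key = os.environ.get("GEMINI_API_KEY")

# Step 2: in generate_code.py, pin the variant list right before the
# dispatch loop `for index, model in enumerate(variant_models):`.
variant_models = [Llm.GPT_4O_2024_11_20, Llm.GEMINI_2_0_FLASH_EXP]

for index, model in enumerate(variant_models):
    print(index, model.value)
```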

Let me know if this works.

@6p5ra
Author

6p5ra commented Jan 24, 2025

I replaced the block with the following, and now it works:

variant_models = []

# For creation, use Claude Sonnet 3.6 but it can be lazy
# so for updates, we use Claude Sonnet 3.5
if generation_type == "create":
    claude_model = Llm.CLAUDE_3_5_SONNET_2024_10_22
else:
    claude_model = Llm.CLAUDE_3_5_SONNET_2024_06_20

if openai_api_key and anthropic_api_key and GEMINI_API_KEY:
    variant_models = [
        claude_model,
        Llm.GPT_4O_2024_11_20,
        Llm.GEMINI_2_0_FLASH_EXP,
    ]
elif openai_api_key and GEMINI_API_KEY:
    variant_models = [
        Llm.GPT_4O_2024_11_20,
        Llm.GEMINI_2_0_FLASH_EXP,
    ]
elif anthropic_api_key and GEMINI_API_KEY:
    variant_models = [
        claude_model,
        Llm.GEMINI_2_0_FLASH_EXP,
    ]
elif openai_api_key and anthropic_api_key:
    variant_models = [
        claude_model,
        Llm.GPT_4O_2024_11_20,
    ]
elif openai_api_key:
    variant_models = [
        Llm.GPT_4O_2024_11_20,
        Llm.GPT_4O_2024_11_20,
    ]
elif anthropic_api_key:
    variant_models = [
        claude_model,
        Llm.CLAUDE_3_5_SONNET_2024_06_20,
    ]
elif GEMINI_API_KEY:
    variant_models = [
        Llm.GEMINI_2_0_FLASH_EXP,
        Llm.GEMINI_2_0_FLASH_EXP,
    ]
else:
    await throw_error(
        "No OpenAI, Anthropic, or Gemini API key found. Please add the environment variable OPENAI_API_KEY, ANTHROPIC_API_KEY, or GEMINI_API_KEY to backend/.env or in the settings dialog. If you add it to .env, make sure to restart the backend server."
    )
    raise Exception("No API keys found")
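For readers skimming the cascade above, the key-combination logic can be factored into a small pure function. This is only an illustrative sketch, using plain strings in place of the Llm enum members; pick_variants is a hypothetical name, not something in the repo:

```python
def pick_variants(openai_key, anthropic_key, gemini_key, claude_model="claude"):
    """Return the variant list implied by which API keys are set.

    Mirrors the if/elif cascade from the comment above: three variants
    when all keys are present, two otherwise, duplicating a model when
    only one provider's key is available.
    """
    if openai_key and anthropic_key and gemini_key:
        return [claude_model, "gpt-4o", "gemini-flash"]
    if openai_key and gemini_key:
        return ["gpt-4o", "gemini-flash"]
    if anthropic_key and gemini_key:
        return [claude_model, "gemini-flash"]
    if openai_key and anthropic_key:
        return [claude_model, "gpt-4o"]
    if openai_key:
        return ["gpt-4o", "gpt-4o"]
    if anthropic_key:
        return [claude_model, "claude-sonnet-3.5"]
    if gemini_key:
        return ["gemini-flash", "gemini-flash"]
    raise ValueError("No API keys found")
```

For example, pick_variants("k", None, "k") yields a GPT-4o plus Gemini pair, matching the second branch of the cascade.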

@6p5ra
Author

6p5ra commented Jan 24, 2025

Everything works except generating from video. I don't know why, since Flash 2.0 supports video inputs.

@abi
Owner

abi commented Jan 25, 2025

Yup, video support is very alpha and currently only works with Claude Opus. It hasn't been updated in a while.

@therkut

therkut commented Feb 8, 2025

I replaced the block with the same variant_models fix that @6p5ra posted above, and it works.

Hi,
@6p5ra and @abi

When I use the Gemini model, the first generation works, but I encounter the error shown below when I request changes. I would appreciate your support with this issue.

Excellent work. Thanks.

ERROR:

Error generating code. Please contact support.
Traceback (most recent call last):
 File "/home/user/stc/backend/backend/llm.py", line 269, in stream_gemini_response
   if content_part["type"] == "image_url":  # type: ignore
      ~~~~~~~~~~~~^^^^^^^^
TypeError: string indices must be integers, not 'str'
ERROR:    Exception in ASGI application
Traceback (most recent call last):
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 250, in run_asgi
   result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
   return await self.app(scope, receive, send)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
   await super().__call__(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
   await self.middleware_stack(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/middleware/errors.py", line 152, in __call__
   await self.app(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/middleware/cors.py", line 77, in __call__
   await self.app(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
   await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
   raise exc
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
   await app(scope, receive, sender)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
   await self.middleware_stack(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
   await route.handle(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/routing.py", line 362, in handle
   await self.app(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/routing.py", line 95, in app
   await wrap_app_handling_exceptions(app, session)(scope, receive, send)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
   raise exc
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
   await app(scope, receive, sender)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/starlette/routing.py", line 93, in app
   await func(session)
 File "/home/user/miniforge3/envs/dev-env/lib/python3.12/site-packages/fastapi/routing.py", line 383, in app
   await dependant.call(**solved_result.values)
 File "/home/user/stc/backend/routes/generate_code.py", line 372, in stream_code
   raise Exception("All generations failed")
Exception: All generations failed
INFO:     connection closed

(Attachments: screenshot of the error and Screencast.from.2025-02-08.17-55-17.webm)

@abi
Owner

abi commented Feb 9, 2025

Looks like follow-up prompts are failing but first prompts are working? Ah, I think currently the repo only supports Gemini for the initial generation. The code needs to be modified to do a better translation of the messages from the OpenAI format to the Gemini format for follow-ups to work. So yeah, the Gemini implementation is a little hacky at the moment.
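The traceback is consistent with that: OpenAI-style message content can be either a plain string or a list of typed parts, and the Gemini path appears to index into it as if it were always a list of dicts. A minimal sketch of the normalization a fix would need — normalize_content is a hypothetical helper, not the repo's actual patch:

```python
def normalize_content(content):
    """Coerce OpenAI-style message content into a list of typed parts.

    Follow-up (edit) messages often carry a plain string, while the
    initial message carries a list like [{"type": "image_url", ...}].
    Indexing a string with content_part["type"] raises
    'TypeError: string indices must be integers' -- the error in the log.
    """
    if isinstance(content, str):
        return [{"type": "text", "text": content}]
    return content

# With the guard in place, iterating parts is safe for both shapes:
for part in normalize_content("make the header blue"):
    if part["type"] == "image_url":  # part is now always a dict
        pass
```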

@therkut

therkut commented Feb 9, 2025

@abi
Yes, the initial requests work, but the follow-up correction requests show an error on screen.
I'm eagerly awaiting a fix.

Thanks

BibbyChung added a commit to BibbyChung/screenshot-to-code that referenced this issue Feb 20, 2025