Trusted publishing: prevent OIDC credential re-use #16194
If we make this |
Just double-checked, and Google Cloud does not support either. @th3coop, could you provide an example of the claim set that ActiveState offers in its JWTs? I'm curious if your claims include a `jti`.
I've asked @DarkaMaul to look at this!
Hey, morning @woodruffw, sounds like our docs aren't as navigable as they should be. The supported claims (though not worded as such) are here: https://docs.activestate.com/platform/user/oidc/#understanding-the-token
So we support the claims documented there. We do not currently support `jti`.
For (2): I've done similar entry tracking, validation, and expiry backed by Redis before. Pros:
Caveats:
Just an idea. No sweat if it's not used or totally ignored :D. Don't hesitate to ping me if you need more info about the ActiveState stuff, if you'll need ActiveState to put up a PR, or anything else I can assist with.
Got it, thanks @th3coop! I swear I've seen that page before, but for some reason I couldn't find it when searching.
Not to snipe you into adding this, but how difficult do you think it would be to add to your claims? 16 bytes of CSPRNG should be more than sufficient. Re: Redis: that is probably workable! We definitely already have a Redis service in Warehouse's deployment, and it already provides JWKS caching/rotation for the verification phase. I don't have a super strong intuition for whether that would be better or worse than a new DB table, though -- the table would be "cheap" here in the sense that it would have no relations, but still heavier weight than Redis. OTOH, with Redis, I think we'd need to keep a potentially unbounded number of JWTs resident in RAM, right? (Technically the number of JWKS is also unbounded, but we currently pull those from trusted parties, i.e. the OIDC IdPs we trust.)
The hard/longest part of the task will be me second-guessing myself on picking the correct algorithm 😄. I'd probably just slap a UUID in there, since we use them for everything already. Other than that, it would not be hard for us to add to our OIDC JWT tokens. What kind of timeline are you all hoping for here? Re: Redis: When you say "potentially unbounded number", you mean that there could be any number of values stored at any given time, with no guarantee that you have enough space, correct? Then definitely, yeah. You need estimates on how many requests you expect to get over the expiry time of these OIDC tokens, which should be quite short compared to our JWTs. Would you be able to scale vertically, adding memory, if volume spiked for some reason?
UUIDv4 has 122 bits of entropy when generated correctly, so this would be more than sufficient 🙂
No hard timeline for this, IMO -- the top-level feature would be a good one for PyPI to have, but it's currently not breaking anything and can be rolled out incrementally for each IdP. So it'd be great to have ActiveState support!
Yep. That part was mostly me thinking out loud -- I suspect in practice this wouldn't be a huge issue, but AFAIK there aren't many other places where user interaction with PyPI can cause unbounded Redis usage, so I figured it was worth calling out.
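The "16 bytes of CSPRNG" and UUIDv4 options discussed above can both be generated from the Python standard library; a quick illustrative sketch (variable names are mine, not from any actual implementation):

```python
import secrets
import uuid

# Two common ways to mint a high-entropy `jti` value:
jti_hex = secrets.token_hex(16)   # 16 CSPRNG bytes = 128 bits, hex-encoded
jti_uuid = str(uuid.uuid4())      # UUIDv4: 122 bits of randomness

print(jti_hex)
print(jti_uuid)
```

Either form is far beyond what an attacker could guess or collide within a token's short validity window.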
That all makes sense. I don't see a problem getting this added to our OIDC token pretty quickly when the time comes. Keep me posted, and please let me know if I can help in other ways; if I can, I will.
Hi - I'm taking over this issue. For the moment, the plan is as follows:
Let me know if you have any feedback on how to improve this.
Instead of this, I like @th3coop's idea here:
E.g., using the EXPIRE command to let Redis automatically expire these keys.
Sure, I'll implement the Redis version first, and we can replace it with a table if needed afterwards.
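The expiring-key idea above amounts to a set-if-absent with a TTL. In production this would be a single Redis `SET key 1 NX EX ttl` call; to keep the sketch self-contained and runnable, it uses a tiny in-memory stand-in with the same semantics (all names here are hypothetical, not Warehouse's actual code):

```python
import time


class InMemoryTTLStore:
    """Stand-in for Redis SET with NX and EX: set a key only if it is
    absent, with automatic expiry (checked lazily on each access)."""

    def __init__(self):
        self._expiry = {}  # key -> monotonic time at which it expires

    def set_if_absent(self, key, ttl_seconds):
        now = time.monotonic()
        expires_at = self._expiry.get(key)
        if expires_at is not None and expires_at > now:
            return False  # key still live: this is a duplicate
        self._expiry[key] = now + ttl_seconds
        return True


def consume_jti(store, provider, jti, ttl_seconds=900):
    """Return True the first time a (provider, jti) pair is seen within
    the token's validity window; False on any replay."""
    return store.set_if_absent(f"oidc-jti:{provider}:{jti}", ttl_seconds)


store = InMemoryTTLStore()
first = consume_jti(store, "github", "abc-123")
second = consume_jti(store, "github", "abc-123")
```

Because expiry matches the token lifetime, the "seen" set stays bounded by the number of tokens issued within one validity window, which addresses the unbounded-growth concern raised earlier.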
If you use the |
I think it should be just |
I hate to necromancer a closed issue -- I'm happy to create a new one if I'm on the right track! -- but I'm seeing what looks like a failure of GitHub actions' tokens to include a JTI: https://github.com/offbyone/django-svcs/actions/runs/10552758336/job/29233437205
This is in reference to test pypi, not primary, so it's possible this is something less critical, but it's biting me as I try to publish a new package for the first time using a trusted publisher. |
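When debugging a report like this, it helps to look at the token's actual claims. A stdlib-only sketch that decodes a JWT payload purely for inspection (no signature verification, so never use it for authentication decisions; the fake token built below is just for demonstration):

```python
import base64
import json


def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT, without verifying the
    signature, to see which claims (e.g. `jti`) are present."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# Build a fake unsigned token with a known payload to demonstrate:
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"jti": "example-jti", "iss": "https://example.test"}).encode()
).rstrip(b"=").decode()
fake_token = f"{header}.{payload}."

claims = jwt_claims(fake_token)
```

Running this on a real workflow's token would show immediately whether a `jti` claim is present in the payload.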
@offbyone Huh that is not something we expected. I tried to reproduce it but couldn't. Looking at |
Happy to, here ya are: https://github.com/offbyone/django-svcs/actions/runs/10552758336 (if you'd prefer I open a new issue for this, just say the word)
Yeah, go for it; better to have this problem documented in its own issue.
Pointed out by @sethmlarson and @di: an OIDC credential can currently be re-used to issue multiple temporary API tokens, for as long as its expiry window remains open. This results in two potential weaknesses:
(2) is unlikely on GitHub, since any context in which the attacker has access to the OIDC credential is also likely one in which they can mint a fresh credential. But this may still be relevant for other supported OIDC IdPs.
Both of these can be addressed by enforcing one-time use of the OIDC credential within Warehouse:

1. Each supported OIDC IdP includes a `jti` claim in their JWTs.
2. Warehouse keeps track of "seen" `jti`s: if the same JTI is seen again, the JWT it belongs to should be rejected and token exchange should fail.

For (1), both GitLab and GitHub support `jti`. Google Cloud supports `nonce`, which is subtly different (`jti` is server-controlled, `nonce` is user-controlled). I'm not 100% sure about ActiveState, since their docs don't mention any claims explicitly 🙂

For (2), we probably need either a single new table with a unique constraint on `(provider-name, jti)`, or a new table per-provider with a unique constraint on `jti`. With both, the table will probably have unbounded growth unless we also periodically delete from it (on the same cadence as our self-expiring API tokens, maybe?)
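The table-based variant described here leans on a unique constraint to make the replay check atomic. A self-contained illustration using SQLite (Warehouse itself runs PostgreSQL; the table and column names below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE seen_oidc_jtis (
        provider TEXT NOT NULL,
        jti      TEXT NOT NULL,
        seen_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        UNIQUE (provider, jti)  -- one row per (provider-name, jti) pair
    )
    """
)


def record_jti(conn, provider, jti):
    """Insert the JTI; the unique constraint rejects replays atomically,
    even under concurrent token exchanges."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO seen_oidc_jtis (provider, jti) VALUES (?, ?)",
                (provider, jti),
            )
        return True
    except sqlite3.IntegrityError:
        return False


ok = record_jti(conn, "github", "abc-123")
replayed = record_jti(conn, "github", "abc-123")
```

A periodic `DELETE FROM seen_oidc_jtis WHERE seen_at < ...` (cut off at the token expiry window) would keep the table's growth bounded, matching the cleanup cadence suggested above.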