Scale ATProto handle resolution #744
Upcoming option: https://dns.kitchen/, all-you-can-eat zones/records, $5/mo.
@neatnik mentioned https://desec.io/ too.
Maybe obsoleted by #830 🤞
...nope, turns out #830 probably won't work after all, so this issue definitely still applies.
The 10k limit in Google Cloud DNS is a quota, not a hard limit.
Tried requesting a quota bump. 🤞
In unrelated very good news, @bnewbold built us a whole new microservice to solve this! bluesky-social/atproto#1697 (comment). Extremely generous of him. Thank you Bryan!!!
More good news, I asked for a GCP DNS quota bump to 50k and got it. Woo!
We're currently at 6500 DNS records, with the limit still at 50k. We'll (hopefully) still need to figure this out eventually, but it seems like we have plenty of time.
Count is 28k now, out of our quota of 50k. 😳 I think a decent number of those are disabled, many but not all from Flipboard.
37k DNS records now! Time to get serious about this.
I have no idea what this software is, but stumbled across this issue while researching secondary DNS providers. :) Given the programmatic nature of these records, maybe you'd be better off with something like PowerDNS with the remote backend? Or if you need a managed service, Bunny DNS? You'd never run into record limitations, as the responses are just created on the fly.
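To illustrate the PowerDNS suggestion above: the remote backend speaks a small JSON protocol, so `_atproto` TXT answers can be synthesized on the fly instead of being stored as records. A rough sketch (not Bridgy Fed's actual code), assuming a hypothetical `lookup_did` that would consult the bridge's datastore; the DID value here is made up:

```python
import json

def lookup_did(handle):
    # Placeholder: real code would look the bridged handle up in a datastore.
    return "did:plc:example123"

def handle_lookup(request_json):
    """Answer a PowerDNS remote-backend 'lookup' request for _atproto TXT names."""
    req = json.loads(request_json)
    params = req["parameters"]
    qname = params["qname"].rstrip(".")
    qtype = params["qtype"]
    if qname.startswith("_atproto.") and qtype in ("TXT", "ANY"):
        handle = qname[len("_atproto."):]
        did = lookup_did(handle)
        if did:
            return json.dumps({"result": [{
                "qtype": "TXT",
                "qname": params["qname"],
                "content": f'"did={did}"',  # TXT content, quoted
                "ttl": 60,
            }]})
    # PowerDNS expects {"result": false} for "no such record"
    return json.dumps({"result": False})
```

With this shape, record count stops mattering entirely: every bridged user costs zero stored DNS records.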
Here is a direct link to it. It hasn't been reviewed or merged yet (bumping the quota is probably the near-term solution though?)
Just chiming in here as it's related to another issue I just added a comment to... Have you considered just running your own nameserver? That would mean no limits and faster updates as records change, among other possible gains.
@shiribailem yes! Definitely considered that, hence "or run our own DNS server" in the original description here. It would mean no limits and faster updates, but it would also mean running and maintaining our own nameserver. One more level of admin cost and risk. Whee. But yeah, we may end up doing that regardless, esp since @bnewbold built us one! #744 (comment)
I mean, the good news is that nameservers have very, very low requirements for resources, performance, and effort, even for security. The worst risk is a fully custom service vs. just using internal APIs/config files to update zones on an established DNS server. Given that DNS isn't a big attack surface compared to the entire bridge software itself, I wouldn't stress as much over that.
Done! Reclaimed ~13.5k total, currently at 24.5k out of 50k quota.
Requested a quota bump to 200k. 🤞
Just randomly thinking... have you considered stripping periods from the converted usernames? That would make it trivial to switch to HTTPS resolution... Sadly that's something that would be easier at the beginning of the project, before you had thousands of users, but I could still see it working. If you did that, I'd probably leverage the code for setting custom domains and just convert all existing listings to custom domains (which would otherwise leave them unchanged). From there you could just use HTTPS verification going forward.
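For context on why stripping periods would enable this: ATProto's HTTPS resolution method fetches `https://<handle>/.well-known/atproto-did` and reads the DID from the response body, so it only needs a cert valid for the handle's exact hostname. A sketch of the idea in this comment, with a purely hypothetical flattening scheme (dots in the instance collapsed to hyphens so every handle is a single label under `ap.brid.gy`, coverable by one `*.ap.brid.gy` wildcard cert):

```python
def flatten_handle(user: str, instance: str) -> str:
    # Hypothetical flattening: NOT Bridgy Fed's actual scheme, just an
    # illustration. One DNS label under ap.brid.gy means a single
    # *.ap.brid.gy wildcard cert covers HTTPS resolution for everyone.
    return f"{user}-{instance.replace('.', '-')}.ap.brid.gy"

def https_resolution_url(handle: str) -> str:
    # ATProto's HTTPS method: GET this URL; the response body is the DID.
    return f"https://{handle}/.well-known/atproto-did"

handle = flatten_handle("user", "mastodon.social")
print(handle)  # user-mastodon-social.ap.brid.gy
print(https_resolution_url(handle))
```

The obvious catch, as the comment notes, is migrating all existing multi-level handles, which is why it would have been easier before the project had thousands of users.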
@shiribailem we have: #744 (comment), #830
Holy crap, they gave us the quota bump to 200k records! 🎉
We need to serve ATProto handle resolution for all users bridged into ATProto (background: #381). ATProto supports both DNS and HTTPS methods, but our ATProto handles are multi-level, eg `@[email protected]` becomes `user.mastodon.social.ap.brid.gy`, and you can't make multi-level wildcard SSL certs, so HTTPS won't work, so DNS it is.

We're currently using Google Cloud DNS. It serves the entire `brid.gy` DNS zone, and we create `_atproto` records for handle resolution manually. The catch is that GCP DNS has a hard limit of 10k records per zone, which we'll likely outgrow. Grr.

We could make a zone per sub-subdomain, eg per fediverse instance, so `mastodon.social.ap.brid.gy` would become its own zone, but GCP DNS also has a hard limit of 10k zones total.

To do: we eventually need to switch to a different programmatic DNS service or run our own DNS server. Whee.
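For reference, the DNS method described above works per the ATProto handle resolution spec: a handle resolves via a TXT record at `_atproto.<handle>` whose value is `did=<the user's DID>`. A minimal sketch of the record naming and value parsing (the DID value below is hypothetical):

```python
def atproto_txt_name(handle: str) -> str:
    """DNS name where an ATProto handle's verification TXT record lives."""
    return f"_atproto.{handle}"

def parse_did(txt_value: str):
    """Extract the DID from a TXT record value like 'did=did:plc:abc123'."""
    if txt_value.startswith("did="):
        return txt_value[len("did="):]
    return None

print(atproto_txt_name("user.mastodon.social.ap.brid.gy"))
# _atproto.user.mastodon.social.ap.brid.gy
print(parse_did("did=did:plc:abc123"))
# did:plc:abc123
```

Each bridged user therefore costs exactly one TXT record in the zone, which is why the record count tracks the user count so directly.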