took me about 10 minutes to get my local deepseek model to tell me about Tiananmen Square, it was a valiant effort on the devs' part, but alas, not enough to prevent it from thinking it's a history professor at a US college.
It's been a long day. RE:
Even if it’s hard, go find some grass to touch this holiday season. https://morel.us-east.host.bsky.network/xrpc/com.atproto.sync.getBlob?did=did:plc:vpkhqolt662uhesyj6nxm7ys&cid=bafkreihl4plbl67tcix2vs6hf4vum2yg4ypwndcvbjl6hpwer6h6pfv3se
Search your code for any occurrences of "TODO: this query is fast for now, probably want to improve locking around this when it gets big" and fix them. This PSA brought to you by Discover being slow today.
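If you actually want to run that sweep, a minimal sketch of a repo-wide search (the marker string, file name, and error handling here are purely illustrative):

```python
import os
import tempfile

MARKER = "TODO: this query is fast for now"

def find_todos(root):
    """Walk a source tree and report (path, line number) for each hit."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    for lineno, line in enumerate(f, 1):
                        if MARKER in line:
                            hits.append((path, lineno))
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
    return hits

# Demo on a throwaway tree (hypothetical file name):
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "queries.go"), "w", encoding="utf-8") as f:
    f.write("// TODO: this query is fast for now, improve locking later\n")
hits = find_todos(demo)
print(hits)
```

Each hit is a file still carrying the comment, i.e. a lock you promised yourself you'd revisit.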
In Germany, Bluesky+ will be called "Bluesky Super Cool" and will additionally include one of four different techno tracks you can have autoplay when people visit your profile (Final naming, features, and track selection still TBD)
Pro Tip: If you remove an unused index, make sure your ORM won't try to recreate it next time you restart the process. In completely unrelated news, Discover and other bsky.app feeds should be back online now.
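A minimal sketch of the failure mode, assuming a hypothetical ORM-style `ensure_schema` step that recreates any index still present in the declared schema on every restart (sqlite3 stands in for the real database; all names are made up):

```python
import sqlite3

# Hypothetical declared schema, the kind an ORM derives from model definitions.
DECLARED_INDEXES = {
    "idx_posts_author": "CREATE INDEX idx_posts_author ON posts(author)",
}

def ensure_schema(conn):
    # Mimics ORM auto-migration: recreate any declared index that's missing.
    for name, ddl in DECLARED_INDEXES.items():
        row = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='index' AND name=?",
            (name,),
        ).fetchone()
        if row is None:
            conn.execute(ddl)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT)")
ensure_schema(conn)

# You drop the unused index by hand...
conn.execute("DROP INDEX idx_posts_author")

# ...but the next "restart" quietly brings it back, because the schema
# declaration still mentions it. The fix is removing it from the model
# definitions too, not just the database.
ensure_schema(conn)
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='index'")]
print(names)
```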
Having issues with nodes in our Scylla cluster randomly dropping out and then taking forever to rejoin, resulting in a bunch of degraded performance for the duration.
If this whole non-archival relay thing moves forward as is, it means pretty much any dev can run a relay. Requirements move down to ~8 cores, 16GB RAM, ~2TB SSD (can reduce this too), and as much bandwidth as you want to provide to your downstream users. Obv would scale with the network, but a good start.
firehose consumers may experience some issues for the next ~20 minutes, apologies, working on getting it back in a good spot.
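For consumers hit by blips like this, the usual mitigation is cursor-based resume: persist the last sequence number you fully processed and, on reconnect, replay from that cursor while skipping anything you've already seen. A toy sketch with a fake in-memory stream (the `seq` field mirrors the firehose's sequence numbers; the class and stream are hypothetical):

```python
class Consumer:
    def __init__(self):
        self.cursor = None  # last sequence number we fully processed
        self.seen = []      # stands in for real side effects

    def handle(self, event):
        # Skip anything at or before our cursor (replayed on reconnect).
        if self.cursor is not None and event["seq"] <= self.cursor:
            return
        self.seen.append(event["seq"])
        self.cursor = event["seq"]

stream = [{"seq": s} for s in range(1, 6)]
c = Consumer()

# Process the first few events, then the connection drops.
for e in stream[:3]:
    c.handle(e)

# On reconnect the server replays from our cursor; duplicates are ignored.
for e in stream[2:]:
    c.handle(e)

print(c.seen)  # each event processed exactly once
```

In a real consumer the cursor would be persisted to disk or a database between restarts, not held in memory.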
To be clear, we do have plans for scaling, we just kinda expected more than a couple days' notice before getting blasted with a million new users a day. The team is rapidly deploying fixes and new software to adapt. More servers in the mail.