did one of my famous "sleep until 3 PM" things today but forgot i had milk and eggs delivered this morning. milk and eggs sitting at my door since 10AM. oops
the for you feed is so good imo because it seemingly prioritizes recency. i don't care about stuff that happened hours ago. i don't care about replying to someone who is offline. i want to talk to people who are active rn and talking about stuff i like.
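(a minimal sketch of what "prioritize recency" scoring could look like. the half-life and the interest scores below are made-up assumptions for illustration, not anything the actual feed is known to do)

```python
import math
import time

# hypothetical knob: a 30-minute half-life strongly favors "active rn"
HALF_LIFE_S = 30 * 60

def score(age_s: float, interest_match: float) -> float:
    """interest_match in [0, 1], decayed exponentially by post age."""
    return interest_match * math.exp(-math.log(2) * age_s / HALF_LIFE_S)

now = time.time()
posts = [
    {"text": "5 min old, mild interest", "ts": now - 5 * 60, "match": 0.5},
    {"text": "6 hours old, perfect interest", "ts": now - 6 * 3600, "match": 1.0},
]
posts.sort(key=lambda p: score(now - p["ts"], p["match"]), reverse=True)
print([p["text"] for p in posts])  # fresh-but-mild beats stale-but-perfect
```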
probably makes sense to let you use oai or anthropic instead of local models too (e.g. if you don't have a machine that can run the models). you should be able to use either of these super cheap (like, certainly under a few dollars a month unless you're using the more expensive models or get piled on)
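(back-of-the-envelope on the "few dollars a month" claim. the per-token rates below are placeholder assumptions, check the providers' actual pricing pages)

```python
# placeholder rates, NOT quoted pricing
INPUT_PER_M = 0.25   # $ per million input tokens (assumed)
OUTPUT_PER_M = 1.00  # $ per million output tokens (assumed)

replies_per_day = 50                 # assumed usage
tokens_in, tokens_out = 1_500, 200   # assumed context + reply size per call

monthly = 30 * replies_per_day * (
    tokens_in * INPUT_PER_M + tokens_out * OUTPUT_PER_M
) / 1_000_000
print(f"~${monthly:.2f}/month")  # ~$0.86 under these assumptions
```

getting piled on just multiplies replies_per_day, which is how you'd blow past a few dollars.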
RE:
to be clear, this isn't to suggest that we should stop letting people run models locally. in fact, i believe that we _need_ to level the playing field as much as possible, and i'm very excited about progress being made. but we certainly need to be paying attention to the possible consequences.
RE:
there actually is a lot of fun to be had with local llms, but it's also quite concerning that it's now possible to run these with ease on consumer hardware. if you thought the amount of "bot"-type replies and stuff all over the internet was bad rn, it's probably going to get a lot worse
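("with ease" isn't an exaggeration. here's a sketch of the kind of thing that runs on a laptop now, using llama-cpp-python. the model path is hypothetical, any small quantized .gguf works)

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# hypothetical model file -- any small quantized .gguf on disk
llm = Llama(model_path="./models/some-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a short casual reply to this post"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

that loop plus a posting api is a reply bot, which is exactly the concern.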