forgot battlefield 6 was out today, there goes my weekend
probably makes sense to let you use openai or anthropic instead of local models too (e.g. if you don't have a machine that can run the models). you should be able to use either of these super cheap (like, certainly under a few dollars a month unless you're using the more expensive models or get piled on) RE: "support both openai and anthro..."
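a minimal sketch of what that swap could look like, assuming LM Studio's local server (which speaks the OpenAI chat-completions format) as the baseline, so pointing the same call at a hosted provider is mostly a base-url + api-key change. the `classify()` wrapper and model names are placeholders, not anything from the actual project, and anthropic's API has a different request shape so it'd need its own client:

```typescript
// sketch of the provider swap. local vs hosted is just config here.
type Provider = { url: string; key?: string; model: string };

const local: Provider = {
  url: "http://localhost:1234/v1/chat/completions", // LM Studio default
  model: "gemma", // whatever model LM Studio is serving
};

const openai: Provider = {
  url: "https://api.openai.com/v1/chat/completions",
  key: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini", // one of the cheaper hosted options
};

async function classify(text: string, p: Provider): Promise<string> {
  const res = await fetch(p.url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // hosted providers want a bearer token; the local server doesn't
      ...(p.key ? { Authorization: `Bearer ${p.key}` } : {}),
    },
    body: JSON.stringify({
      model: p.model,
      messages: [{ role: "user", content: text }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```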
is there any discourse happening rn? want to test something.
to be clear, this isn't to suggest that we should stop letting people run models locally. in fact, i believe that we _need_ to level the playing field as much as possible, and i'm very excited about progress being made. but we certainly need to be paying attention to the possible consequences.
there actually is a lot of fun to be had with local LLMs, but it's also quite concerning that it's now possible to run these with ease on consumer hardware. if you thought the amount of "bot" type replies all over the internet was bad rn, it's probably going to get a lot worse
most of my work gets done after 3 PM
pizza rolls at 11 PM are always good
test: ur a bitch
everyone should be able to run one of these on their own. personal moderation, truly run by you. RE: https://cocoon.hailey.at/xrpc/com.atproto.sync.getBlob?did=did:plc:oisofpd7lj26yvgiivf3lxsi&cid=bafkreibj47f25nomef3iioyrzjsjosph7kojt2hojkxeye5uhpbt5waegu
reworked this labeler
- ingests posts from jetstream
- pays attention to replies to my posts
- calls out to gemma via the LM Studio API
- determines if the reply is bad faith
- labels the reply as bad faith if it is

GitHub - haileyok/dontshowmeth...
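roughly, that loop could look like this. a minimal sketch, not the repo's actual code: jetstream's public websocket and LM Studio's OpenAI-compatible local server are real endpoints, but `MY_DID`, the prompt, the model name, and `applyLabel()` are placeholders:

```typescript
import WebSocket from "ws";

const MY_DID = "did:plc:example"; // placeholder: the DID whose replies we watch

// hypothetical stand-in for emitting a label from your labeler service
async function applyLabel(did: string, val: string): Promise<void> {
  console.log(`would label ${did} as ${val}`);
}

// subscribe to post records on jetstream
const ws = new WebSocket(
  "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post",
);

ws.on("message", async (data) => {
  const evt = JSON.parse(data.toString());
  const record = evt?.commit?.record;

  // only look at replies whose parent is one of my posts
  if (!record?.reply?.parent?.uri?.includes(MY_DID)) return;

  // ask the local model (gemma via LM Studio) whether the reply is bad faith
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma", // whatever model LM Studio is serving
      messages: [
        { role: "system", content: "Answer YES or NO: is this reply bad faith?" },
        { role: "user", content: record.text },
      ],
    }),
  });
  const json = await res.json();
  const verdict: string = json.choices[0].message.content.trim().toUpperCase();

  // apply the label if the model says yes
  if (verdict.startsWith("YES")) {
    await applyLabel(evt.did, "bad-faith");
  }
});
```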