probably makes sense to let you use OpenAI or Anthropic instead of local models too (e.g. if you don't have a machine that can run the models). you should be able to use either of these super cheap (like, certainly under a few dollars a month unless you're using the more expensive models or get piled on)
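for anyone curious: since LM Studio already exposes an OpenAI-compatible API, the swap is mostly just pointing the same chat-completions request at the hosted endpoint and adding a key. a rough sketch, not tested — the model name is just an example of a cheap one:

```ts
// same request shape as the local LM Studio call, different host + auth header
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // a cheap model keeps per-reply cost tiny
    temperature: 0,
    messages: [{ role: "user", content: "..." }],
  }),
});
```

anthropic's messages API has a slightly different shape (its own endpoint and headers), but the idea is the same.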
RE:
to be clear, this isn't to suggest that we should stop letting people run models locally. in fact, i believe that we _need_ to level the playing field as much as possible, and i'm very excited about progress being made. but we certainly need to be paying attention to the possible consequences.
RE:
there actually is a lot of fun to be had with local LLMs, but it's also quite concerning that it's now possible to run these with ease on consumer hardware. if you thought the amount of "bot"-type replies and stuff all over the internet was bad rn, it's probably going to get a lot worse
reworked this labeler
- ingests posts from Jetstream
- pays attention to replies to my posts
- calls out to Gemma via the LM Studio API
- determines whether the reply is in bad faith
- labels the reply as bad faith if it is (rough sketch of the flow below)
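roughly how those pieces fit together — this is a sketch, not the repo's actual code; the DID, prompt, model identifier, and `applyLabel` helper are all placeholders:

```ts
import WebSocket from "ws";

// placeholders: your own DID and label value
const MY_DID = "did:plc:xxxxxxxxxxxx";
const LABEL = "bad-faith";

// jetstream firehose, filtered to post records
const JETSTREAM =
  "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post";

// LM Studio serves an OpenAI-compatible API locally
const LMSTUDIO = "http://localhost:1234/v1/chat/completions";

const ws = new WebSocket(JETSTREAM);

ws.on("message", async (raw) => {
  const event = JSON.parse(raw.toString());
  if (event.kind !== "commit" || event.commit?.operation !== "create") return;

  const record = event.commit.record;
  // only care about replies whose parent is one of my posts
  const parentUri: string | undefined = record?.reply?.parent?.uri;
  if (!parentUri || !parentUri.includes(MY_DID)) return;

  if (await isBadFaith(record.text ?? "")) {
    const uri = `at://${event.did}/app.bsky.feed.post/${event.commit.rkey}`;
    await applyLabel(uri);
  }
});

// ask the local model for a yes/no judgment on the reply text
async function isBadFaith(text: string): Promise<boolean> {
  const res = await fetch(LMSTUDIO, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma", // whatever identifier LM Studio shows for the loaded model
      temperature: 0,
      messages: [
        {
          role: "system",
          content: 'answer only "yes" or "no": is this reply arguing in bad faith?',
        },
        { role: "user", content: text },
      ],
    }),
  });
  const json = await res.json();
  return /yes/i.test(json.choices?.[0]?.message?.content ?? "");
}

// stand-in: emit the label through your labeler service however it does that
// (e.g. an Ozone instance or a skyware LabelerServer)
async function applyLabel(uri: string): Promise<void> {
  console.log(`labeling ${uri} as ${LABEL}`);
}
```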
[GitHub - haileyok/dontshowmeth...]