I'm suuuper late to the party, I know, but I was pleasantly surprised by how good the 3B Llama 3.2 model is to use on my 16GB 2020 MacBook M1, via Ollama and @Simon Willison's stellar "llm" package. I'm referring to this here: And this llm package: It feels comparable in speed to the lightweight models on Perplexity.ai, and it feels fast enough that I can just run queries locally. Really nice piece of CLI design - it's a real pleasure to use. A quick sketch of what this looks like is below.
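For anyone curious, here's a minimal sketch using llm's documented Python API. It assumes you've installed the llm-ollama plugin (`llm install llm-ollama`), pulled the model with `ollama pull llama3.2`, and have Ollama running locally; the exact model name string is whatever `llm models` lists on your machine.

```python
import llm

# Assumes the llm-ollama plugin is installed and Ollama is serving the
# model locally. "llama3.2:latest" is the name `ollama pull llama3.2`
# registered for me - run `llm models` to check yours.
model = llm.get_model("llama3.2:latest")

# Run a one-off prompt entirely on-device, no API key needed.
response = model.prompt("Summarise the trade-offs of running a 3B model locally.")
print(response.text())
```

The CLI equivalent is a one-liner like `llm -m llama3.2:latest "your prompt here"`, which is a big part of why it's so pleasant to use.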
Dear fediverse nerds: If, like me, you're interested in a more open, diverse search market, then you might find the 6th International Open Search Symposium 2024 (also known as #ossym24 to its friends) interesting. I'm going next week, and I'll be in Munich from Wednesday to Friday. I've written a bit more about why I'm going: /1