Thread

Replies (12)

What he is describing is a local LLM running what's called a RAG system. “Running a local LLM with a Retrieval-Augmented Generation (RAG) system gives you the power of AI without relying on the cloud. You keep full control of your data, run queries securely and privately, and enhance the model with your own knowledge base in real time. It’s fast, customizable, and cost-efficient—perfect for teams or individuals who want AI that works with their information, not just what the model was trained on.”
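For anyone curious what that actually looks like, here's a rough sketch of the retrieve-then-generate loop. It assumes a local runtime exposing Ollama's /api/generate endpoint on localhost:11434 and a model named "llama3" (both placeholders, swap in whatever you actually run), and it uses crude keyword overlap where a real setup would use embeddings and a vector index:

```python
# Minimal local-RAG sketch: retrieve the most relevant note, then ask a local model.
# Assumes an Ollama server on localhost:11434 serving a model named "llama3".
import json
import urllib.request
from collections import Counter

# Your own "knowledge base": in practice, chunks of your documents.
notes = [
    "RAG retrieves relevant passages and feeds them to the model alongside the question.",
    "A local LLM keeps your data on your own machine instead of the cloud.",
    "Project folders keep related chats and uploaded documents in one place.",
]

def score(query: str, doc: str) -> int:
    # Crude keyword-overlap retrieval; real systems use embeddings + a vector index.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def ask(question: str) -> str:
    context = max(notes, key=lambda n: score(question, n))  # top-1 retrieval
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("What does RAG actually do?"))
```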
I love @Maple AI and use it daily. Two quick questions if you’re reading… In Claude you can create a project folder. All the chats are saved there, as are any documents you upload, and then it’s basically what McConaughey is wishing he had. Maybe I’ve missed this - if so, let me know. Also, if you had a post about use cases for each model, that’d be great. I use DeepSeek the most, then OpenAI. I’m just asking them questions. Both are great, but I’m curious what their strengths are and what the other models excel at. Thanks for accepting Bitcoin! Getting another account for a family member this weekend.