MichaelJ

npub1wqfz...qsyn
Building the library of Alexandria
With its acquisition of Bun, I believe Anthropic is setting itself up to become a vertically integrated cloud hosting and web services platform for AI-powered apps. Training and serving LLMs alone isn't profitable. Big players like Google can swallow the costs without blinking, but newer players like OpenAI and Anthropic have to pivot to providing services to balance their books.
Has anyone else noticed that the psychology around AI is similar to the psychology that prevailed around COVID? Trying to do everything with AI, much like lockdowns and vaccine mandates, looks insane on its face, yet many people don't seem to recognize the problems. I think it's due to a lack of first-principles thinking: few people know what their first principles are, so they are unable to reason about and judge novel situations when they arise.

Some examples. Indefinite lockdowns were obviously wrong, because there is more to human existence than mere health. Vaccine mandates were obviously wrong, because informed consent is a core principle of medical ethics. Replacing people with AI is obviously wrong, because it is good for people to do dignified work. Yet in all three cases, many seemed, and continue to seem, blind to these obvious conclusions.

Anyway, this article, while lengthy, is an excellent primer on the insanity of the AI industry. It's full of first-principles thinking. Read it to help see past the hype.