Don’t be afraid to learn in public. If you don’t say or do something unintentionally stupid at least once a week then you run the risk of being too conservative with your enlightenment. Eventually you become numb to the cringe.
Computational irreducibility implies that even if we get super powerful AI that can forecast far better than humans, it still won’t be able to predict arbitrarily far into the future with any accuracy. The future remains deeply uncertain even for a superintelligence, if somewhat less so than for us.
In the era of weaponized AI the only winning move is not to play. However, we’re stuck in a Cold War mentality and can’t trust that everyone out there will be chill, so we’re all forced to push forward aggressively. Decentralizing AI should be a top priority for research labs. Incentivize the best models to be neutral and preferably aligned with humanity.
An entertaining and somewhat likely possible future:
All of the stable, “large scale”, natural, complex systems are created from small building blocks acting locally (e.g. the human body made of cells, an economy of people, or a star undergoing fusion). There’s a temptation for (often well-intentioned) designers, engineers, policy makers, etc. to pick a desired outcome and take actions at the macro level to produce it. This rarely results in a sustainable, stable system.

If we instead start at the local level and build out from there, we can create a more robust and organic system, but we run into uncertainty at the macro level due to the principle of computational irreducibility [1]: there is no shortcut for predicting what the local rules will produce other than running them. Computational irreducibility echoes Gödel’s incompleteness theorems, which say that in any consistent formal system powerful enough to express arithmetic, there are true statements that can’t be proven within the system. So it seems we either embrace uncertainty or we succumb to short term solutions.

1.
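The local-rules point can be made concrete with a toy example. Here’s a minimal Python sketch of Wolfram’s Rule 30 cellular automaton, the canonical illustration of computational irreducibility: the update rule is trivially local, yet as far as anyone knows the only way to find the pattern at step n is to simulate every intermediate step.

```python
def rule30_step(cells: list[int]) -> list[int]:
    """Apply one step of Rule 30; cells outside the row are treated as 0."""
    padded = [0, 0] + cells + [0, 0]  # pad so the pattern can grow outward
    return [
        # Rule 30 in Boolean form: new cell = left XOR (center OR right)
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

def run(steps: int) -> list[list[int]]:
    """Simulate Rule 30 from a single 'on' cell, returning every row."""
    row = [1]
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

# Macro-level complexity from a three-cell local rule:
for row in run(12):
    print("".join("#" if c else "." for c in row).center(28))
```

Despite the rule fitting in one line, the center column of this pattern passes statistical randomness tests, which is exactly the gap between simple local behavior and unpredictable macro behavior the note describes.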