People have always strived to avoid thinking, delegating their thinking to others instead. Maybe it’s a natural tendency to conserve brain energy, maybe it’s a feature of social hierarchy. People also love to avoid taking responsibility, and thinking your own thoughts makes you automatically responsible for them.
Whatever the reason, the avoidance of thinking seems to be a natural human instinct.
LLMs are seductive because you can avoid thinking, and you can avoid taking responsibility for the work you do; you can always blame the machine.
As external intelligence becomes increasingly powerful, there’s pressure to defer to AI over my own judgement. At some point, how could I know better?
Here’s the catch, though, to take a page from Alan Watts.
You can listen to advice from the wisest people who have ever lived. You can conduct endless scientific inquiry. You can collect oceans of data. You can conjure up omnipotent AI entities to help you decide.
But at the end of the day, you are the one who makes the ultimate judgement call about what advice to follow or ignore. You decide whether or not to accept any research as convincing.
Even if you decide to defer your entire life to an AI, the original choices of:
- Which AI to choose?
- What goals to give it for your life?
- et cetera
still fall on you.
On the horizon, AI promises unfathomably productive, external intelligence. But there is no avoiding the critical piece: you. Your own brain cycles, your character, your self-direction and leadership capabilities.
AI is a force-multiplier of what you already are. It will only make the human element more decisive.
At the extreme, AI functions as a fairytale genie-in-a-lamp. In a situation of superabundance, the only concern becomes: “What do I want?” In a situation of scarcity, though, the eternal human struggle remains: “How do I outcompete?”
How do you outcompete when everyone has a genie? Assuming that everyone gets access to equally-powerful intelligence (which is very optimistic), the only difference-maker between people will be their level of self-mastery.
There’s an Alexander the Great quote: “I am not afraid of an army of lions led by a sheep; I am afraid of an army of sheep led by a lion.” As a kid, I thought this was rubbish. As an ex-kid, this rings true.
But let’s be real: “all-else-equal” situations don’t exist in real life, and why would competitors ever want to level the playing field? If you can get an advantage with a better AI, you’d be a fool to pass up the opportunity.
Frankly, the game theory is grim. My current sense is that the Frank Herbert quote will come to pass: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”