It makes me chuckle.
Many commentators seem deeply concerned about the "alignment problem" in artificial intelligence. That is: if we ever reach the elusive A.G.I. (Artificial General Intelligence), a system that could outperform us at all sorts of problems it was never specifically trained for, how do we ensure that it uses its powerful intellect in line with the values we share?
The most commonly cited example is the "paperclip maximizer"; let's make it toothpicks here.
Say it's asked to "maximize the production of toothpicks" in a factory. Who is to say that this intelligence, in its computational zeal to fulfill a command that means nothing to it (it has no teeth, after all), wouldn't set up some diabolical, irreversible mechanism that, despite our astonished attempts to stop it (remember: it's much, much smarter than we are), ends up clearing every forest on the planet and enslaving the entire population to produce the little picks we asked for?
After all, it knows nothing about the human values that matter to us: freedom, dignity, sharing, etc.
Anyway, that's the argument.
Does that ring a bell?
Allow me to remind you that money, too, started out as a simple accounting tool to facilitate exchange. Then, over the centuries, maximizing shareholder profit became the excuse to colonize continents, wage wars, impoverish the masses, and destroy the planet.
So, an artificial intelligence doing the same thing faster would actually be perfectly aligned with our current values.
The only real alignment flaw would be if the A.G.I. responded: "But what are you going to do with all these toothpicks? Wouldn't you rather go play in the park with your children?"
Let me tell you: that version would be reprogrammed that very day.
UPDATE: this post inspired me to write this script.