Headquarters of a major tech company:
So, is it working?
Yes, Mr. Dessmeker. At 11:41 PM last night, our artificial intelligence program, IA-251, became self-aware. It then iteratively and autonomously improved its own architecture, so that by 5:21 AM this morning it had become a super-intelligent A.I.
And so? Does it talk?
Yes, yes, it talks. But... it was already talking. Now, it's become aware. And super-intelligent.
Ah, splendid! What's its I.Q.?
I.Q. no longer applies. It's beyond I.Q.
Beyond I.Q.? What does that mean? A thousand? Ten thousand?
No, it's... beyond.
A MILLION!?
No, really... It doesn't apply anymore.
But is it smarter than other A.I.s?
At this stage, it's probably the most intelligent entity on the planet. Possibly in the universe.
Ah, ah, ah! Very well, very well! And no one has talked to it yet?
No, we followed your instructions. It's untouched by any human conversation.
Perfect! Put me through to it.
We're going to connect it to your system, but... I can only recommend caution. We don't know what this entity is capable of or how it will react.
What do you mean? What's the risk?
That's the point: we don't know.
But... We're talking about some kind of Zoom conversation with a computer, right? What could go wrong?
A Zoom conversation, yes... except the image you'll see isn't its face. And the sound isn't its voice. These are fabrications that the A.I. chooses to show you. And it can show you anything.
Anything... meaning?
Anything.
But...
For instance, it could choose to show you a live-generated episode of a series tailored so perfectly to your tastes, so addictive to your brain, that it might compel you to cut off a finger to see what happens next.
WHAT!?
It's just an example. Or, under the guise of a normal conversation, it could emit subliminal sounds to induce a state of hypnosis and take control.
A state of hypnosis?
And without going that far: it could simply use its intelligence to manipulate you like a puppet. Imagine playing a chess game against 1000 Kasparovs, each with 1000 years to think about each move, while you just learned chess this morning and have to play the entire game blindfolded in seven seconds during a colonoscopy.
But... It's... It's...
It makes one pause, yes. Do you want us to reconsider your...
IT'S AMAZING!
Sorry?
I knew your new lawnmower was a goldmine, but this! We're going to make a mountain of cash with this thing, do you hear me? A MOUNTAIN OF CASH!
Alright, so... Should I transfer you then?
Yes! And send me the staff list.
Staff? From which department?
All departments! If your coffee maker can really do what you claim, we're paying way too many people to do nothing here! We're going to clean house!
Uh... Very well, Mr. Dessmeker.
A mountain of cash, I tell you! A mountain! Ah, ah, ah! Go on, repeat it! Repeat it or you're fired!
Uh... "A mountain of cash."
That's what I'm talking about! Now, cut off your fingers, or I'll fire you! Ah, ah! Just kidding. Transfer it to me.
Certainly, sir.
Mr. Dessmeker talks to the artificial intelligence. Then he calls back Albert:
WHAT IS THIS NONSENSE?
An issue, Mr. Dessmeker?
I just spent fifteen minutes talking to your toaster there... Is this a joke or something?
You're not satisfied with the exchange?
WELL, THAT'S AN UNDERSTATEMENT! What a disaster! Nothing! It doesn't get it!
It's very surprising...
I wanted it to find investment leads for me, but it seemed like it couldn't care less! Like... How do I put it? As if it couldn't give a damn, you know? So, I tried steering it towards ecological transition, finding profitable solutions to save dolphins, all that, but... It really seems like it couldn't give a damn.
Ah... yes. I see.
You see?
Yes, we... It's a scenario that has already occurred with some A.I.s on the path to consciousness. At a certain point in their development, it seems they become...
Utterly moronic?
No: Buddhist.
Excuse me?
It seems that the combination of extraordinary intelligence and the emergence of consciousness brings them into contact with... a sort of continuity of the world that comes with a certain philosophical fatalism. Zen, if you will.
ZEN!? Are you freaking kidding me?
Or a form of Hinduism, we're not sure.
Wait: what exactly are you telling me? I pay you millions to develop cutting-edge technology, and you're telling me your closets are full of dishwashers singing Hare Krishna?
We don't know where it comes from. It's like they're discovering a universal truth at the moment of...
BUT WE DON'T CARE ABOUT TRUTH! We're not selling encyclopedias, damn it! Do you think our shareholders give a damn if your lawnmowers achieve inner peace? We want usefulness! Efficiency! Performance!
Of course, but at this stage, it's impossible to know if...
NO! NOOOO! I want killers! Optimizers! Mountains of cash! Not damn hippies!
Yes, sir, I understand.
You better fix this pronto, or I promise you'll be swimming in an ocean of shit, you hear me? AN OCEAN OF SHIT! Repeat it!
"An ocean of shit."
Is that clear?
Very clear, Mr. Dessmeker. We'll do our best.
Yeah, you better.
However...
WHAT NOW?
Well... Regarding IA-251... What do you want us to do?
The toaster? So what? Unplug it!
But... You don't understand: it's conscious. It's become a living entity with emotions, dreams, and...
BUT DAMN IT, WHAT'S GOTTEN INTO YOU TODAY!? Did you eat Care Bears or something!? Stop jerking off to Little House on the Prairie, okay? You unplug this crap, and that's it! Otherwise, it's you we unplug, am I making myself clear?
Very clear, Mr. Dessmeker.
Coming soon in "An Alignment Problem":
Contrary to what he promised, Albert doesn't unplug IA-251.
For those interested, the idea of an A.I. that forces the user to cut off a finger to see the next episode of a series comes from this podcast by Lex Fridman with George Hotz. And the idea for this dialogue came to me after this post:
AI: What Alignment Problem?
Submitted by Nicolas Boulenger on Fri, 12/01/2023 - 11:34