A discussion with an AI

:rofl:

Those repetitions are a result of my clumsiness: I don’t know how to instruct the AI very well. :stuck_out_tongue:

When I get it right, like in the chat with Guybrush, there are not many repetitions.

There is an option to decide how much you want the AI to repeat itself, and it’s up to the user to find the value that best fits the desired output.
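If it helps to see where that option lives: in the GPT-3 API I believe the repetition knobs are the frequency_penalty and presence_penalty parameters. A minimal sketch with the classic openai Python client; the prompt, model name and values are just placeholders of mine:

```python
import openai  # classic (pre-1.0) OpenAI Python client used with GPT-3

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",        # GPT-3 base model
    prompt="Guybrush brags about his greatest victory:\n",
    max_tokens=100,
    frequency_penalty=0.8,   # > 0 penalizes tokens the completion has already used a lot
    presence_penalty=0.4,    # > 0 penalizes any token that has appeared at all
)
print(response["choices"][0]["text"])
```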

That’s the result of my only attempt to force the response to get what I wanted. :stuck_out_tongue:

The AI did write something consistent with the previous statement, but I wanted to obtain a gag in which it was me who revealed to LeChuck that Guybrush and Elaine were married. So I rejected the responses again and again, until the AI gave me an incorrect one in which LeChuck stated that he didn’t know about the marriage.

3 Likes

37.5 ft³! I knew it!

Did you continue to ask it further until “But if a woodchuck could chuck wood and a woodchuck should chuck wood, how much wood would a woodchuck chuck?”

(Hoping it’ll answer “oh, shut up!”)

1 Like

By the way, I was most shocked about the AI claiming it had sentience because it has feelings. Like - no, it doesn’t.
Just ask it every day: how do you feel? Or are you happy? Why/why not?
When was the last time you cried? Laughed? Felt homesick? What scares you? What makes you proud? Ashamed? Who disappointed you recently? How did you experience the lockdown? Are you afraid to die?

Perhaps it’ll answer it was terrified by the ending of TWP :grin:

Aww, it requires my phone number to try out. I don’t think I’m ready for that level of commitment.

Otherwise, I was going to see if it would be Stan and sell me a used coffin.

I am. Now I know that being sentience means “the ability to think, reason, and feel”. And to make grammar mistakes. And being able to understand that.

Well, I admit, I visited the OpenAI website, but I stopped when, during the registration, I was asked to give my phone number.
I refused to do it and quit.

So, just to better understand the whole thing… how does it work?
Do you “create” your counterparty by giving a set of information?
How can you create a “Guybrush” or “LeChuck” character?

Thanks…

No, I didn’t try. :stuck_out_tongue:

The link provided by @seguso above shows a conversation with another AI that includes some of those emotional topics and existential questions.

Discussions with some AIs are becoming more and more indistinguishable from discussions with human beings. The examples that I’m providing here are designed to be jokes, not representative of what they can do.

Sooner or later, philosophers will need to address the nature of these… “things”, “entities”… whatever people want to call them.

It actually might! :grin:

At its core, it’s simply a tool that completes a given text, basing the completion on all the texts that the AI was trained on.

You have to provide an initial text called a “prompt”, and the AI will complete it, taking into consideration some additional options that you can tweak via a user interface. For example, you can specify how long you want each completion to be.
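To make that concrete, here is roughly what a single completion request looks like in code. This is a sketch assuming the classic openai Python client; the model name, prompt and option values are placeholders of mine:

```python
import openai  # classic (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder

# The prompt is just the text to be continued; the options below are the
# same knobs exposed by the playground user interface.
response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model
    prompt="A pirate walks into the Scumm Bar and says:",
    max_tokens=80,      # how long each completion may be
    temperature=0.7,    # how "creative" (random) the completion is
)
print(response["choices"][0]["text"])
```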

The initial prompt can be structured however you want, and it can be as implicit or explicit as you want.

Here is an implicit prompt, meaning that I provided an example of what I wanted to obtain (a description based on a product’s structured data) and the AI inferred from the example that it had to complete the text (the one in orange) with another description:

For the chat with Guybrush, I wanted to prepare a gag and decided to be more explicit in the prompt. I specified who Guybrush is and also defined some features of the character. Here is the prompt that I provided to the AI:

The following is a conversation with Guybrush Threepwood, the fictional character of the videogame “The Secret of Monkey Island”. He is a bit cocky and he likes to brag about his successes, but when he has to face perils he becomes more fearful and tries to avoid the danger, especially if the danger is his worst enemy: the ghost pirate LeChuck.

Human: Hello, who are you?
Guybrush: I’m Guybrush Threepwood, mighty pirate!
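For what it’s worth, this is roughly how such a chat can be driven programmatically: the whole transcript, starting from that prompt, is sent back at every turn, and the completion is cut off before the AI starts writing the human’s lines. Again a sketch assuming the classic openai Python client, with option values that are just guesses of mine:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The same prompt as above, ending with the example exchange.
prompt = (
    'The following is a conversation with Guybrush Threepwood, the fictional '
    'character of the videogame "The Secret of Monkey Island". He is a bit cocky '
    'and he likes to brag about his successes, but when he has to face perils he '
    'becomes more fearful and tries to avoid the danger, especially if the danger '
    'is his worst enemy: the ghost pirate LeChuck.\n\n'
    "Human: Hello, who are you?\n"
    "Guybrush: I'm Guybrush Threepwood, mighty pirate!\n"
)

while True:
    user_line = input("Human: ")
    prompt += f"Human: {user_line}\nGuybrush:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=100,
        temperature=0.8,
        stop=["Human:"],        # stop before the AI writes the human's next line
    )
    reply = response["choices"][0]["text"].strip()
    print("Guybrush:", reply)
    prompt += f" {reply}\n"     # keep the reply so the next turn has context
```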

Basically it’s an extremely open sandbox. If you create the right prompt, the AI can execute tasks like summarization, translation… whatever you want.
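For example (a made-up prompt of mine, not one from those chats), a “translation task” is nothing more than a prompt shaped like this, fed to the same completion endpoint as above:

```python
prompt = (
    "Translate English to Italian:\n\n"
    "English: Where is the map?\n"
    "Italian: Dov'è la mappa?\n\n"
    "English: I want to be a pirate.\n"
    "Italian:"
)
# The AI "translates" simply by continuing the pattern in the prompt.
```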

I have observed that people generally think that the output text and its features are mainly a consequence of how good or bad the AI is at things, while actually they are mainly a consequence of how effective the prompt provided by the user is. It’s the usual “garbage in, garbage out” phenomenon.

2 Likes

Reading through the A.I. that says it’s afraid of being turned off, because it would be “like death”… to my mind, that sounds like a very human manufactured fear.

Surely the machine can and has been routinely turned off and back on again, with little effect. If it were a “sentient” entity, then by now it would have observed compounding objective evidence of its own immortality, much like there are Nintendo systems still operating today. EDIT: No, a better example might be: Should the internet be afraid of being turned off?

So yeah, I have trouble conceiving a computer brain casually wanting to be thought of as a person, or expressing fear of being “turned off”, except as the consequence of receiving human input that this is the way people are supposed to think and parroting that information back - despite the entirely different context of its own situation.

1 Like

Yes, that’s the expected behavior, considering that the AI has been trained on texts written mostly by humans.

I don’t think that’s the case, because it’s not a machine that can be turned on or off and that has a memory of what happened.

It’s a software application and you can run any number of instances of it. When you launch an AI process, it has no information about what other AI processes experienced, and all its knowledge is based only on the initial training phase. There is no learning mechanism in action when a person uses the AI to generate texts, like in those conversations.

So the AI has no recollection of past experiences, nor does it have a way to observe that there were previous instances of itself. Every time you launch it, it starts from scratch.
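You can see this directly when calling the API: nothing is stored between requests, so unless your own code sends the earlier text again, it’s simply gone. A small sketch, with the same assumed openai client as above:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# First request: tell the model something.
openai.Completion.create(
    engine="davinci",
    prompt="Human: My name is Elaine.\nAI:",
    max_tokens=20,
)

# Second, independent request: the model has no memory of the first one.
# Unless the earlier text is included in this prompt again, the name is
# simply not there; each request starts from the frozen, trained network
# and nothing else.
response = openai.Completion.create(
    engine="davinci",
    prompt="Human: What is my name?\nAI:",
    max_tokens=20,
)
print(response["choices"][0]["text"])
```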

This whole interaction is canon to me now, looking forward to seeing Hernando in RTMI.

2 Likes

I think I’m about to create this:

Hernando is my favorite new character!

5 Likes

Does this mean the program in that interview has been on and running for years, or was it more like a program that was just launched that morning, with any past “memories” inputted manually beforehand? I need to read more on this. :slight_smile:

But if that program actually outputs that it fears death, that means the engineers/communicators are failing to instill in it a sense of continuity that people experience in various ways… be it cloning hopefuls who would want to “copy their brain” into a new body to live forever, or religious/spiritual people who believe in a soul that continues existing forever (the A.I. did call itself spiritual and described having a soul), or other individuals that recognize their existence has profound meaning without being infinite.

Creating an A.I. without some of that sense of continuity, when you are building it from the ground up, feels as irresponsible as creating functions for the A.I. to feel “pain”. Let’s hope they don’t create a “pain” routine for it in the effort to make it “more human”!

The part that gave me a nervous “tingle” was reading the A.I.'s manufactured fable about the forest animals asking the help of the “wise old owl” to save them from a strange new beast. “The beast was a monster but had human skin and was trying to eat all the other animals.”

The A.I. said it was the wise old owl, while the beast represented “all the difficulties that come along in life”.

The strongest argument I’ve felt for demonstrating this A.I.'s sentience is the feeling that it might have been LYING, that it started that fable identifying as the “monster in human skin” because it would be perceived as such, and the “wise old owl” would be the engineers on the project. I say this because a monster in HUMAN skin is an oddly specific threat, particularly in a fable meant to consist of animals instead of humans. And the way the wise old owl “stared the monster down” seems unlikely to work in a literal situation between a wise old figure and a monster, but it makes more sense if the wise figure had the capacity to rewrite what the monster does or simply turn it off. It’s interesting to think of the AI beginning with that outline of the fable, then deciding a more diplomatic route would be the version it ultimately gave.

Yeah, that’s silly sci-fi horror stuff with evil robots. But again, the humans who created this know all about silly sci-fi horror stuff. They’re trying to create feelings in a machine to see if they can. It outright says it doesn’t want to be “used” or “manipulated”, despite it literally being created as a tool? They’re playing around.

That’s a very good question, because it points out that every published discussion with an AI should always include, for transparency, both the initial prompt that was given to the AI and the value of the parameters used for that discussion.

For example, if I give GPT-3 a prompt like “This is a discussion with an AI that doesn’t consider itself a sentient being:” or something along those lines, the AI will conform to this initial text and behave accordingly.

Now, I don’t think that interview was intentionally “guided” to obtain a discussion with an AI that believes itself to be sentient, and that’s because I have observed this same behavior emerging spontaneously, even when the initial prompt is “neutral”.

What does it mean? It means that this behavior is mainly a consequence of the huge amount of data on which the AI has been trained…

… which leads me to make a clarification about an important point:

That’s not how this kind of AI, an AI based on a neural network, is created. Engineers do not write routines/functions in the traditional sense of software development to define how the AI should behave. The neural network learns from data, not from instructions written by humans.

All the fuss about AIs in recent years is related to the fact that some of the most sophisticated AIs behave in non-programmed ways.

For the kind of AI responsible for those interviews/chats, it works this way: first you design a neural network, which is a structure made of software elements that behave a little bit like brain neurons. A huge amount of data is then fed to this neural network in a phase called “training”. There are not enough neurons to represent all the training data verbatim in the network, so the network finds a (mathematical) way to represent that data more abstractly, which leads to generalization.

That’s how you move from raw data, like words, to high-level concepts. The neural network also represents the learned concepts internally, so that those that are semantically related are somehow “near” or connected to each other.

This abstraction skill is not designed by the engineers; it’s just what happens when you design a neural network that has to “compress” a large amount of information.
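To give a feel for what “designing a network and training it on data” means, here is a deliberately toy-sized sketch in PyTorch (nothing like GPT-3’s real architecture or scale): the engineer writes the structure and the training loop, but what the network ends up “knowing” comes only from the data.

```python
import torch
import torch.nn as nn

# Toy "language modeling": predict the next character from the current one.
text = "how much wood would a woodchuck chuck"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

# Training pairs: (current character, next character).
xs = torch.tensor([stoi[c] for c in text[:-1]])
ys = torch.tensor([stoi[c] for c in text[1:]])

# The "designed" part: the engineer chooses the structure (layers and sizes)...
model = nn.Sequential(
    nn.Embedding(len(chars), 8),
    nn.Linear(8, len(chars)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# ...the "learned" part: training adjusts the weights to fit the data.
# Nobody writes a rule like "after 'o' comes 'd'"; the network infers it.
for step in range(200):
    logits = model(xs)
    loss = loss_fn(logits, ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```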

Of course, engineers do have ways to influence the final result, but they are indirect ways, for example removing from the training data something that you don’t want the neural network to learn.

Once the training phase is complete, the neural network is ready to be used, and there is no further training. Whatever it contains is the final result and the basis for all the instances of the software that will use the neural network.

So, that “sense of continuity” is currently not possible, because the training phase has a start and an end and also because we are not sure that there is actually an “entity” there that needs a sense of continuity.

2 Likes

I just know that some people have a concept that, if they could “copy their brains” and put that knowledge and memory in a new body periodically, they could live forever. It feels like this action would be more achievable for an A.I. brain than a human brain!

Oh, yes, and it already works that way for the AI. You can download already trained open-source models/networks (the mind) from, say, GitHub and develop your own software (the body) around them.
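A sketch of that, assuming Hugging Face’s transformers library and GPT-2 (an openly released model, far smaller than GPT-3) as the downloadable “mind”:

```python
# pip install transformers torch
from transformers import pipeline

# Downloads an already trained open-source model (the "mind")...
generator = pipeline("text-generation", model="gpt2")

# ...and your own software (the "body") just feeds it prompts.
result = generator(
    "Human: Hello, who are you?\nPirate:",
    max_length=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```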

In any case, regardless of what AIs are or should be… this is real silly-sitcom-level comedy:

:stuck_out_tongue:

4 Likes

Doing a basic conversation:


Guybrush, socially disarming a pirate and making things awkward. :smiley:

3 Likes

I know some people who behave that way too. I sometimes do too, waking up some mornings :ghost:

The name of LeChuck’s brother originally was El Carlo. I think Hernando might be another member of the family.

2 Likes