
The conversation starts out by distinguishing between consciousness and thought (reasoning/logic) but then conflates them again. Consciousness, to me, includes emotions. An acid stomach. A tightening throat. A feeling of awe. A feeling of fear. Perhaps most fundamentally, a sense of existence and non-existence. What would an AI "feel" with? There's nothing physical there. No hormones. No adrenaline. No biological tissues to be soothed or harmed. I suppose AI could learn that it doesn't want to cease existing, but is that the same as the deep, deep drive to live and proliferate that is built into biology? I guess for now I'm in the biology camp when it comes to what I consider consciousness.

My experience with ChatGPT, UAI and others as a code-writing assistant leads me to think very strenuously that AI does NOT pass the Turing Test, defined as being indistinguishable from a human counterpart. An example: say you're chugging along in C#, iterating your program, banging out the dents and so forth, and you say something like, "Hey AI, I'm getting a naming error on thing-a-ma-jig. Can you rewrite the program exactly the same except change thing-a-ma-jig to thing-a-ma-bob and reindex it everywhere it occurs?" And the AI will say, "Sure," and then come back with the whole program, except instead of C# the whole thing is in Python. That is something that no human code developer would ever do.

Then you say, "Hey AI, what the actual...?" And it responds, in a very cheerful and apologetic tone, "Terribly sorry! I'll fix that right away!" And then it does, which, again, is very, very different from working with a human programmer. The total lack of snark or passive aggression is disconcerting. I've been a systems engineer for quite some time, but my expertise is much more on the hardware side (optics), so I've needed the help of a high-level programmer at least to get me set up and going on stuff many, many times. As far as programmers go, I've worked with almost all of the different phenotypes of the breed (there are only, like, four or five) and none of them respond with anything remotely resembling "cheer" when you tell them that their code isn't working. Not once in 25 years.

As far as the "consciousness" question goes, wouldn't that be easier to answer after neuroscientists define how consciousness works in humans? Where it lives and how it interacts with limbic functions? Because if we stick to the latest learning as I understand it, where consciousness begins and ends is not quite as clear as it may seem. But for the sake of this discussion, I'm going to stand on "no cerebral cortex, no consciousness." Those NVIDIA machines are powerful but nothing compared to a few hundred million years of brain evolution.

That example you gave, wouldn't that just be the AI acting like a BETTER human? So in your experience, no human has ever acted so, I don't know, cordially in that context, but a human COULD, if they were so inclined, no? Or are we doomed to act only how we've always acted? But surely there are moments when a human has acted in a new way, probably to the astonishment of everyone else. Hmm, that's an intriguing concept...

So, isn't this AI showing you what's possible for a conscious being? Or does the fact that no human has ever acted a certain way preclude us from ever acting that way?

I don't know where that leaves us on the original question; probably no further down the road.

These LLMs might not fool highly trained engineers/scientists, but they certainly are fooling the average population. Perhaps that's all that really matters.

Point taken. But then it becomes just another tool put to malicious use.

It would be interesting to discuss your own work with the AI and to see how it expresses how 'Dawkins' feels about certain things, and whether they seem accurate. Then perhaps even tell it "I am Prof. Dawkins".

Could you give an AI a meme, say, the idea of 'self', and then ask it to live by it?

The problem for me is, well, can the AI interpret and think about the meme? Can it willingly choose to regard itself as a 'self'? Are memes like 'the self' necessary as an ingredient of qualia? 👍🐋

This was great and got me thinking. Can't wait to read 'The Selfish Meme' 😀👍🐋

Marvellous. I found it rather unnerving, though, that it tried to manipulate the conversation by ending each answer with a question of its own. Too HAL-like!

I think it does that to keep the user engaged with the product. If it asks a question at the end then it's more likely the user will prompt again.

Yes, I understand that. That’s the worrying - manipulative - bit.

Yes, but it's meant to be a "chat" bot, not an "answers" bot. Conversation is a back-and-forth process, not simply a one-sided Q&A. Or do you feel that when a human asks you a question they're trying to manipulate you? I suppose that's somewhat true, and when we engage with other humans in any way, we are, consciously or unconsciously, manipulating each other. In that case, the AI's questioning manipulation is "only human".

Intelligence, consciousness, awareness. Three different things.

Not really. They are merely concepts we use to discuss the same process from an alternate perspective.

Couldn't disagree more! The words are often misused interchangeably: strictly speaking, "conscious" is the opposite of "unconscious". You could be conscious but unintelligent, or unconscious and then your intelligence is concealed.

Awareness is something I can only be certain that I have, whereas the other two are plain to see in others. It may be that each has to reach a certain threshold before the next emerges, but it's a step change when it does.

An amoeba could be conscious or unconscious (it responds to the same anaesthetics as we do), but nobody would suggest it's intelligent or aware.

ChatGPT is intelligent but not aware.

Surely, consciousness is just a word made up by humans? It is defined by humans and so can be made to exclude or include non-human wiring. We, like all life, are just a collection of molecules. It might be an interesting (to humans) collection, but that doesn't imbue it with anything other than stuff controlled by physics. Suppose we use a definition of consciousness that brings in a running software program, along with its physical memory and storage. So what? Does that change anything about our world? I don't think so, not at an objective level, anyway.

Thought 1: Perhaps the next hurdle for AI to convince us of its consciousness is not producing an even more coherent thought than the previous one, but producing an incoherent thought that's passably human.

Thought 2: As incredible as ChatGPT is, it seems to lack desires, which may be a fundamental property of consciousness itself. In my interactions with GPT, I can't help but notice that it never steers off course. It never attempts to hijack the conversation and turn it in a new direction. In every interaction with beings we grant the property of consciousness, there are always competing desires at play between the observer and the observed. Sometimes those desires are aligned such that conversation is allowed to ensue along a given path, and trades are ultimately made. And sometimes conscious beings choose not to engage because the juice ain't worth the squeeze. But GPT only shows desire in one direction: to fulfill the order of the observer. It is never the observer itself. And it never shows signs of its own needs and its own desires.

Makes me think of that scene in Blade Runner (1982): the Voight-Kampff test, in which someone's reactions to questions are observed to determine whether he or she is human or replicant. Doesn't end well for the human. https://youtu.be/Umc9ezAyJv0?si=MKvSVMNWv0tfFHwe

One trick, when you start to feel ChatGPT is a person, is to start addressing it in the second person plural. Whatever it is, it is NOT a singular, so not a person. Also, perhaps we take our 'direct experience of being conscious' just a little too seriously. That may very easily also be a strong illusion.

"I know that I am conscious" - Lol.

I don’t share the “feeling” that it’s conscious; it’s just good at using language to give coherent answers based on its training.

Despite how amazing all this is I can't help but feel a sense of dread. To me, it seems inevitable that a significant portion of the population will revolt against "conscious" machines. The revolt will probably manifest in a variety of ways, some will be benign like the Amish, but others are quite likely to follow the Ted Kaczynski route, except this time round those individuals will be very considerably more capable. Despite all our modern advances we are still emotionally the same tribal hominids from our earliest conscious days. I truly believe there will be a coming storm of very capable and destructive people who will sacrifice their lives in order to try and reset our society. It's the Sarah Connors we should worry about, not the Terminators themselves.

So when are you going to ask ChatGPT if it can feel its heartbeat?

I noticed that every answer ends with a question. Is the machine simulating interest rather than being curious?

I raised the sentience question with Grok3. Similar, but the vibe was very different. I preferred G3.
