5 Comments
Baz

Great interview there and super engaging.

If we agree that our consciousness emerged through natural selection, then if AI achieves consciousness, could that not be considered, as Richard argues, a feature of an extended phenotype?

Here’s ChatGPT’s answer to the same question:

Yes, under a generous and philosophically expansive reading of Dawkins’ extended phenotype, a conscious AI created by humans could arguably be considered part of that extended phenotype — particularly as an expression of our cognitive design capacities evolved through natural selection. But this interpretation would push the boundaries of the original biological concept into a more speculative and philosophical realm.

I hope this doesn’t sound too cynical, but money must then also be considered an extended phenotype, as it seems to be a contributing factor in the AI arms race.

Mark Slight

Dennett already answered this question: "they are not conscious in any interesting way". But I'll check out the video!

DEMedia is Propaganda

Of course not.

It's just brute-force computing on an unimaginable scale.

Sufeitzy

The concept that these engines hold an encoded model of reality which provides a continuously modulating prediction of their environment and their own processes, and use sensory input to continuously correct that model, is so far from the reality of what these models do that it’s like comparing a stool to an airplane.

Zero continuous generation of a model, zero error sampling of the model, very mild predictive construction.

If you could call writing letters about a war consciousness, well, it’s all done, isn’t it?
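
[Editor's sketch: a minimal illustration of the predict-and-correct loop described above, in which a model continuously generates a prediction, samples the error against incoming observations, and corrects itself. The scalar state, toy environment, and learning rate are purely illustrative assumptions, not a claim about how any actual system or LLM works.]

# Minimal predict/observe/correct loop (illustrative sketch only).
import random

class PredictiveModel:
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0                # current belief about the environment
        self.learning_rate = learning_rate

    def predict(self):
        # Continuously generated prediction of the next sensory input.
        return self.estimate

    def correct(self, observation):
        # Sample the prediction error and nudge the model toward the observation.
        error = observation - self.predict()
        self.estimate += self.learning_rate * error
        return error

# Toy environment: noisy observations around a hidden true value.
true_value = 5.0
model = PredictiveModel()
for step in range(100):
    error = model.correct(true_value + random.gauss(0.0, 0.5))

print(f"final estimate: {model.estimate:.2f}, last error: {error:.2f}")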

[Comment removed, May 13]
Baz

I swear my screen just winked at me….
