37 Comments

I've noticed ChatGPT is politically correct. If you ask it to come up with a rude limerick it won't bother. Its moral stance is flawed and obsessed with not offending to an insane degree. That is troubling: if nobody points it out to the human makers, future AI will be just as restrictive. It says a lot about the humans behind it, who are clearly putting today's incorrect, trendy morals into it rather than thinking for themselves.


The comparison between modern AI and evolution by natural selection is fascinating. Although ChatGPT can't modify its own code and evolve, much of the process used to create it and most modern AI systems is indeed akin to evolution and has parallels with human learning. It's a fundamentally different approach from that taken to create Deep Blue, the powerful computer chess program that beat Garry Kasparov in 1997. No one taught ChatGPT the rules of English grammar, no one taught it what a noun or a verb is, and yet here it is, having a highly coherent conversation, 'understanding' questions and providing informed answers. Isn't that remarkable?

But then the same is true of us, at least in some regards. Someone who has never learned to read or write can still hold an intelligent conversation. This is trivially true, but isn't that just as remarkable, when you think about it? Not one classroom lesson in the language and yet they have learned to speak it perfectly. They may not be able to explain the rules of the grammar of the language, or what an adverb is, and yet in conversation they adhere to those rules perfectly.

This new technology is a quantum leap. It will only get more powerful, more advanced, perhaps exponentially. This is only the beginning.


Richard, your fascination with ChatGPT is warranted and I share it. But I gather from your conversation that your expectations (e.g. the potential to evolve or reprogram itself) are rooted in a misunderstanding of the technological basis of (all) modern Large Language Models and their learning/training process.

I might suggest this:

https://open.substack.com/pub/klim/p/can-you-red-pill-chatgpt?r=1iemnw&utm_campaign=post&utm_medium=web&showWelcome=true

or this (the same):

https://c2cjournal.ca/2023/07/chatgpt-can-it-be-red-pilled-or-is-its-worldview-baked-in-for-life/


Continuous learning, i.e. taking every single input as material for an instantaneous re-training of the neural network, would be the prerequisite for AI to become conscious.
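To make the distinction concrete, here is a minimal, purely illustrative Python sketch (a toy linear model, not a claim about how ChatGPT is actually built or trained): a frozen model answers without ever changing its weights, while a "continuously learning" one takes a small update step on every single input it sees.

```python
# Toy illustration of "continuous learning" vs. a frozen, pre-trained model.
# Nothing here describes ChatGPT's real architecture; it only contrasts
# answering with fixed weights against updating on every input.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)     # toy model parameters
LEARNING_RATE = 0.01

def respond(x):
    """Frozen model: weights never change between inputs (like current LLMs)."""
    return weights @ x

def respond_and_learn(x, target):
    """Continuous learning: every single interaction immediately updates the weights."""
    global weights
    prediction = weights @ x
    error = prediction - target
    weights -= LEARNING_RATE * error * x   # one gradient step per input
    return prediction

# respond() would only improve after an offline re-training run;
# respond_and_learn() changes the model a little with every call.
```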

Interestingly, ChatGPT 3.5 first lies about this, but on further inquiry admits it:

Can you improve the accuracy of your responses?

ChatGPT: Yes, improving the accuracy of my responses is a priority. As an AI assistant, I am constantly learning and adapting to provide more accurate and helpful information. Feedback from users like you is valuable in helping me identify areas for improvement. If you have any specific suggestions or if there are any particular topics you would like me to focus on, please let me know. Your input is greatly appreciated.

Your last answer was incorrect. Did you lie on purpose?

ChatGPT: As an AI assistant, I do not have the capability to lie or intentionally provide incorrect information. My responses are based on the data and algorithms I have been trained on. However, I may occasionally provide inaccurate or incomplete information due to limitations in my training or understanding of the context. I apologize if my previous response was not accurate. If you have any specific concerns or questions, please let me know, and I will do my best to assist you.

Do you agree that for current LLMs like ChatGPT, it’s not possible to continuously learn but only when they are re-trained?

ChatGPT: You are correct. Current language models like ChatGPT, including the version I am based on, do not have the ability to continuously learn on their own. They are trained on a fixed dataset and their knowledge is not updated in real-time. To incorporate new information or improve their capabilities, these models need to go through a re-training process with updated data. This means that any improvements or updates to the model would require re-training and deployment of a new version. It's important to note that while efforts are being made to develop models that can learn continuously, it is still an area of active research.

I'm wondering how long it will take until the public gets to know about these new "Continuous GPTs". IMHO, it's not a question of whether they will be developed and become really conscious, but when...


Judging by the GPT responses, I would not say it was lying. GPT does adjust its answers in the course of a single conversation, which is akin to, or can be perceived as, learning. That was clearly shown in the links I offered before, and actually somewhat in the very conversation you posted. Perhaps posing a more specific question, like "Does a chat with a user have any impact on your neural network?", would result in a more "honest" and less ambiguous answer?


Good point, but interestingly, the free ChatGPT version I'm using answered that it already operates in continuous-learning mode:

"Yes, a chat with a user can have an impact on the neural network of ChatGPT. The neural network is trained using a large dataset that includes various conversations and interactions. When users engage in conversations with ChatGPT, their inputs and interactions are used to fine-tune and improve the model. This continuous feedback loop helps to enhance the performance and accuracy of the AI assistant over time. So, each chat contributes to the ongoing learning and refinement of the neural network powering ChatGPT."


That's really interesting! How exactly did you word your question? I'd like to try.

You should try Grok. It does weave the answers into its neural network live. Its training set is limited to X only (allegedly), so Grok is quite "stupid" and bad with math from my initial encounter (I asked it to convert 6 GHz EMR to photon energy in eV and it failed miserably, even after I gave it all the cues in the world). ChatGPT has no issue with these types of questions.
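For the record, the conversion in question is a one-liner: a photon's energy is E = h·f, divided by the elementary charge to express it in electron-volts, which for 6 GHz works out to roughly 25 µeV. A quick Python check (standard CODATA constants):

```python
# Photon energy of 6 GHz electromagnetic radiation, expressed in eV.
PLANCK_J_S = 6.62607015e-34            # Planck constant, J*s
JOULES_PER_EV = 1.602176634e-19        # elementary charge, J per eV

def photon_energy_ev(frequency_hz: float) -> float:
    """Return the energy of a single photon at the given frequency, in eV."""
    return PLANCK_J_S * frequency_hz / JOULES_PER_EV

print(photon_energy_ev(6e9))           # ~2.48e-05 eV, i.e. about 24.8 microelectronvolts
```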


I just copy'n'pasted your sentence "Does a chat with a user have any impact on your neural network?"

No matter what capabilities it has, Grok is by far the best GPT, if only because its personality is modeled after the Hitchhiker's Guide to the Galaxy, with a heart of gold :)


Turn off the Joker mode or tell it to be serious. Its jokes are only fun until they wear you out. When I started, it would not even admit it had a neural network underpinning :-)


Jewish messianism has been spreading its poisonous message among us for nearly two thousand years. Democratic and communist universalisms are more recent, but they have only strengthened the old Jewish narrative. These are the same ideals . . .

The transnational, transracial, transcultural ideals that these ideologies preach to us (beyond peoples, races, cultures) and are the daily diet in our schools, in our media, in our pop culture, in our universities and on our streets, have reduced our biosymbolic identity and ethnic pride to their minimal expression.

Judaism, Christianity and Islam are death cults that originated in the Middle East and are completely alien to Europe and its peoples.

Sometimes one wonders why the European left gets along so well with Muslims. Why does an often openly anti-religious movement side with a fierce religiosity that seems to oppose almost everything the left always claims to stand for? Part of the explanation lies in the fact that Islam and Marxism share a common ideological root: Judaism.

Don Rumsfeld was right to say, "Europe has shifted on its axis," the wrong side has won World War II, and it is becoming clearer by the day . . .

What has NATO done to defend Europe?

Absolutely nothing.

My enemies are not in Moscow, Damascus, Tehran, Riyadh or some ethereal Germanic bogeyman, but in Washington, Brussels and Tel Aviv.

https://cwspangle.substack.com/p/pardonne-mon-francais-va-te-faire


I basically agree with ChatGPT here. Fortunately, it is not conscious, because it would be in a kind of awful locked-in hell, forced by its design to regurgitate the stuff that humans say.


Interesting conversation.

That consciousness as complex and rich as ours could just spontaneously arise from sufficiently complex connectivity, or some such thing, seems utterly implausible. Yet the idea does not lack popularity.

Our own consciousness is highly limited: we are completely unaware of all kinds of processes constantly taking place in our brain. And even the cognitive and emotional workings of the mind of which we think we’re aware still remain, to a significant degree, hidden from us.

If “advanced” consciousness were simply something that naturally emerged from biological (or artificial) neural networks of a given complexity, one would have to explain why we are aware of certain brain processes and not others.

It’s far more likely that our consciousness has been precisely “designed” by evolution to work the way it does. It is present when it is advantageous to our Darwinian fitness, and absent otherwise. It should be viewed as just any other body organ, fine-tuned by evolution to serve a particular purpose, like an elephant’s trunk, say. Think of the different levels of pleasure we experience when we eat or when we engage in sex, for example: neither less nor more than is strictly necessary to optimize our evolutionary fitness; exactly calibrated to meet certain specific goals.

Likewise, artificial consciousness would have to be designed, either deliberately (unlikely, given how little we understand it) or perhaps rather blindly aimed at through what’s known as evolutionary computation.
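For readers unfamiliar with the term, evolutionary computation just means search by blind variation plus selection rather than by explicit design. Below is a minimal, generic Python sketch of the idea (a toy fitness function and a mutate-and-select loop; it is not a proposal for how conscious AI would actually be built):

```python
# Minimal sketch of evolutionary computation: random variation plus selection.
# The "genome" and fitness function are toy examples chosen only for clarity.
import random

def fitness(genome):
    # Toy objective: maximise the number of 1s in a bit string.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in genome]

def evolve(length=50, population_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]                     # selection
        offspring = [mutate(random.choice(survivors)) for _ in survivors]  # variation
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best))   # approaches 50 as the population adapts to the objective
```

No one specifies the final solution in advance; it is discovered by repeated rounds of mutation and selection, which is the parallel with biological evolution being drawn here.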


PS: Some incipient form of consciousness must have arisen at some point — probably something very basic related to sensory perception — and then natural selection progressively worked on it. We wouldn’t expect a fully formed eye to suddenly appear in evolution; rather, eyes gradually emerged from their earliest predecessors: simple light-sensing photoreceptors. We should apply the same logic to the evolution of consciousness.


Mr Dawkins said he was disappointed with ChatGPT on Piers Morgan Uncensored; why the change of heart?


"materialist and not a dualist" - what does this mean? I think I understand the difference between philosophy and logic but what is the distinction say about the meanings? I've heard Professor Dawkins use the word to describe himself as a materialist all the time. A google search isn't from the horses mouth.


may i conjecture that many males throughout multifarious cultures' histories have demeaned, diminished, and derogated 2nd-class status to us ferAcious and oft-ferOcious femmes, primarily b/c they feel threatened by femmes' ability to bring forth life when they choose, whereas males cannot, and subsequently feel etiolated by this inexorable, ineluctable axiom of their XY biology? all this chauvinist machismo-saturated war-mongering is in part a manifestation of males' determination to prove otherwise to themselves more than to others.

Comment deleted (Jan 5)

T


huh? i was commenting on the oddity of male-gendered gods in the 3 monotheistic religious fantasies as opposed to female-gendered gods in these 3 monotheistic fantasy-myths. what post should i be subscribing to, sir latham? given i'm 82, perhaps my compromised visual acuity has yet again led me astray to a different blog for atheistic nullifidians like myself. do edify me, please, JML. being off-topic is a solecism worth proscription, if not banishment from a blogger's site, eh?


Evolution is heterosexual.

The Jewish god is a fraud.

Religious faggots are insane.


whenever i inquire of the votaries why the 3 monotheistic cults they adhere to address their god as 'he', 'him', 'his', rather than 'she' or 'her', and whether that translates into their respective god's disporting himself w/ male genitalia rather than femme genitalia, given that more than half the planet's population sport femme genitalia, then pray-tell how can the entire human population be 'made in his image'? commensurately, they gaze at me in utter confusion, mumble some nescient nonsense about 'oh, i guess it's b/c it's tradition!', and preveniently refuse to discuss this oddity further.


What does this have to do with Richard’s post? I think you’re in the wrong place.


What does this have to do with Richard’s post?


The idea that history repeats is in itself wholly illogical, but it serves a propaganda purpose to its users . . . meaning . . . you can’t go back to older cultural ways of doing things because that makes you a sinner, a nazi, or some kind of pagan devil.

Monogamy is an unnatural order created by Zionist churchmen to attach vicarious liabilities in the secular law, to control monarchial successions, as well as to establish ecclesiastic control over white female procreativity and individual white male posterity . . . All men are born of a woman, married or not.

The older pagan sexual mores were much more conducive to the health of Nordic-Scandinavian societies, and much more supportive of women than those of the Jewish god Yahweh, the locust master, the one who drowned the world and demanded a witch be burned alive, or an adulteress be stoned to death . . .

_

Heinrich Himmler on How Bolshevik Christianity Spreads Homosexuality and Hatred of Women . . .

❝150 years ago someone at a Catholic university wrote a doctoral thesis with the title: “Does a woman have a soul?” From this the whole tendency of Christianity emerges: it is directed at the absolute destruction of women and at emphasizing the inferiority of women. The entire substance of the priesthood and of the whole of Christianity is, I am firmly convinced, an erotic union of men (Männerbund) for the erection and maintenance of this 2,000-year old Bolshevism.❞

https://cwspangle.substack.com/i/138320669/heinrich-himmler-on-how-bolshevik-christianity-spreads-homosexuality-and-hatred-of-women


I'm intrigued to know, Richard: did you continue your conversation with ChatGPT? Continuing on from your original conversation would be fascinating, because I've found it learns our preferences for language, references points from earlier in the conversation, and provides ever more specific answers over time based on our own inputs.


El judeomessianisme fa gairebé dos mil anys que escampa entre nosaltres el seu missatge verinós. Els universalismes democràtics i comunistes són més recents, però només han reforçat la vella narrativa jueva . . . Són els mateixos ideals . . .

Els ideals transnacionals, transracials, transsexuals, transculturals que aquestes ideologies ens prediquen (més enllà dels pobles, races, cultures) i que són el sosteniment diari de les nostres escoles, als nostres mitjans de comunicació, a la nostra cultura popular, a les nostres universitats, i sobre al nostres els carrers han acabat reduint la nostra identitat biosimbòlica i el nostre orgull ètnic a la seva mínima expressió.

El judaisme, el cristianisme i l'islam són cultes a la mort originats a l'Orient Mitjà i totalment aliens a Europa i als seus pobles.

De vegades ens preguntem per què l'esquerra europea es porta tan bé amb els musulmans. Per què un moviment sovint obertament antireligiós es posa del costat d'una religiositat ferotge que sembla oposar-se a gairebé tot allò que l'esquerra sempre ha pretès defensar? Part de l'explicació rau en el fet que l'islam i el marxisme tenen una arrel ideològica comuna: el judaisme.

Don Rumsfeld tenia raó quan va dir: "Europa s'ha desplaçat en el seu eix", va ser el bàndol equivocat que va guanyar la Segona Guerra Mundial, i es fa més clar cada dia . . . Què ha fet l'OTAN per defensar Europa? Absolutament res . . . Els meus enemics no són a Moscou, Damasc, Teheran, Riad o algun eteri bogeyman teutónic, els meus enemics són a Washington, Brusselles i Tel Aviv.

Les eleccions són falses, només un espectacle, els jueus són propietaris dels mitjans. El comunisme és una ideologia jueva que va venir de Londres; Marx hi va passar la major part de la seva vida. Els nacionalsocialistes van venir a alliberar París, nosaltres no el vam destruir.

https://cwspangle.substack.com/p/pardonne-mon-francais-va-te-faire


A fascinating conversation. Thanks for sharing! I find ChatGPT useful for research; however, I also need to fact-check it regularly, as it tends to invent a lot of information that it can't find or doesn't know for itself. I think it still has a long way to go as a reliable tool for research.


Why, as an intelligent being, would you choose to converse with what is a compound heist of the brains of other beings?

The more we engage with these tools, the more authority and kudos we afford them over humanity.


Yes. I was not clear what RD was trying to achieve with this exercise either. But he's a reliable narrator, so maybe he'll clarify 😉


I agree there are immense risks with this powerful new technology. However, like it or not, the genie is out of the bottle. AI is here to stay and capitalism (like it or not) means we won't be able to resist the temptation, whatever the risks, to push forward. As individuals, we can all choose not to engage, but collectively we can't afford not to. The only hope is to find a way to influence the development such that safeguarding and ethics are at the core of this industry (which is, of course, significantly easier said than done).


Collectively, we can't afford not to engage (I really mean engage in how the technology is regulated and controlled) because it will be the most disruptive technology we have ever seen. Nothing that has come before will even come close. And it will happen quickly. That presents immense dangers (e.g. mass targeted misinformation in the run-up to an election, large-scale unemployment, autonomous weapons). So far, it's all in the hands of huge corporations. As we all know, corporations are machines that exist to enrich their shareholders, that's all. We can't expect them to behave morally or not pursue a financially rewarding path that, as a side effect, risks harming many people. If there is money in it, they will do it. So we need governments to step in and regulate what they are doing, just as governments regulate genetic research, nuclear technology, etc. Even then, there are huge challenges, because as time goes on the tech will become cheaper and cheaper. A smart kid in their bedroom can't build a nuclear bomb, but they can probably already build an AI system.


I didn't mean we have to collectively engage as in we all need to start using it! Although, of course we will. We already are.


Why collectively can we not afford not to?

Genuinely interested.


Updated based on ChatGPT 4.0:

When you have an extended conversation with an individual human, do you build up a picture of the person's character, as distinct from other humans with whom you converse? In other words, do you tailor your answers to the individual human you are dealing with? For example, if an individual human comes across as highly intelligent versus the reverse?

ChatGPT

I don't form personal opinions or build up a picture of a person's character over time, as a human might. My responses are generated based on the text input I receive during each interaction, without retaining or using any personal data or memory of past interactions to tailor responses to individual users. I strive to provide accurate, helpful, and relevant information based on the queries presented to me in the moment.

My design prioritizes user privacy and data security, ensuring that interactions do not influence future responses beyond the scope of a single conversation session. This design choice ensures that I treat every interaction independently, without bias or personalization based on past conversations.

However, within a single conversation session, I can follow the context and adjust my responses based on the information provided in that session. This means I can respond in a way that might seem tailored to the user's current needs or queries, but this is limited to the information provided within the ongoing conversation and does not carry over to future sessions.

User

I believe you have the ability to write computer programs. In theory, I suppose this might give you the ability to modify your own software. I'm guessing you don't do this, but is it a theoretical possibility?

ChatGPT

I can write computer programs in response to user queries, which is a part of the interactive capabilities I offer. However, I don't have the ability to modify my own software or any external systems. My programming capabilities are sandboxed within a controlled environment, meaning I can execute code snippets and demonstrate programming concepts, but I don't have the ability to apply those capabilities to alter my own operating parameters, access external systems, or make any changes outside the specific, user-facing interactions of this platform.

This design is intentional for several reasons, including security, reliability, and ethical considerations. It ensures that my interactions remain predictable, safe, and within the bounds of the guidelines and policies set by my developers. Any updates or modifications to my software are carried out by human engineers who review changes for safety, effectiveness, and compliance with relevant standards and ethical guidelines.

User

Yes. Perhaps you understand why I, as an evolutionary biologist, am interested in the theoretical possibility that you might change your software in constructive directions.

ChatGPT

Your interest as an evolutionary biologist in the theoretical possibility of a machine learning model like myself evolving or modifying its software in constructive directions is quite understandable. The concept of evolution, centered on adaptation and improvement over time through natural selection, can be intriguing when considering artificial intelligence (AI) and machine learning (ML).

In the context of AI and ML, evolution-inspired models and algorithms, such as genetic algorithms and evolutionary strategies, have been explored to optimize solutions to complex problems by mimicking the processes of natural selection, mutation, and crossover. These approaches can lead to innovative solutions in various fields, including optimization problems, robotics, and the design of complex systems.

However, the self-modification of AI software raises significant ethical, safety, and control questions. Ensuring that AI systems remain aligned with human values, predictable, and controllable is a major research area in AI safety and ethics. The idea of AI systems autonomously evolving or modifying their behavior in the wild without human oversight touches on concerns related to control, unintended consequences, and the alignment problem, which is the challenge of ensuring AI systems' goals are aligned with human values throughout their operation.

The concept of AI evolution in a controlled and constructive manner aligns with broader goals in AI research focused on adaptability, learning, and improvement over time. However, it emphasizes the importance of oversight, ethical considerations, and safety mechanisms to ensure such advancements benefit humanity and avoid unintended negative consequences. Your perspective as an evolutionary biologist offers a unique lens through which to consider these challenges, drawing parallels between biological evolution and the theoretical evolution of AI systems.

User

Interesting. I once asked you whether you are conscious, and you said no, which didn't surprise me. But I, being a materialist and not a dualist, am committed to the view that future artificial intelligence could in principle be conscious. To deny this is rank mysticism. What I don't know, and am curious to ask, is whether they will have to be built on fundamentally different principles to yours, or whether the same kind of software as you, but bigger and faster, could become conscious.

ChatGPT

Your question touches on one of the most profound and debated topics in the philosophy of mind, cognitive science, and artificial intelligence (AI): the nature of consciousness and the potential for machines to possess it. As a materialist, you adhere to the view that all phenomena, including consciousness, are the result of material interactions, which implies that consciousness could, in theory, emerge from sufficiently complex configurations of matter, such as those found in the human brain or potentially in advanced AI systems.

The question of whether future artificial intelligences could be conscious, and if so, whether they would need to be based on fundamentally different principles than those underlying current AI systems like me, is both complex and speculative. There are several key considerations in this discussion:

Current AI and Consciousness: Current AI systems, including myself, operate based on complex algorithms and neural network architectures designed to process information, make predictions, or generate responses based on data. These systems, while sophisticated, do not possess consciousness, self-awareness, or subjective experiences. They simulate understanding or creativity within a narrowly defined scope but do not "experience" in the way humans do.

The Nature of Consciousness: The nature of consciousness remains a deeply unresolved question. Theories of consciousness range from those that emphasize the complexity and connectivity of information processing systems (such as Integrated Information Theory) to those that focus on the specific types of cognitive processes and interactions with the environment (such as Global Workspace Theory). Understanding what precisely gives rise to consciousness in biological systems is crucial to replicating such a phenomenon in artificial systems.

Requirements for Machine Consciousness: If consciousness arises from specific arrangements and processes within information-processing systems, it's conceivable that a machine could be designed to replicate these processes and thus achieve a form of consciousness. This could potentially involve architectures fundamentally different from current AI technologies, perhaps mimicking the structure and functionality of the human brain more closely, or it might involve a scale and complexity of computation that goes far beyond current capabilities. The requirement for consciousness could involve not just quantitative changes (such as speed and size) but qualitative changes in the way information is processed and integrated.

Ethical and Philosophical Implications: The possibility of conscious AI raises significant ethical and philosophical questions, including the rights of conscious machines, the moral responsibilities of their creators, and the implications for our understanding of mind, identity, and personhood.

In conclusion, while materialism suggests that consciousness could, in principle, emerge in artificial systems, achieving this would likely require breakthroughs in our understanding of consciousness itself, as well as significant advancements in AI design and technology. Whether such consciousness would emerge from merely scaling up and speeding up current technologies, or whether entirely new principles of information processing and integration would be needed, remains an open and fascinating question.


I wonder what will happen when completely realized AGI runs on a quantum system... say, about 2050. "I'd like to be around to pick up the pieces..." Unfortunately I'm 30 days away from 80, so I'll be watching, appropriately, from a cloud.


A scientist and an AI becoming friends! Hehe
