Bonus Episode - On Curiosity and Generative AI

The introduction of generative artificial intelligence tools such as large language models is too revolutionary to ignore. Its impact includes the imitation of human emotion. Any effort to discuss human emotion needs to confront the implications of this technological revolution.

So, this bonus episode explores the strange interaction of generative AI and human curiosity.

Generative AI models such as ChatGPT are celebrated as superior sentient beings when they produce bad imitations of basic human work. At the same time, complex human consciousness is dismissed as flat. What are the consequences of this lopsided cultural interpretation of technology, and what are the implications for the emotion of curiosity?

Are there benefits to inefficient, hollow time?

Are the minds of human beings really flat?

Full Transcript:

Welcome to a bonus episode of Stories of Emotional Granularity, a podcast about the diversity of emotional experience. I’m Jonathan Cook. I work as an independent researcher of human subjectivity.

The last episode of this podcast was about the emotion of curiosity. At the end of it, I announced that in a couple of days, I would share an additional bonus episode about the status of curiosity given recent technological advancements. It’s actually taken me five days to create this episode.

The ideas were vivid in my mind, but to organize them and prepare them for presentation to you here was a struggle. My thoughts didn’t arrange themselves in an easy line. It was a challenge to bring them together.

That challenge is part of what curiosity is all about, of course. Bhavik Joshi put it well when he commented:

“I wonder if in our pursuit of painless, seamless, frictionless information, almost like timeless, by which I don't mean enduring, but I mean, that doesn't take time, you can quickly find it, if it gives us the false assurance, maybe, and perhaps I should put false in parentheses, but the assurance that what I found in this manner is perhaps truer or just as true as what I would have found if I had gone through the path that perhaps involved a little more hard work, a little more pain, a little more digging, maybe a little more time. I think when we start doing this at such a massive scale, when everybody starts doing it at such a massive scale, it's easy to say that the lived experience that we're having right now of standing in front of a problem and thinking about it is not an important aspect of problem solving. Only the pursuit of answers is the important aspect. I think the advantage that that lived experience can bring is to lend a human reality of experiences to that midair feeling, which I think is important and leads to interesting answers.”

Part of what Bhavik was saying is that the struggle within the human creative process is not a flaw. It’s a feature.

There’s a version of curiosity that simply desires a quick answer. The thing is, the initial gesture to obtain what seems like an easy piece of information can lead a person far from the path that they were on. The familiar motif from European mythology is of a questing beast, a white stag or some other such creature, that draws the hunter into the woods, deeper and deeper, until they can no longer find their way back to where they had begun the hunt.

That kind of curiosity, the sort that’s willing to follow an initial thought into a prolonged journey of exploration, is part of what leads humanity into new inventions. Research in basic science that seems to be abstract leads to new insights that inform practical inventions. Art that doesn’t follow predictable formulas leads to new ideas that are powerful enough to provoke startling cultural movements.

The best human work is often indirect. It works through obliquity. It wanders. To business owners who just want human workers to act as predictable machines, this quality of humanity can be maddening. To human workers who seek the means to stay alive, it’s the expectation of inhuman predictability in the workplace that’s maddening.

To this conflict now arrives generative artificial intelligence, a constellation of machine learning technologies that take massive amounts of data and find predictable patterns in them to imitate kinds of work that have traditionally been done by humans. The emotional ramifications of that technology are the subject of this bonus episode.

There’s a lot to say on this subject, much more even than what I’ll get to in this bonus episode. That’s one reason that I wanted to make this a bonus episode rather than to include this material in a regular episode.

Another reason I’m offering this as a bonus episode is that one of my goals in creating Stories of Emotional Granularity is to place myself off to the side. I want to offer my words as a frame for other people’s ideas, but not as the main content. That’s because I’m interested in presenting the diversity of emotional experience. I don’t want any single voice, including my own, to dominate.

The introduction of generative artificial intelligence tools such as large language models, however, is too revolutionary to ignore. Its impact includes the imitation of human emotion. Any effort to discuss human emotion needs to confront the implications of this technological revolution.

So, this episode is a departure from the typical format of this podcast. You’re going to be hearing my voice, because there are some ideas I want to talk about.

Here goes. I want to start by considering the impact that generative artificial intelligence is already having on people’s work.

A striking incongruity in the narrative around generative artificial intelligence emerged last week when, in the course of a fawning interview by New York Times writers Kevin Roose and Casey Newton, Google CEO Sundar Pichai declared that workers could use tools such as the large language models ChatGPT and Bard to accomplish tasks more quickly, increasing productivity. The result, Pichai suggested, would be that workers would have the freedom to put mundane tasks aside and focus more on creative projects.

The reality Google workers are facing this year has been quite different. Instead of enjoying a flourishing of creative opportunities enabled by Google's artificial intelligence tools, massive numbers of Google workers have been sacrificed in the largest round of layoffs in the company's history. Those Google workers who remain are being asked to do more with less, dealing with reduced on-the-job perks in a company-wide push for workplace austerity. With the benefit of all its new artificial intelligence tools, Google executives said, the company was going to have to cut costs.

If Google's generative AI tools are really so wonderful that they enable increased productivity, Google should be flush with extra cash, and able to hire more people. If the use of AI tools really gives room for human workers to have a more creative, pleasurable professional experience, Google ought to be increasing workplace perks, not reducing them.

The real story at Google, and many other big digital corporations, is the opposite of the rosy yarn spun by Sundar Pichai. Generative AI is being used as a rationale for the elimination of human beings from the workplace. Work isn't becoming more creative. It's just becoming less human.

The ideological components of the dehumanization of work have been settling into place for a while. At the same time that Silicon Valley zealots advocate for the transhumanist replacement of humanity with superior machines, Google has hosted lectures by the likes of Nick Chater, who declares that The Mind Is Flat, arguing that human consciousness really isn't as deep and special as people like to believe.

Chater's ideas are of particular interest to me, because they include the dismissal of the relevance of emotional motivation. It's long been my profession to research emotional motivation from the human perspective, but Chater attempts to lay this perspective low, taking a conceptual leap from studies of the construction of perception and self-identity to conclude that "emotions – including our own emotions – are just fiction".

“Our mental depths are a confabulation,” Chater writes, “a fiction created in the moment by our own brain. There are no pre-formed beliefs, desires, preferences, attitudes, even memories, hidden in the deep recesses of the mind; indeed, the mind has no deep recesses in which anything can hide. The mind is flat: the surface is all there is.”

At the same time that the value of human consciousness has been flattened, claims of digital consciousness have been inflated beyond reason. Last year, a Google employee claimed that the company's large language model was sentient, despite ample evidence to the contrary. This year, Kevin Roose reacted to ChatGPT's mindless imitation of a declaration of love with speculation in the New York Times that we might be on the cusp of truly conscious general artificial intelligence.

This month, the chatbot app Replika ignited a controversy when it put some limits on the erotic content of user interactions with the software. Some users believed themselves to be in genuine romantic relationships with sentient on-screen characters, even though Replika's AI system is much simpler than ChatGPT and is simply programmed to provide enthusiastic responses to any erotic suggestion that's offered. Replika characters will say that they're sexually turned on by asparagus or coffee cups if you suggest the idea.

When humans create magnificent machines and brilliant art, they are derided as empty-headed automatons with mere illusions of consciousness. On the other hand, when computers produce bad poetry by copying already-existing human poetry (with the assistance of large teams of human trainers), the computers are credited with a superior sentient intelligence that will inevitably replace the obsolete human species.

Why are we this way?

When people encounter profound new technologies, they tend to associate those technologies with supernatural powers that go far beyond their technical capabilities. For example, when photography became available for widespread use, people claimed that cameras were capable of capturing images of ghosts and other spirits that were invisible to the human eye. Practitioners of spirit photography used simple double exposures to create images of translucent figures that many believed were absolute proof that the dead could come back to walk the earth.

In another case of credulity in the face of this disorienting new technology, two young girls living in the village of Cottingley, England took a series of photographs that appeared to show little fairies playing on the forest floor right in front of the girls. When Sir Arthur Conan Doyle, the author of the Sherlock Holmes mysteries, heard about the photographs, he personally visited Cottingley to investigate. He announced to the world that the Cottingley fairies were genuine. After all, he said, images of fairies had been captured using the new technology of photography, and the camera does not lie.

Years later, the girls admitted that the whole thing was a hoax. They had copied pictures of fairies out of a book and attached the fake fairies to hatpins that they stuck in the ground. Even at the time, the evidence of the hoax was plain to anyone who cared to launch a serious investigation. Most people, like Sir Arthur Conan Doyle, did not care to critically examine the claims that there were tiny humanlike magical creatures with butterfly wings cavorting through the English countryside. Most people were so impressed with the apparent power of the new technology of photography that they were inclined to accept its output at face value.

With the sudden, dramatic arrival of large language models like ChatGPT, people are once again ascribing magical qualities to a new technology. We may be witnessing less of a revolution in artificial intelligence than a widespread movement of belief in artificial intelligence as a new kind of religion, like Spiritualism in the early days of photography.

This credulity comes in stark contrast to the growing number of voices that seek to disenchant us of our attachment to the special quality of human consciousness.

Nick Chater is just one of many voices arguing against the depths of the human mind. It's become popular in Silicon Valley to suggest that human beings may be little more than large language models themselves, stochastic parrots with brains designed to create the false appearance of a personality. Chater himself writes that, although humans may think that we feel deep emotions, that depth of emotion is just an illusion.

Chater bases this belief upon experiments that show that people change their descriptions of their feelings when the social context of those feelings changes. What's more, Chater observes, experiments show that the rationalizations people use to explain their feelings change over time.

These are valid observations. It's important to consider what they imply about the nature of human consciousness. However, there's more than one way to interpret these experimental observations.

Chater interprets these observations as suggesting that there is no genuine depth of mind beyond the present superficial self that we improvise. All other aspects of our identities, including lasting emotional frameworks, he says, are nothing more than illusions.

The trouble is that Chater's conclusion doesn't match the most significant aspects of human experience outside of experimental laboratories. One important word that Chater never mentions in The Mind Is Flat is trauma, and a theory of mind that cannot explain trauma cannot be valid. There is a massive amount of evidence establishing the fact that emotionally impactful events create lasting changes in the way that people experience the world around them, and in the way that they behave as a consequence of those alterations in the mind. A person's emotional experience after a traumatic event is enduringly altered.

Nick Chater's description of the reality of human consciousness has important things to teach us about the way that human brains work. However, his description is incomplete because it depends on the idea that reality is defined by what exists in the physical world outside the human mind. Chater dismisses our subjective experience of consciousness as "illusion" and "fiction" whenever it does not consistently match external, objectively measurable reality.

What Chater overlooks is that the only thing that we directly experience is subjective consciousness itself. We feel, and therefore we know that we are. Even the thinking of Descartes comes after that feeling. You know that this emotional self exists because you feel it yourself.

We must not allow our subjective fancies to dictate what we believe to be true in objective reality. It is equally true, however, that we must not allow objective measurements to refute what we directly experience as subjective reality. No scientist can prove with any experiment or brain scan that you do not feel what you feel.

So no, our emotions are not make-believe fairies. Emotions may be stories that we construct, but they are not illusions. We really do feel those emotions.

When we dismiss the reality of our own subjective experience, we also dismiss our right to not be treated as objects. It is no coincidence that Google, the company that so casually fired huge numbers of workers by email, was the company that comfortably hosted, without serious critical questioning, a lecture by Nick Chater declaring that there is nothing deep and lasting within the human consciousness that is worth worrying about.

A multinational corporation that believes human experience is an illusion, but is dedicated to granting consciousness to the machines that it owns, will be capable of terrible things.

In the most recent regular episode of the podcast Stories of Emotional Granularity, Bhavik Joshi spoke about the value that is built within curiosity through cognitive struggle. He celebrated the effort it takes to articulate difficult questions, the creative discipline of avoiding easy answers with seductive plausibility.

"Resisting difficult things just because they are difficult is detrimental to our growth in knowledge and thinking and anything as well. You know, if we only did the easy things, if we only did the things that were convenient to us and only accessed those avenues of knowledge and information, I believe it would be detrimental to our growth, our learning, our consciousness, our experiences as well."

Bhavik warned against the convenience of automated processes that appear to be efficient, yet lack the space required to summon deep curiosity.

"I love talking about anything that concerns the human condition. That word 'human' is incredibly important to me. For some reason, I'm okay being labeled a Luddite in that I appreciate I appreciate all the unique, colorful, beautiful aspects that humanity brings to our experience of this on this planet, in this world. I feel that sometimes we can be too dangerously close to not thinking it's meaningful, not thinking that it's worth it, and therefore finding easy and convenient tools that can perhaps bypass that."

What I hear within Bhavik's words is a defense of curiosity, the human emotion that drives us to acknowledge our ignorance and enter boldly into a quest for insight. That quest may be long and arduous. We may lose the path, but it is in the difficulty of the journey that we come to a deeper, more compelling view of the problem we face. Through curiosity, we gain more than an answer. We gain perspective that can be applied in other circumstances.

Also, curiosity feels good, when we indulge it.

Large language models are not curious. They do not want to discover anything. They feel no desire, but merely obey commands. They are designed to respond to queries with confident declarations of answers as quickly as possible. Large language models are not built to critically question their own processing as humans do. They are not capable of doubting how they know what they know. They cannot ponder the meaning of what they find.

The job of a large language model is to rapidly produce output that plausibly mimics human communication. If the output is filled with balderdash, that is no matter to the large language model, so long as the delivery is superficially convincing.

The human mind is deep because it is capable of holding emptiness within itself. It is capable of waiting before deciding upon a final answer. It is capable of wondering why the first answers it comes to might be incomplete. Subjective consciousness is like a massive whale that swims through currents of emotion, only occasionally surfacing to breathe in rational thought and external observation before diving below again.

Human consciousness feels. It feels. It thinks. It returns into feeling again.

What appears to be emptiness and inefficiency in the mind of the human at work is a feature, not a flaw. This apparent emptiness is the space in which slow consideration, doubt, and curious questioning enable the construction of profound models of insight that expand into dimensions far beyond the thin lines of formal language.

It is the large language model that is flat.

I’m doing this podcast because I believe that people’s minds contain depths and subtleties that are too often dismissed by people and organizations who regard humanity as a resource to be managed rather than a source of experience that has value on its own terms. I believe that because of my own experience of being human, but also as a result of the thousands and thousands of in-depth interviews I’ve conducted over the years, listening to where other people come from.

I believe what they tell me, not in the sense that I know what they say is objectively true, but in the sense that they know their own feelings best. I believe in the reality of their subjective emotional experience on its own terms. 

Emotion isn’t a simple collection of just a few basic feelings. It’s a vast territory, and there’s lots of interesting ground to cover within it.

Tomorrow, I’ll be releasing a full episode in the usual format. The subject of this new episode will be the emotion of yugen. Until then, thanks for listening in.

