Is a parrot conscious? == Is an LLM conscious?

Mustafa Suleyman of Microsoft (formerly DeepMind) posted on his blog his perspective on ‘Seemingly Conscious AI’.

It looks like a lot of people bugged him about it, based on his X post a day later.

I suggest you read the article, but if you don’t have time, I can give a four-line sensible summary. (Not by ChatGPT or any LLM)

  • It’s possible to create ‘seemingly conscious AI’ within a few years
  • But people have already shown signs that they see LLMs as human.. or super-human
  • It’s just a machine that sounds like a conscious human
  • (Therefore, companies should stop projecting ‘human’ onto the LLM.)

In fact, a lot of people see LLMs as a sort of advanced human, just like when Google search became available to average non-digital people, and later when YouTube became available to them.

Google, YouTube, and ChatGPT

All three systems have indeed changed the way people interact with information. With Google, we no longer emphasize the power of memory as an important skill. You may still need it to pass an exam, which might carry high stakes, but the deference to memory is gone, at least. At SIAI’s GSB, all exams follow an open-book policy, because we know that it’s not memory but reasoning that matters.

With YouTube, major broadcasting stations have lost their monopoly power. With cable TV, they had already lost some of it, but with YouTube, people can watch whatever they want, whenever they want, free of charge (minus the labor of clicking the ‘Skip Ad’ button).

Now, ChatGPT, although it is just an advanced parrot, has already changed the way people find and digest information. For one, when I had to devise a complicated SQL query, instead of endless googling through StackOverflow, I just interacted with ChatGPT (or Gemini). At some point, I stopped thinking about what each nested join/WHERE really means. I blindly copy and paste the query into my terminal and only complain when it does not give me what I want. For one query that would have taken hours, if not days, I remember spending only 30 minutes before pushing my work to the team’s git.

(I just checked that particular PostgreSQL query again. It would have taken at least 3-4 hours just to build the query by hand. Or I would have had to design a Python/R program to cleanly create the output, which might have taken even more time.)
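For readers who have not lived this workflow: the kind of LLM-generated query described above tends to look something like the sketch below. The table names, columns, and threshold here are all hypothetical, and sqlite3 stands in for PostgreSQL so the example runs self-contained.

```python
import sqlite3

# Hypothetical schema standing in for the company server
# (sqlite3 used here instead of PostgreSQL so the sketch is runnable).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, user_id INTEGER, amount REAL);
    CREATE TABLE users  (id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 10, 120.0), (2, 10, 80.0), (3, 20, 50.0);
    INSERT INTO users  VALUES (10, 'EU'), (20, 'US');
""")

# The sort of pasted query the text describes: a join wrapped in a
# subquery with a WHERE clause -- easy to copy, easy to stop reading.
query = """
    SELECT region, total
    FROM (
        SELECT u.region AS region, SUM(o.amount) AS total
        FROM orders o
        JOIN users u ON u.id = o.user_id
        GROUP BY u.region
    ) t
    WHERE total > 60
"""
print(con.execute(query).fetchall())  # regions whose order total exceeds 60
```

The point of the anecdote stands either way: whether or not you understand each nested layer, the query runs, and the hours of hand-building disappear.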

For some people, it is more than just a tool

But the problem is, for some people, it is more than a tool.

After naming the tool ‘Artificial Intelligence’, the people who promote themselves by attaching to whatever is popular and advanced began assigning a human, or super-human, persona to it. It’s the advanced version of “I know somebody”. Instead of saying “I know how to use this service”, they dress LLMs up as a deity, like ancient people worshiping a statue as a god.

Let’s put it this way.

The marketing strategy has been enormously successful. Go to the Play/App Store and search for dating apps, for example. The apps were already flooded with bots, but now the bots are powered by ChatGPT (or other LLMs). They pretend to be real boyfriends/girlfriends.

And the fact that OpenAI restored ChatGPT’s ‘friendly mode’ on the new GPT-5 under heavy public discontent about it being ‘cold’, ‘inhuman’, or making people ‘feel like I lost my friend’ really tells us how a large portion of the current 700-million userbase is composed. For me, what’s important is better searching ability, but for many people, ChatGPT is a companion, like a pet.

GPT is just a computer version of a parrot, not a superhuman

People only see the surface. Almost always so.

They just see what ChatGPT and Gemini print out. It looks like it knows what I need, but the truth is, we just have one more layer of communication with the computer. Before, I needed to come up with the right SQL query to extract data from the company server. Now, ChatGPT does the searching and combining job for me. Not always right, but still better than StackOverflow in many cases.

The next wave of this process will come from reading brain waves. Then, instead of typing my commands and queries, the machine will read my brain waves and translate them into computer language to do the job. Stephen Hawking would have been proud of such a service.

But still, it’s not ‘thinking’. After years of teaching and associating with people outside of academic buildings, I have come to understand that a majority of people perform below ChatGPT’s apparent ‘thinking’ capacity, but that does not mean that LLMs are cognitive.

We just have a better Natural Language Processing (NLP) model.
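The ‘parrot’ claim can be made concrete with a toy sketch. Real LLMs use neural networks at vastly larger scale, but the core mechanic is the same: predict the next token from statistics of text seen before, with no understanding involved. Here is a deliberately tiny bigram version (the training sentence is made up for illustration):

```python
from collections import defaultdict

# A toy "parrot": predicts the next word purely from co-occurrence
# counts in its training text. No meaning, no cognition -- just counting.
corpus = "the parrot repeats what the parrot hears and the parrot sounds smart".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower of `word` seen in training."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

# Generate by repeatedly emitting the most likely next word.
out = ["the"]
for _ in range(4):
    w = next_word(out[-1])
    if w is None:
        break
    out.append(w)
print(" ".join(out))  # fluent-looking output, zero understanding
```

The output sounds plausible only because the training text did. Scale the same principle up by many orders of magnitude and you get something that sounds like a conscious human, which is exactly the article’s point.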

Imagine a robot parrot, instead of a human-form robot, in the image below. Would you call it ‘cognitive’?