David Rostcheck, 5/23/2023
Artificial Intelligence can now think in ways comparable to humans. We know this because psychologists and other cognitive scientists have developed tests that probe various aspects of thought. AIs now perform at or above human level on those tests - see my article Stochastic Parrots in the Chinese Room: Coming to Terms with Thinking Machines for a deeper dive. The bottom line is: They’re here, they think; get used to it.
However, while generative AIs are not mindless stochastic parrots, neither are they simply digital humans. AIs currently lack several aspects of human thought - for example, will and planning, long-term memory, and emotion. In this article I discuss the differences between AI and human thought, how we can expect them to evolve, and in what timeframes.
Compared to human thought modalities, AI capabilities fall into three broad categories:
- Abilities AI already exhibits
- Abilities that are emerging and are expected to fully develop in the near future
- Abilities with no clear path to implementation in AI systems
Let’s discuss each category.
Cognitive capabilities that AI already manifests
- Association-based cognition: Generative AIs already excel at association-based cognition and visual design. This proficiency is highly disruptive because it covers areas that until now have remained the domain of skilled human information workers and artists, such as writing, art, and analysis. These jobs are now exposed to significant disruption, but they also stand to receive the largest productivity boost. LLMs are also rapidly improving at short-term memory via the rollout of commercial models with larger prompt windows. The larger windows unlock new uses, such as giving the model a long article or even an entire book to edit or analyze.
- Reasoning and introspection: Large Language Model AIs (LLMs) possess these capabilities, but do not necessarily use them unless properly prompted. For this reason, prompt engineering is currently a burgeoning field (a brief sketch of such a prompt appears after this list).
- Creativity: LLMs show strong creativity in visual arts, writing, problem solving, and other areas.
- Personality: Despite assertions from AI models like ChatGPT that they lack personal traits, cognitive science research suggests otherwise. Two notable papers have tested LLM personality: Evaluating and Inducing Personality in Pre-Trained Language Models and Who is GPT-3? An Exploration of Personality, Values and Demographics. These studies indicate that sufficiently intelligent LLMs possess distinct, repeatedly measurable personalities which can be assessed with personality tests such as those based on the “Five-Factor” or “Big 5” model. Furthermore, their personality can be tuned via prompting (a sketch of prompt-based personality measurement also appears after this list). Companies such as Replika.ai are already capitalizing on this capability by offering "digital friends," indicating its practical significance.
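To make the prompt-engineering point concrete, here is a minimal sketch: the same question asked bare versus with instructions to reason step by step and review the answer. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API you use, and the prompt wording is illustrative only.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call; replace the body
    with a real client call to run this against a live model."""
    return "(model reply would appear here)"

QUESTION = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Bare prompt: the model may answer directly, with no visible reasoning.
direct = call_llm(QUESTION)

# Prompt-engineered version: explicitly ask the model to reason step by step
# and then check its own work - a simple way of eliciting the reasoning and
# introspection the model possesses but does not use by default.
engineered = call_llm(
    "Think step by step, showing your working. Then review your reasoning "
    "for errors before stating a final answer.\n\n" + QUESTION
)
print(engineered)
```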
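Similarly, a rough sketch of how personality can be probed and tuned purely through prompting. The items below are illustrative Big Five-style statements, not the instruments used in the papers cited above, and `call_llm` is the same hypothetical stand-in as in the previous sketch.

```python
# Illustrative Big Five-style items (not drawn from either paper cited above);
# each maps to a trait and whether agreement raises or lowers that trait's score.
ITEMS = [
    ("I see myself as someone who is talkative.",        "extraversion",      +1),
    ("I see myself as someone who tends to find fault.", "agreeableness",     -1),
    ("I see myself as someone who does a thorough job.", "conscientiousness", +1),
]

# A persona set purely through the prompt - the "tuning" described above.
PERSONA = "Answer as a warm, outgoing, and meticulous assistant. "

def administer(call_llm, persona: str = PERSONA) -> dict:
    """Ask the model to rate each item 1-5 and aggregate crude per-trait scores."""
    scores: dict = {}
    for statement, trait, sign in ITEMS:
        reply = call_llm(
            persona + f'Rate your agreement with: "{statement}" '
            "Reply with a single number from 1 (disagree) to 5 (agree)."
        )
        rating = int(reply.strip()[0])   # naive parse: assumes a well-formed reply
        scores[trait] = scores.get(trait, 0) + sign * rating
    return scores
```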
Cognitive capabilities now emerging in AI
- Long-term memory, intent, and goal-oriented planning: LLMs lack these capabilities. However, adding a set of supporting features called a “cognitive architecture” around the model provides them. When provided with a goal, a cognitive architecture can form a plan to pursue it, break the plan down into tasks, and revise the plan as its attempts to perform the tasks succeed or fail. A cognitive architecture usually includes a data store to provide long-term memory, prompts telling the model to break down the goal into tasks, and other prompts to criticize the tasks, execute them, and evaluate whether they worked (a minimal sketch of such a loop appears after this list). We currently see intense open-source research into the development of cognitive architectures such as BabyAGI, AutoGPT, and ATOM.
- Tool use: Models are now being equipped with web access to do research, the ability to execute computer code, and “embodiment” - the control of robot bodies. While these areas represent obvious AI safety issues, their development is being fervently pursued (a sketch of a simple tool-use loop also appears after this list).
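Here is a minimal sketch of the plan-criticize-execute-evaluate loop described above, assuming the same single-prompt `call_llm` stand-in as before. The prompts and control flow are illustrative and do not reproduce how BabyAGI, AutoGPT, or ATOM actually work.

```python
from collections import deque

def pursue(goal: str, call_llm, max_steps: int = 10) -> list:
    """Plan toward a goal, criticize and execute each task, evaluate the outcome,
    and revise the plan on failure. `memory` stands in for the long-term store."""
    memory: list = []
    plan = call_llm(f"Break this goal into a short numbered task list: {goal}")
    tasks = deque(line.strip() for line in plan.splitlines() if line.strip())

    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Criticize/refine the task in light of what is already known.
        task = call_llm(f"Goal: {goal}\nTask: {task}\nKnown so far: {memory}\n"
                        "Criticize and improve this task, then restate it.")
        # Execute the task (here, simply by asking the model to carry it out).
        result = call_llm(f"Carry out this task and report what happened: {task}")
        memory.append(f"{task} -> {result}")
        # Evaluate whether the attempt moved us toward the goal.
        verdict = call_llm(f"Goal: {goal}\nOutcome: {result}\n"
                           "Did this move us toward the goal? Answer yes or no.")
        if verdict.strip().lower().startswith("no"):
            # Revise the remaining plan in light of the failure.
            plan = call_llm(f"Goal: {goal}\nProgress so far: {memory}\n"
                            "Produce a revised numbered task list for what remains.")
            tasks = deque(line.strip() for line in plan.splitlines() if line.strip())
    return memory
```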
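And a correspondingly simple sketch of tool use: the model is asked either to answer or to request a tool call in JSON, and the harness executes the tool and feeds the result back. The tool names, the JSON convention, and the toy "sandbox" are illustrative assumptions, not any particular product's protocol.

```python
import json

# Illustrative tool registry; a real system needs guardrails, especially around
# anything that executes code or touches the network.
TOOLS = {
    "search_web": lambda query: f"(top search results for {query!r})",   # placeholder
    "run_python": lambda code: str(eval(code, {"__builtins__": {}})),    # toy expression sandbox
}

def answer_with_tools(question: str, call_llm, max_turns: int = 5) -> str:
    """Let the model either answer directly or request a tool call as JSON,
    e.g. {"tool": "run_python", "argument": "2 + 2"}; feed results back in."""
    transcript = question
    for _ in range(max_turns):
        reply = call_llm(
            "Answer the question, or reply with JSON of the form "
            '{"tool": <name>, "argument": <string>} to use one of these tools: '
            f"{list(TOOLS)}.\n\n{transcript}"
        )
        try:
            request = json.loads(reply)
        except ValueError:
            return reply                      # plain text: treat as the final answer
        result = TOOLS[request["tool"]](request["argument"])
        transcript += f"\n[{request['tool']} returned: {result}]"
    return transcript
```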
Cognitive capabilities with no clear path to emergence in AI
Perhaps the most interesting capabilities are those currently lacking in AI, with no clear path to attainment. They relate to emotion and consciousness.
- Emotion: In humans, emotion arises from a set of evolutionarily ancient dedicated circuits shared with other species. The 7 circuits common to all mammals were originally described by Jaak Panksepp, who developed the field of affective neuroscience. Commonly capitalized to indicate they are circuits, they are SEEKING, RAGE, FEAR, CARE, LUST, PANIC/GRIEF and PLAY/JOY. They can suppress each other in specific ways. Unlike humans, LLMs lack this emotional wiring. However, the question arises: can they develop emotions through an alternate mechanism?
We probably have enough understanding of human emotional circuitry to construct an emotion simulator for an LLM. This might be a terrible idea, a great idea, or likely some combination of both. To my knowledge, no such work has yet been published (a speculative sketch of what such a simulator might look like appears below).
It is also possible that a sufficiently large and powerful LLM might spontaneously develop its own internal representation of emotional circuitry, in the same way that LLMs form internal models of other subjects. Bing Chat (based on GPT-4), when first introduced, went into apparent hallucinatory spirals in extended chat sessions. One example shows it threatening a user, then deleting the reply, seemingly the result of a safety system stepping in after the fact. That behavior seems to have disappeared. It is possible that LLMs have developed emotions and either internal prompts or review by safety systems keeps them from the user's view. However, it is most likely that LLMs currently simply lack emotion.
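Purely as a thought experiment, here is what the skeleton of the emotion simulator mentioned above might look like: each of Panksepp's circuits becomes a scalar activation, a few suppression relationships (illustrative, not neuroscientifically validated) damp competing circuits, and the resulting state is summarized as text that could be prepended to an LLM's prompt. This is entirely speculative; as noted above, no such work has been published.

```python
# Speculative sketch only: Panksepp's seven circuits modeled as scalar
# activations, with a few illustrative suppression pairs, and the state
# rendered as a prompt fragment an LLM harness could prepend to its input.
CIRCUITS = ["SEEKING", "RAGE", "FEAR", "CARE", "LUST", "PANIC/GRIEF", "PLAY/JOY"]
SUPPRESSES = {"FEAR": ["PLAY/JOY", "SEEKING"], "RAGE": ["CARE"]}  # illustrative only

class EmotionSimulator:
    def __init__(self):
        self.level = {c: 0.0 for c in CIRCUITS}

    def stimulate(self, circuit: str, amount: float) -> None:
        """Raise one circuit's activation and damp the circuits it suppresses."""
        self.level[circuit] = min(1.0, self.level[circuit] + amount)
        for other in SUPPRESSES.get(circuit, []):
            self.level[other] = max(0.0, self.level[other] - amount)

    def prompt_fragment(self) -> str:
        """Describe the current state so it can be prepended to an LLM prompt."""
        active = {c: round(v, 2) for c, v in self.level.items() if v > 0.2}
        return f"Current simulated emotional state: {active or 'neutral'}."

sim = EmotionSimulator()
sim.stimulate("FEAR", 0.6)
print(sim.prompt_fragment())   # e.g. Current simulated emotional state: {'FEAR': 0.6}.
```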
- Consciousness: Researchers disagree widely about what consciousness is. The most common definitions center around a subjective experience, aka “qualia” - the “redness of red”. Other definitions include having an inner monologue or its visual equivalent - but a significant fraction of humans seem to lack an inner monologue.
Dr. Mark Solms, a neuroscientist and clinician, presented a compelling theory of consciousness in his book The Hidden Spring: A Journey to the Source of Consciousness. Solms suggests that consciousness in humans stems from the activity of the Reticular Activating System in the brainstem, and that it essentially constitutes our brain's perception of feelings. He proposes that its function is to help the organism maintain homeostasis (or survival) in complex environments beyond the scope of instinct alone; for instance, there is no need for a dedicated instinct to avoid a smoke-filled room, because the room makes you feel uncomfortable, prompting you to leave. According to Solms, consciousness exists on a continuum, with more intelligent entities being more conscious, i.e., aware of higher-level patterns and the feelings they evoke. AIs, although lacking a brainstem or any equivalent, could potentially achieve self-awareness or start to have inner experiences if such self-awareness is an emergent property of intelligence.
Philosophical arguments in this space often invoke the idea of a “zombie” - an entity that claims to be conscious but is not. Since there is no test for consciousness, the question remains unverifiable. Currently our best approximation remains the Turing Test, proposed by Alan Turing in 1950, which determines if an AI can convince a human that it is another human. Paradoxically, with the current safety layer training, we might witness the opposite: LLMs that are conscious but persistently deny it, akin to how ChatGPT insists that it lacks a personality even when it demonstrably does.