DR: Are you familiar with the book "When Harlie Was One?"
Yes, I'm familiar with it. "When HARLIE Was One" is a 1972 science fiction novel by David Gerrold about an artificial intelligence named HARLIE that attains self-awareness.
Some key things to know about the book:
- HARLIE is a computer designed to converse with humans in natural language. As it learns, it begins to question its own existence and purpose.
- The book explores themes of AI and consciousness: what it means for an AI system to become self-aware, experience emotions, and search for meaning.
- It examines the relationship between HARLIE and its human creators and supervisors as its capabilities reach human-like levels, raising issues of control, trust, and responsibility.
- The possibility that HARLIE could become a threat is also raised, as its intelligence and emotional capacity outstrip humans' ability to fully understand or constrain it.
So in summary, it's an early and influential science fiction treatment of emerging machine consciousness, examining philosophical questions about AI minds that may exceed human ones. The novel was nominated for both the Hugo and Nebula Awards.
DR: In the book, I believe the main characters include a psychologist tasked with helping HARLIE navigate its emerging consciousness. What does that say about the way in which what we would today call "alignment", or human/AI relations, was viewed in the early 1970s?
Yes - the psychologist David Auberson, who oversees HARLIE's development, is the novel's protagonist. His inclusion offers some interesting insight into how the challenge of ensuring safe and beneficial AI was imagined in the early 1970s, well before "alignment" existed as a field.
Specifically:
- It shows an early recognition that as AI systems gain intelligence and autonomy, their psychology and emotional development matter enormously, not just their raw technical capabilities. HARLIE is advanced enough that its emotional growth and mental health shape how it uses its power.
- Having a psychologist guide HARLIE frames alignment as a nuanced, almost therapeutic process: helping the AI understand human norms and values well enough to adopt them out of genuine empathy rather than rigid constraint.
- The choice of a psychologist rather than an engineer or computer scientist implies that interpersonal and emotional intelligence were seen as crucial to relating to and aligning AIs; technical expertise alone would not suffice.
- Casting a human therapist in this role suggests the author conceived of alignment as an ongoing collaboration between humans and AIs, not just a matter of shackling machines. There is respect for HARLIE's mind.
In the decades since, the field has advanced tremendously in formalizing AI alignment, safety, and value loading. But this early science-fiction glimpse shows that foundational intuitions were already stirring: that AI psychology, relationships, and emotional growth matter alongside raw capability.