ELIZA
Note: This topic was mentioned during the call but not discussed in depth.
ELIZA was an early natural language processing program, created by Joseph Weizenbaum at MIT in the mid-1960s, that simulated a Rogerian psychotherapist.
Scott Moehring shared:
"Even the coders who developed the Macintosh psychotherapist still got sucked in to treating it like a person."
Link: https://en.wikipedia.org/wiki/ELIZA
The ELIZA Effect
The key insight: even people who knew ELIZA was a simple program (see the sketch after this list for just how simple) found themselves:
- Treating it like a person
- Responding emotionally
- Feeling understood
- Anthropomorphizing the system
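To ground the phrase "simple program": below is a minimal Python sketch of ELIZA-style keyword matching and pronoun reflection. The patterns and canned responses here are illustrative assumptions, not Weizenbaum's original DOCTOR script (which used keyword-ranked decomposition/reassembly rules written in MAD-SLIP).

```python
import re
import random

# Illustrative ELIZA-style responder (a sketch of the general technique,
# not a reconstruction of the 1966 original).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Each rule pairs a regex with candidate response templates; {0} is filled
# with the reflected capture group. These rules are invented for the example.
RULES = [
    (r".*\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r".*\bI am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r".*\bmy (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r".*", ["Please tell me more.", "How does that make you feel?"]),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so echoed fragments read naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, responses in RULES:
        match = re.match(pattern, utterance.rstrip(".!"), re.IGNORECASE)
        if match:
            template = random.choice(responses)
            return template.format(*[reflect(g) for g in match.groups()])

print(respond("I am worried about my job"))
# e.g. "How long have you been worried about your job?"
```

Even at this scale, the reflected echo ("...worried about your job?") can feel attentive, which is the heart of the ELIZA effect: the sense of being understood comes from the reader, not the program.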
Relevance to AI and Curiosity
This connects to the AI and Curiosity discussion:
- We naturally attribute understanding to AI
- That attribution can enhance engagement (the exchange feels like a conversation)
- Or mislead us (it creates an illusion of understanding)
- The same pattern happens with modern LLMs
Implications for Curiosity
If we anthropomorphize AI:
- Does it make us more curious (an engaging partner to think with)?
- Or less curious (a false sense of having been understood)?
- Can AI model curiosity for us helpfully?
- Or does it create an illusion of curiosity without genuine inquiry?