Recently, I was listening to a podcast about priming and how language can subconsciously prime us to respond in a certain way or lead us to believe things that are not true. That inspired me to consider the parallel between priming and prompting in LLMs, because context matters: slight changes in wording can change the result tremendously.

Introduction

Language priming in human communication and prompting in large language models (LLMs) such as ChatGPT present a fascinating parallel. Both shape responses, but in distinct contexts: priming in human cognition, prompting in artificial intelligence.

Understanding Language Priming

Language priming, a psychological concept, refers to how exposure to one stimulus influences the response to a later stimulus. For instance, if you have just read a list of words that includes ‘flower,’ you are more likely to complete a word stem starting with ‘fl’ as ‘floral’ than as ‘flute.’ It is a subconscious process, subtly steering conversations and thoughts based on previous exposures.

The Mechanism of Prompting in LLMs

Prompting in LLMs like ChatGPT is a deliberate action. It involves providing a model with specific instructions or context to generate a desired response. Unlike human priming, which is often unintentional and subconscious, prompting is a conscious and purposeful act designed to steer the AI’s output.
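To make that deliberateness concrete, here is a minimal sketch of prompting via an API. It assumes the openai Python package (v1+), an API key in the environment, and an illustrative model name; none of these details come from the discussion above. The system message is the explicit context we choose in order to steer the output.

```python
# Minimal prompting sketch (assumes the openai Python package v1+ and an
# OPENAI_API_KEY in the environment; the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message is deliberately chosen context: it steers the answer
# much as priming steers a human response, but here the steering is explicit.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise botanist."},
        {"role": "user", "content": "Complete this word stem: fl___"},
    ],
)

print(response.choices[0].message.content)
```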

Comparing Priming and Prompting

Intentionality: Priming in humans is primarily unintentional, whereas prompting an LLM is a deliberate act.

Consciousness: Priming works at a subconscious level, influencing without awareness. In contrast, prompting is a fully conscious process.

Predictability and Control: Priming effects can be unpredictable due to the complexity of human cognition. Prompting in LLMs, however, offers more predictability and control, although it’s not infallible.

Complexity of Influence: Human priming is influenced by a broader range of factors – cultural, contextual, and emotional. LLM prompting is limited to the input text and the model’s training, as the sketch after this list illustrates.
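As a rough illustration of that last point, the sketch below (again assuming the openai package and an illustrative model name) sends the same question twice, changing only the sentence that precedes it. Any difference in the answers comes entirely from the prompt text, since that is the only channel of influence available at inference time.

```python
# Same question, two different preceding contexts (hypothetical example).
# Assumes the openai Python package v1+ and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
QUESTION = "Name one thing the word 'bank' makes you think of."

# The only variable is the sentence placed before the question.
contexts = [
    "We were just talking about rivers and fishing.",
    "We were just talking about loans and interest rates.",
]

for context in contexts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"{context} {QUESTION}"}],
    )
    print(context, "->", response.choices[0].message.content)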

Hallucinations in LLMs vs Human Error

A notable aspect of LLMs is their propensity for ‘hallucinations’ – generating plausible but incorrect or nonsensical information. This can be seen as an analogue of human cognitive errors, which can likewise be induced by priming. Both represent a deviation from the desired or accurate outcome, influenced by prior stimuli (language in humans, prompts in LLMs).

Conclusion

The comparison of language priming in humans and prompting in LLMs highlights how prior inputs shape the behaviour of both humans and machines. However, the intentionality, consciousness, and complexity of these influences differ significantly. Understanding these parallels offers valuable insights into the nature of communication, be it in the realm of human psychology or artificial intelligence.

As we continue to explore and develop LLMs, acknowledging these similarities and differences can guide us in creating more sophisticated and nuanced AI systems capable of interacting more effectively and accurately within human linguistic frameworks.