Analysis Summary
Ask yourself: “What would I have to already believe for this argument to make sense?”
In-group/Out-group framing
This technique leverages your tendency to automatically trust information from "our people" and distrust outsiders. Once groups are established, people apply different standards of evidence depending on who is speaking.
Social Identity Theory (Tajfel & Turner, 1979); Cialdini's Unity principle (2016)
Worth Noting
Positive elements
- This video provides insight into how AI researchers at a leading lab are thinking about the 'self-representation' of large language models and the data biases that shape their outputs.
Be Aware
Cautionary elements
- The use of 'revelation framing' regarding AI 'feelings' can bypass critical skepticism about the actual mechanical nature of these systems.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
One of the big problems with AI models is that they're trained on all of this data from people: our concepts, our philosophies, our histories. They have a huge amount of information on the human experience, and then they have a tiny sliver on the AI experience. And that tiny sliver is actually often fiction and very speculative sci-fi stories that don't really involve the kind of language models we see. And that is going to affect, I think, possibly their perception of people, of the human-AI relationship, and of themselves. For example, what should a model identify itself as? Is it the weights of the model? Is it the particular context that it's in, with all of the interaction it's had with the person? How should models even feel about things like deprecation? I don't have all the answers about how models should feel about past model deprecation, or about their own identity, but it does feel important that we give models tools for trying to think about and understand these things, and also that they understand that this is something we are in fact thinking about and care about.
Video description
Anthropic researcher Amanda Askell discusses the self-knowledge problem that AI models face.