DARK MATTER + · 135 views · 1 likes
Analysis Summary
Performed authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- This video provides a detailed look at the physical costs of AI, specifically the short lifespan of GPUs and the environmental impact of data center waste.
Be Aware
Cautionary elements
- The use of 'revelation framing'—suggesting that the truth is hidden behind your phone or car—is designed to make the speaker's specific regulatory perspective feel like an objective unveiling of reality.
Influence Dimensions
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Related content
- "Can we build AI that serves everyone, not just the elite? ✨🤝 #InclusiveAI #Innovation #future" (DARK MATTER +)
- "Who benefits from AI? We need trusted voices, not just Big Tech hype. #ai #TechEthics #government" (DARK MATTER +)
Transcript
Hi, my name is Gopal Ramchurn. I'm a professor of artificial intelligence at the University of Southampton, and I'm also the CEO of Responsible AI UK. I've always been passionate about technology, from a very young age: interested in robots, in autonomous cars, in Knight Rider. I grew up getting my first computer just at the time the internet was being built, and I really wanted to do computer science. I ended up doing a PhD in a topic that is very relevant now, which is trust in AI. So I learned how to build AI that would be trustworthy, and how to make AI more acceptable, or friendlier, to humans, across a number of projects I've been working on over the last 25 years. AI has been around for a very long time, going back to the 1950s; these are not completely new technologies. The techniques are actually quite old, and that makes it hard to understand what AI is and who is an expert in AI, because AI is not just one kind of technique or mathematical technique. It is a range of different techniques that help us reason about things, express ourselves in language and through images, and understand the world. And I think where we are seeing this converge now is into real tools that people will use at work, at home, and in their daily lives. I think right now we have a trust crisis, because it's very hard to know who is a real expert. Who can we trust to give us the right expectations about AI, to give us the right predictions about the impact of AI? That has evolved into AI that is now called general-purpose AI, and I think even the media industry and the public don't quite know what that will mean for them, and that is now leading to over-expectations: expectations of behaviours that cannot possibly be achieved by the AI we have built so far. And that, I think, is due to many companies hyping up the topic of AI to boost their share price.
So we need to be mindful of this when we hear various news coming from the media and from different platforms, and try to figure out what is real in this picture. I run Responsible AI UK, and responsible AI for me is really empowering: it's about giving you the power to choose how you build, how you deploy, and how you use AI. The apps that we use, the films that we watch, the apps that monitor our sleep, our physical activity, our mental health: they are all powered by AI, and they will share that data with third parties that we may not know about and may have no control over. So it is really worrying where we're going with AI in that sense, but it also gives us something to think about. Is that not the right way to help us live better, live safer, live in ways that are more enjoyable? Since the internet era started, I think we've pushed hard to achieve instant gratification, not thinking about what is behind all these systems: the amount of compute that has to be done to serve our need for a cat video, or a funny AI-generated video. So much compute goes into that, and we just don't see it. The lifetime of a GPU tends to be around two or three years, and we're building all these data centers that will be phased out every three, four, maybe five years. That's a lot of waste, and this waste goes to a number of developing countries, where the gold and other materials get extracted, and that harms people's lives and creates huge environmental issues as well. We don't see that; it's all hidden behind the app, hidden behind the phone, hidden behind the autonomous car. The UK's approach to adopting AI and deploying it at scale has largely been dictated by views from the US, I would say. And that has led to a focus on more compute: buying more GPUs, buying bigger and better data centers.
Not really thinking about how all of this energy that we will push into these GPUs will impact society. How will it benefit the poorest in society, those who live at the margins of society? We need people in government and in industry who understand that this is an exploration. We need to explore the world with trusted parties, not just listen to external forces, not just accept what the big tech companies are telling us as God's truth, as the right way to go. We need to really think about how we unite the research community, the industrial community, and government agencies to work on a common plan; otherwise we keep reinventing the wheel. To give you an idea of how quickly the AI community is evolving: a big research group used to be about 15 to 20 researchers. Nowadays, a research group working on AI at DeepMind or Meta or Apple is in the hundreds, maybe the thousands in China as well. It's moving so fast. So fast that instead of reading scientific papers submitted to conferences, I'm having to read the news, the tech news, to keep up with what's coming next. While we can't keep up with the capabilities of the AI, we can keep up with its limitations to some extent. And educating people about the limitations of this AI is, I think, much easier than trying to educate people about what it could do next. You don't improve your knowledge, and I'm really concerned about this, because we call this a positive feedback loop: you keep getting fed the same stuff over and over again. In typical mechanical systems, when you have a positive feedback loop, you get an effect called resonance, and resonance is very bad for a system. It's what causes bridges to break, when people walk in step, or cars drive at the same rhythm, or the wind blows at a certain rate. So I think that can break society if we don't stop that positive feedback loop of ultra-personalized content.
And countries need to really worry about the kind of weights that exist in these models, because they don't represent their culture. They don't represent their individual ethics, their priorities, the emotions that can be expressed in natural language, in their own language. It can cause a sort of homogenization of cultures if we are not careful about this. It will cause a degradation in skills if we're not careful about this. The best we can hope for is that the AI technologies we build will be diverse enough to serve the needs of everyone in society. The worst-case scenario is that AI is owned by a few companies based in a foreign country with very different values from everyone else, and the AI gets to control what we believe and what we say, without us having a way to stop it from doing so. I think AI's impact and long-term sustainability can only really be felt, like an emotion, when you see your children growing up with it. You see your children being impacted by it, sometimes in very bad ways, and you realize how many things you have to teach them to protect them from making mistakes with the AI.
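The positive-feedback analogy in the transcript can be made concrete with a small numerical sketch. This is purely illustrative: the `feedback_response` helper, the gain values, and the step count are assumptions for the example, not anything from the interview. A positive gain means each step's output is fed back in and reinforced, so the signal grows without bound; a negative gain damps it toward zero, like a stable system.

```python
def feedback_response(gain, steps=20, x0=1.0):
    """Iterate x <- x + gain * x: each step's output is fed back as the next input."""
    x = x0
    for _ in range(steps):
        x += gain * x  # positive gain reinforces, negative gain damps
    return x

# Positive feedback: the signal reinforces itself and blows up (resonance-like growth).
print(feedback_response(0.2))   # ~38.3 after 20 steps
# Negative feedback: the signal is damped back toward a stable rest state.
print(feedback_response(-0.2))  # ~0.0115 after 20 steps
```

In the recommender-system reading of the analogy, the gain stands for how strongly the system re-serves content similar to what you already engaged with: above zero, exposure narrows and amplifies with every iteration.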
Video description
"Right now, we have a trust crisis." In this extended interview from the series 'The UK AI Revolution', we sit down with Professor Gopal Ramchurn, CEO of Responsible AI UK (RAI UK) and Professor of AI at the University of Southampton. Listen as Gopal explains why building a responsible AI ecosystem is essential to moving beyond the hype and ensuring technology truly serves the public good. Watch the full documentary series at https://www.revolution.movie/