Machine Learning Street Talk · 2.1K views · 42 likes Short
Analysis Summary
Ask yourself: “What would I have to already believe for this argument to make sense?”
Direct appeal
Explicitly telling you what to do: subscribe, donate, vote, share. Unlike subtler techniques, it works through clarity and urgency. It is most effective when preceded by an emotional buildup that makes the requested action feel like a natural next step.
Compliance literature (Cialdini & Goldstein, 2004); foot-in-the-door (Freedman & Fraser, 1966)
Worth Noting
Positive elements
- This video highlights a critical gap in AI deployment: the discrepancy between how models are tested (technical benchmarks) and how they are actually used by the public (emotional and personal support).
Be Aware
Cautionary elements
- The 'wild west' narrative used to frame the current state of AI safety, which serves to position the guests' commercial services as the essential regulatory or ethical solution.
Influence Dimensions
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
Well, there is no leaderboard for safety, right? There's no metric. We don't grade LLMs by how safe they are. In fact, it's not really even in the question, apart from some researchers. I would argue that should be just as important as how fast or smart the model is: how safe is it for people to use? People are increasingly using these models for very sensitive topics and questions, for mental health, for how they should navigate problems in their lives, and there is no oversight on that. In any other area where these topics are discussed there is a lot of regulation and a lot of ethical conduct built into it, whereas here it's kind of the wild west at the moment. Some companies are taking it more seriously than others and trying to study the ways in which humans are using the models for more personal topics and problems. And we've seen some pretty stark examples recently with Grok 3 and MechaHitler, and it does raise questions about how thin a veneer the safety training is on top of some of these models.
Video description
People are using AI for mental health advice and life decisions, but there's no oversight and no safety ratings. We grade models on speed and smarts... but not on whether they're safe to use. Why isn't that just as important? Featuring Andrew Gordon and Nora Petrova from Prolific, discussing AI evaluation, benchmarks, and why human preference matters. 🎙️ Full episode: https://youtu.be/rqiC9a2z8Io #AIShorts #AISafety #MachineLearning