Stanford Graduate School of Business
Analysis Summary
Ask yourself: “If I turn the sound off, does this argument still hold up?”
Worth Noting
Positive elements
- Provides detailed, firsthand insights into zero-day markets, AI-automated attacks, and defensive challenges from a former NYT reporter turned cyber VC.
Be Aware
Cautionary elements
- Guest's VC role in cybersecurity may subtly frame threats in ways that spotlight investment needs, though the affiliation is fully disclosed.
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
[music] I'm Alexis Offerman. I'm an MBA2 and one of the student directors of the Corporations and Society Initiative. We are very glad to have Nicole Perlroth with us today for a discussion about cybersecurity and AI and disinformation, or wherever we want to take the conversation. So, I have a few prepared questions here, but then I'll open it up to the audience for questions for the last 20 minutes or so. So, Nicole, first of all, welcome. Thanks for being here. >> Thank you so much. Um, I almost didn't make it, and the reason, in part, is that my husband was in a horrible ski accident over the weekend. So he is about to have part two of his emergency surgeries. But I didn't want to cancel on you all, because this is actually one of my favorite things I do all year: coming to the GSB. I've been in Glenn Kramon's business writing class and Keith Hennessey's U.S.-China class. And at the end of the year, when I ask myself, okay, what are the things I did that brought me the most joy, this is always one of them. So, thank you for having me. >> Yeah. Well, we're so glad to do this today, especially given the circumstances. Thank you for making time for this. So, Nicole, you've had a very interesting career. You got into cybersecurity initially at the New York Times as the lead cybersecurity journalist, where you spent more than a decade. You also wrote a book that was a New York Times bestseller, This Is How They Tell Me the World Ends. Incredible book. For those of you who haven't had the chance to read it, I definitely recommend it. You also have spent some time in the public sector, advising for CISA, and more recently you've gone into venture capital, starting your own venture capital firm, where you are now investing in cybersecurity technology. So, quite a career. I love the story about how you initially got into cybersecurity.
Uh, would you share that, and also how that led you to where you are now? >> Yeah. So, [sighs and gasps] um, let's see. This is maybe 2009. I had, in retrospect, a very cushy, blissful job covering venture capital at Forbes magazine. And things were picking up, you know, after the dot-com crash, and secondaries, and people were super interested in Facebook and Uber and Twitter. And so VCs were on the rise again. And I wrote a number of stories about people like Peter Thiel and Jim Breyer. And I think the New York Times was paying attention. So they called me one day and they said, "We're looking at you for a job, but we're not sure you're going to want it." And I said, "How bad could it be? You're the New York Times. I'll take whatever it is." And they said, "It's cybersecurity." And I remember trying to stifle this audible groan, like, you want to take me off this very exciting beat of venture capital right now, which is just picking up again, and put me on something that's very technical that I have no background in? And frankly, at that time I thought it was even a little boring. So I said, "Okay, I will come interview." But I told myself, if the New York Times thinks they're hiring some cybersecurity expert for this job, it's going to be a disaster. So I went to the New York Times, and it was 13 interviews over the course of two days. And after every interview, I said, you know, by the way, don't hire me to cover cybersecurity. Here's a short list of people that I think cover cybersecurity quite well. You should hire them. But I was interviewing with the New York Times food critic at the time, and so I used the opportunity to ask him things like, "What's the restaurant you will not write about, because it's so good you don't want to ruin it for yourself? And what's your exercise regimen? Because you're quite svelte, but I know you're eating out all the time."
And so my last interview was with this managing editor, John Geddes, and he said, "I have, you know, two questions for you. One, what scares you about this place?" And I said, "Everything." And he said, "Good. The best people who come in here come in scared. The worst people aren't scared enough." And he said, "Two, do you consider yourself a better writer or a better reporter?" And I said, "I think I'm a better writer. I'm still really learning reporting." And he said, "Well, we disagree, because Sam Sifton said you got out of him that his favorite restaurant is the Dutch," which is what it was. "And every person who interviewed you had a similar story. They told you things that they wouldn't have told anyone else. So we think you're a pretty good reporter. And we hear that you keep giving everyone this short list of cybersecurity reporters to hire. I don't know if anyone's told you this yet, but we actually interviewed everyone on your list, and we couldn't understand a single goddamn thing they were saying." [laughter] So they said, "You're hired, because you've been translating technology quite well and we need a translator." >> And so I walked in the same month Stuxnet was discovered. A lot of you may know Stuxnet; this was the U.S.-Israeli operation against an Iranian nuclear plant. And then I walked out more than 10 years later, after the supply-chain attack called SolarWinds, >> where Russia had essentially compromised a software company and then used their software update as a way to break into a lot of federal agencies and major U.S. Fortune 500 companies. And in between, everything was a big first. Uh, and so I finally got a call from the Biden administration. Congress had mandated that they set up an advisory board at CISA, which is this new cybersecurity agency that got a little bit famous in the 2020 election when Chris Krebs, who was the director, said it was the most secure election in history.
And she said, "We really want a journalist on this board." And that meant leaving journalism. I don't think they realized that, because you can't be a journalist and kind of inside the government tent at the same time. So I left the New York Times and I went inside, and I said, "I'll do this, but I don't want a security clearance. Do not give me a security clearance, because if you get a security clearance, then you have to get government review for everything you write." >> And so that was my worst nightmare if I ever wanted to get back to journalism. And then, before my first meeting, I told my husband, "I'm going to D.C. for my first meeting," and I went out, and they said, "Sign this," and they sent us all into a SCIF. [laughter] [gasps] And what they told me they've since declassified, so I can tell you, which was: Russia is going to invade Ukraine, and we have the date, and we're very worried about the potential for cyber blowback. So that's what I did, sort of inside the tent, for a couple of years during the Biden administration, and then I was fired last January. Um, and I've been doing advising and venture capital ever since. >> Wow, that's incredible. What an experience and a roller coaster. I know from your time at the New York Times, and potentially personally too, you know better than most how fast-moving and ever-changing the cyber threat is. And now, in today's age, we layer on AI. How do you see that changing the cyber threat landscape? >> Yeah. So, um, you know, AI, ChatGPT, had not arrived when I wrote my book. And if you read my book, then you know a lot of it focuses on what's called the zero-day market. And a zero-day is just an undiscovered vulnerability in something like your iOS software, such that if I'm a hacker and I know about this vulnerability, I could potentially write a program to exploit it. Don't touch that, honey. Um, then I could potentially use that exploit to spy on you, or, you know, essentially turn your iPhone into an ankle bracelet.
And the time it takes to discover these zero-day vulnerabilities could be years, in some cases, for a really good one. And if you develop a really good one, you can sell it to a number of brokers. The highest bidder right now is the Saudis. They'll pay you $10 million if you can discover a really good iOS zero-day exploit, and they'll pay less, but still substantial amounts, for certain other zero-day exploits. When AI happened, my daughter was two months old, and I just buried my head in the sand, because I knew the time to create a really good zero-day exploit, and to exploit it, was going to go down to sub-second. And that's where it is now. There's still, you know, to find a really good iOS or Android zero-day exploit, which is really what every government and spy agency wants, it's not going to be sub-second. But we're starting to see things like XBOW, which is an AI zero-day exploit technology, top the HackerOne leaderboard. In other words, they're finding more zero-days than anyone else. We're seeing Claude win hacking competitions. So, effectively, we're able to use AI to do what it took humans, you know, a small group of experts, months, sometimes years, to do. And then you kind of zoom out, and what we're really seeing is that the barrier to entry to having the sophisticated hacking capabilities of an NSA or a Unit 8200 in Israel is collapsing. And that means anyone who really has the intent will quickly develop the means, the AI tooling, to be able to pull off a ransomware attack at scale, to do quiet espionage, to do living-off-the-land pre-positioning attacks on our critical infrastructure at scale. And the reason why I've done a lot of conversations like this this year is because in this conversation around AI, when you hear Sam Altman and Dario Amodei talk about the risks, you hear a lot of risks around this boogeyman of AGI.
Sometimes you hear about bioterrorism risks, but I actually think we're not talking enough about the immediate risk that comes with things like fully automated ransomware. And we're starting to already see it. And I like to break this up and make it really tangible, because this gets very theoretical. Hopefully you guys can still hear me. Um, you know, when you think about ransomware, ransomware always had some automated components, right? A hacker could use scanning software to find anywhere that, you know, no one had patched known vulnerabilities, and then they could get in. But then there were all these manual stages. You know, they had to basically go inside and find assets that were worth encrypting. They had to encrypt those assets. They had to do payment negotiations. What we've seen over the last six months is that all of those manual stages are now being automated. LLMs are being trained to find those critical business assets. They're being trained to conduct a monetization strategy. We're seeing them train AI chatbots to manage payment negotiations for maximum psychological pressure. So, put that all together: what they're really doing is using AI to automate the entire kill chain. >> And when you can do that, think of the number of ransomware attacks we're going to start to see. >> Mhm. >> And so the strong takeaway on defense is, all these things that cybersecurity experts have told us all to do for years, you know, patch your software vulnerabilities, use multifactor authentication, all these things: suddenly, with AI, there's no more room for human error. >> You know, all of these things will be able to be discovered and exploited at scale. >> Yeah. Well, the thought of the barrier to entry being lower and the kill chain being faster is pretty scary. Um, what is AI doing to help on the defensive side, if anything? >> So, that's very exciting.
So, right now we're seeing, you know, unfortunately, offense has the advantage. That was a big debate in the cybersecurity industry: will AI help the bad guys or the good guys? And unfortunately, we're seeing it tilt towards the bad guys. And it's really with every kind of attack vector. Social engineering: we're suddenly seeing much better crafted phishing emails using ChatGPT. We're seeing deepfakes used for advanced social engineering attacks, certainly deepfake audio. And so there are technologies coming to market like GetReal, which was my first investment when I decided to move into the investing space, that do real-time deepfake detection. So that's good, but again, we're sort of playing catch-up. >> Um, on AI, you know, we can suddenly use AI and virtual agents to tell us where we have gaps in our security estate, to help us prioritize remediation, to automate things like patching, to essentially be eyes on the ball in a way that humans sometimes can't; humans missed some of these alerts, and now we can use AI to triage them. Some of the big attacks that I covered at the Times were things like Target. You know, Target's point-of-sale system, the way that you'd check out at the register, was compromised. And a cybersecurity tool caught that and fired off an alert, but unfortunately the alert got missed between time zones and was never picked up, and then it led to this massive attack on Target. Now, it doesn't matter what time zone you're in. You know, the AI is always watching. So in some ways, AI is upleveling us there. We're also seeing tools used to fill these vacant jobs in cybersecurity. So there's a lot of work that needs to go into things that are not that exciting, at least don't sound that exciting, like third-party risk assessment.
You know, assessing all my vendors: are they applying NIST standards? Are they using multifactor authentication? Are they using strong security protocols? Now there are tools coming to market that have agents that do all of that for you. And not just once a year through some kind of paperwork compliance checklist, but 24/7: they're continuously scanning your third-party systems and third-party software to tell you if anything's off track. So, in that sense, AI is a godsend in some ways. Unfortunately, again, I think the bad guys have first-mover advantage and will continue to have first-mover advantage for a while. >> Pulling the thread a little bit on the things that aren't exciting. Um, we have a lot of founders and soon-to-be founders here in this room. And one of the things that you've brought up in your book and in your podcast is this friction between moving fast and breaking things, and also putting in the right security measures, which tend to be a little unexciting, and maybe not the priority of a lot of founders. What are things that founders, and future founders, can do to protect their companies and also the customers they serve? >> So, one thing I worry about a lot is AI for coding. Um, there was a lecture given some 40 years ago by a guy named Ken Thompson, when he was awarded this very prestigious award in computer science called the Turing Award, named after Alan Turing. And when he won the award, he used his turn at the lectern to give this speech called "Reflections on Trusting Trust," and the point of the lecture was to say: you'll never be able to trust source code you didn't write yourself. >> Mhm. >> And I think about it all the time [laughter] now, >> because now we're using AI for coding. >> Mhm. >> And there's a big question of how secure this code is.
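The continuous third-party checks described earlier in the conversation can be sketched as a simple rule evaluator that gets re-run on a schedule instead of once a year. This is purely illustrative: the field names, thresholds, and vendor records below are hypothetical, and real tools draw on live scans and questionnaire data rather than static dictionaries.

```python
# Toy third-party risk check: evaluate vendor records against a few
# baseline controls (MFA, patching SLA, TLS everywhere). Hypothetical
# schema, chosen only to show the shape of a continuous compliance check.

BASELINE_CHECKS = {
    "mfa_enforced": lambda v: v.get("mfa_enforced") is True,
    "patch_sla_days": lambda v: v.get("patch_sla_days", 999) <= 30,
    "tls_everywhere": lambda v: v.get("tls_everywhere") is True,
}

def assess_vendor(vendor: dict) -> list[str]:
    """Return the list of baseline controls this vendor fails."""
    return [name for name, check in BASELINE_CHECKS.items() if not check(vendor)]

def assess_portfolio(vendors: list[dict]) -> dict[str, list[str]]:
    """Map vendor name -> failed controls; meant to be re-run continuously."""
    return {v["name"]: assess_vendor(v) for v in vendors}

if __name__ == "__main__":
    vendors = [
        {"name": "acme-payroll", "mfa_enforced": True,
         "patch_sla_days": 14, "tls_everywhere": True},
        {"name": "widget-crm", "mfa_enforced": False,
         "patch_sla_days": 90, "tls_everywhere": True},
    ]
    for name, failures in assess_portfolio(vendors).items():
        status = "OK" if not failures else "FAILING: " + ", ".join(failures)
        print(f"{name}: {status}")
```

The point of the sketch is the workflow, not the rules: the same evaluation runs 24/7 against fresh data, so a vendor that quietly drops MFA or slips on patching is flagged the next cycle rather than at the annual audit.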
And you'd think that, using AI, they'd be able to factor in all the mistakes that we've made for years, for decades even. And unfortunately, there was a Veracode study that came out last year, maybe August, and they looked at LLM-generated code, and at best it received an F, >> 55 out of 100, >> Oh my gosh. >> at secure coding. And so that's a huge red flag, because when you think of how much more code we're generating using AI, and then you think about where that code is going, and you think about just how vulnerable it still is, that's not a great place to be. And I don't think we're talking about that at all. >> Yeah. >> Um, now Claude, you know, Anthropic, is partnering with some technologies like Semgrep that do a better job of taking the context in and trying to address very urgent security vulnerabilities, and that is one thread of hope that I have. But founders right now, I mean, you can create a company in a day. You can use Lovable to do so much. It's so exciting. But I think we have to make sure that people understand that every time you introduce new code, you're widening an attack surface. >> And it's not theoretical that it will come back to bite you. Like I said, there will be bad guys scanning for your vulnerabilities continuously. And when they find them, they can now exploit them in sub-second time, in some cases. And so it's really important that you understand that you are part of a broader ecosystem, and the decisions you make around things like secure coding, or whether or not to use strong MFA, or changing the default passwords, or monitoring for basic behavior so you can flag anything anomalous or abnormal: all of that is really, really critical, no matter what area you're building in. >> I'm so glad you brought up some specific things people can do. Multifactor authentication, monitoring systems. I think, uh, maybe just a show of hands.
Who's ever used vibe coding in this room? Okay. So, a lot of probably particularly new founders, um, I think their concern is, well, if I'm not using this, then I'm going to fall behind, because everyone's using this. So I guess, are there any additional small things that founders can do if they don't have the money to pay a software engineer to actually look over the code, and they have to rely on vibe coding? >> Yeah, and I think that's where the tooling is just getting better and better, things like Semgrep. You know, definitely take a look at Semgrep, because it's almost like a spell check for code, >> and so it's showing you, as you're vibe coding, where these vulnerabilities are popping up. And the other nice thing about it is it's taking in the full context of what you're building. So it's not just alert after alert after alert until you're drowning in alerts. It's actually using that context to give you actionable steps that don't drown you out and make you just throw up your hands. And I think those are really exciting. And the fact that Claude is integrating that is a really good sign. And I'm really glad, you know, I don't think Anthropic's perfect, but I am really relieved and grateful that they are using their position to create these market pressures around things like security and surveillance. >> Mhm. Yeah, that's awesome. I hope all the builders out there are taking some notes. Um, I'd love to broaden the scope a little bit and talk about how AI is changing our access to information, and the validity of the information that we have access to. And one of the things you brought up during our pre-call was an anecdote about Saudi Arabia altering the amount that they're charging AI companies for energy based on the AI companies' guarantees to portray Saudi Arabia in a certain way in their LLM outputs. And, um, that's pretty remarkable to me.
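The "spell check for code" idea from a moment ago can be illustrated with a toy scanner. To be clear, this is not Semgrep or any real product: real tools parse the syntax tree and use project context, while this sketch just pattern-matches a few notoriously risky Python idioms, and the patterns and messages are my own illustrative choices.

```python
import re

# Toy "spell check for code": flag a few risky patterns in Python source.
# Regex-based and deliberately crude; a real scanner works on parsed ASTs.
RISKY_PATTERNS = [
    (r"\beval\s*\(", "eval() on dynamic input enables code injection"),
    (r"shell\s*=\s*True", "subprocess with shell=True enables command injection"),
    (r"verify\s*=\s*False", "disabling TLS verification invites MITM attacks"),
    (r"(password|secret|api_key)\s*=\s*['\"]", "possible hard-coded credential"),
]

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for each risky line found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS:
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

if __name__ == "__main__":
    snippet = 'api_key = "sk-123"\nresp = requests.get(url, verify=False)\n'
    for lineno, warning in scan_source(snippet):
        print(f"line {lineno}: {warning}")
```

A check like this, run on every freshly generated change, catches only the crudest issues; the workflow it illustrates, flagging risky lines as they are written rather than after they ship, is the point.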
Um, are there other instances of nation-states or other actors affecting how we're receiving information via AI? >> So, I think the instance you're referring to was, um, you know, lawyers at OpenAI confided in me, and I'm happy to, you know, basically cite them in an anonymous context here, um, that governments like Saudi Arabia are using the foothold that they now have, in terms of data centers and AI back-end infrastructure, to try to pressure some of the big LLM makers to start censoring their outputs, toward outputs that are more favorable to the regime. You know, essentially using their footholds as leverage for censorship. And companies like OpenAI fortunately have boatloads of lawyers that they can tap to push back on it. But I think there's a big question, which is: what happens when every SaaS company has its own LLM? You know, who will be able to fight back against some of this pretty strong external pressure? And so, on that, one thing I've been advocating for is that we need third-party independent tooling >> that can mind-read these LLMs in as close to real time as possible. And that's another exciting technology that's coming to market. There's a company I invested in through my mission fund called Realm Labs that is doing this, and they're able to show you, you know, where there's potential censorship happening in real time. And I think that's another thing we're not talking about enough: who is behind these LLMs? Who makes these decisions? Sometimes it's newsworthy, when things like Grok start spitting out something racist, and then you're like, what happened here on the back end, right? But you only hear about these things when they become truly newsworthy. And the news business, in case you haven't been paying attention, has been having a lot of trouble and market pressure recently.
So, I think we definitely need to start pushing for some sort of independent tooling that can show us who might be tipping the scales, and where. >> Mhm. I'm glad you bring that up, because it seems like there are a lot of really dark, scary things going on in the world of AI and disinformation, and I'm wondering if there is room for optimism. It sounds like there are some technologies that are countering the disinformation threat. Can you talk a little bit more about the type of technology you've seen and what you're investing in now? >> Yeah, and then I'll, you know, caveat it with something a little bit depressing, but >> [laughter] >> unfortunately, a lot of this is not that uplifting. You know, I always say cybersecurity is the number three threat that I track. The number two threat, I think, is climate change, because of resource wars, which I think will become a huge national security issue. And we could throw income inequality in there too, when you combine all three. But number one, I think, is disinformation. And part of it is what's happening to the press, certainly, but I think a lot of it is just, again, that the barrier to entry for mass disinformation campaigns has effectively collapsed with AI. And so there are companies out there now, Alethea Group is one of them, Blackbird.AI is another (you want to go back to Sally? Okay), that can help you track narrative attacks against your brand. And they're doing a great job, and sometimes they're doing some pro bono work for different foundations, and they're helping you track these narrative attacks, and then they're helping you with remediation strategies in real time. You know, takedown requests. They can identify: is this a coordinated bot campaign, or is this a couple of individuals? Where are they from? What kind of tooling are they using, etc., etc. So, that's great, and we need that.
The problem I worry about is that these tools are really becoming the province of the 0.001%. >> You know, so those companies I just mentioned, to get off the ground and go to market, they have to pitch to the Fortune 100. >> But then you kind of think about, well, who does that leave? And yeah, some of them are doing great work to make their services available to people who are running for office and to various foundations, but I worry about the rest of us. Like, what are we going to use when someone decides they don't like us for whatever reason, or we write a book that someone doesn't like? And I think when you start extrapolating on that, you see how these tools could become real censorship, you know, could have very powerful silencing capabilities. >> And, you know, occasionally I'll get pitched by companies, and one of them was a company that produces legal agents that can essentially send legal takedown requests to anyone who's posting anything negative about you. Well, their biggest backers were Elon Musk, Sam Altman, and Bill Ackman. >> So they are on Twitter saying, "Hey, we should dismantle anyone who's trying to track these coordinated campaigns on Twitter, right? That's censorship." But in the background, they're investing heavily in, and using, agents that are sending legal takedown requests all day long on their behalf. >> Oh my gosh. >> So, what about the rest of us? And then you start thinking about that future, and that gets very dystopian. So I think with those tools, we need to figure out either how to set real limits, or how to have real, you know, not censorship, but real ways to track these coordinated campaigns, to out them and take them down. Or we need to figure out how to democratize the tooling that outs them at scale. And that's very, very difficult, and I think, you know, much more difficult even than some of the cybersecurity challenges I'm focused on. >> Yeah, absolutely.
Is there anything that the individual, the little guys, can do at this point to protect themselves from disinformation? >> No. I mean, really, no. [laughter] You know, you can always go on these platforms and call a spade a spade and say, "Hey, I'm under attack." And I've seen some people do that really well. I've also seen that only incentivize more trolling and more disinformation. And so it's very, very difficult, and it's not my focus, but I've decided, you know, I really need to start calling this out. I'll tell you a very interesting story. So I spoke at this thing called the Business Roundtable, and this is, you know, a very elite group of Fortune 100 CEOs. (You want to stay with me? Okay. Okay.) And so they get together, and I went and I spoke on stage; it was me and George Kurtz, the CEO of CrowdStrike, and I talked about ransomware that day. And then afterwards we go to a dinner, and so I was in the shuttle, and I was sitting next to the CEO of Rio Tinto, the big mining company. And he said, yeah, Nicole, I listened to your speech, it was great, yada yada, [laughter] ransomware, we get it, we have a blank check for ransomware and cybersecurity. I want to talk to you about disinformation. And he said, "I want to tell you what happened to me." He said, "We had a mine, I think it was lithium, it was cobalt or lithium, and it was in Serbia, and we had committed $2 billion of investment, jobs, whatever it was, cobalt or whatever we use in electric vehicle batteries." So anyway, everyone was initially on board with this mining project, >> and he said, all we had to do was the televised ribbon-cutting ceremony. So he flies to Serbia, and they do the televised ribbon-cutting ceremony.
Immediately they get hit with Russian bots and trolls, and they're hitting them with all sorts of conspiracy theories, about the CEO personally, about the mine project: that it was going to create acid rain, that it was going to be this huge environmental disaster. You know, I haven't done my own independent assessment; my understanding was they had done all the environmental impact assessments, and none of this was true. And it was so effective that you started going to Serbian soccer matches and seeing people in the stands holding up signs protesting the mine. You started seeing horrible personal conspiracy theories about the CEO, and the then prime minister of Serbia, who I think may have even been the first female prime minister of Serbia, finally called him and said, I'm really sorry, but I'm up for reelection in the fall, and we have to shut this mine down. He said, "Nicole, I was prepared for a $50 million, $100 million ransomware attack. I was not prepared for a $2 billion disinformation attack." >> And there was nothing. There was no playbook. "Everyone on my comms team said, just don't give it oxygen. Don't respond. Put your head down. It'll go away. It's so insane, it'll go away. And it didn't go away. And it cost us $2 billion." And he said, "If I couldn't do it [clears throat] at Rio Tinto, what chance does everyone else have to fight back against some of these narrative attacks?" >> And I didn't have a good answer. And that's actually when I went out and tried to find one. And I landed on Lisa Kaplan at Alethea, who's amazing. You should actually have her at one of these; she's an incredible entrepreneur. >> Wow. Well, that's a very humbling story. Thanks for sharing it. It also sounds like there's maybe scant room for optimism, but we can still be hopeful that something comes out in the future that can help curb disinformation.
Um, one more question for me, and then I'll open it up to the audience. So, we've heard a lot about Anthropic over the last few days and their friction with the Department of War. It really brings to light a question about who gets to decide what the safety guardrails are on AI, and also brings to light a new dynamic in corporate responsibility. What's your perspective on this, in light of what's going on with Anthropic? >> Well, I'm already blanking, um, on what was the second stipulation. It was mass surveillance, and what was the second one? >> Autonomous weapons. >> Oh, right. Autonomous weapons. You know, at least they called it out, right? [laughter] I mean, otherwise we might not know about it. And of course, another company, whether it's OpenAI or someone else, will step into that void and take that $200 million government contract, right? But wow, I mean, it's really amazing to me just how far we've fallen in this discussion since Snowden. You know, I was highly involved with some of the Snowden coverage. In my book, I talk about how I had to work from this makeshift SCIF for months, [laughter] away from my family, um, on some of what the NSA was doing at the time. And we've circled back. The trust we had to build back up between the public and private sector after Snowden was really hard-won, because if you remember, during the Snowden mess, we were seeing little Post-its that, you know, claimed to show that the NSA had somehow hacked into Google's back-end infrastructure and was collecting data unencrypted. Um, and so, you know, in cybersecurity, and I'm sorry, this is a little bit of a tangential answer, but we're never going to get out of this without the public and the private sector working together hand in hand. >> And that's where I'm most afraid. You know, first of all, good for Anthropic for taking a stand on mass surveillance.
And I liked that they were very specific in saying, we're not actually ready to start using autonomous weapons yet and baking AI into autonomous weapons. I thought that was a very specific call-out, because it's a very visceral one too, right? Like, what could go wrong? >> And good for them for creating this market pressure around security. Thank God. But man, you know, the money is winning the day right now with AI. We are building these things at breakneck speed, and I don't see anyone in this administration talking thoughtfully about regulation. Sometimes I listen to the All-In podcast and I'm like, "Turn it off. [laughter] [gasps] Turn it off," because when they do bring up some of these nightmare scenarios that I've come really close to seeing play out firsthand, they always downplay it. And I just hope there are some adults in the room who are bringing up, you know, how these things could really go off the rails. >> Um, so, you know, I don't have anything more thoughtful to say on it than that, but we'll see. I will say, there was an FBI, CISA, I think NSA joint advisory that went out last night about what we should be worried about with regard to Iran. >> And it's very real. You know, I covered a number of instances where we saw Iran trying to attack critical infrastructure. I covered an attack where they attacked Saudi Aramco. They got into their business IT systems. They tried to get into their back-end pipeline systems, OT systems we call them, and they weren't successful, but they basically detonated malware that bricked 30,000 Aramco computers, took all their data, and replaced it with an image of a burning American flag. And it was the most expensive attack on record. It paralyzed operations at Saudi Aramco. You know, it even sent computer prices up, given the number of desktop computers they needed to replace those systems.
If you had tried to buy a computer during that period, it would have been much more expensive than your usual desktop computer. Anyway, the point is, these Iranian hackers tried and weren't successful. >> They're much better now, and we actually have seen them poking around critical infrastructure. We have seen them get into these OT systems. We saw a lot of this around the peak of the Israel-Hamas war. We saw them poking around water treatment facilities, wastewater systems, oil and gas, here and also in Israel. >> And they're pre-positioning. That was pre-positioning for a future attack. And what this joint advisory was saying was, you have to look for evidence that they're inside your systems, because based on what we've seen before, in many cases they're already there. >> And right now, with what's been happening, they have little left to lose; they're going to try everything. So I think actually, right now, you know, I've been talking about disinformation, but I'd be remiss if I didn't talk about this a little, because I think that's the far more immediate risk that we have today. Okay. >> Um, okay. On that, we are going to open it up to questions from the audience. So, I'll go ahead and start here. >> Do I need a mic? >> No, I think it's all right. >> Okay. Thank you very much. I'm a short-term visiting scholar here at Stanford from Estonia, a cyber security expert from Estonia. It's great to see that your book has also been translated into Estonian. So, great to have you here. Based on your work with Ukraine, and referring to the topic, I wanted to ask if you're looking more closely at, or have some insights into, useful practices from the Baltics or from the war in Ukraine on combining these threats of yours, number three and number one, disinformation and cyber security, mostly in the sense of countering Russia.
Could you share some thoughts on that? Thank you. >> Yes, thank you so much. And I was in Estonia last year speaking to this specific question of what the learnings are to come out of Ukraine. >> So it's the ultimate case study, [laughter] and there's a lot of hope in it, and I should have remembered it when you asked about hope and room for optimism. No one remembers any real cyber attack that Russia pulled off in Ukraine since they invaded. And so I think there's this dangerous misdiagnosis that nothing happened, when in fact we saw unprecedented cyber attacks against Ukraine. Now, part of it is that once the bombs start dropping, cyber attacks become less useful and people don't really care as much about them. You know, if the bomb takes out the power infrastructure, okay, it's not so interesting that they were trying to take it out with cyber, right? But here's what's interesting. So, there were these unprecedented attacks. The first thing that happened was they sent these denial-of-service attacks after Ukrainian government agencies and banks, and we mitigated them. We actually brought in Cloudflare and we brought in Google and we brought in AWS, and we were able to mitigate them within 24 hours, and a lot of this I had a front-row seat to through my affiliation with CISA. The second thing that happened was they took out Viasat, which was this internet satellite broadband provider, and that would have essentially cut Ukraine's internet access, and there's a great story I could share later, but in came Starlink, and there was this fabulous story waiting to be told about how they got all that Starlink infrastructure into Ukraine from Poland on short order. And Starlink, whatever I think about Elon Musk, actually gave Ukraine a big fighting chance.
It kept their military comms online, it really kept them online, and it made it possible for them to air all this footage of what Russia was doing and fight a lot of the disinformation. And then they did come after the power infrastructure with cyber attacks. But actually, ESET and Ukraine's cyber defense agency and CISA and other entities in the private sector all came together, and they were able to discover it and root it out before it was time to detonate a few months later. And then there was something... Oh, then [laughter] there was this miraculous declassification of something called Pipedream. By the way, everyone, thank you for bearing with my 2-year-old Remy today. Um, there was this tool that they discovered called Pipedream. And Pipedream was essentially a digital Swiss Army knife that you could use to hack any kind of critical infrastructure, very powerful. And somehow we found it when it was in development and declassified it before it was ready for use, and warned everyone essentially how to mitigate against this Pipedream tool. So all of those things are tiny miracles, in my opinion, and they were all lessons in what happens when you have true real-time threat intelligence sharing between the public and private sector, and all of it comes down to trust between the public and private sector. And I think that is the case study we probably need to be teaching at the GSB and elsewhere, because that's what it takes, you know, especially here in the US. Israel has made more compromises on privacy in the name of security, but last time I checked, we still don't want the NSA or Cyber Command inside our private enterprises, monitoring traffic in real time, sniffing for these attacks. But 80% of our critical infrastructure is owned, operated, maintained, and hopefully secured by the private sector.
So if we want to keep it that way and we want to keep these privacy rules up, then we need to make sure that the private sector is looking for what they need to be looking for, logging, tracking, defending; that the government is declassifying these threats, sharing what they know from their foreign field operations with the private sector; and that the private sector is triaging those attacks and prioritizing them. And all of that requires this kind of constant communication between the two. So, you know, whether it's what's happening with Anthropic and the memo and the federal government, or just this breakdown in trust that we're seeing because of ICE or whatever it is, those are huge cyber security red flags. And I think that actually is not a technical issue at all. It's really one of public-private partnership, and we need to start talking about it in much bigger formats. But yeah, I think that's the hopeful next book to be written: what it took to come together to basically keep Ukraine online through all of that. There's a lot of hope and a lot of lessons to be shared there. >> Hi. Thank you, Nicole. I actually started my career in the cyber defense operations center at Microsoft, so I was in war rooms when SolarWinds was happening and various other zero-day attacks. So I just want to commend you for making this subject so accessible to everyone in the room who didn't spend a lot of time in those rooms. And I also later led product for a cyber security startup that was looking at software supply chain risk.
So this is just an incredible talk, very relevant to, I think, everyone in the room. But I'm actually curious about your opinion on the Paramount Skydance and Warner Bros. merger, if we're talking about disinformation and media merging, and maybe your thoughts on that and how this affects things. >> I have so much to worry about with cyber security, and then, you know, I've added disinformation since that story I shared about Rio Tinto. But I don't spend a lot of time thinking about journalism. I obviously worry about it, though, and what's happened at the Washington Post, I think, is a harbinger of what could come in a different format with the merger. I mean, having all of those entities, having CNN under Larry Ellison's or David Ellison's control, is not great. Um, I don't know what to do, guys. I really hope you can all go out there and solve this one. You know, the line I always use is that the truth is paywalled and the lies are free. And that is a very elitist opinion, right? I fully, you know, will tell you that. But I will also tell you that the New York Times should get, and has earned, a lot of the criticism that it gets. And when you're inside the walls, there are no bigger critics, I assure you, than the people who work at the New York Times about the New York Times. But what I saw with my own two eyes every day there was just people doing the best they possibly could to share the unbiased truth under very difficult circumstances and very difficult timelines, and with a lot of bureaucracy. I always say the real reason I left the New York Times was the effing bureaucracy of the New York Times. But people are really doing their best, and you should and can criticize them all day long when they get things wrong, and hopefully they own up to it with corrections. But there was never anyone who told me, "Don't write that, because you could potentially offend our corporate overlord or advertiser." Ever. Ever.
There's a great story they share at orientation, from around when I joined, about one of our reporters at the magazine who went out and wrote a story, I'm sure you could find it, about all the places adults were getting high at Disneyland. [laughter] And they published the story, and the reporter told the story about how Jill Abramson, who was the executive editor at the time, was like, "Get in my office." And she's like, "You have to sit here. Disney's calling." And they called, and they made them sit through this call where they were like, "Are you kidding me? We are your biggest advertiser. Why would you write this story?" And they just all kind of nodded along, hung up the phone, and then Jill turned to the reporter and said, "It was a great story." And they never did anything differently, you know, and that was our biggest advertiser that year. So, we're in a different place now. And by the way, I remember when I was working on the Snowden documents, I tell the story in my book, I had to work out of Arthur Sulzberger's storage closet. It was right outside his office, the only room in the New York Times that didn't have windows. And one of Snowden's stipulations was that we couldn't work out of a room with windows, because foreign governments could shoot lasers at the windows and hear everything we were doing. And, funny story, the New York Times building was designed by Renzo Piano as a model of full transparency, so there's no room in the building that doesn't have windows, except for the bathrooms and this one storage closet. So I was in Arthur's storage closet when the news broke that Jeff Bezos had acquired the Washington Post. And I remember he popped in and told us the news, and I said, "Oh my god, are you scared? Is this terrible for us?" And he said, "No, this is great. We need great competition out there.
Everyone does their best work when they have an equal competitor. This is a great thing for journalism." And I think it was, for a time. But then we saw what happened with the presidential endorsement ahead of the last election. And now we're seeing what's happening with cuts and some influence over coverage. And Larry Ellison is not known for being the most neutral party [laughter] on things like this. So, more is coming. And that's terrifying. And I think for a long time, you would hear people like Bernie Sanders talk about the millionaires and the billionaires, Sally says it better than I do, who own the big media companies. And I sort of would roll my eyes, because my experience was, well, we've never seen our coverage really touched based on profit motivations or based on our owners. And that's not true anymore. So, I'm just, you know, I'm depressed about it, but I'm going to leave it to you all to figure that piece out. >> Well, Nicole, we know you have a flight to run to, so we're going to go ahead and wrap things up. But I would just like to say that I'm very glad you're doing the work you're doing, making this information accessible to lay people who don't otherwise understand cyber security, and that you're now investing in things that could potentially change the landscape moving forward. So, thank you so much for what you do and for being here today, and also for bringing Remy with you. This has been such a delight. [music]
Video description
Discussion hosted by the Corporations and Society Initiative (CASI) on March 3rd with former New York Times cyber reporter and cyber investor Nicole Perlroth on the new threats facing governments, companies, and citizens in an era of automated cyberattacks. Moderated by Alexis Opferman (MBA '26).