The deepfake detective
“As a technologist, you have a couple of options: You can bury your head and pretend this stuff doesn’t exist. Or you can try to make the world a better place.” — Dr. Hany Farid
Transcript
JORDAN PEELE as “BARACK OBAMA”: We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things. So they could have me say things like, I don’t know… Killmonger was right….
CATERINA FAKE: Hi, it’s Caterina. You’ve probably seen this PSA, released about two years ago with Jordan Peele’s words coming out of the mouth of President Barack Obama.
PEELE: You see, I would never say these things… at least not in a public address but someone else would, someone like Jordan Peele.
FAKE: Peele is doing his famous, and eerily accurate, Obama impression, using digitally manipulated video to make a strong point about misinformation and deepfakes.
PEELE: This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the internet.
FAKE: We call these hyper-realistic videos deepfakes because, well, they’re fake and because they’re created not through ordinary video editing, but through a type of artificial intelligence called deep learning.
HANY FARID: We’ll spend three hours looking at this image and trying to figure out if it’s real or not. You’ve got a fraction of a second, if that.
FAKE: Dr. Hany Farid is a computer science professor at UC Berkeley who’s been trying to confront the threat of deepfakes, but he’s racing to keep up.
FARID: A billion uploads to Facebook a day, 500 hours of video a minute uploaded to YouTube, hundreds of millions of tweets every day.
FAKE: We’ll meet Farid, who has developed a deepfake detective for debunking these fakes in close to real time. But a debunked video can still go viral. And the problem is only getting worse. Today’s deepfakes may seem convincing. But tomorrow’s deepfakes will just seem real. Will a high-tech fact checker be enough to hold back the flood?
FARID: Imagine 24, 48 hours before an election, I create a fake video that goes viral. We can swing an election. I’m concerned for our democracy both here and abroad. And I don’t think we’re at a low point, yet.
[THEME MUSIC]
FAKE: Well, we’re right on time for our meeting.
FAKE: Hi. We’re back with Should This Exist at one of my favorite spots on the UC Berkeley campus.
FAKE: It’s, I think, the most beautiful building on the Berkeley campus. It has all of these beautiful Victorian features like a mansard roof and a widow’s walk. And this is the School of Information, which used to be the School of Library Sciences. I used to serve on the board.
FAKE: As you might guess from the fact that the campus is open, we recorded this pre-COVID. We’ve come here to see Dr. Hany Farid, a pioneer in the analysis of digital images.
FAKE: We found him.
FARID: Hi.
FAKE: Great to meet you. Caterina Fake.
FARID: Nice to meet you, too. I’ve just discovered that Neil Diamond is a f****** genius.
FAKE: You did?
FAKE: Hany is an expert in cutting-edge artificial intelligence, but his musical tastes aren’t so current.
FAKE: Why is he a genius?
FARID: I just love his music. It’s been a long time since I’ve listened to him.
FAKE: Right.
FARID: And I just sort of like, rediscovered him.
FAKE: I literally just met him, and all of a sudden he’s got me singing “Sweet Caroline.”
FAKE: “Touching you, touching me.” He’s 79.
FARID: OK. But that’s not the reason you came to talk to me though is it?
FAKE: No, but…
FAKE: Hany and I come to this conversation with other interests in common. He’s one of the foremost experts in digital photo forensics, evaluating photos and videos for their authenticity in trials, with news organizations and nonprofits. And I was the co-founder of Flickr.
I’m here to talk with him about a technology that’s been around for a while, where we’ve already seen some of its dark underbelly and are bracing for more: deepfakes.
FARID: And the idea is basically pretty simple, which is that you are now using advances in machine learning and artificial intelligence to do the hard work of what used to be a digital artist.
So it used to be somebody in the darkroom painting over a negative and re-exposing it, then a talented artist in Photoshop. Now, it’s someone who downloads some code and says, “Replace this person’s face with this person’s face,” and done. It automatically happens.
FAKE: It just happens.
FAKE: Hany Farid is trying to defuse these AI-generated fakes with AI of his own. We’ll get to how his “deepfake detective” works in a moment. But to start, I want to talk to him about what’s actually at stake.
Hany has been at Berkeley for less than a year, after spending most of his career at Dartmouth.
FARID: So being here, there’s something exciting about being here. Because I think you are at the heart of the universe of all things tech.
FAKE: Right?
FARID: No, no question about it. The downside that I don’t like is you’re at the heart of all things tech.
FAKE: How is that a downside?
FARID: Well, because I think there’s two faces to the technology revolution. There’s clearly phenomenal things that we have. You know, the internet and mobile devices and access to technology and democratization.
But there’s also a really dark side to technology, right? The weaponization of technology, surveillance capitalism, privacy problems, the rise of hate online.
FAKE: And deepfakes is a good example of those two faces.
FARID: And so now we’ve democratized access to technology that used to be in the hands of Hollywood studios and state sponsored actors. And while that is a big step, I would still argue it is a step in a continuum of the ability to create increasingly sophisticated fakes and disseminate and amplify them.
FAKE: In spite of the fact that, you know, there will be 10 million people who use it for benign purposes.
FARID: Sure.
FAKE: One bad actor comes in…
FARID: Let me give you the scenarios, right? So first, if you haven’t done this, you should do this. Go search for deepfake and Nic Cage. Some very funny Internet users have spliced Nic Cage’s face into almost every movie ever made.
FAKE: Nic Cage as Julie Andrews in The Sound of Music. Nic Cage as the Scarecrow in The Wizard of Oz. Nic Cage as Elvis.
FARID: It’s fantastic.
FAKE: And it’s funny. It’s a thing.
FARID: And I love it. It’s good for political satire. We can make President Trump say things he never said. And that’s funny from the point of view of satire and commentary.
But now imagine the following scenario: that I take your image, because you have done or said something that I don’t like, and I insert you into sexually explicit content, and I distribute that content online to do you harm, both personal harm and potentially physical danger, which of course is what is happening in the form of nonconsensual pornography.
Imagine 24, 48 hours before an election, I create a fake video of President Trump that goes viral, we can swing an election.
Imagine I release a video of Jeff Bezos, Mark Zuckerberg, pick your favorite multi-billionaire CEO, saying, “Our company’s profits are down 10 percent.” I can manipulate global markets to the tune of billions of dollars. And that takes all of what, 30, 60, 90 seconds?
FAKE: Sure.
FARID: Imagine that I create a video of a military leader in a conflict region saying something religiously or racially insensitive. That can lead to riots in the streets.
FAKE: As much as deepfakes seem like the dark side of 21st-century technology, Hany says that they’re part of a long continuum of doctoring images. In other words, this isn’t new.
FARID: Now, you can go back to Stalin, who had photographs manipulated to remove people who fell out of favor, sort of an early, analog version of a deepfake, if you will.
FAKE: Abraham Lincoln.
FARID: Abraham Lincoln’s famous portrait is his head on somebody else’s body, Calhoun’s, because apparently Lincoln had bad posture. And so they created this composite. And of course, the dissemination methods today are very different.
FAKE: Right. So you don’t have to have a daguerreotype studio.
FARID: Exactly. That’s exactly right. I would never be able to pronounce that word. So I’m glad you did. So 20 years ago, I was a young faculty member at Dartmouth College, and the internet was really in baby steps.
FAKE: What year is this?
FARID: This is 1999, right? No mobile devices, digital cameras are a glimmer in our eye, film still dominates the landscape, but you could see the trends.
FAKE: And by the early aughts, digital took over. Citizen journalism was on the rise. More and more people were walking around with a tiny camera in their pockets and the ability to post anything they wanted to the internet at any time.
FAKE: I remember the photographs of the bombing of the embassy in Jakarta in 2004 were actually on Flickr.
FAKE: We had photos of a car bombing at the Australian Embassy before the major news services had it.
FAKE: Actually, there are photos from inside the bombing of the subway in London in 2005, and the thing that was significant about them was that they were subsequently reported.
FAKE: From documenting the Arab Spring to police actions, cellphones and citizen-powered news were becoming the new norm.
FARID: So it got on my radar a little bit later, primarily through organizations like the Associated Press, Reuters, and The New York Times, who would contact me, and they would have material from things like what you’re describing, world-altering events that would need to be reported. But they didn’t have a reporter there.
FAKE: Can you authenticate this?
FARID: Right. And that was really the start of the world that we enter today, of course.
FAKE: Now things are dramatically different. With nothing more than an off-the-shelf phone and a bit of easy-to-find software, anybody can create fake content. Combine that with unfiltered social media, and well, you start to see the scale of the problem Hany is dealing with.
FARID: Probably 2016, probably even earlier, we really started to see the impact of mis- and disinformation from election tampering to the horrific violence in Myanmar and Sri Lanka and the Philippines and India.
And now the issue of authentication over the last two, three, four, or five years has taken on a very different scale and a different urgency, because it used to be in a court of law, I’d get some evidence. I’d have weeks, months to analyze things.
And now we have a fraction of a second before a viral video blows up and people start killing each other somewhere in the world.
FAKE: And the speed at which it’s happening is also preventing people from using their neocortex. And they’re relying entirely on their amygdalas.
FARID: That’s exactly right.
FAKE: That’s the emotional/irrational brain vs. the thinking/rational part.
Members of the military in Myanmar used Facebook to incite riots, with fake stories to fuel hate. Many blame this propaganda for hundreds of thousands of people being displaced and thousands being killed. But social media platforms have long been insulated from liability for distributing the harmful content.
FARID: Section 230 of the Communications Decency Act is the gift from the gods for the technology sector.
FAKE: Yes.
FARID: It says that platforms – and that word is very, very important – are not liable for user-generated content with a few exceptions: copyright infringement and child sexual abuse. And by the way, we protected copyright owners before we protected children. What does that tell you about the society we live in?
So Section 230 says there’s no responsibility, therefore there’s no incentive for the companies to do better. So I testified before Congress late last year about 230 reform, and in a rare show of bipartisan support from the far right to the far left, everybody agrees we have to reform tech.
How we do it is not necessarily agreed upon, but we have to start holding these companies, and Mark Zuckerberg, personally responsible when his services lead to the death of thousands of people in Myanmar. He should be in handcuffs and be going to jail. That’s how you effect change on these services. It’s just a question of when it will come and what form it will take. Okay, now let’s talk about what I do here.
FAKE: Hany Farid’s work is in direct response to this lack of accountability in tech. If tech platforms won’t invent the tools to fight disinformation, he will. So he is creating a weapon to defuse deepfakes in the geopolitical landscape – a detective AI system. To understand how it works, you first need to understand how deepfakes operate.
FARID: So let me start with this image. It will show you the image of a person that doesn’t exist.
FAKE: Okay.
FARID: So these six people that you see on my screen here have never existed. They were fully 100 percent synthesized by a computer.
FAKE: Faces of an older woman, a young boy, people of all races – they could be portraits you’d see on LinkedIn or Facebook.
FARID: So these are generated with what is called a generative adversarial network, a GAN.
FAKE: A GAN is a subset of AI, one of the underlying technologies that enable deepfake creation. You can think of it like a pair of AIs. The first AI generates fake images and tries to fool the second AI into thinking they’re real. The second AI tries to spot all the fakes. It’s sort of a deep-learning competition between the two networks.
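[For readers who want to see the “pair of AIs” idea in code, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It is not the system behind the synthetic faces Hany shows; production face generators are far larger. The “real” data here is just random vectors standing in for images.]

```python
# Minimal sketch of the GAN idea: a generator tries to fool a discriminator,
# and the discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```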
FARID: So, one of the things that we are concerned about is how this type of technology will be weaponized either in 2020 or 2024 or whatever. And so we’ve started focusing on really how do we detect deepfakes of world leaders, politicians, people running for office.
So I’m gonna show you a series of video clips of former president Obama, and see if you notice anything. Okay?
FAKE: Okay.
CLIPS OF “OBAMA”: Hi, everybody. Hi, everybody. Hi, everybody. Hi, everybody. Hi, everybody. Hi, everybody.
FARID: So those were all short clips, just seconds long. And what you may have noticed is they’re all the beginnings of his weekly addresses that he did when he was in the White House. And what you might notice is every time he says “Hi, everybody,” he tilts his head up a little bit. “Hi, everybody. Hi, everybody.” Just this little head nod, right? Almost like, hey, how you doing? A little New Jersey thing going on there. And what you noticed is as he brings his head down, he actually purses his lips. So it’s this very interesting mannerism. And Donald Trump doesn’t do this, and Hillary Clinton doesn’t do it. And I don’t do it. And you don’t do it. It’s distinct to former president Obama.
FAKE: Hany is in the business of learning these distinct mannerisms of world leaders. Studying hours and hours of video of how President Obama says a certain word.
FARID: And so we look for those mannerisms, what we find is when we create deepfakes, when other people create these fakes, they violate these mannerisms that are expected, right?
We build what we call soft biometric models – biometric, because we’re identifying somebody, and soft, because we don’t expect this to distinguish you from 7 billion people in the world. But we think that it will distinguish you from somebody impersonating you. Yep?
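[As an illustration of the soft-biometric idea, here is a toy sketch, assuming each video clip has already been reduced to a vector of mannerism features (for example, head tilt, lip-purse intensity, blink rate). It fits a one-class model on clips known to be authentic and flags clips that fall outside that pattern. The feature names and the one-class SVM are stand-ins for illustration, not Farid’s actual pipeline.]

```python
# Toy sketch of a soft-biometric check: model one person's mannerisms from
# authentic clips, then score a new clip against that model.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in features: authentic clips cluster around one mannerism pattern.
authentic_clips = rng.normal(loc=[0.8, 0.3, 0.5], scale=0.05, size=(200, 3))
suspect_clip = np.array([[0.2, 0.9, 0.1]])   # mannerisms that don't match

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(authentic_clips)

# decision_function > 0 means "consistent with the person's known mannerisms".
score = model.decision_function(suspect_clip)[0]
print("consistent with known mannerisms" if score > 0 else "possible impersonation")
```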
FAKE: This biometric model is the core of Hany’s detective tool. But there’s a bit of an arms race here – because as soon as Hany explains what he’s doing, someone is developing a technology to get around it. But so long as he’s making it harder to make convincing deepfakes, he’s making progress.
FARID: And our goal is not to eliminate the creation of deepfakes, but to take the creation of sophisticated deepfakes out of the hands of amateurs. Because if a bunch of teenagers in Macedonia can disrupt a U.S. election, we have a real problem.
FAKE: So there’s an interesting tension here with how you deploy this technology.
FARID: We’re going to put it in the hands of journalists, and it’s going to look something like this: There’s gonna be a web portal that you navigate to, and it will be closed off to the public. But vetted journalists will have access to it, and they will upload a video, and we will say, “Ah, this seems to be Donald Trump.” We take that video, we analyze it, we compare it against our model, and we basically come up with an answer. You know, the likelihood of being authentic is x.
But what we hope is that as videos and mis- and disinformation start to leak out, that we will provide the tools to journalists – who, in my opinion, are the gatekeepers between the nonsense that is the world and us as the voters – to make us informed voters. We want to enable them with the tools that they need to assess whether something is real or not.
FAKE: Journalists getting to verify videos before they get widely circulated seems like a step in the right direction. But it opens up some new questions too. Like, which journalists get access to the portal?
For this reason, Hany Farid says he and his colleagues will publish papers, describing what they do, but won’t make the data or the code available for fear that it will simply enable the creation of better fakes. He calls it semi-transparency.
FAKE: Do you worry about unintended consequences?
FARID: All the time. I think what’s difficult about this issue is literally the unintended part: it’s hard to see them. I mean, if I can see what the consequences are, I can put the safeguards in place. But the problem is, we don’t see them.
How things are going to be misused by bad actors is very hard to predict. And the best intentions can go awry quickly.
FAKE: So, is there such a thing as a good reason to use deepfakes? Like getting an Oscar nomination? Coming up, on Should This Exist?
FARID: So it was about… God, uh… more than 10 years ago…
FAKE: Dr. Hany Farid is telling me about a surprising use for machine learning that isn’t a deepfake, but uses very similar tech. It’s a video detection case that haunted him for years. A case that helps explain the advances and the best possible uses for this technology.
FARID: Let’s call it around 2010, I got an email from a father whose son had been killed and the whole thing had been filmed on CCTV camera.
FAKE: That’s one of those cameras up on a road where the traffic light is.
FARID: You could see the shooting. You could see the car. But it was so grainy and dark and low quality that you couldn’t make out the license plate, you couldn’t make out faces. It was horrific.
FAKE: Then after nearly a decade, the technology got to a point where it could really make a difference.
Hany was able to pick this case back up, producing and analyzing tens of millions of realistic-looking license plates. After combing through them and matching them to images from the CCTV, he had a breakthrough.
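[To picture the license-plate work, here is a toy sketch of the compare-against-synthetic-candidates idea: render a candidate plate, degrade it to roughly the camera’s quality, and keep whichever candidate best matches the grainy crop. The rendering helper and data below are made up for illustration; the actual analysis was far more sophisticated.]

```python
# Toy illustration: match a grainy plate crop against synthetic candidates by
# rendering each candidate, degrading it like the camera would, and correlating.
import numpy as np
from PIL import Image, ImageDraw

def render_plate(text, size=(96, 24)):
    """Illustrative stand-in for a realistic license-plate renderer."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((6, 6), text, fill=0)
    return img

def degrade(img, low_res=(24, 6)):
    """Simulate a low-quality camera: downsample, then upsample back."""
    small = img.resize(low_res)
    return np.asarray(small.resize(img.size), dtype=float)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Pretend this is the grainy crop pulled from the CCTV footage.
observed = degrade(render_plate("7ABC123")) + np.random.normal(0, 8, (24, 96))

candidates = ["7ABC123", "7ABC128", "1XYZ999"]
scores = {c: ncc(observed, degrade(render_plate(c))) for c in candidates}
print(max(scores, key=scores.get), scores)
```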
FARID: My understanding as of a year or so ago was that they felt like they had found the suspects in the murder. And that was, I mean, 10 years. But it was close. Yeah, yeah, yeah.
FAKE: For Hany, it’s seeing the meaningful uses of deep learning that make the misuses so frustrating.
FARID: And so what I think is interesting about that story is that the latest advances in machine learning and AI, while they do have some problematic applications, also have some really cool applications, and not just Hollywood studios making better special effects. Because if that’s all it was, I would say this isn’t worth it. I don’t need better special effects, but that’s cool. Right?
FAKE: Point taken. But for people who work in the film industry, “cool special effects” aren’t just a fun distraction. They’re core to career-making innovations in their field. Like Martin Scorsese’s film The Irishman, which received an Oscar nomination for its special effects, using first-of-its-kind technology to make the actors look decades younger.
THE IRISHMAN CLIP: Hello? Is that Frank? Hi Frank, this is Jimmy Hoffa.
FAKE: De-aging Al Pacino, Joe Pesci, and Robert DeNiro. For example, DeNiro, who was 76 when they were filming, ages from about 30 to 80.
And deepfake technology is being used in the art world at the Salvador Dalí Museum in St. Petersburg, Florida.
“SALVADOR DALÍ”: Greetings, I am Salvador Felipe Dalí.
FAKE: With a life-sized deepfake of the artist where he delivers lines drawn from his actual quotes or writing.
DALÍ: I have a longstanding relationship with death.
FAKE: Or in reaching worldwide audiences where David Beckham delivers a message for the Malaria Must Die Campaign.
DAVID BECKHAM: Malaria isn’t just any disease, it’s the deadliest disease that’s ever been.
FAKE: And with the help of facial re-animation, he speaks in nine languages.
GERAINT REES: The technology underpinning these deepfakes is used for so many other different things that we would view as useful. That actually drawing the boundaries to just shut it down seems temptingly easy.
FAKE: That’s Professor Geraint Rees at University College London, where they’ve launched the AI for People and the Planet project with the belief that AI research and innovation is ultimately for positive impact on individuals and societies.
REES: So the idea of digital twins, for example, is all over the place. All the jet engines on all the airplanes we ever fly on have a digital twin. And the purpose there is to recreate a simulation so accurately that we can start to use predictive analytics to try and anticipate things happening before they do. Could we have a digital twin of ourselves to work out the effect of treatments or of different preventative mechanisms to keep us healthy?
FAKE: Or there’s Deep Empathy, a UNICEF and MIT project that uses deep learning on images of Syrian neighborhoods affected by conflict, and then simulates how cities around the world, like Boston or London, would look in the midst of a similar civil war.
REES: If we’re going to understand the phenomena, and if we’re going to think about acting globally, shouldn’t we actually perhaps be thinking of this more positively as an opportunity to set the frameworks – and to think about what we would want to do with this technology as a society?
FAKE: In Washington, DC, Congresswoman Yvette Clarke, a Democrat from New York, believes we need to be actively engaged in creating guidelines or guardrails for deepfakes and their darker side.
CLARKE: We’re really, actually, sort of behind in raising awareness of what type of damage it can do.
FAKE: Representative Clarke has put forward legislation called the DEEPFAKES Accountability Act – first introduced in June of last year. It says detection doesn’t go far enough. We need legal recourse including digital watermarks and disclaimers on altered video content.
FAKE: Does this bill have bipartisan support?
CLARKE: Well, we’re still working on getting our Republicans on board. I think that there are many more colleagues who are interested. Of course, folks don’t want to admit to being deceived, or to deceptive practices, during an election cycle.
FAKE: Oh, that’s super interesting. I hadn’t even thought about it from that angle.
CLARKE: Yes.
FAKE: Okay, because there’s a certain amount of shame attached to having been fooled.
CLARKE: And there’s an advantage to turning a blind eye to the fact that deceptive materials would be out during an election season. There are just certain behaviors that, I think, have been tolerated. And unfortunately, we’re in a climate where, if it doesn’t happen to you, individuals don’t see a need to protect the American people from it.
FAKE: Along with fears for the 2020 election, the vast majority of deepfakes – up to 96% of them by some estimates – are used to violate and degrade women.
CARRIE GOLDBERG: Over five years, we went from having just a few states with revenge porn laws to having 46.
FAKE: Carrie Goldberg is a lawyer who started her own firm as a victims’ rights attorney specializing in sexual privacy. She is the author of the book Nobody’s Victim, about many of her cases and her own nightmare experiences. And in only a few years, she’s helped craft revenge-porn laws across the country.
GOLDBERG: Most of my cases involving deepfakes are celebrities. The public figures are sometimes the first targets and then there’s a trickle down.
And one of the things about deepfakes is that it really falls through the cracks of our laws because our revenge porn laws require that your own naked body be exposed. And so with deepfakes, when your head is superimposed onto somebody else’s naked body, it doesn’t fall within the statutes of defamation.
FAKE: You’re not protected. Oh, that is so strange, right? It’s so confusing because it’s part of your body, just as much as the part below your neck.
GOLDBERG: Yeah. And I mean, the humiliation is there either way.
FAKE: Of course. One question I wanted to ask you is: how can we guide women to meaningful action?
GOLDBERG: The Cyber Civil Rights Initiative is one of the main powerhouses that originally fought for revenge porn laws and is just on generally the cutting edge, putting pressure on tech companies to do better.
The National Women’s Law Center, their Time’s Up fund pays for litigations. And as women, one thing that we can do is hold accountable the people who’ve harmed us. People who do take action are doing something selfless.
FAKE: Yeah. They’re doing it for the rest of us, really.
GOLDBERG: They are.
FAKE: And prevent the perpetrators from doing it again.
GOLDBERG: Absolutely.
FAKE: We’re back with Professor Hany Farid at UC Berkeley. In addition to working in the field of deepfake detection, Hany’s one of the foremost experts in image forensics in the world. He’s been an expert witness in countless civil and criminal trials.
FARID: I will tell you the things that haunt me. I do a lot of work in the child sexual abuse space, which is horrific. I do work in the counterterrorism space, which is equally horrific. A lot of what I do is deal with national security, criminal issues that are extremely high stakes. And while I try to compartmentalize, it’s hard sometimes.
FAKE: He works with nonprofits like the Canadian Centre for Child Protection, helping them think through technological innovations to get better at finding child sexual abuse material. He says they’re the real heroes.
FARID: And on the flight back, it was Winnipeg to Vancouver, Vancouver to San Francisco, I must have probably spent 80 percent of that time just in tears. I think the person sitting next to me just thought, “man, this guy’s in bad shape.” And then you start thinking about the real victims, the real children behind this. And it is haunting. It is haunting.
All of these things are about the weaponization of technology. And deepfakes is just part of that equation. You can get to this point where you wonder what is wrong with humanity. How are we so willing to create so much pain for so many people? I don’t understand that.
On the other hand, as a technologist, you have a couple of options. You can bury your head and pretend this stuff doesn’t exist and not look at it, but that doesn’t make it go away. And so my job as an academic and as a technologist is trying to make the world a better place in whatever way I can.
FAKE: With deepfakes, the technology’s here, and it’s getting better. And it isn’t going away. Researchers like Hany Farid are preparing for a future where deepfakes are more common and harder to detect. So with someone who is leading the charge to use AI to fight AI, I couldn’t help but ask the question – the question that someone perhaps should have asked a long time ago, when this technology was developed. Should this exist?
FARID: Looking at the landscape today, I would say no. Is there a landscape five years from now, 10 years from now, where there are some military applications? Possibly, and that’s where the rub is.
So the rub is, do you say, “Look, let’s stop working on this because we see more harm than good?” But what if we’re wrong? What if this eventually leads to some remarkable breakthrough that we can’t see today? So that’s the rub with science, right? You can’t predict.
So I think probably the better answer is not to stop working on it, but to put safeguards in place. Stop putting the stuff out there for anybody to download. Figure out how you might, for example, embed a watermark into every piece of deepfake content so that we can detect it at the back end without a lot of work. I mean, I think there are compromises here, as opposed to “Guys, let’s stop working on this today.”
And I think there’s more interesting compromises that find a balance between protecting our societies and democracies and moving science and technology forward.
And so I’ve just got to stop saying the problems are too hard and start doing something. And if everybody did something, well, then I think things will get better. And that’s sort of my glimmer of hope: that if we keep chipping away at this, things will actually get better.
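[To make the watermarking safeguard Farid mentions above concrete, here is a toy sketch of the simplest possible scheme: hide an agreed-upon bit pattern in the least significant bits of a frame so a platform could check for it later. Production watermarks are far more robust than this; the sketch only illustrates the embed-then-detect-at-the-back-end workflow.]

```python
# Toy sketch of "embed a watermark, detect it at the back end":
# hide a known bit pattern in the least significant bits of an image.
# Real schemes must survive compression, cropping, etc.; this one does not.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # agreed-upon pattern

def embed(pixels: np.ndarray) -> np.ndarray:
    flat = pixels.flatten()
    flat[:MARK.size] = (flat[:MARK.size] & 0xFE) | MARK  # write pattern into LSBs
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    return bool(np.array_equal(pixels.flatten()[:MARK.size] & 1, MARK))

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in video frame
marked = embed(frame.copy())
print(detect(marked), detect(frame))  # True for marked content, likely False otherwise
```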
FAKE: Having talked to an amazing group of people working on deepfakes, I came to the conclusion that given the tradeoffs – its being used to degrade women or to undermine political opponents – it’s not worth the Nic Cage videos or seeing Robert DeNiro be 30 again.
But I also know that stopping deepfakes isn’t as simple as passing a single law or inventing a single algorithm.
I wish this technology did NOT exist. But since it does, I’m grateful that people like Carrie Goldberg, Representative Yvette Clarke, and Professor Hany Farid are working to mitigate the damage deepfakes do. Their job – and ours – will be to try and see around the corner, and make sure today’s best efforts don’t become tomorrow’s unintended consequences.
Look, I don’t get to decide “should this exist?” And neither does this show. Our goal is to inspire you to ask that question and the intriguing questions that grow from it.
LISTENER: The question I’m asking about deepfakes at the dinner table is the same question that I’m asking about the pork chops we’re eating: whose job is it to ensure they’re safe?
LISTENER: The genie is out of the bottle. You can’t uninvent this technology. So what are we going to do now?
LISTENER: Honestly, just shut it down. Is there anything good about this?
LISTENER: Maybe art? I mean, I guess it’s just an evolution of Photoshop.
LISTENER: People could just make me say stuff that I actually don’t believe.
LISTENER: The one thing that seems remarkably positive is where you can hear stories from individuals that may not be alive to tell them today. What does it feel like when you hear Shakespeare talk to you?
LISTENER: I’m trying to be open-minded with deepfake. It scares the beep out of me.
FAKE: Agree? Disagree? You might have perspectives that are completely different from what we’ve shared so far. We want to hear them. To tell us the questions you’re asking, go to www.ShouldThisExist.com, where you can record a message for us. And join the Should This Exist newsletter at www.ShouldThisExist.com.
I’m Caterina Fake.