
Deep Fakes and Social Media


An artificial intelligence creates ultrarealistic faces of people who do not exist. Created by an Uber engineer, the site ThisPersonDoesNotExist (literally, “this person does not exist”) generates from scratch photos of faces of people who are not real.

Yet at first it is hard to believe. You refresh the page several times to bring up new photos. Then you begin to notice flaws: eyes that are too orange, veins appearing in strange places. This is the unsettling experience offered by Philip Wang, an engineer at Uber. The goal of ThisPersonDoesNotExist is to educate the general public about the progress artificial intelligence has made in manipulating images. Although it may seem worrying at first, this type of technology can also be put to entirely harmless uses, such as creating ultrarealistic special effects in films.

How to make Barack Obama say whatever you want with artificial intelligence

Internet Companies Prepare to Fight the ‘Deepfake’ Future

SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera. Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors.

Grover - A State-of-the-Art Defense against Neural Fake News

Tool to Help Journalists Spot Doctored Images Is Unveiled by Jigsaw

A doctored, phony image of President Barack Obama shaking hands with President Hassan Rouhani of Iran. A real photograph of a Muslim girl at a desk doing her homework with Donald J. Trump looming in the background on television. It is not always easy to tell the difference between real and fake photographs. But the pressure to get it right has never been more urgent as the amount of false political content online continues to rise. On Tuesday, Jigsaw, a company that develops cutting-edge tech and is owned by Google’s parent, unveiled a free tool that researchers said could help journalists spot doctored photographs — even ones created with the help of artificial intelligence. Jigsaw, known as Google Ideas when it was founded, said it was testing the tool, called Assembler, with more than a dozen news and fact-checking organizations around the world. The tool is meant to verify the authenticity of images — or show where they may have been altered.

Deepfakes — Believe at Your Own Risk

Producer/Director Andréa Schmidt. Watch — very closely — as an ambitious group of A.I. engineers and machine-learning specialists try to mimic reality with such accuracy that you may not be able to tell what’s real from what’s not. If successful, they’ll have created the ultimate deepfake, an ultrarealistic video that makes people appear to say and do things they haven’t. Experts warn it may only be a matter of time before someone creates a bogus video that’s convincing enough to fool millions of people. Over several months, “The Weekly” embedded with a team of creative young engineers developing the perfect deepfake — not to manipulate markets or game an election, but to warn the public about the dangers of technology meant to dupe them. The team picked one of the internet’s most recognizable personalities, the comedian and podcaster Joe Rogan, who unwittingly provided the inspiration for the engineers’ deepfake moonshot.

It Will Soon Be a Crime in China to Post Deepfakes Without Disclosure

What should newsrooms do about deepfakes? These three things, for starters

Headlines from the likes of The New York Times (“Deepfakes Are Coming. We Can No Longer Believe What We See”), The Wall Street Journal (“Deepfake Videos Are Getting Real and That’s a Problem”), and The Washington Post (“Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned’”) would have us believe that clever fakes may soon make it impossible to distinguish truth from falsehood. Deepfakes — pieces of AI-synthesized image and video content persuasively depicting things that never happened — are now a constant presence in conversations about the future of disinformation.

These concerns have been kicked into even higher gear by the swiftly approaching 2020 U.S. election. A video essay from The Atlantic admonishes us: “Ahead of 2020, Beware the Deepfake.” An article from The Institute for Policy Studies asks: “Will a ‘Deepfake’ Swing the 2020 Election?” Video and photo manipulation has already raised profound questions of authenticity for the journalistic world.

Twitter Will Label And Remove Deepfake Videos, Images, And Audio Starting In March

Twitter will soon begin removing altered videos and other media that it believes threaten people’s safety, risk mass violence, or could discourage people from voting. It will also start labeling significantly altered media, regardless of intent. The company announced the new rule on Tuesday; it goes into effect March 5.

Face2Face: Real-time Face Capture & Reenactment of RGB Video

What are ‘deepfakes’?

Deepfakes Are Coming. We Can No Longer Believe What We See.