DeepMind and Deepfake: The light and dark of AI technology

The Source
By: Guest contributor, Wed Sep 7 2022

There are many big challenges facing the world and societies today and Springer Nature is committed to creating a sustainable business to help tackle them. This not only means using technology to open up research and accelerate solutions to the UN’s Sustainable Development Goals (SDGs), but doing so in a manner that is ethical and responsible, and that supports people in a fair and impartial way.

This post looks at some examples of how we’re showcasing the best of what AI technology can offer to science, as well as highlighting its potential dangers – as covered in our recently published Sustainable Business Report.

As publishers, it’s a crucial part of our role to publish and amplify scientific developments that could have a great impact on science and society. But we also have a responsibility to be aware of and understand the potential of new platforms and media that can be dangerous if misused. 

Advances made possible by artificial intelligence (AI) can fall into either, and sometimes both, of these groups. Below we look at two examples of AI in research, showing the incredibly bright side of what AI could make possible, as well as the dark side we all need to be more alert to.

“It will change everything” – a gigantic leap in solving protein structures

Nature offers a crucial platform for amplifying cutting-edge scientific developments. And it’s hard to get more cutting-edge than the results published in 2021 from a new project by DeepMind, a British artificial intelligence company.

The AI network developed by DeepMind, called AlphaFold, made a gargantuan leap in solving one of biology’s grandest challenges – determining a protein’s 3D shape from its amino-acid sequence.

In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods. And while it might not remove the need for these laborious and expensive methods – yet – the AI will make it possible to study living things in new ways.
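For readers who want to explore those predictions themselves, AlphaFold’s structures have been released publicly through the AlphaFold Protein Structure Database, a joint resource from DeepMind and EMBL-EBI. The short Python sketch below is purely illustrative and not part of the published research: it assumes the database’s public download URL pattern and file version (“model_v4” here), and uses the UniProt accession P69905 (human haemoglobin alpha chain) only as an example.

    # Illustrative only: download one of AlphaFold's publicly released
    # structure predictions from the AlphaFold Protein Structure Database.
    # The URL pattern and "model_v4" file version are assumptions and may
    # change; P69905 (human haemoglobin alpha chain) is just an example.
    import urllib.request

    accession = "P69905"
    url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

    with urllib.request.urlopen(url) as response:
        pdb_text = response.read().decode("utf-8")

    # Count ATOM records as a quick check that a full predicted structure arrived.
    atom_records = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
    print(f"{accession}: predicted structure with {len(atom_records)} atom records")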

“It’s a game changer,” said Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology in Tübingen, Germany, speaking to Nature. “This will change medicine. It will change research. It will change bioengineering. It will change everything.”

As well as publishing these findings, Springer Nature’s media outreach resulted in coverage of the research in more than 850 news stories, including BBC News, the Financial Times, the New York Times, Wired, and the front page of The Times.

Deepfake: an information ecosystem at risk

The incredible potential of AI, as demonstrated by DeepMind, has a dark side. This was chillingly illustrated by media artists at the Massachusetts Institute of Technology (MIT) and two artificial intelligence companies, Canny AI and Respeecher, working closely with Scientific American.

Starting in 2019, the team set out to create a posthumous ‘deepfake’ of Richard Nixon delivering a speech prepared in the event of a fatal Moon landing – a speech that, crucially, he never had to give. (The full deepfake speech can be viewed at https://moondisaster.org.)

The MIT team wanted to demonstrate the potential of such deepfakes and to raise questions about how they might affect our shared history and future experiences.

“Deepfakes” are a class of AI-generated synthetic media – you may be familiar with examples of the “face-swap” variety. Beyond this, however, there are various other kinds of AI-based synthesized media. Deep learning has also been adapted to create audio fakes, lip syncing, and whole-head and whole-body puppetry. 

The MIT team didn’t create a simple face swap. Instead, they wanted to create the best technical fake possible while documenting the work involved in producing it – a process that took more than half a year.

To accomplish the visual part of the fake, Canny AI employed a technique called “video dialogue replacement”. For the audio, Respeecher used a “voice conversion system” to synthesize Nixon’s voice delivering the speech from an actor’s performance.

The team then worked with Scientific American to show their video to a group of experts on AI, digital privacy, law and human rights. In the short film ‘To Make a Deepfake’, these experts provided necessary context on the technology.

The team at MIT, Respeecher, Canny AI, and Scientific American won an Emmy for Outstanding Interactive Media Documentary for this work.

This post is part of a series to accompany the publication of Springer Nature's Sustainable Business Report 2021. It highlights just a few of the contributions we have made towards some of the SDGs over the past year, and how we continue to build foundations for the future through the continued opening up of research and the building of new partnerships.

Find out more by reading Springer Nature's Sustainable Business Report 2021.


Author: Guest contributor

Guest Contributors include Springer Nature staff and authors, industry experts, society partners, and many others. If you are interested in being a Guest Contributor, please contact us via email: thesource@springernature.com.