Welcome to the synthetic decade
Technology is giving us tools to alter reality in more and more areas. You might soon not only eat artificial meat but also interact with your personal double. And, not least, consume more and more information created by AI. Welcome to the Synthetic Decade.
The idea that we’re entering a new era comes from futurist Amy Webb and her team at the Future Today Institute. She states: ”Not only will we eat beyond burgers, but we will consume synthetic content, or train the next generation of AI with synthetic data sets”.
Recent developments within AI prove them right. AI will impact the way we consume, get informed and envision health and life span. This is not some distant future, and you might already have encountered what is now defined as ”synthetic content”. If you’ve ordered a Beyond Burger you have eaten synthetic meat; if you’ve used a face swap filter on your phone you have produced synthetic media.
Editing DNA
As we progress into the synthetic decade, synthetic experiences and relationships will shape greater parts of our lives. A good example is the development of synthetic biology and the ability to engineer living systems and structures by programming DNA with CRISPR, designing and re-designing organisms to do what we want them to do. Editing DNA has been possible since 2010, but it is a very laborious task. Synthetic biology promises to automate the editing process. As Amy Webb puts it: ”In this decade, synthetic biology is going to allow us to read, edit and write life. We will program living biological structures as we build tiny computers.” This is not science fiction, and we can envision many positive use cases for improving our own health and life span, and for helping living structures adapt to new conditions such as global warming or pandemics.
Looking into one of these fields, synthetic media, uncovers many of the trends behind the synthetic decade. It has started to unfold, and it tells us a lot about the potential outcomes and the many questions it triggers, blurring the line between what we consider ”real” and ”virtual” even further.
2017 was a landmark year for synthetic media, with Vice reporting the emergence of pornographic videos altered with algorithms to insert the faces of famous actresses. The term ”deep fake” was coined soon after, bringing a lot of attention to the phenomenon and its harmful potential for misinformation. This triggered a fundamental discussion about ethics and the potential harm of ”forging” content through AI, a discussion that will likely remain at the core of synthetic media. A very famous example is a deep fake video of Obama, created by Buzzfeed and enacted by Jordan Peele, warning us that ”We’re entering an era in which our enemies can make anyone say anything at any point in time.” And indeed we are!
The potential impact of synthetic media lies in the automation of editing
Synthetic media is the term used for content created using artificial intelligence. With an initial set of data, algorithms learn to reproduce, and create, pictures, videos, sound, gestures, text and more. The result is realistic-looking and realistic-sounding artificial digital content.
Looking closer at the tech behind synthetic media, the past few years have seen significant advances in deep learning, and generative adversarial networks (GANs) in particular have accelerated its growth. Synthetic media is mostly based on GAN technologies, even if many other techniques are being developed. As a result, the quality of synthetic media is improving rapidly, and it might soon be indistinguishable from traditional media.
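To make the GAN idea above a little more concrete, here is a minimal, hypothetical sketch of how the two networks are trained against each other, assuming PyTorch is available; the architecture and sizes are illustrative only and not taken from any real synthetic media system.

```python
# A minimal GAN sketch (assumes PyTorch): a generator maps random noise to
# fake samples, while a discriminator learns to separate real from fake.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes, e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    """One adversarial update: first the discriminator, then the generator."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: score real samples as 1 and generated samples as 0.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator mistakes for real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Real synthetic media systems use far larger networks operating on images, video or audio, but the adversarial principle is the same: the generator improves by learning to fool an ever-improving discriminator.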
The potential impact of synthetic media lies in the automation of editing, which makes it possible to create content at scale. The cost of creating synthetic media has dropped considerably due to the wide availability of these techniques. Open source software already enables anyone with some technical knowledge and a powerful-enough graphics card to create a deep fake. This has led to a drastic improvement in synthetic media quality (check out thispersondoesnotexist.com), without countless hours of tedious work.
A meaningful trend
If we also consider new behaviors, such as how we consume media on social channels, how we expect ever more personalization and accessibility, or the fact that we have normalized virtual spaces for socializing (see the rise of Fortnite, or Animal Crossing as social media during the quarantine period), we have very fertile ground for synthetic content to become a meaningful trend and impact the way we create and consume content online.
This again raises the familiar question of whether synthetic media is bad. It is a delicate yet fundamental question, and the answer is the same as with most tech: it is not harmful in itself; it depends on what we use it for. Synthetic media has a lot of potential because it is not just deep fakes; there is growing interest in how it could support new business and creative areas. The industry around synthetic media is blossoming, and many companies and investors are looking into the trend, believing strongly in its future.
For now, entertainment applications are the entry point for larger audiences. We all carry the ability to create synthetic media in our pockets today. For example, Snapchat released its gender-swap filter in 2019, the Russian app FaceApp made us look older, and in China, ZAO released a deep fake app that can insert the user’s face into clips from famous films and series. It’s not hard to imagine the next iteration of a social media app being one where users can transform their voices, create their own synthetic character, or pretend to be their favorite celebrities.
Synthetic media could become a lever for the media industry
But it’s about more than just entertainment – synthetic media could become a lever for the media industry, starting with automated news reporting and delivery.
In today’s newsroom, some types of reporting are extremely tedious and straightforward; human opinion and effort add little value. Weather reporting is a very good example. In the UK, the BBC’s Blue Room lab has been exploring how synthetic media could help weather reporting. Given the growth of digital assistants and the industry’s drive for greater personalization, they are betting that in the future we might expect a video response to a query to be digitally generated. To try this out, the editorial department collaborated with the AI firm Synthesia and created an experiment in which a presenter reads the names of 12 cities, numbers from -30 to 30, and several phrases describing the temperature to the camera. You can then pick your city and get a personalized, synthetically created weather report.
Within Schibsted, several of our media houses have simpler, but also automated, services reporting on weather, sports and real estate.
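As a rough illustration of how such simpler, template-based automated reporting can work, here is a minimal hypothetical sketch: structured data goes in, a short natural-language report comes out. The field names, wording and thresholds are assumptions for illustration, not Schibsted’s actual pipeline.

```python
# A hypothetical template-based report generator: structured data in,
# a short natural-language summary out. Field names and wording are
# illustrative assumptions, not any publisher's actual pipeline.
from dataclasses import dataclass

@dataclass
class WeatherObservation:
    city: str
    temperature_c: float
    condition: str  # e.g. "rain", "sun", "snow"

def render_report(obs: WeatherObservation) -> str:
    """Fill a fixed sentence template with the structured weather data."""
    feel = "mild" if obs.temperature_c > 10 else "chilly"
    return (
        f"In {obs.city} it is {obs.temperature_c:.0f} degrees with "
        f"{obs.condition} expected, so it will feel {feel} outside."
    )

print(render_report(WeatherObservation("Oslo", 4.0, "snow")))
# -> "In Oslo it is 4 degrees with snow expected, so it will feel chilly outside."
```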
Another very promising application is automated, real-time translation and dubbing. Synthesia is one of the most prominent companies in that field, with use cases ranging from education to customer service.
With improvements in synthetic voices, we can also imagine rapid adoption of voice technology in traditional media production pipelines, particularly in video games and audiobooks, markets that today face significant challenges scaling human voice-over. Overall, synthetic media could be a powerful technology for businesses that rely on content and would like to adapt their offering to different audiences. What today requires many hours of work could be done through synthetic content creation.
Text and dialogue are prominent use cases of synthetic media. Hence, we are seeing the development of more realistic and accurate conversational and companionship technologies. From a simple bot that generates tailored conversation to a full virtual double, the potential for service or leisure conversation opens up.
Having a conversation with an AI
Right now, most of our interactions with AI are transactional in nature: ”Alexa, what’s the weather like today?”, or ”Siri, set a timer for ten minutes”. But what about developing a profound conversation with an AI? A stunning example is the conversational bot Replika, which is programmed to ask meaningful questions about your life and to offer you emotional support without judgment. Since its launch, more than two million people have downloaded the Replika app.
Digital assistants could be used for companionship purposes, but also for education or training. They could, for example, help us recreate a learning environment, especially when working remotely. What if you could interact with simulated persons to learn from them or practice management techniques? And would you invite a synth to a dinner party?
For all of this to happen, and to convince us to interact with our virtual counterparts, improving virtual human characters and their emotional responses is crucial. The more these companions look, talk and listen like humans, the more inclined we will be to interact with them. For example, Samsung’s virtual human ”Neon”, which the company describes as its ”first artificial human”, is already here. These Neons can go off-script and develop their own ”personality”, generating new expressions, gestures and reactions from it.
Producing quality synthetic content is still very costly and tech intensive, but companies specializing in synthetic content are emerging, allowing businesses and individuals to buy and rent synthetic media.
Synthetic media is rather new and it’s moving fast. So fast that regulation has not yet caught up. Whether it is about deep fakes, synthetic voices used for customer service, or entertainment pieces, we will need to lay some ground rules about the ownership of such content and establish the responsibilities that come with it. So far, many questions are left unanswered, such as: who will ”own” the content produced? How will copyright laws apply to a reproduction of a celebrity? Who would be held responsible if a digital assistant hurts someone in real life?
Still in early stages
Synthetic content has already made its way into our lives. But not all parts of its ecosystem are moving at the same pace. The synthetic media subtrend has already reached mainstream audiences, and the technology powering it has left the research lab to find very concrete business applications. From strong ethical fears to concrete, valuable use cases, this development tells us a lot about the potential trajectory, outcomes and questions of the synthetic decade. Other areas, such as biology, are still in their early stages, but their applications will alter our lives even more profoundly. Overall, the technologies underlying synthetic media, synthetic biology and other fields of synthetic content (namely AI, computer vision, deep learning and so on) are the same. This means the early questions raised by synthetic media indicate the fundamental discussions we will face during the synthetic decade. Every field will see questions and debate about the rights to edit, create and use what is created, about determining ownership of the content, and about what is considered ethical. This also means we still have some agency to decide what comes next, and the synthetic decade will not necessarily be dystopian.
Sophie Tsotridis
Former Associate Product Manager and Trainee in Schibsted
Years in Schibsted
2
What I’ve missed the most during the Corona crisis
Being able to see a movie in a theater!