Let your team fail
Leaders need to create an environment for exploration and experimentation – and set up teams for failure, says David Gill, CPO at Schibsted in Finland. This is how companies working with tech will survive.
Our users’ expectations and needs grow every day. What was a relevant solution yesterday can suddenly be considered outdated and ripe for disruption today. There is just one problem: across consumer technology companies, most of the improvements we put in front of our users are completely useless. Various internal benchmarks and academic studies show that even among the world’s strongest technology companies, many of the changes made to their products do not deliver value to users. As measured through user behaviour and metrics, most new functionality isn’t used, and some changes even lower user engagement. You have probably been in this situation yourself: your favourite app suddenly changes, introduces meaningless new features, or even messes up the navigation.
So, how do you avoid these well-intentioned but wasteful efforts? How do we lower the risk of spending our time and energy building features that no user will ever use? And on the other hand, how can we speed up our ability to identify the most pressing problem to be solved for our users, and determine the solution that best solves it?
We need to set up our teams for failure. They need to fail a lot, ideally many times a week. Why? Let’s remember the data: even the most user-centric, smart and well-equipped product teams will ship a lot of improvements that do not solve any relevant problem well enough. Only one out of four ideas will positively move the needle, at least in its first iteration. That means three out of four ideas should not have been built in the first place!
Since we do not have a magic crystal ball to know which three ideas will not work, we simply must test them out for ourselves. This means that the best teams in product development are the ones that work through and discard the bad ideas the quickest. The teams that creatively and efficiently spend the least time determining that a certain idea will fail to meet user needs, are the teams that ship the most meaningful improvements. As you might have guessed, failures and critical learnings are almost as important as finally shipping that one improvement that really moves the needle.
The right environment is crucial
For leaders, this means providing the right environment for teams tasked with this hard and time-consuming journey of exploration and experimentation. Studying some of the most successful technology companies in the world and reflecting on my own experiences, I have identified some key enablers that need to be in place for your teams to have the user-centric, strategically relevant impact of our dreams:
1. Focus on outcome
Set goals based on the outcomes you want to achieve, not the functionality you want to build. As tempting as it might be for a leader to measure progress in tangible feature releases (we built something! I can see it in the app!), it’s incredibly important that teams are instead empowered to achieve positive user and business outcomes as measured by metrics. This recognises that the first idea or feature will probably not work, gives the team the freedom to decide on solutions, and strengthens their responsibility for delivering results.
2. Open atmosphere
Psychological safety: many leaders know that a key trait of any successful team is the degree of psychological safety they feel when interacting with each other. This basically means having permission to speak your mind with your colleagues without any fear of repercussions or judgment. This is hard to establish, possibly even harder to achieve remotely, but it’s a critical foundation if you want your team to reap the benefits of being diverse and cross-functional.
3. Product discovery
Get really good at product discovery. Product discovery is the art and science of finding out which problems are most worth solving and whether a proposed solution will work. It spans a range of techniques: user interviews, surveys, prototyping and design validation, technical feasibility checks, competitor benchmarking and other types of experimentation. This requires time, skill, empathy and patience. I have yet to see a company doing enough product discovery.
4. A clear strategy
Develop a clear vision and strategy. So, it’s super hard to figure out what to build. And it needs quite a bit of time-consuming product discovery. How do we avoid the pitfall of straying too far from the purpose and mission of the company? How do we avoid teams coming up with all kinds of ideas that make the totality of your product look like Frankenstein’s monster? The answer is serious strategic immersion between leaders and the team. It’s not enough to share a fancy slide in a company all-hands a few times a year. The strategy needs to be discussed, questioned and made familiar to everyone on the team until the strategic direction of the company is understood. And most importantly, it needs to be clear how each individual team will contribute to this strategy.
5. Time and focus
Give enough time and focus. We know that successful product development is high in complexity and low in predictability. The process is messy and requires both creativity and resilience. It also requires investment in learning and development, and that teams reflect often and deeply on how they can improve their ways of working together. This means that overworked or stretched teams will have a much harder time achieving this tough task. Ensuring that they have enough support and slack in their agendas is fundamental to their success.
Product development is a tricky business. It requires talented and motivated people, strong leadership, a curious company culture, sharp strategy, patience and a bit of luck. However, it can also breed team engagement, happy users and positive business outcomes. And you know what? Much of the above isn’t exclusive to product development. Almost all the work we do these days has the same high degree of complexity, which makes these five tips applicable to your entire organisation!
David Gill
Chief Product Officer, Schibsted Marketplaces Finland
Years in Schibsted: 10
3 Lessons We Can Learn from Orcs
When the futures we imagine confront our logic and business models – it might help to look at gaming to break free from the gravity of the present.
The full moon hangs over the horizon, and a light rain begins to fall from heavy clouds. Your boots squelch in the mud as the pitter-patter of raindrops lulls you into a false sense of safety. A monstrous howl breaks the calm night, followed by several more, echoing from the forest around you. Whatever creatures lurk in the darkness have you and your companions surrounded. What do you do?
Over the last decade, successful TV shows like Game of Thrones and Stranger Things, as well as live-streaming platforms like Twitch, have brought tabletop role-playing games (TTRPGs) out of the basement and into the mainstream, exemplified by the commercially successful Dungeons and Dragons, which has developed a cult-like following.
Propelled by collaborative storytelling, TTRPGs share three characteristics. First, players take on the role of imagined characters, each with their own histories, motivations, strengths and weaknesses. Second, characters inhabit a fictional setting with its own internal logic, covering everything from medieval fantasy to space opera. Finally, these games use elements of chance and improvisation to resolve conflicts and create consequences of the characters’ actions.
Players often use their characters to explore parts of themselves or experience things to which they aren’t normally exposed. This can be an incredibly energising experience; imagine what it would feel like to actually save the world with your friends. Other times, this experience can be quite personal. In the game “14 Days”, players take on the role of someone living with severe migraines, and experience what the unpredictability of chronic pain means for them over the course of the two-week game. Simply using narrative and imagination can take us one step closer to empathy and experience.
What sets these games apart from their digital cousins is the near-complete freedom they offer. Not bound by the limitations of a script or computer program, the question “what do you do?” gives players the chance to take unconventional paths that they would otherwise miss. It builds a sense of agency and allows stories to unfold in emergent ways. Unconventional strategies can reward unique narrative turns.
In many games, this freedom is tempered by a Game Master (GM), who acts as referee of sorts, describing the context the characters are in, acting as any people they may meet in the world, and adjudicating the results when the outcome of the characters’ actions is uncertain. With multiple players making individual and collective decisions, the GM helps maintain the tone and direction of the narrative, providing believable consequences for the characters’ actions. Collaborative storytelling, meaningful cause and effect, and emergent plot twists are what makes these games unique.
While it has a very different goal, Strategic Foresight is a discipline that follows a very similar logic to TTRPGs. Used by leading companies and public institutions worldwide, including here at Schibsted by our Tech Experiments team, Strategic Foresight helps us imagine a range of possible futures and develop robust, forward-looking strategies in times of uncertainty. Like TTRPGs, it uses fictional, collaborative scenarios to explore different actions and imagine possible outcomes.
The process follows four core steps. First, we scan the present for signals of change, including market trends, news stories, and user behaviour. Second, we use these signals to identify trends affecting our organisation. Next, we extrapolate and create future scenarios, imagining our place in them. Finally, we identify and prioritise actions we can take today to move us towards our preferred future.
This can be a powerful approach to anticipating future trends, risks, and opportunities. It can also be an unsettling one. When the futures we imagine confront our logic and business models, how can we break ourselves free from the gravity of the present and imagine different paths going forward? This is where Strategic Foresight can draw lessons from TTRPGs.
Make it experimental
Part of what sets role-playing games apart from books, films or computer games is the feeling that you and your friends were actually there, living the story, even if it is fantastic and unbelievable. When working with Strategic Foresight, paradigm-shifting futures can be hard to believe; if we don’t have parallel experiences to draw on, how can we imagine what that future might feel like? This is where we can use design and storytelling to simulate those experiences. Building prototypes of future products and services that we can touch and experience can bridge the gap and make challenging futures more believable.
Make it divergent
Engaging stories are full of choice, conflict, failure and success. Role playing games embrace this and allow us to find opportunities we might have disregarded otherwise. In Strategic Foresight processes, it can be tempting to focus on the most preferable or probable future we identify. If we give ourselves the chance, these processes also provide low-investment, low-risk opportunities to explore novel paths and alternative strategies. This kind of divergent thinking helps challenge our approach to business-as-usual, adapt to unpredictable events and identify novel areas for exploration and investment.
Make it emergent
Collaborative storytelling means that no one at the table knows what will happen next. It builds excitement and a sense of ownership, and the collective choices of the players can lead to truly unpredictable twists. In parallel, one of the main challenges for Strategic Foresight is preparing organisations for unpredictable, so-called ‘Black Swan’ events, like global pandemics, as well as unforeseen areas of opportunity, like NFTs. To foster emergent thinking, Strategic Foresight processes should involve a diverse range of participants from across the organisation in a safe space for open discussion. Truly novel ideas can arise only when a mix of experiences, expertise and perspectives have a seat at the table and are given the chance to imagine together.
Christopher Pearsell-Ross
UX Designer
Years in Schibsted: 3 months
“I went from handicapped to cyborg”
Last fall I became a cyborg. Now, when I tap twice on my left ear, I answer a phone call or hang up. If I tap twice on my right ear, I activate Siri. My new hearing aids make me feel superpowered. Improving our senses and bodies will be the future for all of us, says Sven Størmer Thaulow.
As a kid I handled the fact that I was born with reduced hearing by compensating with lip-reading and by being seated in the front row of the classroom. Later, in the army and at university – exposed to the myriad Norwegian accents, and most likely some effects of playing in bands – it became more tiresome to compensate, so I started using hearing aids.
Until last fall, it had been a mixed experience. Hearing aids serve as important support, but they lack refinement – and they always made me acutely aware of my handicap. But then something happened. For years we’ve talked about cyborgs and wearables, while techies implanted RFID chips under their skin to demo the future and numerous AR smart glasses were tested. But now I’m convinced that it’s within the audio space that things will take off. Superpowered hearing aids will become consumer electronics – filling the missing link in the audio interface ecosystem.
But the road to make me a cyborg has been long.
It all started with in-ear aids, devices that practically plugged my ears so that no natural sound could enter. In the mid-90s, I got the first programmable devices, which could amplify sound in six different frequency bands. Then, in the early 2000s, came the tiny aids that hung behind my ear, with a speaker connected by a fairly invisible cord. By then, the buds placed in my ear were full of holes, and amplification was much better, resulting in a more natural sound picture. This is still the default design today. But all this time I still felt handicapped, and I didn’t like that the aids were visible – so I kept my hair long. Being a vocalist in a band was a good combination.
Around 2014, the first mobile-connected hearing aid entered the scene. It connected to the phone through Bluetooth – not just via an app, but directly into the iOS operating system. The hearing aids worked like headphones and the voice audio was excellent – but they didn’t have a microphone, so I had to hold the phone up to my mouth when speaking. You could also control different programs and the volume, either through an app or directly in the control centre on the iPhone. I could also stream music from my iPhone – but the sound was optimised for voice, so it was treble-crap. Still, this started to become cool. So, I cut my hair.
Still, the potential was so much greater. When could I drop my AirPods? Why weren’t the hearing aids truly connected two ways, so I could talk through them, talk to Siri, and get answers? I kept asking every year for innovations, and I told my audiologist Heidi to call me when any great leaps were made in functionality.
Then came the breakthrough. Heidi called me in November 2020, asking me to come over. She handed me a Phonak hearing device from the Swiss company Sonova Group. And now, after 15 years with hearing aids, I have officially become a cyborg. I don’t feel like I have a handicap anymore. I feel superpowered. Privileged. And I am sure this is the future for all humans.
I can use the aids as I do AirPods when I talk on the phone. It’s almost a problem, as people have no visual cue that I am on a call. The audio quality is also close to that of AirPods. I have now used these hearing aids for about twelve months, and I haven’t used my AirPods at all. And I listen to a lot of music.
But probably the coolest thing about my new hearing aids is that they are gesture-activated. If I tap twice on my left ear, I answer a phone call or hang up. If I tap twice on my right ear, I activate Siri and can ask whatever I want – hoping she will understand. I use it mostly for controlling Spotify, beefing up the volume, setting a timer, jotting down a to-do, sending a simple text message, and so on. A few times I’ve asked questions like “who is xx”, but what you can do with this new audio interface is limited by the intelligence of Siri, not by the audio value chain itself, which is super smooth.
For 17 of the 24 hours in a day, I am connected to the internet – practically inside my brain. It saves me loads of friction during the day; I probably pick up the phone 40–50 percent less. I don’t need to charge or look for my AirPods either.
In my view, the true breakthrough of connected humans will come from medtech. And it starts here, with audio. And while I’m walking around with some fellow hearing-impaired cyborgs, waiting for my audiologist, Heidi, to reveal yet another breakthrough – here are some predictions about the soon-to-come future of the audio interface:
Five years from now, you will be able to buy “invisible AirPods” from Apple or an equivalent. They will be the priciest AirPods you can get hold of, but way cheaper than my Phonaks (which hover around 1,400 USD in Norway).
“Invisible AirPods” will be the primary audio interface toward the internet. They are always on you and they are personal – so why bother with a Google Home?
Siri will become a lot smarter and tailored towards “non-screen” communication. This means you won’t need to pick up your phone to browse through what Siri has found on the Internet when asking her a question. Just imagine you are about to have a meeting with someone and you’d like to get some information about them. Siri will be able to provide that, directly into your ear.
App providers will build Siri support into loads of functions, making it possible to use functionality inside the apps without picking up the phone. Today, very few apps have exposed their functions to Siri, which is why “she” has limited reach on our phones.
I can’t wait for the next innovation in this space!
Sven Størmer Thaulow
EVP Chief Data and Technology Officer
Years in Schibsted: 2.5
Crypto finance is going mainstream
Cryptocurrencies have made decisive moves towards mainstream adoption in recent years. A May 2021 report by New York Digital Investment Group showed that 17 percent of American adults owned at least a fraction of Bitcoin. Among millennials, nearly half own some form of cryptocurrency.
Moving at a much slower pace behind buzzy cryptocurrencies such as Bitcoin and Dogecoin is the notion of decentralised finance (defi). Billed as finance for the internet age, it boils down to the notion that anyone in the world can lend, save, invest and borrow blockchain assets. Unlike today’s financial systems, defi is run on peer-to-peer networks where financial transactions take place through smart contracts – programs on the blockchain that only run when conditions between the buyer and the seller have been met. The users define the rules of engagement, not the institutions.
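The conditional-release idea behind smart contracts can be sketched off-chain in a few lines of Python. Real defi contracts run on the blockchain, typically written in languages like Solidity; the `Escrow` class here is purely illustrative:

```python
# Minimal escrow sketch: funds are released to the seller only once the
# agreed condition is met -- otherwise the buyer can reclaim them. No
# institution decides; the rules of engagement are fixed in the code.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False  # the condition both parties agreed on
        # Buyer locks funds into the contract up front.
        self.balances = {buyer: -amount, seller: 0}

    def confirm_delivery(self):
        self.delivered = True

    def settle(self):
        # The contract executes automatically based on its condition.
        if self.delivered:
            self.balances[self.seller] += self.amount  # pay the seller
        else:
            self.balances[self.buyer] += self.amount   # refund the buyer
        self.amount = 0
        return self.balances
```

On an actual blockchain, this logic is deployed once and then runs identically for every party, which is what makes the system permissionless.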
Anyone has access
The advantages of defi are numerous. Without institutions having the final word on financial transactions, anyone can access banking services. The same permissionless feature applies to the developers who build on these decentralised platforms. There’s also the added benefit of transparency, as software built on these decentralised networks is open-source, and transactions on the blockchain are recorded for all to see.
Furthermore, defi promises 24/7 accessibility, unlike traditional financial institutions. Instead of subjecting their financial history to the whims of a bank manager, a defi user could use their Ethereum tokens (the crypto that powers many defi protocols) as collateral for a loan.
And rather than opening a savings account with a paltry interest rate, a user could stake certain coins, earning interest rewards far beyond those determined by central banks. By adding your tokens to a blockchain’s liquidity pool, you provide the capital that powers other defi services. Many of today’s blockchains operate with these so-called Proof-of-Stake models, an environmentally friendlier alternative to Bitcoin’s Proof-of-Work framework. The users (and their crypto) serve as the infrastructure.
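As a back-of-the-envelope illustration of why staking yields attract savers: the gap between a bank rate and a staking reward compounds over time. The 0.5% and 5% figures below are hypothetical, and the staking side ignores token-price volatility entirely:

```python
# Compare yearly compounding on the same 1,000-unit deposit at a
# hypothetical 0.5% bank savings rate vs. a 5% staking reward rate.
def compound(principal, annual_rate, years):
    """Value of a deposit after compounding yearly at a fixed rate."""
    return principal * (1 + annual_rate) ** years

bank = compound(1000, 0.005, 5)   # traditional savings account
stake = compound(1000, 0.05, 5)   # staked tokens (price risk ignored!)
```

After five years the hypothetical staked position has grown roughly ten times as much above the principal as the bank deposit, which is the pull of defi yields, and also why the volatility caveat matters.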
Disruptive effect
Global adoption of defi would have a disruptive effect on our current financial institutions, reengineering everything we understand about costly financial services such as transfers and global remittances. Banks are particularly exposed to risk – even digital-savvy banks still largely operate by a traditional set of rules. Not surprisingly, some banks have begun commissioning analysis studies and stating that it’s “time to cooperate” with defi.
Defi still has a steep barrier to entry, though: it requires a high level of internet savvy to understand. Another downside is the inherent volatility of many cryptocurrencies. The Ethereum tokens you stake in a pool, for example, could drop 20 percent in value over the course of a week (although the opposite is also true).
As I write this, Defi Pulse reports that there are 83 billion USD of assets locked up in defi protocols. Q1 2021 saw 1.5 trillion USD worth of transactions settled with Ethereum, an amount that the venture capital firm Andreessen Horowitz noted represents 50 percent of Visa’s payment volume. Given the sheer volume of capital involved, it’s a question of when, not if, defi will start to play a central role in how we handle our money.
Jeremy Cothran
Former Editor, Schibsted Daily
Years in Schibsted: 1.5
Tech trends in short
Non-fungible tokens, deepfakes and decentralised finance. The future of tech gives us many new opportunities and concepts, but also a growing need for regulation and structure.
Cyber scores
Just like people and companies have credit scores today, we will likely see the development of cyber scores in the near future. Companies and organisations will be ranked on how they’ve stood up to cyber threats and how secure their data is. In the long run, a good cyber score could be as important as a good credit score.
Deepfakes
Deepfakes are becoming more believable every day and are already being used in both novel and malicious ways. The tech behind deepfakes will become more sophisticated, and wider use of it has the potential to erode trust on social media even further. Here, we believe regulation will be necessary in the not too distant future.
Regulation for tech
We’re already seeing a need for regulation around AI and transparency in tech with proposals like the EU’s Artificial Intelligence Act. Trust is key when it comes to implementing new technologies, and regulation will need to be part of building that trust going forward.
No-code and tech democratisation
The democratisation of tech will continue. The effort to make tech-led innovation as accessible as possible has been ongoing, but the continued investment in AI and the evolution of software-as-a-service companies will strengthen that effort. The increasing need for people skilled in tech has led to an explosion of self-service and “do-it-yourself” solutions. No-code interfaces will also keep evolving, limiting the need to know code to develop tech services and products.
Artificial Intelligence
Calling AI a trend is an understatement; AI is being adopted as a technology in a vast part of society. One specific field of special interest for Schibsted is within natural language processing. In light of this, Schibsted is currently building a new Norwegian language model, together with other media companies, the Norwegian national library, NTNU and professor Jon Atle Gulla.
Non-fungible tokens
Non-fungible tokens and virtual assets will keep evolving, especially as Big Tech keeps investing in virtual spaces. As the adoption of NFTs in the art and entertainment spaces increases, more companies will get involved in creating them for profit. We will see an increasing need for expertise around the tokenisation process and requirements in this space.
Cloud solutions
Hand in hand with the democratisation of tech is the continued evolution of cloud solutions. Companies have experienced first-hand during the pandemic how important it is to have cloud infrastructure to be able to not only store and access data remotely, but to work within the cloud as well.
How crises disrupt and evolve society
Evolutionary changes, wars, natural disasters – and now a virus – all change the world and society as we know it. Covid-19 has pushed digitalization forward in ways that previously were unforeseeable, and healthcare is one of the most affected sectors.
In 1347, the bacterium Yersinia pestis was carried along the Silk Road from Asia. An overpopulated Europe was already suffering from 50 years of famine. Ships carrying infected rats arrived in Genoa that year. In just four years, 40 percent of Europe’s population died from the pandemic that would become known as the Black Death.
Between 1290, when the famine began, and 1430, when the pandemic began to recede, Europe lost 75 percent of its population. The Black Death remains one of the greatest catastrophes in human history.
So why am I talking about the Black Death? As a disease, Covid-19 is not comparable: Yersinia pestis is a bacterium, Covid-19 is a virus. The mortality rate for Yersinia pestis was extremely high, while for Covid-19 it is much lower.
Nonetheless, they share a common feature: both were enormous shocks to the system for the global community. And shocks are also extreme catalysts of change.
History has shown how major crises change the world – initially for the worse, with poverty, disease and death – but then something new often emerges. Out of new knowledge and discoveries, new needs and behaviors, and new opportunities, society is compelled to develop, to find new solutions, and to chart a new course.
The world wars not only redrew the world map; they also brought about fundamental changes in society. Political regimes fell and new ones emerged. A colonial world order was supplanted by two superpowers and a cold war, but also by international treaties and organizations such as the UN and the EU, and financial support packages that led industry to change course and prepare to innovate.
Natural disasters have wiped out entire species and then paved the way for new ones. On a smaller scale, they have also led to innovations such as earthquake-proof buildings, flood defences and improved aid efforts.
And pandemics like the Black Death and Covid-19 have led to new insights and knowledge in medicine and healthcare.
Exponential digitalization
But Covid-19 will also leave a more lasting imprint on history. The virus has paved the way for exponential digitalization. E-commerce and home delivery of goods ordered online have exploded. We have learned to work and learn digitally. We socialize via monitors and games.
Lockdowns and isolation have most likely accelerated the rate of digitalization in many industries, too. When suppliers of parts and components stopped production, the benefit of fully automated processes became obvious.
Not least, the health services and healthcare sector has been profoundly affected, having undergone fundamental change. Just as during the Black Death, new technology has either contributed to, or resulted from, the current situation.
Prior to the Black Death, medical science was led by the Church. Physicians were monks with close religious ties, who received their training at monastic schools in the largest countries or city-states. Like the Church, medical science was extremely conservative. The Black Death changed both medical practice and training. Treatments such as blood-letting and poultices of goat dung proved utterly ineffective in curing disease. Slowly but surely, practices based on experience influenced medical training. Instead of cramming patients together, patient groups were kept separate; goat dung was replaced with oils; healing was aided by fresh air; and different types of masks were used when treating patients.
During the Covid-19 pandemic, the health sector has also suffered a shock, and many measures have been implemented to prevent it from collapsing altogether. However, this shock has accelerated the rate of digitalization. This sector is already one of the most high-tech sectors we have; apart from modern weapons systems, medical and technical equipment is among the most advanced and most expensive there is. But some parts of the healthcare service have suffered from an almost Luddite view of technology. A one-sided focus on security and privacy in particular has hindered digitalization. Compared with most other sectors, the level of friction for staff and patients has been and remains alarming.
Take hospitals as an example. Until the Covid-19 shock, staff at Oslo University Hospital were not allowed to use Skype to communicate with each other or with practices outside the hospital. The first surge of the virus marked the first time that health personnel were allowed to run applications on their home computers, where they could, for example, view live-streamed lung examinations of Covid-19 patients and advise on-duty personnel on how to improve the situation for their patients. This demonstrates that thorough assessments were made between benefit and risk to allow for pragmatic solutions to be used in these exceptional times. And it took a virus to make it happen.
As recently as February this year, for example, the Norwegian company Confrere was struggling to get general practitioners in Norway to start using video consultations. Apart from companies like Kry and Hjemmelegene, general practitioners have little competition to deal with. Despite patient demand, there was little motivation for change. When the Covid-19 virus broke out in the Nordic region, the use of video consultations exploded, and any self-respecting general practitioner now provides this service. It’s better for the patients. Better for the doctors. Better for the private sector. And it took a virus to make it happen.
Rapid change
Finally, I would highlight the process of testing and getting test results. I myself was tested for Covid-19 in February 2020. Everything was done manually and the information was chaotic, but we got the test results relatively fast because we were among the first to be tested in Norway. In July, our children were tested, and this time the entire process was completely automated! And this happened rapidly – not only in terms of technology, but also in terms of regulatory amendments, which can often take years. Now all laboratories submit their test results to Emsis, a national database which can be accessed by patients and healthcare personnel alike via the patient’s core medical records available at helsenorge.no. So remember – before the outbreak of Covid-19, Norway had no national infrastructure for test results! All it took was a shock and a few months to fix it. And it took a virus to make it happen.
Last but not least, I want to mention the coronavirus tracing app in Norway. Attempts by national authorities to implement virus tracing using mobile phones have been contested in many countries. Nonetheless, the Covid-19 virus has demonstrated the necessity of granting ”some” access to highly sensitive personal data, often biometric data, and at rapid speeds. If we view this in connection with the critical need to make patients’ medical records, case summaries and test results digitally accessible, we can easily subscribe to Yuval Noah Harari’s analysis: ”The coronavirus pandemic could prove to be a watershed event in terms of enabling greater surveillance of society. People could look back in 100 years and identify the coronavirus epidemic as the moment when a new regime of surveillance took over, especially surveillance under the skin which I think is maybe the most important development of the 21st Century, is this ability to hack human beings.” Assuming that this capacity is used properly in terms of privacy and security, I believe that the extent to which it will prove to be good for society and for individuals is difficult for us to envisage today. And it took a virus to make it happen.
Now that we – hopefully – will return to normality during 2021 and 2022, it is vital that society, exemplified here by the healthcare services, does not revert to the old normal and reverse the digital quantum leaps that were brought about by the shock the Covid-19 pandemic caused. Those nations and companies that manage to seize the opportunities that have emerged will be the winners.
Sven Størmer Thaulow
EVP Chief Data and Technology Officer
Years in Schibsted
1,5
What I’ve missed the most during the Corona crisis
The informal chats at work!
Healthcare will go digiphysical
The pandemic has sped up technology adoption in healthcare by three to five years. Both patients and providers have drastically changed their habits. But Covid-19 has also highlighted the paradox that digital isn’t enough.
During the era of social distancing, the way we interact with each other, companies and services is changing. As people were encouraged (or forced) to stay home, barriers to adoption of new technology were almost artificially lowered. Online groceries, pharmacies and retail all experienced unprecedented growth, as did the platforms that enable video conferencing. Something happened to patients and providers too.
Overnight, patients stopped visiting the doctor. However, their medical needs did not go away. New habits formed, as many patients flocked to the digital front doors of healthcare for the first time. Some patients needed help directly related to Covid-19, while the majority of consultations were about the same medical issues as before: diabetes management, an ear infection or a bad knee perhaps. Whatever the problem, patients realized they could get their first medical consultation without waiting or even traveling. Perhaps equally important – no risk of contracting, or spreading, the coronavirus.
Patients went online
When Wuhan, China went into lockdown, patients went online. According to the Economist, half of the ten million digital consultations that Chinese health platform JD Health sold in February were with first-time online patients. At least a third will continue using the services, according to the company. The rest of the world has followed suit. In March 2020, Norwegian tech news site Shifter reported that digital healthcare providers Kry, Eyr and Hjemmelegene had three-digit volume growth year-on-year. Digitally enabled healthcare providers in the US, Sweden and UK reported the same explosive growth. Across the world, many relatively new providers assumed a vital role in primary care overnight. Not because governments asked them to, but because patient needs shifted. Providers and doctors shifted too.
The technology needed to offer a digital front door to healthcare did not arrive in 2020. Smartphones, sufficient network bandwidth, camera and audio technology, security and encryption, and platform technology – all of these components have been around for at least a decade. So how come a pandemic was ”needed” before healthcare changed?
Providers and doctors alike have traditionally been somewhat slow to adopt new, non-medical technology. While medical knowledge, research and technology have made huge leaps forward decade after decade – drastically improving healthcare outcomes – service innovation has not kept pace. The visit to the doctor’s office has not changed much in the last 20–30 years.
Covid-19 changed this. Doctors and providers were forced to adapt in a matter of weeks. They had to enable patients to get in touch, while at the same time limiting disease spread. In Norway, the doctors’ union used to see digital consultations as a threat, stating that it was outright dangerous to treat any condition without seeing a patient physically. Before Covid-19, 7–12 percent of family physicians in Norway offered video consultations. Six to eight weeks after Covid-19 hit the country, that number went up to 70 percent.
A complex sector
Make no mistake, healthcare is going digital. However, digitization is challenged by the sheer complexity and physical nature of healthcare services. A bad knee still has to be physically examined to be diagnosed. No tech or AI can remove ear wax. The inherent physical properties of medicine and anatomy do not always translate well into ones and zeros. Also, humans are still humans. A physical consultation may enable a richer conversation. Lastly, today’s technology is not equally accessible and may leave the elderly and frail behind. For all these reasons, healthcare has to become ”digiphysical” rather than just digital.
Covid-19 has highlighted this paradox. Digital tools have proven useful in advising patients, screening for conditions, and even tracking the outbreak. However, diagnosis rests upon a throat swab and lab analysis. Providers need to offer a digital front door, and combine it with on-demand, physical services (e.g. Covid-19 testing) that meet patient expectations.
Digitization may improve patient outcomes and experiences, and lower the costs of healthcare in the years to come. But the rewards may only be reaped if it plays along with the complexity of healthcare, and the biological, physical and psychological nature of medicine. AI may not replace doctors, but doctors who use AI will replace those who do not.
Hopefully, Covid-19 will eventually recede. Previous pandemics always have. But, just like previous pandemics, Covid-19 may have changed healthcare forever.
Nicolai Skarsgård
Doctor and CEO of Hjemmelegene
Years in Schibsted
1,5
What I’ve missed the most during the Corona crisis
Golf trips
Edtech pushes lifelong learning
During the Corona crisis, schools have struggled to bring education online. But the pandemic has only shed light on a need that was already there. And now the edtech market is growing rapidly.
Imagine an early morning in 2030. We are in Båtsfjord, in the North of Norway, where Roar has just logged on to his computer at home, ready for a new day of studying business interaction and international entrepreneurship. After 25 years in fishing, he has gone ”back to school” for an education that will give him new opportunities, hopefully in the international market. Roar and his fellow students from 20 different nations study in a digital, seamless system that’s available anytime and anywhere.
Roar is not unique in continuing his lifelong learning or finishing an international degree from his home. He is an example of a modern learner seeking an education that is not easily replaced by automation and artificial intelligence, but rather is focused on creativity and collaboration.
A market that is growing
Today we’re still quite far from this scenario. Education within lifelong learning is grossly under-digitized, with less than 3 percent of expenditure on technology. At the same time the digital education market has grown rapidly during the last decades. The total expenditure in edtech is expected to double, reaching 342 billion USD by 2025, with an expected compound annual growth rate of 12 percent in the period.
2019 was also a record year for edtech investments, with 187 deals totaling 1.2 billion USD. Of all edtech investments in Europe, 18 percent are invested in the Nordic markets, indicating a growing market in this region.
So we’re on our way. This development is accelerated by the evolution of working life in the 21st century, which demands new skills. More and more people are willing to educate themselves as part of lifelong learning. Higher education matriculation data from 2020 shows that the applicants were older than ever before, on average. This can be partly attributed to the Corona crisis, during which a significant part of our workforce lost their jobs or were furloughed. But it is also a sign of these new learning habits – which need new solutions.
The biggest challenge in speeding up digitization is replacing physical meetings and classroom lectures – which are still an important part of university education. This traditional type of learning is very institutional and people-driven. Thus, a key opportunity is to replace classroom training with edtech, but many content creators lack the knowledge to make the change. Due to the number of stakeholders involved, such as universities, professors, governments and international institutions, the speed of digitization in the education sector has been estimated at about one-fifth of the speed seen in other sectors.
Education is a tool to improve health, equality, peace and stability
In the Nordic countries, there are some new players in this market, including Coursera, EdX and Udemy (Inspera), international companies with local connection in the Nordics. The traditional Nordic educational institutions have not yet tried to take a significant role. So, there are still openings for strong players to take a broad position as a ”change agent” for lifelong learning. Not just for distribution or a marketplace, but also to contribute to delivering today’s classroom training with edtech solutions – and with the opportunity to reach a broader, international audience.
Education is one of the greatest tools we have to reduce poverty and improve health, gender equality, peace and stability. This is easily underestimated in the Nordic countries, but nevertheless important to remember. For communities, education drives long-term economic growth, spurs innovation and fosters social cohesion. All of these are strong incentives for continued growth of the edtech market. For Roar, digital education will open the door to new job opportunities. It will stimulate economic growth in his local community. And it will give the world a new, global employee.
Ragnhild J Buch
Business Developer
Years in Schibsted
3
What I’ve missed the most during the Corona crisis
Meeting colleagues on a daily basis.
Tough news gets translated for young readers
Aftenposten Junior is something as exceptional as a printed newspaper success. It all started with the need to explain a tragedy to children – now even adults read it to really understand the news.
In July 2011, Norway was shocked and horrified by the terrorist attack in Oslo’s government quarter and on the island of Utøya. Naturally, Norwegian children had many questions about what had happened, and parents and teachers struggled to find the words to explain the incidents on a level that the young ones could understand.
It was about this time that the idea of launching a newspaper for kids arose in Aftenposten. The first edition came out in the spring of 2012, coinciding with the opening of the trial against the terrorist responsible. That was the start of explaining many difficult and complicated news events, which is what Aftenposten Junior still does today. As editor, I get to meet a lot of curious and smart kids with thoughtful (and challenging!) questions. I promise you I have the most interesting and meaningful job!
A huge success
At the time of the launch, some critical voices said a print edition would never gain popularity. Adult print newspapers had been declining for years, and there were even darker clouds ahead. Children were embracing digital services such as games and social media, and getting young ones to read a newspaper seemed like a mission impossible. Eight years later though, it is safe to say that Aftenposten Junior has become a huge success. The circulation numbers steadily rose and have now stabilized around 30,000, and the brand Aftenposten Junior is well known among Norwegian children, to the extent that the editorial staff receive enormous amounts of e-mails and letters every week. But how did this all happen?
Over the last few years, user research has become a very important discipline in all product development. In Aftenposten Junior, the editorial team was involved in user research right from the beginning, and it became apparent that this is a crucial part of the methodology for writing engaging news stories for a young audience. Instead of doing user research now and then, it has become a continuous process that involves the readers at all times.
In the research, our journalists ask kids how much they know about a news story, and what they would like to know. Later, the journalists have kids read the finished story to pinpoint difficult words. After a while, the concept of ”Aftenposten Junior reporters” also emerged. Kids would interview top politicians and celebrities, and they had the most brilliant questions.
When writing for kids, there are certain challenges that our staff writers are very good at solving. For example, there might be a lot of historical aspects to a story, like when writing about George Floyd this summer. To sum up centuries of history and social studies in a very short text is harder than you think. The tough nut is to simplify the language without losing the important details and nuances. Often, our journalists will rewrite the text several times with their editor before it’s good to go.
The presentation of the news stories is also a very important part of Aftenposten Junior’s success. Visual elements spark most children’s desire to read. Outstanding photos from Aftenposten’s photographers, striking illustrations and extensive use of infographics make the pages look inviting and give the readers a lot of information without overwhelming them with text. The cartoon format is also a good way to explain complex issues, as it combines several illustrations with small text pieces.
A growing family
The Aftenposten Junior universe has grown bigger than just a print newspaper. For several years, Aftenposten Junior has hosted different events, like the wildly popular Minikloden. The successful cartoon Grønne greier has spawned two hardcover books, one of them published in South Korea. Aftenposten Junior is also producing a podcast, Juniorrådet, where kids talk about big and small challenges, like being nervous before a performance, or falling in love. This fall we will also launch a news podcast.
In 2015, there was a new addition to the family, with the launch of sister newspaper SvD Junior, from Svenska Dagbladet in Sweden. The two editorial teams co-operate on some of the content, and in 2019 another Junior was born when Postimees Juunior launched in Estonia.
The spring of 2020 was a very special time all over the world because of the Covid-19 situation. In Norway, all of the schools shut down on March 12th. The day after, Aftenposten Junior announced that we were opening the paywall on our digital edition, so that all children in Norway could have access to reliable information at a time when their lives were turned upside-down. Teachers were ecstatic when they could give their pupils engaging reading assignments, and we noticed that some teachers created questions, puzzles and quizzes tied to the editions. They said that they loved the current news stories that spoke directly to the kids, and that it taught them important things in a fun way.
A big digital leap
During the time of homeschooling, teachers and pupils took a big leap in their usage of digital equipment and services. In fact, there were close to 800,000 openings of the digital editions of Aftenposten Junior during the period that the schools were closed.
This gave us the idea to make a special digital product for schools, where teachers could easily find the content they were looking for and share it with their class. After extensive UX research, the team of Aftenposten Junior skole are now on their way to creating a fully fledged educational resource, planned to launch in late 2020. This is timed well with the introduction of a brand-new curriculum in Norway’s schools this fall, one that is supposed to encourage the use of current events.
So, the future looks bright for Aftenposten Junior. Even though breaking news is faster than ever and the number of news sources is plentiful, there is still a need for a good explanation of current events. This might be why grown-ups tell me all the time that when they read Aftenposten Junior to their kids, they finally understand what that story was all about.
Mari Midtstigen
Editor, Aftenposten Junior
Years in Schibsted
3
What I’ve missed the most during the Corona crisis
Eating lunch with my colleagues
AI – a toolbox to support journalism
As artificial intelligence makes its way into editorial products and processes, media organizations face new challenges. They need to find out how to use this new computational toolbox and how it can contribute to creating quality content.
Do you find human-like robots creepy? You wouldn’t be the first to feel that way. ’The Frankenstein Complex’ was introduced in novels by Isaac Asimov as early as the late 1940s as a representation of human beings’ intricate relationship with humanoid robots. While coined in science fiction, the term has found its footing in very real scenarios today, driven by concerns about robots replacing our jobs.
The concerns are not unfounded. There is indeed a wealth of robots (or programmable machines) employed across the globe, rendering many human workers in sectors such as manufacturing, transportation and healthcare obsolete. These industries are undergoing rapid transformation through the use of robotics and technologies such as artificial intelligence.
Creators and consumers of news express unease about the potential downsides of AI
These concerns extend into the media industry as well, where both creators and consumers of news express unease about the potential downsides of AI. To deal with these concerns, it is about time that we offer an alternative narrative to the Frankenstein Complex!
We might as well start with the basics. Robots are highly unlikely to enter newsrooms any time soon. What is already there, though, is a great new computational toolbox that can help human reporters and editors create and share high quality news content.
AI technologies are currently used in newsrooms in a myriad of ways, from data mining to automated content production and algorithmic distribution. While the possibilities know no bounds, the applications tend to be geared towards information processing tasks like calculating, prioritizing, classifying, associating or filtering information – sometimes simultaneously.
With recent advances in technological domains such as natural language processing and generation (NLP/NLG), the potential to leverage AI in editorial products and processes is increasing rapidly. In Schibsted, we are currently exploring the use of AI in news work in various ways, such as helping editors decide when to put content behind the paywall, supporting journalists in tagging their articles and optimizing something as old school as printed papers in order to maximize sales and minimize waste.
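As a toy illustration of the tagging idea – not Schibsted’s actual system, with headlines and tags invented for the example – article tagging can be framed as suggesting the tag whose training vocabulary best matches a new article, which a journalist then confirms or corrects:

```python
from collections import Counter

def train(examples):
    # Build a word-frequency profile per tag from labelled headlines.
    profiles = {}
    for text, tag in examples:
        profiles.setdefault(tag, Counter()).update(text.lower().split())
    return profiles

def suggest_tag(profiles, text):
    # Suggest the tag whose profile overlaps most with the new text.
    # Counter returns 0 for unseen words, so unknown words are ignored.
    words = text.lower().split()
    return max(profiles, key=lambda tag: sum(profiles[tag][w] for w in words))

profiles = train([
    ("parliament passes new climate law", "politics"),
    ("minister resigns after budget vote", "politics"),
    ("striker scores twice in cup final", "sports"),
    ("home team wins championship match", "sports"),
])
print(suggest_tag(profiles, "parliament debates the budget"))  # politics
```

Real newsroom systems use far richer NLP models, but the division of labour is the same: the machine proposes, the human decides.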
AI learns from the past
The opportunities offered by AI are vast, but the technologies won’t help with every newsroom task. To responsibly leverage the potential of AI, reflecting on the unique traits of humans and machines becomes key.
AI systems are incredible tools for identifying patterns in data. However, this feature also renders AI technologies susceptible to reinforcing biases. And through technologies such as face recognition systems and language translation, we have uncovered a key limitation of AI: it learns from the past.
Journalists, on the other hand, shape the future. They introduce new ideas through stories and reporting, often subtly influencing the ways our societies and democracies progress.
In order to recognize the unique skills (and limitations) brought by both sides of a human-machine relationship, we need to equip ourselves with reasonable expectations. We need to stop portraying AI as flawless human-like robots excelling at any task given to them. Instead, we should offer a narrative in which human beings are assisted by computational systems.
Let’s use a kitchen metaphor. If you are expecting an AI system to bring you a perfect omelet, you are bound to be disappointed. But if you are expecting the AI system to help prepare your ingredients – crack the eggs, grind some cheese, chop an onion – you are more likely to end up with a great lunch. You might have had to pluck some eggshells out of the mix or do a second round of onion chopping, but the overall process was smoother with the help of AI.
Training is needed
The idea of humans and machines working together is a topic gaining traction in academia, not least in the field of journalism where the term hybridization is increasingly used. One way of enabling constructive hybridization is to routinely practice decomposition. This means breaking down big news projects into smaller, more tangible tasks, so that news professionals can more easily identify what can be done by the machine and what requires human expertise.
To get to this point, news professionals should be offered appropriate training and information about the potential and the limitations of AI technologies. Introductory courses such as Elements of AI are a great starting point for anyone looking to familiarize themselves with the terminology. However, news organizations (Schibsted included) need to go beyond that and step up their game in terms of culturally and practically upskilling the workforce, aiming to bridge gaps between historically siloed departments.
We need to bring our full organizations onboard to understand how to responsibly leverage these new technologies. Schibsted is currently part of multiple research efforts at Nordic universities, such as the University of Bergen, NTNU in Trondheim and KTH in Stockholm, where we explore both technological and social aspects of these new technologies. Just as we do in academia, we need to take an interdisciplinary approach when equipping our organization with the skills needed to thrive with AI.
It is time for news organizations to take the lead in the industry’s AI developments
We put ideals such as democracy and fair competition at risk if we allow the global information flow to be controlled – implicitly and explicitly – by a few conglomerate companies. It is time for news organizations to take the lead in the industry’s AI developments. This does not mean that we need to match big tech’s R&D funding (as if that was an option…), but we need increased reflection and engagement regarding how we want AI to impact our industry, organizations, readers, and ultimately, society.
A pressing task for the news media industry is to ensure that AI in newsrooms is optimized for values that support our publishing missions. To do so, we have to stop talking about robots and focus on how newsrooms – and just to be clear, the human beings in them – can benefit from the capabilities of these new technologies. One such attempt can be found in the global industry collaboration JournalismAI run by Polis at the London School of Economics, which Schibsted is part of. There, newsrooms from across the world are joining forces to experiment and test the potential of applying AI to achieve newsroom goals. The collaboration serves as a great illustration of what would make a nice bumper sticker: Power to the Publishers!
Agnes Stenbom
Responsible Data & AI Specialist/Industry PhD Candidate
Years in Schibsted
2,5
What I’ve missed the most during the Corona crisis
Global leadership.
Technology will fertilize farming
With more and more people to feed in a less reliable world, the farming industry is the key to change. For years, the food production system has been one of the largest producers of emissions and a main reason behind reduced biodiversity. But new technologies to produce more food in a smart way are already here. Now it’s about scaling and putting things in order for sustainable food production.
In 1973, when I was born, there were 3.9 billion people in the world. Today there are 7.8 billion. Twice as many. Consider that for a minute. In 47 years humanity has doubled. And it keeps on growing.
The UN estimates that there will be 9.7 billion people on Earth in 2050, and that we will reach a peak population of eleven billion in the year 2100. As if that wasn’t enough, the population growth will not be evenly spread. For example, it is expected that the population of Africa south of the Sahara will double before 2050.
Furthermore, things are going the wrong way in many areas. Rising temperatures mean less land for growing wheat, one of the most important sources of calories in the world. More extreme weather destroys harvests, while flooding and intensive farming lead to soil erosion. The use of pesticides is ruining biodiversity both on the surface and beneath the soil.
The timing couldn’t be worse. It will be difficult for food production to keep pace with 200,000 more mouths to feed every day, all year round. The irony is that those who produce the food are, to a large extent, the same people who are destroying the possibility of producing food. Globally, 23 percent of man-made climate emissions come from farming, forestry and other land use, according to the UN. But the destruction caused by farming is not limited to emissions.
In September 2020, when the WWF published the report Living Planet, many people spilled their morning coffee in surprise (coffee, by the way, might be in short supply in the future). In the animal populations studied by the WWF, there are on average 68 percent fewer individuals than 50 years ago.
The main reason, the report says, is land being converted to farming. The natural habitats are gone. Species go extinct. Biodiversity is reduced. Pollinators disappear. The seas are warming and losing oxygen, and they are being destroyed by over-fishing, pollution and contamination.
The food system is a large part of the problem, which means that it is also a large part of the solution
With today’s food system, the world is moving towards catastrophe and everything that entails, from suffering and political upheaval to conflicts, wars and migration. But the report gives a bit of hope as well. It is possible to turn the trend around and increase biodiversity on the ground, in the soil and in the water.
”We know that it will take a global, collective effort; increased conservation efforts are key, along with changes in how we produce and consume our food and energy”, the report says.
In other words: the food system is a large part of the problem, which means that it is also a large part of the solution. How do you produce more food in a smarter way? How can you reduce energy consumption, throw away less food and reduce greenhouse gas emissions? How can we use less water and fewer pesticides that destroy the microbiology of the soil? How do we use less fertilizer, which leaches into drainage systems and the groundwater? And what about acreage? Is it possible to produce much more food in a much smaller space?
All around the globe, researchers and innovation centers are asking these questions. They are finding answers too. There is so much going on within agricultural technology right now that there is reason for optimism. Here are three of the most important developments:
1. Biotechnology
Possibly the most revolutionary development in agriculture is happening right now in genetic research. Earlier genetic engineering was somewhat primitive, mostly about moving DNA between species to give plants or animals the desired characteristics. But changing the gene pool can be risky, which is why the opposition to so-called GMOs has been strong.
But developments have rolled on, and today I can hardly think of a field where the distance between researchers and the general public is wider. The agricultural sector is no stranger to genetic modification. Today’s food plants and production animals are the result of generations of cross-breeding to obtain the desired characteristics, which in many cases are completely different from those of their ancestors in nature.
The idea that modern gene modification is ”fiddling with nature” is therefore rather confusing, because people have been ”fiddling with nature” ever since they went from being hunter-gatherers to being farmers who selected and grew plants with specific characteristics, some 12,000 years ago.
The really big breakthrough came with Crispr, the technology that makes it possible to enter DNA sequences and make changes in absolutely every living organism, whether bacteria, viruses, insects, fish, human beings or other mammals. Actually, this technique was developed by nature itself; the researchers have merely copied it. Already, seeds have been developed that are resistant to fungal infection, potatoes containing less acrylamide (a substance that can cause cancer), mushrooms that don’t go brown, corn that can survive a drought and pigs that are resistant to common virus infections, to mention a few.
Agrisea, an unbelievably exciting British start-up, is a splendid example of how gene technology can be used to get food to a growing population. They have developed a rice plant (and heaps of other plants, of course) that can grow in salt water and make use of the nutrients in the sea. Thus food can be grown in floating installations at sea without soil, without fertilizers and without having to add fresh water which, as we know, is a scarce commodity in many places. The first test installation will be run towards the end of 2020.
Crispr opens up endless possibilities for better and more sustainable food production. With plants that are more robust, one can produce more food on a smaller area, with less loss and waste. Better animal health means better animal welfare and less wasted feed and energy. The question is how to secure all these improvements without losing control.
The next step for gene modification is to have regulations that make it possible to put it to use. There are no international regulations. So far, the EU has said that Crispr should be treated with the same level of severity as GMOs with imported DNA, that is, with serious restrictions. In other places, like the USA, a distinction is made between GMOs and gene editing. Plants that could have been cultivated with traditional methods, but are improved with Crispr or some other genetic tool, are not hampered by any special restrictions. American authorities treat them as they treat all food from conventional farming.
With good regulations, gene technology has the potential to make a strong contribution to a more efficient and sustainable food production.
2. Precision agriculture
People like to think of agriculture as something a bit old-fashioned, close to nature and unchanging. In reality, farming has been quick to adopt new technology ever since the industrial revolution. Today it is common to have both milking robots and self-driving agricultural machinery. But big changes are on their way. The biggest problem with today’s farming technology is that it is too coarse. A huge field is usually treated as if there were no variation across it. Even if one part has more than enough moisture and another too little, the field is watered equally everywhere. Pesticides are sprayed all over the place, sometimes even from an airplane. Fertilizers are spread evenly too. Giant, heavy machines drive across the fields, compacting the soil so hard that it becomes difficult to grow anything there.
Next-generation farming is much more precise and less harmful. Unmanned planes and drones can scan the cultivated areas, collecting and analyzing data to find out which spots need watering, fertilizing or spraying. Down on the ground, all-electric light robots roll along between the plant rows, studying plants at leaf level and sowing or spraying only in the exact spots where it is needed – and then rolling back to charge themselves.
This technology makes it possible to grow crops of better quality on the same acreage, and at the same time reduce the use of pesticides and fertilizers by almost 95 percent.
This will improve the soil quality, which in turn will mean large benefits. When the soil offers good conditions for microbes and living organisms, sufficient content of organic material and a good soil structure, it will be able to prevent erosion, produce better crops, create better conditions to store water and to drain off excessive water and, not least, ensure better conditions to store carbon.
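The per-zone decision logic described above can be sketched in a few lines. Everything below – zone layout, sensor readings and thresholds – is invented for illustration; real precision-agriculture systems fuse many more signals and learn their thresholds from data:

```python
# Illustrative sketch: turn per-zone drone sensor readings into spot treatments,
# instead of treating the whole field uniformly. All values are made up.

def plan_treatment(zones, moisture_min=0.25, moisture_max=0.45, pest_threshold=0.1):
    """Return a list of (zone, action) pairs, one per spot that needs attention."""
    actions = []
    for zone, readings in zones.items():
        if readings["moisture"] < moisture_min:
            actions.append((zone, "irrigate"))
        elif readings["moisture"] > moisture_max:
            actions.append((zone, "drain"))
        if readings["pest_index"] > pest_threshold:
            actions.append((zone, "spot-spray"))
    return actions

field = {
    "A1": {"moisture": 0.18, "pest_index": 0.02},
    "A2": {"moisture": 0.33, "pest_index": 0.25},
    "A3": {"moisture": 0.52, "pest_index": 0.01},
}
print(plan_treatment(field))  # only the zones that need it get treated
```

The point of the sketch is the contrast with today's coarse approach: the output is a short list of targeted actions rather than one blanket instruction for the whole field.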
3. Internet of things
Everything is becoming connected to the net because there have been strong, simultaneous developments in many technological fields. Mobile tech, location tech, sensors and data storage are only a few of these key technologies. If you combine them you can make rather funky things.
Connected sensors can obviously be used in the field. But they can also be used to establish a more sustainable meat production.
A long, long list of companies are now developing solutions that will ensure better animal health, animal welfare and yield in meat production. What many of these projects have in common is that they put a sensor on the animal, gauging the animal’s body temperature, movements and level of activity. The data is collected and processed in real time. When, for example, a cow has a slightly elevated body temperature, moves less and lowers her head, it could mean that she is about to fall ill. Such early warning signs can be next to impossible to detect in a herd, and the earlier the animal receives treatment, the easier it is to limit the spread of infection in the herd and stop a serious illness. Some systems even have a lamp on the sensor in the cow’s ear. It lights up when illness is suspected, to make it easier for the farmer to find the right cow among all the others.
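The early-warning rule such systems apply can be sketched very simply. The thresholds and per-animal baselines below are invented for the example; real products learn each animal's baseline from historical sensor data rather than using fixed numbers:

```python
# Illustrative early-warning check over livestock sensor data.
# Thresholds and baselines are invented; real systems learn them per animal.

def health_alerts(herd, temp_rise=0.5, activity_drop=0.3):
    """Flag animals whose temperature is elevated AND whose activity has dropped."""
    alerts = []
    for tag, r in herd.items():
        feverish = r["temp"] - r["baseline_temp"] >= temp_rise
        sluggish = r["activity"] <= r["baseline_activity"] * (1 - activity_drop)
        if feverish and sluggish:
            alerts.append(tag)
    return alerts

herd = {
    "cow-17": {"temp": 39.4, "baseline_temp": 38.6, "activity": 40, "baseline_activity": 100},
    "cow-23": {"temp": 38.7, "baseline_temp": 38.6, "activity": 95, "baseline_activity": 100},
}
print(health_alerts(herd))  # cow-17 is flagged for a closer look
```

Requiring both signals together is what makes the alert useful: a warm day raises every temperature, but only a sick animal is both feverish and unusually still.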
To sum up, one might say that the constantly growing human population is facing an enormous challenge. Biodiversity must increase. The protection of natural habitats must be stepped up. Soil health must improve. Plants and animals must become more robust. Production must be more reliable in a less reliable world.
I don’t think it will happen through innovation. The solutions are here already. It is rather the scaling that will cause a problem: getting authorities to invest, regulate and pave the way for sustainable, efficient food production and the protection of nature. If they do this, they are contributing to saving the world, no less.
You reap what you sow.
Joacim Lund
Technology commentator, Aftenposten
Years in Schibsted
15
What I’ve missed the most during the Corona crisis
Italy! Ever since I was a choirboy one summer in the Vatican in the 80s, I’ve visited as often as I can.
On the hunt for human emotions
On the hunt for human emotions
Artificial intelligence is behind countless services that we use every day. But how close is it to really understanding human emotions? Affective computing has already come a long way – and as in many areas, big tech is in the lead.
A somber, suited man stands in a cemetery. Softly, he strokes a gravestone before throwing his arms up toward the sky, howling in sorrow. The inscription on the stone reads:
Clippy. 1997 – 2004.
The scene is from a Microsoft commercial for their Office software. In reality, however, few people mourned the demise of the paper clip-shaped Office assistant, tasked with aiding Microsoft users in their screen work.
Unfailingly pseudo-helpful, Clippy may be the most ubiquitously reviled piece of software ever created. Not because a digital assistant is inherently a bad idea, but because its tone-deaf servility pushed Microsoft users closer and closer to insanity.
Designed to respond intuitively
Ever since computers became everyday tools, tech companies have been investing heavily in improving the ways humans and machines interact. We have gone from the days when using a computer required impressive technical skills and hours hunched over dense user manuals, to the plug and play era where software is designed to respond intuitively to our needs and wishes.
Even so, digital computers and human emotions have never gotten along very well. Too many computer engineers have made the cool rationality of computers the standard to which humans need to adjust. But as algorithms become more and more intertwined with every aspect of our lives, things are changing. For better and for worse.
In 1995, the American computer engineer Rosalind Picard wrote a pioneering paper, ”Affective computing”, about a nascent research field investigating the possibilities of computers learning human emotions, responding to them and perhaps even approximating human emotions to more efficiently make decisions.
Any algorithm that takes human behavior as input is indirectly responding to human emotions. Take Facebook for example, and the way its algorithms feed on human agitation, vanity and desire for companionship. Their algorithms systematically register the actions these emotions trigger (likes, shares and comments, commonly referred to as engagement), and then attempt to amplify and monetize them.
Making tech less frustrating
The field of affective computing, however, is ideally less about manipulation and more about making tech less frustrating and more helpful, perhaps even instilling in it some semblance of empathy. Counter-intuitively, one key to making affective computing work well may be to avoid anthropomorphizing the interface. Humanizing Clippy did not make people relate better to their Microsoft software, quite the opposite. And while chat bots are popular among companies hoping to slash customer service costs, for customers they are less like magically helpful spirits and more of a needlessly convoluted way of accessing information from an FAQ.
Affective computing endeavors to understand us better and more deeply, by analyzing our calendars, messaging apps, web use, step counts and geolocation. All this information can be harvested from our phones, along with sleep and speech patterns. Add wearable sensors and cameras with facial recognition, and computers are getting close to reading our emotions without the intermediary of our behavior.
In the near future this could result in consumer technology such as lightbulbs that adjust to your mood, sound systems that find the perfect tune whether you’re feeling blue or elated, and phones that adjust their notification settings as thoughtfully as a first-rate butler – just to name a few possible applications. It could also be used for surveillance of employees or citizens, for purposes malicious or benign.
Rosalind Picard is currently a professor at Massachusetts Institute of Technology, running the Affective Computing Research Group. She is also the co-founder of two groundbreaking startups in this space: Affectiva in 2009 and Empatica in 2014. Through her work she has become keenly aware of the potential to use affective computing for both humanitarian and profit-driven purposes.
Affectiva’s first applications were developed to help people on the autism spectrum better understand facial expressions. Later the company developed technology to track the emotional state of drivers. And after Picard had moved on to form Empatica, a company hoping to address the medical needs of epilepsy patients, Affectiva has been attracting clients like Coca-Cola – who use the technology to measure the effectiveness of their advertising – and political campaigns who want to gauge the emotional response to political debates.
Simulate human emotions
Microsoft’s doomed Clippy was neither the first nor the last anthropomorphized bundle of algorithms. Robots have often been envisioned as synthetic persons, androids that understand, exhibit and perhaps even experience human-like emotions. There are currently countless projects around the world in which robots are developed for everything from education and elderly care to sex work. These machines rarely rely on cutting-edge affective computing technology, but they nevertheless simulate a range of human emotions to please their users.
If science fiction teaches us anything about synthetic emotion, it is a bleak lesson. Ever since the 19th century, when a fictional android appeared in Auguste Villiers de l’Isle-Adam’s novel ”The Future Eve”, androids have tended to bring misery and destruction. In the ongoing HBO series ”Westworld”, enslaved robots rise up against their makers, massacring their human oppressors. In the acclaimed British author Ian McEwan’s 2019 novel ”Machines Like Me”, the first sentient androids created by man gradually acquire human emotions, and then commit suicide.
Of course, we should celebrate the ambition to create software that adjusts to our needs and desires – helps us live and learn a little bit better. But it is worth keeping in mind the failure of Clippy, and perhaps even the warnings from concerned science fiction writers. More than that: at a time when big tech companies are hoarding personal data and using that data to manipulate us, affective computing will inevitably be a double-edged sword. After all, why should we trust Facebook’s or Google’s algorithms to ever understand empathy so long as the companies themselves show little capacity for it?
Sam Sundberg
Freelance writer and Editor for Svenska Dagbladet
Years in Schibsted
1,5
What I’ve missed the most during the Corona crisis
City life!
Welcome to the synthetic decade
Welcome to the synthetic decade
Technology is giving us tools to alter reality in more and more areas. You might soon not only eat artificial meat but also interact with your personal double. And – not least – consume more and more information created by AI. Welcome to the Synthetic Decade.
The idea that we’re entering a new era is put forward by futurist Amy Webb and her team at the Future Today Institute. She states: ”Not only will we eat beyond burgers, but we will consume synthetic content, or train the next generation of AI with synthetic data sets”.
Recent developments within AI prove them right. AI will impact the way we consume, get informed and envision health and life span. This is not a distant future, and you might already have encountered what is now defined as ”synthetic content”. If you have ordered a beyond burger you have had synthetic meat; if you have used a face swap filter on your phone you have produced synthetic media.
Editing DNA
As we progress into the synthetic decade, synthetic experiences and relationships will shape greater parts of our lives. A good example is the development of synthetic biology: the ability to engineer living systems and structures by programming DNA with Crispr, designing and re-designing organisms to do what we want them to do. Editing DNA has been possible since 2010, but it is a very laborious task. Synthetic biology promises to automate the editing process. As Amy Webb puts it: ”In this decade, synthetic biology is going to allow us to read, edit and write life. We will program living biological structures as we build tiny computers.” This is not science fiction, and we can envision many positive use cases for improving our own health and life span, and for helping living structures adapt to new conditions such as global warming or pandemics.
Looking into one of these fields – synthetic media – uncovers many of the trends behind the synthetic decade. It has started to unfold, and it tells us a lot about the potential outcomes and the many questions it triggers, blurring the line between what we consider ”real” or ”virtual” even more.
2017 was a landmark year for synthetic media, with Vice reporting the emergence of pornographic videos altered with algorithms to insert the faces of famous actresses. The term ”deep fake” was coined soon after, bringing a lot of attention to the phenomenon and its harmful potential for misinformation. It triggered a fundamental discussion, likely to remain at the core of synthetic media, about ethics and the potential harm of ”forging” content through AI. A very famous example is a deep fake video of Obama, created by Buzzfeed and performed by Jordan Peele, warning us that ”We’re entering an era in which our enemies can make anyone say anything at any point in time” – and indeed we are!
The potential impact of synthetic media lies in the automation of editing
Synthetic media is the term used for content created using artificial intelligence. With an initial set of data, algorithms learn to reproduce, and create, pictures, videos, sound, gestures, text and more. The result is realistic-looking and realistic-sounding artificial digital content.
Looking closer at the tech behind synthetic media, the past few years have seen significant advancements in deep learning, and generative adversarial networks (GANs) in particular have accelerated its growth. Synthetic media is mostly based on GAN technologies, even if many other techniques are being developed. As a result, the quality of synthetic media is improving rapidly, and soon it might be indistinguishable from traditional media.
The potential impact of synthetic media lies in the automation of editing, which makes it possible to create content at scale. The cost of creating synthetic media has dropped considerably due to the wide availability of the techniques. Open source software already enables anyone with some technical knowledge and a powerful-enough graphics card to create a deep fake. This has led to a drastic improvement in synthetic media quality (check out thispersondoesnotexist.com), without countless tedious hours of work.
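To make the adversarial idea behind GANs concrete, here is a toy, one-dimensional sketch. Every number in it – the target distribution, learning rates, step counts – is invented for the demo, and real GANs pit deep neural networks against each other rather than these two-parameter models:

```python
# Toy GAN in one dimension: a "generator" shifts noise by theta, while a
# logistic "discriminator" (w, b) tries to tell real samples (mean 3.0)
# from generated ones. Both are trained against each other with plain SGD.
import math
import random

random.seed(0)
REAL_MEAN = 3.0
w, b, theta = 0.0, 0.0, 0.0   # discriminator (w, b) and generator (theta)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

history = []
for step in range(5000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = theta + random.gauss(0.0, 0.5)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= 0.05 * ((d_real - 1.0) * real + d_fake * fake)
    b -= 0.05 * ((d_real - 1.0) + d_fake)

    # Generator step: shift theta so the discriminator mistakes fakes for real.
    fake = theta + random.gauss(0.0, 0.5)
    theta -= 0.05 * (sigmoid(w * fake + b) - 1.0) * w
    history.append(theta)

avg = sum(history[-1000:]) / 1000
print(round(avg, 1))  # recent average of theta; it drifts toward the real mean
```

The same tug-of-war, scaled up to millions of parameters and images instead of single numbers, is what lets GANs produce faces that never existed.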
A meaningful trend
If we also consider new behaviors – how we consume media on social channels, how we expect ever more personalization and accessibility, and the fact that we have normalized virtual spaces for socializing (see the rise of Fortnite, or Animal Crossing as social media during the quarantine period) – we have very fertile ground for synthetic content to become a meaningful trend and change the way we create and consume content online.
This again raises the familiar question of whether synthetic media is bad. It is a delicate yet fundamental question, and the answer is the same as with most tech: it is not harmful in itself; it depends on what we use it for. Synthetic media has a lot of potential because it is not just deep fakes – there is growing interest in how it could support new business and creative areas. The industry around synthetic media is blooming, and many companies and investors are looking into the trend, believing strongly in its future.
For now, entertainment applications are the entry point for larger audiences. We all carry the possibility to create synthetic media in our pockets today. Snapchat released its gender-swap filter in 2019, the Russian app Faceapp made us look older, and in China, ZAO released a deep fake app that grafts the user’s face onto clips from famous films and series. It’s not hard to imagine the next iteration of a social media app being one where users can transform their voices, create their own synthetic character, or pretend to be their favorite celebrities.
Synthetic media could give the media industry leverage
But it’s about more than just entertainment – synthetic media could give the media industry leverage, starting with automated news reporting and delivery.
In today’s newsrooms, some types of reporting are extremely tedious and straightforward – human opinion and effort add little value. Weather reporting is a very good example. In the UK, the BBC blue lab has been exploring how synthetic media could help weather reporting. Given the growth of digital assistants and the industry’s drive for greater personalization, they are betting that in the future we might expect a video response to a query to be digitally generated. To try this out, the editorial department collaborated with the AI firm Synthesia on an experiment where the presenter reads the names of 12 cities, numbers from -30 to 30 and several phrases explaining the temperature to the camera. You can then pick your city and get a personalized, but synthetically created, weather report.
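The BBC experiment stitches pre-recorded video fragments together, but the underlying idea – generating a personalized report by filling a template from structured data – can be sketched in a few lines. The city, thresholds and wording below are invented for the example:

```python
# Minimal sketch of template-based automated reporting, the text-only
# cousin of the synthetic weather video experiment. Values are invented.

def weather_report(city, temp_c, condition):
    """Fill a fixed sentence template from structured weather data."""
    if temp_c <= 0:
        feel = "freezing"
    elif temp_c < 15:
        feel = "chilly"
    else:
        feel = "mild"
    return f"In {city} it is {temp_c}°C and {condition}, a {feel} day."

print(weather_report("Oslo", -3, "snowy"))
```

Each reader gets their own report from the same template, which is exactly what makes this kind of tedious, structured reporting a natural first target for automation.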
Within Schibsted several of our media houses have simpler, but also automated services, reporting on weather, sports and real estate.
Another very promising application is automated, real-time translation and dubbing. Synthesia is one of the most prominent companies in that field, with use cases ranging from education to customer service.
With improvements in synthetic voices, we can also imagine rapid adoption of voice technology in traditional media production pipelines, particularly in video games and audiobooks – markets that today face significant challenges scaling human voice-over work. Overall, synthetic media could be a powerful technology for businesses that rely on content and would like to adapt their offering to different audiences. What would today require many hours of work could be done through synthetic content creation.
Text and dialogue are also prominent use cases of synthetic media. Hence, we are seeing the development of more realistic and accurate conversational and companionship technologies. From simple bots that generate tailored conversations to full virtual doubles, the potential for service and leisure conversation opens up.
Having a conversation with an AI
Right now, most of our interactions with AI are transactional in nature: ”Alexa, what’s the weather like today?”, or ”Siri, set a timer for ten minutes”. But what about having a profound conversation with an AI? A stunning example is the conversational bot Replika, which is programmed to ask meaningful questions about your life and to offer you emotional support without judgment. Since its launch, more than two million people have downloaded the Replika app.
Digital assistants could be used for companionship, but also for education or training. They could for example help us recreate a learning environment, especially when working remotely. What if you could interact with simulated persons to learn from them or practice management techniques? And – would you invite a synth to a dinner party?
For all of this to happen, and to convince us to interact with our virtual counterparts, improving virtual human appearance and emotional response is crucial. The more these companions look, talk and listen like humans, the more inclined we will be to interact with them. Samsung’s virtual human ”Neon”, for example, which the company describes as its ”first artificial human”, is already here. Neons can go off-script and develop their own ”personality”, generating new expressions, gestures and reactions out of it.
Producing quality synthetic content is still very costly and tech intensive, but companies specializing in synthetic content are emerging, allowing businesses and individuals to buy and rent synthetic media.
Synthetic media is rather new and it’s moving fast – so fast that regulation has not caught up yet. Whether it is deep fakes, synthetic voices used for customer service, or entertainment pieces, we will need to lay some ground rules about the ownership of such content and establish the responsibilities that come along with it. So far, many questions are left unanswered: who will ”own” the content produced? How will copyright laws apply to a reproduction of a celebrity? Who would be held responsible if a digital assistant hurts someone in real life?
Still in early stages
Synthetic content has already made its way into our lives, but not all parts of its ecosystem are moving at the same pace. The synthetic media subtrend has already reached mainstream audiences; the technology powering it has left the research lab and found very concrete business applications. From strong ethical fears to concrete valuable use cases, this development tells us a lot about the potential trajectory, outcomes and questions of the synthetic decade. Other areas, such as biology, are still in early stages, but their applications will alter our lives even more profoundly. The technologies underlying synthetic media, synthetic biology and other fields of synthetic content – AI, computer vision, deep learning and so on – are largely the same. This means that the early questions raised by synthetic media indicate the fundamental discussions we will face throughout the synthetic decade: in every field, debate will arise around the right to edit, create and use what is created, how ownership is determined, and what is considered ethical. It also means we still have some agency to decide what comes next, and the synthetic decade will not necessarily be dystopian.
Sophie Tsotridis
Former Associate Product Manager and Trainee in Schibsted
Years in Schibsted
2
What I’ve missed the most during the Corona crisis
Being able to see a movie in a theater!
10 trends for 2021 – the pandemic shift
10 trends for 2021 – the pandemic shift
Get an overview of some of the most interesting ongoing tech trends with Schibsted’s top ten trends list!
1. Splinternet: our new, fractured online life
The web is dead, so said the cover of Wired Magazine in August 2010. Ten years later it is still around, but there is no denying that its original form – the free, global, hyperlinked internet – is a thing of the past. Governments around the world are increasingly asserting control of the digital realm. China’s ”Great Firewall” and other censorship efforts are prime examples. Other countries are in turn responding to China’s global ambitions by banning Chinese apps. India banned 59 Chinese apps in July, and the US threatens to ban some of the most popular: Wechat and Tiktok. Meanwhile, as the EU is trying to make American tech companies play by the rules of GDPR legislation, Facebook recently responded that they may leave the EU if they cannot store Europeans’ data on American servers. Threat or promise? Many European tech startups would no doubt be thrilled to see Facebook go, hoping for a chance to create new social media platforms for European users.
2. Rise of the super apps
In Asia, ”super apps” collect many services within proprietary ecosystems. Wechat, Alipay, Grab and Gojek all compete in this space; Wechat is the front runner with over one billion monthly users and one million mini-apps on its platform. Watch out for companies like Amazon and Facebook trying to bring this winner-take-all trend to the West.
3. Shopping goes online
The pandemic has forced many brick-and-mortar stores to close, but according to data from IBM, this has fast-forwarded e-commerce growth by about five years. As new user groups learn to shop everything from groceries to fashion online, the pressure is on retailers to up their game, offering friction-free payments and same day delivery.
4. Games become social
Online gaming has been a refuge during the pandemic. For gamers around the world, hits like Fortnite and Animal Crossing offer more than just game mechanics. The games themselves, as well as Twitch streams, Youtube play-by-plays and Discord gaming chats, are spaces of connection and camaraderie – important forms of social media.
5. 5G changes everything
The battle for 5G supremacy may have stolen the spotlight for now, with several countries banning Huawei from their telecom infrastructure. But the more interesting news is that 5G speed (and bandwidth) changes the game for augmented reality and the internet of things. Finally, these hyped technologies have a chance to live up to our expectations.
6. Dining in the cloud
We may not meet up with friends at the restaurant as often these days, but we still need to eat. Thus, delivery services such as Foodora and Uber Eats are keeping busy. There is also new opportunity for nimble food startups, foregoing dining spaces (and expensive rent) and instead setting up efficient kitchens and selling food online to stay-at-home diners.
7. Deeper authenticity
Our lives are increasingly cloud-based and intertwined with algorithms. Despite this – or because of it – research shows that millennials crave authenticity: real people, real connections. The race is on to solve digital identity, ensuring users can own their online identity and data. And it may be a race where blockchain tech finally wins out.
8. No hiding from Big Brother
500 million surveillance cameras track the Chinese people, along with a country-wide network of human informers. The social credit program aims to create a record of the entire population’s trustworthiness. Just wait for security-minded western politicians, and managers eager to check in on their work-from-home staff, to take a page out of the Chinese playbook.
9. Deep fakes
In September, The Guardian published an op-ed on why the human race must not fear artificial intelligence. The twist? It was written by an AI. Synthetic media is not coming, it is already here. Time to get used to AI-generated text, audio and video, created by learning algorithms, powered by engines from companies including OpenAI and Deepmind.
10. Smile for the camera
We waited decades for video chat to take off, in schools, in the workplace and just for fun. The pandemic was the tipping point that finally made video tech like Zoom, Skype and Google Meet indispensable everyday tools. Watch this space for a burst of innovation as screen sharing and fun filters evolve into sophisticated AR applications.
Meet our people in Tech
Meet our People
Ioana Havsfrid is part of Schibsted’s machine learning team. She’s working on a project that focuses on content understanding.
Looking for hidden content
”Text, video or audio – in all forms of content there is a lot of hidden information. We want to understand this underlying content within the journalism that our newspapers produce.”
With the help of machine learning the team is trying to automatically extract different kinds of information, such as people mentioned in a text, places or events, or even whether the sentiment is positive or negative on a given topic.
”That would differ in an article from Italy based on whether it’s an inspirational travel piece or if it’s about Corona”, Ioana explains.
The main reason is to develop contextual ads, where commercials can be matched in detail with specific kinds of content. New regulations and tough competition are challenging traditional advertising, which makes contextual advertising an interesting alternative. It also opens up possibilities to create more personalized subscription offers, for example. ”It’s crucial that the team knows the domain and the product, and that they read and follow the content to get ideas on how to apply the technology. And in our case, to work closely with the sales organization – then we can build something together.” Ioana also points out that this way of thinking is new, and that data can throw you in new directions. ”We need to explore more and explain to our organizations how these new technologies work.”
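A toy, rule-based stand-in for the extraction Ioana describes might look like the sketch below. The word lists are invented for illustration; the real team uses trained machine learning models, not hand-written rules:

```python
# Toy stand-in for an ML content-understanding pipeline: extract place
# mentions and a crude sentiment signal from article text. Word lists
# are invented; real systems use trained models instead of rules.

PLACES = {"Italy", "Oslo", "Stockholm"}
POSITIVE = {"beautiful", "inspiring", "wonderful"}
NEGATIVE = {"outbreak", "lockdown", "crisis"}

def analyze(text):
    """Return (entities, sentiment) for a snippet of article text."""
    words = [w.strip(".,!?") for w in text.split()]
    entities = sorted({w for w in words if w in PLACES})
    score = sum(w.lower() in POSITIVE for w in words) - sum(w.lower() in NEGATIVE for w in words)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return entities, sentiment

print(analyze("Italy is beautiful and inspiring in spring."))
print(analyze("A new lockdown hits Italy as the crisis deepens."))
```

The two example sentences illustrate Ioana's point: the same entity, Italy, carries a very different sentiment in a travel piece than in a Corona article, and an advertiser would want to treat the two contexts differently.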
Ioana Havsfrid
Engineering manager, Machine Learning team
Years in Schibsted: 6 months
What I have missed the most during the Corona crisis: Chats by the coffee machine.
A platform to integrate podcasts
Podcasts are a high priority in newsrooms – and now users can listen to them directly on Schibsted’s news sites, thanks to our new, very own platform. Erik Saastad got the assignment to investigate whether podcasts could drive login and increase willingness to subscribe on Schibsted’s news sites.
He soon realized they could – but they would need to be published directly on the sites, not only on external platforms like A-cast or Spotify.
”To stream sound isn’t that different from streaming video, and we already had a solution for that which we could build on”, Erik explains.
For news sites such as VG and Aftonbladet, the new platform means they can publish podcasts earlier on their own sites to reach all their users and open up for more ads – and then on external platforms where the large podcast audience will find them. Aftenposten is experimenting with publishing behind their paywall to drive subscriptions. ”We have also found that we reach new users – people not that familiar with podcasts now find them”, says Erik.
Erik Saastad
Product Manager
Years in Schibsted: 8 and counting
What I have missed the most during the Corona crisis: Colleagues and drinking beer with my team in Krakow.
Privacy and data live together
In Schibsted, using and sharing user data is a crucial part of developing new, relevant products and services. Just as important is handling data in a responsible way to protect people’s privacy.
”It’s about a lot more than just being compliant. To earn our users’ trust we also want to be a driving force in finding that perfect balance between data and privacy”, says Siv Kristin Henriksen, Privacy Project Manager.
Schibsted has quite a large privacy team which has been focusing on this since the early days of GDPR. The reason is simply that Schibsted is a tech-driven company working a lot with data.
”We want to be the ones guarding and leading the way – in discussion with legislators.”
Lately Siv has been working a lot with startups that Schibsted is investing in. ”It’s super exciting to meet them. Sometimes it’s just three people having a very good idea.” She also recognizes the advantage of getting help from her own team, instead of turning to a larger law firm. ”We have a broader perspective and we look for possibilities, because we understand both the entrepreneurs’ needs and the user perspective.”
Siv Kristin Henriksen
Legal Counsel, Privacy
Years in Schibsted: 4.5
What I have missed the most during the Corona crisis: My colleagues! Having lunch together, and coffee chats.