The rise of China as a high-tech superpower

The prospect of a booming Chinese tech sector is setting off alarm bells in Washington, DC. But what is Europe’s place in the cold war over tech?

In the early hours of a cool spring morning in Penn Valley, Pennsylvania, Temple University professor Xiaoxing Xi was awoken by someone at his front door. BANG! BANG-BANG! Forceful, intimidating – “Who knocks on people’s doors like that?” Xi thought before rushing downstairs. The Federal Bureau of Investigation, it turns out, is who knocks on people’s doors like that.

Xi has since testified in Congress and in interviews as to how government agents poured into his home, handcuffed him, marshalled his wife and children out of their rooms at gunpoint and proceeded to search the family’s home in their quiet Philadelphia suburb.

It was May 2015, and the university professor had been under surveillance for months. Based on his email activity, the FBI suspected that Xi was transmitting classified details of a pocket heater – an advanced instrument used in superconductivity research – to China.

Dramatic and life-changing as it may be, this type of raid is now routine work for FBI agents. The bureau has officially singled out Chinese tech espionage as its top counterintelligence priority and a “grave threat to the economic well-being and democratic values of the United States”.

Intense counterintelligence efforts

Over the past decade, intense counterintelligence efforts have been afoot in Silicon Valley and at universities across the US. In 2018, they culminated in the China Initiative, launched by former President Donald Trump’s Department of Justice.

The initiative, dismantled by President Joe Biden in early 2022, was a well-funded scheme devised to foil Chinese industrial espionage in cutting-edge research and business. Because, surely, Chinese spies had infiltrated these institutions to steal American tech secrets?

One thing is for sure: China’s tech ambitions are great. In the autumn of 2020, President Xi Jinping revealed China’s new five-year plan. The plan preceding it had set growth targets for a nation still climbing out of relative poverty, and in that five-year span GDP per capita grew by 30%. Millions of Chinese were lifted out of poverty, and some became very rich. In 2021, GDP per capita increased by 21% in a single year.

And even if the 2022 congress said little about growth, the Chinese tech sector has proven to be a formidable engine, with companies like Baidu, Tencent, Alibaba, Bytedance and Xiaomi becoming juggernauts feared even in Silicon Valley.

Wants to learn from the West

The objectives of Chinese innovation are diverse, but they are mainly focused on achieving the Chinese Communist Party’s goals for the nation: prosperity, modernisation and self-reliance.

It should come as no surprise that China wants to learn from the West. The Chinese government is actively working to counteract the brain-drain of Chinese researchers and engineers who are relocating to the US. They have attractive programs in place to encourage repatriation, and Chinese law stipulates that every citizen must co-operate if the authorities ask for assistance – or even trade secrets.

These laws are at the heart of the concerns over Chinese intellectual property (IP) theft. Over the past couple of years, these fears have led to several large Chinese tech companies being sanctioned – and crippled – by the US. Among them, the mobile communications companies ZTE and Huawei.

Chinese authorities invest heavily in key areas and set long-term targets for private and public sector innovation.

Another major difference in innovation strategies is the way Chinese authorities invest heavily in key areas and set long-term targets for private and public sector innovation. They have an ambitious program for conquering space, of course, but there are more strategic endeavours in which China hopes to become a world leader. The key fields of strategic importance are transistor technology, quantum computing, superconductors, weapons technology, artificial intelligence and any technology – such as social media and 5G infrastructure – that expands its surveillance capabilities.

Lagging behind

Transistor technology, which is found in the advanced factories in neighbouring Taiwan, is a priority because this underpins all digital technologies. China is currently lagging a few generations behind the state-of-the-art in this field and some western think-tanks argue that maintaining China’s dependence on other countries for advanced chips is crucial.

Quantum computing research is a race in which the state that first manages to harness the technology will gain the capability to decrypt communications thought secure today, along with many other exciting applications. The government lists quantum technology as its second priority, after artificial intelligence. It should be noted that this research is still embryonic and by no means a quick fix for China’s chip-making problems.

Superconductors promise to revolutionise our use of electricity as they provide zero-resistance transmission of electricity. China is slowly catching up to the UK and US in this nascent and investment-demanding domain. They are already leaders in the adjacent field of solid-state batteries which, among other things, can increase the range of electric vehicles and drones.

A constant race

Weapons technology is a constant race to stay ahead of the curve, to ensure adequate deterrence against potential attacks. Currently the name of the game is drone tech, battlefield AI and cyber-warfare – all disciplines where Chinese tech is at the bleeding edge.

Social media, payments systems and communications infrastructure are examples of technologies that facilitate mass-scale surveillance. Currently the Five Eyes pact (US, UK, Canada, Australia and New Zealand) is leading this field. However, China has invested heavily in domestic surveillance, including a vast network of CCTV cameras and an equally impressive network of human informants. Recent controversies over TikTok, Huawei 5G and cell phone brands like ZTE and Huawei are indicative of western fears that ubiquitous Chinese tech exports may propel its authorities’ surveillance powers onto the global stage.

While competition is fierce in these fields and beyond, artificial intelligence has emerged as the most hotly contested battleground. State-of-the-art AI – and in a possible future, artificial general intelligence, which is human-level AI and beyond – has the potential to turbocharge all other research.

Much has been made of the vast troves of data that Chinese companies could mine from the nation’s almost one billion internet users. This dataset could be the key to China surpassing the AI efforts of other nations. In a recent report, the Future Today Institute warns that Chinese companies such as Tencent and Baidu have superpowers, thanks to their access to this data “without the privacy and security restrictions common in much of the rest of the world”.

However, the recently enacted Personal Information Protection Law (PIPL) mirrors Europe’s GDPR, affording Chinese users many of the same protections as EU citizens. The Communist Party has further proven willing to play hardball with its most profitable companies, imposing some of the highest fines ever on its own tech juggernauts. Companies in violation of PIPL may face fines of up to five percent of their annual revenue.

In other words, the national treasure of Chinese data is not free for companies like Tencent and Baidu to mine at their will. That level of power is reserved for the Chinese state itself.

State surveillance is culturally ingrained, a fact of life since the cultural revolution and even long before.

While China’s tech ambitions are a boon to many Chinese, who have seen technology add comfort and convenience to their lives, technology always has the potential to be used for both good and bad. Surveillance is pervasive in China, with a vast network of CCTV cameras surveilling public spaces, and an immense network of human informants keeping track of neighbourhoods throughout the country.

State surveillance is culturally ingrained, a fact of life since the cultural revolution and even long before. China’s controversial social credit system has precursors that date as far back as the third century. Many Chinese seem to accept and even welcome this type of surveillance.

But there is a high cost for minorities such as the Uyghurs in the Xinjiang province, who are systematically targeted, suspected of terrorist affiliations due to their ethnicity alone, and sent to re-education camps if found to be engaging in any sort of behaviour deemed suspicious by authorities.
These human rights concerns make China’s technological rise seem ominous, and they have been rightly criticised by human rights groups and democratic countries in the west. It is ironic then that the United States is likewise using its tech prowess to monitor and target ethnic minorities, like Xiaoxing Xi.

FBI lacked expertise

After the FBI raided Xi’s home, the Temple University professor was suspended from his job and he faced the prospect of spending the rest of his life in prison.
Then, after four months, all charges against him were suddenly dropped. Xi’s colleagues had convinced the Department of Justice that the schematics he had emailed to China were, in fact, detailing a widely published innovation of his own, which had nothing to do with pocket heater technology. The FBI simply lacked the scientific expertise to understand it.

For Xi, the damage was already done. Not only was his reputation shattered; the suspicion of treason hung over him like a dark cloud. As a naturalised citizen of the United States, he had lost his sense of belonging and security in his home country.

Xi’s case is far from unique. The US finds itself in a predicament in which its companies need Chinese talent to stay competitive, but the US government fears the leaking of trade secrets and intellectual property to the rival nation.

In the past decade, US authorities have targeted hundreds of academics of Chinese descent – many of them American citizens – on suspicion of possible espionage. A few cases have been tried in court. There have been convictions, mostly for the common (but illegal) practice of trying to enrich oneself by transferring intellectual property from a previous employer to a new one.

Not one case has resulted in a conviction for espionage.

As the relationship between China and the US shows no sign of thawing, European countries must decide what role they want to play in this cold war over tech supremacy. China and the US have shown that they are both willing to play dirty to win this race. European countries will have to forge their own path, or risk ending up as collateral damage.

Sam Sundberg
Freelance writer, Svenska Dagbladet

Future trends for regulating the digital economy


Today we live in a very different world than we did just a few years ago. Everything has changed: the geopolitical landscape, the energy market and the cost of living. What has also changed is the view on regulating the Internet.

Some years ago, most politicians in the Nordics believed that the Internet should be free of rules and that a liberal regime was the only true guardian of innovation. Tech companies should not be liable for the content on their platforms and big tech should be able to grow by acquiring start-ups.

The pendulum has now swung to the other side, and in 2022, we see overwhelming support from Nordic decision-makers for the EU landmark regulations over the digital landscape, namely the Digital Services Act (DSA) and the Digital Markets Act (DMA).

Both regulations were adopted by the EU in summer 2022, and they will become reality by early 2024, at the latest. These instruments will have a huge impact on the way platforms deal with liability for illegal content, as well as how big tech companies should deal with their business users. The guiding principles for the new regulations are focused on creating fair and transparent rules that level the competition in the market and protect users and consumers from unfair commercial practices.

The current EU Commission is now halfway through its mandate and has already achieved a lot by means of proposing new digital regulation and becoming a force against the big tech companies. The Commission has set up several goals for 2030 related to increasing tech talent and the number of European unicorns, but they still have a long way to go to meet their goals.

EU targets for 2030

But while the EU wants to boost the European digital economy, it also wants to create a safe internet based on European values. This is a tricky balancing act between those who want liberal rules that allow for innovation and increased global competition, and those who want heavy regulation that protects the consumers of digital services.

The EU’s objective can be summarised in two words: values and sovereignty. Or in the words of Commission president Ursula Von der Leyen: “Digital sovereignty is not just an economic concept. We are a Union of values. One of the great questions is: How can we preserve and promote our values in a digitised world?”

Recent global events have undoubtedly strengthened the belief among EU policymakers that the EU needs to be its own strong force and promote a distinct ‘third way’ of regulating the digital economy, somewhere in between the USA’s ‘laissez-faire’ approach and Chinese authoritarianism.

This will – and has already – led to more regulatory oversight and enforcement powers at the EU level, making Brussels an increasingly important hub of tech regulation. This centralisation will make the European Commission a bigger interventionist player in the digital economy, not only by proposing new legislation but also by monitoring and ensuring compliance, which will take responsibility away from national authorities and make the rules even more harmonised across the EU.

Based on this new landscape, there are four major regulatory trends that will set the stage for the coming Internet rule book.

Safeguarding EU values

This EU Commission is much more political and principles-based than previous Commissions. It has taken the fight to big tech, at the urging of France, but it is also taking a political stance against countries such as Poland and Hungary, whose populist and nationalist agendas are an increasing source of irritation to the Commission and other Member States.

As part of this battle, the Commission wants to protect the European value of freedom of expression, enshrined in the EU Charter of Fundamental Rights, by introducing a proposal that aims to protect the safety of journalists and the independence of newsrooms from any external influence, such as states, owners and platforms.

This proposal will be much debated, as regulating the media is a sensitive issue that is typically left to the national level. The fact that the EU Commission is considering a proposal to regulate media at the EU level shows that it will do anything to protect the values it holds dear. But it will be an uphill battle to defend this proposal against member states and media companies that oppose any regulation of freedom of expression beyond the national level.

Protecting consumers, minors and privacy

One of the most discussed and contested issues in digital regulation is the use of personal data for targeted advertising. Many EU decision-makers are frustrated with the slow and inefficient enforcement of the General Data Protection Regulation (GDPR), and the need for a targeted revision of the rules is under discussion. The DSA and the DMA have already included specific rules to ban targeted advertising directed towards minors and stricter consent rules for digital gatekeepers, but there is a clear political push to do more to protect privacy.

We also see a continued push from EU institutions towards greater transparency regarding the use of algorithms, with the aim to ensure that people are empowered and informed about them. And similarly, we expect some moves to regulate the use of so-called ‘dark patterns’ and platform designs that are perceived to manipulate and steer user choice and inhibit freedom of choice online.

Fostering EU innovation

The European Commission will propose a series of legislative measures to strengthen the EU’s position as a hub for emerging technology innovation. This is expected to include a review of the regulatory definitions of ‘start-ups’ and ‘scale-ups’ in 2023, intended to promote the emergence and success of EU technology companies. On the one hand, these measures are an attempt to make it easier to create new digital services, through a common EU digital identity and a review of EU competition policy, ensuring that competition instruments (merger control, market definitions and State aid control) are fit-for-purpose in a fair and balanced digital market. On the other hand, the EU will also look at regulating new technologies.

One example is the AI Act that is currently under discussion, in which the EU aims to regulate a technology that is still developing. The objective is to scrutinise certain AI technologies that can be seen as high-risk, such as facial recognition. But some are also of the opinion that AI used in newsrooms, such as bots and computer-created images, should be seen as high-risk AI, as the content may not be trustworthy and therefore in need of heavy regulation.

Promoting a circular economy

The EU’s ambition to be climate-neutral by 2050 remains the guiding policy for the green transition, which has also impacted digital regulation. The EU Commission has adopted a package of proposals for the circular economy, empowering consumers in the green transition and making sustainable products the norm.

A key proposal, the Ecodesign for Sustainable Products Regulation (ESPR), introduces (among other things) a digital product passport for new products, as well as certain requirements for online marketplaces.

The EU will also propose a regulation on Right to Repair this autumn that will require products to be repairable and, as a result, prolong the average product lifetime.

At the same time, there is enormous political pressure to increase product liability of online marketplaces, which could include liability for green claims of products.

Schibsted’s voice

Schibsted is a leading voice and will actively contribute to a regulatory landscape that allows for the innovation of state-of-the-art digital products and services in the Nordic market. We will focus on calling for effective implementation and enforcement of EU regulations, such as the DSA and the DMA, and on ensuring balanced regulation for online marketplaces, as well as the continued ability to use data to create relevant products and services for our users. We will defend editorial independence from external influence and the Nordic self-regulatory system in the media. And we will support efforts to promote sustainable products and circular consumption in Europe.

Petra Wikström
Director of Public Policy
Years in Schibsted: 4


Influencers might need new skills to survive


Social media is fundamentally changing. Algorithms focusing on our interests will make us more passive, and influencers are in for a challenge.

We have now entered the third era of social media algorithms. This new development has major implications for some of the tech world’s leading players and for our personal well-being, and is one of the trends that will affect us most in the coming years.

Time is money

The first era of algorithms was simple by today’s standards. We as users decided for ourselves what interested us and which accounts we would follow. Then posts from those accounts began appearing in our feeds in chronological order.

During the second era, feeds were shuffled so that posts from accounts we already followed were mixed with posts our friends commented on and with posts from accounts resembling the ones we followed.

Now we have entered the third era, where we don’t even need to tell social media what we’re interested in. It doesn’t matter which accounts we follow. Recommendation algorithms are now becoming so accurate that they always give us what we want without us having to actively tell them what that is.
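The three eras described above can be sketched as toy ranking functions (a hypothetical simplification for illustration; real recommender systems are vastly more complex and combine many weighted signals):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: int          # higher = newer
    engagement: float = 0.0 # likes, comments, shares combined

# Era 1: a chronological feed built only from accounts the user follows.
def era1_feed(posts, followed):
    mine = [p for p in posts if p.author in followed]
    return sorted(mine, key=lambda p: p.timestamp, reverse=True)

# Era 2: followed and similar accounts mixed, ranked by engagement signals.
def era2_feed(posts, followed, similar):
    pool = [p for p in posts if p.author in followed | similar]
    return sorted(pool, key=lambda p: p.engagement, reverse=True)

# Era 3: who you follow no longer matters; posts are ranked purely by how
# well their topic matches the interests the platform has inferred for you.
def era3_feed(posts, interest_scores):
    return sorted(posts,
                  key=lambda p: interest_scores.get(p.topic, 0.0),
                  reverse=True)
```

Note how the user's explicit choices (the `followed` set) drop out entirely in the third function: the inferred `interest_scores` alone decide what surfaces.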

Just like before, it’s all about consuming as much user time as possible. Time is money or, to put it more precisely, the more companies can hold our attention, the more advertising they can sell. They earn more money – and therefore create more value – for their shareholders.

This era also goes to show that we ourselves don’t know what we want. The companies can figure that out for themselves and then get us to spend our valuable time on them.

The principle isn’t new, but the amount of money being invested in developing it is. And it’s TikTok that’s leading the way. Its parent company Bytedance spent SEK 163 billion on research and development in 2021 alone. Developing a market-leading algorithm is expensive.

Algorithm development is also transforming how we use social media. Previously, users would interact with friends and acquaintances and share their everyday life with them through photos and status updates. The apps served as extensions of our social lives.

Now the focus has shifted to entertainment. Here, too, TikTok is the one driving the change. On its platform, it’s not who you follow that determines what content you view, but rather the type of content you like. The social function has been peeled away. And its competitors are following suit.

Video is the new gold

For the social media giants, video is the new gold. Instagram and Facebook are fighting to get Reels, their TikTok clone, to take off. YouTube is investing heavily in the very similar Shorts format. It’s about reversing a trend where, for example, Instagram and Facebook owner Meta is seeing its first ever decline in user growth and revenues.

This trend is also redrawing the map for influencers who enjoyed huge success and earned large amounts of money on those platforms. For years they could sit back and enjoy growing audiences and engagement, which guaranteed collaborating advertisers a certain amount of exposure. But what happens when followers cease to be so important?

The fact that TikTok’s algorithm is based on interest rather than audience size means that anyone can go viral. This summer Instagram tried to roll out similar changes in its algorithm but faced fierce pushback from the most established influencers, from Kim Kardashian to Swedish Rebecca Stella. In SvD, one influencer described the new reality on the platform as “Russian roulette”.

Instagram had to admit they were wrong and withdrew the changes, but it’s probably only a matter of time before the new algorithm returns. Instagram simply can’t afford not to keep up with users’ changing behaviour, and has declared Reels the future of the platform.

Influencers need to start over

For many of the leading influencers around the world, this means they will have to start over, learning new tricks and understanding user behaviour on a new platform. Those who built careers on generating engagement by posting nice pictures will suddenly have to learn how to make videos and create a different type of content. Not everyone will survive the transition.

And perhaps that’s the natural process of succession; after all, it’s normal in most industries for new skills to emerge and for old ones to die out, and for companies to change their strategies.

So how are the platforms’ new, advanced recommendation algorithms affecting us users? We’re becoming less active and more passive. We’re using the platforms less and less for keeping in touch with friends and acquaintances. And instead we passively scroll through infinite feeds over which we have no control. One aspect of it is how it makes us feel.

Research on psychological well-being and social media use is still in its infancy, and it’s very difficult to say anything about cause and effect, but there are some indications – and they’re pointing in the same direction. A study conducted in the United States found that individuals who passively consume social media content run a 33% higher risk of developing symptoms of depression, while the same risk for active users is 15%. A study conducted in Iceland on more than 10,000 adolescents found that passive consumption correlated with anxiety and symptoms of depression. The same correlation was not found in active users, even after controlling for other factors.

As already mentioned, the relationship between cause and effect is not easy to establish, but we can be pretty certain that development of the algorithms has more to do with enriching the social media giants’ shareholders than it has with making life better for us users.

Sophia Sinclair
Tech Reporter SvD Näringsliv
Years in Schibsted: 4

Henning Eklund
Tech Reporter SvD Näringsliv
Years in Schibsted: 2

“Algorithms can encourage empathy and connections”

Could AI make us care about the climate? Or will it just bring a flood of auto-generated disinformation? It’s Victor Galaz’s job to find out.

Next year, Routledge will publish Victor Galaz’s book Dark Machines, an essay on the impact of artificial intelligence in a future of climate change. As deputy director and associate professor at Stockholm Resilience Center (and a writer for Svenska Dagbladet), he spends a lot of time pondering resilience and sustainability. Over Zoom, from his home in Stockholm, he explains what makes for a resilient society.

“It’s a society with the capacity to predict, adapt to and recover from shocks. In that process, it also innovates and renews itself. For instance, the war in Ukraine and the pandemic pose huge challenges for global food systems, energy systems and so on. However, we shouldn’t strive to get back to normal from this point, because we need to change these things anyway. Our societies need to evolve.”

How resilient is our society?

“Different societies have different levels of resilience. A country with weak public institutions and little money is always more vulnerable than a country like Sweden. However, one difference between the world today and the world twenty years ago is that we’re much more global and interlinked. A disturbance in one part of the world rapidly spreads to other parts.”

Do we have the resilience needed for future challenges?

“We can never take that for granted. Climate change and loss of biological diversity pose massive challenges. Over time, our drive to optimise and maximise has created huge values for a lot of people. But we have never lived in a time of climate change like this one, and we simply don’t know yet if we can handle it.”

In his upcoming book, Victor Galaz explores how AI is cause for both hope and concern among climate scientists. He talks of a “silent tsunami” of AI seeping into all aspects of our society – more or less unnoticed.

Is AI a threat in itself or is it a matter of who controls it?

“Technologies are not neutral. Some AI systems are explicitly designed to harm us, for instance through surveillance and discrimination of ethnic minorities. That said, it is a matter of control and of fair distribution of the enormous gains these new technologies bring.”

Regulating new technologies is a notoriously tough task. As the British academic and writer David Collingridge once pointed out: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”

The challenge, then, is foreseeing the future. If we fail, AI will bring unintended and unwanted consequences, according to Victor Galaz.

“There are some direct climate effects of AI, such as energy costs, social costs and environmental impacts. We are coming to terms with these. But then there are indirect, long-term effects that are even bigger, and much harder to manage. Take digitalisation of agriculture, for instance. As we use technologies to optimise and maximise food production, we get enormous monocultures, as these are the most efficient, and we see the end of small-scale farming, loss of local job opportunities and more vulnerable ecosystems. And these are just some examples. Another is mass-scale climate disinformation through social media bots.”

If we do solve the problem of control, how can AI contribute to a resilient society?

“In two ways. Firstly, it will give us a better understanding of how our planet is changing, and how dependent we are on it. Secondly, it could help expand our empathy with other people and even with other species. Just as algorithms can exploit negative emotions to drive engagement in social media, they can encourage empathy and connection.”

“These and other emotions are important to bring about change. Just look at the mass appeal of Greta Thunberg. She is sad. She is disappointed. She is angry. These emotions make people care.”

Sam Sundberg
Freelance writer, Svenska Dagbladet

Campanyon makes nature accessible to everyone


The way we travel is changing. During the pandemic the few opportunities left for travel were local and in nature, away from crowds. With tourism now back in full swing, the industry is signalling that this trend is here to stay. And Norwegian start-up Campanyon is at the forefront of it.

With over 10,000 bookable stays across more than 20 countries, the online booking platform Campanyon has already established itself as the leading platform for outdoor stays across the Nordics – only a year after launch. It’s now aiming to strengthen its position across Europe.

Talk to any entrepreneur and they’ll tell you that timing is critical, both for launching and for succeeding with a new business. The same held true for Kristian Qwist Adolphsen and Alexander Raknes, the two founders of Campanyon, when they decided to explore Campanyon as a new business idea in spring 2020.

A passion for sports and the outdoors

The two originally met while studying at Copenhagen Business School, where they quickly became friends due to their shared passion for entrepreneurship, sports and the outdoors. They ended up working together at the digital marketing agency Precis Digital, and eventually, they both joined Google. It was there that the first ideas around Campanyon were formed.

Campanyon’s founders: Aline Nieuwlaat, Werner Huber, Kristian Qwist Adolphsen, Alexander Raknes and Sven Röder.

After being sent home from the Google offices shortly after the Covid pandemic hit, the pair spotted some new and interesting trends emerging across various industries, as a direct result of the lockdown. One of the trends that captured their attention was the increasing appetite for being in nature, as people were longing to escape isolation but were banned from travelling abroad. This resulted in new records for nature-focused and camping-related search terms and overnight stays.

Alexander and Kristian decided to do more research on this budding market and quickly realised it was extremely difficult to both find and book places in nature in a seamless way, mainly because it was a very fragmented market consisting of small platforms with limited supply. At the same time, they couldn’t find any platform in the Nordics that attempted to unlock unused private land for campers to book and stay on.

“It was very clear from early on that the market and appetite for local, authentic, and nature-focused stays was growing. At the same time, there were very few established players offering user-friendly solutions – which we found interesting,” Kristian says.

Being an avid skier, surfer and mountaineer, Alexander could relate to the trend they were observing.

“I, too, had been longing for cheaper and more sustainable options to spend the night in nature, get local tips and meet like-minded people.”

Teamed up with former colleagues

Those insights led to the start of Campanyon in late spring 2020. A few months later, the two teamed up with former colleagues Aline Nieuwlaat, Sven Röder and Werner Huber, all highly experienced in product engineering and UX design, who quickly became Campanyon’s co-founders, too.

Aline was just wrapping up her work on a food app when Alex called her to let her know about the idea for Campanyon, something that immediately resonated with Aline.

“I’m a passionate camper so when Alex called, I was instantly committed to join the journey! Just before that I saw an ad from another player in the market and thought to myself how smart the idea was to offer private land to campers.”

Funnily enough, the five co-founders are based in five different countries. The first time they met in person after they started working on Campanyon was in December 2021 – the day they signed the deal with Schibsted Ventures in Oslo and around one-and-a-half years after they began working together on Campanyon.

Being born out of Covid and having a fully remote setup from day one, the team knew this would come with both opportunities and challenges. They have been fortunate to learn from leading companies, such as Google, on how to approach and adapt to working remotely and they have introduced some of the things that worked well directly into Campanyon, while skipping the things that weren’t quite as efficient.


In the early days, it was clear that too many initiatives were being launched all at once, to make everyone in the organisation comfortable with the new setting of working remotely. This meant almost daily check-in meetings, coffee huddles, shared lunch breaks and other attempts at creating a shared working experience – which to some extent had the opposite effect.

The tech team is the perfect example of Campanyon’s effective teamwork. For Sven, hiring and scaling has been a fantastic challenge and opportunity as the CTO. His team consists of a healthy mix of employees and freelancers from all over Europe.

“We have some incredible talent on board that is motivated to work in an ‘always on’ start-up environment. Open communication and cloud tools that support our development flow allow for rapid iterations of UI/UX and continuous updates of our services.”

Crucial to have local people on the ground

Campanyon has people working from nine different countries now, and nowhere is that more palpable than in the sales team. Kristian sees it as crucial to their success.

“Having local people on the ground across our key markets has been instrumental in growing both supply and demand. The local presence gives us the opportunity to establish relationships with key stakeholders and offer customer service at a different level, something that is particularly important in the Southern European markets we operate in.”

Campanyon has been off to a great start since its launch in 2021. Or as Kristian puts it, they’ve been extremely busy growing.

“Since we launched the platform last year in April, we have grown from around 100 host listings in Denmark and Norway to more than 3,000 host listings across more than 20 markets.”

Alexander, who embodies the companionship that is core to the company’s ethos, visits many of the newly onboarded hosts to get feedback and foster a sense of community.

“I’ve already met a lot of campers and great hosts in unique places, all of whom have stories that I want more people to hear.”

Campanyon experienced a huge appetite for joining the platform early on and has used various channels to create awareness and grow the number of hosts in efficient ways.

“We have also seen a large number of organic signups from hosts in locations we don’t actively target, which is really funny and also inspiring, as we see the project resonates across so many different countries and cultures,” Kristian says.

Going forward, the focus for Cam­panyon will remain on growing in key markets in Europe to further establish their position as the leading platform for stays in nature, while continuing to enhance the user experience to become “campers’ best friend”.

Author Jeremy Sudibyo

Jeremy Sudibyo
Brand & Content, Campanyon
Years in Schibsted: 1

Our sonic attention is worth fighting for

The politics podcast “En runda till” (“One more round”) with Soraya Hashim, My Rohwedder and Lena Mellin is recorded in Aftonbladet’s studio in Stockholm.


While companies around the world are engaged in an intensifying battle for users’ screen time, the rise of audio might be the next frontier in winning user attention.

Fuelled by wireless headset adoption and an ever-growing selection of content made for listening, the audio trend represents a major opportunity for any company that aims to be relevant during all those moments that users are away from their screens.

Although we cannot accurately predict how much total screen time (and news publishers’ share of it) will grow in the coming years, we clearly see that time spent on audio is growing rapidly. Around the world, more and more people listen regularly, and each person listens for a longer period of time.

In Norway, the share of users listening to podcasts per month has nearly doubled, from 24% in 2017 to 43% in 2020, with Norwegian-language podcasts leading the charge. Users aged 16 to 24 show the highest adoption rates, with listeners in this group averaging nearly two hours per day on podcasts or audiobooks. Among Swedish users in general, average time spent on podcasts and radio daily already matches that of digital news consumption.

Different from radio

While audio as a product is nothing new per se, there are many ways in which the current move to audio is different from traditional broadcast radio:

  • It is fuelled partially by hardware adoption, led by AirPods’ exponential growth, having captured more than one-third of the wireless earbuds market. And several other wearable devices have also seen double-digit sales growth over the last few years. A 2022 report estimates that three in four US teens now own AirPods. The convenience of these new devices means people now wear headphones more often and in situations they previously wouldn’t – even while talking to their friends!
  • Our mobile devices are always connected, enabling users to listen to any topic, any time, while doing other things. The ability to multitask is, as one would expect, one of the main reasons users turn to audio in their busy lives.
  • Lastly, the sheer volume of content is growing rapidly, with an entire publishing industry transitioning to audiobooks and all-time-high investments from tech and media companies going into the podcast industry.

As users move to AirPods for consuming content, we also see that several audio-first start-ups have emerged over the past few years. In addition, industry experts talk about wearable audio as the first mass-market adoption of augmented reality devices. For many young users, audio is their primary channel for news. Clearly, publishers who want to stay relevant must find their place in the audio domain.

For news organisations, understanding the opportunity that comes with audio starts with acknowledging how the newspaper landscape has changed. We’ve gone from a world of physically distributed newspapers, where there was little competition and a general scarcity of information, to a world of unlimited digital distribution and global competition for attention. In this world, news organisations are not just competing against each other, but rather against any company distributing their product on a screen. Those other companies include technology giants with massive budgets and a world-class ability to get users addicted to their products.

News attracts users’ attention

We know that tech and streaming giants dominate users’ visual attention, and it seems unlikely that news publishers will turn the tide on that anytime soon. But in the audio world, news as a category gets an outsized share of users’ attention, accounting for 30% of top podcast episodes despite comprising only 7% of podcasts.

However, increasing audio content production for news organisations does not come without its challenges:

  • The cost for voice actors and studio time remains high
  • Recording and editing take several times as long as the length of the actual audio output
  • There’s a risk of spending significant resources on content of low interest

The perishable nature of news limits the types of content that can be produced without becoming outdated as stories evolve. Today, publishers mostly accept that investments in the audio domain are expensive and trust that the effort will be worth it in the long run. But there are also ways that technology can enable production of more audio in smarter ways.

Firstly, the need for studios may soon disappear, as cheaper and more mobile recording setups hit the market. Companies like Nomono (which Schibsted recently invested in) are challenging the existing workflow as well as the costs associated with high-quality podcast production.

Secondly, for narrated articles, we might soon get rid of the need for both studios and narrators entirely as text-to-speech technology matures. A synthetic voice that can read any text input aloud offers some unique advantages. It allows for unlimited production of narrated articles at near-zero marginal cost, converting a written text into audio within seconds. Since it is connected to the publisher’s CMS, it also offers the flexibility to update and edit published stories without ever stepping into a studio. Because it can be scaled across a newspaper’s entire daily article output, users can rely on the feature to listen to any article they prefer, while commuting or cooking at home.

Many users cancel their subscriptions simply because they don’t have time to sit down and read all the articles they pay for every day. Solving this “bad-conscience problem” for subscribers might be a key factor in reducing the churn rates most newspapers are seeing.
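As a rough illustration of that CMS connection, here is a minimal sketch of a hook that re-narrates an article only when its text actually changes. The `synthesize` function is a hypothetical stand-in stub, not any vendor’s real API:

```python
# Minimal sketch of a CMS-to-audio hook. synthesize() is a stub standing in
# for a real text-to-speech service; it just derives fake "audio" bytes.
import hashlib

def synthesize(text: str) -> bytes:
    """Stub TTS engine: returns deterministic fake 'audio' for the text."""
    return hashlib.sha256(text.encode("utf-8")).digest()

class AudioCache:
    """Keeps one narrated file per article, regenerated only on text changes."""
    def __init__(self):
        self._audio = {}     # article_id -> audio bytes
        self._version = {}   # article_id -> fingerprint of last narrated text

    def on_publish_or_edit(self, article_id: str, text: str) -> bytes:
        fingerprint = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if self._version.get(article_id) != fingerprint:
            # Text is new or edited: re-narrate within seconds,
            # no studio or human narrator involved.
            self._audio[article_id] = synthesize(text)
            self._version[article_id] = fingerprint
        return self._audio[article_id]

cache = AudioCache()
first = cache.on_publish_or_edit("a1", "Breaking: talks resume.")
same = cache.on_publish_or_edit("a1", "Breaking: talks resume.")      # cached
updated = cache.on_publish_or_edit("a1", "Update: talks concluded.")  # re-narrated
```

The point of the sketch is the editing flexibility: an updated story gets fresh audio automatically, while unchanged stories cost nothing.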

Listeners complete more of a story

Early results from text-to-speech experiments in Aftenposten show that the gap between human and synthetic voices is closing in terms of listener retention, and that users opting for audio consumption complete more of each story compared to text. Plans for enabling users to save stories for later listening, as well as the ability to queue synthetically narrated articles after premium flagship podcasts, may all lead to more widespread adoption of audio as a mode of news consumption. The result might be a significant increase in the total time users spend engaging with Aftenposten’s journalism each day – read more about it on the next page.

Looking back at the battle for users’ screen time, as described earlier, could it be that by focusing on users’ eyeballs, we miss an emerging behaviour change that may one day account for most of our time? The next frontier in winning user attention might in fact be about sonic attention, and those who make the right investments now may be on a course to become the giants of the audio world.

Author Karl Oskar Teien

Karl Oskar Teien
Director of Product, Schibsted Subscription Newspapers
Years in Schibsted: 8

An AI voice makes news accessible to everyone


Why limit the audio presentation of journalism to podcasts? Aftenposten’s cloned voice will be able to present all the newspaper’s content – and by doing so, give everyone access to the same information.

Today a large part of society is left out when it comes to consuming journalism. It is, in fact, a democratic problem that people are prevented from getting information about society because much content is only accessible as text. It is also a big risk for news companies, which may be missing out on a market opportunity by not offering an audio alternative to the huge amount of written journalism produced every day.

According to Dysleksi Norge, between 5 and 10% of all Norwegians suffer from dyslexia. This means that as many as 270,000 to 540,000 children and adults in Norway are reluctant to consume written journalism. And this is not the only group that struggles with reading: people with attention deficit disorder concentrate better when listening instead of reading, and refugees and asylum seekers who are learning Norwegian find it very helpful to be able to listen and read simultaneously.

Students struggle to read

When Aftenposten started looking into this, we primarily had our newspaper for kids in mind – Aftenposten Junior skole. Since this is a news product for use in public schools, we are obligated to fulfil all accessibility requirements.

We learned from teachers that 92% of them have students who struggle to read in their classroom, and we were even told that schools were not interested in buying our product if we could not offer text-to-speech.

Two important observations and findings from our research also convinced us that adults in the future will have needs quite similar to today’s users of Aftenposten Junior skole.

Firstly, we observed that many kids, beyond those who struggle to read, actively chose to listen to the text. And today’s kids and teenagers are potentially future subscribers who tend to bring their media habits from childhood into adulthood. After observing how popular listening is when given the choice between sound and text, we are pretty sure that we need to have a sound alternative ready for them before they grow up.

Secondly, dyslexia and attention deficit are lifelong problems. This means that people who suffer from it will probably still prefer to listen to a long article instead of reading when they grow up, and they will not find our news products worth paying for unless we can offer more than text-based journalism.

A voice you can recognise

Our primary goal was to make an artificial voice with the highest possible quality. That is why we offer a cloned voice and not a purely synthetic voice. A synthetic voice is an artificial voice that is not meant to sound like a specific, real person. A cloned voice, on the other hand, is created in the same way as a synthetic voice but simulates the speech of a real person. That means that if it is a voice that is familiar to you, you will recognise the voice and may even struggle to understand that it is not a real person but rather an artificial cloned voice that’s reading the news for you.

To build an artificial voice we needed speech data. Speech data in this context is recorded sentences from our newspapers. Using our past articles, our collaborator, BeyondWords, extracted 6,812 phonetically rich utterances. These sentences were recorded by Anne Lindholm, a podcast host in Aftenposten, who is now also the voice behind our cloned voice.

After processing the speech data and training a neural network, the first version of the voice was ready – and it was impressive. Anne herself could not believe how similar it was to her own voice. Still, as with all other AI features, we needed to train it further to improve it. By training, we mean that a person listened to a huge number of sound files converted from articles and reported mistakes.

A linguist from the company that developed the voice technology then made corrections to the phonemic dictionary that serves as the foundation for the quality of the cloned voice. When a mistake is corrected this way, the correction affects all future articles in which the same words occur. Over the last few months, the voice has improved a lot, and we are soon ready to scale up so that you can hear it on many more Aftenposten articles.
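The mechanism is simple to sketch: pronunciations live in one shared dictionary that every article passes through, so a single correction propagates to every future narration. The entries below are purely illustrative, not Aftenposten’s actual dictionary:

```python
# Illustrative sketch of a shared phonemic dictionary. Entries are invented;
# a real system would map words to phoneme sequences, not readable syllables.
PHONEME_DICT = {"Aftenposten": "AF-ten-pos-ten"}

def to_phonemes(article_text: str) -> list[str]:
    """Map each word to its dictionary pronunciation, falling back to spelling."""
    words = article_text.replace(".", "").split()
    return [PHONEME_DICT.get(word, word) for word in words]

before = to_phonemes("Aftenposten covered Ukraina today.")
# A linguist corrects one mispronounced entry...
PHONEME_DICT["Ukraina"] = "u-kra-EE-na"
# ...and every article processed afterwards picks up the fix automatically.
after = to_phonemes("Aftenposten covered Ukraina today.")
```

This is why one correction scales: no individual audio file is patched; the lookup that all future synthesis passes through is.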

Many benefits with a robot

When it comes to the quality of the voice, a real voice still beats the robotic one. But we have run A/B tests between the real voice and the artificial voice, and the results indicate that the quality difference is not very large and that the benefits of a robot voice outweigh the disadvantages.

One of the benefits has to do with the nature of digital news presentation. When a dramatic incident first occurs, like the start of the war in Ukraine, the news gets updated from minute to minute, and it is impossible for a real person to compete with the speed of updating audio files with the cloned voice. Not to mention the cost of having a real person do multiple recordings of an updated article, and the time saved for journalists, who can instead focus on the next news article.

Artificial intelligence and our cloned voice have the potential to be revolutionary and make a hugely positive impact for large groups in our society who now can access journalism they could never access before.

This is why we believe that offering a robot voice based on artificial intelligence is an important bet on the future of journalism. It shows that new technology can contribute to a more open and inclusive society where everyone has access to the same information.

Author Lena Beate Hamborg Pedersen

Lena Beate Hamborg Pedersen
Product Manager, Schibsted Subscription Newspapers
Years in Schibsted: 3

Tech Trends 2023

Tech trends in short 2023


If we look beyond the metaverse – what other tech trends will affect our lives in one, or five years? We gathered some of them, with help from Schibsted News Media experts.

Techlash 2.0

In the last few years, we’ve seen more and more regulation hitting the tech giants and big corporations in the EU, the UK and the US. The regulations put in place by the EU are expected to eventually be copied by the US and the UK, and there will likely be more laws put in place to hold Big Tech accountable.


Bye, bye passwords!

Endless lists of passwords and password managers will soon be replaced by biometric passkeys: FIDO authentication credentials that provide “passwordless” sign-ins to online services. Already widely used by Apple (think fingerprint scanning and Face ID), the technology will likely be adopted by more companies, using biometric data to create safer login processes and making password leaks a thing of the past.


Social shopping becomes mainstream

Shopping hauls and unboxings have been a social media tradition for years on Instagram and YouTube, but TikTok – and its Chinese counterpart Douyin – have taken the phenomenon into the mainstream. Businesses, from clothing brands to restaurants, are livestreaming to engage with viewers, and they are seeing increased revenues from the social shopping aspect.

Service fragmentation will grow

We’re already seeing the streaming world become severely fragmented, with new services announced all the time. Though the giants may still have the lead, the competition is growing fiercer and the consumer has more choices than ever. We will likely see these developments in other spaces as well, as social media is well on its way and new apps for podcasts are fighting for the users’ attention, too. As users become more interested in niche platforms and products, the fragmenting of our digital services will follow.


The war for tech talent

The war for talent isn’t news at this point, but tech talent is an especially sought-after commodity worldwide. New ways of working and the ability to demand more from employers will have tech workers picking and choosing, while companies work to improve their offerings, whether at the office or remotely.


Our time is value

We’re seeing an increase in the fight for users’ time, not necessarily their money. For publishers and social media, attention and usage are becoming far more important in the long run, as exemplified by Netflix’s choice to introduce a cheaper subscription tier that comes with advertising. Of course, this is not a new phenomenon in the publishing industry, where advertising-based versus subscription-based revenue has been the question for decades. The fact that a user’s time is considered more valuable is becoming common knowledge, and we’ll likely see that mirrored in more companies’ business models in the future.


Your home will be even smarter

Apple’s development of Nearby Interaction will likely spur similar new features from other companies. Nearby Interaction allows Apple users to connect to other devices and accessories depending on their location. The recently announced Background Sessions would let users operate their accessories hands-free. For example, you could set your music to turn on when you enter your home or a specific room, or trigger other actions on connected accessories. This type of technology will probably grow more popular soon, making your smart home even smarter.


Vertical video is winning

TikTok keeps winning ground over other social media platforms, and the rest are left scrambling to keep up. The vertical video format will likely keep gaining in popularity, whether in short- or long-form. And vertical video is expected to be used in other formats as well, with its potential to make news products stronger as publishers work to engage with the medium. Many social media-forward publishers already have large teams in place for Instagram and TikTok, and there is no question that others will follow suit.

Regulation will pave the way for the future of crypto


The crypto winter has made values sink drastically. But Karina Rothoff Brix, from the crypto service Firi, is certain that the crypto phenomenon will become a natural part of our trading culture and system – at least once regulations are put in place.

To some people, crypto is just the latest attempt to reinvent the way we exchange money and goods. And looking back at history, the evolution of money has always moved towards more convenience and easier transactions. But crypto is so much more. Some even define it as the next revolution – not only for money but for the entire trading culture and system in our society.

The decline we see is, in my view, a normal part of the market cycles that influence the perceived and traded value of crypto. But the value behind the crypto projects is increasing as innovation continues. Adoption is here – look no further than the number of ATMs worldwide where crypto is easy to purchase, or the growing number of both private and public organisations that accept crypto as payment or as remuneration.

So, how did the industry grow from small crypto “nerd” projects to its current state, consisting of more than 13,000 different cryptos and an asset that you can pay your taxes with if you live in the state of Colorado, or purchase gas with when driving in Australia?

Several attempts before Bitcoin

We often hear that the story of crypto dates back to 2008, when the most well-known and oldest crypto of all was released with a whitepaper – Bitcoin. But there were several attempts to define e-money or digital currency before Bitcoin was invented or described.

It all began in the early days of the internet, when David Chaum, in 1982, wrote a dissertation called Computer systems established, maintained, and trusted by mutually suspicious groups. At the time, Chaum was a graduate student at Berkeley, and his dissertation is the first known description of a blockchain protocol. His work laid the foundation for the crypto and blockchains we know today, driven by his motivation to protect the privacy of individuals – a privacy he feared early on that governments would not be able to ensure on the internet.

David Chaum founded a company called Digicash, Inc. in 1989. His company attempted to release an e-currency called E-cash but failed and was then sold in 1995. The world was simply not ready for the technology – as the first online payment from a credit card was made in the early 1990s.

On its way

But the phenomenon was on its way. One of the first worldwide digital currencies was created in 1996. It was called E-gold and was backed by gold. Transactions were irreversible, and approximately five million users were registered. But E-gold was quickly adopted by criminals who saw it as a safe haven, as regulation was lacking. Soon the currency was banned by the US government.

One of the first companies to succeed in offering a fast and paper-free transaction method using the internet was PayPal. Both PayPal and E-gold are like crypto in the sense that they use the internet to make transactions. But there’s one thing that is completely different. To simplify – cryptos are decentralised and both PayPal and E-gold transactions were controlled by a central unit.

A paper that made a difference

A milestone in the crypto story came in 1997, when a researcher from the US National Security Agency (NSA) published a paper called How to Make a Mint: The Cryptography of Anonymous Electronic Cash. It described a decentralised network and payment system.

The concept described in the NSA paper was further developed by two researchers in 1998. Nick Szabo created what he called Bitgold, which introduced the concept of smart contracts to the system. Wei Dai wrote B-money, an anonymous distributed electronic cash system, which described the fundamentals of all the crypto systems we know today. Nick Szabo later helped the founders of Bitcoin code the system based on his findings, and Wei Dai’s work was also cited in the Bitcoin paper. Today, the smallest unit of Ethereum (ETH) is called a wei.

A more secure system

But it wasn’t until October 2008 that Bitcoin became the first operating cryptocurrency, after adding blockchain technology. This was in the middle of the financial crisis, and some say the ambition was to create a more secure and sustainable system that could not be manipulated by centralised entities. With a fixed amount of Bitcoin being produced, the mission was also to protect against inflation.

The Bitcoin vision was published by Satoshi Nakamoto, and it described a purely peer-to-peer version of electronic cash that would allow online payments to be sent directly from one party to another without involving a financial institution. The idea was to change the protocols the financial institutions were built on and transfer funds instantly, anonymously, and without middleman fees or governmental surveillance and control. In January 2009, the first block of the Bitcoin blockchain, known as the genesis block, was mined.
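The core blockchain idea can be sketched in a few lines: each block commits to the hash of the previous one, so any tampering with history is immediately detectable. This toy chain is an illustration only, far simpler than Bitcoin’s actual protocol (it has no mining, signatures or network):

```python
# A toy hash chain illustrating the core blockchain idea: every block stores
# the previous block's hash, so rewriting history breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

genesis = make_block("genesis", "0" * 64)
block1 = make_block("Alice pays Bob 1 coin", block_hash(genesis))
block2 = make_block("Bob pays Carol 1 coin", block_hash(block1))

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    return all(chain[i + 1]["prev_hash"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

valid = chain_is_valid([genesis, block1, block2])         # intact chain
genesis["data"] = "genesis (tampered)"
still_valid = chain_is_valid([genesis, block1, block2])   # tampering detected
```

This is what lets a decentralised network agree on history without a central unit: anyone can recompute the hashes and spot a manipulated ledger.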

The first real purchase with Bitcoin was made on May 22, 2010. The pizzas bought that day became historic because until that point, Bitcoin did not have a monetary value – it had only been transferred between peers, mostly for fun.

The creator disappeared

Satoshi Nakamoto was nominated for the Nobel Prize in Economics in 2015, but he disappeared shortly after creating Bitcoin. No one has yet been able to identify who is really behind the paper or who Satoshi Nakamoto is. Before disappearing, Satoshi Nakamoto chose a software engineer to oversee the building of Bitcoin’s original code. His name is Gavin Andresen, and he later founded the non-profit organisation The Bitcoin Foundation. As with Ethereum and its honouring of Wei, the smallest part of a Bitcoin that can be sent is called a satoshi.

In the years that followed Bitcoin’s entrance on the market, the usage spread, but not only to legitimate businesses. Once again, governments had to shut down several illegal websites. The idea that crypto is only for criminals is a sticky myth for the industry to rid itself of, and the need for more detailed regulation is growing.

With a market capitalisation of more than USD 3 trillion at its peak in 2021, the crypto industry is becoming an asset that our society needs to handle and interact with.

Legal tender in two countries

Two countries have made Bitcoin their legal tender. In El Salvador, Bitcoin has been the national currency since September 2021, along with the US dollar. Every citizen in El Salvador has a digital wallet with BTC in it, and it is mandatory for all merchants to accept BTC as payment.

The small Central African Republic also voted to make BTC legal tender in late summer 2022, alongside the CFA franc. Many people, primarily in African states, are “unbanked”, and crypto payments give them access to trade and the basic service of securing their money and receiving payments for goods.

Close to 90 other countries are in the process of deciding the role of cryptos in their jurisdiction.

Regulation in place 2024

Retail crypto investors are also increasing in numbers. In 2021, 8% of American households had invested in crypto; in the Nordics it was between 11 and 15%. The growth is expected to increase with global adoption, along with the EU crypto regulation that is expected to be in place in 2024. With this regulation, institutional money is expected to be a significant part of the growth for the crypto industry going forward.

Another powerful driver for adoption is Web3 – the next generation of internet. Web3 is expected to be largely built on blockchains, meaning crypto would have an essential role as a digital asset – not only for transaction of payments. In essence, Web3 provides all industries with new virtual markets where the technologies enable people to interact and transfer ownership in virtual settings, seamlessly and conveniently.

The pure digital presence, the virtual interaction, and the gaming habits of younger generations in Western countries show us how owning digital items and being part of virtual events is perceived to be just as real and as valuable for this generation as experiences and assets in the physical world.

This, combined with technology, talent attraction and funding in this space, lays the groundwork for the innovative and disruptive businesses of the future.

The definition of crypto

The use of the word “currencies” when talking about crypto can be misleading, because crypto is much more than currencies. A general definition of crypto is: “Digital assets on a blockchain that can be traded, utilised as a medium of exchange and used as a store of value”. The use of each crypto can vary and be coded to enable different – or multiple – things:

  • A security token where the token holder owns a part of the entity that has issued the token.
  • A utility token where the token grants an option or a right to the token holder.
  • A commodity token where the token represents ownership of another digital or physical asset.
  • A governance token where the token represents the token holders right to vote or in another manner be part of the governance in a project.

Karina Rothoff Brix
Country Manager Denmark, Firi
Years in Schibsted: Almost one

Stuck in the world of big tech

It’s been a rough few years for a handful of US tech companies, due to a seemingly endless stream of scandals and harsh criticism from politicians on both sides of the Atlantic. The result? “Big Tech” is bigger than ever. But what if they have only started to flex their muscles?

Several executives reacted with shock, according to the people in the room. The proposal meant crossing a line, unleashing a hitherto unused weapon. The code name was “Project Amplify”, and it was a new strategy that social media behemoth Facebook hatched in a meeting in early 2021, as reported by the New York Times. The mission: to improve Facebook’s image by using the site’s News Feed function to promote stories that put the company in a positive light.

The potential impact is enormous. News Feed is Facebook’s most valuable asset. It’s the algorithm that decides what is shown to users when they log in to the site. In essence, it is the “window to the world” for their users, who, totalling nearly three billion, constitute more than a third of all humans on planet Earth.

“Truth” is now the same as “what makes Facebook look good”

For many years Facebook founder Mark Zuckerberg defended the company’s policy on free speech with the mantra that the social network should not be the “arbiter of truth” online, i.e., they would not censor content that people posted. Critics would say that Facebook has been doing this all along, letting its algorithms prioritise what is presented to users. What shows up in the News Feed is what people perceive as important, a form of personal truth for every individual. “Project Amplify” would mean something entirely new. By actively promoting positive news stories about the company, “truth” is now the same as “what makes Facebook look good”.

Silicon Valley veteran and social media critic Jaron Lanier has referred to the major social media networks as “gigantic manipulation machines”, possessing the power to alter emotions and political views among billions of people by pulling digital levers. Now Facebook has decided to use its machine for its own purposes.

We will return to the implications of this, but first, it is important to understand why Facebook and Mark Zuckerberg would want to do this. It is no bold assertion to say that Facebook’s public image is in acute need of a facelift. The company has been plagued by scandals for years. In 2018, it was revealed that the company Cambridge Analytica had harvested data from 87 million Facebook users, data that had been used in Donald Trump’s presidential campaign. The revelation not only tarnished Facebook’s reputation, but it also had real financial consequences. When the story broke in March 2018, Facebook’s stock tanked. In July the same year, Facebook announced that growth had slowed down due to the scandal. The stock fell 20 percent in one day. In a few months, 200 billion USD of the company’s market capitalisation was wiped out.

Facebook’s reaction can be summarised as follows: we are sorry and promise to do better. This has been repeated every time new, negative stories about the company emerge, such as the spread of disinformation, the negative impact Facebook’s product has on the mental health of young people, and how the network was used to instigate genocide in Myanmar, among other things.

If the stock market is a reliable gauge of the future, and it often is, the conclusion is clear: these companies are untouchable

But behind the many apologies it seems Facebook has continued with business as usual. In September 2021, the Wall Street Journal published “The Facebook Files”, a damning investigation showing that the company, including Mark Zuckerberg, was very aware of the harm the platform was causing. The company’s own researchers identified problems in report after report, but the company chose not to fix them, despite public vows to do so.

From the company’s perspective, its strategy has been a success. Advertising revenues have continued to rise and in autumn 2021, Facebook’s stock market value broke one trillion USD, double what it was before Cambridge Analytica.

The same can be said of the other tech giants. Companies including Google, Amazon and Apple have been in the crosshairs of public debate for years, both for alleged abuse of their dominant market positions and for the negative effect their products and business models can have on people and society.

But if the stock market is a reliable gauge of the future, and it often is, the conclusion is clear: these companies are untouchable. Despite a storm of criticism, court cases and billion-dollar fines, stocks have continued to propel ever upwards. How is this possible? Let’s start with breaking down the different ways Big Tech dominates the world today.

When discussing this topic, parallels are often drawn to the influential corporations of the late 1800s and early 1900s – Standard Oil, for example. These comparisons are misleading. Standard Oil and its owner John D. Rockefeller could never have dreamed of the amount of power that rests in the hands of the Silicon Valley titans of the 2020s.

The new economic superpower

In 2010, the total market cap of Apple, Google, Microsoft, Facebook and Amazon was more than 700 billion USD, equivalent to the GDP of the Netherlands. The ascent had been amazingly fast; at this point Amazon was 16 years old, Google twelve and Facebook only six. In autumn 2021, their combined value had reached 9,500 billion USD, more than the GDP of Japan and Germany combined. The total annual revenue for these five corporations is north of one trillion dollars, more than the defence budgets of the USA, China and Russia combined.

The market superpowers

Facebook owns four of the five largest social media networks in the world. Google, owner of the second largest (YouTube), has a 92 percent market share in search. Apple’s and Google’s operating systems, iOS and Android, control 99 percent of the global smartphone market outside of China. Apple takes in 65 percent of the global revenue on mobile apps, and Amazon has 50 percent of the e-commerce market in the US, as well as 32 percent of the global market for cloud services, followed by Microsoft. The list goes on. This not only creates huge profits but also an enormous asset in the form of the 21st century’s most valuable commodity: data.

The innovation superpowers

Up to 50 percent of the venture capital raised by start-ups circles back to Google and Facebook in the form of advertising, almost as a “tax on innovation”. If new, competing services emerge, Big Tech can either try to buy them or launch competing products. Their head start in terms of resources and user base makes it extremely difficult – if not impossible – to pose a real threat.

The perception superpowers

Twenty years after the 9/11 terrorist attacks, one in 16 Americans believe the US government knew about the attacks and let them happen. Conspiracy theories and disinformation have become the new normal, and research has shown social media plays an important role. What Google and Facebook choose to allow, or not allow, on their platforms shapes our view of the world. In 2012, Facebook conducted an experiment among 700,000 users to see if their states of mind could be altered by changes in News Feed. The answer was yes.

The infrastructure superpowers

In December 2020, Google went down, meaning users could not access Gmail, Google Docs or YouTube. Although the outage only lasted 45 minutes, it made headlines all over the globe. The same thing happened to Facebook in October 2021. As an expert told CNN: “For many people Facebook is the same as internet”. After the 2008 financial crisis it became clear that a small number of banks were “systemically important”. The same is now true for Big Tech. Serious disruptions in their services would quickly have severe and costly consequences.

The political superpowers

Big Tech has surpassed Big Oil as the biggest spender on lobbying in Washington D.C., with an increase in spending from 20 to 124 million USD between 2010 and 2020. In the election cycle of 2020, a total of 2.1 billion USD was spent on political ads on Facebook and Google. In a manifesto published in 2017, Mark Zuckerberg noted that in elections across the world “the candidate with the largest and most engaged following on Facebook usually wins”. In other words: use us or you lose.

The capital markets superpowers

It could be argued that the stock market has become the most important gauge for global decision-makers. It takes decades to make them do anything to combat climate change, but if stocks drop dramatically, decisive action from politicians and central banks is delivered within days or even hours. This was last seen in early 2020, when fears of the economic impact of the pandemic brought the Dow Jones down by 13 percent in one day. The direction of the stock market is, in turn, increasingly intertwined with that of Big Tech. Apple, Google, Facebook, Amazon and Microsoft constitute a quarter of the S&P 500 index.

The AI superpowers

“Dark patterns”. That is what scientists call the tricks that digital companies deploy to manipulate users. Sometimes the purpose can be quite trivial, like making people sign up for a newsletter or share their email. The point is that you as a user do something that you didn’t intend to do. With artificial intelligence these tools become more and more powerful and potentially deceptive. The more data an AI-algorithm can use to train on, the more effective it becomes. This places Big Tech in a unique position to use these techniques. The problem with this is best summed up by Meredith Whittaker, a former Google engineer and now head of the AI Now Institute at New York University: “You never see these companies picking ethics over revenue.”

In all the ways mentioned above, the power of Big Tech is growing every day. It is important to say that not everybody thinks this is a problem. However, there seems to be a consensus among democratically elected leaders in both the US and Europe that the influence of these companies must be reined in. The US and the European Union recently agreed to take a more unified approach to regulating big technology firms. In fact, even the people who work in the industry share this view. In a survey of 1,578 tech employees conducted by Protocol, 78 percent said that Big Tech is too powerful.

So, what can be done? A variety of options are already on the table, from forcing companies to break up to altering laws that give social media companies a free pass compared to traditional media. If the New York Times publishes hate speech, it is liable; when Facebook does the same, it is not. In the US, this legislation is referred to as “Section 230”, and there is a debate around whether to change it. At the same time, numerous lawsuits have been filed around the world against the Big Tech companies on anti-trust issues. The stock market has sent the message that the idea that any of these measures could seriously harm these companies is simply unfounded. And that view could very well be justified. There are several reasons why Big Tech titans can sleep well at night. Let’s run through some of them.

Breaking up is almost impossible

The businesses of Big Tech are deeply interconnected. It would take years of litigation to make such a decision a reality. With 300 billion dollars of annual profits, the legal coffers of Silicon Valley are limitless.

Fines would have to be astronomical to make a difference

Between 2017 and 2019, the EU slammed Google with a total of eight billion USD in fines. That is less than seven percent of the company’s pre-tax profit during those three years. As the stock market often regards fines as a “one-off”, it is not clear if even larger fines would hurt the market cap at all.

Overly drastic measures could trigger a stock crash

In theory, politicians could of course make new laws that severely hurt Big Tech. This would very likely lead to a correction in their stock prices, which in turn would weigh heavily on the start-up ecosystem and the economy at large. To have voters lose trillions of dollars, or even worse their jobs, is not a price any politician is willing to pay.

The companies could fight back

This is the most underestimated scenario of all. What if Google eliminated negative news stories about Google from searches, or they monitored Gmail and Google Docs to stop whistle-blowers or investigative reporters?

What if Facebook took down the accounts of politicians who are critical of Big Tech? What if YouTube only recommended documentaries that showed how fantastic Silicon Valley is for humanity?

This might all seem rather dystopian, but the question must be asked. After all, anything that can be done with technology tends to be done. With Facebook’s “Project Amplify”, this is already inching towards reality. Most importantly, what could anyone do to prevent this? The answer is: nothing.

As things stand now, Facebook and Google are controlled by Mark Zuckerberg and Larry Page/Sergey Brin, who own more than 50 percent of the voting power. An American president can be thrown out of office, but no one can sack Mark Zuckerberg. And the reality is that Big Tech can use the power of their platforms for more or less any purpose they please. As Facebook whistle-blower Frances Haugen told CBS’s 60 Minutes:

“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook, and Facebook over and over again chose to optimise for its own interests, like making more money.”

To satisfy Wall Street, Big Tech-giants must deliver constant growth and more profits every year

Here we arrive at the crux of the problem. Silicon Valley’s algorithms govern the world, but these giants are in turn governed by an even more powerful algorithm: the paradigm that is called shareholder value.
To satisfy Wall Street, Big Tech-giants must deliver constant growth and more profits every year. And in the choice between ethics and profit, the answer is, more often than not, profits.

Silicon Valley author and entrepreneur Tim O’Reilly has called Big Tech “slaves under a super-AI that has gone rogue” – meaning the financial markets.
Breaking out of this cycle is easier said than done. Companies like Apple, Google and Facebook use their stock to pay their employees, which means they are highly dependent on stock prices rising.

But bad ethics also runs the risk of alienating these same employees. Internal protests have rocked Google, Amazon, and Microsoft in recent years.

Hurt society or hurt the stock price? Lose staff over scandals or over bad pay? These are the dilemmas that the most powerful companies in history face. Whether Big Tech has really become too-big-to-stop remains to be seen. Ultimately the power rests with you. Without the billions of daily users, Silicon Valley’s influence amounts to exactly zero.

So, if your kids or grandkids one day ask how a few individuals acquired so much wealth and power, the answer is simple: we gave it to them.

Andreas Cervenka
Columnist, Aftonbladet
Years in Schibsted: 10 years at SvD (2007– 2017), at Aftonbladet from December 2021.

Enter the Metaverse

Facebook is hiring 10,000 people to work on it. The Metaverse is no longer science fiction – some say it’s the next Internet.

Throughout the last few decades, much of what we previously considered science fiction has become more science than fiction. We may not have flying cars yet but asking your fridge to create a grocery list is an everyday occurrence. And – as has been stated so many times before – our precious smartphones have more processing power than NASA did when it sent Neil Armstrong and Buzz Aldrin to the moon. So it shouldn’t come as a surprise to us when terms like “the metaverse” are no longer constrained to The Matrix and Ready Player One. But before we look at how the metaverse will impact our future, let’s understand it.

A futuristic virtual world

The term “metaverse” was coined by author Neal Stephenson in his 1992 novel “Snow Crash”. He used the term to refer to a 3D virtual world inhabited by avatars of real people. The term comes from “meta” (beyond) and “verse” (from universe), and it is typically used to describe a futuristic virtual world on the internet in some form. Still, the term, as it pertains to an actual, real-world phenomenon, doesn’t really have a universally accepted definition.

Venture capitalist and author of “The Metaverse Primer”, Matthew Ball, writes that “The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’”. This is to say, it’s a technology that will completely change how we operate in the world but also take a long time to develop based on many different, secondary innovations and inventions.

Since the metaverse is not a completely developed term with a clear definition, it’s tricky to pin down, but I’ll give it my best shot. Currently, as far as we can define it, the metaverse looks like a successor to the internet in which its users are more tangibly connected to the virtual experiences taking place there. That could be via everything from voice interfaces to VR headsets and haptic wearables.

We’re already unlocking our phones via facial recognition and buying digital art via non-fungible tokens.

In the long run, it is believed that all of these experiences will be connected and collected in the metaverse, just like the internet is the collection of a vast universe of 2D websites. The Verge has broken down the parts of the metaverse that most excite the tech industry right now, things like “real-time 3D computer graphics and personalised avatars,” and “a variety of person-to-person social interactions that are less competitive and goal-oriented than stereotypical games.”

Previously mentioned author Matthew Ball believes that the metaverse will have as big an impact on our daily lives as the electricity revolution and the mobile internet. It’s also an iteration of the same kind – the internet (and its iterative version, the mobile internet) couldn’t have happened without the electricity revolution, and the metaverse couldn’t happen without the internet.

Ball writes that the “metaverse iterates further by placing everyone inside an ‘embodied’, or ‘virtual’ or ‘3D’ version of the internet and on a nearly unending basis. In other words, we will constantly be ‘within’ the internet, rather than have access to it, and within the billions of interconnected computers around us, rather than occasionally reach for them, and alongside all other users in real time”. Facebook CEO Mark Zuckerberg, who is a big proponent of the metaverse, has described it as “an embodied internet”.

10,000 new workers

His plan is to shift Facebook from a social media company to a metaverse company, and he’s already put this plan in motion. The company will change its name – the social network will keep Facebook but the parent company will be renamed, much like Google’s approach with its parent company Alphabet. This could be to avoid the less than stellar reputation linked to Facebook, but it also symbolises the shift in course. Zuckerberg plans to employ 10,000 new workers in Europe to help the company build this metaverse.

Now, that all sounds very sci-fi. But it’s really just the next step in the development of things we already take for granted. We’re already unlocking our phones via facial recognition and buying digital art via non-fungible tokens. Admittedly, it feels far more far-fetched that we’ll be having our team meetings remotely via VR setups, sitting on our couches at home while appearing at a conference table in a virtual space with our colleagues.

Independent tech analyst Benedict Evans has compared the current discourse around the metaverse to “standing in front of a whiteboard in the early 1990s and writing words like interactive TV, hypertext, broadband, AOL, multimedia, and maybe video and games, and then drawing a box around them all and labelling the box ‘information superhighway’”. Essentially, we’re merging tech and concepts such as augmented reality, virtual reality, mixed reality, gaming, cryptocurrencies, non-fungible tokens and more, under one umbrella.

Connectivity is the key difference

I’ve started thinking about the theory of the metaverse like electricity. The idea is that we’ll be using it so naturally in our daily lives, when swapping our glasses for AR glasses with interactive screen overlays, that we don’t even think about it unless it goes down. As I mentioned before, we already use an extended internet, in which our physical bodies, voices and gestures are connected to our devices (think face scans, voice assistants, and VR). The metaverse is the extension of this. Perhaps we’ll all have virtual avatars representing ourselves online, for which we buy virtual designer clothes and virtual bespoke art pieces with non-fungible tokens.

The key difference between that and what we have today is connectivity – between ourselves and the virtual world, and between the many different existing virtual worlds. Especially following the giant disruptor that is the Covid-19 pandemic, the need to physically engage with other people is huge, but so is the need to connect with people over vast distances in just a matter of seconds. The metaverse could be a merging of the two needs.

The major challenge with this, of course, is that not everyone has access to the tech fuelling this development – a lot of people don’t even have access to a reliable internet connection. Just as the smartphone accelerated the mobile internet, other personal devices will be the driving forces of the hypothetical metaverse. The smartphone is relatively accessible for the majority of people today, but that wasn’t the case when ideas about a mobile internet arose. Just like all other technical developments throughout history, the metaverse will surely develop into something completely different than how we first imagine it now. But until then, I’ll look forward to meeting your avatar form in my virtual palace filled with non-tangible art in the near future.

Camilla Buch
Advisor Editorial Content
Years in Schibsted: 1.5

A superposition to change computing

The quantum computers that exist today typically look like large chandeliers, hanging from the roofs of science labs.

Is quantum computing the next major revolution in computer science or will it remain a dream scenario for the foreseeable future? Sven Størmer Thaulow, EVP Chief Data and Technology Officer, looks into an area that is still surrounded by myths.

The fields of quantum mechanics and quantum computing are difficult to understand, even for people who have studied them at university level. But what are they, and what makes their application in computer science so interesting?

First we need to take a step back. Data processing traditionally operates via a digital computer language; everything – images, sounds, text, etc – is broken down into 1s and 0s. When I write “S” using my keyboard, it is represented as “01010011” – the ASCII character code in binary format. This is done by feeding current into eight “transistors” in a processor (or chip), with different voltage levels representing the binary states of “1” or “0”. The computer reads this and displays “S” on my screen.
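To make that mapping concrete, here is a minimal sketch – in Python rather than at the voltage level, of course – of how a character becomes the eight-bit pattern described above:

```python
# Convert a character to the 8-bit binary form of its ASCII code,
# mirroring the "S" -> 01010011 example above.
def char_to_binary(ch: str) -> str:
    """Return the character's ASCII code as an 8-bit binary string."""
    return format(ord(ch), "08b")  # ord("S") is 83, formatted in binary

print(char_to_binary("S"))  # 01010011
```

The eight digits correspond to the eight transistors mentioned above, each holding a “1” or a “0”.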

Packing transistors

In data processing, building more powerful computers has largely been a matter of packing as many transistors as possible into a chip and getting the clock frequency (the speed at which the computer computes) as high as possible. Many will be familiar with Moore’s Law describing the increase in processing power. It states that the number of transistors on a chip will double every other year. It’s hotly debated, but for some years now, many have claimed that Moore’s Law will soon be dead and that we have reached the limit for how many transistors can be packed into a chip. We’re currently down to three nanometres between transistors, with the standard distance on an iPhone chip being five nanometres. Attempts to remedy this are being made by designing processors in 3D and through other techniques.
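Moore’s Law as stated above – a doubling every other year – can be sketched as a quick back-of-the-envelope calculation. The starting figure below is a hypothetical round number, not any real chip:

```python
# Moore's Law sketch: transistor count doubles every two years.
def transistors_after(start: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count forward under Moore's Law."""
    return start * 2 ** (years // doubling_period)

# A hypothetical chip with one billion transistors, ten years on:
print(transistors_after(1_000_000_000, 10))  # 32000000000
```

Five doublings in a decade turn one billion transistors into 32 billion, which is why even a slowing of this pace is such a big deal for the industry.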

However, increased computing power is not just about the number of transistors on a single chip; today we buy vast amounts of computing power in the cloud and no longer have to rely on having our own computers in-house. This means that we can all easily access vast resources to solve computing problems precisely when needed, no more, no less.

Machine learning behind the demand

Demand for such computing power has grown especially rapidly due to the need to train machine learning algorithms on large datasets. These algorithms try to find an optimum in a system with a large number of dimensions (for example, housing prices) – a big mathematical problem. Just imagine how many variables influence the price of a house. The bigger and more complex the optimisation task, the greater the need for computing capacity – a need that will be difficult to meet using conventional data processing techniques. Even today, tasks already exist that are so complex that running them on even the world’s biggest computer cluster is inconceivable. This is where quantum computing is emerging as a promising technology.

Quantum computing is about using quantum mechanics – the theory of how things interact at small scales – to create a computer that is insanely faster at solving certain problems than a conventional binary computer. A quantum computer does not have bits (no 0s or 1s) but rather qubits, i.e. bits with more states than just 0 or 1. Qubits draw on two properties that distinguish them from regular bits: first, they can be put into a “superposition state” that represents both 1 and 0 simultaneously. Second, multiple qubits can be entangled, meaning the states of paired qubits become correlated, so that measuring one instantly determines the state of the other.

Another important difference between quantum computers and conventional processors is how computing power is scaled. To double the computing power in a conventional processor, you essentially need to double the number of transistors. In a quantum computer, the computing power is doubled with every additional qubit. This means that the computing power in a quantum computer grows exponentially as the processor is scaled.
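A rough way to see this exponential scaling: an n-qubit register is described by 2^n amplitudes, so each added qubit doubles the size of the state the machine works with. A minimal sketch (a counting exercise, not a quantum simulator):

```python
# Each additional qubit doubles the number of amplitudes needed to
# describe the register's state - hence the exponential growth.
def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes describing an n-qubit state."""
    return 2 ** n_qubits

for n in (1, 2, 10, 50):
    print(n, state_vector_size(n))
# At 50 qubits there are already 2**50 (about 1.1e15) amplitudes -
# far more than a conventional machine can store explicitly.
```

This is also why simulating even modest quantum computers on classical hardware quickly becomes infeasible.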

Combined, this enables quantum computers to perform multiple computing operations simultaneously, churning their way through computations which today’s biggest supercomputers would take thousands of years to complete.

This sounds incredible, but what stage have we reached in the development of quantum computing?

More than 40 years old

Well, the theory of quantum computing is more than 40 years old; in 1981 the American physicist and Nobel laureate Richard Feynman said: “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy”.

In many ways it can look as if we’ve reached the same point in solving that problem as the internet had in the early 1990s. Most work is currently being run in labs, though industry is beginning to grasp its potential. Big Tech companies (such as Google and IBM) have launched separate research programmes. Venture capital firms are investing in quantum startups. The first exclusively quantum companies have gone public. National authorities are investing strategically in the defence sector, among others, after having financed basic research over several decades.

Yet we’re still lagging behind when it comes to application. We’ve not yet reached the point of “quantum advantage”, at which a quantum computer can solve a problem faster than a computer using conventional data processing. Researchers expect the first case of quantum advantage will be achieved some time in the next three to five years.

The aim of quantum computers is to perform computations that no conventional computers can realistically manage. A major task that lies ahead will be to explore their applications. And to do this we need to think differently. New computational strategies must be developed to take full advantage of these totally new devices. The mathematics and the algorithms underlying the tasks to be performed will be fundamentally different.

Easy to miss the mark

Researchers and innovators often miss the mark when getting to grips with new innovations: Thomas Edison thought that the phonograph he invented would be used primarily for language learning; the developers behind text messaging thought it would primarily be used by courier companies to notify their customers of parcel deliveries. So what do we think quantum computers will be used for? Three likely areas stand out:

  • Large-scale optimisation problems where the task is to maximise an output based on an inconceivably vast number of variables simultaneously. Some practical examples of application are in the transport sector, for finding optimal routes, or in finance for optimising profit based on a seemingly endless list of constraints and variables.
  • Classification tasks using machine learning. A typical example of a classification task involves putting labels on images, for example: “dog”, “tree” and “street”. Quantum computing has proven to be more efficient at performing complex classification tasks.
  • Simulation of nature, such as in molecular modelling. Modelling anything other than the most basic chemical reactions is mathematically impossible for conventional computers, but with a quantum computer this may be doable. Development of medicines and batteries are two practical examples of potential areas of application.

A supplement

The key point here is that when or if quantum computers become commercially available, they will serve as a supplement to conventional data processing. State authorities, hyperscalers (Big Tech companies) and large universities are expected to be the early adopters of quantum computers, because these machines will probably need to operate at extremely low temperatures in dedicated facilities. So the number of quantum computers will likely be small initially – that is, given today’s technological constraints.

That said, quantum computers will more extensively be offered as a cloud-based service, on par with conventional data resources, and be made available using much simpler user interfaces (high-level programming language) than those we have today, where developers in reality need to understand quantum mechanics in order to program the machines.

So what does this mean for Schibsted? We will monitor developments, but will probably wait a few years before we start experimenting with the technology – and when that day comes, we will do it using cloud-based quantum computing.

Sven Størmer Thaulow
EVP Chief Data and Technology Officer
Years in Schibsted: 2.5

A Polish hub changed the way we work

The pandemic changed our way of working. But for many Schibsted companies, the road to a distributed work style started in Krakow ten years ago when the tech hub in Poland was established.

It was a biting cold January day in 2012. A group of Polish and Norwegian product and tech people were gathered in a conference room in Andels Hotel in central Krakow, Poland. The atmosphere was tense as the group tried to get to know each other. Only one topic was on the agenda: how do we work together as one team that’s located in both Krakow and Oslo?

Little did we know then that ten years later Schibsted Tech Polska would grow to more than 250 employees and become an indispensable part of Schibsted’s product and tech organisation.

But this is also a story about how teams from different Schibsted brands started working together for the first time – and how the first distributed development teams were created.

Back in 2011, working from different locations was unimaginable for most teams in Schibsted. Practically all software engineers were based in the same locations as the brands they supported. Video meetings were equally rare. But with the Poland hub, an early form of hybrid work emerged, where all teams were distributed.

Collaborating across cultures

Almost no teams operating out of the Poland hub have all their people in Poland alone, which is why video conferences and digital collaboration tools have been key to collaboration since the beginning. Another important aspect has been collaboration across cultures and brands – and building a hub that attracts tech talents.

“In Schibsted, the tech hub in Poland is unique because it is a pure tech company. Basically, all the employees are software engineers. This, together with the Scandinavian work culture, is how we can attract the best talents,” says Konrad Pietrzkiewicz, who joined in 2012, and is today part of the Schibsted Tech Polska management team.

Schibsted Tech Polska grew out of Media Norway Digital, a joint product development unit for Schibsted’s subscription-based newspapers in Norway.

In 2011, the urgency of planning for a digital transition was weighing heavily on the top management in the media houses. This led to a flow of ambitious digital product plans.

With some 20 employees at the time, Media Norway Digital struggled to respond quickly enough. Many more developers were needed. But they were almost impossible to recruit in Norway.

A radical decision was made: We would have to look abroad for our next colleagues.

Several cities were considered. In the end, Krakow was chosen because of its proximity to Scandinavia and the many technical universities producing a steady flow of young software engineers.

The new company was established at record speed in the autumn of 2011. Within a few months offices had been rented, furniture from Ikea assembled, and the first two teams recruited. Video conferencing equipment was shipped from Norway and the new colleagues met for the first time.

An even bigger nut to crack was how to develop mutual trust between the new Norwegian and Polish colleagues. Many were sceptical at first, especially since many other companies had failed in setting up this kind of distributed team structure.

Starting with the meeting in Andels Hotel, ground rules were set to establish the right culture:

  • We would always talk about being one team – across countries. Always say “us”, never “they” and “we”.
  • The standard of offices, laptops, video conferencing systems and other equipment would be at least as high in Krakow as in Oslo, if not higher.
  • The teams were encouraged to meet physically and often, to get to know each other on a personal level and to make sure that everyone had the same understanding of the problems to solve.
  • All teams had to meet every day on video. This rule was relaxed later, but in the beginning it was crucial to put in place a culture of frequent communication.

The first offices only had space for 30 people. But in only a few weeks, VG and Aftonbladet asked if they could also set up teams in Krakow. Already by May 2012, the company had completely revised its strategy. A new and bigger office space was leased. It would now be a tech hub welcoming all Schibsted brands.

But to join, the brands had to promise to collaborate. There would be open and free knowledge sharing across the brand teams, and all code produced in Krakow could be freely reused by other Schibsted companies that also had teams in Poland.

The new offices soon looked like a United Nations of Schibsted, with brand logos filling up the walls: VG, Aftonbladet, Aftenposten, Stavanger Aftenblad, Bergens Tidende, Finn, Distribution Innovation and more.

Almost every month new brands joined, and within a year there were close to 100 employees.

Like a cool small start-up

“We had been told several thousand people worked for Schibsted. But the reality is that we felt like we all worked in a cool and small start-up. It was fast-paced, informal and everyone knew each other,” Konrad reflects.

He was recruited as team leader for VG in Krakow in May 2012. VG had decided to give one of its most important projects to the Krakow team. The project was to build a new web-TV platform.

“I had worked for a Polish media company before. This was my chance to step up the game with an international media group,” Konrad explains.

Konrad and his team embraced the VG culture immediately. And as with the other teams, they were eager to prove they could be trusted with the projects they had been given.

“I would say it was a healthy competition between the brands. In many ways we inherited the culture from the newspapers in Scandinavia, and we were all eager to demonstrate that our team was the most innovative,” he says.

Konrad’s team was also the first to develop a product that would be scaled and used by other media houses. Today all Schibsted’s news brands use the streaming platform developed in Krakow.

“I am really proud of that. And it was exciting. We really had to push our limits to make it happen.”

Informal atmosphere

Over time the new colleagues learned to work well together, despite cultural differences. Many Polish developers came from companies with a more hierarchical structure than the laidback Scandinavian work culture. They were surprised by the informal atmosphere, and especially the friendly tone between managers and employees. The Scandinavians, on the other hand, were taken aback by the ambition, dedication and high competence level of their new colleagues.

In 2014, a second office was opened in Gdansk.

This site became a base for teams working with Schibsted’s brands within ventures and financial services. Today more than 250 people work for Schibsted Tech Polska.

Konrad Pietrzkiewicz is now part of the management team. He is still responsible for Schibsted’s streaming solutions, and his team is one of the most long-standing development teams in Schibsted.

He has stayed on for almost ten years for two reasons.

“First I strongly believe in video in media – and love taking part in developing the best solutions. But equally important: I am passionate about integrity in the news. In Schibsted I can combine these two.”

Schibsted Tech Polska

  • Established in October 2011 as a development unit in Krakow, Poland, working with Media Norway Digital.
  • In the spring of 2012 it was changed to a tech hub welcoming teams from all Schibsted brands.
  • Today Schibsted Tech Polska has around 250 employees in Krakow and Gdansk.
  • Currently has software engineering teams working for all business areas in Schibsted as well as the central Data & Tech unit.

John Einar Sandvand
Communications Manager for Product & Tech
Years in Schibsted: 28

Meet our People: Unlocking the Schibsted universe

With Schibsted Account, Ida Kristine Norddal and her team have a powerful tool to explain what Schibsted offers, Armin Catovic is developing contextual advertising, and Ralph Benton is making sure Schibsted is safe. Get to know some of our people.

Unlocking the Schibsted universe

With more than three million users logged in every day, Schibsted Account is a player you can count on. The service is used to log in to Schibsted’s newspapers, marketplaces and other digital services, and it has become the way that most end users engage with the Schibsted brand.

Ida Kristine Norddal’s product team and related engineering and UX teams, all within the User Foundation unit, are growing rapidly to keep up with user needs and to further develop the service and experience.

“We want to step up our game, to make sure the users understand who Schibsted is and that they can trust us with their information,” she explains.

Schibsted Account’s most obvious task is to enable Schibsted users to prove who they are and thereby gain access to our products and services; it also enables subscription offers. But the potential and future ambitions point towards so much more – Schibsted Account is on its way to becoming an important key to the Schibsted universe.

“We also want users to be able to discover and explore all the different things that we offer”, adds Ida Kristine.

She describes this as Schibsted’s hidden treasure – news, buying and selling, ordering breakfast, finding a handyman and a lot more – services that the users can access with their Schibsted account.

“Our ambition is to simplify the whole user experience, making it easy to go between our different services, using the same log-in and, when needed, the same account information”, says Ida Kristine.

No doubt, her team has a lot to do – and many treasures yet to reveal. In between, they also communicate with all these users: every month they send out more than three million emails – another reason why Schibsted Account is the Schibsted brand’s most important ambassador.

Ida Kristine Norddal
Product Lead, Schibsted Account
Years in Schibsted: Almost 4



Armin Catovic

Serving ads without using tracking

Data is what makes Armin Catovic tick. It’s also one of the reasons why he joined Schibsted – in addition to the fact that he now can use it to do good.

His team has developed a contextual advertising product, which uses data models to identify specific content in articles on Schibsted’s news sites.

“This makes it possible to offer ad segments to our customers, based on news content alone. We don’t need to use cookies or tracking”, Armin explains.

More specifically, it means that advertisers can pinpoint certain keywords to which they want to be connected. The data models find those words in articles on Schibsted’s news sites, and an ad can be placed there.
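
The matching step can be sketched as a toy keyword lookup. This is a deliberate simplification – Schibsted’s actual data models go far beyond string matching – and all article texts, campaign names and keywords below are invented:

```python
ARTICLES = {
    "a1": "Tour de France riders faced brutal climbs in the Alps today",
    "a2": "New study shows commuting by bicycle cuts emissions sharply",
    "a3": "Stock markets rallied after the central bank announcement",
}

# Each advertiser pinpoints the keywords they want to be connected to.
CAMPAIGNS = {
    "bike-shop": {"bicycle", "cycling", "tour de france"},
    "broker": {"stock", "shares", "interest rate"},
}

def match_campaigns(article_text, campaigns):
    """Return the campaigns whose keywords appear in the article text."""
    text = article_text.lower()
    return sorted(
        cid for cid, keywords in campaigns.items()
        if any(kw in text for kw in keywords)
    )

for aid, text in ARTICLES.items():
    print(aid, match_campaigns(text, CAMPAIGNS))
```

A real system would also handle the broader contexts Armin describes – for example mapping climate-change stories to a cycling campaign – which requires semantic models rather than literal keyword hits.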

Recently, Armin and the team developed a more advanced model that can identify broader contexts.

“For instance, bicycle retailers would typically want to be connected to cycling or Tour de France. Now they can have a wider perspective and choose to be seen in stories about climate change, since bikers often cycle to reduce their environmental impact.”

And the advertisers are happy about it.

“Feedback from both our own product specialists and the advertisers has been very positive, and we’ve now reached a monthly revenue target of 1 million SEK/NOK.”

Armin Catovic
Senior Data Scientist & Tech Lead
Years in Schibsted: Almost 1

Ralph Benton

“It’s about protecting society”

The number of cyber-attacks has increased dramatically in the last five to ten years. Today everyone is a target.
“Like many others, we really need to improve in this area”, says Ralph Benton.

As CISO, he has initiated a cyber security program to improve information and IT security, strengthen how Schibsted detects and responds to cyber attacks, and educate all employees on security risks.

“It’s about protecting yourself, your colleagues, Schibsted – and in the end the whole society.”

That last point is particularly valid for a company with media outlets. Beyond attacks where someone tries to steal customer information, or ransomware attacks where someone brings a site or service down, news sites in the fake news era also face the risk that their content might be manipulated.

As an individual, you should protect your digital identity.

“When possible, use multifactor authentication – like BankID or Okta. When that’s not possible, use strong and unique passwords, and do not use the same password everywhere”, urges Ralph.

The good news is that in Schibsted we have already learned a lot.
“When we sent out the last fake phishing attack, a lot of employees reported it to the IT Service Desk and that is really good”.

Ralph Benton
Chief Information Security Officer
Years in Schibsted: 2.5

Reading minds and saving lives

We are entering a new era of medicine, where AI will revolutionise healthcare and treatment. It might also lead to more empathetic doctors.

Pancho was 20 years old when he was paralysed and lost his ability to speak. He was a Mexican-born amateur football player and a field worker in the California vineyards, until one summer Sunday after a match, he was in a car crash. After surgery, he suffered a stroke and his life changed.

When he woke up from his coma, he tried to move. He couldn’t. He tried to talk and discovered he couldn’t form a word. Then he started to cry, but he couldn’t make a sound.

At that point he wished he hadn’t come back from the coma at all, he recently told a New York Times reporter. Pancho felt like his life was over.

Today Pancho can speak again

Fifteen years later, his life would change again, when a group of American scientists implanted a chip with 128 electrodes into his brain, plugged a cable into his skull and trained deep learning software to read his mind. Today, Pancho can speak again. When he thinks of words, the software decodes the signals from his brain and his words are spoken in a gravelly, synthetic voice.

Interpreting his brain signals and translating them into words is only possible thanks to an artificial neural network interacting with Pancho’s organic one. The AI software was trained on his thoughts during 50 sessions in which Pancho was trying to speak words, until it recognised the patterns of brain signals that corresponded to certain words. (Thanks to his new ability to speak, the patient has expressed that Pancho is the nickname he prefers, to protect his privacy.)

Pancho’s AI-assisted rehabilitation may seem like the stuff of science fiction. It is all the more remarkable for being real, and part of an AI revolution in medicine and health care.

The AI research field was conceived at Dartmouth College in 1956, a year when James Dean ruled the silver screen and Elvis Presley had his first number one hit, Heartbreak Hotel. Since then, AI has had its ups and downs, exaggerated expectations followed by troughs of disillusionment. The term “AI winter” was coined specifically to denote periods of disenchantment, when investment in AI dried up and the pace of progress slowed.

Since Pancho’s accident in 2003, however, AI has made great strides thanks to the advent of artificial neural networks. The theory had been established decades earlier: the idea that software networks mimicking the arrangement of neurons in the human brain would be an important step in the evolution of thinking computers. But it wasn’t until the 2000s that the idea bore fruit, when processor speeds, storage capacity and data availability finally reached a tipping point at which it became feasible to train large neural networks on vast amounts of data, quickly leading to improvements in image recognition, automatic translation and other domains.

Notably, AI systems have defeated humans at games like Go, Dota and Jeopardy. Social media corporations have used machine learning to find the best methods to keep us refreshing our feeds in endless loops of distraction, and self-driving cars are inching ever closer to arriving on our streets. But only in the last few years has medical AI started taking off in a big way. And if you think hooking a cable to Pancho’s skull and reading his mind is something of a miracle, well, that’s just the tip of the iceberg.

Alphabet’s AI research company DeepMind is perhaps best known to the public for creating AlphaGo, the Go-playing AI that runs circles around human Go masters. But their work on AlphaGo has turned out to be useful in another domain as well. In 2020, DeepMind entered their tweaked AI, AlphaFold, in a contest to solve the problem of protein folding. To nobody’s surprise, AlphaFold crushed the competition.

Mapped out the proteins of the coronavirus

Biologists have hailed AlphaFold’s success as the first time AI has solved a significant scientific problem. An improved understanding of protein folding is already significantly reducing development timelines for drugs and vaccines. For instance, during the Covid-19 pandemic, AlphaFold’s predictions were used to map out the proteins of the virus.

Currently, AlphaFold is being used to help develop new drugs for tropical diseases. It has already helped scientists find a safe drug for treating sleeping sickness, replacing a previous drug that is highly toxic.

Other areas where AI is currently being used include:

Medical imaging

Interpretation of visual data is key in fields such as dermatology and radiology, and fertile ground for machine learning algorithms. Tech companies, hospitals and universities around the world have developed machine learning algorithms that surpass human experts at detection of various skin conditions, cancers and other abnormalities. While human specialists can be highly skilled at examining rashes and reading x-ray plates, they require years of training and they are a scarce resource. AI speeds up the process, makes these skills more accessible and can, in some cases, detect anomalies that human doctors cannot.

Health records analysis

Electronic health records create new opportunities for machine learning algorithms to find previously unknown patterns in symptoms, diagnoses, medications and treatment effects. Researchers in Uruguay have developed a system that analyses the text a doctor enters in a health record and pulls up similar cases that may contain valuable insights. Spanish researchers have created a system that analyses a patient’s health record, and predicts risks for diseases based on their historical data along with data from their family’s health records. As health record data grows at a rapid pace, human physicians cannot keep up, but they can learn from the analysis performed by AI systems.
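
The similar-case idea can be sketched with bag-of-words cosine similarity over record text. The research systems described above use far richer NLP than this; the case identifiers and symptom texts below are invented for illustration:

```python
import math
from collections import Counter

# Invented free-text snippets from electronic health records.
RECORDS = {
    "case-1": "persistent dry cough fever fatigue",
    "case-2": "fever rash joint pain",
    "case-3": "dry cough shortness of breath fatigue",
}

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similar_cases(query_text, records):
    """Rank stored cases by similarity to the doctor's new entry."""
    q = Counter(query_text.lower().split())
    scored = [(cosine(q, Counter(t.lower().split())), cid)
              for cid, t in records.items()]
    return [cid for score, cid in sorted(scored, reverse=True) if score > 0]

print(similar_cases("dry cough and fatigue", RECORDS))  # ['case-1', 'case-3']
```

Case-2 shares no symptoms with the query and is filtered out; the two cough-and-fatigue cases are surfaced first, which is the behaviour the retrieval systems aim for at scale.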


Diagnostics

Symbolic AI was first tailored to diagnose disease in the 1970s, using rules-based decision trees to create “expert systems” that assist professionals in their decision making. Recent methods instead rely on artificial neural networks and have proven useful in diagnosing a vast range of diseases, using techniques such as image recognition, symptom evaluation and natural language processing.

Disease control

On December 31, 2019, the Canadian medtech company BlueDot alerted its clients about a new flu-like virus spreading in Wuhan, China. They were a week ahead of the World Health Organisation and the US Centers for Disease Control and Prevention, thanks to their use of machine learning algorithms that were processing 100,000 news items and disease reports in 65 languages every day. AI-supported systems such as the BlueDot platform can monitor the spread of new diseases, but also predict their future trajectory based on data such as travel patterns, hospital admittance, historical data from previous epidemics and mathematical models.

Drug interactions

For patients taking a large number of medications, drug interactions can be potentially harmful as well as hard to track by individual physicians. Spanish and Chinese researchers have shown that machine learning algorithms can scan databases, medical literature and electronic health records and find patterns indicating adverse reactions to drug combinations.
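
The underlying pattern-finding can be sketched as flagging drug pairs whose adverse-event rate stands out across many reports. This is a toy version – real pharmacovigilance uses proper statistical methods on far messier data – and the drugs and reports below are invented:

```python
from collections import defaultdict
from itertools import combinations

# Invented reports: (drugs taken together, adverse event observed?)
REPORTS = [
    ({"a", "b"}, True), ({"a", "b"}, True), ({"a", "b"}, False),
    ({"a", "c"}, False), ({"a", "c"}, False),
    ({"b", "c"}, False), ({"b", "c"}, True),
]

def flag_pairs(reports, threshold=0.5):
    """Flag drug pairs whose adverse-event rate exceeds `threshold`."""
    stats = defaultdict(lambda: [0, 0])      # pair -> [adverse count, total]
    for drugs, adverse in reports:
        for pair in combinations(sorted(drugs), 2):
            stats[pair][0] += adverse        # True counts as 1
            stats[pair][1] += 1
    return sorted(p for p, (adv, tot) in stats.items() if adv / tot > threshold)

print(flag_pairs(REPORTS))                   # only the (a, b) combination stands out
```

Here the a+b combination shows adverse events in two of three reports and gets flagged, while the other pairs stay at or below the baseline – a miniature of what the algorithms do across whole databases and literature corpora.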

Drug discovery

AI has proven its ability to speed up drug discovery dramatically. It is useful in several ways, such as predicting the 3D structure of target proteins, predicting drug-protein interactions and toxicity risks, designing bispecific drug molecules and assessing drug activity. While DeepMind has turned its attention to neglected tropical diseases, British medtech start-up Exscientia and Japanese pharmaceutical firm Sumitomo Dainippon Pharma used AI to create a treatment for obsessive-compulsive disorder. Development took only twelve months (compared to the expected timeline of five years) and the drug is now in human trials. Many other research teams around the world are currently using AI to find new drugs, and Andrew Hopkins, CEO of Exscientia, recently predicted that all new drugs will be created with the help of AI by 2030.

Neural prosthetics

With machine learning, Swiss researchers have trained algorithms to interpret the muscle signals of an amputee, to control a prosthetic hand. The task is complex because muscle signals are noisy and traditional approaches have yielded clumsy results. Machine learning changes the game, and the AI-powered prosthetic is more precise and quicker to react than previous prostheses. Advances in this field are bearing fruit in many research labs and Pancho’s new ability to talk is another example of this approach, called a speech neuroprosthesis.

As Pancho’s example and all the ongoing research demonstrate, we are entering a new era of medicine where machines can read the signals of our bodies, unfold amino acid molecules, replace lost abilities and tailor drugs to the protein structure of specific viruses. There will be disputes over privacy and security, as AI systems crave massive amounts of data to further increase their capabilities. But the technology is powerful, and the stakes are literally life and death. As evidence of AI’s potential continues to grow, odds are that privacy advocates will be fighting a losing battle. After all, no government would prioritise privacy over a cure for cancer or prevention of the next pandemic.

Human doctors can relate

Will friendly and competent doctors be replaced then, by robodocs with superior diagnostic skills and knowledge of drug interactions? Well, not so fast. If there is one domain where humans are irreplaceable, it’s emotional work. No algorithm will feel genuinely sympathetic about your knee pain. If you want cold, hard facts, go see Dr Alphacure. If you want someone who can relate to your problems, go see someone made of the same stuff as you: flesh, blood, bone.

Sometimes, you see your doctor just to get your prescription renewed. But often, when you go because of a new ailment or injury, you are not only hoping for an intelligent analysis of your symptoms. You want to be taken care of, by someone who understands – from the inside – what it is to have your body starting to fail. You want someone who knows what it is like to wake up at 4 a.m., chest pounding, fearing that your expanding mole might be a mark of death. In time, AI could be taught to simulate these emotional skills. But fake empathy, in the end, is just manipulation.

While human expertise can be replaced by AI, empathy cannot. But as AI unburdens medical practitioners from many of the mechanical tasks of the profession, physicians may have greater opportunity to develop their human-to-human skills. In this way, medtech AI will not only revolutionise drug research, give voice to the speechless and improve diagnostic accuracy. It may just lead to more empathetic doctors, too.

Sam Sundberg
Freelance writer, Svenska Dagbladet
Years in Schibsted: 2.5