Gaza conflict: Instagram changes algorithm after alleged bias

Instagram is changing the way it showcases content after being accused of suppressing pro-Palestinian messages during the recent Gaza conflict.

The app had favoured original content in its “stories” feature over existing, re-shared posts, but will now give both equal treatment, it said.

The current system had a “bigger impact than expected” on some types of posts.

But that was an unintended side-effect rather than an attempt to censor any particular viewpoint, Instagram said.

During the recent Gaza conflict, social media platforms were heavily used to spread messages of support on both sides.

Many pro-Palestinian messages were among those widely re-shared – which means they would have received less prominence than original posts under the current system.

A company spokeswoman said the logic for prioritising original posts was that most Instagram users had more stories to follow than time to check them – and the company believed people were “more interested in original stories from their closest friends”.

“It’s also caused people to believe we were suppressing stories about particular topics or points of view. We want to be really clear – this isn’t the case,” she said.

“This applied to any post that’s re-shared in stories, no matter what it’s about.”

Instagram has seen an increase in how many people are re-sharing posts in general, the platform said – and is now acknowledging that posts not “getting the reach people expect them to” is “not a good experience”.

The spokeswoman also said the company had seen this issue over a long period of time, and it was not solely a reaction to the recent controversy.

Bias allegations
The change follows a number of weeks in which some users – and employees – questioned how Facebook handled posts about the Gaza conflict on its own site and on apps it owns such as Instagram.

BuzzFeed News reported on internal strife about the way Palestinian-related content often had warnings attached, while the Financial Times reported that a group of up to 50 employees had been involved in raising concerns about supposed suppression of pro-Palestinian content.

Many of the issues are believed to have been caused by large-scale automated moderation, rather than deliberate attempts by individuals to restrict content, reports said.

Instagram said the newly announced shift towards equal weighting of original posts and re-shared stories would happen over time, rather than being an instant change.

“We still think people want to see more original stories, so we’re looking at other ways to focus stories on original content through things like new creative tools,” it added.

Tesla cameras will monitor driver awareness

A new software update for Tesla cars appears to include monitoring of drivers through the car’s internal cameras when Autopilot is in use.

The “self-driving” feature requires drivers to pay attention at all times, but has been criticised as easy to fool.

Users have been able to activate the assist feature, leave the driver’s seat, and film themselves doing so.

But the new feature will detect how attentive the driver actually is.

Tesla’s cars have relied on sensors in the wheel to make sure the driver’s hands remain on it.

Some other car manufacturers have used internal sensors to observe where a driver’s eyes are looking. That means the vehicle can slow down or switch off automated driving features if the driver starts looking at their phone, for example.

The update, reported by several media outlets, reveals that the camera above the car’s rear-view mirror will start performing a similar function.

It “can now detect and alert driver inattentiveness while Autopilot is engaged,” the release notes say.

But Tesla also said that “camera data does not leave the car itself” and no data is transmitted unless data-sharing is enabled in the car’s settings, which can be changed by the user at any time.

Tesla, which disbanded its press team last year, has not issued a statement about the reasons behind the new feature.

But it comes after several recent stories about the company’s so-called “Autopilot” and “full self-driving” features, which are more accurately described as driver-assistance features.

In April, one of the most influential consumer magazines in the United States claimed that Tesla cars could easily be tricked into operating without a driver.

Safety ratings row
That report came days after a fatal crash in Texas, which police initially said happened when no-one was in the driver’s seat – although this is now disputed.

And the new feature coincides with Tesla losing safety endorsements from Consumer Reports – the same magazine which investigated tricking the Autopilot system – and a major insurance group.

Earlier this week, Tesla announced that its Model 3 and Model Y vehicles sold in the US would no longer include radar sensors, but would instead use camera-based systems and more advanced software.

Tesla itself said that its move away from radar would mean some key features would be “temporarily limited or inactive”, and would be “restored” in the coming weeks through software updates.

The Insurance Institute for Highway Safety in the US told news agencies it plans to remove its “Top Safety Pick Plus” label for those cars due to the missing radar, while Consumer Reports will also pull its “Top Pick” award for both.

It follows the US National Highway Traffic Safety Administration – an official public body – removing marks on its website for safety features such as forward collision warning and automatic emergency braking.

Consumer Reports’ Jake Fisher said: “It is extremely rare for an automaker to remove safety features from a vehicle during a production run, even temporarily, but this isn’t the first time that Tesla has done this.”

The magazine pointed to a 2016 issue in which some Model X cars were sold without the automatic emergency braking feature, which took months to fix.

NatWest launches ‘urgent’ cryptocurrency scam alert

NatWest mobile app users are being directed to a warning screen advising them to beware of cryptocurrency scams.

The bank said it had received a “record number” of reports about such scams between January and March 2021.

The alert warns account holders to make sure they have direct control of any digital wallet that is set up to handle transactions, and to beware of promises of big profits.

One common scam involves fake celebrity endorsement, it said.

Typically, potential investors are prompted to fill in an online contact form and are then encouraged over the phone to set up a cryptocurrency wallet. Unknown to them, the scammer installs remote-access software on their device, granting the criminal access to the wallet as well.

The victim is then persuaded over time to invest ever larger amounts – until the criminal empties the wallet.

Another ruse is the “get rich quick” cryptocurrency investment opportunity, the bank added.

“We have prevented millions of pounds from being sent to crypto-criminals who are exploiting the high levels of interest in the currency. However, consumers should always be alert, especially to the use of fake websites and bogus celebrity endorsements,” said Jason Costain, NatWest’s head of fraud prevention.

The Financial Conduct Authority has a searchable list of unauthorised firms and individuals offering financial services.

People using these will not be protected by the UK’s financial authorities if something goes wrong, the FCA warns.

‘All your money’
The app alert, which NatWest said was an urgent reminder, tells customers:

a “trader” getting in touch with promises of big profits and assistance in setting you up on a scheme “is a scam”
if you can’t access your own crypto-wallet or you did not set it up yourself, you should cease all payments to it
make sure any cryptocurrency seller you want to use is registered with the Financial Conduct Authority
It warns that without following the advice, “You could lose all of your money.”

Cryptocurrencies are notoriously volatile in value. This month the price of Bitcoin alone has fluctuated by up to 40% – creating some big winners and losers in the process. Cryptocurrencies are not regulated by any financial authority.

There has been a flurry of advertising around crypto-investments on social media, and on Wednesday the Advertising Standards Authority banned an “irresponsible” billboard ad for a cryptocurrency exchange which appeared on London transport.

In the US, cryptocurrency scammers pretending to be Tesla boss Elon Musk made more than $2m (£1.4m) in six months, consumer-protection officials said in May.

Instagram lets users hide likes to reduce social media pressure

Instagram is offering its users the option to hide the number of likes they receive on posts on the app.

The aim is to “depressurize people’s experience” on the platform, the social media giant said.

Users with the feature enabled will now see the username of a follower who has liked the post, “and others”, instead of a number.

The tool has been tested in several countries since 2019, but it is now being rolled out globally.

“This has taken longer than I had hoped, but I am pretty excited about… giving people more control over the experience,” Instagram’s boss Adam Mosseri told the BBC.

Mental health
Instagram said its testing and research found that removing likes had little impact on behaviour or wellbeing, after concerns that using the platform could be linked to insecurity and poor mental health.

Despite this, Mr Mosseri said Instagram – which is owned by Facebook – introduced the feature to make “people feel good about the time they spend” on the platform.

“I do think there’s more to do in this space,” he added. “The more we can give people the ability to shape Instagram and Facebook into what’s good for them, the better.”

How to activate it
The feature can be switched on or off at any time. To change it:

Go to Settings
Enter the new Posts section
Select Hide Like and View Counts
Even if a user has Like Counts enabled, they will not be able to see the number of likes on accounts or posts that have hidden them.

Users will also have the option to hide counts on a specific post, before and after it goes live on the platform.

‘Less worried’
“The spirit of this is to give people a choice,” Mr Mosseri added, using the example of going through a break-up in a relationship or switching schools.

“Maybe you want to be a little bit less worried about how many likes everyone’s getting for a couple of weeks or a couple of months, and then maybe you want to switch back.

“If it’s a one-way door, people tend to get hesitant about using the control.”

Instagram’s algorithm will still take the number of likes into account when promoting posts on the platform, but it also takes into consideration other factors, such as what the user follows or engages with.

Mr Mosseri said there had been a “polarised” reaction from creators – accounts which make money through brand partnerships and advertising on the platform – but that the new feature didn’t affect revenues.

Instagram for children
Earlier this year, concerns were raised over leaked plans to design a version of Instagram for children.

Plans to create an Instagram for under-13s were not “fully fledged”, Mr Mosseri said, explaining that it was difficult to verify ages, as children often didn’t have IDs.

“It has to be more responsible to give parents oversight and transparency than to have kids continue to lie about their age.”

He said the app would “take some time” to create.

Russia threatens to slow down Google over ‘banned content’

Russia’s media watchdog has threatened to slow down the speed of Google if it fails to delete what it calls “unlawful content.”

Roskomnadzor has given Google 24 hours to remove videos it says relate to drugs, violence and extremism.

Google – which owns YouTube – could be fined between 800,000 and 4 million roubles (£7,700 – £38,000) by the regulator.

The tech firm said it often requires court decisions to react to requests.

Roskomnadzor sent more than 26,000 notices to Google to delete what it called “illegal information,” the watchdog said in a statement reported by state-run news agency TASS.

The statement also accused Google of restricting YouTube access to Russian media outlets, including RT and Sputnik, and supporting “illegal protest activity.”

Google said it receives requests from different government organisations across the world and the laws of each country vary.

The company often responds once a court decision has been reached. But it also said it was important to maintain YouTube as an open platform for various kinds of views.

Internet traffic
If Google does not act, the watchdog said it could also slow down internet speeds for users in Russia trying to access Google.

The state has already used these powers in March, to restrict access to Twitter after Roskomnadzor said the platform had failed to remove around 3,000 posts.

Internet service providers in Russia can limit or block the flow of data to websites, making connections slower when accessing certain pages.

YouTube
Google is currently suing Roskomnadzor over demands that it removes content, according to court documents seen by Reuters.

The case involves twelve YouTube videos that Roskomnadzor says encouraged minors to join unsanctioned protests in January, in support of jailed Kremlin critic Alexei Navalny.

Mr Navalny has more than 6.5 million subscribers on YouTube, and regularly posts videos on the platform voicing his opposition to President Vladimir Putin and the Russian government.

A hearing has been scheduled for 14 July.

Charlie Bit My Finger video to be taken off YouTube after selling for £500,000

As one of the original viral videos, the Charlie Bit My Finger clip is a little piece of internet history.

But now the much-loved clip of baby Charlie gnawing on his brother Harry’s finger will be taken off YouTube after it was sold for $760,999 (£538,000).

The Davies-Carr family auctioned the clip as an NFT, a non-fungible token.

Bids came in on the auction page throughout the weekend, but the price rose dramatically in its final hours on Sunday.

The bidding battle was between two anonymous accounts.

“3fmusic” eventually outbid “mememaster” for the video, which has been watched more than 880 million times since it was put on YouTube in 2007.

The clip had been due to be removed from the video sharing platform on 23 May, following the auction – but at the moment it’s still there.

An NFT is like a certificate to say that you own something digital. It means original versions of viral videos, memes or tweets can be sold as if they were art.

It’s a lucrative business for those who own viral clips.

The “Disaster Girl Meme” – a picture of a young girl smiling with a fire in the background – was recently sold as an NFT for $473,000 (£341,000).

Sunday’s spending spree means the anonymous winning bidder will become the owner of the Charlie Bit My Finger clip.

But it also gives them a chance to create some follow-up content.

The auction page says the NFT winner will be given the opportunity to “recreate a hilarious modern-day rendition of the classic clip” that will feature “the original stars, Harry and Charlie”.

Now that Harry and Charlie are aged 17 and 15, it is the right time for them to “embrace the next iteration of the internet”, the site adds.

The video was uploaded to YouTube by Harry and Charlie’s dad, Howard, in 2007, because he couldn’t email it to their godparents in America.

The family website says the clip was filmed as “a part of catching random moments as the boys were growing up”, and that it “unintentionally went viral”.

They haven’t said what they’re planning on doing with the money.

Sasha Johnson: Black Lives Matter activist critical after shooting, her political party says

Prominent Black Lives Matter activist Sasha Johnson is in a critical condition after being shot, her political party says.

Taking the Initiative Party said she was being treated in intensive care after being shot in the head in the early hours of Sunday morning.

The BBC understands the incident happened in south London.

The Metropolitan Police said a 27-year-old woman was shot shortly before 03:00 at a gathering in Southwark.

The force has not confirmed the woman’s identity.

Officers said the woman was taken to a south London hospital with life-threatening injuries and have appealed for witnesses. Her family have been informed.

Police said at this stage there was no evidence to suggest it was a targeted shooting or that she had received any credible threats against her prior to the incident.

Detectives from the Met’s Specialist Crime Command have been conducting enquiries at the scene in Consort Road and the surrounding area, and are pursuing a number of lines of enquiry.

It is believed that the shooting occurred near a house where a party was taking place and that a number of people may have been in the area, a Met police statement said.

Family supported
Detective Chief Inspector Jimi Tele said: “This was a shocking incident that has left a young woman with very serious injuries. Our thoughts are with her family who are being provided with support at this terribly difficult time.”

He said detectives were making progress but needed the public’s help: “If you saw anything suspicious in the Consort Road area in the early hours of Sunday morning or if you have heard information since that could help detectives, it is crucial that you get in touch.”

Ms Johnson, a graduate of Oxford Brookes University, has been a leading figure in the Black Lives Matter movement in the UK, and is a member of the Taking the Initiative Party’s leadership committee.

In a statement on Instagram, the party said Ms Johnson was a mother of two, and a “powerful voice” who had always fought for black people and against the injustices surrounding the black community.

“Let’s all come together and pray for Sasha, pray for her recovery and show our support to her family and loved ones,” it said.

The women fighting for digital equality

Lockdowns have forced people to spend the past year and more learning, working, and socialising online – but in many countries, women have been missing out.

They are less likely than men to have access to the internet in nearly every region of the world, according to the latest figures from UN agency the International Telecommunication Union (ITU).

The so-called digital gender gap is most noticeable in Africa, where the ITU estimates that 37% of men have internet access but only 20% of women.

What is more, the divide appears actually to have widened in Africa since 2013.

“If you don’t have digital skills, you’re going to be left behind,” says Regina Honu, founder of Soronko Academy, a tech school for women and girls in Ghana.

“Before Covid, if we put out an invitation for people to sign up, we would have 100 or 200 women,” she says.

“After Covid, we had more than 2,000 women signing up.

“Covid has made people realise that if you don’t have digital skills, you’re going to be left behind.”

Regina says many girls in Ghana might not even touch a computer until they go to school, or might have their internet access restricted by male family members.

And in the era of social distancing, connecting with people who are lacking the tech or the experience is difficult. Video calls are not possible for students without a computer or high-end smartphone, or for whom the cost of data is an issue.

Regina has instead found solutions that work for simpler phones.

“We used WhatsApp and Telegram. We made calls to check in and find out how they were doing,” she explains.

But it has not been possible to reach everybody using such methods and she was pleased when lockdown easing meant they could “come back in person”.

Gender-neutral
Lack of digital skills may deprive women of healthcare, education, work, and financial independence.

It is not easy to explain why the digital gender gap is wider in certain places than in others, says Boutheina Guermazi, director of digital development at the World Bank. But social, cultural, and financial factors all appear to be at work.

“I think it’s linked to the broader gender divide”, she says. “In regions where women do not have equal rights to own land, for example, or equal rights to the job market – when we add a digital dimension, the gap will clearly be deeper than in other regions, where those rights are equal.”

And she says the Covid situation “hasn’t been gender-neutral,” making life even harder for some women.

Globally, women remain less likely than men to own a smartphone, according to mobile industry body the GSMA. But they are increasingly likely at least to be able to use one to access the internet.

This has been driven by positive change in South Asia, says the GSMA. But a gap does persist in the region: women are still estimated to be 36% less likely than men to have mobile internet access.

It was around 2018 that Nirmala Kumari used a simple mobile for the first time.

Today, she is a skilled user who is also helping other women in India to get connected, thanks to her work for a social-media network run by tech firm Gram Vaani.

Typically in her east Indian state of Bihar, if there is only one handset per household, men tend to get priority. As for the women, some of them do not even know how to charge the device.

But the pandemic has changed this, Nirmala told the BBC in an interview before India’s devastating second wave of Covid infections.

“During the lockdown period, as the entire family used to stay in one place, the mobile phone was available at home,” she said.

“So, at that time, women did get mobile access.”

Her Mobile Vaani project gives women a voice on an audio-based media platform. Users dial in, listen to material on topics such as maternity advice, and can even record and contribute their own words to a conversation. The service is designed to be informative and to boost women’s confidence with using mobiles.

Coronavirus has made the platform even more useful for many people – but harder for Nirmala to promote than before. She switched to a remote-working method that relies on her calling influential women who operate within village community self-help groups.

“We told them to ask others to listen to Mobile Vaani, because it had important information, and because we needed their stories about coronavirus – good ones and bad ones,” she says.

“The trick worked, and we got some very inspiring stories.”

Abuse on rise

Gender-based violence during lockdowns has been described by the UN as a “shadow pandemic”.

“If there’s discrimination in the offline world, that is going to be projected on to our online spaces as well,” says Jannat Fazal, who manages a cyber-harassment phone helpline in Pakistan. Spearheaded by the country’s Digital Rights Foundation, it is the first of its kind in South Asia.

After Pakistan entered lockdown, the Lahore-based helpline logged a 500% increase in calls. Typical complaints included women being bullied or impersonated on social media, or being blackmailed when their personal information was shared without their consent.

Online spaces
In Pakistan’s patriarchal society, these female victims may be blamed for supposedly “dishonouring” their families. In extreme cases, they may even be killed for it.

Jannat offers legal help, advice on digital security and psychological assistance for those in distress. During a peak earlier in the pandemic, her team found itself needing to offer round-the-clock support.

“Men tend to navigate online spaces without thinking something will happen”, she says. “But women, on the other hand, have this constant fear.

“We need digital literacy and training so that women are more confident using social-media platforms and they’re less vulnerable.”

Ms Guermazi is hopeful that the issue of the digital gender gap is now more prominent in the minds of policymakers – including those in developed nations.

“I think what coronavirus did for the digital agenda was something unprecedented.”

WeWork reports $2bn loss ahead of stock market debut

Office-sharing startup WeWork has posted a $2.06bn (£1.45bn) quarterly loss after being hit hard by Covid-19.

The announcement comes as WeWork prepares for its stock market debut.

The company’s first attempt to go public collapsed in 2019 over concerns about its business model and co-founder Adam Neumann’s leadership style.

Since Mr Neumann’s exit, the company has gone through a major shake-up involving deep job cuts and the sale of several businesses.

The business was hit particularly hard by the pandemic, as social distancing rules drove a surge in home working and fears of infection kept workers away from shared office spaces.

WeWork, which is backed by Japanese tech giant SoftBank, said its first-quarter revenue almost halved from a year ago to $598m.

But the firm said people are now returning to its offices as coronavirus restrictions are eased.

Its occupancy rate edged up to 50% in the most recent quarter, compared to 47% in the previous three months.

The company also said it expects the change in working habits to increase demand for the kind of short-term leases it offers.

Stock market plans
In March, WeWork said its shares would finally start trading on the stock market, through a merger with the publicly traded BowX Acquisition Corp.

BowX is a so-called special purpose acquisition company, a shell firm that uses proceeds from a public listing to buy a private firm.

The firm is led by the owner of the NBA’s Sacramento Kings and affiliated with basketball legend Shaquille O’Neal.

The deal valued WeWork at $9bn – roughly a fifth of its estimated worth in 2019, before its earlier flotation effort spectacularly imploded.

Investors had raised questions about the company’s finances and how the business was being managed by founder Adam Neumann, who then left the firm.

After plans to list the company were shelved, WeWork restructured the business.

It closed around 100 locations, pulled out of non-core ventures – including a dog walking app and a wave pool maker – and now has just a third of the employees it did in mid-2019.

The company said it incurred restructuring costs of $494m, including its settlement with Mr Neumann. It posted an impairment charge of $299m, partly due to its exit from some real estate.

In February, WeWork’s backer, SoftBank, and Mr Neumann reached a settlement to end a legal battle that started in 2019.

Should encryption be curbed to combat child abuse?

For nine years, Chris Hughes has fought a battle very few people ever see.

He oversees a team of 21 analysts in Cambridge who locate, identify and remove child sexual abuse material (CSAM) from the internet.

The Internet Watch Foundation (IWF) is funded by the global tech industry.

It manually reviews online reports of suspected criminal content sent in by the public. Mr Hughes sees upsetting material every day.

When content is verified, analysts create unique “digital fingerprints” of each photo or video, then send it to law enforcement and tech firms. They also search for material online.
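
The fingerprinting step can be sketched in a few lines. This is an illustrative assumption, not the IWF’s actual pipeline: production systems use perceptual hashes such as Microsoft’s PhotoDNA, which survive resizing and re-encoding, whereas the cryptographic hash below only matches byte-identical copies – but it shows the core idea of turning a file into a compact, shareable identifier.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact file.

    Real systems use *perceptual* hashes that tolerate re-encoding;
    a cryptographic hash matches only byte-identical copies, but
    illustrates the fingerprinting concept.
    """
    return hashlib.sha256(data).hexdigest()

# A blocklist of known fingerprints can then be checked in O(1),
# without ever storing or transmitting the images themselves:
known_bad = {fingerprint(b"example-known-file")}
print(fingerprint(b"example-known-file") in known_bad)  # True
print(fingerprint(b"some-new-file") in known_bad)       # False
```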

Occasionally there are harrowing situations in which the team races to track down victims appearing in live-streamed video.

Reports jumped during the pandemic, he says: “Over the recent May bank holiday weekend, we had more than 2,000 reports.”

In 2020, the IWF received 300,000 reports, of which 153,000 were verified as new CSAM.

Police say more child predators can now be found on messaging apps, rather than on the dark web. Many don’t even encrypt their web traffic.

Many authorities are concerned that Facebook wants to introduce end-to-end encryption on messages sent over Messenger and Instagram Direct.

End-to-end encryption is a privacy feature that makes it impossible for anyone except the sender and recipient to read messages sent online.

They say this would make it much harder to detect child predators and apprehend suspects.

Facebook says using such technology will protect users’ privacy.

But the US, UK and Australia have repeatedly objected to the idea since 2019, saying it will jeopardise work to combat child abuse.

Australia has also demanded that the tech industry give authorities access to encrypted communications – in effect, backdoors to their networks. Firms, both in Australia and abroad, refused.

Enabling backdoors would be bad, says Jenny Afia, head of Schillings’ legal team: “Any legally-enforced weakening of the encryption algorithm, or vulnerability placed within the software…would potentially allow criminals to exploit [it].

“It is worth bearing in mind that having end-to-end encryption in place has already prevented a lot of crime.”

Netsweeper in Canada catalogues the internet to help schools and internet service providers block harmful content.

It sees a quarter of the world’s internet traffic and is in 37% of British schools, scanning 100 million new URLs daily. Up to 300 URLs are reported to the IWF daily.

“To date, governments have left the large tech companies alone – probably because they didn’t understand them as much as they do now,” says Netsweeper’s chief executive Perry Roach.

“But if we don’t enable law enforcement with sophisticated tools, it will allow criminals, scammers, paedophiles and terrorists to move across the internet undetected.”

Software engineer Brian Bason founded US firm Bark after giving his sons their first mobile phones.

Bark uses AI neural networks to analyse text messages and social media in milliseconds for bullying, online predation, child abuse, signs of depression and suicidal ideas.

Children have to agree to hand over their login credentials, but only relevant sections of messages are sent in alerts to parents and schools.
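
As a rough illustration of that excerpt-only design, the hypothetical sketch below uses a simple keyword filter as a stand-in for Bark’s proprietary neural models, forwarding only the message that triggered an alert rather than the whole conversation:

```python
import re

# Stand-in for a trained classifier: a keyword pattern. The design
# point being illustrated is that only flagged excerpts are surfaced,
# not the child's full message history.
FLAGGED = re.compile(r"\b(hurt myself|hate myself)\b", re.IGNORECASE)

def relevant_excerpts(messages: list[str]) -> list[str]:
    """Return only the messages that would trigger an alert."""
    return [m for m in messages if FLAGGED.search(m)]

chat = [
    "see you at practice tomorrow",
    "sometimes I just hate myself",
    "ok bring the charger",
]
print(relevant_excerpts(chat))  # ['sometimes I just hate myself']
```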

Bark has informed the FBI of nearly a thousand child predators over the last five years.

“The reality is end-to-end encryption will drastically reduce the amount of CSAM material reported to authorities,” Mr Bason tells the BBC. “To me, the trade-off is not worth it.”

Perhaps these firms disagree because their business models rely on having unfettered access to data pipelines.

However, former UK and US intelligence agency staff tell the BBC there are other successful methods investigators can use if end-to-end encryption is introduced, like phishing, where users are tricked into visiting fake websites and handing over login credentials.

Internet giants should use machine learning to detect child predator behaviour on the device or server, they add, which wouldn’t break encryption, as it occurs only after the message has been decrypted.

Thorn, a US foundation that develops software to combat child exploitation, identifies eight child victims and 215 pieces of child abuse material per day.

Sarah Gardner, VP of external affairs at Thorn, suggests using “homomorphic encryption” – a form of encryption that lets users perform computations on encrypted data, without first decrypting it.
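
The property can be demonstrated with textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts produces a valid ciphertext of the product of the plaintexts. This toy, with deliberately tiny and insecure parameters, only illustrates the mathematical idea and is not the scheme Thorn proposes:

```python
# Textbook RSA with toy parameters (wholly insecure, for illustration).
p, q = 61, 53
n = p * q            # modulus: 3233
e, d = 17, 2753      # public / private exponents (e*d = 1 mod phi(n))

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 11
# The computation happens on ciphertexts only -- whoever performs it
# never sees a or b in the clear:
combined = (encrypt(a) * encrypt(b)) % n
assert decrypt(combined) == a * b  # 77, recovered by the key holder
```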

Another option would be to invest in better solutions, she adds.

Edinburgh-based Cyan Forensics, which uses statistical sampling to scan suspects’ devices for CSAM in just 10 minutes, agrees.

“End-to-end encryption is here already and it’s neither good nor bad,” says Cyan Forensics’ co-founder and chief executive Ian Stevenson.

“However, there is a dire need for broader protocols to ensure the safety of children online.”

Former detective constable Alan McConnell, who worked on more than a hundred child sex abuse cases, left Police Scotland to teach Cyan about the problems the police face.

As a result of his work, a major UK police force used Cyan’s software to detect CSAM on an ex-offender’s computer in March. The individual was found to have surreptitiously installed cameras at a club used by children.

However, a senior German prosecutor says his biggest problem is getting tech firms to play ball.

“We’re addressing all the big tech firms – please help us,” says Markus Hartmann, director of North Rhine-Westphalia’s central cybercrime department.

“You hear they have these big teams fighting digital crimes, and I wonder, why don’t they file any complaints with law enforcement?”

His unit recently busted a child pornography ring, charging 65 suspects and rescuing a 13-year-old child.

They were aided by Microsoft, which scanned its database of Skype users to locate the suspects’ IP addresses.

Mr Hartmann is surprisingly in favour of encryption.

“If you break encryption, put in backdoors or ban it, then you’re doing more harm than good… and I doubt the guys we are really going after will not be able to get around it,” he says.

“Even as a prosecutor, I could set up my own end-to-end encrypted network in two days, routed through public libraries.”