AI

AI making cyber attacks costlier and more effective: Munich Re

  • "They rely on highly personalised phishing emails, automatically generated malware, and synthetic identities that appear deceptively real," he said.
  • Artificial intelligence is making cyberattacks increasingly sophisticated and costlier for businesses, reinsurer Munich Re said Wednesday, warning of methods ranging from highly personalised phishing emails to computer-generated, convincing fake identities.
Artificial intelligence is making cyberattacks increasingly sophisticated and costlier for businesses, reinsurer Munich Re said Wednesday, warning of methods ranging from highly personalised phishing emails to computer-generated, convincing fake identities.
"If cybercrime were a country, it would be the third-largest economy in the world", behind only the United States and China, the reinsurer said in a report.
Citing figures from market analysis firm Statista, Munich Re projected cybercrime will generate global losses of some $14 trillion (12.07 trillion euros) in 2028.
Martin Kreuzer, head of cyber risks at Munich Re, told AFP that "automation now plays a central role", enabling attackers to operate "efficiently and in a more targeted way". 
"They rely on highly personalised phishing emails, automatically generated malware, and synthetic identities that appear deceptively real," he said.
The trend of "agentic AI" means the advent of systems that can "act autonomously, make decisions, and even circumvent defensive mechanisms," according to Kreuzer.
The most widespread cyberattacks are still ransomware, in which hackers lock systems and demand money to release them.
Munich Re's study says that the number of publicly reported attacks of this kind "increased by nearly 50 percent in 2025 and... continue unabated in 2026".
Coordinated attacks via networks of hijacked devices, used to overwhelm systems, also more than doubled in 2025 and are becoming more common thanks to services available for hire. 
At a more advanced level, criminals are collaborating with states, concealing the origin of attacks and accelerating their global operations.
"Nation-state actors are among the most professional players in the cyber threat landscape," said Kreuzer. 
"Here we are seeing both an evolution of tools and methods, and the emergence of hybrid warfare driven by geopolitical motives," he said, adding that "disinformation is increasingly being used as a weapon".
The study notes that while attacks on large businesses attract most public attention, "the majority of cyber incidents and claims affect micro-companies and SMEs".
Compared with risks from natural catastrophes, for which almost half of all losses were insured in 2025, Kreuzer said that "cyber risk coverage is still far too low, with only a fraction insured".
jpl/jsk/sr/rl

AI

OpenAI kills Sora video app in pivot toward business tools

  • "We respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere," a Disney spokesman told The Reporter. 
  • OpenAI said Tuesday that it would shut down its artificial intelligence video generation app Sora barely six months after its launch, as the company shifts toward business tools ahead of a potential stock market debut.
OpenAI said Tuesday that it would shut down its artificial intelligence video generation app Sora barely six months after its launch, as the company shifts toward business tools ahead of a potential stock market debut.
"We're saying goodbye to Sora," the company posted on X.
The shutdown marks the end of one of the most high-profile consumer AI product launches of the past year.
OpenAI said it would later provide timelines for winding down the standalone app, as well as details on how people can preserve their work.
The closing comes at a sensitive time for OpenAI, which faces increasing questions about the sustainability of its business model, with costs skyrocketing far faster than revenue despite having about one billion daily users worldwide.
According to The Wall Street Journal, OpenAI chief executive Sam Altman announced the changes to staff on Tuesday.
It also follows reports that OpenAI's applications chief, Fidji Simo, told staff this month that they could not be distracted by "side quests," outlining a push toward agentic AI capabilities.
These are AI systems that can work autonomously on computers to write software, analyze data and carry out other tasks.
The Hollywood Reporter meanwhile said the end of Sora would mean the end of a megadeal signed in December with Disney, which was to invest $1 billion in OpenAI and allow the licensing of its popular characters for making videos.
Citing a source close to the matter, the report said the ultimate goal had been access to Sora for the Disney+ streaming service.
"We respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere," a Disney spokesman told The Reporter. 
"We will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators."
arp/js/md

tech

New Mexico jury finds Meta liable for endangering children

  • A separate jury in California is weighing whether Meta and YouTube should be held liable for harms caused to children on their platforms, including by making them addictive.
  • A New Mexico jury on Tuesday found social media giant Meta liable for endangering children by making them vulnerable to predators on its platforms and other dangers.
A New Mexico jury on Tuesday found social media giant Meta liable for endangering children by making them vulnerable to predators on its platforms and other dangers.
The verdict came after roughly a day of deliberations following a six-week trial in which the state accused Facebook and Instagram's parent company of failing to protect minors from sexual abuse, online solicitation and human trafficking.
The state had sought the maximum $2.2 billion in damages, but the jury awarded a lesser amount of $375 million.
The case, tried in a Santa Fe court, is among the first involving social media platforms and child safety to produce a jury verdict.
"The jury's verdict is a historic victory for every child and family who has paid the price for Meta's choice to put profits over kids' safety," said New Mexico Attorney General Raul Torrez, who brought the case.
"Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," he added.
Meta said it would challenge the decision.
"We respectfully disagree with the verdict and will appeal," a company spokesperson said.
"We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content."
The jury reached its verdict following a trial that heard testimony from 40 witnesses, including employees-turned-whistle-blowers, and reviewed hundreds of documents, reports and emails.
Torrez filed suit in 2023 against Meta — parent company of Facebook, Instagram and WhatsApp — and CEO Mark Zuckerberg, alleging the company failed to protect children from online dangers.
During closing arguments, prosecution attorney Linda Singer told jurors that Meta's algorithms had directed adults toward content posted by teenage users while the company concealed internal findings about the risks to young people.
The jury found Meta violated the state's Unfair Practices Act by misleading consumers about the safety of its products for children.
A second phase of proceedings is scheduled to begin May 4, when a judge will hear the state's claim that Meta should be ordered to pay additional penalties and make specific changes to its platforms and company operations.
A separate jury in California is weighing whether Meta and YouTube should be held liable for harms caused to children on their platforms, including by making them addictive.
That case is considered a bellwether that could influence the outcome of thousands of similar lawsuits against social media companies across the United States.
arp/md

trial

Meta awaits verdict in New Mexico child safety trial

  • The New Mexico jury began its work following closing arguments and a six-week trial involving testimony from 40 witnesses, including employees turned whistle-blowers, and hundreds of documents, reports and emails.
  • A New Mexico jury began its first full day of deliberations on Tuesday in a trial where social media giant Meta is accused of endangering children by making them vulnerable to predators.
A New Mexico jury began its first full day of deliberations on Tuesday in a trial where social media giant Meta is accused of endangering children by making them vulnerable to predators.
The state of New Mexico is seeking billions of dollars in penalties in one of two major US cases against the company now in jury hands.
A separate jury in California is weighing whether Meta and YouTube should be held liable for harms caused to children on their platforms, including by making them addictive.
That case is considered a bellwether that could influence the outcome of thousands of similar lawsuits against social media companies across the United States.
The New Mexico jury began its work following closing arguments and a six-week trial involving testimony from 40 witnesses, including employees turned whistle-blowers, and hundreds of documents, reports and emails.
New Mexico Attorney General Raul Torrez filed suit in 2023 against Meta — parent company of Facebook, Instagram and WhatsApp — and CEO Mark Zuckerberg, alleging the company failed to protect children from sexual abuse, online solicitation and human trafficking.
Prosecution attorney Linda Singer told jurors in closing arguments that Meta's algorithms had directed adults toward content posted by teenage users while the company concealed internal findings about the risks to young people.
"Meta failed to explain that the algorithm was designed to maximize teen time spent on the platform," Singer said, according to the Albuquerque Journal. "Meta didn't disclose the likelihood that the algorithm would introduce predators to teens, that it would recommend such sensational and harmful content."
A Meta spokesperson said the state's case was "sensationalist" and based on "cherry-picked" documents. "The State failed to prove its case," the spokesperson said. "We're focused on demonstrating our longstanding commitment to supporting young people."
The state is seeking the maximum civil penalty of $5,000 for each of an estimated 221,000 New Mexico teenagers it says use Facebook and Instagram, a figure that is contested by Meta.
New Mexico's attorneys must prove Meta violated the state's Unfair Practices Act by misleading residents about the safety of its products for children. 
The case, tried before First Judicial District Court Judge Bryan Biedscheid, is among the first involving social media platforms and child safety to reach a jury.
A second phase of proceedings in New Mexico is scheduled for May, when a judge will hear the state's claim that Meta created a public nuisance and should fund programs to address alleged harms to children.
arp/msp

tourism

'Perfect Japan' posts spark Gen Z social media backlash

  • The short video posts on platforms like TikTok show how even just the words "Tokyo, Japan" with a cherry blossom emoji can make an otherwise banal street scene more appealing for some users.
  • Take an everyday video on any suburban transport network, add anime-style music and a rosy filter, and it's suddenly a scene from the Japanese holiday of your dreams.
Take an everyday video on any suburban transport network, add anime-style music and a rosy filter, and it's suddenly a scene from the Japanese holiday of your dreams.
That's the "Japan effect": a Gen Z social media trend satirising the often-romanticised image of the Asian country, which welcomed a record number of visitors last year.
Residents of Kyoto and other tourist hotspots have expressed exasperation with selfie-taking crowds, and now an online backlash against Japan fever is growing.
The short video posts on platforms like TikTok show how even just the words "Tokyo, Japan" with a cherry blossom emoji can make an otherwise banal street scene more appealing for some users.
"The point is to make fun of Japan's 'cute' image online, with all its cliches and stereotypes," 25-year-old French YouTuber Rocky Louzembi, who analyses internet culture, told AFP.
Along with the chronically weak yen, the booming popularity of anime and game franchises such as Pokemon is drawing tourists to the nation.
But some people take their love of Japan too far, said Louzembi, who goes by the handle rockylevrai.
To describe the phenomenon, he used the slang word "glazing" -- to excessively praise something.
A "Japan glazer" is "someone who puts everything that comes from Japan on a pedestal, while disparaging things that come from their own country", Louzembi said.

'Not that clean'

Japan logged a record 42.7 million tourist arrivals in 2025, despite a steep fall in Chinese visitors in December due to a diplomatic row.
Many visitors post online about their trip -- making pilgrimages to real-life locations from cartoons or joking about spending $1,000 on flights just so they can eat a $1 convenience store rice ball.
"The 'Japan' portrayed in an anime world is often quite different from how Japanese society is", said Marika Sato, a 29-year-old who works in marketing in Tokyo.
For instance, many women have experienced groping, said Sato, a contributor to "Blossom The Project", an Instagram account focused on Japanese social issues.
Graphic designer and fellow Blossom contributor Maya Kubota, 28, said that she appreciates people liking Japan and wanting to visit.
But over-the-top comments such as "Japanese people are next level" give her an "icky vibe", Kubota told AFP.
Some of the online Gen Z pushback focuses on the exaggerated idea that Japan's streets are so spotless people don't even have to wear shoes.
"Japan is clean but not THAT clean," joked a US couple who post social media content about the country under the name The Hitobito -- showing off their dirty white socks after a real-life experiment.

Viral effect

Japan's tourist boom has forced some authorities to take action.
A cherry blossom festival boasting a highly Instagrammable view of Mount Fuji was cancelled this year after residents complained of overtourism.
"People associate Japan with carefully composed visuals," said Seio Nakajima, a professor in the Graduate School of Asia-Pacific Studies at Waseda University.
That could be because of the detailed, beautiful backgrounds in anime, or because of a deeper "cultural tradition of emphasising form".
"If people focus on form rather than meaning, it becomes easier to go viral. Because you don't need to think," Nakajima told AFP.
Japan's formalities -- from the complexity of polite language to extreme attention to detail in packaging or wrapping -- may surprise visitors, he said.
But "Japan is not always clean and aesthetic. That's only part of the reality."
Despite the backlash, tourists in Tokyo's busy Tsukiji market told AFP that the country had lived up to their expectations.
"In Russia, it's very popular to hype Japan," said Tatiana Mokeeva, 25.
When asked if posts about Japan could be unrealistic, she said: "To tell the truth, no... I love all about Japan."
str/kaf/ane/ceg

addiction

US social media addiction trial jury struggles for consensus

BY ROMAIN FONSEGRIVES

  • A 20-year-old California woman identified as Kaley G.M. testified at the trial that YouTube and Instagram fueled her depression and suicidal thoughts as a child, telling jurors that she became obsessed with social media, starting with YouTube videos, when she was six.
  • Jurors resume deliberations on Tuesday in a landmark social media trial after signaling that they were having trouble agreeing when it comes to one of the two defendants, Meta and YouTube.
Jurors resume deliberations on Tuesday in a landmark social media trial after signaling that they were having trouble agreeing when it comes to one of the two defendants, Meta and YouTube.
"The jury has difficulty coming to a consensus regarding one defendant, do you have any advice on how to move forward?" the jurors asked Judge Carolyn Kuhl in a note she read out loud.
Kuhl responded by asking the jurors to continue their deliberations.
"If you are unable to reach a verdict, the case will have to be retried before another jury selected in the same manner and from the same community from which you were chosen, adding additional cost to everyone," she told the jurors.
The afternoon ended with no verdict, meaning the panel will return on Tuesday to continue its quest for consensus.
The jury's first full week of deliberations ended Friday with the panel sending the judge a query related to calculating damages in the case, which is expected to set a precedent for thousands of similar suits in the United States.
That indicated enough jurors agreed that one or both of the tech platforms was negligently or harmfully designed and users should have been warned, according to verdict forms.
The lawsuit is one of hundreds accusing social media firms of luring young users into becoming addicted to their content and potentially suffering from depression, eating disorders, psychiatric hospitalization and even suicide.

'Negligent' designs

Internet titans have long shielded themselves with Section 230 of the US Communications Decency Act, which frees them of responsibility for what social media users post.
But this case argues that the firms are responsible for defective products, with business models designed to hold people's attention and to promote content that can harm their mental health.
The verdict could turn on the question of whether familial strife and other real-world trauma, or rather YouTube and Meta apps such as Instagram, are to blame for the mental woes of the woman who filed the suit.
A 20-year-old California woman identified as Kaley G.M. testified at the trial that YouTube and Instagram fueled her depression and suicidal thoughts as a child, telling jurors that she became obsessed with social media, starting with YouTube videos, when she was six.
Under cross examination, however, Kaley also talked about feeling neglected, berated and picked on by family members.
A jury form given to jurors asks the panel to decide whether Meta or YouTube should have known their services posed a danger to children or if they were negligent in design.
If so, jurors are to decide if Meta or YouTube were "substantial factors" in causing Kaley's woes and how much they should pay in damages.
The trial was selected as a "bellwether" proceeding, the outcome of which establishes a precedent for resolving other lawsuits that blame social media for fueling an epidemic of mental and emotional trauma.
However, being unable to agree on a verdict regarding Meta or YouTube could result in a different case setting that standard.
"We're reading tea leaves and we don't know what they mean," said plaintiff's attorney Mark Lanier.
"I don't think that we're even remotely close to the issue of a mistrial."
rfo-arp-gc/jgc

games

No 'silver bullet' for video game age restrictions: PEGI chief

BY KILIAN FICHOU

  • In future, "we will have to work out a plan of attack, an approach to live service games," Bosmans said, "especially games that will continually provide new updates".
  • The head of Europe's video game rating system, PEGI, has warned against supposed "silver bullet" child protection solutions such as age verification, in an interview with AFP. A new set of PEGI (Pan-European Game Information) age ratings, coming into force from June, will take into account factors including in-game purchases, incentives to constantly revisit games or the ability to limit in-game messages from strangers.
The head of Europe's video game rating system, PEGI, has warned against supposed "silver bullet" child protection solutions such as age verification, in an interview with AFP.
A new set of PEGI (Pan-European Game Information) age ratings, coming into force from June, will take into account factors including in-game purchases, incentives to constantly revisit games or the ability to limit in-game messages from strangers.
It had taken "a couple of years" for PEGI to work out the new classification, its director general Dirk Bosmans told AFP.
The games sector has in recent years been the subject of debate, including over allegedly addictive mechanics such as "loot boxes" -- virtual items purchasable for real money that contain a random in-game reward.
PEGI's new ratings will not apply to games released before June this year -- even the most widely played titles, such as "Fortnite" or "League of Legends".
In future, "we will have to work out a plan of attack, an approach to live service games," Bosmans said, "especially games that will continually provide new updates".
Introduced in 2003, PEGI is the only media age classification system harmonised across European countries, its chief noted -- although Germany has its own ratings.
As a self-regulatory mechanism by the games industry, its rules are applied by major console makers Nintendo, Sony and Microsoft, as well as by Google on its app store.
Apple has its own age rating system, while the dominant PC gaming platform Steam -- based in the US -- has not implemented one.

'Regulatory pressure'

PEGI has updated its approach in part in response to growing "regulatory pressure" within the European Union, Bosmans said.
Even as the EU has tightened digital regulation in recent years, member states are taking their own steps -- including a draft law in France barring under-15s from social media, which the government has warned would cover some online games with social aspects, such as "Roblox".
If passed, the law will require all users to prove their age from 2027.
While automated online verification "sounds like it's going to fix everything... data protection organisations are very concerned", Bosmans said.
"We first need to have a really good conversation before we start deciding on where to apply it."
He added that companies in the sector have welcomed the updated PEGI classifications.
"They understand that by making PEGI better and stronger, they are better protected against lack of nuance, quick fixes," Bosmans said.

Parents needed

Bosmans also spoke out against full-on bans of games for children below a certain age -- as mooted by French President Emmanuel Macron last month ahead of an expert inquiry.
"A ban is not very nuanced. It's not very proportionate, no matter for what you apply it," he said, recalling that PEGI was created to avoid just such a scenario in the early 2000s.
What's more, in Australia -- where social media has already been banned for under-16s -- "there is now concern that kids are primarily busy with trying to circumvent the rules, sometimes with the help of their parents," Bosmans said.
"You can try all kinds of technical or legal methods to enforce PEGI ratings. If in the end parents decide, no, my 13-year-old is going to play this 16 (rated) game, it doesn't change anything," he added.
"Thinking that you can do it without the parents is the biggest mistake you can make."
kf/tgb/jhb

internet

Russia's Max: The unencrypted super-app being forced on citizens

  • Even so, she has ditched Max in favour of IMO, a less popular US-made app that has encryption.
  • Russia is pushing its Max messenger -- a social media platform without encryption -- onto its citizens with a massive promotion campaign and the simultaneous blocking of WhatsApp and Telegram, the country's two most popular messenger apps.
Russia is pushing its Max messenger -- a social media platform without encryption -- onto its citizens with a massive promotion campaign and the simultaneous blocking of WhatsApp and Telegram, the country's two most popular messenger apps.
The rollout has raised concerns among critics and digital rights groups that Moscow will use Max to surveil its citizens and further cut digital links to the West.
"Any data that passes through this application can be considered to be in the hands of its owner, and in this case, the hands of the Russian state," cybersecurity researcher Baptiste Robert, CEO of the French company Predicta Lab, told AFP.
Launched in 2025 by Russian social media giant VK, the app has been compared to China's WeChat, combining social media and messaging functions with access to government services, a digital ID card system, banking and payments.
It is not officially mandatory, but the authorities are making it clear that life without Max will become increasingly hard.
President Vladimir Putin has touted it as a more "secure" platform that meets Russia's demand for "technological sovereignty."
Moscow has been pushing that agenda for years.
"This is the culmination of policies aimed at creating a sovereign internet," Marielle Wijermars, an associate professor of internet governance at Maastricht University, told AFP.
"Russia wants to restructure the internet to better control what is published" including "by migrating all Russians to platforms that are more state-controlled," she added.

'Forced' to download

Max has been pre-installed on phones and tablets sold in Russia since September.
The design is familiar and resembles Telegram, offering private messages, public channels and cute stickers.
Unlike Telegram and WhatsApp, it is also on Russia's "white list" of approved digital services that stay online during the increasingly common forced internet blackouts that Moscow says are necessary to thwart Ukrainian retaliatory drone attacks.
Initially only available to users with a Russian or Belarusian SIM card, the app is now available in English and to those with phone numbers from 40 other countries -- only those Russia deems "friendly," like Cuba, Pakistan and ex-Soviet republics in Central Asia.
It is not available in the European Union -- or Ukraine. 
That has not stopped Ukrainian President Volodymyr Zelensky vowing to infiltrate the messenger.
One of the reasons Russia wants to ditch Telegram is because it has become a platform used by Ukraine to recruit Russians for sabotage attacks, including assassinations.
Inside Russia, opinions are split.
"You can send messages, photos and videos. What more do you need?" said Yekaterina, a 35-year-old dance teacher.
Irina, a 45-year-old doctor, however, complained she has been "forced" to use Max for school activities for her children and to access the government's official online portal, Gosuslugi, where her patients make appointments.
She plans to "buy another SIM card to download Max on another phone."
Large businesses have been accused of forcing employees to download the app and schools have migrated all communication with parents to the platform.
At the same time, celebrities and popular bloggers are moving their content to Max.
Dmitry Zakharchenko, founder of the Russian analytics agency GRFN, has compared the "aggressive" campaign with Soviet propaganda billboards.
The carrot-and-stick approach has driven downloads -- more than 100 million users in March, according to the service.

'Being watched'

The launch of Max comes years into Russia's political and technological campaign to develop a "sovereign internet", less reliant on -- and vulnerable to -- foreign services.
Russian telecoms regulator Roskomnadzor and the security services have enjoyed growing powers to monitor and block sites they deem dangerous.
Unlike Telegram and WhatsApp, Max does not use end-to-end encryption and its terms of use state that user data is stored exclusively on servers in Russia.
Varvara, a 35-year-old interpreter said she was not worried about that as she was not a "foreign agent" and had nothing to hide -- referring to a label used by the Kremlin to target critics.
Even so, she has ditched Max in favour of IMO, a less popular US-made app that has encryption.
Scientist Alexandra, 32, refuses to download Max "out of contrariness" to its heavy-handed promotion.
"We're already being watched everywhere," she added, dismissing the privacy concerns.
But another resistant user -- Natasha, 48 -- shows the general feeling of resignation when it comes to the future of the app in Russia.
"Sooner or later, there will be no alternative."
bur/gv

X

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

BY CLARA WRIGHT

  • French authorities are already investigating X over allegations that its algorithm was used to interfere in French politics, as well as Grok's dissemination of Holocaust denials and the sexualised deepfakes.
  • French prosecutors said Saturday they had alerted US authorities to a suspicion that tech tycoon Elon Musk had encouraged controversy over sexualised deepfakes on X to "artificially" increase the value of his company.
French prosecutors said Saturday they had alerted US authorities to a suspicion that tech tycoon Elon Musk had encouraged controversy over sexualised deepfakes on X to "artificially" increase the value of his company.
The social media network's Grok AI chatbot stirred outrage earlier this year for generating images of naked women and girls without their consent.
"The controversy sparked by sexually explicit deepfakes generated by Grok (X's AI) may have been deliberately generated in order to artificially boost the value of companies X and xAI," the Paris prosecutor's office said, confirming a report in Le Monde newspaper on Friday.
This could have been done towards "the planned June 2026 stock market listing of the new entity created by the merger" between SpaceX and xAI, it added.
The prosecutor's office said it had on Tuesday reached out to the US Department of Justice, as well as the US Securities and Exchange Commission (SEC), a financial market regulation body, to share its concerns.
X's lawyer in France was not immediately available for comment.
Replying on X in French to a link to AFP's coverage of the story, Musk slammed French prosecutors as "mentally retarded."
French authorities are already investigating X over allegations that its algorithm was used to interfere in French politics, as well as Grok's dissemination of Holocaust denials and the sexualised deepfakes.
AI chatbot Grok has its own account on the X social network allowing users to interact with it.
For a period, users could tag the bot in posts to request image generation and editing, receiving the image in a reply from Grok. Many sent Grok photos of women or tagged the bot in replies to women's photo posts, giving it prompts such as "put her in a bikini" or "remove her clothes".

'Incitements'

It generated an estimated three million sexualised images -- mostly of women, though also 23,000 that appeared to depict children -- in 11 days, the Center for Countering Digital Hate (CCDH), a nonprofit watchdog, said in late January.
Le Monde pointed to "several posts by Musk, published at the height of the controversy, which prosecutors interpret as incitements to generate non-consensual images". 
"The billionaire posted several messages in which he expressed delight, using numerous emojis, about his AI engine's 'undressing' capabilities, even sharing an image of himself in which his chatbot depicted him wearing a bikini," Le Monde reported.
Daily average app downloads for Grok worldwide soared 72 percent from January 1 to January 19 compared with the same period in December, the Washington Post reported, citing market intelligence firm Sensor Tower.
French authorities last month summoned Musk to a "voluntary interview" and searched the local offices of his social media network, in what Musk called a "political attack".
Both Britain and the European Union have also opened investigations into the creation of the sexualised deepfakes.
bur-arp/acb

trial

US jury finds Elon Musk misled Twitter shareholders

BY GLENN CHAPMAN

  • The civil complaint in California accused Musk of driving down Twitter's stock price to gain leverage to renegotiate the purchase price or get out of the deal completely, causing people who sold shares to lose money.
  • A federal jury in California found Friday that tech tycoon Elon Musk misled Twitter shareholders, driving down the company's share price as he was poised to buy it in a $44 billion deal.
A federal jury in California found Friday that tech tycoon Elon Musk misled Twitter shareholders, driving down the company's share price as he was poised to buy it in a $44 billion deal.
The verdict in the class action securities lawsuit means the world's richest person could be ordered to pay billions of dollars, according to damages calculated by jurors.
Minutes after the judgment was announced, the entrepreneur's lawyers informed AFP that their client will appeal the decision, characterizing it as a "setback."
After a three-week trial in a San Francisco federal court -- which included in-person testimony from Musk -- the jury found that two tweets posted in May 2022 by the Tesla and SpaceX CEO contained false statements responsible for a plunge in Twitter's share price.
Investor Giuseppe Pampena had filed the suit on behalf of people who sold Twitter shares between mid-May and early October 2022.
Musk acquired the social media platform in late October 2022 and later renamed it X.
Jurors agreed that Musk violated a securities rule that bars false and misleading statements that sink a stock price, in this case that of Twitter, the verdict form showed.
An attorney for the plaintiffs estimated the damages at about $2.6 billion.
Musk, who has a near-constant presence on X, did not immediately react to the verdict.

Teflon tycoon?

The judgment marks a rare legal defeat for Musk, often dubbed "Teflon Elon" for his ability to emerge unscathed from lawsuits he is expected to lose.
His lawyers, in fact, reminded AFP of this track record, noting that a Texas court cleared him just that same day in a separate defamation case.
In 2023, a jury in the same San Francisco federal court cleared him within hours of similar charges brought by Tesla shareholders, following his 2018 tweets claiming he had the funding to take the automaker private.
The civil complaint in California accused Musk of driving down Twitter's stock price to gain leverage to renegotiate the purchase price or get out of the deal completely, causing people who sold shares to lose money.
Musk tweeted at one point during the process that the acquisition deal was temporarily on hold until Twitter executives could prove the percentage of "bots" -- fake accounts run by software instead of real users -- was as low as the social media platform claimed.
The plaintiffs contended that these statements were part of a scheme designed to pressure the board of directors into accepting a price lower than his initial offer -- at a time when Tesla's share price was falling, meaning Musk would have to sell more of his shares to finance the deal.
Musk abandoned his effort to get out of buying Twitter in late 2022 after the company took him to court to uphold the contract.
Musk has since merged the social media platform with his artificial intelligence startup xAI and his private space exploration firm SpaceX.
Forbes magazine early this month estimated Elon Musk's net worth at $839 billion, a figure based primarily on his stakes in his portfolio of companies including Tesla and SpaceX.
gc-cl/des/lga/jfx

addiction

Jury signals tech titans on hook for social media addiction

BY ROMAIN FONSEGRIVES

  • If so, jurors are to decide if Meta or YouTube were "substantial factors" in causing Kaley's woes and how much they should pay in damages. 
  • A question by jurors in a landmark social media addiction trial on Friday signaled Meta or YouTube may have to pay for letting a girl get hooked onto their platforms.
A question by jurors in a landmark social media addiction trial on Friday signaled Meta or YouTube may have to pay for letting a girl get hooked onto their platforms.
The jury's first full week of deliberations ended with the panel sending the judge a query related to calculating damages in the case, which is expected to set a precedent for thousands of similar suits in the nation.
"We don't start dancing in the streets over what seems to be a good question," said plaintiff's attorney Mark Lanier.
"But we're appreciative of the fact that they're on the issues of damages."
To turn their attention to damages, enough jurors had to essentially agree that one or both of the accused tech platforms were negligently or harmfully designed and that users should have been warned, according to verdict forms.
Jurors will return to the Los Angeles courthouse on Monday to resume deliberations.
Since deliberations began on March 13, the jury has sent the judge questions about the plaintiff's family troubles as well as how much she actually used Meta-owned Instagram as a child.

Negligent in design?

The verdict could turn on the question of whether familial strife and other real-world trauma, or YouTube and Meta apps such as Instagram, were to blame for the mental woes of the woman who filed the suit.
A 20-year-old California woman identified as Kaley G.M. testified at trial that YouTube and Instagram fueled her depression and suicidal thoughts as a child, telling jurors that she became obsessed with social media, starting with YouTube videos, when she was six.
Under cross examination, however, Kaley also talked about feeling neglected, berated and picked on by family members.
A jury form given to jurors asks the panel to decide whether Meta or YouTube should have known their services posed a danger to children or if they were negligent in design.
If so, jurors are to decide if Meta or YouTube were "substantial factors" in causing Kaley's woes and how much they should pay in damages. 
Whatever the verdict, the trial highlights "an important tension" between social media platforms and vulnerable young internet users, reasoned University of Pittsburgh marketing professor Vanitha Swaminathan.
"The platforms have to address the concerns of this important segment," Swaminathan told AFP.
The lawsuit is one of hundreds accusing social media firms of luring young users into addiction to their content, potentially leading to depression, eating disorders, psychiatric hospitalization and even suicide.
Internet titans have long shielded themselves with Section 230 of the US Communications Decency Act, which frees them of responsibility for what social media users post.
However, this case argues that the firms are responsible for defective products, with business models designed to hold people's attention and to promote content that can harm their mental health.
The outcome of the trial is expected to establish a precedent for resolving other lawsuits that blame social media for fueling an epidemic of mental and emotional trauma.
gc-rfo-tu/jgc

US

Souped-up VPNs play 'cat and mouse' game with Iran censors

BY TOM BARFIELD

  • Iran uses all of those, and it is generally much more aggressive than other countries in targeting the entire IP ranges of service providers that VPNs typically use.
  • Iranians are managing to get online during the current war with the US and Israel despite drastic censorship and frequent blackouts, throwing the spotlight on to providers of tools such as VPNs (virtual private networks).
Iranians are managing to get online during the current war with the US and Israel despite drastic censorship and frequent blackouts, throwing the spotlight on to providers of tools such as VPNs (virtual private networks).
AFP asked Adam Fisk, head of US-based nonprofit Lantern, which offers an advanced VPN, how his technology and similar apps can get around such heavy-handed blocking.
Question: How does Iran's internet blocking work?
Answer: In general, censoring countries block traffic using DNS (Domain Name System, which translates between human- and machine-readable names for websites and other resources), SNI (server name identification), IP-based blocking (of specific internet addresses) and other forms of Deep Packet Inspection (probing the content of data sent over the internet).
Iran uses all of those, and it is generally much more aggressive than other countries in targeting the entire IP ranges of service providers that VPNs typically use.
Iran is also uniquely aggressive in shutting down all international connectivity in times of crisis. In those cases, traffic is primarily limited to the domestic internet, or NIN (National Information Network).
Q: How do tools like Lantern get around the blocking?
A: Lantern and Psiphon (a similar tool made by a Canadian company) share the same general approaches but use different protocols and codebases.
A powerful approach is hiding in common forms of traffic, such as TLS (Transport Layer Security, used to protect applications like web browsing, email, instant messaging and voice calls) or DNS.
The additional traffic from Lantern or other tools becomes a subset of a much larger whole. If done carefully, it can be hard to distinguish from ordinary web traffic.
There is definitely a cat-and-mouse element to the relationship. Lantern and other tools are constantly discovering new approaches or vulnerabilities, while censors such as Iran discover new ways to shut them down.
Q: How do people inside countries like Iran get software to circumvent blocking?
A: When there is international internet connectivity, people get Lantern from sites that censors are unwilling to block due to the economic consequences, such as (software development platform) GitHub.
During internet shutdowns, however, people rely on their existing copies of Lantern and other tools, or they can get new updates through services like (satellite broadcast system) Toosheh or other users who have Starlink, for example.
Iran is generally a very tech-savvy country, and many people constantly have multiple circumvention apps on their phones.
Q: Could Iran's hackers glean data about users from your systems?
A: We don't store any personally identifiable information about users at all, and Lantern undergoes regular security audits. 
We are also generally strong security engineers and take care to secure our backend infrastructure in a variety of ways.
Q: Where do Lantern's resources come from and can ordinary people help out?
A: Lantern is a US-based nonprofit that earns revenue from Lantern Pro users worldwide who pay for a better version. Historically, we have received funding from the Open Technology Fund (a US government-funded NGO that campaigns for internet freedom), the US State Department and private philanthropists.
We also have Unbounded, where anyone can become a proxy (a "bridge" between people in censored countries and Lantern's network) with the click of a button.
This will use your bandwidth to some degree but won't have a significant impact on the performance of your machine. People can run it for however long they want.
Q: Where else is Lantern widely used and is demand growing?
A: In general, we have seen censorship growing around the world for many years, with Lantern usage growing accordingly to around two million globally.
We have a significant number of users in Russia, Myanmar and the UAE.
From Iran at the moment, there's very little traffic getting through, very little traffic in general apart from what's on the NIN.
tgb/jxb

history

Turkey in cultural diplomacy push to bring history home

BY FULYA OZERKAN

  • But that changed after archaeometry expert Professor Ernst Pernicka concluded there was "no doubt whatsoever" the statue came from Bubon, where an imperial shrine housed bronze sculptures of Roman emperors. 
  • When an ancient bronze statue of Roman Emperor Marcus Aurelius landed back on Turkish soil after decades abroad, it was more than a symbolic homecoming.
When an ancient bronze statue of Roman Emperor Marcus Aurelius landed back on Turkish soil after decades abroad, it was more than a symbolic homecoming.
It marked the latest victory in Turkey's increasingly assertive push to recover antiquities illegally taken abroad -- a campaign supported by a newly-developed AI tool for identifying cultural assets of Turkish origin.
The life-sized bronze, which dates to the second or third century, was taken in the 1960s from the ancient city of Bubon near Turkey's southwestern Antalya resort. 
After a years-long investigation involving research, scientific testing and statements from now elderly witnesses, the headless statue arrived back in Turkey last year.
Its repatriation from an Ohio museum involved cooperation with the US Department of Homeland Security and the Manhattan District Attorney's Office.
For Zeynep Boz, director of Turkey's department for combating the illicit trafficking of cultural property, one moment stands out. 
"I clearly remember when the computer finally processed the data and we saw the match come together. It was an exciting moment," she told AFP at Istanbul's archaeology museum.
That the statue survived at all is exceptional: in antiquity, bronze was a valuable raw material routinely melted down for weapons, coins or everyday objects.
"For this reason, bronze statues of this scale have rarely been preserved until today," she said.
For years, Cleveland's Museum of Art had dragged its feet, claiming there was insufficient evidence to prove where it came from, Boz said. 
But that changed after archaeometry expert Professor Ernst Pernicka concluded there was "no doubt whatsoever" the statue came from Bubon, where an imperial shrine housed bronze sculptures of Roman emperors. 
Soil and lead samples provided crucial scientific evidence which convinced the museum, Boz said. 
"It was a long struggle. We were determined and patient and we won," Culture Minister Mehmet Nuri Ersoy said when the statue returned in July.
Turkey has stepped up efforts to combat illicit antiquities trading and in 2025 alone secured the repatriation of 180 cultural artefacts.

AI to identify trafficked objects

Although its newly-developed AI-powered "TraceART" system was not involved in recovering the Marcus Aurelius statue, the tool helped identify two 16th-century Iznik tiles that were recovered from Britain this month.
Developed by the culture ministry, it scans images on sales platforms, auctions and social media to identify any cultural assets of Turkish origin that may have been trafficked, with flagged items sent for expert assessment.
TraceART went operational in 2025 and has since identified hundreds of objects for review, Boz said.
In January, Turkey recovered an Anatolian-style marble head from Denver Art Museum in Colorado, said Burcu Ozdemir of the antiquities trafficking unit.
The museum contacted Ankara because the piece "had been donated by the wife of a US consul general who served in Istanbul in the 1940s", she said. 
Turkey's campaign also involves returning items to countries like Iran, China and Egypt.
"We returned two of the artefacts stolen from temples in China," Boz told AFP. 
Turkey also returned "a key of the Kaaba to Egypt" after realising it had ended up in Turkey illegally, she said of the cube-shaped stone structure at Mecca's Grand Mosque.

Ottoman tiles at the Louvre

Turkey is now seeking the repatriation of other antiquities taken during the Ottoman era: an ancient marble torso called the "Old Fisherman" from Berlin, and dozens of Iznik tiles held at France's Louvre museum. 
"There's an assumption that artefacts taken in the 18th-19th centuries were acquired legally. We don't share that view," Boz said. 
The illegal tile swap came to light in 2003 when one of the tiles fell from the wall of an Ottoman-era library, revealing a French manufacturer's mark on its back.
The original and others were taken in the late 1800s by a Frenchman who claimed to be restoring them, then replaced them with fakes. 
"We have repeatedly shared evidence with France and talked with the Louvre but no resolution has been reached," she said. 
The tiles were on a panel by the tomb of Ottoman Sultan Selim II in the garden of the Hagia Sophia. 
Today it bears a plaque in English, French and Turkish reading: "The tiles before us are replicas."
The originals are currently on display at a branch of the Louvre in Lens, 200 kilometres north of Paris, which says they were "bought in 1895". 
The museum did not respond to several requests for comment from AFP. 
fo/hmw/yad/ceg/lga

internet

Social media addiction trial jury deliberations continue

  • The lawsuit is one of hundreds accusing social media firms of luring young users into addiction to their content, potentially leading to depression, eating disorders, psychiatric hospitalization and even suicide. 
  • Jurors will return to court here on Thursday to continue deliberations in a civil trial accusing Meta and YouTube of harmfully hooking young internet users.
Jurors will return to court here on Thursday to continue deliberations in a civil trial accusing Meta and YouTube of harmfully hooking young internet users.
Since deliberations began on March 13, the jury has sent the judge questions about the plaintiff's family troubles as well as how much she actually used Meta-owned Instagram as a child.
The verdict could turn on the question of whether familial strife and other real-world trauma, or YouTube and Meta apps such as Instagram, were to blame for the mental woes of the woman who filed the suit.
A 20-year-old California woman identified as Kaley G.M. testified at trial that YouTube and Instagram fueled her depression and suicidal thoughts as a child, telling jurors that she became obsessed with social media, starting with YouTube videos, when she was six.
Under cross examination, however, Kaley also talked about feeling neglected, berated and picked on by family members.
A jury form given to jurors asks the panel to decide whether Meta or YouTube should have known their services posed a danger to children or if they were negligent in design.
If so, jurors are to decide if Meta or YouTube were "substantial factors" in causing Kaley's woes and how much they should pay in damages. 
The lawsuit is one of hundreds accusing social media firms of luring young users into addiction to their content, potentially leading to depression, eating disorders, psychiatric hospitalization and even suicide. 
Internet titans have long shielded themselves with Section 230 of the US Communications Decency Act, which frees them of responsibility for what social media users post.
However, this case argues that the firms are responsible for defective products, with business models designed to hold people's attention and to promote content that can harm their mental health.
The outcome of the Los Angeles trial is expected to establish a precedent for resolving other lawsuits that blame social media for fueling an epidemic of mental and emotional trauma.
gc-rfo/js

lifestyle

Music popstar will.i.am meshes AI and 'micromobility'

  • "Their vehicle that got them to work is a part of their tool set; and it's working in the parking lot while they work," he added, referring to Trinity as "brains on wheels."
  • Black Eyed Peas star will.i.am is putting artificial intelligence agents to work in three-wheel vehicles tailored for modern urban life.
Black Eyed Peas star will.i.am is putting artificial intelligence agents to work in three-wheel vehicles tailored for modern urban life.
The musician turned tech entrepreneur demonstrated a so-called autocycle called Trinity at Nvidia's annual developers conference that ends Thursday in the heart of Silicon Valley.
"I'm an artistic creator because of tech," will.i.am told AFP.
"Creating with musical teams is great, but hopping into a different realm and being hyper creative with full-stack developers, electrical engineers, mechanical engineers, world builders -- that is the ultimate level of creativity."
His Trinity startup is named for an alignment of human, vehicle and agentic AI.
The single-passenger electric vehicle, which shares its name with the startup, lets a human do the driving but is infused with an AI agent that acts as a virtual assistant for conversation-based collaborations on the move, will.i.am said.
"When a human has an agent of their own, a company has a super employee," he said of brainstorming and delegating tasks to Trinity AI agents conversationally while commuting.
"Their vehicle that got them to work is a part of their tool set; and it's working in the parking lot while they work," he added, referring to Trinity as "brains on wheels."
The vehicle, designed to accelerate quickly from zero to 60 mph (96 km/h), uses an Nvidia graphics processor to power built-in AI that can interpret and reason about the world around it, according to the startup.
The vehicles are to be made in a Los Angeles facility that will also serve as a school for robotics and agentic AI systems.
"I was ambitious, audacious and a little bit of naive," will.i.am said of pursuing the project.
"That's a good combination, because if you don't have that little bit of naive and everything is skeptical, you probably wouldn't take crazy risks."
An initial production run of 500 units is planned, with an aim to begin deliveries in August of next year and to keep the vehicle's price under $30,000.
gc-rv/js