Digital World

Accumulating bitcoin: a risky digital rush by companies?

BY LUCIE LEQUIER

  • Tesla has previously accepted payments in bitcoin, while Trump Media soon plans to offer crypto investment products.
  • US President Donald Trump's media group and Tesla, the electric carmaker owned by tech billionaire Elon Musk, are among an increasing number of companies buying huge amounts of bitcoin.
US President Donald Trump's media group and Tesla, the electric carmaker owned by tech billionaire Elon Musk, are among an increasing number of companies buying huge amounts of bitcoin.
The aim? To diversify reserves, counter inflation and attract investors, analysts say. 

Who also invests?

Companies frequently own bitcoin -- the largest cryptocurrency by market capitalisation -- to take part in sector activities such as "mining", which refers to the process of validating transactions in exchange for digital tokens.
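For illustration, here is a minimal Python sketch of the proof-of-work puzzle at the heart of mining: repeatedly hashing candidate block data until the result falls below a difficulty target. This is a toy version only; the real Bitcoin network hashes full block headers against a vastly harder target.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> tuple[int, str]:
    """Toy proof-of-work: find a nonce whose SHA-256 hash of the block data
    falls below a target with `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # the "proof" that computational work was done
        nonce += 1

nonce, digest = mine("tx1;tx2;tx3")
print(f"nonce={nonce} hash={digest}")
```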
Tesla has previously accepted payments in bitcoin, while Trump Media soon plans to offer crypto investment products.
Other players who had core operations totally unrelated to cryptocurrency, such as Japanese hotel business MetaPlanet, have switched to buying bitcoin.
US firm Strategy, initially a seller of software under the name MicroStrategy, holds more than three percent of all bitcoin tokens, or over 600,000.
Its co-founder Michael Saylor "created real value for its original set of investors" by offering the opportunity to invest in shares linked to cryptocurrencies, Andy Constan, chief executive of financial analysts Damped Spring Advisors, told AFP.
That was five years ago, when other financial products allowing investment in cryptocurrencies without directly owning tokens were not yet permitted.

Why invest?

Companies collect bitcoins "to diversify" their cash flow and "counter the effects of inflation", said Eric Benoist, a tech and data research expert for Natixis bank.
Some struggling companies are riding the trend in a bid to "restore their image" by "backing themselves with an asset perceived as solid and one that appreciates over time", he added.
Strategy's current focus is on accumulating bitcoin, simply to attract investors interested in the currency's potential. 
Bitcoin can also have a simple practical use, as in the case of the Coinbase exchange, which uses its own reserves as collateral for its users.

The risks?

Bitcoin's value has soared around ninefold in five years, fuelled recently by US regulatory changes under Trump, a strong backer of the crypto sector.
However, the unit's volatility is four times greater than that of the main US stock index, the S&P 500, according to Campbell Harvey, a professor of finance at Duke University in the United States.
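Such comparisons are based on the spread of daily returns. Below is a minimal sketch of the calculation, using made-up return figures purely for illustration (real figures would come from price history).

```python
import math
import statistics

def annualized_volatility(daily_returns: list[float]) -> float:
    """Standard deviation of daily returns scaled by sqrt(252 trading days)."""
    return statistics.stdev(daily_returns) * math.sqrt(252)

# Hypothetical daily returns standing in for real market data.
btc_returns = [0.040, -0.050, 0.030, -0.060, 0.050, -0.040, 0.020]
spx_returns = [0.010, -0.012, 0.008, -0.011, 0.009, -0.010, 0.007]

ratio = annualized_volatility(btc_returns) / annualized_volatility(spx_returns)
print(f"bitcoin volatility is roughly {ratio:.1f}x the S&P 500's in this toy sample")
```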
Harvey warns against using a company's cash reserves, "their safe haven", to buy crypto.
Bitcoin's price, currently around $117,000, has in recent years been boosted by large holders of cryptocurrency, referred to as "whales".
Harvey argues that for "major buyer" Strategy, liquidating all of its 600,000 bitcoin tokens would be no simple task owing to their sheer value.
"Assuming that you could liquidate all of those bitcoin at the market price is a heroic assumption," he told AFP, adding such a deal would see the cryptocurrency's price plummet.
Jack Mallers, chief executive and co-founder of bitcoin-focused company Twenty One Capital, said his business embraced the sector's volatility, adding the market would need to be flooded for the token's price to crash.

A bubble?

According to its own calculation, Strategy's stock is selling at about 70 percent above the value of its bitcoin reserves. 
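That premium is simply the company's market value measured against the value of the bitcoin it holds. A minimal sketch with hypothetical figures (the real numbers shift daily with both the share price and bitcoin's price):

```python
def premium_to_bitcoin_nav(market_cap: float, btc_held: float, btc_price: float) -> float:
    """Premium of the company's market value over its bitcoin reserves."""
    nav = btc_held * btc_price      # value of the bitcoin treasury
    return market_cap / nav - 1.0   # 0.70 means shares trade about 70% above the reserves

# Hypothetical inputs, for illustration only.
print(f"{premium_to_bitcoin_nav(market_cap=120e9, btc_held=600_000, btc_price=117_000):.0%}")
# -> roughly 71%
```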
The company -- which did not answer AFP's request for comment -- is growing thanks to bitcoin purchases, which in turn is attracting investors and pushing up its share price.
But ultimately it will need to monetise these crypto assets, for example by linking them to financial products, for its business to be sustained. 
Should Strategy and other so-called "bitcoin treasury funds" fail to do so, Benoist fears the crypto investment bubble will burst.
He points out that the strategy of accumulation runs counter to the original philosophy of bitcoin, which was conceived in 2008 as a decentralised means of payment. 
Today, "bitcoins end up in electronic safes that are left untouched", he said.
lul-bcp/ajb/jkb/rl

internet

OpenAI releases ChatGPT-5 as AI race accelerates

BY GLENN CHAPMAN

  • Co-founder and chief executive Sam Altman touted this latest iteration as "clearly a model that is generally intelligent."
  • OpenAI released a keenly awaited new generation of its hallmark ChatGPT on Thursday, touting "significant" advancements in artificial intelligence capabilities as a global race over the technology accelerates.
OpenAI released a keenly awaited new generation of its hallmark ChatGPT on Thursday, touting "significant" advancements in artificial intelligence capabilities as a global race over the technology accelerates.
ChatGPT-5 is rolling out free to all users of the AI tool, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists.
Co-founder and chief executive Sam Altman touted this latest iteration as "clearly a model that is generally intelligent."
Altman cautioned that there is still work to be done to achieve the kind of artificial general intelligence (AGI) that thinks the way people do.
"This is not a model that continuously learns as it is deployed from new things it finds, which is something that, to me, feels like it should be part of an AGI," Altman said.
"But the level of capability here is a huge improvement."
Industry analysts have heralded the arrival of an AI era in which genius computers transform how humans work and play.
"As the pace of AI progress accelerates, developing superintelligence is coming into sight," Meta chief executive Mark Zuckerberg wrote in a recent memo.
"I believe this will be the beginning of a new era for humanity."
Altman said there were "orders of magnitude more gains" to come on the path toward AGI.
"Obviously... you have to invest in compute (power) at an eye watering rate to get that, but we intend to keep doing it."
Tech industry rivals Amazon, Google, Meta, Microsoft and Elon Musk's xAI have been pouring billions of dollars into artificial intelligence since the blockbuster launch of the first version of ChatGPT in late 2022.
Chinese startup DeepSeek shook up the AI sector early this year with a model that delivers high performance using less costly chips.

'PhD-level expert'

With fierce competition around the world over the technology, Altman said ChatGPT-5 led the pack in coding, writing, health care and much more.
"GPT-3 felt to me like talking to a high school student -- ask a question, maybe you get a right answer, maybe you'll get something crazy," Altman said.
"GPT-4 felt like you're talking to a college student; GPT-5 is the first time that it really feels like talking to a PhD-level expert in any topic."
Altman expects the ability to create software programs on demand -- so-called "vibe-coding" -- to be a "defining part of the new ChatGPT-5 era."
In a blog post, British AI expert Simon Willison wrote about getting early access to ChatGPT-5.
"My verdict: it's just good at stuff," Willison wrote.
"It doesn't feel like a dramatic leap ahead from other (large language models) but it exudes competence -- it rarely messes up, and frequently impresses me."
However, Musk wrote on X, formerly Twitter, that his Grok 4 Heavy AI model "was smarter" than ChatGPT-5.

Honest AI?

ChatGPT-5 was trained to be trustworthy and to provide answers that are as helpful as possible without aiding seemingly harmful missions, according to OpenAI safety research lead Alex Beutel.
"We built evaluations to measure the prevalence of deception and trained the model to be honest," Beutel said.
ChatGPT-5 is trained to generate "safe completions," sticking to high-level information that can't be used to cause harm, according to Beutel.
The company this week also released two new AI models that can be downloaded for free and altered by users, to challenge similar offerings by rivals.
The release of "open-weight language models" comes as OpenAI is under pressure to share inner workings of its software in the spirit of its origin as a nonprofit.
gc-juj/dl

Apple

Apple to hike investment in US to $600 bn over four years

  • In February, Apple said it would spend more than $500 billion in the United States and hire 20,000 people, with Trump quickly taking credit for the decision. 
  • Apple will invest an additional $100 billion in the United States, taking its total pledge to $600 billion over the next four years, US President Donald Trump said Wednesday.
Apple will invest an additional $100 billion in the United States, taking its total pledge to $600 billion over the next four years, US President Donald Trump said Wednesday.
Trump announced the increased commitment at the White House alongside the tech giant's CEO Tim Cook, calling it "the largest investment Apple has made in America." 
"Apple will massively increase spending on its domestic supply chain," Trump added, highlighting a new production facility for the glass used to make iPhone screens in Kentucky.
In February, Apple said it would spend more than $500 billion in the United States and hire 20,000 people, with Trump quickly taking credit for the decision. 
It builds on plans announced in 2021, when the company founded by Steve Jobs said it would invest $430 billion in the country and add 20,000 jobs.
"This year alone, American manufacturers are on track to make 19 billion chips for Apple in 24 factories across 12 different states," Cook said in the Oval Office.
Trump, who has pushed US companies to shift manufacturing home by slapping tariffs on trading partners, claimed that his administration was to thank for the investment.
"This is a significant step toward the ultimate goal of... ensuring that iPhones sold in the United States of America also are made in America," Trump said. 
Cook later clarified that, while many iPhone components will be manufactured in the United States, the complete assembly of iPhones will still be conducted overseas.
"If you look at the bulk of it, we're doing a lot of the semiconductors here, we're doing the glass here, we're doing the Face ID module here... and we're doing these for products sold elsewhere in the world," Cook said.
He gifted Trump a custom-engraved glass piece made by iPhone glassmaker Corning, set in a 24-karat gold base.
Cook said the Kentucky-made glass piece was designed by a former Marine Corps corporal now working at Apple. 
After receiving it, Trump said it was "nice" that "we're doing these things now in the United States, instead of other countries, faraway countries."

'They're coming home'

Trump has repeatedly said he plans to impose a "100 percent" tariff on imported semiconductors, a major export of Taiwan, South Korea, China and Japan. 
"We're going to be putting a very large tariff on chips and semiconductors," he told reporters at the White House.
Taiwanese giant TSMC -- the world's largest contract maker of chips, which counts Nvidia and Apple among its clients -- would be "exempt" from those tariffs as it has factories in the United States, Taipei said Thursday. 
While he did not offer a timetable for enactment of the new tech levies, Trump said Tuesday that fresh tariffs on imported pharmaceuticals, semiconductors and chips could be unveiled within the coming week.
The United States is "going to be very rich and it's companies like Apple, they're coming home," Trump said.
Trump specified further that "Apple will help develop and manufacture semiconductors and semiconductor equipment in Texas, Utah, Arizona and New York." 
He noted that if tech companies commit to manufacturing their wares in the United States, "there will be no charge."
Apple reported a quarterly profit of $23.4 billion in late July, topping forecasts despite facing higher costs due to Trump's sweeping levies.
aue/cdl/lb

sleep

Dangerous dreams: Inside internet's 'sleepmaxxing' craze

BY CALEIGH KEATING AND ANUJ CHOPRA WITH RACHEL BLUNDY IN LONDON

  • Another popular practice is taping of the mouth for sleep, promoted as a way to encourage nasal breathing.
  • From mouth taping to rope-assisted neck swinging, a viral social media trend is promoting extreme bedtime routines that claim to deliver perfect sleep -- despite scant medical evidence and potential safety risks.
From mouth taping to rope-assisted neck swinging, a viral social media trend is promoting extreme bedtime routines that claim to deliver perfect sleep -- despite scant medical evidence and potential safety risks.
Influencers on platforms including TikTok and X are fueling a growing wellness obsession popularly known as "sleepmaxxing," a catch-all term for activities and products aimed at optimizing sleep quality.
The explosive rise of the trend -- generating tens of millions of posts -- underscores social media's power to legitimize unproven health practices, particularly as tech platforms scale back content moderation.
One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air.
"Those who try it claim their sleep problems have significantly improved," said one clip on X that racked up more than 11 million views.
Experts have raised alarm about the trick, following a Chinese state broadcaster's report that attributed at least one fatality in China last year to a similar "neck hanging" routine.
Such sleepmaxxing techniques are "ridiculous, potentially harmful, and evidence-free," Timothy Caulfield, a misinformation expert from the University of Alberta in Canada, told AFP.
"It is a good example of how social media can normalize the absurd."
Another popular practice is taping of the mouth for sleep, promoted as a way to encourage nasal breathing. Influencers claim it offers broad benefits, from better sleep and improved oral health to reduced snoring.
But a report from George Washington University found that most of these claims were not supported by medical research.
Experts have also warned the practice could be dangerous, particularly for those suffering from sleep apnea, a condition that disrupts breathing during sleep.
Other unfounded tricks touted by sleepmaxxing influencers include wearing blue- or red-tinted glasses, using weighted blankets, and eating two kiwis just before bed.

'Damaging'

"My concern with the 'sleepmaxxing' trend -- particularly as it's presented on platforms like TikTok -- is that much of the advice being shared can be actively unhelpful, even damaging, for people struggling with real sleep issues," Kathryn Pinkham, a Britain-based insomnia specialist, told AFP.
"While some of these tips might be harmless for people who generally sleep well, they can increase pressure and anxiety for those dealing with chronic insomnia or other persistent sleep problems."
While sound and sufficient sleep is considered a cornerstone of good health, experts warn that the trend may be contributing to orthosomnia, an obsessive preoccupation with achieving perfect sleep.
"The pressure to get perfect sleep is embedded in the sleepmaxxing culture," said Eric Zhou of Harvard Medical School.
"While prioritizing restful sleep is commendable, setting perfection as your goal is problematic. Even good sleepers vary from night to night."
Pinkham added that poor sleep was often fueled by the "anxiety to fix it," a fact largely unacknowledged by sleepmaxxing influencers.
"The more we try to control sleep with hacks or rigid routines, the more vigilant and stressed we become -- paradoxically making sleep harder," Pinkham said.

Beauty over health

Many sleepmaxxing posts focus on enhancing physical appearance rather than improving health, reflecting an overlap with "looksmaxxing" -- another online trend that encourages unproven and sometimes dangerous techniques to boost sexual appeal.
Some sleepmaxxing influencers have sought to profit from the trend's growing popularity, promoting products such as mouth tapes, sleep-enhancing drink powders, and "sleepmax gummies" containing melatonin.
That may be in violation of legal norms in some countries like Britain, where melatonin is available only as a prescription drug.
The American Academy of Sleep Medicine has recommended against using melatonin to treat insomnia in adults, citing inconsistent medical evidence regarding its effectiveness.
Some medical experts also point to the placebo effect among insomnia patients using sleep medication -- when people report real improvement after taking a fake or inactive treatment because they believe it works.
"Many of these tips come from non-experts and aren't grounded in clinical evidence," said Pinkham.
"For people with genuine sleep issues, this kind of advice often adds pressure rather than relief."
burs-ac/mlm

Israel

Grok, is that Gaza? AI image checks mislocate news photographs

BY ALEXIS ORSINI AND DOUNIA MAHIEDDINE

  • "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."
  • This image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory.
  • "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."
This image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot's untrue response was widely shared online and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo. 
At a time when more and more internet users are turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology remains far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today, she weighs only nine. The only nutrition she gets to help her condition is milk, Modallala told AFP -- and even that's "not always available".
Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources."
The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics.
"We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said. 
In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach.
"Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."
AI does not necessarily seek accuracy -- "that's not the goal," the expert said.
Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.
That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and what happens during fine-tuning -- the so-called alignment phase -- which then determines what the model would rate as a good or bad answer.
"Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said.
"Its training data has not changed and neither has its alignment."
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For Diesbach, chatbots must never be used as tools to verify facts.
"They are not made to tell the truth," but to "generate content, whether true or false", he said. 
"You have to look at it like a friendly pathological liar -- it may not always lie, but it always could."
dou-aor/sbk/rlp

social

In Cuba, Castro's 'influencer' grandson causes a stir

BY LETICIA PINEDA

  • "El Necio," an online influencer, has argued that Sandro Castro "goes against the security of this country" and "against the ideals" of the revolution.
  • Cuban influencer Sandro Castro has chosen a very different path to his revolutionary grandfather Fidel, using his name to pursue online fame while occasionally poking fun at the island's dire shortages of food, medicine, power and fuel.
  • "El Necio," an online influencer, has argued that Sandro Castro "goes against the security of this country" and "against the ideals" of the revolution.
Cuban influencer Sandro Castro has chosen a very different path to his revolutionary grandfather Fidel, using his name to pursue online fame while occasionally poking fun at the island's dire shortages of food, medicine, power and fuel.
It is a pastime some find entertaining, even fair commentary, but the 33-year-old is coming under increasing scrutiny from those loyal to Cuba's communist project for disrespecting his ancestor's legacy.
For others locked in a daily struggle for survival, the younger Castro's high-flying lifestyle and apparent lack of empathy is offensive on a whole different level.
On his Instagram account, Sandro regales his 127,000 followers with images of him partying, at times with scantily-clad women, often with a beer in hand.
He is sometimes dressed as a monk or a vampire, sporting cat whiskers or the jersey of the Barcelona football club.
From time to time, he mocks the struggles engendered by the country's worst economic crisis in three decades.
"I woke up today with my favorite recipe, chicken with beer... but there is no chicken,' he says in one post while holding up a bottle of the national lager, Cristal.
He also jokes about the power outages that have plagued the island, suggestively addressing a woman with the words: "If I caught you like the UNE (electric company), I'd get you every four hours, Monday to Monday."
The character entertains some, annoys others, but never fails to make a splash.
Castro's followers jokingly refer to him as the "next president," but voices aligned with the communist government are demanding he be silenced. 
Loyalist historian and author Ernesto Limia complained on Facebook that Castro "does not respect the memory" of his famous grandfather, who led the revolution that toppled a dictatorship and installed a communist government.
"El Necio," an online influencer, has argued that Sandro Castro "goes against the security of this country" and "against the ideals" of the revolution.
Despite his famous name, some believe Castro may be pushing his luck.
Activists and critics in Cuba are often rounded up for sharing anti-government views, and several are serving sentences for crimes such as "contempt" or disseminating "enemy propaganda."

'Little toys'

Manuel Cuesta Morua, a dissident historian who has been detained multiple times for his democratic activism, said the Sandro phenomenon embodied "the distance of the grandchildren's generation from the original revolutionary project."
It also put Castro in stark contrast to the rest of his family, who unlike him enjoy their privileged status "discreetly," he said.
While Fidel Castro was alive, Cubans knew very little about his second wife Dalia Soto del Valle and their five sons -- one of whom is Sandro's father, Alexis Castro Soto del Valle, 63.
The family lived out of the public eye in Punto Cero, an extensive wooded area west of Havana with access controlled by the military.
In 2021, during the Covid-19 pandemic, Sandro came into the spotlight in a leaked video that showed him driving a luxurious Mercedes-Benz. 
"We are simple people, but every now and then we have to take out these little toys we have at home," he said in the clip that went viral and sparked public outrage, forcing him to apologize.
Three years later, he caused another stir by celebrating his birthday at a bar he owns in the capital -- its massive neon lights blazing -- and dancing on tables as the country reeled from the after-effects of a massive blackout.
lp/mlr/mlm

government

US government gets a year of ChatGPT Enterprise for $1

  • OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.
  • OpenAI on Wednesday said it was letting the US government use a version of ChatGPT designed for businesses for a year, charging just $1 for the service.
  • OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.
OpenAI on Wednesday said it was letting the US government use a version of ChatGPT designed for businesses for a year, charging just $1 for the service.
Federal workers in the executive branch will have access to ChatGPT Enterprise in a partnership with the US General Services Administration, according to the pioneering San Francisco-based artificial intelligence (AI) company.
"By giving government employees access to powerful, secure AI tools, we can help them solve problems for more people, faster," OpenAI said in a blog post announcing the alliance.
ChatGPT Enterprise does not use business data to train or improve OpenAI models and the same rule will apply to federal use, according to the company.
Earlier this year, OpenAI announced an initiative focused on bringing advanced AI tools to US government workers.
The news came with word that the US Department of Defense awarded OpenAI a $200 million contract to put generative AI to work for the military.
OpenAI planned to show how cutting-edge AI can improve administrative operations, such as how service members get health care, and also has cyber defense applications, the startup said in a post.
OpenAI has also launched an initiative to help countries build their own AI infrastructure, with the US government a partner in projects.
The tech firm's move to put its technology at the heart of national AI platforms around the world comes as it faces competition from Chinese rival DeepSeek.
DeepSeek's success in delivering powerful AI models at a lower cost has rattled Silicon Valley and multiplied calls for US big tech to protect its dominance of the emerging technology.
The OpenAI for Countries initiative was launched in June under the auspices of a drive -- dubbed "Stargate" -- announced by US President Donald Trump to invest up to $500 billion in AI infrastructure in the United States.
OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.
Projects are to involve "local as well as OpenAI capital."
gc/aha

shooting

Backlash after 'interview' with AI avatar of US school shooting victim

  • "It was more of a bizarre AI demonstration than an interview," wrote columnist Kirsten Fleming in the New York Post tabloid.
  • Independent journalist Jim Acosta faced a torrent of online criticism Wednesday after he posted an "interview" conducted with an AI avatar of a US school shooting victim.
  • "It was more of a bizarre AI demonstration than an interview," wrote columnist Kirsten Fleming in the New York Post tabloid.
Independent journalist Jim Acosta faced a torrent of online criticism Wednesday after he posted an "interview" conducted with an AI avatar of a US school shooting victim.
Former CNN White House chief correspondent Acosta interacted with a virtual likeness of Joaquin Oliver, one of the 17 people killed in the Parkland, Florida school shooting in 2018.
Acosta, a long-standing hate figure for some supporters of President Donald Trump -- who himself often derided the veteran Washington correspondent -- has long been an advocate for increased gun control.
The clip posted on Acosta's YouTube channel on August 4 to coincide with what would have been Oliver's 25th birthday has gathered more than 22,000 views.
On the Guy Benson Show on Fox News, conservative columnist Joe Concha said of the segment "It's just sick."
Acosta said that Oliver's parents Manuel and Patricia "have created an AI version of their son to deliver a powerful message on gun violence" after falling victim to one of the deadliest US mass shootings.
In the interview Acosta asks Oliver, who was killed aged 17, what happened to him.
Despite having the blessing of Oliver's parents, critics said the approach was tasteless and did not advance the campaign against gun violence.
"It was more of a bizarre AI demonstration than an interview," wrote columnist Kirsten Fleming in the New York Post tabloid.
"It's also false. And grotesque. Like a dystopian plot come to life."
In the clip, Oliver's likeness gives opinions on how to counter gun violence.
"I was taken from this world too early while at school due to gun violence," says a metallic, sped-up voice synthesized to sound like Oliver's.
"It's important to talk about these issues so we can create a safer future for everyone."
In an opinion piece published Wednesday, journalism institute Poynter suggested that Acosta's move from major media outlet CNN to an independent operation, where he works without an editorial support mechanism, was behind his lapse in judgment.
"I hope Jim Acosta decides to phone a friend next time. We've all got a lot of figuring out to do," it said.
It is not the first time artificial intelligence has been used to highlight the impact of the Parkland shooting.
Last year US lawmakers heard recreations of Oliver's voice and those of other victims in AI phone call recordings demanding to know why action had not been taken on gun control. 
On February 14, 2018, then 19-year-old Nikolas Cruz walked into Marjory Stoneman Douglas High School in Parkland, a town north of Miami, carrying a high-powered AR-15 rifle. 
He had been expelled from the school a year earlier for disciplinary reasons.
In a matter of nine minutes, he killed 14 students and three school employees, then fled by mixing in with people frantically escaping the gruesome scene.
Police arrested Cruz shortly thereafter as he walked along the street. He pleaded guilty to the massacre in 2021 and was sentenced to life without parole a year later.
rh-gw/bjt

technology

OpenAI releases free, downloadable models in competition catch-up

  • Meta touts its open-source approach to AI, and Chinese AI startup DeepSeek rattled the industry with its low-cost, high-performance model boasting an open weight approach that allows users to customize the technology.
  • OpenAI on Tuesday released two new artificial intelligence (AI) models that can be downloaded for free and altered by users, to challenge similar offerings by US and Chinese competition.
OpenAI on Tuesday released two new artificial intelligence (AI) models that can be downloaded for free and altered by users, to challenge similar offerings by US and Chinese competition.
The release of gpt-oss-120b and gpt-oss-20b "open-weight language models" comes as the ChatGPT-maker is under pressure to share inner workings of its software in the spirit of its origin as a nonprofit.
"Going back to when we started in 2015, OpenAI's mission is to ensure AGI (Artificial General Intelligence) that benefits all of humanity," said OpenAI chief executive Sam Altman.
An open-weight model, in the context of generative AI, is one in which the trained parameters are made public, enabling users to fine-tune it.
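In practice, that means the weight files can be downloaded and run or fine-tuned on one's own hardware. A minimal sketch using the Hugging Face transformers library follows; the repository id "openai/gpt-oss-20b" and the hardware requirements (sufficient GPU memory, the accelerate package installed) are assumptions for illustration, not confirmed details of the release.

```python
# Rough sketch of running an open-weight model locally with Hugging Face transformers.
# The repository id below is an assumption -- check the actual release page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what an open-weight language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```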
Meta touts its open-source approach to AI, and Chinese AI startup DeepSeek rattled the industry with its low-cost, high-performance model boasting an open-weight approach that allows users to customize the technology.
"This is the first time that we're releasing an open-weight model in language in a long time, and it's really incredible," OpenAI co-founder and president Greg Brockman said during a briefing with journalists.
The new, text-only models deliver strong performance at low cost, according to OpenAI, which said they are suited for AI jobs like searching the internet or executing computer code, and are designed to be easy to run on local computer systems.
"We are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products," Altman said.
OpenAI said it is working with partners including French telecommunications giant Orange and cloud-based data platform Snowflake on real-world uses of the models.
The open-weight models have been tuned to thwart being used for malicious purposes, according to OpenAI.
Altman early this year said his company had been "on the wrong side of history" when it came to being open about how its technology works.
He later announced that OpenAI will continue to be run as a nonprofit, abandoning a contested plan to convert into a for-profit organization.
The structural issue had become a point of contention, with major investors pushing for better returns.
That plan faced strong criticism from AI safety activists and co-founder Elon Musk, who sued the company he left in 2018, claiming the proposal violated its founding philosophy.
In the revised plan, OpenAI's money-making arm will be open to generate profits but will remain under the nonprofit board's supervision.
juj-gc/des/bgs

technology

Meta says working to thwart WhatsApp scammers

  • WhatsApp detected and banned more than 6.8 million accounts linked to scam centers, most of them in Southeast Asia, according to Meta.
  • Meta on Tuesday said it shut nearly seven million WhatsApp accounts linked to scammers in the first half of this year and is ramping up safeguards against such schemes.
Meta on Tuesday said it shut nearly seven million WhatsApp accounts linked to scammers in the first half of this year and is ramping up safeguards against such schemes.
"Our team identified the accounts and disabled them before the criminal organizations that created them could use them," WhatsApp external affairs director Clair Deevy said.
Often run by organized gangs, the scams range from bogus cryptocurrency investments to get-rich-quick pyramid schemes, WhatsApp executives said in a briefing.
"There is always a catch and it should be a red flag for everyone: you have to pay upfront to get promised returns or earnings," Meta-owned WhatsApp said in a blog post.
WhatsApp detected and banned more than 6.8 million accounts linked to scam centers, most of them in Southeast Asia, according to Meta.
WhatsApp and Meta worked with OpenAI to disrupt a scam traced to Cambodia that used ChatGPT to generate text messages containing a link to a WhatsApp chat to hook victims, according to the tech firms.
Meta on Tuesday began prompting WhatsApp users to be wary when added to unfamiliar chat groups by people they don't know.
New "safety overviews" provide information about the group and tips on spotting scams, along with the option of making a quick exit.
"We've all been there: someone you don’t know attempting to message you, or add you to a group chat, promising low-risk investment opportunities or easy money, or saying you have an unpaid bill that's overdue," Meta said in a blog post.
"The reality is, these are often scammers trying to prey on people's kindness, trust and willingness to help -- or, their fears that they could be in trouble if they don't send money fast."
gc-juj/bgs

conflict

Banned Russian media sites 'still accessible' across EU: report

  • But the ISD report criticised the European Commission for its "failure" to maintain a "definitive list of different domain iterations" -- or website addresses -- associated with each media outlet.
  • Websites of banned Russian media outlets can still be easily accessed across the EU in the "overwhelming majority" of cases, experts said Tuesday, denouncing the bloc's "failure" to publish full lists of the websites involved.
Websites of banned Russian media outlets can still be easily accessed across the EU in the "overwhelming majority" of cases, experts said Tuesday, denouncing the bloc's "failure" to publish full lists of the websites involved.
After Russia invaded Ukraine in February 2022, EU authorities banned Kremlin-controlled media from broadcasting in the bloc, including online, to counter "disinformation". 
But more than three years on, "sanctioned outlets are largely still active and accessible" across member states, said a report released by the Institute for Strategic Dialogue (ISD), a London-based think tank. 
"Russian state media continues to maintain a strong online presence, posing a persistent challenge to Western democracies," the report said, with blocks by internet service providers "largely ineffective".
EU sanctions banned RT, previously known as Russia Today, and Sputnik media organisations as well as other state-controlled media accused of "information warfare".
The ISD report covered Germany, France, Italy, Poland, the Czech Republic and Slovakia, testing the top three internet service providers in each.
It identified 26 media outlets under sanctions and tried to view 58 associated domains. In 76 percent of tests, providers failed to block access.
EU member states are responsible for ensuring blocks are applied by internet service providers.
But the ISD report criticised the European Commission for its "failure" to maintain a "definitive list of different domain iterations" -- or website addresses -- associated with each media outlet.
It said this left countries and internet service providers "without the guidance needed for effective and targeted implementation".
"The issue is when they sanction Russian state media, they mention the outlet that they are sanctioning -- so Russia Today, Sputnik, etc -- but what they don't list is what domain falls under this entity," said the report's author, Pablo Maristany de las Casas.
"If the European Commission were to list the different domains that are known to be linked to these entities, that would make it much easier for member states and the internet service providers in those member states to enforce these blocks," he said.
The report urged the European Commission to post a "continuously updated and publicly accessible list" and include it in sanctions packages and on its online sanctions dashboard.
A commission spokesperson told AFP: "It is up to the relevant providers to block access to websites of outlets covered by the sanctions, including subdomains or newly created domains."

Grey zone and mirrors

Enforcement needs to be more agile because Russia has sought to circumvent sanctions, the report's author said.
"Some outlets, for example, RT, use so-called mirror domains" where they "simply copy the contents of the blocked site into a new URL -- a new link -- to circumvent those sanctions," he said.
The report found that Slovakia, whose Prime Minister Robert Fico is known for his pro-Russia positions, performed the worst on enforcement, with no blocks at all.
Slovakia's legal mandates to block pro-Russian websites expired in 2022 after lawmakers failed to extend them.
Poland was the second worst, while France and Germany were most effective overall.
Most sanctioned domains had little traction in the bloc, with under 1,000 monthly views, but Germany, with its large Russian diaspora, was the exception: three domains including RT had over 100,000 monthly visitors from there.
The report's author spotted another "loophole": numerous accounts on X posting links to banned media, mainly aimed at French and German speakers.
In May, such accounts posted almost 50,000 links, nearly all to RT-affiliated sites, the report found.
X largely blocks official media accounts, the author said, but "with these anonymous accounts that only repost this kind of content, there seems to be a grey zone and it seems not to be withheld in the EU."
am-del/jkb

AI

AI search pushing an already weakened media ecosystem to the brink

BY THOMAS URBAIN

  • This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions. 
  • Generative artificial intelligence assistants like ChatGPT are cutting into traditional online search traffic, depriving news sites of visitors and impacting the advertising revenue they desperately need, in a crushing blow to an industry already fighting for survival.
Generative artificial intelligence assistants like ChatGPT are cutting into traditional online search traffic, depriving news sites of visitors and impacting the advertising revenue they desperately need, in a crushing blow to an industry already fighting for survival.
"The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon," warned Matt Karolian, vice president of research and development at Boston Globe Media. 
"Publishers need to build their own shelters or risk being swept away."
While data remains limited, a recent Pew Research Center study reveals that AI-generated summaries now appearing regularly in Google searches discourage users from clicking through to source articles. 
When AI summaries are present, users click on suggested links half as often compared to traditional searches.
This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions. 
According to Northeastern University professor John Wihbey, these trends "will accelerate, and pretty soon we will have an entirely different web."
The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions. 
But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organizations.

Limited lifelines

The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said. 
However, "these remain incredibly modest compared to other platforms, including even smaller search engines."
Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.
To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimization) -- a technique that replaces traditional SEO (Search Engine Optimization). 
This involves providing AI models with clearly labeled content, good structure, comprehensible text, and strong presence on social networks and forums like Reddit that get crawled by AI companies.
But a fundamental question remains: "Should you allow OpenAI crawlers to basically crawl your website and your content?" asks Thomas Peham, CEO of optimization startup OtterlyAI.
Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content.
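Such blocks are typically declared in a site's robots.txt file against the crawler's user agent (OpenAI's web crawler identifies itself as GPTBot). A minimal sketch, using Python's standard library, of checking whether a site currently allows such a crawler; the URL is a placeholder.

```python
# Check whether a site's robots.txt allows a given AI crawler.
# "https://example.com" is a placeholder; GPTBot is OpenAI's published crawler user agent.
from urllib.robotparser import RobotFileParser

def crawler_allowed(site: str, user_agent: str = "GPTBot") -> bool:
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt file
    return parser.can_fetch(user_agent, f"{site}/")

print(crawler_allowed("https://example.com"))
```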
"We just need to ensure that companies using our content are paying fair market value," argued Danielle Coffey, who heads the News/Media Alliance trade organization.
Some progress has been made on this front. Licensing agreements have emerged between major players, such as the New York Times and Amazon, Google and Associated Press, and Mistral and Agence France-Presse, among others.
But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.

Let them crawl

Publishers face a dilemma: blocking AI crawlers protects their content but reduces exposure to potential new readers. 
Faced with this challenge, "media leaders are increasingly choosing to reopen access," Peham observed.
Yet even with open access, success isn't guaranteed. 
According to OtterlyAI data, media outlets represent just 29 percent of citations offered by ChatGPT, trailing corporate websites at 36 percent. 
And while Google search has traditionally privileged sources recognized as reliable, "we don't see this with ChatGPT," Peham noted.
The stakes extend beyond business models.
According to the Reuters Institute's 2025 Digital News Report, about 15 percent of people under 25 now use generative AI to get their news.
Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about information origins and credibility -- much like social media did before it.
"At some point, someone has to do the reporting," Karolian said. "Without original journalism, none of these AI platforms would have anything to summarize."
Perhaps with this in mind, Google is already developing partnerships with news organizations to feed its generative AI features, suggesting potential paths forward.
"I think the platforms will realize how much they need the press," predicted Wihbey -- though whether that realization comes soon enough to save struggling newsrooms remains an open question.
tu/arp/jgc

X

Musk's X accuses Britain of online safety 'overreach'

  • Many people resort to virtual private networks (VPNs) to get around territorial restrictions on access to online content.
  • Elon Musk-owned social network X on Friday accused Britain's government of "overreach" with a new law designed to protect children from harmful online content such as pornography.
Elon Musk-owned social network X on Friday accused Britain's government of "overreach" with a new law designed to protect children from harmful online content such as pornography.
The Online Safety Act's "laudable intentions are at risk of being overshadowed by the breadth of its regulatory reach," X said in a post to its Global Government Affairs account.
"A plan ostensibly intended to keep children safe is at risk of seriously infringing on the public's right to free expression," it added, arguing that the impact "shows what happens when oversight becomes overreach".
Beyond the law, X criticised a separate new code of conduct for online platforms as "parallel and duplicative", as well as questioning the free-speech impact of a new police unit tasked with monitoring social media.
The social network nevertheless last week introduced formal systems for age verification in response to the British law as well as new rules in Ireland and the wider European Union.
Its options range from estimating the age of a user based on the date their account was created or their email address, to requesting a selfie whose age would be determined by artificial intelligence, or uploading an official ID document.
Media regulator Ofcom says such age checks -- required since July 25 -- must be "technically accurate, robust, reliable and fair".
Platforms failing to comply risk fines of up to 18 million pounds ($24 million) or 10 percent of their global revenue -- whichever is larger.
Serious infringers could be blocked from British territory.
The fight over age verification to access sensitive content in Britain echoes months of debate in France over new rules requiring pornography sites to verify users' ages -- a step also required by many US states.
While hailed by child safety campaigners, opponents say such requirements risk compromising legitimate users' privacy -- or even exposing them to scams such as identity theft if the personal details used to verify their age were to be hacked.
Many people resort to virtual private networks (VPNs) to get around territorial restrictions on access to online content.
The most popular free apps on Apple's UK download store since last week have been VPNs, with one, Proton, reporting earlier this week a 1,800 percent rise in downloads, according to British media.
dax-tgb/rl

conflict

Thai-Cambodian cyberwarriors battle on despite truce

BY NATTAKORN PLODDEE WITH SUY SE IN PHNOM PENH

  • These included spamming reports to online platforms and distributed denial of service (DDoS) attacks -- halting access to a website by overloading its servers with traffic.
  • Thailand and Cambodia may have reached a ceasefire to halt their bloody border clashes, but cyber warriors are still battling online, daubing official websites with obscenities, deluging opponents with spam and taking pages down.
Thailand and Cambodia may have reached a ceasefire to halt their bloody border clashes, but cyber warriors are still battling online, daubing official websites with obscenities, deluging opponents with spam and taking pages down.
The five-day conflict left more than 40 people dead and drove more than 300,000 from their homes.
It also kicked off a disinformation blitz as Thai and Cambodian partisans alike sought to boost the narrative that the other was to blame.
Thai officials recorded more than 500 million instances of online attacks in recent days, government spokesperson Jirayu Huangsab said on Wednesday.
These included spamming reports to online platforms and distributed denial of service (DDoS) attacks -- halting access to a website by overloading its servers with traffic.
"It's a psychological war," Cambodian government spokesman Pen Bona told AFP. 
"There's a lot of fake news and it wouldn't be strange if it came from social media users, but even official Thai media outlets themselves publish a lot of fake news."

Disinformation

Freshly created "avatar" accounts have targeted popular users or media accounts in Thailand.
On July 24, a Facebook post by suspended Thai prime minister Paetongtarn Shinawatra condemning Cambodia's use of force was bombarded with 16,000 comments, many of them repeating the same message in English: "Queen of drama in Thailand".
Another, similar post by Paetongtarn on July 26 was hit with 31,800 comments, many reading: "Best drama queen of 2025", with snake and crocodile emojis.
Government spokesman Jirayu said the attacks were aimed at "sowing division among Thais" as well as outright deception.
Similarly, Cambodian government spokesman Pen Bona said fake news from Thailand aimed to divide Cambodia.
Apparent bot accounts have also published and shared disinformation, adding to the confusion.
Videos and images from a deadly Cambodian rocket attack on a petrol station in Thailand were shared with captions saying they showed an attack on Cambodian soil.
Other posts, including one shared by the verified page of Cambodian Secretary of State Vengsrun Kuoch, claimed Thai forces had used chemical weapons.
The photo in the post in fact shows an aircraft dropping fire retardants during the Los Angeles wildfires in January 2025.
AFP contacted Vengsrun Kuoch for comment but did not receive a reply.

Obscenities

Hackers from both sides have broken into state-run websites to deface pages with mocking or offensive messages.
One of the targets was NBT World, an English-language news site run by the Thai government's public relations department. 
Headlines and captions on articles about acting prime minister Phumtham Wechayachai were replaced with obscenities.
Thai hackers, meanwhile, changed the login page of Sachak Asia Development Institute, a Cambodian education facility, to show an image of influential ex-leader Hun Sen edited to have a ludicrously exaggerated hairstyle.
The image was a reference to a video -- much mocked in Thailand -- of Cambodian youths sporting the same hairstyle visiting one of the ancient temples that were the focus of the fighting.
Online attacks -- whether disinformation messaging or full-blown cyber strikes to disrupt an adversary's infrastructure or services -- are a standard feature of modern warfare.
In the Ukraine conflict, Kyiv and its allies have long accused Russia of state-backed cyberwarfare, disrupting government and private IT systems around the world.
And earlier this week, Ukrainian and Belarusian hacker groups claimed responsibility for a cyberattack on Russia's national airline that grounded dozens of flights.
Jessada Salathong, a mass communications professor at Thailand's Chulalongkorn University, said the border clashes had invoked the full spectrum of information disorder, carried out by both sides.
"In an era when anyone can call themselves media, information warfare simply pulls in everyone," he told AFP.
tii-suy/pdw/lb