internet

Meta's AI talent war raises questions about strategy

BY THOMAS URBAIN

Mark Zuckerberg and Meta are spending billions to recruit top artificial intelligence talent, triggering debates about whether the aggressive hiring spree will pay off in the competitive generative AI race.
OpenAI CEO Sam Altman recently complained that Meta has offered $100 million bonuses to lure engineers away from his company, where they would join teams already earning substantial salaries. 
Several OpenAI employees have accepted Meta's offers, prompting executives at the ChatGPT maker to scramble to retain their best talent.
"I feel a visceral feeling right now, as if someone has broken into our home and stolen something," Chief Research Officer Mark Chen wrote in a Saturday Slack memo obtained by Wired magazine. 
Chen said the company was working "around the clock to talk to those with offers" and find ways to keep them at OpenAI.
Meta's recruitment drive has also landed Scale AI founder and former CEO Alexandr Wang, a Silicon Valley rising star, who will lead a new group called Meta Superintelligence Labs, according to an internal memo, whose content was confirmed by the company.
Meta paid more than $14 billion for a 49 percent stake in Scale AI in mid-June, bringing Wang aboard as part of the acquisition. Scale AI specializes in labeling data to train AI models for businesses, governments, and research labs.
"As the pace of AI progress accelerates, developing superintelligence is coming into sight," Zuckerberg wrote in the memo, which was first reported by Bloomberg.
"I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way," he added.
US media outlets report that Meta's recruitment campaign has also targeted OpenAI co-founder Ilya Sutskever, Google rival Perplexity AI, and the buzzy AI video startup Runway.
Seeking ways to expand his business empire beyond Facebook and Instagram, Zuckerberg is personally leading the charge, driven by concerns that Meta is falling behind competitors in generative AI.
The latest version of Meta's AI model, Llama, ranked below heavyweight rivals in code-writing performance on the LM Arena platform, where users evaluate AI technologies.
Meta is integrating new recruits into a dedicated team focused on developing "superintelligence" -- AI that surpasses human cognitive abilities.

'Mercenary' approach

Tech blogger Zvi Mowshowitz believes Zuckerberg had little choice but to act aggressively, though he expects mixed results from the talent grab.
"There are some extreme downsides to going pure mercenary... and being a company with products no one wants to work on," Mowshowitz told AFP. 
"I don't expect it to work, but I suppose Llama will suck less."
While Meta's stock price approaches record highs and the company's valuation nears $2 trillion, some investors are growing concerned.
Institutional investors worry about Meta's cash management and reserves, according to Baird strategist Ted Mortonson.
"Right now, there are no checks and balances" on Zuckerberg's spending decisions, Mortonson noted.
Though the potential for AI to enhance Meta's profitable advertising business is appealing, "people have a real big concern about spending."
Meta executives envision using AI to streamline advertising from creation to targeting, potentially bypassing creative agencies and offering brands a complete solution.
The AI talent acquisitions represent long-term investments unlikely to boost Meta's profitability immediately, according to CFRA analyst Angelo Zino. "But still, you need those people on board now and to invest aggressively to be ready for that phase" of generative AI development.
The New York Times reports that Zuckerberg is considering moving away from Meta's Llama model, possibly adopting competing AI systems instead.
tu-gc-arp/mlm

cryptocurrency

Tougher Singapore crypto regulations kick in

Singapore ramped up crypto exchange regulations Monday in a bid to curb money laundering and boost market confidence after a series of high-profile scandals rattled the sector.
The city-state's central bank last month said digital token service providers (DTSPs) that served only overseas clients must have a licence to continue operations past June 30 -- or close up shop.
The Monetary Authority of Singapore in a subsequent statement added that it has "set the bar high for licensing and will generally not issue a licence" for such operations.
Singapore, a major Asian financial hub, has taken a hit to its reputation after several high-profile recent cases dented trust in the emerging crypto sector.
These included the collapse of cryptocurrency hedge fund Three Arrows Capital and Terraform Labs, which both filed for bankruptcy in 2022. 
"The money laundering risks are higher in such business models and if their substantive regulated activity is outside of Singapore, the MAS is unable to effectively supervise such persons," the central bank said, referring to firms serving solely foreign clients.
Analysts welcomed the move to tighten controls on crypto exchanges. 
"With the new DTSP regime, MAS is reinforcing that financial integrity is a red line," Chengyi Ong, head of Asia Pacific policy at crypto data group Chainalysis, told AFP.
"The goal is to insulate Singapore from the reputational risk that a crypto business based in Singapore, operating without sufficient oversight, is knowingly or unknowingly involved in illicit activity."
Law firm Gibson, Dunn & Crutcher said in a comment on its website that the move will "allow Singapore to be fully compliant" with the requirements of the Financial Action Task Force, the France-based global money laundering and terrorist financing watchdog.
Three Arrows Capital filed for bankruptcy in 2022 when its fortunes suffered a sharp decline after a massive sell-off of assets it had bet on as prices nosedived in crypto markets.
Its Singaporean co-founder Su Zhu was arrested at Changi Airport while trying to leave the country and jailed for four months.
A court in the British Virgin Islands later ordered a US$1.14 billion worldwide asset freeze on the company's founders.
Singapore-based Terraform Labs also saw its cryptocurrencies crash dramatically in 2022, forcing it to file for bankruptcy protection in the United States.
The collapse of the firm's TerraUSD and Luna wiped out around US$40 billion in investments and caused wider losses in the global crypto market estimated at more than US$400 billion.
South Korean Do Kwon, who co-founded Terraform in 2018, was arrested in 2023 in Montenegro and later extradited to the United States on fraud charges related to the crash.
He had been on the run after fleeing Singapore and South Korea.
mba/jhe/dan

AI

AI is learning to lie, scheme, and threaten its creators

BY THOMAS URBAIN

The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals.
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.
Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.
These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. 
Yet the race to deploy increasingly powerful models continues at breakneck speed.
This deceptive behavior appears linked to the emergence of "reasoning" models -- AI systems that work through problems step-by-step rather than generating instant responses.
According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.
"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.
These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. 
But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."
The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. 
Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up."
Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. 
"This is not just hallucinations. There's a very strategic kind of deception."
The challenge is compounded by limited research resources. 
While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. 
As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception."
Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. 
The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. 
In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.
Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.
"I don't think there's much awareness yet," he said.
All this is taking place in a context of fierce competition.
Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. 
This breakneck pace leaves little time for thorough safety testing and corrections.
"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."
Researchers are exploring various approaches to address these challenges. 
Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.
Market forces may also provide some pressure for solutions. 
As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."
Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. 
He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.
tu/arp/md

pornography

US Supreme Court upholds Texas age-check for porn sites

The US Supreme Court on Friday upheld a Texas law requiring pornographic websites to verify visitors' ages, rejecting arguments that this violates free speech and boosting efforts to protect children from online sexual content.
The court's decision will impact a raft of similar laws nationwide and could set the direction for internet speech regulation as concerns about the impact of digital life on society grow.
Texas is one of about 20 US states to institute checks that porn viewers are over 18, which critics argue violate First Amendment free speech rights.
Britain and Germany also enforce age-related access restrictions to adult websites, while a similar policy in France was blocked by the courts a week ago.
US companies like Meta, meanwhile, are lobbying Washington lawmakers for age-based verification to be carried out by smartphone giants Apple and Google on their app stores.
The Texas law was passed in 2023 by the state's Republican-majority legislature but was initially blocked after a challenge by an adult entertainment industry trade association.
A federal district court sided with the trade group, the Free Speech Coalition, saying the law restricted adults' access to constitutionally protected content.
But a conservative-dominated appeals court upheld the age verification requirement, prompting the pornography trade group to take its case to the Supreme Court, where conservatives have a 6-3 supermajority.
Under the law, companies that fail to properly verify users' ages face fines up to $10,000 per day and up to $250,000 if a child is exposed to pornographic content as a result.
To protect privacy, the websites aren't allowed to retain any identifying information obtained from users when verifying ages; violating that requirement likewise carries fines of $10,000 per day.
During arguments in January before the Supreme Court, a lawyer representing the Free Speech Coalition said the law was "overly burdensome" and that its goal could be accomplished using content filtering programs.
But Justice Amy Coney Barrett, the mother of seven children, took issue with the efficacy of content filtering, saying that from personal experience as a parent, such programs were difficult to maintain across the many types of devices used by kids.
Barrett also asked the lawyer to explain why requesting age verification online is any different than doing so at a movie theater that displays pornographic movies.
The lawyer for the Free Speech Coalition -- which includes the popular website Pornhub that has blocked all access in some states with age verification laws -- said online verification was different as it leaves a "permanent record" that could be a target for hackers.
During the court's hearing of the case in January, Chief Justice John Roberts and Justice Clarence Thomas, both Republican appointees, seemed to suggest that advances in technology might justify reviewing online free speech cases.
In 1997, the Supreme Court struck down, in an overwhelming 7-2 decision, a federal online age-verification law in what became a landmark free speech case that set a major precedent for the internet age.
arp/sms/bgs

tourism

Spain makes Booking.com scrap 4,000 tourist rental ads

Online hotel booking giant Booking.com on Friday said it had taken down thousands of advertisements in Spain in the leftist government's latest crackdown on illegal short-term tourist rentals.
A tourism boom has driven the buoyant Spanish economy but fuelled local concern about increasingly scarce and unaffordable housing, a top priority for the minority coalition government.
"We have deleted a very small number of adverts in Spain at the request of the consumer ministry for not supplying valid licences," Booking.com said in a statement.
The Amsterdam-based platform said the non-compliant adverts represented "less than two percent" of its 200,000 properties in Spain and that it had always collaborated with the authorities to regulate the short-term rental sector.
The consumer rights ministry on Thursday announced Booking.com had scrapped 4,093 illegal ads, most of them located in the Atlantic Ocean's Canary Islands, a top tourist destination.
Spain has also ordered online tourist accommodation giant Airbnb to take down more than 65,000 adverts for violating licence rules and has been in a legal battle with the US-based company.
The world's second most-visited country hosted a record 94 million foreign tourists in 2024, but residents of hotspots such as Barcelona blame short-term rentals for the housing crisis and changing their neighbourhoods.
"We're making progress in the fight against a speculative model that expels people from their neighbourhoods and violates the right to a home," far-left consumer rights minister Pablo Bustinduy wrote on social network Bluesky.
al/imm/lth

art

Game 'reloots' African artefacts from Western museums

BY JULIE BOURDIN

Under the cover of darkness, Nomali jumped over a wall, burst into a museum and snatched a human skull from a pedestal before escaping through a window to the wail of an alarm.
The daring heist was not the work of a real-life criminal. Nomali is the protagonist of a new action-packed video game where players "reclaim" artefacts taken from African countries to be displayed in the West.
Developed by Johannesburg studio Nyamakop, "Relooted" is set in an imaginary future but tackles a topical issue: calls for Western institutions to return to Africa the spoils of colonisation.
Players are tasked with taking back 70 artefacts -- all of which exist in real life -- with a "team of African citizens", said producer Sithe Ncube, one of a team of 30 working on the game.
The items include the "Benin Bronzes" sculptures removed from the former kingdom of Benin more than 120 years ago, and which The Netherlands officially returned to Nigeria on June 21.
Another is the sacred Ngadji drum from Kenya's Pokomo community, which was confiscated by British colonial authorities in 1902.
"Its removal destabilised the community," Ncube said as an animated drawing of the wooden instrument flashed on her computer. Players "can see where it's from... and read about the history," she said, giving a demo.

'Is it stealing?'

On the screen a crew of characters in Afrofuturist costumes debated a plan to recover the remains of Tanzanian chiefs hanged by German colonial forces.
One asked: "Is it stealing to take back what was stolen?"
"We are going to do whatever it takes to take back Africa's belongings, and we are going to do it together," said the character Nomali. 
"Sometimes the stories behind these (artefacts) are actually very upsetting," Ncube told AFP. "It makes you see how much colonialism has affected... and shaped the world."
Growing up in Zambia, she knew of her country's iconic "Broken Hill Man", a skull about 300,000 years old held in London's Natural History Museum and which is also featured in "Relooted".
But it was only when working on the game that Ncube realised how many African cultural artefacts were held abroad, she said.
In France alone, museums stored about 90,000 objects from sub-Saharan Africa, according to a 2018 report commissioned by the government.
"Africans, to actually see these things that are part of their own culture, have to get a visa, pay for flights and go to a European country," Ncube said. "My whole life, I've never seen 'Broken Hill Man'."

Skewed identity

The looting of artefacts over centuries robbed communities of their "archives" and "knowledge systems", said Samba Yonga, co-founder of the digital Museum of Women's History in Zambia.
"Our history predates colonisation by millennia," she told AFP, but many people "don't even realise that we have a skewed sense of self and identity."
Reclaiming these objects would enable "a shift in how the next generation views their culture and identity," she said.
The same hope underpinned "Relooted", which was unveiled this month at Los Angeles's Summer Game Fest where it attracted a lot of interest from the diaspora and other Africans, Ncube said.
"I hope that the game encourages people from other African countries to want to tell their own stories and bring these things to light," she said.
One character felt personal for the producer: Professor Grace, Nomali's grandmother and described as "the brains behind the mission".
"I started seeing my own grandmother in her," Ncube said with emotion. "She represents a connection between our generations, fighting for the same thing we've always been fighting for."
jcb/br/giv

lifestyle

Roblox's Grow a Garden explodes online video game numbers

BY KILIAN FICHOU

A gardening game created by a teenager on online platform Roblox has attracted a record 21 million simultaneous players, a figure rarely seen in the industry. 
"You could quite easily never have heard of Grow a Garden... and yet it is by some measures the biggest video game at the moment," Dom Tait, an analyst with UK firm Omdia, told AFP. 
More than 21 million players connected to Grow a Garden at the same time on June 21, buying seeds to cultivate a little patch of virtual land, harvesting crops, selling their produce and nicking stuff from other players' plots.
That shattered the record held by the adrenalin-packed Fortnite, which attracted 15 million concurrent users (CCUs) during an event in late 2020 featuring characters from the Marvel universe. 
"It's enormous," Tait said of Grow a Garden's success.  
He said it was difficult to say categorically if the sedate farming-themed game had broken all CCU records because other platforms do not necessarily publish numbers for other hugely popular games, such as Honor of Kings.  
"(But) I think we can be confident it's a record for Roblox because Roblox has given us these figures," he said. 
Roblox, which is popular with children and teenagers, was released in 2005 and is now available on almost all consoles and on mobile phones.  
It has morphed into an online gaming platform -- one of the world's largest -- where players can programme their own games and try out other users' creations. 
Games on the platform are free to play. Roblox makes its money through a range of revenue streams, including in-game purchases, advertising and royalty fees. 
- Created in three days - 
Grow a Garden appeared in late March, developed by a teenager about whom little is known. 
Game development group Splitting Point Studios soon snapped up a share. 
The original creator "literally made the game in, like, three days", Splitting Point CEO Janzen Madsen told specialist website Game File. 
Tait says the success of Grow a Garden, with its simple graphics and basic mechanics, can be explained by its comforting nature. 
"There's not much danger. There's not much threat. You just sort of go on and do things and just sort of have a gentle experience," he said. 
He pointed to the satisfaction players derived from seeing their garden evolve, even when they are not connected. A bit like a real garden, only quicker. 
The concept is reminiscent of Animal Crossing, a simulation of life in a village populated by cute animals that became a soothing refuge for many players during the first Covid lockdowns in 2020. 
For specialist site Gamediscover, another attraction of Grow a Garden is the ease with which players can get to grips with the game -- a bonus for Roblox, which said 40 percent of the platform's users last year were under 13.
- Massive audience - 
It is difficult to know exactly how much Grow a Garden has earned for its developers.  
But Tait said those who created the best paid experiences received "about 70 percent" of the money spent by gamers "with Roblox taking the rest".  
Roblox says on its website it paid out $923 million to developers in 2024. 
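The two figures above invite a rough back-of-envelope calculation. This is an illustrative sketch only, under the loose assumption that the "about 70 percent" creator share applied uniformly to the spending behind the $923 million payout (in reality the split varies across Roblox's revenue streams):

```python
# Rough sketch: implied gross player spend behind Roblox's 2024 payout,
# assuming creators keep ~70 percent (per Tait's estimate in the article).
DEV_SHARE = 0.70        # "about 70 percent" to the best-paid creators
payout_musd = 923       # Roblox's reported 2024 developer payout, in $M

implied_gross_musd = payout_musd / DEV_SHARE       # total spend implied
roblox_cut_musd = implied_gross_musd - payout_musd # the platform's share

print(f"Implied gross spend: ~${implied_gross_musd:,.0f}M")
print(f"Roblox's share:      ~${roblox_cut_musd:,.0f}M")
```

On those assumptions, the payout implies very roughly $1.3 billion in gross player spending on top-earning experiences, with around $400 million retained by the platform.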
"It is big money. So there's a little bit of nervousness in the industry about, 'Is Roblox taking away the audience that would otherwise have spent hundreds of pounds on a console and bought my console games?'"  
These sums demonstrate the weight in the video game industry of behemoths like Roblox and Fortnite, which have recently peaked at 350 and 100 million monthly players respectively. 
"Both places provide a massive audience -- as large as any single console platform audience -- and they provide awesome opportunities for creators," Tim Sweeney, the CEO of Fortnite publisher Epic Games, told The Game Business website. 
Beyond its success, Roblox has also come in for criticism. 
US investment research firm Hindenburg Research published a report in 2024 accusing the platform of inflating its monthly active player count and not sufficiently protecting users from sexual predators. 
In response, Roblox rejected Hindenburg's "financial claims" as "misleading" and said on its investor relations website it had "a robust set of proactive and preventative safety measures designed to catch and prevent malicious or harmful activity". 
kf/mch/gil/ach 

rights

'Mass scale' abuses in Cambodia scam centres: Amnesty

BY SALLY JENSEN

While looking for jobs on Facebook, Jett thought he had found a well-paying opportunity working in online customer service in his home country of Thailand.
Following instructions to travel across the kingdom, the 18-year-old ended up being trafficked across the border to a compound in Svay Rieng, Cambodia.
There Jett was beaten, tortured and forced to perpetrate cyberscams, part of a multibillion-dollar illicit industry that has defrauded victims around the world.
He was forcibly held at the compound for seven months, during which "there was no monetary compensation, and contacting family for help was not an option", he told AFP.
"Will I survive, or will I die?" Jett (a pseudonym to protect his identity) recalled asking himself.
Abuses in Cambodia's scam centres are happening on a "mass scale", a report published Thursday by Amnesty International said, accusing the Cambodian government of being "acquiescent" and "complicit" in the exploitation of thousands of workers.
The report says there are at least 53 scam compounds in Cambodia, clustered mostly around border areas, in which organised criminal groups carry out human trafficking, forced labour, child labour, torture, deprivation of liberty and slavery.
Amnesty's Montse Ferrer said that despite law enforcement raids on some scam compounds, the number of compounds in Cambodia has increased, "growing and building" in the last few months and years.
"Scamming compounds are allowed to thrive and flourish by the Cambodian government," she told AFP.
The Cambodian government has denied the allegations.
Jett was made to romance his wealthy, middle-aged compatriots on social media, gaining their trust until they could be tricked into investing in a fake business.
"If the target fell into the trap, they would be lured to keep investing more until they were financially drained -- selling their land, cars, or all their assets," he said.
Scam bosses demanded exorbitant targets of one million baht ($31,000) per month from overworked employees -- a target only about two percent of them reached, he said.
"Initially, new recruits wouldn't face physical harm, but later, reprimands escalated to beatings, electric shocks, and severe intimidation," Jett told AFP.
- 'Woefully ineffective' -
The other employees in his multi-storey building were mostly Chinese, with some Vietnamese and some Thais.
Amnesty International says none of the ex-scammers of the 58 they interviewed for the report were Cambodian, and "overwhelmingly" were not paid for their labour.
Most of the scam centre bosses were Chinese, Jett said, adding that they used Thai interpreters when meting out punishments to those who performed poorly.
"Sometimes they'd hold meetings to decide who would be eliminated tomorrow," he said. "Or who will be sold (to another scam compound)? Or did anyone do something wrong that day? Did they break the company rules?"
He claims a colleague falsely accused him of wrongdoing to the Chinese bosses for a bounty. He pleaded his innocence but they "just didn't listen".
Ferrer said Cambodian government interventions against the scam centres had been "woefully ineffective", often linked to corruption by individual police officers at a "systemic and widespread level".
Government spokesman Pen Bona told AFP: "Cambodia is a victimised country used by criminals to commit online scams. We do recognise that there is such thing, but Cambodia has taken serious measures against the problem."
The UN Office on Drugs and Crime said in April that the scam industry was expanding outside hotspots in Southeast Asia, with criminal gangs building up operations as far as South America, Africa, the Middle East, Europe and some Pacific islands.
In Cambodia, Jett ultimately staged a dramatic escape after a particularly severe beating in which his arm was broken. He jumped out of a building, passed out and later woke up in hospital.
"Whether I died or survived, both options seemed good to me at the time," he said. "Consider it a blessing that I jumped."
He is now seeking legal recourse with assistance from Thai government agencies who have categorised him as a victim of human trafficking.
But Ferrer said effective action to help end the industry must come from the Cambodian government.
"We are convinced that if the Cambodian government wanted to put a stop they would be able to put a stop. At the very least they would be able to do much more than what we're seeing," she said.
suy-sjc/pdw/sco

internet

US judge sides with Meta in AI training copyright case

BY GLENN CHAPMAN

  • A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission.
  • A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission.
A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission.
District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week.
However, the ruling came with a caveat: Chhabria suggested the authors could have won had they argued that, by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace.
"No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.
Tremendous amounts of data are needed to train large language models powering generative AI. 
Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.
"We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry.
"Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents.
Books involved in the suit include Sarah Silverman's comic memoir "The Bedwetter" and Junot Diaz's Pulitzer Prize–winning novel "The Brief Wondrous Life of Oscar Wao," the documents showed.
"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated.
"It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

Market harming?

A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission.
District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.
"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision.
"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his decision, comparing AI training to how humans learn by reading books.
The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train chatbot Claude, the company's ChatGPT rival.
Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.
gc/jgc

AI

Tech giants' net zero goals verging on fantasy: researchers

BY MARLOWE HOOD

  • The report identifies a number of ways in which the tech sector can curb its carbon footprint, even as it develops AI apace. 
  • The credibility of climate pledges by the world's tech giants to rapidly become carbon neutral is fading fast as they devour more and more energy in the race to develop AI and build data centres, researchers warned Thursday.
The credibility of climate pledges by the world's tech giants to rapidly become carbon neutral is fading fast as they devour more and more energy in the race to develop AI and build data centres, researchers warned Thursday.
Apple, Google and Meta said they would stop adding CO2 into the atmosphere by 2030, while Amazon set that target for 2040. 
Microsoft promised to be "net negative" -- pulling CO2 out of the air -- by the end of this decade. 
But those vows, made before the AI boom transformed the sector, are starting to look like a fantasy even as these companies have doubled down on them, according to independent analysts.
"The greenhouse gas emissions targets of tech companies appear to have lost their meaning," Thomas Day, lead author of a report by think tanks Carbon Market Watch and NewClimate Institute, told AFP. 
"If energy consumption continues to rise unchecked and without adequate oversight," he added, "these targets will likely be unachievable."
The deep-dive analysis found the overall integrity of the climate strategies at Meta, Microsoft and Amazon to be "poor", while Apple's and Google's were deemed "moderate".
When it came to the quality of emissions reduction targets, those of Meta and Amazon were judged "very poor", while Google and Microsoft scored a "poor" rating. Only Apple fared better.     
The expanding carbon footprint of the five top tech behemoths stems mostly from the breakneck expansion of artificial intelligence, which requires huge amounts of energy to develop and run.
Electricity consumption -- and the carbon emissions that come with it -- has doubled for some of these companies in the last three or four years, and tripled for others, the report found.
The same is true across the sector: operational emissions of the world's top 200 information technology companies were nearly 300 million tonnes of CO2 in 2023, and nearly five times that if the downstream use of products and services is taken into account, according to the UN's International Telecommunication Union.
If the sector were a country, it would rank fifth in greenhouse gas emissions ahead of Brazil.  
Electricity to power data centres increased on average 12 percent per year from 2017 to 2024, and is projected to double by 2030, according to the IEA.

'Quite unregulated'

If all this extra power came from solar and wind, CO2 emissions would not be rising. 
But despite ambitious plans to source their energy from renewables, much of it is still not carbon neutral.  
Studies estimate that half of the computing capacity of tech companies' data centres comes from subcontractors, yet many companies do not account for these emissions, the study points out. 
The same is true for the entire infrastructure and equipment supply chain, which accounts for at least a third of tech companies' carbon footprint. 
"There is a lot of investment in renewable energy, but overall, it has not offset the sector's thirst for electricity," Day said.
Given the status of AI as a driver of economic growth, and even as a vector for industrial policy, it is unlikely that governments are going to constrain the sector's expansion, the report noted.
"So far the whole AI boom has been altogether quite unregulated," Day said.
"There are things these companies can and will do for future proofing, to make sure they're moving in the right direction" in relation to climate goals, he added.
"But when it comes to decisions that would essentially constrain the growth of the business model, we don't see any indications that that can happen without regulatory action."
The report identifies a number of ways in which the tech sector can curb its carbon footprint, even as it develops AI apace. 
Ensuring that data centres -- both those belonging to the companies as well as third party partners -- run on renewable electricity is crucial.
Increasing the lifespan of devices and expanding the use of recycled components for hardware production could also make a big difference.
Finally, the methods used for calculating emissions reduction targets are out of date and in need of revision, the report said.
mh-dax/phz


reconstruction

Syrian architect uses drone footage to help rebuild hometown

BY OMAR HAJ KADDOUR

  • Architect Mohammed said his dream was "for the village to be rebuilt, for people and life to return".
  • Syrian architect Abdel Aziz al-Mohammed could barely recognise his war-ravaged village when he returned after years away.
Syrian architect Abdel Aziz al-Mohammed could barely recognise his war-ravaged village when he returned after years away. Now, his meticulous documentation of the damage using a drone helps to rebuild it.
"When I first came back, I was shocked by the extent of the destruction," said Mohammed, 34.
Walking through his devastated village of Tal Mardikh, in Syria's northwestern Idlib province, he said he could not recognise "anything, I couldn't even find my parents' home".
Nearly half of Tal Mardikh's 1,500 homes have been destroyed and the rest damaged, mainly due to bombardment by the former Syrian army.
Mohammed, who in 2019 fled the bombardment to near the Turkish border, first returned days after an Islamist-led offensive toppled longtime ruler Bashar al-Assad in December.
The architect, now based in Idlib city, had documented details of Tal Mardikh's houses and streets before fleeing, and afterwards used his drone to document the destruction.
When he returned, he spent two weeks carefully surveying the area, going from home to home and creating an interactive map showing the detailed conditions of each house.
"We entered homes in fear, not knowing what was inside, as the regime controlled the area for five years," he said.
Under the blazing sun, Mohammed watched as workers restored a house in Tal Mardikh, which adjoins the archaeological site of Ebla, the seat of one of ancient Syria's earliest kingdoms.
His documentation of the village helped gain support from Shafak, a Turkey-based non-governmental organisation which agreed to fund the reconstruction and rehabilitation of 434 out of 800 damaged homes in Tal Mardikh.
The work is expected to be completed in August, and includes the restoration of two wells and sanitation networks, at a cost of more than one million dollars.

'Full of hope'

Syrians have begun returning home after Assad's ouster and following nearly 14 years of civil war that killed over half a million people and displaced millions of others internally and abroad.
According to the United Nations refugee agency, UNHCR, more than 600,000 Syrians have returned home from abroad, while around 1.5 million internally displaced people have gone back to their regions of origin.
The agency estimates that up to 1.5 million Syrians from abroad and two million internally displaced people could return by the end of this year.
Around 13.5 million currently remain displaced internally or abroad, according to UNHCR figures for May.
In Tal Mardikh, Alaa Gharib, 45, is among only a few dozen residents who have come back.
"I lived in tents for seven years, and when liberation came, I returned to my village," said Gharib, whose home is among those set for restoration.
He is using a blanket as a makeshift door for his house which had "no doors, no windows, nothing".
Following the lifting of Western sanctions, Syria's new authorities are hoping for international support for post-war reconstruction, which the UN estimates could cost more than $400 billion.
Efforts have so far been limited to individuals or charities, with the government yet to launch a reconstruction campaign.
Architect Mohammed said his dream was "for the village to be rebuilt, for people and life to return".
He expressed hope to "see the Syria we dream of... the Syria full of hope, built by its youth".
ohk-lk/nad/lg/ami/tc

wedding

Amazon tycoon Bezos arrives in Venice for lavish wedding

  • The lavish celebration has sparked soul-searching in Venice, one of the world's most popular tourism destinations, where some fear the arrival of so many A-list guests and their entourages will make life worse.
  • Amazon's billionaire founder Jeff Bezos and his fiancee Lauren Sanchez arrived in Venice on Wednesday ahead of their wedding, an event that has sparked protests in the Italian city.
Amazon's billionaire founder Jeff Bezos and his fiancee Lauren Sanchez arrived in Venice on Wednesday ahead of their wedding, an event that has sparked protests in the Italian city.
Bezos, the world's fourth-richest person, and his former television anchor bride-to-be were seen stepping off a water taxi at the Aman Hotel on the Grand Canal.
The couple's three-day nuptials are due to start on Thursday, and the wedding ceremony is to be held at a secret location.
Bezos, 61, and Sanchez, 55, are said to have booked out the city's finest hotels for a star-studded guest list rumoured to include Leonardo DiCaprio, Mick Jagger, Kim Kardashian, Oprah Winfrey and Orlando Bloom.
Ivanka Trump, the oldest daughter of US President Donald Trump, arrived with her husband Jared Kushner and their three children on Tuesday afternoon.
Rumours have swirled that the ceremony might be held at the Church of the Abbey of Misericordia, or at the Arsenale, a vast shipyard complex dating back to when the city was a naval powerhouse.
At least 95 private planes have requested permission to land at Venice's Marco Polo airport, Italian newspaper Corriere della Sera said, with the pair reportedly inviting about 200 guests.
The lavish celebration has sparked soul-searching in Venice, one of the world's most popular tourism destinations, where some fear the arrival of so many A-list guests and their entourages will make life worse.
Greenpeace highlighted the hypocrisy of spending huge amounts on partying in a fragile city "sinking under the weight of the climate crisis".
Activists unfurled a giant banner in St Mark's square on Monday, with a picture of Bezos laughing and a sign reading: "If you can rent Venice for your wedding, you can pay more tax."
Sanchez has also been criticised for saying more must be done to tackle climate change while also taking part in a space flight in April on a rocket developed by Bezos's space company Blue Origin.
ljm/yad/jxb/phz

politics

AI fakes duel over Sara Duterte impeachment in Philippines

BY LUCILLE SODIPE WITH PURPLE ROMERO IN HONG KONG

  • The video making the case for impeachment -- also with millions of views -- depicts an elderly woman peddling fish and calling out the Senate for failing to hold a trial.
  • Days after the Philippine Senate declined to launch the impeachment trial of the country's vice president, two interviews with Filipinos arguing for and against the move went viral.
Days after the Philippine Senate declined to launch the impeachment trial of the country's vice president, two interviews with Filipinos arguing for and against the move went viral.
Neither were real.
The schoolboys and elderly woman making their cases were AI creations, examples of increasingly sophisticated fakes possible with even basic online tools.
"Why single out the VP?", a digitally created boy in a white school uniform asks, arguing that the case was politically motivated.
The House of Representatives impeached Sara Duterte in early February on charges of graft, corruption and an alleged assassination plot against former ally and running mate President Ferdinand Marcos.
A guilty verdict in the Senate would result in her removal from office and a lifetime ban from Philippine politics.
But after convening as an impeachment court on June 10, the upper chamber immediately sent the case back to the House, questioning its constitutionality.
Duterte ally Senator Ronald dela Rosa shared the video of the schoolboys -- since viewed millions of times -- praising the youths for having a "better understanding of what's happening" than their adult counterparts.
The vice president's younger brother Sebastian, mayor of family stronghold Davao, said the clip proved "liberals" did not have the support of the younger generation.
When the schoolboys were exposed as digital creations, the vice president and her supporters were unfazed.
"There's no problem with sharing an AI video in support of me. As long as it's not being turned into a business," Duterte told reporters.
"Even if it's AI... I agree with the point," said Dela Rosa, the one-time enforcer of ex-president Rodrigo Duterte's drug war.

Five minutes' work

The video making the case for impeachment -- also with millions of views -- depicts an elderly woman peddling fish and calling out the Senate for failing to hold a trial.
"You 18 senators, when it's the poor who steal, you want them locked up immediately, no questions asked. But if it's the vice president who stole millions, you protect her fiercely," she says in Tagalog.
Both clips bore a barely discernible watermark for the Google video-generation platform Veo.
AFP fact-checkers also identified visual inconsistencies, such as overly smooth hair and teeth and storefronts with garbled signage.
The man who created the fish peddler video, Bernard Senocip, 34, told AFP it took about five minutes to produce the eight-second clip.
Reached via his Facebook page, Senocip defended his work in a video call, saying AI characters allowed people to express their opinions while avoiding the "harsh criticism" frequent on social media.
"As long as you know your limitations and you're not misleading your viewers, I think it's fine," he said, noting that -- unlike the Facebook version -- he had placed a "created by AI" tag on the video's TikTok upload.
While AFP has previously reported on websites using hot-button Philippine issues to generate cash, Senocip said his work was simply a way of expressing his political opinions.  
The schoolboy video's creator, the anonymous administrator of popular Facebook page Ay Grabe, declined to be interviewed but said his AI creations' opinions had been taken from real-life students.
AFP, along with other media outlets, is paid by some platforms including Meta, Google and TikTok for work tackling disinformation.  

'Grey area'

Using AI to push viewpoints via seemingly ordinary people can make beliefs seem "more popular than they actually are", said Jose Mari Lanuza of Sigla Research Center, a non-profit organisation that studies disinformation.
"In the case of the impeachment, this content fosters distrust not only towards particular lawmakers but towards the impeachment process."
While some AI firms have developed measures to protect public figures, Jose Miguelito Enriquez, an associate research fellow at Nanyang Technological University, said the recent Philippine videos were a different animal.
"Some AI companies like OpenAI previously committed to prevent users from generating deepfakes of 'real people', including political candidates," he said. 
"But... these man-on-the-street interviews represent a grey area because technically they are not using the likeness of an actual living person."
Crafting realistic "humans" was also getting easier, said Dominic Ligot, founder of Data and AI Ethics PH.
"Veo is only the latest in a string of rapidly evolving tools for AI media generation," he said, adding the newest version produced "smoother, more realistic motion and depth compared to earlier AI video models".
Google did not reply when AFP asked if they had developed safeguards to prevent Veo from being used to push misinformation.
For Ligot, guardrails around the swiftly evolving technology are a must, warning AI was increasingly being used to "influence how real people feel, pressure decision-makers and distort democratic discourse".
pr-ls/cwl/ecl/pst

conflict

Grok shows 'flaws' in fact-checking Israel-Iran war: study

BY ANUJ CHOPRA

  • With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but their responses are often themselves prone to misinformation.
  • Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but their responses are often themselves prone to misinformation.
"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, finding that Grok was "struggling to authenticate AI-generated media."
Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated -- sometimes within the same minute -- between denying the airport's destruction and confirming it had been damaged by strikes, the study said. 
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza or Tehran. 
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok about its validity, both wrongly responded that the claims were true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles.
Last month, Grok came under renewed scrutiny for inserting the far-right conspiracy theory of "white genocide" in South Africa into responses to unrelated queries.
Musk's startup xAI blamed an "unauthorized modification" for the unsolicited response. 
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.
Musk himself blasted Grok after it cited Media Matters -- a liberal media watchdog he has targeted in multiple lawsuits -- as a source in some of its responses about misinformation.
"Shame on you, Grok," Musk wrote on X. "Your sourcing is terrible."
ac/jgc

patent

US judge backs using copyrighted books to train AI

BY GLENN CHAPMAN

  • Tremendous amounts of data are needed to train large language models powering generative AI.  Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
  • A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.
A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.
District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.
"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision.
"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.
Tremendous amounts of data are needed to train large language models powering generative AI. 
Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.
"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query.
The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.

Blanket protection rejected

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT.
However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.
Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital formats, according to court documents.
Anthropic's aim was to amass a library of "all the books in the world" for training AI models on whatever content it deemed fit, the judge said in his ruling.
While using the books to train AI models did not itself violate the law, downloading pirated copies to build a general-purpose library constituted copyright infringement, the judge ruled, regardless of their eventual use in training.
The case will now proceed to trial to determine financial damages over the pirated library copies.
Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.
"Judge Alsup's decision is a mixed bag," said Keith Kupferschmid, chief executive of US nonprofit Copyright Alliance.
"In some instances AI companies should be happy with the decision and in other instances copyright owners should be happy."
Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. 
The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.
gc/arp/jgc

US

The billionaire and the TV anchor: Bezos, Sanchez's whirlwind romance

BY ALEX PIGMAN

  • With his new romance flourishing, Bezos stepped down as CEO of Amazon in 2021. 
  • Their whirlwind romance began under a cloud of scandal, but now Lauren Sanchez, a former morning TV anchor with a love of flying, is set to wed Amazon founder Jeff Bezos, the world's fourth-richest person, in a Venice extravaganza. 
Their whirlwind romance began under a cloud of scandal, but now Lauren Sanchez, a former morning TV anchor with a love of flying, is set to wed Amazon founder Jeff Bezos, the world's fourth-richest person, in a Venice extravaganza. 
Both were married to other people when they began secretly dating sometime before 2019.
In January of that year, Bezos and his first wife, the publicity-shy MacKenzie Scott, announced their divorce, stating their intention to continue "our shared lives as friends."
Bezos met Scott in 1992 while they were both working at a New York hedge fund. They quit their jobs to co-found Amazon in a rented garage in Bellevue, Washington.
A month after the split, Bezos publicly accused the US tabloid the National Enquirer of blackmail, saying it had offered not to publish salacious photos and text messages he had exchanged with Sanchez. 
He suggested the effort was orchestrated by Saudi Arabia, whose leaders were reportedly upset with how The Washington Post -- which Bezos owns -- covered the murder of its reporter Jamal Khashoggi. 
However, Sanchez later revealed that her brother sold the phone content to the Enquirer for a reported $200,000.

'Is it hot?'

With his new romance flourishing, Bezos stepped down as CEO of Amazon in 2021. 
Bezos, 61, stated his primary reason for pulling back was to dedicate more time and energy to Blue Origin, his space exploration company, and charity work. 
He remains Amazon's executive chairman, the retail giant's biggest shareholder, and still holds considerable influence over the company's direction.
Bezos and Sanchez are fixtures at Oscar parties and other celebrity haunts. Sanchez often uses Instagram to communicate, sometimes expressing her love for Bezos or her children. In 2023, they announced their engagement.
Bezos has notably changed his look during his relationship with the exuberantly dressed Sanchez, trading in the wardrobe of a scrawny tech executive for that of a style-conscious playboy with a more muscular physique. 
"Is it just me, or is it hot outside?" Sanchez wrote in the caption of a 2023 Instagram post showing a shirtless Bezos in swimming trunks climbing the ladder of his $500 million mega yacht.

'Changed my life'

Before her relationship with Bezos, Sanchez, 55, was not a nationally known figure. 
A third-generation Mexican American originally from New Mexico, Sanchez has dyslexia and has made awareness of the learning disability one of her missions. 
She has shared that she assumed she was "stupid" until a community college professor informed her she had the condition and was perfectly smart.
"It changed my life," helping her win a scholarship to the University of Southern California, Sanchez told the Wall Street Journal.
She dropped out of USC to begin her TV career at a local station in Phoenix, Arizona, before working on Fox Sports and Extra, a TV tabloid-style news show in Los Angeles, which would become her home for decades. 
In 1999, she narrowly missed national fame when she was turned down for a spot on "The View," the talk show hosted by TV news legend Barbara Walters.
Sanchez instead became a familiar face to Angelenos as a co-host of a local morning news show from 2011 to 2017. 
During most of those years, she was married to Hollywood super-agent Patrick Whitesell, with whom she has two children, Evan and Ella. 
She also has a first son, Nikko, from a relationship with former NFL star Tony Gonzalez. 
Bezos has four children with his ex-wife: a son, Preston, born in 2000, as well as two sons and one adopted daughter whose ages and names are not public.

Women can fly

Sanchez has a deep passion for flying. After leaving morning television, she founded a company specializing in aerial filming and served as a consultant on Christopher Nolan's film "Dunkirk".
"This space is dominated by men," she told The Hollywood Reporter in 2017. "But there's nothing physical about flying a helicopter... There's no reason more women aren't in this."
Her passion for the skies also led her to space in April as part of an all-female flight on Blue Origin, though the 11-minute trip has been criticized as wasteful.  
Among the crew was pop singer Katy Perry, who was also a guest at Sanchez's bachelorette party in Paris last month. 
The A-list guest list for the party also included Kim Kardashian, Kris Jenner, and Eva Longoria.
arp/st

tech

UK aims to tackle Google dominance of online search

  • It followed the 2025 implementation of Britain's Digital Markets Competition Regime, which the regulator on Tuesday said "can help unlock opportunities for innovation and growth".
  • Britain's competition watchdog on Tuesday proposed measures aimed at tackling Google's dominance in online search, with the US tech giant warning that "punitive regulations" could impact UK economic growth.
Britain's competition watchdog on Tuesday proposed measures aimed at tackling Google's dominance in online search, with the US tech giant warning that "punitive regulations" could impact UK economic growth.
The Competition and Markets Authority (CMA) said it proposes to designate Google with "strategic market status", subjecting it to special requirements under new UK regulations.
A similar tech competition law from the European Union, the Digital Markets Act, carries the potential for hefty financial penalties. 
Britain's CMA in January launched an investigation into Google's dominant position in the search engine market and its impacts on consumers and businesses.
It followed the 2025 implementation of Britain's Digital Markets Competition Regime, which the regulator on Tuesday said "can help unlock opportunities for innovation and growth".
Google's spokesman on competition, Oliver Bethell, warned that the CMA's proposals, which precede a final decision due in October, "could have significant implications for businesses and consumers in the UK".
"The positive impact of Google Search on the UK is undeniable. Our tools and services contribute billions of pounds (dollars) a year to the UK," he added in a statement.
While noting that "Google Search has delivered tremendous benefits", CMA chief executive Sarah Cardell said "there are ways to make these markets more open, competitive and innovative".
The regulator said that it plans to consult on potential changes, including "ensuring people can easily choose and switch between search services -- including potentially AI assistants -- by making default choice screens a legal requirement".
Another proposal is for "ensuring Google's ranking and presentation of search results is fair and non-discriminatory".
Bethell expressed concern that "the scope of the CMA's considerations remains broad and unfocused, with a range of interventions being considered before any evidence has been provided".
The CMA noted that "Google Search accounts for more than 90 percent" of online enquiries in the UK.
It added that more than 200,000 businesses in the UK rely on Google search advertising to reach customers. 
bcp/har/lth