UK spy agency set to use AI against cyber attacks and state actors

GCHQ, the UK spy agency, is preparing to use artificial intelligence to combat cyber attacks, identify state-backed disinformation, and help track criminal networks around the globe.
The move, announced on Wednesday, reflects growing anxiety that adversaries such as Russia and China are already weaponising AI technology against Britain and its allies.
While security officials are keen to distance the UK from unethical applications of machine learning — such as facial recognition and the mass creation of fake online identities in troll farms — they say they are “on the cusp” of using new algorithms to boost national security.
In an article for the Financial Times, GCHQ’s director Jeremy Fleming said “good AI” would enable spies to work in different ways, “allowing analysts to deal with ever-increasing volumes and complexity of data, improving the quality and speed of decision-making.”
He added that the applications of AI are broad, “from identifying and countering ‘troll farms’ peddling disinformation to mapping and tracking international networks that are helping to traffic people, drugs or weapons”.
For many years, spies have used simple AI functions such as translation, but security officials said more recent advances in the speed of data processing, and increases in the availability of the data needed to train algorithms, mean GCHQ can deploy machine learning more ambitiously.
Possible applications to counter disinformation include machine-assisted fact checking to identify fabricated media known as “deepfakes”, as well as the automatic detection and blocking of botnets and other sources of misleading content online.
AI could also be used to actively defend against cyber attacks, helping spies find malicious software and trace it to its source, security officials said.
GCHQ could also analyse complex chains of financial transactions and uncover the involvement of hostile states or terrorists.
Fleming insisted that the UK’s use of this technology would be “legal, proportionate and ethical”.
“In the hands of an adversary with little respect for human rights, such a powerful technology could be used for oppression,” he wrote. “Inaction can let those who build the technology of tomorrow — whether a country or company — project their values or interests by stealth, poor design or inadequate diversity. The consequences are hard to overstate.”
The use of AI is authorised under the Investigatory Powers Act, and is overseen by both ministers and the Investigatory Powers Commissioner’s Office.
Alexander Babuta, a research fellow in National Security and Resilience at the Royal United Services Institute, said the problem for British spies was that adversaries “will undoubtedly use AI to attack the UK, but they are not bound by the same legal and ethical framework”.
“The UK government’s requirement to develop AI capabilities is all the more pressing in the context of emerging AI-enabled security threats from hostile state actors — most notably Russia and China,” he said.
However, ever since Edward Snowden, a former contractor at the US National Security Agency, revealed GCHQ’s bulk data collection programme in 2013, the organisation has come under legal challenge from privacy organisations and battled to persuade the public that it can be trusted with data.
Megan Goulding, a lawyer at the human rights campaign group Liberty, suggested GCHQ’s need to deploy AI reflected the growing volumes of data it had been given permission to collect.
“The increased reliance on algorithms when it comes to our sensitive information should raise alarm bells over the sheer scale of snooping currently carried out on us,” she said.

Nvidia earnings boosted by gaming and data centre chip demand
Nvidia forecast a much bigger growth surge in the coming months than Wall Street had been expecting, as the US chipmaker reported quarterly numbers that showed it has continued to ride strong demand for gaming and data centre chips during the pandemic.
The forecast, which lifted the company’s shares 3 per cent in after-market trading, came despite supply shortages that have caused convulsions in some parts of the semiconductor supply chain.
Stronger demand had “limited the availability of capacity and components” throughout the supply chain, with gaming particularly affected, Nvidia said.
Despite that, strong sales of a new generation of gaming cards lifted revenue from this part of the business by 67 per cent in the latest period, to just under $2.5bn. With data centre sales up 97 per cent — thanks partly to last year’s acquisition of Mellanox — Nvidia reported overall revenue of $5bn in the three months to the end of January.
That was 61 per cent higher than the year before, and some 4 per cent ahead of most analysts’ forecasts.
Wall Street had been expecting sales to slow after the recent strong run.

Instead, Nvidia forecast an acceleration in growth in the current quarter, with revenue rising 71 per cent to $5.3bn, or about 18 per cent above analysts’ expectations.
Jensen Huang, chief executive, said the latest quarter had capped “a breakout year for Nvidia’s computing platforms”.
Besides their use in high-end gaming PCs and machine learning systems — both markets that have been lifted by the pandemic — Nvidia’s chips are also widely used in cryptocurrency “mining”, a market that has boomed on the back of the soaring bitcoin price.
Despite a fall of nearly two percentage points in its gross profit margin, caused partly by Mellanox, Nvidia’s net income jumped 53 per cent, to $1.46bn.
At $3.10, pro forma earnings per share were up 64 per cent, and 29 cents ahead of expectations. Based on formal accounting principles, earnings per share rose 51 per cent, to $2.31.

After Google drama, Big Tech must fight against AI bias
In the 2010s, the American political scientist Virginia Eubanks set out to investigate whether computer programs equipped with artificial intelligence were hurting poor communities in places such as Pittsburgh and Los Angeles.
Her resulting book, Automating Inequality (2018), makes chilling reading: Eubanks found that AI-enabled public and private systems linked to health, benefits and policing were making capricious — and damaging — decisions based on flawed data and ethnic and gender biases.
Worse, the AI systems were so impenetrable that they were hard to monitor or challenge when decisions were wrong — especially by the people who were victims of these “moralistic and punitive poverty management strategies”, as Eubanks puts it.
Eubanks’ warnings received scant public attention when they emerged. But now, belatedly, the issue of AI bias is sparking angry debate in Silicon Valley — not because of what is happening to those living in poverty but following a bitter row among well-paid tech workers at Google.
Earlier this month, Margaret Mitchell, a Google employee who co-led a team studying ethics in AI, was fired after allegedly engaging in the “exfiltration of confidential business-sensitive documents and private data of other employees”, according to Google. The tech group has not explained what this means. But Mitchell was apparently looking for evidence that Google had maltreated Timnit Gebru, her co-leader at the AI ethics unit, who was ousted late last year.
This is deeply embarrassing for the tech giant. Gebru is a rarity — a senior black female techie — who has been campaigning against racial and gender biases via the industry group Black in AI. More embarrassing, her departure came after she tried to publish a research paper about the dangers of untrammelled AI innovation that apparently upset Google executives.
As it happens, the offending paper is too geeky to grab headlines.

However, it argues, among other things, that natural language processing platforms, which draw on huge bodies of text, can embed the type of biases that Eubanks warned about. And after Gebru was ousted, Mitchell told her Google colleagues that Gebru had been targeted because of the “same underpinnings of racism and sexism that our AI systems, when in the wrong hands, soak up”.
Mitchell tells me: “I tried to use my position to raise concerns to Google about race and gender inequity . . . To now be fired has been devastating.” Gebru echoes: “If you look at who is getting prominence and paid to make decisions [about AI design and ethics] it is not black women . . . There were a number of people [at Google] who couldn’t stand me.”
Google denies this and says Gebru left because she breached internal research protocols.

The company points out that it has now appointed Marian Croak, another black female employee, to run a revamped AI ethics unit. Chief executive Sundar Pichai has also apologised to staff.
But the optics look “challenging”, to use corporate-speak, not least because according to Google’s latest diversity report, fewer than a third of its global employees are women (down slightly on 2019) and only 5.5 per cent of its US employees are black (compared with 13 per cent of the US population).
This story will no doubt run and run, but there are at least three things that everyone, even non-techies, needs to note now.

First, Silicon Valley’s problems with gender and racial imbalance did not start and end with the more scandal-prone members of the Big Tech fraternity — the issue is endemic and likely to last for years.
Second, what pressure there is on tech giants to reform is coming not so much from regulators or shareholders but from employees themselves.

They are becoming outspoken lobbyists, not just over gender and race but on the environment and labour rights as well. Even before this latest drama, Google had faced employee protests over sexual harassment; Amazon is experiencing similar opposition over green issues.
Third, the problem with AI and bias that Eubanks highlights in her book is becoming more acute. Companies such as Google are not just racing to create ever larger AI platforms, but embedding them deeper in our lives.

The tools that Gebru’s paper takes a swipe at are a key component of Google’s search processes.
These systems often deliver extraordinary efficiency and convenience. But AI programs operate by scanning unimaginably vast quantities of data about human activity and speech to find patterns and correlations, using the past to extrapolate the future. This works well if history is a good guide to how we want things to unfold, but not if we want to build a better future by expunging elements of our past — such as racist speech.
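To see that mechanism in miniature, here is a deliberately toy sketch in Python (purely illustrative; the tiny corpus, the counting approach and the job and pronoun examples are invented for this aside and bear no resemblance to the scale or design of Google's actual language platforms). A program that learns word associations simply by counting which words appear together in a skewed handful of sentences hands that skew back as if it were knowledge.

    # Toy illustration only: not Google's technology, just a tiny counting
    # "model" that learns word associations from a deliberately skewed corpus.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "nurse she hospital", "nurse she ward", "nurse he clinic",
        "engineer he office", "engineer he site", "engineer she lab",
    ]

    # Count every pair of words that appears together in a sentence.
    pair_counts = Counter()
    for sentence in corpus:
        for pair in combinations(sorted(set(sentence.split())), 2):
            pair_counts[pair] += 1

    def association(word, pronoun):
        """How strongly the 'model' links a word with a pronoun."""
        return pair_counts[tuple(sorted((word, pronoun)))]

    for job in ("nurse", "engineer"):
        print(job, {p: association(job, p) for p in ("she", "he")})
    # Prints: nurse {'she': 2, 'he': 1} and engineer {'she': 1, 'he': 2}.
    # The imbalance is a fact about the corpus, not about nurses or engineers:
    # exactly the kind of learned bias Eubanks and Gebru warn about.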
The solution is to have more and better human judgment in these programs. Getting non-white faces involved in designing facial recognition tools, say, can reduce pro-white bias. But the rub is that human intervention slows down AI processes — and innovation.

The question posed by the Gebru saga is not simply: “Is tech racist or sexist?” but also: “Will we sacrifice some time and money to get a fairer AI system?”
Let’s hope the Google drama finally focuses attention on that.
