The Morning Jolt

National Security & Defense

If You Liked Wuhan’s Viral Research, You’ll Love Its AI Experiments


On the menu today: We all learned the hard way that our lives can be seriously disrupted by scientific experiments conducted in Wuhan, China, that “break the rules.” It’s probably worth keeping an eye on Wuhan University researchers who recently allowed an artificial intelligence to control an Earth-observation satellite, which led the satellite to start looking at Indian military bases and a Japanese port used by the U.S. Navy. Apparently, when Wuhan scientists watch the Terminator movies, they root for the machines. The Chinese Communist Party’s efforts to develop advanced artificial intelligence are moving full speed ahead, with all kinds of potentially malevolent applications. Meanwhile, Senator Dianne Feinstein of California sounds like she doesn’t realize she’s missed any time in the chamber this year.

Dangerous Developments in China

From the South China Morning Post:

Chinese researchers say an artificial intelligence machine was given temporary full control of a satellite in near-Earth orbit, in a landmark experiment to test the technology’s behaviour in space.

For 24 hours the Qimingxing 1, a small Earth observation satellite, was directed by a ground-based AI, without any human order, assignment or intervention, according to a paper published in the journal Geomatics and Information Science of Wuhan University.

The research team, led by Wang Mi from the university’s State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, said the aim of the experiment was to see what the AI would do of its own accord.

The scientists said the AI picked a few places on Earth and ordered the Qimingxing 1 to take a closer look. No explanation was given for why the technology may have selected the locations.

One targeted area was Patna, an ancient city by the Ganges River in northeast India, which is also home to the Bihar Regiment – the Indian Army unit that met China’s military in a deadly encounter in the Galwan Valley in the disputed border region in 2020.

Osaka, one of Japan’s busiest ports which occasionally hosts US Navy vessels operating in the Pacific, also ranked highly in the AI’s list of interests.

“This approach breaks the existing rules in mission planning,” said Wang and his colleagues in their paper published on April 3.

Oh, Wuhan, haven’t you already given the world enough through scientific experiments that “break the existing rules”? Accidentally spilling the Andromeda Strain wasn’t enough, so now you have to go and try to invent Skynet?

(By the way, it says something about our modern world that, a few months into the Covid pandemic, the nuclear powers of India and China could have their militaries get into a border skirmish that killed anywhere from 35 to 43 Chinese troops and 20 Indian troops, and that the episode was barely noticed or discussed on this side of the world.)

For those wondering about the sourcing, the South China Morning Post is technically an independent newspaper in Hong Kong, but one with a well-established, non-antagonistic relationship with the Chinese government. The newspaper has been described as “on the leading edge of China’s efforts to project soft power abroad” by the New York Times, and The Atlantic reported that “concerns have been raised over the newspaper’s ethics, and its willingness to cooperate with Beijing.” Rose Lüqiu Luwei, a professor of journalism at Hong Kong Baptist University, told the publication Quartz that “the top management is already indirectly controlled” by the Chinese government. In other words, if you see news in the SCMP, it is very likely that the Chinese Communist Party wants you to see that news.

But satellite control isn’t the only area where Chinese-sponsored and -controlled institutions are racing ahead.

A recent Harvard University symposium concluded that China is setting itself up to be a kind of global arms dealer in the race to apply AI to government surveillance and control — the best friend of every autocratic regime on the planet:

Harvard Economics Professor David Yang spoke to the outsized success of China’s AI sector at a recent dean’s symposium on insights gleaned from the social sciences about the ascendant global power. As evidence, he cited a recent U.S. government ranking of companies producing the most accurate facial recognition technology. The top five were all Chinese companies.

“Autocratic governments would like to be able to predict the whereabouts, thoughts, and behaviors of citizens,” Yang said. “And AI is fundamentally a technology for prediction.” This creates an alignment of purpose between AI technology and autocratic rulers, he argued. . . .

Yang’s research shows China exporting huge amounts of AI technology, dwarfing its contributions in other frontier technology sectors. Yang also demonstrated that autocratic regimes around the world have a particular interest in AI. “AI quite startlingly is the only sector out of the 16 frontier technologies where there’s disproportionately more buyers that are weak democracies and autocracies.”

Basically, if an AI program has a potential military or national-security application, the Chinese government wants to use it. Last month, the Japan Times reported on how the Chinese government is seeking to develop artificial-intelligence applications to meet all kinds of national-security and military objectives:

How important AI has become for China’s national security and military ambitions was highlighted by President Xi Jinping during the 20th Party Congress last October, where he emphasized Beijing’s commitment to AI development and “intelligent warfare” — a reference to AI-enabled military systems.

Not only does China plan to become the world’s leading AI power by 2030, Beijing has also turned to a military-civil fusion strategy to achieve it. This approach has enabled the country to speed up defense innovations by eliminating barriers between China’s civilian research and commercial sectors, and its military and defense industrial sectors.

This puts the ongoing discussions about restricting artificial-intelligence development in a new and unnerving light.

Recent warnings from groups such as the Association for the Advancement of Artificial Intelligence, and from figures such as Geoffrey Hinton, the “Godfather of A.I.,” likely reflect genuine concern and the best of intentions. Human beings generally don’t like to scaremonger about their own lives’ work. But if the U.S. chooses to restrict its development of artificial intelligence and the Chinese government does not, doesn’t that make U.S. restraint moot? And doesn’t that leave Beijing with a potential advantage, both on the battlefield and in any economic or soft-power competition?

Hinton told the New York Times that his “immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will ‘not be able to know what is true anymore.’” Yesterday, I noted that so far, social-media users have been pretty good about spotting AI-generated images — those Midjourney AI images still don’t quite pass the uncanny-valley test — and when used in efforts at propaganda, they’re more likely to backfire.

But Hinton also worries about what happens when a military develops an advanced artificial intelligence and gives it the ability and authority to kill.

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Oh, and bringing this full circle back to Wuhan, Senate Republicans released the results of their two-year investigation into the origins of Covid:

A careful reading of reports from the WIV spanning more than a three-year period yielded a picture of a struggling institution: underfunded, underregulated, and understaffed. WIV leadership complained that some portion of their overworked staff was also poorly trained, while some reports revealed a work culture of laxity toward safety matters and described difficulties adapting to the work environment at their newly constructed facilities. Persistent problems popped up month after month in report after report, casting considerable doubt on the WIV’s claims of successful remedy. By their own admission, WIV researchers conducted experiments involving SARS-like coronaviruses, prone as they are to airborne transmission, in BSL-2 laboratory conditions with the relatively negligible protections required of researchers at that biosafety level. The WIV was almost an accident waiting to happen, and it appears that an accident, or perhaps accidents, did happen, and roughly concurrent with the initial outbreak of SARS-CoV-2.

Beginning in late 2018 and building like a crescendo throughout the months of 2019 that preceded the initial outbreak in Wuhan, a series of reports from the WIV indicated that inspections had identified “hidden dangers,” “shortcomings,” “nonconforming items,” and various biosafety “problems” that were described alternatively as “foundational,” “critical,” and even “urgent.” CCP cadres spoke of a rough start for the WIV’s new BSL-4 laboratory complex in which they suffered from “no equipment and technology standards, no design and construction teams, and no experience operating or maintaining [a lab of this caliber].” In late July 2019, WIV leaders warned of “urgent problems we are currently facing,” and by November, they “pointed to the severe consequences that could result from hidden safety dangers.”

But hey, I’m sure China’s state-sponsored research and experiments in the military applications of artificial intelligence are conducted a lot more carefully.

ADDENDUM: Slate’s Jim Newell describes a short but deeply concerning conversation with Senator Dianne Feinstein of California, in which the senator seemed to believe she hadn’t missed any time in the Senate recently.
