Excerpt from "The Transpacific Experiment" by Matt Sheehan

This week’s free issue of the newsletter is an excerpt from Matt Sheehan’s new book The Transpacific Experiment: How China and California Collaborate and Compete for Our Future.

Matt is a Fellow at the Paulson Institute's think tank, MacroPolo, where he covers the China-US technology relationship with a focus on artificial intelligence. Prior to that he worked as The Huffington Post's first China correspondent in Beijing, where he was one of my favorite foreign reporters in China. Since moving back to the Bay Area in 2016, he's worked as a journalist, analyst and consultant on projects connecting California and China. His new book covers China-California issues across several dimensions: students, technology, Hollywood, green investment, real estate, and local politics.

I hope you enjoy the excerpt, and you can buy the book here on Amazon.

Today, the central axis of China–U.S. competition runs through frontier technologies: artificial intelligence (AI), 5G networks, and quantum computing. These technologies hold the potential to reshape the geopolitical balance of power—economic, cybersecurity, and military—and each country brings a different set of strengths and weaknesses to them. For a better picture of these strengths and weaknesses, let’s zoom in on the technology that’s making the most waves in both countries today: AI.

AI acts as an umbrella term for an incredibly diverse range of technologies that involve machine learning. Those technologies have more applications than there are industries in the world, making a one-dimensional analysis of “which country leads in AI” impossible. Instead, assessing the relative capabilities of China and the U.S. requires breaking down these two AI ecosystems into their key inputs: research talent, data, semiconductors, private sector companies, and government policy. Looking at how the two countries compare along each of these inputs can provide a first approximation of their strengths and weaknesses in the field, and serve as a basis for well-grounded technology policymaking going forward.

Research talent constitutes one of the most important—and the most directly quantifiable—inputs for AI systems. Researchers around the world compete for the opportunity to publish and present their papers at top AI conferences, with the selections made by leading figures in the field. By looking at which researchers and institutions are selected for publication at these conferences, we can get a sense of where top AI researchers believe the most important research is happening today.

As part of a study for The Paulson Institute’s think tank, MacroPolo, my colleague Joy Dantong Ma broke down publications at what many agree is the top AI conference in the world: NeurIPS 2018. Looking at the most elite papers at the conference—those selected for oral presentations, with a 1% acceptance rate—the United States holds an overwhelming advantage. A full 60% of authors of these papers are currently working or studying at US institutions, compared with just 1% currently at Chinese institutions. Those findings were in line with a similar study of NeurIPS 2017, and echoed the qualitative assessments I’ve heard from many Chinese and American researchers in the field: while China is producing an increasing amount of AI research, the vast majority of cutting-edge breakthroughs are still coming out of places like Google or Carnegie Mellon University.

But examined from another angle, the global distribution of AI talent appears much flatter. If you look at where these elite researchers completed their undergraduate degrees (a rough proxy for where they grew up), over half of America’s top AI research talent comes from abroad. The top source country for that talent? China. While Chinese institutions publish just a tiny fraction of top AI research, researchers who grew up in China make up sizeable chunks of this body of work: 15% in 2017, and 9% in 2018.

The deltas between where these top researchers come from and where they work today (29% to 60% for the U.S., 9% to 1% for China) represent the gains and losses from highly skilled immigration: a major brain gain for the U.S., and a substantial brain drain for China. Data from the NeurIPS conference gives us just one window into the multi-dimensional landscape of AI research capabilities. We need to deepen and broaden these datasets by examining more conferences, building alternative metrics, and regularly assessing how ever-shifting policies in the U.S. and China are affecting this population of highly educated, internationally mobile AI researchers. For the U.S. government, these are some of the international students subject to new restrictions and heightened scrutiny of their visas. For the Chinese government, these overseas researchers represent a population of potential “sea turtles” that it hopes will bring their talents back to China.
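The brain gain/drain arithmetic above can be sketched in a few lines of code. This is purely illustrative: the shares are the NeurIPS 2018 figures cited in the text (percent of elite-paper authors by undergraduate origin versus current institution), and the helper name is my own, not from the book.

```python
# Shares of elite NeurIPS 2018 paper authors, as cited in the text:
# country -> (share who did their undergrad there, share working there now)
shares = {
    "U.S.": (29, 60),
    "China": (9, 1),
}

def talent_delta(undergrad_share: int, current_share: int) -> int:
    """Net change in a country's share of elite AI authors, in percentage
    points: positive = brain gain, negative = brain drain."""
    return current_share - undergrad_share

for country, (origin, current) in shares.items():
    print(f"{country}: {talent_delta(origin, current):+d} percentage points")
```

By this rough measure, the U.S. nets +31 percentage points of elite-author share while China loses 8 — the asymmetry the excerpt describes.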

Creating useful metrics for the other key AI inputs—data, semiconductors, companies and government policy—is less straightforward, but no less important. Data is often cited as an overwhelming advantage for China, but the reality is far more complex. While Chinese companies have access to rich data from a large pool of relatively homogenous domestic users, leading American companies draw on diverse data from users around the globe. The result is a mixed bag when it comes to comparative advantages in data, one that requires a far more granular analysis of specific use cases and settings to determine advantage.

For advanced semiconductors—the crucial chips powering AI functions in phones, computers and cars—U.S. companies maintain a near stranglehold on the industry. That dominance was powerfully illustrated by the U.S. threat of banning chip exports to the Chinese telecom firms ZTE and Huawei, a move that likely would have shaken those companies to their foundations. But that same threat has now catalyzed greater investment in China’s domestic semiconductor capabilities, and some top research labs at places like Tsinghua University have begun publishing attention-grabbing research on developing new hybrid chips.

The intersection of private sector companies and government policy presents a similarly mixed bag. The Chinese government’s 2017 national plan for AI helped push domestic activity in AI to a new level. It encouraged even greater levels of private investment, and acted as a high-profile signal for local officials around the country: AI was the next big thing, and they were to do whatever they could to accelerate adoption. Those officials began adapting public infrastructure, subsidizing private investment, and procuring products, all in an attempt to stimulate their local AI industry. That then fed into a huge boom in venture capital funding as investors and entrepreneurs sought to apply AI to government priorities: transportation, "city brains," medical care, and largest of all, surveillance and public security.

The ripple effects of that initial push are still being felt—but so are its limitations. After the early frenzy of Chinese venture capital investment in AI faded, some sobering realities set in: many of China’s “AI startups” use hardly any AI in their products, and they have no sustainable business model beyond raising more money. And while government subsidies and procurement can help gin up demand for the AI products of today, it remains unclear if they can plant the seeds for the AI breakthroughs of tomorrow.

China and the United States enter the age of AI like a study in contrasts. While the U.S. leads in game-changing research, China shows strength in practical applications. Where American companies draw data from diverse users around the globe, China’s AI giants have a wealth of relatively homogenous data at home. And while Silicon Valley sometimes actively rejects entanglements with the U.S. government, Chinese companies often work with local officials to bring large-scale AI projects to life.

Those contrasts are challenging some American technologists and policy makers to rethink the connection between government policy and innovation. A relatively light-touch approach served the U.S. well during the internet revolution, turning Silicon Valley into a global mecca for software engineers and ambitious entrepreneurs. But real-world applications of AI—driverless cars, automated factories, and intelligent cities—have a much larger physical footprint, one that often requires the government to adopt, or at least adapt to, the technology. In that context, how can the American government be proactive enough to ensure it fully leverages the technology without smothering the organic sources of American AI strength?

But as AI systems become more capable and more omnipresent, U.S.–China competition could raise questions far deeper and more troubling than the current round of geopolitical wrangling. Today, AI systems have the power to do extraordinary things when they are trained to perform a single, narrow task (e.g., winning at Go, or recommending a song). But a single narrow AI system doesn’t yet have the general intelligence of humans that allows a single brain to perform the full range of complex human tasks: conducting innovative research, raising a child, or writing a bestselling novel. Estimates of when we will be able to build powerful AI systems up to these complex tasks, often called “artificial general intelligence” (AGI) or “high-level machine intelligence” (HLMI), vary widely. The most aggressive estimates from leading AI scientists say that achieving AGI in under ten years should be taken as a “serious possibility,” while median estimates from surveys of researchers have projected HLMI in around forty years.

Whatever the timeline is for HLMI or AGI, its arrival would pose profound questions for humanity—questions that could be further complicated by a tense U.S.–China relationship. Such AI systems could be put to almost limitless uses—curing disease, automating jobs, or designing complex cyber-weapons that cripple an entire country. What happens if the United States and China are engaged in spiraling competition to build ever more intelligent machines, AI systems that we might not know how to control? Former U.S. Secretary of the Navy Richard Danzig has described that rush toward building increasingly autonomous systems as like playing a game of "technology roulette." Many scientists familiar with the field, from elite AI researchers such as Stuart Russell to the late astrophysicist Stephen Hawking, fear that if not designed with foolproof safety mechanisms, such AI systems could potentially threaten the very existence of humanity.

Given the potential risks from AGI systems, the move toward disentanglement of the two technology ecosystems raises serious questions. Should the U.S. government attempt to silo America’s top scientists, walling off new AI breakthroughs from China? Or would we be made safer by a two-way dialogue around AI safety, one that allows for communication around international best practices in safety? Can two superpowers that find themselves at geopolitical loggerheads avoid a potentially destructive technology-fueled arms race? And can the thousands of technologists who are at the center of the Transpacific Experiment—the students, researchers, and entrepreneurs crisscrossing the Pacific Ocean—play a role in mitigating these threats?

"Excerpted and adapted from The Transpacific Experiment: How China and California Collaborate and Compete for Our Future, copyright © 2019 by Matt Sheehan. Reprinted by permission of Counterpoint Press."

You can buy the book here on Amazon.

To celebrate the Mid-Autumn Festival 中秋节 Sinocism is offering you something better than mooncakes. If you subscribe using this link between now and 11:59 PM EST on September 14th you will get 33% off the regular subscription price.