An optimistic vision of the impact of AI on the job market:
We Survived Spreadsheets, and We’ll Survive AI
History shows technology fuels new kinds of jobs in addition to the ones it renders obsolete
By Greg Ip
The Wall Street Journal
Updated Aug. 2, 2017 11:47 a.m. ET
A robot inspected a power system in Chuzhou, China, last month. Photo: Song Weixing/SIPA Asia/Zuma Press
Whether truck drivers or marketing executives, all workers consider intelligence intrinsic to how they do their jobs. No wonder the rise of “artificial intelligence” is uniquely terrifying. From Stephen Hawking to Elon Musk, we are told almost daily our jobs will soon be done more cheaply by AI.
Yet AI is too amorphous a label to actually convey anything useful about what, precisely, it’s supposed to displace. Instead, think of it as a technology that does one thing particularly well: predictions. Such as, will that mark on the X-ray prove to be a tumor? Is the object in the road a paper bag or a child? Which headline will get the most readers to click on an article?
Treating prediction as an input into an economic process makes it much easier to map AI’s impact. History and economics show that when an input such as energy, communication or calculation becomes cheaper, we find many more uses for it. Some jobs become superfluous, but others more valuable, and brand new ones spring into existence. Why should AI be different?
Back in the 1860s, the British economist William Stanley Jevons noticed that when more-efficient steam engines reduced the coal needed to generate power, steam power became more widespread and coal consumption rose. More recently, a Massachusetts Institute of Technology-led study found that as semiconductor manufacturers squeezed more computing power out of each unit of silicon, the demand for computing power shot up, and silicon consumption rose.
The “Jevons paradox” is true of information-based inputs, not just materials like coal and silicon. Until the 1980s, manipulating large quantities of data—for example, calculating how higher interest rates changed a company’s future profits—was time-consuming and error-prone. Then along came personal computers and spreadsheet programs: VisiCalc in 1979, Lotus 1-2-3 in 1983 and Microsoft Excel a few years later. Suddenly, you could change one number—say, this year’s rent—and instantly recalculate costs, revenues and profits years into the future. This simplified routine bookkeeping while making many new tasks possible, such as modeling alternate scenarios.
“You could play the what-if game. You know, what if I did this instead of that?” accountant Allen Sneider, the first registered buyer of VisiCalc, told NPR’s “Planet Money” in 2015 for a retrospective on spreadsheets.
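The what-if recalculation Mr. Sneider describes can be sketched in a few lines of Python. The five-year projection below is a hypothetical illustration; the figures and parameter names are invented, not drawn from the article:

```python
# A minimal what-if sketch: change one input and recompute a
# multi-year projection, the way a spreadsheet recalculates.
# All figures are illustrative.

def project_profit(revenue, rent, growth_rate, years=5):
    """Return projected annual profit for each year, assuming
    revenue compounds at growth_rate while rent stays flat."""
    profits = []
    for year in range(years):
        r = revenue * (1 + growth_rate) ** year
        profits.append(r - rent)
    return profits

base = project_profit(revenue=100_000, rent=30_000, growth_rate=0.05)

# What if rent rises? One changed input, instant recalculation.
what_if = project_profit(revenue=100_000, rent=40_000, growth_rate=0.05)
```

Before spreadsheets, rerunning such a projection meant redoing every cell by hand; afterward, comparing `base` with `what_if` is effectively free, which is exactly why demand for this kind of calculation exploded.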
The new technology pummeled demand for bookkeepers: their ranks have shrunk 44% from their 1985 level of two million, according to the Bureau of Labor Statistics. Yet people who could run numbers on the new software became hot commodities. Since 1985, the ranks of accountants and auditors have grown 41%, to 1.8 million, while financial managers and management analysts, which the BLS didn’t even track before 1983, have nearly quadrupled to 2.1 million.
Just as spreadsheets drove costs down and demand up for calculations, machine learning—the application of AI to large data sets—will do the same for predictions, argue Ajay Agrawal, Joshua Gans and Avi Goldfarb, who teach at the University of Toronto’s Rotman School of Management. “Prediction about uncertain states of the world is an input into decision making,” they wrote in a recent paper.
Unlike spreadsheets, machine learning doesn’t yield exact answers. But it reduces the uncertainty around different risks. For example, AI makes mammograms more accurate, the authors note, so doctors can better judge when to conduct invasive biopsies. That makes the doctor’s judgment more valuable.
Jim Manzi, a Washington, D.C., entrepreneur whose companies develop AI-based applications for business, says over the decades the AI label has been slapped on whatever the frontier of computing is at the time. That tends to overstate its revolutionary character. Today is no different. If you took statistics in college, you learned how to use inputs to predict an output, such as predicting mortality based on body mass, cholesterol and smoking. You added or removed inputs to improve the “fit” of the model.
Machine learning is statistics on steroids: It uses powerful algorithms and computers to analyze far more inputs, such as the millions of pixels in a digital picture, and not just numbers but images and sounds. It turns combinations of variables into yet more variables, until it maximizes its success on questions such as “is this a picture of a dog” or at tasks such as “persuade the viewer to click on this link.”
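The prediction-from-inputs idea in the two paragraphs above can be illustrated with a toy logistic regression trained by gradient descent, echoing the college-statistics example of predicting mortality risk from body mass, cholesterol and smoking. The data, features and training loop here are invented for illustration; this is not a real medical model and not the article's own example code:

```python
import math

# Toy logistic regression: combine a few inputs into a predicted
# probability, then nudge the weights to improve the fit.
# All data below is fabricated for illustration.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict(weights, features):
    """Probability estimate from a bias term plus weighted inputs."""
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], features))
    return sigmoid(z)

def train(data, labels, lr=0.1, epochs=2000):
    weights = [0.0] * (len(data[0]) + 1)  # bias + one weight per input
    for _ in range(epochs):
        for features, y in zip(data, labels):
            err = predict(weights, features) - y
            weights[0] -= lr * err
            for i, x in enumerate(features):
                weights[i + 1] -= lr * err * x
    return weights

# Features: [body mass (scaled), cholesterol (scaled), smoker 0/1]
data = [[0.2, 0.3, 0], [0.9, 0.8, 1], [0.4, 0.5, 0], [0.8, 0.9, 1]]
labels = [0, 1, 0, 1]
w = train(data, labels)
```

Machine learning scales this same recipe up: far more inputs (millions of pixels instead of three health measurements), and learned combinations of variables instead of hand-picked ones, but still, at bottom, fitting weights to improve a prediction.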
AI has already made some skills obsolete. Google Translate is faster, cheaper and often as good as a human interpreter. Some AI programs can outperform human radiologists at identifying malignant tumors.
Yet as AI gets cheaper, its potential applications will grow. Just as better weather forecasting makes us more willing to go out without an umbrella, Mr. Manzi says, AI emboldens companies to test more products, strategies and hunches: “Theories become lightweight and disposable.” Those companies need people who know how to use it, and how to act on the results.
Mr. Manzi should know. He co-founded a company, Applied Predictive Technologies, to test companies’ strategies using AI, and sold it to Mastercard Inc. for $600 million in 2015. It’s still hiring.
Write to Greg Ip at email@example.com