Is AI Held Back by the Profit Motive?

In my overview post on the Internet of Things, I offered the suggestion that IoT needs artificial intelligence to succeed:

One of the primary challenges for the smart home and office is the creation of a genius butler/housekeeper/office manager/chauffeur that always knows what’s required. We’re going to need somebody like Jeeves who can translate our ever-changing whims into an objective state of affairs, and that means artificial intelligence will finally get called up to the major leagues. Is it ready? Probably not yet, but some astute early adopters are already calling for it.

Hence, I greeted the news last week that Elon Musk and some other Silicon Valley heavyweights were forming a non-profit, OpenAI, to conduct research on AI with more than a little interest. The OpenAI announcement implies that the profit motive is holding back AI development:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.

The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

I’m not sure that the profit motive is to blame for the slow progress we’ve made on AI over the last 30 years, but I’ll get to that after I address the main problem with AI research as I see it. I should note that while I’ve worked with one of the top minds in AI, Marty Tenenbaum, I’m not a professional AI researcher myself, so what follows may be unfair to the AI community.

The first and most important problem for AI, as for any field of computer science, is the question of goals. Do we want AI to create computers that do all the things that humans do (only better), or do we want computers to get better at doing the things that computers do? I would suggest the latter is the more productive path, because it serves a more comprehensive goal: improving quality of life over both the short term and the long term. This is to say that the goals of any field of technology research should be bound to human welfare in a broad sense. We don’t improve human life, for example, by destroying the environment, wiping out all of our plant and animal companions, or exploiting huge swaths of humanity so that a few can live ridiculously well. I’m making a utilitarian argument that technology’s job is to produce “the greatest happiness for the greatest number,” a formulation from Francis Hutcheson (in his 1725 Inquiry concerning Moral Good and Evil) that is commonly misattributed to Jeremy Bentham.
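To make that criterion concrete, here is one way to write the utilitarian objective as a formula. This is my own sketch, not a formulation found in Hutcheson or Bentham; the symbols u and δ are assumptions introduced purely for illustration.

```latex
% A sketch of the utilitarian objective described above (my notation,
% not Hutcheson's or Bentham's): choose technologies to maximize total
% welfare W over all n people and all future periods t, where u_{i,t}
% is person i's well-being at time t and the discount factor
% \delta \in (0,1] weighs the long term against the short term.
\max \; W = \sum_{i=1}^{n} \sum_{t=0}^{\infty} \delta^{t}\, u_{i,t}
```

A δ close to 1 encodes the “long term” half of the claim: gains bought by destroying the environment score poorly because the losses recur in every future period, and the sum over all n people rules out enriching a few at the expense of the many.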

Hence, we don’t need computers to mimic human consciousness; we need them to assist humans in the creation of new processes, devices, and systems that improve the overall state of our life experience. Successful robots don’t mimic humans; they form parts of production processes that humans would not be able to conduct on their own. One case in point is IBM’s first computer terminal assembled by robots, the 3270. In order to automate the production process, IBM eliminated a number of assembly chores by creating VLSI computer chips that reduced the component count and allowed designers to simplify the assembly. People quipped that the cost reduction for the final terminal had more to do with replacing an inefficient design with a better one, but the better design had to be assembled in a process that would have been too tedious and repetitive for humans to perform over the long term. So it was an example of computers doing things that only computers can do.

This model works in very many facets of human life: health care, food production, transportation, education, and the rest of the IoT market niches. To take just one example, genetic engineering can create tastier, more nutritious varieties of food – with lower environmental impact – faster than traditional trial-and-error breeding in test plots has been able to do. This comes about by humans collecting information about genes and genetic interactions and then directly editing plant and animal genomes to produce the desired results. With the advent of CRISPR technology, gene editing can be done by small startups and underfunded universities just as easily as agribusiness giants have used first- and second-generation genetic engineering tools. The only things holding CRISPR back are regulatory stagnation and the vested interests of traditional food retailers and restaurant chains such as Whole Foods Market and McDonald’s. These firms aren’t food producers, but they set standards for a surprisingly large number of farmers.

In the ag-tech field, we literally have more technology than we know what to do with, and nowhere to go with it, because legal and regulatory frameworks and consumer preferences have lagged behind. We can see similar things happening in medicine, education, and transportation.

It seems to me that we’re in a similar place with AI. Cars, for example, have voice control systems that should, in theory, assist with navigation, climate control, maintenance, and entertainment. But are they any good? I’m sure some are, but the one in my high-priced 2015 car is an exercise in frustration any time I want to do something more complicated than adjusting the temperature. Yet the things the car’s voice response system can’t do, such as navigation and selecting music playlists, are handled perfectly well by the $300 (on sale at Best Buy) Apple Watch. Siri is AI that performs a wide variety of tasks perfectly well, and it doesn’t cost an arm and a leg like the infotainment system in the typical car.
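To illustrate the gap concretely, here is a minimal sketch, in Python, of the difference between the rigid command grammar typical of in-car systems and the looser intent matching that makes a general-purpose assistant feel capable. Everything in it – the patterns, the intent names, the keyword lists – is invented for illustration; it does not reflect any real car’s or Apple’s actual API.

```python
import re

# Rigid system: each utterance must match one exact pattern, or fail.
RIGID_COMMANDS = {
    r"^set temperature to (\d+)$": "climate.set_temp",
    r"^turn on the radio$": "media.radio_on",
}

def rigid_parse(utterance):
    """Return (action, args) only if the utterance matches a pattern exactly."""
    for pattern, action in RIGID_COMMANDS.items():
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return action, match.groups()
    return None  # anything off-script fails outright

# Looser system: score intents by keyword overlap and tolerate phrasing.
INTENT_KEYWORDS = {
    "climate.set_temp": {"temperature", "degrees", "warmer", "cooler"},
    "media.play_playlist": {"play", "playlist", "music", "songs"},
    "nav.route": {"navigate", "directions", "route", "fastest"},
}

def loose_parse(utterance):
    """Pick the intent whose keywords best overlap the words spoken."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    score, intent = max(
        (len(words & keywords), intent)
        for intent, keywords in INTENT_KEYWORDS.items()
    )
    return intent if score > 0 else None

if __name__ == "__main__":
    print(rigid_parse("set temperature to 68"))     # ('climate.set_temp', ('68',))
    print(rigid_parse("play my driving playlist"))  # None - off-script, so it fails
    print(loose_parse("play my driving playlist"))  # 'media.play_playlist'
```

The rigid parser fails closed on anything off-script, which is exactly the frustration described above; the loose matcher degrades gracefully, which is one reason a general-purpose assistant feels smarter even when the underlying tasks are the same.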

So regardless of who produces the AI, it has no impact until it’s built into products and sold. Does eliminating the profit motive really help with that? Researchers working for non-profits are supported by generous donors, which is fine, as long as there’s a firewall between donors and researchers so the donors don’t exercise undue influence.

But the non-profit model doesn’t help bridge the gap between research and product development, and I would maintain that this is where the bottleneck exists. The nice thing about working for profit is that it connects researchers with real people who have real needs. This is especially important for Silicon Valley types, because they live in a bubble of artificial problems, with limited contact with ordinary people facing real-world problems.

So I don’t expect much from OpenAI, although I wish its founders all the best.