AI: Useful Tool or Existential Threat?
Perhaps because there seems to be a shortage of natural intelligence these days, artificial intelligence is grabbing a prominent place in public discourse. MIT Technology Review is running an exceptional column by iRobot founder Rodney Brooks on the perils of prediction, “The Seven Deadly Sins of AI Predictions”.
Brooks builds on the most famous trope about tech futurism: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” It’s pertinent because computer scientists have been talking about AI since the ’60s, but consumers didn’t see much benefit from the first 50 years.
But that’s changing now that we’re on the brink of pervasive driverless cars, unmanned drones, machine learning, and consumer robots more sophisticated than iRobot’s Roomba. Brooks cautions that these useful tools fall far short of the “artificial general intelligence” that critics fear.
Artificial General Intelligence is Incredibly Hard
AGI is the idea behind scary, out-of-control systems like the Skynet of Terminator fame. The great horror about AGI is that the systems will attain self-awareness and decide humans are pests that need to be eliminated.
Some of us certainly are, but we’re no closer to AGI today than we were 50 years ago. At the present rate of progress, that means we’ll have Skynet in roughly infinity years. Even if Ray Kurzweil discovers a mix of vitamins and vampire blood that extends life to hundreds of years, I won’t be around then and neither will you.
Humans can’t build AGI, so those who are working on it are hoping to build systems that can build systems that can build AGI. This signals that we still have no clue how to approach the problem; hence, no progress.
Reacting to Sensory Patterns is Pretty Easy
Brooks tries to spread a little reassurance about AI and job loss by asserting that no grounds and maintenance workers in the U.S. have lost their jobs to robots. If that’s true, it won’t be for long. One application that will be on the market fairly soon is the automatic weeder for farms.
The John Deere tractor company has opened a lab in San Francisco to evaluate acquisitions and develop new technology. One of Deere’s first moves was the acquisition of a fancy weeding system:
The company spent $305 million to acquire Blue River Technology, a startup with computer vision and machine learning technology that can identify weeds–making it possible to spray herbicides only where they’re needed. The technology reduces chemical use by about 95%, while also improving yield.
This makes sense for Deere because the company already has precision agriculture systems that can plant, fertilize, and grow food while protecting it from pests. Driverless cars are leading edge, but farmers have been using driverless tractors for more than two decades.
Driverless tractors are easy because they run predictable routes at low speeds where traffic isn’t an issue. Their enabler is civilian use of GPS – something President Reagan made possible – and basic computer vision systems.
Victims of Bad Metaphors
Most AI systems today don’t do anything like “thinking”. They capture information from data streams of various sorts, do some pattern-matching against a database to understand what they’re seeing, and then employ a programmatic reaction.
In the case of the Deere weed control system, the patterns correspond to well-known varieties of weeds and the reaction is a very localized shot of herbicide. This reduces herbicide use by as much as 95%.
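The capture/match/react loop described above can be sketched in a few lines. This is a toy illustration of the pattern, not Blue River's actual system: the function names are hypothetical, and the substring match stands in for real computer vision.

```python
# Toy "see and spray" loop: capture a frame, match it against known
# weed patterns, and react with a localized action. Names are
# illustrative, not Deere's or Blue River's API.

def classify(frame, weed_signatures):
    """Match one camera frame against a database of known weed patterns."""
    for name, signature in weed_signatures.items():
        if signature in frame:  # stand-in for a real vision model
            return name
    return None

def see_and_spray(frames, weed_signatures):
    """For each frame, spray only where a known weed is matched."""
    actions = []
    for frame in frames:
        weed = classify(frame, weed_signatures)
        # React: a localized shot of herbicide only on a match.
        actions.append(("spray", weed) if weed else ("skip", None))
    return actions
```

Nothing in the loop resembles "thinking": the system's competence lives entirely in the pattern database and the fixed reaction wired to each match.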
Spraying is preferable to organic methods like blow-torching and pulling because weeds tend to resist these easy controls: their root systems are too deep to be pulled out of the ground completely, and a blow torch can start fires and still not kill the weed.
Fast Company calls this system “see and spray”, which is a much better description than the terms that involve thinking. We don’t really want “thinking machines” as much as machines that can eliminate tedious drudgery. And few things are as tedious as hand weeding, an occupation that violates regulations for farm worker safety in some states.
Work That Nobody Wants to Do
This system doesn’t put farmworkers out of work because it only does jobs that are fast becoming unlawful. But it does make it harder for regulators to issue waivers from work rules designed to protect workers from exploitation.
Given the labor shortages in farm country, systems like this would make sense even if farmworkers didn’t have hard lives. “See and react” systems never get bored, never cut corners, and never take time off from work during harvest.
They also make for a cleaner environment, limit waste, lower the cost of food, and improve reliability. If all we ever get from so-called AI is safer and cheaper food with less environmental impact than traditional methods, I’d be more than happy.
The fact of the matter is that many of our traditional methods of farming and manufacturing are threats to human life and safety. So it’s not AI that we need to worry about as much as the primitive status quo.