Intelligence is not ‘Artificial’: Human beings are in charge

Artificial intelligence is here now — it is the software and hardware that surrounds everyone on earth.

Google CEO Sundar Pichai gave a surprising interview recently. Asked by CNN’s Poppy Harlow about a Brookings report predicting that 80 million American jobs would be lost to artificial intelligence by 2030, Pichai said, “It’s a little bit tough to predict how all of this will play out.” Pichai seemed to say that the future is uncertain, so there is no sense in solving problems that may not occur. He added that Google could deal with any disruption caused by its technology by “slowing down the pace,” as if Google could manage disruption merely by pacing itself, and as if no disruption were imminent.

The term “Artificial Intelligence” often prompts this kind of hand-waving. It has built-in deniability: no definite meaning, and an impact that always seems to lie somewhere in the future. It also implies that the intelligence belongs to machines, as if humans have no control, distancing Google and others from responsibility.

Artificial intelligence is here now: it is the software and hardware that surrounds everyone on earth. We humans are the architects, refining solutions of all kinds to make AI perform intelligently. Designing systems that serve useful purposes is the intelligence; the rest is rote. It will be a long time before machines can identify new purposes and adapt solutions to them. For now, and for some time, machines will have considerable human help.

The term “Advancing Intelligence” might hold some technology CEOs more accountable. Replacing “artificial” with “advancing” signals that the intelligence is human, not mechanical, and that it is guided by the people working at technology companies. It widens the scope to the thousands of technologies, collectively intelligent, on which people already depend, and it frames the future as a function of technology companies’ roadmaps, along which their employees are (intelligently) building products to serve people in a multitude of ways. It also emphasizes advancement: social utility and impact, not only the apparent aptitude of the machine.

Google’s CEO, then, could have acknowledged that Google has already become a world leader in AI (advancing intelligence). AI is not only robots and autonomous vehicles, but also information services that extend human intelligence: search, voice recognition, mapping, news, video services (YouTube), sharable documents, cloud storage, mobile access (Android), and so on. With a mission “to organize the world’s information and make it universally accessible and useful,” Google has made a large fraction of the world’s information available through search, handling 6 billion requests every day.

Pichai could also have boldly declared that disruption is already happening. A crisis of public mistrust in Internet technology providers, including Google, is in full swing, especially around the lifeblood of the advancing future: data and information.

The public’s tolerance for data collection, or electronic surveillance, is reaching a limit, triggered most notably by technology companies’ failure to protect people. People are calling for protection of their privacy, for safety from misuse of their data and of the services built on that data, and for a guarantee of cybersecurity. Without trust in technology companies to protect them, people have reason to fear the future.

Pichai should act on his word and take charge of a disruptive crisis that has already arrived. The public needs a plan for addressing its mistrust of the Internet as a data and communication platform, a platform shaped by advancing intelligence with Google at its center.

Moreover, Google, Facebook, Microsoft, and other Internet platforms must take the lead now in innovating ways to assure the protection, safety, and security of the Internet’s most precious cargo: information and data. In doing so, they would build trust that becomes capital in the future, by investing, for instance, in tools that provide data and algorithmic transparency, in credentialing of data purchasers, in ways to query the source of any piece of information, and in steady public education.

All major technology companies should see a duty, both moral and fiduciary, to act now. Their share value rests on data, on information, and on the safety and well-being of the billions of people who have grown dependent on them. And given the stakes for citizens, the public will need regulation to enforce that duty, with or without voluntary corporate action.

By investing in technologies of trust today, major technology companies can demonstrate they accept responsibility for advancing intelligence safely, attentive to public concern. Then as other disruptions materialize — or, better yet, before they do — the public may trust them with their well-being.