
The Fall and Rise of Ai

There is a revolution happening in analytics, and Artificial Intelligence (Ai) is at its center. My goal with this post is to catch you up on the 80-year history of Ai by the time I finish my coffee. Fair warning: I take longer than some!

The idea of non-human intelligence has been around since antiquity. However, serious research on Artificial Intelligence (Ai) coincided roughly with the invention of the first programmable machines around 1940 (the Z1, built by the German civil engineer Konrad Zuse between 1936 and 1938, was the world’s first programmable computer to use binary logic). Around the same time, Alan Turing’s work on the theory of computation showed that a machine could simulate almost any act of mathematical deduction just by shuffling ones and zeroes. If we take human intelligence to be our ability to reason, it was suddenly, at least in theory, possible for a machine to mimic it. The promise of this research was immediately captivating, and expectations soared.

Lofty claims around a new technology are a double-edged sword. Funding pours into projects and companies, seemingly overnight. However, disappointment has a long memory and funding can dry up for years if expectations fall short. An Ai winter can set in.

There have been at least a couple of such winters. The first came during the Cold War, when the US government wanted Russian documents translated quickly. Instead, researchers quickly discovered that common sense was neither common nor easy for machines. One story goes that an effort to translate “the spirit is willing but the flesh is weak” from a Russian cable yielded “the vodka is good but the meat is rotten” (Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach, 2nd ed., Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2).

I think our knowledge is grounded in our embodied experience. We experience reality with several senses, and that shapes how we communicate. Language is messy and constantly evolving. No wonder the promise of Ai did not translate at the time. But I digress.

The second Ai winter was a bit of an own goal. Back in the seventies, the Ai community split between those who saw promise in rigid, top-down symbolic Ai (e.g., expert systems) and those who favored flexible, bottom-up connectionist Ai (e.g., networks of interconnected artificial neurons). Symbolic Ai won a battle but lost sight of the war. Expert systems were all the rage for a while, but they were hard to maintain, did not learn, and were not fault-tolerant. Several years and millions of dollars later, they had fallen from grace by 1990. Important advances (for example, backpropagation) were made in the theory of connectionist Ai, but the computer processing power to apply them simply was not there.

Research in Ai plateaued during the ’90s and 2000s. Some members of the community re-branded themselves as cognitive scientists, or said they worked in informatics, analytics, or even machine learning. Watered-down…err…narrower definitions of Ai seemed technically feasible, while general Ai was considered a bit of a quack pursuit.

As a quick aside, narrow Ai is what is all around us today. Phone assistants like Siri, the Google search engine, image recognition, and self-driving cars are all examples of machines trained for specific (or, narrowly defined) tasks. General Ai is a machine that can do everything, including being sentient. We don’t have one of those…yet.

Two things happened after 2010 that thawed the latest Ai winter. Researchers converged on a specific type of connectionist neural network architecture as the best candidate for learning patterns from data; this would ultimately lead to deep learning. I will describe it in another post, but for now I will note that it is remarkable that essentially every example of Ai you see today runs on the same basic algorithm! The catch was that this type of architecture required massive processing power, and the CPUs of the day were struggling to keep up.
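
To make “the same basic algorithm” a little more concrete, here is a minimal sketch of the recipe nearly all of these systems follow: stack layers of artificial neurons, make a prediction, measure the error, and use backpropagation with gradient descent to nudge the weights. This is purely illustrative (it uses the PyTorch library and a toy made-up problem, neither of which comes from the post):

```python
# Illustrative sketch of the deep learning recipe, using PyTorch (toy example).
import torch
import torch.nn as nn

# Toy data: learn y = 2*x1 - 3*x2 from random examples.
X = torch.randn(256, 2)
y = X @ torch.tensor([[2.0], [-3.0]])

# A tiny "deep" network: layers of artificial neurons with a nonlinearity in between.
model = nn.Sequential(
    nn.Linear(2, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# The same basic loop drives image recognition, speech, translation, and more:
# predict, measure the error, backpropagate, adjust the weights, repeat.
for step in range(500):
    optimizer.zero_grad()
    prediction = model(X)
    loss = loss_fn(prediction, y)
    loss.backward()   # backpropagation: compute gradients of the loss
    optimizer.step()  # gradient descent: nudge the weights downhill

print(f"final loss: {loss.item():.4f}")
```

Swap the toy data for photos, audio, or text and the outer shape of the loop barely changes; that is the sense in which today’s narrow Ai systems share one algorithm.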

Cut to 2013. The stock of NVIDIA, a chip-maker for computer graphics, had been languishing in the low teens for years. Then the company noticed all these graduate students buying its GPUs (graphics processing units). It realized that the students had not become hardcore gamers overnight; rather, the GPU architecture lends itself well to deep learning (a point made in an excellent interview with OpenAi’s Greg Brockman on This Week in Startups). A GPU can have orders of magnitude more cores than a CPU, and neural networks are massively parallelizable. It is a marriage made in binary heaven. You can take chunks of a neural network, distribute them across GPU cores, and compute them simultaneously. NVIDIA’s stock went from $25 a share in 2015 to almost $300 by September 2018.
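
For the curious, here is a hedged sketch of what “distributing chunks of a neural network across GPU cores” looks like in practice. Again, this is just an illustration using PyTorch, not anything from NVIDIA or the interview above: the exact same math runs on the CPU or is spread across thousands of GPU cores, depending only on where you place the tensors.

```python
# Illustrative sketch: the same neural network math, on CPU or GPU (PyTorch).
import torch

# Pick the GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A big batch of inputs and one layer's worth of weights.
inputs = torch.randn(4096, 1024, device=device)
weights = torch.randn(1024, 1024, device=device)

# One layer of a neural network is essentially a large matrix multiplication.
# On a GPU, the many independent multiply-adds inside this call are spread
# across thousands of cores and computed simultaneously.
activations = torch.relu(inputs @ weights)

print(activations.shape, "computed on", device)
```

The one-line device switch is the whole story of why graduate students were buying gaming cards: no change to the model, just a different place to do the arithmetic.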

Narrow Ai surrounds us today. Even my thermostat tells me that it is learning. It sounds wonderful to have little Ai helpers take over mundane tasks like programming the thermostat, organizing pictures, or recommending which show to binge next. Such harmless fun. However, the likes of Bill Gates and Elon Musk have sounded the alarm about the future potential of Ai.

One of my favorite exchanges involves the CEO of Facebook, Mark Zuckerberg, calling Ai naysayers “pretty irresponsible” in a casual BBQ video. Elon Musk was quick to respond: “I’ve spoken to Mark about this. His understanding of the subject is limited.” Ouch.

I am almost through my coffee, so I’ll save comments on the potential perils of general Ai for a future post. I’ll end by noting that this time it feels different. Saying that we have only achieved narrow Ai used to be a dig at the perceived unfulfilled promise of Ai. Today, we recognize it as Ai. In our pocket, in our home, and in our car. Our relationship with deep learning is becoming personal and pervasive. And our increased reliance on machines whose operation we no longer understand should concern us.

For Ai research and researchers, it does not feel like another winter is coming anytime soon.
