The contrast between artificial intelligence and ordinary computing is best illustrated by how machines got the better of humans in one of our favorite pastimes: board games.

If you have ever wondered why we are seeing a revolution in Machine Learning now, it is worthwhile to take a peek into how computer technology developed in the past. Tracking the advancement of the digital age over the decades since the 1970s is always a fascinating exercise. The timeline reveals a rapid growth in processing power. It also unearths some vastly obsolete devices (ever heard of the other kind of PDA: the Personal Digital Assistant?) and exposes the embarrassingly feeble capabilities of the predecessors of the devices we use today.

Exactly 17 years before this millennium, the 1983 edition of Time magazine replaced its popular ‘Man of the Year’ feature with ‘Machine of the Year’, elevating the personal computer to that pedestal. These were the nascent stages of the oh-so-familiar table-top system comprising a monitor, keyboard, mouse and CPU. At the turn of the millennium, such systems were still commonplace, albeit significantly more advanced. Smartphones today have far more memory than the Apple II, released back in 1977.

This progress is attributed to the exponential increase in the number of transistors that can be made to fit onto a computer chip, a trend popularly known as Moore’s law. A transistor is the basic silicon-based device that implements digital logic in electronic circuits. Innovations in producing these solid-state devices reduced their size while increasing their efficiency with almost every passing year. The Intel 4004 chip of 1971 contained around 2,300 transistors, and Intel co-founder Gordon Moore foretold that this number would double roughly every two years.

This amazing prediction held obediently for five decades, leading many to regard it as a fundamental law. The wonders of exponential growth have seen the number of transistors on the latest Intel chips grow to over 1.5 billion!
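
For a rough sense of what doubling every two years means, here is a small back-of-the-envelope calculation in Python. The only hard number in it is the Intel 4004’s starting count of roughly 2,300 transistors; real chips deviate from this idealized curve, so treat the projections as illustrative only.

```python
# Back-of-the-envelope illustration of Moore's law: start from the
# Intel 4004 (1971, ~2,300 transistors) and double every two years.
start_year, start_count = 1971, 2_300

for year in range(1971, 2021, 10):
    doublings = (year - start_year) // 2
    projected = start_count * 2 ** doublings
    print(f"{year}: ~{projected:,} transistors (idealized projection)")
```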

Moore’s law made computers faster and capable of completing certain tasks beyond human capabilities through sheer brute-force computation alone. The more transistors operating in a computing device, the more calculations per unit time it can complete. But do speed and efficiency in performing computations suffice to make machines smarter than human minds?

The pinnacle of computational triumph came in 1996 in Philadelphia, when IBM’s Deep Blue supercomputer was pitted against the reigning world chess champion Garry Kasparov. Chess is known to be one of the most complex games, demanding deep knowledge of strategy and the ability to foresee possible scenarios several moves in advance, and the world held its breath to see whether a creation would surpass the best among its creators.

Kasparov vs Deep Blue (Man versus Machine, the Ultimate)

In the very first game of the six-game match, Deep Blue defeated Kasparov, becoming the first machine to beat a reigning chess world champion in a tournament game. Deep Blue achieved this feat through pure combinatorics, calculating millions of possible positions and their consequences before deciding on the best possible move.
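
To get a flavour of what brute-force game-tree search looks like, here is a toy sketch of the minimax idea applied to a trivially small game (Nim). This is purely illustrative: Deep Blue’s real engine relied on alpha-beta pruning, handcrafted evaluation functions and dedicated chess hardware, but the core principle of enumerating the consequences of every move before choosing one is the same.

```python
# Toy brute-force game-tree search (minimax) on a tiny game of Nim:
# players alternately remove 1-3 sticks; whoever takes the last stick wins.

def minimax(sticks, maximizing):
    """Score from the maximizing player's perspective: +1 forced win, -1 forced loss."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else +1
    moves = [take for take in (1, 2, 3) if take <= sticks]
    scores = [minimax(sticks - take, not maximizing) for take in moves]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # Try every legal move and keep the one with the best guaranteed outcome.
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))

print(best_move(10))   # prints 2: taking 2 sticks leaves the opponent in a losing position
```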

This was a remarkable achievement, sparking debates over whether machines had finally reached a stage where they could mimic human minds.

The reality couldn’t have been further from it. While computers could multiply dozens of twelve-digit numbers in an instant, they failed at tasks any average person can do in the blink of an eye, such as differentiating between a handwritten 8 and a 9. The algorithmic logic dictating their actions was entirely pre-programmed, making them super efficient at the tasks they were designed to do and helpless at everything else.

However, what makes us human is our ability to make decisions in circumstances that stray from what we are used to dealing with. Even if a decision is erroneous, we learn from our mistakes and re-program ourselves for the future. This is the essence of our intelligence, a quality absent from Deep Blue and its contemporary computers. Ironically, instead of Deep Blue earning plaudits for its triumph over human intelligence, the match of 1996 ended up undermining the status of chess itself, reducing it to a game that could be conquered by brute force.

Today, we have smartphone applications capable of beating Deep Blue at chess, another consequence of Moore’s law continuing to shrink computing devices. However, there is a limit to how small transistors can be fabricated, and a large number of computer scientists believe that Moore’s law has already reached its saturation point.

In spite of that, progress in technology has been astonishingly fast. Not only has the size of our computational devices continued to shrink with the advent of wearable tech (smartwatches, I believe, will be looked at in a similar light to PDAs in the future), but machines are becoming smarter in addition to being faster. And yes, they can now perform those little tasks which humans take for granted but which computers simply couldn’t digest earlier: pointing out which people are smiling in a photo, comprehending human handwriting, identifying and differentiating between animals and objects, and so on.

This second big revolution in computer technology was brought about not by improving computer architecture, but by developing algorithms that make machines learn and think like humans. The turn of this millennium heralded the era of Machine Learning.

Telling apples from oranges is just the basics. Machine Learning has revolutionized the modern world in more ways than we can imagine and is now woven into large parts of our daily lives. Netflix suggests new movies based on our past choices (which it uses to build ‘personality sketches’ of every viewer). Simple smartphone apps can turn our photographs into works of art. Self-driving cars can visualize their surroundings thanks to Machine Learning. For a more comprehensive list, refer to the recent article published in BBC magazine.

So are machines finally coming close to attaining true artificial intelligence? The answer, quite positively, is yes. It took a computer beating another reigning board-game world champion to convince the critics in this debate. The game this time was Go, an ancient Chinese game in which two players place stones of opposing colors on a 19 x 19 grid, each aiming to surround more territory than the opponent. The apparent simplicity of the game hides an enormous number of possibilities that branch out with the placement of every stone; the total number of game configurations exceeds the number of atoms in the observable universe. Writing a program to conquer this game by brute force is practically impossible, regardless of how much computing power one holds.
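
A quick back-of-the-envelope calculation makes that scale concrete. The count below simply marks each of the 361 intersections as empty, black or white, which over-counts the legal positions, but it lands in the right ballpark:

```python
# Rough upper bound on Go board configurations: each of the 361
# intersections can be empty, black or white.
configurations = 3 ** (19 * 19)        # about 1.7 x 10^172
atoms_in_universe = 10 ** 80           # commonly cited estimate

print(f"board configurations ~ 10^{len(str(configurations)) - 1}")
print("atoms in the observable universe ~ 10^80")
```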

AlphaGo vs Lee Sedol

This is where machine learning was brought into the picture by developers at Google’s DeepMind, who built a Go-playing program called AlphaGo. The algorithms behind it mimic the human brain in the sense that the different stages of solving a problem are connected in a way resembling the neuronal connections in our minds; these structures are known as neural networks. Starting from very simple rules, the computer is tasked with classifying a ‘training’ dataset and allowed to make mistakes. These mistakes are fed back as an error signal that adjusts the strengths of the artificial neuronal connections between the various stages. Over a large number of iterations, the computer teaches itself the nuances of the task at hand, both seen and unseen, and reprograms its network to deal with them better. Modern machine learning algorithms involve several hidden layers of neural connections, which can adapt to different applications very quickly without a human programmer having to tweak the code. This approach, known as ‘Deep Learning’, holds the key to making machines truly intelligent.
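
To make that feedback loop concrete, here is a minimal sketch of a tiny neural network learning the XOR function with NumPy. It is only an illustration of the idea described above, nothing resembling AlphaGo’s actual code:

```python
import numpy as np

# A toy two-layer neural network learning XOR: guesses are made,
# mistakes are fed back, and connection strengths are adjusted.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

W1 = rng.normal(size=(2, 8))   # connection strengths: input -> hidden layer
W2 = rng.normal(size=(8, 1))   # connection strengths: hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: the network makes its guesses.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # The mistakes are fed back as an error signal (backpropagation),
    # nudging the connection strengths to reduce future errors.
    d_output = (y - output) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_output
    W1 += 0.5 * X.T @ d_hidden

print(output.round(2))   # typically converges towards [0, 1, 1, 0]
```

Every pass, the network’s mistakes nudge its connection strengths a little; after enough iterations, the outputs settle close to the desired pattern without anyone hand-coding the rule.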

Using similar deep-learning-based algorithms, Google’s AlphaGo played the game against itself many times before being pitted against the world champion Lee Sedol in March 2016. In a series of five games, the computer won a surprising four times. Such was the sophistication of AlphaGo’s gameplay that one of its moves in game 2 was so unexpected that it left Lee visibly stunned. Analysts later commented that the computer had explored technicalities of the game beyond human reach. This feat has truly reopened old debates about machines acquiring human-like intelligence.

But it somehow feels like we have only just scratched the surface. How much more intelligent machines can become is a question only future technological advances will answer.

About The Author

Sumeet Kulkarni is a graduate student studying Astrophysics at the University of Mississippi. Since his undergraduate days at the Indian Institute of Science Education and Research, Pune, he has harboured a keen interest in writing and public engagement through creative expressions of scientific ideas.