“AI is far more dangerous than nukes.” (Elon Musk)

“The development of full artificial intelligence could spell the end of the human race…It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” (Stephen Hawking, speaking to the BBC)

“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” (Gray Scott)

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” (Alan Turing)

“If we do it right, we might be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become a powerful catalyst that we need to reclaim our humanity.” (John Hagel)

Do We Really Have to Fear AI?

Warnings like these, along with iconic films such as The Terminator or Stanley Kubrick's singular 2001: A Space Odyssey (based on "The Sentinel", a short story by the great Arthur C. Clarke), have all painted a dark picture of the advent of an all-powerful, omniscient AI. Even an optimistic message like John Hagel's quote opens with a big conditional 'if'. But is the danger real? One must remember that any new technology deployed on a large scale often provokes opposition. Back in the day, people feared that the iron machines (trains) slowly spreading across the landscapes of many countries would make cattle sterile, among other irrational worries. While it is always healthy to debate the application of a new technology that will impact society at large, one should neither overestimate nor, worse, underestimate the associated risks. Today we will talk about strong AI.

"...AlphaGo also relied on neural networks and reinforcement learning...

What is strong AI, compared to the weak AI in general use today? Oversimplified in one sentence: strong AI can do anything a human can do, and do it far better, whereas weak AI outperforms humans on only one specific task. We see examples in everyday life, from chatbots that handle end-to-end conversations, to AI managing delicate railway traffic, to machine learning systems that detect diseases and rival or surpass their human counterparts. An often-quoted example is that AI now routinely beats the best chess grandmasters, a feat first achieved in 1997 when the supercomputer Deep Blue crushed the legendary champion Garry Kasparov. Machines then went on to master the Chinese game of Go, a far more complex strategy game than chess (in 2016, Google DeepMind's program surprisingly defeated Lee Sedol in the match AlphaGo versus Lee Sedol). While Deep Blue relied mainly on brute computational force to evaluate millions of positions, AlphaGo also relied on neural networks and reinforcement learning, and nowadays machines routinely compete against and defeat human players online.
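To make the contrast concrete, here is a minimal, hypothetical sketch of the two ideas in Python (toy code, not Deep Blue's or AlphaGo's actual systems; `moves`, `evaluate`, and `model` are stand-ins): brute-force search expands every future move down to a fixed depth and scores positions with a handcrafted heuristic, while the learning-based approach replaces that heuristic with a trained value network, so far fewer positions need to be examined.

```python
# Toy sketch: brute-force game-tree search vs. a learned evaluation.
# `moves`, `evaluate`, and `model` are hypothetical stand-ins.

def minimax(state, depth, maximizing, moves, evaluate):
    """Deep Blue style: exhaustively expand every legal move down to a
    fixed depth and back up the best handcrafted heuristic score."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)  # handcrafted position score
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in successors]
    return max(scores) if maximizing else min(scores)

def learned_value(state, model):
    """AlphaGo-style idea: a neural network trained with reinforcement
    learning estimates a position's value directly, pruning the search
    dramatically instead of evaluating millions of positions."""
    return model(state)
```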

An interesting historical aside: in 1966, MIT student Richard Greenblatt wrote the chess program Mac Hack VI in MIDAS macro assembly language on a Digital Equipment Corporation PDP-6 computer with 16K of memory. Mac Hack VI evaluated 10 positions per second.

"...no computer program could defeat even a 10-year-old child at chess. . ...

In 1967, several MIT students and professors (organized by Seymour Papert) challenged Dr. Hubert Dreyfus to play a game of chess against Mac Hack VI. Dreyfus, a professor of philosophy at MIT who went on to write the book ‘What Computers Can’t Do’, questioning the computer's ability to serve as a model for the human brain, had asserted that no computer program could defeat even a 10-year-old child at chess. Dreyfus accepted the challenge. The computer was beating Dreyfus when he found a move that could have captured the enemy queen. The only way out for the computer was to keep Dreyfus in check with its own queen until it could fork his king and queen and force an exchange, and that is exactly what it did. Soon Dreyfus was losing, and the computer finally checkmated him in the middle of the board. The story is anecdotal, but it is worth remembering when listening to those who predict that strong AI will never happen.

One thing is sure: the great majority of artificial intelligence researchers regard the advent of strong AI as inevitable. Where they fail to reach a consensus is the time frame, though even the most pessimistic would bet that it will happen before the end of the century.

What’s the big deal?

Any way you look at it, and even without considering an apocalyptic Terminator-style scenario, strong AI will create profound changes in every aspect of our society, changes so profound they can only be compared to the mastery of fire or the Industrial Revolution. Do not think only in terms of job losses in professions we used to believe required humans. Think of all the problems we could have solved ourselves, all the mysteries and research whose rewards lie in hard-won results: all of that will end, because AI will do it faster and better, and present it with voluminous amounts of detailed information. We will also have to deal with philosophical and even religious implications. What rights will a strong AI have? Do we have the moral right to enslave a thinking entity? And as huge as those issues already appear, they are mild in comparison to what could possibly go wrong.

Strong AI goes rogue.

Our civilization has dealt with new technologies in a consistent way (Elon Musk discusses this interestingly with Joe Rogan); our big problem, however, is time. When you introduce a new technology, say a new smart toaster, you evidently have to get it through all the safety regulations before bringing it to market. Suppose that, despite all your precautions, accidents occur with the toaster and even cause deaths. Typically the company will defend its product, the public will split into two camps, for and against the smart toaster, and the regulators will pass new laws requiring all toasters of this kind to incorporate a feature that prevents injury or death. But this process is long, and years can pass before all parties settle the problem and the toaster can be enjoyed in total safety. This after-the-fact approach has worked reasonably well, even when large companies threw their full weight against regulation (seat belts and cigarette smoking are famous examples). The problem is that this method of late regulation will not work with AI. Once the Singularity (strong AI) is here among us, it will be too late for any regulation. Regulations will have to be enacted and enforced before strong AI enters the scene.

How will regulations help?

The first measure is a built-in safety system, for example one implementing Asimov’s Laws of Robotics. The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, except where this contradicts the first law. The third law is that a robot shall avoid actions or situations that would cause harm to itself, except where this contradicts the first or second law. [6]
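As a purely illustrative toy (hypothetical names throughout, and nothing like a real safety system), the three laws can be read as a strict priority ordering over a robot's candidate actions:

```python
# Toy sketch: Asimov's Three Laws as a priority filter over actions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # First Law concern
    ordered_by_human: bool = False  # Second Law concern
    harms_robot: bool = False       # Third Law concern

def first_law_permits(action: Action, inaction_harms_human: bool = False) -> bool:
    """Never harm a human, and never stay idle if idleness itself
    would let a human come to harm."""
    if action.harms_human:
        return False
    if action.name == "do_nothing" and inaction_harms_human:
        return False
    return True

def choose(actions: list[Action]) -> Action:
    """Among First-Law-permitted actions, prefer obeying human orders
    (Second Law), then self-preservation (Third Law)."""
    lawful = [a for a in actions if first_law_permits(a)]
    lawful.sort(key=lambda a: (not a.ordered_by_human, a.harms_robot))
    return lawful[0] if lawful else Action("do_nothing")
```

Even this toy makes the article's point visible in code: everything hinges on the `harms_human` flag, that is, on being able to define 'human' and 'harm' perfectly in the first place.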

At first view, these laws in a sentient robot should avoid major problems, but it is not so simple: you would have to define a human being perfectly. Is someone with a prosthesis still a human being? Does someone speaking with a different accent still qualify? Even these laws will ultimately, if not from the very beginning, prove insufficient to avoid problems. So are we doomed? Not necessarily. The advent of AI will be a master challenge for Homo sapiens, and perhaps the solution resides in one of the main things that make us human: our ‘Values’.

"...It may be the most important piece of work that we would have achieved in this century. ...

As Nick Bostrom, the ultimate authority in the field (at least in terms of pondering the subject), points out, people will program AI to wage war. Worse still, an inhumane totalitarian political regime could use strong AI to impose its diktat on society, and it could take us hundreds of years, if not more, to get out of that situation. But if instead we teach AI our values, then we may not have to worry about keeping the genie in the box; we could have invented the most effective companion the human race could have to confront the challenges of the future. By creating strong AI, we will have created intelligence itself, bypassing millions of years of biological evolution and certainly playing god. At the same time, we would have created something vastly more intelligent and capable than ourselves. Our best bet is to create it as a friendly companion, with all our values built in and available to the machine through its own deductions. If we keep in mind that the arrival of strong AI will change our lives radically, then, as Nick Bostrom says, future generations may acknowledge just one thing about us: despite all our mistakes, we got those regulations right. It may be the most important piece of work we achieve in this century.

When?

That being said, the real question is: when will strong AI step into our lives? The model we use to create artificial intelligence is based on the human brain, yet we are very far from understanding the human brain, so is the direction we are taking flawed? Not exactly. The Church–Turing thesis, associated with Alan Turing, one of the creators of the modern framework of computing, and the logician Alonzo Church, stipulates that, given unlimited time and memory, anything computable can be solved algorithmically. This matters because deep learning and other subsets of artificial intelligence are largely a function of memory and compute: with a large enough amount of both, problems of very high complexity can be attacked algorithmically. Combine that with Moore’s law (the observation that the number of transistors on a microchip doubles roughly every two years while the cost of computing keeps falling, so we can expect the speed and capability of our computers to increase every couple of years), and some theoreticians and AI experts predict that a singularity could be born around 2060-2065. Others place it closer to us, and a sudden leap is always possible, so humanity could easily get caught napping on the subject. We do not have much time to invent new disciplines like robopsychology (as predicted by Isaac Asimov), and even less time to think about the regulations and every other means necessary to help humanity go through this transition smoothly. If we fail, then maybe the days we have left on this planet will be counted in decades…
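As a back-of-the-envelope illustration of how Moore’s law compounds (pure arithmetic; the 2020 baseline of roughly 50 billion transistors is an assumed round figure for illustration only), doubling every two years yields about a millionfold increase over forty years:

```python
# Moore's law projection: capacity doubles every two years.
BASE_YEAR = 2020
BASE_TRANSISTORS = 50e9   # assumed round baseline for illustration
DOUBLING_PERIOD = 2       # years per doubling

def projected_transistors(year: int) -> float:
    """Project the transistor count forward from the baseline."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD
    return BASE_TRANSISTORS * 2 ** doublings

for year in (2020, 2040, 2060):
    print(f"{year}: ~{projected_transistors(year):.1e} transistors")
# 2060 is 40 years out: 20 doublings, roughly a millionfold increase.
```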
