Thursday 20 October 2016

Roboethics

The topic of roboethics has become increasingly important as we enter the age of Artificial Intelligence. AI and robots are about to become crucial components of mainstream society, and the rise of automation in every industry poses many ethical challenges. For instance, if a self-driving car causes an accident, who exactly is to blame? The car or its manufacturer? If a robot hurts or kills someone, who is responsible? The robot or its maker? Every advance in AI and robotics raises important ethical challenges of this kind. 

The book "The glass cage" by Nicholas Carr addresses these issues very powefully. One challenge we must contend with is the fact that the rise in automation is making us overly reliant on technology, to our detriment. Because automation is becoming so advanced and sophisticated, many people spend less time developing important skills that our forefathers would have developed. One good example of this is GPS technology. Many people now rely upon GPS whenever they travel anywhere. However, if the GPS fails for some reason, people can become severely disoriented. Another example is the issue of memory. Because people can now find important information about almost anything online, they make less of an effort to memorise it than people did in the pre-internet era. This means that we are developing our memories less effectively than we did in the past. This can not be a good thing.

At the heart of much of the controversy surrounding the future of AI is a simple yet poignant question: what exactly does it mean to be human? This question is much more difficult to answer than it may seem. As AI becomes ever more advanced, many of the tasks that were thought to be achievable only by humans are being done by AI. One specific example is the aviation industry. Many of the tasks involved in flying a plane are now fully automated, which was inconceivable twenty years ago. The downside is that pilots are becoming too reliant on automation in the cockpit. If something goes wrong with the automation, the results can be disastrous.

Many of the jobs that are done by highly trained human beings could soon be accomplished by robots or AI. We could soon have robotic doctors, dentists or pharmacists. But if a robotic medical professional misdiagnoses somebody, who will be responsible? There is no easy answer to questions like these, but we must work out answers before the AI revolution really takes off. If we don't address these issues now, they will cause very serious problems in the future.



