
Artificial Intelligence and Ethics — Where to Draw the Line?: International Federation for Information Processing


After a 75-year incubation, Artificial Intelligence (AI) has become a household word, reflected in popular culture through books, movies and even music.

From self-driving vehicles and interactive robots to Apple’s Siri concierge and IBM’s Watson, which is increasingly being used to solve business problems, AI technology is playing a growing role in our day-to-day world.

While true AI systems are still far less common than most people think (much of what we call "AI" is simply a set of pre-programmed rules that the software applies in different contexts), impressive advances continue to be made in autonomous, adaptive and intelligent systems, and their impact will only grow over time.

Ensuring Trustworthy AI Systems

As President of the International Federation for Information Processing, the global federation of information and communication technology (ICT) professional societies, I'm conscious that the work our members and others engage in to program these systems is critical to their performance and their trustworthiness.

To ensure that the impacts of AI systems remain positive and constructive, it is essential that we build in certain standards and safeguards.

Take the example of autonomous cars, which rely both on their self-driving functions and on their ability to access and interpret information from their surroundings to navigate their environment safely.

While automation functions enable the car to start, accelerate, turn and brake, the way the system interprets additional information from its environment (other vehicles, speed limits, terrain, etc.) drives decisions about when and how to take those actions.

Currently, most autonomous vehicles respond to different situations in a predetermined way. For example, if the car in front brakes, they will also slow. And if the car behind accelerates at the same time as the car in front brakes, they will attempt to change lanes as their sensors provide input about the other vehicles' behaviour. But what happens if changing lanes means hitting another car, a wall or, worse still, a pedestrian?
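To make these predetermined responses concrete, here is a minimal rule-table sketch in Python. The situation fields, action names and thresholds are purely illustrative assumptions, not any manufacturer's actual control software.

# Hypothetical illustration of predetermined, rule-based driving responses.
# All names and rules here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Surroundings:
    car_ahead_braking: bool
    car_behind_accelerating: bool
    adjacent_lane_clear: bool  # no vehicle, wall or pedestrian detected

def decide(s: Surroundings) -> str:
    """Return a driving action from a fixed rule table."""
    if s.car_ahead_braking and s.car_behind_accelerating:
        # Boxed in: change lanes only if sensors report the lane is clear;
        # otherwise fall back to braking.
        return "change_lane" if s.adjacent_lane_clear else "brake"
    if s.car_ahead_braking:
        return "brake"
    return "maintain_speed"

print(decide(Surroundings(True, True, False)))  # -> "brake"

Every branch is fixed in advance; the system never produces a response its programmers did not anticipate, which is precisely the limitation discussed next.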

In circumstances such as these, a human driver might take any one of a number of options (aggression, caution, freezing or evasion), many of which could result in an accident.

The reality is that self-driving cars won’t really be practical until all vehicles are self-driving and the unpredictable human factor has been removed from the equation. But then, given that the true test of an AI application is its ability to learn and make unprogrammed decisions, one wonders how unpredictable AI might be in such a context.

RELATED: The next frontier is here: 3 key capabilities that make AI so valuable

Building in Safeguards

Most of my work in autonomous and adaptive systems has related to space exploration through my involvement with NASA and other space agencies. Here, where there are so many unknowns, there is a limit to the number of situations we can predict and, thus, for which we can program.

The solution in this and other cases involving artificially intelligent systems is to define the range of actions or decisions they can make and where they must defer to human judgement.

If we want a system to be truly adaptive, we must give it a range of actions it can take, without specifying exactly what it must do, while also prohibiting certain actions. In the self-driving car example, a vehicle with a prime directive to save human life might shut down to avoid causing an accident. While this might be an appropriate action if the car were driving on a back street, it could be catastrophic if the vehicle were driving on a busy highway.
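A minimal sketch of this bounded-autonomy idea follows, assuming purely illustrative action names rather than a real specification: the system chooses freely within a permitted envelope, certain actions are prohibited outright, and anything unrecognised is deferred to human judgement.

# Illustrative sketch of bounded autonomy: a permitted envelope of actions,
# a prohibited set, and deferral to a human for everything else.
PERMITTED = {"brake", "slow_down", "change_lane", "maintain_speed"}
PROHIBITED = {"shut_down_on_highway", "exceed_speed_limit"}

def constrain(proposed_action: str) -> str:
    if proposed_action in PROHIBITED:
        return "defer_to_human"   # never allowed, regardless of context
    if proposed_action in PERMITTED:
        return proposed_action    # adaptive choice within the envelope
    return "defer_to_human"       # unknown action: escalate rather than guess

print(constrain("change_lane"))           # -> "change_lane"
print(constrain("shut_down_on_highway"))  # -> "defer_to_human"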

It’s also important for AIs and other autonomous systems to incorporate appropriate security and privacy measures to ensure they operate ethically and within the law, as well as protecting them from external hacks or other intrusions.

As more and more decisions are made without human involvement, it's important that we specify the range of behaviours that society will accept from AIs and those that it won't.

RELATED: Preparing for an AI-driven society

In order for humans to accept and trust AI systems and their actions, we need to build some predictability into their behaviour, or at least boundaries beyond which they cannot go.

Asimov's Laws of Robotics may be the stuff of stories, but they provide the sense of certainty that will be a prerequisite for most people to be willing to incorporate AI systems into their daily lives, particularly where safety-critical functions are concerned.

Mike Hinchey

******

This article is syndicated with permission from Mike Hinchey and Caroline New at Quantum Values.

Mike Hinchey is President of IFIP (International Federation for Information Processing) and Vice-Chair (and Chair-Elect) of the IEEE UK & Ireland section. Hinchey holds a B.Sc. in Computer Systems from the University of Limerick, an M.Sc. in Computation from the University of Oxford and a PhD in Computer Science from the University of Cambridge. He is a Chartered Engineer, Chartered Engineering Professional, Chartered Mathematician and Chartered Information Technology Professional, as well as a Fellow of the IET, the British Computer Society and the Irish Computer Society.

Want more thought leadership on the opportunities and risks of AI?

Go in-depth with the UN’s special edition of ITU News Magazine.
