
Academy Xi Webinars

Safety by design

By Academy Xi


New global legislation is demanding corporate responsibility for keeping vulnerable users safe online. As a direct result, industry demand for safe, ethical design is skyrocketing. As developers and designers, we hold the keys to making digital products safe.

Discover what you can do to upskill, understand risks and make sure the products you create are a force for good.

Join our speakers: 

In this video, you’ll learn:

  • Why demand for safe, ethical design is skyrocketing
  • How online child abuse became the world’s fastest-growing major crime
  • Why being unprepared or unaware of the risk to users is no longer acceptable
  • How new global legislation is increasing corporate responsibility around safe online practice

Want to keep up to date with the latest webinars from Academy Xi? Follow us here on LinkedIn.

Academy Xi Blog

Design Ethics for Artificial Intelligence

By Charbel Zeaiter


After a great recent weekend at Future Assembly in Melbourne, I was compelled to repeat my talk, “Design Ethics for Artificial Intelligence” (slightly abridged).

Artificial Intelligence has been edging its way into our reality for a while now, and it’s a topic that’s been discussed for decades. The fear of humans becoming slaves to AI is an interesting one; some observers would say we’re already slaves to our devices and gadgets, and therefore to intelligence outside ourselves.

The purpose of my talk was not to paint the expected doomsday view of AI and its possible effects on humanity, but to open up discussion about the complexity of embedding value systems into decision making.

To take action, we need to assess a situation and make a judgement call. Where do these judgement calls come from? They come from our value systems, and our value systems are complicated.

Using Isaac Asimov’s Three Laws of Robotics, I posed a single scenario (with some variants each time) and asked the audience to make a judgement call based on different value systems:

  1. Emotional
  2. Economic
  3. Probability
  4. Religious
  5. Environmental
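
As a toy illustration (entirely my own sketch, not part of the talk, with made-up effects and weights), the core idea that the same action scores differently under different value systems could be modelled like this:

```python
# Toy model: one candidate action is judged differently depending on
# which value system supplies the weights. Every name, effect and
# weight below is a hypothetical illustration.

def judge(action_effects, value_system):
    """Score an action by weighting each of its effects under one value system."""
    return sum(value_system.get(effect, 0) * magnitude
               for effect, magnitude in action_effects.items())

# Hypothetical scenario: Caitlin must decide whether to abandon a task
# to deal with a faulty appliance.
action = {"human_distress": -2, "property_damage_avoided": 3, "task_delay": -1}

# Each value system cares about (weights) different effects.
value_systems = {
    "emotional":     {"human_distress": 5, "task_delay": 1},
    "economic":      {"property_damage_avoided": 4, "task_delay": 2},
    "environmental": {"property_damage_avoided": 1},
}

for name, weights in value_systems.items():
    print(name, judge(action, weights))
```

The same action comes out positive under one value system and negative under another, which is exactly the kind of conflict the audience had to resolve by hand.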

Each scenario featured a central character, Caitlin, a robotic butler, who faced a choice that the audience had to resolve through each of the above value systems.

A great Q&A followed, exploring the frustrating, flawed, emotional and highly subjective complexity of the human condition, aka our value systems.

When designing Artificial Intelligence, what are we really designing? Furthermore, what happens when Artificial Intelligence is no longer artificial and can ponder its very existence?

Download the presentation and discuss it at work and at home. I’m not at all worried about AI if it’s left alone; I’m concerned when humans, who are fundamentally flawed, design decision-making into immature intelligence.