Academy Xi Blog

Self Identity in the Post-Work Era

By Charbel Zeaiter


On the back of my presentation at Future Assembly, we’re contemplating more and more the potential effects of AI, as well as the massive, inevitable changes that will follow once we have to rethink who we are when work may no longer be the predominant way in which we identify ourselves.

The idea that one day we would have smart machines has been floated for centuries. We have grown up hearing sci-fi stories that are optimistic about our artificial future, as well as those imagining the world as a futuristic wasteland where humans are no longer required. Technology has finally reached a point where ideas from these stories are coming to life, and it’s now becoming important for us to understand the flow-on effects these changes are going to have on life as we know it.

Robots are becoming increasingly present in the workplace. Baxter, the brainchild of Rodney Brooks of Rethink Robotics (formerly of MIT), is marketed as being able to complete “the monotonous tasks that free up your skilled human labor”. Just this week Hitachi announced that it is introducing intelligent robots to supervise and manage its employees. The robots ‘hired’ by Hitachi may be able to manage employees on the floor and boost productivity, but at what cost?

Management, at its core, has a human element that focuses on motivation, leadership, relationships and empathy. What will the fallout be for human workers without this element, and how long will it be before the robot supervisor is supervising a completely automated workforce?

Sure, we can design intelligence, but what about emotional intelligence? One company thinks it can: Aldebaran, working with Japan’s SoftBank, has created Pepper, the social robot, designed to live with humans. Pepper is described as being “a companion able to communicate with you through the most intuitive interface we know: voice, touch and emotions.” Pepper can gauge someone’s emotional state by analysing their facial expressions, body language and word choices, but is that really authentic emotional intelligence? How deep does it actually go?

Recognising someone’s mood is one thing, but are we ever going to be able to design empathy? Are we ever going to be able to design a program so complex that it can genuinely understand its human counterpart and help them navigate their emotions? And really, is designing emotional intelligence something we should do? Do we really need robots that are emotionally intelligent?

There is another question we have to ask ourselves when we start imagining this modern, completely autonomous workforce. When the world’s production capability reaches 100% continual output, what will these machines be producing? Who is it for? And honestly, how much stuff do we actually need? With our current production output doing irreversible, downright devastating damage to the planet, it raises the question: why are we really pursuing this kind of technology? Will our changing attitudes towards sustainability and protecting the planet be in alignment with those of organisations whose production lines will no longer be limited by the output capacity of humans?

“Ultimately though, the biggest change we are facing now is how we will define ourselves when work ceases to be the centrepiece of how we introduce ourselves.”

There are some pretty confronting statistics flying around about the number of jobs the world looks likely to lose over the next decade. Yet with all this uncertainty and doom-and-gloom, there are also some pretty startling opportunities. People now have an exciting reason to reinvent themselves and to learn new things. Is what we do really what defines us? If and when machines begin to dominate the workforce, will people be free to start exploring who they are and what makes them truly happy?


Design Ethics for Artificial Intelligence

By Charbel Zeaiter


After a great recent weekend at Future Assembly in Melbourne, I was compelled to repeat my talk, “Design Ethics for Artificial Intelligence” (slightly abridged).

Artificial Intelligence has been edging its way into our reality for a while now and it’s a topic that’s been discussed for decades. The fear of humans becoming slaves to AI is an interesting fear; some observers would say we’re already slaves to our devices and gadgets, therefore slaves to intelligence outside of ourselves.

The purpose of my talk was not to paint the expected doomsday view of AI and its possible effects on humanity, but to open up discussion about the complexity of embedding value systems into decision making.

To take action, we need to assess a situation and make a judgement call. Where do these judgement calls come from? They come from our value systems, and they’re complicated.

Using Isaac Asimov’s Three Laws of Robotics, I posed a single scenario (with some variants each time) and asked the audience to make a judgement call based on different value systems:

  1. Emotional
  2. Economic
  3. Probability
  4. Religious
  5. Environmental

With a central character, Caitlin, our robotic butler, we posed these scenarios and presented a choice that she had to make based on the above value systems.
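To make the exercise concrete, the kind of trade-off Caitlin faces can be sketched as a toy scoring program: the same choice wins or loses depending on which value system does the judging. The options, scenario and scores below are invented for illustration and are not from the talk:

```python
# A toy sketch of value-system-dependent decision making: each option is
# scored (0 to 1) under each value system, and the "right" choice changes
# with the system applied. All numbers here are made up for illustration.
VALUE_SYSTEMS = ["emotional", "economic", "probability", "religious", "environmental"]

# Hypothetical scenario: Caitlin must choose whom to help first.
options = {
    "help_the_owner": {"emotional": 0.9, "economic": 0.4, "probability": 0.6,
                       "religious": 0.8, "environmental": 0.5},
    "help_the_stranger": {"emotional": 0.5, "economic": 0.7, "probability": 0.8,
                          "religious": 0.8, "environmental": 0.5},
}

def choose(options, value_system):
    """Pick the option the given value system rates highest."""
    return max(options, key=lambda name: options[name][value_system])

for system in VALUE_SYSTEMS:
    print(system, "->", choose(options, system))
```

Even this crude sketch shows the problem the audience wrestled with: there is no single answer, only an answer per value system, and someone has to decide which system the machine applies.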

A great Q & A followed, exploring the frustrating, flawed, emotional and highly subjective complexity of the human condition, aka our value systems.

When designing Artificial Intelligence, what are we really designing? Furthermore, what happens when Artificial Intelligence is no longer artificial and can ponder its very existence?

Download the presentation and discuss it at work and home. I’m not at all worried about AI if it’s left alone; I’m concerned when humans, who are fundamentally flawed, design decision-making into immature intelligence.


The chatbot keeping your mental health in check

By Academy Xi


Startup Mental Health Crisis

According to a medical study by the University of California, nearly 50 percent of business founders have self-reported mental health conditions. The same study also revealed that founders were more likely than the general population to report substance abuse and bipolar disorders.

What’s happening to our entrepreneurs?

Nine out of ten startups fail. Entrepreneurs experience enormous pressure — not only from investors, but also from their staff, customers, and families. They carry the weight of the entire business on their shoulders. The startup culture has a hard time accepting and talking about failures. If entrepreneurs aren’t “killing it,” then they’re likely to be struggling alone.

The fail-fast mantra doesn’t help those founders who fail slowly and ultimately burn out. Entrepreneurial burnout is real and can land founders in the ER if they’re not careful. Avis Mulhall, founding CEO of Australia’s first disability-focused technology incubator, warns people not to put founders on a pedestal and envision them as the pinnacle of success.

As someone who has experienced burnout, Avis explains that entrepreneurs are just like everyone else. “They have the same struggles, the same fears and anxieties. The difference is that they do it anyway and they don’t let fears or anxieties hold them back.”

Technology is propelling businesses into the future faster than ever before. While the speed of technology and the rate of change may cause an entrepreneur to fail, it may also offer a solution. And that solution comes in the form of a virtual assistant, a chatbot that will listen to the stories of failure and lend a helping, artificial hand.

The Rise of Chatbots

Do machines dream?

Chatbots date all the way back to 1950, when computer scientist Alan Turing proposed the first machine intelligence test: the Turing Test. In the mid-1960s, computer scientist Joseph Weizenbaum built ELIZA, a program that carried on some of the first ever conversations between a computer and a real human. ELIZA is considered the first chatbot and paved the way for the rise of chatbots in the 21st century. Today, the ELIZAs of the world work for companies behind the scenes, often as the customer’s first point of contact.
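The pattern-and-reflection trick behind ELIZA is simple enough to sketch in a few lines: match a keyword pattern, then echo the user’s own words back as a question. The rules and responses below are invented for illustration and are not Weizenbaum’s actual script:

```python
import re

# Minimal ELIZA-style chatbot sketch. A handful of regex rules capture part
# of the user's sentence; reflect() swaps first-person words for
# second-person ones before the fragment is echoed back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(message: str) -> str:
    """Return the first matching rule's response, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when nothing matches

print(respond("I feel overwhelmed by my startup"))
# -> Why do you feel overwhelmed by your startup?
```

There is no understanding here at all, only string substitution, which is exactly why Weizenbaum was unsettled by how readily people confided in his program.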

Chatbots are commonly used by companies to provide customers with additional information and help them navigate a host of products and services. Every month, virtual assistant Siri helps 41.4 million users find the information they need without having to physically navigate the web.

While chatbots have helped millions of businesses improve their customer experience, Hollywood is painting a dark and macabre picture of AI. The AIs in I, Robot and Terminator have surpassed human intelligence and threaten to kill us all.

But, will the future of chatbots and artificial intelligence really end in fire and brimstone?

More recently, chatbots have found a new reason to exist: helping people who suffer from mental health issues find the support they need.

Could a chatbot help guide CEOs through the struggles of startup life and steer them away from an entrepreneurial burnout?

Meet Amelie, the Artificially Intelligent Chatbot

Julian Bright is the founder of Amelie, an Australian mental health chatbot that connects people with support services and resources through Facebook Messenger. Julian believes technology can play an important role in servicing communities that are isolated and supporting people who just need someone to talk to.

“Having worked in startups, it’s easy to see why mental health issues may be more pronounced,” says Julian. “Entrepreneurs are constantly juggling lots of different things and dealing with the ups and downs of startup life is hard.”

To help entrepreneurs like himself alleviate the pressures of startup life, Julian looked into the rise of chatbots to find alternative ways AI could assist humans.

Earlier this year, a study on virtual-agent therapists was conducted with war veterans suffering from post-traumatic stress. Research concluded that war veterans were up to three times more likely to reveal symptoms of post-traumatic stress to a chatbot than on a military health assessment.

This study sparked Julian’s idea of using chatbots to help assist people with mental health issues. “That’s where the idea came from: I wanted to understand how a chatbot could be used to help reduce that stigma. And it’s a stigma associated with people not only seeking help, but opening up about their issues.”


Chatbots are creating a new kind of conversation, one that mimics human interaction and replaces the human touch with an artificial one. But it’s not just chatbots changing the way we communicate: wearable technology and IoT devices are also growing in power and importance.

Learn more about the latest in technology through our online courses here.
