
Can We Trust Robots? Futurists Share Provocative Insights


How can we coexist and work with robots in the future? Should robots be employed at all? If so, which jobs should go to robots? What can robots tell us about ourselves and how we value human life?

These are just some of the questions explored by noted futurists Kai Goerlich, Chief Futurist at SAP, and Gray Scott, Futurist at GrayScott.com, on a recent episode of the Internet talk radio program Coffee Break with Game-Changers, presented by SAP. The following are some of the provocative insights shared during the one-hour show.

This summer there was a news story about Steve, a security robot hired to patrol the Washington Harbour complex in Georgetown for $7 an hour. One day Steve was found drowned in the office fountain, an apparent "suicide" of sorts. What do you think really happened to Steve?

Gray Scott: I think more than likely it was just a coding error, or it could have been a hardware situation where it ran over something and fell in. I do not have the details on this specific case, but I do know that if these machines in the future are programmed for self-preservation and know that water would essentially destroy them, they will do everything in their power to avoid those situations.

Kai Goerlich: Robotics is still in its infancy, at least where walking humanoid robots are concerned. We know that two-legged locomotion is really difficult to engineer, but we nevertheless try to build robots in our own shape. I think it was just a mechanical or algorithmic failure — certainly not suicide. But the question will be whether the frustration that something is not working can somehow be felt by an algorithm or a machine, because it is a classic science fiction idea that machines can feel what we call frustration.

Will robots ever have consciousness? If so, should they be subjected to psychological evaluations?

Gray: As we move into the future, these machines are going to mimic human behavior and human psychology to a degree where we cannot tell the difference between what is human and what is mechanical. Those lines are already starting to blur. As far as the psychological profile goes, the psychological community is going to have to incorporate this into the DSM — the Diagnostic and Statistical Manual of Mental Disorders — because you do not want a machine that is caring for your grandmother or watching your baby to have a psychological problem, even if that problem is a mimicked human behavior.

As of right now, it is still just code, an algorithm, but we are hearing whispers all over the place that machines are just now starting to write and modify their own code. What does that mean for a future where a machine may be depressed and finds itself in a situation where it does not want to follow orders?


Will people become emotionally attached to their robots and smart machines?

Kai: We humans have a tendency to project our emotions onto other things. I think that with robots acting smart, you just cannot avoid thinking about them emotionally. In the future, we have to learn that these machines act smart, but not in the way that we are smart. We can foresee things that might happen, or, because of our social glue, act differently from time to time. We tend not to stick rigidly to our rules anyway; we bend them according to our needs and our social environment.

Gray: Most of what we are going to see in the near future is people embracing these robots, especially humanoid robots, as tools in the beginning. But as they become more human-like — as people add skin to them, and as those structures are able to feel heat and cold and pleasure, and to feel or mimic pain — we are going to move toward projecting onto these mechanical things what we are, what we want to be, and what we are afraid of.

We have to think about what we are as a species and where we are going, because all of that is going to be reflected in these machines. Technology is a mirror and these machines are going to force us to face ourselves.

What humanity are we looking for through our interactions with robots?

Gray: I think we see this in cultures around the world, especially now with immigration issues — society, culture, national pride, and things like that. Typically, we look at people as the "other," like someone coming into our tribe and disrupting it: this is ours, this is mine, and there are the boundaries. The world just is not like that anymore. We live in a global society now.

As these machines begin to emerge, that is another layer to the “other” effect, and so we have to start to unravel that behavior. Why do we do that? Why do we project our fear? Why do we project our insecurities and our hatred onto the other when the other really, in this case, is just a manifestation of our imagination and of our vision of the future?

Kai: Yes, I think that is very true. We are on the cusp of a new renaissance, where humans are once again the focus of what we will do in the future, this time because of the possibilities technology now gives us. I think the analogy of a mirror is accurate. When we discuss what robots may do, we are actually talking about the value of human life. What kind of work do we want to do? Do we actually need to work? What about our empathy and creativity — qualities we are afraid of losing, yet have given little thought to in recent decades? I found your comment about immigration and migration especially interesting, Gray. I had not thought about it, but it is a spooky coincidence that we are seeing a backlash against migration alongside the increasing robotization of the world.

Gray: The psychologist Carl Jung would say that this is not a coincidence — that we are all emerging into a new realm where the unconscious is becoming conscious. I mean, literally, this is a new species that we are birthing. It is not a coincidence that we see this at the same time as everything else that is happening in our world. There is a connection there. Companies that are creating these machines without someone thinking about them in that way are going to make a lot of mistakes in coding them, building them, and implementing them.

What will our purpose as humans be in a future filled with advanced robots?

Gray: The purpose, I think, is going to shift back to the vision, the dreaming — the idea that we are here to serve each other; that we are here on this planet right now to find out what the other is feeling; and that we are here to find out what the other is learning and knowing. For most of us, the greatest joy in life typically comes in the moments we spend with the people we love and admire. I think that is where we are moving. Hopefully, we will not disrupt that movement with bad algorithms and greedy algorithms. That is my hope.

For more information, listen to a complete recording of the show: Robots at Work: Whose Job Is It Anyway?

(Panelists' comments have been edited and condensed for this space.)