Moral Technologies was a four-day conference held in Melbourne in April 2016, exploring the question: Is humanity ready for artificial intelligence (AI)? The keynote speaker was Nicanor Perlas.

The conference was hosted by SEED (www.seedaustralia.net.au).

The Setting

About 50 participants, ranging in age from 14 to 70, brought a great diversity of perspectives and levels of technological familiarity.

The Questions

This was a conference that invited us to explore so many questions! I love the quote: “We grow in the direction of the questions we ask.”

The participants of Moral Technologies Conference have surely grown a whole lot!

We explored questions such as:
● Is humanity prepared for AI?
● What is human consciousness?
● Is consciousness just in the brain?
● What is life? Is AI alive?
● What worldview will shape the future technological landscape?
● What facets of our consciousness are we programming?
● Does consciousness need a brain?

Nicanor Perlas

Nicanor has a magical ability to weave together facts and historical context to make sense of complexity. His storytelling, his humour and his ability to meet us where we are are astounding. One can sense the deep thinking he has done over many years of witnessing and observing, seeing patterns and themes. He delivered a HUGE amount of content across his four lectures, shaping and crafting it in a way that kept an audience spanning nearly 60 years of age engaged and intrigued.

The Content
I work alongside tech developers on a daily basis, and I was expecting to feel comfortable and familiar coming into this space to explore this content. However, I found Nicanor’s questions and content challenging, and I can well imagine everyone else in the room did, too. Some snippets from my notes, together with more questions…

i) We need to develop AI together with its control system
If AI is a tool, we need to define the context in which it is used and how it will be used. A tool is not inherently “evil” or “good”; it is how it is used that invites that judgement. Thus the tools of AI need to be developed together with their control systems. In this case, that control system could be seen as the ethical framework governing how the tool may be used. But we cannot anticipate every permutation or use of a tool before it is created – so how can we ensure that the control system provides adequate support for a moral use of the AI tool?

ii) Why are we afraid of intellectual technology and not physical technology?
We (humanity) have always used some form of technology – from flint tools to the tractor – so why are we now scared of this next form? Humans have often been wary of the new; consider the anxieties that greeted the first trains. Is any fear about AI just a fear of the new?

iii) Are machines just based on intellectual thinking? What about heart thinking? Or gut thinking?
A core part of the conversation was about exploring artificial intelligence, so we also needed to explore the different types of intelligence. AI conversations tend to focus on the intellectual type. So what does AI look like in the context of heart intelligence? How can an AI emulate love or feelings? How might a computer appreciate art? Can one programme a computer to be intuitive?

iv) Brains and memories
Neuroscience has yet to pinpoint where memories are stored. How, then, can we programme AI to replicate human memory? And if the molecules of our bodies are constantly being replaced, where does our memory reside? This all brings us to the larger question: Does consciousness need a brain?

What did I leave with?
I was engaged, intrigued and left with many questions. At times I struggled to find practical applications for the questions we were exploring. In response, I hosted a session where participants shared their own work and questions in the context of the conference theme. The diversity of questions reflected the range of backgrounds, from educators and farmers to engineers and physics professors: the relationship children have with technology, the impact of bionic hearts, the challenges of starting new businesses, new forms of architecture, and the role of art therapy in the future.
I am left with a concern that it is very easy to abstract AI into some scary future state. Let’s not frame it as a wave that might hit us all of a sudden: many simple forms of AI are already deeply integrated into our daily lives. Let us focus on exploring their best use as tools, and recognise the responsibility that comes with the freedom and ability to create technology.

Note of thanks
To Rose Nekvapil, the Moral Technologies organising team and the stewards of SEED for bringing together such a great room of people, and to Nicanor Perlas for engaging us all in this topic with such grace and curiosity. I’d also like to thank the Anthroposophical Society of New Zealand (through the Tinderbox fund) for supporting my trip to Melbourne.