In the words of Shane Hegarty in his Irish Times article: ‘[AI] has been compared to a potential first encounter with extra-terrestrial life, and that we should in fact treat it as we have been given warning of a spaceship’s arrival 50 years from now.’ AI is already profoundly changing the ways we interact with each other, the ways we purchase, the ways we consume news and information, and much more, almost without our fully grasping it. Algorithms determine what advertisements we see and what social media posts float to the top of our feeds; they calculate our credit scores, notify us about problems with our bank accounts, and file our emails. All of this has occurred in a (relatively) brief space of time.
And AI is continually evolving. As Shane Hegarty says: a spaceship is arriving in 50 years, and we have to be ready to greet whatever comes out of it.
The benefits, and potential drawbacks, of advanced AI will be almost impossible to predict. Combine that with other exponential technologies, such as 3D printing, the IoT, sensors, and robotics, and you have an even vaster array of possibilities.
There is, understandably, concern over what changes AI might bring to our future. ‘War has changed,’ wrote the video-game developer Hideo Kojima in his sci-fi epic Metal Gear Solid, depicting a future world in which drones fight proxy wars against genome-modified soldiers, and hidden technologists control the world’s arms via a global interconnected ‘system’.
The reality is, we’re not far off this. Wars are already being fought ‘by proxy’ in one sense. Push a button, and entire cities, civilisations, and populaces vanish. I remain optimistic about the future our technology can bring, but technology is a tool, and in the wrong hands even a tool made for creating beautiful art can be used to kill.
The very democratisation of technology will become an issue in the future. On the one hand, we want everyone to benefit from the advances and improved living standards that robotics, automated medical diagnosis, driverless cars, bio-printing, and more can offer; on the other, do we want anybody with basic skills to be able to create weapons of mass destruction or develop code that can re-write societal infrastructures?
Who decides who gets to use technology, and how? And, perhaps more worryingly, how do we stop them if they don’t care? The emergent skills gap only exacerbates this problem. As technology becomes more complex, more intelligent, and more integrated into every facet of society, only a small percentage of people will truly understand its full utility, purpose, and workings. Take flight as a case in point. Airplanes have been around since 1903, the year of the first successful powered flight, yet how many of us today, 116 years later, understand the intricacies of aerodynamics? Very few indeed, and still most of us will board a plane.
The number of people who will understand the science behind artificial cell-growth, or tissue-printing, or any emergent technological revolution is even smaller. Can we then arbitrate on something we don’t even understand? Should only the people who build and understand technology be able to dictate its uses? What if they then turn it abusively to their own ends, or, worse, to undermining rivals? Do we then wrest it from them? Can we?
And if AI becomes truly humanly intelligent (not even superintelligent, as is so often the case in the movies, but simply human-level), does it then have inalienable rights? And can we force it to abide by human laws if it is not, essentially, human?
These questions are fascinating, if somewhat circular, because the reality is that we will likely not know until we get there. And therein lies part of the problem.
AI will be able to solve problems in ways that humans cannot conceive of because it is fundamentally inhuman, not having gone through the millions of years of evolution that we have, and not requiring a body to function. This is both its advantage and disadvantage. As Tom Chivers observed in a recent Irish Times opinion-piece: tell an AI to ‘cure cancer’, and it may well chemical-bomb the Earth clean of humans. That’s one way to do it. However, provided we establish key parameters, it might also be able to create other solutions currently inaccessible to our brains, mired as we are in traditions, societal expectations, and indeed, linear thinking.
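The point about establishing key parameters can be made concrete with a toy sketch (entirely hypothetical; the plan names, numbers, and `choose_plan` function are illustrative, not any real AI system). An optimiser told only to minimise cancer cases will happily select the catastrophic option, because eliminating all humans also eliminates all cancer; adding an explicit constraint on human survival rules that option out:

```python
# Toy illustration of objective misspecification: an unconstrained
# objective ("fewest cancer cases") selects a catastrophic plan, while
# a simple added constraint changes the outcome. All data is made up.

def choose_plan(plans, constraints=()):
    """Return the plan with the fewest cancer cases that satisfies every constraint."""
    viable = [p for p in plans if all(check(p) for check in constraints)]
    return min(viable, key=lambda p: p["cancer_cases"])

plans = [
    {"name": "eliminate_humans", "cancer_cases": 0, "humans_alive": 0},
    {"name": "develop_therapy", "cancer_cases": 1_000, "humans_alive": 8_000_000_000},
]

# Unconstrained: the literal objective picks the catastrophic plan.
print(choose_plan(plans)["name"])  # eliminate_humans

# Constrained: requiring that humans survive changes the answer.
keep_humans = lambda p: p["humans_alive"] > 0
print(choose_plan(plans, constraints=(keep_humans,))["name"])  # develop_therapy
```

The sketch is deliberately trivial, but it captures the shape of the problem: the danger lies not in the optimiser but in what we forgot to tell it.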
On the less apocalyptic scale, there is great fear over people being displaced by technology, as well as an emerging skills gap. In the words of Vanessa Bates Ramirez from Singularity Hub:
‘One of the most widely-referenced and panic-inducing figures on the topic came from a 2013 paper by two Oxford economists, Michael Osborne and Carl Benedikt Frey. Their research found that up to 47 percent of American jobs were at risk of being automated by the mid-2030s. According to The Economist, the paper has been cited in over 4,000 articles, unnerving workers in all sectors of the economy and justifying catastrophic outlooks.’
One of my companies, Adaptai, has made it its mission to ‘leave no-one behind’, addressing the skills gap by measuring and improving AQ (adaptability quotient). AI is going to challenge who we are and the way we define ourselves. If an AI can do many of our current jobs, and do them better, if an AI can outthink us, then who are we as functioning members of society? We will have to adapt, and it will not be easy.
More play, more learning, more creativity, more art? What awaits our future selves?
An intriguing white paper on the Future of Workforce by The Adecco Group & Boston Consulting Group observed:
‘In the future world of work, skills acquisition will no longer be a process with an ending. Companies will need to reassess constantly the capabilities of their workforce while workers will need to regularly upgrade their skills to meet advances in technology, new ways of working and changes in the demands of the labour market.’
Rather than working towards a single profession or skillset from an early age, as the current model of education encourages, we will instead constantly shift and change what we do in order to offer value in new and original ways, unlearning old skills and learning new ones.
We bear a burden of great responsibility, to ensure the future we build, the intelligences we create, and the ways in which we use AI, are moral, ethical, and for the benefit of humanity. At its best, technology can be empowering and equalising, allowing people from impoverished nations or disadvantaged backgrounds to create new things and access learning, resources, and communities where before they would have been shut off. There is also an increasing awareness, of which HR-tech is a part, of the ways in which technology can be used to help in the psychological human sphere.
We face a global problem of declining mental health. Suicide and depression rates are at an all-time high, despite the abundance and advantages of modern life enjoyed by the majority of nations. In fact, paradoxically, many of the most-developed countries in the world have worse mental health than third-world or undeveloped nations. According to Our World in Data, in 2017, 17.34% of the populace of the United States of America suffered from either a mental health disorder (such as bipolar disorder, schizophrenia, anxiety, or depression) or substance abuse. In Zimbabwe, only 11.62% did. These statistics also take into account the widespread under-reporting of mental health problems (particularly among men).
As a species, many of us are currently living through a crisis of meaning and identity. Can therapeutic bots, care-robots, and coaching chatbots support us here? Perhaps they could help us overcome these difficulties by allowing us to reach more people, without the constraints of logistics and availability that human coaches, therapists, and nurses face?
As with any problem, the key is to be open to whatever solution presents itself, however unusual. As our world becomes more data-led, governed by algorithms and logic that we struggle to keep up with, we will have to be prepared for startling and unexpected answers to these questions. A core philosophy I embrace is: ‘Become attached to the problem, not the solution’. If you become attached to a solution, you will inevitably limit your chances of achieving your true goal, because new research or understanding may invalidate the route you were taking to solve the issue. We must be ready to adapt our thinking and strategies at any moment in the light of new evidence and thought; only that way can we pivot to unique and unexpected solutions.
A theme of this new world that is emerging around us is letting go. Letting go of old knowledge and processes that no longer benefit us, letting go of old societal norms, letting go of apprehensions about interacting with inhuman interfaces. The reality is, whether we like it or not, it’s coming. We can choose to fight it to the bitter end and risk extinction, or embrace a new way of life with a considered mind, open hearts, and open arms.
I, for one, want to be at the landing site when it touches down.