There is a Tesla Roadster broadcasting David Bowie on repeat while travelling through space. What a time to be alive! And, I’d argue, the perfect time to rethink what it means to be human. But, we need to do it very carefully.
We’re welcoming robots into our homes, cars and workplaces. And they are enabling things that were impossible, or very hard, or very expensive to do. Doesn’t it blow your mind that every day you can use satellites orbiting the planet to find out if there is traffic on your route home?
There is also an understandable moral panic that we are in the process of losing our humanity. This is it: we’ve opened Pandora’s box, and the inevitable result is subjugation by our cyber-overlords. While I agree we’ve reached the point of no return and a digital future is a certainty, I think we should take this chance to re-examine what it means to be human, and code this into the software that is part of our lives. This will, hopefully, ensure that technology delivers on its promise to be an equaliser, rather than perpetuating the divisions in society.
Unfortunately, we’ve already seen a few instances where the latter has happened. Take Google’s facial recognition system, which only recognised white faces. Or Google Translate, which rendered the gender-neutral pronoun in Turkish in a decidedly 1950s way: ‘He is a doctor, she is a nurse.’ Or LinkedIn, which, when you enter a typically female name, suggests that you might be looking for the male equivalent, but not the other way around.
But this is hardly surprising, as the AI field is new, and we are still learning. On the other hand, it does seem that the less pleasant side of humanity is rising to the surface, thanks to a lack of diversity in development and testing teams, and a lack of representative data in samples. Or, as in the case of the Google Translate example, which was based on existing common word combinations, the machines are simply reflecting our shortcomings back at us.
Nevertheless, it is time for some serious thought. Trust, ethics and morality need to be coded into our software now – consider the decisions self-driving cars are going to start making on our behalf. Who will define these rules and algorithms, and what choices will they make? Because at the moment it is corporates that are making them (or not making them), and the last time I checked, increasing profit and shareholder value still ride very high on the corporate priority list.
Trust, ethics and morality need to be front of mind as we enter the digital age. This is how we’ll unlock all the benefits of AI and other technologies, and hopefully limit the downsides.
I leave you with the words of Steve Wozniak, co-founder of Apple: ‘You used to ask a smart person a question. Now, who do you ask? It starts with g-o, and it’s not God …’