Raina Kumra, CEO of Juggernaut and leader of the tech ethics portfolio at Omidyar Network, is developing an Ethical OS toolkit to help tech companies grapple with the potential unintended consequences of the technology they are creating.
This digital code of ethics is based on accountability and responsibility at every level of production, from the very start of the product or service development process. The Ethical OS is designed to increase awareness of risk and consequence, to ensure positive outcomes, and to build respect for user data. By identifying risk zones for social harm, the Ethical OS helps creators consider the broader impacts their products and conduct can have on society, ensuring they are both ethically and morally accountable.
“With great code comes great responsibility,” according to Kumra.
But where does that responsibility lie? Are moderation, censorship and increased scrutiny of algorithms the way forward? These are ethical dilemmas whose answers could have lasting, far-reaching impacts.
Kumra suggests introducing ‘ethical bounty hunters’: just as tech companies already pay bounties for identifying security flaws, they would pay bounties for identifying major potential risks in their new services and technologies, including vulnerabilities relating to mental health, risks to democracy and the perpetuation of structural inequality.
Take the example of artificial intelligence (AI): when it is trained on biased data, it has the potential to replicate the discrimination present in society. As AI increases in complexity, it needs to be carefully monitored by humans to prevent AI-created bias. In other words, human designers and monitors need to work carefully with these systems to challenge their own biases and to control the biases that would otherwise emerge in algorithms unchecked. Deep-learning algorithms depend on tagged data sets: the more data an algorithm is fed, the quicker and more sophisticated it becomes. This dependence has led to multiple cases of racial, ableist and sexist bias in data and in the resulting real-world outcomes.
For example, job advertising algorithms often target higher-paying jobs at male applicants and unfairly skew advertising towards gender-stereotyped careers. The effect can be particularly damaging in law-enforcement AI, which privileges white people and targets people of colour, and in loan-approval algorithms that further marginalise underrepresented groups and deprive them of services.
It is important to remember that algorithms and artificial intelligence are designed by humans, and so they implicitly carry human bias. The data sets fed into these systems must be diverse, varied and audited for bias to prevent bigoted algorithms.
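The kind of audit this implies can be surprisingly simple to start. The sketch below is a minimal, hypothetical illustration (the data and function names are invented, not from the Ethical OS): it tallies positive outcomes per demographic group in a toy data set standing in for historical hiring decisions. A model trained naively on data with this skew would simply learn to reproduce it.

```python
from collections import defaultdict

# Hypothetical toy data: (group, outcome) pairs standing in for
# historical hiring decisions a model might be trained on.
# 1 = positive outcome (hired), 0 = negative outcome (rejected).
records = [
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

def positive_rate_by_group(data):
    """Return the share of positive outcomes for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in data:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(records)
print(rates)  # males succeed 75% of the time here, females only 25%
```

A check like this catches only the most visible, labelled skews; subtler proxies for protected attributes (postcodes, names, employment gaps) require deeper interrogation, which is exactly the rigour the Ethical OS asks for.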
As Kumra suggests, it is our responsibility to be aware of the stories we take into our systems. We must be rigorous in the interrogation of our own morality, lest we create immoral systems and perpetuate stories that should be challenged.
Rather than handball human ethics and responsibility onto the systems we are creating, we must confront them ourselves. We need to consider both the intended consequences of these technologies and their possible unintended ones. The huge capacity for AI to be used for good is dependent on its development being grounded in thoughtful and empathetic values from its inception.
One of the strategies suggested in Kumra’s Ethical OS is a ‘Hippocratic Oath for Data Collectors’, whereby everyone working with user data must take an oath to obtain, use and share data ethically, and undergo compulsory tech ethics training. The development of new technologies must start with a deep understanding of the ethical responsibilities and consequences that come with progress; that understanding is key to minimising risks and ensuring an equitable future for everyone.
We need to change the ethos of creating code. We need to examine whose values are being represented in every system and who is being excluded or marginalised within them in order to challenge existing power imbalances. Humans and AI must work together to be moral agents, reflecting and challenging the morality programmed into them.