Ethical design is still in its very early stages in the high-tech industry. Other industries, including healthcare, financial services and energy, have long had to grapple with ethical issues, and over time they became highly regulated because the safety and security of their customers, or the public at large, are at stake.
In the high-tech industry, ethical considerations lag behind innovation, which is an increasingly dangerous proposition in light of self-learning systems. As more tasks and knowledge work are automated, unexpected outcomes will likely occur that are tragic, shocking, disruptive and even oppressive, arising more out of ignorance, blind faith and negligence than out of nefarious intent, though all will coexist.
Ideally, ethical design would begin in higher education so fewer people would need to be trained on the job. Some universities, including Columbia, Cornell and Georgetown, are already designing engineering ethics or digital ethics classes.
Existing developers and their employers first need to understand what digital ethics is. Then they need to start thinking consciously about it in order to build it into their mindset, embed it in their processes and design it into products.
Right now, the bottom-line impact of digital ethics is not obvious, much as in the early days of computer security, so most companies aren’t yet investing in it or forming ethical review boards. However, like security, digital ethics will eventually become a brand issue woven into the fabric of organizations and their products, because customers, lawmakers and society will want assurances that intelligent products and services are safe, secure and can be trusted.
How to build ethics into your mindset
Before ethics can be baked into products, there has to be an awareness of the issue to begin with, and processes need to evolve to ensure ethical design can be achieved. Recognizing this, global professional services firm EY recently published a report on the ethics of autonomous and intelligent systems because too many of today’s companies are racing to get AI-enabled products to market without considering the potential risks.
“The ethics of AI in a modern context is only a couple of years old, largely in academic circles,” said Keith Strier, global and Americas AI leader at EY. “As a coder, designer, or data scientist, you probably haven’t been exposed to too much of this, so I think it’s helpful to build an awareness of the topic so you can start thinking about the negative implications, risks and adjacent impacts of the technology or model you’re building.”
Because the risks associated with self-learning systems are not completely understood, developers and others in their organizations should understand what ethical design is and why it’s important. What may not be obvious is that it requires a mindset shift from short-term gain to longer-term implications.
To aid the thinking process, a popular resource is the Future of Life Institute’s Asilomar AI Principles. There is also the Institute of Electrical and Electronics Engineers’ (IEEE’s) Ethically Aligned Design (EAD) document, which just completed its third round of public commentary. (Disclosure: this author contributed commentary and was appointed to the editing committee.) The IEEE also recently announced the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), since there isn’t otherwise anything that “proves” whether a product is ethical. The certification program will create specifications for certification and marking processes that advance transparency, accountability and reduction of algorithmic bias in autonomous and intelligent systems. In addition, the Association for Computing Machinery (ACM) recently updated its code of ethics, the first such update since 1992.
How to build ethics into processes
Digital ethics has to be a priority to make its way into processes and ultimately products.
“Most software developers don’t have it in their job description to think about outcomes. They’re focused on delivery,” said Nate Shetterley, senior director of data and design at design and innovation company Fjord. “As we look at the processes we use to drive our industry like Agile development, you can modify those processes. We do testing and QA, that’s part of the development lifecycle, let’s add ethical considerations and unintended outcomes.”
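Shetterley’s suggestion can be made concrete. Below is a minimal sketch of what an “ethical consideration” added to a QA suite might look like: an automated check that fails the build if a model’s positive-outcome rates diverge too far between groups. The sample data, the demographic-parity metric and the 10-point threshold are all illustrative assumptions; a real team would choose its fairness metrics with domain experts.

```python
# Hypothetical fairness check added to a QA suite (e.g., run under pytest).
# Metric choice and threshold are illustrative assumptions, not a standard.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

def test_loan_model_demographic_parity():
    # In a real pipeline these would come from a held-out evaluation set.
    predictions = [1, 0, 1, 0, 0, 1, 1, 0]   # 1 = approved
    groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
    # Fail the build if approval rates diverge by more than 10 points.
    assert demographic_parity_gap(predictions, groups) <= 0.10
```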
Right now, ethics after the fact is the norm. However, if the Facebook/Cambridge Analytica scandal is any indication, ethics after the fact is an extremely expensive proposition. It may erode customer trust, in some cases permanently.
“The question of building digital ethics into processes is one of organizational values,” said Nicolas Economou, chairman and CEO of legal technology firm H5. “Some companies are focused on efficiency, but efficiency can be contrary to notions of fairness or treating people well and having respect for people. You need to create mechanisms for review and governance of those principles and you have to engage a broad range of folks by design. You also have to have an ongoing digital ethics impact assessment.”
A digital ethics impact assessment works like this: the impact of a development effort is assessed against the organization’s principles and values. If the development deviates from them, corrective steps are taken and another digital ethics impact assessment is done.
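As a rough illustration, that review loop might be expressed in code as follows, assuming a team has encoded its principles as named checks. The principles, the checks and the corrective step shown here are stubbed-out assumptions; defining them is the hard, organization-specific part.

```python
# A minimal sketch of the assessment loop described above (Python 3.8+).
# The principles and the corrective step are illustrative stubs.

PRINCIPLES = {
    "consent": lambda change: change.get("uses_personal_data") is False
                              or change.get("consent_obtained") is True,
    "transparency": lambda change: change.get("decision_explainable", False),
}

def impact_assessment(change):
    """Return the principles a proposed change violates."""
    return [name for name, check in PRINCIPLES.items() if not check(change)]

change = {"uses_personal_data": True, "consent_obtained": False,
          "decision_explainable": True}
while (violations := impact_assessment(change)):
    print("Deviations found:", violations)
    # Corrective step (illustrative): obtain consent, then reassess.
    change["consent_obtained"] = True
print("Assessment passed; proceed.")
```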
“You need to think about the harms you may cause and the societal impact you want to have. If you really understand digital ethics, you know it has to involve all of your processes because it’s not just the job of engineers,” said Economou. “If you want to build digital ethics into your product, you have to build it into your processes and your mindset.”
However, unlike security, testing and other practices that have shifted left, digital ethics is not yet baked into the software development lifecycle anywhere in most organizations. There are exceptions, such as law enforcement solution provider Axon (see: Axon Prioritizes Ethical Design), which, despite the fact that it builds weapons, has a stated principle of making the world a safer place.
Building digital ethics into processes requires reflecting on what the company’s processes are, what its ethical principles are, and how to embed those principles into the fabric of business and other processes, including software development. Fjord’s Shetterley suggested that ethics and unintended consequences might become part of the QA process. He also suggested that tools might nudge developers to consider what could happen if one bug interacted with another; a sketch of that kind of nudge follows the quote below.
“It’s like a startup going from shipping code every day to ensuring they don’t break a financial transaction. As you grow, you adapt your software development processes,” said Shetterley. “As an industry, I think we need to adopt processes that consider ethics and unintended consequences.”
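One way to picture the kind of tool-based nudge Shetterley describes is a pre-commit script that never blocks a commit, but prompts the developer to pause when a change appears to touch personal data. The file filter and keyword list below are assumptions made purely for illustration.

```python
# Hypothetical pre-commit "nudge": prompts rather than blocks.
# The keyword heuristic is an assumption; real tools would be smarter.

import subprocess

SENSITIVE_HINTS = ("ssn", "date_of_birth", "geolocation", "biometric")

def changed_files():
    """List staged Python files in the current git repository."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

for path in changed_files():
    with open(path, errors="ignore") as fh:
        source = fh.read().lower()
    if any(hint in source for hint in SENSITIVE_HINTS):
        print(f"[ethics nudge] {path} appears to handle personal data.")
        print("  Have you considered unintended outcomes and documented them?")
```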
How to build ethics into products and pipelines
Global professional services firm Avanade places a set of data ethics principles in the hands of its developers and data scientists that reminds them that behind the software and the data is a person.
“One of the biggest risks and opportunities is the correlated use of repurposed data,” said Florin Rotar, global digital lead at Avanade. “When you put data through a supply chain and analyze it, it’s basically the intention of how that data is used and whether that differs from the [original] intent of data use.”
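One way to make “intent of data use” checkable in code is to tag datasets with the purposes consented to at collection time and gate access on the purpose declared at use time. The sketch below is a hypothetical illustration; the class name, purpose labels and exception are all invented for the example.

```python
# Hypothetical purpose-based access control for repurposed data.

class PurposeViolation(Exception):
    """Raised when data is used outside its collection intent."""

class TaggedDataset:
    def __init__(self, records, allowed_purposes):
        self._records = records
        self.allowed_purposes = set(allowed_purposes)

    def access(self, purpose):
        # Compare the declared use against the original collection intent.
        if purpose not in self.allowed_purposes:
            raise PurposeViolation(
                f"'{purpose}' differs from collection intent "
                f"{sorted(self.allowed_purposes)}")
        return self._records

orders = TaggedDataset(records=[...],
                       allowed_purposes={"fulfillment", "support"})
orders.access("fulfillment")            # fine: matches original intent
orders.access("behavioral_targeting")   # raises PurposeViolation
```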
One thing developers need to understand is that self-learning systems are not deterministic; they’re probabilistic. Increasingly, computer science majors are getting the benefit of more statistics, basic data science and basic machine learning in newer university curricula. Experienced developers should brush up on those topics and learn any other relevant areas they may be unfamiliar with.
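The distinction matters in practice: a probabilistic model emits a likelihood, and turning that likelihood into a decision requires choosing a threshold, which is a judgment call with ethical weight rather than a property of the math. Here is a toy illustration using scikit-learn (an assumption; any probabilistic classifier behaves the same way):

```python
# The model outputs a probability, not a verdict; the decision cutoff
# is a design choice. Data and feature names are invented for the example.

from sklearn.linear_model import LogisticRegression

X = [[20], [35], [50], [80], [120]]   # e.g., income in $k
y = [0, 0, 1, 1, 1]                   # e.g., 1 = loan repaid

model = LogisticRegression().fit(X, y)
p = model.predict_proba([[45]])[0][1]           # a probability, not an answer
print(f"P(repay) = {p:.2f}")
print("approve" if p >= 0.5 else "deny")        # the 0.5 cutoff is a choice
```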
However, given the speed and scale at which AI is capable of operating, humans won’t be able to see or understand everything, which is why supervisory AI instances are being proposed. The job of a supervisory AI is to oversee the training and operation of another AI instance to ensure it is operating within bounds and, if not, to intervene autonomously by shutting the monitored AI instance down or notifying a human.
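No standard implementation of supervisory AI exists yet, but the pattern is straightforward to sketch. In the hypothetical example below, a supervisor wraps a deployed model, watches its positive-prediction rate for drift beyond an assumed bound, and both halts the model and alerts a human when the bound is exceeded; the monitored model, the bound and the intervention policy are all illustrative assumptions.

```python
# A minimal sketch of the supervisory pattern described above.

class Supervisor:
    def __init__(self, model, baseline_rate, max_drift=0.15):
        self.model = model            # the monitored AI instance
        self.baseline = baseline_rate # expected positive-prediction rate
        self.max_drift = max_drift    # assumed "bounds" for this sketch
        self.halted = False

    def predict(self, batch):
        if self.halted:
            raise RuntimeError("Model halted pending human review.")
        preds = [self.model(x) for x in batch]
        rate = sum(preds) / len(preds)
        if abs(rate - self.baseline) > self.max_drift:
            self.halted = True        # autonomous shutdown...
            self.notify_human(rate)   # ...plus escalation to a person
        return preds

    def notify_human(self, rate):
        print(f"ALERT: positive rate {rate:.2f} outside bounds; model halted.")

# Usage: callers talk to the supervisor, never to the model directly.
supervised = Supervisor(model=lambda x: int(x > 0), baseline_rate=0.5)
print(supervised.predict([-2, -1, 1, 2]))   # rate 0.50 -> within bounds
print(supervised.predict([5, 6, 7, 8]))     # rate 1.00 -> halt and alert
```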
Meanwhile, organizations need to understand digital ethics within the context of their own industries, customers, and products and services in order to put ethical design into practice.
For example, H5 must comply with an entire range of ethical guidelines and obligations, some regulatory and some professional. However, its use of data differs from that of the average enterprise. Most of today’s companies use personal data to target products or to monitor or change behavior. H5 uses that information, along with other information, to prove facts in a lawsuit.
“We use data to support the specific objectives that a litigator has in proving or disproving a case,” said Economou. “The data we have has been created through a legal process in litigation.”
Digital ethics is not a one-size-fits-all proposition, in other words.
In the meantime, the pace of innovation is moving at warp speed and laws that will impact the use of AI, robotics, and privacy are moving at the pace of automobiles. Organizations and individual developers have to decide for themselves how they will approach product design, and they need to be prepared to accept the consequences of their decisions.
“Evolution is now intrinsically linked to computers. We’re shaping our world through computers and algorithms and the world to date has developed as a result of human intervention,” said Steven Mills, associate director of Machine Learning and Artificial Intelligence at Boston Consulting Group’s Federal division. “Computers are shaping the world now and we’re not thinking about that at all.”