While Google is amassing data on everyone, nudging them to do this or that or buy this or that, Axon is trying to make the world a safer place. Perhaps ironically, the company builds technology solutions and weapons for law enforcement, self-defense and the military.
Axon — formerly known as TASER International — has received significant recognition for its attention to ethical design, in part because the company exercises both top-down and bottom-up approaches to digital ethics. For example, the company has an ethical review board, something few companies have formed to date, though more are likely to follow. Axon also makes a point of ensuring ethically designed products because doing so is an extension of the company's culture.
“[Ethics has] always been top of mind for Axon. Our CEO is a visionary who thinks about the future of technology and how it can save lives,” said Moji Solgi, director of AI and Machine Learning at Axon. “Axon wants to make bullets obsolete and ensure evidence is always captured.”
Apparently, some consider AI for law enforcement and military use cases unethical because something might go wrong. Solgi argues that stifling innovation out of fear is unethical because the potential good that comes from innovation would also be negated.
“There are already a few open source packages for making sure that your dataset is not biased, making sure [your] model is secure and making sure that adversarial attacks cannot severely impact your model,” said Solgi. “[T]here are more tools and libraries people can leverage as the guardrails and tools for ensuring we can deal with bias, security and privacy and auditing. [There are also] tracking logs so if something goes wrong – and things will go wrong if you look at Facebook’s recent news – that we’re prepared for negative consequences.”
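To make the idea concrete, here is a minimal sketch of the kind of check those bias-auditing packages automate: comparing positive-outcome rates across demographic groups (a demographic parity check). The function and field names are illustrative, not drawn from any specific library.

```python
# Illustrative bias check: measure the gap in positive-label rates
# between groups in a dataset (demographic parity difference).

def demographic_parity_gap(records, group_key, label_key):
    """Return (largest rate gap between groups, per-group positive rates)."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label_key] else 0)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy dataset: group A is favored twice as often as group B.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
gap, rates = demographic_parity_gap(data, "group", "hired")
# gap of ~0.33 here would flag the dataset for closer review
```

A real audit would go further (confidence intervals, intersectional groups, multiple fairness metrics), but the guardrail principle is the same: measure before you train.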
Developing an Ethical Culture and Practices
There’s an idea behind every company, and in some cases necessity is the mother of invention. Axon grew out of its CEO’s desire to stop the violence that occurs when law enforcement uses handguns. The company, which was founded in 1991, began its AI initiative in 2017 and has addressed the ethics of that work specifically.
“We started at a high level with things being vague such as we need AI ethics, so we decided to look at literature, but it turned out the best way [to implement AI ethics] is to bring together a group of people who are authorities in their own field, for technology, ethics and community,” said Solgi. “[The ethical review board guides] us in the ethical development of this technology. It’s a work in progress so we don’t have all the answers, but we are going in a bottom-up way for each one of the things we’re doing, asking what are the considerations and what kind of due diligence we should do.”
Axon is approaching digital ethics at many layers, ranging from the long-term impact on society down to the low-level details of what the code will look like, including ensuring that the data isn’t biased.
“[A]s the people who are building this stuff, we have more responsibility and it can’t be all executives. We all have an obligation, a moral responsibility, to consider the impact it can have on society,” said Solgi. “When it comes to individual software engineers, [you should] learn about those tools [and how] to make a machine learning model secure. Even if [it’s] not on your product manager’s or your manager’s radar in terms of putting logging, tracking and due diligence systems in our software pipelines, you as the person who’s building it should raise that as a way that processes should change.”
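The logging and tracking Solgi describes can be sketched in a few lines. The following is an illustrative example, not Axon's actual pipeline: a wrapper that records every model inference to a structured audit log, so that if something goes wrong there is a trail to reconstruct what the model saw and did. All names (`audited_predict`, `model_audit`) are hypothetical.

```python
# Illustrative audit logging for an ML pipeline: every prediction is
# recorded as a structured JSON log entry for later review.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def audited_predict(model_fn, model_version, inputs):
    """Run a prediction and log inputs, output, version and latency."""
    start = time.time()
    output = model_fn(inputs)
    audit_log.info(json.dumps({
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "latency_ms": round((time.time() - start) * 1000, 2),
        "timestamp": time.time(),
    }))
    return output

# Usage with a stand-in model: flag inputs whose sum exceeds a threshold.
result = audited_predict(lambda xs: sum(xs) > 1, "v1.0-demo", [0.4, 0.9])
```

In practice such logs would go to durable, access-controlled storage rather than stdout, but even this minimal version makes the due-diligence point: the engineer building the pipeline can add accountability without waiting for a mandate from above.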