In recent years, technology analysts have devoted considerable attention to developers and to how developer demographics are changing. For starters, the International Data Corporation (IDC) has noted the growth of developer populations in China, India, Brazil, Russia, Indonesia and Turkey, as well as in select countries in East Africa. In addition, IDC has recognized the growing adoption of low-code, no-code, visually guided and model-driven development tools. Moreover, IDC has pioneered research into the category of “part-time developers”: people who do not have “developer” in their job title, but who nevertheless perform development-related work as part of their job roles and responsibilities.

The term “part-time developer” is often conflated with “citizen developer,” although “part-time developer” more accurately captures the way in which these developers are financially compensated for their work. Examples of part-time developers include business analysts, data scientists, data analysts, project managers, risk managers and strategy managers, as well as DevOps engineers, storage engineers, network engineers and IT operations administrators.

While the distinction between full-time and part-time developers is an important one, another category of developers represents a critical piece of the conversation about developer populations worldwide: the professionals responsible for evaluating the ethics of software applications. As AI/ML-based applications proliferate, it becomes important for ethicists to weigh in on the ethical implications of an AI/ML implementation, and on whether a specific algorithm discriminates against one or more groups of people. Recent allegations that an AI/ML algorithm used by a prominent health insurer discriminated on the basis of race, for example, underscore the need for ethical reflection to accompany software development.
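To make the idea of checking an algorithm for discrimination concrete, one simple (and admittedly coarse) screen is a demographic parity check: comparing a model's rate of favorable decisions across groups. The sketch below is illustrative only — the group labels and data are invented, and a large gap is a signal to investigate, not proof of discrimination on its own:

```python
def selection_rates(decisions, groups):
    """Rate of favorable (1) decisions per group."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several competing fairness criteria (equalized odds and calibration are others), and deciding which criterion matters for a given application is precisely the kind of judgment an ethicist brings to the development process.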

Ethical reflection is required not only for AI/ML applications, but for any application that collects data about its users and customers. While organizations have a long history of collecting customer data, important ethical questions persist about the extent to which end users understand how their data is used, particularly when that data is sold to third-party organizations. Is it ethical for software applications to render data about end users accessible to foreign governments, or to transnational organizations that seek to influence a political election? Similarly, is it ethical for organizations that are well known for committing or facilitating criminal activity to buy data from a software application that can subsequently be used to commit crimes?

Another area of software in need of ethicists concerns the ability of social media users to make false claims that are disseminated to expansive audiences. Should users be allowed to run political ads that contain falsified information about other individuals? Who determines what is false? Would it be acceptable for the KKK or a neo-Nazi group to run ads on a social media website? How does one arbitrate between what is objectionable and what is considered controversial or renegade, but nevertheless admissible?

Meanwhile, all software should be examined for inclusiveness: does it accommodate users with special needs arising from a personal attribute such as blindness, deafness or the inability to use their hands or voice? Does a particular software application discriminate against dyslexic or colorblind users, for example? Is there a way to mitigate or circumvent such discrimination? Can software itself be used to both identify and remediate a lack of inclusiveness in the design of software applications?
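Software can indeed automate at least part of this kind of inclusiveness audit. A concrete example is checking text/background color contrast for low-vision and colorblind users against the WCAG 2.x contrast-ratio formula (the formula and the 4.5:1 AA threshold come from the WCAG standard; the sample colors below are illustrative):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 channels."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG 2.x contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa_normal_text(fg, bg):
    """WCAG AA requires at least 4.5:1 contrast for normal-size text."""
    return contrast_ratio(fg, bg) >= 4.5

# Black text on a white background: the maximum possible contrast, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Checks like this one can run in an automated test suite, which is one small way ethical and inclusiveness validation can be folded into the development lifecycle rather than bolted on afterward.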

The larger point here is that contemporary software requires ethical deliberation and validation as part of the testing and QA process. Ethical validation has always mattered for software applications, but the proliferation and ubiquity of contemporary software require a deeper level of ethical evaluation and analysis. Software is so interwoven into the fabric of our existence, in wearables, mobile devices, automobiles, digital assistants, consumer applications and enterprise applications, that reflection on the ethics of software is critical to the software development lifecycle.

Currently, the technology industry is woefully unprepared to undertake the ethical evaluation of software because it lacks professionals with the requisite training in both philosophy and software development. What this suggests is that, whereas a computer science degree was once one of the most highly sought-after degrees, going forward the technology industry will be in dire need of minds trained in philosophy, ethics and computer science. The infiltration of software into daily life requires ways of evaluating the ethics of software that can be integrated swiftly into the software development life cycle itself, carried out by ethics engineers who are as much software developers as the engineers who write the code.