AI and machine learning took several steps forward in 2020, from the first beta of GPT-3 to stricter regulation of AI technologies, conversations around algorithmic bias, and strides in AI-assisted development and testing.
GPT-3 is a neural network-based language model created by OpenAI. It entered its first private beta in June of this year, and OpenAI reported a long waitlist of prospective testers eager to assess the technology. Among the first to test the beta were Algolia, Quizlet, Reddit, and researchers at the Middlebury Institute.
GPT-3 has been described as “the most capable language model created to date.” It is trained on massive datasets, including Common Crawl, a huge library of books, and all of Wikipedia.
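For teams that make it off the waitlist, the beta is exposed as a simple text-completion API. The snippet below is only a rough sketch of what a request might look like with OpenAI’s Python client as it stood during the 2020 beta; the API key, prompt, and parameter values are placeholders, not details reported by the testers above.

```python
# Minimal sketch of a GPT-3 completion request using OpenAI's 2020-era
# Python client. Requires beta access; the key and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: key issued to beta testers

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine exposed in the beta
    prompt="Summarize the history of machine learning in one sentence:",
    max_tokens=64,
    temperature=0.7,
)

# The generated text comes back in the first choice of the response.
print(response["choices"][0]["text"].strip())
```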
In September, Microsoft announced that it had teamed up with OpenAI to exclusively license GPT-3. “Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone – researchers, entrepreneurs, hobbyists, businesses – to empower their ambitions to create something new and interesting,” Kevin Scott, executive vice president and chief technology officer for Microsoft, wrote in a blog post.
The ethics of AI and its potential biases were also more heavily discussed this year, with the Black Lives Matter movement bringing renewed attention to an issue the industry has grappled with for the past few years. Anaconda’s 2020 State of Data Science report revealed that the social impact stemming from bias in data and models was the top issue that needs to be addressed in AI and machine learning, with 27% of respondents citing it as their top concern.
In April, Washington state passed facial recognition legislation intended to ensure upfront testing, transparency, and accountability for facial recognition. Under the law, government agencies can only deploy facial recognition software if an API is made available for testing of “accuracy and unfair performance differences across distinct subpopulations.”
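The statute does not spell out how that testing should be performed, but the underlying idea is to compare error rates across groups on a labeled evaluation set. The sketch below is purely illustrative: the recognize() function and the record format are hypothetical, not part of the law or any vendor’s API.

```python
# Illustrative sketch only: measuring accuracy gaps across subpopulations
# for a facial recognition system. `recognize` and the record format are
# hypothetical; the statute does not prescribe a specific test harness.
from collections import defaultdict

def accuracy_by_group(records, recognize):
    """records: iterable of (image, true_identity, group_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_identity, group in records:
        total[group] += 1
        if recognize(image) == true_identity:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records, recognize):
    """Largest accuracy difference between any two subpopulations."""
    acc = accuracy_by_group(records, recognize)
    return max(acc.values()) - min(acc.values())
```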
In June, IBM also decided to sunset its facial recognition software in order to address concerns about the responsible use of such technology by law enforcement. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” IBM CEO Arvind Krishna wrote in a letter to Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
There has also been a lot of buzz around autonomous testing over the past few years, but are we actually there yet? It seems the answer is still no, but according to Nate Custer, senior manager at test automation company TTC Global, there are a number of areas where AI and machine learning could have a positive impact.
Test selection is at the top of the list: specifically, the ability to test everything in an enterprise, not just web and mobile apps. The second most promising area is surfacing log differences, so that if a test took longer than it should to run, the tool might suggest that the delay was the result of a performance issue. A third area is test generation using synthetic test data.
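As a rough illustration of the log-difference idea, a tool could compare each test’s latest run time against its historical baseline and flag outliers as possible performance issues. The sketch below assumes a simple mapping of past durations per test and a three-sigma threshold; these are illustrative choices, not how TTC Global’s tooling actually works.

```python
# Rough sketch of surfacing log differences: flag tests whose latest run
# took much longer than their historical baseline. The data shape and the
# 3-sigma threshold are assumptions for illustration.
from statistics import mean, stdev

def flag_slow_tests(history, latest, sigmas=3.0):
    """history: {test_name: [past durations in seconds]}
       latest:  {test_name: most recent duration}"""
    flagged = []
    for name, past in history.items():
        if name not in latest or len(past) < 2:
            continue
        baseline, spread = mean(past), stdev(past)
        if latest[name] > baseline + sigmas * max(spread, 0.01):
            flagged.append((name, baseline, latest[name]))
    return flagged

# Example: a login test that usually takes ~2s suddenly takes 9s and is flagged.
history = {"test_login": [2.0, 2.1, 1.9, 2.2], "test_search": [5.0, 5.2, 4.8]}
latest = {"test_login": 9.0, "test_search": 5.1}
print(flag_slow_tests(history, latest))
```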
Gartner analyst Thomas Murphy believes that it’s still early days for autonomous testing.
AI has also made its way into development tools. In a conversation on the podcast What the Dev?, OutSystems’ senior product marketing manager Forsyth Alexander explained how development tools have incorporated AI to make developers more productive. These AI-enabled platforms can surface areas that need a developer’s attention, assist with coding, and catch problems as they are created rather than in testing or production.
All of this automation and AI-assisted tooling is expected to help, not replace, human workers. An IBM report from earlier this year revealed that 45% of respondents from large companies had adopted AI, as had 29% of respondents from small and medium-sized businesses. Those companies are still in the early days of adoption and are looking for ways to use AI to bolster their workforce.
Former IBM CEO Ginni Rometty said in March that she prefers the term augmented intelligence over artificial intelligence. “AI says replacement of people, it carries some baggage with it and that’s not what we’re talking about,” Rometty said. “By and large we see a world where this is a partnership between man and machine and that this is in fact going to make us better and allows us to do what the human condition is best able to do.”