You may be familiar with the quote “software is eating the world,” but lately it seems more like ChatGPT is eating the world. Ever since its public debut, the tool has dominated news front pages, sparked countless conversations about how AI will shape the future, and, at any given moment, filled roughly half of GitHub’s trending page with related projects.
It’s a lot to take in and keep on top of, so here’s a review of ChatGPT-related news from the past week.
Microsoft rolls out ChatGPT-enabled version of Bing
Perhaps the biggest story of the past week was that Microsoft has officially incorporated ChatGPT into its search engine Bing.
This comes shortly after the company made a multibillion-dollar investment in OpenAI, the company behind ChatGPT.
Microsoft says that integrating ChatGPT into Bing will help provide better search results, more complete answers, a new chat experience, and the ability to generate content.
Search powered by ChatGPT will surface relevant information like sports scores, stock prices, and weather, and will also summarize search results to provide comprehensive answers to complex queries. For example, you could ask how to substitute eggs in a recipe and get instructions without having to dig through multiple results yourself.
Just like with ChatGPT, you can also converse with Bing in a new chat experience that allows you to keep refining your search until you are able to get the result you need.
Google announces a new experimental conversational AI service
While not technically ChatGPT, Google also plans to incorporate more AI into Google Search. It announced Bard, a conversational AI service based on the LaMDA model.
Bard is intended to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of Google’s language models, drawing on information from the web to offer users high-quality responses.
The team stated that Bard is initially being released with Google’s lightweight model version of LaMDA, which calls for less computing power so it can be scaled to a larger user base.
The announcement wasn’t without flaws: an ad for Bard posted from Google’s Twitter account included an incorrect answer about the James Webb Space Telescope, which Reuters was first to point out. According to Reuters, Alphabet (Google’s parent company) lost $100 billion in market value after the incident, with trading volumes that day roughly three times the 50-day moving average.
Technical interviewing firm Karat now allows ChatGPT in interviews
Karat announced that candidates may now use ChatGPT during its technical interviews; the company already allowed candidates to use tools like Google and Stack Overflow.
According to Karat, there are still rules on what you can use these tools for. In a blog post, the company stated that reference materials like these can be used to answer syntax questions, look up language details, and interpret error output from compilers.
“As working developers, we’ve all done those things; so why should we have to struggle without basic resources during an interview? But we ask that they not look for a full solution to the problem, or copy and paste code from elsewhere directly into their solution,” wrote Jason Wodicka, principal developer advocate at Karat.
In the blog post, Wodicka also raised the question of whether it’s even a good idea to use ChatGPT in an interview. He noted that ChatGPT has no way of knowing whether the response it gives is correct, which could be problematic in an interview setting.
Cybercriminals hacking ChatGPT to have it generate malicious responses
Researchers at security firm Check Point discovered an instance where attackers had used ChatGPT to alter the code of an Infostealer malware from 2019.
The researchers also found hackers devising workarounds to ChatGPT’s restrictions on producing harmful content.
On how ChatGPT handles potentially harmful inputs, OpenAI says: “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.”
According to Check Point, these hackers have discovered how to bypass the safeguards and get the model to create malicious content, such as phishing emails and malware code, by building Telegram bots that use OpenAI’s API.
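To make the mechanism concrete: the bots described by Check Point don’t “hack” the model itself; they simply relay a user’s message straight to OpenAI’s API, which at the time applied fewer content restrictions than the ChatGPT web interface. Below is a minimal, benign sketch of that request path — the `build_request` helper, prompt, and placeholder key are illustrative, the Telegram wiring and the actual network call are omitted, and the endpoint and model name reflect OpenAI’s public completions API as of early 2023.

```python
import json

# Public completions endpoint as documented by OpenAI circa early 2023.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str):
    """Assemble the headers and JSON body for a raw API call.

    A Telegram bot of the kind Check Point describes would take the
    user's chat message as `prompt` and POST this payload to API_URL,
    skipping the moderation layer of the ChatGPT web interface.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # placeholder key below
        "Content-Type": "application/json",
    }
    body = {
        "model": "text-davinci-003",  # the GPT-3.5 completions model of the era
        "prompt": prompt,
        "max_tokens": 256,
    }
    return headers, body

# Harmless example prompt; the point is the request path, not the content.
headers, body = build_request("Translate 'hello' into French.", "sk-EXAMPLE")
print(json.dumps(body, indent=2))
```

The takeaway is that any safeguard implemented only in the web front end can be sidestepped by whoever holds an API key, which is why Check Point frames this as abuse of the API rather than of ChatGPT proper.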
“In conclusion, we see cybercriminals continue to explore how to utilize ChatGPT for their needs of malware development and phishing emails creation. As the controls ChatGPT implement improve, cybercriminals find new abusive ways to use OpenAI models – this time abusing their API,” Check Point concluded in its blog post.
The Pentagon uses ChatGPT to write press release
This is according to a report by Motherboard, which linked to the article in question. The press release announced a new task force focused on countering the threats of Unmanned Aerial Systems.
The press release included the following disclaimer at the top: “The article that follows was generated by OpenAI’s ChatGPT. No endorsement is intended. The use of AI to generate this story emphasizes U.S. Army Central’s commitment to using emerging technologies and innovation in a challenging and ever-changing operational environment.”
ChatGPT used by judge in court case
Vice also reported that Judge Juan Manuel Padilla Garcia used ChatGPT to help make a legal decision in a court case in Colombia. It is believed to be the first time AI has been used this way in a court case; using AI in court decisions is permitted in Colombia.
According to Garcia, ChatGPT was used in this case to reduce the time spent drafting judgments, and he included the model’s full responses in the decision.