Up until the last 10 years or so, artificial intelligence was the stuff of science fiction: machines that could learn from a variety of interactions to make decisions and take actions that would normally require a human. That science-fiction legacy has left some people fearing AI as the beginning of a rise of intelligent robots, which is why the ethical development of AI has become an important issue in the IT community.
Artificial intelligence is having a big impact on application development, and today we see AI in many different computing environments. It is popping up in customer service call centers, in dialog boxes on websites, in the Industrial Internet of Things, and in our children's toys, our homes, and our businesses. When coupled with automated processes, machines can take over many of the mundane tasks businesses have to complete on a daily basis.
Of course, applications of AI are much broader and more sophisticated. AI can be found in automotive controls, such as applying the brake when your car is quickly approaching the one ahead. It’s found in data analytics, processing and management, where AI can learn to spot anomalies in data and trigger alerts and actions to remediate the issue.
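The anomaly-spotting behavior described above can be illustrated with a minimal sketch. This is not any vendor's implementation: the `find_anomalies` helper, the z-score rule, the threshold value, and the alert format are all illustrative assumptions standing in for what real analytics tooling does with learned models.

```python
# Minimal sketch: flag readings that deviate strongly from the series mean
# using a simple z-score rule, then "trigger alerts" for each one.
from statistics import mean, stdev

def find_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:  # all readings identical -> nothing can be anomalous
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Example sensor stream with one obvious outlier at index 5.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2, 9.7]
for i in find_anomalies(readings, threshold=2.0):
    print(f"ALERT: reading {readings[i]} at index {i} looks anomalous")
```

Production systems replace the fixed threshold with learned models, but the remediation pattern is the same: detect a deviation, then trigger an alert or automated action.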
OpenAI and Microsoft have announced that they are expanding their current partnership. This comes on the heels of OpenAI’s public release of ChatGPT at the tail end of last year, which has been making waves throughout the industry as people experiment with its capabilities. Microsoft had previously made large investments in OpenAI in 2019 and 2021, … continue reading
Regardless of the industry, if an organization is failing to measure up, customers will not hesitate to find alternatives, resulting in lost revenue and damage to the company’s reputation and customer relationships. One of the most essential aspects of remaining competitive is adopting automation and introducing artificial intelligence tooling into every … continue reading
Point-E is OpenAI’s new system which produces 3D models from prompts in only 1-2 minutes on a single GPU. Generating 3D models was previously very different from image generation with models such as DALL-E: those can typically produce images within seconds or minutes, while a state-of-the-art 3D model required multiple GPU hours to produce … continue reading
With less than 20% of the world’s population speaking English as their first or second language, Google is ramping up the efficiency of video voice dubbing with technologies for cross-lingual voice transfer and lip reanimation using deep learning and TensorFlow. The first technology keeps the voice similar to that of the original speaker and the … continue reading
Over the last few years, AI and automation have been slowly but surely changing the landscape of the software development industry. Whether it is applied to testing, security, or reducing wait times for tasks that had previously been done manually, this technology has proven to be essential in order for organizations to keep up with … continue reading
The intelligent continuous delivery solution provider, OpsMx, announced new software modules and support services for Argo that make it faster, easier, and safer for companies to use Argo in production, according to the company. New automated analysis capabilities can increase the speed and accuracy of complex progressive deployments. A unified view and centralized audit of … continue reading
Anyscale, the company behind the open source unified compute framework for machine learning called Ray, has announced new updates to the Anyscale Platform. The platform enables companies to build, deploy, and manage machine learning and Python applications. One new addition is Anyscale Workspaces, which provides a unified development environment for building machine learning workloads. Developers … continue reading
IBM unveiled three new embeddable AI libraries to reduce the barriers for AI adoption and to address the AI skills shortage. The models include the same language processing and speech libraries that IBM uses to power its own IBM Watson software. One of the new libraries is IBM Watson Natural Language Processing Library (NLP), designed … continue reading
Domino 5.3 was released to improve how organizations can get the most of data science across any cloud or on-premises infrastructure. The new version introduces a private preview of Domino Nexus hybrid and multi-cloud capabilities and an expanded suite of connectors to simplify and democratize access to critical data sources. On top of that, new … continue reading
It seems that every day in the tech world we hear about the salvation that the new era of the web will bring by taking away mega corporations’ hold on user data and giving control back to the people (at least some of it). But it isn’t until we read into the matter further that … continue reading
The BigCode initiative’s aim is to build state-of-the-art large language models (LLMs) for code in an open and responsible way. Code LLMs enable the completion and synthesis of code from other code and natural language descriptions, and enable users to work across a wide range of domains, tasks, and programming languages. The initiative … continue reading
OpenAI has removed the waitlist for the DALL-E beta so that users can get started right away. DALL-E allows users to type infinite combinations of prompts, each of which generates a unique set of AI-generated images. Whether the prompts are as simple as “an armchair in the shape of an avocado” or as … continue reading