Google already puts artificial intelligence to work across features like multi-touch, location, identity and payments, but according to Google CEO Sundar Pichai, developing for an AI-first world means rethinking many of the company's products.

At today’s kick-off of Google I/O 2017, Google announced several ways it is embracing this shift in computing. The company also announced a few ways that it is making life easier for its 800 million active users and its many developers.

To start, Google, which had never before added a new programming language to Android, announced first-class support for Kotlin. Kotlin is a statically typed programming language for the Java Virtual Machine, and it is now an officially supported language for writing Android apps.

Some companies, including Pinterest and Square, have already adopted the language in their production apps. Developers can get started with Kotlin by downloading the Android Studio 3.0 preview.
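To give a sense of why Android teams have gravitated to the language, here is a minimal, generic sketch of three Kotlin features often cited in that context: data classes, null safety, and extension functions. This is an illustration of the language itself, not code from Google's announcement.

```kotlin
// Data class: equals(), hashCode(), toString() and copy() come for free.
data class User(val name: String, val email: String?)

// Extension function: adds behavior to String without subclassing it.
fun String.initials(): String =
    split(" ").mapNotNull { it.firstOrNull()?.uppercaseChar() }.joinToString("")

fun greeting(user: User): String {
    // The Elvis operator (?:) handles the nullable email field concisely,
    // with the compiler enforcing that the null case is covered.
    val contact = user.email ?: "no email on file"
    return "Hello ${user.name} (${user.name.initials()}), $contact"
}
```

The same logic in Java would typically require a hand-written value class, explicit null checks, and a static utility method; the compiler-checked nullability is what eliminates a large class of `NullPointerException` bugs.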

As a way to bring computing to everyone, Google announced a new initiative called "Android Go," which is designed around three things: the operating system, the apps, and the Google Play experience.

Dave Burke, vice president of engineering at Android, wrote in a blog post that the Android Go experience will ship in 2018 for all Android devices that have 1GB or less of memory. The goal is to help manufacturers continue to offer lower-cost devices that still provide great user experiences.

Google then showcased ways that Android developers can create new apps and experiences for users. The company announced the release of the Android O Developer Preview 2, which comes with notification dots, a new way for developers to surface activity in their apps. The preview also brings Picture-in-Picture, which provides seamless multitasking on any size screen. In addition, Google announced a new homescreen for Android TV, which developers can target using the new TvProvider support library APIs.

Google also announced its new TPUs (Tensor Processing Units), designed specifically to accelerate AI training and power intelligent applications. Each chip provides 180 teraflops of compute, and chips can be grouped into sets called TPU Pods, each of which provides up to 11.5 petaflops to accelerate the training of a large machine learning model.
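The two announced figures imply the scale of a pod: at 180 teraflops per chip, roughly 64 chips account for the quoted 11.5 petaflops. A quick sanity check of that arithmetic (the 64-chip count is an inference from the numbers, not something stated in the keynote):

```kotlin
// Back-of-the-envelope check of the announced Cloud TPU figures.
const val TERAFLOPS_PER_CHIP = 180
const val CHIPS_PER_POD = 64  // inferred: 11.5 PF / 180 TF per chip ≈ 64

// Aggregate pod throughput in petaflops (1 petaflop = 1,000 teraflops).
fun podPetaflops(): Double = TERAFLOPS_PER_CHIP * CHIPS_PER_POD / 1000.0
```

64 chips at 180 teraflops each works out to 11.52 petaflops, consistent with the "up to 11.5 petaflops" figure Google quoted.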

The new TPUs are coming to Google Compute Engine as Cloud TPUs, where they can be combined with virtual machines and other hardware, including Intel Skylake CPUs and NVIDIA GPUs. And, in order to help as many researchers as possible, Google is also making 1,000 Cloud TPUs available at no cost to machine learning researchers via the new TensorFlow Research Cloud.

Pichai said that in order to drive the shift and apply AI to solve specific problems, it would help if there were one place focused on bringing the benefits of AI to everyone. Google is doing that with the site Google.ai, which focuses on three pillars: state-of-the-art research, tools and infrastructure, and applied AI. As part of its applied AI efforts, Google wants thousands of developers to be able to use machine learning.

Also during the keynote, Google announced AutoML, a new reinforcement learning approach in which neural networks learn to design other neural networks. Pichai calls this "learning to learn," and joked that it's kind of like the movie "Inception."

“We must go deeper,” Pichai joked.

About Madison Moore

Madison Moore is an Online and Social Media Editor for SD Times. She is a 2015 graduate of Delaware Valley University in Pennsylvania, with a bachelor's degree in media and communication. Moore has reported for Philly.com, The Philadelphia Inquirer, and PhillyVoice. She is new to Long Island and is a cat enthusiast. Follow her on Twitter at @Moorewithmadi.