At AnDevCon today, Timothy Jordan, head of Google platform developer relations, gave a keynote detailing the current and future state of the platform and its tooling. He described changes coming in Android 7.1, and he taught attendees how to get started with TensorFlow on Android devices.
The next release of the Android platform will offer developers new features designed to make their applications easier to use. First in line is app shortcuts, which allow users to select from a set of predefined actions when long-pressing an app icon.
With an app shortcut, users long-pressing a messaging application might be offered a pop-up list of tasks that can be performed within it. These could include creating a new message or jumping to a recent conversation.
(Related: Google puts out final Android 7.1 Developer Preview)
These new app shortcuts are described in XML files, and developers build the actions around intents. Jordan said this will let users get to the functionality they need quickly and easily from the home screen.
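Shortcuts can also be registered at runtime. As a minimal sketch, a dynamic shortcut built with Android 7.1's ShortcutManager might look like the following (the "compose" ID, icon resource, and deep-link URI here are hypothetical):

    import android.content.Context;
    import android.content.Intent;
    import android.content.pm.ShortcutInfo;
    import android.content.pm.ShortcutManager;
    import android.graphics.drawable.Icon;
    import android.net.Uri;
    import java.util.Arrays;

    public class ShortcutRegistrar {
        // Registers a "new message" shortcut offered when the launcher icon is long-pressed.
        public static void registerComposeShortcut(Context context) {
            ShortcutManager shortcutManager = context.getSystemService(ShortcutManager.class);
            ShortcutInfo shortcut = new ShortcutInfo.Builder(context, "compose")
                    .setShortLabel("New message")
                    .setLongLabel("Compose a new message")
                    .setIcon(Icon.createWithResource(context, R.drawable.ic_compose)) // hypothetical icon resource
                    .setIntent(new Intent(Intent.ACTION_VIEW, Uri.parse("example://compose"))) // hypothetical deep link
                    .build();
            shortcutManager.setDynamicShortcuts(Arrays.asList(shortcut));
        }
    }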
Android 7.1 will also add support for round icons, which can be created in the Image Asset Studio. Also coming in 7.1 is support for image keyboards: the new Commit Content API will allow keyboards to push images and other rich content directly into a text field.
Applications declare which types of media they support, and doing so lets users select images or other content from a sliding bar at the bottom of the screen, which acts as a sort of keyboard. Via the Android support library, the new API works all the way back to the Honeycomb releases of Android.
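On the receiving side, a rough sketch of an EditText that accepts image content through the support library's compat wrappers might look like this (the accepted MIME types are assumptions):

    import android.content.Context;
    import android.os.Bundle;
    import android.support.v13.view.inputmethod.EditorInfoCompat;
    import android.support.v13.view.inputmethod.InputConnectionCompat;
    import android.support.v13.view.inputmethod.InputContentInfoCompat;
    import android.util.AttributeSet;
    import android.view.inputmethod.EditorInfo;
    import android.view.inputmethod.InputConnection;
    import android.widget.EditText;

    public class RichContentEditText extends EditText {
        public RichContentEditText(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        @Override
        public InputConnection onCreateInputConnection(EditorInfo editorInfo) {
            InputConnection ic = super.onCreateInputConnection(editorInfo);
            // Declare the MIME types this field accepts so image keyboards can offer content.
            EditorInfoCompat.setContentMimeTypes(editorInfo, new String[]{"image/gif", "image/png"});
            return InputConnectionCompat.createWrapper(ic, editorInfo,
                    new InputConnectionCompat.OnCommitContentListener() {
                        @Override
                        public boolean onCommitContent(InputContentInfoCompat info, int flags, Bundle opts) {
                            // A real app would request permission and read info.getContentUri() here.
                            return true;
                        }
                    });
        }
    }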
Retailers stand to benefit most from the new Demo User mode. It enables developers to put their applications into a demo mode better suited to display on a retail floor; examples include disabling the creation of new users or removing billing screens.
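As a sketch, an application might branch on Android 7.1's UserManager.isDemoUser() check to hide billing screens when running on a retail floor:

    import android.content.Context;
    import android.os.UserManager;

    public class DemoModeHelper {
        // Returns true when the device is running as a retail demo user (Android 7.1+).
        public static boolean isRetailDemo(Context context) {
            UserManager userManager = context.getSystemService(UserManager.class);
            return userManager.isDemoUser();
        }
    }

An app could call this before showing sign-up or purchase flows and substitute canned demo content instead.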
Jordan also discussed forthcoming changes to Android Studio 2.3. These include an overall update to the latest version of IntelliJ, as well as new support for lint checks. This version also adds support for Android 7.1.
Jordan then turned to Firebase, which has expanded its offerings to support developers on the back end of their applications. New features include Unity support, improved analytics, and a new Udacity course that teaches developers how to use the service.
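As one small illustrative example (the event choice and parameter are assumptions, not from the talk), logging an event with Firebase Analytics takes only a few lines:

    import android.content.Context;
    import android.os.Bundle;
    import com.google.firebase.analytics.FirebaseAnalytics;

    public class AnalyticsLogger {
        // Logs a share event; the parameter used here is illustrative.
        public static void logShare(Context context, String contentId) {
            FirebaseAnalytics analytics = FirebaseAnalytics.getInstance(context);
            Bundle params = new Bundle();
            params.putString(FirebaseAnalytics.Param.ITEM_ID, contentId);
            analytics.logEvent(FirebaseAnalytics.Event.SHARE, params);
        }
    }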
Jordan spent the first half of his talk on these new products and features, then moved into a half-hour tutorial on building a Hello World project with TensorFlow, Google's machine learning library.
“When I talk to developers worldwide who are building applications and thinking about machine learning, it sounds very complicated the way they describe it. And it is,” said Jordan. “However, the first overview is simpler than you think. You can start understanding it pretty quickly, and you can start using it even quicker than that. I’m not an expert in machine learning, but I am an expert in the tools. That’s what’s exciting now. It’s the first time we’ve been able to access this level of intelligence without having two or three Ph.D.’s on our team.”
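A Hello World with TensorFlow is indeed short. A sketch along the lines of the official example for TensorFlow's Java bindings, assuming the libtensorflow library is on the classpath, looks like this:

    import org.tensorflow.Graph;
    import org.tensorflow.Session;
    import org.tensorflow.Tensor;
    import org.tensorflow.TensorFlow;

    public class HelloTensorFlow {
        public static void main(String[] args) throws Exception {
            try (Graph g = new Graph()) {
                // Build a graph containing a single string constant.
                String value = "Hello from TensorFlow " + TensorFlow.version();
                try (Tensor t = Tensor.create(value.getBytes("UTF-8"))) {
                    g.opBuilder("Const", "Greeting")
                            .setAttr("dtype", t.dataType())
                            .setAttr("value", t)
                            .build();
                }
                // Run the graph and print the constant's value.
                try (Session s = new Session(g);
                     Tensor output = s.runner().fetch("Greeting").run().get(0)) {
                    System.out.println(new String(output.bytesValue(), "UTF-8"));
                }
            }
        }
    }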
Jordan then described some of Google's machine learning offerings. On mobile devices, for example, the Mobile Vision API can run locally, with no server round trip. He also discussed Cloud Machine Learning support in Google Cloud, which can train machine learning models in Google's data centers. The Mobile Vision API is free to use.
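To illustrate that on-device path, a minimal sketch using the Play Services Mobile Vision face detector (the helper class itself is hypothetical) might be:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;

    public class FaceCounter {
        // Counts faces in a bitmap entirely on-device with the Mobile Vision API.
        public static int countFaces(Context context, Bitmap bitmap) {
            FaceDetector detector = new FaceDetector.Builder(context).build();
            try {
                Frame frame = new Frame.Builder().setBitmap(bitmap).build();
                SparseArray<Face> faces = detector.detect(frame);
                return faces.size();
            } finally {
                detector.release();
            }
        }
    }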
Jordan demonstrated some simple machine learning capabilities, such as style transfer on artworks. With style transfer, a developer can combine two images, rendering the content of one in the artistic style of the other. TensorFlow was also used to build Google's Deep Dream project, which produced truly bizarre images by analyzing and reimagining them with a machine learning algorithm.
Jordan also said TensorFlow can be used for textual analysis through Parsey McParseface, an open-source English-language parser released earlier this year. He said the work Google is doing on machine learning is no longer in the research phase: it is ready for business use.