At its Google Next conference in San Francisco, enterprise Google was on full display. The company introduced new tools and services across its enterprise offerings, from Google Cloud Platform to G Suite to Hangouts. The week's announcements also included numerous cloud development tools designed to make container creation and application management simpler for developers.

Google Cloud Functions on the Google Cloud Platform (GCP) brings an event-driven serverless code option into the roster of developer options. Targeted at use cases similar to Amazon's Lambda, Functions are not only stand-alone ways to run raw code against event triggers; they are also now tied into Firebase, giving mobile application developers direct access to these lightweight processes.

Cloud Functions is in beta, but it is already integrated with Google Cloud Storage and Cloud Pub/Sub, both of which can fire event triggers on actions such as a file upload, the arrival of a message, or even a new entry in a logging system.
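The trigger model described above can be sketched in miniature. The following is a toy, self-contained illustration of the event-trigger pattern, not the actual Cloud Functions API: every name here (`EventRouter`, `storage.upload`, the event payload fields) is hypothetical, standing in for the registration and dispatch that GCP performs server-side.

```python
# Hypothetical sketch of event-triggered functions: handlers register for an
# event type and run only when that event fires. None of these names are
# real GCP APIs; they only illustrate the dispatch pattern.

class EventRouter:
    def __init__(self):
        self._handlers = {}

    def on(self, event_type):
        """Decorator that registers a handler for an event type."""
        def register(fn):
            self._handlers.setdefault(event_type, []).append(fn)
            return fn
        return register

    def fire(self, event_type, payload):
        """Invoke every handler registered for this event type."""
        return [fn(payload) for fn in self._handlers.get(event_type, [])]

router = EventRouter()

@router.on("storage.upload")
def on_upload(event):
    # In Cloud Functions, a body like this would run when a file lands in a
    # Cloud Storage bucket; here it simply formats a message.
    return f"processing {event['name']} ({event['size']} bytes)"

results = router.fire("storage.upload", {"name": "report.csv", "size": 2048})
print(results[0])  # processing report.csv (2048 bytes)
```

The point of the pattern is that the code never polls: it sits idle until the platform fires the matching event, which is what makes the model a fit for lightweight, pay-per-invocation workloads.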


While discussing consulting and implementation partnerships in her first-day keynote, Diane Greene, senior vice president of Google Cloud, pointed out that Google has partnered with Pivotal and Rackspace to help companies implement their GCP plans.

Following that up on the third day of the conference, Pivotal and Google announced Project Kubo, a combination of BOSH and Kubernetes. BOSH is an open-source tool for handling release engineering, deployment and lifecycle management for distributed systems, while Kubernetes handles the management of containers. Project Kubo offers a more polished Kubernetes management experience built on top of BOSH.

One announcement at Google Next that got Internet commenters excited was the release of Google Cloud SQL for PostgreSQL, which allows developers to host their PostgreSQL databases within the Google Cloud and use cloud-based management tools to keep those databases available and performant. The service launches in beta on Monday, March 13, and most availability and cloning features are not yet supported.

At the start of Google Next, Google acquired Kaggle, a machine learning education and competition site where developers compete to solve data science problems. An example is its introductory project, which tasks users with predicting an individual passenger's chance of surviving the sinking of the Titanic.

Google was, evidently, so enamored with Kaggle’s success as a startup that it had developed its own internal copy of the service called Gaggle. Kaggle’s co-founder and CEO Anthony Goldbloom said that he hopes the site will expand to be a place where data scientists can perform all of their work.

To further machine learning innovation, Google also announced that it will open a competition for early-stage startups implementing machine learning. Google will offer US$1 million in GCP credits to the winning startup, as well as access to its engineers.

Elsewhere this week, Google released the Google Cloud Container Builder. This fast and flexible software packaging utility will make the job of building Docker containers easier for developers and administrators, according to the Google Cloud Platform blog.

While many of its cloud announcements pushed Google into competition with Amazon and Microsoft, both of those companies have turned in earnings results showing significant cloud revenue buoying their overall bottom lines. That's not yet true for Google: its cloud revenues are growing dramatically, but they do not yet rival the company's much larger advertising earnings, which have topped US$25 billion per quarter.

Collectively, in Q4 of 2016, Google’s cloud revenues couldn’t have been more than US$3 billion, as the company lumps cloud earnings in with other non-advertising businesses, such as the Play store and Chromecast sales. While this number has grown a great deal, it’s still far behind the top two cloud providers. Amazon posted US$12.2 billion in cloud revenues in 2016, and Microsoft reported US$6 billion in Intelligent Cloud revenues in its most recent quarter.

On the other side of its cloud, however, Google’s collaboration and productivity announcements at Google Next weren’t about playing catch-up with the competition. Instead, the company’s G Suite and Hangouts announcements touted the company’s existing success and future plans for growth in a market where Google says it helped Verizon transfer over 100,000 enterprise users to G Suite productivity tools.

Hangouts got an enterprise update at Google Next, introducing more Slack-like functionality. The new Hangouts Chat will offer deep integrations with G Suite documents and spreadsheets, while Hangouts video chats are getting a major overhaul, including a new standalone whiteboard screen that companies can tie into video conferences.

Google announced dozens of other integrations, beta releases and tools at Google Next. The new Titan security chip is used to establish a hardware root of trust and to perform hardware-level authentication, while the new Data Loss Prevention beta will help companies keep sensitive information from leaking out of GCP data stores.

Google also released a new data preparation tool for loading information into GCP, and a new line of commercial datasets was added to BigQuery, including weather information, news archives from Dow Jones, and real estate information from HouseCanary.

Processing all that information should become a bit easier for users of the new Python SDK for Cloud Dataflow. This Apache Beam-based tool will make it easier for Python users to perform ETL actions within Google Cloud Dataflow.
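To make the ETL idea concrete, here is a toy extract-transform-load pass written in plain Python. It mimics the map/filter pipeline style that Beam encourages, but the stage names and record layout are invented for illustration; a real Cloud Dataflow job would express these steps as Beam transforms rather than ordinary functions.

```python
# Toy ETL sketch in plain Python, echoing the map/filter pipeline style of
# Apache Beam. The data and stage names are made up for illustration; a real
# Dataflow job would use Beam transforms instead of these functions.

import csv
import io

RAW = """city,temp_f
Austin,95
Oslo,41
,88
"""

def extract(text):
    """Parse CSV rows into dicts (Beam analogue: a read transform)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop malformed rows and convert units (Beam analogue: Filter + Map)."""
    valid = (r for r in rows if r["city"])
    return [{"city": r["city"],
             "temp_c": round((int(r["temp_f"]) - 32) * 5 / 9, 1)}
            for r in valid]

def load(rows, sink):
    """Write results to a destination (Beam analogue: a write transform)."""
    sink.extend(rows)

sink = []
load(transform(extract(RAW)), sink)
print(sink)  # [{'city': 'Austin', 'temp_c': 35.0}, {'city': 'Oslo', 'temp_c': 5.0}]
```

The appeal of expressing the same steps in a framework like Beam is that each stage can then be parallelized and distributed by the Dataflow runner instead of running sequentially on one machine.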

Google Cloud Datalab has reached general availability. This data science workflow tool lets developers work in Jupyter notebooks with SQL, Python and shell commands, and TensorFlow support has been added for the first time.

Cloud Dataproc was updated at Google Next as well. This fully managed service for running Apache Spark, Flink and Hadoop pipelines is designed to make stream processing easier and to accelerate development of stream processing applications and architectures.