Google’s guide for successful Google Play apps
Google has released a guide to give developers the best practices and tools to help their apps succeed on Google Play. The guide, “The Secrets to App Success on Google Play,” is split into seven sections: publishing on Google Play; quality; discoverability and reach; engagement and retention; monetization; measurement with Google Analytics; and going global.
The guide is currently available in English, and Google plans to add more languages in the upcoming months.
“Developing an app or game and distributing it on Google Play is a good start, but it’s only the first step to building a sustainable business,” wrote Dom Elliott from the Google Play team on the company’s Android blog. “That’s why we’ve written ‘The Secrets to App Success on Google Play,’ a detailed playbook on the best practices and tools you can use to maximize the reach, retention, and revenue of your new app.”
Auto-correct and autocomplete for programmers
Researchers at Rice University, funded by the Defense Advanced Research Projects Agency (DARPA), are developing a tool called PLINY that will auto-correct and autocomplete code for programmers.
The four-year, US$11 million PLINY project is part of DARPA’s Mining and Understanding Software Enclaves (MUSE) program, which is amassing hundreds of billions of lines of open-source code to create a searchable database of properties, behaviors and vulnerabilities. According to Rice computer scientist and PLINY team member Swarat Chaudhuri, PLINY is being built to recognize and match similar patterns in code regardless of programming language. PLINY’s core will be a data-mining engine that continuously scans the MUSE codebase for the right snippet to auto-correct or autocomplete a programmer’s code.
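PLINY’s matching techniques have not been detailed publicly, but the underlying idea of completing a partial snippet by finding the closest match in a large code corpus can be sketched with a toy token-overlap search. The sketch below is purely illustrative: the corpus, the similarity measure, and the class names are hypothetical and are not PLINY’s actual algorithm.

```java
import java.util.*;

/** Toy illustration: suggest a completion for a partial code snippet
 *  by finding the corpus entry with the highest token overlap.
 *  This is NOT PLINY's algorithm; it only sketches the general idea. */
public class CorpusCompleter {
    private final List<String> corpus;

    public CorpusCompleter(List<String> corpus) {
        this.corpus = corpus;
    }

    // Split a snippet into a rough set of tokens (identifiers, keywords, literals).
    private static Set<String> tokens(String snippet) {
        return new HashSet<>(Arrays.asList(snippet.split("\\W+")));
    }

    // Jaccard similarity between two token sets.
    private static double similarity(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    /** Return the corpus snippet most similar to the partial input. */
    public String suggest(String partial) {
        Set<String> query = tokens(partial);
        String best = null;
        double bestScore = -1.0;
        for (String candidate : corpus) {
            double score = similarity(query, tokens(candidate));
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        CorpusCompleter completer = new CorpusCompleter(List.of(
            "for (int i = 0; i < items.length; i++) { sum += items[i]; }",
            "while ((line = reader.readLine()) != null) { lines.add(line); }"
        ));
        // A partial loop is matched to the closest complete snippet in the corpus.
        System.out.println(completer.suggest("for (int i = 0; i < items.length"));
    }
}
```

A production system like the one PLINY envisions would of course operate on parsed program structure and learned properties rather than raw token overlap, and at the scale of the MUSE corpus rather than a two-entry list.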
The PLINY team includes Rice computer scientists Chaudhuri, Keith Cooper, Chris Jermaine, Vivek Sarkar, and Moshe Vardi. More information on PLINY is available here.
Confluent: An Apache Kafka and real-time data startup
Former LinkedIn engineers have formed a startup called Confluent, dedicated to commercializing a real-time data stream platform built around Apache Kafka.
Confluent cofounders Jay Kreps, Neha Narkhede and Jun Rao all previously worked at LinkedIn, where they were among the original engineers and architects who built and ultimately open-sourced Apache Kafka, a horizontally scalable publish-subscribe messaging system reimagined as a distributed commit log. According to a blog post by Kreps introducing the company, Confluent aims to bring the real-time data architecture of Apache Kafka to more commercial and enterprise applications.
“We felt that what we had ended up creating was less a piece of infrastructure and more a kind of central nervous system for the company,” wrote Kreps. “Everything that happened in the business would trigger a message that would be published to a stream of activity of that type.
“We realized that just open-sourcing Kafka, the core infrastructure, wasn’t really enough to make this idea of pervasive real-time data a reality. Much of what we did at LinkedIn wasn’t just the low-level infrastructure; it was the surrounding code that made up this real-time data pipeline—code that handled the structure of data, copied data from system to system, monitored data flows, and ensured the correctness and integrity of data that was delivered, or derived analytics on these streams. All of these things were still internal to LinkedIn and not really built in a general-purpose way that companies could adopt easily without significant engineering effort to recreate similar capabilities.”
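As a concrete illustration of the publish-subscribe pattern Kreps describes, here is a minimal sketch of publishing a business event to a Kafka topic with the standard Java producer client. The broker address, topic name, and event payload are placeholders for this example, not anything specific to Confluent or LinkedIn.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ActivityPublisher {
    public static void main(String[] args) {
        // Minimal producer configuration; the broker address is a placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each business event is appended to a topic, which any number of
            // downstream consumers can read independently and in order.
            producer.send(new ProducerRecord<>("user-activity", "user-42", "page_view:/pricing"));
        }
    }
}
```

The surrounding pipeline pieces Kreps mentions, such as schema handling, cross-system copying, and monitoring, sit on top of this basic publish step and are what Confluent says it intends to productize.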
More information on Confluent is available here.