The Apache Software Foundation has announced that Apache Falcon has graduated from the Apache Incubator to a Top-Level Project. Falcon is a data processing and management solution for Apache Hadoop with a focus on data motion, data discovery, coordination of data pipelines, and life-cycle management.
“Apache Falcon solves a very important and critical problem in the Big Data space,” said Srikanth Sundarrajan, vice president of Apache Falcon. “Graduation to TLP marks an important step in the progression of the project. Falcon has a robust road map to ease the pain of application developers and administrators alike in authoring and managing complex data-management and processing applications.”
More information about Apache Falcon is available here.
Canonical launches Ubuntu Core for the Internet of Things
Canonical has announced that its Snappy Ubuntu Core will now be available for the Internet of Things, connected devices and autonomous machines. Ubuntu Core provides developers with a secure platform, reliable updates, and access to the Ubuntu ecosystem.
“We are inspired to support entrepreneurs and inventors focused on life-changing projects,” said Mark Shuttleworth, founder of Ubuntu and Canonical. “From scientific breakthroughs by autonomous robotic explorers, to everyday miracles like home safety and energy efficiency, our world is being transformed by smart machines that can see, hear, move, communicate and sense in unprecedented ways.”
More information is available here.
Facebook open-sources AI tools for Torch
Facebook AI Research is open-sourcing its deep-learning modules for Torch, an open-source development environment for computer vision, machine learning and numeric computation. Torch is used in a number of academic labs and at technology companies such as AMD, Google DeepMind, Intel, NVIDIA and Twitter, according to Facebook.
“Progress in science and technology accelerates when scientists share not just their results, but also their tools and methods,” wrote Soumith Chintala, AI researcher and software engineer at Facebook, on the company’s blog. “These modules are significantly faster than the default ones in Torch and have accelerated our research projects by allowing us to train larger neural nets in less time.”
The release will include GPU-optimized modules for large convolutional nets, networks with sparse activations, and CUDA-based models and containers.