Eight years ago, the Robot Operating System (ROS) project began, and since then the robotics industry has made huge advances. Robots are teaching kids to code, becoming companions, gaining X-ray vision, and even taking flight.

But adding these features isn’t easy, and that is where the Robot Operating System comes in, according to Brian Gerkey, CEO of the Open Source Robotics Foundation (OSRF).

We caught up with Gerkey to talk about how the ROS project has grown, how it is contributing to advances in robotics, and what to expect over the next year.

SD Times: What is the Open Source Robotics Foundation and how is it related to ROS?
Gerkey: We are a non-profit research and development company that is about three and a half years old now. Most of us who founded the company came from a robotics incubator called Willow Garage, where a lot of the original work on ROS and some of the related software happened. What we decided back in 2012 was that it was the right time to create a separate company that would focus completely on this open-source development platform, with the goal of producing the highest-quality open-source software that is free for everybody to use in whatever robotics application or research project they want to tackle.

What is ROS exactly?
First of all, it is not actually an operating system despite the fact that we put that in the name. Strictly speaking, it is a collection of software libraries and tools that people use to develop robot applications. ROS is used on top of an existing, true operating system, most often Linux, but also Mac OS X, and recently there has been some work to get it going on Windows, but that is still a little bit preliminary. Most of the robots in the world, and even new robots that are being built, are running Linux, so that is really where we focused our efforts. What it does is it gives you all the building blocks that you need to program a robot.
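To give a flavor of those building blocks, here is a minimal sketch of a ROS node written with the rospy client library. It simply publishes a string message on a topic at a fixed rate; the node and topic names are arbitrary placeholders rather than anything Gerkey describes here.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node using the rospy client library.
# Node and topic names are arbitrary placeholders.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')                          # register this process with the ROS master
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)                               # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from ROS'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

Sensor drivers, visualization tools and the rest of the ecosystem Gerkey mentions communicate through the same publish/subscribe building blocks.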

How has the project grown over the past eight years?
When this started, we were developing this software with the primary goal of supporting the PR2 robot, a mobile manipulation platform: a human-sized robot that drives around on wheels, has two arms, and carries a lot of sensors. When we were designing the robot and building ROS as its software infrastructure, the goal was to get copies of that robot and software out to labs so that they could do interesting things.

Along the way, we made sure ROS was not specific to the PR2 so that you could reasonably use it on other robots, and that is what started happening. First, in universities, we saw a lot of uptake from labs that had other kinds of robots: robots they bought from other companies, built themselves, or customized. They were able to take the software we were providing and use it on their robot even if that robot was really different from the robot we had.

So they started doing that: They contributed patches, and they started writing their own new packages and contributing those to the overall ROS ecosystem. The software platform just started to grow and grow. By the time we left and started the OSRF, there was already a critical mass of developers, contributors and users worldwide who were using and helping to improve the software platform.

What we are seeing now is ROS as the de facto standard, certainly in research labs, both academic and industrial; but more interestingly, over the last few years we’ve seen a lot of uptake in industry itself. You are starting to see companies building products based on ROS, from small startups designing brand-new robots for tasks ranging from hotel delivery to warehouse logistics, to big established companies like BMW, Bosch [and] Qualcomm. Those companies are putting real effort into either using ROS in their R&D or explicitly supporting ROS on their hardware, because they expect people to build robots with that hardware and want to make sure ROS is well supported. We are seeing a lot of attention from industry at a time when the robotics industry itself is really exploding.

What other factors have contributed to the robotics industry taking off?
Well, it certainly helped the field a couple of years ago when Google made a big public investment in robotics by acquiring a number of companies and setting up a substantial robotics program within Google. I think it caused some people to really sit up and pay attention and say, “Well, if Google is putting money into this, then maybe we should really get involved.” There is a public perception aspect of it that I think is going in the right direction.

On the technical side, there have just been some real improvements over the last five to 10 years that have made certain things possible today that weren’t possible before.

Around 2011, Microsoft released the Xbox Kinect, which was an awesome 3D sensor for robots. As soon as it was released, everybody in robotics labs went down to their local electronics store, bought as many as they could, immediately threw away the game that came with it, cut the cable and hooked it onto a robot to give it 3D sensing. It was astonishing that you could now have a 3D sensor of a quality and a frame rate that you really couldn’t before. It was really the first entry into making 3D sensors that are good, small, low-powered and affordable, which meant robots could have a much better perspective of the world.

Another aspect is better actuation, specifically robot arms or manipulators that are safe to use around people. In the last five to 10 years we have seen a new generation of robot arms that are capable of doing useful things but are also intrinsically safe, so they can be used with people around them.

The third thing is the availability of very capable, high-quality open-source platforms like ROS that you can use to build an application. Ten years ago, if you were going to build a robot company, you really had to start from scratch by writing all the device drivers, all the communication systems, all the logging, all the diagnostics, and all the visualization tools. Now, as part of this overall ROS ecosystem that we are stewarding (but which is really contributed by thousands of people around the world), you just have a much better starting point.

How is developing for robotics different from other software development?
Robot programming has all the same difficulties and challenges as any software engineering exercise, except you also have the difficulty of interacting with a wide array of peripheral devices. They are either sensors giving you information about the world, or actuators that you send commands to in order to cause change in the world, such as moving the robot around or moving something in the world with the robot’s arm. Writing good robot software is one of the more difficult software engineering tasks out there, and you need a lot of infrastructure to help you do that.

Why should developers get involved in the ROS community?
Well, I personally think that robotics generally is one of the most interesting things that you can do in computer science. If you are going to get into robotics, you are going to need infrastructure and you should at the very least give ROS a try to see if it will do what you need.

All the code we write is open source: You can do whatever you like with it, put it in your proprietary product, you don’t have to give any changes back, and it doesn’t affect any license you put on your own code. I always encourage people, whenever they can, to contribute back. There are often aspects of a system you are building where you are improving some of the core infrastructure that everybody else relies on. That improvement is probably not your added value, so you might as well give those improvements back to the community. You will also get feedback and further improvements from the community for free.

Once you contribute to the community, who manages all of that?
One of the interesting aspects of the ROS community is that from the beginning we structured it with what we call a federated development model. By that I mean that here at OSRF we maintain control of the core system, but not all of the ROS software is stored in one place. It is stored in thousands of different repositories around the world. We did this on purpose, and what we tell people is: If you want to contribute to ROS, the way you contribute is to create a repository at a place like GitHub or Bitbucket and put your code there. If you tell us where it is, then we will add it to our index and we’ll make sure other people can find it. It becomes part of the ROS ecosystem, but it is up to you to manage it, maintain it and release it when you want, with whatever license you want.

Of course we want to make sure that the core system is working properly, and so the core tools are maintained by OSRF employees and a handful of people from other organizations, and we collectively are doing quality assurance and making sure we have test suites in place for the stuff that we maintain.

Why is ROS well suited for the new industry of drones?
It is fascinating for us to see this because this is an industry that didn’t exist at the time we were developing ROS, so we certainly didn’t design for this use case. I think it is a good fit for a couple of reasons: One is that right now the drones you can buy have basically a GPS autopilot on them, a little embedded microcontroller that can autonomously take off, hover, fly to a GPS waypoint, and land.

What is hard with the current drones is extending that capability. If you want to plug in a camera or a laser ranging device or something else, and add a new capability like flying around and using the camera to avoid obstacles along the way, that is not something built into today’s drones, and it is not something that is easy to add onto that special-purpose GPS autopilot.

What people are doing now is adding a second small computer, called a companion computer or sometimes a copilot, that provides a full suite of peripherals to plug into. You can plug in all your USB, Ethernet and other sensors, and now you essentially have a classic robot problem. You have a reasonably powerful computing environment, you are running Linux, you can put ROS on there, and you are building up a representation of the world. You are deciding what path to take and so on, and the output of that is commands sent down to that GPS autopilot, which is a perfectly capable system that you can make smarter by adding another computer and better software.
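As a rough illustration of that companion-computer pattern, the sketch below assumes the MAVROS package, a ROS bridge to the MAVLink protocol that most GPS autopilots speak. It shows only the ROS side of streaming position setpoints down to the autopilot; the topic name is the commonly documented one, but details such as flight mode and arming are omitted and will vary by setup.

```python
#!/usr/bin/env python
# Rough sketch: a companion computer publishing position setpoints to an
# autopilot through MAVROS (a ROS/MAVLink bridge). The topic name is the
# commonly documented one and may differ depending on the setup; mode
# switching and arming are intentionally left out.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('companion_computer')
setpoint_pub = rospy.Publisher('/mavros/setpoint_position/local',
                               PoseStamped, queue_size=10)

target = PoseStamped()
target.pose.position.x = 0.0
target.pose.position.y = 0.0
target.pose.position.z = 2.0   # hover two meters above the takeoff point

rate = rospy.Rate(20)          # autopilots expect a steady stream of setpoints
while not rospy.is_shutdown():
    target.header.stamp = rospy.Time.now()
    setpoint_pub.publish(target)
    rate.sleep()
```

In a real system, the position fed into this loop would come from the perception and planning nodes Gerkey describes, running on the same companion computer.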

The second reason is that drones by nature are a difficult thing to test. Every time you make a change, and it is a bad change, you are probably going to break the drone if you are testing on the physical drone. So what you need is a really good software-based simulation of the drone in order to test changes without breaking all your equipment, and also to test every change in a variety of situations across a variety of vehicles. In addition to running ROS onboard, there is increasing interest in using Gazebo, which is an open-source robot simulation project we maintain. People in the drone community are now starting to use it as their simulation test bed together with ROS-based software on the vehicle.

What does the future of ROS look like? What can we expect in 2016?
Based on what I have seen this year, I expect to see much more use of ROS in two areas that are frankly the hottest in robotics around the world, and that is drones and cars. Self-driving cars and driving assistance systems are a really hot topic of research right now, and we have seen a lot of interest from car companies, equipment suppliers, and startups who want to use ROS as part of the software infrastructure for their cars, so I think we will see a lot more of that.

From the development side at OSRF, we are focused on ROS 2.0. Now that we are about eight years in, we have been working in earnest for the past year on a rewrite of the core communication system. The way we are doing that is by building on top of an industry standard for communication middleware called DDS, because it brings us a whole bunch of new capabilities that we didn’t have with our old communication system. It also makes it easier for both government agencies and companies to prove that their software is working in a reliable fashion, because the DDS communications middleware is already used in mission-critical applications by the U.S. Navy, NASA, and a number of other companies and agencies.
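For a sense of what that rewrite means for developers, here is a minimal sketch of a ROS 2 node using the rclpy client library. The ROS 2 API was still evolving at the time of this interview, so treat the details as illustrative rather than definitive; the node and topic names are placeholders.

```python
# Minimal sketch of a ROS 2 node using the rclpy client library.
# The ROS 2 API was still in flux at the time, so details may differ;
# names here are placeholders.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)   # publish once per second

    def tick(self):
        msg = String()
        msg.data = 'hello from ROS 2'
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)          # DDS handles discovery and transport underneath
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

The application code looks much like ROS 1; the difference Gerkey highlights is underneath, where DDS replaces the old custom transport.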

So we expect to see ROS 2.0 next year, I would say probably in the summer to the fall when there should be enough of the system in place that people can start to experiment with it, and maybe switch over to a mix of ROS 1 and ROS 2.