The National Oceanic and Atmospheric Administration (NOAA) announced that it is ramping up its computing power with two new Cray supercomputers in Virginia and Arizona, each with 12 petaflops of capacity, bringing NOAA’s total power up to 40 petaflops.

These computers will unlock new possibilities for better forecast model guidance through higher-resolution and more comprehensive Earth-system models, using larger ensembles, advanced physics, and improved data assimilation, according to NOAA.

Many Cray computers run SUSE Linux Enterprise Server, and SUSE has been working with organizations to enhance existing computing systems so they can better predict weather patterns and help combat climate change.

Jeff Reser, Head of HPC Solutions at SUSE, offered his insight on what the computing expansions will mean for tracking climate change.

SD Times: Can you tell me about the significance of NOAA’s recent supercomputer expansions?
Reser: We’ve worked with NOAA quite a bit in the past. We’ve also worked with other supercomputing centers, like the National Center for Atmospheric Research (NCAR) in Wyoming.

I know NOAA has an initiative called the Earth Prediction Innovation Center (EPIC), which is bringing in two big Cray supercomputers. Those machines run the Cray Linux Environment (CLE), which is actually a derivative of SUSE Linux Enterprise for HPC. So those two big Cray supercomputers are running us, and NOAA is using them for weather forecasting, weather modeling, climate change modeling, things like that, feeding a lot of data into those simulations.

How will the simulations produced by these supercomputers be used with regard to climate change, and how accurate is the technology now?
The accuracy really depends on how many data points can be brought in. We’re also working with a weather organization in Austria, ZAMG, and what they’re doing is very similar to what NOAA wants to do. They’re deploying sensors all over Vienna and collecting the data from all of those sensors into a repository, which they then run very quick analytics on to help with forecasting, to find out what the weather is going to be or how a storm is going to track over the next 10 minutes.

They also put that data into a library for long-term evaluation of how the climate is changing and how forecasts are affected over a long time span. So the more data they can collect from the thousands of sensors they have all over, the better. And it’s a similar story with weather forecasting in these other areas.

It’s all about the data and how much of it they can collect.
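To make the idea of pooling many sensor feeds into a quick, short-horizon forecast a bit more concrete, here is a minimal Python sketch. It is not ZAMG’s or NOAA’s actual pipeline; the Reading class, the sensor names, and the simple trend extrapolation are hypothetical stand-ins for the kind of “very quick analytics” described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical shape of a single reading from one of many city-wide sensors.
@dataclass
class Reading:
    sensor_id: str
    timestamp: datetime
    temperature_c: float

def nowcast_temperature(readings: list[Reading], horizon_minutes: int = 10) -> float:
    """Pool readings by sensor, estimate the average per-minute temperature
    trend across sensors, and extrapolate it over the forecast horizon."""
    by_sensor: dict[str, list[Reading]] = {}
    for r in readings:
        by_sensor.setdefault(r.sensor_id, []).append(r)

    trends = []
    for series in by_sensor.values():
        series.sort(key=lambda r: r.timestamp)
        minutes = (series[-1].timestamp - series[0].timestamp).total_seconds() / 60
        if minutes > 0:
            trends.append((series[-1].temperature_c - series[0].temperature_c) / minutes)

    latest = mean(s[-1].temperature_c for s in by_sensor.values())
    drift = mean(trends) if trends else 0.0
    return latest + drift * horizon_minutes

# Example: two hypothetical sensors sampled five minutes apart, both warming slightly.
t0 = datetime(2020, 3, 1, 12, 0)
sample = [
    Reading("vienna-001", t0, 14.0),
    Reading("vienna-001", t0 + timedelta(minutes=5), 14.3),
    Reading("vienna-002", t0, 13.8),
    Reading("vienna-002", t0 + timedelta(minutes=5), 14.0),
]
print(round(nowcast_temperature(sample), 2))  # temperature extrapolated 10 minutes out
```

The real systems obviously use far richer physics and many more variables; the point here is only that short-horizon “nowcasting” is driven by how many recent data points can be folded in quickly.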

Do you think there’s enough investment in dedicating supercomputers to predicting climate change trends, and do you think this is keeping up with the rate at which the problem is growing?
I think there’s more coming. With the advent of new exascale supercomputers, climate forecasting and climate modeling will get more intense, so to speak, and the simulations will be more data-intensive as well. So, yeah, I think the investments are growing around the world. Weather forecasting is still handled primarily by agencies like NOAA right now.

If we talk about some of the other uses of high-performance computing, I would say it’s starting to move into vertical enterprises, whether that’s manufacturing, automotive, or consumer goods. We’re seeing a lot more vertical enterprises accepting the need to run their data-intensive workloads in an environment that understands parallel computing and how to manage all those parallel clusters.

But getting back to weather forecasting, yeah, I think there will be more investment in supercomputers that can handle the loads, especially for short-burst forecasting. They need that data, but they also need to make decisions very quickly and get them out to the public, to let people know how storms, and especially tornadoes, are tracking. The only way to do that is with supercomputers that have the scale to get those decisions out fast.

In terms of the capabilities of supercomputers and how they’re being used to create simulations, at what stage of maturity is the technology?
From a simulation and modeling standpoint, the applications that are out there now are fairly advanced. There’s a lot more work going on, but I think they’re in good shape. It’s really collecting additional data points, in more locations and at different elevations, that will make a difference. That’s what has become a lot more important.

And as additional data points come into the simulation, the program might need to be updated as well, and how you graphically visualize what’s happening becomes important. There are some universities doing climate or ocean modeling, too. They keep track of a lot of data points in ocean environments, and I think the simulations they’re using for that are still evolving in how they capture the long-term effects of the oceans rising, for example, and how that impacts everything else.

So have supercomputers already been in use for weather tracking for a while now, or are they a fairly recent advancement?
Supercomputers have been in use for four decades, but it’s only recently that exascale supercomputers started coming out, where we’re dealing with petaflops of speed and much greater power. Now the US is coming out with some exascale computers, which are based on Cray, so in the end they will be running Cray Linux environments. SUSE is working to manage a lot of these high-end simulation workloads. So from a simulation standpoint, I think we’re in good shape with a lot of the algorithms being used today.

From an AI/ML perspective, I think some of those areas are still at an infant stage. For machine learning to work, it needs a lot of data, collected over a long period of time, to understand what kinds of patterns are in the data, how to interpret them, and what to infer from them. Once that becomes more practical, I think you’ll see much heavier usage of machine learning. It’s still in its infancy just because we need to collect so much more data to make it effective. For weather forecasting, though, they have made great strides already.

Is there anything else you feel is important to take away from this?
I’d just like to say that with our HPC platform and the tools that we provide and support, we’re looking at third-party tools as well and at what makes sense in different environments, especially HPC in the cloud. We’re starting to see a lot more of our customers doing HPC bursting.

This means they might have an on-premise HPC cluster they’re using, but when they need dedicated resources or more scalability on demand, they burst the job into the cloud, maybe even into the public cloud. So we want to make sure we provide the means to establish an HPC environment in that public cloud so they can do that bursting effectively. That’s another area where we’re seeing some uptake.
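As a rough illustration of the decision behind bursting, here is a minimal Python sketch. It assumes a Slurm-managed on-premise cluster with a cloud-backed partition; the partition name, queue-depth threshold, and job script are hypothetical, and real sites would typically rely on the scheduler’s own elastic cloud-node support rather than a wrapper script like this.

```python
import subprocess

# Hypothetical threshold and partition name for illustration only.
LOCAL_QUEUE_LIMIT = 100          # pending jobs the on-premise cluster can absorb
CLOUD_PARTITION = "cloud-burst"  # assumed name of an elastic, cloud-backed partition

def pending_jobs() -> int:
    """Count jobs currently pending on the on-premise cluster (uses Slurm's squeue)."""
    out = subprocess.run(
        ["squeue", "--states=PENDING", "--noheader"],
        capture_output=True, text=True, check=True,
    )
    return len(out.stdout.splitlines())

def submit(script: str) -> None:
    """Submit to the local default partition when there is headroom,
    otherwise burst the job to the cloud-backed partition."""
    if pending_jobs() < LOCAL_QUEUE_LIMIT:
        subprocess.run(["sbatch", script], check=True)
    else:
        subprocess.run(["sbatch", "--partition", CLOUD_PARTITION, script], check=True)

if __name__ == "__main__":
    submit("forecast_ensemble.sh")  # hypothetical job script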

And from a business standpoint, we’re looking at all of these new-wave applications being built, whether it’s AI, machine learning, or even deep learning. That’s having an effect on how we shape our HPC platform going forward to make sure it’s as effective and manageable as possible.