A shell and a book: the two tools any developer needs to enhance their self-worth. Open a fresh shell, crack the spine on that brand-new O’Reilly or No Starch book, pop open vi or Emacs, and go to town. Of course, the equation has also grown to include the many websites that can teach you how to redevelop your development skills: codeschool.com, codecademy.com, or even Zed Shaw’s “Learn Code the Hard Way.”
But there’s one big question that plagues every developer about to sit by the fire with a good programming book: Which book should I read?
Instead of picking by author, title or publisher, we’re simply going to lay out five areas where we feel your future could lie. These are the technologies that will not only increase your skill as a developer, but will also expand your horizons and make you a much more expensive geek. Pick any of these five technologies to brush up on, and you’ll be well on your way to a $200k salary.
#1: Spark
You thought we were going to say “Hadoop” first, right? Well, we are! But that boat has sailed. There’s still tremendous demand for Hadoop experts in the market, with some who’ve done only one short project being snatched up as if they were internationally known experts on the subject.
But you already knew that, and you’re probably a Java developer working on this type of thing already, right? How can you increase your value if you’re already a Hadoop developer? Learn Spark. Apache Spark, which originated at the University of California, Berkeley, is the next generation of Map/Reduce.
While Hadoop 2.0 reworked the core elements of the Hadoop framework to be more flexible, the basic Map/Reduce portions of Hadoop remained bound to disk. Spark forgoes the disk and keeps the data being processed in memory. The result is queries that can run up to 100x faster than in regular Hadoop.
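To make the model concrete, here is a toy, in-memory word count in plain Python. This is only the conceptual map-then-reduce shape that Spark keeps in memory, not Spark’s actual API (in PySpark you would use RDD operations such as `map` and `reduceByKey` instead); the sample “partitions” are invented for illustration.

```python
from collections import Counter
from functools import reduce

# Toy in-memory map/reduce word count -- the conceptual model only, not
# Spark's API. Each "partition" is mapped to partial counts, then the
# partials are reduced into one result, all without touching disk.
partitions = [
    "spark keeps data in memory",
    "hadoop map reduce spills to disk",
    "spark can be much faster",
]

mapped = [Counter(line.split()) for line in partitions]   # map step
totals = reduce(lambda a, b: a + b, mapped, Counter())    # reduce step

print(totals["spark"])  # -> 2
```

In real Spark the partitions live across the cluster’s RAM and the reduce happens in parallel, but the map-then-merge shape is the same.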
We know what you’re thinking now, however: “But doesn’t everyone already use HDFS and have big Hadoop clusters already standing? Why would they rip and replace?” Because Spark is not a rip-and-replace; it’s just a replace. It still runs on HDFS, but the nodes take on some different capabilities. In other words: You run Spark in your existing Hadoop cluster, and you don’t have to move any data to do so.
If you want to be cutting edge, do yourself a favor and install Spark in a test cluster and play around with it. A few hours of experience can go a long way in a space where only a small number of people have yet deployed the thing. If you’re feeling really special, you could even contribute to the project and add that to your resume!
#2: OpenStack
There are two ways you could go with OpenStack: Either learn how to administrate it, or learn how to develop for it. Both are quite lucrative right now. This is primarily because OpenStack is so tremendously complex, with so many moving parts, that just knowing how to stand up an OpenStack cluster is a highly desirable systems administration skill.
Of course, the main reason everyone wants an OpenStack cluster up and running is so that they can customize it. Ever done an SAP ERP customization? It’s nothing compared to the potential rabbit holes you could get into customizing OpenStack. The platform, being the basis for a cloud hosting environment, could be molded into just about anything that needs to provision and scale compute resources. That means long contracts, long projects and big teams.
Couple this with the fact that OpenStack is needed only by the largest, most complicated technology companies in the world, and you’ve got a recipe for making yourself into a middle-six-figure VP in no time flat.
#3: Luxury consulting
No, really. There’s a large business in the luxury technology consulting services world. From extremely overpriced entertainment-focused Web development and domain registration companies, to private iPhone development firms for the filthy stinking rich, there’s big business in the democratization of development.
#4: R (or any analytics)
We already talked about how important Hadoop and Spark are for the future of business computing. But none of it matters in the slightest if your firm doesn’t have the brainpower to analyze all that Big Data.
Enter R, Python or even good old Fortran. Anything you can use to programmatically extract meaning from all of these insanely large data-processing systems will also aid you in your quest to be worth more money. Big businesses are shelling out a lot of dough for analysts these days, but they must be able to work with Hadoop and other modern data storage systems like Teradata and Greenplum.
Pick up R, or sit down for a weekend with some Python and a big dataset from Data.gov. Data analysis is the big growth job of the future, and when it’s coupled with a real grounding in software development, you become an incredibly compelling hire for the likes of a Wall Street quant shop or an international bank.
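That weekend session doesn’t need heavy tooling to get started: Python’s standard library alone covers basic descriptive statistics. A minimal sketch, with placeholder numbers standing in for whatever you pull from Data.gov (they are not real figures from any dataset):

```python
import statistics

# Hypothetical sample: daily ridership counts from some public dataset.
# The numbers are invented placeholders for illustration.
ridership = [1200, 1350, 1280, 990, 1410, 1330, 1505]

mean = statistics.mean(ridership)
median = statistics.median(ridership)
spread = statistics.stdev(ridership)

print(f"mean={mean:.1f} median={median} stdev={spread:.1f}")
```

Once the dataset outgrows a single machine, the same questions get asked through Hadoop or Spark; the analytical thinking is what transfers.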
#5: Chef or Puppet
With the rise of the cloud came an ever-growing need for automated deployment of fresh systems. Chef and Puppet filled this void, but they’ve both butted up against a fundamental law of systems administration: Where a script will do, the admin does not improve.
In the Chef and Puppet worlds, this is manifesting as static scripts that get updated every day, by hand, by need, by project. The end goal in all provisioning and configuration-management systems, however, should be complete automation. That means programmatically delineating the contents of a server based on situational needs, regression test results, and the current needs of the infrastructure itself.
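As a language-neutral sketch of what “programmatically delineating the contents of a server” means (Chef’s own recipes are a Ruby DSL; this is plain Python, and every role name, package, and threshold below is hypothetical), the idea is to derive a server’s manifest from situational inputs rather than hand-editing a static script:

```python
# Hypothetical sketch: build a server's package/service manifest from
# situational inputs instead of maintaining a static, hand-edited script.
# All role names, packages, and thresholds are invented for illustration.

def build_manifest(role: str, env: str, expected_qps: int) -> dict:
    base = {"packages": ["ntp", "openssh-server"], "services": ["sshd"]}
    if role == "web":
        base["packages"].append("nginx")
        base["services"].append("nginx")
        # Scale worker count to current traffic, not a hard-coded value.
        base["nginx_workers"] = max(2, expected_qps // 500)
    if env == "prod":
        base["packages"].append("auditd")
    return base

manifest = build_manifest("web", "prod", expected_qps=2600)
print(manifest["nginx_workers"])  # -> 5
```

The payoff is that when traffic, environment, or test results change, the manifest changes with them; nobody has to remember to go edit a script.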
Problem is, with all of those scripts to maintain on a daily basis, modern admins have had no time to learn how to do this. Thus, the Ruby-compatible portions of Chef are being left to wither on the vine inside of most organizations.
Enter you, the king of DevOps with a focus on Dev. Learn about programming systems delivery through Chef and Puppet, and the world will be beating a path to your door, bearing checks, corner offices and contracts.