Cloud Nebula: A NASA Story
In a world where the cloud footprint has been progressively growing, many experts predicted big players like NASA would eventually begin influencing the cloud world.
And true to their word, NASA began a journey that ultimately led to an Infrastructure-as-a-Service platform, ushering in a new era of cloud computing by productizing and curating OpenStack. With more than 60% of enterprises already leveraging the cloud and 93% experimenting with Infrastructure as a Service, Nebula was expected to take off like one of NASA's rockets, and perhaps even surpass existing cloud infrastructures.
But unfortunately, that was not to be.
On April 1, 2015, Nebula ceased operations, ending what many thought had the potential to take off and revolutionize the cloud world. According to Nebula, employees and shareholders worked very hard to resuscitate the company, but could not evade the inevitable.
What Cloud Nebula Offered
So the three big questions are: when did the rain start falling on Nebula? What led to its rather unfortunate downfall? And what does it actually mean for customers? Nebula's genesis was at the NASA Ames Research Center, as a strategy to reduce the organization's energy consumption and improve resource utilization.
But why would NASA even need the cloud in the first place? Didn't they already have extensive hardware and software resources supporting their operations? Technically, yes: NASA already had acres of underutilized data centers.
However, since those data centers obviously didn't run on air, NASA had to invest heavily in energy, environmental controls and labor. This led to the creation of OpenStack, which ultimately changed how NASA managed its computing resources by allowing its pool of scientists to claim only the resources they needed at a particular time, and to release them to other users when they were done. As far as energy utilization is concerned, Nebula, which started out as NASA's private cloud, was 50% more energy efficient than traditional data centers.
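That allocate-and-release pattern is the core of what OpenStack still exposes today. As a rough sketch of how a researcher might claim and return compute resources, assuming an already-configured OpenStack environment with credentials loaded, and using hypothetical resource names (`m1.large`, `ubuntu-22.04`, `research-net`, `climate-model-worker`):

```shell
# Illustrative only: the resource names below are hypothetical,
# and these commands require a configured OpenStack cloud.

# Claim compute resources when a job needs them...
openstack server create \
    --flavor m1.large \
    --image ubuntu-22.04 \
    --network research-net \
    climate-model-worker

# ...then release them back to the shared pool when the job is done.
openstack server delete climate-model-worker
```

The point of the model is that capacity is pooled: hardware sits idle only briefly between one user's `delete` and another's `create`, instead of being permanently dedicated to a single team.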
This was achieved by holding up to 15 petabytes of data in a single shipping container. Over time, this architecture proved conveniently flexible, since the containers were mobile and could be expanded or upgraded depending on NASA's computing needs.
In addition to improving computing, Nebula also favorably changed NASA's working environment. With an architecture designed for interoperability with a wide range of commercial cloud services like AWS, NASA's cloud data storage made it possible for researchers to shift code and data sets and subsequently use them on other cloud platforms.
It also saved NASA hundreds of working hours, thanks to its extensive pool of high-performance IT resources. This allowed teams to concentrate on mission-critical operations, as opposed to spending hours, days and months constantly setting up new IT infrastructure to keep up with the organization's growing computing needs.
Why It Was Nicknamed The “Super Cloud”
Nebula's impressive features won it a number of nicknames, the most popular being "Super Cloud". It could accommodate individual file systems of 100 terabytes, hold files as large as 8 terabytes, and manage 10,000 to 100,000 times as much data as the biggest commercial cloud services.
Compare that to AWS, which was and still is considered one of the most powerful cloud platforms, whose EC2 file and file system sizes at the time could only extend to 1 terabyte. Amazingly, it doesn't end there. Thanks to a converged 10Gig-E switching fabric, Nebula's networking was ten times faster than other cloud platforms, which topped out at roughly 100 megabytes per second on 1Gig-E.
The combination of all these features, plus hardware RAID configurations, made Nebula arguably the most powerful dedicated hardware of any cloud service at the time.
NASA had undoubtedly lived up to the expectations of many cloud experts and CIOs. When asked to compare the scale of NASA's projects to other large private enterprises, Chris C. Kemp, the founder and Chief Strategy Officer of Nebula, said that NASA was able to make significant strides because it dedicated a substantial portion of its annual budget to IT.
Back when he was a CIO at the Ames Research Center, NASA was injecting about $2 billion (roughly 10% of its overall budget) into the project.
When Things Began Going South
Although the cloud project had been initiated by NASA in 2008, Nebula the startup was co-founded in 2011, and funded not only by Chris C. Kemp but also by a wide range of Silicon Valley venture capitalists, who collectively injected $38.5 million into the project.
On paper, it seemed like a unicorn, something that would grow into a multi-billion-dollar business. The media and investors hyped it up as a revolutionary cloud system that could transform existing server rooms into mega data centers, which would be very effective at managing web applications. Prospective users and investors were excited about the whole project.
Competitors, particularly established cloud service providers, were running scared of the "new, revolutionary" cloud system. From 2011 to 2013, everything seemed to be going right for Nebula. Chris C. Kemp and co-founder Devin Carlen were backed by NASA, and were able to bring over a number of engineers to develop the commercial project.
The executive team was reinforced with other experts, including former Dell sales executive Dave Withers and Jon Mittelhauser, a co-creator of the first widely used web browser. Meanwhile, Kemp, who had already become a prominent public speaker thanks to his NASA resume, went around marketing Nebula at conferences.
Finally, in the spring of 2013, Nebula was officially launched as an IaaS to customers. As expected, it created a buzz during the first couple of weeks. Customers rushed to subscribe and subsequently reported positive experiences. Unfortunately, the hype was short lived. Sales were not picking up as earlier projected.
Because prospective customers could not quite grasp what exactly Nebula was, they stuck with other cloud service providers.
In a bid to resuscitate an almost-lifeless project, the board even replaced Kemp with Gordon Stitt, moving the former to the Chief Strategy Officer role. Ultimately, after exploring alternatives and exhausting all potential options, Nebula decided to shut down its operations.
According to the company, Nebula’s private cloud deployed on customer sites would continue functioning without support.
Conclusion
So far, it's not clear whether Nebula will be revived in the future, possibly after rebranding, as a last resort to save what would otherwise have been a revolutionary project. What is clear is the lesson: despite having a strong team, investor backing, press interest, next-gen technology and slick marketing, you still need to comprehensively assess the market and deliver a product that consumers understand and appreciate; complexity is but a recipe for disaster. What do you think Nebula could have done differently?
Share your opinions and thoughts with us in the comments section below.