A lot of people have been visiting our table in the OSCON Hack Zone — mostly because of the presence of our Little Black Boxes.
The common question we’ve heard: “where did you guys *get* those things?”
INORITE? They are *totally* cute. We bought the parts and assembled them ourselves. They are now Standard Issue to all new Eucalyptus engineers; a short stack of three gives any engineer enough firepower to do serious development and testing on the whole Eucalyptus stack.
Here’s the parts list from Amazon.com, courtesy of the talented and ruggedly handsome @zacharyjhill:
The main housing unit is an Intel NUC, about 4″ by 4″ by 2″. The SSD is available in different sizes; ours is 128GB. With some of the boxes, we only use 8GB of RAM and with others we use 16GB. We also like to have wireless, though it’s not required – and don’t forget the cheapo power cable.
Building a fully functional 3-node Eucalyptus developer cloud: $1500.
Having an entire AWS-compatible cloud the approximate size of a coffee cup: priceless.
Awesome USB pens with complete virtual Eucalyptus cloud not included — those you’ll have to get from us at OSCON. Come see us in the Hack Zone.
Simon Wardley is never shy to share a provocative opinion.
A summary of his latest missive: OpenStack is already doomed because of its inability, or unwillingness, to produce AWS clones. There’s nothing new in Simon’s position here, but it’s his bluntest statement yet on OpenStack’s prospects.
I’m not going to presume to agree or disagree with Simon’s prediction — but in his blog post and the ensuing conversation, I saw a few opportunities to clarify how I think about Eucalyptus and AWS fidelity.
* * * * *
First, on proving AWS fidelity.
Obviously, at Eucalyptus we think deeply about the AWS fidelity problem, and how to approach it. Simon suggests one possible model:
So could CloudStack, Eucalyptus, Open Nebula and some of the OpenStack party create a rich set of AWS compatible environments — of course. But the problem becomes you have to define one thing as the ‘reference’ model. The only way around this that I know is for the groups to create a massive set of test scripts and provide some sort of AWS compatibility service and define that as the reference model and each show compatibility to it and it to AWS. It’s possible, I’ve hinted enough times that people could try that route but there’s no takers so far.
I can’t speak for the other projects, but we find that the best tests of AWS compatibility can be found in the AWS ecosystem itself — which is one of the key advantages of working in such an ecosystem. Chasing full API coverage is an exhausting process, and can be discouraging, but we’ve had good success by first ensuring compatibility with the most popular open source tools in the AWS world. By moving progressively through these tools, we cover ever-expanding sections of the API, leaving the dustiest corners for last (and perhaps never implementing them at all; after all, an API is ultimately only as useful as the tools that exercise it).
The Netflix OSS toolchain is a great example of this. The team at Netflix has taken quite a bit of heat for relying so heavily on the AWS family of services, but their decision to open source all of their tools has been a boon to us. They are smart users who exercise AWS at a scale that other users can scarcely imagine, so it’s a safe bet that they’re exercising many of the most interesting parts of the AWS API. We’ve learned much, and proved much, by following their trail.
Of course, we also have our own automated test suite for AWS/Eucalyptus fidelity; we call it Eutester. It’s designed, at least in part, to run identical test cases against both Eucalyptus and AWS, and anyone who wants to test the AWS fidelity of their own IaaS can pick up that code and run with it. That codebase will continue to grow as Eucalyptus grows. Patches welcome, as they say.
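The core idea — one test, two clouds — is easy to sketch in shell. Everything below is illustrative: the Eucalyptus endpoint URL is a made-up example, the "test" is deliberately trivial, and real Eutester cases are Python, not shell.

```shell
# Illustrative sketch only; not Eutester's actual interface.
# The pattern: run the identical test against two endpoints,
# swapping nothing but the environment.
smoke_test() {
  # A real test would run euca-describe-availability-zones and
  # friends; here we only report which endpoint we'd exercise.
  echo "would test ${EC2_URL}"
}

for EC2_URL in \
    "https://ec2.amazonaws.com/" \
    "http://cloud.example.com:8773/services/Eucalyptus"
do
  export EC2_URL
  smoke_test
done
```

Any divergence between the two runs is, by definition, a fidelity bug in one direction or the other.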
* * * * *
Second, on architecting for AWS fidelity.
It seems to be assumed — among some, anyway — that AWS API compatibility is something that can simply be dropped into OpenStack at any time. The trouble is, no one will know how true that is, or isn’t, until they actually do the work. Geoff Arnold’s comments hit the nail on the head:
Load balancing is a great example. For good or ill, Amazon’s Elastic Load Balancer is a cornerstone of web-tier cloud applications architecture. If the OpenStack community was serious about AWS compatibility, the LBaaS team would have established ELB compatibility as a fundamental requirement. It didn’t. On the contrary, much of the preliminary documentation focused on all of the cool features that LBaaS would support that were not available with ELB. By Grizzly, all that we had to show for our efforts was a proof of concept based on a single instance of haproxy. Elastic provisioning was officially out of scope for the core LBaaS effort.
Software design is about choices, and with every choice you make, there’s a chance that you’ve made some other choice impractical, or even impossible. We know that there are, at the very least, syntactic differences between OpenStack and AWS; it is also quite likely that there are deeper semantic mismatches. It may be that the OpenStack community will be able to bridge both syntactic and semantic mismatches between OpenStack and AWS with ease — but given our experiences, it doesn’t seem likely. The devil is in the details, and one must care deeply about the details in order to conquer that devil.
* * * * *
Which brings me to the last point, which is caring about AWS fidelity.
As Thierry Carrez says,
EC2 API support has always been there in OpenStack. It just never found (yet) a set of contributors that cared enough to make it really shine. Canonical promised it (with AWSOME) then let it go. More recently Cloudscaling promised it, but I’ve seen nothing so far. The next in line might just deliver.
Maybe. There is great power in being the one who “cares enough”. And Thierry’s response raises the question: why doesn’t the OpenStack community care more about supporting the various AWS APIs? (EC2, after all, is just the tip of the iceberg.)
That’s a question for the OpenStack community to answer. In the meantime, I can assure you that, at Eucalyptus, we care deeply about AWS compatibility — as do our users. We work towards that goal tirelessly, every day, and I think it’s safe to say it’s because of that passion that we have taken the lead. And it’s a lead that we have every intention of extending.
Anyway. See some of you at Netflix tonite.
Today we officially launch the next generation of FastStart, the quick deployment solution for Eucalyptus. We think it’s a pretty dramatic improvement to our previous version, and it’s certainly the easiest way to stand up your own AWS-compatible private cloud.
And while I have you, I’d like to shout out to the guy who made most of this happen: a guy named Bill Teachenor. When you use FastStart today and discover that it’s totally awesome, come by #eucalyptus and say thanks to bteachenor for all his hard work on the Silvereye project, the codebase upon which the new FastStart is based. There were plenty of other folks who helped — but Bill was the one who took the ball.
Open source is powerful because you don’t need anyone’s permission to make it better. You just need time, belief, determination, and a bit of skill in the right places. Bill looked at FastStart with the eyes of an experienced sysadmin, picked out a whole bunch of places where we could do stuff better, and led the way. When you write good code that does useful stuff, people will follow. Rough consensus and working code: it’s what drives the open source world.
So here’s to Bill, and all the folks who say “I can make this better” and then commit code at 2am to prove it.
(I’m sure you all know that step one is “cut a hole in the box”.)
We’ve been continually working to improve the install process of Eucalyptus over the past few months. In particular, we’ve been working on a project that we call Silvereye. Our most recent goal: make it trivial to install a fully-running Eucalyptus cloud on a single machine.
A cloud on one machine? Why bother? Well, lots of reasons, actually. The biggest: the developer workstation. If you’re hacking on Eucalyptus, it’s pretty awesome to have Eucalyptus running on a single system that you can tear down and rebuild in 15 minutes.
Anyway: mission accomplished. Go to our Silvereye downloads directory and get the latest build (right now it’s silvereye_centos6_20121004.iso). Burn it to DVD, boot your target system, and choose the “Cloud-in-a-box” option from the CentOS-based installer. Answer some simple questions. Boom, in 15 minutes you’ve got a cloud-in-a-box!
(Note #1: a helpful README can be found in the GitHub repo for Silvereye: github.com/eucalyptus/silvereye.)
(Note #2: in the cloud-in-a-box config, when you log in as root for the post-install config, it’ll say “hey, do you want to install the frontend now?” Answer yes. It automatically installs the node controller for you.)
(Note #3: Silvereye is not supported. At all. If you use it, there are ABSOLUTELY NO GUARANTEES that it won’t burn down your house, steal your pickup truck, or throw your mother into a wood-chipper.)
Silvereye is mostly the work of sysadmin-par-excellence Bill Teachenor, based on the original FastStart installer written by David Kavanagh — but various folks are now working on it; Andy Grimm, Graziano Obertelli, and Andrew Hamilton have all been pushing the cloud-in-a-box on various distros, and Scott Moser of Canonical did some great proof-of-concept work on the UEC code. So thanks to all of them, and everyone else who’s played with it.
Give it a spin; it really is dead-easy. We still need to round off a few corners before we can call it the official installer of record, but we’re quite close now.
Want that AWS-compatible cloud on your laptop? Of course you do. Now go get it.
Here’s the thing about running your own cloud infrastructure: once you make the decision to rely on it, then it had better work. The whole thing. Every part of it. Under heavy load. All the time.
Obvious, right? But it bears repeating. When you decide to make the move to doing things The Cloud Way, you are placing a gigantic bet on your infrastructure layer — and that bet rides not only on the Cloud As A Whole, but on every individual component of that cloud. In the open source world, those are frequently components that you didn’t write and do not control. I can assure you that customers don’t care in the least whose code is at fault.
At Eucalyptus, we have smart and demanding customers, with extremely high expectations. They are not content with assurances that things will be production-ready at some magical release point in the future. They don’t care whether the bugs are in the cloud controller code, or the node controller code, or in libvirt, or in the kernel. They are using Eucalyptus at extreme scale, right now, to solve extreme business problems, right now. Which means that when their cloud breaks, they expect fixes right now — and if that means libvirt patches or kernel patches, that’s what it means. That’s why they give us all that nice money. That’s why customers pay us for free software.
Our customers try to squeeze every ounce of performance out of their machines; that’s part of the point of having a cloud, after all. And when the virtualization technologies we depend upon experience heavy load over a long period of time, we see some crazy things. Like segfaults in libvirtd, for instance. Or libvirt handlers that suddenly and inexplicably lose their minds. Or other weird occurrences that might lead one to believe that libvirt isn’t quite as thread-safe as advertised. These failures may only occur at times of very high load, and they may not happen often — but they do happen. And when they happen, we have to handle them. The 3.1.2 release is the result of many hours of hard work by our engineers to find and fix these issues.
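Finding the bugs is the hard part; the handling itself follows a familiar supervision pattern. A generic sketch (emphatically not Eucalyptus’s actual recovery code — the probe and restart commands are illustrative placeholders) looks something like this:

```shell
# Generic daemon-supervision sketch. The probe/restart commands
# are illustrative placeholders, not Eucalyptus's real logic.
check_and_restart() {
  probe=$1     # command that exits 0 while the daemon is healthy
  restart=$2   # command that restarts the daemon
  if ! $probe >/dev/null 2>&1; then
    $restart
  fi
}

# In a real watchdog loop this might be invoked as, say:
#   check_and_restart "pidof libvirtd" "service libvirtd restart"
```

A restart-on-failure loop is a band-aid, of course; the real work in 3.1.2 was finding and fixing the underlying failures, in our code and below it.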
It’s a challenge and a privilege to serve customers like this. At times it can put incredible stress on the entire organization — and it’s at precisely these times when we are at our very best. Watching great engineers solve critical problems under pressure is a lot like watching great athletes at the end of a big game — and when they win, it’s just as exhilarating. These engineers are at the heart of what we do. Compared to them, I’m just selling tickets and fetching Gatorade.
It’s not that hard to put together a bunch of components and call it a cloud. But making a cloud bulletproof? That’s hard. And that, friends, is where we are the best in the world.
…is that any time a user runs into problems figuring out how to do something with Eucalyptus, it’s quite likely that the corresponding AWS procedure, as documented by the AWS community, will “just work”.
Example: growing an EBS volume. The commands listed here:
…basically work as-is, once you swap the ec2-tools commands for their euca2ools equivalents. It’s nice to have that kind of knowledge base to fall back on, even as a starting point that may need subtle changes for Euca-specific cases.
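As a toy illustration of that swap (and nothing more — this is not an official mapping, and a few commands differ by more than the name), the translation is usually just a prefix change:

```shell
# Toy helper: most ec2-tools commands map to their euca2ools
# equivalents by swapping the prefix. Not exhaustive.
to_euca() {
  printf '%s\n' "$1" | sed 's/^ec2-/euca-/'
}

to_euca ec2-create-snapshot   # prints: euca-create-snapshot
to_euca ec2-attach-volume     # prints: euca-attach-volume
```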
One of the projects I’m enjoying working on right now is the Eucalyptus Recipes project, which you can find on GitHub. I actually hacked together some code, and even checked it in! Needless to say, patches welcome. And if “patches” means “complete replacement with better code,” that’s fine also.
The goal is to build a collection of recipes (small right now, but growing) that any Eucalyptus user can inject into the boot process of an instance at start time, using cloud-init or a similar mechanism. Simple predefined Euca image + Euca recipe of your choice = fully configured software appliance. Because all Eucalyptus users have access to a standardized set of pre-built images, we can be reasonably confident that any recipe built atop a particular image will work properly anywhere that image runs.
This is in contrast to the image-based approach to which AWS users have become accustomed. There are thousands of pre-built AMIs out there from which AWS users can pick and choose. That’s good, because there are images for almost every imaginable need — but it’s also problematic in a lot of ways. These AMIs are basically opaque. You don’t know what’s in them, you don’t know who built them, you don’t know how they were built, and until you actually run one, you don’t know what they actually do. The new, improved AWS image catalogue will help somewhat, but the problem is inherent to the image model.
At Eucalyptus, we’re working on an images project as well, but I believe that the recipes approach holds more promise in the near term. Here’s why:
1. Storage. Eucalyptus provides a mechanism for users to fetch a set of predefined Eucalyptus machine images (EMIs). One day, we may provide a huge catalog of pre-built EMIs, but in the short term, we’re not really set up to host such a thing. With the recipe approach, we can concentrate on providing a small set of minimal EMIs for the major distros, and we can test them thoroughly so that they make a strong base for building from.
2. Ease of customization. In a pre-baked image, the configuration is fixed. If you want to change how the image works, it means hacking the image in place and rebundling it. That’s a pain, especially for, say, changing the MySQL root password for your spiffy WordPress install. Following the recipe approach, you just fork the recipe, replace passwords and other sensitive options in the forked recipe itself, and then build with the forked recipe.
3. Education! Read the recipe, and you can see how the application is actually built and configured. This is important to me personally; I distrust black boxes, and when I was a heavy AWS user, it was one of the things that made me nervous. Four Kitchens made a great Drupal+Varnish AMI available, and it “just worked”, which was pretty sweet and saved me a bunch of time — but I lived in a low-grade fear that if something went wrong, I wouldn’t understand how it was configured. My hope is that we end up with some very well-documented and interesting recipes that also teach people a little bit about how things work along the way.
4. Community development. If an AMI or an EMI is broken, patching it basically means creating an entirely new image that has no evident relationship to the old one. There’s really no clear concept of “upstream” with an image, and no simple way to collaboratively improve upon it. Defining an appliance as a script in GitHub, on the other hand, makes collaborative development and improvement of that appliance comparatively straightforward; it works just like any other open source project.
5. Integration with complementary tools. I wrote my first recipe in bash, because when it comes to coding I’m a bit simple, really, and nothing to be done. And it’s not as though this recipe notion is a new one; Puppet and Chef both have emerging forges with recipe collections of their own, and two of the first recipes we wrote were for Chef and Puppet bootstrappers. I’m not quite sure how it will work, but it’s pretty clear that many of the recipes will be “hey, make sure Puppet is running, and then go get that Puppet recipe from over there and run it.” One of the recipes I checked in recently sets up nginx based on the Puppet forge recipe.
6. Amazon compatibility. There’s no reason in the world that these recipes shouldn’t work on AWS as well. It’s my hope to add “tested with these AMI IDs” as part of every recipe’s documentation.
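Point 2 above is worth making concrete. In a hypothetical recipe fragment (the variable name and file path here are invented for illustration), the settings a forker is expected to change sit right at the top, so customization means editing one line instead of rebundling an image:

```shell
# Hypothetical recipe fragment; names and paths are illustrative.
# Forkers edit the settings below, then build from the forked recipe.
MYSQL_ROOT_PASSWORD="changeme"

# Later recipe steps read this client config to talk to MySQL.
cat > /tmp/wordpress-db.cnf <<EOF
[client]
user=root
password=${MYSQL_ROOT_PASSWORD}
EOF
```

A real recipe would of course write somewhere more permanent than /tmp; the point is the fork-and-edit workflow, not the specific file.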
To be clear, there are also a couple of downsides to the recipe approach:
1. Time to instantiation. The image-versus-recipe dispute is age-old, and one reason people have traditionally chosen to run from images is because they are “ready” so much more quickly. Going from image to fully functioning instance in Eucalyptus takes seconds; going from image, to recipe, to fully functioning instance can take minutes. When that difference matters, images are still the way to go — although I still think the right approach is to use a recipe to create an instance, and then to snapshot that instance and store it as the deployable image.
2. Proprietary applications. There will doubtless be organizations that will want to deliver proprietary software appliances to Eucalyptus users. This mechanism may not be suitable for those providers, since it’s fairly incompatible with secret sauces.
As it turns out, recipe building is also a perfect use case for our Eucalyptus Community Cloud. The ECC is intended to give potential Eucalyptus users a sense of how Eucalyptus works — but because the ECC is small and resource-constrained, we kill instances every six hours or so. When writing recipes, though, iteration is the name of the game, so the short instance lifespan is no obstacle at all. I wrote a Drupal 6 recipe over a weekend using the ECC.
Want to check out a recipe on the ECC? Simple stuff:
* Install euca2ools on your local system. It’s yum/apt-get installable from most repos at this point.
* Get your account on the ECC.
* Download and source your credentials for the ECC. Be sure to set up your ssh keys as well.
* Get the recipes repo:
git clone https://github.com/eucalyptus/recipes.git
* Get a list of images available by running euca-describe-images. Pick the base image you want to start from. The ID of the vanilla CentOS 6.2 image is emi-D482103E.
* Start your instance with the recipe, for instance:
euca-run-instances -k yourkey emi-D482103E -t m1.large -f centos6_nginx.sh
* ssh into your instance and watch the show. (For me, this was mostly tailing the yum log.)
So, it’s the beginning of a thing. Like all beginnings of all things, its future is uncertain — but it feels useful to me, and I hope that we can build some value with it in the coming weeks and months.
Oh, also: see you at OSCON.
Eucalyptus 3.1 is open for business.
No more artificial separation between Enterprise and Community. No more frenzied checkins to the “enterprise edition” while the separate-but-equal “community version” atrophies. No more working on new features behind closed doors for months on end. No more wondering about what’s on the roadmap. No more going weeks without any publicly visible check-ins. No more.
Today is the day that we release Eucalyptus 3.1, and reassert our position as the world’s leading open source cloud software company. With the emphasis on open source. We’ve been working to get to this day for months, and now, the day has come.
For those who want to get started with the new bits immediately, the FastStart installer can be found here. With two virt-capable laptops installed with CentOS 6.2 minimal, you can have a private cloud running in 15 minutes if you follow the directions — and a few hours if you don’t.
Package repositories for the various distributions can be found here.
A list of all currently known bugs in 3.1 can be found here.
The list of features we’re currently scoping for 3.2 can be found here.
We have lots of other projects moving forward on GitHub as well. Projects like Eutester for automated testing of Eucalyptus (and Amazon) instances, Recipes for automated deployments of Eucalyptus (and Amazon) instances, our nextgen installer Silvereye, and many others.
All of these projects are open to community participation and transparently managed. We hold weekly meetings on IRC. You can find the weekly meeting schedule here. Minutes for all meetings for the past six months can be found here.
We’re also hiring.
“Build together. Run together. Manage together.” That’s been the mantra for this release, and it speaks directly to the culture of our company. If I learned anything at Red Hat, it’s that company culture matters. It literally makes or breaks the company. Especially in open source: either you’re an open source company, or you’re not. We are deeply committed to the open source model, because we believe that it creates the best software, and we’re going to prove it.
The most exciting thing about today’s release, to me, is that we’re only getting started. It’s been a long climb to get to this plateau. We’ve still got a lot of mountain yet to climb, though, and we’re looking forward to the challenge — but that can wait for another day. Maybe two. Today is about appreciating where we’ve been, and enjoying the view.
Well done, Eucalyptians. Well done.
Note: beta still means beta. We’re aiming for release candidates for Eucalyptus 3.1 within the next month or so. Still, these packages are pretty stable for us so far, pass the majority of our ridiculous battery of QA tests, and are altogether suitable for a quick install to see what the fuss is all about. And it’s a whole lot simpler than building from source.
It’s taken a while, but the move is complete. The source code for Eucalyptus 3.1 Beta is open and publicly available on GitHub. It’s actually been there for a while now, but we’ve finally done enough housekeeping to be ready to open the doors.
Build instructions can be found in the INSTALL file, but they are still in flux; comments and patches are welcome. Don’t hesitate to join us on #eucalyptus on freenode or on our community mailing list if you have questions.
Packages for the beta will be available for various distros in the coming days. Special props go to Debian Partner company Credativ for their impressive work on the Google Web Toolkit libraries.
We’re also working on our new bug tracker; we’re in private beta to work through various auth and workflow kinks. If you’re interested, ask for access on IRC or the mailing list, and we will set you up. After this beta period is concluded, we will open the new bugtracker to anyone and everyone — but we’re happy to give early access to anyone who asks.
This is another critical step in our evolution as an open source company. But we’re not done yet. Stay tuned.