Greg DeKoenigsberg Speaks

The Simplest Way to Learn About Eucalyptus Code

Posted in Uncategorized by Greg DeKoenigsberg on February 26, 2014

We get lots of people who want to use Eucalyptus as a way to learn about how cloud computing works at a code level. Which is great: Freedom to Learn is one of the fundamental Free Software guarantees.

So what’s my advice to users who want to learn about Eucalyptus? It’s pretty simple.

1. Get a tiny Euca cloud running. The perfect tool for this is eucadev.  If you’ve got a laptop that supports Vagrant, you’ve got a Euca cloud. It’s a small cloud, to be sure, but it’s got all of the key features required for cloud orchestration, and it’s a tool that our own developers use. 

2. Find a problem to solve! The best way to learn about a codebase is to dig into it with a clear goal in mind. If you don’t yet have a clear goal, we’ve got a great list of open bugs that are tagged as “fruit” (of the low-hanging variety) to get you started.

3. Create your own local branch of the Eucalyptus source, and start hacking! An explanation of how to do this can be found in the README for Eucadev.

There’s really no substitute for getting your hands into code. Dig in. If you get stuck, swing by #eucalyptus-devel on freenode and ask for help. (After you’ve read through the docs and Googled a bit, of course.)
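
A concrete way to start poking, once eucadev has your little cloud up: talk to it with boto, the Python AWS library, pointed at your local endpoint. The sketch below is purely illustrative — the endpoint address and keys are placeholders for whatever the credentials (eucarc) file from your install gives you — but it makes the point nicely: these are the same calls you’d make against AWS, aimed at the cloud on your laptop.

    # Minimal sketch (boto 2.x): point the standard AWS Python library at a
    # local Eucalyptus cloud. The IP and keys below are placeholders; use the
    # values from the credentials (eucarc) file your install generates.
    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="192.168.192.101")
    conn = EC2Connection(
        aws_access_key_id="YOUR-EC2-ACCESS-KEY",
        aws_secret_access_key="YOUR-EC2-SECRET-KEY",
        is_secure=False,
        region=region,
        port=8773,
        path="/services/Eucalyptus",
    )

    # The same API you'd exercise against AWS: list images and instances.
    for image in conn.get_all_images():
        print("%s  %s" % (image.id, image.location))
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print("%s  %s" % (instance.id, instance.state))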

Being on the napkin

Posted in Uncategorized by Greg DeKoenigsberg on September 27, 2013

Alistair, I loved your napkin-based comparison of the private cloud players.  It’s hilarious, and I agree with most of it. Yes, the private cloud discussion definitely shares parallels with the platform wars of the past. It’s a very useful lens for viewing our little world… even if that lens happened to be the bottom of a wine bottle. :)

First of all, I’ve got to say that it’s great to see Eucalyptus in the center of your napkin. Look at the size of those other players: put together the combined market cap of the OpenStack companies, VMware, and Citrix… and then look at little old us. We must be doing something right!

What really interests me, though, is where you tried to fit Eucalyptus into your napkin analysis. I suspect that you weren’t quite sure where to put us, so you chose OS/2, found some surface similarities, and moved on to the other parts of the napkin.

In the immortal words of the great sage Jules Winnfield, “allow me to retort.”

“Integration happens when people rally around one thing.” Yes, this is exactly the right argument — and customers are rallying around AWS. Those guys are farther to the upper right in the latest Gartner Magic Quadrant than I’ve ever seen.  They’re doing more public cloud business than all the other players in that market combined right now. Which is why we, and our users and customers, are rallying around one thing: AWS compatibility.

“Apps matter more than legacy protocols.” This, also, is exactly right. Developers are writing apps for the cloud — and that generally means writing them for AWS first. And developers are busy, which means that despite their good intentions to make their apps portable, portability generally comes somewhere between localization and database normalization on the priority list. Which is, of course, why legacy players like VMware are fighting tooth and nail to protect their own legacy protocols. Just listen to Pat Gelsinger, CEO of VMware: “if a workload goes to Amazon, you lose, and we have lost forever.”  That’s precisely why the value we add is so useful to our users and customers: the switching costs from AWS to Eucalyptus (and back) are orders of magnitude lower than with any other option.

“IBM spent a lot of time making OS/2 work with legacy mainframe protocols and existing enterprise environments.”

Wait… did you just compare AWS to the IBM System/360?

“…and Eucalyptus is OS/2.”

<blink>

LOL.

OK, story time.

I worked at IBM way back in the day, fresh out of no college. I was the designated worldwide global support guru for the Audiovation Sound Card, for Microchannel, for OS/2. This was a combination of hardware and software that was never tested, and thus never worked. So being “worldwide global support guru” basically meant picking up the phone and saying “that combination doesn’t work, send the card back, here’s your case number, have a nice day.” And customers would always ask, irritably, “you made all these products — how is it possible that they don’t work together?” And my inability to lie about the answer to that question — “because none of these three components are important individually in the market, let alone collectively” — was probably a contributing factor to my getting fired from that job.

In my opinion, the chief failure of OS/2 was in its attempts to be too many things to too many people — being OK at everything, but good at nothing in particular.

That is precisely the opposite of the Eucalyptus strategy, which is to be insanely great at interoperability with the AWS API, the de facto standard for talking to the world’s dominant cloud platform.

I’ve said this before, and I’ll say it again: it’s about focus for us. We are focusing on a very particular pain point that developers are feeling ever more acutely: the potential for AWS lock-in. Our goal is to provide an open source alternative for those users to mitigate those potential lock-in risks. Focus, focus, focus.

We see the benefits of this focused approach every day. Our roadmap is spread out before us in great detail — and that roadmap, combined with some of the best cloud engineers on the planet, gives us a feature velocity that no one else in the private cloud world can currently match.  Which is why we’re on your napkin, despite being a fraction of the size of the napkin’s other inhabitants.

But in thinking about it, maybe your comparison to old school mainframe connectivity is right, and you’ve just got your timeframes wrong.

Maybe it’s 1965, and OpenStack is Multics, and Red Hat is GE, and Amazon is IBM, and AWS is the brand new System/360, ready to dominate the computing landscape for the next two decades.

Which would make Eucalyptus the open source little brother that the System/360 never had, that has no analogue in the history books, and that could have changed everything. Trouble is, I’m not sure how to fit that on your napkin.

Hobos and Vagrants and Quality

Posted in Uncategorized by Greg DeKoenigsberg on September 25, 2013

I don’t know how many people have actually met Vic Iglesias, our Quality Hobo.  Here’s what he looks like in his natural habitat:

"Quality!"

Quality Hobo: both smarter and better looking than you.

It’s really super important not to make Quality Hobo angry. Let me assure you from personal experience: no one wants that.

Here are some things that make Quality Hobo angry (and that you should, therefore, avoid):

  • Stealing Quality Hobo’s cigarettes, electronic or otherwise.
  • Remarking upon Quality Hobo’s resemblance to a certain celebrity.
  • Wasting Quality Hobo’s time with questions about which test cases are most recent.
  • Wasting Quality Hobo’s time by writing tests that don’t integrate into Eutester.
  • Wasting Quality Hobo’s time with questions on how to build your QA environment.
  • Wasting Quality Hobo’s time in any way at all.

Here’s what I’m saying: it was only a matter of time before our Hobo got mixed up with a Vagrant.

Vagrant is incredible, and Mitchell Hashimoto is incredible for creating it.  It’s the sort of tool you try out for a particular reason, and then once you’ve used it, you find a million other reasons to use it.  Last night, as I chased down a bug in FastStart, I got to try out Micro-QA, which is a mashup of Vagrant and Eutester and Jenkins and Ansible and all kinds of stuff, built on top of one of the standard Vagrant boxes for CentOS 6.4.

Here are the full instructions for getting a fully updated, complete testing environment for Eucalyptus installed on your laptop:

  1. Install Vagrant+Virtualbox on your laptop.
  2. Run “git clone” on the Micro-QA repo.
  3. Run “vagrant up”.
  4. Point your browser to http://localhost:8080.

Aaaaand you’re done.

Micro-QA in action. YOUR ARGUMENT IS INVALID.

Ease of automated testing is one of those force multipliers that doesn’t seem super-exciting, but really is amazingly super-exciting.  Because here’s the thing: When the cost of testing is higher than basically zero, people don’t bother with it – or, they do a really half-assed job of it and then say “oh yeah, I totally tested that.” Which is waaaaaay worse.

In Micro-QA, we have a tool that brings the cost of automated QA to near-zero.  And not just for QA folks: it’s a tool that can be used by our QA team, our engineering team, our support team, our customers, and our community, all with comparatively little knowledge required. It’s a huge win for us.

And here’s the kicker: it’s not only a QA tool; it’s also a great tool for hybrid cloud diagnostics.  Set up your Euca environment; set up your AWS environment; bake in some tests that run against each; run them on a regular basis; scream when something breaks.  It’s kinda sorta magic.

Anyway. If you’ve got a Euca install, go get Micro-QA running on your laptop, bring it up in your browser, pick some test cases to run, drop the contents of your eucarc file into your test case, and run it.  If it breaks, ping us on Freenode (#eucalyptus-qa) and let us know what broke.
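
If you’re curious what one of those “run against each” checks looks like in code, here’s a rough sketch in Python with boto. It isn’t Micro-QA or Eutester code — the endpoints and keys are placeholders for your eucarc and AWS credentials — it just illustrates the pattern: one check, two clouds, scream on failure.

    # Rough sketch of a hybrid health check: the same trivial call against
    # both clouds, with loud complaints if either one breaks. Endpoints and
    # keys are placeholders; in practice they come from your eucarc file and
    # your AWS credentials.
    import sys
    import boto
    from boto.ec2.regioninfo import RegionInfo

    CLOUDS = {
        "eucalyptus": dict(
            aws_access_key_id="EUCA-ACCESS-KEY",
            aws_secret_access_key="EUCA-SECRET-KEY",
            is_secure=False,
            region=RegionInfo(name="eucalyptus", endpoint="192.168.192.101"),
            port=8773,
            path="/services/Eucalyptus",
        ),
        "aws": dict(
            aws_access_key_id="AWS-ACCESS-KEY",
            aws_secret_access_key="AWS-SECRET-KEY",
        ),
    }

    failures = []
    for name, kwargs in CLOUDS.items():
        try:
            conn = boto.connect_ec2(**kwargs)
            zones = conn.get_all_zones()          # trivial smoke test
            print("%s: %d availability zones" % (name, len(zones)))
        except Exception as err:                  # "scream when something breaks"
            failures.append("%s: %s" % (name, err))

    if failures:
        sys.exit("FAILED -> " + "; ".join(failures))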

Vic announced Micro-QA less than six months ago.  It was cool at the time, but now it’s way past cool.  I’m really impressed by how far it’s come in such a short time.  It’s like I’m living in the future.

(For the record: Vic is really a super sweetheart of a guy, and I don’t think he actually lives under a bridge at present. I think he may be living Between Two Ferns, though.)

Our little cloud boxes

Posted in Uncategorized by Greg DeKoenigsberg on July 24, 2013

A lot of people have been visiting our table in the OSCON Hack Zone — mostly because of the presence of our Little Black Boxes.

The common question we’ve heard: “where did you guys *get* those things?”

INORITE? They are *totally* cute.  We bought the parts and assembled them ourselves.  They are now Standard Issue to all new Eucalyptus engineers; a short stack of three gives any engineer enough firepower to do serious development and testing on the whole Eucalyptus stack.

Here’s the parts list from Amazon.com, courtesy of the talented and ruggedly handsome @zacharyjhill:

The main housing unit is an Intel NUC, about 4″ by 4″ by 2″. The SSD is available in different sizes; ours is 128GB. With some of the boxes, we only use 8GB of RAM and with others we use 16GB. We also like to have wireless, though it’s not required – and don’t forget the cheapo power cable.

Note the coffee cup in the foreground. Yes, they’re that small.

Building a fully functional 3-node Eucalyptus developer cloud: $1500.

Having an entire AWS-compatible cloud the approximate size of a coffee cup: priceless.

Awesome USB pens with complete virtual Eucalyptus cloud not included — those you’ll have to get from us at OSCON. Come see us in the Hack Zone. ;)

Cloud Horror Stories at OSCON

Posted in Uncategorized by Greg DeKoenigsberg on July 21, 2013

Running a private cloud can be like being the victim in a horror movie. You stand up the cloud and everything’s just fine. You get some workloads running. You’re scaling up. You’re scaling down. All is well. And then the spooky music starts, and little things start going wrong — and then suddenly the chairs are spinning in the air and you’re running for your life.

If you’re in Portland for OSCON on Monday night, come on by the Cloud Horror Stories birds-of-a-feather session. We’ll have people there with scary stories to tell, so if you’re interested in learning how things can go terribly wrong in cloud-land, come on by. Even better — if you’ve got a story of your own to share, we’d love to hear you tell it, preferably in your spookiest voice.

See you around the campfire. BOOOOOO!

(And no, we’re not actually going to start a campfire in the convention center. Definitely, almost certainly, maybe not. We can do that flashlight-under-the-chin thing, anyway.)

Who cares about AWS compatibility?

Posted in Uncategorized by Greg DeKoenigsberg on July 17, 2013

Simon Wardley is never shy to share a provocative opinion. :)

A summary of his latest missive: OpenStack is already doomed because of their inability, or unwillingness, to produce AWS clones. There’s nothing new in Simon’s position there, but it’s his bluntest statement of opinion yet on OpenStack’s prospects.

I’m not going to presume to agree or disagree with Simon’s prediction — but in his blog post and the ensuing conversation, I saw a few opportunities to clarify how I think about Eucalyptus and AWS fidelity.

* * * * *

First, on proving AWS fidelity.

Obviously, at Eucalyptus we think deeply about the AWS fidelity problem, and how to approach it. Simon suggests one possible model:

So could CloudStack, Eucalyptus, Open Nebula and some of the OpenStack party create a rich set of AWS compatible environments — of course. But the problem becomes you have to define one thing as the ‘reference’ model. The only way around this that I know is for the groups to create a massive set of test scripts and provide some sort of AWS compatibility service and define that as the reference model and each show compatibility to it and it to AWS. It’s possible, I’ve hinted enough times that people could try that route but there’s no takers so far.

I can’t speak for the other projects, but we find that the best tests of AWS compatibility can be found in the AWS ecosystem itself — which is one of the key advantages of working in such an ecosystem. Chasing a full API is an exhausting process, and can be discouraging, but we’ve had good success by first ensuring compatibility with the most popular open source tools in the AWS world. By moving progressively through these tools, we cover ever-expanding sections of the API, leaving the dustiest corners of the API for last (perhaps never even to be implemented; after all, an API is ultimately only as useful as the tools that exercise it).

The Netflix OSS toolchain is a great example of this. The team at Netflix has taken quite a bit of heat for relying so heavily on the AWS family of services, but their decision to open source all of their tools has been a boon to us. They are smart users who exercise AWS at a scale that other users can scarcely imagine, so it’s a safe bet that they’re exercising many of the most interesting parts of the AWS API. We’ve learned much, and proved much, by following their trail.

Of course, we also have our own automated test suite for AWS/Eucalyptus fidelity; we call it Eutester. It’s designed, at least in part, to run identical test cases against both Eucalyptus and AWS, and anyone who wants to test the AWS fidelity of their own IaaS can pick up that code and run with it. That codebase will continue to grow as Eucalyptus grows. Patches welcome, as they say.
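
To give a sense of what “identical test cases against both Eucalyptus and AWS” means in practice — and to be clear, this is not Eutester code itself, and the image IDs are made up — the pattern is simply to write a test against the EC2 API surface and hand it whichever connection you care about:

    # Not Eutester itself, just the pattern it relies on: one test, written
    # against the EC2 API surface (boto 2.x here), handed either an AWS
    # connection or a Eucalyptus connection. The image IDs are hypothetical.
    import time

    def test_run_and_terminate(conn, image_id):
        """Launch one instance, wait for it to run, then clean up."""
        reservation = conn.run_instances(image_id, instance_type="m1.small")
        instance = reservation.instances[0]
        while instance.state == "pending":
            time.sleep(5)
            instance.update()
        assert instance.state == "running", "instance never reached running"
        conn.terminate_instances([instance.id])

    # The same function then runs against both clouds:
    #   test_run_and_terminate(aws_conn, "ami-12345678")    # AWS
    #   test_run_and_terminate(euca_conn, "emi-12345678")   # Eucalyptus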

* * * * *

Second, on architecting for AWS fidelity.

It seems to be assumed — among some, anyway — that AWS API compatibility is something that can simply be dropped into OpenStack at any time. The trouble is, no one will know how true that is, or isn’t, until they actually do the work. Geoff Arnold’s comments hit the nail on the head:

Load balancing is a great example. For good or ill, Amazon’s Elastic Load Balancer is a cornerstone of web-tier cloud applications architecture. If the OpenStack community was serious about AWS compatibility, the LBaaS team would have established ELB compatibility as a fundamental requirement. It didn’t. On the contrary, much of the preliminary documentation focused on all of the cool features that LBaaS would support that were not available with ELB. By Grizzly, all that we had to show for our efforts was a proof of concept based on a single instance of haproxy. Elastic provisioning was officially out of scope for the core LBaaS effort.

Software design is about choices, and with every choice you make, there’s a chance that you’ve made some other choice impractical, or even impossible. We know that there are, at the very least, syntactic differences between OpenStack and AWS; it is also quite likely that there are deeper semantic mismatches.  It may be that the OpenStack community will be able to bridge both syntactic and semantic mismatches between OpenStack and AWS with ease — but given our experiences, it doesn’t seem likely. The devil is in the details, and one must care deeply about the details in order to conquer that devil.

* * * * *

Which brings me to the last point, which is caring about AWS fidelity.

As Thierry Carrez says,

EC2 API support has always been there in OpenStack. It just never found (yet) a set of contributors that cared enough to make it really shine. Canonical promised it (with AWSOME) then let it go. More recently Cloudscaling promised it, but I’ve seen nothing so far. The next in line might just deliver.

Maybe.  There is great power in being the one who “cares enough”. And Thierry’s response here raises the question: why doesn’t the OpenStack community care more about supporting the various AWS APIs?  (EC2, after all, is just the tip of the iceberg.)

That’s a question for the OpenStack community to answer.  In the meantime, I can assure you that, at Eucalyptus, we care deeply about AWS compatibility — as do our users. We work towards that goal tirelessly, every day, and I think it’s safe to say it’s because of that passion that we have taken the lead. And it’s a lead that we have every intention of extending.

Anyway. See some of you at Netflix tonite.

Our Friend Seth

Posted in Uncategorized by Greg DeKoenigsberg on July 12, 2013

I first became aware of Seth Vidal years ago. I didn’t know him at all; I knew him only from his work, and from that work I surmised that he was Not My Friend.

I was working for Red Hat, you see — and Seth, at the time, was not.

People think of Red Hat now as this hugely successful software behemoth, but I can assure you that it wasn’t always that way. There was a time, not so long ago, when Red Hat could only dream of making the kind of money they’re making now. Not that there weren’t huge expectations: after the IPO in August 1999, the stock price went up to a ridiculous $132 per share. When I joined in February of 2001, the share price was at a more realistic $6.44. Not long after that, it dropped to $3.12. I couldn’t help but feel a little bit responsible.

I was part of the Red Hat Network team. The idea, in a nutshell, was that we would encourage users to download Red Hat Linux for free, all they liked — but if they wanted timely software updates, we would force them to pay. Muwahahaha! RHN was the mechanism to deliver those updates, and the RHN server would be proprietary software. For me, this was an uncomfortable compromise; I had hoped to come to Red Hat to work on open source software, so it was painful to be a part of the only team in the company that was writing proprietary software. Matthew Szulik, the CEO, would meet with us every quarter to remind us how critical our work was. We were the business engine that would fuel the company, he told us. For a time, it was almost true.

Meanwhile, Seth was busily working on a rewrite of a little tool called Yup. Yup was the update tool for Yellow Dog Linux, and Seth decided to rewrite it to work with Red Hat Linux. He called this new tool Yum (Yellow Dog Updater, Modified). He made it primarily because he himself needed it. And because it was such a useful little tool, other people started using it. A lot of other people. Wow, just a whole lot of people started using Yum. It was far simpler than RHN, and for most users, it was better — or at least good enough. And it was, of course, Free Software.

Today, bits of Yum-related source code can be found in nearly all of the software packaging that Red Hat does — and that includes Spacewalk, the open source descendant of RHN. Open source is especially powerful when it’s commoditizing away the value proposition of proprietary software, and boy, did Yum ever do that. Yum is great software, sure — but to me, Seth’s truly lasting professional legacy is that he taught mighty Red Hat a humbling lesson about open source. And having well learned that lesson, Red Hat proceeds now to teach it to everyone else.

I myself was on the pointy end of that lesson, and it was one of the reasons that I left the RHN team to focus on the nascent Fedora community in 2004. One of my new duties was to write for the late, great Red Hat Magazine; one of my first assignments was to write an article about the resounding successes of the Fedora Project. Resounding Successes, don’t you know! Exciting! So I sent some emails to some of my new friends in Fedora-land, asking for their ideas for this article.

I often wish I still had my old Red Hat emails lying around, because I’m sure Seth’s response to this particular email would have made for great reading. I do remember the gist, which was this: “dude, you do *not* want me talking about my opinions of the Fedora Project in public.” (He did reference Icon Ryabitsev’s excellent mock IRC chat, which is required historical reading for anyone curious about the early days of the Fedora project.)

So anyway, I asked Seth if he’d be willing to talk to me, in private, off the record, about how we might be able to improve Fedora to a point at which he *would* be happy to talk about it in public. He agreed.

That was when I really started getting to know Seth Vidal.  Some of my fondest personal and professional memories come from the time that followed. Now that all seems like — sadly, is — a lifetime ago.

* * * * *

I saw Seth two nights before he died.

The American Dance Festival is a centerpiece of summer life in Durham. Performances happen just about every night at various venues across Durham, for a couple of months. ADF is a Durham institution, and we’re very proud of it here. My wife bought us front-row tickets to see an Argentinian aerial dance troupe at the Durham Performing Arts Center last Saturday night. When we arrived, four seats down from us sat Seth and his companion Eunice.

It was how I usually saw Seth, after I left Red Hat: just hanging around Durham, while we were both doing Durham Things. Seth was not only a linchpin of the global open source community; he was a linchpin of the local, real-world community that he and I shared. Seeing him at an ADF performance, or the Durham farmer’s market, or at a local restaurant like Toast, or Parker and Otis, was always a treat, but never a surprise: “oh, yeah,” I would think, “sure Seth is here.” Or if I didn’t see him, I would hear about him from mutual friends: “hey, Seth was here the other day.” He was a presence in Durham. It seemed like everyone knew him, or at least knew of him — people who knew little or nothing about his open source life.

So we chatted that night, crowded against the stage. He asked me how I was doing. For those of us who knew Seth well, he had a very particular way of asking that question, one that you can hear in your head even now. He asked the question with concern and purpose. For him, that question was never a throwaway.

I told him I was doing well. We talked about the great seats. He told me that he and Eunice had bought ADF season tickets, because of course they had; there’s not a more Durham thing you can do. I wanted to chat more, because I hadn’t seen him in a couple of months — but I was in people’s way. We agreed that we’d definitely catch up at the upcoming Flock conference, if not sooner. I hurried to take my seat.

The performance was great, but the house was crowded and hot, so when the show ended, my wife and I hurried out without saying our goodbyes. Not even a look or a wave back — because hey, I knew that I’d be seeing him soon enough anyway, right? Right?

* * * * *

My personality flaws have always been magnified in the presence of cyclists.

At my worst, I am impatient, easily annoyed, and impulsive. When I’m in a bad mood, I’m exactly the kind of guy who will sulk behind a pack of cyclists and then speed around them at the first available opportunity. I know a lot of cyclists, and have nothing but respect for them in the abstract — but in the real world, whenever cyclists take up space in my precious roadway, especially when I’m in a hurry, I often find myself fighting the urge to act like an asshole.

I wish I could say that I have nothing at all in common with the guy who ran Seth into the ditch on Monday night. I wish I could say, honestly, that I couldn’t possibly imagine myself in his position. How I would love to be able to say that.

I drove around Durham on Tuesday for much of the day, just thinking. Mostly I drove around Watts-Hillandale, the neighborhood where Seth lived. In particular, I must have driven up and down the length of Hillandale Road a dozen times. It’s a neighborhood street, but it’s also a thoroughfare, which means that people speed along it all the time. People who are in a hurry to get to someplace else. People like me.

It’s funny, how invisible bike lanes and “Share the Road” signs are, until you have reason to notice them — and then you notice them everywhere. For instance: the big yellow “Share the Road” sign on the 1900 block of Hillandale Road. That sign is big. All those signs are big. You can’t miss them. How can you miss them?

I fantasized briefly about altering that sign, and every other sign in Durham, to read “Share the Fucking Road”, so that for a few days, everyone might actually take a good look at them, and be Enlightened.  Except the responsible side of my brain pointed out that it would be an angry and ultimately futile gesture, and that I would probably do well to stop brooding, and get back to my life, and just fix my own self.

So that last thing — fix my own self, slow down, be patient — that’s a thing I can work on.  But stop brooding?  Get back to my life?  Not quite sure how to do those things just yet.

Bye Seth. I’ll save a chair at Toast for you.

Dammit.

Demo of Eucalyptus hotness: 3.3 milestone 6

Posted in Uncategorized by Greg DeKoenigsberg on April 23, 2013

Our demo day for milestone 6 was yesterday, and it was choice. We’re at feature completeness at this point, and we’re now on final approach for release sometime Soon-ish, as soon as we shake out all the code nasties. We’ve got some good stuff to show off on Vimeo. The basic transcript:

  • 0:00 Eric Choi, Product Marketing Manager, with agenda/housekeeping.
  • 1:25 Tim Cramer, VP of Engineering, sets the table for demos.
  • 3:45 Yours Truly, VP of Community, talks (a little too long) about Eucalyptus compatibility with the AWS Ruby SDK.
  • 16:00 Vic Iglesias, QA Lead, talks about EucaLobo, an amazing fork of ElasticWolf that provides an autoscaling UI for both Eucalyptus and AWS. (Watch this one twice; it’s really the star of the show.)
  • 24:40 Vic Iglesias talks about Micro-QA, the self-contained environment for running automated testing on Eucalyptus installations.
  • 31:30 Chris Grzegorczyk, Co-founder and Chief Architect, shows Asgard and other NetflixOSS tools running on Eucalyptus.
  • 41:25 David Kavanagh, Senior Engineer, shows tagging and other latest updates to the Eucalyptus 3.3 User Console.
  • 45:00 Colby Dyess, Partner Manager, shows Eucalyptus compatibility with the AWS Toolkit for Eclipse.

Exciting, and more proof that when it comes to AWS compatibility in a private IaaS, we’re the only game in town.

Proof: Netflix OSS on Eucalyptus

Posted in Uncategorized by Greg DeKoenigsberg on March 20, 2013

The Netflix OSS team sure knows how to throw a good party.  We’ve been to their meetup twice now, most recently at their event last Wednesday, at which they unveiled the Netflix OSS Prize.  At this rate, given the amount of interest they’re generating, they’ll need to rent out hotel space for their next event.

What draws the crowd? It’s a good question. The simplest answer, I guess, is that they’re a cool company doing cool stuff — and people want to be like them.

At Eucalyptus, we certainly enjoy working with Netflix, and we enjoy hanging out with them generally, because they’re cool.  (And their headquarters are awesome — “spa-like”, one might say.)  But we also have very specific interests in following the Netflix approach:

1. Netflix understands cloud.  Surely this is obvious; Netflix is one of the most advanced users of cloud services on the planet. They were the first to understand the true value of the AWS model, and in taking full advantage of it, they’ve developed a reputation for being industry leaders who work at true cloud scale.  Other leaders now seek to learn from them and emulate their practices.

What does that mean, though — “working at true cloud scale”? In a nutshell: Netflix has embraced the reality that sometimes services just go away.  This is the single biggest shift that one must make when moving into the world of cloud: not just accepting, but embracing the idea that sometimes your systems just go away.  That’s the point of cloud, and if you don’t build your systems with that mindset from Day One, you’re Doing It Wrong. Netflix has demonstrated an understanding here that few organizations can match.  They’ve moved from Chaos Monkey, which randomly knocks over instances, to Chaos Gorilla, which randomly knocks over entire availability zones — and they’ve made these tools, and others like them, available for those who dare to follow their lead.

2. Netflix understands open source. Not just as users, but as producers.  They are willing to share some pretty amazing software, because they understand the difference between software that provides differentiating value for their business (the engine) and software that provides non-differentiating value (the plumbing).  The more they can share the cost of maintaining the plumbing, the more resources they can commit to the engine.  This is a highly strategic choice, which makes them all the more committed to it — they have 26 projects in their GitHub repo and counting, with no sign of slowing down anytime soon.

3. Netflix OSS requires real AWS API fidelity, and Eucalyptus provides it.  Netflix is completely committed to the AWS model, and all of their code currently assumes that you’re using AWS.  If you want to figure out if your own AWS-compliant IaaS “just works”, the Netflix OSS tools represent the best possible tests of AWS API fidelity. The “Asg” in Asgard, for instance, stands for Autoscaling Groups — so as we get our own autoscaling functionality up to speed for the Eucalyptus 3.3 release, it makes perfect sense to use Asgard as the benchmark to test against.  Which is precisely what we demonstrated at the Netflix OSS event: Chaos Monkey for knocking down instances, Asgard for Autoscaling to replace the destroyed instances, and Edda for auditing the whole process — and all working on Eucalyptus precisely as it would work against AWS.
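
For a flavor of what Asgard is actually driving under the hood — and therefore what Eucalyptus 3.3 has to answer correctly — here’s a rough sketch of the AWS Auto Scaling calls via boto. The credentials, endpoint, image ID, and zone name are all placeholders, and the Eucalyptus service path is my assumption; the point is that the identical calls should work whether they land on AWS or on Eucalyptus.

    # Sketch only: the AWS Auto Scaling calls (boto 2.x) that tools like Asgard
    # exercise. Credentials, endpoint, image ID, and zone name are placeholders;
    # against AWS you would drop the custom endpoint/port/path entirely.
    from boto.ec2.autoscale import (AutoScaleConnection, LaunchConfiguration,
                                    AutoScalingGroup)
    from boto.regioninfo import RegionInfo

    conn = AutoScaleConnection(
        aws_access_key_id="YOUR-ACCESS-KEY",
        aws_secret_access_key="YOUR-SECRET-KEY",
        is_secure=False,
        port=8773,
        path="/services/AutoScaling",   # assumed Eucalyptus autoscaling path
        region=RegionInfo(name="eucalyptus", endpoint="192.168.192.101"),
    )

    # A launch configuration says what to run; the group says how many.
    lc = LaunchConfiguration(name="web-lc", image_id="emi-12345678",
                             instance_type="m1.small")
    conn.create_launch_configuration(lc)

    group = AutoScalingGroup(group_name="web-asg", launch_config=lc,
                             availability_zones=["one"], min_size=2, max_size=4)
    conn.create_auto_scaling_group(group)
    # Kill an instance, Chaos Monkey style, and the group launches a replacement.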

Here’s the point, and it’s a simple one: it’s easy and cool to claim that your private cloud is “AWS API compatible”.  But it’s another thing entirely to prove it.  At Eucalyptus, we stake our entire reputation on proving it, with every single release. At the Netflix meetup, people walked up to our demo station, and they could *see* Netflix OSS on a private cloud. Edda, Asgard, and Chaos Monkey, all running in the cloud that was sitting right there on the table next to them.

Don’t trust hype.  Trust proof.

Annotation of Euca demo 3.3 milestone 4

Posted in Uncategorized by Greg DeKoenigsberg on March 14, 2013

We’ve got two hours’ worth of demo video from yesterday’s show-and-tell session for Eucalyptus 3.3 milestone 4 — with every milestone, we continue to improve the most AWS-compatible private cloud platform.  ELB, autoscaling, and CloudWatch are pretty much in the bag; now it’s mostly spit and polish. For those who don’t want to watch the whole two hours, but are interested in the progress of various features, here are the timings of the various demos:

At 2:38, David Kavanagh talks about data management features in the UI.

At 11:00, Ean Schuessler talks about data layer architecture changes in the UI.

At 25:42, Jeff Uphoff talks about migrating instances from one node controller to another.

At 45:15, Vasya Kochergin talks about migrating instances in VMware.

At 51:35, Swathi Gangisetty talks about support for NetApp cluster mode.

At 1:19:30, Steve Jones talks about autoscaling improvements.

At 1:28:56, Ken Edwards talks about CloudWatch improvements.

At 1:37:35, Evan Thomas talks about CloudWatch alarms.

At 1:49:20, Sang-Min Park talks about elastic load balancing.

At 2:05:45, Matt Spaulding talks about the elastic load balancer VM.

Enjoy. As always, if you have any questions, feel free to ask on IRC (#eucalyptus on freenode) or join our mailing list.

P.S. we should have Silvereye builds of 3.3m4 available early next week.
