London’s a great city, and it’s full of great advocates for Ansible. To everyone I met at the meetup earlier this month: thanks for coming. It was a delight to be able to meet all of you.
I wasn’t expecting to speak at this inaugural meetup, but due to a last-minute cancellation (Richard, hope the family is doing better) I found myself pressed into service. I’m not really much for presentations anyway, so it was no hardship; Ali Asad Lotia and Mudassar Mian provided the technical firepower, and when it was my turn to speak, I fell back on my tried-and-true strategy of asking questions and listening. This was the first opportunity many people in London have had to talk with someone from Ansible in the flesh, and I’ve learned how very important it is just to meet people where they are and share stories with them.
The first thing I generally do when speaking with a roomful of users, whether I’m giving a formal presentation or not, is to survey the room for expertise. I tend to ask the same series of questions: “who has heard of X? who uses X? who uses X extensively?” What impressed me most about the crowd in London was how many of the hands stayed up through all of those questions. These were serious, serious users of Ansible. Granted, it was an Ansible meetup, so I expected more expertise than at a typical generic meetup — but it seemed as though there were hardly any novices at all.
Except for me, of course. :)
There were a lot of questions; I think we had more than an hour of Q+A. The audience was engaged, not only with me, but with one another; the crowd frequently came to my aid when I didn’t have an answer, and by the end of the evening, I was pretty much an afterthought as people in the crowd carried on their own conversations.
The evening also confirmed the good sense of our decision to hold our first AnsibleFest outside of North America in London. Stay tuned for details on that front soon.
Thanks again to all who attended. Looking forward to my next pint and curry.
In September of 2007, I sat down at Three Cups, a coffee shop in downtown Chapel Hill, NC, with three friends from Red Hat: Michael DeHaan, Adrian Likins, and Seth Vidal. We’d all been lamenting the poor state of systems management tools, and figured that we could do better. Rather: they figured they could do better, but wanted me along to help them with the “community stuff”. I contributed exactly two things: a dogged insistence on simplicity and modularity, and the name: Func.
Here’s the blog post I wrote about it at the time:
The key quote from that blog post, which now looks prescient: “The takeaway: it’s not enough to write code and say ‘look, it’s open source.’ If you want to build communities around useful open source projects, you must architect them the right way to begin with.”
In February of 2012, Michael DeHaan started the Ansible project, building on the lessons he’d learned from Func and elsewhere. Here’s the very first check-in:
The key quote from that check-in, which now looks prescient: “As Func, which I co-wrote, aspired to avoid using SSH and have its own daemon infrastructure, Ansible aspires to be quite different and more minimal, but still able to grow more modularly over time.”
Simplicity, modularity, community: tested in Func, perfected in Ansible.
* * *
In a little more than two years, Ansible has built one of the largest and most passionate communities of users in the open source world. Just a few examples of that passion (and really, just a few, because there are hundreds):
.@TheNextWeb is moving to a brand new server infrastructure with the help of @ansible. I’ve never been so in love with a tool.
11:07 AM – 17 Jun 2014
OH: “if a vegan does crossfit and uses Ansible, what do they talk about first?”
8:57 AM – 18 Mar 2014
I’ll have to admit I like watching this @ansible playbook updating servers a lot more than #soccer
Finals: “All patched”
4:13 PM – 12 Jun 2014
How have I only just found out how awesome @ansible is?!
3:38 PM – 15 Jun 2014
@ansible Thanks for making dealing with #Heartbleed extremely easy. Patched ~100 servers in no time at all. Deployed new SSL keys as well.
4:33 PM – 8 Apr 2014
Hey @ansible – you do most everything else I need except for completing my income taxes. Would you accept a pull request? :)
3:43 PM – 7 Apr 2014
That’s the kind of excitement you can’t buy with marketing dollars. It’s the kind of excitement you can’t manufacture with press releases and trade show gimmicks. It’s a genuine enthusiasm that people feel when they discover something insanely great that makes their lives easier.
To have a significant role in building such a thing would be a once-in-a-lifetime opportunity for any technologist.
That is why, with humility and great excitement, I’m joining Ansible today. It’s literally a dream job for me, with a dream company and a dream community.
We’re going to accomplish great things together. I can’t wait to get started.
* * *
p.s. if you’re ever in Durham, stop by the office for a visit! Here are the directions:
* Drive to downtown Durham.
* Look up at the Lucky Strike smokestack.
* Park your car somewhere close to it.
* Walk towards it until you’re close enough to touch it.
* Walk through our front door. :)
See you at a meetup soon.
Eucalyptus 4.0 is coming soon, and you can try it in beta, on a single system, right now.
To take your own AWS-compatible cloud-in-a-box for a spin, here’s what you do:
- Install CentOS 6.5 minimal on a box that supports virtualization. Give it a fixed IP address.
- Set aside a range of contiguous IP addresses to play with. 20 should be plenty.
- As root, run the following command:
It’s still in beta, so if it breaks, you get to keep both pieces — but it’s pretty stable for me.
The installation script is based on Chef Solo. The goal is to provide a very simple installation experience that results in either a running cloud, or a very clear explanation of why you do not have a running cloud. Once CentOS minimal is installed, a typical install takes about 15 minutes. (NOTE: do *not* install the Desktop version; NetworkManager and PackageKit get in the way.)
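One quick way to confirm the “supports virtualization” requirement from the first step, before you commit to the install (this is a generic Linux check, not part of the installer itself):

```shell
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm).
# A nonzero count means the box can run the KVM guests Eucalyptus needs.
egrep -c '(vmx|svm)' /proc/cpuinfo
```

If this prints 0, check whether virtualization support is disabled in the BIOS before blaming the cloud.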
If you have any problems getting your cloud-in-a-box up and running, don’t hesitate to reach out to me on Twitter (@gregdek) or on freenode (gregdek, #eucalyptus). At this stage, bad news is the best news, so if you have bugs, let’s see ‘em.
I especially invite my friends who work for OpenStack vendors to see how the other half lives. ;)
We get lots of people who want to use Eucalyptus as a way to learn about how cloud computing works at a code level. Which is great: Freedom to Learn is one of the fundamental Free Software guarantees.
So what’s my advice to users who want to learn about Eucalyptus? It’s pretty simple.
1. Get a tiny Euca cloud running. The perfect tool for this is eucadev. If you’ve got a laptop that supports Vagrant, you’ve got a Euca cloud. It’s a small cloud, to be sure, but it’s got all of the key features required for cloud orchestration, and it’s a tool that our own developers use.
2. Find a problem to solve! The best way to learn about a codebase is to dig into it with a clear goal in mind. If you don’t yet have a clear goal, we’ve got a great list of open bugs that are tagged as “fruit” (of the low-hanging variety) to get you started.
3. Create your own local branch of the Eucalyptus source, and start hacking! An explanation of how to do this can be found in the README for eucadev.
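In shell terms, step 1 looks roughly like this (the repo URL here is an assumption; use the address given in the eucadev README):

```shell
# Fetch eucadev and bring up the tiny cloud in a Vagrant-managed VM.
git clone https://github.com/eucalyptus/eucadev.git
cd eucadev
vagrant up     # downloads the base box and provisions a single-node Eucalyptus
vagrant ssh    # log in to the VM and poke at your new cloud
```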
There’s really no substitute for getting your hands into code. Dig in. If you get stuck, swing by #eucalyptus-devel on freenode and ask for help. (After you’ve read through the docs and Googled a bit, of course.)
Alistair, I loved your napkin-based comparison of the private cloud players. It’s hilarious, and I agree with most of it. Yes, the private cloud discussion definitely shares parallels with the platform wars of the past. It’s a very useful lens for viewing our little world… even if that lens happened to be the bottom of a wine bottle. :)
First of all, I’ve got to say that it’s great to see Eucalyptus in the center of your napkin. Look at the size of those other players: put together the combined market cap of the OpenStack companies, VMware, and Citrix… and then look at little old us. We must be doing something right!
What really interests me, though, is where you tried to fit Eucalyptus into your napkin analysis. I suspect that you weren’t quite sure where to put us, so you chose OS/2, found some surface similarities, and moved on to the other parts of the napkin.
In the immortal words of the great sage Jules Winnfield, “allow me to retort.”
“Integration happens when people rally around one thing.” Yes, this is exactly the right argument — and customers are rallying around AWS. Those guys are farther to the upper right in the latest Gartner Magic Quadrant than I’ve ever seen. They’re doing more public cloud business than all the other players in that market combined right now. Which is why we, and our users and customers, are rallying around one thing: AWS compatibility.
“Apps matter more than legacy protocols.” This, also, is exactly right. Developers are writing apps for the cloud — and that generally means writing them for AWS first. And developers are busy, which means that despite their good intentions to make their apps portable, portability generally comes somewhere between localization and database normalization on the priority list. Which is, of course, why legacy players like VMware are fighting tooth and nail to protect their own legacy protocols. Just listen to Pat Gelsinger, CEO of VMware: “if a workload goes to Amazon, you lose, and we have lost forever.” That’s precisely why the value we add is so useful to our users and customers: the switching costs from AWS to Eucalyptus (and back) are orders of magnitude lower than with any other option.
“IBM spent a lot of time making OS/2 work with legacy mainframe protocols and existing enterprise environments.”
Wait… did you just compare AWS to the IBM System/360?
“…and Eucalyptus is OS/2.”
OK, story time.
I worked at IBM way back in the day, fresh out of no college. I was the designated worldwide global support guru for the Audiovation Sound Card, for Micro Channel, for OS/2. This was a combination of hardware and software that was never tested, and thus never worked. So being “worldwide global support guru” basically meant picking up the phone and saying “that combination doesn’t work, send the card back, here’s your case number, have a nice day.” And customers would always ask, irritably, “you made all these products — how is it possible that they don’t work together?” And my inability to lie about the answer to that question — “because none of these three components are important individually in the market, let alone collectively” — was probably a contributing factor to my getting fired from that job.
In my opinion, the chief failure of OS/2 was in its attempts to be too many things to too many people — being OK at everything, but good at nothing in particular.
That is precisely the opposite of the Eucalyptus strategy, which is to be insanely great at interoperability with the AWS API, the de facto standard for talking to the world’s dominant cloud platform.
I’ve said this before, and I’ll say it again: it’s about focus for us. We are focusing on a very particular pain point that developers are feeling ever more acutely: the potential for AWS lock-in. Our goal is to provide an open source alternative that mitigates those lock-in risks. Focus, focus, focus.
We see the benefits of this focused approach every day. Our roadmap is spread out before us in great detail — and that roadmap, combined with some of the best cloud engineers on the planet, gives us a feature velocity that no one else in the private cloud world can currently match. Which is why we’re on your napkin, despite being a fraction of the size of the napkin’s other inhabitants.
But in thinking about it, maybe your comparison to old school mainframe connectivity is right, and you’ve just got your timeframes wrong.
Maybe it’s 1965, and OpenStack is Multics, and Red Hat is GE, and Amazon is IBM, and AWS is the brand new System/360, ready to dominate the computing landscape for the next two decades.
Which would make Eucalyptus the open source little brother that the System/360 never had, that has no analogue in the history books, and that could have changed everything. Trouble is, I’m not sure how to fit that on your napkin.
I don’t know how many people have actually met Vic Iglesias, our Quality Hobo. Here’s what he looks like in his natural habitat:
It’s really super important not to make Quality Hobo angry. Let me assure you from personal experience: no one wants that.
Here are some things that make Quality Hobo angry (and that you should, therefore, avoid):
- Stealing Quality Hobo’s cigarettes, electronic or otherwise.
- Remarking upon Quality Hobo’s resemblance to a certain celebrity.
- Wasting Quality Hobo’s time with questions about which test cases are most recent.
- Wasting Quality Hobo’s time by writing tests that don’t integrate into Eutester.
- Wasting Quality Hobo’s time with questions on how to build your QA environment.
- Wasting Quality Hobo’s time in any way at all.
Here’s what I’m saying: it was only a matter of time before our Hobo got mixed up with a Vagrant.
Vagrant is incredible, and Mitchell Hashimoto is incredible for creating it. It’s the sort of tool you try out for a particular reason, and then once you’ve used it, you find a million other reasons to use it. Last night, as I chased down a bug in FastStart, I got to try out Micro-QA, which is a mashup of Vagrant and Eutester and Jenkins and Ansible and all kinds of stuff, built on top of one of the standard Vagrant boxes for CentOS 6.4.
Here’s the full instructions for getting a fully updated, complete testing environment for Eucalyptus installed on your laptop:
- Install Vagrant+Virtualbox on your laptop.
- Run “git clone” on the Micro-QA repo.
- Run “vagrant up”.
- Point your browser to http://localhost:8080.
Aaaaand you’re done.
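The same four steps as a copy-and-paste sketch (the micro-qa repo URL is an assumption on my part; use the one from Vic’s announcement):

```shell
# Assumes Vagrant and VirtualBox are already installed on the laptop.
git clone https://github.com/eucalyptus/micro-qa.git
cd micro-qa
vagrant up    # builds the VM: Jenkins, Eutester, Ansible, the works
# Then point your browser at http://localhost:8080 for the Jenkins UI.
```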
Ease of automated testing is one of those force multipliers that doesn’t seem super-exciting, but really is amazingly super-exciting. Because here’s the thing: When the cost of testing is higher than basically zero, people don’t bother with it — or, they do a really half-assed job of it and then say “oh yeah, I totally tested that.” Which is waaaaaay worse.
In Micro-QA, we have a tool that brings the cost of automated QA to near-zero. And not just for QA folks: it’s a tool that can be used by our QA team, our engineering team, our support team, our customers, and our community, all with comparatively little knowledge required. It’s a huge win for us.
And here’s the kicker: it’s not only a QA tool; it’s also a great tool for hybrid cloud diagnostics. Set up your Euca environment; set up your AWS environment; bake in some tests that run against each; run them on a regular basis; scream when something breaks. It’s kinda sorta magic.
Anyway. If you’ve got a Euca install, go get Micro-QA running on your laptop, bring it up in your browser, pick some test cases to run, drop the contents of your eucarc file into your test case, and run it. If it breaks, ping us on Freenode (#eucalyptus-qa) and let us know what broke.
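The “drop the contents of your eucarc file into your test case” step just means getting those shell export lines into your test’s hands. Here’s a minimal, hypothetical sketch of parsing them in Python — the variable names follow the usual eucarc conventions, and the endpoint and keys are made up for illustration:

```python
def parse_eucarc(text):
    """Turn eucarc-style "export KEY=value" lines into a dict."""
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("export "):
            continue  # skip comments, blanks, and non-export lines
        key, _, value = line[len("export "):].partition("=")
        creds[key.strip()] = value.strip().strip("'\"")
    return creds

# Illustrative eucarc fragment; the endpoint and keys are fake.
sample = """\
export EC2_URL=http://192.168.1.1:8773/services/Eucalyptus
export EC2_ACCESS_KEY='AKIEXAMPLE'
export EC2_SECRET_KEY='notarealsecret'
"""
creds = parse_eucarc(sample)
print(creds["EC2_URL"])  # -> http://192.168.1.1:8773/services/Eucalyptus
```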
Vic announced Micro-QA less than six months ago. It was cool at the time, but now it’s way past cool. I’m really impressed by how far it’s come in such a short time. It’s like I’m living in the future.
(For the record: Vic is really a super sweetheart of a guy, and I don’t think he actually lives under a bridge at present. I think he may be living Between Two Ferns, though.)
A lot of people have been visiting our table in the OSCON Hack Zone — mostly because of the presence of our Little Black Boxes.
The common question we’ve heard: “where did you guys *get* those things?”
INORITE? They are *totally* cute. We bought the parts and assembled them ourselves. They are now Standard Issue to all new Eucalyptus engineers; a short stack of three gives any engineer enough firepower to do serious development and testing on the whole Eucalyptus stack.
Here’s the parts list from Amazon.com, courtesy of the talented and ruggedly handsome @zacharyjhill:
The main housing unit is an Intel NUC, about 4″ by 4″ by 2″. The SSD is available in different sizes; ours is 128GB. With some of the boxes, we only use 8GB of RAM and with others we use 16GB. We also like to have wireless, though it’s not required — and don’t forget the cheapo power cable.
Building a fully functional 3-node Eucalyptus developer cloud: $1500.
Having an entire AWS-compatible cloud the approximate size of a coffee cup: priceless.
Awesome USB pens with complete virtual Eucalyptus cloud not included — those you’ll have to get from us at OSCON. Come see us in the Hack Zone. ;)
Running a private cloud can be like being the victim in a horror movie. You stand up the cloud and everything’s just fine. You get some workloads running. You’re scaling up. You’re scaling down. All is well. And then the spooky music starts, and little things start going wrong — and then suddenly the chairs are spinning in the air and you’re running for your life.
If you’re in Portland for OSCON on Monday night, come on by the Cloud Horror Stories birds-of-a-feather session. We’ll have people there with scary stories to tell, so if you’re interested in learning how things can go terribly wrong in cloud-land, come on by. Even better — if you’ve got a story of your own to share, we’d love to hear you tell it, preferably in your spookiest voice.
See you around the campfire. BOOOOOO!
(And no, we’re not actually going to start a campfire in the convention center. Definitely, almost certainly, maybe not. We can do that flashlight-under-the-chin thing, anyway.)
Simon Wardley is never shy to share a provocative opinion. :)
A summary of his latest missive: OpenStack is already doomed because of its inability, or unwillingness, to produce AWS clones. There’s nothing new in Simon’s position there, but it’s his bluntest statement of opinion yet on OpenStack’s prospects.
I’m not going to presume to agree or disagree with Simon’s prediction — but in his blog post and the ensuing conversation, I saw a few opportunities to clarify how I think about Eucalyptus and AWS fidelity.
* * * * *
First, on proving AWS fidelity.
Obviously, at Eucalyptus we think deeply about the AWS fidelity problem, and how to approach it. Simon suggests one possible model:
So could CloudStack, Eucalyptus, Open Nebula and some of the OpenStack party create a rich set of AWS compatible environments — of course. But the problem becomes you have to define one thing as the ‘reference’ model. The only way around this that I know is for the groups to create a massive set of test scripts and provide some sort of AWS compatibility service and define that as the reference model and each show compatibility to it and it to AWS. It’s possible, I’ve hinted enough times that people could try that route but there’s no takers so far.
I can’t speak for the other projects, but we find that the best tests of AWS compatibility can be found in the AWS ecosystem itself — which is one of the key advantages of working in such an ecosystem. Chasing a full API is an exhausting process, and can be discouraging, but we’ve had good success by first ensuring compatibility with the most popular open source tools in the AWS world. By moving progressively through these tools, we will cover ever-expanding sections of the API, leaving the dustiest corners for last (and perhaps never implementing them; after all, an API is ultimately only as useful as the tools that exercise it).
The Netflix OSS toolchain is a great example of this. The team at Netflix has taken quite a bit of heat for relying so heavily on the AWS family of services, but their decision to open source all of their tools has been a boon to us. They are smart users who exercise AWS at a scale that other users can scarcely imagine, so it’s a safe bet that they’re exercising many of the most interesting parts of the AWS API. We’ve learned much, and proved much, by following their trail.
Of course, we also have our own automated test suite for AWS/Eucalyptus fidelity; we call it Eutester. It’s designed, at least in part, to run identical test cases against both Eucalyptus and AWS, and anyone who wants to test the AWS fidelity of their own IaaS can pick up that code and run with it. That codebase will continue to grow as Eucalyptus grows. Patches welcome, as they say.
* * * * *
Second, on architecting for AWS fidelity.
It seems to be assumed — among some, anyway — that AWS API compatibility is something that can simply be dropped into OpenStack at any time. The trouble is, no one will know how true that is, or isn’t, until they actually do the work. Geoff Arnold’s comments hit the nail on the head:
Load balancing is a great example. For good or ill, Amazon’s Elastic Load Balancer is a cornerstone of web-tier cloud applications architecture. If the OpenStack community was serious about AWS compatibility, the LBaaS team would have established ELB compatibility as a fundamental requirement. It didn’t. On the contrary, much of the preliminary documentation focused on all of the cool features that LBaaS would support that were not available with ELB. By Grizzly, all that we had to show for our efforts was a proof of concept based on a single instance of haproxy. Elastic provisioning was officially out of scope for the core LBaaS effort.
Software design is about choices, and with every choice you make, there’s a chance that you’ve made some other choice impractical, or even impossible. We know that there are, at the very least, syntactic differences between OpenStack and AWS; it is also quite likely that there are deeper semantic mismatches. It may be that the OpenStack community will be able to bridge both syntactic and semantic mismatches between OpenStack and AWS with ease — but given our experiences, it doesn’t seem likely. The devil is in the details, and one must care deeply about the details in order to conquer that devil.
* * * * *
Which brings me to the last point, which is caring about AWS fidelity.
As Thierry Carrez says,
EC2 API support has always been there in OpenStack. It just never found (yet) a set of contributors that cared enough to make it really shine. Canonical promised it (with AWSOME) then let it go. More recently Cloudscaling promised it, but I’ve seen nothing so far. The next in line might just deliver.
Maybe. There is great power in being the one who “cares enough”. And Thierry’s response raises the question: why doesn’t the OpenStack community care more about supporting the various AWS APIs? (EC2, after all, is just the tip of the iceberg.)
That’s a question for the OpenStack community to answer. In the meantime, I can assure you that, at Eucalyptus, we care deeply about AWS compatibility — as do our users. We work towards that goal tirelessly, every day, and I think it’s safe to say it’s because of that passion that we have taken the lead. And it’s a lead that we have every intention of extending.
Anyway. See some of you at Netflix tonite.
I first became aware of Seth Vidal years ago. I didn’t know him at all; I knew him only from his work, and from that work I surmised that he was Not My Friend.
I was working for Red Hat, you see — and Seth, at the time, was not.
People think of Red Hat now as this hugely successful software behemoth, but I can assure you that it wasn’t always that way. There was a time, not so long ago, when Red Hat could only dream of making the kind of money they’re making now. Not that there weren’t huge expectations: after the initial IPO in August 1999, the stock price went up to a ridiculous $132 per share. When I joined in February of 2001, the share price was at a more realistic $6.44. Not long after that, it dropped to $3.12. I couldn’t help but feel a little bit responsible.
I was part of the Red Hat Network team. The idea, in a nutshell, was that we would encourage users to download Red Hat Linux for free, all they liked — but if they wanted timely software updates, we would force them to pay. Muwahahaha! RHN was the mechanism to deliver those updates, and the RHN server would be proprietary software. For me, this was an uncomfortable compromise; I had hoped to come to Red Hat to work on open source software, so it was painful to be a part of the only team in the company that was writing proprietary software. Matthew Szulik, the CEO, would meet with us every quarter to remind us how critical our work was. We were the business engine that would fuel the company, he told us. For a time, it was almost true.
Meanwhile, Seth was busily working on a rewrite of a little tool called Yup. Yup was the update tool for Yellow Dog Linux, and Seth decided to rewrite it to work with Red Hat Linux. He called this new tool Yum (Yellow Dog Updater, Modified). He made it primarily because he himself needed it. And because it was such a useful little tool, other people started using it. A lot of other people. Wow, just a whole lot of people started using Yum. It was far simpler than RHN, and for most users, it was better — or at least good enough. And it was, of course, Free Software.
Today, bits of Yum-related source code can be found in nearly all of the software packaging that Red Hat does — and that includes Spacewalk, the open source descendant of RHN. Open source is especially powerful when it commoditizes away the value proposition of proprietary software, and boy, did Yum ever do that. Yum is great software, sure — but to me, Seth’s truly lasting professional legacy is that he taught mighty Red Hat a humbling lesson about open source. And having well learned that lesson, Red Hat proceeds now to teach it to everyone else.
I myself was on the pointy end of that lesson, and it was one of the reasons that I left the RHN team to focus on the nascent Fedora community in 2004. One of my new duties was to write for the late, great Red Hat Magazine; one of my first assignments was to write an article about the resounding successes of the Fedora Project. Resounding Successes, don’t you know! Exciting! So I sent some emails to some of my new friends in Fedora-land, asking for their ideas for this article.
I often wish I still had my old Red Hat emails lying around, because I’m sure Seth’s response to this particular email would have made for great reading. I do remember the gist, which was this: “dude, you do *not* want me talking about my opinions of the Fedora Project in public.” (He did reference Icon Ryabitsev’s excellent mock IRC chat, which is required historical reading for anyone curious about the early days of the Fedora project.)
So anyway, I asked Seth if he’d be willing to talk to me, in private, off the record, about how we might be able to improve Fedora to a point at which he *would* be happy to talk about it in public. He agreed.
That was when I really started getting to know Seth Vidal. Some of my fondest personal and professional memories come from the time that followed. Now that all seems like — sadly, is — a lifetime ago.
* * * * *
I saw Seth two nights before he died.
The American Dance Festival is a centerpiece of summer life in Durham. Performances happen just about every night at various venues across Durham, for a couple of months. ADF is a Durham institution, and we’re very proud of it here. My wife bought us front-row tickets to see an Argentinian aerial dance troupe at the Durham Performing Arts Center last Saturday night. When we arrived, four seats down from us sat Seth and his companion Eunice.
It was how I usually saw Seth, after I left Red Hat: just hanging around Durham, while we were both doing Durham Things. Seth was not only a linchpin of the global open source community; he was a linchpin of the local, real-world community that he and I shared. Seeing him at an ADF performance, or the Durham farmer’s market, or at a local restaurant like Toast, or Parker and Otis, was always a treat, but never a surprise: “oh, yeah,” I would think, “sure Seth is here.” Or if I didn’t see him, I would hear about him from mutual friends: “hey, Seth was here the other day.” He was a presence in Durham. It seemed like everyone knew him, or at least knew of him — people who knew little or nothing about his open source life.
So we chatted that night, crowded against the stage. He asked me how I was doing. For those of us who knew Seth well, he had a very particular way of asking that question, that you can hear in your head even now. He asked the question with concern and purpose. For him, that question was never a throwaway.
I told him I was doing well. We talked about the great seats. He told me that he and Eunice had bought ADF season tickets, because of course they had; there’s not a more Durham thing you can do. I wanted to chat more, because I hadn’t seen him in a couple of months — but I was in people’s way. We agreed that we’d definitely catch up at the upcoming Flock conference, if not sooner. I hurried to take my seat.
The performance was great, but the house was crowded and hot, so when the show ended, my wife and I hurried out without saying our goodbyes. Not even a look or a wave back — because hey, I knew that I’d be seeing him soon enough anyway, right? Right?
* * * * *
My personality flaws have always been magnified in the presence of cyclists.
At my worst, I am impatient, easily annoyed, and impulsive. When I’m in a bad mood, I’m exactly the kind of guy who will sulk behind a pack of cyclists and then speed around them at the first available opportunity. I know a lot of cyclists, and have nothing but respect for them in the abstract — but in the real world, whenever cyclists take up space in my precious roadway, especially when I’m in a hurry, I often find myself fighting the urge to act like an asshole.
I wish I could say that I have nothing at all in common with the guy who ran Seth into the ditch on Monday night. I wish I could say, honestly, that I couldn’t possibly imagine myself in his position. How I would love to be able to say that.
I drove around Durham on Tuesday for much of the day, just thinking. Mostly I drove around Watts-Hillandale, the neighborhood where Seth lived. In particular, I must have driven up and down the length of Hillandale Road a dozen times. It’s a neighborhood street, but it’s also a thoroughfare, which means that people speed along it all the time. People who are in a hurry to get to someplace else. People like me.
It’s funny, how invisible bike lanes and “Share the Road” signs are, until you have reason to notice them — and then you notice them everywhere. For instance: the big yellow “Share the Road” sign on the 1900 block of Hillandale Road. That sign is big. All those signs are big. You can’t miss them. How can you miss them?
I fantasized briefly about altering that sign, and every other sign in Durham, to read “Share the Fucking Road”, so that for a few days, everyone might actually take a good look at them, and be Enlightened. Except the responsible side of my brain pointed out that it would be an angry and ultimately futile gesture, and that I would probably do well to stop brooding, and get back to my life, and just fix my own self.
So that last thing — fix my own self, slow down, be patient — that’s a thing I can work on. But stop brooding? Get back to my life? Not quite sure how to do those things just yet.
Bye Seth. I’ll save a chair at Toast for you.