<p><em>Jeff Lindsay · progrium@gmail.com · <a href='http://progrium.com'>progrium.com</a> · updated 2014-06-25</em></p>
<h1 id='the_start_of_the_age_of_flynn'>The Start of the Age of Flynn</h1>
<p><em>2014-02-06 · <a href='http://progrium.com/blog/2014/02/06/the-start-of-the-age-of-flynn'>Permalink</a></em></p>
<p>For about the past six months, I’ve been working on an open source project called <a href='https://flynn.io/'>Flynn</a>. It’s gotten a lot of attention, but I haven’t written much about it. I’ve been hoping to start a series discussing the design and development behind Flynn, but it seems appropriate to at least introduce the project and provide some context.</p>
<h2 id='what_is_flynn'>What is Flynn?</h2>
<p>Before development, I started writing the official <a href='https://github.com/flynn/flynn-guide'>Flynn Guide</a>. There I explained Flynn like this:</p>
<blockquote>
<p>Flynn has been marketed as an open source Heroku-like Platform as a Service (PaaS), however the real answer is more subtle.</p>
<p>Flynn is two things:</p>
<p>1) a “distribution” of components that out-of-the-box gives companies a reasonable starting point for an internal “platform” for running their applications and services,</p>
<p>2) the banner for a collection of independent projects that together make up a toolkit or loose framework for building distributed systems.</p>
<p>Flynn is both a whole and many parts, depending on what is most useful for you. The common goal is to democratize years of experience and best practices in building distributed systems. It is the software layer between operators and developers that makes both their lives easier.</p>
</blockquote>
<p>It’s easy right now to describe Flynn as another, better, modern open source PaaS. But it’s really much more than that. I usually need to underline this in discussions because in most people’s minds, a PaaS is a black box system that you deploy and configure and then you have something like Heroku, just as you can deploy OpenStack Nova and get something like EC2.</p>
<p>Flynn can be that, but it’s designed so it can be used as an open system, or framework, to build a service-oriented application operating environment. The truth is, if you’re building and operating any sort of software as a service, you’re not just building an application, you’re building a <em>system</em> to support your application and the processes around it.</p>
<p>You might be tempted to call Flynn a tool for “devops”. While that might be true, remember that the original ideas around devops were about organization-wide systemic understanding of your application, its lifecycle, and its operation. In reality, Flynn is designed for this type of thinking and should hopefully blur the line between operations and engineering, encouraging both to work together and think about the <em>entire</em> system you’re building.</p>
<h2 id='why_build_flynn_how_did_it_come_about'>Why build Flynn, how did it come about?</h2>
<p>This is a long story, but it provides context for the vision of Flynn, what problems inspired it, and shows just how long the idea has been stirring. Though keep in mind Flynn is a collaboration and this is just my half of the story.</p>
<h3 id='falling_in_love_with_paas'>Falling in love with PaaS</h3>
<p>For years I was obsessed with improving the usefulness and programmability of our collective macro distributed system of web services and APIs. Think webhooks. Circa 2006 I was also building lots of little standalone web utilities and APIs for fun. I quickly learned that using technologies like App Engine and Heroku was imperative to sanely operate so many of these free services, and keep costs near zero for what I considered public utilities.</p>
<p>It turns out, for the exact same reasons of cost and automation, these PaaS providers were slowly revolutionizing a subset of commercial application development. The idea of generic managed applications (“NoOps”) and streamlining the delivery pipeline (necessary for Continuous Delivery) has always had huge implications for web apps and their businesses. For me, even though PaaS providers couldn’t have come soon enough, I always seemed to want more than what they could provide. I constantly struggled with the limitations of App Engine, Heroku, and dotCloud. Originally they were limited to certain languages, certain types of computation, certain types of protocols. In fact, there still isn’t a great PaaS provider that lets you build or deploy a non-HTTP service, like say, an SMTP server or custom SSH server.</p>
<h3 id='the_divide_between_paas_and_hostbased_infrastructure'>The divide between PaaS and host-based infrastructure</h3>
<p>For all the systems and design knowledge, best practices, and solutions to important problems that Heroku, dotCloud, App Engine, and others have figured out, if for some reason you cannot use them, you get none of it. If it’s too expensive, or your system is just too complicated, or you need to use a protocol they don’t support, you just get EC2 or bare metal hosts and have to work from there. If you’re lucky or smart, depending on who makes these decisions, you get to use Chef or Puppet.</p>
<p>But I’ll be honest, the Chef and EC2 combo is still a huge step down from what a system like Heroku can offer. What’s more, large scale organizations like Google and Twitter have it pretty well figured out, but they have it figured out for them. The rest of us are left with a myriad of powerful, if mystical, solutions for building distributed systems, like Mesos and ZooKeeper. If we’re lucky enough to discover and understand those projects, we often avoid them until absolutely necessary and only then figure out how to integrate them into our already complex systems, most of which we had to build ourselves because the baseline starting point has always been “a Linux server”.</p>
<h3 id='twilio_and_serviceoriented_architectures'>Twilio and service-oriented architectures</h3>
<p>For me, a lot of this was learned at <a href='https://www.twilio.com/'>Twilio</a>, which was a perfect place to think about this space. The Twilio system, behind that wonderfully simple API, is a highly complex service-oriented system. When I left a couple years ago, it involved roughly 200 different types of services to operate. Some were off-the-shelf open source services, like databases or caches. Most of them were custom services that spoke to each other. Some were written in Python, some in PHP, some in Java, others in C. Lots of protocols were used, both internally and publicly facing. Primarily HTTP, but also SIP, RTP, custom TCP protocols, even a little ZeroMQ. Most people forget that databases also add protocols to the list.</p>
<p>I can’t tell you how many problems come with a system that complicated. Though it did its job quite well, the system was effectively untestable, had an atrocious delivery pipeline, was incredibly inefficient in terms of EC2 instances, and nobody really understood the whole system. A lot of these are common problems, but they’re compounded by the scale and complexity of the system.</p>
<p>The first half of my time at Twilio was spent building scalable, distributed, highly-available messaging infrastructure. What a great way to learn distributed systems. However, I ran into so many problems and was so frustrated by the rest of the system that I dedicated the second half of my time at Twilio to improving the infrastructure, and helped form the Platform team. The idea was that we would provide a service platform for the rest of the engineering organization to use. Unfortunately it never really became more than a glorified operations and systems engineering team, but we did briefly get the chance to really think through the ideal system. If we could build an ideal internal platform, what would it look like? We knew it looked a lot like Heroku, but we needed more. What we ended up with looked a lot like Flynn. But it never got started.</p>
<p>I remember very clearly working out sketches for a primitive that I thought was central to the whole system. I thought of it as a “dyno manager”: a utility to manage the equivalent of Heroku dynos on a single host. These were basically fancy Linux containers. Again, though, it wasn’t important to Twilio at the time. Eventually, I left Twilio and started contracting.</p>
<h3 id='more_history_with_cloud_infrastructure'>More history with cloud infrastructure</h3>
<p>First, I worked with some old friends back from the <a href='http://bit.ly/1fBtien'>NASA Nebula</a> days. I forgot to mention, in 2009, before Twilio, I worked at NASA building open source cloud infrastructure. The plan was to implement an EC2, then a sort of App Engine on top of it, and then specifically for purposes at NASA, lots of high level application modules. Turns out the first part was hard enough. We started using Eucalyptus, but realized it was not going to cut it. Eventually the team wrote their own version from what they learned using Eucalyptus, called it Nova, partnered with Rackspace, and that’s how <a href='https://www.openstack.org/'>OpenStack</a> was born.</p>
<p>I was on the project pre-OpenStack to actually work on the open source PaaS back then in 2009, but we never got to that before I left. Probably for the best in terms of timing. Another reason I was there was that we also wanted to provide project hosting infrastructure at NASA. This was before Github was popular, and in fact, I had been running a competing startup, DevjaVu, since 2006 that productized Trac and SVN. As Github got more popular and I was distracted with other projects, I decided to shut down DevjaVu, admitting that Github was doing it right. But my experience meant I could easily throw together what was originally going to be code.nasa.gov.</p>
<p>Fast forward to my contracting after Twilio: I worked with my friends at <a href='http://www.pistoncloud.com/'>Piston Cloud</a>, one of the OpenStack startups that fell out of the NASA Nebula project. My task wasn’t OpenStack related; it was actually to automate the deployment of <a href='http://www.cloudfoundry.com/'>CloudFoundry</a> on top of OpenStack for a client. CloudFoundry was one of the first open source PaaS projects. It popped up in 2011. This gave me a taste of CloudFoundry, and boy it was a bad one. Ignoring anything about CloudFoundry itself, just deploying it from scratch, while 100% automated, would take 2 <em>hours</em> to complete. Nevertheless, there are still some aspects of the project I admire.</p>
<h3 id='docker_and_dokku'>Docker and Dokku</h3>
<p>My next big client turned out to be an old user of DevjaVu, but I never realized it until I started talking with them. It was a company called dotCloud. Quickly hitting it off with Solomon Hykes, we tried to find a project to collaborate on. I mentioned my “dyno manager” concept, and he brought up their next-gen container technology. Soon I was working on a prototype called <a href='http://www.docker.io/'>Docker</a>.</p>
<p>The final project was mostly a product of the minds of Solomon and the team, but while working on the prototype I made sure that it could be used for the systems I was envisioning. In my mind it was just one piece of a larger system, though a very powerful piece with many other uses and applications. Solomon and I knew this, and would often say it was the next big thing, but it’s still a bit crazy to see that it’s turning out to be true.</p>
<p>Not long after Docker was released, Solomon and I went to give a talk at GlueCon to preach this great new truth. The day before the talk, I spent 6 hours hacking together a demo for the talk that would demonstrate how you could “easily” build a PaaS with Docker. I later released this as <a href='https://github.com/progrium/dokku'>Dokku</a>, a Docker powered mini-Heroku.</p>
<p>Dokku intentionally left out anything having to do with distributed systems. It was meant for a single host. I thought maybe it would be good for personal use or for building internal tools. Turns out it was, and it got pretty popular. I had a note in the readme that said it intentionally does not scale to multiple hosts, and that perhaps this could be done as a separate project that I referred to as “Super Dokku”. That’s what I imagined Flynn as at the time.</p>
<h3 id='flynn_takes_shape'>Flynn takes shape</h3>
<p>Now, back around the time I was working on the Docker prototype in Winter 2012, I was approached by two guys, Daniel Siders and Jonathan Rudenberg, who had been working on the Tent protocol and its reference implementation. They wanted to build a fully distributed, enterprise grade, open source platform service. They said they were going to be working on it soon, and they wanted to work on it with me. The only problem was that they didn’t have the money yet; they’d get back to me after a few funding meetings.</p>
<p>Later, I think around the time I released Dokku mid-2013, Daniel and Jonathan approached me again. They were serious about this platform service project and had the idea to crowdfund it with company sponsorships. You’d think I’d be rather dubious about the idea, but given the growing interest in Docker, the great response from Dokku, and basically testimonial after testimonial of companies wanting something like this, I figured it could work.</p>
<p>We decided to call the project Flynn, and got to work comparing notes and remote whiteboarding the project. I was lucky that they were so like-minded: we were already thinking of very similar architectures and generally agreed on approaches. We put together the Flynn Guide and the website copy and funding campaign using <a href='http://selfstarter.us/'>Selfstarter</a>, then let it loose.</p>
<p>We quickly met our funding goal for the year and then spent the rest of the year working on Flynn. Unfortunately, the budget only covered part-time work, but we planned to have a working development release by January 2014.</p>
<h2 id='what_now'>What now?</h2>
<p>It’s now February 2014, so let’s take a look at where we are.</p>
<p>Like most software schedules, ours fell behind a little. While the project has been in the open on Github from the beginning, we planned to share a rough but usable developer release last month. We’re <em>so</em> close.</p>
<p>What makes it difficult is that we’re out of our 2013 budget! This affects my contribution more than Jonathan’s. I’ve been putting time into it here and there, but it no longer pays my bills. That could change soon, but until then things might move a little slower until our initial release. Only after the release can we go for another sponsorship campaign, so you can see how right now is just a little frustrating.</p>
<p>That said, there’s still more and more interest in the project, we already have a few brave souls that have been contributing to the project components, and like I said, hopefully the money situation will sort itself out soon. A few things are in the works.</p>
<p>In the meantime, although I’m not as active on it at this moment, I do feel compelled to use this time to write about it here on my blog. Hopefully I can catch everybody up on the architecture, discuss design decisions, and talk about the future, and then a lot of that should be usable for official project documentation.</p>
<p>And hell, if I can’t work on Flynn for some reason, after all this, hopefully the writing will allow somebody to continue my work. :)</p>
<h1 id='viewdocs_hosted_markdown_project_documentation'>Viewdocs: Hosted Markdown project documentation (finally!)</h1>
<p><em>2013-11-13 · <a href='http://progrium.com/blog/2013/11/13/viewdocs-hosted-markdown-project-documentation'>Permalink</a></em></p>
<p>A huge part of the user experience for open source software is the documentation. When writing new software to be adopted, I’ve learned it’s more important to first write decent docs than tests. And when I forget, <a href='https://github.com/kennethreitz'>Kenneth Reitz</a> is there to remind me.</p>
<p>When I’ve outgrown a README on Github, I only consider two options for providing documentation: <a href='http://pages.github.com/'>Github Pages</a> and <a href='https://readthedocs.org/'>Read the Docs</a>. Unfortunately, I have problems with both of them. Chiefly, Read the Docs makes me use reStructuredText, and Github Pages means maintaining a separate orphan branch and using a static page generator.</p>
<p>What I’ve really wanted is something like <a href='http://gist.io/'>Gist.io</a>, but for my repository. Nobody has stepped up, so I built it.</p>
<p>I call it <a href='http://progrium.viewdocs.io/viewdocs'>Viewdocs</a>. It renders static pages on-demand from Markdown in your project’s docs directory. There’s no setup, just follow the conventions and it works. It may even already be working for you, since Markdown in a docs directory is not that uncommon. And keeping your documentation in the same branch as your code means it’s easier for people to contribute docs with their pull requests.</p>
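To make that concrete, here is a hypothetical repo layout. The post only promises that Markdown in a <code>docs</code> directory works; the file name <code>index.md</code> is my assumption, so check the Viewdocs homepage for the actual conventions.

```shell
# Hypothetical layout: the only stated convention is Markdown files in
# the project's docs/ directory. index.md as the landing page is an
# assumption, not documented behavior.
mkdir -p myproject/docs
printf '# My Project\n\nDocs served straight from the repo, no build step.\n' > myproject/docs/index.md
ls myproject/docs
```

Because the docs live in the same branch as the code, a pull request can update both at once.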
<p>The default layout is borrowed from Gist.io, giving you a clean, elegant documentation site. All you have to do is write some Markdown. That’s about all there is to it.</p>
<p>You can read more on the <a href='http://progrium.viewdocs.io/viewdocs'>homepage for Viewdocs</a>, which is powered by Viewdocs. Or here’s a quick video introduction:</p>
<iframe allowFullScreen='allowFullScreen' frameborder='0' height='394' mozallowfullscreen='mozallowfullscreen' src='http://player.vimeo.com/video/79066808' webkitAllowFullScreen='webkitAllowFullScreen' width='700'>No iFrames?</iframe>
<h1 id='hacker_dojo_community_trading_zone'>Hacker Dojo: Community Trading Zone</h1>
<p><em>2013-08-18 · <a href='http://progrium.com/blog/2013/08/18/hacker-dojo-community-trading-zone'>Permalink</a></em></p>
<p>I recently came out of a Hacker Dojo board meeting, as I do every month, but this time with a renewed sense of excitement for Hacker Dojo. We began with the usual board meeting stuff — finances, staff benefits, etc — but there was one final item to discuss that’s more in tune with the reason I’m even on the board. There has been an increasingly pressing issue around what Hacker Dojo is. We used to know what it was and had a reasonable idea of what we always wanted it to be, but we’ve grown, we’ve learned, and our model has to adapt. This discussion led to a rethinking of the conceptual structure of Hacker Dojo.</p>
<p>One of the reasons this came up is growth. We’ve had consistent membership growth with only a couple of expected downturns due to various setbacks. For example, when we were temporarily limited to a maximum occupancy of 49 people, membership dropped because people couldn’t throw the same events as before. Despite setbacks, we’ve had impressive long-term growth. If you describe us as a “hackerspace,” we are the largest in the United States, and I believe one of the largest in the world. It’s clear that overall we’re doing quite well, but we want to take it further. We want to keep pushing because Hacker Dojo means a lot to all of us and we want to see it, and the culture and ideals that go with it, reach new people and new places.</p>
<p>We’ve always talked about franchising and starting new locations, but we’ve learned that a single, 24/7 location with 400 members and around 2,000 people coming through every month is quite difficult to run, especially as a bootstrapped non-profit with minimal staff. We’ve had sponsorships, but we work hard for those sponsorships and provide services to receive them. Most of our income comes from membership fees. Despite all this we’re trying our best to continue to take Hacker Dojo to the next level, and it should be noted we have been quite successful so far, for many reasons that could go into another long blog post.</p>
<p>Scaling any organization is hard. Scaling an organization like this one can be extra difficult, especially when we’re trying to maintain the grassroots, bottom-up culture that started it. We’ve always tried to support the concept of a democratic organization. For the first few years we were 100% volunteer run, with no paid employees. The growth of Hacker Dojo has generated a lot of extra work that nobody really wants to do. Raising money, working with the city, organizing contractors, dealing with financial issues, having to move to a new location and rent the previous one … all this requires a LOT of leg work and consistent attention that just wasn’t happening with volunteers.</p>
<p>Eventually, you need to start hiring full or part-time people. We now have a small staff of paid employees who tackle these more time-intensive tasks. While this has helped us manage the growth, it has created an interesting tension between the forces of the democratic, bottom-up nature of our organization and the forces of more traditional, more centralized modes of operation necessary for efficient execution of a vision: in the short term, to improve the quality of the experience at Hacker Dojo, and in the long term, to bring Hacker Dojo and all that it stands for — what some have called the epitome of true Silicon Valley culture — to more people.</p>
<p>This tension has been healthy and has helped Hacker Dojo often reap the benefits of both worlds. However, as we grow, so does the tension and the discussion around it. This was the catalyst for our discussion at the board meeting. The realization I’m excited about is somewhat of an aside from this issue of tension, but the issue was the catalyst for revisiting what Hacker Dojo is. I quite enjoy the occasional existential crisis, as it often results in a refreshed sense of purpose and meaning.</p>
<p>We revisited many ideas, for example that Hacker Dojo is a platform, and like many platforms it can be hard to describe. We serve purposes in the worlds of education, business, social, and many others. We’d talked about how Hacker Dojo has played a part in not just projects and startups, but relationships — partnerships, friendships, and even marriages. We considered different organizations for analogy: universities, incubators, fraternities, and anything else that comes close to an existing framework for all the amazingness that Hacker Dojo produces.</p>
<p>Then it hit us. Communities. Plural.</p>
<p>What we realized is that we’ve effectively been treating Hacker Dojo as one community. We’ve sort of acknowledged that there are sub-groups within Hacker Dojo, but we’ve more or less operated under the assumption that we serve two types of citizens: members and the general public. As we’ve gotten larger, it’s become much more difficult to effectively treat either of those as one group. The reality is that they are all actually part of many communities.</p>
<p>The idea was right under our nose the whole time, just under a different guise. Events have always been a core part of Hacker Dojo because Hacker Dojo started with the idea that it could be a place where people could meet and host events like the event that inspired Hacker Dojo itself, SuperHappyDevHouse. When an event happens on a regular basis it turns into a community. A group of people with a common interest and/or set of values come together, turn into a community, and these communities grow, develop, and spawn really interesting things, just as SuperHappyDevHouse spawned Hacker Dojo.</p>
<p>Hacker Dojo now serves many communities, not only internal ones but external ones as well. External communities can also leverage the infrastructure that Hacker Dojo provides. These communities might already include a member of Hacker Dojo, or someone in the community chooses to become a member, usually to throw an event. Once that community gets into Hacker Dojo, they see not only Hacker Dojo, but all the other communities that come together under our roof. Sometimes this inspires even more people to sign up as members, not just to be a part of Hacker Dojo, but to participate in and see what other communities Hacker Dojo offers.</p>
<p>We’re now thinking of Hacker Dojo not just as a community hub, but as a hub of communities. A community <a href='http://en.wikipedia.org/wiki/Trading_zones'>trading zone</a>, if you will. While we will always support and listen to individual members, we should begin to think of communities, in the plural, as first-class citizens of Hacker Dojo. This may seem like a subtle change but it is a big difference. It means acknowledging and supporting the communities that operate in and around Hacker Dojo. It means going to those communities and asking how we can better serve <em>them as a community</em>, not just individual members.</p>
<p>By connecting with communities we can provide infrastructure to foster their growth and development. Imagine going to the Hacker Dojo website and seeing a page devoted to the many communities of Hacker Dojo. When a new member signs up they could indicate their interests and we could provide them with a list of communities they might be interested in. Hacker Dojo would in a sense then be improving the communities’ “deal flow.”</p>
<p>Rethinking Hacker Dojo as infrastructure for communities has led to lots of exciting new ideas. By focusing on empowering them with infrastructure that allows them to flourish, we are then supporting our members in a more meaningful way.</p>
<p>I’m hoping this simple change in the way the board and members invested in Hacker Dojo think about Hacker Dojo will lead to lots of positive change. We don’t have these conversations very often on the board, but we need to have them to maintain the vision of Hacker Dojo, and we need to have them in public. Clearly this is a collaborative effort, so we want to know how the general community feels about this idea. So I’m putting this out there, and hopefully it will lead to more exciting discussions.</p>
<h1 id='dokku_the_smallest_paas_implementation'>Dokku: The smallest PaaS implementation you've ever seen</h1>
<p><em>2013-06-19 · <a href='http://progrium.com/blog/2013/06/19/dokku-the-smallest-paas-implementation-youve-ever-seen'>Permalink</a></em></p>
<p><a href='https://github.com/progrium/dokku'>Dokku</a> is a mini-Heroku, powered by Docker, written in less than 100 lines of Bash. Once it’s set up on a host, you can push Heroku-compatible applications to it via Git. They’ll build using Heroku buildpacks and then run in isolated containers. The end result is your own, single-host version of Heroku.</p>
<p>Dokku is under 100 lines because it’s built out of several components that do most of the heavy lifting: Docker, Buildstep, and gitreceive.</p>
<ul>
<li><a href='http://www.docker.io/'>Docker</a> is a container runtime for Linux. This is a high-level container primitive that gives you a similar technology to what powers Heroku Dynos. It provides the heart of Dokku.</li>
<li><a href='https://github.com/progrium/buildstep'>Buildstep</a> uses Heroku’s open source buildpacks and is responsible for building the base images that applications are built on. You can think of it as producing the “stack” for Dokku, to borrow a concept from Heroku.</li>
<li><a href='https://github.com/progrium/gitreceive'>Gitreceive</a> is a project that provides you with a git user that you can push repositories to. It also triggers a script to handle that push. This provides the push mechanism that you might be familiar with from Heroku.</li>
</ul>
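Put together, a first deploy might look something like this. The host and app names are placeholders, and the exact remote syntax is an assumption on my part; consult the Dokku readme for the real setup.

```shell
# Hypothetical deploy flow; dokku.example.com and myapp are placeholders.
# On a real push, gitreceive accepts the repository, Buildstep builds it
# with a Heroku buildpack, and Docker runs the result in a container.
git init -q myapp
cd myapp
git remote add dokku git@dokku.example.com:myapp
git remote -v   # `git push dokku master` would now trigger a build and deploy
cd ..
```

The point is that the whole interface is plain Git, just like Heroku's.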
<p>There are a few other projects being developed to support Dokku and expand its functionality without inflating its line count. Each project is independently useful, but I’ll share more about these as they’re integrated into Dokku.</p>
<p>For now, here’s a screencast that shows how to set up Dokku, along with a quick walk-through of the code.</p>
<iframe allowFullScreen='allowFullScreen' frameborder='0' height='394' mozallowfullscreen='mozallowfullscreen' src='http://player.vimeo.com/video/68631325' webkitAllowFullScreen='webkitAllowFullScreen' width='700'>No iFrames?</iframe>
<h1 id='executable_tweets_and_programs_in_short_urls'>Executable Tweets and Programs in Short URLs</h1>
<p><em>2013-01-05 · <a href='http://progrium.com/blog/2013/01/05/executable-tweets-and-programs-in-short-urls'>Permalink</a></em></p>
<p>A few weeks ago I was completely consumed for the better part of a day that I would have otherwise spent on more practical work.</p>
<blockquote class='twitter-tweet tw-align-center'><p>Let's reflect. On a whim, I spent 6 hours writing programs that live in URL shorteners to create installable programs from Tweets.</p>— Jeff Lindsay (@progrium) <a data-datetime='2012-12-13T07:01:11+00:00' href='https://twitter.com/progrium/status/279118756561711104'>December 12, 2012</a></blockquote><script charset='utf-8' src='//platform.twitter.com/widgets.js'> </script>
<p>Yeah, what? Weird, right? It started from a Twitter conversation earlier that day with my friend Joel:</p>
<blockquote class='twitter-tweet tw-align-center'><p>$ for app in `heroku apps | grep -v '='`; do echo <a href='https://twitter.com/search/$app'>$app</a>; heroku ps --app <a href='https://twitter.com/search/$app'>$app</a>; done # how to figure out what you have running on heroku</p>— Joël Franusic (@jf) <a data-datetime='2012-12-13T01:10:20+00:00' href='https://twitter.com/jf/status/279030460674347008'>December 12, 2012</a></blockquote><blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279030460674347008'><p>@<a href='https://twitter.com/jf'>jf</a> reminds me of yet another app i need to build</p>— Jeff Lindsay (@progrium) <a data-datetime='2012-12-13T01:10:56+00:00' href='https://twitter.com/progrium/status/279030609345667072'>December 12, 2012</a></blockquote><blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279030609345667072'><p>@<a href='https://twitter.com/progrium'>progrium</a> I just wrote and launched a "client side" "bash app" right there. Bam.</p>— Joël Franusic (@jf) <a data-datetime='2012-12-13T01:15:47+00:00' href='https://twitter.com/jf/status/279031831809097728'>December 12, 2012</a></blockquote><blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279031831809097728'><p>@<a href='https://twitter.com/jf'>jf</a> app tweets. 
an app in a tweet.</p>— Jeff Lindsay (@progrium) <a data-datetime='2012-12-13T01:16:53+00:00' href='https://twitter.com/progrium/status/279032107265839104'>December 12, 2012</a></blockquote><blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279032107265839104'><p>@<a href='https://twitter.com/progrium'>progrium</a> $ apptweet install id:279030460674347008</p>— Joël Franusic (@jf) <a data-datetime='2012-12-13T01:19:10+00:00' href='https://twitter.com/jf/status/279032681373790208'>December 12, 2012</a></blockquote>
<p>This wishful brainstorming inspired me to start building exactly that. But first, a digression.</p>
<p>The idea reminded me of something I got from <a href='https://twitter.com/rndmcnlly'>Adam Smith</a> back when I was working on Scriptlets. If you can execute code from a URL, you could “store” a program in a shortened URL. I decided to combine this with the curl-pipe-bash technique that’s been getting popular for bootstrapping installs. If you’re unfamiliar, take this Gist of a Bash script:</p>
<script src='https://gist.github.com/4464431.js'> </script>
<p>Given the “view raw” URL for that Gist, you can curl it and pipe it into Bash to execute it right there in your shell. It would look like this:</p>
<pre><code>$ curl -s https://gist.github.com/raw/4464431/gistfile1.txt | bash
Hello world</code></pre>
<p>Instead of having Gist store the program, how could we make it so the source would just live within the URL? Well, in the case of curl-pipe-bash, we just need that source to be returned in the response body for a URL. So I built a simple app to run on Heroku that takes the query string and outputs it in the body, a sort of echo service.</p>
<script src='https://gist.github.com/4464442.js'> </script>
<p>Letting you do this:</p>
<pre><code>$ curl "http://queryecho.herokuapp.com?Hello+world"
Hello world</code></pre>
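<p>For reference, the echo service only needs a few lines. Here’s an illustrative sketch as a WSGI app (the actual code is in the Gist embedded above; the name <code>echo_app</code> is my own):</p>

```python
# Illustrative reconstruction of the query-echo service, not the
# original code from the Gist above. It echoes the request's raw
# query string back as the response body.
from urllib.parse import unquote_plus

def echo_app(environ, start_response):
    # Bitly and browsers encode spaces as "+", so "Hello+world"
    # comes back out as "Hello world"
    body = unquote_plus(environ.get("QUERY_STRING", "")).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

<p>Any WSGI server can run it; once deployed, <code>curl "http://host/?Hello+world"</code> prints <code>Hello world</code>.</p>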
<p>You could then conceal and shorten this with a URL shortener like Bitly. I prefer the j.mp domain Bitly has. And since the shortener just redirects you to the long URL, you’d use the <code>-L</code> option in curl to make it follow redirects:</p>
<pre><code>$ curl -L http://j.mp/RyUN03
Hello world</code></pre>
<p>When you make a short URL from the Bitly website, they conveniently make sure the query string is properly URL encoded. So if I just typed <code>queryecho.herokuapp.com/?echo "Hello world"</code> into Bitly, it would give me a short URL with a properly URL encoded version of that URL that would return <code>echo "Hello world"</code>. We could then curl-pipe this URL into Bash:</p>
<pre><code>$ curl -Ls http://j.mp/VGgI3o | bash
Hello world</code></pre>
<p>See what’s going on there? We wrote a simple Hello world program in Bash that effectively lives in that short URL. And we can run it with the curl-pipe-bash technique.</p>
<p>Later in our conversation, Joël suggested an example “app tweet” that, when executed in Bash with a URL argument, tells you where that URL redirects. So if you gave it a short URL, it would tell you the long URL.</p>
<blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279032107265839104'><p>@<a href='https://twitter.com/progrium'>progrium</a> $ echo "$1"; curl -IL --silent "$1" | grep Location | grep -o 'http.*' # this is a URL "unshortener"</p>— Joël Franusic (@jf) <a data-datetime='2012-12-13T01:23:08+00:00' href='https://twitter.com/jf/status/279033679592951809'>December 12, 2012</a></blockquote><script charset='utf-8' src='//platform.twitter.com/widgets.js'> </script>
<p>Just so you know what it would look like, if you put that program in a shell script and ran it against a short URL that redirected to www.google.com, this is what you would see:</p>
<pre><code>$ ./unshortener.sh http://j.mp/www-google-com
http://j.mp/www-google-com
http://www.google.com/</code></pre>
<p>It prints the URL you gave it and then resolves the URL and prints the long URL. Pretty simple.</p>
<p>So I decided to put this program in a short URL. Here we have <a href='http://j.mp/TaHyRh'>j.mp/TaHyRh</a> which will resolve to:</p>
<pre><code>http://queryecho.herokuapp.com/?echo%20%22$url%22;%20curl%20-ILs%20%22$url%22%20|%20grep%20Location%20|%20grep%20-o%20'http.*'</code></pre>
<p>Luckily I didn’t have to do all that URL encoding. I just pasted his code in after <code>queryecho.herokuapp.com/?</code> and Bitly took care of it. What’s funny is that this example program is made to run on short URLs, so when I told him about it, my example ran on the short URL that contained the program itself:</p>
<pre><code>$ curl -Ls http://j.mp/TaHyRh | url=http://j.mp/TaHyRh bash
http://j.mp/TaHyRh
http://queryecho.herokuapp.com/?echo "$url"; curl -ILs "$url" | grep Location | grep -o 'http.*'</code></pre>
<p>You may have noticed my version of the program uses <code>$url</code> instead of <code>$1</code> because we have to use environment variables to provide input to curl-pipe-bash scripts. For reference, to run my URL script against the google.com short URL we made before, it would look like this:</p>
<pre><code>$ curl -Ls http://j.mp/TaHyRh | url=http://j.mp/www-google-com bash
http://j.mp/www-google-com
http://www.google.com/</code></pre>
<p>Okay, so we can now put Bash scripts in short URLs. What happened to installing apps in Tweets? Building an <code>apptweet</code> program like Joël imagined would actually be pretty straightforward. But I wanted to build and install it with these weird programs-in-short-URLs.</p>
<p>The first obstacle was figuring out how to get it to modify your current environment. Normally curl-pipe-bash URLs install a downloaded program into your <code>PATH</code>. But I didn’t want to install a bunch of files on your computer. Instead I just wanted to install a temporary Bash function that would disappear when you leave your shell session. In order to do this, I used a variant of the curl-pipe-bash technique with eval:</p>
<pre><code>$ eval $(curl -Ls http://j.mp/setup-fetchtweet)
$ fetchtweet 279072855206031360
@jf you asked for it... Jeff Lindsay (@progrium) December 13, 2012</code></pre>
<p>As you can see by inspecting that URL, it just defines a Bash function that runs a Python script from a Gist. I cheated and used Gist for some reason. That Python script uses the Twitter embed endpoint (same one used for the embedded Tweets in this post) to get the contents of a Tweet without authentication.</p>
<p>The next thing I built installed and used fetchtweet to get a Tweet, parsed it, and put it in a Bash function named by the string after an <code>#exectweet</code> hashtag (which, since it starts with <code>#</code>, also happens to begin a comment in Bash). So here we have a Tweet with a program in it:</p>
<blockquote class='twitter-tweet tw-align-center'><p>echo Hello world <a href='https://twitter.com/search/%23exectweet'>#exectweet</a> helloworld</p>— Jeff Lindsay (@progrium) <a data-datetime='2012-12-13T04:57:28+00:00' href='https://twitter.com/progrium/status/279087620145958912'>December 12, 2012</a></blockquote><script charset='utf-8' src='//platform.twitter.com/widgets.js'> </script>
<p>To install it, we’d run this:</p>
<pre><code>$ id=279087620145958912 eval $(curl -Ls http://j.mp/install-tweet)
Installed helloworld from Tweet 279087620145958912
$ helloworld
Hello world</code></pre>
<p>We just installed a program from a Tweet and ran it! Then I wrapped this up into a command you could install. To install the installer. This time it would let you give it the URL to a Tweet:</p>
<pre><code>$ eval $(curl -Ls http://j.mp/install-exectweet)
Installed exectweet
$ exectweet https://twitter.com/progrium/status/279087620145958912
Installed helloworld from Tweet 279087620145958912
$ helloworld
Hello world</code></pre>
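<p>Under the hood, the parsing step is simple: the Tweet’s text splits around the <code>#exectweet</code> hashtag into a program body and a function name. A hypothetical sketch of that logic (the actual install-tweet script isn’t reproduced in this post):</p>

```python
# Hypothetical sketch of the exectweet parsing step; the real
# install-tweet script isn't shown here.
def parse_exectweet(text):
    # "<program> #exectweet <name>" -> ("<program>", "<name>")
    body, sep, rest = text.partition("#exectweet")
    if not sep or not rest.strip():
        raise ValueError("not an exectweet: missing '#exectweet <name>'")
    return body.strip(), rest.strip().split()[0]
```

<p>Installing then amounts to eval-ing <code>name() { body; }</code> in the current shell session.</p>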
<p>Where would I go from there? An app that calls itself into a loop, of course!</p>
<blockquote class='twitter-tweet tw-align-center'><p>exectweet <a href='http://t.co/ri0XTprA' title='http://j.mp/recursive-app'>j.mp/recursive-app</a> ; recursive-app <a href='https://twitter.com/search/%23exectweet'>#exectweet</a> recursive-app</p>— Jeff Lindsay (@progrium) <a data-datetime='2012-12-13T07:20:12+00:00' href='https://twitter.com/progrium/status/279123541054595074'>December 12, 2012</a></blockquote><script charset='utf-8' src='//platform.twitter.com/widgets.js'> </script>
<pre><code>$ exectweet https://twitter.com/progrium/status/279123541054595074 && recursive-app
Installed recursive-app from Tweet 279123541054595074
Installed recursive-app from Tweet 279123541054595074
Installed recursive-app from Tweet 279123541054595074
Installed recursive-app from Tweet 279123541054595074
...</code></pre>
<p>Obviously, this whole project was just a ridiculous, mind-bending exploration. I shared most of these examples on Twitter as I was making them. Here was my favorite response.</p>
<blockquote class='twitter-tweet tw-align-center' data-in-reply-to='279087620145958912'><p>@<a href='https://twitter.com/progrium'>progrium</a> End of the world, brought to you by Jeff Lindsay, via the Internet collapsing in on itself and taking the world with it.</p>— Matt Mechtley (@biphenyl) <a data-datetime='2012-12-13T05:00:44+00:00' href='https://twitter.com/biphenyl/status/279088441084497922'>December 12, 2012</a></blockquote><script charset='utf-8' src='//platform.twitter.com/widgets.js'> </script>
<p>You may have noticed, it just happened to be 12/12/2012 that day.</p>
<style type='text/css'>
.twitter-tweet-rendered {
clear: none!important;
}
.twt-reply {
display: none!important;
}
</style>Where did Localtunnel come from?2013-01-01T00:00:00-06:00http://progrium.com/blog/2013/01/01/where-did-localtunnel-come-from<p>Five years ago, async network programming scared me. I was a web developer. Working with the high level tools and frameworks of HTTP seemed much easier than any sort of serious low level networking. Especially since network programming would often also mean some kind of concurrent programming with threads or callbacks. I had mostly avoided multithreading and had no idea what an event loop was. I came from PHP.</p>
<p>Around 2007, I was starting to think about webhooks. One motivator was how you could use webhooks to let web developers, like me, build systems that used other protocols without them having to work with that protocol. For example, one of my first projects with webhooks was called Mailhooks. I wanted to accept email in my application, but I didn’t want to deal with email servers. I wanted to get an HTTP POST when an email came in with all the email fields nicely provided as POST parameters.</p>
<p>This is how I started working with Twisted. Twisted became my main tool to build webhook adapters for existing protocols. I even tried to generalize that idea in a project called Protocol Droid. Slowly I started to grok, and not fear, this kind of programming.</p>
<p>It’s funny how my desire to work with abstractions that didn’t exist yet to avoid a certain kind of programming was directly responsible for me eventually becoming an expert in that kind of programming.</p>
<p>Then in late 2009, I had another idea while thinking about webhooks. It would be great if I could expose a local web server to the Internet with a friendly URL. It should just be a simple command. There would have to be a server, but there could just be a public server that you didn’t even have to think about.</p>
<p>I committed <a href='https://github.com/progrium/localtunnel/tree/prototype'>the first prototype of Localtunnel</a> to Github in January 2010. It was written entirely in Twisted. It also didn’t actually work. I really recommend taking a look because it was terrible. One of the challenges was multiplexing the HTTP requests into a single tunnel connection. My approach was so naive it just didn’t work. As soon as you made more than one request at a time, it broke.</p>
<p>A few months later, I decided to take a different approach. Instead of doing my own protocol, client, and server, I’d just make a wrapper around what I knew already worked: SSH tunneling. This was pretty quick to make happen, and that version is basically what’s been in production to this day.</p>
<p>This shortcut came with a lot of weird quirks. For example, the easiest way I found to implement an SSH tunnel client was a Ruby library, so I implemented the client in Ruby. The server, though, was in Python because I still only really knew Twisted for evented programming.</p>
<p>Actually, using SSH was the source of most of the quirks and annoyances. I was pretty bothered that it slowed down the initial user experience by requiring a public key to be uploaded. But most of the pain was operational. The server, sshd, would create a process for every tunnel. Localtunnel also needed its own user and to pretty much own the SSH configuration for that machine. Then, on occasion, something weird would happen where a tunnel would die and the process would go crazy eating up CPU. It would have to be manually killed or it would eventually bring the server to a halt. And, eventually, the authorized_keys file would become enormous from all the keys uploaded.</p>
<p>On top of all this, SSH is pretty opaque. It’s been around for so long and used so much that it certainly just works … you just don’t really know how. I still don’t know how SSH does tunneling or what the protocol looks like, even after trying to read the RFC for it.</p>
<p>By mid-2011, I was working at Twilio building distributed, real-time messaging systems at scale. I certainly came a long way from fearing async network programming. Localtunnel was still running the implementation based on SSH. By then it had quite a large user base and collected a number of bugs and feature requests. I also had my own operations and user experience wish list. With such a huge list of new requirements, so many problems with the current implementation, and a drastically different experience level and mindset, I decided to redesign Localtunnel from the ground up.</p>
<p>Since I was pretty consumed by Twilio, I didn’t have a lot of time to work on Localtunnel. I thought the biggest bang for buck in the long term would be to slowly work on the new version. They say software is never done, but I personally believe software can be finished. It just requires an aggressive drive for simplicity, and the <em>only</em> way you can make significant advances in simplicity is through redesign.</p>
<p>In the meantime, users continued to experience issues with the current implementation. These problems only got worse as it became more popular. For example, the biggest issue was that the namespace for tunnel names was too small. Users would get requests from old tunnels, and in rare cases tunnel names would get pulled out from under you while using them. This created confusion and a lot of emails and issue tickets, but it still worked with the occasional restart.</p>
<p>I’ve used this constant stream of complaints, which has been going on for almost two years, to make sure I keep making progress on the new version. In fact, I’m pretty sure I needed it because of my lifestyle of abundant projects.</p>
<p>Last week I finally <a href='http://progrium.com/blog/2012/12/25/localtunnel-v2-available-in-beta/'>released a beta of the new version</a>. What’s interesting is that it’s a completely different architecture from what I started out with for the redesign. After the original unreleased prototype, there have been three major approaches to the implementation. In the coming weeks I’m going to share a more technical history of the architecture of Localtunnel, leading up to a deep exploration of what I hope will be its final form.</p>Localtunnel v2 available in beta2012-12-25T00:00:00-06:00http://progrium.com/blog/2012/12/25/localtunnel-v2-available-in-beta<p>A few years back, I released <a href='http://localtunnel.com'>Localtunnel</a> to make it super easy to expose a local web server to the Internet for demos and debugging. Since then, it’s gotten a ton of use. A few people even copied it and tried to make a paid service around the idea. Luckily, Localtunnel will always be free and open source.</p>
<p>With the release of <a href='http://j.mp/localtunnel-v2'>Localtunnel v2</a>, it will not only remain competitive with similar services, but continue to be the innovator of the group. I’ll post more on this later.</p>
<p>For now, let’s talk logistics. The current, soon-to-be-legacy Localtunnel stack includes the client that you install with Rubygems, and a server that runs on a host at Rackspace. These will continue to be available into 2013, but will be marked as deprecated. This means you should be making the switch to v2.</p>
<p>Besides the fact that v1 will eventually be shut down, there are a number of reasons to switch to v2. Here are some of the major ones:</p>
<ul>
<li>It’s actively maintained. Bug reports, pull requests, and service interruptions are dealt with promptly.</li>
<li>No more mysterious requests from old tunnels. The subdomain namespace is much larger.</li>
<li>Custom subdomains. The new client lets you pick a tunnel name on a first come, first served basis.</li>
<li>Supports long-polling, HTTP streaming, and WebSocket upgrades. Soon general TCP tunneling.</li>
<li>No SSH key to start using it. A minor annoyance setting up v1, but it doesn’t exist in v2.</li>
</ul>
<p>One implementation detail that affects users is that the client is now written in Python. This means you won’t use Rubygems to install it. Instead, you can use <code>easy_install</code> or <code>pip</code>.</p>
<pre><code>$ easy_install localtunnel</code></pre>
<p>On some systems, you may need to run this with <code>sudo</code>. If you don’t have <code>easy_install</code>, first make sure you have Python installed:</p>
<pre><code>$ python --version</code></pre>
<p>Localtunnel requires Python 2.6 or later, which comes standard on most systems. If you don’t have Python, you can <a href='http://wiki.python.org/moin/BeginnersGuide/Download'>install it for your platform</a>. If <code>easy_install</code> isn’t available after you install Python, you can install it with this bootstrap script:</p>
<pre><code>$ curl http://peak.telecommunity.com/dist/ez_setup.py | python</code></pre>
<p>Once you’ve installed Localtunnel with <code>easy_install</code>, it will be available as <code>localtunnel-beta</code>. This lets you keep the old client to use in case anything goes wrong with v2 during the beta. Eventually, it will be installed as <code>localtunnel</code>, but only after v1 is shutdown.</p>
<p>Using <code>localtunnel-beta</code> is pretty much the same as before:</p>
<pre><code>$ localtunnel-beta 8000
Thanks for trying localtunnel v2 beta!
Port 8000 is now accessible from http://fb0322605126.v2.localtunnel.com ...</code></pre>
<p>Like I mentioned earlier, you can use a custom tunnel name if it’s not being used:</p>
<pre><code>$ localtunnel-beta -n foobar 8000
Thanks for trying localtunnel v2 beta!
Port 8000 is now accessible from http://foobar.v2.localtunnel.com ...</code></pre>
<p>Keep in mind v2 is in active development. There might be some downtime while I work out operational bugs, but you can always use the old version if you run into problems.</p>
<p>If you do run into any problems, you can <a href='http://twitter.com/progrium'>ping me on Twitter</a>. If you get a traceback, you can <a href='https://github.com/progrium/localtunnel/issues'>create an issue on Github</a>. If you have more in-depth questions or want to get involved in development, check out the <a href='https://groups.google.com/forum/#!forum/localtunnel'>Localtunnel Google Group</a>.</p>HTTP Signatures with Content-HMAC2012-12-17T00:00:00-06:00http://progrium.com/blog/2012/12/17/http-signatures-with-content-hmac<p>Today I wanted to propose another header. It would be used for signing HTTP content with HMAC, and is appropriately called Content-HMAC. In <a href='http://progrium.com/blog/2012/11/26/x-callback-header-an-evented-web-building-block/'>a previous post</a> about the Callback header, I mentioned using an X-Signature header in callback requests to sign the payload of the callback. It looked like this:</p>
<pre><code>X-Signature: sha1=<hexdigest of sha1 hmac></code></pre>
<p>The HMAC would be built with just the content of the request (i.e., no headers, no query params) and a secret key. <a href='http://pubsubhubbub.googlecode.com/svn/trunk/pubsubhubbub-core-0.3.html#authednotify'>This was borrowed directly from the PubSubHubbub spec</a>, but the general idea of using HMAC to sign callback requests has become pretty standard in the world of webhooks. Here are details on how <a href='http://code.google.com/p/support/wiki/PostCommitWebHooks#Authentication'>Google</a> and <a href='http://www.twilio.com/docs/security#validating-requests'>Twilio</a> use them.</p>
<p>Each of these providers is using their own header for basically the same use case. It would seem like there is an opportunity to standardize on a common header format for it. There have been a number of proposals for a general Signature header to sign an entire request. There was a fairly comprehensive one proposed called <a href='http://tools.ietf.org/html/draft-burke-content-signature-00'>Content-Signature</a>. With signing, the difficulty is often getting the input string correct. Most signing mechanisms need to normalize their input. If you’ve ever had to deal with OAuth or AWS signatures, you’ll know what I’m talking about. With request signing, the headers pose a particularly tricky situation, since they often change as the request goes through proxies.</p>
<p>The idea of Content-HMAC is to focus on a simpler goal of signing just the content payload, since it’s normally treated as-is, and is not altered when going through proxies. The X-Signature proposal I had was a decent one, as is almost any cowpath-based proposal, but I realized it would probably be a good idea to limit the implied scope to what it’s really doing: providing an HMAC for request (or response) content.</p>
<p>It turns out there’s a similar header that’s not used that often anymore called Content-MD5. It was a simple mechanism to provide an MD5 digest of the content. My current proposal is to take this existing pattern and apply it to HMAC, giving us the Content-HMAC header:</p>
<pre><code>Content-HMAC: <hash mechanism> <base64 encoded binary HMAC></code></pre>
<p>Here’s an example:</p>
<pre><code>Content-HMAC: sha1 f1wOnLLwcTexwCSRCNXEAKPDm+U=</code></pre>
<p>This proposal borrows its naming convention from Content-MD5, but the format is more similar to Authorization. The Authorization header allows multiple authorization schemes to be used. You define the scheme followed by a space and then the actual authorization data. Since HMAC allows different hashing techniques to be used, we use that pattern here to let you specify the hashing technique. We also take the existing pattern of base64 encoding used in several HTTP headers to make it conform even more to existing standards.</p>
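<p>Computing the header value is straightforward. Here’s a sketch in Python (the secret and content below are example values, not the ones behind the sha1 digest shown above):</p>

```python
# Sketch of building a Content-HMAC header value: an HMAC over the raw
# content bytes only (no headers, no query params), base64-encoded.
# The secret and content here are example values.
import base64
import hashlib
import hmac

def content_hmac(content: bytes, secret: bytes, hash_name: str = "sha1") -> str:
    digest = hmac.new(secret, content, getattr(hashlib, hash_name)).digest()
    return "{0} {1}".format(hash_name, base64.b64encode(digest).decode("ascii"))

header_value = content_hmac(b"Hello world", b"my-secret-key")
```

<p>The receiver recomputes the HMAC over the content it received using the shared secret and compares the two values.</p>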
<p>Content-HMAC was created for callback requests, but it’s a useful way to sign any HTTP request or response payload. For requests, it’s worth mentioning it only applies when there is a content payload, so for example it’s meaningless with GET requests.</p>
<p>It’s also very worth mentioning that content signing is unnecessary when using HTTPS. It currently looks like the future will eventually be 100% SSL encrypted HTTP, but until then, there will always be situations where HTTPS is not available. Content-HMAC is perhaps a stop-gap until we reach that ideal. In the meantime, I think Content-HMAC is a good, standard way to add authorization to callback requests.</p>
<p>Let me know if you have any questions or feedback on this proposal. Further discussion is likely to happen on the <a href='https://groups.google.com/forum/#!forum/webhooks'>Webhooks Google Group</a>.</p>Avoiding environmental fallacy with systems thinking2012-12-15T00:00:00-06:00http://progrium.com/blog/2012/12/15/avoiding-environmental-fallacy-with-systems-thinking<p>In 1905, German chemist Alfred Einhorn invented Novocaine to be used by doctors in surgery as a general anesthetic. Unfortunately, doctors didn’t find Novocaine to be a suitable general anesthetic. However, dentists were dying to use it as a <em>local</em> anesthetic. The inventor didn’t want to sell it for the “mundane purpose” of drilling teeth, so he continued marketing to doctors and surgeons. Einhorn persisted until his death, unwilling to let the market dictate the use of his invention. He felt the intrinsic value of Novocaine as a general anesthetic was enough to sell it as such, no matter what extrinsic value was placed on it by actual market demands. Charles West Churchman would call this an “environmental fallacy.”</p>
<p>Environmental fallacy is the blunder of ignoring or not understanding the effects of the environment of a system. Examples of this fallacy are all around us. Anti-drug legislation fails to see long-term, societal implications because they’re preoccupied by the immediate, localized problems. Efforts to improve a standardized public education are precisely and meticulously solving the wrong problem. Silicon Valley startups spend our brightest intellectual resources on photo sharing and social-whatever, while industries that affect the quality of living for millions are left with bureaucrats.</p>
<p>One could describe these all as failing to see the bigger picture. In systems we call this the environment of a system. The significance of which is governed by the principle of openness.</p>
<p>Openness is the principle that open systems, which includes everything from problems to corporations to opinions to products, can only be understood in the context of their environment. This is because open systems are dependent on and co-determined by their context. A closed system, like a watch or a hammer, can function entirely based on its own internal structure and process. An open system interacts with and is inextricably linked with its environment.</p>
<center><img src='/images/content/open_vs_closed_sys.png' title='Open systems vs Closed systems' /></center>
<p>This insight may seem banal. In fact, younger and more progressive generations are quite familiar with this concept, at least as a vague intuition. But this is a very recent development. We don’t appreciate how little this idea was understood for basically all of human existence up until just a few decades ago.</p>
<p>Science, for example. Science is our greatest effort to understand our objective reality. Like any other open system, it was defined and limited by the context of its time. As modern science began to develop 350 years ago, it was based on a worldview that denied the principle of openness. Most subjects were studied as closed systems.</p>
<p>For the greater part of its life, science has only understood the environment as something to be minimized. This is best shown in laboratories, a symbol of scientific activity, which are specifically designed to exclude the environment. Based on the doctrines of determinism and reductionism, science up until the last 4 or 5 decades has ignored the environment in favor of reductionist explanations focused on internal determinism. At best this only partially describes most actual phenomena. For example, Galileo’s equations for freely falling bodies completely ignore air resistance and the rotation of Earth, and Ohm’s law assumes there will be no dramatic change in surrounding temperature. In both cases, the assumption is no environment.</p>
<p>This understated handicap of traditional science ended up as the major dilemma in the 1992 film Medicine Man. Sean Connery’s character finds a miracle cure for cancer in a flower, but in the lab he’s unable to reproduce it. He eventually finds out the flower itself was not the cure, but that the cure was produced by the flower interacting with another element in its environment. The unintentional moral of the story is about the significance of environment and environmental fallacy.</p>
<p>Only until ecology took off in the mid-20th century did we have a science that explicitly observed the environment, though primarily as a subset of biology. In many ways, ecology was a precursor to systems sciences. The difference between an ecological environment and the environment of a system is that a system environment is more general. It can be used to talk about physical environments, but also abstract environments, such as decision-making and problem-solving environments.</p>
<p>Often the environment refers to all external variables and conditions of a system, but in some cases it might refer to a particular part of the total environment. This is because the environment represents any surrounding system. Any one open system is embedded in a greater system, embedded in an even greater system, and so on. For example, one slice of how nested environments can affect an individual at work might look like this:</p>
<img src='/images/content/nested_environments.png' style='float: left;' title='Nested environments' />
<p>If all of these layers influence each other, you start to realize, maybe somewhat helplessly, that everything depends on everything else. No wonder science originally dismissed the environment. But ignoring the complexities and dynamics of open systems leads to sometimes serious disparities from reality.</p>
<p>In 1850, which for historical context was when California became a state and the US got its 13th president, the leading scientists of the western world convened for a conference in Europe. They actually concluded that in just 50 years, through science, they would have a complete understanding of the universe. This absurd notion stemmed from the foundations of scientific thought, which had been tremendously useful, but also severely limiting. Only after the Heisenberg principle in the late 1920s have we begun to accept that reality is just too complicated to fully understand at once.</p>
<p>Ironically, admitting this has been beneficial to our grasp of reality. It’s helped us realize new frameworks for thinking and coping with our increasingly complex and interdependent world. Luckily, our world is so globalized and connected today that modern generations are growing up with this reality as a daily experience. Systems theory and systems thinking are tools that can keep the appreciation of openness and the defining power of context as a first class tenet in all our endeavors.</p>Async HTTP Responses with Response Redirection2012-12-06T00:00:00-06:00http://progrium.com/blog/2012/12/06/async-http-responses-with-response-redirection<p>What if you could perform any HTTP request, but get the response back via a webhook? This is the simple goal of Response Redirection, a simple micro-protocol for telling an HTTP server to send the response to a URL. Instead of returning the response in the connection created by the request, the response is returned in HTTP callback fashion.</p>
<p>The primary use case for this is handling HTTP responses that take longer than you would prefer to keep an open connection. As we build APIs that start interacting with the real world and human processes, you could expect operations that might take hours to days to complete.</p>
<h3 id='example_of_response_redirection'>Example of Response Redirection</h3>
<p>Response Redirection is done by performing a regular HTTP request with two additions: a Pragma directive telling the server you want the response to be redirected, and a Callback header giving the URL to be used for the response.</p>
<pre><code>GET /helloworld HTTP/1.1
Host: example.com
Pragma: redirect
Callback: <http://server.com/callback>; method="post"; rel="redirect"</code></pre>
<p>The response to this is a 202 Accepted or an appropriate error code. 202 Accepted is the standard response to give for operations that have been accepted and will be processed or finished later. As soon as the server has processed the request and rendered a response, it would perform a request:</p>
<pre><code>POST /callback HTTP/1.1
Host: server.com
Status: 200 OK
Content-Length: 11
Content-Type: text/plain

Hello world</code></pre>
<p>As we talked about with the <a href='http://progrium.com/blog/2012/11/26/x-callback-header-an-evented-web-building-block/'>Callback header</a>, if a <code>secret</code> parameter was given, it would imply that an HMAC signature be provided in the callback request. We’ll revisit this again in another post.</p>
<p>If the server doesn’t understand <code>Pragma: redirect</code>, it would return a normal response to the initial request and the client would have to handle it as usual.</p>
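<p>The server-side check can be sketched in a few lines. This is illustrative only (the post doesn’t define a reference implementation): honor the redirect only when both the Pragma directive and a parseable Callback header are present, otherwise respond normally:</p>

```python
# Illustrative server-side check for Response Redirection. Returns the
# callback URL when the request asks for redirection, or None to signal
# that a normal synchronous response should be sent instead.
import re

def redirect_callback_url(headers):
    if headers.get("Pragma", "").strip().lower() != "redirect":
        return None
    match = re.match(r"\s*<([^>]+)>", headers.get("Callback", ""))
    return match.group(1) if match else None
```

<p>When this returns a URL, the server responds 202 Accepted and later POSTs the rendered response to that URL.</p>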
<h3 id='using_pragma'>Using Pragma</h3>
<p>You’ll notice we didn’t invent a header to tell the server we want to do Response Redirection. You may remember Pragma’s original use was only for <code>Pragma: no-cache</code> and was eventually replaced with other cache control headers. However, the semantics of Pragma remain useful. To quote the <a href='http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.32'>HTTP 1.1 spec</a>:</p>
<blockquote>
<p>The Pragma general-header field is used to include implementation-specific directives that might apply to any recipient along the request/response chain.</p>
</blockquote>
<p>Another potential header field would be the <a href='http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20'>Expect header</a>. The Expect header is designed to let the client specify certain behaviors expected of the server. This would be perfect except for this property:</p>
<blockquote>
<p>The Expect mechanism is hop-by-hop: that is, an HTTP/1.1 proxy MUST return a 417 (Expectation Failed) status if it receives a request with an expectation that it cannot meet.</p>
</blockquote>
<p>In today’s world, this renders it useless unless we were trying to change the behavior of proxies themselves. The Pragma header, on the other hand, was designed to be forwarded by proxies and ignored by any recipient that doesn’t know how to fulfill the directive.</p>
<h3 id='alternate_implementation'>Alternate Implementation</h3>
<p>In the discussion that followed the Callback header post, not only did we learn that the <code>X-</code> header prefix <a href='http://tools.ietf.org/html/rfc6648'>is now deprecated</a>, but also that there is an RFC draft called <a href='http://tools.ietf.org/html/draft-snell-http-prefer-17'>Prefer Header for HTTP</a>.</p>
<p>It actually addresses the issue with the Expect header, providing an alternative for specifying optional preferences for how the server handles a request. One of the example preferences is for returning the response asynchronously, which is exactly what we’re achieving with Response Redirection. The only missing element is the callback, which we can easily include with our Callback header. Here we augment an example directly from the spec:</p>
<pre><code>POST /collection HTTP/1.1
Host: example.org
Content-Type: text/plain
Prefer: respond-async
Callback: <http://server.com/callback>; rel="respond-async"

{Data}</code></pre>
<p>The server would respond with 202 Accepted, just as it would in the Pragma implementation. Which implementation is ideal is up for discussion.</p>
<h3 id='last_thoughts'>Last Thoughts</h3>
<p>Granted, there are likely better ways to approach the use case we described at the beginning. Creating a resource immediately and subscribing to state changes might actually be ideal. Perhaps the Response Redirection spec is purely academic. That brings us again to the idea of HTTP Subscriptions, which I’ll get to posting about soon.</p>
<p>However, Response Redirection is a great example of a simple protocol built on top of the Callback header. The use of Prefer and Pragma will also set the stage for the design decisions of my initial informal draft of HTTP Subscriptions. It will continue this trend of reusing existing pieces of technology (Pragma, Expect, Prefer, Callback) as building blocks that, from my perspective, were intended to be re-combined to achieve new behavior.</p>
<p>Let me know what you think in the comments or in the <a href='https://groups.google.com/forum/#!forum/webhooks'>Webhooks Google Group</a>.</p>X-Callback Header: Evented Web Building Block2012-11-26T00:00:00-06:00http://progrium.com/blog/2012/11/26/x-callback-header-an-evented-web-building-block<div class='alert alert-info'>
Since this posting, we decided to adopt "best current
practice" and drop the X- prefix as described in <a href='http://tools.ietf.org/html/rfc6648'>RFC 6648</a>. Future posts refer to this as the Callback header.
</div>
<p>Webhooks is the simple concept of HTTP callbacks. It expands on the simple request/response model of HTTP, giving you the semantics of <a href='http://j.mp/10EitT8'>callbacks in programming</a>. Request/response gives you one response for one request in one synchronous operation. It’s like invoking a function and getting a return value. With callbacks, after you register a callback, the callback will receive one or more invocations, perhaps minutes or hours apart.</p>
<p>Callbacks are a necessary component of any evented or reactor-based system, like Node.js, Twisted, or EventMachine. So, naturally, HTTP callbacks are necessary to achieve <a href='http://progrium.com/blog/2012/11/19/from-webhooks-to-the-evented-web/'>the Evented Web</a>.</p>
<p>Modeling callbacks in HTTP is somewhat straightforward. The callback is a URL. You perform an HTTP request against an application to register a callback URL. The application then performs an HTTP request to that URL to invoke that callback.</p>
<p>Those high-level requirements are enough to set anybody in the right direction to effectively implement webhooks or HTTP callbacks for their application. The problem is that now every application implements the specifics differently. While this is fine to provide a callback paradigm for each application, it doesn’t let us <em>build</em> on this paradigm. The Evented Web needs to agree on some standards, and the X-Callback header is one of those standards.</p>
<h3 id='xcallback_header'>X-Callback Header</h3>
<p>The X-Callback header is a proposal for a common way to describe HTTP callbacks, primarily in the case of registering them. It does not get into the specifics for different ways of using HTTP callbacks, so it’s more of a building block for APIs or larger protocols such as HTTP Subscriptions, which I mentioned <a href='http://progrium.com/blog/2012/11/19/from-webhooks-to-the-evented-web/'>in my previous post</a>.</p>
<p>Here’s what it looks like to use X-Callback:</p>
<pre><code>X-Callback: <http://example.com/callback>; method="post"</code></pre>
<p>The format is directly borrowed from the <a href='http://www.w3.org/Protocols/9707-link-header.html'>Link header</a> used for responses. You provide a URL and then optional key-value parameters. In the case above, the HTTP method for invoking the callback was specified as a parameter.</p>
<p>Here is a more formal description of the header:</p>
<pre><code>X-Callback = "X-Callback" ":" #("<" URI ">" *( ";" callback-param ) )
callback-param = token [ "=" ( token | quoted-string ) ]</code></pre>
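<p>As a sanity check on that grammar, here’s a small Python parser for it. This is a sketch of ours, not a reference implementation; it handles a comma-separated list of <code>&lt;URI&gt;</code> elements, each with optional <code>;key="value"</code> or bare-token parameters.</p>

```python
import re

# Sketch parser for the X-Callback grammar above (our own code, not a
# reference implementation). Each element is <URI> followed by optional
# ;key="value" or ;key=token parameters; elements are comma-separated.

_ELEMENT = re.compile(r'<([^>]*)>((?:\s*;\s*[\w.-]+(?:=(?:"[^"]*"|[\w.-]+))?)*)')
_PARAM = re.compile(r';\s*([\w.-]+)(?:=(?:"([^"]*)"|([\w.-]+)))?')

def parse_callback(value):
    """Parse an X-Callback header value into a list of (uri, params)."""
    callbacks = []
    for element in _ELEMENT.finditer(value):
        uri, raw_params = element.group(1), element.group(2)
        params = {}
        for p in _PARAM.finditer(raw_params):
            params[p.group(1)] = p.group(2) if p.group(2) is not None else p.group(3)
        callbacks.append((uri, params))
    return callbacks

print(parse_callback('<http://example.com/callback>; method="post"'))
# [('http://example.com/callback', {'method': 'post'})]
```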
<p>Since this is just the beginning of the conversation, there are no “built-in” callback parameters in this definition. They’re effectively all extensions. However, these are what I’d propose for standard parameters:</p>
<ul>
<li><strong>method</strong>: The HTTP method preferred for invoking this callback. Servers can ignore or override based on their policies, but this lets the requester optionally state preference.</li>
<li><strong>secret</strong>: The secret to be used for signing callback requests. More on this in the next section.</li>
<li><strong>rel</strong>: The relationship of this callback to this request, similar to the rel of the Link header. This lets you specify the role of this callback, which is useful when multiple callbacks are provided. It effectively lets you classify the callbacks.</li>
</ul>
<h3 id='authenticating_with_signatures'>Authenticating with Signatures</h3>
<p>A common pattern across most implementations of webhooks has been the use of signatures for authenticating the callback “invocation” requests. Whether built into the X-Callback header spec or kept as a separate extension, a standard way of providing a secret, then building and including a signature, would be a Good Idea. The following is a proposal based on PubSubHubbub’s signature model, but it is not that different from the majority of implementations out there.</p>
<p>We start with a shared secret. Transmission of this secret can be done out of band (through a dashboard, for example), or the secret can be provided via the <em>secret</em> parameter of the X-Callback header during registration.</p>
<p>The secret can then be used with HMAC to sign anything. In the case of callbacks, it signs the body of the callback request. Since HMAC can be used with different hashing techniques, the technique used is specified along with a hexadecimal digest of the HMAC signature. This is put in the X-Signature header of requests:</p>
<pre><code>X-Signature: sha1=0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33</code></pre>
<p>Now the callback handler can rebuild this signature knowing the secret and having the content body and the hash technique. Authenticating is then comparing the built signature with the one provided in the X-Signature header.</p>
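<p>In Python, both sides of this scheme fit in a few lines using only the standard library. This is a hedged sketch, with function names of our own choosing:</p>

```python
import hmac

# Sketch of the signature scheme described above: HMAC over the request
# body, labeled with the hash algorithm and hex-encoded, e.g.
# "sha1=0beec7b5...". The function names here are our own.

def sign(secret, body, algorithm="sha1"):
    digest = hmac.new(secret.encode("utf-8"), body, algorithm).hexdigest()
    return "%s=%s" % (algorithm, digest)

def verify(secret, body, signature_header):
    algorithm, _, _ = signature_header.partition("=")
    expected = sign(secret, body, algorithm)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature_header)

sig = sign("opensesame", b'{"payload": "Hello world"}')
assert verify("opensesame", b'{"payload": "Hello world"}', sig)
assert not verify("wrong-secret", b'{"payload": "Hello world"}', sig)
```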
<p>Signing lets the callback handler be more certain of the source without requiring SSL. Signatures become much less necessary if all requests are using HTTPS. But having this simple complement to X-Callback makes it easy when you need it, and may help unify all the different approaches that are all effectively doing the same thing.</p>
<h3 id='example_callback_flow'>Example Callback Flow</h3>
<p>Let’s use all this in an example, showing you the actual HTTP requests. First, we’re going to register a callback at a particular endpoint:</p>
<pre><code>POST /callbacks/register HTTP/1.1
Host: server-example.com
X-Callback: <http://example.com/callback>; method="post"; secret="opensesame"
Content-Length: 0</code></pre>
<p>The server can respond however it likes, since the X-Callback header doesn’t specify anything more than how to hand the server a callback. Let’s assume it returned 200 OK.</p>
<p>Now, whenever it likes, the server will perform an HTTP POST on the callback URL. Since a secret was provided, the server will include a signature in the X-Signature header. Here’s what one of those requests might look like:</p>
<pre><code>POST /callback HTTP/1.1
Host: example.com
Content-Type: application/json
Content-Length: 26
X-Signature: sha1=76afe1da675cf6d3d59c71a4af44dafc69fd03f0

{"payload": "Hello world"}</code></pre>
<p>You’ll notice we’ve stayed completely out of the content layer of HTTP. This is quite intentional. This gives implementors lots of flexibility and keeps this a “pure” extension to HTTP.</p>
<h3 id='a_building_block'>A Building Block</h3>
<p>As I’ve mentioned, this header is intended to be used in APIs and protocols that use callbacks in different ways. The obvious example is HTTP Subscriptions, which will let you subscribe to events using HTTP callbacks.</p>
<p>Another example, which I’ll also talk about soon, is HTTP Response Redirection. Regular <a href='http://en.wikipedia.org/wiki/URL_redirection#HTTP_status_codes_3xx'>HTTP Redirection</a> lets the server redirect the client’s request to another URL, whereas Response Redirection lets the client redirect where the server sends the response using an HTTP callback.</p>
<p>The X-Callback header is simple, focused, and content neutral. Hopefully this makes it a powerful building block for other technologies of the Evented Web.</p>From Webhooks to the Evented Web2012-11-19T00:00:00-06:00http://progrium.com/blog/2012/11/19/from-webhooks-to-the-evented-web<p>Back in 2007 I started thinking and talking a lot about <a href='http://progrium.com/blog/2007/05/03/web-hooks-to-revolutionize-the-web/'>an idea called webhooks</a>. Over the following few years I started evangelizing it. I spent a lot of my free time giving talks and building tools around the idea of webhooks. Some of these tools are still around today, including <a href='http://localtunnel.com'>Localtunnel</a> and <a href='http://requestb.in'>RequestBin</a> (originally PostBin). There were others that might not be around anymore: <a href='http://mailhooks.com'>MailHooks</a>, <a href='http://clickhooks.com'>ClickHooks</a>, <a href='http://twitterhooks.com'>TwitterHooks</a>, <a href='http://scriptlets.org'>Scriptlets</a>, and a few others.</p>
<p>Webhooks wasn’t really a new technology in the sense that there was a specification or tangible piece of software. It was more of an architectural pattern, and a loose one at that. To me it was just a different way to think about web applications, and it opened up a lot of new possibilities.</p>
<p>I was really excited by those possibilities, so I started telling people about it. The only problem was that it was this semi-vague idea. I often spoke in high-level notions. It was hard for some people to understand at the time. I think some people mostly got it, but a lot of people didn’t get it and thought they did.</p>
<h3 id='confusion'>Confusion</h3>
<p>For example, the name “webhooks” was more about the pattern than any specific part of how it works. Webhooks involve two parts: an application that triggers a URL, and a handler at that URL. If you were to ask “where is the webhook?” different people will answer you differently. Some say it’s the trigger side. Some say it’s the handler side. For me, “a webhook” is the combination of both a trigger and a handler.</p>
<p>It also didn’t help that there was never a spec. I always avoided a spec because there were a lot of different implementations already out there, and you might implement it slightly differently for different use cases. It made sense to me to just keep it a general pattern and not limit what was possible.</p>
<p>I didn’t want to say, “Well, if you want to implement webhooks, it’s got to be JSON. And use this payload structure. And this is the API for registering them. Otherwise, it’s not webhooks.” Because if you didn’t do those things in a particular way, they’d still be webhooks to me.</p>
<h3 id='mild_success'>Mild Success</h3>
<p>After a while, the idea got out there and companies like Google, Facebook, Wordpress, GitHub, Twilio, and other startups started implementing it. Five years later and I still often run into new applications or open source projects using the term webhooks. But even after all this time, there’s still a lot of cool stuff that I wanted to emerge that hasn’t really happened yet.</p>
<p>Some of it is starting to happen, though. For example, how do you write these handler scripts? I really didn’t believe in being able to just plug apps together like pipes. That’s something that could come later and would definitely need a spec. Instead, I wanted people to actually write handler scripts with code. That way they could make something that did whatever they wanted, exactly how they wanted.</p>
<p>To facilitate that, I wanted a service that would let you write, and would host for you, these little handler scripts for processing HTTP webhook requests. I actually built a prototype of this called Scriptlets. It was a web app where you could write JavaScript, hit save, and then you’d have a little script at a URL that you could use for webhooks.</p>
<p>Scriptlets didn’t get very popular, though I didn’t push it very hard. There was a lot I wanted to do with it but there wasn’t enough demand to drive development, and I was so busy that it eventually became defunct.</p>
<p>Four years later, we actually have a service like this. I discovered it about a week ago. It’s called <a href='http://webscript.io'>Webscript</a>. It’s basically Scriptlets done right. Webscript is a web app where you can write Lua, hit save, and then you’ve got a little web service. It has basically everything you need to write webhook handler scripts.</p>
<h3 id='the_ecosystem'>The Ecosystem</h3>
<p>Slowly, people <em>are</em> building out pieces of the ecosystem. You could say that the webhooks paradigm was really about this ecosystem. At some point I realized this and decided to give that ecosystem a name. It turned out the ecosystem was really what I was getting at with webhooks. That’s where the magic was.</p>
<p>I started calling this ecosystem the Evented Web. Like the Semantic Web and “programmable web,” it’s an umbrella term for a family of technologies coupled with a vision of what the world could be like. The Evented Web envisions a world where the programmable web that we have today of traditional web APIs is complemented by APIs that produce events through webhooks. Adding a callback mechanism to web APIs makes the web more like a giant evented framework.</p>
<p>Just like with Node.js, perhaps the most popular evented framework, there’s all kinds of innovation happening in the community. It’s a new way of thinking about things. Pipes and streams come up a lot in the Node.js world now, and similar sorts of things can be done across web applications with an Evented Web.</p>
<p>By the time I started talking more about the Evented Web instead of just webhooks, I was already pretty tired of talking about it all. I was sort of “over it” and I started to not care if people didn’t see or share this vision. I continued to think it was cool, but I started to move on to other interests.</p>
<h3 id='the_future'>The Future</h3>
<p>These days, the people that really get it are starting to build some really neat things. Webhooks have spread enough that you can at least reference them or the idea of HTTP callbacks and not have to explain yourself. Now is maybe the perfect time for me to put a few specific projects into motion that could at least provide a tangible foundation for building out the Evented Web. Not just vague notions.</p>
<p>The most immediate thing is a lightweight spec for implementing webhooks. Specifically I mean registering a callback URL and invoking the callback URL. I’ve intentionally put this off for a lot of reasons. I didn’t want to get it wrong. I didn’t want to leave people out. I wanted to capture best practices, which for the longest time we hadn’t figured out. But now might be the perfect time, because there is a lot on the verge of happening.</p>
<p>Stay tuned for my proposal for HTTP Subscriptions. It will be the first of several really cool developments for the Evented Web, from me and from others.</p>Piping into and out of the cloud with skypipe2012-09-30T00:00:00-05:00http://progrium.com/blog/2012/09/30/piping-into-and-out-of-the-cloud-with-skypipe<p><a href='https://github.com/progrium/skypipe'>Skypipe</a> is a magical command line tool that lets you easily pipe data across terminal sessions regardless of whether the sessions are on the same machine, across thousands of machines, or even behind a firewall. It gives you named pipes in the sky and lets you pipe data <em>anywhere</em>.</p>
<p>I built it while on vacation the last couple months. I wasn’t intending to write software while on my trip, but I just couldn’t help myself.</p>
<p>Skypipe is conceptually similar to named pipes or netcat, but with more power and a simpler interface. Here is a basic example using skypipe as you would a regular named pipe to gzip a file across shells:</p>
<pre><code>$ skypipe | gzip -9 -c > out.gz</code></pre>
<p>Your skypipe is now ready to receive some data from another shell process:</p>
<pre><code>$ cat file | skypipe</code></pre>
<p>Unlike named pipes, however, <em>this will work across any machines connected to the Internet</em>. You don’t have to specify a host address or set up “listen mode” like you would with netcat. In fact, unlike netcat, which is point to point, you could use skypipe for log aggregation. Here we’ll use named skypipes. Run this on several hosts:</p>
<pre><code>$ tail -f /var/log/somefile | skypipe logs</code></pre>
<p>Then run this on a single machine:</p>
<pre><code>$ skypipe logs > /var/log/aggregate</code></pre>
<p>This can also broadcast to multiple hosts. With the above, you can “listen in” by running this from your laptop, even while behind a NAT:</p>
<pre><code>$ skypipe logs</code></pre>
<p>You can also temporarily store data or files in the pipe, even several files, until you pull them out. On one machine load some files into a named skypipe:</p>
<pre><code>$ cat file_a | skypipe files
$ cat file_b | skypipe files</code></pre>
<p>Now, from somewhere else, pull them out in order:</p>
<pre><code>$ skypipe files > new_file_a
$ skypipe files > new_file_b</code></pre>
<p>Lastly, you can use skypipe like the channel primitive in Go for coordinating between shell scripts. As a simple example, we’ll use skypipe to wait for an event triggered by another script:</p>
<pre><code>#!/bin/bash
echo "I'm going to wait until triggered"
skypipe trigger
echo "I was triggered!"</code></pre>
<p>Triggering is just sending an EOF over the pipe, causing the listening skypipe to terminate. We can do this with a simple idiom:</p>
<pre><code>$ echo | skypipe trigger</code></pre>
<h3 id='how_does_this_magic_work'>How does this magic work?</h3>
<p>You’ll need a free <a href='https://www.dotcloud.com/'>Dotcloud</a> account to use skypipe, but you don’t need to know anything about using Dotcloud to use skypipe.</p>
<p>When you first use skypipe it will want you to run a setup command (<code>skypipe --setup</code>). This will deploy a very simple messaging server to Dotcloud. From then on, skypipe will use your account to transparently find and use this server, no matter where you are. The server is managed automatically and runs on Dotcloud free of charge, so you never need to think about it.</p>
<h3 id='software_with_a_service'>Software with a service!</h3>
<p>This is a new paradigm of creating tools that transparently leverage the cloud to create magical experiences. It’s not quite software as a service; it’s software <em>with</em> a service. Nobody is using a shared, central server, and no one needs to set up or manage their own server. The <em>software</em> deploys and manages its own server for you.</p>
<p>Thanks to platforms like Heroku and Dotcloud, we can now build software leveraging features of software as a service that is <em>packaged and distributed like normal open source software</em>.</p>
<p>I’m excited to see what else can be done with this pattern. Naturally, I’m already thinking about a number of other potential uses.</p>
<h3 id='using_skypipe_and_getting_involved'>Using skypipe and getting involved</h3>
<p>Skypipe is still an alpha piece of software. Be warned, there are some rough edges. That said, you can install skypipe with pip:</p>
<pre><code>$ pip install skypipe</code></pre>
<p>The user experience is not yet entirely optimized. One of the biggest issues is that it needs to check for the server on every use. This can be done less often and cached, which would make it much snappier and on par with most command line utilities.</p>
<p>This and a few other issues are already tracked in <a href='https://github.com/progrium/skypipe/issues'>GitHub Issues</a>; feel free to take a whack at them. The codebase is intentionally small, documented, and written to be read, although there are no tests yet.</p>
<p>The project also depends on ZeroMQ, which requires a C extension to be compiled. Even using the pyzmq-static package, you still need certain header files (python.h at the very least) to install skypipe, and not every environment has these. Ideally, I’d like to find a way to package skypipe in a way that includes all its dependencies. Perhaps <a href='http://www.pyinstaller.org/'>PyInstaller</a> can help with this.</p>
<p>Another feature I’m sure a lot of people will want (or complain about) is being able to run your own server and ignore the software with a service aspect. Since the server is packaged with the client, this is not far off from happening. Somebody just needs to make it happen.</p>
<p>Contribution ideas aside, I’m hoping skypipe will be useful to others besides myself. I was really going for a magical tool. I think a big part of this magic is the use of software with a service, which I consider a bit novel in itself. What do you think?</p>Let me tell you about my website2012-09-07T00:00:00-05:00http://progrium.com/blog/2012/09/07/let-me-tell-you-about-my-website<p>Since mid-July I’ve been on vacation traveling around the world. Originally, the only project I allowed myself to work on is my website and blog. I quickly broke that rule with a number of new and existing projects. Nevertheless, as you can see, I did ship this site.</p>
<p>I’ve never been happy with my website or blog, perhaps because I’ve never been able to invest enough time into it. Over the years, I’ve at least been able to put a lot of thought into what I want and how I want to express myself. This attempt gets pretty close.</p>
<h3 id='highlevel_goals'>High-level Goals</h3>
<p>Too many of my personal site designs have been dark and gloomy, often monochromatic. I think one of the biggest ideas going into this project was to make something bright and colorful. From a pure functional standpoint, I felt this was important. I also wanted to actually reflect my style. Too often I’ve settled for pre-designed blog themes that “kinda, sorta” match my sense of style and how I want to express myself. This time I would have full control.</p>
<p>As I’m starting to freelance again, I also wanted to have a good marketing tool for myself. Even if it just expressed the affordances I have as a free agent. If somebody discovered me online, they’d know they could buy my time. “Yes, I can be hired for any of these fine services.”</p>
<p>Combining my personal website with my blog is another thing I wanted to do. So far they’ve always been separate. Not only would this be for consistency and simplicity, but for future proofing. Anything else I want to put online I can do with this site in a way that feels part of a whole. A whole that represents my identity and personal brand.</p>
<h3 id='aesthetic'>Aesthetic</h3>
<p>Like I mentioned, I wanted bright and colorful. I also wanted simple and toy-like. I wanted it to feel well designed, but without a lot of modern web design tropes. This led to a minimalist foundation that I could sprinkle my favorite motifs on top of.</p>
<p>I actually only had one website in mind that I used for initial inspiration. In fact, I think when I came across <a href='http://disqus.com/for-websites/awesome-ux'>this page of the Disqus website</a>, I immediately started imagining a new personal website. Most of that initial vision has since disappeared, but with that framing I was able to move on to colors.</p>
<p>The colors on that page reminded me of one of my favorite kinds of infographics: transit maps. I quickly started poking around ColourLovers for palettes inspired by transit maps. I found a few and settled on <a href='http://www.colourlovers.com/palette/1043750/Tokyo_Subway'>one based on Tokyo’s subway map</a>. Then I moved on to typeface.</p>
<p>I limited my options to what was available on Google Webfonts. Previously I was a fan of the Droid Sans and Droid Serif families. This time around, I used Open Sans as the primary font. For the header title (my name), I needed something different. I wanted something heavier but not wide. Ideally, I wanted a bold Futura Condensed, but a heavy Futura isn’t on Google Webfonts.</p>
<p>I struggled with this for a while, then by accident found VT323. It had the weight and shape qualities I was looking for, but I didn’t think I wanted a pixelated typeface. It seemed too cliche. However, when I tried it, it became obvious VT323 not only went well with my pixelated avatar, but it added more of me to the overall design without being too cliche.</p>
<p>The tree in the header was a late addition. I have a thing for trees as a symbol of nature and I wanted to add more character to the design. Originally I was going to use Context Free to create a procedurally generated tree, but this proved too time consuming to get what I wanted. A free vector based tree was not hard to find.</p>
<h3 id='platform'>Platform</h3>
<p>I’ve become a fan of using <a href='http://pages.github.com/'>GitHub Pages</a> for simple websites. It’s hosted, it’s free, and powered by Git. The site is then versioned, editable online, and even forkable. The only limitation is that it only hosts static files.</p>
<p>Luckily, they’ve built in support for <a href='https://github.com/mojombo/jekyll'>Jekyll</a>, which is more or less like pre-rendering a dynamic site. This not only gives you templating, layouts, and includes, but it’s “blog aware,” so you get pagination, meta-data, and even related post links. The only real dynamic bits of the site are blog comments and analytics, both of which are solved by client-side JavaScript powered services. I don’t care to have either of those in my Git repo anyway.</p>
<p>The other part of this plan I like is that it lets you write content in simple Markdown files kept in a Git repo. If I ever need to, I can take this repo anywhere and still have my blog posts as Markdown files. This is a far better place to be than where most blogging platforms I’ve used in the past have left me.</p>
<h3 id='future'>Future</h3>
<p>It’s still a work in progress. You may have noticed the homepage currently redirects to the blog index. I’m hoping to have more of a landing or intro page as the homepage. The idea there is to quickly communicate what I’m about and what I work on.</p>
<p>I also want to incorporate more hand-drawn elements into the design. A lot of my writing is well accompanied by diagrams and visuals, so I wanted to include more of these in my posts. Doing as many of these as I can with my drawing tablet will give the site more character. My character.</p>
<p>I was also thinking about “project pages” that describe and introduce the projects I’m involved in without being hidden as an old blog post. Perhaps more exciting, though, are project idea pages or “blueprints”. These would allow me to document projects I’m thinking about building, letting me get feedback and encourage collaboration before even starting.</p>
<p>Building a site from scratch was a pretty substantial investment. With the right tools and enough time to think through and iterate on the visual design, I now have something that works and that I can build on as needed. Not only that, I shipped before I got back from vacation. :)</p>Making a local web server public with localtunnel2010-05-11T00:00:00-05:00http://progrium.com/blog/2010/05/11/making-a-local-web-server-public-with-localtunnel<p>These days it’s fairly common to run a local environment for web development. Whether you’re running Apache, Mongrel, or the App Engine SDK, we’re all starting to see the benefits of having a production-like environment right there on your laptop so you can iteratively code and debug your app without deploying live, or even needing the Internet.</p>
<p>However, with the growing popularity of callbacks and <a href='http://webhooks.org'>webhooks</a>, you can only really debug if your script is live and on the Internet. There are also other cases where you need to make what are normally private and/or local web servers public, such as various kinds of testing or quick public demos. Demos are a surprisingly common case, especially for multi-user systems (“Man, I wish I could have you join this chat room app I’m working on, but it’s only running on my laptop”).</p>
<p>The solution is obvious, right? SSH remote forwarding, or reverse tunneling. Use a magical set of options with SSH with a public server you have SSH access to, and set up a tunnel from that machine to your local machine. When people connect to a port on your public machine, it gets forwarded to a local port on your machine, looking as if that port was on a public IP.</p>
<p>The idea is great, but it’s a hassle to set up. You need to make sure sshd is set up properly in order to make a public tunnel on the remote machine, or you need to set up two tunnels, one from your machine to a private port on the remote machine, and then another on the remote machine from a public port to the private port (that forwards to your machine).</p>
<p>In short, it’s too much of a hassle to consider it a quick and easy option. Here is the quick and easy option:</p>
<pre><code>$ localtunnel 8080</code></pre>
<p>And you’re done! With localtunnel, it’s so simple to set this up, it’s almost fun to do. What’s more, the publicly accessible URL has a nice hostname and uses port 80, no matter what port it’s on locally. And it tells you what this URL is when you start localtunnel:</p>
<pre><code>$ localtunnel 8080
Port 8080 is now publicly accessible from http://8bv2.localtunnel.com ...</code></pre>
<p>What’s going on behind the scenes is a web server component running on localtunnel.com. It serves two purposes: a virtual host reverse proxy to the port forward, and a tunnel register API (try going to <a href='http://open.localtunnel.com'>http://open.localtunnel.com</a>). This simple API allocates a port to tunnel on, and gives the localtunnel client command the information it needs to set up an SSH tunnel for you. The localtunnel command just wraps an SSH library and does this register call.</p>
<p>Of course, there’s also the authentication part. As a free, public service, we don’t want to just give everybody SSH access to this machine (as it may seem). The localtunnel user on that box is made just for this service. It has no shell. It only has a home directory with an authorized_keys file. We require you to upload a public key for authentication, and we also mark that key with options saying it can only do port forwarding. It can’t be used for arbitrary port forwarding, either: since the forwarded port is private on the remote side, it can only be reached through the special reverse proxy.</p>
<p>So there it is. <a href='http://github.com/progrium/localtunnel'>And the code is on GitHub.</a> You might notice the server is in Python and the client in Ruby. Why? It just made sense. Python has Twisted, which I like for server stuff. And Ruby is great for command line scripts, and has a nice SSH library. In the end, it doesn’t matter what it’s written in. Ultimately it’s a Unix program.</p>
<p>Enjoy!</p>Learning from expectation disparity, aka "failure"2010-02-02T00:00:00-06:00http://progrium.com/blog/2010/02/02/learning-from-disparity-aka-failure<p>I’ve long argued that failure is the only way a person can learn. This bit of wisdom has long resonated with me, despite the fact that people have often argued against it with rather sound logic: “Yes, you can learn from failure, but you can also learn from success!” Only after reading Jason Fried’s blog post from one year ago, <a href='http://37signals.com/svn/posts/1555-learning-from-failure-is-overrated'>Learning from failure is overrated</a>, have I realized exactly what’s wrong. We’re talking about different kinds of failure.</p>
<p>Learning happens when you correct the mismatch between an expected outcome and the actual outcome. This is where the idea of learning from failure actually comes from. Failure represents a mistake in judgment, a disparity between expectation and reality. Therefore, if your expectation is validated by success and there is no disparity, then you didn’t actually <em>learn</em> anything — you already knew.</p>
<p>However, validation of an expected but <em>unsure</em> success is obviously learning because it corrects the expectation of doubt you had. Furthermore, analysis of an unexpected success can result in learning from correcting the assumptions that led to expecting failure. A successful outcome, but a failure to expect it.</p>
<p>This is where the confusion comes in. Unfortunately it’s an issue of semantics. The meaning of the word failure in the context of “learning from failure” is this failure to know the outcome. In this way, it’s true, you can only learn from failure. However, it is only to this specific kind of expectation-failure that this applies so absolutely.</p>
<p>Separately, although relatedly (which I think adds to the confusion), despite outcome-failures not necessarily teaching you what will work, they tend to be the strongest lessons experienced. I mostly attribute this to the greater level of disparity and correction made to your mental model that comes from expecting a failing outcome to succeed.</p>Notify.io brings notifications to the web2010-01-26T00:00:00-06:00http://progrium.com/blog/2010/01/26/notifyio-brings-notifications-to-the-web<blockquote>
<p><strong>Update:</strong> Notify.io is currently out of service as it is being re-imagined in smaller pieces based on open standards. Contact me for more information.</p>
</blockquote>
<p>In October 2009 I started a project called <a href='http://www.notify.io/'>Notify.io</a> and a month later announced it. I talked about how it will bring notifications to the web. Now that it’s basically alpha complete, I’ll give you a quick walkthrough of what makes it so great.</p>
<center><iframe frameborder='0' height='315' src='http://www.youtube.com/embed/Fs9NauQ2M6o' width='560'> </iframe></center>
<h3 id='overview'>Overview</h3>
<p>At a really high level, you can think of Notify.io as a notification router. As a web service, it provides a singleton endpoint for any web-connected program, whether a web application, desktop application or user script, to send notifications to somebody. For users, you can control what notifications you get and how you get them. In this way, Notify.io is like a global, web-accessible version of the popular <a href='http://growl.info/'>Growl</a> application for OS X (which should honestly just ship with OS X). Only it’s even better.</p>
<h3 id='desktop_notifications'>Desktop Notifications</h3>
<p>The original inspiration for Notify.io was to make Growl more useful by fixing its ability to receive notifications from the Internet. Out of the box, Growl is effectively only good for notifications from sources running on your machine. If you wanted to get notifications from a web app, you’d have to wait for them to release a desktop notifier, which hopefully would use Growl to actually display the notifications. So you end up with all these desktop notifiers running for some apps, and have no option of desktop notifications for others.</p>
<p>This is probably the killer feature of Notify.io: it lets you get desktop notifications from any web app that supports it, which is an order of magnitude easier for them to do than build their own desktop notifier.</p>
<h3 id='sources_and_outlets'>Sources and Outlets</h3>
<p>The language of Notify.io is based around Sources and Outlets. Sources are pretty straightforward. They’re a source of notifications. They could represent an application, script, company, person (or perhaps object?) that can send you notifications.</p>
<p>Outlets represent the other major feature of Notify.io. They’re ways you can get a notification. The Desktop Notifier is your first and default outlet, but is just one of several options. Currently supported Outlets besides Desktop Notifier are Email, Jabber IM, and Webhooks. Outlets to look forward to are SMS, Twitter, IRC, and perhaps telephone.</p>
<p>The magic is in routing notifications from Sources to Outlets. Currently this is a simple mapping of Source to Outlet. For example, you can get notifications from Source A on your desktop, while notifications from Source B go to IM. This simplistic routing is just the beginning. We’ll talk about how we’ll do advanced routing when we get to the Roadmap.</p>
<h3 id='the_nio_client'>The Nio Client</h3>
<p>For developers, it’s worth mentioning that the pipe for our Desktop Notifier is really just a Comet HTTP stream. It can be consumed by pretty much anything. We were originally talking with Growl and the authors of other desktop notifiers about direct integration. This is still a possibility, but just so we could move forward, we built our own client for OS X. Clients for other systems are available (but not yet “officially” supported) or are in progress, including Windows and Android.</p>
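<p>Since the pipe is just long-lived HTTP, a minimal consumer is only a few lines. This Python sketch assumes one JSON object per line with <code>title</code> and <code>text</code> fields (invented names for illustration, not the documented wire format):</p>

```python
import json
import urllib.request

def parse_notification(line):
    # Decode one line of the stream into a (title, text) pair.
    # The field names are assumptions for this sketch.
    data = json.loads(line)
    return data.get("title", "Notification"), data.get("text", "")

def listen(listen_url, handler):
    # Consume a Comet-style HTTP stream, one notification per line.
    with urllib.request.urlopen(listen_url) as stream:
        for raw in stream:
            line = raw.decode("utf-8").strip()
            if not line:
                continue  # skip keep-alive heartbeats
            handler(*parse_notification(line))

# Usage (with the secret URL from your ListenURL file):
#   listen("http://api.notify.io/v1/listen/SECRET",
#          lambda title, text: print(title, "-", text))
```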
<p>Our OS X client is called Nio, short for Notify.io, so you can pronounce it N-I-O, but I tend to pronounce it “neo”. It’s basically just an application that sits in your menu bar listening to HTTP streams (yes, plural) for notifications and pipes them into Growl.</p>
<p>For ease of installing streams, we made it handle files with the extension ListenURL. Once Nio is installed, you can download a ListenURL file containing a URL and it gets installed by Nio. The URL we give you is basically a “capability URL” or secret URL. This means streams are not super secure, but this is by design. If you wanted, you could share your URL with somebody so you both get notifications sent to that Outlet. You can always delete the Outlet and make another to disable that URL.</p>
<p>The other cool thing about our client is that it has a shell script notification hook. This means you can have notifications trigger a shell script that’s passed the notification details. This is pretty powerful because it means you can do things like create your own local logging, hear your notifications with text-to-speech, or make certain notifications trigger a more obtrusive means of notifying you, such as Quicksilver’s Large Type feature. This kind of programmability is central to our approach to design, as you’ll see later on in the Roadmap.</p>
<h3 id='simple_api_and_approval_model'>Simple API and Approval Model</h3>
<p>For proper adoption, we need web apps to integrate Notify.io, so we have a super simple API for Sources. It’s a simple REST API based on an endpoint constructed by the target of your notification. Like <a href='http://en.gravatar.com/'>Gravatar</a>, we use an MD5 hash of a user’s email address to identify targets. For example, to send a notification to test@example.com, you’d do an HTTP POST to this URL:</p>
<pre><code>http://api.notify.io/v1/notify/55502f40dc8b7c769880b10874abc9d0</code></pre>
<p>You’d pass a few parameters, with at least your API key (meaning you need an account) and the text you want to send, and optionally an icon URL, link URL, title text and whether the notification should be “sticky”. That’s it. The request should respond immediately so it may be quick enough to be done inline in your app, but we recommend it be done asynchronously.</p>
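<p>Put together, sending a notification from Python looks something like this. The endpoint construction comes straight from the example above; the parameter names are paraphrased from this post rather than taken from API documentation:</p>

```python
import hashlib
import urllib.parse
import urllib.request

API_BASE = "http://api.notify.io/v1/notify/"

def notify_endpoint(email):
    # Gravatar-style targeting: MD5 hex digest of the email address.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return API_BASE + digest

def send_notification(email, api_key, text, **extra):
    # POST the notification; extra may include icon, link, title, sticky.
    # Parameter names here are paraphrased, not from official API docs.
    params = dict(extra, api_key=api_key, text=text)
    data = urllib.parse.urlencode(params).encode("utf-8")
    return urllib.request.urlopen(notify_endpoint(email), data)

# Usage (fires a real request, so do it asynchronously in practice):
#   send_notification("test@example.com", MY_API_KEY, "Build finished",
#                     title="CI", sticky="true")
```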
<p>Then what happens is the first notification you send actually triggers a notification to that user that you want to send them notifications. If they accept, future notifications will be sent and your previous notifications will show up in their history. This may change to replay previous notifications on approval, but the point here is the user has to approve notifications before they get them. In this way, it’s similar to Jabber’s approval model and helps avoid spammers.</p>
<h3 id='public_service_software'>Public Service Software</h3>
<p><a href='http://code.google.com/p/notify-io/'>Notify.io and its clients are open source</a>. The service is free. Or rather, it’s not-for-profit donationware. Notify.io is being run under a model I’m developing called POSS, the goal of which is to automate/abstract away the maintenance and funding of its operation. The end result should be: the service exists, it’s open source, and some in the developer community can deploy changes. But no single person is financially responsible for it, and it’s run on maintained cloud infrastructure. In this case it’s mostly App Engine.</p>
<p>This means that Notify.io is not a startup. It’s public infrastructure. Ideally, I’m not even in the loop. It should be a self-sustaining public service. This is not fully realized, but it will be as it starts to consume more resources. For more information, you can <a href='http://blogrium.wordpress.com/2009/10/29/public-open-source-services/'>read more</a> on POSS or <a href='http://groups.google.com/group/poss-talk'>join our discussion group</a>.</p>
<p>For now, the important thing is that Notify.io is open source. This means anybody can contribute bug fixes, new outlets, new desktop clients, etc.</p>
<h3 id='roadmap'>Roadmap</h3>
<p>Okay, sure, Notify.io is pretty cool now. But here are some of the major things that will be coming soon. Hopefully with your help!</p>
<p><em>Advanced Routing and Filters</em><br /> From the beginning, I wanted really powerful routing and filtering. My evangelism of webhooks has given me the obvious answer to this, but in a more integrated way. Basically, how do you allow any routing scheme imaginable by users? Let them write code. Originally it was going to be powered by <a href='http://www.scriptlets.org/'>Scriptlets</a>, but since I split the eval engine out as <a href='http://github.com/progrium/DrEval'>DrEval</a>, it will be based on that.</p>
<p>Basically, just imagine a UI with a little textarea for writing JavaScript that can make web calls. Route notifications based on your IM status, your location, what music you’re listening to, arbitrary time schedules, or anything you can code.</p>
<p><em>More Outlets</em><br /> Obviously, more Outlets are good. Obvious ones are IRC, SMS, and Twitter DM. With Twilio we can do voice call notifications. Integration with push clients like the iPhone’s Prowl app would be easy to do. Our outlet system is very simple, so you can <a href='http://github.com/progrium/notify-io/blob/master/www/outlet_types.py'>look at the source of our existing ones</a>, write an outlet and it’s likely we’ll deploy it.</p>
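<p>To convey the shape of that, here’s a hypothetical outlet in Python. The real interface is whatever <code>outlet_types.py</code> defines; the class layout, method names, and notification fields below are all invented for illustration:</p>

```python
import json
import urllib.request

class WebhookOutlet:
    # Hypothetical outlet: POSTs each notification as JSON to a URL.
    name = "Webhook"
    fields = ["url"]  # per-user configuration this outlet needs

    def __init__(self, config):
        self.url = config["url"]

    def render(self, notification):
        # Reduce a notification dict to the payload this outlet sends.
        return {k: notification.get(k) for k in ("title", "text", "link", "icon")}

    def dispatch(self, notification):
        data = json.dumps(self.render(notification)).encode("utf-8")
        urllib.request.urlopen(self.url, data)
```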
<p><em>OpenID Support</em><br /> Right now, you authenticate with Google. I don’t believe in creating authentication systems, and Google was the quickest given the platform. It’s also pretty popular and ensures you have an email address we can use. However, there are plenty of people that don’t like the idea of using their Google Account, so at some point we’ll support OpenID login and then go from there.</p>
<p><em>Multiple Email Support</em><br /> Ideally, a web app can use whatever email address you used to register with them to send you notifications. However, unless your Gmail address is the one you use for registration, they’ll still need to ask you for your email. It’s the Gravatar model. So like Gravatar, we’ll need to let you add multiple emails to your account, allowing web applications to be able to send notifications based on any of them.</p>
<p><em>Convenience Libraries</em><br /> Our API is simple, but people are lazy. We’re currently working on convenience libraries for popular languages that make it that much easier to integrate with Notify.io. If you use a neat language, you should make a libnio package for it!</p>
<p><em>Ad-hoc Sources</em><br /> Sources require an account, which is a bit heavyweight. Sometimes you want to create your own distinct sources to share with others or use in your scripts to easily send yourself notifications. This is the idea of Ad-hoc Sources, inspired by David Reid and capability URLs. The idea is simple: create an ad-hoc source and you get a secret URL. This URL acts just like the notify API endpoint, only you don’t need an API key. You can use this in public scripts or give it to others to send you notifications, and if it’s ever abused or falls in the wrong hands, just delete it and make another.</p>
<p><em>More Supported Clients</em><br /> A developer in Japan started a Windows client based on Nio that we’re planning to support as our primary Windows client. Another developer is working on an Android client. iPhone users have Prowl, so once there is a Prowl outlet, you can get them on your iPhone. But Prowl is not free, so perhaps it would be helpful if we had our own iPhone client. There are the beginnings of a Linux/libnotify client. These are all ways you can start contributing to Notify.io. ;)</p>
<p>That’s about it. You can probably see why I describe Notify.io as the open notification platform of the web. It’s simple, powerful, and open source. It’s come a long way in just 3 months thanks to the contributions of Abimanyu Raja, Amanda Wixted, Mike Lundy, David Reid, Christopher Lobay, Hunter Gillane, Nakamatsu Shinji, and everybody that’s given user feedback so far.</p>Hacker Dojo: place of the way of the hacker2009-12-08T00:00:00-06:00http://progrium.com/blog/2009/12/08/hacker-dojo-place-of-the-way-of-the-hacker<p>Two weeks ago it was being taped for a story on Fox News. One month ago it received a congratulatory visit from the mayor of Mountain View. Two months ago it was featured on <a href='http://www.flickr.com/photos/progrium/4017351989/'>the front page</a> of the Mercury News. And four months ago was when we signed the lease for it. Hacker Dojo is officially on its way to something big.</p>
<p><a href='http://hackerdojo.com'>Hacker Dojo</a>, aspiring to be a global hub of innovation, is based on a simple idea: to provide a community center for hackers, thinkers and technologists to meet, discuss, learn and create.</p>
<p>It’s a non-profit, volunteer-run, member-supported operation that simply provides a space for hackers to do their thing 24/7, whether it’s working on side projects, organizing a meetup, building the next startup, giving a class, working remotely, or just hanging out with diverse yet like-minded people. The thing they have in common is the hacker spirit, a force that drives ordinary but curious people to create what can end up being extraordinary, such as the personal computer, the first video game system, or the technology powering the Internet.</p>
<p>A dojo is considered a place to do and train one’s craft with others. It’s most commonly used in the context of martial arts or other physical training. But the word “dojo” is simply defined as “place of the way.” In the case of Hacker Dojo, that way is the way of the hacker.</p>
<p>The hacker is not what most people think. Although 99% of the 100+ members of Hacker Dojo are capable of doing the things most people think “hackers” do, I’m 100% positive none of us intend to do those things maliciously, if at all. In fact, security has almost nothing to do with the way of the hacker. We define a hacker something like this:</p>
<blockquote>
<p>A hacker is an expert in their field, whether hobbyist or professional, who pushes the envelope of what’s possible through hands-on exploration, driven by relentless curiosity and a desire to challenge the status quo.</p>
</blockquote>
<p>Steven Levy, author of <a href='http://www.amazon.com/Hackers-Computer-Revolution-Steven-Levy/dp/0141000511'>Hackers: Heroes of the Computer Revolution</a>, describes computer hackers as people that “regard computing as the most important thing in the world.” It’s about passion. It’s about something most of us can’t even describe. It inexplicably compels us to explore technology. To build things. To learn things. To be like the heroes in our field and achieve the remarkable, which is something I think everybody can relate to.</p>
<p>In fact, there is no reason why anybody shouldn’t aspire to be like a hacker, an innovator. This is what Hacker Dojo is about. We want to foster hacker culture so it can grow, develop and spread. We want to make its values explicit, and be the “place of the way” of the hacker.</p>
<p>If you still don’t understand the hacker, or want to learn more, take a look at the <a href='http://en.wikipedia.org/wiki/Hacker_ethic'>Hacker ethic article</a> on Wikipedia. There’s also an excellent, short documentary available to watch online that shows you the energy and excitement of true hackers called <a href='http://www.youtube.com/view_play_list?p=906FF3F2339D0C70'>Hackers: Wizards of the Electronic Age</a>.</p>
<p>If you want to get involved in Hacker Dojo, come stop by or <a href='http://hackerdojo.com/'>visit the website</a>! We hold many events at the Dojo, but it’s also generally open for anybody to come and hack. If you’re too far to come to us, you may find another <a href='http://en.wikipedia.org/wiki/Hackerspace'>hackerspace</a> nearby. For example, in SF there is the excellent <a href='https://www.noisebridge.net/wiki/Noisebridge'>Noisebridge</a> hackerspace.</p>
<p>When you’re this obsessed with technology, so much that most “normal” people have no idea what you’re talking about, it helps to have a place where people don’t think you’re crazy. People have moved to Mountain View to be closer to Hacker Dojo. If you already live nearby, you should definitely take advantage of this place of the way of the hacker.</p>Why efficiency is not as important as you think2009-11-05T00:00:00-06:00http://progrium.com/blog/2009/11/05/why-efficiency-is-not-as-important-as-you-think<blockquote>
<p>“Efficiency is doing things right; effectiveness is doing the right things.” -Peter Drucker</p>
</blockquote>
<p>When people, usually analytical people, want to improve a situation, they tend to optimize efficiency: achieve maximum output for input. “Let’s reduce waste! Let’s simplify! Let’s make things smoother! Let’s try and get more out of the system!” I suppose the obsession with efficiency is explained in the Drucker quote: that efficiency is “doing things right.” Who wouldn’t want to do things right?</p>
<p>The problem with efficiency is that it has nothing to do with whether or not what you are currently doing is the right thing to do. Whereas effectiveness is about achieving the right result, or being on the right path.</p>
<p>Too many people assume a system is on the right path. If there is a problem, they address it by smoothing things out and making the process more efficient without questioning the larger system they were produced in. But if the system is going in the wrong direction, that’s only going to make the real problem worse. The push for more standardized testing in public education comes to mind.</p>
<p>What’s really important is effectiveness. In the end, it doesn’t matter if your business is spending the least amount possible or your computer program running as fast as possible or your lifestyle entirely streamlined. If it’s effective, it achieves the desired goal. Your business is producing value, your program is functionally useful, your lifestyle is making you happy. Effectiveness is qualitative. Efficiency is quantitative, which is why I think it’s so big with analytical people. Probably intelligent people in general.</p>
<p>If you think about it, intelligence, especially knowledge, is mostly concerned with efficiency. It’s more about how to solve problems, less about what problems to solve. Knowledge is a tool. It’s neutral. To what ends do you actually use it? That requires values and intention—the realm of wisdom. A wise person tends to be an effective person.</p>
<p>When approaching a problem, wisdom and pragmatism must frame intelligence. Before you start thinking about efficiency, you should step back and think about effectiveness. In computer engineering this idea spread with Donald Knuth’s quote “premature optimization is the root of all evil.” His argument is that 97% of efficiency optimizations are unnecessary to achieve functionality; once things work, you can determine which optimizations will be the most effective.</p>
<p>In a way, it’s a look before you leap argument. Don’t get me wrong with all this. Efficiency is terribly valuable and can improve a situation, but only if you’re on the right path. Just because a system is currently working or was previously working doesn’t mean it should be, or will in the future. You should always consider effectiveness before efficiency, even in “working” systems. Here’s why:</p>
<p>Effectiveness opens the door for efficiency, but efficiency can change the requirements for effectiveness.</p>
<p>Quantitative improvements can qualitatively change the situation if taken far enough. The game can change. For example, you can become so efficient at producing cars that production isn’t the problem anymore. Then it’s a question of variety, like choice of color. “You can have any color as long as that color is black,” Ford said, and soon after lost the lead in car manufacturing. Perhaps when business slowed, they tried to make their sales and marketing or administrative organization more efficient. They didn’t re-assess whether the thing they were doing so right (building cars so efficiently)… was the right thing. The effective thing.</p>
<p>Efficiency is important, but powerless without effectiveness. Always keep an eye on effectiveness.</p>Public Open Source Services2009-10-29T00:00:00-05:00http://progrium.com/blog/2009/10/29/public-open-source-services<p>Last night I went off and put up a wiki about an idea I’ve been thinking about for a while: <a href='http://poss.gliderlab.com/'>public open source services</a> or POSS. Think: public services or utilities on the web run as open source.</p>
<p>Unlike open source software, web services aren’t just source code. They’re source code that <em>runs</em>. They have to be maintained in order to keep running, and the resources they consume have to be paid for. This is why most web services are built using a business as the vehicle. This effectively constrains what you can build by framing it as something that needs to turn a profit or support you to work on it. But does it need to be that way? Can web services be built in a way that make it self-sufficient? Not needing some ambivalent leader to take responsibility for it?</p>
<p>I originally blogged about it in February 2007, 6 months after I first wrote about webhooks. Unfortunately my old blog isn’t online right now. Back then, I was trying to solve the administrative problem. How do you maintain the servers in an open source way? My idea then was to build a self-managing system using something like cfengine or Puppet, where the recipes and configurations are kept with the publicly available source code. As new configurations are checked in, the server(s) adopt the new directives and continue to self-manage.</p>
<p>The practicality of such a setup is a little far fetched, but seemed pretty feasible for smaller projects. However, since the release of Google App Engine, this concern for simple web applications has disappeared. Google just automates the system administration, and scaling! This means to run the app, you just have to write the code and hit deploy. That’s a huge step! Administration concerns? Pretty much solved.</p>
<p>The next thing is the financial concern. How do you pay for it? Or rather, how does it pay for itself? This took longer to figure out, but here we are. From the wiki essay:</p>
<blockquote>
<p>You use the same Google Merchant account that App Engine debits as the one that accepts donations. This way no bank account is involved. Then you track the money that goes into the account (using the Google Merchant IPN equivalent). Then you look at your usage stats from the App Engine panel and predict future usage trends. Then calculate the cost per month. Then divide the cash in the account by that and you have how long the service will run. You make this visible on all pages (at the bottom, say) that this service will run for X months, “Pay now to keep it running.” You accept any amount, but you are completely clear about what the costs are. And this is all automated.</p>
</blockquote>
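<p>The arithmetic in that quote is deliberately simple. A sketch, with invented numbers:</p>

```python
def projected_monthly_cost(recent_costs):
    # Naive projection: average of recent months' App Engine bills.
    return sum(recent_costs) / len(recent_costs)

def runway_months(balance, recent_costs):
    # How long the donations in the account keep the service running.
    return balance / projected_monthly_cost(recent_costs)

# e.g. $90 in the merchant account and bills of $28, $30, $32 over the
# last three months: the banner would say the service runs 3 more months.
```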
<p>Take the humans out of the loop!</p>
<p>Then you rely on the same sort of community approach of open source to contribute to the application. Like a few members of the project community are given certain rights, some will be given permission to deploy the app from time to time for updating the running service.</p>
<p>If the service isn’t useful, nobody uses it, it’s not paid for, it disappears. If it is useful, people will pay for it to keep it running. They are assured they are paying operating costs, which are significantly lower than most because it doesn’t include paying for human resources! Volunteers might need to meddle with settings, but otherwise, the coders are in control and the community accepts or denies changes made by whoever wants them.</p>
<p>So if this is interesting, read the <a href='http://poss.gliderlab.com/'>full essay I wrote up on the wiki</a>. It’s been my intention to prototype and validate this model with many of my projects.</p>Why minimalist software wins at workflow2009-06-26T00:00:00-05:00http://progrium.com/blog/2009/06/26/why-minimalist-software-wins-at-workflow<p>Recently I’ve been evaluating software to help support agile/scrum development on our team, and ideally to roll into our NASA Code product for others to use. We’re already married to <a href='http://trac.edgewall.org/'>Trac</a>, so we’ve been playing with <a href='http://www.agile42.com/cms/pages/agilo/'>Agilo</a> and are looking at some of the other agile plugins for Trac. Unfortunately they’re all so heavyweight, despite some that claim not to be.</p>
<p>I came back to a realization I’m sure a lot of us have had: most software sucks. Especially software that’s intended to augment some real-life process. When asking friend/colleague <a href='http://timothyfitz.com'>Timothy Fitz</a> about recommendations on agile tools, he said: “A board and post-its. Seriously.”</p>
<p>This is part of the reason most enterprise software sucks so terribly. Enterprise is about lots of real-life process and workflow, and given that process augmentation software even in small doses generally sucks, large amounts of it will suck exponentially.</p>
<p>A lot of us have learned that less software is more effective. One major attraction of Trac was its goal of staying out of the way through minimalism. The trick with minimalism, in general, is knowing what’s actually important; the essence of the message or design. This is a big part of my design process. Asking, “How can I fold these requirements into fewer features and UI?” instead of directly implementing a feature for every requirement.</p>
<p>The other thing about minimalism is that, like abstraction (another form of compression), everything you leave in the design makes such a huge difference. In programming, when you make abstractions, you’re deciding what you can assume. This means abstractions can go in different directions depending on the assumption requirements of what’s going to use the abstraction. The risk with minimalist software is that a simple design choice can drastically change the direction of the abstraction and make or break whether the software fits your needs.</p>
<p>Luckily, minimalism buys you a sort of abstraction that can enable projection. By this I mean that users can project their actual process and workflow onto the software. If it doesn’t have features that impose a particular process, users are free to do what works for them. This is why wikis are so powerful.</p>
<p>Coming back to Timothy’s “a board and post-its” remark, why do you even need software? If you can do it without software, why would you want to bring software in to slow things down?</p>
<p>Software does have a couple strengths. First, it encodes process in a way that means you can automate parts of it. Nobody has to worry about manually typesetting when using a word processor. Second, it persists and organizes information that would normally be lost in handwritten notes, or worse, somebody’s head. The trick is getting these advantages without getting in the way.</p>
<p>A naive approach to software design is thinking that perfectly modeling a system, such as your development process, is the way to good software. I used to think this. It sounds great because then you can programmatically reason about every aspect of the system. But in the real-world, no two systems are exactly alike. In fact, a given system can change quite a bit in its lifetime. So there’s really no point to modeling systems with that kind of precision.</p>
<p>However, I’m seeing a lot of this in agile/scrum software. Requirements have stories, stories have tasks, organized into iterations and releases. CRUD for every noun mentioned in scrum. This is on top of abstractions in a direction different from what we need. Numbers where it doesn’t really matter. Nice pie chart breakdowns we’ll rarely use. Top it off with horrible UI, since with all these features there isn’t time to make them easy to use.</p>
<p>Honestly, <a href='http://www.pivotaltracker.com/'>Pivotal Tracker</a> seems to have the best abstraction of agile. It folds requirements, stories and tasks into just stories. It automatically and dynamically creates iterations and calculates velocity. It keeps you on a single page, keeping you focused on what’s important.</p>
<p>Unfortunately, we can’t use Pivotal Tracker since we’d need it on our servers and the licensing they offered doesn’t scale if we want to essentially give it away as part of NASA Code. It’s likely I’ll want to just start nudging Trac in the right direction using Pivotal Tracker as a model reference, pulling together code from Agilo and other plugins. If there’s one thing that complements minimalist design, it’s an extension architecture, and Trac has an excellent plugin system.</p>
<p>Anyway, as far as augmenting process and workflow, I’ve always liked the idea of starting with a wiki and lazily formalizing the process into custom software as needed. As long as you can keep it under control, mind your requirement abstractions, and avoid writing too much software.</p>Oh no! Hackers!2009-06-17T00:00:00-05:00http://progrium.com/blog/2009/06/17/oh-no-hackers<p>That’s right. Hide your floppies and cover your Ethernet ports. Virus-laden hackers are coming to take over your computer, steal your passwords, and do terrible things! Like hack your MySpace! Oh noes!</p>
<p>I guess I’m getting over the fact that we probably won’t be able to undo the damage done with the public’s perception of what a “hacker” is. Perhaps, though, we can overload it to the point of ambiguity, so there’s at least some question of context whether it’s the good kind or the bad kind.</p>
<p>The problem is that positive connotations aren’t enough. The people who would venture to understand “hackers” the slightest bit more than what they hear in the headlines are going to pretty quickly find the “good hackers” … they’re white hats, right?</p>
<p>The other day I was asked by somebody (who should know better) whether I had hacked a website that was recently compromised. Seriously? My response was “I barely have enough time to do important things, let alone something that would be a waste of my time.”</p>
<p>I self-identify as a hacker. I invite my friends over to “hack.” I started a party for “hackers and thinkers” (a very intentional choice of words). I’m co-founding a community center called Hacker Dojo. What are we doing at all these functions? Building and learning.</p>
<p>We use the term perhaps too liberally, but always implying tribute to the true hackers that, as Steven Levy put it, “regard computing as the most important thing in the world.”</p>
<p>These people push the envelope of what’s possible through hands-on exploration, driven by relentless curiosity and a desire to challenge the status quo. Steve Wozniak, Lee Felsenstein, Linus Torvalds, Tim Berners-Lee, John Carmack…</p>
<p>Hell, there’s something big behind this idea: why stop at computing? Buckminster Fuller, Nikola Tesla, Richard Feynman, Alfred Kinsey, Ben Franklin…</p>
<p>Perhaps we’re generalizing too far. Perhaps we’re rendering “hacker” meaningless. Or are we giving it more meaning? Getting down to its essence. I wouldn’t be defending this idea so strongly if I didn’t think it had some great significance to humanity.</p>
<p>What upsets me is that many who would identify as hackers in this sense seem to be afraid to claim it, most likely for fear of confusing the layman who has the media’s myopic view of hackers. You’ve never heard of the canonical conference for real hackers. No, it’s not DefCon. (Just get up and leave.) It’s a conference called Hackers. You’ve never heard of it because they keep it secret!</p>
<p>This conference was started to gather together everybody mentioned in Steven Levy’s book <a href='http://www.amazon.com/Hackers-Computer-Revolution-Steven-Levy/dp/0141000511/ref=sr_1_2?ie=UTF8&s=books&qid=1245307316&sr=8-2'>Hackers: Heroes of the Computer Revolution</a> (one of the last good publications on hackers, and it came out in 1984!). It holds all the values of true hackerism and has been happening for 25 years. But they won’t promote it! It goes by a fake name and even has every mention of “hacker” on its website replaced with an image so it won’t be indexed. Seriously??</p>
<p>Trying to supplant public perception of hackers by just saying they’re something different and providing a better name nobody uses (“crackers”) is not going to work. It hasn’t worked. They need something to replace those visions of crackers with. We need tangible examples and stories. We need heroes. Heroes willing to wear the title.</p>
<p>Luckily, we have a new generation of hackers. One that has started a global movement called <a href='http://hackerspaces.org/'>hackerspaces</a>, probably one of the biggest things for hackers in years. Our local hackerscene fostered by Silicon Valley culture and events like <a href='http://devhouse.org'>SuperHappyDevHouse</a> have led to a hackerspace we hope will have a big impact. One that proudly wears the name hacker: <a href='http://hackerdojo.com'>Hacker Dojo</a>.</p>Web hooks to revolutionize the web2007-05-03T00:00:00-05:00http://progrium.com/blog/2007/05/03/web-hooks-to-revolutionize-the-web<p>There once was a command line. It was a powerful thing. Not only could you navigate your filesystem and launch applications, but you could program shell scripts to automate tasks and make convenient shortcuts. It also had pipes.</p>
<p>One of the major sources of power on the Unix command line is the simple construct of input and output. A program can read from <code>STDIN</code> and can write to <code>STDOUT</code>, and you have a fair amount of control over re-routing them however you want. Most commonly this is used to chain commands together, “piping” the output of one to the input of another.</p>
<p>This is infrastructure. Infrastructure that encourages simple, independent programs to be made almost exclusively for the purpose of chaining with other commands. These are commands like <code>cat</code>, <code>grep</code>, <code>uniq</code>, <code>wc</code>, <code>sort</code>, <code>nc</code>, and others. Many of them are useless by themselves, but together they achieve more than the sum of their parts, especially when combined with larger programs. This is made possible by the simple idea of input and output.</p>
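<p>To make the wiring concrete, here is a minimal sketch of that chaining in Python, using the standard <code>subprocess</code> module to route one program’s <code>STDOUT</code> into the next one’s <code>STDIN</code>. The example pipeline (counting unique words with <code>tr</code>, <code>sort</code>, and <code>uniq</code>) is illustrative, not from the original post:</p>

```python
import subprocess

# Find the unique words in a sentence by chaining three programs,
# routing each one's STDOUT into the next one's STDIN -- the same
# wiring as the shell pipeline: tr ' ' '\n' | sort | uniq
text = b"the quick brown fox jumps over the lazy dog\n"

tr = subprocess.Popen(["tr", " ", "\n"],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=tr.stdout, stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq"], stdin=sort.stdout, stdout=subprocess.PIPE)

tr.stdin.write(text)
tr.stdin.close()          # EOF lets the whole chain drain and exit
out = uniq.communicate()[0]
print(out.decode())       # one line per unique word, sorted
```

<p>Each program stays trivially simple; the pipe infrastructure is what makes the combination useful.</p>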
<p>This idea was implemented on Unix in 1977. Twenty years later, Jon Udell expressed a vision of web sites as data sources that could be reused and remixed, and a new programming paradigm that took the whole Internet as its platform. This led Tim O’Reilly to ask in 2000:</p>
<blockquote>
<p>What is the equivalent of the pipe in the age of the web?</p>
</blockquote>
<p>There seems to be a resounding consensus that the answer is feeds. The name sounds very promising; it sort of makes you imagine feeds like in the telecom world. Data coming directly to you. But this is misleading, because feeds aren’t about data coming to you. They’re about you getting that data yourself. A lot. Over and over again. This is polling.</p>
<p>If you could avoid polling, you probably would. If you’re building an app that works with a feed, you have to write a polling system. This means messing with the often-hard-to-debug crontab or writing and managing a full-fledged daemon. Then you have to worry about caching and parsing. Feeds seem to be made for the browser because the browser does a lot of this work for you, but it’s a different story if you’re constantly polling feeds or APIs in the backend. Then it almost becomes too much work to do anything simple with feeds and APIs. If you want to use feeds for real-time notification, have fun setting that up.</p>
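<p>The boilerplate looks roughly like this, a minimal Python sketch of a feed poller. The feed URL, the one-entry-per-line “parsing,” and the five-minute interval are all illustrative assumptions, not anything from a real feed library:</p>

```python
import time
import urllib.request

def poll_feed(url, seen, fetch=None):
    """Fetch the feed and return only the entries not seen before.

    `fetch` is injectable so the HTTP GET can be swapped out in tests;
    real feed parsing is reduced to one-entry-per-line for brevity.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u).read().decode()
    body = fetch(url)
    new = [line for line in body.splitlines() if line and line not in seen]
    seen.update(new)
    return new

def poll_forever(url, interval=300):
    """Wake up, fetch, diff, sleep. Most requests find nothing new,
    and changes are only noticed up to `interval` seconds late."""
    seen = set()
    while True:
        for entry in poll_feed(url, seen):
            print("new entry:", entry)
        time.sleep(interval)
```

<p>And that still leaves out caching, error handling, and keeping the daemon itself alive.</p>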
<p>I also think there’s a problem with the command line metaphor in the first place. The web is not linear. Web apps aren’t synchronous. A better metaphor would be a daemon process. How do they communicate? IPC? Sockets? Queues?</p>
<div class='alert alert-info'>
Note from 2012: Remember, this was 2007. Web developers didn't have Node.js or EventMachine, and mainstream developers didn't understand queuing systems or async operations. Often all they had was Apache and PHP.
</div>
<p>Unfortunately, web stacks are stateless request processors, so you can’t really use sockets. You could use Amazon SQS or some other queuing system, but queues often just move the polling to somewhere else. What we need is something simple, stateless, and ideally real-time. We need to push.</p>
<p>This is where web hooks come in. Web hooks are essentially user-defined callbacks made with HTTP POST. To support web hooks, you let the user specify a URL your application will POST to, and for which events. Now your application is pushing data out wherever your users want. It’s pretty much like re-routing <code>STDOUT</code> on the command line.</p>
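<p>The provider side can be sketched in a few lines of Python. This is an illustrative sketch with an in-memory registry; the names <code>hooks</code>, <code>register_hook</code>, and <code>fire_event</code> are invented for the example, not an existing API:</p>

```python
import json
import urllib.request

# Hypothetical in-memory registry: event name -> user-supplied URLs.
hooks = {}

def register_hook(event, url):
    """Subscribe a user's callback URL to an event."""
    hooks.setdefault(event, []).append(url)

def fire_event(event, payload, post=None):
    """POST the event payload to every URL registered for it.

    `post` is injectable for testing; the default performs a real
    HTTP POST with a JSON body.
    """
    if post is None:
        def post(url, body):
            req = urllib.request.Request(
                url, data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
    body = json.dumps({"event": event, "data": payload}).encode()
    for url in hooks.get(event, []):
        post(url, body)
```

<p>Whenever something interesting happens, the application calls <code>fire_event</code> and the data is pushed out immediately; no one has to poll.</p>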
<p>We’re already sort of doing this with pingbacks on blogs. However, the difference with web hooks is that the payload is arbitrary event data and the target URLs are user web scripts or handlers. From there, the users can do whatever they want.</p>
<p>The idea here is to create new infrastructure. New opportunities. I’ve been thinking a lot about the possibilities of a web hook enabled web, and it makes me really excited. Because it’s such open ended infrastructure, I’m sure the possibilities extend well beyond what I can think of. Especially when combined with our growing ecosystem of APIs and feeds.</p>
<p>Web hooks are easy to implement for application developers. You implement a UI or API to let users specify target URLs, and then you make a standard HTTP POST when events happen. This is a fairly trivial operation in most environments, since you already do it to use other web APIs.</p>
<p>Let’s be clear. When I talk about the user of web hooks, it’s often a power user or developer. But being based on HTTP POST makes it very accessible for mainstream web developers. They already know how to work with POST variables. And PHP hosting is widely available and practically free. I think PHP would become a popular language for writing hook scripts.</p>
<p>Just think about it. Writing all the boilerplate polling and parsing infrastructure just to use a feed? Or writing a little PHP script that has all the incoming data in <code>$_POST</code>. Plus it’s real-time. Which has a lower barrier to entry when somebody gets a bright idea on how to use all this data we’ve “opened up” on the programmable web?</p>
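<p>A hook script really is that small. The PHP version imagined above would read its fields straight out of <code>$_POST</code>; here is the same idea sketched in Python, parsing a form-encoded POST body. The field names (<code>event</code>, <code>author</code>, <code>message</code>) are invented for illustration:</p>

```python
from urllib.parse import parse_qs

def handle_hook(raw_body):
    """Parse a form-encoded webhook body -- the moral equivalent of PHP
    dropping everything into $_POST -- and react to the event."""
    fields = {k: v[0] for k, v in parse_qs(raw_body).items()}
    if fields.get("event") == "commit":
        return "new commit by %s: %s" % (fields["author"], fields["message"])
    return "ignored"
```

<p>All the interesting data is just there in the request, in real time, with none of the polling machinery.</p>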
<p>The Unix pipe is simple because it’s about linear input and output of text streams. The web is very different. At a high level, I think web hooks achieve the same simplicity but more appropriately for the web. When coupled with our existing ecosystem of feeds and APIs, we’ll have an even more powerful platform than what pipes gave Unix.</p>