Nov 26, 2012

X-Callback Header: Evented Web Building Block

Since this posting, we decided to adopt "best current practice" and drop the X- prefix as described in RFC 6648. Future posts refer to this as the Callback header.

Webhooks is the simple concept of HTTP callbacks. It expands on the request/response model of HTTP, giving you the semantics of callbacks in programming. Request/response gives you one response for one request in one synchronous operation; it’s like invoking a function and getting a return value. With callbacks, once you register a callback, that callback will receive one or more invocations, perhaps minutes or hours apart.

Callbacks are a necessary component of any evented or reactor-based system, like Node.js, Twisted, or EventMachine. So, naturally, HTTP callbacks are necessary to achieve the Evented Web.

Modeling callbacks in HTTP is fairly straightforward. The callback is a URL. You perform an HTTP request against an application to register a callback URL. The application then performs an HTTP request to that URL to invoke the callback.

Those high-level requirements are enough to set anybody in the right direction to implement webhooks or HTTP callbacks effectively for their application. The problem is that every application then implements the specifics differently. That’s fine for providing a callback paradigm within a single application, but it doesn’t let us build on the paradigm. The Evented Web needs to agree on some standards, and the X-Callback header is one of those standards.

X-Callback Header

The X-Callback header is a proposal for a common way to describe HTTP callbacks, primarily in the case of registering them. It does not get into the specifics for different ways of using HTTP callbacks, so it’s more of a building block for APIs or larger protocols such as HTTP Subscriptions, which I mentioned in my previous post.

Here’s what it looks like to use X-Callback:

X-Callback: <http://example.com/callback>; method="post"

The format is directly borrowed from the Link header used for responses. You provide a URL and then optional key-value parameters. In the case above, the HTTP method for invoking the callback was specified as a parameter.

Here is a more formal description of the header:

X-Callback     = "X-Callback" ":" #( "<" URI ">" *( ";" callback-param ) )
callback-param = token [ "=" ( token | quoted-string ) ]
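
Note the # rule: a single header can carry a comma-separated list of callbacks, so several can be registered at once. For example (the URLs here are just illustrations):

X-Callback: <http://example.com/primary>; method="post", <http://example.com/backup>; method="put"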

Since this is just the beginning of the conversation, there are no “built-in” callback parameters in this definition. They’re effectively all extensions. However, these are what I’d propose for standard parameters:

  • method: The HTTP method preferred for invoking this callback. Servers can ignore or override it based on their own policies, but this lets the requester optionally state a preference.
  • secret: The secret to be used for signing callback requests. More on this in the next section.
  • rel: The relationship of this callback to this request, similar to the rel of the Link header. This lets you specify the role of this callback, which is useful when multiple callbacks are provided. It effectively lets you classify the callbacks.
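
Putting those together, a registration carrying all three proposed parameters might look like this (the URL, secret, and rel value are only illustrative):

X-Callback: <http://example.com/callback>; method="post"; secret="opensesame"; rel="logs"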

Authenticating with Signatures

A common pattern across most implementations of webhooks has been the use of signatures for authenticating the callback “invocation” requests. Whether built into the X-Callback header spec or defined as a separate extension, a standard way of providing a secret and then building and including a signature would be a Good Idea. The following is a proposal based on PubSubHubbub’s signature model, but it’s not that different from the majority of implementations out there.

We start with a shared secret. Transmission of this secret can be done out of band (through a dashboard, for example), or the secret can be provided via the secret parameter of the X-Callback header during registration.

The secret can then be used with HMAC to sign anything. In the case of callbacks, it signs the body of the callback request. Since HMAC can be used with different hashing techniques, the technique used is specified along with a hexadecimal digest of the HMAC signature. This is put in the X-Signature header of requests:

X-Signature: sha1=0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33

Now the callback handler can rebuild this signature, knowing the secret and having the content body and the hash technique. Authenticating is then a matter of comparing the rebuilt signature with the one provided in the X-Signature header.
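
To make this concrete, here’s a minimal sketch in Python of both sides, using the standard library’s hmac and hashlib modules. The function names and values are mine, just for illustration:

import hashlib
import hmac

def sign(secret, body):
    # Build the X-Signature value: "sha1=" plus the hex digest of the
    # HMAC-SHA1 of the raw request body, keyed with the shared secret.
    digest = hmac.new(secret.encode(), body, hashlib.sha1).hexdigest()
    return "sha1=" + digest

def verify(secret, body, header_value):
    # Rebuild the signature and compare in constant time, which avoids
    # leaking information through timing differences.
    return hmac.compare_digest(sign(secret, body), header_value)

body = b'{"payload": "Hello world"}'
header = sign("opensesame", body)          # sender puts this in X-Signature
assert verify("opensesame", body, header)  # handler checks it on receipt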

Signing lets the callback handler be more certain of the source without requiring SSL. Signatures become much less necessary if all requests use HTTPS. But having this simple complement to X-Callback makes it easy when you need it, and it may help unify all the different approaches that are effectively doing the same thing.

Example Callback Flow

Let’s use all this in an example, showing you the actual HTTP requests. First, we’re going to register a callback at a particular endpoint:

POST /callbacks/register HTTP/1.1
Host: server-example.com
X-Callback: <http://example.com/callback>; method="post"; secret="opensesame"
Content-Length: 0

The server can respond however it likes, since the X-Callback header doesn’t specify anything more than how to hand the server a callback. Let’s assume it returned 200 OK.

Now, whenever it likes, the server will perform an HTTP POST on the callback URL. Since a secret was provided, the server will include a signature using the X-Signature header. Here’s what one of those requests might look like:

POST /callback HTTP/1.1
Host: example.com
Content-Type: application/json
Content-Length: 26
X-Signature: sha1=76afe1da675cf6d3d59c71a4af44dafc69fd03f0

{"payload": "Hello world"}

You’ll notice we’ve stayed completely out of the content layer of HTTP. This is quite intentional. This gives implementors lots of flexibility and keeps this a “pure” extension to HTTP.

A Building Block

As I’ve mentioned, this header is intended to be used in APIs and protocols that use callbacks in different ways. The obvious example is HTTP Subscriptions, which will let you subscribe to events using HTTP callbacks.

Another example, which I’ll also talk about soon, is HTTP Response Redirection. Regular HTTP Redirection lets the server redirect the client’s request to another URL, whereas Response Redirection lets the client redirect where the server sends the response using an HTTP callback.

The X-Callback header is simple, focused, and content neutral. Hopefully this makes it a powerful building block for other technologies of the Evented Web.

Nov 19, 2012

From Webhooks to the Evented Web

Back in 2007 I started thinking and talking a lot about an idea called webhooks. Over the following few years I evangelized it, spending a lot of my free time giving talks and building tools around the idea. Some of these tools are still around today, including Localtunnel and RequestBin (originally PostBin). There were others that might not be around anymore: MailHooks, ClickHooks, TwitterHooks, Scriptlets, and a few others.

Webhooks wasn’t really a new technology in the sense that there was a specification or tangible piece of software. It was more of an architectural pattern, and a loose one at that. To me it was just a different way to think about web applications, and it opened up a lot of new possibilities.

I was really excited by those possibilities, so I started telling people about it. The only problem was that it was this semi-vague idea. I often spoke in high-level notions. It was hard for some people to understand at the time. I think some people mostly got it, but a lot of people didn’t get it and thought they did.

Confusion

For example, the name “webhooks” was more about the pattern than any specific part of how it works. Webhooks involve two parts: an application that triggers a URL, and a handler at that URL. If you were to ask “where is the webhook?” different people would answer differently. Some say it’s the trigger side. Some say it’s the handler side. For me, “a webhook” is the combination of a trigger and a handler.

It also didn’t help that there was never a spec. I always avoided writing one because there were a lot of different implementations already out there, and you might implement it slightly differently for different use cases. It made sense to me to just keep it a general pattern and not limit what was possible.

I didn’t want to say, “Well, if you want to implement webhooks, it’s got to be JSON. And use this payload structure. And this is the API for registering them. Otherwise, it’s not webhooks.” Because if you didn’t do those things in a particular way, they’d still be webhooks to me.

Mild Success

After a while, the idea got out there and companies like Google, Facebook, WordPress, GitHub, Twilio, and other startups started implementing it. Five years later, I still often run into new applications or open source projects using the term webhooks. But even after all this time, there’s still a lot of cool stuff I wanted to see emerge that hasn’t really happened yet.

Some of it is starting to happen, though. For example, how do you write these handler scripts? I really didn’t believe in being able to just plug apps together like pipes. That’s something that could come later and would definitely need a spec. Instead, I wanted people to actually write handler scripts with code. That way they could make something that did whatever they wanted, exactly how they wanted.

To facilitate that, I wanted a service that would let you write and host these little handler scripts for processing HTTP webhook requests. I actually built a prototype of this called Scriptlets. It was a web app where you could write JavaScript, hit save, and then you’d have a little script at a URL that you could use for webhooks.

Scriptlets didn’t get very popular, though I didn’t push it very hard. There was a lot I wanted to do with it but there wasn’t enough demand to drive development, and I was so busy that it eventually became defunct.

Four years later, we actually have a service like this. I discovered it about a week ago. It’s called Webscript. It’s basically Scriptlets done right. Webscript is a web app where you can write Lua, hit save, and then you’ve got a little web service. It has basically everything you need to write webhook handler scripts.

The Ecosystem

Slowly, people are building out pieces of the ecosystem. You could say the webhooks paradigm was really about this ecosystem all along; at some point I realized that was what I’d been getting at with webhooks, and decided to give the ecosystem a name. That’s where the magic was.

I started calling this ecosystem the Evented Web. Like the Semantic Web and “programmable web,” it’s an umbrella term for a family of technologies coupled with a vision of what the world could be like. The Evented Web envisions a world where the programmable web that we have today of traditional web APIs is complemented by APIs that produce events through webhooks. Adding a callback mechanism to web APIs makes the web more like a giant evented framework.

Just like with Node.js, perhaps the most popular evented framework, there’s all kinds of innovation happening in the community. It’s a new way of thinking about things. Pipes and streams come up a lot in the Node.js world now, and similar sorts of things can be done across web applications with an Evented Web.

By the time I started talking more about the Evented Web instead of just webhooks, I was already pretty tired of talking about it all. I was sort of “over it” and I started to not care if people didn’t see or share this vision. I continued to think it was cool, but I started to move on to other interests.

The Future

These days, the people that really get it are starting to build some really neat things. Webhooks have spread enough that you can at least reference them or the idea of HTTP callbacks and not have to explain yourself. Now is maybe the perfect time for me to put a few specific projects into motion that could at least provide a tangible foundation for building out the Evented Web. Not just vague notions.

The most immediate thing is a lightweight spec for implementing webhooks. Specifically, I mean registering a callback URL and invoking that callback URL. I’ve intentionally put this off for a lot of reasons. I didn’t want to get it wrong. I didn’t want to leave people out. I wanted to capture best practices, which for the longest time we hadn’t figured out. But now might be the perfect time, because there is a lot on the verge of happening.

Stay tuned for my proposal for HTTP Subscriptions. It will be the first of several really cool developments for the Evented Web, from me and from others.

Sep 30, 2012

Piping into and out of the cloud with skypipe

Skypipe is a magical command line tool that lets you easily pipe data across terminal sessions regardless of whether the sessions are on the same machine, across thousands of machines, or even behind a firewall. It gives you named pipes in the sky and lets you pipe data anywhere.

I built it while on vacation over the last couple of months. I wasn’t intending to write software on my trip, but I just couldn’t help myself.

Skypipe is conceptually similar to named pipes or netcat, but with more power and a simpler interface. Here is a basic example using skypipe as you would a regular named pipe to gzip a file across shells:

$ skypipe | gzip -9 -c > out.gz

Your skypipe is now ready to receive some data from another shell process:

$ cat file | skypipe

Unlike named pipes, however, this will work across any machines connected to the Internet. You don’t have to specify a host address or set up “listen mode” like you would with netcat. In fact, unlike netcat, which is point to point, you could use skypipe for log aggregation. Here we’ll use named skypipes. Run this on several hosts:

$ tail -f /var/log/somefile | skypipe logs

Then run this on a single machine:

$ skypipe logs > /var/log/aggregate

Skypipe can also broadcast to multiple hosts. With the above setup, you can “listen in” by running this from your laptop, even while behind a NAT:

$ skypipe logs

You can also temporarily store data or files in the pipe, even several files, until you pull them out. On one machine load some files into a named skypipe:

$ cat file_a | skypipe files
$ cat file_b | skypipe files

Now, from somewhere else, pull them out in order:

$ skypipe files > new_file_a
$ skypipe files > new_file_b

Lastly, you can use skypipe like the channel primitive in Go for coordinating between shell scripts. As a simple example, we’ll use skypipe to wait for an event triggered by another script:

#!/bin/bash
echo "I'm going to wait until triggered"
skypipe trigger
echo "I was triggered!"

Triggering is just sending an EOF over the pipe, causing the listening skypipe to terminate. We can do this with a simple idiom:

$ echo | skypipe trigger

How does this magic work?

You’ll need a free Dotcloud account to use skypipe, but you don’t need to know anything about using Dotcloud.

When you first use skypipe it will want you to run a setup command (skypipe --setup). This will deploy a very simple messaging server to Dotcloud. From then on, skypipe will use your account to transparently find and use this server, no matter where you are. The server is managed automatically and runs on Dotcloud free of charge, so you never need to think about it.
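
To give a sense of the shape of that server, here’s a toy sketch in Python with pyzmq. This is not skypipe’s actual code, and the port numbers are arbitrary; a real version would multiplex named pipes and buffer data, which this toy does not:

import zmq

ctx = zmq.Context()

# Writers (cat file | skypipe) connect here and push data in.
frontend = ctx.socket(zmq.PULL)
frontend.bind("tcp://*:5555")

# Readers (skypipe > file) connect here and pull data out.
backend = ctx.socket(zmq.PUSH)
backend.bind("tcp://*:5556")

# Shovel messages from writers to readers forever. Note that PUSH
# load-balances across readers rather than broadcasting.
zmq.proxy(frontend, backend)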

Software with a service!

This is a new paradigm of creating tools that transparently leverage the cloud to create magical experiences. It’s not quite software as a service; it’s software with a service. Nobody is using a shared, central server, and no one needs to set up or manage their own server. The software deploys and manages its own server for you.

Thanks to platforms like Heroku and Dotcloud, we can now build software that leverages features of software as a service, yet is packaged and distributed like normal open source software.

I’m excited to see what else can be done with this pattern. Naturally, I’m already thinking about a number of other potential uses.

Using skypipe and getting involved

Skypipe is still an alpha piece of software. Be warned, there are some rough edges. That said, you can install skypipe with pip:

$ pip install skypipe

The user experience is not yet entirely optimized. One of the biggest issues is that skypipe checks for the server on every use. This check could happen less often, with the result cached, which would make it much snappier and on par with most command line utilities.

This and a few other issues are already tracked in GitHub Issues; feel free to take a whack at them. The codebase is intentionally small, documented, and written to be read, although there are no tests yet.

The project also depends on ZeroMQ, which requires a C extension to be compiled. Even using the pyzmq-static package, you still need certain header files (Python.h at the very least) to install skypipe, and not every environment has them. Ideally, I’d like to find a way to package skypipe with all its dependencies included. Perhaps PyInstaller can help with this.

Another feature I’m sure a lot of people will want (or complain about) is being able to run your own server and ignore the software with a service aspect. Since the server is packaged with the client, this is not far off from happening. Somebody just needs to make it happen.

Contribution ideas aside, I’m hoping skypipe will be useful to others besides myself. I was really going for a magical tool. I think a big part of this magic is the use of software with a service, which I consider a bit novel in itself. What do you think?

Sep 07, 2012

Let me tell you about my website

Since mid-July I’ve been on vacation, traveling around the world. Originally, the only project I allowed myself to work on was my website and blog. I quickly broke that rule with a number of new and existing projects. Nevertheless, as you can see, I did ship this site.

I’ve never been happy with my website or blog, perhaps because I’ve never been able to invest enough time into it. Over the years, I’ve at least been able to put a lot of thought into what I want and how I want to express myself. This attempt gets pretty close.

High-level Goals

Too many of my personal site designs have been dark and gloomy, often monochromatic. I think one of the biggest ideas going into this project was to make something bright and colorful. From a purely functional standpoint, I felt this was important. I also wanted to actually reflect my style. Too often I’ve settled for pre-designed blog themes that “kinda, sorta” match my sense of style and how I want to express myself. This time I would have full control.

As I’m starting to freelance again, I also wanted to have a good marketing tool for myself. Even if it just expressed the affordances I have as a free agent. If somebody discovered me online, they’d know they could buy my time. “Yes, I can be hired for any of these fine services.”

Combining my personal website with my blog is another thing I wanted to do. So far they’ve always been separate. Not only would this be for consistency and simplicity, but for future-proofing. Anything else I want to put online I can do with this site in a way that feels part of a whole. A whole that represents my identity and personal brand.

Aesthetic

Like I mentioned, I wanted bright and colorful. I also wanted simple and toy-like. I wanted it to feel well designed, but without a lot of modern web design tropes. This led to a minimalist foundation that I could sprinkle my favorite motifs on top of.

I actually only had one website in mind that I used for initial inspiration. In fact, I think when I came across this page of the Disqus website, I immediately started imagining a new personal website. Most of that initial vision has since disappeared, but with that framing I was able to move on to colors.

The colors on that page reminded me of one of my favorite kinds of infographics: transit maps. I quickly started poking around ColourLovers for palettes inspired by transit maps. I found a few and settled on one based on Tokyo’s subway map. Then I moved on to typeface.

I limited my options to what was available on Google Webfonts. Previously I was a fan of the Droid Sans and Droid Serif families. This time around, I used Open Sans as the primary font. For the header title (my name), I needed something different. I wanted something heavier but not wide. Ideally, I wanted a bold Futura Condensed, but a heavy Futura isn’t on Google Webfonts.

I struggled with this for a while, then by accident found VT323. It had the weight and shape qualities I was looking for, but I didn’t think I wanted a pixelated typeface. It seemed too cliche. However, when I tried it, it became obvious VT323 not only went well with my pixelated avatar, but it added more of me to the overall design without being too cliche.

The tree in the header was a late addition. I have a thing for trees as a symbol of nature, and I wanted to add more character to the design. Originally I was going to use Context Free to create a procedurally generated tree, but this proved too time-consuming to get what I wanted. A free vector-based tree was not hard to find.

Platform

I’ve become a fan of using GitHub Pages for simple websites. It’s hosted, free, and powered by Git. The site is versioned, editable online, and even forkable. The only limitation is that it only hosts static files.

Luckily, GitHub Pages has built-in support for Jekyll, which is more or less like pre-rendering a dynamic site. This not only gives you templating, layouts, and includes, but it’s “blog aware,” so you get pagination, metadata, and even related-post links. The only real dynamic bits of the site are blog comments and analytics, both of which are solved by client-side, JavaScript-powered services. I don’t care to have either of those in my Git repo anyway.

The other part of this plan I like is that it lets you write content in simple Markdown files kept in a Git repo. If I ever need to, I can take this repo anywhere and still have my blog posts as Markdown files. That is a far more ideal place to be in than most blogging platforms I’ve used in the past have left me.

Future

It’s still a work in progress. You may have noticed the homepage currently redirects to the blog index. I’m hoping to have more of a landing or intro page as the homepage. The idea there is to quickly communicate what I’m about and what I work on.

I also want to incorporate more hand-drawn elements into the design. A lot of my writing is well accompanied by diagrams and visuals, so I wanted to include more of these in my posts. Doing as many of these as I can with my drawing tablet will give the site more character. My character.

I was also thinking about “project pages” that describe and introduce the projects I’m involved in without being hidden as an old blog post. Perhaps more exciting, though, are project idea pages or “blueprints”. These would allow me to document projects I’m thinking about building, letting me get feedback and encourage collaboration before even starting.

Building a site from scratch was a pretty substantial investment. With the right tools and enough time to think through and iterate on the visual design, I now have something that works and that I can build on as needed. Not only that, I shipped before I got back from vacation. :)

May 11, 2010

Making a local web server public with localtunnel

These days it’s fairly common to run a local environment for web development. Whether we’re running Apache, Mongrel, or the App Engine SDK, we’re all starting to see the benefits of having a production-like environment right there on our laptops so we can iteratively code and debug our apps without deploying live, or even needing the Internet.

However, with the growing popularity of callbacks and webhooks, you can only really debug if your script is live and on the Internet. There are also other cases where you need to make what are normally private and/or local web servers public, such as various kinds of testing or quick public demos. Demos are a surprisingly common case, especially for multi-user systems (“Man, I wish I could have you join this chat room app I’m working on, but it’s only running on my laptop”).

The solution is obvious, right? SSH remote forwarding, or reverse tunneling. Using a magical set of SSH options and a public server you have SSH access to, you set up a tunnel from that machine to your local machine. When people connect to a port on your public machine, the connection gets forwarded to a local port on your machine, looking as if that port were on a public IP.

The idea is great, but it’s a hassle to set up. You need to make sure sshd is set up properly in order to make a public tunnel on the remote machine, or you need to set up two tunnels, one from your machine to a private port on the remote machine, and then another on the remote machine from a public port to the private port (that forwards to your machine).
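
For reference, here’s roughly what the manual version looks like, assuming sshd on the public machine has GatewayPorts enabled (the hostname and ports are placeholders):

$ ssh -N -R 0.0.0.0:8000:localhost:8080 you@public-host.example.com

Anyone hitting port 8000 on public-host.example.com now reaches port 8080 on your laptop. Workable, but hardly quick.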

In short, it’s too much of a hassle to consider it a quick and easy option. Here is the quick and easy option:

$ localtunnel 8080

And you’re done! With localtunnel, it’s so simple to set this up, it’s almost fun to do. What’s more, the publicly accessible URL has a nice hostname and uses port 80, no matter what port it’s on locally. And it tells you what this URL is when you start localtunnel:

$ localtunnel 8080
Port 8080 is now publicly accessible from http://8bv2.localtunnel.com ...

What’s going on behind the scenes is a web server component running on localtunnel.com. It serves two purposes: a virtual host reverse proxy to the port forward, and a tunnel register API (try going to http://open.localtunnel.com). This simple API allocates a port to tunnel on, and gives the localtunnel client command the information it needs to set up an SSH tunnel for you. The localtunnel command just wraps an SSH library and does this register call.

Of course, there’s also the authentication part. As a free, public service, we don’t want to just give everybody SSH access to this machine (as it may seem we do). The localtunnel user on that box was made just for this service. It has no shell. It only has a home directory with an authorized_keys file. We require you to upload a public key for authentication, and we mark that key with options saying it can only be used for port forwarding. Even then, it can’t be used for arbitrary port forwarding: because the forwarded port is private on the remote side, it can only be used through the special reverse proxy.
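
In authorized_keys, that lockdown looks something like this (a sketch of the general pattern with the key material elided, not the exact line used on localtunnel.com):

no-pty,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... localtunnel

With no PTY, no agent or X11 forwarding, and no shell on the account, the key is useless for anything but tunneling.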

So there it is. And the code is on GitHub. You might notice the server is in Python and the client in Ruby. Why? It just made sense. Python has Twisted, which I like for server stuff. And Ruby is great for command line scripts, and has a nice SSH library. In the end, it doesn’t matter what it’s written in. Ultimately it’s a Unix program.

Enjoy!
