
Serverless WebSockets with AWS Lambda & Fanout

The basics of adding realtime data push to your serverless backend


Serverless

Serverless is one of the developer world’s most popular misnomers. Contrary to its name, serverless computing does in fact use servers, but the benefit is that you can worry less about maintenance, scale, and configuration. This is because serverless is a cloud computing execution model where a cloud provider dynamically manages the allocation of machine and computational resources. You are basically deploying code to an environment without visible processes, operating systems, servers, or virtual machines. From a pricing perspective, you are typically charged for the actual amount of resources consumed and not by pre-purchased capacity.

Pros

  • Reduced architectural complexity
  • Simplified packaging and deployment
  • Reduced cost to scale
  • Eliminates the need for system admins
  • Works well with microservice architectures
  • Reduced operational costs
  • Typically decreased time to market with faster releases

Cons

  • Performance issues — typically higher latency due to how compute resources are allocated
  • Vendor lock-in (hard to move to a new provider)
  • Not efficient for long-running applications
  • Multi-tenancy issues where service providers may run software for several different customers on the same server
  • Difficult to test functions locally
  • Different FaaS implementations provide different methods for logging in functions

AWS Lambda

Amazon’s take on serverless comes in the form of AWS Lambda. AWS Lambda lets you run code without provisioning or managing servers, and you only pay for your actual usage. With Lambda, you can run code for virtually any type of application or backend service, and Lambda automatically runs and scales your application code. Moreover, you can set up your code to automatically trigger from other AWS services, or call it directly from any web or mobile app.

WebSockets

A WebSocket provides a long-lived connection for exchanging messages between client and server. Messages may flow in either direction for full-duplex communication. A client creates a WebSocket connection to a server using a WebSocket client library. WebSocket libraries are generally available in every language, and browsers support the protocol natively via the WebSocket JavaScript object. The connection negotiation uses an HTTP-like exchange, and a successful negotiation is indicated with status code 101. After the negotiation response is sent, the connection remains open and is used to exchange message frames in either binary or UTF-8 text format. Peers may also exchange close frames to perform a clean close.
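
For example, a minimal browser-side client (the URL is a placeholder) looks like this:

var sock = new WebSocket('wss://example.com/stream');

sock.onopen = function () {
    // the HTTP-like negotiation completed with status 101; the socket is ready
    sock.send('hello');
};

sock.onmessage = function (event) {
    // messages can arrive at any time while the connection is open
    console.log('received: ' + event.data);
};

sock.onclose = function () {
    console.log('connection closed');
};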

Building AWS IoT WebSockets

Function-as-a-service backends, such as AWS Lambda, are not designed to handle long-lived connections on their own. This is because the function invocations are meant to be short-lived. Lambda is designed to integrate with services such as AWS IoT to handle these types of connections. AWS IoT Core supports MQTT (either natively or over WebSockets), a lightweight communication protocol specifically designed to tolerate intermittent connections.
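
As a rough sketch of what that looks like from a Node client, using the aws-iot-device-sdk package (the endpoint and topic names here are placeholders):

var awsIot = require('aws-iot-device-sdk');

// connect to AWS IoT Core using MQTT over WebSockets; with the 'wss'
// protocol the SDK signs the connection with AWS credentials taken
// from the environment
var device = awsIot.device({
    host: 'example.iot.us-east-1.amazonaws.com',  // placeholder endpoint
    clientId: 'example-client',
    protocol: 'wss'
});

device.on('connect', function () {
    device.subscribe('updates');
    device.publish('updates', JSON.stringify({hello: 'world'}));
});

device.on('message', function (topic, payload) {
    console.log(topic, payload.toString());
});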


However, this approach alone will not give you access to the raw protocol elements — and will not allow you to build a pure Lambda-powered API (if that is your intended use case). If you want this access, then you need to take a different approach.

Building Lambda-Powered WebSockets with Fanout

You can also build custom Lambda-powered WebSockets by integrating a service like Fanout — a cross between a message broker and a reverse proxy that enables realtime data push for apps and APIs. With these services together, we can build a Lambda-powered API that supports plain WebSockets.

This approach uses GRIP, the Generic Realtime Intermediary Protocol — making it possible for a web service to delegate realtime push behavior to a proxy component.

Fanout’s FaaS GRIP library makes it easy to delegate long-lived connection management to Fanout, so that backend functions only need to be invoked when there is connection activity. The other benefit is that backend functions do not have to run for the duration of each connection.

The following step-by-step breakdown is meant as a quick configuration reference. You can check out the GitHub libraries for the Node and Python integrations.

1. Initial Configuration

You will first configure your Fanout Cloud domain/environment and set up an API and resource in AWS API Gateway to point to your Lambda function, using a Lambda Proxy Integration.

2. Using WebSockets

Whenever an HTTP request or WebSocket connection is made to your Fanout Cloud domain, your Lambda function will be able to control it. To do this, Fanout converts incoming WebSocket connection activity into a series of HTTP requests to your backend.

3. You’ve Got Realtime

You now have a realtime WebSocket service driven by a Lambda function!

An Example

This Node.js code implements a WebSocket echo service. I recommend checking out the full FaaS GRIP library for a step-by-step breakdown, and for instructions on implementing HTTP long polling and HTTP streaming.

var grip = require('grip');           // GRIP utilities (not used directly in this echo example)
var faas_grip = require('faas-grip'); // Fanout's FaaS GRIP helpers for Lambda

exports.handler = function (event, context, callback) {
    var ws;
    try {
        ws = faas_grip.lambdaGetWebSocket(event);
    } catch (err) {
        callback(null, {
            statusCode: 400,
            headers: {'Content-Type': 'text/plain'},
            body: 'Not a WebSocket-over-HTTP request\n'
        });
        return;
    }

    // if this is a new connection, accept it
    if (ws.isOpening()) {
        ws.accept();
    }

    // here we loop over any messages
    while (ws.canRecv()) {
        var message = ws.recv();

        // if return value is null, then the connection is closed
        if (message == null) {
            ws.close();
            break;
        }

        // echo the message
        ws.send(message);
    }

    callback(null, ws.toResponse());
};

Overall, if you're not looking for full control over the raw protocol elements, then you may find it easier to try a Lambda/AWS IoT configuration. If you need more WebSocket visibility and control, then the Lambda + Fanout integration is probably your best bet.

High scalability with Fanout and Fastly

Fanout Cloud is for high scale data push. Fastly is for high scale data pull. Many realtime applications need to work with data that is both pushed and pulled, and thus can benefit from using both of these systems in the same application. Fanout and Fastly can even be connected together!

[Diagram: fanout-fastly]

Using Fanout and Fastly in the same application, independently, is pretty straightforward. For example, at initialization time, past content could be retrieved from Fastly, and Fanout Cloud could provide future pushed updates. What does it mean to connect the two systems together though? Read on to find out.

Proxy chaining

Since Fanout and Fastly both work as reverse proxies, it is possible to have Fanout proxy traffic through Fastly rather than sending it directly to your origin server. This provides some unique benefits:

  1. Cached initial data. Fanout lets you build API endpoints that serve both historical and future content, for example an HTTP streaming connection that returns some initial data before switching into push mode. Fastly can provide that initial data, reducing load on your origin server.
  2. Cached Fanout instructions. Fanout’s behavior (e.g. transport mode, channels to subscribe to, etc.) is determined by instructions provided in origin server responses, usually in the form of special headers such as Grip-Hold and Grip-Channel. Fastly can cache these instructions/headers, again reducing load on your origin server.
  3. High availability. If your origin server goes down, Fastly can serve cached data and instructions to Fanout. This means clients could connect to your API endpoint, receive historical data, and activate a streaming connection, all without needing access to the origin server.

Network flow

Suppose there’s an API endpoint /stream that returns some initial data and then stays open until there is an update to push. With Fanout, this can be implemented by having the origin server respond with instructions:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 25
Grip-Hold: stream
Grip-Channel: updates

{"data": "current value"}

When Fanout Cloud receives this response from the origin server, it converts it into a streaming response to the client:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: Transfer-Encoding

{"data": "current value"}

The request between Fanout Cloud and the origin server is now finished, but the request between the client and Fanout Cloud remains open. Here’s a sequence diagram of the process:

[Sequence diagram: fanout-flow]

Since the request to the origin server is just a normal short-lived request/response interaction, it can alternatively be served through a caching server such as Fastly. Here’s what the process looks like with Fastly in the mix:

[Sequence diagram: fanout-fastly1]

Now, guess what happens when the next client makes a request to the /stream endpoint?

[Sequence diagram: fanout-fastly2]

That’s right, the origin server isn’t involved at all! Fastly serves the same response to Fanout Cloud, with those special HTTP headers and initial data, and Fanout Cloud sets up a streaming connection with the client.

Of course, this is only the connection setup. To send updates to connected clients, the data must be published to Fanout Cloud.
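
As a sketch of what publishing might look like using the node-grip library (the realm ID and key are placeholders, and the exact configuration may vary between library versions):

var grip = require('grip');

// configure a publisher pointed at Fanout Cloud
var pub = new grip.GripPubControl({
    'control_uri': 'https://api.fanout.io/realm/your-realm',  // placeholder
    'control_iss': 'your-realm',                              // placeholder
    'key': Buffer.from('your-realm-key', 'base64')            // placeholder
});

// append a line of data to every open streaming connection
// subscribed to the 'updates' channel
pub.publishHttpStream('updates', '{"data": "new value"}\n',
    function (success, message, context) {
        if (!success) {
            console.log('publish failed: ' + message);
        }
    });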

We may also need to purge the Fastly cache, if an event that triggers a publish causes the origin server response to change as well. For example, suppose the “value” that the /stream endpoint serves has been changed. The new value could be published to all current connections, but we’d also want any new connections that arrive afterwards to receive this latest value as well, rather than the older cached value. This can be solved by purging from Fastly and publishing to Fanout Cloud at the same time.
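
For example, assuming URL purging is enabled on the Fastly service (the host and API key below are placeholders), a combined purge-and-publish could be sketched like this, reusing the pub publisher from above:

var https = require('https');

// purge the cached /stream response from Fastly by sending a PURGE
// request to the URL; the Fastly-Key header is only needed if the
// service requires authenticated purging
function purgeStream(callback) {
    var req = https.request({
        method: 'PURGE',
        host: 'example.global.ssl.fastly.net',   // placeholder
        path: '/stream',
        headers: {'Fastly-Key': 'your-api-key'}  // placeholder
    }, function (res) {
        callback(res.statusCode === 200);
    });
    req.end();
}

// purge first, then push the new value to connected clients
purgeStream(function (ok) {
    pub.publishHttpStream('updates', '{"data": "new value"}\n',
        function () {});
});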

Here’s a (long) sequence diagram of a client connecting, receiving an update, and then another client connecting:

[Sequence diagram: fanout-fastly3]

At the end of this sequence, the first and second clients have both received the latest data.

Rate-limiting

One gotcha with purging at the same time as publishing: if your data rate is high, constant purging can negate the caching benefit of using Fastly.

The sweet spot is data that is accessed frequently (many new visitors per second), changes infrequently (on the order of minutes), and needs its changes delivered instantly (sub-second). An example could be a live blog, where most requests can be served from cache.

If your data changes multiple times per second (or has the potential to change that fast during peak moments), and you expect frequent access, you really don’t want to be purging your cache multiple times per second. The workaround is to rate-limit your purges. For example, during periods of high throughput, you might purge and publish at a maximum rate of once per second or so. This way the majority of new visitors can be served from cache, and the data will be updated shortly after.
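
Here is a minimal sketch of that rate limiting, reusing the purgeStream and pub helpers from the earlier sketches. Bursts of changes are coalesced so that at most one purge-and-publish happens per second, always carrying the latest value:

var MIN_INTERVAL = 1000;  // at most one purge+publish per second
var lastFlush = 0;
var pending = null;
var timer = null;

function flush() {
    lastFlush = Date.now();
    var value = pending;
    pending = null;
    purgeStream(function () {
        pub.publishHttpStream('updates',
            JSON.stringify({data: value}) + '\n', function () {});
    });
}

// call this on every data change
function update(value) {
    pending = value;
    var elapsed = Date.now() - lastFlush;
    if (elapsed >= MIN_INTERVAL) {
        flush();
    } else if (!timer) {
        // too soon; schedule a single deferred flush with the latest value
        timer = setTimeout(function () {
            timer = null;
            flush();
        }, MIN_INTERVAL - elapsed);
    }
}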

An example

We created a Live Counter Demo to show off this combined Fanout + Fastly architecture. Requests first go to Fanout Cloud, then to Fastly, then to a Django backend server which manages the counter API logic. Whenever a counter is incremented, the Fastly cache is purged and the data is published through Fanout Cloud. The purge and publish process is also rate-limited to maximize caching benefit.

The code for the demo is on GitHub.

Examining Mature APIs (Slack, Stripe, Box)

In our previous blog post, we discussed the disconnect between API pricing plans where you pay monthly for a set number of calls and regular developer use cases. We think competition will drive new pricing models that are more developer friendly – and a potential approach could be charging for calls based on their business value. Examining webhook events available via API from Stripe, Slack, and Box gives us a forward look into how this could work.

What’s a mature API?

Forbes nicely summarizes where they see API development going in this graphic (ignore the “customer-driven platform revolution” portion):

[Graphic: forbes]

They make a valid point that APIs become more valuable as the data that flows from them becomes bi-directional – APIs are not only returning data based on calls, but actively pushing out data based on API activity.

This data push generally starts around activity with high business value – so we’re going to examine APIs from Stripe, Slack, and Box to get an idea of what events they make available.

Slack has a separate “Events API”

Slack has chosen to implement a separate Events API for developers who want to build apps that respond to events within Slack. Here’s the full list of event types that they can push in realtime as they happen.

Looking at this list in more detail, it’s focused around key messaging and collaboration activities:

  • Creating and updating channels
  • Uploading, sharing, and commenting on files
  • Messages being posted to various channels
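
Receiving these events is mostly a matter of standing up a webhook endpoint. As a rough sketch using Node's built-in http module (the event handling itself is illustrative), the receiver must first answer Slack's url_verification challenge, then handle event_callback payloads:

var http = require('http');

http.createServer(function (req, res) {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        var payload = JSON.parse(body);

        // Slack verifies the endpoint by sending a challenge to echo back
        if (payload.type === 'url_verification') {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end(payload.challenge);
            return;
        }

        // realtime events arrive wrapped in event_callback payloads
        if (payload.type === 'event_callback') {
            console.log('event:', payload.event.type);
        }
        res.writeHead(200);
        res.end();
    });
}).listen(8080);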

Box uses event triggers

Box uses webhooks with event triggers attached to files and folders, notifying you when those events occur. Here’s their full list of events for files and folders.

As expected for Box, events are focused around file management and collaboration:

  • Uploading, previewing, and downloading files
  • Comment and task assignment creation and updating

Stripe sends a variety of events

Stripe sends a wide variety of events around payments, keyed to both internal and external usage:

  • Account creation and updating
  • Product or plan creation
  • Card charges and updates

What does it mean?

The events that these mature APIs have chosen to make available for realtime push have substantial business value for developers building apps using their functionality. As more APIs begin to offer push of data, they may move to a blended pricing model that charges more for these high-value events. We’re interested to see what happens!


Realtime data for smart notifications


It’s becoming the new normal that messaging and collaboration apps and platforms are available across multiple devices.

Business tools like Slack and JIRA offer feature-rich mobile apps, and users increasingly consume content from social networks like Facebook on their mobile devices instead of a desktop or laptop.

This isn’t a surprise – and we’re here to share our perspective on how developers can use realtime data to provide cross-platform users with the best notification experience.

Mary Meeker’s 2017 Internet Trends Report tracks the trend towards increasing mobile adoption:

[Slide: meeker]

What’s not stated explicitly in this slide is that much of this engagement occurs simultaneously – it’s not uncommon for users to have an app open on their desktop and phone at the same time.

‘Dumb’ notifications produce a poor user experience

Simultaneous use of cross-platform apps has created a user experience issue that many of us are familiar with. When a new Trello card is assigned to me, I get a push notification on my phone, a ping in the Trello interface, and an email in my inbox. Due to my Trello integration with Slack, things quickly get worse – I get a notification on Slack on each of my devices. I can get up to 6 notifications tied to a single event.

This isn’t ideal – and as more devices become connected, the problem will only be compounded. Imagine a future where your phone, smartwatch, smart TV, and smart thermostat are all buzzing simultaneously. It doesn’t need to be this way.

Collaboration and messaging app developers can get smart

We didn’t come up with the idea of ‘smart’ notifications (entire companies like Intercom and OpenBack are built to enable them) – but we do have a perspective on how app developers can use realtime data to enable them.

Realtime data is already present in many chat or collaboration apps – typing indicators, read receipts, and live editing are all features that we take for granted. The next step for developers is taking a wider variety of realtime data into account when building notifications into their user experiences.

Luckily, mobile devices offer a wealth of realtime data to developers who want to do this:

Presence and attention-awareness (knowing which device a user is active on) allows a single ping to that device, instead of a ping to all devices. Results-driven logic can drive a secondary notification to another device or channel in the instance the first notification is not responded to. This can lead to some pretty complex logic, as in the case of Slack’s notification tree below:

[Diagram: slack_notifications]

Slack’s blog post is worth reading: it describes how they built a lightweight desktop client to handle the complex interactions between team, channel, and user preferences and states when sending notifications.
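
As a toy sketch of this kind of routing (every name and threshold here is hypothetical), the logic might look something like:

// hypothetical helpers: getActiveDevice, sendPush, wasAcknowledged
function notify(user, message) {
    // ping only the device the user is currently active on
    var activeDevice = getActiveDevice(user);
    sendPush(activeDevice, message);

    // results-driven fallback: if the notification isn't acted on,
    // escalate to the user's other devices (60s window is hypothetical)
    setTimeout(function () {
        if (!wasAcknowledged(user, message)) {
            user.devices.forEach(function (device) {
                if (device !== activeDevice) {
                    sendPush(device, message);
                }
            });
        }
    }, 60 * 1000);
}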

Time and location data is crucial – work notifications don’t need to be sent on the weekend, and pop-up notifications for events or sales are only relevant in bounded areas. Slack enables manual setting of ‘Do Not Disturb’ hours in order to keep notifications from taking over users’ lives. Context can be user-generated (as in the Slack example) or learned from prior interactions with notifications.

Device and connection state information is underutilized. Know a user has low battery? Maybe the notification to download the latest game update can wait. Users on WiFi are more likely to interact with rich notifications than those on cellular connections. And if a user loses connectivity and many notifications are queued, they may no longer all be relevant when the user is back in range.

Realtime is a crucial component for smart notifications

As users constantly switch devices and platforms, realtime knowledge of their status is key to providing intelligent notifications. Developers who do this well will continue to retain user interest, and those who don’t will have a hard time keeping their attention.