Realtime API Blog

Laravel Websockets Released

In many modern web applications, WebSockets are used to implement realtime, live-updating user interfaces. When some data is updated on the server, a message is typically sent over a WebSocket connection to be handled by the client. This provides a more robust, efficient alternative to continually polling your application for changes.

Laravel WebSockets is a WebSockets server implemented in PHP for your Laravel projects. It also works as a drop-in replacement for Pusher: it speaks the Pusher protocol and integrates with the Laravel Echo JavaScript library, which means all packages that work with Pusher will work with the Laravel WebSockets package.

To learn more about the package and how to use it, check out the Laravel WebSockets documentation. The source code is available on GitHub at beyondcode/laravel-websockets.

Full Article

Serverless Real-time Notifications for Web Apps in AWS

Using serverless solutions for real-time communication in web apps solves many challenges. Implementing scalability, reliability, and fault tolerance while dealing with WebSocket connections are a few of them. For implementing real-time web applications, AWS provides two services – AWS AppSync and AWS IoT. Each of these services suits different use cases based on the features and pricing model it offers.

Full Article

Realtime Web Data Streaming with Kafka and Pushpin

Supercharging Kafka: Enable Realtime Web Streaming by Adding Pushpin

Exposing Kafka messages via a public HTTP streaming API

Matt Butler

Apache Kafka is the new hotness when it comes to adding realtime messaging capabilities to your system. At its core, it is an open source distributed messaging system that uses a publish-subscribe model for building realtime data pipelines. But, more broadly speaking, it is a distributed and horizontally scalable commit log.

In a Kafka cluster, you will have topics, producers, consumers, and brokers:

  • Topics — A categorization for a group of messages
  • Producers — Push messages into a Kafka topic
  • Consumers — Pull messages off of a Kafka topic
  • Kafka Broker — A Kafka node
  • Kafka Cluster — A collection of Kafka brokers

Take a deep dive into Kafka here.

Overall, Kafka provides fast, highly scalable and redundant messaging through a publish-subscribe model.

A pub-sub model is a messaging pattern where publishers categorize published messages into topics without knowledge of which subscribers would receive those messages (if any). Likewise, subscribers express interest in one or more topics and only receive messages that are of interest, without knowing anything about the publishers (source).
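
As a rough illustration of these roles, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, and group id are placeholders, not taken from the article.

# kafka_sketch.py -- minimal producer/consumer illustration (all names are placeholders)
from confluent_kafka import Producer, Consumer

# Producer: push a message into a topic
producer = Producer({'bootstrap.servers': 'localhost:9092'})
producer.produce('scores', value='goal scored!')
producer.flush()

# Consumer: pull messages off the same topic, as part of a consumer group
consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'demo-group',
    'auto.offset.reset': 'earliest',
})
consumer.subscribe(['scores'])
while True:
    msg = consumer.poll(1.0)            # wait up to one second for a message
    if msg is None or msg.error():
        continue
    print(msg.value().decode('utf-8'))  # the consumer never learns who produced it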

Kafka Strengths

As a messaging system, Kafka has some transformative strengths that have catalyzed its rising popularity:

  1. Realtime Data Pipeline — Handles realtime messaging throughput with high concurrency
  2. High Throughput — Supports high-velocity and high-volume data (thousands of messages per second)
  3. Fault-Tolerant — Due to its distributed nature, it is relatively resistant to node failure within a cluster
  4. Low Latency — Handles thousands of messages with millisecond latencies
  5. Scalability — Kafka’s distributed nature allows you to add additional nodes without downtime, facilitating partitioning and replication

Kafka Limits

Due to its intrinsic architecture, Kafka is not optimized to provide API consumers with friendly access to realtime data. As such, many organizations are hesitant to expose their Kafka endpoints publicly.

In other words, it is difficult to expose Kafka across a public API boundary if you want to use traditional protocols (like WebSockets or HTTP).

To overcome this limit, we can integrate Pushpin into our Kafka ecosystem to handle more traditional protocols and expose our public API in a more accessible and standardized way.

Pushpin + Kafka

Server-sent events (SSE) is a technology whereby a browser receives automatic updates from a server over an HTTP connection (standardized as part of HTML5). Kafka doesn’t natively support this protocol, so we need to add an additional service to make it happen.

Pushpin’s primary value proposition is that it is an open source solution that enables realtime push — a prerequisite for evented APIs (GitHub Repo). At its core, it is a reverse proxy server that makes it easy to implement WebSocket, HTTP streaming, and HTTP long-polling services. Structurally, Pushpin communicates with backend web applications using regular, short-lived HTTP requests.

Integrating Pushpin and Kafka provides you with some notable benefits:

  • Resource-Oriented API — Provides a more logical, resource-oriented API to consumers that fits in with an existing REST API. In other words, you can expose data over standardized, more secure protocols.
  • Authentication — Reuses existing authentication tokens and data formats.
  • API Management — Harnesses your existing API management system or load balancers.
  • Web Tier Scalability — If the number of your web consumers grows substantially, it may be more economical and performant to scale out your web tier rather than your Kafka cluster.

In this next example, we will expose Kafka messages via an HTTP streaming API.

Building Kafka Server-Sent Events

This example project reads messages from a Kafka service and exposes the data over a streaming API using the Server-Sent Events (SSE) protocol over HTTP. It is written in Python and Django, and relies on Pushpin for managing the streaming connections.

How it Works

In this demo, we drop a Pushpin instance on top of our Kafka broker. Pushpin acts as a Kafka consumer, subscribes to all topics, and re-publishes received messages to connected clients. Clients listen to events via Pushpin.

More granularly, we use views.py to set up an SSE endpoint, while relay.py handles the messaging input and output.
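
As a rough sketch (not the demo's exact code), the SSE endpoint in views.py can simply return the GRIP instruction headers that Pushpin understands (Grip-Hold and Grip-Channel, the same mechanism discussed in the Fanout articles later in this digest), telling Pushpin to hold the connection open as a stream and subscribe it to the requested topic's channel:

# views.py -- illustrative sketch of an SSE endpoint held open by Pushpin
from django.http import HttpResponse

def events(request, topic):
    # The GRIP headers instruct Pushpin to keep this response open as a
    # stream and to subscribe the connection to the topic's channel.
    response = HttpResponse(content_type='text/event-stream')
    response['Grip-Hold'] = 'stream'
    response['Grip-Channel'] = topic
    return response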

1. First, we need to set up virtualenv and install dependencies:
virtualenv --python=python3 venv
. venv/bin/activate
pip install -r requirements.txt

2. Create a suitable .env with Kafka and Pushpin settings:

KAFKA_CONSUMER_CONFIG={"bootstrap.servers":"localhost:9092","group.id":"mygroup"}
GRIP_URL=http://localhost:5561

3. Run the Django server:

python manage.py runserver

4. Run Pushpin:

pushpin --route="* localhost:8000"

5. Run the relay command:

python manage.py relay

The relay command sets up a Kafka consumer according to KAFKA_CONSUMER_CONFIG, subscribes to all topics, and re-publishes received messages to Pushpin, wrapped in SSE format.
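
A rough sketch of what that relay loop might look like, assuming the confluent-kafka client and the gripcontrol Python library (the actual demo code may differ in its details):

# relay sketch -- consume from Kafka, re-publish to Pushpin wrapped in SSE format
import json, os
from confluent_kafka import Consumer
from gripcontrol import GripPubControl, HttpStreamFormat

consumer = Consumer(json.loads(os.environ['KAFKA_CONSUMER_CONFIG']))
consumer.subscribe(['^.*'])  # regex subscription: all topics

pub = GripPubControl({'control_uri': os.environ['GRIP_URL']})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    body = msg.value().decode('utf-8')
    sse = 'event: message\ndata: %s\n\n' % body
    # Publish to the channel named after the topic; Pushpin pushes it to
    # every client holding open a stream on that channel.
    pub.publish_http_stream(msg.topic(), HttpStreamFormat(sse))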

Clients can listen to events by making a request (through Pushpin) to /events/{topic}/:

curl -i http://localhost:7999/events/test/

The output stream might look like this:

HTTP/1.1 200 OK
Content-Type: text/event-stream
Transfer-Encoding: chunked
Connection: Transfer-Encoding

event: message
data: hello

event: message
data: world
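
Outside of curl, a few lines of Python using the requests library (with streaming enabled) are enough to consume the same endpoint; the URL is the one from the demo above:

# sse_client.py -- read the SSE stream exposed through Pushpin
import requests

with requests.get('http://localhost:7999/events/test/', stream=True) as response:
    for line in response.iter_lines(decode_unicode=True):
        if line and line.startswith('data:'):
            print(line[len('data:'):].strip())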

Repo on GitHub

How To Power Your App Using a Realtime Data CDN

Combining Fastly (high scale pull) and Fanout (high scale push) to power realtime messaging at the edge

CDN — Content Delivery Network

Let’s start with defining a CDN. A content delivery network (CDN) is a system of distributed servers that traditionally delivers web content to a user, based on the geographic locations of the user, the origin of the webpage and the content delivery server. I use the term traditionally because we’re entering an era where CDNs are doing more than just delivering web content.

An example would be Cloudflare Workers, which lets you use their CDN to run code at the edge, rather than just serve web pages / cached content. You are basically able to deploy and run JavaScript away from the origin server — allowing you to decouple code from a user’s device. According to Cloudflare, “these Workers also enable programmatic functionality for routing, filtering and responding to HTTP requests that would otherwise need to be run on a customer’s server at the origin.”

The main point is that CDNs and edge computing are continuously evolving — whereby the two are starting to meld together in an era where high scalability is paramount.

Melding Realtime Data Push with Realtime Data Pull

Many realtime applications need to work with data that is both pushed and pulled (e.g. live sports scores, auctions, chat). Taken separately, data push and data pull are fairly straightforward as independent entities. At initialization time, past content could be retrieved from a pull CDN, and new/future updates could be pushed from a separate service.

But, what if you could chain these mechanisms together?

Proxy Chaining with Fastly and Fanout

Fastly is an edge cloud platform that enables applications to process, serve, and secure data at the edge of a network. It is essentially highly scalable data pull and response, using a platform that can listen and respond to users’ needs in realtime. Similar to a traditional CDN, Fastly allows you to cache content, but it also lets you deliver application logic at the edge.

On the other hand, Fanout is highly scalable data push — serving as a reverse proxy that handles long-lived client connections and pushes data as it becomes available.

Both Fastly and Fanout work as reverse proxies, so it is possible to have Fanout proxy traffic through Fastly — rather than sending that traffic directly to your origin server. Together, this coupled system has some interesting benefits:

  1. High availability — If your origin server goes down, Fastly can serve cached data and instructions to Fanout. This means clients could connect to your API endpoint, receive historical data, and activate a streaming connection, all without needing access to the origin server.
  2. Cached initial data — Fanout lets you build API endpoints that serve both historical and future content, for example an HTTP streaming connection that returns some initial data before switching into push mode. Fastly can provide that initial data, reducing load on your origin server.
  3. Cached Fanout instructions — Fanout’s behavior (e.g. transport mode, channels to subscribe to, etc.) is determined by instructions provided in origin server responses (using a system of special headers called Grip). Fastly can subsequently cache these instructions and headers.
  4. High scalability — By caching Fanout instructions and headers, Fastly can further reduce the load on your origin server — bringing that processing logic closer to the edge.

Mapping the Network Flow

Using Fanout and Fastly, let’s map the network flow to see how these push and pull mechanisms could work together.

Let’s suppose there’s an API endpoint /stream that returns some initial data and then stays open until there is a new update to push. With Fanout, this can be implemented by having the origin server respond with instructions:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 29
Grip-Hold: stream
Grip-Channel: updates

{"data": "current value"}

When Fanout receives this response from the origin server, it converts it into a streaming response to the client:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: Transfer-Encoding

{"data": "current value"}

The request between Fanout and the origin server is now finished, but the request between the client and Fanout remains open. Here’s a sequence diagram of the process:

Since the request to the origin server is just a normal short-lived request/response interaction, it can alternatively be served through a caching server such as Fastly.

Here’s what the process looks like with Fastly in the mix:

Now, when the next client makes a request to the /stream endpoint, the origin server isn’t involved at all:

In other words, Fastly serves the same response to Fanout, with those special HTTP headers and initial data, and Fanout sets up a streaming connection with the client.

Of course, this is only the connection setup. To send updates to connected clients, the data must be published to Fanout.
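
Publishing such an update might look something like the sketch below, assuming the gripcontrol Python library and Fanout's control endpoint; the realm name, key, and channel are placeholders:

# publish sketch -- push a new value to clients connected through Fanout
from base64 import b64decode
from gripcontrol import GripPubControl, HttpStreamFormat

pub = GripPubControl({
    'control_uri': 'https://api.fanout.io/realm/your-realm',  # placeholder realm
    'control_iss': 'your-realm',
    'key': b64decode('your-realm-key'),
})

# Every connection held open on the 'updates' channel receives this chunk.
pub.publish_http_stream('updates', HttpStreamFormat('{"data": "new value"}\n'))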

Purging the Fastly Cache

If an event that triggers a publish causes the origin server response to change, then we may also need to purge the Fastly cache.

For example, suppose the “value” that the /stream endpoint serves has been changed. The new value could be published to all current connections, but we’d also want any new connections that arrive afterwards to receive this latest value as well, rather than the older cached value. This can be solved by purging from Fastly and publishing to Fanout at the same time.
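
Concretely, the purge-and-publish step could be as small as the sketch below. The Fastly part assumes single-URL purging via an HTTP PURGE request (with a Fastly-Key header if your service requires authenticated purges); the endpoint URL and token are placeholders:

# purge-and-publish sketch -- keep Fastly and Fanout in sync on each update
import requests
from gripcontrol import GripPubControl, HttpStreamFormat

pub = GripPubControl({'control_uri': 'https://api.fanout.io/realm/your-realm'})  # placeholder

def purge_and_publish(new_value):
    # 1. Purge the cached /stream response so new connections get fresh data.
    requests.request('PURGE', 'https://api.example.com/stream',
                     headers={'Fastly-Key': 'your-fastly-api-token'})
    # 2. Push the new value to clients already connected through Fanout.
    pub.publish_http_stream('updates', HttpStreamFormat(new_value + '\n'))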

This sequence diagram illustrates a client connecting, receiving an update, and then another client connecting:

Effectively Handling Rate-Limiting

If your publishing data rate is relatively high, then this can negate the caching benefit of using Fastly.

The ideal data rate to effectively harness Fastly’s cache would be data that is:

  • Accessed frequently — many new visitors per second
  • Changed frequently — updates every few seconds or minutes
  • Delivered instantly — within milliseconds

An example of this would be a live blog, whereby most requests can be served and handled from cache.

However, if your data changes multiple times per second (or has the potential to change that fast during peak moments), and you expect frequent access, you really don’t want to be purging your cache multiple times per second.

The workaround is to rate-limit your purges. For example, during periods of high throughput, you might purge and publish at a maximum rate of once per second or so. This way, the majority of new visitors can be served from cache, and the data will be updated shortly after.
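
One simple way to implement that rate limit is to coalesce updates and flush at most once per interval. The sketch below is illustrative only; the flush callback is assumed to be something like the purge-and-publish routine described above:

# rate-limit sketch -- coalesce purges/publishes to at most one per interval
import threading

class RateLimitedPublisher:
    def __init__(self, flush_func, interval=1.0):
        self.flush_func = flush_func   # e.g. a purge-and-publish routine
        self.interval = interval
        self.pending = None
        self.timer_running = False
        self.lock = threading.Lock()

    def update(self, value):
        with self.lock:
            self.pending = value       # keep only the most recent value
            if not self.timer_running:
                self.timer_running = True
                threading.Timer(self.interval, self._flush).start()

    def _flush(self):
        with self.lock:
            value = self.pending
            self.pending = None
            self.timer_running = False
        self.flush_func(value)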

Demo

You can reference the GitHub source code for the Fastly/Fanout high-scale Live Counter demo. Requests first go to Fanout, then to Fastly, then to a Django backend server which manages the counter API logic. Whenever a counter is incremented, the Fastly cache is purged and the data is published through Fanout. The purge and publish process is also rate-limited to maximize the caching benefit.

Final Thoughts: The Emergence of a Messaging CDN?

Broadly speaking, we could define a messaging content delivery network as a geographically distributed group of servers which work together to provide near realtime delivery of dynamic data and web content.

This new genre of CDN could allow data processing to take place at the edge, away from an app’s origin — thereby ushering in a new era of realtime computing that is both affordable and scalable.

How Blockchain and Realtime APIs are Totally Changing Healthcare

In his article, Philip Levinson discusses how realtime data has become essential for advances in healthcare software — specifically as applied to new blockchain technology.

As with other healthcare companies and organizations, one of the keys to Oscar’s model depends on “using real-time data to get actionable insights in front of members and physicians,” says Schlosser.

As a result, blockchain “has the power to revive the healthcare industry by reorganizing operations, generating new business models and integrating patients’ medical records,” according to Zacks.

The latter two of these represent the ways blockchain is most likely to change healthcare in the short run.

Full Article 

Payments Are Moving To Real-Time In Countries Around The World

In this Forbes article, Tom Groenfeldt discusses the emergence of the realtime payment ecosystem and the demands it places on the realtime API environment.

Faster payment systems are being adopted in countries all around the globe even though there is no compelling ROI argument for them, according to the fourth annual Flavors of Fast payments study from FIS.

In fact, many of the innovations accompanying faster payments are not about pure speed but other attributes such as 24x7x365 operating hours and standards like ISO 20022 that support data like invoices moving with payments or requests for payments.

Full Article

Serverless WebSockets with AWS Lambda & Fanout

The basics of adding realtime data push to your serverless backend

JVSystems

Serverless

Serverless is one of the developer world’s most popular misnomers. Contrary to its name, serverless computing does in fact use servers, but the benefit is that you can worry less about maintenance, scale, and configuration. This is because serverless is a cloud computing execution model where a cloud provider dynamically manages the allocation of machine and computational resources. You are basically deploying code to an environment without visible processes, operating systems, servers, or virtual machines. From a pricing perspective, you are typically charged for the actual amount of resources consumed and not by pre-purchased capacity.

Pros

  • Reduced architectural complexity
  • Simplified packaging and deployment
  • Reduced cost to scale
  • Eliminates the need for system admins
  • Works well with microservice architectures
  • Reduced operational costs
  • Typically decreased time to market with faster releases

Cons

  • Performance issues — typically higher latency due to how compute resources are allocated
  • Vendor lock-in (hard to move to a new provider)
  • Not efficient for long-running applications
  • Multi-tenancy issues where service providers may run software for several different customers on the same server
  • Difficult to test functions locally
  • Different FaaS implementations provide different methods for logging in functions

AWS Lambda

Amazon’s take on serverless comes in the form of AWS Lambda. AWS Lambda lets you run code without provisioning or managing servers — you pay only for your actual usage. With Lambda, you can run code for virtually any type of application or backend service — Lambda automatically runs and scales your application code. Moreover, you can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Websockets

A WebSocket provides a long-lived connection for exchanging messages between client and server. Messages may flow in either direction for full-duplex communication. A client creates a WebSocket connection to a server, using a WebSocket client library. WebSocket libraries are generally available in every language, and of course browsers support it natively using the WebSocket JavaScript object. The connection negotiation uses an HTTP-like exchange, and a successful negotiation is indicated with status code 101. After the negotiation response is sent, the connection remains open to be used for exchanging message frames in either binary or unicode string format. Peers may also exchange close frames to perform a clean close.
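
As a minimal illustration, here is a client sketch using the Python websockets library; the URL is a placeholder for any WebSocket echo endpoint:

# ws_client.py -- minimal WebSocket client sketch
import asyncio
import websockets

async def main():
    # The library performs the HTTP-like negotiation (expecting status 101),
    # then the connection stays open for exchanging message frames.
    async with websockets.connect('wss://example.com/ws') as ws:
        await ws.send('hello')
        reply = await ws.recv()
        print(reply)

asyncio.run(main())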

Building AWS IoT Websockets

Function-as-a-service backends, such as AWS Lambda, are not designed to handle long-lived connections on their own. This is because the function invocations are meant to be short-lived. Lambda is designed to integrate with services such as AWS IoT to handle these types of connections. AWS IoT Core supports MQTT (either natively or over WebSockets), a lightweight communication protocol specifically designed to tolerate intermittent connections.

AWS IoT Core Site

However, this approach alone will not give you access to the raw protocol elements — and will not allow you to build a pure Lambda-powered API (if that is your intended use case). If you want this access, then you need to take a different approach.

Building Lambda-Powered WebSockets with Fanout

You can also build custom Lambda-powered WebSockets by integrating a service like Fanout — a cross between a message broker and a reverse proxy that enables realtime data push for apps and APIs. With these services together, we can build a Lambda-powered API that supports plain WebSockets.

This approach uses GRIP, the Generic Realtime Intermediary Protocol — making it possible for a web service to delegate realtime push behavior to a proxy component.

This FaaS GRIP library makes it easy to delegate long-lived connection management to Fanout, so that backend functions only need to be invoked when there is connection activity. The other benefit is that backend functions do not have to run for the duration of each connection.

The following step-by-step breakdown is meant as a quick configuration reference. You can check out the GitHub libraries for the Node and Python integrations.

1. Initial Configuration

You will first configure your Fanout Cloud domain/environment and set up an API and resource in AWS API Gateway to point to your Lambda function, using a Lambda Proxy Integration.

2. Using Websockets

Whenever an HTTP request or WebSocket connection is made to your Fanout Cloud domain, your Lambda function will be able to control it. To do this, Fanout converts incoming WebSocket connection activity into a series of HTTP requests to your backend.

3. You’ve Got Realtime

You now have a realtime WebSocket API driven by a Lambda function!

An Example

This Node.js code implements a WebSocket echo service. I recommend checking out the full FaaS GRIP library for a step-by-step breakdown, and for instructions on implementing HTTP long polling and HTTP streaming.

var grip = require('grip');
var faas_grip = require('faas-grip');

exports.handler = function (event, context, callback) {
    var ws;
    try {
        ws = faas_grip.lambdaGetWebSocket(event);
    } catch (err) {
        callback(null, {
            statusCode: 400,
            headers: {'Content-Type': 'text/plain'},
            body: 'Not a WebSocket-over-HTTP request\n'
        });
        return;
    }

    // if this is a new connection, accept it
    if (ws.isOpening()) {
        ws.accept();
    }

    // here we loop over any messages
    while (ws.canRecv()) {
        var message = ws.recv();

        // if return value is null, then the connection is closed
        if (message == null) {
            ws.close();
            break;
        }

        // echo the message
        ws.send(message);
    }

    callback(null, ws.toResponse());
};

Overall, if you’re not looking for full control over your raw protocol elements, then you may find it easier to try a Lambda/AWS IoT configuration. If you need more WebSocket visibility and control, then the Lambda + Fanout integration is probably your best bet.

The Edge is Nothing Without the Fog

Edge computing is hot right now. The growing maturity of IoT networks ranging from industrial to VR applications means that there’s an enormous amount of discussion around moving from the cloud to the edge (from us as well). But edge computing is only the first step.

We first want to make sure we define the terms we’ll use.

  • The edge refers to the devices, sensors, or other sources of data at the edge of the network.
  • The cloud is the datacenter at the “center” of the network.
  • The fog is a management layer in between the two (we know this is vague; read on).

More data, more problems

As more and more devices become connected to networks, we’re going to see an enormous uptick in the amount of data generated. Andy Daecher and Robert Schmid of Deloitte believe that “globally, the data created by IoT devices in 2019 will be 269 times greater than the data being transmitted to data centers from end-user devices and 49 times higher than total data center traffic.” Calling this big data is an understatement.

These volumes of data mean big problems:

  1. Moving this amount of data means latency issues for networks
  2. Privacy and security concerns increase as more data is moved
  3. Devices sending more data require more hardware and power to run

Prioritization is the answer, but it’s not solved at the edge

The answer to increasing data volume is the fog: the prioritization and management layer on the continuum between the edge and the cloud. The fog needs to answer the crucial question: what to analyze at the edge, and what to push back to the cloud?

It’s unreasonable to expect an IoT device at the edge (like a drone that requires sub-millisecond reaction times) to process all the data it collects in realtime, or to push all of that data to the cloud for processing. The fog reduces latency and takes the processing load off the drone, acting as a management layer and allowing for efficient distribution of resources across the network.
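
As a toy illustration of that prioritization decision, a fog-layer filter might handle routine readings locally and forward only anomalies to the cloud; the threshold, field names, and scores below are purely hypothetical:

# fog_filter.py -- hypothetical sketch of a fog-layer prioritization rule
def route_reading(reading, threshold=0.9):
    """Decide whether a sensor reading is handled in the fog or sent to the cloud."""
    if reading['anomaly_score'] >= threshold:
        return 'cloud'   # unusual data: push upstream for heavier analysis and storage
    return 'fog'         # routine data: aggregate locally and keep latency low

readings = [
    {'sensor': 'drone-1', 'anomaly_score': 0.12},
    {'sensor': 'drone-1', 'anomaly_score': 0.97},
]
for r in readings:
    print(r['sensor'], '->', route_reading(r))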

So, what does architecture incorporating the cloud, the edge, and the fog look like?

Justin Baker of RealtimeAPIHub has an excellent guide, including this graphic from Ergomonitor:

[Graphic: edge_ergomonitor]

Intelligently separating data analysis tasks across the network continuum will be crucial as we move forward into the next era of IoT.

High scalability with Fanout and Fastly

Fanout Cloud is for high scale data push. Fastly is for high scale data pull. Many realtime applications need to work with data that is both pushed and pulled, and thus can benefit from using both of these systems in the same application. Fanout and Fastly can even be connected together!

[Diagram: fanout-fastly]

Using Fanout and Fastly in the same application, independently, is pretty straightforward. For example, at initialization time, past content could be retrieved from Fastly, and Fanout Cloud could provide future pushed updates. What does it mean to connect the two systems together though? Read on to find out.

Proxy chaining

Since Fanout and Fastly both work as reverse proxies, it is possible to have Fanout proxy traffic through Fastly rather than sending it directly to your origin server. This provides some unique benefits:

  1. Cached initial data. Fanout lets you build API endpoints that serve both historical and future content, for example an HTTP streaming connection that returns some initial data before switching into push mode. Fastly can provide that initial data, reducing load on your origin server.
  2. Cached Fanout instructions. Fanout’s behavior (e.g. transport mode, channels to subscribe to, etc.) is determined by instructions provided in origin server responses, usually in the form of special headers such as Grip-Hold and Grip-Channel. Fastly can cache these instructions/headers, again reducing load on your origin server.
  3. High availability. If your origin server goes down, Fastly can serve cached data and instructions to Fanout. This means clients could connect to your API endpoint, receive historical data, and activate a streaming connection, all without needing access to the origin server.

Network flow

Suppose there’s an API endpoint /stream that returns some initial data and then stays open until there is an update to push. With Fanout, this can be implemented by having the origin server respond with instructions:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 29
Grip-Hold: stream
Grip-Channel: updates

{"data": "current value"}

When Fanout Cloud receives this response from the origin server, it converts it into a streaming response to the client:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: Transfer-Encoding

{"data": "current value"}

The request between Fanout Cloud and the origin server is now finished, but the request between the client and Fanout Cloud remains open. Here’s a sequence diagram of the process:

[Diagram: fanout-flow]

Since the request to the origin server is just a normal short-lived request/response interaction, it can alternatively be served through a caching server such as Fastly. Here’s what the process looks like with Fastly in the mix:

[Diagram: fanout-fastly1]

Now, guess what happens when the next client makes a request to the /stream endpoint?

[Diagram: fanout-fastly2]

That’s right, the origin server isn’t involved at all! Fastly serves the same response to Fanout Cloud, with those special HTTP headers and initial data, and Fanout Cloud sets up a streaming connection with the client.

Of course, this is only the connection setup. To send updates to connected clients, the data must be published to Fanout Cloud.

We may also need to purge the Fastly cache, if an event that triggers a publish causes the origin server response to change as well. For example, suppose the “value” that the /stream endpoint serves has been changed. The new value could be published to all current connections, but we’d also want any new connections that arrive afterwards to receive this latest value as well, rather than the older cached value. This can be solved by purging from Fastly and publishing to Fanout Cloud at the same time.

Here’s a (long) sequence diagram of a client connecting, receiving an update, and then another client connecting:

[Diagram: fanout-fastly3]

At the end of this sequence, the first and second clients have both received the latest data.

Rate-limiting

One gotcha with purging at the same time as publishing: if your data rate is high, it can negate the caching benefit of using Fastly.

The sweet spot is data that is accessed frequently (many new visitors per second), changes infrequently (on the order of minutes), and whose changes you want delivered instantly (sub-second). An example could be a live blog. In that case, most requests can be served/handled from cache.

If your data changes multiple times per second (or has the potential to change that fast during peak moments), and you expect frequent access, you really don’t want to be purging your cache multiple times per second. The workaround is to rate-limit your purges. For example, during periods of high throughput, you might purge and publish at a maximum rate of once per second or so. This way the majority of new visitors can be served from cache, and the data will be updated shortly after.

An example

We created a Live Counter Demo to show off this combined Fanout + Fastly architecture. Requests first go to Fanout Cloud, then to Fastly, then to a Django backend server which manages the counter API logic. Whenever a counter is incremented, the Fastly cache is purged and the data is published through Fanout Cloud. The purge and publish process is also rate-limited to maximize caching benefit.

The code for the demo is on GitHub.