Realtime API Blog

Article Spotlight: 5 Protocols For Event-Driven API Architectures by Kristopher Sandoval

In this article, Kristopher Sandoval highlights the five most common event-driven methods for data push. Each has its pros and cons, and the best choice depends on your particular use case.

The internet is a system of communication, and as such, the relationship between client and server, as well as server to server, is one of the most oft-discussed and hotly contested concepts. Event-driven architecture is a methodology for defining these relationships and creating systems within a specific set of relationships that allow for extensive functionality.

In this piece, we’re going to discuss five common event-driven methods: WebSockets, WebHooks, REST Hooks, Pub-Sub, and Server-Sent Events. We’ll define what they fundamentally are and do, and how API providers go about using them. Additionally, we’ll provide some pros and cons on each to make choosing a solution for your platform easy and intuitive.

Full Article


Realtime data for smart notifications

From the Fanout Blog

It’s becoming the new normal that messaging and collaboration apps and platforms are available across multiple devices.

Business tools like Slack and JIRA offer feature-rich mobile apps, and users increasingly consume content from social networks like Facebook on their mobile devices instead of a desktop or laptop.

This isn’t a surprise – and we’re here to share our perspective on how developers can use realtime data to provide cross-platform users with the best notification experience.

Mary Meeker’s 2017 Internet Trends Report tracks the trend towards increasing mobile adoption:

[Slide: mobile engagement data from Mary Meeker’s 2017 Internet Trends Report]

What’s not stated explicitly in this slide is that much of this engagement occurs simultaneously – it’s not uncommon for users to have an app open on their desktop and phone at the same time.

‘Dumb’ notifications produce a poor user experience

Simultaneous use of cross-platform apps has created a user experience issue that many of us are familiar with. When a new Trello card is assigned to me, I get a push notification on my phone, a ping in the Trello interface, and an email in my inbox. Due to my Trello integration with Slack, things quickly get worse – I get a notification on Slack on each of my devices. I can get up to 6 notifications tied to a single event.

This isn’t ideal – and as more devices become connected, the problem will only be compounded. Imagine a future where your phone, smartwatch, smart TV, and smart thermostat are all buzzing simultaneously. It doesn’t need to be this way.

Collaboration and messaging app developers can get smart

We didn’t come up with the idea of ‘smart’ notifications (entire companies like Intercom and OpenBack are built to enable them) – but we do have a perspective on how app developers can use realtime data to enable them.

Realtime data is already present in many chat or collaboration apps – typing indicators, read receipts, and live editing are all features that we take for granted. The next step for developers is taking a wider variety of realtime data into account when building notifications into their user experiences.

Luckily, mobile devices offer a wealth of realtime data to developers who want to do this:

Presence and attention-awareness (knowing which device a user is active on) allows a single ping to that device instead of a ping to all devices. Results-driven logic can then send a secondary notification to another device or channel if the first notification isn’t responded to. This can lead to some pretty complex logic, as in the case of Slack’s notification tree below:

[Diagram: Slack’s notification decision tree]

Slack’s blog post on how they built a lightweight desktop client to handle the complex interactions between team, channel, and user preferences and states when sending notifications is worth reading.
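To make the routing idea concrete, here’s a minimal sketch of presence-aware delivery with a timed fallback. Everything here (the Device type, the blocking timer, the field names) is a hypothetical illustration, not Slack’s actual implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    active: bool              # is the user currently focused on this device?
    acknowledged: bool = False

def send_push(device: Device, message: str) -> None:
    # Stand-in for a real push (APNs, FCM, an in-app ping, etc.)
    print(f"[{device.name}] {message}")

def notify(devices: list[Device], message: str, fallback_after: float = 30.0) -> None:
    """Ping only the active device; escalate to the others if unacknowledged."""
    active = [d for d in devices if d.active]
    primary = active[0] if active else devices[0]
    send_push(primary, message)

    time.sleep(fallback_after)  # in production this would be an async timer, not a block
    if not primary.acknowledged:
        for d in devices:
            if d is not primary:
                send_push(d, message)

devices = [Device("desktop", active=True), Device("phone", active=False)]
notify(devices, "New Trello card assigned to you", fallback_after=1.0)
```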

Time and location data is crucial – work notifications don’t need to be sent on the weekend, and pop-up notifications for events or sales are only relevant in bounded areas. Slack enables manual setting of ‘Do Not Disturb’ hours in order to keep notifications from taking over users’ lives. Context can be user-generated (as in the Slack example) or learned from prior interactions with notifications.

Device and connection state information is underutilized. Know a user has low battery life? Maybe the notification to download the latest game update can wait. Users on Wi-Fi are more likely to interact with rich notifications than those on cellular connections. And if a user loses connectivity while many notifications queue up, they may no longer all be relevant when the user is back in range.
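A sketch of how these context signals might gate a notification before it goes out. The rules, field names, and thresholds are illustrative assumptions, not any vendor’s API:

```python
from datetime import datetime

def should_deliver(priority: str, dnd_start: int, dnd_end: int,
                   battery_pct: int, on_wifi: bool,
                   now: datetime | None = None) -> bool:
    """Apply time and device-state context before waking the user's device."""
    now = now or datetime.now()

    # Respect user-configured Do Not Disturb hours, e.g. 22:00 to 08:00.
    if dnd_start > dnd_end:  # window wraps past midnight
        in_dnd = now.hour >= dnd_start or now.hour < dnd_end
    else:
        in_dnd = dnd_start <= now.hour < dnd_end
    if in_dnd and priority != "urgent":
        return False

    # Defer heavy, non-urgent payloads (a game update, say) on low battery
    # or cellular-only connections.
    if priority == "bulk" and (battery_pct < 20 or not on_wifi):
        return False

    return True

# A bulk notification at 15% battery gets held back:
print(should_deliver("bulk", dnd_start=22, dnd_end=8, battery_pct=15, on_wifi=True))
```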

Realtime is a crucial component for smart notifications

As users constantly switch devices and platforms, realtime knowledge of their status is key to providing intelligent notifications. Developers who do this well will continue to retain user interest, and those who don’t will have a hard time keeping their attention.


Pushpin: An Evented API for your DevOps Stack

“Real-time” is becoming an omnipresent force in the modern tech stack. As consumers demand faster and more frequent data transactions, companies are increasingly investing in product infrastructure that accelerates these transactions. Though we’ve seen APIs become an economic and technological imperative, they are typically based on request-response style interactions, which limits their scope and effectiveness in the real-time arena.

Request-Response vs Event-Driven APIs

At its core, request-response is a message exchange pattern in which a requestor sends a request message to a replier system. The replier system receives and processes the request, and if all goes well, returns a message in response. While this exchange format works well for more structured requests, it limits integrations to those where the requesting system has a clear idea of what it wants from the other. Request-response style APIs, therefore, must follow the interaction script set by the calling service.
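For illustration, a canonical request-response interaction looks something like this (the URL is a placeholder):

```python
import requests

# The caller must know exactly what it wants and when to ask for it;
# nothing arrives unless this request is explicitly made.
response = requests.get("https://api.example.com/orders/42")
response.raise_for_status()
order = response.json()
```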

[Diagram: request-response vs evented APIs]

In an event-driven architecture, applications integrate multiple services and products as equals based on event-driven interactions. These interactions are driven by event emitters, event consumers, and event channels, whereby the events, themselves, are typically significant ‘changes in state’ that are produced, published, propagated, detected, or consumed. This architectural pattern supports loose coupling amongst software components and services. The advantage is that an event emitter does not need to know the state of the consumer, who the consumer is, or how the event will be processed (if at all). It is a mechanism of pushing data through a persistent stream.
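A toy in-process event bus shows that loose coupling at work: the emitter publishes a change in state to a channel without knowing who, if anyone, consumes it. This is a sketch of the pattern, not Pushpin’s internals:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy event channel: emitters and consumers share only a channel name."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable) -> None:
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, event: dict) -> None:
        # The emitter doesn't know how many consumers exist, or if any do.
        for handler in self._subscribers[channel]:
            handler(event)

bus = EventBus()
bus.subscribe("order.shipped", lambda e: print("email service saw:", e))
bus.subscribe("order.shipped", lambda e: print("analytics saw:", e))
bus.publish("order.shipped", {"order_id": 42, "state": "shipped"})
```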

Evented API Solutions

In the tech ecosystem, there are a number of ways to approach data streaming and evented APIs. Some of the leading SaaS solutions include PubNub, Pusher, Kaazing, and Fanout – each with its own pros/cons and ramp-up investment. For the purposes of understanding the fundamentals of event-driven architecture, we’ll explore some open source software called Pushpin.

Pushpin

Pushpin’s primary value prop is that it is an open source solution enabling real-time push, a requisite of evented APIs (GitHub Repo). At its core, it is a reverse proxy server that makes it easy to implement WebSocket, HTTP streaming, and HTTP long-polling services. Structurally, Pushpin communicates with backend web applications using regular, short-lived HTTP requests.

This architecture provides a few core benefits:

  • Backends can be written in any language and use any web server
  • Data can be pushed via a simple HTTP POST request to Pushpin’s private control API (see the sketch below)
  • It is invisible to connected clients
  • It manages stateful elements, acting as the responsible party for long-lived connections so your backend doesn’t have to
  • It is horizontally scalable, since Pushpin instances don’t need to communicate with each other
  • It harnesses a publish-subscribe model for data transmission
  • It can act as both a proxy server and a publish-subscribe broker
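As a concrete example, here’s what that control-API push can look like. This is a minimal sketch: the channel name is illustrative, and it assumes a local Pushpin instance with its control API on the default port 5561.

```python
import requests

# Push a line of data to every client subscribed to the "test" channel.
payload = {
    "items": [{
        "channel": "test",
        "formats": {
            # Deliver as a chunk on HTTP streaming connections.
            "http-stream": {"content": "hello there\n"}
        }
    }]
}

resp = requests.post("http://localhost:5561/publish/", json=payload)
resp.raise_for_status()
```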

Integrating Pushpin

From a more systemic perspective, there are a few ways you can integrate Pushpin into your stack. The most basic setup is to put Pushpin in front of a typical web service backend, where the backend publishes data directly to Pushpin. The web service itself might publish data in reaction to incoming requests, or there might be some kind of background process/job that publishes data.

[Diagram: Pushpin as a real-time reverse proxy in front of a web service backend]
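On the backend side, a handler can use GRIP instructions (the protocol Pushpin speaks) in an ordinary short-lived HTTP response to tell Pushpin to hold the client connection open and subscribe it to a channel. A minimal sketch using Flask, assuming Pushpin is proxying to this app; the route and channel name are illustrative:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/stream")
def stream():
    # This response returns immediately; Pushpin, not this app, holds the
    # client connection open and streams published data to it.
    return Response(
        "[stream opened]\n",
        headers={
            "Grip-Hold": "stream",   # ask Pushpin to hold the connection
            "Grip-Channel": "test",  # subscribe it to the "test" channel
        },
        content_type="text/plain",
    )

if __name__ == "__main__":
    app.run(port=8080)
```

Anything published to the "test" channel (as in the earlier publish example) is then streamed by Pushpin to every held connection.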

Because Pushpin is a proxy server, it works with most API management systems, allowing you to perform actual API development. For instance, you can chain proxies together, placing Pushpin in front so your API management system isn’t subjected to long-lived connections. More importantly, Pushpin can translate the WebSocket protocol to HTTP, allowing the API management system to operate on the translated data.

[Diagram: Pushpin chained with an API management system]

The Future of Evented APIs

In some follow-up articles, I’ll discuss some of the unique features that we can see in evented APIs moving forward. These include event batching, salience filters, and a standard subscription interface. If you’re looking to play around with a real-time drop-in API proxy, then I highly recommend experimenting with Pushpin to get started.

 

Spotlight Article: How to Describe, Publish & Consume Real-Time Data by Phil Leggetter

In this article, Phil Leggetter discusses techniques for analyzing and processing realtime data. He goes through an example using RethinkDB. Check out the full article here.

In the first post in the series we covered discovering real-time data within your systems and applications. In part two we went through the use cases for your real-time data. In this final section we’ll cover the how: how to describe, publish & consume real-time data from your systems and expose the data so that you can build real-time features.

The main steps we’re going to cover, each applied to the real-time event data, are:

  • Analyse/Process
  • Describe
  • Publish
  • Consume and Use

Full Source

The Developer’s Guide to Building vs Buying Services

Defining a process for objectively selecting homegrown or purchased solutions

For almost every functional or architectural application component, there are a plethora of ‘as a service’ offerings. We see infrastructure as a service (IaaS), backend as a service (BaaS), SaaS, PaaS… and a new ‘aaS’ seems to be added daily.

What do all these services have in common? Well, they aspirationally promise to give you, the engineer, (1) more freedom to focus on your core product, (2) faster time to market, and (3) production-ready solutions for complex and repeatable engineering operations.

Sometimes this is the case. Sometimes it isn’t. The purpose of this guide is to provide a rational set of objective criteria to assess whether you should build or buy a particular service.

What is build? What is buy?

Build does not necessarily mean that you are making something from scratch. It means that you are combining custom code, open source libraries, and individual/community expertise to construct a solution for your use case. This solution is something that you will design, build, run, maintain, and scale internally.

On the other hand, buy does not necessarily mean that you are purchasing an end-to-end, out-of-the-box solution for your use case. It more accurately represents the purchase of a defined service that adds near-immediate value to your use case. Typically, the viability of the service itself will be guaranteed by the seller and you will not need to design and build the service itself. However, depending on the type of service purchased, you may choose to run and scale it internally. Generally, you will offload the running, maintenance, and scalability to the seller.

The Developer Mind

Before we continue, let’s reset our frame of mind.

Many developers have strong egos, and that’s generally an empowering attribute. Strong egos give devs the confidence to power through complex obstacles, focus for days and weeks at a time, and cultivate entirely new industries. However, there’s a fine line between reasonable and unreasonable confidence.

“I can build ____ in ____ days!”
“Ha! I can build a better ____ in a weekend!”
“This is so expensive. I’m just going to build it.”

We frequently see and hear these comments on dev forums, aggregators like Reddit and HackerNews, and in our day-to-day interactions. If we don’t say it, then some of us probably think it from time to time. Hey, sometimes we’re probably right, but oftentimes, our initial ego-driven reaction distances us from the objective criteria we apply to our general practice of programming.

When assessing what to build vs buy, or which ratio we choose, it is critical that we reset our frame of mind and approach our solutioning as open-mindedly and objectively as possible. Excluding the purists, no one cares if we were able to build our product from scratch or if we cleverly integrated a series of purchased solutions together. What people care about is whether our product works and delivers exceptional value to customers.

With the build vs buy decision-making process, we will answer the question: “How do we deliver exceptional value to our customers quickly, efficiently, and prudently?”

Build vs Buy Decision-Making Model

[Diagram: build vs buy decision-making model]

Step 1 – Identify and categorize your product’s functional scope

Your team has been tasked with building an ecommerce platform that allows users to upvote and downvote products. So, what are your product’s functional and architectural features?

Functional

  • Marketplace service
  • Voting service
  • Product display service
  • Inventory management service
  • Transaction service
  • Buyer, seller, and admin account management service
  • Search, filter, refine service

Architectural and Process

  • Databases
  • Servers
  • Load Balancers
  • Dev Environment / Version Control
  • Continuous Integration / Delivery Pipeline
  • REST / Realtime APIs
  • Frontend Framework
  • Deployment Controls / AB Testing

While these are not comprehensive feature sets, the important point is that there is a clear distinction between core product features (marketplace, voting), and necessary system & process architecture (server environment, CI/CD pipeline). There are features that are proprietary and unique to your product, and there are architectural features that are found in almost every modern application system.

Your job is to identify which of these features are proprietary to your platform and which are replicable proven solutions. To do this, ask the following questions:

  • What are the proprietary, core features that make my application unique?
  • What architectural services do I need for my platform scaffolding?
  • What is my ideal development pipeline going to look like?

Keep in mind, we are not solutioning yet or deciding what to build vs buy. We are identifying and categorizing our product’s functionality.

Step 2 – Define the scope of work and reconcile against constraints

Based on your feature categorization in step 1, it is time to define the scope of work to build each feature.

First, itemize and prioritize the detailed functionality for each feature:

  • What is the minimum functional scope for the feature to be viable?
  • What is the ideal functional scope for the feature?
  • Is this a feature I need now? Or can it wait?

Second, for each feature, answer the following build questions for the minimum and ideal functional scope:

  • How many developer resources do I have available to build this feature? Maintain this feature?
  • Can I harness any domain experts to help design this feature?
  • Has anyone on my team built this before?
  • How much time to design (A), build (B), test (C), deploy (D), maintain (E) this feature?
  • Will building this divert resources from something else?
  • Do I need to hire additional resources? If so, what is the cost breakdown?
  • What is the infrastructure cost to run this internally?

Third, for each core feature, answer the following buy questions for the minimum and ideal functional scope:

  • What is my monthly budget for this service?
  • How do I anticipate my budget changing over time?
  • Can I harness any domain experts to help me assess the best solution?
  • What developer resources do I have available to integrate and configure the solution?
  • If applicable, will I have the resources to self-host, run, maintain, and scale the service?

Step 3 – Solution divergence

Now we can get to the good stuff! In this step, we are not deciding what to build or buy; rather, we are aggregating an inventory of choices.

First, scour the interwebs, get referrals, and assess the solution ecosystem. Have other teams built this successfully? Have they bought it successfully? What are the horror and success stories?

Second, create a build vs buy comparison matrix. Make sure to note the monthly, infrastructure, and long-term maintenance costs. Note the total upfront and ongoing time needed for each build or buy solution (build/buy hybrids are great too!).

Step 4 – Solution convergence

Start narrowing down your options.

Remember that buying does not mean out-of-the-box instant magic. There are always build costs associated with buying:

  • Sandboxing and initial technical vetting
  • Integration and setup
  • Configuration and fine tuning
  • Operational training and staff onboarding

Similarly, building does not necessarily mean that everything is made from scratch, but it does mean that you will assume the costs of ongoing maintenance, scaling, and debugging. You will also need to train staff and develop new operational processes.

Step 5 – Build or buy or both

Choose a primary and secondary solution option for each feature. This way, you will have a backup plan if the primary solution does not pan out. It is absolutely critical that you involve your team during the selection process and make the selection criteria transparent.

Step 6 – Develop guidelines for reassessment

The solution that you’ve selected for day 1 of your product will likely not fit your product at day 600. This is okay, but we must be able to anticipate and preempt any future scaling issues. To do this, set both quantitative and qualitative benchmarks for triggering a build vs buy scaling reassessment. For example, we’re confident that our current architectural solution allows us to handle up to 500k concurrent connections with ease, but our current growth model forecasts 2m connections in 8 months. When we start to near the 300k mark, then this will trigger another build vs buy assessment so we can preempt any issues at scale. This reassessment should include:

  • What have we learned about the needs of our product in the past X months?
  • What has been more difficult than anticipated? What has been easier?
  • How has our resource and knowledge pool shifted?
  • Have our product’s core competencies shifted?
  • Is there anything new and better out there?

Final Thoughts – Try It Your Way

Well, this looks like a lot of work. It may even take a day or multiple days to assess a feature. But realistically, when we take into account the full lifecycle of your product, a few upfront days can save you months and lots of money down the road. Those few days may also make or break your product.

Customize your build vs buy assessment process to meet your organization’s needs. Though a large enterprise is way different from a startup, the assessment metrics remain very similar. Add or remove metrics, codify a more refined process, or make your own from scratch.

Either way, it is important to remember that building a successful product is very hard, so don’t make it harder on yourself than necessary. Let your decision be driven by choosing the right solution for your product, rather than the right solution for you.

Spotlight Article: Bringing The API Deployment Landscape Into Focus by Kin Lane

In this article, Kin Lane (API Evangelist) dives into the current landscape of APIs and the host of definitions that drive the industry.

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions, to help prime the discussion around API development.

Where? – Where are APIs being deployed? On-premise and in the cloud, with traditional website hosting, and even containerized and serverless API deployment.

How? – What technologies are being used to deploy APIs? From spreadsheets, document and file stores, or the central database, to thinking smaller with microservices, containers, and serverless.

Who? – Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

Full Source


What is a realtime API?

Many software developers are familiar with realtime, but we believe that realtime concepts and user experiences are becoming increasingly important for less technical individuals to understand.

At Fanout, we power realtime APIs to instantly push data to endpoints – which can range from the actual endpoints of an API (the technical term) to external businesses or end users. We use the word in this post loosely to refer to any destination for data.

We’re here to share our experience with realtime: we’ll provide a definition and current examples, peer into the future of realtime, and try and shed some light on the eternal realtime vs. real-time vs. real time semantic debate.

The simple definition

Realtime refers to a synchronous, bi-directional communication channel between endpoints, operating at a latency of under 100ms.

We’ll break that down in plain[er] English:

  • Synchronous means that both endpoints have access to data at the same time (not to be confused with sync/async programming).
  • Bi-directional means that endpoints can send data in either direction.
  • Endpoints are senders or receivers of data: they could be anything from an API endpoint that makes data available, to a user chatting on their phone.
  • 100ms is somewhat arbitrary: data cannot be delivered instantly, but under 100ms is pretty close, especially with respect to human perception. Robert Miller demonstrated this in 1968 (see the sketch after this list).
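To make the 100ms threshold concrete, here’s a minimal sketch that measures round-trip time over a persistent, bi-directional WebSocket channel. It uses the `websockets` library, and the echo endpoint URL is a placeholder:

```python
import asyncio
import time

import websockets  # pip install websockets

async def measure_rtt(uri: str) -> float:
    """Return the round-trip time, in ms, for one message over a WebSocket."""
    async with websockets.connect(uri) as ws:
        start = time.perf_counter()
        await ws.send("ping")
        await ws.recv()  # an echo server sends the same message back
        return (time.perf_counter() - start) * 1000

# "wss://echo.example.com" stands in for any WebSocket echo endpoint.
rtt_ms = asyncio.run(measure_rtt("wss://echo.example.com"))
print(f"round trip: {rtt_ms:.1f} ms ({'realtime' if rtt_ms < 100 else 'not quite'})")
```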

An example of a realtime user experience

A simple example of a realtime user experience is that of a chat app. In a chat app, you ‘immediately’ (sub 100ms) see messages from the person (endpoint) you’re chatting with, and can receive information about when they read your messages (synchronous, bi-directional).

Realtime vs. request-response

Web experiences are beginning to move from request-response experiences to live, realtime ones. Social feeds don’t require a refresh (a request) to update, and you don’t need to email documents as attachments that need to be downloaded (request) and sent back with edits (response) – you just use collaboration software that works in realtime.

More realtime experiences

Realtime user experiences are everywhere you look – especially where near-instant access to information is valuable. You’ll find realtime in:

  • Collaboration: realtime access to internal and external information from your team is becoming the norm. It’s accepted that a sales inquiry (data) can be instantaneously relayed from live chat on your website, into your customer service portal and then into Slack.
  • Finance: stock tracking and bitcoin wallets require immediate access to information. Applications like high-frequency trading exist specifically because of the ability of certain parties to access and act on data faster than others.
  • Events: second-screen experiences for sports, including live betting with realtime odds updates, are becoming increasingly common.
  • Crowdsourcing: distributed collection, analysis, and dissemination of data from distributed endpoints (think reports from WeatherUnderground stations or from the traffic app Waze) is only valuable when it occurs in realtime.

Realtime in the future

As we see it (and admittedly, we are a little biased), realtime is quickly becoming the new normal. Up-to-date information is expected by businesses and end users. Realtime is the natural complement to trends like:

Big Data: as the number of digitally connected businesses, experiences, and devices rises, so does the amount of data generated. Data becomes more valuable as the three V’s of a dataset (velocity, volume, variety) increase – and realtime transmission is central to the velocity component.

In the past, companies benefitted from hoarding data, but increasingly data is becoming most valuable when shared (and monetized). The companies that can aggregate and share the most data, as quickly as possible, will be successful.

Proliferation of APIs: businesses sharing data are increasingly going to do so through APIs. Entire businesses are being built on APIs by platform providers like Twilio (they only have an API), or APIs are coming to comprise substantial portions of existing businesses (like Salesforce’s API).

An elegant end-user experience is increasingly the product of data that’s being moved through multiple APIs – and the number of APIs is only going to increase as they trend towards becoming less technical and more accessible and interoperable. The APIs that provide access to data or move it through their system as quickly as possible will rise over those that cannot.

Realtime vs. real-time vs. real time

The endless debate – what’s the correct way to write what we’ve been discussing? We use realtime, because we believe that “real time” refers to something experienced at normal speed and not condensed or sped up. For example, watching grass grow in ‘real time’ is not very exciting – but a time lapse is.

We also don’t like hyphenating – so we went with realtime instead of real-time (and it looks like most of the industry agrees with us).