HTTP long-polling delivers near-instant data push over ordinary HTTP by holding each request open until there is data to send. It is the easiest push mechanism to consume and also the easiest to make reliable.
How it works: A client makes an ordinary HTTP request for data it has not yet received. If no new data is available, the server holds the request open until data arrives to respond with. If enough time passes without new data becoming available, the server sends a timeout response instead. The length of time the server waits may be specified by the client (for example, via a header or query parameter), or the server may use a fixed default. Either way, once a response is received (whether new data or a timeout), the transaction is complete, and the client may issue a new request to listen for further data.
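The hold-open step described above can be sketched in a few lines. This is a minimal, framework-agnostic sketch in Python, assuming the server keeps a per-client in-process queue of undelivered messages; the queue, the function name, and the tuple-shaped response are all illustrative, not a specific library's API:

```python
import queue
import threading

def handle_long_poll(messages: queue.Queue, wait_seconds: float):
    """Hold the request open: block until a message arrives or the
    wait elapses, then answer with data or a timeout response."""
    try:
        payload = messages.get(timeout=wait_seconds)  # blocks while empty
        return ("data", payload)
    except queue.Empty:
        return ("timeout", None)  # client should simply re-request

# Data arriving while the request is held open is delivered immediately.
q = queue.Queue()
threading.Timer(0.05, q.put, args=["hello"]).start()
print(handle_long_poll(q, wait_seconds=5.0))  # → ('data', 'hello')
```

In a real server, the blocking wait would typically live inside a request handler (or its async equivalent), but the shape is the same: wait with a deadline, then respond either way.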
When to use it: HTTP long-polling is great for building reliable APIs, because sync and listen actions can be combined into the same request. If you already have a polling API that clients use to sync with the server, it’s easy to augment into a long-polling API while retaining the same semantics. Since long-polling uses request/response interactions, you can even make it RESTful. Short timeouts make network resilience effortless: if a user’s IP address changes due to roaming between wireless access points or tethering, a good long-polling API will weather the storm.
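Combining sync and listen into one request usually comes down to a cursor (or resume token) that the client echoes back on each request. A minimal client-loop sketch, assuming a hypothetical `fetch(cursor, timeout)` that returns `(status, events, next_cursor)`:

```python
def long_poll_loop(fetch, cursor, handle_event, max_requests=10):
    """Repeatedly sync from `cursor`; a timeout response simply means
    'nothing new yet', so the client re-requests with the same cursor."""
    for _ in range(max_requests):
        status, events, next_cursor = fetch(cursor, timeout=30)
        if status == "data":
            for event in events:
                handle_event(event)
            cursor = next_cursor  # resume exactly where we left off
        # On "timeout" (or after a reconnect following a network
        # change, not shown), just re-request with the unchanged cursor.
    return cursor
```

Because each request names its position explicitly, a client that reconnects from a new IP address picks up exactly where it left off, which is what makes the mechanism easy to keep reliable.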
Caveats: Long-polling uses more bandwidth than other mechanisms, since each delivery costs a full HTTP request/response cycle, particularly if you push data to the same receiver more frequently than the request timeout elapses. If you push extremely often (faster than the round-trip latency to the client), expect a batching effect as well: events that arrive while no request is outstanding are queued and delivered together in the next response.
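The batching effect can be seen with a toy timing model. The numbers and the model itself are illustrative: it assumes events are always pending, so every request returns after exactly one round trip:

```python
def events_per_response(event_interval_ms, round_trip_ms, duration_ms):
    """Count how many queued events each long-poll response carries when
    events are produced every `event_interval_ms` and each full
    request/response cycle takes `round_trip_ms` (integer milliseconds)."""
    batches, now, delivered = [], 0, 0
    while now < duration_ms:
        now += round_trip_ms                 # one request/response cycle
        produced = now // event_interval_ms  # events produced so far
        batches.append(produced - delivered) # delivered as one batch
        delivered = produced
    return batches

# Events every 10 ms against a 50 ms round trip: 5 events per response.
print(events_per_response(10, 50, 500))  # → [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
```

Whether this batching is a problem depends on the application: it amortizes per-request overhead, but it also adds up to one round trip of extra latency per event.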