It’s been several months now since Twitter switched over to version 1.1 of the Twitter API. Headaches were expected, and headaches are being had. A primary cause of these headaches is the move to enforce OAuth for every API request, but a second culprit - rate limiting - is responsible for drastic restructuring of existing code.
The move to API v1.1 didn’t exactly come as a surprise, since Twitter had been warning developers for about a year; those who were caught sleeping, however, ended up with a larger problem than they may have anticipated.
The two primary changes in the move to API v1.1 are a requirement that every request be authenticated using OAuth, which is a pain on its own, and strict rate limits enforced on a per-user and per-app basis. The OAuth move wasn’t a coincidence: by forcing authentication, rate limits become simple to enforce.
The new rate limitations are pretty strict: most requests are limited to 1 per minute, some are increased to 12 per minute, and the highest app-level limits top out at 20-30 per minute. Think about that from the perspective of a mobile app or a website feed. If every user of your app or every visitor to your site triggers an API request, you’re going to be receiving “rate limit exceeded” responses in a matter of seconds.
Since the rate limit is enforced based on the authenticated user, you have two choices:
1. Force each user to authenticate with their own Twitter credentials
2. Authenticate the app itself and cache the responses to avoid the rate limit
Option 1 is not really viable for a website, but it is a possibility for mobile apps. In my opinion, it would be extremely annoying to have to log into Twitter in a mobile app just to read a timeline I can’t interact with. If the app lets you work with Twitter more deeply, say by posting tweets from your account, then it makes sense.
Option 2 is going to be the more common choice for a wide range of use cases. You can create a Twitter app, authenticate using that app’s OAuth credentials, and, as long as you implement a cache strategy, make your requests transparently from the user’s perspective.
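For app-level access, Twitter’s v1.1 application-only flow issues a bearer token: you URL-encode the app’s consumer key and secret, join them with a colon, base64-encode the result, and exchange it at the `/oauth2/token` endpoint. Here is a minimal Python sketch of the credential-building step (the actual package referenced below is PHP; the function name is my own):

```python
import base64
from urllib.parse import quote

def build_bearer_credentials(consumer_key: str, consumer_secret: str) -> str:
    """Build the Basic auth value used to request an application-only
    bearer token from Twitter's /oauth2/token endpoint."""
    # Per the app-only flow: URL-encode each credential, join with ':',
    # then base64-encode the combined string.
    combined = f"{quote(consumer_key)}:{quote(consumer_secret)}"
    return base64.b64encode(combined.encode("ascii")).decode("ascii")

# The token exchange itself (requires network, shown for context only):
#   POST https://api.twitter.com/oauth2/token
#   Authorization: Basic <credentials>
#   Content-Type: application/x-www-form-urlencoded;charset=UTF-8
#   Body: grant_type=client_credentials
```

The returned bearer token is then sent as `Authorization: Bearer <token>` on each API request your proxy makes.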
A cache strategy
There are a ton of ways to go about caching your requests to Twitter, but I’m going to propose one relatively straightforward option that we’ve used many times with success since the move to API v1.1 - a server side Twitter API proxy with caching.
The basic idea with this strategy is that you create a server-side program to act as a go-between (or proxy) for your app and the Twitter API. This proxy is responsible for authenticating the requests as well as maintaining a cache of the responses. To maintain the cache, the proxy checks when the cache file was last updated; if the elapsed time exceeds the threshold (say 1 minute), the proxy makes a request to the Twitter API and uses the response to rebuild the cache. Your mobile app or website then makes its requests to your proxy and is always served a response from the cache, which means the response is not only immune to rate limits but also instantaneous.
The code can be downloaded from GitHub by visiting the twitter-api-php-cached page.
This version of the code only offers a single API method, one which reads the timeline of the authenticated user. The cache is a flat file which also stores the update timestamp on the first line. This package is meant to serve as a starting point for your Twitter proxy service; you’ll no doubt want to add additional API calls and possibly move your cache to something in-memory like memcache.
To get started, read through the simple setup instructions in the ReadMe and you’ll be retrieving your cached tweets in no time.