Avatar image for rick
#1 Posted by rick (506 posts) -

Many of you have noticed that our API rate limiting is stifling, to put it mildly. We heard you, and we've once again changed the way we limit API use. We're sure you'll like this one...

Previously:

There's a limit of 450 requests within a 15-minute window. If you go above that, you're temporarily blocked. You can make all of those requests in anywhere from 1 second to the full 15 minutes.

Now:

TL;DR: Space out your requests so AT LEAST one second passes between each and you can make requests all day. Go even a millisecond faster and you'll hit a brick wall REALLY HARD.

There is no limit on the number of requests; you are limited only in how often you can make them. There are no hard numbers here. It's more of a throttling algorithm that restricts aggressive apps and rewards the ones that are well behaved. If your app spreads its requests out to at most one per second, you will not have any problems and can make requests 24/7. If the time between requests is less than 1 second, you will be restricted, and the more of these requests you make, the more likely you are to be blocked and the more dramatically your allowance of further requests will drop.
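To illustrate (this is just a rough, unofficial sketch, not code from any of our apps), a Java client that stays on the right side of the throttle could be as simple as this:

// Rough, unofficial sketch: space successive API calls at least one second apart.
public class PoliteClient {
    private static final long MIN_GAP_MS = 1000L;
    private long lastRequestTime = 0L;

    public void runRequest(Runnable apiCall) throws InterruptedException {
        long sinceLast = System.currentTimeMillis() - lastRequestTime;
        if (sinceLast < MIN_GAP_MS) {
            Thread.sleep(MIN_GAP_MS - sinceLast); // wait out the rest of the one-second window
        }
        lastRequestTime = System.currentTimeMillis();
        apiCall.run(); // your actual API request goes here
    }
}

Swap the Runnable for whatever your HTTP layer uses; the only thing that matters is never letting two calls go out less than a second apart.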

Avatar image for conmused
#2 Posted by Conmused (47 posts) -

@edgework: Is this rate limiting based on endpoint, or the entire API? Just curious where to put my rate limiter.

Avatar image for conmused
#3 Posted by Conmused (47 posts) -

@edgework: Also, is this currently in effect? Even with my requests spaced out, the giantbomb.com/api page seems to show me at ~200 requests an hour per endpoint, rather than an evenly paced 3600.

(I hit the API faster than 1 request/second at various points last night when my rate limiter went haywire, so that could explain it.)

Avatar image for rick
#4 Edited by rick (506 posts) -

@conmused: Not quite. Read this

http://www.comicvine.com/forums/api-developers-2334/api-rate-limiting-1746419/?page=1#js-message-15850814

Avatar image for conmused
#5 Posted by Conmused (47 posts) -
Avatar image for lucascunha
#6 Posted by lucascunha (4 posts) -

@rick said:

TL;DR: Space out your requests so AT LEAST one second passes between each and you can make requests all day. Go even a millisecond faster and you'll hit a brick wall REALLY HARD.

Hi, I've been spacing my requests exactly one second apart, but when I looked at my API page there was a message that said something like this:

"You have used XXX requests in the last hour for API Path '/game' ...a tad bit gluttonous don't you think? (reset in XX minutes)"

Am I going to have a problem if I query at this rate? This is just for the initial setup of a project I'm doing; after that, the requests are going to go way down.

Thanks!

Avatar image for wcarle
#7 Posted by wcarle (391 posts) -

@lucascunha: You shouldn't have a problem; that warning is just meant as a deterrent against too much crazy usage. If you do run into any limits, they will reset within an hour, and if you consistently run into problems, feel free to PM me and we'll sort it out.

Staff
Avatar image for ngoodman
#8 Posted by ngoodman (3 posts) -

Does the API return any sort of rate limiting error code? I'm assuming not, because "you'll hit a brick wall REALLY HARD" sounds like a block :)

I'm working on a refactor of the data layer for one of the GB apps. I can implement a global rate limit at the HTTP client level, but it would simplify things if I could do something more reactive, like:

Response response = Api.makeRequest();
if (response.statusCode == RATE_LIMIT_ERROR) {
    // suspend retries for 1 second
}

This is likely a feature request, but I just wanted to confirm: will the API tolerate logic like that?

To implement this proactively, I'll likely have to do something like this if I want to handle all edge cases:

void makeRequest() {
    // The last-request timestamp needs to live in storage that outlasts the app's lifetime, to cover restarts.
    long lastTimestamp = getLastRequestTimeFromPersistentStorage();
    long currentTimestamp = getCurrentTime();

    if (currentTimestamp - lastTimestamp < ONE_SECOND) {
        // block all requests and wait at least ONE_SECOND - (currentTimestamp - lastTimestamp)
    }

    writeLastRequestTimeToPersistentStorage(currentTimestamp);
    // ...then make the actual API call
}

The solution becomes more complex once concurrent requests have to be handled.
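For example, once multiple threads can fire requests, the timestamp check has to be guarded. A bare-bones, thread-safe sketch (the names are made up, nothing from a real library) might look like this:

// Illustrative only: a thread-safe throttle that queues callers so requests
// leave at least one second apart.
public class RequestThrottle {
    private static final long ONE_SECOND_MS = 1000L;
    private long lastRequestTime = 0L; // guarded by the monitor on "this"

    public synchronized void awaitTurn() throws InterruptedException {
        long waitMs = ONE_SECOND_MS - (System.currentTimeMillis() - lastRequestTime);
        if (waitMs > 0) {
            Thread.sleep(waitMs); // holding the lock makes later callers queue up behind us
        }
        lastRequestTime = System.currentTimeMillis();
    }
}

Sleeping while holding the lock is deliberate here: it serializes callers at the cost of blocking threads, which is fine for a low-volume API client.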

Avatar image for ngoodman
#9 Posted by ngoodman (3 posts) -

After some experimenting, I found it simpler to just enforce the rate limit in the client, scoped to the running application. The rate-limit contract could be broken by repeated forced app restarts (or crashes), but I think that's an unlikely edge case.

Anyone who wants to see my implementation can check it out below. It's for Android, using OkHttp and Guava, but any Java application could re-use the approach:

https://bitbucket.org/neilgoodman/gbenthusiast/src/3a42529d11eca845ddda68cfba25e5608d49ab6d/app/src/main/java/com/alecgdouglas/gbenthusiast/http/interceptor/GiantBombApiRateLimitInterceptor.java
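For anyone who doesn't want to click through, the core of the approach boils down to roughly this (a simplified sketch, not the exact code in the repo):

import java.io.IOException;
import com.google.common.util.concurrent.RateLimiter;
import okhttp3.Interceptor;
import okhttp3.Response;

// Simplified sketch: an OkHttp interceptor that uses Guava's RateLimiter to
// block each outgoing request until a one-per-second permit is available.
public class ApiRateLimitInterceptor implements Interceptor {
    private final RateLimiter limiter = RateLimiter.create(1.0); // 1 permit per second

    @Override
    public Response intercept(Chain chain) throws IOException {
        limiter.acquire(); // waits if the previous request went out less than a second ago
        return chain.proceed(chain.request());
    }
}

Register it with new OkHttpClient.Builder().addInterceptor(new ApiRateLimitInterceptor()).build() and every call made through that client gets throttled automatically.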

I'm still interested in knowing if the server will tolerate the reactive approach I asked about earlier.