There are several cases where you'll want to do API calls asynchronously. The batching system allows you to push several API calls at once, and then check the status and fetch the results with a separate call. This is helpful if you want to:

  • Download all of a User's account data to initialize your client.
  • Save several sections of a VocabList all at once (or several lists at once).
  • Minimize HTTP requests to the API to conserve battery.
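As an illustration of the first point, a client might bundle several queries into one submission. This is a hypothetical Python sketch: the endpoint expects an array of Request objects, but the individual field names used here (`path`, `method`, `params`) are assumptions, not confirmed by this document.

```python
import json

def build_batch_body(requests):
    """Serialize a list of Request objects into the POST body
    for the batch endpoint (an array of Request objects)."""
    return json.dumps(requests)

# Two hypothetical GET requests bundled into a single round trip.
body = build_batch_body([
    {"path": "api/v0/items", "method": "GET", "params": {"limit": 100}},
    {"path": "api/v0/vocablists", "method": "GET", "params": {"limit": 100}},
])
```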

Quick Overview

There are three endpoints for dealing with Batches:

  • POST for submitting Batches.
  • GET for checking just the status of requests (how many there are, which are finished, etc.).
  • GET for fetching the resulting content of the requests.

There are two recommended 'flows' for using this system.

With the basic flow, submit your requests, then periodically poll the Batch's status, providing some sort of progress bar so the user can see things are moving. Once the number of running tasks reaches zero, use the regular GET endpoint to fetch all the data at once.
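The basic flow can be sketched as a simple polling loop. This is a minimal sketch: the `runningRequests` field name is an assumption about the shape of the status response, not something this document specifies.

```python
import time

def poll_until_done(fetch_status, interval=2.0, sleep=time.sleep):
    """Basic flow: poll the Batch status until no tasks remain running,
    then return the final status so the caller can fetch everything
    with one regular GET.

    fetch_status is any callable returning the parsed status response;
    sleep is injectable so a UI (or a test) can control pacing.
    """
    while True:
        status = fetch_status()
        if status["runningRequests"] == 0:  # assumed field name
            return status
        sleep(interval)
```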

With the parallel flow, don't wait to start downloading from the regular GET. Fetch from it repeatedly until you have downloaded all the data. By default, the system only returns requests that have finished since your last fetch. This way you can start getting data even before all the requests are done, so that the bottleneck is the slower of network speed and server processing speed, rather than the sum of the two.
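The parallel flow amounts to draining the Batch until everything has arrived, deduplicating along the way (the document notes below that a request may rarely be returned more than once). A sketch, assuming each fetch returns a list of finished Request objects with an `id` field:

```python
def drain_batch(fetch_batch, total_requests):
    """Parallel flow: repeatedly GET the Batch, collecting responses
    as they finish. Each call to fetch_batch is assumed to return only
    the Requests finished since the previous call."""
    received = {}
    while len(received) < total_requests:
        for req in fetch_batch():
            received[req["id"]] = req  # dedupe: rare repeats are possible
    return received
```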

Parallel GET Queries

Sometimes you just want to do a huge query, say to get a ton of data about a User who just logged in. The normal solution would be to download one piece at a time, chaining cursors from one request to the next. With parallel queries, you POST the query to the system once, and it automatically generates and runs all the individual requests at once. This makes it possible to download all the data your client needs for a given User in a matter of minutes. For particularly learned users, fetching the data serially could take hours.

Here's how it works: POST a Batch containing a GET query that involves a cursor, and set the spawner property of the Request object to true. The system will automatically generate all the requests it needs to run the entire query, using your input as the starting point (including any initial cursor), while still taking into account any limits you impose. When you access the results, they'll include the auto-generated Requests.
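A spawner Request might look like the following. Only the `spawner` property is named by this document; the other field names and parameter values are illustrative assumptions.

```python
# Hypothetical spawner Request: the server will chain cursors itself
# and generate however many follow-up Requests the full query needs.
spawner_request = {
    "path": "api/v0/items",                       # assumed field name
    "method": "GET",                              # assumed field name
    "params": {"sort": "changed", "limit": 500},  # assumed field name
    "spawner": True,  # named in the docs: triggers auto-generation
}
```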

Batch Endpoints

POST http://legacy.skritter.com/api/v0/batch Submits a list of the created Requests for the server to run asynchronously.


Request body:

  • An array of Request objects.

Returns:

  • Batch
    The created Batch, which comes with the id you'll need to get the result.

Note: Requests in the returned Batch are in the order they were submitted, so the ids assigned to them can be matched up with your original requests.

GET http://legacy.skritter.com/api/v0/batch/(id)/status Checks the overall status of the Batch.


Parameters:

  • detailed
    boolean; set true to get key stats for each Request (responseSize, responseStatusCode, done, id). This slows the request slightly.
    (default false)
  • request_ids
    comma-separated list of Request ids to get detailed stats on.
    (limit 100)

Returns:

  • Batch
    The specified Batch, with the most recent progress data.

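Building the status URL is straightforward; the parameter names (`detailed`, `request_ids`) come from the endpoint description above, while the helper itself is just an illustrative sketch.

```python
from urllib.parse import urlencode

def status_url(batch_id, detailed=False, request_ids=None):
    """Build the URL for checking a Batch's status."""
    url = "http://legacy.skritter.com/api/v0/batch/%s/status" % batch_id
    params = {}
    if detailed:
        params["detailed"] = "true"
    if request_ids:
        # Limit of 100 ids per the endpoint description.
        params["request_ids"] = ",".join(str(i) for i in request_ids)
    if params:
        url += "?" + urlencode(params)
    return url
```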
GET http://legacy.skritter.com/api/v0/batch/(id) Fetches the responses for the requests in the given Batch. If no request_ids are provided, returns up to 2MB (when uncompressed) of response data that has not yet been received. This way you can make repeated calls to this endpoint without modifications until you have everything. Note that in rare cases you may get a request more than once, so make sure to check.


Parameters:

  • request_ids
    comma-separated list of Request ids to fetch.
    (limit 100)
  • request_fields
    comma-separated list of Request properties to return.


Note: response data is deleted one hour after the Batch starts, becoming null. All metadata remains intact, however.
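Because response data becomes null after an hour, a client should check for it before use. A minimal sketch, assuming each Request object carries its body under a `response` field (an assumed name, not given by this document):

```python
def extract_payloads(requests):
    """Keep only the response bodies that are still available;
    the server nulls out response data one hour after the Batch
    starts, leaving the metadata intact."""
    return [r["response"] for r in requests if r.get("response") is not None]
```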