Handling batch operations with REST APIs
So, you created your REST API following best practices: you named your endpoints accordingly and used the correct HTTP verbs, and everything is working well.
For example, you create users by making a `POST /users` call, get a list of them using `GET /users`, or get a single one by doing `GET /users/:userId`.
Awesome!
Developers are happy, customers using the API are happy, what a beautiful world!
Until someone comes along and says: “Hey, I need to import 10,000 users”. Ouch.
Your first thought might be: “Well… just do a for loop and make a POST for each one of them. I don’t care if it takes half an hour.”
That may work in most cases because, as I said, you built your API following best practices, and your underlying cloud infrastructure is horizontally scalable: you can create as many instances as your credit card allows, with almost unlimited computing resources like CPU and RAM.
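For illustration, that brute-force loop might look like the following minimal client-side sketch (hypothetical code: importUsers is my own helper name, the payload fields match the examples later in this post, and it assumes Node 18+ for the global fetch):

```ts
// Naive import: one POST per user, i.e. one network round-trip per record.
interface NewUser {
  username: string;
  password: string;
}

async function importUsers(apiBase: string, users: NewUser[]): Promise<void> {
  for (const user of users) {
    const res = await fetch(`${apiBase}/users`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(user),
    });
    if (!res.ok) {
      console.error(`Failed to create ${user.username}: HTTP ${res.status}`);
    }
  }
}
```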
But there is one thing that doesn’t scale so well: networking.
Why is networking an issue?
Networking, or rather the number of calls you need to make, is a bottleneck: each network call needs to negotiate a complicated protocol like TCP and find its way through an unreliable global network of routers and switches.
Some clients may even face a hard limit on outbound connections (as I did with an API I built myself, which is what triggered this post).
Usually, an outbound connection from within a system uses SNAT (source NAT), which means one TCP port is consumed per request. Poorly built or complex systems may have a very small pool of available (allocatable) TCP ports.
How can we fix it?
The solution to this networking issue is to have some way of sending multiple items in a single call, so we can make fewer requests with more data instead of one request per user.
But this means we need to make changes to our beautifully designed API. Not only that, it also means you need to start asking yourself a lot of new questions. For example: what happens if the API receives an array of users and processing one of them fails because it lacks a required property? What should the response code be? Do I return 200? 400? An array of response codes?
That is totally the opposite of best practices, so we need to find a better way.
What options do we have?
We have several options to fix this issue; let’s enumerate them:
- Change your contract to accept arrays in the body
- Change your server-side code to accept multiple body formats
- Rename your endpoints
- Create a new endpoint for arrays
- Create an endpoint for receiving batches (for each entity)
- Create a new batch endpoint
Now, let’s take a deeper dive into each of these options and talk about why you should or shouldn’t use it:
Change your contract to accept arrays in the body
This might be the first thing that comes to mind: hey, let’s just accept an array instead of a single object, as we do today.
So instead of:
```
POST /users
{
  "username": "Diego",
  "password": "123456"
}
```
You would do:
```
POST /users
[
  {
    "username": "Diego",
    "password": "123456"
  }
]
```
That sounds nice, but it is an anti-pattern and, worse, if your API is already published and customers are using it, it is a breaking change. It is not backward-compatible; it is a contract change.
You could still use this approach if you bump the version, like `POST /v2/users`.
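If you do go down that road, the v2 handler just needs to accept and iterate over an array. Here is a minimal Express sketch (createUser is a hypothetical service function, not part of any library):

```ts
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical service function that persists a single user.
async function createUser(user: { username: string; password: string }) {
  // ...insert into your database and return the created record...
  return { userId: "12839", username: user.username }; // stubbed result
}

// v2 contract: the body is an array of users instead of a single object.
app.post("/v2/users", async (req, res) => {
  if (!Array.isArray(req.body)) {
    return res.status(400).json({ error: "Expected an array of users" });
  }
  for (const user of req.body) {
    await createUser(user);
  }
  res.sendStatus(201);
});

app.listen(3000);
```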
Change your server-side code to accept multiple body formats
If you created your API using, let’s say, Node, Express, and Swagger/OpenAPI, you might be tempted to fork the Swagger library and modify its code.
This is the worst option of all (I believe). It means changing the libraries you used to create the API, making them “smarter” so they don’t explode and can route the traffic differently when an array is received instead of an object.
This is, again, an anti-pattern. DON’T DO THIS PLEASE.
Rename your endpoints
What if, instead of `users`, we rename the endpoint to `user`, and then make the `users` endpoint accept an array and `user` accept a single object…
This is another anti-pattern; entity names should be plural (not to mention that this is also a breaking change).
Create a new endpoint for arrays
Ok, now things are taking shape… what if we do `POST /usersArray`?
It is a new endpoint, and at first glance it is not an anti-pattern, but it looks fishy. I wouldn’t recommend this approach since it can become inconsistent quickly. Although it may be a good quick fix, it is not a long-term solution.
Create an endpoint for receiving batches (for each entity)
We can do `POST /users/batch`.
Hey, this kind of looks nice. You are exposing a new endpoint for the POST method, and you will need to add logic in your controller to run a batch job over the received array, but it is not an anti-pattern and it is one of the recommended ways to go.
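A sketch of what that controller could look like, continuing the Express sketch above and reusing the hypothetical createUser service function (the per-item result shape is my own choice, not a standard):

```ts
// POST /users/batch: same entity, but the body is an array.
app.post("/users/batch", async (req, res) => {
  if (!Array.isArray(req.body)) {
    return res.status(400).json({ error: "Expected an array of users" });
  }
  const results: Array<{ responseCode: string; responseBody: unknown }> = [];
  for (const user of req.body) {
    try {
      const created = await createUser(user); // hypothetical service function
      results.push({ responseCode: "201", responseBody: created });
    } catch (err) {
      results.push({
        responseCode: "400",
        responseBody: { error: (err as Error).message },
      });
    }
  }
  // The batch call itself succeeded; per-item outcomes travel in the body.
  res.status(200).json(results);
});
```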
Create a new batch endpoint
This takes the previous approach to the next level, providing a good long-term re-usable solution.
Instead of `POST /users/batch`, you can do `POST /batch/users`.
This slight order change makes a huge difference in the backend. If your API uses microservices, it means a completely new batch microservice. If not, it means a completely new controller, like the users one, but called batch.
The purpose of this controller is to run batch jobs against the API endpoints.
So we can do:
```
POST /batch/users
[
  {
    "method": "POST",
    "body": {
      "username": "Diego",
      "password": "123456"
    }
  },
  {
    "method": "POST",
    "body": {
      "username": "Diego2"
    }
  }
]
```
And the response code of the batch endpoint would almost always be 200, but the response body can contain an array with each individual response code:
```
[
  {
    "responseCode": "200",
    "responseBody": {
      "userId": "12839",
      "username": "Diego"
    }
  },
  {
    "responseCode": "400",
    "responseBody": {
      "error": "Missing password"
    }
  }
]
```
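As a rough sketch, the batch controller can dispatch each operation to the existing per-entity logic and collect the individual results. Everything here (createUser, the result shape) is hypothetical and simply mirrors the example payloads above:

```ts
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

interface BatchOperation {
  method: "POST" | "PUT" | "DELETE";
  body: { username: string; password?: string };
}

// Hypothetical per-entity logic the batch controller delegates to.
async function createUser(body: BatchOperation["body"]) {
  if (!body.password) throw new Error("Missing password");
  return { userId: "12839", username: body.username }; // stubbed result
}

app.post("/batch/users", async (req: Request, res: Response) => {
  const operations = req.body as BatchOperation[];
  const results: Array<{ responseCode: string; responseBody: unknown }> = [];
  for (const op of operations) {
    try {
      // Only POST is sketched here; other verbs would dispatch similarly.
      results.push({ responseCode: "200", responseBody: await createUser(op.body) });
    } catch (err) {
      results.push({
        responseCode: "400",
        responseBody: { error: (err as Error).message },
      });
    }
  }
  res.status(200).json(results);
});
```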
You can take this even further by allowing async jobs: assign an ID to each batch job, and then query that ID to get the results.
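A rough sketch of that async variant, reusing app and BatchOperation from the previous sketch and replacing its synchronous handler (the in-memory job store and the /batch/jobs/:jobId route are my own naming, not a standard):

```ts
import { randomUUID } from "crypto";

// In-memory job store; a real system would use a database or a queue.
const jobs = new Map<string, { status: "pending" | "done"; results?: unknown[] }>();

// Hypothetical helper that runs the per-item loop from the previous sketch
// and resolves with the array of individual results (stubbed here).
async function processBatch(operations: BatchOperation[]): Promise<unknown[]> {
  return [];
}

app.post("/batch/users", (req, res) => {
  const jobId = randomUUID();
  jobs.set(jobId, { status: "pending" });
  // Fire and forget: process in the background, don't block the response.
  processBatch(req.body).then((results) => jobs.set(jobId, { status: "done", results }));
  res.status(202).json({ jobId }); // 202 Accepted: processing continues asynchronously
});

app.get("/batch/jobs/:jobId", (req, res) => {
  const job = jobs.get(req.params.jobId);
  if (!job) return res.sendStatus(404);
  res.status(200).json(job);
});
```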
Conclusion
Of all the options, only three are not anti-patterns: you can change your contract and accept arrays in a newer version of your API, if you are willing to confront the risks of an inconsistent experience, or you can use either of the last two options to build a batch endpoint.
I tend to choose the last one (build a batch microservice/controller) as the “best” option, but it really depends on your API, the business context, and some other factors.