I happen to use AWS DynamoDB at work (ikr), and one of the things that are way harder to grasp than they should be is the way they count consumed read and write capacity. It is, however, pretty simple once you manage to find the right pages (with an s) of their documentation. I’ll try to summarize it here:
Read capacity
A read capacity unit (RCU) allows you one strongly consistent read per second, if your read is up to 4KB in size. If your read is larger than 4KB, it consumes more units (always rounded up to the nearest 4KB multiple). If you use eventually consistent reads, each read counts for half a unit. The default read mode is eventually consistent.
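To make the rounding rule concrete, here is a tiny TypeScript sketch of how I understand it (assuming 1KB = 1024 bytes, see the caveat further down):

```typescript
// Rough RCU cost of a single read, per the rules above.
// itemSizeBytes is the size of the item being read; 1KB is assumed to be 1024 bytes here.
function rcuForRead(itemSizeBytes: number, stronglyConsistent = false): number {
  const units = Math.ceil(itemSizeBytes / (4 * 1024)); // rounded up to the next 4KB multiple
  return stronglyConsistent ? units : units / 2;       // eventually consistent reads cost half
}
```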
If you get an item (< 4KB), it counts as one read (or half a read if using eventually consistent reads). If you get X items (each < 4KB), it counts as one read per item, whether you issue X Get requests or a single BatchGet (so I’m not sure how useful BatchGet is, compared to the code complexity it adds).
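For illustration, here is roughly what both versions look like with the JavaScript SDK’s DocumentClient (the table name and keys are made up). They consume the same read capacity, so the main thing BatchGet seems to buy you is fewer network round trips:

```typescript
import { DynamoDB } from 'aws-sdk';

const client = new DynamoDB.DocumentClient();

// Three separate Get requests: 3 reads for three items < 4KB each
// (1.5 reads with the default eventually consistent mode).
async function threeGets() {
  return Promise.all(
    ['a', 'b', 'c'].map((id) =>
      client.get({ TableName: 'my-table', Key: { id } }).promise()
    )
  );
}

// One BatchGet for the same three items: same capacity consumed.
async function oneBatchGet() {
  return client
    .batchGet({
      RequestItems: {
        'my-table': { Keys: [{ id: 'a' }, { id: 'b' }, { id: 'c' }] },
      },
    })
    .promise();
}
```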
If you query items, only the total size matters.
If you “just” count items (e.g. a query with Count: true and Select: 'COUNT'), you will still consume as much capacity as if you had returned all the items.
Note that if your result set is larger than 1MB, it will be cut off at 1MB. To read more than 1MB of data, you’ll have to perform multiple queries, with pagination.
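A minimal pagination sketch for that case, again with the DocumentClient (the table name, key names and values are placeholders):

```typescript
import { DynamoDB } from 'aws-sdk';

const client = new DynamoDB.DocumentClient();

// Counting more than 1MB worth of items means paginating with LastEvaluatedKey.
async function countAll(): Promise<number> {
  let total = 0;
  let startKey: DynamoDB.DocumentClient.Key | undefined;
  do {
    const page = await client
      .query({
        TableName: 'my-table',
        KeyConditionExpression: 'pk = :pk',
        ExpressionAttributeValues: { ':pk': 'some-partition' },
        Select: 'COUNT',
        ExclusiveStartKey: startKey,
      })
      .promise();
    total += page.Count ?? 0;
    startKey = page.LastEvaluatedKey; // undefined once the last page has been read
  } while (startKey);
  return total;
}
```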
Practical examples:
– Get a 6.5KB item + get a 1KB item = 3 reads (if strongly consistent) or 1.5 reads (if eventually consistent)
– Query 54 items for a total of 39KB = 10 reads (if strongly consistent) or 5 reads (if eventually consistent)
– Count 748 items that have a total size of 1.1MB = 250 reads (if strongly consistent) or 125 reads (if eventually consistent) for the first 1MB + another count query for the remaining 100KB.
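If you want to check numbers like these against what you’re actually billed, the API can report the capacity consumed by each request. A quick sketch (table name and key are made up):

```typescript
import { DynamoDB } from 'aws-sdk';

const client = new DynamoDB.DocumentClient();

// Ask DynamoDB to report the capacity actually consumed by this request.
async function getWithCapacity() {
  const result = await client
    .get({ TableName: 'my-table', Key: { id: 'a' }, ReturnConsumedCapacity: 'TOTAL' })
    .promise();
  console.log(result.ConsumedCapacity?.CapacityUnits); // e.g. 0.5 for a small, eventually consistent read
}
```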
Write capacity
A write capacity unit (WCU) allows you one write per second, if your write is up to 1KB in size (yup, that’s not the same size as for reads… how not confusing!). Multiple items, or items larger than 1KB, work just as for reads. Also, I don’t remember where I read it, but I’m pretty sure delete operations count like writes, and update operations count like writes sized on the larger of the old and new versions of the modified item.
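The same kind of sketch as for reads, with the update rule as I remember it (again assuming 1KB = 1024 bytes):

```typescript
// Rough WCU cost of a single write, per the rules above (1KB = 1024 bytes assumed).
function wcuForWrite(itemSizeBytes: number): number {
  return Math.ceil(itemSizeBytes / 1024); // rounded up to the next 1KB multiple
}

// Updates are assumed to be billed on the larger of the old and new versions of the item.
function wcuForUpdate(oldSizeBytes: number, newSizeBytes: number): number {
  return wcuForWrite(Math.max(oldSizeBytes, newSizeBytes));
}
```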
Practical examples:
– Write a 1.5 KB item + write a 200 bytes item = 3 writes
– Delete a 2.9KB item = 3 writes
– Update a 1.7KB item with a new version that’s 2.1KB = 3 writes
– Update a 1.1KB item with a new version that’s 0.7KB = 2 writes
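Running those examples through the previous sketch gives the same numbers:

```typescript
// Using the wcuForWrite / wcuForUpdate sketch from above (1KB = 1024 bytes assumed).
console.log(wcuForWrite(1.5 * 1024) + wcuForWrite(200)); // 2 + 1 = 3 writes
console.log(wcuForWrite(2.9 * 1024));                    // 3 writes (deletes cost like writes)
console.log(wcuForUpdate(1.7 * 1024, 2.1 * 1024));       // ceil(2.1) = 3 writes
console.log(wcuForUpdate(1.1 * 1024, 0.7 * 1024));       // ceil(1.1) = 2 writes
```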
On a side note, I’m not really sure if DynamoDB uses 1KB = 1000 bytes or 1KB = 1024 bytes.
Burst capacity
At the moment (apparently it may change in the future), DynamoDB retains up to 300 seconds of unused read and write capacity. So, for instance, with a provision of 2 RCU, if you do nothing for 5 minutes, then you can perform 1200 Get operations at once (for items < 4KB and using eventually consistent reads).
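The arithmetic behind that, spelled out:

```typescript
// Back-of-the-envelope burst calculation for the example above.
const provisionedRcu = 2;
const burstWindowSeconds = 300;        // up to 300 seconds of unused capacity is retained
const accumulated = provisionedRcu * burstWindowSeconds; // 600 RCU banked
const costPerGet = 0.5;                // item < 4KB, eventually consistent read
console.log(accumulated / costPerGet); // 1200 Get operations
```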
Sources and more details
I tried to focus on the most important points about read and write units. You can find more details about this topic in particular and, of course, about DynamoDB in general, in the docs. Notably, I used these pages a lot here:
– AWS DynamoDB Documentation – Throughput Settings for Reads and Writes
– AWS DynamoDB Documentation – Best Practices for Designing and Using Partition Keys Effectively
– AWS DynamoDB Documentation – Working with Queries