My experience with dynamic provisioning has been that it's pretty inelastic, at least at the lower end of the capacity range. For example, if you have a few read units and then try to export the data using the AWS CLI, you can hit the capacity limit pretty quickly and have to start the export over again. Last time, I ended up manually bumping the capacity way up, waiting a few minutes for the new capacity to kick in, and then exporting. Not what I had in mind when I wanted a serverless database!
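For the record, the workaround looked roughly like this (just a sketch with the AWS CLI; the table name and capacity numbers are placeholders):

    # Bump provisioned read capacity well above normal
    aws dynamodb update-table \
        --table-name MyTable \
        --provisioned-throughput ReadCapacityUnits=500,WriteCapacityUnits=5

    # Wait for the table to go ACTIVE again before reading
    aws dynamodb wait table-exists --table-name MyTable

    # Then run the scan-based export, and dial capacity back down afterwards
    aws dynamodb scan --table-name MyTable > export.json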
I understand it's not really your point, but if you're actually looking to export all the data from the table, they've got an API call you can make (ExportTableToPointInTime) that has DynamoDB write the whole table to S3. It doesn't use any of your provisioned capacity.
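If it's useful, a minimal sketch with the AWS CLI (the table ARN and bucket name are placeholders; note that the export API requires point-in-time recovery to be enabled on the table first):

    # Export requires point-in-time recovery (PITR) on the table
    aws dynamodb update-continuous-backups \
        --table-name MyTable \
        --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true

    # Kick off a server-side export to S3; it doesn't touch table capacity
    aws dynamodb export-table-to-point-in-time \
        --table-arn arn:aws:dynamodb:us-east-1:123456789012:table/MyTable \
        --s3-bucket my-export-bucket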