In our case, many of our objects carried redundant (denormalized) data, so a single update required multiple calls to the DynamoDB service. By normalizing the model we cut the number of service calls per update and saw throughput gains in the application. We had also conflated a couple of domain-specific concepts in the data model; splitting what was really two independent entities that had been modeled as one reduced the absolute record count.
I describe these optimizations as "making the data smaller" and "normalization".
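To make the normalization point concrete, here is a minimal sketch of the before/after item shapes. The table names, keys, and the customer/order example are hypothetical stand-ins, not our actual schema; the point is only that duplicated attributes fan out into multiple writes, while a normalized reference needs one.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

orders = dynamodb.Table("Orders")        # hypothetical table
customers = dynamodb.Table("Customers")  # hypothetical table

# Denormalized: each order carries a copy of the customer's address.
# Changing the address means one UpdateItem per order that embeds it.
orders.put_item(Item={
    "order_id": "o-1001",
    "customer_id": "c-42",
    "customer_address": "123 Main St",  # duplicated data
    "total": 250,
})

# Normalized: the address lives once on the customer record, and the
# order keeps only the key needed to look it up. One write updates it.
customers.put_item(Item={
    "customer_id": "c-42",
    "address": "123 Main St",
})
orders.put_item(Item={
    "order_id": "o-1001",
    "customer_id": "c-42",
    "total": 250,
})
```

The trade-off is the usual one: the normalized shape may need an extra read to join the two records, but updates touch a single item.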