It is very much the opposite. With this pattern, you're going to have lots of copies of your data in different transformations in potentially many different data stores. The idea is that you take the stream of changes from something like Postgres and use that stream to populate caches, indexes, denormalizations/representations, counts, etc.
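To make that concrete, here is a minimal sketch of one way to consume such a change stream, assuming psycopg2's logical replication support, a replication slot named "cdc_slot" created with the wal2json output plugin (e.g. `SELECT pg_create_logical_replication_slot('cdc_slot', 'wal2json')`), and plain dicts standing in for a real cache and counter store. All the names here are illustrative, not prescriptive:

```python
import json

import psycopg2
import psycopg2.extras

# Assumed connection string; the slot "cdc_slot" must already exist
# and use the wal2json output plugin.
conn = psycopg2.connect(
    "dbname=app user=replicator",
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()
cur.start_replication(slot_name="cdc_slot", decode=True)

cache = {}   # stand-in for Redis/memcached: (table, id) -> row
counts = {}  # stand-in for denormalized per-table row counts

def consume(msg):
    # wal2json emits one JSON document per committed transaction.
    for change in json.loads(msg.payload).get("change", []):
        table = change["table"]
        if change["kind"] in ("insert", "update"):
            row = dict(zip(change["columnnames"], change["columnvalues"]))
            cache[(table, row["id"])] = row  # refresh the cached copy
            if change["kind"] == "insert":
                counts[table] = counts.get(table, 0) + 1
        elif change["kind"] == "delete":
            keys = change["oldkeys"]
            key = dict(zip(keys["keynames"], keys["keyvalues"]))
            cache.pop((table, key["id"]), None)
            counts[table] = counts.get(table, 0) - 1
    # Acknowledge the LSN so Postgres can recycle WAL behind us.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(consume)  # blocks, calling consume() per message
```

In practice you'd usually put something like Kafka between Postgres and the consumers, so that the cache, search index, and counters each track their own offset in the same stream and can be rebuilt independently by replaying it.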
If your nail looks like small-but-important data, CDC / immutable datastores seem like a great hammer. For everything else, the answer is: it depends. Some thoughts on the limitations of this approach: http://www.xaprb.com/blog/2013/12/28/immutability-mvcc-and-g...