Well, of course, because that statement is deeply incorrect—the described mistake would cause the most recent reading to have more weight.
If you have a set of readings, say, [0.1, 0.02, 0.3, 0.05, 0.08], normally when you average them you would get 0.11, the mean of the set.
Calculating the average by "averaging the new reading with the previous average" would mean (new + old) / 2 every time. That means that for each reading after the first, your "averages" would be: [0.06, 0.18, 0.115, 0.0975].
If we add a new reading of 0.01 to each of these, in the first case the mean only drops from 0.11 to about 0.093, while in the second case the "average" drops from 0.0975 to about 0.054. As you can see, the second case biases the result much further in favor of the new reading (which, in this case, is very low compared to the existing readings).
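For anyone who wants to see it concretely, here's a minimal Python sketch of the two calculations; the readings list is just the made-up example from above, not data from the actual system:

```python
# Hypothetical readings (last one is the new, very low reading).
readings = [0.1, 0.02, 0.3, 0.05, 0.08, 0.01]

# True mean: every reading carries equal weight.
true_mean = sum(readings) / len(readings)

# Broken method: each new reading is averaged with the previous "average",
# so the newest reading always carries half of the total weight.
broken_avg = readings[0]
for r in readings[1:]:
    broken_avg = (broken_avg + r) / 2

print(f"true mean:  {true_mean:.4f}")   # ~0.0933
print(f"broken avg: {broken_avg:.4f}")  # ~0.0538
```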
yeah, the language is just ambiguous enough that you can't know for certain which is meant: first vs newest.
taken as a whole, though, the quantization of the samples into 8 bins is a much bigger issue, along with the lack of a hardware watchdog or hardware malfunction alarms, and the poor testing methodology.