> the numbers are actually really hard to make sense of.
1. Then how do you know builds are slower?
2. How will you know that the things you are doing to improve build times are working?
> jump up and down based on time, days, moods, network, and many more causes, plenty of which we have no idea what they are.
3. Performance being highly inconsistent is actually a (significant) performance problem itself, and worthy of reporting, analysing, and quantifying (ranges, averages, etc.).
> not yet seen a successful attempt at cleaning this data up so that some number would be worth publishing.
4. Not sure why you think they need to be "cleaned". Again, numbers being all over the place is likely an important data point in itself, and even if it isn't, the raw data would be valuable.
> We could try building some toy example that we separated.
Why? Just do a clean build on a local machine. Report real and user+system time (see man time). Repeat.
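For what it's worth, a minimal sketch of what "repeat and report" could look like against a Gradle project; the task names, run count, and the choice to measure wall-clock time only are illustrative assumptions, not anyone's actual setup:

```kotlin
import kotlin.system.measureTimeMillis

// Hypothetical measurement loop: run a cold build several times and report
// the spread, since the range itself is part of the story (point 3 above).
// This measures wall-clock ("real") time only; for user+system you would
// wrap the command in /usr/bin/time instead.
fun main() {
    val runs = 5
    val timesMs = mutableListOf<Long>()
    repeat(runs) {
        // Clean first so every iteration measures a cold build.
        ProcessBuilder("./gradlew", "clean").inheritIO().start().waitFor()
        timesMs += measureTimeMillis {
            ProcessBuilder("./gradlew", "assemble").inheritIO().start().waitFor()
        }
    }
    println("min=${timesMs.minOrNull()} ms, max=${timesMs.maxOrNull()} ms, avg=${timesMs.average().toInt()} ms")
}
```

Publishing min/max/avg together, rather than a single number, is one way to report the inconsistency instead of hiding it.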
1. There are some trend lines, or you can see a Kotlin module taking a lot of time during a build. However, we can't tell how much of that is due to Kotlin and how much is an artifact of optimizations such as [1] that work for Java but not Kotlin for now.
2. We have more specific metrics for parts of the build that we know would improve. For example, migrating annotation processors to use KSP instead of KAPT makes modules build faster (see the sketch after this list). This doesn't mean we can easily expect a different trend line in the top-level graphs, since in the meantime the amount of Kotlin code is changing.
3. I'm not sure what your experience is, but in mine this would be a "nice to have" that is either unattainable or comes at the cost of real user improvements. For example, if a lot of people build at once, network caches might be slower. How does one remove that noise? Scale up the caches? That's more money and more things to keep updated. Try to account for it in the data? Maybe, but that's one of many such issues (which brings us to the next point).
4. The commenter asked for numbers on builds. I don't think a graph that's jumping up and down due to the complexities and optimizations of Meta's build architecture is useful to anyone outside Meta. I think deducing results about Kotlin build times from it would be misleading.
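To make the KAPT→KSP point concrete: in a Gradle module it is mostly a dependency-declaration swap, since KSP runs the processor against a Kotlin-native API instead of generating Java stubs first. A hedged sketch using Moshi's codegen as the processor; plugin and library versions are examples, not a recommendation:

```kotlin
// build.gradle.kts (per module)
plugins {
    kotlin("jvm") version "1.9.24"
    // id("org.jetbrains.kotlin.kapt")                     // before: KAPT
    id("com.google.devtools.ksp") version "1.9.24-1.0.20"  // after: KSP
}

repositories { mavenCentral() }

dependencies {
    implementation("com.squareup.moshi:moshi:1.15.0")
    // kapt("com.squareup.moshi:moshi-kotlin-codegen:1.15.0") // before
    ksp("com.squareup.moshi:moshi-kotlin-codegen:1.15.0")     // after
}
```

The win comes from skipping KAPT's Java stub generation, which is a large fixed cost per module.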
> Why? Just do a clean build on a local machine. Report real and user+system time (see man time). Repeat.
This number is not stable, even if you shut off network caches and other similar things. But more importantly, this number is not useful. Our users rarely have that experience. One of the common sub-goals we have when dealing with build times is to minimize how often someone has to do a clean build.
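Since the contested number is the clean build, here's a hedged sketch of measuring the path users actually hit instead: touch one source file and time the incremental rebuild. The file path and task name are made up for illustration:

```kotlin
import java.io.File
import kotlin.system.measureTimeMillis

// Hypothetical incremental-build measurement: change one file between runs
// so Gradle's up-to-date checks see real (but minimal) work to do.
fun main() {
    val source = File("app/src/main/kotlin/Example.kt") // illustrative path
    repeat(5) { i ->
        // Append a comment so the file's content hash actually changes;
        // a timestamp-only touch would be ignored by Gradle's fingerprinting.
        source.appendText("\n// touch ${System.nanoTime()}")
        val elapsed = measureTimeMillis {
            ProcessBuilder("./gradlew", "assemble").inheritIO().start().waitFor()
        }
        println("incremental run ${i + 1}: $elapsed ms")
    }
}
```

Even this is only a proxy; it won't capture shared-cache traffic or the mix of changes real developers make.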