> Huh? Analytics is how you focus on your product.
For the first decade of my career, I really believed this, and spent a lot of time doing split tests, and studying analytics, and trying to "make things better" by understanding the numbers as they were given to me.
This was a mistake.
I've since learned that these numbers will rarely help me make meaningful improvements to my product, and my business. Sure, they can be useful so that I'm not "running blind," but they simply aren't going to show me how to create an ingenious idea that takes things to the next level.
Analytics will help you optimize things to a "local maximum", but they'll blind you to the real possibilities of creating something new that can completely transform a business. Ever since I understood this distinction, I've been quite a lot more effective.
There's a similar problem with things like "user interviews." A common pitfall is to ask people which features they want. That has limited use. The real work you need to do is the "creative thinking" that others haven't done. Figure out what people don't know they want; learn what the numbers can't tell you. Then go and build it. Yes, understand the numbers, choose a good business model, and optimize based on those numbers, but don't let the numbers create the product. It's a dead end.
> but they simply aren't going to show me how to create an ingenious idea that takes things to the next level.
Of course not. There's no substitute for straight-up creativity and deep thinking.
But once you have your ingenious idea, you still have to design it, make sure it's clear to users, that they find it and can use it effectively. Your "ingenious idea" may turn out to be largely sabotaged if a button you thought had an intuitive label is misunderstood by 90% of users, or a link you thought was highly visible is being scrolled past by nearly everyone.
Yes, analytics is all about optimizing things to a local maximum. But you might not be anywhere near your local maximum. It's astonishingly easy for the first version of your ingenious idea to only be achieving 5% or 10% of the actual local maximum potential. We shouldn't downplay the difficulty or achievement involved in getting even close to a local maximum.
And you're correct that in user interviews, if you only ask what features they want, you're drastically limiting the value you might uncover. On the other hand, you'd better not ignore the features users are frequently requesting either. A lot of users are pretty smart and know exactly what they need, at least to get to that local maximum.
If you’re just using analytics to look at how effective UI designs are in making business conversions, couldn’t you still measure that by checking the backend and looking for a spike in activity towards the API endpoint that the UI invokes? Couldn’t you measure effectiveness with a spike or drop in sales? I mean, good UX doesn’t so much rely on Google Analytics but on a UX engineer’s depth of knowledge about human psychology.
> couldn’t you still measure that by checking the backend and looking for a spike in activity towards the API endpoint that the UI invokes?
You could have multiple UIs hitting the same endpoint. Also, why limit yourself with such crude metrics?
> good UX doesn’t so much rely on Google Analytics but on a UX engineer’s depth of knowledge about human psychology
UX in theory and UX in application are two different things. You could have the best models of how users will interact with your site, but until you deploy and measure, you have no idea what will happen.
You can still parameterize the API calls if you want to attribute user activity to a specific flow, and that way you wouldn’t be “feeding the beast” that is GA.
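Something like this minimal sketch is what I have in mind; the "/api/checkout" endpoint and the "flow" parameter are invented names, just for illustration:

```typescript
// Client side: tag the call with the UI flow that triggered it, so your own
// backend logs can attribute activity to a specific flow with no third-party script.
// "/api/checkout" and "flow" are hypothetical names for illustration.
async function submitCheckout(payload: { items: string[] }, flow: string): Promise<void> {
  await fetch(`/api/checkout?flow=${encodeURIComponent(flow)}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Two different UIs hitting the same endpoint stay distinguishable in the logs:
submitCheckout({ items: ["sku-123"] }, "one-page-checkout");
submitCheckout({ items: ["sku-123"] }, "classic-cart");
```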
How you mark/report the events is different from where you report them. You could use any one of the self-hosted solutions on your own domain instead of GA without changing much about the way you report back.
There's plenty of alternatives to Google Analytics, including open-source software you can self-host so it doesn't share your users' private data with a third party. You don't need to roll your own just to avoid GA.
If you are unable to find out that 90 percent of people can't use a major feature that you thought would differentiate your product, you are missing all forms of feedback, including much more important sources besides analytics.
I disagree completely because I've seen it in practice.
A button turns out to be below the fold on common small screen sizes that a new designer forgot to consider. A bad translation results in 90% of users in a particular country misunderstanding something. A JavaScript library doesn't work on a common Android phone you don't test with. Latency issues make something virtually unusable from the other side of the country because of a single badly written function that's easy to rewrite -- but you still have to catch it!
You need ALL forms of feedback. It's not a question of some being more important than others. They're all important and play their own unique roles. Analytics is for catching AND debugging all the things that go wrong at scale in the real world, as opposed to the artificial and limited environments used for user testing.
One goalpost at a time, please. 90% of users in "a given country" is not the same thing as saying 90 percent of all users. Unless that country is so important that it's effectively everyone, and again, you needed to test that your product is usable in the first place.
And even so, while there are a lot of countries and languages, a collection of tests in all of them, which is all it takes for these examples, is not "big data". Again you are crediting analytics for things that people were fully capable of and responsible for doing in the days without traffic data. I realize it is great for the resume and sounds sexier to do "analytics" as opposed to just basic software testing (you should see what people are calling "AI" in other industries these days). And I'm certainly happy to agree that analytics has some value, say in improving wording and stuff that a single instance won't tell you, but again these aren't cases of that.
If you see in your backend metrics that a button isn't clicked by a huge number of visitors, you can start there and analyze. Knowing common screen sizes is helpful, but it's more helpful to look at the page itself, since maybe other optimisations can be made if that is the important button.
Similarly with languages, which you can see from backend metrics.
More data always seems nice, but analyzing more data doesn't make the analysis simpler, and there is a big privacy impact in analytics, especially when outsourcing to data collectors, who can gather data across sites. (Which then also gives Google information about which services are interesting to users, and can be integrated into search etc.)
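As a rough sketch of what that backend-only analysis could look like (the log record shape and field names here are made up for illustration):

```typescript
// Hypothetical shape of a parsed backend log record (field names invented for illustration).
interface LogRecord {
  path: string;            // e.g. "/pricing" or "/api/subscribe"
  button?: string;         // id of the UI element that triggered an API call, if known
  acceptLanguage: string;  // e.g. "de-DE,de;q=0.9"
}

// Compare page views against clicks of one important button, per language,
// using nothing but your own backend data.
function clickRateByLanguage(
  records: LogRecord[],
  pagePath: string,
  buttonId: string,
): Map<string, { views: number; clicks: number }> {
  const stats = new Map<string, { views: number; clicks: number }>();
  for (const r of records) {
    const lang = r.acceptLanguage.split(",")[0].trim();
    const entry = stats.get(lang) ?? { views: 0, clicks: 0 };
    if (r.path === pagePath) entry.views += 1;
    if (r.button === buttonId) entry.clicks += 1;
    stats.set(lang, entry);
  }
  return stats;
}
```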
> if you are unable to find out that 90 percent of people can't use a major feature
How would users know they can't use a feature they don't know exists?
Let's say you add a brilliant new feature X, but due to a bug the users can't load the code for that feature, so they never see it. How would they know to submit feedback about a feature they don't know is there?
Because when you're talking to them directly you ask them about it. This is what I mean by all forms of feedback.
I'm getting the impression that people are just ignoring all advice about communicating with customers in their startups and just throwing stuff out there to see what sticks. Besides being wasteful in doing stuff no one wants anyway, what if that bad feature crippled your product and your paying customers have permanently switched to a competitor the instant your change frustrated them? And now you're bankrupt and can't afford analytics. Relying on analytics as a crutch to catch these things was the mistake in the first place.
Fun story I actually had forgotten about till now: I briefly worked at a tiny startup out of my school in the ending days of the internet bubble, trying to sell a "data mining" software product. Way too soon before it was cool, sadly. It was really hard to make the case that people needed to pay us $100k for the benefits we could get from their data. And companies certainly weren't going to go for a pitch like "if you launch a broken product we will catch that fact". They spend a lot of money to be sure that doesn't happen already. We even had one major customer figure out that they could just have an engineer perform a simple counter over incoming communications that would catch all they needed to know, and hence they didn't need our product anymore. That was kind of the end for us, in fact.
You're essentially arguing for qualitative data instead of quantitative, but both together is usually where the money is. I agree that qualitative analytics are underestimated because they're hard to do, but I also think that having quantitative analytics together with qualitative allows you to contextualize your numbers in ways that lead to insights you wouldn't have otherwise.
Also, after you've already reached product-market fit, it's important to take your product to its "local maximum".
I think you may be right; after all, thinking critically about my story above, I did spend quite a lot of time learning about analytics and quantitative numbers. It could be this gave me an intuitive sense of what works, which I could then apply to the more creative thinking. I don't know. Either way, I'm grateful to make a living the way I do.
Qualitative data has another challenge - representativeness. It's very easy to do 10 user interviews and feel comfortable that you understand the market. Our brains lie to us all the time.
Quantitative data lets you drill down into different dimensions. Because it is much easier to collect at scale (it's the sum of your users' interaction w/ your product, after all!), it's much easier to make representative decisions.
No it isn’t, because the data will never tell you “why” people are doing something or not doing something. You can guess, but you’ll never know why until you a) talk to users and b) watch them use your product. Qualitative research isn’t about statistical significance, it’s about deep insights. 10 user interviews will uncover 100 insights.
What makes you think the 100 insights are actually insights, and not simply deriving from confirmation bias?
To be clear, I support qual + quant user research (+ split testing). They are all tools that help influence product direction. I have, however, seen a lot of cases where qualitative research uncovers an insight that doesn't actually match real world behavior when we bake it into the product.
Compare that to a split test, where I don't know why something is happening, but I am able to optimize against goals. It gives less insight but is more foolproof and directly actionable. With insight, you have to go insight -> improved_action. With testing, you just have the improved action.
To me, qualitative research is crucial to build a model of the environment. These can be used to generate hypotheses, which you then split test. Any time I see someone take direct insight and build a product feature from it, I have a lot of questions about what alternate ideas they tested. And how to know the one that was developed is as close to optimal as you can get, given the fixed resources available.
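As a concrete illustration of "generate hypotheses, then split test", here's a minimal sketch of deterministic variant assignment; the experiment name, variants, and user id are all placeholders:

```typescript
import { createHash } from "crypto";

// Deterministically assign a user to a variant so the same person
// always sees the same version across sessions.
function assignVariant(userId: string, experiment: string, variants: string[]): string {
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}

// Hypothesis from qualitative research: a shorter signup form converts better.
const variant = assignVariant("user-42", "short-signup-form", ["control", "short-form"]);
// Render the chosen variant, then compare completed signups per variant.
```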
Apple famously "ignores" its users, partly because users usually can't see far beyond what is in front of them, often because they don't know about impending advances in technology or clever new designs. They'll ask for faster, cheaper versions of what they already have (faster horses, cheaper buggy whips as they say) rather than the next big thing. Faster/cheaper weren't the primary draws of the Mac, iPod, iPhone, iPad, etc. (though price/performance is a big draw of the M1, the big breakthrough is performance/watt which leads to all-day battery life and better thermals.) Instead it was a quantum improvement in design, usability, and functionality combined.
As another example, consider that in 2007 Apple developers were begging for an iPhone SDK, and Steve Jobs crushed their hopes by telling them to just make web apps. A year later Apple came out not just with an iPhone SDK, but with an entire App Store. (Though I suppose some developers [Epic] and users [HN] wish they had just come out with an SDK, and that the iPhone wasn't locked down.)
That being said, they do a lot of user testing of the next big thing before it is revealed publicly.
It's really not that useful for Apple to collect analytics because they make physical products, where the potential of analytics is limited. When it comes to a SaaS or a web page, the possibilities of analytics are much greater.
And yet, Macs will still send usage and performance data to Apple so they can incorporate that information into future product versions and find out about system software issues.
We're pretty deep in a single thread here, but the original article isn't saying "don't use analytics", it's saying "don't use Google Analytics".
Your SaaS or webpage probably gathers 80% of what Google Analytics does in your log files. It could without doubt provide deeper insights than GA is capable of by adding your own behaviour tracking code (which can be written with an understanding of your specific problem domain, rather than being an "everything to everybody" generalised solution).
Nobody is suggesting we shouldn't collect usage and performance data; the thesis here is that we shouldn't send it all directly to the world's most profitable advertising agency just because they'll draw us some pretty graphs for free.
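To give a sense of how small "your own behaviour tracking code" can be, here's a hypothetical sketch of a self-hosted event collector; the "/track" endpoint and the event fields are assumptions, not anyone's actual setup:

```typescript
import express from "express";

// Minimal self-hosted event collector: events stay on your own domain
// and the schema can be tailored to your specific problem domain.
const app = express();
app.use(express.json());

interface TrackedEvent {
  name: string;       // e.g. "clicked_purchase"
  page: string;       // page the event happened on
  timestamp: number;  // client-side epoch millis
}

const events: TrackedEvent[] = []; // swap for a real datastore in anything serious

app.post("/track", (req, res) => {
  const { name, page, timestamp } = req.body as TrackedEvent;
  events.push({ name, page, timestamp });
  res.sendStatus(204);
});

app.listen(3000);

// Client side (browser): a fire-and-forget beacon instead of loading GA, e.g.
// navigator.sendBeacon("/track", JSON.stringify({
//   name: "clicked_purchase", page: location.pathname, timestamp: Date.now(),
// }));
```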
Well, you can't really opt out of all analytics. It's really hard to completely stop all server logs, or stop them from tracking how many purchases you make (e.g. invoices).
Where exactly do we draw the line to what's core for a company to track in order to run their business and what can be opt-out?
The issue I have with analytics, in addition to your (imho) valid points, is that lots of practitioners seem to always measure the wrong endpoints and don't include negative outcomes.
Leads are useless without measuring how many people bounced off your shitty online shop, because they couldn't understand your UI.
Staying time is useless without measuring closed tabs, because your website is unreadable with ads.
The irony behind it is that ad metrics are used to buy/sell a website's worth for ads. Sometimes I feel like it's a bubble invented on purpose so that nothing has a measurable outcome.
I agree with you in theory, but I'll point out that you do measure bounce rate, recurring vs. new visitors, and staying time at the same time, which gives you that exact view.
But I agree they measure the wrong endpoints, or rather they measure too many endpoints for me to make sense of which one is better. The goal of overall traffic sometimes works against keeping bounce rates low or staying time high.
In the end you would think products sold is what matters. But an overpromising website will overhype and underdeliver, causing returns or bad reviews, which may affect future sales. The message has to describe the product but still meet the market's demand.
> There's a similar problem with things like "user interviews." A common pitfall is to ask people which features they want.
Asking what users want is not the right way to do interviews. I advise you to read "Just enough research" by Erika Hall. The goal of interviews is to gather enough data points to understand the users' needs and struggles, to know their journey through the product with accuracy, and also to understand why they use or don't use a function. It is not a way to get a "wish list".
Then, designers usually have tons of tools and methods to process this data, make decisions, try creative solutions (usually more than one), and play them back to the users through prototypes to see which ones work better. You can check out design thinking as a starter, but there are many more.
A similar book is The Mom Test. You're not supposed to ask the user how they would've solved the problem (had they known, they would've solved it themselves); you're supposed to understand their pain points and then design the most effective solution for them.
I've done so many projects that revolved around instrumenting every little click and creating A and B versions of an experience to run tests against, and in all that time I can't recall a single useful product decision that was actually informed by the collected data.
I would disagree on user research. It works on certain types of broad questions like messaging and branding. It's probably overused though.
I'm with you in having come around on not trying to cram everything into that hole of testability. But I do think there's room, specifically when it comes to testing your ideas. It's rare to look at data and see what the problem is, but it's common to come up with a hypothesis for what the problem is and a way to experimentally measure whether your new solution has actually made a dent.
Why is analytics only numbers / quantitative data to you? I've used services like HotJar that record the interactions of the user on the site or app, which is qualitative analytics at scale, and it helped me identify how the user was actually using the product. I wouldn't refuse to call this analytics simply because it's not numbers-driven.
Why would you need consent to track anonymous data from users, without sharing the data with 3rd parties? What's the difference between recording an event "clicked_purchase" and displaying the click event in a graphical way, as a recording or heatmap?
Do you know of any legal document mentioning this, i.e. that storing the clicked element is allowed but storing the click position is not? They can both be used to run the business and improve the product, thus being in the "necessary" category.
Of course not. And unless I'm in the EU, I don't need it (and even then, not really, HotJar can be made GDPR compliant). HN will definitely balk at me saying that, but understanding UX is much more important than whatever philosophies one has about not tracking users. Because if you don't understand UX, there's no point to the debate about tracking vs not tracking users, because you won't have any users in the first place.