Yoric's comments | Hacker News

> Additionally, my perception (from posts and discussions like these, I'm not a financial analyst and I have no meaningful insights into their business) is also that they probably receive enough funding through non-advertising means that they don't actually need to do this if they were to pare back the nonsense spending they're so greatly known for.

Last time I checked, Mozilla received 90%+ of its funding from Google. This is a situation that nobody likes (except Google, of course). These ads are an attempt to diversify income streams.

People are really unhappy that Mozilla gets money from Google, but also extremely vocally unhappy whenever Mozilla attempts to find other sources.

I haven't seen anyone suggest alternate solutions yet.


I don’t actually mind their money from Google, but why is charging money for a quality product an unfathomable business model? Ads or bust, it seems.

Because pretty much nobody is willing to pay for it.

For context, I recall that for years and years, Firefox was the highest ranked mobile browser. Mozilla invested a lot in mobile, Firefox devs had to rewrite the Android linker, invent new ways of starting binaries on Android, etc. just to make Firefox work (all of which were later used by Chrome for Android).

It still didn't make a dent in mobile browser shares.

Sure, Mozilla could have invested even more in Firefox mobile, but at some point, this would have come at the expense of Firefox desktop, which was the source of ~100% of the funding.


What Firefox was doing 4 months after Android 1.0 GA'd would indeed have been unlikely to make a dent in mobile share compared to the effort going on once Android had a billion users. Why put all of that effort in before something is even widely used, only to then let it rot for years anyway? In the end, they ended up spending the resources to refresh it in 2019 anyway - by which time billions had already decided Firefox was just a battery hog and slow on mobile.

It's a sad story because Firefox was so good on mobile when nobody had a chance to use it, then it was crap when they did. On desktop Firefox is still the #1 non-bundled browser; things went so poorly on mobile that it can't come close to that today. In a parallel universe where the timings were inverted, Firefox might even have had more users on mobile than it does on desktop today.


I feel that, at some point around the Brendan Eich-gate, the Internet decided that Mozilla was always wrong. Change the shape of tabs? We received rape threats. Change it back? Bomb threats. Bring in new APIs for add-ons that make Firefox faster, more secure, more stable, and less prone to breaking all the time? No, we want addon $X, we don't care about security.

I'm not going to claim that everything Mozilla has done is right, but the bad will of the tech crowd is a bit exhausting.

Writing this as a former Firefox contributor.


I never worked on Firefox, and am often critical of Mozilla, but I can second this sentiment. It's seemed like everything Mozilla does makes everyone mad, all the time. It's frustrating.

Also, compared to the scale of harm that Google does and the risk of it de facto controlling the web with the chromium engine, all the things that Mozilla does to piss people off should be small potatoes.

I'm still happy with mozilla. Keep it up!

It's the "vocal minority", right? Sure it's not fun to receive threats but it's a known fact that communicating over the Internet makes people unhinged. Maybe there's stuff to complain about but I am a happy Firefox user for .. what? over two decades! :) so, thanks for that.

If no one is sending you stupid threats online, are you even alive?

As a former Firefox user, I got fed up with the constant change for the sake of change. Why change the tabs? They were fine the way they were. People got mad about the addon situation since it broke their workflows because of vague technical reasons. And Mozilla usually ignored user protests while pointing at telemetry, and did whatever they wanted to, users be damned.

At least that's how it looked from this side. I switched to Vivaldi some 4-5 years ago, and it has looked and worked pretty much the same since I started using it. New features and changes have happened, but I've been able to ignore/disable/hide them without doing CSS brain surgery.

If/when the Google Adblockerblocker changes trickle down to Vivaldi, I may have to crawl back to Firefox, but I dread the prospect.


> And Mozilla usually ignored user protests while pointing at telemetry, and did whatever they wanted to, users be damned.

When I worked on Firefox, most of the changes happened exactly because user research determined that users wanted them and/or that not having them hurt the product. We changed the tabs at least once because users thought that the old shape of tabs made the browser feel slow (true story, sadly). We changed the add-on API (after having warned add-on developers for at least 6 years) because the old API was incompatible with multi-threading, multi-process, sandboxing, which in turn was really bad for both performance and security.

I'll absolutely grant you that Mozilla hasn't been very good at communicating these choices, but again, the sheer hostility of tech crowds is exhausting.


Anyone can hip fire an accusation from the philosopher's chair (potty), and it's like that thing about a falsehood circling the world five times before the truth even gets out of bed.

Against the avalanche of claims that they've "done nothing", it can be tedious to pull out examples of, say, major projects achieving huge performance improvements in WebGPU, but meanwhile it costs nothing to claim Firefox has "done nothing since Quantum", which I've heard claimed in these parts in full sincerity.


> broke their workflows because of vague technical reasons

> I switched to Vivaldi

You refer to important security improvements as "vague technical reasons" and you switched to Vivaldi, a browser that is based[1] on extended stable Chromium, which is not "recommended for any team where security is a primary concern"[2].

It seems you don't care about security.

[1] https://help.vivaldi.com/android/android-privacy/security-fa...

[2] https://chromium.googlesource.com/chromium/src/+/HEAD/docs/p...


You're right - security is not my primary concern.

>security fixes that are relevant to any Chrome Browser platforms will be landed on the extended stable branch

>complex and risky changes [...] may not be viable to backport

Big deal.


I don't know, but I've been using Firefox since forever and I can't even recall the tabs changing at all. Of course they have changed many times over the years, but that happens in every browser. I don't know what happened to the tabs that affected you so badly. I feel like it's sometimes an excuse for some people: if some little thing in the UI changes, they claim their whole flow is now compromised, and that is the reason they are now using this other software, where the same stuff happens as far as I can see.

I'm obviously not just talking about the tabs. And "some little UI thing" can absolutely break your workflow - UI isn't just how things look. Mozilla purged lots of minor features over the years, and the goto excuse was usually "parity with Chrome" or "telemetry".

In the latest version they changed something AGAIN: when you drag a tab too far to the left, it's pinned automatically. Literally nobody asked for it and it makes me so angry, god I hate Mozilla. I only use Firefox because it's the last browser with Manifest V2 (I have a lot of these add-ons), and as an add-on dev they made me even more angry with their double standards regarding their shitty add-on review system.

> Disabling Javascript or even just third party scripts does lead to major breakage, but reporting spoofed values for identifiers like Tor does not. The Arkenfox user.js does all of this and more, but these options are not enabled by default. This shows that Firefox does not care much about privacy in practice.

I suspect that it shows that Firefox developers do a good job at making Firefox work, and this good job enables forks to work.


It's a really hard line to walk.

If you put too much in your Telemetry/crash reports, yeah, users become fingerprintable.

On the other hand, if you return spoofed values, it means that Firefox developers cannot debug platform/hardware-specific crashes. If you disable Telemetry, improving performance becomes impossible, because you're suddenly unable to determine where your users suffer. If you remove WebGL, plenty of websites suddenly stop working, and people assume that Firefox is broken.


> If you put too much in your Telemetry/crash reports, yeah, users become fingerprintable.

It's not only what gets sent to Mozilla as telemetry or crash reports that is a problem. That can be turned off (many Linux distros do so), or firewalled.

The main issue is that websites can more or less accurately identify users uniquely by extracting information that they should not have access to if the browser was designed with privacy in mind.

This includes, but is not limited to, fonts installed, system language, time zone, window size, browser version, hardware information (number of cores, device memory), canvas fingerprint, and many other attributes. When you combine all of that with the originating IP address, you can reliably determine who visited a website, because that information is shared and correlated with services where people identify themselves (Google accounts, Facebook, Amazon, etc.). Even masking your IP may not be enough because typically there is enough information in the other data points to track you already.


All of this is true, but it's a problem of the entire web platform and specs, so if you want to favor untraceability above compatibility, you'll need a dedicated privacy-hardened browser. Firefox aims to be better at privacy, but still respect the web specs.

Sure, but then don't go grandstanding about privacy. You can't have both.

And saying that improving performance is impossible without it is hyperbolic. Developers did that before every major application turned into actual spyware. Profilers still work without it.


Profilers only work once you have identified the problem. Telemetry lets you find out about it in the first place.

Hum.

I have 13-year-old hardware at home running Firefox, with no performance complaints.


Yeah, that is one of my main uses for AI: getting the build stuff and scripts out of the way so that I can focus on the application code. That and brainstorming.

In both cases, it works because I can mostly detect when the output is bullshit. I'm just a little bit scared, though, that it will stop working if I rely too much on it, because I might lose the brain muscles I need to detect said bullshit.


I'm super interested to know how juniors get along. I have dealt with build systems for decades and half the time it's just using Google or Stack Overflow to get past something quickly, or manually troubleshooting deps. Now I automate that entirely. And for code, I know what's good or not; I check its output and have it redo anything that doesn't pass my known standards. It makes using it so much easier. The article is so on point.

You intrigue me.

> have it learn your conventions, pull in best practices

What do you mean by "have it learn your conventions"? Is there a way to somehow automatically extract your conventions and store it within CLAUDE.md?

> For example, we have a custom UI library, and Claude Code has a skill that explains exactly how to use it. Same for how we write Storybooks, how we structure APIs, and basically how we want everything done in our repo. So when it generates code, it already matches our patterns and standards out of the box.

Did you have to develop these skills yourself? How much work was that? Do you have public examples somewhere?


> What do you mean by "have it learn your conventions"?

I'll give you an example: I use ruff to format my Python code, and ruff has an opinionated way of formatting certain things. After an initial formatting, Opus 4.5, without prompting, will write code in this same style, so that the ruff formatter almost never has anything to do on new commits. Sonnet 4.5 is actually pretty good at this too.
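
To make that concrete, here is roughly what the formatter does (made-up function and argument names, purely to illustrate the style, not code from my project):

    # Hypothetical helper and connection, only so the snippet stands alone.
    def fetch_rows(connection, table_name, columns, limit, as_dict):
        return []

    connection = None

    # What I might type: one long line, single quotes.
    result = fetch_rows(connection, table_name='events', columns=['id', 'ts'], limit=500, as_dict=True)

    # What `ruff format` turns it into (and what Opus 4.5 now tends to write
    # directly): double quotes, and because the call exceeds the default
    # 88-column limit, one argument per line with a trailing comma.
    result = fetch_rows(
        connection,
        table_name="events",
        columns=["id", "ts"],
        limit=500,
        as_dict=True,
    )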


Isn't this a meaningless example? Formatters already exist. Generating code that doesn't need to be formatted is exactly the same as generating code and then formatting it.

I care about the norms in my codebase that can't be automatically enforced by machine. How is state managed? How are end-to-end tests written to minimize change detectors? When is it appropriate to log something?


Here's an example:

We have some tests in "GIVEN WHEN THEN" style, and others in other styles. Opus will try to match the testing style of each project by reading adjacent tests.
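
For anyone who hasn't seen the style, here's a minimal pytest-flavored sketch (the domain and the numbers are made up, not from our repo):

    import pytest

    # A made-up function under test, so the example runs on its own.
    def apply_discount(prices: list[float], percent: float) -> float:
        """Sum the prices and apply a percentage discount."""
        return sum(prices) * (1 - percent / 100)

    def test_discount_is_applied_to_cart_total():
        # GIVEN a cart with two items and a 10% coupon
        prices = [20.0, 5.0]
        coupon_percent = 10

        # WHEN the total is computed at checkout
        total = apply_discount(prices, coupon_percent)

        # THEN the total reflects the discount
        assert total == pytest.approx(22.5)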


The one caveat with this is that in messy code bases it will perpetuate bad things, unless you're specific about what you want. Then again, human developers will often do the same and are much harder to force to follow new conventions.

The second part is what I'd also like to have.

But I think it should be doable. You can tell it how YOU want the state to be managed and then have it write a custom "linter" that makes the check deterministic. I haven't tried this myself, but Claude did create some custom clippy scripts in Rust when I wanted to enforce something that isn't automatically enforced by anything out there.
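
Roughly the shape I have in mind (sketched in Python rather than Rust, with made-up module names, so treat it as a sketch rather than something I've actually shipped):

    # Hand-rolled convention check: fail CI if anyone imports the (hypothetical)
    # internal state module instead of going through the approved facade.
    import ast
    import pathlib
    import sys

    FORBIDDEN_PREFIX = "app.state._internal"  # made-up module, for illustration
    APPROVED = "app.state.store"

    def check_file(path: pathlib.Path) -> list[str]:
        """Return one message per forbidden import found in the file."""
        tree = ast.parse(path.read_text(), filename=str(path))
        messages = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            if any(name.startswith(FORBIDDEN_PREFIX) for name in names):
                messages.append(
                    f"{path}:{node.lineno}: imports {FORBIDDEN_PREFIX}; "
                    f"use {APPROVED} instead"
                )
        return messages

    if __name__ == "__main__":
        problems = [m for arg in sys.argv[1:] for m in check_file(pathlib.Path(arg))]
        print("\n".join(problems))
        sys.exit(1 if problems else 0)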


Lints are typically well suited for syntactic properties or some local semantic properties. Almost all interesting challenges in software design and evolution involve nonlocal semantic properties.

Memes write themselves.

"AI has X"

"We have X at home"

"X at home: x"


Since starting to use Opus 4.5, I'm reducing instructions in CLAUDE.md and just asking Claude to look at the codebase to understand the patterns already in use. Going from prompts/docs to having the code be the "truth". Show, don't tell. I've found this pattern has made a huge leap with Opus 4.5.

The Ash framework takes the approach you describe.

From the docs (https://hexdocs.pm/ash/what-is-ash.html):

"Model your application's behavior first, as data, and derive everything else automatically. Ash resources center around actions that represent domain logic."


I feel like I've been doing this since Sonnet 3.5 or Sonnet 4. I'll clone projects/modules/whatever into the working directory and tell Claude to check it out. Voila, now it knows your standards and conventions.

When I ask Claude to do something, it independently, without me even asking or instructing it to, searches the codebase to understand what the convention is.

I’ve even found it searching node_modules to find the API of non-public libraries.


This sounds like it would take a huge amount of tokens. I've never used agents so could you disclose how much you pay for it?

If they're using Opus then it'll be the $100/month Claude Max 5x plan (could be the more expensive 20x plan depending on how intensive their use is). It does consume a lot of tokens, but I've been using the $100/mo plan and get a lot done without hitting limits. It helps to be mindful of context (regularly amending/pruning your CLAUDE.md instructions, clearing context between tasks, sizing your tasks to stay within the Opus context window). Claude Code plans have token limits that work in 5-hour blocks (that start when you send your first token, so it's often useful to prime it as early in the morning as possible).

Claude Code will spawn sub-agents (that often use their cheap Haiku model) for exploration and planning tasks, with only the results imported into the main context.

I've found the best results come from a more interactive collaboration with Claude Code. As long as you describe the problem clearly, it does a good job on small/moderate tasks. I generally give two instances of Claude Code separate tasks and run them concurrently (the interaction with Claude Code distracts me too much to do my own independent coding simultaneously, the way I could when setting a task for a colleague, but I do work on architecture / planning tasks).

The one matter of taste that I have had to compromise on is the sheer amount of code - it likes to write a lot of code. I have a better experience if I sweat the low-level code less, and just periodically have it clean up areas where I think it's written too much / too repetitive code.

As you give it more freedom it's more prone to failure (and can often get itself stuck in a fruitless spiral) - however, as you use it more you get a sense of what it can do independently and what it's likely to choke on. A codebase with good human-designed unit & Playwright tests is very good.

Crucially, you get the best results where your tasks are complex but on the menial side of the spectrum - it can pay attention to a lot of details, but on the whole don't expect it to do great on senior-level tasks.

To give you an idea, in a little over a month "npx ccusage" shows that via my Claude Code 5x sub I've used 5M input tokens, 1.5M output, 121M Cache Create, 1.7B Cache Read. Estimated pay-as-you-go API cost equivalent is $1500 (N.B. for the tail end of December they doubled everybody's API limits, so I was using a lot more tokens on more experimental on-the-fly tool construction work)


FYI Opus is available and pretty usable in Claude Code on the $20/mo plan if you are at all judicious.

I exclusively use Opus for architecture / speccing, and then mostly Sonnet and occasionally Haiku to write the code. If my usage has been light and the code isn't too straightforward, I'll have Opus write code as well.


The problem with current approaches is the lack of feedback loops with independent validators that never lose track of the acceptance criteria. That's the next level that will truly allow no-babysitting implementations that are feature complete and production grade. Check out this repo that offers that: https://github.com/covibes/zeroshot/

That's helpful to know, thanks! I gave Max 5x a go and didn't look back. My suspicion is that Opus 4.5 is subsidised, so good to know there's flexibility if prices go up.

The $20 plan for CC is good enough for 10-20 minutes of Opus every 5h and you’ll be out of your weekly limit after 4-5 days if you sleep during the night. I wouldn’t be surprised if Anthropic actually makes a profit here. (Yeah probably not, but they aren’t burning cash.)

I use the $200/month Claude Code plan, and in the last week I've had it generate about half a million words of documentation without hitting any session limits.

I have hit the weekly limit before, briefly, but that took running multiple sessions in parallel continuously for many days.


Just ask it to.

/init in Claude Code already automatically extracts a bunch, but for something more comprehensive, just tell it which additional types of things you want it to look for and document.

> Did you have to develop these skills yourself? How much work was that? Do you have public examples somewhere?

I don't know about the person above, but I tell Claude to write all my skills and agents for me. With some caveats, you can do this iteratively in a single session ("update the X agent, then re-run it. Repeat until it reliably does Y")


"Claude, clone this repo https://github.com/repo, review the coding conventions, check out any markdown or readme files. This is an example of coding conventions we want to use on this project"

Of course, any attempt at safety or security requires defense in depth.

But usually, any effort spent on making one layer sturdy is worth it.


The answer to that would very much be: "it depends".

Yes, of course, network I/O > local I/O > most things you'll do on your CPU. But regardless, the answer is always to measure performance (through benchmarking or telemetry), find your bottlenecks, then act upon them.

I recall a case in Firefox in which we were bitten by an O(n^2) algorithm running at startup, where n was the number of tabs to restore; another in which several threads were fighting each other to load components of Firefox and ended up hammering the I/O subsystem; but also cases of an executable that was too large, data not fitting in the CPU cache, Windows requiring a disk access to normalize paths, etc.
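
To give a feel for the first one, here's a toy reconstruction in Python (not the actual session-restore code): the quadratic version looks perfectly innocent until someone restores a few thousand tabs.

    # Deduplicating tabs with a linear scan over the list built so far is
    # O(n^2); the same loop with a set for membership tests is O(n).
    def restore_tabs_quadratic(tab_ids: list[str]) -> list[str]:
        restored: list[str] = []
        for tab_id in tab_ids:
            if tab_id not in restored:  # scans the whole list on every iteration
                restored.append(tab_id)
        return restored

    def restore_tabs_linear(tab_ids: list[str]) -> list[str]:
        seen: set[str] = set()
        restored: list[str] = []
        for tab_id in tab_ids:
            if tab_id not in seen:  # constant-time membership test
                seen.add(tab_id)
                restored.append(tab_id)
        return restored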


Sure, I will admit I was a bit hyperbolic here.

Obviously sometimes you need to do a CPU optimization, and I certainly do not think you should ignore big O for anything.

It just feels like 90+% of the time my “optimizing” boils down to figuring out how to batch a SQL query or cut a call to Redis or something.
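
The usual shape of that, sketched with a throwaway sqlite schema rather than my real code:

    # N+1 versus batched queries, using sqlite3 from the standard library so
    # the example runs as-is. Table and data are made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
        "INSERT INTO users VALUES (1, 'ada'), (2, 'bob'), (3, 'eve');"
    )
    user_ids = [1, 2, 3]

    # The slow shape: one round trip per id (fine locally, painful over a network).
    names_slow = [
        conn.execute("SELECT name FROM users WHERE id = ?", (uid,)).fetchone()[0]
        for uid in user_ids
    ]

    # The batched shape: a single query for the whole set of ids.
    placeholders = ",".join("?" for _ in user_ids)
    rows = conn.execute(
        f"SELECT name FROM users WHERE id IN ({placeholders})", user_ids
    )
    names_fast = [name for (name,) in rows]

    assert sorted(names_slow) == sorted(names_fast)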

