
I think they have already realized this, which is why they are moving towards tool use instead of text generation. Also explains why there are no more free APIs nowadays (even for search)

Cline + RooCode and VSCode already work really well with local models like qwen3-coder or even the latest gpt-oss. It is not as plug-and-play as Claude, but it gets you to a point where you only have to do the last 5% of the work.

What are you working on that you’ve had such great success with gpt-oss?

I didn’t try it long because I got frustrated waiting for it to spit out wrong answers.

But I’m open to trying again.


I use it to build some side-projects, mostly apps for mobile devices. It is really good with Swift for some reason.

I also use it to start off MVP projects that involve both frontend and API development, but you have to be super verbose, unlike when using Claude. The context window is also small, so you need to know how to break the work up into parts that you can put together on your own.


> What are you working on that you’ve had such great success with gpt-oss?

I'm programming on and off with GPT-OSS-120B (though mostly I use Codex with hosted models), and with reasoning_effort set to high it gets things right maybe 95% of the time; it rarely gets anything wrong.


This has been an issue with university rankings for a while. A lot of top US universities engage in this practice too - it's anecdotal, but I've heard of a lot of professors forcing students to add or remove citations, or even adding their own names to the list of people who worked on a paper, to help the numbers for their university.

It would be good to see what the criteria are for deciding whether a journal is "to be taken seriously". I imagine, for example, that Chinese- or Arabic-language journals would be published and cited in journals of those languages. That doesn't necessarily mean they aren't to be taken seriously in the field, just that they aren't Western publications.


Regarding inflating the number of authors, it is especially bad in medicine, where I've observed a lot of names being added to papers for "political" reasons, despite the "author" playing no role in the paper.

Some journals now require an "Author Contributions" section to at least partially address this issue.


The worst part of this in the biomedical field is the conferences. That's because sometimes you get toddlers with an advanced degree and a chair position picking the conference presenters, and they will unilaterally reject or accept people based on whether they like them or see them as a competitor - even within the same department, with no regard for what the poster or talk might be. At least with journals you have an editor who can sometimes mediate a hotheaded reviewer dispute in a level-headed manner.


I've seen it go in the other direction too. Groups deliberately not citing other competing groups because it might help them. It's like the other groups don't exist.


I've observed this in multiple AI niches. In some cases I've emailed people to say they had ignored very similar work and failed to cite it, and at least some of them were apologetic and said they would update the arXiv version. Even then, they only follow through about 50% of the time. It kind of tells you that the reviewers at top AI conferences themselves aren't that familiar with the breadth of the literature.


To be fair, there are so many publications in fields such as AI that it's really hard to stay on top of things, sometimes even within your own specific subarea. I'm not saying that to give reviewers a total pass, but I think it's reasonable that sometimes a group of reviewers might miss a relevant paper.


Yes, there are just too many publications, even with a very narrow focus. I am reminded of this article: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/


Of course they aren’t familiar with the literature. Most of the papers rediscover existing math and physics in a worse way (harmonic analysis, etc.).


thanks for this. i haven't even fully gone through quake 3's source yet.


I'm still on Doom :/


I'm not surprised - Ive deserves it. The designs he came up with were the driving factor in Apple's success.


that's the life i want to live. just make sure not to forget to pay your taxes


Agreed. Any bug report from a user is valuable - the fact that he was frustrated enough to report it means that there is a problem that needs to be addressed.


Almost everyone files a bug report in frustration because you're filing an issue about something that doesn't work the way you think it should. But most people are considerably more civil about it.

I really don't like to play the "free" card. Too often developers say "well, it's open source, so fix it or stop complaining." I don't think that's right at all... especially if you've advocated that people use your software.

However, having been involved with a lot of OSS projects, I can say that the bugs that get fixed the fastest are the ones that affect the developers themselves. Anything that gets fixed above and beyond that is a developer being a nice person and volunteering his time to help someone else. More to the point: it's hard to get people to work for free when they're demoralized, and bug reports like this are demoralizing. If I were involved with the project, he's about the absolute last person I'd try to please, no matter how right he may be.

So, while I'm neutral on the ban, the bug reporter has to realize this is a very ineffective way to win friends and get his issues resolved.


That's not strictly true. If you've already got reports of the issue from other, more helpful users, then someone coming in calling you names and threatening to physically harm you isn't necessarily adding value.

We're very fortunate at Mozilla to have tens of thousands of bug reporters who aren't bullies and who don't think that because they wrote some GNU code once they can threaten you and deride you. This means that the cost of kicking out the bullies is either trivial or a net win.


I agree with you for the most part. C/C++ lets you do what you want, presumably because you know what you're doing. But I don't agree that pointers are evil, and should be avoided. If you want to do any practical programming with C/C++ at all, you have to learn how to dynamically allocate memory and use pointers. Generally, pointers work as advertised. It's the cases where they work when they shouldn't that are the problem - and in those cases a compiler usually warns you, so you are still covered. So in C/C++, just because your code works doesn't mean it's right.
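
To make the "works when it shouldn't" case concrete, here's a minimal C++ sketch (a made-up illustration of mine): the compiler typically warns about the dangling pointer, yet the program may still appear to run fine:

    #include <iostream>

    // A pointer that "works when it shouldn't": the address of a local
    // variable escapes the function, but the variable dies on return.
    // g++ and clang both warn about this at compile time.
    int* dangling() {
        int local = 42;
        return &local;  // warning: address of local variable returned
    }

    int main() {
        int* p = dangling();
        // Undefined behavior: might print 42, might print garbage, might crash.
        std::cout << *p << '\n';
        return 0;
    }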


> But I don't agree that pointers are evil, and should be avoided.

No one is saying that at all. Pointers aren't evil, but they are dangerous.

Perhaps a better analogy is a sharp chef's knife. In the right hands it's an effective and efficient tool that lets you do things quicker and just as safely as any other tool.

In the wrong hands it is dangerous to the person using it and to those around them.

There are numerous other examples: welding torches, motorbikes, explosives etc etc.


Exactly. I didn't mean to imply that pointers are evil or should be avoided. That was supposed to be the point of my dynamite analogy, but I guess the comment made elsewhere on this topic about the inadequacies of analogies holds true here.

So, for the avoidance of doubt, I believe: pointers are awesome, powerful tools and you can do some great things in C/C++ using them and I sometimes miss them (a little bit) when using other languages. But you can also do some terrible things with them - and I have done some spectacularly bad things with them in the past. But that doesn't mean they are bad - it just means that I am reckless.


^ Ditto. I thought pointers were being shown in a negative light at first, but now I see the point. In my opinion, manual memory management is a very important part of developing highly scalable applications, but it should only be done if absolutely needed - which is fortunately rare nowadays, as most people just develop for the Web and can afford to throw money at the problem. But for embedded systems, where resources and inputs are very limited, it is still very useful. In the early years it was C plus inline assembly for further optimization, where you tried your best to avoid the assembly. I guess now it's a dynamic/interpreted language plus C/C++ for further optimization, where you avoid the C/C++ like the plague as much as possible.
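
Since "manual" is doing a lot of work in that sentence, here's a toy sketch (illustrative only - the names and sizes are made up) of the kind of allocator you often see on embedded targets: a fixed, statically allocated block pool instead of malloc/new, so memory use is known up front and there's no fragmentation:

    #include <cstddef>
    #include <cstdint>

    // Toy fixed-size block pool: all storage is reserved statically, so
    // memory use is known up front and the heap is never touched.
    template <std::size_t BlockSize, std::size_t BlockCount>
    class BlockPool {
        static_assert(BlockSize >= sizeof(void*), "block must hold a free-list link");
        static_assert(BlockSize % alignof(void*) == 0, "blocks must stay pointer-aligned");

        alignas(std::max_align_t) std::uint8_t storage_[BlockSize * BlockCount];
        void* free_list_ = nullptr;

    public:
        BlockPool() {
            // Thread every block onto a singly linked free list.
            for (std::size_t i = 0; i < BlockCount; ++i) {
                void* block = storage_ + i * BlockSize;
                *static_cast<void**>(block) = free_list_;
                free_list_ = block;
            }
        }
        void* allocate() {
            if (free_list_ == nullptr) return nullptr;  // pool exhausted, no fallback
            void* block = free_list_;
            free_list_ = *static_cast<void**>(block);
            return block;
        }
        void release(void* block) {
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    };

    int main() {
        BlockPool<32, 8> pool;  // 8 blocks of 32 bytes, fixed at compile time
        void* a = pool.allocate();
        void* b = pool.allocate();
        pool.release(b);
        pool.release(a);
        return 0;
    }

(Strictly speaking the free-list trick skates past C++ object-lifetime rules, but it's the conventional embedded idiom.)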


> C/C++ lets you do what you want, presumably because you know what you're doing.

Frankly, C++ is kinda schizophrenic about it. First, it is really anal about class membership, to the extent that they had to introduce the friend qualifier to get around its impracticality. And then you get this.
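
For anyone who hasn't bumped into it, a minimal sketch of what friend does (the names are made up for illustration): the class locks everything down, then explicitly punches a hole for one outside function:

    #include <iostream>

    class Account {
        double balance_;  // private: the usual class-membership rules apply

    public:
        explicit Account(double b) : balance_(b) {}

        // 'friend' grants exactly this one outside function access
        // to the private members.
        friend void audit(const Account& a);
    };

    void audit(const Account& a) {
        // Legal only because audit() was declared a friend above.
        std::cout << "balance: " << a.balance_ << '\n';
    }

    int main() {
        Account acct(100.0);
        audit(acct);
        return 0;
    }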


I think it isn't about the engineers' skill, but more about how they work together as a team. 5 superstar pricks will get you nowhere. 1 superstar plus 4 average engineers who work well and take direction, and you have a team. Even with just 5 average engineers, as long as they get the job done and take instructions well, you're better off. The problem with superstars is that they usually have huge heads and would rather prove a point than make things work. A superstar with a great work ethic who is also a team player - now that is worth the money.

