morsecodist's comments | Hacker News

I would never ask a question on Stack Overflow, because half the time it seemed the question would be flagged as a dupe or closed for some other reason, and each closure brought you closer to being disallowed from asking at all. I actually answered a good number of Stack Overflow questions to get a higher score, but the overzealous question shutdowns had a real chilling effect.

One thing that isn't talked about enough is the impact aggressive moderation had on people answering too.

If you were in the New queue and found a question you could answer, by the time you posted your answer the question itself may have been nuked by mods, so your answer (and the effort behind it) was never seen by many.


Oh, man. That was kind of the end of the line for me, too. I’d get roped into conversations trying to defend the question, which wasn’t even mine, because I thought it was novel and interesting enough to be worth answering in the first place. And then I asked myself what I was doing getting suckered into these talks. I don’t need that kind of tarpit.

Yes. The problem is that the model was questions open by default, so you had exposure to the question before it could be properly considered for inclusion. The Staging Ground fixed this, but too little (it only applied to a small random sampling of questions) and way too late.

It's considered part of your responsibility, as someone answering questions, to understand the standards for closing questions (https://meta.stackoverflow.com/questions/417476) and the motivations behind those standards, and to skip over (better yet, flag or vote to close) questions that don't meet them (https://meta.stackoverflow.com/questions/429808).

You complain, but actually the deck is heavily stacked in your favour: there is a 5-minute grace period on answers, and you can submit an answer entirely on your own, regardless of your reputation score, while typical closures (i.e. not duplicates, and not questions flagged and then seen by someone on the very small moderation team) require three high-rep users (it used to be five) to agree.

However, the question was not "nuked": the OP gets at least 9 days to fix it and submit for reconsideration before the system deletes it automatically (unless it's so bad that multiple even higher rep users take even further consensus action, on the belief that it fundamentally can't be fixed: see https://meta.stackoverflow.com/questions/426214/when-is-it-a...).

And this overwhelmingly was not done "by mods". It's done by people who acquired significant reputation (of course, this also generally describes the mods), typically by answering many questions.


I see this advice a lot in various forms. I think people are probably too conflict-averse on average, so there is some merit to it, but there are limits. I feel like there have been a lot of times in my life where just moving on or being diplomatic was the right call.

The manager example is a good case study. There are a lot of examples here where there might be genuine repercussions for raising an issue with a manager. I wouldn't give this as blanket advice.

Unfortunately, I don't think there's a simple rule about whether or not you should raise an issue and it needs to be decided case by case.


This is such an interesting comment thread because people have such wildly different opinions and from my perspective the entire disagreement just comes from company size.

I am a "CTO" and I always put that in air quotes because I have one direct report and I spend the lion's share of my time doing IC work. I know what I do is not what people picture when they hear the title and I feel weird saying it. I use it because I do have to make the strategic technical decisions, there is no one else. When people are marketing technical B2B SaaS I am the one they are looking for.

From my perspective there just isn't nearly enough for me to do as a CTO to justify me not coding. If I were to hire someone just to manage them that would be an unjustifiable expense at this point. But I also get that as soon as we get to a reasonable size this would be totally unsustainable.


This sounds like me as well. We are a small dev team of 6 (in a company of 30), and I also have a partial ownership stake in the company. Even though I spend a significant part of my time on "CTO"-style work (client meetings, market assessments, product overviews, roadmap planning, third-party collaboration, etc.), there isn't nearly enough of that to fill my time or justify my salary. I code and review like my team does, but I also oversee technical direction for our whole portfolio, and the responsibility for that technical success or failure rests on me. As we grow, the coding will decrease, I'm sure, but I see a lot of people here criticizing from the perspective of larger companies where CTO would be a full-time responsibility. In our situation the title (as much as I often dislike it) represents my level of responsibility, if not directly the full scope of my role.


I am pretty skeptical of how useful "memory" is for these models. I often need to start over with fresh context to get LLMs out of a rut. Depending on what I am working on, I often find ChatGPT's memory system has made answers worse, because it sometimes assumes tasks are related when they aren't. I have not really gotten much value out of it.

I am even more skeptical on a conceptual level. The LLM memories aren't constructing a self-consistent, up-to-date model of facts. They seem to remember snippets from your chats, but even a perfect AI may not be able to get enough context from your chats to make useful memories. Things you talk about may be unrelated, or they go stale. You might not know which memories your answers are drawing on, and if you had to manage that manually, it would kind of defeat the purpose of memories in the first place.


That is my experience as well. This memory feature strikes me as beneficial for Anthropic but not for end users.


> When just a few years ago, having AI do these things was complete science fiction!

This is only because these projects became consumer-facing fairly recently. There was a lot of incremental progress in the academic language model space leading up to this. It wasn't as sudden as this makes it sound.

The deeper issue is that this future-looking analysis goes no deeper than drawing a line connecting a few points. COVID is a really interesting comparison, because in epidemiology the exponential model comes from our understanding of disease transmission. It is also not actually exponential: as the population becomes saturated, the transmission rate slows (it is worth noting that unbounded exponential growth doesn't really seem to exist in nature). Drawing an exponential line like this doesn't really add anything interesting. When you do a regression, you need to pick the model that best represents your system.
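
To make that concrete, here is a toy sketch (in Go, with made-up parameter values, not fitted to anything) of logistic growth: it looks exponential early on and then flattens as it approaches the carrying capacity.

    package main

    import "fmt"

    func main() {
        // logistic growth: dN/dt = r*N*(1 - N/K)
        // while N << K this is approximately exponential (dN/dt ~ r*N);
        // as N approaches the carrying capacity K, growth slows to zero
        N, r, K := 1.0, 0.5, 1000.0
        for t := 0; t <= 30; t++ {
            fmt.Printf("t=%2d  N=%7.1f\n", t, N)
            N += r * N * (1 - N/K) // simple Euler step with dt = 1
        }
    }

A regression that only ever fits the early, exponential-looking part of a curve like this will badly overshoot.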

This is made even worse because the analysis relies on benchmarks, and coming up with good benchmarks is itself an important part of the AI problem. AI is really good at improving things we can measure, so it makes total sense that it will eventually crush any benchmark we throw at it, but there will always be some difference between benchmarks and reality. I would argue that as you try to benchmark more subtle things, it becomes much harder to make a benchmark. This is just a conjecture on my end, but if something like this is possible, you need to rule it out when modeling AI progress.

There are also economic incentives to keep declaring percentage gains in progress on a regular schedule.

Will AI ever get this advanced? Maybe, maybe even as fast as the author says, but this just isn't a compelling case for it.


Any physical process can be interpreted as computation. Computation is in the eye of the beholder. Interpreting life as computation doesn't really add anything new; we are just describing a model that we came up with.


In general, I think the dependency hate is overblown. People hear about problems with dependencies because dependencies are usually open source code used by a lot of people, so the problems are public and relevant. You don't hear as much about problems in the random code of one particular company unless it ends up in a high-profile leak. For example, something like the Heartbleed bug was a huge deal and got a lot of press, but imagine how much trouble we would be in if everyone was implementing their own SSL. Programmers often don't follow best practices when they do things on their own. That is how you end up with things like SQL injection attacks in 2025.

Dependencies do suck, but that is because managing a lot of complicated code sucks. You need some way to find issues over time and keep things up to date. Dependencies and package managers at least offer us a path to deal with problems. If you are managing your own dependencies, which I imagine would mean vendoring, then you aren't going to keep those dependencies up to date, and you aren't going to find out about exploits in them and apply the fixes.
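
For what it's worth, modern package managers make that upkeep concrete. In Go, for example, a typical audit is a couple of commands (a sketch, assuming a module-based project and that you have installed govulncheck from golang.org/x/vuln):

    go list -m -u all    # list dependencies along with available updates
    govulncheck ./...    # report known vulnerabilities reachable from your code

Doing the equivalent by hand against a directory of vendored snapshots is exactly the work that tends not to happen.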


> imagine how much trouble we would be in if everyone was implementing their own SSL.

No, the alternative is to imagine how much trouble we would be in if every project pulled in 5 different SSL libraries. Having one that everybody uses and that is already installed on everyone's system is avoiding dependency hell. Even better if it's in the stdlib.


I am extremely skeptical of this mathematical-model-to-predict-history thing. There's just not enough history to do it, and you bake in your biases when you go through the qualitative historical record and try to map it to quantities. A lot of people analyze history, claim they figured it out, and come to different conclusions, and none of them have made reliable, specific predictions. If you say something bad will happen at some point in the future, you'll probably be right, but that's not enough to call it science.


Nevermind the lack of data - what even would be the limits of knowledge in such a model? If it was widely believed that society will collapse at some point in the next 30 years, how would human behavior change in response? How would that affect the original prediction?


If only someone would devise a Foundation to look into this


A few points for clarification:

- It's a probabilistic model, so it only predicts the odds of a collapse

- Their main contribution was the creation and curation of a super detailed historical database, Seshat. It spans almost 10,000 years of human history, with more than 400 polities from 30 regions around the world, described by over 1,500 variables. Based on this data, Turchin et al. devised the mathematical model for the prediction.

- One key technique is finding surrogate data when direct measures are not available. For example, body size can be used as a proxy for the nutrition and economic situation of a population.

- In 2010, Nature asked experts and super-forecasters for their predictions for 2020. Only Turchin predicted the coming collapse of America.


Elite overproduction is an interesting topic and, putting aside any suggestion that it's a precise mathematical predictor, it obviously creates societal problems.

That is, you've created a large class of intelligent achievers with nothing for them to do. Arguably that naturally produces increasing societal upheaval. Whether that means revolution or just chaotic, increasingly populist elections is a matter of degree.


There is always something for a large class of intelligent achievers to do. The failure to put them to work is more of a societal failure than it is an indictment of the education system. (Maybe AI will change this, but only in the same way that it changes every part of our societal model.)


> There is always something for a large class of intelligent achievers to do. The failure to put them to work is more of a societal failure than it is an indictment of the education system.

This doesn’t quite resonate with me, because I’ve lived through it and seen it happen over and over again even in the most functional of societies.

Oversimplifying a bit, let’s call intelligent achievers elites. There is often a mismatch between elite supply and elite slots, and by definition elite slots are scarce — no matter how well your society is functioning.

Elite slots scale with the maturity and breadth of the economy. The U.S., with its size and diversity, has a much larger pool of elite slots than most countries. That’s one reason I moved here.

By contrast, in Canada (a country I love deeply), most Ph.D.s end up underemployed or they leave, because their skills simply aren’t needed at the level of specialization they were trained for. Some jobs only make sense when you have enough scale to support them — and without that scale, those elite positions just don’t exist.

Can intelligent achievers pivot to something else, like entrepreneurship? Sure, but in a smaller economy, the options are much more limited, even if they do a startup and invent new categories. They can also accept underemployment. There are inherent constraints in an economy due to natural factors like scale, geography, etc.

(My understanding is that Taiwan is in this situation -- highly educated people, limited industries that can employ them. Some move abroad, but many just curb their ambitions and try to get by with low pay and accept their lot in life, striving only for "little joys" they can afford like bubble tea and inexpensive street food)


AI seems poised to create more underemployment rather than fix the existing level of it…


Can you name some examples? Virtually every major revolution or civil war I can think of involved intelligent achievers who had already made it. In fact, the core of the rebellion was typically a class that's vital for the exercise of political power but isn't allowed access to that same power.

English gentry, New England merchants, nobles of the robe, army officers, etc.

Only the Russian revolution involved people who were nobodies before it, but they took charge after the disaffected elites who came to power in February spent most of 1917 undermining each other.


Even the Russian Revolution was led by elites:

- Kerensky was a lawyer

- Lvov was an aristocrat

- Lenin and Trotsky were highly educated and known for intellectual brilliance


The core of the Russian revolution were highly educated nerds who would cancel their friends over slight differences in understanding of obscure socioeconomic theories.


I find the way people talk about Go super weird. If you have criticisms, people almost always respond that the language is just "fine" and kind of shame you for wanting more. People say Go is simpler, but having to write a for loop to get the list of keys of a map is not simpler.


I agree with your point, but you'll have to update your example of something Go can't do.

> having to write a for loop to get the list of keys of a map

We now have the stdlib "maps" package, so you can do:

   keys := slices.Collect(maps.Keys(someMap))
With the wonder of generics, it's finally possible to implement that.
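
For the curious, an eager version of that helper is only a few lines with generics (a sketch mirroring the golang.org/x/exp/maps.Keys signature):

    // collect the keys of any map type into a slice
    func Keys[M ~map[K]V, K comparable, V any](m M) []K {
        keys := make([]K, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        return keys
    }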

Now if only Go was consistent about methods vs functions, maybe then we could have `keys := someMap.Keys()` instead of a weird mix like `http.Request.Header.Set("key", "value")` but `map["key"] = "value"`.

Or `close(ch)` for a channel but `file.Close()` for a file, etc.


I haven't used Go since 2024, but I was going to say something similar; it seems like I was pretty happy doing all my functional-style coding in Go. The problem for me was that the client didn't want us to use it. We were given the choice between Java (ugh) and Python to build APIs. We chose Python because I cross my arms, bite my lip, and refuse to write any more Java in these days of containers as the portability layer. I never really liked Java, or maybe I never really liked the kinds of jobs you get using Java? <-- that


Fair; I stopped using Go pre-generics, so I am pretty out of date. I just remember having this conversation about generics, and at the time there was a large anti-generics group. Is it a lot better with generics? I was worried that a lot of the library code was already written pre-generics.


The generics are a weak mimicry of what generics could be, almost as if to say "there, we did it" without actually making the language that much more expressive.

For example, you're not allowed to write the following:

    type Option[T any] struct { t *T }

    func (o *Option[T]) Map[U any](f func(T) U) *Option[U] { ... }
That fails because methods can't have type parameters, only structs and functions. It hurts the ergonomics of generics quite a bit.
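
The usual workaround is to hoist the method into a package-level function, which works but reads much worse (a sketch against the Option type above, with a hypothetical name):

    // MapOption stands in for the Map method Go won't let us write,
    // since only functions and types can introduce type parameters
    func MapOption[T, U any](o *Option[T], f func(T) U) *Option[U] {
        if o == nil || o.t == nil {
            return &Option[U]{}
        }
        u := f(*o.t)
        return &Option[U]{t: &u}
    }

So instead of `o.Map(f).Map(g)` you end up with `MapOption(MapOption(o, f), g)`.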

And, as you rightly point out, the stdlib is largely pre-generics, so now there are a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Pointer" and "atomic.Value", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.

The old non-generic versions also typically aren't deprecated, so they're just there to trap people who don't know "no, never use atomic.Value, always use atomic.Pointer".


> Now if only Go was consistent about methods vs functions

This also hurts discoverability. `slices`, `maps`, `iter`, and `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just write `items.sort().map(foo)`, guided and made discoverable by auto-completion.
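
For example (a self-contained sketch, assuming Go 1.23+ for `slices.Values`), even a simple sort-then-map requires you to already know those package names:

    package main

    import (
        "fmt"
        "slices"
        "strings"
    )

    func main() {
        items := []string{"banana", "cherry", "apple"}
        foo := strings.ToUpper
        // instead of items.sort().map(foo), you must know the slices
        // package exists and wire the steps together by hand
        sorted := slices.Sorted(slices.Values(items))
        mapped := make([]string, 0, len(sorted))
        for _, v := range sorted {
            mapped = append(mapped, foo(v))
        }
        fmt.Println(mapped) // [APPLE BANANA CHERRY]
    }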


> Now if only Go was consistent about methods vs functions

Generics can only go on functions, not methods, because of Go's type system. So don't hold your breath; changing this would be a breaking change.


Ooh! Or remember when a bunch of people acted like they had ascended to heaven for looking down on syntax highlighting because Rob said something about it being a distraction? Or the swarms blasting me for insisting GOPATH was a nightmare that could only be born of Google's hubris (literally at the same time that `godep` was a thing and Kubernetes was spending significant effort just fucking dealing with GOPATH)?

Happy to not be in that community, happy to not have to write (or read) Go these days.

And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).


This just makes it even more frustrating to me. Everything good about Go is more about the tooling and ecosystem; the language itself is not very good. I wish this effort had been put into a better language.


Go has transparent async IO and a very nice M:N threading model that makes writing HTTP servers on top of epoll very simple and efficient.

The ergonomics for this use case are better than in any language I have ever used.
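
For anyone who hasn't used it, a minimal sketch of what that looks like (standard net/http only, hypothetical handler): the handler is plain blocking code, and the runtime does the scheduling for you.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // each request is served on its own goroutine; the runtime
        // multiplexes goroutines onto OS threads and parks blocked
        // ones on epoll/kqueue, so blocking-style code scales
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }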


Implementing HTTP servers isn’t exactly a common use case in software development, though.


Sorry, I didn't mean implementing a raw HTTP server like nginx, just writing a backend.


> I wish this effort had been put into a better language.

But that effort is being put in. Read newsletters like "The Go Blog" and "Go Weekly"; the language has been improving constantly. Language changes take a lot of time to get right, but the language is evolving.

