
That makes sense when you depend on a shared library. However, if service A depends on endpoint x in service B, then you still have to work out synchronized deployments (or have developers handle this by making multiple separate deployments).

To be fair, this problem is not solved at all by monorepos. Basically, only careful use of gRPC (and similar technology) can help solve this… and it doesn’t really solve for application layer semantics, merely wire protocol compatibility. I’m not aware of any general comprehensive and easy solution.


> However, if service A depends on endpoint x in service B, then you still have to work out synchronized deployments (or have developers handle this by making multiple separate deployments).

In a polyrepo environment, either:

- B updates their endpoint in a backward compatible fashion, making sure older stuff still works

OR

- B releases a new version of their API at /api/2.0 but keeps /api/1.0 active and working until nothing depends on it anymore, sending deprecation notices to the devs of anything still on 1.0 (sketched below)
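A minimal sketch of that second option, assuming a Flask-style service in Python (the route paths, payload shapes, and deprecation headers here are purely illustrative, not anyone's actual API):

```python
# Hypothetical sketch: service B serves /api/2.0 while keeping /api/1.0 alive.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/1.0/users/<int:user_id>")
def get_user_v1(user_id):
    # Old response shape that service A still depends on.
    resp = jsonify({"id": user_id, "name": "Ada Lovelace"})
    # Nudge remaining consumers toward 2.0 without breaking them.
    resp.headers["Deprecation"] = "true"
    resp.headers["Sunset"] = "Wed, 31 Dec 2025 23:59:59 GMT"
    return resp

@app.route("/api/2.0/users/<int:user_id>")
def get_user_v2(user_id):
    # New response shape; only consumers that have migrated call this.
    return jsonify({"id": user_id, "name": {"given": "Ada", "family": "Lovelace"}})

if __name__ == "__main__":
    app.run()
```

Once telemetry shows nothing is hitting /api/1.0 anymore, the v1 handler can be deleted without any coordinated deployment.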


Right, so all of that is independent of mono vs poly repo.


I’m curious about the author’s experience with a monorepo for marketing. I’ve found that using static site generators with nontechnical PMs resulted in dissatisfaction and shifted work onto engineers that those PMs could have handled independently in WordPress/Contentful. As a huge believer in monorepos, I’d love to hear how folks have approached incorporating non-engineers into monorepo workflows.


Did you use Turbo, Buck, or Bazel? Without monorepo tooling (and the blood, sweat, and tears it takes to hone it for your use cases), you start hitting all kinds of scaling limits in CI.


We had Python scripts that generated GitLab CI/CD YAML [1]. Tooting my own horn here, but it was super cool and let us ship fairly fast for the first year or so. By the end, we had something like 5 MB of YAML, and for the GitLab SaaS backend to process it, it took something like 32 GB of RAM on their MergeRequestProcessor Sidekiq worker.

They had to open a whole epic just to reduce the memory usage, and I think all that work merely let us keep using GitLab as our number of services grew. They recommended we use something called parent/child pipelines, but that would have been a fairly large rewrite of our logic.
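For anyone curious what "Python scripts that generate GitLab CI YAML" looks like in miniature, here's a made-up sketch (the service list, job fields, and file names are placeholders, not our actual setup):

```python
# Hypothetical sketch: emit one test job per service into a single CI YAML file.
import yaml

SERVICES = ["auth", "billing", "search"]  # in reality, discovered from the repo layout

def job_for(service: str) -> dict:
    return {
        "stage": "test",
        "image": "python:3.11",
        "script": [
            f"cd services/{service}",
            "pip install -r requirements.txt",
            "pytest",
        ],
        # Only run the job when files under this service change.
        "rules": [{"changes": [f"services/{service}/**/*"]}],
    }

pipeline = {"stages": ["test"]}
for svc in SERVICES:
    pipeline[f"test:{svc}"] = job_for(svc)

with open("generated-ci.yml", "w") as f:
    yaml.safe_dump(pipeline, f, sort_keys=False)
```

Roughly speaking, the parent/child approach GitLab suggested would have split this into one small generated file per service, each triggered as its own child pipeline, which is why adopting it would have meant rewriting most of our generation logic.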

[1]: https://docs.gitlab.com/ci/yaml/


Agree about the example! I can't tell if this article is tongue-in-cheek or earnest. I'm unclear on the point the author is trying to make.


The author explains it in the first sentence: it may not be the syntax of lifetimes that is your problem, but the feature itself.


My reading is: people use Rust because it’s fast but then they complain about the semantics that make it fast.

In other words, be careful what you wish for.

Most people would probably be better served by a language that was a tiny bit slower but had better developer productivity. However, once you deviate from the goal of “as fast as possible”, you have to choose where to sacrifice speed for productivity. Like Excel, everybody agrees that Rust is too complicated, but nobody can agree on which 10% to remove.


> people use Rust because it’s fast but then they complain about the semantics that make it fast.

I don't think most people use Rust because it's fast - fast is nice but Rust is being thrown at a bunch of use cases (e.g. backend services and APIs) for which it replaces "slower" garbage collected languages (the language being faster doesn't always make the overall product/service faster but that's a separate question).

What Rust gives you is a viable potential alternative to C and C++ in places where you absolutely can't have a GC language, and that's a huge deal, the problems and confusion start when people try to use Rust for everything.

> everybody agrees that Rust is too complicated

I don't think this is true either - a large part of the Rust community seem to think that it's as complicated as it needs to be. As a beginner/outsider, I found it kind of cumbersome to get started with, but that's certainly not everyone's opinion.

> Most people would probably be better served by a language that was a tiny bit slower but had better developer productivity.

True, and such languages already exist and are widely used, Rust doesn't need to fit that use case.


> I don't think this is true either - a large part of the Rust community seem to think that it's as complicated as it needs to be. As a beginner/outsider, I found it kind of cumbersome to get started with, but that's certainly not everyone's opinion

With any language there’s an active part of the community and then there’s the “dark matter” of people who use the language but are not actively involved in shaping its direction, forums or subreddits, etc.

Of course the people who are actively involved are likely to be of the opinion that all the complexity is necessary, but I doubt that applies to the broader Rust userbase.


> I don't think this is true either - a large part of the Rust community seem to think that it's as complicated as it needs to be. As a beginner/outsider, I found it kind of cumbersome to get started with, but that's certainly not everyone's opinion.

Personally I feel it's not complicated enough. Where is my function overloading, variadic templates and usable compile time reflection? (Sure you can sometimes use macros but ew macros)


> Personally I feel it's not complicated enough. Where is my function overloading, variadic templates and usable compile time reflection? (Sure you can sometimes use macros but ew macros)

Indeed. Rust is really crying out for a real CTFE implementation + richer macros to replace the mess that is procmacros (I really don't want to have to run an arbitrary external binary with full system access just to manipulate the AST...)


> Most people would probably be better served by a language that was a tiny bit slower but had better developer productivity.

D maybe? D and Rust are the two languages which come to mind when I think about "possible C++ replacements".


When GP said “most”, I interpreted it more broadly. Most applications simply do not require the guarantees of a non-GC language. When you expand that horizon, the list of contenders becomes considerably larger - even when restricted to statically typed languages.


Yes, for example many Python users switched to Go, a native-code GC language, and are satisfied with the performance.

There’s also the middle ground of Swift’s memory management which uses compiler-elided refcounting - i.e. the compiler detects when a count goes up then down again and removes those operations.


> There’s also the middle ground of Swift’s memory management which uses compiler-elided refcounting - i.e. the compiler detects when a count goes up then down again and removes those operations.

In the face of threading that's not a safe optimisation; if another thread decrements the refcount in between those two removed operations, boom. The compiler would have to track every variable that crosses threads, or something along those lines.

EDIT: spelling


Scalability and cost -- Loki stores the actual log data on S3 and only keeps an index of a few fields. Log queries can then efficiently target the (hopefully small) set of files containing the data, and Loki can re-parse those specific files from S3 to display the log results.


nushell is also amazing for exploring data quickly like this! I can't use it as a daily driver shell, but I just call it directly from whatever other shell I'm already in and then ^D back to my prior session when I'm done exploring. Works great and lets me visualize realllly nicely.


I can't live without this anymore! However, occasionally sqlite won't guess the type affinity for a column the way I'd hoped, and then I do have to resort to declaring all the column types explicitly.

I find it slightly annoying to have to switch the mode back to something reasonable afterwards, since the mode affects query results as well as imports.

Despite doing this every few weeks, I can never remember what the commands are! The Zui might improve this workflow for me a bit. Worth a shot!
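For future me, the incantation is roughly this in the sqlite3 shell (the file and table names are placeholders):

```
$ sqlite3 scratch.db
sqlite> .mode csv
sqlite> .import data.csv mytable
sqlite> .mode column
sqlite> .headers on
sqlite> SELECT * FROM mytable LIMIT 5;
```

The final .mode column is the "switch back" step: stay in csv mode and your query results come back as bare comma-separated lines.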


Full disclosure, I currently work at Mux on the video product. Previously though, I worked at an education startup with user-generated video content. Like many others commenting on this thread, I built a simple queuing system using RabbitMQ and Celery, transcoding on EC2 with ffmpeg. While we might have saved some money by doing this in house, we almost certainly discouraged users from uploading content because the entire video needed to be transcoded before it could be viewed. For use cases like breaking news or high-traffic user-generated content, you really want to minimize wait time, and that requires some kind of special sauce. At Mux, we encode content just in time for very fast publish times. It’s very challenging to do this on your own.


This is why QUIC/HTTP3 is happening, right?


Yes.


I've never seen nushell as a daily driver -- more as a data exploration tool. Have a random export you want to go splunking in? `nu` from your current zsh session and go wild. When you're done, ^D back to your main zsh session, job done. Whereas `jq` is only useful for json, and `xsv` is only useful for CSVs, `nu` offers uniform syntax for exploring many different formats and producing structured data out at the end as well. Neat!
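To make that concrete, a throwaway session might look roughly like this (the file names and columns are invented; syntax per recent nushell versions):

```
$ nu
> open orders.csv | where price > 100 | select name price | sort-by price
> open config.json | get servers | where region == "eu-west-1" | to csv
> exit
```

`open` parses the file based on its extension, so the same pipeline vocabulary works across CSV, JSON, TOML, and friends, and `to csv`/`to json` turns the result back into plain text for the next tool.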


