Hacker News | RicoElectrico's comments

Not downplaying your project but a general related question. What's the deal with writing non-real-time application software in Rust? The stuff it puts you through doesn't seem to be worth the effort. C++ is barely usable for the job either.

It turns out it is worth the effort. Once you've got past the "fighting the borrow checker" phase (which isn't nearly as bad as it used to be, thanks to improvements in its abilities), you get some significant benefits:

* Strong ML-style type system that vastly reduces the chance of bugs (and hence the time spent writing tests and debugging).

* The borrow checker really wants you to have an ownership tree, which turns out to be a really good way to avoid spaghetti code. It's like a no-spaghetti enforcer. It's not perfect, of course, and sometimes you do need non-tree ownership, but overall it tends to make programs more reliable, again reducing debugging and test-writing time.

So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.

Some other languages have these properties (especially FP languages), but they come with a whole load of other baggage and much smaller ecosystems.
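To make the "ML-style type system" point concrete, here's a minimal sketch (all names made up for illustration): with an exhaustive match, forgetting to handle a case is a compile error rather than a latent runtime bug.

```rust
// Hypothetical example: modelling states as an enum means the compiler,
// not a test suite, checks that every case is handled.
#[derive(Debug, PartialEq)]
enum ConnState {
    Connected,
    Disconnected,
    Retrying { attempts: u32 },
}

fn describe(state: &ConnState) -> String {
    // Omitting any variant from this match is a compile error,
    // not a runtime surprise.
    match state {
        ConnState::Connected => "up".to_string(),
        ConnState::Disconnected => "down".to_string(),
        ConnState::Retrying { attempts } => format!("retrying ({attempts})"),
    }
}

fn main() {
    assert_eq!(describe(&ConnState::Retrying { attempts: 3 }), "retrying (3)");
    println!("{}", describe(&ConnState::Connected));
}
```

Add a fourth variant later and every match that forgot about it fails to compile, which is a big part of where the "once it compiles, you're basically done" feeling comes from.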


> So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.

Not if I don't know what I'm doing because it's something new. The way I'm learning how to do it is by building it. So I want to build it quickly so that I can get in more feedback loops as I learn. Also I want to learn by example, so I actually want to get runtime errors, not type system errors. Later, when I do know what I am doing, then sure, I want to encode as much as I can in my types. But before that... don't get in my way!


Yeah, it is a fair point that runtime errors are sometimes easier to understand than compile-time errors. They're still a much worse option of course - for the many reasons that have already been discussed - but maybe compile-time errors could be improved by providing an example of the kind of runtime error you could get if you didn't fix it (and the language hypothetically were dynamically typed). Perhaps that would be easier to understand for some people or some errors.

There's a (Curry-Howard) analogue here with formal verification and counter-examples.


A lot of complex GUIs are written in C++ (or are thin-ish wrappers around an underlying toolkit that is C++). This is often for performance and/or resource-consumption reasons. UIs may not have hard real-time requirements, but they are expected to consistently run smoothly at 60fps+. And dealing with multiple screen sizes, vector graphics, Unicode text, etc. can involve a lot of computation.

Rust gives you the same performance as C++ with a much nicer language to work with.


They used to be written in C++, and usually trace back to the 1990s, when C++ GUI frameworks ruled.

Nowadays most are written in managed languages, and only hot paths are written in C++.

There is hardly anyone still writing new GUI applications on macOS or Windows in pure C++; even Qt nowadays pushes for a mix of QML, Python and C++.


I don't understand the question. Why would Rust be confined to real-time applications?

No, the question is why you would use a systems language that necessarily lacks certain ergonomics, such as automatic garbage collection, for writing GUIs.

That makes no sense to me either, to be honest.


Why is automatic garbage collection necessary for UI? I’ve been writing UI apps for 25 years without using a garbage collector. In any case, the borrow checker often (though not always) obviates the need for garbage collection at all.

This is a good summary of the problem with Rust, I think:

> Pretty much all UI can be modeled as a tree–or more abstractly as a graph. A tree is a natural way to model UI: it makes it easy to compose different components together to build something that is visually complicated. It’s also been one of the most common ways to model UI programming since at least the existence of HTML, if not earlier.

> UI in Rust is difficult because it's hard to share data across this component tree without inheritance. Additionally, in a normal UI framework there are all sorts of spots where you need to mutate the element tree, but because of Rust’s mutability rules, this "alter the tree however you want" approach doesn't work.[1]

[1] https://www.warp.dev/blog/why-is-building-a-ui-in-rust-so-ha...
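For what it's worth, the usual workaround for "share data across the component tree without inheritance" is interior mutability. A minimal sketch, with all type and field names hypothetical:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Several nodes in a widget tree share one piece of mutable app state
// via Rc<RefCell<T>>: Rc gives shared ownership across the tree,
// RefCell moves the aliasing check from compile time to runtime.
#[derive(Default)]
struct AppState {
    counter: u32,
}

struct Button {
    state: Rc<RefCell<AppState>>,
}

impl Button {
    fn click(&self) {
        // Runtime-checked mutable borrow; panics if mutably aliased.
        self.state.borrow_mut().counter += 1;
    }
}

fn main() {
    let state = Rc::new(RefCell::new(AppState::default()));
    let a = Button { state: Rc::clone(&state) };
    let b = Button { state: Rc::clone(&state) };
    a.click();
    b.click();
    assert_eq!(state.borrow().counter, 2);
}
```

This works, but it's exactly the kind of ceremony the quoted article is complaining about compared to languages where any node can just mutate shared state directly.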


If I’m being charitable that is an oversimplification, and I suppose I should be charitable at Christmas. But the Scrooge in me is screaming that this analysis is deeply flawed.

Rust makes ownership and mutability explicit. Concurrent editing is very dangerous no matter what stack you are using. Rust just doesn’t let you get away with being a cowboy and choosing YOLO as your concurrency model.

Shared mutable state isn’t any harder in Rust than other languages. In fact, writing correct, bug-free and performant code is easier in Rust than almost any other language in common use, because the tooling is there. It’s just that the other compilers let you ship buggy code without complaining.

To the specific example, there are ways of sharing mutable state, or encapsulating changes throughout a UI tree. I’ve written a UI framework in Rust that does this. It is difficult to get right. But this is true of ANY language - the difficulty is intrinsic to the data type, if you actually care about doing it correctly.

That difficulty does not need to be exposed to the user. There are plenty of Rust UI libraries that take react-like lambda updaters, for example.
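A rough sketch of what such a lambda-updater API can look like (names are entirely made up; real frameworks differ): the framework owns the state and the user hands it a closure, so exclusive mutable access is guaranteed by construction.

```rust
// Hypothetical "lambda updater": the store owns the state, callers
// never hold a reference into it, they just submit mutations.
struct Store<T> {
    state: T,
}

impl<T> Store<T> {
    fn update(&mut self, f: impl FnOnce(&mut T)) {
        // The closure gets the only mutable reference, for its
        // duration only - no aliasing is possible.
        f(&mut self.state);
    }
}

fn main() {
    let mut store = Store { state: 0u32 };
    store.update(|n| *n += 5);
    assert_eq!(store.state, 5);
}
```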

I still fail to see the connection to garbage collectors.


End-to-end types and a single(-ish) binary simplifies a lot of things. Plus you can always just .clone() and .unwrap() if you want to be lazy/prototype something.
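For illustration, both escape hatches in one contrived snippet: they keep a prototype moving, and both are trivial to grep for and remove later.

```rust
fn main() {
    let name = String::from("prototype");
    // .clone(): sidestep an ownership fight by just copying the data.
    let copy = name.clone();
    // .unwrap(): skip error handling for now; crash loudly on failure.
    let n: u32 = "42".parse().unwrap();
    println!("{copy}: {n}");
}
```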

Markdown habit.


For something that is supposedly a "strategic priority", the implementation is half-assed as well. When I edit my prompt after the fact, it is instead sent as a new message.


Our time is plagued by cheap-to-produce commodities with high upfront investment: today RAM, yesterday microcontrollers, even earlier HDDs. It's annoying AF. Commodities should be something we don't have to think about; they should serve us, not the other way around.


Thanks for the reminder to look into DMARC. I removed rua and only left ruf, so that I will only get reports about failing e-mails (not likely). The aggregate reports are useless for my small domain with effectively one e-mail address.
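For context, a DMARC TXT record with only failure (ruf) reporting might look roughly like this (domain, policy and mailbox are placeholders, not my actual record):

```
_dmarc.example.com.  TXT  "v=DMARC1; p=quarantine; ruf=mailto:dmarc-failures@example.com"
```

ruf requests per-message failure reports, while rua (omitted here) would request the aggregate XML reports that aren't worth reading for a one-mailbox domain. Note that many large receivers don't actually send ruf reports.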


What's wrong with fatbikes? They look stupid for sure, but otherwise?


They are routinely modified to exceed legal speed limits and owned by kids ten years old or younger, going nearly 30 mph on a footpath while holding a mobile phone. I think they are also unregistered.


Major problem in the U.S. too.

Easily modified to go as fast as 50 mph on a chassis not designed for it. Drivers aren't licensed and are often young kids. No registration. No insurance. No training. Very hazardous to pedestrians.


Oh, you mean they're powered, right?


Yes - in the Netherlands the term 'fatbike' is pretty much synonymous with the battery-powered bikes only (I presume elsewhere this may be different). They are mini motorcycles, really, but exempt from all the rules and regulations that would apply to regular motorcycles.


Elsewhere “fatbike” just refers to the tyre size.

Pedal fatbikes for riding on snow and sand have existed for at least 20 years I’d say.


Seems very similar to not only X, but also Y


Is that a common attribute for LLMs to output into text?


Yes, there's a video on YouTube called "How to spot AI text".


I've watched that, but it was basically just the em-dash thing.


The most frustrating thing about this whole junior-position drought is how it simultaneously affects those who are passionate and get it, not only the opportunistic bootcamp alumni who were lured by the prospect of high earnings.

If I were to graduate today, I'd be royally screwed.


/r/cscareerquestions is full of the horror, e.g. someone who applied to 2000 jobs and got 1 offer.


Honestly, with the AI slop of resumes, I applied to dozens of jobs and, after 20 years of experience, only got callbacks for the ones where I had either a recruiter or direct connections - because I didn't have a big fat "worked at Google for 10 years" on my resume. And I'd like to think of myself as someone who can take a very bad situation and make it look smooth.


Even with 10 years of Google on my resume, I got absolutely zero non-automated responses to all the jobs I applied to after being laid off a few months back (I'm working again). Connections from my network and recruiter reach-outs were the only real leads.

But looking back on my 30 years of working (including in high school), every job I've ever had I got through personal referrals or recruiter reach-outs. I've gotten to interviews before but never actually taken a job without a personal connection.


Other than Indeed/Hired, all my other roles came from recruiters. I don't have a degree, so it's harder for me to get a job application-wise; at least now I have 6+ years of experience, which isn't a lot but is better than 0.

I will say that what's gotten me hired are my projects, e.g. robotics or getting published online for hardware stuff. I work in the web/cloud space primarily, though; hardware would be cool, but it's hard to make that jump.


> If I were to graduate today, I'd be royally screwed.

I feel that too. I am a self-taught dev. Got a degree, but not in CS. I don't know if I could get hired today.

Not sure how to fix it; feels like the entire industry is eating the seed corn.


The author seems to be oblivious to the slang meaning of PDF ;)

