Hacker News | GuB-42's comments

The thing with Cybertrucks losing panels certainly didn't help.

A big part of the Cybertruck marketing was the robustness of its unusual design: exoskeleton! space-grade materials! They smashed the door with a hammer and it didn't dent (just avoid pétanque balls...), and Elon Musk commented that it would destroy the other vehicle in an accident. Morally dubious arguments sometimes, but they appeal to many potential customers.

And then, the vehicle that is supposed to be a tank falls apart if you look at it funny. And the glued-on steel plates, is that the exoskeleton? Not only is the design controversial, it failed at what it was supposed to represent.


> For any electronic device you purchase a small tax is collected and used for the recycling and collection of the future waste it will generate.

I call bullshit on these initiatives. It is a tax, period. The government collects money and it does... stuff. It is not a deposit, so it doesn't incentivize people to return the thing, and it is too general to disincentivize particularly bad products like disposable vapes.

The tax can be used on recycling efforts, and it probably is, however you don't need a specific tax for that. These investments can come from other sources of government income: VAT, income tax, tariffs, etc... I don't think people are paying a "presidential private jet tax" and yet, the president has his jet, and hopefully, all government effort for the environment is not just financed by a small, specific tax. Saying a tax is for this or that is little more than a PR move; they could do the same by increasing VAT, and I believe it would work better, but that's unpopular.

> The collection mandatorily happens in the shops that sell electronic devices

That is more concrete.


I suspect that the length of the offset of your input data in pi is equal to the length of the input data itself, plus or minus a few bytes at most, regardless of the size of the input data.

That is: no compression, but it won't make things worse either.

Unless the input data is the digits of pi, obviously, or the result of some computation involving pi.
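That intuition is easy to spot-check with a short script (a sketch: the Machin-formula helper and the sample substrings are my own picks for illustration, not anything from the thread):

```python
def pi_digits(n):
    """First n decimal digits of pi after the "3." (Machin's formula,
    plain integer arithmetic)."""
    unity = 10 ** (n + 10)  # 10 guard digits against truncation error
    def arccot(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., scaled by unity
        total = term = unity // x
        k, sign = 3, -1
        while term:
            term //= x * x
            total += sign * (term // k)
            sign, k = -sign, k + 2
        return total
    # pi = 16*arctan(1/5) - 4*arctan(1/239)
    return str(4 * (4 * arccot(5) - arccot(239)))[1:n + 1]

digits = pi_digits(10000)
for target in ("592", "4338", "999"):
    offset = digits.find(target)
    print(f"{target}: offset {offset}, i.e. {len(str(offset))} digits "
          f"to write down, vs {len(target)} digits of input")
```

Short substrings tend to show up at offsets with about as many digits as the substring itself, so writing the offset down costs roughly what the data it "encodes" costs.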



What if, instead of the index of your full data, you store the indices of smaller blocks? Would I need, say, an 8-kbyte or larger integer to store the offsets of all the possible 8k blocks?

It is meant to be a joke anyway.


That would 'work' to a point. But my gut guess is it would end up with bigger data.

That has been true of most algorithms I have ever made. There are several places where your gains disappear. For me, the dictionary lookup is where things come apart. Sometimes it is the encoding of the bytes/blocks themselves.

In your example you could find all of the possible 8k blocks out there in pi. Now that set of numbers would be very large, so it will be tough to get your head around how it is working. As it is not the whole of pi's space, you also probably need a dictionary or function to hold it, or at least pointers into it.

One way to tell if a compression alg is doing ok is to try to make the most minimal version of it then scale it out. For example start with a 4 bit/8 bit/16 bit value instead of 8k. Then see how much space it would take up. Now sometimes scaling it up will let you get better gains (not always). That is where you will have a pretty good idea if it works or not. Like just move from 1 byte to 2 then 4 and so on. Just to see if the alg works. That exercise also lets you see if there are different ways to encode the data that may help as well.
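Here is a scaled-down sketch of that exercise, with 2-digit decimal blocks standing in for the 8k ones and first-occurrence offsets into pi's digits as the "compressed" form (all names and the fixed-width scheme are mine, purely for illustration):

```python
def pi_digits(n):
    # First n decimal digits of pi after the "3." (Machin's formula).
    unity = 10 ** (n + 10)
    def arccot(x):
        total = term = unity // x
        k, sign = 3, -1
        while term:
            term //= x * x
            total += sign * (term // k)
            sign, k = -sign, k + 2
        return total
    return str(4 * (4 * arccot(5) - arccot(239)))[1:n + 1]

digits = pi_digits(5000)

# Offset of the first occurrence of every possible 2-digit block.
first = {f"{b:02d}": digits.find(f"{b:02d}") for b in range(100)}
# Fixed-width addressing: pad every offset to the widest one needed.
width = max(len(str(off)) for off in first.values())

def encode(msg):
    blocks = [msg[i:i + 2] for i in range(0, len(msg), 2)]
    return "".join(str(first[b]).zfill(width) for b in blocks)

msg = "314159265358979323"
print(len(msg), "digits in ->", len(encode(msg)), "digits out")
```

The last 2-digit block to make its first appearance shows up after offset 99 (101 digits can hold at most 100 distinct pairs, and pi's opening digits repeat pairs), so every address needs at least 3 digits: the "compressed" output is bigger than the input, which is the scaled-down failure mode you'd also hit at 8k.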

I got nerd sniped about 3 decades ago on problems just like this. Still trying :)


Some patterns must happen to repeat, so I would assume the offset to be larger, no?

You could express the offset with scientific notation, tetration, and other big math number things. You probably don't need the whole offset number all at once!

Actually, you do.

You can use all the math stuff like scientific notation, tetration, etc... but it won't help you make things smaller.

Math notation is a form of compression. 10^9 is 1000000000, compressed. But the offset into pi is effectively a random number, and you can't compress random numbers no matter what technique you use, including math notation.

This can be formalized and mathematically proven. The only thing wrong here is that pi is not a random number, but unless you are dealing with circles, it looks a lot like it, so while unproven, I think it is a reasonable shortcut.
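The proof is a simple counting (pigeonhole) argument: there are strictly fewer short descriptions than inputs, whatever notation you invent. The bookkeeping in miniature (`n` is an arbitrary example size):

```python
# There are 2**n bit strings of length n, but only 2**n - 1 strictly
# shorter ones (lengths 0 through n-1). So no lossless scheme --
# scientific notation, tetration or otherwise -- can shrink every
# n-bit input: at least one input must map to something no shorter.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # 2**0 + ... + 2**(n-1)
print(inputs, "inputs vs", shorter_outputs, "shorter descriptions")
```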


Given the way he approaches the problem, which essentially uses voxels, it shouldn't be too hard: for each voxel, compute the distance to the closest triangle and you have your SDF.

The thing is, you have an SDF and now what? What about textures and materials, animation, optimization, integration within the engine,... None of it seems impossible, but I wouldn't call it easy.
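For the first step, a brute-force sketch of that voxel pass (the point-to-triangle routine follows the standard barycentric region test; the helper names and the unsigned-distance simplification are mine, and the sign would still need an inside/outside test):

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add_scaled(p, v, t): return (p[0]+v[0]*t, p[1]+v[1]*t, p[2]+v[2]*t)

def closest_point_on_triangle(p, a, b, c):
    # Classic region test: vertex, edge, or interior of triangle abc.
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return a                      # vertex a
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return b                     # vertex b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return add_scaled(a, ab, d1 / (d1 - d3))          # edge ab
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return c                     # vertex c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return add_scaled(a, ac, d2 / (d2 - d6))          # edge ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return add_scaled(b, sub(c, b), t)                # edge bc
    denom = va + vb + vc                                  # interior
    return add_scaled(add_scaled(a, ab, vb / denom), ac, vc / denom)

def distance_field(voxel_centers, triangles):
    # Unsigned distance per voxel: min over all triangles.
    return [min(math.dist(p, closest_point_on_triangle(p, *tri))
                for tri in triangles)
            for p in voxel_centers]
```

This is O(voxels x triangles), so a real implementation would want spatial acceleration, but it shows why the SDF itself is the easy part.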


Funny how almost everybody hated the Windows 8 desktop environment. And to this day, Windows 8 is still seen as one of the worst versions of Windows for that reason, even if it was pretty decent under the hood.

Projects like this show that it has its fans. It feels like authors being successful only after their death. I still think of the Windows 8 UI as terrible overall, but now that the hate has passed, people are not afraid to give it some redeeming qualities.

It was pretty good on mobile though, which is the root of the problem I think. They tried to unify what shouldn't be unified.


I used Mercurial before Git and found it way more intuitive. I don't have much to say about the documentation, I didn't have a problem with it, but that's not the reason.

It is not just because the CLI is more intuitive, though it plays a big part.

The main reason is that mercurial is more opinionated. On a default setup, only a few commands are available, and none of them let you change the history. If you want more, you have to add an extension. These are built-in, that's just a line in a configuration file, so that's not much of an obstacle, but you have to be deliberate. It gives a natural progression, and it better defines the project organization.
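That deliberate step is literally a couple of lines in the config file; for example (the extensions listed are real ones that ship with Mercurial but are off by default):

```ini
# ~/.hgrc -- history-editing commands only exist once you opt in
[extensions]
rebase =
histedit =
shelve =
```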

With git, you have everything and the kitchen sink, literally, as it uses the "plumbing" and "porcelain" metaphor. All flavors of merge, rebase and fast-forward are available, there is a git-reset command that does a dozen different things, there is stash, the staging area, etc... The first month or two on git, I was a bit overwhelmed; none of that with Mercurial. And I already had the experience of Mercurial when I switched to git, so I was familiar with the concepts of push/pull and DAGs.

Now, I tend to prefer git, though after many years, I still have trouble wrapping my head around the command line sometimes. But that's for the same reason it was so hard for me to get into: it gives you a lot of freedom and possibilities. I like the fact that it is really decentralized: in one project, the customer had a completely separate central repository we couldn't access, and they couldn't access ours, for security reasons. We worked by exchanging bundle files. At some point we also took advantage of the fact that it is possible to have more than one root commit. Also, almost all mistakes are fixable and it is hard to really lose anything (including secrets, so beware!).

For a video game analogy, Mercurial introduces you to new game mechanics as you progress, while Git makes you start in the middle of the map with all the skills unlocked.


To each his own. I liked my company-issued ThinkPad so much I ended up buying one myself, and I have been pushing back on getting a replacement (HP EliteBook).

Some of your points are common, such as the touchpad being garbage, or that it runs hotter than an Apple Silicon MacBook Air. But most people consider ThinkPad keyboards to be way better than Apple's and while most (not all) ThinkPads have a plastic shell, they certainly don't feel cheap. Apple displays are typically really good, but ThinkPads have a lot of options, so it is hard to tell.

Your comment, especially regarding the keyboard, makes me think you just love your MacBook. Why buy anything else?

Linux support is not great, but a significant part of what makes Apple great is their hardware/software integration, and they are not doing it open source. It means a MacBook without OSX is a lesser MacBook, but at least it is not Windows 11.


ThinkPads run the gamut. Their flagship line is nice. In most regards, I enjoy my first gen X1 Nano — good keyboard, screen (even if it annoyingly requires fractional UI scaling), body feels solid despite being lightweight, soft touch plastic makes it feel nice to hold. Trackpad is just ok but the trackpoint makes up for that.

It likes to spin up its fan doing the most insignificant things though (even plugging in a pedestrian 1x-scaling external monitor while idle can do it) and its battery life is fairly abysmal. Standby time is also quite poor.

Some of these things are in theory improved by a newer CPU (Lunar Lake in particular looks decent) but sadly they discontinued the Nano. The Carbon isn’t that much bigger, but the size difference is noticeable in some circumstances.


Are you running windows on this?

It dual boots Windows 11 and Fedora, and I’ve played with other distros in the past. They have minor edges over each other in various ways but none offer a major concrete advantage over the others in any category (except harassment/junkware, which any distro has a major upper hand over Windows 11 in, but Windows 10 accomplishes that almost as well).

Either there’s simply a hard limit on how good this hardware can be in terms of thermals and battery life or neither Lenovo’s tuning of Windows nor any Linux distro has gone far enough in properly leveraging power management and the like.


Is your macbook air running 8 different corporate security products eating CPU?

I had a macbook for work previously and it was just as shit as windows due to all the junk running in the background.


Not easy I would say.

Safety improvements mean larger crumple zones, reinforcement, etc... which means a bigger and heavier vehicle if you want to keep the same capacity. That in turn means a more powerful engine, brakes, wheels and tyres, etc... further increasing the size and weight of the vehicle. It is a compounding effect.

Fuel economy and environmental requirements (which are linked) come with tighter engine control for better combustion and cleaner exhaust. That means tighter tolerances, so simple tools may be less appropriate, and more electronics.

And there are comfort features that are hard to skip nowadays: A/C, power steering, power door locks and windows. Mandatory safety equipment like airbags and ABS. Even simple cars like what Dacia makes are still bigger, heavier and more complex than older cars like the C15; they don't really have a choice.


Not only that, but Markdown uses the conventions people already used in text files (point 3 in the article). People wrote Markdown before Markdown existed; Markdown just formalized it.

In fact, I like to write notes and documentation in text form, and then I notice I have been using Markdown all along, so I rename my text file into .md, fix a couple of markers, and now it looks nice on a viewer that supports markdown, and I have syntax highlighting in my text editor.


That's the main reason I still like writing Markdown (and Typst nowadays as well); I can "render" it in my head very quickly.

When I'm reading Markdown, I almost don't even see the symbols. Beginning a statement with a # immediately just looks like a heading, surrounding a word with asterisks looks italic to me, wrapping a string with backticks looks like code formatting to me, and my assumptions are generally right so I don't need to render very often (which is why the Pandoc -> LaTeX -> PDF pipeline didn't bother me that much).

If I'm writing LaTeX or something, I generally have a very rough idea of what something will look like, but it's not terribly reliable for me. I need to render frequently because my assumptions about how something is going to look are likely to be wrong.

I mostly use Typst now because it is similar enough to Markdown, and the compilation time is so categorically faster that I see little reason not to use it, but I still respect the hell out of Markdown for popularizing this kind of syntax.


I don’t even see the code. I see a blonde, brunette, red head.

That made me think - are there any depictions of Markdown in movies and tv shows? I've seen a fair share of C, Java, HTML, and (in newer works) JavaScript and Python. And Perl in The Social Network.

n.b.: the above quote is from The Matrix.


I think that was PHP in The Social Network.

I'm referring to the part where Mark was scraping the photos from the Harvard houses' face book pages.

If it is going to be accurate it is PHP.

Yeah, I don't even need a Markdown formatter. It's already in my head.

> Not only that but Markdown use the conventions people already used in text files

So why not Markup? At the time, everyone was using markup because Wikipedia was in wikimarkUP, with # for numbered lists, {} for macros and === to denote titles. The latter still works in Markdown, but the former doesn't. Funny heritage: Confluence shortcuts are also expressed in markup because it was the trend at the time, but they changed the shortcuts when they went to the Cloud.


MediaWiki syntax was its own odd duck. It used '''bold''' and ''italics'', and [https://example.com/ external links like this] - almost nothing else followed their lead.

And for a long time MediaWiki didn't have a proper parser for that markup, just a bunch of regexes that would transform it into HTML. I don't know if they have a proper parser now, but for reasons of backwards compatibility it must be lenient/forgiving, which means that converting all of Wikipedia to markdown is basically impossible now. So MediaWiki markup will stay with us for as long as there are MediaWiki wikis.
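A toy version of that regex approach (illustrative only, not MediaWiki's actual rules; note that bold must be handled before italics, which is exactly the kind of ordering fragility that made retrofitting a real parser so hard):

```python
import re

def wikitext_to_html(text):
    # '''bold''' must run before ''italics'', or the '' rule
    # would eat the triple quotes first.
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    # [url label] external links
    text = re.sub(r"\[(\S+) ([^\]]+)\]", r'<a href="\1">\2</a>', text)
    return text

print(wikitext_to_html("'''bold''' and ''italics''"))
```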

There's been some progress on that front:

https://www.mediawiki.org/wiki/Parsoid


BINGO. A key point he either is ignorant about or strangely chooses to overlook.

There is almost never a single cause; here there were 12. It is often called the Swiss cheese model. The root cause is a bad transcription, which probably happened many times, but for some reason, this time, all the safeguards failed. It happens sometimes, with catastrophic results. Hopefully, procedures will be adjusted, but in general, you can only minimize risks, not prevent catastrophic events entirely.

It was an expensive mistake, but thankfully, no one died.

