Hacker News | ummonk's comments

Google tells its employees what products they're allowed to buy for personal use?


Seems like they meant it for a work device.


A 3D-printed Inconel part would be fine. 3D-printed plastic is something else entirely...


I'm stuck on "setting up this account" like most people. What a botched launch. This kind of bugginess and unreliability has become so much more frequent since big tech started tightening the screws with mass layoffs.


Next they'll be doing PCB CAD in Photoshop...


Is there a tool / website that makes this process easy?


I coded it in Bun with openrouter(dot)ai. I have an array of benchmarks; each benchmark has a grader (for example, checking whether the answer equals a certain string, or grading the answer automatically using another LLM). Then I save all the results to a file and render the percentage correct to a graph.
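The shape is roughly this (a simplified sketch, not the actual code; the model id, names, and field shapes are illustrative):

    // Simplified sketch; model id, names, and shapes are illustrative.
    type Benchmark = {
      name: string;
      prompt: string;
      grade: (answer: string) => boolean | Promise<boolean>;
    };

    const benchmarks: Benchmark[] = [
      {
        name: "capital",
        prompt: "What is the capital of France? Answer in one word.",
        grade: (a) => a.trim().toLowerCase() === "paris",
      },
      // ...a grader can also call another LLM to judge the answer
    ];

    // OpenRouter exposes an OpenAI-compatible chat completions endpoint.
    async function ask(model: string, prompt: string): Promise<string> {
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
      });
      const json = await res.json();
      return json.choices[0].message.content;
    }

    const model = "openai/gpt-4o-mini"; // any OpenRouter model id works here
    const results: { name: string; correct: boolean }[] = [];
    for (const b of benchmarks) {
      const answer = await ask(model, b.prompt);
      results.push({ name: b.name, correct: await b.grade(answer) });
    }
    const pct = (100 * results.filter((r) => r.correct).length) / results.length;
    // (graph rendering from results.json is a separate step, omitted here)
    await Bun.write("results.json", JSON.stringify({ model, pct, results }, null, 2));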


But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate."


Git log / draft history


That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated...


Why do you care? If a human reviewed and edited it, someone filtered it to make sure it's correct. It's validated to be correct; that is the main point.


> if a human reviewed and edited it, someone filtered it to make sure it’s correct

Yes.

But it's not “free from AI-generated prose”, so why advertise it as such?

And since the first sentence is a lie, why should we believe the second sentence at all?


Clearly someone didn't make sure everything is correct, since they allowed a self-contradictory statement (whether generated by AI or by a human) into the text...


Because it never works like that in practice.

People have the illusion of reviewing and "owning" the final product, but that is not how it looks from the outside. The quality, the prose style, the errors that pass through due to inevitable AI-induced complacency ALWAYS EVENTUALLY show. If people got out of the AI bubbles they would see it too, alas.

We have been reading the same stories for at least a couple of years now. There is no novelty anymore. The core issues and problems have stayed the same since GPT-3.5. And because they are so omnipresent on the internet, we have grown able to recognise them almost automatically. It is no longer just a matter of quality; it is an insult to the readers when an author pretends that content is not AI-generated just because they "reviewed it". Reviewing something that somebody else wrote is not ownership, especially when that somebody is an LLM.

In any case, I do not care if people want to read or write AI-generated books; just don't lie about whether it is AI-generated.


I don’t see how any of what you’re suggesting would have prevented this hack, though (which involved the compromise of an old storage account that hadn’t been used since 2020).


You don't see how preventative maintenance, such as implementing a policy to remove old accounts after N days, could have prevented this? Preventative maintenance is part of the forethought that should go into the best or safest way to do a thing. This is something that could easily be learned by looking at problems others have had in the past.

As a controls tech, I provide a lot of documentation and teach our customers how to deploy, operate, and maintain a machine for the best possible results with the lowest risk to production or human safety. Some clients follow my instructions, some do not. Guess which ones end up getting billed the most for my time after they've implemented a product we make.

Too often, we want to just do without thinking. This often causes us to overlook critical points of failure.


For the app I maintain, we have a policy of deleting inactive accounts after a year. We delete approved signups that have not been "consummated" after thirty days.
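Mechanically it's just a periodic cleanup job. A sketch of the shape (illustrative only; assumes a SQLite users table with last_seen, approved_at, and first_login columns, which is not our actual schema):

    // Illustrative cleanup job; table and column names are assumptions.
    import { Database } from "bun:sqlite";

    const db = new Database("app.db");
    const now = Date.now();
    const YEAR_MS = 365 * 24 * 60 * 60 * 1000;
    const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

    // Inactive accounts: no activity for a year.
    db.query("DELETE FROM users WHERE last_seen < ?").run(now - YEAR_MS);

    // Approved signups never "consummated" (no first login) within thirty days.
    db.query("DELETE FROM users WHERE first_login IS NULL AND approved_at < ?")
      .run(now - THIRTY_DAYS_MS);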

Even so, we still need to keep an eye out. A couple of days ago, an old account (not quite a year old) started spewing connection requests to all the app users. It had been a legit account, so I have to assume it was pwned. We deleted it quickly.

A lot of our monitoring is done manually, and carefully. We have extremely strict privacy rules, and that actually makes security monitoring a bit more difficult.


These are excellent practices.

Such data is a liability, not an asset, and if you dispose of it as soon as you reasonably can, that's good. If this is a communications service, consider saving a hash of the ID and refusing new sign-ups with that same ID, because if the data gets deleted, someone could re-sign up with someone else's old account. But if you keep a copy of the hash around, you can check whether an account has ever existed and refuse registration if that's the case.
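Concretely, the idea is a one-way tombstone. A sketch (the id format and storage are assumptions; in practice the set would live in persistent storage):

    // Sketch of the tombstone idea; id format and storage are assumptions.
    import { createHash } from "node:crypto";

    // In practice this would be a table of hashes, not an in-memory set.
    const tombstones = new Set<string>();

    const hashId = (id: string) =>
      createHash("sha256").update(id).digest("hex");

    // On account deletion: wipe all user data, keep only the one-way hash.
    function onDelete(id: string) {
      tombstones.add(hashId(id));
    }

    // On signup: refuse any id that has ever existed.
    function canRegister(id: string): boolean {
      return !tombstones.has(hashId(id));
    }

Since only the hash is stored, nothing identifying survives deletion, but re-registration under a deleted identity is still blocked.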


It would violate our privacy policy.

It's important that "delete all my information" also deletes everything after the user logs in for the first time.

Also, I'm not sure that Apple would allow it. They insist that deletion remove all traces of the user. As far as I know, there's no legal mandate to retain anything, and the nature of our demographic means that folks could be hurt badly by leaks.

So we retain as little information as possible, even if that makes it more difficult for us to administer, and we destroy everything when we delete.


I think you misunderstood my comment and/or fail to properly appreciate the subtle points of what I suggest you keep.

The risk you have here is one of account re-use, and the method I'm suggesting allows you to close that hole in your armor, which could otherwise be used to impersonate people whose accounts have been removed at their request. This is comparable to not being able to re-use a phone number once it is returned to the pool (phone numbers are usually re-allocated after a while because they are a scarce resource, which ordinary user ids are not).


> I think you misunderstood my comment and/or fail to properly appreciate the subtle points of what I suggest you keep.

Nah, but I understand the error. Not a big deal.

We. Just. Plain. Don't. Keep. Any. Data. Not. Immediately. Relevant. To. The. App.

Any bad actor can easily register a throwaway, and there's no way to prevent that, without storing some seriously dangerous data, so we don't even try.

It hasn't been an issue. The incident that I mentioned is the only one we've ever had, and I nuked it in five minutes. Even if a baddie gets in, they won't be able to do much, because we store so little data. This person would have found all those connections to be next to useless, even if I hadn't stopped them.

I'm a really cynical bastard, and I have spent my entire adult life rubbing elbows with some of the nastiest folks on Earth. I have a fairly good handle on "thinking like a baddie."

It's very important that people who may even be somewhat inimical to our community be allowed to register accounts. It's a way of accessing extremely important resources.


> I provide a lot of documentation

> Some clients follow my instruction, some do not.

So you’re telling me you design a non-foolproof system?!? Why isn’t it fully automated to prevent any potential pitfalls?


Couldn’t one just make long bigger, then, to make it match?


Maybe so; I haven't tried. Probably a lot less code depends on unsigned long wrapping at 2⁶⁴ than used to depend on unsigned int wrapping at 2¹⁶, and we got past that. But stability standards were lower then. Any code that runs on both 32-bit and 64-bit LP64 systems can't be too dependent on the exact sizeof long, and sizeof long already isn't sizeof int the way it was on 32-bit platforms.


I'd actually keep it still wrapping at 2^64, with the extra metadata not participating in arithmetic operations...


That seems worse.

For all the wrong code that assumes long can store a pointer, there's likely a lot more wrong code that assumes long can fit in 64 bits, especially when serializing it for I/O and other-language interop.

Also, 128-bit integers can't fit in standard registers on most architectures, and don't have the full suite of ALU operations either. So you're looking at some serious code bloat and slowdowns for code using longs.

You've also now got no "standard" C type (char, short, int, long, long long) which is the "native" size of the architecture. You could widen int too, but a lot of code also assumes an int can fit in 32 bits.


No, it should only do arithmetic on the first 64 (or 32) bits. The extra metadata should be copied unchanged.


Ok, I think I follow. You'd widen the type under the hood but not expose this fact to user code.

However, most longs are just numbers that have no metadata. I guess you'd set the metadata portion to all zeroes in that case. This feels like a reified version of Rust's pointer provenance, and I think you would have to expose some metadata-aware operations to the user. In which case, you're inviting some code rewrites anyway.

While not as bad as the register/ALU ops issue, you're still making all code pay a storage size penalty, and still adding some overhead to handle the metadata propagating through arithmetic operations, just to accommodate bad code, plus it complicates alignment and FFI.


It would still be exposed to user code that checks its size with sizeof, but yeah the long would only have numerical values between 2^-63 and 2^63-1.

And yes, there would still be some overhead for storing and propagating the metadata, and struct alignment would change and FFI wouldn't work with longs.


Err I meant -2^63, that’s embarrassing.


Heh. I missed that.


There's a lot of code that makes assumptions about the number of bytes in a long rather than diligently using sizeof... remember, the whole point here is low-quality code.


It's going to break stuff one way or another.

