We need exactly the opposite: it should be impossible to determine the location of an Internet user. The fact that a user's IP address generally reveals their country is a massive flaw in the design of the Internet.
The only way to circumvent jurisdiction-specific laws is to make them impossible to enforce.
You think that common-sense legislation is a more realistic solution than crypto?
You can never rely on governments or corporations to have reasonable policies. Any payment system that is centrally controlled will inevitably be corrupted.
That's really where I am with crypto at this point. Through all the speculation and cruft, there is still a shot at owning our own payments, or rather at no one owning them.
The payment networks have power, and whoever can twist the gatekeepers' arms can wield it; take the gatekeepers out of the loop and people subvert that power.
The only thing I don't know about these days is the stablecoins: how do you avoid the government sinking its claws into you if you intrinsically (especially if successful) have to hold that much in cash or short-term instruments? Or you end up with something like Tether, which, leaving aside anything else, is comically opaque for an entity that is nominally running $160B.
It does punish everyone equally, if everyone pays the same fine. Some people having more ability to pay does not make the law unjust.
I think it's important to remember that money represents debt. When someone commits a crime, they owe a debt to society. But if they have money, that means society owes a debt to them, so when they pay the fine it balances out.
The system isn't perfect but the idea is that if someone makes a big contribution to society, like by practicing medicine or creating new technology, society's debt to that person shouldn't be cancelled out by a minor offense like a parking violation. But if they aren't contributing much, then breaking the rules could make them into a net negative.
If I make $10/hour and you take $100 from me you’ve taken away 10 hours of my labour. If I make $600 an hour and you take $100 from me you’ve taken away 10min of my labour.
The $100 is equal but the impact is not. Fines are penalties; they don't represent the cost of something, and a fixed fine is an unequal penalty.
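To make the arithmetic concrete, here's a throwaway sketch using just the numbers from the example above, nothing more:

```go
package main

import "fmt"

func main() {
	fine := 100.0 // fixed fine, in dollars
	for _, wage := range []float64{10, 600} { // hourly wages from the example
		hours := fine / wage
		fmt.Printf("at $%.0f/hr, a $%.0f fine costs %.1f hours (%.0f minutes) of labour\n",
			wage, fine, hours, hours*60)
	}
}
```

Same dollar amount, a 60x difference in labour taken.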
Your analogy makes some sense, but since wealth and contribution to society are linked only in theory, not in reality, I can't get behind it. The wealthiest people in reality are parasites, not those who contribute the most: owners, not builders; CEOs, not scientists; money managers, not teachers.
The fact that there is no truck parking is not an excuse for trucks to park in the bike lane or on the sidewalk.
If an area doesn't support trucks, then deliveries need to be made without trucks. That means parking the truck far away and using a hand truck to make the delivery on foot using the sidewalks.
The shipping companies can either eat the cost, pass it on to consumers or refuse to deliver to those areas.
ah yes. “park far away and simply walk 300 pounds of groceries to be stocked at the only corner store for miles” is a very clean and simple solution that only you were smart enough to think of. There definitely aren’t hills or ruts making this impossible, because everyone lives in perfectly paved suburbia, and I’ve always heard tell of this obvious laziness of checks notes the men and women who actually deliver your Prime packages at 4AM.
Society has an obligation to nobody. Governments are only obligated to serve the taxpayers that fund them.
Imprisoning homeless people is not an acceptable solution, because imprisonment costs taxpayer money.
A better solution is to let the market work. If you can't afford the rent for a city, you shouldn't be allowed to be in that city at all, even in a prison cell. People who can't afford to live in an inhabited area should be permitted to camp in the wilderness.
Deploying a free tool that doesn't solve an organization's problems isn't a valid choice. I'm tired of open source advocates hand-waving away the reasons people choose other software. For most organizations, software is not a big cost; labor is. It often makes sense to throw a million dollars at a piece of software to make people's jobs easier, because that can translate to tens of millions in labor.
That is stretching the subject beyond reason. Proprietary software as a general endeavor is not an invalid business, and nobody here is saying it is.
LibreOffice is close enough to Microsoft's offering that surely it makes sense across the many EU states to stop spending millions on the latter, and spend a few to close the gap, saving even more millions in the future.
Respectfully, I think it's a bit of a Dunning–Kruger effect for random internet commenters to presume they know what is "close enough" to meet the requirements for the many thousands of different day jobs that people have across the different governments of dozens of different countries.
Certainly the people buying software know best what their requirements are.
> Certainly the people buying software know best what their requirements are.
I doubt it. The people who are going to use the software are the ones who know what the requirements are. The people buying it should be asking the users, but rarely do.
For a large software deployment, you should be getting part of your requirements from discussions with users, but there will often be a lot of requirements from non-user stakeholders. For government deployments, even more so.
Have you ever actually worked in a large org or government IT department? :D
Commendable ideas, but they do not translate to reality. Even taking the OSS discussion out of the equation: understanding and integrating user requirements in development processes is a hard problem in general. It gets worse when we are talking about resource-constrained contexts (like government IT).
I didn’t say it wasn’t hard. Regardless, it is extremely routine for multiple stakeholder groups to be involved in software purchases, at least over my 20 years of experience.
Let's be real... Tons of governments employ people just to boost employment numbers. Government staff are almost always simply a cost, governments don't need to be profitable. They extract taxes and then spend it. And I think a lot of countries would prefer to spend more on salaries than on software licenses going to a different country...
In English, the word "free" is apparently another difficulty. The technical Four Freedoms are not at all about the money; money can of course be exchanged between willing parties, including governments. But the means and methods of closed source, and the means and methods of "corruption", are real.
I believe the Danish government is paying roughly 50M EUR/year to MS; surely you could get the features needed by paying for dev time on an open source project with some of that spend.
Not that it is insurmountable, but the difficulty with adopting open source more broadly often isn't a financial issue; it's organizational. A successful enterprise software deployment consists of a lot more than simply paying developers. You need the correct management in place to ensure the developers are building the right features, to ensure they meet your organization's needs in terms of compliance, deployment, and support, to ensure your users understand how to use the product, etc. Organizations that are familiar with software development can often do this, but these types of projects are sometimes beyond the expertise of other organizations.
Nobody outside of the IT security bubble thinks that using AppLocker is a sensible idea.
Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
I'm usually on the side of empowering workers, but I believe sometimes the companies do have business saying this.
One reason is that much of the software industry has become a batpoop-insane slimefest of privacy (IP) invasion, as well as grossly negligent security.
Another reason is that the company may be held liable for license terms of the software.
Another reason is that the company may be held liable for illegal behavior of the software (e.g., if the software violates some IP of another party).
Every piece of software might expose the company to these risks. And maybe disproportionately so, if software is being introduced by the "I'm gettin' it done!" employee, rather than by someone who sees vetting for the risks as part of their job.
For example, if someone installs the wrong version of Oracle Java on a VM in our farm, the licensing cost is seven figures, as they want to charge per core that it could conceivably run on. This would be career-limiting for a number of people at once.
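Back-of-the-envelope, with entirely hypothetical figures (Oracle's real price list and core-factor rules vary), the per-core math gets to seven figures quickly:

```go
package main

import "fmt"

func main() {
	// All numbers here are hypothetical, purely to illustrate the scaling.
	hosts := 100       // hosts in the farm the JVM "could conceivably run on"
	coresPerHost := 32 // licensable cores per host
	perCore := 350.0   // hypothetical annual cost per core, in dollars

	exposure := float64(hosts*coresPerHost) * perCore
	fmt.Printf("licensing exposure: $%.0f/year\n", exposure) // prints 1120000: seven figures
}
```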
Developers are going to write code to do things for them, such as small utility programs for automating work. Each custom program is a potentially brand new binary, never sent before by the security auditing software. Does every program written by every dev have to be cleared? Is it best in such a system to get an interpreter cleared so I can use that to run whatever scripts I need?
If I have an internal developer in such a scenario, then what makes most sense to me is to issue them a code-signing certificate or equivalent and whitelist anything signed by that certificate[1], combined with logging and periodic auditing to detect abuse.
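A toy sketch of the check such a setup would perform, assuming an RSA code-signing cert and a detached signature; all filenames here are hypothetical, and real tools like AppLocker do this via publisher rules rather than hand-rolled verification:

```go
package main

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	binary, err := os.ReadFile("tool.exe") // the internally built binary (hypothetical name)
	if err != nil {
		panic(err)
	}
	sig, err := os.ReadFile("tool.exe.sig") // detached signature produced at build time
	if err != nil {
		panic(err)
	}
	pemCert, err := os.ReadFile("dev-cert.pem") // the issued code-signing certificate
	if err != nil {
		panic(err)
	}

	block, _ := pem.Decode(pemCert)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Allow only binaries whose signature verifies against the internal dev cert.
	digest := sha256.Sum256(binary)
	pub := cert.PublicKey.(*rsa.PublicKey) // assumes an RSA key
	if err := rsa.VerifyPKCS1v15(pub, crypto.SHA256, digest[:], sig); err != nil {
		fmt.Println("BLOCK: not signed by the internal dev cert")
		return
	}
	fmt.Println("ALLOW: signed by the internal dev cert")
}
```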
> No, that's not how things are implemented normally, exactly because they wouldn't work.
I used to work for a gov't contractor. I wrote a ~10 line golang http server, just because at the time golang was still new (this was years ago) and I wanted to try it. Not even 2 minutes later I got a call from the IT team asking a bunch of questions about why I was running that program (the http server, not golang). I agree the practice is dumb, but there are definitely companies that have it set up that way.
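For scale, the whole thing was roughly this (a reconstruction of what such a server looks like, not the original code):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	// Opening a listening socket is presumably what tripped the monitoring.
	http.ListenAndServe(":8080", nil)
}
```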
So running it wasn't prevented for you, and new apps listening on the network trigger notifications that the IT checks on immediately. That sounds like a reasonable policy.
Around 1998 I snagged an abandoned 486 and installed Linux on it for use at work; the corporate software I used the most, a ticketing system, could be run using X from a Solaris server. I don't remember what I did for Lotus Notes.
Anyway, the IT department spotted it but since I was using SMB it thought it was just another Windows server. No one ever checked up on it despite being plugged into the corporate network.
This was a Fortune 500 company; things have changed a wee bit since then.
Had something similar happen a few years back... basically the Go binaries I compiled would get deleted every time I tried to run them. Usually just downloading a newer version of the Go compiler and recompiling solves it (I think they got flagged because they were compiled with an older version of the Go compiler with known vulnerabilities).
Every time it happened I think IT security got a notification, cos they would reach out to me afterwards.
The few times upgrading to the latest Go version didn't work (false positives), I would just name the binary something like "Dude, wake up" or "dude, I need this to get whitelisted", and do the compile-run-binary_got_deleted cycle 10-20 times, effectively paging the IT security guy until they reached out and whitelisted things for me :-D.
Developers are generally given specific environments to run code, which aren’t their laptops — eg, VMs in a development environment.
The goal isn’t to stop a developer from doing something malicious, but to add a step to the chain for hackers to do something malicious: they need to pwn the devbox from the developer laptop before they can pivot to, eg, internal data systems.
I haven’t worked somewhere we ran code locally in a long, long time. Your IDE is local, but the testing is remote — typically in an environment where you can match the runtime environment more closely (eg, ensuring the same dependencies, access to cloud resources, etc).
This is a strawman argument. If a developer writes code that does something malicious then it's on the developer. If they install a program then the accountability is a bit fuzzier. It's partly on the developer, partly on security (for allowing an unprivileged user to do malicious/dangerous things even unknowingly), and partly on IT (for allowing the unauthorized program to run without any verification).
It's not a straw man; I'm not trying to diffuse liability. Of course a developer running malicious code they wrote is responsible for the outcomes.
I am pointing out that if every unique, never-before-approved binary is blocked, then no developer will be able to build and then run the software they are paid to write, since every edit turns that software into a new, never-before-seen sequence of bits.
OP may not have meant to say that "it's good to have an absolute allowlist of executable signatures and block everything else", but that is how I interpreted the initial claim, and I am merely pointing out that such a system would be more than inconvenient; it'd make the workflow of editing and then running software nearly impossible.
Your premise assumes there are policies and technologies in place that restrict what a developer can do.
This is often the case, although I’ve very rarely seen environments as restrictive as what you describe being enforced on developers.
Typically developer user accounts and assigned devices are in slightly less restrictive policy groupings, or are given access to some kind of remote build/test infrastructure.
Of course companies need the option to control what software is run on their infrastructure. There are an endless stream of reasons and examples for that. Up-thread there’s a great example of what happens when you let folks install Oracle software without guardrails. Businesses are of course larger and more complex than their developers and have needs beyond their developers.
What matters here is implementation and policy management. You want those to be balanced between audience needs and business needs.
It’s also worth mentioning that plenty of developers have no clue what they’re doing with computers outside their particular area of expertise.
It's a straw man in that you're establishing an inherently facile and ridiculous scenario just to knock it down. A scenario that, as others have demonstrated, is not grounded in any logical reality. "Nobody mentioned this imaginary horrible system I just thought of, but if they had, it sure would be terrible" is quite a hill to die on.
That level of micromanagement can be quite sensible depending on the employee role. It's not needed for developers doing generic software work without any sensitive data. But if the employee is, let's say, a nurse doing medical chart review at an insurance company then there is absolutely no need for them to use anything other than specific approved programs. Allowing use of random software greatly increases the potential attack surface area, and in the worst case could result in something like a malware penetration and/or HIPAA privacy violation.
Security practitioners are big fans of application whitelisting for a reason: Your malware problems pretty much go away if malware cannot execute in the first place.
The Australian Signals Directorate for example has recommended (and more recently, mandated) application whitelisting on government systems for the past 15 years or so, because it would’ve prevented the majority of intrusions they’ve investigated.
AppLocker is effectively an almost perfect solution to ransomware (on the employee desktops, anyway). You can plug lots of random holes all day long, or just whitelist what can be run in the first place. Ask M&S management today whether they would prefer to keep working with paper systems for another month or to deal with AppLocker.
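The core idea is simple enough to sketch: hash the binary and allow only known hashes, blocking by default (AppLocker also supports path and publisher rules; this is just the hash-rule flavor in miniature):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// Hypothetical set of approved binary hashes; a real deployment would
// manage these centrally via policy, not hardcode them.
var allowlist = map[string]bool{
	"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": true,
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // path of the binary being launched
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(data)
	if allowlist[hex.EncodeToString(sum[:])] {
		fmt.Println("ALLOW")
	} else {
		fmt.Println("BLOCK") // unknown binaries never run, ransomware included
	}
}
```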
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
This is a lovely take if your business runs exclusively on FOSS, on-premise software, but it's a recipe for some hefty bills from software vendors due to people violating licensing conditions.
> Companies have no business telling their employees which specific programs they can [run]
Agreed.
> and cannot run
I strongly disagree. I think those controls are great for denylists. For example, almost no one needs to run a BitTorrent client on their work laptops. (I said almost. If you’re one of them, make a case to your IT department.) Why allow it? Its presence vastly increases the odds of someone downloading porn (risk: sexual harassment) or warez (risks: malware, legal issues) with almost no upside to the company. I’m ok with a company denylisting those.
I couldn’t care less if you want to listen to Apple Music or Spotify while you work. Go for it. Even though it’s not strictly work-related, it makes happier employees with no significant downside. Want to use Zed instead of VSCode? Knock yourself out. I have no interest in maintaining an allowlist of vetted software. That’s awful for everyone involved. I absolutely don’t want anyone running even a dev version of anything Oracle in our non-Oracle shop, though, and tools to prevent that are welcome.
>Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
Yet so many receptionists think that the application attached to the email sent by couriercompany@hotmail.com is a reasonable piece of software to run. Curious.
False dichotomy. The manager of the receptionist, or the head of their department, can decide what's appropriate for their job and dictate this to IT, and then they can lock it down.
At my work currently IT have the first say and final say on all software, regardless of what it does or who is using it. It's an insane situation. Decisions are being made without any input from anyone even in the department of the users using the software... you know... the ones that actually make the company money...
No, it’s unreasonable for end users and non-technical managers to simply dictate to IT what software is to be installed on corporate devices. They can submit requests to IT with a business justification, which should be approved if it can be accommodated.
Maybe your employer’s IT department is in the habit of saying no without a proper attempt to accommodate, which can be a problem, but the solution is not to put the monkeys in charge of the zoo.
At my old job we had upper management demanding exceptions to Office modern auth so they could use their preferred email apps. We denied that; there was no valid business justification that outweighed the security risk of bypassing MFA.
We then allowed a single exception to the policy for one of our devs, as they were having issues with Outlook’s plaintext support when submitting patches to the LKML. A clear and obvious business justification without an alternative gets rubber-stamped.
Security is a balance that can go too far in either direction. Your workstations probably don’t need to be air-gapped, and Susan from marketing probably shouldn’t be able to install Grammarly.
>No, it’s unreasonable for end users and non technical managers to simply dictate to IT
Again, false dichotomy. It's possible to meet in the middle, collaborate and discuss technical requirements. It's just that that rarely happens.
Our software (built by us, with regular code reviews and yearly external security audits, and internal-use-only amongst electrical engineers and computer-science guys) regularly gets disabled or removed by IT without warning, by accident, and it's usually a few days before it can be re-enabled or reinstalled, since the tiny IT dept is forced to rely on external agencies to control their whitelisting software.
Your "monkeys in charge of the zoo" metaphor is in full effect at my workplace, but in this case, the monkeys are IT and their security theater.
> The manager of the receptionist, or the head of their department, can decide what's appropriate for their job and dictate this to IT, and then they can lock it down.
You said exactly that.
Again, maybe your IT team is garbage, I don’t really care to litigate your issue with them. I specifically said IT should accommodate requests when possible and not be overzealous when saying no.
What you previously suggested is that stakeholders should give their demands to IT and that IT should figure out how to make it happen. That doesn’t sound like collaboration to me.
In my experience, end users and management are very rarely aware of the requirements placed upon IT to ensure the security of company infrastructure when it comes to passing audits, whether that’s for cyber insurance, CMMC compliance, or whatever else.
It’s plainly obvious that products don’t exist to sell without developers or engineers. But you can’t sell your product to customers if they require SOC and you don’t have it or if your entire infrastructure gets ransomwared.
I’ve had to tell very intelligent and hard working people that if I accommodated their request the government would no longer buy products from our company.
>What you previously suggested is that is that stakeholders should give their demands to IT and that IT should figure out how to make it happen. Doesn’t sound like collaboration to me.
That's fair; I did make it sound pretty one-sided there.
>At my work currently IT have the first say and final say on all software, regardless of what it does or who is using it.
Yeah, but software isn't software.
Like, I have a customer whose users just randomly started using VPN software to manage their client sites: VPN software that exposes the user machine directly to uncontrolled networks. This causes risks in both directions, because their clients run things like datacenters and power stations. It increases security risks for their business and for their customers, not to mention liability.
IT should be neutral, but IT done right is guided by best practice. IT is ultimately responsible and accountable for security and function, and you can't be responsible and accountable without control, or you exist just to be beaten up when shit goes sideways.
>the ones that actually make the company money...
Making the company money in an uncontrolled fashion is just extra distance to fall. If you ship a fantastic product with a massive supply-chain-induced vuln that destroys your clients, there was no point in making that money in the first place.
What's wrong with Amazon trying to make a political point?
Amazon is obviously trying to pressure the Trump admin into easing the tariffs. Why wouldn't they? Why shouldn't they? Amazon is as much a political actor as any other company, and they have a major stakeholder when it comes to tariff policy.
>What's wrong with Amazon trying to make a political point?
Mainly that letting a ginormous multinational megacorp with more money and resources than 99.9% of the rest of America combined influence our political process is literally how we got here.
The CEO of Amazon is welcome to lobby as himself, but letting an already extremely privileged legal fiction (an LLC) have more power over our society is just dumb.