It's not a model with much mindshare for the web currently, but it is possible to achieve tamper-prevention without requiring the content of communications to be encrypted.

For example, most official Debian[1] and Ubuntu[2] package repositories currently use HTTP (not HTTPS) by default for content retrieval.

That's reliable thanks to public-key cryptography; the packages are signed, and the receiver verifies the signatures.

Someone able to inspect your network traffic could, for example, tell that you've downloaded a genuine copy of "cowsay". Or they could detect that the server replied with a tampered copy (something that your client should reject as invalid).
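
A minimal sketch of that pattern in Python (the URLs and filenames are hypothetical, and note that apt itself actually verifies a signed Release file covering the package indexes rather than a per-.deb signature, but the trust model is the same):

    import subprocess
    import urllib.request

    # Hypothetical mirror URLs: the artifact and its detached
    # signature both travel over plain, unencrypted HTTP.
    urllib.request.urlretrieve("http://deb.example.org/pool/cowsay.deb", "cowsay.deb")
    urllib.request.urlretrieve("http://deb.example.org/pool/cowsay.deb.sig", "cowsay.deb.sig")

    # Verify against a public key already in the local keyring;
    # any tampering in transit makes this check fail.
    result = subprocess.run(["gpg", "--verify", "cowsay.deb.sig", "cowsay.deb"])
    if result.returncode != 0:
        raise SystemExit("signature check failed: rejecting download")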

[1] - https://wiki.debian.org/SourcesList#Example_sources.list

[2] - https://ubuntu.com/server/docs/package-management


Systems that predate widespread SSL and TLS usage do indeed have those features, but they're mostly unsuitable for the majority of internet users.

Sadly, it could have been better, with a range of options for connection, stream and content encryption methods, but that simply isn't feasible at the user base and scale we're currently working with.

For niches (and operating systems and software packages are niches, even if an end user is somewhere under the hood using them) it can still be pretty good, especially considering the mirror system: you distribute files to mirrors which might themselves use TLS, but you'd still want the distribution authority to be the only one signing those files.


Adding to that: just slapping HTTPS on those connections still wouldn't prevent an observer from detecting that you're downloading cowsay. IIRC every package has a fairly unique size, and unless you add padding, that is enough metadata to figure out with reasonable certainty which package you requested. So it's not like HTTPS would add any immediate benefit anyway.
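
A rough sketch of that traffic-analysis idea (the sizes below are made up, and a real observer would also have to account for TLS record and header overhead):

    # Hypothetical package-size table, which an observer could
    # build simply by mirroring the public repository.
    sizes = {
        "cowsay": 20_060,
        "fortune-mod": 51_410,
        "sl": 26_340,
    }

    def guess_package(observed_bytes, tolerance=512):
        """Match an observed (encrypted) transfer length against
        known package sizes, allowing slack for framing overhead."""
        return [name for name, size in sizes.items()
                if abs(size - observed_bytes) <= tolerance]

    print(guess_package(20_300))  # -> ['cowsay']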


> For example, most official Debian[1] and Ubuntu[2] package repositories currently use HTTP (not HTTPS) by default for content retrieval.

But then you've had to bootstrap the trust somehow. If you were to download an ISO from that non-HTTPS website, you'd be at risk.


Digital experience ('DEX') issues reported by employees, from the original report:

- 37% : Security/regulatory policies

- 37% : IT overwhelmed by number of issues that need resolving

- 35% : Lack of training for IT personnel

- 34% : Handling the shift to hybrid/remote work

- 32% : Increasing number of endpoints to manage

- 31% : Technology in place is not appropriate for supporting DEX

- 27% : Lack of knowledge around DEX

- 25% : Lack of budget to support DEX efforts

- 19% : Lack of buy-in from leadership around importance of DEX

- 2% : No challenges being faced

(with some snark: statistics on signup/login requirements before viewing published reports, and on the presentation of statistics as images instead of tabular data formats, were not reported)


Worth mentioning that the experimental payments during this research were one-time-only.

(a few descriptions[1][2][3] of Universal Basic Income seem to define it as including recurring/periodic payments)

[1] - https://en.wikipedia.org/wiki/Universal_basic_income

[2] - https://basicincome.stanford.edu/uploads/Umbrella%20Review%2...

[3] - https://www.investopedia.com/terms/b/basic-income.asp


Yes, it is usually intended as a monthly payment, BUT the actual UBI (in theory) doesn't (and shouldn't) pass through a "means test", i.e. (still in theory) it is "Universal" and as such all citizens should get it regardless of their income or assets, as the Wikipedia article highlights:

>This article is about the system of unconditional income provided to every citizen. For the means-based model of social welfare, see Guaranteed minimum income.

https://en.wikipedia.org/wiki/Guaranteed_minimum_income

Guaranteed minimum income is also sometimes called "basic income" (without the "Universal"), which is the more pertinent model here, and actually the one on which some (mostly failed) experiments have been done or are being run.


Indeed. One of the main criticisms of both major Canadian experiments with UBI (Manitoba in the 70s, and Ontario in the last few years) was that while they paid regularly, both programs were relatively short-lived -- a few years. This doesn't appropriately model the incentives and behaviours people would likely engage in if they could rely on the program still paying them in ten or more years. It wouldn't capture either negatives like people leaving the workforce permanently, or positives like going back for a four-year degree they couldn't otherwise afford. Participants likely treated it as a one-time windfall (if one spread over a couple of years).


Nearly all UBI proposals involve a recurring payment. Otherwise it's not really something you can live on.


Often I begin from the assumption that the marketplace -- for some reason -- wants to gather as much information about customers as possible, rather than to sell them minimal products that meet their requirements, and so:

- Unless you're careful, searching around may lead you to multiple, spammy-looking websites and domains that appear designed to gather your purchase intent and search information, to share/sell and affect your decision-making

- The products that you find may include surplus functionality (be that hardware, software, subscriptions, tracking, account login requirements, ...) that isn't genuinely needed to meet the requirements that you have

- Since vendors want to build social influence around their products (again, to affect your purchase decision-making and that of your peers), they'll potentially provide rewards, discounts, talking points, and other perks to highly-networked individuals as long as those people remain brand-loyal

- Since continued revenue is an incentive for many vendors, they'll reinvent products on a regular basis and/or use planned obsolescence to encourage you to spend more than once for essentially the same functionality. That could be accompanied by marketing/social-influence campaigns to subtly (or not so subtly) discredit previously-acceptable products (especially if those continue to meet requirements). I can see there being public-good reasons for migration away from problematic products of the past; however I'm not convinced that they're commonly the reason these upgrade cycles are suggested

- If competing products emerge that may meet requirements and are seeing high adoption rates, there is a possibility that vendors will acquire ownership of the competing product outright (stifling competition, although also perversely creating incentives for new-entrant companies to create apparent-competitors that are largely intended to be flipped to a larger incumbent rather than to distribute a lasting higher-quality solution)

- Similarly, if competing products/technologies exist, then vendors may encourage the promotion of brand names that obscure (duplicate, or are similar to) the name of the competitor, causing various forms of confusion and dividing would-be adopters (and their opinions) between the vendor's brand and the competitor's brand

It's possible that I've misinterpreted and misunderstood some behaviours of industry here - but based on the above, you could be excused for thinking that the goal of these vendors is to extract as much revenue as possible from people, as opposed to providing lasting, effective and sustainable products.

By the sounds of it, I think what you want is something like a robust, reliable solid-state music player, of the kind commonly available at low cost over a decade ago.

The Wirecutter - generally a trustworthy resource - has a section on audio equipment[1], although they don't mention any portable personal music players, as far as I can tell (possibly because many people use their smartphones for this purpose, nowadays).

Rockbox[2] (not to be confused with a similarly-named line of music players) is an open source firmware project that can run on a range of devices[3], many of which may meet most of your requirements.

Unfortunately, however, it does not appear to have widespread Bluetooth support currently. There is work-in-progress[4] on that (last updated in 2020), but one of the challenges with free-and-open-source software is that timescales are difficult to predict, and adding demand/pressure for functionality and bugfixes doesn't always help, so it's hard to tell if-and-when that may be available.

The website geizhals.de (mentioned in the HN thread that you link to) has a fairly good price-comparison section[5] for portable music players, with many relevant filters.

In general: I try to wait until a product that meets requirements arrives (although this often means being well-behind-the-curve compared with peers), try to use the existing devices I have for as long as possible (and extend and enhance their functionality, an area where FOSS can be very helpful), and when possible, purchase products from vendors that have practices aligned with openness, minimalism, high-quality, sustainability and durability. It's difficult! And it leads to frustration after some purchases when realizing that they aren't up to expectations. But that's part of the learning process, too. Good luck.

[1] - https://www.nytimes.com/wirecutter/electronics/audio/

[2] - https://www.rockbox.org/

[3] - https://www.rockbox.org/wiki/TargetStatus

[4] - https://gerrit.rockbox.org/r/c/rockbox/+/3044

[5] - https://geizhals.de/?cat=mmp


(see also 'Sam Vimes "Boots" theory of socioeconomic unfairness': https://en.wikipedia.org/wiki/Boots_theory )


Slightly-curious, slightly-unnerved, slightly-paranoid question here:

Is there any chance that this code snippet was sampled from repositories that contained associated commentary/discussion, guiding GPT-3 to produce similar explanations?

Or is this genuinely an explanation that it can produce without context local to the code in question?

And how many people are able to determine the answer to my first question?


>Is there any chance

There is a chance.

To get a better answer, you have to specify what exactly you mean by 'sampled' and by 'without context local to the code in question'.


Roughly speaking, yep - Common Crawl provides a sizable chunk of web data (420 TiB uncompressed, over 3 billion unique URLs, as of May 2022; historic statistics here[1]), and is updated on a monthly basis. Not near-real-time, true, but relatively fresh.

A question to ask could be: how often do users care about information from a few minutes ago, compared to information that has been available for a longer duration of time?

[1] - https://commoncrawl.github.io/cc-crawl-statistics/


Isn't that more a question of adding to the mix frequent scraping of

- a few thousand news-sites (like nyt.com, bbc.co.uk),

- a few thousand very popular blogs (based on what influencers people search for),

- a handful of social media sites (e.g. Twitter),

- a few hundred databases in areas like weather, airlines, sports (like ATP for people who look for Wimbledon results today)?


I mean, any time someone wants information on current or recent events, that's your use case right there. If you exclude news entirely, you could maybe disregard recent websites, but I imagine that's statistically a pretty large portion of search.


> You can always publish your creation on your website or wherever.

What you're suggesting wouldn't solve the problem for critical packages, though; the effect would be similar to yesterday's package unpublish issue[1] (all users of the package would have to update their dependency references despite no change in the content of the code).

[1] - https://news.ycombinator.com/item?id=32026624


No, the issue yesterday was that people had to go in manually to fix their CI builds, etc. That's simply not necessary if you don't update the version and yanking is disallowed (which it should be).

After that, if people want to at some point update to a newer version of the package, yes, they will have to adjust.


Hrm, fair point. Although migrating 'backwards-compatibly' like that could leave a lot of people out in the cold if-and-when a security update for the package is released (and we're talking about critical packages here, at least for now).


How about you 1) leave old versions in place on the package index, 2) declare that you will not be providing further updates to the package through the index because of this issue, 3) go publish your stuff on your website or an alternate package index?


So what? Users can do that if the maintainer chooses to move hosting.


Sure, accepted. That'd be a disruptive change, though, and I think that a better approach is possible.

The article's suggestion regarding the separation of (immutable) packages and policy could provide some hints in that direction.


It seemed clearly-written and thought-provoking to me; the author doesn't claim that 2FA is a burden, either.

Writing and distributing software should be straightforward so that everyone can participate. And consuming software should be safe so that people and infrastructure are protected. Finding a security model that achieves both should be the goal.

PyPI appears to have walked a reasonable line on this so far, and it's worth considering and discussing what the future could be like.


Another reasonable point here is that PyPI is also offering package owners ~$75 (edit: didn't realise they were sold in 1-packs now) in modern USB-C FIDO2 keys if you have a package that is now marked as critical.


In particular, if package indexes start introducing additional requirements for developers, as mentioned in the article, then I do worry that it could risk placing an unreasonable burden onto developers (who may initiate or develop code purely for their own enjoyment).

Currently the "critical package" categorization may offset most of the likelihood of that occurring, although I'd expect there could be problems for some projects even so.

I wonder whether PyPI considered making 2FA-at-publish-time optional and instead offering 2FA-at-package-install-time as an option.

In other words: "pip install --2fa-signed-packages-only" or similar.

It's possible they didn't, or weren't able to, because the ecosystem is already widely deployed and many package version upgrades (including transitive dependencies) occur automatically.

Roughly speaking: I like the author's suggestion (quoted below) of making the software package ecosystem an immutable (content-addressed?) space, where policies and attestations about whether to use those packages are opt-in, based on rules-based overlays (see the sketch after the quote). That'd be ambitious but technically feasible, I think.

"So if I were to wish for something, then that the index has no policies beyond immutability of assets, and instead we use an independent layer of the index to enforce policies."


A correction/clarification since writing the parent comment: publication of packages requires an authentication token, and does not require an interactive 2FA challenge. Generating a suitable token for package publication, however, does.

(that implies that a naive implementation of the '--2fa-signed-packages-only' flag would mean 'packages that were published using tokens that were generated by a 2FA-authenticated user'; possibly a subtle distinction, but maybe worth mentioning)
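
To make that concrete (everything below is invented for illustration - PyPI's real JSON API exposes no such per-release attestation field today), the client-side policy could be a simple filter over release metadata:

    # Hypothetical release metadata; the 'published_with_2fa_token'
    # field does not exist in PyPI's actual API.
    releases = [
        {"version": "1.0.0", "published_with_2fa_token": False},
        {"version": "1.1.0", "published_with_2fa_token": True},
    ]

    def eligible(releases, require_2fa=True):
        """Filter behind a flag like '--2fa-signed-packages-only'."""
        if not require_2fa:
            return releases
        return [r for r in releases if r["published_with_2fa_token"]]

    print([r["version"] for r in eligible(releases)])  # -> ['1.1.0']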


Can you share one or two aspects of FOSS office software that caused problems for you?

(from my perspective, it's a modern miracle to be able to run "apt-get install libreoffice" at a command line and have freely-available, no-license-required office software ready within a few moments, including nearly like-for-like functionality and file-format compatibility with other office suites)


Sure. I'll take my technical hat off and put 'the average office user' hat on: I log in to my Ubuntu account and head to the Ubuntu app store. It's a galore of half-baked apps, with a few well-developed gems amongst them. "Apt-get install libreoffice" is great for us geeks, but if I went to my office colleagues and told them to run some commands in the shell, they'd think I'm mad.

I have no doubt that both GIMP and OpenOffice are close to feature parity, but the UX is just not there. The user interface has to be super slick everywhere, because the average user is very spoilt.

One hope I have is that Windows has been going down the drain usability-wise, so hopefully they'll screw things up even worse in Windows 11/12 etc., and the competition can pick up on it.


Thanks for the response!

> if I went to my office colleagues and told them to run some commands in the shell, they'd think I'm mad.

Depends on the colleagues, potentially - I often feel like I underestimate what other people are capable of learning, and the resulting conversations can seem unintentionally condescending as a result (i.e. not preparing and demonstrating what's possible, for fear that someone may not understand).

> The user interface has to be super slick everywhere because an average user is very spoilt.

Yep, that makes sense. However, whether I'm an employee, a business owner, an investor, or a partner who wants to see a business succeed: if I learn that the company is spending on software when there are lower-cost alternatives available that are ignored largely due to look-and-feel concerns... some cognitive dissonance may develop. Especially if the potential cost savings could be pooled with others towards resolving those issues.

(on a potentially more practical note: what I hear from you is that user experience frustration can lead to dissatisfaction with software; I'm not sure what the best routes forward there are, other than encouraging further feedback and finding ways to improve and promote product design in user-facing FOSS)

