More US-and-AI-centric dreck, unfortunately. So, right in the HN wheelhouse, I'm sure...

It's a hardware-isolated Linux shell with a durable disk you can conjure out of the sky on 1.5 seconds notice that costs virtually nothing when you're not actively using it. If we could have shipped this in 2021, before anyone thought coding agents would work, we'd have barfed on our shoes with excitement.


Please read and follow the HN Guidelines. https://news.ycombinator.com/newsguidelines.html

He's at home! Washing his tights!

I guess I can see how pre-installing some LLM agents makes it potentially seem "AI-centric", but I don't understand at all how this could be "US-centric".

the US is the centre of the world, and AI is at the centre of US attention!

why would anyone do anything else, anywhere else?


Not really -- any secrets stored using this method should also live in a password manager somewhere. It's about providing more-secure programmatic access to secrets.

Basically, it rebuilds Windows DPAPI from first principles, which is fine (I've done it many times myself!), and something non-Windows platforms sorely need. It changes the impact of malware from "they dumped all our secrets from prod to their C2" to "they got some encrypted values, and now someone will need to figure out our methodology and underlying keys", which is a meaningfully higher bar.
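
For illustration, a minimal sketch of the general idea (not this library's actual API; key management, which is the hard part, is hand-waved away entirely here):

    # Sketch of the DPAPI-style idea: secrets at rest are ciphertext, and
    # only a machine/user-scoped key can open them. Assumes Python with the
    # 'cryptography' package; key storage/derivation is deliberately omitted.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                  # real DPAPI derives this from machine/user credentials
    blob = Fernet(key).encrypt(b"db-password")   # what malware dumping your store would actually get
    assert Fernet(key).decrypt(blob) == b"db-password"  # decrypt only at the point of use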


Yeah, I have some bad news about that huge bug bounty you're expecting... ChatGPT was wrong, and there is no way to close the HackerNews account you just created, so all the abuse that deservedly comes your way will, in fact, be on your permanent record.

This is a known security issue in Telegram, one they stubbornly refuse to fix.

Ah, yes, I see... Are the known security issues that Telegram stubbornly refuse to fix in the room with us right now?

> it's probably worth avoiding the resampling of 44.1 to 48 kHz

Ehhm, yeah, duh? You don't resample unless there is a clear need, and even then you only downsample, never upsample, and you tell anyone who tries to convince you otherwise to go away and find the original (analog) source, so you can do a proper transfer.


That seems a rather shallow - and probably incorrect - reading of the source. This is an efficiency and trust trade-off, as noted:

> given sufficient computing resources, we can resample 44.1 kHz to 48 kHz perfectly. No loss, no inaccuracies.

and then further

> Your smartphone probably can resample 44.1 kHz to 48 kHz in such a way that the errors are undetectable even in theory, because they are smaller than the noise floor. Proper audio equipment can certainly do so.

That is, you don't need the original source to do a proper transfer. The author is simply noting

> Although this conversion can be done in such a way as to produce no audible errors, it's hard to be sure it actually is.

That is, resampling is not a bad idea in this case because it isn't going to introduce any sort of error if done properly; it's just that the author notes you cannot trust any random given resampler to do so.

Therefore if you do need to resample, you can do so without the analog source, as long as you have a re-sampler you can trust, or do it yourself.
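
For what it's worth, the "do it yourself" route is only a few lines these days (a sketch, assuming SciPy counts as a resampler you've decided to trust):

    # 48000/44100 reduces to 160/147, so a polyphase resampler can do the
    # conversion at an exact rational ratio. Verify the output against your
    # own quality bar -- that's the whole point being made here.
    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 44100, 48000
    g = np.gcd(fs_in, fs_out)            # 300
    up, down = fs_out // g, fs_in // g   # 160, 147

    t = np.arange(fs_in) / fs_in
    x = np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone, 1 second
    y = resample_poly(x, up, down)       # 48000 samples out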


Speaking of a resampler you trust, I’ve had good experience with libsamplerate (http://www.mega-nerd.com/SRC/), which as of 2016 is BSD licensed.

If only it was that simple T_T

I'm working on a game. My game stores audio files as 44.1kHz .ogg files. If my game is the only thing playing audio, then great, the system sound mixer can configure the DAC to work in 44.1kHz mode.

But if other software is trying to play 48kHz sound files at the same time? Either my game has to resample from 44.1kHz to 48kHz before sending it to the system, or the system sound mixer needs to resample it to 48kHz, or the system sound mixer needs to resample the other software from 48kHz to 44.1kHz.

Unless I'm missing something?


You are right; the system sound mixer should handle all resampling unless you explicitly take exclusive control of the audio device. On Windows at least, this means everything generally gets resampled to 48kHz. If you are trying to get the lowest latency possible, this can be an obstacle... on the order of single-digit milliseconds.

And actually, why do we have both 48kHz and 44.1kHz anyway? If all "consumer grade high quality audio" was in 44.1kHz (or 48kHz) we probably could've avoided resampling in almost all circumstances other than professional audio contexts (or for already low quality audio like 8kHz files). What benefit do we get out of having both 44.1 and 48 that outweighs all the resampling it causes?

> And actually, why do we have both 48kHz and 44.1kHz anyway?

Those two examples emerged independently, like rail standards or any number of other standards one can cite. That's really just the top of the rabbit hole, since there are 8-20 "standard" audio sample rates, depending on how you count.

This isn't really a drawback, and it does provide flexibility when making tradeoffs for low bitrates (e.g. 8 kHz narrowband voice is fine for most use cases) and for other authoring/editing vs. distribution choices.


> This isn't really a drawback

But, that's only true because people freely resample between them all the time and nobody knows or cares about it.


The nice thing about standards is, there are so many from which to choose! :)

Perhaps the benefit we get is access to existing recordings?

44.1kHz exists because it was the lowest technically practical rate, an optimization for processing speed and storage space.

48kHz exists because it syncs with video easily — I’ve also heard it allows for more tolerance in the anti-aliasing filter.


> 48khz exists because it syncs with video easily

I guess meaning 24fps video? Because 44100 is already a multiple of 25, 30, 50, and 60.
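
The arithmetic, spelled out:

    # Samples per video frame; an integer means frame boundaries land
    # exactly on sample boundaries.
    for rate in (44100, 48000):
        for fps in (24, 25, 30, 50, 60):
            print(rate, fps, rate / fps)
    # 44100: only 24fps fails (1837.5); 25/30/50/60 divide evenly
    # 48000: all five divide evenly (2000, 1920, 1600, 960, 800)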


24fps is where the money is because 24fps is the standard for film.

48k also supports 24, 48, 120, and 240, which are all nice-to-haves.

As far as I understood, both rates ultimately come from trying to map to video standards of the time. 44.1 kHz mapped great to reusing analog video tape of the time, 48 kHz mapped better to digital clocking and integer multiples of video standards while also having a slightly wider margin for filtering the high frequencies.

44.1 kHz never really went away because CDs continued using it, allowing them to take any existing 44.1 kHz content as well as to fit slightly more audio per disc.

At the end of the day, the resampling between the two doesn't really matter and is more of a minor inconvenience than anything. There are also lots of other sampling rates which were in use for other things too.


> Why do we have both 48kHz and 44.1kHz anyway

Because of greed.

Early audio manufacturers (Sony, notably) used 48kHz for professional-grade audio equipment that would be used in studios or TV stations, and degraded 44.1kHz audio for consumer devices. Typically you would pay an order of magnitude more for the 48kHz version of the hardware.

48kHz is better for creating and mixing audio. You cannot practically mix audio at 44.1kHz without doing very slight damage to audible high frequencies. Slight, but enough to make a difference. If you were creating for consumer devices, you would mix at 48kHz and then downsample to 44.1kHz during final mastering, since conversion from 48kHz to 44.1kHz can be done theoretically (and practically) perfectly. (Opinions of the OP notwithstanding.)

I think it's safe to say that the 44.1kHz sampling rate was maliciously selected specifically because it is just low enough that perfect playback is still possible, but perfect mixing is practically not possible. And obviously maliciously chosen to be a rate with no convenient common divisor with 48kHz, which would have allowed easy and cheap perfect realtime resampling. Had Sony chosen 44.0kHz, it would be trivially easy to do sample rate conversion to 48kHz in realtime even with the primitive hardware available in the late 1970s. That extra .1kHz is transparently obvious malice and greed in plain sight.

Presumably Sony would sell you the software or hardware to perform perfect non-realtime conversion of audio from 48kHz to 44.1kHz for a few tens of thousands of dollars. Not remotely subtle how greedy all of this was.

There has been no serious reason to use 44.1kHz instead of 48kHz for about 50 years, at least from a technology point of view. (And no real reason to EVER use 44.1kHz instead of 48kHz other than GREED).


The Wikipedia page explains it as coming from PCM adaptors that put digital audio on video tapes. The constraints of recording on videotape led to 44.1kHz being the best option. It sounds like there wasn't enough capacity for 48kHz.

Then Sony used the frequency on CDs.


Are you able to share evidence for this?

What would you consider evidence? Emails between standards committee members agreeing to collude in order to screw pro-audio customers?

The evidence is: why on earth would anyone on a standards committee choose 44.1kHz, instead of 44.0kHz? The answer: 44.1kHz was transparently obviously chosen to make it impossible to perform on-the-fly rate conversions.

The mathematics of polyphase rate converters was perfectly well understood at the time these standards were created.
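
Putting numbers on the difference (a quick sketch):

    # A polyphase converter needs one filter phase per unit of the reduced
    # upsampling factor, so the reduced ratio drives the hardware cost.
    from math import gcd

    for src in (44000, 44100):
        g = gcd(src, 48000)
        print(f"{src} -> {48000 // g}/{src // g}")
    # 44000 -> 12/11   (12 phases: plausible for late-1970s hardware)
    # 44100 -> 160/147 (160 phases: far more coefficients and state)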


Someone else wrote that it was chosen to best match PAL and NTSC. IIRC there is also a Technology Connections video about those early PCM adaptor devices that would record to VHS tape.

<https://en.wikipedia.org/w/index.php?title=44,100_Hz&oldid=1...>

Take it with a grain of salt, I’m not really knowledgeable about this.

E: also note the section about prime number squares below


CDs used 44.1, DAT and DVDs used 48. That’s it.

Most stuff on the internet ripped from CD is 44.1. 48 is getting more common. We’re like smack in the middle of the 75 year transition period to 48kHz.

For new projects, I use 48, because my mics are 32bit (float!)/48kHz.


Technically we could use 40kHz and just upsample; the extra bandwidth over 40kHz is basically leeway to make the analog part possible/cheap, but it is not technically needed in the signal.

The first CD player didn't have the compute power to upsample perfectly, but modern devices certainly do.


AFAIU, 40kHz exactly wouldn't really work, if your goal is to represent 0Hz-20kHz: in order to avoid aliasing, you need a low pass filter to remove all frequency content above half your sample rate, and no filter is infinitely steep (and you generally want to give the filter a decent range of frequencies to work with). If you want to start your low pass filter at 20kHz, you want it to end (i.e. reach practically -∞dB) at a few kHz above 20kHz. If you used a sample rate of exactly 40kHz, you would need your low pass filter to reach -∞dB at 20kHz, meaning it'd have to start somewhere in the audible region.

Though this is just my understanding. Maybe I'm wrong.
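
The numbers behind that intuition:

    # Transition-band headroom above a 20 kHz passband, per sample rate.
    for fs in (40000, 44100, 48000):
        print(fs, "Hz ->", fs / 2 - 20000, "Hz of transition band")
    # 40000 ->    0.0 Hz (the filter would need to be infinitely steep)
    # 44100 -> 2050.0 Hz (possible, but steep and historically expensive)
    # 48000 -> 4000.0 Hz (roughly double the room; much cheaper filters)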


Is this not the job of the operating system or its supporting parts, to deal with audio from various sources? It should not be necessary to inspect the state of the OS your game is running on to know what kind of audio you can play back. In fact, that could even be considered spying on things you shouldn't. Maybe the OS or its sound system does not abstract that away from you and I am wrong about the state of OSes in reality, but this seems to me like a pretty big oversight, if true. If I extrapolate from your use-case, that would mean any application performing any playback of sound needs to inspect whether something else is running on the system. That seems like a pretty big overreach.

As an example, let's say I change frequency in Audacity and press the play button. Does Audacity now go and inspect whether anything else on my system is making any sound?


It is, and it is done, but you might not have control over the process.

In PulseAudio you can choose the resampling method you want to use for the whole mixing daemon, but I don't think that's an option on Windows/macOS.


The OS "deals with it" by resampling when necessary.

Depends on platform. But yes.

It is also the job of the operating system or its supporting parts to allow applications to configure audio devices to specific sample rates if that's what the application needs.

It's fine to just take whatever you get if you are a game app, and either allow the OS to resample, or do the resampling yourself on the fly.

Not so fine if you are authoring audio, where the audio device rate ABSOLUTELY has to match the rate of content that's being created. It is NOT acceptable to have the OS doing resampling when that's the case.

Audacity allows you to force the sample rate of the input and output devices on both Windows and Linux. Much easier on Windows; utterly chaotic and bug-filled and miserable and unpredictable on Linux (although up-to-date versions of Pipewire can almost mostly sometimes do the right thing, usually).


> Is this not the job of the operating system or its supporting parts, to deal with audio from various sources

I think that's the point? In practice the OS (or its supporting parts) resample audio all the time. It's "under the hood" but the only way to actually avoid it would be to limit all audio files and playback systems to a single rate.


I don't understand then, why they need to deal with that when making a game, unless they are not satisfied with the way that the OS resamples under the hood.

My reading is not that they're saying it's something they necessarily have deal with themselves, but that it's something they can't practically avoid.

But they CAN practically avoid it. lol. Just let the system do it for them.

If my audio files are 44.1kHz, and the user plays 48kHz audio at the same time, how do I practically avoid my audio being resampled?

You cannot avoid it either way then, I guess. Either you let the system do it for you, or you take matters into your own hands. But why do you feel it necessary to take matters into your own hands? I think that's the actual question that begs answering. Are you unsatisfied with how the system does the resampling? Does it result in a worse quality than your own implementation of resampling? Or is there another reason?

I don't feel it necessary to take matters into my own hands. If you read my original message again:

    > Either my game has to resample from 44.1kHz to 48kHz
    > before sending it to the system, or the system
    > sound mixer needs to resample it to 48kHz, or the
    > system sound mixer needs to resample the other software
    > from 48kHz to 44.1kHz
I expressed no preference with regard to those 3. I was outlining the theoretically possible options, to illustrate that there is no way to avoid resampling.

I got a different impression, because you also wrote:

> If only it was that simple T_T

Which to me sounded like _for you_ it's not simple because reasons, which led me to believe that you _do_ want to take it into your own hands, making it not simple, ergo not being able to let the OS do it, for reasons. Now I understand what you mean, thanks!


I suppose, if you interpret "avoid" as "not care about".

I interpret them to mean "avoid doing it oneself" not "avoid it happening entirely".

If you read the comments with the other interpretation I think the conversation will make more sense.

Getting pristine resampling is insanely expensive and not worth it.

If you have a mixer at 48kHz you'll get minor quantization noise, but if it's compressed already it's not going to do any more damage than compression already has.


That's a clear need IMO, but it'd be slightly better if the game could have 48 kHz audio files and downsample them to 44.1 kHz for playback than the other way around (better to downsample than upsample).

44.1kHz sampling is sufficient to perfectly describe all analog waves with no frequency component above 22050Hz, which is substantially above human hearing. You can then upsample this band limited signal (0-22050Hz) to any sampling rate you wish, perfectly, because the 44.1kHz sampling is lossless with respect to the analog waveform. (The 16 bits per sample is not, though for the purposes of human hearing it is sufficient for 99% of use cases.)

https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...
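
And the nice thing is you don't have to take the theorem's word for it; you can measure a given resampler directly (a sketch, assuming NumPy/SciPy):

    # Resample a band-limited tone from 44.1 kHz to 48 kHz and compare
    # against the analytically correct 48 kHz samples. Print the error and
    # judge it against the 16-bit noise floor (~ -96 dBFS) yourself.
    import numpy as np
    from scipy.signal import resample_poly

    f, fs_in, fs_out = 10_000, 44100, 48000
    x = np.sin(2 * np.pi * f * np.arange(fs_in) / fs_in)
    y = resample_poly(x, 160, 147)                 # 44.1 kHz -> 48 kHz
    ref = np.sin(2 * np.pi * f * np.arange(len(y)) / fs_out)
    err = np.abs(y - ref)[1000:-1000].max()        # trim filter edge effects
    print(20 * np.log10(err), "dBFS")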


22050 Hz is an ideal unreachable limit, like the speed of light for velocities.

You cannot make filters that would stop everything above 22050 Hz and pass everything below. You can barely make very expensive analog filters that pass everything below 20 kHz while stopping everything above 22 kHz.

Many early CD recordings used cheaper filters with a pass-band smaller than 20 kHz.

For 48 kHz it is much easier to make filters that pass 20 kHz and whose output falls gradually until 24 kHz, but it is still not easy.

Modern audio equipment circumvents this problem by sampling at much higher frequencies, e.g. at least 96 kHz or 192 kHz, which allows much cheaper analog filters that pass 20 kHz but which do not attenuate well enough the higher frequencies, then using digital filters to remove everything above 20 kHz that has passed through the analog filters, and then downsampling to 48 kHz.

The original CD sampling frequency of 44.1 kHz was very tight, despite the high cost of the required filters, because at that time, making 16-bit ADCs and DACs for a higher sampling frequency was even more difficult and expensive. Today, making a 24-bit ADC sampling at 192 kHz is much simpler and cheaper than making an audio anti-aliasing filter for 44.1 kHz.


You mean average human hearing?

They're both fine (as long as the source is band-limited to 20kHz, which it should be anyway).

The analog source is never perfectly limited to 20 kHz because very steep filters are expensive and they may also degrade the signal in other ways, because their transient response is not completely constrained by their amplitude-frequency characteristic.

This is especially true for older recordings, because for most newer recordings the analog filters are much less steep, but this is compensated by using a much higher sampling frequency than needed for the audio bandwidth, followed by digital filters, where it is much easier to obtain a steep characteristic without distorting the signal.

Therefore, normally it is much safer to upsample a 44.1 kHz signal to 48 kHz, than to downsample 48 kHz to 44.1 kHz, because in the latter case the source signal may have components above 22 kHz that have not been filtered enough before sampling (because the higher sampling frequency had allowed the use of cheaper filters) and which will become aliased to audible frequencies after downsampling.

Fortunately, you almost always want to upsample 44.1 kHz to 48 kHz, not the reverse, and this should always be safe, even when you do not know how the original analog signal had been processed.


Yeah, but you can record it in 96kHz, then resample it perfectly to 44.1 (hell, even just 40) in the digital domain, then resample it back to 48kHz before sending it to the DAC.

True.

If you have such a source sampled at a frequency high enough above the audio range, then through a combination of digital filtering and resampling you can obtain pretty much any desired output sampling frequency.


The point is that when downsampling from 48 to 44.1 you can do the filtering for "free", since the downsampling is being done digitally with an FFT anyway.

> Unless I'm missing something?

I suppose the option you're missing is you could try to get pristine captures of your samples at every possible sample rate you need / want to support on the host system.


DAC frequency and the audio format requirements for whatever you supply to your platform audio API have literally nothing to do with each other.

My reply was from an audio mastering perspective.


You're not missing anything. You can resample them safely, as stated by the author. They simply state you should check the resampler:

> Although this conversion can be done in such a way as to produce no audible errors, it's hard to be sure it actually is.

That is, you should verify the resampler you are using, or implement one yourself, in order to be sure it is done correctly; with today's hardware that is easily possible.


If 44.1kHz is otherwise sufficient but you have a downstream workflow that is incompatible, there are arguments for doing this. It can be done with no loss in quality.

From an information theory perspective, this is like putting a smaller pipe right through the middle of a bigger one. The channel capacity is the only variable that is changing and we are increasing it.


This isn't the whole picture because they aren't multiples of each other, so there will be interpolation/jitter

For example if you watch a 24fps film on a 60fps screen, in contrast to a 120fps screen


That's not how audio works. PCM data at some sample rate with infinite precision samples perfectly encodes all the audio data of all frequencies up to half the sample rate. Resampling is a theoretically lossless operation when the frequency content of the audio fits within half the sample rate of both the source and the destination sample rates (which will always be true when resampling to a higher sample rate, FWIW).

The issues are that 1) resampling has a performance and latency cost, 2) better resampling has a higher performance and latency cost


A very common clear need is incorporating 44.1kHz audio sources into video. 48kHz is 48kHz because 48kHz divided by 24fps, 25fps, or 30fps is an integer (and 44.1kHz divided by 24fps is not).

Also, for decades upsampling on ingest and downsampling on egress has been standard practice for DSP because it reduces audible artifacts from truncation and other rounding techniques.

Finally, most recorded sound does not have an original analog source, because of the access digital recording has created… YouTube, for example.


There's no link to the data sheet of the actual cable, but, yeah, looks like this should not have happened in such a short timeframe unless there's something really funny going on in that room, like ambient temperatures above 50 degC.

Another thing that should not have happened is installing the cable in loops in this way: any 'building' or 'underground' type cable needs to be of the exact length required at the demarcation point, fastened properly to prevent movement and terminated on a proper patch panel (can be a one-port box-type thingy for small setups), from where you use regular patch cords to connect your equipment.

(Loops are definitely allowed though, but that use case is mostly for aerial fiber, to enable repair splices, and there are some very specific bend-radius and strain-relief requirements, which, again, should be spelled out in the cable data sheet)


>Another thing that should not have happened is installing the cable in loops in this way: any 'building' or 'underground' type cable needs to be of the exact length required at the demarcation point

This is awful advice I would discard immediately. It's poor practice and against code.

When pulling cable, especially fiber, the ends of the cable should be able to reach the farthest corner of the room. Excess cable should be in a service loop, properly secured to a wall, and terminated on a patch panel. Both ends of the cable should follow this rule. That means you're typically pulling cable that's 15m longer or more, depending on the room and configuration.

NEVER buy and pull cable that is the exact size. The cable literally comes from the factory looped up, it's designed to be looped (watch bend radius).

>Loops are definitely allowed though, but that use case is mostly for aerial fiber to enable repair splices

Again, awful advice that's against code. Underground fiber must have service loops at both ends, and must be terminated to a patch panel.


> any 'building' or 'underground' type cable needs to be of the exact length required at the demarcation point, fastened properly to prevent movement and terminated on a proper patch panel (can be a one-port box-type thingy for small setups)

How exact is exact? :-) I once had to reterminate some fiber that was cut and terminated to exact length, which means there was literally two centimeters from the wall to the connector. I literally had to squeeze the fiber splicer up against the wall to have a chance at splicing on new pigtails, but I had two mis-cuts and I was hosed. :-)


> Another thing that should not have happened is installing the cable in loops in this way: any 'building' or 'underground' type cable needs to be of the exact length required at the demarcation point...

This hasn't been my experience with fiber entrance cables terminated by ILECs, Spectrum, and Lumen. They typically leave a significant service loop bound to the cable ladder or backer board-- usually 15-20 feet.


Depends on the type of cable assembly. If it's fiber strands inside a soft-ish plastic jacket (and most of the cable is in fact in conduit), a service loop is fine, albeit a bit pointless for most repair scenarios. For armored cables (which are significantly stiffer), you only do these loops in situations where you expect to need to replace significant sections (think 'getting hit by a falling tree' or 'particularly aggressive rodents') and you have the space.

Both tree and rat took out my fiber so the loops are definitely useful. If your fiber goes through your whole house it's significantly less work to only have to reconnect one end instead of redoing the whole run.

I have a > 75ft service loop on a 48-count underground burial fiber from the street.

Thanks, I really appreciate the SMEs commenting here. I'm learning a lot.

Definitely learnt it the hard way this time. You're right that buried cables should be exact in length and fastened to a patch panel. I'll probably look into better conduit design as well for the next time (in 15 years?). Having shared conduits means I would risk damaging other cables if I tried to pull a new cable through.


Good conduit and patch panel design is definitely key for a happy life. Leaving some extra space/capacity initially is also a good idea, especially since (unless you're covering truly great distances) there's not exactly a lot of innovation in the single mode fibre space: strands you put in today (even if it's 'the cheapest stuff your vendor sells most of', which is generally my philosophy for selecting cables) will still be viable a few years down the road.

Sharing/in-place-repurposing conduit is not something I'd recommend, but if you must, leave a few dummy cables (a.k.a. 'pieces of string') on the initial install...


From one of the photos, the cable spec "G657A2" is visible on the outside - and specs listed for that indicate it's "bending insensitive single-mode fibre", apparently it can tolerate 10 loops around 15mm mandrel. (Which does surprise me).

But yes, agreed, a lot of "Er... why would you do it like that?" bits.


Those 10 loops definitely only apply to the single mode fibre itself, not the entire assembly with armor and everything, because that's just... physically impossible.

Cables for direct burial only like to be bent once or twice, and then only gently. Anything else may very well break the armor (whether plastic or metal), after which all bets are off.

Still, for the outer jacket to become brittle to the extent described, something else is required, which may very well turn out to be "shoddy manufacturing"...


> Can this be fixed?

For popular senders, sort of: in your incoming mail server, substring-match the display name of the sender against popular brands, and ensure the actual domain matches.

This works remarkably well for proper brands (FedEx et al), but breaks down when the brand name regularly occurs in "normal" names, the sending brand sends mail from all over the place, or "innocuous" impersonation takes place all the time.

Like, somehow, From: "VODAFONE" <shipping-update@dpd.co.uk> is a 100% legit sender (assuming SPF and DKIM verification pass), despite both Vodafone and DPD being pretty common impersonation targets. You'd think they'd know better, but alas.
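
Something like this, roughly (a sketch; the brand-to-domain map and the crude matching are illustrative, not anyone's production rule set):

    # Flag mail whose display name drops a known brand while the actual
    # sending domain belongs to someone else entirely.
    from email.utils import parseaddr

    BRANDS = {"fedex": "fedex.com", "vodafone": "vodafone.com"}  # assumed examples

    def suspicious(from_header: str) -> bool:
        display, addr = parseaddr(from_header)
        domain = addr.rsplit("@", 1)[-1].lower()
        return any(brand in display.lower() and not domain.endswith(legit)
                   for brand, legit in BRANDS.items())

    print(suspicious('"FedEx" <update@evil.example>'))           # True
    print(suspicious('"VODAFONE" <shipping-update@dpd.co.uk>'))  # True -- the legit sender above gets flagged too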

So, yeah, room for improvement and such...


Use <service>@<yourdomain> as your email address when signing up, and check the To header when receiving emails.

And/or, long-press or right-click on any link to inspect the linked domain.


I often go one step further by appending a short random identifier, `{service}.{id}@{domain}`, to make it harder to guess (in case someone learns of my email address policy).

I created a little GTK program to help: https://github.com/LightAndLight/gen-alias
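
The core of it is tiny; a minimal CLI-flavored equivalent (not the actual gen-alias code):

    # Generate a hard-to-guess per-service alias like shop.9f3a1c@example.com
    import secrets

    def gen_alias(service: str, domain: str) -> str:
        return f"{service}.{secrets.token_hex(3)}@{domain}"

    print(gen_alias("shop", "example.com"))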


Yes, it’s really <f(service, rand())>.

What fraction of people do you suppose actually have a <yourdomain> to do this with?

Even some highly technically inclined people (like myself) can be entirely ignorant of the process. It's not as if consumer ISPs provide the service.


Sub-addressing (doing handle+tag@domain.com) is supported by many email services, but + may be flagged as an illegal character.

At least Hotmail, Gmail, and Apple's various mail services support it, though with Apple, just using Hide My Email makes that whole idea fully and beautifully automated for normies.

The process isn't difficult and is worth acquainting yourself with.

If you don't control your own domain fully, almost all email services let you do:

user+servicetag@domain.com

And have it go to user@domain.com with the servicetag still in the To: field. At least, I have never encountered a problem with this.


Some sites (hulu maybe? iirc) strip off the + and treat it as a bare email, with dedupe checks and all that.

Spammers won't respect the + either, they will clean their list of any +tags before sending.

The best I've actually come across is to abuse Gmail's period policy. I haven't seen sites dedupe this or perform any other checks or manipulation.

If you have enough letters in your alias you can treat the possible period locations as binary. For example, pests@ would have 4 available spots, so I could make 16 different dot addresses: pests@, pest.s@, pes.ts@, pes.t.s@, pe.sts@, pe.st.s@, [...], p.e.s.t.s@

Then you can just remember/record the decimal ID you used per site.
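
A little sketch of the enumeration, treating each gap as one bit:

    # 4 gaps in "pests" -> 16 dot variants, all one Gmail mailbox.
    def dotted(local: str, n: int) -> str:
        out = []
        for i, ch in enumerate(local):
            out.append(ch)
            if i < len(local) - 1 and (n >> i) & 1:
                out.append(".")
        return "".join(out) + "@gmail.com"

    for n in range(16):
        print(n, dotted("pests", n))   # 0 -> pests@..., 15 -> p.e.s.t.s@...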


> Spammers won't respect the + either, they will clean their list of any +tags before sending.

That's the entire point: if you get an email from the site but it doesn't include your +servicename tag, then you can immediately tell it's a phishing attempt or spam. If the tag is there, it's not a 100% guarantee that it's legit, but absence of the tag is a big red flag.


You can't tell who it came from, though, unlike with my method at least.

Also, the +tag could get lost through just normal data cleanup/normalization.


And then the spammers (or other illegitimate source) just add this to their processing…

^([^@+]+)\+[^@]*(@.*)$


The use case here is using a unique email address to help verify the sender of the email, it's not connected to spam usage.

So you’re suggesting the sender use the + modifier on the from address?

Here's the suggestion:

>Use <service>@<yourdomain> as your email address when signing up, and check the To header when receiving emails.

The user of the webservice specifies a unique email per webservice; knowledge of that unique email address serves as a hint that the email came from someone that has discovered that email address, i.e. the webservice itself.


Right, so 99% of the time that’s a spammer that is going to use that discovered email. I updated my message to specify other illegitimate sources to cover that less than 1%

Well, I can travel around most of Europe without a passport right now? ID card will do just fine, even though, really, nobody ever asks me for that either.

Meanwhile, my life expectancy is, like, at least twice that 'prior to WW1', and my disposable income at least 20 times my take-home pay in 1917.

Bloody EU, innit...


Well, these days you can catch a Flixbus from London to Sofia for a mere 150 Europounds, and 48 hours of your time. And from there, Calcutta can't be that far, right?

(But, seriously, you can probably do it in another 48 hours...)


You just need a nuclear-powered Big Bus.

https://image.tmdb.org/t/p/w1066_and_h600_bestv2/l5n4h4gmRtj...

https://imcdb.org/i065460.jpg

https://en.wikipedia.org/wiki/The_Big_Bus

"Cyclops has a passenger capacity of 110 and is equipped with a bowling alley, Asian-style cocktail lounge with a piano bar, swimming pool, Bicentennial dining room, private marble-and-gold bathroom with sunken tub, and chef's kitchen."


This was the first thing I thought of when I read the article. I remember making my own nuclear-powered big bus out of legos after watching that movie.

> You just need a nuclear-powered Big Bus

Ah, so there is a chance!


From Sofia to the West of Turkey should be relatively easy. After that, travel through Iran, Afghanistan and Pakistan will get hairy.

You'd be surprised at how cheap, relatively safe and reliable bus services are in those regions.

Source: me and my wife traveled extensively by public transport in, well, at least Pakistan. The other countries are indeed sort-of hairy, but mostly for job-clearance-related reasons.


Tens of thousands of religious pilgrims travel by bus between Pakistan and Iran every year. You can just avoid Afghanistan.

Generally the border between Pakistan and India cannot be crossed, though. I believe Attari/Wagah is the only place, and it was closed too, last I heard.

Indeed, it has been closed since the aerial clashes last year. But we can hope for peace, and with it, cross-border tourism.

Can locals still cross at Kasur/Ganda Singh Wala like before 1971?

To my knowledge, no. In the recent past, either you could cross via the Wagah-Attari border crossing or get on the Thar Express train [1], which connects Karachi with Jodhpur via the Zero Point crossing. But the Thar Express has been closed since 2019.

[1] https://en.wikipedia.org/wiki/Thar_Express


They should avoid Pakistan if they can, not Afghanistan. The latter is in relative peace, while the border region of Iran/Pakistan sees regular fighting between Pakistani forces and Baluch separatists[1].

1. https://en.wikipedia.org/wiki/Balochistan_Liberation_Army


There were some bad roads in the Balkans as well.

My Google Maps algo will be massively confused by my 'Directions' search: Sofia, Bulgaria to Kolkata, India on a Friday afternoon in January

It gives up. Besides, these days it's Kolkata.

Note that the complete title is "How Samba was Written", but apparently there is a list of unapproved interrogative adverbs...

Also note that since the MS-EU settlement, the SMB protocol is quite extensively, if a bit passive-aggressively and opaquely, described in a series of documents that Microsoft updates to this day, e.g. https://learn.microsoft.com/en-us/openspecs/windows_protocol...


A lot of the "fun" with those documents is that some of them quite probably were archeological digs on Microsoft's side too.

For example, the MAPI specs have references to valid parts of the protocols and data structures that are not used anywhere and which in fact crash MAPI libraries (so Outlook and Exchange just throw errors if you give them such data), sometimes giving a glimpse of how there might have been abandoned features that were never delivered.

Like, surprisingly, HTML email support in Outlook[1] :D

[1] The MAPI Message struct, and thus Exchange and Outlook, crashes when encountering an "HTML message", let's call it a "submessage". Turns out the valid way to save an HTML message is by wrapping it in RTF and saving it as a Rich Text submessage. Plain text is another submessage.


Yeah, Hacker News does that. I've heard you can simply edit the title after submitting to fix Hacker News' "fixes" but I've not submitted enough things to give it a try.

This is correct (I've done it a few times). I think there's an edit window though and at 8 hours we're well outside that.

The other option (e.g. when it's not your submission) is to email the mods, which I've just done, and they will fix it up if appropriate.


So, ehhm, yeah, I sort-of question the data in this article. A US$5.50 burrito in Downtown SF in 2014? Nah... even Taco Bell takeout was already more expensive than that at the time.

Also: sure, some places overdo it on the pricing: I distinctly remember walking out of a Rotterdam (somewhere in The Netherlands) establishment due to them charging 25 Euros for a lunch sandwich, like 2 decades ago, despite this not being a fancy place at all. No inflation in sight, just greed and/or an inability to read the target audience...


They said they used Yelp photos, so it seems like you can verify this.

Here's one from 2014, $5.50 https://www.yelp.com/biz_photos/taqueria-canc%C3%BAn-san-fra...

Here's one from 2015, $6.99 https://www.yelp.com/biz_photos/taqueria-canc%C3%BAn-san-fra...


Yeah, I did in fact read the original article, and the glossy (unreadable) and undated photographs did not convince me. Therefore, the Taco Bell comparison, and the rest of my comment.

I used to eat there regularly. It had been $4.99 for years and jumped to $5.50 in 2014.

$5.50 - dated March 14, 2014: https://www.yelp.com/biz_photos/taqueria-cancun-san-francisc...

$5.50 - dated February 28, 2014: https://www.yelp.com/biz_photos/taqueria-cancun-san-francisc...

Mind, that's a regular burrito and it was one of the cheaper ones in SF. El Farolito was around $6 at the time. La Taqueria started at $8 for people who liked to splurge. El Papolote's yuppie burritos were like $10.

The price at Taqueria Cancun jumped again in 2014, iirc, after La Taqueria winning best burrito in the US normalized the higher price and got them some press (Taqueria Cancun was in one of the final brackets).

