
So put a slightly more informative hello world example then.

Look at the Go homepage. Or Nim. (But not Rust sadly.)


Yeah and Linux is waaay behind in other areas. Windows has had a secure attention sequence (Ctrl+Alt+Del to log in) for several decades now. Linux still doesn't.

Linux (well, more accurately, X11) has had a SAK for ages now, in the form of CTRL+ALT+BACKSPACE, which immediately kills X11, booting you back to the login screen.

I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.


That's not a SAK, you can disable it with setxkbmap. A SAK is on purpose impossible to disable, and it exists on Linux: Alt+SysRq+K.

Unfortunately it doesn't take any display server into consideration: both X11 and Wayland will just get killed.


There are many ways to disable CTRL+ALT+DEL on Windows too, from registry tricks to group policy options. Overall, SAK seems to be a relic of the past that should be kept far away from any security consideration.

There shouldn't be any non-privileged ways to disable ctrl-alt-del.
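For what it's worth, on Windows the relevant setting lives under HKLM, so changing it does require administrative rights; a sketch from memory (treat the exact path and value name as an assumption):

```
Windows Registry Editor Version 5.00

; DisableCAD = 0 keeps Ctrl+Alt+Del required at logon. The key sits
; under HKEY_LOCAL_MACHINE, so only administrators can change it.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"DisableCAD"=dword:00000000
```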

The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.

This setup came from the era of Windows running basically everything as administrator or something close to it.

The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.


Please check the related Wikipedia article, updated to reflect the recent secure attention key work in the Linux world: https://en.wikipedia.org/wiki/Secure_attention_key


That's not the same thing at all.

No, it's not. It has various functionality, as shown by the built-in help:

> Example output of the SysRq+h command:

> sysrq: HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) force-fb(v) show-blocked-tasks(w) dump-ftrace-buffer(z) dump-sched-ext(D) replay-kernel-logs(R) reset-sched-ext(S)

But note "sak (k)".
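As an aside, Alt+SysRq+K only fires if the keyboard group of SysRq functions is enabled; a hedged sketch of the sysctl fragment (filename assumed, bitmask per the kernel's admin-guide docs):

```
# /etc/sysctl.d/90-sak.conf (assumed filename)
# kernel.sysrq is a bitmask; bit value 4 enables the keyboard-control
# group of SysRq functions, which includes SAK (Alt+SysRq+K).
kernel.sysrq = 4
```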


That kills X! Hardly useful.

How's it go again, "Raising Elephants Is So Utterly Boring"?

Like the GP says in sibling, Alt+SysRq+K is SAK on Linux. But it doesn't work with graphical environments.

Is that something Linux needs? I don’t really understand the benefit of it.

The more powerful form is the UAC full privilege escalation dance that Windows (Vista and later) does, which is a surprisingly elegant UX solution.

   1. Snapshot the desktop
   2. Switch to a separate secure UI session
   3. Display the snapshot in the background, greyed out, with the UAC prompt running in the current session and topmost
It avoids any chance of a user-space program faking or interacting with a UAC window.

Clever way of dealing with the train wreck of legacy Windows user/program permissioning.


My only experience with non-UAC endpoint privilege management was BeyondTrust and it seemed to try to do what UAC did but with a worse user experience. It looks like the Intune EPM offering also doesn't present as clear a delineation as UAC, which seems like a missed opportunity.

One of the things Windows did right, IMO. I hate that elevation prompts on macOS and most Linux desktops are indistinguishable from any other window.

It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NT AUTHORITY\SYSTEM can initiate showing it and interact with it in any way; no other process can.

You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.

I'm not even sure if Wayland supports doing something like that.


> Display the snapshot in the background, greyed out,

Is there an offset? I could have sworn things always seemed offset to the side a little.


It made a lot more sense in the bygone years of users casually downloading and running exe's to get more AIM "smilies", or putting in a floppy disk or CD and having the system autoexec whatever malware the last user of that disk had. It was the expected norm for everybody's computer to be an absolute mess.

These days, things have gotten far more reasonable, and I think we can generally expect a Linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.


It's useful for shared spaces like schools, universities and internet cafes. The point is that without it you can display a fake login screen and gather people's passwords.

I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).

https://www.rmusergroup.net/rm-networks/

I got the teacher's password and then got scared and deleted all trace of it.


Not to mention the 1-based indexing sin. JavaScript has a lot of WTFs but they got that right at least.

This indeed is not Algol (or rather C) heritage, but Fortran heritage, not memory offsets but indices in mathematical formulae. This is why R and Julia also have 1-based indexing.

Pascal. Modula-2. BASIC. Hell, Logo.

Lately, yes, Julia and R.

Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.

I learned the Wirth languages first (and then later did a lot of programming in MOO, a prototype OO 1-indexed scripting language). Because of that early experience I still slip up and make off by 1 errors occasionally w/ 0 indexed languages.

(Actually both Modula-2 and Ada aren't strictly 1 indexed since you can redefine the indexing range.)

It's funny how orthodoxies grow.


In fact zero-based has shown some undeniable advantages over one-based. I couldn't explain it better than Dijkstra's famous essay: http://www.cs.utexas.edu/~EWD/ewd08xx/EWD831.PDF
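A small illustration of the half-open convention Dijkstra argues for, sketched in bash (whose arrays are 0-indexed): with ranges of the form [lo, hi), lengths are simply hi − lo and a split point appears once on each side, with no ±1 adjustments anywhere.

```shell
# Splitting [0,5) at 2 yields [0,2) and [2,5); the bound `mid` is
# shared between the two halves, so no +1/-1 fixups are needed.
items=(a b c d e)
lo=0; mid=2; hi=5
left=("${items[@]:lo:mid-lo}")    # elements [0,2)
right=("${items[@]:mid:hi-mid}")  # elements [2,5)
echo "${left[*]} | ${right[*]}"   # prints: a b | c d e
```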

It's fine, I can see the advantages. I just think it's a weird level of blindness to act like 1 indexing is some sort of aberration. It's really not. It's actually quite friendly for new or casual programmers, for one.

Is there any actual evidence that new programmers really find this hard? Python is renowned for being beginner friendly and I've never heard of anyone suggesting it was remotely a problem.

There are only a few languages that are purely for beginners (LOGO and BASIC?) so it's a high cost to annoy experienced programmers for something that probably isn't a big deal anyway.


I think the objection is not so much blindness as the idea that professional tools should not generally be tailored to the needs of new or casual users at the expense of experienced users.

As I understand it Julia changed course and is attempting to support arbitrary index ranges, a feature which Fortran enjoys. (I'm not clear on the details as I don't use either of them.)

Pascal, frankly, allowed to index arrays by any enumerable type; you could use Natural (1-based), or could use 0..whatever. Same with Modula-2; writing it, I freely used 0-based indexing when I wanted to interact with hardware where it made sense, and 1-based indexes when I wanted to implement some math formula.

> Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.

The problem is that Lua is effectively an embedded language for C.

If Lua never interacted with C, 1-based indexing would merely be a weird quirk. Because you are constantly shifting across the C/Lua barrier, 1-based indices become a disaster.


And MATLAB. Doesn't make it any better that other languages have the same mistake.

Does it count as 0-indexing when your 0 is a floating point number?

Actually in JS array indexing is the same as property indexing, right? So it's actually looking up the string '0', as in arr['0'].

Huh. I always thought that JS objects supported string and number keys separately, like lua. Nope!

  [Documents]$ cat test.js
  let testArray = [];
  testArray[0] = "foo";
  testArray["0"] = "bar";
  console.log(testArray[0]);
  console.log(testArray["0"]);
  [Documents]$ jsc test.js
  bar
  bar
  [Documents]$

Lua supports even functions and objects as keys:

  function f1() end
  function f2() end
  local m1 = {}
  local m2 = {}
  local obj = {
      [f1] = 1,
      [f2] = 2,
      [m1] = 3,
      [m2] = 4,
  }
  print(obj[f1], obj[f2], obj[m1], obj[m2], obj[{}])
Functions as keys is handy when implementing a quick pub/sub.

They do, but strings that are numbers will be reinterpreted as numbers.

[edit]

  let testArray = [];
  testArray[0] = "foo";
  testArray["0"] = "bar";
  testArray["00"] = "baz";
  console.log(testArray[0]);
  console.log(testArray["0"]);
  console.log(testArray["00"]);

That example only shows the opposite of what it sounds like you’re saying, although you could be getting at a few different true things. Anyway:

- Every property access in JavaScript is semantically coerced to a string (or a symbol, as of ES6). All property keys are semantically either strings or symbols.

- Property names that are the ToString() of an unsigned integer below 2^32 − 1 are considered indexes for the purposes of the following two behaviours:

- For arrays, indexes are the elements of the array. They’re the properties that can affect its `length` and are acted on by array methods.

- Indexes are ordered in numeric order before other properties. Other properties are in creation order. (In some even nicher cases, property order is implementation-defined.)

  { let a = {}; a['1'] = 5; a['0'] = 6; Object.keys(a) }
  // ['0', '1']

  { let a = {}; a['1'] = 5; a['00'] = 6; Object.keys(a) }
  // ['1', '00']

There's nothing wrong with 1-based indexing. The only reason it seems wrong to you is because you're familiar with 0-based, not because it's inherently worse.

If you can't deal with off-by-one errors, you're not a programmer.

But with Lua all those errors are now off by two

Except for Date.

Yeah but virtually every developer in the world has already jumped through that hoop. They don't need to do it again for every project.

Also the hoop can be as simple as "click here to sign in with <other account you already have>".


> Debian is kind of slow in adapting to the modern world.

Yeah definitely. I guess this is a result of their weird idea that they have to own the entire world. Every bit of open source Linux software ever made must be in Debian.

If you have to upgrade the entire world it's going to take a while...


What is patch quilting, for the blissfully unaware?

https://wiki.debian.org/UsingQuilt but the short form is that you keep the original sources untouched, then as part of building the package, you apply everything in a `debian/patches` directory, do the build, and then revert them. Sort of an extreme version of "clearly labelled changes" - but tedious to work with since you need to apply, change and test, then stuff the changes back into diff form (the quilt tool uses a push/pop mechanism, so this isn't entirely mad.)
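The apply/build/revert cycle described above can be sketched with plain patch(1) (hypothetical file names; real Debian builds drive this via quilt and dpkg-source):

```shell
set -e
work=$(mktemp -d) && cd "$work"
mkdir -p src debian/patches
printf 'hello\n' > src/greeting.txt           # pristine upstream source

# one clearly-labelled change, kept under debian/patches
cat > debian/patches/01-shout.patch <<'EOF'
--- a/greeting.txt
+++ b/greeting.txt
@@ -1 +1 @@
-hello
+HELLO
EOF

# build step: push every patch, "build", then pop them all again,
# leaving the source tree pristine afterwards
for p in debian/patches/*.patch; do patch -s -d src -p1 < "$p"; done
cp src/greeting.txt build-output.txt          # stand-in for the real build
for p in debian/patches/*.patch; do patch -s -d src -p1 -R < "$p"; done
```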

Ha yes that does sound mad. If only there was a version control system specifically designed to track changes to code...

Quilt predates Git. Back then source was distributed as a tarball, and Debian simply maintained a directory full of patches to apply to the tarball.

Sure but Git has been available (and super popular) for almost 20 years now.

Yea, so? Debian goes back 32 or more years, and quilt dates to approximately the same time. It’s probably just a year or two younger than Debian.

At Mozilla some developers used quilt for local development back when the Mozilla Suite source code was kept in a CVS repository. CVS had terrible support for branches. Creating a branch required writing to each individual ,v file on the server (and there was one for every file that had existed in the repository, plus more for the ones that had been deleted). It was so slow that it basically prevented anyone from committing anything for hours while it happened (because otherwise the branch wouldn’t necessarily get a consistent set of versions across the commit), so feature branches were effectively impossible. Instead, some developers used quilt to make stacks of patches that they shared amongst their group when they were working on larger features.

Personally I didn’t really see the benefit back then. I was only just starting my career, fresh out of university, and hadn’t actually worked on any features large enough to require months of work, multiple rounds of review, or even multiple smaller commits that you would rebase and apply fixups to. All I could see back then were the hoops that those guys were jumping through. The hoops were real, but so were the benefits.


> Yea, so?

So it's clearly a way better solution and it's disappointing that they still haven't switched to it after 20 years? I dunno what else to say...


So has git-buildpackage; the Debian historical archives don't go further back than v0.4, but the oldest bug report referencing gbp is from December 2006: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=403987

it's quite difficult to maintain a quilt-like workflow with plain git

I've tried it


Quilt is difficult to maintain, but a quilt-like workflow? Easy: it's just a branch with all patches as commits. You can re-apply those to new releases of the upstream by using `git rebase --onto $new_upstream_commit_tag_or_branch`.
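A runnable sketch of that rebase flow, with made-up tag and branch names (upstream/1.0, debian/latest; `git init -b` needs git ≥ 2.28):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main .
git config user.email you@example.com
git config user.name you

echo 'v1' > app.c
git add app.c && git commit -qm 'import upstream 1.0'
git tag upstream/1.0

# packaging branch: each "patch" is just a commit
git checkout -qb debian/latest
echo 'tweak' > debian-tweak.txt
git add debian-tweak.txt && git commit -qm 'debian: add packaging tweak'

# a new upstream release arrives
git checkout -q main
echo 'v2' >> app.c
git commit -qam 'import upstream 2.0'
git tag upstream/2.0

# replay the packaging commits onto the new upstream
git rebase -q --onto upstream/2.0 upstream/1.0 debian/latest
git log --oneline
```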

How do you track changes to the patches themselves?

By having a naming convention for your tags and branches, then you can always identify the upstream "base" upon which the Debian "patches" are based, and then you can trivially use `git log` to list them.

Really, Git has a solution to this. If you insist that it doesn't without looking, you'll just keep re-inventing the wheel badly.


but then what if I want to see the history for a specific patch, or bisect them?

mercurial has a patch queue extension (mq) that married it to quilt, which was very easy to use


Do you ever really want this? I don't recall wanting this. But you can still get this: just list the ${base_ref}..${deb_ref} commit ranges, select the commit you want, and diff the `git show` of the selected commits. It helps here to keep the commit synopsis the same.

E.g.,

  c0=$(git log --oneline ${base_ref0}..${deb_ref0} |
         grep " The subject in question$" |
         cut -d' ' -f1)
  c1=$(git log --oneline ${base_ref1}..${deb_ref1} |
         grep " The subject in question$" |
         cut -d' ' -f1)
  if [[ -z $c0 || -z $c1 ]]; then
    echo "Error: commits not found"
  else
    diff -ubw <(git show $c0) <(git show $c1)
  fi
See also the above commentary about Gerrit and commit IDs.

(Honestly I don't need commit IDs. What happens if I eventually split a commit in a patch series into two? Which one, if either, gets the old commit ID? So I just don't bother.)


So there’s no way to have commit messages on changes to patches? There’s also https://dep-team.pages.debian.net/deps/dep3/

People keep saying “just use Git commits” without understanding the advantages of the Quilt approach. There are tools to keep patches as Git commits that solve this, but “just Git commits” do not.


Having maintained private versions of Debian packages, I have zero need for "commit messages on changes to patches". I can diff them as needed as I showed, but I rarely ever need to -- I mostly only rebase onto new upstreams. Seeing differences in patches isn't helpful because there is not enough context there as to what changed in the upstreams.

I rather suspect that "commit messages on changes to patches" is what Debian ended up with and back-justifies it.

Of course, I am not a Debian maintainer, so it's entirely possible I'm just missing the experience of it that would make me want "commit messages on changes to patches".


You can keep the old branches around if you want. Or merge instead of rebasing.

Those who don't understand git are doomed to reimplement half of it poorly?

(I know that's not quite the Greenspun quote)


I think that's right, sadly.

This seems pretty silly to me. Their solution for how to get structured output is pretty much just "don't". Well, we still need the structured output, so what do we do then?

> you need a parser that can find JSON in your output and, when working with non-frontier models, can handle unquoted strings, key-value pairs without comma delimiters, unescaped quotes and newlines; and you need a parser that can coerce the JSON into your output schema, if the model, say, returns a float where you wanted an int, or a string where you wanted a string[].

Oh cool, I'm sure that will be really reliable. Facepalm.
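For the record, the kind of JSON-fishing being objected to looks roughly like this crude, hypothetical sketch, which already breaks on nested objects, multiple objects, or braces inside strings:

```shell
# a model reply with chatter around the payload
out='Sure! Here is the data: {"count": 3, "tags": ["urgent"]} Hope that helps.'
# grab from the last '{' to the last '}' -- fragile by construction
json=$(printf '%s' "$out" | sed -n 's/.*\({.*}\).*/\1/p')
echo "$json"
```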

> Allow it to respond in a free-form style: let it refuse to count the number of entries in a list, let it warn you when you've given it contradictory information, let it tell you the correct approach when you inadvertently ask it to use the wrong approach

This makes zero sense. The whole point of structured output is that it's a (non-AI) program reading it. That program needs JSON input with a given schema. If it is able to handle contradictory-information warnings, or being told you're using the wrong approach then that will be in the schema anyway!

I think the point about thinking models is interesting, but the solution to that is obviously to allow it to think without the structuring constraint, and then feed the output from that into a query with the structured output constraint.


It turns out it is worth the effort. Once you have got past the "fighting the borrow checker" stage (which isn't nearly as bad as it used to be, thanks to improvements to its abilities), you get some significant benefits:

* Strong ML-style type system that vastly reduces the chance of bugs (and hence the time spent writing tests and debugging).

* The borrow checker really wants you to have an ownership tree which it turns out is a really good way to avoid spaghetti code. It's like a no-spaghetti enforcer. It's not perfect of course and sometimes you do need non-tree ownership but overall it tends to make programs more reliable, again reducing debugging and test-writing time.

So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.

Some other languages have these properties (especially FP languages), but they come with a whole load of other baggage and much smaller ecosystems.


> So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.

Not if I don't know what I'm doing because it's something new. The way I'm learning how to do it is by building it. So I want to build it quickly so that I can get in more feedback loops as I learn. Also I want to learn by example, so I actually want to get runtime errors, not type system errors. Later when I do know what I am doing then, sure, I want to encode as much as I can in my types. But before that .. Don't get in my way!


Yeah it is a fair point that runtime errors are sometimes easier to understand than compile time errors. They're still a much worse option of course - for the many reasons that have been already discussed - but maybe compile-time errors could be improved by providing an example of the kind of runtime error you could get if you didn't fix it (and it hypothetically was dynamically typed). Perhaps that would be easier to understand for some people or some errors.

There's a (Curry-Howard) analogue here with formal verification and counter-examples.


Yeah a terrible review presumably. It has zero context.

Amazing how bad the speech synthesis is for something so safety critical.

Then again I understood exactly what it was saying every time, which is more than I can say for some of the other traffic on that recording. I’m not sure synthetic-sounding means bad here.

The embedded systems qualified for use in general aviation avionics have very limited hardware resources. They are severely constrained by form factor, power, and cooling. It's amazing that the developers were able to get speech synthesis working so well.

I could do better than this with pre-recorded samples for each word. Especially for the phonetic alphabet.

Also avionics aren't that underpowered these days. They have full touchscreen displays and multicore CPUs.


It doesn't appear to me to be speech synthesis, but rather prerecorded messages.

They probably want to make it sound as clearly robotic as possible so some idiot at ATC doesn’t try to argue with it.

This. If it sounds too human, ATC is going to try to help and possibly provide vectors, as they should, but the way the system works, ATC needs to prioritize clearing the runway and keeping aircraft away.

[flagged]


Please don't fulminate or introduce political flamebait on HN

https://news.ycombinator.com/newsguidelines.html

