
It's quite common for a DOI to be assigned to a paper once it's accepted, at the camera-ready stage. However, the DOI won't resolve until the conference or journal version is published on the official website (ACM's, in this case). The version you're viewing now is simply a preprint provided directly by the authors.


Exactly! The error page lists this as one of the three reasons a DOI might not be found:

>The DOI has not been activated yet.
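
For anyone curious, one rough way to see whether a DOI has been activated yet is to check how doi.org responds to it. A minimal sketch (the DOI below is just a placeholder, and some publishers reject automated HEAD requests, so treat it as illustrative only):

    import urllib.request
    import urllib.error

    def doi_seems_active(doi: str) -> bool:
        # doi.org redirects registered DOIs to the publisher's landing page
        # and returns 404 for DOIs that have not been activated yet.
        req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            return e.code != 404

    print(doi_seems_active("10.1145/0000000.0000000"))  # placeholder DOI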


Can this be solved by storing a timestamp of the record along with the precise GPS coordinates? Could we then use some database to compute the drift between then and now?


Yes, in fact it should essentially be mandatory because the spatial reference system for GPS is not fixed to a point on Earth. This has become a major issue for old geospatial data sets in the US where no one remembered to record when the coordinates were collected.

To correct for these cases you need to be able to separately attribute drift vectors due to the spatial reference system, plate tectonics, and other geophysical phenomena. Without a timestamp that allows you to precisely subtract out the spatial reference system drift vector, the magnitude of the uncertainty is quite large.
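
To make that concrete, here is a minimal sketch of the time-based part of the correction, assuming you already have a recording timestamp and a local plate-motion velocity from some external model (the velocity numbers below are made up for illustration, not real plate-motion data):

    from datetime import datetime

    # Hypothetical plate-motion velocity at the site in metres/year
    # (east, north). Real values would come from a plate-motion model
    # or a published velocity field for the region.
    PLATE_VELOCITY_M_PER_YR = (0.025, 0.030)  # illustrative ~3-4 cm/yr

    def drift_since(recorded_at: datetime, now: datetime):
        """Rough displacement (east, north, metres) accumulated since the
        coordinates were recorded, ignoring reference-frame realisations
        and the other geophysical effects mentioned above."""
        years = (now - recorded_at).days / 365.25
        ve, vn = PLATE_VELOCITY_M_PER_YR
        return ve * years, vn * years

    de, dn = drift_since(datetime(1995, 6, 1), datetime(2024, 6, 1))
    print(f"~{de:.2f} m east, ~{dn:.2f} m north of the recorded position")

Without the recording date, the elapsed-years term above is unknown, which is exactly where the large uncertainty comes from.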


You don't need to store a timestamp, just the local coordinate reference system the coordinates are expressed in. When revisions like this are made, it's done by updating the specification of a specific local coordinate reference system.

WGS84 is global, but for most precise local work more specific national coordinate systems are used instead.



I mean, certainly - if you store both GPS time and derived coordinates from the same sampling, then you can always later interpret them as needed - whether relative to legal or geographical boundaries, or however else you might want to interpret them in the future.


At least Cisco survived. Nortel and Lucent were not so lucky.


Many years ago, I reported an issue where iTerm2 leaked sensitive search history to preference files [1]. The issue was quickly fixed. But to this day, I can still find people unintentionally leaking their search history in public dotfiles repos [2].

[1]: https://gitlab.com/gnachman/iterm2/-/issues/8491

[2]: https://github.com/search?q=NoSyncSearchHistory+path%3A*.pli...
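
If you want to check your own machine, a quick sketch (this assumes the usual macOS preferences path and the NoSyncSearchHistory key from [2]; either may be absent on patched versions or non-default setups):

    import plistlib
    from pathlib import Path

    # Assumed default location of iTerm2's preferences on macOS.
    prefs = Path.home() / "Library/Preferences/com.googlecode.iterm2.plist"

    with prefs.open("rb") as f:
        data = plistlib.load(f)

    # Key referenced in [2]; missing if the leak never happened or was cleaned up.
    history = data.get("NoSyncSearchHistory", [])
    print(f"{len(history)} stored search entries")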


Interestingly, at the end of the article, the FDA links to an old article hosted on web.archive.org [1], even though the linked article was originally published by the FDA itself. Considering the linked article was only published in 2022, merely two years ago, maybe the FDA should do more to prevent dead links.

[1]: https://web.archive.org/web/20221028042729/https:/www.fda.go...


> maybe the FDA should do more to prevent dead links

Perhaps government departments (and companies) taking advantage of archive.org storing their old docs should be supporting it appropriately?


I don't know how much money the Internet Archive has received from official US government institutions, but they do receive at least some, as you can see from the list of foundations that help fund them: https://archive.org/about/


This could also make it more difficult for new administrations to "disappear" documents from government sites by storing them on an archival site.


Maybe a better URL, the official press release: https://www.gilead.com/news-and-press/press-room/press-relea...

@dang


A related question: what is the state of SQL standard support among the popular RDBMSes? It seems that almost all database engines use their own custom syntax.


I can say that none of Oracle, Sybase, or Microsoft SQL Server really aim at conforming to the standard. While they will often try to use standard syntax for new features if such syntax exists, there is a ton of old non-conforming syntax that there seems to be no real effort to address, even by adding new options, etc. As a result, some really common features deviate significantly from what the standard requires.

PostgreSQL does mostly aim at conforming to the standard. They will invent new syntax when needed, but compared to those previously mentioned, Postgres prefers to stick closer to the standard whenever possible, including adding standard syntax for existing features.

PostgreSQL does have some places of deliberate non-conformance, beyond just incompletely implemented features. They document many of these deliberate deviations, along with whether they think each can be fixed in the future: https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL_Standard . Looking at the list, I'd say the only ones especially likely to bite a developer are the default escape character for LIKE clauses and the non-standard trailing-space behavior for character(n) datatypes (but who uses fixed-length character datatypes instead of varchar or text?). And obviously not-yet-implemented features could bite people, but many such features are optional to implement anyway, so...
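
To make the LIKE gotcha concrete, a small sketch against a local PostgreSQL instance (the connection string is a placeholder and the psycopg2 driver is just an assumption; any client would do):

    import psycopg2  # assumed driver; adjust the DSN for your setup

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # PostgreSQL treats backslash as the default LIKE escape character,
    # so \_ matches a literal underscore. The SQL standard defines no
    # default escape character at all.
    cur.execute(r"SELECT 'a_b' LIKE 'a\_b', 'axb' LIKE 'a\_b'")
    print(cur.fetchone())  # (True, False) on PostgreSQL

    # Portable form: name the escape character explicitly.
    cur.execute("SELECT 'a_b' LIKE 'a#_b' ESCAPE '#'")
    print(cur.fetchone())  # (True,)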

I cannot speak about MySQL or MariaDB, due to insufficient familiarity.


This is one of the questions I try to answer at https://modern-sql.com/


Your website is great and I regularly check it to see what's new in various implementations. Unfortunately it seems that many databases don't support many modern SQL features yet. Any ideas as to why?


> Unfortunately it seems that many databases don't support many modern SQL features yet. Any ideas as to why?

I'd guess the incentive structure is the opposite of what you're implying; it's the same reason every cordless drill manufacturer has its own battery connector: vendor lock-in fuels private planes and shareholder reports, whereas being compatible means no one is forced to buy your batteries, and thus profits are `$total - $forced_purchases`.

This situation gets even worse in the case of a standard without any objective way of knowing one is in compliance. Having a badge on the mysql.com website saying "now featuring SQL:2023 compliance!11" sells how many more support contracts exactly?


That's a good point. Additionally, it seems the standard isn't freely available, and I doubt most of the developers of existing SQL DBs participate in drafting new standards. It seems doomed to diverge even further, which raises the question of whether it is even relevant to have an SQL standard at all.


You are confusing two different scales. The earthquake moment magnitude scale measures the amount of energy released by a given earthquake. It does not depend on where you are or how much shaking you felt. Therefore, for this particular Taiwan earthquake, its magnitude is 6.9 regardless of location.

The intensity scale, on the other hand, measures the amount of shaking at a given location. Naturally, the further you are from the epicenter, the lower the intensity will be.
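
For a sense of scale: the commonly used Gutenberg-Richter energy relation (log10 E ≈ 1.5 M + 4.8, with E in joules, originally derived for surface-wave magnitudes) implies each whole magnitude step is roughly a 32x jump in radiated energy. A quick sketch:

    def radiated_energy_joules(magnitude: float) -> float:
        # Gutenberg-Richter approximation: log10(E) ~ 1.5 * M + 4.8
        return 10 ** (1.5 * magnitude + 4.8)

    ratio = radiated_energy_joules(6.9) / radiated_energy_joules(6.0)
    print(f"An M6.9 quake releases ~{ratio:.0f}x the energy of an M6.0")  # ~22x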


Thank you, this is good! :P :) xx ;p

It's quite confusing to have a couple of different numbers to "characterize" a quake: magnitude (in a few different systems) and intensity (I guess there are different systems for that around the world as well).

Really great to see the clarity here from knowledgeable people on this! :) ;pxx ;p


It is unlikely because it is very difficult and maybe illegal to scrape research papers. See https://en.wikipedia.org/wiki/Aaron_Swartz for example.


I think they meant an archive of git.io links.


I mean that it is hard to scrape these git.io links used in research papers to build the archive. Unless, of course, GitHub provides a DB dump, which would help everyone a lot.


List of takedown notices from HackerRank to GitHub:

https://github.com/github/dmca/search?q=HackerRank&type=


Thank you, this gives a lot of insight into what is going on. This one tells the whole story: https://github.com/github/dmca/blob/ea3736a0c4c9574e0c8cea06...


And yet the "repo" (seems to be a Gist) referred to in this counternotice is still down.


I think it did come back up, and then went down again:

https://github.com/github/dmca/search?q=TheRayTracer

HackerRank/WorthIT didn't dispute the counternotice (AFAICT), but instead they re-filed the same DMCA notice again, four months later. I think the repo owner gave up at that point.


That's predatory. It's also up to GitHub to actually check the DMCAs they get and not just blindly follow through with them, as they seem to do.


Won't type the HR word; this page might get DMCA'd.


This shows it pretty unequivocally: HackerRank are nothing but scoundrels. The CEO playing nice in this thread is laughable.

