Feels like people on their deathbed are allowed to express their thoughts, trite/trivial or not. When you are literally dying the last thing on your mind should be whether some blogger deems your thoughts valuable or cheap.
May we all get to enjoy those final moments free of the need to perform on stage, and express those things we truly wish to pass on to those who will hear them.
>When you are literally dying the last thing on your mind should be whether some blogger deems your thoughts valuable or cheap.
to be fair, when you're dying, the other last thing on your mind should probably be your regrets, so there is some truth in the article.
When people go through dramatic events, not necessarily limited to their literal death, there's often a false sense of clarity. It's not uncommon for people with trauma or loss to suddenly have some conversion of one kind or another, and it's rarely as good an idea as they think it is. It seems more sincere because, for the individual, it's tied to some important event, but I think it's often the opposite.
I was tempted to respond with an offhand comment about the size of the industry or similar, but what axe do you have to grind about PC gaming? You'd prefer folks go to the far more injurious mobile gaming space?
Figures this comes from the National Design Studio (https://ndstudio.gov/), which ironically also ignores the government's own advice on web standards and the correct use of identifying headers.
One can assume the US Tech Force will perceive itself as also unfettered by those silly rules and good practices.
My actual first thought was "Is this a hoax?" precisely because the website did not identify itself as a US government website in the usual way for executive branch sites.
I know it's par for the course these days, but that's a lot of JS and CSS for a single-page app with some text, a few images, and a list of collapsible info sections (whose animations aren't very smooth).
I didn't mean the logo (honestly didn't even notice). I was talking about the robot guy's t-shirt - it does have 13 stripes, but the number and layout of stars look rather play-it-by-ear.
"What's the biggest brand in the world? If you said Trump, you're not wrong. But what's the foundation of that brand? One that's more globally recognized than practically anything else.
...
This is President Trump going bigger than President Nixon"
Of course. Whatever problems the US government had before, mass firings, loyalty tests, furloughs, and endless other shenanigans have only exacerbated them.
There is a somewhat stubborn idea that a government will always have many inefficiencies baked in, since there’s no real incentive to remove them beyond a generic “that would be nice”.
:( I had to click through because I didn't believe you at first... as someone who used to proudly work with feds, this is yet another low point among many over the past ten years.
It's a very good system. $20 is the right number to get you off the couch, but not so much as to cripple you. There are exceptions if you have a valid reason for not voting. The maximum fine is ~$180 so you can't simply ignore the Elections Commission and hope it goes away.
Unfortunately, while catching false citations is useful, in my experience that's not usually the problem affecting paper quality. Far more prevalent are authors who mis-cite material, either drawing support from citations that don't actually say those things or stripping the nuance away with cherry-picked quotes, simply because that is what Google Scholar suggested as a top result.
The time it takes to find these errors is orders of magnitude greater than the time to check whether a citation exists, as you need to both read and understand the source material.
These bad actors should be subject to a three-strikes rule: the steady corrosion of knowledge by these individuals is no accident.
It seems like this is the type of thing that LLMs would actually excel at, though: find the list of citations and claims in this paper; do the cited works support the claims?
sure, except when they hallucinate that the cited works support the claims when they do not. At which point you're back to needing to read the cited works to see if they support the claims.
Sometimes this kind of problem can be fixed by adjusting the prompt.
You don't say "here's a paper, find me invalid citations". You put less pressure on the model by chunking the text into sentences or paragraphs, extracting the citations for that chunk, and presenting both with a prompt like:
The following claim may be evidenced by the text of the article that follows. Please invoke the found_claim tool with a list of the specific sentence(s) in the text that support the claim, or an empty list indicating you could not find support for it in the text.
In other words you make it a needle-in-a-haystack problem, which models are much better at.
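For what it's worth, here's a minimal sketch of that chunk-and-check loop in Python, assuming the OpenAI SDK with a forced tool call; the found_claim name comes from the prompt above, while the model name and JSON schema are illustrative stand-ins:

    # Minimal sketch of the chunk-and-verify idea above. Assumes the
    # OpenAI Python SDK; "found_claim" comes from the prompt above,
    # the model name and JSON schema are illustrative stand-ins.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FOUND_CLAIM_TOOL = {
        "type": "function",
        "function": {
            "name": "found_claim",
            "description": "Report the sentences in the source text "
                           "that support the claim.",
            "parameters": {
                "type": "object",
                "properties": {
                    "supporting_sentences": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Verbatim supporting sentences; "
                                       "empty if no support was found.",
                    },
                },
                "required": ["supporting_sentences"],
            },
        },
    }

    def check_claim(claim: str, source_text: str) -> list[str]:
        """Return the sentences in source_text that support the claim."""
        prompt = (
            "The following claim may be evidenced by the text of the "
            "article that follows. Please invoke the found_claim tool "
            "with a list of the specific sentence(s) in the text that "
            "support the claim, or an empty list indicating you could "
            "not find support for it in the text.\n\n"
            f"Claim: {claim}\n\nArticle text:\n{source_text}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            tools=[FOUND_CLAIM_TOOL],
            # Force the tool call so the answer is always structured.
            tool_choice={"type": "function",
                         "function": {"name": "found_claim"}},
        )
        call = resp.choices[0].message.tool_calls[0]
        return json.loads(call.function.arguments)["supporting_sentences"]

Any sentences it returns can then be string-matched back against the source text, which guards against the model paraphrasing support into existence.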
You don't just accept the review as-is, though; you prompt it to be a skeptic and find a handful of specific examples of claims that are worth extra attention from a qualified human.
Unfortunately, this probably results in lazy humans _only_ reading the automated flagged areas critically and neglecting everything else, but hey—at least it might keep a little more garbage out?
Exactly. Abuse of citations is a much more prevalent and sinister issue, and has been for a long time. Fake citations are of course bad, but they're only the tip of the iceberg.
The linked article at the end says: "First, using Hallucination Check together with GPTZero’s AI Detector allows users to check for AI-generated text and suspicious citations at the same time, and even use one result to verify the other. Second, Hallucination Check greatly reduces the time and labor necessary to verify a document’s sources by identifying flawed citations for a human to review."
On their site (https://gptzero.me/sources) it also says "GPTZero's Hallucination Detector automatically detects hallucinated sources and poorly supported claims in essays. Verify academic integrity with the most accurate hallucination detection tool for educators", so it does more than just identify invalid citations. Seems to do exactly what you're talking about.
>These bad actors should be subject to a three-strikes rule: the steady corrosion of knowledge by these individuals is no accident.
These people are working in labs funded by Exxon or Meta or Pfizer or whoever and they know what results will make continued funding worthwhile in the eyes of their donors. If the lab doesn't produce the donor will fund another one that will.
No, not really. I've read lots of research papers from commercial firms and academic labs. Bad citations are something I only ever saw in academic papers.
I think that's because a lot of bad citations come from reviewer demands to add more of them during the journal publishing process, so they're not critical to the argument and end up being low-effort citations that get copy-pasted between papers. Or someone is just spamming citations to make a weak claim look strong. And all this happens because academia uses citations as a kind of currency (it's a planned non-market economy, so funds have to be allocated using proxy signals).
Commercial labs are less likely to care about the journal process to begin with, and are much less likely to publish weak claims because publishing is just a recruiting tool, not the actual end goal of the R&D department.
Peer review definitely does catch errors when performed by qualified individuals. I've personally flagged papers for major revisions or rejection as a result of errors in approach or misrepresentation of source material. I have peers who say they have done similar.
I should have said "Peer review doesn't catch _all_ errors" or perhaps "Peer review doesn't eliminate errors".
In other words, being "peer reviewed" is nowhere close to "error free," and if (as is often the case) the rate of errors is significantly greater than the rate at which errors are caught, peer review may not even significantly improve the quality.
Thanks for clarifying; I fully agree with your take. Peer review helps, particularly where reviewers are qualified and given the time to do the job properly.
However, it alone is not a guarantor of quality. As someone proximate to academia, I can see that many professors are beginning to throw in the towel, or are sharply cutting the time they spend verifying quality, when faced with the rising tide of slop.
The window for avoiding the natural consequences of these trends feels like it is getting scarily small.