Usually because the people capable of writing formally verified software do not understand the specific subject area, and the people who understand the subject area are generally not capable of writing formally verified software.
For the perspective of a mathematician who came to evangelize theorem provers, I recommend Kevin Buzzard's September 2019 Microsoft Research presentation [0] about Lean. He highlights cultural misunderstanding and apathy on both sides of the domain divide. He also references the idea that the people who might build the appropriate tools may not have stayed in academia. So he has structured his courses around using Lean, with the indirect consequence that power users (undergrads) may choose to become open source committers.
[0] https://www.youtube.com/watch?v=Dp-mQ3HxgDE One hour of presentation and then 15 minutes of Q&A. My favorite moment is around 1:04:00, when someone asks a second time why he disprefers Coq, and Buzzard complains that it can't represent some advanced quotient type that he'd have to work around. I'm reminded of [1]
Having been inspired by that video by Kevin Buzzard and having finally found something that would be worth formalizing, I am trying out Lean at the moment. I have about 2-3 months of Coq experience, so I can say that even setting quotient types aside, Lean is much better designed than Coq. I can't vouch for how it will do at scale since I've only just started with it, but from what I can see, Lean fixes all the pain points that I had with Coq while going through Software Foundations.
It has things like structural recursion (similar to Agda), dependent pattern matching (the biggest benefit of which is proper variable naming), Unicode, `calc` blocks, a good IDE experience with VS Code (it actually has autocomplete; I prefer it over Emacs, and the built-in CoqIDE is broken on Windows), mutually recursive definitions and types, and various other things that aren't at the top of my head.
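To give a flavor of two of those features, here is a minimal Lean 3 sketch (a toy example of my own, not from any course material; `nat.add_zero` and `nat.add_comm` are core lemmas):

    -- structural recursion with pattern matching:
    -- sum of the first n natural numbers
    def sum_to : ℕ → ℕ
    | 0       := 0
    | (n + 1) := (n + 1) + sum_to n

    -- a `calc` block: chain equalities step by step,
    -- each step justified by a lemma
    example (a b : ℕ) : a + b + 0 = b + a :=
    calc a + b + 0 = a + b : nat.add_zero (a + b)
               ... = b + a : nat.add_comm a b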
If I were to sum it up, the biggest issue with Coq is that it does not allow you to structure your code properly. This is kind of a big thing for me as a programmer.
>My favorite moment is around 1:04:00, when someone asks a second time why he disprefers Coq, and Buzzard complains that it can't represent some advanced quotient type that he'd have to work around. I'm reminded of [1]
Second what abstractcontrol said.[0] I wouldn't say being able to deal with quotient types is a "trivial" feature, because Buzzard does explain why quotient types are crucial for what he does.
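For anyone unfamiliar with the term: a quotient type identifies elements of an existing type under an equivalence relation, and Lean has this baked into its kernel via `quot`. A toy Lean 3 sketch (the names are my own for illustration; Buzzard's actual use cases are far heavier):

    -- naturals identified up to parity
    def parity : Type := quot (λ a b : ℕ, a % 2 = b % 2)

    -- a function out of the quotient must respect the relation;
    -- here the proof obligation reduces to the relation itself
    def residue : parity → ℕ :=
    quot.lift (λ n, n % 2) (λ a b h, h)

    -- the two equivalence classes
    def even_class : parity := quot.mk _ 0
    def odd_class  : parity := quot.mk _ 1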
The motivation is that he wants to formalise a concept that is the subject of current active research, instead of yet another concept from undergraduate mathematics. In this case, it's the notion of perfectoid spaces,[1] which was only introduced in 2012 by Peter Scholze.[2]
The justification for this is sociological: research mathematicians predominantly find the retreading of old ground boring and trivial. Buzzard wants to sell the power of theorem provers to exactly these mathematicians, so he felt that formalising something that many number theorists would be intensely interested in may have more impact.
Unfortunately, while I'd like to think that he's succeeded somewhat, the reaction I've seen is just more of the "well, isn't that nice" variety, outside of some enthusiasts, many of whom are young and/or not very influential. Mainly, that's because theorem provers can't yet help mathematicians improve their productivity: a lot is still missing from the ecosystem, and Buzzard pointed out some of the missing "features" that may help theorem provers take off.
Despite what people may have told you, writing formally verified software is quite a mathematical task, requiring the people involved to really engage and write decent software specifications to begin with.
Other way around. Research scientists are the ones that may (or should) require additional rigor in their results. Formal verification would presumably help in that regard.
I don't disagree, I'm just saying that we as full-time software developers have not made formal verification standard practice, or even approachable. I would bet the average software developer has never even heard of it. So to expect people who do not write software full-time to do so is a little crazy.
You could, but your typical research scientist in biology will not even be aware of what formally verified software is, and even then, it remains to be seen whether there is such a thing for their particular application.
There is plenty of money for marketing. Research is a distant second, as you probably well know given your nickname, and if someone put their study on hold for half a decade or so just to make sure that the software they use is properly put together, the funding for that particular team or individual would evaporate rapidly.
Let's institute a ban on pharma ads, at least the ones that disclaim "don't take $NEWDRUG if you're allergic to $NEWDRUG," and give the money to research.
As a physicist who worked in biology for a while, I found that money didn't always make things better and sometimes made it worse. Programs with slick user interfaces tended to be overwhelmingly chosen over better open source programs which were command line. In bioinformatics a lot of excellent software is open source and updated regularly. In contrast, closed source software could be nightmarishly opaque in how it handled the data. Also, companies had better salespeople than open source proponents. Graduate students used to take bets on whether a piece of software would be purchased by looking at how hot the salesperson was.
As mentioned, it should be pronounced "rung," but in any case, removing the i (the joke being that a rng is a ring without a multiplicative identity) and suddenly making it a word pronounced in a very unexpected way is really pretty stupid. I think, if one really insists on using "ring" but without the i, at least call it "rung," not "rng," so that its pronunciation is clear.
If it's so important to have a "rng" instead of a "ring," then the name should probably hold up better in actual examples of its usage.
Well, words, terms, notations and conventions come super cheap in math. You can call/denote a "rng" anything you want; you just have to let your readers/interlocutors know about your conventions at the outset, which is standard practice in math anyway. For another thing, "r(u)ng" to me is no different from writing "colonel" and reading it "kernel" (historical accident aside) :) There's also "lieutenant," which is pronounced "leftenant" in the UK.
I wonder if there's any possibility of genetically engineering the fungi and using them to extract more energy from contaminated areas and nuclear waste, as a form of biofuel. That would be quite an interesting way to use nuclear waste.
I used Openshot to edit an (originally) two-hour-long video on my Linux machine [1]. It was not a very complicated edit, but it included adding a logo throughout the whole video and trimming some parts. I'm not a great video editor, but I think Openshot could be better. The interface is quite intuitive, but there's a good deal of latency moving around in it and loading photos and other files. In general, it's not as snappy as I expected it to be. I must say, though, that it was the only reasonable open source editor I could find, so I guess that says a bit about the condition of FOSS video editors...
Sewage contains other volatile compounds that would evaporate and condense along with the water. Purifying sewage for drinking is probably much, much harder than doing the same with just seawater.
This is a very good idea, and it could be used by Google or other advertising companies (if only they wanted to). You would pay the revenue the ads would have made and get the stuff you want to learn instead. This way, Google gets its money, and you get to learn. Also, this could be useful in that it doesn't require much tracking of users. It is definitely not fully sustainable, but it could be better than nothing.
A more reasonable thing to do is to form a council of computer scientists, psychologists, sociologists, economists, politicians, and people otherwise capable of having a useful opinion on the matter.
That council would then have the task of reviewing new technologies and their potential impacts. The council could have different subgroups, each discussing a certain technology, or possibly each with a specific task such as reviewing research papers, monitoring developments inside companies, writing legal proposals, etc.
This council could then sit under a government agency, or better yet, be an NGO that can get funding from the government, donations, universities, etc.
A council, no matter how large, is made up of a small number of individuals relative to the general public as a whole.
Such a council would raise many ethical concerns: Who are these individuals? Why are they on the council? What are their biases? What do their livelihoods ultimately depend on? What conflicts of interest do they have? And hundreds of other questions.
Only an informed debate among the general public can lead to laws that robustly protect the public at large, because vested interests get drowned out by informed dissent from other, equally qualified voices who are not affected by those interests.
> The only difference between Theorem 1 and its quantum equivalent in [13] is that the quantum algorithm has no ε approximation factors (so ε=0 and 1/ε does not appear in the runtime). Thus, we can say that our algorithm performs just as well, up to polynomial slowdown and ε approximation factors.
Here ε is the accepted margin of error, Theorem 1 is the algorithm produced by Tang, and [13] is a reference to Kerenidis and Prakash's quantum algorithm, kept in the quote for fidelity.
This means that this algorithm is not really a substitute for the quantum algorithm: it requires accepting a level of error (which might work in recommendation engines, but not necessarily outside that context), and it still has a polynomial slowdown relative to the quantum algorithm. One thing to note is that not all quantum algorithms perform exponentially better than their classical counterparts. In particular, some quantum search algorithms perform only quadratically better than classical search algorithms. So even if quantum computers did not achieve an exponential speedup here (and arguably they still do, since the ε factor means the two algorithms don't give the same results), they would still be expected to achieve a polynomial speedup, as they do in this case.
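To put rough numbers on the "quadratically better" case (this is Grover-style unstructured search, a different setting from Tang's result, just to illustrate the scale of a quadratic speedup):

    classical unstructured search over N items: Θ(N) queries
    Grover's algorithm:                         O(√N) queries
    e.g. N = 10^6  →  ~10^6 classical vs ~10^3 quantum queries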
In conclusion, stating that this algorithm threatens the prospects of quantum computation is probably an exaggeration.