Hacker News | relate's comments

FYI, here is an overview of all of his columns:

https://www.bloomberg.com/opinion/authors/ARbTQlRLRjE/matthe...


Here is an example of a demo we released that uses a tooltip to make it more obvious how it works:

https://hific.github.io/


You could allow for errors and use a bloom filter to avoid the space issue:

https://en.wikipedia.org/wiki/Bloom_filter


Or you could use a privacy-preserving lookup API, but that might be too much traffic. A Bloom filter could be downloaded locally and is probably a better solution.


Bloom filter as first pass and then privacy-preserving lookup API if it returns "probably a match"?


If it returns "probably a match", you can just look up with HN directly, like it does now.
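The two-stage idea above (local Bloom filter first, exact lookup only on a "probably a match") is easy to sketch. Here is a minimal, illustrative Python version; the class, parameters, and hashing scheme are my own choices, not from any actual extension. The k bit positions come from slices of a single SHA-256 digest, so only the standard library is needed:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: false positives possible, false negatives not."""

    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k bit indices from one SHA-256 digest of the item.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("https://example.com/article")
print(bf.probably_contains("https://example.com/article"))  # True
print(bf.probably_contains("https://example.com/other"))    # almost certainly False
```

Only URLs for which `probably_contains` returns True would then hit the real lookup, so the exact query leaks (at most) a small, mostly-matching subset of browsing history.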


Would you prefer 11111+11111+33333 instead?


Still a bit boring, feels like cheating, something like 12321 + X + Y would be nice.


55555 = 12321 + 43234 + 0

:)

And if you don't like that:

55555 = 12321 + 23132 + 20102

Palindromes are sort of the least surprising numbers to have this property because you can easily "split" them into as many palindromes as you want like this. The surprising result is that you could write something like 19837100018374 as the sum of only 3 palindromes.


More surprising is that you can do it in any base >= 5
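The decompositions above are quick to verify mechanically. A small Python check (function names are mine), with the palindrome test parameterized by base so the base >= 5 claim can be poked at too:

```python
def digits(n, base=10):
    # Digits of n in the given base, least significant first.
    ds = []
    while n:
        n, r = divmod(n, base)
        ds.append(r)
    return ds or [0]

def is_palindrome(n, base=10):
    ds = digits(n, base)
    return ds == ds[::-1]

# Verify both decompositions of 55555 from the thread:
for parts in [(12321, 43234, 0), (12321, 23132, 20102)]:
    assert sum(parts) == 55555
    assert all(is_palindrome(p) for p in parts)
print("both check out")
```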


I've done whiteboard only interviews, as well as shared doc/coderpad only, and I find both limiting. When thinking about the problem, it's great to have a whiteboard to sketch examples etc, but writing code is much easier with a text editor.

Recently I interviewed at Google, and they had no problem fulfilling my request to do both: sketch the solution approach on whiteboard and then write the code with a laptop.


and whether you are willing to work 96 (40+56) hours a week...


I find it strange that you mention transfer learning, since one of the reasons neural networks are so popular is that they tend to excel at it. Adapting (i.e. fine-tuning) networks trained on a task with a lot of data (e.g. image classification on ImageNet) to different tasks, such as image segmentation, has proven a very successful approach.


That was my intent in mentioning them. My understanding, though, is that we don't have a solid foundation for why it works, just that this is the direction we are moving in and that neural networks are good at it.
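The fine-tuning recipe mentioned above can be shown in miniature without any deep learning library. This is a toy sketch of my own (nothing from the thread): pretend a shared feature map was "pretrained" on task A, freeze it, and fit only a new linear head on task B by gradient descent. Because the frozen feature is already good for task B, a tiny head suffices:

```python
# "Pretrained" feature extractor, pretended to be learned on task A
# (fitting y = x^2) and kept frozen for task B.
def features(x):
    return [x, x * x]

# Task B: y = 3*x^2 + 1. Fit only head weights w and bias b over the
# frozen features, via plain per-sample gradient descent.
def fit_head(data, lr=0.01, steps=2000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, y in data:
            f = features(x)
            err = w[0] * f[0] + w[1] * f[1] + b - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

data = [(x / 10, 3 * (x / 10) ** 2 + 1) for x in range(-10, 11)]
w, b = fit_head(data)
# The head should recover roughly w ~ [0, 3], b ~ 1.
```

The same pattern scales up: replace `features` with a frozen pretrained network and the head with a new final layer, and you have the usual "feature extraction" flavor of transfer learning.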


Couldn't they open source the design/implementation without a permissive licence? I.e. everyone could audit, but nobody could use it commercially.


That would be non-free / non-open, to be clear. A copyleft license is non-permissive but is still free/open. I just want to be clear, since some people mistakenly think that allowing commercial use requires a permissive license.


In your example, this would correspond to having the number x = 0 ± 1 and then wanting to compute 1/x. If your number can potentially be zero, why would you want to use it as a divisor?


The problem remains if you wrap the division in a non-zero check. Or maybe the interval [-1,+1] is already kind of a lie, i.e. x is known to be in the interval but you additionally know that x is non-zero when you are about to perform the division. The example is just meant to illustrate the problem that using a single interval is not good enough to track error bounds in the general case.
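The division problem described above is easy to demonstrate with a tiny interval-arithmetic sketch (my own illustrative code, not any particular library). Once the divisor interval straddles zero, the quotient degenerates to (-inf, inf), because a single interval cannot express "x is in [-1, 1] but x != 0":

```python
class Interval:
    """Toy closed interval [lo, hi] supporting division only."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __truediv__(self, other):
        if other.lo <= 0 <= other.hi:
            # 1/x is unbounded near 0: the quotient blows up to the
            # whole real line, losing all error-bound information.
            return Interval(float("-inf"), float("inf"))
        candidates = [self.lo / other.lo, self.lo / other.hi,
                      self.hi / other.lo, self.hi / other.hi]
        return Interval(min(candidates), max(candidates))

x = Interval(-1.0, 1.0)        # x = 0 +- 1
q = Interval(1.0, 1.0) / x
print(q.lo, q.hi)              # -inf inf
```

Even if the surrounding program guarantees x != 0 at the division, the interval representation has no way to carry that fact, which is exactly why a single interval is not enough to track bounds in general.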


I'm not sure what you are implying. Iceland suffered a hit in the 2008 crash, but has since almost fully recovered.

GDP Iceland over time (2017 missing): https://data.worldbank.org/indicator/NY.GDP.PCAP.CD?location...

GDP per capita comparison (Iceland ranking 6th): https://data.worldbank.org/indicator/NY.GDP.PCAP.CD?year_hig...

