Same story here. I'll never go back to Apple Music, even if only for streaming. I had hundreds of tracks and albums just demolished by something related to iTunes Match, didn't realize for months, and didn't have a solid backup system at the time.
I've been on mailbox for 6 years, and I think the only issue I've had with rejections has been the email confirmation from some Discourse-based forums. But after I contacted the hosts and was added manually, the forums' emails made it through with no issue.
A very basic sketch of how it works: encryption is basically just a function e(m, k) = c, where “m” is your plaintext, “k” is the key, and “c” is the encrypted data. We call it an encryption function if the output looks random to anyone who does not have the key.
If we could find some kind of function “e” that preserves the underlying structure even when the data is encrypted, we'd have the outline of a homomorphic system. E.g. if the following holds:
e(2, k) * e(m, k) = e(2m, k)
Here we multiplied our message by 2 even in its encrypted form. The important thing is that every computation must produce something that looks random, but once decrypted the result should reflect the actual computation that happened.
It’s been a while since I did crypto, so Google might be your friend here; but there are situations where, e.g., RSA preserves multiplication, making it partially homomorphic.
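To make that concrete, here's a toy sketch with tiny made-up parameters (unpadded "textbook" RSA; real RSA uses padding, which deliberately breaks this property):

    # Toy, insecure textbook RSA, just to show the multiplicative homomorphism.
    p, q = 61, 53
    n = p * q                              # public modulus
    e = 17                                 # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    c1, c2 = enc(6), enc(7)
    product_of_ciphertexts = (c1 * c2) % n
    assert dec(product_of_ciphertexts) == 6 * 7   # multiplication survived encryption

Anyone can multiply the two ciphertexts without knowing d, yet the result decrypts to the product of the plaintexts (as long as the product stays below n).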
I get how that works for arithmetic operations - what about stuff like sorting, finding an element in a set, etc.? This would require knowledge of the cleartext data, wouldn't it?
You can reduce anything happening on a computer to arithmetic operations. If you can do addition and multiplication, the system is Turing complete; all other operations can be constructed from those two.
While correct, that doesn't answer the question at all, though. If I have my address book submitted into an FHE system and want to sort by name - how do you do that if the FHE system does not have access to the cleartext names?
> If we could find some kind of function “e” that preserves the underlying structure even when the data is encrypted
But isn't such a function a weakened form of encryption? Properly encrypted data should be indistinguishable from noise. "Preserving underlying structure" seems to me to be in opposition to the goal of encryption.
Given that the operations you can execute on the ciphertext are Turing complete (it suffices to show that we can do addition and multiplication), it follows that any conceivable computation can be performed on the ciphertext.
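To see how something non-arithmetic like sorting fits, here's a plaintext sketch (made-up helper names, no actual encryption) of a compare-and-swap built only from addition, subtraction, and multiplication on bits -- the operations an FHE scheme lets you perform on ciphertexts:

    # Plaintext sketch: compare-and-swap using only +, - and * on 0/1 values.
    def bits(x, w):                      # x as w bits, MSB first
        return [(x >> (w - 1 - i)) & 1 for i in range(w)]

    def to_int(bs):
        v = 0
        for b in bs:
            v = 2 * v + b
        return v

    def greater_than(a_bits, b_bits):
        gt, eq = 0, 1                    # running "a > b so far" and "equal so far"
        for a, b in zip(a_bits, b_bits): # MSB to LSB
            gt = gt + eq * a * (1 - b)   # first differing bit decides
            eq = eq * (1 - a - b + 2 * a * b)  # eq AND (a == b), as arithmetic
        return gt                        # 0 or 1

    def compare_and_swap(x, y, w=8):
        a, b = bits(x, w), bits(y, w)
        g = greater_than(a, b)           # 1 if x > y else 0
        lo = [g * bj + (1 - g) * aj for aj, bj in zip(a, b)]  # per-bit select
        hi = [g * aj + (1 - g) * bj for aj, bj in zip(a, b)]
        return to_int(lo), to_int(hi)

    assert compare_and_swap(9, 3) == (3, 9)
    assert compare_and_swap(3, 9) == (3, 9)

Stack enough of these into a fixed sorting network (e.g. Batcher's) and you can sort without ever branching on the data; run the same arithmetic on ciphertexts and the server sorts the address book without ever seeing a cleartext name -- just very slowly.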
Oh this is outside the context of encryption. My curiosity was, is there such a compression function that permits operations on the compressed data without first decompressing it?
One that is kind of in this spirit: you can describe sparse matrices by omitting all the zeros and only describing the indices that have data. In this compressed form you can still perform normal matrix operations without having to unpack them into the “normal form”. Now this is neither encryption nor a particularly interesting compression, but it does prove that it is possible in principle ;p
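A minimal sketch of that idea, with a made-up dict-of-coordinates representation (not a real library): adding two sparse matrices without ever expanding them to dense form:

    # Sparse matrix as {(row, col): value}, zeros omitted.
    # Addition works directly on the compressed form, no dense "unpacking".
    def sparse_add(A, B):
        out = dict(A)
        for key, v in B.items():
            out[key] = out.get(key, 0) + v
            if out[key] == 0:
                del out[key]             # keep the representation sparse
        return out

    A = {(0, 0): 2.0, (5, 7): -1.0}
    B = {(5, 7): 1.0, (3, 3): 4.0}
    print(sparse_add(A, B))              # {(0, 0): 2.0, (3, 3): 4.0}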
a simple example of partially homomorphic encryption (not fully homomorphic) would be a system that supports addition or multiplication. You know the public key and the modulus, so you can respect the "wrap around" value and do multiplication on an encrypted number.
other ones I imagine behave kinda like translating, stretching, or skewing a polynomial or a donut/torus, such that the points/intercepts are still solvable, still unknown to an observer, and actually represent the correct mathematical value of the operation.
just means you treat the []byte value with special rules
Thank you. So based on your examples it sounds like the "computation" term is quite literal. How would this apply at larger levels of complexity like interacting anonymously with a database or something like that?
There are FHE schemes which effectively allow putting together arbitrary logical circuits, so you can make larger algorithms FHE by turning them into FHE circuits -- Jeremy Kun's 2024 overview [1] has a good summary.
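For a feel of how a "database" interaction becomes such a circuit, here's a plaintext model (made-up names, no actual encryption) of a private lookup: every row is touched, and the answer is assembled purely with addition and multiplication, so nothing branches on the data. Under an FHE scheme the same circuit would be evaluated over encrypted bits.

    # Plaintext model of a private lookup as an arithmetic circuit:
    # result = sum over rows of eq(query, key) * value.
    def bits(x, w=8):
        return [(x >> i) & 1 for i in range(w)]

    def eq_bits(a_bits, b_bits):
        prod = 1
        for a, b in zip(a_bits, b_bits):
            prod *= (1 - a - b + 2 * a * b)   # 1 iff bits equal, as arithmetic
        return prod

    def private_lookup(table, query):
        # table: list of (key, value); every row is touched, so the access
        # pattern reveals nothing about which key was requested
        return sum(eq_bits(bits(k), bits(query)) * v for k, v in table)

    table = [(3, 111), (7, 222), (9, 333)]
    print(private_lookup(table, 7))           # 222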
In Robert Waggoner's book, Lucid Dreaming: Gateway to the Inner Self, the author, who was a very skilled lucid dreamer from childhood, describes how he had a moment of insight after waking from a lucid dream. He had been thinking of himself as the controller of his dreams, and treated them mostly as entertainment. But he realized that for everything he "decided" in his lucid dream, there was far more content that arrived, unplanned--scenery, characters, events, and so on.
This made him curious about using awareness within the dream not just for entertainment, but to conduct experiments and tests, to research what was and wasn't possible, what dream characters and dream consciousness knew or didn't know, all from within the dreams themselves. He's spent decades doing that, and comparing notes with other skilled lucid dreamers.
It's an incredibly fascinating book, a sort of natural history of the dream world by a seasoned traveler within it.
Also has a bunch of useful tips on cultivating lucid dreaming, which I remember working pretty well a few times when I had been disciplined enough to practice them.
I understand that, but even so the sentence makes no sense. I assumed that they meant to write: ‘Highlights automatically searches directions for a location’. Pardon my pedantry, but how is it possible to ‘surface’ a direction? Am I missing something?
My point stands… the quality of writing in this article is very poor: overly long sentences, clunky information reveal, etc. It feels more like a first draft.
Since I only partly understand your comment, I'm not sure if this pertains, but the phrase "spatiotemporal encoding" caught my attention. It makes intuitive sense that complex cognitive function would be connected to spatiotemporal sensations and ideas in an embodied nervous system evolved for, among other things, managing itself spatially and temporally.
Also, Riccardo Manzotti's book "The Spread Mind" seems connected. Part of the thesis is that the brain doesn't form a "model" version of the world with which to interact, but instead, the world's effects are kept active within the brain, even over extremely variable timespans. Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Conscious experience as "encoding" in that sense would not be an inner representation of an outer reality, but more a kind of spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it. The "mind" is not a separate observer or calculator but is "spread" among all phenomenal objects/events with which it has interacted--even now-dead stars whose light we might have seen.
Not sure if I'm doing the book justice here, but it's a great read, and satisfyingly methodical. The New York Review has an interview series if you want to get a sense of the ideas before committing to the book.
This is salient enough that I think you intuitively understood my comment. I won't pretend I can fully explain pending hypotheses either; it's more about research angles (e.g., connecting tools with problem categories).
Thanks a lot for the recommendations. That's what I love about HN. One often gets next-level great pointers.
> Objects of consciousness can't be definitively separated from their "external" causes, and can be considered the ongoing activity of those causes, "within" us.
Emphatically yes.
> […] spatiotemporal imprint that is identical with and inextricable from the activity of "outer" events that precipitated it
Exactly, noticing that it includes, and/or is shaped, by "inner" events as well.
So there's the outer world, and there's your inner world, and only a tiny part of the latter is termed "conscious". We gotta go about life from that vantage, certainly, but it's an incredibly limited perspective too. The 'folding power' of nature (to put so much information in so little space) is mesmerizing, truly.
I like to bring it down to earth to think about it. When you're in pain, or hungry, or sleepy—any purely physiological, biological state—it will noticeably impact (alter, color, shade, formally "transform" as in filters or gating of) the whole system.
Your perception (stimuli), your actions (responses), your non-conscious impulses (intuitions, instincts, needs & wants…), your emotions, thoughts, and even decision-making and moral values.
I can't elaborate much here as it's bound to get abstract too fast, to seem obfuscated when it's anything but. I should probably write a blog or something, ha ha. You too, you seem quite astute at wording these things.
Whew. I'm sorry you had that situation to grow up in, caught up from an early age in maneuvering relative to a parent's insecurities and emotional blindness. I can relate in some ways. I hope the clarity with which you wrote about it now is an expression of having come to some healing and peace!
You know, it's taken a lot longer than I would have hoped, but I'm grateful enough that it happened at all that I don't dwell much on what could have been!
All of those options sound interesting and plausible, yet when I go for a walk while talking with a friend, I'm imagining an alien anthropologist wondering, "Are they communicating the ambient air temperature, and the availability of food nearby, their orientation to the sun and Venus setting?" Maybe the whales are gossiping, or sharing old stories, songs, jokes...
like the weather, restaurants, and plans for the summer, or even breeding opportunities? (will admit to missing the subtlety of the joke tho)
maybe they're complaining about how the options on whale tinder are too blubbery, but the categories for living things to talk about are definitely shared, and dividing phenomena into them is pretty straightforward. the real risk might be that the next iteration of GPT simulates an always-just-out-of-reach imagined better option and whales stop reproducing in pursuit of it. we should try it on another species first.
Good point, yeah, generally the basic survival topics are still in play even when we're busy with finer details! Still, giving an intelligent species the credit for (perhaps) engaging in the finer details seems like something generous to leave on the table! Like whale dating apps, exactly.
Maybe the trees would show something useful to GPT
This point of view could be applied to any word, and the extreme result is that you'd negate meaningful or useful communication, or that someone would have to be the arbiter of what is a legitimate concept or not.
Between vocabulary, commonly understood meaning, possible meaning, and actual personal experience, there are many detours and jumps. "Dog" as a word, concept/meaning, and experience, has these issues. What's not a dog, which dog are you thinking about, and does this apply to "dog" or just those specific dogs you've experienced? Etc.
Words like "consciousness", for less concrete experiences than "dog", tend to have more fog in the gaps between word and shared meaning, and between those and individual experience.
It seems like you're trying to flatten a person's curiosity about the implications of a shared concept or experience into a "mundane" phantasm about a word whose referent is either nonsensical or nonexistent to you.
I think that the gaps between word, concept, and experience, while confusing and difficult, are worthy of more respect and wonder than to just flatten them as though their existence didn't imply something potentially important and essential is happening there. Language arose because we have actual experience to share, however tricky it can be to verbalize. It doesn't work perfectly, and leads to confusion, but here we are, reading and writing.
"Consciousness" may be a word for a slippery concept/experience, but that doesn't equate to questions about consciousness being inherently semantic.
>This point of view could be applied to any word, and the extreme result is that you'd negate meaningful or useful communication, or that someone would have to be the arbiter of what is a legitimate concept or not.
False. <- see? There's a word that doesn't apply. But you're not wrong. This POV does apply to MANY words. It just goes to show how MANY debates are traps. You think you're discussing something profound but it's just vocabulary.
>Between vocabulary, commonly understood meaning, possible meaning, and actual personal experience, there are many detours and jumps. "Dog" as a word, concept/meaning, and experience, has these issues. What's not a dog, which dog are you thinking about, and does this apply to "dog" or just those specific dogs you've experienced? Etc.
Right. So your example illustrates my point. Is it profound and meaningful to spend so much time discussing what is a dog and what isn't a dog? What is the definition of the word dog? No. It's not. Same. With. Consciousness. It's not profound to discuss vocabulary.
>It seems like you're trying to flatten a person's curiosity about the implications of a shared concept or experience into a "mundane" phantasm about a word whose referent is either nonsensical or nonexistent to you.
No I'm just stating reality as it is observed. The essence of a debate about consciousness is rationally and logically speaking entirely a vocabulary problem. This isn't even an attempt to "bend" anything to lean my way. The ultimate logical interpretation of any situation involving a debate on what is consciousness and what is not conscious is a vocabulary problem. Literally. Read the last sentence.
>I think that the gaps between word, concept, and experience, while confusing and difficult, are worthy of more respect and wonder than to just flatten them as though their existence didn't imply something potentially important and essential is happening there. Language arose because we have actual experience to share, however tricky it can be to verbalize. It doesn't work perfectly, and leads to confusion, but here we are, reading and writing.
Made up concepts also arise from words. Gods, goddesses, spirit, monster, hell, dryad, minotaur, Cerberus. The existence of made up concepts logically speaking means that it's possible "consciousness" is a made up concept.
>"Consciousness" may be a word for a slippery concept/experience, but that doesn't equate to questions about consciousness being inherently semantic.
It does. Each question about consciousness is inherently relating the word to another semantic word. This is literally what's going on.
Yes. For example "randomness." Seems mundane, but this simple intuitive concept can't actually be formally defined. I have yet to see an actual algorithm for a truly random number generator.
The profoundness comes from the fact that on the intuitive level we are all hyper aware of what random means. But on the formal level we have no idea what it is.