I felt pretty much the exact opposite. I was immediately drawn to some of the abstract art while not particularly enjoying the traditional paintings. I found them too uncanny and "lifeless".
That being said, if I had a screen that could reasonably pass as a framed image on the wall, I would love a version of this where I could have a well-known picture on it that would be primarily static but sometimes make subtle movements or shift about a bit, as a fun novelty to trip up guests. The typical blinking and repositioning. Like Hopper's Nighthawks, but with the clerk serving a drink or two, the couple lighting a cigarette, or someone walking past the diner.
I think the "surveilance capitalism" and centralization of companies like Meta, Google etc has made many of us very sensitive to any systems that will leave traces of us against our will, be it porn, flock cameras or anything else that is similar.
I think there would be a lot less pushback against such policing efforts if governments had done a better job of reining in tracking on the internet from the start. "Porn websites should check your age" is not that radical, but in a world where it doesn't feel unrealistic that much of the information about you is correlated and processed in ways that are not in your personal best interest, it becomes another loop in the proverbial noose that can be used to hang us all.
I've tried many times to find a nice UI for beets and somehow never come across this. It is exactly what I've been searching for all these years... Thanks for sharing!
I think 'restoring the ability to wiretap' is misleading, as this is not 'restoring the ability'; it's more akin to 'wiretapping everyone all the time'.
Wiretapping requires probable cause and a court order before it can be used; Chat Control does not. It will generate thousands of reports daily, and no one will be blamed or punished for false reports that turn out to lack probable cause. Wiretapping was a reactive tool in the police's arsenal; it was not proactive like this is supposed to be.
Wiretapping requires/required a significant manpower investment to surveil a single potential criminal, which rightfully forced the police to prioritize their resources. Chat Control is automated and will enable the same number of police to police far more people.
Wiretapping was not retroactive. This system will create records that can be stored for a long time, very cheaply.
This is not restoring wiretapping, this is supercharging wiretapping.
You are correct. I was still basing my post on the assumption that the AI scanning was still in the bill, and that the proposed "two strikes, then chats are disclosed" provision was there as well, which it is not. That provision seemed to imply that messages would have to be stored in order to be provided after the two strikes.
I wasn't very clear: my original post always assumed that false positives were involved and that stored messages were a result of that, not that all messages would be stored at all times.
The images and links that are scanned and deemed potentially problematic will be stored for up to 6 months or until they are deemed unproblematic. There is still a potential 6-month paper trail here, and in politically turbulent times that paper trail could still be damaging retroactively, even if the report contains non-CSAM material.
I wouldn't necessarily call it comforting fantasy, people change their minds all the time. I think we're all to some extent able to justify some negative sides of any political movement as tensions rise.
I've felt this myself a few times now, both with the attempted assassination of Trump and now with Charlie Kirk. I am sad that public discourse and our democracies are kind of unraveling these days and that this is just a sad reality of that fact. As far as Trump or Charlie Kirk go, I have no sympathy whatsoever.
I'm not sure I really want to blame anyone for things becoming like this, it all seems like par for the course in the world we've created for ourselves. I just wish we were able to stop before this.
I've been thinking about doing the same thing as the OP for a while, but haven't really gotten around to it. I've started scrobbling to last.fm in order to see if their recommendation algorithm can be a replacement for Spotify. The jury is still out on that.
As my financial situation has gone from a place where I didn't really need to care and could still save a healthy amount per month to one where keeping up with my finances feels more necessary, I've gone from really liking Spotify to realizing that I've probably spent enough money on Spotify over the last 15-ish years to buy a cheap car, or quite a sizeable music collection, had I just spent that money on music directly.
I have gotten my money's worth from Spotify for sure. I listen to it a lot and have probably heard orders of magnitude more music than if I had merely bought an album or so every month instead. But at this point I can't get over the fact that if/when I unsubscribe from Spotify, I will have nothing and will have to spend a lot to regain access to the music I actually care about.
In a sense, I wish there was an Audible-style subscription for music: give me the ability to sample music as a replacement for Spotify radio, and/or some playlists like Discover Weekly and a few personalized ones, plus a credit to pick something to buy permanently.
I had the same experience with matrix.org, then I set up my own homeserver and it became a LOT snappier. It's not perfect, but it's been an adequate replacement for me and my few friends who are interested in self-hosting our own services.
It's not about being defeatist, at least not for me. It's about what is considered good enough.
Sure, locking down the OS in this way is more secure, but it's also very restrictive, and personally I don't think the added security justifies it. Lock picks do exist, but I am still entirely content with a single lock on my front door. I do not need an extra biometric sensor or camera or a security representative standing outside my door checking the IDs of people passing by in order to consider myself reasonably safe.
Maybe this is cultural/geographical, but I've yet to hear of anyone who lost access to their mail or had unauthorized access to their bank account as a result of malware. I'm sure you can find examples, but I do not consider this an attack vector that is prevalent enough to warrant requiring signed apps or preventing manual installation.
I think the "algorithm movie" concept, describes something also very prevalent in music these days.
Music producers who learn from tutorials and want to make music within a certain genre are incentivized not to stray too far from the prescribed genre conventions. This in turn is amplified by algorithms that will also not stray too far when recommending music based on someone's listening habits. Those habits are, in turn, often "poisoned" by the listener not really paying attention to the music being served to them, as it might only be on in the background while working or doing something else.
It's like a lot of people with nothing to say, being recommended by something that does not understand anything, to people who don't really listen.
I still prefer traditional search engines over LLMs, but I admit their results feel worse than they have traditionally.
I don't like LLMs for two reasons:
* I can't really get a feel for the veracity of the information without double-checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from an LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double-check it anyway, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.
* I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information-gathering process, I like it.
I would rather use an AI assistant that could help me search and curate the results instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.
It's time to bind "Please be concise in your answer and only mention important details. Use a single paragraph and avoid lists. Keep me in the discussion, I'll ask for details later." to F1.
You've just made me realize that I actually do need that as a macro. I probably type that ten times per day lately. Others might include "in one sentence" or "only answer yes or no, and link sources proving your assertion".
No matter how many times I get ChatGPT to write my rules to long-term memory (I checked, and multiple rules exist in LTM multiple times), it inevitably forgets some or all of the rules because after a while, it can only see what's right in front of it, and not (what should be) the defining schema that you might provide.
I haven't used ChatGPT in a while. I used to run into a problem that sounds similar. If you're talking about:
1. Rules that get prefixed in front of your prompt as part of the real prompt ChatGPT gets. Like what they do with the system prompt.
And
2. Some content makes your prompt too big for the context windows where the rules get cut off.
Then, it might help to measure the tokens in the overall prompt, have a max number, and warn if it goes over it. I had a custom chat app that used their APIs with this feature built in.
Another possibility is, when this is detected, it asks you if you want to use a model with a larger context window. Those cost more, so it would be presented as an option. My app let me select any of their models to do that manually.
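For what it's worth, a rough sketch of that kind of pre-flight check might look something like this (assuming the tiktoken library; the model names and context limits are illustrative placeholders, not what my app actually used):

    # Sketch: warn before sending a prompt that blows the token budget.
    # Model names and context sizes below are assumptions for illustration.
    import tiktoken

    CONTEXT_LIMITS = {
        "gpt-4o-mini": 128_000,  # assumed context window, illustration only
        "gpt-4o": 128_000,
    }

    def count_tokens(text: str, model: str) -> int:
        """Count how many tokens this text would consume for the given model."""
        try:
            enc = tiktoken.encoding_for_model(model)
        except KeyError:
            enc = tiktoken.get_encoding("cl100k_base")  # fallback encoding
        return len(enc.encode(text))

    def check_prompt(rules: str, user_prompt: str, model: str, budget: int) -> bool:
        """Return True if rules + prompt fit within the budget, else warn."""
        total = count_tokens(rules + "\n" + user_prompt, model)
        if total > budget:
            print(f"Warning: {total} tokens exceeds the {budget}-token budget "
                  f"for {model}; trim the rules or pick a larger-context model.")
            return False
        return True

The point is just to catch the overflow before the rules silently fall out of the window, rather than discovering it when the model starts ignoring them.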
> don't feel that short vs long answers make any difference
The “thinking” models are really verbose output models that summarise the thinking at the end. These tend to outperform non-thinking models, but at a higher cost.
Anthropic lets you see some/all of the thinking so you can see how the model arrived at the answer.
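As a rough illustration of what that looks like in practice (assuming Anthropic's Python SDK with its extended-thinking option; the model name and token budgets below are placeholders):

    # Sketch: request visible "thinking" from an Anthropic model, then print
    # the reasoning trace followed by the concise final answer.
    # Model name and token budgets are placeholder assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # placeholder model name
        max_tokens=2048,
        thinking={"type": "enabled", "budget_tokens": 1024},  # verbose reasoning pass
        messages=[{"role": "user", "content": "Why is the sky blue? Be concise."}],
    )

    for block in response.content:
        if block.type == "thinking":
            print("THINKING:", block.thinking)  # the long working-out
        elif block.type == "text":
            print("ANSWER:", block.text)        # the short summary at the end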
One problem with LLMs is that the amount of "thinking" they do when answering a question depends on how many tokens they use generating the answer. A big part of the power of models like DeepSeek R1 is that they figured out how to get a model to use a lot of tokens in a logical way to work towards solving a problem. The models don't know the answer in advance; they arrive at it by generating it, and generating more helps them. In the future we'll probably see the trend continue where the model generates a "thinking" response first and then summarizes the answer concisely.
> I can't really get a feel for the veracity of the information without double checking it.
This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health and scientific issues, or specific facts about lots of things. I want authoritative sources.
Unless someone's life is on the line, usually eyeballing the source URL is enough for me. If I'm looking for API documentation, there are a few well-known URLs I trust as authoritative. If I'm looking for product information, same thing. If the search engine points me to totallyawesomeproductleadgen19995.biz, I'm probably not getting reliable information.
An LLM response without explicit mention of its provenance... There's no way to even guess whether it is authoritative.
The sources will start to be redundant eventually. It's actually O(1) once you have looked at all the sources... that there are... in the world. Trivial!
I'm not sure. In this context, sources are utterances rather than speakers. So they're only finite if we limit ourselves to a snapshot of past utterances while doing our checking.
Wait, so if you go to python.org and the doc page says, "Added in version 3.11", you double-check this?
What do you even use for the double-check? Some random low-quality content farm? A glitchy LLM? A dodgy mirror of the official docs, full of ads? Or do you actually dig through the source code for this?
And do you keep double-checking with all other information on the page... "A TOMLDecodeError will be raised on an invalid TOML document." - are you going to start an interactive session and check which error will be raised?
Part of why I prefer to use a search engine is that I can see who is saying it, and in what context. It might be Wikipedia, but also the CIA World Factbook. Or some blog, but also python.org.
Or (lately) it might be AI SEO slop, reworded across 10 sites but nothing definitive. Which means I need to change my search strategy.
I find it easier (and quicker) to get to a believable result via a search engine than going via ChatGPT and then having to check what it claims.
>A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from an LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double-check it anyway, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.
And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the more I can't trust them.
I find LLMs useful for the case where I'm not sure what the right terms are. I can describe something and the LLM gives me a term, which I then type into a search engine to get more information. I'm only starting to use LLMs though, so maybe I'll use them more in the future; only time will tell.