Hacker News
Everyone will have their own AI (subconscious.substack.com)
1 point by ryanwaggoner on March 16, 2023 | 3 comments


They won't have "their own": it will be proprietary, centralized, outside their real control, and heavily imposed on them.


You're probably right, but that's depressing! I found this vision more hopeful.

Nothing is set in stone; it's ultimately up to us as humanity how we integrate AI into our civilization, at least to a point.


The masses may now see "AI" as something new. But it's not. It's already integrated into our "modern" civilizations in far more fields than most people know, sometimes for useful things (triage, computer vision, optimization, automatic guidance of machines, etc.), and sometimes in ways where it's debatable how useful it is, or for whom it's really useful. I'm thinking of "social networks'" "personalized" "suggestions", which push people into a real dependency situation.

Nowadays there is strong pressure toward centralized, closed chat and query services that practice "open-washing" the way "green-washing" works: they merely reflect, through predefined pathways and with convincing language, human-built knowledge that could be shared directly. I'm not saying this method is good or bad. Technically speaking, I think it's good and has the potential to be useful for getting a neutral orientation within a huge amount of human knowledge.

The way it's pushed may also lead people to "believe" this will be the next indispensable tool for living sanely, when maybe it isn't, just as we are now living with the threat of private "social networks". The main game behind it is opening a "safe" path for shareholders who want to perpetuate an economic, partly hidden culture as their main objective.

The "reality" "feeling depressing" is perhaps not a good reason to throw more credit, no matter the scenario, just to automatically "feel safer" or "in some right camp" and then, letting the others decide for you. I think we are far from knowing the consequences of non-user controlled tools as "services" that will certainly end to replace their own thinking by something that fits the company and partners behind, one day or another. Thus potentially turning them into externally controlled tools of self servitude for one camp of soul conquest or another, if not fully in the long-term, to certain extents.

That will probably give those mass-market products a status of power that, at some point, no one will even think to defy, and it begins with them simply "looking cool", being "in fashion" as a form of self-easement. And I agree with you where you say "it's ultimately up to us as humanity how we integrate AI into our civilization". I think that is true, but it can only be done with a well-balanced awareness of all sides, trying to weigh the real costs of each situation over every time horizon, and then actually acting on it (which clearly seems improbable). We need to take care of this: constantly watching who controls the tools we are "invited" or "conducted" to use.

Security is also mostly omitted from this topic, scientifically speaking, and that is in itself a potential threat for tomorrow. Even if security were shown to be considered, it would still only protect whoever writes the requirements and performs the verification. So, with those elements in mind, I can perhaps, for now, disapprove of the strong network of influence of people promoting "AI" on behalf of what are in fact privately sponsored services, which will always avoid speaking of the real costs to their potential subjects.

Edit: typos, enhancements



