I think it’s really cool, but I’m not sure about the button design (it seems like it would trap germs) or the placement (the index finger is not where I naturally exert pressure when cutting with a kitchen knife).
It would also be nice if it could be fitted to existing blades as a handle retrofit, but I understand it might not be possible to tune the vibrations properly that way.
I think a hosted solution is best, unless you know for sure that they’ll always have a developer willing to volunteer their time to manage their server and keep everything up to date.
The cost is peanuts compared to the hourly rate of a developer personally managing their website and keeping it secure for them.
As for a static site, do you think a non-technical person can handle a static site generator and deploying to GitHub Pages? I doubt it.
Oftentimes cheaping out on things ends up being very expensive.
I disagree that you shouldn’t use document.write for <script> and <style> tags, as it’s the only way to force a dynamically inserted script to run in a parser-blocking manner during parsing, and to prevent a flash of unstyled content (FOUC) for dynamically inserted styles.
Yes it’s slower, but does it matter for your specific use case? Async scripts are harder to reason about, especially if you have nested templates. FOUC is also a much bigger and more noticeable problem than the tiny delay to parse the CSS snippets.
Forcing scripts to be parser-blocking is also needed if you want to nest document.write calls, to ensure each one writes to the correct location in the document.
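A rough sketch of the difference, with made-up lib.js and theme.css files standing in for whatever you’re inserting:

    <script>
      // Written during parsing: the parser stops here until lib.js has been
      // fetched and executed, so later inline scripts can rely on it.
      document.write('<script src="lib.js"><\/script>');

      // Same idea for styles: the stylesheet is part of the initial render,
      // so there's no flash of unstyled content for the markup below it.
      document.write('<link rel="stylesheet" href="theme.css">');

      // By contrast, an injected script element is async by default: parsing
      // continues and there's no guarantee lib.js has run before later code.
      var s = document.createElement('script');
      s.src = 'lib.js';
      document.head.appendChild(s);
    </script>

The usual caveat: document.write only behaves this way while the parser is still running; calling it after the page has loaded implies document.open and wipes the document.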
With .gmi files or "gemini://" URLs and a compliant Gemini client, I don't even need to load the document beforehand to know whether it intends to execute code on my device. It won't, by design; it won't in the future; and it doesn't require settings management, vendor whitelisting, popups, or caring who makes the browser for it to behave that way.
Whereas that .html document with its noexec meta tag might be updated in the future to suddenly contain code.
With a dedicated Gemini client I simply have to trust/verify the code provided by the client developer.
With your solution I have to trust/verify the code provided by the browser developer(s), the apparatus the browser provides for extensions, and the code provided by the extension developers.
If I'm super paranoid I can just open a .gmi in Notepad or vi and understand it. I can't do that with anything but the most basic HTML.
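For anyone who hasn't seen it, an entire .gmi file is just line-oriented plain text; the contents here are made up:

    # A heading
    A paragraph is just a plain line of text. No scripts, no styles, no embeds.
    => gemini://example.org/notes.gmi A link, one per line
    * A list item
    > A quoted line

That's nearly the whole format.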
Ok I guess if you are that level of paranoid - even though both Chromium and Firefox are open source and under a heck of a lot of scrutiny for security vulnerabilities - then I understand why you prefer Gemini.
I just feel that cutting itself off from the wider clearnet completely kills your audience reach. If you’re OK writing for a very small, insular community then sure, but most people want their writing to be read by as many people as possible.
My father pays for ChatGPT and it’s his personal consultant/assistant for everything - from troubleshooting appliance repair, to finding the correct part to buy, to guiding him step by step through tracking down lost luggage and drafting the email to the airline asking for compensation (which he got).
It does everything for him and it gives him results.
So no, I don’t think it’s most useful for programmers. In fact, I feel people who are not very techy and not good at Googling for solutions benefit the most, as ChatGPT (and LLMs in general) will hand-hold them through every problem they have in life, and is always patient and understanding.
A couple of days ago I was researching website analytics and GDPR/cookie law, and it seems clear that you need user consent even if IP addresses are only processed or temporarily stored before being discarded.
Arguing otherwise is like claiming it’s legal to steal from a store as long as you return the goods the next day - it’s legal fantasy.
I don’t think the EU is eager to go after these “ethical” analytics companies or their users, since they have bigger fish to fry. But if you think you’re legally in the clear using these solutions without user consent, you’re fooling yourself.
The law will change soon as far as I know, but still, the best way to respect data privacy laws is to not send your data to other companies AND to avoid tracking personal and sensitive data as much as possible. If you self-host and don't share the tracked data, you are already doing better than 99% of companies.
I see - I was confused because you mentioned GDPR, but this has everything to do with the ePD. I wasn't aware of this issue; thanks for sharing!
> Arguing otherwise is like claiming it’s legal to steal from a store as long as you return the goods the next day - it’s legal fantasy.
That said, this strongly implies that these privacy-focused analytics platforms are unquestionably breaking the GDPR and behaving in an unethical way, but that seems like a huge overstatement.
I've read the linked blog post and it seems like the analysis hinges on the precise wording of the ePD rather than the GDPR. By the author's own admission, these analytics solutions seem to be in line with both the letter and the spirit of the GDPR. The author even agrees that the wording of the ePD should be addressed and notes:
> Unfortunately I came to the rather demotivating conclusion that there simply isn’t any way to implement web analytics without running afoul of the ePrivacy Directive.
> This was a surprising conclusion at the time. Morally we can go very far: we can put a lot of smart stuff together and create a system that can’t be used to track individual users. But legally, that doesn’t particularly matter. The ePrivacy Directive is written as it is.
> Even the EU Data Protection Working Party decries this. In their 2012 opinion they write:
> the Working Party considers that first party analytics cookies are not likely to create a privacy risk when they are strictly limited to first party aggregated statistical purposes and when they are used by websites that already provide clear information about these cookies in their privacy policy as well as adequate privacy safeguards. […] In this regard, should article 5.3 of the Directive 2002/58/EC be re-visited in the future, the European legislator might appropriately add a third exemption criterion to consent for cookies that are strictly limited to first party anonymized and aggregated statistical purposes.
So it's not that these companies are doing anything inherently immoral or unethical as far as their handling of personal data goes, but they might be behaving unethically by making claims that run afoul of other legislation (ePD) that clashes with the GDPR.
He’s politically naive. I agree with him on much - such as not making the workplace political, and that cancel culture and DEI have in many cases gone mad - but his tolerance, even gentle celebration, of Trump in the name of free speech is a classic example of the paradox of tolerance.
However he is right in many cases, and I don’t expect anyone to be right all the time, myself included. It’s strange to look for political leadership from a programmer anyhow.
At the end of the day it’s not something trivial to implement at the HTML spec/parser level.
For relative links, how should the page doing the import handle them?
Do nothing and let them break, convert them to absolute links, or remap them as new relative links?
Should the include be done synchronously or asynchronously?
The big benefit of traditional server-side includes is that they’re synchronous, which simplifies the logic for in-page JavaScript. But all browsers are trying to eliminate synchronous calls for speed, so it’s hard to see them agreeing to add a new synchronous bottleneck.
Should it be CORS-restricted? If it is, that blocks offline use (the file:// protocol), which really kills its utility.
There are a lot of hurdles, and it’s hard to get people to agree on the exact implementation; it might be best to leave it to JavaScript libraries.
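For comparison, the userland version is short precisely because it picks one answer to each of those questions. A rough sketch - the data-include attribute and header.html are my own invention:

    // Replace every <div data-include="header.html"> with the fetched markup.
    // This runs asynchronously after parsing, relative URLs in the fragment
    // resolve against the including page, cross-origin fetches are subject to
    // CORS, and <script> tags in the fragment will NOT execute when inserted
    // via outerHTML - i.e. each open question has simply been answered one way.
    document.querySelectorAll('[data-include]').forEach(async (el) => {
      const response = await fetch(el.getAttribute('data-include'));
      el.outerHTML = await response.text();
    });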
My guess is that some, or maybe all, of these concerns have already been worked through for CSS @import (https://developer.mozilla.org/en-US/docs/Web/CSS/@import), although, reading the first few lines of the linked article, @import rules must appear near the top of a CSS file, so they're significantly more restricted than an include that could appear in the middle of a document.
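For reference, the restriction looks like this in practice (file names made up):

    /* @import rules must come before all other rules (apart from @charset
       and @layer statements), so the imports effectively sit at the top. */
    @import url("base.css");
    @import url("print.css") print;  /* can be scoped to a media query */

    body { margin: 0; }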
Someone else made the same thing - https://github.com/Paul-Browne/HTMLInclude - but it hasn't been updated in 7 years, which leaves questions. I'll try yours and theirs in due course. Err, and the fragment @HumanOstrich posted elsewhere in the comments.