Hacker News | magicalhippo's comments

I've seen some bedroom activities that I would definitely categorize as requiring extraordinary athletic abilities...

There was some research[1] strongly suggesting that varied use makes batteries last much longer than the steady use most battery tests apply. That is, bursts of high-current draw followed by moderate draw, etc., versus the constant-current load typically used when evaluating battery performance. From the paper:

> Specifically, for the same average current and voltage window, varying the dynamic discharge profile led to an increase of up to 38% in equivalent full cycles at end of life.

This was unexpected, which explains why the batteries fared better than predicted.

[1]: https://www.nature.com/articles/s41560-024-01675-8 Dynamic cycling enhances battery lifetime (open access)
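
To illustrate what "same average current" means here (the numbers below are made up for illustration, not taken from the paper): a constant load and a burst-then-moderate profile can draw the same mean current while stressing the cell very differently.

```python
import statistics

# Hypothetical discharge profiles (currents in amperes, one value per
# time step). Both average to 1 A, which is the comparison the paper
# makes: same average current, different dynamic profile.
constant_profile = [1.0] * 60            # steady 1 A, the typical test load

# Dynamic profile: 2 A bursts followed by 0.5 A moderate draw, in a
# 1:2 ratio so the mean works out to the same 1 A.
dynamic_profile = [2.0] * 20 + [0.5] * 40

assert statistics.mean(constant_profile) == 1.0
assert statistics.mean(dynamic_profile) == 1.0
```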


We use SVN at work and it's a nightmare there too, "mine" and "theirs" and whatnot. I frequently end up looking at historical versions just to verify which is which.

If I have a merge conflict I typically have to be very conscious about what was done in both versions, to make sure the combination works.

I wish for "working copy" and "from commit 1234 (branch xyz)" or something informative, rather than confusing catch-all terms.
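
Git can at least be told to label both sides and show the common ancestor via `merge.conflictStyle diff3`. A throwaway demo (file name and branch name are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name you
# diff3 style inserts the common-ancestor version between the two
# sides, and the markers carry the branch/commit labels.
git config merge.conflictStyle diff3

echo base > file.txt
git add file.txt && git commit -qm base
git checkout -qb xyz
echo theirs > file.txt
git commit -qam theirs
git checkout -q -                  # back to the original branch
echo mine > file.txt
git commit -qam mine
git merge xyz || true              # conflicts, as intended
cat file.txt                       # <<<<<<< HEAD ... ||||||| ... >>>>>>> xyz
```

The conflicted file then shows "mine" under `<<<<<<< HEAD`, the ancestor under `|||||||`, and "theirs" above `>>>>>>> xyz`, so it's always clear which side came from where.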


Please tell me you are using Git-SVN or Hg-SVN. Using bare SVN as a client hasn't been necessary in over a decade.

We're using SmartSVN, which makes life a fair bit better but still keeps this confusing terminology.

We'll be migrating to Git this year, though.

For reference, the codebase is over 20 years old, and includes binary dependencies like libraries. Makes it easy to compile old versions when needed, not so easy on the repository size...


Is there like a "kitchen sink law" for SaaS businesses? It seems "everyone" finds it necessary to do "everything".

I'm a Proton subscriber. I just wanted mail. I don't use the 5-10-15? other products that come with the subscription.


I like the calendar, it deals well with all other providers. I like Pass, especially for sharing credentials in the family. Drive is nice, although I switched to Immich. I use the VPN a lot too.

> who really did 500 miles a day in an Ioniq 5

To be clear, that's an average of about 500 miles a day, for almost 3 years.


Similar in how you order, not similar in price at all. An order of magnitude more expensive for hobby-grade boards.

> maybe only 20% higher depending on what service you take

For higher-end boards that seems likely. For cheap hobby-grade boards, just the job fee[1] is more than 10 boards delivered from JLCPCB.

That said, thanks for reminding me. Will definitely compare next time I need boards.

[1]: https://community.aisler.net/t/our-simple-pricing/102#p-124-...


The job fee seems to include shipping, which makes it more reasonable. But Aisler's "estimated dispatch" for the budget service is Jan 26. That's 10 business days + shipping, making it not very competitive with JLCPCB's 10-15 business days including the slowest/cheapest shipping.

Express service adds ~20 EUR, roughly the same cost as picking DHL express delivery on JLCPCB.


> For higher-end boards that seems likely. For cheap hobby-grade boards, just the job fee[1] is more than 10 boards delivered from JLCPCB.

Just checked myself using a board I already had manufactured, and can confirm it's a lot higher than JLCPCB or PCBWay.

Maybe for rapid prototyping it is okay, but at scale the cost of making one board is more than the entire selling price of the whole device.


Reminds me of Runge-Kutta methods[1] of numerical integration, specifically RK2 since they only have one intermediary update.
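
As a minimal sketch, the midpoint method (one member of the RK2 family) uses exactly one intermediary update per step:

```python
import math

def rk2_step(f, t, y, h):
    """One step of the midpoint method, an RK2 scheme."""
    k1 = f(t, y)
    # The single intermediary update: evaluate the slope at the midpoint.
    k2 = f(t + h / 2, y + h / 2 * k1)
    return y + h * k2

# Integrate dy/dt = y from t=0, y=1 up to t=1; the exact answer is e.
y, t, h = 1.0, 0.0, 0.001
while t < 1.0 - 1e-12:
    y = rk2_step(lambda t, y: y, t, y, h)
    t += h

assert abs(y - math.e) < 1e-5  # second-order accurate, so well within tolerance
```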

The theorems took me right back to my finite element methods class at university, with Banach spaces and proving convergence of fixed point functions using Cauchy sequences.

Hopefully someone more well-versed in the field can chime in on the meat of the paper, looks like a good win from afar.

[1]: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods


I recall reading about a paper in SciAm or American Scientist a couple of decades ago, where they had trained an ML model to predict regional conflicts and civil wars. The main input was scarcity of food, mainly measured through price, IIRC.

They trained it on historical data up to the 90s or so, and had it predict the "future" up to the time of the article. As I recall it did very well. They even included some actual near-future predictions, which also turned out to be pretty accurate.

Which I suppose isn't a huge surprise after all. People don't like to starve.


Link?

My memory isn't good enough to recall the name of the paper, but after some searching I see the field has not stood still. Here[1] is an example of a more recent paper where they've included more variables. A quote from the conclusions:

> The closest natural resource–society interaction to predict conflict risk according to our models was food production within its economic and demographic context, e.g., with GDP per capita, unemployment, infant mortality and youth bulge.

[1]: https://www.mdpi.com/2071-1050/12/16/6574 Revisiting the Contested Role of Natural Resources in Violent Conflict Risk through Machine Learning (Open Access)



As a Windows user since the 3.x days, I complain mostly about UX issues these days. It's also clear leadership is not aligned with what I want from my desktop.

I've hardly had hardware issues since I moved to Windows 2000. Sure, some, but few enough that I can't recall any in particular.

