Hacker News | everythingctl's comments

I’d guess that’s a reference to Ada 95.


Yes


Pretty much every other developed country has solved this problem without having to transact in digital lottery tickets.


Yes and no. I can eTransfer in Canada... to a limit of $2,000 CAD daily. Rent is $2,100, so parent's point stands.

I could upgrade my cellphone and use Google Play Services, allowing up to $10,000 CAD daily, but I have no intention of doing so. I think it's messed up to tie my financial and digital sovereignty to a foreign corporation, I don't appreciate the hit to battery life, privacy, and the prerequisite bump in hardware to run GApps, and I simply don't want to end up on the spending treadmill for new cellphones.

Canada "solved" this, but the limits haven't kept up with inflation or life in general.


In Australia I can send $20k. In the UK it was similar.

These concerns seem very weird to me.


They kind of are, I'll immediately cop to that. I don't think it's that odd to say the constant upgrade treadmill is expensive though. I also don't appreciate gating features of our society behind smartphone ownership.

My specific concerns are odd. Not appreciating smartphone requirements for life feels more mainstream (if only marginally).


I’m not tied to a smartphone to do those things in the UK or Aus either.

Different geographies work differently I guess. In Brazil there’s an entire payment network (pix) that is centrally run, free and hugely popular.


Has any startup succeeded by starting out offering shiny hardware running innovative new software, all of which they have to develop?

It seems like a fatal dilution of focus to have to worry about the design and logistics of a fancy dumb terminal widget when you also have to get the software/AI/app integration stuff right.

Just make an app with text and voice interaction. Accept that the thing in our pockets with a screen and an internet connection is going to be a smartphone. You will not build an own-hardware moat with these weird little bits of e-waste.


I don't think there has been a single successful hardware startup in the last decade, so the answer to your question is safely "no" without even going into specifics.

Which is sad, because I'm sure there's room for a lot more innovative devices in the world outside of a single glass rectangle in your pocket that everyone must plug into in some way. The economics of the industry just makes it very hard for them to survive, and we all lose out because of it.


Analogue ( https://www.analogue.co/ ) comes to mind.

Genki Things is another...

I'm sure there are others?


There are tons of successful hardware startups in recent history.

You just don't hear about them because they're not selling to you. They make business, commercial, and industrial hardware.

Consumer hardware is very hard because consumers are extremely demanding of hardware. Just look at how difficult it is to convince people to spend even $5-10 on useful software or sign up for a $100/year SaaS product with near zero marginal cost per customer. Consumers are really hard to please and consumer price points are difficult to serve.


In just a few years around 2007-2012 we got Oculus, Nest, Ring, Blink, Fitbit, Beats, Oura, Square, Pebble, Tile, Dropcam, SmartThings, Makerbot, Neato, Raspberry Pi... All pure consumer hardware startups with popular products and successful exits. So it's not like the category is somehow fundamentally not viable. It just needs VCs and consumers to both shift from the smartphone-only mindset and start taking some risks.


Oculus? I guess it’s been over a decade already


Founded July 2012. And while the company was successful in the sense that it got an exit, its product isn't exactly doing too well.


I would argue it is. I've got a Quest 3 sitting next to me and I think it's great.

I can't speak for the deranged expectations, hype cycles and backlashes over the last 5 years. And I think Meta's R&D budget is pretty hard to justify. But I don't think that reflects on their current product range (which could have been matched with a much smaller budget, and to some degree has been).


I’ve used their products. While the market is still small it is one of the coolest hardware projects I’ve ever experienced


> The economics of the industry just makes it very hard for them to survive

Care to expand on that?


Hardware companies have (1) greater costs to get off the ground, (2) longer periods of development, (3) higher incremental cost per sale (so harder to scale), (4) slower iteration speed, and (5) overall a lot more risk than pure software. A VC fund is going to see a startup with a groundbreaking, innovative hardware device next to one building a cookie-cutter SaaS app and still invest in the latter, because it just makes more business sense for them. No one outside of Apple/Google/Microsoft and the like is pouring 10 years and billions of dollars into releasing a new device.


While I agree with those points and see them as obstacles that are changing, I would add that bootstrapping your community is always a good idea for a successful business, so VCs are not essential. On the other hand, I feel that personal data control and people's willingness to hand over so many daily tasks are hard barriers over the long run.


In the case of Humane and Rabbit the software doesn't sound all that innovative relative to ChatGPT/Copilot.


I guess the Pebble watch is the last thing that comes to mind from a new entrant. The original iPod? lol.


Despite a loved and well-made product, Pebble lasted three years, a decade ago; I'm not sure that counts as "succeeded".


Cool toy and a nice piece for the CV perhaps, but it is difficult to take it seriously if you refuse to offer source code or an implementable specification.

I would give you the benefit of the doubt that it might just be code shyness or perfectionism about something in its early stages, but it looks like the last codec you developed (“HALIC”) is still only available as Windows binaries after a year.

I struggle to see an upside to withholding source code in a world awash with performant open source media codecs.


Maybe it’s just me, but every lossless codec that’s:

1. Not FLAC

2. Not as open-source as FLAC

comes across as a patent play.

FLAC is excellent and widely supported (and where it’s not supported some new at-least-open-enough codec will also not be supported). I have yet to see a compelling argument for lossless audio encoders that are not FLAC.


FLAC’s compression algorithm was pretty much garbage when it came out, and is much worse now compared to the state of the art. Even mp3 + gzip residuals would probably compress better.

FLAC doesn’t support more modern sampling formats (e.g. floating point for mastering), or complex multi channel compression for surround sound formats.

There just isn’t something better (and free) to replace it yet.


> There just isn’t something better (and free) to replace it yet.

Apple's ALAC (Apple Lossless Audio Codec) format is an open-source and patent-free alternative. I believe both ALAC and FLAC support up to 8 channels of audio, which allows them to support 5.1 and 7.1 surround. https://en.wikipedia.org/wiki/Apple_Lossless_Audio_Codec#His...

These are distribution formats, so I'd be surprised if there were demand for floating-point audio support. And in contexts where floating point audio is used, audio size is not really a problem.


When FLAC compresses stereo audio, it does a diff of the left and right channels and compresses that. This often results in a 2x additional compression ratio because the left and right channels are tightly correlated.

Unless things have changed substantially and I missed it, FLAC does not do similar tricks for other multichannel audio modes. Meaning that for surround sound, each channel is independently compressed and it is unable to exploit signal correlation between channels.

Proprietary formats like Dolby on the other hand do support rather intelligent handling of multichannel modes.

FLAC is not solely a distribution format. Indeed, as a distribution format it sucks in a number of ways. It is chiefly used as an archival format, and would in fact be ideal as a mastering format if these deficiencies could be addressed.
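The channel-decorrelation trick described above can be sketched in a few lines of Python (toy sample values; FLAC's actual stereo modes are left/side, right/side, and mid/side):

```python
# Correlated L/R channels become one full channel plus a small-valued
# "side" residual, which entropy-codes far more compactly than two
# independently compressed channels would.
left  = [1000, 1004, 1010, 1009]
right = [998, 1003, 1008, 1010]

# Encode: keep the left channel and the inter-channel difference.
side = [l - r for l, r in zip(left, right)]          # [2, 1, 2, -1]

# Decode: the right channel is recovered exactly (lossless).
decoded_right = [l - s for l, s in zip(left, side)]
assert decoded_right == right
```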


In what ways does flac suck for distribution? All the music I download from Bandcamp is in that format, it works great for me.


It could be much smaller, maybe 2-3x better compression. Better support for surround sound / multichannel audio. If an AAC stream were used for the lossy predictive stage, then existing hardware acceleration could be used for energy efficient playback.


How would 2-3x better compression be achievable?

I don't use or desire multichannel audio but that and the hardware acceleration are interesting points.


FLAC uses 1970s-era compression technology for both compression stages (lossy and residual) in order to conservatively avoid patents in the implementation. Just replace the lossy component with AAC, which is now out of patent protection, and replace Rice coding for the residual with the much better (but still patented in the '90s) arithmetic coding. Those two changes should get a 2-4x performance improvement, as well as hardware-accelerated encoding and playback as a free bonus.

Multichannel audio support is nice because it is often used in distribution of media files sourced from DVD/BluRay. It would be good to have a high quality, free codec for that use.
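For reference, the Rice coding criticized here is very simple, which is part of why it lags modern entropy coders; a toy bit-string encoder (fixed parameter `k` for illustration; real FLAC picks `k` per partition and packs actual bits):

```python
def rice_encode(value: int, k: int) -> str:
    """Rice-code one signed residual as a bit string."""
    # Zigzag-map signed to unsigned, as FLAC does: 0,-1,1,-2,... -> 0,1,2,3,...
    u = 2 * value if value >= 0 else -2 * value - 1
    q, r = u >> k, u & ((1 << k) - 1)
    # Unary quotient, one terminator bit, then a k-bit binary remainder.
    return "1" * q + "0" + format(r, f"0{k}b")

# Small residuals cost few bits; the unary part grows linearly for
# outliers, which is where arithmetic coding would do noticeably better.
print(rice_encode(0, 2), rice_encode(3, 2))  # 000 1010
```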


Thanks. I would love to see a PoC of this making my music files that much smaller.


> FLAC’s compression algorithm was pretty much garbage when it came out, and is much worse now compared to the state of the art. Even mp3 + gzip residuals would probably compress better.

MP3 is a lossy format so I would practically guarantee that you’d end up with a smaller file but that’s not the purpose of FLAC. Lossless encoding makes a file smaller than WAV while still being the same data.

> e.g. floating point for mastering

I’m 0% sold on floating point for mastering. 32-bit, yes, but anyone who’s played a video game can tell you about flickering textures, and those are caused not by bad floating-point calculations but by good ones (the bad part is putting textures “on top” of each other at the same coordinates). Floating-point math is “fast” but not accurate. Why would anyone want that for audio? (Not trying to bash here; I’m genuinely puzzled and would love some knowledgeable insight.)


> MP3 is a lossy format so I would practically guarantee that you’d end up with a smaller file but that’s not the purpose of FLAC. Lossless encoding makes a file smaller than WAV while still being the same data.

You misunderstood what you are replying to. FLAC works by running a lossy compression pass, and then LZ encoding the residual. The better the lossy pass, the less entropy in the residual and the smaller it compresses. FLAC’s lossy compressor pass was shit when it came out, and hasn’t gotten any better.

Flickering textures are caused by truncation and wouldn’t be any better with integer math. The same issues apply (and are solved the same way, with explicit biases; flickering shouldn’t be a thing in any quality game engine).

Floating point math is largely desired for mastering because compression (an overloaded technical term: compression here means something totally different than above) results in samples having vastly different dynamic ranges. If rescaled onto the same basis, one would necessarily lose a lot of precision to truncation in intermediate calculations. Using floating point with sufficient precision makes this a non-concern.
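A contrived illustration of that truncation point (made-up numbers, not a real mastering chain): attenuate a signal and bring it back up, and an integer intermediate discards the low-order detail that a floating-point intermediate preserves:

```python
sample = 12345          # a 16-bit-range integer sample
gain = 1 / 1000         # heavy attenuation, e.g. to match a quiet stem

# Integer path: the intermediate value is truncated, detail is gone.
int_out = int(int(sample * gain) * (1 / gain))   # 12 * 1000 = 12000

# Float path: the intermediate keeps the fractional part.
float_out = (sample * gain) * (1 / gain)

assert int_out != sample              # precision lost to truncation
assert round(float_out) == sample     # recovered, to within float error
```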


> FLAC works by running a lossy compression pass, and then LZ encoding the residual.

Since when does FLAC run a lossy pass? You can recover the original soundwave from a FLAC file, you can't do the same with an MP3.

I'm pretty sure FLAC does not run a lossy compression pass.

Flickering textures in game engines are likely due to z-fighting, unless you're referring to some other type of flickering.

If you're looking to preserve as much detail as possible from your masters, then floating point makes sense. But it's really overkill.


> The FLAC encoding algorithm consists of multiple stages. In the first stage, the input audio is split into blocks. If the audio contains multiple channels, each channel is encoded separately as a subblock. The encoder then tries to find a good mathematical approximation of the block, either by fitting a simple polynomial, or through general linear predictive coding. A description of the approximation, which is only a few bytes in length, is then written. Finally, the difference between the approximation and the input, called residual, is encoded using Rice coding.

Linear predictor is a form of lossy encoding.
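The stages quoted above can be sketched with a toy predictor (this uses the simplest fixed order-1 predictor, "predict the previous sample"; real FLAC fits higher-order LPC coefficients per block):

```python
# Prediction stage: a smooth signal minus its prediction leaves a
# small-valued residual, which Rice coding then stores in few bits.
samples = [100, 102, 103, 103, 101, 98]

residual = [samples[0]] + [samples[i] - samples[i - 1]
                           for i in range(1, len(samples))]
# residual == [100, 2, 1, 0, -2, -3]

# Decode: integrating the residual reconstructs the input exactly,
# which is why a lossy predictor still yields a lossless codec.
decoded = []
for r in residual:
    decoded.append(r if not decoded else decoded[-1] + r)
assert decoded == samples
```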


LPC is lossy, but FLAC maintains enough information to reproduce the original data. Therefore it's lossless even though LPC is part of the compression.


Yes exactly. What you’re saying lines up with what I’ve learned through experience.

> If you're looking to preserving as much detail as possible from your masters then floating points make sense.

I’ve been searching for hours and gotten nothing more than the classic floats vs ints handwaving. Can you explain what you know about why using floats preserves detail?


Do you actually have experience writing a FLAC encoder/decoder? I do. Go read the format specification. There is a lossy compression pass, then it uses a general compressor on the residual after you subtract out the lossy signal. The two combined allow you to reconstruct the original signal losslessly.


what do you suggest instead?


I suggest that people who care enough about these things (not me, I’m just informed about it), come together and make a new lossless encoder format that has feature parity with the proprietary/“professional” codecs.


What codec are you suggesting is better, and how much better is it? Unless encoders have wildly improved, Apple's ALAC is not better than FLAC. APE and WavPack seem to do a bit better, but not much.


Support for >8 channels led me to use WavPack instead of FLAC.


What's the use case?


You are right about this. But there are things I should add to HALIC and HALAC. When I complete them and see that they will really be used by someone, they will of course be open source.


One of the cool things about open source is that other people can do that for you! I've released a few bits of (rarely-used) software to open-source and been pleasantly surprised when people contribute. It helps to have a visible todo list so that new contributors know what to aim for.

By the way, there will always be things to add! That feeling should not stop you from putting the source out there - you will still own it (you can license the code any way you like!) and you can choose what contributions make it in to your source.

From the encode.su thread and now the HA thread, you've clearly gotten people excited, and I think that by itself means that people will be eager to try these out. Lossless codecs have a fairly low barrier for entry: you can use them without worrying about data loss by verifying that the decoder returns the original data, then just toss the originals and keep the matching decoder. So, it should be easy to get people started using the technology.

Open-sourcing your projects could lead to some really interesting applications: for example, delivering lossless images on the internet is a very common need, and a WASM build of your decoder could serve as a very convenient way to serve HALIC images to web browsers directly. Some sites are already using formats like BPG in this way.


> One of the cool things about open source is that other people can do that for you!

This is a very valid point, but we should all recognise that some people⁰ explicitly don't want that for various reasons, at least not until they've got the project to a certain point in their own plans. Even some who have released other projects already prefer to keep their new toy more to themselves and only want more open discourse once they are satisfied their core itch is sufficiently scratched. Open source is usually a great answer/solution, but it is not always the best one for some people/projects.

Even once open, “open source not open contribution”¹ seems to be becoming more popular as a stated position² for projects, sometimes for much the same reasons, sometimes for (future) licensing control, sometimes both.

--

[0] I'm talking about individual people specifically here, not groups, especially not commercial entities: the reasons for staying closed initially/forever can be very different away from an individual's passion project.

[1] “you are free to do what you want, but I/we want to keep my/our primary fork fully ours”.

[2] it has been the defacto position for many projects since a long time before this phrase was coined.


> I/we want to keep my/our primary fork fully ours

The "primary" fork is the one that the community decides it to be, not what the author "wants". Does it really matter what the "primary fork" is for those working on something to "scratch their own itch"?


Hence I said my/our primary fork, not the primary fork.

If I were in the position of releasing something⁰: the community, should one or more coalesce around a work, can do/say what it likes, but my primary fork is what I say it is¹. It might be theirs, it might be not. I might consider myself part of that community, or not.

It should be noted that the possibility of “the community” or another individual/team/etc taking a “we are the captain now” position (rather than “this is great, look what we've done with it too”, which I would consider much more healthy and friendly) is what puts some people off opening their toy projects, either at all or just until they have them at a point they are happy with, or are happy letting go of.

> Does it really matter what is the "primary fork" for those working on something to "scratch their own itch"?

It may do further down the line, if something bigger than just the scratching comes from the idea, or if the creator is particularly concerned about acknowledgement of their position as the originator².

--

[0] I'm not ATM. I have many ideas/plans, some of them I've mused on for many years, but I'm lacking in time/organisation/etc!

[1] That sounds a lot more combative than I intend, but trying to reword just makes it too long-winded/vague/weird/other

[2] I wouldn't be, but I imagine others would. Feelings on such matters vary widely, and rightly so.


I don’t get it. What the community does has no bearing on your fork, so why do you care? You can open source it and just not accept patches. Community development will end up happening somewhere else, but who cares?


> I don’t get it.

Don't worry. You don't have to.

If you want a more specific response than that, what part of the post do you not get?


Whatever position you are trying to argue seems to be so antithetical to Free Software, I'd say those sharing this view are completely missing the point of openness and would be better off by keeping all their work closed instead.

> other individual/team/etc taking a “we are the captain now” position rather than “this is great, look what we've done with it too”

The scenario is that someone opens up a project but says "I am not going to take any external contribution". Then someone else finds it interesting, forks it, that fork starts receiving attention and the original developer thinks they are entitled to control the direction of the fork? Is this really about "scratching your own itch" or is this some thinly-veiled control issue?

I'm sorry, after you open it up you can't have it both ways. Either it is open and other people are free to do whatever they want with it, or you say "it's mine!" and people will have to respect whatever conditions you impose to access/use/modify it.

> if the creator is particularly concerned about acknowledgement of their position as the originator.

That is what copyright is for, and the patent system is for those who worry about being rewarded for their initial idea and creation.

If one is keeping their work to themselves out of fear of losing "recognition", they should look into the guarantees and rights given by their legal systems, because "feelings on this matter" are not going to save them from anything.


> Is this really about "scratching your own itch" or is this some thinly-veiled control issue?

I wasn't attempting to veil it at all. It is a control issue for some.

Sometimes someone is happy to share their project, but wants to keep some hold on the core direction.

> > other individual/team/etc taking a “we are the captain now” position rather than “this is great, look what we've done with it too”

> The scenario is that someone opens up a project but says "I am not going to take any external contribution". Then someone else finds it interesting, forks it, that fork starts receiving attention and the original developer thinks to be entitled to control the direction of the fork?

You are missing a step. I said that if someone has this concern then they might not open the project at all, until they feel ready to let go a bit. At that point “open source but not open contribution” and control over forks are not issues at all because the source isn't open and forking isn't possible.

> That is what copyright is for and the patent system are for

I don't know about you, but playing in those minefields is not at all attractive to me, and I expect many feel the same. If I had those concerns, and legal redress is the solution, I now have two problems and the new one is a particularly complex beast, it would be much easier to just not open up.


> I wasn't attempting to veil it at all. It is a control issue.

Then do not hide it behind the "people just want to scratch their own itch". It is a bad rationalization for a much deeper issue and the way to overcome this is by bringing awareness to it, not by finding excuses.

> wants to keep some hold on the core direction.

You are really losing me here. The point from the beginning is that the idea of "direction" is relative to a certain frame of reference. There is no "core" direction when things are open. The very idea of "fork" should be a hint that it is okay to have people taking a project in different directions.

> it would be much easier to just not open up.

Agreed. But like I said: you can't have it both ways. If you want to "keep control" and prevent others from taking things in a different direction, then keep it closed, but be honest with yourself and others and don't say things like "it's not ready to be open yet" or "I want to share it with others but I worry about losing recognition".


> Then do not hide it behind the "people just want to scratch their own itch"

You seem to be latching on to individual sentences in individual posts rather than understanding the thread from my initial post downwards. Start from the top and see if that changes.

Right from the beginning I was talking about people not releasing source for this reason, not releasing with expectations of control. While quoting more of the preceding thread might have made that sentence look less like the attempt to hide that you see, it would bulk out the thread unnecessarily IMO (and I'm already being too wordy), given that the context is already readily available nearby (the thread is hardly a long one).

> > it would be much easier to just not open up.

> Agreed. But like I said: you can not have both ways. If you want to "keep control" and prevent …

No, but the other end of the equation often wants the source irrespective of the project creator not being ready to let go of fuller control just yet (for whatever reason, including wanting to get to a certain point their way to stamp their intended direction on it). And they will nag, and the author will either spend time replying to re-explain their (possibly already well documented) position or get a reputation for not listening which might harm them later.


I don't want to keep this conversation going in circles, but to me it seems like you are trying to explain a behavior (some people do not want to release source before conditions X, Y and Z are met) and I am arguing that the behavior itself is antithetical to FOSS.

From the top of the thread: "it is difficult to take it seriously if you refuse to offer source code or an implementable specification". If OP has reservations about building it in the open, I'd rather hear "I am not going to open it because I want to keep full control over it" than some vague "I will open it after I complete some other stuff".

You mention the concern about "getting a reputation for not listening". To me, this has already happened. The moment I saw "when I realize it can be used by someone, it will be of course be open source", I'm already doubting his ability to collaborate, I already put him in the "does not understand how FOSS work" box and I completely lost interest in the project.


> Frankly, if I publish open sources now, I can't take care of them again. Because there will be no excitement. I say this because I know myself very well.

> When I bring my work to a certain stage, I would like to deliver it to a team that can claim it. However, I want to see how much I can improve my work alone.


Sorry, this is exactly why it seems that you don't understand FOSS.

1) Publishing the code does not mean that it is done. Software development is a continuous effort.

2) There is no "delivering it to a team that can claim it". When (if?) you release your code, you will see the possible outcomes:

- The worst-case scenario: someone will find an issue in your design and point to a better alternative, and you will be left alone with your project.

- The best-case scenario: your work brings some fundamental breakthrough and you will have to spend a good amount of time with people trying to figure it out or asking for assistance on how to make changes or improvements for their use case.

- The most likely scenario: your work will get its 15 minutes of fame, people are going to take a look at it, maybe star it on GitHub, and then completely leave it up to you to keep working on the project until it satisfies their needs.

Like "everythingctl" said, you will see that few people will take you seriously until you actually show source code or a reproducible specification. But you will also see that this is a "required but not sufficient" condition for being taken seriously. And while I completely understand the fear of putting yourself out there and the possibility of having your work scrutinized and criticized for things you know need improvement, I think that this mentality is incompatible with the ethos of Open Source development, and I wish more people would help you overcome this fear rather than try to excuse or defend it.


> When I complete them and realize that it will really be used by someones, it will of course be open source

There is a chicken and egg problem with this strategy: Few people will want to, or even be able to, use this unless it’s open source and freely licensed.

The alternatives are mature, open or mostly open, and widely implemented. Minor improvements in speed aren’t enough to get everyone to put up with any difficulties in getting it to work. The only way to get adoption would be to make it completely open and as easy as possible to integrate everywhere.

It’s a cool project, but the reality is that keeping it closed until it’s “completed” will prevent adoption.


Hakan: if you are going to go open source just do it now. You have nothing to gain and much to lose by keeping it closed.


Maybe he is just waiting for the right investor, one that has a purpose for the codec, so he can recoup his time investment.

Making it open source now would just ruin that leverage.

I am with you OP


Looking at history, it seems trying to build a business model around a codec doesn't tend to work out very well. It's not clear what the investor would be investing in; there are better horses to back.


When I bring my work to a certain stage, I would like to deliver it to a team that can claim it. However, I want to see how much I can improve my work alone.


Being open source doesn't mean you have to accept contributions from other people.


When you do decide to open the codec, you should talk to xiph.org about patent defensibility. If you want it open but don't build a large enough moat (multichannel, other bitrates, bit depths, echo and phase control, etc.), then a licensing org will offensively patent or extend your creation.


Thanks for the information about the license and patent. HALAC can work at any bitrate. Support for more than 2 channels, and for 24/32-bit depths beyond 16-bit, can be added.


I understand a forward-compatibility concern, but have you considered putting an attention-grabbing alert in the encoder, clearly stating that official releases in the future won't be able to decompress its output? Also, your concern may be overblown; there have been more than 100 PAQ versions with mutually incompatible formats, but such issues didn't happen too often. (Not to say that ZPAQ was pointless, of course.)


You may be trying to kill all criticism; this is not possible. Not everyone will like you and not everyone will like your code. Fortunately, people IRL who have personal differences tend to be a bit more tactful than the software crowd can be online, but something like this is bound to get overwhelming amounts of love.

No great project started out great and the best open source projects got to their state because of the open sourcing.

Consider that the problems you might be spending a lot of time solving might be someone else's trivial issue, so unless this is an enjoyable academic exercise for you (which I fully support), why suffer?


I have no problem with criticism. I'm just trying to do what I love as a hobby (academic).

Or maybe it's better for me to do things like fishing or swimming as a hobby.


Don't let perfect be the enemy of good. If Linus didn't open source Linux until it was "complete", it wouldn't be anywhere near as popular as it is.


Thank you for your valuable thoughts.


You could open it now and crowd-source the missing pieces. I really see nothing to lose by making this oss-first.


Sounds like some words in Filipino:

Halic = kiss
Halac = raspy voice


You got that backwards buddy. Nobody will use them so long as they remain closed source like this.


Maybe they want to sell it to a streaming service or something.


This. It's almost ragebait posting this: "I'm better but I won't show you."


Would any “real” mainframe software (not Linux on z) use Unix epoch dates though?

I’d have thought mainframe dates were mostly binary-coded decimal in EBCDIC and already future-proofed during the Y2K mania.
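For context on the two date schemes being contrasted: the Unix-epoch problem is the signed 32-bit `time_t` rollover in 2038, while a packed-decimal (BCD) date just stores decimal digits and has no such cliff. A quick Python illustration (the BCD byte layout shown is one common convention, not any specific product's):

```python
import datetime

# A signed 32-bit time_t counts seconds from 1970-01-01 and rolls over at:
rollover = datetime.datetime.fromtimestamp(2**31 - 1, tz=datetime.timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# A packed-decimal date stores two decimal digits per byte, so 2024-06-01
# becomes four bytes with no 2038 cliff (though 6-digit YYMMDD layouts
# were exactly what the Y2K remediation work had to widen):
bcd_date = bytes.fromhex("20240601")
assert bcd_date == b"\x20\x24\x06\x01"
```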


s/purchase/purchase,/


