It may result in an outsized penalty for bootstrapped companies, but being VC funded doesn't make you immune either. VC funded companies with revenue will no longer be able to offset that revenue with R&D (software development) expenses in the year they're incurred, so in some cases they'll show a profit where they previously wouldn't have. In those cases they'd have a tax burden.
Unwrap is fine if used sparingly and, as mentioned, to indicate a bug, but in practice it requires discipline and some wisdom to use properly - and by that I mean not just "oh, this function should return a `Result`, but I'll add that later" (read: never).
I think relying on discipline alone in a team is usually a recipe for disaster, or at the very least resentment, as the most disciplined must continually educate and correct the least disciplined (or perhaps least skilled).

We have a clippy `deny` rule preventing panics, expects, and unwraps, even though we know they're sometimes acceptable. We don't `warn` because warnings are ignored. We don't `allow` because that makes them too easy to use. We don't use `forbid` (a `deny` that can't be overridden) because there are still places where they could be helpful. What this means is that the least disciplined are pushed to correct a mistake by using `Result` and creating meaningful error handling. In cases where that doesn't work, it takes extra effort to add an inline clippy `allow` instruction.

We strongly question every inline clippy override at review time, so that our discipline doesn't collapse into reflexively pairing `unwrap` with `allow` and nothing slips through by mistake. I will concede that reviews themselves are potentially a dangerous "discipline trap" as well, but they're the secondary line of defense for this specific mistake.
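A minimal sketch of this kind of setup (the function names and the "constant input" example are illustrative, not from any particular codebase):

```rust
// Crate-level lints: `deny` (not `forbid`), so a reviewed inline
// `#[allow]` can still override them in the rare justified case.
#![deny(clippy::unwrap_used, clippy::expect_used, clippy::panic)]

use std::num::ParseIntError;

// The lint pushes you toward returning a `Result` with meaningful
// error handling instead of unwrapping at the call site...
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

// ...and makes any escape hatch explicit and easy to flag at review time.
fn hardcoded_port() -> u16 {
    #[allow(clippy::unwrap_used)] // constant input; a failure here is a bug
    let port = "8080".parse::<u16>().unwrap();
    port
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());
    assert_eq!(hardcoded_port(), 8080);
    println!("ok");
}
```

The point of the statement-level `#[allow]` is that it shows up in diffs and can be questioned at review, unlike a blanket crate-level `allow`.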
My understanding was that the "enemy" was McKinsey, a firm with a reputation (to me, at least) as an expensive consulting firm full of MBA types who are frequently brought in by companies.

My understanding of that reputation: the engagements often come to the detriment of either product quality or employee satisfaction. Whether they actually provide value is debatable. Short term? Maybe, albeit expensively. Long term? I'd say no.
Agreed. It's wild to me how many people think they need arbitrary queries on their transactional database and then go write a CRUD app with no transactional consistency between resources, where everything is a projection from a user or org resource -- you can easily model that with Dynamo. You can offload arbitrary analytical queries or searches to a different database and stop conflating that need with your app's core data source.
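A hedged sketch of what "everything is a projection from a user or org resource" can look like as a single-table Dynamo key schema (the `ORG#`/`USER#`/`PROJECT#` entity names are made up for illustration):

```rust
// Single-table design sketch: every item lives under its owning org/user
// partition, so "a user and all their projects" is a single Query on the
// partition key -- no arbitrary SQL needed for the app's core access paths.
fn user_pk(org_id: &str, user_id: &str) -> String {
    format!("ORG#{org_id}#USER#{user_id}")
}

fn project_sk(project_id: &str) -> String {
    format!("PROJECT#{project_id}")
}

fn main() {
    // Keys for a user's partition and one of their project items:
    assert_eq!(user_pk("acme", "u42"), "ORG#acme#USER#u42");
    assert_eq!(project_sk("p7"), "PROJECT#p7");
    // A Query(pk = "ORG#acme#USER#u42", sk begins_with "PROJECT#")
    // would return all of that user's projects in one request.
    println!("ok");
}
```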
Well, my experience has always been the opposite: new query patterns are always appearing. And the difference between an OLTP and an OLAP query is not as clear-cut as one might imagine - certainly not clear-cut enough to justify huge changes to an existing system.
My take: it's a mix of brand bundling and lack of data. They're roughly equivalent, but Shorts is bundled with YouTube, which has its own brand perception, and Reels is bundled with IG/FB, which have their own. Additionally, fewer users means less data for the algorithm to keep viewers engaged.
TikTok was allowed to establish its own brand and develop a community, while Shorts and Reels are intrinsically tied to their parents' pasts. They may be able to escape that history, but I don't think it's helping them be fast movers or win "cool" points.
> My take: it's a mix of brand bundling and lack of data. They're roughly equivalent, but Shorts is bundled with YouTube, which has its own brand perception, and Reels is bundled with IG/FB, which have their own. Additionally, fewer users means less data for the algorithm to keep viewers engaged.
My intuition would work the other way around. I'd expect offerings from more established companies to have a big leg up in terms of usable data. YouTube should be able to use a viewer's entire watch/subscription history to inform itself about what shorts a user might like, even before they've interacted with their first short. ByteDance, on the other hand, has to start from scratch with each truly new user.
The coolness or stodginess of the company would be secondary to its effects. If boring-old-Youtube could promise shorts creators great exposure to an enthusiastic audience, it would win the platform regardless of its brand.
I'll argue that TikTok's structure, which offers you one video at a time, gives you much more useful information than YouTube's interface, which shows you many candidate videos at once.
TikTok gets a definite thumbs up or thumbs down for every video it shows you, whereas if you click on one particular sidebar video, YouTube can draw no conclusion about how you felt about the other videos in the sidebar. The recommendation literature talks about "negative sampling" to overcome this; I could never really believe in it, and I now think it doesn't really work.
I built a system like that and found that, paradoxically, you have to make it blend in a good amount of content that it doesn't think you'd like for it to be able to calibrate itself.
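The calibration trick described above can be sketched as simple slot-based exploration blending (the cadence, scores, and item names are illustrative, not the commenter's actual system):

```rust
// Blend a ranked feed with items the model scores poorly, so the system
// keeps collecting definite feedback outside its own comfort zone.
fn blend_feed(mut ranked: Vec<(&'static str, f64)>, explore_every: usize) -> Vec<&'static str> {
    ranked.sort_by(|a, b| b.1.total_cmp(&a.1)); // best first by predicted score
    let mut feed = Vec::with_capacity(ranked.len());
    let (mut top, mut bottom) = (0usize, ranked.len());
    while top < bottom {
        // Every Nth slot, serve a low-scored item: whatever the user does
        // with it is a clean signal the model wouldn't otherwise get.
        if explore_every > 0 && (feed.len() + 1) % explore_every == 0 {
            bottom -= 1;
            feed.push(ranked[bottom].0);
        } else {
            feed.push(ranked[top].0);
            top += 1;
        }
    }
    feed
}

fn main() {
    let ranked = vec![("a", 0.9), ("b", 0.8), ("c", 0.7), ("d", 0.2), ("e", 0.1)];
    // Every 3rd slot is an exploration slot drawn from the bottom of the ranking.
    assert_eq!(blend_feed(ranked, 3), vec!["a", "b", "e", "c", "d"]);
    println!("ok");
}
```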
> If boring-old-Youtube could promise shorts creators great exposure to an enthusiastic audience, it would win the platform regardless of its brand.
Just a guess, as someone who makes their living from YouTube: YouTube creators are driven to create content that earns them money. As compared to long-form content, YouTube shorts earn next-to-nothing, and it’s not clear that they drive significant new traffic to more-valuable content.
Most large creators on YouTube are focused on the bottom line, not exposure.
The reason shorts don't earn any money, as compared to Instagram and TikTok, is that they don't advertise crap for me to buy (I have YT premium), so I don't end up buying shit there like I do the other two.
While this is from ByteDance, who also are behind TikTok, this algorithm is likely not the one behind TikTok.
Instead, it is likely a component that powers ByteDance's commercial recommender system solution, which they market to e-commerce companies: https://www.byteplus.com/en/product/recommend
This was mentioned in past discussions of the paper on HN.
And even if aspects of this are used for TikTok:
(a) it would be just one of many components of their recommendation system, and
(b) the TikTok recommendation system has changed a lot in the 2+ years since this was published.
So take what you see here with a grain of salt. After reading the paper and the code, you will NOT know how TikTok's recommendations work.
There's also a heavy element of manual curation in TikTok. They have people putting their fingers on the scales to decide what content gets promoted. Where are those people, and what's their agenda? Who knows.
Releasing the recommender on GitHub is a way to try to defuse that criticism. But it's just one part of the puzzle that is TikTok's content distribution.
This is true for all social media algorithms. None of them are purely automated and for good reason. You need humans going in and tweaking the outcomes to ensure users have a good experience.
Of course, when the conversation is about TikTok, this often becomes accusations of propaganda.
But YouTube, Facebook, and Twitter all exert significant control over their algorithms and things like their Homepage, Trending Topics, etc. The conservative right often labels such curation as liberal propaganda.
Sure. HN is very actively moderated, and most people here probably agree that it’s worth it. (Those who don’t like it presumably don’t stay here.)
But at the massive scale of Meta or ByteDance, there is a difference between removing problematic content and actively promoting content. They’re two sides of the same coin, but the first is applied based on reactive guidelines (“we’ve previously decided this kind of content shouldn’t be here”) while the second is ultimately an in-the-moment opinion on whether more people should be seeing the content. The line is blurry, but these are not the same thing, and vibes-based content promotion is easier to manipulate.
Are there CCP agents working at ByteDance? Of course there are because it’s practically mandatory — just like American telecom companies have NSA wiretap rooms. Do those CCP agents get consulted on which foreign political candidate should get the viral boost? Perhaps not. But it appears they’ve built a system where this kind of thing is possible and leaves little paper trail because the curated boosting is so integral to the platform.
I explicitly did not mention HackerNews, as the homepage feed is primarily based on user voting - neither algorithmic nor chronological. Dang's moderation is not comparable to other social media platforms' feed curation.
> there is a difference between removing problematic content and actively promoting content
Again, there is sufficient evidence that all major social media platforms do exactly this, not just TikTok. Hence why I said:
>> The conservative right often labels such curation as liberal propaganda.
> where this kind of thing is possible and leaves little paper trail
Could you point to the paper trail that Meta, Google, or Twitter provide on their curation actions? Otherwise, this just proves my point that people blindly accuse Chinese platforms of shady activities while treating Western ones as paragons of virtue.
I really don’t agree with this - chronological sorting favors larger/power users who spam their content through all hours of the day, versus smaller users that you probably care more about.
If you don't have a way to manually push the algo, then you'd never be able to sell features like promoted posts and the like. And why would you not want a feature to sell?
Ads can be inserted into any kind of feed, there is no difference between chronological and algorithmic. It’s usually a simple calculation of target ads displayed per posts viewed, with per user/advertiser caps.
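That calculation can be sketched roughly like this; the cadence, cap, and item names are made up, and real systems layer targeting on top, but the mechanism is feed-agnostic as described:

```rust
use std::collections::HashMap;

// Insert one ad after every `cadence` organic posts, skipping ads from
// advertisers that have hit their per-user cap. Note the logic never looks
// at how `posts` was ordered -- chronological and algorithmic feeds are
// treated identically.
fn insert_ads(
    posts: &[&'static str],
    ads: &[&'static str],
    cadence: usize,
    cap: usize,
) -> Vec<&'static str> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    let mut out = Vec::new();
    for (i, post) in posts.iter().enumerate() {
        out.push(*post);
        // After every `cadence` organic posts, try to place one ad.
        if (i + 1) % cadence == 0 {
            if let Some(&ad) = ads.iter().find(|a| counts.get(**a).copied().unwrap_or(0) < cap) {
                *counts.entry(ad).or_insert(0) += 1;
                out.push(ad);
            }
        }
    }
    out
}

fn main() {
    let feed = insert_ads(
        &["p1", "p2", "p3", "p4", "p5", "p6"],
        &["adA", "adB"],
        2, // one ad per two posts viewed...
        1, // ...with a cap of one impression per advertiser
    );
    // Once both advertisers are capped, the last ad slot stays empty.
    assert_eq!(feed, vec!["p1", "p2", "adA", "p3", "p4", "adB", "p5", "p6"]);
    println!("ok");
}
```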