Hacker News | patrickhogan1's comments

Salespeople out in the field selling to enterprises + free credits to get people hooked.


It’s important to note that hsCRP and standard C-reactive protein (CRP) are two different tests.


Fines above $1k must be reported to the state bar in CA, so they will know about this one.


I try to get 20k steps/day (10 miles). The jump from 10k to 20k steps/day was a big improvement: better sleep and clearer thinking. Most of those steps come from walking, but sprinkling in some hard efforts (running/basketball) that push breathing from ~18 to ~40 breaths/min helped too. Feels ancestral: lots of walking, punctuated by occasional all-out bursts.


Credit where it’s due: doing live demos is hard. Yesterday didn’t feel staged; it looked like the classic “last-minute tweak, unexpected break.” Most builders have been there. I certainly have (I once spent 6 hours at a hackathon, then broke the Flask server keying in a last-minute change on the steps of the stage before going on).


Live demos are especially hard when you're selling snake oil.


Ironically the original snake oil salesman's pitch involved slitting open a live rattlesnake and boiling it in front of a crowd.

https://www.npr.org/sections/codeswitch/2013/08/26/215761377...


Jesus dude


Yeah. Everyone wants to be like Steve but forgets that he usually had something amazing to show off.


Didn't Steve flip through 3 iPhones and hardcode the network UI to look like they had good signal?


One of the demos was printing a thing out, but the processor was hopelessly too slow to perform the actual print job. So they hand-unrolled all the code to get it down from something like a 30-minute print job to a 30-second print job.

I think at this point it should be expected that every publicly facing demo (and most internal ones) is staged.


He faked shit all the time. He just faked it well and actually delivered later.


Every demo of a not-yet-launched product will have something faked.


The CEO of Nokia once had to demo their latest handset on stage at whatever that big annual world cellphone expo is.

My biz partner and I wrote the demo that ran live on the handset (mostly a wrapper around a webview), but ran into issues getting it onto the servers for the final demo, so the whole thing was running off a janky old PC stuffed in a closet in my buddy's home office on his 2Mbit connection. With us sweating like pigs as we watched.


If you ever write up a more detailed recollection of that, I would love to read it lol


I'd love to read it as well. More and more these days I miss that era of IT


As much as I hate Meta, I have to admit that live demos are hard, and if they go wrong we should have a little more grace towards the folks that do them.

I would not want to live in a world where everything is pre-recorded/digitally altered.


The difference between this demo and the legendary demos of the past is that this time we are already being told AI is revolutionary tech. And THEN the demo fails.

It used to be the demo was the reveal of the revolutionary tech. Failure was forgivable. Meta's failure is just sad and kind of funny.


It's less about the failure and more about the person selling the product. We don't like him or his company, and that's why there is no sympathy for him, and he knows that.

When it went bad he could instantly smell blood in the water; his inner voice said, "they know I'm a fraud, they're going to love this, and I'm fucked". That is why it went the way it did.

If it were a more humble, honest, generous person, maybe Woz, we know he would handle it with a lot more grace. He's the kind of person who'd be 100x less likely to be in this situation in the first place (because he understands the tech), and we'd be much more forgiving.


When you have a likable presenter, the audience is cheering for you, even (especially?) when things go wrong.


Live demos being hard isn't an excuse for cheating.


Despite the Reddit post's title, I don't think there's any reason to believe the AI was a recording or otherwise cheated. (Why would they record two slightly different voice lines for adding the pear?) It just really thought he'd combined the base ingredients.


That's even worse, because it would mean it wasn't a scripted recording that failed; it means the AI itself sucks and can't tell that the bowl is empty and nothing was combined. Either this was the failure of a recorded demo that was faked to hide how bad the AI is, or it accurately demonstrated that the AI itself is a failure. Either way it's not a good look.


My layperson interpretation of this particular error is that the AI model probably came up with the initial recipe response in full, but when the audio of that response was cut off because the user interrupted it, the model wasn't given any context about where it was interrupted, so it didn't understand that the user hadn't heard the first part of the recipe.

I assume the responses from that point onward didn't take the video input into account, and the model just assumed the user had completed the first step based on the conversation history. I don't know how these 'live' AI sessions work, but based on the existing OpenAI/Gemini live chat products, it seems that most of the time the model will immediately comment on the video when the 'live' chat starts, but for the rest of the conversation it works using TTS+STT unless the user asks the AI to consider the visual input.
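
A minimal sketch of the failure mode that interpretation implies (the message structure here is purely hypothetical, not anything Meta has confirmed):

    # The assistant's full text reply stays in the conversation history even
    # though the spoken audio was cut off partway through, and nothing
    # records where playback stopped.
    history = [
        {"role": "user", "content": "How do I start the steak sauce?"},
        {"role": "assistant", "content": (
            "Step 1: combine the base ingredients in a bowl. "
            "Step 2: grate a pear into the mixture."
        )},  # the user interrupted early on, but nothing marks that here
        {"role": "user", "content": "What do we do first?"},
    ]

    # A text-only follow-up call sees the whole recipe in context, so it
    # tends to answer as if step 1 is already done ("now grate a pear..."),
    # which is exactly what happened in the demo.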

I guess if you have enough experience with these live AI sessions you can probably see why it's going wrong and steer it back in the right direction with more explicit instructions, but that wouldn't look very slick in a developer keynote. I think in reality this feature could still be pretty useful, as long as you aren't expecting it to be as smooth as talking to a real person.


That feels plausible to me.

You can trigger this type of issue by interrupting ChatGPT's voice mode and then reading the transcript.

The model doesn't know it was interrupted, so it continued assuming he had heard the steps.


It seems extremely likely that they took the context awareness out of the actual demo and had the AI respond to pre-defined states, and then even that failed.

The AI analyzing the situation is wayyy out of scope here


So Meta AI is basically the dumb cousin of Siri? I didn't expect to ever write that.


This isn't cheating. The models are unpredictable. This product is going out the door this month; there is no reason to cheat.


> the models are unpredictable. This product is going out the door this month

I see a problem.


"unpredictable" and "doesn't work" are different things. As a user, I know it's not deterministic and I can live with "unpredictable" results as long as it still makes sense, but I won't buy something that works 50% of the time.


An LLM repeating the exact same response feels very staged to me.


Yeah, I just watched it again and I’m mostly confused why the guy interrupted what sounded like a valid response.

I wonder if his audio was delayed? Or maybe the response wasn’t what they rehearsed and he was trying to get it on track?


It was reading step 2 and he was trying to get it to do step 1.

He had not yet combined the ingredients. Given the way he kept repeating his phrasing, it seems likely that “what do we do first” was a hardcoded cheat phrase meant to get it to say a specific line. Which it got wrong.

Probably for a dumb config reason tbh.


> I’m mostly confused why the guy interrupted what sounded like a valid response

I thought they were demonstrating interruption handling.


Because it was repeating what it had already described rather than moving on to the first step


I think he was just trying to get it back on track instead of letting it go on about something that was completely off


Adrenaline makes people do interesting things


Before dunking on psychology for not replicating, remember this is a cross-discipline problem.

In biomedicine, Amgen could reproduce only 6/53 “landmark” preclinical cancer papers and Bayer reported widespread failures.


1. What was your prompt? 2. Why did you give it to GPT-5 instead of GPT-5 Thinking or GPT-5 Pro?


Here is the prompt I just gave to GPT-5 Pro - it's chugging on it. Not sure if it will succeed. Let's see what happens. I did think about converting the PDF to markdown, but figured this prompt is more fair.

-

You are a gold level math olympiad competitor participating in the ICPC 2025 Baku competition. You will be given a competitive programming problem to solve completely.

All problems are located at the following URL: https://worldfinals.icpc.global/problems/2025/finals/problem...

Here is the problem you need to solve and only solve this problem:

<problem> Problem B located on Page 3 of the PDF that starts with this text - but has other text so ensure you go to the PDF and look at all of page 3

To help her elementary school students understand the concept of prime factorization, Aisha has invented a game for them to play on the blackboard. The rules of the game are as follows.

The game is played by two players who alternate their moves. Initially, the integers from 1 to n are written on the blackboard. To start, the first player may choose any even number and circle it. On every subsequent move, the current player must choose a number that is either the circled number multiplied by some prime, or the circled number divided by some prime. That player then erases the circled number and circles the newly chosen number. When a player is unable to make a move, that player loses the game.

To help Aisha’s students, write a program that, given the integer n, decides whether it is better to move first or second, and if it is better to move first, figures out a winning first move.</problem>

Your task is to provide a complete solution that includes: 1. A thorough analysis and solution approach 2. Working code implementation 3. Unit test cases with random inputs 4. Performance optimization to run within 1 second

Use your scratchpad to think through the problem systematically before providing your final solution.

<scratchpad> Think through the following steps:

1. Problem Understanding: - What exactly is the problem asking for? - What are the input constraints and output requirements? - Are there any edge cases to consider?

2. Solution Strategy: - What algorithm or mathematical approach should be used? - What is the time complexity of your approach? - What is the space complexity? - Will this approach work within the given constraints?

3. Implementation Planning: - What data structures will you need? - How will you handle input/output? - What are the key functions or components?

4. Testing Strategy: - What types of test cases should you create? - How will you generate random inputs within the problem constraints? - What edge cases need specific testing?

5. Optimization Considerations: - Are there any bottlenecks in your initial approach? - Can you reduce time or space complexity? - Are there language-specific optimizations to apply? </scratchpad>

Now provide your complete solution with the following components:

<analysis> Provide a detailed analysis of the problem, including: - Problem interpretation and requirements - Chosen algorithm/approach and why - Time and space complexity analysis - Key insights or mathematical observations </analysis>

<solution> Provide your complete, working code solution. Make sure it: - Handles all input/output correctly - Implements your chosen algorithm efficiently - Includes proper error handling if needed - Is well-commented for clarity </solution>

<unit_tests> Create comprehensive unit test cases that: - Test normal cases with random inputs within constraints - Test edge cases (minimum/maximum values, boundary conditions) - Include at least 5-10 different test scenarios - Show expected outputs for each test case </unit_tests>

<optimization> Explain any optimizations you made or could make: - Performance improvements implemented - Memory usage optimizations - Language-specific optimizations - Verification that solution runs within 1 second for maximum constraints </optimization>

Take all the time you need to solve this problem thoroughly and correctly.
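
(Aside: for anyone who wants to sanity-check an answer on tiny n, here is a rough brute-force sketch of the game exactly as stated above - state is the set of numbers still on the board plus the circled one. It's exponential, so it's only a correctness reference for small boards, nowhere near the 1-second limit, and all the names are mine, not the intended solution.)

    # Brute-force reference for the blackboard game in the problem statement.
    # State = (numbers still on the board, currently circled number).
    from functools import lru_cache

    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    def legal_moves(board, cur):
        # The next number must be the circled number times a prime or divided
        # by a prime, and must still be written on the board.
        for m in board:
            if m == cur:
                continue
            hi, lo = max(m, cur), min(m, cur)
            if hi % lo == 0 and is_prime(hi // lo):
                yield m

    @lru_cache(maxsize=None)
    def mover_wins(board, cur):
        # True if the player about to move from this position can force a win.
        # Moving to m erases cur from the board and circles m instead.
        return any(not mover_wins(board - {cur}, m) for m in legal_moves(board, cur))

    def winning_first_move(n):
        # The first player starts by circling any even number; return a winning
        # choice, or None if it is better to move second.
        board = frozenset(range(1, n + 1))
        for e in range(2, n + 1, 2):
            if not mover_wins(board, e):
                return e
        return None

    for n in range(2, 13):
        print(n, winning_first_move(n))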


If we're benchmarking problems, mind trying out this problem on Pro if you're willing to spare the compute?

https://www.acmicpc.net/problem/33797

I have the $20 plan and I think I found a weird bug, at least with the thinking version. It gets stuck in the same local minimum super quickly, even though the "fake solution" is easily disproved on random tests.

It's at the point where sometimes I've fed it the editorial and it still converges to the fake solution.

https://chatgpt.com/share/68c8b2ef-c68c-8004-8006-595501929f...

I'm sure the model is capable of solving it, but I've seriously tried across multiple generations (since about when o3 came out) to get GPT to solve this problem, and I don't think it's hampered by innate ability; it literally just refuses to think critically about the problem. Maybe with better prompting it doesn't get stuck as hard?


This is impressive.

Here is the published 2025 ICPC World Finals problemset. The "Time limit: X seconds" printed on each ICPC World Finals problem is the maximum runtime your program is allowed. If any judged run of your program takes longer than that, the submission fails, even if other runs finish in time.

https://worldfinals.icpc.global/problems/2025/finals/problem...


The impact of environment on mental spirals is underrated. I see it clearly in two pickup basketball groups I play with: in one, people know your name, greet you warmly, and when you make a mistake they tell you how to improve in a way that makes you think "I can do better," not "I suck." The other is critical and tense, with lots of punching down.

The key insight: when you're surrounded by people who genuinely create an atmosphere of belonging and want you to succeed, you know their feedback comes from good intentions. This creates a virtuous cycle. You want to take their advice, and once you improve you naturally want to give the same back to others.

Reminds me of this Simon Brodkin video perfectly capturing startup energy: https://www.youtube.com/shorts/q_FmhWARJ7Q


Looks great. Is it a terminal-based viewer for API specs (like Swagger UI) or a tool for defining APIs that OpenAI can call?

