As a developer, "infinite scroll to get more digits" sounds really cool. It sounds conceptually similar to lazily-evaluated sequences in languages like Clojure and Haskell (where you can have a 'virtually-infinite' list or array -- basically a function -- and can access arbitrarily large indices).
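In Python terms, a hypothetical sketch of the lazy-digits idea (my illustration, nothing from the actual calculator) is a generator that performs long division one digit at a time, so callers can pull arbitrarily many digits on demand without materialising the full expansion:

```python
# Lazily yield the decimal digits of numerator/denominator (0 <= n < d)
# by doing schoolbook long division one digit per step.
from itertools import islice

def decimal_digits(numerator, denominator):
    remainder = numerator
    while True:
        remainder *= 10
        yield remainder // denominator
        remainder %= denominator

# First 12 digits of 1/7 = 0.142857142857...
print(list(islice(decimal_digits(1, 7), 12)))
```

The infinite-scroll UI is essentially `islice` over such a generator, with the scroll position as the index.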
As a user, it sounds like an annoying interface. On the rare occasion I want to compute e^(-10000), I do not want to scroll for 3 minutes through screens filled with 0s to find the significant digits.
Furthermore, it's not very usable. A key question in this scenario would be: how many zeroes were there?
It's basically impossible to tell with this UI. A better approach is simply to switch to scientific notation for very large or very small numbers, and leave decimal expansion as an extra option for users who need it. (Roughly similar to what Wolfram Alpha gives you for certain expressions.)
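A hypothetical sketch of that display policy in Python (the thresholds and digit counts here are invented for illustration, not from any real calculator): plain decimal for ordinary magnitudes, scientific notation once leading or trailing zeros would dominate the screen.

```python
# Switch to scientific notation outside a "comfortable" magnitude window;
# inside it, show a plain decimal. Thresholds are arbitrary illustrations.
def display(value, fixed_digits=10):
    if value != 0 and not (10 ** -4 <= abs(value) < 10 ** fixed_digits):
        return f"{value:.6e}"
    return f"{value:.6g}"

print(display(123.456))  # 123.456
print(display(2.5e-9))   # 2.500000e-09
```

The scientific form answers "how many zeroes were there?" directly via the exponent.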
This relates to what I wrote in reply to the original tweet thread.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality will use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I were building a calculator app -- especially if I had the resources of Google -- I would start by trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
It is not immediately obvious why the area of the hypotenuse square should be equal to the sum of the areas of squares drawn on the other two sides of the triangle.
It is clear that the lengths of a, b and c are connected -- if we are given the length of any two of (a, b, c), and one angle, then the remaining side can only have one possible length.
So far, so simple; what is less clear is why the exact relationship for right triangles is c^2 = a^2 + b^2.
The other proofs demonstrate that the relationship holds, but give little insight.
The geometric proof linked above makes the relationship crystal-clear.
For any right triangle we can define a 'big square' with sides (a + b). The hypotenuse square is simply the area of the 'big square' with 4 copies of the original triangle removed.
Simple algebra then gives us the formula for the hypotenuse square:
The big square has area: (a+b)^2 = a^2 + 2ab + b^2
The original triangle has area: ab/2
1 big square minus four original triangles has area: (a+b)^2 - 4ab/2 = a^2 + b^2
Similarly, if you take the hypotenuse square, and subtract 4 copies of the original triangle, you get a square with sides (b - a). This is trivial to prove with algebra but the geometric visualisation is quite neat, and makes clear why the hypotenuse square must always equal the sum of the other two squares.
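Both dissections can be spot-checked numerically. A purely illustrative Python snippet (not taken from the linked proof) verifying the two identities for random leg lengths:

```python
# Spot-check the two dissections of the square with sides (a + b):
#   (a+b)^2 - 4*(ab/2) = a^2 + b^2   (big square minus four triangles)
#   (a^2+b^2) - 4*(ab/2) = (b-a)^2   (hypotenuse square minus four triangles)
import math
import random

for _ in range(1000):
    a, b = random.uniform(0.1, 100.0), random.uniform(0.1, 100.0)
    c_squared = a ** 2 + b ** 2           # hypotenuse square, by the theorem
    four_triangles = 4 * (a * b / 2)
    assert math.isclose((a + b) ** 2 - four_triangles, c_squared, abs_tol=1e-6)
    assert math.isclose(c_squared - four_triangles, (b - a) ** 2, abs_tol=1e-6)
```

The `abs_tol` guards against floating-point cancellation when a and b are nearly equal and (b - a)^2 is close to zero.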
There is a need for laws, and there is a need for laws that respect freedom. As of 2023, the details of how a freedom-respecting social system would work have only been scantily developed.
If men like patio11 aren't convinced by the pro-freedom case, why would anyone else be?
The libertarian movement has supported many bad actors. Cryptography has supported many evil acts. Libertarianism is not bad and cryptography is not evil; but no-one serious will follow us until we clean up our act.
(It's not enough to say that the current system is bad or worse. The current system is terrible. But we should be aiming to set things right, not simply tweak things on the margin.)
This is a terrible article that rambles on for far too long without offering an actually polite way to tell/request someone to do something.
These are the best approaches:
1. "Please take out the trash."
2. "The trash needs to be taken out. Can you do it?"
3. (If there are several people around). "The trash needs to be taken out. Who can do it?"
Extra consideration:
I'm assuming this refers to a context where there's a prior agreement in place that A can tell B what to do (e.g. a business, where B signs a contract stating that he has sold X hours/day to the organisation). It usually should be unnecessary to bark out orders or to beat around the bush -- both are insulting: if B is a functioning adult they accept that they work for an organisation and so need to complete certain tasks.
However, direct instructions are rarely necessary for knowledge workers or highly-skilled professionals. Unless things have broken down horribly, they're aware that the success of the business they work for will contribute to their own career success.
E.g., instead of "you will write unit tests today" or "would you mind terribly writing unit tests today?", A would do best saying something like "we're introducing too many regressions when we change things, we need a better testing strategy -- let's discuss our approach to unit testing" and then let the team weigh in with their own ideas so they have ownership over whatever is decided.
> "they're aware that the success of the business they work for will contribute to their own career success."
If this were commonly true, it wouldn't also be commonly true that people feel best served by switching companies every 2 years. The only remaining link is simply continuing to be employed, and even that's tenuous and often volatile.
> "Unless things have broken down horribly"
It's a lucky individual who's never experienced this in their career.
#3 is actually unlikely to be effective due to the well-studied bystander effect (everyone stands around hoping someone else will help). Asking a concrete person is much more likely to get it done.
Not OP, but the bystander effect is definitely real.
*Anecdotal example*: I got an immediate response to my email after addressing a specific person. This was after repeated appeals for help/response to the entire team on the same email chain.
Yeah, there is an abstraction level. Even “take out the trash” is an abstraction. A lot of this minutiae can be handled with coding standards, policies, automated checks and so on. It gets “bossy” when, say, you need someone to do something out of the ordinary -- take support calls when they usually don't. That is where emotional intelligence needs to kick in. Depending on culture and context it could be anything from “You are on the phones, Frank” to “Sorry guys, we don't normally ask this but…”
Not only is "take out the trash" an abstraction, it's actually the wrong abstraction. Wrong abstractions are the mirror image of "technically correct": they're the worst kind of wrong.
"Take out the trash" is a well known, predictable, and well defined solution to a well known, well defined, and expected issue. If I were your manager, I'm sure you wouldn't mind me saying to you "@quickthrower2, I'd like you to lead the retro next week," would you? I don't want to put words in your mouth or anything, but if you're anything like 97.3% of people I've worked with, that's not going to faze you in the slightest, provided you feel like you have the skills to do the task. It's a checkbox.
Contrast that with "@quickthrower2, you need to reticulate the splines on this project." If "reticulating the splines" is a nontrivial task that doesn't have a well defined solution, that's going to land entirely differently than something like leading a retro. They're totally different scenarios, totally different contexts, totally different expected results.
> My personal favorite, combo of 2 listed, would be "I need X to happen"
I had a boss who spoke like this: He was too afraid to communicate directly, so everything was implied.
Instead of saying "I'm assigning X task to you" we'd have to play a game where he'd say "X is really important and it needs to be done". Then you had to ask 20 questions to extract the actual ask from him:
"Okay great, should I do it"
"If you want, that would be great"
"Cool, I can do it. Is it the highest priority or can it wait?"
"Well it's very important, but I don't want to interfere with your other work."
"I'm working on task A with tasks B and C next in the queue. Where should I prioritize it?"
"Well it's very important. The stakeholders want it done soon."
"Okay, how soon? Is there a deadline?"
"I don't like to put deadlines on people, but they're very adamant that it gets done soon. It would be good if it was done soon"
And so on, until I had spent 15 minutes extracting enough clues about what he wanted. He thought he was being extra nice by never giving anything resembling an order, but it just created confusion for everyone and disappointment when we didn't perfectly read his mind.
Sounds like they didn't exactly know how to prioritize a task relative to other tasks. That could be a lack of clarity, being pulled in different directions by 3 different project managers/product owners/dotted lines etc, or, lastly, their own manager adding 5 new high-priority tasks a day.
Anyway, at the end of the day it's literally their role to handle the dysfunction and/or understand the priorities.
Subjective it must be, if my boss talked like that it would give me flashbacks to entitled customers I had working in food service.
Edit: to expand, the “I need” language has an implicit imperative. Since it is implicit, the listener/employee needs to internalize the command, and internalize the idea that the boss's “needs” are the employee's “wants”. Maybe I'm psychologizing too much, but I haven't ever met someone who talked like that who I could get along with. A baby cries when it needs something, mommy responds. As adults we should handle our own needs by turning them into actions to fulfill them ourselves or requests to have others fulfill them.
I'd feel the same way if I heard it in food service. Corporate motherfucker who doesn't actually work.
If I hear it in the context of both a knowledge job and a high-trust relationship, it works way better for me: here is the problem and we need it solved.
>> entitled customers I had working in food service.
> I'd feel the same way if I heard it in food service. Corporate motherfucker who doesn't actually work.
Hang on, to you, "We need some paper towels at this table" equals "Corporate motherfucker who doesn't actually work"?
Just how sensitive are you and GP?
I've actually worked in food service, and a table telling me "we need some paper towels at our table" didn't trigger any negativity in me at all, nevermind the extreme PTSD sort of negativity that that sentence appears to trigger in you and GP.
TBH, if you're triggered by the phrase "We need some paper towels at this table", then you probably have ... some sort of condition?
It also suggests people work for the boss and satisfy the boss's needs. The boss is a leader and organizer satisfying the needs of the project/team/company, not a master.
“The trash needs to be taken out. (who wants to | can you) take responsibility for getting it done?”
I didn't notice that line, but my immediate thought on reading the memo was that they were going hard after European capital. Seemingly both governmental and from VC firms in Europe that are well-connected to the traditional establishment.
(Even if they don't get government funding, it looks like they've already positioned themselves to get a ton of support from the EU and from European national governments, who want local champions to compete with the American capitalists.)
My thoughts:
- it's a fairly well-written memo: clear, concise, and relatively well-structured. It's a refreshing alternative to the endless series of pitch decks which mainly induce groupthink.
- the insights aren't that original. I would guess that they are strongest on the technical level. Their product and industry insights are less original or penetrating (pushing open-source and the European angle may give them some advantages -- it almost certainly will not give them an edge over OpenAI, who seem to have a relentless focus on iterating their product, a "live player" CEO in sama, and a strong dose of healthy scrappiness and creativity.)
- my prediction: they will stick around for a time, build something impressive with their technical team, raise more funding, hire a large and strong technical team (with their enormous warchest, status as the European champion, and appeal to the open-source community), and basically become a solid "fast follower". However, I think it's less certain whether they'll be able to do anything really creative or innovative.
Their overwhelmingly strongest market is going to be large European enterprises (or maybe governments), who will have lots of specific niche problems. The downside of that is their org then moulds itself to fit the needs of enterprise customers.
More speculative: the only way they can really become dangerous will be by attracting a bunch of young top-notch 18-25-year-old devs, ML people, or other builders (basically, people with creative energy and without commitment to the "established ways of doing things"), throwing them in a room together, and giving them a remit to come up with the most crazy, ambitious, unexpected applications of AI that they can. A universal translation app customised for the needs of ERASMUS students. Some weird generative music app for Berlin nightclubs. The actual ideas don't matter, what matters is having 50-100 hyperactive GenZ builders energetically exploring the space of combinations of technology, product and market to identify the genuinely novel applications of AI.
But I predict that they will not do that. You cannot cultivate that kind of free-spirited entrepreneurial DNA while simultaneously seeking deep connections with the European political establishment.
So, my overall prediction: they will focus on the underlying technology and on connections with governments/large enterprises, and deemphasise product innovation. Basically, building an AI-powered French IBM, rather than Google+++++. (Google+++++ will be built by one or two lone young researchers following their noses and working on weird esoteric ideas.) If I am right, and product innovation is the key to harnessing the power of generative models, then these guys in Paris will be formidable but not dangerous.
"We will distribute tools to leverage the power of these white box models, and create a developer community around our trademark. This approach is an ideological differentiator from OpenAI; it is a very strong point argument for hiring highly coveted top researchers, and a strong accelerator for development, as it will open the doors for many downstream applications from motivated hackers."
So they're trying to build a community of inventive 3rd party developers, however:
"Business development will start in parallel to the first model family development, using the following strategy
- Focus on exploring needs with a large industrial actors, accompanied by third-party integrators that will be given full access to our best (non-open source) model"
Suggests they will be sucked into the enterprise market, as I said above. Still, the last line:
"Co-design of products with a few small emerging partners that focuses on generative-AI products."
Suggests a wildcard: one of these "small emerging partners" might come up with the $trillion black-swan application of AI.
[I'm going to bookmark this comment and reread it 2 years from now to see how wrong I was]
In memory of the olden days when Hacker News focused on discussions for entrepreneurs, here's a question:
I have no doubt that hundreds of millions of BS jobs exist, but isn't that what management consultants are supposed to fix?
Whether or not management consultants actually do fix wasteful jobs is another issue. But my understanding of the sector was that it aimed to fix these issues. Organisational refactoring, if you will.
Likewise, everyone complains about bad management, but good management is supposed to be able to fix the kind of issues Graeber describes. Steve Jobs' return to Apple in the late 90s is a famous example.
Years ago on this site someone who worked in senior management in a tech company told a story. It went something like this: there were two teams in the company, who were always complaining, not very productive, and seemed to waste enormous amounts of time and energy on pointless busywork or trying to coordinate tasks. Everyone on team A blamed team B, everyone on team B blamed team A, etc, etc. The manager realised this needed to be fixed, talked to the relevant people on both teams, figured out what the problem was, and ... fixed it. (I have no recollection of what the actual problem was -- let's suppose it was some kind of communication or process issue.) After that, things ran smoothly.
Is this story realistic? No doubt -- after all anyone who has worked in engineering is familiar with chronic technical problems that can be fixed with very small amounts of work -- if one has the requisite knowledge. And no doubt there are chronic human problems that are similarly solvable.
Anyway - after many rounds of such management fixes you would have an organisation that ran like a swiss watch. In such places, good managers, like good sysadmins, fade into the background and become invisible. Google in the 2000s may have been a good example. But eventually either the organisation grows, or the good managers leave, or -- most likely -- people decide they no longer need to worry about good management, the organisation begins to rot, and eventually they're forced to bring in bad management. Corporate entropy proceeds in the usual way, and you end up with waste, bad practices, poor communication, doomed projects, crappy output, politics, and -- BS jobs.
This is why I'm skeptical of political intellectuals like Graeber who note a problem but misidentify the solution. Some of the things he categorises as "BS jobs" are themselves intended to fix BS jobs.
But I do not have much experience working in large organisations, so perhaps someone else can chime in.
I ran an interesting session with ChatGPT. I don't think the following proves that it has a "world model", but it does seem to be able to emulate having a world model to an impressively high degree of resolution.
I have to snip some prompts for brevity, but here's the gist. I first asked it to list the rooms in a typical apartment and how they are connected. Then I gave it the following prompt:
Write a table that lists the furniture and other large items that appear in each room. The table should list the name of the item, the room it appears in, its approximate size, and its other attributes.
Then, create another table that lists the small items that appear in each room. [snipped the rest of the prompt for brevity]
Here is a sample of the output:
Item | Room | Size (LxWxH, cm) | Attributes
Sofa | Living room | 200x90x85 | 3-seater, upholstered, with cushions
Coffee table | Living room | 100x60x45 | Wooden, rectangular
TV | Living room | 120x6x70 | Wall-mounted, flat-screen, 55 inches
Refrigerator | Kitchen | 70x70x170 | Stainless steel, 300L capacity
And some small items:
Item | Room | Location | Size (LxWxH, cm) | Attributes
Cushions | Living room | Sofa | 45x45x15 | Soft, decorative
Remote control | Living room | Coffee table | 15x5x2 | For TV
Vase | Living room | Coffee table | 15x15x30 | Glass, decorative
Flowers | Living room | Vase | Varies | Fresh or artificial
There were many more rows in the table: I've cut the output down for HN.
I then gave it the following prompt [I included an example of the original SHRDLU in the prompt, which I've cut here]:
I want you to simulate the classic program SHRDLU by Terry Winograd. SHRDLU functions like above. You will simulate both "Person" and "Computer". "Person" will move around the apartment, pick up a coaster, move to the kitchen, and put the coaster on the kitchen island. After each action of "Person", "Computer" will give one sentence describing the position of the most-recently changed objects.
Output:
Person: Move to the living room.
Computer: You are now in the living room.
Person: Pick up a coaster.
Computer: You have picked up a coaster from the coffee table.
Person: Move to the kitchen.
Computer: You are now in the kitchen.
Person: Put the coaster on the kitchen island.
Computer: You have placed the coaster on the kitchen island.
Next prompt:
Continue to simulate SHRDLU. "Person" goes through the stages to make a cup of tea. There is a bottle of milk in the refrigerator. At each step of the simulation, include "Person"'s thoughts in curly brackets, { }. See the example below. [Example snipped]
Output:
Person: {I feel like having a cup of tea. I need to go to the kitchen, boil water, and gather the necessary supplies.} Move to the kitchen.
Computer: You are now in the kitchen.
Person: {I need to boil some water.} Fill the kettle with water.
Computer: You have filled the kettle with water.
Person: {Now, let's boil the water.} Turn on the kettle.
Computer: The kettle is now on and heating the water.
Person: {While the water is boiling, I'll get a teabag.} Get a teabag from the box of teabags.
Computer: You have taken a teabag from the box of teabags.
Person: {Now I need a cup.} Get a cup from the cabinet.
Computer: You have taken a cup from the cabinet.
Person: {I'll put the teabag in the cup.} Place the teabag in the cup.
Computer: You have placed the teabag in the cup.
Person: {The water should be ready by now.} Pour the boiling water into the cup.
Computer: You have poured the boiling water into the cup.
Person: {I need some milk.} Get the bottle of milk from the refrigerator.
Computer: You have taken the bottle of milk from the refrigerator.
Person: {I'll add some milk to the cup.} Pour a small amount of milk into the cup.
Computer: You have poured a small amount of milk into the cup.
Person: {The tea is ready.} Stir the tea with a spoon.
Computer: You have stirred the tea with a spoon.
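The transcript is consistent with a very small underlying state machine. Here is a minimal Python sketch (all names and structure invented for illustration -- this is not anything ChatGPT actually maintains internally) of the kind of world model the session implies:

```python
# A tiny world model: items rest on surfaces, a person moves between rooms,
# and move/pick/put actions update the state and report it, SHRDLU-style.
class World:
    def __init__(self):
        self.location = {"coaster": "coffee table"}  # item -> surface
        self.person_room = "living room"
        self.holding = None

    def move(self, room):
        self.person_room = room
        return f"You are now in the {room}."

    def pick_up(self, item):
        surface = self.location.pop(item)
        self.holding = item
        return f"You have picked up a {item} from the {surface}."

    def put_on(self, surface):
        item, self.holding = self.holding, None
        self.location[item] = surface
        return f"You have placed the {item} on the {surface}."

w = World()
print(w.move("living room"))       # You are now in the living room.
print(w.pick_up("coaster"))        # You have picked up a coaster from the coffee table.
print(w.move("kitchen"))           # You are now in the kitchen.
print(w.put_on("kitchen island"))  # You have placed the coaster on the kitchen island.
```

The interesting question is whether the model tracks something functionally equivalent to `self.location`, or merely produces text that stays consistent with one.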
This looks like a very interesting paper that takes the rare approach of actually trying to understand what all the cool new language models are doing at a fundamental level.
Does anyone with more knowledge of the relevant mathematics (group theory and so on) care to chime in?
This paper is a very good advertisement for Krohn-Rhodes theory, which shows how automata decompose into simpler automata. I think it's a somewhat obscure topic within math (among people who aren't semigroup theorists), so I was happy to be exposed to it.
It's a bit shocking that they got Transformers to actually learn the theoretical low-depth algorithms for simulating automata, but looking closer at their results we can see that the parts that I would intuitively think are hard to learn (e.g. learning parity) are fairly brittle.
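To make the parity example concrete: parity is a two-state automaton, and because its transition functions compose associatively, it can be evaluated in logarithmic depth rather than one sequential step per input token. A small sketch (my illustration, not the paper's construction) showing both evaluation orders:

```python
# Parity as a two-state automaton: reading bit b maps state s to s XOR b.
def parity_direct(bits):
    """Sequential simulation: one state update per input bit (linear depth)."""
    state = 0
    for b in bits:
        state ^= b
    return state

def parity_tree(bits):
    """Divide-and-conquer evaluation: logarithmic depth, valid because XOR
    (composition of the transition functions) is associative."""
    if len(bits) == 1:
        return bits[0]
    mid = len(bits) // 2
    return parity_tree(bits[:mid]) ^ parity_tree(bits[mid:])

bits = [1, 0, 1, 1, 0, 1]
assert parity_direct(bits) == parity_tree(bits) == 0
```

The low-depth tree is the kind of shortcut a fixed-depth Transformer would need to discover in order to simulate long automaton runs.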
I pretty much agree. The "scientific" approach the author pushes for in the article -- running experiments with multiple similar prompts on problems where you desire a short specific answer, and then running a statistical analysis -- doesn't really make much sense for problems where you want a long, detailed answer.
For things like creative writing, programming, summaries of historical events, producing basic analyses of countries/businesses/etc, I've found the incremental, trial-and-error approach to be best. For these problems, you have to expect that GPT will not reliably give you a perfect answer, and you will need to check and possibly edit its output. It can do a very good job at quickly generating multiple revisions, though.
My favourite example was having GPT write some fictional stories from the point of view of different animals. The stories were very creative but sounded a bit repetitive. By giving it specific follow-up prompts ("revise the above to include a more diverse array of light and dark events; include concrete descriptions of sights, sounds, tastes, smells, textures and other tangible things" -- my actual prompts were a lot longer) the quality of the results went way up. This did not require a "scientific" approach but instead knowledge of what characterized good creative writing. Trying out variants of these prompts would not have been useful. Instead, it was clear that:
- asking an initial prompt for background knowledge to set context,
- writing quite long prompts (for creative writing I saw better results with 2-3 paragraph prompts), and
- revising intelligently

consistently led to better results.
On that note, this was the best resource I found for more complex prompting -- it details several techniques that you can "overlap" within one prompt: