I doubt we will. The state of the art seems to have moved away from the GPT-4-style giant, slow models toward smaller, more refined ones - though Groq might be a bit of a return to the "old ways"?
Personally I'm hoping they update Haiku at some point. It's not quite good enough for translation at the moment, while Sonnet is pretty great and has OK latency (https://nuenki.app/blog/llm_translation_comparison)
Funny enough, 3.7 Sonnet seems to think it's Opus right now:
> "thinking": "I am Claude, an AI assistant created by Anthropic. I believe the specific model is Claude 3 Opus, which is Anthropic's most capable model at the time of my training. However, I should simply identify myself as Claude and not mention the specific model version unless explicitly asked for that level of detail."
"Trust but verify" is still useful especially when you ask LLMs to do stuff you don't know. I've used LLMs to help me get started on tasks where I wasn't even sure of what a solution was. I would then inspect the code and review any relevant documentation to see if the proposed solution would work. This has been time consuming but I've learned a lot regardless.
I use Julia regularly for experimental machine learning. It's great for writing high-performance, distributed code and even easier than Python for this kind of work, since I can optimize the entire stack in a single language. Not sure if it's growing in popularity, but it's really solid for what it does.
Me too, and I'd like it to become mainstream. The major problem right now is that it doesn't have anything that is close to Torch or JAX in performance and robustness. Flux et al. are 90% there, but the last 10% requires a massive investment, and Julia doesn't have any corporate juggernaut funding development like Meta or Google.
This is hurting Julia's adoption. The rest of the language is incredibly elegant, as there is no 2-language divide like in Python. Furthermore, it is really performant. With very little effort one can write code that is within 1.5-2x of C++, often closer.
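To give a concrete flavour of "very little effort", here's a minimal, hypothetical sketch (the function name and numbers are mine, purely illustrative) - a plain typed loop, nothing clever, of the kind that typically lands in that 1.5-2x-of-C++ range:

    # Plain Julia: the compiler specializes this loop on Float64, so it
    # compiles to a tight native loop with no interpreter overhead.
    function squared_norm(xs::Vector{Float64})
        total = 0.0
        for x in xs
            total += x * x
        end
        return total
    end

    squared_norm(rand(1_000_000))  # first call triggers compilation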
One possibility is that something like Mojo takes Julia's spot. Mojo has some of the advantages of Julia, plus very tight integration with Python, its syntax and its ecosystem. I would still prefer Julia, but this is something to keep in mind.
LLMs massively compound the advantage of existing popular languages, namely Python. Any new learner will find it infinitely easier to use Sonnet 3.5 to overcome the so-called "2-language barrier" in Python, while the lack of training data for Julia becomes the real barrier.
This issue will remain until LLMs get smart enough to self-iterate and train themselves on a given language. By then, though, we'd likely have languages designed and optimized for LLMs.
To back up the sibling comment, I've found ChatGPT quite capable where Julia is concerned. It does hallucinate the occasional standard-library function, but a) about half the time it gets it right once it's told it was wrong, and b) Julia's documentation is fairly good, so finding what that function is really called is not a big deal.
It can even debug Pkg/build chain problems, which... Julia could use a bit of polish there. On paper the system is quite good, but in practice things like point upgrades of the Julia binary can involve a certain amount of throwing spaghetti at the wall.
For what it's worth I've found Claude Sonnet to work really well with Julia.
One fun exercise was when a friend handed me a stack of well-written, very readable Python code that they were actually using. They were considering rewriting it in C, which would have been worth it if they could get a 10x speedup.
I had Sonnet translate it to Julia, and it literally ran 200x faster, with almost identical syntax.
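Not their actual code, but to illustrate the "almost identical syntax" point, a hypothetical kernel of the kind involved - swap function/end for def/:, adjust to 0-based indexing, and you essentially have the Python version, except that the Julia loop compiles to native code:

    # Hypothetical example, not the code from the anecdote.
    function running_mean(xs::Vector{Float64}, window::Int)
        out = similar(xs, length(xs) - window + 1)
        for i in eachindex(out)
            s = 0.0
            for j in i:(i + window - 1)   # tight inner loops are cheap in Julia,
                s += xs[j]                # expensive in interpreted Python
            end
            out[i] = s / window
        end
        return out
    end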
Could you elaborate? As far as I understand, if you treat it like Python (e.g. use defs and stick with the copy-on-modification default), you'll still see performance improvements without even thinking about memory.
I want to really like Julia. For me it felt like more work than Python for simple stuff and not that much less work than C++ if you are trying to get the best performance. It is a cool language though.
It's reasonably popular, growth has continued at a slow but steady pace. It's never going to become Python or anything but it's great in its niche.
We use Julia in our hedge fund. It lets our researchers write Python-like syntax while being very easy to optimize – compared to numpy code, we've had a relatively easy time getting Julia to run 20x-1000x faster depending on the module, which has resulted in a very large reduction in AWS bills.
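A purely illustrative sketch (not our production code) of where that kind of speedup typically comes from: numpy evaluates a vectorized expression one operation at a time, allocating a temporary array at each step, while Julia's dot-broadcast fuses the whole expression into a single loop - and can write the result in place:

    x = randn(10_000)
    out = similar(x)

    y = @. exp(-x^2) * sin(x) + 0.5 * x    # one fused loop, one allocation;
                                           # numpy allocates a temporary per op
    @. out = exp(-x^2) * sin(x) + 0.5 * x  # in place: no allocations at all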
Certainly yes in scientific computing, less so in ML/data science. There's a lot of the culture of scientific computing in economics -- a lot of heavy numerical stuff in addition to the statistical modeling you might expect.
I think my general sense of this article is that all of these have been fixed. The language is relatively new, and the core devs are responsive. Using anything new comes with risks. I think the community appreciated a detailed and generally well-reasoned diagnosis, but at the same time these things are relatively easily addressed.
R is an open source version of S, which was a competitor to SAS.
Julia, from when I looked at it years ago, was trying to be something like a new version of Matlab or Mathematica. It was very linear-algebra focused, and was trying to replace those packages plus Fortran. It had some gimmicks like an IDE that would render mathematical notation, TeX-style, for your matrices.
Python wasn't the obvious "Fortran killer" scientific language it is today. In fact it's arguably really weird that Python ended up winning that segment. In any case, I think Julia's been struggling since its inception.
R and S are also very linear algebra focused. R developers just try to make C++ behave like R as much as possible when they need more speed. Hence, Rcpp. Otherwise, we prefer our LISPy paradise.
I was in Austin while Travis Oliphant's wave from numpy led to Anaconda. After that we got to bring them in as consultants. It was wild talking to the team and hearing the inside-track dev info. It isn't a surprise to me that Python, flexible glue language that it is, became the Excel of scientific computing.
Mostly the vision and ideals that became Anaconda, conda, and miniconda; the translation of ideas into use cases into implementations; and some ideas that came about later in other forms or libraries (numba, pytorch).
Basically a mini/beta/in-progress version of Pycon each week.
Not at all? Totally different programming paradigm and performance. Certain communities pull towards Julia a lot more than others. Mostly I've seen scientific fields that require HPC but don't want to do everything in FORTRAN and C. Paging Chris Rackauckas!
Fair enough. It probably would make sense to have a Conda-like release of Julia that comes out every year with a broad but curated selection of packages.
I don't think you'd actually want to include each of those packages in a standard distro: does the average user really need to programmatically send emails or deal with Voronoi tessellations? Probably not, but I still think there's value in a batteries-included approach, especially when working with students.
I agree that LLMs are less useful for smaller scale use cases. I’ve found success using them for creating mock data, reformatting unstructured text into structured data, and a few other menial tasks that would take me a while to do on my own.
I think so much of the hype is about potential larger scale applications but the models just don’t seem reliable enough yet for that.
CSS has had a similar issue with naming classes, which is part of the motivation for TailwindCSS's design. I wonder if we'll see more AI tools for these kinds of use cases.
Strangely, I've noticed mixed opinions from developers about whether learning vim is a productivity boost. Some people believe it is while others disagree.
I'll admit my FOMO was what originally got me to start learning vim. I still barely know the basic motions but I'm starting to think it could lead to a productivity boost once I get over the learning curve.
The productivity boost is highly subjective; it depends a lot on your work. Do you edit files a lot, or spend more time thinking? Do you use graphical tools on the side that force you to grab your mouse anyway? With AI tools now, where you can basically select a file in VSCode and prompt "rewrite all in snake_case", it's becoming even less clear-cut. I think the biggest gain from vim is simply whether you're happier using it or not.
There is a quote from Apple UX designers/engineers about testing the keyboard vs mouse for doing stuff in the OS.
Apparently the test subjects always reported that the keyboard-driven controls were faster, but the timing measurements showed that the mouse was faster.
Chances are the keyboard feels faster, rather than being actually faster.
I'll add another point of view I developed while observing many LaTeX vs Word or Excel vs SomeObscureCoolThing(tm) threads: people will happily waste thousands of hours over many years learning vim/emacs/LaTeX/SomeObscureCoolThing, but will flatly refuse to spend 20-200 hours (again, over many years) to properly learn how to use Jetbrains' stuff (IntelliJ etc.), or Word/Excel/PowerPoint (or the LibreOffice equivalents), or some other mainstream tool.
I've seen countless web apps developed in months that could have been an Excel sheet developed in a week. People wasting weeks on their documents because LyX would no longer open them after a software update. People (particularly in university) being super stressed, wasting precious time, and occasionally missing deadlines because they wasted too much time fighting LaTeX to align tables or images, having refused to properly learn how to use Word (or LibreOffice's equivalent, Writer).
And don't even get me started on plumbing various tools together. Most vim/emacs users (and I say this as an emacs user) can only integrate other tools as long as there is some copy-paste-ready code; they can't go much further.
So... yeah, the productivity boost is incredibly subjective. And chances are it's also fake.
It's not that big a deal (meh), but I'm annoyed by the fact that none of this is even acknowledged.
Maybe it comes back to the feeling of fun mentioned by GP? I also enjoy working with Vim, while LibreOffice Writer is clunky and Word is not much better (though I have no love for LaTeX). I could make a spreadsheet in under an hour, but if it'll make me feel like I'm hacking together something buggy in an inadequate tool, I'd rather spend more time making a web app or a Python script.
Likewise, mandating a file per public class in Java is no big deal on the surface, but having to create and juggle so many files for small classes feels terrible to me, so a seemingly small detail turns me off the language.
I think we should examine these feelings, because they ultimately drive (some part of) our behaviour, and I'd guess they're not just random preferences but are rationalisable.
> I could make a spreadsheet in under an hour, but if it'll make me feel like I'm hacking together something buggy in an inadequate tool, I'd rather spend more time making a web app or a Python script.
Congrats, you missed the point entirely and provided me with a perfect example case:
Have you ever spent a considerable amount of time learning Excel, the very same way you did for Python?
It's very likely that Excel is perfectly adequate and not buggy at all; you're just ignorant (in Excel) and can't go further than "hacking a spreadsheet together".
So, have you spent time properly learning other tools, or are you one of those everything-except-what-I-like-sucks people?
The whole "data and (hidden) code mixed in a seemingly-infinite matrix" concept offends my engineering sensibilities. It's not about bugs in Excel, it's about the bugs my spreadsheets will have since I forgot to fill in that one cell, overran the area used by some formula elsewhere, didn't set the format correctly, so visually all looks good but it breaks a sum...
To get back to my point, working with spreadsheets feels bad (from my experience) because I have to juggle all the things mentioned above. In my favourite programming environments, mistakes of this sort are generally directly evident in the form of errors.
If I had more spreadsheet experience, I would get better at avoiding these mistakes. But I choose to use Python, where they are handled by design.
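To make that concrete (shown in Julia here, though Python behaves the same in spirit), the failure modes that stay invisible in a spreadsheet surface immediately:

    data = [1.0, 2.0, missing]   # the forgotten "cell"
    sum(data)                    # returns missing, not a plausible wrong total

    ones(3) + ones(4)            # throws DimensionMismatch instead of quietly
                                 # misaligning like a spreadsheet range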
It was a comparison between keyboard shortcuts and the quick-access bar, in a graphical tool.
And anyone who has ever played MMO or RTS games will know that early on, clicking the quickbar is faster and more precise than using shortcuts, but later on, clearly, no one mouse-clicks if they want to stay competitive.
The real productivity boost happens when you become fluent and it feels like a second language that describes actions on text objects. At this point I can do most things as fast as I can think of what needs to be done. It's like being bilingual.
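For anyone who hasn't reached that stage yet, the "language" is roughly verb + text object; a few core (plugin-free) examples:

    ciw   " change inner word (replace the word under the cursor)
    di(   " delete everything inside the enclosing parentheses
    yap   " yank (copy) around paragraph
    dt,   " delete up to, but not including, the next comma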