It would probably be some application of spectral imaging [0], highly dependent on what data you choose to capture based on your assumptions about how the aliens see.
Even if you have a mathematical "photo of a planet of nitrogen gas clouds", that leaves the problem of how to present it to humans, since we have no concept of what "nitrogen gas color" is supposed to be.
As OP said, most astronomical pictures, even in the visible spectrum, target specific wavelengths and are thus "converted" for human vision. The same is obviously true of any picture taken at wavelengths beyond visible light. Keep in mind that there is also massive processing to clean the pictures, not just a translation of the wavelengths.
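The wavelength "conversion" part is essentially false-color composition: each narrow-band exposure is stretched and assigned to an RGB channel by convention, not by physics. A minimal sketch (band values and the longest-wavelength-first ordering are just illustrative assumptions, and real pipelines do far more cleanup than this):

```python
import numpy as np

def false_color_rgb(bands):
    """Map three narrow-band exposures (any wavelengths, visible or not)
    onto the R, G, and B channels, stretching each band independently so
    the result is displayable. The colors are a convention, not reality."""
    channels = []
    for band in bands:  # conventionally ordered longest wavelength first
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        # Stretch to [0, 1]; real pipelines also do dark-frame subtraction,
        # cosmic-ray removal, stacking, etc. before this step.
        channels.append((band - lo) / (hi - lo) if hi > lo else band * 0.0)
    return np.dstack(channels)  # shape (H, W, 3), ready for display

# Three hypothetical infrared exposures of the same field, with very
# different raw scales -- per-band stretching hides that difference:
ir_bands = [np.random.rand(64, 64) * scale for scale in (1000.0, 50.0, 3.0)]
rgb = false_color_rgb(ir_bands)
```

Because each band is normalized on its own, a faint band contributes as much color as a bright one, which is exactly why these images should be read as data visualizations rather than photographs.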
We already know ChatGPT is not good enough to do all that without being able to run code. That's why ChatGPT Code Interpreter is so powerful. It was even branded GPT4.5 in this podcast[0] which I think is apt.
However, bringing that auto-coding loop into a persistent executable environment that you can revisit, adjust, and follow up on is, I think, a big win.
Hey HN! That's me talking in the video. This is basically GPT-4 + the new functions capability. We added this to Deepnote during a 2-day hackathon and were surprised by how well it works. Let me know if you have any questions!
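For anyone curious what "GPT-4 + the new functions capability" means mechanically: the model is offered a code-execution function, and the app loops until the model stops requesting calls. The actual Deepnote integration isn't public, so this is only a minimal sketch of the general pattern, with the model stubbed out (a real version would call the chat-completions API with a `functions=[...]` declaration, and the notebook kernel would play the role of the persistent namespace):

```python
import json

def stub_model(messages):
    """Stand-in for an LLM chat call with a `run_python` function declared.
    Here it always asks to run one snippet, then answers from the result."""
    if messages[-1]["role"] == "user":
        return {"function_call": {
            "name": "run_python",
            "arguments": json.dumps({"code": "result = 2 + 2"}),
        }}
    return {"content": "The answer is 4."}

def run_python(code, namespace):
    """Execute model-written code in a persistent namespace, so follow-up
    turns can build on variables defined in earlier turns."""
    exec(code, namespace)
    return repr({k: v for k, v in namespace.items() if not k.startswith("__")})

def chat_turn(messages, namespace):
    reply = stub_model(messages)
    while "function_call" in reply:          # keep executing until the model answers
        args = json.loads(reply["function_call"]["arguments"])
        output = run_python(args["code"], namespace)
        messages.append({"role": "function", "name": "run_python", "content": output})
        reply = stub_model(messages)
    return reply["content"]

namespace = {}  # survives across turns -- the "persistent environment"
answer = chat_turn([{"role": "user", "content": "What is 2 + 2?"}], namespace)
```

The point of the persistent `namespace` is that a second `chat_turn` could reference `result` without recomputing it, which is the property a throwaway sandbox lacks.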
Thanks for the feedback! The main difference between us (Deepnote) and Colab is that we aim for a clean reading experience, and we optimize for speed.
Colab seems to load a notebook with editable cells, and you can start executing cells directly, so it doesn't feel like a "publishing" feature, more like straight-up notebook sharing.
And thanks for the bug report, we'll look into that.
Thanks for the q! A follow-up comment here: setting aside the publishing feature, the Deepnote vs. Colab experience mainly differs in a) UI, b) breadth of integrations (Deepnote integrates with most of the data sources out there and plays well with the rest of your stack), and c) unlike Colab, Deepnote supports both real-time collaboration and asynchronous collaboration via comments - so, quite literally, Google Docs / Figma meets notebooks.
Hi there! Simon (engineer at Deepnote) here. We didn't benchmark the load speed, but for some reason GitHub's ipynb viewer has always felt quite slow and unreliable to me.
All viewers are publicly accessible, so I'd love it if someone did an independent benchmark. I'd prefer to avoid doing one ourselves because of the obvious conflict of interest.
I thought it was more than this. I think you should support any such claim by creating a reproducible, open benchmark so that people can at least understand what criteria you selected.
Deepnote team member here. It took a lot of effort to get to where we are right now – Deepnote is one of the most complex products I've worked on. If you have any questions, engineering, product or otherwise, ask away!