Hacker News | castorp's comments

> I suspect that's a pretty non-standard/rarely-used feature though. If you learn SQL you likely won't encounter this

Recursive common table expressions are part of the SQL standard (since 1999) and are quite frequently used to traverse hierarchical data (aka "adjacency list").

They are covered in basically all (good) SQL tutorials - at least in the "advanced" part.


I don't remember using recursion in a real project, but I built a HN clone on top of Postgres, with the following query:

    WITH RECURSIVE thread(id, parent_id, user_id, post_id, timestamp, text, depth) AS (
      SELECT id, parent_id, user_id, post_id, timestamp, text, 0
      FROM comments
      WHERE user_id = 1
        AND parent_id IS NULL
      UNION ALL
      SELECT c.id, c.parent_id, c.user_id, c.post_id, c.timestamp, c.text, t.depth + 1
      FROM comments c
      JOIN thread t ON c.parent_id = t.id
      WHERE c.user_id != t.user_id
    )
    SELECT * FROM thread ORDER BY timestamp ASC;


I wrote a recursive CTE to do a tree traversal on a parent-child “relationship” table a few weeks ago at work.

They do come up!
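
For reference, the plain tree-traversal version usually looks something like this (a minimal sketch, assuming a hypothetical `nodes` table with `id`, `parent_id` and `name` columns):

    WITH RECURSIVE tree AS (
      -- anchor: start at the root rows (no parent)
      SELECT id, parent_id, name, 1 AS level
      FROM nodes
      WHERE parent_id IS NULL
      UNION ALL
      -- recursive step: attach each child to its already-visited parent
      SELECT n.id, n.parent_id, n.name, t.level + 1
      FROM nodes n
      JOIN tree t ON n.parent_id = t.id
    )
    SELECT * FROM tree ORDER BY level;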


> PostgreSQL has built in support for UUIDv4 through the pgcrypto or the uuid-ossp extensions.

Since Postgres 13, installing an extension is no longer necessary, as gen_random_uuid() is part of core.
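
On Postgres 13 or later this works out of the box (the table and column names are just for illustration):

    -- no CREATE EXTENSION needed on Postgres 13+
    SELECT gen_random_uuid();

    CREATE TABLE sessions (
      id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
      created_at timestamptz NOT NULL DEFAULT now()
    );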


> rather than Redshift

Despite what Amazon's marketing says, Redshift is not really a "fork" of Postgres.

To my knowledge they only used the SQL parser and the wire protocol from Postgres.

The optimizer, query executor and storage engine are totally different. The whole "Redshift is Postgres" line is complete marketing BS in my opinion.


I thought it was a genuine fork, just a very old (pre-v9 even) one?

Anyway, does it really matter? What is someone looking for a fast 'postgres' for analytics actually interested in?

(I didn't realise this was just an extension - in which case I'm amazed it's possible, but that obviously makes it an easy sell if you're already running pg. But if you're shopping around for managed solutions (which is obviously what Hydra wants to sell) with a 'postgres' criterion, surely you're interested in the query language and maybe the wire protocol?)


Yes, it matters.

Many Postgres features aren't supported on Redshift (set returning functions, indexes, ...) and many tools that work just fine with Postgres error out because Redshift does things differently or doesn't support features that Postgres does.
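
To give one concrete example of the kind of thing that trips tools up - a set-returning function that any plain Postgres handles without blinking, but that Redshift (to my knowledge) only supports in limited, leader-node-only contexts:

    -- fine on any recent Postgres; errors out or is
    -- restricted on Redshift
    SELECT * FROM generate_series(1, 5) AS n;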


You need to use pg_dump from the new version, not from the old version.


I think you found a common stumbling block. The tutorial people follow may say to run pg_dump and many people would assume you're running it from the old version.


Wait, what? Explain please.

OK upon further thought, what you're saying is to use the pg_dump tool that comes with the pg version you are upgrading to.

I read it as "use pg_dump instead of pg_restore"...


Yes. Put concisely: Use the client tools from the newest version involved for all steps.


Isn't that impractical for most Linux distros? You'd need to manually install the new client.


On Debian (and probably most of its derivatives), the package manager will not remove any running PostgreSQL versions and lets you have multiple versions side by side. It also ships its own tool, pg_upgradecluster, which can perform the upgrade using pg_dump+pg_restore or pg_upgrade (optionally with link or clone options).
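
For the record, the Debian tooling looks roughly like this (the cluster name and version are examples):

    pg_lsclusters              # list installed clusters and their versions
    pg_upgradecluster 13 main  # upgrade the 13/main cluster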


Probably? Certainly if you rely on distro packaging for it, yes.

I was just trying to simplify the guidance here. :)


Yes, and unfortunately it usually means you need to use the full binary path (because the Debian wrappers shipped in the package pick the version based on the cluster running on port 5432).
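
Concretely, on Debian/Ubuntu that means something like this (the version number and database name are just examples):

    # call the new major version's pg_dump explicitly,
    # bypassing the /usr/bin/pg_dump wrapper
    /usr/lib/postgresql/16/bin/pg_dump -d mydb -f mydb.sql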


> It sounds like Russia might get declared a terrorist state soon.

Lithuania has apparently already done that.


I have been working with Oracle for more than 20 years now. I think I only had very few situations where an "index organized table" (=clustered index) was useful or actually provided a major performance benefit over a "heap table". So I never really miss them in Postgres.


It is such a common access pattern that many database engines always use a clustered index (MySQL's InnoDB, SQLite), whether you use it directly or not.

I like having the choice, as in SQL Server or Oracle, but for many use cases it's a waste, in both space and time, to write to a heap and to an index (which is just a hidden IOT) and then look up in the index and dereference to the heap.
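
For anyone who hasn't seen the Oracle flavour, an index-organized table is declared like this (table and columns made up for illustration):

    -- rows are stored directly in the primary key's B-tree,
    -- so there is no separate heap to dereference into
    CREATE TABLE lookup (
      id   NUMBER PRIMARY KEY,
      name VARCHAR2(100)
    ) ORGANIZATION INDEX;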


> Without dura, you use Ctrl-Z in your editor to get back to a good state.

I just turn on "Local History" in my NetBeans IDE which creates a new version of the file each time I save it. No need to use Ctrl-Z for that.


You don't have to do that "manually". Postgres supports the conversion using the `at time zone` operator.
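
A quick sketch of what that looks like (the zone name is chosen arbitrarily):

    -- converts a timestamptz to local wall-clock time in the given zone
    SELECT timestamptz '2022-01-01 12:00+00' AT TIME ZONE 'Europe/Berlin';
    -- returns 2022-01-01 13:00:00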


This extension uses cron syntax and only supports GMT.


Did you try Qwant? https://www.qwant.com/maps


    select extract(month from justify_interval(timestamp '2022-01-01' - timestamp '2021-11-01'))
returns 2 (justify_interval normalises the 61-day difference to "2 mons 1 day" first).

