Postgres gets support for upsert (postgresql.org)
1003 points by anarazel on May 8, 2015 | hide | past | favorite | 109 comments


I think I squealed with delight when I read this title. This is really a great addition, and will simplify so much of my code once I'm able to upgrade.

I try not to think too deeply about all the DB internals (some people are much smarter than me about this stuff), but clearly there are a lot of things that need to be maintained internally to make this correct, so kudos to all the people who contributed to this.


A big +1 to this. Peter Geoghegan is the primary author and has been championing it for over 2 Postgres releases now. I know he's especially excited to see it land.

Also it couldn't have been possible without the support of Heikki and Andres (who actually submitted this to HN as his first ever submission).


Hah, how do you know I don't have sock puppet accounts ;)


This is actually huge. A common problem that arises when you write applications is you want to INSERT if key does not exist else UPDATE. The right way of doing this without an upsert is using a transaction. However this will make life easier as you can do it directly in one SQL statement.
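
For reference, the new 9.5 syntax does exactly this in one statement. The table and columns below are made up for illustration; EXCLUDED refers to the row that was proposed for insertion:

```sql
-- Hypothetical "counters" table with a unique key on "k"
INSERT INTO counters (k, n)
VALUES ('page_views', 1)
ON CONFLICT (k)
DO UPDATE SET n = counters.n + EXCLUDED.n;
```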


Just "using a transaction" is insufficient. You must be prepared to handle the case that neither the INSERT nor the UPDATE succeeds (with READ COMMITTED isolation), or that the transaction fails (with REPEATABLE READ isolation or better), by repeating the transaction. And if latency is at all a concern to you, you must wrap this all in a stored procedure to avoid the round-trip time between the commands.

Hence this is more than just saving typing a couple lines -- this saves writing entire stupid loops to do what is conceptually a simple (and very common) operation.
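
The "stupid loop" in question is essentially the pattern from the PostgreSQL documentation's error-trapping examples -- roughly this shape (table and function names are illustrative):

```sql
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert; another session could
        -- insert the same key concurrently, hence the subtransaction
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- A concurrent insert beat us; loop back and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

ON CONFLICT replaces all of this with a single statement, with the retry logic handled inside the executor.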

Postgres gets better and better.


...and then you additionally have to verify that the specific reason both the update and the insert failed was concurrency. Otherwise you can make the mistake I did a couple years ago, where I had transactions spinning against my database for weeks on end due to an unrelated constraint failure I had failed to notice, with some transactions that I kept retrying in a loop as part of my hacked-together upsert implementation. (That's why my user id numbers jumped from ~8m to ~460m... it is a meaningless mistake, but one I am reminded of every day due to that "46" staring at me :/)


Is my conclusion correct that this implies sequence increments are run outside of the transaction?


Sequences in Postgres are updated even if a transaction is rolled back. It's pretty easy to see why that's important, especially when dealing with concurrent transactions.

You can't give out the same number to multiple transactions without causing a bunch of unique constraint violations. And you can't retroactively decrease a sequence number without affecting all the other transactions currently executing. You could in theory go back and plug in the gaps in a sequence, but it's more important to guarantee that a single thread of execution will always see increasing numbers than it is to guarantee that all numbers will eventually be handed out to a successful transaction.
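
A quick illustration of the behavior (hypothetical table; exact ids depend on prior sequence state):

```sql
CREATE TABLE t (id serial PRIMARY KEY, v text);

BEGIN;
INSERT INTO t (v) VALUES ('a');   -- nextval() consumes an id
ROLLBACK;                         -- the row is undone, the nextval() is not

INSERT INTO t (v) VALUES ('b');   -- gets the following id, leaving a "hole"
```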


Yes: http://www.postgresql.org/docs/9.4/static/functions-sequence...

> Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.


Yes indeed.


It's fairly trivial to do with a savepoint, but it's definitely not performant, and requires some pretty stupid client-side error-handling.

It will also cause your Postgres logs to be littered with benign but misleading error messages (possibly these could be silenced, I never bothered to find out).

So, yes, this is great news.


Minor nit: it's performant and doesn't pollute logs to implement this on the server side. I've written macros to make gen'ing upsert functions easy, given a certain table pattern. But this definitely cleans that situation up as well as makes it accessible to more casual developers.


Yes, sorry, I was referring to doing it client-side.


I've seen a system in MySQL do "insert on duplicate key update" for ages -- so this is that same pattern?


Yeah, it is the same as INSERT ... ON DUPLICATE KEY UPDATE, and similar to REPLACE in MySQL. Exciting to see it in Postgres!
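
Side by side, for a hypothetical counters table -- the two MySQL statements look similar but behave quite differently:

```sql
-- MySQL: keeps the existing row and updates it in place
INSERT INTO counters (k, n) VALUES ('hits', 1)
ON DUPLICATE KEY UPDATE n = n + VALUES(n);

-- MySQL: REPLACE deletes the conflicting row, then inserts a fresh one
-- (fires DELETE triggers, cascades ON DELETE foreign keys, etc.)
REPLACE INTO counters (k, n) VALUES ('hits', 1);
```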


I think the way MySQL handles "INSERT ... ON DUPLICATE KEY UPDATE" is to delete the old row and insert a new row if there's an auto-increment column (usually the ID column). Curious whether Postgres works the same way in this regard.


No, that's how REPLACE works -- which is a complete pain, to be honest! INSERT ... ON DUPLICATE KEY UPDATE works the way you'd hope it would, and keeps the auto-increment columns correct.


No, the auto increment columns are maintained as is.


That's great. Otherwise, it just wastes IDs unless one figures out a workaround for it.


Why do you need to use a transaction for that?

Do the update statement, followed by insert-where-not-exists. If the update doesn't match, 0 rows updated, and the insert works. If the update matches, then 0 rows inserted.


That's unfortunately not correct. The WHERE NOT EXISTS(...) will not "see" rows inserted by concurrently running transactions that have not committed yet.


I love that it's there now. I've been waiting for it for a long time, but one thing I don't get is... why does every single implementation have to have their own slightly different syntax?

pgsql -> on conflict

mysql -> replace / on duplicate

oracle -> merge

mssql -> merge

sqlite -> insert or replace

firebird -> merge / update or insert


There's a great discussion here: https://wiki.postgresql.org/wiki/UPSERT#Syntax_discussion It basically boils down to that the different syntaxes mean slightly different things. The PG devs (IMO rightly) don't want to adopt an existing syntax that does something slightly different than what they are willing/able to provide, thus confusing users.


MERGE is actually a separate feature. We're still considering implementing it, but its syntax is too cumbersome for many usages. There's also some unrelated complexity -- implementing MERGE support does get easier thanks to the infrastructure added here, but a fair amount of work remains.
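
For comparison, the same single-row upsert expressed as a standard-style MERGE (sketch only; table and column names are made up) illustrates the cumbersomeness:

```sql
MERGE INTO counters AS t
USING (VALUES ('hits', 1)) AS s (k, n)
   ON t.k = s.k
WHEN MATCHED THEN
    UPDATE SET n = t.n + s.n
WHEN NOT MATCHED THEN
    INSERT (k, n) VALUES (s.k, s.n);
```

Seven lines of boilerplate for what ON CONFLICT expresses in one clause -- and in most implementations MERGE makes no concurrency guarantees on top of that.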


This seems MUCH better than MERGE for the common case.

I tried implementing some MERGE logic on MSSQL recently and at first it seemed great until I realised:

(a) maintaining/debugging a MERGE that uses most of its syntactical features is an absolute nightmare, involving psychically debugging 20+ lines of opaque code. I ended up copying & pasting pieces of the statement into temporary tables/variables and confirming the results that way, which made me question the value of MERGE: the performance was similar to the original code, but less flexible. (MERGE doesn't make available some of the values you need in some cases, so you need to pre-map data in those cases anyway, resulting in lots of similar code.)

(b) it doesn't save you from needing to know and use the correct locks, something many people don't seem to realise!

I then discovered this page, listing all kinds of issues with the MERGE implementation (even on MS SQL 2014, years after MERGE had been introduced): http://www.mssqltips.com/sqlservertip/3074/use-caution-with-...

I realised that even if I followed my debugging approach in (a), one day I was going to run into a problem with MERGE that couldn't be replicated in decomposed statements because MERGE was doing 'something' else. And of course, if you look at some of those issues, many of them suffer from the typical Microsoft "Won't fix and won't say why" attitude. In the end I decided to just keep the original/decomposed code, which was clearer and easier to work with.

MERGE is probably okay if you're staging data; I wouldn't use it for transactional processing. It is far too complicated. I really like that PG has focused on efficiently implementing the common use case and avoided the kitchen sink that is MERGE.


Please add MERGE support. I use this in Oracle fairly regularly, mostly for merging two datasets rather than single row upsert. It would be really helpful to have this in Postgres as well.


It's great that you guys implemented UPSERT but I hope MERGE support won't be considered as insignificant because of this feature.


I'm pretty sure it's not (going to be) considered insignificant.

Unfortunately that does not equate to resources (i.e. time by somebody sufficiently crazy^Wdetermined) for implementing it being available. A large part of postgres development is driven by individuals. Some of it on company time, but usually not most of it.


I'm a fan of MERGE in T-SQL. An "upsert" is just one use-case enabled by the statement. Granted for the relatively simple use-case of an "upsert", especially a single row, MERGE has incredibly cumbersome syntax.

But MERGE also, in my opinion, aligns better with set-based logic. I.e., I have two _sets_ of data that I want to merge together. In some cases I need to INSERT rows into the target, in other cases I need to UPDATE, and in some cases I might DELETE. The <output_clause> in T-SQL is also quite useful.


The T-SQL design is far more flexible and expressive. However, I don't think I've ever used it beyond the simple case.

And on the flip side, it's a lot more complicated to write, and difficult even to remember the syntax (at least so far).


The T-SQL design also has numerous bugs, and allows for many nontrivial race conditions (which appears not to be a well-known fact).

http://www.mssqltips.com/sqlservertip/3074/use-caution-with-...

Postgres tries to avoid "gotchas" like this. If SQL Server fixes all the issues listed above by the time Postgres releases its first version of MERGE, but Postgres's version actually works the first time because they spent enough time up front addressing concurrency issues, then IMO Postgres will have made the right call.

(And yes, if Microsoft is unable to ever fix all these issues and Postgres never releases the feature, I would still stand by this statement. I want to be able to rely on every feature in my database).


I'm not very familiar with those specific features in other DBMSes, but in MySQL's case "on duplicate" and "replace" are two very different things. The first is an upsert, while the other drops the old row if it exists and always makes a new one.

Both have different use cases.


Yet another reason why I absolutely love Postgres. It might have taken four years, but we finally got what many of us have been asking for. Nothing ruined my day more than having to write insert-update loops, I am beyond ecstatic for this.


You love Postgres because it takes four years to get requested features?


Because the devs pay attention, and take the time to do things right. See https://wiki.postgresql.org/wiki/UPSERT for a glimpse into the design that needed to go behind this.

I feel a lot of Postgres fans (myself included) put their "money" on Postgres circa late version 7 or early version 8, back when MySQL was the more featureful and performant of the two, while Postgres had the reputation for being more, shall we say, robust. (Remember, those were the days before InnoDB was the default in MySQL.)

The payout for investment in Postgres the past few years has been substantial -- native replication, true serializable transactions, foreign data wrappers, index-only scans, native JSON support, updatable views, and materialized views are among the features added to Postgres in the last 4 years. It's matured from being an "entry-level" RDBMS with few features that all work reliably, to a much more enterprise-friendly RDBMS with many features that still work reliably.


> late version 7 or early version 8, back when MySQL was the more featureful and performant of the two

MySQL was never more featureful. The reason I started migrating back in the 7.1 time frame (when TOAST tables were added and you could finally store more than 8K of text in a TEXT column) was the lack of sub-selects in MySQL.

Even aside of that, Postgres was far ahead when considering basic SQL support: stored procedures, views, subselects, check constraints, triggers, actually enforcing foreign key constraints and so on.

It was significantly slower than MySQL, but it also scaled much better under load. Back then, when you had low load, MySQL would be about twice as fast as PostgreSQL but then as the load increases, MySQL's performance would drop sharply and Postgres would stay consistent.

By now, MySQL has mostly caught up feature-wise, but there's still stuff left that Postgres just does better. Also, even plain ideological reasons (community project vs. oracle open-core project) would want me to stay with postgres.

I also have anecdotal evidence that MySQL still has serious issues in the robustness department which I've yet to see with postgres.


MariaDB, WebScaleSQL, Percona are just three MySQL forks that are completely open source in every single way. Likewise MySQL-Server is GPL so it definitely should be considered as open source.

And if MySQL wasn't robust then YouTube, Facebook, Twitter, Alibaba, LinkedIn etc wouldn't be using it for core parts of their infrastructure. It's definitely robust.

No doubt that PostgreSQL is better at MySQL in many areas though and probably could do with a self contained, single download PostgreSQL Cluster edition.


>And if MySQL wasn't robust then YouTube, Facebook, Twitter, Alibaba, LinkedIn etc wouldn't be using it for core parts of their infrastructure. It's definitely robust.

Either that or they have staff and infrastructure in place to deal with the lack of robustness. Yes, you need to be prepared to deal with corruption anyways, but the more robust your solution, the more time is left to deal with other things.

Over the years I have seen multiple instances of MySQL table corruption, index corruption and mysqldump exiting with a zero exit code after aborting mid-dump due to table corruption.

Of course I was prepared for this and I always had backups ready, but it was still time-consuming and annoying.

With Postgres I've yet to see any kind of data corruption even though my postgres usage is much heavier than my MySQL usage.

But this is why I said "anecdotal evidence": For me personally, Postgres has proven to be way more robust than MySQL. Is this my inability to properly administer MySQL? Is it me being unlucky with hardware (though Postgres is fine on the same hardware)? Is it me just being unlucky? I really don't know.


> And if MySQL wasn't robust then YouTube, Facebook, Twitter, Alibaba, LinkedIn etc wouldn't be using it for core parts of their infrastructure. It's definitely robust.

Popularity is not an argument for quality. See: crocs, Justin Bieber, PHP.

Those companies you mentioned, like many others, probably use it because they're locked in that technology, not because it's a superior one. Just like banks still use COBOL.

If you're interested on knowing why using MySQL isn't a good idea, there's this (highly opinionated, obviously) post about it: http://grimoire.ca/mysql/choose-something-else


As an enthusiastic wearer of Crocs I have to interject to point out that there are many different aspects to quality - being robust is just one of them. Being able to easily recruit experts, ease of use, availability and quality of tools in the ecosystem are other measures.


It took me a while to realise the thing you referred to had changed, which led me to google "Crocs experts".


I never said popularity was an argument for quality. You did.

What I am saying is that if MySQL wasn't robust then those companies simply wouldn't be using it, since at their scale any bug or weakness will manifest at a level far greater than, say, at a startup. And they have the skills, time and money to choose any technology they want, so I don't buy your argument that they are "locked in". Some, like Facebook and LinkedIn, have even created their own databases.

And I never said MySQL was a good idea. You did. I am saying that to claim it is not robust flies in the face of available evidence.


"I never said popularity was an argument for quality. You did."

I'd tend to think your statement:

"And if MySQL wasn't robust then YouTube, Facebook, Twitter, Alibaba, LinkedIn etc wouldn't be using it for core parts of their infrastructure. It's definitely robust."

..points out that many (popular) sites use it, and a prerequisite is robustness, which thesaurus-wise, sounds a lot like quality.

I get that

a) you didn't actually say it, and

b) you could mean something much more specific, such as "companies with many highly-starred, complex open source projects which have also re-written major parts of their tech stack, but chose to leave mysql in place".

I know that's a bit more verbose, but your statement just seems too obviously close to the other author's interpretation, given its ambiguity.


> many (popular) sites use it

threeseed's argument is not

    this handful of popular companies use it, therefore it is good
but rather

    this handful of companies, each of which is considered by some/many to be 'excellent' or 'top', use it, therefore it is good


>What I am saying is that if MySQL wasn't robust then those companies simply wouldn't be using it.

That conclusion does not follow from the premises: that some well known large companies use a technology in no way implies that it is robust.

You are assuming that large, well known companies always intrinsically select for high robustness in their tools, which is not necessarily the case. In fact, there are many well known public cases of the opposite.

It could be (for example) that large companies simply have the spare organizational capacity to deal with a lack of robustness. Perhaps they are locked in and find the ongoing cost of dealing with lack of robustness to be lower than the costs of switching to something more robust. Or it could be that they don't notice, for whatever reason, the lack of robustness. There could be many other possible explanations.


> And I never said MySQL was a good idea. You did. I am saying that to claim it is not robust flies in the face of available evidence.

I am glad you mention the word "evidence", since you have provided none so far. An assorted selection of popular, buzz-worthy sites does not qualify as evidence (or lack thereof) of the validity of the technologies they use and endorse.

For contrast, I linked to a really nice post that explains thoroughly why MySQL is not a good idea in the slightest. I would encourage you to read it.


I think the MySQL test suites aren't open source anymore (a gift from Oracle).


  > MySQL was never more featureful
Replication? Replication? Replication? UPSERTS? Also, MySQL supports multiple engines, so when talking about MySQL it makes sense to specify which engine you have in mind.


Multiple engines in the MySQL case are an anti-feature, at least from the MyISAM vs. InnoDB perspective. Far too often the wrong or arbitrary choices are made, mixing engines within the same database and causing unnecessary confusion and work. "That table is on this storage engine which does not support the operation we need."

I can see the case where a different storage engine supports a different data format or use case (columnar storage for example), but in an RDBMS case I would argue consistency outweighs convenience.


The context of my comment and the comment it responded to was the ~7.1 time-frame which was in 2000. My "never more featureful" comment was related to what was released in 2000.

Back then, there was just MyISAM (and maybe even still ISAM), no transactions, no replication, no upserts. Back then there was nothing that MySQL could do that Postgres couldn't.


If given the choice between doing it half-assed but quickly and doing it right using however much time is required to get there, the PostgreSQL team always chooses the latter approach.

This conservatism works well for me considering we're talking about a database here.

One of the contributors has written an interesting article explaining why upsert is difficult to get right (if by "right" you want it to complete reasonably quickly and without any chance of corrupting your data):

http://www.depesz.com/2012/06/10/why-is-upsert-so-complicate...


Postgres turned a corner around the 9.0 era and has been churning out huge features for years. Indexes on jsonb completely changed the game for me.

As you can see from the commit, too, this was not a trivial feature. So yes, I'm overall pleased.


The documentation covering the new feature has now built: http://www.postgresql.org/docs/devel/static/sql-insert.html


Postgres's documentation is one of the best I've seen. Thanks to everybody working on it!


For the backstory on why this has taken so long: http://www.depesz.com/2012/06/10/why-is-upsert-so-complicate...


I know it doesn't matter to some developers out there, but for me, the only thing left is the ability to add a column after another one. Even if it's only logical order, it's easier for me to read a table structure when fields are ordered nicely.


Same thing here, I would love this feature. I like to make my tables have a common structure:

    id
    created_at
    updated_at
    status
    fkey1_id
    fkey2_id
    fkey3_id
    date_field1
    date_field2
    column1
    column2
    column3
Being able to add a new foreign key column with the "rest" of them would be awesome.


Work on this started recently, but since it is an open source project there is no way to know if it will make it into 9.6 or not (the work was started too late to get into 9.5).

http://www.postgresql.org/message-id/flat/20141209174146.GP1...


Unsure what you mean. A subset of columns can be selected in any order, across tables. Were you referring to the select order used by "*"?


I know, but when I look at a table structure in pgAdmin, all newly added columns appear at the end. Of course this can be handled properly with "SELECT col2, col1, ...", or by creating a view, but not by any other means, as this wiki page [1] explains. During development, though, the ability to set a logical column order is somehow important to me.

[1]https://wiki.postgresql.org/wiki/Alter_column_position



Yes, I think it's about changing the default presentation order. That could be a feature of the client, psql, pgcli, pgadmin3, but it won't be portable across them unless it's some metadata stored inside the db. Is there any db with that feature?

Edit: munro answered that. PostgreSQL does it but it's expensive because it has to rewrite the table.


Yes, MySQL: ALTER TABLE t1 ADD COLUMN `count` SMALLINT(6) NOT NULL AFTER `lastname`


I'd love this too. I wanted materialised views and upsert, so here's hoping :)


So excited about upserts in Postgres!

This was the most requested feature in PostgreSQL: https://postgresql.uservoice.com/forums/21853-general


Which version of Postgres will this land in?


9.5, that is, upcoming!


Which will hopefully drop in September.


I am extremely happy to hear about this. We currently insert, catch the exception and update if necessary. That can lead us to race conditions in concurrent transactions. Thank you very much for your excellent work on Postgres. It continues to be an amazing product to work with.


I want to give a giant digital hug to all PG committers who made this happen.


Who decided the syntax? Why not use MERGE ... INTO ... USING ... ON ... WHEN like Oracle and MSSQL and apparently ANSI SQL [1]? I'm not aware of ON CONFLICT being standard.

[1] http://en.wikipedia.org/wiki/Merge_(SQL)


There's some extended discussion of the issue on the wiki[1]. A future full MERGE implementation might have slightly different semantics or performance characteristics than ON CONFLICT (which might also differ from other DBMS).

[1] https://wiki.postgresql.org/wiki/UPSERT#SQL_MERGE_syntax


Does the description seem like a two-pass operation to anyone else? Can anyone with more detailed knowledge explain why this can't be a single-pass "INSERT into a unique index, UPDATE if it fails" operation?


To understand the problem one first has to know that for better or worse postgres does not use index organized tables. Additionally this ought to work not only on a primary key but also on additional unique constraints. Including partial unique indexes, potentially over expressions (say lower(username) or something).

That makes some approaches harder.

The bigger problem is that "update if it fails" is harder than it sounds. When you find a conflicting row in the index, that row's creating transaction might not yet have committed. So you need to first wait for it to commit, and then retry. But that transaction could instead roll back, or delete the row again -- and suddenly there's no row to UPDATE. Thus you need to retry, right?

There's also some issues around avoiding deadlocks. If you "naively" just acquire a 'value lock' on the to-be-inserted row, you can easily get into deadlock territory if more than one row is inserted in one transaction. Such value locks have to be released after an attempt to update. Which then necessitates a looping attempt...

Does that start to explain the problems a bit?


I remember about 10 years ago using UPSERT in a Teradata database to do delta-diff updating of tables.

Basically, I was updating a table from an external file, and I wanted to add what was new, leave what was the same, and delete any records no longer in the external table. I used UPSERT to insert new rows and update a timestamp on existing rows, then afterwards deleted any rows whose timestamp was before the UPSERT operation. Because of the nature of the file, an external patch/diff approach was difficult, but the UPSERT operation worked like a breeze!


Damnit, I just wrote a sweet gem that handles this... ah, okay, but it requires an index constraint, whereas I needed to merge on arbitrary conditions. So I didn't totally waste my time.

Way to go Postgres team!


Is the gem open source? I checked your github and didn't see it, I think people would still find it useful!


I've been looking forward to this for ages! The solution we use at the moment for upsert is a create or replace rule on the table but it isn't great as you need to remember to update the rule whenever you change the table schema...

  CREATE OR REPLACE RULE "replace_row" AS
  ON INSERT TO my_table
  WHERE EXISTS(SELECT 1 FROM my_table WHERE id=NEW.id)
  DO INSTEAD
  (UPDATE my_table SET
  col1=NEW.col1,
  col2=NEW.col2,
  col3=NEW.col3
  WHERE id=NEW.id);


That unfortunately is not safe on several fronts.

For one, the WHERE EXISTS() will not see concurrent insertions by transactions that are still in progress, which means you'll still get constraint violation errors.

For another, rules generally have very surprising behaviour. E.g. with this RULE you'll get into trouble if "id" isn't passed in as a explicit value, but uses a non-deterministic DEFAULT or directly passed in expression. Every reference to NEW.id will not be replaced by the result of that expression, but rather with the expression itself. For the common serial id column (aka autoincrement) you'll have a separate nextval() call in each reference. Which can make this explode in pretty damn confusing ways.
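
A sketch of that multiple-evaluation hazard, using a hypothetical table to match the rule above:

```sql
CREATE TABLE my_table (id serial PRIMARY KEY, col1 text);

-- With the rule installed, this INSERT omits "id", so NEW.id is the
-- *expression* nextval('my_table_id_seq'), not one computed value.
INSERT INTO my_table (col1) VALUES ('x');
-- The rule's WHERE EXISTS (... id = NEW.id) and UPDATE ... WHERE id = NEW.id
-- each re-evaluate nextval(), so they compare against different ids.
```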


Unfortunately, there's no good all round solution for upsert in current postgres versions. For our use case (one logger daemon feeding results off a message queue in to a table in an idempotent way) this approach was the simplest and worked the best.


Can't you just do the "usual" looping pattern around a subtransaction? If events are inserted by a single callsite that shouldn't prove too onerous?

C.f. http://www.postgresql.org/docs/9.4/static/plpgsql-control-st...


Could have done, but coming from a MySQL background (i.e. where the REPLACE statement is supported), just crafting a simple replace rule was easier/faster to grok and implement. It also helped that the table schema was very unlikely to change much in the future -- so we didn't have to worry too much about remembering to keep the rule up to date (and the id column was defined as VARCHAR NOT NULL with no default assigned to it).


Does anyone know when this is expected to be released?


New versions of Postgres usually ship in September. Except 9.4, which slipped to December.


As someone that's been using PostgreSQL way before it was cool (since 2005), I'm super happy to see this. Previously I had to do EXISTS, IF FOUND UPDATE ELSE INSERT. Now I can just do upsert, so nice.

Very thankful to all the hard working Postgres devs. PostgreSQL was really good when I first started using it, and just keeps getting better. Glad to see the general community finally getting on the PostgreSQL bandwagon.

Now let's see if I can get more people to use internal PostgreSQL functions instead of constantly writing statements directly in their code.


This reminds me of a common idiom in the Pick-style databases that I've worked on...

    READU REC FROM FILE,KEY ELSE
        REC<1> = 'stuff'
        * Set other fields as appropriate for a brand new record - this is the 'insert'
    END
    REC<1> = 'blah'
    * Set other fields as appropriate for an update
    WRITE REC TO FILE,KEY ;* Write record and release locks
This can, of course, be wrapped in a transaction.


Seriously, IIRC the wiki page for this feature was like 5 years old. I'll have some statements I'll be very glad to update as soon as this is released.

I was so surprised when I ran into the lack of this feature as I'd had it on both MSSQL and MySQL.


Upserts are generally useful tools for applications that run in active-active multi-datacenter configurations. Have you guys explored the implications of this feature for such deployments on top of Postgres yet?


I come from a SQL Server background and recently jumped over to a Postgres shop. It seems like I came in at a really exciting time.

I was really missing SQL Server's powerful MERGE statement, so this is welcome news.


So to simplify, what I understand is

1. INSERT .. ON CONFLICT is an implementation UPSERT technique

2. MERGE is something meant to be different from UPSERT but has UPSERT feature?


Basically true, except that many implementations of MERGE are not concurrency-safe, which means you would have to loop anyway on unique violations -- making it no better than the CTE-based techniques that already exist in 9.4.
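
The CTE-based technique referred to here is the well-known writable-CTE upsert, sketched below against a hypothetical counters table. Note that it is NOT concurrency-safe: a racing insert can still raise a unique violation, so callers must be prepared to retry.

```sql
WITH upsert AS (
    UPDATE counters SET n = n + 1
    WHERE k = 'hits'
    RETURNING k
)
INSERT INTO counters (k, n)
SELECT 'hits', 1
WHERE NOT EXISTS (SELECT 1 FROM upsert);
```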


For anyone who has ever had to write scripts to import incremental batch data, CSV, etc this seems huge.


When is Postgres going to get built in fault tolerant multi-master support in any form?


I found engineering around the lack of this feature very upserting.

I'll get my coat.


Does it handle locking (aka no deadlocks) correctly?


There is a section in UPSERT wiki about this - https://wiki.postgresql.org/wiki/Value_locking#.22Unprincipl...

EDIT: rather the whole "Value Locking" page - https://wiki.postgresql.org/wiki/Value_locking


I think an UPSERT keyword would be more clear.


Oh happy day. PostgreSQL is the best.


Say goodbye to CRUD. Because 'create' and 'update' are syntactically and semantically the same, right?


It's about F*ing time


Yes! it's about bloody time.


Great! I just love it!


Slightly OT, why do we see these errors with Oracle's MERGE statements?

ORA-01779: cannot modify a column which maps to a non key-preserved table

ORA-30926: unable to get a stable set of rows in the source tables


Asking a support question on a feature announcement post for a completely unrelated product is EXTREMELY off topic.


It was not meant as a support question. It was in the context of MERGE/UPSERT that I was asking -- I wanted to know if a similar thing happens in PG.


In my crazy-ass opinion, if you need upserts, there are problems with your application logic and problems with your database.

Get off my lawn, etc.


Why bother posting if you're not going to explain yourself? It's 2015, nobody's going to be impressed by mere contrarianism.


Syncing data from a third-party API?


In a concurrent world, it's a reality you eventually have to deal with.



