
Insert data in 3 tables at a time using Postgres

I want to insert data into 3 tables with a single query. My tables look like this:

CREATE TABLE sample (
   id        bigserial PRIMARY KEY,
   lastname  varchar(20),
   firstname varchar(20)
);

CREATE TABLE sample1(
   user_id    bigserial PRIMARY KEY,
   sample_id  bigint REFERENCES sample,
   adddetails varchar(20)
);

CREATE TABLE sample2(
   id      bigserial PRIMARY KEY,
   user_id bigint REFERENCES sample1,
   value   varchar(10)
);

I get a key back from every insertion and I need to insert that key into the next table. My queries are:

insert into sample(firstname,lastname) values('fai55','shaggk') RETURNING id;
insert into sample1(sample_id, adddetails) values($id,'ss') RETURNING user_id;
insert into sample2(user_id, value) values($id,'ss') RETURNING id;

But if I run them as single queries, they just return the generated values to me and I cannot reuse them in the next query immediately.

How to achieve this?


Erwin Brandstetter

Use data-modifying CTEs:

WITH ins1 AS (
   INSERT INTO sample(firstname, lastname)
   VALUES ('fai55', 'shaggk')
-- ON     CONFLICT DO NOTHING         -- optional addition in Postgres 9.5+
   RETURNING id AS sample_id
   )
, ins2 AS (
   INSERT INTO sample1 (sample_id, adddetails)
   SELECT sample_id, 'ss' FROM ins1
   RETURNING user_id
   )
INSERT INTO sample2 (user_id, value)
SELECT user_id, 'ss2' FROM ins2;

Each INSERT depends on the one before. SELECT instead of VALUES makes sure nothing is inserted in subsidiary tables if no row is returned from a previous INSERT. (Since Postgres 9.5+ you might add an ON CONFLICT.)
It's also a bit shorter and faster this way.
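
If the client also needs the generated keys back, a minimal variation of the above (my sketch, not part of the original query) is to add a RETURNING clause to the outer INSERT. Note that RETURNING on an INSERT can only reference columns of its own target table, here sample2:

WITH ins1 AS (
   INSERT INTO sample (firstname, lastname)
   VALUES ('fai55', 'shaggk')
   RETURNING id AS sample_id
   )
, ins2 AS (
   INSERT INTO sample1 (sample_id, adddetails)
   SELECT sample_id, 'ss' FROM ins1
   RETURNING user_id
   )
INSERT INTO sample2 (user_id, value)
SELECT user_id, 'ss2' FROM ins2
RETURNING id, user_id;   -- keys of the newly inserted sample2 row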

Typically, it's more convenient to provide complete data rows in one place:

WITH data(firstname, lastname, adddetails, value) AS (
   VALUES                              -- provide data here
      ('fai55', 'shaggk', 'ss', 'ss2') -- see below
    , ('fai56', 'XXaggk', 'xx', 'xx2') -- works for multiple input rows
       --  more?                      
   )
, ins1 AS (
   INSERT INTO sample (firstname, lastname)
   SELECT firstname, lastname          -- DISTINCT? see below
   FROM   data
   -- ON     CONFLICT DO NOTHING       -- UNIQUE constraint? see below
   RETURNING firstname, lastname, id AS sample_id
   )
, ins2 AS (
   INSERT INTO sample1 (sample_id, adddetails)
   SELECT ins1.sample_id, d.adddetails
   FROM   data d
   JOIN   ins1 USING (firstname, lastname)
   RETURNING sample_id, user_id
   )
INSERT INTO sample2 (user_id, value)
SELECT ins2.user_id, d.value
FROM   data d
JOIN   ins1 USING (firstname, lastname)
JOIN   ins2 USING (sample_id);

db<>fiddle here

You may need explicit type casts in a stand-alone VALUES expression - as opposed to a VALUES expression attached to an INSERT where data types are derived from the target table. See:

Casting NULL type when updating multiple rows
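
For example, a sketch with the data CTE from above (column types assumed from the table definitions); casting the columns of the first row is enough, the remaining rows are coerced to the same types:

WITH data(firstname, lastname, adddetails, value) AS (
   VALUES
      ('fai55'::varchar(20), 'shaggk'::varchar(20), 'ss'::varchar(20), 'ss2'::varchar(10))
    , ('fai56',              'XXaggk',              'xx',              'xx2')
   )
SELECT * FROM data;   -- in the full query, the ins1 / ins2 CTEs follow here instead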

If multiple rows can come with identical (firstname, lastname), you may need to fold duplicates for the first INSERT:

...
INSERT INTO sample (firstname, lastname)
SELECT DISTINCT firstname, lastname FROM data
...

You could use a (temporary) table as data source instead of the CTE data.
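
A minimal sketch of that variant (keeping the name data so the rest of the query works unchanged):

CREATE TEMP TABLE data (
   firstname  varchar(20),
   lastname   varchar(20),
   adddetails varchar(20),
   value      varchar(10)
);

INSERT INTO data VALUES
   ('fai55', 'shaggk', 'ss', 'ss2')
 , ('fai56', 'XXaggk', 'xx', 'xx2');

-- Then run the query above without its data CTE:
-- ins1, ins2 and the final INSERT now read from the temp table data.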

It would probably make sense to combine this with a UNIQUE constraint on (firstname, lastname) in the table and an ON CONFLICT clause in the query.
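
A sketch of what that could look like (the constraint name is my choice); keep in mind that with DO NOTHING, rows skipped due to a conflict are not returned by RETURNING, which is what the first related link below is about:

ALTER TABLE sample ADD CONSTRAINT sample_firstname_lastname_uni
   UNIQUE (firstname, lastname);

...
, ins1 AS (
   INSERT INTO sample (firstname, lastname)
   SELECT DISTINCT firstname, lastname FROM data
   ON     CONFLICT (firstname, lastname) DO NOTHING
   RETURNING firstname, lastname, id AS sample_id
   )
...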

Related:

How to use RETURNING with ON CONFLICT in PostgreSQL?

Is SELECT or INSERT in a function prone to race conditions?


Thanks for the reply. Can I add a transaction rollback if any insertion fails? If yes, how can I do that?
This is a single SQL statement. One can bundle several statements into a single transaction, but one cannot split this one up. Also, what Denis says in his comment. And I appended some links to my answer.
@mmcrae: Yes, you can. Related: dba.stackexchange.com/questions/151199/…
@No_name: Sure, various ways. I suggest you ask a new question with defining details. You can always link here for context, or drop a comment here linking back to get my attention.
@AdamHughes: Indeed, sample_id and user_id got mixed up in multiple places. The example has rather misleading column names. Fixed, clarified, and added a fiddle.
a_horse_with_no_name

Something like this:

with first_insert as (
   insert into sample (firstname, lastname)
   values ('fai55', 'shaggk')
   RETURNING id
),
second_insert as (
  insert into sample1 (sample_id, adddetails)
  values
  ( (select id from first_insert), 'ss')
  RETURNING user_id
)
insert into sample2 (user_id, value)
values
( (select user_id from second_insert), 'ss');

As the generated id from the insert into sample2 is not needed, I removed the returning clause from the last insert.


I like this approach with the select inside values. It's more consistent, and you can also drop the RETURNING aliases inside the WITH statements.
Denis de Bernardy

Typically, you'd use a transaction to avoid writing complicated queries.

http://www.postgresql.org/docs/current/static/sql-begin.html

http://dev.mysql.com/doc/refman/5.7/en/commit.html
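
For example, a minimal sketch (not part of the original answer) using an anonymous PL/pgSQL block; a DO statement runs as a single transaction, so if any of the inserts fails, all of them are rolled back:

DO
$$
DECLARE
   _sample_id bigint;
   _user_id   bigint;
BEGIN
   INSERT INTO sample (firstname, lastname)
   VALUES ('fai55', 'shaggk')
   RETURNING id INTO _sample_id;       -- keep the generated key in a variable

   INSERT INTO sample1 (sample_id, adddetails)
   VALUES (_sample_id, 'ss')
   RETURNING user_id INTO _user_id;

   INSERT INTO sample2 (user_id, value)
   VALUES (_user_id, 'ss2');
END
$$;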

You could also use a CTE, assuming your Postgres tag is correct. For instance:

with sample_ids as (
  insert into sample (firstname, lastname)
  values ('fai55', 'shaggk')
  RETURNING id
), sample1_ids as (
  insert into sample1 (sample_id, adddetails)
  select id, 'ss'
  from sample_ids
  RETURNING sample_id, user_id
)
insert into sample2 (user_id, value)
select user_id, 'val'
from sample1_ids
RETURNING id, user_id;

Thanks. How would I achieve a transaction with this query, so that if any insert fails I can roll back?
Then you start everything over again (after correcting the queries, of course), since the entire transaction, or the CTE, would get rolled back. Btw, if your inserts are occasionally failing, you're probably doing something wrong. The only case where it's reasonable for an insert to fail is an upsert scenario that runs into duplicate unique keys during concurrent transactions, and even then you could take an advisory lock or a table lock if you need to make things bulletproof.
DaImTo

You could create an AFTER INSERT trigger on the sample table to insert into the other two tables.

The only issue I see with doing this is that you won't have a way of inserting adddetails; it will always be empty, or in this case 'ss'. There is no way to insert a column into sample that's not actually in the sample table, so you can't send it along with the initial insert.
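
A rough sketch of such a trigger (function and trigger names are my own; as noted, adddetails and value have to be hard-coded or left NULL because they are not columns of sample):

CREATE OR REPLACE FUNCTION sample_after_insert()
  RETURNS trigger
  LANGUAGE plpgsql AS
$$
DECLARE
   _user_id bigint;
BEGIN
   INSERT INTO sample1 (sample_id, adddetails)
   VALUES (NEW.id, 'ss')                    -- hard-coded, see the caveat above
   RETURNING user_id INTO _user_id;

   INSERT INTO sample2 (user_id, value)
   VALUES (_user_id, 'ss');

   RETURN NEW;
END
$$;

CREATE TRIGGER sample_ai
AFTER INSERT ON sample
FOR EACH ROW EXECUTE PROCEDURE sample_after_insert();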

Another option would be to create a stored procedure to run your inserts.
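
For instance, a minimal sketch of such a function (the name and signature are my own invention); each call is a single statement, so a failure in any insert rolls back all three:

CREATE OR REPLACE FUNCTION insert_all(_firstname  varchar
                                    , _lastname   varchar
                                    , _adddetails varchar
                                    , _value      varchar)
  RETURNS bigint
  LANGUAGE plpgsql AS
$$
DECLARE
   _sample_id bigint;
   _user_id   bigint;
BEGIN
   INSERT INTO sample (firstname, lastname)
   VALUES (_firstname, _lastname)
   RETURNING id INTO _sample_id;

   INSERT INTO sample1 (sample_id, adddetails)
   VALUES (_sample_id, _adddetails)
   RETURNING user_id INTO _user_id;

   INSERT INTO sample2 (user_id, value)
   VALUES (_user_id, _value);

   RETURN _sample_id;   -- hand the generated sample.id back to the caller
END
$$;

-- Usage: SELECT insert_all('fai55', 'shaggk', 'ss', 'ss2');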

You have the question tagged both mysql and postgresql. Which database are we talking about here?