
What is the difference between Non-Repeatable Read and Phantom Read?

I have read the Isolation (database systems) article from Wikipedia, but I still have a few doubts. In the example below, which will happen: a non-repeatable read or a phantom read?

Transaction A

SELECT ID, USERNAME, accountno, amount FROM USERS WHERE ID=1

OUTPUT:

1----MIKE------29019892---------5000

Transaction B

UPDATE USERS SET amount=amount+5000 where ID=1 AND accountno=29019892;
COMMIT;

Transaction A

SELECT ID, USERNAME, accountno, amount FROM USERS WHERE ID=1

Another doubt: in the above example, which isolation level should be used, and why?


dade

From Wikipedia (which has great and detailed examples for this):

A non-repeatable read occurs, when during the course of a transaction, a row is retrieved twice and the values within the row differ between reads.

and

A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first.

Simple examples:

User A runs the same query twice.

In between, User B runs a transaction and commits.

Non-repeatable read: The row that user A queried has a different value the second time.

Phantom read: All the rows in the query have the same values before and after, but a different set of rows is being selected (because B has deleted or inserted some). Example: select sum(x) from table; will return a different result if rows have been added or deleted, even though none of the rows that existed at the first read were themselves updated.
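
A minimal sketch of both cases, assuming a hypothetical table t with columns id and x (the values are illustrative):

-- User A, first read
SELECT sum(x) FROM t;                      -- say this returns 100

-- User B commits in between; either change affects A's second read, but differently:
UPDATE t SET x = x + 1 WHERE id = 5;       -- changes a row A already read  -> non-repeatable read
INSERT INTO t (id, x) VALUES (99, 10);     -- adds a row matching A's query -> phantom read
COMMIT;

-- User A, same query again, same transaction
SELECT sum(x) FROM t;                      -- a different total either way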

In the above example, which isolation level should be used?

What isolation level you need depends on your application. There is a high cost to a "better" isolation level (such as reduced concurrency).

In your example, you won't have a phantom read, because you select only from a single row (identified by primary key). You can have non-repeatable reads, so if that is a problem, you may want to have an isolation level that prevents that. In Oracle, transaction A could also issue a SELECT FOR UPDATE, then transaction B cannot change the row until A is done.


I don't really understand the logic of such a syntax... A NON-repeatable read occurs when the read is repeated (and a different value obtained)??!...
@serhio "non-repeatable" refers to the fact that you can read a value once and get x as the result, and then read again and get y as the result, so you cannot repeat (non-repeatable) the same results from two separate queries of the same row, because that row value was updated in between reads.
Both sound the same to me
The difference is that when you do count(*) from table and get back first 42 and then 43 that is NOT a non-repeatable read, because for the 42 rows you selected the first time, you got back the same data the second time. So there was no row retrieved twice with different values. But it is still a phantom read, because you got back an additional row. So all the row values are the same individually, but you are now selecting different rows. @sn.anurag
The difference is that a non-repeatable read returns different values for the same logical row. (For example, if the primary key is employee_id, then a certain employee may have different salaries in the two results.) A phantom read returns two different sets of rows, but for every row that appears in both sets, the column values are the same.
BateTech

A simple way I like to think about it is:

Both non-repeatable and phantom reads have to do with data modification operations from a different transaction, which were committed after your transaction began, and then read by your transaction.

Non-repeatable reads are when your transaction reads committed UPDATES from another transaction. The same row now has different values than it did when your transaction began.

Phantom reads are similar but when reading from committed INSERTS and/or DELETES from another transaction. There are new rows or rows that have disappeared since you began the transaction.

Dirty reads are similar to non-repeatable and phantom reads, but relate to reading UNCOMMITTED data, and occur when an UPDATE, INSERT, or DELETE from another transaction is read, and the other transaction has NOT yet committed the data. It is reading "in progress" data, which may not be complete, and may never actually be committed.
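
A minimal sketch of the dirty-read case, reusing the USERS table and values from the question (READ UNCOMMITTED shown in SQL Server syntax; other databases use similar statements):

-- Transaction 1: update in progress, not committed yet
BEGIN TRANSACTION;
UPDATE USERS SET amount = amount + 5000 WHERE ID = 1;

-- Transaction 2: allowed to read uncommitted data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT amount FROM USERS WHERE ID = 1;   -- sees 10000, a dirty read

-- Transaction 1: rolls back, so the value transaction 2 saw was never committed
ROLLBACK;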


It has to do with transaction isolation levels and concurrency. Using the default isolation level, you will not get dirty reads, and in most cases you want to avoid dirty reads. There are isolation levels or query hints that will allow dirty reads, which in some cases is an acceptable trade-off in order to achieve higher concurrency, or is necessary due to an edge case, such as troubleshooting an in-progress transaction from another connection. It is good that the idea of a dirty read doesn't pass the "smell test" for you, because as a general rule they should be avoided, but they do have a purpose.
@PHPAvenger here is a use case for the READ UNCOMMITTED isolation level: there is always a possibility of encountering a deadlock between a select and an update query (explained here). If the select query is too complex to create a covering index for, then in order to avoid deadlocks you may want to use the READ UNCOMMITTED isolation level and accept the risk of dirty reads, but how often do you roll back transactions to worry about those dirty reads not being permanent?!
@petrica.martinescu the issues caused by dirty reads are NOT just about whether or not a transaction is rolled back. Dirty reads can return very inaccurate results depending on how data in pending transactions has been modified. Imagine a transaction that performs a series of several deletes, updates, and/or inserts. If you read the data in the middle of that transaction using "read uncommitted", it is incomplete. Snapshot isolation level (in SQL Server) is a much better alternative to read uncommitted. A valid use case for read uncommitted isolation level in a production system is rare IMO.
@DiponRoy great question. The locking implemented if using repeatable read (RR) isolation should prevent deletes from occurring on rows that have been selected. I've seen varying definitions of the 2 iso levels over the years, mainly saying phantom is a change in the collection/# rows returned and RR is the same row being changed. I just checked the updated MS SQL documentation says that deletes can cause non-RR (docs.microsoft.com/en-us/sql/odbc/reference/develop-app/… ) so I think it would be safe to group deletes in the RR category too
@anir yes inserts and deletes are included in dirty reads. Example: start a transaction, insert 2 of 100 invoice lines on connection a, now connection b reads those 2 lines before the trx is committed and before the other 98 lines are added, and so doesn't include all info for the invoice. This would be a dirty read involving an insert.
Vlad Mihalcea

The Non-Repeatable Read anomaly looks as follows:

https://i.stack.imgur.com/iPI0C.png

Alice and Bob start two database transactions. Bob reads the post record, and the title column value is Transactions. Alice modifies the title of that post record to the value ACID. Alice commits her database transaction. If Bob re-reads the post record, he will observe a different version of this table row.

The Phantom Read anomaly can happen as follows:

https://i.stack.imgur.com/aCtew.png

Alice and Bob start two database transactions. Bob reads all the post_comment records associated with the post row with the identifier value of 1. Alice adds a new post_comment record associated with the post row having the identifier value of 1. Alice commits her database transaction. If Bob re-reads the post_comment records having the post_id column value equal to 1, he will observe a different version of this result set.

So, while the Non-Repeatable Read applies to a single row, the Phantom Read is about a range of records which satisfy a given query filtering criteria.
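
A minimal sketch of the phantom case above, assuming the post_comment table from the diagram with columns post_id and review (the review column and values are illustrative):

-- Bob, inside his transaction
SELECT * FROM post_comment WHERE post_id = 1;   -- returns, say, one row

-- Alice, in her own transaction
INSERT INTO post_comment (post_id, review) VALUES (1, 'Great article!');
COMMIT;

-- Bob, same transaction, same query
SELECT * FROM post_comment WHERE post_id = 1;   -- now returns two rows: a phantom row appeared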


Can a Phantom Read contain multiple non-repeatable reads?
There's no inclusion operation between these anomalies. The former is about range scans while the latter is about individual records.
Would non repeatable read not cause the lost update problem when Bob tries to update the value based on his last read value?
Subhadeep Ray

Read phenomena

Dirty reads: read UNCOMMITTED data from another transaction

Non-repeatable reads: read COMMITTED data from an UPDATE query from another transaction

Phantom reads: read COMMITTED data from an INSERT or DELETE query from another transaction

Note: DELETE statements from another transaction can also, with very low probability, cause non-repeatable reads in certain cases. It happens when the DELETE statement happens to remove the very row your current transaction was querying. But this is a rare case, and far less likely to occur in a database whose tables each contain millions of rows. Tables containing transaction data usually have high data volume in any production environment.

We may also observe that UPDATEs are a more frequent operation in most use cases than actual INSERTs or DELETEs (in such cases only the danger of non-repeatable reads remains; phantom reads are not possible). This is why UPDATEs are treated differently from INSERT/DELETE, and the resulting anomaly is also named differently.

There is also an additional processing cost associated with handling INSERTs and DELETEs, compared to just handling UPDATEs.

Benefits of different isolation levels

READ_UNCOMMITTED prevents nothing. It's the zero isolation level

READ_COMMITTED prevents just one, i.e. Dirty reads

REPEATABLE_READ prevents two anomalies: Dirty reads and Non-repeatable reads

SERIALIZABLE prevents all three anomalies: Dirty reads, Non-repeatable reads and Phantom reads

Then why not just make every transaction SERIALIZABLE? The answer: the SERIALIZABLE setting makes transactions very slow, which we again don't want.

In fact, transaction time consumption ranks as follows:

SERIALIZABLE > REPEATABLE_READ > READ_COMMITTED > READ_UNCOMMITTED

So the READ_UNCOMMITTED setting is the fastest.
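
As a minimal sketch, the level is typically chosen per transaction with a statement like the following (SQL Server style syntax; MySQL and PostgreSQL accept a very similar SET TRANSACTION statement):

-- Pay for a stricter level only where the business rule needs it
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT ID, USERNAME, accountno, amount FROM USERS WHERE ID = 1;
-- ... re-reading the same row later in this transaction returns the same values ...
COMMIT;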

Summary

We need to analyze the use case and choose an isolation level that optimizes transaction time while preventing the anomalies that matter.

Note that a database may ship with REPEATABLE_READ as its default setting. Admins and architects may favor a particular default in order to balance anomaly prevention against the performance of the platform.


Can both UPDATE and DELETE cause non-repeatable reads, or is it only UPDATE?
Both UPDATE and DELETE can cause non-repeatable reads.
Actually, we can summarize it this way: on average, a random DELETE statement executed by another transaction on the same database has a very low probability of causing a non-repeatable read for the current transaction, but the same DELETE has a 100% chance of causing a phantom read. Looked at that way, my writing above is a bit imprecise if you take it word for word, but I intentionally wrote it this way to make things clearer for the reader.
+1 for a simple and easy-to-understand explanation. However, I think most databases (Oracle, MySQL) have a default isolation level of Read Committed, and Postgres probably uses a default of repeatable_read.
@akila - I am lying. ;-) Like I have already mentioned. :-) I am mentioning the boundary case.
egraldlo

There is a difference in implementation between the isolation levels that prevent these two phenomena.
To prevent a non-repeatable read, row locking is needed.
To prevent a phantom read, scoped (range) locking is needed, or even table locking.
Both can be implemented using the two-phase locking protocol.


To implement repeatable read or serializable, there is no need to use row-locking.
Jeffrey Kemp

In a system with non-repeatable reads, the result of Transaction A's second query will reflect the update in Transaction B - it will see the new amount.

In a system that allows phantom reads, if Transaction B were to insert a new row with ID = 1, Transaction A will see the new row when the second query is executed; i.e. phantom reads are a special case of non-repeatable read.


I don't think the explanation of a phantom read is correct. You can get phantom reads even if uncommitted data is never visible. See the example on Wikipedia (linked in the comments above).
Erwin Smout

The accepted answer indicates most of all that the so-called distinction between the two is actually not significant at all.

If "a row is retrieved twice and the values within the row differ between reads", then they are not the same row (not the same tuple in correct RDB speak) and it is then indeed by definition also the case that "the collection of rows returned by the second query is different from the first".

As to the question "which isolation level should be used", the more your data is of vital importance to someone, somewhere, the more it will be the case that Serializable is your only reasonable option.


BartoszKP

I think there are some differences between non-repeatable read and phantom read.

Non-repeatable read: there are two transactions, A and B. If B could see A's uncommitted modifications, a dirty read might happen, so we only let B see A's modifications after A commits.

That creates a new issue: because B sees A's modifications only after A commits, A may modify the value of a row that B is working with. If B reads that row again, B gets a new value that differs from the first read; we call this a non-repeatable read. To deal with this issue, we let B remember something (I'm not sure exactly what is remembered) when B starts.

Thinking about that solution, we can notice yet another issue: because B remembers something, whatever happens in A no longer affects B. But if B wants to insert some data into the table and first checks the table to make sure the record does not exist, while that data has already been inserted by A, an error may occur. We call this a phantom read.


sn.anurag

non-repeatable read is an isolation level and phantom read (reading committed value by other transactions) is a concept (type of read e.g. dirty read or snapshot read). Non-repeatable read isolation level allows phantom read but not dirty reads or snapshot reads.


Don Smith

Both non-repeatable reads and phantom reads result from one transaction T1 seeing changes from another transaction T2 that commits before T1 is complete. The difference is that a non-repeatable read returns different values for the same logical row. (For example, if the primary key is employee_id, then a certain employee may have different salaries in the two results.) A phantom read returns two different sets of rows, but for every row that appears in both sets, the column values are the same.
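
A minimal sketch of that employee example, assuming a hypothetical employees table with employee_id, department, and salary columns:

-- Transaction T1, first read
SELECT employee_id, salary FROM employees WHERE department = 'SALES';

-- Transaction T2 commits one of these in between:
UPDATE employees SET salary = 60000 WHERE employee_id = 7;                            -- T1's re-read shows a new salary for employee 7: non-repeatable read
INSERT INTO employees (employee_id, department, salary) VALUES (9, 'SALES', 40000);   -- T1's re-read shows an extra row: phantom read
COMMIT;

-- Transaction T1, second read: same query, different result depending on which change was committed
SELECT employee_id, salary FROM employees WHERE department = 'SALES';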