
deadlock on MaxDB 7.6

Former Member

Hi,

We're experiencing a problem with one of our applications: it runs into a deadlock on the MaxDB database. The problem seems to be specific to MaxDB; our application (which supports several database vendors) runs fine on e.g. Oracle.

When the application runs, two processes access the same database table at the same time. The first process does batch-like processing, modifying a lot of rows in the table, each row in a separate transaction. The second process reads the table to gather statistics about the batch-like process and check its progress.
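Simplified, the two processes do roughly the following (a JDBC sketch, not our real code - the table "jobs", its columns, and the dataSource/idsToProcess variables are just illustrative):

import java.sql.*;
import java.util.List;
import javax.sql.DataSource;

class ProgressExample {

    // Process 1: batch-like writer - one transaction per row
    static void writer(DataSource dataSource, List<Long> idsToProcess) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps =
                     con.prepareStatement("UPDATE jobs SET status = ? WHERE id = ?")) {
                for (long id : idsToProcess) {
                    ps.setString(1, "DONE");
                    ps.setLong(2, id);
                    ps.executeUpdate();
                    con.commit();   // separate transaction for each row
                }
            }
        }
    }

    // Process 2: statistics reader, started on user request while process 1 runs
    static void reader(DataSource dataSource) throws SQLException {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT status, COUNT(*) FROM jobs GROUP BY status")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        }
    }
}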

When the second process is activated (by user request), a deadlock occurs. There are locks from both the first and the second process on the table, and they keep waiting for each other to release them. The second process, however, only performs read actions and does not modify data. It seems that MaxDB places shared read locks on the table, which is quite a different approach from the one e.g. Oracle uses. On Oracle we do not have this issue.

Changing the isolation level from READ_COMMITTED to READ_UNCOMMITTED of course helps, since the locks are then ignored, but we'd rather not change the isolation level: READ_COMMITTED is considered the safer option and gives more reliable data.

Does anyone have an idea how we can prevent these deadlocks? The most obvious approach would be to look at the shared read locks. Is it possible to have MaxDB not place those locks, and does doing so have serious consequences?

Some extra info about our environment:

Database: MaxDB 7.6 (installed with NetWeaver 2004s)

Isolation level: READ_COMMITTED

OS Platform: issue arises on both Windows and Linux

J2EE platform: issue arises on both JBoss and NetWeaver

Hope someone can help, thanks for replying.

Best regards,

Leon

Accepted Solutions (0)

Answers (1)

lbreddemann
Active Contributor

> When the second process is activated (by user request), a deadlock occurs. There are locks from both the first and the second process on the table, and they keep waiting for each other to release them. The second process, however, only performs read actions and does not modify data. It seems that MaxDB places shared read locks on the table, which is quite a different approach from the one e.g. Oracle uses. On Oracle we do not have this issue.

Correct, MaxDB (like many other DBMSs) uses share locks. (You may want to check the MaxDB documentation and the MaxDB Internals course material [here|http://maxdb.sap.com/training/] for details.)

Oracle, in turn, is very different here: it has a consistent-read facility that allows reading the committed state of data even while that data is being changed by another transaction.

> Changing the isolation level from READ_COMMITTED to READ_UNCOMMITTED of course helps, since the locks are then ignored, but we'd rather not change the isolation level: READ_COMMITTED is considered the safer option and gives more reliable data.

Isolation levels are not safer or less safe as such; they just differ in what data a reader may see and which effects can occur. So for your statistics run it may well be acceptable to read uncommitted data. Since you can choose the isolation level for each transaction you start, this could be an option.
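In JDBC, for example, you can lower the isolation level just for the statistics reader and keep READ COMMITTED everywhere else. A sketch (con is an open java.sql.Connection):

// Only the statistics reader runs with READ UNCOMMITTED; the batch
// writer keeps READ COMMITTED. A dirty reader requests no share locks,
// so it cannot take part in this deadlock.
con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
try {
    // ... run the statistics query here ...
} finally {
    // restore the default before the connection goes back to the pool
    con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
}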

> Does anyone have an idea how we can prevent these deadlocks? The most obvious approach would be to look at the shared read locks. Is it possible to have MaxDB not place those locks, and does doing so have serious consequences?

Well, yes - change the isolation level...

The current versions of MaxDB have no consistent read: writers have to wait until readers finish their read transactions.

Basically the problem here is not so much the locking behaviour of MaxDB, but that your application was obviously designed with Oracle in mind.

Still, it's a bad design - even on Oracle.

You wrote:

>"The first process is doing batch-like processing, modifying a lot of rows in the table, all in separate transactions for each row."

Why the heck do you do things row by row? BATCH processing means: process lots of data in one batch - not one by one (aka slow by slow).
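For illustration, with JDBC batch updates the writer can commit per chunk instead of per row (a sketch - the statement, the names, and the chunk size are made up, not from your post):

con.setAutoCommit(false);
try (PreparedStatement ps =
         con.prepareStatement("UPDATE jobs SET status = ? WHERE id = ?")) {
    int inBatch = 0;
    for (long id : idsToProcess) {
        ps.setString(1, "DONE");
        ps.setLong(2, id);
        ps.addBatch();
        if (++inBatch % 1000 == 0) {
            ps.executeBatch();
            con.commit();   // one transaction per chunk, not per row
            // this is also the natural place for the writer to
            // record its own progress (see below)
        }
    }
    ps.executeBatch();      // flush and commit the remainder
    con.commit();
}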

Moreover, the second process seems to be somewhat superfluous:

>The second process, however, only performs read actions and does not modify data.

You already know how far your first process has got - the first process itself can put out this information.

BTW: What does the statement for checking the progress look like?

As I understand your description, the deadlock shouldn't happen:


                     T1                         T2

T1 updates row1    set ex-lock on row1          |
                     |                          |
T2 reads table       |                        wait for s-lock on row1
                     |                          |
T1 commits         release ex-lock on row1    set s-lock on row1
                     |                          |
T2 reads table       |                        set s-lock on row2
                     |                          |
T1 updates row2    wait for ex-lock on row2     |
                     |                          |
T2 finished read   set ex-lock on row2        release all s-locks
                     |                          |
                     *                          *

As you see - if T2 correctly closes the read cursor after gathering the progress information, then there should never be a mutually exclusive wait, aka a deadlock.
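In JDBC terms that simply means: fetch the progress data completely, close the cursor, and end the read transaction right away. A sketch (query and names are illustrative):

con.setAutoCommit(false);
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT status, COUNT(*) FROM jobs GROUP BY status")) {
    while (rs.next()) {
        // consume the whole result set here ...
    }
}               // cursor and statement are closed here
con.commit();   // ... and the commit releases all share locks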

Can you post a demonstration setup to reproduce the issue? (CREATE TABLE/INDEX statements, some data, your SQL statements)

Regards,

Lars