
Select and update with lock exclusive

Former Member
0 Kudos

Hello Everybody

In our application we have a table autonum to handle unique keys for all other tables.

In autonum we have a column "SUCHBEGRIFF" CHAR(40) (the unique key) and a column "WERT" FIXED(10,0).

Suchbegriff has values like Rechnungsnr, Auftragsnr, Bestellnr, ...

Example:

Before inserting a record into table rechnungen we do:

Select wert from autonum where suchbegriff = "Rechnungsnr" with lock exclusive.

l_rechnrneu = wert + 1

update autonum set wert = ?l_rechnrneu where suchbegriff = "Rechnungsnr"

commit

(then l_rechnrneu is used for an insert into rechnungen command)

This technique works for all 250 tables in our application.

We have about 400 concurrent users working with MaxDB 7.6 via ODBC 7.6.

No problems for 2 years!

Now we have started some background jobs from an XP workstation.

We have scheduled 5 jobs, starting every 5 minutes at the same time (same user, same ODBC connection).

Each job inserts records into a table joblogs and therefore needs a unique JOBLOGNR from autonum.

Now we run into problems 2 or 3 times a day.

Sometimes the background jobs are running (waiting?) without inserting a record into joblogs (deadlock?).

And even worse:

Sometimes the insert into joblogs fails with "duplicate key"??

We don't know where to begin. A MaxDB problem? A workstation problem?

Any help is welcome.

Best regards

Albert

Accepted Solutions (0)

Answers (1)


lbreddemann
Active Contributor
0 Kudos

> Before inserting a record into table rechnungen we do:

> Select wert from autonum where suchbegriff = "Rechnungsnr" with lock exclusive.

> l_rechnrneu = wert + 1

> update autonum set wert = ?l_rechnrneu where suchbegriff = "Rechnungsnr"

> commit

> (then l_rechnrneu is used for an insert into rechnungen command)

Why do you do that?

This is a programmed guarantee that your workload won't be able to scale.

> This technique works for all 250 tables in our application.

> We have about 400 concurrent users working with MaxDB 7.6 via ODBC 7.6.

> No problems for 2 years!

Ok, there are 400 users logged on at the same time. But they are not doing the same inserts at the same time. Otherwise you would have faced problems much earlier.

> Now we have started some background jobs from an XP workstation.

> We have scheduled 5 jobs, starting every 5 minutes at the same time (same user, same ODBC connection).

> Each job inserts records into a table joblogs and therefore needs a unique JOBLOGNR from autonum.

Well, that's what I meant - a little increase in the workload and this design shows its limitations.

> Now we run into problems 2 or 3 times a day.

> Sometimes the background jobs are running (waiting?) without inserting a record into joblogs (deadlock?).

Do you see any -60 errors? If not, there are no deadlocks.

You see, a 'deadlock' is just one possibility - but not the only one - where it may look like the database isn't doing anything anymore.

But fortunately, MaxDB recognizes deadlocks and cancels one of the affected sessions to resolve it.

> And even worse:

> Sometimes the insert into joblogs fails with "duplicate key"??

> We don't know where to begin. A MaxDB problem? A workstation problem?

Coding bug in the application.

Perhaps one of your inserts had to be rolled back, but your application did not handle this correctly and fetched the same ID twice.

Without having the option to see the DB and the coding, it'll be quite hard to tell what is going wrong here.

Anyhow, about your 'homebrew number provider' - why don't you use sequences instead?

Where is the benefit for you here? With sequences you'll get unique numbers without locking. Every time, and very fast.
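
For illustration, a minimal sketch of how a sequence could replace the autonum lookup for the joblog numbers (the sequence name and the jobname column are made up for the example, not taken from your application):

CREATE SEQUENCE seq_joblognr INCREMENT BY 1 START WITH 1

-- every session gets its own value immediately, without an exclusive lock:

SELECT seq_joblognr.NEXTVAL FROM DUAL

-- or use it directly in the insert:

INSERT INTO joblogs (joblognr, jobname) VALUES (seq_joblognr.NEXTVAL, 'testjob')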

regards,

Lars

Former Member
0 Kudos

Hello Lars

Thank you for your answer.

The main reason for this technique is that we use speaking IDs in our tables.

We support about 50 internal companies with our application.

So the company number is part of the ID in invoices, orders, ...

Suchbegriff rechnr4711 wert = 471100004999

Suchbegriff rechnr4812 wert = 471210440499

In other tables we have speaking ID groups for different business parts

(cleaning, security, maintenance, catering, ...).

We have to sync special subsystems for special companies or business parts with the central data.

(all invoices for catering go into a special controlling tool)

(some orders for maintenance come from customer applications)

Sometimes we sell subcompanies with all their associated data!

Sometimes we buy companies with all their associated data!

...

...

It may not be the best way to handle the variety of our group, but it has worked for years.

At special times we have about 300 users doing inserts into the same table!

Payroll data for 24,000 employees, for example (from the 1st to the 8th day of the month).

40,000 invoices with n line items between the 8th and 15th day of the month.

...

...

Until now, no problem.

We did tests of this technique by inserting more than 10,000 records into the same table with parallel batch jobs, each fetching an ID for the same "suchbegriff" from autonum. No problems, no duplicate keys.

We don't use rollback when fetching log IDs from autonum.

Only the parallel batch jobs from our workstation fetching a joblog ID cause problems (sometimes).

Best regards

Albert

lbreddemann
Active Contributor
0 Kudos

> The main reason for this technique is that we use speaking IDs in our tables.

> We support about 50 internal companies with our application.

> So the company number is part of the ID in invoices, orders, ...

> Suchbegriff rechnr4711 wert = 471100004999

> Suchbegriff rechnr4812 wert = 471210440499

> In other tables we have speaking ID groups for different business parts

> It may not be the best way to handle the variety of our group, but it has worked for years.

Gosh - that's information overloading at its best...

What do you do when the value of an invoice has to be changed? Update the primary key of the table?

That does not mean that this design is not the cause of your trouble.

> At special times we have about 300 users doing inserts into the same table!

And you sync them to do the inserts at the very same microsecond?

Anyhow, even if there are only a few users doing inserts with the same kind of ID, they will face wait times.

> We did tests of this technique by inserting more than 10,000 records into the same table with parallel batch jobs, each fetching an ID for the same "suchbegriff" from autonum. No problems, no duplicate keys.

You should have seen wait times for the update of your "number" table and for the lock necessary for it.

Anyhow, as I wrote, your technique should not produce duplicate keys.

Unless: you use the wrong isolation level, one of the update transactions on the "number table" failed, or some data in the target tables was changed without going through your "number table".

> We don't use rollback when fetching log IDs from autonum.

So what do you do when the connection fails, the database crashes, or the client application hits an error?

You use rollback. There is no way to avoid it.

Your application gets a "duplicate key" error - the database performs a rollback of your last action. What does your application do then? Commit?

> Only the parallel batch jobs from our workstation fetching a joblog ID cause problems (sometimes).

At least, that's what you've seen until now.

OK, without more detailed information we cannot tell too much about the real cause of the problem.

Try to figure out what the duplicate keys have in common.

Try to reproduce the behavior.

Check whether any rollbacks happen when your application uses the database.

What isolation level do you use?

regards,

Lars

Former Member
0 Kudos

Hello Lars

Thank you again.

>Gosh - that's information overloading at its best...

We call this needed information.

What does our SAP system do? We have 1 mandt, 60 accounting areas, and about 200 werks!

Of course SAP uses internal unique keys for all tables, but we configured different "number intervals" for each werks.

So the system has to do something similar: an automatic internal unique key, plus the next number from a special number range for this werks.

If we sell a company, we know which material numbers, which parts lists, ... we have to extract.

We identify them by the speaking numbers.

If we buy a company, we can load the needed data into our system.

We had to handle these situations more than 8 times in the last 5 years. Everything worked very successfully.

So we decided to do something similar with our own applications and our second, non-SAP MaxDB database.

>What do you do when the value of an invoice has to be changed? Update the primary key of the table?

Update rechnungen set amount = xyz,...... where rechnungnr = nnn

We never change the rechnungsnr; if the invoice was wrong, we produce a credit note for it and then write a new invoice.

>That does not mean that this design is not the cause of your trouble.

> At special times we have about 300 users doing inserts into the same table!

>And you sync them to do the inserts at the very same microsecond?

>Anyhow, even if there are only a few users doing inserts with the same kind of ID, they will face wait times.

Yes

> We did tests of this technique by inserting more than 10,000 records into the same table with parallel batch jobs, each fetching an ID for the same "suchbegriff" from autonum. No problems, no duplicate keys.

>You should have seen wait times for the update of your "number" table and for the lock necessary for it.

>Anyhow, as I wrote, your technique should not produce duplicate keys.

This is the point we are wondering about.

>Unless: you use the wrong isolation level, one of the update transactions on the "number table" failed, or some data in the target tables was changed without going through your "number table".

> We don't use rollback when fetching log IDs from autonum.

>So what do you do when the connection fails, the database crashes, or the client application hits an error?

>You use rollback. There is no way to avoid it.

>Your application gets a "duplicate key" error - the database performs a rollback of your last action. What does your application do then? Commit?

Sorry, I meant that we do not do rollbacks spanning 2 or 3 inserts or updates in different tables.

SQL command, on error = messagebox, errorlog, quit

> Only the parallel batch jobs from our workstation fetching a joblog ID cause problems (sometimes).

>At least, that's what you've seen until now.

>OK, without more detailed information we cannot tell too much about the real cause of the problem.

>Try to figure out what the duplicate keys have in common.

>Try to reproduce the behavior.

Until now we couldn't reproduce this behavior. 5 batch jobs starting every 5 minutes at the same time work for hours or days.

They connect to the database and first try to get a joblog ID from autonum.

We hit the "duplicate key" situation for the joblog ID 2 times in the last week (it was the first duplicate key error we have seen in the last 4-5 years).

>Check whether any rollbacks happen when your application uses the database.

>What isolation level do you use?

We use DATABASE ISOLATION LEVEL 0

After connecting to the database via ODBC we do

=SQLSETPROP(verbindungsnr,"Transactions",2)

2 = Transaction processing is handled manually through SQLCOMMIT( ) and SQLROLLBACK( ). Read/write.

=SQLSETPROP(verbindungsnr,"DisconnectRollback",.T.)

So normal select commands are fired without a following commit.

Insert and update commands are fired with a following commit command.

Selects from autonum are fired with "lock exclusive".

Even though you dislike our design: do you think it would be better (quicker, safer, ...) to use an internal database procedure to get the next speaking number for a given "suchbegriff" from our autonum table? (No translation of the SQL command every time.)

If so, how? (We haven't used database procedures until now, and I don't know how to get the next number (= wert + 1) as a string value with a procedure.)

I tried something like this:

CREATE DBPROC dbproc_autonum (IN suchkey CHAR(30),INOUT neuerwert FIXED(10,0)) as

begin

select wert into :neuerwert FROM "BWPROGI"."AUTONUM" WHERE upper(suchbegriff) = upper(:suchkey) WITH LOCK EXCLUSIVE;

update "BWPROGI"."AUTONUM" set wert = wert + 1 WHERE upper(suchbegriff) = upper(:suchkey) ;

SET neuerwert = neuerwert + 1;

end;

A CHAR(10) return value would be nice, and I don't know whether this would be the quicker way.

Best regards

Albert

lbreddemann
Active Contributor
0 Kudos

> >Gosh - that's information overloading at its best...

> We call this needed information.

> What does our SAP system do? We have 1 mandt, 60 accounting areas, and about 200 werks!

> Of course SAP uses internal unique keys for all tables, but we configured different "number intervals" for each werks.

With "information overloading" I was referring to the multiple meanings you encode in just one column (your "rechnungs-no").

SAP tables do not have this.

As you wrote, there is "MANDT", "BUKRS", "WERKS" ... and the primary key is defined over all relevant columns.

There is no primary key column where all the different meanings are concatenated together.

So there aren't any surrogate keys (a.k.a. AUTOID) used here (OK, except the infamous DDLOG sequence...).

> >What do you do when the value of an invoice has to be changed? Update the primary key of the table?

> Update rechnungen set amount = xyz,...... where rechnungnr = nnn

> We never change the rechnungsnr; if the invoice was wrong, we produce a credit note for it and then write a new invoice.

So, you could use a sequence here instead as well.

> > We don't use rollback when fetching log IDs from autonum.

> >So what do you do when the connection fails, the database crashes, or the client application hits an error?

> >You use rollback. There is no way to avoid it.

> >Your application gets a "duplicate key" error - the database performs a rollback of your last action. What does your application do then? Commit?

> Sorry, I meant that we do not do rollbacks spanning 2 or 3 inserts or updates in different tables.

> SQL command, on error = messagebox, errorlog, quit

You don't handle the fetching of the new number in the same transaction as the actual insert of your application data?

> >What isolation level do you use?

> We use DATABASE ISOLATION LEVEL 0

Hmm... did you read the documentation on SQL Locks?

[Internals Course - SQL Locks|http://maxdb.sap.com/training/internals_7.6/locking_EN_76.pdf] :

"Isolation level 0 does not offer any protection against access anomalies."

Basically it might have happened that the same number was read twice.

Perhaps the application does not always request locks when reading data from this table?

> After connecting to the database via ODBC we do

> =SQLSETPROP(verbindungsnr,"Transactions",2)

> 2 = Transaction processing is handled manually through SQLCOMMIT( ) and SQLROLLBACK( ). Read/write.

> =SQLSETPROP(verbindungsnr,"DisconnectRollback",.T.)


> So normal select commands are fired without a following commit.

> Insert and update commands are fired with a following commit command.

> Selects from autonum are fired with "lock exclusive".

Please be more detailed here.

What is the exact sequence of actions here?

1. Fetch number from number table and update number table.

2. COMMIT

3. Insert new application data with the just fetched number.

4. COMMIT

or

1. Fetch number from number table and update number table.

2. Insert new application data with the just fetched number.

3. COMMIT
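
(For illustration only: variant 2 written out as one transaction, using the statements from the first post. The invoice columns other than the key are left out, and the key column name rechnungsnr is an assumption.)

SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE

-- in the application: l_rechnrneu = wert + 1

UPDATE autonum SET wert = ?l_rechnrneu WHERE suchbegriff = 'Rechnungsnr'

INSERT INTO rechnungen (rechnungsnr) VALUES (?l_rechnrneu)

COMMIT -- only now is the exclusive lock on autonum released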

And what does your application do with its data when a rollback occurs?

Is it guaranteed that all variables are reset?

> Even though you dislike our design: do you think it would be better (quicker, safer, ...) to use an internal database procedure to get the next speaking number for a given "suchbegriff" from our autonum table? (No translation of the SQL command every time.)

No, currently we don't know what is causing the problem, so we cannot tell whether such a change would help. In fact, right now it would make things more complicated, because we would understand even less of what's happening here.

Concerning your design: it's not about "liking" or "not liking" it.

I just pointed out some problems that result from the design as it is.

> select wert into :neuerwert FROM "BWPROGI"."AUTONUM" WHERE upper(suchbegriff) = upper(:suchkey) WITH LOCK EXCLUSIVE;

> update "BWPROGI"."AUTONUM" set wert = wert + 1 WHERE upper(suchbegriff) = upper(:suchkey) ;

> SET neuerwert = neuerwert + 1;

> end;

1. The WHERE clause UPPER(suchbegriff) = UPPER(...) is the best way to prevent the efficient use of any index structure. Better to make sure that your data is in the right format in the table when you enter it, and then look it up without the UPPER() conversion.

2. I wouldn't perform the increment twice.

Get the current value into your variable, increment this variable, set the current value in the table to the variable.
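
As a sketch only - the procedure from above rewritten along these lines (no UPPER() calls, a single increment, the parameter length adjusted to the CHAR(40) column; untested, so treat it as an outline):

CREATE DBPROC dbproc_autonum (IN suchkey CHAR(40), INOUT neuerwert FIXED(10,0)) AS

begin

select wert into :neuerwert FROM "BWPROGI"."AUTONUM" WHERE suchbegriff = :suchkey WITH LOCK EXCLUSIVE;

SET neuerwert = neuerwert + 1;

update "BWPROGI"."AUTONUM" set wert = :neuerwert WHERE suchbegriff = :suchkey;

end;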

> A CHAR(10) return value would be nice, and I don't know whether this would be the quicker way.

Why should it? You would have to convert it first - that's additional work.

Anyhow, to move ahead with your duplicate key problem, you may perform a vtrace with the "stop on error" option.

As your error is a duplicate key on a primary key constraint, you should set "stop on error" to error code 200.

The next time your application hits the error, the vtrace automatically stops and we can examine what happened before.

See [MaxDB database trace|https://wiki.sdn.sap.com/wiki/x/2yg] for details on how to activate the trace.

When the error occurs next time and you have caught it in the vtrace, we can take a look at it.

regards,

Lars

Former Member
0 Kudos

Hello Lars

>You don't handle the fetching of the new number in the same transaction as the actual insert of your application data?

No.

We fetch the new number with a central function (select with lock exclusive, update wert = wert + 1, commit).

number = "9999999999"

1. Fetch new number from number table and update number table.

2. COMMIT

If new number = "9999999999" or number = .NULL.

errorlog, quit

3. Insert new application data with the just fetched number.

on error errorlog,quit

4. COMMIT

>Basically it might have happened that the same number was read twice.

Until now we thought that this could not be possible with lock exclusive?!

> select wert into :neuerwert FROM "BWPROGI"."AUTONUM" WHERE upper(suchbegriff) = upper(:suchkey) WITH LOCK EXCLUSIVE;

> update "BWPROGI"."AUTONUM" set wert = wert + 1 WHERE upper(suchbegriff) = upper(:suchkey) ;

> SET neuerwert = neuerwert + 1;

> end;

> 1. The WHERE clause UPPER(suchbegriff) = UPPER(...) is the best way to prevent the efficient use of any index structure. Better to make sure that your data is in the right format in the table when you enter it, and then look it up without the UPPER() conversion.

OK! The "suchbegriff" in autonum is always uppercase! We did this just to be safe. We will change our number-fetching function.

> 2. I wouldn't perform the increment twice.

> Get the current value into your variable, increment this variable, set the current value in the table to the variable.

OK

>Anyhow, to move ahead with your duplicate key problem, you may perform a vtrace with the "stop on error" option.

>As your error is a duplicate key on a primary key constraint, you should set "stop on error" to error code 200.

>The next time your application hits the error, the vtrace automatically stops and we can examine what happened before.

>See MaxDB database trace (https://wiki.sdn.sap.com/wiki/x/2yg) for details on how to activate the trace.

>When the error occurs next time and you have caught it in the vtrace, we can take a look at it.

OK!

The idea behind using a database procedure instead of our function for this was "performance".

Users have to wait for previous users accessing the same suchbegriff.

Until now we fire 3 commands via ODBC (select with lock, update, commit), which the database has to "translate" first.

Wouldn't a database procedure be faster?

Best regards

Albert

lbreddemann
Active Contributor
0 Kudos

When you use isolation level 0, then your SELECT ... WITH LOCK EXCLUSIVE does not protect the data from being read by another session.

It's easy to try it out.

Open two SQL Studio sessions, set "Autocommit: OFF" and the isolation level to "NOT COMMITTED".

Now perform your SELECT ... WITH LOCK EXCLUSIVE in session A.

If you see the data in your session A, try to select it in session B - you'll get the row back.

Only when you also use the same SELECT ... WITH LOCK EXCLUSIVE will you get a lock wait situation.
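
Sketched as plain statements against the autonum table from this thread (using the 'Rechnungsnr' row only as an example):

-- Session A (autocommit off, isolation level NOT COMMITTED):

SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE

-- Session B, plain select: the row comes back immediately despite A's exclusive lock

SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr'

-- Session B, same statement WITH LOCK EXCLUSIVE: waits until session A commits or rolls back

SELECT wert FROM autonum WHERE suchbegriff = 'Rechnungsnr' WITH LOCK EXCLUSIVE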

> The idea behind using a database procedure instead of our function for this was "performance".

> Users have to wait for previous users accessing the same suchbegriff.

The users don't wait for the "translation" of the query (which is usually called 'parsing').

What they wait for is the release of the lock. That's what I was talking about when I wrote that this "homebrew sequence" is a non-scalable solution.

> Until now we fire 3 commands via ODBC (select with lock, update, commit), which the database has to "translate" first.

> Wouldn't a database procedure be faster?

The statements are rather small and don't change a lot - no, the parsed versions should be found in the shared SQL cache. And even if this is not the case and a full parse is necessary, then all relevant catalog data is likely to be found in the catalog cache.

I don't see any sign here that parsing is an issue performance-wise for you.

regards,

Lars