Error 1204

DilipVoora
Participant

Dear Experts,

Recently we came across error 1204, severity 17, state 2, with the description "ASE has run out of locks. Re-run your command when there are few active users, or contact SA to reconfigure the ASE with more locks".

Can anyone explain why we received this error even though we have a large number of locks available and the percentage of active locks is less than 1%?

The output of sp_monitorconfig 'locks' for our environment is below:

1> sp_monitorconfig 'locks'

2> go

Usage information at date and time: Sep 30 2015 7:32AM.

Name                      Num_free    Num_active  Pct_act  Max_Used    Reuse_cnt  Instance_Name
------------------------- ----------- ----------- -------- ----------- ---------- ------------------------------
number of locks              14991870        8130     0.05    15000000          0 NULL

(1 row affected)

(return status = 0)


Regards,

Dilip Voora

Accepted Solutions (1)

former_member188958
Active Contributor

The number of locks used is very volatile; a single process can accumulate and release a large number of locks in a short period of time. Once the 1204 is hit, the process rolls back and the locks are released, which can happen quite quickly, particularly if the locks are shared locks. What matters is not the conditions now, but what the conditions were just before the 1204 was raised.

You can configure ASE to take a configurable shared memory dump (CSMD) on the 1204 error. Then the next time this happens, the resulting dump can be examined to see what used all the locks. Something along these lines should work (the dump directory is just a placeholder, and you should check the exact sp_shmdumpconfig arguments for your ASE version):
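-- Enable dump-on-condition behavior, then register a dump on error 1204.
sp_configure "dump on conditions", 1
go
-- "add" a condition of type "error" for error 1204, keeping at most
-- 1 dump, written to the given directory (path is just an example).
sp_shmdumpconfig "add", "error", 1204, 1, "/sybase/dumps"
go
-- Running sp_shmdumpconfig with no arguments displays the current settings.
sp_shmdumpconfig
go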

DilipVoora
Participant

Hi Bret,

So, with respect to the above, you mean that we have to enable the server to generate a shared memory dump and then specify the dump condition so that a snapshot of shared memory is captured. Is my understanding correct?

Regards,

Dilip Voora

former_member188958
Active Contributor

Yes, that is my suggestion.  Mark's suggestion also looks good and will help you identify transactions that use a lot of locks without necessarily hitting the limit.

Answers (3)

former_member89972
Active Contributor

Hi Dilip

At my workplace we poll the MDA table monProcessActivity to zero in on SPIDs holding a high number of locks, using the LocksHeld column. In fact, we do not allow more than 1M locks per SPID: any SPID exceeding that is terminated during the poll, and the DBAs are notified with the SPID, KPID, and login details.
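
A simplified version of the polling query looks something like this (the 1M threshold reflects our policy, adjust to taste; MDA access needs mon_role and monitoring enabled):

-- Find SPIDs holding an excessive number of locks.
select SPID, KPID, suser_name(ServerUserID) as login_name, LocksHeld
from master..monProcessActivity
where LocksHeld > 1000000
order by LocksHeld desc
go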

We have third party tools collecting SQL so we use those to get actual SQL/Plan if need be.

But the same can also be extracted from monProcessSQLText, or good old dbcc sqltext(spid), before the SPID is sent to a better place!
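
For example, to grab the SQL for a suspect SPID (123 is just a placeholder; monProcessSQLText needs SQL text capture configured):

-- SQL text from the MDA tables for one spid
select SPID, LineNumber, SQLText
from master..monProcessSQLText
where SPID = 123
order by LineNumber, SequenceInLine
go

-- Or the old-school way: route dbcc output to the session, then dump the text.
dbcc traceon(3604)
go
dbcc sqltext(123)
go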

What I have observed is that the number of locks climbs extremely rapidly when users run "select into" to create temporary tables while extracting data from a very active table that uses the datarows locking scheme and has a huge number of rows. Most probably the server cannot escalate to a table lock because of the high activity, so the number of locks keeps increasing until it crosses the configured limit.

I would suggest not increasing the lock limit, but instead getting end users to agree on some reasonable number of locks and using that as a hard limit for terminating a SPID. If you provide enough evidence, end users normally agree, because no one wants to see the worst-case scenario of recycling a server during a busy day.

In summary, the MDA tables are your friends. Use them to help you.

HTH

Avinash

c_baker
Employee

Along with the other suggestions, you might also want to post the values of the following configuration options (a quick way to display them is sketched after the list):

page lock promotion HWM

page lock promotion LWM

page lock promotion PCT

row lock promotion HWM

row lock promotion LWM

row lock promotion PCT

lock scheme
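
Something like this should list them all, since sp_configure matches on a fragment of the parameter name if I remember correctly:

-- Lists all six promotion parameters (page and row, HWM/LWM/PCT)
sp_configure "lock promotion"
go
sp_configure "lock scheme"
go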

If you identify a transaction running against a large table (e.g. >15,000,000 rows), then you could perhaps change the escalation settings so a full table lock is taken out, or use the granularity of the locking system to first set that table to DPL (datapages locking) instead of DRL (datarows locking), if that is the issue.

Tables can be changed to different locking schemes, even if the server-wide lock scheme is set differently.
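
For example (the table name is hypothetical and the threshold values are illustrative, not recommendations):

-- Switch one large, hot table from datarows to datapages locking.
alter table big_table lock datapages
go

-- Or lower the escalation thresholds for just that table so ASE
-- attempts a table lock sooner (scope, object, LWM, HWM, PCT).
sp_setrowlockpromote "table", "big_table", 200, 200, 100
go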

Chris

former_member188958
Active Contributor

I disagree slightly here. If you have identified a transaction that you now know is going to lock a large number of rows, I would first consider using the LOCK TABLE command to ensure the transaction uses only a single table lock for the table (as it will be locking most of the table anyway).
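
A minimal sketch (hypothetical table name; LOCK TABLE has to run inside a transaction):

begin tran
-- Take one exclusive table lock up front instead of millions of row locks.
lock table big_table in exclusive mode
-- ... run the mass update/delete here ...
commit tran
go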

The lock escalation settings are of more use for unpredictable ad-hoc user queries.  The main issue with lock escalation is that if there is even a single incompatible lock on another row in the table, ASE will not be able to escalate to a table lock, which makes the settings irrelevant.

-bret

c_baker
Employee

I stand corrected. Forgot about LOCK TABLE.

The real issue is to understand the applications' behaviour in the database.

The MDA tables can help here.

Chris

Former Member

sp_monitorconfig shows you've hit the maximum number of locks:

Name                      Num_free    Num_active  Pct_act  Max_Used    Reuse_cnt  Instance_Name
------------------------- ----------- ----------- -------- ----------- ---------- ------------------------------
number of locks              14991870        8130     0.05    15000000          0 NULL


15,000,000 is a lot of locks.

Check that transactions aren't too long and that you're not doing table scans on updates or incorrect joins.
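
A quick check for long-open transactions (syslogshold shows the oldest active transaction per database):

select dbid, spid, starttime, name
from master..syslogshold
order by starttime
go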

Or maybe you've just got a large system with lots of concurrent activity.


sp_lock will show you the locks in use, but once the lock list gets large it's a very slow process.

You can also use the MDA tables to look at locks, but they're hideously slow to select from once the count gets over a million.
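
For what it's worth, a rough per-spid summary from monLocks looks like this (again, slow once the lock count is huge):

select SPID, count(*)
from master..monLocks
group by SPID
order by 2 desc
go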