master..syslocks vs master..monLocks

Former Member
0 Kudos

Should we expect huge performance differences between syslocks and monLocks when we have a large number of locks?

We ran

     select count(*) from syslocks

and

     select count(*) from monLocks

Both returned 200,000 rows.

The first took 1s.

The second took 25s.

Is monLocks performance much slower than syslocks?

Is there a way to improve the performance of the mon tables ?

Accepted Solutions (1)

simon_ogden
Participant
0 Kudos

Hi Mike,

I have spent many an hour trying to come up with a more efficient way of using monLocks that might result in better performance and better concurrency.

Alas I couldn't come up with anything close to what you get by using syslocks.

It sits there scanning the lock hashtable forever and burns 100% of an engine whilst doing so. Modifying the number of hash buckets also appears to have zero effect.

As such, anything using monLocks has been stripped out of our monitoring, which is a shame as monLocks by default will show you the requested locks as well as the granted ones (previously this was only possible for syslocks by using TF 1202). I've had this on a list to report to SAP, sadly that particular list hardly ever gets a look in anymore.
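For example, something along these lines surfaces the lock requests that are still waiting (a minimal sketch; it assumes the monLocks LockState column distinguishes requested from granted locks, so verify the exact values on your version):

     select SPID, DBID, ObjectID, LockType, LockLevel, LockState
     from master..monLocks
     where LockState = 'Requested'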

Cheers,

Simon

Former Member
0 Kudos

Hi Simon,

Thanks for this.

We will look at moving to syslocks - I don't think there's much in monLocks that's crucial.

Do you know if this was the same in Sybase 15.5? We've only just noticed the problem with 15.7.

> I've had this on a list to report to SAP, sadly that particular list hardly ever gets a look in anymore.

Does any list for SAP/Sybase get looked at anymore?


Thanks again


Mike

simon_ogden
Participant
0 Kudos

To be honest, I don't recall seeing this level of performance issue with monLocks earlier than 15.7. However, I don't have a 15.5 environment to hand to verify that.

I'm not sure if DBA Cockpit (used to monitor Business Suite on ASE) uses monLocks. With Business Suite, ASE is configured to disable lock escalation entirely, so I would expect them to be running with a huge number of locks given all tables are DRL locking. If so, I would have expected them to have noticed queries against monLocks taking an age.

It may be that they just monitor lock wait timing at an object level.

Former Member
0 Kudos

Thanks

Interesting you say that about lock promotion - we have page-level lock promotion set at 50,000, but we're seeing the replication user holding 100,000 page locks even though no one else is on the server.

I'd expect this to get promoted to a table lock. Could replication be disabling lock promotion?

simon_ogden
Participant
0 Kudos

Depends what your DSI is doing. If it is doing a bunch of singleton inserts in a transaction, this won't escalate. Escalation only happens on a per-scan basis - an individual scan of a table within the context of a wider query. If that scan session takes out X number of locks it will escalate (assuming nothing else has a shared lock, of course).
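A rough illustration of the per-scan rule (table names here are made up):

     -- Each singleton insert is a separate scan, so the transaction can
     -- accumulate far more row/page locks than the promotion threshold
     -- without ever escalating:
     begin tran
         insert repl_target values (1)   -- repeated thousands of times by a DSI
         insert repl_target values (2)
     commit tran

     -- One statement = one scan: if this single scan takes out more locks
     -- than the threshold (and nothing else holds a conflicting lock),
     -- it can escalate to a table lock:
     insert repl_target select * from repl_source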

Former Member
0 Kudos

Ah - thank you that explains the lack of lock escalation.

Answers (2)

Former Member
0 Kudos

FYI...

I've been informed this is being addressed in SP136 under these CRs:

782871 and 786474 improve performance in the concurrency layer when one spid is querying monLocks while other spids release and take out locks.

786594 improves memory allocation in the monitoring layer.

former_member89972
Active Contributor
0 Kudos

Mike

I have two positive things to say about monLocks table compared to syslocks.

- It is NOT materialized for each query, so even in a tempdb space crunch queries on it will run.

  Multiple runs on syslocks when lock counts are high can stress out the server.

  Imagine locks running into the hundreds of thousands and a few users querying it concurrently.

  I would count this as a big plus.

- It is a wider table than syslocks and has more details should you need them.

For a small number of locks the performance hit may not be significant.

With the additional details and no demands on temporary space, I would prefer monLocks any day!
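For example (a sketch only; I am quoting the monLocks column names from memory, so check the column list on your version):

     select SPID, DBID, ObjectID, LockType, LockLevel,
            LockState, WaitTime, PageNumber, RowNumber
     from master..monLocks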

HTH

Avinash

simon_ogden
Participant
0 Kudos

"Multiple runs on syslocks when lock counts are high can stress out the server."

Do you have some evidence to back that up? My tests show the exact opposite: monLocks is the problem one.

Agreed, it does save on materialisation - you need to make sure you've got a couple of GB spare in your assigned tempdb if you have millions of locks and are querying syslocks.

When the server has millions of locks, querying monLocks is impossible if you want results in an acceptable timeframe (and you can't afford an engine being essentially offline for that period).

Message was edited by Bret Halford: reduced the font size.

former_member89972
Active Contributor
0 Kudos

No, I do not have any tests designed just for this 🙂

I am going by the materialization (for each SPID) requirement for queries on pseudo/fake tables.

So if you ensure enough temporary space at your disposal, then syslocks may work for you.

Also note the additional detail columns on monLocks.

Avinash

simon_ogden
Participant
0 Kudos

Sorry about my last post and the giant font - that'll teach me for posting from a mobile device. It's also not editable (why?), unlike my other posts, so we're stuck with it!

You make a good point regarding multiple connections querying syslocks, that would be problematic.

If you have control - in as much as a dedicated monitoring account assigned to a user tempdb, and a single monitoring solution that provides the required data in the format needed for all interested parties - then thankfully that side of things becomes less problematic.

former_member89972
Active Contributor
0 Kudos

I too wondered about the font size 🙂   

Thanks for the clarification.

Typically my attack on a high number of locks on a production server begins with the monProcessActivity table, which has a LocksHeld column per SPID. If I sort it by LocksHeld descending I get to a couple of top suspect SPIDs.

Then on to monProcessObject for the suspect SPIDs and table names.
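In query form, that first pass looks something like this (a sketch; it assumes the usual MDA column names on your version):

     -- top suspects by lock count
     select SPID, KPID, LocksHeld
     from master..monProcessActivity
     order by LocksHeld desc

     -- then drill into a suspect spid (42 is just an example)
     select SPID, DBName, ObjectName, LogicalReads
     from master..monProcessObject
     where SPID = 42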

Normally temp tables/work tables do get exclusive table locks for each SPID.

The typical culprit is an "insert into a temp table" based on a "select from".

Lock escalation for the source of this kind of "select" may not happen if it is a relatively active table.

Thus the number of locks can climb faster than you can blink.

I have a watchdog process polling every minute to knock out any SPID grabbing more than 1M locks.

It keeps the server from getting into the danger zone. That process gets a few hits every month, and with that evidence provided to developers they can fix the code!
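The core of such a watchdog poll can be as simple as this (a sketch; the actual kill is issued by the surrounding script after checking the spid):

     select SPID, KPID, LocksHeld
     from master..monProcessActivity
     where LocksHeld > 1000000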

HTH

Avinash

Former Member
0 Kudos

Good morning,

Simon, do you have some test SQL or an application for this? I logged Change Request 782871 a while ago in relation to this code.

The old way we collected data for monLocks was not very effective, especially in how we allocated memory from the memory pool and held the spinlocks. I would expect it to be linear in its slowness - more locks over more buckets will be slower. CR 782871 is planned for the next scheduled release of ASE 15.7, currently SP136.

Thanks,

Niclas

Former Member
0 Kudos

> I have two positive things to say about monLocks table compared to syslocks.

This may be the case, but when we have more than 500,000 locks monLocks is unreadable without causing major issues.

Former Member
0 Kudos

Hello,

I do not need to explain this for some, but syslocks can also cause problems if you have contention or a slow tempdb, as the algorithm materializes a fake table in tempdb; if that is slow and you are handling many locks, the select on syslocks may take forever because it cannot take a consistent snapshot of lock activity.

So the slower the materialization of the fake table syslocks is, the more rows may need to be inserted into the worktable due to new locks.

But if you, for example, use an in-memory tempdb and have the user bound to its own tempdb, there should be no contention for resources or other slowness materializing the fake table into tempdb, and it all works fine.
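Binding a dedicated monitoring login to its own tempdb looks roughly like this (names are illustrative; check the sp_tempdb syntax on your version):

     -- create/identify a user tempdb (mon_tempdb here), then bind the login:
     exec sp_tempdb 'bind', 'lg', 'monitor_login', 'db', 'mon_tempdb'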

So both monLocks and syslocks have their pros and cons.

Thanks,
Niclas 

Former Member
0 Kudos

Hi Mike,

When you say 500,000 locks, do you mean you take out 500,000 locks and then do the select on syslocks and monLocks?

Or is it that your test releases and acquires locks at the same time you scan syslocks and monLocks?

I just want to do some tests to make sure 782871 addresses this with monLocks.

Niclas


Former Member
0 Kudos

In the real world, we had rep server running which was grabbing 900,000 locks.

We ran

     select count(*) from syslocks - took 20 seconds

     select count(*) from monLocks - took 5 mins

Here are some other counts

     select count(*) from syslocks - 150,000 reported in 1s

     select count(*) from monLocks - 150,000 reported in 24s

Here are some other counts

     select count(*) from syslocks - 23,000 reported in 0.023s

     select count(*) from monLocks - 23,000 reported in 0.123s


It seems odd that monLocks performance isn't linear.

When the number of locks goes over ~400,000 monLocks is really unusable.

Just ran this one, and monLocks ran much quicker than yesterday:

     select count(*) from syslocks - 700,000 reported in 1s

     select count(*) from monLocks - 700,000 reported in 140s


Yesterday this took 300 seconds - I think it's also related to what else is running.


Former Member
0 Kudos

Do you have a link to 782871? I can't find it.

I found this summary, but it sounds like a different issue to me:

782871 / KBA 2164927: In some circumstances, a query on monLocks might cause some running processes to be hit by timeslice errors.
former_member182259
Contributor
0 Kudos

I have tried to stay out of SCN because a lot of the discussions are more tech-support oriented than community, but this one is going sideways.

The first thing you have to understand about the MDA tables is that they *still* use the CIS layer. Yes, they are now materialized RPC calls - but that just avoids the loopback that too often was the crippler in 12.5.x with MDA. However, anyone with any real experience with CIS - especially with ASE in process kernel mode and having to deal with the async event queue - will tell you that CIS is not necessarily the sprightliest way to get data from a server. And since ASE materializes data at the speed of client processing, the query execution speed of MDA queries is too often at the mercy of CIS. For that reason, whenever I use MDA tables, I tend to use the system RPC calls (e.g. $monLocks) from an external program vs. SQL.....

.....and I have to say that as a result, I have never seen these types of differences.

Secondly, any constant polling of system tables - whether syslocks or sysprocesses - will cause problems, especially with any concurrency. We have seen this especially with sysprocesses with apps that try stuff in triggers vs. using the ACF functions, and with people that use concurrent sessions to monitor blocking via GUI tools. A lot of it has to do with the fact that the system tables sysprocesses/syslocks don't exist as "tables" (although I have seen DBAs argue this, and one even insisted they would work better with datarows locking); as a result the scanner has to scan structures (e.g. pss for sysprocesses - or multiple lock hashtables for locks), and when a different process has to amend those structures it will grab spinlocks (mutexes) somewhere - and that is where a number of factors immediately come into play (e.g. number of sockets/cores, etc.).

Both monLocks and syslocks poll the same structures (I am not so sure whether they poll the lock hashtables vs. polling pss... if they do poll the hash tables, they would have to poll several - table and fglocks - but for argument's sake we will say the hashtables) - there is no difference in that respect (other than some column differences wrt the information presented). There is one notable difference in that, as an RPC call, monLocks accepts parameters... unfortunately, it is the SPID/KPID combination and not DBID/ObjectID - which I personally would have found more useful. However, if you do wish to grab locks for only a single specific SPID, then there is a huge difference, as syslocks still polls the entire hashtable and does the filtering when materializing the results via a work table. The monLocks RPC would simply not bother returning the locks for SPIDs that don't match at all (in fact it can scan the pss structure instead of the lock hashtable - which is why I question whether either of these actually scans the hashtable and not pss... scanning the hashtable is a bit nasty as you would have to scan the chains, and blocked locks/lock waits are not in the hashtables anyhow)... so it would be tons faster at returning the data if CIS wasn't in the way.

In 10+ years of using MDA (quite extensively, I would add - far, far more extensively than TS does), I have never seen monLocks *cause* problems that syslocks would not have (and unfortunately, attempts to use syslocks instead rather proved it). There have been issues with 'object lockwait timing' due to grabbing spinlocks - particularly in larger SMP environments (16+ engines) that had contention - but this was largely resolved in 15.7, although a different manifestation came about later in 15.7 (when lock waits were added to monCachedStatement) that has also been resolved now... but that was an impact on monOpenObjectActivity and not monLocks.

However, I would agree with someone else on this thread. If what you are after is who is holding a lot of locks, monProcessActivity is a better starting point, as LocksHeld has already aggregated them to the SPID/KPID level. If you are trying to prove where contention is, and specifically whether table lock escalation (or not!) is causing the problem, then sampling monLocks every minute is doable (assuming you have a lot of space to load the output for analysis). Polling every few seconds - although it can be done - is a fight against physics, as the data has to be written somewhere (e.g. a file or scrolling to screen) and that writing of data will take far longer than the row materialization.

I have used monLocks successfully in a number of places/times when needed with no issues.....and much prefer it to syslocks as I don't need to constantly join it with other tables to get the data I need.

Former Member
0 Kudos

Thanks for the explanation.

>  I have never seen monLocks *cause* problems that syslocks would not have (and unfortunately,

> attempts to use syslocks instead rather proved it). 

We're using 15.7 SP134 on a machine with 24 engines (the underlying machine is 24 cores x 2 threads) and 512GB RAM,

and it's only when there are more than 200,000 locks that we see this.

Here's how we're comparing the difference between syslocks and monLocks.


        declare @time datetime
        declare @size1 varchar(20), @timetaken1 varchar(20)
        declare @size2 varchar(20), @timetaken2 varchar(20)

        select @time = getdate()

        select @size1 = convert(varchar(20), count(*)) from master..syslocks

        select @timetaken1 = convert(varchar(20), datediff(ms, @time, getdate()))

        select @time = getdate()

        select @size2 = convert(varchar(20), count(*)) from master..monLocks

        select @timetaken2 = convert(varchar(20), datediff(ms, @time, getdate()))

        -- print placeholders require character arguments, hence the converts
        print "syslocks %1! %2! monLocks %3! %4!", @size1, @timetaken1, @size2, @timetaken2

I've altered our monitoring process to stop using monLocks, so we'll see over the next day if this helps.

Note: we haven't seen this before on earlier versions, so we may have something misconfigured.

Also, I can run more tests if you need more diagnostics. Let me know what you'd like me to test.

simon_ogden
Participant
0 Kudos

Jeff,

Nothing is going sideways. We're not theorising about problems with monLocks; we're only relaying findings.

If it shouldn't be performing like this then we need to work out what the problem is.

Can you fire up a simple test on 15.7 SP132 with a single spid holding 4 million row locks on one table (just a single integer column) and post the statistics output?

The last time I checked this a few months ago I got:

==================== Lava Operator Tree ====================

                       

                        Emit

                        (VA = 2)

                        r:1 er:1

                        cpu: 0

             /

            ScalarAgg

              Count

            (VA = 1)

            r:1 er:1

            cpu: 5562500

/

TableScan

master..monLocks

(VA = 0)

r:4e+06 er:100

l:0 el:10

p:0 ep:2

============================================================

Table: master..monLocks scan count 1, logical reads: (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 55625.

Adaptive Server cpu time: 5562311 ms.  Adaptive Server elapsed time: 5562722 ms.

versus:

                        cpu: 0

             /

            ScalarAgg

              Count

            (VA = 1)

            r:1 er:1

            cpu: 400

/

TableScan

master..syslocks

(VA = 0)

r:4e+06 er:100

l:4.367e+06 el:10

p:0 ep:3

============================================================

Table: master..syslocks scan count 1, logical reads: (regular=4366948 apf=0 total=4366948), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 235

Execution Time 118.

Adaptive Server cpu time: 11811 ms.  Adaptive Server elapsed time: 11863 ms.

Execution time: 11.87 seconds

The entire hour and a half is spent in this routine, burning 100% of an engine thread:

pc: 0x00000000015c7f0c upyield+0x2cc()

pc: 0x00000000017582b8 mda__traverse_lock_hashtable+0x23a()

pc: 0x000000000175805d mda__read_lock_table+0x39()

pc: 0x0000000001758022 mda_populate_monLocks+0x82()

pc: 0x000000000174bc14 mda_exec+0xb2()

pc: 0x0000000001722193 VTABRemoteAccess::getNext()+0x509()

pc: 0x0000000001721b5a VTABRemoteAccess::startScan(short, short, bool)+0x112()

pc: 0x000000000084b751 LeScanOp::_LeOpNext(ExeCtxt&)+0x6b1()

pc: 0x00000000008149ca LeEmitSndOp::_LeOpNext(ExeCtxt&)+0x1ba()

pc: 0x00000000007fe83a LePlanNext+0xfa()

[Handler pc: 0x0x0000000000fb3520 le_execerr installed by the following function:-]

pc: 0x0000000000fbf450 exec_lava+0x280()

pc: 0x00000000011686ca s_execute+0x171a()

[Handler pc: 0x0x000000000120ad00 hdl_stack installed by the following function:-]

[Handler pc: 0x0x00000000011a5e30 s_handle installed by the following function:-]

pc: 0x00000000011aa8f2 sequencer+0x2532()

pc: 0x00000000007cb9a9 tdsrecv_language+0x189()

[Handler pc: 0x0x00000000014ed3b0 ut_handle installed by the following function:-]

pc: 0x00000000007ebd40 conn_hdlr+0x10c0()

dbcc stacktrace finished.

Our team is responsible (along with about a million other things!) for monitoring 200+ ASE servers. We simply cannot use monLocks.

If this is something specific to our config I would be keen to know.

former_member182259
Contributor
0 Kudos

Going sideways was simply a comment that there was a lot of theorizing about possible issues when in reality not all the full information had been given.....

....however, in your stack trace:

pc: 0x0000000001722193 VTABRemoteAccess::getNext()+0x509()

pc: 0x0000000001721b5a VTABRemoteAccess::startScan(short, short, bool)+0x112()

Like I said - live by CIS..... die by CIS..... in fact I wonder in your case if CIS switched to cursor mode, and whether that was the real cause of the CPU burn vs. the hashtable scan.

However, I have polled well over 1 million locks via the RPC in far less time than your query took above (well under a minute, in fact). The real test (yours is still suspect due to the count(*) aspect, in addition to CIS) would be to write a JDBC app that polled the RPC directly and wrote to a file, and then did the same thing for syslocks (although via SQL, as no system RPC exists), repeating each test about 10 times to average out the impact of file system cache timing differences.

Unfortunately, I have a ton of things on my plate (due to 16sp02 beta) but maybe someday I will do this test.

However, like I said, I have used monLocks successfully - on large production systems - usually to prove that systems are escalating to table locks far quicker than people realized, and that the typical settings for row (and page) lock promotion thresholds are way, way too low - which is why I sort of agree with the 2B setting SAP uses (although I have often thought a setting of 1M for the threshold would have been better).
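For reference, those thresholds are managed with sp_setpglockpromote/sp_setrowlockpromote; raising the row threshold server-wide looks roughly like this (the 1M figure is just the example above, not a recommendation):

     -- args: scope, object (null for server-wide), LWM, HWM, PCT
     exec sp_setrowlockpromote 'server', null, 1000000, 1000000, 100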

WRT one of the comments about the DSI: typically for SRS we are issuing single statements within a txn, so a 100K-row update at the primary becomes 100K update statements at the replicate (all of this is described in the docs or the old P&T whitepaper), which leads to the lock drain problem..... unless you use HVAR, at which point, as long as you are below the incremental compilation threshold, you will get an update with a join on #temp, which would escalate to a table lock if necessary. If above the threshold and you have incremental compilation enabled, it will still use HVAR..... however, the default is incremental compilation off, and as a result the really big transactions end up reverting back to language mode and a gazillion statements - and, as noted, we do not escalate locks across statements.

Regardless - could both be made better? Yes. But the "sideways" aspect that I found disturbing was the comments that monLocks wasn't usable. Sorry..... been there, done that. It is usable. You just have to approach it differently.

Former Member
0 Kudos

> that I found disturbing was the comments that monLocks wasn't usable.  

> Sorry ....been there done

> that.   It is usable.  You just have to approach it differently.

Can anyone explain how we should be approaching it?

We're doing this

     select * into #temptable from master..monLocks

and then processing it. This takes 5 mins with 1m locks.

Is this an unusual or incorrect way of accessing it?

I've also tried

     select count(*) from monLocks

I think you're saying this shouldn't be slow (and to be honest I don't think it was slow in older versions).

I'll raise this as an incident and get it investigated.

simon_ogden
Participant
0 Kudos

In principle it is usable (or maybe was?), agreed 🙂 I too have used it many times in the past.

But... until someone shows me some evidence on 15.7 SP13x of it not being exponentially slower than syslocks when there is a large amount of locks, it will continue to be unusable in our environments.

As far as I can tell it is not repeatedly going back through the CIS functions; it remains in the hashtable scan function for the entire duration.

Adding in filtering by SPID/KPID or anything else makes zero difference to the performance of the query (or which routine it sits in). The count(*) version ends up being the easiest way to demonstrate the problem; whether it's filtered, on the inner side of a join, or accessed any other way, it makes no odds.

It gets significantly worse depending on the number of locks, and changing the number of hash buckets (the lock hashtable size) also appears to make no difference.

(This is with SPID/KPID filtering in the query)

select count(*) from master..monLocks where SPID=24 and KPID=7143479

50000 locks:

Execution Time 3.

Adaptive Server cpu time: 304 ms.  Adaptive Server elapsed time: 304 ms.

Execution time: 0.31 seconds

100000 locks:

Execution Time 15.

Adaptive Server cpu time: 1484 ms.  Adaptive Server elapsed time: 1484 ms.

Execution time: 1.489 seconds

200000 locks:

Execution Time 113.

Adaptive Server cpu time: 11321 ms.  Adaptive Server elapsed time: 11323 ms.

Execution time: 11.346 seconds

500000 locks:

Execution Time 811.

Adaptive Server cpu time: 81097 ms.  Adaptive Server elapsed time: 81109 ms.

Execution time: 81.114 seconds

1000000 locks (as per Mike's findings also):

Execution Time 3547.

Adaptive Server cpu time: 354714 ms.  Adaptive Server elapsed time: 354734 ms.

Execution time: 354.744 seconds

So each doubling takes 4-7 times longer, so I'm sure you can see where the 90 minutes came from for the 4 million lock test.

Former Member
0 Kudos

Hi Mike,

I do not want to go too much into the internals of the SAP ASE data structures, memory allocation, and how we protect these structures, but I logged that change request and have good knowledge of the details and of what we changed, as I pointed out.

I will set up something as you and Simon describe and then see what effect 782871 has on what you have described.

Niclas   

simon_ogden
Participant
0 Kudos

As you correctly suspected, querying the native RPC does start returning rows almost immediately, going straight into mdarpc_sendrow() from mda__traverse_lock_hashtable(), so maybe the issue relates to the rows being held server-side during the hashtable scan, or maybe something isn't prepped properly when coming from CIS?

former_member182259
Contributor
0 Kudos

@Simon - Not sure..... I got burned too many times by CIS in the past, so my MDA collector (in Java) does strictly RPC calls to the $monWhatEver tables.

@Mike W - that is what I meant by a different approach: RPC, not SQL. Most MDA tables can be hit via SQL for quick checks, but any reliable monitoring (in my mind) has to use the RPC interface.

E.g:

  try
  {
      // build the RPC call, e.g. "{call $monLocks}" when mda_table = "monLocks"
      mda_query = "{call $" + mda_table + "}";

      mda_rpc = syb_conn.prepareCall(mda_query);
  }
  catch (SQLException e)
  {
      ....
  }

simon_ogden
Participant
0 Kudos

So for 1,000,000 locks:

Dump of syslocks for a spid across a local network (128MB of data) - 3 minutes

start [2015-07-02 18:35:59]

end [2015-07-02 18:38:58]

Dump via $monLocks RPC for a spid across a local network (337MB data, wider table) - 9 minutes

start [2015-07-02 18:42:31]

end [2015-07-02 18:51:42]

So the RPC method is almost comparable between the two. The problem is monLocks' handling of the data and/or where the results are buffered. It seems like all queries of this table have to store the entire result set somewhere in memory (and writes to that memory are somehow painfully slow?) before processing the remainder of the query. It does still make monLocks itself behave much, much worse than syslocks when there are a large number of locks.

Jeff, I'm sure 90% of folk wouldn't consider that they could call the underlying RPC directly, particularly as the MDA tables are supposed to be the supported method of accessing this data. We're also talking about live monitoring here rather than an MDA collector scenario.

Former Member
0 Kudos

What's SAP/Sybase's advice for use of the MDA tables?

We're running a periodic collector to summarise the data and store for later viewing. We do this all within SQL.

Is this not how MDA is meant to be used?

> Dump via $monLocks RPC for a spid across a local network (337MB data, wider table) - 9 minutes


This still seems a long time. It would mean we could only run periodic collection every 10 mins.


BTW, we've switched from monLocks to syslocks and are now getting much better performance on our server.

Former Member
0 Kudos

Just checking the configuration settings: if it's using CIS, would the CIS settings make a difference?

e.g. "cis cursor rows" is set to 50 by default - would increasing this improve the performance of monLocks?

simon_ogden
Participant
0 Kudos

The time that test took is determined pretty much solely by the speed of the network and the disk being written to. The results start returning immediately.

If you run an equivalent select from monLocks for a million locks, however, it will take 14 minutes: 5 minutes doing what appears to be nothing (which is our problem here - it becomes hours if there are many millions of locks), followed by the 9 minutes sending the results and writing to the file.

Anyway my test was really to highlight that the problem was monLocks itself rather than the native RPC.

To verify this you can create a second proxy table using the pre-15 method to map to the RPC - just create a loopback server pointing at @@servername. This shows that the CIS side of things looks OK, at least when using the classic definition.

create existing table monLocks2

..blah,blah

external procedure at 'loopback...$monLocks'
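Filled out slightly, the setup is roughly as follows (a sketch: the server name is illustrative and the column list is abbreviated - the real definition must match the full monLocks schema on your version):

     -- loopback server pointing back at this ASE (use your @@servername)
     exec sp_addserver loopback, null, MYSERVER

     create existing table monLocks2
     (
         SPID int,
         KPID int
         -- ...plus the remaining monLocks columns...
     )
     external procedure at 'loopback...$monLocks'

Querying monLocks2 then gives: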

Table: monLocks2 scan count 1, logical reads: (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 82.

Adaptive Server cpu time: 8174 ms.  Adaptive Server elapsed time: 8279 ms.

Execution time: 8.283 seconds

Now it's back to 8 seconds, whereas with monLocks using the 'materialized at "$monLocks"' definition:

Execution Time 3547.

Adaptive Server cpu time: 354714 ms.  Adaptive Server elapsed time: 354734 ms.



Former Member
0 Kudos

Hi,

Were you able to reproduce this?

I'll log this as an incident.

Thanks

Mike

Former Member
0 Kudos

Hi Mike,

Correction request 782871 does not fix the full scope of the problem, so we now also have 786594 to address this.

Let me know the incident number and I will associate it with 786594.

Niclas 

Former Member
0 Kudos

Thanks - I've raised one here.

     606256 / 2015