on 12-09-2015 7:37 PM
Hello,
after installing the latest support package (TDMS 4.0 SP0009), the 200 GB log disk in the receiver system fills up completely when a deletion run is executed. We use an MSSQL database (11.00.5058.00) and SAP NetWeaver 7.40 in the sender, receiver, and central systems. The log disk overflows regardless of whether the recovery model is set to FULL or SIMPLE. The simultaneous deletion of just two big PSCD tables (DFKKKO, DFKKOP) is enough to cause the overflow, so decreasing the number of batch processes does not change anything.
The rec/client profile parameter is set to ALL, but for DFKKKO and DFKKOP, "Log Data Changes" is not active in SE11. Increasing the size of the log disk would be too expensive.
How can we solve this issue?
Thanks in advance for any help.
Best regards,
Anselm
Hello Anselm,
Could you please confirm whether the receiver system is a multi-client system?
Also, which deletion technique is used in your TDMS package?
Please check which value for cluster bytes (CLUBY) is currently set in table CNVMBT09CTR.
Please also share the total size of tables DFKKKO and DFKKOP in your receiver system, and the row counts for these tables in the respective receiver clients.
Thanks,
Rajesh
Hello Rajesh,
yes, the receiver system is a multi-client system. There are eight clients containing application data. Therefore, I set GLOBAL_NO_DROP_INSERT = X for each receiver client. So, the deletion technique is Overall ARRAY DELETE.
In the CNVMBT09CTR table, there is one record for each package (with TABNAME = _DEFAULT and CLUBY = 5000000).
Table DFKKKO:
Reserved size (KB) = 26306640
Data size (KB) = 10885480
Index size (KB) = 15097152
Rows = 154981303
Client 100: 39 records
Client 300: 25747013 records
Client 400: 25614135 records
Client 444: 17526843 records
Client 504: 25614135 records
Client 505: 30239569 records
Client 506: 30239569 records
Client 999: 0 records
Table DFKKOP:
Reserved size (KB) = 33194144
Data size (KB) = 17236648
Index size (KB) = 15591768
Rows = 93597003
Client 100: 15 records
Client 300: 15608225 records
Client 400: 15460902 records
Client 444: 10622601 records
Client 504: 15460902 records
Client 505: 18222179 records
Client 506: 18222179 records
Client 999: 0 records
Best regards,
Anselm
Hello Anselm,
Since the receiver system is quite large, with almost 25 million records per client in these tables, the deletion will definitely generate a high volume of log data.
To avoid the problem, you can reduce the value of COMBY in the control table CNVMBT09CTR for the next TDMS package. This ensures that a DB commit is performed more frequently, so the system does not run out of logging space.
Also reduce the number of parallel jobs used for data deletion.
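The idea behind this advice, that committing in smaller chunks caps how much log space a deletion run can hold at any one time, can be illustrated outside of TDMS. The sketch below is a hypothetical, minimal chunked-delete loop against an in-memory SQLite database (standard library only); the chunk_rows parameter plays the role COMBY plays in TDMS, and the table and column names are invented for illustration, not taken from the real PSCD tables.

```python
import sqlite3

def chunked_delete(conn, table, where_sql, chunk_rows):
    """Delete matching rows in small committed chunks.

    Committing after every chunk limits how much undo/redo
    information the database must retain at one time -- the
    same idea as lowering COMBY in CNVMBT09CTR.
    """
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE {where_sql} LIMIT ?)",
            (chunk_rows,),
        )
        conn.commit()            # frequent commits keep the log small
        total += cur.rowcount
        if cur.rowcount == 0:
            return total

# Demo: a toy table with two clients, loosely modeled on DFKKOP.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dfkkop_demo (mandt TEXT, opbel TEXT)")
conn.executemany(
    "INSERT INTO dfkkop_demo VALUES (?, ?)",
    [("300", str(i)) for i in range(1000)] +
    [("400", str(i)) for i in range(500)],
)
conn.commit()

deleted = chunked_delete(conn, "dfkkop_demo", "mandt = '300'", chunk_rows=100)
remaining = conn.execute("SELECT COUNT(*) FROM dfkkop_demo").fetchone()[0]
print(deleted, remaining)   # 1000 rows deleted in 10 chunks, 500 rows left
```

With chunk_rows = 100, the journal never has to hold more than 100 uncommitted deletions at once, even though 1000 rows are removed in total.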
Thanks,
Rajesh
Hello Rajesh,
for a new package, I reduced COMBY from 1000000 (the default value) to 500000. Unfortunately, the log disk nevertheless ran over during the deletion run. I used 4 batch processes for the deletion.
The data set for the new package in table CNVMBT09CTR consists of these values:
PACKID = 9003M
TABNAME = _DEFAULT
CLUBY = 5000000
COMBY = 500000
TASKID_MAX = 6
PARCLU_MIN = 20
CLUDEL = initial
PARCLU_MAX = 30
Your recommendation was to reduce COMBY, but reducing it from 1000000 to 500000 was not successful. Could you please recommend a more precise value for COMBY?
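A rough way to reason about a target value is a back-of-the-envelope estimate of the peak uncommitted log volume. Every figure in the sketch below is an assumption: the thread does not state COMBY's exact unit (it is read here as rows per commit interval), and the real per-row log footprint on MSSQL depends on row size, index count, and recovery model, so this is purely illustrative arithmetic, not TDMS internals.

```python
# Back-of-envelope estimate of peak uncommitted log volume during a
# deletion run. Every number here is an assumption for illustration;
# the actual per-row log footprint must be measured on the system.

GB = 1024**3

log_disk_bytes  = 200 * GB     # size of the log disk from the thread
parallel_jobs   = 4            # batch processes used for deletion
rows_per_commit = 500_000      # assumed reading of COMBY = 500000
row_log_bytes   = 2_000        # assumed log bytes per deleted row
                               # (row image + index entries + overhead)

# Worst case: all jobs are one full commit interval deep at once.
peak_log_bytes = parallel_jobs * rows_per_commit * row_log_bytes
print(f"estimated peak: {peak_log_bytes / GB:.1f} GB")

# Largest rows-per-commit that keeps the peak under half the disk:
budget = log_disk_bytes // 2
max_rows_per_commit = budget // (parallel_jobs * row_log_bytes)
print(f"max rows per commit under this model: {max_rows_per_commit}")
```

Under these assumed numbers the peak is only a few GB, far below 200 GB. If the disk still overflows, that would suggest either a much larger per-row log cost or log records not being truncated between commits, so measuring actual log growth per committed chunk would be the next step.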
Best regards,
Anselm