on 12-12-2011 9:46 AM
We are busy exporting an SRM system with sapinst/R3load.
We do not use database-specific tools, as this is an ABAP+JAVA dual-stack system.
At first we tried to start the export with only the option "Split STR", which was OK
except for one huge table. There we discovered that R3load has a limitation on the number
of exported data files: the maximum per export group is 256. In our case the group SAPAPPL1
reached file SAPAPPL1.256 and ended in error when attempting to open the next file (257).
In the log we had:
LOG:
(WTF) ERROR: too many data file entries for table CFF_CONT
(TFH) ERROR: Unable to close /sapmnt/PSR/export/ABAP/DATA/SAPAPPL1.257
(TFH) ERROR: OS error message: Bad file number
(DB) INFO: disconnected from DB
We saw here that the total size of this SRM database is 340 GB,
and the cluster table CFF_CONT alone is about 270 GB; without that table the system is only about 70 GB.
The table itself (CFF_CONT) looks like just above 1 GB, but its LOB segment accounts for about 270 GB.
Now this huge table required us to restart the export with different parameters,
using the Table Splitting Preparation
with a file SplitTable.TXT containing one line, CFF_CONT%4, and the option ROWID.
This splits the table in 4 (4 WHR files) and should avoid reaching the 256-file limitation of R3load.
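As a rough sanity check of those numbers (a minimal sketch, assuming ~270 GB of table data and the 1000M dump file size that appears later in the thread; real R3load dump files are compressed, so actual file counts will differ):

```python
import math

MAX_FILES_PER_PACKAGE = 256  # R3load limit per export group

def files_needed(data_mb, file_size_mb):
    """Number of dump files needed for a given data volume."""
    return math.ceil(data_mb / file_size_mb)

data_mb = 270 * 1024  # ~270 GB of CFF_CONT data (illustrative figure)

# One package at fs=1000M blows past the 256-file limit...
total = files_needed(data_mb, 1000)

# ...while CFF_CONT%4 spreads the data over four WHR packages,
# each comfortably below the limit.
per_split = files_needed(data_mb / 4, 1000)

print(total, total > MAX_FILES_PER_PACKAGE)
print(per_split, per_split <= MAX_FILES_PER_PACKAGE)
```

This is why the unsplit export failed at SAPAPPL1.257 while the 4-way split succeeds.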
Our question: is that the best recommended option for that table?
It would also be nice to know what that table is for, and whether it could be archived or cleaned up by the business.
Regards.
Hi,
You should go with the table splitting option,
with a file like SplitTable.TXT containing one line: CFF_CONT%10 and the option ROWID.
This splits the table in 10 (10 WHR files).
By splitting such a big table into 10 or more files, it can be exported and imported faster, saving a lot of time.
Hope this helps.
Regards,
Deepak Kori
R3ta seems to automatically select column "SRTF2" for the huge table CFF_CONT.
Thus we will be splitting that table (262 GB, one and a half million records) in 40 or 60.
We plan to have the business reduce the size of this cFolders table before the next similar exercise.
Or maybe the sequence of key fields should only go in the R3ta_hints.txt file located in the kernel exe directory
and contain the following:
CFF_CONT MANDT RELID PHIO_ID SRTF2
Right?
What would be the content of the split.txt file if we want to use more than one FIELD (sapinst - Table Splitting Preparation)?
Replacing
CFF_CONT%4;ROWID
by
CFF_CONT%20; MANDT RELID PHIO_ID SRTF2
Would this be OK (space separator for the fields), or should we use a semicolon separator,
as in
CFF_CONT%20; MANDT; RELID; PHIO_ID; SRTF2
Do not specify any column name; R3ta will figure out the field to be used for splitting.
Otherwise, find out the field as follows.
Run the query
select count(distinct(<field_name>)) from <sapsid>.<tabname>;
against each column of the primary key.
Select the column with the highest number of distinct values as the column for the split.
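The "highest cardinality wins" rule can be sketched offline. The distinct counts below are made-up placeholder numbers, not measurements from the system in this thread; in practice each count would come from running the query above against the database:

```python
# Hypothetical distinct-value counts per primary-key column of CFF_CONT,
# as they might come back from:
#   select count(distinct(<col>)) from <sapsid>.CFF_CONT;
# All values are illustrative placeholders.
distinct_counts = {
    "MANDT": 1,            # placeholder: single client
    "RELID": 2,            # placeholder
    "PHIO_ID": 1_400_000,  # placeholder: roughly one ID per document
    "SRTF2": 9,            # placeholder: cluster row counter
}

# Pick the column with the most distinct values as the split column.
split_column = max(distinct_counts, key=distinct_counts.get)
print(split_column)
```

With these placeholder figures the rule would pick PHIO_ID; with real counts from the system, the answer may of course differ.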
The "Table Splitting Preparation" has been used (sapinst from NW 7.0 SR3);
in the split text file the option was CFF_CONT%4;ROWID.
(This probably needs to be changed for better performance, maybe by using a key column/field instead of ROWID
and by splitting into more than only 4; at least 20 splits would probably be a lot better.)
The above worked out as expected, splitting into 4 export groups.
Now we also changed the command file CFF_CONT.cmd in
/tmp/sapinst_instdir/SRM05/LM/COPY/ORA/EXP/CENTRAL/AS/EXP
tsk: "/tmp/sapinst_instdir/SRM05/LM/COPY/ORA/EXP/CENTRAL/AS/EXP/CFF_CONT.TSK"
icf: "/sapmnt/PSR/export/ABAP/DATA/CFF_CONT.STR"
dcf: "/sapmnt/PSR/export/ABAP/DB/DDLORA_LRG.TPL"
dat: "/sapmnt/PSR/export/ABAP/DATA/" bs=1k fs=4000M
dir: "/sapmnt/PSR/export/ABAP/DATA/CFF_CONT.TOC"
- On line 4 (dat:), fs was changed from 1000M to 4000M
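To illustrate the effect of that change (a sketch reusing the 262 GB figure quoted earlier; actual dump files are compressed, so treat these as upper bounds), the per-package file count with 4 splits drops accordingly:

```python
import math

table_mb = 262 * 1024  # ~262 GB, per the earlier post
splits = 4             # CFF_CONT%4
per_package_mb = table_mb / splits

# Dump files per WHR package at the old and new fs settings.
files_at_1000m = math.ceil(per_package_mb / 1000)  # fs=1000M
files_at_4000m = math.ceil(per_package_mb / 4000)  # fs=4000M

print(files_at_1000m, files_at_4000m)
```

Both counts are well under the 256-file limit; the larger fs mainly reduces the number of files to manage.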
To summarize all this, our next attempt to optimize will be to try other options in the split text file.
This is what the table looks like in the DDIC:

Field     Data element  Type  Length
MANDT     MANDT         CLNT      3
RELID     INDX_RELID    CHAR      2
PHIO_ID   SDOK_PHID     CHAR     32
SRTF2     INDX_SRTF2    INT4     10
PH_CLASS  SDOK_PHCL     CHAR     10
CLUSTR    INDX_CLSTR    INT2      5
CLUSTD    SDOK_DATAX    LRAW  32000

The first four fields are part of the primary key.
(The cluster part is in the LOB segment.)
The best way to prepare for the table split in Oracle is to use the PL/SQL splitter, whose use is described in SAP Note 1043380. Since the table has a LOB field, the best way to determine the number of splits is the number of rows in the table rather than its size, so please check the row count of the table. Also, ROWID-based splitting is not supported for cluster tables. Hence find the primary key of the table, identify the column with the highest cardinality, and split the table on that field using the PL/SQL splitter. The splitter is also optimized to calculate the ranges for cluster and pool tables, so just start with a value of 30 splits and you will find the splitter doing its best.
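Sizing the split by row count, as suggested, is simple arithmetic. The sketch below reuses the roughly 1.5 million rows mentioned earlier in the thread and the suggested starting value of 30 splits:

```python
import math

rows = 1_500_000  # approximate CFF_CONT row count from earlier in the thread
splits = 30       # suggested starting value for the PL/SQL splitter

# Rows each WHR package has to move (upper bound; the splitter's
# actual ranges per package will not be perfectly even).
rows_per_split = math.ceil(rows / splits)
print(rows_per_split)
```

Around 50,000 rows per package; since each row drags its LOB data along, row count is the fairer sizing metric here.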
Hi,
maybe you can increase the size of the export dump files as well.
I think the default might be 650 MB (so they can be burned to CD), but you can increase this
(by tweaking the cmd file, if I remember correctly).
If the target DB is Oracle as well, you might try to use Data Pump for this single table.
Volker
FYI
CFF_CONT is a special cluster table owned by the cFolders component, holding documents in different formats;
that table does use LOB segments.
Thanks to all for the answers so far...
It looks like that table has only a few fields (columns), and the Table Splitting Preparation tool
did not let me split it into more than 4; everything above that ended with a warning about too few fields
to split it that much.
Now it is interesting that this table is not found in SRM 5.5 for certain people;
maybe I will open an OSS message to SAP to learn a bit more about this table.
The business has already been asked to look into it, as archiving, if possible, sounds really good in this context.
Hello Raoul,
of course the first approach should be the one mentioned by Michael: migrate only the data that is needed and get rid of the old stuff by deleting it first.
You only have an issue with one table, and I guess your bottleneck is not the application server part (where R3load runs), so I would go for the PL/SQL package approach with sap_r3load (SAP Note 960280; unfortunately this note is under review right now).
AFAIK the table CFF_CONT (I don't have a system with that table right here) does not need to be exported sorted, so unload it unsorted and with parallel query using the sap_r3load package. The degree of parallelism depends on your database server hardware, of course, but you can rock it hard with that approach :)
Regards
Stefan
Hm, my SRM (5.5) does not even have a CFF_CONT table...
First of all I would ask the business whether data can be deleted prior to the system copy. 300 GB sounds like a lot of data; most probably you can get rid of older data, or you might need to establish data archiving.
Second, I would split the table into more than 4 packages; given the size of the table I would rather choose something like 24 packages (for performance reasons). I would also try a few dry runs to figure out the best strategy. But your problem only seems to be the file limitation per package, so with 4 packages you should get only around 70-80 files per package.
But first of all, double-check whether you really need to keep all that data.
Cheers, Michael