
Index Compression in SAP - system/basis tables?

Former Member
0 Kudos

Hi!

In another thread, the Oracle 10g feature index compression was discussed. We are now going to implement it as well. According to SAP and Oracle, this can be done for any index.

So we selected the biggest and most frequently used indexes and analyzed them. We could save about 100 GB of disk space.
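For anyone who wants to reproduce this kind of analysis: on Oracle 10g the expected saving per index can be estimated as sketched below. The index name is just one of the examples from the list further down; note that VALIDATE STRUCTURE locks the underlying table, so run it during a quiet period.

```sql
-- Estimate the compression potential of one index (example: BALHDR~0).
-- ANALYZE ... VALIDATE STRUCTURE populates the INDEX_STATS view
-- for the last analyzed index only.
ANALYZE INDEX "BALHDR~0" VALIDATE STRUCTURE;

-- OPT_CMPR_COUNT   = optimal number of prefix columns for COMPRESS
-- OPT_CMPR_PCTSAVE = estimated space saving in percent
SELECT name, opt_cmpr_count, opt_cmpr_pctsave
  FROM index_stats;
```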

But here comes my question:

In the hit list of our biggest and most frequently used indexes there are also some Basis table indexes.

A few samples:

BALHDR~0

BALHDR~1

BALHDR~2

BALHDR~3

BDCP~0

BDCP~1

BDCP~POS

BDCPS~0

BDCPS~1

CDCLS~0

CDHDR~0

D010INC~0

D010INC~1

D010TAB~0

D010TAB~1

DD01L~0

DD03L~5

DD07L~0

E071K~0

E071K~ULI

GVD_LATCHCHILDS~0

GVD_OBJECT_DEPEN~0

GVD_SEGSTAT~0

QRFCTRACE~0

QRFCTRACE~001

QRFCTRACE~002

REPOSRC~0

SCPRSVALS~0

SEOCOMPODF~0

SMSELKRIT~0

SRRELROLES~0

SRRELROLES~002

STXH~0

STXH~REF

STXL~0

SWW_CONT~0

TBTCS~1

TODIR~0

TRFCQOUT~5

USR02~0

UST04~0

VBDATA~0

VBMOD~0

WBCROSSGT~0

Is it really recommended to compress indexes of SAP Basis tables as well - especially in the area of Repository/Dictionary, t/qRFC and/or the update ("Verbuchung", VB*) tables?

Thanks for any hints and/or comments!

Regards,

Volker

Accepted Solutions (1)


Former Member
0 Kudos

Hello Volker,

I am not aware of any problems, so I see no reason to exclude the mentioned tables from index compression. Nevertheless, you have to be aware that this is a young feature in SAP environments, so only limited experience is available. Therefore - as always - it is a good idea to test the change in a non-production environment first before implementing it in production.

Furthermore: if you experience problems with a certain table, there is always the possibility to rebuild its indexes without the COMPRESS option.
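As a sketch of both directions (owner and index name below are placeholders - adjust to your schema):

```sql
-- Compress an index with a prefix of 1 leading column:
ALTER INDEX "SAPR3"."BALHDR~1" REBUILD ONLINE COMPRESS 1;

-- Revert to uncompressed if problems show up:
ALTER INDEX "SAPR3"."BALHDR~1" REBUILD ONLINE NOCOMPRESS;
```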

In any case it would be great if you could provide feedback about your experiences with index compression (particularly if there really are unforeseen problems).

Kind regards

Martin

Former Member
0 Kudos

Hi Martin!

Thanks very much for the quick reply. Of course we will test in our consolidation system first ... and I will come back here when I have news.

Regards,

Volker

Answers (1)


Former Member
0 Kudos

Hello Volker,

- in the original thread I did a test on TRFCQOUT~3, with quite good results. But Lars pointed out that the RFC tables are special regarding CBO statistics, and that automatic index rebuilds can be scheduled for them. So take care: it won't be especially bad if an index is rebuilt uncompressed, but you might want to monitor that. As far as I know, the same can be true for the change pointer tables (BDCP). Besides that, I also suggest being careful with the update tables (VB*).

- development changes and support packages might change or introduce indexes. You should be aware of this; again, this might lead to a once-compressed index being recreated uncompressed, or to a new large index which you'd rather have compressed.
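One way to catch such cases is a periodic check against the data dictionary. A sketch (the schema name SAPR3 and the 100 MB threshold are assumptions - adjust to your system):

```sql
-- List large SAP indexes that are currently NOT compressed,
-- e.g. after importing support packages or transports.
SELECT i.index_name,
       i.table_name,
       ROUND(s.bytes / 1024 / 1024) AS size_mb
  FROM dba_indexes  i
  JOIN dba_segments s
    ON s.owner        = i.owner
   AND s.segment_name = i.index_name
 WHERE i.owner       = 'SAPR3'
   AND i.compression = 'DISABLED'
   AND s.bytes       > 100 * 1024 * 1024
 ORDER BY s.bytes DESC;
```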

Best regards

Michael

Former Member
0 Kudos

Hi Michael!

Thanks for your helpful comments. Concerning the VB* indexes I also have a "not so good feeling".

Index rebuilds to an uncompressed state must be monitored. For that, I've created a modified version of report RSANAORA with some additional features.

So we can periodically schedule this report for, e.g., the 100 biggest and the 100 most frequently used indexes in the system, enabling us to find such candidates - and, along the way, potential new candidates resulting from changed system usage/behaviour.
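Outside of RSANAORA, a similar top-N list can be pulled directly from the Oracle dictionary. A sketch (SAPR3 is a placeholder schema name):

```sql
-- Top 100 biggest indexes of the SAP schema by allocated space.
SELECT *
  FROM (SELECT segment_name              AS index_name,
               tablespace_name,
               ROUND(bytes / 1024 / 1024) AS size_mb
          FROM dba_segments
         WHERE owner        = 'SAPR3'
           AND segment_type = 'INDEX'
         ORDER BY bytes DESC)
 WHERE ROWNUM <= 100;
```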

I'll keep you informed in this track.

Regards,

Volker

Former Member
0 Kudos

Hi Volker,

I have successfully tested Oracle index compression in an ECC5 sandbox environment for the following tables:

PPOIX

PCL2

PCL4

In total I saved around 60 GB in the tablespaces.

Before compression I started a payroll run to see how long it would take without compression.

After compressing the indexes I re-executed the payroll, which took exactly the same time as before (2 hours). So no impact on performance.

I also did an update of statistics in DB13 -> no impact.

With BRTOOLS: forced a statistics update for a specific table -> no impact.

So we are seriously thinking about taking this into production.

I have also looked at a BI environment but concluded that there was nothing to gain there.

Unfortunately (for compression), our InfoCubes are well built, meaning that the fact tables contain the actual data and the corresponding dimension tables only the surrogate IDs (SIDs).

Those dimension tables are actually very small (64k) and not suitable for index compression.

The next step will be some workflow tables. For example:

SWW_CONT~0      INDEX  PSAPFIN  26.583.040
SWPNODELOG~0    INDEX  PSAPFIN  15.589.376
SWWLOGHIST~0    INDEX  PSAPFIN  13.353.984
SWWLOGHIST~1    INDEX  PSAPFIN   8.642.560
SWW_CONTOB~0    INDEX  PSAPFIN   8.488.960
SWPSTEPLOG~0    INDEX  PSAPFIN   6.808.576
SWW_CONTOB~A    INDEX  PSAPFIN   6.707.200
SWWLOGHIST~2    INDEX  PSAPFIN   6.507.520
SWW_WI2OBJ~Z01  INDEX  PSAPFIN   2.777.088
SWW_WI2OBJ~0    INDEX  PSAPFIN   2.399.232
SWWWIHEAD~E     INDEX  PSAPFIN   2.352.128
SWP_NODEWI~0    INDEX  PSAPFIN   2.304.000
SWW_WI2OBJ~001  INDEX  PSAPFIN   2.289.664
SWWWIHEAD~A     INDEX  PSAPFIN   2.144.256
SWPNODE~0       INDEX  PSAPFIN   2.007.040
SWWWIRET~0      INDEX  PSAPFIN   2.004.992
SWW_WI2OBJ~002  INDEX  PSAPFIN   1.907.712

If you would like to know, I can post the results on workflow tables (indexes) on ECC6 environment.

Please reward some points if you like.

Regards,

Stephan van Loon