Database compression on SAP BW using MSSCOMPRESS

ThomasKjelsrud
Participant
0 Kudos

Hi.

We have tried out the brand new DB compression tool from SAP, MSSCOMPRESS, on our BW Sandbox system running on Windows Server 2008 and MS SQL Server 2008.

Information on the tool can be found in SAP Notes 1459005 (Enabling index compression for SQL Server) and 1488135 (Database compression for SQL Server).

It reduced the DB size from 145 GB to 43 GB, so we are pleased with that side of the story. On the other hand, it turns out that our BW consultant is unable to get any useful information out of the system.

We performed a page compression on both data and indexes. Having a look at a cube after the compression, we get a lot of "Dimension XYZ is not posted in InfoCube" warning messages. Has anyone encountered this issue with the new MSSCOMPRESS tool? I am aware that the tool has only been available for a few days, but I am sure we are not the first to try it out.
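
For reference, and as far as I understand it, the page compression we ran corresponds roughly to SQL statements like the ones below (just a sketch; the fact table name is only an example):

   -- Sketch: page-compress the table data and all of its indexes.
   ALTER TABLE [/BIC/FXYZ] REBUILD
       WITH (DATA_COMPRESSION = PAGE);

   ALTER INDEX ALL ON [/BIC/FXYZ] REBUILD
       WITH (DATA_COMPRESSION = PAGE);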

Any input would be of interest.

Kind Regards,

Thomas

Accepted Solutions (1)

leslie_moser
Explorer
0 Kudos

Hello Thomas,

Due to the individual nature of each system, the only way to determine the best method to compress the data is to test different job configurations on your systems. Most likely, the speed of your I/O system will be the determining factor in how far you can parallelize the jobs without overloading the disks.

That said, we have two suggestions that might help you. You could turn off the 'Online' option if you are doing this during a maintenance window when users are not in the system; this allows table locks to be taken, which makes the rebuilds faster (and concurrency is not a concern in that window). You can also use the Top N feature to choose which tables to process in each batch.
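
To illustrate the difference (a rough sketch only, with an example table name), the 'Online' option essentially toggles the ONLINE clause of the rebuild statements the report generates:

   -- Online rebuild: no exclusive table locks, users can keep working, but slower overall.
   ALTER INDEX ALL ON [/BIC/FEXAMPLE] REBUILD
       WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);

   -- Offline rebuild: takes table locks and is faster; suited to a maintenance window.
   ALTER INDEX ALL ON [/BIC/FEXAMPLE] REBUILD
       WITH (DATA_COMPRESSION = PAGE, ONLINE = OFF);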

Make sure to click the help icon on the MSSCOMPRESS screen (the blue box with the letter 'i' on it). My colleague wrote detailed explanations of the various options available on the screen.

Thanks,

Leslie

Answers (6)

leslie_moser
Explorer
0 Kudos

Hello,

For those of you interested in compression, please see my colleague's newly published blogs:

Choosing SQL Server compression type for SAP

http://blogs.msdn.com/b/saponsqlserver/archive/2010/09/21/choosing-sql-server-compression-type-for-s...

Compressing an SAP database using report MSSCOMPRESS

http://blogs.msdn.com/b/saponsqlserver/archive/2010/10/08/compressing-an-sap-database-using-report-m...

They should help answer a lot of your questions about the compression types and how to execute the report.

Best regards, Leslie

Former Member
0 Kudos

Hello Leslie,

I ran a compression with MSSCOMPRESS on our Microsoft SQL Server tables.

However, after the compression I do not see any change in the data size in transaction DB02 in the SAP BW system.

I want to see the change at data-size level after the tables have been compressed.

Could you please suggest how to solve this issue?

Regards

K Bhupal

jeff_matheis
Explorer
0 Kudos

Hi,

Are you compressing all tables and indexes, or just your business-specific tables and indexes? I was wondering whether there are any Basis tables, or tables in general, that should not be compressed.

Best Regards,

Jeff

ThomasKjelsrud
Participant
0 Kudos

Hi Jeff.

We are going all-in, compressing all the tables. There are a few tables that are used for queuing, and the content of these tables is volatile. Opinions probably differ on this, but from my point of view there is no harm in compressing these tables as well, as long as your system can handle the slight CPU overhead.

regards,

Thomas

leslie_moser
Explorer
0 Kudos

Hi Thomas,

I work from the West Coast of the US, so I'm roughly nine hours behind you.

And Sven is 100% correct. Your SM21 message refers to a text field, which means that index cannot be rebuilt online. The compression should still have completed; it will simply have been performed offline.

Thanks, Leslie

jeff_matheis
Explorer
0 Kudos

Did you force a rebuild of the data and index when you compressed the tables?

ThomasKjelsrud
Participant
0 Kudos

Hi Jeff.

Yes, we forced a rebuild of data and indexes.

-Thomas

Former Member
0 Kudos

Hi Thomas,

Did you run all of the tables online? We are also busy with the compression, and I have noticed some messages in SM21 stating that certain indexes could not be compressed online and must be performed offline, e.g. the index of STXL.

Regards,

CJ

ThomasKjelsrud
Participant
0 Kudos

Hi CJ.

Yes, we ran all the tables online. I have also noticed these messages in SM21, but we have not yet decided how to handle them. If anyone can help with some tips here, that would be great.

An example of these messages:

Database error 2725 at EXE
> [Microsoft][SQL Server Native Client 10.0][SQL Server]An
> online operation cannot be performed for index 'RSQWCOND~0'
> because the index contains column 'WCOND' of data type text,
> ntext, image, varchar(max), nvarchar(max), varbinary(max),
> xml, or large CLR type. For a non-clustered index, the column
> could be an include column of the index. For a clustered
> index, the column could be any column of the table. If
> DROP_EXISTING is used, the column could be part of a new or
> old index. The operation must be performed offline.
Database error 2725
Database error 2725 at EXE
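
For reference, a query along these lines (just a sketch) should list the tables with LOB columns, i.e. the ones whose indexes trigger this message and can only be rebuilt offline:

   -- Sketch: tables containing LOB columns (text, ntext, image, MAX types, xml).
   SELECT t.name AS table_name, c.name AS column_name, ty.name AS data_type
   FROM sys.tables t
   JOIN sys.columns c ON c.object_id = t.object_id
   JOIN sys.types ty ON ty.user_type_id = c.user_type_id
   WHERE ty.name IN ('text', 'ntext', 'image', 'xml')
      OR (ty.name IN ('varchar', 'nvarchar', 'varbinary') AND c.max_length = -1)
   ORDER BY t.name, c.name;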

-Thomas

Former Member
0 Kudos

Hi,

My test DB is not fully compressed yet, but I see the same errors. How about the following:

Let's run the complete compression again, but uncheck the ONLINE option, the 'Force data rebuild' option and the 'Force Index Rebuild' option. That way I would expect that only those indexes that have to be rebuilt offline are processed. The compression work done by the previous (ONLINE) run should not be repeated.

What do you think?

Regards,

Sven

ThomasKjelsrud
Participant
0 Kudos

Hi Sven.

That might work. It would be nice if Leslie Moser (she is part of the team that developed MSSCOMPRESS) could comment on this before I try it out.

-Thomas

Former Member
0 Kudos

Hi,

this is what the documentation about the report says:

3. Choose other options

You can configure the number of threads used to execute a single SQL statement with the SQL Server configuration option max degree of parallelism. For an SAP system, this is typically set to 1. To overwrite this configuration for the database compression, set the checkbox MAXDOP in report MSSCOMPRESS. This has no effect on SQL statements from other reports.

When setting the checkbox Use Online, the database compression does not acquire table locks, so concurrent access to the tables remains possible (at the cost of a longer runtime). However, this does not work for some tables, for example tables having a text or image field. Even if you have set the checkbox in MSSCOMPRESS, these tables are compressed without the online option.
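
In plain SQL Server terms, I read those two checkboxes as rebuild options roughly like the following (only a sketch; the MAXDOP value and the table name are just examples):

   -- Rough reading of the MAXDOP and Use Online checkboxes as rebuild options.
   ALTER INDEX ALL ON [/BIC/FEXAMPLE] REBUILD
       WITH (DATA_COMPRESSION = PAGE, ONLINE = ON, MAXDOP = 4);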

Do you still have uncompressed tables/indexes in your DB? According to the explanation, there shouldn't be any left.

Regards,

Sven

ThomasKjelsrud
Participant
0 Kudos

Hi.

I have checked, and there are no uncompressed tables or indexes left in the database. So, as Leslie says, the SM21 messages are just warnings that a particular index cannot be compressed online, and offline compression is applied instead.
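
For anyone who wants to do the same check directly on the database, a query along these lines (just a sketch) lists any user-table partitions that are still uncompressed:

   -- Sketch: user-table partitions whose compression setting is still NONE.
   SELECT o.name AS object_name, i.name AS index_name,
          p.partition_number, p.data_compression_desc
   FROM sys.partitions p
   JOIN sys.objects o ON o.object_id = p.object_id
   LEFT JOIN sys.indexes i ON i.object_id = p.object_id AND i.index_id = p.index_id
   WHERE o.type = 'U'
     AND p.data_compression_desc = 'NONE'
   ORDER BY o.name, i.name;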

I guess that pretty much covers my questions regarding the compression and the way it works.

Thanks for your help, everyone!

Regards,

Thomas

leslie_moser
Explorer
0 Kudos

Hello Thomas,

I work with the team that wrote this report. Can you please open a customer message in BW-SYS-DB-MSS? We would very much like to investigate this.

___________________________

Leslie Moser

Developer, Microsoft Platforms U.S.

Netweaver Development Tools

SAP Labs, LLC

ThomasKjelsrud
Participant
0 Kudos

Hi.

It turns out that the issue was related to an incorrect state of the BW sandbox system before the compression was started. The results were therefore not reliable.

We did the compression in the development system instead, and there everything works as intended. The DB savings are mind-blowing: from approx. 300 GB down to 67 GB.

We have also tried it out on our ERP sandbox, which is a fairly recent system copy of production, and we went from 2.1 TB to 466 GB. Impressive! Good work, you guys!

One thing:

One batch process for all tables took about 84 hours to complete, which is far too long.

Therefore we are planning a more aggressive approach for the rest of the ERP and BW systems. Instead of running a single batch process that compresses all the tables, we are going to set up 4-6 parallel jobs to crunch through the compression. There are about 10-12 tables in ERP that contain approx. 40% of the total database size, and I would think splitting these tables across 3-4 separate jobs will reduce the runtime significantly. The rest of the tables will be processed by one or two jobs.
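
To pick out those big tables for the dedicated jobs, a query along these lines (just a sketch) should list the largest tables by reserved space:

   -- Sketch: the ten largest tables by reserved space, candidates for their own jobs.
   SELECT TOP 10
          o.name AS table_name,
          SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
   FROM sys.dm_db_partition_stats ps
   JOIN sys.objects o ON o.object_id = ps.object_id
   WHERE o.type = 'U'
   GROUP BY o.name
   ORDER BY SUM(ps.reserved_page_count) DESC;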

I am happy to receive any tips on the proposed approach.

Regards,

Thomas