SQL Server Compression and Proportional Fill

Former Member


Our SAP databases run on SQL Server 2012. As you are surely familiar with proportional fill, we did a major reorganization of our SAP databases to ensure it, and also to distribute each database over multiple files so as to make use of the many CPU cores we have in our system.

We did the reorganization using R3load, with a complete export/import of the databases, and also applied the new default page compression, reducing our database sizes by up to 70%.
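As a side note for anyone planning a similar reorg: SQL Server can estimate the page compression savings per table before committing to a full export/import. A minimal sketch, assuming the table sits in the `dbo` schema (SAP installations may use a different schema, e.g. the SID user):

```sql
-- Estimate PAGE compression savings for the large audit table before
-- committing to a full R3load export/import. NULL for @index_id and
-- @partition_number covers all indexes and partitions of the table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',            -- schema name is an assumption
    @object_name      = N'/SAPSLL/SPLAUD',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';
```

The procedure samples the table into tempdb, so it runs quickly even on large tables, but the result is an estimate rather than an exact figure.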

We were able to achieve this on all of our 20+ systems except GTS. In the GTS system, which is around 400 GB in size, one table, /SAPSLL/SPLAUD, is over 180 GB. When we try an R3load export/import, this table alone takes over 40 hours. We tried the table split option in R3load for the export, but it does not split the table much and thus does not reduce the export/import time. We have no limitation on the number of CPUs or the amount of resources we can throw at this. We have put the source on an SSD and the target on an FC drive, but the total export and import time still does not get below 35 hours. This is an outage we cannot allow.

This table is ever growing, and eventually we will start seeing performance issues due to the distribution of the data across datafiles. Currently we have only 6 datafiles, and 50% of the data is occupied by this table, spread across those 6.

Has anyone faced this before, and can you advise on how we can handle this problem? I am looking for the fastest way to load/unload this table.

Apart from using R3load, is there any other supported way I can add more datafiles and spread this table across them without causing downtime?

Adding a comment here: we have considered archiving, but this table holds audit-related data. There is a big debate in progress with our legal and compliance team on when we can delete or archive it, so that may not happen immediately.

regards

Yogesh

Accepted Solutions (0)

Answers (2)


Former Member

Thank you for this information. Does migration from one filegroup to another need downtime, or can it be done online? If it needs downtime, I would rather do a complete R3load unload/load of the DB and realign all tables.

Also, per SAP Note 1704403, non-default filegroups are not supported by SAP. I would be hesitant to apply this procedure in my production system.

Yogesh

clas_hortien
Employee

As this is a normal index operation, the online option works the same as for any other index. The only limitation concerns tables with a BLOB column (nvarchar(max), varchar(max), varbinary(max)): those tables have to be rebuilt offline; all other tables can be done online.
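Whether a given table falls under the offline restriction described above can be checked up front with a catalog query. A sketch, assuming the table lives in the `dbo` schema (adjust to the actual SAP schema):

```sql
-- List LOB columns (nvarchar(max), varchar(max), varbinary(max), xml,
-- plus legacy text/ntext/image) of the table. If any rows come back,
-- the index rebuild has to run offline under this restriction.
SELECT c.name AS column_name,
       t.name AS type_name
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.[/SAPSLL/SPLAUD]')   -- schema is an assumption
  AND (c.max_length = -1                                 -- -1 marks (max) and xml types
       OR t.name IN (N'text', N'ntext', N'image'));
```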

Former Member

I will test this in one of our non-production systems and see how it goes. I wonder how it will migrate a 180 GB table/index without impacting its availability/performance.

I will also check with SAP whether they support this operation in a productive environment. I doubt that, though.

It is a great workaround, by the way. Thank you.

Yogesh

former_member184473
Active Contributor

Hi Yogesh,

Have you searched for information about the space growth of table /SAPSLL/SPLAUD?

Regards,

Eduardo Rezende

Former Member

As I said, we have looked into archiving the table, and I agree that is the best way to address this issue, but it is not going to happen immediately. Deleting data from this table is not an option, as it tracks the audit trail for ECC documents, which we need to keep for some years.


clas_hortien
Employee
Employee

Hi,

The first question is: does this table have a BLOB column (e.g. NVARCHAR(MAX))? And if yes, is the BLOB column the one that makes the table so big? If it has such a column (which I assume), the table will not compress very well, as only the non-BLOB columns are compressed.
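Whether LOB data is what makes the table so big can be answered from the partition stats DMV, which reports in-row, LOB, and row-overflow pages separately. A sketch, again assuming the `dbo` schema:

```sql
-- Check whether LOB data dominates the table's size: page counts per
-- storage type from sys.dm_db_partition_stats (8 KB per page, shown in MB).
SELECT p.index_id,
       SUM(p.in_row_data_page_count)       * 8 / 1024 AS in_row_MB,
       SUM(p.lob_used_page_count)          * 8 / 1024 AS lob_MB,
       SUM(p.row_overflow_used_page_count) * 8 / 1024 AS row_overflow_MB
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID(N'dbo.[/SAPSLL/SPLAUD]')  -- schema is an assumption
GROUP BY p.index_id;
```

If `lob_MB` dwarfs `in_row_MB`, page compression will gain little, since LOB pages are not compressed.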

You can try a different approach, which we have used several times at other customers to move/compact such big tables:

  • Create a second filegroup with some files.
  • Run a CREATE INDEX command on that table, specifying the new filegroup
    (CREATE INDEX ... WITH (DROP_EXISTING = ON) ON [new filegroup]).
  • This moves the table from the current PRIMARY filegroup to the new files.
  • Add files to your PRIMARY filegroup.
  • Repeat the CREATE INDEX command, this time specifying the PRIMARY filegroup again (and the compression options).
  • This moves the table back to the PRIMARY filegroup and spreads it over all of its files, including the new ones.
  • Drop the now-empty filegroup and its files.
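The steps above can be sketched in T-SQL roughly as follows. All database, file, filegroup, and path names are placeholders; the clustered index name and key columns of /SAPSLL/SPLAUD are assumptions and must be taken from the actual system (SAP-style index names usually end in `~0`):

```sql
-- 1. Create a temporary filegroup with a few files (names/paths hypothetical).
ALTER DATABASE [GTS] ADD FILEGROUP [TEMPMOVE];
ALTER DATABASE [GTS] ADD FILE
    (NAME = TEMPMOVE1, FILENAME = N'H:\TEMPMOVE1.ndf', SIZE = 100GB),
    (NAME = TEMPMOVE2, FILENAME = N'H:\TEMPMOVE2.ndf', SIZE = 100GB)
TO FILEGROUP [TEMPMOVE];

-- 2. Rebuild the clustered index onto the new filegroup; DROP_EXISTING
--    moves the whole table. ONLINE = ON only works without LOB columns.
CREATE UNIQUE CLUSTERED INDEX [/SAPSLL/SPLAUD~0]   -- index name is an assumption
ON dbo.[/SAPSLL/SPLAUD] (MANDT, GUID_AUD)          -- key columns are assumptions
WITH (DROP_EXISTING = ON, ONLINE = ON, SORT_IN_TEMPDB = ON)
ON [TEMPMOVE];

-- 3. Add the additional files to PRIMARY while the table is out of it,
--    so new allocations will later spread evenly by proportional fill.
ALTER DATABASE [GTS] ADD FILE
    (NAME = GTSDATA7, FILENAME = N'G:\GTSDATA7.ndf', SIZE = 50GB)
TO FILEGROUP [PRIMARY];

-- 4. Rebuild back onto PRIMARY with page compression; the table is now
--    spread over all PRIMARY files, including the new ones.
CREATE UNIQUE CLUSTERED INDEX [/SAPSLL/SPLAUD~0]
ON dbo.[/SAPSLL/SPLAUD] (MANDT, GUID_AUD)
WITH (DROP_EXISTING = ON, ONLINE = ON, DATA_COMPRESSION = PAGE,
      SORT_IN_TEMPDB = ON)
ON [PRIMARY];

-- 5. Remove the temporary files and filegroup once they are empty
--    (DBCC SHRINKFILE (name, EMPTYFILE) may be needed first).
ALTER DATABASE [GTS] REMOVE FILE TEMPMOVE1;
ALTER DATABASE [GTS] REMOVE FILE TEMPMOVE2;
ALTER DATABASE [GTS] REMOVE FILEGROUP [TEMPMOVE];
```

Note that any non-clustered indexes on the table stay where they are; only the clustered index (i.e. the table data) is moved by these rebuilds.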

Best regards

  Clas

Matt_Fraser
Active Contributor

Innovative!  I like it.