IQ vs DS loading mode


Hi everyone,

We started a project with SAP IQ 16 & SAP Data Services 4.2.

I created a database for SAP IQ and started developing the SAP Data Services jobs.

But I have an issue with data loading into IQ.

When I try to load without the bulk loader, I get the following error message:

ODBC data source error message <sapiq.dbsys.lan> for operation <SQLExecute> : <[Sybase][ODBC Driver][Sybase

You have run out of space in IQDB_USR_DBSPC DBSpace.

But if I activate bulk loading, the job runs perfectly.

The IQDB_USR_DBSPC dbspace is 2 GB, and I'm only trying to load 5 MB of data, so it seems unlikely to be a size issue.
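
For reference, actual dbspace usage can be checked from SQL (a sketch: sp_iqdbspace is the standard IQ procedure, IQDB_USR_DBSPC is the dbspace named in the error, and the ADD FILE logical name and path are hypothetical):

    -- Show how full each dbspace is
    sp_iqdbspace;

    -- If the dbspace really is full (e.g. due to version build-up),
    -- it can be extended with an additional file
    ALTER DBSPACE IQDB_USR_DBSPC
        ADD FILE usr_f2 '/iq/data/usr_f2.iq' SIZE 2048 MB;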

I've granted the necessary rights to iquser on the dbspace.

I'm looking for any hints.

Thanks

Guillaume

Accepted Solutions (1)


Hi Everyone,

We changed the dbspace from a file to raw devices and used bulk loading for the batch jobs, which resolved the error.
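
For anyone curious, moving the dbspace onto raw devices amounts to adding the raw partition as a dbspace file (a sketch: the logical name and device path are hypothetical; for a raw partition IQ uses the whole device, so no SIZE clause is given):

    -- Add a raw partition to the existing user dbspace
    ALTER DBSPACE IQDB_USR_DBSPC
        ADD FILE usr_raw1 '/dev/rdsk/c0t1d0s3';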

@Chris Baker We stayed in bulk-loading mode; you just have to find the right workaround in SAP DS, i.e. create a template table, start the job, stop it via the DS console, import the table, and then activate bulk loading. It works fine.

@Daniel B Indeed, I saw the same behavior: a small table load took 4 hours in regular mode and 20 seconds with bulk load.

Thanks for your input!

Answers (2)

Former Member

Just a small example to add to what Chris said:

We were also loading SAP IQ through Data Services, a table of approximately 5 million rows. Without the bulk load option, we stopped the job after it had run for 25 hours: DS was writing about 1,000 rows per minute into the database while system I/O was at 100%, writing 80 MB/s.

With the bulk load option activated, the whole load took 11 seconds.

Best,

Daniel

c_baker
Employee

IQ works best when loads and deletes are bulk operations. Even updates are best set up as a bulk delete followed by a bulk insert.
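
For illustration, a server-side bulk load in IQ looks roughly like this (a sketch: the table, columns, and file path are hypothetical, and the file must be readable by the IQ server itself):

    -- Bulk load straight from a flat file into the IQ column store
    LOAD TABLE sales_fact ( order_id, product_id, amount )
        FROM '/iq/staging/sales_fact.csv'
        DELIMITED BY ','
        ROW DELIMITED BY '\n'
        ESCAPES OFF
        QUOTES OFF
        NOTIFY 100000;   -- report progress every 100,000 rows
    COMMIT;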

When you load without the bulk loader, you are invoking row-by-row inserts. This is most likely causing a version build-up in your user dbspace. To reduce this, you can set commits to occur more often (commit after every 'n' rows).

This is still inefficient and slow.

Unless there is a reason not to (e.g. concurrency requirements), stick to bulk loads, or enable the RLV dbspace and enable RLV on the table in question. This will still be slow but will allow the row-by-row inserts to work.

You will still have to size it accordingly and ensure the complete transaction for the load fits into the RLV cache memory (or use commits after every 'n' rows to help reduce the memory requirement).
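
A minimal sketch of the RLV setup described above (the dbspace name, logical file name, path, and size are assumptions; the server also needs RLV memory configured, e.g. via the -iqrlvmem startup option):

    -- Create the RLV dbspace (one per database)
    CREATE DBSPACE rlv_dbsp USING FILE rlv_f1 '/iq/data/rlv_f1.iq'
        SIZE 4096 MB IQ RLV STORE;

    -- Register the table for row-level versioning so row-by-row
    -- inserts go through the in-memory RLV store
    ALTER TABLE my_table ENABLE RLV STORE;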

Chris