Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Issue with INSERT statement

Former Member
0 Kudos

Hi,

We have a program that is executed through multiple batch jobs in parallel. It uploads data from the application server and puts it into different SAP tables using the INSERT statement. We have error reporting in place to identify anything that is not updated properly.

We are facing strange behaviour with this process. Although we do not receive any error messages, some data is not inserted into the tables. The missing records are inserted properly once they are reprocessed. I have no idea how this is happening or how to rectify it.

Your help is much appreciated to resolve this issue.

1 ACCEPTED SOLUTION

Former Member
0 Kudos

Hi,

With the information you have supplied, it is going to be very difficult for someone who knows nothing about your solution, has no access to your system, and does not know what your data is to be of any real help. All people can do is guess at the problem.

You really need to break down the problem and take a structured approach to the issue. The first thing you need to do is determine what scenario (which jobs in parallel, which data, which user, which day of the week, which way the wind is blowing, etc.) causes your problem to happen. Once you have done this, you can start to investigate further.

Cheers,

G.

26 REPLIES 26

former_member202818
Active Contributor
0 Kudos

Hi Anupam,

Check whether the database already contains a line with the same primary key.
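For reference, this is how a duplicate key actually shows up; the table ZOUTPUT and the variables below are only placeholders, not the poster's code:

DATA ls_out TYPE zoutput.
DATA lt_out TYPE STANDARD TABLE OF zoutput.

* A single-row INSERT does not dump on a duplicate key, it just sets sy-subrc = 4.
INSERT zoutput FROM ls_out.
IF sy-subrc = 4.
  " a row with the same primary key already exists
ENDIF.

* An array INSERT raises the runtime error SAPSQL_ARRAY_INSERT_DUPREC on a
* duplicate key, unless ACCEPTING DUPLICATE KEYS is added, which skips the
* duplicates and sets sy-subrc = 4 instead of dumping.
INSERT zoutput FROM TABLE lt_out ACCEPTING DUPLICATE KEYS.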

Regards

0 Kudos

Hi Sreekanth,

This is not the case. If it had been the case:

1. There would have been an ABAP dump.

2. The reprocessing would have failed.

0 Kudos

You can check that sy-dbcnt = lines( it_tab ) to ensure all records were inserted.
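A minimal sketch of that check, assuming a hypothetical output table ZOUTPUT; it_tab stands for the internal table being inserted:

DATA it_tab TYPE STANDARD TABLE OF zoutput.

DATA(lv_expected) = lines( it_tab ).

INSERT zoutput FROM TABLE it_tab.

IF sy-subrc = 0 AND sy-dbcnt = lv_expected.
  " every row of it_tab was handed to the database interface
ELSE.
  " write both counts to the job log for later analysis
  MESSAGE i001(00) WITH 'Expected' lv_expected 'inserted' sy-dbcnt.
ENDIF.

Keep in mind that sy-dbcnt only says how many rows the INSERT processed; the rows are durable in the database only once the following COMMIT WORK goes through.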

0 Kudos

All of these are successful. The only thing is that it does not actually INSERT the values into the tables.

Former Member
0 Kudos

Hi Anupam,

If the batch jobs are scheduled by different users, look into a database lock scenario. Otherwise, maybe for the records that are not inserted some condition is failing, i.e. before the record is inserted into the table, another process checks a condition for that record and rejects it.

You can check this in debugging mode: in SM37, select the job you want to debug and enter the transaction code 'JDBG' in the command field, without /n.

Regards ,

Krishna

0 Kudos

Hi Krishna,

Can you please explain how a database lock works with the INSERT statement? To my knowledge, database locks do not affect the INSERT statement.

No conditions before the INSERT are failing; otherwise it would have appeared in the batch log we are populating.

This is random behaviour and we have 18 batch jobs running at a time, so we do not know which one might be causing the issue.

0 Kudos

Hi Anupam,

It seems that for some reason, while splitting the data across the parallel jobs, you are missing those records when processing in bulk.

Please scan that piece of code where you are splitting the data.

Check the SORTs, AT NEW or AT END OF, and similar control-break logic; see the sketch below.
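As a rough illustration of the kind of control-break logic to re-check (all names here are hypothetical, not the actual program): AT NEW / AT END OF only group correctly if the internal table is sorted by the fields up to and including the control level, so a missing or wrong SORT can silently misassign records to packages.

SORT lt_ids BY id.

LOOP AT lt_ids INTO DATA(ls_id).
  AT NEW id.
    " open a new package for this id
  ENDAT.

  " ... collect ls_id into the package that will go to a batch job ...

  AT END OF id.
    " close the package and hand it over for processing
  ENDAT.
ENDLOOP.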

Cheers!!

Raju Shrestha

ritesh_inamdar
Explorer
0 Kudos

Hi Anupam,

This must be a primary key issue. Please check whether the data you are trying to insert is already in the database.

-- Ritesh

Former Member
0 Kudos

Hi,

With the information you have supplied, it is going to be very difficult for someone who knows nothing about your solution, has no access to your system, and does not know what your data is to be of any real help. All people can do is guess at the problem.

You really need to break down the problem and take a structured approach to the issue. The first thing you need to do is determine what scenario (which jobs in parallel, which data, which user, which day of the week, which way the wind is blowing, etc.) causes your problem to happen. Once you have done this, you can start to investigate further.

Cheers,

G.

former_member201275
Active Contributor
0 Kudos

I agree with Gareth. Your question is very vague and completely lacking in detail. You should investigate a bit yourself first, do some debugging, etc., and then post something more concrete here.

Former Member
0 Kudos

Hi Gareth & Glen,

Thanks for your reply. Apologies, I could not give the full story earlier. I'll try to explain the issue in more detail.

Please note that this has already been debugged thoroughly and the issue is random; we do not have a specific pattern. The solutions most people have mentioned, such as primary key checks or missing records while splitting the jobs, are definitely not the case. Anyway, let me try to explain:

Background: We are in the SIT phase and this is purely a custom development; no standard applications are involved. We have certain Z-tables (30-40 tables) which store a kind of master data. We have some Z-tables (10-12) which store different kinds of rules and formulas to work with those master data tables. And we have 5-6 more output Z-tables which are populated by our custom program using that master data and those rules. The number of records in these tables is huge, as they cover different product configurations. Based on some criteria we have set up 18 batch jobs at a time (as agreed with Basis), and at the end of each job another job starts. In this way we have around 80-90 batch jobs running every night. Currently we run it every night to test the process, and the next morning the business people validate the output. This will be a one-time activity at cutover.

What the program does: The program runs for different IDs (relevant to this application) and we run it for 100-120 IDs in each batch job. We process the master data and rules for each ID and put the resulting records into the 5-6 output Z-tables. For each ID, we have around 300-400 records to be inserted into each of those output tables. So, for each batch job, values are inserted into the output tables once per ID. In short, the INSERT statement is executed 100-120 times with 300-400 records each time. Any errors during processing or the database update are written to the job log.

Issue: We do not find any error messages in the batch job log, but some of the records are not inserted into the tables. Records are inserted partially for an ID; it's not that we miss all the records for a particular ID. When we reprocess those IDs, some are inserted successfully and some are not, so we reprocess them again. After reprocessing 2-3 times, all are inserted successfully.

I have tried to explain as much as I can. This is a fully custom application; if it had been part of a standard SAP application, I could have explained it better.

Now, my questions:

1. We are only using the INSERT statement, so this should not be a locking issue.

2. Is there any limit on the number of records we can INSERT in a single LUW? The help on the INSERT statement suggests something like that.

3. We are using a COMMIT after each INSERT. Does that have any effect?

Thanks for your patience and help.

Regards,

Anupam

0 Kudos

Now, my questions:

1. We are only using the INSERT statement, so this should not be a locking issue.

That's much better, thank you, but I'm still very much guessing at the problem. What DB is this, and what about the possibility of "dirty reads" influencing the logic...

From here:



  • Uncommitted read (also known as dirty read)
    A program that uses an uncommitted read to read data does not set locks for data objects. For this reason, programmers must bear in mind that their programs might read data that is still protected by an exclusive lock and has not yet been committed to the database by a database commit and could therefore still be deleted from the database by a database rollback.

(..)

  • The SAP HANA database and Oracle databases do not support uncommitted reads; here, only committed reads are configured.

  • Uncommitted reads are the standard setting on all other databases. Here, the level can be set to committed read using the function module DB_SET_ISOLATION_LEVEL. Calling the function module DB_RESET_ISOLATION_TO_DEFAULT or the next database commit or rollback resets the isolation level to the standard setting.


If it's not Oracle or HANA, I'd say set the isolation level to committed reads explicitly for all one-time mass activities, just to be on the safe side...

And I'd double-, triple- and quadruple-check that all the exception processing really is foolproof. That includes testing the exception processing; "squinting at the code" attentively is not enough, appearances can be deceiving.
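As an illustration only, with placeholder names rather than the poster's actual code, explicit error handling around each insert could look something like this:

TRY.
    INSERT zoutput FROM TABLE lt_out.
    IF sy-subrc <> 0.
      " record the failing id and the row count in the job log
      MESSAGE i001(00) WITH 'INSERT failed for id' lv_id sy-dbcnt.
    ENDIF.
    COMMIT WORK AND WAIT.
  CATCH cx_sy_open_sql_db INTO DATA(lx_sql).
    " a database error (e.g. a duplicate key on an array insert) lands here
    " instead of producing a short dump
    DATA(lv_reason) = lx_sql->get_text( ).
    MESSAGE i001(00) WITH 'DB error for id' lv_id lv_reason.
    ROLLBACK WORK.
ENDTRY.

One scenario worth ruling out: if anything between the INSERT and its COMMIT WORK triggers an implicit or explicit rollback, rows that were inserted but not yet committed disappear without any error from the INSERT itself, which would match the symptom described.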

cheers

Jānis

0 Kudos

Does the system log SM21 say anything meaningful for the time frame when your process is running and showing the described symptom?

Deadlocks, "strange" SQL errors...

Any dumps in ST22?


Thomas

0 Kudos

Hi Janis,

Thanks for your reply. It's an Oracle database. And there is no READ performed after the INSERTs are executed.

We have analysed the code multiple times and executed it in the foreground with a small set of IDs, but could not find any issues. Maybe we are missing something. We need to increase the logging to analyse it further. 😞

Regards,

Anupam

0 Kudos

Thomas,

There are no dumps. I'll check the SM21 log myself tomorrow morning, though the basis team did not find anything unusual.

The jobs are set up for today. We are running each job with fewer IDs this time, increasing the total number of batch jobs. Let's see what's waiting for us tomorrow morning.

Anupam

0 Kudos

You mention the problem has been debugged through - have you had a few developers debugging the jobs in real time, line by line, preferably with someone else looking over their shoulder to see if anything odd is happening?

Once, a long time ago, I was in this situation with a batch job that took ages to run but was having weird issues. It ended up with three of us staring at it line by line to reconcile the records being processed (we called it MSEG bingo). It wasn't fun but we got there in the end.

One thought I have just had: you mention you think there are problems with the INSERTs - are you sure all of the logic prior to the INSERTs (SELECTs, internal table processing, etc.) is correct? I'm just wondering whether your data is being corrupted somehow prior to the insert stage, especially as you say you sometimes get only some of the records for an ID inserted.

Also, do your programs make use of any shared memory, maybe in function groups or memory parameters? Is there any possibility that your multiple program runs are clashing with each other, again whilst building up the data before the INSERTs?

Good luck!

G.

0 Kudos

With Oracle, I could suspect some ORA-00060 deadlocks due to the "genetic" inability of Oracle to manage concurrent inserts. Are you sure there are no such messages in SM21 or in the Oracle logs?

How are errors managed in your programs? Is sy-subrc checked, or is there any TRY/CATCH/ENDTRY (*)? Are there any database ROLLBACKs or ROLLBACK WORK statements in the code? Where do the primary keys come from? Do you use some lock mechanism? Can you give more information? Where is my crystal ball?

Regards,

Raymond

(*) If there are none, shame on you

0 Kudos

Raymond,

We could not find anything in SM21. For the Oracle log we depend on the Basis team, but we did not get anything positive from them.

We check sy-subrc after each INSERT and we do not have any ROLLBACK in our code. We have tried using a lock before the inserts, waiting if it is already locked, and we have error reporting in place for that. But no errors are reported in this part.

Could you please explain what you meant by the inability of Oracle to manage concurrent inserts?

Regards,

Anupam

Former Member
0 Kudos

Thank you all for your help.

Just to update: we ran the batch jobs last night with fewer IDs in each job (50 IDs instead of the 100 we used earlier). We had 144 batch jobs for 7200 IDs, with 18 jobs running at a time. And we had NO ERRORS.

But this morning we executed another 55 jobs for 2750 IDs, which ended with 50 errors.

Now, where do you all think the problem is? Is it something related to a memory issue?

Regards,

Anupam

0 Kudos

OK, so it looks like reducing the number of jobs and/or IDs helps. So you have one option, and that is to go to the live system with limited data and jobs. I suspect that won't be an answer that helps though!

You mention you have a combination of master tables, rules tables and output tables - is there any chance at all that either you have a deadlock reading your master and/or rules tables or the multiple jobs are clashing when trying to write to the output tables?

If it doesn't happen when a job is run on its own, there must be some sort of deadlock/clash, whether in the select stage or the update stage, and it could well be either in the ABAP or in the database.

Have you tried using locks against objects when reading them?

Cheers,

G.

0 Kudos

Gareth,

We checked the locks while putting data into the tables, but did not check locks while reading the master data and rules data. We'll try that as well.

But what I do not understand: everything was successful last night, yet this morning we received errors with the same number of IDs in each batch job. It gives us a feeling that it might be something to do with memory, as many users were logged on by morning. We expected more help from our Basis team. But....

Anyway, we have introduced more logging in the program. We'll add the number of records in the internal tables to the batch logs so that we can identify whether the problem is with the INSERT statement. That will definitely guide us in the right direction.

Regards,

Anupam

0 Kudos

We'll add the number of records in the internal tables to the batch logs so that we can identify whether the problem is with the INSERT statement.

For logging such technical details, SAP has provided the LOG-POINT construct. I think this is a valid use case for log points; a minimal sketch follows below.
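A minimal sketch of such a log point, assuming a hypothetical checkpoint group ZBATCH_INSERTS (created and activated in transaction SAAB; the log point records nothing while the group is inactive) and placeholder variables lv_id and lt_out:

DATA(lv_expected) = lines( lt_out ).

LOG-POINT ID zbatch_inserts
  SUBKEY lv_id
  FIELDS sy-dbcnt lv_expected.

The recorded values can then be reviewed per subkey in SAAB, without touching the job log at all.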

Are you performing the inserts/updates directly in the program, or are they part of update function modules?

BR,

Suhas

0 Kudos

Suhas,

I have never used LOG-POINT before; I will try it. For the time being, I hope the batch job log will serve our purpose.

The INSERTs are done directly in the program, not in any update module. And we use a COMMIT after each set of INSERTs.

Regards,

Anupam

Former Member
0 Kudos

Hello Anupam,

This could also be an issue if you are not passing sy-mandt in the insert statement.

0 Kudos

Navneet Kumar wrote:

Hello Anupam,

This could also be an issue if you are not passing sy-mandt in the insert statement.

Have you ever heard of the term "automatic client handling"?
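For clarity, and with placeholder names only: in Open SQL the client column of a client-dependent table is supplied automatically from sy-mandt, and whatever is in the work area's MANDT field is ignored unless client handling is switched off explicitly.

INSERT zoutput FROM ls_out.                   " client column filled automatically with sy-mandt
INSERT zoutput CLIENT SPECIFIED FROM ls_out.  " only here is ls_out-mandt actually used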

MariaJoãoRocha
Contributor
0 Kudos

Hi,

Is there any secondary index that is defined as unique?

Regards,

Maria João Rocha