09-04-2015 9:26 AM
Dear Experts,
Since I have to process a huge number of records when selecting the stock information from MARD, I am using the OPEN CURSOR logic to fetch the data from the MARD table in packets of 10K records.
I have encapsulated my processing logic in an RFC-enabled FM and do the parallel processing by means of the STARTING NEW TASK addition, running parallel threads so as to improve the performance of the program.
However, STARTING NEW TASK issues an implicit commit, due to which the cursor opened earlier with OPEN CURSOR WITH HOLD is getting closed, and when the program encounters FETCH NEXT CURSOR in a subsequent iteration it dumps, saying the cursor is already closed.
Kindly provide your valuable inputs on how I can keep the cursor open while running the parallel threads.
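In outline, the pattern that dumps looks like this (the FM name 'Z_PROCESS_STOCK', the task naming, and the RFC group are illustrative; the packet would of course be passed via the FM interface):

```abap
DATA: lv_cursor TYPE cursor,
      lt_mard   TYPE TABLE OF mard,
      lv_task   TYPE string.

OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM mard.

DO.
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_mard
    PACKAGE SIZE 10000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  lv_task = sy-index.
  CONDENSE lv_task.
* Parallel processing of the packet in an RFC-enabled FM
  CALL FUNCTION 'Z_PROCESS_STOCK'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT.
* The next FETCH NEXT CURSOR then dumps with "cursor already closed"
ENDDO.
```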
Thanks.
Regards,
Srini
09-04-2015 10:13 AM
Hi,
Are you using any WAIT UNTIL command in the RECEIVE FM/Method?
R
09-04-2015 11:01 AM
Yes, I am using the WAIT UNTIL command right after the RFC FM call, but not within the FORM routine where I RECEIVE RESULTS from the RFC FM.
I have to collate the information from all the parallel threads and update a custom table once the threads are processed.
09-04-2015 11:06 AM
Your call to the FM does not issue an implicit commit; rather, the WAIT UNTIL command does. I think that is what forces the cursor to close prematurely.
You can keep updating your custom table as you go. I am not sure why you have to wait until all the processes are finished. That way the "commit" problem should not be there.
R
09-04-2015 11:30 AM
Hi Rudra,
Thanks a lot for your inputs.
One other reason for using WAIT UNTIL is to check that all the parallel threads are complete before fetching the next set of records from MARD for processing. The custom table I update after the parallel processing step also acts as a lookup in subsequent iterations, to avoid processing Material/Plant combinations that were already processed in previous iterations.
Could you please suggest whether there is a better way to check if all the threads are completed?
Additionally, I have used a WAIT of 5 seconds to get available work processes: if none of them are free, the program waits for a maximum of 5 seconds and then checks for free work processes again.
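That retry loop looks roughly like this (FM name and task naming are illustrative; note the caveat that WAIT UP TO itself also triggers an implicit DB commit):

```abap
DO.
  CALL FUNCTION 'Z_PROCESS_STOCK'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    PERFORMING receive_results ON END OF TASK
    EXCEPTIONS
      resource_failure      = 1
      system_failure        = 2
      communication_failure = 3.
  IF sy-subrc = 0.
    EXIT.
  ENDIF.
* No free work process in the RFC group: pause and retry.
* Caution: WAIT UP TO also triggers an implicit DB commit.
  WAIT UP TO 5 SECONDS.
ENDDO.
```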
Regards,
Srini
09-04-2015 11:51 AM
You could keep a count variable of the number of parallel calls you are making and, in the receive routine, check it against a second (static) variable that you increment each time the receive routine is called, or use the GET/SET technique for the same purpose. Comparing these two variables tells you whether all the calls have completed or not.
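A minimal sketch of that counter technique, assuming a hypothetical RFC-enabled FM 'Z_PROCESS_STOCK':

```abap
REPORT z_counter_sketch.

DATA: gv_sent TYPE i,   " parallel calls dispatched
      gv_done TYPE i.   " callbacks received

START-OF-SELECTION.
  DATA lv_task TYPE string.
  DO 3 TIMES.
    lv_task = sy-index.
    CONDENSE lv_task.
    gv_sent = gv_sent + 1.
    CALL FUNCTION 'Z_PROCESS_STOCK'   " hypothetical RFC-enabled FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING receive_results ON END OF TASK.
  ENDDO.
* Callbacks only run while the program is in a WAIT or RFC handling
  WAIT UNTIL gv_done = gv_sent.

*&---------------------------------------------------------------------*
*&      Form  RECEIVE_RESULTS
*&---------------------------------------------------------------------*
FORM receive_results USING p_task TYPE clike.
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_STOCK'.
  gv_done = gv_done + 1.
ENDFORM.                    " RECEIVE_RESULTS
```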
09-04-2015 1:20 PM
Some tricks, but use the search tool; cursors and parallel execution are an FAQ.
Regards,
Raymond
09-10-2015 11:33 AM
Hi Raymond,
Thanks for your response.
Actually it was pointed out to me that using WAIT UNTIL issues the implicit commit, which is causing my OPEN CURSOR to go for a short dump.
Alternatively, Rudra suggested that I can check a STATICS counter inside the RECEIVE RESULTS routine.
However, I am not sure how I can ensure that all parallel calls are completed before I move on to the next fetch using OPEN CURSOR.
Kindly let me know your thoughts on handling WAIT UNTIL while working with the OPEN CURSOR logic.
09-10-2015 12:09 PM
Hi Srini,
Would you mind if I ask you to separate the task of getting the data from processing it?
1. First, fetch all the data into an internal table (this can be done using the cursor), filling everything into the internal table until the cursor finishes fetching.
2. Next, take records from your internal table, e.g. 10K records each time, and call the FM in a new task.
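A rough sketch of that two-step split (the FM 'Z_PROCESS_STOCK' and its TABLES parameter are placeholders):

```abap
DATA: lt_all    TYPE TABLE OF mard,
      lt_packet TYPE TABLE OF mard,
      lv_cursor TYPE cursor.

* Step 1: drain the cursor completely into one internal table
OPEN CURSOR WITH HOLD lv_cursor FOR SELECT * FROM mard.
DO.
  FETCH NEXT CURSOR lv_cursor
    APPENDING TABLE lt_all PACKAGE SIZE 10000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
ENDDO.
CLOSE CURSOR lv_cursor.

* Step 2: dispatch packets of 10K records; no cursor is open any
* more, so the implicit commits of the RFC calls cannot close it
DATA: lv_from TYPE i VALUE 1,
      lv_task TYPE string.
WHILE lv_from <= lines( lt_all ).
  CLEAR lt_packet.
  APPEND LINES OF lt_all FROM lv_from TO lv_from + 9999 TO lt_packet.
  lv_task = lv_from.
  CONDENSE lv_task.
  CALL FUNCTION 'Z_PROCESS_STOCK'   " hypothetical RFC-enabled FM
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    TABLES
      t_mard = lt_packet.
  lv_from = lv_from + 10000.
ENDWHILE.
```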
09-10-2015 12:47 PM
Did you use the WITH HOLD option, which protects the cursor against DB commits, as I suggested?
09-10-2015 4:20 PM
Hi Syed,
Thanks for your reply.
Actually the data can range from 2 to 4 million records, so the internal table might exceed the memory limit if I try to fill the entire data set in one shot. Correct me if I'm wrong.
09-10-2015 4:21 PM
Yes, I already have the WITH HOLD addition on OPEN CURSOR, but WAIT UNTIL still gives the problem when I do the parallel processing.
09-11-2015 1:04 PM
Try it this way.
This will call the FM n times, incrementing a counter, until the cursor runs out of data; the counter is decremented after each FM finishes execution.
The WAIT starts only after the FM has been called for all the data, so there is no chance of the cursor getting closed.
REPORT zp_test.

DATA: l_v_count TYPE i.

PERFORM fetch_data.

*&---------------------------------------------------------------------*
*&      Form  FETCH_DATA
*&---------------------------------------------------------------------*
FORM fetch_data.
  DATA: l_i_vbak  TYPE TABLE OF vbak,
        l_v_flag  TYPE flag,
        l_v_task  TYPE string,
        l_v_index TYPE string.
  STATICS: s_cursor TYPE cursor.

  OPEN CURSOR WITH HOLD s_cursor FOR
    SELECT * FROM vbak.

  WHILE l_v_flag IS INITIAL.
    l_v_index = sy-index.
    FETCH NEXT CURSOR s_cursor
      APPENDING CORRESPONDING FIELDS OF TABLE l_i_vbak
      PACKAGE SIZE 10.
    IF sy-subrc IS INITIAL.
      l_v_count = l_v_count + 1.
      CONCATENATE 'TASK' l_v_index INTO l_v_task.
*     Pass the current packet to the FM via its interface
*     (e.g. a TABLES parameter) - omitted here for brevity
      CALL FUNCTION 'ZP_TEST'
        STARTING NEW TASK l_v_task
        DESTINATION 'NONE'
        PERFORMING count ON END OF TASK.
      REFRESH l_i_vbak.
    ELSE.
*     Cursor exhausted: wait only now, after all FM calls were
*     issued, so the implicit commit of WAIT cannot close the cursor
      WAIT UNTIL l_v_count EQ 0.
      l_v_flag = 'X'.
    ENDIF.
  ENDWHILE.
ENDFORM.                    " FETCH_DATA

*&---------------------------------------------------------------------*
*&      Form  COUNT
*&---------------------------------------------------------------------*
FORM count USING p_task TYPE clike.
  l_v_count = l_v_count - 1.
ENDFORM.                    " COUNT
09-11-2015 1:18 PM
But you may trigger a RESOURCE_FAILURE error if no processes are available, so I would suggest:

IF nb_running LT max_allowed.
  CALL FUNCTION 'Z_XXX' STARTING NEW TASK ...
  ADD 1 TO nb_running.
ELSE.
  CALL FUNCTION 'Z_XXX' ...  " synchronous call in the current work process
ENDIF.

So the WAIT statement is only executed once the cursor has been closed.
Regards,
Raymond