Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Parallel Processing with Open Cursor / Fetch Command

Former Member
0 Kudos

I am working on a project in BI/BW, on a program where the database table has millions of records and will gradually grow over time.

The idea is to use parallel processing to speed up processing time. I cannot select from the table without using FETCH because it would cause a short dump. (That's how large the table is.)

So I read a blog about parallel processing:

/people/adam.baryla/blog/2010/12/22/optimizing-abap-programs-performance-with-parallel-processing

I thought - perfect! That's what we need. However, when starting the FM in a new task: <DUMP>. Reading the documentation on OPEN CURSOR, it looked like the dump could be solved by simply using the WITH HOLD addition. That didn't work either.

Here's the problem code:


      OPEN CURSOR WITH HOLD lc_data FOR
        SELECT doc_number AS doc_num
             s_ord_item AS doc_item
             sched_line
             salesorg
             distr_chan
             div_head AS division
             sold_to
             material
             crm_trpid
             FROM (f_table)
             WHERE sold_to IN lr_customer.

  ASSIGN lt_result_copa TO <fs_odsdata>.
  DO.
    FETCH NEXT CURSOR lc_data
              APPENDING CORRESPONDING FIELDS
              OF TABLE lt_output_copa
              PACKAGE SIZE 2000.
    IF sy-subrc <> 0.
      CLOSE CURSOR lc_data.
      EXIT.
    ENDIF.

    LOOP AT lt_output_copa ASSIGNING <fs_out_copa>.
      l_tabix = sy-tabix.
      l_trpid = <fs_out_copa>-crm_trpid.
      REFRESH lt_output_tmp.
      MOVE-CORRESPONDING <fs_out_copa> TO ls_result_copa .
      ls_result_copa-/bic/zc_trpid  = <fs_out_copa>-crm_trpid.
      APPEND ls_result_copa TO lt_output_tmp.
      CLEAR <fs_out_copa>-crm_trpid.
      DO.
        cal = l_called_jobs - l_recvd_jobs .
        " IF cal LE 4.
        CALL FUNCTION 'Z_GET_TERRITORY_NEW'
          STARTING NEW TASK taskname
          DESTINATION IN GROUP group
          PERFORMING copa_return_info ON END OF TASK
          EXPORTING
            salesorg              = <fs_out_copa>-salesorg
            distr_chan            = <fs_out_copa>-distr_chan
            division              = <fs_out_copa>-division
            customer              = <fs_out_copa>-customer
            material              = <fs_out_copa>-material
            busgrp                = <fs_out_copa>-g_cwwbus
            minor                 = <fs_out_copa>-g_cwwmin
            minorsub              = <fs_out_copa>-g_cwwsub
          TABLES
            t_sorg                = lt_sorg
            t_customer            = lt_customer
          CHANGING
            copa                  = ls_result_copa
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3
            no_territory          = 4
            OTHERS                = 5.
        IF sy-subrc = 0.
          l_called_jobs = l_called_jobs + 1.
          taskname = taskname + 1.
          EXIT.
        ELSEIF sy-subrc = 3.
          WAIT UNTIL l_recvd_jobs >= l_called_jobs UP TO '9' SECONDS.
        ENDIF.
        " ENDIF.
      ENDDO.
      WAIT UNTIL l_recvd_jobs >= l_called_jobs UP TO 30 SECONDS.

Thank you for the help!

Michelle

1 ACCEPTED SOLUTION

Clemenss
Active Contributor

Hi Michelle,

I love it!

Read the documentation about SAP LUW, Database LUW and processes.

You may then find out that every asynchronous function call such as your

CALL FUNCTION 'Z_GET_TERRITORY_NEW'
          STARTING NEW TASK taskname

triggers an implicit database commit.

I can understand the database: it will cause a dump if FETCH NEXT CURSOR is called after a database commit has just closed all open cursors.

I saw a blog about parallel processing recently, and I remember SAP standard processing in contract accounting: both do some kind of pre-selection of blocks first and then send the blocks out for parallel processing.

I think there is no better way.
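In code, the idea could look roughly like this (Z_PROCESS_BLOCK is a made-up, RFC-enabled function module; f_table and lr_customer are taken from your snippet). First pre-select the blocks while no asynchronous call is running, then send them out, so that no database cursor is open when the implicit commits happen:

DATA: lt_customers TYPE TABLE OF kunnr,
      lv_customer  TYPE kunnr,
      lv_task(12)  TYPE c.

* Phase 1: pre-select the blocks (here simply the distinct customers)
* before the first asynchronous call.
SELECT DISTINCT sold_to FROM (f_table)
  INTO TABLE lt_customers
  WHERE sold_to IN lr_customer.

* Phase 2: the selection is finished, so the implicit commits caused
* by the asynchronous calls can no longer close an open cursor.
LOOP AT lt_customers INTO lv_customer.
  lv_task = sy-tabix.
  CONDENSE lv_task.
  CALL FUNCTION 'Z_PROCESS_BLOCK'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    EXPORTING
      iv_customer = lv_customer
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.
* A real program would retry on resource_failure, as in your code.
ENDLOOP.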

Regards,

Clemens

P.S.: If you need the links to relevant documentation, let me know. Too lazy right now.

12 Replies

kesavadas_thekkillath
Active Contributor
0 Kudos

Dump message ???


former_member186741
Active Contributor
0 Kudos

You need to identify whether the dump is related to the cursor selects or to the parallel processing. Comment out the use of the FM and run the program again; that way you will be able to focus on the problem better. If there is no dump, the problem must be in how you have coded the parallel processing.

0 Kudos

Hi,

it hurts, please read:

SAP LUW

As a rule, an application program is processed by multiple work processes in succession, and every change of the work process is linked to an implicit database commit. This means that an application program is not automatically associated with a single database LUW. This applies in particular to dialog-oriented applications, in which one database LUW is assigned to one dialog step.

(SAP LUW)

Implicit Database Commits

A work process can only execute a single database LUW. The consequence of this is that a work process must always end a database LUW when it finishes its work for a user or an external call. Work processes trigger an implicit database commit in the following situations:

- When a dialog step is completed: control changes from the work process back to the SAP GUI.

- When a function module is called in another work process (RFC): control passes to the other work process.

- When the called function module (RFC) in the other work process ends: control returns to the calling work process.

- When a WAIT statement interrupts the work process: control passes to another work process.

- During error dialogs (information, warning, or error messages) in dialog steps: control passes from the work process to the SAP GUI.

(Database Logical Unit of Work (LUW))

An understanding of the SAP LUW, the database LUW, and the respective bundling techniques is essential when you try things like this.

If the system works correctly, the dump should quote an unhandled and unhandleable exception; the dump itself is not really relevant in this case.
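To make the fatal sequence concrete, here is a minimal sketch (SFLIGHT and the function module name are only placeholders):

DATA: lt_flights TYPE TABLE OF sflight,
      lc_cursor  TYPE cursor.

OPEN CURSOR lc_cursor FOR SELECT * FROM sflight.

* The asynchronous call hands control to another work process and
* thereby triggers an implicit database commit in this one, which
* closes the open database cursor. As you saw, WITH HOLD does not
* prevent this either.
CALL FUNCTION 'Z_ANY_RFC_FM'
  STARTING NEW TASK 'TASK1'.

* The next FETCH addresses a closed cursor: short dump.
FETCH NEXT CURSOR lc_cursor INTO TABLE lt_flights PACKAGE SIZE 100.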

Regards,

Clemens

former_member186741
Active Contributor
0 Kudos

I don't think that you have to use cursors to process large volumes of data. The standard SELECT itself has a PACKAGE SIZE option, so the code could be remodelled to use that instead. I'm not sure if it would solve your problem, but it may be worth a try.

Also, from your code snippet it's unclear whether you are resetting the table that you are fetching into. You are APPENDING each package into the same internal table; unless you have a CLEAR outside the snippet, the table will keep growing with each fetch, which is not what you want.
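For illustration, a minimal sketch of the PACKAGE SIZE variant (SFLIGHT stands in for the real table). Note that INTO TABLE replaces the table contents with every package, while APPENDING TABLE keeps growing it:

DATA lt_package TYPE TABLE OF sflight.

* INTO TABLE overwrites lt_package on every pass of the loop, so it
* holds exactly one package at a time and needs no manual REFRESH.
SELECT * FROM sflight
  INTO TABLE lt_package
  PACKAGE SIZE 2000.

* ... process the current package of at most 2000 rows here ...

ENDSELECT.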

Former Member
0 Kudos

Thank you everyone for your quick answers!

Well - the programmer has decided to go a different way. Yes, the database commits were what was causing our problems. (Thank you, Clemens.)

Once we have finished, I'll try to come back and post what we ended up doing.

Again, thank you,

Michelle

0 Kudos

"Final" result is below and yes, I know nothing is ever final. My friend Vinod Ellath is the actual author. (I can name names now that is is working!)


FORM adjust_copa_data .
  DATA: cal TYPE i,
      lt_output_tmp    TYPE TABLE OF ls_output_copa.

  ASSIGN lt_result_copa TO <fs_odsdata>.
  DO.
    FETCH NEXT CURSOR lc_data
              APPENDING CORRESPONDING FIELDS
              OF TABLE lt_output_copa
              PACKAGE SIZE 2000.
    IF sy-subrc <> 0.
      CLOSE CURSOR lc_data.
      EXIT.
    ENDIF.

    LOOP AT lt_output_copa ASSIGNING <fs_out_copa>.
      l_tabix = sy-tabix.
      l_trpid = <fs_out_copa>-crm_trpid.
      REFRESH lt_output_tmp.
      MOVE-CORRESPONDING <fs_out_copa> TO ls_result_copa .
      ls_result_copa-/bic/zc_trpid  = <fs_out_copa>-crm_trpid.
      APPEND ls_result_copa TO lt_output_tmp.
      CLEAR <fs_out_copa>-crm_trpid.
      DO.
        cal = l_called_jobs - l_recvd_jobs .
        " IF cal LE 4.
        CALL FUNCTION 'Z_GET_TERRITORY_NEW'
          STARTING NEW TASK taskname
          DESTINATION IN GROUP group
          PERFORMING copa_return_info ON END OF TASK
          EXPORTING
            salesorg              = <fs_out_copa>-salesorg
            distr_chan            = <fs_out_copa>-distr_chan
            division              = <fs_out_copa>-division
            customer              = <fs_out_copa>-customer
            material              = <fs_out_copa>-material
            busgrp                = <fs_out_copa>-g_cwwbus
            minor                 = <fs_out_copa>-g_cwwmin
            minorsub              = <fs_out_copa>-g_cwwsub
          TABLES
            t_sorg                = lt_sorg
            t_customer            = lt_customer
          CHANGING
            copa                  = ls_result_copa
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3
            no_territory          = 4
            OTHERS                = 5.
        IF sy-subrc = 0.
          l_called_jobs = l_called_jobs + 1.
          taskname = taskname + 1.
          EXIT.
        ELSEIF sy-subrc = 3.
          WAIT UNTIL l_recvd_jobs >= l_called_jobs UP TO '9' SECONDS.
        ENDIF.
        " ENDIF.
      ENDDO.
      WAIT UNTIL l_recvd_jobs >= l_called_jobs UP TO 30 SECONDS.
      DELETE lt_output_copa INDEX l_tabix.
    ENDLOOP.



FORM copa_return_info USING taskname.

  DATA: ls_result_copa_tmp     TYPE /bic/azcpa_o0300,
        lt_output_tmp          TYPE TABLE OF ls_output_copa.

  RECEIVE RESULTS FROM FUNCTION 'Z_GET_TERRITORY_NEW'
     IMPORTING
       crm_trpid          = ls_result_copa_tmp-crm_trpid
       crm_tr             = ls_result_copa_tmp-crm_tr
     CHANGING
       copa               = ls_result_copa_tmp
    EXCEPTIONS
       no_territory       = 1
       OTHERS             = 2.
  IF sy-subrc = 0.
*    IF ls_result_copa_tmp-crm_trpid  NE ls_result_copa_tmp-/bic/zc_trpid.
*      ls_result_copa_tmp-/bic/zctpathdt = sy-datum.
*      APPEND ls_result_copa TO lt_result_copa.
*    ENDIF.
  ENDIF.
*  l_recvd_jobs = l_recvd_jobs + 1.
ENDFORM.                    " COPA_RETURN_INFO

0 Kudos

Hi Michelle,

I still can't believe it.

When I noticed that the for-the-time-being-final code looked suspiciously similar to the code in the original question, I took the chance to try the fantastic compare plug-in of the equally fantastic Notepad++ editor.

The only differences are these new lines:

FORM adjust_copa_data .
  DATA: cal TYPE i,
      lt_output_tmp    TYPE TABLE OF ls_output_copa.

this removed and now missing code:

OPEN CURSOR WITH HOLD lc_data FOR
        SELECT doc_number AS doc_num
             s_ord_item AS doc_item
             sched_line
             salesorg
             distr_chan
             div_head AS division
             sold_to
             material
             crm_trpid
             FROM (f_table)
             WHERE sold_to IN lr_customer.

and these lines at the end, missing from the original:

DELETE lt_output_copa INDEX l_tabix.
    ENDLOOP.

Now I wonder what the 'new' approach actually is. You still have an implicit database commit while the database cursor is open.

The only chance for this code to run successfully is if the fetched data never reaches the package size. But then nothing will be done in parallel.

Just my personal thought: the best chance to gain performance from parallel processing is to do the database access in the parallel function call itself.

You could create a background job just to determine a number of pairs (from-to), e.g. for customer or material, that split your (f_table) into chunks of approximately equal size. This job puts the pairs into a user table. It is not time-critical and probably does not need to run each time you start the parallel processing.

Then loop at those pairs and pass one pair to each parallel function call, roughly as sketched below.
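A rough sketch of the idea (the table ZPP_INTERVALS, the function module Z_PROCESS_INTERVAL, and all field names are made up):

TYPES: BEGIN OF ty_pair,
         from_cust TYPE kunnr,
         to_cust   TYPE kunnr,
       END OF ty_pair.

DATA: lt_pairs    TYPE TABLE OF ty_pair,
      ls_pair     TYPE ty_pair,
      lv_task(12) TYPE c.

* The background job has already stored roughly equal-sized key
* intervals in the user table.
SELECT from_cust to_cust FROM zpp_intervals INTO TABLE lt_pairs.

* One parallel call per interval. The called FM does its own
* SELECT ... WHERE sold_to BETWEEN iv_from AND iv_to, so this
* program keeps no database cursor open while the tasks run.
LOOP AT lt_pairs INTO ls_pair.
  lv_task = sy-tabix.
  CONDENSE lv_task.
  DO.
    CALL FUNCTION 'Z_PROCESS_INTERVAL'
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      EXPORTING
        iv_from = ls_pair-from_cust
        iv_to   = ls_pair-to_cust
      EXCEPTIONS
        resource_failure = 1
        OTHERS           = 2.
    IF sy-subrc = 0.
      EXIT.
    ENDIF.
    WAIT UP TO 1 SECONDS. " back off until a work process is free
  ENDDO.
ENDLOOP.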

By the way, it is also my opinion that OPEN CURSOR / FETCH is kind of stone-age SAP; the database interface has improved since then. Nobody has ever proved that a CURSOR really speeds anything up, yet it still spreads an aura of DB wisdom.

I almost forgot that I posted [Easily implement parallel processing in online and batch processing|http://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing] some years ago. There I built the intervals on the fly; I remember that it did not take much time.

Anyway, happy hacking!

Regards,

Clemens

Former Member
0 Kudos

Hey now -

I never said it was top-of-the-line code, written in the latest and greatest way. We know it isn't. The idea with the open cursor was to eliminate the short dump, which it did.

Stone age! I would also welcome ideas for doing parallel processing a different way. I don't usually play on the BI/BW system, and BI/BW is where they process HUGE amounts of data.

As for the code differences - I was just putting out what I was given. Sorry about that, guys; I honestly did not look at it. (Cut and paste is a dangerous thing.) I'll see if I can get access to the BI system where this code is located to tell you the differences (if any).

Meanwhile, I'll take a look at the wiki.

Thank you for the help!

Michelle

0 Kudos

Hi Michelle,

Do you have the code with which you fixed the short dump? I'm facing the same problem: a short dump in the FETCH NEXT CURSOR code.

I would appreciate it if you could paste the code with which you fixed the problem.

Thanks.

MY

Former Member
0 Kudos

Hi,

Was anyone able to resolve this issue?

I also need to update a huge amount of data (around 3 million records) in a parallel process, and FETCH NEXT CURSOR gives a dump because of the database commit during parallel processing. I cannot load all the data into an internal table at once, hence the need for FETCH NEXT CURSOR.

Please help.

regards,

Nilanjana

Former Member
0 Kudos

Hello, I also have the same problem.

What I find strange is that the call works well synchronously but fails when started in a new task (see the two versions below).

Can you help me?

DO.
  REFRESH lti_ztcu0014[].

* FETCH NEXT CURSOR s_cursor INTO TABLE p_ti_ztcu0014
  FETCH NEXT CURSOR lo_cursor INTO TABLE lti_ztcu0014
    PACKAGE SIZE p_i_paquete.
  IF sy-subrc NE 0.
    CLOSE CURSOR lo_cursor.
    EXIT.
  ENDIF.

* Create a unique ID for the task.
  ADD 1 TO lc_index.
  CONCATENATE text-011 lc_index INTO lc_taskname.
  CONDENSE lc_taskname NO-GAPS.

  SET UPDATE TASK LOCAL.

This version does NOT work:

  CALL FUNCTION 'ZCU_CREATE_BP'
    STARTING NEW TASK lc_taskname
    PERFORMING process_create_bp ON END OF TASK
    TABLES
      t_bp       = lti_ztcu0014[]
      t_zttmdg02 = p_ti_zttmdg02[].

This version works:

  CALL FUNCTION 'ZCU_CREATE_BP'
*   STARTING NEW TASK lc_taskname
*   PERFORMING process_create_bp ON END OF TASK
    TABLES
      t_bp       = lti_ztcu0014[]
      t_zttmdg02 = p_ti_zttmdg02[].

* Increase by 1 the counter of processes started so far.
  gi_procesos = gi_procesos + 1.
  IF gi_procesos >= li_agrupador.
    WAIT UNTIL gi_procesos EQ 0.
  ENDIF.

ENDDO.