
API_SEMBPS_POST not working

Former Member

We are using 'API_SEMBPS_POST' and 'API_SEMBPS_REFRESH' to post data to the cube, refresh the buffer, and then post data again without leaving the transaction.

The issue is that 'API_SEMBPS_POST' and 'API_SEMBPS_REFRESH' only work if we leave the transaction; if we stay in the transaction, the data is not posted to the cube.

Any ideas what could be wrong? We would like the data to be posted to the cube without having to leave the transaction.

FUNCTION ztemplate_exit.
*"----------------------------------------------------------------------
*"  Local Interface:
*"  IMPORTING
*"     REFERENCE(I_AREA)     TYPE UPC_Y_AREA
*"     REFERENCE(I_PLEVEL)   TYPE UPC_Y_PLEVEL
*"     REFERENCE(I_METHOD)   TYPE UPC_Y_METHOD
*"     REFERENCE(I_PARAM)    TYPE UPC_Y_PARAM
*"     REFERENCE(I_PACKAGE)  TYPE UPC_Y_PACKAGE
*"     REFERENCE(IT_EXITP)   TYPE UPF_YT_EXITP
*"     REFERENCE(ITO_CHASEL) TYPE UPC_YTO_CHASEL
*"     REFERENCE(ITO_CHA)    TYPE UPC_YTO_CHA
*"     REFERENCE(ITO_KYF)    TYPE UPC_YTO_KYF
*"  EXPORTING
*"     REFERENCE(ET_MESG)    TYPE UPC_YT_MESG
*"  CHANGING
*"     REFERENCE(XTH_DATA)   TYPE HASHED TABLE
*"----------------------------------------------------------------------

  TABLES:
    /1sem/_ys_kyfs_200bplom001,
    /1sem/_ys_chas_200bplom001.

  DATA: lwa_xth_data     TYPE REF TO data,
        lwa_xth_data_aux TYPE REF TO data.

  TYPES: BEGIN OF ty_chas.
          INCLUDE STRUCTURE /1sem/_ys_chas_200bplom001.
  TYPES: END OF ty_chas.

  TYPES: BEGIN OF ty_kyfs.
          INCLUDE STRUCTURE /1sem/_ys_kyfs_200bplom001.
  TYPES: END OF ty_kyfs.

  TYPES: BEGIN OF ti_data,
           s_chas TYPE ty_chas,
           s_kyfs TYPE ty_kyfs,
         END OF ti_data.

  FIELD-SYMBOLS:
    <lfs_wa_xth_data>     TYPE any,
    <lfs_wa_xth_data_aux> TYPE any.

  CREATE DATA:
    lwa_xth_data     LIKE LINE OF xth_data,
    lwa_xth_data_aux LIKE LINE OF xth_data.

  ASSIGN:
    lwa_xth_data->*     TO <lfs_wa_xth_data>,
    lwa_xth_data_aux->* TO <lfs_wa_xth_data_aux>.

  FIELD-SYMBOLS:
    <0calmonth2>  TYPE any,
    <fs_area>     TYPE any,
    <0material>   TYPE any,
    <0cust_group> TYPE any,
    <0sales_grp>  TYPE any,
    <fs_chas>     TYPE any.

  DATA:
    t_bapiret2 TYPE TABLE OF bapiret2 WITH HEADER LINE.

  DATA:
    vl_subrc LIKE sy-subrc.

* Copy the first record of XTH_DATA into the work area
  LOOP AT xth_data INTO <lfs_wa_xth_data>.
    <lfs_wa_xth_data_aux> = <lfs_wa_xth_data>.
    EXIT.
  ENDLOOP.

  ASSIGN COMPONENT 'S_CHAS'
    OF STRUCTURE <lfs_wa_xth_data> TO <fs_chas>.
  ASSIGN COMPONENT '_AREA_____'
    OF STRUCTURE <fs_chas> TO <fs_area>.
  ASSIGN COMPONENT '0MATERIAL'
    OF STRUCTURE <fs_chas> TO <0material>.

  <0material> = '000000000030160294'.

  COLLECT <lfs_wa_xth_data> INTO xth_data.

  CALL FUNCTION 'API_SEMBPS_POST'
    IMPORTING
      e_subrc   = vl_subrc
      es_return = t_bapiret2.

  COMMIT WORK.

  CALL FUNCTION 'API_SEMBPS_REFRESH'
    IMPORTING
      e_subrc   = vl_subrc
      es_return = t_bapiret2.

  WAIT UP TO 60 SECONDS.

  IF sy-subrc = 0.
  ENDIF.

ENDFUNCTION.

Thanks in advance,

Teresa.

Accepted Solutions (0)

Answers (1)

Former Member

Hi,

Can you please let us know the business reason behind this? I am not sure why posting to the cube is required. Also, you can simply append the records to be created to XTH_DATA; they will be committed to the database once the user hits save.

thanks

Former Member

Hi,

The process runs in background mode, so we don't know how many records we are going to process. We therefore thought of partitioning the package and posting each part without leaving the session. That's why we need to post and then refresh the buffer.

It is not an online process!

Thanks,

Teresa.

Former Member

Hi,

A better option for partitioning is the program UPC_BUNDLE_EXECUTE_STEP, which partitions the process based on a characteristic value. I am not sure that posting each individual record is the right solution.

thanks

gerd_schoeffl
Advisor

Hello Teresa,

I would definitely recommend not saving after each record is written, for performance reasons alone. It is also not necessary.

I think you want to save because you fear you might get a memory overflow. The way to handle this problem is:

cut the selection into smaller pieces,

execute the planning function on the first chunk of data,

save the data and release the buffer,

execute the planning function on the next chunk of data.

The report UPC_BUNDLE_EXECUTE_STEP does exactly this. Also have a look at note 546464. The chunks of data should be of a 'reasonable' size: not so large that memory overflows, but also not too small (one record is definitely not enough).
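The chunked save-and-release cycle described above can be sketched in ABAP roughly as follows. This is a simplified illustration, not the actual UPC_BUNDLE_EXECUTE_STEP coding: `lt_chunks` and the subroutine `run_business_logic` are hypothetical placeholders for your own partitioning and planning function; only API_SEMBPS_POST and API_SEMBPS_REFRESH are the real function modules from this thread.

```abap
* Sketch only: process the data chunk by chunk, saving and
* releasing the BPS buffer after each chunk so memory never
* holds more than one chunk's worth of planning data.
DATA: vl_subrc  TYPE sy-subrc,
      ls_return TYPE bapiret2.

LOOP AT lt_chunks INTO ls_chunk.

*   1) Execute the planning function (business logic) on this chunk
    PERFORM run_business_logic USING ls_chunk.

*   2) Save the buffer contents to the cube ...
    CALL FUNCTION 'API_SEMBPS_POST'
      IMPORTING
        e_subrc   = vl_subrc
        es_return = ls_return.
    COMMIT WORK.

*   3) ... and release the buffer before the next chunk
    CALL FUNCTION 'API_SEMBPS_REFRESH'
      IMPORTING
        e_subrc   = vl_subrc
        es_return = ls_return.

ENDLOOP.
```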

If you build a scenario like this you always need two planning functions: one that contains the business logic, and a second one that does the save and refresh. The save MUST be a separate planning function. The API...SAVE saves all data that is in the buffer. In your coding above you insert a record into the (local) table XTH_DATA, but the buffer is saved before the records in that table are transmitted to the buffer; this is why the save does not work the way you expect.
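The two-function split described above might look roughly like this (a sketch with hypothetical function names; the key point is that the save call lives in a separate exit, which runs only after the framework has moved the first exit's records into the buffer):

```abap
* Planning function exit 1: business logic only.
* Modify XTH_DATA and return - the BPS framework transfers the
* records into the planning buffer only AFTER this exit finishes.
FUNCTION z_exit_business_logic.
  COLLECT <lfs_wa_xth_data> INTO xth_data.
  " No API_SEMBPS_POST here - the record is not in the buffer yet!
ENDFUNCTION.

* Planning function exit 2: save and refresh only, executed as a
* separate planning function. By now the records from exit 1 are
* in the buffer, so the save picks them up.
FUNCTION z_exit_save_refresh.
  CALL FUNCTION 'API_SEMBPS_POST'
    IMPORTING
      e_subrc   = vl_subrc
      es_return = ls_return.
  COMMIT WORK.
  CALL FUNCTION 'API_SEMBPS_REFRESH'
    IMPORTING
      e_subrc   = vl_subrc
      es_return = ls_return.
ENDFUNCTION.
```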

Please note that the refresh (API...REFRESH) refreshes the entire BPS buffer; you can only use it in batch jobs or the web interface, not with planning folders or BPS0.

Best regards,

Gerd Schoeffl

SAPNetWeaver RIG BI EMEA