
New Feature in PI

Former Member

Hi Friends,

I would like to share an idea for a possible new feature in SAP PI.

As we all know, data passes between third-party systems and SAP through PI, and vice versa. This data can occupy a lot of memory in transit, so a message can effectively get stuck in a third-party middleware or channel along the way.

So I thought about improving this: why don't we pass functions from one system to the other via PI, instead of the data itself?

Wouldn't it be possible for a remote-enabled function module in SAP PI itself to read the required file from the legacy system, create a copy in the PI system, convert it into the required format, and then send it on to SAP? Yes, I know that PI has no database of its own, but we could buffer the data for a limited time and flush it after conversion, thereby reducing the load.
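To make the "read, convert, flush" idea concrete, here is a minimal, purely illustrative sketch (not actual PI or RFC code; all function names are hypothetical): pull a file's contents from the legacy side, convert them in a temporary buffer, forward the result, and discard the buffer so nothing is persisted in the middleware.

```python
import csv
import io

def read_legacy_file(raw_bytes: bytes) -> str:
    """Stand-in for an RFC/file-adapter read from the legacy system."""
    return raw_bytes.decode("utf-8")

def convert_to_target(text: str) -> list:
    """Example conversion: CSV rows into a list of dict records."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

def process_and_flush(raw_bytes: bytes) -> list:
    buffer = read_legacy_file(raw_bytes)   # held only for the conversion
    records = convert_to_target(buffer)
    del buffer                             # "flush" once conversion is done
    return records

# Usage: a small legacy CSV file converted into records, buffer discarded.
sample = b"id,name\n1,Alice\n2,Bob\n"
records = process_and_flush(sample)
```

The point of the sketch is only the lifecycle: the raw payload exists in the middleware just long enough to be converted, which is the limited-time buffering described above.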

Moreover, since every remote-enabled FM can read SAP data, we could introduce a BAPI here that reads the legacy objects as well.

In a way, this would in fact help bring SAP into cloud integration, with PI acting as the middleware between ECC and the internet. And if there is any risk of a virus or hacking threat, we could introduce a new feature in PI that inspects such messages and blocks them in PI itself, before they ever reach SAP.

SAP would then have more contact with real-time objects, which would help its wider expansion.

Friends, wouldn't this be helpful? Please let me know; I would appreciate your valuable comments on this.

Thanks and Best Regards,

Souvik Bhattacharjee

Accepted Solutions (1)

Former Member

PI/XI has limitations on message size, although this is not unique to PI.

Maybe you can use deltas instead of full loads.

Another alternative is to manage the content in chunks. In the case of IDocs, SAP now splits large content into several IDocs, like a sequential file: the first IDoc carries a header, and the last one carries a footer with a hash for integrity checking.
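The header/segments/footer pattern described above can be sketched generically. This is not SAP's actual IDoc splitting, just an illustration of the idea: number the segments, put the total size in a header chunk, and put a hash of the whole payload in a footer chunk so the receiver can verify the reassembled content.

```python
import hashlib

def split_payload(payload: bytes, chunk_size: int) -> list:
    """Split a payload into a header, numbered segments, and a footer hash."""
    chunks = [("HEADER", b"total=%d" % len(payload))]
    for i in range(0, len(payload), chunk_size):
        chunks.append(("SEG%d" % (i // chunk_size + 1), payload[i:i + chunk_size]))
    chunks.append(("FOOTER", hashlib.sha256(payload).hexdigest().encode()))
    return chunks

def reassemble(chunks: list) -> bytes:
    """Join the segments and verify them against the footer hash."""
    body = b"".join(data for tag, data in chunks if tag.startswith("SEG"))
    footer = next(data for tag, data in chunks if tag == "FOOTER")
    if hashlib.sha256(body).hexdigest().encode() != footer:
        raise ValueError("hash mismatch: payload corrupted in transit")
    return body

# Usage: a 10 KB payload split into 4 KB segments and verified on reassembly.
payload = b"x" * 10_000
chunks = split_payload(payload, 4096)
restored = reassemble(chunks)
```

The footer hash is what makes chunking safe: if any segment is lost or altered in the middleware, reassembly fails loudly instead of delivering a silently corrupted message.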

I have doubts about the file adapter, but on the sender side it is possible to create the file sequentially. I am not sure about the receiver side.

Former Member

Hi,

Thanks for your helpful response!

Your suggestion is really helpful in cases where we have large data (e.g., IDocs): breaking it into chunks (which can be configured in the partner profiles) allows the data to pass through successfully without getting stuck anywhere in the middle.

However, a quick question: in the case of complex graphical mapping, would breaking the data into chunks still help? The field dependencies might be such that if we split the data, it can no longer be parsed correctly.

Please advise.

Regards,

Souvik

Answers (0)