Content Conversion issue for header record

Former Member
0 Kudos

Hi,

We have a very urgent question on an issue here with one of our XI objects.

This is an inbound interface from an external system into R/3 & BW. The inbound file has a header record (with about 8 fields) and detail records (about 900 fields per detail record). The data going into R/3 & BW has no header record; everything goes in as detail records. One field from the header of the source file should be passed to the target structure at the detail level. Also, we are NOT using BPM.

Can someone help us define the file content conversion parameters for the File adapter?

Thanks in advance ......

Prashant

Accepted Solutions (0)

Answers (4)

moorthy
Active Contributor
0 Kudos

Hi P Nanga,

Is it solved? There has been no response from you...

My suggestion: instead of defining data structures for all 900 fields, it is better to read the file in a generic format, i.e. a row structure.

In this case your data type will look like this

DT_Source
 - Record (0..n)
   - Row (a single field holding one complete line)

Content Conversion

Record.fieldNames = Row

Record.fieldSeparator = 'nl'

If you do this, the actual structure can then be built in one of two ways:

- Using an adapter module: the output will be a field-wise structure. Here you need to split the entire row into its fields; if it is a CSV file, you just need to write a small Java function to split each record.

- Or using a Java mapping: in this case the output will be your IDoc structure (see the sketch below).
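
As a rough illustration of the splitting step (plain Java only; the class and method names are placeholders, not any SAP API, and the same logic could sit in an adapter module or a Java mapping):

import java.util.regex.Pattern;

public class RowSplitter {

    // Split one delimited row into its fields; the -1 limit keeps trailing
    // empty fields, which matters when optional trailing columns are blank.
    public static String[] splitRow(String row, String separator) {
        return row.split(Pattern.quote(separator), -1);
    }

    // Render one detail row as generic XML fields F1..Fn (XML escaping omitted).
    public static String toXml(String[] fields) {
        StringBuilder sb = new StringBuilder("<Detail>");
        for (int i = 0; i < fields.length; i++) {
            sb.append("<F").append(i + 1).append(">")
              .append(fields[i])
              .append("</F").append(i + 1).append(">");
        }
        return sb.append("</Detail>").toString();
    }

    public static void main(String[] args) {
        String[] fields = splitRow("4711,2006-03-01,EUR", ",");
        System.out.println(toXml(fields));
    }
}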

Refer this blog-

/people/sravya.talanki2/blog/2005/08/16/configuring-generic-sender-file-cc-adapter

Hope this helps,

Regards,

Moorthy

Former Member
0 Kudos

Krishna, I believe Nanga and Rajashree have given up on this issue. But I am very interested in it, as it looks really challenging.

You said "it is better to read the file in general format i.e Row Structure"

but how are we going to maintain the 900 fields in the file content conversion? That is where the bottleneck is.

Probably they can treat all 900 fields as a single field, and thereby it will be easy for them to maintain in the file content conversion. But the COMPLETE VALIDATION would then have to take place outside the XI system, very thoroughly. I think this approach may be a feasible solution for them.

Also, I feel this problem is partly due to XI itself: we define the 900 fields in the Repository, but when we go into the Directory, we are not permitted to reference the data structure we created in the Repository. XI should have provided a way to reference repository objects, not only for this situation but for many others too.

Former Member
0 Kudos

Nanga

Take your file structure as flat, i.e. create a data type with all the header fields followed by the detail fields, everything at one level. Then map the detail record from the source to the data type in the target so that it repeats exactly as the details do. In content conversion, specify the Recordset Structure as the data type; if you are expecting a comma-separated (.csv) file, specify Datatype.fieldSeparator as ',', and if you are expecting a fixed-length file, specify Datatype.fieldFixedLengths in one line.

Regards,

---Satish

Former Member
0 Kudos

Satish,

How could we handle it if we have one header and multiple detail records? If we create the data type as a flat structure, I am afraid it won't match the source file structure. Could you please verify?

Thanks in advance.

Raji.

Former Member
0 Kudos

Rajashree

In this scenario I think we have to map the detail records through removeContext and then to the data type. For the single header field we have to use the useOneAsMany node function (see the sketch below). Then I think he can get his output as required. Correct me if I am wrong anywhere.
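
As a rough sketch of that idea (standard graphical mapping functions; the exact context handling depends on your structures), the mapping for the repeated header field could look roughly like:

HeaderField --\
Detail --------> useOneAsMany --> TargetDetailField
Detail -------/

where the three inputs of useOneAsMany are, in order: the value to repeat (the header field, which must occur only once per context), the node that determines how many times it is repeated (the detail records), and the node that determines the context changes in the output.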

---Satish

Former Member
0 Kudos

Make two nodes in your data type, one for the header with an occurrence of 1, and one for the detail with an occurrence of 1 to unbounded.

Then in the file adapter, make sure you specify the Recordset Structure as: Header,1,Details,*
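
For illustration (node names are placeholders), the data type and the matching sender setting would look roughly like this:

DT_SourceFile
 - Header  (occurrence 1)             the 8 header fields
 - Details (occurrence 1..unbounded)  the 900 detail fields

Recordset Structure: Header,1,Details,*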

Former Member
0 Kudos

To clarify, is it one header, followed by many details, or is it a header then details then another header then more details, etc?

If there is only one header per file, you don't need to do anything complex in the mapping.

If there are multiple headers, and multiple details it is more complex. I have done one like that recently, and I can provide the details of what you need to change, if that is the case.

Former Member
0 Kudos

Hi Vanda,

It is one header followed by many details. Could you please tell me how exactly we could define the file content conversion parameters?

Thank you very much,

Raji.

Former Member
0 Kudos

Hi Vanda,

Since we have around 900 fields in the detail record, we will have to split the file into smaller files too. In that case, how could we handle the file adapter configuration for content conversion?

Thank you ,

Raji.

Former Member
0 Kudos

What I have understood:

You have a file which has a Header (8 fields) and Details (900 fields per record).

This has to be sent to R/3 and BW. Even though you are not sending the header, a field from the header needs to go to your detail record, from my understanding.

1) Please refer to this blog (for content conversion):

/people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem

2) To send the header field to all the detail records, use the useOneAsMany node function, as someone mentioned above.

I am not quite sure how you are going to maintain 900 fields in the content conversion. Check with SAP whether they have any other way to deal with this situation. Maintenance is going to be a big challenge all the time.

Also, you say you want to split the file into many packets (using Recordsets per Message); in that case I don't know whether the header record will be retained for all the detail records. Give it a try, and good luck!

Former Member
0 Kudos

Vanda & Raji: We have faced a similar situation in one of our legacy-to-R/3 integrations, with around 150 fields per record. We succeeded using XI, but as mentioned by Rohini, it was very tough to maintain when there was a change in the fields.

Please let us know how you overcame this situation. It looks like a very interesting problem for the whole XI community.

Former Member
0 Kudos

I'm so sorry, I wasn't subscribed to this thread and I didn't realize there were responses.

If you have a message type made up of a Header with 1 occurrence and a Detail with 1 to unbounded occurrences, you'd want to do the following in content conversion:

Document Name - your message type

Document Namespace - your message type namespace

Recordset Structure - Header,1,Detail,*

Recordset Sequence - Ascending

Then you'll need to set some of the parameters, depending on the layout of your incoming file.
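
For example, for a comma-separated file the structure-specific parameters might look roughly like this (field names are placeholders; depending on your layout you may also need Key Field Name and the per-structure keyFieldValue entries discussed in the key-field blog mentioned earlier in the thread):

Header.fieldNames: HField1,HField2,...,HField8
Header.fieldSeparator: ,
Detail.fieldNames: DField1,DField2,...,DField900
Detail.fieldSeparator: ,
ignoreRecordsetName: true   (optional, to drop the Recordset element from the generated XML)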

As for the problem of having hundreds of fields, I'm less sure about that.

Would it be possible to break your detail data type down into smaller data types, each with fewer fields? You'd still have to maintain every field in content conversion, but at least they'd be in separate parameters, instead of all 900 in one tiny box.

Here's a very rough example of what I mean:

If you have 900 fields, instead of making one Detail data type, you could make 9 data types, Detail1 through Detail9, each with 100 fields in them (or more data types with even fewer fields).

Setting up the file content conversion would be more complex in this scenario, so it might be a toss-up whether it's worth breaking things up this way if it means configuring quite a few more parameters.

For example,

You'd have to declare your Recordset Structure like Header,1,Detail1,*,Detail2,*,Detail3,* etc., and you'd have to make sure to set the .endSeparator parameter to '0' for all of the first 8 details, so the adapter would recognize that they are all on one line (a rough parameter sketch follows below).
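
Purely to illustrate that endSeparator idea (structure names are placeholders, and each DetailN would still need its own fieldNames or fieldFixedLengths entry):

Recordset Structure: Header,1,Detail1,*,Detail2,*, ... ,Detail9,*
Detail1.endSeparator: '0'    (no line break expected; Detail2 continues on the same line)
Detail2.endSeparator: '0'
...
Detail8.endSeparator: '0'
Detail9.endSeparator: 'nl'   (default; the last segment closes the physical line)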

I hope this helps a little bit.

Former Member
0 Kudos

*********************************************************

I request Michal/Krishna Moorthy and other Gurus to step into this issue and provide a suitable solution.

*********************************************************

Vanda, you said

Here's a very rough example of what I mean:

If you have 900 fields, instead of making one Detail data type, you could make 9 data types, Detail1 through Detail9, each with 100 fields in them (or more data types with even fewer fields).

You are asking them to split the single record into 10 or 15 records, each containing 10 or 15 fields. I don't know how feasible that is; even if it is possible, they still have to maintain the same 950 fields in the file content conversion. So their initial question still stands, and I totally differ from you in this regard.

Former Member
0 Kudos

Hi Nanga,

I think your requirement is to map from the complex source to a flat structure. Use functions like removeContext in the message mapping, which will solve your issue.

Thankx,

Shree

Former Member
0 Kudos

Hi Nanga,

I feel you can meet this requirement in message mapping by using node functions.

Thanks,

YaseenM