on 08-28-2009 2:18 PM
Hi Friends,
My requirement is:
774.89 3.54 15.07 4.87 2.93 4.86 5.42 3.56 -18.64 15.75 5.94 2.11 4.74 5.57 4.27 15.73 -1.46 -17.19Z
768.94 &.52 17.05 6.93 3.76 3.03 4.59 2.98 -20.00 15.91 6.14 4.01 3.24 3.44 1.87 13.96 -3.01 -16.97Z 254
771.55 3.52 15.21 7.74 0.86 3.24 4.54 4.02 -15.42 17.40 9.60 0.81 2.73 6.40 6.07 20.45 2.51 -17.93Z
770.33 3.53 13.26 3.51 5.12 4.94 4.81 3.51 -19.76 12.57 4.20 4.52 4.94 2.69 0.50 13.86 -2.94 -16.80Z
770.33 3.^3 13.26 3.51 5.12 4.94 4.81 3.51 -19.76 12.57 4.20 4.52 4.94 2.69 0.50 13.86 -2.94 -16.80Z
770.33 3.53 13.26 3.51 5.12 4.94 4.81 3.51 -19.76 12.57 4.20 4.52 4.94 2.69 0.50 13.86 -2.94 -16.80Z
770.33 3.53 13.26 3.51 5.12
From the above data
1. The second line contains a special character (&) and exceeds the fixed length, so the whole line must be removed.
2. The fifth line also contains a special character, so the whole record should be deleted before it is updated to the ECC table.
3. The last row is incomplete (some fields are empty), so it should also be removed.
I am working on a FILE - PI - ECC (ABAP proxy) scenario. The data should be validated against the above conditions.
How can I fulfill this requirement?
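The three numbered conditions can be combined into one per-line check. A minimal sketch, assuming a fixed record length and that valid records contain only digits, dots, signs, spaces, and the trailing Z (the class name, the allowed-character set, and the length value are assumptions, not from the original scenario):

```java
// Hypothetical validator for the three rules above:
// 1) / 2) any special character rejects the whole line
// 3) a line shorter (or longer) than the fixed length is rejected
public class LineValidator {

    // Characters a clean record may contain (assumption based on the sample data)
    private static final String ALLOWED = "0123456789.-+ Z";

    public static boolean isValid(String line, int fixedLength) {
        // Rule 3: incomplete or oversized lines are dropped
        if (line.length() != fixedLength) {
            return false;
        }
        // Rules 1 and 2: reject on any character outside the allowed set
        for (int i = 0; i < line.length(); i++) {
            if (ALLOWED.indexOf(line.charAt(i)) < 0) {
                return false;
            }
        }
        return true;
    }
}
```

Each line that fails the check would simply be skipped before the payload reaches the proxy.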
Thanks.
Hi All
Thanks very much. It is done at the PROXY; no UDF was used.
Hi Chennai,
The best option would be to read the whole file into XI and then check for any character like & or ^ and ignore the record. For the second line you can set .additionalLastFields to ignore, so that the extra field 254 is not read; for the last line you can set .missingLastFields to ignore, so that the missing fields do not cause a problem. Give it a try and let us know.
Once you have read the file into XI, you can use a UDF to check for the special characters and ignore the record.
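The UDF idea above can be sketched as plain Java; in a real scenario the body would sit in a graphical-mapping UDF. The class name, method name, and allowed-character set are assumptions:

```java
// Sketch of a record-level special-character check (hypothetical names).
// Returning an empty string lets the mapping suppress the bad record.
public class SpecialCharCheck {

    public static String dropIfSpecial(String record) {
        // Keep the record only if it consists of digits, dots, signs,
        // spaces and the trailing Z (assumption based on the sample data)
        if (record.matches("[0-9.\\-+ Z]*")) {
            return record;
        }
        return "";
    }
}
```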
Regards,
---Satish
Hi,
What you can do is:
- Build a module (Java application) to delete those lines before the messages are taken by the File adapter (FCC).
- Build a Java mapping to delete those lines, but do not use FCC; use only the File adapter.
Regards
Ivan
Hi,
We cannot say this file is a fixed-length file, because it does not have the structure of a fixed-length file; it is a randomly generated file.
The best thing would be to ask the source-file team to at least provide the file in fixed length so that you can handle it in the mapping; otherwise, as Jose mentioned, you need to go with an adapter module.
Regards,
---Satish
Hi,
Inside your module put the following code:
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;

public class Cleaner {

    // Reads the flat file line by line and keeps only records that contain
    // no special characters and match the fixed record length
    public byte[] clean(byte[] file) throws Exception {
        StringBuffer cleanedFile = new StringBuffer();
        ByteArrayInputStream bais = new ByteArrayInputStream(file);
        BufferedReader brFlatFile =
            new BufferedReader(new InputStreamReader(bais));
        String line;
        int length = 120; // adjust to your fixed record length
        while ((line = brFlatFile.readLine()) != null) {
            // drop lines containing & or ^, or lines of the wrong length
            if (line.indexOf("&") == -1 && line.indexOf("^") == -1
                    && line.length() == length) {
                cleanedFile.append(line).append("\n");
            }
        }
        return cleanedFile.toString().getBytes();
    }
}
public class FlatFileToXmlConverterModule implements SessionBean, Module {

    private SessionContext myContext;

    public void setSessionContext(SessionContext context) {
        myContext = context;
    }

    public void ejbRemove() {
    }

    public void ejbActivate() {
    }

    public void ejbPassivate() {
    }

    public void ejbCreate() throws CreateException {
    }

    public ModuleData process(
        ModuleContext moduleContext,
        ModuleData inputModuleData)
        throws ModuleException {
        try {
            // Principal data: the flat file content
            Message msg = (Message) inputModuleData.getPrincipalData();
            XMLPayload file = msg.getDocument();
            // clean the payload with the Cleaner class above
            Cleaner cleaner = new Cleaner();
            file.setContent(cleaner.clean(file.getContent()));
            // save the cleaned content back into the message
            msg.setDocument(file);
            // provide the XI message for returning
            inputModuleData.setPrincipalData(msg);
        } catch (Exception e) {
            // do not swallow errors silently; let the adapter framework see them
            throw new ModuleException(e.getMessage(), e);
        }
        return inputModuleData;
    }
}
For details how to develop and deploy modules you can read the following links:
Develop Modules
/people/ganesh.karicharla2/blog/2008/02/20/adapter-module-development-module-configuration
/people/farooq.farooqui3/blog/2008/09/24/sap-xipi-encode-outgoing-payload-using-adapter-module
/people/sap.user72/blog/2005/07/31/xi-read-data-from-pdf-file-in-sender-adapter
Regards
Ivan
Hi,
If the source file is fixed length with 7 characters per field and 12 fields, then the last line simply has no values at the end. If those positions were filled with spaces, the file would be a proper fixed-length file.
In that case you do not need an adapter module: you can read the file with fixed-length FCC parameters and eliminate the bad records in the mapping with a user-defined function.
Otherwise, you can write two interfaces. The first interface picks up the file, reads it record by record, checks for the special characters, ignores the bad records, and sends out a clean file. The second interface then takes this clean file and does a straightforward content conversion and mapping.
It is your choice: the adapter module or either of these two options. So you have three options and can pick one of them.
Regards,
---Satish
Hi
How can I ignore special characters like ! @ # $ % ^ & * ( ) _ - + = ? / ' " : ; { ] [ } \ | and others? I am using FCC.
Also, I am not able to remove the complete record when I find a special character in any field of the row.
I have tried fieldContentFormatting = trim, but it is no use here because it only removes the leading spaces.
Thanks.
Hi,
First read the whole file into XI and do two mappings. In the first mapping, check the total length of the record; if the record is shorter than the sum of the fixed field lengths, ignore it. Now you have only records of equal length, some of which may still contain special characters. After this condition, in the same mapping, use a UDF with a regular expression using [a-z][A-Z], so that if anything other than lowercase a to z and uppercase A to Z is present, you can ignore the record. For how to use regular expressions in a UDF, please check these blogs:
/people/wojciech.gasiorowski/blog/2006/11/01/the-power-of-regular-expressions-in-graphical-mapping-xi
/people/morten.wittrock/blog/2006/04/14/an-introduction-to-regular-expressions-in-java
This first mapping has now removed all the extra lines that contain special characters or have an incorrect record length. Since it is a fixed-length file, you can then take substrings and map them in the second mapping.
Alternatively, after the first mapping you can write the file to a shared drive, then create another interface which picks up this clean file, does the mapping, and completes your process.
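The substring step mentioned for the second mapping could be sketched like this, assuming the 7-character field width from the earlier example (the helper class and method are hypothetical):

```java
// Hypothetical helper that slices a fixed-length record into fields
public class FixedLengthSplitter {

    public static String[] split(String record, int fieldWidth) {
        int n = record.length() / fieldWidth;
        String[] fields = new String[n];
        for (int i = 0; i < n; i++) {
            // each field is a fixed-width slice; trim the padding spaces
            fields[i] = record.substring(i * fieldWidth, (i + 1) * fieldWidth).trim();
        }
        return fields;
    }
}
```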
Hope this makes sense to you.
Regards,
---Satish