12-11-2012 9:03 AM
Hi,
We are doing an upgrade of our system from 4.6C to ECC 6.0 (EHP6). I have a few questions related to the vocabulary assignment:
1. How is the vocabulary generated during the Unicode conversion?
2. How do we confirm the correctness of the language assigned in SPUM4? Is there a user acceptance procedure to be followed?
3. Which tasks can we ignore during the vocabulary assignment? (For example, we can skip the vocabulary assignment for table SNAP (short dump log).) Please suggest any further tasks we can ignore.
Regards,
Suresh Kumar Manoharan.
12-11-2012 10:18 AM
Hi Suresh Kumar,
1. I am not 100% sure what you mean ... All database tables are scanned in the non-Unicode system, and character-like data containing special characters from language-independent tables is added to the vocabulary.
2. Native speakers should check the vocabulary for words assigned to their language. In addition, the local end users should test the applications in their logon language.
3. SNAP: ABAP snapshot for runtime errors - in most cases customers do not need these entries after the conversion. In any case, I recommend looking at the tables with many entries in the vocabulary (including Z tables) and analyzing whether the data needs to be converted. For table INDX there are multiple SAP Notes that allow you to delete entries:
Examples:
836478 HR authorizations: Displaying the data in the INDX
989070 Cleanup for table INDX(SH)
977726 INDX cluster table contains many entries in area SM
1055431 Deletion of INDX entries
1089012 Cleanup table INDX(PE)/(PC)
1291662 Cleanup of table INDX(PR)
1292125 Cleaning up the table INDX(IW)
1294414 Cleaning up table INDX (VM/RT)
1302042 Cleanup for table INDX(KU)
1318670 Cleanup of table INDX(KE)
1390942 Deletion of DP entries from table INDX
1399175 Cleanup table INDX(AR)
1583140 Deletion of INDX entries with RELID = 'ST' for India- CIN
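Each of these SAP Notes targets a specific INDX area (RELID). Before applying them, it can help to see which areas actually dominate in your system. The following is a minimal, hypothetical sketch (the report name is illustrative, and the syntax is kept to classical Open SQL for an older release; verify it against your own system):

```abap
REPORT zindx_area_profile.

TYPES: BEGIN OF ty_count,
         relid TYPE indx-relid,
         cnt   TYPE i,
       END OF ty_count.

DATA: lt_counts TYPE STANDARD TABLE OF ty_count,
      ls_count  TYPE ty_count.

* Count the entries per area (RELID) in cluster table INDX.
SELECT relid COUNT( * )
  INTO TABLE lt_counts
  FROM indx
  GROUP BY relid.

* Areas with the largest entry counts are the best candidates
* for the cleanup notes listed above.
SORT lt_counts BY cnt DESCENDING.

LOOP AT lt_counts INTO ls_count.
  WRITE: / ls_count-relid, ls_count-cnt.
ENDLOOP.
```

The same idea applies to other large cluster or Z tables flagged in the vocabulary: first check which areas or clients the entries belong to, then decide whether the data must be converted or can be deleted.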
Regards,
Nils Buerckel
12-11-2012 6:39 PM
Hi Nils,
Thank you for your response.
I have a clarification.
While checking the correctness of the language assignment with the end users, is there a particular set of transactions or specific objects to check during this testing that would cover most of the likely cases?
Thanks once again for your inputs.
Regards,
Suresh Kumar Manoharan.
12-12-2012 12:57 PM
Hi Suresh Kumar,
as the scan processes all SAP (and customer) tables, and most of these tables contain character fields, there is no specific set of transactions/objects to be tested. We recommend that customers prioritize their testing based on business relevance ... similar to, e.g., an upgrade.
In any case, you should certainly check SAPscript, Smart Forms, and Adobe forms, and in addition the content of table ADRC.
Best regards,
Nils Buerckel