10-20-2015 8:27 AM
Hi Experts,
I have to check our Z-programs to see whether the Unicode flag is set, and change them if necessary.
One of the programs processes the special character hex 0C.
Here's the coding of the non-Unicode program:
DATA: gv_xstring TYPE x.
DATA: ff_cstring TYPE c.
DATA: BEGIN OF gv_struc,
        gv_struc_val TYPE counter1,
      END OF gv_struc.

gv_xstring = '0C'.
gv_struc-gv_struc_val = gv_xstring.
ff_cstring = gv_struc.
Debugging:
Variable                Value   Hex value
GV_XSTRING              0C      0C
GV_STRUC-GV_STRUC_VAL   12      0C
FF_CSTRING              #       0C
With "Unicode active" it is no longer allowed to write "ff_cstring = gv_struc.".
How can I change the coding so that the hex value stays 0C?
In all variants I've tried, the hex value changes, even when the displayed value remains #.
Can you please help me?
Thanks so much,
Monika
10-20-2015 9:14 AM
Hi.
Try this solution:
CLASS cl_abap_container_utilities DEFINITION LOAD.

CALL METHOD cl_abap_container_utilities=>fill_container_c
  EXPORTING
    im_value     = gv_struc
  IMPORTING
    ex_container = ff_cstring
  EXCEPTIONS
    illegal_parameter_type = 1
    OTHERS                 = 2.
Hope this helps.
Bye
10-20-2015 9:20 AM
Hi Monika,
It seems that you have a legacy program which moved a byte value (0C) into a CHAR field. This worked in non-Unicode systems (CHAR and HEX fields had essentially the same size, and 0C was a valid control character), but it cannot work in Unicode systems: in a Unicode system a CHAR1 field is 2 bytes wide, while a HEX1 field is still one byte, so there is no canonical conversion from CHAR1 to HEX1.
Characters in Unicode systems are represented in UTF-16 encoding, which requires 2 bytes per character. In a non-Unicode system, 0C represents a form-feed character. In a Unicode system's UTF-16 encoding, the form-feed character is U+000C, stored as the bytes 000C (big-endian, SAP code page 4102) or 0C00 (little-endian, SAP code page 4103), depending on your hardware architecture.
You can use a field symbol to move the 2 bytes of a TYPE X field (with length 2!) into a CHAR1 field in a Unicode system. But be sure to use the correct byte order to represent your special character in the X field.
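A minimal sketch of that field-symbol approach (variable names are illustrative, and the byte order shown assumes a big-endian system, code page 4102):

```abap
* Sketch only: assumes a Unicode program on big-endian hardware.
DATA: gv_hex  TYPE x LENGTH 2,
      gv_char TYPE c LENGTH 1.

FIELD-SYMBOLS: <fs_char> TYPE c.

* UTF-16BE bytes for the form-feed character U+000C.
* On a little-endian system (code page 4103) use '0C00' instead.
gv_hex = '000C'.

* View the 2-byte hex field as a single character and copy it.
ASSIGN gv_hex TO <fs_char> CASTING TYPE c.
gv_char = <fs_char>.
```

The CASTING addition works here because the byte lengths match: X LENGTH 2 and a Unicode CHAR1 are both 2 bytes wide.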
Regards,
Alex