LevSelector.com New York

On this page:
UTF-8, UTF-16
Editing Unicode Files
Perl Unicode Support

Unicode

Unicode is a 16-bit character encoding standard (~65,000 characters). It also has an extension mechanism, called UTF-16, that allows for encoding ~1,000,000 additional characters. Currently there are only ~8,000 unused code points left for future expansion in the basic 16-bit encoding (plus ~900,000 extra through UTF-16). There are also ~6,400 code points reserved for private use (plus ~130,000 extra through UTF-16).

The Unicode Standard defines codes for characters used in all major languages (including the European alphabetic scripts, Middle Eastern right-to-left scripts, and scripts of Asia).

Unicode support is built into major operating systems (Windows 2000, Sun Solaris 7, Linux), programming languages (Java, Perl), data and presentation formats (HTML-4, XML), servers (web servers, databases, application servers, etc.).

www.unicode.org/ - main site
www.unicode.org/Public/UNIDATA/NamesList.txt - a 490 KB text file with all the 16-bit unicode code points
www.unicode.org/charts/ - Code Charts

www.hclrss.demon.co.uk/unicode - Alan Wood's Unicode Resources (HTML, Fonts, Browsers, MS Office, etc.)
www.cl.cam.ac.uk/~mgk25/unicode.html - UTF-8 and Unicode FAQ for Unix/Linux by Markus Kuhn

www.w3.org/TR/unicode-xml - Unicode in XML and other Markup Languages (W3C Draft)
www.sun.com/software/white-papers/wp-unicode - Unicode Support in the Solaris(tm) 7 OS
www.microsoft.com/globaldev/articles/unicode.asp - Microsoft about Unicode

UTF-8, UTF-16

Coded Representations of Unicode:
UCS (Universal Multiple-Octet Coded Character Set) format - multibyte format.

UCS-2 - characters encoded in two bytes (covers the Basic Multilingual Plane (BMP)).
UCS-4 - characters encoded in four bytes.
UTF-16 - UCS Transformation Format, 16-bit form (extended variant of UCS-2, with characters encoded in 2 or 4 bytes).
UTF-8 - UCS Transformation Format, 8-bit form - a transformation format with characters encoded in 1-6 bytes. Preserves ASCII compatibility (ASCII codes 0..127 are encoded unchanged).
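To illustrate how UTF-16 extends UCS-2: a supplementary character (above U+FFFF) is split into a pair of surrogate code units. A minimal sketch in Java - the class and method names are ours, and U+1D11E (MUSICAL SYMBOL G CLEF) is picked purely as an example:

```java
public class Utf16Surrogates {

    // Split a supplementary code point (U+10000..U+10FFFF) into a
    // UTF-16 surrogate pair: subtract 0x10000, then distribute the
    // resulting 20 bits over a high (0xD800+) and low (0xDC00+) surrogate.
    static char[] toSurrogates(int cp) {
        int v = cp - 0x10000;                      // 20-bit value
        char high = (char) (0xD800 + (v >> 10));   // high (leading) surrogate
        char low  = (char) (0xDC00 + (v & 0x3FF)); // low (trailing) surrogate
        return new char[] { high, low };
    }

    public static void main(String[] args) {
        char[] p = toSurrogates(0x1D11E);  // MUSICAL SYMBOL G CLEF
        System.out.printf("%04X %04X%n", (int) p[0], (int) p[1]); // prints D834 DD1E
    }
}
```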

www.cl.cam.ac.uk/~mgk25/unicode.html - a very good UTF-8 and Unicode FAQ for Unix/Linux by Markus Kuhn
czyborra.com/utf/ - Unicode Transformation Formats: UTF-8 & Co.
UCS characters U+0000 to U+007F (ASCII) are encoded simply as bytes 0x00 to 0x7F (ASCII compatibility). This means that files and strings which contain only 7-bit ASCII characters have the same encoding under both ASCII and UTF-8.
All UCS characters >U+007F are encoded as a sequence of several bytes, each of which has the most significant bit set. Therefore, no ASCII byte (0x00-0x7F) can appear as part of any other character. 
The first byte of a multibyte sequence that represents a non-ASCII character is always in the range 0xC0 to 0xFD and it indicates how many bytes follow for this character. All further bytes in a multibyte sequence are in the range 0x80 to 0xBF. This allows easy resynchronization and makes the encoding stateless and robust against missing bytes. 
All possible 2^31 UCS codes can be encoded. 
UTF-8 encoded characters may theoretically be up to six bytes long; however, 16-bit BMP characters are only up to three bytes long. 
The sorting order of big-endian UCS-4 byte strings is preserved. 
The bytes 0xFE and 0xFF are never used in the UTF-8 encoding. 
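The resynchronization property above follows from the first byte alone announcing the sequence length (its count of leading 1 bits), while continuation bytes are always of the form 10xxxxxx. A minimal sketch of reading the length from the first byte (the helper name is ours):

```java
public class Utf8Len {

    // Number of bytes in the UTF-8 sequence that starts with byte b,
    // determined by the count of leading 1 bits in b.
    static int seqLen(int b) {
        if (b < 0x80) return 1;   // 0xxxxxxx: ASCII, single byte
        if (b < 0xC0) return 0;   // 10xxxxxx: continuation byte, not a sequence start
        if (b < 0xE0) return 2;   // 110xxxxx
        if (b < 0xF0) return 3;   // 1110xxxx
        if (b < 0xF8) return 4;   // 11110xxx
        if (b < 0xFC) return 5;   // 111110xx
        return 6;                 // 1111110x
    }

    public static void main(String[] args) {
        System.out.println(seqLen(0xE2)); // prints 3 (start of a 3-byte sequence)
        System.out.println(seqLen(0x98)); // prints 0 (continuation byte - skip to resync)
    }
}
```

After a lost or corrupted byte, a decoder simply skips continuation bytes (length 0 here) until it sees a valid start byte again.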

The following byte sequences are used to represent a character. The sequence to be used depends on the Unicode number of the character: 

U-00000000 - U-0000007F:  0xxxxxxx 
U-00000080 - U-000007FF:  110xxxxx 10xxxxxx 
U-00000800 - U-0000FFFF:  1110xxxx 10xxxxxx 10xxxxxx 
U-00010000 - U-001FFFFF:  11110xxx 10xxxxxx 10xxxxxx 10xxxxxx 
U-00200000 - U-03FFFFFF:  111110xx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx 
U-04000000 - U-7FFFFFFF:  1111110x 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx 

The xxx bit positions are filled with the bits of the character code number in binary representation. The rightmost x bit is the least-significant bit. Only the shortest possible multibyte sequence which can represent the code number of the character can be used. Note that in multibyte sequences, the number of leading 1 bits in the first byte is identical to the number of bytes in the entire sequence. 
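The table above can be sketched directly in code. The following Java encoder handles only the three BMP ranges (up to U+FFFF); the class and method names are ours, not from any library:

```java
public class Utf8Encode {

    // Encode one code point into UTF-8 bytes, following the table above
    // (BMP ranges only: 1-, 2-, and 3-byte sequences).
    static byte[] encode(int cp) {
        if (cp < 0x80)                            // 0xxxxxxx
            return new byte[] { (byte) cp };
        if (cp < 0x800)                           // 110xxxxx 10xxxxxx
            return new byte[] { (byte) (0xC0 | (cp >> 6)),
                                (byte) (0x80 | (cp & 0x3F)) };
        return new byte[] { (byte) (0xE0 | (cp >> 12)),       // 1110xxxx
                            (byte) (0x80 | ((cp >> 6) & 0x3F)),
                            (byte) (0x80 | (cp & 0x3F)) };
    }

    public static void main(String[] args) {
        for (byte b : encode(0x263A))             // U+263A, the smiley face
            System.out.printf("%02X ", b & 0xFF); // prints E2 98 BA
        System.out.println();
    }
}
```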

Editing Unicode Files

Unicode text files on Windows:
  The first couple of bytes - the byte order mark (BOM), character U+FEFF - tell the editor that this is a unicode file, and the convention is the same for Sun (Java) and Microsoft.
  For example, you can save text from Java using a unicode stream - and then view/edit this file with MS Word, Notepad, or MS Internet Explorer.
  If you are editing a file in MS Word, you can save it as unicode (select File - Save As - Encoded Text - Unicode).
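The byte pattern of those first bytes identifies both the encoding and the byte order. A sketch of detecting it (the class and method names are ours):

```java
public class BomSniff {

    // Identify a Unicode byte order mark from the first bytes of a file.
    static String sniff(byte[] b) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF
                && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF)
            return "UTF-8";     // UTF-8 encoded form of U+FEFF
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE)
            return "UTF-16LE";  // little-endian (typical for Windows files)
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF)
            return "UTF-16BE";  // big-endian
        return "unknown";       // no recognizable BOM
    }

    public static void main(String[] args) {
        byte[] start = { (byte) 0xFF, (byte) 0xFE, 0x38, 0x04 };
        System.out.println(sniff(start)); // prints UTF-16LE
    }
}
```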

Edit UTF-8 HTML on Windows:
Microsoft's FrontPage 2000
www.namo.com/products - Namo WebEditor 4
Word 2000 - can save in HTML format, but you need HTML Filter 2.0 from the Office Update Web site to remove the Office-specific markup tags.

Microsoft's Internet Explorer 5 can convert HTML documents into UTF-8 character encoding. Simply display the file, make sure that the encoding and the display are correct, and then you can copy text into MS Word (or Save As from File menu).
The character encoding of an HTML document:
<meta http-equiv="content-type" content="text/html; charset=utf-8"> - UTF-8
   (the document character set of HTML-4 is equivalent to the unicode character set)
<meta http-equiv="content-type" content="text/html; charset=iso-8859-1"> - Western European (Windows)
<meta http-equiv="content-type" content="text/html; charset=windows-1251"> - Cyrillic (Windows)

www.hclrss.demon.co.uk/unicode/htmlunicode.html#encodings - Read more about encodings

Example: how to write unicode into a text file in Java:
 * string_out.java - example how to output a string into a unicode text file

import java.io.*; 

public class string_out { 

  public static void main(String[] args) { 
    try { 
      String str = "\u0438text";  // first character is the Russian letter 'i'

      // the "Unicode" encoding writes a byte order mark (U+FEFF)
      // followed by the UTF-16 representation of the string
      OutputStreamWriter out = new OutputStreamWriter(
          new FileOutputStream("test.txt"), "Unicode"); 
      out.write(str); 
      out.close(); 
    } catch (IOException e) { 
      System.err.println("Error: " + e); 
    } 
  } 
}



Perl Unicode Support

Perl unicode support (Perl 5.6.0)

http://rf.net/~james/perli18n.html - James Briggs' Perl, Unicode and i18N FAQ
www.cl.cam.ac.uk/~mgk25/unicode.html - UTF-8 and Unicode FAQ for Unix/Linux by Markus Kuhn
/CPAN/perl/bytes.html - bytes - Perl pragma to force byte semantics rather than character semantics.  Perl normally assumes character semantics in the presence of character data (i.e. data that has come from a source that has been marked as being of a particular character encoding).
use bytes;
no bytes;

The following areas need further work. 
Input and Output Disciplines
There is currently no easy way to mark data read from a file or other external source as being utf8. This will be one of the major areas of focus in the near future. 

Regular Expressions
The existing regular expression compiler does not produce polymorphic opcodes. This means that the determination on whether to match Unicode characters is made when the pattern is compiled, based on whether the pattern contains Unicode characters, and not when the matching happens at run time. This needs to be changed to adaptively match Unicode if the string to be matched is Unicode. 

use utf8 still needed to enable a few features
The utf8 pragma implements the tables used for Unicode support. These tables are automatically loaded on demand, so the utf8 pragma need not normally be used. 
However, as a compatibility measure, this pragma must be explicitly used to enable recognition of UTF-8 encoded literals and identifiers in the source text. 

Byte and Character semantics
Beginning with version 5.6, Perl uses logically wide characters to represent strings internally. This internal representation of strings uses the UTF-8 encoding.

In future, Perl-level operations can be expected to work with characters rather than bytes, in general. 

However, as strictly an interim compatibility measure, Perl v5.6 aims to provide a safe migration path from byte semantics to character semantics for programs. For operations where Perl can unambiguously decide that the input data is characters, Perl now switches to character semantics. For operations where this determination cannot be made without additional information from the user, Perl decides in favor of compatibility, and chooses to use byte semantics.

This behavior preserves compatibility with earlier versions of Perl, which allowed byte semantics in Perl operations, but only as long as none of the program's inputs are marked as being a source of Unicode character data. Such data may come from filehandles, from calls to external programs, from information provided by the system (such as %ENV), or from literals and constants in the source text. 

If the -C command line switch is used, (or the ${^WIDE_SYSTEM_CALLS} global flag is set to 1), all system calls will use the corresponding wide character APIs. This is currently only implemented on Windows. 

Regardless of the above, the bytes pragma can always be used to force byte semantics in a particular lexical scope. See bytes. 

The utf8 pragma is primarily a compatibility device that enables recognition of UTF-8 in literals encountered by the parser. It may also be used for enabling some of the more experimental Unicode support features. Note that this pragma is only required until a future version of Perl in which character semantics will become the default. This pragma may then become a no-op. See utf8. 

Unless mentioned otherwise, Perl operators will use character semantics when they are dealing with Unicode data, and byte semantics otherwise. Thus, character semantics for these operations apply transparently; if the input data came from a Unicode source (for example, by adding a character encoding discipline to the filehandle whence it came, or a literal UTF-8 string constant in the program), character semantics apply; otherwise, byte semantics are in effect. To force byte semantics on Unicode data, the bytes pragma should be used. 

Under character semantics, many operations that formerly operated on bytes change to operating on characters. For ASCII data this makes no difference, because UTF-8 stores ASCII in single bytes, but for any character greater than chr(127), the character may be stored in a sequence of two or more bytes, all of which have the high bit set. But by and large, the user need not worry about this, because Perl hides it from the user. A character in Perl is logically just a number ranging from 0 to 2**32 or so. Larger characters encode to longer sequences of bytes internally, but again, this is just an internal detail which is hidden at the Perl level. 

Effects of character semantics
Character semantics have the following effects: 

Strings and patterns may contain characters that have an ordinal value larger than 255. 

Presuming you use a Unicode editor to edit your program, such characters will typically occur directly within the literal strings as UTF-8 characters, but you can also specify a particular character with an extension of the \x notation. UTF-8 characters are specified by putting the hexadecimal code within curlies after the \x. For instance, a Unicode smiley face is \x{263A}. A character in the Latin-1 range (128..255) should be written \x{ab} rather than \xab, since the former will turn into a two-byte UTF-8 code, while the latter will continue to be interpreted as generating an 8-bit byte rather than a character. In fact, if the use warnings pragma or the -w switch is turned on, Perl will produce a warning that you might be generating invalid UTF-8. 

Identifiers within the Perl script may contain Unicode alphanumeric characters, including ideographs. (You are currently on your own when it comes to using the canonical forms of characters--Perl doesn't (yet) attempt to canonicalize variable names for you.) 

Regular expressions match characters instead of bytes. For instance, "." matches a character instead of a byte. (However, the \C pattern is provided to force a match of a single byte ("char" in C, hence \C).) 

Character classes in regular expressions match characters instead of bytes, and match against the character properties specified in the Unicode properties database. So \w can be used to match an ideograph, for instance. 

Named Unicode properties and block ranges may be used as character classes via the new \p{} (matches property) and \P{} (doesn't match property) constructs. For instance, \p{Lu} matches any character with the Unicode uppercase property, while \p{M} matches any mark character. Single-letter properties may omit the brackets, so \p{M} can also be written \pM. Many predefined character classes are available, such as \p{IsMirrored} and \p{InTibetan}.

The special pattern \X matches any extended Unicode sequence (a "combining character sequence" in Standardese), where the first character is a base character and subsequent characters are mark characters that apply to the base character. It is equivalent to (?:\PM\pM*). 

The tr/// operator translates characters instead of bytes. It can also be forced to translate between 8-bit codes and UTF-8. For instance, if you know your input is Latin-1, you can say: 

    while (<>) {
        tr/\0-\xff//CU;  # latin1 char to utf8
    }

Similarly you could translate your output with 

    tr/\0-\x{ff}//UC;  # utf8 to latin1 char

No, s/// doesn't take /U or /C (yet?). 

Case translation operators use the Unicode case translation tables when provided character input. Note that uc() translates to uppercase, while ucfirst translates to titlecase (for languages that make the distinction). Naturally the corresponding backslash sequences have the same semantics. 

Most operators that deal with positions or lengths in the string will automatically switch to using character positions, including chop(), substr(), pos(), index(), rindex(), sprintf(), write(), and length(). Operators that specifically don't switch include vec(), pack(), and unpack(). Operators that really don't care include chomp(), as well as any other operator that treats a string as a bucket of bits, such as sort(), and the operators dealing with filenames. 

The pack()/unpack() letters "c" and "C" do not change, since they're often used for byte-oriented formats. (Again, think "char" in the C language.) However, there is a new "U" specifier that will convert between UTF-8 characters and integers. (It works outside of the utf8 pragma too.) 

The chr() and ord() functions work on characters. This is like pack("U") and unpack("U"), not like pack("C") and unpack("C"). In fact, the latter are how you now emulate byte-oriented chr() and ord() under utf8. 

And finally, scalar reverse() reverses by character rather than by byte. 

Character encodings for input and output 
[XXX: This feature is not yet implemented.]
As of yet, there is no method for automatically coercing input and output to some encoding other than UTF-8. This is planned in the near future, however. 

Whether an arbitrary piece of data will be treated as "characters" or "bytes" by internal operations cannot be divined at the current time. 

Use of locales with utf8 may lead to odd results. Currently there is some attempt to apply 8-bit locale info to characters in the range 0..255, but this is demonstrably incorrect for locales that use characters above that range (when mapped into Unicode). It will also tend to run slower. Avoidance of locales is strongly encouraged. 
See also: bytes, utf8, perlvar/"${^WIDE_SYSTEM_CALLS}" 
Last updated: Wed Nov 8 15:39:16 2000


/CPAN/perl/utf8.html - utf8 - Perl pragma to enable/disable UTF-8 in source code
Contained in perl-5.6.0 
utf8 - Perl pragma to enable/disable UTF-8 in source code 
    use utf8;
    no utf8;
WARNING: The implementation of Unicode support in Perl is incomplete. See perlunicode for the exact details. 

The use utf8 pragma tells the Perl parser to allow UTF-8 in the program text in the current lexical scope. The no utf8 pragma tells Perl to switch back to treating the source text as literal bytes in the current lexical scope. 

This pragma is primarily a compatibility device. Perl versions earlier than 5.6 allowed arbitrary bytes in source code, whereas in future we would like to standardize on the UTF-8 encoding for source text. Until UTF-8 becomes the default format for source text, this pragma should be used to recognize UTF-8 in the source. When UTF-8 becomes the standard source format, this pragma will effectively become a no-op. 

Enabling the utf8 pragma has the following effects: 

Bytes in the source text that have their high-bit set will be treated as being part of a literal UTF-8 character. This includes most literals such as identifiers, string constants, constant regular expression patterns and package names. 

In the absence of inputs marked as UTF-8, regular expressions within the scope of this pragma will default to using character semantics instead of byte semantics. 

    @bytes_or_chars = split //, $data;  # may split to bytes if $data isn't UTF-8

    use utf8;                           # force char semantics
    @chars = split //, $data;           # splits characters
See also: perlunicode, bytes