=head1 NAME

perlunicode - Unicode support in Perl

=head1 DESCRIPTION

=head2 Important Caveats

Unicode support is an extensive requirement. While perl does not
implement the Unicode standard or the accompanying technical reports
from cover to cover, Perl does support many Unicode features.

=over 4

=item Input and Output Disciplines

A filehandle can be marked as containing perl's internal Unicode
encoding (UTF-8 or UTF-EBCDIC) by opening it with the ":utf8" layer.
Other encodings can be converted to perl's encoding on input, or from
perl's encoding on output by use of the ":encoding(...)" layer.
See L<open>.
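
For instance, a minimal sketch of reading and writing with these
layers (the file names are hypothetical):

    # read a file of UTF-8 encoded text as characters
    open my $in, "<:utf8", "in.utf8" or die "open: $!";

    # convert ISO 8859-7 input to Perl's encoding on the way in
    open my $greek, "<:encoding(iso-8859-7)", "in.greek" or die "open: $!";

    # write characters out as UTF-8
    open my $out, ">:utf8", "out.utf8" or die "open: $!";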

To mark the Perl source itself as being in a particular encoding,
see L<encoding>.

=item Regular Expressions

The regular expression compiler produces polymorphic opcodes.  That is,
the pattern adapts to the data and automatically switches to the Unicode
character scheme when presented with Unicode data, or a traditional
byte scheme when presented with byte data.

=item C<use utf8> still needed to enable UTF-8/UTF-EBCDIC in scripts

As a compatibility measure, this pragma must be explicitly used to
enable recognition of UTF-8 in the Perl scripts themselves on ASCII
based machines, or to recognize UTF-EBCDIC on EBCDIC based machines.
B<NOTE: this should be the only place where an explicit C<use utf8>
is needed>.

You can also use the C<encoding> pragma to change the default encoding
of the data in your script; see L<encoding>.
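
For example, assuming the script file itself really is saved in UTF-8:

    use utf8;             # the bytes of this source file are UTF-8
    my $word = "møøse";   # a literal with non-ASCII characters, now read
                          # as characters rather than as raw bytes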

=back

=head2 Byte and Character semantics

Beginning with version 5.6, Perl uses logically wide characters to
represent strings internally.

In future versions, Perl-level operations can generally be expected to
work with characters rather than bytes.

However, as strictly an interim compatibility measure, Perl aims to
provide a safe migration path from byte semantics to character
semantics for programs.  For operations where Perl can unambiguously
decide that the input data is characters, Perl now switches to
character semantics.  For operations where this determination cannot
be made without additional information from the user, Perl decides in
favor of compatibility, and chooses to use byte semantics.

This behavior preserves compatibility with earlier versions of Perl,
which allowed byte semantics in Perl operations, but only as long as
none of the program's inputs are marked as being a source of Unicode
character data.  Such data may come from filehandles, from calls to
external programs, from information provided by the system (such as %ENV),
or from literals and constants in the source text.

On Windows platforms, if the C<-C> command line switch is used (or the
${^WIDE_SYSTEM_CALLS} global flag is set to C<1>), all system calls
will use the corresponding wide character APIs.  Note that this is
currently only implemented on Windows, since other platforms lack an
API standard in this area.

Regardless of the above, the C<bytes> pragma can always be used to
force byte semantics in a particular lexical scope.  See L<bytes>.
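
A minimal sketch of the difference the pragma makes:

    my $str = "\x{263A}";              # one Unicode character
    print length($str), "\n";          # 1, under character semantics
    {
        use bytes;                     # byte semantics in this scope
        print length($str), "\n";      # 3, the internal UTF-8 octets
    }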

The C<utf8> pragma is primarily a compatibility device that enables
recognition of UTF-(8|EBCDIC) in literals encountered by the parser.
Note that this pragma is only required until a future version of Perl
in which character semantics will become the default.  This pragma may
then become a no-op.  See L<utf8>.

Unless mentioned otherwise, Perl operators will use character semantics
when they are dealing with Unicode data, and byte semantics otherwise.
Thus, character semantics for these operations apply transparently; if
the input data came from a Unicode source (for example, by adding a
character encoding discipline to the filehandle whence it came, or a
literal Unicode string constant in the program), character semantics
apply; otherwise, byte semantics are in effect.  To force byte semantics
on Unicode data, the C<bytes> pragma should be used.

Notice that if you concatenate strings with byte semantics and strings
with Unicode character data, the bytes will by default be upgraded
I<as if they were ISO 8859-1 (Latin-1)> (or if in EBCDIC, after a
translation to ISO 8859-1). This is done without regard to the
system's native 8-bit encoding, so to change this for systems with
non-Latin-1 (or non-EBCDIC) native encodings, use the C<encoding>
pragma; see L<encoding>.
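
A minimal sketch of this default upgrade:

    my $byte = "\xE9";             # a single byte, 0xE9
    my $wide = "\x{263A}";         # a Unicode character
    my $both = $byte . $wide;      # $byte is upgraded as if it were
                                   # ISO 8859-1, so it becomes U+00E9,
                                   # LATIN SMALL LETTER E WITH ACUTE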

Under character semantics, many operations that formerly operated on
bytes change to operating on characters. A character in Perl is
logically just a number ranging from 0 to 2**31 or so. Larger
characters may encode to longer sequences of bytes internally, but
this is just an internal detail which is hidden at the Perl level.
See L<perluniintro> for more on this.

=head2 Effects of character semantics

Character semantics have the following effects:

=over 4

=item *

Strings (including hash keys) and regular expression patterns may
contain characters that have an ordinal value larger than 255.

If you use a Unicode editor to edit your program, Unicode characters
may occur directly within the literal strings in one of the various
Unicode encodings (UTF-8, UTF-EBCDIC, UCS-2, etc.), but are recognized
as such (and converted to Perl's internal representation) only if the
appropriate L<encoding> is specified.

You can also get Unicode characters into a string by using the C<\x{...}>
notation, putting the Unicode code for the desired character, in
hexadecimal, into the curlies. For instance, a smiley face is C<\x{263A}>.
This works only for characters with a code 0x100 and above.

Additionally, if you

   use charnames ':full';

you can use the C<\N{...}> notation, putting the official Unicode character
name within the curlies. For example, C<\N{WHITE SMILING FACE}>.
This works for all characters that have names.
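
Both notations in one short sketch:

    use charnames ':full';
    my $smiley1 = "\x{263A}";                 # by hexadecimal code point
    my $smiley2 = "\N{WHITE SMILING FACE}";   # by official Unicode name
    print "same\n" if $smiley1 eq $smiley2;   # the same character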

=item *

If an appropriate L<encoding> is specified, identifiers within the
Perl script may contain Unicode alphanumeric characters, including
ideographs.  (You are currently on your own when it comes to using the
canonical forms of characters--Perl doesn't (yet) attempt to
canonicalize variable names for you.)

=item *

Regular expressions match characters instead of bytes.  For instance,
"." matches a character instead of a byte.  (However, the C<\C> pattern
is provided to force a match of a single byte ("C<char>" in C, hence C<\C>).)

=item *

Character classes in regular expressions match characters instead of
bytes, and match against the character properties specified in the
Unicode properties database.  So C<\w> can be used to match an
ideograph, for instance.

=item *

Named Unicode properties, scripts, and block ranges may be used like
character classes via the new C<\p{}> (matches property) and C<\P{}>
(doesn't match property) constructs. For instance, C<\p{Lu}> matches any
character with the Unicode "Lu" (Letter, uppercase) property, while
C<\p{M}> matches any character with a "M" (mark -- accents and such)
property. Single-letter properties may omit the curly brackets, so that
C<\p{M}> can also be written C<\pM>. Many predefined properties are available, such
as C<\p{Mirrored}> and C<\p{Tibetan}>.

The official Unicode script and block names have spaces and dashes as
separators, but for convenience you can have dashes, spaces, and underbars
at every word division, and you need not care about correct casing. It is
recommended, however, that for consistency you use the following naming:
the official Unicode script, block, or property name (see below for the
additional rules that apply to block names), with whitespace and dashes
removed, and the words "uppercase-first-lowercase-rest". That is, "Latin-1
Supplement" becomes "Latin1Supplement".

You can also negate both C<\p{}> and C<\P{}> by introducing a caret
(^) between the first curly and the property name: C<\p{^Tamil}> is
equal to C<\P{Tamil}>.
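
A few illustrative matches:

    print "1\n" if "A" =~ /\p{Lu}/;    # matches: an uppercase letter
    print "2\n" if "a" =~ /\P{Lu}/;    # matches: not an uppercase letter
    print "3\n" if "a" =~ /\p{^Lu}/;   # same thing, using the caret form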

Here are the basic Unicode General Category properties, followed by their
long form (you can use either, e.g. C<\p{Lu}> and C<\p{UppercaseLetter}>
are identical).

    Short       Long

    L           Letter
    Lu          UppercaseLetter
    Ll          LowercaseLetter
    Lt          TitlecaseLetter
    Lm          ModifierLetter
    Lo          OtherLetter

    M           Mark
    Mn          NonspacingMark
    Mc          SpacingMark
    Me          EnclosingMark

    N           Number
    Nd          DecimalNumber
    Nl          LetterNumber
    No          OtherNumber

    P           Punctuation
    Pc          ConnectorPunctuation
    Pd          DashPunctuation
    Ps          OpenPunctuation
    Pe          ClosePunctuation
    Pi          InitialPunctuation
                (may behave like Ps or Pe depending on usage)
    Pf          FinalPunctuation
                (may behave like Ps or Pe depending on usage)
    Po          OtherPunctuation

    S           Symbol
    Sm          MathSymbol
    Sc          CurrencySymbol
    Sk          ModifierSymbol
    So          OtherSymbol

    Z           Separator
    Zs          SpaceSeparator
    Zl          LineSeparator
    Zp          ParagraphSeparator

    C           Other
    Cc          Control
    Cf          Format
    Cs          Surrogate   (not usable)
    Co          PrivateUse
    Cn          Unassigned

The single-letter properties match all characters in any of the
two-letter sub-properties starting with the same letter.
There's also C<L&> which is an alias for C<Ll>, C<Lu>, and C<Lt>.

Because Perl hides the need for the user to understand the internal
representation of Unicode characters, it has no need to support the
somewhat messy concept of surrogates. Therefore, the C<Cs> property is not
supported.

Because scripts differ in their directionality (for example Hebrew is
written right to left), Unicode supplies these properties:

    Property    Meaning

    BidiL       Left-to-Right
    BidiLRE     Left-to-Right Embedding
    BidiLRO     Left-to-Right Override
    BidiR       Right-to-Left
    BidiAL      Right-to-Left Arabic
    BidiRLE     Right-to-Left Embedding
    BidiRLO     Right-to-Left Override
    BidiPDF     Pop Directional Format
    BidiEN      European Number
    BidiES      European Number Separator
    BidiET      European Number Terminator
    BidiAN      Arabic Number
    BidiCS      Common Number Separator
    BidiNSM     Non-Spacing Mark
    BidiBN      Boundary Neutral
    BidiB       Paragraph Separator
    BidiS       Segment Separator
    BidiWS      Whitespace
    BidiON      Other Neutrals

For example, C<\p{BidiR}> matches all characters that are normally
written right to left.

=back

=head2 Scripts

The scripts available via C<\p{...}> and C<\P{...}>, for example
C<\p{Latin}> or C<\p{Cyrillic}>, are as follows:

    Arabic
    Armenian
    Bengali
    Bopomofo
    Buhid
    CanadianAboriginal
    Cherokee
    Cyrillic
    Deseret
    Devanagari
    Ethiopic
    Georgian
    Gothic
    Greek
    Gujarati
    Gurmukhi
    Han
    Hangul
    Hanunoo
    Hebrew
    Hiragana
    Inherited
    Kannada
    Katakana
    Khmer
    Lao
    Latin
    Malayalam
    Mongolian
    Myanmar
    Ogham
    OldItalic
    Oriya
    Runic
    Sinhala
    Syriac
    Tagalog
    Tagbanwa
    Tamil
    Telugu
    Thaana
    Thai
    Tibetan
    Yi

There are also extended property classes that supplement the basic
properties, defined by the F<PropList> Unicode database:

    ASCIIHexDigit
    BidiControl
    Dash
    Deprecated
    Diacritic
    Extender
    GraphemeLink
    HexDigit
    Hyphen
    Ideographic
    IDSBinaryOperator
    IDSTrinaryOperator
    JoinControl
    LogicalOrderException
    NoncharacterCodePoint
    OtherAlphabetic
    OtherDefaultIgnorableCodePoint
    OtherGraphemeExtend
    OtherLowercase
    OtherMath
    OtherUppercase
    QuotationMark
    Radical
    SoftDotted
    TerminalPunctuation
    UnifiedIdeograph
    WhiteSpace

and further derived properties:

    Alphabetic      Lu + Ll + Lt + Lm + Lo + OtherAlphabetic
    Lowercase       Ll + OtherLowercase
    Uppercase       Lu + OtherUppercase
    Math            Sm + OtherMath

    ID_Start        Lu + Ll + Lt + Lm + Lo + Nl
    ID_Continue     ID_Start + Mn + Mc + Nd + Pc

    Any             Any character
    Assigned        Any non-Cn character (i.e. synonym for \P{Cn})
    Unassigned      Synonym for \p{Cn}
    Common          Any character (or unassigned code point)
                    not explicitly assigned to a script

For backward compatibility, all properties mentioned so far may have C<Is>
prepended to their name (e.g. C<\P{IsLu}> is equal to C<\P{Lu}>).

=head2 Blocks

In addition to B<scripts>, Unicode also defines B<blocks> of characters.
The difference between scripts and blocks is that the scripts concept is
closer to natural languages, while the blocks concept is more an artificial
grouping based on groups of mostly 256 Unicode characters. For example, the
C<Latin> script contains letters from many blocks. On the other hand, the
C<Latin> script does not contain all the characters from those blocks. It
does not, for example, contain digits because digits are shared across many
scripts. Digits and other similar groups, like punctuation, are in a
category called C<Common>.

For more about scripts, see the UTR #24:

   http://www.unicode.org/unicode/reports/tr24/

For more about blocks, see:

   http://www.unicode.org/Public/UNIDATA/Blocks.txt

Block names are given with the C<In> prefix. For example, the
Katakana block is referenced via C<\p{InKatakana}>. The C<In>
prefix may be omitted if there is no naming conflict with a script
or any other property, but it is recommended that C<In> always be used
to avoid confusion.
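
A small sketch of the script-versus-block distinction described above:

    print "1\n" if "3" =~ /\p{InBasicLatin}/;  # matches: the block
                                               # contains the ASCII digits
    print "2\n" if "3" =~ /\p{Latin}/;         # does NOT match: digits are
                                               # in the Common script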

These block names are supported:

    InAlphabeticPresentationForms
    InArabic
    InArabicPresentationFormsA
    InArabicPresentationFormsB
    InArmenian
    InArrows
    InBasicLatin
    InBengali
    InBlockElements
    InBopomofo
    InBopomofoExtended
    InBoxDrawing
    InBraillePatterns
    InBuhid
    InByzantineMusicalSymbols
    InCJKCompatibility
    InCJKCompatibilityForms
    InCJKCompatibilityIdeographs
    InCJKCompatibilityIdeographsSupplement
    InCJKRadicalsSupplement
    InCJKSymbolsAndPunctuation
    InCJKUnifiedIdeographs
    InCJKUnifiedIdeographsExtensionA
    InCJKUnifiedIdeographsExtensionB
    InCherokee
    InCombiningDiacriticalMarks
    InCombiningDiacriticalMarksforSymbols
    InCombiningHalfMarks
    InControlPictures
    InCurrencySymbols
    InCyrillic
    InCyrillicSupplementary
    InDeseret
    InDevanagari
    InDingbats
    InEnclosedAlphanumerics
    InEnclosedCJKLettersAndMonths
    InEthiopic
    InGeneralPunctuation
    InGeometricShapes
    InGeorgian
    InGothic
    InGreekExtended
    InGreekAndCoptic
    InGujarati
    InGurmukhi
    InHalfwidthAndFullwidthForms
    InHangulCompatibilityJamo
    InHangulJamo
    InHangulSyllables
    InHanunoo
    InHebrew
    InHighPrivateUseSurrogates
    InHighSurrogates
    InHiragana
    InIPAExtensions
    InIdeographicDescriptionCharacters
    InKanbun
    InKangxiRadicals
    InKannada
    InKatakana
    InKatakanaPhoneticExtensions
    InKhmer
    InLao
    InLatin1Supplement
    InLatinExtendedA
    InLatinExtendedAdditional
    InLatinExtendedB
    InLetterlikeSymbols
    InLowSurrogates
    InMalayalam
    InMathematicalAlphanumericSymbols
    InMathematicalOperators
    InMiscellaneousMathematicalSymbolsA
    InMiscellaneousMathematicalSymbolsB
    InMiscellaneousSymbols
    InMiscellaneousTechnical
    InMongolian
    InMusicalSymbols
    InMyanmar
    InNumberForms
    InOgham
    InOldItalic
    InOpticalCharacterRecognition
    InOriya
    InPrivateUseArea
    InRunic
    InSinhala
    InSmallFormVariants
    InSpacingModifierLetters
    InSpecials
    InSuperscriptsAndSubscripts
    InSupplementalArrowsA
    InSupplementalArrowsB
    InSupplementalMathematicalOperators
    InSupplementaryPrivateUseAreaA
    InSupplementaryPrivateUseAreaB
    InSyriac
    InTagalog
    InTagbanwa
    InTags
    InTamil
    InTelugu
    InThaana
    InThai
    InTibetan
    InUnifiedCanadianAboriginalSyllabics
    InVariationSelectors
    InYiRadicals
    InYiSyllables

=over 4

=item *

The special pattern C<\X> matches any extended Unicode sequence
(a "combining character sequence" in Standardese), where the first
character is a base character and subsequent characters are mark
characters that apply to the base character.  It is equivalent to
C<(?:\PM\pM*)>.
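
For example:

    my $str = "e\x{301}";            # "e" + COMBINING ACUTE ACCENT
    print "1\n" if $str =~ /^\X$/;   # matches: one combining sequence
    print "2\n" if $str =~ /^.$/;    # does NOT match: "." sees only
                                     # one of the two characters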

=item *

The C<tr///> operator translates characters instead of bytes.  Note
that the C<tr///CU> functionality has been removed, as the interface
was a mistake.  For similar functionality see pack('U0', ...) and
pack('C0', ...).

=item *

Case translation operators use the Unicode case translation tables
when provided character input.  Note that C<uc()> (also known as C<\U>
in doublequoted strings) translates to uppercase, while C<ucfirst>
(also known as C<\u> in doublequoted strings) translates to titlecase
(for languages that make the distinction).  Naturally the
corresponding backslash sequences have the same semantics.
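
One character for which the uppercase and titlecase mappings differ is
the DZ digraph:

    my $dz    = "\x{01F3}";   # LATIN SMALL LETTER DZ
    my $upper = uc $dz;       # "\x{01F1}", LATIN CAPITAL LETTER DZ
    my $title = ucfirst $dz;  # "\x{01F2}", LATIN CAPITAL LETTER D
                              # WITH SMALL LETTER Z (titlecase)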

=item *

Most operators that deal with positions or lengths in the string will
automatically switch to using character positions, including
C<chop()>, C<substr()>, C<pos()>, C<index()>, C<rindex()>,
C<sprintf()>, C<write()>, and C<length()>.  Operators that
specifically don't switch include C<vec()>, C<pack()>, and
C<unpack()>.  Operators that really don't care include C<chomp()>, as
well as any other operator that treats a string as a bucket of bits,
such as C<sort()>, and the operators dealing with filenames.
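
A minimal sketch of character positions:

    my $str = "\x{263A}abc";   # four characters (more bytes internally)
    length($str);              # 4
    substr($str, 0, 1);        # the smiley, not its first internal byte
    index($str, "c");          # 3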

=item *

The C<pack()>/C<unpack()> letters "C<c>" and "C<C>" do I<not> change,
since they're often used for byte-oriented formats.  (Again, think
"C<char>" in the C language.)  However, there is a new "C<U>" specifier
that will convert between Unicode characters and integers.

=item *

The C<chr()> and C<ord()> functions work on characters.  This is like
C<pack("U")> and C<unpack("U")>, not like C<pack("C")> and
C<unpack("C")>.  In fact, the latter are how you now emulate
byte-oriented C<chr()> and C<ord()> for Unicode strings.
(Note that this reveals the internal encoding of Unicode strings,
which is not something one normally needs to care about at all.)

=item *

The bit string operators C<& | ^ ~> can operate on character data.
However, for backward compatibility reasons (bit string operations
were previously defined only for strings whose characters all have
ordinal values below 256) one should not use C<~> (the bit complement)
on strings containing characters both below 256 and at or above 256.
Most importantly, De Morgan's laws
(C<~($x|$y) eq ~$x&~$y>, C<~($x&$y) eq ~$x|~$y>) won't hold.
Another way to look at this is that the complement cannot return
B<both> the 8-bit (byte) wide bit complement B<and> the full character
wide bit complement.

=item *

lc(), uc(), lcfirst(), and ucfirst() work for the following cases:

=over 8

=item *

the case mapping is from a single Unicode character to another
single Unicode character

=item *

the case mapping is from a single Unicode character to more
than one Unicode character

=back

What doesn't yet work are the following cases:

=over 8

=item *

the "final sigma" (Greek)

=item *

anything to do with locales (Lithuanian, Turkish, Azeri)

=back

See the Unicode Technical Report #21, Case Mappings, for more details.

=item *

And finally, C<scalar reverse()> reverses by character rather than by byte.

=back

=head2 User-defined Character Properties

You can define your own character properties by defining subroutines
that have names beginning with "In" or "Is".  The subroutines must be
visible in the package that uses the properties.  The user-defined
properties can be used in the regular expression C<\p> and C<\P>
constructs.

The subroutines must return a specially formatted string: one or more
newline-separated lines.  Each line must be one of the following:

=over 4

=item *

Two hexadecimal numbers separated by horizontal whitespace (space or
tabulator characters) denoting a range of Unicode codepoints to include.

=item *

Something to include, prefixed by "+": either a built-in character
property (prefixed by "utf8::"), for all the characters in that
property; or two hexadecimal codepoints for a range; or a single
hexadecimal codepoint.

=item *

Something to exclude, prefixed by "-": either an existing character
property (prefixed by "utf8::"), for all the characters in that
property; or two hexadecimal codepoints for a range; or a single
hexadecimal codepoint.

=item *

Something to negate, prefixed "!": either an existing character
property (prefixed by "utf8::") for all the characters except the
characters in the property; or two hexadecimal codepoints for a range;
or a single hexadecimal codepoint.

=back

For example, to define a property that covers both the Japanese
syllabaries (hiragana and katakana), you can define

    sub InKana {
	return <<END;
    3040\t309F
    30A0\t30FF
    END
    }

Imagine that the here-doc end marker is at the beginning of the line.
Now you can use C<\p{InKana}> and C<\P{InKana}>.
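
For instance, once C<InKana> is visible in the current package:

    print "kana\n"  if "\x{30C6}" =~ /\p{InKana}/;   # KATAKANA LETTER TE
    print "other\n" if "A"        =~ /\P{InKana}/;   # not in the kana ranges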

You could also have used the existing block property names:

    sub InKana {
	return <<'END';
    +utf8::InHiragana
    +utf8::InKatakana
    END
    }

Suppose you wanted to match only the allocated characters,
not the raw block ranges: in other words, you want to remove
the unassigned code points:

    sub InKana {
	return <<'END';
    +utf8::InHiragana
    +utf8::InKatakana
    -utf8::IsCn
    END
    }

The negation is useful for defining (surprise!) negated classes.

    sub InNotKana {
	return <<'END';
    !utf8::InHiragana
    -utf8::InKatakana
    +utf8::IsCn
    END
    }

=head2 Character encodings for input and output

See L<Encode>.

=head2 Unicode Regular Expression Support Level

The following list describes, feature by feature, the Unicode regular
expression support implemented in Perl as of Perl 5.8.0.  The "Level N"
and the section numbers refer to Unicode Technical Report 18,
"Unicode Regular Expression Guidelines".

=over 4

=item *

Level 1 - Basic Unicode Support

        2.1 Hex Notation                        - done          [1]
            Named Notation                      - done          [2]
        2.2 Categories                          - done          [3][4]
        2.3 Subtraction                         - MISSING       [5][6]
        2.4 Simple Word Boundaries              - done          [7]
        2.5 Simple Loose Matches                - done          [8]
        2.6 End of Line                         - MISSING       [9][10]

        [ 1] \x{...}
        [ 2] \N{...}
        [ 3] . \p{...} \P{...}
        [ 4] now scripts (see UTR#24 Script Names) in addition to blocks
        [ 5] have negation
        [ 6] can use regular expression look-ahead [a]
             or user-defined character properties [b] to emulate subtraction
        [ 7] include Letters in word characters
        [ 8] note that perl does Full casefolding in matching, not Simple:
             for example U+1F88 is equivalent to U+1F00 U+03B9,
             not to U+1F80.  This difference matters for certain Greek
             capital letters with certain modifiers: the Full casefolding
             decomposes the letter, while the Simple casefolding would map
             it to a single character.
        [ 9] see UTR#13 Unicode Newline Guidelines
        [10] should do ^ and $ also on \x{85}, \x{2028} and \x{2029}
             (should also affect <>, $., and script line numbers)
             (the \x{85}, \x{2028} and \x{2029} do match \s)

[a] You can mimic class subtraction using lookahead.
For example, what TR18 might write as

    [{Greek}-[{UNASSIGNED}]]

in Perl can be written as:

    (?!\p{Unassigned})\p{InGreekAndCoptic}
    (?=\p{Assigned})\p{InGreekAndCoptic}

But in this particular example, you probably really want

    \p{Greek}

which will match assigned characters known to be part of the Greek script.

[b] See L</User-defined Character Properties>.

=item *

Level 2 - Extended Unicode Support

        3.1 Surrogates                          - MISSING
        3.2 Canonical Equivalents               - MISSING       [11][12]
        3.3 Locale-Independent Graphemes        - MISSING       [13]
        3.4 Locale-Independent Words            - MISSING       [14]
        3.5 Locale-Independent Loose Matches    - MISSING       [15]

        [11] see UTR#15 Unicode Normalization
        [12] have Unicode::Normalize but not integrated to regexes
        [13] have \X but at this level . should equal that
        [14] need three classes, not just \w and \W
        [15] see UTR#21 Case Mappings

=item *

Level 3 - Locale-Sensitive Support

        4.1 Locale-Dependent Categories         - MISSING
        4.2 Locale-Dependent Graphemes          - MISSING       [16][17]
        4.3 Locale-Dependent Words              - MISSING
        4.4 Locale-Dependent Loose Matches      - MISSING
        4.5 Locale-Dependent Ranges             - MISSING

        [16] see UTR#10 Unicode Collation Algorithms
        [17] have Unicode::Collate but not integrated to regexes

=back

=head2 Unicode Encodings

Unicode characters are assigned to I<code points>, which are abstract
numbers.  To use these numbers, various encodings are needed.

=over 4

=item *

UTF-8

UTF-8 is a variable-length (1 to 6 bytes, current character allocations
require 4 bytes), byteorder independent encoding. For ASCII, UTF-8 is
transparent (and we really do mean 7-bit ASCII, not another 8-bit encoding).

The following table is from Unicode 3.2.

 Code Points            1st Byte  2nd Byte  3rd Byte  4th Byte

   U+0000..U+007F       00..7F
   U+0080..U+07FF       C2..DF    80..BF
   U+0800..U+0FFF       E0        A0..BF    80..BF  
   U+1000..U+CFFF       E1..EC    80..BF    80..BF  
   U+D000..U+D7FF       ED        80..9F    80..BF  
   U+D800..U+DFFF       ******* ill-formed *******
   U+E000..U+FFFF       EE..EF    80..BF    80..BF  
  U+10000..U+3FFFF      F0        90..BF    80..BF    80..BF
  U+40000..U+FFFFF      F1..F3    80..BF    80..BF    80..BF
 U+100000..U+10FFFF     F4        80..8F    80..BF    80..BF

Note the A0..BF in U+0800..U+0FFF, the 80..9F in U+D000..U+D7FF,
the 90..BF in U+10000..U+3FFFF, and the 80..8F in U+100000..U+10FFFF.
The "gaps" are caused by legal UTF-8 avoiding non-shortest encodings:
it is technically possible to UTF-8-encode a single code point in different
ways, but that is explicitly forbidden, and the shortest possible encoding
should always be used (and that is what Perl does).

Or, another way to look at it, as bits:

 Code Points                    1st Byte   2nd Byte  3rd Byte  4th Byte

                    0aaaaaaa     0aaaaaaa
            00000bbbbbaaaaaa     110bbbbb  10aaaaaa
            ccccbbbbbbaaaaaa     1110cccc  10bbbbbb  10aaaaaa
  00000dddccccccbbbbbbaaaaaa     11110ddd  10cccccc  10bbbbbb  10aaaaaa

As you can see, the continuation bytes all begin with C<10>, and the
leading bits of the start byte tell how many bytes there are in the
encoded character.
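
For example, U+263A needs the three-byte form.  One way to peek at the
octets Perl actually stores (normally an internal detail) is:

    my $smiley = "\x{263A}";
    {
        use bytes;             # look at the internal octets
        printf "%d bytes: %s\n", length($smiley),
            join " ", map { sprintf "%02X", $_ } unpack "C*", $smiley;
        # prints "3 bytes: E2 98 BA"
    }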

=item *

UTF-EBCDIC

Like UTF-8, but EBCDIC-safe, as UTF-8 is ASCII-safe.

=item *

UTF-16, UTF-16BE, UTF-16LE, Surrogates, and BOMs (Byte Order Marks)

(The following items are mostly for reference; Perl doesn't
use them internally.)

UTF-16 is a 2 or 4 byte encoding.  The Unicode code points
0x0000..0xFFFF are stored in a single 16-bit unit, and the code points
0x010000..0x10FFFF in two 16-bit units.  The latter case is
using I<surrogates>, the first 16-bit unit being the I<high
surrogate>, and the second being the I<low surrogate>.

Surrogates are code points set aside to encode the 0x010000..0x10FFFF
range of Unicode code points in pairs of 16-bit units.  The I<high
surrogates> are the range 0xD800..0xDBFF, and the I<low surrogates>
are the range 0xDC00..0xDFFF.  The surrogate encoding is

	$hi = int(($uni - 0x10000) / 0x400) + 0xD800;
	$lo = ($uni - 0x10000) % 0x400 + 0xDC00;

and the decoding is

	$uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
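
For example, U+10437 encodes as the surrogate pair 0xD801 0xDC37:

    my $uni = 0x10437;
    my $hi  = int(($uni - 0x10000) / 0x400) + 0xD800;             # 0xD801
    my $lo  =     ($uni - 0x10000) % 0x400  + 0xDC00;             # 0xDC37
    my $back = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00); # 0x10437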

If you try to generate surrogates (for example by using chr()), you
will get a warning if warnings are turned on (C<-w> or C<use
warnings;>) because those code points are not valid for a Unicode
character.

Because of the 16-bitness, UTF-16 is byteorder dependent.  UTF-16
itself can be used for in-memory computations, but if storage or
transfer is required, either UTF-16BE (Big Endian) or UTF-16LE
(Little Endian) must be chosen.

This introduces another problem: what if you just know that your data
is UTF-16, but you don't know which endianness?  Byte Order Marks
(BOMs) are a solution to this.  A special character has been reserved
in Unicode to function as a byte order marker: the character with the
code point 0xFEFF is the BOM.

The trick is that if you read a BOM, you will know the byte order,
since if it was written on a big endian platform, you will read the
bytes 0xFE 0xFF, but if it was written on a little endian platform,
you will read the bytes 0xFF 0xFE.  (And if the originating platform
was writing in UTF-8, you will read the bytes 0xEF 0xBB 0xBF.)

The way this trick works is that the character with the code point
0xFFFE is guaranteed not to be a valid Unicode character, so the
sequence of bytes 0xFF 0xFE is unambiguously "BOM, represented in
little-endian format" and cannot be "0xFFFE, represented in big-endian
format".

=item *

UTF-32, UTF-32BE, UTF-32LE

The UTF-32 family is pretty much like the UTF-16 family, except that
the units are 32-bit, and therefore the surrogate scheme is not
needed.  The BOM signatures will be 0x00 0x00 0xFE 0xFF for BE and
0xFF 0xFE 0x00 0x00 for LE.

=item *

UCS-2, UCS-4

Encodings defined by the ISO 10646 standard.  UCS-2 is a 16-bit
encoding.  Unlike UTF-16, UCS-2 is not extensible beyond 0xFFFF,
because it does not use surrogates.  UCS-4 is a 32-bit encoding,
functionally identical to UTF-32.

=item *

UTF-7

A seven-bit safe (non-eight-bit) encoding, useful if the
transport/storage is not eight-bit safe.  Defined by RFC 2152.

=back

=head2 Security Implications of Unicode

=over 4

=item *

Malformed UTF-8

Unfortunately, the specification of UTF-8 leaves some room for
interpretation of how many bytes of encoded output one should generate
from one input Unicode character.  Strictly speaking, one is supposed
to always generate the shortest possible sequence of UTF-8 bytes,
because otherwise there is potential for input buffer overflow at
the receiving end of a UTF-8 connection.  Perl always generates the
shortest length UTF-8, and with warnings on (C<-w> or C<use
warnings;>) Perl will warn about non-shortest length UTF-8 (and other
malformations, too, such as the surrogates, which are not real
Unicode code points.)

=item *

Regular expressions behave slightly differently between byte data and
character (Unicode) data.  For example, the "word character" character
class C<\w> will work differently when the data is all eight-bit bytes
or when the data is Unicode.

In the first case, the set of C<\w> characters is either small (the
default set of alphabetic characters, digits, and the "_"), or, if you
are using a locale (see L<perllocale>), the C<\w> might contain a few
more letters according to your language and country.

In the second case, the C<\w> set of characters is much, much larger,
and most importantly, even in the set of the first 256 characters, it
will most probably be different: as opposed to most locales (which are
specific to a language and country pair) Unicode classifies all the
characters that are letters as C<\w>.  For example: your locale might
not think that LATIN SMALL LETTER ETH is a letter (unless you happen
to speak Icelandic), but Unicode does.

As discussed elsewhere, Perl tries to stand one leg (two legs, as
camels are quadrupeds?) in two worlds: the old world of bytes and the new
world of characters, upgrading from bytes to characters when necessary.
If your legacy code is not explicitly using Unicode, no automatic
switchover to characters should happen, and characters shouldn't get
downgraded back to bytes, either.  It is possible to accidentally mix
bytes and characters, however (see L<perluniintro>), in which case the
C<\w> might start behaving differently.  Review your code.
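
A sketch of the difference, using LATIN SMALL LETTER ETH (U+00F0) and
no locale:

    my $eth = "\xF0";               # a single byte of data
    print "1\n" if $eth =~ /\w/;    # does NOT match under byte semantics
    utf8::upgrade($eth);            # now marked as Unicode character data
    print "2\n" if $eth =~ /\w/;    # matches: Unicode says eth is a letter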

=back

=head2 Unicode in Perl on EBCDIC

The way Unicode is handled on EBCDIC platforms is still rather
experimental.  On such a platform, references to UTF-8 encoding in this
document and elsewhere should be read as meaning UTF-EBCDIC as
specified in Unicode Technical Report 16 unless ASCII vs EBCDIC issues
are specifically discussed. There is no C<utfebcdic> pragma or
":utfebcdic" layer, rather, "utf8" and ":utf8" are re-used to mean
the platform's "natural" 8-bit encoding of Unicode. See L<perlebcdic>
for more discussion of the issues.

=head2 Locales

Usually locale settings and Unicode do not affect each other, but
there are a couple of exceptions:

=over 4

=item *

If your locale environment variables (LANGUAGE, LC_ALL, LC_CTYPE, LANG)
contain the strings 'UTF-8' or 'UTF8' (case-insensitive matching),
the default encoding of your STDIN, STDOUT, and STDERR, and of
B<any subsequent file open>, is UTF-8.

=item *

Perl tries really hard to work both with Unicode and the old byte
oriented world: most often this is nice, but sometimes this causes
problems.

=back

=head2 Using Unicode in XS

If you want to handle Perl Unicode in XS extensions, you may find
the following C APIs useful (see perlapi for details):

=over 4

=item *

DO_UTF8(sv) returns true if the UTF8 flag is on and the bytes pragma
is not in effect.  SvUTF8(sv) returns true if the UTF8 flag is on; the
bytes pragma is ignored.  The UTF8 flag being on does B<not> mean that
there are any characters of code points greater than 255 (or 127) in
the scalar, or that there even are any characters in the scalar.
What the UTF8 flag means is that the sequence of octets in the
representation of the scalar is the sequence of UTF-8 encoded
code points of the characters of a string.  The UTF8 flag being
off means that each octet in this representation encodes a single
character with codepoint 0..255 within the string.  Perl's Unicode
model is not to use UTF-8 until it's really necessary.

=item *

uvuni_to_utf8(buf, chr) writes a Unicode character code point into a
buffer encoding the code point as UTF-8, and returns a pointer
pointing after the UTF-8 bytes.

=item *

utf8_to_uvuni(buf, lenp) reads UTF-8 encoded bytes from a buffer and
returns the Unicode character code point (and optionally the length of
the UTF-8 byte sequence).

=item *

utf8_length(start, end) returns the length of the UTF-8 encoded buffer
in characters.  sv_len_utf8(sv) returns the length of the UTF-8 encoded
scalar.

=item *

sv_utf8_upgrade(sv) converts the string of the scalar to its UTF-8
encoded form.  sv_utf8_downgrade(sv) does the opposite (if possible).
sv_utf8_encode(sv) is like sv_utf8_upgrade but the UTF8 flag does not
get turned on.  sv_utf8_decode() does the opposite of sv_utf8_encode().
Note that none of these are to be used as general purpose encoding/decoding
interfaces: use Encode for that.  sv_utf8_upgrade() is affected by the
encoding pragma, but sv_utf8_downgrade() is not (since the encoding
pragma is designed to be a one-way street).

=item *

is_utf8_char(s) returns true if the pointer points to a valid UTF-8
character.

=item *

is_utf8_string(buf, len) returns true if the len bytes of the buffer
are valid UTF-8.

=item *

UTF8SKIP(buf) will return the number of bytes in the UTF-8 encoded
character in the buffer.  UNISKIP(chr) will return the number of bytes
required to UTF-8-encode the Unicode character code point.  UTF8SKIP()
is useful for example for iterating over the characters of a UTF-8
encoded buffer; UNISKIP() is useful for example in computing
the size required for a UTF-8 encoded buffer.

=item *

utf8_distance(a, b) will tell the distance in characters between the
two pointers pointing to the same UTF-8 encoded buffer.

=item *

utf8_hop(s, off) will return a pointer to a UTF-8 encoded buffer that
is C<off> (positive or negative) Unicode characters displaced from the
UTF-8 buffer C<s>.  Be careful not to overstep the buffer: utf8_hop()
will merrily run off the end or the beginning if told to do so.

=item *

pv_uni_display(dsv, spv, len, pvlim, flags) and sv_uni_display(dsv,
ssv, pvlim, flags) are useful for debug output of Unicode strings and
scalars.  By default they are useful only for debug: they display
B<all> characters as hexadecimal code points, but with the flags
UNI_DISPLAY_ISPRINT and UNI_DISPLAY_BACKSLASH you can make the output
more readable.

=item *

ibcmp_utf8(s1, pe1, l1, u1, s2, pe2, l2, u2) can be used to
compare two strings case-insensitively in Unicode.
(For case-sensitive comparisons you can just use memEQ() and memNE()
as usual.)

=back

For more information, see L<perlapi>, and F<utf8.c> and F<utf8.h>
in the Perl source code distribution.

=head1 BUGS

=head2 Interaction with locales

Use of locales with Unicode data may lead to odd results.  Currently
there is some attempt to apply 8-bit locale info to characters in the
range 0..255, but this is demonstrably incorrect for locales that use
characters above that range when mapped into Unicode.  It will also
tend to run slower.  Use of locales with Unicode is discouraged.

=head2 Interaction with extensions

When perl exchanges data with an extension, the extension should be
able to understand the UTF-8 flag and act accordingly. If the
extension doesn't know about the flag, the risk is high that it will
return data that are incorrectly flagged.

So if you're working with Unicode data, consult the documentation of
every module you're using to see whether there are any issues with
Unicode data exchange. If the documentation does not talk about Unicode
at all, suspect the worst and probably look at the source to learn how
the module is implemented. Modules written completely in Perl shouldn't
cause problems. Modules that directly or indirectly access code written
in other programming languages are at risk.

For affected functions the simple strategy to avoid data corruption is
to always make the encoding of the exchanged data explicit. Choose an
encoding you know the extension can handle. Convert arguments passed
to the extensions to that encoding and convert results back from that
encoding. Write wrapper functions that do the conversions for you, so
you can later change the functions when the extension catches up.

To provide an example, let's say the popular Foo::Bar::escape_html
function doesn't deal with Unicode data yet. The wrapper function
would convert the argument to raw UTF-8 and convert the result back to
perl's internal representation like so:

    sub my_escape_html ($) {
      my($what) = shift;
      return unless defined $what;
      Encode::decode_utf8(Foo::Bar::escape_html(Encode::encode_utf8($what)));
    }

Sometimes, when the extension does not convert data but just stores
and retrieves them, you will be in a position to use the otherwise
dangerous Encode::_utf8_on() function. Let's say the popular
C<Foo::Bar> extension, written in C, provides a C<param> method that
lets you store and retrieve data according to these prototypes:

    $self->param($name, $value);            # set a scalar
    $value = $self->param($name);           # retrieve a scalar

If it does not yet provide support for any encoding, one could write a
derived class with such a C<param> method:

    sub param {
      my($self,$name,$value) = @_;
      utf8::upgrade($name);     # make sure it is UTF-8 encoded
      if (defined $value) {
        utf8::upgrade($value);  # make sure it is UTF-8 encoded
        return $self->SUPER::param($name,$value);
      } else {
        my $ret = $self->SUPER::param($name);
        Encode::_utf8_on($ret); # we know, it is UTF-8 encoded
        return $ret;
      }
    }

Some extensions provide filters on data entry/exit points, such as
DB_File::filter_store_key and family. Look out for such filters in
the documentation of your extensions; they can make the transition to
Unicode data much easier.
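
For instance, a sketch (the file name is hypothetical) that uses the
DB_File filters mentioned above to store raw UTF-8 in the database
while keeping the Perl side in the internal representation:

    use DB_File;
    use Encode;

    tie my %hash, 'DB_File', 'data.db' or die "tie: $!";
    my $db = tied %hash;

    # encode to UTF-8 octets on the way in, decode on the way out
    $db->filter_store_key  (sub { $_ = Encode::encode_utf8($_) });
    $db->filter_store_value(sub { $_ = Encode::encode_utf8($_) });
    $db->filter_fetch_key  (sub { $_ = Encode::decode_utf8($_) });
    $db->filter_fetch_value(sub { $_ = Encode::decode_utf8($_) });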

=head2 Speed

Some functions are slower when working on UTF-8 encoded strings than
on byte encoded strings.  All functions that need to hop over
characters such as length(), substr() or index() can work B<much>
faster when the underlying data are byte-encoded. Witness the
following benchmark:

  % perl -e '
  use Benchmark;
  use strict;
  our $l = 10000;
  our $u = our $b = "x" x $l;
  substr($u,0,1) = "\x{100}";
  timethese(-2,{
  LENGTH_B => q{ length($b) },
  LENGTH_U => q{ length($u) },
  SUBSTR_B => q{ substr($b, $l/4, $l/2) },
  SUBSTR_U => q{ substr($u, $l/4, $l/2) },
  });
  '
  Benchmark: running LENGTH_B, LENGTH_U, SUBSTR_B, SUBSTR_U for at least 2 CPU seconds...
    LENGTH_B:  2 wallclock secs ( 2.36 usr +  0.00 sys =  2.36 CPU) @ 5649983.05/s (n=13333960)
    LENGTH_U:  2 wallclock secs ( 2.11 usr +  0.00 sys =  2.11 CPU) @ 12155.45/s (n=25648)
    SUBSTR_B:  3 wallclock secs ( 2.16 usr +  0.00 sys =  2.16 CPU) @ 374480.09/s (n=808877)
    SUBSTR_U:  2 wallclock secs ( 2.11 usr +  0.00 sys =  2.11 CPU) @ 6791.00/s (n=14329)

The numbers show an incredible slowness on long UTF-8 strings, and you
should carefully avoid using these functions in tight loops.  For
example, if you want to iterate over characters, it is infinitely
better to split into an array than to use substr, as the following
benchmark shows:

  % perl -e '
  use Benchmark;
  use strict;
  our $l = 10000;
  our $u = our $b = "x" x $l;
  substr($u,0,1) = "\x{100}";
  timethese(-5,{
  SPLIT_B => q{ for my $c (split //, $b){}  },
  SPLIT_U => q{ for my $c (split //, $u){}  },
  SUBSTR_B => q{ for my $i (0..length($b)-1){my $c = substr($b,$i,1);} },
  SUBSTR_U => q{ for my $i (0..length($u)-1){my $c = substr($u,$i,1);} },
  });
  '
  Benchmark: running SPLIT_B, SPLIT_U, SUBSTR_B, SUBSTR_U for at least 5 CPU seconds...
     SPLIT_B:  6 wallclock secs ( 5.29 usr +  0.00 sys =  5.29 CPU) @ 56.14/s (n=297)
     SPLIT_U:  5 wallclock secs ( 5.17 usr +  0.01 sys =  5.18 CPU) @ 55.21/s (n=286)
    SUBSTR_B:  5 wallclock secs ( 5.34 usr +  0.00 sys =  5.34 CPU) @ 123.22/s (n=658)
    SUBSTR_U:  7 wallclock secs ( 6.20 usr +  0.00 sys =  6.20 CPU) @  0.81/s (n=5)

You see, the algorithm based on substr() was faster with byte encoded
data but it is pathologically slow with UTF-8 data.

=head1 SEE ALSO

L<perluniintro>, L<encoding>, L<Encode>, L<open>, L<utf8>, L<bytes>,
L<perlretut>, L<perlvar/"${^WIDE_SYSTEM_CALLS}">

=cut