author    Karl Williamson <khw@khw-desktop.(none)>   2010-05-05 12:16:48 -0600
committer Jesse Vincent <jesse@bestpractical.com>    2010-05-08 16:37:56 -0400
commit    9e5bbba0de25c01ae9355c7a97e237602a37e9f3 (patch)
tree      b3bd49cb6b3aac8959763398c8f87ddc08f47138 /pod
parent    d88362caea867f741c6a60e4a573f321c72b32d6 (diff)
download  perl-9e5bbba0de25c01ae9355c7a97e237602a37e9f3.tar.gz
perlunifaq, uniintro: fix for 80 col display
Diffstat (limited to 'pod')
-rw-r--r--   pod/perlunifaq.pod      6
-rw-r--r--   pod/perluniintro.pod   24
2 files changed, 16 insertions, 14 deletions
diff --git a/pod/perlunifaq.pod b/pod/perlunifaq.pod
index ab42ff194a..8d507709e7 100644
--- a/pod/perlunifaq.pod
+++ b/pod/perlunifaq.pod
@@ -84,12 +84,12 @@ or encode anymore, on things that use the layered handle.
You can provide this layer when C<open>ing the file:
- open my $fh, '>:encoding(UTF-8)', $filename; # auto encoding on write
- open my $fh, '<:encoding(UTF-8)', $filename; # auto decoding on read
+ open my $fh, '>:encoding(UTF-8)', $filename; # auto encoding on write
+ open my $fh, '<:encoding(UTF-8)', $filename; # auto decoding on read
Or if you already have an open filehandle:
- binmode $fh, ':encoding(UTF-8)';
+ binmode $fh, ':encoding(UTF-8)';
Some database drivers for DBI can also automatically encode and decode, but
that is sometimes limited to the UTF-8 encoding.
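(Aside, not part of this patch: a minimal sketch of such driver-level decoding, assuming DBD::SQLite and its sqlite_unicode flag; the database file, table, and column names are hypothetical.)

    use DBI;

    # sqlite_unicode asks DBD::SQLite to decode returned text from UTF-8
    my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
                           { RaiseError => 1, sqlite_unicode => 1 });
    my ($name) = $dbh->selectrow_array(
        "SELECT name FROM people WHERE id = 1");
    # $name now holds decoded Perl characters, not raw UTF-8 bytes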
diff --git a/pod/perluniintro.pod b/pod/perluniintro.pod
index bee286f5ea..54ce2f0a1c 100644
--- a/pod/perluniintro.pod
+++ b/pod/perluniintro.pod
@@ -344,7 +344,8 @@ layer when opening files
The I/O layers can also be specified more flexibly with
the C<open> pragma. See L<open>, or look at the following example.
- use open ':encoding(utf8)'; # input/output default encoding will be UTF-8
+ use open ':encoding(utf8)'; # input/output default encoding will be
+ # UTF-8
open X, ">file";
print X chr(0x100), "\n";
close X;
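(Illustration only, not part of this patch: reading the file back under the same pragma recovers the character; the handle name is a placeholder.)

    use open ':encoding(utf8)';       # same default layer as above

    open my $in, '<', "file" or die $!;
    my $line = <$in>;
    printf "U+%04X\n", ord $line;     # prints U+0100
    close $in;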
@@ -355,7 +356,8 @@ the C<open> pragma. See L<open>, or look at the following example.
With the C<open> pragma you can use the C<:locale> layer
BEGIN { $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R' }
- # the :locale will probe the locale environment variables like LC_ALL
+ # the :locale will probe the locale environment variables like
+ # LC_ALL
use open OUT => ':locale'; # russki parusski
open(O, ">koi8");
print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
@@ -432,13 +434,13 @@ its argument so that Unicode characters with code points greater than
255 are displayed as C<\x{...}>, control characters (like C<\n>) are
displayed as C<\x..>, and the rest of the characters as themselves:
- sub nice_string {
- join("",
- map { $_ > 255 ? # if wide character...
- sprintf("\\x{%04X}", $_) : # \x{...}
- chr($_) =~ /[[:cntrl:]]/ ? # else if control character ...
- sprintf("\\x%02X", $_) : # \x..
- quotemeta(chr($_)) # else quoted or as themselves
+ sub nice_string {
+ join("",
+ map { $_ > 255 ? # if wide character...
+ sprintf("\\x{%04X}", $_) : # \x{...}
+ chr($_) =~ /[[:cntrl:]]/ ? # else if control character ...
+ sprintf("\\x%02X", $_) : # \x..
+ quotemeta(chr($_)) # else quoted or as themselves
} unpack("W*", $_[0])); # unpack Unicode characters
}
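(Usage sketch, not part of this patch: calling the reformatted sub on a short mixed string.)

    print nice_string("foo\x{100}bar\n"), "\n";   # prints foo\x{0100}bar\x0A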
@@ -731,11 +733,11 @@ or:
You can find the bytes that make up a UTF-8 sequence with
- @bytes = unpack("C*", $Unicode_string)
+ @bytes = unpack("C*", $Unicode_string)
and you can create well-formed Unicode with
- $Unicode_string = pack("U*", 0xff, ...)
+ $Unicode_string = pack("U*", 0xff, ...)
=item *
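(Illustration only, not part of the commit: the U template round-trips code points; the variable names are placeholders.)

    my $str = pack("U*", 0xff, 0x100);    # two characters: U+00FF, U+0100
    my @cps = unpack("U*", $str);         # (0xff, 0x100) -- code points back
    printf "U+%04X\n", $_ for @cps;       # prints U+00FF then U+0100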