=head1 NAME

Pumpkin - Notes on handling the Perl Patch Pumpkin and Porting Perl

=head1 SYNOPSIS

There is no simple synopsis, yet.

=head1 DESCRIPTION

This document describes some of the considerations involved in
patching, porting, and maintaining perl.

This document is still under construction, and still subject to
significant changes.  Still, I hope parts of it will be useful, so I'm
releasing it even though it's not done.

For the most part, it's a collection of anecdotal information that
already assumes some familiarity with the Perl sources.  I really need
an introductory section that describes the organization of the sources
and all the various auxiliary files that are part of the distribution.

=head1 Where Do I Get Perl Sources and Related Material?

The Comprehensive Perl Archive Network (or CPAN) is the place to go.
There are many mirrors, but the easiest thing to use is probably
http://www.cpan.org/README.html , which automatically points you to a
mirror site "close" to you.

=head2 Perl5-porters mailing list

The mailing list perl5-porters@perl.org is the main group working on
the development of perl.  If you're interested in all the latest
developments, you should definitely subscribe.  The list is high
volume, but generally has a fairly low noise level.

Subscribe by sending the message (in the body of your letter)

    subscribe perl5-porters

to perl5-porters-request@perl.org .

Archives of the list are held at:

    http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/

=head1 How are Perl Releases Numbered?

Beginning with v5.6.0, even versions stand for maintenance releases
and odd versions for development releases, i.e., v5.6.x for
maintenance releases, and v5.7.x for development releases.  Before
v5.6.0, subversions _01 through _49 were reserved for bug-fix
maintenance releases, and subversions _50 through _99 for unstable
development versions.

For example, in v5.6.1, the revision number is 5, the version is 6,
and 1 is the subversion.
For compatibility with the older numbering scheme, the composite
floating-point version number continues to be available as the magic
variable $], and amounts to C<$revision + $version/1000 +
$subversion/100000>.  This can still be used in comparisons.

    print "You've got an old perl\n" if $] < 5.005_03;

In addition, the version is also available as a string in $^V.

    print "You've got a new perl\n" if $^V and $^V ge v5.6.0;

You can also require a particular version (or later) with:

    use 5.006;

or, using the new syntax available only from v5.6 onward:

    use v5.6.0;

At some point in the future, we may need to decide what to call the
next big revision.  In the .package file used by metaconfig to
generate Configure, there are two variables that might be relevant:
$baserev=5 and $package=perl5.

Perl releases produced by the members of perl5-porters are usually
available on CPAN in the F and F directories.

=head2 Maintenance and Development Subversions

The first rule of maintenance work is "First, do no harm."

Trial releases of bug-fix maintenance releases are announced on
perl5-porters.  Trial releases use the new subversion number (to avoid
testers installing one over the previous release) and include a 'local
patch' entry in F.  The distribution file contains the string
C to make clear that the file is not meant for public consumption.

In general, the names of official distribution files for the public
always match the regular expression:

    ^perl\d+\.(\d+)\.\d+(-MAINT_TRIAL_\d+)?\.tar\.gz$

C<$1> in the pattern is always an even number for maintenance
versions, and odd for developer releases.

In the past it has been observed that pumpkings tend to invent new
naming conventions on the fly.  If you are a pumpking, before you
invent a new name for any of the three types of perl distributions,
please inform the CPAN folks who do the indexing and maintain the
trees of symlinks and the like.  They will have to know what you
decide.

=head2 Why is it called the patch pumpkin?
Chip Salzenberg gets credit for that, with a nod to his cow orker,
David Croy.  We had passed around various names (baton, token, hot
potato) but none caught on.  Then, Chip asked:

    Who has the patch pumpkin?

    To explain: David Croy once told me that at a previous job, there
    was one tape drive and multiple systems that used it for backups.
    But instead of some high-tech exclusion software, they used a
    low-tech method to prevent multiple simultaneous backups: a
    stuffed pumpkin.  No one was allowed to make backups unless they
    had the "backup pumpkin".

The name has stuck.

=head1 Philosophical Issues in Patching and Porting Perl

There are no absolute rules, but there are some general guidelines I
have tried to follow as I apply patches to the perl sources.  (This
section is still under construction.)

=head2 Solve problems as generally as possible

Never implement a specific restricted solution to a problem when you
can solve the same problem in a more general, flexible way.

For example, for dynamic loading to work on some SVR4 systems, we had
to build a shared libperl.so library.  In order to build "FAT"
binaries on NeXT 4.0 systems, we had to build a special libperl
library.  Rather than continuing to build a contorted nest of special
cases, I generalized the process of building libperl so that NeXT and
SVR4 users could still get their work done, but others could build a
shared libperl if they wanted to as well.

Contain your changes carefully.  Assume nothing about other operating
systems, not even closely related ones.  Your changes must not affect
other platforms.

Spy shamelessly on how similar patching or porting issues have been
settled elsewhere.

If feasible, try to keep filenames 8.3-compliant to humor those poor
souls that get joy from running Perl under such dire limitations.
There's a script, F, for keeping your nose 8.3-clean.  In a similar
vein, do not create files or directories which differ only in case
(upper versus lower).
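The case-collision rule is easy to check mechanically.  The pipeline
below is a homegrown sketch, not the distribution's own checking
script: it prints any file paths that would collide on a
case-insensitive filesystem.

```shell
# List file paths that differ only in case (these would collide on a
# case-insensitive filesystem).  Illustrative only; run from the top
# of the source tree.
find . -type f | tr '[:upper:]' '[:lower:]' | sort | uniq -d
```

An empty output means the tree is clean in this respect.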
=head2 Seek consensus on major changes

If you are making big changes, don't do it in secret.  Discuss the
ideas in advance on perl5-porters.

=head2 Keep the documentation up-to-date

If your changes may affect how users use perl, then check to be sure
that the documentation is in sync with your changes.  Be sure to check
all the files F and also the F document.

Consider writing the appropriate documentation first and then
implementing your change to correspond to the documentation.

=head2 Avoid machine-specific #ifdef's

To the extent reasonable, try to avoid machine-specific #ifdef's in
the sources.  Instead, use feature-specific #ifdef's.  The reason is
that the machine-specific #ifdef's may not be valid across major
releases of the operating system.  Further, the feature-specific tests
may help out folks on another platform who have the same problem.

=head2 Machine-specific files

=over 4

=item source code

If you have many machine-specific #defines or #includes, consider
creating an "osish.h" (F, F, and so on) and including that in F.

If you have several machine-specific files (function emulations,
function stubs, build utility wrappers) you may create a separate
subdirectory (djgpp, win32) and put the files in there.  Remember to
update C<MANIFEST> when you add files.

If your system supports dynamic loading but none of the existing
methods at F work for you, you must write a new one.  Study the
existing ones to see what kind of interface you must supply.

=item build hints

There are two kinds of hints: hints for building Perl and hints for
extensions.  The former live in the C subdirectory, the latter in C
subdirectories.

The top level hints are Bourne-shell scripts that set, modify and
unset appropriate Configure variables, based on the Configure command
line options and possibly the existing config.sh and Policy.sh files
from previous Configure runs.
The extension hints are written in Perl (by the time they are used,
miniperl has been built) and control the building of their respective
extensions.  They can be used, for example, to manipulate compilation
and linking flags.

=item build and installation Makefiles, scripts, and so forth

Sometimes you will also need to tweak the Perl build and installation
procedure itself, for example F and F.  Tread very carefully, even
more than usual.  Contain your changes with utmost care.

=item test suite

Many of the tests in the C subdirectory assume machine-specific things
like the existence of certain functions, something about filesystem
semantics, or certain external utilities and their error messages.
Use the C<$^O> variable and the C<Config> module (which contains the
results of the Configure run; in effect, config.sh converted to Perl)
to either skip (preferably not) or customize (preferably) the tests
for your platform.

=item modules

Certain standard modules may need updating if your operating system
sports, for example, a native filesystem naming convention.  You may
want to update some or all of the modules File::Basename, File::Spec,
File::Path, and File::Copy to become aware of your native filesystem
syntax and peculiarities.

Remember to have a $VERSION in the modules.  You can use the F script
for checking this.

=item documentation

If your operating system comes from outside the UNIX world, you will
almost certainly have differences in the available operating system
functionality (missing system calls, different semantics, whatever).
Please document these in F.  If your operating system is the first B
to have a system call, also update the list of "portability-bewares"
at the beginning of F.

A file called F at the top level that explains things like how to
install perl on this platform, where to get any possibly required
additional software, and, for example, what test suite errors to
expect, is nice too.  Such files are in the process of being written
in pod format and will eventually be renamed F.
You may also want to write a separate F<.pod> file for your operating
system to tell about existing mailing lists, os-specific modules,
documentation, whatever.  Please name these along the lines of
FI.pod.  [unfinished: where to put this file (the pod/ subdirectory,
of course; but more importantly, which/what index files should be
updated?)]

=back

=head2 Allow for lots of testing

We should never release a main version without testing it as a
subversion first.

=head2 Test popular applications and modules

We should never release a main version without testing whether or not
it breaks various popular modules and applications.  A partial list of
such things would include majordomo, metaconfig, apache, Tk, CGI,
libnet, and libwww, to name just a few.  Of course it's quite possible
that some of those things will be just plain broken and need to be
fixed, but, in general, we ought to try to avoid breaking
widely-installed things.

=head2 Automated generation of derivative files

The F, F, F, F, F, and F files are all automatically generated by
perl scripts.  In general, don't patch these directly; patch the data
files instead.

F and F are also automatically generated by B.  In general, you
should patch the metaconfig units instead of patching these files
directly.  However, very minor changes to F may be made in between
major sync-ups with the metaconfig units, which tend to be complicated
operations.  But be careful: this can quickly spiral out of control.
Running metaconfig is not really hard.

Also F is automatically produced from F.  In general, look out for
all F<*.SH> files.

Finally, the sample files in the F subdirectory are generated
automatically by the script F included with the metaconfig units.  See
L<"run metaconfig"> below for information on obtaining the metaconfig
units.

=head1 How to Make a Distribution

This section has now been expanded and moved into its own file, F.
I've kept some of the subsections here for now, as they don't directly
relate to building a release any more, but still contain what might be
useful information - DAPM 7/2009.

=head2 run metaconfig

If you need to make changes to Configure or config_h.SH, it may be
best to change the appropriate metaconfig units instead, and
regenerate Configure.

    metaconfig -m

will regenerate F and F.  Much more information on obtaining and
running metaconfig is in the F file that comes with Perl's metaconfig
units.

Since metaconfig is hard to change, running correction scripts after
this generation is sometimes needed.  Configure gained complexity over
time, and the order in which config_h.SH is generated can cause havoc
when compiling perl.  Therefore, you need to run Porting/config_h.pl
after that generation.  All that and more is described in the README
files that come with the metaunits.

Perl's metaconfig units should be available on CPAN.  A set of units
that will work with perl5.9.x is in a file with a name similar to F
under L.  The mc_units tar file should be unpacked in your main perl
source directory.  Note: those units were for use with 5.9.x.  There
may have been changes since then.  Check for later versions or contact
perl5-porters@perl.org to obtain a pointer to the current version.

Alternatively, consider whether the F<*ish.h> files or the hint files
might be a better place for your changes.

=head2 MANIFEST

If you are using metaconfig to regenerate Configure, then you should
note that metaconfig actually uses MANIFEST.new, so you want to be
sure MANIFEST.new is up-to-date too.  I haven't found the
MANIFEST/MANIFEST.new distinction particularly useful, but that's
probably because I still haven't learned how to use the full suite of
tools in the dist distribution.

=head2 Run Configure

This will build a config.sh and config.h.  You can skip this if you
haven't changed Configure or config_h.SH at all.
I use the following command:

    sh Configure -Dprefix=/opt/perl -Doptimize=-O -Dusethreads \
        -Dcf_by='yourname' \
        -Dcf_email='yourname@yourhost.yourplace.com' \
        -Dperladmin='yourname@yourhost.yourplace.com' \
        -Dmydomain='.yourplace.com' \
        -Dmyhostname='yourhost' \
        -des

=head2 Update Porting/config.sh and Porting/config_H

[XXX This section needs revision.  We're currently working on easing
the task of keeping the vms, win32, and plan9 config.sh info
up-to-date.  The plan is to keep up-to-date 'canned' config.sh files
in the appropriate subdirectories and then generate 'canned' config.h
files for vms, win32, etc. from the generic config.sh file.  This is
to ease maintenance.  When Configure gets updated, the parts sometimes
get scrambled around, and the changes in config_H can sometimes be
very hard to follow.  config.sh, on the other hand, can safely be
sorted, so it's easy to track (typically very small) changes to
config.sh and then propagate them to a canned 'config.h' by any number
of means, including a perl script in win32/ or carrying F and F to a
Unix system and running sh config_h.SH.  VMS uses F to generate its
own F and F.  If you want to add a new variable to F, check with the
VMS folk how to add it to configure.com too.  XXX]

The F and F files are provided to help those folks who can't run
Configure.  It is important to keep them up-to-date.  If you have
changed F, those changes must be reflected in config_H as well.  (The
name config_H was chosen to distinguish the file from config.h even on
case-insensitive file systems.)  Simply edit the existing config_H
file; keep the first few explanatory lines and then copy your new
config.h below.

It may also be necessary to update win32/config.?c, and F, though you
should be quite careful in doing so if you are not familiar with those
systems.  You might want to issue your patch with a promise to quickly
issue a follow-up that handles those directories.
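Because config.sh can safely be sorted, tracking changes between two
Configure runs can be as simple as the following sketch (the filenames
here are illustrative, not conventions used by the build):

```shell
# Compare two config.sh snapshots after sorting, so that mere
# reordering by Configure does not show up as noise:
sort old-config.sh -o old-config.sorted
sort new-config.sh -o new-config.sorted
diff old-config.sorted new-config.sorted
```

The remaining diff lines are the variables that actually changed,
which can then be propagated to the canned config files by hand.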
=head2 make regen_perly

If F has been edited, it is necessary to run this target to rebuild F,
F and F.  In fact this target just runs the Perl script F.  Note that
F is I<not> rebuilt; it is just a plain static file now.

This target relies on you having Bison installed on your system.
Running the target will tell you if you haven't got the right version,
and if so, where to get the right one.  Or if you prefer, you could
hack F to work with your version of Bison.  The important things are
that the regexes can still extract out the right chunks of the Bison
output into F and F, and that the contents of those two files, plus F,
are functionally equivalent to those produced by the supported version
of Bison.

Note that in the old days, you had to do C instead.

=head2 make regen_all

This target takes care of the regen_headers target.  (It used to also
call the regen_pods target, but that has been eliminated.)

=head2 make regen_headers

The F, F, and F files are all automatically generated by perl
scripts.  Since the user isn't guaranteed to have a working perl, we
can't require the user to generate them.  Hence you have to, if you're
making a distribution.

I used to include rules like the following in the makefile:

    # The following three header files are generated automatically
    # The correct versions should be already supplied with the perl kit,
    # in case you don't have perl or 'sh' available.
    # The - is to ignore error return codes in case you have the source
    # installed read-only or you don't have perl yet.
    keywords.h: keywords.pl
    	@echo "Don't worry if this fails."
    	- perl keywords.pl

However, I got lots of mail consisting of people worrying because the
command failed.  I eventually decided that I would save myself time
and effort by manually running C myself rather than answering all the
questions and complaints about the failing command.

=head2 global.sym and perlio.sym

Make sure these files are up-to-date.  Read the comments in these
files and in F to see what to do.
=head2 Binary compatibility

If you do change F, think carefully about what you are doing.  To the
extent reasonable, we'd like to maintain source and binary
compatibility with older releases of perl.  That way, extensions built
under one version of perl will continue to work with new versions of
perl.

Of course, some incompatible changes may well be necessary.  I'm just
suggesting that we not make any such changes without thinking
carefully about them first.  If possible, we should provide
backwards-compatibility stubs.  There's a lot of XS code out there.
Let's not force people to keep changing it.

=head2 PPPort

F needs to be synchronized to include all new macros added to .h files
(normally F and F, but others as well).  Since chances are that when a
new macro is added the committer will forget to update F, it's best to
diff for changes in .h files when making a new release and make sure
that F contains them all.

The pumpking can delegate the synchronization responsibility to
anybody else, but the release process is the only place where we can
make sure that no new macros fell through the cracks.

=head2 Todo

The F file contains a roughly-categorized unordered list of aspects of
Perl that could use enhancement, features that could be added, areas
that could be cleaned up, and so on.  During your term as
pumpkin-holder, you will probably address some of these issues, and
perhaps identify others which, while you decide not to address them
this time around, may be tackled in the future.  Update the file to
reflect the situation as it stands when you hand over the pumpkin.

You might like, early in your pumpkin-holding career, to see if you
can find champions for particular issues on the to-do list: an issue
owned is an issue more likely to be resolved.

There are also some more porting-specific L items later in this file.

=head2 OS/2-specific updates

In the os2 directory is F, a set of OS/2-specific diffs against B.
If you make changes to Configure, you may want to consider
regenerating this diff file to save trouble for the OS/2 maintainer.

You can also consider the OS/2 diffs as reminders of portability
things that need to be fixed in Configure.

=head2 VMS-specific updates

The Perl revision number appears as "perl5" in F.  It is courteous to
update that if necessary.

=head2 Making a new patch

I find the F utility quite handy for making patches.  You can obtain
it from any CPAN archive under L.  There are a couple of differences
between my version and the standard one.  I have mine do a

    # Print a reassuring "End of Patch" note so people won't
    # wonder if their mailer truncated patches.
    print "\n\nEnd of Patch.\n";

at the end.  That's because I used to get questions from people asking
if their mail was truncated.

It also writes Index: lines which include the new directory prefix
(change the Index: print, approx. line 294 or 310 depending on the
version, to read:

    print PATCH ("Index: $newdir$new\n");

).  That helps patches work with more POSIX-conformant patch programs.

Here's how I generate a new patch.  I'll use the hypothetical
5.004_07 to 5.004_08 patch as an example.

    # unpack perl5.004_07/
    gzip -d -c perl5.004_07.tar.gz | tar -xof -
    # unpack perl5.004_08/
    gzip -d -c perl5.004_08.tar.gz | tar -xof -
    makepatch perl5.004_07 perl5.004_08 > perl5.004_08.pat

Makepatch will automatically generate appropriate B commands to remove
deleted files.  Unfortunately, it will not correctly set permissions
for newly created files, so you may have to do so manually.  For
example, patch 5.003_04 created a new test F which needs to be
executable, so at the top of the patch, I inserted the following
lines:

    # Make a new test
    touch t/op/gv.t
    chmod +x t/op/gv.t

Of course, my patch is now wrong because makepatch didn't know I was
going to do that command, and it patched against /dev/null.
So, what I do is sort out all such shell commands that need to be in
the patch (including possible mv-ing of files, if needed) and put them
in the shell commands at the top of the patch.  Next, I delete all the
patch parts of perl5.004_08.pat, leaving just the shell commands.
Then, I do the following:

    cd perl5.004_07
    sh ../perl5.004_08.pat
    cd ..
    makepatch perl5.004_07 perl5.004_08 >> perl5.004_08.pat

(Note the append to preserve my shell commands.)  Now, my patch will
line up with what the end users are going to do.

=head2 Testing your patch

It seems obvious, but be sure to test your patch.  That is, verify
that it produces exactly the same thing as your full distribution.

    rm -rf perl5.004_07
    gzip -d -c perl5.004_07.tar.gz | tar -xf -
    cd perl5.004_07
    sh ../perl5.004_08.pat
    patch -p1 -N < ../perl5.004_08.pat
    cd ..
    gdiff -r perl5.004_07 perl5.004_08

where gdiff is GNU diff.  Other diffs may also do recursive checking.

=head2 More testing

Again, it's obvious, but you should test your new version as widely as
you can.  You can be sure you'll hear about it quickly if your version
doesn't work on both ANSI and pre-ANSI compilers, and on common
systems such as SunOS 4.1.[34], Solaris, and Linux.

If your changes include conditional code, try to test the different
branches as thoroughly as you can.  For example, if your system
supports dynamic loading, you can also test static loading with

    sh Configure -Uusedl

You can also hand-tweak your config.h to try out different #ifdef
branches.

=head2 Other tests

=over 4

=item gcc -ansi -pedantic

    Configure -Dgccansipedantic [ -Dcc=gcc ]

will enable (via the cflags script, not $Config{ccflags}) the gcc
strict ANSI C flags -ansi and -pedantic for the compilation of the
core files on platforms where it knows it can do so (like Linux; see
cflags.SH for the full list), and on some platforms only one of them
(Solaris can do only -pedantic, not -ansi).
The flag -DPERL_GCC_PEDANTIC also gets added, since gcc does not add
any internal cpp flag to signify that -pedantic is being used, as it
does for -ansi (__STRICT_ANSI__).

Note that the -ansi and -pedantic are enabled only for version 3 (and
later) of gcc, since even gcc version 2.95.4 finds lots of seemingly
false "value computed not used" errors from Perl.

The -ansi and -pedantic are useful in catching at least the following
nonportable practices:

=over 4

=item *

gcc-specific extensions

=item *

lvalue casts

=item *

// C++ comments

=item *

enum trailing commas

=back

The -Dgccansipedantic should be used only when cleaning up the code,
not for production builds, since otherwise gcc cannot inline certain
things.

=back

=head1 Running Purify

Purify is a commercial tool that is helpful in identifying memory
overruns, wild pointers, memory leaks and other such badness.  Perl
must be compiled in a specific way for optimal testing with Purify.
Use the following commands to test perl with Purify:

    sh Configure -des -Doptimize=-g -Uusemymalloc -Dusemultiplicity \
        -Accflags=-DPURIFY
    setenv PURIFYOPTIONS "-chain-length=25"
    make all pureperl
    cd t
    ln -s ../pureperl perl
    setenv PERL_DESTRUCT_LEVEL 2
    ./perl TEST

Disabling Perl's malloc allows Purify to monitor allocations and leaks
more closely; using Perl's malloc will make Purify report most leaks
in the "potential" leaks category.  Enabling the multiplicity option
allows perl to clean up thoroughly when the interpreter shuts down,
which reduces the number of bogus leak reports from Purify.  The
-DPURIFY enables any Purify-specific debugging code in the sources.

Purify outputs messages in "Viewer" windows by default.
If you don't have a windowing environment or if you simply want the
Purify output to unobtrusively go to a log file instead of to the
interactive window, use the following options instead:

    setenv PURIFYOPTIONS "-chain-length=25 -windows=no -log-file=perl.log \
        -append-logfile=yes"

The only currently known leaks happen when there are compile-time
errors within eval or require.  (Fixing these is non-trivial,
unfortunately, but they must be fixed eventually.)

=head1 Common Gotchas

=over 4

=item Probably Prefer POSIX

It's often the case that you'll need to choose whether to do something
the BSD-ish way or the POSIX-ish way.  It's usually not a big problem
when the two systems use different names for similar functions, such
as memcmp() and bcmp().  The perl.h header file handles these by
appropriate #defines, selecting the POSIX mem*() functions if
available, but falling back on the b*() functions, if need be.

More serious is the case where some brilliant person decided to use
the same function name but give it a different meaning or calling
sequence :-).  getpgrp() and setpgrp() come to mind.  These are a real
problem on systems that aim for conformance to one standard (e.g.
POSIX), but still try to support the other way of doing things (e.g.
BSD).  My general advice (still not really implemented in the source)
is to do something like the following.  Suppose there are two
alternative versions, fooPOSIX() and fooBSD().

    #ifdef HAS_FOOPOSIX
        /* use fooPOSIX(); */
    #else
    #  ifdef HAS_FOOBSD
        /* try to emulate fooPOSIX() with fooBSD();
           perhaps with the following: */
    #    define fooPOSIX fooBSD
    #  else
        /* Uh, oh.  We have to supply our own. */
    #    define fooPOSIX Perl_fooPOSIX
    #  endif
    #endif

=item Think positively

If you need to add an #ifdef test, it is usually easier to follow if
you think positively, e.g.
    #ifdef HAS_NEATO_FEATURE
        /* use neato feature */
    #else
        /* use some fallback mechanism */
    #endif

rather than the more impenetrable

    #ifndef MISSING_NEATO_FEATURE
        /* Not missing it, so we must have it, so use it */
    #else
        /* Are missing it, so fall back on something else. */
    #endif

Of course for this toy example, there's not much difference.  But when
the #ifdef's start spanning a couple of screenfuls, and the #else's
are marked something like

    #else /* !MISSING_NEATO_FEATURE */

I find it easy to get lost.

=item Providing Missing Functions -- Problem

Not all systems have all the neat functions you might want or need, so
you might decide to be helpful and provide an emulation.  This is
sound in theory and very kind of you, but please be careful about what
you name the function.  Let me use the C<pause> function as an
illustration.

Perl5.003 has the following in F:

    #ifndef HAS_PAUSE
    #define pause() sleep((32767<<16)+32767)
    #endif

Configure sets HAS_PAUSE if the system has the pause() function, so
this #define only kicks in if the pause() function is missing.  Nice
idea, right?

Unfortunately, some systems apparently have a prototype for pause() in
F, but don't actually have the function in the library.  (Or maybe
they do have it in a library we're not using.)  Thus, the compiler
sees something like

    extern int pause(void);
    /* . . . */
    #define pause() sleep((32767<<16)+32767)

and dies with an error message.  (Some compilers don't mind this;
others apparently do.)

To work around this, 5.003_03 and later have the following in perl.h:

    /* Some unistd.h's give a prototype for pause() even though
       HAS_PAUSE ends up undefined.  This causes the #define
       below to be rejected by the compiler.  Sigh.
    */
    #ifdef HAS_PAUSE
    #  define Pause pause
    #else
    #  define Pause() sleep((32767<<16)+32767)
    #endif

This works.  The curious reader may wonder why I didn't do the
following in F instead:

    #ifndef HAS_PAUSE
    void pause()
    {
        sleep((32767<<16)+32767);
    }
    #endif

That is, since the function is missing, just provide it.
Then things would probably have been all right, it would seem.  Well,
almost.  It could be made to work.  The problem arises from the
conflicting needs of dynamic loading and namespace protection.

For dynamic loading to work on AIX (and VMS) we need to provide a list
of symbols to be exported.  This is done by the script F, which reads
F.  Thus, the C symbol would have to be added to F.  So far, so good.

On the other hand, one of the goals of Perl5 is to make it easy to
either extend or embed perl and link it with other libraries.  This
means we have to be careful to keep the visible namespace "clean".
That is, we don't want perl's global variables to conflict with those
in the other application library.  Although this work is still in
progress, the way it is currently done is via the F file.  This file
is built from the F file, since those files already list the globally
visible symbols.

If we had added C to global.sym, then F would contain the line

    #define pause Perl_pause

and calls to C in the perl sources would now point to C.  Now, when
B is run to build the F executable, it will go looking for C, which
probably won't exist in any of the standard libraries.  Thus the build
of perl will fail.

Those systems where C is not defined would be ok, however, since they
would get a C function in util.c.  The rest of the world would be in
trouble.

And yes, this scenario has happened.  On SCO, the function C is
available.  (I think it's in F<-lx>, the Xenix compatibility library.)
Since the perl4 days (and possibly before), Perl has included a C
function that gets called something akin to

    #ifndef HAS_CHSIZE
    I32 chsize(fd, length)
    /* . . . */
    #endif

When 5.003 added

    #define chsize Perl_chsize

to F, the compile started failing on SCO systems.

The "fix" is to give the function a different name.
The one implemented in 5.003_05 isn't optimal, but here's what was
done:

    #ifdef HAS_CHSIZE
    #  ifdef my_chsize  /* Probably #defined to Perl_my_chsize in embed.h */
    #    undef my_chsize
    #  endif
    #  define my_chsize chsize
    #endif

My explanatory comment in patch 5.003_05 said:

    Undef and then re-define my_chsize from Perl_my_chsize to
    just plain chsize if this system HAS_CHSIZE.  This probably only
    applies to SCO.  This shows the perils of having internal
    functions with the same name as external library functions :-).

Now, we can safely put C in F, export it, and hide it with F.

To be consistent with what I did for C, I probably should have called
the new function C, rather than C.  However, the perl sources are
quite inconsistent on this (consider New, Mymalloc, and Myremalloc, to
name just a few).

There is a problem with this fix, however, in that C was available as
a F library function in 5.003, but it isn't available any more (as of
5.003_07).  This means that we've broken binary compatibility.  This
is not good.

=item Providing missing functions -- some ideas

We currently don't have a standard way of handling such missing
function names.  Right now, I'm effectively thinking aloud about a
solution.  Some day, I'll try to formally propose a solution.

Part of the problem is that we want to have some functions listed as
exported but not have their names mangled by embed.h or possibly
conflict with names in standard system headers.  We actually already
have such a list at the end of F (though that list is out-of-date):
    cat <> perl.exp
    perl_init_ext
    perl_init_fold
    perl_init_i18nl14n
    perl_alloc
    perl_construct
    perl_destruct
    perl_free
    perl_parse
    perl_run
    perl_get_sv
    perl_get_av
    perl_get_hv
    perl_get_cv
    perl_call_argv
    perl_call_pv
    perl_call_method
    perl_call_sv
    perl_requirepv
    safecalloc
    safemalloc
    saferealloc
    safefree

This still needs much thought, but I'm inclined to think that one possible solution is to prefix all such functions with C<perl_> in the source and list them along with the other C<perl_*> functions in F<perl_exp.SH>. Thus, for C<chsize>, we'd do something like the following:

    /* in perl.h */
    #ifdef HAS_CHSIZE
    #  define perl_chsize chsize
    #endif

then in some file (e.g. F<perl.c> or F<util.c>) do

    #ifndef HAS_CHSIZE
    I32 perl_chsize(fd, length)
    /* implement the function here . . . */
    #endif

Alternatively, we could just always use C<chsize> everywhere and move C<chsize> from F<global.sym> to the end of F<perl_exp.SH>. That would probably be fine as long as our C<chsize> function agreed with all the C<chsize> function prototypes in the various systems we'll be using. As long as the prototypes in actual use don't vary that much, this is probably a good alternative. (As a counter-example, note how Configure and perl have to go through hoops to find and use Malloc_t and Free_t for C<malloc> and C<free>.) At the moment, this latter option is what I tend to prefer.

=item All the world's a VAX

Sorry, showing my age:-). Still, all the world is not BSD 4.[34], SVR4, or POSIX. Be aware that SVR3-derived systems are still quite common (do you have any idea how many systems run SCO?). If you don't have a bunch of v7 manuals handy, the metaconfig units (by default installed in F</usr/local/lib/dist/U>) are a good resource to look at for portability.

=back

=head1 Miscellaneous Topics

=head2 Autoconf

Why does perl use a metaconfig-generated Configure script instead of an autoconf-generated configure script? Metaconfig and autoconf are two tools with very similar purposes. Metaconfig is actually the older of the two, and was originally written by Larry Wall, while autoconf is probably now used in a wider variety of packages.
The autoconf info file discusses the history of autoconf and how it came to be. The curious reader is referred there for further information.

Overall, both tools are quite good, I think, and the choice of which one to use could be argued either way. In March, 1994, when I was just starting to work on Configure support for Perl5, I considered both autoconf and metaconfig, and eventually decided to use metaconfig for the following reasons:

=over 4

=item Compatibility with Perl4

Perl4 used metaconfig, so many of the #ifdef's were already set up for metaconfig. Of course metaconfig had evolved some since Perl4's days, but not so much that it posed any serious problems.

=item Metaconfig worked for me

My system at the time was Interactive 2.2, an SVR3.2/386 derivative that also had some POSIX support. Metaconfig-generated Configure scripts worked fine for me on that system. On the other hand, autoconf-generated scripts usually didn't. (They did come quite close, though, in some cases.) At the time, I actually fetched a large number of GNU packages and checked. Not a single one configured and compiled correctly out-of-the-box with the system's cc compiler.

=item Configure can be interactive

With both autoconf and metaconfig, if the script works, everything is fine. However, one of my main problems with autoconf-generated scripts was that if it guessed wrong about something, it could be B<very> hard to go back and fix it. For example, autoconf always insisted on passing the -Xp flag to cc (to turn on POSIX behavior), even when that wasn't what I wanted or needed for that package. There was no way short of editing the configure script to turn this off. You couldn't just edit the resulting Makefile at the end because the -Xp flag influenced a number of other configure tests.

Metaconfig's Configure scripts, on the other hand, can be interactive. Thus if Configure is guessing things incorrectly, you can go back and fix them.
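That interactivity is still how Configure behaves today: it prompts for every answer unless told otherwise. As a sketch (the prefix path here is just an example; the flags are described in the INSTALL file):

```shell
# Interactive run: Configure asks about every choice and lets you
# override its guesses as it goes.
sh Configure

# Non-interactive run: -d accepts the defaults for all answers,
# -e carries on past the production of config.sh without asking,
# -s runs silently, and -D overrides an individual symbol.
sh Configure -des -Dprefix=/opt/perl
```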
This isn't as important now as it was when we were actively developing Configure support for new features such as dynamic loading, but it's still useful occasionally.

=item GPL

At the time, autoconf-generated scripts were covered under the GNU Public License, and hence weren't suitable for inclusion with Perl, which has a different licensing policy. (Autoconf's licensing has since changed.)

=item Modularity

Metaconfig builds up Configure from a collection of discrete pieces called "units". You can override the standard behavior by supplying your own unit. With autoconf, you have to patch the standard files instead. I find the metaconfig "unit" method easier to work with. Others may find metaconfig's units clumsy to work with.

=back

=head2 Why isn't there a directory to override Perl's library?

Mainly because no one's gotten around to making one. Note that "making one" involves changing perl.c, Configure, config_h.SH (and associated files, see above), and I<documenting> it all in the INSTALL file.

Apparently, most folks who want to override one of the standard library files simply do it by overwriting the standard library files.

=head2 APPLLIB

In the perl.c sources, you'll find an undocumented APPLLIB_EXP variable, sort of like PRIVLIB_EXP and ARCHLIB_EXP (which are documented in config_h.SH). Here's what APPLLIB_EXP is for, from a mail message from Larry:

The main intent of APPLLIB_EXP is for folks who want to send out a version of Perl embedded in their product. They would set the symbol to be the name of the library containing the files needed to run or to support their particular application. This works at the "override" level to make sure they get their own versions of any library code that they absolutely must have configuration control over.

As such, I don't see any conflict with a sysadmin using it for a override-ish sort of thing, when installing a generic Perl. It should probably have been named something to do with overriding though.
Since it's undocumented we could still change it... :-)

Given that it's already there, you can use it to override distribution modules. One way to do that is to add

    ccflags="$ccflags -DAPPLLIB_EXP=\"/my/override\""

to your config.over file. (You have to be particularly careful to get the double quotes in. APPLLIB_EXP must be a valid C string. It might actually be easier to just #define it yourself in perl.c.)

Then perl.c will put /my/override ahead of ARCHLIB and PRIVLIB. Perl will also search architecture-specific and version-specific subdirectories of APPLLIB_EXP.

=head2 Shared libperl.so location

Why isn't the shared libperl.so installed in /usr/lib/ along with "all the other" shared libraries? Instead, it is installed in $archlib, which is typically something like

    /usr/local/lib/perl5/archname/5.00404

and is architecture- and version-specific.

The basic reason why a shared libperl.so gets put in $archlib is so that you can have more than one version of perl on the system at the same time, and have each refer to its own libperl.so. Three examples might help. All of these work now; none would work if you put libperl.so in /usr/lib.

=over

=item 1.

Suppose you want to have both threaded and non-threaded perl versions around. Configure will name both perl libraries "libperl.so" (so that you can link to them with -lperl). The perl binaries tell them apart by looking in the appropriate $archlib directories.

=item 2.

Suppose you have perl5.004_04 installed and you want to try to compile it again, perhaps with different options or after applying a patch. If you already have libperl.so installed in /usr/lib/, then it may be either difficult or impossible to get ld.so to find the new libperl.so that you're trying to build. If, instead, libperl.so is tucked away in $archlib, then you can always just change $archlib in the current perl you're trying to build so that ld.so won't find your old libperl.so.
(The INSTALL file suggests you do this when building a debugging perl.)

=item 3.

The shared perl library is not a "well-behaved" shared library with proper major and minor version numbers, so you can't necessarily have perl5.004_04 and perl5.004_05 installed simultaneously. Suppose perl5.004_04 were to install /usr/lib/libperl.so.4.4, and perl5.004_05 were to install /usr/lib/libperl.so.4.5. Now, when you try to run perl5.004_04, ld.so might try to load libperl.so.4.5, since it has the right "major version" number. If this works at all, it almost certainly defeats the reason for keeping perl5.004_04 around. Worse, with development subversions, you certainly can't guarantee that libperl.so.4.4 and libperl.so.4.55 will be compatible. Anyway, all this leads to quite obscure failures that are sure to drive casual users crazy. Even experienced users will get confused :-). Upon reflection, I'd say leave libperl.so in $archlib.

=back

=head2 Indentation style

Over the years Perl has become a mishmash of various indentation styles, but the original "Larry style" can probably be restored with (GNU) indent somewhat like this:

    indent -kr -nce -psl -sc

A more ambitious solution would also specify a list of Perl specific types with -TSV -TAV -THV .. -TMAGIC -TPerlIO ... but that list would be quite ungainly. Also note that GNU indent doesn't do aligning of consecutive assignments, which would truly wreck the layout in places like sv.c:Perl_sv_upgrade() or sv.c:Perl_clone_using(). Similarly, nicely aligned &&s, ||s and ==s would not be respected.

=head1 Upload Your Work to CPAN

You can upload your work to CPAN if you have a CPAN id. Check out the information on PAUSE, the Perl Authors Upload Server.

I typically upload both the patch file and the full tar file.

If you want your patch to appear in the appropriate source directory on CPAN, send e-mail to the CPAN master librarian. (Check out http://www.cpan.org/CPAN.html ).
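The mechanics of producing those two files can be sketched as follows; the version numbers and directory names below are hypothetical, not taken from any actual release.

```shell
# Assume two unpacked source trees: the previous release and yours.
# Generate a recursive unified diff of the whole tree as the patch...
diff -ruN perl5.004_07 perl5.004_08 > perl5.004_08.pat
gzip --best perl5.004_08.pat              # -> perl5.004_08.pat.gz

# ...and roll up the full distribution tar file.
tar cf - perl5.004_08 | gzip --best > perl5.004_08.tar.gz
```

(Note that B<diff> exits non-zero when it finds differences, so don't run it under C<set -e> without allowing for that.)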
=head1 Help Save the World

You should definitely announce your patch on the perl5-porters list. You should also consider announcing your patch on comp.lang.perl.announce, though you should make it quite clear that a subversion is not a production release, and be prepared to deal with people who will not read your disclaimer.

=head1 Todo

Here, in no particular order, are some Configure and build-related items that merit consideration. This list isn't exhaustive; it's just what I came up with off the top of my head.

=head2 Adding missing library functions to Perl

The perl Configure script automatically determines which headers and functions you have available on your system and arranges for them to be included in the compilation and linking process. Occasionally, when porting perl to an operating system for the first time, you may find that the operating system is missing a key function. While perl may still build without this function, no perl program will be able to reference the missing function. You may be able to write the missing function yourself, or you may be able to find the missing function in the distribution files for another software package. In this case, you need to instruct the perl configure-and-build process to use your function. Perform these steps.

=over 3

=item *

Code and test the function you wish to add. Test it carefully; you will have a much easier time debugging your code independently than when it is a part of perl.

=item *

Here is an implementation of the POSIX truncate function for an operating system (VOS) that does not supply one, but which does supply the ftruncate() function.

    /* Beginning of modification history */
    /* Written 02-01-02 by Nick Ing-Simmons (nick@ing-simmons.net) */
    /* End of modification history */

    /* VOS doesn't supply a truncate function, so we build one up
       from the available POSIX functions.
    */

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    truncate(const char *path, off_t len)
    {
        int fd = open(path,O_WRONLY);
        int code = -1;
        if (fd >= 0) {
            code = ftruncate(fd,len);
            close(fd);
        }
        return code;
    }

Place this file into a subdirectory that has the same name as the operating system. This file is named F<perl/vos/vos.c>.

=item *

If your operating system has a hints file (in F<perl/hints/XXX.sh> for an operating system named XXX), then start with it. If your operating system has no hints file, then create one. You can use a hints file for a similar operating system, if one exists, as a template.

=item *

Add lines like the following to your hints file. The first line (d_truncate="define") instructs Configure that the truncate() function exists. The second line (archobjs="vos.o") instructs the makefiles that the perl executable depends on the existence of a file named "vos.o". (Make will automatically look for "vos.c" and compile it with the same options as the perl source code.) The final line ("test -h...") adds a symbolic link to the top-level directory so that make can find vos.c. Of course, you should use your own operating system name for the source file of extensions, not "vos.c".

    # VOS does not have truncate() but we supply one in vos.c
    d_truncate="define"
    archobjs="vos.o"

    # Help gmake find vos.c
    test -h vos.c || ln -s vos/vos.c vos.c

The hints file is a series of shell commands that are run in the top-level directory (the "perl" directory). Thus, these commands are simply executed by Configure at an appropriate place during its execution.

=item *

At this point, you can run the Configure script and rebuild perl. Carefully test the newly-built perl to ensure that normal paths, and error paths, behave as you expect.

=back

=head2 Good ideas waiting for round tuits

=over 4

=item Configure -Dsrc=/blah/blah

We should be able to emulate B<configure --srcdir>. Tom Tromey tromey@creche.cygnus.com has submitted some patches to the dist-users mailing list along these lines.
They have been folded back into the main distribution, but various parts of the perl Configure/build/install process still assume src='.'.

=item Hint file fixes

Various hint files work around Configure problems. We ought to fix Configure so that most of them aren't needed.

=item Hint file information

Some of the hint file information (particularly dynamic loading stuff) ought to be fed back into the main metaconfig distribution.

=back

=head2 Probably good ideas waiting for round tuits

=over 4

=item GNU configure --options

I've received sensible suggestions for --exec_prefix and other GNU configure --options. It's not always obvious exactly what is intended, but this merits investigation.

=item make clean

Currently, B<make clean> isn't all that useful, though B<make realclean> and B<make distclean> are. This needs a bit of thought and documentation before it gets cleaned up.

=item Try gcc if cc fails

Currently, we just give up.

=item bypassing safe*alloc wrappers

On some systems, it may be safe to call the system malloc directly without going through the util.c safe* layers. (Such systems would accept free(0), for example.) This might be a time-saver for systems that already have a good malloc. (Recent Linux libc's apparently have a nice malloc that is well-tuned for the system.)

=back

=head2 Vague possibilities

=over 4

=item gconvert replacement

Maybe include a replacement function that doesn't lose data in rare cases of coercion between string and numerical values.

=item Improve makedepend

The current makedepend process is clunky and annoyingly slow, but it works for most folks. Alas, it assumes that there is a filename $firstmakefile that the B<make> command will try to use before it uses F<Makefile>. Such may not be the case for all B<make> commands, particularly those on non-Unix systems. Probably some variant of the BSD F<.depend> file will be useful. We ought to check how other packages do this, if they do it at all.
We could probably pre-generate the dependencies (with the exception of malloc.o, which could probably be determined at F<Makefile.SH> extraction time).

=item GNU Makefile standard targets

GNU software generally has standardized Makefile targets. Unless we have good reason to do otherwise, I see no reason not to support them.

=item File locking

Somehow, straighten out, document, and implement lockf(), flock(), and/or fcntl() file locking. It's a mess. See $d_fcntl_can_lock in recent config.sh files though.

=back

=head2 Copyright Issues

The following is based on the consensus of a couple of IPR lawyers, but it is of course not a legally binding statement, just a common sense summary.

=over 4

=item *

Tacking on copyright statements is unnecessary to begin with because of the Berne convention. But assuming you want to go ahead...

=item *

The right form of a copyright statement is

    Copyright (C) Year, Year, ... by Someone

The (C) is not required everywhere but it doesn't hurt and in certain jurisdictions it is required, so let's leave it in. (Yes, it's true that in some jurisdictions the "(C)" is not legally binding, one should use the true ringed-C. But we don't have that character available for Perl's source code.)

The years must be listed out separately. Year-Year is not correct. Only the years when the piece has changed 'significantly' may be added.

=item *

One cannot give away one's copyright trivially. One can give one's copyright away by using public domain, but even that requires a little bit more than just saying 'this is in public domain'. (What it exactly requires depends on your jurisdiction.) But barring public domain, one cannot "transfer" one's copyright to another person or entity. In the context of software, it means that contributors cannot give away their copyright or "transfer" it to the "owner" of the software.
Also remember that in many cases if you are employed by someone, your work may be copyrighted to your employer, even when you are contributing on your own time (this all depends on too many things to list here). But the bottom line is that you definitely can't give away a copyright you may not even have.

What is possible, however, is that the software can simply state

    Copyright (C) Year, Year, ... by Someone and others

and then list the "others" somewhere in the distribution. And this is exactly what Perl does. (The "somewhere" is AUTHORS and the Changes* files.)

=item *

Split files, merged files, and generated files are problematic. The rule of thumb: in split files, copy the copyright years of the original file to all the new files; in merged files make a union of the copyright years of all the old files; in generated files propagate the copyright years of the generating file(s).

=item *

The files of the Perl source code distribution do carry a lot of copyrights, by various people. (There are many copyrights embedded in perl.c, for example.) The most straightforward thing for pumpkings to do is to simply update Larry's copyrights at the beginning of the *.[hcy], x2p/*.[hcy], *.pl, and README files, and leave all other copyrights alone. Doing more than that requires quite a bit of tracking.

=back

=head1 AUTHORS

Original author: Andy Dougherty doughera@lafayette.edu .

Additions by Chip Salzenberg chip@perl.com and Tim Bunce Tim.Bunce@ig.co.uk .

All opinions expressed herein are those of the authorZ<>(s).

=head1 LAST MODIFIED

2009-07-08-01 Jesse Vincent