-rw-r--r--  .cvsignore                  26
-rw-r--r--  AUTHORS                     22
-rw-r--r--  COPYING                     11
-rw-r--r--  ChangeLog                   56
-rw-r--r--  INSTALL                     16
-rw-r--r--  Makefile                   887
-rw-r--r--  Makefile.am                 43
-rw-r--r--  Makefile.in                887
-rw-r--r--  NEWS                       137
-rw-r--r--  README                      41
-rw-r--r--  THANKS                      30
-rw-r--r--  TODO                        42
-rw-r--r--  aclocal.m4                 548
-rwxr-xr-x  autogen.sh                   4
-rw-r--r--  chkstow.in                 104
-rw-r--r--  config.log                 176
-rwxr-xr-x  config.status              786
-rwxr-xr-x  configure                 3297
-rw-r--r--  configure.ac                17
-rw-r--r--  configure.in                22
-rwxr-xr-x  install-sh                 507
-rwxr-xr-x  mdate-sh                   201
-rwxr-xr-x  missing                    367
-rw-r--r--  stamp-vti                    4
-rwxr-xr-x  stow                      1794
-rwxr-xr-x [-rw-r--r--]  stow.in      2273
-rw-r--r--  stow.info                 1335
-rw-r--r--  stow.texi                  585
-rwxr-xr-x  t/chkstow.t                115
-rw-r--r--  t/cleanup_invalid_links.t   92
-rw-r--r--  t/defer.t                   22
-rw-r--r--  t/examples.t               204
-rw-r--r--  t/find_stowed_path.t        51
-rw-r--r--  t/foldable.t                74
-rw-r--r--  t/join_paths.t              89
-rw-r--r--  t/parent.t                  41
-rw-r--r--  t/relative_path.t           41
-rw-r--r--  t/stow.t                    97
-rw-r--r--  t/stow_contents.t          283
-rw-r--r--  t/unstow_contents.t        276
-rw-r--r--  t/unstow_contents_orig.t   277
-rw-r--r--  t/util.pm                  157
-rw-r--r--  texinfo.tex               7482
-rw-r--r--  version.texi                 4
44 files changed, 22719 insertions, 804 deletions
diff --git a/.cvsignore b/.cvsignore
deleted file mode 100644
index f0ff8e1..0000000
--- a/.cvsignore
+++ /dev/null
@@ -1,26 +0,0 @@
-tags
-Makefile
-Makefile.in
-.deps
-.libs
-*.lo
-*.la
-*.so*
-aclocal.m4
-config.guess
-config.h.in
-config.sub
-configure
-install-sh
-ltmain.sh
-missing
-mkinstalldirs
-stamp-h.in
-stamp-h
-config.cache
-config.h
-config.log
-config.status
-libtool
-*.tar.gz
-*.info
diff --git a/AUTHORS b/AUTHORS
index 29ba285..fbf4c2f 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -12,4 +12,24 @@ stow -D / stow -R removing initially-empty directories.
Adam Lackorzynski <al10@inf.tu-dresden.de> wrote the fix to prevent
the generation of wrong links if there are links in the stow directory.
-Stow is currently maintained by Guillaume Morin <gmorin@gnu.org>.
+Stow was maintained by Guillaume Morin <gmorin@gnu.org> up to November 2007.
+
+Kahlil (Kal) Hodgson <kahlil@internode.on.net> performed a major rewrite
+in order to implement:
+
+ 1. deferred operations,
+ 2. option parsing via Getopt::Long,
+ 3. options to support shared files,
+ 4. support for multiple operations per invocation,
+ 5. default command line arguments via '.stowrc' and '~/.stowrc' files,
+ 6. better cooperation between multiple stow directories,
+ 7. a test suite (and support code) to ensure that everything still works.
+
+As these changes required a dramatic reorganisation of the code, very little
+was left untouched, and so Stow's major version was bumped up to 2.
+
+Austin Wood <austin.wood@rmit.edu.au> and Chris Hoobin
+<christopher.hoobin@rmit.edu.au> helped clean up the documentation for
+version 2 and created the texi2man script.
+
+Stow is currently maintained by Kahlil (Kal) Hodgson <kahlil@internode.on.net>.
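
Items 2 and 5 above work together: options listed in `~/.stowrc' or `./.stowrc'
(typically one per line) are prepended to the command line of every invocation
before Getopt::Long parses it. A rough illustration, with option names taken
from the Stow manual rather than verified against this particular release:

    # ~/.stowrc
    --dir=/usr/local/stow
    --target=/usr/local

    $ stow -v perl
    # effectively runs: stow --dir=/usr/local/stow --target=/usr/local -v perl
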
diff --git a/COPYING b/COPYING
index a43ea21..623b625 100644
--- a/COPYING
+++ b/COPYING
@@ -2,7 +2,7 @@
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
- 675 Mass Ave, Cambridge, MA 02139, USA
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@@ -279,7 +279,7 @@ POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
- Appendix: How to Apply These Terms to Your New Programs
+ How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
@@ -291,7 +291,7 @@ convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
- Copyright (C) 19yy <name of author>
+ Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -305,14 +305,15 @@ the "copyright" line and a pointer to where the full notice is found.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
- Gnomovision version 69, Copyright (C) 19yy name of author
+ Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
diff --git a/ChangeLog b/ChangeLog
index b23673d..b513c25 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,59 @@
+2008-01-31 Kahlil Hodgson <kal@grebo.cs.rmit.edu.au>
+
+ * stow.texi: Austin Wood and Chris Hoobin cleaned this up for version 2.
+
+ * texi2man: new script by Austin and Chris to generate a man page from the
+ texinfo file.
+
+
+Sun Nov 25 19:31:32 2007 Kahlil Hodgson <kahlil@internode.on.net>
+ * all: Version 2.0.1
+
+ * AUTHORS: added Kahlil Hodgson as a new author and current maintainer.
+
+ * stow.in: major rewrite to produce version 2.0.1; see NEWS for details.
+
+ * t/: added test suite and support code
+
+ * configure.in: renamed to configure.ac as per autotools recommendation.
+
+ * configure.ac:
+ Use AC_INIT rather than obsolete AM_INIT_AUTOMAKE usage.
+ Remove redundant VERSION and PACKAGE settings
+ Remove redundant AC_ARG_PROGRAM
+ Use AM_INIT_AUTOMAKE([-Wall -Werror]) because we are pedantic.
+ Add AC_PREREQ([2.6.1])
+
+ * Makefile.am, configure.ac:
+ Use explicit rewrite in Makefile.am, rather than AC_CONFIG_FILES(stow.in),
+ as per autotools recommendation.
+
+ * Makefile.am:
+ Add TESTS and TESTS_ENVIRONMENT for files in t/
+ Use dist_man_MANS instead of EXTRA_DIST for man page
+
+ * INSTALL: update to reflect autotools modernization.
+
+ * NEWS: update to describe changes in Version 2.0.1.
+
+ * README: update to point to the right websites and email addresses.
+
+ * THANKS:
+ Add Emil Mikulc, whose ideas largely inspired Version 2, and
+ Geoffrey Giesemann, who did some initial testing and found some
+ important bugs.
+
+ * TODO: remove tasks that were implemented in Version 2.
+
+ * stow.texi: update documentation to reflect Version 2 changes.
+
+ * stow.8: update to reflect Version 2 changes.
+
+
+Sun Jan 06 12:18:50 2002 Guillaume Morin <gmorin@gnu.org>
+
+ * Makefile.am: use EXTRA_DIST to include manpage in distribution
+
Wed Jan 02 21:33:41 2002 Guillaume Morin <gmorin@gnu.org>
* stow.in: Stow now only warns the user if a subdirectory
diff --git a/INSTALL b/INSTALL
index 38ffc55..21358ba 100644
--- a/INSTALL
+++ b/INSTALL
@@ -8,13 +8,13 @@ The steps in building stow are:
1. `cd' to the directory containing the source code (and this file)
and type `./configure' to configure stow for your system. This
- step will attempt to locate your copy of perl and use its location
- to create `stow' from `stow.in'. If perl can't be found, you'll
- have to edit line 1 of `stow' from `#!false' to `#!/path/to/perl'
- (where /path/to/perl is wherever perl will be found when stow
- runs).
+ step will attempt to locate your copy of perl and set its location
+ in `Makefile.in'.
-2. Type `make' to create stow.info from stow.texi.
+2. Type `make' to create `stow' and `stow.info'. If perl could not
+ be found by `./configure', you'll have to edit line 1 of `stow'
+ from `#!false' to `#!/path/to/perl' (where /path/to/perl is wherever
+ perl will be found when stow runs).
3. Type `make install' to install `stow' and `stow.info'.
@@ -58,8 +58,8 @@ to recreate the current configuration, a file `config.cache' that
saves the results of its tests to speed up reconfiguring, and a file
`config.log' containing other output.
-The file `configure.in' is used to create `configure' by a program
-called `autoconf'. You only need `configure.in' if you want to change
+The file `configure.ac' is used to create `configure' by a program
+called `autoconf'. You only need `configure.ac' if you want to change
it or regenerate `configure' using a newer version of `autoconf'.
The file `Makefile.am' is used to create `Makefile.in' by a program
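
In practice the revised instructions boil down to the usual autotools
sequence. A minimal example for an in-tree build with the default prefix
(adjust paths and privileges as needed):

    ./configure
    make
    make install
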
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..43728de
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,887 @@
+# Makefile.in generated by automake 1.10 from Makefile.am.
+# Makefile. Generated from Makefile.in by configure.
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+
+
+
+
+pkgdatadir = $(datadir)/stow
+pkglibdir = $(libdir)/stow
+pkgincludedir = $(includedir)/stow
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+subdir = .
+DIST_COMMON = README $(am__configure_deps) $(dist_doc_DATA) \
+ $(dist_man_MANS) $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
+ $(srcdir)/stamp-vti $(srcdir)/version.texi \
+ $(top_srcdir)/configure AUTHORS COPYING ChangeLog INSTALL NEWS \
+ THANKS TODO install-sh mdate-sh missing texinfo.tex
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
+ configure.lineno config.status.lineno
+mkinstalldirs = $(install_sh) -d
+CONFIG_CLEAN_FILES =
+am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(infodir)" \
+ "$(DESTDIR)$(man8dir)" "$(DESTDIR)$(docdir)"
+binSCRIPT_INSTALL = $(INSTALL_SCRIPT)
+SCRIPTS = $(bin_SCRIPTS)
+SOURCES =
+DIST_SOURCES =
+INFO_DEPS = $(srcdir)/stow.info
+am__TEXINFO_TEX_DIR = $(srcdir)
+DVIS = stow.dvi
+PDFS = stow.pdf
+PSS = stow.ps
+HTMLS = stow.html
+TEXINFOS = stow.texi
+TEXI2DVI = texi2dvi
+TEXI2PDF = $(TEXI2DVI) --pdf --batch
+MAKEINFOHTML = $(MAKEINFO) --html
+AM_MAKEINFOHTMLFLAGS = $(AM_MAKEINFOFLAGS)
+DVIPS = dvips
+am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
+am__vpath_adj = case $$p in \
+ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
+ *) f=$$p;; \
+ esac;
+am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
+man8dir = $(mandir)/man8
+NROFF = nroff
+MANS = $(dist_man_MANS) $(man8_MANS)
+dist_docDATA_INSTALL = $(INSTALL_DATA)
+DATA = $(dist_doc_DATA)
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+distdir = $(PACKAGE)-$(VERSION)
+top_distdir = $(distdir)
+am__remove_distdir = \
+ { test ! -d $(distdir) \
+ || { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \
+ && rm -fr $(distdir); }; }
+DIST_ARCHIVES = $(distdir).tar.gz $(distdir).shar.gz
+GZIP_ENV = --best
+distuninstallcheck_listfiles = find . -type f -print
+distcleancheck_listfiles = find . -type f -print
+ACLOCAL = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run aclocal-1.10
+AMTAR = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run tar
+AUTOCONF = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoconf
+AUTOHEADER = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoheader
+AUTOMAKE = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run automake-1.10
+AWK = gawk
+CYGPATH_W = echo
+DEFS = -DPACKAGE_NAME=\"stow\" -DPACKAGE_TARNAME=\"stow\" -DPACKAGE_VERSION=\"2.0.2\" -DPACKAGE_STRING=\"stow\ 2.0.2\" -DPACKAGE_BUGREPORT=\"bug-stow@gnu.org\" -DPACKAGE=\"stow\" -DVERSION=\"2.0.2\"
+ECHO_C =
+ECHO_N = -n
+ECHO_T =
+INSTALL = /usr/bin/install -c
+INSTALL_DATA = ${INSTALL} -m 644
+INSTALL_PROGRAM = ${INSTALL}
+INSTALL_SCRIPT = ${INSTALL}
+INSTALL_STRIP_PROGRAM = $(install_sh) -c -s
+LIBOBJS =
+LIBS =
+LTLIBOBJS =
+MAKEINFO = ${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run makeinfo
+MKDIR_P = /bin/mkdir -p
+PACKAGE = stow
+PACKAGE_BUGREPORT = bug-stow@gnu.org
+PACKAGE_NAME = stow
+PACKAGE_STRING = stow 2.0.2
+PACKAGE_TARNAME = stow
+PACKAGE_VERSION = 2.0.2
+PATH_SEPARATOR = :
+PERL = /usr/bin/perl
+SET_MAKE =
+SHELL = /bin/sh
+STRIP =
+VERSION = 2.0.2
+abs_builddir = /home/kal/Projects/old/RMIT/stow/stow-2.0.2
+abs_srcdir = /home/kal/Projects/old/RMIT/stow/stow-2.0.2
+abs_top_builddir = /home/kal/Projects/old/RMIT/stow/stow-2.0.2
+abs_top_srcdir = /home/kal/Projects/old/RMIT/stow/stow-2.0.2
+am__leading_dot = .
+am__tar = ${AMTAR} chof - "$$tardir"
+am__untar = ${AMTAR} xf -
+bindir = ${exec_prefix}/bin
+build_alias =
+builddir = .
+datadir = ${datarootdir}
+datarootdir = ${prefix}/share
+docdir = ${datarootdir}/doc/${PACKAGE_TARNAME}
+dvidir = ${docdir}
+exec_prefix = ${prefix}
+host_alias =
+htmldir = ${docdir}
+includedir = ${prefix}/include
+infodir = ${datarootdir}/info
+install_sh = $(SHELL) /home/kal/Projects/old/RMIT/stow/stow-2.0.2/install-sh
+libdir = ${exec_prefix}/lib
+libexecdir = ${exec_prefix}/libexec
+localedir = ${datarootdir}/locale
+localstatedir = ${prefix}/var
+mandir = ${datarootdir}/man
+mkdir_p = /bin/mkdir -p
+oldincludedir = /usr/include
+pdfdir = ${docdir}
+prefix = /usr/local
+program_transform_name = s,x,x,
+psdir = ${docdir}
+sbindir = ${exec_prefix}/sbin
+sharedstatedir = ${prefix}/com
+srcdir = .
+sysconfdir = ${prefix}/etc
+target_alias =
+top_builddir = .
+top_srcdir = .
+bin_SCRIPTS = stow chkstow
+info_TEXINFOS = stow.texi
+man8_MANS = stow.8
+dist_man_MANS = stow.8
+dist_doc_DATA = README
+TESTS_ENVIRONMENT = $(PERL) -I $(top_srcdir)
+TESTS = \
+ t/cleanup_invalid_links.t \
+ t/defer.t \
+ t/examples.t \
+ t/find_stowed_path.t \
+ t/foldable.t \
+ t/join_paths.t \
+ t/parent.t \
+ t/relative_path.t \
+ t/stow_contents.t \
+ t/stow.t \
+ t/unstow_contents_orig.t \
+ t/unstow_contents.t \
+ t/chkstow.t
+
+AUTOMAKE_OPTIONS = dist-shar
+EXTRA_DIST = $(TESTS) t/util.pm stow.in
+CLEANFILES = $(bin_SCRIPTS)
+
+# this is more explicit and reliable than the config file trick
+edit = sed -e 's|[@]PERL[@]|$(PERL)|g' \
+ -e 's|[@]PACKAGE[@]|$(PACKAGE)|g' \
+ -e 's|[@]VERSION[@]|$(VERSION)|g'
+
+all: all-am
+
+.SUFFIXES:
+.SUFFIXES: .dvi .html .info .pdf .ps .texi
+am--refresh:
+ @:
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ echo ' cd $(srcdir) && $(AUTOMAKE) --gnu '; \
+ cd $(srcdir) && $(AUTOMAKE) --gnu \
+ && exit 0; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \
+ cd $(top_srcdir) && \
+ $(AUTOMAKE) --gnu Makefile
+.PRECIOUS: Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ echo ' $(SHELL) ./config.status'; \
+ $(SHELL) ./config.status;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ $(SHELL) ./config.status --recheck
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ cd $(srcdir) && $(AUTOCONF)
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
+install-binSCRIPTS: $(bin_SCRIPTS)
+ @$(NORMAL_INSTALL)
+ test -z "$(bindir)" || $(MKDIR_P) "$(DESTDIR)$(bindir)"
+ @list='$(bin_SCRIPTS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ if test -f $$d$$p; then \
+ f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
+ echo " $(binSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(bindir)/$$f'"; \
+ $(binSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(bindir)/$$f"; \
+ else :; fi; \
+ done
+
+uninstall-binSCRIPTS:
+ @$(NORMAL_UNINSTALL)
+ @list='$(bin_SCRIPTS)'; for p in $$list; do \
+ f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
+ echo " rm -f '$(DESTDIR)$(bindir)/$$f'"; \
+ rm -f "$(DESTDIR)$(bindir)/$$f"; \
+ done
+
+.texi.info:
+ restore=: && backupdir="$(am__leading_dot)am$$$$" && \
+ am__cwd=`pwd` && cd $(srcdir) && \
+ rm -rf $$backupdir && mkdir $$backupdir && \
+ if ($(MAKEINFO) --version) >/dev/null 2>&1; then \
+ for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \
+ if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \
+ done; \
+ else :; fi && \
+ cd "$$am__cwd"; \
+ if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $@ $<; \
+ then \
+ rc=0; \
+ cd $(srcdir); \
+ else \
+ rc=$$?; \
+ cd $(srcdir) && \
+ $$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \
+ fi; \
+ rm -rf $$backupdir; exit $$rc
+
+.texi.dvi:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2DVI) $<
+
+.texi.pdf:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2PDF) $<
+
+.texi.html:
+ rm -rf $(@:.html=.htp)
+ if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $(@:.html=.htp) $<; \
+ then \
+ rm -rf $@; \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \
+ else \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \
+ exit 1; \
+ fi
+$(srcdir)/stow.info: stow.texi $(srcdir)/version.texi
+stow.dvi: stow.texi $(srcdir)/version.texi
+stow.pdf: stow.texi $(srcdir)/version.texi
+stow.html: stow.texi $(srcdir)/version.texi
+$(srcdir)/version.texi: $(srcdir)/stamp-vti
+$(srcdir)/stamp-vti: stow.texi $(top_srcdir)/configure
+ @(dir=.; test -f ./stow.texi || dir=$(srcdir); \
+ set `$(SHELL) $(srcdir)/mdate-sh $$dir/stow.texi`; \
+ echo "@set UPDATED $$1 $$2 $$3"; \
+ echo "@set UPDATED-MONTH $$2 $$3"; \
+ echo "@set EDITION $(VERSION)"; \
+ echo "@set VERSION $(VERSION)") > vti.tmp
+ @cmp -s vti.tmp $(srcdir)/version.texi \
+ || (echo "Updating $(srcdir)/version.texi"; \
+ cp vti.tmp $(srcdir)/version.texi)
+ -@rm -f vti.tmp
+ @cp $(srcdir)/version.texi $@
+
+mostlyclean-vti:
+ -rm -f vti.tmp
+
+maintainer-clean-vti:
+ -rm -f $(srcdir)/stamp-vti $(srcdir)/version.texi
+.dvi.ps:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ $(DVIPS) -o $@ $<
+
+uninstall-dvi-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(DVIS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(dvidir)/$$f'"; \
+ rm -f "$(DESTDIR)$(dvidir)/$$f"; \
+ done
+
+uninstall-html-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(HTMLS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -rf '$(DESTDIR)$(htmldir)/$$f'"; \
+ rm -rf "$(DESTDIR)$(htmldir)/$$f"; \
+ done
+
+uninstall-info-am:
+ @$(PRE_UNINSTALL)
+ @if test -d '$(DESTDIR)$(infodir)' && \
+ (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' --remove '$(DESTDIR)$(infodir)/$$relfile'"; \
+ install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$$relfile"; \
+ done; \
+ else :; fi
+ @$(NORMAL_UNINSTALL)
+ @list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \
+ (if test -d "$(DESTDIR)$(infodir)" && cd "$(DESTDIR)$(infodir)"; then \
+ echo " cd '$(DESTDIR)$(infodir)' && rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]"; \
+ rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \
+ else :; fi); \
+ done
+
+uninstall-pdf-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PDFS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(pdfdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(pdfdir)/$$f"; \
+ done
+
+uninstall-ps-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PSS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(psdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(psdir)/$$f"; \
+ done
+
+dist-info: $(INFO_DEPS)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; \
+ for base in $$list; do \
+ case $$base in \
+ $(srcdir)/*) base=`echo "$$base" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$base; then d=.; else d=$(srcdir); fi; \
+ base_i=`echo "$$base" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for file in $$d/$$base $$d/$$base-[0-9] $$d/$$base-[0-9][0-9] $$d/$$base_i[0-9] $$d/$$base_i[0-9][0-9]; do \
+ if test -f $$file; then \
+ relfile=`expr "$$file" : "$$d/\(.*\)"`; \
+ test -f $(distdir)/$$relfile || \
+ cp -p $$file $(distdir)/$$relfile; \
+ else :; fi; \
+ done; \
+ done
+
+mostlyclean-aminfo:
+ -rm -rf stow.aux stow.cp stow.cps stow.fn stow.fns stow.ky stow.kys stow.log \
+ stow.pg stow.pgs stow.tmp stow.toc stow.tp stow.tps stow.vr \
+ stow.vrs stow.dvi stow.pdf stow.ps stow.html
+
+maintainer-clean-aminfo:
+ @list='$(INFO_DEPS)'; for i in $$list; do \
+ i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \
+ echo " rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \
+ rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \
+ done
+install-man8: $(man8_MANS) $(man_MANS)
+ @$(NORMAL_INSTALL)
+ test -z "$(man8dir)" || $(MKDIR_P) "$(DESTDIR)$(man8dir)"
+ @list='$(man8_MANS) $(dist_man8_MANS) $(nodist_man8_MANS)'; \
+ l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
+ for i in $$l2; do \
+ case "$$i" in \
+ *.8*) list="$$list $$i" ;; \
+ esac; \
+ done; \
+ for i in $$list; do \
+ if test -f $(srcdir)/$$i; then file=$(srcdir)/$$i; \
+ else file=$$i; fi; \
+ ext=`echo $$i | sed -e 's/^.*\\.//'`; \
+ case "$$ext" in \
+ 8*) ;; \
+ *) ext='8' ;; \
+ esac; \
+ inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
+ inst=`echo $$inst | sed -e 's/^.*\///'`; \
+ inst=`echo $$inst | sed '$(transform)'`.$$ext; \
+ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man8dir)/$$inst'"; \
+ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man8dir)/$$inst"; \
+ done
+uninstall-man8:
+ @$(NORMAL_UNINSTALL)
+ @list='$(man8_MANS) $(dist_man8_MANS) $(nodist_man8_MANS)'; \
+ l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
+ for i in $$l2; do \
+ case "$$i" in \
+ *.8*) list="$$list $$i" ;; \
+ esac; \
+ done; \
+ for i in $$list; do \
+ ext=`echo $$i | sed -e 's/^.*\\.//'`; \
+ case "$$ext" in \
+ 8*) ;; \
+ *) ext='8' ;; \
+ esac; \
+ inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
+ inst=`echo $$inst | sed -e 's/^.*\///'`; \
+ inst=`echo $$inst | sed '$(transform)'`.$$ext; \
+ echo " rm -f '$(DESTDIR)$(man8dir)/$$inst'"; \
+ rm -f "$(DESTDIR)$(man8dir)/$$inst"; \
+ done
+install-dist_docDATA: $(dist_doc_DATA)
+ @$(NORMAL_INSTALL)
+ test -z "$(docdir)" || $(MKDIR_P) "$(DESTDIR)$(docdir)"
+ @list='$(dist_doc_DATA)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(dist_docDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(docdir)/$$f'"; \
+ $(dist_docDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(docdir)/$$f"; \
+ done
+
+uninstall-dist_docDATA:
+ @$(NORMAL_UNINSTALL)
+ @list='$(dist_doc_DATA)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(docdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(docdir)/$$f"; \
+ done
+tags: TAGS
+TAGS:
+
+ctags: CTAGS
+CTAGS:
+
+
+check-TESTS: $(TESTS)
+ @failed=0; all=0; xfail=0; xpass=0; skip=0; ws='[ ]'; \
+ srcdir=$(srcdir); export srcdir; \
+ list=' $(TESTS) '; \
+ if test -n "$$list"; then \
+ for tst in $$list; do \
+ if test -f ./$$tst; then dir=./; \
+ elif test -f $$tst; then dir=; \
+ else dir="$(srcdir)/"; fi; \
+ if $(TESTS_ENVIRONMENT) $${dir}$$tst; then \
+ all=`expr $$all + 1`; \
+ case " $(XFAIL_TESTS) " in \
+ *$$ws$$tst$$ws*) \
+ xpass=`expr $$xpass + 1`; \
+ failed=`expr $$failed + 1`; \
+ echo "XPASS: $$tst"; \
+ ;; \
+ *) \
+ echo "PASS: $$tst"; \
+ ;; \
+ esac; \
+ elif test $$? -ne 77; then \
+ all=`expr $$all + 1`; \
+ case " $(XFAIL_TESTS) " in \
+ *$$ws$$tst$$ws*) \
+ xfail=`expr $$xfail + 1`; \
+ echo "XFAIL: $$tst"; \
+ ;; \
+ *) \
+ failed=`expr $$failed + 1`; \
+ echo "FAIL: $$tst"; \
+ ;; \
+ esac; \
+ else \
+ skip=`expr $$skip + 1`; \
+ echo "SKIP: $$tst"; \
+ fi; \
+ done; \
+ if test "$$failed" -eq 0; then \
+ if test "$$xfail" -eq 0; then \
+ banner="All $$all tests passed"; \
+ else \
+ banner="All $$all tests behaved as expected ($$xfail expected failures)"; \
+ fi; \
+ else \
+ if test "$$xpass" -eq 0; then \
+ banner="$$failed of $$all tests failed"; \
+ else \
+ banner="$$failed of $$all tests did not behave as expected ($$xpass unexpected passes)"; \
+ fi; \
+ fi; \
+ dashes="$$banner"; \
+ skipped=""; \
+ if test "$$skip" -ne 0; then \
+ skipped="($$skip tests were not run)"; \
+ test `echo "$$skipped" | wc -c` -le `echo "$$banner" | wc -c` || \
+ dashes="$$skipped"; \
+ fi; \
+ report=""; \
+ if test "$$failed" -ne 0 && test -n "$(PACKAGE_BUGREPORT)"; then \
+ report="Please report to $(PACKAGE_BUGREPORT)"; \
+ test `echo "$$report" | wc -c` -le `echo "$$banner" | wc -c` || \
+ dashes="$$report"; \
+ fi; \
+ dashes=`echo "$$dashes" | sed s/./=/g`; \
+ echo "$$dashes"; \
+ echo "$$banner"; \
+ test -z "$$skipped" || echo "$$skipped"; \
+ test -z "$$report" || echo "$$report"; \
+ echo "$$dashes"; \
+ test "$$failed" -eq 0; \
+ else :; fi
+
+distdir: $(DISTFILES)
+ $(am__remove_distdir)
+ test -d $(distdir) || mkdir $(distdir)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
+ fi; \
+ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
+ else \
+ test -f $(distdir)/$$file \
+ || cp -p $$d/$$file $(distdir)/$$file \
+ || exit 1; \
+ fi; \
+ done
+ $(MAKE) $(AM_MAKEFLAGS) \
+ top_distdir="$(top_distdir)" distdir="$(distdir)" \
+ dist-info
+ -find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \
+ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
+ || chmod -R a+r $(distdir)
+dist-gzip: distdir
+ tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
+ $(am__remove_distdir)
+
+dist-bzip2: distdir
+ tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2
+ $(am__remove_distdir)
+
+dist-tarZ: distdir
+ tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
+ $(am__remove_distdir)
+dist-shar: distdir
+ shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
+ $(am__remove_distdir)
+
+dist-zip: distdir
+ -rm -f $(distdir).zip
+ zip -rq $(distdir).zip $(distdir)
+ $(am__remove_distdir)
+
+dist dist-all: distdir
+ tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
+ shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
+ $(am__remove_distdir)
+
+# This target untars the dist file and tries a VPATH configuration. Then
+# it guarantees that the distribution is self-contained by making another
+# tarfile.
+distcheck: dist
+ case '$(DIST_ARCHIVES)' in \
+ *.tar.gz*) \
+ GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(am__untar) ;;\
+ *.tar.bz2*) \
+ bunzip2 -c $(distdir).tar.bz2 | $(am__untar) ;;\
+ *.tar.Z*) \
+ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
+ *.shar.gz*) \
+ GZIP=$(GZIP_ENV) gunzip -c $(distdir).shar.gz | unshar ;;\
+ *.zip*) \
+ unzip $(distdir).zip ;;\
+ esac
+ chmod -R a-w $(distdir); chmod a+w $(distdir)
+ mkdir $(distdir)/_build
+ mkdir $(distdir)/_inst
+ chmod a-w $(distdir)
+ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
+ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
+ && cd $(distdir)/_build \
+ && ../configure --srcdir=.. --prefix="$$dc_install_base" \
+ $(DISTCHECK_CONFIGURE_FLAGS) \
+ && $(MAKE) $(AM_MAKEFLAGS) \
+ && $(MAKE) $(AM_MAKEFLAGS) dvi \
+ && $(MAKE) $(AM_MAKEFLAGS) check \
+ && $(MAKE) $(AM_MAKEFLAGS) install \
+ && $(MAKE) $(AM_MAKEFLAGS) installcheck \
+ && $(MAKE) $(AM_MAKEFLAGS) uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
+ distuninstallcheck \
+ && chmod -R a-w "$$dc_install_base" \
+ && ({ \
+ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
+ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
+ } || { rm -rf "$$dc_destdir"; exit 1; }) \
+ && rm -rf "$$dc_destdir" \
+ && $(MAKE) $(AM_MAKEFLAGS) dist \
+ && rm -rf $(DIST_ARCHIVES) \
+ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck
+ $(am__remove_distdir)
+ @(echo "$(distdir) archives ready for distribution: "; \
+ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
+ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
+distuninstallcheck:
+ @cd $(distuninstallcheck_dir) \
+ && test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \
+ || { echo "ERROR: files left after uninstall:" ; \
+ if test -n "$(DESTDIR)"; then \
+ echo " (check DESTDIR support)"; \
+ fi ; \
+ $(distuninstallcheck_listfiles) ; \
+ exit 1; } >&2
+distcleancheck: distclean
+ @if test '$(srcdir)' = . ; then \
+ echo "ERROR: distcleancheck can only run from a VPATH build" ; \
+ exit 1 ; \
+ fi
+ @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
+ || { echo "ERROR: files left in build directory after distclean:" ; \
+ $(distcleancheck_listfiles) ; \
+ exit 1; } >&2
+check-am: all-am
+ $(MAKE) $(AM_MAKEFLAGS) check-TESTS
+check: check-am
+all-am: Makefile $(INFO_DEPS) $(SCRIPTS) $(MANS) $(DATA)
+installdirs:
+ for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(infodir)" "$(DESTDIR)$(man8dir)" "$(DESTDIR)$(docdir)"; do \
+ test -z "$$dir" || $(MKDIR_P) "$$dir"; \
+ done
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ `test -z '$(STRIP)' || \
+ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
+mostlyclean-generic:
+
+clean-generic:
+ -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+clean: clean-am
+
+clean-am: clean-generic clean-local mostlyclean-am
+
+distclean: distclean-am
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic
+
+dvi: dvi-am
+
+dvi-am: $(DVIS)
+
+html: html-am
+
+html-am: $(HTMLS)
+
+info: info-am
+
+info-am: $(INFO_DEPS)
+
+install-data-am: install-dist_docDATA install-info-am install-man
+
+install-dvi: install-dvi-am
+
+install-dvi-am: $(DVIS)
+ @$(NORMAL_INSTALL)
+ test -z "$(dvidir)" || $(MKDIR_P) "$(DESTDIR)$(dvidir)"
+ @list='$(DVIS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(dvidir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(dvidir)/$$f"; \
+ done
+install-exec-am: install-binSCRIPTS
+
+install-html: install-html-am
+
+install-html-am: $(HTMLS)
+ @$(NORMAL_INSTALL)
+ test -z "$(htmldir)" || $(MKDIR_P) "$(DESTDIR)$(htmldir)"
+ @list='$(HTMLS)'; for p in $$list; do \
+ if test -f "$$p" || test -d "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ if test -d "$$d$$p"; then \
+ echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(MKDIR_P) "$(DESTDIR)$(htmldir)/$$f" || exit 1; \
+ echo " $(INSTALL_DATA) '$$d$$p'/* '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p"/* "$(DESTDIR)$(htmldir)/$$f"; \
+ else \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(htmldir)/$$f"; \
+ fi; \
+ done
+install-info: install-info-am
+
+install-info-am: $(INFO_DEPS)
+ @$(NORMAL_INSTALL)
+ test -z "$(infodir)" || $(MKDIR_P) "$(DESTDIR)$(infodir)"
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ case $$file in \
+ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$file; then d=.; else d=$(srcdir); fi; \
+ file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \
+ $$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \
+ if test -f $$ifile; then \
+ relfile=`echo "$$ifile" | sed 's|^.*/||'`; \
+ echo " $(INSTALL_DATA) '$$ifile' '$(DESTDIR)$(infodir)/$$relfile'"; \
+ $(INSTALL_DATA) "$$ifile" "$(DESTDIR)$(infodir)/$$relfile"; \
+ else : ; fi; \
+ done; \
+ done
+ @$(POST_INSTALL)
+ @if (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' '$(DESTDIR)$(infodir)/$$relfile'";\
+ install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$$relfile" || :;\
+ done; \
+ else : ; fi
+install-man: install-man8
+
+install-pdf: install-pdf-am
+
+install-pdf-am: $(PDFS)
+ @$(NORMAL_INSTALL)
+ test -z "$(pdfdir)" || $(MKDIR_P) "$(DESTDIR)$(pdfdir)"
+ @list='$(PDFS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(pdfdir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(pdfdir)/$$f"; \
+ done
+install-ps: install-ps-am
+
+install-ps-am: $(PSS)
+ @$(NORMAL_INSTALL)
+ test -z "$(psdir)" || $(MKDIR_P) "$(DESTDIR)$(psdir)"
+ @list='$(PSS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(psdir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(psdir)/$$f"; \
+ done
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -rf $(top_srcdir)/autom4te.cache
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-aminfo \
+ maintainer-clean-generic maintainer-clean-vti
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-aminfo mostlyclean-generic mostlyclean-vti
+
+pdf: pdf-am
+
+pdf-am: $(PDFS)
+
+ps: ps-am
+
+ps-am: $(PSS)
+
+uninstall-am: uninstall-binSCRIPTS uninstall-dist_docDATA \
+ uninstall-dvi-am uninstall-html-am uninstall-info-am \
+ uninstall-man uninstall-pdf-am uninstall-ps-am
+
+uninstall-man: uninstall-man8
+
+.MAKE: install-am install-strip
+
+.PHONY: all all-am am--refresh check check-TESTS check-am clean \
+ clean-generic clean-local dist dist-all dist-bzip2 dist-gzip \
+ dist-info dist-shar dist-tarZ dist-zip distcheck distclean \
+ distclean-generic distcleancheck distdir distuninstallcheck \
+ dvi dvi-am html html-am info info-am install install-am \
+ install-binSCRIPTS install-data install-data-am \
+ install-dist_docDATA install-dvi install-dvi-am install-exec \
+ install-exec-am install-html install-html-am install-info \
+ install-info-am install-man install-man8 install-pdf \
+ install-pdf-am install-ps install-ps-am install-strip \
+ installcheck installcheck-am installdirs maintainer-clean \
+ maintainer-clean-aminfo maintainer-clean-generic \
+ maintainer-clean-vti mostlyclean mostlyclean-aminfo \
+ mostlyclean-generic mostlyclean-vti pdf pdf-am ps ps-am \
+ uninstall uninstall-am uninstall-binSCRIPTS \
+ uninstall-dist_docDATA uninstall-dvi-am uninstall-html-am \
+ uninstall-info-am uninstall-man uninstall-man8 \
+ uninstall-pdf-am uninstall-ps-am
+
+
+# clean up files left behind by test suite
+clean-local:
+ -rm -rf t/target t/stow
+
+stow: stow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
+
+chkstow: chkstow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
+
+# The rules for manual.html and manual.texi are only used by
+# the developer
+manual.html: manual.texi
+ -rm -f $@
+ texi2html -expandinfo -menu -monolithic -verbose $<
+
+manual.texi: stow.texi
+ -rm -f $@
+ cp $< $@
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
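
The check-TESTS rule above runs each .t file under TESTS_ENVIRONMENT, that is,
the configured perl with the top of the source tree on the include path. With
the values configured in this build, a single test can be run by hand the same
way, for example:

    /usr/bin/perl -I . t/join_paths.t
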
diff --git a/Makefile.am b/Makefile.am
index 944b8c8..8b549ad 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,12 +1,47 @@
## Process this file with Automake to produce Makefile.in
-AUTOMAKE_OPTIONS = dist-shar
-
-bin_SCRIPTS = stow
+bin_SCRIPTS = stow chkstow
info_TEXINFOS = stow.texi
man8_MANS = stow.8
+dist_man_MANS = stow.8
+dist_doc_DATA = README
+
+TESTS_ENVIRONMENT=$(PERL) -I $(top_srcdir)
+TESTS = \
+ t/cleanup_invalid_links.t \
+ t/defer.t \
+ t/examples.t \
+ t/find_stowed_path.t \
+ t/foldable.t \
+ t/join_paths.t \
+ t/parent.t \
+ t/relative_path.t \
+ t/stow_contents.t \
+ t/stow.t \
+ t/unstow_contents_orig.t \
+ t/unstow_contents.t \
+ t/chkstow.t
+
+AUTOMAKE_OPTIONS = dist-shar
+EXTRA_DIST = $(TESTS) t/util.pm stow.in
+CLEANFILES = $(bin_SCRIPTS)
+
+# clean up files left behind by test suite
+clean-local:
+ -rm -rf t/target t/stow
+
+# this is more explicit and reliable than the config file trick
+edit = sed -e 's|[@]PERL[@]|$(PERL)|g' \
+ -e 's|[@]PACKAGE[@]|$(PACKAGE)|g' \
+ -e 's|[@]VERSION[@]|$(VERSION)|g'
+
+stow: stow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
-CLEANFILES = stow manual.html manual.texi
+chkstow: chkstow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
# The rules for manual.html and manual.texi are only used by
# the developer
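
The `edit' macro above substitutes the configured values for the @PERL@,
@PACKAGE@ and @VERSION@ placeholders when the scripts are generated from their
.in templates. With the values from this build, the stow rule expands to
roughly:

    sed -e 's|[@]PERL[@]|/usr/bin/perl|g' \
        -e 's|[@]PACKAGE[@]|stow|g' \
        -e 's|[@]VERSION[@]|2.0.2|g' < stow.in > stow
    chmod +x stow
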
diff --git a/Makefile.in b/Makefile.in
new file mode 100644
index 0000000..36ccce6
--- /dev/null
+++ b/Makefile.in
@@ -0,0 +1,887 @@
+# Makefile.in generated by automake 1.10 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+
+
+VPATH = @srcdir@
+pkgdatadir = $(datadir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+subdir = .
+DIST_COMMON = README $(am__configure_deps) $(dist_doc_DATA) \
+ $(dist_man_MANS) $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
+ $(srcdir)/stamp-vti $(srcdir)/version.texi \
+ $(top_srcdir)/configure AUTHORS COPYING ChangeLog INSTALL NEWS \
+ THANKS TODO install-sh mdate-sh missing texinfo.tex
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
+ configure.lineno config.status.lineno
+mkinstalldirs = $(install_sh) -d
+CONFIG_CLEAN_FILES =
+am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(infodir)" \
+ "$(DESTDIR)$(man8dir)" "$(DESTDIR)$(docdir)"
+binSCRIPT_INSTALL = $(INSTALL_SCRIPT)
+SCRIPTS = $(bin_SCRIPTS)
+SOURCES =
+DIST_SOURCES =
+INFO_DEPS = $(srcdir)/stow.info
+am__TEXINFO_TEX_DIR = $(srcdir)
+DVIS = stow.dvi
+PDFS = stow.pdf
+PSS = stow.ps
+HTMLS = stow.html
+TEXINFOS = stow.texi
+TEXI2DVI = texi2dvi
+TEXI2PDF = $(TEXI2DVI) --pdf --batch
+MAKEINFOHTML = $(MAKEINFO) --html
+AM_MAKEINFOHTMLFLAGS = $(AM_MAKEINFOFLAGS)
+DVIPS = dvips
+am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
+am__vpath_adj = case $$p in \
+ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
+ *) f=$$p;; \
+ esac;
+am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
+man8dir = $(mandir)/man8
+NROFF = nroff
+MANS = $(dist_man_MANS) $(man8_MANS)
+dist_docDATA_INSTALL = $(INSTALL_DATA)
+DATA = $(dist_doc_DATA)
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+distdir = $(PACKAGE)-$(VERSION)
+top_distdir = $(distdir)
+am__remove_distdir = \
+ { test ! -d $(distdir) \
+ || { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \
+ && rm -fr $(distdir); }; }
+DIST_ARCHIVES = $(distdir).tar.gz $(distdir).shar.gz
+GZIP_ENV = --best
+distuninstallcheck_listfiles = find . -type f -print
+distcleancheck_listfiles = find . -type f -print
+ACLOCAL = @ACLOCAL@
+AMTAR = @AMTAR@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+LIBOBJS = @LIBOBJS@
+LIBS = @LIBS@
+LTLIBOBJS = @LTLIBOBJS@
+MAKEINFO = @MAKEINFO@
+MKDIR_P = @MKDIR_P@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+PERL = @PERL@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+STRIP = @STRIP@
+VERSION = @VERSION@
+abs_builddir = @abs_builddir@
+abs_srcdir = @abs_srcdir@
+abs_top_builddir = @abs_top_builddir@
+abs_top_srcdir = @abs_top_srcdir@
+am__leading_dot = @am__leading_dot@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build_alias = @build_alias@
+builddir = @builddir@
+datadir = @datadir@
+datarootdir = @datarootdir@
+docdir = @docdir@
+dvidir = @dvidir@
+exec_prefix = @exec_prefix@
+host_alias = @host_alias@
+htmldir = @htmldir@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libdir = @libdir@
+libexecdir = @libexecdir@
+localedir = @localedir@
+localstatedir = @localstatedir@
+mandir = @mandir@
+mkdir_p = @mkdir_p@
+oldincludedir = @oldincludedir@
+pdfdir = @pdfdir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+psdir = @psdir@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+srcdir = @srcdir@
+sysconfdir = @sysconfdir@
+target_alias = @target_alias@
+top_builddir = @top_builddir@
+top_srcdir = @top_srcdir@
+bin_SCRIPTS = stow chkstow
+info_TEXINFOS = stow.texi
+man8_MANS = stow.8
+dist_man_MANS = stow.8
+dist_doc_DATA = README
+TESTS_ENVIRONMENT = $(PERL) -I $(top_srcdir)
+TESTS = \
+ t/cleanup_invalid_links.t \
+ t/defer.t \
+ t/examples.t \
+ t/find_stowed_path.t \
+ t/foldable.t \
+ t/join_paths.t \
+ t/parent.t \
+ t/relative_path.t \
+ t/stow_contents.t \
+ t/stow.t \
+ t/unstow_contents_orig.t \
+ t/unstow_contents.t \
+ t/chkstow.t
+
+AUTOMAKE_OPTIONS = dist-shar
+EXTRA_DIST = $(TESTS) t/util.pm stow.in
+CLEANFILES = $(bin_SCRIPTS)
+
+# this is more explicit and reliable than the config file trick
+edit = sed -e 's|[@]PERL[@]|$(PERL)|g' \
+ -e 's|[@]PACKAGE[@]|$(PACKAGE)|g' \
+ -e 's|[@]VERSION[@]|$(VERSION)|g'
+
+all: all-am
+
+.SUFFIXES:
+.SUFFIXES: .dvi .html .info .pdf .ps .texi
+am--refresh:
+ @:
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ echo ' cd $(srcdir) && $(AUTOMAKE) --gnu '; \
+ cd $(srcdir) && $(AUTOMAKE) --gnu \
+ && exit 0; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \
+ cd $(top_srcdir) && \
+ $(AUTOMAKE) --gnu Makefile
+.PRECIOUS: Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ echo ' $(SHELL) ./config.status'; \
+ $(SHELL) ./config.status;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ $(SHELL) ./config.status --recheck
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ cd $(srcdir) && $(AUTOCONF)
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
+install-binSCRIPTS: $(bin_SCRIPTS)
+ @$(NORMAL_INSTALL)
+ test -z "$(bindir)" || $(MKDIR_P) "$(DESTDIR)$(bindir)"
+ @list='$(bin_SCRIPTS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ if test -f $$d$$p; then \
+ f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
+ echo " $(binSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(bindir)/$$f'"; \
+ $(binSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(bindir)/$$f"; \
+ else :; fi; \
+ done
+
+uninstall-binSCRIPTS:
+ @$(NORMAL_UNINSTALL)
+ @list='$(bin_SCRIPTS)'; for p in $$list; do \
+ f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
+ echo " rm -f '$(DESTDIR)$(bindir)/$$f'"; \
+ rm -f "$(DESTDIR)$(bindir)/$$f"; \
+ done
+
+.texi.info:
+ restore=: && backupdir="$(am__leading_dot)am$$$$" && \
+ am__cwd=`pwd` && cd $(srcdir) && \
+ rm -rf $$backupdir && mkdir $$backupdir && \
+ if ($(MAKEINFO) --version) >/dev/null 2>&1; then \
+ for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \
+ if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \
+ done; \
+ else :; fi && \
+ cd "$$am__cwd"; \
+ if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $@ $<; \
+ then \
+ rc=0; \
+ cd $(srcdir); \
+ else \
+ rc=$$?; \
+ cd $(srcdir) && \
+ $$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \
+ fi; \
+ rm -rf $$backupdir; exit $$rc
+
+.texi.dvi:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2DVI) $<
+
+.texi.pdf:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2PDF) $<
+
+.texi.html:
+ rm -rf $(@:.html=.htp)
+ if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $(@:.html=.htp) $<; \
+ then \
+ rm -rf $@; \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \
+ else \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \
+ exit 1; \
+ fi
+$(srcdir)/stow.info: stow.texi $(srcdir)/version.texi
+stow.dvi: stow.texi $(srcdir)/version.texi
+stow.pdf: stow.texi $(srcdir)/version.texi
+stow.html: stow.texi $(srcdir)/version.texi
+$(srcdir)/version.texi: $(srcdir)/stamp-vti
+$(srcdir)/stamp-vti: stow.texi $(top_srcdir)/configure
+ @(dir=.; test -f ./stow.texi || dir=$(srcdir); \
+ set `$(SHELL) $(srcdir)/mdate-sh $$dir/stow.texi`; \
+ echo "@set UPDATED $$1 $$2 $$3"; \
+ echo "@set UPDATED-MONTH $$2 $$3"; \
+ echo "@set EDITION $(VERSION)"; \
+ echo "@set VERSION $(VERSION)") > vti.tmp
+ @cmp -s vti.tmp $(srcdir)/version.texi \
+ || (echo "Updating $(srcdir)/version.texi"; \
+ cp vti.tmp $(srcdir)/version.texi)
+ -@rm -f vti.tmp
+ @cp $(srcdir)/version.texi $@
+
+mostlyclean-vti:
+ -rm -f vti.tmp
+
+maintainer-clean-vti:
+ -rm -f $(srcdir)/stamp-vti $(srcdir)/version.texi
+.dvi.ps:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ $(DVIPS) -o $@ $<
+
+uninstall-dvi-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(DVIS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(dvidir)/$$f'"; \
+ rm -f "$(DESTDIR)$(dvidir)/$$f"; \
+ done
+
+uninstall-html-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(HTMLS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -rf '$(DESTDIR)$(htmldir)/$$f'"; \
+ rm -rf "$(DESTDIR)$(htmldir)/$$f"; \
+ done
+
+uninstall-info-am:
+ @$(PRE_UNINSTALL)
+ @if test -d '$(DESTDIR)$(infodir)' && \
+ (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' --remove '$(DESTDIR)$(infodir)/$$relfile'"; \
+ install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$$relfile"; \
+ done; \
+ else :; fi
+ @$(NORMAL_UNINSTALL)
+ @list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \
+ (if test -d "$(DESTDIR)$(infodir)" && cd "$(DESTDIR)$(infodir)"; then \
+ echo " cd '$(DESTDIR)$(infodir)' && rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]"; \
+ rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \
+ else :; fi); \
+ done
+
+uninstall-pdf-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PDFS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(pdfdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(pdfdir)/$$f"; \
+ done
+
+uninstall-ps-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PSS)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(psdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(psdir)/$$f"; \
+ done
+
+dist-info: $(INFO_DEPS)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; \
+ for base in $$list; do \
+ case $$base in \
+ $(srcdir)/*) base=`echo "$$base" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$base; then d=.; else d=$(srcdir); fi; \
+ base_i=`echo "$$base" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for file in $$d/$$base $$d/$$base-[0-9] $$d/$$base-[0-9][0-9] $$d/$$base_i[0-9] $$d/$$base_i[0-9][0-9]; do \
+ if test -f $$file; then \
+ relfile=`expr "$$file" : "$$d/\(.*\)"`; \
+ test -f $(distdir)/$$relfile || \
+ cp -p $$file $(distdir)/$$relfile; \
+ else :; fi; \
+ done; \
+ done
+
+mostlyclean-aminfo:
+ -rm -rf stow.aux stow.cp stow.cps stow.fn stow.fns stow.ky stow.kys stow.log \
+ stow.pg stow.pgs stow.tmp stow.toc stow.tp stow.tps stow.vr \
+ stow.vrs stow.dvi stow.pdf stow.ps stow.html
+
+maintainer-clean-aminfo:
+ @list='$(INFO_DEPS)'; for i in $$list; do \
+ i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \
+ echo " rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \
+ rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \
+ done
+install-man8: $(man8_MANS) $(man_MANS)
+ @$(NORMAL_INSTALL)
+ test -z "$(man8dir)" || $(MKDIR_P) "$(DESTDIR)$(man8dir)"
+ @list='$(man8_MANS) $(dist_man8_MANS) $(nodist_man8_MANS)'; \
+ l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
+ for i in $$l2; do \
+ case "$$i" in \
+ *.8*) list="$$list $$i" ;; \
+ esac; \
+ done; \
+ for i in $$list; do \
+ if test -f $(srcdir)/$$i; then file=$(srcdir)/$$i; \
+ else file=$$i; fi; \
+ ext=`echo $$i | sed -e 's/^.*\\.//'`; \
+ case "$$ext" in \
+ 8*) ;; \
+ *) ext='8' ;; \
+ esac; \
+ inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
+ inst=`echo $$inst | sed -e 's/^.*\///'`; \
+ inst=`echo $$inst | sed '$(transform)'`.$$ext; \
+ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man8dir)/$$inst'"; \
+ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man8dir)/$$inst"; \
+ done
+uninstall-man8:
+ @$(NORMAL_UNINSTALL)
+ @list='$(man8_MANS) $(dist_man8_MANS) $(nodist_man8_MANS)'; \
+ l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
+ for i in $$l2; do \
+ case "$$i" in \
+ *.8*) list="$$list $$i" ;; \
+ esac; \
+ done; \
+ for i in $$list; do \
+ ext=`echo $$i | sed -e 's/^.*\\.//'`; \
+ case "$$ext" in \
+ 8*) ;; \
+ *) ext='8' ;; \
+ esac; \
+ inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
+ inst=`echo $$inst | sed -e 's/^.*\///'`; \
+ inst=`echo $$inst | sed '$(transform)'`.$$ext; \
+ echo " rm -f '$(DESTDIR)$(man8dir)/$$inst'"; \
+ rm -f "$(DESTDIR)$(man8dir)/$$inst"; \
+ done
+install-dist_docDATA: $(dist_doc_DATA)
+ @$(NORMAL_INSTALL)
+ test -z "$(docdir)" || $(MKDIR_P) "$(DESTDIR)$(docdir)"
+ @list='$(dist_doc_DATA)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(dist_docDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(docdir)/$$f'"; \
+ $(dist_docDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(docdir)/$$f"; \
+ done
+
+uninstall-dist_docDATA:
+ @$(NORMAL_UNINSTALL)
+ @list='$(dist_doc_DATA)'; for p in $$list; do \
+ f=$(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(docdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(docdir)/$$f"; \
+ done
+tags: TAGS
+TAGS:
+
+ctags: CTAGS
+CTAGS:
+
+
+check-TESTS: $(TESTS)
+ @failed=0; all=0; xfail=0; xpass=0; skip=0; ws='[ ]'; \
+ srcdir=$(srcdir); export srcdir; \
+ list=' $(TESTS) '; \
+ if test -n "$$list"; then \
+ for tst in $$list; do \
+ if test -f ./$$tst; then dir=./; \
+ elif test -f $$tst; then dir=; \
+ else dir="$(srcdir)/"; fi; \
+ if $(TESTS_ENVIRONMENT) $${dir}$$tst; then \
+ all=`expr $$all + 1`; \
+ case " $(XFAIL_TESTS) " in \
+ *$$ws$$tst$$ws*) \
+ xpass=`expr $$xpass + 1`; \
+ failed=`expr $$failed + 1`; \
+ echo "XPASS: $$tst"; \
+ ;; \
+ *) \
+ echo "PASS: $$tst"; \
+ ;; \
+ esac; \
+ elif test $$? -ne 77; then \
+ all=`expr $$all + 1`; \
+ case " $(XFAIL_TESTS) " in \
+ *$$ws$$tst$$ws*) \
+ xfail=`expr $$xfail + 1`; \
+ echo "XFAIL: $$tst"; \
+ ;; \
+ *) \
+ failed=`expr $$failed + 1`; \
+ echo "FAIL: $$tst"; \
+ ;; \
+ esac; \
+ else \
+ skip=`expr $$skip + 1`; \
+ echo "SKIP: $$tst"; \
+ fi; \
+ done; \
+ if test "$$failed" -eq 0; then \
+ if test "$$xfail" -eq 0; then \
+ banner="All $$all tests passed"; \
+ else \
+ banner="All $$all tests behaved as expected ($$xfail expected failures)"; \
+ fi; \
+ else \
+ if test "$$xpass" -eq 0; then \
+ banner="$$failed of $$all tests failed"; \
+ else \
+ banner="$$failed of $$all tests did not behave as expected ($$xpass unexpected passes)"; \
+ fi; \
+ fi; \
+ dashes="$$banner"; \
+ skipped=""; \
+ if test "$$skip" -ne 0; then \
+ skipped="($$skip tests were not run)"; \
+ test `echo "$$skipped" | wc -c` -le `echo "$$banner" | wc -c` || \
+ dashes="$$skipped"; \
+ fi; \
+ report=""; \
+ if test "$$failed" -ne 0 && test -n "$(PACKAGE_BUGREPORT)"; then \
+ report="Please report to $(PACKAGE_BUGREPORT)"; \
+ test `echo "$$report" | wc -c` -le `echo "$$banner" | wc -c` || \
+ dashes="$$report"; \
+ fi; \
+ dashes=`echo "$$dashes" | sed s/./=/g`; \
+ echo "$$dashes"; \
+ echo "$$banner"; \
+ test -z "$$skipped" || echo "$$skipped"; \
+ test -z "$$report" || echo "$$report"; \
+ echo "$$dashes"; \
+ test "$$failed" -eq 0; \
+ else :; fi
+
+distdir: $(DISTFILES)
+ $(am__remove_distdir)
+ test -d $(distdir) || mkdir $(distdir)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
+ fi; \
+ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
+ else \
+ test -f $(distdir)/$$file \
+ || cp -p $$d/$$file $(distdir)/$$file \
+ || exit 1; \
+ fi; \
+ done
+ $(MAKE) $(AM_MAKEFLAGS) \
+ top_distdir="$(top_distdir)" distdir="$(distdir)" \
+ dist-info
+ -find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \
+ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
+ ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
+ || chmod -R a+r $(distdir)
+dist-gzip: distdir
+ tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
+ $(am__remove_distdir)
+
+dist-bzip2: distdir
+ tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2
+ $(am__remove_distdir)
+
+dist-tarZ: distdir
+ tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
+ $(am__remove_distdir)
+dist-shar: distdir
+ shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
+ $(am__remove_distdir)
+
+dist-zip: distdir
+ -rm -f $(distdir).zip
+ zip -rq $(distdir).zip $(distdir)
+ $(am__remove_distdir)
+
+dist dist-all: distdir
+ tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
+ shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
+ $(am__remove_distdir)
+
+# This target untars the dist file and tries a VPATH configuration. Then
+# it guarantees that the distribution is self-contained by making another
+# tarfile.
+distcheck: dist
+ case '$(DIST_ARCHIVES)' in \
+ *.tar.gz*) \
+ GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(am__untar) ;;\
+ *.tar.bz2*) \
+ bunzip2 -c $(distdir).tar.bz2 | $(am__untar) ;;\
+ *.tar.Z*) \
+ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
+ *.shar.gz*) \
+ GZIP=$(GZIP_ENV) gunzip -c $(distdir).shar.gz | unshar ;;\
+ *.zip*) \
+ unzip $(distdir).zip ;;\
+ esac
+ chmod -R a-w $(distdir); chmod a+w $(distdir)
+ mkdir $(distdir)/_build
+ mkdir $(distdir)/_inst
+ chmod a-w $(distdir)
+ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
+ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
+ && cd $(distdir)/_build \
+ && ../configure --srcdir=.. --prefix="$$dc_install_base" \
+ $(DISTCHECK_CONFIGURE_FLAGS) \
+ && $(MAKE) $(AM_MAKEFLAGS) \
+ && $(MAKE) $(AM_MAKEFLAGS) dvi \
+ && $(MAKE) $(AM_MAKEFLAGS) check \
+ && $(MAKE) $(AM_MAKEFLAGS) install \
+ && $(MAKE) $(AM_MAKEFLAGS) installcheck \
+ && $(MAKE) $(AM_MAKEFLAGS) uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
+ distuninstallcheck \
+ && chmod -R a-w "$$dc_install_base" \
+ && ({ \
+ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
+ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
+ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
+ } || { rm -rf "$$dc_destdir"; exit 1; }) \
+ && rm -rf "$$dc_destdir" \
+ && $(MAKE) $(AM_MAKEFLAGS) dist \
+ && rm -rf $(DIST_ARCHIVES) \
+ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck
+ $(am__remove_distdir)
+ @(echo "$(distdir) archives ready for distribution: "; \
+ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
+ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
+distuninstallcheck:
+ @cd $(distuninstallcheck_dir) \
+ && test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \
+ || { echo "ERROR: files left after uninstall:" ; \
+ if test -n "$(DESTDIR)"; then \
+ echo " (check DESTDIR support)"; \
+ fi ; \
+ $(distuninstallcheck_listfiles) ; \
+ exit 1; } >&2
+distcleancheck: distclean
+ @if test '$(srcdir)' = . ; then \
+ echo "ERROR: distcleancheck can only run from a VPATH build" ; \
+ exit 1 ; \
+ fi
+ @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
+ || { echo "ERROR: files left in build directory after distclean:" ; \
+ $(distcleancheck_listfiles) ; \
+ exit 1; } >&2
+check-am: all-am
+ $(MAKE) $(AM_MAKEFLAGS) check-TESTS
+check: check-am
+all-am: Makefile $(INFO_DEPS) $(SCRIPTS) $(MANS) $(DATA)
+installdirs:
+ for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(infodir)" "$(DESTDIR)$(man8dir)" "$(DESTDIR)$(docdir)"; do \
+ test -z "$$dir" || $(MKDIR_P) "$$dir"; \
+ done
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ `test -z '$(STRIP)' || \
+ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
+mostlyclean-generic:
+
+clean-generic:
+ -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+clean: clean-am
+
+clean-am: clean-generic clean-local mostlyclean-am
+
+distclean: distclean-am
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic
+
+dvi: dvi-am
+
+dvi-am: $(DVIS)
+
+html: html-am
+
+html-am: $(HTMLS)
+
+info: info-am
+
+info-am: $(INFO_DEPS)
+
+install-data-am: install-dist_docDATA install-info-am install-man
+
+install-dvi: install-dvi-am
+
+install-dvi-am: $(DVIS)
+ @$(NORMAL_INSTALL)
+ test -z "$(dvidir)" || $(MKDIR_P) "$(DESTDIR)$(dvidir)"
+ @list='$(DVIS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(dvidir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(dvidir)/$$f"; \
+ done
+install-exec-am: install-binSCRIPTS
+
+install-html: install-html-am
+
+install-html-am: $(HTMLS)
+ @$(NORMAL_INSTALL)
+ test -z "$(htmldir)" || $(MKDIR_P) "$(DESTDIR)$(htmldir)"
+ @list='$(HTMLS)'; for p in $$list; do \
+ if test -f "$$p" || test -d "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ if test -d "$$d$$p"; then \
+ echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(MKDIR_P) "$(DESTDIR)$(htmldir)/$$f" || exit 1; \
+ echo " $(INSTALL_DATA) '$$d$$p'/* '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p"/* "$(DESTDIR)$(htmldir)/$$f"; \
+ else \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(htmldir)/$$f"; \
+ fi; \
+ done
+install-info: install-info-am
+
+install-info-am: $(INFO_DEPS)
+ @$(NORMAL_INSTALL)
+ test -z "$(infodir)" || $(MKDIR_P) "$(DESTDIR)$(infodir)"
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ case $$file in \
+ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$file; then d=.; else d=$(srcdir); fi; \
+ file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \
+ $$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \
+ if test -f $$ifile; then \
+ relfile=`echo "$$ifile" | sed 's|^.*/||'`; \
+ echo " $(INSTALL_DATA) '$$ifile' '$(DESTDIR)$(infodir)/$$relfile'"; \
+ $(INSTALL_DATA) "$$ifile" "$(DESTDIR)$(infodir)/$$relfile"; \
+ else : ; fi; \
+ done; \
+ done
+ @$(POST_INSTALL)
+ @if (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' '$(DESTDIR)$(infodir)/$$relfile'";\
+ install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$$relfile" || :;\
+ done; \
+ else : ; fi
+install-man: install-man8
+
+install-pdf: install-pdf-am
+
+install-pdf-am: $(PDFS)
+ @$(NORMAL_INSTALL)
+ test -z "$(pdfdir)" || $(MKDIR_P) "$(DESTDIR)$(pdfdir)"
+ @list='$(PDFS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(pdfdir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(pdfdir)/$$f"; \
+ done
+install-ps: install-ps-am
+
+install-ps-am: $(PSS)
+ @$(NORMAL_INSTALL)
+ test -z "$(psdir)" || $(MKDIR_P) "$(DESTDIR)$(psdir)"
+ @list='$(PSS)'; for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ f=$(am__strip_dir) \
+ echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(psdir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(psdir)/$$f"; \
+ done
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f $(am__CONFIG_DISTCLEAN_FILES)
+ -rm -rf $(top_srcdir)/autom4te.cache
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-aminfo \
+ maintainer-clean-generic maintainer-clean-vti
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-aminfo mostlyclean-generic mostlyclean-vti
+
+pdf: pdf-am
+
+pdf-am: $(PDFS)
+
+ps: ps-am
+
+ps-am: $(PSS)
+
+uninstall-am: uninstall-binSCRIPTS uninstall-dist_docDATA \
+ uninstall-dvi-am uninstall-html-am uninstall-info-am \
+ uninstall-man uninstall-pdf-am uninstall-ps-am
+
+uninstall-man: uninstall-man8
+
+.MAKE: install-am install-strip
+
+.PHONY: all all-am am--refresh check check-TESTS check-am clean \
+ clean-generic clean-local dist dist-all dist-bzip2 dist-gzip \
+ dist-info dist-shar dist-tarZ dist-zip distcheck distclean \
+ distclean-generic distcleancheck distdir distuninstallcheck \
+ dvi dvi-am html html-am info info-am install install-am \
+ install-binSCRIPTS install-data install-data-am \
+ install-dist_docDATA install-dvi install-dvi-am install-exec \
+ install-exec-am install-html install-html-am install-info \
+ install-info-am install-man install-man8 install-pdf \
+ install-pdf-am install-ps install-ps-am install-strip \
+ installcheck installcheck-am installdirs maintainer-clean \
+ maintainer-clean-aminfo maintainer-clean-generic \
+ maintainer-clean-vti mostlyclean mostlyclean-aminfo \
+ mostlyclean-generic mostlyclean-vti pdf pdf-am ps ps-am \
+ uninstall uninstall-am uninstall-binSCRIPTS \
+ uninstall-dist_docDATA uninstall-dvi-am uninstall-html-am \
+ uninstall-info-am uninstall-man uninstall-man8 \
+ uninstall-pdf-am uninstall-ps-am
+
+
+# clean up files left behind by test suite
+clean-local:
+ -rm -rf t/target t/stow
+
+stow: stow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
+
+chkstow: chkstow.in Makefile
+ $(edit) < $< > $@
+ chmod +x $@
+
+# The rules for manual.html and manual.texi are only used by
+# the developer
+manual.html: manual.texi
+ -rm -f $@
+ texi2html -expandinfo -menu -monolithic -verbose $<
+
+manual.texi: stow.texi
+ -rm -f $@
+ cp $< $@
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff --git a/NEWS b/NEWS
index 7791b77..1d663f8 100644
--- a/NEWS
+++ b/NEWS
@@ -1,5 +1,138 @@
News file for Stow.
+* Changes in version 2.0.1:
+** Defer operations until all potential conflicts have been assessed.
+
+We do this by traversing the installation image(s) and recording the
+actions that need to be performed. Redundant actions are factored out,
+e.g., we don't want to create a link that we will later remove in order to
+create a directory. (A rough sketch of this plan-then-execute idea follows
+the list below.) Benefits of this approach:
+
+  1. You get to see _all_ the conflicts that are blocking an installation:
+     you don't have to deal with them one at a time.
+  2. No operations are performed if _any_ conflicts are detected:
+     a failed stow will not leave you with a partially installed
+     package.
+  3. Minimises the set of operations that need to be performed.
+  4. Operations are executed as a batch, which is much faster.
+     This can be an advantage when upgrading packages on a live system
+     where you want to minimise the amount of time when the package is
+     unavailable.
+
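+To make this concrete, here is a rough sketch of the plan-then-execute idea
+(illustrative only: the names 'plan_link', '@tasks' and '@conflicts' are
+made up for this example and are not Stow's actual internals):
+
+    use strict;
+    use warnings;
+
+    my @tasks;      # actions recorded while scanning the image
+    my @conflicts;  # problems found while scanning
+
+    # Phase 1: record what would be done, without touching the target.
+    sub plan_link {
+        my ($link, $dest) = @_;
+        if (-l $link || -e $link) {
+            push @conflicts, "$link already exists";
+        }
+        else {
+            push @tasks, { link => $link, dest => $dest };
+        }
+    }
+
+    # Phase 2: execute the whole batch only if no conflicts were found.
+    sub process_tasks {
+        if (@conflicts) {
+            die join("\n  ", "Conflicts detected, nothing done:", @conflicts), "\n";
+        }
+        for my $task (@tasks) {
+            symlink $task->{dest}, $task->{link}
+                or die "cannot link $task->{link}: $!\n";
+        }
+    }
+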
+** The above fixes the false conflict problem mentioned in the info file.
+
+** It also fixes the two bugs mentioned in the man page.
+
+** Multiple stow directories will now cooperate in folding/unfolding.
+
+** Conflict messages are more uniform and informative.
+
+** Verbosity and tracing is more extensive and uniform.
+
+** Implemented option parsing via Getopt::Long.
+
+** Default command line arguments set via '.stowrc' and '~/.stowrc' files.
+
+Contents of these files are parsed as though they occurred first on the
+command line.
+
+** Support multiple actions per invocation.
+
+In order for this to work, we had to add a new (optional) command line arg
+(-S) to specify packages to stow. For example, to update an installation
+of emacs you can now do
+
+ stow -D emacs-21.3 -S emacs-21.4a
+
+which will replace emacs-21.3 with emacs-21.4a.
+You can mix and match any number of actions, e.g.,
+
+ stow -S p1 p2 -D p3 p4 -S p5 -R p6
+
+will unstow p3, p4 and p6, then stow p1, p2, p5 and p6.
+
+** New (repeatable) command line arg: --ignore='<regex>'
+
+This suppresses operating on a file matching the regex (suffix), e.g.,
+
+ --ignore='~' --ignore='\.#.*'
+
+will ignore Emacs and CVS backup files (suitable for your ~/.stowrc file).
+
+(I opted for Perl regular expressions because they are more powerful and
+easier to implement).
+
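+As an illustration of the suffix matching, a check along the following lines
+decides whether a file is skipped ('is_ignored' is a made-up name for this
+example, not the actual Stow code):
+
+    # True if the file name matches any --ignore regex, anchored at the end.
+    sub is_ignored {
+        my ($file, @ignore_regexps) = @_;
+        for my $re (@ignore_regexps) {
+            return 1 if $file =~ m/$re\z/;
+        }
+        return 0;
+    }
+
+    # is_ignored('foo.c~', '~', '\.#.*')  => true (Emacs backup file)
+    # is_ignored('foo.c',  '~', '\.#.*')  => false
+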
+** New (repeatable) command line arg: --defer='<regex>'
+
+This defers stowing a file matching the regex (prefix) if that file is
+already stowed to a different package, e.g.,
+
+ --defer='man' --defer='info'
+
+will cause stow to skip over pre-existing man and info pages.
+
+Equivalently, you could use --defer='man|info' since the argument is just
+a Perl regex.
+
+** New (repeatable) command line arg: --override='<regex>'
+
+This forces a file matching the regex (prefix) to be stowed even if the
+file is already stowed to a different package, e.g.,
+
+ --override='man' --override='info'
+
+will unstow any pre-existing man and info pages that would conflict with
+the file we are trying to stow.
+
+Equivalently, you could use --override='man|info' since the argument is
+just a Perl regex.
+
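+As a rough illustration of how --defer and --override interact when a file
+is already owned by another package ('resolve_clash' is a made-up name; the
+real code is structured differently):
+
+    # The regexes are matched against the start of the package-relative path.
+    sub resolve_clash {
+        my ($path, $defer_regexps, $override_regexps) = @_;
+        return 'defer'    if grep { $path =~ m/\A$_/ } @$defer_regexps;
+        return 'override' if grep { $path =~ m/\A$_/ } @$override_regexps;
+        return 'conflict';
+    }
+
+    # resolve_clash('man/man1/foo.1', ['man', 'info'], [])  => 'defer'
+    # resolve_clash('bin/foo',        ['man', 'info'], [])  => 'conflict'
+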
+** The above gives the ability to manage packages with common content.
+
+For example, man pages that are shared by a number of CPAN packages.
+Using multiple stow directories and .stowrc files can also simplify
+things. In our setup we use the standard /usr/local/stow directory for
+packages to be installed in /usr/local. Since we install a large number
+of extra Perl packages (currently about 300) we use an additional stow
+directory: /usr/local/stow/perl-5.8.8-extras. Both stow directories
+contain a '.stow' file so that they collaborate appropriately. I then use
+the following .stowrc file in /usr/local/stow/perl-5.8.8-extras:
+
+ --dir=/usr/local/stow/perl-5.8.8-extras
+ --target=/usr/local
+ --override=bin
+ --override=man
+ --ignore='perllocal\.pod'
+ --ignore='\.packlist'
+ --ignore='\.bs'
+
+When I stow packages from there, they automatically override any man pages
+and binaries that may already have been stowed by another package or by
+the core perl-5.8.8 installation. For example, if you want to upgrade the
+Test-Simple package, you need to override all the man pages that would
+have been installed by the core package. If you are upgrading CPAN, you
+will also have to override the pre-existing cpan executable.
+
+** By default, search less aggressively for invalid symlinks when unstowing.
+
+That is, we only search for bad symlinks in the directories explicitly
+mentioned in the installation image, and do not dig down into other
+subdirs. Digging down into other directories can be very time consuming
+if you have a really big tree (like with a couple of Oracle installations
+lying around). In general the old behaviour is only necessary when you
+have really stuffed up your installation by deleting a directory that has
+already been stowed. Doing that on a live system is somewhat crazy and
+hopefully rare. We provide an option '-p|--compat' to enable the old
+behaviour for those needing to patch up mistakes.
+
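+Roughly speaking, the default behaviour amounts to something like the
+following sketch ('cleanup_invalid_links' here is illustrative only and is
+not the exact code):
+
+    # Look for dangling links only in the directories that the package's
+    # installation image actually mentions, instead of recursing through
+    # the whole target tree.
+    sub cleanup_invalid_links {
+        my ($target, @image_dirs) = @_;
+        for my $dir (@image_dirs) {
+            opendir my $dh, "$target/$dir" or next;
+            for my $entry (readdir $dh) {
+                my $path = "$target/$dir/$entry";
+                # -l && ! -e catches symlinks whose destination is gone
+                print "dangling link: $path\n" if -l $path && ! -e $path;
+            }
+            closedir $dh;
+        }
+    }
+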
+** Implement a test suite and support code.
+
+This was built before implementing any of the extra features so I could
+more easily check for equivalent functionality. The initial code base had
+to be refactored substantially to allow for testing. The test suite is
+not exhaustive, but it should provide enough to check for regressions.
+
+
* Changes in version 1.3.3:
** Now requires Perl 5.005 or later
** Initially empty directories are not removed anymore
@@ -24,7 +157,3 @@ News file for Stow.
** `make clean' removes stow (which is generated from stow.in).
* Initial public release (v1.0) of Stow.
-
-Local variables:
-mode: outline
-End:
diff --git a/README b/README
index 1eef38a..913f245 100644
--- a/README
+++ b/README
@@ -1,28 +1,31 @@
This is GNU Stow, a program for managing the installation of software
-packages, keeping them separate (/usr/local/stow/emacs
-vs. /usr/local/stow/perl, for example) while making them appear to be
-installed in the same place (/usr/local).
+packages, keeping them separate (/usr/local/stow/emacs vs.
+/usr/local/stow/perl, for example) while making them appear to be installed in
+the same place (/usr/local). Stow doesn't store any extra state between runs,
+so there's no danger of mangling directories when file hierarchies don't match
+the database. Also, Stow will never delete any files, directories, or links
+that appear in a stow directory, so it is always possible to rebuild the
+target tree.
-Stow is a Perl script which should run correctly under Perl 4 and Perl
-5. You must install Perl before running Stow. For more information
-about Perl, see http://www.perl.com/perl/.
+Stow is a Perl script which should run correctly under Perl 4 and Perl 5. You
+must install Perl before running Stow. For more information about Perl, see
+http://www.perl.com/perl/.
You can get the latest information about Stow from
-http://www.gnu.ai.mit.edu/software/stow/stow.html.
+http://www.gnu.org/software/stow/stow.html.
-Stow was inspired by Carnegie Mellon's "Depot" program, but is
-substantially simpler. Whereas Depot requires database files to keep
-things in sync, Stow stores no extra state between runs, so there's no
-danger (as there is in Depot) of mangling directories when file
-hierarchies don't match the database. Also unlike Depot, Stow will
-never delete any files, directories, or links that appear in a Stow
-directory (e.g., /usr/local/stow/emacs), so it's always possible to
-rebuild the target tree (e.g., /usr/local).
+Stow was inspired by Carnegie Mellon's "Depot" program, but is substantially
+simpler. Whereas Depot requires database files to keep things in sync, Stow
+stores no extra state between runs, so there's no danger (as there is in
+Depot) of mangling directories when file hierarchies don't match the database.
+Also unlike Depot, Stow will never delete any files, directories, or links
+that appear in a Stow directory (e.g., /usr/local/stow/emacs), so it's always
+possible to rebuild the target tree (e.g., /usr/local).
-Stow is free software, licensed under the GNU General Public License,
-which can be found in the file COPYING.
+Stow is free software, licensed under the GNU General Public License, which
+can be found in the file COPYING.
See INSTALL for installation instructions.
-Please mail comments, questions, and criticisms to the author, Bob
-Glickstein, <bobg+stow@zanshin.com>.
+Please mail comments, questions, and criticisms to the current maintainer,
+Kahlil (Kal) Hodgson, via help-stow@gnu.org or bug-stow@gnu.org.
diff --git a/THANKS b/THANKS
index 75d5bf0..e874eb1 100644
--- a/THANKS
+++ b/THANKS
@@ -3,16 +3,20 @@ Bob Glickstein:
Thanks to the following people for testing, using, commenting on, and
otherwise aiding the creation of Stow:
-Miles Bader <miles@gnu.ai.mit.edu>
-Greg Fox <fox@zanshin.com>
-David Hartmann <davidh@zanshin.com>
-Ben Liblit <liblit@well.com>
-Gord Matzigkeit <gord@enci.ucalgary.ca>
-Roland McGrath <roland@gnu.ai.mit.edu>
-Jim Meyering <meyering@asic.sc.ti.com>
-Fritz Mueller <fritzm@netcom.com>
-Bart Schaefer <schaefer@nbn.com>
-Richard Stallman <rms@gnu.ai.mit.edu>
-Spencer Sun <zorak@netcom.com>
-Tom Tromey <tromey@cygnus.com>
-Steve Webster <srw@zanshin.com>
+Miles Bader <miles@gnu.ai.mit.edu>
+Greg Fox <fox@zanshin.com>
+David Hartmann <davidh@zanshin.com>
+Ben Liblit <liblit@well.com>
+Gord Matzigkeit <gord@enci.ucalgary.ca>
+Roland McGrath <roland@gnu.ai.mit.edu>
+Jim Meyering <meyering@asic.sc.ti.com>
+Fritz Mueller <fritzm@netcom.com>
+Bart Schaefer <schaefer@nbn.com>
+Richard Stallman <rms@gnu.ai.mit.edu>
+Spencer Sun <zorak@netcom.com>
+Tom Tromey <tromey@cygnus.com>
+Steve Webster <srw@zanshin.com>
+Geoffrey Giesemann <geoffrey.giesemann@rmit.edu.au>
+Emil Mikulic <emil.mikulic@rmit.edu.au>
+Austin Wood <austin.wood@rmit.edu.au>
+Christopher Hoobin <christopher.hoobin@rmit.edu.au>
diff --git a/TODO b/TODO
index a3ba084..eaf9c82 100644
--- a/TODO
+++ b/TODO
@@ -1,15 +1,26 @@
--*- outline -*-
+* Get an account on fencepost.gnu.org (email accounts@gnu.org)
+ set up copyright papers?
+ 'assign.future' and 'request-assign.future.manual'
-* Autodetect "foreign" stow directories
+* Update stow.texi
+ - The email address in 'Reporting Bugs' needs to be updated
+
+* Figure out what needs the 'nostow' and 'notstowed' options. Can they be removed?
+
+* _texi2man_ needs author/copyright/license to be completed
-* Fix empty-dir problem (see "Known bugs" in the manual)
+* Update http://directory.fsf.org/project/stow/
-* Continue after conflicts.
+* Update Savannah CVS
-When detecting a conflict, affected subparts of the Stow traversal can
-be skipped while continuing with other subparts.
+* Check that all email addresses are working (bug-stow@gnu.org and
+  help-stow@gnu.org): this needs an account on fencepost
-* Traverse links in the target tree?
+* Get some pre-testers: need to find an appropriate mailing list?
+
+* Announce release on info-gnu@gnu.org.
+
+* Autodetect "foreign" stow directories
From e-mail with meyering@na-net.ornl.gov:
@@ -32,16 +43,7 @@ From e-mail with meyering@na-net.ornl.gov:
should it be an enumeration of which links are OK to traverse
(such as, "--traversable='info man doc'")?
-* Develop a mechanism for sharing files between packages.
-
-This would solve the problem of maintaining N platform-specific copies
-of a package, all of which have many platform-*independent* files
-which could be shared, such as man pages, info files, etc.
-
-* Option to ignore certain files in the stow tree.
-
-For example, --ignore='*~ .#*' (skip Emacs and CVS backup files).
-
-* Option to ignore links in the stow tree to certain places.
-
-For example, --ignore-link='/*' (skip absolute links).
+Does Version 2 fix this? (Kal)
+I think that because it never needs to create /usr/local/info,
+it only needs to check the ownership of links that it _operates_ on,
+not all the elements of the path.
diff --git a/aclocal.m4 b/aclocal.m4
new file mode 100644
index 0000000..2981344
--- /dev/null
+++ b/aclocal.m4
@@ -0,0 +1,548 @@
+# generated automatically by aclocal 1.10 -*- Autoconf -*-
+
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
+# 2005, 2006 Free Software Foundation, Inc.
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+m4_if(m4_PACKAGE_VERSION, [2.61],,
+[m4_fatal([this file was generated for autoconf 2.61.
+You have another version of autoconf. If you want to use that,
+you should regenerate the build system entirely.], [63])])
+
+# Copyright (C) 2002, 2003, 2005, 2006 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_AUTOMAKE_VERSION(VERSION)
+# ----------------------------
+# Automake X.Y traces this macro to ensure aclocal.m4 has been
+# generated from the m4 files accompanying Automake X.Y.
+# (This private macro should not be called outside this file.)
+AC_DEFUN([AM_AUTOMAKE_VERSION],
+[am__api_version='1.10'
+dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
+dnl require some minimum version. Point them to the right macro.
+m4_if([$1], [1.10], [],
+ [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
+])
+
+# _AM_AUTOCONF_VERSION(VERSION)
+# -----------------------------
+# aclocal traces this macro to find the Autoconf version.
+# This is a private macro too. Using m4_define simplifies
+# the logic in aclocal, which can simply ignore this definition.
+m4_define([_AM_AUTOCONF_VERSION], [])
+
+# AM_SET_CURRENT_AUTOMAKE_VERSION
+# -------------------------------
+# Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced.
+# This function is AC_REQUIREd by AC_INIT_AUTOMAKE.
+AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
+[AM_AUTOMAKE_VERSION([1.10])dnl
+_AM_AUTOCONF_VERSION(m4_PACKAGE_VERSION)])
+
+# AM_AUX_DIR_EXPAND -*- Autoconf -*-
+
+# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
+# $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to
+# `$srcdir', `$srcdir/..', or `$srcdir/../..'.
+#
+# Of course, Automake must honor this variable whenever it calls a
+# tool from the auxiliary directory. The problem is that $srcdir (and
+# therefore $ac_aux_dir as well) can be either absolute or relative,
+# depending on how configure is run. This is pretty annoying, since
+# it makes $ac_aux_dir quite unusable in subdirectories: in the top
+# source directory, any form will work fine, but in subdirectories a
+# relative path needs to be adjusted first.
+#
+# $ac_aux_dir/missing
+# fails when called from a subdirectory if $ac_aux_dir is relative
+# $top_srcdir/$ac_aux_dir/missing
+# fails if $ac_aux_dir is absolute,
+# fails when called from a subdirectory in a VPATH build with
+# a relative $ac_aux_dir
+#
+# The reason of the latter failure is that $top_srcdir and $ac_aux_dir
+# are both prefixed by $srcdir. In an in-source build this is usually
+# harmless because $srcdir is `.', but things will break when you
+# start a VPATH build or use an absolute $srcdir.
+#
+# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
+# iff we strip the leading $srcdir from $ac_aux_dir. That would be:
+# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
+# and then we would define $MISSING as
+# MISSING="\${SHELL} $am_aux_dir/missing"
+# This will work as long as MISSING is not called from configure, because
+# unfortunately $(top_srcdir) has no meaning in configure.
+# However there are other variables, like CC, which are often used in
+# configure, and could therefore not use this "fixed" $ac_aux_dir.
+#
+# Another solution, used here, is to always expand $ac_aux_dir to an
+# absolute PATH. The drawback is that using absolute paths prevent a
+# configured tree to be moved without reconfiguration.
+
+AC_DEFUN([AM_AUX_DIR_EXPAND],
+[dnl Rely on autoconf to set up CDPATH properly.
+AC_PREREQ([2.50])dnl
+# expand $ac_aux_dir to an absolute path
+am_aux_dir=`cd $ac_aux_dir && pwd`
+])
+
+# Do all the work for Automake. -*- Autoconf -*-
+
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
+# 2005, 2006 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 12
+
+# This macro actually does too much. Some checks are only needed if
+# your package does certain things. But this isn't really a big deal.
+
+# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
+# AM_INIT_AUTOMAKE([OPTIONS])
+# -----------------------------------------------
+# The call with PACKAGE and VERSION arguments is the old style
+# call (pre autoconf-2.50), which is being phased out. PACKAGE
+# and VERSION should now be passed to AC_INIT and removed from
+# the call to AM_INIT_AUTOMAKE.
+# We support both call styles for the transition. After
+# the next Automake release, Autoconf can make the AC_INIT
+# arguments mandatory, and then we can depend on a new Autoconf
+# release and drop the old call support.
+AC_DEFUN([AM_INIT_AUTOMAKE],
+[AC_PREREQ([2.60])dnl
+dnl Autoconf wants to disallow AM_ names. We explicitly allow
+dnl the ones we care about.
+m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
+AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
+AC_REQUIRE([AC_PROG_INSTALL])dnl
+if test "`cd $srcdir && pwd`" != "`pwd`"; then
+ # Use -I$(srcdir) only when $(srcdir) != ., so that make's output
+ # is not polluted with repeated "-I."
+ AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
+ # test to see if srcdir already configured
+ if test -f $srcdir/config.status; then
+ AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
+ fi
+fi
+
+# test whether we have cygpath
+if test -z "$CYGPATH_W"; then
+ if (cygpath --version) >/dev/null 2>/dev/null; then
+ CYGPATH_W='cygpath -w'
+ else
+ CYGPATH_W=echo
+ fi
+fi
+AC_SUBST([CYGPATH_W])
+
+# Define the identity of the package.
+dnl Distinguish between old-style and new-style calls.
+m4_ifval([$2],
+[m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
+ AC_SUBST([PACKAGE], [$1])dnl
+ AC_SUBST([VERSION], [$2])],
+[_AM_SET_OPTIONS([$1])dnl
+dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
+m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,,
+ [m4_fatal([AC_INIT should be called with package and version arguments])])dnl
+ AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
+ AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl
+
+_AM_IF_OPTION([no-define],,
+[AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package])
+ AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl
+
+# Some tools Automake needs.
+AC_REQUIRE([AM_SANITY_CHECK])dnl
+AC_REQUIRE([AC_ARG_PROGRAM])dnl
+AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version})
+AM_MISSING_PROG(AUTOCONF, autoconf)
+AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version})
+AM_MISSING_PROG(AUTOHEADER, autoheader)
+AM_MISSING_PROG(MAKEINFO, makeinfo)
+AM_PROG_INSTALL_SH
+AM_PROG_INSTALL_STRIP
+AC_REQUIRE([AM_PROG_MKDIR_P])dnl
+# We need awk for the "check" target. The system "awk" is bad on
+# some platforms.
+AC_REQUIRE([AC_PROG_AWK])dnl
+AC_REQUIRE([AC_PROG_MAKE_SET])dnl
+AC_REQUIRE([AM_SET_LEADING_DOT])dnl
+_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
+ [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
+ [_AM_PROG_TAR([v7])])])
+_AM_IF_OPTION([no-dependencies],,
+[AC_PROVIDE_IFELSE([AC_PROG_CC],
+ [_AM_DEPENDENCIES(CC)],
+ [define([AC_PROG_CC],
+ defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl
+AC_PROVIDE_IFELSE([AC_PROG_CXX],
+ [_AM_DEPENDENCIES(CXX)],
+ [define([AC_PROG_CXX],
+ defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl
+AC_PROVIDE_IFELSE([AC_PROG_OBJC],
+ [_AM_DEPENDENCIES(OBJC)],
+ [define([AC_PROG_OBJC],
+ defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl
+])
+])
+
+
+# When config.status generates a header, we must update the stamp-h file.
+# This file resides in the same directory as the config header
+# that is generated. The stamp files are numbered to have different names.
+
+# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
+# loop where config.status creates the headers, so we can generate
+# our stamp files there.
+AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
+[# Compute $1's index in $config_headers.
+_am_stamp_count=1
+for _am_header in $config_headers :; do
+ case $_am_header in
+ $1 | $1:* )
+ break ;;
+ * )
+ _am_stamp_count=`expr $_am_stamp_count + 1` ;;
+ esac
+done
+echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count])
+
+# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_INSTALL_SH
+# ------------------
+# Define $install_sh.
+AC_DEFUN([AM_PROG_INSTALL_SH],
+[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+install_sh=${install_sh-"\$(SHELL) $am_aux_dir/install-sh"}
+AC_SUBST(install_sh)])
+
+# Copyright (C) 2003, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 2
+
+# Check whether the underlying file-system supports filenames
+# with a leading dot. For instance MS-DOS doesn't.
+AC_DEFUN([AM_SET_LEADING_DOT],
+[rm -rf .tst 2>/dev/null
+mkdir .tst 2>/dev/null
+if test -d .tst; then
+ am__leading_dot=.
+else
+ am__leading_dot=_
+fi
+rmdir .tst 2>/dev/null
+AC_SUBST([am__leading_dot])])
+
+# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-
+
+# Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005
+# Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 5
+
+# AM_MISSING_PROG(NAME, PROGRAM)
+# ------------------------------
+AC_DEFUN([AM_MISSING_PROG],
+[AC_REQUIRE([AM_MISSING_HAS_RUN])
+$1=${$1-"${am_missing_run}$2"}
+AC_SUBST($1)])
+
+
+# AM_MISSING_HAS_RUN
+# ------------------
+# Define MISSING if not defined so far and test if it supports --run.
+# If it does, set am_missing_run to use it, otherwise, to nothing.
+AC_DEFUN([AM_MISSING_HAS_RUN],
+[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
+AC_REQUIRE_AUX_FILE([missing])dnl
+test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing"
+# Use eval to expand $SHELL
+if eval "$MISSING --run true"; then
+ am_missing_run="$MISSING --run "
+else
+ am_missing_run=
+ AC_MSG_WARN([`missing' script is too old or missing])
+fi
+])
+
+# Copyright (C) 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_MKDIR_P
+# ---------------
+# Check for `mkdir -p'.
+AC_DEFUN([AM_PROG_MKDIR_P],
+[AC_PREREQ([2.60])dnl
+AC_REQUIRE([AC_PROG_MKDIR_P])dnl
+dnl Automake 1.8 to 1.9.6 used to define mkdir_p. We now use MKDIR_P,
+dnl while keeping a definition of mkdir_p for backward compatibility.
+dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile.
+dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of
+dnl Makefile.ins that do not define MKDIR_P, so we do our own
+dnl adjustment using top_builddir (which is defined more often than
+dnl MKDIR_P).
+AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl
+case $mkdir_p in
+ [[\\/$]]* | ?:[[\\/]]*) ;;
+ */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;;
+esac
+])
+
+# Helper functions for option handling. -*- Autoconf -*-
+
+# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 3
+
+# _AM_MANGLE_OPTION(NAME)
+# -----------------------
+AC_DEFUN([_AM_MANGLE_OPTION],
+[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])
+
+# _AM_SET_OPTION(NAME)
+# ------------------------------
+# Set option NAME. Presently that only means defining a flag for this option.
+AC_DEFUN([_AM_SET_OPTION],
+[m4_define(_AM_MANGLE_OPTION([$1]), 1)])
+
+# _AM_SET_OPTIONS(OPTIONS)
+# ----------------------------------
+# OPTIONS is a space-separated list of Automake options.
+AC_DEFUN([_AM_SET_OPTIONS],
+[AC_FOREACH([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])
+
+# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
+# -------------------------------------------
+# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
+AC_DEFUN([_AM_IF_OPTION],
+[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
+
+# Check to make sure that the build environment is sane. -*- Autoconf -*-
+
+# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005
+# Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 4
+
+# AM_SANITY_CHECK
+# ---------------
+AC_DEFUN([AM_SANITY_CHECK],
+[AC_MSG_CHECKING([whether build environment is sane])
+# Just in case
+sleep 1
+echo timestamp > conftest.file
+# Do `set' in a subshell so we don't clobber the current shell's
+# arguments. Must try -L first in case configure is actually a
+# symlink; some systems play weird games with the mod time of symlinks
+# (eg FreeBSD returns the mod time of the symlink's containing
+# directory).
+if (
+ set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
+ if test "$[*]" = "X"; then
+ # -L didn't work.
+ set X `ls -t $srcdir/configure conftest.file`
+ fi
+ rm -f conftest.file
+ if test "$[*]" != "X $srcdir/configure conftest.file" \
+ && test "$[*]" != "X conftest.file $srcdir/configure"; then
+
+ # If neither matched, then we have a broken ls. This can happen
+ # if, for instance, CONFIG_SHELL is bash and it inherits a
+ # broken ls alias from the environment. This has actually
+ # happened. Such a system could not be considered "sane".
+ AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
+alias in your environment])
+ fi
+
+ test "$[2]" = conftest.file
+ )
+then
+ # Ok.
+ :
+else
+ AC_MSG_ERROR([newly created file is older than distributed files!
+Check your system clock])
+fi
+AC_MSG_RESULT(yes)])
+
+# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# AM_PROG_INSTALL_STRIP
+# ---------------------
+# One issue with vendor `install' (even GNU) is that you can't
+# specify the program used to strip binaries. This is especially
+# annoying in cross-compiling environments, where the build's strip
+# is unlikely to handle the host's binaries.
+# Fortunately install-sh will honor a STRIPPROG variable, so we
+# always use install-sh in `make install-strip', and initialize
+# STRIPPROG with the value of the STRIP variable (set by the user).
+AC_DEFUN([AM_PROG_INSTALL_STRIP],
+[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
+# Installed binaries are usually stripped using `strip' when the user
+# run `make install-strip'. However `strip' might not be the right
+# tool to use in cross-compilation environments, therefore Automake
+# will honor the `STRIP' environment variable to overrule this program.
+dnl Don't test for $cross_compiling = yes, because it might be `maybe'.
+if test "$cross_compiling" != no; then
+ AC_CHECK_TOOL([STRIP], [strip], :)
+fi
+INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
+AC_SUBST([INSTALL_STRIP_PROGRAM])])
+
+# Copyright (C) 2006 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# _AM_SUBST_NOTMAKE(VARIABLE)
+# ---------------------------
+# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
+# This macro is traced by Automake.
+AC_DEFUN([_AM_SUBST_NOTMAKE])
+
+# Check how to create a tarball. -*- Autoconf -*-
+
+# Copyright (C) 2004, 2005 Free Software Foundation, Inc.
+#
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# serial 2
+
+# _AM_PROG_TAR(FORMAT)
+# --------------------
+# Check how to create a tarball in format FORMAT.
+# FORMAT should be one of `v7', `ustar', or `pax'.
+#
+# Substitute a variable $(am__tar) that is a command
+# writing to stdout a FORMAT-tarball containing the directory
+# $tardir.
+# tardir=directory && $(am__tar) > result.tar
+#
+# Substitute a variable $(am__untar) that extract such
+# a tarball read from stdin.
+# $(am__untar) < result.tar
+AC_DEFUN([_AM_PROG_TAR],
+[# Always define AMTAR for backward compatibility.
+AM_MISSING_PROG([AMTAR], [tar])
+m4_if([$1], [v7],
+ [am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'],
+ [m4_case([$1], [ustar],, [pax],,
+ [m4_fatal([Unknown tar format])])
+AC_MSG_CHECKING([how to create a $1 tar archive])
+# Loop over all known methods to create a tar archive until one works.
+_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
+_am_tools=${am_cv_prog_tar_$1-$_am_tools}
+# Do not fold the above two line into one, because Tru64 sh and
+# Solaris sh will not grok spaces in the rhs of `-'.
+for _am_tool in $_am_tools
+do
+ case $_am_tool in
+ gnutar)
+ for _am_tar in tar gnutar gtar;
+ do
+ AM_RUN_LOG([$_am_tar --version]) && break
+ done
+ am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
+ am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
+ am__untar="$_am_tar -xf -"
+ ;;
+ plaintar)
+ # Must skip GNU tar: if it does not support --format= it doesn't create
+ # ustar tarball either.
+ (tar --version) >/dev/null 2>&1 && continue
+ am__tar='tar chf - "$$tardir"'
+ am__tar_='tar chf - "$tardir"'
+ am__untar='tar xf -'
+ ;;
+ pax)
+ am__tar='pax -L -x $1 -w "$$tardir"'
+ am__tar_='pax -L -x $1 -w "$tardir"'
+ am__untar='pax -r'
+ ;;
+ cpio)
+ am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
+ am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
+ am__untar='cpio -i -H $1 -d'
+ ;;
+ none)
+ am__tar=false
+ am__tar_=false
+ am__untar=false
+ ;;
+ esac
+
+ # If the value was cached, stop now. We just wanted to have am__tar
+ # and am__untar set.
+ test -n "${am_cv_prog_tar_$1}" && break
+
+ # tar/untar a dummy directory, and stop if the command works
+ rm -rf conftest.dir
+ mkdir conftest.dir
+ echo GrepMe > conftest.dir/file
+ AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
+ rm -rf conftest.dir
+ if test -s conftest.tar; then
+ AM_RUN_LOG([$am__untar <conftest.tar])
+ grep GrepMe conftest.dir/file >/dev/null 2>&1 && break
+ fi
+done
+rm -rf conftest.dir
+
+AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
+AC_MSG_RESULT([$am_cv_prog_tar_$1])])
+AC_SUBST([am__tar])
+AC_SUBST([am__untar])
+]) # _AM_PROG_TAR
+
diff --git a/autogen.sh b/autogen.sh
deleted file mode 100755
index b185cc2..0000000
--- a/autogen.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-automake --add-missing --copy --gnu
-aclocal
-autoheader
-autoconf
diff --git a/chkstow.in b/chkstow.in
new file mode 100644
index 0000000..6e54b91
--- /dev/null
+++ b/chkstow.in
@@ -0,0 +1,104 @@
+#!@PERL@
+
+use strict;
+use warnings;
+
+use File::Find;
+use Getopt::Long;
+
+our $Wanted = \&bad_links;
+our %Package=();
+our $Stow_dir = '';
+our $Target = q{/usr/local/};
+
+# put the main loop into a block so that tests can load this as a module
+if ( not caller() ) {
+ if (@ARGV == 0) {
+ usage();
+ }
+ process_options();
+ #check_stow($Target, $Wanted);
+ check_stow();
+}
+
+sub process_options {
+ GetOptions(
+ 'b|badlinks' => sub { $Wanted = \&bad_links },
+ 'a|aliens' => sub { $Wanted = \&aliens },
+ 'l|list' => sub { $Wanted = \&list },
+ 't|target=s' => \$Target,
+ ) or usage();
+ return;
+}
+
+sub usage {
+ print <<"EOT";
+USAGE: chkstow [options]
+
+Options:
+  -b, --badlinks        Report symlinks that point to non-existent files.
+  -a, --aliens          Report non-symlinks in the target directory.
+  -l, --list            List packages in the target directory.
+  -t DIR, --target=DIR  Set the target directory to DIR
+                        (default is /usr/local).
+EOT
+ exit(0);
+}
+
+sub check_stow {
+ #my ($Target, $Wanted) = @_;
+
+ my (%options) = (
+ wanted => $Wanted,
+ preprocess => \&skip_dirs,
+ );
+
+ find(\%options, $Target);
+
+ if ($Wanted == \&list) {
+ delete $Package{''};
+ delete $Package{'..'};
+
+ if (keys %Package) {
+ local $,="\n";
+ print sort(keys %Package), "\n";
+ }
+ }
+ return;
+}
+
+sub skip_dirs {
+ # skip stow source and unstowed targets
+ if (-e ".stow" || -e ".notstowed" ) {
+ warn "skipping $File::Find::dir\n";
+ return ();
+ }
+ else {
+ return @_;
+ }
+}
+
+# checking for files that do not link to anything
+sub bad_links {
+ -l && !-e && print "Bogus link: $File::Find::name\n";
+}
+
+# checking for files that are not owned by stow
+sub aliens {
+ !-l && !-d && print "Unstowed file: $File::Find::name\n";
+}
+
+# just list the packages in the target directory
+# FIXME: what if the stow dir is not called 'stow'?
+sub list {
+    if (-l) {
+        # Take the link destination, strip any leading '../' components
+        # and the literal 'stow/' prefix (see FIXME above), then keep only
+        # the first remaining path component: the package name.
+        $_ = readlink;
+        s{\A(?:\.\./)+stow/}{}g;
+        s{/.*}{}g;
+        $Package{$_} = 1;
+    }
+}
+
+1; # Hey, it's a module!
+
+# vim:ft=perl
diff --git a/config.log b/config.log
new file mode 100644
index 0000000..ed0ccea
--- /dev/null
+++ b/config.log
@@ -0,0 +1,176 @@
+This file contains any messages produced by compilers while
+running configure, to aid debugging if configure makes a mistake.
+
+It was created by stow configure 2.0.2, which was
+generated by GNU Autoconf 2.61. Invocation command line was
+
+ $ ./configure
+
+## --------- ##
+## Platform. ##
+## --------- ##
+
+hostname = buffy.finexium.com
+uname -m = x86_64
+uname -r = 2.6.27.19-170.2.35.fc10.x86_64
+uname -s = Linux
+uname -v = #1 SMP Mon Feb 23 13:00:23 EST 2009
+
+/usr/bin/uname -p = unknown
+/bin/uname -X = unknown
+
+/bin/arch = x86_64
+/usr/bin/arch -k = unknown
+/usr/convex/getsysinfo = unknown
+/usr/bin/hostinfo = unknown
+/bin/machine = unknown
+/usr/bin/oslevel = unknown
+/bin/universe = unknown
+
+PATH: /home/kal/bin
+PATH: /usr/lib64/qt-3.3/bin
+PATH: /usr/kerberos/bin
+PATH: /usr/lib64/ccache
+PATH: /usr/local/bin
+PATH: /usr/bin
+PATH: /bin
+PATH: /usr/local/sbin
+PATH: /usr/sbin
+PATH: /sbin
+
+
+## ----------- ##
+## Core tests. ##
+## ----------- ##
+
+configure:1694: checking for a BSD-compatible install
+configure:1750: result: /usr/bin/install -c
+configure:1761: checking whether build environment is sane
+configure:1804: result: yes
+configure:1832: checking for a thread-safe mkdir -p
+configure:1871: result: /bin/mkdir -p
+configure:1884: checking for gawk
+configure:1900: found /usr/bin/gawk
+configure:1911: result: gawk
+configure:1922: checking whether make sets $(MAKE)
+configure:1943: result: yes
+configure:2144: checking for a BSD-compatible install
+configure:2200: result: /usr/bin/install -c
+configure:2216: checking for perl
+configure:2234: found /usr/bin/perl
+configure:2246: result: /usr/bin/perl
+configure:2396: creating ./config.status
+
+## ---------------------- ##
+## Running config.status. ##
+## ---------------------- ##
+
+This file was extended by stow config.status 2.0.2, which was
+generated by GNU Autoconf 2.61. Invocation command line was
+
+ CONFIG_FILES =
+ CONFIG_HEADERS =
+ CONFIG_LINKS =
+ CONFIG_COMMANDS =
+ $ ./config.status
+
+on buffy.finexium.com
+
+config.status:589: creating Makefile
+
+## ---------------- ##
+## Cache variables. ##
+## ---------------- ##
+
+ac_cv_env_build_alias_set=
+ac_cv_env_build_alias_value=
+ac_cv_env_host_alias_set=
+ac_cv_env_host_alias_value=
+ac_cv_env_target_alias_set=
+ac_cv_env_target_alias_value=
+ac_cv_path_PERL=/usr/bin/perl
+ac_cv_path_install='/usr/bin/install -c'
+ac_cv_path_mkdir=/bin/mkdir
+ac_cv_prog_AWK=gawk
+ac_cv_prog_make_make_set=yes
+
+## ----------------- ##
+## Output variables. ##
+## ----------------- ##
+
+ACLOCAL='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run aclocal-1.10'
+AMTAR='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run tar'
+AUTOCONF='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoconf'
+AUTOHEADER='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoheader'
+AUTOMAKE='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run automake-1.10'
+AWK='gawk'
+CYGPATH_W='echo'
+DEFS='-DPACKAGE_NAME=\"stow\" -DPACKAGE_TARNAME=\"stow\" -DPACKAGE_VERSION=\"2.0.2\" -DPACKAGE_STRING=\"stow\ 2.0.2\" -DPACKAGE_BUGREPORT=\"bug-stow@gnu.org\" -DPACKAGE=\"stow\" -DVERSION=\"2.0.2\"'
+ECHO_C=''
+ECHO_N='-n'
+ECHO_T=''
+INSTALL_DATA='${INSTALL} -m 644'
+INSTALL_PROGRAM='${INSTALL}'
+INSTALL_SCRIPT='${INSTALL}'
+INSTALL_STRIP_PROGRAM='$(install_sh) -c -s'
+LIBOBJS=''
+LIBS=''
+LTLIBOBJS=''
+MAKEINFO='${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run makeinfo'
+PACKAGE='stow'
+PACKAGE_BUGREPORT='bug-stow@gnu.org'
+PACKAGE_NAME='stow'
+PACKAGE_STRING='stow 2.0.2'
+PACKAGE_TARNAME='stow'
+PACKAGE_VERSION='2.0.2'
+PATH_SEPARATOR=':'
+PERL='/usr/bin/perl'
+SET_MAKE=''
+SHELL='/bin/sh'
+STRIP=''
+VERSION='2.0.2'
+am__isrc=''
+am__leading_dot='.'
+am__tar='${AMTAR} chof - "$$tardir"'
+am__untar='${AMTAR} xf -'
+bindir='${exec_prefix}/bin'
+build_alias=''
+datadir='${datarootdir}'
+datarootdir='${prefix}/share'
+docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
+dvidir='${docdir}'
+exec_prefix='${prefix}'
+host_alias=''
+htmldir='${docdir}'
+includedir='${prefix}/include'
+infodir='${datarootdir}/info'
+install_sh='$(SHELL) /home/kal/Projects/old/RMIT/stow/stow-2.0.2/install-sh'
+libdir='${exec_prefix}/lib'
+libexecdir='${exec_prefix}/libexec'
+localedir='${datarootdir}/locale'
+localstatedir='${prefix}/var'
+mandir='${datarootdir}/man'
+mkdir_p='/bin/mkdir -p'
+oldincludedir='/usr/include'
+pdfdir='${docdir}'
+prefix='/usr/local'
+program_transform_name='s,x,x,'
+psdir='${docdir}'
+sbindir='${exec_prefix}/sbin'
+sharedstatedir='${prefix}/com'
+sysconfdir='${prefix}/etc'
+target_alias=''
+
+## ----------- ##
+## confdefs.h. ##
+## ----------- ##
+
+#define PACKAGE_NAME "stow"
+#define PACKAGE_TARNAME "stow"
+#define PACKAGE_VERSION "2.0.2"
+#define PACKAGE_STRING "stow 2.0.2"
+#define PACKAGE_BUGREPORT "bug-stow@gnu.org"
+#define PACKAGE "stow"
+#define VERSION "2.0.2"
+
+configure: exit 0
diff --git a/config.status b/config.status
new file mode 100755
index 0000000..52d0292
--- /dev/null
+++ b/config.status
@@ -0,0 +1,786 @@
+#! /bin/sh
+# Generated by configure.
+# Run this file to recreate the current configuration.
+# Compiler output produced by configure, useful for debugging
+# configure, is in config.log if it exists.
+
+debug=false
+ac_cs_recheck=false
+ac_cs_silent=false
+SHELL=${CONFIG_SHELL-/bin/sh}
+## --------------------- ##
+## M4sh Initialization. ##
+## --------------------- ##
+
+# Be more Bourne compatible
+DUALCASE=1; export DUALCASE # for MKS sh
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+ # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else
+ case `(set -o) 2>/dev/null` in
+ *posix*) set -o posix ;;
+esac
+
+fi
+
+
+
+
+# PATH needs CR
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+# The user is always right.
+if test "${PATH_SEPARATOR+set}" != set; then
+ echo "#! /bin/sh" >conf$$.sh
+ echo "exit 0" >>conf$$.sh
+ chmod +x conf$$.sh
+ if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then
+ PATH_SEPARATOR=';'
+ else
+ PATH_SEPARATOR=:
+ fi
+ rm -f conf$$.sh
+fi
+
+# Support unset when possible.
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ as_unset=unset
+else
+ as_unset=false
+fi
+
+
+# IFS
+# We need space, tab and new line, in precisely that order. Quoting is
+# there to prevent editors from complaining about space-tab.
+# (If _AS_PATH_WALK were called with IFS unset, it would disable word
+# splitting by setting IFS to empty value.)
+as_nl='
+'
+IFS=" "" $as_nl"
+
+# Find who we are. Look in the path if we contain no directory separator.
+case $0 in
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break
+done
+IFS=$as_save_IFS
+
+ ;;
+esac
+# We did not find ourselves, most probably we were run as `sh COMMAND'
+# in which case we are not to be found in the path.
+if test "x$as_myself" = x; then
+ as_myself=$0
+fi
+if test ! -f "$as_myself"; then
+ echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2
+ { (exit 1); exit 1; }
+fi
+
+# Work around bugs in pre-3.0 UWIN ksh.
+for as_var in ENV MAIL MAILPATH
+do ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+done
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# NLS nuisances.
+for as_var in \
+ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
+ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \
+ LC_TELEPHONE LC_TIME
+do
+ if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then
+ eval $as_var=C; export $as_var
+ else
+ ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+ fi
+done
+
+# Required to use basename.
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+
+# Name of the executable.
+as_me=`$as_basename -- "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+echo X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+
+# CDPATH.
+$as_unset CDPATH
+
+
+
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x`expr $as_lineno_1 + 1`" = "x$as_lineno_2" || {
+
+ # Create $as_me.lineno as a copy of $as_myself, but with $LINENO
+ # uniformly replaced by the line number. The first 'sed' inserts a
+ # line-number line after each line using $LINENO; the second 'sed'
+ # does the real work. The second script uses 'N' to pair each
+ # line-number line with the line containing $LINENO, and appends
+ # trailing '-' during substitution so that $LINENO is not a special
+ # case at line end.
+ # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the
+ # scripts with optimization help from Paolo Bonzini. Blame Lee
+ # E. McMahon (1931-1989) for sed's syntax. :-)
+ sed -n '
+ p
+ /[$]LINENO/=
+ ' <$as_myself |
+ sed '
+ s/[$]LINENO.*/&-/
+ t lineno
+ b
+ :lineno
+ N
+ :loop
+ s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/
+ t loop
+ s/-\n.*//
+ ' >$as_me.lineno &&
+ chmod +x "$as_me.lineno" ||
+ { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2
+ { (exit 1); exit 1; }; }
+
+ # Don't try to exec as it changes $[0], causing all sort of problems
+ # (the dirname of $[0] is not the place where we might find the
+ # original and so on. Autoconf is especially sensitive to this).
+ . "./$as_me.lineno"
+ # Exit status is that of the last command.
+ exit
+}
+
+
+if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
+ as_dirname=dirname
+else
+ as_dirname=false
+fi
+
+ECHO_C= ECHO_N= ECHO_T=
+case `echo -n x` in
+-n*)
+ case `echo 'x\c'` in
+ *c*) ECHO_T=' ';; # ECHO_T is single tab character.
+ *) ECHO_C='\c';;
+ esac;;
+*)
+ ECHO_N='-n';;
+esac
+
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+rm -f conf$$ conf$$.exe conf$$.file
+if test -d conf$$.dir; then
+ rm -f conf$$.dir/conf$$.file
+else
+ rm -f conf$$.dir
+ mkdir conf$$.dir
+fi
+echo >conf$$.file
+if ln -s conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s='ln -s'
+ # ... but there are two gotchas:
+ # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
+ # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
+ # In both cases, we have to default to `cp -p'.
+ ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
+ as_ln_s='cp -p'
+elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+else
+ as_ln_s='cp -p'
+fi
+rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
+rmdir conf$$.dir 2>/dev/null
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p=:
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+if test -x / >/dev/null 2>&1; then
+ as_test_x='test -x'
+else
+ if ls -dL / >/dev/null 2>&1; then
+ as_ls_L_option=L
+ else
+ as_ls_L_option=
+ fi
+ as_test_x='
+ eval sh -c '\''
+ if test -d "$1"; then
+ test -d "$1/.";
+ else
+ case $1 in
+ -*)set "./$1";;
+ esac;
+ case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in
+ ???[sx]*):;;*)false;;esac;fi
+ '\'' sh
+ '
+fi
+as_executable_p=$as_test_x
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+exec 6>&1
+
+# Save the log message, to keep $[0] and so on meaningful, and to
+# report actual input values of CONFIG_FILES etc. instead of their
+# values after options handling.
+ac_log="
+This file was extended by stow $as_me 2.0.2, which was
+generated by GNU Autoconf 2.61. Invocation command line was
+
+ CONFIG_FILES = $CONFIG_FILES
+ CONFIG_HEADERS = $CONFIG_HEADERS
+ CONFIG_LINKS = $CONFIG_LINKS
+ CONFIG_COMMANDS = $CONFIG_COMMANDS
+ $ $0 $@
+
+on `(hostname || uname -n) 2>/dev/null | sed 1q`
+"
+
+# Files that config.status was made for.
+config_files=" Makefile"
+
+ac_cs_usage="\
+\`$as_me' instantiates files from templates according to the
+current configuration.
+
+Usage: $0 [OPTIONS] [FILE]...
+
+ -h, --help print this help, then exit
+ -V, --version print version number and configuration settings, then exit
+ -q, --quiet do not print progress messages
+ -d, --debug don't remove temporary files
+ --recheck update $as_me by reconfiguring in the same conditions
+ --file=FILE[:TEMPLATE]
+ instantiate the configuration file FILE
+
+Configuration files:
+$config_files
+
+Report bugs to <bug-autoconf@gnu.org>."
+
+ac_cs_version="\
+stow config.status 2.0.2
+configured by ./configure, generated by GNU Autoconf 2.61,
+ with options \"\"
+
+Copyright (C) 2006 Free Software Foundation, Inc.
+This config.status script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it."
+
+ac_pwd='/home/kal/Projects/old/RMIT/stow/stow-2.0.2'
+srcdir='.'
+INSTALL='/usr/bin/install -c'
+MKDIR_P='/bin/mkdir -p'
+# If no files are specified by the user, then we need to provide default
+# values. But we need to know whether files were specified by the user.
+ac_need_defaults=:
+while test $# != 0
+do
+ case $1 in
+ --*=*)
+ ac_option=`expr "X$1" : 'X\([^=]*\)='`
+ ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'`
+ ac_shift=:
+ ;;
+ *)
+ ac_option=$1
+ ac_optarg=$2
+ ac_shift=shift
+ ;;
+ esac
+
+ case $ac_option in
+ # Handling of the options.
+ -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
+ ac_cs_recheck=: ;;
+ --version | --versio | --versi | --vers | --ver | --ve | --v | -V )
+ echo "$ac_cs_version"; exit ;;
+ --debug | --debu | --deb | --de | --d | -d )
+ debug=: ;;
+ --file | --fil | --fi | --f )
+ $ac_shift
+ CONFIG_FILES="$CONFIG_FILES $ac_optarg"
+ ac_need_defaults=false;;
+ --he | --h | --help | --hel | -h )
+ echo "$ac_cs_usage"; exit ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil | --si | --s)
+ ac_cs_silent=: ;;
+
+ # This is an error.
+ -*) { echo "$as_me: error: unrecognized option: $1
+Try \`$0 --help' for more information." >&2
+ { (exit 1); exit 1; }; } ;;
+
+ *) ac_config_targets="$ac_config_targets $1"
+ ac_need_defaults=false ;;
+
+ esac
+ shift
+done
+
+ac_configure_extra_args=
+
+if $ac_cs_silent; then
+ exec 6>/dev/null
+ ac_configure_extra_args="$ac_configure_extra_args --silent"
+fi
+
+if $ac_cs_recheck; then
+ echo "running CONFIG_SHELL=/bin/sh /bin/sh ./configure " $ac_configure_extra_args " --no-create --no-recursion" >&6
+ CONFIG_SHELL=/bin/sh
+ export CONFIG_SHELL
+ exec /bin/sh "./configure" $ac_configure_extra_args --no-create --no-recursion
+fi
+
+exec 5>>config.log
+{
+ echo
+ sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX
+## Running $as_me. ##
+_ASBOX
+ echo "$ac_log"
+} >&5
+
+
+# Handling of arguments.
+for ac_config_target in $ac_config_targets
+do
+ case $ac_config_target in
+ "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;;
+
+ *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5
+echo "$as_me: error: invalid argument: $ac_config_target" >&2;}
+ { (exit 1); exit 1; }; };;
+ esac
+done
+
+
+# If the user did not use the arguments to specify the items to instantiate,
+# then the envvar interface is used. Set only those that are not.
+# We use the long form for the default assignment because of an extremely
+# bizarre bug on SunOS 4.1.3.
+if $ac_need_defaults; then
+ test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files
+fi
+
+# Have a temporary directory for convenience. Make it in the build tree
+# simply because there is no reason against having it here, and in addition,
+# creating and moving files from /tmp can sometimes cause problems.
+# Hook for its removal unless debugging.
+# Note that there is a small window in which the directory will not be cleaned:
+# after its creation but before its name has been assigned to `$tmp'.
+$debug ||
+{
+ tmp=
+ trap 'exit_status=$?
+ { test -z "$tmp" || test ! -d "$tmp" || rm -fr "$tmp"; } && exit $exit_status
+' 0
+ trap '{ (exit 1); exit 1; }' 1 2 13 15
+}
+# Create a (secure) tmp directory for tmp files.
+
+{
+ tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` &&
+ test -n "$tmp" && test -d "$tmp"
+} ||
+{
+ tmp=./conf$$-$RANDOM
+ (umask 077 && mkdir "$tmp")
+} ||
+{
+ echo "$me: cannot create a temporary directory in ." >&2
+ { (exit 1); exit 1; }
+}
+
+#
+# Set up the sed scripts for CONFIG_FILES section.
+#
+
+# No need to generate the scripts if there are no CONFIG_FILES.
+# This happens for instance when ./config.status config.h
+if test -n "$CONFIG_FILES"; then
+
+cat >"$tmp/subs-1.sed" <<\CEOF
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b end
+s,@SHELL@,|#_!!_#|/bin/sh,g
+s,@PATH_SEPARATOR@,|#_!!_#|:,g
+s,@PACKAGE_NAME@,|#_!!_#|stow,g
+s,@PACKAGE_TARNAME@,|#_!!_#|stow,g
+s,@PACKAGE_VERSION@,|#_!!_#|2.0.2,g
+s,@PACKAGE_STRING@,|#_!!_#|stow 2.0.2,g
+s,@PACKAGE_BUGREPORT@,|#_!!_#|bug-stow@|#_!!_#|gnu.org,g
+s,@exec_prefix@,|#_!!_#|${prefix},g
+s,@prefix@,|#_!!_#|/usr/local,g
+s,@program_transform_name@,|#_!!_#|s\,x\,x\,,g
+s,@bindir@,|#_!!_#|${exec_prefix}/bin,g
+s,@sbindir@,|#_!!_#|${exec_prefix}/sbin,g
+s,@libexecdir@,|#_!!_#|${exec_prefix}/libexec,g
+s,@datarootdir@,|#_!!_#|${prefix}/share,g
+s,@datadir@,|#_!!_#|${datarootdir},g
+s,@sysconfdir@,|#_!!_#|${prefix}/etc,g
+s,@sharedstatedir@,|#_!!_#|${prefix}/com,g
+s,@localstatedir@,|#_!!_#|${prefix}/var,g
+s,@includedir@,|#_!!_#|${prefix}/include,g
+s,@oldincludedir@,|#_!!_#|/usr/include,g
+s,@docdir@,|#_!!_#|${datarootdir}/doc/${PACKAGE_TARNAME},g
+s,@infodir@,|#_!!_#|${datarootdir}/info,g
+s,@htmldir@,|#_!!_#|${docdir},g
+s,@dvidir@,|#_!!_#|${docdir},g
+s,@pdfdir@,|#_!!_#|${docdir},g
+s,@psdir@,|#_!!_#|${docdir},g
+s,@libdir@,|#_!!_#|${exec_prefix}/lib,g
+s,@localedir@,|#_!!_#|${datarootdir}/locale,g
+s,@mandir@,|#_!!_#|${datarootdir}/man,g
+s,@DEFS@,|#_!!_#|-DPACKAGE_NAME=\\"stow\\" -DPACKAGE_TARNAME=\\"stow\\" -DPACKAGE_VERSION=\\"2.0.2\\" -DPACKAGE_STRING=\\"stow\\ 2.0.2\\" -DPACKAGE_BUGREPORT=\\"bug-stow@|#_!!_#|gnu.org\\" -DPACKAGE=\\"stow\\" -DVERSION=\\"2.0.2\\",g
+s,@ECHO_C@,|#_!!_#|,g
+s,@ECHO_N@,|#_!!_#|-n,g
+s,@ECHO_T@,|#_!!_#|,g
+s,@LIBS@,|#_!!_#|,g
+s,@build_alias@,|#_!!_#|,g
+s,@host_alias@,|#_!!_#|,g
+s,@target_alias@,|#_!!_#|,g
+s,@INSTALL_PROGRAM@,|#_!!_#|${INSTALL},g
+s,@INSTALL_SCRIPT@,|#_!!_#|${INSTALL},g
+s,@INSTALL_DATA@,|#_!!_#|${INSTALL} -m 644,g
+s,@am__isrc@,|#_!!_#|,g
+s,@CYGPATH_W@,|#_!!_#|echo,g
+s,@PACKAGE@,|#_!!_#|stow,g
+s,@VERSION@,|#_!!_#|2.0.2,g
+s,@ACLOCAL@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run aclocal-1.10,g
+s,@AUTOCONF@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoconf,g
+s,@AUTOMAKE@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run automake-1.10,g
+s,@AUTOHEADER@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run autoheader,g
+s,@MAKEINFO@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run makeinfo,g
+s,@install_sh@,|#_!!_#|$(SHELL) /home/kal/Projects/old/RMIT/stow/stow-2.0.2/install-sh,g
+s,@STRIP@,|#_!!_#|,g
+s,@INSTALL_STRIP_PROGRAM@,|#_!!_#|$(install_sh) -c -s,g
+s,@mkdir_p@,|#_!!_#|/bin/mkdir -p,g
+s,@AWK@,|#_!!_#|gawk,g
+s,@SET_MAKE@,|#_!!_#|,g
+s,@am__leading_dot@,|#_!!_#|.,g
+s,@AMTAR@,|#_!!_#|${SHELL} /home/kal/Projects/old/RMIT/stow/stow-2.0.2/missing --run tar,g
+s,@am__tar@,|#_!!_#|${AMTAR} chof - "$$tardir",g
+s,@am__untar@,|#_!!_#|${AMTAR} xf -,g
+s,@PERL@,|#_!!_#|/usr/bin/perl,g
+s,@LIBOBJS@,|#_!!_#|,g
+s,@LTLIBOBJS@,|#_!!_#|,g
+:end
+s/|#_!!_#|//g
+CEOF
+fi # test -n "$CONFIG_FILES"
+
+
+for ac_tag in :F $CONFIG_FILES
+do
+ case $ac_tag in
+ :[FHLC]) ac_mode=$ac_tag; continue;;
+ esac
+ case $ac_mode$ac_tag in
+ :[FHL]*:*);;
+ :L* | :C*:*) { { echo "$as_me:$LINENO: error: Invalid tag $ac_tag." >&5
+echo "$as_me: error: Invalid tag $ac_tag." >&2;}
+ { (exit 1); exit 1; }; };;
+ :[FH]-) ac_tag=-:-;;
+ :[FH]*) ac_tag=$ac_tag:$ac_tag.in;;
+ esac
+ ac_save_IFS=$IFS
+ IFS=:
+ set x $ac_tag
+ IFS=$ac_save_IFS
+ shift
+ ac_file=$1
+ shift
+
+ case $ac_mode in
+ :L) ac_source=$1;;
+ :[FH])
+ ac_file_inputs=
+ for ac_f
+ do
+ case $ac_f in
+ -) ac_f="$tmp/stdin";;
+ *) # Look for the file first in the build tree, then in the source tree
+ # (if the path is not absolute). The absolute path cannot be DOS-style,
+ # because $ac_f cannot contain `:'.
+ test -f "$ac_f" ||
+ case $ac_f in
+ [\\/$]*) false;;
+ *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";;
+ esac ||
+ { { echo "$as_me:$LINENO: error: cannot find input file: $ac_f" >&5
+echo "$as_me: error: cannot find input file: $ac_f" >&2;}
+ { (exit 1); exit 1; }; };;
+ esac
+ ac_file_inputs="$ac_file_inputs $ac_f"
+ done
+
+ # Let's still pretend it is `configure' which instantiates (i.e., don't
+  # use $as_me); people would be surprised to read:
+ # /* config.h. Generated by config.status. */
+ configure_input="Generated from "`IFS=:
+ echo $* | sed 's|^[^:]*/||;s|:[^:]*/|, |g'`" by configure."
+ if test x"$ac_file" != x-; then
+ configure_input="$ac_file. $configure_input"
+ { echo "$as_me:$LINENO: creating $ac_file" >&5
+echo "$as_me: creating $ac_file" >&6;}
+ fi
+
+ case $ac_tag in
+ *:-:* | *:-) cat >"$tmp/stdin";;
+ esac
+ ;;
+ esac
+
+ ac_dir=`$as_dirname -- "$ac_file" ||
+$as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$ac_file" : 'X\(//\)[^/]' \| \
+ X"$ac_file" : 'X\(//\)$' \| \
+ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null ||
+echo X"$ac_file" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ { as_dir="$ac_dir"
+ case $as_dir in #(
+ -*) as_dir=./$as_dir;;
+ esac
+ test -d "$as_dir" || { $as_mkdir_p && mkdir -p "$as_dir"; } || {
+ as_dirs=
+ while :; do
+ case $as_dir in #(
+ *\'*) as_qdir=`echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #(
+ *) as_qdir=$as_dir;;
+ esac
+ as_dirs="'$as_qdir' $as_dirs"
+ as_dir=`$as_dirname -- "$as_dir" ||
+$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_dir" : 'X\(//\)[^/]' \| \
+ X"$as_dir" : 'X\(//\)$' \| \
+ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null ||
+echo X"$as_dir" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ test -d "$as_dir" && break
+ done
+ test -z "$as_dirs" || eval "mkdir $as_dirs"
+ } || test -d "$as_dir" || { { echo "$as_me:$LINENO: error: cannot create directory $as_dir" >&5
+echo "$as_me: error: cannot create directory $as_dir" >&2;}
+ { (exit 1); exit 1; }; }; }
+ ac_builddir=.
+
+case "$ac_dir" in
+.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
+*)
+ ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'`
+ # A ".." for each directory in $ac_dir_suffix.
+ ac_top_builddir_sub=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,/..,g;s,/,,'`
+ case $ac_top_builddir_sub in
+ "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
+ *) ac_top_build_prefix=$ac_top_builddir_sub/ ;;
+ esac ;;
+esac
+ac_abs_top_builddir=$ac_pwd
+ac_abs_builddir=$ac_pwd$ac_dir_suffix
+# for backward compatibility:
+ac_top_builddir=$ac_top_build_prefix
+
+case $srcdir in
+ .) # We are building in place.
+ ac_srcdir=.
+ ac_top_srcdir=$ac_top_builddir_sub
+ ac_abs_top_srcdir=$ac_pwd ;;
+ [\\/]* | ?:[\\/]* ) # Absolute name.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir
+ ac_abs_top_srcdir=$srcdir ;;
+ *) # Relative name.
+ ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_build_prefix$srcdir
+ ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
+esac
+ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix
+
+
+ case $ac_mode in
+ :F)
+ #
+ # CONFIG_FILE
+ #
+
+ case $INSTALL in
+ [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;;
+ *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;;
+ esac
+ ac_MKDIR_P=$MKDIR_P
+ case $MKDIR_P in
+ [\\/$]* | ?:[\\/]* ) ;;
+ */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;;
+ esac
+# If the template does not know about datarootdir, expand it.
+# FIXME: This hack should be removed a few years after 2.60.
+ac_datarootdir_hack=; ac_datarootdir_seen=
+
+case `sed -n '/datarootdir/ {
+ p
+ q
+}
+/@datadir@/p
+/@docdir@/p
+/@infodir@/p
+/@localedir@/p
+/@mandir@/p
+' $ac_file_inputs` in
+*datarootdir*) ac_datarootdir_seen=yes;;
+*@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*)
+ { echo "$as_me:$LINENO: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5
+echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;}
+ ac_datarootdir_hack='
+ s&@datadir@&${datarootdir}&g
+ s&@docdir@&${datarootdir}/doc/${PACKAGE_TARNAME}&g
+ s&@infodir@&${datarootdir}/info&g
+ s&@localedir@&${datarootdir}/locale&g
+ s&@mandir@&${datarootdir}/man&g
+ s&\${datarootdir}&${prefix}/share&g' ;;
+esac
+ sed "/^[ ]*VPATH[ ]*=/{
+s/:*\$(srcdir):*/:/
+s/:*\${srcdir}:*/:/
+s/:*@srcdir@:*/:/
+s/^\([^=]*=[ ]*\):*/\1/
+s/:*$//
+s/^[^=]*=[ ]*$//
+}
+
+:t
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b
+s&@configure_input@&$configure_input&;t t
+s&@top_builddir@&$ac_top_builddir_sub&;t t
+s&@srcdir@&$ac_srcdir&;t t
+s&@abs_srcdir@&$ac_abs_srcdir&;t t
+s&@top_srcdir@&$ac_top_srcdir&;t t
+s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t
+s&@builddir@&$ac_builddir&;t t
+s&@abs_builddir@&$ac_abs_builddir&;t t
+s&@abs_top_builddir@&$ac_abs_top_builddir&;t t
+s&@INSTALL@&$ac_INSTALL&;t t
+s&@MKDIR_P@&$ac_MKDIR_P&;t t
+$ac_datarootdir_hack
+" $ac_file_inputs | sed -f "$tmp/subs-1.sed" >$tmp/out
+
+test -z "$ac_datarootdir_hack$ac_datarootdir_seen" &&
+ { ac_out=`sed -n '/\${datarootdir}/p' "$tmp/out"`; test -n "$ac_out"; } &&
+ { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' "$tmp/out"`; test -z "$ac_out"; } &&
+ { echo "$as_me:$LINENO: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined." >&5
+echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined." >&2;}
+
+ rm -f "$tmp/stdin"
+ case $ac_file in
+ -) cat "$tmp/out"; rm -f "$tmp/out";;
+ *) rm -f "$ac_file"; mv "$tmp/out" $ac_file;;
+ esac
+ ;;
+
+
+
+ esac
+
+done # for ac_tag
+
+
+{ (exit 0); exit 0; }
diff --git a/configure b/configure
new file mode 100755
index 0000000..8e79ed4
--- /dev/null
+++ b/configure
@@ -0,0 +1,3297 @@
+#! /bin/sh
+# Guess values for system-dependent variables and create Makefiles.
+# Generated by GNU Autoconf 2.61 for stow 2.0.2.
+#
+# Report bugs to <bug-stow@gnu.org>.
+#
+# Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001,
+# 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+# This configure script is free software; the Free Software Foundation
+# gives unlimited permission to copy, distribute and modify it.
+## --------------------- ##
+## M4sh Initialization. ##
+## --------------------- ##
+
+# Be more Bourne compatible
+DUALCASE=1; export DUALCASE # for MKS sh
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+  # Zsh 3.x and 4.x perform word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else
+ case `(set -o) 2>/dev/null` in
+ *posix*) set -o posix ;;
+esac
+
+fi
+
+
+
+
+# PATH needs CR
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+# The user is always right.
+if test "${PATH_SEPARATOR+set}" != set; then
+ echo "#! /bin/sh" >conf$$.sh
+ echo "exit 0" >>conf$$.sh
+ chmod +x conf$$.sh
+ if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then
+ PATH_SEPARATOR=';'
+ else
+ PATH_SEPARATOR=:
+ fi
+ rm -f conf$$.sh
+fi
+
+# Support unset when possible.
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ as_unset=unset
+else
+ as_unset=false
+fi
+
+
+# IFS
+# We need space, tab and new line, in precisely that order. Quoting is
+# there to prevent editors from complaining about space-tab.
+# (If _AS_PATH_WALK were called with IFS unset, it would disable word
+# splitting by setting IFS to empty value.)
+as_nl='
+'
+IFS=" "" $as_nl"
+
+# Find who we are. Look in the path if we contain no directory separator.
+case $0 in
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break
+done
+IFS=$as_save_IFS
+
+ ;;
+esac
+# We did not find ourselves, most probably we were run as `sh COMMAND'
+# in which case we are not to be found in the path.
+if test "x$as_myself" = x; then
+ as_myself=$0
+fi
+if test ! -f "$as_myself"; then
+ echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2
+ { (exit 1); exit 1; }
+fi
+
+# Work around bugs in pre-3.0 UWIN ksh.
+for as_var in ENV MAIL MAILPATH
+do ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+done
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# NLS nuisances.
+for as_var in \
+ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
+ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \
+ LC_TELEPHONE LC_TIME
+do
+ if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then
+ eval $as_var=C; export $as_var
+ else
+ ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+ fi
+done
+
+# Required to use basename.
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+
+# Name of the executable.
+as_me=`$as_basename -- "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+echo X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+
+# CDPATH.
+$as_unset CDPATH
+
+
+if test "x$CONFIG_SHELL" = x; then
+ if (eval ":") 2>/dev/null; then
+ as_have_required=yes
+else
+ as_have_required=no
+fi
+
+ if test $as_have_required = yes && (eval ":
+(as_func_return () {
+ (exit \$1)
+}
+as_func_success () {
+ as_func_return 0
+}
+as_func_failure () {
+ as_func_return 1
+}
+as_func_ret_success () {
+ return 0
+}
+as_func_ret_failure () {
+ return 1
+}
+
+exitcode=0
+if as_func_success; then
+ :
+else
+ exitcode=1
+ echo as_func_success failed.
+fi
+
+if as_func_failure; then
+ exitcode=1
+ echo as_func_failure succeeded.
+fi
+
+if as_func_ret_success; then
+ :
+else
+ exitcode=1
+ echo as_func_ret_success failed.
+fi
+
+if as_func_ret_failure; then
+ exitcode=1
+ echo as_func_ret_failure succeeded.
+fi
+
+if ( set x; as_func_ret_success y && test x = \"\$1\" ); then
+ :
+else
+ exitcode=1
+ echo positional parameters were not saved.
+fi
+
+test \$exitcode = 0) || { (exit 1); exit 1; }
+
+(
+ as_lineno_1=\$LINENO
+ as_lineno_2=\$LINENO
+ test \"x\$as_lineno_1\" != \"x\$as_lineno_2\" &&
+ test \"x\`expr \$as_lineno_1 + 1\`\" = \"x\$as_lineno_2\") || { (exit 1); exit 1; }
+") 2> /dev/null; then
+ :
+else
+ as_candidate_shells=
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ case $as_dir in
+ /*)
+ for as_base in sh bash ksh sh5; do
+ as_candidate_shells="$as_candidate_shells $as_dir/$as_base"
+ done;;
+ esac
+done
+IFS=$as_save_IFS
+
+
+ for as_shell in $as_candidate_shells $SHELL; do
+ # Try only shells that exist, to save several forks.
+ if { test -f "$as_shell" || test -f "$as_shell.exe"; } &&
+ { ("$as_shell") 2> /dev/null <<\_ASEOF
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+  # Zsh 3.x and 4.x perform word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else
+ case `(set -o) 2>/dev/null` in
+ *posix*) set -o posix ;;
+esac
+
+fi
+
+
+:
+_ASEOF
+}; then
+ CONFIG_SHELL=$as_shell
+ as_have_required=yes
+ if { "$as_shell" 2> /dev/null <<\_ASEOF
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+  # Zsh 3.x and 4.x perform word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else
+ case `(set -o) 2>/dev/null` in
+ *posix*) set -o posix ;;
+esac
+
+fi
+
+
+:
+(as_func_return () {
+ (exit $1)
+}
+as_func_success () {
+ as_func_return 0
+}
+as_func_failure () {
+ as_func_return 1
+}
+as_func_ret_success () {
+ return 0
+}
+as_func_ret_failure () {
+ return 1
+}
+
+exitcode=0
+if as_func_success; then
+ :
+else
+ exitcode=1
+ echo as_func_success failed.
+fi
+
+if as_func_failure; then
+ exitcode=1
+ echo as_func_failure succeeded.
+fi
+
+if as_func_ret_success; then
+ :
+else
+ exitcode=1
+ echo as_func_ret_success failed.
+fi
+
+if as_func_ret_failure; then
+ exitcode=1
+ echo as_func_ret_failure succeeded.
+fi
+
+if ( set x; as_func_ret_success y && test x = "$1" ); then
+ :
+else
+ exitcode=1
+ echo positional parameters were not saved.
+fi
+
+test $exitcode = 0) || { (exit 1); exit 1; }
+
+(
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x`expr $as_lineno_1 + 1`" = "x$as_lineno_2") || { (exit 1); exit 1; }
+
+_ASEOF
+}; then
+ break
+fi
+
+fi
+
+ done
+
+ if test "x$CONFIG_SHELL" != x; then
+ for as_var in BASH_ENV ENV
+ do ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+ done
+ export CONFIG_SHELL
+ exec "$CONFIG_SHELL" "$as_myself" ${1+"$@"}
+fi
+
+
+ if test $as_have_required = no; then
+ echo This script requires a shell more modern than all the
+ echo shells that I found on your system. Please install a
+ echo modern shell, or manually run the script under such a
+ echo shell if you do have one.
+ { (exit 1); exit 1; }
+fi
+
+
+fi
+
+fi
+
+
+
+(eval "as_func_return () {
+ (exit \$1)
+}
+as_func_success () {
+ as_func_return 0
+}
+as_func_failure () {
+ as_func_return 1
+}
+as_func_ret_success () {
+ return 0
+}
+as_func_ret_failure () {
+ return 1
+}
+
+exitcode=0
+if as_func_success; then
+ :
+else
+ exitcode=1
+ echo as_func_success failed.
+fi
+
+if as_func_failure; then
+ exitcode=1
+ echo as_func_failure succeeded.
+fi
+
+if as_func_ret_success; then
+ :
+else
+ exitcode=1
+ echo as_func_ret_success failed.
+fi
+
+if as_func_ret_failure; then
+ exitcode=1
+ echo as_func_ret_failure succeeded.
+fi
+
+if ( set x; as_func_ret_success y && test x = \"\$1\" ); then
+ :
+else
+ exitcode=1
+ echo positional parameters were not saved.
+fi
+
+test \$exitcode = 0") || {
+ echo No shell found that supports shell functions.
+ echo Please tell autoconf@gnu.org about your system,
+ echo including any error possibly output before this
+ echo message
+}
+
+
+
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x`expr $as_lineno_1 + 1`" = "x$as_lineno_2" || {
+
+ # Create $as_me.lineno as a copy of $as_myself, but with $LINENO
+ # uniformly replaced by the line number. The first 'sed' inserts a
+ # line-number line after each line using $LINENO; the second 'sed'
+ # does the real work. The second script uses 'N' to pair each
+ # line-number line with the line containing $LINENO, and appends
+ # trailing '-' during substitution so that $LINENO is not a special
+ # case at line end.
+ # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the
+ # scripts with optimization help from Paolo Bonzini. Blame Lee
+ # E. McMahon (1931-1989) for sed's syntax. :-)
+ sed -n '
+ p
+ /[$]LINENO/=
+ ' <$as_myself |
+ sed '
+ s/[$]LINENO.*/&-/
+ t lineno
+ b
+ :lineno
+ N
+ :loop
+ s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/
+ t loop
+ s/-\n.*//
+ ' >$as_me.lineno &&
+ chmod +x "$as_me.lineno" ||
+ { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2
+ { (exit 1); exit 1; }; }
+
+  # Don't try to exec as it changes $[0], causing all sorts of problems
+  # (the dirname of $[0] is not the place where we might find the
+  # original, and so on.  Autoconf is especially sensitive to this).
+ . "./$as_me.lineno"
+ # Exit status is that of the last command.
+ exit
+}
+
+
+if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
+ as_dirname=dirname
+else
+ as_dirname=false
+fi
+
+ECHO_C= ECHO_N= ECHO_T=
+case `echo -n x` in
+-n*)
+ case `echo 'x\c'` in
+ *c*) ECHO_T=' ';; # ECHO_T is single tab character.
+ *) ECHO_C='\c';;
+ esac;;
+*)
+ ECHO_N='-n';;
+esac
+
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+rm -f conf$$ conf$$.exe conf$$.file
+if test -d conf$$.dir; then
+ rm -f conf$$.dir/conf$$.file
+else
+ rm -f conf$$.dir
+ mkdir conf$$.dir
+fi
+echo >conf$$.file
+if ln -s conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s='ln -s'
+ # ... but there are two gotchas:
+ # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
+ # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
+ # In both cases, we have to default to `cp -p'.
+ ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
+ as_ln_s='cp -p'
+elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+else
+ as_ln_s='cp -p'
+fi
+rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
+rmdir conf$$.dir 2>/dev/null
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p=:
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+if test -x / >/dev/null 2>&1; then
+ as_test_x='test -x'
+else
+ if ls -dL / >/dev/null 2>&1; then
+ as_ls_L_option=L
+ else
+ as_ls_L_option=
+ fi
+ as_test_x='
+ eval sh -c '\''
+ if test -d "$1"; then
+ test -d "$1/.";
+ else
+ case $1 in
+ -*)set "./$1";;
+ esac;
+ case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in
+ ???[sx]*):;;*)false;;esac;fi
+ '\'' sh
+ '
+fi
+as_executable_p=$as_test_x
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+
+exec 7<&0 </dev/null 6>&1
+
+# Name of the host.
+# hostname on some systems (SVR3.2, Linux) returns a bogus exit status,
+# so uname gets run too.
+ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q`
+
+#
+# Initializations.
+#
+ac_default_prefix=/usr/local
+ac_clean_files=
+ac_config_libobj_dir=.
+LIBOBJS=
+cross_compiling=no
+subdirs=
+MFLAGS=
+MAKEFLAGS=
+SHELL=${CONFIG_SHELL-/bin/sh}
+
+# Identity of this package.
+PACKAGE_NAME='stow'
+PACKAGE_TARNAME='stow'
+PACKAGE_VERSION='2.0.2'
+PACKAGE_STRING='stow 2.0.2'
+PACKAGE_BUGREPORT='bug-stow@gnu.org'
+
+ac_subst_vars='SHELL
+PATH_SEPARATOR
+PACKAGE_NAME
+PACKAGE_TARNAME
+PACKAGE_VERSION
+PACKAGE_STRING
+PACKAGE_BUGREPORT
+exec_prefix
+prefix
+program_transform_name
+bindir
+sbindir
+libexecdir
+datarootdir
+datadir
+sysconfdir
+sharedstatedir
+localstatedir
+includedir
+oldincludedir
+docdir
+infodir
+htmldir
+dvidir
+pdfdir
+psdir
+libdir
+localedir
+mandir
+DEFS
+ECHO_C
+ECHO_N
+ECHO_T
+LIBS
+build_alias
+host_alias
+target_alias
+INSTALL_PROGRAM
+INSTALL_SCRIPT
+INSTALL_DATA
+am__isrc
+CYGPATH_W
+PACKAGE
+VERSION
+ACLOCAL
+AUTOCONF
+AUTOMAKE
+AUTOHEADER
+MAKEINFO
+install_sh
+STRIP
+INSTALL_STRIP_PROGRAM
+mkdir_p
+AWK
+SET_MAKE
+am__leading_dot
+AMTAR
+am__tar
+am__untar
+PERL
+LIBOBJS
+LTLIBOBJS'
+ac_subst_files=''
+ ac_precious_vars='build_alias
+host_alias
+target_alias'
+
+
+# Initialize some variables set by options.
+ac_init_help=
+ac_init_version=false
+# The variables have the same names as the options, with
+# dashes changed to underlines.
+cache_file=/dev/null
+exec_prefix=NONE
+no_create=
+no_recursion=
+prefix=NONE
+program_prefix=NONE
+program_suffix=NONE
+program_transform_name=s,x,x,
+silent=
+site=
+srcdir=
+verbose=
+x_includes=NONE
+x_libraries=NONE
+
+# Installation directory options.
+# These are left unexpanded so users can "make install exec_prefix=/foo"
+# and all the variables that are supposed to be based on exec_prefix
+# by default will actually change.
+# Use braces instead of parens because sh, perl, etc. also accept them.
+# (The list follows the same order as the GNU Coding Standards.)
+bindir='${exec_prefix}/bin'
+sbindir='${exec_prefix}/sbin'
+libexecdir='${exec_prefix}/libexec'
+datarootdir='${prefix}/share'
+datadir='${datarootdir}'
+sysconfdir='${prefix}/etc'
+sharedstatedir='${prefix}/com'
+localstatedir='${prefix}/var'
+includedir='${prefix}/include'
+oldincludedir='/usr/include'
+docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
+infodir='${datarootdir}/info'
+htmldir='${docdir}'
+dvidir='${docdir}'
+pdfdir='${docdir}'
+psdir='${docdir}'
+libdir='${exec_prefix}/lib'
+localedir='${datarootdir}/locale'
+mandir='${datarootdir}/man'
+
+ac_prev=
+ac_dashdash=
+for ac_option
+do
+ # If the previous option needs an argument, assign it.
+ if test -n "$ac_prev"; then
+ eval $ac_prev=\$ac_option
+ ac_prev=
+ continue
+ fi
+
+ case $ac_option in
+ *=*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;;
+ *) ac_optarg=yes ;;
+ esac
+
+ # Accept the important Cygnus configure options, so we can diagnose typos.
+
+ case $ac_dashdash$ac_option in
+ --)
+ ac_dashdash=yes ;;
+
+ -bindir | --bindir | --bindi | --bind | --bin | --bi)
+ ac_prev=bindir ;;
+ -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*)
+ bindir=$ac_optarg ;;
+
+ -build | --build | --buil | --bui | --bu)
+ ac_prev=build_alias ;;
+ -build=* | --build=* | --buil=* | --bui=* | --bu=*)
+ build_alias=$ac_optarg ;;
+
+ -cache-file | --cache-file | --cache-fil | --cache-fi \
+ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c)
+ ac_prev=cache_file ;;
+ -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \
+ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*)
+ cache_file=$ac_optarg ;;
+
+ --config-cache | -C)
+ cache_file=config.cache ;;
+
+ -datadir | --datadir | --datadi | --datad)
+ ac_prev=datadir ;;
+ -datadir=* | --datadir=* | --datadi=* | --datad=*)
+ datadir=$ac_optarg ;;
+
+ -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \
+ | --dataroo | --dataro | --datar)
+ ac_prev=datarootdir ;;
+ -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \
+ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*)
+ datarootdir=$ac_optarg ;;
+
+ -disable-* | --disable-*)
+ ac_feature=`expr "x$ac_option" : 'x-*disable-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_feature" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid feature name: $ac_feature" >&2
+ { (exit 1); exit 1; }; }
+ ac_feature=`echo $ac_feature | sed 's/[-.]/_/g'`
+ eval enable_$ac_feature=no ;;
+
+ -docdir | --docdir | --docdi | --doc | --do)
+ ac_prev=docdir ;;
+ -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*)
+ docdir=$ac_optarg ;;
+
+ -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv)
+ ac_prev=dvidir ;;
+ -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*)
+ dvidir=$ac_optarg ;;
+
+ -enable-* | --enable-*)
+ ac_feature=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_feature" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid feature name: $ac_feature" >&2
+ { (exit 1); exit 1; }; }
+ ac_feature=`echo $ac_feature | sed 's/[-.]/_/g'`
+ eval enable_$ac_feature=\$ac_optarg ;;
+
+ -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \
+ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \
+ | --exec | --exe | --ex)
+ ac_prev=exec_prefix ;;
+ -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \
+ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \
+ | --exec=* | --exe=* | --ex=*)
+ exec_prefix=$ac_optarg ;;
+
+ -gas | --gas | --ga | --g)
+ # Obsolete; use --with-gas.
+ with_gas=yes ;;
+
+ -help | --help | --hel | --he | -h)
+ ac_init_help=long ;;
+ -help=r* | --help=r* | --hel=r* | --he=r* | -hr*)
+ ac_init_help=recursive ;;
+ -help=s* | --help=s* | --hel=s* | --he=s* | -hs*)
+ ac_init_help=short ;;
+
+ -host | --host | --hos | --ho)
+ ac_prev=host_alias ;;
+ -host=* | --host=* | --hos=* | --ho=*)
+ host_alias=$ac_optarg ;;
+
+ -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht)
+ ac_prev=htmldir ;;
+ -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \
+ | --ht=*)
+ htmldir=$ac_optarg ;;
+
+ -includedir | --includedir | --includedi | --included | --include \
+ | --includ | --inclu | --incl | --inc)
+ ac_prev=includedir ;;
+ -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \
+ | --includ=* | --inclu=* | --incl=* | --inc=*)
+ includedir=$ac_optarg ;;
+
+ -infodir | --infodir | --infodi | --infod | --info | --inf)
+ ac_prev=infodir ;;
+ -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)
+ infodir=$ac_optarg ;;
+
+ -libdir | --libdir | --libdi | --libd)
+ ac_prev=libdir ;;
+ -libdir=* | --libdir=* | --libdi=* | --libd=*)
+ libdir=$ac_optarg ;;
+
+ -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \
+ | --libexe | --libex | --libe)
+ ac_prev=libexecdir ;;
+ -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \
+ | --libexe=* | --libex=* | --libe=*)
+ libexecdir=$ac_optarg ;;
+
+ -localedir | --localedir | --localedi | --localed | --locale)
+ ac_prev=localedir ;;
+ -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*)
+ localedir=$ac_optarg ;;
+
+ -localstatedir | --localstatedir | --localstatedi | --localstated \
+ | --localstate | --localstat | --localsta | --localst | --locals)
+ ac_prev=localstatedir ;;
+ -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \
+ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*)
+ localstatedir=$ac_optarg ;;
+
+ -mandir | --mandir | --mandi | --mand | --man | --ma | --m)
+ ac_prev=mandir ;;
+ -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*)
+ mandir=$ac_optarg ;;
+
+ -nfp | --nfp | --nf)
+ # Obsolete; use --without-fp.
+ with_fp=no ;;
+
+ -no-create | --no-create | --no-creat | --no-crea | --no-cre \
+ | --no-cr | --no-c | -n)
+ no_create=yes ;;
+
+ -no-recursion | --no-recursion | --no-recursio | --no-recursi \
+ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r)
+ no_recursion=yes ;;
+
+ -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \
+ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \
+ | --oldin | --oldi | --old | --ol | --o)
+ ac_prev=oldincludedir ;;
+ -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \
+ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \
+ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*)
+ oldincludedir=$ac_optarg ;;
+
+ -prefix | --prefix | --prefi | --pref | --pre | --pr | --p)
+ ac_prev=prefix ;;
+ -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*)
+ prefix=$ac_optarg ;;
+
+ -program-prefix | --program-prefix | --program-prefi | --program-pref \
+ | --program-pre | --program-pr | --program-p)
+ ac_prev=program_prefix ;;
+ -program-prefix=* | --program-prefix=* | --program-prefi=* \
+ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*)
+ program_prefix=$ac_optarg ;;
+
+ -program-suffix | --program-suffix | --program-suffi | --program-suff \
+ | --program-suf | --program-su | --program-s)
+ ac_prev=program_suffix ;;
+ -program-suffix=* | --program-suffix=* | --program-suffi=* \
+ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*)
+ program_suffix=$ac_optarg ;;
+
+ -program-transform-name | --program-transform-name \
+ | --program-transform-nam | --program-transform-na \
+ | --program-transform-n | --program-transform- \
+ | --program-transform | --program-transfor \
+ | --program-transfo | --program-transf \
+ | --program-trans | --program-tran \
+ | --progr-tra | --program-tr | --program-t)
+ ac_prev=program_transform_name ;;
+ -program-transform-name=* | --program-transform-name=* \
+ | --program-transform-nam=* | --program-transform-na=* \
+ | --program-transform-n=* | --program-transform-=* \
+ | --program-transform=* | --program-transfor=* \
+ | --program-transfo=* | --program-transf=* \
+ | --program-trans=* | --program-tran=* \
+ | --progr-tra=* | --program-tr=* | --program-t=*)
+ program_transform_name=$ac_optarg ;;
+
+ -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd)
+ ac_prev=pdfdir ;;
+ -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*)
+ pdfdir=$ac_optarg ;;
+
+ -psdir | --psdir | --psdi | --psd | --ps)
+ ac_prev=psdir ;;
+ -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*)
+ psdir=$ac_optarg ;;
+
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ silent=yes ;;
+
+ -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
+ ac_prev=sbindir ;;
+ -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
+ | --sbi=* | --sb=*)
+ sbindir=$ac_optarg ;;
+
+ -sharedstatedir | --sharedstatedir | --sharedstatedi \
+ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \
+ | --sharedst | --shareds | --shared | --share | --shar \
+ | --sha | --sh)
+ ac_prev=sharedstatedir ;;
+ -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \
+ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \
+ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \
+ | --sha=* | --sh=*)
+ sharedstatedir=$ac_optarg ;;
+
+ -site | --site | --sit)
+ ac_prev=site ;;
+ -site=* | --site=* | --sit=*)
+ site=$ac_optarg ;;
+
+ -srcdir | --srcdir | --srcdi | --srcd | --src | --sr)
+ ac_prev=srcdir ;;
+ -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*)
+ srcdir=$ac_optarg ;;
+
+ -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \
+ | --syscon | --sysco | --sysc | --sys | --sy)
+ ac_prev=sysconfdir ;;
+ -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \
+ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*)
+ sysconfdir=$ac_optarg ;;
+
+ -target | --target | --targe | --targ | --tar | --ta | --t)
+ ac_prev=target_alias ;;
+ -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)
+ target_alias=$ac_optarg ;;
+
+ -v | -verbose | --verbose | --verbos | --verbo | --verb)
+ verbose=yes ;;
+
+ -version | --version | --versio | --versi | --vers | -V)
+ ac_init_version=: ;;
+
+ -with-* | --with-*)
+ ac_package=`expr "x$ac_option" : 'x-*with-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_package" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid package name: $ac_package" >&2
+ { (exit 1); exit 1; }; }
+ ac_package=`echo $ac_package | sed 's/[-.]/_/g'`
+ eval with_$ac_package=\$ac_optarg ;;
+
+ -without-* | --without-*)
+ ac_package=`expr "x$ac_option" : 'x-*without-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_package" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid package name: $ac_package" >&2
+ { (exit 1); exit 1; }; }
+ ac_package=`echo $ac_package | sed 's/[-.]/_/g'`
+ eval with_$ac_package=no ;;
+
+ --x)
+ # Obsolete; use --with-x.
+ with_x=yes ;;
+
+ -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \
+ | --x-incl | --x-inc | --x-in | --x-i)
+ ac_prev=x_includes ;;
+ -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \
+ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*)
+ x_includes=$ac_optarg ;;
+
+ -x-libraries | --x-libraries | --x-librarie | --x-librari \
+ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l)
+ ac_prev=x_libraries ;;
+ -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \
+ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*)
+ x_libraries=$ac_optarg ;;
+
+ -*) { echo "$as_me: error: unrecognized option: $ac_option
+Try \`$0 --help' for more information." >&2
+ { (exit 1); exit 1; }; }
+ ;;
+
+ *=*)
+ ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_envvar" : ".*[^_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid variable name: $ac_envvar" >&2
+ { (exit 1); exit 1; }; }
+ eval $ac_envvar=\$ac_optarg
+ export $ac_envvar ;;
+
+ *)
+ # FIXME: should be removed in autoconf 3.0.
+ echo "$as_me: WARNING: you should use --build, --host, --target" >&2
+ expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ echo "$as_me: WARNING: invalid host type: $ac_option" >&2
+ : ${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}
+ ;;
+
+ esac
+done
+
+if test -n "$ac_prev"; then
+ ac_option=--`echo $ac_prev | sed 's/_/-/g'`
+ { echo "$as_me: error: missing argument to $ac_option" >&2
+ { (exit 1); exit 1; }; }
+fi
+
+# Be sure to have absolute directory names.
+for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \
+ datadir sysconfdir sharedstatedir localstatedir includedir \
+ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
+ libdir localedir mandir
+do
+ eval ac_val=\$$ac_var
+ case $ac_val in
+ [\\/$]* | ?:[\\/]* ) continue;;
+ NONE | '' ) case $ac_var in *prefix ) continue;; esac;;
+ esac
+ { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2
+ { (exit 1); exit 1; }; }
+done
+
+# There might be people who depend on the old broken behavior: `$host'
+# used to hold the argument of --host etc.
+# FIXME: To remove some day.
+build=$build_alias
+host=$host_alias
+target=$target_alias
+
+# FIXME: To remove some day.
+if test "x$host_alias" != x; then
+ if test "x$build_alias" = x; then
+ cross_compiling=maybe
+ echo "$as_me: WARNING: If you wanted to set the --build type, don't use --host.
+ If a cross compiler is detected then cross compile mode will be used." >&2
+ elif test "x$build_alias" != "x$host_alias"; then
+ cross_compiling=yes
+ fi
+fi
+
+ac_tool_prefix=
+test -n "$host_alias" && ac_tool_prefix=$host_alias-
+
+test "$silent" = yes && exec 6>/dev/null
+
+
+ac_pwd=`pwd` && test -n "$ac_pwd" &&
+ac_ls_di=`ls -di .` &&
+ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` ||
+ { echo "$as_me: error: Working directory cannot be determined" >&2
+ { (exit 1); exit 1; }; }
+test "X$ac_ls_di" = "X$ac_pwd_ls_di" ||
+ { echo "$as_me: error: pwd does not report name of working directory" >&2
+ { (exit 1); exit 1; }; }
+
+
+# Find the source files, if location was not specified.
+if test -z "$srcdir"; then
+ ac_srcdir_defaulted=yes
+ # Try the directory containing this script, then the parent directory.
+ ac_confdir=`$as_dirname -- "$0" ||
+$as_expr X"$0" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$0" : 'X\(//\)[^/]' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+echo X"$0" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ srcdir=$ac_confdir
+ if test ! -r "$srcdir/$ac_unique_file"; then
+ srcdir=..
+ fi
+else
+ ac_srcdir_defaulted=no
+fi
+if test ! -r "$srcdir/$ac_unique_file"; then
+ test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .."
+ { echo "$as_me: error: cannot find sources ($ac_unique_file) in $srcdir" >&2
+ { (exit 1); exit 1; }; }
+fi
+ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work"
+ac_abs_confdir=`(
+ cd "$srcdir" && test -r "./$ac_unique_file" || { echo "$as_me: error: $ac_msg" >&2
+ { (exit 1); exit 1; }; }
+ pwd)`
+# When building in place, set srcdir=.
+if test "$ac_abs_confdir" = "$ac_pwd"; then
+ srcdir=.
+fi
+# Remove unnecessary trailing slashes from srcdir.
+# Double slashes in file names in object file debugging info
+# mess up M-x gdb in Emacs.
+case $srcdir in
+*/) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;;
+esac
+for ac_var in $ac_precious_vars; do
+ eval ac_env_${ac_var}_set=\${${ac_var}+set}
+ eval ac_env_${ac_var}_value=\$${ac_var}
+ eval ac_cv_env_${ac_var}_set=\${${ac_var}+set}
+ eval ac_cv_env_${ac_var}_value=\$${ac_var}
+done
+
+#
+# Report the --help message.
+#
+if test "$ac_init_help" = "long"; then
+ # Omit some internal or obsolete options to make the list less imposing.
+ # This message is too long to be a string in the A/UX 3.1 sh.
+ cat <<_ACEOF
+\`configure' configures stow 2.0.2 to adapt to many kinds of systems.
+
+Usage: $0 [OPTION]... [VAR=VALUE]...
+
+To assign environment variables (e.g., CC, CFLAGS...), specify them as
+VAR=VALUE. See below for descriptions of some of the useful variables.
+
+Defaults for the options are specified in brackets.
+
+Configuration:
+ -h, --help display this help and exit
+ --help=short display options specific to this package
+ --help=recursive display the short help of all the included packages
+ -V, --version display version information and exit
+ -q, --quiet, --silent do not print \`checking...' messages
+ --cache-file=FILE cache test results in FILE [disabled]
+ -C, --config-cache alias for \`--cache-file=config.cache'
+ -n, --no-create do not create output files
+ --srcdir=DIR find the sources in DIR [configure dir or \`..']
+
+Installation directories:
+ --prefix=PREFIX install architecture-independent files in PREFIX
+ [$ac_default_prefix]
+ --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
+ [PREFIX]
+
+By default, \`make install' will install all the files in
+\`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify
+an installation prefix other than \`$ac_default_prefix' using \`--prefix',
+for instance \`--prefix=\$HOME'.
+
+For better control, use the options below.
+
+Fine tuning of the installation directories:
+ --bindir=DIR user executables [EPREFIX/bin]
+ --sbindir=DIR system admin executables [EPREFIX/sbin]
+ --libexecdir=DIR program executables [EPREFIX/libexec]
+ --sysconfdir=DIR read-only single-machine data [PREFIX/etc]
+ --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com]
+ --localstatedir=DIR modifiable single-machine data [PREFIX/var]
+ --libdir=DIR object code libraries [EPREFIX/lib]
+ --includedir=DIR C header files [PREFIX/include]
+ --oldincludedir=DIR C header files for non-gcc [/usr/include]
+ --datarootdir=DIR read-only arch.-independent data root [PREFIX/share]
+ --datadir=DIR read-only architecture-independent data [DATAROOTDIR]
+ --infodir=DIR info documentation [DATAROOTDIR/info]
+ --localedir=DIR locale-dependent data [DATAROOTDIR/locale]
+ --mandir=DIR man documentation [DATAROOTDIR/man]
+ --docdir=DIR documentation root [DATAROOTDIR/doc/stow]
+ --htmldir=DIR html documentation [DOCDIR]
+ --dvidir=DIR dvi documentation [DOCDIR]
+ --pdfdir=DIR pdf documentation [DOCDIR]
+ --psdir=DIR ps documentation [DOCDIR]
+_ACEOF
+
+ cat <<\_ACEOF
+
+Program names:
+ --program-prefix=PREFIX prepend PREFIX to installed program names
+ --program-suffix=SUFFIX append SUFFIX to installed program names
+ --program-transform-name=PROGRAM run sed PROGRAM on installed program names
+_ACEOF
+fi
+
+if test -n "$ac_init_help"; then
+ case $ac_init_help in
+ short | recursive ) echo "Configuration of stow 2.0.2:";;
+ esac
+ cat <<\_ACEOF
+
+Report bugs to <bug-stow@gnu.org>.
+_ACEOF
+ac_status=$?
+fi
+
+if test "$ac_init_help" = "recursive"; then
+ # If there are subdirs, report their specific --help.
+ for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue
+ test -d "$ac_dir" || continue
+ ac_builddir=.
+
+case "$ac_dir" in
+.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
+*)
+ ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'`
+ # A ".." for each directory in $ac_dir_suffix.
+ ac_top_builddir_sub=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,/..,g;s,/,,'`
+ case $ac_top_builddir_sub in
+ "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
+ *) ac_top_build_prefix=$ac_top_builddir_sub/ ;;
+ esac ;;
+esac
+ac_abs_top_builddir=$ac_pwd
+ac_abs_builddir=$ac_pwd$ac_dir_suffix
+# for backward compatibility:
+ac_top_builddir=$ac_top_build_prefix
+
+case $srcdir in
+ .) # We are building in place.
+ ac_srcdir=.
+ ac_top_srcdir=$ac_top_builddir_sub
+ ac_abs_top_srcdir=$ac_pwd ;;
+ [\\/]* | ?:[\\/]* ) # Absolute name.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir
+ ac_abs_top_srcdir=$srcdir ;;
+ *) # Relative name.
+ ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_build_prefix$srcdir
+ ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
+esac
+ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix
+
+ cd "$ac_dir" || { ac_status=$?; continue; }
+  # Check for a guest configure script.
+ if test -f "$ac_srcdir/configure.gnu"; then
+ echo &&
+ $SHELL "$ac_srcdir/configure.gnu" --help=recursive
+ elif test -f "$ac_srcdir/configure"; then
+ echo &&
+ $SHELL "$ac_srcdir/configure" --help=recursive
+ else
+ echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2
+ fi || ac_status=$?
+ cd "$ac_pwd" || { ac_status=$?; break; }
+ done
+fi
+
+test -n "$ac_init_help" && exit $ac_status
+if $ac_init_version; then
+ cat <<\_ACEOF
+stow configure 2.0.2
+generated by GNU Autoconf 2.61
+
+Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001,
+2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+This configure script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it.
+_ACEOF
+ exit
+fi
+cat >config.log <<_ACEOF
+This file contains any messages produced by compilers while
+running configure, to aid debugging if configure makes a mistake.
+
+It was created by stow $as_me 2.0.2, which was
+generated by GNU Autoconf 2.61. Invocation command line was
+
+ $ $0 $@
+
+_ACEOF
+exec 5>>config.log
+{
+cat <<_ASUNAME
+## --------- ##
+## Platform. ##
+## --------- ##
+
+hostname = `(hostname || uname -n) 2>/dev/null | sed 1q`
+uname -m = `(uname -m) 2>/dev/null || echo unknown`
+uname -r = `(uname -r) 2>/dev/null || echo unknown`
+uname -s = `(uname -s) 2>/dev/null || echo unknown`
+uname -v = `(uname -v) 2>/dev/null || echo unknown`
+
+/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown`
+/bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown`
+
+/bin/arch = `(/bin/arch) 2>/dev/null || echo unknown`
+/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown`
+/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown`
+/usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown`
+/bin/machine = `(/bin/machine) 2>/dev/null || echo unknown`
+/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown`
+/bin/universe = `(/bin/universe) 2>/dev/null || echo unknown`
+
+_ASUNAME
+
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ echo "PATH: $as_dir"
+done
+IFS=$as_save_IFS
+
+} >&5
+
+cat >&5 <<_ACEOF
+
+
+## ----------- ##
+## Core tests. ##
+## ----------- ##
+
+_ACEOF
+
+
+# Keep a trace of the command line.
+# Strip out --no-create and --no-recursion so they do not pile up.
+# Strip out --silent because we don't want to record it for future runs.
+# Also quote any args containing shell meta-characters.
+# Make two passes to allow for proper duplicate-argument suppression.
+ac_configure_args=
+ac_configure_args0=
+ac_configure_args1=
+ac_must_keep_next=false
+for ac_pass in 1 2
+do
+ for ac_arg
+ do
+ case $ac_arg in
+ -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ continue ;;
+ *\'*)
+ ac_arg=`echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ esac
+ case $ac_pass in
+ 1) ac_configure_args0="$ac_configure_args0 '$ac_arg'" ;;
+ 2)
+ ac_configure_args1="$ac_configure_args1 '$ac_arg'"
+ if test $ac_must_keep_next = true; then
+ ac_must_keep_next=false # Got value, back to normal.
+ else
+ case $ac_arg in
+ *=* | --config-cache | -C | -disable-* | --disable-* \
+ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \
+ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \
+ | -with-* | --with-* | -without-* | --without-* | --x)
+ case "$ac_configure_args0 " in
+ "$ac_configure_args1"*" '$ac_arg' "* ) continue ;;
+ esac
+ ;;
+ -* ) ac_must_keep_next=true ;;
+ esac
+ fi
+ ac_configure_args="$ac_configure_args '$ac_arg'"
+ ;;
+ esac
+ done
+done
+$as_unset ac_configure_args0 || test "${ac_configure_args0+set}" != set || { ac_configure_args0=; export ac_configure_args0; }
+$as_unset ac_configure_args1 || test "${ac_configure_args1+set}" != set || { ac_configure_args1=; export ac_configure_args1; }
+
+# When interrupted or exit'd, clean up temporary files, and complete
+# config.log.  We remove comments because the quotes in them would
+# cause problems or look ugly anyway.
+# WARNING: Use '\'' to represent an apostrophe within the trap.
+# WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug.
+trap 'exit_status=$?
+ # Save into config.log some information that might help in debugging.
+ {
+ echo
+
+ cat <<\_ASBOX
+## ---------------- ##
+## Cache variables. ##
+## ---------------- ##
+_ASBOX
+ echo
+ # The following way of writing the cache mishandles newlines in values,
+(
+ for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do
+ eval ac_val=\$$ac_var
+ case $ac_val in #(
+ *${as_nl}*)
+ case $ac_var in #(
+ *_cv_*) { echo "$as_me:$LINENO: WARNING: Cache variable $ac_var contains a newline." >&5
+echo "$as_me: WARNING: Cache variable $ac_var contains a newline." >&2;} ;;
+ esac
+ case $ac_var in #(
+ _ | IFS | as_nl) ;; #(
+ *) $as_unset $ac_var ;;
+ esac ;;
+ esac
+ done
+ (set) 2>&1 |
+ case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #(
+ *${as_nl}ac_space=\ *)
+ sed -n \
+ "s/'\''/'\''\\\\'\'''\''/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p"
+ ;; #(
+ *)
+ sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p"
+ ;;
+ esac |
+ sort
+)
+ echo
+
+ cat <<\_ASBOX
+## ----------------- ##
+## Output variables. ##
+## ----------------- ##
+_ASBOX
+ echo
+ for ac_var in $ac_subst_vars
+ do
+ eval ac_val=\$$ac_var
+ case $ac_val in
+ *\'\''*) ac_val=`echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;;
+ esac
+ echo "$ac_var='\''$ac_val'\''"
+ done | sort
+ echo
+
+ if test -n "$ac_subst_files"; then
+ cat <<\_ASBOX
+## ------------------- ##
+## File substitutions. ##
+## ------------------- ##
+_ASBOX
+ echo
+ for ac_var in $ac_subst_files
+ do
+ eval ac_val=\$$ac_var
+ case $ac_val in
+ *\'\''*) ac_val=`echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;;
+ esac
+ echo "$ac_var='\''$ac_val'\''"
+ done | sort
+ echo
+ fi
+
+ if test -s confdefs.h; then
+ cat <<\_ASBOX
+## ----------- ##
+## confdefs.h. ##
+## ----------- ##
+_ASBOX
+ echo
+ cat confdefs.h
+ echo
+ fi
+ test "$ac_signal" != 0 &&
+ echo "$as_me: caught signal $ac_signal"
+ echo "$as_me: exit $exit_status"
+ } >&5
+ rm -f core *.core core.conftest.* &&
+ rm -f -r conftest* confdefs* conf$$* $ac_clean_files &&
+ exit $exit_status
+' 0
+for ac_signal in 1 2 13 15; do
+ trap 'ac_signal='$ac_signal'; { (exit 1); exit 1; }' $ac_signal
+done
+ac_signal=0
+
+# confdefs.h avoids OS command line length limits that DEFS can exceed.
+rm -f -r conftest* confdefs.h
+
+# Predefined preprocessor variables.
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_NAME "$PACKAGE_NAME"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_TARNAME "$PACKAGE_TARNAME"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_VERSION "$PACKAGE_VERSION"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_STRING "$PACKAGE_STRING"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT"
+_ACEOF
+
+
+# Let the site file select an alternate cache file if it wants to.
+# Prefer explicitly selected file to automatically selected ones.
+if test -n "$CONFIG_SITE"; then
+ set x "$CONFIG_SITE"
+elif test "x$prefix" != xNONE; then
+ set x "$prefix/share/config.site" "$prefix/etc/config.site"
+else
+ set x "$ac_default_prefix/share/config.site" \
+ "$ac_default_prefix/etc/config.site"
+fi
+shift
+for ac_site_file
+do
+ if test -r "$ac_site_file"; then
+ { echo "$as_me:$LINENO: loading site script $ac_site_file" >&5
+echo "$as_me: loading site script $ac_site_file" >&6;}
+ sed 's/^/| /' "$ac_site_file" >&5
+ . "$ac_site_file"
+ fi
+done
+
+if test -r "$cache_file"; then
+ # Some versions of bash will fail to source /dev/null (special
+ # files actually), so we avoid doing that.
+ if test -f "$cache_file"; then
+ { echo "$as_me:$LINENO: loading cache $cache_file" >&5
+echo "$as_me: loading cache $cache_file" >&6;}
+ case $cache_file in
+ [\\/]* | ?:[\\/]* ) . "$cache_file";;
+ *) . "./$cache_file";;
+ esac
+ fi
+else
+ { echo "$as_me:$LINENO: creating cache $cache_file" >&5
+echo "$as_me: creating cache $cache_file" >&6;}
+ >$cache_file
+fi
+
+# Check that the precious variables saved in the cache have kept the same
+# value.
+ac_cache_corrupted=false
+for ac_var in $ac_precious_vars; do
+ eval ac_old_set=\$ac_cv_env_${ac_var}_set
+ eval ac_new_set=\$ac_env_${ac_var}_set
+ eval ac_old_val=\$ac_cv_env_${ac_var}_value
+ eval ac_new_val=\$ac_env_${ac_var}_value
+ case $ac_old_set,$ac_new_set in
+ set,)
+ { echo "$as_me:$LINENO: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5
+echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,set)
+ { echo "$as_me:$LINENO: error: \`$ac_var' was not set in the previous run" >&5
+echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,);;
+ *)
+ if test "x$ac_old_val" != "x$ac_new_val"; then
+ { echo "$as_me:$LINENO: error: \`$ac_var' has changed since the previous run:" >&5
+echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;}
+ { echo "$as_me:$LINENO: former value: $ac_old_val" >&5
+echo "$as_me: former value: $ac_old_val" >&2;}
+ { echo "$as_me:$LINENO: current value: $ac_new_val" >&5
+echo "$as_me: current value: $ac_new_val" >&2;}
+ ac_cache_corrupted=:
+ fi;;
+ esac
+ # Pass precious variables to config.status.
+ if test "$ac_new_set" = set; then
+ case $ac_new_val in
+ *\'*) ac_arg=$ac_var=`echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;;
+ *) ac_arg=$ac_var=$ac_new_val ;;
+ esac
+ case " $ac_configure_args " in
+ *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy.
+ *) ac_configure_args="$ac_configure_args '$ac_arg'" ;;
+ esac
+ fi
+done
+if $ac_cache_corrupted; then
+ { echo "$as_me:$LINENO: error: changes in the environment can compromise the build" >&5
+echo "$as_me: error: changes in the environment can compromise the build" >&2;}
+ { { echo "$as_me:$LINENO: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&5
+echo "$as_me: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+
+am__api_version='1.10'
+
+ac_aux_dir=
+for ac_dir in "$srcdir" "$srcdir/.." "$srcdir/../.."; do
+ if test -f "$ac_dir/install-sh"; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install-sh -c"
+ break
+ elif test -f "$ac_dir/install.sh"; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install.sh -c"
+ break
+ elif test -f "$ac_dir/shtool"; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/shtool install -c"
+ break
+ fi
+done
+if test -z "$ac_aux_dir"; then
+ { { echo "$as_me:$LINENO: error: cannot find install-sh or install.sh in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" >&5
+echo "$as_me: error: cannot find install-sh or install.sh in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+# These three variables are undocumented and unsupported,
+# and are intended to be withdrawn in a future Autoconf release.
+# They can cause serious problems if a builder's source tree is in a directory
+# whose full name contains unusual characters.
+ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var.
+ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var.
+ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var.
+
+
+# Find a good install program. We prefer a C program (faster),
+# so one script is as good as another. But avoid the broken or
+# incompatible versions:
+# SysV /etc/install, /usr/sbin/install
+# SunOS /usr/etc/install
+# IRIX /sbin/install
+# AIX /bin/install
+# AmigaOS /C/install, which installs bootblocks on floppy discs
+# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag
+# AFS /usr/afsws/bin/install, which mishandles nonexistent args
+# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff"
+# OS/2's system install, which has a completely different semantic
+# ./install, which can be erroneously created by make from ./install.sh.
+{ echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5
+echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6; }
+if test -z "$INSTALL"; then
+if test "${ac_cv_path_install+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ # Account for people who put trailing slashes in PATH elements.
+case $as_dir/ in
+ ./ | .// | /cC/* | \
+ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \
+ ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \
+ /usr/ucb/* ) ;;
+ *)
+ # OSF1 and SCO ODT 3.0 have their own names for install.
+ # Don't use installbsd from OSF since it installs stuff as root
+ # by default.
+ for ac_prog in ginstall scoinst install; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then
+ if test $ac_prog = install &&
+ grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # AIX install. It has an incompatible calling convention.
+ :
+ elif test $ac_prog = install &&
+ grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # program-specific install script used by HP pwplus--don't use.
+ :
+ else
+ ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c"
+ break 3
+ fi
+ fi
+ done
+ done
+ ;;
+esac
+done
+IFS=$as_save_IFS
+
+
+fi
+ if test "${ac_cv_path_install+set}" = set; then
+ INSTALL=$ac_cv_path_install
+ else
+ # As a last resort, use the slow shell script. Don't cache a
+ # value for INSTALL within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the value is a relative name.
+ INSTALL=$ac_install_sh
+ fi
+fi
+{ echo "$as_me:$LINENO: result: $INSTALL" >&5
+echo "${ECHO_T}$INSTALL" >&6; }
+
+# Use test -z because SunOS4 sh mishandles braces in ${var-val}.
+# It thinks the first close brace ends the variable substitution.
+test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}'
+
+test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}'
+
+test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644'
+
+{ echo "$as_me:$LINENO: checking whether build environment is sane" >&5
+echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
+# Just in case
+sleep 1
+echo timestamp > conftest.file
+# Do `set' in a subshell so we don't clobber the current shell's
+# arguments. Must try -L first in case configure is actually a
+# symlink; some systems play weird games with the mod time of symlinks
+# (eg FreeBSD returns the mod time of the symlink's containing
+# directory).
+if (
+ set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
+ if test "$*" = "X"; then
+ # -L didn't work.
+ set X `ls -t $srcdir/configure conftest.file`
+ fi
+ rm -f conftest.file
+ if test "$*" != "X $srcdir/configure conftest.file" \
+ && test "$*" != "X conftest.file $srcdir/configure"; then
+
+ # If neither matched, then we have a broken ls. This can happen
+ # if, for instance, CONFIG_SHELL is bash and it inherits a
+ # broken ls alias from the environment. This has actually
+ # happened. Such a system could not be considered "sane".
+ { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken
+alias in your environment" >&5
+echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken
+alias in your environment" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+
+ test "$2" = conftest.file
+ )
+then
+ # Ok.
+ :
+else
+ { { echo "$as_me:$LINENO: error: newly created file is older than distributed files!
+Check your system clock" >&5
+echo "$as_me: error: newly created file is older than distributed files!
+Check your system clock" >&2;}
+ { (exit 1); exit 1; }; }
+fi
+{ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6; }
+test "$program_prefix" != NONE &&
+ program_transform_name="s&^&$program_prefix&;$program_transform_name"
+# Use a double $ so make ignores it.
+test "$program_suffix" != NONE &&
+ program_transform_name="s&\$&$program_suffix&;$program_transform_name"
+# Double any \ or $. echo might interpret backslashes.
+# By default was `s,x,x', remove it if useless.
+cat <<\_ACEOF >conftest.sed
+s/[\\$]/&&/g;s/;s,x,x,$//
+_ACEOF
+program_transform_name=`echo $program_transform_name | sed -f conftest.sed`
+rm -f conftest.sed
+
+# expand $ac_aux_dir to an absolute path
+am_aux_dir=`cd $ac_aux_dir && pwd`
+
+test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing"
+# Use eval to expand $SHELL
+if eval "$MISSING --run true"; then
+ am_missing_run="$MISSING --run "
+else
+ am_missing_run=
+ { echo "$as_me:$LINENO: WARNING: \`missing' script is too old or missing" >&5
+echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;}
+fi
+
+{ echo "$as_me:$LINENO: checking for a thread-safe mkdir -p" >&5
+echo $ECHO_N "checking for a thread-safe mkdir -p... $ECHO_C" >&6; }
+if test -z "$MKDIR_P"; then
+ if test "${ac_cv_path_mkdir+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_prog in mkdir gmkdir; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; } || continue
+ case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #(
+ 'mkdir (GNU coreutils) '* | \
+ 'mkdir (coreutils) '* | \
+ 'mkdir (fileutils) '4.1*)
+ ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext
+ break 3;;
+ esac
+ done
+ done
+done
+IFS=$as_save_IFS
+
+fi
+
+ if test "${ac_cv_path_mkdir+set}" = set; then
+ MKDIR_P="$ac_cv_path_mkdir -p"
+ else
+ # As a last resort, use the slow shell script. Don't cache a
+ # value for MKDIR_P within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the value is a relative name.
+ test -d ./--version && rmdir ./--version
+ MKDIR_P="$ac_install_sh -d"
+ fi
+fi
+{ echo "$as_me:$LINENO: result: $MKDIR_P" >&5
+echo "${ECHO_T}$MKDIR_P" >&6; }
+
+mkdir_p="$MKDIR_P"
+case $mkdir_p in
+ [\\/$]* | ?:[\\/]*) ;;
+ */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;;
+esac
+
+for ac_prog in gawk mawk nawk awk
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; }
+if test "${ac_cv_prog_AWK+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$AWK"; then
+ ac_cv_prog_AWK="$AWK" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+ ac_cv_prog_AWK="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+IFS=$as_save_IFS
+
+fi
+fi
+AWK=$ac_cv_prog_AWK
+if test -n "$AWK"; then
+ { echo "$as_me:$LINENO: result: $AWK" >&5
+echo "${ECHO_T}$AWK" >&6; }
+else
+ { echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6; }
+fi
+
+
+ test -n "$AWK" && break
+done
+
+{ echo "$as_me:$LINENO: checking whether ${MAKE-make} sets \$(MAKE)" >&5
+echo $ECHO_N "checking whether ${MAKE-make} sets \$(MAKE)... $ECHO_C" >&6; }
+set x ${MAKE-make}; ac_make=`echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'`
+if { as_var=ac_cv_prog_make_${ac_make}_set; eval "test \"\${$as_var+set}\" = set"; }; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.make <<\_ACEOF
+SHELL = /bin/sh
+all:
+ @echo '@@@%%%=$(MAKE)=@@@%%%'
+_ACEOF
+# GNU make sometimes prints "make[1]: Entering...", which would confuse us.
+case `${MAKE-make} -f conftest.make 2>/dev/null` in
+ *@@@%%%=?*=@@@%%%*)
+ eval ac_cv_prog_make_${ac_make}_set=yes;;
+ *)
+ eval ac_cv_prog_make_${ac_make}_set=no;;
+esac
+rm -f conftest.make
+fi
+if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then
+ { echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6; }
+ SET_MAKE=
+else
+ { echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6; }
+ SET_MAKE="MAKE=${MAKE-make}"
+fi
+
+rm -rf .tst 2>/dev/null
+mkdir .tst 2>/dev/null
+if test -d .tst; then
+ am__leading_dot=.
+else
+ am__leading_dot=_
+fi
+rmdir .tst 2>/dev/null
+
+if test "`cd $srcdir && pwd`" != "`pwd`"; then
+ # Use -I$(srcdir) only when $(srcdir) != ., so that make's output
+ # is not polluted with repeated "-I."
+ am__isrc=' -I$(srcdir)'
+ # test to see if srcdir already configured
+ if test -f $srcdir/config.status; then
+ { { echo "$as_me:$LINENO: error: source directory already configured; run \"make distclean\" there first" >&5
+echo "$as_me: error: source directory already configured; run \"make distclean\" there first" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+fi
+
+# test whether we have cygpath
+if test -z "$CYGPATH_W"; then
+ if (cygpath --version) >/dev/null 2>/dev/null; then
+ CYGPATH_W='cygpath -w'
+ else
+ CYGPATH_W=echo
+ fi
+fi
+
+
+# Define the identity of the package.
+ PACKAGE='stow'
+ VERSION='2.0.2'
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE "$PACKAGE"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define VERSION "$VERSION"
+_ACEOF
+
+# Some tools Automake needs.
+
+ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"}
+
+
+AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"}
+
+
+AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"}
+
+
+AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"}
+
+
+MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"}
+
+install_sh=${install_sh-"\$(SHELL) $am_aux_dir/install-sh"}
+
+# Installed binaries are usually stripped using `strip' when the user
+# runs `make install-strip'.  However, `strip' might not be the right
+# tool to use in cross-compilation environments, so Automake will
+# honor the `STRIP' environment variable to overrule this program.
+if test "$cross_compiling" != no; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+set dummy ${ac_tool_prefix}strip; ac_word=$2
+{ echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; }
+if test "${ac_cv_prog_STRIP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$STRIP"; then
+ ac_cv_prog_STRIP="$STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+ ac_cv_prog_STRIP="${ac_tool_prefix}strip"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+IFS=$as_save_IFS
+
+fi
+fi
+STRIP=$ac_cv_prog_STRIP
+if test -n "$STRIP"; then
+ { echo "$as_me:$LINENO: result: $STRIP" >&5
+echo "${ECHO_T}$STRIP" >&6; }
+else
+ { echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6; }
+fi
+
+
+fi
+if test -z "$ac_cv_prog_STRIP"; then
+ ac_ct_STRIP=$STRIP
+ # Extract the first word of "strip", so it can be a program name with args.
+set dummy strip; ac_word=$2
+{ echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; }
+if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_STRIP"; then
+ ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+ ac_cv_prog_ac_ct_STRIP="strip"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+IFS=$as_save_IFS
+
+fi
+fi
+ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP
+if test -n "$ac_ct_STRIP"; then
+ { echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5
+echo "${ECHO_T}$ac_ct_STRIP" >&6; }
+else
+ { echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6; }
+fi
+
+ if test "x$ac_ct_STRIP" = x; then
+ STRIP=":"
+ else
+ case $cross_compiling:$ac_tool_warned in
+yes:)
+{ echo "$as_me:$LINENO: WARNING: In the future, Autoconf will not detect cross-tools
+whose name does not start with the host triplet. If you think this
+configuration is useful to you, please write to autoconf@gnu.org." >&5
+echo "$as_me: WARNING: In the future, Autoconf will not detect cross-tools
+whose name does not start with the host triplet. If you think this
+configuration is useful to you, please write to autoconf@gnu.org." >&2;}
+ac_tool_warned=yes ;;
+esac
+ STRIP=$ac_ct_STRIP
+ fi
+else
+ STRIP="$ac_cv_prog_STRIP"
+fi
+
+fi
+INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
+
+# We need awk for the "check" target. The system "awk" is bad on
+# some platforms.
+# Always define AMTAR for backward compatibility.
+
+AMTAR=${AMTAR-"${am_missing_run}tar"}
+
+am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'
+
+
+
+
+
+# Find a good install program. We prefer a C program (faster),
+# so one script is as good as another. But avoid the broken or
+# incompatible versions:
+# SysV /etc/install, /usr/sbin/install
+# SunOS /usr/etc/install
+# IRIX /sbin/install
+# AIX /bin/install
+# AmigaOS /C/install, which installs bootblocks on floppy discs
+# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag
+# AFS /usr/afsws/bin/install, which mishandles nonexistent args
+# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff"
+# OS/2's system install, which has a completely different semantic
+# ./install, which can be erroneously created by make from ./install.sh.
+{ echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5
+echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6; }
+if test -z "$INSTALL"; then
+if test "${ac_cv_path_install+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ # Account for people who put trailing slashes in PATH elements.
+case $as_dir/ in
+ ./ | .// | /cC/* | \
+ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \
+ ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \
+ /usr/ucb/* ) ;;
+ *)
+ # OSF1 and SCO ODT 3.0 have their own names for install.
+ # Don't use installbsd from OSF since it installs stuff as root
+ # by default.
+ for ac_prog in ginstall scoinst install; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then
+ if test $ac_prog = install &&
+ grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # AIX install. It has an incompatible calling convention.
+ :
+ elif test $ac_prog = install &&
+ grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # program-specific install script used by HP pwplus--don't use.
+ :
+ else
+ ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c"
+ break 3
+ fi
+ fi
+ done
+ done
+ ;;
+esac
+done
+IFS=$as_save_IFS
+
+
+fi
+ if test "${ac_cv_path_install+set}" = set; then
+ INSTALL=$ac_cv_path_install
+ else
+ # As a last resort, use the slow shell script. Don't cache a
+ # value for INSTALL within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the value is a relative name.
+ INSTALL=$ac_install_sh
+ fi
+fi
+{ echo "$as_me:$LINENO: result: $INSTALL" >&5
+echo "${ECHO_T}$INSTALL" >&6; }
+
+# Use test -z because SunOS4 sh mishandles braces in ${var-val}.
+# It thinks the first close brace ends the variable substitution.
+test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}'
+
+test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}'
+
+test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644'
+
+
+for ac_prog in perl perl5
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6; }
+if test "${ac_cv_path_PERL+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ case $PERL in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PERL="$PERL" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+ ac_cv_path_PERL="$as_dir/$ac_word$ac_exec_ext"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PERL=$ac_cv_path_PERL
+if test -n "$PERL"; then
+ { echo "$as_me:$LINENO: result: $PERL" >&5
+echo "${ECHO_T}$PERL" >&6; }
+else
+ { echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6; }
+fi
+
+
+ test -n "$PERL" && break
+done
+test -n "$PERL" || PERL="false"
+
+if test "x$PERL" = xfalse
+then
+ { echo "$as_me:$LINENO: WARNING: WARNING: Perl not found; you must edit line 1 of 'stow'" >&5
+echo "$as_me: WARNING: WARNING: Perl not found; you must edit line 1 of 'stow'" >&2;}
+fi
+
+ac_config_files="$ac_config_files Makefile"
+
+cat >confcache <<\_ACEOF
+# This file is a shell script that caches the results of configure
+# tests run on this system so they can be shared between configure
+# scripts and configure runs, see configure's option --config-cache.
+# It is not useful on other systems. If it contains results you don't
+# want to keep, you may remove or edit it.
+#
+# config.status only pays attention to the cache file if you give it
+# the --recheck option to rerun configure.
+#
+# `ac_cv_env_foo' variables (set or unset) will be overridden when
+# loading this file, other *unset* `ac_cv_foo' will be assigned the
+# following values.
+
+_ACEOF
+
+# The following way of writing the cache mishandles newlines in values,
+# but we know of no workaround that is simple, portable, and efficient.
+# So, we kill variables containing newlines.
+# Ultrix sh set writes to stderr and can't be redirected directly,
+# and sets the high bit in the cache file unless we assign to the vars.
+(
+ for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
+ eval ac_val=\$$ac_var
+ case $ac_val in #(
+ *${as_nl}*)
+ case $ac_var in #(
+ *_cv_*) { echo "$as_me:$LINENO: WARNING: Cache variable $ac_var contains a newline." >&5
+echo "$as_me: WARNING: Cache variable $ac_var contains a newline." >&2;} ;;
+ esac
+ case $ac_var in #(
+ _ | IFS | as_nl) ;; #(
+ *) $as_unset $ac_var ;;
+ esac ;;
+ esac
+ done
+
+ (set) 2>&1 |
+ case $as_nl`(ac_space=' '; set) 2>&1` in #(
+ *${as_nl}ac_space=\ *)
+ # `set' does not quote correctly, so add quotes (double-quote
+ # substitution turns \\\\ into \\, and sed turns \\ into \).
+ sed -n \
+ "s/'/'\\\\''/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p"
+ ;; #(
+ *)
+ # `set' quotes correctly as required by POSIX, so do not add quotes.
+ sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p"
+ ;;
+ esac |
+ sort
+) |
+ sed '
+ /^ac_cv_env_/b end
+ t clear
+ :clear
+ s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/
+ t end
+ s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/
+ :end' >>confcache
+if diff "$cache_file" confcache >/dev/null 2>&1; then :; else
+ if test -w "$cache_file"; then
+ test "x$cache_file" != "x/dev/null" &&
+ { echo "$as_me:$LINENO: updating cache $cache_file" >&5
+echo "$as_me: updating cache $cache_file" >&6;}
+ cat confcache >$cache_file
+ else
+ { echo "$as_me:$LINENO: not updating unwritable cache $cache_file" >&5
+echo "$as_me: not updating unwritable cache $cache_file" >&6;}
+ fi
+fi
+rm -f confcache
+
+test "x$prefix" = xNONE && prefix=$ac_default_prefix
+# Let make expand exec_prefix.
+test "x$exec_prefix" = xNONE && exec_prefix='${prefix}'
+
+# Transform confdefs.h into DEFS.
+# Protect against shell expansion while executing Makefile rules.
+# Protect against Makefile macro expansion.
+#
+# If the first sed substitution is executed (which looks for macros that
+# take arguments), then branch to the quote section. Otherwise,
+# look for a macro that doesn't take arguments.
+ac_script='
+t clear
+:clear
+s/^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\)/-D\1=\2/g
+t quote
+s/^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\)/-D\1=\2/g
+t quote
+b any
+:quote
+s/[ `~#$^&*(){}\\|;'\''"<>?]/\\&/g
+s/\[/\\&/g
+s/\]/\\&/g
+s/\$/$$/g
+H
+:any
+${
+ g
+ s/^\n//
+ s/\n/ /g
+ p
+}
+'
+DEFS=`sed -n "$ac_script" confdefs.h`
+
+
+ac_libobjs=
+ac_ltlibobjs=
+for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue
+ # 1. Remove the extension, and $U if already installed.
+ ac_script='s/\$U\././;s/\.o$//;s/\.obj$//'
+ ac_i=`echo "$ac_i" | sed "$ac_script"`
+ # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR
+ # will be set to the directory where LIBOBJS objects are built.
+ ac_libobjs="$ac_libobjs \${LIBOBJDIR}$ac_i\$U.$ac_objext"
+ ac_ltlibobjs="$ac_ltlibobjs \${LIBOBJDIR}$ac_i"'$U.lo'
+done
+LIBOBJS=$ac_libobjs
+
+LTLIBOBJS=$ac_ltlibobjs
+
+
+
+: ${CONFIG_STATUS=./config.status}
+ac_clean_files_save=$ac_clean_files
+ac_clean_files="$ac_clean_files $CONFIG_STATUS"
+{ echo "$as_me:$LINENO: creating $CONFIG_STATUS" >&5
+echo "$as_me: creating $CONFIG_STATUS" >&6;}
+cat >$CONFIG_STATUS <<_ACEOF
+#! $SHELL
+# Generated by $as_me.
+# Run this file to recreate the current configuration.
+# Compiler output produced by configure, useful for debugging
+# configure, is in config.log if it exists.
+
+debug=false
+ac_cs_recheck=false
+ac_cs_silent=false
+SHELL=\${CONFIG_SHELL-$SHELL}
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+## --------------------- ##
+## M4sh Initialization. ##
+## --------------------- ##
+
+# Be more Bourne compatible
+DUALCASE=1; export DUALCASE # for MKS sh
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+  # Zsh 3.x and 4.x perform word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+ setopt NO_GLOB_SUBST
+else
+ case `(set -o) 2>/dev/null` in
+ *posix*) set -o posix ;;
+esac
+
+fi
+
+
+
+
+# PATH needs CR
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+# The user is always right.
+if test "${PATH_SEPARATOR+set}" != set; then
+ echo "#! /bin/sh" >conf$$.sh
+ echo "exit 0" >>conf$$.sh
+ chmod +x conf$$.sh
+ if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then
+ PATH_SEPARATOR=';'
+ else
+ PATH_SEPARATOR=:
+ fi
+ rm -f conf$$.sh
+fi
+
+# Support unset when possible.
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ as_unset=unset
+else
+ as_unset=false
+fi
+
+
+# IFS
+# We need space, tab and new line, in precisely that order. Quoting is
+# there to prevent editors from complaining about space-tab.
+# (If _AS_PATH_WALK were called with IFS unset, it would disable word
+# splitting by setting IFS to empty value.)
+as_nl='
+'
+IFS=" "" $as_nl"
+
+# Find who we are. Look in the path if we contain no directory separator.
+case $0 in
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break
+done
+IFS=$as_save_IFS
+
+ ;;
+esac
+# We did not find ourselves; most probably we were run as `sh COMMAND'
+# in which case we are not to be found in the path.
+if test "x$as_myself" = x; then
+ as_myself=$0
+fi
+if test ! -f "$as_myself"; then
+ echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2
+ { (exit 1); exit 1; }
+fi
+
+# Work around bugs in pre-3.0 UWIN ksh.
+for as_var in ENV MAIL MAILPATH
+do ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+done
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# NLS nuisances.
+for as_var in \
+ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
+ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \
+ LC_TELEPHONE LC_TIME
+do
+ if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then
+ eval $as_var=C; export $as_var
+ else
+ ($as_unset $as_var) >/dev/null 2>&1 && $as_unset $as_var
+ fi
+done
+
+# Required to use basename.
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+
+# Name of the executable.
+as_me=`$as_basename -- "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| . 2>/dev/null ||
+echo X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\/\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+
+# CDPATH.
+$as_unset CDPATH
+
+
+
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x`expr $as_lineno_1 + 1`" = "x$as_lineno_2" || {
+
+ # Create $as_me.lineno as a copy of $as_myself, but with $LINENO
+ # uniformly replaced by the line number. The first 'sed' inserts a
+ # line-number line after each line using $LINENO; the second 'sed'
+ # does the real work. The second script uses 'N' to pair each
+ # line-number line with the line containing $LINENO, and appends
+ # trailing '-' during substitution so that $LINENO is not a special
+ # case at line end.
+ # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the
+ # scripts with optimization help from Paolo Bonzini. Blame Lee
+ # E. McMahon (1931-1989) for sed's syntax. :-)
+ sed -n '
+ p
+ /[$]LINENO/=
+ ' <$as_myself |
+ sed '
+ s/[$]LINENO.*/&-/
+ t lineno
+ b
+ :lineno
+ N
+ :loop
+ s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/
+ t loop
+ s/-\n.*//
+ ' >$as_me.lineno &&
+ chmod +x "$as_me.lineno" ||
+ { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2
+ { (exit 1); exit 1; }; }
+
+  # Don't try to exec as it changes $[0], causing all sorts of problems
+  # (the dirname of $[0] is not the place where we might find the
+  # original, and so on).  Autoconf is especially sensitive to this.
+ . "./$as_me.lineno"
+ # Exit status is that of the last command.
+ exit
+}
+
+
+if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
+ as_dirname=dirname
+else
+ as_dirname=false
+fi
+
+ECHO_C= ECHO_N= ECHO_T=
+case `echo -n x` in
+-n*)
+ case `echo 'x\c'` in
+ *c*) ECHO_T=' ';; # ECHO_T is single tab character.
+ *) ECHO_C='\c';;
+ esac;;
+*)
+ ECHO_N='-n';;
+esac
+
+if expr a : '\(a\)' >/dev/null 2>&1 &&
+ test "X`expr 00001 : '.*\(...\)'`" = X001; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+rm -f conf$$ conf$$.exe conf$$.file
+if test -d conf$$.dir; then
+ rm -f conf$$.dir/conf$$.file
+else
+ rm -f conf$$.dir
+ mkdir conf$$.dir
+fi
+echo >conf$$.file
+if ln -s conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s='ln -s'
+ # ... but there are two gotchas:
+ # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
+ # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
+ # In both cases, we have to default to `cp -p'.
+ ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
+ as_ln_s='cp -p'
+elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+else
+ as_ln_s='cp -p'
+fi
+rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
+rmdir conf$$.dir 2>/dev/null
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p=:
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+if test -x / >/dev/null 2>&1; then
+ as_test_x='test -x'
+else
+ if ls -dL / >/dev/null 2>&1; then
+ as_ls_L_option=L
+ else
+ as_ls_L_option=
+ fi
+ as_test_x='
+ eval sh -c '\''
+ if test -d "$1"; then
+ test -d "$1/.";
+ else
+ case $1 in
+ -*)set "./$1";;
+ esac;
+ case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in
+ ???[sx]*):;;*)false;;esac;fi
+ '\'' sh
+ '
+fi
+as_executable_p=$as_test_x
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+exec 6>&1
+
+# Save the log message, to keep $[0] and so on meaningful, and to
+# report actual input values of CONFIG_FILES etc. instead of their
+# values after options handling.
+ac_log="
+This file was extended by stow $as_me 2.0.2, which was
+generated by GNU Autoconf 2.61. Invocation command line was
+
+ CONFIG_FILES = $CONFIG_FILES
+ CONFIG_HEADERS = $CONFIG_HEADERS
+ CONFIG_LINKS = $CONFIG_LINKS
+ CONFIG_COMMANDS = $CONFIG_COMMANDS
+ $ $0 $@
+
+on `(hostname || uname -n) 2>/dev/null | sed 1q`
+"
+
+_ACEOF
+
+cat >>$CONFIG_STATUS <<_ACEOF
+# Files that config.status was made for.
+config_files="$ac_config_files"
+
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+ac_cs_usage="\
+\`$as_me' instantiates files from templates according to the
+current configuration.
+
+Usage: $0 [OPTIONS] [FILE]...
+
+ -h, --help print this help, then exit
+ -V, --version print version number and configuration settings, then exit
+ -q, --quiet do not print progress messages
+ -d, --debug don't remove temporary files
+ --recheck update $as_me by reconfiguring in the same conditions
+ --file=FILE[:TEMPLATE]
+ instantiate the configuration file FILE
+
+Configuration files:
+$config_files
+
+Report bugs to <bug-autoconf@gnu.org>."
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+ac_cs_version="\\
+stow config.status 2.0.2
+configured by $0, generated by GNU Autoconf 2.61,
+ with options \\"`echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`\\"
+
+Copyright (C) 2006 Free Software Foundation, Inc.
+This config.status script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it."
+
+ac_pwd='$ac_pwd'
+srcdir='$srcdir'
+INSTALL='$INSTALL'
+MKDIR_P='$MKDIR_P'
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+# If no files are specified by the user, then we need to provide default
+# values.  But we need to know whether files were specified by the user.
+ac_need_defaults=:
+while test $# != 0
+do
+ case $1 in
+ --*=*)
+ ac_option=`expr "X$1" : 'X\([^=]*\)='`
+ ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'`
+ ac_shift=:
+ ;;
+ *)
+ ac_option=$1
+ ac_optarg=$2
+ ac_shift=shift
+ ;;
+ esac
+
+ case $ac_option in
+ # Handling of the options.
+ -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
+ ac_cs_recheck=: ;;
+ --version | --versio | --versi | --vers | --ver | --ve | --v | -V )
+ echo "$ac_cs_version"; exit ;;
+ --debug | --debu | --deb | --de | --d | -d )
+ debug=: ;;
+ --file | --fil | --fi | --f )
+ $ac_shift
+ CONFIG_FILES="$CONFIG_FILES $ac_optarg"
+ ac_need_defaults=false;;
+ --he | --h | --help | --hel | -h )
+ echo "$ac_cs_usage"; exit ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil | --si | --s)
+ ac_cs_silent=: ;;
+
+ # This is an error.
+ -*) { echo "$as_me: error: unrecognized option: $1
+Try \`$0 --help' for more information." >&2
+ { (exit 1); exit 1; }; } ;;
+
+ *) ac_config_targets="$ac_config_targets $1"
+ ac_need_defaults=false ;;
+
+ esac
+ shift
+done
+
+ac_configure_extra_args=
+
+if $ac_cs_silent; then
+ exec 6>/dev/null
+ ac_configure_extra_args="$ac_configure_extra_args --silent"
+fi
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+if \$ac_cs_recheck; then
+ echo "running CONFIG_SHELL=$SHELL $SHELL $0 "$ac_configure_args \$ac_configure_extra_args " --no-create --no-recursion" >&6
+ CONFIG_SHELL=$SHELL
+ export CONFIG_SHELL
+ exec $SHELL "$0"$ac_configure_args \$ac_configure_extra_args --no-create --no-recursion
+fi
+
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF
+exec 5>>config.log
+{
+ echo
+ sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX
+## Running $as_me. ##
+_ASBOX
+ echo "$ac_log"
+} >&5
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+
+# Handling of arguments.
+for ac_config_target in $ac_config_targets
+do
+ case $ac_config_target in
+ "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;;
+
+ *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5
+echo "$as_me: error: invalid argument: $ac_config_target" >&2;}
+ { (exit 1); exit 1; }; };;
+ esac
+done
+
+
+# If the user did not use the arguments to specify the items to instantiate,
+# then the envvar interface is used. Set only those that are not.
+# We use the long form for the default assignment because of an extremely
+# bizarre bug on SunOS 4.1.3.
+if $ac_need_defaults; then
+ test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files
+fi
+
+# Have a temporary directory for convenience. Make it in the build tree
+# simply because there is no reason against having it here, and in addition,
+# creating and moving files from /tmp can sometimes cause problems.
+# Hook for its removal unless debugging.
+# Note that there is a small window in which the directory will not be cleaned:
+# after its creation but before its name has been assigned to `$tmp'.
+$debug ||
+{
+ tmp=
+ trap 'exit_status=$?
+ { test -z "$tmp" || test ! -d "$tmp" || rm -fr "$tmp"; } && exit $exit_status
+' 0
+ trap '{ (exit 1); exit 1; }' 1 2 13 15
+}
+# Create a (secure) tmp directory for tmp files.
+
+{
+ tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` &&
+ test -n "$tmp" && test -d "$tmp"
+} ||
+{
+ tmp=./conf$$-$RANDOM
+ (umask 077 && mkdir "$tmp")
+} ||
+{
+ echo "$me: cannot create a temporary directory in ." >&2
+ { (exit 1); exit 1; }
+}
+
+#
+# Set up the sed scripts for CONFIG_FILES section.
+#
+
+# No need to generate the scripts if there are no CONFIG_FILES.
+# This happens for instance when ./config.status config.h
+if test -n "$CONFIG_FILES"; then
+
+_ACEOF
+
+
+
+ac_delim='%!_!# '
+for ac_last_try in false false false false false :; do
+ cat >conf$$subs.sed <<_ACEOF
+SHELL!$SHELL$ac_delim
+PATH_SEPARATOR!$PATH_SEPARATOR$ac_delim
+PACKAGE_NAME!$PACKAGE_NAME$ac_delim
+PACKAGE_TARNAME!$PACKAGE_TARNAME$ac_delim
+PACKAGE_VERSION!$PACKAGE_VERSION$ac_delim
+PACKAGE_STRING!$PACKAGE_STRING$ac_delim
+PACKAGE_BUGREPORT!$PACKAGE_BUGREPORT$ac_delim
+exec_prefix!$exec_prefix$ac_delim
+prefix!$prefix$ac_delim
+program_transform_name!$program_transform_name$ac_delim
+bindir!$bindir$ac_delim
+sbindir!$sbindir$ac_delim
+libexecdir!$libexecdir$ac_delim
+datarootdir!$datarootdir$ac_delim
+datadir!$datadir$ac_delim
+sysconfdir!$sysconfdir$ac_delim
+sharedstatedir!$sharedstatedir$ac_delim
+localstatedir!$localstatedir$ac_delim
+includedir!$includedir$ac_delim
+oldincludedir!$oldincludedir$ac_delim
+docdir!$docdir$ac_delim
+infodir!$infodir$ac_delim
+htmldir!$htmldir$ac_delim
+dvidir!$dvidir$ac_delim
+pdfdir!$pdfdir$ac_delim
+psdir!$psdir$ac_delim
+libdir!$libdir$ac_delim
+localedir!$localedir$ac_delim
+mandir!$mandir$ac_delim
+DEFS!$DEFS$ac_delim
+ECHO_C!$ECHO_C$ac_delim
+ECHO_N!$ECHO_N$ac_delim
+ECHO_T!$ECHO_T$ac_delim
+LIBS!$LIBS$ac_delim
+build_alias!$build_alias$ac_delim
+host_alias!$host_alias$ac_delim
+target_alias!$target_alias$ac_delim
+INSTALL_PROGRAM!$INSTALL_PROGRAM$ac_delim
+INSTALL_SCRIPT!$INSTALL_SCRIPT$ac_delim
+INSTALL_DATA!$INSTALL_DATA$ac_delim
+am__isrc!$am__isrc$ac_delim
+CYGPATH_W!$CYGPATH_W$ac_delim
+PACKAGE!$PACKAGE$ac_delim
+VERSION!$VERSION$ac_delim
+ACLOCAL!$ACLOCAL$ac_delim
+AUTOCONF!$AUTOCONF$ac_delim
+AUTOMAKE!$AUTOMAKE$ac_delim
+AUTOHEADER!$AUTOHEADER$ac_delim
+MAKEINFO!$MAKEINFO$ac_delim
+install_sh!$install_sh$ac_delim
+STRIP!$STRIP$ac_delim
+INSTALL_STRIP_PROGRAM!$INSTALL_STRIP_PROGRAM$ac_delim
+mkdir_p!$mkdir_p$ac_delim
+AWK!$AWK$ac_delim
+SET_MAKE!$SET_MAKE$ac_delim
+am__leading_dot!$am__leading_dot$ac_delim
+AMTAR!$AMTAR$ac_delim
+am__tar!$am__tar$ac_delim
+am__untar!$am__untar$ac_delim
+PERL!$PERL$ac_delim
+LIBOBJS!$LIBOBJS$ac_delim
+LTLIBOBJS!$LTLIBOBJS$ac_delim
+_ACEOF
+
+ if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 62; then
+ break
+ elif $ac_last_try; then
+ { { echo "$as_me:$LINENO: error: could not make $CONFIG_STATUS" >&5
+echo "$as_me: error: could not make $CONFIG_STATUS" >&2;}
+ { (exit 1); exit 1; }; }
+ else
+ ac_delim="$ac_delim!$ac_delim _$ac_delim!! "
+ fi
+done
+
+ac_eof=`sed -n '/^CEOF[0-9]*$/s/CEOF/0/p' conf$$subs.sed`
+if test -n "$ac_eof"; then
+ ac_eof=`echo "$ac_eof" | sort -nru | sed 1q`
+ ac_eof=`expr $ac_eof + 1`
+fi
+
+cat >>$CONFIG_STATUS <<_ACEOF
+cat >"\$tmp/subs-1.sed" <<\CEOF$ac_eof
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b end
+_ACEOF
+sed '
+s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g
+s/^/s,@/; s/!/@,|#_!!_#|/
+:n
+t n
+s/'"$ac_delim"'$/,g/; t
+s/$/\\/; p
+N; s/^.*\n//; s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g; b n
+' >>$CONFIG_STATUS <conf$$subs.sed
+rm -f conf$$subs.sed
+cat >>$CONFIG_STATUS <<_ACEOF
+:end
+s/|#_!!_#|//g
+CEOF$ac_eof
+_ACEOF
+
+
+# VPATH may cause trouble with some makes, so we remove $(srcdir),
+# ${srcdir} and @srcdir@ from VPATH if srcdir is ".", strip leading and
+# trailing colons and then remove the whole line if VPATH becomes empty
+# (actually we leave an empty line to preserve line numbers).
+if test "x$srcdir" = x.; then
+ ac_vpsub='/^[ ]*VPATH[ ]*=/{
+s/:*\$(srcdir):*/:/
+s/:*\${srcdir}:*/:/
+s/:*@srcdir@:*/:/
+s/^\([^=]*=[ ]*\):*/\1/
+s/:*$//
+s/^[^=]*=[ ]*$//
+}'
+fi
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+fi # test -n "$CONFIG_FILES"
+
+
+for ac_tag in :F $CONFIG_FILES
+do
+ case $ac_tag in
+ :[FHLC]) ac_mode=$ac_tag; continue;;
+ esac
+ case $ac_mode$ac_tag in
+ :[FHL]*:*);;
+ :L* | :C*:*) { { echo "$as_me:$LINENO: error: Invalid tag $ac_tag." >&5
+echo "$as_me: error: Invalid tag $ac_tag." >&2;}
+ { (exit 1); exit 1; }; };;
+ :[FH]-) ac_tag=-:-;;
+ :[FH]*) ac_tag=$ac_tag:$ac_tag.in;;
+ esac
+ ac_save_IFS=$IFS
+ IFS=:
+ set x $ac_tag
+ IFS=$ac_save_IFS
+ shift
+ ac_file=$1
+ shift
+
+ case $ac_mode in
+ :L) ac_source=$1;;
+ :[FH])
+ ac_file_inputs=
+ for ac_f
+ do
+ case $ac_f in
+ -) ac_f="$tmp/stdin";;
+ *) # Look for the file first in the build tree, then in the source tree
+ # (if the path is not absolute). The absolute path cannot be DOS-style,
+ # because $ac_f cannot contain `:'.
+ test -f "$ac_f" ||
+ case $ac_f in
+ [\\/$]*) false;;
+ *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";;
+ esac ||
+ { { echo "$as_me:$LINENO: error: cannot find input file: $ac_f" >&5
+echo "$as_me: error: cannot find input file: $ac_f" >&2;}
+ { (exit 1); exit 1; }; };;
+ esac
+ ac_file_inputs="$ac_file_inputs $ac_f"
+ done
+
+ # Let's still pretend it is `configure' which instantiates (i.e., don't
+  # use $as_me); people would be surprised to read:
+ # /* config.h. Generated by config.status. */
+ configure_input="Generated from "`IFS=:
+ echo $* | sed 's|^[^:]*/||;s|:[^:]*/|, |g'`" by configure."
+ if test x"$ac_file" != x-; then
+ configure_input="$ac_file. $configure_input"
+ { echo "$as_me:$LINENO: creating $ac_file" >&5
+echo "$as_me: creating $ac_file" >&6;}
+ fi
+
+ case $ac_tag in
+ *:-:* | *:-) cat >"$tmp/stdin";;
+ esac
+ ;;
+ esac
+
+ ac_dir=`$as_dirname -- "$ac_file" ||
+$as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$ac_file" : 'X\(//\)[^/]' \| \
+ X"$ac_file" : 'X\(//\)$' \| \
+ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null ||
+echo X"$ac_file" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ { as_dir="$ac_dir"
+ case $as_dir in #(
+ -*) as_dir=./$as_dir;;
+ esac
+ test -d "$as_dir" || { $as_mkdir_p && mkdir -p "$as_dir"; } || {
+ as_dirs=
+ while :; do
+ case $as_dir in #(
+ *\'*) as_qdir=`echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #(
+ *) as_qdir=$as_dir;;
+ esac
+ as_dirs="'$as_qdir' $as_dirs"
+ as_dir=`$as_dirname -- "$as_dir" ||
+$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_dir" : 'X\(//\)[^/]' \| \
+ X"$as_dir" : 'X\(//\)$' \| \
+ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null ||
+echo X"$as_dir" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'`
+ test -d "$as_dir" && break
+ done
+ test -z "$as_dirs" || eval "mkdir $as_dirs"
+ } || test -d "$as_dir" || { { echo "$as_me:$LINENO: error: cannot create directory $as_dir" >&5
+echo "$as_me: error: cannot create directory $as_dir" >&2;}
+ { (exit 1); exit 1; }; }; }
+ ac_builddir=.
+
+case "$ac_dir" in
+.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
+*)
+ ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'`
+ # A ".." for each directory in $ac_dir_suffix.
+ ac_top_builddir_sub=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,/..,g;s,/,,'`
+ case $ac_top_builddir_sub in
+ "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
+ *) ac_top_build_prefix=$ac_top_builddir_sub/ ;;
+ esac ;;
+esac
+ac_abs_top_builddir=$ac_pwd
+ac_abs_builddir=$ac_pwd$ac_dir_suffix
+# for backward compatibility:
+ac_top_builddir=$ac_top_build_prefix
+
+case $srcdir in
+ .) # We are building in place.
+ ac_srcdir=.
+ ac_top_srcdir=$ac_top_builddir_sub
+ ac_abs_top_srcdir=$ac_pwd ;;
+ [\\/]* | ?:[\\/]* ) # Absolute name.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir
+ ac_abs_top_srcdir=$srcdir ;;
+ *) # Relative name.
+ ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_build_prefix$srcdir
+ ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
+esac
+ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix
+
+
+ case $ac_mode in
+ :F)
+ #
+ # CONFIG_FILE
+ #
+
+ case $INSTALL in
+ [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;;
+ *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;;
+ esac
+ ac_MKDIR_P=$MKDIR_P
+ case $MKDIR_P in
+ [\\/$]* | ?:[\\/]* ) ;;
+ */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;;
+ esac
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+# If the template does not know about datarootdir, expand it.
+# FIXME: This hack should be removed a few years after 2.60.
+ac_datarootdir_hack=; ac_datarootdir_seen=
+
+case `sed -n '/datarootdir/ {
+ p
+ q
+}
+/@datadir@/p
+/@docdir@/p
+/@infodir@/p
+/@localedir@/p
+/@mandir@/p
+' $ac_file_inputs` in
+*datarootdir*) ac_datarootdir_seen=yes;;
+*@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*)
+ { echo "$as_me:$LINENO: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5
+echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;}
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+ ac_datarootdir_hack='
+ s&@datadir@&$datadir&g
+ s&@docdir@&$docdir&g
+ s&@infodir@&$infodir&g
+ s&@localedir@&$localedir&g
+ s&@mandir@&$mandir&g
+ s&\\\${datarootdir}&$datarootdir&g' ;;
+esac
+_ACEOF
+
+# Neutralize VPATH when `$srcdir' = `.'.
+# Shell code in configure.ac might set extrasub.
+# FIXME: do we really want to maintain this feature?
+cat >>$CONFIG_STATUS <<_ACEOF
+ sed "$ac_vpsub
+$extrasub
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF
+:t
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b
+s&@configure_input@&$configure_input&;t t
+s&@top_builddir@&$ac_top_builddir_sub&;t t
+s&@srcdir@&$ac_srcdir&;t t
+s&@abs_srcdir@&$ac_abs_srcdir&;t t
+s&@top_srcdir@&$ac_top_srcdir&;t t
+s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t
+s&@builddir@&$ac_builddir&;t t
+s&@abs_builddir@&$ac_abs_builddir&;t t
+s&@abs_top_builddir@&$ac_abs_top_builddir&;t t
+s&@INSTALL@&$ac_INSTALL&;t t
+s&@MKDIR_P@&$ac_MKDIR_P&;t t
+$ac_datarootdir_hack
+" $ac_file_inputs | sed -f "$tmp/subs-1.sed" >$tmp/out
+
+test -z "$ac_datarootdir_hack$ac_datarootdir_seen" &&
+ { ac_out=`sed -n '/\${datarootdir}/p' "$tmp/out"`; test -n "$ac_out"; } &&
+ { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' "$tmp/out"`; test -z "$ac_out"; } &&
+ { echo "$as_me:$LINENO: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined." >&5
+echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir'
+which seems to be undefined. Please make sure it is defined." >&2;}
+
+ rm -f "$tmp/stdin"
+ case $ac_file in
+ -) cat "$tmp/out"; rm -f "$tmp/out";;
+ *) rm -f "$ac_file"; mv "$tmp/out" $ac_file;;
+ esac
+ ;;
+
+
+
+ esac
+
+done # for ac_tag
+
+
+{ (exit 0); exit 0; }
+_ACEOF
+chmod +x $CONFIG_STATUS
+ac_clean_files=$ac_clean_files_save
+
+
+# configure is writing to config.log, and then calls config.status.
+# config.status does its own redirection, appending to config.log.
+# Unfortunately, on DOS this fails, as config.log is still kept open
+# by configure, so config.status won't be able to write to it; its
+# output is simply discarded. So we exec the FD to /dev/null,
+# effectively closing config.log, so it can be properly (re)opened and
+# appended to by config.status. When coming back to configure, we
+# need to make the FD available again.
+if test "$no_create" != yes; then
+ ac_cs_success=:
+ ac_config_status_args=
+ test "$silent" = yes &&
+ ac_config_status_args="$ac_config_status_args --quiet"
+ exec 5>/dev/null
+ $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false
+ exec 5>>config.log
+ # Use ||, not &&, to avoid exiting from the if with $? = 1, which
+ # would make configure fail if this is the last instruction.
+ $ac_cs_success || { (exit 1); exit 1; }
+fi
+
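The config.status script assembled in the hunk above is driven by the options shown in its own usage text (--recheck, --file, --debug, --quiet).  A minimal sketch of exercising it by hand, assuming the tree has already been configured once:

    ./config.status                    # regenerate every registered file (here, only Makefile)
    ./config.status --file=Makefile    # instantiate a single file from its .in template
    ./config.status --version          # print "stow config.status 2.0.2" plus the configure options
    ./config.status --recheck          # re-run configure itself with the cached arguments

With no arguments it takes the $ac_need_defaults branch, so CONFIG_FILES falls back to $config_files, which this package sets to Makefile only.
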
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000..fe89853
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,17 @@
+dnl Process this file with Autoconf to produce configure dnl
+
+AC_INIT([stow], [2.0.2], [bug-stow@gnu.org])
+AC_PREREQ([2.61])
+AM_INIT_AUTOMAKE([-Wall -Werror])
+AC_PROG_INSTALL
+
+dnl Check for perl on our system
+dnl call to AC_SUBST(PERL) is implicit
+AC_PATH_PROGS([PERL], [perl] [perl5], [false])
+if test "x$PERL" = xfalse
+then
+ AC_MSG_WARN([Perl not found; you must edit line 1 of 'stow'])
+fi
+
+AC_CONFIG_FILES([Makefile])
+AC_OUTPUT
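The new configure.ac is the source of the generated configure script above: AM_INIT_AUTOMAKE pulls in the sanity, awk, make and mkdir -p checks; AC_PROG_INSTALL pulls in the search for a BSD-compatible install (its second appearance above appears to be a side effect of AM_INIT_AUTOMAKE already requiring it); and AC_PATH_PROGS provides the Perl probe behind the warning about editing line 1 of 'stow'.  A sketch of the usual Autotools regeneration sequence, assuming stock tools (the project may wrap this in its own bootstrap script):

    aclocal                  # gather the AM_* macro definitions into aclocal.m4
    autoconf                 # expand configure.ac into the portable ./configure script
    automake --add-missing   # generate Makefile.in and copy helpers such as install-sh
    ./configure              # run the checks and write Makefile via config.status
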
diff --git a/configure.in b/configure.in
deleted file mode 100644
index 1c292ed..0000000
--- a/configure.in
+++ /dev/null
@@ -1,22 +0,0 @@
-dnl Process this file with Autoconf to produce configure
-
-AC_INIT(stow.in)
-
-PACKAGE=stow
-VERSION=1.3.3
-AM_INIT_AUTOMAKE(stow, $VERSION)
-AC_SUBST(PACKAGE)
-AC_SUBST(VERSION)
-
-AC_ARG_PROGRAM
-
-AC_PROG_INSTALL
-
-AC_PATH_PROGS(PERL, perl perl5, false)
-
-if test "x$PERL" = xfalse
-then
- echo 'WARNING: Perl not found; you must edit line 1 of `stow'"'"
-fi
-
-AC_OUTPUT(Makefile stow)
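Compared with the deleted configure.in, the new configure.ac also drops `stow' from the list of generated files: AC_OUTPUT(Makefile stow) used to make config.status substitute values straight into the stow script, while AC_CONFIG_FILES([Makefile]) presumably leaves that substitution to the build rules instead.  Purely as an illustration of the old flow (the @PERL@ placeholder is an assumption about the stow.in template, not quoted from it):

    # stow.in (template):   #!@PERL@
    # stow (generated):     #!/usr/bin/perl    # whatever AC_PATH_PROGS([PERL], ...) found
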
diff --git a/install-sh b/install-sh
new file mode 100755
index 0000000..4fbbae7
--- /dev/null
+++ b/install-sh
@@ -0,0 +1,507 @@
+#!/bin/sh
+# install - install a program, script, or datafile
+
+scriptversion=2006-10-14.15
+
+# This originates from X11R5 (mit/util/scripts/install.sh), which was
+# later released in X11R6 (xc/config/util/install.sh) with the
+# following copyright and license.
+#
+# Copyright (C) 1994 X Consortium
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
+# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+# Except as contained in this notice, the name of the X Consortium shall not
+# be used in advertising or otherwise to promote the sale, use or other deal-
+# ings in this Software without prior written authorization from the X Consor-
+# tium.
+#
+#
+# FSF changes to this file are in the public domain.
+#
+# Calling this script install-sh is preferred over install.sh, to prevent
+# `make' implicit rules from creating a file called install from it
+# when there is no Makefile.
+#
+# This script is compatible with the BSD install script, but was written
+# from scratch.
+
+nl='
+'
+IFS=" "" $nl"
+
+# set DOITPROG to echo to test this script
+
+# Don't use :- since 4.3BSD and earlier shells don't like it.
+doit="${DOITPROG-}"
+if test -z "$doit"; then
+ doit_exec=exec
+else
+ doit_exec=$doit
+fi
+
+# Put in absolute file names if you don't have them in your path;
+# or use environment vars.
+
+mvprog="${MVPROG-mv}"
+cpprog="${CPPROG-cp}"
+chmodprog="${CHMODPROG-chmod}"
+chownprog="${CHOWNPROG-chown}"
+chgrpprog="${CHGRPPROG-chgrp}"
+stripprog="${STRIPPROG-strip}"
+rmprog="${RMPROG-rm}"
+mkdirprog="${MKDIRPROG-mkdir}"
+
+posix_glob=
+posix_mkdir=
+
+# Desired mode of installed file.
+mode=0755
+
+chmodcmd=$chmodprog
+chowncmd=
+chgrpcmd=
+stripcmd=
+rmcmd="$rmprog -f"
+mvcmd="$mvprog"
+src=
+dst=
+dir_arg=
+dstarg=
+no_target_directory=
+
+usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
+ or: $0 [OPTION]... SRCFILES... DIRECTORY
+ or: $0 [OPTION]... -t DIRECTORY SRCFILES...
+ or: $0 [OPTION]... -d DIRECTORIES...
+
+In the 1st form, copy SRCFILE to DSTFILE.
+In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
+In the 4th, create DIRECTORIES.
+
+Options:
+-c (ignored)
+-d create directories instead of installing files.
+-g GROUP $chgrpprog installed files to GROUP.
+-m MODE $chmodprog installed files to MODE.
+-o USER $chownprog installed files to USER.
+-s $stripprog installed files.
+-t DIRECTORY install into DIRECTORY.
+-T report an error if DSTFILE is a directory.
+--help display this help and exit.
+--version display version info and exit.
+
+Environment variables override the default commands:
+ CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG
+"
+
+while test $# -ne 0; do
+ case $1 in
+ -c) shift
+ continue;;
+
+ -d) dir_arg=true
+ shift
+ continue;;
+
+ -g) chgrpcmd="$chgrpprog $2"
+ shift
+ shift
+ continue;;
+
+ --help) echo "$usage"; exit $?;;
+
+ -m) mode=$2
+ shift
+ shift
+ case $mode in
+ *' '* | *' '* | *'
+'* | *'*'* | *'?'* | *'['*)
+ echo "$0: invalid mode: $mode" >&2
+ exit 1;;
+ esac
+ continue;;
+
+ -o) chowncmd="$chownprog $2"
+ shift
+ shift
+ continue;;
+
+ -s) stripcmd=$stripprog
+ shift
+ continue;;
+
+ -t) dstarg=$2
+ shift
+ shift
+ continue;;
+
+ -T) no_target_directory=true
+ shift
+ continue;;
+
+ --version) echo "$0 $scriptversion"; exit $?;;
+
+ --) shift
+ break;;
+
+ -*) echo "$0: invalid option: $1" >&2
+ exit 1;;
+
+ *) break;;
+ esac
+done
+
+if test $# -ne 0 && test -z "$dir_arg$dstarg"; then
+ # When -d is used, all remaining arguments are directories to create.
+ # When -t is used, the destination is already specified.
+ # Otherwise, the last argument is the destination. Remove it from $@.
+ for arg
+ do
+ if test -n "$dstarg"; then
+ # $@ is not empty: it contains at least $arg.
+ set fnord "$@" "$dstarg"
+ shift # fnord
+ fi
+ shift # arg
+ dstarg=$arg
+ done
+fi
+
+if test $# -eq 0; then
+ if test -z "$dir_arg"; then
+ echo "$0: no input file specified." >&2
+ exit 1
+ fi
+ # It's OK to call `install-sh -d' without argument.
+ # This can happen when creating conditional directories.
+ exit 0
+fi
+
+if test -z "$dir_arg"; then
+ trap '(exit $?); exit' 1 2 13 15
+
+ # Set umask so as not to create temps with too-generous modes.
+ # However, 'strip' requires both read and write access to temps.
+ case $mode in
+ # Optimize common cases.
+ *644) cp_umask=133;;
+ *755) cp_umask=22;;
+
+ *[0-7])
+ if test -z "$stripcmd"; then
+ u_plus_rw=
+ else
+ u_plus_rw='% 200'
+ fi
+ cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
+ *)
+ if test -z "$stripcmd"; then
+ u_plus_rw=
+ else
+ u_plus_rw=,u+rw
+ fi
+ cp_umask=$mode$u_plus_rw;;
+ esac
+fi
+
+for src
+do
+ # Protect names starting with `-'.
+ case $src in
+ -*) src=./$src ;;
+ esac
+
+ if test -n "$dir_arg"; then
+ dst=$src
+ dstdir=$dst
+ test -d "$dstdir"
+ dstdir_status=$?
+ else
+
+ # Waiting for this to be detected by the "$cpprog $src $dsttmp" command
+ # might cause directories to be created, which would be especially bad
+ # if $src (and thus $dsttmp) contains '*'.
+ if test ! -f "$src" && test ! -d "$src"; then
+ echo "$0: $src does not exist." >&2
+ exit 1
+ fi
+
+ if test -z "$dstarg"; then
+ echo "$0: no destination specified." >&2
+ exit 1
+ fi
+
+ dst=$dstarg
+ # Protect names starting with `-'.
+ case $dst in
+ -*) dst=./$dst ;;
+ esac
+
+ # If destination is a directory, append the input filename; won't work
+ # if double slashes aren't ignored.
+ if test -d "$dst"; then
+ if test -n "$no_target_directory"; then
+ echo "$0: $dstarg: Is a directory" >&2
+ exit 1
+ fi
+ dstdir=$dst
+ dst=$dstdir/`basename "$src"`
+ dstdir_status=0
+ else
+ # Prefer dirname, but fall back on a substitute if dirname fails.
+ dstdir=`
+ (dirname "$dst") 2>/dev/null ||
+ expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$dst" : 'X\(//\)[^/]' \| \
+ X"$dst" : 'X\(//\)$' \| \
+ X"$dst" : 'X\(/\)' \| . 2>/dev/null ||
+ echo X"$dst" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)[^/].*/{
+ s//\1/
+ q
+ }
+ /^X\(\/\/\)$/{
+ s//\1/
+ q
+ }
+ /^X\(\/\).*/{
+ s//\1/
+ q
+ }
+ s/.*/./; q'
+ `
+
+ test -d "$dstdir"
+ dstdir_status=$?
+ fi
+ fi
+
+ obsolete_mkdir_used=false
+
+ if test $dstdir_status != 0; then
+ case $posix_mkdir in
+ '')
+ # Create intermediate dirs using mode 755 as modified by the umask.
+ # This is like FreeBSD 'install' as of 1997-10-28.
+ umask=`umask`
+ case $stripcmd.$umask in
+ # Optimize common cases.
+ *[2367][2367]) mkdir_umask=$umask;;
+ .*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;;
+
+ *[0-7])
+ mkdir_umask=`expr $umask + 22 \
+ - $umask % 100 % 40 + $umask % 20 \
+ - $umask % 10 % 4 + $umask % 2
+ `;;
+ *) mkdir_umask=$umask,go-w;;
+ esac
+
+ # With -d, create the new directory with the user-specified mode.
+ # Otherwise, rely on $mkdir_umask.
+ if test -n "$dir_arg"; then
+ mkdir_mode=-m$mode
+ else
+ mkdir_mode=
+ fi
+
+ posix_mkdir=false
+ case $umask in
+ *[123567][0-7][0-7])
+ # POSIX mkdir -p sets u+wx bits regardless of umask, which
+ # is incompatible with FreeBSD 'install' when (umask & 300) != 0.
+ ;;
+ *)
+ tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
+ trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0
+
+ if (umask $mkdir_umask &&
+ exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1
+ then
+ if test -z "$dir_arg" || {
+ # Check for POSIX incompatibilities with -m.
+ # HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
+ # other-writeable bit of parent directory when it shouldn't.
+ # FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
+ ls_ld_tmpdir=`ls -ld "$tmpdir"`
+ case $ls_ld_tmpdir in
+ d????-?r-*) different_mode=700;;
+ d????-?--*) different_mode=755;;
+ *) false;;
+ esac &&
+ $mkdirprog -m$different_mode -p -- "$tmpdir" && {
+ ls_ld_tmpdir_1=`ls -ld "$tmpdir"`
+ test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
+ }
+ }
+ then posix_mkdir=:
+ fi
+ rmdir "$tmpdir/d" "$tmpdir"
+ else
+ # Remove any dirs left behind by ancient mkdir implementations.
+ rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null
+ fi
+ trap '' 0;;
+ esac;;
+ esac
+
+ if
+ $posix_mkdir && (
+ umask $mkdir_umask &&
+ $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
+ )
+ then :
+ else
+
+ # The umask is ridiculous, or mkdir does not conform to POSIX,
+ # or it failed possibly due to a race condition. Create the
+ # directory the slow way, step by step, checking for races as we go.
+
+ case $dstdir in
+ /*) prefix=/ ;;
+ -*) prefix=./ ;;
+ *) prefix= ;;
+ esac
+
+ case $posix_glob in
+ '')
+ if (set -f) 2>/dev/null; then
+ posix_glob=true
+ else
+ posix_glob=false
+ fi ;;
+ esac
+
+ oIFS=$IFS
+ IFS=/
+ $posix_glob && set -f
+ set fnord $dstdir
+ shift
+ $posix_glob && set +f
+ IFS=$oIFS
+
+ prefixes=
+
+ for d
+ do
+ test -z "$d" && continue
+
+ prefix=$prefix$d
+ if test -d "$prefix"; then
+ prefixes=
+ else
+ if $posix_mkdir; then
+ (umask=$mkdir_umask &&
+ $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
+ # Don't fail if two instances are running concurrently.
+ test -d "$prefix" || exit 1
+ else
+ case $prefix in
+ *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
+ *) qprefix=$prefix;;
+ esac
+ prefixes="$prefixes '$qprefix'"
+ fi
+ fi
+ prefix=$prefix/
+ done
+
+ if test -n "$prefixes"; then
+ # Don't fail if two instances are running concurrently.
+ (umask $mkdir_umask &&
+ eval "\$doit_exec \$mkdirprog $prefixes") ||
+ test -d "$dstdir" || exit 1
+ obsolete_mkdir_used=true
+ fi
+ fi
+ fi
+
+ if test -n "$dir_arg"; then
+ { test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
+ { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
+ { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
+ test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
+ else
+
+ # Make a couple of temp file names in the proper directory.
+ dsttmp=$dstdir/_inst.$$_
+ rmtmp=$dstdir/_rm.$$_
+
+ # Trap to clean up those temp files at exit.
+ trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
+
+ # Copy the file name to the temp name.
+ (umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") &&
+
+ # and set any options; do chmod last to preserve setuid bits.
+ #
+ # If any of these fail, we abort the whole thing. If we want to
+ # ignore errors from any of these, just make sure not to ignore
+ # errors from the above "$doit $cpprog $src $dsttmp" command.
+ #
+ { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \
+ && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \
+ && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \
+ && { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
+
+ # Now rename the file to the real destination.
+ { $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null \
+ || {
+ # The rename failed, perhaps because mv can't rename something else
+ # to itself, or perhaps because mv is so ancient that it does not
+ # support -f.
+
+ # Now remove or move aside any old file at destination location.
+ # We try this two ways since rm can't unlink itself on some
+ # systems and the destination file might be busy for other
+ # reasons. In this case, the final cleanup might fail but the new
+ # file should still install successfully.
+ {
+ if test -f "$dst"; then
+ $doit $rmcmd -f "$dst" 2>/dev/null \
+ || { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null \
+ && { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; }; }\
+ || {
+ echo "$0: cannot unlink or rename $dst" >&2
+ (exit 1); exit 1
+ }
+ else
+ :
+ fi
+ } &&
+
+ # Now rename the file to the real destination.
+ $doit $mvcmd "$dsttmp" "$dst"
+ }
+ } || exit 1
+
+ trap '' 0
+ fi
+done
+
+# Local variables:
+# eval: (add-hook 'write-file-hooks 'time-stamp)
+# time-stamp-start: "scriptversion="
+# time-stamp-format: "%:y-%02m-%02d.%02H"
+# time-stamp-end: "$"
+# End:
diff --git a/mdate-sh b/mdate-sh
new file mode 100755
index 0000000..cd916c0
--- /dev/null
+++ b/mdate-sh
@@ -0,0 +1,201 @@
+#!/bin/sh
+# Get modification time of a file or directory and pretty-print it.
+
+scriptversion=2005-06-29.22
+
+# Copyright (C) 1995, 1996, 1997, 2003, 2004, 2005 Free Software
+# Foundation, Inc.
+# written by Ulrich Drepper <drepper@gnu.ai.mit.edu>, June 1995
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# This file is maintained in Automake, please report
+# bugs to <bug-automake@gnu.org> or send patches to
+# <automake-patches@gnu.org>.
+
+case $1 in
+ '')
+ echo "$0: No file. Try \`$0 --help' for more information." 1>&2
+ exit 1;
+ ;;
+ -h | --h*)
+ cat <<\EOF
+Usage: mdate-sh [--help] [--version] FILE
+
+Pretty-print the modification time of FILE.
+
+Report bugs to <bug-automake@gnu.org>.
+EOF
+ exit $?
+ ;;
+ -v | --v*)
+ echo "mdate-sh $scriptversion"
+ exit $?
+ ;;
+esac
+
+# Prevent date giving response in another language.
+LANG=C
+export LANG
+LC_ALL=C
+export LC_ALL
+LC_TIME=C
+export LC_TIME
+
+# GNU ls changes its time format in response to the TIME_STYLE
+# variable. Since we cannot assume `unset' works, revert this
+# variable to its documented default.
+if test "${TIME_STYLE+set}" = set; then
+ TIME_STYLE=posix-long-iso
+ export TIME_STYLE
+fi
+
+save_arg1=$1
+
+# Find out how to get the extended ls output of a file or directory.
+if ls -L /dev/null 1>/dev/null 2>&1; then
+ ls_command='ls -L -l -d'
+else
+ ls_command='ls -l -d'
+fi
+
+# A `ls -l' line looks as follows on OS/2.
+# drwxrwx--- 0 Aug 11 2001 foo
+# This differs from Unix, which adds ownership information.
+# drwxrwx--- 2 root root 4096 Aug 11 2001 foo
+#
+# To find the date, we split the line on spaces and iterate on words
+# until we find a month. This cannot work with files whose owner is a
+# user named `Jan', or `Feb', etc. However, it's unlikely that `/'
+# will be owned by a user whose name is a month. So we first look at
+# the extended ls output of the root directory to decide how many
+# words should be skipped to get the date.
+
+# On HPUX /bin/sh, "set" interprets "-rw-r--r--" as options, so the "x" below.
+set x`ls -l -d /`
+
+# Find which argument is the month.
+month=
+command=
+until test $month
+do
+ shift
+ # Add another shift to the command.
+ command="$command shift;"
+ case $1 in
+ Jan) month=January; nummonth=1;;
+ Feb) month=February; nummonth=2;;
+ Mar) month=March; nummonth=3;;
+ Apr) month=April; nummonth=4;;
+ May) month=May; nummonth=5;;
+ Jun) month=June; nummonth=6;;
+ Jul) month=July; nummonth=7;;
+ Aug) month=August; nummonth=8;;
+ Sep) month=September; nummonth=9;;
+ Oct) month=October; nummonth=10;;
+ Nov) month=November; nummonth=11;;
+ Dec) month=December; nummonth=12;;
+ esac
+done
+
+# Get the extended ls output of the file or directory.
+set dummy x`eval "$ls_command \"\$save_arg1\""`
+
+# Remove all preceding arguments
+eval $command
+
+# Because of the dummy argument above, month is in $2.
+#
+# On a POSIX system, we should have
+#
+# $# = 5
+# $1 = file size
+# $2 = month
+# $3 = day
+# $4 = year or time
+# $5 = filename
+#
+# On Darwin 7.7.0 and 7.6.0, we have
+#
+# $# = 4
+# $1 = day
+# $2 = month
+# $3 = year or time
+# $4 = filename
+
+# Get the month.
+case $2 in
+ Jan) month=January; nummonth=1;;
+ Feb) month=February; nummonth=2;;
+ Mar) month=March; nummonth=3;;
+ Apr) month=April; nummonth=4;;
+ May) month=May; nummonth=5;;
+ Jun) month=June; nummonth=6;;
+ Jul) month=July; nummonth=7;;
+ Aug) month=August; nummonth=8;;
+ Sep) month=September; nummonth=9;;
+ Oct) month=October; nummonth=10;;
+ Nov) month=November; nummonth=11;;
+ Dec) month=December; nummonth=12;;
+esac
+
+case $3 in
+ ???*) day=$1;;
+ *) day=$3; shift;;
+esac
+
+# Here we have to deal with the problem that the ls output gives either
+# the time of day or the year.
+case $3 in
+ *:*) set `date`; eval year=\$$#
+ case $2 in
+ Jan) nummonthtod=1;;
+ Feb) nummonthtod=2;;
+ Mar) nummonthtod=3;;
+ Apr) nummonthtod=4;;
+ May) nummonthtod=5;;
+ Jun) nummonthtod=6;;
+ Jul) nummonthtod=7;;
+ Aug) nummonthtod=8;;
+ Sep) nummonthtod=9;;
+ Oct) nummonthtod=10;;
+ Nov) nummonthtod=11;;
+ Dec) nummonthtod=12;;
+ esac
+ # For the first six month of the year the time notation can also
+ # be used for files modified in the last year.
+ if (expr $nummonth \> $nummonthtod) > /dev/null;
+ then
+ year=`expr $year - 1`
+ fi;;
+ *) year=$3;;
+esac
+
+# The result.
+echo $day $month $year
+
+# Local Variables:
+# mode: shell-script
+# sh-indentation: 2
+# eval: (add-hook 'write-file-hooks 'time-stamp)
+# time-stamp-start: "scriptversion="
+# time-stamp-format: "%:y-%02m-%02d.%02H"
+# time-stamp-end: "$"
+# End:
diff --git a/missing b/missing
new file mode 100755
index 0000000..1c8ff70
--- /dev/null
+++ b/missing
@@ -0,0 +1,367 @@
+#! /bin/sh
+# Common stub for a few missing GNU programs while installing.
+
+scriptversion=2006-05-10.23
+
+# Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003, 2004, 2005, 2006
+# Free Software Foundation, Inc.
+# Originally by Fran,cois Pinard <pinard@iro.umontreal.ca>, 1996.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+# 02110-1301, USA.
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+if test $# -eq 0; then
+ echo 1>&2 "Try \`$0 --help' for more information"
+ exit 1
+fi
+
+run=:
+sed_output='s/.* --output[ =]\([^ ]*\).*/\1/p'
+sed_minuso='s/.* -o \([^ ]*\).*/\1/p'
+
+# In the cases where this matters, `missing' is being run in the
+# srcdir already.
+if test -f configure.ac; then
+ configure_ac=configure.ac
+else
+ configure_ac=configure.in
+fi
+
+msg="missing on your system"
+
+case $1 in
+--run)
+ # Try to run requested program, and just exit if it succeeds.
+ run=
+ shift
+ "$@" && exit 0
+ # Exit code 63 means version mismatch. This often happens
+ # when the user tries to use an ancient version of a tool on
+ # a file that requires a minimum version. In this case
+ # we should proceed as if the program had been absent, or
+ # if --run hadn't been passed.
+ if test $? = 63; then
+ run=:
+ msg="probably too old"
+ fi
+ ;;
+
+ -h|--h|--he|--hel|--help)
+ echo "\
+$0 [OPTION]... PROGRAM [ARGUMENT]...
+
+Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an
+error status if there is no known handling for PROGRAM.
+
+Options:
+ -h, --help display this help and exit
+ -v, --version output version information and exit
+ --run try to run the given command, and emulate it if it fails
+
+Supported PROGRAM values:
+ aclocal touch file \`aclocal.m4'
+ autoconf touch file \`configure'
+ autoheader touch file \`config.h.in'
+ autom4te touch the output file, or create a stub one
+ automake touch all \`Makefile.in' files
+ bison create \`y.tab.[ch]', if possible, from existing .[ch]
+ flex create \`lex.yy.c', if possible, from existing .c
+ help2man touch the output file
+ lex create \`lex.yy.c', if possible, from existing .c
+ makeinfo touch the output file
+ tar try tar, gnutar, gtar, then tar without non-portable flags
+ yacc create \`y.tab.[ch]', if possible, from existing .[ch]
+
+Send bug reports to <bug-automake@gnu.org>."
+ exit $?
+ ;;
+
+ -v|--v|--ve|--ver|--vers|--versi|--versio|--version)
+ echo "missing $scriptversion (GNU Automake)"
+ exit $?
+ ;;
+
+ -*)
+ echo 1>&2 "$0: Unknown \`$1' option"
+ echo 1>&2 "Try \`$0 --help' for more information"
+ exit 1
+ ;;
+
+esac
+
+# Now exit if we have it, but it failed. Also exit now if we
+# don't have it and --version was passed (most likely to detect
+# the program).
+case $1 in
+ lex|yacc)
+ # Not GNU programs, they don't have --version.
+ ;;
+
+ tar)
+ if test -n "$run"; then
+ echo 1>&2 "ERROR: \`tar' requires --run"
+ exit 1
+ elif test "x$2" = "x--version" || test "x$2" = "x--help"; then
+ exit 1
+ fi
+ ;;
+
+ *)
+ if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
+ # We have it, but it failed.
+ exit 1
+ elif test "x$2" = "x--version" || test "x$2" = "x--help"; then
+ # Could not run --version or --help. This is probably someone
+ # running `$TOOL --version' or `$TOOL --help' to check whether
+ # $TOOL exists and not knowing $TOOL uses missing.
+ exit 1
+ fi
+ ;;
+esac
+
+# If it does not exist, or fails to run (possibly an outdated version),
+# try to emulate it.
+case $1 in
+ aclocal*)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified \`acinclude.m4' or \`${configure_ac}'. You might want
+ to install the \`Automake' and \`Perl' packages. Grab them from
+ any GNU archive site."
+ touch aclocal.m4
+ ;;
+
+ autoconf)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified \`${configure_ac}'. You might want to install the
+ \`Autoconf' and \`GNU m4' packages. Grab them from any GNU
+ archive site."
+ touch configure
+ ;;
+
+ autoheader)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified \`acconfig.h' or \`${configure_ac}'. You might want
+ to install the \`Autoconf' and \`GNU m4' packages. Grab them
+ from any GNU archive site."
+ files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}`
+ test -z "$files" && files="config.h"
+ touch_files=
+ for f in $files; do
+ case $f in
+ *:*) touch_files="$touch_files "`echo "$f" |
+ sed -e 's/^[^:]*://' -e 's/:.*//'`;;
+ *) touch_files="$touch_files $f.in";;
+ esac
+ done
+ touch $touch_files
+ ;;
+
+ automake*)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'.
+ You might want to install the \`Automake' and \`Perl' packages.
+ Grab them from any GNU archive site."
+ find . -type f -name Makefile.am -print |
+ sed 's/\.am$/.in/' |
+ while read f; do touch "$f"; done
+ ;;
+
+ autom4te)
+ echo 1>&2 "\
+WARNING: \`$1' is needed, but is $msg.
+ You might have modified some files without having the
+ proper tools for further handling them.
+ You can get \`$1' as part of \`Autoconf' from any GNU
+ archive site."
+
+ file=`echo "$*" | sed -n "$sed_output"`
+ test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"`
+ if test -f "$file"; then
+ touch $file
+ else
+ test -z "$file" || exec >$file
+ echo "#! /bin/sh"
+ echo "# Created by GNU Automake missing as a replacement of"
+ echo "# $ $@"
+ echo "exit 0"
+ chmod +x $file
+ exit 1
+ fi
+ ;;
+
+ bison|yacc)
+ echo 1>&2 "\
+WARNING: \`$1' $msg. You should only need it if
+ you modified a \`.y' file. You may need the \`Bison' package
+ in order for those modifications to take effect. You can get
+ \`Bison' from any GNU archive site."
+ rm -f y.tab.c y.tab.h
+ if test $# -ne 1; then
+ eval LASTARG="\${$#}"
+ case $LASTARG in
+ *.y)
+ SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'`
+ if test -f "$SRCFILE"; then
+ cp "$SRCFILE" y.tab.c
+ fi
+ SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'`
+ if test -f "$SRCFILE"; then
+ cp "$SRCFILE" y.tab.h
+ fi
+ ;;
+ esac
+ fi
+ if test ! -f y.tab.h; then
+ echo >y.tab.h
+ fi
+ if test ! -f y.tab.c; then
+ echo 'main() { return 0; }' >y.tab.c
+ fi
+ ;;
+
+ lex|flex)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified a \`.l' file. You may need the \`Flex' package
+ in order for those modifications to take effect. You can get
+ \`Flex' from any GNU archive site."
+ rm -f lex.yy.c
+ if test $# -ne 1; then
+ eval LASTARG="\${$#}"
+ case $LASTARG in
+ *.l)
+ SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'`
+ if test -f "$SRCFILE"; then
+ cp "$SRCFILE" lex.yy.c
+ fi
+ ;;
+ esac
+ fi
+ if test ! -f lex.yy.c; then
+ echo 'main() { return 0; }' >lex.yy.c
+ fi
+ ;;
+
+ help2man)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified a dependency of a manual page. You may need the
+ \`Help2man' package in order for those modifications to take
+ effect. You can get \`Help2man' from any GNU archive site."
+
+ file=`echo "$*" | sed -n "$sed_output"`
+ test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"`
+ if test -f "$file"; then
+ touch $file
+ else
+ test -z "$file" || exec >$file
+ echo ".ab help2man is required to generate this page"
+ exit 1
+ fi
+ ;;
+
+ makeinfo)
+ echo 1>&2 "\
+WARNING: \`$1' is $msg. You should only need it if
+ you modified a \`.texi' or \`.texinfo' file, or any other file
+ indirectly affecting the aspect of the manual. The spurious
+ call might also be the consequence of using a buggy \`make' (AIX,
+ DU, IRIX). You might want to install the \`Texinfo' package or
+ the \`GNU make' package. Grab either from any GNU archive site."
+ # The file to touch is that specified with -o ...
+ file=`echo "$*" | sed -n "$sed_output"`
+ test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"`
+ if test -z "$file"; then
+ # ... or it is the one specified with @setfilename ...
+ infile=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'`
+ file=`sed -n '
+ /^@setfilename/{
+ s/.* \([^ ]*\) *$/\1/
+ p
+ q
+ }' $infile`
+ # ... or it is derived from the source name (dir/f.texi becomes f.info)
+ test -z "$file" && file=`echo "$infile" | sed 's,.*/,,;s,.[^.]*$,,'`.info
+ fi
+ # If the file does not exist, the user really needs makeinfo;
+ # let's fail without touching anything.
+ test -f $file || exit 1
+ touch $file
+ ;;
+
+ tar)
+ shift
+
+ # We have already tried tar in the generic part.
+ # Look for gnutar/gtar before invocation to avoid ugly error
+ # messages.
+ if (gnutar --version > /dev/null 2>&1); then
+ gnutar "$@" && exit 0
+ fi
+ if (gtar --version > /dev/null 2>&1); then
+ gtar "$@" && exit 0
+ fi
+ firstarg="$1"
+ if shift; then
+ case $firstarg in
+ *o*)
+ firstarg=`echo "$firstarg" | sed s/o//`
+ tar "$firstarg" "$@" && exit 0
+ ;;
+ esac
+ case $firstarg in
+ *h*)
+ firstarg=`echo "$firstarg" | sed s/h//`
+ tar "$firstarg" "$@" && exit 0
+ ;;
+ esac
+ fi
+
+ echo 1>&2 "\
+WARNING: I can't seem to be able to run \`tar' with the given arguments.
+ You may want to install GNU tar or Free paxutils, or check the
+ command line arguments."
+ exit 1
+ ;;
+
+ *)
+ echo 1>&2 "\
+WARNING: \`$1' is needed, and is $msg.
+ You might have modified some files without having the
+ proper tools for further handling them. Check the \`README' file,
+ it often tells you about the needed prerequisites for installing
+ this package. You may also peek at any GNU archive site, in case
+ some other package would contain this missing \`$1' program."
+ exit 1
+ ;;
+esac
+
+exit 0
+
+# Local variables:
+# eval: (add-hook 'write-file-hooks 'time-stamp)
+# time-stamp-start: "scriptversion="
+# time-stamp-format: "%:y-%02m-%02d.%02H"
+# time-stamp-end: "$"
+# End:
diff --git a/stamp-vti b/stamp-vti
new file mode 100644
index 0000000..3245b32
--- /dev/null
+++ b/stamp-vti
@@ -0,0 +1,4 @@
+@set UPDATED 20 February 2008
+@set UPDATED-MONTH February 2008
+@set EDITION 2.0.2
+@set VERSION 2.0.2
diff --git a/stow b/stow
new file mode 100755
index 0000000..c68866b
--- /dev/null
+++ b/stow
@@ -0,0 +1,1794 @@
+#!/usr/bin/perl
+
+# GNU Stow - manage the installation of multiple software packages
+# Copyright (C) 1993, 1994, 1995, 1996 by Bob Glickstein
+# Copyright (C) 2000, 2001 Guillaume Morin
+# Copyright (C) 2007 Kahlil Hodgson
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+use strict;
+use warnings;
+
+require 5.005;
+use POSIX qw(getcwd);
+use Getopt::Long;
+
+my $Version = '2.0.2';
+my $ProgramName = $0;
+$ProgramName =~ s{.*/}{};
+
+# Verbosity rules:
+#
+# 0: errors only
+# > 0: print operations: LINK/UNLINK/MKDIR/RMDIR
+# > 1: print trace: stow/unstow package/contents/node
+# > 2: print trace detail: "_this_ already points to _that_"
+#
+# All output (except for version() and usage()) is to stderr to preserve
+# backward compatibility.
+
+# These are the defaults for command line options
+our %Option = (
+ help => 0,
+ conflicts => 0,
+ action => 'stow',
+ simulate => 0,
+ verbose => 0,
+ paranoid => 0,
+ dir => undef,
+ target => undef,
+ ignore => [],
+ override => [],
+ defer => [],
+);
+
+# This becomes static after option processing
+our $Stow_Path; # only used in the main loop and find_stowed_path()
+
+# Store conflicts during pre-processing
+our @Conflicts = ();
+
+# Store command line packages to stow (-S and -R)
+our @Pkgs_To_Stow = ();
+
+# Store command line packages to unstow (-D and -R)
+our @Pkgs_To_Delete = ();
+
+# The following structures are used by the abstractions that allow us to
+# defer operating on the filesystem until after all potential conflicts have
+# been assessed.
+
+# our @Tasks: list of operations to be performed (in order)
+# each element is a hash ref of the form
+# {
+# action => ...
+# type => ...
+# path => ... (unique)
+# source => ... (only for links)
+# }
+our @Tasks = ();
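+#
+# Illustrative only (not part of the original source): a link-creation task
+# for a hypothetical 'perl' package might look like
+#   { action => 'create', type => 'link',
+#     path => 'bin/perl', source => '../stow/perl/bin/perl' }
+# while a directory scheduled for removal during folding would be
+#   { action => 'remove', type => 'dir', path => 'share/man/man1' }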
+
+# my %Dir_Task_For: map a path to the corresponding directory task reference
+# This structure allows us to quickly determine if a path has an existing
+# directory task associated with it.
+our %Dir_Task_For = ();
+
+# my %Link_Task_For: map a path to the corresponding link task reference
+# This structure allows us to quickly determine if a path has an existing
+# link task associated with it.
+our %Link_Task_For = ();
+
+# NB: directory tasks and link tasks are NOT mutually exclusive
+
+# put the main loop in this block so we can load the
+# rest of the code as a module for testing
+if ( not caller() ) {
+
+ process_options();
+ set_stow_path();
+
+ # current dir is now the target directory
+
+ for my $package (@Pkgs_To_Delete) {
+ if (not -d join_paths($Stow_Path,$package)) {
+ error("The given package name ($package) is not in your stow path");
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing package $package...\n";
+ }
+ if ($Option{'compat'}) {
+ unstow_contents_orig(
+ join_paths($Stow_Path,$package), # path to package
+ '', # target is current_dir
+ );
+ }
+ else {
+ unstow_contents(
+ join_paths($Stow_Path,$package), # path to package
+ '', # target is current_dir
+ );
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing package $package...done\n";
+ }
+ }
+
+ for my $package (@Pkgs_To_Stow) {
+ if (not -d join_paths($Stow_Path,$package)) {
+ error("The given package name ($package) is not in your stow path");
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing package $package...\n";
+ }
+ stow_contents(
+ join_paths($Stow_Path,$package), # path to package
+ '', # target is current dir
+ join_paths($Stow_Path,$package), # source from target
+ );
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing package $package...done\n";
+ }
+ }
+
+ # --verbose: tell me what you are planning to do
+ # --simulate: don't execute planned operations
+ # --conflicts: just list any detected conflicts
+
+ if (scalar @Conflicts) {
+ warn "WARNING: conflicts detected.\n";
+ if ($Option{'conflicts'}) {
+ map { warn $_ } @Conflicts;
+ }
+ warn "WARNING: all operations aborted.\n";
+ }
+ else {
+ process_tasks();
+ }
+}
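+
+# Illustrative usage only (not part of the original source); the option names
+# match those documented in usage() below, the paths are hypothetical:
+#   cd /usr/local/stow && ./stow perl     # stow the 'perl' package
+#   ./stow -D emacs                       # unstow 'emacs'
+#   ./stow -n -v -R vim                   # simulate restowing 'vim', verbosely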
+
+
+#===== SUBROUTINE ===========================================================
+# Name : process_options()
+# Purpose : parse command line options and update the %Option hash
+# Parameters: none
+# Returns : n/a
+# Throws : a fatal error if a bad command line option is given
+# Comments : checks @ARGV for valid package names
+#============================================================================
+sub process_options {
+
+ get_defaults();
+ #$,="\n"; print @ARGV,"\n"; # for debugging rc file
+
+ Getopt::Long::config('no_ignore_case', 'bundling', 'permute');
+ GetOptions(
+ 'v' => sub { $Option{'verbose'}++ },
+ 'verbose=s' => sub { $Option{'verbose'} = $_[1] },
+ 'h|help' => sub { $Option{'help'} = '1' },
+ 'n|no|simulate' => sub { $Option{'simulate'} = '1' },
+ 'c|conflicts' => sub { $Option{'conflicts'} = '1' },
+ 'V|version' => sub { $Option{'version'} = '1' },
+ 'p|compat' => sub { $Option{'compat'} = '1' },
+ 'd|dir=s' => sub { $Option{'dir'} = $_[1] },
+ 't|target=s' => sub { $Option{'target'} = $_[1] },
+
+ # clean and pre-compile any regex's at parse time
+ 'ignore=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'ignore'}}, qr($regex\z)
+ },
+
+ 'override=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'override'}}, qr(\A$regex)
+ },
+
+ 'defer=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'defer'}}, qr(\A$regex) ;
+ },
+
+ # a little craziness so we can do different actions on the same line:
+ # a -D, -S, or -R changes the action that will be performed on the
+ # package arguments that follow it.
+ 'D|delete' => sub { $Option{'action'} = 'delete' },
+ 'S|stow' => sub { $Option{'action'} = 'stow' },
+ 'R|restow' => sub { $Option{'action'} = 'restow' },
+ '<>' =>
+ sub {
+ if ($Option{'action'} eq 'restow') {
+ push @Pkgs_To_Delete, $_[0];
+ push @Pkgs_To_Stow, $_[0];
+ }
+ elsif ($Option{'action'} eq 'delete') {
+ push @Pkgs_To_Delete, $_[0];
+ }
+ else {
+ push @Pkgs_To_Stow, $_[0];
+ }
+ },
+ ) or usage();
+
+ #print "$Option{'dir'}\n"; print "$Option{'target'}\n"; exit;
+
+ # clean any leading and trailing whitespace in paths
+ if ($Option{'dir'}) {
+ $Option{'dir'} =~ s/\A +//;
+ $Option{'dir'} =~ s/ +\z//;
+ }
+ if ($Option{'target'}) {
+ $Option{'target'} =~ s/\A +//;
+ $Option{'target'} =~ s/ +\z//;
+ }
+
+ if ($Option{'help'}) {
+ usage();
+ }
+ if ($Option{'version'}) {
+ version();
+ }
+ if ($Option{'conflicts'}) {
+ $Option{'simulate'} = 1;
+ }
+
+ if (not scalar @Pkgs_To_Stow and not scalar @Pkgs_To_Delete ) {
+ usage("No packages named");
+ }
+
+ # check package arguments
+ for my $package ( (@Pkgs_To_Stow, @Pkgs_To_Delete) ) {
+ $package =~ s{/+$}{}; # delete trailing slashes
+ if ( $package =~ m{/} ) {
+ error("Slashes are not permitted in package names");
+ }
+ }
+
+ return;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : get_defaults()
+# Purpose : search for default settings in any .stowrc files
+# Parameters: none
+# Returns : n/a
+# Throws : no exceptions
+# Comments : prepends the contents of '~/.stowrc' and '.stowrc' to the command
+# : line so they get parsed just like normal arguments. (This was
+# : hacked in so that Emil and I could set different preferences).
+#=============================================================================
+sub get_defaults {
+
+ my @defaults = ();
+ for my $file ($ENV{'HOME'}.'/.stowrc','.stowrc') {
+ if (-r $file ) {
+ warn "Loading defaults from $file\n";
+ open my $FILE, '<', $file
+ or die "Could not open $file for reading\n";
+ while (my $line = <$FILE> ){
+ chomp $line;
+ push @defaults, split " ", $line;
+ }
+ close $FILE or die "Could not close open file: $file\n";
+ }
+ }
+ # doing this inline does not seem to work
+ unshift @ARGV, @defaults;
+ return;
+}
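+
+# Illustrative only (not part of the original source): a ~/.stowrc providing
+# default arguments might contain, one option per line,
+#   --dir=/usr/local/stow
+#   --target=/usr/local
+#   --ignore=~
+# each line is split on whitespace and prepended to @ARGV by get_defaults().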
+
+#===== SUBROUTINE ===========================================================
+# Name : usage()
+# Purpose : print program usage message and exit
+# Parameters: msg => string to prepend to the usage message
+# Returns : n/a
+# Throws : n/a
+# Comments : if 'msg' is given, then exit with non-zero status
+#============================================================================
+sub usage {
+ my ($msg) = @_;
+
+ if ($msg) {
+ print "$ProgramName: $msg\n\n";
+ }
+
+ print <<"EOT";
+$ProgramName (GNU Stow) version $Version
+
+SYNOPSIS:
+
+ $ProgramName [OPTION ...] [-D|-S|-R] PACKAGE ... [-D|-S|-R] PACKAGE ...
+
+OPTIONS:
+
+ -n, --no Do not actually make any filesystem changes
+ -c, --conflicts Scan for and print any conflicts, implies -n
+ -d DIR, --dir=DIR Set stow dir to DIR (default is current dir)
+ -t DIR, --target=DIR Set target to DIR (default is parent of stow dir)
+ -v, --verbose[=N] Increase verbosity (levels are 0,1,2,3;
+ -v or --verbose adds 1; --verbose=N sets level)
+
+ -S, --stow Stow the package names that follow this option
+ -D, --delete Unstow the package names that follow this option
+ -R, --restow Restow (like stow -D followed by stow -S)
+ -p, --compat use legacy algorithm for unstowing
+
+ --ignore=REGEX ignore files ending in this perl regex
+ --defer=REGEX defer stowing files beginning with this perl regex
+ if the file is already stowed to another package
+ --override=REGEX force stowing files beginning with this perl regex
+ if the file is already stowed to another package
+ -V, --version Show stow version number
+ -h, --help Show this help
+EOT
+ exit( $msg ? 1 : 0 );
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : set_stow_path()
+# Purpose : find the relative path to the stow directory
+# Parameters: none
+# Returns : a relative path
+# Throws : fatal error if either the default directories or those set by
+# : the command line flags are not valid.
+# Comments : This sets the current working directory to $Option{target}
+#============================================================================
+sub set_stow_path {
+
+ # Changing dirs helps a lot when soft links are used
+ # Also prevents problems when 'stow_dir' or 'target' are
+ # supplied as relative paths (FIXME: examples?)
+
+ my $current_dir = getcwd();
+
+ # default stow dir is the current directory
+ if (not $Option{'dir'} ) {
+ $Option{'dir'} = getcwd();
+ }
+ if (not chdir($Option{'dir'})) {
+ error("Cannot chdir to target tree: '$Option{'dir'}'");
+ }
+ my $stow_dir = getcwd();
+
+ # back to start in case target is relative
+ if (not chdir($current_dir)) {
+ error("Your directory does not seem to exist anymore");
+ }
+
+ # default target is the parent of the stow directory
+ if (not $Option{'target'}) {
+ $Option{'target'} = parent($Option{'dir'});
+ }
+ if (not chdir($Option{'target'})) {
+ error("Cannot chdir to target tree: $Option{'target'}");
+ }
+
+ # set our one global
+ $Stow_Path = relative_path(getcwd(),$stow_dir);
+
+ if ($Option{'verbose'} > 1) {
+ warn "current dir is ".getcwd()."\n";
+ warn "stow dir path is $Stow_Path\n";
+ }
+}
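+
+# Illustrative only (not part of the original source): with --dir set to
+# /usr/local/stow and --target left to default to its parent /usr/local,
+# set_stow_path() chdirs to /usr/local and $Stow_Path becomes simply 'stow'.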
+
+#===== SUBROUTINE ===========================================================
+# Name : stow_contents()
+# Purpose : stow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $source => relative path to symlink source from the dir of target
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : stow_node() and stow_contents() are mutually recursive
+# : $source and $target are used for creating the symlink
+# : $path is used for folding/unfolding trees as necessary
+#============================================================================
+sub stow_contents {
+
+ my ($path, $target, $source) = @_;
+
+ if ($Option{'verbose'} > 1){
+ warn "Stowing contents of $path\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- $target => $source\n";
+ }
+
+ if (not -d $path) {
+ error("stow_contents() called on a non-directory: $path");
+ }
+
+ opendir my $DIR, $path
+ or error("cannot read directory: $path");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ stow_node(
+ join_paths($path, $node), # path
+ join_paths($target,$node), # target
+ join_paths($source,$node), # source
+ );
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : stow_node()
+# Purpose : stow the given node
+# Parameters: $path => relative path to source node from the current directory
+# : $target => relative path to symlink target from the current directory
+# : $source => relative path to symlink source from the dir of target
+# Returns : n/a
+# Throws : fatal exception if a conflict arises
+# Comments : stow_node() and stow_contents() are mutually recursive
+# : $source and $target are used for creating the symlink
+# : $path is used for folding/unfolding trees as necessary
+#============================================================================
+sub stow_node {
+
+ my ($path, $target, $source) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing $path\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- $target => $source\n";
+ }
+
+ # don't try to stow absolute symlinks (they can't be unstowed)
+ if (-l $source) {
+ my $second_source = read_a_link($source);
+ if ($second_source =~ m{\A/} ) {
+ conflict("source is an absolute symlink $source => $second_source");
+ if ($Option{'verbose'} > 2) {
+ warn "absolute symlinks cannot be unstowed";
+ }
+ return;
+ }
+ }
+
+ # does the target already exist?
+ if (is_a_link($target)) {
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- Evaluate existing link: $target => $old_source\n";
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ conflict("existing target is not owned by stow: $target");
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything?
+ if (is_a_node($old_path)) {
+ if ($old_source eq $source) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- Skipping $target as it already points to $source\n";
+ }
+ }
+ elsif (defer($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- deferring installation of: $target\n";
+ }
+ }
+ elsif (override($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- overriding installation of: $target\n";
+ }
+ do_unlink($target);
+ do_link($source,$target);
+ }
+ elsif (is_a_dir(join_paths(parent($target),$old_source)) &&
+ is_a_dir(join_paths(parent($target),$source)) ) {
+
+ # if the existing link points to a directory,
+ # and the proposed new link points to a directory,
+ # then we can unfold the tree at that point
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Unfolding $target\n";
+ }
+ do_unlink($target);
+ do_mkdir($target);
+ stow_contents($old_path, $target, join_paths('..',$old_source));
+ stow_contents($path, $target, join_paths('..',$source));
+ }
+ else {
+ conflict(
+ q{existing target is stowed to a different package: %s => %s},
+ $target,
+ $old_source,
+ );
+ }
+ }
+ else {
+ # the existing link is invalid, so replace it with a good link
+ if ($Option{'verbose'} > 2){
+ warn "--- replacing invalid link: $path\n";
+ }
+ do_unlink($target);
+ do_link($source, $target);
+ }
+ }
+ elsif (is_a_node($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("--- Evaluate existing node: $target\n");
+ }
+ if (is_a_dir($target)) {
+ stow_contents($path, $target, join_paths('..',$source));
+ }
+ else {
+ conflict(
+ qq{existing target is neither a link nor a directory: $target}
+ );
+ }
+ }
+ else {
+ do_link($source, $target);
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_contents_orig()
+# Purpose : unstow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+# : Here we traverse the target tree, rather than the source tree.
+#============================================================================
+sub unstow_contents_orig {
+
+ my ($path, $target) = @_;
+
+ # don't try to remove anything under a stow directory
+ if ($target eq $Stow_Path or -e "$target/.stow" or -e "$target/.nonstow") {
+ return;
+ }
+ if ($Option{'verbose'} > 1){
+ warn "Unstowing in $target\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- path is $path\n";
+ }
+ if (not -d $target) {
+ error("unstow_contents() called on a non-directory: $target");
+ }
+
+ opendir my $DIR, $target
+ or error("cannot read directory: $target");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ unstow_node_orig(
+ join_paths($path, $node), # path
+ join_paths($target, $node), # target
+ );
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_node_orig()
+# Purpose : unstow the given node
+# Parameters: $path => relative path to source node from the current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : fatal error if a conflict arises
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+#============================================================================
+sub unstow_node_orig {
+
+ my ($path, $target) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing $target\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- path is $path\n";
+ }
+
+ # does the target exist
+ if (is_a_link($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing link: $target\n");
+ }
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ # skip links not owned by stow
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything
+ if (-e $old_path) {
+ # does the link point to the right place
+ if ($old_path eq $path) {
+ do_unlink($target);
+ }
+ elsif (override($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("--- overriding installation of: $target\n");
+ }
+ do_unlink($target);
+ }
+ # else leave it alone
+ }
+ else {
+ if ($Option{'verbose'} > 2){
+ warn "--- removing invalid link into a stow directory: $path\n";
+ }
+ do_unlink($target);
+ }
+ }
+ elsif (-d $target) {
+ unstow_contents_orig($path, $target);
+
+ # this action may have made the parent directory foldable
+ if (my $parent = foldable($target)) {
+ fold_tree($target,$parent);
+ }
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_contents()
+# Purpose : unstow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+# : Here we traverse the target tree, rather than the source tree.
+#============================================================================
+sub unstow_contents {
+
+ my ($path, $target) = @_;
+
+ # don't try to remove anything under a stow directory
+ if ($target eq $Stow_Path or -e "$target/.stow") {
+ return;
+ }
+ if ($Option{'verbose'} > 1){
+ warn "Unstowing in $target\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- path is $path\n";
+ }
+ if (not -d $path) {
+ error("unstow_contents() called on a non-directory: $path");
+ }
+
+ opendir my $DIR, $path
+ or error("cannot read directory: $path");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ unstow_node(
+ join_paths($path, $node), # path
+ join_paths($target, $node), # target
+ );
+ }
+ if (-d $target) {
+ cleanup_invalid_links($target);
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_node()
+# Purpose : unstow the given node
+# Parameters: $path => relative path to source node from the current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : fatal error if a conflict arises
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+#============================================================================
+sub unstow_node {
+
+ my ($path, $target) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing $path\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- target is $target\n";
+ }
+
+ # does the target exist
+ if (is_a_link($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing link: $target\n");
+ }
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+
+ if ($old_source =~ m{\A/}) {
+ warn "ignoring a absolute symlink: $target => $old_source\n";
+ return; # XXX #
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ conflict(
+ qq{existing target is not owned by stow: $target => $old_source}
+ );
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything
+ if (-e $old_path) {
+ # does the link point to the right place
+ if ($old_path eq $path) {
+ do_unlink($target);
+ }
+
+ # XXX we quietly ignore links that are stowed to a different
+ # package.
+
+ #elsif (defer($target)) {
+ # if ($Option{'verbose'} > 2) {
+ # warn("--- deferring to installation of: $target\n");
+ # }
+ #}
+ #elsif (override($target)) {
+ # if ($Option{'verbose'} > 2) {
+ # warn("--- overriding installation of: $target\n");
+ # }
+ # do_unlink($target);
+ #}
+ #else {
+ # conflict(
+ # q{existing target is stowed to a different package: %s => %s},
+ # $target,
+ # $old_source
+ # );
+ #}
+ }
+ else {
+ if ($Option{'verbose'} > 2){
+ warn "--- removing invalid link into a stow directory: $path\n";
+ }
+ do_unlink($target);
+ }
+ }
+ elsif (-e $target) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing node: $target\n");
+ }
+ if (-d $target) {
+ unstow_contents($path, $target);
+
+ # this action may have made the parent directory foldable
+ if (my $parent = foldable($target)) {
+ fold_tree($target,$parent);
+ }
+ }
+ else {
+ conflict(
+ qq{existing target is neither a link nor a directory: $target},
+ );
+ }
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : find_stowed_path()
+# Purpose : determine if the given link points to a member of a
+# : stowed package
+# Parameters: $target => path to a symbolic link under current directory
+# : $source => where that link points to
+# Returns : relative path to stowed node (from the current directory)
+# : or '' if link is not owned by stow
+# Throws : fatal exception if link is unreadable
+# Comments : allow for stow dir not being under target dir
+# : we could put more logic under here for multiple stow dirs
+#============================================================================
+sub find_stowed_path {
+
+ my ($target, $source) = @_;
+
+ # evaluate softlink relative to its target
+ my $path = join_paths(parent($target), $source);
+
+ # search for .stow files
+ my $dir = '';
+ for my $part (split m{/+}, $path) {
+ $dir = join_paths($dir,$part);
+ if (-f "$dir/.stow") {
+ return $path;
+ }
+ }
+
+ # compare with $Stow_Path
+ my @path = split m{/+}, $path;
+ my @stow_path = split m{/+}, $Stow_Path;
+
+ # strip off common prefixes
+ while ( @path && @stow_path ) {
+ if ( (shift @path) ne (shift @stow_path) ) {
+ return '';
+ }
+ }
+ if (@stow_path) {
+ # @path is not under @stow_path
+ return '';
+ }
+
+ return $path
+}
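+
+# Illustrative only (not part of the original source), assuming $Stow_Path is
+# 'stow': a link bin/perl => ../stow/perl/bin/perl resolves (relative to the
+# link's parent 'bin') to 'stow/perl/bin/perl', which shares the 'stow'
+# prefix and is therefore reported as owned by stow; a link whose resolved
+# path falls outside $Stow_Path yields ''.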
+
+#===== SUBROUTINE ============================================================
+# Name : cleanup_invalid_links()
+# Purpose : clean up invalid links that may block folding
+# Parameters: $dir => path to directory to check
+# Returns : n/a
+# Throws : no exceptions
+# Comments : removing files from a stowed package is probably a bad practice
+# : so this kind of clean up is not _really_ stow's responsibility;
+# : however, failing to clean up can block tree folding, so we'll do
+# : it anyway
+#=============================================================================
+sub cleanup_invalid_links {
+
+ my ($dir) = @_;
+
+ if (not -d $dir) {
+ error("cleanup_invalid_links() called with a non-directory: $dir");
+ }
+
+ opendir my $DIR, $dir
+ or error("cannot read directory: $dir");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+
+ my $node_path = join_paths($dir,$node);
+
+ if (-l $node_path and not exists $Link_Task_For{$node_path}) {
+
+ # where is the link pointing?
+ # (don't use read_a_link here)
+ my $source = readlink($node_path);
+ if (not $source) {
+ error("Could not read link $node_path");
+ }
+
+ if (
+ not -e join_paths($dir,$source) and # bad link
+ find_stowed_path($node_path,$source) # owned by stow
+ ){
+ if ($Option{'verbose'} > 2) {
+ warn "--- removing stale link: $node_path => ",
+ join_paths($dir,$source), "\n";
+ }
+ do_unlink($node_path);
+ }
+ }
+ }
+ return;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : foldable()
+# Purpose : determine if a tree can be folded
+# Parameters: target => path to a directory
+# Returns : path to the parent dir iff the tree can be safely folded
+# Throws : n/a
+# Comments : the path returned is relative to the parent of $target,
+# : that is, it can be used as the source for a replacement symlink
+#============================================================================
+sub foldable {
+
+ my ($target) = @_;
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Is $target foldable?\n";
+ }
+
+ opendir my $DIR, $target
+ or error(qq{Cannot read directory "$target" ($!)\n});
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ my $parent = '';
+ NODE:
+ for my $node (@listing) {
+
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+
+ my $path = join_paths($target,$node);
+
+ # skip nodes scheduled for removal
+ next NODE if not is_a_node($path);
+
+ # if it's not a link then we can't fold its parent
+ return '' if not is_a_link($path);
+
+ # where is the link pointing?
+ my $source = read_a_link($path);
+ if (not $source) {
+ error("Could not read link $path");
+ }
+ if ($parent eq '') {
+ $parent = parent($source)
+ }
+ elsif ($parent ne parent($source)) {
+ return '';
+ }
+ }
+ return '' if not $parent;
+
+ # if we get here then all nodes inside $target are links, and those links
+ # point to nodes inside the same directory.
+
+ # chop off the leading '..' to get the path to the common parent directory
+ # relative to the parent of our $target
+ $parent =~ s{\A\.\./}{};
+
+ # if the resulting path is owned by stow, we can fold it
+ if (find_stowed_path($target,$parent)) {
+ if ($Option{'verbose'} > 2){
+ warn "--- $target is foldable\n";
+ }
+ return $parent;
+ }
+ else {
+ return '';
+ }
+}
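+
+# Illustrative only (not part of the original source), assuming $Stow_Path is
+# 'stow': if share/info contains nothing but links of the form
+#   share/info/stow.info => ../../stow/stow/share/info/stow.info
+# then every link shares the parent '../../stow/stow/share/info'; foldable()
+# strips the first '../' and returns '../stow/stow/share/info', which
+# fold_tree() then uses as the source of a single replacement link.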
+
+#===== SUBROUTINE ===========================================================
+# Name : fold_tree()
+# Purpose : fold the given tree
+# Parameters: $source => link to the folded tree source
+# : $target => directory that we will replace with a link to $source
+# Returns : n/a
+# Throws : none
+# Comments : only called iff foldable() is true so we can remove some checks
+#============================================================================
+sub fold_tree {
+
+ my ($target,$source) = @_;
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Folding tree: $target => $source\n";
+ }
+
+ opendir my $DIR, $target
+ or error(qq{Cannot read directory "$target" ($!)\n});
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if not is_a_node(join_paths($target,$node));
+ do_unlink(join_paths($target,$node));
+ }
+ do_rmdir($target);
+ do_link($source, $target);
+ return;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : conflict()
+# Purpose : handle conflicts in stow operations
+# Parameters: a sprintf format string and its arguments describing the conflict
+# Returns : n/a
+# Throws : nothing; conflicts are recorded in @Conflicts and reported later
+# Comments : indicates what type of conflict it is
+#============================================================================
+sub conflict {
+ my ( $format, @args ) = @_;
+
+ my $message = sprintf($format, @args);
+
+ if ($Option{'verbose'}) {
+ warn qq{CONFLICT: $message\n};
+ }
+ push @Conflicts, qq{CONFLICT: $message\n};
+ return;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : ignore
+# Purpose : determine if the given path matches a regex in our ignore list
+# Parameters: $path => path to check against the ignore list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub ignore {
+
+ my ($path) = @_;
+
+ for my $suffix (@{$Option{'ignore'}}) {
+ return 1 if $path =~ m/$suffix/;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : defer
+# Purpose : determine if the given path matches a regex in our defer list
+# Parameters: $path => path to check against the defer list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub defer {
+
+ my ($path) = @_;
+
+ for my $prefix (@{$Option{'defer'}}) {
+ return 1 if $path =~ m/$prefix/;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : override
+# Purpose : determine if the given path matches a regex in our override list
+# Parameters: $path => path to check against the override list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub override {
+
+ my ($path) = @_;
+
+ for my $regex (@{$Option{'override'}}) {
+ return 1 if $path =~ m/$regex/;
+ }
+ return 0;
+}
+
+##############################################################################
+#
+# The following code provides the abstractions that allow us to defer operating
+# on the filesystem until after all potential conflicts have been assessed.
+#
+##############################################################################
+
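+# A rough illustration (hypothetical paths): a deferred "create a symlink"
+# operation is recorded as a task hash and indexed by its path, e.g.
+#
+#     my $task = {
+#         action => 'create',
+#         type   => 'link',
+#         path   => 'bin/perl',              # link to create
+#         source => '../stow/perl/bin/perl', # what it should point to
+#     };
+#     push @Tasks, $task;
+#     $Link_Task_For{'bin/perl'} = $task;
+#
+# Nothing touches the filesystem until process_tasks() replays @Tasks in order.
+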
+#===== SUBROUTINE ===========================================================
+# Name : process_tasks()
+# Purpose : process each task in the @Tasks list
+# Parameters: none
+# Returns : n/a
+# Throws : fatal error if @Tasks is corrupted or a task fails
+# Comments  : tasks involve either creating or deleting directories and symlinks
+# : an action is set to 'skip' if it is found to be redundant
+#============================================================================
+sub process_tasks {
+
+ if ($Option{'verbose'} > 1) {
+ warn "Processing tasks...\n"
+ }
+
+ # strip out all tasks with a skip action
+ @Tasks = grep { $_->{'action'} ne 'skip' } @Tasks;
+
+ if (not scalar @Tasks) {
+ warn "There are no outstanding operations to perform.\n";
+ return;
+ }
+ if ($Option{'simulate'}) {
+ warn "WARNING: simulating so all operations are deferred.\n";
+ return;
+ }
+
+ for my $task (@Tasks) {
+
+ if ( $task->{'action'} eq 'create' ) {
+ if ( $task->{'type'} eq 'dir' ) {
+ mkdir($task->{'path'}, 0777)
+ or error(qq(Could not create directory: $task->{'path'}));
+ }
+ elsif ( $task->{'type'} eq 'link' ) {
+ symlink $task->{'source'}, $task->{'path'}
+ or error(
+ q(Could not create symlink: %s => %s),
+ $task->{'path'},
+ $task->{'source'}
+ );
+ }
+ else {
+ internal_error(qq(bad task type: $task->{'type'}));
+ }
+ }
+ elsif ( $task->{'action'} eq 'remove' ) {
+ if ( $task->{'type'} eq 'dir' ) {
+ rmdir $task->{'path'}
+ or error(qq(Could not remove directory: $task->{'path'}));
+ }
+ elsif ( $task->{'type'} eq 'link' ) {
+ unlink $task->{'path'}
+ or error(qq(Could not remove link: $task->{'path'}));
+ }
+ else {
+ internal_error(qq(bad task type: $task->{'type'}));
+ }
+ }
+ else {
+ internal_error(qq(bad task action: $task->{'action'}));
+ }
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Processing tasks...done\n"
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_link()
+# Purpose : is the given path a current or planned link
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing link is scheduled for removal
+#           : and true if a non-existent link is scheduled for creation
+#============================================================================
+sub is_a_link {
+ my ($path) = @_;
+
+
+ if ( exists $Link_Task_For{$path} ) {
+
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+ elsif (-l $path) {
+        # check if any of its parents are links scheduled for removal
+ # (need this for edge case during unfolding)
+ my $parent = '';
+ for my $part (split m{/+}, $path ) {
+ $parent = join_paths($parent,$part);
+ if ( exists $Link_Task_For{$parent} ) {
+ if ($Link_Task_For{$parent}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+ }
+ return 1;
+ }
+ return 0;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_dir()
+# Purpose : is the given path a current or planned directory
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing directory is scheduled for removal
+# : and true if a non-existent directory is scheduled for creation
+# : we also need to be sure we are not just following a link
+#============================================================================
+sub is_a_dir {
+ my ($path) = @_;
+
+ if ( exists $Dir_Task_For{$path} ) {
+ my $action = $Dir_Task_For{$path}->{'action'};
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+ # are we really following a link that is scheduled for removal
+ my $prefix = '';
+ for my $part (split m{/+}, $path) {
+ $prefix = join_paths($prefix,$part);
+ if (exists $Link_Task_For{$prefix} and
+ $Link_Task_For{$prefix}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+
+ if (-d $path) {
+ return 1;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_node()
+# Purpose : is the given path a current or planned node
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing node is scheduled for removal
+# : true if a non-existent node is scheduled for creation
+# : we also need to be sure we are not just following a link
+#============================================================================
+sub is_a_node {
+ my ($path) = @_;
+
+ if ( exists $Link_Task_For{$path} ) {
+
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+ if ( exists $Dir_Task_For{$path} ) {
+
+ my $action = $Dir_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+ # are we really following a link that is scheduled for removal
+ my $prefix = '';
+ for my $part (split m{/+}, $path) {
+ $prefix = join_paths($prefix,$part);
+ if ( exists $Link_Task_For{$prefix} and
+ $Link_Task_For{$prefix}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+
+ if (-e $path) {
+ return 1;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : read_a_link()
+# Purpose : return the source of a current or planned link
+# Parameters: $path => path to the link target
+# Returns : a string
+# Throws : fatal exception if the given path is not a current or planned
+# : link
+# Comments : none
+#============================================================================
+sub read_a_link {
+
+ my ($path) = @_;
+
+ if ( exists $Link_Task_For{$path} ) {
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'create') {
+ return $Link_Task_For{$path}->{'source'};
+ }
+ elsif ($action eq 'remove') {
+ internal_error(
+ "read_a_link() passed a path that is scheduled for removal: $path"
+ );
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+ elsif (-l $path) {
+        my $source = readlink $path
+            or error("Could not read link: $path");
+        return $source;
+ }
+ internal_error("read_a_link() passed a non link path: $path\n");
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_link()
+# Purpose : wrap 'link' operation for later processing
+# Parameters: $oldfile => what the new symlink will point to
+#           : $newfile => path of the symlink to create
+# Returns : n/a
+# Throws : error if this clashes with an existing planned operation
+# Comments : cleans up operations that undo previous operations
+#============================================================================
+sub do_link {
+
+ my ( $oldfile, $newfile ) = @_;
+
+ if ( exists $Dir_Task_For{$newfile} ) {
+
+ my $task_ref = $Dir_Task_For{$newfile};
+
+ if ( $task_ref->{'action'} eq 'create' ) {
+ if ($task_ref->{'type'} eq 'dir') {
+ internal_error(
+ "new link (%s => %s ) clashes with planned new directory",
+ $newfile,
+ $oldfile,
+ );
+ }
+ }
+ elsif ( $task_ref->{'action'} eq 'remove' ) {
+ # we may need to remove a directory before creating a link so continue;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ( exists $Link_Task_For{$newfile} ) {
+
+ my $task_ref = $Link_Task_For{$newfile};
+
+ if ( $task_ref->{'action'} eq 'create' ) {
+ if ( $task_ref->{'source'} ne $oldfile ) {
+ internal_error(
+ "new link clashes with planned new link: %s => %s",
+ $task_ref->{'path'},
+ $task_ref->{'source'},
+ )
+ }
+ else {
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile (duplicates previous action)\n";
+ }
+ return;
+ }
+ }
+ elsif ( $task_ref->{'action'} eq 'remove' ) {
+ if ( $task_ref->{'source'} eq $oldfile ) {
+ # no need to remove a link we are going to recreate
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile (reverts previous action)\n";
+ }
+ $Link_Task_For{$newfile}->{'action'} = 'skip';
+ delete $Link_Task_For{$newfile};
+ return;
+ }
+ # we may need to remove a link to replace it so continue
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ # creating a new link
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile\n";
+ }
+ my $task = {
+ action => 'create',
+ type => 'link',
+ path => $newfile,
+ source => $oldfile,
+ };
+ push @Tasks, $task;
+ $Link_Task_For{$newfile} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_unlink()
+# Purpose : wrap 'unlink' operation for later processing
+# Parameters: $file => the file to unlink
+# Returns : n/a
+# Throws : error if this clashes with an existing planned operation
+# Comments : will remove an existing planned link
+#============================================================================
+sub do_unlink {
+
+ my ($file) = @_;
+
+ if (exists $Link_Task_For{$file} ) {
+ my $task_ref = $Link_Task_For{$file};
+ if ( $task_ref->{'action'} eq 'remove' ) {
+ if ($Option{'verbose'}) {
+ warn "UNLINK: $file (duplicates previous action)\n";
+ }
+ return;
+ }
+ elsif ( $task_ref->{'action'} eq 'create' ) {
+            # no need to create a link and then remove it
+ if ($Option{'verbose'}) {
+ warn "UNLINK: $file (reverts previous action)\n";
+ }
+ $Link_Task_For{$file}->{'action'} = 'skip';
+ delete $Link_Task_For{$file};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+    if ( exists $Dir_Task_For{$file} and $Dir_Task_For{$file}->{'action'} eq 'create' ) {
+ internal_error(
+ "new unlink operation clashes with planned operation: %s dir %s",
+ $Dir_Task_For{$file}->{'action'},
+ $file
+ );
+ }
+
+ # remove the link
+ if ($Option{'verbose'}) {
+ #warn "UNLINK: $file (".(caller())[2].")\n";
+ warn "UNLINK: $file\n";
+ }
+
+ my $source = readlink $file or error("could not readlink $file");
+
+ my $task = {
+ action => 'remove',
+ type => 'link',
+ path => $file,
+ source => $source,
+ };
+ push @Tasks, $task;
+ $Link_Task_For{$file} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_mkdir()
+# Purpose : wrap 'mkdir' operation
+# Parameters: $dir => the directory to create
+# Returns   : n/a
+# Throws    : fatal exception if operation fails
+# Comments  : outputs a message if 'verbose' option is set
+#           : does not perform operation if 'simulate' option is set
+#           : cleans up operations that undo previous operations
+#============================================================================
+sub do_mkdir {
+ my ($dir) = @_;
+
+ if ( exists $Link_Task_For{$dir} ) {
+
+ my $task_ref = $Link_Task_For{$dir};
+
+ if ($task_ref->{'action'} eq 'create') {
+ internal_error(
+ "new dir clashes with planned new link (%s => %s)",
+ $task_ref->{'path'},
+ $task_ref->{'source'},
+ );
+ }
+ elsif ($task_ref->{'action'} eq 'remove') {
+ # may need to remove a link before creating a directory so continue
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ( exists $Dir_Task_For{$dir} ) {
+
+ my $task_ref = $Dir_Task_For{$dir};
+
+ if ($task_ref->{'action'} eq 'create') {
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir (duplicates previous action)\n";
+ }
+ return;
+ }
+ elsif ($task_ref->{'action'} eq 'remove') {
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir (reverts previous action)\n";
+ }
+ $Dir_Task_For{$dir}->{'action'} = 'skip';
+ delete $Dir_Task_For{$dir};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir\n";
+ }
+ my $task = {
+ action => 'create',
+ type => 'dir',
+ path => $dir,
+ source => undef,
+ };
+ push @Tasks, $task;
+ $Dir_Task_For{$dir} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_rmdir()
+# Purpose : wrap 'rmdir' operation
+# Parameters: $dir => the directory to remove
+# Returns : n/a
+# Throws : fatal exception if operation fails
+# Comments : outputs a message if 'verbose' option is set
+# : does not perform operation if 'simulate' option is set
+#============================================================================
+sub do_rmdir {
+ my ($dir) = @_;
+
+ if (exists $Link_Task_For{$dir} ) {
+ my $task_ref = $Link_Task_For{$dir};
+ internal_error(
+ "rmdir clashes with planned operation: %s link %s => %s",
+ $task_ref->{'action'},
+ $task_ref->{'path'},
+ $task_ref->{'source'}
+ );
+ }
+
+ if (exists $Dir_Task_For{$dir} ) {
+        my $task_ref = $Dir_Task_For{$dir};
+
+ if ($task_ref->{'action'} eq 'remove' ) {
+ if ($Option{'verbose'}) {
+ warn "RMDIR $dir (duplicates previous action)\n";
+ }
+ return;
+ }
+ elsif ($task_ref->{'action'} eq 'create' ) {
+ if ($Option{'verbose'}) {
+ warn "MKDIR $dir (reverts previous action)\n";
+ }
+            $Dir_Task_For{$dir}->{'action'} = 'skip';
+            delete $Dir_Task_For{$dir};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ($Option{'verbose'}) {
+ warn "RMDIR $dir\n";
+ }
+ my $task = {
+ action => 'remove',
+ type => 'dir',
+ path => $dir,
+ source => '',
+ };
+ push @Tasks, $task;
+ $Dir_Task_For{$dir} = $task;
+
+ return;
+}
+
+#############################################################################
+#
+# General Utilities: nothing stow specific here.
+#
+#############################################################################
+
+#===== SUBROUTINE ============================================================
+# Name : strip_quotes
+# Purpose : remove matching outer quotes from the given string
+# Parameters: $string => string to strip
+# Returns   : the string with matching outer quotes removed
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub strip_quotes {
+
+ my ($string) = @_;
+
+ if ($string =~ m{\A\s*'(.*)'\s*\z}) {
+ return $1;
+ }
+ elsif ($string =~ m{\A\s*"(.*)"\s*\z}) {
+ return $1
+ }
+ return $string;
+}
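+# For example, strip_quotes(q{'\.orig'}) and strip_quotes(q{"\.orig"}) both
+# return '\.orig', while a string without matching outer quotes is returned
+# unchanged.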
+
+#===== SUBROUTINE ===========================================================
+# Name : relative_path()
+# Purpose : find the relative path between two given paths
+# Parameters: path1 => a directory path
+# : path2 => a directory path
+# Returns : path2 relative to path1
+# Throws : n/a
+# Comments : only used once by main interactive routine
+# : factored out for testing
+#============================================================================
+sub relative_path {
+
+ my ($path1, $path2) = @_;
+
+ my (@path1) = split m{/+}, $path1;
+ my (@path2) = split m{/+}, $path2;
+
+ # drop common prefixes until we find a difference
+ PREFIX:
+ while ( @path1 && @path2 ) {
+ last PREFIX if $path1[0] ne $path2[0];
+ shift @path1;
+ shift @path2;
+ }
+
+ # prepend one '..' to $path2 for each component of $path1
+ while ( shift @path1 ) {
+ unshift @path2, '..';
+ }
+
+ return join_paths(@path2);
+}
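+# A couple of illustrative calls (hypothetical directories):
+#
+#     relative_path('/usr/local/bin', '/usr/local/stow');  # => '../stow'
+#     relative_path('/usr/local', '/usr/local/stow/perl'); # => 'stow/perl'
+#
+# i.e. the second path expressed relative to the first.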
+
+#===== SUBROUTINE ===========================================================
+# Name      : join_paths()
+# Purpose : concatenates given paths
+# Parameters: path1, path2, ... => paths
+# Returns : concatenation of given paths
+# Throws : n/a
+# Comments : factors out redundant path elements:
+# : '//' => '/' and 'a/b/../c' => 'a/c'
+#============================================================================
+sub join_paths {
+
+ my @paths = @_;
+
+ # weed out empty components and concatenate
+ my $result = join '/', grep {!/\A\z/} @paths;
+
+    # factor out back references and remove redundant /'s
+ my @result = ();
+ PART:
+ for my $part ( split m{/+}, $result) {
+ next PART if $part eq '.';
+ if (@result && $part eq '..' && $result[-1] ne '..') {
+ pop @result;
+ }
+ else {
+ push @result, $part;
+ }
+ }
+
+ return join '/', @result;
+}
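+# A few illustrative calls, showing the clean-up described above:
+#
+#     join_paths('a', 'b', 'c');        # => 'a/b/c'
+#     join_paths('/usr//local', 'bin'); # => '/usr/local/bin'
+#     join_paths('a/b', '../c');        # => 'a/c'
+#     join_paths('..', '../d');         # => '../../d'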
+
+#===== SUBROUTINE ===========================================================
+# Name : parent
+# Purpose : find the parent of the given path
+# Parameters: @path => components of the path
+# Returns : returns a path string
+# Throws : n/a
+# Comments : allows you to send multiple chunks of the path
+# : (this feature is currently not used)
+#============================================================================
+sub parent {
+ my @path = @_;
+ my $path = join '/', @_;
+ my @elts = split m{/+}, $path;
+ pop @elts;
+ return join '/', @elts;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : internal_error()
+# Purpose : output internal error message in a consistent form and die
+# Parameters: $format => sprintf format for the error message
+#           : @args   => arguments for the format
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
+sub internal_error {
+ my ($format,@args) = @_;
+ die "$ProgramName: INTERNAL ERROR: ".sprintf($format,@args)."\n",
+ "This _is_ a bug. Please submit a bug report so we can fix it:-)\n";
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : error()
+# Purpose : output error message in a consistent form and die
+# Parameters: $format => sprintf format for the error message
+#           : @args   => arguments for the format
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
+sub error {
+ my ($format,@args) = @_;
+ die "$ProgramName: ERROR: ".sprintf($format,@args)." ($!)\n";
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : version()
+# Purpose   : print this program's version and exit
+# Parameters: none
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
+sub version {
+ print "$ProgramName (GNU Stow) version $Version\n";
+ exit 0;
+}
+
+1; # return true so we can load this script as a module during unit testing
+
+# Local variables:
+# mode: perl
+# End:
+# vim: ft=perl
diff --git a/stow.in b/stow.in
index aee5885..c786115 100644..100755
--- a/stow.in
+++ b/stow.in
@@ -2,8 +2,9 @@
# GNU Stow - manage the installation of multiple software packages
# Copyright (C) 1993, 1994, 1995, 1996 by Bob Glickstein
-# Copyright (C) 2000,2001 Guillaume Morin
-#
+# Copyright (C) 2000, 2001 Guillaume Morin
+# Copyright (C) 2007 Kahlil Hodgson
+#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
@@ -17,529 +18,1777 @@
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-#
-# $Id$
-# $Source$
-# $Date$
-# $Author$
+
+use strict;
+use warnings;
require 5.005;
-use POSIX;
-
-$ProgramName = $0;
-$ProgramName =~ s,.*/,,;
-
-$Version = '@VERSION@';
-
-$Conflicts = 0;
-$Delete = 0;
-$NotReally = 0;
-$Verbose = 0;
-$ReportHelp = 0;
-$Stow = undef;
-$Target = undef;
-$Restow = 0;
-
-
-# FIXME: use Getopt::Long
-while (@ARGV && ($_ = $ARGV[0]) && /^-/) {
- $opt = $';
- shift;
- last if /^--$/;
-
- if ($opt =~ /^-/) {
- $opt = $';
- if ($opt =~ /^no?$/i) {
- $NotReally = 1;
- } elsif ($opt =~ /^c(o(n(f(l(i(c(ts?)?)?)?)?)?)?)?$/i) {
- $Conflicts = 1;
- $NotReally = 1;
- } elsif ($opt =~ /^dir?/i) {
- $remainder = $';
- if ($remainder =~ /^=/) {
- $Stow = $'; # the stuff after the =
- } else {
- $Stow = shift;
- }
- } elsif ($opt =~ /^t(a(r(g(et?)?)?)?)?/i) {
- $remainder = $';
- if ($remainder =~ /^=/) {
- $Target = $'; # the stuff after the =
- } else {
- $Target = shift;
- }
- } elsif ($opt =~ /^verb(o(se?)?)?/i) {
- $remainder = $';
- if ($remainder =~ /^=(\d+)/) {
- $Verbose = $1;
- } else {
- ++$Verbose;
- }
- } elsif ($opt =~ /^de(l(e(te?)?)?)?$/i) {
- $Delete = 1;
- } elsif ($opt =~ /^r(e(s(t(o(w?)?)?)?)?)?$/i) {
- $Restow = 1;
- } elsif ($opt =~ /^vers(i(on?)?)?$/i) {
- &version();
- } else {
- &usage(($opt =~ /^h(e(lp?)?)?$/) ? undef :
- "unknown or ambiguous option: $opt");
- }
- } else {
- @opts = split(//, $opt);
- while ($_ = shift(@opts)) {
- if ($_ eq 'n') {
- $NotReally = 1;
- } elsif ($_ eq 'c') {
- $Conflicts = 1;
- $NotReally = 1;
- } elsif ($_ eq 'd') {
- $Stow = (join('', @opts) || shift);
- @opts = ();
- } elsif ($_ eq 't') {
- $Target = (join('', @opts) || shift);
- @opts = ();
- } elsif ($_ eq 'v') {
- ++$Verbose;
- } elsif ($_ eq 'D') {
- $Delete = 1;
- } elsif ($_ eq 'R') {
- $Restow = 1;
- } elsif ($_ eq 'V') {
- &version();
- } else {
- &usage(($_ eq 'h') ? undef : "unknown option: $_");
- }
- }
- }
-}
-
-&usage("No packages named") unless @ARGV;
-
-# Changing dirs helps a lot when soft links are used
-$current_dir = &getcwd;
-if ($Stow) {
- chdir($Stow) || die "Cannot chdir to target tree $Stow ($!)\n";
-}
-
-# This prevents problems if $Target was supplied as a relative path
-$Stow = &getcwd;
-
-chdir($current_dir) || die "Your directory does not seem to exist anymore ($!)\n";
-
-$Target = &parent($Stow) unless $Target;
-
-chdir($Target) || die "Cannot chdir to target tree $Target ($!)\n";
-$Target = &getcwd;
-
-foreach $package (@ARGV) {
- $package =~ s,/+$,,; # delete trailing slashes
- if ($package =~ m,/,) {
- die "$ProgramName: slashes not permitted in package names\n";
- }
-}
-
-if ($Delete || $Restow) {
- @Collections = @ARGV;
- &Unstow('', &RelativePath($Target, $Stow));
-}
-
-if (!$Delete || $Restow) {
- foreach $Collection (@ARGV) {
- warn "Stowing package $Collection...\n" if $Verbose;
- &StowContents($Collection, &RelativePath($Target, $Stow));
- }
-}
-
-sub CommonParent {
- local($dir1, $dir2) = @_;
- local($result, $x);
- local(@d1) = split(/\/+/, $dir1);
- local(@d2) = split(/\/+/, $dir2);
-
- while (@d1 && @d2 && (($x = shift(@d1)) eq shift(@d2))) {
- $result .= "$x/";
- }
- chop($result);
- $result;
-}
-
-# Find the relative patch between
-# two paths given as arguments.
-
-sub RelativePath {
- local($a, $b) = @_;
- local($c) = &CommonParent($a, $b);
- local(@a) = split(/\/+/, $a);
- local(@b) = split(/\/+/, $b);
- local(@c) = split(/\/+/, $c);
-
- # if $c == "/something", scalar(@c) >= 2
- # but if $c == "/", scalar(@c) == 0
- # but we want 1
- my $length = scalar(@c) ? scalar(@c) : 1;
- splice(@a, 0, $length);
- splice(@b, 0, $length);
-
- unshift(@b, (('..') x (@a + 0)));
- &JoinPaths(@b);
-}
-
-# Basically concatenates the paths given
-# as arguments
-
-sub JoinPaths {
- local(@paths, @parts);
- local ($x, $y);
- local($result) = '';
-
- $result = '/' if ($_[0] =~ /^\//);
- foreach $x (@_) {
- @parts = split(/\/+/, $x);
- foreach $y (@parts) {
- push(@paths, $y) if ($y ne "");
- }
- }
- $result .= join('/', @paths);
-}
-
-sub Unstow {
- local($targetdir, $stow) = @_;
- local(@contents);
- local($content);
- local($linktarget, $stowmember, $collection);
- local(@stowmember);
- local($pure, $othercollection) = (1, '');
- local($subpure, $subother);
- local($empty) = (1);
- local(@puresubdirs);
-
- return (0, '') if (&JoinPaths($Target, $targetdir) eq $Stow);
- return (0, '') if (-e &JoinPaths($Target, $targetdir, '.stow'));
- warn sprintf("Unstowing in %s\n", &JoinPaths($Target, $targetdir))
- if ($Verbose > 1);
- if (!opendir(DIR, &JoinPaths($Target, $targetdir))) {
- warn "Warning: $ProgramName: Cannot read directory \"$dir\" ($!). Stow might leave some links. If you think, it does. Rerun Stow with appropriate rights.\n";
- }
- @contents = readdir(DIR);
- closedir(DIR);
- foreach $content (@contents) {
- next if (($content eq '.') || ($content eq '..'));
- $empty = 0;
- if (-l &JoinPaths($Target, $targetdir, $content)) {
- ($linktarget = readlink(&JoinPaths($Target,
- $targetdir,
- $content)))
- || die sprintf("%s: Cannot read link %s (%s)\n",
- $ProgramName,
- &JoinPaths($Target, $targetdir, $content),
- $!);
- if ($stowmember = &FindStowMember(&JoinPaths($Target,
- $targetdir),
- $linktarget)) {
- @stowmember = split(/\/+/, $stowmember);
- $collection = shift(@stowmember);
- if (grep(($collection eq $_), @Collections)) {
- &DoUnlink(&JoinPaths($Target, $targetdir, $content));
- } elsif ($pure) {
- if ($othercollection) {
- $pure = 0 if ($collection ne $othercollection);
- } else {
- $othercollection = $collection;
- }
- }
- } else {
- $pure = 0;
- }
- } elsif (-d &JoinPaths($Target, $targetdir, $content)) {
- ($subpure, $subother) = &Unstow(&JoinPaths($targetdir, $content),
- &JoinPaths('..', $stow));
- if ($subpure) {
- push(@puresubdirs, "$content/$subother");
- }
- if ($pure) {
- if ($subpure) {
- if ($othercollection) {
- if ($subother) {
- if ($othercollection ne $subother) {
- $pure = 0;
- }
- }
- } elsif ($subother) {
- $othercollection = $subother;
- }
- } else {
- $pure = 0;
- }
- }
- } else {
- $pure = 0;
- }
- }
- # This directory was an initially empty directory therefore
- # We do not remove it.
- $pure = 0 if $empty;
- if ((!$pure || !$targetdir) && @puresubdirs) {
- &CoalesceTrees($targetdir, $stow, @puresubdirs);
- }
- ($pure, $othercollection);
-}
-
-sub CoalesceTrees {
- local($parent, $stow, @trees) = @_;
- local($tree, $collection, $x);
-
- foreach $x (@trees) {
- ($tree, $collection) = ($x =~ /^(.*)\/(.*)/);
- &EmptyTree(&JoinPaths($Target, $parent, $tree));
- &DoRmdir(&JoinPaths($Target, $parent, $tree));
- if ($collection) {
- &DoLink(&JoinPaths($stow, $collection, $parent, $tree),
- &JoinPaths($Target, $parent, $tree));
- }
- }
-}
-
-sub EmptyTree {
- local($dir) = @_;
- local(@contents);
- local($content);
-
- opendir(DIR, $dir)
- || die "$ProgramName: Cannot read directory \"$dir\" ($!)\n";
- @contents = readdir(DIR);
- closedir(DIR);
- foreach $content (@contents) {
- next if (($content eq '.') || ($content eq '..'));
- if (-l &JoinPaths($dir, $content)) {
- &DoUnlink(&JoinPaths($dir, $content));
- } elsif (-d &JoinPaths($dir, $content)) {
- &EmptyTree(&JoinPaths($dir, $content));
- &DoRmdir(&JoinPaths($dir, $content));
- } else {
- &DoUnlink(&JoinPaths($dir, $content));
- }
- }
-}
-
-sub StowContents {
- local($dir, $stow) = @_;
- local(@contents);
- local($content);
-
- warn "Stowing contents of $dir\n" if ($Verbose > 1);
- opendir(DIR, &JoinPaths($Stow, $dir))
- || die "$ProgramName: Cannot read directory \"$dir\" ($!)\n";
- @contents = readdir(DIR);
- closedir(DIR);
- foreach $content (@contents) {
- next if (($content eq '.') || ($content eq '..'));
- if (-d &JoinPaths($Stow, $dir, $content)) {
- &StowDir(&JoinPaths($dir, $content), $stow);
- } else {
- &StowNondir(&JoinPaths($dir, $content), $stow);
- }
- }
-}
-
-sub StowDir {
- local($dir, $stow) = @_;
- local(@dir) = split(/\/+/, $dir);
- local($collection) = shift(@dir);
- local($subdir) = join('/', @dir);
- local($linktarget, $stowsubdir);
-
- warn "Stowing directory $dir\n" if ($Verbose > 1);
- if (-l &JoinPaths($Target, $subdir)) {
- ($linktarget = readlink(&JoinPaths($Target, $subdir)))
- || die sprintf("%s: Could not read link %s (%s)\n",
- $ProgramName,
- &JoinPaths($Target, $subdir),
- $!);
- ($stowsubdir =
- &FindStowMember(sprintf('%s/%s', $Target,
- join('/', @dir[0..($#dir - 1)])),
- $linktarget))
- || (&Conflict($dir, $subdir), return);
- if (-e &JoinPaths($Stow, $stowsubdir)) {
- if ($stowsubdir eq $dir) {
- warn sprintf("%s already points to %s\n",
- &JoinPaths($Target, $subdir),
- &JoinPaths($Stow, $dir))
- if ($Verbose > 2);
- return;
- }
- if (-d &JoinPaths($Stow, $stowsubdir)) {
- &DoUnlink(&JoinPaths($Target, $subdir));
- &DoMkdir(&JoinPaths($Target, $subdir));
- &StowContents($stowsubdir, &JoinPaths('..', $stow));
- &StowContents($dir, &JoinPaths('..', $stow));
- } else {
- (&Conflict($dir, $subdir), return);
- }
- } else {
- &DoUnlink(&JoinPaths($Target, $subdir));
- &DoLink(&JoinPaths($stow, $dir),
- &JoinPaths($Target, $subdir));
- }
- } elsif (-e &JoinPaths($Target, $subdir)) {
- if (-d &JoinPaths($Target, $subdir)) {
- &StowContents($dir, &JoinPaths('..', $stow));
- } else {
- &Conflict($dir, $subdir);
- }
- } else {
- &DoLink(&JoinPaths($stow, $dir),
- &JoinPaths($Target, $subdir));
- }
-}
-
-sub StowNondir {
- local($file, $stow) = @_;
- local(@file) = split(/\/+/, $file);
- local($collection) = shift(@file);
- local($subfile) = join('/', @file);
- local($linktarget, $stowsubfile);
-
- if (-l &JoinPaths($Target, $subfile)) {
- ($linktarget = readlink(&JoinPaths($Target, $subfile)))
- || die sprintf("%s: Could not read link %s (%s)\n",
- $ProgramName,
- &JoinPaths($Target, $subfile),
- $!);
- ($stowsubfile =
- &FindStowMember(sprintf('%s/%s', $Target,
- join('/', @file[0..($#file - 1)])),
- $linktarget))
- || (&Conflict($file, $subfile), return);
- if (-e &JoinPaths($Stow, $stowsubfile)) {
- (&Conflict($file, $subfile), return)
- unless ($stowsubfile eq $file);
- warn sprintf("%s already points to %s\n",
- &JoinPaths($Target, $subfile),
- &JoinPaths($Stow, $file))
- if ($Verbose > 2);
- } else {
- &DoUnlink(&JoinPaths($Target, $subfile));
- &DoLink(&JoinPaths($stow, $file),
- &JoinPaths($Target, $subfile));
- }
- } elsif (-e &JoinPaths($Target, $subfile)) {
- &Conflict($file, $subfile);
- } else {
- &DoLink(&JoinPaths($stow, $file),
- &JoinPaths($Target, $subfile));
- }
-}
-
-sub DoUnlink {
- local($file) = @_;
-
- warn "UNLINK $file\n" if $Verbose;
- (unlink($file) || die "$ProgramName: Could not unlink $file ($!)\n")
- unless $NotReally;
-}
-
-sub DoRmdir {
- local($dir) = @_;
-
- warn "RMDIR $dir\n" if $Verbose;
- (rmdir($dir) || die "$ProgramName: Could not rmdir $dir ($!)\n")
- unless $NotReally;
-}
-
-sub DoLink {
- local($target, $name) = @_;
-
- warn "LINK $name to $target\n" if $Verbose;
- (symlink($target, $name) ||
- die "$ProgramName: Could not symlink $name to $target ($!)\n")
- unless $NotReally;
-}
-
-sub DoMkdir {
- local($dir) = @_;
-
- warn "MKDIR $dir\n" if $Verbose;
- (mkdir($dir, 0777)
- || die "$ProgramName: Could not make directory $dir ($!)\n")
- unless $NotReally;
-}
-
-sub Conflict {
- local($a, $b) = @_;
-
- if ($Conflicts) {
- warn sprintf("CONFLICT: %s vs. %s\n", &JoinPaths($Stow, $a),
- &JoinPaths($Target, $b));
- } else {
- die sprintf("%s: CONFLICT: %s vs. %s\n",
- $ProgramName,
- &JoinPaths($Stow, $a),
- &JoinPaths($Target, $b));
- }
-}
-
-sub FindStowMember {
- local($start, $path) = @_;
- local(@x) = split(/\/+/, $start);
- local(@path) = split(/\/+/, $path);
- local($x);
- local(@d) = split(/\/+/, $Stow);
-
- while (@path) {
- $x = shift(@path);
- if ($x eq '..') {
- pop(@x);
- return '' unless @x;
- } elsif ($x) {
- push(@x, $x);
- }
- }
- while (@x && @d) {
- if (($x = shift(@x)) ne shift(@d)) {
- return '';
- }
- }
- return '' if @d;
- join('/', @x);
+use POSIX qw(getcwd);
+use Getopt::Long;
+
+my $Version = '@VERSION@';
+my $ProgramName = $0;
+$ProgramName =~ s{.*/}{};
+
+# Verbosity rules:
+#
+# 0: errors only
+# > 0: print operations: LINK/UNLINK/MKDIR/RMDIR
+# > 1: print trace: stow/unstow package/contents/node
+# > 2: print trace detail: "_this_ already points to _that_"
+#
+# All output (except for version() and usage()) is to stderr to preserve
+# backward compatibility.
+
+# These are the defaults for command line options
+our %Option = (
+ help => 0,
+ conflicts => 0,
+ action => 'stow',
+ simulate => 0,
+ verbose => 0,
+ paranoid => 0,
+ dir => undef,
+ target => undef,
+ ignore => [],
+ override => [],
+ defer => [],
+);
+
+# This becomes static after option processing
+our $Stow_Path; # only used in main loop and find_stowed_path()
+
+# Store conflicts during pre-processing
+our @Conflicts = ();
+
+# Store command line packages to stow (-S and -R)
+our @Pkgs_To_Stow = ();
+
+# Store command line packages to unstow (-D and -R)
+our @Pkgs_To_Delete = ();
+
+# The following structures are used by the abstractions that allow us to
+# defer operating on the filesystem until after all potential conflicts have
+# been assessed.
+
+# our @Tasks: list of operations to be performed (in order)
+# each element is a hash ref of the form
+# {
+# action => ...
+# type => ...
+# path => ... (unique)
+# source => ... (only for links)
+# }
+our @Tasks = ();
+
+# our %Dir_Task_For: map a path to the corresponding directory task reference
+# This structure allows us to quickly determine if a path has an existing
+# directory task associated with it.
+our %Dir_Task_For = ();
+
+# our %Link_Task_For: map a path to the corresponding link task reference
+# This structure allows us to quickly determine if a path has an existing
+# link task associated with it.
+our %Link_Task_For = ();
+
+# NB: directory tasks and link tasks are NOT mutually exclusive
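+# For illustration (hypothetical paths): if do_unlink('bin/perl') has been
+# queued and a later do_link('../stow/perl/bin/perl', 'bin/perl') would simply
+# recreate the same link, the pending removal is cancelled instead of queueing
+# both operations:
+#
+#     $Link_Task_For{'bin/perl'}->{'action'} = 'skip';
+#     delete $Link_Task_For{'bin/perl'};
+#
+# process_tasks() later drops any task whose action is 'skip'.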
+
+# put the main loop in this block so we can load the
+# rest of the code as a module for testing
+if ( not caller() ) {
+
+ process_options();
+ set_stow_path();
+
+ # current dir is now the target directory
+
+ for my $package (@Pkgs_To_Delete) {
+ if (not -d join_paths($Stow_Path,$package)) {
+ error("The given package name ($package) is not in your stow path");
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing package $package...\n";
+ }
+ if ($Option{'compat'}) {
+ unstow_contents_orig(
+ join_paths($Stow_Path,$package), # path to package
+ '', # target is current_dir
+ );
+ }
+ else {
+ unstow_contents(
+ join_paths($Stow_Path,$package), # path to package
+ '', # target is current_dir
+ );
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing package $package...done\n";
+ }
+ }
+
+ for my $package (@Pkgs_To_Stow) {
+ if (not -d join_paths($Stow_Path,$package)) {
+ error("The given package name ($package) is not in your stow path");
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing package $package...\n";
+ }
+ stow_contents(
+        join_paths($Stow_Path,$package), # path to package
+ '', # target is current dir
+ join_paths($Stow_Path,$package), # source from target
+ );
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing package $package...done\n";
+ }
+ }
+
+ # --verbose: tell me what you are planning to do
+ # --simulate: don't execute planned operations
+ # --conflicts: just list any detected conflicts
+
+ if (scalar @Conflicts) {
+ warn "WARNING: conflicts detected.\n";
+ if ($Option{'conflicts'}) {
+ map { warn $_ } @Conflicts;
+ }
+ warn "WARNING: all operations aborted.\n";
+ }
+ else {
+ process_tasks();
+ }
}
-sub parent {
- local($path) = join('/', @_);
- local(@elts) = split(/\/+/, $path);
- pop(@elts);
- join('/', @elts);
+
+#===== SUBROUTINE ===========================================================
+# Name : process_options()
+# Purpose : parse command line options and update the %Option hash
+# Parameters: none
+# Returns : n/a
+# Throws : a fatal error if a bad command line option is given
+# Comments : checks @ARGV for valid package names
+#============================================================================
+sub process_options {
+
+ get_defaults();
+ #$,="\n"; print @ARGV,"\n"; # for debugging rc file
+
+ Getopt::Long::config('no_ignore_case', 'bundling', 'permute');
+ GetOptions(
+ 'v' => sub { $Option{'verbose'}++ },
+ 'verbose=s' => sub { $Option{'verbose'} = $_[1] },
+ 'h|help' => sub { $Option{'help'} = '1' },
+ 'n|no|simulate' => sub { $Option{'simulate'} = '1' },
+ 'c|conflicts' => sub { $Option{'conflicts'} = '1' },
+ 'V|version' => sub { $Option{'version'} = '1' },
+ 'p|compat' => sub { $Option{'compat'} = '1' },
+ 'd|dir=s' => sub { $Option{'dir'} = $_[1] },
+ 't|target=s' => sub { $Option{'target'} = $_[1] },
+
+ # clean and pre-compile any regex's at parse time
+ 'ignore=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'ignore'}}, qr($regex\z)
+ },
+
+ 'override=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'override'}}, qr(\A$regex)
+ },
+
+ 'defer=s' =>
+ sub {
+ my $regex = strip_quotes($_[1]);
+ push @{$Option{'defer'}}, qr(\A$regex) ;
+ },
+
+ # a little craziness so we can do different actions on the same line:
+ # a -D, -S, or -R changes the action that will be performed on the
+ # package arguments that follow it.
+ 'D|delete' => sub { $Option{'action'} = 'delete' },
+ 'S|stow' => sub { $Option{'action'} = 'stow' },
+ 'R|restow' => sub { $Option{'action'} = 'restow' },
+ '<>' =>
+ sub {
+ if ($Option{'action'} eq 'restow') {
+ push @Pkgs_To_Delete, $_[0];
+ push @Pkgs_To_Stow, $_[0];
+ }
+ elsif ($Option{'action'} eq 'delete') {
+ push @Pkgs_To_Delete, $_[0];
+ }
+ else {
+ push @Pkgs_To_Stow, $_[0];
+ }
+ },
+ ) or usage();
+
+ #print "$Option{'dir'}\n"; print "$Option{'target'}\n"; exit;
+
+ # clean any leading and trailing whitespace in paths
+ if ($Option{'dir'}) {
+ $Option{'dir'} =~ s/\A +//;
+ $Option{'dir'} =~ s/ +\z//;
+ }
+ if ($Option{'target'}) {
+ $Option{'target'} =~ s/\A +//;
+ $Option{'target'} =~ s/ +\z//;
+ }
+
+ if ($Option{'help'}) {
+ usage();
+ }
+ if ($Option{'version'}) {
+ version();
+ }
+ if ($Option{'conflicts'}) {
+ $Option{'simulate'} = 1;
+ }
+
+ if (not scalar @Pkgs_To_Stow and not scalar @Pkgs_To_Delete ) {
+ usage("No packages named");
+ }
+
+ # check package arguments
+ for my $package ( (@Pkgs_To_Stow, @Pkgs_To_Delete) ) {
+ $package =~ s{/+$}{}; # delete trailing slashes
+ if ( $package =~ m{/} ) {
+ error("Slashes are not permitted in package names");
+ }
+ }
+
+ return;
}
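+# A worked example (hypothetical packages): the command line
+#
+#     stow -v -t /usr/local -D oldperl -S perl emacs --ignore='~'
+#
+# unstows 'oldperl', stows 'perl' and 'emacs', and compiles the ignore pattern
+# into qr(~\z) so that backup files such as 'site.el~' are skipped.
+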
+#===== SUBROUTINE ============================================================
+# Name : get_defaults()
+# Purpose   : search for default settings in any .stowrc files
+# Parameters: none
+# Returns : n/a
+# Throws : no exceptions
+# Comments  : prepends the contents of '~/.stowrc' and '.stowrc' to the command
+#           : line so they get parsed just like normal arguments. (This was
+# : hacked in so that Emil and I could set different preferences).
+#=============================================================================
+sub get_defaults {
+
+ my @defaults = ();
+ for my $file ($ENV{'HOME'}.'/.stowrc','.stowrc') {
+ if (-r $file ) {
+ warn "Loading defaults from $file\n";
+ open my $FILE, '<', $file
+ or die "Could not open $file for reading\n";
+ while (my $line = <$FILE> ){
+ chomp $line;
+ push @defaults, split " ", $line;
+ }
+ close $FILE or die "Could not close open file: $file\n";
+ }
+ }
+ # doing this inline does not seem to work
+ unshift @ARGV, @defaults;
+ return;
+}
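+# For instance, a hypothetical ~/.stowrc containing
+#
+#     --dir=/usr/local/stow
+#     --target=/usr/local
+#     --ignore='\.orig'
+#
+# is split on whitespace and prepended to @ARGV, so for scalar settings such
+# as --target a value given on the real command line is parsed later and wins.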
+
+#===== SUBROUTINE ===========================================================
+# Name : usage()
+# Purpose : print program usage message and exit
+# Parameters: msg => string to prepend to the usage message
+# Returns : n/a
+# Throws : n/a
+# Comments : if 'msg' is given, then exit with non-zero status
+#============================================================================
sub usage {
- local($msg) = shift;
-
- if ($msg) {
- print "$ProgramName: $msg\n";
- }
- print "$ProgramName (GNU Stow) version $Version\n\n";
- print "Usage: $ProgramName [OPTION ...] PACKAGE ...\n";
- print <<EOT;
- -n, --no Do not actually make changes
- -c, --conflicts Scan for conflicts, implies -n
- -d DIR, --dir=DIR Set stow dir to DIR (default is current dir)
- -t DIR, --target=DIR Set target to DIR (default is parent of stow dir)
- -v, --verbose[=N] Increase verboseness (levels are 0,1,2,3;
- -v or --verbose adds 1; --verbose=N sets level)
- -D, --delete Unstow instead of stow
- -R, --restow Restow (like stow -D followed by stow)
- -V, --version Show Stow version number
- -h, --help Show this help
+ my ($msg) = @_;
+
+ if ($msg) {
+ print "$ProgramName: $msg\n\n";
+ }
+
+ print <<"EOT";
+$ProgramName (GNU Stow) version $Version
+
+SYNOPSIS:
+
+ $ProgramName [OPTION ...] [-D|-S|-R] PACKAGE ... [-D|-S|-R] PACKAGE ...
+
+OPTIONS:
+
+ -n, --no Do not actually make any filesystem changes
+ -c, --conflicts Scan for and print any conflicts, implies -n
+ -d DIR, --dir=DIR Set stow dir to DIR (default is current dir)
+ -t DIR, --target=DIR Set target to DIR (default is parent of stow dir)
+ -v, --verbose[=N] Increase verbosity (levels are 0,1,2,3;
+ -v or --verbose adds 1; --verbose=N sets level)
+
+ -S, --stow Stow the package names that follow this option
+ -D, --delete Unstow the package names that follow this option
+ -R, --restow Restow (like stow -D followed by stow -S)
+ -p, --compat use legacy algorithm for unstowing
+
+ --ignore=REGEX ignore files ending in this perl regex
+    --defer=REGEX     defer stowing files beginning with this perl regex
+                      if the file is already stowed to another package
+    --override=REGEX  force stowing files beginning with this perl regex
+ if the file is already stowed to another package
+ -V, --version Show stow version number
+ -h, --help Show this help
EOT
- exit($msg ? 1 : 0);
+ exit( $msg ? 1 : 0 );
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : set_stow_path()
+# Purpose : find the relative path to the stow directory
+# Parameters: none
+# Returns   : n/a (sets the global $Stow_Path to a relative path)
+# Throws    : fatal error if either the default directories or those set by
+#           : the command line flags are not valid.
+# Comments : This sets the current working directory to $Option{target}
+#============================================================================
+sub set_stow_path {
+
+ # Changing dirs helps a lot when soft links are used
+ # Also prevents problems when 'stow_dir' or 'target' are
+ # supplied as relative paths (FIXME: examples?)
+
+ my $current_dir = getcwd();
+
+ # default stow dir is the current directory
+ if (not $Option{'dir'} ) {
+ $Option{'dir'} = getcwd();
+ }
+ if (not chdir($Option{'dir'})) {
+ error("Cannot chdir to target tree: '$Option{'dir'}'");
+ }
+ my $stow_dir = getcwd();
+
+ # back to start in case target is relative
+ if (not chdir($current_dir)) {
+ error("Your directory does not seem to exist anymore");
+ }
+
+ # default target is the parent of the stow directory
+ if (not $Option{'target'}) {
+ $Option{'target'} = parent($Option{'dir'});
+ }
+ if (not chdir($Option{'target'})) {
+ error("Cannot chdir to target tree: $Option{'target'}");
+ }
+
+ # set our one global
+ $Stow_Path = relative_path(getcwd(),$stow_dir);
+
+ if ($Option{'verbose'} > 1) {
+ warn "current dir is ".getcwd()."\n";
+ warn "stow dir path is $Stow_Path\n";
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : stow_contents()
+# Purpose : stow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $source => relative path to symlink source from the dir of target
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : stow_node() and stow_contents() are mutually recursive
+# : $source and $target are used for creating the symlink
+# : $path is used for folding/unfolding trees as necessary
+#============================================================================
+sub stow_contents {
+
+ my ($path, $target, $source) = @_;
+
+ if ($Option{'verbose'} > 1){
+ warn "Stowing contents of $path\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- $target => $source\n";
+ }
+
+ if (not -d $path) {
+ error("stow_contents() called on a non-directory: $path");
+ }
+
+ opendir my $DIR, $path
+ or error("cannot read directory: $path");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ stow_node(
+ join_paths($path, $node), # path
+ join_paths($target,$node), # target
+ join_paths($source,$node), # source
+ );
+ }
}
+#===== SUBROUTINE ===========================================================
+# Name : stow_node()
+# Purpose : stow the given node
+# Parameters: $path => relative path to source node from the current directory
+#           : $target => relative path to symlink target from the current directory
+#           : $source => relative path to symlink source from the dir of target
+# Returns : n/a
+# Throws : fatal exception if a conflict arises
+# Comments : stow_node() and stow_contents() are mutually recursive
+# : $source and $target are used for creating the symlink
+# : $path is used for folding/unfolding trees as necessary
+#============================================================================
+sub stow_node {
+
+ my ($path, $target, $source) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Stowing $path\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- $target => $source\n";
+ }
+
+    # don't try to stow absolute symlinks (they can't be unstowed)
+ if (-l $source) {
+ my $second_source = read_a_link($source);
+ if ($second_source =~ m{\A/} ) {
+ conflict("source is an absolute symlink $source => $second_source");
+ if ($Option{'verbose'} > 2) {
+ warn "absolute symlinks cannot be unstowed";
+ }
+ return;
+ }
+ }
+
+ # does the target already exist?
+ if (is_a_link($target)) {
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- Evaluate existing link: $target => $old_source\n";
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ conflict("existing target is not owned by stow: $target");
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything?
+ if (is_a_node($old_path)) {
+ if ($old_source eq $source) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- Skipping $target as it already points to $source\n";
+ }
+ }
+ elsif (defer($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- deferring installation of: $target\n";
+ }
+ }
+ elsif (override($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn "--- overriding installation of: $target\n";
+ }
+ do_unlink($target);
+ do_link($source,$target);
+ }
+ elsif (is_a_dir(join_paths(parent($target),$old_source)) &&
+ is_a_dir(join_paths(parent($target),$source)) ) {
+
+ # if the existing link points to a directory,
+ # and the proposed new link points to a directory,
+ # then we can unfold the tree at that point
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Unfolding $target\n";
+ }
+ do_unlink($target);
+ do_mkdir($target);
+ stow_contents($old_path, $target, join_paths('..',$old_source));
+ stow_contents($path, $target, join_paths('..',$source));
+ }
+ else {
+ conflict(
+ q{existing target is stowed to a different package: %s => %s},
+ $target,
+ $old_source,
+ );
+ }
+ }
+ else {
+ # the existing link is invalid, so replace it with a good link
+ if ($Option{'verbose'} > 2){
+ warn "--- replacing invalid link: $path\n";
+ }
+ do_unlink($target);
+ do_link($source, $target);
+ }
+ }
+ elsif (is_a_node($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("--- Evaluate existing node: $target\n");
+ }
+ if (is_a_dir($target)) {
+ stow_contents($path, $target, join_paths('..',$source));
+ }
+ else {
+ conflict(
+ qq{existing target is neither a link nor a directory: $target}
+ );
+ }
+ }
+ else {
+ do_link($source, $target);
+ }
+ return;
+}
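+# A sketch of the "unfolding" case above (hypothetical packages): suppose
+# 'share/man' is currently a single symlink into the 'perl' package and we now
+# stow an 'emacs' package that also provides files under share/man. The link
+# is removed, a real 'share/man' directory is created, and the contents of
+# both packages' share/man directories are then stowed into it, one link per
+# node.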
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_contents_orig()
+# Purpose : unstow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+# : Here we traverse the target tree, rather than the source tree.
+#============================================================================
+sub unstow_contents_orig {
+
+ my ($path, $target) = @_;
+
+ # don't try to remove anything under a stow directory
+ if ($target eq $Stow_Path or -e "$target/.stow" or -e "$target/.nonstow") {
+ return;
+ }
+ if ($Option{'verbose'} > 1){
+ warn "Unstowing in $target\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- path is $path\n";
+ }
+ if (not -d $target) {
+ error("unstow_contents() called on a non-directory: $target");
+ }
+
+ opendir my $DIR, $target
+ or error("cannot read directory: $target");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ unstow_node_orig(
+ join_paths($path, $node), # path
+ join_paths($target, $node), # target
+ );
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_node_orig()
+# Purpose : unstow the given node
+# Parameters: $path => relative path to source node from the current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : fatal error if a conflict arises
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+#============================================================================
+sub unstow_node_orig {
+
+ my ($path, $target) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing $target\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- path is $path\n";
+ }
+
+ # does the target exist
+ if (is_a_link($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing link: $target\n");
+ }
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ # skip links not owned by stow
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything
+ if (-e $old_path) {
+            # does the link point to the right place?
+ if ($old_path eq $path) {
+ do_unlink($target);
+ }
+ elsif (override($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("--- overriding installation of: $target\n");
+ }
+ do_unlink($target);
+ }
+ # else leave it alone
+ }
+ else {
+ if ($Option{'verbose'} > 2){
+ warn "--- removing invalid link into a stow directory: $path\n";
+ }
+ do_unlink($target);
+ }
+ }
+ elsif (-d $target) {
+ unstow_contents_orig($path, $target);
+
+ # this action may have made the parent directory foldable
+ if (my $parent = foldable($target)) {
+ fold_tree($target,$parent);
+ }
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_contents()
+# Purpose : unstow the contents of the given directory
+# Parameters: $path => relative path to source dir from current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : a fatal error if directory cannot be read
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+# : Here we traverse the target tree, rather than the source tree.
+#============================================================================
+sub unstow_contents {
+
+ my ($path, $target) = @_;
+
+ # don't try to remove anything under a stow directory
+ if ($target eq $Stow_Path or -e "$target/.stow") {
+ return;
+ }
+ if ($Option{'verbose'} > 1){
+ warn "Unstowing in $target\n";
+ }
+ if ($Option{'verbose'} > 2){
+ warn "--- path is $path\n";
+ }
+ if (not -d $path) {
+ error("unstow_contents() called on a non-directory: $path");
+ }
+
+ opendir my $DIR, $path
+ or error("cannot read directory: $path");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if ignore($node);
+ unstow_node(
+ join_paths($path, $node), # path
+ join_paths($target, $node), # target
+ );
+ }
+ if (-d $target) {
+ cleanup_invalid_links($target);
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : unstow_node()
+# Purpose : unstow the given node
+# Parameters: $path => relative path to source node from the current directory
+# : $target => relative path to symlink target from the current directory
+# Returns : n/a
+# Throws : fatal error if a conflict arises
+# Comments : unstow_node() and unstow_contents() are mutually recursive
+#============================================================================
+sub unstow_node {
+
+ my ($path, $target) = @_;
+
+ if ($Option{'verbose'} > 1) {
+ warn "Unstowing $path\n";
+ }
+ if ($Option{'verbose'} > 2) {
+ warn "--- target is $target\n";
+ }
+
+ # does the target exist
+ if (is_a_link($target)) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing link: $target\n");
+ }
+
+ # where is the link pointing?
+ my $old_source = read_a_link($target);
+ if (not $old_source) {
+ error("Could not read link: $target");
+ }
+
+ if ($old_source =~ m{\A/}) {
+ warn "ignoring a absolute symlink: $target => $old_source\n";
+ return; # XXX #
+ }
+
+ # does it point to a node under our stow directory?
+ my $old_path = find_stowed_path($target, $old_source);
+ if (not $old_path) {
+ conflict(
+ qq{existing target is not owned by stow: $target => $old_source}
+ );
+ return; # XXX #
+ }
+
+ # does the existing $target actually point to anything
+ if (-e $old_path) {
+            # does the link point to the right place?
+ if ($old_path eq $path) {
+ do_unlink($target);
+ }
+
+ # XXX we quietly ignore links that are stowed to a different
+ # package.
+
+ #elsif (defer($target)) {
+ # if ($Option{'verbose'} > 2) {
+ # warn("--- deferring to installation of: $target\n");
+ # }
+ #}
+ #elsif (override($target)) {
+ # if ($Option{'verbose'} > 2) {
+ # warn("--- overriding installation of: $target\n");
+ # }
+ # do_unlink($target);
+ #}
+ #else {
+ # conflict(
+ # q{existing target is stowed to a different package: %s => %s},
+ # $target,
+ # $old_source
+ # );
+ #}
+ }
+ else {
+ if ($Option{'verbose'} > 2){
+ warn "--- removing invalid link into a stow directory: $path\n";
+ }
+ do_unlink($target);
+ }
+ }
+ elsif (-e $target) {
+ if ($Option{'verbose'} > 2) {
+ warn("Evaluate existing node: $target\n");
+ }
+ if (-d $target) {
+ unstow_contents($path, $target);
+
+ # this action may have made the parent directory foldable
+ if (my $parent = foldable($target)) {
+ fold_tree($target,$parent);
+ }
+ }
+ else {
+ conflict(
+ qq{existing target is neither a link nor a directory: $target},
+ );
+ }
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : find_stowed_path()
+# Purpose : determine if the given link points to a member of a
+# : stowed package
+# Parameters: $target => path to a symbolic link under current directory
+# : $source => where that link points to
+# Returns : relative path to stowed node (from the current directory)
+# : or '' if link is not owned by stow
+# Throws : fatal exception if link is unreadable
+# Comments : allow for stow dir not being under target dir
+# : we could put more logic under here for multiple stow dirs
+#============================================================================
+sub find_stowed_path {
+
+ my ($target, $source) = @_;
+
+ # evaluate softlink relative to its target
+ my $path = join_paths(parent($target), $source);
+
+ # search for .stow files
+ my $dir = '';
+ for my $part (split m{/+}, $path) {
+ $dir = join_paths($dir,$part);
+ if (-f "$dir/.stow") {
+ return $path;
+ }
+ }
+
+ # compare with $Stow_Path
+ my @path = split m{/+}, $path;
+ my @stow_path = split m{/+}, $Stow_Path;
+
+ # strip off common prefixes
+ while ( @path && @stow_path ) {
+ if ( (shift @path) ne (shift @stow_path) ) {
+ return '';
+ }
+ }
+ if (@stow_path) {
+        # @path is not under @stow_path
+ return '';
+ }
+
+    return $path;
+}
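+# For example (hypothetical layout, $Stow_Path set to 'stow' and no .stow
+# marker files along the way):
+#
+#     find_stowed_path('bin/perl', '../stow/perl/bin/perl');
+#     # => 'stow/perl/bin/perl'   (owned by stow)
+#
+#     find_stowed_path('bin/vi', '../local-bin/vi');
+#     # => ''                     (not under the stow directory)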
+
+#===== SUBROUTINE ============================================================
+# Name : cleanup_invalid_links()
+# Purpose : clean up invalid links that may block folding
+# Parameters: $dir => path to directory to check
+# Returns : n/a
+# Throws : no exceptions
+# Comments : removing files from a stowed package is probably a bad practice
+# : so this kind of clean up is not _really_ stow's responsibility;
+# : however, failing to clean up can block tree folding, so we'll do
+# : it anyway
+#=============================================================================
+sub cleanup_invalid_links {
+
+ my ($dir) = @_;
+
+ if (not -d $dir) {
+ error("cleanup_invalid_links() called with a non-directory: $dir");
+ }
+
+ opendir my $DIR, $dir
+ or error("cannot read directory: $dir");
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+
+ my $node_path = join_paths($dir,$node);
+
+ if (-l $node_path and not exists $Link_Task_For{$node_path}) {
+
+ # where is the link pointing?
+            # (don't use read_a_link here)
+ my $source = readlink($node_path);
+ if (not $source) {
+ error("Could not read link $node_path");
+ }
+
+ if (
+ not -e join_paths($dir,$source) and # bad link
+ find_stowed_path($node_path,$source) # owned by stow
+ ){
+ if ($Option{'verbose'} > 2) {
+ warn "--- removing stale link: $node_path => ",
+ join_paths($dir,$source), "\n";
+ }
+ do_unlink($node_path);
+ }
+ }
+ }
+ return;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : foldable()
+# Purpose : determine if a tree can be folded
+# Parameters: target => path to a directory
+# Returns : path to the parent dir iff the tree can be safely folded
+# Throws : n/a
+# Comments : the path returned is relative to the parent of $target,
+# : that is, it can be used as the source for a replacement symlink
+#============================================================================
+sub foldable {
+
+ my ($target) = @_;
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Is $target foldable?\n";
+ }
+
+ opendir my $DIR, $target
+ or error(qq{Cannot read directory "$target" ($!)\n});
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ my $parent = '';
+ NODE:
+ for my $node (@listing) {
+
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+
+ my $path = join_paths($target,$node);
+
+ # skip nodes scheduled for removal
+ next NODE if not is_a_node($path);
+
+        # if it's not a link then we can't fold its parent
+ return '' if not is_a_link($path);
+
+ # where is the link pointing?
+ my $source = read_a_link($path);
+ if (not $source) {
+ error("Could not read link $path");
+ }
+ if ($parent eq '') {
+ $parent = parent($source)
+ }
+ elsif ($parent ne parent($source)) {
+ return '';
+ }
+ }
+ return '' if not $parent;
+
+ # if we get here then all nodes inside $target are links, and those links
+ # point to nodes inside the same directory.
+
+    # chop off the leading '..' to get the path to the common parent directory
+ # relative to the parent of our $target
+ $parent =~ s{\A\.\./}{};
+
+ # if the resulting path is owned by stow, we can fold it
+ if (find_stowed_path($target,$parent)) {
+ if ($Option{'verbose'} > 2){
+ warn "--- $target is foldable\n";
+ }
+ return $parent;
+ }
+ else {
+ return '';
+ }
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : fold_tree()
+# Purpose : fold the given tree
+# Parameters: $source => link to the folded tree source
+# : $target => directory that we will replace with a link to $source
+# Returns : n/a
+# Throws : none
+# Comments : only called iff foldable() is true so we can remove some checks
+#============================================================================
+sub fold_tree {
+
+ my ($target,$source) = @_;
+
+ if ($Option{'verbose'} > 2){
+ warn "--- Folding tree: $target => $source\n";
+ }
+
+ opendir my $DIR, $target
+ or error(qq{Cannot read directory "$target" ($!)\n});
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+ next NODE if not is_a_node(join_paths($target,$node));
+ do_unlink(join_paths($target,$node));
+ }
+ do_rmdir($target);
+ do_link($source, $target);
+ return;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : conflict()
+# Purpose : handle conflicts in stow operations
+# Parameters: $format => sprintf format string for the conflict message
+# : @args => arguments for the format string
+# Returns : n/a
+# Throws : none; conflicts are collected in @Conflicts and reported
+# : before any filesystem operations are performed
+# Comments : indicates what type of conflict it is
+#============================================================================
+sub conflict {
+ my ( $format, @args ) = @_;
+
+ my $message = sprintf($format, @args);
+
+ if ($Option{'verbose'}) {
+ warn qq{CONFLICT: $message\n};
+ }
+ push @Conflicts, qq{CONFLICT: $message\n};
+ return;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : ignore
+# Purpose : determine if the given path matches a regex in our ignore list
+# Parameters: $path => path to check against the ignore list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub ignore {
+
+ my ($path) = @_;
+
+ for my $suffix (@{$Option{'ignore'}}) {
+ return 1 if $path =~ m/$suffix/;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : defer
+# Purpose : determine if the given path matches a regex in our defer list
+# Parameters: $path => path to check against the defer list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub defer {
+
+ my ($path) = @_;
+
+ for my $prefix (@{$Option{'defer'}}) {
+ return 1 if $path =~ m/$prefix/;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ============================================================
+# Name : override
+# Purpose : determine if the given path matches a regex in our override list
+# Parameters: $path => path to check against the override list
+# Returns : Boolean
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub override {
+
+ my ($path) = @_;
+
+ for my $regex (@{$Option{'override'}}) {
+ return 1 if $path =~ m/$regex/;
+ }
+ return 0;
+}
+
+##############################################################################
+#
+# The following code provides the abstractions that allow us to defer operating
+# on the filesystem until after all potential conflicts have been assessed.
+#
+##############################################################################
+
+#===== SUBROUTINE ===========================================================
+# Name : process_tasks()
+# Purpose : process each task in the @Tasks list
+# Parameters: none
+# Returns : n/a
+# Throws : fatal error if @Tasks is corrupted or a task fails
+# Comments : tasks involve either creating or deleting dirs and symlinks
+# : an action is set to 'skip' if it is found to be redundant
+#============================================================================
+sub process_tasks {
+
+ if ($Option{'verbose'} > 1) {
+ warn "Processing tasks...\n"
+ }
+
+ # strip out all tasks with a skip action
+ @Tasks = grep { $_->{'action'} ne 'skip' } @Tasks;
+
+ if (not scalar @Tasks) {
+ warn "There are no outstanding operations to perform.\n";
+ return;
+ }
+ if ($Option{'simulate'}) {
+ warn "WARNING: simulating so all operations are deferred.\n";
+ return;
+ }
+
+ for my $task (@Tasks) {
+
+ if ( $task->{'action'} eq 'create' ) {
+ if ( $task->{'type'} eq 'dir' ) {
+ mkdir($task->{'path'}, 0777)
+ or error(qq(Could not create directory: $task->{'path'}));
+ }
+ elsif ( $task->{'type'} eq 'link' ) {
+ symlink $task->{'source'}, $task->{'path'}
+ or error(
+ q(Could not create symlink: %s => %s),
+ $task->{'path'},
+ $task->{'source'}
+ );
+ }
+ else {
+ internal_error(qq(bad task type: $task->{'type'}));
+ }
+ }
+ elsif ( $task->{'action'} eq 'remove' ) {
+ if ( $task->{'type'} eq 'dir' ) {
+ rmdir $task->{'path'}
+ or error(qq(Could not remove directory: $task->{'path'}));
+ }
+ elsif ( $task->{'type'} eq 'link' ) {
+ unlink $task->{'path'}
+ or error(qq(Could not remove link: $task->{'path'}));
+ }
+ else {
+ internal_error(qq(bad task type: $task->{'type'}));
+ }
+ }
+ else {
+ internal_error(qq(bad task action: $task->{'action'}));
+ }
+ }
+ if ($Option{'verbose'} > 1) {
+ warn "Processing tasks...done\n"
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_link()
+# Purpose : is the given path a current or planned link
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing link is scheduled for removal
+# : and true if a non-existent link is scheduled for creation
+#============================================================================
+sub is_a_link {
+ my ($path) = @_;
+
+
+ if ( exists $Link_Task_For{$path} ) {
+
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+ elsif (-l $path) {
+        # check if any of its parents are links scheduled for removal
+ # (need this for edge case during unfolding)
+ my $parent = '';
+ for my $part (split m{/+}, $path ) {
+ $parent = join_paths($parent,$part);
+ if ( exists $Link_Task_For{$parent} ) {
+ if ($Link_Task_For{$parent}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+ }
+ return 1;
+ }
+ return 0;
+}
+
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_dir()
+# Purpose : is the given path a current or planned directory
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing directory is scheduled for removal
+# : and true if a non-existent directory is scheduled for creation
+# : we also need to be sure we are not just following a link
+#============================================================================
+sub is_a_dir {
+ my ($path) = @_;
+
+ if ( exists $Dir_Task_For{$path} ) {
+ my $action = $Dir_Task_For{$path}->{'action'};
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+    # make sure we are not just following a link that is scheduled for removal
+ my $prefix = '';
+ for my $part (split m{/+}, $path) {
+ $prefix = join_paths($prefix,$part);
+ if (exists $Link_Task_For{$prefix} and
+ $Link_Task_For{$prefix}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+
+ if (-d $path) {
+ return 1;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : is_a_node()
+# Purpose : is the given path a current or planned node
+# Parameters: $path => path to check
+# Returns : Boolean
+# Throws : none
+# Comments : returns false if an existing node is scheduled for removal
+# : true if a non-existent node is scheduled for creation
+# : we also need to be sure we are not just following a link
+#============================================================================
+sub is_a_node {
+ my ($path) = @_;
+
+ if ( exists $Link_Task_For{$path} ) {
+
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+ if ( exists $Dir_Task_For{$path} ) {
+
+ my $action = $Dir_Task_For{$path}->{'action'};
+
+ if ($action eq 'remove') {
+ return 0;
+ }
+ elsif ($action eq 'create') {
+ return 1;
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+
+    # make sure we are not just following a link that is scheduled for removal
+ my $prefix = '';
+ for my $part (split m{/+}, $path) {
+ $prefix = join_paths($prefix,$part);
+ if ( exists $Link_Task_For{$prefix} and
+ $Link_Task_For{$prefix}->{'action'} eq 'remove') {
+ return 0;
+ }
+ }
+
+ if (-e $path) {
+ return 1;
+ }
+ return 0;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : read_a_link()
+# Purpose : return the source of a current or planned link
+# Parameters: $path => path to the symlink
+# Returns : a string
+# Throws : fatal exception if the given path is not a current or planned
+# : link
+# Comments : none
+#============================================================================
+sub read_a_link {
+
+ my ($path) = @_;
+
+ if ( exists $Link_Task_For{$path} ) {
+ my $action = $Link_Task_For{$path}->{'action'};
+
+ if ($action eq 'create') {
+ return $Link_Task_For{$path}->{'source'};
+ }
+ elsif ($action eq 'remove') {
+ internal_error(
+ "read_a_link() passed a path that is scheduled for removal: $path"
+ );
+ }
+ else {
+ internal_error("bad task action: $action");
+ }
+ }
+    elsif (-l $path) {
+        # 'return readlink ... or error(...)' would never reach error(),
+        # so read the link into a variable first
+        my $source = readlink($path)
+            or error("Could not read link: $path");
+        return $source;
+    }
+    internal_error("read_a_link() passed a non-link path: $path\n");
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_link()
+# Purpose : wrap 'link' operation for later processing
+# Parameters: $oldfile => the existing file to link to
+# : $newfile => the symlink to be created
+# Returns : n/a
+# Throws : error if this clashes with an existing planned operation
+# Comments : cleans up operations that undo previous operations
+#============================================================================
+sub do_link {
+
+ my ( $oldfile, $newfile ) = @_;
+
+ if ( exists $Dir_Task_For{$newfile} ) {
+
+ my $task_ref = $Dir_Task_For{$newfile};
+
+ if ( $task_ref->{'action'} eq 'create' ) {
+ if ($task_ref->{'type'} eq 'dir') {
+ internal_error(
+ "new link (%s => %s ) clashes with planned new directory",
+ $newfile,
+ $oldfile,
+ );
+ }
+ }
+ elsif ( $task_ref->{'action'} eq 'remove' ) {
+ # we may need to remove a directory before creating a link so continue;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ( exists $Link_Task_For{$newfile} ) {
+
+ my $task_ref = $Link_Task_For{$newfile};
+
+ if ( $task_ref->{'action'} eq 'create' ) {
+ if ( $task_ref->{'source'} ne $oldfile ) {
+ internal_error(
+ "new link clashes with planned new link: %s => %s",
+ $task_ref->{'path'},
+ $task_ref->{'source'},
+ )
+ }
+ else {
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile (duplicates previous action)\n";
+ }
+ return;
+ }
+ }
+ elsif ( $task_ref->{'action'} eq 'remove' ) {
+ if ( $task_ref->{'source'} eq $oldfile ) {
+ # no need to remove a link we are going to recreate
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile (reverts previous action)\n";
+ }
+ $Link_Task_For{$newfile}->{'action'} = 'skip';
+ delete $Link_Task_For{$newfile};
+ return;
+ }
+ # we may need to remove a link to replace it so continue
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ # creating a new link
+ if ($Option{'verbose'}) {
+ warn "LINK: $newfile => $oldfile\n";
+ }
+ my $task = {
+ action => 'create',
+ type => 'link',
+ path => $newfile,
+ source => $oldfile,
+ };
+ push @Tasks, $task;
+ $Link_Task_For{$newfile} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_unlink()
+# Purpose : wrap 'unlink' operation for later processing
+# Parameters: $file => the file to unlink
+# Returns : n/a
+# Throws : error if this clashes with an existing planned operation
+# Comments : will remove an existing planned link
+#============================================================================
+sub do_unlink {
+
+ my ($file) = @_;
+
+ if (exists $Link_Task_For{$file} ) {
+ my $task_ref = $Link_Task_For{$file};
+ if ( $task_ref->{'action'} eq 'remove' ) {
+ if ($Option{'verbose'}) {
+ warn "UNLINK: $file (duplicates previous action)\n";
+ }
+ return;
+ }
+ elsif ( $task_ref->{'action'} eq 'create' ) {
+            # no need to create a link only to remove it again
+ if ($Option{'verbose'}) {
+ warn "UNLINK: $file (reverts previous action)\n";
+ }
+ $Link_Task_For{$file}->{'action'} = 'skip';
+ delete $Link_Task_For{$file};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+    if ( exists $Dir_Task_For{$file} and $Dir_Task_For{$file}->{'action'} eq 'create' ) {
+ internal_error(
+ "new unlink operation clashes with planned operation: %s dir %s",
+ $Dir_Task_For{$file}->{'action'},
+ $file
+ );
+ }
+
+ # remove the link
+ if ($Option{'verbose'}) {
+ #warn "UNLINK: $file (".(caller())[2].")\n";
+ warn "UNLINK: $file\n";
+ }
+
+ my $source = readlink $file or error("could not readlink $file");
+
+ my $task = {
+ action => 'remove',
+ type => 'link',
+ path => $file,
+ source => $source,
+ };
+ push @Tasks, $task;
+ $Link_Task_For{$file} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_mkdir()
+# Purpose : wrap 'mkdir' operation
+# Parameters: $dir => the directory to create
+# Returns : n/a
+# Throws : fatal exception if operation fails
+# Comments : outputs a message if 'verbose' option is set
+# : does not perform operation if 'simulate' option is set
+# : cleans up operations that undo previous operations
+#============================================================================
+sub do_mkdir {
+ my ($dir) = @_;
+
+ if ( exists $Link_Task_For{$dir} ) {
+
+ my $task_ref = $Link_Task_For{$dir};
+
+ if ($task_ref->{'action'} eq 'create') {
+ internal_error(
+ "new dir clashes with planned new link (%s => %s)",
+ $task_ref->{'path'},
+ $task_ref->{'source'},
+ );
+ }
+ elsif ($task_ref->{'action'} eq 'remove') {
+ # may need to remove a link before creating a directory so continue
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ( exists $Dir_Task_For{$dir} ) {
+
+ my $task_ref = $Dir_Task_For{$dir};
+
+ if ($task_ref->{'action'} eq 'create') {
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir (duplicates previous action)\n";
+ }
+ return;
+ }
+ elsif ($task_ref->{'action'} eq 'remove') {
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir (reverts previous action)\n";
+ }
+ $Dir_Task_For{$dir}->{'action'} = 'skip';
+ delete $Dir_Task_For{$dir};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ($Option{'verbose'}) {
+ warn "MKDIR: $dir\n";
+ }
+ my $task = {
+ action => 'create',
+ type => 'dir',
+ path => $dir,
+ source => undef,
+ };
+ push @Tasks, $task;
+ $Dir_Task_For{$dir} = $task;
+
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : do_rmdir()
+# Purpose : wrap 'rmdir' operation
+# Parameters: $dir => the directory to remove
+# Returns : n/a
+# Throws : fatal exception if operation fails
+# Comments : outputs a message if 'verbose' option is set
+# : does not perform operation if 'simulate' option is set
+#============================================================================
+sub do_rmdir {
+ my ($dir) = @_;
+
+ if (exists $Link_Task_For{$dir} ) {
+ my $task_ref = $Link_Task_For{$dir};
+ internal_error(
+ "rmdir clashes with planned operation: %s link %s => %s",
+ $task_ref->{'action'},
+ $task_ref->{'path'},
+ $task_ref->{'source'}
+ );
+ }
+
+ if (exists $Dir_Task_For{$dir} ) {
+        my $task_ref = $Dir_Task_For{$dir};
+
+        if ($task_ref->{'action'} eq 'remove' ) {
+            if ($Option{'verbose'}) {
+                warn "RMDIR $dir (duplicates previous action)\n";
+            }
+            return;
+        }
+        elsif ($task_ref->{'action'} eq 'create' ) {
+            if ($Option{'verbose'}) {
+                warn "RMDIR $dir (reverts previous action)\n";
+            }
+            $Dir_Task_For{$dir}->{'action'} = 'skip';
+            delete $Dir_Task_For{$dir};
+ return;
+ }
+ else {
+ internal_error("bad task action: $task_ref->{'action'}");
+ }
+ }
+
+ if ($Option{'verbose'}) {
+ warn "RMDIR $dir\n";
+ }
+ my $task = {
+ action => 'remove',
+ type => 'dir',
+ path => $dir,
+ source => '',
+ };
+ push @Tasks, $task;
+ $Dir_Task_For{$dir} = $task;
+
+ return;
+}
+
+#############################################################################
+#
+# General Utilities: nothing stow specific here.
+#
+#############################################################################
+
+#===== SUBROUTINE ============================================================
+# Name : strip_quotes
+# Purpose : remove matching outer quotes from the given string
+# Parameters: $string => the string to strip
+# Returns : the string with any matching outer quotes removed
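+# : e.g. strip_quotes(q{"foo"}) => 'foo'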
+# Throws : no exceptions
+# Comments : none
+#=============================================================================
+sub strip_quotes {
+
+ my ($string) = @_;
+
+ if ($string =~ m{\A\s*'(.*)'\s*\z}) {
+ return $1;
+ }
+ elsif ($string =~ m{\A\s*"(.*)"\s*\z}) {
+ return $1
+ }
+ return $string;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : relative_path()
+# Purpose : find the relative path between two given paths
+# Parameters: path1 => a directory path
+# : path2 => a directory path
+# Returns : path2 relative to path1
+# Throws : n/a
+# Comments : only used once by main interactive routine
+# : factored out for testing
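+# : e.g. relative_path('a/b/c', 'a/d') => '../../d'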
+#============================================================================
+sub relative_path {
+
+ my ($path1, $path2) = @_;
+
+ my (@path1) = split m{/+}, $path1;
+ my (@path2) = split m{/+}, $path2;
+
+ # drop common prefixes until we find a difference
+ PREFIX:
+ while ( @path1 && @path2 ) {
+ last PREFIX if $path1[0] ne $path2[0];
+ shift @path1;
+ shift @path2;
+ }
+
+ # prepend one '..' to $path2 for each component of $path1
+ while ( shift @path1 ) {
+ unshift @path2, '..';
+ }
+
+ return join_paths(@path2);
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : join_paths()
+# Purpose : concatenates given paths
+# Parameters: path1, path2, ... => paths
+# Returns : concatenation of given paths
+# Throws : n/a
+# Comments : factors out redundant path elements:
+# : '//' => '/' and 'a/b/../c' => 'a/c'
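+# : e.g. join_paths('a/b/c', '../d') => 'a/b/d'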
+#============================================================================
+sub join_paths {
+
+ my @paths = @_;
+
+ # weed out empty components and concatenate
+ my $result = join '/', grep {!/\A\z/} @paths;
+
+ # factor out back references and remove redundant /'s)
+ my @result = ();
+ PART:
+ for my $part ( split m{/+}, $result) {
+ next PART if $part eq '.';
+ if (@result && $part eq '..' && $result[-1] ne '..') {
+ pop @result;
+ }
+ else {
+ push @result, $part;
+ }
+ }
+
+ return join '/', @result;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : parent
+# Purpose : find the parent of the given path
+# Parameters: @path => components of the path
+# Returns : returns a path string
+# Throws : n/a
+# Comments : allows you to send multiple chunks of the path
+# : (this feature is currently not used)
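+# : e.g. parent('a/b/c') => 'a/b'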
+#============================================================================
+sub parent {
+    my $path = join '/', @_;
+ my @elts = split m{/+}, $path;
+ pop @elts;
+ return join '/', @elts;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : internal_error()
+# Purpose : output internal error message in a consistent form and die
+# Parameters: $message => error message to output
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
+sub internal_error {
+ my ($format,@args) = @_;
+ die "$ProgramName: INTERNAL ERROR: ".sprintf($format,@args)."\n",
+ "This _is_ a bug. Please submit a bug report so we can fix it:-)\n";
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : error()
+# Purpose : output error message in a consistent form and die
+# Parameters: $message => error message to output
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
+sub error {
+ my ($format,@args) = @_;
+ die "$ProgramName: ERROR: ".sprintf($format,@args)." ($!)\n";
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : version()
+# Purpose : print this program's version and exit
+# Parameters: none
+# Returns : n/a
+# Throws : n/a
+# Comments : none
+#============================================================================
sub version {
- print "$ProgramName (GNU Stow) version $Version\n";
- exit(0);
+ print "$ProgramName (GNU Stow) version $Version\n";
+ exit 0;
}
+1; # return true so we can load this script as a module during unit testing
+
# Local variables:
# mode: perl
# End:
+# vim: ft=perl
diff --git a/stow.info b/stow.info
new file mode 100644
index 0000000..7b6b9de
--- /dev/null
+++ b/stow.info
@@ -0,0 +1,1335 @@
+This is stow.info, produced by makeinfo version 4.12 from stow.texi.
+
+INFO-DIR-SECTION Administration
+START-INFO-DIR-ENTRY
+* Stow: (stow). GNU Stow.
+END-INFO-DIR-ENTRY
+
+ This file describes GNU Stow version 2.0.2, a program for managing
+the installation of software packages.
+
+ Software and documentation Copyright (C) 1993, 1994, 1995, 1996 by
+Bob Glickstein <bobg+stow@zanshin.com>. Copyright (C) 2000, 2001
+Guillaume Morin <gmorin@gnu.org>. Copyright (C) 2007 Kahlil (Kal)
+Hodgson <kahlil@internode.on.net>.
+
+ Permission is granted to make and distribute verbatim copies of this
+manual provided the copyright notice and this permission notice are
+preserved on all copies.
+
+ Permission is granted to copy and distribute modified versions of
+this manual under the conditions for verbatim copying, provided also
+that the section entitled "GNU General Public License" is included with
+the modified manual, and provided that the entire resulting derived
+work is distributed under the terms of a permission notice identical to
+this one.
+
+ Permission is granted to copy and distribute translations of this
+manual into another language, under the above conditions for modified
+versions, except that this permission notice may be stated in a
+translation approved by the Free Software Foundation.
+
+
+File: stow.info, Node: Top, Next: Introduction, Prev: (dir), Up: (dir)
+
+ This manual describes GNU Stow 2.0.2, a program for managing the
+installation of software packages.
+
+* Menu:
+
+* Introduction:: Description of Stow.
+* Terminology:: Terms used by this manual.
+* Invoking Stow:: Option summary.
+* Installing Packages:: Using Stow to install.
+* Deleting Packages:: Using Stow to uninstall.
+* Conflicts:: When Stow can't stow.
+* Deferred Operation:: Using two passes to stow.
+* Mixing Operations:: Multiple actions per invocation.
+* Multiple Stow Directories:: Further segregating software.
+* Target Maintenance:: Cleaning up mistakes.
+* Resource Files:: Setting default command line options.
+* Compile-time vs Install-time:: Faking out `make install'.
+* Bootstrapping:: When stow and perl are not yet stowed.
+* Reporting Bugs:: How, what, where, and when to report.
+* Known Bugs:: Don't report any of these.
+* GNU General Public License:: Copying terms.
+* Index:: Index of concepts.
+
+ --- The Detailed Node Listing ---
+
+Compile-time and install-time
+
+* GNU Emacs::
+* Other FSF Software::
+* Cygnus Software::
+* Perl and Perl 5 Modules::
+
+
+File: stow.info, Node: Introduction, Next: Terminology, Prev: Top, Up: Top
+
+1 Introduction
+**************
+
+Stow is a tool for managing the installation of multiple software
+packages in the same run-time directory tree. One historical difficulty
+of this task has been the need to administer, upgrade, install, and
+remove files in independent packages without confusing them with other
+files sharing the same file system space. For instance, it is common to
+install Perl and Emacs in `/usr/local'. When one does so, one winds up
+with the following files(1) in `/usr/local/man/man1':
+
+ a2p.1
+ ctags.1
+ emacs.1
+ etags.1
+ h2ph.1
+ perl.1
+ s2p.1
+
+Now suppose it's time to uninstall Perl. Which man pages get removed?
+Obviously `perl.1' is one of them, but it should not be the
+administrator's responsibility to memorize the ownership of individual
+files by separate packages.
+
+ The approach used by Stow is to install each package into its own
+tree, then use symbolic links to make it appear as though the files are
+installed in the common tree. Administration can be performed in the
+package's private tree in isolation from clutter from other packages.
+Stow can then be used to update the symbolic links. The structure of
+each private tree should reflect the desired structure in the common
+tree; i.e. (in the typical case) there should be a `bin' directory
+containing executables, a `man/man1' directory containing section 1 man
+pages, and so on.
+
+ Stow was inspired by Carnegie Mellon's Depot program, but is
+substantially simpler and safer. Whereas Depot required database files
+to keep things in sync, Stow stores no extra state between runs, so
+there's no danger (as there was in Depot) of mangling directories when
+file hierarchies don't match the database. Also unlike Depot, Stow will
+never delete any files, directories, or links that appear in a Stow
+directory (e.g., `/usr/local/stow/emacs'), so it's always possible to
+rebuild the target tree (e.g., `/usr/local').
+
+ For information about the latest version of Stow, you can refer to
+http://www.gnu.org/software/stow/.
+
+ ---------- Footnotes ----------
+
+ (1) As of Perl 4.036 and Emacs 19.22.
+
+
+File: stow.info, Node: Terminology, Next: Invoking Stow, Prev: Introduction, Up: Top
+
+2 Terminology
+*************
+
+ A "package" is a related collection of files and directories that
+you wish to administer as a unit -- e.g., Perl or Emacs -- and that
+needs to be installed in a particular directory structure -- e.g., with
+`bin', `lib', and `man' subdirectories.
+
+ A "target directory" is the root of a tree in which one or more
+packages wish to _appear_ to be installed. A common, but by no means
+the only such location is `/usr/local'. The examples in this manual
+will use `/usr/local' as the target directory.
+
+ A "stow directory" is the root of a tree containing separate
+packages in private subtrees. When Stow runs, it uses the current
+directory as the default stow directory. The examples in this manual
+will use `/usr/local/stow' as the stow directory, so that individual
+packages will be, for example, `/usr/local/stow/perl' and
+`/usr/local/stow/emacs'.
+
+ An "installation image" is the layout of files and directories
+required by a package, relative to the target directory. Thus, the
+installation image for Perl includes: a `bin' directory containing
+`perl' and `a2p' (among others); an `info' directory containing Texinfo
+documentation; a `lib/perl' directory containing Perl libraries; and a
+`man/man1' directory containing man pages.
+
+ A "package directory" is the root of a tree containing the
+installation image for a particular package. Each package directory
+must reside in a stow directory -- e.g., the package directory
+`/usr/local/stow/perl' must reside in the stow directory
+`/usr/local/stow'. The "name" of a package is the name of its
+directory within the stow directory -- e.g., `perl'.
+
+ Thus, the Perl executable might reside in
+`/usr/local/stow/perl/bin/perl', where `/usr/local' is the target
+directory, `/usr/local/stow' is the stow directory,
+`/usr/local/stow/perl' is the package directory, and `bin/perl' within
+is part of the installation image.
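+
+ Putting these terms together, a stowed Perl package might contain a
+tree like the following (file names are illustrative):
+
+     /usr/local/stow/perl/bin/perl
+     /usr/local/stow/perl/bin/a2p
+     /usr/local/stow/perl/info/...
+     /usr/local/stow/perl/lib/perl/...
+     /usr/local/stow/perl/man/man1/perl.1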
+
+ A "symlink" is a symbolic link. A symlink can be "relative" or
+"absolute". An absolute symlink names a full path; that is, one
+starting from `/'. A relative symlink names a relative path; that is,
+one not starting from `/'. The target of a relative symlink is
+computed starting from the symlink's own directory. Stow only creates
+relative symlinks.
+
+
+File: stow.info, Node: Invoking Stow, Next: Installing Packages, Prev: Terminology, Up: Top
+
+3 Invoking Stow
+***************
+
+The syntax of the `stow' command is:
+
+ stow [OPTIONS] [ACTION FLAG] PACKAGE ...
+
+Each PACKAGE is the name of a package (e.g., `perl') in the stow
+directory that we wish to install into (or delete from) the target
+directory. The default action is to install the given packages,
+although alternate actions may be specified by preceding the package
+name(s) with an ACTION FLAG. Unless otherwise specified, the stow
+directory is assumed to be the current directory and the target
+directory is assumed to be the parent of the current directory, so it
+is typical to execute `stow' from the directory `/usr/local/stow'.
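+
+ For example, to install the `perl' and `emacs' packages into
+`/usr/local' using the default stow and target directories:
+
+     cd /usr/local/stow
+     stow perl emacs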
+
+The following options are supported:
+
+`-d DIR'
+`--dir=DIR'
+ Set the stow directory to DIR instead of the current directory.
+ This also has the effect of making the default target directory be
+ the parent of DIR.
+
+`-t DIR'
+`--target=DIR'
+ Set the target directory to DIR instead of the parent of the stow
+ directory.
+
+`--ignore='<regex>''
+ This (repeatable) option lets you suppress acting on files that
+ match the given perl regular expression. For example, using the
+ options
+
+ --ignore='~' --ignore='\.#.*'
+
+ will cause stow to ignore emacs and CVS backup files.
+
+ Note that the regular expression is anchored to the end of the
+ filename, because this is what you will want to do most of the
+ time.
+
+`--defer='<regex>''
+ This (repeatable) option lets you defer stowing a file matching
+ the given regular expression, if that file is already stowed by
+ another package. For example, the following options
+
+ --defer='man' --defer='info'
+
+ will cause stow to skip over pre-existing man and info pages.
+
+     Equivalently, you could use --defer='man|info' since the argument
+ is just a Perl regex.
+
+ Note that the regular expression is anchored to the beginning of
+ the path relative to the target directory, because this is what
+ you will want to do most of the time.
+
+`--override='<regex>''
+ This (repeatable) option forces any file matching the regular
+ expression to be stowed, even if the file is already stowed to
+ another package. For example, the following options
+
+ --override='man' --override='info'
+
+ will permit stow to overwrite links that point to pre-existing man
+ and info pages that are owned by stow and would otherwise cause a
+ conflict.
+
+ The regular expression is anchored to the beginning of the path
+ relative to the target directory, because this is what you will
+ want to do most of the time.
+
+`-n'
+`--no'
+`--simulate'
+ Do not perform any operations that modify the file system; in
+ combination with `-v' can be used to merely show what would happen.
+
+`-v'
+`--verbose[=N]'
+ Send verbose output to standard error describing what Stow is
+ doing. Verbosity levels are 0, 1, 2, and 3; 0 is the default.
+ Using `-v' or `--verbose' increases the verbosity by one; using
+ `--verbose=N' sets it to N.
+
+`-p'
+`--compat'
+ Scan the whole target tree when unstowing. By default, only
+ directories specified in the "installation image" are scanned
+     during an unstow operation. Scanning the whole tree can be
+     prohibitively slow if your target tree is very large. This option
+ restores the legacy behaviour; however, the `--badlinks' option
+ may be a better way of ensuring that your installation does not
+ have any dangling symlinks.
+
+`-V'
+`--version'
+ Show Stow version number, and exit.
+
+`-h'
+`--help'
+ Show Stow command syntax, and exit.
+
+ The following ACTION FLAGS are supported:
+
+`-D'
+`--delete'
+ Delete (unstow) the package name(s) that follow this option from
+ the "target directory". This option may be repeated any number of
+ times.
+
+`-R'
+`--restow'
+ Restow (first unstow, then stow again) the package names that
+ follow this option. This is useful for pruning obsolete symlinks
+ from the target tree after updating the software in a package.
+ This option may be repeated any number of times.
+
+`-S'
+`--stow'
+     Explicitly stow the package name(s) that follow this option. May
+ be omitted if you are not using the `-D' or `-R' options in the
+ same invocation. See *Note Mixing Operations::, for details of
+ when you might like to use this feature. This option may be
+ repeated any number of times.
+
+ The following options are useful for cleaning up your target tree:
+
+`-b'
+`--badlinks'
+ Checks target directory for bogus symbolic links. That is, links
+ that point to non-existent files.
+
+`-a'
+`--aliens'
+ Checks for files in the target directory that are not symbolic
+ links. The target directory should be managed by stow alone,
+ except for directories that contain a `.stow' file.
+
+`-l'
+`--list'
+     Will display the target package for every symbolic link in the
+     stow target directory.
+
+ The following options are deprecated as of Stow Version 2:
+`-c'
+`--conflicts'
+ Print any conflicts that are encountered. This option implies
+ `-n', and is used to search for all conflicts that might arise
+ from an actual Stow operation.
+
+ This option is deprecated as conflicts are now printed by default
+ and no operations will be performed if any conflicts are detected.
+
+ See *note Resource Files::, for a way to set default values for any
+of these options.
+
+
+File: stow.info, Node: Installing Packages, Next: Deleting Packages, Prev: Invoking Stow, Up: Top
+
+4 Installing Packages
+*********************
+
+The default action of Stow is to install a package. This means creating
+symlinks in the target tree that point into the package tree. Stow
+attempts to do this with as few symlinks as possible; in other words, if
+Stow can create a single symlink that points to an entire subtree within
+the package tree, it will choose to do that rather than create a
+directory in the target tree and populate it with symlinks.
+
+ For example, suppose that no packages have yet been installed in
+`/usr/local'; it's completely empty (except for the `stow'
+subdirectory, of course). Now suppose the Perl package is installed.
+Recall that it includes the following directories in its installation
+image: `bin'; `info'; `lib/perl'; `man/man1'. Rather than creating
+the directory `/usr/local/bin' and populating it with symlinks to
+`../stow/perl/bin/perl' and `../stow/perl/bin/a2p' (and so on), Stow
+will create a single symlink, `/usr/local/bin', which points to
+`stow/perl/bin'. In this way, it still works to refer to
+`/usr/local/bin/perl' and `/usr/local/bin/a2p', and fewer symlinks have
+been created. This is called "tree folding", since an entire subtree
+is "folded" into a single symlink.
+
+ To complete this example, Stow will also create the symlink
+`/usr/local/info' pointing to `stow/perl/info'; the symlink
+`/usr/local/lib' pointing to `stow/perl/lib'; and the symlink
+`/usr/local/man' pointing to `stow/perl/man'.
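+
+ Schematically, the newly-stowed target tree then looks like this:
+
+     /usr/local/bin  -> stow/perl/bin
+     /usr/local/info -> stow/perl/info
+     /usr/local/lib  -> stow/perl/lib
+     /usr/local/man  -> stow/perl/man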
+
+ Now suppose that instead of installing the Perl package into an empty
+target tree, the target tree is not empty to begin with. Instead, it
+contains several files and directories installed under a different
+system-administration philosophy. In particular, `/usr/local/bin'
+already exists and is a directory, as are `/usr/local/lib' and
+`/usr/local/man/man1'. In this case, Stow will descend into
+`/usr/local/bin' and create symlinks to `../stow/perl/bin/perl' and
+`../stow/perl/bin/a2p' (etc.), and it will descend into
+`/usr/local/lib' and create the tree-folding symlink `perl' pointing to
+`../stow/perl/lib/perl', and so on. As a rule, Stow only descends as
+far as necessary into the target tree when it can create a tree-folding
+symlink.
+
+ The time often comes when a tree-folding symlink has to be undone
+because another package uses one or more of the folded subdirectories in
+its installation image. This operation is called "splitting open" or
+"unfolding" a folded tree. It involves removing the original symlink
+from the target tree, creating a true directory in its place, and then
+populating the new directory with symlinks to the newly-installed
+package _and_ to the old package that used the old symlink. For
+example, suppose that after installing Perl into an empty `/usr/local',
+we wish to install Emacs. Emacs's installation image includes a `bin'
+directory containing the `emacs' and `etags' executables, among others.
+Stow must make these files appear to be installed in `/usr/local/bin',
+but presently `/usr/local/bin' is a symlink to `stow/perl/bin'. Stow
+therefore takes the following steps: the symlink `/usr/local/bin' is
+deleted; the directory `/usr/local/bin' is created; links are made from
+`/usr/local/bin' to `../stow/emacs/bin/emacs' and
+`../stow/emacs/bin/etags'; and links are made from `/usr/local/bin' to
+`../stow/perl/bin/perl' and `../stow/perl/bin/a2p'.
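+
+ Schematically, `/usr/local/bin' ends up containing symlinks such as:
+
+     /usr/local/bin/emacs -> ../stow/emacs/bin/emacs
+     /usr/local/bin/etags -> ../stow/emacs/bin/etags
+     /usr/local/bin/perl  -> ../stow/perl/bin/perl
+     /usr/local/bin/a2p   -> ../stow/perl/bin/a2p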
+
+ When splitting open a folded tree, Stow makes sure that the symlink
+it is about to remove points inside a valid package in the current stow
+directory. _Stow will never delete anything that it doesn't own_.
+Stow "owns" everything living in the target tree that points into a
+package in the stow directory. Anything Stow owns, it can recompute if
+lost: symlinks that point into a package in the stow directory, or
+directories that only contain symlinks that stow "owns". Note that
+by this definition, Stow doesn't "own" anything _in_ the stow directory
+or in any of the packages.
+
+ If Stow needs to create a directory or a symlink in the target tree
+and it cannot because that name is already in use and is not owned by
+Stow, then a "conflict" has arisen. *Note Conflicts::.
+
+
+File: stow.info, Node: Deleting Packages, Next: Conflicts, Prev: Installing Packages, Up: Top
+
+5 Deleting Packages
+*******************
+
+When the `-D' option is given, the action of Stow is to delete a
+package from the target tree. Note that Stow will not delete anything
+it doesn't "own". Deleting a package does _not_ mean removing it from
+the stow directory or discarding the package tree.
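+
+ For example, to delete (unstow) the Perl package used in the earlier
+examples:
+
+     cd /usr/local/stow
+     stow -D perl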
+
+ To delete a package, Stow recursively scans the target tree,
+skipping over any directory that is not included in the installation
+image.(1) For example, if the target directory is `/usr/local' and the
+installation image for the package being deleted has only a `bin'
+directory and a `man' directory at the top level, then we only scan
+`/usr/local/bin' and `/usr/local/man', and not `/usr/local/lib' or
+`/usr/local/share', or for that matter `/usr/local/stow'. Any symlink
+it finds that points into the package being deleted is removed. Any
+directory that contained only symlinks to the package being deleted is
+removed. Any directory that, after removing symlinks and empty
+subdirectories, contains only symlinks to a single other package, is
+considered to be a previously "folded" tree that was "split open." Stow
+will re-fold the tree by removing the symlinks to the surviving package,
+removing the directory, then linking the directory back to the surviving
+package.
+
+ ---------- Footnotes ----------
+
+ (1) This approach was introduced in version 2 of GNU Stow.
+Previously, the whole target tree was scanned and stow directories were
+explicitly omitted. This became problematic when dealing with very
+large installations. The only situation where this is useful is if you
+accidentally delete a directory in the package tree, leaving you with a
+whole bunch of dangling links. Note that you can enable the old
+approach with the `-p' option. Alternatively, you can use the
+`--badlinks' option to get stow to search for dangling links in your
+target tree and remove the offenders manually.
+
+
+File: stow.info, Node: Conflicts, Next: Deferred Operation, Prev: Deleting Packages, Up: Top
+
+5.1 Conflicts
+=============
+
+If, during installation, a file or symlink exists in the target tree and
+has the same name as something Stow needs to create, and if the
+existing name is not a folded tree that can be split open, then a
+"conflict" has arisen. A conflict also occurs if a directory exists
+where Stow needs to place a symlink to a non-directory. On the other
+hand, if the existing name is merely a symlink that already points
+where Stow needs it to, then no conflict has occurred. (Thus it is
+harmless to install a package that has already been installed.)
+
+ A conflict causes Stow to exit immediately and print a warning
+(unless `-c' is given), even if that means aborting an installation in
+mid-package.
+
+ When running Stow with the `-n' or `-c' options, no actual
+filesystem-modifying operations take place. Thus if a folded tree would
+have been split open, but instead was left in place because `-n' or
+`-c' was used, then Stow will report a "false conflict", since the
+directory that Stow was expecting to populate has remained an
+un-populatable symlink.
+
+
+File: stow.info, Node: Deferred Operation, Next: Mixing Operations, Prev: Conflicts, Up: Top
+
+6 Deferred Operation
+********************
+
+For complex packages, scanning the stow and target trees in tandem, and
+deciding whether to make directories or links, split-open or fold
+directories, can actually take a long time (a number of seconds).
+Moreover, an accurate analysis of potential conflicts requires us to
+take into account all of these operations.
+
+ Accidentally stowing a package that will result in a conflict could
+otherwise leave the target tree only partially modified.  For these
+reasons, Stow calculates the complete set of filesystem operations in a
+first pass, and only carries them out in a second pass once it is known
+that no conflicts will arise (*note Conflicts::).
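+
+ If you simply want to see the full set of operations without
+modifying the filesystem, combine the simulation and verbose options
+(*note Invoking Stow::):
+
+     stow -n -v perl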
+
+
+File: stow.info, Node: Mixing Operations, Next: Multiple Stow Directories, Prev: Deferred Operation, Up: Top
+
+7 Mixing Operations
+*******************
+
+Since Version 2.0, multiple distinct actions can be specified in a
+single invocation of GNU Stow. For example, to update an installation
+of Emacs from version 21.3 to 21.4a you can now do the following:
+
+ stow -D emacs-21.3 -S emacs-21.4a
+
+which will replace emacs-21.3 with emacs-21.4a using a single
+invocation.
+
+ This is much faster and cleaner than performing two separate
+invocations of stow, because redundant folding/unfolding operations can
+be factored out. In addition, all the operations are calculated and
+merged before being executed *note Deferred Operation::, so the amount
+of of time in which GNU Emacs is unavailable is minimised.
+
+ You can mix and match any number of actions, for example,
+
+ stow -S pkg1 pkg2 -D pkg3 pkg4 -S pkg5 -R pkg6
+
+will unstow pkg3, pkg4 and pkg6, then stow pkg1, pkg2, pkg5 and pkg6.
+
+
+File: stow.info, Node: Multiple Stow Directories, Next: Target Maintenance, Prev: Mixing Operations, Up: Top
+
+8 Multiple Stow Directories
+***************************
+
+If there are two or more system administrators who wish to maintain
+software separately, or if there is any other reason to want two or more
+stow directories, it can be done by creating a file named `.stow' in
+each stow directory. The presence of `/usr/local/foo/.stow' informs
+Stow that, though `foo' is not the current stow directory, and though
+it is a subdirectory of the target directory, nevertheless it is _a_
+stow directory and as such Stow doesn't "own" anything in it (*note
+Installing Packages::). This will protect the contents of `foo' from a
+`stow -D', for instance.
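+
+ For example, to mark `/usr/local/foo' as a separate stow directory,
+simply create an empty marker file:
+
+     touch /usr/local/foo/.stow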
+
+ XXX is this still true? XXX
+
+ When multiple stow directories share a target tree, the effectiveness
+of Stow is reduced. If a tree-folding symlink is encountered and needs
+to be split open during an installation, but the symlink points into
+the wrong stow directory, Stow will report a conflict rather than split
+open the tree (because it doesn't consider itself to own the symlink,
+and thus cannot remove it).
+
+
+File: stow.info, Node: Target Maintenance, Next: Resource Files, Prev: Multiple Stow Directories, Up: Top
+
+9 Target Maintenance
+********************
+
+From time to time you will need to clean up your target tree. Stow
+includes three operational modes that perform checks that would
+generally be too expensive to perform during normal stow execution.
+
+ The `-l' option to `chkstow' will give you a listing of every
+package name that has already been stowed; you should be able to diff
+this against your directory listing using `bash':
+
+     cd build/scripts
+     diff <(../bin/chkstow -l) <(ls -1)
+
+
+File: stow.info, Node: Resource Files, Next: Compile-time vs Install-time, Prev: Target Maintenance, Up: Top
+
+10 Resource Files
+*****************
+
+Default command line options may be set in `.stowrc' (current
+directory) or `~/.stowrc' (home directory). These are parsed in that
+order, and effectively prepended to your command line. This feature can
+be used for some interesting effects.
+
+ For example, suppose your site uses more than one stow directory,
+perhaps in order to share around responsibilities with a number of
+systems administrators. One of the administrators might have the
+following in their `~/.stowrc' file:
+
+ --dir=/usr/local/stow2
+ --target=/usr/local
+ --ignore='~'
+ --ignore='^CVS'
+
+ so that the `stow' command will default to operating on the
+`/usr/local/stow2' directory, with `/usr/local' as the target, and
+ignoring vi backup files and CVS directories.
+
+ If you had a stow directory `/usr/local/stow/perl-extras' that was
+only used for Perl modules, then you might place the following in
+`/usr/local/stow/perl-extras/.stowrc':
+
+ --dir=/usr/local/stow/perl-extras
+ --target=/usr/local
+ --override=bin
+ --override=man
+ --ignore='perllocal\.pod'
+ --ignore='\.packlist'
+ --ignore='\.bs'
+
+ so that when you are in the `/usr/local/stow/perl-extras'
+directory, `stow' will regard any subdirectories as stow packages, with
+`/usr/local' as the target (rather than the immediate parent directory
+`/usr/local/stow'), overriding any pre-existing links to bin files or
+man pages, and ignoring some cruft that gets installed by default.
+
+
+File: stow.info, Node: Compile-time vs Install-time, Next: Bootstrapping, Prev: Resource Files, Up: Top
+
+11 Compile-time vs Install-time
+*******************************
+
+Software whose installation is managed with Stow needs to be installed
+in one place (the package directory, e.g. `/usr/local/stow/perl') but
+needs to appear to run in another place (the target tree, e.g.,
+`/usr/local'). Why is this important? What's wrong with Perl, for
+instance, looking for its files in `/usr/local/stow/perl' instead of in
+`/usr/local'?
+
+ The answer is that there may be another package, e.g.,
+`/usr/local/stow/perl-extras', stowed under `/usr/local'. If Perl is
+configured to find its files in `/usr/local/stow/perl', it will never
+find the extra files in the `perl-extras' package, even though they're
+intended to be found by Perl. On the other hand, if Perl looks for its
+files in `/usr/local', then it will find the intermingled Perl and
+`perl-extras' files.
+
+ This means that when you compile a package, you must tell it the
+location of the run-time, or target tree; but when you install it, you
+must place it in the stow tree.
+
+ Some software packages allow you to specify, at compile-time,
+separate locations for installation and for run-time. Perl is one such
+package; *Note Perl and Perl 5 Modules::. Others allow you to compile
+the package, then give a different destination in the `make install'
+step without causing the binaries or other files to get rebuilt. Most
+GNU software falls into this category; Emacs is a notable exception.
+See *note GNU Emacs::, and *note Other FSF Software::.
+
+ Still other software packages cannot abide the idea of separate
+installation and run-time locations at all. If you try to `make
+install prefix=/usr/local/stow/FOO', then first the whole package will
+be recompiled to hardwire the `/usr/local/stow/FOO' path. With these
+packages, it is best to compile normally, then run `make -n install',
+which should report all the steps needed to install the just-built
+software. Place this output into a file, edit the commands in the file
+to remove recompilation steps and to reflect the Stow-based
+installation location, and execute the edited file as a shell script in
+place of `make install'. Be sure to execute the script using the same
+shell that `make install' would have used.
+
+ (If you use GNU Make and a shell [such as GNU bash] that understands
+`pushd' and `popd', you can do the following:
+
+ 1. Replace all lines matching `make[N]: Entering directory `DIR''
+ with `pushd DIR'.
+
+ 2. Replace all lines matching `make[N]: Leaving directory `DIR'' with
+ `popd'.
+
+ 3. Delete all lines matching `make[N]: Nothing to be done for RULE'.
+
+ Then find other lines in the output containing `cd' or `make'
+commands and rewrite or delete them. In particular, you should be able
+to delete sections of the script that resemble this:
+
+ for i in DIR_1 DIR_2 ...; do \
+ (cd $i; make ARGS ...) \
+ done
+
+Note, that's "should be able to," not "can." Be sure to modulate these
+guidelines with plenty of your own intelligence.
+
+ The details of stowing some specific packages are described in the
+following sections.
+
+* Menu:
+
+* GNU Emacs::
+* Other FSF Software::
+* Cygnus Software::
+* Perl and Perl 5 Modules::
+
+
+File: stow.info, Node: GNU Emacs, Next: Other FSF Software, Prev: Compile-time vs Install-time, Up: Compile-time vs Install-time
+
+11.1 GNU Emacs
+==============
+
+Although the Free Software Foundation has many enlightened practices
+regarding Makefiles and software installation (see *note Other FSF
+Software::), Emacs, its flagship program, doesn't quite follow the
+rules. In particular, most GNU software allows you to write:
+
+ make
+ make install prefix=/usr/local/stow/PACKAGE
+
+If you try this with Emacs, then the new value for `prefix' in the
+`make install' step will cause some files to get recompiled with the
+new value of `prefix' wired into them. In Emacs 19.23 and later,(1)
+the way to work around this problem is:
+
+ make
+ make install-arch-dep install-arch-indep prefix=/usr/local/stow/emacs
+
+ In 19.22 and some prior versions of Emacs, the workaround was:
+
+ make
+ make do-install prefix=/usr/local/stow/emacs
+
+ ---------- Footnotes ----------
+
+ (1) As I write this, the current version of Emacs is 19.31.
+
+
+File: stow.info, Node: Other FSF Software, Next: Cygnus Software, Prev: GNU Emacs, Up: Compile-time vs Install-time
+
+11.2 Other FSF Software
+=======================
+
+The Free Software Foundation, the organization behind the GNU project,
+has been unifying the build procedure for its tools for some time.
+Thanks to its tools `autoconf' and `automake', most packages now
+respond well to these simple steps, with no other intervention
+necessary:
+
+ ./configure OPTIONS
+ make
+ make install prefix=/usr/local/stow/PACKAGE
+
+ Hopefully, these tools can evolve to be aware of Stow-managed
+packages, such that providing an option to `configure' can allow `make'
+and `make install' steps to work correctly without needing to "fool"
+the build process.
+
+
+File: stow.info, Node: Cygnus Software, Next: Perl and Perl 5 Modules, Prev: Other FSF Software, Up: Compile-time vs Install-time
+
+11.3 Cygnus Software
+====================
+
+Cygnus is a commercial supplier and supporter of GNU software. It has
+also written several of its own packages, released under the terms of
+the GNU General Public License; and it has taken over the maintenance of
+other packages. Among the packages released by Cygnus are `gdb',
+`gnats', and `dejagnu'.
+
+ Cygnus packages have the peculiarity that each one unpacks into a
+directory tree with a generic top-level Makefile, which is set up to
+compile _all_ of Cygnus' packages, any number of which may reside under
+the top-level directory. In other words, even if you're only building
+`gnats', the top-level Makefile will look for, and try to build, `gdb'
+and `dejagnu' subdirectories, among many others.
+
+ The result is that if you try `make -n install
+prefix=/usr/local/stow/PACKAGE' at the top level of a Cygnus package,
+you'll get a bewildering amount of output. It will then be very
+difficult to visually scan the output to see whether the install will
+proceed correctly. Unfortunately, it's not always clear how to invoke
+an install from the subdirectory of interest.
+
+ In cases like this, the best approach is to run your `make install
+prefix=...', but be ready to interrupt it if you detect that it is
+recompiling files. Usually it will work just fine; otherwise, install
+manually.
+
+
+File: stow.info, Node: Perl and Perl 5 Modules, Prev: Cygnus Software, Up: Compile-time vs Install-time
+
+11.4 Perl and Perl 5 Modules
+============================
+
+Perl 4.036 allows you to specify different locations for installation
+and for run-time. It is the only widely-used package in this author's
+experience that allows this, though hopefully more packages will adopt
+this model.
+
+ Unfortunately, the authors of Perl believed that only AFS sites need
+this ability. The configuration instructions for Perl 4 misleadingly
+state that some occult means are used under AFS to transport files from
+their installation tree to their run-time tree. In fact, that confusion
+arises from the fact that Depot, Stow's predecessor, originated at
+Carnegie Mellon University, which was also the birthplace of AFS. CMU's
+need to separate install-time and run-time trees stemmed from its use of
+Depot, not from AFS.
+
+ The result of this confusion is that Perl 5's configuration script
+doesn't even offer the option of separating install-time and run-time
+trees _unless_ you're running AFS. Fortunately, after you've entered
+all the configuration settings, Perl's setup script gives you the
+opportunity to edit those settings in a file called `config.sh'. When
+prompted, you should edit this file and replace occurrences of
+
+ inst.../usr/local...
+
+with
+
+ inst.../usr/local/stow/perl...
+
+You can do this with the following Unix command:
+
+ sed 's,^\(inst.*/usr/local\),\1/stow/perl,' config.sh > config.sh.new
+ mv config.sh.new config.sh
+
+ Hopefully, the Perl authors will correct this deficiency in Perl 5's
+configuration mechanism.
+
+ Perl 5 modules--i.e., extensions to Perl 5--generally conform to a
+set of standards for building and installing them. The standard says
+that the package comes with a top-level `Makefile.PL', which is a Perl
+script. When it runs, it generates a `Makefile'.
+
+ If you followed the instructions above for editing `config.sh' when
+Perl was built, then when you create a `Makefile' from a `Makefile.PL',
+it will contain separate locations for run-time (`/usr/local') and
+install-time (`/usr/local/stow/perl'). Thus you can do
+
+ perl Makefile.PL
+ make
+ make install
+
+and the files will be installed into `/usr/local/stow/perl'. However,
+you might prefer each Perl module to be stowed separately. In that
+case, you must edit the resulting Makefile, replacing
+`/usr/local/stow/perl' with `/usr/local/stow/MODULE'. The best way to
+do this is:
+
+ perl Makefile.PL
+ find . -name Makefile -print | \
+ xargs perl -pi~ -e 's,^(INST.*/stow)/perl,$1/MODULE,;'
+ make
+ make install
+
+(The use of `find' and `xargs' ensures that all Makefiles in the
+module's source tree, even those in subdirectories, get edited.) A
+good convention to follow is to name the stow directory for a Perl
+MODULE `cpan.MODULE', where `cpan' stands for Comprehensive Perl
+Archive Network, a collection of FTP sites that is the source of most
+Perl 5 extensions. This way, it's easy to tell at a glance which of
+the subdirectories of `/usr/local/stow' are Perl 5 extensions.
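+
+   For example, with this naming convention in place, stowing a freshly
+installed module is simply (MODULE being a placeholder for the real
+module name):
+
+     cd /usr/local/stow
+     stow cpan.MODULE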
+
+ When you stow separate Perl 5 modules separately, you are likely to
+encounter conflicts (*note Conflicts::) with files named `.exists' and
+`perllocal.pod'. One way to work around this is to remove those files
+before stowing the module. If you use the `cpan.MODULE' naming
+convention, you can simply do this:
+
+ cd /usr/local/stow
+ find cpan.* \( -name .exists -o -name perllocal.pod \) -print | \
+ xargs rm
+
+
+File: stow.info, Node: Bootstrapping, Next: Reporting Bugs, Prev: Compile-time vs Install-time, Up: Top
+
+12 Bootstrapping
+****************
+
+Suppose you have a stow directory all set up and ready to go:
+`/usr/local/stow/perl' contains the Perl installation,
+`/usr/local/stow/stow' contains Stow itself, and perhaps you have other
+packages waiting to be stowed. You'd like to be able to do this:
+
+ cd /usr/local/stow
+ stow -vv *
+
+but `stow' is not yet in your `PATH'. Nor can you do this:
+
+ cd /usr/local/stow
+ stow/bin/stow -vv *
+
+because the `#!' line at the beginning of `stow' tries to locate Perl
+(usually in `/usr/local/bin/perl'), and that won't be found. The
+solution you must use is:
+
+ cd /usr/local/stow
+ perl/bin/perl stow/bin/stow -vv *
+
+
+File: stow.info, Node: Reporting Bugs, Next: Known Bugs, Prev: Bootstrapping, Up: Top
+
+13 Reporting Bugs
+*****************
+
+Please send bug reports to the current maintainer, Kal Hodgson, by
+electronic mail. The address to use is `<bug-stow@gnu.org>'. Please
+include:
+
+ * the version number of Stow (`stow --version');
+
+ * the version number of Perl (`perl -v');
+
+ * the system information, which can often be obtained with `uname
+ -a';
+
+ * a description of the bug;
+
+ * the precise command you gave;
+
+ * the output from the command (preferably verbose output, obtained by
+ adding `--verbose=3' to the Stow command line).
+
+ If you are really keen, consider developing a minimal test case and
+creating a new test. See the `t/' directory for lots of examples.
+
+ Before reporting a bug, please read the manual carefully, especially
+*note Known Bugs::, to see whether you're encountering something that
+doesn't need reporting (*note Conflicts::).
+
+
+File: stow.info, Node: Known Bugs, Next: GNU General Public License, Prev: Reporting Bugs, Up: Top
+
+14 Known Bugs
+*************
+
+ * When using multiple stow directories (*note Multiple Stow
+ Directories::), Stow fails to "split open" tree-folding symlinks
+ (*note Installing Packages::) that point into a stow directory
+ which is not the one in use by the current Stow command. Before
+ failing, it should search the target of the link to see whether
+ any element of the path contains a `.stow' file. If it finds one,
+ it can "learn" about the cooperating stow directory to
+ short-circuit the `.stow' search the next time it encounters a
+ tree-folding symlink.
+
+
+File: stow.info, Node: GNU General Public License, Next: Index, Prev: Known Bugs, Up: Top
+
+GNU General Public License
+**************************
+
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+ 675 Mass Ave, Cambridge, MA 02139, USA
+
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+Preamble
+========
+
+The licenses for most software are designed to take away your freedom
+to share and change it. By contrast, the GNU General Public License is
+intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it in
+new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software,
+and (2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+ 0. This License applies to any program or other work which contains a
+ notice placed by the copyright holder saying it may be distributed
+ under the terms of this General Public License. The "Program",
+ below, refers to any such program or work, and a "work based on
+ the Program" means either the Program or any derivative work under
+ copyright law: that is to say, a work containing the Program or a
+ portion of it, either verbatim or with modifications and/or
+ translated into another language. (Hereinafter, translation is
+ included without limitation in the term "modification".) Each
+ licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are
+ not covered by this License; they are outside its scope. The act
+ of running the Program is not restricted, and the output from the
+ Program is covered only if its contents constitute a work based on
+ the Program (independent of having been made by running the
+ Program). Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+ source code as you receive it, in any medium, provided that you
+ conspicuously and appropriately publish on each copy an appropriate
+ copyright notice and disclaimer of warranty; keep intact all the
+ notices that refer to this License and to the absence of any
+ warranty; and give any other recipients of the Program a copy of
+ this License along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy,
+ and you may at your option offer warranty protection in exchange
+ for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+ of it, thus forming a work based on the Program, and copy and
+ distribute such modifications or work under the terms of Section 1
+ above, provided that you also meet all of these conditions:
+
+ a. You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b. You must cause any work that you distribute or publish, that
+ in whole or in part contains or is derived from the Program
+ or any part thereof, to be licensed as a whole at no charge
+ to all third parties under the terms of this License.
+
+ c. If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display
+ an announcement including an appropriate copyright notice and
+ a notice that there is no warranty (or else, saying that you
+ provide a warranty) and that users may redistribute the
+ program under these conditions, and telling the user how to
+ view a copy of this License. (Exception: if the Program
+ itself is interactive but does not normally print such an
+ announcement, your work based on the Program is not required
+ to print an announcement.)
+
+ These requirements apply to the modified work as a whole. If
+ identifiable sections of that work are not derived from the
+ Program, and can be reasonably considered independent and separate
+ works in themselves, then this License, and its terms, do not
+ apply to those sections when you distribute them as separate
+ works. But when you distribute the same sections as part of a
+ whole which is a work based on the Program, the distribution of
+ the whole must be on the terms of this License, whose permissions
+ for other licensees extend to the entire whole, and thus to each
+ and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or
+ contest your rights to work written entirely by you; rather, the
+ intent is to exercise the right to control the distribution of
+ derivative or collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the
+ Program with the Program (or with a work based on the Program) on
+ a volume of a storage or distribution medium does not bring the
+ other work under the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+ under Section 2) in object code or executable form under the terms
+ of Sections 1 and 2 above provided that you also do one of the
+ following:
+
+ a. Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of
+ Sections 1 and 2 above on a medium customarily used for
+ software interchange; or,
+
+ b. Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a
+ medium customarily used for software interchange; or,
+
+ c. Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with
+ such an offer, in accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete
+ source code means all the source code for all modules it contains,
+ plus any associated interface definition files, plus the scripts
+ used to control compilation and installation of the executable.
+ However, as a special exception, the source code distributed need
+ not include anything that is normally distributed (in either
+ source or binary form) with the major components (compiler,
+ kernel, and so on) of the operating system on which the executable
+ runs, unless that component itself accompanies the executable.
+
+ If distribution of executable or object code is made by offering
+ access to copy from a designated place, then offering equivalent
+ access to copy the source code from the same place counts as
+ distribution of the source code, even though third parties are not
+ compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+ except as expressly provided under this License. Any attempt
+ otherwise to copy, modify, sublicense or distribute the Program is
+ void, and will automatically terminate your rights under this
+ License. However, parties who have received copies, or rights,
+ from you under this License will not have their licenses
+ terminated so long as such parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+ signed it. However, nothing else grants you permission to modify
+ or distribute the Program or its derivative works. These actions
+ are prohibited by law if you do not accept this License.
+ Therefore, by modifying or distributing the Program (or any work
+ based on the Program), you indicate your acceptance of this
+ License to do so, and all its terms and conditions for copying,
+ distributing or modifying the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program
+ subject to these terms and conditions. You may not impose any
+ further restrictions on the recipients' exercise of the rights
+ granted herein. You are not responsible for enforcing compliance
+ by third parties to this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent
+ issues), conditions are imposed on you (whether by court order,
+ agreement or otherwise) that contradict the conditions of this
+ License, they do not excuse you from the conditions of this
+ License. If you cannot distribute so as to satisfy simultaneously
+ your obligations under this License and any other pertinent
+ obligations, then as a consequence you may not distribute the
+ Program at all. For example, if a patent license would not permit
+ royalty-free redistribution of the Program by all those who
+ receive copies directly or indirectly through you, then the only
+ way you could satisfy both it and this License would be to refrain
+ entirely from distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable
+ under any particular circumstance, the balance of the section is
+ intended to apply and the section as a whole is intended to apply
+ in other circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of
+ any such claims; this section has the sole purpose of protecting
+ the integrity of the free software distribution system, which is
+ implemented by public license practices. Many people have made
+ generous contributions to the wide range of software distributed
+ through that system in reliance on consistent application of that
+ system; it is up to the author/donor to decide if he or she is
+ willing to distribute software through any other system and a
+ licensee cannot impose that choice.
+
+ This section is intended to make thoroughly clear what is believed
+ to be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+ certain countries either by patents or by copyrighted interfaces,
+ the original copyright holder who places the Program under this
+ License may add an explicit geographical distribution limitation
+ excluding those countries, so that distribution is permitted only
+ in or among countries not thus excluded. In such case, this
+ License incorporates the limitation as if written in the body of
+ this License.
+
+ 9. The Free Software Foundation may publish revised and/or new
+ versions of the General Public License from time to time. Such
+ new versions will be similar in spirit to the present version, but
+ may differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+ Program specifies a version number of this License which applies
+ to it and "any later version", you have the option of following
+ the terms and conditions either of that version or of any later
+ version published by the Free Software Foundation. If the Program
+ does not specify a version number of this License, you may choose
+ any version ever published by the Free Software Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+ programs whose distribution conditions are different, write to the
+ author to ask for permission. For software which is copyrighted
+ by the Free Software Foundation, write to the Free Software
+ Foundation; we sometimes make exceptions for this. Our decision
+ will be guided by the two goals of preserving the free status of
+ all derivatives of our free software and of promoting the sharing
+ and reuse of software generally.
+
+ NO WARRANTY
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
+ WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE
+ LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
+ WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT
+ NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
+ FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
+ QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+ PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY
+ SERVICING, REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+ WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY
+ MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE
+ LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
+ INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
+ INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU
+ OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY
+ OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN
+ ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+How to Apply These Terms to Your New Programs
+=============================================
+
+If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these
+terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ ONE LINE TO GIVE THE PROGRAM'S NAME AND AN IDEA OF WHAT IT DOES.
+ Copyright (C) 19YY NAME OF AUTHOR
+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License
+ as published by the Free Software Foundation; either version 2
+ of the License, or (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ Also add information on how to contact you by electronic and paper
+mail.
+
+ If the program is interactive, make it output a short notice like
+this when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) 19YY NAME OF AUTHOR
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
+ type `show w'. This is free software, and you are welcome
+ to redistribute it under certain conditions; type `show c'
+ for details.
+
+ The hypothetical commands `show w' and `show c' should show the
+appropriate parts of the General Public License. Of course, the
+commands you use may be called something other than `show w' and `show
+c'; they could even be mouse-clicks or menu items--whatever suits your
+program.
+
+ You should also get your employer (if you work as a programmer) or
+your school, if any, to sign a "copyright disclaimer" for the program,
+if necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright
+ interest in the program `Gnomovision'
+ (which makes passes at compilers) written
+ by James Hacker.
+
+ SIGNATURE OF TY COON, 1 April 1989
+ Ty Coon, President of Vice
+
+ This General Public License does not permit incorporating your
+program into proprietary programs. If your program is a subroutine
+library, you may consider it more useful to permit linking proprietary
+applications with the library. If this is what you want to do, use the
+GNU Library General Public License instead of this License.
+
+
+File: stow.info, Node: Index, Prev: GNU General Public License, Up: Top
+
+Index
+*****
+
+
+* Menu:
+
+* absolute symlink: Terminology. (line 43)
+* conflict: Installing Packages. (line 72)
+* deletion: Deleting Packages. (line 6)
+* directory folding: Installing Packages. (line 13)
+* false conflict: Conflicts. (line 19)
+* folding trees: Installing Packages. (line 13)
+* installation: Installing Packages. (line 6)
+* installation image: Terminology. (line 23)
+* ownership: Installing Packages. (line 62)
+* package: Terminology. (line 6)
+* package directory: Terminology. (line 30)
+* package name: Terminology. (line 30)
+* relative symlink: Terminology. (line 43)
+* splitting open folded trees: Installing Packages. (line 44)
+* stow directory: Terminology. (line 16)
+* symlink: Terminology. (line 43)
+* target directory: Terminology. (line 11)
+* tree folding: Installing Packages. (line 13)
+* unfolding trees: Installing Packages. (line 44)
+
+
+
+Tag Table:
+Node: Top1313
+Node: Introduction2684
+Ref: Introduction-Footnote-14845
+Node: Terminology4887
+Node: Invoking Stow7250
+Node: Installing Packages12809
+Node: Deleting Packages17066
+Ref: Deleting Packages-Footnote-118486
+Node: Conflicts19074
+Node: Deferred Operation20253
+Node: Mixing Operations20781
+Node: Multiple Stow Directories21780
+Node: Target Maintenance22954
+Node: Resource Files23550
+Node: Compile-time vs Install-time25160
+Node: GNU Emacs28447
+Ref: GNU Emacs-Footnote-129436
+Node: Other FSF Software29500
+Node: Cygnus Software30261
+Node: Perl and Perl 5 Modules31740
+Node: Bootstrapping35295
+Node: Reporting Bugs36076
+Node: Known Bugs37044
+Node: GNU General Public License37747
+Node: Index56887
+
+End Tag Table
diff --git a/stow.texi b/stow.texi
index d06998f..81630dc 100644
--- a/stow.texi
+++ b/stow.texi
@@ -20,7 +20,8 @@ managing the installation of software packages.
Software and documentation Copyright @copyright{} 1993, 1994, 1995, 1996
by Bob Glickstein <bobg+stow@@zanshin.com>.
-Copyright @copyright{} 2000,2001 Guillaume Morin <gmorin@@gnu.org>
+Copyright @copyright{} 2000, 2001 Guillaume Morin <gmorin@@gnu.org>.
+Copyright @copyright{} 2007 Kahlil (Kal) Hodgson <kahlil@@internode.on.net>.
Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
@@ -50,6 +51,7 @@ approved by the Free Software Foundation.
@title Stow @value{VERSION}
@subtitle Managing the installation of software packages
@author Bob Glickstein, Zanshin Software, Inc.
+@author Kahlil Hodgson, RMIT University, Australia.
@page
@vskip 0pt plus 1filll
This manual describes GNU Stow version @value{VERSION}, a program for
@@ -57,6 +59,8 @@ managing the installation of software packages.
Software and documentation Copyright @copyright{} 1993, 1994, 1995, 1996
by Bob Glickstein <bobg+stow@@zanshin.com>.
+Copyright @copyright{} 2000,2001 Guillaume Morin <gmorin@@gnu.org>
+Copyright @copyright{} 2007 Kahlil (Kal) Hodgson <kahlil@@internode.on.net>
Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
@@ -75,6 +79,8 @@ except that this permission notice may be stated in a translation
approved by the Free Software Foundation.
@end titlepage
+
+@c ==========================================================================
@node Top, Introduction, (dir), (dir)
@ifinfo
@@ -83,34 +89,36 @@ the installation of software packages.
@end ifinfo
@menu
-* Introduction:: Description of Stow.
-* Terminology:: Terms used by this manual.
-* Invoking Stow:: Option summary.
-* Installing packages:: Using Stow to install.
-* Deleting packages:: Using Stow to uninstall.
-* Caveats:: Pitfalls and difficulties to beware.
-* Bootstrapping:: When stow and perl are not yet stowed.
-* Reporting bugs:: How, what, where, and when to report.
-* Known bugs:: Don't report any of these.
-* GNU General Public License:: Copying terms.
-* Index:: Index of concepts.
+* Introduction:: Description of Stow.
+* Terminology:: Terms used by this manual.
+* Invoking Stow:: Option summary.
+* Installing Packages:: Using Stow to install.
+* Deleting Packages:: Using Stow to uninstall.
+* Conflicts:: When Stow can't stow.
+* Deferred Operation:: Using two passes to stow.
+* Mixing Operations:: Multiple actions per invocation.
+* Multiple Stow Directories:: Further segregating software.
+* Target Maintenance:: Cleaning up mistakes.
+* Resource Files:: Setting default command line options.
+* Compile-time vs Install-time:: Faking out `make install'.
+* Bootstrapping:: When stow and perl are not yet stowed.
+* Reporting Bugs:: How, what, where, and when to report.
+* Known Bugs:: Don't report any of these.
+* GNU General Public License:: Copying terms.
+* Index:: Index of concepts.
--- The Detailed Node Listing ---
-Caveats
-
-* Compile-time and install-time:: Faking out `make install'.
-* Multiple stow directories:: Further segregating software.
-* Conflicts:: When Stow can't stow.
-
Compile-time and install-time
* GNU Emacs::
-* Other FSF software::
-* Cygnus software::
-* Perl and Perl 5 modules::
+* Other FSF Software::
+* Cygnus Software::
+* Perl and Perl 5 Modules::
@end menu
+
+@c ===========================================================================
@node Introduction, Terminology, Top, Top
@chapter Introduction
@@ -118,12 +126,23 @@ Stow is a tool for managing the installation of multiple software
packages in the same run-time directory tree. One historical difficulty
of this task has been the need to administer, upgrade, install, and
remove files in independent packages without confusing them with other
-files sharing the same filesystem space. For instance, it is common to
+files sharing the same file system space. For instance, it is common to
install Perl and Emacs in @file{/usr/local}. When one does so, one
winds up with the following files@footnote{As of Perl 4.036 and Emacs
-19.22.} in @file{/usr/local/man/man1}: @file{a2p.1}; @file{ctags.1};
-@file{emacs.1}; @file{etags.1}; @file{h2ph.1}; @file{perl.1}; and
-@file{s2p.1}. Now suppose it's time to uninstall Perl. Which man pages
+19.22.} in @file{/usr/local/man/man1}:
+
+@example
+a2p.1
+ctags.1
+emacs.1
+etags.1
+h2ph.1
+perl.1
+s2p.1
+@end example
+
+@noindent
+Now suppose it's time to uninstall Perl. Which man pages
get removed? Obviously @file{perl.1} is one of them, but it should not
be the administrator's responsibility to memorize the ownership of
individual files by separate packages.
@@ -150,13 +169,16 @@ to rebuild the target tree (e.g., @file{/usr/local}).
For information about the latest version of Stow, you can refer to
http://www.gnu.org/software/stow/.
+
+@c ===========================================================================
@node Terminology, Invoking Stow, Introduction, Top
@chapter Terminology
+@indent
@cindex package
A @dfn{package} is a related collection of files and directories that
-you wish to administer as a unit---e.g., Perl or Emacs---and that needs
-to be installed in a particular directory structure---e.g., with
+you wish to administer as a unit --- e.g., Perl or Emacs --- and that needs
+to be installed in a particular directory structure --- e.g., with
@file{bin}, @file{lib}, and @file{man} subdirectories.
@cindex target directory
@@ -176,7 +198,7 @@ and @file{/usr/local/stow/emacs}.
@cindex installation image
An @dfn{installation image} is the layout of files and directories
required by a package, relative to the target directory. Thus, the
-installation image for Perl includes: a @file{bin} directory containing
+installation image for Perl includes: a @file{bin} directory containing
@file{perl} and @file{a2p} (among others); an @file{info} directory
containing Texinfo documentation; a @file{lib/perl} directory containing
Perl libraries; and a @file{man/man1} directory containing man pages.
@@ -185,10 +207,10 @@ Perl libraries; and a @file{man/man1} directory containing man pages.
@cindex package name
A @dfn{package directory} is the root of a tree containing the
installation image for a particular package. Each package directory
-must reside in a stow directory---e.g., the package directory
+must reside in a stow directory --- e.g., the package directory
@file{/usr/local/stow/perl} must reside in the stow directory
@file{/usr/local/stow}. The @dfn{name} of a package is the name of its
-directory within the stow directory---e.g., @file{perl}.
+directory within the stow directory --- e.g., @file{perl}.
Thus, the Perl executable might reside in
@file{/usr/local/stow/perl/bin/perl}, where @file{/usr/local} is the
@@ -206,40 +228,30 @@ is, one not starting from @file{/}. The target of a relative symlink is
computed starting from the symlink's own directory. Stow only
creates relative symlinks.
-@node Invoking Stow, Installing packages, Terminology, Top
+@c ===========================================================================
+@node Invoking Stow, Installing Packages, Terminology, Top
@chapter Invoking Stow
The syntax of the @code{stow} command is:
@example
-stow @var{[options]} @var{package @dots{}}
+stow [@var{options}] [@var{action flag}] @var{package @dots{}}
@end example
-The stow directory is assumed to be the current directory, and the
-target directory is assumed to be the parent of the current directory
-(so it is typical to execute @code{stow} from the directory
-@file{/usr/local/stow}). Each @var{package} is the name of a package in
-the stow directory (e.g., @samp{perl}). By default, they are installed
-into the target directory (but they can be deleted instead using
-@samp{-D}).
+@noindent
+Each @var{package} is the name of a package (e.g., @samp{perl}) in the stow
+directory that we wish to install into (or delete from) the target directory.
+The default action is to install the given packages, although alternate actions
+may be specified by preceding the package name(s) with an @var{action flag}.
+Unless otherwise specified, the stow directory is assumed to be
+the current directory and the target directory is assumed to be the parent of
+the current directory, so it is typical to execute @code{stow} from the
+directory @file{/usr/local/stow}.
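+
+For example, executing the following from within @file{/usr/local/stow}
+installs the @samp{perl} package into @file{/usr/local} (the package name is
+merely illustrative):
+
+@example
+cd /usr/local/stow
+stow perl
+@end example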
-The options are:
+@noindent
+The following options are supported:
@table @samp
-@item -n
-@itemx --no
-Do not perform any operations that modify the filesystem; merely show
-what would happen. Since no actual operations are performed,
-@samp{stow -n} could report conflicts when none would actually take
-place (@pxref{Conflicts}); but it won't fail to report conflicts that
-@emph{would} take place.
-
-@item -c
-@itemx --conflicts
-Do not exit immediately when a conflict is encountered. This option
-implies @samp{-n}, and is used to search for all conflicts that might
-arise from an actual Stow operation. As with @samp{-n}, however,
-false conflicts might be reported (@pxref{Conflicts}).
@item -d @var{dir}
@itemx --dir=@var{dir}
@@ -252,6 +264,61 @@ parent of @var{dir}.
Set the target directory to @var{dir} instead of the parent of the stow
directory.
+@item --ignore='<regex>'
+This (repeatable) option lets you suppress acting on files that match the
+given perl regular expression. For example, using the options
+
+@example
+--ignore='~' --ignore='\.#.*'
+@end example
+
+@noindent
+will cause stow to ignore Emacs and CVS backup files.
+
+Note that the regular expression is anchored to the end of the filename,
+because this is what you will want to do most of the time.
+
+@item --defer='<regex>'
+This (repeatable) option lets you defer stowing a file matching the given
+regular expression, if that file is already stowed by another package. For
+example, the following options
+
+@example
+--defer='man' --defer='info'
+@end example
+
+@noindent
+will cause stow to skip over pre-existing man and info pages.
+
+Equivalently, you could use @samp{--defer='man|info'}, since the argument is
+just a Perl regular expression.
+
+Note that the regular expression is anchored to the beginning of the path
+relative to the target directory, because this is what you will want to do most
+of the time.
+
+@item --override='<regex>'
+This (repeatable) option forces any file matching the regular expression to be
+stowed, even if the file is already stowed to another package. For example,
+the following options
+
+@example
+--override='man' --override='info'
+@end example
+
+@noindent
+will permit stow to overwrite links that point to pre-existing man and info
+pages that are owned by stow and would otherwise cause a conflict.
+
+The regular expression is anchored to the beginning of the path relative to
+the target directory, because this is what you will want to do most of the time.
+
+@item -n
+@itemx --no
+@itemx --simulate
+Do not perform any operations that modify the file system; in combination with
+@samp{-v}, this can be used to merely show what would happen.
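+
+For example, the following dry run merely reports what stowing the (purely
+illustrative) @samp{perl} package would do:
+
+@example
+stow -n -v perl
+@end example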
+
@item -v
@itemx --verbose[=@var{n}]
Send verbose output to standard error describing what Stow is
@@ -259,15 +326,14 @@ doing. Verbosity levels are 0, 1, 2, and 3; 0 is the default. Using
@samp{-v} or @samp{--verbose} increases the verbosity by one; using
@samp{--verbose=@var{n}} sets it to @var{n}.
-@item -D
-@itemx --delete
-Delete packages from the target directory rather than installing them.
-
-@item -R
-@itemx --restow
-Restow packages (first unstow, then stow again). This is useful for
-pruning obsolete symlinks from the target tree after updating the
-software in a package.
+@item -p
+@itemx --compat
+Scan the whole target tree when unstowing. By default, only directories
+specified in the @dfn{installation image} are scanned during an unstow
+operation. Scanning the whole tree can be prohibitive if your target tree is
+very large. This option restores the legacy behaviour; however, the
+@samp{--badlinks} option may be a better way of ensuring that your
+installation does not have any dangling symlinks.
@item -V
@itemx --version
@@ -278,8 +344,68 @@ Show Stow version number, and exit.
Show Stow command syntax, and exit.
@end table
-@node Installing packages, Deleting packages, Invoking Stow, Top
-@chapter Installing packages
+The following @var{action flags} are supported:
+
+@table @samp
+@item -D
+@itemx --delete
+Delete (unstow) the package name(s) that follow this option from the @dfn{target
+directory}. This option may be repeated any number of times.
+
+@item -R
+@itemx --restow
+Restow (first unstow, then stow again) the package names that follow this
+option. This is useful for pruning obsolete symlinks from the target tree
+after updating the software in a package. This option may be repeated any
+number of times.
+
+@item -S
+@item --stow
+explictly stow the package name(s) that follow this option. May be omitted if
+you are not using the @samp{-D} or @samp{-R} options in the same invocation.
+See @xref{Mixing Operations}, for details of when you might like to use this
+feature. This option may be repeated any number of times.
+@end table
+
+The following options are useful for cleaning up your target tree:
+
+@table @samp
+@item -b
+@itemx --badlinks
+Checks the target directory for bogus symbolic links, that is, links that
+point to non-existent files.
+
+@item -a
+@itemx --aliens
+Checks for files in the target directory that are not symbolic links. The
+target directory should be managed by stow alone, except for directories that
+contain a @file{.stow} file.
+
+@item -l
+@itemx --list
+Will display the target package for every symbolic link in the stow target
+directory.
+@end table
+
+The following options are deprecated as of Stow Version 2:
+@table @samp
+@item -c
+@itemx --conflicts
+Print any conflicts that are encountered. This option
+implies @samp{-n}, and is used to search for all conflicts that might
+arise from an actual Stow operation.
+
+This option is deprecated as conflicts are now printed by default and no
+operations will be performed if any conflicts are detected.
+@end table
+
+@xref{Resource Files}, for a way to set default values for any of these options.
+
+
+@c ===========================================================================
+@node Installing Packages, Deleting Packages, Invoking Stow, Top
+@chapter Installing Packages
@cindex installation
The default action of Stow is to install a package. This means creating
@@ -326,11 +452,12 @@ descends as far as necessary into the target tree when it can create a
tree-folding symlink.
@cindex splitting open folded trees
+@cindex unfolding trees
The time often comes when a tree-folding symlink has to be undone
because another package uses one or more of the folded subdirectories in
-its installation image. This operation is called @dfn{splitting open} a
-folded tree. It involves removing the original symlink from the target
-tree, creating a true directory in its place, and then populating the
+its installation image. This operation is called @dfn{splitting open} or
+@dfn{unfolding} a folded tree. It involves removing the original symlink from
+the target tree, creating a true directory in its place, and then populating the
new directory with symlinks to the newly-installed package @emph{and} to
the old package that used the old symlink. For example, suppose that
after installing Perl into an empty @file{/usr/local}, we wish to
@@ -350,19 +477,22 @@ directory @file{/usr/local/bin} is created; links are made from
When splitting open a folded tree, Stow makes sure that the
symlink it is about to remove points inside a valid package in the
current stow directory. @emph{Stow will never delete anything
-that it doesn't own}. Stow @dfn{owns} everything living in the
+that it doesn't own}. Stow ``owns'' everything living in the
target tree that points into a package in the stow directory. Anything
-Stow owns, it can recompute if lost. Note that by this
-definition, Stow doesn't ``own'' anything @emph{in} the stow
-directory or in any of the packages.
+Stow owns, it can recompute if lost: symlinks that point into a package in
+the stow directory, or directories that only contain symlinks that stow
+``owns''. Note that by this definition, Stow doesn't ``own'' anything
+@emph{in} the stow directory or in any of the packages.
@cindex conflict
If Stow needs to create a directory or a symlink in the target
tree and it cannot because that name is already in use and is not owned
-by Stow, then a conflict has arisen. @xref{Conflicts}.
+by Stow, then a @dfn{conflict} has arisen. @xref{Conflicts}.
+
-@node Deleting packages, Caveats, Installing packages, Top
-@chapter Deleting packages
+@c ===========================================================================
+@node Deleting Packages, Conflicts, Installing Packages, Top
+@chapter Deleting Packages
@cindex deletion
When the @samp{-D} option is given, the action of Stow is to
@@ -371,32 +501,187 @@ delete anything it doesn't ``own''. Deleting a package does @emph{not}
mean removing it from the stow directory or discarding the package
tree.
-To delete a package, Stow recursively scans the target tree,
-skipping over the stow directory (since that is usually a subdirectory
-of the target tree) and any other stow directories it encounters
-(@pxref{Multiple stow directories}). Any symlink it finds that points
-into the package being deleted is removed. Any directory that
-contained only symlinks to the package being deleted is removed. Any
-directory that, after removing symlinks and empty subdirectories,
-contains only symlinks to a single other package, is considered to be a
-previously ``folded'' tree that was ``split open.'' Stow will
-re-fold the tree by removing the symlinks to the surviving package,
+To delete a package, Stow recursively scans the target tree, skipping over any
+directory that is not included in the installation image.@footnote{This
+approach was introduced in version 2 of GNU Stow. Previously, the whole
+target tree was scanned and stow directories were explicitly omitted. This
+became problematic when dealing with very large installations. The only
+situation where this is useful is if you accidentally delete a directory in
+the package tree, leaving you with a whole bunch of dangling links. Note that
+you can enable the old approach with the @samp{-p} option. Alternatively, you can
+use the @samp{--badlinks} option to get Stow to search for dangling links in
+your target tree, and remove the offenders manually.}
+For example, if the target directory is @file{/usr/local} and the
+installation image for the package being deleted has only a @file{bin}
+directory and a @file{man} directory at the top level, then we only scan
+@file{/usr/local/bin} and @file{/usr/local/man}, and not
+@file{/usr/local/lib} or @file{/usr/local/share}, or for that matter
+@file{/usr/local/stow}. Any symlink it finds that points into the package
+being deleted is removed. Any directory that contained only symlinks to the
+package being deleted is removed. Any directory that, after removing symlinks
+and empty subdirectories, contains only symlinks to a single other package, is
+considered to be a previously ``folded'' tree that was ``split open.'' Stow
+will re-fold the tree by removing the symlinks to the surviving package,
removing the directory, then linking the directory back to the surviving
package.
-@node Caveats, Bootstrapping, Deleting packages, Top
-@chapter Caveats
-This chapter describes the common problems that arise with Stow.
+@c ===========================================================================
+@node Conflicts, Deferred Operation, Deleting Packages, Top
+@chapter Conflicts
-@menu
-* Compile-time and install-time:: Faking out `make install'.
-* Multiple stow directories:: Further segregating software.
-* Conflicts:: When Stow can't stow.
-@end menu
+If, during installation, a file or symlink exists in the target tree and
+has the same name as something Stow needs to create, and if the
+existing name is not a folded tree that can be split open, then a
+@dfn{conflict} has arisen. A conflict also occurs if a directory exists
+where Stow needs to place a symlink to a non-directory. On the
+other hand, if the existing name is merely a symlink that already points
+where Stow needs it to, then no conflict has occurred. (Thus it
+is harmless to install a package that has already been installed.)
+
+A conflict causes Stow to exit immediately and print a warning
+(unless @samp{-c} is given), even if that means aborting an installation
+in mid-package.
-@node Compile-time and install-time, Multiple stow directories, Caveats, Caveats
-@section Compile-time and install-time
+@cindex false conflict
+When running Stow with the @samp{-n} or @samp{-c} options, no actual
+filesystem-modifying operations take place. Thus if a folded tree would
+have been split open, but instead was left in place because @samp{-n} or
+@samp{-c} was used, then Stow will report a @dfn{false conflict}, since
+the directory that Stow was expecting to populate has remained an
+un-populatable symlink.
+
+@c ===========================================================================
+@node Deferred Operation, Mixing Operations, Conflicts, Top
+@chapter Deferred Operation
+
+For complex packages, scanning the stow and target trees in tandem, and
+deciding whether to make directories or links, or to split open or fold
+directories, can actually take a long time (a number of seconds). Moreover,
+an accurate analysis of potential conflicts requires us to take into account
+all of these operations.
+
+Accidentally stowing a package that will result in a conflict part-way
+through could also leave the target tree in an inconsistent state. For these
+reasons, Stow now calculates the complete set of operations (checking for
+conflicts as it goes) before making any changes to the file system.
+
+@c ===========================================================================
+@node Mixing Operations, Multiple Stow Directories, Deferred Operation, Top
+@chapter Mixing Operations
+
+Since Version 2.0, multiple distinct actions can be specified in a single
+invocation of GNU Stow. For example, to update an installation of Emacs from
+version 21.3 to 21.4a you can now do the following:
+
+@example
+stow -D emacs-21.3 -S emacs-21.4a
+@end example
+
+@noindent
+which will replace emacs-21.3 with emacs-21.4a using a single invocation.
+
+This is much faster and cleaner than performing two separate invocations of
+stow, because redundant folding/unfolding operations can be factored out.
+In addition, all the operations are calculated and merged before being
+executed (@pxref{Deferred Operation}), so the amount of time in which GNU
+Emacs is unavailable is minimised.
+
+You can mix and match any number of actions, for example,
+
+@example
+stow -S pkg1 pkg2 -D pkg3 pkg4 -S pkg5 -R pkg6
+@end example
+
+@noindent
+will unstow pkg3, pkg4 and pkg6, then stow pkg1, pkg2, pkg5 and pkg6.
+
+@c ===========================================================================
+@node Multiple Stow Directories, Target Maintenance, Mixing Operations, Top
+@chapter Multiple Stow Directories
+
+If there are two or more system administrators who wish to maintain
+software separately, or if there is any other reason to want two or more
+stow directories, it can be done by creating a file named @file{.stow}
+in each stow directory. The presence of @file{/usr/local/foo/.stow}
+informs Stow that, though @file{foo} is not the current stow
+directory, and though it is a subdirectory of the target directory,
+nevertheless it is @emph{a} stow directory and as such Stow
+doesn't ``own'' anything in it (@pxref{Installing Packages}). This will
+protect the contents of @file{foo} from a @samp{stow -D}, for instance.
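+
+For example, to mark @file{/usr/local/foo} as an additional stow directory
+(the path is only illustrative), it is enough to do:
+
+@example
+touch /usr/local/foo/.stow
+@end example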
+
+XXX is this still true? XXX
+
+When multiple stow directories share a target tree, the effectiveness
+of Stow is reduced. If a tree-folding symlink is encountered and
+needs to be split open during an installation, but the symlink points
+into the wrong stow directory, Stow will report a conflict rather
+than split open the tree (because it doesn't consider itself to own the
+symlink, and thus cannot remove it).
+
+
+@c ===========================================================================
+@node Target Maintenance, Resource Files, Multiple Stow Directories, Top
+@chapter Target Maintenance
+
+From time to time you will need to clean up your target tree.
+Stow includes three operational modes that perform checks that would
+generally be too expensive to perform during normal stow execution.
+
+The @samp{-l} option to @code{chkstow} will give you a listing of every
+package name that has already been stowed; you should be able to diff this
+against a listing of your stow directory (the following uses @code{bash}
+process substitution):
+
+@example
+diff <(chkstow -l) <(ls -1)
+@end example
+
+
+@c ===========================================================================
+@node Resource Files, Compile-time vs Install-time, Target Maintenance, Top
+@chapter Resource Files
+
+Default command line options may be set in @file{.stowrc} (current directory)
+or @file{~/.stowrc} (home directory). These are parsed in that order, and are
+effectively prepended to your command line. This feature can be used for some
+interesting effects.
+
+For example, suppose your site uses more than one stow directory, perhaps in
+order to share responsibilities among a number of systems administrators.
+One of the administrators might have the following in their
+@file{~/.stowrc} file:
+
+@example
+--dir=/usr/local/stow2
+--target=/usr/local
+--ignore='~'
+--ignore='^CVS'
+@end example
+
+so that the @code{stow} command will default to operating on the
+@file{/usr/local/stow2} directory, with @file{/usr/local} as the target, and
+ignoring Emacs backup files
+and CVS directories.
+
+If you had a stow directory @file{/usr/local/stow/perl-extras} that was only
+used for Perl modules, then you might place the following in
+@file{/usr/local/stow/perl-extras/.stowrc}:
+
+@example
+--dir=/usr/local/stow/perl-extras
+--target=/usr/local
+--override=bin
+--override=man
+--ignore='perllocal\.pod'
+--ignore='\.packlist'
+--ignore='\.bs'
+@end example
+
+so that when you are in the @file{/usr/local/stow/perl-extras} directory,
+@code{stow} will regard any subdirectories as stow packages, with @file{/usr/local}
+as the target (rather than the immediate parent directory
+@file{/usr/local/stow}), overriding any pre-existing links to bin files or man
+pages, and ignoring some cruft that gets installed by default.
+
+
+@c ===========================================================================
+@node Compile-time vs Install-time, Bootstrapping, Resource Files, Top
+@chapter Compile-time vs Install-time
Software whose installation is managed with Stow needs to be installed
in one place (the package directory, e.g. @file{/usr/local/stow/perl})
@@ -419,11 +704,11 @@ must place it in the stow tree.
Some software packages allow you to specify, at compile-time, separate
locations for installation and for run-time. Perl is one such package;
-see @ref{Perl and Perl 5 modules}. Others allow you to compile the
+@xref{Perl and Perl 5 Modules}. Others allow you to compile the
package, then give a different destination in the @samp{make install}
step without causing the binaries or other files to get rebuilt. Most
GNU software falls into this category; Emacs is a notable exception.
-See @ref{GNU Emacs}, and @ref{Other FSF software}.
+See @ref{GNU Emacs}, and @ref{Other FSF Software}.
Still other software packages cannot abide the idea of separate
installation and run-time locations at all. If you try to @samp{make
@@ -464,24 +749,25 @@ done
@noindent
Note, that's ``should be able to,'' not ``can.'' Be sure to modulate
-these guidelines with plenty of your own intelligence.)
+these guidelines with plenty of your own intelligence.
The details of stowing some specific packages are described in the
following sections.
@menu
* GNU Emacs::
-* Other FSF software::
-* Cygnus software::
-* Perl and Perl 5 modules::
+* Other FSF Software::
+* Cygnus Software::
+* Perl and Perl 5 Modules::
@end menu
-@node GNU Emacs, Other FSF software, Compile-time and install-time, Compile-time and install-time
-@subsection GNU Emacs
+@c ---------------------------------------------------------------------------
+@node GNU Emacs, Other FSF Software, Compile-time vs Install-time, Compile-time vs Install-time
+@section GNU Emacs
Although the Free Software Foundation has many enlightened practices
regarding Makefiles and software installation (see @pxref{Other FSF
-software}), Emacs, its flagship program, doesn't quite follow the
+Software}), Emacs, its flagship program, doesn't quite follow the
rules. In particular, most GNU software allows you to write:
@example
@@ -508,8 +794,9 @@ make
make do-install prefix=/usr/local/stow/emacs
@end example
-@node Other FSF software, Cygnus software, GNU Emacs, Compile-time and install-time
-@subsection Other FSF software
+@c ---------------------------------------------------------------------------
+@node Other FSF Software, Cygnus Software, GNU Emacs, Compile-time vs Install-time
+@section Other FSF Software
The Free Software Foundation, the organization behind the GNU project,
has been unifying the build procedure for its tools for some time.
@@ -528,8 +815,9 @@ such that providing an option to @samp{configure} can allow @samp{make}
and @samp{make install} steps to work correctly without needing to
``fool'' the build process.
-@node Cygnus software, Perl and Perl 5 modules, Other FSF software, Compile-time and install-time
-@subsection Cygnus software
+@c ---------------------------------------------------------------------------
+@node Cygnus Software, Perl and Perl 5 Modules, Other FSF Software, Compile-time vs Install-time
+@section Cygnus Software
Cygnus is a commercial supplier and supporter of GNU software. It has
also written several of its own packages, released under the terms of
@@ -556,8 +844,9 @@ prefix=@r{@dots{}}}, but be ready to interrupt it if you detect that it
is recompiling files. Usually it will work just fine; otherwise,
install manually.
-@node Perl and Perl 5 modules, , Cygnus software, Compile-time and install-time
-@subsection Perl and Perl 5 modules
+@c ---------------------------------------------------------------------------
+@node Perl and Perl 5 Modules, , Cygnus Software, Compile-time vs Install-time
+@section Perl and Perl 5 Modules
Perl 4.036 allows you to specify different locations for installation
and for run-time. It is the only widely-used package in this author's
@@ -657,51 +946,9 @@ find cpan.* \( -name .exists -o -name perllocal.pod \) -print | \
xargs rm
@end example
-@node Multiple stow directories, Conflicts, Compile-time and install-time, Caveats
-@section Multiple stow directories
-
-If there are two or more system administrators who wish to maintain
-software separately, or if there is any other reason to want two or more
-stow directories, it can be done by creating a file named @file{.stow}
-in each stow directory. The presence of @file{/usr/local/foo/.stow}
-informs Stow that, though @file{foo} is not the current stow
-directory, and though it is a subdirectory of the target directory,
-nevertheless it is @emph{a} stow directory and as such Stow
-doesn't ``own'' anything in it (@pxref{Installing packages}). This will
-protect the contents of @file{foo} from a @samp{stow -D}, for instance.
-
-When multiple stow directories share a target tree, the effectiveness
-of Stow is reduced. If a tree-folding symlink is encountered and
-needs to be split open during an installation, but the symlink points
-into the wrong stow directory, Stow will report a conflict rather
-than split open the tree (because it doesn't consider itself to own the
-symlink, and thus cannot remove it).
-@node Conflicts, , Multiple stow directories, Caveats
-@section Conflicts
-
-If, during installation, a file or symlink exists in the target tree and
-has the same name as something Stow needs to create, and if the
-existing name is not a folded tree that can be split open, then a
-@dfn{conflict} has arisen. A conflict also occurs if a directory exists
-where Stow needs to place a symlink to a non-directory. On the
-other hand, if the existing name is merely a symlink that already points
-where Stow needs it to, then no conflict has occurred. (Thus it
-is harmless to install a package that has already been installed.)
-
-A conflict causes Stow to exit immediately and print a warning
-(unless @samp{-c} is given), even if that means aborting an installation
-in mid-package.
-
-@cindex false conflict
-When running Stow with the @samp{-n} or @samp{-c} options, no actual
-filesystem-modifying operations take place. Thus if a folded tree would
-have been split open, but instead was left in place because @samp{-n} or
-@samp{-c} was used, then Stow will report a @dfn{false conflict}, since
-the directory that Stow was expecting to populate has remained an
-unpopulatable symlink.
-
-@node Bootstrapping, Reporting bugs, Caveats, Top
+@c ---------------------------------------------------------------------------
+@node Bootstrapping, Reporting Bugs, Compile-time vs Install-time, Top
@chapter Bootstrapping
Suppose you have a stow directory all set up and ready to go:
@@ -732,11 +979,12 @@ cd /usr/local/stow
perl/bin/perl stow/bin/stow -vv *
@end example
-@node Reporting bugs, Known bugs, Bootstrapping, Top
-@chapter Reporting bugs
+@c ===========================================================================
+@node Reporting Bugs, Known Bugs, Bootstrapping, Top
+@chapter Reporting Bugs
-Please send bug reports to the author, Bob Glickstein, by electronic
-mail. The address to use is @samp{<bobg+stow@@zanshin.com>}. Please
+Please send bug reports to the current maintainer, Kal Hodgson, by electronic
+mail. The address to use is @samp{<bug-stow@@gnu.org>}. Please
include:
@itemize @bullet
@@ -761,19 +1009,23 @@ the output from the command (preferably verbose output, obtained by
adding @samp{--verbose=3} to the Stow command line).
@end itemize
+If you are really keen, consider developing a minimal test case and creating a
+new test. See the @file{t/} directory for lots of examples.
+
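+For instance, a minimal test script might look like the following
+sketch (@samp{mypkg} is a made-up package name; @code{make_dir} and
+@code{make_file} come from @file{t/util.pm}, while the other routines
+are internal to the stow script itself):
+
+@example
+#!/usr/local/bin/perl
+# Minimal test sketch: stow one package and check the resulting link.
+use lib qw(.);
+require "t/util.pm";
+require "stow";
+use Test::More tests => 1;
+
+make_dir('t/target');
+make_dir('t/stow/mypkg/bin');
+make_file('t/stow/mypkg/bin/prog');
+chdir 't/target';
+$Stow_Path = '../stow';
+stow_contents('../stow/mypkg', './', '../stow/mypkg');
+process_tasks();
+is(readlink('bin'), '../stow/mypkg/bin', 'stow a minimal package');
+@end example
+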
Before reporting a bug, please read the manual carefully, especially
-@ref{Known bugs}, and @ref{Caveats}, to see whether you're encountering
-something that doesn't need reporting, such as a ``false conflict''
-(@pxref{Conflicts}).
+@ref{Known Bugs}, to see whether you're encountering
+something that doesn't need reporting (@pxref{Conflicts}).
-@node Known bugs, GNU General Public License, Reporting bugs, Top
-@chapter Known bugs
+@c ===========================================================================
+@node Known Bugs, GNU General Public License, Reporting Bugs, Top
+@chapter Known Bugs
@itemize @bullet
@item
-When using multiple stow directories (@pxref{Multiple stow
-directories}), Stow fails to ``split open'' tree-folding symlinks
-(@pxref{Installing packages}) that point into a stow directory which is
+When using multiple stow directories (@pxref{Multiple Stow
+Directories}), Stow fails to ``split open'' tree-folding symlinks
+(@pxref{Installing Packages}) that point into a stow directory which is
not the one in use by the current Stow command. Before failing, it
should search the target of the link to see whether any element of the
path contains a @file{.stow} file. If it finds one, it can ``learn''
@@ -781,7 +1033,8 @@ about the cooperating stow directory to short-circuit the @file{.stow}
search the next time it encounters a tree-folding symlink.
@end itemize
-@node GNU General Public License, Index, Known bugs, Top
+@c ===========================================================================
+@node GNU General Public License, Index, Known Bugs, Top
@unnumbered GNU General Public License
@center Version 2, June 1991
diff --git a/t/chkstow.t b/t/chkstow.t
new file mode 100755
index 0000000..f75a88a
--- /dev/null
+++ b/t/chkstow.t
@@ -0,0 +1,115 @@
+#!/usr/local/bin/perl
+
+#
+# Testing chkstow
+#
+
+# load as a library
+BEGIN {
+ use lib qw(.);
+ require "t/util.pm";
+ require "chkstow";
+}
+
+use Test::More tests => 7;
+use Test::Output;
+use English qw(-no_match_vars);
+
+### setup
+eval { remove_dir('t/target'); };
+make_dir('t/target');
+
+chdir 't/target';
+
+# setup stow directory
+make_dir('stow');
+make_file('stow/.stow');
+# perl
+make_dir('stow/perl/bin');
+make_file('stow/perl/bin/perl');
+make_file('stow/perl/bin/a2p');
+make_dir('stow/perl/info');
+make_file('stow/perl/info/perl');
+make_dir('stow/perl/lib/perl');
+make_dir('stow/perl/man/man1');
+make_file('stow/perl/man/man1/perl.1');
+# emacs
+make_dir('stow/emacs/bin');
+make_file('stow/emacs/bin/emacs');
+make_file('stow/emacs/bin/etags');
+make_dir('stow/emacs/info');
+make_file('stow/emacs/info/emacs');
+make_dir('stow/emacs/libexec/emacs');
+make_dir('stow/emacs/man/man1');
+make_file('stow/emacs/man/man1/emacs.1');
+
+# setup target directory
+make_dir('bin');
+make_link('bin/a2p', '../stow/perl/bin/a2p');
+make_link('bin/emacs', '../stow/emacs/bin/emacs');
+make_link('bin/etags', '../stow/emacs/bin/etags');
+make_link('bin/perl', '../stow/perl/bin/perl');
+
+make_dir('info');
+make_link('info/emacs', '../stow/emacs/info/emacs');
+make_link('info/perl', '../stow/perl/info/perl');
+
+make_link('lib', 'stow/perl/lib');
+make_link('libexec', 'stow/emacs/libexec');
+
+make_dir('man');
+make_dir('man/man1');
+make_link('man/man1/emacs', '../../stow/emacs/man/man1/emacs.1');
+make_link('man/man1/perl', '../../stow/perl/man/man1/perl.1');
+
+sub run_chkstow() {
+ process_options();
+ check_stow();
+}
+
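+# The tests below drive chkstow through its command-line options:
+# -t sets the target tree to examine, -b reports bogus (dangling)
+# links, -l lists the packages found, and -a reports alien (unstowed)
+# files.  Directories containing a .stow file are skipped.
+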
+local @ARGV = ('-t', '.', '-b',);
+stderr_like(
+ \&run_chkstow,
+ qr{\Askipping .*stow.*\z}xms,
+ "Skip directories containing .stow");
+
+# squelch warn so that check_stow doesn't carp about skipping .stow all the time
+$SIG{'__WARN__'} = sub { };
+
+@ARGV = ('-t', '.', '-l',);
+stdout_like(
+ \&run_chkstow,
+ qr{emacs$perl$stow}xms,
+ "List packages");
+
+@ARGV = ('-t', '.', '-b',);
+stdout_like(
+ \&run_chkstow,
+ qr{\A\z}xms,
+ "No bogus links exist");
+
+@ARGV = ('-t', '.', '-a',);
+stdout_like(
+ \&run_chkstow,
+ qr{\A\z}xms,
+ "No aliens exist");
+
+# Create an alien
+make_file('bin/alien');
+@ARGV = ('-t', '.', '-a',);
+stdout_like(
+ \&run_chkstow,
+ qr{Unstowed\ file:\ ./bin/alien}xms,
+ "Aliens exist");
+
+make_link('bin/link', 'ireallyhopethisfiledoesn/t.exist');
+@ARGV = ('-t', '.', '-b',);
+stdout_like(
+ \&run_chkstow,
+ qr{Bogus\ link:\ ./bin/link}xms,
+ "Bogus links exist");
+
+@ARGV = ('-b',);
+process_options();
+ok($Target =~ m{\A/usr/local/?\z},
+ "Default target is /usr/local/");
diff --git a/t/cleanup_invalid_links.t b/t/cleanup_invalid_links.t
new file mode 100644
index 0000000..8273926
--- /dev/null
+++ b/t/cleanup_invalid_links.t
@@ -0,0 +1,92 @@
+#!/usr/local/bin/perl
+
+#
+# Testing cleanup_invalid_links()
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 3;
+use English qw(-no_match_vars);
+
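+# cleanup_invalid_links(DIR) is expected to queue removal tasks for
+# links in DIR that point into the stow directory but whose targets no
+# longer exist; dangling links that are not owned by stow must be left
+# alone.  The three cases below cover each situation.
+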
+# local utility
+sub reset_state {
+ @Tasks = ();
+ @Conflicts = ();
+ %Link_Task_For = ();
+ %Dir_Task_For = ();
+ %Option = ();
+ return;
+}
+
+### setup
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+chdir 't/target';
+$Stow_Path= '../stow';
+
+# Note that each of the following tests uses a distinct set of files
+
+#
+# nothing to clean in a simple tree
+#
+reset_state();
+$Option{'verbose'} = 1;
+
+make_dir('../stow/pkg1/bin1');
+make_file('../stow/pkg1/bin1/file1');
+make_link('bin1','../stow/pkg1/bin1');
+
+cleanup_invalid_links('./');
+is(
+ scalar @Tasks, 0
+ => 'nothing to clean'
+);
+
+#
+# cleanup a bad link in a simple tree
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin2');
+make_dir('../stow/pkg2/bin2');
+make_file('../stow/pkg2/bin2/file2a');
+make_link('bin2/file2a','../../stow/pkg2/bin2/file2a');
+make_link('bin2/file2b','../../stow/pkg2/bin2/file2b');
+
+cleanup_invalid_links('bin2');
+ok(
+ scalar(@Conflicts) == 0 &&
+ scalar @Tasks == 1 &&
+ $Link_Task_For{'bin2/file2b'}->{'action'} eq 'remove'
+ => 'cleanup a bad link'
+);
+
+#use Data::Dumper;
+#print Dumper(\@Tasks,\%Link_Task_For,\%Dir_Task_For);
+
+#
+# don't clean up a bad link that is not owned by stow
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin3');
+make_dir('../stow/pkg3/bin3');
+make_file('../stow/pkg3/bin3/file3a');
+make_link('bin3/file3a','../../stow/pkg3/bin3/file3a');
+make_link('bin3/file3b','../../empty');
+
+cleanup_invalid_links('bin3');
+ok(
+ scalar(@Conflicts) == 0 &&
+ scalar @Tasks == 0
+ => 'dont cleanup a bad link not owned by stow'
+);
+
+
diff --git a/t/defer.t b/t/defer.t
new file mode 100644
index 0000000..a4f8cf7
--- /dev/null
+++ b/t/defer.t
@@ -0,0 +1,22 @@
+#!/usr/local/bin/perl
+
+#
+# Testing defer().
+#
+
+# load as a library
+BEGIN { use lib qw(. ..); require "stow"; }
+
+use Test::More tests => 4;
+
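+# defer(PATH) is expected to return true when PATH matches one of the
+# patterns in $Option{'defer'} (each pattern is anchored at the start
+# of the path), and false otherwise.
+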
+$Option{'defer'} = [ 'man' ];
+ok(defer('man/man1/file.1') => 'simple success');
+
+$Option{'defer'} = [ 'lib' ];
+ok(!defer('man/man1/file.1') => 'simple failure');
+
+$Option{'defer'} = [ 'lib', 'man', 'share' ];
+ok(defer('man/man1/file.1') => 'complex success');
+
+$Option{'defer'} = [ 'lib', 'man', 'share' ];
+ok(!defer('bin/file') => 'complex failure');
diff --git a/t/examples.t b/t/examples.t
new file mode 100644
index 0000000..839f822
--- /dev/null
+++ b/t/examples.t
@@ -0,0 +1,204 @@
+#!/usr/local/bin/perl
+
+#
+# Testing examples from the documentation
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 4;
+use English qw(-no_match_vars);
+
+# local utility
+sub reset_state {
+ @Tasks = ();
+ @Conflicts = ();
+ %Link_Task_For = ();
+ %Dir_Task_For = ();
+ %Option = ();
+ return;
+}
+
+### setup
+eval { remove_dir('t/target'); };
+make_dir('t/target/stow');
+
+chdir 't/target';
+$Stow_Path= 'stow';
+
+## set up some fake packages to stow
+
+# perl
+make_dir('stow/perl/bin');
+make_file('stow/perl/bin/perl');
+make_file('stow/perl/bin/a2p');
+make_dir('stow/perl/info');
+make_file('stow/perl/info/perl');
+make_dir('stow/perl/lib/perl');
+make_dir('stow/perl/man/man1');
+make_file('stow/perl/man/man1/perl.1');
+
+# emacs
+make_dir('stow/emacs/bin');
+make_file('stow/emacs/bin/emacs');
+make_file('stow/emacs/bin/etags');
+make_dir('stow/emacs/info');
+make_file('stow/emacs/info/emacs');
+make_dir('stow/emacs/libexec/emacs');
+make_dir('stow/emacs/man/man1');
+make_file('stow/emacs/man/man1/emacs.1');
+
+#
+# stow perl into an empty target
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow/perl/bin');
+make_file('stow/perl/bin/perl');
+make_file('stow/perl/bin/a2p');
+make_dir('stow/perl/info');
+make_dir('stow/perl/lib/perl');
+make_dir('stow/perl/man/man1');
+make_file('stow/perl/man/man1/perl.1');
+
+stow_contents('stow/perl','./','stow/perl');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'bin' && -l 'info' && -l 'lib' && -l 'man' &&
+ readlink('bin') eq 'stow/perl/bin' &&
+ readlink('info') eq 'stow/perl/info' &&
+ readlink('lib') eq 'stow/perl/lib' &&
+ readlink('man') eq 'stow/perl/man'
+ => 'stow perl into an empty target'
+);
+
+
+#
+# stow perl into a non-empty target
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+# clean up previous stow
+remove_link('bin');
+remove_link('info');
+remove_link('lib');
+remove_link('man');
+
+make_dir('bin');
+make_dir('lib');
+make_dir('man/man1');
+
+stow_contents('stow/perl','./','stow/perl');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -d 'bin' && -d 'lib' && -d 'man' && -d 'man/man1' &&
+ -l 'info' && -l 'bin/perl' && -l 'bin/a2p' &&
+ -l 'lib/perl' && -l 'man/man1/perl.1' &&
+ readlink('info') eq 'stow/perl/info' &&
+ readlink('bin/perl') eq '../stow/perl/bin/perl' &&
+ readlink('bin/a2p') eq '../stow/perl/bin/a2p' &&
+ readlink('lib/perl') eq '../stow/perl/lib/perl' &&
+ readlink('man/man1/perl.1') eq '../../stow/perl/man/man1/perl.1'
+ => 'stow perl into a non-empty target'
+);
+
+
+#
+# Install perl into an empty target and then install emacs
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+# clean up previous stow
+remove_link('info');
+remove_dir('bin');
+remove_dir('lib');
+remove_dir('man');
+
+stow_contents('stow/perl', './','stow/perl');
+stow_contents('stow/emacs','./','stow/emacs');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -d 'bin' &&
+ -l 'bin/perl' &&
+ -l 'bin/emacs' &&
+ -l 'bin/a2p' &&
+ -l 'bin/etags' &&
+ readlink('bin/perl') eq '../stow/perl/bin/perl' &&
+ readlink('bin/a2p') eq '../stow/perl/bin/a2p' &&
+ readlink('bin/emacs') eq '../stow/emacs/bin/emacs' &&
+ readlink('bin/etags') eq '../stow/emacs/bin/etags' &&
+
+ -d 'info' &&
+ -l 'info/perl' &&
+ -l 'info/emacs' &&
+ readlink('info/perl') eq '../stow/perl/info/perl' &&
+ readlink('info/emacs') eq '../stow/emacs/info/emacs' &&
+
+ -d 'man' &&
+ -d 'man/man1' &&
+ -l 'man/man1/perl.1' &&
+ -l 'man/man1/emacs.1' &&
+ readlink('man/man1/perl.1') eq '../../stow/perl/man/man1/perl.1' &&
+ readlink('man/man1/emacs.1') eq '../../stow/emacs/man/man1/emacs.1' &&
+
+ -l 'lib' &&
+ -l 'libexec' &&
+ readlink('lib') eq 'stow/perl/lib' &&
+ readlink('libexec') eq 'stow/emacs/libexec' &&
+ 1
+ => 'stow perl into an empty target, then stow emacs'
+);
+
+#
+# BUG 1:
+# 1. stow a package containing an empty directory
+# 2. stow another package containing the same directory, but non-empty
+# 3. unstow the second package
+# Expected: the original empty directory should remain; the behaviour
+# is the same as if the empty directory had nothing to do with stow
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow/pkg1a/bin1');
+make_dir('stow/pkg1b/bin1');
+make_file('stow/pkg1b/bin1/file1b');
+
+stow_contents('stow/pkg1a', './', 'stow/pkg1a');
+stow_contents('stow/pkg1b', './', 'stow/pkg1b');
+unstow_contents('stow/pkg1b', './', 'stow/pkg1b');
+process_tasks();
+
+ok(
+ scalar(@Conflicts) == 0 &&
+ -d 'bin1'
+ => 'bug 1: stowing empty dirs'
+);
+
+
+#
+# BUG 2: split open tree-folding symlinks pointing inside different stow
+# directories
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow2a/pkg2a/bin2');
+make_file('stow2a/pkg2a/bin2/file2a');
+make_file('stow2a/.stow');
+make_dir('stow2b/pkg2b/bin2');
+make_file('stow2b/pkg2b/bin2/file2b');
+make_file('stow2b/.stow');
+
+stow_contents('stow2a/pkg2a','./', 'stow2a/pkg2a');
+stow_contents('stow2b/pkg2b','./', 'stow2b/pkg2b');
+process_tasks();
+
+## Finish this test
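+# A sketch of the assertion this test could eventually make, assuming
+# the desired behaviour is to split open the folded tree rather than
+# report a conflict (left commented out so the plan stays at 4 tests):
+#
+# ok(
+#     scalar(@Conflicts) == 0 &&
+#     -d 'bin2' &&
+#     readlink('bin2/file2a') eq '../stow2a/pkg2a/bin2/file2a' &&
+#     readlink('bin2/file2b') eq '../stow2b/pkg2b/bin2/file2b'
+#     => 'split open a folded tree owned by another stow directory'
+# );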
diff --git a/t/find_stowed_path.t b/t/find_stowed_path.t
new file mode 100644
index 0000000..03a7c73
--- /dev/null
+++ b/t/find_stowed_path.t
@@ -0,0 +1,51 @@
+#!/usr/local/bin/perl
+
+#
+# Testing find_stowed_path()
+#
+
+BEGIN { require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 5;
+
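+# find_stowed_path(TARGET, SOURCE) is expected to return the path,
+# inside the stow directory, of the file that the link at TARGET would
+# reach via SOURCE, or '' when SOURCE does not point into $Stow_Path or
+# into a directory marked with a .stow file.
+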
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+$Stow_Path = 't/stow';
+is(
+ find_stowed_path('t/target/a/b/c', '../../../stow/a/b/c'),
+ 't/stow/a/b/c',
+ => 'from root'
+);
+
+$Stow_Path = '../stow';
+is(
+ find_stowed_path('a/b/c','../../../stow/a/b/c'),
+ '../stow/a/b/c',
+ => 'from target directory'
+);
+
+$Stow_Path = 't/target/stow';
+
+is(
+ find_stowed_path('t/target/a/b/c', '../../stow/a/b/c'),
+ 't/target/stow/a/b/c',
+ => 'stow is subdir of target directory'
+);
+
+is(
+ find_stowed_path('t/target/a/b/c','../../empty'),
+ '',
+ => 'target is not stowed'
+);
+
+make_dir('t/target/stow2');
+make_file('t/target/stow2/.stow');
+
+is(
+ find_stowed_path('t/target/a/b/c','../../stow2/a/b/c'),
+ 't/target/stow2/a/b/c'
+ => q(detect alternate stow directory)
+);
diff --git a/t/foldable.t b/t/foldable.t
new file mode 100644
index 0000000..e4e4fec
--- /dev/null
+++ b/t/foldable.t
@@ -0,0 +1,74 @@
+#!/usr/local/bin/perl
+
+#
+# Testing foldable()
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 4;
+use English qw(-no_match_vars);
+
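+# foldable(DIR) is expected to return the common package directory that
+# every link in DIR points into (meaning DIR can be folded into a
+# single symlink), or '' if the directory cannot be folded.
+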
+### setup
+# be very careful with these
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+chdir 't/target';
+$Stow_Path= '../stow';
+
+# Note that each of the following tests uses a distinct set of files
+
+#
+# can fold a simple tree
+#
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg1/bin1');
+make_file('../stow/pkg1/bin1/file1');
+make_dir('bin1');
+make_link('bin1/file1','../../stow/pkg1/bin1/file1');
+
+is( foldable('bin1'), '../stow/pkg1/bin1' => q(can fold a simple tree) );
+
+#
+# can't fold an empty directory
+#
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg2/bin2');
+make_file('../stow/pkg2/bin2/file2');
+make_dir('bin2');
+
+is( foldable('bin2'), '' => q(can't fold an empty directory) );
+
+#
+# can't fold if dir contains a non-link
+#
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg3/bin3');
+make_file('../stow/pkg3/bin3/file3');
+make_dir('bin3');
+make_link('bin3/file3','../../stow/pkg3/bin3/file3');
+make_file('bin3/non-link');
+
+is( foldable('bin3'), '' => q(can't fold a dir containing non-links) );
+
+#
+# can't fold if links point to different directories
+#
+$Option{'verbose'} = 0;
+
+make_dir('bin4');
+make_dir('../stow/pkg4a/bin4');
+make_file('../stow/pkg4a/bin4/file4a');
+make_link('bin4/file4a','../../stow/pkg4a/bin4/file4a');
+make_dir('../stow/pkg4b/bin4');
+make_file('../stow/pkg4b/bin4/file4b');
+make_link('bin4/file4b','../../stow/pkg4b/bin4/file4b');
+
+is( foldable('bin4'), '' => q(can't fold if links point to different dirs) );
diff --git a/t/join_paths.t b/t/join_paths.t
new file mode 100644
index 0000000..1fc6b24
--- /dev/null
+++ b/t/join_paths.t
@@ -0,0 +1,89 @@
+#!/usr/local/bin/perl
+
+#
+# Testing join_paths();
+#
+
+# load as a library
+BEGIN { use lib qw(. ..); require "stow"; }
+
+use Test::More tests => 13;
+
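+# join_paths(A, B) is expected to concatenate the two paths while
+# collapsing repeated slashes, dropping './' components and resolving
+# '..' where possible, as the cases below illustrate.
+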
+is(
+ join_paths('a/b/c', 'd/e/f'),
+ 'a/b/c/d/e/f',
+ => 'simple'
+);
+
+is(
+ join_paths('/a/b/c', '/d/e/f'),
+ '/a/b/c/d/e/f',
+ => 'leading /'
+);
+
+is(
+ join_paths('/a/b/c/', '/d/e/f/'),
+ '/a/b/c/d/e/f',
+ => 'trailing /'
+);
+
+is(
+ join_paths('///a/b///c//', '/d///////e/f'),
+ '/a/b/c/d/e/f',
+ => 'multiple /\'s'
+);
+
+is(
+ join_paths('', 'a/b/c'),
+ 'a/b/c',
+ => 'first empty'
+);
+
+is(
+ join_paths('a/b/c', ''),
+ 'a/b/c',
+ => 'second empty'
+);
+
+is(
+ join_paths('/', 'a/b/c'),
+ '/a/b/c',
+ => 'first is /'
+);
+
+is(
+ join_paths('a/b/c', '/'),
+ 'a/b/c',
+ => 'second is /'
+);
+
+is(
+ join_paths('///a/b///c//', '/d///////e/f'),
+ '/a/b/c/d/e/f',
+ => 'multiple /\'s'
+);
+
+
+is(
+ join_paths('../a1/b1/../c1/', '/a2/../b2/e2'),
+ '../a1/c1/b2/e2',
+ => 'simple deref ".."'
+);
+
+is(
+ join_paths('../a1/b1/../c1/d1/e1', '../a2/../b2/c2/d2/../e2'),
+ '../a1/c1/d1/b2/c2/e2',
+ => 'complex deref ".."'
+);
+
+is(
+ join_paths('../a1/../../c1', 'a2/../../'),
+ '../..',
+ => 'too many ".."'
+);
+
+is(
+ join_paths('./a1', '../../a2'),
+ '../a2',
+ => 'drop any "./"'
+);
diff --git a/t/parent.t b/t/parent.t
new file mode 100644
index 0000000..52a4bea
--- /dev/null
+++ b/t/parent.t
@@ -0,0 +1,41 @@
+#!/usr/local/bin/perl
+
+#
+# Testing parent()
+#
+
+# load as a library
+BEGIN { use lib qw(. ..); require "stow"; }
+
+use Test::More tests => 5;
+
+is(
+ parent('a/b/c'),
+ 'a/b',
+ => 'no leading or trailing /'
+);
+
+is(
+ parent('/a/b/c'),
+ '/a/b',
+ => 'leading /'
+);
+
+is(
+ parent('a/b/c/'),
+ 'a/b',
+ => 'trailing /'
+);
+
+is(
+ parent('/////a///b///c///'),
+ '/a/b',
+ => 'multiple /'
+);
+
+is (
+ parent('a'),
+ ''
+ => 'empty parent'
+);
+
diff --git a/t/relative_path.t b/t/relative_path.t
new file mode 100644
index 0000000..7269b7d
--- /dev/null
+++ b/t/relative_path.t
@@ -0,0 +1,41 @@
+#!/usr/local/bin/perl
+
+#
+# Testing relative_path();
+#
+
+# load as a library
+BEGIN { use lib qw(. ..); require "stow"; }
+
+use Test::More tests => 5;
+
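+# relative_path(FROM, TO) is expected to return the path that reaches
+# TO when starting from FROM, e.g. '../d' to get from a/b/c to a/b/d.
+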
+is(
+ relative_path('a/b/c', 'a/b/d'),
+ '../d',
+ => 'different branches'
+);
+
+is(
+ relative_path('/a/b/c', '/a/b/c/d'),
+ 'd',
+ => 'lower same branch'
+);
+
+is(
+ relative_path('a/b/c', 'a/b'),
+ '..',
+ => 'higher, same branch'
+);
+
+is(
+ relative_path('/a/b/c', '/d/e/f'),
+ '../../../d/e/f',
+ => 'common parent is /'
+);
+
+is(
+ relative_path('///a//b//c////', '/a////b/c/d////'),
+ 'd',
+ => 'extra /\'s '
+);
+
diff --git a/t/stow.t b/t/stow.t
new file mode 100644
index 0000000..4281270
--- /dev/null
+++ b/t/stow.t
@@ -0,0 +1,97 @@
+#!/usr/local/bin/perl
+
+#
+# Testing core application
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 10;
+
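+# process_options() parses @ARGV into %Option plus the @Pkgs_To_Stow
+# and @Pkgs_To_Delete lists, and set_stow_path() computes $Stow_Path
+# relative to the target directory; the tests below exercise both.
+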
+local @ARGV = (
+ '-v',
+ '-d t/stow',
+ '-t t/target',
+ 'dummy'
+);
+
+### setup
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+ok eval {process_options(); 1} => 'process options';
+ok eval {set_stow_path(); 1} => 'set stow path';
+
+is($Stow_Path,"../stow" => 'stow dir');
+is_deeply(\@Pkgs_To_Stow, [ 'dummy' ] => 'default to stow');
+
+
+#
+# Check mixed up package options
+#
+%Option=();
+local @ARGV = (
+ '-v',
+ '-D', 'd1', 'd2',
+ '-S', 's1',
+ '-R', 'r1',
+ '-D', 'd3',
+ '-S', 's2', 's3',
+ '-R', 'r2',
+);
+
+@Pkgs_To_Stow = ();
+@Pkgs_To_Delete = ();
+process_options();
+is_deeply(\@Pkgs_To_Delete, [ 'd1', 'd2', 'r1', 'd3', 'r2' ] => 'mixed deletes');
+is_deeply(\@Pkgs_To_Stow, [ 's1', 'r1', 's2', 's3', 'r2' ] => 'mixed stows');
+
+#
+# Check setting deferred paths
+#
+%Option=();
+local @ARGV = (
+ '--defer=man',
+ '--defer=info'
+);
+process_options();
+is_deeply($Option{'defer'}, [ qr(\Aman), qr(\Ainfo) ] => 'defer man and info');
+
+#
+# Check setting override paths
+#
+%Option=();
+local @ARGV = (
+ '--override=man',
+ '--override=info'
+);
+process_options();
+is_deeply($Option{'override'}, [qr(\Aman), qr(\Ainfo)] => 'override man and info');
+
+#
+# Check stripping any matched quotes
+#
+%Option=();
+local @ARGV = (
+ "--override='man'",
+ '--override="info"',
+);
+process_options();
+is_deeply($Option{'override'}, [qr(\Aman), qr(\Ainfo)] => 'strip shell quoting');
+
+#
+# Check setting ignored paths
+#
+%Option=();
+local @ARGV = (
+ '--ignore="~"',
+ '--ignore="\.#.*'
+);
+process_options();
+is_deeply($Option{'ignore'}, [ qr(~\z), qr(\.#.*\z) ] => 'ignore temp files');
+
+
+# vim:ft=perl
diff --git a/t/stow_contents.t b/t/stow_contents.t
new file mode 100644
index 0000000..093a1df
--- /dev/null
+++ b/t/stow_contents.t
@@ -0,0 +1,283 @@
+#!/usr/local/bin/perl
+
+#
+# Testing stow_contents()
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 13;
+use English qw(-no_match_vars);
+
+# local utility
+sub reset_state {
+ @Tasks = ();
+ @Conflicts = ();
+ %Link_Task_For = ();
+ %Dir_Task_For = ();
+ %Option = ();
+ return;
+}
+
+### setup
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+chdir 't/target';
+$Stow_Path= '../stow';
+
+# Note that each of the following tests uses a distinct set of files
+
+#
+# stow a simple tree minimally
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg1/bin1');
+make_file('../stow/pkg1/bin1/file1');
+stow_contents('../stow/pkg1', './', '../stow/pkg1');
+process_tasks();
+is(
+ readlink('bin1'),
+ '../stow/pkg1/bin1',
+ => 'minimal stow of a simple tree'
+);
+
+#
+# stow a simple tree into an existing directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg2/lib2');
+make_file('../stow/pkg2/lib2/file2');
+make_dir('lib2');
+stow_contents('../stow/pkg2', './', '../stow/pkg2');
+process_tasks();
+is(
+ readlink('lib2/file2'),
+ '../../stow/pkg2/lib2/file2',
+ => 'stow simple tree to existing directory'
+);
+
+#
+# unfold existing tree
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg3a/bin3');
+make_file('../stow/pkg3a/bin3/file3a');
+make_link('bin3' => '../stow/pkg3a/bin3'); # emulate stow
+
+make_dir('../stow/pkg3b/bin3');
+make_file('../stow/pkg3b/bin3/file3b');
+stow_contents('../stow/pkg3b', './', '../stow/pkg3b');
+process_tasks();
+ok(
+ -d 'bin3' &&
+ readlink('bin3/file3a') eq '../../stow/pkg3a/bin3/file3a' &&
+ readlink('bin3/file3b') eq '../../stow/pkg3b/bin3/file3b'
+ => 'target already has 1 stowed package'
+);
+
+#
+# Link to a new dir conflicts with existing non-dir (can't unfold)
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_file('bin4'); # this is a file but named like a directory
+make_dir('../stow/pkg4/bin4');
+make_file('../stow/pkg4/bin4/file4');
+stow_contents('../stow/pkg4', './', '../stow/pkg4');
+like(
+ $Conflicts[-1], qr(CONFLICT:.*existing target is neither a link nor a directory)
+ => 'link to new dir conflicts with existing non-directory'
+);
+
+#
+# Target already exists but is not owned by stow
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin5');
+make_link('bin5/file5','../../empty');
+make_dir('../stow/pkg5/bin5/file5');
+stow_contents('../stow/pkg5', './', '../stow/pkg5');
+like(
+ $Conflicts[-1], qr(CONFLICT:.*not owned by stow)
+ => 'target already exists but is not owned by stow'
+);
+
+#
+# Replace existing but invalid target
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_link('file6','../stow/path-does-not-exist');
+make_dir('../stow/pkg6');
+make_file('../stow/pkg6/file6');
+eval{ stow_contents('../stow/pkg6', './', '../stow/pkg6'); process_tasks() };
+is(
+ readlink('file6'),
+ '../stow/pkg6/file6'
+ => 'replace existing but invalid target'
+);
+
+#
+# Target already exists, is owned by stow, but points to a non-directory
+# (can't unfold)
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin7');
+make_dir('../stow/pkg7a/bin7');
+make_file('../stow/pkg7a/bin7/node7');
+make_link('bin7/node7','../../stow/pkg7a/bin7/node7');
+make_dir('../stow/pkg7b/bin7/node7');
+make_file('../stow/pkg7b/bin7/node7/file7');
+stow_contents('../stow/pkg7b', './', '../stow/pkg7b');
+like(
+ $Conflicts[-1], qr(CONFLICT:.*existing target is stowed to a different package)
+ => 'link to new dir conflicts with existing stowed non-directory'
+);
+
+#
+# stowing directories named 0
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg8a/0');
+make_file('../stow/pkg8a/0/file8a');
+make_link('0' => '../stow/pkg8a/0'); # emulate stow
+
+make_dir('../stow/pkg8b/0');
+make_file('../stow/pkg8b/0/file8b');
+stow_contents('../stow/pkg8b', './', '../stow/pkg8b');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -d '0' &&
+ readlink('0/file8a') eq '../../stow/pkg8a/0/file8a' &&
+ readlink('0/file8b') eq '../../stow/pkg8b/0/file8b'
+ => 'stowing directories named 0'
+);
+
+#
+# overriding already stowed documentation
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'override'} = ['man9', 'info9'];
+
+make_dir('../stow/pkg9a/man9/man1');
+make_file('../stow/pkg9a/man9/man1/file9.1');
+make_dir('man9/man1');
+make_link('man9/man1/file9.1' => '../../../stow/pkg9a/man9/man1/file9.1'); # emulate stow
+
+make_dir('../stow/pkg9b/man9/man1');
+make_file('../stow/pkg9b/man9/man1/file9.1');
+stow_contents('../stow/pkg9b', './', '../stow/pkg9b');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('man9/man1/file9.1') eq '../../../stow/pkg9b/man9/man1/file9.1'
+ => 'overriding existing documentation files'
+);
+
+#
+# deferring to already stowed documentation
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'defer'} = ['man10', 'info10'];
+
+make_dir('../stow/pkg10a/man10/man1');
+make_file('../stow/pkg10a/man10/man1/file10.1');
+make_dir('man10/man1');
+make_link('man10/man1/file10.1' => '../../../stow/pkg10a/man10/man1/file10.1'); # emulate stow
+
+make_dir('../stow/pkg10b/man10/man1');
+make_file('../stow/pkg10b/man10/man1/file10.1');
+stow_contents('../stow/pkg10b', './', '../stow/pkg10b');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('man10/man1/file10.1') eq '../../../stow/pkg10a/man10/man1/file10.1'
+ => 'defer to existing documentation files'
+);
+
+#
+# Ignore temp files
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'ignore'} = ['~', '\.#.*'];
+
+make_dir('../stow/pkg11/man11/man1');
+make_file('../stow/pkg11/man11/man1/file11.1');
+make_file('../stow/pkg11/man11/man1/file11.1~');
+make_file('../stow/pkg11/man11/man1/.#file11.1');
+make_dir('man11/man1');
+
+stow_contents('../stow/pkg11', './', '../stow/pkg11');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('man11/man1/file11.1') eq '../../../stow/pkg11/man11/man1/file11.1' &&
+ !-e 'man11/man1/file11.1~' &&
+ !-e 'man11/man1/.#file11.1'
+ => 'ignore temp files'
+);
+
+#
+# stowing links to library files
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg12/lib12/');
+make_file('../stow/pkg12/lib12/lib.so');
+make_link('../stow/pkg12/lib12/lib.so.1','lib.so');
+
+make_dir('lib12/');
+stow_contents('../stow/pkg12', './', '../stow/pkg12');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('lib12/lib.so.1') eq '../../stow/pkg12/lib12/lib.so.1'
+ => 'stow links to libraries'
+);
+
+#
+# unfolding to stow links to library files
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg13a/lib13/');
+make_file('../stow/pkg13a/lib13/liba.so');
+make_link('../stow/pkg13a/lib13/liba.so.1', 'liba.so');
+make_link('lib13','../stow/pkg13a/lib13');
+
+make_dir('../stow/pkg13b/lib13/');
+make_file('../stow/pkg13b/lib13/libb.so');
+make_link('../stow/pkg13b/lib13/libb.so.1', 'libb.so');
+
+stow_contents('../stow/pkg13b', './', '../stow/pkg13b');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('lib13/liba.so.1') eq '../../stow/pkg13a/lib13/liba.so.1' &&
+ readlink('lib13/libb.so.1') eq '../../stow/pkg13b/lib13/libb.so.1'
+ => 'unfolding to stow links to libraries'
+);
diff --git a/t/unstow_contents.t b/t/unstow_contents.t
new file mode 100644
index 0000000..06880f3
--- /dev/null
+++ b/t/unstow_contents.t
@@ -0,0 +1,276 @@
+#!/usr/local/bin/perl
+
+#
+# Testing unstow_contents()
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 11;
+use English qw(-no_match_vars);
+
+# local utility
+sub reset_state {
+ @Tasks = ();
+ @Conflicts = ();
+ %Link_Task_For = ();
+ %Dir_Task_For = ();
+ %Option = ();
+ return;
+}
+
+### setup
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+chdir 't/target';
+$Stow_Path= '../stow';
+
+# Note that each of the following tests uses a distinct set of files
+
+#
+# unstow a simple tree minimally
+#
+
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg1/bin1');
+make_file('../stow/pkg1/bin1/file1');
+make_link('bin1','../stow/pkg1/bin1');
+
+unstow_contents('../stow/pkg1','./');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -f '../stow/pkg1/bin1/file1' && ! -e 'bin1'
+ => 'unstow a simple tree'
+);
+
+#
+# unstow a simple tree from an existing directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('lib2');
+make_dir('../stow/pkg2/lib2');
+make_file('../stow/pkg2/lib2/file2');
+make_link('lib2/file2', '../../stow/pkg2/lib2/file2');
+unstow_contents('../stow/pkg2','./');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -f '../stow/pkg2/lib2/file2' && -d 'lib2'
+ => 'unstow simple tree from a pre-existing directory'
+);
+
+#
+# fold tree after unstowing
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin3');
+
+make_dir('../stow/pkg3a/bin3');
+make_file('../stow/pkg3a/bin3/file3a');
+make_link('bin3/file3a' => '../../stow/pkg3a/bin3/file3a'); # emulate stow
+
+make_dir('../stow/pkg3b/bin3');
+make_file('../stow/pkg3b/bin3/file3b');
+make_link('bin3/file3b' => '../../stow/pkg3b/bin3/file3b'); # emulate stow
+unstow_contents('../stow/pkg3b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'bin3' &&
+ readlink('bin3') eq '../stow/pkg3a/bin3'
+ => 'fold tree after unstowing'
+);
+
+#
+# existing link is owned by stow but is invalid so it gets removed anyway
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin4');
+make_dir('../stow/pkg4/bin4');
+make_file('../stow/pkg4/bin4/file4');
+make_link('bin4/file4', '../../stow/pkg4/bin4/does-not-exist');
+
+unstow_contents('../stow/pkg4', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ ! -e 'bin4/file4'
+ => q(remove invalid link owned by stow)
+);
+
+#
+# Existing link is not owned by stow
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg5/bin5');
+make_link('bin5', '../not-stow');
+
+unstow_contents('../stow/pkg5', './');
+like(
+ $Conflicts[-1], qr(CONFLICT:.*existing target is not owned by stow)
+ => q(existing link not owned by stow)
+);
+#
+# Target already exists, is owned by stow, but points to a different package
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin6');
+make_dir('../stow/pkg6a/bin6');
+make_file('../stow/pkg6a/bin6/file6');
+make_link('bin6/file6', '../../stow/pkg6a/bin6/file6');
+
+make_dir('../stow/pkg6b/bin6');
+make_file('../stow/pkg6b/bin6/file6');
+
+unstow_contents('../stow/pkg6b', './');
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'bin6/file6' &&
+ readlink('bin6/file6') eq '../../stow/pkg6a/bin6/file6'
+ => q(ignore existing link that points to a different package)
+);
+
+#
+# Don't unlink anything under the stow directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow'); # make our stow dir a subdir of the target
+$Stow_Path = 'stow';
+
+# emulate stowing into ourself (bizarre corner case or accident)
+make_dir('stow/pkg7a/stow/pkg7b');
+make_file('stow/pkg7a/stow/pkg7b/file7b');
+make_link('stow/pkg7b', '../stow/pkg7a/stow/pkg7b');
+
+unstow_contents('stow/pkg7b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'stow/pkg7b' &&
+ readlink('stow/pkg7b') eq '../stow/pkg7a/stow/pkg7b'
+ => q(don't unlink any nodes under the stow directory)
+);
+
+#
+# Don't unlink any nodes under another stow directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow'); # make our stow dir a subdir of the target
+$Stow_Path = 'stow';
+
+make_dir('stow2'); # make our alternate stow dir a subdir of target
+make_file('stow2/.stow');
+
+# emulate stowing into ourself (bizarre corner case or accident)
+make_dir('stow/pkg8a/stow2/pkg8b');
+make_file('stow/pkg8a/stow2/pkg8b/file8b');
+make_link('stow2/pkg8b', '../stow/pkg8a/stow2/pkg8b');
+
+unstow_contents('stow/pkg8a', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'stow2/pkg8b' &&
+ readlink('stow2/pkg8b') eq '../stow/pkg8a/stow2/pkg8b'
+ => q(don't unlink any nodes under another stow directory)
+);
+
+#
+# overriding already stowed documentation
+#
+reset_state();
+$Stow_Path = '../stow';
+$Option{'verbose'} = 0;
+$Option{'override'} = ['man9', 'info9'];
+
+make_dir('../stow/pkg9a/man9/man1');
+make_file('../stow/pkg9a/man9/man1/file9.1');
+make_dir('man9/man1');
+make_link('man9/man1/file9.1' => '../../../stow/pkg9a/man9/man1/file9.1'); # emulate stow
+
+make_dir('../stow/pkg9b/man9/man1');
+make_file('../stow/pkg9b/man9/man1/file9.1');
+unstow_contents('../stow/pkg9b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ !-l 'man9/man1/file9.1'
+ => 'overriding existing documentation files'
+);
+
+#
+# deferring to already stowed documentation
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'defer'} = ['man10', 'info10'];
+
+make_dir('../stow/pkg10a/man10/man1');
+make_file('../stow/pkg10a/man10/man1/file10a.1');
+make_dir('man10/man1');
+make_link('man10/man1/file10a.1' => '../../../stow/pkg10a/man10/man1/file10a.1');
+
+# need this to block folding
+make_dir('../stow/pkg10b/man10/man1');
+make_file('../stow/pkg10b/man10/man1/file10b.1');
+make_link('man10/man1/file10b.1' => '../../../stow/pkg10b/man10/man1/file10b.1');
+
+
+make_dir('../stow/pkg10c/man10/man1');
+make_file('../stow/pkg10c/man10/man1/file10a.1');
+unstow_contents('../stow/pkg10c', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('man10/man1/file10a.1') eq '../../../stow/pkg10a/man10/man1/file10a.1'
+ => 'defer to existing documentation files'
+);
+
+#
+# Ignore temp files
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'ignore'} = ['~', '\.#.*'];
+
+make_dir('../stow/pkg12/man12/man1');
+make_file('../stow/pkg12/man12/man1/file12.1');
+make_file('../stow/pkg12/man12/man1/file12.1~');
+make_file('../stow/pkg12/man12/man1/.#file12.1');
+make_dir('man12/man1');
+make_link('man12/man1/file12.1' => '../../../stow/pkg12/man12/man1/file12.1');
+
+unstow_contents('../stow/pkg12', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ !-e 'man12/man1/file12.1'
+ => 'ignore temp files'
+);
+
+
+# Todo
+#
+# Test cleaning up subdirs with --paranoid option
+
diff --git a/t/unstow_contents_orig.t b/t/unstow_contents_orig.t
new file mode 100644
index 0000000..ead3dc5
--- /dev/null
+++ b/t/unstow_contents_orig.t
@@ -0,0 +1,277 @@
+#!/usr/local/bin/perl
+
+#
+# Testing unstow_contents_orig()
+#
+
+# load as a library
+BEGIN { use lib qw(.); require "t/util.pm"; require "stow"; }
+
+use Test::More tests => 11;
+use English qw(-no_match_vars);
+
+# local utility
+sub reset_state {
+ @Tasks = ();
+ @Conflicts = ();
+ %Link_Task_For = ();
+ %Dir_Task_For = ();
+ %Option = ();
+ return;
+}
+
+### setup
+eval { remove_dir('t/target'); };
+eval { remove_dir('t/stow'); };
+make_dir('t/target');
+make_dir('t/stow');
+
+chdir 't/target';
+$Stow_Path= '../stow';
+
+# Note that each of the following tests uses a distinct set of files
+
+#
+# unstow a simple tree minimally
+#
+
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg1/bin1');
+make_file('../stow/pkg1/bin1/file1');
+make_link('bin1','../stow/pkg1/bin1');
+
+unstow_contents_orig('../stow/pkg1','./');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -f '../stow/pkg1/bin1/file1' && ! -e 'bin1'
+ => 'unstow a simple tree'
+);
+
+#
+# unstow a simple tree from an existing directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('lib2');
+make_dir('../stow/pkg2/lib2');
+make_file('../stow/pkg2/lib2/file2');
+make_link('lib2/file2', '../../stow/pkg2/lib2/file2');
+unstow_contents_orig('../stow/pkg2','./');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -f '../stow/pkg2/lib2/file2' && -d 'lib2'
+ => 'unstow simple tree from a pre-existing directory'
+);
+
+#
+# fold tree after unstowing
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin3');
+
+make_dir('../stow/pkg3a/bin3');
+make_file('../stow/pkg3a/bin3/file3a');
+make_link('bin3/file3a' => '../../stow/pkg3a/bin3/file3a'); # emulate stow
+
+make_dir('../stow/pkg3b/bin3');
+make_file('../stow/pkg3b/bin3/file3b');
+make_link('bin3/file3b' => '../../stow/pkg3b/bin3/file3b'); # emulate stow
+unstow_contents_orig('../stow/pkg3b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'bin3' &&
+ readlink('bin3') eq '../stow/pkg3a/bin3'
+ => 'fold tree after unstowing'
+);
+
+#
+# existing link is owned by stow but is invalid so it gets removed anyway
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin4');
+make_dir('../stow/pkg4/bin4');
+make_file('../stow/pkg4/bin4/file4');
+make_link('bin4/file4', '../../stow/pkg4/bin4/does-not-exist');
+
+unstow_contents_orig('../stow/pkg4', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ ! -e 'bin4/file4'
+ => q(remove invalid link owned by stow)
+);
+
+#
+# Existing link is not owned by stow
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('../stow/pkg5/bin5');
+make_link('bin5', '../not-stow');
+
+unstow_contents_orig('../stow/pkg5', './');
+#like(
+# $Conflicts[-1], qr(CONFLICT:.*can't unlink.*not owned by stow)
+# => q(existing link not owned by stow)
+#);
+ok(
+ -l 'bin5' && readlink('bin5') eq '../not-stow'
+ => q(existing link not owned by stow)
+);
+#
+# Target already exists, is owned by stow, but points to a different package
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('bin6');
+make_dir('../stow/pkg6a/bin6');
+make_file('../stow/pkg6a/bin6/file6');
+make_link('bin6/file6', '../../stow/pkg6a/bin6/file6');
+
+make_dir('../stow/pkg6b/bin6');
+make_file('../stow/pkg6b/bin6/file6');
+
+unstow_contents_orig('../stow/pkg6b', './');
+ok(
+ -l 'bin6/file6' && readlink('bin6/file6') eq '../../stow/pkg6a/bin6/file6'
+ => q(existing link owned by stow but points to a different package)
+);
+
+#
+# Don't unlink anything under the stow directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow'); # make our stow dir a subdir of the target
+$Stow_Path = 'stow';
+
+# emulate stowing into ourself (bizarre corner case or accident)
+make_dir('stow/pkg7a/stow/pkg7b');
+make_file('stow/pkg7a/stow/pkg7b/file7b');
+make_link('stow/pkg7b', '../stow/pkg7a/stow/pkg7b');
+
+unstow_contents_orig('stow/pkg7b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'stow/pkg7b' &&
+ readlink('stow/pkg7b') eq '../stow/pkg7a/stow/pkg7b'
+ => q(don't unlink any nodes under the stow directory)
+);
+
+#
+# Don't unlink any nodes under another stow directory
+#
+reset_state();
+$Option{'verbose'} = 0;
+
+make_dir('stow'); # make our stow dir a subdir of the target
+$Stow_Path = 'stow';
+
+make_dir('stow2'); # make our alternate stow dir a subdir of target
+make_file('stow2/.stow');
+
+# emulate stowing into ourself (bizarre corner case or accident)
+make_dir('stow/pkg8a/stow2/pkg8b');
+make_file('stow/pkg8a/stow2/pkg8b/file8b');
+make_link('stow2/pkg8b', '../stow/pkg8a/stow2/pkg8b');
+
+unstow_contents_orig('stow/pkg8a', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ -l 'stow2/pkg8b' &&
+ readlink('stow2/pkg8b') eq '../stow/pkg8a/stow2/pkg8b'
+ => q(don't unlink any nodes under another stow directory)
+);
+
+#
+# overriding already stowed documentation
+#
+reset_state();
+$Stow_Path = '../stow';
+$Option{'verbose'} = 0;
+$Option{'override'} = ['man9', 'info9'];
+
+make_dir('../stow/pkg9a/man9/man1');
+make_file('../stow/pkg9a/man9/man1/file9.1');
+make_dir('man9/man1');
+make_link('man9/man1/file9.1' => '../../../stow/pkg9a/man9/man1/file9.1'); # emulate stow
+
+make_dir('../stow/pkg9b/man9/man1');
+make_file('../stow/pkg9b/man9/man1/file9.1');
+unstow_contents_orig('../stow/pkg9b', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ !-l 'man9/man1/file9.1'
+ => 'overriding existing documentation files'
+);
+
+#
+# deferring to already stowed documentation
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'defer'} = ['man10', 'info10'];
+
+make_dir('../stow/pkg10a/man10/man1');
+make_file('../stow/pkg10a/man10/man1/file10a.1');
+make_dir('man10/man1');
+make_link('man10/man1/file10a.1' => '../../../stow/pkg10a/man10/man1/file10a.1');
+
+# need this to block folding
+make_dir('../stow/pkg10b/man10/man1');
+make_file('../stow/pkg10b/man10/man1/file10b.1');
+make_link('man10/man1/file10b.1' => '../../../stow/pkg10b/man10/man1/file10b.1');
+
+
+make_dir('../stow/pkg10c/man10/man1');
+make_file('../stow/pkg10c/man10/man1/file10a.1');
+unstow_contents_orig('../stow/pkg10c', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ readlink('man10/man1/file10a.1') eq '../../../stow/pkg10a/man10/man1/file10a.1'
+ => 'defer to existing documentation files'
+);
+
+#
+# Ignore temp files
+#
+reset_state();
+$Option{'verbose'} = 0;
+$Option{'ignore'} = ['~', '\.#.*'];
+
+make_dir('../stow/pkg12/man12/man1');
+make_file('../stow/pkg12/man12/man1/file12.1');
+make_file('../stow/pkg12/man12/man1/file12.1~');
+make_file('../stow/pkg12/man12/man1/.#file12.1');
+make_dir('man12/man1');
+make_link('man12/man1/file12.1' => '../../../stow/pkg12/man12/man1/file12.1');
+
+unstow_contents_orig('../stow/pkg12', './');
+process_tasks();
+ok(
+ scalar(@Conflicts) == 0 &&
+ !-e 'man12/man1/file12.1'
+ => 'ignore temp files'
+);
+
+# Todo
+#
+# Test cleaning up subdirs with --paranoid option
+
diff --git a/t/util.pm b/t/util.pm
new file mode 100644
index 0000000..abf670d
--- /dev/null
+++ b/t/util.pm
@@ -0,0 +1,157 @@
+#
+# Utilities shared by test scripts
+#
+
+use strict;
+use warnings;
+
+#===== SUBROUTINE ===========================================================
+# Name : make_link()
+# Purpose : safely create a link
+# Parameters: $target => path to the link
+# : $source => where the new link should point
+# Returns : n/a
+# Throws : fatal error if the link can not be safely created
+# Comments : checks for existing nodes
+#============================================================================
+sub make_link {
+ my ($target, $source) = @_;
+
+ if (-l $target) {
+ my $old_source = readlink $target
+ or die "could not read link $target ($!)";
+ if ($old_source ne $source) {
+ die "$target already exists but points elsewhere\n";
+ }
+ }
+ elsif (-e $target ) {
+ die "$target already exists and is not a link\n";
+ }
+ else {
+ symlink $source, $target
+ or die "could not create link $target => $source ($!)\n";
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : make_dir()
+# Purpose : create a directory and any requisite parents
+# Parameters: $dir => path to the new directory
+# Returns : n/a
+# Throws : fatal error if the directory or any of its parents cannot be
+# : created
+# Comments : none
+#============================================================================
+sub make_dir {
+ my ($dir) = @_;
+
+ my @parents = ();
+ for my $part (split '/', $dir) {
+ my $path = join '/', @parents, $part;
+ if (not -d $path and not mkdir $path) {
+ die "could not create directory: $path ($!)\n";
+ }
+ push @parents, $part;
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : make_file()
+# Purpose : create an empty file
+# Parameters: $path => proposed path to the file
+# Returns : n/a
+# Throws : fatal error if the file could not be created
+# Comments : detects clash with an existing non-file
+#============================================================================
+sub make_file {
+ my ($path) =@_;
+
+ if (not -e $path) {
+ open my $FILE ,'>', $path
+ or die "could not create file: $path ($!)\n";
+ close $FILE;
+ }
+ elsif ( not -f $path) {
+ die "a non-file already exists at $path\n";
+ }
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : remove_link()
+# Purpose : remove an existing symbolic link
+# Parameters: $path => path to the symbolic link
+# Returns : n/a
+# Throws : fatal error if the operation fails or if passed the path to a
+# : non-link
+# Comments : none
+#============================================================================
+sub remove_link {
+ my ($path) = @_;
+ if (not -l $path) {
+ die qq(remove_link() called with a non-link: $path);
+ }
+ unlink $path or die "could not remove link: $path ($!)\n";
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : remove_file()
+# Purpose : remove an existing empty file
+# Parameters: $path => the path to the empty file
+# Returns : n/a
+# Throws : fatal error if given file is non-empty or the operation fails
+# Comments : none
+#============================================================================
+sub remove_file {
+ my ($path) = @_;
+ if (not -z $path) {
+ die "file at $path is non-empty\n";
+ }
+ unlink $path or die "could not remove empty file: $path ($!)\n";
+ return;
+}
+
+#===== SUBROUTINE ===========================================================
+# Name : remove_dir()
+# Purpose : safely remove a tree of test files
+# Parameters: $dir => path to the top of the tree
+# Returns : n/a
+# Throws : fatal error if the tree contains a non-link or non-empty file
+# Comments : recursively removes directories containing only softlinks and empty files
+#============================================================================
+sub remove_dir {
+ my ($dir) = @_;
+
+ if (not -d $dir) {
+ die "$dir is not a directory";
+ }
+
+ opendir my $DIR, $dir or die "cannot read directory: $dir ($!)\n";
+ my @listing = readdir $DIR;
+ closedir $DIR;
+
+ NODE:
+ for my $node (@listing) {
+ next NODE if $node eq '.';
+ next NODE if $node eq '..';
+
+ my $path = "$dir/$node";
+ if (-l $path or -z $path) {
+ unlink $path or die "cannot unlink $path ($!)\n";
+ }
+ elsif (-d "$path") {
+ remove_dir($path);
+ }
+ else {
+ die "$path is not a link, directory, or empty file\n";
+ }
+ }
+ rmdir $dir or die "cannot rmdir $dir ($!)\n";
+
+ return;
+}
+
+1;
diff --git a/texinfo.tex b/texinfo.tex
new file mode 100644
index 0000000..8083622
--- /dev/null
+++ b/texinfo.tex
@@ -0,0 +1,7482 @@
+% texinfo.tex -- TeX macros to handle Texinfo files.
+%
+% Load plain if necessary, i.e., if running under initex.
+\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi
+%
+\def\texinfoversion{2006-10-04.17}
+%
+% Copyright (C) 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995,
+% 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free
+% Software Foundation, Inc.
+%
+% This texinfo.tex file is free software; you can redistribute it and/or
+% modify it under the terms of the GNU General Public License as
+% published by the Free Software Foundation; either version 2, or (at
+% your option) any later version.
+%
+% This texinfo.tex file is distributed in the hope that it will be
+% useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+% of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+% General Public License for more details.
+%
+% You should have received a copy of the GNU General Public License
+% along with this texinfo.tex file; see the file COPYING. If not, write
+% to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+% Boston, MA 02110-1301, USA.
+%
+% As a special exception, when this file is read by TeX when processing
+% a Texinfo source document, you may use the result without
+% restriction. (This has been our intent since Texinfo was invented.)
+%
+% Please try the latest version of texinfo.tex before submitting bug
+% reports; you can get the latest version from:
+% http://www.gnu.org/software/texinfo/ (the Texinfo home page), or
+% ftp://tug.org/tex/texinfo.tex
+% (and all CTAN mirrors, see http://www.ctan.org).
+% The texinfo.tex in any given distribution could well be out
+% of date, so if that's what you're using, please check.
+%
+% Send bug reports to bug-texinfo@gnu.org. Please include a
+% complete document in each bug report with which we can reproduce the
+% problem. Patches are, of course, greatly appreciated.
+%
+% To process a Texinfo manual with TeX, it's most reliable to use the
+% texi2dvi shell script that comes with the distribution. For a simple
+% manual foo.texi, however, you can get away with this:
+% tex foo.texi
+% texindex foo.??
+% tex foo.texi
+% tex foo.texi
+% dvips foo.dvi -o # or whatever; this makes foo.ps.
+% The extra TeX runs get the cross-reference information correct.
+% Sometimes one run after texindex suffices, and sometimes you need more
+% than two; texi2dvi does it as many times as necessary.
+%
+% It is possible to adapt texinfo.tex for other languages, to some
+% extent. You can get the existing language-specific files from the
+% full Texinfo distribution.
+%
+% The GNU Texinfo home page is http://www.gnu.org/software/texinfo.
+
+
+\message{Loading texinfo [version \texinfoversion]:}
+
+% If in a .fmt file, print the version number
+% and turn on active characters that we couldn't do earlier because
+% they might have appeared in the input file name.
+\everyjob{\message{[Texinfo version \texinfoversion]}%
+ \catcode`+=\active \catcode`\_=\active}
+
+\message{Basics,}
+\chardef\other=12
+
+% We never want plain's \outer definition of \+ in Texinfo.
+% For @tex, we can use \tabalign.
+\let\+ = \relax
+
+% Save some plain tex macros whose names we will redefine.
+\let\ptexb=\b
+\let\ptexbullet=\bullet
+\let\ptexc=\c
+\let\ptexcomma=\,
+\let\ptexdot=\.
+\let\ptexdots=\dots
+\let\ptexend=\end
+\let\ptexequiv=\equiv
+\let\ptexexclam=\!
+\let\ptexfootnote=\footnote
+\let\ptexgtr=>
+\let\ptexhat=^
+\let\ptexi=\i
+\let\ptexindent=\indent
+\let\ptexinsert=\insert
+\let\ptexlbrace=\{
+\let\ptexless=<
+\let\ptexnewwrite\newwrite
+\let\ptexnoindent=\noindent
+\let\ptexplus=+
+\let\ptexrbrace=\}
+\let\ptexslash=\/
+\let\ptexstar=\*
+\let\ptext=\t
+
+% If this character appears in an error message or help string, it
+% starts a new line in the output.
+\newlinechar = `^^J
+
+% Use TeX 3.0's \inputlineno to get the line number, for better error
+% messages, but if we're using an old version of TeX, don't do anything.
+%
+\ifx\inputlineno\thisisundefined
+ \let\linenumber = \empty % Pre-3.0.
+\else
+ \def\linenumber{l.\the\inputlineno:\space}
+\fi
+
+% Set up fixed words for English if not already set.
+\ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi
+\ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi
+\ifx\putwordfile\undefined \gdef\putwordfile{file}\fi
+\ifx\putwordin\undefined \gdef\putwordin{in}\fi
+\ifx\putwordIndexIsEmpty\undefined \gdef\putwordIndexIsEmpty{(Index is empty)}\fi
+\ifx\putwordIndexNonexistent\undefined \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi
+\ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi
+\ifx\putwordInstanceVariableof\undefined \gdef\putwordInstanceVariableof{Instance Variable of}\fi
+\ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi
+\ifx\putwordNoTitle\undefined \gdef\putwordNoTitle{No Title}\fi
+\ifx\putwordof\undefined \gdef\putwordof{of}\fi
+\ifx\putwordon\undefined \gdef\putwordon{on}\fi
+\ifx\putwordpage\undefined \gdef\putwordpage{page}\fi
+\ifx\putwordsection\undefined \gdef\putwordsection{section}\fi
+\ifx\putwordSection\undefined \gdef\putwordSection{Section}\fi
+\ifx\putwordsee\undefined \gdef\putwordsee{see}\fi
+\ifx\putwordSee\undefined \gdef\putwordSee{See}\fi
+\ifx\putwordShortTOC\undefined \gdef\putwordShortTOC{Short Contents}\fi
+\ifx\putwordTOC\undefined \gdef\putwordTOC{Table of Contents}\fi
+%
+\ifx\putwordMJan\undefined \gdef\putwordMJan{January}\fi
+\ifx\putwordMFeb\undefined \gdef\putwordMFeb{February}\fi
+\ifx\putwordMMar\undefined \gdef\putwordMMar{March}\fi
+\ifx\putwordMApr\undefined \gdef\putwordMApr{April}\fi
+\ifx\putwordMMay\undefined \gdef\putwordMMay{May}\fi
+\ifx\putwordMJun\undefined \gdef\putwordMJun{June}\fi
+\ifx\putwordMJul\undefined \gdef\putwordMJul{July}\fi
+\ifx\putwordMAug\undefined \gdef\putwordMAug{August}\fi
+\ifx\putwordMSep\undefined \gdef\putwordMSep{September}\fi
+\ifx\putwordMOct\undefined \gdef\putwordMOct{October}\fi
+\ifx\putwordMNov\undefined \gdef\putwordMNov{November}\fi
+\ifx\putwordMDec\undefined \gdef\putwordMDec{December}\fi
+%
+\ifx\putwordDefmac\undefined \gdef\putwordDefmac{Macro}\fi
+\ifx\putwordDefspec\undefined \gdef\putwordDefspec{Special Form}\fi
+\ifx\putwordDefvar\undefined \gdef\putwordDefvar{Variable}\fi
+\ifx\putwordDefopt\undefined \gdef\putwordDefopt{User Option}\fi
+\ifx\putwordDeffunc\undefined \gdef\putwordDeffunc{Function}\fi
+
+% Since the category of space is not known, we have to be careful.
+\chardef\spacecat = 10
+\def\spaceisspace{\catcode`\ =\spacecat}
+
+% sometimes characters are active, so we need control sequences.
+\chardef\colonChar = `\:
+\chardef\commaChar = `\,
+\chardef\dashChar = `\-
+\chardef\dotChar = `\.
+\chardef\exclamChar= `\!
+\chardef\lquoteChar= `\`
+\chardef\questChar = `\?
+\chardef\rquoteChar= `\'
+\chardef\semiChar = `\;
+\chardef\underChar = `\_
+
+% Ignore a token.
+%
+\def\gobble#1{}
+
+% The following is used inside several \edef's.
+\def\makecsname#1{\expandafter\noexpand\csname#1\endcsname}
+
+% Hyphenation fixes.
+\hyphenation{
+ Flor-i-da Ghost-script Ghost-view Mac-OS Post-Script
+ ap-pen-dix bit-map bit-maps
+ data-base data-bases eshell fall-ing half-way long-est man-u-script
+ man-u-scripts mini-buf-fer mini-buf-fers over-view par-a-digm
+ par-a-digms rath-er rec-tan-gu-lar ro-bot-ics se-vere-ly set-up spa-ces
+ spell-ing spell-ings
+ stand-alone strong-est time-stamp time-stamps which-ever white-space
+ wide-spread wrap-around
+}
+
+% Margin to add to right of even pages, to left of odd pages.
+\newdimen\bindingoffset
+\newdimen\normaloffset
+\newdimen\pagewidth \newdimen\pageheight
+
+% For a final copy, take out the rectangles
+% that mark overfull boxes (in case you have decided
+% that the text looks ok even though it passes the margin).
+%
+\def\finalout{\overfullrule=0pt}
+
+% @| inserts a changebar to the left of the current line. It should
+% surround any changed text. This approach does *not* work if the
+% change spans more than two lines of output. To handle that, we would
+% have adopt a much more difficult approach (putting marks into the main
+% vertical list for the beginning and end of each change).
+%
+\def\|{%
+ % \vadjust can only be used in horizontal mode.
+ \leavevmode
+ %
+ % Append this vertical mode material after the current line in the output.
+ \vadjust{%
+ % We want to insert a rule with the height and depth of the current
+ % leading; that is exactly what \strutbox is supposed to record.
+ \vskip-\baselineskip
+ %
+ % \vadjust-items are inserted at the left edge of the type. So
+ % the \llap here moves out into the left-hand margin.
+ \llap{%
+ %
+ % For a thicker or thinner bar, change the `1pt'.
+ \vrule height\baselineskip width1pt
+ %
+ % This is the space between the bar and the text.
+ \hskip 12pt
+ }%
+ }%
+}
+
+% Sometimes it is convenient to have everything in the transcript file
+% and nothing on the terminal. We don't just call \tracingall here,
+% since that produces some useless output on the terminal. We also make
+% some effort to order the tracing commands to reduce output in the log
+% file; cf. trace.sty in LaTeX.
+%
+\def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}%
+\def\loggingall{%
+ \tracingstats2
+ \tracingpages1
+ \tracinglostchars2 % 2 gives us more in etex
+ \tracingparagraphs1
+ \tracingoutput1
+ \tracingmacros2
+ \tracingrestores1
+ \showboxbreadth\maxdimen \showboxdepth\maxdimen
+ \ifx\eTeXversion\undefined\else % etex gives us more logging
+ \tracingscantokens1
+ \tracingifs1
+ \tracinggroups1
+ \tracingnesting2
+ \tracingassigns1
+ \fi
+ \tracingcommands3 % 3 gives us more in etex
+ \errorcontextlines16
+}%
+
+% add check for \lastpenalty to plain's definitions. If the last thing
+% we did was a \nobreak, we don't want to insert more space.
+%
+\def\smallbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\smallskipamount
+ \removelastskip\penalty-50\smallskip\fi\fi}
+\def\medbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\medskipamount
+ \removelastskip\penalty-100\medskip\fi\fi}
+\def\bigbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\bigskipamount
+ \removelastskip\penalty-200\bigskip\fi\fi}
+
+% For @cropmarks command.
+% Do @cropmarks to get crop marks.
+%
+\newif\ifcropmarks
+\let\cropmarks = \cropmarkstrue
+%
+% Dimensions to add cropmarks at corners.
+% Added by P. A. MacKay, 12 Nov. 1986
+%
+\newdimen\outerhsize \newdimen\outervsize % set by the paper size routines
+\newdimen\cornerlong \cornerlong=1pc
+\newdimen\cornerthick \cornerthick=.3pt
+\newdimen\topandbottommargin \topandbottommargin=.75in
+
+% Main output routine.
+\chardef\PAGE = 255
+\output = {\onepageout{\pagecontents\PAGE}}
+
+\newbox\headlinebox
+\newbox\footlinebox
+
+% \onepageout takes a vbox as an argument. Note that \pagecontents
+% does insertions, but you have to call it yourself.
+\def\onepageout#1{%
+ \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi
+ %
+ \ifodd\pageno \advance\hoffset by \bindingoffset
+ \else \advance\hoffset by -\bindingoffset\fi
+ %
+ % Do this outside of the \shipout so @code etc. will be expanded in
+ % the headline as they should be, not taken literally (outputting ''code).
+ \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}%
+ \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}%
+ %
+ {%
+ % Have to do this stuff outside the \shipout because we want it to
+ % take effect in \write's, yet the group defined by the \vbox ends
+ % before the \shipout runs.
+ %
+ \indexdummies % don't expand commands in the output.
+ \normalturnoffactive % \ in index entries must not stay \, e.g., if
+ % the page break happens to be in the middle of an example.
+ % We don't want .vr (or whatever) entries like this:
+ % \entry{{\tt \indexbackslash }acronym}{32}{\code {\acronym}}
+ % "\acronym" won't work when it's read back in;
+ % it needs to be
+ % {\code {{\tt \backslashcurfont }acronym}
+ \shipout\vbox{%
+ % Do this early so pdf references go to the beginning of the page.
+ \ifpdfmakepagedest \pdfdest name{\the\pageno} xyz\fi
+ %
+ \ifcropmarks \vbox to \outervsize\bgroup
+ \hsize = \outerhsize
+ \vskip-\topandbottommargin
+ \vtop to0pt{%
+ \line{\ewtop\hfil\ewtop}%
+ \nointerlineskip
+ \line{%
+ \vbox{\moveleft\cornerthick\nstop}%
+ \hfill
+ \vbox{\moveright\cornerthick\nstop}%
+ }%
+ \vss}%
+ \vskip\topandbottommargin
+ \line\bgroup
+ \hfil % center the page within the outer (page) hsize.
+ \ifodd\pageno\hskip\bindingoffset\fi
+ \vbox\bgroup
+ \fi
+ %
+ \unvbox\headlinebox
+ \pagebody{#1}%
+ \ifdim\ht\footlinebox > 0pt
+ % Only leave this space if the footline is nonempty.
+ % (We lessened \vsize for it in \oddfootingyyy.)
+ % The \baselineskip=24pt in plain's \makefootline has no effect.
+ \vskip 24pt
+ \unvbox\footlinebox
+ \fi
+ %
+ \ifcropmarks
+ \egroup % end of \vbox\bgroup
+ \hfil\egroup % end of (centering) \line\bgroup
+ \vskip\topandbottommargin plus1fill minus1fill
+ \boxmaxdepth = \cornerthick
+ \vbox to0pt{\vss
+ \line{%
+ \vbox{\moveleft\cornerthick\nsbot}%
+ \hfill
+ \vbox{\moveright\cornerthick\nsbot}%
+ }%
+ \nointerlineskip
+ \line{\ewbot\hfil\ewbot}%
+ }%
+ \egroup % \vbox from first cropmarks clause
+ \fi
+ }% end of \shipout\vbox
+ }% end of group with \indexdummies
+ \advancepageno
+ \ifnum\outputpenalty>-20000 \else\dosupereject\fi
+}
+
+\newinsert\margin \dimen\margin=\maxdimen
+
+\def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}}
+{\catcode`\@ =11
+\gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi
+% marginal hacks, juha@viisa.uucp (Juha Takala)
+\ifvoid\margin\else % marginal info is present
+ \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi
+\dimen@=\dp#1 \unvbox#1
+\ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi
+\ifr@ggedbottom \kern-\dimen@ \vfil \fi}
+}
+
+% Here are the rules for the cropmarks. Note that they are
+% offset so that the space between them is truly \outerhsize or \outervsize
+% (P. A. MacKay, 12 November, 1986)
+%
+\def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong}
+\def\nstop{\vbox
+ {\hrule height\cornerthick depth\cornerlong width\cornerthick}}
+\def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong}
+\def\nsbot{\vbox
+ {\hrule height\cornerlong depth\cornerthick width\cornerthick}}
+
+% Parse an argument, then pass it to #1. The argument is the rest of
+% the input line (except we remove a trailing comment). #1 should be a
+% macro which expects an ordinary undelimited TeX argument.
+%
+\def\parsearg{\parseargusing{}}
+\def\parseargusing#1#2{%
+ \def\argtorun{#2}%
+ \begingroup
+ \obeylines
+ \spaceisspace
+ #1%
+ \parseargline\empty% Insert the \empty token, see \finishparsearg below.
+}
+
+{\obeylines %
+ \gdef\parseargline#1^^M{%
+ \endgroup % End of the group started in \parsearg.
+ \argremovecomment #1\comment\ArgTerm%
+ }%
+}
+
+% First remove any @comment, then any @c comment.
+\def\argremovecomment#1\comment#2\ArgTerm{\argremovec #1\c\ArgTerm}
+\def\argremovec#1\c#2\ArgTerm{\argcheckspaces#1\^^M\ArgTerm}
+
+% Each occurrence of `\^^M' or `<space>\^^M' is replaced by a single space.
+%
+% \argremovec might leave us with trailing space, e.g.,
+% @end itemize @c foo
+% This space token undergoes the same procedure and is eventually removed
+% by \finishparsearg.
+%
+\def\argcheckspaces#1\^^M{\argcheckspacesX#1\^^M \^^M}
+\def\argcheckspacesX#1 \^^M{\argcheckspacesY#1\^^M}
+\def\argcheckspacesY#1\^^M#2\^^M#3\ArgTerm{%
+ \def\temp{#3}%
+ \ifx\temp\empty
+ % Do not use \next, perhaps the caller of \parsearg uses it; reuse \temp:
+ \let\temp\finishparsearg
+ \else
+ \let\temp\argcheckspaces
+ \fi
+ % Put the space token in:
+ \temp#1 #3\ArgTerm
+}
+
+% If a _delimited_ argument is enclosed in braces, they get stripped; so
+% to get _exactly_ the rest of the line, we have to prevent that situation.
+% We prepended an \empty token at the very beginning and we expand it now,
+% just before passing the control to \argtorun.
+% (Similarly, we have to think about #3 of \argcheckspacesY above: it is
+% either the null string, or it ends with \^^M---thus there is no danger
+% that a pair of braces would be stripped.)
+%
+% But first, we have to remove the trailing space token.
+%
+\def\finishparsearg#1 \ArgTerm{\expandafter\argtorun\expandafter{#1}}
+
+% \parseargdef\foo{...}
+% is roughly equivalent to
+% \def\foo{\parsearg\Xfoo}
+% \def\Xfoo#1{...}
+%
+% Actually, I use \csname\string\foo\endcsname, i.e. \\foo, as it is my
+% favourite TeX trick. --kasal, 16nov03
+
+\def\parseargdef#1{%
+ \expandafter \doparseargdef \csname\string#1\endcsname #1%
+}
+\def\doparseargdef#1#2{%
+ \def#2{\parsearg#1}%
+ \def#1##1%
+}
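+%
+% Illustrative sketch (hypothetical macro, not used elsewhere in this file):
+% after
+%   \parseargdef\shout{Heard: #1}
+% the call `@shout hello world' grabs the rest of the input line, with any
+% @c/@comment part and trailing spaces removed, and expands to
+% `Heard: hello world'.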
+
+% Several utility definitions with active space:
+{
+ \obeyspaces
+ \gdef\obeyedspace{ }
+
+ % Make each space character in the input produce a normal interword
+ % space in the output. Don't allow a line break at this space, as this
+ % is used only in environments like @example, where each line of input
+ % should produce a line of output anyway.
+ %
+ \gdef\sepspaces{\obeyspaces\let =\tie}
+
+ % If an index command is used in an @example environment, any spaces
+ % therein should become regular spaces in the raw index file, not the
+ % expansion of \tie (\leavevmode \penalty \@M \ ).
+ \gdef\unsepspaces{\let =\space}
+}
+
+
+\def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next}
+
+% Define the framework for environments in texinfo.tex. It's used like this:
+%
+% \envdef\foo{...}
+% \def\Efoo{...}
+%
+% It's the responsibility of \envdef to insert \begingroup before the
+% actual body; @end closes the group after calling \Efoo. \envdef also
+% defines \thisenv, so the current environment is known; @end checks
+% whether the environment name matches. The \checkenv macro can also be
+% used to check whether the current environment is the one expected.
+%
+% Non-false conditionals (@iftex, @ifset) don't fit into this, so they
+% are not treated as environments; they don't open a group. (The
+% implementation of @end takes care not to call \endgroup in this
+% special case.)
+
+
+% At runtime, environments start with this:
+\def\startenvironment#1{\begingroup\def\thisenv{#1}}
+% initialize
+\let\thisenv\empty
+
+% ... but they get defined via ``\envdef\foo{...}'':
+\long\def\envdef#1#2{\def#1{\startenvironment#1#2}}
+\def\envparseargdef#1#2{\parseargdef#1{\startenvironment#1#2}}
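+%
+% Illustrative sketch (hypothetical environment, not defined in this file):
+%   \envdef\demo{\advance\leftskip by 2em}
+%   \def\Edemo{}
+% Then `@demo' opens a group, sets \thisenv to \demo, and indents the body;
+% the matching `@end demo' checks \thisenv, runs \Edemo, and ends the group.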
+
+% Check whether we're in the right environment:
+\def\checkenv#1{%
+ \def\temp{#1}%
+ \ifx\thisenv\temp
+ \else
+ \badenverr
+ \fi
+}
+
+% Environment mismatch, #1 expected:
+\def\badenverr{%
+ \errhelp = \EMsimple
+ \errmessage{This command can appear only \inenvironment\temp,
+ not \inenvironment\thisenv}%
+}
+\def\inenvironment#1{%
+ \ifx#1\empty
+ out of any environment%
+ \else
+ in environment \expandafter\string#1%
+ \fi
+}
+
+% @end foo executes the definition of \Efoo.
+% But first, it executes a specialized version of \checkenv
+%
+\parseargdef\end{%
+ \if 1\csname iscond.#1\endcsname
+ \else
+ % The general wording of \badenverr may not be ideal, but... --kasal, 06nov03
+ \expandafter\checkenv\csname#1\endcsname
+ \csname E#1\endcsname
+ \endgroup
+ \fi
+}
+
+\newhelp\EMsimple{Press RETURN to continue.}
+
+
+%% Simple single-character @ commands
+
+% @@ prints an @
+% Kludge this until the fonts are right (grr).
+\def\@{{\tt\char64}}
+
+% This is turned off because it was never documented
+% and you can use @w{...} around a quote to suppress ligatures.
+%% Define @` and @' to be the same as ` and '
+%% but suppressing ligatures.
+%\def\`{{`}}
+%\def\'{{'}}
+
+% Used to generate quoted braces.
+\def\mylbrace {{\tt\char123}}
+\def\myrbrace {{\tt\char125}}
+\let\{=\mylbrace
+\let\}=\myrbrace
+\begingroup
+ % Definitions to produce \{ and \} commands for indices,
+ % and @{ and @} for the aux/toc files.
+ \catcode`\{ = \other \catcode`\} = \other
+ \catcode`\[ = 1 \catcode`\] = 2
+ \catcode`\! = 0 \catcode`\\ = \other
+ !gdef!lbracecmd[\{]%
+ !gdef!rbracecmd[\}]%
+ !gdef!lbraceatcmd[@{]%
+ !gdef!rbraceatcmd[@}]%
+!endgroup
+
+% @comma{} to avoid , parsing problems.
+\let\comma = ,
+
+% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent
+% Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H.
+\let\, = \c
+\let\dotaccent = \.
+\def\ringaccent#1{{\accent23 #1}}
+\let\tieaccent = \t
+\let\ubaraccent = \b
+\let\udotaccent = \d
+
+% Other special characters: @questiondown @exclamdown @ordf @ordm
+% Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss.
+\def\questiondown{?`}
+\def\exclamdown{!`}
+\def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}}
+\def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}}
+
+% Dotless i and dotless j, used for accents.
+\def\imacro{i}
+\def\jmacro{j}
+\def\dotless#1{%
+ \def\temp{#1}%
+ \ifx\temp\imacro \ptexi
+ \else\ifx\temp\jmacro \j
+ \else \errmessage{@dotless can be used only with i or j}%
+ \fi\fi
+}
+
+% The \TeX{} logo, as in plain, but resetting the spacing so that a
+% period following counts as ending a sentence. (Idea found in latex.)
+%
+\edef\TeX{\TeX \spacefactor=1000 }
+
+% @LaTeX{} logo. Not quite the same results as the definition in
+% latex.ltx, since we use a different font for the raised A; it's most
+% convenient for us to use an explicitly smaller font, rather than using
+% the \scriptstyle font (since we don't reset \scriptstyle and
+% \scriptscriptstyle).
+%
+\def\LaTeX{%
+ L\kern-.36em
+ {\setbox0=\hbox{T}%
+ \vbox to \ht0{\hbox{\selectfonts\lllsize A}\vss}}%
+ \kern-.15em
+ \TeX
+}
+
+% Be sure we're in horizontal mode when doing a tie, since we make space
+% equivalent to this in @example-like environments. Otherwise, a space
+% at the beginning of a line will start with \penalty -- and
+% since \penalty is valid in vertical mode, we'd end up putting the
+% penalty on the vertical list instead of in the new paragraph.
+{\catcode`@ = 11
+ % Avoid using \@M directly, because that causes trouble
+ % if the definition is written into an index file.
+ \global\let\tiepenalty = \@M
+ \gdef\tie{\leavevmode\penalty\tiepenalty\ }
+}
+
+% @: forces normal size whitespace following.
+\def\:{\spacefactor=1000 }
+
+% @* forces a line break.
+\def\*{\hfil\break\hbox{}\ignorespaces}
+
+% @/ allows a line break.
+\let\/=\allowbreak
+
+% @. is an end-of-sentence period.
+\def\.{.\spacefactor=\endofsentencespacefactor\space}
+
+% @! is an end-of-sentence bang.
+\def\!{!\spacefactor=\endofsentencespacefactor\space}
+
+% @? is an end-of-sentence query.
+\def\?{?\spacefactor=\endofsentencespacefactor\space}
+
+% @frenchspacing on|off says whether to put extra space after punctuation.
+%
+\def\onword{on}
+\def\offword{off}
+%
+\parseargdef\frenchspacing{%
+ \def\temp{#1}%
+ \ifx\temp\onword \plainfrenchspacing
+ \else\ifx\temp\offword \plainnonfrenchspacing
+ \else
+ \errhelp = \EMsimple
+ \errmessage{Unknown @frenchspacing option `\temp', must be on/off}%
+ \fi\fi
+}
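+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @frenchspacing on
+% gives uniform interword space after sentence-ending punctuation;
+%   @frenchspacing off
+% restores the wider plain-TeX end-of-sentence space.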
+
+% @w prevents a word break. Without the \leavevmode, @w at the
+% beginning of a paragraph, when TeX is still in vertical mode, would
+% produce a whole line of output instead of starting the paragraph.
+\def\w#1{\leavevmode\hbox{#1}}
+
+% @group ... @end group forces ... to be all on one page, by enclosing
+% it in a TeX vbox. We use \vtop instead of \vbox to construct the box
+% to keep its height that of a normal line. According to the rules for
+% \topskip (p.114 of the TeXbook), the glue inserted is
+% max (\topskip - \ht (first item), 0). If that height is large,
+% therefore, no glue is inserted, and the space between the headline and
+% the text is small, which looks bad.
+%
+% Another complication is that the group might be very large. This can
+% cause the glue on the previous page to be unduly stretched, because it
+% does not have much material. In this case, it's better to add an
+% explicit \vfill so that the extra space is at the bottom. The
+% threshold for doing this is if the group is more than \vfilllimit
+% percent of a page (\vfilllimit can be changed inside of @tex).
+%
+\newbox\groupbox
+\def\vfilllimit{0.7}
+%
+\envdef\group{%
+ \ifnum\catcode`\^^M=\active \else
+ \errhelp = \groupinvalidhelp
+ \errmessage{@group invalid in context where filling is enabled}%
+ \fi
+ \startsavinginserts
+ %
+ \setbox\groupbox = \vtop\bgroup
+ % Do @comment since we are called inside an environment such as
+ % @example, where each end-of-line in the input causes an
+ % end-of-line in the output. We don't want the end-of-line after
+ % the `@group' to put extra space in the output. Since @group
+ % should appear on a line by itself (according to the Texinfo
+ % manual), we don't worry about eating any user text.
+ \comment
+}
+%
+% The \vtop produces a box with normal height and large depth; thus, TeX puts
+% \baselineskip glue before it, and (when the next line of text is done)
+% \lineskip glue after it. Thus, space below is not quite equal to space
+% above. But it's pretty close.
+\def\Egroup{%
+ % To get correct interline space between the last line of the group
+ % and the first line afterwards, we have to propagate \prevdepth.
+ \endgraf % Not \par, as it may have been set to \lisppar.
+ \global\dimen1 = \prevdepth
+ \egroup % End the \vtop.
+ % \dimen0 is the vertical size of the group's box.
+ \dimen0 = \ht\groupbox \advance\dimen0 by \dp\groupbox
+ % \dimen2 is how much space is left on the page (more or less).
+ \dimen2 = \pageheight \advance\dimen2 by -\pagetotal
+ % if the group doesn't fit on the current page, and it's a big big
+ % group, force a page break.
+ \ifdim \dimen0 > \dimen2
+ \ifdim \pagetotal < \vfilllimit\pageheight
+ \page
+ \fi
+ \fi
+ \box\groupbox
+ \prevdepth = \dimen1
+ \checkinserts
+}
+%
+% TeX puts in an \escapechar (i.e., `@') at the beginning of the help
+% message, so this ends up printing `@group can only ...'.
+%
+\newhelp\groupinvalidhelp{%
+group can only be used in environments such as @example,^^J%
+where each line of input produces a line of output.}
+
+% @need space-in-mils
+% forces a page break if there is not space-in-mils remaining.
+
+\newdimen\mil \mil=0.001in
+
+% Old definition--didn't work.
+%\parseargdef\need{\par %
+%% This method tries to make TeX break the page naturally
+%% if the depth of the box does not fit.
+%{\baselineskip=0pt%
+%\vtop to #1\mil{\vfil}\kern -#1\mil\nobreak
+%\prevdepth=-1000pt
+%}}
+
+\parseargdef\need{%
+ % Ensure vertical mode, so we don't make a big box in the middle of a
+ % paragraph.
+ \par
+ %
+ % If the @need value is less than one line space, it's useless.
+ \dimen0 = #1\mil
+ \dimen2 = \ht\strutbox
+ \advance\dimen2 by \dp\strutbox
+ \ifdim\dimen0 > \dimen2
+ %
+ % Do a \strut just to make the height of this box be normal, so the
+ % normal leading is inserted relative to the preceding line.
+ % And a page break here is fine.
+ \vtop to #1\mil{\strut\vfil}%
+ %
+ % TeX does not even consider page breaks if a penalty added to the
+ % main vertical list is 10000 or more. But in order to see if the
+ % empty box we just added fits on the page, we must make it consider
+ % page breaks. On the other hand, we don't want to actually break the
+ % page after the empty box. So we use a penalty of 9999.
+ %
+ % There is an extremely small chance that TeX will actually break the
+ % page at this \penalty, if there are no other feasible breakpoints in
+ % sight. (If the user is using lots of big @group commands, which
+ % almost-but-not-quite fill up a page, TeX will have a hard time doing
+ % good page breaking, for example.) However, I could not construct an
+ % example where a page broke at this \penalty; if it happens in a real
+ % document, then we can reconsider our strategy.
+ \penalty9999
+ %
+ % Back up by the size of the box, whether we did a page break or not.
+ \kern -#1\mil
+ %
+ % Do not allow a page break right after this kern.
+ \nobreak
+ \fi
+}
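+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @need 800
+%   @section Hypothetical Heading
+% breaks the page before the heading unless at least 800 mils (0.8in)
+% remain, so the heading is not stranded at the bottom of a page.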
+
+% @br forces paragraph break (and is undocumented).
+
+\let\br = \par
+
+% @page forces the start of a new page.
+%
+\def\page{\par\vfill\supereject}
+
+% @exdent text....
+% outputs text on separate line in roman font, starting at standard page margin
+
+% This records the amount of indent in the innermost environment.
+% That's how much \exdent should take out.
+\newskip\exdentamount
+
+% This defn is used inside fill environments such as @defun.
+\parseargdef\exdent{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break}
+
+% This defn is used inside nofill environments such as @example.
+\parseargdef\nofillexdent{{\advance \leftskip by -\exdentamount
+ \leftline{\hskip\leftskip{\rm#1}}}}
+
+% @inmargin{WHICH}{TEXT} puts TEXT in the WHICH margin next to the current
+% paragraph. For more general purposes, use the \margin insertion
+% class. WHICH is `l' or `r'.
+%
+\newskip\inmarginspacing \inmarginspacing=1cm
+\def\strutdepth{\dp\strutbox}
+%
+\def\doinmargin#1#2{\strut\vadjust{%
+ \nobreak
+ \kern-\strutdepth
+ \vtop to \strutdepth{%
+ \baselineskip=\strutdepth
+ \vss
+ % if you have multiple lines of stuff to put here, you'll need to
+    % make the vbox of the appropriate size yourself.
+ \ifx#1l%
+ \llap{\ignorespaces #2\hskip\inmarginspacing}%
+ \else
+ \rlap{\hskip\hsize \hskip\inmarginspacing \ignorespaces #2}%
+ \fi
+ \null
+ }%
+}}
+\def\inleftmargin{\doinmargin l}
+\def\inrightmargin{\doinmargin r}
+%
+% @inmargin{TEXT [, RIGHT-TEXT]}
+% (if RIGHT-TEXT is given, use TEXT for left page, RIGHT-TEXT for right;
+% else use TEXT for both).
+%
+\def\inmargin#1{\parseinmargin #1,,\finish}
+\def\parseinmargin#1,#2,#3\finish{% not perfect, but better than nothing.
+ \setbox0 = \hbox{\ignorespaces #2}%
+ \ifdim\wd0 > 0pt
+ \def\lefttext{#1}% have both texts
+ \def\righttext{#2}%
+ \else
+ \def\lefttext{#1}% have only one text
+ \def\righttext{#1}%
+ \fi
+ %
+ \ifodd\pageno
+ \def\temp{\inrightmargin\righttext}% odd page -> outside is right margin
+ \else
+ \def\temp{\inleftmargin\lefttext}%
+ \fi
+ \temp
+}
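+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @inmargin{NOTE}      puts NOTE in the outside margin next to the
+%                        current paragraph;
+%   @inmargin{lft, rgt}  uses `lft' on even (left) pages, `rgt' on odd.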
+
+% @include file insert text of that file as input.
+%
+\def\include{\parseargusing\filenamecatcodes\includezzz}
+\def\includezzz#1{%
+ \pushthisfilestack
+ \def\thisfile{#1}%
+ {%
+ \makevalueexpandable
+ \def\temp{\input #1 }%
+ \expandafter
+ }\temp
+ \popthisfilestack
+}
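+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @include chapter-one.texi
+% reads the named file as if its text appeared at this point; the catcode
+% settings below let such a filename contain characters like - and _.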
+\def\filenamecatcodes{%
+ \catcode`\\=\other
+ \catcode`~=\other
+ \catcode`^=\other
+ \catcode`_=\other
+ \catcode`|=\other
+ \catcode`<=\other
+ \catcode`>=\other
+ \catcode`+=\other
+ \catcode`-=\other
+}
+
+\def\pushthisfilestack{%
+ \expandafter\pushthisfilestackX\popthisfilestack\StackTerm
+}
+\def\pushthisfilestackX{%
+ \expandafter\pushthisfilestackY\thisfile\StackTerm
+}
+\def\pushthisfilestackY #1\StackTerm #2\StackTerm {%
+ \gdef\popthisfilestack{\gdef\thisfile{#1}\gdef\popthisfilestack{#2}}%
+}
+
+\def\popthisfilestack{\errthisfilestackempty}
+\def\errthisfilestackempty{\errmessage{Internal error:
+ the stack of filenames is empty.}}
+
+\def\thisfile{}
+
+% @center line
+% outputs that line, centered.
+%
+\parseargdef\center{%
+ \ifhmode
+ \let\next\centerH
+ \else
+ \let\next\centerV
+ \fi
+ \next{\hfil \ignorespaces#1\unskip \hfil}%
+}
+\def\centerH#1{%
+ {%
+ \hfil\break
+ \advance\hsize by -\leftskip
+ \advance\hsize by -\rightskip
+ \line{#1}%
+ \break
+ }%
+}
+\def\centerV#1{\line{\kern\leftskip #1\kern\rightskip}}
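+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @center A Hypothetical Centered Title
+% outputs that single line centered between the current margins.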
+
+% @sp n outputs n lines of vertical space
+
+\parseargdef\sp{\vskip #1\baselineskip}
+
+% @comment ...line which is ignored...
+% @c is the same as @comment
+% @ignore ... @end ignore is another way to write a comment
+
+\def\comment{\begingroup \catcode`\^^M=\other%
+\catcode`\@=\other \catcode`\{=\other \catcode`\}=\other%
+\commentxxx}
+{\catcode`\^^M=\other \gdef\commentxxx#1^^M{\endgroup}}
+
+\let\c=\comment
+
+% @paragraphindent NCHARS
+% We'll use ems for NCHARS, close enough.
+% NCHARS can also be the word `asis' or `none'.
+% We cannot feasibly implement @paragraphindent asis, though.
+%
+\def\asisword{asis} % no translation, these are keywords
+\def\noneword{none}
+%
+\parseargdef\paragraphindent{%
+ \def\temp{#1}%
+ \ifx\temp\asisword
+ \else
+ \ifx\temp\noneword
+ \defaultparindent = 0pt
+ \else
+ \defaultparindent = #1em
+ \fi
+ \fi
+ \parindent = \defaultparindent
+}
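+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @paragraphindent 2      indent paragraphs by 2 ems;
+%   @paragraphindent none   suppress paragraph indentation;
+%   @paragraphindent asis   accepted, but not really implemented in TeX
+%                           (see the note above).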
+
+% @exampleindent NCHARS
+% We'll use ems for NCHARS like @paragraphindent.
+% It seems @exampleindent asis isn't necessary, but
+% I preserve it to make it similar to @paragraphindent.
+\parseargdef\exampleindent{%
+ \def\temp{#1}%
+ \ifx\temp\asisword
+ \else
+ \ifx\temp\noneword
+ \lispnarrowing = 0pt
+ \else
+ \lispnarrowing = #1em
+ \fi
+ \fi
+}
+
+% @firstparagraphindent WORD
+% If WORD is `none', then suppress indentation of the first paragraph
+% after a section heading. If WORD is `insert', then do indent at such
+% paragraphs.
+%
+% The paragraph indentation is suppressed or not by calling
+% \suppressfirstparagraphindent, which the sectioning commands do.
+% We switch the definition of this back and forth according to WORD.
+% By default, we suppress indentation.
+%
+\def\suppressfirstparagraphindent{\dosuppressfirstparagraphindent}
+\def\insertword{insert}
+%
+\parseargdef\firstparagraphindent{%
+ \def\temp{#1}%
+ \ifx\temp\noneword
+ \let\suppressfirstparagraphindent = \dosuppressfirstparagraphindent
+ \else\ifx\temp\insertword
+ \let\suppressfirstparagraphindent = \relax
+ \else
+ \errhelp = \EMsimple
+ \errmessage{Unknown @firstparagraphindent option `\temp'}%
+ \fi\fi
+}
+
+% Here is how we actually suppress indentation. Redefine \everypar to
+% \kern backwards by \parindent, and then reset itself to empty.
+%
+% We also make \indent itself not actually do anything until the next
+% paragraph.
+%
+\gdef\dosuppressfirstparagraphindent{%
+ \gdef\indent{%
+ \restorefirstparagraphindent
+ \indent
+ }%
+ \gdef\noindent{%
+ \restorefirstparagraphindent
+ \noindent
+ }%
+ \global\everypar = {%
+ \kern -\parindent
+ \restorefirstparagraphindent
+ }%
+}
+
+\gdef\restorefirstparagraphindent{%
+ \global \let \indent = \ptexindent
+ \global \let \noindent = \ptexnoindent
+ \global \everypar = {}%
+}
+
+
+% @asis just yields its argument. Used with @table, for example.
+%
+\def\asis#1{#1}
+
+% @math outputs its argument in math mode.
+%
+% One complication: _ usually means subscripts, but it could also mean
+% an actual _ character, as in @math{@var{some_variable} + 1}. So make
+% _ active, and distinguish by seeing if the current family is \slfam,
+% which is what @var uses.
+{
+ \catcode`\_ = \active
+ \gdef\mathunderscore{%
+ \catcode`\_=\active
+ \def_{\ifnum\fam=\slfam \_\else\sb\fi}%
+ }
+}
+% Another complication: we want \\ (and @\) to output a \ character.
+% FYI, plain.tex uses \\ as a temporary control sequence (why?), but
+% this is not advertised and we don't care. Texinfo does not
+% otherwise define @\.
+%
+% The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\.
+\def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi}
+%
+\def\math{%
+ \tex
+ \mathunderscore
+ \let\\ = \mathbackslash
+ \mathactive
+ $\finishmath
+}
+\def\finishmath#1{#1$\endgroup} % Close the group opened by \tex.
+
+% Some active characters (such as <) are spaced differently in math.
+% We have to reset their definitions in case the @math was an argument
+% to a command which sets the catcodes (such as @item or @section).
+%
+{
+ \catcode`^ = \active
+ \catcode`< = \active
+ \catcode`> = \active
+ \catcode`+ = \active
+ \gdef\mathactive{%
+ \let^ = \ptexhat
+ \let< = \ptexless
+ \let> = \ptexgtr
+ \let+ = \ptexplus
+ }
+}
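+%
+% Illustrative usage in a Texinfo document (shown here only as a comment):
+%   @math{x_i + 1}               sets x with a subscript i, plus 1;
+%   @math{@var{some_variable}+1} keeps the _ literal inside @var, as
+%                                described for \mathunderscore above.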
+
+% @bullet and @minus need the same treatment as @math, just above.
+\def\bullet{$\ptexbullet$}
+\def\minus{$-$}
+
+% @dots{} outputs an ellipsis using the current font.
+% We do .5em per period so that it has the same spacing in the cm
+% typewriter fonts as three actual period characters; on the other hand,
+% in other typewriter fonts three periods are wider than 1.5em. So do
+% whichever is larger.
+%
+\def\dots{%
+ \leavevmode
+ \setbox0=\hbox{...}% get width of three periods
+ \ifdim\wd0 > 1.5em
+ \dimen0 = \wd0
+ \else
+ \dimen0 = 1.5em
+ \fi
+ \hbox to \dimen0{%
+ \hskip 0pt plus.25fil
+ .\hskip 0pt plus1fil
+ .\hskip 0pt plus1fil
+ .\hskip 0pt plus.5fil
+ }%
+}
+
+% @enddots{} is an end-of-sentence ellipsis.
+%
+\def\enddots{%
+ \dots
+ \spacefactor=\endofsentencespacefactor
+}
+
+% @comma{} is so commas can be inserted into text without messing up
+% Texinfo's parsing.
+%
+\let\comma = ,
+
+% @refill is a no-op.
+\let\refill=\relax
+
+% If working on a large document in chapters, it is convenient to
+% be able to disable indexing, cross-referencing, and contents, for test runs.
+% This is done with @novalidate (before @setfilename).
+%
+\newif\iflinks \linkstrue % by default we want the aux files.
+\let\novalidate = \linksfalse
+
+% @setfilename is done at the beginning of every texinfo file.
+% So open here the files we need to have open while reading the input.
+% This makes it possible to make a .fmt file for texinfo.
+\def\setfilename{%
+ \fixbackslash % Turn off hack to swallow `\input texinfo'.
+ \iflinks
+ \tryauxfile
+ % Open the new aux file. TeX will close it automatically at exit.
+ \immediate\openout\auxfile=\jobname.aux
+ \fi % \openindices needs to do some work in any case.
+ \openindices
+ \let\setfilename=\comment % Ignore extra @setfilename cmds.
+ %
+ % If texinfo.cnf is present on the system, read it.
+ % Useful for site-wide @afourpaper, etc.
+ \openin 1 texinfo.cnf
+ \ifeof 1 \else \input texinfo.cnf \fi
+ \closein 1
+ %
+ \comment % Ignore the actual filename.
+}
+
+% Called from \setfilename.
+%
+\def\openindices{%
+ \newindex{cp}%
+ \newcodeindex{fn}%
+ \newcodeindex{vr}%
+ \newcodeindex{tp}%
+ \newcodeindex{ky}%
+ \newcodeindex{pg}%
+}
+
+% @bye.
+\outer\def\bye{\pagealignmacro\tracingstats=1\ptexend}
+
+
+\message{pdf,}
+% adobe `portable' document format
+\newcount\tempnum
+\newcount\lnkcount
+\newtoks\filename
+\newcount\filenamelength
+\newcount\pgn
+\newtoks\toksA
+\newtoks\toksB
+\newtoks\toksC
+\newtoks\toksD
+\newbox\boxA
+\newcount\countA
+\newif\ifpdf
+\newif\ifpdfmakepagedest
+
+% when pdftex is run in dvi mode, \pdfoutput is defined (so \pdfoutput=1
+% can be set). So we test for \relax and 0 as well as \undefined,
+% borrowed from ifpdf.sty.
+\ifx\pdfoutput\undefined
+\else
+ \ifx\pdfoutput\relax
+ \else
+ \ifcase\pdfoutput
+ \else
+ \pdftrue
+ \fi
+ \fi
+\fi
+
+% PDF uses PostScript string constants for the names of xref targets,
+% for display in the outlines, and in other places. Thus, we have to
+% double any backslashes. Otherwise, a name like "\node" will be
+% interpreted as a newline (\n), followed by o, d, e. Not good.
+% http://www.ntg.nl/pipermail/ntg-pdftex/2004-July/000654.html
+% (and related messages, the final outcome is that it is up to the TeX
+% user to double the backslashes and otherwise make the string valid, so
+% that's what we do).
+
+% double active backslashes.
+%
+{\catcode`\@=0 \catcode`\\=\active
+ @gdef@activebackslashdouble{%
+ @catcode`@\=@active
+ @let\=@doublebackslash}
+}
+
+% To handle parens, we must adopt a different approach, since parens are
+% not active characters. hyperref.dtx (which has the same problem as
+% us) handles it with this amazing macro to replace tokens. I've
+% tinkered with it a little for texinfo, but it's definitely from there.
+%
+% #1 is the tokens to replace.
+% #2 is the replacement.
+% #3 is the control sequence with the string.
+%
+\def\HyPsdSubst#1#2#3{%
+ \def\HyPsdReplace##1#1##2\END{%
+ ##1%
+ \ifx\\##2\\%
+ \else
+ #2%
+ \HyReturnAfterFi{%
+ \HyPsdReplace##2\END
+ }%
+ \fi
+ }%
+ \xdef#3{\expandafter\HyPsdReplace#3#1\END}%
+}
+\long\def\HyReturnAfterFi#1\fi{\fi#1}
+
+% #1 is a control sequence in which to do the replacements.
+\def\backslashparens#1{%
+ \xdef#1{#1}% redefine it as its expansion; the definition is simply
+ % \lastnode when called from \setref -> \pdfmkdest.
+ \HyPsdSubst{(}{\realbackslash(}{#1}%
+ \HyPsdSubst{)}{\realbackslash)}{#1}%
+}
+
+\ifpdf
+ \input pdfcolor
+ \pdfcatalog{/PageMode /UseOutlines}%
+ % #1 is image name, #2 width (might be empty/whitespace), #3 height (ditto).
+ \def\dopdfimage#1#2#3{%
+ \def\imagewidth{#2}\setbox0 = \hbox{\ignorespaces #2}%
+ \def\imageheight{#3}\setbox2 = \hbox{\ignorespaces #3}%
+ % without \immediate, pdftex seg faults when the same image is
+ % included twice. (Version 3.14159-pre-1.0-unofficial-20010704.)
+ \ifnum\pdftexversion < 14
+ \immediate\pdfimage
+ \else
+ \immediate\pdfximage
+ \fi
+ \ifdim \wd0 >0pt width \imagewidth \fi
+ \ifdim \wd2 >0pt height \imageheight \fi
+ \ifnum\pdftexversion<13
+ #1.pdf%
+ \else
+ {#1.pdf}%
+ \fi
+ \ifnum\pdftexversion < 14 \else
+ \pdfrefximage \pdflastximage
+ \fi}
+ \def\pdfmkdest#1{{%
+ % We have to set dummies so commands such as @code, and characters
+ % such as \, aren't expanded when present in a section title.
+ \atdummies
+ \activebackslashdouble
+ \def\pdfdestname{#1}%
+ \backslashparens\pdfdestname
+ \pdfdest name{\pdfdestname} xyz%
+ }}%
+ %
+ % used to mark target names; must be expandable.
+ \def\pdfmkpgn#1{#1}%
+ %
+ \let\linkcolor = \Blue % was Cyan, but that seems light?
+ \def\endlink{\Black\pdfendlink}
+ % Adding outlines to PDF; macros for calculating structure of outlines
+ % come from Petr Olsak
+ \def\expnumber#1{\expandafter\ifx\csname#1\endcsname\relax 0%
+ \else \csname#1\endcsname \fi}
+ \def\advancenumber#1{\tempnum=\expnumber{#1}\relax
+ \advance\tempnum by 1
+ \expandafter\xdef\csname#1\endcsname{\the\tempnum}}
+ %
+ % #1 is the section text, which is what will be displayed in the
+ % outline by the pdf viewer. #2 is the pdf expression for the number
+ % of subentries (or empty, for subsubsections). #3 is the node text,
+ % which might be empty if this toc entry had no corresponding node.
+ % #4 is the page number
+ %
+ \def\dopdfoutline#1#2#3#4{%
+ % Generate a link to the node text if that exists; else, use the
+ % page number. We could generate a destination for the section
+ % text in the case where a section has no node, but it doesn't
+ % seem worth the trouble, since most documents are normally structured.
+ \def\pdfoutlinedest{#3}%
+ \ifx\pdfoutlinedest\empty
+ \def\pdfoutlinedest{#4}%
+ \else
+ % Doubled backslashes in the name.
+ {\activebackslashdouble \xdef\pdfoutlinedest{#3}%
+ \backslashparens\pdfoutlinedest}%
+ \fi
+ %
+ % Also double the backslashes in the display string.
+ {\activebackslashdouble \xdef\pdfoutlinetext{#1}%
+ \backslashparens\pdfoutlinetext}%
+ %
+ \pdfoutline goto name{\pdfmkpgn{\pdfoutlinedest}}#2{\pdfoutlinetext}%
+ }
+ %
+ \def\pdfmakeoutlines{%
+ \begingroup
+ % Thanh's hack / proper braces in bookmarks
+ \edef\mylbrace{\iftrue \string{\else}\fi}\let\{=\mylbrace
+ \edef\myrbrace{\iffalse{\else\string}\fi}\let\}=\myrbrace
+ %
+ % Read toc silently, to get counts of subentries for \pdfoutline.
+ \def\numchapentry##1##2##3##4{%
+ \def\thischapnum{##2}%
+ \def\thissecnum{0}%
+ \def\thissubsecnum{0}%
+ }%
+ \def\numsecentry##1##2##3##4{%
+ \advancenumber{chap\thischapnum}%
+ \def\thissecnum{##2}%
+ \def\thissubsecnum{0}%
+ }%
+ \def\numsubsecentry##1##2##3##4{%
+ \advancenumber{sec\thissecnum}%
+ \def\thissubsecnum{##2}%
+ }%
+ \def\numsubsubsecentry##1##2##3##4{%
+ \advancenumber{subsec\thissubsecnum}%
+ }%
+ \def\thischapnum{0}%
+ \def\thissecnum{0}%
+ \def\thissubsecnum{0}%
+ %
+ % use \def rather than \let here because we redefine \chapentry et
+ % al. a second time, below.
+ \def\appentry{\numchapentry}%
+ \def\appsecentry{\numsecentry}%
+ \def\appsubsecentry{\numsubsecentry}%
+ \def\appsubsubsecentry{\numsubsubsecentry}%
+ \def\unnchapentry{\numchapentry}%
+ \def\unnsecentry{\numsecentry}%
+ \def\unnsubsecentry{\numsubsecentry}%
+ \def\unnsubsubsecentry{\numsubsubsecentry}%
+ \readdatafile{toc}%
+ %
+ % Read toc second time, this time actually producing the outlines.
+ % The `-' means take the \expnumber as the absolute number of
+ % subentries, which we calculated on our first read of the .toc above.
+ %
+ % We use the node names as the destinations.
+ \def\numchapentry##1##2##3##4{%
+ \dopdfoutline{##1}{count-\expnumber{chap##2}}{##3}{##4}}%
+ \def\numsecentry##1##2##3##4{%
+ \dopdfoutline{##1}{count-\expnumber{sec##2}}{##3}{##4}}%
+ \def\numsubsecentry##1##2##3##4{%
+ \dopdfoutline{##1}{count-\expnumber{subsec##2}}{##3}{##4}}%
+ \def\numsubsubsecentry##1##2##3##4{% count is always zero
+ \dopdfoutline{##1}{}{##3}{##4}}%
+ %
+ % PDF outlines are displayed using system fonts, instead of
+ % document fonts. Therefore we cannot use special characters,
+ % since the encoding is unknown. For example, the eogonek from
+ % Latin 2 (0xea) gets translated to a | character. Info from
+ % Staszek Wawrykiewicz, 19 Jan 2004 04:09:24 +0100.
+ %
+ % xx to do this right, we have to translate 8-bit characters to
+ % their "best" equivalent, based on the @documentencoding. Right
+ % now, I guess we'll just let the pdf reader have its way.
+ \indexnofonts
+ \setupdatafile
+ \catcode`\\=\active \otherbackslash
+ \input \jobname.toc
+ \endgroup
+ }
+ %
+ \def\skipspaces#1{\def\PP{#1}\def\D{|}%
+ \ifx\PP\D\let\nextsp\relax
+ \else\let\nextsp\skipspaces
+ \ifx\p\space\else\addtokens{\filename}{\PP}%
+ \advance\filenamelength by 1
+ \fi
+ \fi
+ \nextsp}
+ \def\getfilename#1{\filenamelength=0\expandafter\skipspaces#1|\relax}
+ \ifnum\pdftexversion < 14
+ \let \startlink \pdfannotlink
+ \else
+ \let \startlink \pdfstartlink
+ \fi
+ % make a live url in pdf output.
+ \def\pdfurl#1{%
+ \begingroup
+ % it seems we really need yet another set of dummies; have not
+ % tried to figure out what each command should do in the context
+    % of @url.  For now, just make @/ a no-op; that's the only one
+ % people have actually reported a problem with.
+ %
+ \normalturnoffactive
+ \def\@{@}%
+ \let\/=\empty
+ \makevalueexpandable
+ \leavevmode\Red
+ \startlink attr{/Border [0 0 0]}%
+ user{/Subtype /Link /A << /S /URI /URI (#1) >>}%
+ \endgroup}
+ \def\pdfgettoks#1.{\setbox\boxA=\hbox{\toksA={#1.}\toksB={}\maketoks}}
+ \def\addtokens#1#2{\edef\addtoks{\noexpand#1={\the#1#2}}\addtoks}
+ \def\adn#1{\addtokens{\toksC}{#1}\global\countA=1\let\next=\maketoks}
+ \def\poptoks#1#2|ENDTOKS|{\let\first=#1\toksD={#1}\toksA={#2}}
+ \def\maketoks{%
+ \expandafter\poptoks\the\toksA|ENDTOKS|\relax
+ \ifx\first0\adn0
+ \else\ifx\first1\adn1 \else\ifx\first2\adn2 \else\ifx\first3\adn3
+ \else\ifx\first4\adn4 \else\ifx\first5\adn5 \else\ifx\first6\adn6
+ \else\ifx\first7\adn7 \else\ifx\first8\adn8 \else\ifx\first9\adn9
+ \else
+ \ifnum0=\countA\else\makelink\fi
+ \ifx\first.\let\next=\done\else
+ \let\next=\maketoks
+ \addtokens{\toksB}{\the\toksD}
+ \ifx\first,\addtokens{\toksB}{\space}\fi
+ \fi
+ \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
+ \next}
+ \def\makelink{\addtokens{\toksB}%
+ {\noexpand\pdflink{\the\toksC}}\toksC={}\global\countA=0}
+ \def\pdflink#1{%
+ \startlink attr{/Border [0 0 0]} goto name{\pdfmkpgn{#1}}
+ \linkcolor #1\endlink}
+ \def\done{\edef\st{\global\noexpand\toksA={\the\toksB}}\st}
+\else
+ \let\pdfmkdest = \gobble
+ \let\pdfurl = \gobble
+ \let\endlink = \relax
+ \let\linkcolor = \relax
+ \let\pdfmakeoutlines = \relax
+\fi % \ifx\pdfoutput
+
+
+\message{fonts,}
+
+% Change the current font style to #1, remembering it in \curfontstyle.
+% For now, we do not accumulate font styles: @b{@i{foo}} prints foo in
+% italics, not bold italics.
+%
+\def\setfontstyle#1{%
+ \def\curfontstyle{#1}% not as a control sequence, because we are \edef'd.
+ \csname ten#1\endcsname % change the current font
+}
+
+% Select #1 fonts with the current style.
+%
+\def\selectfonts#1{\csname #1fonts\endcsname \csname\curfontstyle\endcsname}
+
+\def\rm{\fam=0 \setfontstyle{rm}}
+\def\it{\fam=\itfam \setfontstyle{it}}
+\def\sl{\fam=\slfam \setfontstyle{sl}}
+\def\bf{\fam=\bffam \setfontstyle{bf}}\def\bfstylename{bf}
+\def\tt{\fam=\ttfam \setfontstyle{tt}}
+
+% Texinfo sort of supports the sans serif font style, which plain TeX does not.
+% So we set up a \sf.
+\newfam\sffam
+\def\sf{\fam=\sffam \setfontstyle{sf}}
+\let\li = \sf % Sometimes we call it \li, not \sf.
+
+% We don't need math for this font style.
+\def\ttsl{\setfontstyle{ttsl}}
+
+
+% Default leading.
+\newdimen\textleading \textleading = 13.2pt
+
+% Set the baselineskip to #1, and the lineskip and strut size
+% correspondingly. There is no deep meaning behind these magic numbers
+% used as factors; they just match (closely enough) what Knuth defined.
+%
+\def\lineskipfactor{.08333}
+\def\strutheightpercent{.70833}
+\def\strutdepthpercent {.29167}
+%
+\def\setleading#1{%
+ \normalbaselineskip = #1\relax
+ \normallineskip = \lineskipfactor\normalbaselineskip
+ \normalbaselines
+ \setbox\strutbox =\hbox{%
+ \vrule width0pt height\strutheightpercent\baselineskip
+ depth \strutdepthpercent \baselineskip
+ }%
+}
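+%
+% Worked example of the above, with the default \textleading of 13.2pt:
+%   \normallineskip = .08333 * 13.2pt = approx. 1.1pt
+%   strut height    = .70833 * 13.2pt = approx. 9.35pt
+%   strut depth     = .29167 * 13.2pt = approx. 3.85pt
+% so the strut's height plus depth exactly matches the baselineskip.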
+
+
+% Set the font macro #1 to the font named #2, adding on the
+% specified font prefix (normally `cm').
+% #3 is the font's design size, #4 is a scale factor
+\def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4}
+
+
+% Use cm as the default font prefix.
+% To specify the font prefix, you must define \fontprefix
+% before you read in texinfo.tex.
+\ifx\fontprefix\undefined
+\def\fontprefix{cm}
+\fi
+% Support font families that don't use the same naming scheme as CM.
+\def\rmshape{r}
+\def\rmbshape{bx} %where the normal face is bold
+\def\bfshape{b}
+\def\bxshape{bx}
+\def\ttshape{tt}
+\def\ttbshape{tt}
+\def\ttslshape{sltt}
+\def\itshape{ti}
+\def\itbshape{bxti}
+\def\slshape{sl}
+\def\slbshape{bxsl}
+\def\sfshape{ss}
+\def\sfbshape{ss}
+\def\scshape{csc}
+\def\scbshape{csc}
+
+% Definitions for a main text size of 11pt. This is the default in
+% Texinfo.
+%
+\def\definetextfontsizexi{
+% Text fonts (11.2pt, magstep1).
+\def\textnominalsize{11pt}
+\edef\mainmagstep{\magstephalf}
+\setfont\textrm\rmshape{10}{\mainmagstep}
+\setfont\texttt\ttshape{10}{\mainmagstep}
+\setfont\textbf\bfshape{10}{\mainmagstep}
+\setfont\textit\itshape{10}{\mainmagstep}
+\setfont\textsl\slshape{10}{\mainmagstep}
+\setfont\textsf\sfshape{10}{\mainmagstep}
+\setfont\textsc\scshape{10}{\mainmagstep}
+\setfont\textttsl\ttslshape{10}{\mainmagstep}
+\font\texti=cmmi10 scaled \mainmagstep
+\font\textsy=cmsy10 scaled \mainmagstep
+
+% A few fonts for @defun names and args.
+\setfont\defbf\bfshape{10}{\magstep1}
+\setfont\deftt\ttshape{10}{\magstep1}
+\setfont\defttsl\ttslshape{10}{\magstep1}
+\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf}
+
+% Fonts for indices, footnotes, small examples (9pt).
+\def\smallnominalsize{9pt}
+\setfont\smallrm\rmshape{9}{1000}
+\setfont\smalltt\ttshape{9}{1000}
+\setfont\smallbf\bfshape{10}{900}
+\setfont\smallit\itshape{9}{1000}
+\setfont\smallsl\slshape{9}{1000}
+\setfont\smallsf\sfshape{9}{1000}
+\setfont\smallsc\scshape{10}{900}
+\setfont\smallttsl\ttslshape{10}{900}
+\font\smalli=cmmi9
+\font\smallsy=cmsy9
+
+% Fonts for small examples (8pt).
+\def\smallernominalsize{8pt}
+\setfont\smallerrm\rmshape{8}{1000}
+\setfont\smallertt\ttshape{8}{1000}
+\setfont\smallerbf\bfshape{10}{800}
+\setfont\smallerit\itshape{8}{1000}
+\setfont\smallersl\slshape{8}{1000}
+\setfont\smallersf\sfshape{8}{1000}
+\setfont\smallersc\scshape{10}{800}
+\setfont\smallerttsl\ttslshape{10}{800}
+\font\smalleri=cmmi8
+\font\smallersy=cmsy8
+
+% Fonts for title page (20.4pt):
+\def\titlenominalsize{20pt}
+\setfont\titlerm\rmbshape{12}{\magstep3}
+\setfont\titleit\itbshape{10}{\magstep4}
+\setfont\titlesl\slbshape{10}{\magstep4}
+\setfont\titlett\ttbshape{12}{\magstep3}
+\setfont\titlettsl\ttslshape{10}{\magstep4}
+\setfont\titlesf\sfbshape{17}{\magstep1}
+\let\titlebf=\titlerm
+\setfont\titlesc\scbshape{10}{\magstep4}
+\font\titlei=cmmi12 scaled \magstep3
+\font\titlesy=cmsy10 scaled \magstep4
+\def\authorrm{\secrm}
+\def\authortt{\sectt}
+
+% Chapter (and unnumbered) fonts (17.28pt).
+\def\chapnominalsize{17pt}
+\setfont\chaprm\rmbshape{12}{\magstep2}
+\setfont\chapit\itbshape{10}{\magstep3}
+\setfont\chapsl\slbshape{10}{\magstep3}
+\setfont\chaptt\ttbshape{12}{\magstep2}
+\setfont\chapttsl\ttslshape{10}{\magstep3}
+\setfont\chapsf\sfbshape{17}{1000}
+\let\chapbf=\chaprm
+\setfont\chapsc\scbshape{10}{\magstep3}
+\font\chapi=cmmi12 scaled \magstep2
+\font\chapsy=cmsy10 scaled \magstep3
+
+% Section fonts (14.4pt).
+\def\secnominalsize{14pt}
+\setfont\secrm\rmbshape{12}{\magstep1}
+\setfont\secit\itbshape{10}{\magstep2}
+\setfont\secsl\slbshape{10}{\magstep2}
+\setfont\sectt\ttbshape{12}{\magstep1}
+\setfont\secttsl\ttslshape{10}{\magstep2}
+\setfont\secsf\sfbshape{12}{\magstep1}
+\let\secbf\secrm
+\setfont\secsc\scbshape{10}{\magstep2}
+\font\seci=cmmi12 scaled \magstep1
+\font\secsy=cmsy10 scaled \magstep2
+
+% Subsection fonts (13.15pt).
+\def\ssecnominalsize{13pt}
+\setfont\ssecrm\rmbshape{12}{\magstephalf}
+\setfont\ssecit\itbshape{10}{1315}
+\setfont\ssecsl\slbshape{10}{1315}
+\setfont\ssectt\ttbshape{12}{\magstephalf}
+\setfont\ssecttsl\ttslshape{10}{1315}
+\setfont\ssecsf\sfbshape{12}{\magstephalf}
+\let\ssecbf\ssecrm
+\setfont\ssecsc\scbshape{10}{1315}
+\font\sseci=cmmi12 scaled \magstephalf
+\font\ssecsy=cmsy10 scaled 1315
+
+% Reduced fonts for @acro in text (10pt).
+\def\reducednominalsize{10pt}
+\setfont\reducedrm\rmshape{10}{1000}
+\setfont\reducedtt\ttshape{10}{1000}
+\setfont\reducedbf\bfshape{10}{1000}
+\setfont\reducedit\itshape{10}{1000}
+\setfont\reducedsl\slshape{10}{1000}
+\setfont\reducedsf\sfshape{10}{1000}
+\setfont\reducedsc\scshape{10}{1000}
+\setfont\reducedttsl\ttslshape{10}{1000}
+\font\reducedi=cmmi10
+\font\reducedsy=cmsy10
+
+% reset the current fonts
+\textfonts
+\rm
+} % end of 11pt text font size definitions
+
+
+% Definitions to make the main text be 10pt Computer Modern, with
+% section, chapter, etc., sizes following suit. This is for the GNU
+% Press printing of the Emacs 22 manual. Maybe other manuals in the
+% future. Used with @smallbook, which sets the leading to 12pt.
+%
+\def\definetextfontsizex{%
+% Text fonts (10pt).
+\def\textnominalsize{10pt}
+\edef\mainmagstep{1000}
+\setfont\textrm\rmshape{10}{\mainmagstep}
+\setfont\texttt\ttshape{10}{\mainmagstep}
+\setfont\textbf\bfshape{10}{\mainmagstep}
+\setfont\textit\itshape{10}{\mainmagstep}
+\setfont\textsl\slshape{10}{\mainmagstep}
+\setfont\textsf\sfshape{10}{\mainmagstep}
+\setfont\textsc\scshape{10}{\mainmagstep}
+\setfont\textttsl\ttslshape{10}{\mainmagstep}
+\font\texti=cmmi10 scaled \mainmagstep
+\font\textsy=cmsy10 scaled \mainmagstep
+
+% A few fonts for @defun names and args.
+\setfont\defbf\bfshape{10}{\magstephalf}
+\setfont\deftt\ttshape{10}{\magstephalf}
+\setfont\defttsl\ttslshape{10}{\magstephalf}
+\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf}
+
+% Fonts for indices, footnotes, small examples (9pt).
+\def\smallnominalsize{9pt}
+\setfont\smallrm\rmshape{9}{1000}
+\setfont\smalltt\ttshape{9}{1000}
+\setfont\smallbf\bfshape{10}{900}
+\setfont\smallit\itshape{9}{1000}
+\setfont\smallsl\slshape{9}{1000}
+\setfont\smallsf\sfshape{9}{1000}
+\setfont\smallsc\scshape{10}{900}
+\setfont\smallttsl\ttslshape{10}{900}
+\font\smalli=cmmi9
+\font\smallsy=cmsy9
+
+% Fonts for small examples (8pt).
+\def\smallernominalsize{8pt}
+\setfont\smallerrm\rmshape{8}{1000}
+\setfont\smallertt\ttshape{8}{1000}
+\setfont\smallerbf\bfshape{10}{800}
+\setfont\smallerit\itshape{8}{1000}
+\setfont\smallersl\slshape{8}{1000}
+\setfont\smallersf\sfshape{8}{1000}
+\setfont\smallersc\scshape{10}{800}
+\setfont\smallerttsl\ttslshape{10}{800}
+\font\smalleri=cmmi8
+\font\smallersy=cmsy8
+
+% Fonts for title page (20.4pt):
+\def\titlenominalsize{20pt}
+\setfont\titlerm\rmbshape{12}{\magstep3}
+\setfont\titleit\itbshape{10}{\magstep4}
+\setfont\titlesl\slbshape{10}{\magstep4}
+\setfont\titlett\ttbshape{12}{\magstep3}
+\setfont\titlettsl\ttslshape{10}{\magstep4}
+\setfont\titlesf\sfbshape{17}{\magstep1}
+\let\titlebf=\titlerm
+\setfont\titlesc\scbshape{10}{\magstep4}
+\font\titlei=cmmi12 scaled \magstep3
+\font\titlesy=cmsy10 scaled \magstep4
+\def\authorrm{\secrm}
+\def\authortt{\sectt}
+
+% Chapter fonts (14.4pt).
+\def\chapnominalsize{14pt}
+\setfont\chaprm\rmbshape{12}{\magstep1}
+\setfont\chapit\itbshape{10}{\magstep2}
+\setfont\chapsl\slbshape{10}{\magstep2}
+\setfont\chaptt\ttbshape{12}{\magstep1}
+\setfont\chapttsl\ttslshape{10}{\magstep2}
+\setfont\chapsf\sfbshape{12}{\magstep1}
+\let\chapbf\chaprm
+\setfont\chapsc\scbshape{10}{\magstep2}
+\font\chapi=cmmi12 scaled \magstep1
+\font\chapsy=cmsy10 scaled \magstep2
+
+% Section fonts (12pt).
+\def\secnominalsize{12pt}
+\setfont\secrm\rmbshape{12}{1000}
+\setfont\secit\itbshape{10}{\magstep1}
+\setfont\secsl\slbshape{10}{\magstep1}
+\setfont\sectt\ttbshape{12}{1000}
+\setfont\secttsl\ttslshape{10}{\magstep1}
+\setfont\secsf\sfbshape{12}{1000}
+\let\secbf\secrm
+\setfont\secsc\scbshape{10}{\magstep1}
+\font\seci=cmmi12
+\font\secsy=cmsy10 scaled \magstep1
+
+% Subsection fonts (10pt).
+\def\ssecnominalsize{10pt}
+\setfont\ssecrm\rmbshape{10}{1000}
+\setfont\ssecit\itbshape{10}{1000}
+\setfont\ssecsl\slbshape{10}{1000}
+\setfont\ssectt\ttbshape{10}{1000}
+\setfont\ssecttsl\ttslshape{10}{1000}
+\setfont\ssecsf\sfbshape{10}{1000}
+\let\ssecbf\ssecrm
+\setfont\ssecsc\scbshape{10}{1000}
+\font\sseci=cmmi10
+\font\ssecsy=cmsy10
+
+% Reduced fonts for @acro in text (9pt).
+\def\reducednominalsize{9pt}
+\setfont\reducedrm\rmshape{9}{1000}
+\setfont\reducedtt\ttshape{9}{1000}
+\setfont\reducedbf\bfshape{10}{900}
+\setfont\reducedit\itshape{9}{1000}
+\setfont\reducedsl\slshape{9}{1000}
+\setfont\reducedsf\sfshape{9}{1000}
+\setfont\reducedsc\scshape{10}{900}
+\setfont\reducedttsl\ttslshape{10}{900}
+\font\reducedi=cmmi9
+\font\reducedsy=cmsy9
+
+% reduce space between paragraphs
+\divide\parskip by 2
+
+% reset the current fonts
+\textfonts
+\rm
+} % end of 10pt text font size definitions
+
+
+% We provide the user-level command
+% @fonttextsize 10
+% (or 11) to redefine the text font size. pt is assumed.
+%
+\def\xword{10}
+\def\xiword{11}
+%
+\parseargdef\fonttextsize{%
+ \def\textsizearg{#1}%
+ \wlog{doing @fonttextsize \textsizearg}%
+ %
+ % Set \globaldefs so that documents can use this inside @tex, since
+ % makeinfo 4.8 does not support it, but we need it nonetheless.
+ %
+ \begingroup \globaldefs=1
+ \ifx\textsizearg\xword \definetextfontsizex
+ \else \ifx\textsizearg\xiword \definetextfontsizexi
+ \else
+ \errhelp=\EMsimple
+ \errmessage{@fonttextsize only supports `10' or `11', not `\textsizearg'}
+ \fi\fi
+ \endgroup
+}
+
+
+% In order for the font changes to affect most math symbols and letters,
+% we have to define the \textfont of the standard families. Since
+% texinfo doesn't allow for producing subscripts and superscripts except
+% in the main text, we don't bother to reset \scriptfont and
+% \scriptscriptfont (which would also require loading a lot more fonts).
+%
+\def\resetmathfonts{%
+ \textfont0=\tenrm \textfont1=\teni \textfont2=\tensy
+ \textfont\itfam=\tenit \textfont\slfam=\tensl \textfont\bffam=\tenbf
+ \textfont\ttfam=\tentt \textfont\sffam=\tensf
+}
+
+% The font-changing commands redefine the meanings of \tenSTYLE, instead
+% of just \STYLE. We do this because \STYLE needs to also set the
+% current \fam for math mode. Our \STYLE (e.g., \rm) commands hardwire
+% \tenSTYLE to set the current font.
+%
+% Each font-changing command also sets the names \lsize (one size lower)
+% and \lllsize (three sizes lower). These relative commands are used in
+% the LaTeX logo and acronyms.
+%
+% This all needs generalizing, badly.
+%
+\def\textfonts{%
+ \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl
+ \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc
+ \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy
+ \let\tenttsl=\textttsl
+ \def\curfontsize{text}%
+ \def\lsize{reduced}\def\lllsize{smaller}%
+ \resetmathfonts \setleading{\textleading}}
+\def\titlefonts{%
+ \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl
+ \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc
+ \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy
+ \let\tenttsl=\titlettsl
+ \def\curfontsize{title}%
+ \def\lsize{chap}\def\lllsize{subsec}%
+ \resetmathfonts \setleading{25pt}}
+\def\titlefont#1{{\titlefonts\rm #1}}
+\def\chapfonts{%
+ \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl
+ \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc
+ \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy
+ \let\tenttsl=\chapttsl
+ \def\curfontsize{chap}%
+ \def\lsize{sec}\def\lllsize{text}%
+ \resetmathfonts \setleading{19pt}}
+\def\secfonts{%
+ \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl
+ \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc
+ \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy
+ \let\tenttsl=\secttsl
+ \def\curfontsize{sec}%
+ \def\lsize{subsec}\def\lllsize{reduced}%
+ \resetmathfonts \setleading{16pt}}
+\def\subsecfonts{%
+ \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl
+ \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc
+ \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy
+ \let\tenttsl=\ssecttsl
+ \def\curfontsize{ssec}%
+ \def\lsize{text}\def\lllsize{small}%
+ \resetmathfonts \setleading{15pt}}
+\let\subsubsecfonts = \subsecfonts
+\def\reducedfonts{%
+ \let\tenrm=\reducedrm \let\tenit=\reducedit \let\tensl=\reducedsl
+ \let\tenbf=\reducedbf \let\tentt=\reducedtt \let\reducedcaps=\reducedsc
+ \let\tensf=\reducedsf \let\teni=\reducedi \let\tensy=\reducedsy
+ \let\tenttsl=\reducedttsl
+ \def\curfontsize{reduced}%
+ \def\lsize{small}\def\lllsize{smaller}%
+ \resetmathfonts \setleading{10.5pt}}
+\def\smallfonts{%
+ \let\tenrm=\smallrm \let\tenit=\smallit \let\tensl=\smallsl
+ \let\tenbf=\smallbf \let\tentt=\smalltt \let\smallcaps=\smallsc
+ \let\tensf=\smallsf \let\teni=\smalli \let\tensy=\smallsy
+ \let\tenttsl=\smallttsl
+ \def\curfontsize{small}%
+ \def\lsize{smaller}\def\lllsize{smaller}%
+ \resetmathfonts \setleading{10.5pt}}
+\def\smallerfonts{%
+ \let\tenrm=\smallerrm \let\tenit=\smallerit \let\tensl=\smallersl
+ \let\tenbf=\smallerbf \let\tentt=\smallertt \let\smallcaps=\smallersc
+ \let\tensf=\smallersf \let\teni=\smalleri \let\tensy=\smallersy
+ \let\tenttsl=\smallerttsl
+ \def\curfontsize{smaller}%
+ \def\lsize{smaller}\def\lllsize{smaller}%
+ \resetmathfonts \setleading{9.5pt}}
+
+% Set the fonts to use with the @small... environments.
+\let\smallexamplefonts = \smallfonts
+
+% About \smallexamplefonts. If we use \smallfonts (9pt), @smallexample
+% can fit this many characters:
+% 8.5x11=86 smallbook=72 a4=90 a5=69
+% If we use \scriptfonts (8pt), then we can fit this many characters:
+% 8.5x11=90+ smallbook=80 a4=90+ a5=77
+% For me, subjectively, the few extra characters that fit aren't worth
+% the additional smallness of 8pt. So I'm making the default 9pt.
+%
+% By the way, for comparison, here's what fits with @example (10pt):
+% 8.5x11=71 smallbook=60 a4=75 a5=58
+%
+% I wish the USA used A4 paper.
+% --karl, 24jan03.
+
+
+% Set up the default fonts, so we can use them for creating boxes.
+%
+\definetextfontsizexi
+
+% Define these so they can be easily changed for other fonts.
+\def\angleleft{$\langle$}
+\def\angleright{$\rangle$}
+
+% Count depth in font-changes, for error checks
+\newcount\fontdepth \fontdepth=0
+
+% Fonts for short table of contents.
+\setfont\shortcontrm\rmshape{12}{1000}
+\setfont\shortcontbf\bfshape{10}{\magstep1} % no cmb12
+\setfont\shortcontsl\slshape{12}{1000}
+\setfont\shortconttt\ttshape{12}{1000}
+
+%% Add scribe-like font environments, plus @l for inline lisp (usually sans
+%% serif) and @ii for TeX italic
+
+% \smartitalic{ARG} outputs arg in italics, followed by an italic correction
+% unless the following character is such as not to need one.
+\def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else
+ \ptexslash\fi\fi\fi}
+\def\smartslanted#1{{\ifusingtt\ttsl\sl #1}\futurelet\next\smartitalicx}
+\def\smartitalic#1{{\ifusingtt\ttsl\it #1}\futurelet\next\smartitalicx}
+
+% like \smartslanted except unconditionally uses \ttsl.
+% @var is set to this for defun arguments.
+\def\ttslanted#1{{\ttsl #1}\futurelet\next\smartitalicx}
+
+% like \smartslanted except unconditionally use \sl. We never want
+% ttsl for book titles, do we?
+\def\cite#1{{\sl #1}\futurelet\next\smartitalicx}
+
+\let\i=\smartitalic
+\let\slanted=\smartslanted
+\let\var=\smartslanted
+\let\dfn=\smartslanted
+\let\emph=\smartitalic
+
+% @b, explicit bold.
+\def\b#1{{\bf #1}}
+\let\strong=\b
+
+% @sansserif, explicit sans.
+\def\sansserif#1{{\sf #1}}
+
+% We can't just use \exhyphenpenalty, because that only has effect at
+% the end of a paragraph. Restore normal hyphenation at the end of the
+% group within which \nohyphenation is presumably called.
+%
+\def\nohyphenation{\hyphenchar\font = -1 \aftergroup\restorehyphenation}
+\def\restorehyphenation{\hyphenchar\font = `- }
+
+% Set sfcode to normal for the chars that usually have another value.
+% Can't use plain's \frenchspacing because it uses the `\x notation, and
+% sometimes \x has an active definition that messes things up.
+%
+\catcode`@=11
+ \def\plainfrenchspacing{%
+ \sfcode\dotChar =\@m \sfcode\questChar=\@m \sfcode\exclamChar=\@m
+ \sfcode\colonChar=\@m \sfcode\semiChar =\@m \sfcode\commaChar =\@m
+ \def\endofsentencespacefactor{1000}% for @. and friends
+ }
+ \def\plainnonfrenchspacing{%
+ \sfcode`\.3000\sfcode`\?3000\sfcode`\!3000
+ \sfcode`\:2000\sfcode`\;1500\sfcode`\,1250
+ \def\endofsentencespacefactor{3000}% for @. and friends
+ }
+\catcode`@=\other
+\def\endofsentencespacefactor{3000}% default
+
+\def\t#1{%
+ {\tt \rawbackslash \plainfrenchspacing #1}%
+ \null
+}
+\def\samp#1{`\tclose{#1}'\null}
+\setfont\keyrm\rmshape{8}{1000}
+\font\keysy=cmsy9
+\def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{%
+ \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{%
+ \vbox{\hrule\kern-0.4pt
+ \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}%
+ \kern-0.4pt\hrule}%
+ \kern-.06em\raise0.4pt\hbox{\angleright}}}}
+% The old definition, with no lozenge:
+%\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null}
+\def\ctrl #1{{\tt \rawbackslash \hat}#1}
+
+% @file, @option are the same as @samp.
+\let\file=\samp
+\let\option=\samp
+
+% @code is a modification of @t,
+% which makes spaces the same size as normal in the surrounding text.
+\def\tclose#1{%
+ {%
+ % Change normal interword space to be same as for the current font.
+ \spaceskip = \fontdimen2\font
+ %
+ % Switch to typewriter.
+ \tt
+ %
+ % But `\ ' produces the large typewriter interword space.
+ \def\ {{\spaceskip = 0pt{} }}%
+ %
+ % Turn off hyphenation.
+ \nohyphenation
+ %
+ \rawbackslash
+ \plainfrenchspacing
+ #1%
+ }%
+ \null
+}
+
+% We *must* turn on hyphenation at `-' and `_' in @code.
+% Otherwise, it is too hard to avoid overfull hboxes
+% in the Emacs manual, the Library manual, etc.
+
+% Unfortunately, TeX uses one parameter (\hyphenchar) to control
+% both hyphenation at - and hyphenation within words.
+% We must therefore turn them both off (\tclose does that)
+% and arrange explicitly to hyphenate at a dash.
+% -- rms.
+{
+ \catcode`\-=\active \catcode`\_=\active
+ \catcode`\'=\active \catcode`\`=\active
+ %
+ \global\def\code{\begingroup
+ \catcode\rquoteChar=\active \catcode\lquoteChar=\active
+ \let'\codequoteright \let`\codequoteleft
+ %
+ \catcode\dashChar=\active \catcode\underChar=\active
+ \ifallowcodebreaks
+ \let-\codedash
+ \let_\codeunder
+ \else
+ \let-\realdash
+ \let_\realunder
+ \fi
+ \codex
+ }
+}
+
+\def\realdash{-}
+\def\codedash{-\discretionary{}{}{}}
+\def\codeunder{%
+ % this is all so @math{@code{var_name}+1} can work. In math mode, _
+ % is "active" (mathcode"8000) and \normalunderscore (or \char95, etc.)
+ % will therefore expand the active definition of _, which is us
+ % (inside @code that is), therefore an endless loop.
+ \ifusingtt{\ifmmode
+ \mathchar"075F % class 0=ordinary, family 7=ttfam, pos 0x5F=_.
+ \else\normalunderscore \fi
+ \discretionary{}{}{}}%
+ {\_}%
+}
+\def\codex #1{\tclose{#1}\endgroup}
+
+% An additional complication: the above will allow breaks after, e.g.,
+% each of the four underscores in __typeof__. This is undesirable in
+% some manuals, especially if they don't have long identifiers in
+% general. @allowcodebreaks provides a way to control this.
+%
+\newif\ifallowcodebreaks \allowcodebreakstrue
+
+\def\keywordtrue{true}
+\def\keywordfalse{false}
+
+\parseargdef\allowcodebreaks{%
+ \def\txiarg{#1}%
+ \ifx\txiarg\keywordtrue
+ \allowcodebreakstrue
+ \else\ifx\txiarg\keywordfalse
+ \allowcodebreaksfalse
+ \else
+ \errhelp = \EMsimple
+ \errmessage{Unknown @allowcodebreaks option `\txiarg'}%
+ \fi\fi
+}
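+
+% Usage sketch (not a definition, just how a manual would invoke it):
+% a manual that prefers not to break long identifiers such as
+% __typeof__ can say
+%   @allowcodebreaks false
+% in its Texinfo source; the default remains true.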
+
+% @kbd is like @code, except that if the argument is just one @key command,
+% then @kbd has no effect.
+
+% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always),
+% `example' (@kbd uses ttsl only inside of @example and friends),
+% or `code' (@kbd uses normal tty font always).
+\parseargdef\kbdinputstyle{%
+ \def\txiarg{#1}%
+ \ifx\txiarg\worddistinct
+ \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}%
+ \else\ifx\txiarg\wordexample
+ \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}%
+ \else\ifx\txiarg\wordcode
+ \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}%
+ \else
+ \errhelp = \EMsimple
+ \errmessage{Unknown @kbdinputstyle option `\txiarg'}%
+ \fi\fi\fi
+}
+\def\worddistinct{distinct}
+\def\wordexample{example}
+\def\wordcode{code}
+
+% Default is `distinct.'
+\kbdinputstyle distinct
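+
+% Usage sketch: a manual can switch styles with a line such as
+%   @kbdinputstyle example
+% which, per the definitions above, makes @kbd use the slanted
+% typewriter font only inside @example and friends.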
+
+\def\xkey{\key}
+\def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}%
+\ifx\one\xkey\ifx\threex\three \key{#2}%
+\else{\tclose{\kbdfont\look}}\fi
+\else{\tclose{\kbdfont\look}}\fi}
+
+% For @indicateurl, @env, @command quotes seem unnecessary, so use \code.
+\let\indicateurl=\code
+\let\env=\code
+\let\command=\code
+
+% @uref (abbreviation for `urlref') takes an optional (comma-separated)
+% second argument specifying the text to display and an optional third
+% arg as text to display instead of (rather than in addition to) the url
+% itself. First (mandatory) arg is the url. Perhaps eventually put in
+% a hypertex \special here.
+%
+\def\uref#1{\douref #1,,,\finish}
+\def\douref#1,#2,#3,#4\finish{\begingroup
+ \unsepspaces
+ \pdfurl{#1}%
+ \setbox0 = \hbox{\ignorespaces #3}%
+ \ifdim\wd0 > 0pt
+ \unhbox0 % third arg given, show only that
+ \else
+ \setbox0 = \hbox{\ignorespaces #2}%
+ \ifdim\wd0 > 0pt
+ \ifpdf
+ \unhbox0 % PDF: 2nd arg given, show only it
+ \else
+ \unhbox0\ (\code{#1})% DVI: 2nd arg given, show both it and url
+ \fi
+ \else
+ \code{#1}% only url given, so show it
+ \fi
+ \fi
+ \endlink
+\endgroup}
+
+% @url synonym for @uref, since that's how everyone uses it.
+%
+\let\url=\uref
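+
+% Usage sketch (the URL is just an illustration):
+%   @uref{http://www.gnu.org/}                     just the url
+%   @uref{http://www.gnu.org/, the GNU web site}   the text (plus the url in DVI)
+%   @uref{http://www.gnu.org/,, the GNU web site}  only the text, never the url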
+
+% rms does not like angle brackets --karl, 17may97.
+% So now @email is just like @uref, unless we are pdf.
+%
+%\def\email#1{\angleleft{\tt #1}\angleright}
+\ifpdf
+ \def\email#1{\doemail#1,,\finish}
+ \def\doemail#1,#2,#3\finish{\begingroup
+ \unsepspaces
+ \pdfurl{mailto:#1}%
+ \setbox0 = \hbox{\ignorespaces #2}%
+ \ifdim\wd0>0pt\unhbox0\else\code{#1}\fi
+ \endlink
+ \endgroup}
+\else
+ \let\email=\uref
+\fi
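+
+% Usage sketch (the address is made up; note the doubled @@ in Texinfo source):
+%   @email{bug-example@@example.org, the maintainer}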
+
+% Check if we are currently using a typewriter font. Since all the
+% Computer Modern typewriter fonts have zero interword stretch (and
+% shrink), and it is reasonable to expect all typewriter fonts to have
+% this property, we can check that font parameter.
+%
+\def\ifmonospace{\ifdim\fontdimen3\font=0pt }
+
+% Typeset a dimension, e.g., `in' or `pt'. The only reason for the
+% argument is to make the input look right: @dmn{pt} instead of @dmn{}pt.
+%
+\def\dmn#1{\thinspace #1}
+
+\def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par}
+
+% @l was never documented to mean ``switch to the Lisp font'',
+% and it is not used as such in any manual I can find. We need it for
+% Polish suppressed-l. --karl, 22sep96.
+%\def\l#1{{\li #1}\null}
+
+% Explicit font changes: @r, @sc, undocumented @ii.
+\def\r#1{{\rm #1}} % roman font
+\def\sc#1{{\smallcaps#1}} % smallcaps font
+\def\ii#1{{\it #1}} % italic font
+
+% @acronym for "FBI", "NATO", and the like.
+% We print this one point size smaller, since it's intended for
+% all-uppercase.
+%
+\def\acronym#1{\doacronym #1,,\finish}
+\def\doacronym#1,#2,#3\finish{%
+ {\selectfonts\lsize #1}%
+ \def\temp{#2}%
+ \ifx\temp\empty \else
+ \space ({\unsepspaces \ignorespaces \temp \unskip})%
+ \fi
+}
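+
+% Usage sketch:
+%   @acronym{NATO}
+%   @acronym{NATO, North Atlantic Treaty Organization}
+% The optional second argument is printed in parentheses after the acronym.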
+
+% @abbr for "Comput. J." and the like.
+% No font change, but don't do end-of-sentence spacing.
+%
+\def\abbr#1{\doabbr #1,,\finish}
+\def\doabbr#1,#2,#3\finish{%
+ {\plainfrenchspacing #1}%
+ \def\temp{#2}%
+ \ifx\temp\empty \else
+ \space ({\unsepspaces \ignorespaces \temp \unskip})%
+ \fi
+}
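+
+% Usage sketch:
+%   @abbr{Comput. J., Computer Journal}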
+
+% @pounds{} is a sterling sign, which Knuth put in the CM italic font.
+%
+\def\pounds{{\it\$}}
+
+% @euro{} comes from a separate font, depending on the current style.
+% We use the free feym* fonts from the eurosym package by Henrik
+% Theiling, which support regular, slanted, bold and bold slanted (and
+% "outlined" (blackboard board, sort of) versions, which we don't need).
+% It is available from http://www.ctan.org/tex-archive/fonts/eurosym.
+%
+% Although only regular is the truly official Euro symbol, we ignore
+% that. The Euro is designed to be slightly taller than the regular
+% font height.
+%
+% feymr - regular
+% feymo - slanted
+% feybr - bold
+% feybo - bold slanted
+%
+% There is no good (free) typewriter version, to my knowledge.
+% A feymr10 euro is ~7.3pt wide, while a normal cmtt10 char is ~5.25pt wide.
+% Hmm.
+%
+% Also doesn't work in math. Do we need to do math with euro symbols?
+% Hope not.
+%
+%
+\def\euro{{\eurofont e}}
+\def\eurofont{%
+ % We set the font at each command, rather than predefining it in
+ % \textfonts and the other font-switching commands, so that
+ % installations which never need the symbol don't have to have the
+ % font installed.
+ %
+ % There is only one designed size (nominal 10pt), so we always scale
+ % that to the current nominal size.
+ %
+ % By the way, simply using "at 1em" works for cmr10 and the like, but
+ % does not work for cmbx10 and other extended/shrunken fonts.
+ %
+ \def\eurosize{\csname\curfontsize nominalsize\endcsname}%
+ %
+ \ifx\curfontstyle\bfstylename
+ % bold:
+ \font\thiseurofont = \ifusingit{feybo10}{feybr10} at \eurosize
+ \else
+ % regular:
+ \font\thiseurofont = \ifusingit{feymo10}{feymr10} at \eurosize
+ \fi
+ \thiseurofont
+}
+
+% @registeredsymbol - R in a circle. The font for the R should really
+% be smaller yet, but lllsize is the best we can do for now.
+% Adapted from the plain.tex definition of \copyright.
+%
+\def\registeredsymbol{%
+ $^{{\ooalign{\hfil\raise.07ex\hbox{\selectfonts\lllsize R}%
+ \hfil\crcr\Orb}}%
+ }$%
+}
+
+% @textdegree - the normal degrees sign.
+%
+\def\textdegree{$^\circ$}
+
+% Laurent Siebenmann reports \Orb undefined with:
+% Textures 1.7.7 (preloaded format=plain 93.10.14) (68K) 16 APR 2004 02:38
+% so we'll define it if necessary.
+%
+\ifx\Orb\undefined
+\def\Orb{\mathhexbox20D}
+\fi
+
+
+\message{page headings,}
+
+\newskip\titlepagetopglue \titlepagetopglue = 1.5in
+\newskip\titlepagebottomglue \titlepagebottomglue = 2pc
+
+% First the title page. Must do @settitle before @titlepage.
+\newif\ifseenauthor
+\newif\iffinishedtitlepage
+
+% Do an implicit @contents or @shortcontents after @end titlepage if the
+% user says @setcontentsaftertitlepage or @setshortcontentsaftertitlepage.
+%
+\newif\ifsetcontentsaftertitlepage
+ \let\setcontentsaftertitlepage = \setcontentsaftertitlepagetrue
+\newif\ifsetshortcontentsaftertitlepage
+ \let\setshortcontentsaftertitlepage = \setshortcontentsaftertitlepagetrue
+
+\parseargdef\shorttitlepage{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}%
+ \endgroup\page\hbox{}\page}
+
+\envdef\titlepage{%
+ % Open one extra group, as we want to close it in the middle of \Etitlepage.
+ \begingroup
+ \parindent=0pt \textfonts
+ % Leave some space at the very top of the page.
+ \vglue\titlepagetopglue
+ % No rule at page bottom unless we print one at the top with @title.
+ \finishedtitlepagetrue
+ %
+ % Most title ``pages'' are actually two pages long, with space
+ % at the top of the second. We don't want the ragged left on the second.
+ \let\oldpage = \page
+ \def\page{%
+ \iffinishedtitlepage\else
+ \finishtitlepage
+ \fi
+ \let\page = \oldpage
+ \page
+ \null
+ }%
+}
+
+\def\Etitlepage{%
+ \iffinishedtitlepage\else
+ \finishtitlepage
+ \fi
+ % It is important to do the page break before ending the group,
+ % because the headline and footline are only empty inside the group.
+ % If we use the new definition of \page, we always get a blank page
+ % after the title page, which we certainly don't want.
+ \oldpage
+ \endgroup
+ %
+ % Need this before the \...aftertitlepage checks so that if they are
+ % in effect the toc pages will come out with page numbers.
+ \HEADINGSon
+ %
+ % If they want short, they certainly want long too.
+ \ifsetshortcontentsaftertitlepage
+ \shortcontents
+ \contents
+ \global\let\shortcontents = \relax
+ \global\let\contents = \relax
+ \fi
+ %
+ \ifsetcontentsaftertitlepage
+ \contents
+ \global\let\contents = \relax
+ \global\let\shortcontents = \relax
+ \fi
+}
+
+\def\finishtitlepage{%
+ \vskip4pt \hrule height 2pt width \hsize
+ \vskip\titlepagebottomglue
+ \finishedtitlepagetrue
+}
+
+%%% Macros to be used within @titlepage:
+
+\let\subtitlerm=\tenrm
+\def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines}
+
+\def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines
+ \let\tt=\authortt}
+
+\parseargdef\title{%
+ \checkenv\titlepage
+ \leftline{\titlefonts\rm #1}
+ % print a rule at the page bottom also.
+ \finishedtitlepagefalse
+ \vskip4pt \hrule height 4pt width \hsize \vskip4pt
+}
+
+\parseargdef\subtitle{%
+ \checkenv\titlepage
+ {\subtitlefont \rightline{#1}}%
+}
+
+% @author should come last, but may come many times.
+% It can also be used inside @quotation.
+%
+\parseargdef\author{%
+ \def\temp{\quotation}%
+ \ifx\thisenv\temp
+ \def\quotationauthor{#1}% printed in \Equotation.
+ \else
+ \checkenv\titlepage
+ \ifseenauthor\else \vskip 0pt plus 1filll \seenauthortrue \fi
+ {\authorfont \leftline{#1}}%
+ \fi
+}
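+
+% For reference, a typical title page in a Texinfo source looks roughly
+% like this (the title and name here are made up):
+%   @titlepage
+%   @title Example Manual
+%   @subtitle Edition 1.0
+%   @author A. U. Thor
+%   @end titlepage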
+
+
+%%% Set up page headings and footings.
+
+\let\thispage=\folio
+
+\newtoks\evenheadline % headline on even pages
+\newtoks\oddheadline % headline on odd pages
+\newtoks\evenfootline % footline on even pages
+\newtoks\oddfootline % footline on odd pages
+
+% Now make TeX use those variables
+\headline={{\textfonts\rm \ifodd\pageno \the\oddheadline
+ \else \the\evenheadline \fi}}
+\footline={{\textfonts\rm \ifodd\pageno \the\oddfootline
+ \else \the\evenfootline \fi}\HEADINGShook}
+\let\HEADINGShook=\relax
+
+% Commands to set those variables.
+% For example, this is what @headings on does
+% @evenheading @thistitle|@thispage|@thischapter
+% @oddheading @thischapter|@thispage|@thistitle
+% @evenfooting @thisfile||
+% @oddfooting ||@thisfile
+
+
+\def\evenheading{\parsearg\evenheadingxxx}
+\def\evenheadingxxx #1{\evenheadingyyy #1\|\|\|\|\finish}
+\def\evenheadingyyy #1\|#2\|#3\|#4\finish{%
+\global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\def\oddheading{\parsearg\oddheadingxxx}
+\def\oddheadingxxx #1{\oddheadingyyy #1\|\|\|\|\finish}
+\def\oddheadingyyy #1\|#2\|#3\|#4\finish{%
+\global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\parseargdef\everyheading{\oddheadingxxx{#1}\evenheadingxxx{#1}}%
+
+\def\evenfooting{\parsearg\evenfootingxxx}
+\def\evenfootingxxx #1{\evenfootingyyy #1\|\|\|\|\finish}
+\def\evenfootingyyy #1\|#2\|#3\|#4\finish{%
+\global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}
+
+\def\oddfooting{\parsearg\oddfootingxxx}
+\def\oddfootingxxx #1{\oddfootingyyy #1\|\|\|\|\finish}
+\def\oddfootingyyy #1\|#2\|#3\|#4\finish{%
+ \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}%
+ %
+ % Leave some space for the footline. Hopefully ok to assume
+ % @evenfooting will not be used by itself.
+ \global\advance\pageheight by -12pt
+ \global\advance\vsize by -12pt
+}
+
+\parseargdef\everyfooting{\oddfootingxxx{#1}\evenfootingxxx{#1}}
+
+
+% @headings double turns headings on for double-sided printing.
+% @headings single turns headings on for single-sided printing.
+% @headings off turns them off.
+% @headings on same as @headings double, retained for compatibility.
+% @headings after turns on double-sided headings after this page.
+% @headings doubleafter turns on double-sided headings after this page.
+% @headings singleafter turns on single-sided headings after this page.
+% By default, they are off at the start of a document,
+% and turned `on' after @end titlepage.
+
+\def\headings #1 {\csname HEADINGS#1\endcsname}
+
+\def\HEADINGSoff{%
+\global\evenheadline={\hfil} \global\evenfootline={\hfil}
+\global\oddheadline={\hfil} \global\oddfootline={\hfil}}
+\HEADINGSoff
+% When we turn headings on, set the page number to 1.
+% For double-sided printing, put current file name in lower left corner,
+% chapter name on inside top of right hand pages, document
+% title on inside top of left hand pages, and page numbers on outside top
+% edge of all pages.
+\def\HEADINGSdouble{%
+\global\pageno=1
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\folio\hfil\thistitle}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chapoddpage
+}
+\let\contentsalignmacro = \chappager
+
+% For single-sided printing, chapter title goes across top left of page,
+% page number on top right.
+\def\HEADINGSsingle{%
+\global\pageno=1
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\thischapter\hfil\folio}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chappager
+}
+\def\HEADINGSon{\HEADINGSdouble}
+
+\def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex}
+\let\HEADINGSdoubleafter=\HEADINGSafter
+\def\HEADINGSdoublex{%
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\folio\hfil\thistitle}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chapoddpage
+}
+
+\def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex}
+\def\HEADINGSsinglex{%
+\global\evenfootline={\hfil}
+\global\oddfootline={\hfil}
+\global\evenheadline={\line{\thischapter\hfil\folio}}
+\global\oddheadline={\line{\thischapter\hfil\folio}}
+\global\let\contentsalignmacro = \chappager
+}
+
+% Subroutines used in generating headings
+% This produces Day Month Year style of output.
+% Only define if not already defined, in case a txi-??.tex file has set
+% up a different format (e.g., txi-cs.tex does this).
+\ifx\today\undefined
+\def\today{%
+ \number\day\space
+ \ifcase\month
+ \or\putwordMJan\or\putwordMFeb\or\putwordMMar\or\putwordMApr
+ \or\putwordMMay\or\putwordMJun\or\putwordMJul\or\putwordMAug
+ \or\putwordMSep\or\putwordMOct\or\putwordMNov\or\putwordMDec
+ \fi
+ \space\number\year}
+\fi
+
+% @settitle line... specifies the title of the document, for headings.
+% It generates no output of its own.
+\def\thistitle{\putwordNoTitle}
+\def\settitle{\parsearg{\gdef\thistitle}}
+
+
+\message{tables,}
+% Tables -- @table, @ftable, @vtable, @item(x).
+
+% default indentation of table text
+\newdimen\tableindent \tableindent=.8in
+% default indentation of @itemize and @enumerate text
+\newdimen\itemindent \itemindent=.3in
+% margin between end of table item and start of table text.
+\newdimen\itemmargin \itemmargin=.1in
+
+% used internally for \itemindent minus \itemmargin
+\newdimen\itemmax
+
+% Note @table, @ftable, and @vtable define @item, @itemx, etc., with
+% these defs.
+% They also define \itemindex
+% to index the item name in whatever manner is desired (perhaps none).
+
+\newif\ifitemxneedsnegativevskip
+
+\def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi}
+
+\def\internalBitem{\smallbreak \parsearg\itemzzz}
+\def\internalBitemx{\itemxpar \parsearg\itemzzz}
+
+\def\itemzzz #1{\begingroup %
+ \advance\hsize by -\rightskip
+ \advance\hsize by -\tableindent
+ \setbox0=\hbox{\itemindicate{#1}}%
+ \itemindex{#1}%
+ \nobreak % This prevents a break before @itemx.
+ %
+ % If the item text does not fit in the space we have, put it on a line
+ % by itself, and do not allow a page break either before or after that
+ % line. We do not start a paragraph here because then if the next
+ % command is, e.g., @kindex, the whatsit would get put into the
+ % horizontal list on a line by itself, resulting in extra blank space.
+ \ifdim \wd0>\itemmax
+ %
+ % Make this a paragraph so we get the \parskip glue and wrapping,
+ % but leave it ragged-right.
+ \begingroup
+ \advance\leftskip by-\tableindent
+ \advance\hsize by\tableindent
+ \advance\rightskip by0pt plus1fil
+ \leavevmode\unhbox0\par
+ \endgroup
+ %
+ % We're going to be starting a paragraph, but we don't want the
+ % \parskip glue -- logically it's part of the @item we just started.
+ \nobreak \vskip-\parskip
+ %
+ % Stop a page break at the \parskip glue coming up. However, if
+ % what follows is an environment such as @example, there will be no
+ % \parskip glue; then the negative vskip we just inserted would
+ % cause the example and the item to crash together. So we use this
+ % bizarre value of 10001 as a signal to \aboveenvbreak to insert
+ % \parskip glue after all. Section titles are handled this way also.
+ %
+ \penalty 10001
+ \endgroup
+ \itemxneedsnegativevskipfalse
+ \else
+ % The item text fits into the space. Start a paragraph, so that the
+ % following text (if any) will end up on the same line.
+ \noindent
+ % Do this with kerns and \unhbox so that if there is a footnote in
+ % the item text, it can migrate to the main vertical list and
+ % eventually be printed.
+ \nobreak\kern-\tableindent
+ \dimen0 = \itemmax \advance\dimen0 by \itemmargin \advance\dimen0 by -\wd0
+ \unhbox0
+ \nobreak\kern\dimen0
+ \endgroup
+ \itemxneedsnegativevskiptrue
+ \fi
+}
+
+\def\item{\errmessage{@item while not in a list environment}}
+\def\itemx{\errmessage{@itemx while not in a list environment}}
+
+% @table, @ftable, @vtable.
+\envdef\table{%
+ \let\itemindex\gobble
+ \tablecheck{table}%
+}
+\envdef\ftable{%
+ \def\itemindex ##1{\doind {fn}{\code{##1}}}%
+ \tablecheck{ftable}%
+}
+\envdef\vtable{%
+ \def\itemindex ##1{\doind {vr}{\code{##1}}}%
+ \tablecheck{vtable}%
+}
+\def\tablecheck#1{%
+ \ifnum \the\catcode`\^^M=\active
+ \endgroup
+ \errmessage{This command won't work in this context; perhaps the problem is
+ that we are \inenvironment\thisenv}%
+ \def\next{\doignore{#1}}%
+ \else
+ \let\next\tablex
+ \fi
+ \next
+}
+\def\tablex#1{%
+ \def\itemindicate{#1}%
+ \parsearg\tabley
+}
+\def\tabley#1{%
+ {%
+ \makevalueexpandable
+ \edef\temp{\noexpand\tablez #1\space\space\space}%
+ \expandafter
+ }\temp \endtablez
+}
+\def\tablez #1 #2 #3 #4\endtablez{%
+ \aboveenvbreak
+ \ifnum 0#1>0 \advance \leftskip by #1\mil \fi
+ \ifnum 0#2>0 \tableindent=#2\mil \fi
+ \ifnum 0#3>0 \advance \rightskip by #3\mil \fi
+ \itemmax=\tableindent
+ \advance \itemmax by -\itemmargin
+ \advance \leftskip by \tableindent
+ \exdentamount=\tableindent
+ \parindent = 0pt
+ \parskip = \smallskipamount
+ \ifdim \parskip=0pt \parskip=2pt \fi
+ \let\item = \internalBitem
+ \let\itemx = \internalBitemx
+}
+\def\Etable{\endgraf\afterenvbreak}
+\let\Eftable\Etable
+\let\Evtable\Etable
+\let\Eitemize\Etable
+\let\Eenumerate\Etable
+
+% This is the counter used by @enumerate, which is really @itemize
+
+\newcount \itemno
+
+\envdef\itemize{\parsearg\doitemize}
+
+\def\doitemize#1{%
+ \aboveenvbreak
+ \itemmax=\itemindent
+ \advance\itemmax by -\itemmargin
+ \advance\leftskip by \itemindent
+ \exdentamount=\itemindent
+ \parindent=0pt
+ \parskip=\smallskipamount
+ \ifdim\parskip=0pt \parskip=2pt \fi
+ \def\itemcontents{#1}%
+ % @itemize with no arg is equivalent to @itemize @bullet.
+ \ifx\itemcontents\empty\def\itemcontents{\bullet}\fi
+ \let\item=\itemizeitem
+}
+
+% Definition of @item while inside @itemize and @enumerate.
+%
+\def\itemizeitem{%
+ \advance\itemno by 1 % for enumerations
+ {\let\par=\endgraf \smallbreak}% reasonable place to break
+ {%
+ % If the document has an @itemize directly after a section title, a
+ % \nobreak will be last on the list, and \sectionheading will have
+ % done a \vskip-\parskip. In that case, we don't want to zero
+ % parskip, or the item text will crash with the heading. On the
+ % other hand, when there is normal text preceding the item (as there
+ % usually is), we do want to zero parskip, or there would be too much
+ % space. In that case, we won't have a \nobreak before. At least
+ % that's the theory.
+ \ifnum\lastpenalty<10000 \parskip=0in \fi
+ \noindent
+ \hbox to 0pt{\hss \itemcontents \kern\itemmargin}%
+ \vadjust{\penalty 1200}}% not good to break after first line of item.
+ \flushcr
+}
+
+% \splitoff TOKENS\endmark defines \first to be the first token in
+% TOKENS, and \rest to be the remainder.
+%
+\def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}%
+
+% Allow an optional argument of an uppercase letter, lowercase letter,
+% or number, to specify the first label in the enumerated list. No
+% argument is the same as `1'.
+%
+\envparseargdef\enumerate{\enumeratey #1 \endenumeratey}
+\def\enumeratey #1 #2\endenumeratey{%
+ % If we were given no argument, pretend we were given `1'.
+ \def\thearg{#1}%
+ \ifx\thearg\empty \def\thearg{1}\fi
+ %
+ % Detect if the argument is a single token. If so, it might be a
+ % letter. Otherwise, the only valid thing it can be is a number.
+ % (We will always have one token, because of the test we just made.
+ % This is a good thing, since \splitoff doesn't work given nothing at
+ % all -- the first parameter is undelimited.)
+ \expandafter\splitoff\thearg\endmark
+ \ifx\rest\empty
+ % Only one token in the argument. It could still be anything.
+ % A ``lowercase letter'' is one whose \lccode is nonzero.
+ % An ``uppercase letter'' is one whose \lccode is both nonzero, and
+  % not equal to the character code itself.
+ % Otherwise, we assume it's a number.
+ %
+ % We need the \relax at the end of the \ifnum lines to stop TeX from
+ % continuing to look for a <number>.
+ %
+ \ifnum\lccode\expandafter`\thearg=0\relax
+ \numericenumerate % a number (we hope)
+ \else
+ % It's a letter.
+ \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax
+ \lowercaseenumerate % lowercase letter
+ \else
+ \uppercaseenumerate % uppercase letter
+ \fi
+ \fi
+ \else
+ % Multiple tokens in the argument. We hope it's a number.
+ \numericenumerate
+ \fi
+}
+
+% An @enumerate whose labels are integers. The starting integer is
+% given in \thearg.
+%
+\def\numericenumerate{%
+ \itemno = \thearg
+ \startenumeration{\the\itemno}%
+}
+
+% The starting (lowercase) letter is in \thearg.
+\def\lowercaseenumerate{%
+ \itemno = \expandafter`\thearg
+ \startenumeration{%
+ % Be sure we're not beyond the end of the alphabet.
+ \ifnum\itemno=0
+ \errmessage{No more lowercase letters in @enumerate; get a bigger
+ alphabet}%
+ \fi
+ \char\lccode\itemno
+ }%
+}
+
+% The starting (uppercase) letter is in \thearg.
+\def\uppercaseenumerate{%
+ \itemno = \expandafter`\thearg
+ \startenumeration{%
+ % Be sure we're not beyond the end of the alphabet.
+ \ifnum\itemno=0
+ \errmessage{No more uppercase letters in @enumerate; get a bigger
+ alphabet}
+ \fi
+ \char\uccode\itemno
+ }%
+}
+
+% Call \doitemize, adding a period to the first argument and supplying the
+% common last two arguments. Also subtract one from the initial value in
+% \itemno, since @item increments \itemno.
+%
+\def\startenumeration#1{%
+ \advance\itemno by -1
+ \doitemize{#1.}\flushcr
+}
+
+% @alphaenumerate and @capsenumerate are abbreviations for giving an arg
+% to @enumerate.
+%
+\def\alphaenumerate{\enumerate{a}}
+\def\capsenumerate{\enumerate{A}}
+\def\Ealphaenumerate{\Eenumerate}
+\def\Ecapsenumerate{\Eenumerate}
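+
+% Usage sketch for the optional starting label:
+%   @enumerate 3     (labels 3, 4, 5, ...)
+%   @enumerate a     (labels a, b, c, ...)
+%   @enumerate A     (labels A, B, C, ...)
+% each followed by @item lines and @end enumerate.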
+
+
+% @multitable macros
+% Amy Hendrickson, 8/18/94, 3/6/96
+%
+% @multitable ... @end multitable will make as many columns as desired.
+% Contents of each column will wrap at width given in preamble. Width
+% can be specified either with sample text given in a template line,
+% or in percent of \hsize, the current width of text on page.
+
+% Table can continue over pages but will only break between lines.
+
+% To make preamble:
+%
+% Either define widths of columns in terms of percent of \hsize:
+% @multitable @columnfractions .25 .3 .45
+% @item ...
+%
+% Numbers following @columnfractions are the percent of the total
+% current hsize to be used for each column. You may use as many
+% columns as desired.
+
+
+% Or use a template:
+% @multitable {Column 1 template} {Column 2 template} {Column 3 template}
+% @item ...
+% using the widest term desired in each column.
+
+% Each new table line starts with @item, each subsequent new column
+% starts with @tab. Empty columns may be produced by supplying @tab's
+% with nothing between them for as many times as empty columns are needed,
+% i.e., @tab@tab@tab will produce two empty columns.
+
+% @item, @tab do not need to be on their own lines, but it will not hurt
+% if they are.
+
+% Sample multitable:
+
+% @multitable {Column 1 template} {Column 2 template} {Column 3 template}
+% @item first col stuff @tab second col stuff @tab third col
+% @item
+% first col stuff
+% @tab
+% second col stuff
+% @tab
+% third col
+% @item first col stuff @tab second col stuff
+% @tab Many paragraphs of text may be used in any column.
+%
+% They will wrap at the width determined by the template.
+% @item@tab@tab This will be in third column.
+% @end multitable
+
+% Default dimensions may be reset by user.
+% @multitableparskip is vertical space between paragraphs in table.
+% @multitableparindent is paragraph indent in table.
+% @multitablecolspace is horizontal space to be left between columns.
+% @multitablelinespace is space to leave between table items, baseline
+% to baseline.
+% 0pt means it depends on current normal line spacing.
+%
+\newskip\multitableparskip
+\newskip\multitableparindent
+\newdimen\multitablecolspace
+\newskip\multitablelinespace
+\multitableparskip=0pt
+\multitableparindent=6pt
+\multitablecolspace=12pt
+\multitablelinespace=0pt
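+
+% As a sketch only (nothing above requires it), a document could reset
+% one of these registers from a raw TeX block, e.g.:
+%   @tex
+%   \global\multitablecolspace=18pt
+%   @end tex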
+
+% Macros used to set up halign preamble:
+%
+\let\endsetuptable\relax
+\def\xendsetuptable{\endsetuptable}
+\let\columnfractions\relax
+\def\xcolumnfractions{\columnfractions}
+\newif\ifsetpercent
+
+% #1 is the @columnfraction, usually a decimal number like .5, but might
+% be just 1. We just use it, whatever it is.
+%
+\def\pickupwholefraction#1 {%
+ \global\advance\colcount by 1
+ \expandafter\xdef\csname col\the\colcount\endcsname{#1\hsize}%
+ \setuptable
+}
+
+\newcount\colcount
+\def\setuptable#1{%
+ \def\firstarg{#1}%
+ \ifx\firstarg\xendsetuptable
+ \let\go = \relax
+ \else
+ \ifx\firstarg\xcolumnfractions
+ \global\setpercenttrue
+ \else
+ \ifsetpercent
+ \let\go\pickupwholefraction
+ \else
+ \global\advance\colcount by 1
+ \setbox0=\hbox{#1\unskip\space}% Add a normal word space as a
+ % separator; typically that is always in the input, anyway.
+ \expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}%
+ \fi
+ \fi
+ \ifx\go\pickupwholefraction
+ % Put the argument back for the \pickupwholefraction call, so
+ % we'll always have a period there to be parsed.
+ \def\go{\pickupwholefraction#1}%
+ \else
+ \let\go = \setuptable
+ \fi%
+ \fi
+ \go
+}
+
+% multitable-only commands.
+%
+% @headitem starts a heading row, which we typeset in bold.
+% Assignments have to be global since we are inside the implicit group
+% of an alignment entry. Note that \everycr resets \everytab.
+\def\headitem{\checkenv\multitable \crcr \global\everytab={\bf}\the\everytab}%
+%
+% A \tab used to include \hskip1sp. But then the space in a template
+% line is not enough. That is bad. So let's go back to just `&' until
+% we encounter the problem it was intended to solve again.
+% --karl, nathan@acm.org, 20apr99.
+\def\tab{\checkenv\multitable &\the\everytab}%
+
+% @multitable ... @end multitable definitions:
+%
+\newtoks\everytab % insert after every tab.
+%
+\envdef\multitable{%
+ \vskip\parskip
+ \startsavinginserts
+ %
+ % @item within a multitable starts a normal row.
+ % We use \def instead of \let so that if one of the multitable entries
+ % contains an @itemize, we don't choke on the \item (seen as \crcr aka
+ % \endtemplate) expanding \doitemize.
+ \def\item{\crcr}%
+ %
+ \tolerance=9500
+ \hbadness=9500
+ \setmultitablespacing
+ \parskip=\multitableparskip
+ \parindent=\multitableparindent
+ \overfullrule=0pt
+ \global\colcount=0
+ %
+ \everycr = {%
+ \noalign{%
+ \global\everytab={}%
+ \global\colcount=0 % Reset the column counter.
+ % Check for saved footnotes, etc.
+ \checkinserts
+ % Keeps underfull box messages off when table breaks over pages.
+ %\filbreak
+ % Maybe so, but it also creates really weird page breaks when the
+ % table breaks over pages. Wouldn't \vfil be better? Wait until the
+ % problem manifests itself, so it can be fixed for real --karl.
+ }%
+ }%
+ %
+ \parsearg\domultitable
+}
+\def\domultitable#1{%
+ % To parse everything between @multitable and @item:
+ \setuptable#1 \endsetuptable
+ %
+ % This preamble sets up a generic column definition, which will
+ % be used as many times as user calls for columns.
+ % \vtop will set a single line and will also let text wrap and
+ % continue for many paragraphs if desired.
+ \halign\bgroup &%
+ \global\advance\colcount by 1
+ \multistrut
+ \vtop{%
+ % Use the current \colcount to find the correct column width:
+ \hsize=\expandafter\csname col\the\colcount\endcsname
+ %
+ % In order to keep entries from bumping into each other
+ % we will add a \leftskip of \multitablecolspace to all columns after
+ % the first one.
+ %
+ % If a template has been used, we will add \multitablecolspace
+ % to the width of each template entry.
+ %
+ % If the user has set preamble in terms of percent of \hsize we will
+ % use that dimension as the width of the column, and the \leftskip
+ % will keep entries from bumping into each other. Table will start at
+ % left margin and final column will justify at right margin.
+ %
+ % Make sure we don't inherit \rightskip from the outer environment.
+ \rightskip=0pt
+ \ifnum\colcount=1
+ % The first column will be indented with the surrounding text.
+ \advance\hsize by\leftskip
+ \else
+ \ifsetpercent \else
+ % If user has not set preamble in terms of percent of \hsize
+ % we will advance \hsize by \multitablecolspace.
+ \advance\hsize by \multitablecolspace
+ \fi
+ % In either case we will make \leftskip=\multitablecolspace:
+ \leftskip=\multitablecolspace
+ \fi
+ % Ignoring space at the beginning and end avoids an occasional spurious
+ % blank line, when TeX decides to break the line at the space before the
+ % box from the multistrut, so the strut ends up on a line by itself.
+ % For example:
+ % @multitable @columnfractions .11 .89
+ % @item @code{#}
+ % @tab Legal holiday which is valid in major parts of the whole country.
+ % Is automatically provided with highlighting sequences respectively
+ % marking characters.
+ \noindent\ignorespaces##\unskip\multistrut
+ }\cr
+}
+\def\Emultitable{%
+ \crcr
+ \egroup % end the \halign
+ \global\setpercentfalse
+}
+
+\def\setmultitablespacing{%
+ \def\multistrut{\strut}% just use the standard line spacing
+ %
+ % Compute \multitablelinespace (if not defined by user) for use in
+  % \multitableparskip calculation.  We used to define \multistrut based on
+ % this, but (ironically) that caused the spacing to be off.
+ % See bug-texinfo report from Werner Lemberg, 31 Oct 2004 12:52:20 +0100.
+\ifdim\multitablelinespace=0pt
+\setbox0=\vbox{X}\global\multitablelinespace=\the\baselineskip
+\global\advance\multitablelinespace by-\ht0
+\fi
+%% Test to see if parskip is larger than space between lines of
+%% table. If not, do nothing.
+%% If so, set to same dimension as multitablelinespace.
+\ifdim\multitableparskip>\multitablelinespace
+\global\multitableparskip=\multitablelinespace
+\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
+ %% than skip between lines in the table.
+\fi%
+\ifdim\multitableparskip=0pt
+\global\multitableparskip=\multitablelinespace
+\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
+ %% than skip between lines in the table.
+\fi}
+
+
+\message{conditionals,}
+
+% @iftex, @ifnotdocbook, @ifnothtml, @ifnotinfo, @ifnotplaintext,
+% @ifnotxml always succeed. They currently do nothing; we don't
+% attempt to check whether the conditionals are properly nested. But we
+% have to remember that they are conditionals, so that @end doesn't
+% attempt to close an environment group.
+%
+\def\makecond#1{%
+ \expandafter\let\csname #1\endcsname = \relax
+ \expandafter\let\csname iscond.#1\endcsname = 1
+}
+\makecond{iftex}
+\makecond{ifnotdocbook}
+\makecond{ifnothtml}
+\makecond{ifnotinfo}
+\makecond{ifnotplaintext}
+\makecond{ifnotxml}
+
+% Ignore @ignore, @ifhtml, @ifinfo, and the like.
+%
+\def\direntry{\doignore{direntry}}
+\def\documentdescription{\doignore{documentdescription}}
+\def\docbook{\doignore{docbook}}
+\def\html{\doignore{html}}
+\def\ifdocbook{\doignore{ifdocbook}}
+\def\ifhtml{\doignore{ifhtml}}
+\def\ifinfo{\doignore{ifinfo}}
+\def\ifnottex{\doignore{ifnottex}}
+\def\ifplaintext{\doignore{ifplaintext}}
+\def\ifxml{\doignore{ifxml}}
+\def\ignore{\doignore{ignore}}
+\def\menu{\doignore{menu}}
+\def\xml{\doignore{xml}}
+
+% Ignore text until a line `@end #1', keeping track of nested conditionals.
+%
+% A count to remember the depth of nesting.
+\newcount\doignorecount
+
+\def\doignore#1{\begingroup
+ % Scan in ``verbatim'' mode:
+ \obeylines
+ \catcode`\@ = \other
+ \catcode`\{ = \other
+ \catcode`\} = \other
+ %
+ % Make sure that spaces turn into tokens that match what \doignoretext wants.
+ \spaceisspace
+ %
+ % Count number of #1's that we've seen.
+ \doignorecount = 0
+ %
+ % Swallow text until we reach the matching `@end #1'.
+ \dodoignore{#1}%
+}
+
+{ \catcode`_=11 % We want to use \_STOP_ which cannot appear in texinfo source.
+ \obeylines %
+ %
+ \gdef\dodoignore#1{%
+ % #1 contains the command name as a string, e.g., `ifinfo'.
+ %
+ % Define a command to find the next `@end #1'.
+ \long\def\doignoretext##1^^M@end #1{%
+ \doignoretextyyy##1^^M@#1\_STOP_}%
+ %
+ % And this command to find another #1 command, at the beginning of a
+ % line. (Otherwise, we would consider a line `@c @ifset', for
+ % example, to count as an @ifset for nesting.)
+ \long\def\doignoretextyyy##1^^M@#1##2\_STOP_{\doignoreyyy{##2}\_STOP_}%
+ %
+ % And now expand that command.
+ \doignoretext ^^M%
+ }%
+}
+
+\def\doignoreyyy#1{%
+ \def\temp{#1}%
+ \ifx\temp\empty % Nothing found.
+ \let\next\doignoretextzzz
+ \else % Found a nested condition, ...
+ \advance\doignorecount by 1
+ \let\next\doignoretextyyy % ..., look for another.
+ % If we're here, #1 ends with ^^M\ifinfo (for example).
+ \fi
+ \next #1% the token \_STOP_ is present just after this macro.
+}
+
+% We have to swallow the remaining "\_STOP_".
+%
+\def\doignoretextzzz#1{%
+ \ifnum\doignorecount = 0 % We have just found the outermost @end.
+ \let\next\enddoignore
+ \else % Still inside a nested condition.
+ \advance\doignorecount by -1
+ \let\next\doignoretext % Look for the next @end.
+ \fi
+ \next
+}
+
+% Finish off ignored text.
+{ \obeylines%
+ % Ignore anything after the last `@end #1'; this matters in verbatim
+ % environments, where otherwise the newline after an ignored conditional
+ % would result in a blank line in the output.
+ \gdef\enddoignore#1^^M{\endgroup\ignorespaces}%
+}
+
+
+% @set VAR sets the variable VAR to an empty value.
+% @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE.
+%
+% Since we want to separate VAR from REST-OF-LINE (which might be
+% empty), we can't just use \parsearg; we have to insert a space of our
+% own to delimit the rest of the line, and then take it out again if we
+% didn't need it.
+% We rely on the fact that \parsearg sets \catcode`\ =10.
+%
+\parseargdef\set{\setyyy#1 \endsetyyy}
+\def\setyyy#1 #2\endsetyyy{%
+ {%
+ \makevalueexpandable
+ \def\temp{#2}%
+ \edef\next{\gdef\makecsname{SET#1}}%
+ \ifx\temp\empty
+ \next{}%
+ \else
+ \setzzz#2\endsetzzz
+ \fi
+ }%
+}
+% Remove the trailing space that \set inserted.
+\def\setzzz#1 \endsetzzz{\next{#1}}
+
+% @clear VAR clears (i.e., unsets) the variable VAR.
+%
+\parseargdef\clear{%
+ {%
+ \makevalueexpandable
+ \global\expandafter\let\csname SET#1\endcsname=\relax
+ }%
+}
+
+% @value{foo} gets the text saved in variable foo.
+\def\value{\begingroup\makevalueexpandable\valuexxx}
+\def\valuexxx#1{\expandablevalue{#1}\endgroup}
+{
+ \catcode`\- = \active \catcode`\_ = \active
+ %
+ \gdef\makevalueexpandable{%
+ \let\value = \expandablevalue
+ % We don't want these characters active, ...
+ \catcode`\-=\other \catcode`\_=\other
+ % ..., but we might end up with active ones in the argument if
+ % we're called from @code, as @code{@value{foo-bar_}}, though.
+ % So \let them to their normal equivalents.
+ \let-\realdash \let_\normalunderscore
+ }
+}
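+
+% Usage sketch (the variable name and value are made up):
+%   @set VERSION 2.0
+%   This is version @value{VERSION} of the manual.
+%   @clear VERSION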
+
+% We have this subroutine so that we can handle at least some @value's
+% properly in indexes (we call \makevalueexpandable in \indexdummies).
+% The command has to be fully expandable (if the variable is set), since
+% the result winds up in the index file. This means that if the
+% variable's value contains other Texinfo commands, it's almost certain
+% it will fail (although perhaps we could fix that with sufficient work
+% to do a one-level expansion on the result, instead of complete).
+%
+\def\expandablevalue#1{%
+ \expandafter\ifx\csname SET#1\endcsname\relax
+ {[No value for ``#1'']}%
+ \message{Variable `#1', used in @value, is not set.}%
+ \else
+ \csname SET#1\endcsname
+ \fi
+}
+
+% @ifset VAR ... @end ifset reads the `...' iff VAR has been defined
+% with @set.
+%
+% To get special treatment of `@end ifset,' call \makeond and the redefine.
+%
+\makecond{ifset}
+\def\ifset{\parsearg{\doifset{\let\next=\ifsetfail}}}
+\def\doifset#1#2{%
+ {%
+ \makevalueexpandable
+ \let\next=\empty
+ \expandafter\ifx\csname SET#2\endcsname\relax
+ #1% If not set, redefine \next.
+ \fi
+ \expandafter
+ }\next
+}
+\def\ifsetfail{\doignore{ifset}}
+
+% @ifclear VAR ... @end ifclear reads the `...' iff VAR has never been
+% defined with @set, or has been undefined with @clear.
+%
+% The `\else' inside the `\doifset' parameter is a trick to reuse the
+% above code: if the variable is not set, do nothing, if it is set,
+% then redefine \next to \ifclearfail.
+%
+\makecond{ifclear}
+\def\ifclear{\parsearg{\doifset{\else \let\next=\ifclearfail}}}
+\def\ifclearfail{\doignore{ifclear}}
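+
+% Usage sketch (the flag name is made up):
+%   @ifset DRAFT
+%   This text appears only when DRAFT has been @set.
+%   @end ifset
+%   @ifclear DRAFT
+%   This text appears only when DRAFT is not set.
+%   @end ifclear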
+
+% @dircategory CATEGORY -- specify a category of the dir file
+% which this file should belong to. Ignore this in TeX.
+\let\dircategory=\comment
+
+% @defininfoenclose.
+\let\definfoenclose=\comment
+
+
+\message{indexing,}
+% Index generation facilities
+
+% Define \newwrite to be identical to plain tex's \newwrite
+% except not \outer, so it can be used within macros and \if's.
+\edef\newwrite{\makecsname{ptexnewwrite}}
+
+% \newindex {foo} defines an index named foo.
+% It automatically defines \fooindex such that
+% \fooindex ...rest of line... puts an entry in the index foo.
+% It also defines \fooindfile to be the number of the output channel for
+% the file that accumulates this index. The file's extension is foo.
+% The name of an index should be no more than 2 characters long
+% for the sake of vms.
+%
+\def\newindex#1{%
+ \iflinks
+ \expandafter\newwrite \csname#1indfile\endcsname
+ \openout \csname#1indfile\endcsname \jobname.#1 % Open the file
+ \fi
+ \expandafter\xdef\csname#1index\endcsname{% % Define @#1index
+ \noexpand\doindex{#1}}
+}
+
+% @defindex foo == \newindex{foo}
+%
+\def\defindex{\parsearg\newindex}
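+
+% Usage sketch (index name and entry are made up; keep the name to two
+% characters, per the comment above):
+%   @defindex au
+%   ...
+%   @auindex A. U. Thor
+%   ...
+%   @printindex au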
+
+% Define @defcodeindex, like @defindex except put all entries in @code.
+%
+\def\defcodeindex{\parsearg\newcodeindex}
+%
+\def\newcodeindex#1{%
+ \iflinks
+ \expandafter\newwrite \csname#1indfile\endcsname
+ \openout \csname#1indfile\endcsname \jobname.#1
+ \fi
+ \expandafter\xdef\csname#1index\endcsname{%
+ \noexpand\docodeindex{#1}}%
+}
+
+
+% @synindex foo bar makes index foo feed into index bar.
+% Do this instead of @defindex foo if you don't want it as a separate index.
+%
+% @syncodeindex foo bar similar, but put all entries made for index foo
+% inside @code.
+%
+\def\synindex#1 #2 {\dosynindex\doindex{#1}{#2}}
+\def\syncodeindex#1 #2 {\dosynindex\docodeindex{#1}{#2}}
+
+% #1 is \doindex or \docodeindex, #2 the index getting redefined (foo),
+% #3 the target index (bar).
+\def\dosynindex#1#2#3{%
+ % Only do \closeout if we haven't already done it, else we'll end up
+ % closing the target index.
+ \expandafter \ifx\csname donesynindex#2\endcsname \undefined
+ % The \closeout helps reduce unnecessary open files; the limit on the
+ % Acorn RISC OS is a mere 16 files.
+ \expandafter\closeout\csname#2indfile\endcsname
+    \expandafter\let\csname donesynindex#2\endcsname = 1
+ \fi
+ % redefine \fooindfile:
+ \expandafter\let\expandafter\temp\expandafter=\csname#3indfile\endcsname
+ \expandafter\let\csname#2indfile\endcsname=\temp
+ % redefine \fooindex:
+ \expandafter\xdef\csname#2index\endcsname{\noexpand#1{#3}}%
+}
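+
+% Usage sketch: merge the function index into the concept index,
+% wrapping the merged entries in @code:
+%   @syncodeindex fn cp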
+
+% Define \doindex, the driver for all \fooindex macros.
+% Argument #1 is generated by the calling \fooindex macro,
+% and it is "foo", the name of the index.
+
+% \doindex just uses \parsearg; it calls \doind for the actual work.
+% This is because \doind is more useful to call from other macros.
+
+% There is also \dosubind {index}{topic}{subtopic}
+% which makes an entry in a two-level index such as the operation index.
+
+\def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer}
+\def\singleindexer #1{\doind{\indexname}{#1}}
+
+% like the previous two, but they put @code around the argument.
+\def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer}
+\def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}}
+
+% Take care of Texinfo commands that can appear in an index entry.
+% Since there are some commands we want to expand, and others we don't,
+% we have to laboriously prevent expansion for those that we don't.
+%
+\def\indexdummies{%
+ \escapechar = `\\ % use backslash in output files.
+ \def\@{@}% change to @@ when we switch to @ as escape char in index files.
+ \def\ {\realbackslash\space }%
+ %
+ % Need these in case \tex is in effect and \{ is a \delimiter again.
+ % But can't use \lbracecmd and \rbracecmd because texindex assumes
+ % braces and backslashes are used only as delimiters.
+ \let\{ = \mylbrace
+ \let\} = \myrbrace
+ %
+ % I don't entirely understand this, but when an index entry is
+ % generated from a macro call, the \endinput which \scanmacro inserts
+ % causes processing to be prematurely terminated. This is,
+ % apparently, because \indexsorttmp is fully expanded, and \endinput
+ % is an expandable command. The redefinition below makes \endinput
+ % disappear altogether for that purpose -- although logging shows that
+ % processing continues to some further point. On the other hand, it
+ % seems \endinput does not hurt in the printed index arg, since that
+ % is still getting written without apparent harm.
+ %
+ % Sample source (mac-idx3.tex, reported by Graham Percival to
+ % help-texinfo, 22may06):
+ % @macro funindex {WORD}
+ % @findex xyz
+ % @end macro
+ % ...
+ % @funindex commtest
+ %
+ % The above is not enough to reproduce the bug, but it gives the flavor.
+ %
+ % Sample whatsit resulting:
+ % .@write3{\entry{xyz}{@folio }{@code {xyz@endinput }}}
+ %
+ % So:
+ \let\endinput = \empty
+ %
+ % Do the redefinitions.
+ \commondummies
+}
+
+% For the aux and toc files, @ is the escape character. So we want to
+% redefine everything using @ as the escape character (instead of
+% \realbackslash, still used for index files). When everything uses @,
+% this will be simpler.
+%
+\def\atdummies{%
+ \def\@{@@}%
+ \def\ {@ }%
+ \let\{ = \lbraceatcmd
+ \let\} = \rbraceatcmd
+ %
+ % Do the redefinitions.
+ \commondummies
+ \otherbackslash
+}
+
+% Called from \indexdummies and \atdummies.
+%
+\def\commondummies{%
+ %
+ % \definedummyword defines \#1 as \string\#1\space, thus effectively
+  % preventing its expansion.  This is used only for control words,
+ % not control letters, because the \space would be incorrect for
+ % control characters, but is needed to separate the control word
+ % from whatever follows.
+ %
+ % For control letters, we have \definedummyletter, which omits the
+ % space.
+ %
+ % These can be used both for control words that take an argument and
+ % those that do not. If it is followed by {arg} in the input, then
+ % that will dutifully get written to the index (or wherever).
+ %
+ \def\definedummyword ##1{\def##1{\string##1\space}}%
+ \def\definedummyletter##1{\def##1{\string##1}}%
+ \let\definedummyaccent\definedummyletter
+ %
+ \commondummiesnofonts
+ %
+ \definedummyletter\_%
+ %
+ % Non-English letters.
+ \definedummyword\AA
+ \definedummyword\AE
+ \definedummyword\L
+ \definedummyword\OE
+ \definedummyword\O
+ \definedummyword\aa
+ \definedummyword\ae
+ \definedummyword\l
+ \definedummyword\oe
+ \definedummyword\o
+ \definedummyword\ss
+ \definedummyword\exclamdown
+ \definedummyword\questiondown
+ \definedummyword\ordf
+ \definedummyword\ordm
+ %
+ % Although these internal commands shouldn't show up, sometimes they do.
+ \definedummyword\bf
+ \definedummyword\gtr
+ \definedummyword\hat
+ \definedummyword\less
+ \definedummyword\sf
+ \definedummyword\sl
+ \definedummyword\tclose
+ \definedummyword\tt
+ %
+ \definedummyword\LaTeX
+ \definedummyword\TeX
+ %
+ % Assorted special characters.
+ \definedummyword\bullet
+ \definedummyword\comma
+ \definedummyword\copyright
+ \definedummyword\registeredsymbol
+ \definedummyword\dots
+ \definedummyword\enddots
+ \definedummyword\equiv
+ \definedummyword\error
+ \definedummyword\euro
+ \definedummyword\expansion
+ \definedummyword\minus
+ \definedummyword\pounds
+ \definedummyword\point
+ \definedummyword\print
+ \definedummyword\result
+ \definedummyword\textdegree
+ %
+ % We want to disable all macros so that they are not expanded by \write.
+ \macrolist
+ %
+ \normalturnoffactive
+ %
+ % Handle some cases of @value -- where it does not contain any
+ % (non-fully-expandable) commands.
+ \makevalueexpandable
+}
+
+% \commondummiesnofonts: common to \commondummies and \indexnofonts.
+%
+\def\commondummiesnofonts{%
+ % Control letters and accents.
+ \definedummyletter\!%
+ \definedummyaccent\"%
+ \definedummyaccent\'%
+ \definedummyletter\*%
+ \definedummyaccent\,%
+ \definedummyletter\.%
+ \definedummyletter\/%
+ \definedummyletter\:%
+ \definedummyaccent\=%
+ \definedummyletter\?%
+ \definedummyaccent\^%
+ \definedummyaccent\`%
+ \definedummyaccent\~%
+ \definedummyword\u
+ \definedummyword\v
+ \definedummyword\H
+ \definedummyword\dotaccent
+ \definedummyword\ringaccent
+ \definedummyword\tieaccent
+ \definedummyword\ubaraccent
+ \definedummyword\udotaccent
+ \definedummyword\dotless
+ %
+ % Texinfo font commands.
+ \definedummyword\b
+ \definedummyword\i
+ \definedummyword\r
+ \definedummyword\sc
+ \definedummyword\t
+ %
+ % Commands that take arguments.
+ \definedummyword\acronym
+ \definedummyword\cite
+ \definedummyword\code
+ \definedummyword\command
+ \definedummyword\dfn
+ \definedummyword\emph
+ \definedummyword\env
+ \definedummyword\file
+ \definedummyword\kbd
+ \definedummyword\key
+ \definedummyword\math
+ \definedummyword\option
+ \definedummyword\pxref
+ \definedummyword\ref
+ \definedummyword\samp
+ \definedummyword\strong
+ \definedummyword\tie
+ \definedummyword\uref
+ \definedummyword\url
+ \definedummyword\var
+ \definedummyword\verb
+ \definedummyword\w
+ \definedummyword\xref
+}
+
+% \indexnofonts is used when outputting the strings to sort the index
+% by, and when constructing control sequence names. It eliminates all
+% control sequences and just writes whatever the best ASCII sort string
+% would be for a given command (usually its argument).
+%
+\def\indexnofonts{%
+ % Accent commands should become @asis.
+ \def\definedummyaccent##1{\let##1\asis}%
+ % We can just ignore other control letters.
+ \def\definedummyletter##1{\let##1\empty}%
+ % Hopefully, all control words can become @asis.
+ \let\definedummyword\definedummyaccent
+ %
+ \commondummiesnofonts
+ %
+ % Don't no-op \tt, since it isn't a user-level command
+ % and is used in the definitions of the active chars like <, >, |, etc.
+ % Likewise with the other plain tex font commands.
+ %\let\tt=\asis
+ %
+ \def\ { }%
+ \def\@{@}%
+ % how to handle braces?
+ \def\_{\normalunderscore}%
+ %
+ % Non-English letters.
+ \def\AA{AA}%
+ \def\AE{AE}%
+ \def\L{L}%
+ \def\OE{OE}%
+ \def\O{O}%
+ \def\aa{aa}%
+ \def\ae{ae}%
+ \def\l{l}%
+ \def\oe{oe}%
+ \def\o{o}%
+ \def\ss{ss}%
+ \def\exclamdown{!}%
+ \def\questiondown{?}%
+ \def\ordf{a}%
+ \def\ordm{o}%
+ %
+ \def\LaTeX{LaTeX}%
+ \def\TeX{TeX}%
+ %
+ % Assorted special characters.
+ % (The following {} will end up in the sort string, but that's ok.)
+ \def\bullet{bullet}%
+ \def\comma{,}%
+ \def\copyright{copyright}%
+ \def\registeredsymbol{R}%
+ \def\dots{...}%
+ \def\enddots{...}%
+ \def\equiv{==}%
+ \def\error{error}%
+ \def\euro{euro}%
+ \def\expansion{==>}%
+ \def\minus{-}%
+ \def\pounds{pounds}%
+ \def\point{.}%
+ \def\print{-|}%
+ \def\result{=>}%
+ \def\textdegree{degrees}%
+ %
+ % We need to get rid of all macros, leaving only the arguments (if present).
+ % Of course this is not nearly correct, but it is the best we can do for now.
+ % makeinfo does not expand macros in the argument to @deffn, which ends up
+ % writing an index entry, and texindex isn't prepared for an index sort entry
+ % that starts with \.
+ %
+ % Since macro invocations are followed by braces, we can just redefine them
+ % to take a single TeX argument. The case of a macro invocation that
+ % goes to end-of-line is not handled.
+ %
+ \macrolist
+}
+
+\let\indexbackslash=0 %overridden during \printindex.
+\let\SETmarginindex=\relax % put index entries in margin (undocumented)?
+
+% Most index entries go through here, but \dosubind is the general case.
+% #1 is the index name, #2 is the entry text.
+\def\doind#1#2{\dosubind{#1}{#2}{}}
+
+% Workhorse for all \fooindexes.
+% #1 is name of index, #2 is stuff to put there, #3 is subentry --
+% empty if called from \doind, as we usually are (the main exception
+% is with most defuns, which call us directly).
+%
+\def\dosubind#1#2#3{%
+ \iflinks
+ {%
+ % Store the main index entry text (including the third arg).
+ \toks0 = {#2}%
+ % If third arg is present, precede it with a space.
+ \def\thirdarg{#3}%
+ \ifx\thirdarg\empty \else
+ \toks0 = \expandafter{\the\toks0 \space #3}%
+ \fi
+ %
+ \edef\writeto{\csname#1indfile\endcsname}%
+ %
+ \ifvmode
+ \dosubindsanitize
+ \else
+ \dosubindwrite
+ \fi
+ }%
+ \fi
+}
+
+% Write the entry in \toks0 to the index file:
+%
+\def\dosubindwrite{%
+ % Put the index entry in the margin if desired.
+ \ifx\SETmarginindex\relax\else
+ \insert\margin{\hbox{\vrule height8pt depth3pt width0pt \the\toks0}}%
+ \fi
+ %
+ % Remember, we are within a group.
+ \indexdummies % Must do this here, since \bf, etc expand at this stage
+ \def\backslashcurfont{\indexbackslash}% \indexbackslash isn't defined now
+ % so it will be output as is; and it will print as backslash.
+ %
+ % Process the index entry with all font commands turned off, to
+ % get the string to sort by.
+ {\indexnofonts
+ \edef\temp{\the\toks0}% need full expansion
+ \xdef\indexsorttmp{\temp}%
+ }%
+ %
+ % Set up the complete index entry, with both the sort key and
+ % the original text, including any font commands. We write
+ % three arguments to \entry to the .?? file (four in the
+ % subentry case), texindex reduces to two when writing the .??s
+ % sorted result.
+ \edef\temp{%
+ \write\writeto{%
+ \string\entry{\indexsorttmp}{\noexpand\folio}{\the\toks0}}%
+ }%
+ \temp
+}
+
+% Take care of unwanted page breaks:
+%
+% If a skip is the last thing on the list now, preserve it
+% by backing up by \lastskip, doing the \write, then inserting
+% the skip again. Otherwise, the whatsit generated by the
+% \write will make \lastskip zero. The result is that sequences
+% like this:
+% @end defun
+% @tindex whatever
+% @defun ...
+% will have extra space inserted, because the \medbreak in the
+% start of the @defun won't see the skip inserted by the @end of
+% the previous defun.
+%
+% But don't do any of this if we're not in vertical mode. We
+% don't want to do a \vskip and prematurely end a paragraph.
+%
+% Avoid page breaks due to these extra skips, too.
+%
+% But wait, there is a catch there:
+% We'll have to check whether \lastskip is zero skip. \ifdim is not
+% sufficient for this purpose, as it ignores stretch and shrink parts
+% of the skip. The only way seems to be to check the textual
+% representation of the skip.
+%
+% The following is almost like \def\zeroskipmacro{0.0pt} except that
+% the ``p'' and ``t'' characters have catcode \other, not 11 (letter).
+%
+\edef\zeroskipmacro{\expandafter\the\csname z@skip\endcsname}
+%
+% ..., ready, GO:
+%
+\def\dosubindsanitize{%
+ % \lastskip and \lastpenalty cannot both be nonzero simultaneously.
+ \skip0 = \lastskip
+ \edef\lastskipmacro{\the\lastskip}%
+ \count255 = \lastpenalty
+ %
+ % If \lastskip is nonzero, that means the last item was a
+ % skip. And since a skip is discardable, that means this
+ % -\skip0 glue we're inserting is preceded by a
+ % non-discardable item, therefore it is not a potential
+ % breakpoint, therefore no \nobreak needed.
+ \ifx\lastskipmacro\zeroskipmacro
+ \else
+ \vskip-\skip0
+ \fi
+ %
+ \dosubindwrite
+ %
+ \ifx\lastskipmacro\zeroskipmacro
+ % If \lastskip was zero, perhaps the last item was a penalty, and
+ % perhaps it was >=10000, e.g., a \nobreak. In that case, we want
+ % to re-insert the same penalty (values >10000 are used for various
+ % signals); since we just inserted a non-discardable item, any
+ % following glue (such as a \parskip) would be a breakpoint. For example:
+ %
+ % @deffn deffn-whatever
+ % @vindex index-whatever
+ % Description.
+ % would allow a break between the index-whatever whatsit
+ % and the "Description." paragraph.
+ \ifnum\count255>9999 \penalty\count255 \fi
+ \else
+ % On the other hand, if we had a nonzero \lastskip,
+ % this make-up glue would be preceded by a non-discardable item
+ % (the whatsit from the \write), so we must insert a \nobreak.
+ \nobreak\vskip\skip0
+ \fi
+}
+
+% The index entry written in the file actually looks like
+% \entry {sortstring}{page}{topic}
+% or
+% \entry {sortstring}{page}{topic}{subtopic}
+% The texindex program reads in these files and writes files
+% containing these kinds of lines:
+% \initial {c}
+% before the first topic whose initial is c
+% \entry {topic}{pagelist}
+% for a topic that is used without subtopics
+% \primary {topic}
+% for the beginning of a topic that is used with subtopics
+% \secondary {subtopic}{pagelist}
+% for each subtopic.
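+%
+% For instance (entry and page invented), a source line `@cindex stowing'
+% falling on page 12 would write to \jobname.cp something like
+%   \entry{stowing}{12}{stowing}
+% and texindex would then produce in \jobname.cps lines such as
+%   \initial {s}
+%   \entry {stowing}{12}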
+
+% Define the user-accessible indexing commands
+% @findex, @vindex, @kindex, @cindex.
+
+\def\findex {\fnindex}
+\def\kindex {\kyindex}
+\def\cindex {\cpindex}
+\def\vindex {\vrindex}
+\def\tindex {\tpindex}
+\def\pindex {\pgindex}
+
+\def\cindexsub {\begingroup\obeylines\cindexsub}
+{\obeylines %
+\gdef\cindexsub "#1" #2^^M{\endgroup %
+\dosubind{cp}{#2}{#1}}}
+
+% Define the macros used in formatting output of the sorted index material.
+
+% @printindex causes a particular index (the ??s file) to get printed.
+% It does not print any chapter heading (usually an @unnumbered).
+%
+\parseargdef\printindex{\begingroup
+ \dobreak \chapheadingskip{10000}%
+ %
+ \smallfonts \rm
+ \tolerance = 9500
+  \everypar = {}% don't want the \kern -\parindent from indentation suppression.
+ %
+ % See if the index file exists and is nonempty.
+ % Change catcode of @ here so that if the index file contains
+ % \initial {@}
+ % as its first line, TeX doesn't complain about mismatched braces
+ % (because it thinks @} is a control sequence).
+ \catcode`\@ = 11
+ \openin 1 \jobname.#1s
+ \ifeof 1
+ % \enddoublecolumns gets confused if there is no text in the index,
+ % and it loses the chapter title and the aux file entries for the
+ % index. The easiest way to prevent this problem is to make sure
+ % there is some text.
+ \putwordIndexNonexistent
+ \else
+ %
+ % If the index file exists but is empty, then \openin leaves \ifeof
+ % false. We have to make TeX try to read something from the file, so
+ % it can discover if there is anything in it.
+ \read 1 to \temp
+ \ifeof 1
+ \putwordIndexIsEmpty
+ \else
+ % Index files are almost Texinfo source, but we use \ as the escape
+ % character. It would be better to use @, but that's too big a change
+ % to make right now.
+ \def\indexbackslash{\backslashcurfont}%
+ \catcode`\\ = 0
+ \escapechar = `\\
+ \begindoublecolumns
+ \input \jobname.#1s
+ \enddoublecolumns
+ \fi
+ \fi
+ \closein 1
+\endgroup}
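+
+% Typical use near the end of a Texinfo manual (an illustrative fragment,
+% not taken from any particular manual):
+%   @node Concept Index
+%   @unnumbered Concept Index
+%   @printindex cp
+% This reads \jobname.cps, as produced by texindex, and typesets it in
+% two columns.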
+
+% These macros are used by the sorted index file itself.
+% Change them to control the appearance of the index.
+
+\def\initial#1{{%
+ % Some minor font changes for the special characters.
+ \let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt
+ %
+  % Remove any glue we may have; we'll be inserting our own.
+ \removelastskip
+ %
+ % We like breaks before the index initials, so insert a bonus.
+ \nobreak
+ \vskip 0pt plus 3\baselineskip
+ \penalty 0
+ \vskip 0pt plus -3\baselineskip
+ %
+ % Typeset the initial. Making this add up to a whole number of
+ % baselineskips increases the chance of the dots lining up from column
+ % to column. It still won't often be perfect, because of the stretch
+ % we need before each entry, but it's better.
+ %
+ % No shrink because it confuses \balancecolumns.
+ \vskip 1.67\baselineskip plus .5\baselineskip
+ \leftline{\secbf #1}%
+ % Do our best not to break after the initial.
+ \nobreak
+ \vskip .33\baselineskip plus .1\baselineskip
+}}
+
+% \entry typesets a paragraph consisting of the text (#1), dot leaders, and
+% then page number (#2) flushed to the right margin. It is used for index
+% and table of contents entries. The paragraph is indented by \leftskip.
+%
+% A straightforward implementation would start like this:
+% \def\entry#1#2{...
+% But this freezes the catcodes in the argument, and can cause problems
+% for @code, which sets - active.  This problem was once fixed by a
+% kludge: ``-'' was made active throughout the whole index, but that
+% isn't really right.
+%
+% The right solution is to prevent \entry from swallowing the whole text.
+% --kasal, 21nov03
+\def\entry{%
+ \begingroup
+ %
+ % Start a new paragraph if necessary, so our assignments below can't
+ % affect previous text.
+ \par
+ %
+ % Do not fill out the last line with white space.
+ \parfillskip = 0in
+ %
+ % No extra space above this paragraph.
+ \parskip = 0in
+ %
+ % Do not prefer a separate line ending with a hyphen to fewer lines.
+ \finalhyphendemerits = 0
+ %
+ % \hangindent is only relevant when the entry text and page number
+ % don't both fit on one line. In that case, bob suggests starting the
+ % dots pretty far over on the line. Unfortunately, a large
+ % indentation looks wrong when the entry text itself is broken across
+ % lines. So we use a small indentation and put up with long leaders.
+ %
+ % \hangafter is reset to 1 (which is the value we want) at the start
+ % of each paragraph, so we need not do anything with that.
+ \hangindent = 2em
+ %
+ % When the entry text needs to be broken, just fill out the first line
+ % with blank space.
+ \rightskip = 0pt plus1fil
+ %
+ % A bit of stretch before each entry for the benefit of balancing
+ % columns.
+ \vskip 0pt plus1pt
+ %
+ % Swallow the left brace of the text (first parameter):
+ \afterassignment\doentry
+ \let\temp =
+}
+\def\doentry{%
+ \bgroup % Instead of the swallowed brace.
+ \noindent
+ \aftergroup\finishentry
+ % And now comes the text of the entry.
+}
+\def\finishentry#1{%
+ % #1 is the page number.
+ %
+ % The following is kludged to not output a line of dots in the index if
+ % there are no page numbers. The next person who breaks this will be
+ % cursed by a Unix daemon.
+ \def\tempa{{\rm }}%
+ \def\tempb{#1}%
+ \edef\tempc{\tempa}%
+ \edef\tempd{\tempb}%
+ \ifx\tempc\tempd
+ \ %
+ \else
+ %
+ % If we must, put the page number on a line of its own, and fill out
+ % this line with blank space. (The \hfil is overwhelmed with the
+ % fill leaders glue in \indexdotfill if the page number does fit.)
+ \hfil\penalty50
+ \null\nobreak\indexdotfill % Have leaders before the page number.
+ %
+ % The `\ ' here is removed by the implicit \unskip that TeX does as
+ % part of (the primitive) \par. Without it, a spurious underfull
+ % \hbox ensues.
+ \ifpdf
+ \pdfgettoks#1.%
+ \ \the\toksA
+ \else
+ \ #1%
+ \fi
+ \fi
+ \par
+ \endgroup
+}
+
+% Like plain.tex's \dotfill, except uses up at least 1 em.
+\def\indexdotfill{\cleaders
+ \hbox{$\mathsurround=0pt \mkern1.5mu.\mkern1.5mu$}\hskip 1em plus 1fill}
+
+\def\primary #1{\line{#1\hfil}}
+
+\newskip\secondaryindent \secondaryindent=0.5cm
+\def\secondary#1#2{{%
+ \parfillskip=0in
+ \parskip=0in
+ \hangindent=1in
+ \hangafter=1
+ \noindent\hskip\secondaryindent\hbox{#1}\indexdotfill
+ \ifpdf
+ \pdfgettoks#2.\ \the\toksA % The page number ends the paragraph.
+ \else
+ #2
+ \fi
+ \par
+}}
+
+% Define two-column mode, which we use to typeset indexes.
+% Adapted from the TeXbook, page 416, which is to say,
+% the manmac.tex format used to print the TeXbook itself.
+\catcode`\@=11
+
+\newbox\partialpage
+\newdimen\doublecolumnhsize
+
+\def\begindoublecolumns{\begingroup % ended by \enddoublecolumns
+ % Grab any single-column material above us.
+ \output = {%
+ %
+ % Here is a possibility not foreseen in manmac: if we accumulate a
+ % whole lot of material, we might end up calling this \output
+ % routine twice in a row (see the doublecol-lose test, which is
+ % essentially a couple of indexes with @setchapternewpage off). In
+ % that case we just ship out what is in \partialpage with the normal
+ % output routine. Generally, \partialpage will be empty when this
+ % runs and this will be a no-op. See the indexspread.tex test case.
+ \ifvoid\partialpage \else
+ \onepageout{\pagecontents\partialpage}%
+ \fi
+ %
+ \global\setbox\partialpage = \vbox{%
+ % Unvbox the main output page.
+ \unvbox\PAGE
+ \kern-\topskip \kern\baselineskip
+ }%
+ }%
+ \eject % run that output routine to set \partialpage
+ %
+ % Use the double-column output routine for subsequent pages.
+ \output = {\doublecolumnout}%
+ %
+ % Change the page size parameters. We could do this once outside this
+ % routine, in each of @smallbook, @afourpaper, and the default 8.5x11
+  % format, but then we would repeat the same computation.  Repeating a
+  % couple of assignments once per index costs nothing measurable in
+  % execution time, so we may as well do it in one place.
+ %
+ % First we halve the line length, less a little for the gutter between
+ % the columns. We compute the gutter based on the line length, so it
+ % changes automatically with the paper format. The magic constant
+ % below is chosen so that the gutter has the same value (well, +-<1pt)
+ % as it did when we hard-coded it.
+ %
+  % We put the result in a separate register, \doublecolumnhsize, so we
+ % can restore it in \pagesofar, after \hsize itself has (potentially)
+ % been clobbered.
+ %
+ \doublecolumnhsize = \hsize
+ \advance\doublecolumnhsize by -.04154\hsize
+ \divide\doublecolumnhsize by 2
+ \hsize = \doublecolumnhsize
+ %
+ % Double the \vsize as well. (We don't need a separate register here,
+ % since nobody clobbers \vsize.)
+ \vsize = 2\vsize
+}
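+
+% As a rough worked example: assuming the usual 6in text width of the
+% default 8.5x11 format, the gutter comes to .04154 * 6in, about 0.25in
+% (roughly 18pt), leaving each column about 2.87in wide.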
+
+% The double-column output routine for all double-column pages except
+% the last.
+%
+\def\doublecolumnout{%
+ \splittopskip=\topskip \splitmaxdepth=\maxdepth
+ % Get the available space for the double columns -- the normal
+ % (undoubled) page height minus any material left over from the
+ % previous page.
+ \dimen@ = \vsize
+ \divide\dimen@ by 2
+ \advance\dimen@ by -\ht\partialpage
+ %
+ % box0 will be the left-hand column, box2 the right.
+ \setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@
+ \onepageout\pagesofar
+ \unvbox255
+ \penalty\outputpenalty
+}
+%
+% Re-output the contents of the output page -- any previous material,
+% followed by the two boxes we just split, in box0 and box2.
+\def\pagesofar{%
+ \unvbox\partialpage
+ %
+ \hsize = \doublecolumnhsize
+ \wd0=\hsize \wd2=\hsize
+ \hbox to\pagewidth{\box0\hfil\box2}%
+}
+%
+% All done with double columns.
+\def\enddoublecolumns{%
+ \output = {%
+ % Split the last of the double-column material. Leave it on the
+ % current page, no automatic page break.
+ \balancecolumns
+ %
+ % If we end up splitting too much material for the current page,
+ % though, there will be another page break right after this \output
+ % invocation ends. Having called \balancecolumns once, we do not
+ % want to call it again. Therefore, reset \output to its normal
+ % definition right away. (We hope \balancecolumns will never be
+ % called on to balance too much material, but if it is, this makes
+ % the output somewhat more palatable.)
+ \global\output = {\onepageout{\pagecontents\PAGE}}%
+ }%
+ \eject
+ \endgroup % started in \begindoublecolumns
+ %
+ % \pagegoal was set to the doubled \vsize above, since we restarted
+ % the current page. We're now back to normal single-column
+ % typesetting, so reset \pagegoal to the normal \vsize (after the
+ % \endgroup where \vsize got restored).
+ \pagegoal = \vsize
+}
+%
+% Called at the end of the double column material.
+\def\balancecolumns{%
+ \setbox0 = \vbox{\unvbox255}% like \box255 but more efficient, see p.120.
+ \dimen@ = \ht0
+ \advance\dimen@ by \topskip
+ \advance\dimen@ by-\baselineskip
+ \divide\dimen@ by 2 % target to split to
+ %debug\message{final 2-column material height=\the\ht0, target=\the\dimen@.}%
+ \splittopskip = \topskip
+ % Loop until we get a decent breakpoint.
+ {%
+ \vbadness = 10000
+ \loop
+ \global\setbox3 = \copy0
+ \global\setbox1 = \vsplit3 to \dimen@
+ \ifdim\ht3>\dimen@
+ \global\advance\dimen@ by 1pt
+ \repeat
+ }%
+ %debug\message{split to \the\dimen@, column heights: \the\ht1, \the\ht3.}%
+ \setbox0=\vbox to\dimen@{\unvbox1}%
+ \setbox2=\vbox to\dimen@{\unvbox3}%
+ %
+ \pagesofar
+}
+\catcode`\@ = \other
+
+
+\message{sectioning,}
+% Chapters, sections, etc.
+
+% \unnumberedno is an oxymoron, of course. But we count the unnumbered
+% sections so that we can refer to them unambiguously in the pdf
+% outlines by their "section number". We avoid collisions with chapter
+% numbers by starting them at 10000. (If a document ever has 10000
+% chapters, we're in trouble anyway, I'm sure.)
+\newcount\unnumberedno \unnumberedno = 10000
+\newcount\chapno
+\newcount\secno \secno=0
+\newcount\subsecno \subsecno=0
+\newcount\subsubsecno \subsubsecno=0
+
+% This counter is funny since it counts through charcodes of letters A, B, ...
+\newcount\appendixno \appendixno = `\@
+%
+% \def\appendixletter{\char\the\appendixno}
+% We do the following ugly conditional instead of the above simple
+% construct for the sake of pdftex, which needs the actual
+% letter in the expansion, not just a command that typesets it.
+%
+\def\appendixletter{%
+ \ifnum\appendixno=`A A%
+ \else\ifnum\appendixno=`B B%
+ \else\ifnum\appendixno=`C C%
+ \else\ifnum\appendixno=`D D%
+ \else\ifnum\appendixno=`E E%
+ \else\ifnum\appendixno=`F F%
+ \else\ifnum\appendixno=`G G%
+ \else\ifnum\appendixno=`H H%
+ \else\ifnum\appendixno=`I I%
+ \else\ifnum\appendixno=`J J%
+ \else\ifnum\appendixno=`K K%
+ \else\ifnum\appendixno=`L L%
+ \else\ifnum\appendixno=`M M%
+ \else\ifnum\appendixno=`N N%
+ \else\ifnum\appendixno=`O O%
+ \else\ifnum\appendixno=`P P%
+ \else\ifnum\appendixno=`Q Q%
+ \else\ifnum\appendixno=`R R%
+ \else\ifnum\appendixno=`S S%
+ \else\ifnum\appendixno=`T T%
+ \else\ifnum\appendixno=`U U%
+ \else\ifnum\appendixno=`V V%
+ \else\ifnum\appendixno=`W W%
+ \else\ifnum\appendixno=`X X%
+ \else\ifnum\appendixno=`Y Y%
+ \else\ifnum\appendixno=`Z Z%
+ % The \the is necessary, despite appearances, because \appendixletter is
+ % expanded while writing the .toc file. \char\appendixno is not
+ % expandable, thus it is written literally, thus all appendixes come out
+ % with the same letter (or @) in the toc without it.
+ \else\char\the\appendixno
+ \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
+ \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi}
+
+% Each @chapter defines this as the name of the chapter.
+% Page headings and footings can use it. @section does likewise.
+% However, they are not reliable, because we don't use marks.
+\def\thischapter{}
+\def\thissection{}
+
+\newcount\absseclevel % used to calculate proper heading level
+\newcount\secbase\secbase=0 % @raisesections/@lowersections modify this count
+
+% @raisesections: treat @section as chapter, @subsection as section, etc.
+\def\raisesections{\global\advance\secbase by -1}
+\let\up=\raisesections % original BFox name
+
+% @lowersections: treat @chapter as section, @section as subsection, etc.
+\def\lowersections{\global\advance\secbase by 1}
+\let\down=\lowersections % original BFox name
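+
+% Typical Texinfo-level use (illustrative file name): to pull in a
+% standalone manual whose top-level units are @chapter as sections of
+% this one:
+%   @lowersections
+%   @include other-manual.texi
+%   @raisesections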
+
+% we only have subsub.
+\chardef\maxseclevel = 3
+%
+% A numbered section within an unnumbered changes to unnumbered too.
+% To achieve this, remember the "biggest" unnumbered sectioning level we are currently in:
+\chardef\unmlevel = \maxseclevel
+%
+% Trace whether the current chapter is an appendix or not:
+% \chapheadtype is "N" or "A", unnumbered chapters are ignored.
+\def\chapheadtype{N}
+
+% Choose a heading macro
+% #1 is heading type
+% #2 is heading level
+% #3 is text for heading
+\def\genhead#1#2#3{%
+ % Compute the abs. sec. level:
+ \absseclevel=#2
+ \advance\absseclevel by \secbase
+ % Make sure \absseclevel doesn't fall outside the range:
+ \ifnum \absseclevel < 0
+ \absseclevel = 0
+ \else
+ \ifnum \absseclevel > 3
+ \absseclevel = 3
+ \fi
+ \fi
+ % The heading type:
+ \def\headtype{#1}%
+ \if \headtype U%
+ \ifnum \absseclevel < \unmlevel
+ \chardef\unmlevel = \absseclevel
+ \fi
+ \else
+ % Check for appendix sections:
+ \ifnum \absseclevel = 0
+ \edef\chapheadtype{\headtype}%
+ \else
+ \if \headtype A\if \chapheadtype N%
+ \errmessage{@appendix... within a non-appendix chapter}%
+ \fi\fi
+ \fi
+ % Check for numbered within unnumbered:
+ \ifnum \absseclevel > \unmlevel
+ \def\headtype{U}%
+ \else
+ \chardef\unmlevel = 3
+ \fi
+ \fi
+ % Now print the heading:
+ \if \headtype U%
+ \ifcase\absseclevel
+ \unnumberedzzz{#3}%
+ \or \unnumberedseczzz{#3}%
+ \or \unnumberedsubseczzz{#3}%
+ \or \unnumberedsubsubseczzz{#3}%
+ \fi
+ \else
+ \if \headtype A%
+ \ifcase\absseclevel
+ \appendixzzz{#3}%
+ \or \appendixsectionzzz{#3}%
+ \or \appendixsubseczzz{#3}%
+ \or \appendixsubsubseczzz{#3}%
+ \fi
+ \else
+ \ifcase\absseclevel
+ \chapterzzz{#3}%
+ \or \seczzz{#3}%
+ \or \numberedsubseczzz{#3}%
+ \or \numberedsubsubseczzz{#3}%
+ \fi
+ \fi
+ \fi
+ \suppressfirstparagraphindent
+}
+
+% an interface:
+\def\numhead{\genhead N}
+\def\apphead{\genhead A}
+\def\unnmhead{\genhead U}
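+
+% For example, after @lowersections (\secbase = 1), an @chapter arrives
+% here as \numhead0: \absseclevel becomes 1, so the heading is handed to
+% \seczzz and comes out as a numbered section.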
+
+% @chapter, @appendix, @unnumbered. Increment top-level counter, reset
+% all lower-level sectioning counters to zero.
+%
+% Also set \chaplevelprefix, which we prepend to @float sequence numbers
+% (e.g., figures), q.v. By default (before any chapter), that is empty.
+\let\chaplevelprefix = \empty
+%
+\outer\parseargdef\chapter{\numhead0{#1}} % normally numhead0 calls chapterzzz
+\def\chapterzzz#1{%
+ % section resetting is \global in case the chapter is in a group, such
+ % as an @include file.
+ \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
+ \global\advance\chapno by 1
+ %
+ % Used for \float.
+ \gdef\chaplevelprefix{\the\chapno.}%
+ \resetallfloatnos
+ %
+ \message{\putwordChapter\space \the\chapno}%
+ %
+ % Write the actual heading.
+ \chapmacro{#1}{Ynumbered}{\the\chapno}%
+ %
+ % So @section and the like are numbered underneath this chapter.
+ \global\let\section = \numberedsec
+ \global\let\subsection = \numberedsubsec
+ \global\let\subsubsection = \numberedsubsubsec
+}
+
+\outer\parseargdef\appendix{\apphead0{#1}} % normally apphead0 calls appendixzzz
+\def\appendixzzz#1{%
+ \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
+ \global\advance\appendixno by 1
+ \gdef\chaplevelprefix{\appendixletter.}%
+ \resetallfloatnos
+ %
+ \def\appendixnum{\putwordAppendix\space \appendixletter}%
+ \message{\appendixnum}%
+ %
+ \chapmacro{#1}{Yappendix}{\appendixletter}%
+ %
+ \global\let\section = \appendixsec
+ \global\let\subsection = \appendixsubsec
+ \global\let\subsubsection = \appendixsubsubsec
+}
+
+\outer\parseargdef\unnumbered{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz
+\def\unnumberedzzz#1{%
+ \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
+ \global\advance\unnumberedno by 1
+ %
+ % Since an unnumbered has no number, no prefix for figures.
+ \global\let\chaplevelprefix = \empty
+ \resetallfloatnos
+ %
+ % This used to be simply \message{#1}, but TeX fully expands the
+ % argument to \message. Therefore, if #1 contained @-commands, TeX
+ % expanded them. For example, in `@unnumbered The @cite{Book}', TeX
+ % expanded @cite (which turns out to cause errors because \cite is meant
+ % to be executed, not expanded).
+ %
+ % Anyway, we don't want the fully-expanded definition of @cite to appear
+ % as a result of the \message, we just want `@cite' itself. We use
+ % \the<toks register> to achieve this: TeX expands \the<toks> only once,
+ % simply yielding the contents of <toks register>. (We also do this for
+ % the toc entries.)
+ \toks0 = {#1}%
+ \message{(\the\toks0)}%
+ %
+ \chapmacro{#1}{Ynothing}{\the\unnumberedno}%
+ %
+ \global\let\section = \unnumberedsec
+ \global\let\subsection = \unnumberedsubsec
+ \global\let\subsubsection = \unnumberedsubsubsec
+}
+
+% @centerchap is like @unnumbered, but the heading is centered.
+\outer\parseargdef\centerchap{%
+ % Well, we could do the following in a group, but that would break
+ % an assumption that \chapmacro is called at the outermost level.
+ % Thus we are safer this way: --kasal, 24feb04
+ \let\centerparametersmaybe = \centerparameters
+ \unnmhead0{#1}%
+ \let\centerparametersmaybe = \relax
+}
+
+% @top is like @unnumbered.
+\let\top\unnumbered
+
+% Sections.
+\outer\parseargdef\numberedsec{\numhead1{#1}} % normally calls seczzz
+\def\seczzz#1{%
+ \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
+ \sectionheading{#1}{sec}{Ynumbered}{\the\chapno.\the\secno}%
+}
+
+\outer\parseargdef\appendixsection{\apphead1{#1}} % normally calls appendixsectionzzz
+\def\appendixsectionzzz#1{%
+ \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
+ \sectionheading{#1}{sec}{Yappendix}{\appendixletter.\the\secno}%
+}
+\let\appendixsec\appendixsection
+
+\outer\parseargdef\unnumberedsec{\unnmhead1{#1}} % normally calls unnumberedseczzz
+\def\unnumberedseczzz#1{%
+ \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
+ \sectionheading{#1}{sec}{Ynothing}{\the\unnumberedno.\the\secno}%
+}
+
+% Subsections.
+\outer\parseargdef\numberedsubsec{\numhead2{#1}} % normally calls numberedsubseczzz
+\def\numberedsubseczzz#1{%
+ \global\subsubsecno=0 \global\advance\subsecno by 1
+ \sectionheading{#1}{subsec}{Ynumbered}{\the\chapno.\the\secno.\the\subsecno}%
+}
+
+\outer\parseargdef\appendixsubsec{\apphead2{#1}} % normally calls appendixsubseczzz
+\def\appendixsubseczzz#1{%
+ \global\subsubsecno=0 \global\advance\subsecno by 1
+ \sectionheading{#1}{subsec}{Yappendix}%
+ {\appendixletter.\the\secno.\the\subsecno}%
+}
+
+\outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}} %normally calls unnumberedsubseczzz
+\def\unnumberedsubseczzz#1{%
+ \global\subsubsecno=0 \global\advance\subsecno by 1
+ \sectionheading{#1}{subsec}{Ynothing}%
+ {\the\unnumberedno.\the\secno.\the\subsecno}%
+}
+
+% Subsubsections.
+\outer\parseargdef\numberedsubsubsec{\numhead3{#1}} % normally numberedsubsubseczzz
+\def\numberedsubsubseczzz#1{%
+ \global\advance\subsubsecno by 1
+ \sectionheading{#1}{subsubsec}{Ynumbered}%
+ {\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno}%
+}
+
+\outer\parseargdef\appendixsubsubsec{\apphead3{#1}} % normally appendixsubsubseczzz
+\def\appendixsubsubseczzz#1{%
+ \global\advance\subsubsecno by 1
+ \sectionheading{#1}{subsubsec}{Yappendix}%
+ {\appendixletter.\the\secno.\the\subsecno.\the\subsubsecno}%
+}
+
+\outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}} %normally unnumberedsubsubseczzz
+\def\unnumberedsubsubseczzz#1{%
+ \global\advance\subsubsecno by 1
+ \sectionheading{#1}{subsubsec}{Ynothing}%
+ {\the\unnumberedno.\the\secno.\the\subsecno.\the\subsubsecno}%
+}
+
+% These macros control what the section commands do, according
+% to what kind of chapter we are in (ordinary, appendix, or unnumbered).
+% Define them by default for a numbered chapter.
+\let\section = \numberedsec
+\let\subsection = \numberedsubsec
+\let\subsubsection = \numberedsubsubsec
+
+% Define @majorheading, @heading and @subheading
+
+% NOTE on use of \vbox for chapter headings, section headings, and such:
+% 1) We use \vbox rather than the earlier \line to permit
+% overlong headings to fold.
+% 2) \hyphenpenalty is set to 10000 because hyphenation in a
+% heading is obnoxious; this forbids it.
+% 3) Likewise, headings look best if no \parindent is used, and
+% if justification is not attempted. Hence \raggedright.
+
+
+\def\majorheading{%
+ {\advance\chapheadingskip by 10pt \chapbreak }%
+ \parsearg\chapheadingzzz
+}
+
+\def\chapheading{\chapbreak \parsearg\chapheadingzzz}
+\def\chapheadingzzz#1{%
+ {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt\raggedright
+ \rm #1\hfill}}%
+ \bigskip \par\penalty 200\relax
+ \suppressfirstparagraphindent
+}
+
+% @heading, @subheading, @subsubheading.
+\parseargdef\heading{\sectionheading{#1}{sec}{Yomitfromtoc}{}
+ \suppressfirstparagraphindent}
+\parseargdef\subheading{\sectionheading{#1}{subsec}{Yomitfromtoc}{}
+ \suppressfirstparagraphindent}
+\parseargdef\subsubheading{\sectionheading{#1}{subsubsec}{Yomitfromtoc}{}
+ \suppressfirstparagraphindent}
+
+% These macros generate a chapter, section, etc. heading only
+% (including whitespace, linebreaking, etc. around it),
+% given all the information in convenient, parsed form.
+
+%%% Args are the skip and penalty (usually negative)
+\def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi}
+
+%%% Define plain chapter starts, and page on/off switching for it
+% Parameter controlling skip before chapter headings (if needed)
+
+\newskip\chapheadingskip
+
+\def\chapbreak{\dobreak \chapheadingskip {-4000}}
+\def\chappager{\par\vfill\supereject}
+\def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi}
+
+\def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname}
+
+\def\CHAPPAGoff{%
+\global\let\contentsalignmacro = \chappager
+\global\let\pchapsepmacro=\chapbreak
+\global\let\pagealignmacro=\chappager}
+
+\def\CHAPPAGon{%
+\global\let\contentsalignmacro = \chappager
+\global\let\pchapsepmacro=\chappager
+\global\let\pagealignmacro=\chappager
+\global\def\HEADINGSon{\HEADINGSsingle}}
+
+\def\CHAPPAGodd{%
+\global\let\contentsalignmacro = \chapoddpage
+\global\let\pchapsepmacro=\chapoddpage
+\global\let\pagealignmacro=\chapoddpage
+\global\def\HEADINGSon{\HEADINGSdouble}}
+
+\CHAPPAGon
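+
+% At the Texinfo level (illustrative):
+%   @setchapternewpage off
+%   @setchapternewpage odd
+% select \CHAPPAGoff and \CHAPPAGodd respectively; the \CHAPPAGon just
+% above is the default, corresponding to @setchapternewpage on.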
+
+% Chapter opening.
+%
+% #1 is the text, #2 is the section type (Ynumbered, Ynothing,
+% Yappendix, Yomitfromtoc), #3 the chapter number.
+%
+% To test against our argument.
+\def\Ynothingkeyword{Ynothing}
+\def\Yomitfromtockeyword{Yomitfromtoc}
+\def\Yappendixkeyword{Yappendix}
+%
+\def\chapmacro#1#2#3{%
+ \pchapsepmacro
+ {%
+ \chapfonts \rm
+ %
+ % Have to define \thissection before calling \donoderef, because the
+ % xref code eventually uses it. On the other hand, it has to be called
+ % after \pchapsepmacro, or the headline will change too soon.
+ \gdef\thissection{#1}%
+ \gdef\thischaptername{#1}%
+ %
+ % Only insert the separating space if we have a chapter/appendix
+ % number, and don't print the unnumbered ``number''.
+ \def\temptype{#2}%
+ \ifx\temptype\Ynothingkeyword
+ \setbox0 = \hbox{}%
+ \def\toctype{unnchap}%
+ \gdef\thischapternum{}%
+ \gdef\thischapter{#1}%
+ \else\ifx\temptype\Yomitfromtockeyword
+ \setbox0 = \hbox{}% contents like unnumbered, but no toc entry
+ \def\toctype{omit}%
+ \gdef\thischapternum{}%
+ \gdef\thischapter{}%
+ \else\ifx\temptype\Yappendixkeyword
+ \setbox0 = \hbox{\putwordAppendix{} #3\enspace}%
+ \def\toctype{app}%
+ \xdef\thischapternum{\appendixletter}%
+ % We don't substitute the actual chapter name into \thischapter
+ % because we don't want its macros evaluated now. And we don't
+ % use \thissection because that changes with each section.
+ %
+ \xdef\thischapter{\putwordAppendix{} \appendixletter:
+ \noexpand\thischaptername}%
+ \else
+ \setbox0 = \hbox{#3\enspace}%
+ \def\toctype{numchap}%
+ \xdef\thischapternum{\the\chapno}%
+ \xdef\thischapter{\putwordChapter{} \the\chapno:
+ \noexpand\thischaptername}%
+ \fi\fi\fi
+ %
+ % Write the toc entry for this chapter. Must come before the
+ % \donoderef, because we include the current node name in the toc
+ % entry, and \donoderef resets it to empty.
+ \writetocentry{\toctype}{#1}{#3}%
+ %
+ % For pdftex, we have to write out the node definition (aka, make
+ % the pdfdest) after any page break, but before the actual text has
+ % been typeset. If the destination for the pdf outline is after the
+ % text, then jumping from the outline may wind up with the text not
+ % being visible, for instance under high magnification.
+ \donoderef{#2}%
+ %
+ % Typeset the actual heading.
+ \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
+ \hangindent=\wd0 \centerparametersmaybe
+ \unhbox0 #1\par}%
+ }%
+ \nobreak\bigskip % no page break after a chapter title
+ \nobreak
+}
+
+% @centerchap -- centered and unnumbered.
+\let\centerparametersmaybe = \relax
+\def\centerparameters{%
+ \advance\rightskip by 3\rightskip
+ \leftskip = \rightskip
+ \parfillskip = 0pt
+}
+
+
+% I don't think this chapter style is supported any more, so I'm not
+% updating it with the new noderef stuff. We'll see. --karl, 11aug03.
+%
+\def\setchapterstyle #1 {\csname CHAPF#1\endcsname}
+%
+\def\unnchfopen #1{%
+\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt\raggedright
+ \rm #1\hfill}}\bigskip \par\nobreak
+}
+\def\chfopen #1#2{\chapoddpage {\chapfonts
+\vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}%
+\par\penalty 5000 %
+}
+\def\centerchfopen #1{%
+\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
+ \parindent=0pt
+ \hfill {\rm #1}\hfill}}\bigskip \par\nobreak
+}
+\def\CHAPFopen{%
+ \global\let\chapmacro=\chfopen
+ \global\let\centerchapmacro=\centerchfopen}
+
+
+% Section titles. These macros combine the section number parts and
+% call the generic \sectionheading to do the printing.
+%
+\newskip\secheadingskip
+\def\secheadingbreak{\dobreak \secheadingskip{-1000}}
+
+% Subsection titles.
+\newskip\subsecheadingskip
+\def\subsecheadingbreak{\dobreak \subsecheadingskip{-500}}
+
+% Subsubsection titles.
+\def\subsubsecheadingskip{\subsecheadingskip}
+\def\subsubsecheadingbreak{\subsecheadingbreak}
+
+
+% Print any size, any type, section title.
+%
+% #1 is the text, #2 is the section level (sec/subsec/subsubsec), #3 is
+% the section type for xrefs (Ynumbered, Ynothing, Yappendix), #4 is the
+% section number.
+%
+\def\sectionheading#1#2#3#4{%
+ {%
+ % Switch to the right set of fonts.
+ \csname #2fonts\endcsname \rm
+ %
+ % Insert space above the heading.
+ \csname #2headingbreak\endcsname
+ %
+ % Only insert the space after the number if we have a section number.
+ \def\sectionlevel{#2}%
+ \def\temptype{#3}%
+ %
+ \ifx\temptype\Ynothingkeyword
+ \setbox0 = \hbox{}%
+ \def\toctype{unn}%
+ \gdef\thissection{#1}%
+ \else\ifx\temptype\Yomitfromtockeyword
+ % for @headings -- no section number, don't include in toc,
+ % and don't redefine \thissection.
+ \setbox0 = \hbox{}%
+ \def\toctype{omit}%
+ \let\sectionlevel=\empty
+ \else\ifx\temptype\Yappendixkeyword
+ \setbox0 = \hbox{#4\enspace}%
+ \def\toctype{app}%
+ \gdef\thissection{#1}%
+ \else
+ \setbox0 = \hbox{#4\enspace}%
+ \def\toctype{num}%
+ \gdef\thissection{#1}%
+ \fi\fi\fi
+ %
+ % Write the toc entry (before \donoderef). See comments in \chapmacro.
+ \writetocentry{\toctype\sectionlevel}{#1}{#4}%
+ %
+ % Write the node reference (= pdf destination for pdftex).
+ % Again, see comments in \chapmacro.
+ \donoderef{#3}%
+ %
+ % Interline glue will be inserted when the vbox is completed.
+ % That glue will be a valid breakpoint for the page, since it'll be
+ % preceded by a whatsit (usually from the \donoderef, or from the
+ % \writetocentry if there was no node). We don't want to allow that
+ % break, since then the whatsits could end up on page n while the
+ % section is on page n+1, thus toc/etc. are wrong. Debian bug 276000.
+ \nobreak
+ %
+ % Output the actual section heading.
+ \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
+ \hangindent=\wd0 % zero if no section number
+ \unhbox0 #1}%
+ }%
+ % Add extra space after the heading -- half of whatever came above it.
+ % Don't allow stretch, though.
+ \kern .5 \csname #2headingskip\endcsname
+ %
+ % Do not let the kern be a potential breakpoint, as it would be if it
+ % was followed by glue.
+ \nobreak
+ %
+ % We'll almost certainly start a paragraph next, so don't let that
+ % glue accumulate. (Not a breakpoint because it's preceded by a
+ % discardable item.)
+ \vskip-\parskip
+ %
+  % This is purely so the last item on the list is a known \penalty >
+  % 10000, which lets \startdefun avoid allowing breakpoints after
+  % section headings.  Otherwise, it would insert a valid breakpoint between:
+ %
+ % @section sec-whatever
+ % @deffn def-whatever
+ \penalty 10001
+}
+
+
+\message{toc,}
+% Table of contents.
+\newwrite\tocfile
+
+% Write an entry to the toc file, opening it if necessary.
+% Called from @chapter, etc.
+%
+% Example usage: \writetocentry{sec}{Section Name}{\the\chapno.\the\secno}
+% We append the current node name (if any) and page number as additional
+% arguments for the \{chap,sec,...}entry macros which will eventually
+% read this. The node name is used in the pdf outlines as the
+% destination to jump to.
+%
+% We open the .toc file for writing here instead of at @setfilename (or
+% any other fixed time) so that @contents can be anywhere in the document.
+% But if #1 is `omit', then we don't do anything. This is used for the
+% table of contents chapter openings themselves.
+%
+\newif\iftocfileopened
+\def\omitkeyword{omit}%
+%
+\def\writetocentry#1#2#3{%
+ \edef\writetoctype{#1}%
+ \ifx\writetoctype\omitkeyword \else
+ \iftocfileopened\else
+ \immediate\openout\tocfile = \jobname.toc
+ \global\tocfileopenedtrue
+ \fi
+ %
+ \iflinks
+ {\atdummies
+ \edef\temp{%
+ \write\tocfile{@#1entry{#2}{#3}{\lastnode}{\noexpand\folio}}}%
+ \temp
+ }%
+ \fi
+ \fi
+ %
+ % Tell \shipout to create a pdf destination on each page, if we're
+ % writing pdf. These are used in the table of contents. We can't
+ % just write one on every page because the title pages are numbered
+ % 1 and 2 (the page numbers aren't printed), and so are the first
+ % two pages of the document. Thus, we'd have two destinations named
+ % `1', and two named `2'.
+ \ifpdf \global\pdfmakepagedesttrue \fi
+}
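+
+% For instance (title, node, and page invented), a third chapter titled
+% "Invoking stow", in node "Invoking stow" and starting on page 9, would
+% produce a .toc line roughly like
+%   @numchapentry{Invoking stow}{3}{Invoking stow}{9}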
+
+
+% These characters do not print properly in the Computer Modern roman
+% fonts, so we must take special care. This is more or less redundant
+% with the Texinfo input format setup at the end of this file.
+%
+\def\activecatcodes{%
+ \catcode`\"=\active
+ \catcode`\$=\active
+ \catcode`\<=\active
+ \catcode`\>=\active
+ \catcode`\\=\active
+ \catcode`\^=\active
+ \catcode`\_=\active
+ \catcode`\|=\active
+ \catcode`\~=\active
+}
+
+
+% Read the toc file, which is essentially Texinfo input.
+\def\readtocfile{%
+ \setupdatafile
+ \activecatcodes
+ \input \jobname.toc
+}
+
+\newskip\contentsrightmargin \contentsrightmargin=1in
+\newcount\savepageno
+\newcount\lastnegativepageno \lastnegativepageno = -1
+
+% Prepare to read what we've written to \tocfile.
+%
+\def\startcontents#1{%
+ % If @setchapternewpage on, and @headings double, the contents should
+ % start on an odd page, unlike chapters. Thus, we maintain
+ % \contentsalignmacro in parallel with \pagealignmacro.
+ % From: Torbjorn Granlund <tege@matematik.su.se>
+ \contentsalignmacro
+ \immediate\closeout\tocfile
+ %
+ % Don't need to put `Contents' or `Short Contents' in the headline.
+ % It is abundantly clear what they are.
+ \def\thischapter{}%
+ \chapmacro{#1}{Yomitfromtoc}{}%
+ %
+ \savepageno = \pageno
+ \begingroup % Set up to handle contents files properly.
+ \raggedbottom % Worry more about breakpoints than the bottom.
+ \advance\hsize by -\contentsrightmargin % Don't use the full line length.
+ %
+ % Roman numerals for page numbers.
+ \ifnum \pageno>0 \global\pageno = \lastnegativepageno \fi
+}
+
+
+% Normal (long) toc.
+\def\contents{%
+ \startcontents{\putwordTOC}%
+ \openin 1 \jobname.toc
+ \ifeof 1 \else
+ \readtocfile
+ \fi
+ \vfill \eject
+ \contentsalignmacro % in case @setchapternewpage odd is in effect
+ \ifeof 1 \else
+ \pdfmakeoutlines
+ \fi
+ \closein 1
+ \endgroup
+ \lastnegativepageno = \pageno
+ \global\pageno = \savepageno
+}
+
+% And just the chapters.
+\def\summarycontents{%
+ \startcontents{\putwordShortTOC}%
+ %
+ \let\numchapentry = \shortchapentry
+ \let\appentry = \shortchapentry
+ \let\unnchapentry = \shortunnchapentry
+ % We want a true roman here for the page numbers.
+ \secfonts
+ \let\rm=\shortcontrm \let\bf=\shortcontbf
+ \let\sl=\shortcontsl \let\tt=\shortconttt
+ \rm
+ \hyphenpenalty = 10000
+ \advance\baselineskip by 1pt % Open it up a little.
+ \def\numsecentry##1##2##3##4{}
+ \let\appsecentry = \numsecentry
+ \let\unnsecentry = \numsecentry
+ \let\numsubsecentry = \numsecentry
+ \let\appsubsecentry = \numsecentry
+ \let\unnsubsecentry = \numsecentry
+ \let\numsubsubsecentry = \numsecentry
+ \let\appsubsubsecentry = \numsecentry
+ \let\unnsubsubsecentry = \numsecentry
+ \openin 1 \jobname.toc
+ \ifeof 1 \else
+ \readtocfile
+ \fi
+ \closein 1
+ \vfill \eject
+ \contentsalignmacro % in case @setchapternewpage odd is in effect
+ \endgroup
+ \lastnegativepageno = \pageno
+ \global\pageno = \savepageno
+}
+\let\shortcontents = \summarycontents
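+
+% Texinfo-level usage (illustrative):
+%   @shortcontents
+%   @contents
+% Since the .toc file is opened lazily by \writetocentry, these can
+% appear anywhere in the document.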
+
+% Typeset the label for a chapter or appendix for the short contents.
+% The arg is, e.g., `A' for an appendix, or `3' for a chapter.
+%
+\def\shortchaplabel#1{%
+ % This space should be enough, since a single number is .5em, and the
+ % widest letter (M) is 1em, at least in the Computer Modern fonts.
+ % But use \hss just in case.
+ % (This space doesn't include the extra space that gets added after
+ % the label; that gets put in by \shortchapentry above.)
+ %
+ % We'd like to right-justify chapter numbers, but that looks strange
+ % with appendix letters. And right-justifying numbers and
+  % left-justifying letters looks strange when there are fewer than 10
+  % chapters. We would have to read the whole toc once to know how many chapters
+ % there are before deciding ...
+ \hbox to 1em{#1\hss}%
+}
+
+% These macros generate individual entries in the table of contents.
+% The first argument is the chapter or section name.
+% The last argument is the page number.
+% The arguments in between are the chapter number, section number, ...
+
+% Chapters, in the main contents.
+\def\numchapentry#1#2#3#4{\dochapentry{#2\labelspace#1}{#4}}
+%
+% Chapters, in the short toc.
+% See comments in \dochapentry re vbox and related settings.
+\def\shortchapentry#1#2#3#4{%
+ \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno\bgroup#4\egroup}%
+}
+
+% Appendices, in the main contents.
+% Need the word Appendix, and a fixed-size box.
+%
+\def\appendixbox#1{%
+ % We use M since it's probably the widest letter.
+ \setbox0 = \hbox{\putwordAppendix{} M}%
+ \hbox to \wd0{\putwordAppendix{} #1\hss}}
+%
+\def\appentry#1#2#3#4{\dochapentry{\appendixbox{#2}\labelspace#1}{#4}}
+
+% Unnumbered chapters.
+\def\unnchapentry#1#2#3#4{\dochapentry{#1}{#4}}
+\def\shortunnchapentry#1#2#3#4{\tocentry{#1}{\doshortpageno\bgroup#4\egroup}}
+
+% Sections.
+\def\numsecentry#1#2#3#4{\dosecentry{#2\labelspace#1}{#4}}
+\let\appsecentry=\numsecentry
+\def\unnsecentry#1#2#3#4{\dosecentry{#1}{#4}}
+
+% Subsections.
+\def\numsubsecentry#1#2#3#4{\dosubsecentry{#2\labelspace#1}{#4}}
+\let\appsubsecentry=\numsubsecentry
+\def\unnsubsecentry#1#2#3#4{\dosubsecentry{#1}{#4}}
+
+% And subsubsections.
+\def\numsubsubsecentry#1#2#3#4{\dosubsubsecentry{#2\labelspace#1}{#4}}
+\let\appsubsubsecentry=\numsubsubsecentry
+\def\unnsubsubsecentry#1#2#3#4{\dosubsubsecentry{#1}{#4}}
+
+% This parameter controls the indentation of the various levels.
+% Same as \defaultparindent.
+\newdimen\tocindent \tocindent = 15pt
+
+% Now for the actual typesetting. In all these, #1 is the text and #2 is the
+% page number.
+%
+% If the toc has to be broken over pages, we want it to be at chapters
+% if at all possible; hence the \penalty.
+\def\dochapentry#1#2{%
+ \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip
+ \begingroup
+ \chapentryfonts
+ \tocentry{#1}{\dopageno\bgroup#2\egroup}%
+ \endgroup
+ \nobreak\vskip .25\baselineskip plus.1\baselineskip
+}
+
+\def\dosecentry#1#2{\begingroup
+ \secentryfonts \leftskip=\tocindent
+ \tocentry{#1}{\dopageno\bgroup#2\egroup}%
+\endgroup}
+
+\def\dosubsecentry#1#2{\begingroup
+ \subsecentryfonts \leftskip=2\tocindent
+ \tocentry{#1}{\dopageno\bgroup#2\egroup}%
+\endgroup}
+
+\def\dosubsubsecentry#1#2{\begingroup
+ \subsubsecentryfonts \leftskip=3\tocindent
+ \tocentry{#1}{\dopageno\bgroup#2\egroup}%
+\endgroup}
+
+% We use the same \entry macro as for the index entries.
+\let\tocentry = \entry
+
+% Space between chapter (or whatever) number and the title.
+\def\labelspace{\hskip1em \relax}
+
+\def\dopageno#1{{\rm #1}}
+\def\doshortpageno#1{{\rm #1}}
+
+\def\chapentryfonts{\secfonts \rm}
+\def\secentryfonts{\textfonts}
+\def\subsecentryfonts{\textfonts}
+\def\subsubsecentryfonts{\textfonts}
+
+
+\message{environments,}
+% @foo ... @end foo.
+
+% @point{}, @result{}, @expansion{}, @print{}, @equiv{}.
+%
+% Since these characters are used in examples, it should be an even number of
+% \tt widths. Each \tt character is 1en, so two makes it 1em.
+%
+\def\point{$\star$}
+\def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}}
+\def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}}
+\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}}
+\def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}}
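+
+% Typical Texinfo usage of these glyphs (illustrative):
+%   @lisp
+%   (+ 2 2)
+%       @result{} 4
+%   @end lisp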
+
+% The @error{} command.
+% Adapted from the TeXbook's \boxit.
+%
+\newbox\errorbox
+%
+{\tentt \global\dimen0 = 3em}% Width of the box.
+\dimen2 = .55pt % Thickness of rules
+% The text. (`r' is open on the right, `e' somewhat less so on the left.)
+\setbox0 = \hbox{\kern-.75pt \reducedsf error\kern-1.5pt}
+%
+\setbox\errorbox=\hbox to \dimen0{\hfil
+ \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right.
+ \advance\hsize by -2\dimen2 % Rules.
+ \vbox{%
+ \hrule height\dimen2
+ \hbox{\vrule width\dimen2 \kern3pt % Space to left of text.
+ \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below.
+ \kern3pt\vrule width\dimen2}% Space to right.
+ \hrule height\dimen2}
+ \hfil}
+%
+\def\error{\leavevmode\lower.7ex\copy\errorbox}
+
+% @tex ... @end tex escapes into raw TeX temporarily.
+% One exception: @ is still an escape character, so that @end tex works.
+% But \@ or @@ will get a plain TeX @ character.
+
+\envdef\tex{%
+ \catcode `\\=0 \catcode `\{=1 \catcode `\}=2
+ \catcode `\$=3 \catcode `\&=4 \catcode `\#=6
+ \catcode `\^=7 \catcode `\_=8 \catcode `\~=\active \let~=\tie
+ \catcode `\%=14
+ \catcode `\+=\other
+ \catcode `\"=\other
+ \catcode `\|=\other
+ \catcode `\<=\other
+ \catcode `\>=\other
+ \escapechar=`\\
+ %
+ \let\b=\ptexb
+ \let\bullet=\ptexbullet
+ \let\c=\ptexc
+ \let\,=\ptexcomma
+ \let\.=\ptexdot
+ \let\dots=\ptexdots
+ \let\equiv=\ptexequiv
+ \let\!=\ptexexclam
+ \let\i=\ptexi
+ \let\indent=\ptexindent
+ \let\noindent=\ptexnoindent
+ \let\{=\ptexlbrace
+ \let\+=\tabalign
+ \let\}=\ptexrbrace
+ \let\/=\ptexslash
+ \let\*=\ptexstar
+ \let\t=\ptext
+ \let\frenchspacing=\plainfrenchspacing
+ %
+ \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}%
+ \def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}%
+ \def\@{@}%
+}
+% There is no need to define \Etex.
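+
+% An illustrative escape into raw TeX from a Texinfo source file:
+%   @tex
+%   $$e^{i\pi} + 1 = 0$$
+%   @end tex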
+
+% Define @lisp ... @end lisp.
+% @lisp environment forms a group so it can rebind things,
+% including the definition of @end lisp (which normally is erroneous).
+
+% Amount to narrow the margins by for @lisp.
+\newskip\lispnarrowing \lispnarrowing=0.4in
+
+% This is the definition that ^^M gets inside @lisp, @example, and other
+% such environments. \null is better than a space, since it doesn't
+% have any width.
+\def\lisppar{\null\endgraf}
+
+% This space is always present above and below environments.
+\newskip\envskipamount \envskipamount = 0pt
+
+% Make the spacing above and below the environment symmetrical. We use \parskip here
+% to help in doing that, since in @example-like environments \parskip
+% is reset to zero; thus the \afterenvbreak inserts no space -- but the
+% start of the next paragraph will insert \parskip.
+%
+\def\aboveenvbreak{{%
+ % =10000 instead of <10000 because of a special case in \itemzzz and
+ % \sectionheading, q.v.
+ \ifnum \lastpenalty=10000 \else
+ \advance\envskipamount by \parskip
+ \endgraf
+ \ifdim\lastskip<\envskipamount
+ \removelastskip
+ % it's not a good place to break if the last penalty was \nobreak
+ % or better ...
+ \ifnum\lastpenalty<10000 \penalty-50 \fi
+ \vskip\envskipamount
+ \fi
+ \fi
+}}
+
+\let\afterenvbreak = \aboveenvbreak
+
+% \nonarrowing is a flag. If "set", @lisp etc. don't narrow the margins;
+% they also clear the flag, so that embedded environments narrow again.
+\let\nonarrowing=\relax
+
+% @cartouche ... @end cartouche: draw rectangle w/rounded corners around
+% environment contents.
+\font\circle=lcircle10
+\newdimen\circthick
+\newdimen\cartouter\newdimen\cartinner
+\newskip\normbskip\newskip\normpskip\newskip\normlskip
+\circthick=\fontdimen8\circle
+%
+\def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth
+\def\ctr{{\hskip 6pt\circle\char'010}}
+\def\cbl{{\circle\char'012\hskip -6pt}}
+\def\cbr{{\hskip 6pt\circle\char'011}}
+\def\carttop{\hbox to \cartouter{\hskip\lskip
+ \ctl\leaders\hrule height\circthick\hfil\ctr
+ \hskip\rskip}}
+\def\cartbot{\hbox to \cartouter{\hskip\lskip
+ \cbl\leaders\hrule height\circthick\hfil\cbr
+ \hskip\rskip}}
+%
+\newskip\lskip\newskip\rskip
+
+\envdef\cartouche{%
+ \ifhmode\par\fi % can't be in the midst of a paragraph.
+ \startsavinginserts
+ \lskip=\leftskip \rskip=\rightskip
+ \leftskip=0pt\rightskip=0pt % we want these *outside*.
+ \cartinner=\hsize \advance\cartinner by-\lskip
+ \advance\cartinner by-\rskip
+ \cartouter=\hsize
+ \advance\cartouter by 18.4pt % allow for 3pt kerns on either
+ % side, and for 6pt waste from
+ % each corner char, and rule thickness
+ \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip
+ % Flag to tell @lisp, etc., not to narrow margin.
+ \let\nonarrowing = t%
+ \vbox\bgroup
+ \baselineskip=0pt\parskip=0pt\lineskip=0pt
+ \carttop
+ \hbox\bgroup
+ \hskip\lskip
+ \vrule\kern3pt
+ \vbox\bgroup
+ \kern3pt
+ \hsize=\cartinner
+ \baselineskip=\normbskip
+ \lineskip=\normlskip
+ \parskip=\normpskip
+ \vskip -\parskip
+ \comment % For explanation, see the end of \def\group.
+}
+\def\Ecartouche{%
+ \ifhmode\par\fi
+ \kern3pt
+ \egroup
+ \kern3pt\vrule
+ \hskip\rskip
+ \egroup
+ \cartbot
+ \egroup
+ \checkinserts
+}
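+
+% Typical Texinfo usage (illustrative):
+%   @cartouche
+%   @example
+%   make install
+%   @end example
+%   @end cartouche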
+
+
+% This macro is called at the beginning of all the @example variants,
+% inside a group.
+\def\nonfillstart{%
+ \aboveenvbreak
+ \hfuzz = 12pt % Don't be fussy
+ \sepspaces % Make spaces be word-separators rather than space tokens.
+ \let\par = \lisppar % don't ignore blank lines
+ \obeylines % each line of input is a line of output
+ \parskip = 0pt
+ \parindent = 0pt
+ \emergencystretch = 0pt % don't try to avoid overfull boxes
+ \ifx\nonarrowing\relax
+ \advance \leftskip by \lispnarrowing
+ \exdentamount=\lispnarrowing
+ \else
+ \let\nonarrowing = \relax
+ \fi
+ \let\exdent=\nofillexdent
+}
+
+% If you want all examples etc. small: @set dispenvsize small.
+% If you want even small examples the full size: @set dispenvsize nosmall.
+% This affects the following displayed environments:
+% @example, @display, @format, @lisp
+%
+\def\smallword{small}
+\def\nosmallword{nosmall}
+\let\SETdispenvsize\relax
+\def\setnormaldispenv{%
+ \ifx\SETdispenvsize\smallword
+ \smallexamplefonts \rm
+ \fi
+}
+\def\setsmalldispenv{%
+ \ifx\SETdispenvsize\nosmallword
+ \else
+ \smallexamplefonts \rm
+ \fi
+}
+
+% We often define two environments, @foo and @smallfoo.
+% Let's do both with one command:
+\def\makedispenv #1#2{
+ \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2}
+ \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2}
+ \expandafter\let\csname E#1\endcsname \afterenvbreak
+ \expandafter\let\csname Esmall#1\endcsname \afterenvbreak
+}
+
+% Define two synonyms:
+\def\maketwodispenvs #1#2#3{
+ \makedispenv{#1}{#3}
+ \makedispenv{#2}{#3}
+}
+
+% @lisp: indented, narrowed, typewriter font; @example: same as @lisp.
+%
+% @smallexample and @smalllisp: use smaller fonts.
+% Originally contributed by Pavel@xerox.
+%
+\maketwodispenvs {lisp}{example}{%
+ \nonfillstart
+ \tt\quoteexpand
+ \let\kbdfont = \kbdexamplefont % Allow @kbd to do something special.
+ \gobble % eat return
+}
+% @display/@smalldisplay: same as @lisp except keep current font.
+%
+\makedispenv {display}{%
+ \nonfillstart
+ \gobble
+}
+
+% @format/@smallformat: same as @display except don't narrow margins.
+%
+\makedispenv{format}{%
+ \let\nonarrowing = t%
+ \nonfillstart
+ \gobble
+}
+
+% @flushleft: same as @format, but doesn't obey \SETdispenvsize.
+\envdef\flushleft{%
+ \let\nonarrowing = t%
+ \nonfillstart
+ \gobble
+}
+\let\Eflushleft = \afterenvbreak
+
+% @flushright.
+%
+\envdef\flushright{%
+ \let\nonarrowing = t%
+ \nonfillstart
+ \advance\leftskip by 0pt plus 1fill
+ \gobble
+}
+\let\Eflushright = \afterenvbreak
+
+
+% @quotation does normal linebreaking (hence we can't use \nonfillstart)
+% and narrows the margins. We keep \parskip nonzero in general, since
+% we're doing normal filling. So, when using \aboveenvbreak and
+% \afterenvbreak, temporarily make \parskip 0.
+%
+\envdef\quotation{%
+ {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip
+ \parindent=0pt
+ %
+ % @cartouche defines \nonarrowing to inhibit narrowing at next level down.
+ \ifx\nonarrowing\relax
+ \advance\leftskip by \lispnarrowing
+ \advance\rightskip by \lispnarrowing
+ \exdentamount = \lispnarrowing
+ \else
+ \let\nonarrowing = \relax
+ \fi
+ \parsearg\quotationlabel
+}
+
+% We have retained a nonzero parskip for the environment, since we're
+% doing normal filling.
+%
+\def\Equotation{%
+ \par
+ \ifx\quotationauthor\undefined\else
+ % indent a bit.
+ \leftline{\kern 2\leftskip \sl ---\quotationauthor}%
+ \fi
+ {\parskip=0pt \afterenvbreak}%
+}
+
+% If we're given an argument, typeset it in bold with a colon after.
+\def\quotationlabel#1{%
+ \def\temp{#1}%
+ \ifx\temp\empty \else
+ {\bf #1: }%
+ \fi
+}
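+
+% Typical Texinfo usage (illustrative):
+%   @quotation Note
+%   The label, when given, is set in bold and followed by a colon.
+%   @end quotation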
+
+
+% LaTeX-like @verbatim...@end verbatim and @verb{<char>...<char>}
+% If we want to allow any <char> as delimiter,
+% we need the curly braces so that makeinfo sees the @verb command, e.g.:
+% `@verbx...x' would look like the '@verbx' command. --janneke@gnu.org
+%
+% [Knuth]: Donald Ervin Knuth, 1996. The TeXbook.
+%
+% [Knuth] p.344; except that we also need to handle the other characters
+% Texinfo makes active.  Otherwise, they get lost as the first character
+% on a verbatim line.
+\def\dospecials{%
+ \do\ \do\\\do\{\do\}\do\$\do\&%
+ \do\#\do\^\do\^^K\do\_\do\^^A\do\%\do\~%
+ \do\<\do\>\do\|\do\@\do+\do\"%
+}
+%
+% [Knuth] p. 380
+\def\uncatcodespecials{%
+ \def\do##1{\catcode`##1=\other}\dospecials}
+%
+% [Knuth] pp. 380,381,391
+% Disable Spanish ligatures ?` and !` of \tt font
+\begingroup
+ \catcode`\`=\active\gdef`{\relax\lq}
+\endgroup
+%
+% Setup for the @verb command.
+%
+% Eight spaces for a tab
+\begingroup
+ \catcode`\^^I=\active
+ \gdef\tabeightspaces{\catcode`\^^I=\active\def^^I{\ \ \ \ \ \ \ \ }}
+\endgroup
+%
+\def\setupverb{%
+ \tt % easiest (and conventionally used) font for verbatim
+ \def\par{\leavevmode\endgraf}%
+ \catcode`\`=\active
+ \tabeightspaces
+ % Respect line breaks,
+ % print special symbols as themselves, and
+ % make each space count
+ % must do in this order:
+ \obeylines \uncatcodespecials \sepspaces
+}
+
+% Setup for the @verbatim environment
+%
+% Real tab expansion
+\newdimen\tabw \setbox0=\hbox{\tt\space} \tabw=8\wd0 % tab amount
+%
+\def\starttabbox{\setbox0=\hbox\bgroup}
+
+% Allow an option to not replace quotes with a regular directed right
+% quote/apostrophe (char 0x27), but instead use the undirected quote
+% from cmtt (char 0x0d). The undirected quote is ugly, so don't make it
+% the default, but it works for pasting with more pdf viewers (at least
+% evince), the lilypond developers report. xpdf does work with the
+% regular 0x27.
+%
+\def\codequoteright{%
+ \expandafter\ifx\csname SETcodequoteundirected\endcsname\relax
+ '%
+ \else
+ \char'15
+ \fi
+}
+%
+% and a similar option for the left quote char vs. a grave accent.
+% Modern fonts display ASCII 0x60 as a grave accent, so some people like
+% the code environments to do likewise.
+%
+\def\codequoteleft{%
+ \expandafter\ifx\csname SETcodequotebacktick\endcsname\relax
+ `%
+ \else
+ \char'22
+ \fi
+}
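+
+% Both options are selected from the Texinfo source with @set, which
+% defines the corresponding \SET... flag tested above, e.g.
+% (illustrative):
+%   @set codequoteundirected
+%   @set codequotebacktick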
+%
+\begingroup
+ \catcode`\^^I=\active
+ \gdef\tabexpand{%
+ \catcode`\^^I=\active
+ \def^^I{\leavevmode\egroup
+ \dimen0=\wd0 % the width so far, or since the previous tab
+ \divide\dimen0 by\tabw
+ \multiply\dimen0 by\tabw % compute previous multiple of \tabw
+ \advance\dimen0 by\tabw % advance to next multiple of \tabw
+ \wd0=\dimen0 \box0 \starttabbox
+ }%
+ }
+ \catcode`\'=\active
+ \gdef\rquoteexpand{\catcode\rquoteChar=\active \def'{\codequoteright}}%
+ %
+ \catcode`\`=\active
+ \gdef\lquoteexpand{\catcode\lquoteChar=\active \def`{\codequoteleft}}%
+ %
+ \gdef\quoteexpand{\rquoteexpand \lquoteexpand}%
+\endgroup
+
+% start the verbatim environment.
+\def\setupverbatim{%
+ \let\nonarrowing = t%
+ \nonfillstart
+ % Easiest (and conventionally used) font for verbatim
+ \tt
+ \def\par{\leavevmode\egroup\box0\endgraf}%
+ \catcode`\`=\active
+ \tabexpand
+ \quoteexpand
+ % Respect line breaks,
+ % print special symbols as themselves, and
+ % make each space count
+ % must do in this order:
+ \obeylines \uncatcodespecials \sepspaces
+ \everypar{\starttabbox}%
+}
+
+% Do the @verb magic: verbatim text is quoted by unique
+% delimiter characters. Before first delimiter expect a
+% right brace, after last delimiter expect closing brace:
+%
+% \def\doverb'{'<char>#1<char>'}'{#1}
+%
+% [Knuth] p. 382; only eat outer {}
+\begingroup
+ \catcode`[=1\catcode`]=2\catcode`\{=\other\catcode`\}=\other
+ \gdef\doverb{#1[\def\next##1#1}[##1\endgroup]\next]
+\endgroup
+%
+\def\verb{\begingroup\setupverb\doverb}
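+
+% Texinfo-level usage, with `|' as the chosen delimiter (illustrative):
+%   @verb{|@, {, and } appear here exactly as typed|}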
+%
+%
+% Do the @verbatim magic: define the macro \doverbatim so that
+% the (first) argument ends when '@end verbatim' is reached, i.e.:
+%
+% \def\doverbatim#1@end verbatim{#1}
+%
+% For Texinfo it's a lot easier than for LaTeX,
+% because texinfo's \verbatim doesn't stop at '\end{verbatim}':
+% we need not redefine '\', '{' and '}'.
+%
+% Inspired by LaTeX's verbatim command set [latex.ltx]
+%
+\begingroup
+ \catcode`\ =\active
+ \obeylines %
+ % ignore everything up to the first ^^M, that's the newline at the end
+ % of the @verbatim input line itself. Otherwise we get an extra blank
+ % line in the output.
+ \xdef\doverbatim#1^^M#2@end verbatim{#2\noexpand\end\gobble verbatim}%
+ % We really want {...\end verbatim} in the body of the macro, but
+ % without the active space; thus we have to use \xdef and \gobble.
+\endgroup
+%
+\envdef\verbatim{%
+ \setupverbatim\doverbatim
+}
+\let\Everbatim = \afterenvbreak
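+
+% Typical Texinfo usage (illustrative):
+%   @verbatim
+%   Everything here, including @commands and {braces},
+%   is set exactly as typed, in the tt font.
+%   @end verbatim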
+
+
+% @verbatiminclude FILE - insert text of file in verbatim environment.
+%
+\def\verbatiminclude{\parseargusing\filenamecatcodes\doverbatiminclude}
+%
+\def\doverbatiminclude#1{%
+ {%
+ \makevalueexpandable
+ \setupverbatim
+ \input #1
+ \afterenvbreak
+ }%
+}
+
+% @copying ... @end copying.
+% Save the text away for @insertcopying later.
+%
+% We save the uninterpreted tokens, rather than creating a box.
+% Saving the text in a box would be much easier, but then all the
+% typesetting commands (@smallbook, font changes, etc.) have to be done
+% beforehand -- and a) we want @copying to be done first in the source
+% file; b) letting users define the frontmatter in as flexible an order
+% as possible is very desirable.
+%
+\def\copying{\checkenv{}\begingroup\scanargctxt\docopying}
+\def\docopying#1@end copying{\endgroup\def\copyingtext{#1}}
+%
+\def\insertcopying{%
+ \begingroup
+ \parindent = 0pt % paragraph indentation looks wrong on title page
+ \scanexp\copyingtext
+ \endgroup
+}
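+
+% Typical Texinfo usage (illustrative placeholders):
+%   @copying
+%   Copyright @copyright{} YEAR HOLDER.
+%   @end copying
+% and then, wherever the notice should actually appear (title page,
+% Top node, and so on):
+%   @insertcopying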
+
+\message{defuns,}
+% @defun etc.
+
+\newskip\defbodyindent \defbodyindent=.4in
+\newskip\defargsindent \defargsindent=50pt
+\newskip\deflastargmargin \deflastargmargin=18pt
+
+% Start the processing of @deffn:
+\def\startdefun{%
+ \ifnum\lastpenalty<10000
+ \medbreak
+ \else
+ % If there are two @def commands in a row, we'll have a \nobreak,
+ % which is there to keep the function description together with its
+ % header. But if there's nothing but headers, we need to allow a
+ % break somewhere. Check specifically for penalty 10002, inserted
+ % by \defargscommonending, instead of 10000, since the sectioning
+ % commands also insert a nobreak penalty, and we don't want to allow
+ % a break between a section heading and a defun.
+ %
+ \ifnum\lastpenalty=10002 \penalty2000 \fi
+ %
+ % Similarly, after a section heading, do not allow a break.
+ % But do insert the glue.
+ \medskip % preceded by discardable penalty, so not a breakpoint
+ \fi
+ %
+ \parindent=0in
+ \advance\leftskip by \defbodyindent
+ \exdentamount=\defbodyindent
+}
+
+\def\dodefunx#1{%
+ % First, check whether we are in the right environment:
+ \checkenv#1%
+ %
+ % As above, allow line break if we have multiple x headers in a row.
+ % It's not a great place, though.
+ \ifnum\lastpenalty=10002 \penalty3000 \fi
+ %
+ % And now, it's time to reuse the body of the original defun:
+ \expandafter\gobbledefun#1%
+}
+\def\gobbledefun#1\startdefun{}
+
+% \printdefunline \deffnheader{text}
+%
+\def\printdefunline#1#2{%
+ \begingroup
+ % call \deffnheader:
+ #1#2 \endheader
+ % common ending:
+ \interlinepenalty = 10000
+ \advance\rightskip by 0pt plus 1fil
+ \endgraf
+ \nobreak\vskip -\parskip
+ \penalty 10002 % signal to \startdefun and \dodefunx
+ % Some of the @defun-type tags do not enable magic parentheses,
+ % rendering the following check redundant. But we don't optimize.
+ \checkparencounts
+ \endgroup
+}
+
+\def\Edefun{\endgraf\medbreak}
+
+% \makedefun{deffn} creates \deffn, \deffnx and \Edeffn;
+% the only thing remaining is to define \deffnheader.
+%
+\def\makedefun#1{%
+ \expandafter\let\csname E#1\endcsname = \Edefun
+ \edef\temp{\noexpand\domakedefun
+ \makecsname{#1}\makecsname{#1x}\makecsname{#1header}}%
+ \temp
+}
+
+% \domakedefun \deffn \deffnx \deffnheader
+%
+% Define \deffn and \deffnx, without parameters.
+% \deffnheader has to be defined explicitly.
+%
+\def\domakedefun#1#2#3{%
+ \envdef#1{%
+ \startdefun
+ \parseargusing\activeparens{\printdefunline#3}%
+ }%
+ \def#2{\dodefunx#1}%
+ \def#3%
+}
+
+%%% Untyped functions:
+
+% @deffn category name args
+\makedefun{deffn}{\deffngeneral{}}
+
+% @deffn category class name args
+\makedefun{defop}#1 {\defopon{#1\ \putwordon}}
+
+% \defopon {category on}class name args
+\def\defopon#1#2 {\deffngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }
+
+% \deffngeneral {subind}category name args
+%
+\def\deffngeneral#1#2 #3 #4\endheader{%
+ % Remember that \dosubind{fn}{foo}{} is equivalent to \doind{fn}{foo}.
+ \dosubind{fn}{\code{#3}}{#1}%
+ \defname{#2}{}{#3}\magicamp\defunargs{#4\unskip}%
+}
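+%
+% As an illustration (names invented), the source line
+%   @deffn Command forward-word count
+% reaches \deffngeneral with an empty subindex, "Command" as the category,
+% "forward-word" as the name that gets indexed, and "count" as the
+% argument list handed to \defunargs.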
+
+%%% Typed functions:
+
+% @deftypefn category type name args
+\makedefun{deftypefn}{\deftypefngeneral{}}
+
+% @deftypeop category class type name args
+\makedefun{deftypeop}#1 {\deftypeopon{#1\ \putwordon}}
+
+% \deftypeopon {category on}class type name args
+\def\deftypeopon#1#2 {\deftypefngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }
+
+% \deftypefngeneral {subind}category type name args
+%
+\def\deftypefngeneral#1#2 #3 #4 #5\endheader{%
+ \dosubind{fn}{\code{#4}}{#1}%
+ \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
+}
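+%
+% For example (names invented),
+%   @deftypefn {Library Function} int foobar (int @var{foo})
+% supplies "Library Function" as the category, "int" as the return type,
+% "foobar" as the indexed name, and the parenthesized list as the
+% arguments.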
+
+%%% Typed variables:
+
+% @deftypevr category type var args
+\makedefun{deftypevr}{\deftypecvgeneral{}}
+
+% @deftypecv category class type var args
+\makedefun{deftypecv}#1 {\deftypecvof{#1\ \putwordof}}
+
+% \deftypecvof {category of}class type var args
+\def\deftypecvof#1#2 {\deftypecvgeneral{\putwordof\ \code{#2}}{#1\ \code{#2}} }
+
+% \deftypecvgeneral {subind}category type var args
+%
+\def\deftypecvgeneral#1#2 #3 #4 #5\endheader{%
+ \dosubind{vr}{\code{#4}}{#1}%
+ \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
+}
+
+%%% Untyped variables:
+
+% @defvr category var args
+\makedefun{defvr}#1 {\deftypevrheader{#1} {} }
+
+% @defcv category class var args
+\makedefun{defcv}#1 {\defcvof{#1\ \putwordof}}
+
+% \defcvof {category of}class var args
+\def\defcvof#1#2 {\deftypecvof{#1}#2 {} }
+
+%%% Type:
+% @deftp category name args
+\makedefun{deftp}#1 #2 #3\endheader{%
+ \doind{tp}{\code{#2}}%
+ \defname{#1}{}{#2}\defunargs{#3\unskip}%
+}
+
+% Remaining @defun-like shortcuts:
+\makedefun{defun}{\deffnheader{\putwordDeffunc} }
+\makedefun{defmac}{\deffnheader{\putwordDefmac} }
+\makedefun{defspec}{\deffnheader{\putwordDefspec} }
+\makedefun{deftypefun}{\deftypefnheader{\putwordDeffunc} }
+\makedefun{defvar}{\defvrheader{\putwordDefvar} }
+\makedefun{defopt}{\defvrheader{\putwordDefopt} }
+\makedefun{deftypevar}{\deftypevrheader{\putwordDefvar} }
+\makedefun{defmethod}{\defopon\putwordMethodon}
+\makedefun{deftypemethod}{\deftypeopon\putwordMethodon}
+\makedefun{defivar}{\defcvof\putwordInstanceVariableof}
+\makedefun{deftypeivar}{\deftypecvof\putwordInstanceVariableof}
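+%
+% In other words (names invented), "@defun apply function arguments" is
+% handled like "@deffn Function apply function arguments", except that the
+% category word actually comes from \putwordDeffunc and is therefore
+% translatable; the other shortcuts work the same way with their own
+% category words.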
+
+% \defname, which formats the name of the @def (not the args).
+% #1 is the category, such as "Function".
+% #2 is the return type, if any.
+% #3 is the function name.
+%
+% We are followed by (but not passed) the arguments, if any.
+%
+\def\defname#1#2#3{%
+ % Get the values of \leftskip and \rightskip as they were outside the @def...
+ \advance\leftskip by -\defbodyindent
+ %
+ % How we'll format the type name. Putting it in brackets helps
+ % distinguish it from the body text that may end up on the next line
+ % just below it.
+ \def\temp{#1}%
+ \setbox0=\hbox{\kern\deflastargmargin \ifx\temp\empty\else [\rm\temp]\fi}
+ %
+ % Figure out line sizes for the paragraph shape.
+ % The first line needs space for \box0; but if \rightskip is nonzero,
+ % we need only space for the part of \box0 which exceeds it:
+ \dimen0=\hsize \advance\dimen0 by -\wd0 \advance\dimen0 by \rightskip
+ % The continuations:
+ \dimen2=\hsize \advance\dimen2 by -\defargsindent
+ % (plain.tex says that \dimen1 should be used only as global.)
+ \parshape 2 0in \dimen0 \defargsindent \dimen2
+ %
+ % Put the type name to the right margin.
+ \noindent
+ \hbox to 0pt{%
+ \hfil\box0 \kern-\hsize
+ % \hsize has to be shortened this way:
+ \kern\leftskip
+ % Intentionally do not respect \rightskip, since we need the space.
+ }%
+ %
+ % Allow all lines to be underfull without complaint:
+ \tolerance=10000 \hbadness=10000
+ \exdentamount=\defbodyindent
+ {%
+ % defun fonts. We use typewriter by default (used to be bold) because:
+ % . we're printing identifiers, they should be in tt in principle.
+ % . in languages with many accents, such as Czech or French, it's
+ % common to leave accents off identifiers. The result looks ok in
+ % tt, but exceedingly strange in rm.
+ % . we don't want -- and --- to be treated as ligatures.
+ % . this still does not fix the ?` and !` ligatures, but so far no
+ % one has made identifiers using them :).
+ \df \tt
+ \def\temp{#2}% return value type
+ \ifx\temp\empty\else \tclose{\temp} \fi
+ #3% output function name
+ }%
+ {\rm\enskip}% hskip 0.5 em of \tenrm
+ %
+ \boldbrax
+ % arguments will be output next, if any.
+}
+
+% Print arguments in slanted roman (not ttsl), inconsistently with using
+% tt for the name. This is because literal text is sometimes needed in
+% the argument list (groff manual), and ttsl and tt are not very
+% distinguishable. Prevent hyphenation at `-' chars.
+%
+\def\defunargs#1{%
+ % use sl by default (not ttsl),
+ % tt for the names.
+ \df \sl \hyphenchar\font=0
+ %
+ % On the other hand, if an argument has two dashes (for instance), we
+ % want a way to get ttsl. Let's try @var for that.
+ \let\var=\ttslanted
+ #1%
+ \sl\hyphenchar\font=45
+}
+
+% We want ()&[] to print specially on the defun line.
+%
+\def\activeparens{%
+ \catcode`\(=\active \catcode`\)=\active
+ \catcode`\[=\active \catcode`\]=\active
+ \catcode`\&=\active
+}
+
+% Make control sequences which act like normal parenthesis chars.
+\let\lparen = ( \let\rparen = )
+
+% Be sure that we always have a definition for `(', etc. For example,
+% if the fn name has parens in it, \boldbrax will not be in effect yet,
+% so TeX would otherwise complain about an undefined control sequence.
+{
+ \activeparens
+ \global\let(=\lparen \global\let)=\rparen
+ \global\let[=\lbrack \global\let]=\rbrack
+ \global\let& = \&
+
+ \gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb}
+ \gdef\magicamp{\let&=\amprm}
+}
+
+\newcount\parencount
+
+% If we encounter &foo, then turn on ()-hacking afterwards
+\newif\ifampseen
+\def\amprm#1 {\ampseentrue{\bf\&#1 }}
+
+\def\parenfont{%
+ \ifampseen
+ % At the first level, print parens in roman,
+ % otherwise use the default font.
+ \ifnum \parencount=1 \rm \fi
+ \else
+ % The \sf parens (in \boldbrax) actually are a little bolder than
+ % the contained text. This is especially needed for [ and ] .
+ \sf
+ \fi
+}
+\def\infirstlevel#1{%
+ \ifampseen
+ \ifnum\parencount=1
+ #1%
+ \fi
+ \fi
+}
+\def\bfafterword#1 {#1 \bf}
+
+\def\opnr{%
+ \global\advance\parencount by 1
+ {\parenfont(}%
+ \infirstlevel \bfafterword
+}
+\def\clnr{%
+ {\parenfont)}%
+ \infirstlevel \sl
+ \global\advance\parencount by -1
+}
+
+\newcount\brackcount
+\def\lbrb{%
+ \global\advance\brackcount by 1
+ {\bf[}%
+}
+\def\rbrb{%
+ {\bf]}%
+ \global\advance\brackcount by -1
+}
+
+\def\checkparencounts{%
+ \ifnum\parencount=0 \else \badparencount \fi
+ \ifnum\brackcount=0 \else \badbrackcount \fi
+}
+\def\badparencount{%
+ \errmessage{Unbalanced parentheses in @def}%
+ \global\parencount=0
+}
+\def\badbrackcount{%
+ \errmessage{Unbalanced square braces in @def}%
+ \global\brackcount=0
+}
+
+
+\message{macros,}
+% @macro.
+
+% To do this right we need a feature of e-TeX, \scantokens,
+% which we arrange to emulate with a temporary file in ordinary TeX.
+\ifx\eTeXversion\undefined
+ \newwrite\macscribble
+ \def\scantokens#1{%
+ \toks0={#1}%
+ \immediate\openout\macscribble=\jobname.tmp
+ \immediate\write\macscribble{\the\toks0}%
+ \immediate\closeout\macscribble
+ \input \jobname.tmp
+ }
+\fi
+
+\def\scanmacro#1{%
+ \begingroup
+ \newlinechar`\^^M
+ \let\xeatspaces\eatspaces
+ % Undo catcode changes of \startcontents and \doprintindex
+ % When called from @insertcopying or (short)caption, we need active
+ % backslash to get it printed correctly. Previously, we had
+ % \catcode`\\=\other instead. We'll see whether a problem appears
+ % with macro expansion. --kasal, 19aug04
+ \catcode`\@=0 \catcode`\\=\active \escapechar=`\@
+ % ... and \example
+ \spaceisspace
+ %
+ % Append \endinput to make sure that TeX does not see the ending newline.
+ % I've verified that it is necessary both for e-TeX and for ordinary TeX
+ % --kasal, 29nov03
+ \scantokens{#1\endinput}%
+ \endgroup
+}
+
+\def\scanexp#1{%
+ \edef\temp{\noexpand\scanmacro{#1}}%
+ \temp
+}
+
+\newcount\paramno % Count of parameters
+\newtoks\macname % Macro name
+\newif\ifrecursive % Is it recursive?
+
+% List of all defined macros in the form
+% \definedummyword\macro1\definedummyword\macro2...
+% Currently it also contains all @aliases; the list can be split
+% if there is a need.
+\def\macrolist{}
+
+% Add the macro to \macrolist
+\def\addtomacrolist#1{\expandafter \addtomacrolistxxx \csname#1\endcsname}
+\def\addtomacrolistxxx#1{%
+ \toks0 = \expandafter{\macrolist\definedummyword#1}%
+ \xdef\macrolist{\the\toks0}%
+}
+
+% Utility routines.
+% This does \let #1 = #2, with \csnames; that is,
+% \let \csname#1\endcsname = \csname#2\endcsname
+% (except of course we have to play expansion games).
+%
+\def\cslet#1#2{%
+ \expandafter\let
+ \csname#1\expandafter\endcsname
+ \csname#2\endcsname
+}
+
+% Trim leading and trailing spaces off a string.
+% Concepts from aro-bend problem 15 (see CTAN).
+{\catcode`\@=11
+\gdef\eatspaces #1{\expandafter\trim@\expandafter{#1 }}
+\gdef\trim@ #1{\trim@@ @#1 @ #1 @ @@}
+\gdef\trim@@ #1@ #2@ #3@@{\trim@@@\empty #2 @}
+\def\unbrace#1{#1}
+\unbrace{\gdef\trim@@@ #1 } #2@{#1}
+}
+
+% Trim a single trailing ^^M off a string.
+{\catcode`\^^M=\other \catcode`\Q=3%
+\gdef\eatcr #1{\eatcra #1Q^^MQ}%
+\gdef\eatcra#1^^MQ{\eatcrb#1Q}%
+\gdef\eatcrb#1Q#2Q{#1}%
+}
+
+% Macro bodies are absorbed as an argument in a context where
+% all characters are catcode 10, 11 or 12, except \ which is active
+% (as in normal texinfo). It is necessary to change the definition of \.
+
+% It's necessary to have hard CRs when the macro is executed. This is
+% done by making ^^M (\endlinechar) catcode 12 when reading the macro
+% body, and then making it the \newlinechar in \scanmacro.
+
+\def\scanctxt{%
+ \catcode`\"=\other
+ \catcode`\+=\other
+ \catcode`\<=\other
+ \catcode`\>=\other
+ \catcode`\@=\other
+ \catcode`\^=\other
+ \catcode`\_=\other
+ \catcode`\|=\other
+ \catcode`\~=\other
+}
+
+\def\scanargctxt{%
+ \scanctxt
+ \catcode`\\=\other
+ \catcode`\^^M=\other
+}
+
+\def\macrobodyctxt{%
+ \scanctxt
+ \catcode`\{=\other
+ \catcode`\}=\other
+ \catcode`\^^M=\other
+ \usembodybackslash
+}
+
+\def\macroargctxt{%
+ \scanctxt
+ \catcode`\\=\other
+}
+
+% \mbodybackslash is the definition of \ in @macro bodies.
+% It maps \foo\ => \csname macarg.foo\endcsname => #N
+% where N is the macro parameter number.
+% We define \csname macarg.\endcsname to be \realbackslash, so
+% \\ in macro replacement text gets you a backslash.
+
+{\catcode`@=0 @catcode`@\=@active
+ @gdef@usembodybackslash{@let\=@mbodybackslash}
+ @gdef@mbodybackslash#1\{@csname macarg.#1@endcsname}
+}
+\expandafter\def\csname macarg.\endcsname{\realbackslash}
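+%
+% So a definition such as (names invented)
+%   @macro greet {name}
+%   Hello, \name\!
+%   @end macro
+% ends up with a body in which \name\ has become the first parameter,
+% while \\ would yield a literal backslash.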
+
+\def\macro{\recursivefalse\parsearg\macroxxx}
+\def\rmacro{\recursivetrue\parsearg\macroxxx}
+
+\def\macroxxx#1{%
+ \getargs{#1}% now \macname is the macname and \argl the arglist
+ \ifx\argl\empty % no arguments
+ \paramno=0%
+ \else
+ \expandafter\parsemargdef \argl;%
+ \fi
+ \if1\csname ismacro.\the\macname\endcsname
+ \message{Warning: redefining \the\macname}%
+ \else
+ \expandafter\ifx\csname \the\macname\endcsname \relax
+ \else \errmessage{Macro name \the\macname\space already defined}\fi
+ \global\cslet{macsave.\the\macname}{\the\macname}%
+ \global\expandafter\let\csname ismacro.\the\macname\endcsname=1%
+ \addtomacrolist{\the\macname}%
+ \fi
+ \begingroup \macrobodyctxt
+ \ifrecursive \expandafter\parsermacbody
+ \else \expandafter\parsemacbody
+ \fi}
+
+\parseargdef\unmacro{%
+ \if1\csname ismacro.#1\endcsname
+ \global\cslet{#1}{macsave.#1}%
+ \global\expandafter\let \csname ismacro.#1\endcsname=0%
+ % Remove the macro name from \macrolist:
+ \begingroup
+ \expandafter\let\csname#1\endcsname \relax
+ \let\definedummyword\unmacrodo
+ \xdef\macrolist{\macrolist}%
+ \endgroup
+ \else
+ \errmessage{Macro #1 not defined}%
+ \fi
+}
+
+% Called by \do from \dounmacro on each macro. The idea is to omit any
+% macro definitions that have been changed to \relax.
+%
+\def\unmacrodo#1{%
+ \ifx #1\relax
+ % remove this
+ \else
+ \noexpand\definedummyword \noexpand#1%
+ \fi
+}
+
+% This makes use of the obscure feature that if the last token of a
+% <parameter list> is #, then the preceding argument is delimited by
+% an opening brace, and that opening brace is not consumed.
+\def\getargs#1{\getargsxxx#1{}}
+\def\getargsxxx#1#{\getmacname #1 \relax\getmacargs}
+\def\getmacname #1 #2\relax{\macname={#1}}
+\def\getmacargs#1{\def\argl{#1}}
+
+% Parse the optional {params} list. Set up \paramno and \paramlist
+% so \defmacro knows what to do. Define \macarg.blah for each blah
+% in the params list, to be ##N where N is the position in that list.
+% That gets used by \mbodybackslash (above).
+
+% We need to get `macro parameter char #' into several definitions.
+% The technique used is stolen from LaTeX: let \hash be something
+% unexpandable, insert that wherever you need a #, and then redefine
+% it to # just before using the token list produced.
+%
+% The same technique is used to protect \eatspaces till just before
+% the macro is used.
+
+\def\parsemargdef#1;{\paramno=0\def\paramlist{}%
+ \let\hash\relax\let\xeatspaces\relax\parsemargdefxxx#1,;,}
+\def\parsemargdefxxx#1,{%
+ \if#1;\let\next=\relax
+ \else \let\next=\parsemargdefxxx
+ \advance\paramno by 1%
+ \expandafter\edef\csname macarg.\eatspaces{#1}\endcsname
+ {\xeatspaces{\hash\the\paramno}}%
+ \edef\paramlist{\paramlist\hash\the\paramno,}%
+ \fi\next}
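+% For instance, the argument list "name" from the @macro example above
+% gives \paramno=1; a list "foo, bar" would give \paramno=2, with
+% \macarg.foo and \macarg.bar defined as the ##1 and ##2 placeholders used
+% by \mbodybackslash.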
+
+% These two commands read recursive and nonrecursive macro bodies.
+% (They're different since rec and nonrec macros end differently.)
+
+\long\def\parsemacbody#1@end macro%
+{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
+\long\def\parsermacbody#1@end rmacro%
+{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
+
+% This defines the macro itself. There are six cases: recursive and
+% nonrecursive macros of zero, one, and many arguments.
+% Much magic with \expandafter here.
+% \xdef is used so that macro definitions will survive the file
+% they're defined in; @include reads the file inside a group.
+\def\defmacro{%
+ \let\hash=##% convert placeholders to macro parameter chars
+ \ifrecursive
+ \ifcase\paramno
+ % 0
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \noexpand\scanmacro{\temp}}%
+ \or % 1
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \bgroup\noexpand\macroargctxt
+ \noexpand\braceorline
+ \expandafter\noexpand\csname\the\macname xxx\endcsname}%
+ \expandafter\xdef\csname\the\macname xxx\endcsname##1{%
+ \egroup\noexpand\scanmacro{\temp}}%
+ \else % many
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \bgroup\noexpand\macroargctxt
+ \noexpand\csname\the\macname xx\endcsname}%
+ \expandafter\xdef\csname\the\macname xx\endcsname##1{%
+ \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}%
+ \expandafter\expandafter
+ \expandafter\xdef
+ \expandafter\expandafter
+ \csname\the\macname xxx\endcsname
+ \paramlist{\egroup\noexpand\scanmacro{\temp}}%
+ \fi
+ \else
+ \ifcase\paramno
+ % 0
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \noexpand\norecurse{\the\macname}%
+ \noexpand\scanmacro{\temp}\egroup}%
+ \or % 1
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \bgroup\noexpand\macroargctxt
+ \noexpand\braceorline
+ \expandafter\noexpand\csname\the\macname xxx\endcsname}%
+ \expandafter\xdef\csname\the\macname xxx\endcsname##1{%
+ \egroup
+ \noexpand\norecurse{\the\macname}%
+ \noexpand\scanmacro{\temp}\egroup}%
+ \else % many
+ \expandafter\xdef\csname\the\macname\endcsname{%
+ \bgroup\noexpand\macroargctxt
+ \expandafter\noexpand\csname\the\macname xx\endcsname}%
+ \expandafter\xdef\csname\the\macname xx\endcsname##1{%
+ \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}%
+ \expandafter\expandafter
+ \expandafter\xdef
+ \expandafter\expandafter
+ \csname\the\macname xxx\endcsname
+ \paramlist{%
+ \egroup
+ \noexpand\norecurse{\the\macname}%
+ \noexpand\scanmacro{\temp}\egroup}%
+ \fi
+ \fi}
+
+\def\norecurse#1{\bgroup\cslet{#1}{macsave.#1}}
+
+% \braceorline decides whether the next nonwhitespace character is a
+% {. If so it reads up to the closing }, if not, it reads the whole
+% line. Whatever was read is then fed to the next control sequence
+% as an argument (by \parsebrace or \parsearg)
+\def\braceorline#1{\let\macnamexxx=#1\futurelet\nchar\braceorlinexxx}
+\def\braceorlinexxx{%
+ \ifx\nchar\bgroup\else
+ \expandafter\parsearg
+ \fi \macnamexxx}
+
+
+% @alias.
+% We need some trickery to remove the optional spaces around the equal
+% sign. Just make them active and then expand them all to nothing.
+\def\alias{\parseargusing\obeyspaces\aliasxxx}
+\def\aliasxxx #1{\aliasyyy#1\relax}
+\def\aliasyyy #1=#2\relax{%
+ {%
+ \expandafter\let\obeyedspace=\empty
+ \addtomacrolist{#1}%
+ \xdef\next{\global\let\makecsname{#1}=\makecsname{#2}}%
+ }%
+ \next
+}
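+%
+% For example (an invented alias), "@alias sub = subsection" makes @sub
+% behave as a synonym for @subsection from this point on.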
+
+
+\message{cross references,}
+
+\newwrite\auxfile
+
+\newif\ifhavexrefs % True if xref values are known.
+\newif\ifwarnedxrefs % True if we warned once that they aren't known.
+
+% @inforef is relatively simple.
+\def\inforef #1{\inforefzzz #1,,,,**}
+\def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}},
+ node \samp{\ignorespaces#1{}}}
+
+% @node's only job in TeX is to define \lastnode, which is used in
+% cross-references. The @node line might or might not have commas, and
+% might or might not have spaces before the first comma, like:
+% @node foo , bar , ...
+% We don't want such trailing spaces in the node name.
+%
+\parseargdef\node{\checkenv{}\donode #1 ,\finishnodeparse}
+%
+% also remove a trailing comma, in case of something like this:
+% @node Help-Cross, , , Cross-refs
+\def\donode#1 ,#2\finishnodeparse{\dodonode #1,\finishnodeparse}
+\def\dodonode#1,#2\finishnodeparse{\gdef\lastnode{#1}}
+
+\let\nwnode=\node
+\let\lastnode=\empty
+
+% Write a cross-reference definition for the current node. #1 is the
+% type (Ynumbered, Yappendix, Ynothing).
+%
+\def\donoderef#1{%
+ \ifx\lastnode\empty\else
+ \setref{\lastnode}{#1}%
+ \global\let\lastnode=\empty
+ \fi
+}
+
+% @anchor{NAME} -- define xref target at arbitrary point.
+%
+\newcount\savesfregister
+%
+\def\savesf{\relax \ifhmode \savesfregister=\spacefactor \fi}
+\def\restoresf{\relax \ifhmode \spacefactor=\savesfregister \fi}
+\def\anchor#1{\savesf \setref{#1}{Ynothing}\restoresf \ignorespaces}
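+%
+% E.g. "@anchor{setup-example}" (label invented) records the current
+% position (section title and page number) under that name, so a later
+% @xref{setup-example} can point back to this exact spot.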
+
+% \setref{NAME}{SNT} defines a cross-reference point NAME (a node or an
+% anchor), which consists of three parts:
+% 1) NAME-title - the current sectioning name taken from \thissection,
+% or the anchor name.
+% 2) NAME-snt - section number and type, passed as the SNT arg, or
+% empty for anchors.
+% 3) NAME-pg - the page number.
+%
+% This is called from \donoderef, \anchor, and \dofloat. In the case of
+% floats, there is an additional part, which is not written here:
+% 4) NAME-lof - the text as it should appear in a @listoffloats.
+%
+\def\setref#1#2{%
+ \pdfmkdest{#1}%
+ \iflinks
+ {%
+ \atdummies % preserve commands, but don't expand them
+ \edef\writexrdef##1##2{%
+ \write\auxfile{@xrdef{#1-% #1 of \setref, expanded by the \edef
+ ##1}{##2}}% these are parameters of \writexrdef
+ }%
+ \toks0 = \expandafter{\thissection}%
+ \immediate \writexrdef{title}{\the\toks0 }%
+ \immediate \writexrdef{snt}{\csname #2\endcsname}% \Ynumbered etc.
+ \writexrdef{pg}{\folio}% will be written later, during \shipout
+ }%
+ \fi
+}
+
+% @xref, @pxref, and @ref generate cross-references. For \xrefX, #1 is
+% the node name, #2 the name of the Info cross-reference, #3 the printed
+% node name, #4 the name of the Info file, #5 the name of the printed
+% manual. All but the node name can be omitted.
+%
+\def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]}
+\def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]}
+\def\ref#1{\xrefX[#1,,,,,,,]}
+\def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup
+ \unsepspaces
+ \def\printedmanual{\ignorespaces #5}%
+ \def\printedrefname{\ignorespaces #3}%
+ \setbox1=\hbox{\printedmanual\unskip}%
+ \setbox0=\hbox{\printedrefname\unskip}%
+ \ifdim \wd0 = 0pt
+ % No printed node name was explicitly given.
+ \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax
+ % Use the node name inside the square brackets.
+ \def\printedrefname{\ignorespaces #1}%
+ \else
+      % Use the actual chapter/section title that appears inside
+ % the square brackets. Use the real section title if we have it.
+ \ifdim \wd1 > 0pt
+ % It is in another manual, so we don't have it.
+ \def\printedrefname{\ignorespaces #1}%
+ \else
+ \ifhavexrefs
+ % We know the real title if we have the xref values.
+ \def\printedrefname{\refx{#1-title}{}}%
+ \else
+ % Otherwise just copy the Info node name.
+ \def\printedrefname{\ignorespaces #1}%
+ \fi%
+ \fi
+ \fi
+ \fi
+ %
+ % Make link in pdf output.
+ \ifpdf
+ \leavevmode
+ \getfilename{#4}%
+ {\turnoffactive
+ % See comments at \activebackslashdouble.
+ {\activebackslashdouble \xdef\pdfxrefdest{#1}%
+ \backslashparens\pdfxrefdest}%
+ %
+ \ifnum\filenamelength>0
+ \startlink attr{/Border [0 0 0]}%
+ goto file{\the\filename.pdf} name{\pdfxrefdest}%
+ \else
+ \startlink attr{/Border [0 0 0]}%
+ goto name{\pdfmkpgn{\pdfxrefdest}}%
+ \fi
+ }%
+ \linkcolor
+ \fi
+ %
+ % Float references are printed completely differently: "Figure 1.2"
+ % instead of "[somenode], p.3". We distinguish them by the
+ % LABEL-title being set to a magic string.
+ {%
+ % Have to otherify everything special to allow the \csname to
+ % include an _ in the xref name, etc.
+ \indexnofonts
+ \turnoffactive
+ \expandafter\global\expandafter\let\expandafter\Xthisreftitle
+ \csname XR#1-title\endcsname
+ }%
+ \iffloat\Xthisreftitle
+ % If the user specified the print name (third arg) to the ref,
+ % print it instead of our usual "Figure 1.2".
+ \ifdim\wd0 = 0pt
+ \refx{#1-snt}{}%
+ \else
+ \printedrefname
+ \fi
+ %
+ % if the user also gave the printed manual name (fifth arg), append
+ % "in MANUALNAME".
+ \ifdim \wd1 > 0pt
+ \space \putwordin{} \cite{\printedmanual}%
+ \fi
+ \else
+ % node/anchor (non-float) references.
+ %
+ % If we use \unhbox0 and \unhbox1 to print the node names, TeX does not
+ % insert empty discretionaries after hyphens, which means that it will
+  % not find a line break at a hyphen in a node name. Since some manuals
+ % are best written with fairly long node names, containing hyphens, this
+ % is a loss. Therefore, we give the text of the node name again, so it
+ % is as if TeX is seeing it for the first time.
+ \ifdim \wd1 > 0pt
+ \putwordsection{} ``\printedrefname'' \putwordin{} \cite{\printedmanual}%
+ \else
+ % _ (for example) has to be the character _ for the purposes of the
+ % control sequence corresponding to the node, but it has to expand
+ % into the usual \leavevmode...\vrule stuff for purposes of
+ % printing. So we \turnoffactive for the \refx-snt, back on for the
+ % printing, back off for the \refx-pg.
+ {\turnoffactive
+ % Only output a following space if the -snt ref is nonempty; for
+ % @unnumbered and @anchor, it won't be.
+ \setbox2 = \hbox{\ignorespaces \refx{#1-snt}{}}%
+ \ifdim \wd2 > 0pt \refx{#1-snt}\space\fi
+ }%
+ % output the `[mynode]' via a macro so it can be overridden.
+ \xrefprintnodename\printedrefname
+ %
+ % But we always want a comma and a space:
+ ,\space
+ %
+ % output the `page 3'.
+ \turnoffactive \putwordpage\tie\refx{#1-pg}{}%
+ \fi
+ \fi
+ \endlink
+\endgroup}
+
+% This macro is called from \xrefX for the `[nodename]' part of xref
+% output. It's a separate macro only so it can be changed more easily,
+% since square brackets don't work well in some documents. Particularly
+% one that Bob is working on :).
+%
+\def\xrefprintnodename#1{[#1]}
+
+% Things referred to by \setref.
+%
+\def\Ynothing{}
+\def\Yomitfromtoc{}
+\def\Ynumbered{%
+ \ifnum\secno=0
+ \putwordChapter@tie \the\chapno
+ \else \ifnum\subsecno=0
+ \putwordSection@tie \the\chapno.\the\secno
+ \else \ifnum\subsubsecno=0
+ \putwordSection@tie \the\chapno.\the\secno.\the\subsecno
+ \else
+ \putwordSection@tie \the\chapno.\the\secno.\the\subsecno.\the\subsubsecno
+ \fi\fi\fi
+}
+\def\Yappendix{%
+ \ifnum\secno=0
+ \putwordAppendix@tie @char\the\appendixno{}%
+ \else \ifnum\subsecno=0
+ \putwordSection@tie @char\the\appendixno.\the\secno
+ \else \ifnum\subsubsecno=0
+ \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno
+ \else
+ \putwordSection@tie
+ @char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno
+ \fi\fi\fi
+}
+
+% Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME.
+% If its value is nonempty, SUFFIX is output afterward.
+%
+\def\refx#1#2{%
+ {%
+ \indexnofonts
+ \otherbackslash
+ \expandafter\global\expandafter\let\expandafter\thisrefX
+ \csname XR#1\endcsname
+ }%
+ \ifx\thisrefX\relax
+ % If not defined, say something at least.
+ \angleleft un\-de\-fined\angleright
+ \iflinks
+ \ifhavexrefs
+ \message{\linenumber Undefined cross reference `#1'.}%
+ \else
+ \ifwarnedxrefs\else
+ \global\warnedxrefstrue
+ \message{Cross reference values unknown; you must run TeX again.}%
+ \fi
+ \fi
+ \fi
+ \else
+ % It's defined, so just use it.
+ \thisrefX
+ \fi
+ #2% Output the suffix in any case.
+}
+
+% This is the macro invoked by entries in the aux file. Usually it's
+% just a \def (we prepend XR to the control sequence name to avoid
+% collisions). But if this is a float type, we have more work to do.
+%
+\def\xrdef#1#2{%
+ \expandafter\gdef\csname XR#1\endcsname{#2}% remember this xref value.
+ %
+ % Was that xref control sequence that we just defined for a float?
+ \expandafter\iffloat\csname XR#1\endcsname
+ % it was a float, and we have the (safe) float type in \iffloattype.
+ \expandafter\let\expandafter\floatlist
+ \csname floatlist\iffloattype\endcsname
+ %
+ % Is this the first time we've seen this float type?
+ \expandafter\ifx\floatlist\relax
+ \toks0 = {\do}% yes, so just \do
+ \else
+ % had it before, so preserve previous elements in list.
+ \toks0 = \expandafter{\floatlist\do}%
+ \fi
+ %
+ % Remember this xref in the control sequence \floatlistFLOATTYPE,
+ % for later use in \listoffloats.
+ \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0{#1}}%
+ \fi
+}
+
+% Read the last existing aux file, if any. No error if none exists.
+%
+\def\tryauxfile{%
+ \openin 1 \jobname.aux
+ \ifeof 1 \else
+ \readdatafile{aux}%
+ \global\havexrefstrue
+ \fi
+ \closein 1
+}
+
+\def\setupdatafile{%
+ \catcode`\^^@=\other
+ \catcode`\^^A=\other
+ \catcode`\^^B=\other
+ \catcode`\^^C=\other
+ \catcode`\^^D=\other
+ \catcode`\^^E=\other
+ \catcode`\^^F=\other
+ \catcode`\^^G=\other
+ \catcode`\^^H=\other
+ \catcode`\^^K=\other
+ \catcode`\^^L=\other
+ \catcode`\^^N=\other
+ \catcode`\^^P=\other
+ \catcode`\^^Q=\other
+ \catcode`\^^R=\other
+ \catcode`\^^S=\other
+ \catcode`\^^T=\other
+ \catcode`\^^U=\other
+ \catcode`\^^V=\other
+ \catcode`\^^W=\other
+ \catcode`\^^X=\other
+ \catcode`\^^Z=\other
+ \catcode`\^^[=\other
+ \catcode`\^^\=\other
+ \catcode`\^^]=\other
+ \catcode`\^^^=\other
+ \catcode`\^^_=\other
+ % It was suggested to set the catcode of ^ to 7, which would allow ^^e4 etc.
+ % in xref tags, i.e., node names. But since ^^e4 notation isn't
+ % supported in the main text, it doesn't seem desirable. Furthermore,
+ % that is not enough: for node names that actually contain a ^
+ % character, we would end up writing a line like this: 'xrdef {'hat
+ % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first
+ % argument, and \hat is not an expandable control sequence. It could
+ % all be worked out, but why? Either we support ^^ or we don't.
+ %
+ % The other change necessary for this was to define \auxhat:
+ % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter
+ % and then to call \auxhat in \setq.
+ %
+ \catcode`\^=\other
+ %
+ % Special characters. Should be turned off anyway, but...
+ \catcode`\~=\other
+ \catcode`\[=\other
+ \catcode`\]=\other
+ \catcode`\"=\other
+ \catcode`\_=\other
+ \catcode`\|=\other
+ \catcode`\<=\other
+ \catcode`\>=\other
+ \catcode`\$=\other
+ \catcode`\#=\other
+ \catcode`\&=\other
+ \catcode`\%=\other
+ \catcode`+=\other % avoid \+ for paranoia even though we've turned it off
+ %
+ % This is to support \ in node names and titles, since the \
+ % characters end up in a \csname. It's easier than
+ % leaving it active and making its active definition an actual \
+ % character. What I don't understand is why it works in the *value*
+ % of the xrdef. Seems like it should be a catcode12 \, and that
+ % should not typeset properly. But it works, so I'm moving on for
+ % now. --karl, 15jan04.
+ \catcode`\\=\other
+ %
+ % Make the characters 128-255 be printing characters.
+ {%
+ \count1=128
+ \def\loop{%
+ \catcode\count1=\other
+ \advance\count1 by 1
+ \ifnum \count1<256 \loop \fi
+ }%
+ }%
+ %
+ % @ is our escape character in .aux files, and we need braces.
+ \catcode`\{=1
+ \catcode`\}=2
+ \catcode`\@=0
+}
+
+\def\readdatafile#1{%
+\begingroup
+ \setupdatafile
+ \input\jobname.#1
+\endgroup}
+
+\message{insertions,}
+% including footnotes.
+
+\newcount \footnoteno
+
+% The trailing space in the following definition for supereject is
+% vital for proper filling; pages come out unaligned when you do a
+% pagealignmacro call if that space before the closing brace is
+% removed. (Generally, numeric constants should always be followed by a
+% space to prevent strange expansion errors.)
+\def\supereject{\par\penalty -20000\footnoteno =0 }
+
+% @footnotestyle is meaningful for info output only.
+\let\footnotestyle=\comment
+
+{\catcode `\@=11
+%
+% Auto-number footnotes. Otherwise like plain.
+\gdef\footnote{%
+ \let\indent=\ptexindent
+ \let\noindent=\ptexnoindent
+ \global\advance\footnoteno by \@ne
+ \edef\thisfootno{$^{\the\footnoteno}$}%
+ %
+ % In case the footnote comes at the end of a sentence, preserve the
+ % extra spacing after we do the footnote number.
+ \let\@sf\empty
+ \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\ptexslash\fi
+ %
+ % Remove inadvertent blank space before typesetting the footnote number.
+ \unskip
+ \thisfootno\@sf
+ \dofootnote
+}%
+
+% Don't bother with the trickery in plain.tex to not require the
+% footnote text as a parameter. Our footnotes don't need to be so general.
+%
+% Oh yes, they do; otherwise, @ifset (and anything else that uses
+% \parseargline) fails inside footnotes because the tokens are fixed when
+% the footnote is read. --karl, 16nov96.
+%
+\gdef\dofootnote{%
+ \insert\footins\bgroup
+ % We want to typeset this text as a normal paragraph, even if the
+ % footnote reference occurs in (for example) a display environment.
+ % So reset some parameters.
+ \hsize=\pagewidth
+ \interlinepenalty\interfootnotelinepenalty
+ \splittopskip\ht\strutbox % top baseline for broken footnotes
+ \splitmaxdepth\dp\strutbox
+ \floatingpenalty\@MM
+ \leftskip\z@skip
+ \rightskip\z@skip
+ \spaceskip\z@skip
+ \xspaceskip\z@skip
+ \parindent\defaultparindent
+ %
+ \smallfonts \rm
+ %
+ % Because we use hanging indentation in footnotes, a @noindent appears
+ % to exdent this text, so make it be a no-op. makeinfo does not use
+ % hanging indentation so @noindent can still be needed within footnote
+ % text after an @example or the like (not that this is good style).
+ \let\noindent = \relax
+ %
+ % Hang the footnote text off the number. Use \everypar in case the
+ % footnote extends for more than one paragraph.
+ \everypar = {\hang}%
+ \textindent{\thisfootno}%
+ %
+ % Don't crash into the line above the footnote text. Since this
+ % expands into a box, it must come within the paragraph, lest it
+ % provide a place where TeX can split the footnote.
+ \footstrut
+ \futurelet\next\fo@t
+}
+}%end \catcode `\@=11
+
+% In case a @footnote appears in a vbox, save the footnote text and create
+% the real \insert just after the vbox finished. Otherwise, the insertion
+% would be lost.
+% Similarly, if a @footnote appears inside an alignment, save the footnote
+% text to a box and make the \insert when a row of the table is finished.
+% And the same can be done for other insert classes. --kasal, 16nov03.
+
+% Replace the \insert primitive by a cheating macro.
+% Deeper inside, just make sure that the saved insertions are not spilled
+% out prematurely.
+%
+\def\startsavinginserts{%
+ \ifx \insert\ptexinsert
+ \let\insert\saveinsert
+ \else
+ \let\checkinserts\relax
+ \fi
+}
+
+% This \insert replacement works for both \insert\footins{foo} and
+% \insert\footins\bgroup foo\egroup, but it doesn't work for \insert27{foo}.
+%
+\def\saveinsert#1{%
+ \edef\next{\noexpand\savetobox \makeSAVEname#1}%
+ \afterassignment\next
+ % swallow the left brace
+ \let\temp =
+}
+\def\makeSAVEname#1{\makecsname{SAVE\expandafter\gobble\string#1}}
+\def\savetobox#1{\global\setbox#1 = \vbox\bgroup \unvbox#1}
+
+\def\checksaveins#1{\ifvoid#1\else \placesaveins#1\fi}
+
+\def\placesaveins#1{%
+ \ptexinsert \csname\expandafter\gobblesave\string#1\endcsname
+ {\box#1}%
+}
+
+% eat @SAVE -- beware, all of them have catcode \other:
+{
+ \def\dospecials{\do S\do A\do V\do E} \uncatcodespecials % ;-)
+ \gdef\gobblesave @SAVE{}
+}
+
+% initialization:
+\def\newsaveins #1{%
+ \edef\next{\noexpand\newsaveinsX \makeSAVEname#1}%
+ \next
+}
+\def\newsaveinsX #1{%
+ \csname newbox\endcsname #1%
+ \expandafter\def\expandafter\checkinserts\expandafter{\checkinserts
+ \checksaveins #1}%
+}
+
+% initialize:
+\let\checkinserts\empty
+\newsaveins\footins
+\newsaveins\margin
+
+
+% @image. We use the macros from epsf.tex to support this.
+% If epsf.tex is not installed and @image is used, we complain.
+%
+% Check for and read epsf.tex up front. If we read it only at @image
+% time, we might be inside a group, and then its definitions would get
+% undone and the next image would fail.
+\openin 1 = epsf.tex
+\ifeof 1 \else
+ % Do not bother showing banner with epsf.tex v2.7k (available in
+ % doc/epsf.tex and on ctan).
+ \def\epsfannounce{\toks0 = }%
+ \input epsf.tex
+\fi
+\closein 1
+%
+% We will only complain once about lack of epsf.tex.
+\newif\ifwarnednoepsf
+\newhelp\noepsfhelp{epsf.tex must be installed for images to
+ work. It is also included in the Texinfo distribution, or you can get
+ it from ftp://tug.org/tex/epsf.tex.}
+%
+\def\image#1{%
+ \ifx\epsfbox\undefined
+ \ifwarnednoepsf \else
+ \errhelp = \noepsfhelp
+ \errmessage{epsf.tex not found, images will be ignored}%
+ \global\warnednoepsftrue
+ \fi
+ \else
+ \imagexxx #1,,,,,\finish
+ \fi
+}
+%
+% Arguments to @image:
+% #1 is (mandatory) image filename; we tack on .eps extension.
+% #2 is (optional) width, #3 is (optional) height.
+% #4 is (ignored optional) html alt text.
+% #5 is (ignored optional) extension.
+% #6 is just the usual extra ignored arg for parsing this stuff.
+\newif\ifimagevmode
+\def\imagexxx#1,#2,#3,#4,#5,#6\finish{\begingroup
+ \catcode`\^^M = 5 % in case we're inside an example
+ \normalturnoffactive % allow _ et al. in names
+ % If the image is by itself, center it.
+ \ifvmode
+ \imagevmodetrue
+ \nobreak\bigskip
+ % Usually we'll have text after the image which will insert
+ % \parskip glue, so insert it here too to equalize the space
+ % above and below.
+ \nobreak\vskip\parskip
+ \nobreak
+ \line\bgroup
+ \fi
+ %
+ % Output the image.
+ \ifpdf
+ \dopdfimage{#1}{#2}{#3}%
+ \else
+ % \epsfbox itself resets \epsf?size at each figure.
+ \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi
+ \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi
+ \epsfbox{#1.eps}%
+ \fi
+ %
+ \ifimagevmode \egroup \bigbreak \fi % space after the image
+\endgroup}
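+%
+% For example, "@image{tree-diagram, 3in}" (file name invented) arrives
+% here with #1=tree-diagram and #2=3in; the non-pdf branch above then sets
+% \epsfxsize to 3in and reads tree-diagram.eps.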
+
+
+% @float FLOATTYPE,LABEL,LOC ... @end float for displayed figures, tables,
+% etc. We don't actually implement floating yet, we always include the
+% float "here". But it seemed the best name for the future.
+%
+\envparseargdef\float{\eatcommaspace\eatcommaspace\dofloat#1, , ,\finish}
+
+% There may be a space before second and/or third parameter; delete it.
+\def\eatcommaspace#1, {#1,}
+
+% #1 is the optional FLOATTYPE, the text label for this float, typically
+% "Figure", "Table", "Example", etc. Can't contain commas. If omitted,
+% this float will not be numbered and cannot be referred to.
+%
+% #2 is the optional xref label. Also must be present for the float to
+% be referable.
+%
+% #3 is the optional positioning argument; for now, it is ignored. It
+% will somehow specify the positions allowed to float to (here, top, bottom).
+%
+% We keep a separate counter for each FLOATTYPE, which we reset at each
+% chapter-level command.
+\let\resetallfloatnos=\empty
+%
+\def\dofloat#1,#2,#3,#4\finish{%
+ \let\thiscaption=\empty
+ \let\thisshortcaption=\empty
+ %
+ % don't lose footnotes inside @float.
+ %
+  % BEWARE: when the floats start floating, we have to issue a warning
+  % whenever an insert appears inside a float which could possibly
+  % float. --kasal, 26may04
+ %
+ \startsavinginserts
+ %
+ % We can't be used inside a paragraph.
+ \par
+ %
+ \vtop\bgroup
+ \def\floattype{#1}%
+ \def\floatlabel{#2}%
+ \def\floatloc{#3}% we do nothing with this yet.
+ %
+ \ifx\floattype\empty
+ \let\safefloattype=\empty
+ \else
+ {%
+ % the floattype might have accents or other special characters,
+ % but we need to use it in a control sequence name.
+ \indexnofonts
+ \turnoffactive
+ \xdef\safefloattype{\floattype}%
+ }%
+ \fi
+ %
+ % If label is given but no type, we handle that as the empty type.
+ \ifx\floatlabel\empty \else
+ % We want each FLOATTYPE to be numbered separately (Figure 1,
+ % Table 1, Figure 2, ...). (And if no label, no number.)
+ %
+ \expandafter\getfloatno\csname\safefloattype floatno\endcsname
+ \global\advance\floatno by 1
+ %
+ {%
+ % This magic value for \thissection is output by \setref as the
+ % XREFLABEL-title value. \xrefX uses it to distinguish float
+ % labels (which have a completely different output format) from
+ % node and anchor labels. And \xrdef uses it to construct the
+ % lists of floats.
+ %
+ \edef\thissection{\floatmagic=\safefloattype}%
+ \setref{\floatlabel}{Yfloat}%
+ }%
+ \fi
+ %
+ % start with \parskip glue, I guess.
+ \vskip\parskip
+ %
+ % Don't suppress indentation if a float happens to start a section.
+ \restorefirstparagraphindent
+}
+
+% we have these possibilities:
+% @float Foo,lbl & @caption{Cap}: Foo 1.1: Cap
+% @float Foo,lbl & no caption: Foo 1.1
+% @float Foo & @caption{Cap}: Foo: Cap
+% @float Foo & no caption: Foo
+% @float ,lbl & @caption{Cap}: 1.1: Cap
+% @float ,lbl & no caption: 1.1
+% @float & @caption{Cap}: Cap
+% @float & no caption:
+%
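+% A complete example of the first case above (names invented):
+%   @float Figure,fig:layout
+%   @image{layout}
+%   @caption{A sample directory layout.}
+%   @end float
+% prints its caption line as "Figure N.M: A sample directory layout."
+%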
+\def\Efloat{%
+ \let\floatident = \empty
+ %
+ % In all cases, if we have a float type, it comes first.
+ \ifx\floattype\empty \else \def\floatident{\floattype}\fi
+ %
+ % If we have an xref label, the number comes next.
+ \ifx\floatlabel\empty \else
+ \ifx\floattype\empty \else % if also had float type, need tie first.
+ \appendtomacro\floatident{\tie}%
+ \fi
+ % the number.
+ \appendtomacro\floatident{\chaplevelprefix\the\floatno}%
+ \fi
+ %
+ % Start the printed caption with what we've constructed in
+ % \floatident, but keep it separate; we need \floatident again.
+ \let\captionline = \floatident
+ %
+ \ifx\thiscaption\empty \else
+ \ifx\floatident\empty \else
+ \appendtomacro\captionline{: }% had ident, so need a colon between
+ \fi
+ %
+ % caption text.
+ \appendtomacro\captionline{\scanexp\thiscaption}%
+ \fi
+ %
+ % If we have anything to print, print it, with space before.
+ % Eventually this needs to become an \insert.
+ \ifx\captionline\empty \else
+ \vskip.5\parskip
+ \captionline
+ %
+ % Space below caption.
+ \vskip\parskip
+ \fi
+ %
+ % If have an xref label, write the list of floats info. Do this
+ % after the caption, to avoid chance of it being a breakpoint.
+ \ifx\floatlabel\empty \else
+ % Write the text that goes in the lof to the aux file as
+ % \floatlabel-lof. Besides \floatident, we include the short
+ % caption if specified, else the full caption if specified, else nothing.
+ {%
+ \atdummies
+ %
+ % since we read the caption text in the macro world, where ^^M
+ % is turned into a normal character, we have to scan it back, so
+ % we don't write the literal three characters "^^M" into the aux file.
+ \scanexp{%
+ \xdef\noexpand\gtemp{%
+ \ifx\thisshortcaption\empty
+ \thiscaption
+ \else
+ \thisshortcaption
+ \fi
+ }%
+ }%
+ \immediate\write\auxfile{@xrdef{\floatlabel-lof}{\floatident
+ \ifx\gtemp\empty \else : \gtemp \fi}}%
+ }%
+ \fi
+ \egroup % end of \vtop
+ %
+ % place the captured inserts
+ %
+  % BEWARE: when the floats start floating, we have to issue a warning
+ % whenever an insert appears inside a float which could possibly
+ % float. --kasal, 26may04
+ %
+ \checkinserts
+}
+
+% Append the tokens #2 to the definition of macro #1, not expanding either.
+%
+\def\appendtomacro#1#2{%
+ \expandafter\def\expandafter#1\expandafter{#1#2}%
+}
+
+% @caption, @shortcaption
+%
+\def\caption{\docaption\thiscaption}
+\def\shortcaption{\docaption\thisshortcaption}
+\def\docaption{\checkenv\float \bgroup\scanargctxt\defcaption}
+\def\defcaption#1#2{\egroup \def#1{#2}}
+
+% The parameter is the control sequence identifying the counter we are
+% going to use. Create it if it doesn't exist and assign it to \floatno.
+\def\getfloatno#1{%
+ \ifx#1\relax
+ % Haven't seen this figure type before.
+ \csname newcount\endcsname #1%
+ %
+ % Remember to reset this floatno at the next chap.
+ \expandafter\gdef\expandafter\resetallfloatnos
+ \expandafter{\resetallfloatnos #1=0 }%
+ \fi
+ \let\floatno#1%
+}
+
+% \setref calls this to get the XREFLABEL-snt value. We want an @xref
+% to the FLOATLABEL to expand to "Figure 3.1". We call \setref when we
+% first read the @float command.
+%
+\def\Yfloat{\floattype@tie \chaplevelprefix\the\floatno}%
+
+% Magic string used for the XREFLABEL-title value, so \xrefX can
+% distinguish floats from other xref types.
+\def\floatmagic{!!float!!}
+
+% #1 is the control sequence we are passed; we expand into a conditional
+% which is true if #1 represents a float ref. That is, the magic
+% \thissection value which we \setref above.
+%
+\def\iffloat#1{\expandafter\doiffloat#1==\finish}
+%
+% #1 is (maybe) the \floatmagic string. If so, #2 will be the
+% (safe) float type for this float. We set \iffloattype to #2.
+%
+\def\doiffloat#1=#2=#3\finish{%
+ \def\temp{#1}%
+ \def\iffloattype{#2}%
+ \ifx\temp\floatmagic
+}
+
+% @listoffloats FLOATTYPE - print a list of floats like a table of contents.
+%
+\parseargdef\listoffloats{%
+ \def\floattype{#1}% floattype
+ {%
+ % the floattype might have accents or other special characters,
+ % but we need to use it in a control sequence name.
+ \indexnofonts
+ \turnoffactive
+ \xdef\safefloattype{\floattype}%
+ }%
+ %
+ % \xrdef saves the floats as a \do-list in \floatlistSAFEFLOATTYPE.
+ \expandafter\ifx\csname floatlist\safefloattype\endcsname \relax
+ \ifhavexrefs
+ % if the user said @listoffloats foo but never @float foo.
+ \message{\linenumber No `\safefloattype' floats to list.}%
+ \fi
+ \else
+ \begingroup
+ \leftskip=\tocindent % indent these entries like a toc
+ \let\do=\listoffloatsdo
+ \csname floatlist\safefloattype\endcsname
+ \endgroup
+ \fi
+}
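+%
+% For example, "@listoffloats Figure" walks the saved \floatlistFigure
+% list and typesets one \entry line for each "@float Figure,label" that
+% appeared in the document.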
+
+% This is called on each entry in a list of floats. We're passed the
+% xref label, in the form LABEL-title, which is how we save it in the
+% aux file. We strip off the -title and look up \XRLABEL-lof, which
+% has the text we're supposed to typeset here.
+%
+% Figures without xref labels will not be included in the list (since
+% they won't appear in the aux file).
+%
+\def\listoffloatsdo#1{\listoffloatsdoentry#1\finish}
+\def\listoffloatsdoentry#1-title\finish{{%
+ % Can't fully expand XR#1-lof because it can contain anything. Just
+ % pass the control sequence. On the other hand, XR#1-pg is just the
+ % page number, and we want to fully expand that so we can get a link
+ % in pdf output.
+ \toksA = \expandafter{\csname XR#1-lof\endcsname}%
+ %
+ % use the same \entry macro we use to generate the TOC and index.
+ \edef\writeentry{\noexpand\entry{\the\toksA}{\csname XR#1-pg\endcsname}}%
+ \writeentry
+}}
+
+\message{localization,}
+% and i18n.
+
+% @documentlanguage is usually given very early, just after
+% @setfilename. If done too late, it may not override everything
+% properly. Single argument is the language abbreviation.
+% It would be nice if we could set up a hyphenation file here.
+%
+\parseargdef\documentlanguage{%
+ \tex % read txi-??.tex file in plain TeX.
+ % Read the file if it exists.
+ \openin 1 txi-#1.tex
+ \ifeof 1
+ \errhelp = \nolanghelp
+ \errmessage{Cannot read language file txi-#1.tex}%
+ \else
+ \input txi-#1.tex
+ \fi
+ \closein 1
+ \endgroup
+}
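+% For example, "@documentlanguage fr" tries to \input txi-fr.tex, which
+% is expected to redefine the \putword... strings and related settings
+% for French.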
+\newhelp\nolanghelp{The given language definition file cannot be found or
+is empty. Maybe you need to install it? Putting the file in the current
+directory should work if nowhere else does.}
+
+
+% @documentencoding should change something in TeX eventually, most
+% likely, but for now just recognize it.
+\let\documentencoding = \comment
+
+
+% Page size parameters.
+%
+\newdimen\defaultparindent \defaultparindent = 15pt
+
+\chapheadingskip = 15pt plus 4pt minus 2pt
+\secheadingskip = 12pt plus 3pt minus 2pt
+\subsecheadingskip = 9pt plus 2pt minus 2pt
+
+% Prevent underfull vbox error messages.
+\vbadness = 10000
+
+% Don't be so finicky about underfull hboxes, either.
+\hbadness = 2000
+
+% Following George Bush, just get rid of widows and orphans.
+\widowpenalty=10000
+\clubpenalty=10000
+
+% Use TeX 3.0's \emergencystretch to help line breaking, but if we're
+% using an old version of TeX, don't do anything. We want the amount of
+% stretch added to depend on the line length, hence the dependence on
+% \hsize. We call this whenever the paper size is set.
+%
+\def\setemergencystretch{%
+ \ifx\emergencystretch\thisisundefined
+ % Allow us to assign to \emergencystretch anyway.
+ \def\emergencystretch{\dimen0}%
+ \else
+ \emergencystretch = .15\hsize
+ \fi
+}
+
+% Parameters in order: 1) textheight; 2) textwidth;
+% 3) voffset; 4) hoffset; 5) binding offset; 6) topskip;
+% 7) physical page height; 8) physical page width.
+%
+% We also call \setleading{\textleading}, so the caller should define
+% \textleading. The caller should also set \parskip.
+%
+\def\internalpagesizes#1#2#3#4#5#6#7#8{%
+ \voffset = #3\relax
+ \topskip = #6\relax
+ \splittopskip = \topskip
+ %
+ \vsize = #1\relax
+ \advance\vsize by \topskip
+ \outervsize = \vsize
+ \advance\outervsize by 2\topandbottommargin
+ \pageheight = \vsize
+ %
+ \hsize = #2\relax
+ \outerhsize = \hsize
+ \advance\outerhsize by 0.5in
+ \pagewidth = \hsize
+ %
+ \normaloffset = #4\relax
+ \bindingoffset = #5\relax
+ %
+ \ifpdf
+ \pdfpageheight #7\relax
+ \pdfpagewidth #8\relax
+ \fi
+ %
+ \setleading{\textleading}
+ %
+ \parindent = \defaultparindent
+ \setemergencystretch
+}
+
+% @letterpaper (the default).
+\def\letterpaper{{\globaldefs = 1
+ \parskip = 3pt plus 2pt minus 1pt
+ \textleading = 13.2pt
+ %
+ % If page is nothing but text, make it come out even.
+ \internalpagesizes{46\baselineskip}{6in}%
+ {\voffset}{.25in}%
+ {\bindingoffset}{36pt}%
+ {11in}{8.5in}%
+}}
+
+% Use @smallbook to reset parameters for 7x9.25 trim size.
+\def\smallbook{{\globaldefs = 1
+ \parskip = 2pt plus 1pt
+ \textleading = 12pt
+ %
+ \internalpagesizes{7.5in}{5in}%
+ {\voffset}{.25in}%
+ {\bindingoffset}{16pt}%
+ {9.25in}{7in}%
+ %
+ \lispnarrowing = 0.3in
+ \tolerance = 700
+ \hfuzz = 1pt
+ \contentsrightmargin = 0pt
+ \defbodyindent = .5cm
+}}
+
+% Use @smallerbook to reset parameters for 6x9 trim size.
+% (Just testing, parameters still in flux.)
+\def\smallerbook{{\globaldefs = 1
+ \parskip = 1.5pt plus 1pt
+ \textleading = 12pt
+ %
+ \internalpagesizes{7.4in}{4.8in}%
+ {-.2in}{-.4in}%
+ {0pt}{14pt}%
+ {9in}{6in}%
+ %
+ \lispnarrowing = 0.25in
+ \tolerance = 700
+ \hfuzz = 1pt
+ \contentsrightmargin = 0pt
+ \defbodyindent = .4cm
+}}
+
+% Use @afourpaper to print on European A4 paper.
+\def\afourpaper{{\globaldefs = 1
+ \parskip = 3pt plus 2pt minus 1pt
+ \textleading = 13.2pt
+ %
+ % Double-side printing via postscript on Laserjet 4050
+ % prints double-sided nicely when \bindingoffset=10mm and \hoffset=-6mm.
+ % To change the settings for a different printer or situation, adjust
+ % \normaloffset until the front-side and back-side texts align. Then
+ % do the same for \bindingoffset. You can set these for testing in
+ % your texinfo source file like this:
+ % @tex
+ % \global\normaloffset = -6mm
+ % \global\bindingoffset = 10mm
+ % @end tex
+ \internalpagesizes{51\baselineskip}{160mm}
+ {\voffset}{\hoffset}%
+ {\bindingoffset}{44pt}%
+ {297mm}{210mm}%
+ %
+ \tolerance = 700
+ \hfuzz = 1pt
+ \contentsrightmargin = 0pt
+ \defbodyindent = 5mm
+}}
+
+% Use @afivepaper to print on European A5 paper.
+% From romildo@urano.iceb.ufop.br, 2 July 2000.
+% He also recommends making @example and @lisp be small.
+\def\afivepaper{{\globaldefs = 1
+ \parskip = 2pt plus 1pt minus 0.1pt
+ \textleading = 12.5pt
+ %
+ \internalpagesizes{160mm}{120mm}%
+ {\voffset}{\hoffset}%
+ {\bindingoffset}{8pt}%
+ {210mm}{148mm}%
+ %
+ \lispnarrowing = 0.2in
+ \tolerance = 800
+ \hfuzz = 1.2pt
+ \contentsrightmargin = 0pt
+ \defbodyindent = 2mm
+ \tableindent = 12mm
+}}
+
+% A specific text layout, 24x15cm overall, intended for A4 paper.
+\def\afourlatex{{\globaldefs = 1
+ \afourpaper
+ \internalpagesizes{237mm}{150mm}%
+ {\voffset}{4.6mm}%
+ {\bindingoffset}{7mm}%
+ {297mm}{210mm}%
+ %
+ % Must explicitly reset to 0 because we call \afourpaper.
+ \globaldefs = 0
+}}
+
+% Use @afourwide to print on A4 paper in landscape format.
+\def\afourwide{{\globaldefs = 1
+ \afourpaper
+ \internalpagesizes{241mm}{165mm}%
+ {\voffset}{-2.95mm}%
+ {\bindingoffset}{7mm}%
+ {297mm}{210mm}%
+ \globaldefs = 0
+}}
+
+% @pagesizes TEXTHEIGHT[,TEXTWIDTH]
+% Perhaps we should allow setting the margins, \topskip, \parskip,
+% and/or leading, also. Or perhaps we should compute them somehow.
+%
+\parseargdef\pagesizes{\pagesizesyyy #1,,\finish}
+\def\pagesizesyyy#1,#2,#3\finish{{%
+ \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \hsize=#2\relax \fi
+ \globaldefs = 1
+ %
+ \parskip = 3pt plus 2pt minus 1pt
+ \setleading{\textleading}%
+ %
+ \dimen0 = #1
+ \advance\dimen0 by \voffset
+ %
+ \dimen2 = \hsize
+ \advance\dimen2 by \normaloffset
+ %
+ \internalpagesizes{#1}{\hsize}%
+ {\voffset}{\normaloffset}%
+ {\bindingoffset}{44pt}%
+ {\dimen0}{\dimen2}%
+}}
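+%
+% For instance, "@pagesizes 9.5in, 6.5in" makes the text 9.5in tall and
+% 6.5in wide; the physical page dimensions passed to \internalpagesizes
+% are then derived from those values plus \voffset and \normaloffset.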
+
+% Set default to letter.
+%
+\letterpaper
+
+
+\message{and turning on texinfo input format.}
+
+% Define macros to output various characters with catcode for normal text.
+\catcode`\"=\other
+\catcode`\~=\other
+\catcode`\^=\other
+\catcode`\_=\other
+\catcode`\|=\other
+\catcode`\<=\other
+\catcode`\>=\other
+\catcode`\+=\other
+\catcode`\$=\other
+\def\normaldoublequote{"}
+\def\normaltilde{~}
+\def\normalcaret{^}
+\def\normalunderscore{_}
+\def\normalverticalbar{|}
+\def\normalless{<}
+\def\normalgreater{>}
+\def\normalplus{+}
+\def\normaldollar{$}%$ font-lock fix
+
+% This macro is used to make a character print one way in \tt
+% (where it can probably be output as-is), and another way in other fonts,
+% where something hairier probably needs to be done.
+%
+% #1 is what to print if we are indeed using \tt; #2 is what to print
+% otherwise. Since all the Computer Modern typewriter fonts have zero
+% interword stretch (and shrink), and it is reasonable to expect all
+% typewriter fonts to have this, we can check that font parameter.
+%
+\def\ifusingtt#1#2{\ifdim \fontdimen3\font=0pt #1\else #2\fi}
+
+% Same as above, but check for italic font. Actually this also catches
+% non-italic slanted fonts since it is impossible to distinguish them from
+% italic fonts. But since this is only used by $ and it uses \sl anyway
+% this is not a problem.
+\def\ifusingit#1#2{\ifdim \fontdimen1\font>0pt #1\else #2\fi}
+
+% Turn off all special characters except @
+% (and those which the user can use as if they were ordinary).
+% Most of these we simply print from the \tt font, but for some, we can
+% use math or other variants that look better in normal text.
+
+\catcode`\"=\active
+\def\activedoublequote{{\tt\char34}}
+\let"=\activedoublequote
+\catcode`\~=\active
+\def~{{\tt\char126}}
+\chardef\hat=`\^
+\catcode`\^=\active
+\def^{{\tt \hat}}
+
+\catcode`\_=\active
+\def_{\ifusingtt\normalunderscore\_}
+\let\realunder=_
+% Subroutine for the previous macro.
+\def\_{\leavevmode \kern.07em \vbox{\hrule width.3em height.1ex}\kern .07em }
+
+\catcode`\|=\active
+\def|{{\tt\char124}}
+\chardef \less=`\<
+\catcode`\<=\active
+\def<{{\tt \less}}
+\chardef \gtr=`\>
+\catcode`\>=\active
+\def>{{\tt \gtr}}
+\catcode`\+=\active
+\def+{{\tt \char 43}}
+\catcode`\$=\active
+\def${\ifusingit{{\sl\$}}\normaldollar}%$ font-lock fix
+
+% If a .fmt file is being used, characters that might appear in a file
+% name cannot be active until we have parsed the command line.
+% So turn them off again, and have \everyjob (or @setfilename) turn them on.
+% \otherifyactive is called near the end of this file.
+\def\otherifyactive{\catcode`+=\other \catcode`\_=\other}
+
+% Used sometimes to turn off (effectively) the active characters even after
+% parsing them.
+\def\turnoffactive{%
+ \normalturnoffactive
+ \otherbackslash
+}
+
+\catcode`\@=0
+
+% \backslashcurfont outputs one backslash character in current font,
+% as in \char`\\.
+\global\chardef\backslashcurfont=`\\
+\global\let\rawbackslashxx=\backslashcurfont % let existing .??s files work
+
+% \realbackslash is an actual character `\' with catcode other, and
+% \doublebackslash is two of them (for the pdf outlines).
+{\catcode`\\=\other @gdef@realbackslash{\} @gdef@doublebackslash{\\}}
+
+% In texinfo, backslash is an active character; it prints the backslash
+% in fixed width font.
+\catcode`\\=\active
+@def@normalbackslash{{@tt@backslashcurfont}}
+% On startup, @fixbackslash assigns:
+% @let \ = @normalbackslash
+
+% \rawbackslash defines an active \ to do \backslashcurfont.
+% \otherbackslash defines an active \ to be a literal `\' character with
+% catcode other.
+@gdef@rawbackslash{@let\=@backslashcurfont}
+@gdef@otherbackslash{@let\=@realbackslash}
+
+% Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of
+% the literal character `\'.
+%
+@def@normalturnoffactive{%
+ @let\=@normalbackslash
+ @let"=@normaldoublequote
+ @let~=@normaltilde
+ @let^=@normalcaret
+ @let_=@normalunderscore
+ @let|=@normalverticalbar
+ @let<=@normalless
+ @let>=@normalgreater
+ @let+=@normalplus
+ @let$=@normaldollar %$ font-lock fix
+ @unsepspaces
+}
+
+% Make _ and + \other characters, temporarily.
+% This is canceled by @fixbackslash.
+@otherifyactive
+
+% If a .fmt file is being used, we don't want the `\input texinfo' to show up.
+% That is what \eatinput is for; after that, the `\' should revert to printing
+% a backslash.
+%
+@gdef@eatinput input texinfo{@fixbackslash}
+@global@let\ = @eatinput
+
+% On the other hand, perhaps the file did not have a `\input texinfo'. Then
+% the first `\' in the file would cause an error. This macro tries to fix
+% that, assuming it is called before the first `\' could plausibly occur.
+% Also turn back on active characters that might appear in the input
+% file name, in case not using a pre-dumped format.
+%
+@gdef@fixbackslash{%
+ @ifx\@eatinput @let\ = @normalbackslash @fi
+ @catcode`+=@active
+ @catcode`@_=@active
+}
+
+% Say @foo, not \foo, in error messages.
+@escapechar = `@@
+
+% These look ok in all fonts, so just make them not special.
+@catcode`@& = @other
+@catcode`@# = @other
+@catcode`@% = @other
+
+
+@c Local variables:
+@c eval: (add-hook 'write-file-hooks 'time-stamp)
+@c page-delimiter: "^\\\\message"
+@c time-stamp-start: "def\\\\texinfoversion{"
+@c time-stamp-format: "%:y-%02m-%02d.%02H"
+@c time-stamp-end: "}"
+@c End:
+
+@c vim:sw=2:
+
+@ignore
+ arch-tag: e1b36e32-c96e-4135-a41a-0b2efa2ea115
+@end ignore
diff --git a/version.texi b/version.texi
new file mode 100644
index 0000000..3245b32
--- /dev/null
+++ b/version.texi
@@ -0,0 +1,4 @@
+@set UPDATED 20 February 2008
+@set UPDATED-MONTH February 2008
+@set EDITION 2.0.2
+@set VERSION 2.0.2