 README.tru64     |  8 ++++++++
 hints/dec_osf.sh | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+), 0 deletions(-)
diff --git a/README.tru64 b/README.tru64
index 297cab8327..e852a5cb11 100644
--- a/README.tru64
+++ b/README.tru64
@@ -26,6 +26,14 @@ of the op/regexp and op/pat, or ext/Storable tests dumping core
(the exact pattern of failures depending on the GCC release and
optimization flags).
+gcc 3.2.1 is known to work with Perl 5.8.0. However, when optimizing
+toke.c, gcc needs a lot of memory: 256 megabytes seems to be enough.
+The default process data segment limit in Tru64 should be one
+gigabyte, but some sites or setups might have lowered that. Perl's
+configuration process checks for too low a process data limit,
+lowers the optimization level for toke.c if necessary, and gives
+advice on how to raise the limit.
+
=head2 Using Large Files with Perl on Tru64
In Tru64 Perl is automatically able to use large files, that is,
diff --git a/hints/dec_osf.sh b/hints/dec_osf.sh
index 8ef151e93f..8cf54b19df 100644
--- a/hints/dec_osf.sh
+++ b/hints/dec_osf.sh
@@ -148,6 +148,42 @@ case "$optimize" in
;;
esac
+## Optimization limits
+case "$isgcc" in
+gcc)	# gcc 3.2.1 wants a lot of memory for -O3'ing toke.c.
+	cat > try.c <<EOF
+#include <sys/resource.h>
+#include <stdio.h>
+int main ()
+{
+    struct rlimit rl;
+    getrlimit (RLIMIT_DATA, &rl);
+    printf ("%ld\n", (long)(rl.rlim_cur / (1024 * 1024)));
+} /* main */
+EOF
+	$cc -o try $ccflags $ldflags try.c
+	maxdsiz=`./try`
+	rm -f try try.c core
+	if [ $maxdsiz -lt 256 ]; then
+	    # Less than 256 MB is probably not enough to optimize toke.c with gcc -O3.
+	    cat <<EOM >&4
+
+Your process data size is limited to $maxdsiz MB, which is sadly not
+always enough to fully optimize some of Perl's source files;
+at least 256 MB seems to be necessary as of Perl 5.8.0. I'll try to
+use a lower optimization level for those parts. You could try raising
+your data size with your shell's ulimit/limit/limits command
+(assuming the system-wide hard resource limits allow you to go higher),
+or, if you can't go higher and you are a sysadmin who *does* want
+the full optimization, you can tune the 'max_per_proc_data_size'
+kernel parameter: see man sysconfigtab and man sys_attrs_proc.
+
+EOM
+	    toke_cflags='optimize=-O2'
+	fi
+	;;
+esac
+
# we want dynamic fp rounding mode, and we want ieee exception semantics
case "$isgcc" in
gcc) ;;