| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
Some versions of autoconf require macro arguments to be bounded by
"[" and "]", others do not (the version I have does not), but we should use
them for best compatibility.
|
| |
|
|
|
|
|
|
| |
I added dummy gpcl6 and gxps exe names to avoid a warning when building from a
gs-only release archive. I neglected to add appropriate dummy install targets
for those, which caused an error with "make install".
|
|
|
|
|
| |
This caused us to ignore the printer specified by -s%printer%___
and always use the default printer if QueryUser was not specified.
|
| |
|
|
|
|
|
|
| |
But still skip adding them to the targets list if the source is not available.
This avoids a warning when building a Ghostscript only release archive.
|
| |
|
| |
|
| |
|
|
|
|
| |
about the revised directory structure, build and executable names
|
|
|
|
|
|
|
|
| |
Windows backtraces are limited to 63 levels; Linux ones can be
any size. I'd mistakenly overflowed the buffers in the Linux
case.
While we're fixing that, improve the code to require less copying.
|
|
|
|
|
|
|
|
| |
The fix for Windows builds broke Linux due to -DMEMENTO
being in CFLAGS on Windows, and GENOPT on configured builds.
Also tidy the code to avoid things detected by the pickier
compiler on Linux.
|
|
|
|
|
|
|
| |
uname in an MSYS2 terminal reports MSYS_NT-6.1, so add MSYS* to all the "case"
statements in configure.ac.
Note: a Cygwin terminal's uname reports CYGWIN_NT-6.1.
|
|
|
|
|
| |
So, windows.h cannot be used unless Microsoft extensions are enabled.
How dumb is that?
|
|
|
|
| |
Replace with proper configure setting.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If built with MEMENTO_DETAILS (on by default), we store the
backtrace on every event that affects a block.
Memento_details(address) will display the events that affected
a block (typically malloc, {realloc}*, free), including the
backtrace at each point.
Windows and Linux use different mechanisms for this. Windows
loads a DLL and calls Windows-specific functions; no extra
libraries are required.
Linux also loads a shared object (libbacktrace.so). This is not
present on all platforms, so on platforms where it is not available
we just get addresses. These can be converted using addr2line
(unless ASLR is enabled).
|
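As a rough illustration of the Linux side described above, glibc's execinfo interface can capture raw return addresses with no extra libraries; symbolising them (what libbacktrace or addr2line does) is a separate step. The function names here are hypothetical, not Memento's actual API:

```c
#include <execinfo.h>
#include <stdio.h>

#define MAX_FRAMES 64  /* note: Windows capture is capped at 63 frames */

/* Capture up to MAX_FRAMES raw return addresses for the current call
 * stack and return how many were recorded. */
static int capture_event_backtrace(void *frames[MAX_FRAMES])
{
    return backtrace(frames, MAX_FRAMES);
}

/* Print the raw addresses; without libbacktrace these can be resolved
 * later with addr2line (assuming ASLR is disabled). */
static void print_event_backtrace(void)
{
    void *frames[MAX_FRAMES];
    int i, n = capture_event_backtrace(frames);

    for (i = 0; i < n; i++)
        printf("frame %d: %p\n", i, frames[i]);
}
```

Storing such a trace on every malloc/realloc/free event is what allows the per-block history to be replayed later.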
|
|
|
|
|
|
|
|
|
|
|
|
| |
The PSD file format does not support multiple images (pages) in a single file,
so we spot such an attempt (by checking whether or not the output file string
has a "%d" in it) and throw an error.
This also throws an error if an attempt is made to write multiple pages to
/dev/null (as is done for performance testing).
Since it really doesn't matter if the output "file" is invalid when we're just
discarding the data, spot this case, and allow multi-page files to run
without error.
|
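The check described above amounts to roughly the following; the function name and exact rules are an illustrative sketch, not the real PSD driver code:

```c
#include <string.h>

/* Return 1 if writing page 'page_num' (1-based) to 'fname' is acceptable
 * for a format that cannot hold multiple pages per file, else 0. */
static int multipage_output_ok(const char *fname, int page_num)
{
    if (page_num <= 1)
        return 1;                     /* the first page is always fine */
    if (strstr(fname, "%d") != NULL)
        return 1;                     /* one file per page */
    if (strcmp(fname, "/dev/null") == 0)
        return 1;                     /* data is discarded anyway */
    return 0;                         /* would corrupt a single-image file */
}
```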
| |
|
| |
|
|
|
|
|
| |
copyright assignment, update the copyright headers for the RAM file
system code.
|
| |
|
|
|
|
|
| |
Thanks to Nikolaus Kreuzer for suggesting this method. It now seems to work
as requested and as I would expect.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
The gxino12b.c and gxino16b.c modules were no longer used, and the
graphics library always supported 12 and 16 bit images, so remove
the leftover code allowing these to be build-time options.
This was causing false positives with helgrind, since the procs
were being set into the two-dimensional array at run time. The
unpackicc_16 proc had this same code, even though there was no
method or check for non-support.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Bug #696626 "A PDF file causes ps2pdf crash"
The code for serialising a Type 2 function was assuming that C0 and C1
were valid (validated by the interpreter) and always present. In fact
both of these are optional, and we were attempting to dereference a NULL
pointer.
Altered the Type 2 serialise code to write the default values if either
or both of C0 and C1 are not present.
No differences expected.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ps2write does not currently handle Multiple Master fonts well; we know
this, and there is an open enhancement for it. We do have a hacky
work-around which turns the MM OtherSubrs into regular OtherSubrs using
the blended values.
This doesn't work when the font program uses constructs such as ' x y div'
in order to create floating point values. This could mean that we did
not have as many parameters on the stack as are required for
a given MM OtherSubr, leading to us indexing off the bottom of the
stack.
This meant we were using random values, but this didn't really matter
as the data is always going to be wrong. However, the address sanitizer
complains about this.
In this commit, if we would underrun the stack we just write a 0 instead.
|
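The guard described in the last line might look roughly like this; the stack representation and the function name are invented for illustration:

```c
/* Pop a value for an MM OtherSubr argument. If the stack would
 * underrun (because constructs like ' x y div' left fewer operands
 * than expected), return 0 rather than reading below the bottom of
 * the stack. */
static double pop_blend_arg(const double *stack_bottom, const double **sp)
{
    if (*sp <= stack_bottom)
        return 0.0;        /* would underrun: use 0 instead */
    return *--(*sp);
}
```

The substituted 0 is as wrong as the random value it replaces, but it keeps the read inside the allocation, which is what the sanitizer cares about.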
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When checking a font name to see if it's a URW replacement for a base 14
font, we were doing a memcmp with the length of the candidate name.
If that length exceeded the length of the URW name then we were in
effect comparing against part of the next name (or random bytes off the
end of the table).
Of course this is harmless, except in the highly unlikely case of the end
of the table not being followed by more data bytes. But the address
sanitizer complains, so we now compare the lengths of the two strings
first.
|
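The length-first comparison amounts to the following sketch; the function name is illustrative, not the actual Ghostscript code:

```c
#include <string.h>

/* Compare a candidate font name (which need not be NUL-terminated)
 * against a known URW name. Checking the lengths first means memcmp
 * can never read past the end of the shorter string. */
static int font_name_matches(const char *candidate, size_t candidate_len,
                             const char *urw_name)
{
    if (candidate_len != strlen(urw_name))
        return 0;
    return memcmp(candidate, urw_name, candidate_len) == 0;
}
```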
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Avoid searching the linked list of blocks in order to remove a
block by moving to a doubly linked list. This can be done
without increasing the amount of memory in use by making better
use of the 'parent' pointer that is only used when displaying
nested blocks.
Also store magic values in the 'child' and 'sibling' pointers
(again only used when displaying nested blocks) so that we
can quickly verify that a block is real before doing too much
with it.
Those changes drastically reduce the time required for
MEMENTO_LEAKONLY runs (now the same order of magnitude as non
memento runs).
Normal memento runs are still very slow when the numbers of
blocks increase due to the paranoid checking taking time.
To ameliorate this a bit, we try two other things.
Firstly, we optimise the searching of blocks by making use of
int aligned tests. This still doesn't make much difference.
Secondly, we introduce a new mechanism for the 'paranoia'
levels. If a negative number is given for the paranoia level
(say -n) then we first perform our overwrite checks after n events.
We next test after 2n events, then 4n, then 8n etc.
The new default paranoia level is set to be -1024.
This makes a huge difference, and brings normal memento runs
down to be comparable with debug runs.
|
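The exponential back-off for paranoia checks can be sketched as below. The names are illustrative, and I'm assuming the interval simply doubles after each check fires, which matches the description (n events, then 2n more, then 4n, ...) but may differ in detail from Memento's actual bookkeeping:

```c
typedef struct {
    long interval;    /* current gap between checks (starts at n)  */
    long next_check;  /* event count at which the next check fires */
} paranoia_t;

static void paranoia_init(paranoia_t *p, long n)
{
    p->interval = n;
    p->next_check = n;
}

/* Returns 1 if the expensive heap check should run at this event;
 * each time it fires, the interval to the next check doubles. */
static int paranoia_event(paranoia_t *p, long event)
{
    if (event < p->next_check)
        return 0;
    p->interval *= 2;             /* back off exponentially */
    p->next_check += p->interval;
    return 1;
}

/* Count how many checks fire for paranoia level -n over 'upto' events. */
static long paranoia_count(long n, long upto)
{
    paranoia_t p;
    long e, checks = 0;

    paranoia_init(&p, n);
    for (e = 1; e <= upto; e++)
        checks += paranoia_event(&p, e);
    return checks;
}
```

With the default of -1024, only a handful of full checks run over the first 100,000 events, versus roughly 97 at a fixed interval of 1024, which is where the speed-up comes from.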
|
|
|
|
|
|
|
|
|
|
| |
The current code globally enables use of the antidropout
downscaler on all halftoned devices (unless they override
the gxdso call), when interpolation is enabled.
We now introduce a -dAntidropoutDownscaler option that
defaults off. That sets a bit in dev.color_info that is
used to control the option. This decouples it from
interpolation.
|
|
|
|
|
| |
Marcos did some cleanup identifying broken links; based on his work, I've fixed
the broken links and removed those pointing to files that no longer exist.
|
|
|
|
| |
Only write downscaler options if we will read them.
|
|
|
|
|
|
|
|
|
|
|
|
| |
For high level forms support, if the device requests a specific CTM be
applied we calculate a clip path which includes negative co-ordinates
to ensure that a translated form won't be erroneously clipped to the
page.
When doing this, we need to account for the CTM when the form is executed
potentially being flipped or mirrored (or both!)
No differences expected.
|
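Accounting for a flipped or mirrored CTM when computing such a clip region amounts to taking the min/max over all four mapped corners rather than assuming the mapped origin stays the lower-left. A generic sketch (not the actual Ghostscript clip-path code; the types are invented):

```c
typedef struct { double xx, xy, yx, yy, tx, ty; } ctm_t;
typedef struct { double x0, y0, x1, y1; } rect_t;

/* Map one point through the CTM. */
static void ctm_apply(const ctm_t *m, double x, double y,
                      double *rx, double *ry)
{
    *rx = m->xx * x + m->yx * y + m->tx;
    *ry = m->xy * x + m->yy * y + m->ty;
}

/* Transform a rectangle, taking min/max over all four corners so the
 * result is correct even when the CTM flips or mirrors the axes (and
 * may therefore include negative co-ordinates). */
static rect_t rect_transform(const ctm_t *m, rect_t r)
{
    double xs[4], ys[4];
    rect_t out;
    int i;

    ctm_apply(m, r.x0, r.y0, &xs[0], &ys[0]);
    ctm_apply(m, r.x1, r.y0, &xs[1], &ys[1]);
    ctm_apply(m, r.x0, r.y1, &xs[2], &ys[2]);
    ctm_apply(m, r.x1, r.y1, &xs[3], &ys[3]);
    out.x0 = out.x1 = xs[0];
    out.y0 = out.y1 = ys[0];
    for (i = 1; i < 4; i++) {
        if (xs[i] < out.x0) out.x0 = xs[i];
        if (xs[i] > out.x1) out.x1 = xs[i];
        if (ys[i] < out.y0) out.y0 = ys[i];
        if (ys[i] > out.y1) out.y1 = ys[i];
    }
    return out;
}
```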
|
|
|
| |
Change an indent to silence Coverity.
|
|
|
|
|
| |
Some referenced files appear to no longer exist; I've left those in
the documentation but commented out with a 'missing' notation.
|
|
|
|
|
| |
The code that expanded alphas to 8 bits was incorrect for the
4 bit case.
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
There are still broken links in doc/Psfiles.htm, doc/Develop.htm, and
doc/Drivers.htm caused by the files they refer to having been moved to
a different directory. Those will be fixed in a separate commit.
|
|
|
|
|
|
|
|
|
|
|
| |
pbmraw (deliberately) doesn't know how to copy_alpha. We never
call copy_alpha when going direct to pbmraw because of this, but
the clist wrapping defeats our detection. This results in an
error.
Tests enabling copy_alpha for pbmraw show a degradation in render
quality, so instead we just disable all imagemask interpolation
when going to halftone devices.
|
|
|
|
|
|
|
|
| |
These build with the address sanitizer enabled.
They hackily set the '-i' flag in the recursive calls to make,
to sidestep the problems with genconf/mkromfs leaking at the
moment. These issues will be fixed and the -i removed.
|
|
|
|
| |
I had the depth checks in blank_unmasked_pixels wrong.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
mem_mono_copy_mono includes some clever code that reads
16 bits at a time and shifts to do fast mono copies of
unaligned data. This can result in overreading the end
of data by a byte, but never so far as to cause address
overflows due to the granularity at which data can be
allocated.
The 'overread' data is never actually used.
The simple fix here just extends the source block by
a byte to avoid the address sanitizer complaining. Valgrind
correctly does not flag this.
|
|
|
|
| |
Adopt Cecil Hornbaker's patch to solve this. Many thanks!
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Spotted while debugging bug 696612.
Various places in the code do:
{
    int j, code = 0;
    for (j = 0; j < n && code >= 0; j++)
    {
        STUFF
    }
}
which is a perfectly reasonable thing to do - except for the
facts that: 1) STUFF never alters the value of code, and 2) even
if STUFF did alter the value of code, we never check the value
and return it.
Accordingly, I've just removed the references to code.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
At the closedown of the file, we run through and write out fonts.
As part of this process, we check the glyphs in the font. If any
of the glyphs come back as bad, we abort the whole process.
Previously we ignored any errors here, and my change to make us
not ignore errors in the pdf_close routine caused this regression.
After discussion with Ken and Chris, the correct fix, I believe,
is to continue to catch and honour all errors in pdf_close, but
to explicitly swallow certain errors lower down.
Chris suggested, and I agree with him, that simply swallowing
the rangecheck error in psf_check_outline_glyphs would be an
acceptable fix (for now at least).
I am leaving the bug open and passing it to Ken so that he can
double check this area in more detail at his convenience.
|
|
|
|
|
|
|
|
| |
When creating a pdf14 device, ensure that the antialias level
of the pdf14 device matches that of the underlying device.
This prevents antialiasing getting lost when the clist kicks
in for transparency.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When we create a pattern accumulator the bitmap contents are
initially undefined. We then draw a rectangle over them to set
them to known values.
Unfortunately the code that writes this bitmap does not check
for the ctm being sane, so in some cases the initialisation
can fail.
This shows up as indeterminism in the alpha blending.
The simple fix is to set the ctm to the identity matrix before
rendering.
|