| Commit message | Author | Age | Files | Lines |
|
Bug #703750 "-dRenderIntent=0 not working on linux"
For unknown reasons the Windows build calls gx_default_put_params()
twice with the same set of parameters; Linux, however, only calls it
once.
This is a problem because the first time it is called the device's
icc_struct member is NULL (it has not yet been allocated), so the
put_params call simply discards the request (without warning). On
Windows the second request succeeds, but since Linux only calls
put_params once, it ends up discarding the request entirely.
On examining the code it is clear that the gx_default_put_intent() and
similar functions all create the device's icc_struct member if it is not
already present, so clearly this is a problem which has been encountered
before. Since this is the case, there is no need for the guard against
dev->icc_struct being NULL in gx_default_put_params() and removing it
properly stores the requested parameters.
|
When doing a fill stroke in the presence of transparency, we use
a knockout group to get the correct results.
When we use alphabits, this results in the actual plotting
happening via copy_alpha_color.
The pdf14 copy_alpha_color routines don't ever appear to be called
in normal runs, and they don't seem to take note of the backdrop
for knockout runs.
Accordingly, we update them here so they do.
Also, to avoid nasty lines around the edge of stroke segments
we ensure that do_fill_stroke keeps the lop_pdf14 bit set
until AFTER the stroke is complete.
While this gives VASTLY improved rendering in most cases, this does
leave us with the classic 'white halos' around knocked out shapes.
What typically happens during a fill/stroke is that we push a
knockout group, and draw the fill into it. Then we draw the stroke
into it. As the stroke is drawn, all we have for each pixel is a
color and an alpha.
The old code used to use the alpha to mix between the given color,
and the current group contents. This is wrong, because it stops
knockout happening.
The new code uses the alpha to mix between the given color, and the
original background. This correctly knocks out colors, but leaves
nasty white halos around the edge of things.
I think the issue here is that we are getting a single alpha, when
really we'd like both opacity and shape. To do this properly, I
think we should combine the color and the original background by
opacity, and then combine the result of that with the current group
contents by shape. But we only have the combined opacity and shape
value. So we're probably doing as well as we can hope to for now.
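The two-step compositing described above can be written out explicitly. With source colour \(C_s\), original backdrop \(B\), current group contents \(G\), opacity \(\alpha_o\) and shape \(f_s\) (symbols follow the PDF transparency model, not variable names in the code), the ideal result would be:

```latex
% Step 1: knock out against the original backdrop, by opacity
C_{mix} = B + \alpha_o \,(C_s - B)
% Step 2: blend the result into the current group contents, by shape
C_{new} = G + f_s \,(C_{mix} - G)
% But all we receive is the single combined value
\alpha = \alpha_o \cdot f_s
% so the two mixing steps cannot be separated, hence the halos.
```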
|
Unused variables, 'may be used unset' or implementations not
matching prototypes.
These show up now as the gcc version on the nodes has increased.
|
"planar" can either be PLANAR_CONTIG or PLANAR_SEPARATE. We've
checked for PLANAR_CONTIG earlier, so no point in checking we're
not PLANAR_SEPARATE later.
|
Bug #693637 "Some of user options are not working in windows printer driver when option QueryUser set to 3."
Some (undocumented!) User settings which can only be altered by
sending PostScript were not being applied when the QueryUser
setting was 3 (use system default printer). This was despite
the Copies being set in this case, so this was clearly an oversight.
|
Bug #693636 "Antialiasing options are ignored in windows printer driver"
This commit processes the Graphics and Text AlphaBits parameters and
stores them. When we later set the actual colour depth of the device
we set the device color_info structure anti-alias bits from the
requested AlphaBits, taking the actual colour depth of the device into
account for the purposes of setting the maximum.
|
This supports the common form of Multipage TIFFs, whereby a single
TIFF has multiple IFDs.
We do not yet support the (much rarer) form of multipage TIFF,
whereby we have a single IFD with sub-IFDs in it.
Those are supposedly used for strange things like multiple subsamplings
of the same image, so are probably not actually what we want anyway.
|
Hopefully no change in behaviour.
This opens the way for multi-image support.
|
We were passing the wrong param to the poststroke handling.
|
Bug #706705 "zpdfops_op_defs[] table overflow"
If HAVE_LIBIDN is true then we would have more than 16 operators in the
table, which is the maximum (obviously HAVE_LIBIDN is not true on the
cluster).
Split the table up by moving the old PDF interpreter operators into a
new table, to make it easier to get rid of them in the future.
|
OSS-fuzz #58582
The fundamental problem here is that pdfwrite was assuming that the
font WMode could only ever be 0 or 1 (the only two valid values) and so
was using it as a bitfield, shifting and OR'ing it with other values.
The file in this case has a CMap which contains:
/WMode 8883123282518010140455180910294889 def
which gets clamped to the maximum unsigned integer 0x7fffff.
This led to a non-zero value in the flags to the glyph info code, when
the value *should* have been 0, which caused the graphics library to
take a code path which wasn't valid. This led to us trying to use a
member of a structure whose pointer was NULL.
I can't be certain whether other places in the code use WMode in the
same way, so I've chosen to fix this at several levels.
Firstly, in the code path we shouldn't reach (gs_type42_glyph_info_by_gid)
check the value of pmat before calling gs_default_glyph_info. That code
will try to use the matrix to scale the outline, so if it is NULL then
the result is undefined. This prevents the seg fault.
Secondly, in gdevpdtc.c, scan_cmap_text(), set wmode to be either 0 or
1, to ensure that it does work as a bit, rather than using the integer
value from the font and assuming it will be 0 or 1.
Finally in the three places in the PDF interpreter where we set the
WMode for the font, check to see if the value is either 0 or 1 and if it
is not, raise a warning and make it 0 or 1.
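The last of those three fixes can be sketched as below. This is an illustrative helper, not the actual Ghostscript code, and it assumes (as the message implies) that any non-zero value is treated as vertical writing:

```c
#include <stdio.h>

/* Illustrative sketch: force an arbitrary integer WMode from a CMap
 * down to the only two valid values (0 or 1), warning when the file
 * supplied something else, so that later code can safely shift and
 * OR it as a single flag bit. */
static int sanitize_wmode(long wmode)
{
    if (wmode != 0 && wmode != 1) {
        fprintf(stderr, "Warning: invalid /WMode %ld, treating as 1\n",
                wmode);
        return 1;  /* assumption: non-zero means vertical */
    }
    return (int)wmode;
}
```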
|
When using -dPDFINFO to get information about PDF files one of the
pieces of information is the embedding status of any fonts used on each
page. Unfortunately I messed up when coding and any non-error status
for a font reported it as being embedded.
Fix that here.
|
OSS-fuzz bug #58423
The problem is reported as a use-after-free; what I see is a colour
space persisting until long after the PDF interpreter has been freed,
and being cleaned up by the end-of-job restore.
Because the colour space was created by the PDF interpreter it has a
custom callback to free associated objects. But by the time we call that
callback the PDF interpreter has vanished.
This happens because in gx_pattern_load() we try to push the pdf14
compositor (the pattern has transparency) which fails. Instead of
cleaning up we were immediately returning, which was leaving the colour
space counted up, which is why it was not counted down and freed before
the interpreter exits.
Fix that here by using a 'goto' to the cleanup code instead of returning
the error code immediately.
Also, noted in passing, we don't need to set the callback in
pdfi_create_DeviceRGB(), because that is done in pdfi_gs_setrgbcolor.
Not only that, but there are circumstances under which we do not want
to set the callback (if the space came from PostScript not created by
the PDF interpreter) and that is catered for in pdfi_gs_setrgbcolor()
whereas it wasn't in pdfi_create_DeviceRGB. So remove the callback
assignment.
|
for tolower() prototype.
|
See bug for full discussion.
|
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.bitrgbtags.300.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.pgmraw.300.0.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.pgmraw.300.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.ppmraw.300.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.psdcmyk.300.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.psdcmykog.600.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.psdrgb.300.1.gs
tests/pdf/Bug6901014_CityMap-evince.simplified.pdf.tiffscaled.600.1.gs
tests_private/pdf/uploads/333.pdf.bitrgbtags.300.1.gs
tests_private/pdf/uploads/333.pdf.ppmraw.300.0.gs
tests_private/pdf/uploads/333.pdf.ppmraw.300.1.gs
tests_private/pdf/uploads/333.pdf.psdcmyk.300.1.gs
tests_private/pdf/uploads/333.pdf.psdcmyk16.300.1.gs
tests_private/pdf/uploads/333.pdf.psdcmykog.600.1.gs
tests_private/pdf/uploads/333.pdf.psdrgb.300.1.gs
These seem to be caused by us creating a memory device, then
detecting an error, and exiting with the device unused, but
half setup. That was enough to trip a problem when the device
is finalized by the gc system later.
Now, we move the operation that might cause an error to be before
the allocation of the memory device.
This leaves these files returning an error where previously they
didn't. This is less than ideal, so I've opened bug 706695 to track
that.
|
Copying a value twice. Fixing more for neatness than for
expectation of any other benefit.
|
Noticed while working on a different issue; we were not cleaning up the
dictionary, nor restoring back the PostScript state, after running a
Portfolio (PDF Collection) file.
Fixed by calling runpdfend after running all of the embedded PDF files.
|
OSS-fuzz #58405
There is a guard to prevent buffer overruns, but it wasn't taking the
NULL terminator into account. In addition, I think it is possible for
the required number of bytes to be 4, not 3, if the byte pointed to
is 0xCC (resulting in 'E-', 'E-' being generated) and then still
potentially requiring a NULL terminator for a total of 5 bytes.
Change the 3-byte minimum space requirement to 5.
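A minimal sketch of the revised space check; the names are illustrative, but the arithmetic mirrors the worst case described above: one 0xCC input byte can expand to 'E-' twice (four characters) plus a NUL terminator.

```c
/* Worst case per input byte: "E-" emitted twice (4 chars) plus the
 * NULL terminator, so 5 bytes must be available, not 3. */
enum { WORST_CASE_BYTES = 2 * 2 + 1 };

/* Illustrative guard: is there room to emit one more expanded byte
 * (including the terminator) into the output buffer? */
static int have_room(int bytes_used, int buffer_size)
{
    return buffer_size - bytes_used >= WORST_CASE_BYTES;
}
```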
|
Unlike any other object type, ref objects do not have their mark bit unset
during the first phase of a garbage collection.
They, in fact, require the bit to be unset explicitly at the end of the reloc
phase.
This was an omission from:
https://git.ghostscript.com/?p=ghostpdl.git;a=commitdiff;h=b0739b945394ec81a
|
Currently, it looks like we are hitting a problem with the fix for
bug 703324.
When that fix resets the fillstrokealpha value after the stroke
it does so by fiddling directly in the pgs. In the clist writing
case, this means that the pgs changes without the change being
considered for putting into the reader. This can cause the reader
and writer to get out of sync.
The fix, implemented here, is to call gs_update_trans_marking_params
to ensure that the change happens via the compositor mechanism
first. This keeps reader and writer in sync.
|
The code to handle fill_stroke in the presence of transparency
was being confused by the scaling applied to clipping regions.
When we fill_stroke, if transparency is present, then we need to
push a group. The bounds for this group are calculated based upon
the bounds for the path in question, expanded by an appropriate
amount for the current stroke state, and then clipped by the
clipping path.
In the presence of alphabits, the path is scaled up, and so the
bbox for it needs to be scaled down. The clipping path is similarly
scaled, so this too needs to be scaled down.
|
- Adds GPDL to TOC and tweaks Home icon CSS.
- A better TOC, updated copyright header.
- Adds headers in Use.rst for basic use examples.
|
OSS-fuzz #58341
The problem is that a Pattern PaintProc uses another Pattern; the
initial Pattern dictionary does not use transparency, and does not
declare the use of the child pattern, that is used by name and found in
the /Page /Resources dictionary. The child Pattern uses transparency.
There's basically no way for us to tell that the initial pattern should
push the pdf14 device, so we don't. Later on when we do push the device
for the child pattern we try to copy the DeviceN parameters, but the
device we are pointing at is the pattern accumulator, which doesn't
store the DeviceN parameters and can't return them, so it returns NULL.
We then try to dereference the NULL pointer.
Obviously this is only a problem when trying to use a DeviceN device,
in this case the psdcmyk device.
To fix this 'properly' we would either need to push the pdf14 device for
every pattern or have the pattern accumulator copy, store and return the
DeviceN parameters when requested.
For now I've chosen just to avoid the crash because this isn't really
my field and I'm wary about making extensive changes. If anyone ever
comes up with a real PDF file where this causes problems we'll address
it then.
Note that the PDF file is already contravening the spec which says that
the /Resources dictionary is *required* for Patterns (at least if the
Pattern uses any named resources).
|
Another set of typos in the shading optimisation code. This is a
different "class" of typo than before. I've checked the code
so that we should be free of problems of this class at least
in future.
|
Bug #706668 "regression: new PDF engine can't handle a broken PDF file, falsely reports password protected"
Although the file is damaged (in fact it appears to be 2 PDF files,
concatenated) that's not the problem. The second file is the one which
we process, and that file is encrypted. We simply were not setting the
encryption key length for Revision 3 and we were only reading the
actual key /Length if /V is present and has the value 2 or 3.
This is basically because PDF encryption is a mess; this was simply an
oversight. Fix it here.
|
Using the PDF object number (rather than the internal gs_id for the font) as
an element in the XUID means pdfwrite/ps2write can be more consistent in
identifying repeated uses of the same font embedded in a PDF.
|
To maintain interoperability with the Postscript interpreter, pdfi shares
the Postscript name table (when it's built into gs, rather than standalone).
On the basis that the name table is cleaned up only with global VM, and that
global VM is only gc'ed at the end of job (we thought!), we thought adding
names to the table was sufficient.
Turns out, there are circumstances under which global VM (and thus the name
table) do get gc'ed before the end of job, and in that case, names pdfi still
relied upon could disappear.
To avoid that, add the names (as key and value) to a Postscript dictionary,
known to the garbager, at the same time as we add them to the name table.
Came up cluster testing a fix for Bug 706595, with file:
tests_private/comparefiles/Bug691740.pdf
|
No file or bug report for this, the customer requested the files be
kept private. However any PDF Collection (Portfolio) file will show
the problem.
GhostPDF supports preserving embedded files from the input, but when
we are processing a PDF Collection we don't want to do that, because
in this case we run each of the embedded files individually. If we
copy the EmbeddedFiles as well then we end up duplicating them in the
output.
So, when processing EmbeddedFiles, check the Catalog to see if there is
a /Collection key, if there is then stop processing EmbeddedFiles.
The customer also pointed out there was no way to avoid embedding any
EmbeddedFiles from the input, so additionally add a new switch
-dPreserveEmbeddedFiles to control this. While we're doing that, add
one to control the preservation of 'DOCVIEW' (PageMode, PageLayout,
OpenAction) as well, -dPreserveDocView.
This then leads on to preventing the EmbeddedFiles in a PDF Collection
from writing their DocView information. If we let them do that then
we end up opening the file incorrectly.
To facilitate similar changes in the future I've rejigged the way
.PDFInit works, so that it calls a helper function to read any
interpreter parameters and applies them to the PDF context. I've also
added a new PostScript operator '.PDFSetParams' which takes a PDF
context and a dictionary of key/value pairs which it applies to the
context.
Sadly I can't actually use that for the docview control, because the
PDF initialisation is what processes the document, so changing it
afterwards is no help. So I've altered runpdfbegin to call a new
function runpdfbegin_with_params and pass an empty dictionary. That then
allows me to call runpdfbegin_with_params from the PDF Collection
processing, and turn off PreserveDocView.
So in summary; new controls PreserveDocView and PreserveEmbeddedFiles
and a new function .PDFSetParams to allow us to alter the PDF
interpreter parameters after .PDFInit is executed. PDF Collections no
longer embed duplicate files.
|
When we load a Type 1 font from a font file (i.e. not an embedded font), we use
the Adobe Glyph List to find if a glyph name has other names (based on the
Unicode code point).
For example, "/ocyrillic" is code point 0x43e, which also commonly maps to the
name "/afii10080".
Previously, we used "forall" to iterate through the CharStrings
dictionary, but that causes two problems. Firstly, and most importantly,
when we write new entries to that dictionary, if the dictionary has to
be extended, it ends up messing with the "forall" indexing. Secondly, it
means we do more work than necessary, because we potentially seek out
equivalents for names we've just added.
To improve this, populate an array with the original names from the CharStrings
dictionary, and iterate through that - thus the changing contents of the
dictionary doesn't matter.
|
The commit as is works, but it could fall foul of not checking the
state of 'negative' properly. It probably doesn't really matter (we
store the value as 64-bit anyway) but let's do this properly.
|
Bug #706607 "Old-fashioned password security not working"
The problem turns out to be on Windows, where we clamp integers to the
maximum platform integer value during parsing. Windows builds use a
32-bit maximum integer and we were clamping all integers to the maximum
signed integer value.
Ordinarily this isn't a problem because we rarely encounter values
genuinely that large. However in this file the /P value is stored as a
32-bit unsigned integer, even though the spec clearly states that it is a
2s-complement form. So the value is stored as 4294963428 when in fact
it *should* be -3868. Acrobat, of course, happily opens either way.
Alter the integer overflow detection so that we clamp signed integers
at the signed maximum and unsigned integers at the unsigned maximum.
This allows us to read the value from the file and treat it as Acrobat does.
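The /P example above can be checked with a small sketch: reinterpreting the unsigned 32-bit value the file stores as a two's-complement signed integer recovers the value Acrobat uses (the function name is illustrative, not the actual Ghostscript code).

```c
#include <stdint.h>

/* Reinterpret a /P value stored as an unsigned 32-bit integer as the
 * two's-complement signed value the spec intends. The out-of-range
 * cast is implementation-defined in older C standards, but is two's
 * complement in practice on every platform Ghostscript builds on. */
static int32_t pdf_p_as_signed(uint32_t raw)
{
    return (int32_t)raw;
}
```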
|
No bug report for this, but it's been annoying me for ages and so I'm
adding it as an enhancement and for some practice with pdfmark
generation.
Read the PageLayout, PageMode and OpenAction from the Root (Catalog)
dictionary and send them to pdfwrite.
|
Bug #706571 "Segmentation violation at base/gscparam.c:342 (in c_param_write function)"
We don't permit mixed-type arrays when parsing parameters; the number
parsing checked that the array wasn't a string array, but didn't check
to see if it was a name array.
|
|
Infuriatingly, there are at least a couple of fonts that a PDF interpreter just
has to know are symbolic, regardless of the flag in their font descriptor.
Wingdings and ZapfDingbats and their variants.
Worse, contrary to the spec, if a font object for one of those fonts includes an
encoding entry which is a name object, we have to ignore it.
|
OSS-fuzz #57880
toffs was fuzzed to be very nearly 2^32-1 and when the (valid) tlen was
added to it, the result overflowed a 32-bit value, evading the existing
check to ensure the table was entirely contained in the buffer of data.
Simply promote the 32-bit variables to 64-bit before performing the
arithmetic and the check. fbuflen is already a 64-bit value.
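The fix amounts to widening before adding. A sketch with illustrative names (`toffs` and `tlen` follow the message; the helper itself is not the actual Ghostscript code):

```c
#include <stdint.h>

/* Promote the 32-bit table offset and length to 64 bits *before* the
 * addition, so a fuzzed near-UINT32_MAX offset cannot wrap around and
 * slip past the containment check. */
static int table_in_buffer(uint32_t toffs, uint32_t tlen, int64_t fbuflen)
{
    return (int64_t)toffs + (int64_t)tlen <= fbuflen;
}
```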
|
Bug #706551 "ps2pdf corrupts Unicode title in PDF 1.4 XML metadata"
This is not in fact part of the original report and should have been reported
separately. Still....
The bug arose from me removing a partial implementation of surrogate
pairs when we started using Coverity, because Coverity complained about
the code and it was simpler to remove it. I clearly forgot to go back
and finish it.
This just adds code to deal with the (documented as unusual) case of
UTF-16 surrogate pairs. Seems to work with all the tests I can concoct.
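For reference, the case being handled combines a UTF-16 high surrogate (0xD800-0xDBFF) and low surrogate (0xDC00-0xDFFF) into a single code point; a minimal sketch (not the actual Ghostscript code):

```c
#include <stdint.h>

/* Combine a UTF-16 high/low surrogate pair into a Unicode code point
 * in the supplementary planes (U+10000..U+10FFFF). */
static uint32_t utf16_pair_to_cp(uint16_t hi, uint16_t lo)
{
    return 0x10000u +
           (((uint32_t)(hi - 0xD800u) << 10) | (uint32_t)(lo - 0xDC00u));
}
```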
|
Bug #706563 "PageSize Policy 3 can inappropriately rotate content"
As noted, if the medium is square, and the requested medium is portrait,
then we would end up rotating the content.
To avoid this, just check if the medium or request is square, and if so
set rotate to 0.
|
Bug #706551 "ps2pdf corrupts Unicode title in PDF 1.4 XML metadata"
In fact the XMP metadata string is not 'corrupted'; we simply replace all
UTF16 values > 0x800 with the Unicode replacement glyph. This had to do
with some investigations back in 2016 which never came to anything.
Simply promote UTF16 values > 0x800 to 3 bytes.
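The promotion to three bytes follows the standard UTF-8 pattern for BMP code points at or above 0x800; a sketch under the assumption that the metadata writer emits UTF-8 (function name illustrative):

```c
#include <stdint.h>

/* Encode a BMP code point as UTF-8. Values >= 0x800 take the 3-byte
 * form 1110xxxx 10xxxxxx 10xxxxxx, which is what gets emitted now
 * instead of the replacement glyph. Returns the bytes written. */
static int utf8_put_bmp(uint32_t cp, unsigned char *out)
{
    if (cp < 0x80) {                         /* 1-byte ASCII */
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {                        /* 2-byte form */
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    }
    out[0] = (unsigned char)(0xE0 | (cp >> 12));   /* 3-byte form */
    out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
    out[2] = (unsigned char)(0x80 | (cp & 0x3F));
    return 3;
}
```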
|
OSS-fuzz bug #57745
The problem in the report is that the BlackGeneration function is a 1-in
3-out function. It is required to be a 1-in, 1-out function. The result
was that the evaluation was writing 3 floats to a 1 float buffer.
Check the parameters of the function to make sure it is of the correct
size before trying to evaluate it.
I also desk-checked all the other uses of functions; most were already
checking the function parameters but I found two more cases which were
not. Fix the /Separation and DeviceN tint transform so that we check the
number of inputs and outputs to make sure they are correct.
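The kind of check added can be sketched as follows; the struct and field names are illustrative, not Ghostscript's actual function types:

```c
/* Illustrative arity check: a BlackGeneration (or tint transform)
 * function must have the expected number of inputs and outputs before
 * we evaluate it into a fixed-size output buffer; a 1-in 3-out
 * function would otherwise write 3 floats into a 1-float buffer. */
typedef struct {
    int num_inputs;
    int num_outputs;
} fn_arity_t;

static int fn_arity_ok(const fn_arity_t *fn, int m, int n)
{
    return fn->num_inputs == m && fn->num_outputs == n;
}
```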
|
Improve error messages.
|
This is so that Memento wrapped blocks get the same alignment % 32
as the underlying malloc blocks. This is important for setjmp
buffer usage on Windows (at least; probably Linux too).
|