| Commit message | Author | Age | Files | Lines |
OSS-fuzz #58582
The fundamental problem here is that pdfwrite was assuming that the
font WMode could only ever be 0 or 1 (the only two valid values) and so
was using it as a bitfield, shifting and OR'ing it with other values.
The file in this case has a CMap which contains:
/WMode 8883123282518010140455180910294889 def
which gets clamped to the maximum signed integer value 0x7fffffff.
This led to a non-zero value in the flags to the glyph info code, when
the value *should* have been 0, which caused the graphics library to
take a code path which wasn't valid. This led to us trying to use a
member of a structure whose pointer was NULL.
I can't be certain whether other places in the code use WMode in the
same way, so I've chosen to fix this at several levels.
Firstly, in the code path we shouldn't reach (gs_type42_glyph_info_by_gid),
check the value of pmat before calling gs_default_glyph_info. That code
will try to use the matrix to scale the outline, so if it is NULL then
the result is undefined. This prevents the segfault.
Secondly, in gdevpdtc.c, scan_cmap_text(), set wmode to be either 0 or
1, to ensure that it does work as a bit, rather than using the integer
value from the font and assuming it will be 0 or 1.
Finally in the three places in the PDF interpreter where we set the
WMode for the font, check to see if the value is either 0 or 1 and if it
is not, raise a warning and make it 0 or 1.
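The final level of the fix might look like the following sketch. The helper name and warning text are invented for illustration (this is not the actual pdfi code), and it assumes invalid non-zero values are coerced to 1:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical helper (illustrative name): coerce an out-of-range
 * /WMode to a valid value so later code can safely use it as a
 * single-bit flag. Assumes invalid non-zero values collapse to 1. */
static int validate_wmode(long wmode)
{
    if (wmode != 0 && wmode != 1) {
        fprintf(stderr, "Warning: invalid /WMode %ld\n", wmode);
        return 1;   /* non-zero -> 1, so shifting/OR'ing stays sane */
    }
    return (int)wmode;
}
```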
When using -dPDFINFO to get information about PDF files one of the
pieces of information is the embedding status of any fonts used on each
page. Unfortunately I made a mistake in the code, and any non-error
status for a font caused it to be reported as embedded.
Fix that here.
OSS-fuzz bug #58423
The problem is reported as a use-after-free, what I see is a colour
space persisting until long after the PDF interpreter has been freed,
and being cleaned up by the end of job restore.
Because the colour space was created by the PDF interpreter it has a
custom callback to free associated objects. But by the time we call that
callback the PDF interpreter has vanished.
This happens because in gx_pattern_load() we try to push the pdf14
compositor (the pattern has transparency) which fails. Instead of
cleaning up we were immediately returning, which was leaving the colour
space counted up, which is why it was not counted down and freed before
the interpreter exits.
Fix that here by using a 'goto' to the cleanup code instead of returning
the error code immediately.
Also, noted in passing: we don't need to set the callback in
pdfi_create_DeviceRGB(), because that is done in pdfi_gs_setrgbcolor().
Not only that, but there are circumstances in which we do not want
to set the callback (if the space came from PostScript rather than being
created by the PDF interpreter), and that is catered for in
pdfi_gs_setrgbcolor() whereas it wasn't in pdfi_create_DeviceRGB(). So
remove the callback assignment.
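The shape of the gx_pattern_load() fix is the classic C goto-cleanup idiom. A minimal sketch with invented names (not the actual pattern code):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal sketch of the goto-cleanup idiom adopted in the fix; all
 * names are invented, this is not the actual gx_pattern_load() code. */
typedef struct { int refs; } resource_t;

static int do_work(resource_t *r, int fail)
{
    int code = 0;

    r->refs++;              /* resource counted up while in use */
    if (fail) {
        code = -1;
        goto cleanup;       /* do NOT return here: that leaks the count */
    }
    /* ... normal processing ... */
cleanup:
    r->refs--;              /* runs on both success and error paths */
    return code;
}
```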
OSS-fuzz #58405
There is a guard to prevent buffer overruns, but it wasn't taking the
NULL terminator into account. In addition, I think it is possible for
the required number of bytes to be 4, not 3, if the byte pointed to
is 0xCC (resulting in 'E-', 'E-' being generated) and then still
potentially requiring a NULL terminator for a total of 5 bytes.
Change the 3-byte minimum space requirement to 5.
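The arithmetic behind the new guard, as a sketch (the macro and function names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative guard: the worst case for one input byte is four output
 * characters (e.g. "E-" "E-" for 0xCC) plus the NULL terminator, so
 * five bytes must be free, not three. Names are invented. */
#define WORST_CASE_PER_BYTE 5

static int room_for_next(size_t remaining)
{
    return remaining >= WORST_CASE_PER_BYTE;
}
```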
Bug #706668 "regression: new PDF engine can't handle a broken PDF file, falsely reports password protected"
Although the file is damaged (in fact it appears to be 2 PDF files,
concatenated) that's not the problem. The second file is the one which
we process, and that file is encrypted. We simply were not setting the
encryption key length for Revision 3 and we were only reading the
actual key /Length if /V is present and has the value 2 or 3.
This is basically because PDF encryption is a mess, and simply an
oversight. Fix it here.
Using the PDF object number (rather than the internal gs_id for the font) as
an element in the XUID means pdfwrite/ps2write can be more consistent in
identifying repeated uses of the same font embedded in a PDF.
No file or bug report for this, the customer requested the files be
kept private. However any PDF Collection (Portfolio) file will show
the problem.
GhostPDF supports preserving embedded files from the input, but when
we are processing a PDF Collection we don't want to do that, because
in this case we run each of the embedded files individually. If we
copy the EmbeddedFiles as well then we end up duplicating them in the
output.
So, when processing EmbeddedFiles, check the Catalog to see if there is
a /Collection key; if there is, stop processing EmbeddedFiles.
The customer also pointed out there was no way to avoid embedding any
EmbeddedFiles from the input, so additionally add a new switch
-dPreserveEmbeddedFiles to control this. While we're doing that, add
one to control the preservation of 'DOCVIEW' (PageMode, PageLayout,
OpenAction) as well, -dPreserveDocView.
This then leads on to preventing the EmbeddedFiles in a PDF Collection
from writing their DocView information. If we let them do that then
we end up opening the file incorrectly.
To facilitate similar changes in the future I've rejigged the way
.PDFInit works, so that it calls a helper function to read any
interpreter parameters and applies them to the PDF context. I've also
added a new PostScript operator '.PDFSetParams' which takes a PDF
context and a dictionary of key/value pairs which it applies to the
context.
Sadly I can't actually use that for the docview control, because the
PDF initialisation is what processes the document, so changing it
afterwards is no help. So I've altered runpdfbegin to call a new
function runpdfbegin_with_params and pass an empty dictionary. That then
allows me to call runpdfbegin_with_params from the PDF Collection
processing, and turn off PreserveDocView.
So in summary; new controls PreserveDocView and PreserveEmbeddedFiles
and a new function .PDFSetParams to allow us to alter the PDF
interpreter parameters after .PDFInit is executed. PDF Collections no
longer embed duplicate files.
The commit as is works, but it could fall foul of not checking the
state of 'negative' properly. It probably doesn't really matter (we
store the value as 64-bit anyway) but let's do this properly.
Bug #706607 "Old-fashioned password security not working"
The problem turns out to be on Windows, where we clamp integers to the
maximum platform integer value during parsing. Windows builds use a
32-bit maximum integer and we were clamping all integers to the maximum
signed integer value.
Ordinarily this isn't a problem because we rarely encounter values
genuinely that large. However in this file the /P value is stored as a
32-bit unsigned integer, even though the spec clearly states that it is
in 2s-complement form. So the value is stored as 4294963428 when in fact
it *should* be -3868. Acrobat, of course, happily opens it either way.
Alter the integer overflow detection so that we clamp signed integers
at the signed maximum and unsigned integers at the unsigned maximum.
This allows us to read the value from the file and treat it as Acrobat does.
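The reinterpretation can be checked with a one-line sketch (the helper name is invented; the cast relies on the usual two's-complement behaviour):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: reinterpret the raw 32-bit /P value in
 * 2s-complement form, as the file intends. */
static int32_t p_as_signed(uint32_t raw)
{
    return (int32_t)raw;   /* e.g. 4294963428 -> -3868 */
}
```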
No bug report for this, but it's been annoying me for ages and so I'm
adding it as an enhancement and for some practice with pdfmark
generation.
Read the PageLayout, PageMode and OpenAction from the Root (Catalog)
dictionary and send them to pdfwrite.
Infuriatingly, there are at least a couple of fonts that a PDF interpreter just
has to know are symbolic, regardless of the flag in their font descriptor.
Wingdings and ZapfDingbats and their variants.
Worse, contrary to the spec, if a font object for one of those fonts includes an
encoding entry which is a name object, we have to ignore it.
OSS-fuzz #57880
toffs was fuzzed to be very nearly 2^32-1 and when the (valid) tlen was
added to it, the result overflowed a 32-bit value, evading the existing
check to ensure the table was entirely contained in the buffer of data.
Simply promote the 32-bit variables to 64-bit before performing the
arithmetic and the check. fbuflen is already a 64-bit value.
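A sketch of the corrected containment check (variable names follow the commit message, the function name is invented): promoting to 64-bit before the addition prevents the wrap-around that let the fuzzed offset pass.

```c
#include <assert.h>
#include <stdint.h>

/* Promote to 64-bit before the addition so a near-UINT32_MAX offset
 * cannot wrap past the containment test. Illustrative function name. */
static int table_in_buffer(uint32_t toffs, uint32_t tlen, int64_t fbuflen)
{
    return (int64_t)toffs + (int64_t)tlen <= fbuflen;
}
```

A 32-bit addition here would wrap 0xFFFFFFFE + 16 around to 14, which slips past the check; in 64-bit arithmetic it is correctly rejected.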
OSS-fuzz bug #57745
The problem in the report is that the BlackGeneration function is a 1-in
3-out function. It is required to be a 1-in, 1-out function. The result
was that the evaluation was writing 3 floats to a 1 float buffer.
Check the parameters of the function to make sure it is of the correct
size before trying to evaluate it.
I also desk-checked all the other uses of functions; most were already
checking the function parameters but I found two more cases which were
not. Fix the /Separation and DeviceN tint transform so that we check the
number of inputs and outputs to make sure they are correct.
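The check amounts to validating the function's arity before evaluation; a sketch with invented structure and function names:

```c
#include <assert.h>

/* Sketch: a BlackGeneration transfer function must be 1-in, 1-out;
 * reject anything else before evaluating it into a one-float output
 * buffer. Structure and names are illustrative, not the pdfi types. */
typedef struct {
    int num_inputs;
    int num_outputs;
} pdf_function_params_t;

static int check_bg_function(const pdf_function_params_t *fn)
{
    if (fn->num_inputs != 1 || fn->num_outputs != 1)
        return -1;   /* a rangecheck error in the real code */
    return 0;
}
```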
Bug #706533 "Copy/paste ligatures from luaLaTeX with new PDF interpreter produces invalid chars"
The code to retrieve a Unicode code point for a glyph, as a string, was
padding 3-byte values with a trailing 0 when it should be a leading 0.
Similarly, fix one-byte values (if we ever see any).
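The principle, sketched with an invented helper: pad at the most-significant end so the big-endian byte string preserves the value (0x0ABC, not 0xABC0):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: emit a Unicode value as a big-endian 16-bit
 * string, padding at the most-significant (leading) end. */
static void put_code_be16(uint32_t code, unsigned char out[2])
{
    out[0] = (unsigned char)((code >> 8) & 0xff);  /* high byte first */
    out[1] = (unsigned char)(code & 0xff);
}
```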
Bug #706500 "regression: new PDF engine can't handle a PDF file"
In this case the PDF file has been edited, and the original xref table
is invalid:
xref
1 7
0000000000 65535 f
0000000009 00000 n
That should, of course, be "0 7". The fact that it is '1' means we read
the expected offsets off by one each time. So object 0 gets read as
object 1 (which is then defined as a free object), object 1 gets read
as object 2, so object 2 has the offset which points to object 1, etc.
So when we try to get the first page, object 4, we actually read object
3. Nothing is wrong apart from the fact that it's the wrong object but
obviously we check what object we got, and return an error because it
isn't the object we wanted.
This commit simply inserts another attempt to repair the file if
dereferencing an object results in no error but no object was read
(I can't see any way this is possible, but still), or the object we read
was a token rather than a PDF object, or the object number didn't match
the expected number (this case).
Bug #706479 "Text string detected in DOCINFO cannot be represented in XMP for PDF/A1, aborting conversion."
As the comment says, if we get a Fatal error returned by pdfi_doc_info()
don't try and ignore it.
Bug #706441 "AcroForm field Btn with no AP not implemented"
The title is misleading. The actual problem is that when checking to
determine the 'visibility' of an annotation, the NoView bit was being
checked with the wrong bit set in the mask.
This led to the annotation not being visible and not rendering.
From there; the old PDF interpreter used the presence of an OutputFile
on the command line to determine whether or not the output should be
treated as 'printer' or 'viewer'. The display device doesn't take an
OutputFile, so we treat that as a viewer. We weren't taking that action
at all internally.
So pass OutputFile in from the PostScript world if it is present, and
look for it on the command line if we are stand-alone. Start by assuming
we are a viewer. If we find an OutputFile, and have not encountered a
'Printed' switch, then assume we are a printer.
Secondly; deal with the warnings. These are real but are the wrong place
for a warning. The problem is that we have an annotation which has an
/AP dictionary:
<<
/D <</Off 723 0 R/renew 724 0 R>>
/N <</renew 722 0 R>>
>>
We pick up the Normal (/N) key/value and see that the value is a
dictionary. So we consult the annotation for a /AS (appearance state)
which in this case is defined as:
/AS/Off
So we then try to find the /Off state in the sub-dictionary. There isn't
one. The specification has nothing to say about what we should do here.
I've chosen to replace the appearance with a null object and alter the
drawing routine to simply silently ignore this case.
Final note; the code is now behaving as it is expected to, but the file
in bug #706441 will still be missing a number of buttons when rendered,
because these buttons are only drawn when the application is a viewer.
In order to have them render, Ghostscript must be invoked with:
-dPrinted=false
Bug #706476 "<</DecodeParms null>> causes a bogus warning"
A null object for DecodeParms is legal. Pointless, but legal.
Bug 706474 "Segfault in gs for PDF with encrypted streams and unencrypted strings"
This is a simple typo: unencrypted streams are handled by an early exit
from pdfi_filter, so we should never see a CRYPT_IDENTITY case, just
as the comment says. But we were looking at the string encryption,
not the stream encryption. A one-letter typo....
If fnamelen was very long (4091 or more) then later when we add in
the fontdirstr we could end up running off the end of a buffer (fstr)
which is set as being gp_filename_sizeof bytes long.
Change the length check to account for this possibility.
In my previous revision for this, I missed that the values from a CMap
(ToUnicode) are big endian, but the values from the built-in decoding are native
endian.
This should resolve that.
Bug #705865 "PDF Writer is dropping the /Alternate color space for ICC based color profile"
This commit starts off with the bug referenced above. Prior to this
commit the PDF interpreter never set the /Alternate space for an
ICCBased colour space. For rendering there is no need since Ghostscript
will always use the ICC profile, or will set the /Alternate instead
(if there is an error in the ICC profile) or will use the number of
components (/N) to set a Device space if there is a fault in the ICC
profile and no /Alternate.
However, this means pdfwrite never sees an Alternate space to write out
for an ICCBased space. This should not be a problem since the
/Alternate is optional for an ICCBased space and indeed the PDF 2.0
specification states "PDF readers should not use this colour space".
The file for the bug has a /ICCBased space where the /Alternate is
an Lab space. Obviously any PDF consumer should use the ICCBased space
but it seems that Chrome, Firefox and possibly other browsers cannot handle
ICCBased colour and so drop back to the Alternate. Surprisingly they
can handle Lab and get the expected colour. Obviously if we drop the
/Alternate then these consumers cannot use Lab and have a last ditch
fallback to RGB (based on the number of components, and that *is* in the
spec). But RGB != Lab and so the colours are incorrect.
Ideally we would simply use the existing colour space code and attach
the alternate space to the ICCBased space's 'base_space' member. That
would write everything perfectly well. But we can't do that because
when we are called from Ghostscript the ICC profile cache is in GC'ed
memory. So we have to create the ICCBased colour space in GC'ed memory
too. We have special hackery in the PDF interpreter colour code to do
exactly that.
Colour spaces call back to the PDF interpreter when freed (with more
hackery for ICCBased spaces), but if we create colour spaces in non-GC
(PDF interpreter) memory and attach them to the ICCBased space in GC'ed
memory then they can outlive the PDF interpreter, leading to crashes.
I did start down the road of making all colour spaces in GC-ed memory,
but that rapidly spiralled out of control because names needed to be
allocated in GC'ed memory, and functions and well, all kinds of things.
Without that we got crashes, and it quickly became clear the only real
way to make this work would be to allocate everything in GC'ed memory
which we really don't want to do.
So instead I added a new enumerated type member to the colour space, in
that member, if the current colour space is ICCBased, we store the type
of Alternate that was supplied (if any). We only support DeviceGray,
DeviceRGB, DeviceCMYK, CalGray, CalRGB and Lab. I also added the
relevant parameters to the 'params' union of the colour space.
In the PDF interpreter; add code to spot the afore-mentioned Alternate
spaces if present, and if we haven't been forced to fall back to using
the Alternate (or /N) because the ICC profile is broken. When we spot
one of those spaces, set the colour space ICC_Alternate_space member
appropriately and, for the parameterised spaces, gather the parameter
values and store them.
In the pdfwrite device; if we are writing out an ICCBased space, and
its ICC_Alternate_space member is not gs_ICC_Alternate_none, create
a ColorSpace resource and call the relevant space-specific code to
create a colour space array with a name and dictionary containing the
required parameters. Attach the resulting object to the ICCBased
colour space by inserting it into the array with a /Alternate key.
This also meant I needed to alter the parameters passed internally to
pdf_make_iccbased so that we pass in the original colour space instead
of the alternate space (which is always NULL currently).
There are also a couple of fixes; when finalising a colour space check
that the colour space is a DeviceN space before checking if the device_n
structure in the params union has a non-zero devn_process_space. The new
members in the union meant we could get here and think we needed to
free the devn_process_space, causing a crash.
In the PDF interpreter; there's a little clause in the PDF specification
which mentions a CalCMYK space. Apparently this was never properly
specified and so should be treated as DeviceCMYK. The PDF interpreter
now does so.
Finally, another observation: the initial code wrote the /Alternate space
as a named colour space, e.g.:
19 0 obj
<</N 3
/Alternate /R18 /Length 1972>>stream
....
Where R18 is defined in the Page's ColorSpace Resources as a named
resource:
<</R18 18 0 R /R17 17 0 R /R20 20 0 R /R22 22 0 R>>
endobj
But this does not work with Chrome (I didn't test Firefox). For this to
work with Chrome we have to reference the object directly, which should
not be required IMO. I believe this to be (another) bug in Chrome's PDF
handling.
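The new member can be pictured as an enumerated type like the following. Only gs_ICC_Alternate_none is named above, so the other enumerator names are guesses for illustration:

```c
#include <assert.h>

/* Sketch of the new colour-space member: which /Alternate (if any) was
 * supplied with an ICCBased space. Only gs_ICC_Alternate_none appears
 * in the commit message; the remaining names are hypothetical. */
typedef enum {
    gs_ICC_Alternate_none = 0,
    gs_ICC_Alternate_DeviceGray,
    gs_ICC_Alternate_DeviceRGB,
    gs_ICC_Alternate_DeviceCMYK,
    gs_ICC_Alternate_CalGray,
    gs_ICC_Alternate_CalRGB,
    gs_ICC_Alternate_Lab
} gs_ICC_Alternate_t;
```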
At various places in the code, we test for parameters from a
param_list by using:
if (!strncmp(param, "ParamValue", strlen("ParamValue")))
This is bad, because as well as matching "ParamValue" and
"ParamValue=", it will also match "ParamValueSomething".
Also, at various places in the code, we don't call strlen
(understandably, cos that's a runtime function call to retrieve
a constant value), and just wire in the constant value. But
in at least 1 location, we've got the constant value wrong.
Accordingly, move to using an 'argis' macro that tests correctly
and calculates the length at compile time.
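A macro matching that description might look like this (a sketch, not the exact macro from the source): it matches the bare name or "name=...", rejects longer names, and gets the length from sizeof at compile time.

```c
#include <assert.h>
#include <string.h>

/* Sketch of an 'argis' macro: matches "Param" or "Param=..." but not
 * "ParamSomething". The "" lit "" trick forces lit to be a string
 * literal so sizeof(lit) - 1 is its length at compile time. */
#define argis(arg, lit) \
    (strncmp((arg), ("" lit ""), sizeof(lit) - 1) == 0 && \
     ((arg)[sizeof(lit) - 1] == '\0' || (arg)[sizeof(lit) - 1] == '='))
```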
Originally, the code assumed that any extra font files were disk files, and
did not account for the possibility that they might be in the romfs (or
potentially accessed via the PostScript %os% or similar I/O device).
This revises it to cope with such cases.
Stems from Bug 706267.
This exposed a couple of issues:
Firstly, and most importantly, when pdfwrite uses the callback to retrieve the
glyph index for text in a CIDFont, it uses the descendant font, not the Type 0,
as I originally thought. For embedded CIDFonts, that didn't cause a problem, but
for substituted CIDFonts it meant the glyph decoding callback did not have
access to the decoding table.
Secondly, fixing that exposed some byte ordering issues, where Unicode codes
read from the ToUnicode CMap differed in byte order from codes read from the
decoding table.
Originally, pdfi was extremely conservative in its font creation, ensuring
every font/CIDFont instance has a unique XUID; that ensures we don't have
clashes when a font stream in a PDF is used by multiple font objects with
differing encodings, widths arrays etc.
A side effect of that was that pdfwrite couldn't consolidate instances of fonts
loaded from file (which, unlike embedded subsets, it can do safely).
For fonts loaded from disk, we now skip the pseudo-XUID creation, allowing
pdfwrite to make its own decisions about compatible font instances.
The code intends a 64-digit, null-terminated ASCII string representing a real
number, but the buffer was only declared as 64 bytes long; it should be 65
for the intended length plus the null terminator.
This was stopping GPDL using pdfi directly.
Test file supplied in confidence by customer 820
The file has an Outlines entry where the /Dest has a name which is
in UTF16-BE. The code was dealing with strings as C strings, which
fails here because UTF16-BE text can contain NULLs.
In this commit; alter the comparison to use memcmp instead of strncmp
and pass the length of the PDF string as a parameter so that we do not
need to use strlen.
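The difference between the two comparisons can be sketched as follows; the function name is invented, but it shows why strncmp fails on strings with embedded NULLs:

```c
#include <assert.h>
#include <string.h>

/* Compare PDF strings by explicit length: they may contain embedded
 * NULLs (e.g. UTF16-BE), so C-string functions stop too early.
 * Function name is illustrative. */
static int pdf_string_equals(const unsigned char *s, size_t slen,
                             const unsigned char *t, size_t tlen)
{
    return slen == tlen && memcmp(s, t, slen) == 0;
}
```

Note that strncmp stops comparing at the first NUL, so two different UTF16-BE strings can falsely compare equal; memcmp with an explicit length does not have that problem.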
We were not allowing the page of a /Dest to be null, which means
'do not change the current value'.
Add handling to cater for that, and set the destination page to 0,
which pdfwrite interprets as meaning it should emit a 'null'.
At the same time, check that the /Dest array contains a name object in
element 2. We may need to add specific testing of the name in future to
ensure it is valid.
The interaction between the FontDescriptor flags, the font's native encoding,
and the PDF font encoding is not very well defined; this is just a tweak
based on the example file.
This was inspired by OSS-fuzz bug #55443, the PDF file has a CMap which
is corrupted, a hex string defining an entry in the CMap has some
garbage inserted into it.
This causes the CMap to have an extremely large range, which takes a
very long time to work through. Short-circuiting this by detecting the
error in the hex string makes the file complete much more quickly. Since
the CMap is corrupted the result is always going to be incorrect anyway
and at least this way we stop processing it more quickly.
Because CMaps and fonts use the same parser, this then exhibited an
error with a lot of fonts, particularly the PostScript emitted by
ps2write. It turns out that the eexec encrypted portion of the font has
rubbish following the 'currentfile closefile' and in some cases this
included an opening hex string marker '<' followed by non-hex data.
Obviously we shouldn't be processing that; I think we're lucky to have
got away with it to date. To fix the problem, create an operator for
fonts to deal with the 'closefile', and have that consume all the
remaining data in the buffer.
When using anything other than the default MediaBox, we'd leak an array and
its contents.
Bug #706399 "Optional content not drawn when OCMD has empty OCGs"
The file uses the BDC operator and the named optional content references
an Optional Content Management Dictionary rather than an Optional
Content Group.
The OCMD has an OCGs member whose value is an empty array. Because it
also has no 'P' entry we were defaulting to the 'AnyOn' value. Then we
checked each of the (non-existent) OCGs to see if any are 'on'. Since
none are, we determined the visibility to be 'off'.
However, according to the PDF 1.7 specification (p366, table 4.49, the
OCGs entry), if the OCGs are empty (or not present) then the OCMD should
'have no effect'. It's not obvious what that means, exactly, so I've
chosen to interpret it as 'should use the OCProperties->D->BaseState as
the visibility state'.
The BaseState is optional and so (as in this case) may not be present
in which case we use the default value of 'on'.
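The chosen interpretation can be sketched as a small decision function (invented names; ints as booleans; the non-empty case is out of scope here):

```c
#include <assert.h>

/* Sketch of the visibility decision described above; names and the
 * int-as-boolean style are illustrative, not the actual pdfi code. */
static int ocmd_visible(int num_ocgs, int has_base_state, int base_state_on)
{
    if (num_ocgs == 0)                      /* empty or absent OCGs array */
        return has_base_state ? base_state_on : 1;  /* default is ON */
    /* otherwise evaluate the /P rule (AnyOn etc.) over the OCGs; out of
     * scope for this sketch, assume visible */
    return 1;
}
```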
Bug #706400 "Type 3 CharProcs with wy == 0 argument to d0 not drawn"
The width array argument was being initialised incorrectly due to
copy/paste errors. We were copying the second argument to both array
elements. If the second argument was 0 (which the PDF Reference states
it must be) then we would end up drawing a glyph with 0 width and height
and so would not draw it at all.
Fix that, and get the elements in the correct order.
This shows some tiny (1 pixel) alignment shifts in type 3 characters in
a couple of files.
We have this in the device setup structure, and have for ages, even
though we don't currently use it. Initialise it properly though in case
we ever need it.
Noticed while looking at a bug with Robin.
Not sure why these cropped up as new, but fix anyway.
OSS-fuzz #55443 with eps2write
The font has been corrupted and throws an error, which frees the buffer
we created and passed in. But we then try to free the buffer again,
leading to a crash (on Windows at least).
Update the comment to note that ownership is transferred regardless of
success, and remove the code freeing the buffer.
We're using ishex in pdf_int.c and pdf_fontps.c, best if we only have
one definition. Make them inline for performance (as was done in
pdf_fontps.c and should have been in pdf_int.c)
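For reference, a single shared inline definition consistent with the description (a sketch, not copied from the source):

```c
#include <assert.h>

/* One shared inline definition of the hex-digit test. */
static inline int ishex(int c)
{
    return (c >= '0' && c <= '9') ||
           (c >= 'a' && c <= 'f') ||
           (c >= 'A' && c <= 'F');
}
```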
We should only explicitly free the file enumerator if the enumeration loop
exits early.
Bug #705872 see comment #4
Although Firefox is wrong to state that an ExpansionFactor of 0 is 'bad',
the value still isn't what it should be. If the font doesn't have an
ExpansionFactor then we should write the default.
The same applies to BlueShift and BlueFuzz. Ideally we should also write
default values for UnderlinePosition and UnderlineThickness, but I can't
see an obvious way to do that, they are not in the same structure.
Bug #706315 "`ps2pdf -sFONTPATH=. ...` crashes with SIGSEGV"
When enumerating files in a directory in order to build a font map, we
were not checking the result of sfopen but assuming it would succeed and
then trying to read from it.
On Windows at least, the file may already be open exclusively which
results in sfopen returning a NULL, which causes a seg fault when we
try to use it in sgets().
I'm unable to reproduce this problem on Linux so I can't be certain it
is the same problem as in the report, but it seems likely.
Fixed by simply checking the return value for NULL and ignoring the
file if it is. We obviously can't use it as a font anyway so there's
no harm in ignoring it.
For some reason when PDFDEBUG was true I'd chosen to emit the characters
following a comment with spaces between them, which made it hard to read.
Removed the spaces.
We weren't doing the same clean up after rendering annotations and acroforms that
we do after the main page contents, meaning an error could result in extra
gsave levels persisting after the pdfi interpreter exits, leading to crashes
or other problems.
OSS-fuzz #54436
The PDF file had been fuzzed so that one of the W entries was negative,
which is not valid. This later caused problems when we tried to read
that number of bytes (cast to unsigned) from a file into a buffer which
was sized based on the signed value. That caused a buffer overrun and
subsequent crash.
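A sketch of the hazard and the validation (invented names): a negative signed width entry, cast to unsigned, becomes a huge read length, so the sign must be checked before the value is used as a size.

```c
#include <assert.h>
#include <stdint.h>

/* Reject negative W entries before using them as a read length;
 * illustrative code, not the actual pdfi xref/width handling. */
static int valid_w_entry(int64_t w)
{
    return w >= 0;
}
```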