If the ICC target profile for the PNG device is the
Artifex sRGB ICC profile, just set the appropriate
tag information in the PNG header.
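In libpng terms this presumably amounts to writing an sRGB chunk rather
than embedding the full profile; a minimal sketch (the actual device
code may differ):

    /* Tag the output as sRGB instead of embedding the ICC profile.
     * Also emits matching gAMA and cHRM chunks for older viewers. */
    png_set_sRGB_gAMA_and_cHRM(png_ptr, info_ptr,
                               PNG_sRGB_INTENT_PERCEPTUAL);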
Add a new device 'pclm8' which outputs in DeviceGray, for use with
'WiFi Direct Print' enabled monochrome printers.
While the printer is supposed to be able to accept RGB and print to
gray, it's quicker to send gray directly when we know that is all that
is required.
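An illustrative invocation (file names are hypothetical):
gs -sDEVICE=pclm8 -r600 -o out.pdf input.pdf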
This wasn't working, possibly due to device changes, more likely
due to insufficient testing when it was first written.
We fix it here, and change the way we handle post render color
conversion.
Previously we used an ICC conversion when a PostRenderProfile was set,
but not otherwise. If no PostRenderProfile is specifically set, we
now stash the device profile before we change our colormodel, and
use that as the target for our post render conversion.
This has required a new downscaler helper function to make an
ICC link from a given profile.
This means that we now only really support contone operation (ICC
conversions from pre-dithered results will not look good). If we want
to support dithered output, we'll need to do the dithering as part of
the conversion from the contone rendered RGB, as otherwise it will
look very bad.
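A minimal sketch of the profile selection described above; the variable
and field names are assumptions, not the actual Ghostscript structures:

    /* Choose the target for the post render conversion: an explicit
     * PostRenderProfile if one was set, otherwise the device profile
     * stashed before the colormodel was changed. */
    cmm_profile_t *target = (postren_profile != NULL)
                                ? postren_profile
                                : stashed_device_profile;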
Specifically this is intended to allow use of -sPostRenderProfile
so we can (for example) use the plan device to render in RGB, then
convert to cmyk as a post-process step.
In preparing this commit, various problems were found in the planar
downscaler, and these are fixed here too.
In addition, to avoid every device that wants to support
PostRenderProfile needing an identical batch of code to create the
icclink, we extract that into a gx_downscaler_create_post_render_link
helper function, and use that in the appropriate devices.
As a knock-on from that, we tweak gsicc_free_link. Firstly, it no
longer takes a memory pointer, but instead uses the one stored in the
link itself. This is a step up from the existing code, which appears
to allocate with 'stable_memory' and then free with 'non_gc_memory'.
I suspect we've been getting away with this by chance because the
two have always happened to be the same.
Secondly, it performs the 'free_link' operation as part of the call.
This enables all the call sites to be simplified.
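The call-site simplification reads roughly like this (signatures are
inferred from the description above, not copied from the source):

    /* Old pattern: free with an explicitly supplied allocator that
     * could differ from the one used for allocation. */
    gsicc_free_link(dev->memory->non_gc_memory, link);

    /* New pattern: the link frees itself with the allocator stored
     * inside it, and free_link happens as part of the call. */
    gsicc_free_link(link);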
Also, in the cups device, ignore deprecated-function warnings using
gcc pragmas.
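Presumably along these lines (standard gcc diagnostic pragmas):

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
    /* ... calls into deprecated CUPS functions ... */
    #pragma GCC diagnostic pop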
Apple Raster (aka URF) output: With -sDEVICE=appleraster or
-sDEVICE=urf (instead of -sDEVICE=cups) the output is in the Apple
Raster format and not in CUPS Raster format.
The format is used by Apple's AirPrint, which allows printing from iOS
devices. Because of this, practically all modern printers understand the
format, so providing it allows easy, driverless printing.
Before, in order to output Apple Raster, the gstoraster filter of
cups-filters used the "cups" output device of Ghostscript to
produce CUPS Raster, and after that the rastertopwg filter of CUPS had
to turn this into Apple Raster. With this commit a future version of
cups-filters can let gstoraster (the ghostscript() filter function)
output Apple Raster directly, saving one filter step.
Outputting Apple Raster instead of PWG Raster (which we could already
do via -sDEVICE=pwgraster) is trivial. One has only to tell the
cupsRasterOpen() function of libcups, via a simple flag, to write Apple
Raster headers instead of PWG Raster headers, so this commit consists
mainly of defining the output device names "appleraster" and "urf" and
making the output device set said flag when one of those names is used.
I have defined two device names for exactly the same thing, as I am
not really sure which is the better choice or whether one could cause
problems. We could remove one later if needed. The output is
absolutely identical for both.
Apple Raster support was introduced to libcups in CUPS 2.2.2, so the
CUPS 2.2 API does not necessarily contain it; therefore the ./configure
script requires the CUPS 2.3 API as a minimum to build Ghostscript with
Apple Raster/URF output support.
The libcups included with the Ghostscript source is too old, and when
you build with it (--with-local-cups) you will not get Apple Raster
support. But remember that this build option is only for development
and debugging, not for production.
The commit also contains a bug fix: devs.mak hard-coded -I./cups/libs/
in the compilation command line of gdevcups.c, so the included
cups/raster.h was always used instead of the system's one, and that
too-old cups/raster.h suggested that Apple Raster is not supported.
I have fixed this by removing the -I./cups/libs/; --with-local-cups
also works without it.
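The flag selection amounts to something like the following sketch
(variable names are illustrative; the mode constants are the ones
cups/raster.h provides):

    /* Write Apple Raster or PWG Raster headers depending on which
     * device name was selected. */
    ras = cupsRasterOpen(fd, apple_raster ? CUPS_RASTER_WRITE_APPLE
                                          : CUPS_RASTER_WRITE_PWG);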
This is a clone of the pngalpha device, with antialiasing disabled
by default. People should use this, together with the DownScaleFactor
parameter, to achieve antialiasing without the nasty effects that
GraphicsAlphaBits gives in some cases.
Add example tags devn devices, both 8 and 16 bit. The tags
plane is always placed as the last plane in the data (i.e.
it follows any spots that get added). Changes were made to
remove the tags conditionals in the planar memory device code,
since it really should not care what the extra planes are
used for, and it should not add components to the device based
upon tags support; the target device should handle any of this
sort of setup. There were also some changes needed in the pdf14
code, as the tags information was not getting properly
communicated when we had knockout objects and devn colors.
Also, fix various whitespace issues.
Previously, the only C code that required access to the Adobe Glyph List was
in the vector devices, so we kept the AGL definition with those. Otherwise
it was always in Postscript.
The PDF interpreter in C also requires the AGL, so move the AGL into base, and
give it its own ".dev".
Still to do: Remove the Postscript definition, and create the Postscript
definition(s) in PS VM from the C definition.
Summary:
This adds a new docxwrite device which generates docx files.
Unlike the txtwrite device, we don't output xml for later use by
extract-exe or store information about spans in arrays. Instead we call
extract functions such as extract_add_char() directly.
Code changes:
We have moved the txtwrite and docxwrite common code into the new
devices/vector/doc_common.{c,h}.
Shared types and functions are currently:
txt_glyph_width_t
txt_glyph_widths_t
txt_glyph_widths()
txt_get_unicode()
txt_char_widths_to_uts()
txt_calculate_text_size()
Building:
By default we do not build with Extract and there will be no docxwrite
device in the final executables.
To build with Extract, specify the location of the extract checkout to
build and link with.
Unix:
./autogen.sh --with-extract-dir=<extract-dir>
Windows:
Set the environment variable EXTRACT_DIR=<extract-dir> when building,
e.g.:
EXTRACT_DIR=<extract-dir> devenv.com windows/GhostPDL.sln /Build Debug /Project ghostscript
On both Unix and Windows we exit with an error message if the specified
location does not exist.
This introduces OCR operation to the pdfwrite device.
The full development history can be seen on the pdfwrite_ocr branch.
The list of individual commits is as follows:
--------------------------------------------
Interim commit for pdfwrite+OCR
This is the initial framework for pdfwrite to send a bitmap of a glyph
to an OCR engine in order to generate a Unicode code point for it.
This code must not be used as-is, in particular it prevents the function
gs_font_map_glyph_to_unicode from functioning properly in the absence
of OCR software, and the connection between pdfwrite and the OCR engine
is not present.
We need to add either compile-time or run-time detection of an OCR
engine and only use one if present, as well as some control to decide
when to use OCR. We might always use OCR, or only when there is no
Glyph2Unicode dictionary available, or simply when all other fallbacks
fail.
--------------------------------------------
Hook Tesseract up to pdfwrite.
--------------------------------------------
More work on pdfwrite + OCR
Reset the stage of the state machine after processing a returned value.
Set the unicode value used by the ToUnicode processing from the value
returned by OCR.
Much more complex than previously thought; process_text_return_width()
processes all the contents of the text in the enumerator on the first
pass, because its trying to decide if we can use a fast case (all
widths are default) or not.
This means that if we want to jump out and OCR a glyph, we need to track
which index in the string process_text_return_width was dealing with,
rather than the text enumerator index. Fortunately we are already
using a copy of the enumerator to run the glyph, so we simply need
to capture the index and set the copied enumerator index from it.
--------------------------------------------
Tweak Tesseract build to include legacy engine.
Actually making use of the legacy engine requires a different set
of eng.traineddata be used, and for the engine to be selected away
from LSTM.
Suitable traineddata can be found here, for instance (open the link,
and click the download button):
https://github.com/tesseract-ocr/tessdata/blob/master/eng.traineddata
--------------------------------------------
Add OCRLanguage and OCREngine parameters to pdfwrite.
--------------------------------------------
Add gs_param_list_dump() debug function.
--------------------------------------------
Improve use of pdfwrite with OCR
Rework the pdfwrite OCR code extensively in order to create a large
'strip' bitmap from a number of glyphs in a single call to the text
routine. The hope is that this will provide better context for
Tesseract and improved recognition.
Due to the way that text enumerators work, and the requirement to exit
to the PostScript interpreter in order to render glyph bitmaps, I've had
to abandon efforts to run as a fully 'on demand' system. We can't wait
until we find a glyph with no Unicode value and then try to render all
the glyphs up to that point (and all the following ones as well). It is
probably possible to do this but it would mean rewriting the text
processing code which is quite hideous enough as it is.
So now we render each glyph in the text string, and store them in a
linked list. When we're done with the text we free the memory. If we
find a glyph with no Unicode value then on the first pass we take the
list of glyphs, create a big bitmap from them and send it to Tesseract.
That should then return all the character codes, which we keep. On
subsequent missing Unicode values we consult the stored list.
We need to deal specially with space glyphs (which make no marks) as
Tesseract (clearly!) can't spot those.
Modify makefile (devs.mak) so that we have a preprocessor flag we can
use for conditional compilation. Currently OCR_VERSION is 0 for no OCR,
1 for Tesseract; there may be higher numbers in future.
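Code guarded on that flag then looks like this sketch:

    #if OCR_VERSION > 0
        /* OCR-specific code: compiled only when an OCR engine
         * (currently Tesseract, OCR_VERSION == 1) is configured. */
    #endif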
Add a new function to the OCR interface to process and return multiple
glyphs at once from a bitmap. Don't delete the code for a single bitmap
because we may want to use that in future enhancements.
If we don't get the expected number of characters back from the OCR
engine then we currently abort the processing. Future enhancements:
fall back to using a single bitmap instead of a strip of text; if we get
*more* characters than expected, check for ligatures (fi, ffi etc.).
Even if we've already seen a glyph, if we have not yet assigned it a
Unicode value then attempt to OCR it. So if we fail a character in one
place we may be able to recognise it in another. This requires new code
in gsfcmap.c to determine if we have a Unicode code point assigned.
Make all the relevant code, especially the params code, only compile
if OCR is enabled (Tesseract and Leptonica present and built).
Remove some debugging print code.
Add documentation.
--------------------------------------------
Remove vestiges of earlier OCR attempt
Trying to identify each glyph bitmap individually didn't work as well
and is replaced by the new 'all the characters in the text operation'
approach. There were a few vestiges of the old approach lying around
and one of them was causing problems when OCR was not enabled. Remove
all of that cruft here.
The device icc_struct needs to be initialised before we try to
use it.
Add pdfocr8/24/32, ocr and hocr devices.
Use OCRLanguage to set the languages to use ("eng" by default).
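An illustrative invocation (file names are hypothetical):
gs -sDEVICE=ocr -sOCRLanguage=eng -o out.txt input.pdf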
The issue this branch is trying to solve is to ensure
that the alpha blending occurs in the proper page group
color space. If the page group is CMYK and the device is
RGB then the final alpha blend must occur prior to the
color conversion. Currently with the head code this is
not the case. This work required a significant rework
of how the transparency group pop occurred, since if it
is the final group, the blend will not occur until the
put_image operation. The group color handling was
completely reworked and simplified. The reworked code
now maintains a group_color object that is related to
its own color rather than to the parent's as before.
In addition, during the push_device operation, a buffer
is not created. Previously an entire page buffer was
created. If we have a page group that is smaller than
the whole page, this saves space. The downside
is that we need to ensure we have a buffer in place
when the first drawing operation occurs.
There were several issues with the bitrgbtags device as
well as the pngalpha and psdcmyk16 devices that had to
be considered during the put_image operation.
Any "printer" device depends on the low level 'page' device (page.dev),
unaccountably, the cups devices (cups and pwgraster) did not have that
dependency in the makefiles.
Also, the PDF transparency compositor now (and for some time) has also depended
upon page.dev, so update the makefiles for that, too.
A quick back-to-back test with/without cal using:
bin/gswin32c.exe -sDEVICE=tiffsep1 -o out.tif -r600 -dMaxBitmap=80000000
examples/tiger.eps
shows timings of 1.142s vs 1.297s on my machine.
For patterns with > 256 dots, threshold_from_order would put in 0-value
cells which would then always be imaged. Change this device to (finally)
use the gx_ht_construct_threshold used by the fast_ht thresholding code
so that it should match the other devices, such as pbmraw.
Also vertically invert the use of the threshold array to match the dots
of the other devices.
Add missing dependencies for gdevtsep.c in devs.mak.
This includes build rubric for devices to generate URF files
(urfgray, urfrgb, urfcmyk), a decompression filter for the
RLE variant used in URF files, and a urf "language" interpreter
implementation for gpdl.
Note, this is only the build framework for these things. The
actual implementation code lives in the private 'urf'
git module, and will be activated automatically as part of
the build if it is in position at configure time.
Move the definition of x_pixel within the headers to ensure
gdevcmp.h stands alone.
Include a ufst header to ensure that gxfapiu.h stands alone.
(squash of commits from filesec branch)
Most of this commit is donkeywork conversions of calls from
FILE * -> gp_file *, fwrite -> gp_fwrite etc. Pretty much every
device is touched, along with the clist and parsing code.
The more interesting changes are within gp.h (where the actual
new API is defined), gpmisc.c (where the basic implementations
live), and the platform specific levels (gp_mswin.c, gp_unifs.c
etc where the platform specific implementations have been
tweaked/renamed).
File opening path validation
All file opening routines now call a central routine for
path validation.
This then consults new entries in gs_lib_ctx to see if validation
is enabled or not. If so, it validates a requested path by checking
whether it matches one of the permitted paths.
Simple C level functions for adding/removing/clearing paths, exposed
through the gsapi level.
Add 2 postscript operators for path control.
<name> <string> .addcontrolpath -
Add the given <string> (path) to the list of paths for
controlset <name>, where <name> can be:
/PermitFileReading
/PermitFileWriting
/PermitFileControl
(Anything else -> rangecheck)
- .activatepathcontrol -
Enable path control. At this point PS cannot make any
more changes, and all file access is checked.
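At the gsapi level, usage is along these lines (a sketch; see iapi.h
for the exact declarations):

    /* Permit reading under a directory, then lock the lists down. */
    gsapi_add_control_path(instance, GS_PERMIT_FILE_READING,
                           "/usr/share/fonts/");
    gsapi_activate_path_control(instance, 1 /* enable */);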
For creating opdfread.h we normally strip comments and whitespace from the
Postscript, but that makes reading and debugging the opdfread Postscript
even more unpleasant than it has to be.
This adds a "-d" option ("don't pack") to pack_ps.c.
Also adds a subtarget for opdfread.h, meaning that invoking make thus:
make DEBUG_OPDFREAD=1
will create opdfread.h without stripping whitespace and comments, so the
output from ps2write is merely baroque rather than wilfully horrific.
Now that we properly "include what we use", let's sanitise the horrid
blah_DEFINED ifdeffery (i.e. kill it where possible).
Also, we update the .c dependencies in the base/psi makefiles to
be correct.
Unfortunately, this new correct set of dependencies causes nmake
to soil itself and die with an out-of-memory error. After much
experimentation, I've come to the conclusion that this is because
it copes poorly with being given the same file as a dependency multiple
times.
Sadly, declaring dependencies in the following style:
foo_h=$(BLAH)/foo.h $(std_h)
bar_h=$(BLAH)/bar.h $(foo_h) $(std_h)
baz_h=$(BLAH)/baz.h $(foo_h) $(std_h)
means that a .obj file that depends on $(foo_h) $(bar_h) and $(baz_h)
ends up depending on foo.h twice, and std.h three times.
I have therefore changed the style of dependencies used to be more
standard.
We still define:
foo_h=$(BLAH)/foo.h
so each .obj file rule can depend on $(foo_h) etc as required, but the
dependencies between each .h file are expressed in normal rules at the
end of the file in a dedicated "# Dependencies" section that we can now
autogenerate.
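Under the new style, the example above becomes (an illustrative
reconstruction, not the actual makefile contents):

    foo_h=$(BLAH)/foo.h
    bar_h=$(BLAH)/bar.h
    baz_h=$(BLAH)/baz.h

with the inter-header dependencies stated once as ordinary rules in
the "# Dependencies" section:

    $(BLAH)/foo.h: $(BLAH)/std.h
    $(BLAH)/bar.h: $(BLAH)/foo.h
    $(BLAH)/baz.h: $(BLAH)/foo.h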
So the patch that was provided was far from sufficient in making this
work. There was significant work that needed to be done in the
pdf14 device to ensure that the transparency put_image operation
worked correctly. Not sure if the author was paid a bounty but I
question if it was properly tested.
I tested this with some extreme Altona files and others, and
everything seems to be working well now. Having the psdcmyk16 device
is nice, as it is our only device to have full spot color support and
16-bit output precision.
Run a release build on a Linux machine to make arch.h etc. Then
run toolbin/headercompile.pl to test compiling each header file
by itself.
Resolve all the missing #includes, add missing repeated include
guards and copyright statements.
Also, update all the header dependencies in the makefiles.
It is possible that the object dependencies in the makefiles can be
simplified now, but that's a task for another day.
This commit is a squashed version of the gpdl-shared-device
branch. Essentially this is a first version of the new
language switching mechanism.
This does not build as part of "all", but rather as "experimental"
or "gpdl".
Given the range of color spaces and models that cups supports, we can't
reasonably provide (or expect others to provide) output ICC profiles for all
cases.
For the purpose of profile validation, have it claim to be DeviceN and benefit
from the extra tolerance in profiles allowed for that class of device.
The tiffscaled contone devices have to be able to change their color model
to allow a more flexible use of the post render ICC profile with the output
intent. Prior to this commit, certain profile combinations would result in
mis-rendered results.
With this fix, if we want to render to a CMYK intermediate
output intent but we want the output to be in sRGB, then we need to use
-sDEVICE=tiffscaled24 -dUsePDFX3Profile -sOutputICCProfile=default_cmyk.icc
-sPostRenderProfile=srgb.icc . This should render to a temporary
buffer that is in the OutputIntent color space and then be converted to
sRGB. This should look like the result we get when we go out to the
tiffscaled32 device. This is in contrast to the command line
-sDEVICE=tiffscaled24 -dUsePDFX3Profile -sPostRenderProfile=srgb.icc, which
would end up using the output intent as a proofing profile. The results may
be similar but not exact, as overprint and spot colors would not appear
correctly due to the additive color model during rendering.
The commit f2cf68297e3d63cb927db3c98d317f7ee68e7898
resulted in errors with the separation type devices.
With these devices, we can't simply check whether the color
model matches the ICC profile, since these devices
change their number of components. We will likely need
to do some testing with these devices and different
profiles to see what breaks when, and make sure we
exit gracefully.
Make sure that the various types of profiles that can be set work with
the device color model and with each other. Only allow the use of
the post render ICC profile when the device supports it.
Also update copyright dates.
Remove gs_cmdl.ps as we no longer use it, and remove its entry from
psfiles.htm.
Remove xfonts.htm as this feature (xfont support) is long, long gone.
Noticed during other work, but not committed as part of those fixes
since this is unrelated.
These devices render input to a bitmap (or, in the case of PCLm,
multiple bitmaps), then wrap the bitmap(s) up as the content of
a PDF file. For PCLm there are some additional rules regarding
headers, extra content and the order in which the content is
written in the PDF file.
The aim is to support the PCLm mobile printing standard, and
to permit production of PDF files from input where the graphics
model differs significantly from PDF (e.g. PCL and RasterOPs).
The devices are named pdfimage8, pdfimage24, pdfimage32 and PCLm.
They currently produce valid PDF files with a colour depth of 8 (Gray),
24 (RGB) or 32 (CMYK); the PCLm device only supports 24-bit RGB.
The devices support the DownScaleFactor switch to implement
page-level anti-aliasing.
-sCompression can be set to None, LZW, Flate, JPEG or RLE (LZW
is not supported on PCLm; None is only available on PCLm for
debugging purposes).
The PCLm device supports -dStripHeight to set the vertical height
of the strips of image content, as required by the specification.
For JPEG compression the devices support both the JPEGQ and
QFactor controls, exactly as per the jpeg and jpeggray devices.
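An illustrative invocation (file names are hypothetical):
gs -sDEVICE=pdfimage24 -sCompression=JPEG -o out.pdf input.ps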
Previously, we always interpolated to the full device resolution. This
parameter allows control of the resolution of the interpolated image,
which makes sense for devices that cannot render continuous tone at
the device resolution due to halftone cell size. This avoids the
overhead of interpolating beyond what the device can reproduce and
allows the user a quality/performance tradeoff.
-dDOINTERPOLATE is equivalent to -dInterpolateControl=-1, and
-dNOINTERPOLATE is equivalent to -dInterpolateControl=0. These options
still work for PS/PDF files, but are deprecated and may be removed in
the future.
Performance results vary, but using -dInterpolateControl=4 is
4.5 times faster with files that have images that are scaled up and cover
a large portion of the page, such as comparefiles/Bug695221.ps at 600dpi.
This commit implements a new packps tool which converts PostScript files
to packed arrays of C strings, providing a method to compile PostScript
code directly into the executable. Using this tool, the opdfread.h
file is now derived from devices/vector/opdfread.ps.
- The packps utility is built from the new file base/pack_ps.c.
- opdfread.ps has been moved from lib to /devices/vector. This is
now the reference file for the opdfread header procset code.
- opdfread.h has been removed from /devices/vector and is now built
within the DEVGEN directory as part of the make process.
- For Windows builds, a stale file reference to Resource\Init\opdfread.ps
has been removed from the project file. Additionally, a file reference
to opdfread.ps has been added to the /devices/vector source list, and
the reference to opdfread.h has been removed from the header file list.
Customer would like to have this capability.
As well as the (long non-functional/non-useful) Windows "xfont" code.