path: root/devices/devs.mak
* Update postal address in file headers (Chris Liddell, 2023-04-04; 1 file, -2/+2)
* Update gdevdocx.c for new extract include directory structure. (Robin Watts, 2023-02-28; 1 file, -2/+2)
* For PNG output, don't embed sRGB ICC profile (Michael Vrhel, 2022-09-14; 1 file, -1/+1)
  If the ICC target profile for the PNG device is the Artifex sRGB ICC profile, just set the
  appropriate tag information in the PNG header.
* Bug #705035 "PCLm mode needs 8-bit Grayscale mode" (Ken Sharp, 2022-03-14; 1 file, -0/+4)
  Add a new device 'pclm8' which outputs in DeviceGray, for use with 'WiFi Direct Print'
  enabled monochrome printers. While the printer is supposed to be able to accept RGB and print
  to gray, it's quicker to send Gray instead of RGB when we know it is required.
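  A hypothetical invocation of the new device (the output name and resolution are placeholders,
  not taken from the commit):

      gs -sDEVICE=pclm8 -r600 -o out.pdf input.ps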
* Fix and extend chameleon device. (Robin Watts, 2022-03-10; 1 file, -1/+1)
  This wasn't working, possibly due to device changes, more likely due to insufficient testing
  when it was first written. We fix it here, and change the way we handle post render color
  conversion.
  We were using ICC before in the case when we had a PostRenderProfile, but not otherwise. If no
  PostRenderProfile is specifically set, we now stash the device profile before we change our
  colormodel, and use that as the target for our post render conversion. This has required a new
  downscaler helper function to make an icc link from a given profile.
  This means that we now only really support contone operation (as doing ICC conversions from
  pre-dithered results will not look good). If we want to support dithered output, then we'll
  really need to do the dithering as part of the conversion from contone rendered RGB, as
  otherwise it'll look really bad.
* Update plan devices to use the downscaler, plus various fixes. (Robin Watts, 2022-03-07; 1 file, -2/+2)
  Specifically this is intended to allow use of -sPostRenderProfile so we can (for example) use
  the plan device to render in RGB, then convert to cmyk as a post-process step.
  In preparing this commit, various problems were found in the planar downscaler, and these are
  fixed here too.
  In addition, to avoid every device that wants to support PostRenderProfile needing an
  identical batch of code to create the icclink, we extract that into a
  gx_downscaler_create_post_render_link helper function, and use that in the appropriate
  devices.
  As a knock-on from that, we tweak gsicc_free_link. Firstly, it no longer takes a memory
  pointer, but rather uses the one in the link itself. This is a step up from the existing code
  that appears to allocate with 'stable_memory' and then free with 'non_gc_memory'. I suspect
  we've been getting away with this by chance because the two have always happened to be the
  same. Secondly, it performs the 'free_link' operation as part of the call. This enables all
  the call sites to be simplified.
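  A sketch of the workflow described above (profile and file names are illustrative
  assumptions): render in RGB with the plan device and convert to CMYK as a post-render step:

      gs -sDEVICE=plan -sPostRenderProfile=default_cmyk.icc -o out.plan input.pdf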
* Fix building with "local" cups sources (Chris Liddell, 2022-02-24; 1 file, -4/+4)
  Also, in the cups device, ignore deprecated function warnings using gcc pragmas.
* "cups" output device: Support for Apple Raster (URF) outputTill Kamppeter2022-02-201-1/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Apple Raster (aka URF) output: With -sDEVICE=appleraster or -sDEVICE=urf (instead of -sDEVICE=cups) the output is in the Apple Raster format and not in CUPS Raster format. The format is used by Apple's AirPrint which allows printing from iOS devices. Due to this practically all modern printers understand this format and so providing it allows easy, driverless printing. Before, in order to output Apple Raster, the gstoraster filter of cups-filters had used the "cups" output device of Ghostscript to produce CUPS Raster and after that the rastertopwg filter of CUPS had to turn this into Apple Raster. With this commit a future version of cups-filters can let gstoraster (the ghostscript() filter function) directly output Apple Raster and this way save one filter step. Outputting Apple Raster instead of PWG Raster (which we could already do via -sDEVICE=pwgraster) is trivial. One has only to tell the cupsRasterOpen() function of libcups to write Apple Raster headers instead of PWG Raster headers by a simple flag, so this commit consists mainly of the definition of the output device names "appleraster" and "urf" and make the output device set said flag when one of these device names is used. I have defined two device names for exactly the same thing, as I am not really sure which one is the better or if one could cause any problems. We could remove one later if needed. The output is absolutely identical for both. Apple Raster support was introduced to libcups in CUPS 2.2.2, so the CUPS 2.2 API does not contain it necessarily, therefore the 2.3 CUPS API is the minimum requirement by the ./configure script to build Ghostscript with Apple Raster/URF output support. The libcups which is included with the Ghostscript source is too old, and when you build with it (--with-local-cups) you will not get Apple Raster support. But remember that this build option is only for development and debugging and not for production. The commit also contains a bug fix: devs.mak hard-coded -I./cups/libs/ in the compilation command line of gdevcups.c, making always the included cups/raster.h being used instead of the system's one and so always having a too old cups/raster.h which suggests that Apple Raster is not supported. This I have fixed by removing the -I./cups/libs/, --with-local-cups works also without.
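  An illustrative command line producing Apple Raster output with the new device name (file
  names are assumptions; assumes Ghostscript was configured against a system libcups >= 2.3):

      gs -sDEVICE=appleraster -r300 -o out.urf input.pdf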
* Add png16malpha device. (Robin Watts, 2022-01-20; 1 file, -2/+8)
  This is a clone of the pngalpha device, with antialiasing disabled by default. People should
  use this, together with the DownScaleFactor parameter, to achieve antialiasing without the
  nasty effects given in some cases by GraphicsAlphaBits.
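  A sketch of that suggested usage (the numbers and file names are arbitrary examples): render
  at a higher internal resolution and let the downscaler supply the antialiasing:

      gs -sDEVICE=png16malpha -r288 -dDownScaleFactor=4 -o out.png input.pdf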
* Add psdcmyktags and psdcmyktags16 devices (Michael Vrhel, 2022-01-11; 1 file, -0/+6)
  Add example tags devn devices, both 8 and 16 bit. The tags plane is always placed as the last
  plane in the data (i.e. it follows any spots that get added).
  Changes were made to remove tags conditionals in the planar memory device code, since it
  really should not care what the extra planes are used for and it should not add components to
  the device based upon the tags support. The target device should handle any of this sort of
  setup.
  There were also some changes needed in the pdf14 code, as the tags information was not getting
  properly communicated when we had knockout objects and devn colors.
  Also, fix various whitespace issues.
* Ensure the MAKEDIRS dependency is always last (Chris Liddell, 2021-09-23; 1 file, -5/+5)
* Consolidate AGL definitions in C (Chris Liddell, 2021-08-23; 1 file, -21/+13)
  Previously, the only C code that required access to the Adobe Glyph List was in the vector
  devices, so we kept the AGL definition with those. Otherwise it was always in Postscript.
  The PDF interpreter in C also requires the AGL, so move the AGL into base, and give it its own
  ".dev".
  Still to do: remove the Postscript definition, and create the Postscript definition(s) in PS
  VM from the C definition.
* Update copyright to 2021 (Chris Liddell, 2021-03-15; 1 file, -1/+1)
* Remove Luratech integration code/makefiles (Chris Liddell, 2021-02-22; 1 file, -35/+3)
* Added docxwrite device; uses extract library to write docx output. (Julian Smith, 2021-02-15; 1 file, -2/+29)
  Summary:
  This adds a new docxwrite device which generates docx files.
  Unlike the txtwrite device, we don't output xml for later use by extract-exe or store
  information about spans in arrays. Instead we call extract functions such as
  extract_add_char() directly.
  Code changes:
  Have moved txtwrite and docxwrite common code into new devices/vector/doc_common.{c,h}.
  Shared types and functions are currently:
      txt_glyph_width_t
      txt_glyph_widths_t
      txt_glyph_widths()
      txt_get_unicode()
      txt_char_widths_to_uts()
      txt_calculate_text_size()
  Building:
  By default we do not build with Extract and there will be no docxwrite device in the final
  executables. To build with Extract, specify the location of the extract checkout to build and
  link with.
  Unix:
      ./autogen.sh --with-extract-dir=<extract-dir>
  Windows:
  Set environment variable EXTRACT_DIR=<extract-dir> when building, e.g.:
      EXTRACT_DIR=<extract-dir> devenv.com windows/GhostPDL.sln /Build Debug /Project ghostscript
  On both Unix and Windows we exit with an error message if the specified location does not
  exist.
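  Once built with Extract as described, an illustrative invocation of the new device (file names
  assumed) would be:

      gs -sDEVICE=docxwrite -o out.docx input.pdf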
* Squashed commit of pdfwrite_ocr branch. (Ken Sharp, 2020-11-12; 1 file, -1/+1)
  This introduces OCR operation to the pdfwrite device. The full development history can be seen
  on the pdfwrite_ocr branch. The list of individual commits is as follows:
  --------------------------------------------
  Interim commit for pdfwrite+OCR
  This is the initial framework for pdfwrite to send a bitmap of a glyph to an OCR engine in
  order to generate a Unicode code point for it. This code must not be used as-is; in
  particular it prevents the function gs_font_map_glyph_to_unicode from functioning properly in
  the absence of OCR software, and the connection between pdfwrite and the OCR engine is not
  present. We need to add either compile-time or run-time detection of an OCR engine and only
  use one if present, as well as some control to decide when to use OCR. We might always use
  OCR, or only when there is no Glyph2Unicode dictionary available, or simply when all other
  fallbacks fail.
  --------------------------------------------
  Hook Tesseract up to pdfwrite.
  --------------------------------------------
  More work on pdfwrite + OCR
  Reset the stage of the state machine after processing a returned value. Set the unicode value
  used by the ToUnicode processing from the value returned by OCR.
  Much more complex than previously thought; process_text_return_width() processes all the
  contents of the text in the enumerator on the first pass, because it's trying to decide if we
  can use a fast case (all widths are default) or not. This means that if we want to jump out
  and OCR a glyph, we need to track which index in the string process_text_return_width was
  dealing with, rather than the text enumerator index. Fortunately we are already using a copy
  of the enumerator to run the glyph, so we simply need to capture the index and set the copied
  enumerator index from it.
  --------------------------------------------
  Tweak Tesseract build to include legacy engine.
  Actually making use of the legacy engine requires a different set of eng.traineddata be used,
  and for the engine to be selected away from LSTM. Suitable traineddata can be found here, for
  instance (open the link, and click the download button):
  https://github.com/tesseract-ocr/tessdata/blob/master/eng.traineddata
  --------------------------------------------
  Add OCRLanguage and OCREngine parameters to pdfwrite.
  --------------------------------------------
  Add gs_param_list_dump() debug function.
  --------------------------------------------
  Improve use of pdfwrite with OCR
  Rework the pdfwrite OCR code extensively in order to create a large 'strip' bitmap from a
  number of glyphs in a single call to the text routine. The hope is that this will provide
  better context for Tesseract and improved recognition.
  Due to the way that text enumerators work, and the requirement to exit to the PostScript
  interpreter in order to render glyph bitmaps, I've had to abandon efforts to run as a fully
  'on demand' system. We can't wait until we find a glyph with no Unicode value and then try to
  render all the glyphs up to that point (and all the following ones as well). It is probably
  possible to do this but it would mean rewriting the text processing code, which is quite
  hideous enough as it is.
  So now we render each glyph in the text string, and store them in a linked list. When we're
  done with the text we free the memory.
  If we find a glyph with no Unicode value then on the first pass we take the list of glyphs,
  create a big bitmap from them and send it to Tesseract. That should then return all the
  character codes, which we keep. On subsequent missing Unicode values we consult the stored
  list. We need to deal specially with space glyphs (which make no marks) as Tesseract
  (clearly!) can't spot those.
  Modify makefile (devs.mak) so that we have a preprocessor flag we can use for conditional
  compilation. Currently OCR_VERSION is 0 for no OCR, 1 for Tesseract; there may be higher
  numbers in future.
  Add a new function to the OCR interface to process and return multiple glyphs at once from a
  bitmap. Don't delete the code for a single bitmap because we may want to use that in future
  enhancements.
  If we don't get the expected number of characters back from the OCR engine then we currently
  abort the processing. Future enhancements: fall back to using a single bitmap instead of a
  strip of text; if we get *more* characters than expected, check for ligatures (fi, ffi etc).
  Even if we've already seen a glyph, if we have not yet assigned it a Unicode value then
  attempt to OCR it. So if we fail a character in one place we may be able to recognise it in
  another. This requires new code in gsfcmap.c to determine if we have a Unicode code point
  assigned.
  Make all the relevant code, especially the params code, only compile if OCR is enabled
  (Tesseract and Leptonica present and built).
  Remove some debugging print code.
  Add documentation.
  --------------------------------------------
  Remove vestiges of earlier OCR attempt
  Trying to identify each glyph bitmap individually didn't work as well and is replaced by the
  new 'all the characters in the text operation' approach. There were a few vestiges of the old
  approach lying around and one of them was causing problems when OCR was not enabled. Remove
  all of that cruft here.
* Strip trailing whitespace from makefiles. (Robin Watts, 2020-09-09; 1 file, -1/+1)
* Update header dependencies (Robin Watts, 2020-08-03; 1 file, -1278/+1278)
* Fix spot color handling of display device. (Robin Watts, 2020-07-07; 1 file, -1/+1)
  The device icc_struct needs to be initialised before we try to use it.
* Tesseract based OCR devices. (Robin Watts, 2020-06-03; 1 file, -2/+47)
  pdfocr8/24/32, ocr and hocr devices. Use OCRLanguage to set languages to use ("eng" by
  default).
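  For example (file names and resolution are assumptions; requires a build with Tesseract/OCR
  support), extracting recognised text with the ocr device:

      gs -sDEVICE=ocr -r300 -sOCRLanguage=eng -o out.txt input.pdf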
* Squashed commit of the page_group branch (tag: ghostpdl-9.52-test-base-4) (Michael Vrhel, 2020-05-01; 1 file, -1/+1)
  The issue this branch is trying to solve is to ensure that the alpha blending occurs in the
  proper page group color space. If the page group is CMYK and the device is RGB then the final
  alpha blend must occur prior to the color conversion. Currently with the head code this is not
  the case. This work required a significant rework of how the transparency group pop occurred,
  since if it is the final group, the blend will not occur until the put_image operation.
  The group color handling was completely reworked and simplified. The reworked code now
  maintains a group_color object that is related to its own color rather than the parent as
  before.
  In addition, during the push_device operation, a buffer is not created. Previously an entire
  page buffer was created. If we have a page group that is smaller than the whole page, this
  will save us space. The downside of this is that we need to ensure we have a buffer in place
  when the first drawing operation occurs.
  There were several issues with the bitrgbtags devices as well as the pngalpha and psdcmyk16
  devices that had to be considered during the put_image operation.
* Update copyright to 2020 (Chris Liddell, 2020-04-10; 1 file, -1/+1)
* Bug 702019: fix dependencies for cups devs and gdevp14 (Chris Liddell, 2020-01-06; 1 file, -2/+4)
  Any "printer" device depends on the low level 'page' device (page.dev); unaccountably, the
  cups devices (cups and pwgraster) did not have that dependency in the makefiles.
  Also, the PDF transparency compositor now (and for some time) has also depended upon page.dev,
  so update the makefiles for that, too.
* Use CAL halftoning in tiffsep1 post processing. (Robin Watts, 2020-01-03; 1 file, -2/+13)
  A quick back to back test with/without cal using:
      bin/gswin32c.exe -sDEVICE=tiffsep1 -o out.tif -r600 -dMaxBitmap=80000000 examples/tiger.eps
  shows timings of 1.142s vs 1.297s on my machine.
* Bug 701880: tiffsep1 threshold_from_order caused dots in full white. (Ray Johnston, 2019-12-27; 1 file, -0/+1)
  For patterns with > 256 dots, threshold_from_order would put in 0 value cells which would then
  always be imaged. Change this device to (finally) use the gx_ht_construct_threshold used by
  the fast_ht thresholding code so that it should match the other devices, such as pbmraw. Also
  vertically invert the use of the threshold array to match the dots of the other devices.
  Add missing dependencies for gdevtsep.c in devs.mak.
* Remove gproof device. (Robin Watts, 2019-11-20; 1 file, -24/+0)
* Public build changes to accommodate private URF support. (Robin Watts, 2019-11-13; 1 file, -0/+19)
  This includes build rubric for devices to generate URF files (urfgray, urfrgb, urfcmyk), a
  decompression filter for the RLE variant used in URF files, and a urf "language" interpreter
  implementation for gpdl.
  Note, this is only the build framework for these things. The actual implementation code lives
  in the private 'urf' git module, and will be activated automatically as part of the build if
  it is in position at configure time.
* Run toolbin/headercompile.pl and update dependencies in Makefiles. (Robin Watts, 2019-09-25; 1 file, -1208/+1220)
  Move the definition of x_pixel within the headers to ensure gdevcmp.h stands alone. Include a
  ufst header to ensure that gxfapiu.h stands alone.
* Move FILE * operations behind new gp_file * API. (Robin Watts, 2019-05-29; 1 file, -1202/+1293)
  (squash of commits from filesec branch)
  Most of this commit is donkeywork conversions of calls from FILE * -> gp_file *, fwrite ->
  gp_fwrite etc. Pretty much every device is touched, along with the clist and parsing code.
  The more interesting changes are within gp.h (where the actual new API is defined), gpmisc.c
  (where the basic implementations live), and the platform specific levels (gp_mswin.c,
  gp_unifs.c etc, where the platform specific implementations have been tweaked/renamed).
  File opening path validation: all file opening routines now call a central routine for path
  validation. This then consults new entries in gs_lib_ctx to see if validation is enabled or
  not. If so, it validates the paths by seeing if they match.
  Simple C level functions for adding/removing/clearing paths, exposed through the gsapi level.
  Add 2 PostScript operators for path control:
      <name> <string> .addcontrolpath -
          Add the given <string> (path) to the list of paths for control set <name>, where
          <name> can be /PermitFileReading, /PermitFileWriting, or /PermitFileControl (anything
          else -> rangecheck).
      - .activatepathcontrol -
          Enable path control. At this point PS cannot make any more changes, and all file
          access is checked.
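  A minimal PostScript sketch of those two operators as documented above (the directory path is
  an arbitrary example, not from the commit):

      /PermitFileReading (/usr/share/fonts/) .addcontrolpath
      .activatepathcontrol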
* Add non-packed capability to packps (Chris Liddell, 2019-02-04; 1 file, -2/+8)
  For creating opdfread.h we normally strip comments and whitespace from the Postscript, but
  that makes reading and debugging the opdfread Postscript even more unpleasant than it has to
  be.
  This adds a "-d" option ("don't pack") to pack_ps.c. Also adds a subtarget for opdfread.h that
  means invoking make thus:
      make DEBUG_OPDFREAD=1
  will create opdfread.h without stripping whitespace and comments, so the output from ps2write
  is only baroque and less wilfully horrific.
* Update source/header file copyright notice to 2019 (Chris Liddell, 2019-01-16; 1 file, -1/+1)
* Remove some blah_DEFINED cruft. (Robin Watts, 2019-01-07; 1 file, -18/+2575)
  Now we properly "include what we use", let's sanitise the horrid blah_DEFINED ifdeffery (i.e.
  kill it where possible).
  Also, we update the .c dependencies in the base/psi makefiles to be correct. Unfortunately,
  this new correct set of dependencies causes nmake to soil itself and die with an out of memory
  error. After much experimentation, I've come to the conclusion that this is because it copes
  poorly with being given the same file as a dependency multiple times. Sadly, our style of
  declaring dependencies in the following form:
      foo_h=$(BLAH)/foo.h $(std_h)
      bar_h=$(BLAH)/bar.h $(foo_h) $(std_h)
      baz_h=$(BLAH)/baz.h $(foo_h) $(std_h)
  means that a .obj file that depends on $(foo_h), $(bar_h) and $(baz_h) ends up depending on
  foo.h twice, and std.h three times.
  I have therefore changed the style of dependencies used to be more standard. We still define:
      foo_h=$(BLAH)/foo.h
  so each .obj file rule can depend on $(foo_h) etc as required, but the dependencies between
  each .h file are expressed in normal rules at the end of the file in a dedicated
  "# Dependencies" section that we can now autogenerate.
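  A hedged sketch of what that "# Dependencies" section style looks like (foo.h/bar.h/std.h are
  the same illustrative placeholders as above, not real devs.mak entries):

      foo_h=$(BLAH)/foo.h
      bar_h=$(BLAH)/bar.h

      # Dependencies:
      $(BLAH)/bar.h:$(BLAH)/foo.h $(BLAH)/std.h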
* Bug 688210: Add psdcmyk16 and psdrgb16 devices (Michael Vrhel, 2018-12-29; 1 file, -0/+6)
  So the patch that was provided was far from sufficient in making this work. There was
  significant work that needed to be done in the pdf14 device to ensure that the transparency
  put_image operation worked correctly. Not sure if the author was paid a bounty but I question
  if it was properly tested.
  I tested this with some extreme altona files and others and everything seems to be working
  well now. Having the psdcmyk16 device is nice as it is our only device to have full spot color
  support and 16 bit output precision.
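  An illustrative invocation of the new device (file names and resolution are assumptions):

      gs -sDEVICE=psdcmyk16 -r300 -o out.psd input.pdf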
* Fix header inclusions. (Robin Watts, 2018-12-14; 1 file, -6/+7)
  Run a release build on a linux machine to make arch.h etc. Then run toolbin/headercompile.pl
  to test compiling each header file by itself.
  Resolve all the missing #includes, add missing repeated include guards and copyright
  statements. Also, update all the header dependencies in the makefiles.
  It is possible that the object dependencies in the makefiles can be simplified now, but that's
  a task for another day.
* Commit of gpdl-shared-device branch. (Chris Liddell, 2018-12-07; 1 file, -0/+15)
  This commit is a squashed version of the gpdl-shared-device branch. Essentially this is a
  first version of the new language switching mechanism.
  This does not build as part of "all", but rather as "experimental" or "gpdl".
* For ICC profile validation, have cups id itself as DeviceN (Chris Liddell, 2018-09-04; 1 file, -1/+1)
  Given the range of color spaces and models that cups supports, we can't reasonably provide (or
  expect others to provide) output ICC profiles for all cases.
  For the purpose of profile validation, have it claim to be DeviceN and benefit from the extra
  tolerance in profiles allowed for that class of device.
* Fix rendering issue on tiffscaled devices (Michael Vrhel, 2018-06-29; 1 file, -1/+1)
  The tiffscaled contone devices have to be able to change their color model to allow a more
  flexible use of the post render ICC profile with the output intent. Prior to this commit,
  certain profile combinations would result in mis-rendered results.
  With this fix, if we want to render to a CMYK intermediate output intent but we want the
  output to be in sRGB, then we need to use
      -sDEVICE=tiffscaled24 -dUsePDFX3Profile -sOutputICCProfile=default_cmyk.icc -sPostRenderProfile=srgb.icc
  This should then render to a temporary buffer that is in the OutputIntent color space and then
  be converted to sRGB. This should look like the result we get when we go out to the
  tiffscaled32 device.
  This is in contrast to the command line
      -sDEVICE=tiffscaled24 -dUsePDFX3Profile -sPostRenderProfile=srgb.icc
  which would end up using the output intent as a proofing profile. The results may be similar
  but not exact, as overprint and spot colors would not appear correctly due to the additive
  color model during rendering.
* Fix errors introduced in previous commit (Michael Vrhel, 2018-06-27; 1 file, -3/+3)
  The commit f2cf68297e3d63cb927db3c98d317f7ee68e7898 resulted in errors with the separation
  type devices. With these devices, we can simply check if the color model matches the ICC
  profile since these devices change their number of components. Will likely need to do some
  testing with these devices and different profiles to see what breaks when, and make sure we
  exit gracefully.
* Bug 699381 Add error checking for device icc profiles (Michael Vrhel, 2018-06-22; 1 file, -1/+1)
  Make sure that the various types of profiles that can be set work with the device color model
  and with each other. Only allow the use of the post render ICC profile when the device
  supports it.
* Update copyright notice with new head office address. (Ken Sharp, 2018-01-30; 1 file, -3/+3)
  Also update copyright dates.
  Remove gs_cmdl.ps as we no longer use it, and remove its entry from psfiles.htm.
  Remove xfonts.htm as this feature (xfont support) is long, long gone.
* Fix (some) missing dependencies (Ray Johnston, 2018-01-17; 1 file, -5/+5)
  Noticed during other work, but not committed as part of those fixes since this is unrelated.
* Fix compiler warnings after pdfimage device commit (Ken Sharp, 2017-12-14; 1 file, -2/+1)
* New devices - pdfimage and PCLm (Ken Sharp, 2017-12-14; 1 file, -0/+24)
  These devices render input to a bitmap (or in the case of PCLm multiple bitmaps) then wrap the
  bitmap(s) up as the content of a PDF file. For PCLm there are some additional rules regarding
  headers, extra content and the order in which the content is written in the PDF file.
  The aim is to support the PCLm mobile printing standard, and to permit production of PDF files
  from input where the graphics model differs significantly from PDF (eg PCL and RasterOPs).
  Devices are named pdfimage8, pdfimage24, pdfimage32 and PCLm. They currently produce valid PDF
  files with a colour depth of 8 (Gray), 24 (RGB) or 32 (CMYK); the PCLm device only supports
  24-bit RGB.
  The devices support the DownScaleFactor switch to implement page level anti-aliasing.
  -sCompression can be set to None, LZW, Flate, JPEG or RLE (LZW is not supported on PCLm, None
  is only available on PCLm for debugging purposes). The PCLm device supports -dStripHeight to
  set the vertical height of the strips of image content, as required by the specification.
  For JPEG compression the devices support both the JPEGQ and QFactor controls, exactly as per
  the jpeg and jpeggray devices.
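  An illustrative invocation (file names, resolution and compression choice are assumptions, not
  from the commit):

      gs -sDEVICE=pdfimage24 -r300 -sCompression=JPEG -dDownScaleFactor=2 -o out.pdf input.ps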
* New "planr" device. 1 bit per component, RGB, planar device.Robin Watts2017-11-151-0/+5
|
* Bug 693684: Add InterpolateControl parameter to limit image interpolation. (Ray Johnston, 2017-07-05; 1 file, -1/+1)
  Previously, we always interpolated to the full device resolution. This parameter allows
  control of the resolution of the interpolated image, which makes sense for devices that cannot
  render continuous tone at the device resolution due to halftone cell size. This avoids the
  overhead of interpolation beyond what the device can reproduce and allows the user a
  quality/performance tradeoff.
  The -dDOINTERPOLATE option is equivalent to -dInterpolateControl=-1 and -dNOINTERPOLATE is
  equivalent to -dInterpolateControl=0. These options still work for PS/PDF files, but are
  deprecated and may be removed in the future.
  Performance results vary, but using -dInterpolateControl=4 is 4.5 times faster with files that
  have images that are scaled up and cover a large portion of the page, such as
  comparefiles/Bug695221.ps at 600dpi.
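  An illustrative use of the new control at 600dpi with the file mentioned above (the device and
  output name are assumptions):

      gs -sDEVICE=png16m -r600 -dInterpolateControl=4 -o out.png comparefiles/Bug695221.ps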
* Auto-generate opdfread.h from opdfread.ps during the build using packps tool (Steve Phillips, 2017-04-20; 1 file, -1/+4)
  This commit implements a new packps tool which converts PostScript files to packed arrays of C
  strings, providing a method to compile PostScript code directly into the executable. Using
  this tool, the opdfread.h file is now derived from devices/vector/opdfread.ps.
  - The packps utility is built from the new file base/pack_ps.c.
  - opdfread.ps has been moved from lib to /devices/vector. This is now the reference file for
    the opdfread header procset code.
  - opdfread.h has been removed from /devices/vector and is now built within the DEVGEN
    directory as part of the make process.
  - For Windows builds, a stale file reference to Resource\Init\opdfread.ps has been removed
    from the project file. Additionally, a file reference to opdfread.ps has been added to the
    /devices/vector source list, and the reference to opdfread.h has been removed from the
    header file list.
* Add -sPostRenderICCProfile support to tiffsep (Michael Vrhel, 2016-10-17; 1 file, -1/+2)
  Customer would like to have this capability.
* Remove the os2prn device (Chris Liddell, 2016-10-10; 1 file, -2/+0)
* Remove the mswindll and mswin devices (Chris Liddell, 2016-10-10; 1 file, -2/+0)
  As well as the (long non-functional/non-useful) Windows "xfont" code.
* Remove the various "vga" devices (Chris Liddell, 2016-10-10; 1 file, -54/+0)