
(v13) Using HVD with page ranges


This page applies to Harlequin v13.1r0 and later, and to Harlequin Core but not Harlequin MultiRIP.

A large job can be split into “chunks” of data using the /PageRange parameter. This can be useful for minimizing VM usage on very large jobs, or for splitting a single PDF file across multiple RIPs.

Here, for example, the job is split into chunks of 10 pages:

TEXT
/PDFContext (%E%//TestJobs/largejob.pdf) (r) file << >> pdfopen def
PDFContext << /PageRange [ [1 10] ] >> pdfexecid
PDFContext << /PageRange [ [11 20] ] >> pdfexecid
PDFContext << /PageRange [ [21 30] ] >> pdfexecid
PDFContext << /PageRange [ [31 40] ] >> pdfexecid
PDFContext << /PageRange [ [41 50] ] >> pdfexecid
PDFContext pdfclose
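The repeated pdfexecid calls above can also be generated with a loop. The following sketch produces the same five 10-page chunks using the PostScript for operator; the chunk size and total page count (10 and 50) are assumptions matching the example above, and you would normally derive the total from the job itself:

TEXT
/PDFContext (%E%//TestJobs/largejob.pdf) (r) file << >> pdfopen def
% Iterate first = 1, 11, 21, 31, 41; execute pages [first, first+9]
1 10 41 {
  /first exch def
  PDFContext << /PageRange [ [ first first 9 add ] ] >> pdfexecid
} for
PDFContext pdfclose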

If you are writing a PostScript language control stream that needs to execute chunks from different PDF files, you should call pdfclose on the first PDF file before calling pdfexecid on a chunk from the second to ensure that HVD scanning is triggered for the second file.
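For example, a control stream that processes chunks from two PDF files might look like the following sketch. The file names and context names here are hypothetical; the point is the placement of the first pdfclose before the second file's pdfexecid:

TEXT
% First PDF file (hypothetical path)
/CtxA (%E%//TestJobs/jobA.pdf) (r) file << >> pdfopen def
CtxA << /PageRange [ [1 10] ] >> pdfexecid
CtxA pdfclose                % close the first file before touching the second...
% Second PDF file (hypothetical path)
/CtxB (%E%//TestJobs/jobB.pdf) (r) file << >> pdfopen def
CtxB << /PageRange [ [1 10] ] >> pdfexecid   % ...so HVD scanning is triggered here
CtxB pdfclose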

When using chunks with eHVD, the same unique ID is assigned to a raster element constructed from the same set of graphical elements, even if each chunk is processed on a different RIP within your system. However, decisions about how best to coalesce graphical elements that are used together are made based only on the pages within the current chunk. With small chunks it is quite likely that different decisions are made in each chunk, so the opportunity to share raster elements between chunks may be rather small. As the chunk size increases, it becomes more likely that the same coalescing decisions are made in multiple chunks, and therefore more raster elements may be shared between them.

In the same way, if very small chunks are used, it becomes more likely that no re-use will be found within a single chunk. As explained in /OptimizedPDFScanLimitPercent, scanning is automatically disabled for all subsequent chunks in that case. Using larger chunks may avoid this issue for a job that will benefit from HVD, but where repeats may be spread over a significant range of pages.
