
(v13) Using HVD with ranges of pages


This page applies to Harlequin v13.1r0 and later, and to Harlequin Core but not Harlequin MultiRIP.

A large job can be split into “chunks” of data with the use of /PageRange. Here, for example, the job is split into chunks of 10 pages:

/PDFContext (%E%//TestJobs/largejob.pdf) (r) file << >> pdfopen def
  PDFContext << /PageRange [ [1 10] ] >> pdfexecid
  PDFContext << /PageRange [ [11 20] ] >> pdfexecid
  PDFContext << /PageRange [ [21 30] ] >> pdfexecid
  PDFContext << /PageRange [ [31 40] ] >> pdfexecid
  PDFContext << /PageRange [ [41 50] ] >> pdfexecid
PDFContext pdfclose

When running this PostScript language fragment in an HVD setup, suppose that during the first page range (1 to 10) some variable data is retained for re-use, but the scan is aborted during a subsequent range; the scan for variable data is then abandoned for the rest of the job. So if you are processing small chunks of data and see jobs aborting the HVD scan when you expect data to be re-used, increase the /OptimizedPDFScanLimitPercent value, possibly up to its maximum of 100, at which point the HVD scan continues for the whole job.
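As a sketch, the limit could be raised in the parameter dictionary passed with each chunk. This assumes /OptimizedPDFScanLimitPercent is accepted alongside /PageRange in the pdfexecid parameter dictionary; check the Harlequin Core SDK documentation for the exact placement of this parameter in your configuration.

% Sketch: raise the scan limit to its maximum (100) so the HVD scan
% is not abandoned part-way through a chunked job.
% Assumption: /OptimizedPDFScanLimitPercent may be supplied in the
% same dictionary as /PageRange (verify against your SDK documentation).
/PDFContext (%E%//TestJobs/largejob.pdf) (r) file << >> pdfopen def
  PDFContext << /PageRange [ [1 10] ] /OptimizedPDFScanLimitPercent 100 >> pdfexecid
  PDFContext << /PageRange [ [11 20] ] /OptimizedPDFScanLimitPercent 100 >> pdfexecid
PDFContext pdfclose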

If you are writing a PostScript language control stream that needs to execute chunks from different PDF files, call pdfclose on the first PDF file before calling pdfexecid on a chunk from the second, to ensure that HVD scanning is triggered for the second file.
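The ordering described above can be sketched as follows. The file names here are illustrative; the important point is that the first context is closed before the first pdfexecid call on the second context.

% Sketch: executing chunks from two PDF files in sequence.
% Closing the first context before executing any chunk from the
% second file ensures HVD scanning is triggered for the second file.
/Ctx1 (%E%//TestJobs/jobA.pdf) (r) file << >> pdfopen def
  Ctx1 << /PageRange [ [1 10] ] >> pdfexecid
  Ctx1 << /PageRange [ [11 20] ] >> pdfexecid
Ctx1 pdfclose            % close the first file before opening the second

/Ctx2 (%E%//TestJobs/jobB.pdf) (r) file << >> pdfopen def
  Ctx2 << /PageRange [ [1 10] ] >> pdfexecid
Ctx2 pdfclose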


