Harlequin VariData (HVD) and the Scalable RIP

This page applies to Harlequin v13.1r0 and later, and to Harlequin Core but not Harlequin MultiRIP.

The Scalable RIP can be configured to use HVD. When using HVD, it is important to realize that each job is split into chunks, and the chunks are farmed out to separate RIPs for interpretation and rendering. HVD scans the start of each job it sees to determine whether there is enough repeated content to be worth caching. If not enough content is repeated, HVD disables caching for the rest of the job. In the Scalable RIP, HVD scanning and re-use are performed on each Farm RIP independently. When sending page ranges from a job to a Farm RIP, the Scalable RIP keeps the job context open on the Farm RIP if it has previously run a page range from the same Scalable RIP job.

When using HVD with the Scalable RIP, this means that:

1. The chunk size must be large enough for HVD to be able to detect re-used content within the first page range of a job.

2. The HVD scan limit percentage must be set so that HVD is likely to detect re-used content within the first page range of a job.

If HVD does not detect enough content re-use within the first page range of a job, it will disable re-use not just for the first page range, but for all subsequent page ranges of the job sent to the same Farm RIP.
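
As a worked example using the values that appear below: with a chunk size of 50 pages and the default scan limit of 10%, HVD scans only the first 5 pages (10% of 50) of the first page range before deciding. If the job's repeated content does not appear within those 5 pages, re-use is disabled for every page range subsequently sent to that Farm RIP. Raising the scan limit to 100% lets HVD examine the whole 50-page range before committing.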

The default chunk size that the Scalable RIP uses to split PDF jobs is 1, which prevents HVD from working with iHVD and with non-position-independent eHVD. For some common PDF-VT job types, HVD can be turned on automatically when the job is likely to benefit, and the chunk size set to a different value (50 in the following example), by using the AutoHVDChunkSize configuration in your configuration file or a page feature:

50 /HqnScalableRIP /ProcSet findresource /AutoHVDChunkSize get exec

NOTE: The above example uses the same functionality as outlined in the Extensions Manual; for more information see Auto mode for HVD.

The configuration or page feature using AutoHVDChunkSize also needs to set up the HVD cache ID, any other HVD parameters except /EnableOptimizedPDFScan, and any license key required for HVD.
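
For illustration, the fragment below sketches one way such a configuration might look, combining the HVD parameters with the automatic chunk-size selection. It is a sketch rather than a drop-in configuration: the cache ID (HVDCACHE1) is a placeholder, the parameter values are examples only, the ordering (setting the PDF parameters before invoking AutoHVDChunkSize) is an assumption, and any HVD license-key installation (which depends on your integration) is omitted:

% Set up HVD parameters first; /EnableOptimizedPDFScan is deliberately
% omitted because Auto mode manages the scan decision itself.
<<
  /OptimizedPDFExternal true             % example: external HVD (eHVD)
  /OptimizedPDFCacheID (HVDCACHE1)       % placeholder cache ID
  /OptimizedPDFPositionIndependent true  % example value
  /OptimizedPDFScanLimitPercent 100      % scan the whole first page range
>> setpdfparams

% Enable auto HVD with a 50-page chunk size, as above.
50 /HqnScalableRIP /ProcSet findresource /AutoHVDChunkSize get exec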

The HVD scan limit percentage is configured using the /OptimizedPDFScanLimitPercent PDF parameter. The default value is 10 (i.e., up to 10% of the job is scanned for re-use). When used with the Scalable RIP, this is a percentage of the first page range encountered, so a much higher value is appropriate. To scan the entire submitted page range for re-use, set this parameter to 100 in the configuration or a page feature:

<<
  /OptimizedPDFScanLimitPercent 100
  /OptimizedPDFExternal true
  /OptimizedPDFCacheID (GGDUMB1)
  /OptimizedPDFPositionIndependent true
>> setpdfparams

The chunk size can be set explicitly by using the DefaultPageChunkSize key in the global config file, but this has the disadvantage that it affects all jobs (not just variable data jobs), reducing the load-balancing capability of the Scalable RIP for small non-variable data jobs. The chunk size can also be set for each job separately by setting a parameter on the internal Scalable RIP device. An alternative method of configuring the chunk size for variable data jobs separately from non-variable data jobs is to call /SetChunkSize in the Scalable RIP procset, thus:

50 /HqnScalableRIP /ProcSet findresource /SetChunkSize get exec

This will set the chunk size for the current configuration to 50 pages. This configuration option can also be added in a page feature.
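
Putting the pieces together, a page feature for a variable data workflow might set the chunk size and the HVD parameters in one place. This again is only a sketch assembled from the fragments above: the cache ID is a placeholder, the values are examples, and the use of /EnableOptimizedPDFScan to enable scanning manually (rather than via Auto mode) is an assumption based on its exclusion noted earlier:

% Example page feature: 50-page chunks plus explicit HVD setup.
50 /HqnScalableRIP /ProcSet findresource /SetChunkSize get exec
<<
  /EnableOptimizedPDFScan true           % assumed manual enable switch
  /OptimizedPDFScanLimitPercent 100      % scan the whole first page range
  /OptimizedPDFExternal true             % example: external HVD (eHVD)
  /OptimizedPDFCacheID (HVDCACHE1)       % placeholder cache ID
  /OptimizedPDFPositionIndependent true  % example value
>> setpdfparams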
