DCamProf - a digital camera profiling tool

Table of contents

News

2015-09-06
Version 0.9.7, changes:
  • Further reworking of the ICC LUT code to fix bad shadows.
  • When extracting tone curves from transfer functions, tiff-tf will now also compensate for non-zero starting points (unless the offset is large, in which case it's considered part of the curve design and not compensated). This is only a mathematical cleanup; it won't have a visible effect on profiles, so you don't need to re-generate.
  • You can now run dcamprof -v to get the version.
2015-09-05
Version 0.9.6, changes:
  • Fixed critical ICC LUT bug which caused bad shadows with the neutral tone reproduction operator.
2015-09-03
Version 0.9.5, changes:
  • Smarter illuminant copying when using the -m parameter for make-dcp.
  • Exclude lightness (or chromaticity) from CIEDE2000 comparison in LUT generation when using -l with negative parameters (make-profile). This increases the relaxation effect a little.
  • Added -E flag to match-spectra to force spectra as emissive.
  • I've now written a separate tutorial article on how to make profiles using DCamProf. It's intended to complement the reference documentation found on this page.
2015-08-28
Version 0.9.4, changes:
  • Reworked ICC LUT rendering, especially the handling of out-of-gamut values, to minimize artifacts for extreme white balance settings.
  • Fixed critical icc2json Lab LUT bug (didn't work at all in previous versions).
  • Fixed so make-dcp now embeds linear curve per default again, as it should.
  • Added -m parameter to make-dcp which can be used to avoid a white balance shift in Adobe Lightroom when you change from an old to a new profile.
  • Slight adjustment of test-profile illuminant handling and the information shown.
  • Updated broken SSF database link on this page.
2015-08-18
Version 0.9.3, changes:
  • The neutral tone reproduction operator highlight rolloff can now be tuned differently per hue, which for example can be used to apply a longer rolloff on skin tones and a shorter one on landscape skies in subjective profiles. The bundled look operators example has been updated to show its use.
  • Minor adjustment to the neutral tone reproduction highlight rolloff mixing.
  • It's now possible to extract tone curves through transfer function comparison using the tiff-tf command. This can for example be used if you want to extract a tone curve in a Capture One workflow.
  • Integer curves (as transfer functions are) now get their truncated values reconstructed per default.
  • Added another JSON format for tone curves, so you can specify splines with any gamma. This can be useful when you design a curve inside the raw converter using its curve tool. If you know what gamma it's using and the coordinates are retrievable you can convert it to a JSON format tone curve and use it in your DCamProf workflow.
  • Now possible to cascade tone curves in make-icc and make-dcp commands.
  • Further updates to the ICC workflow documentation, specifically for Capture One users showing how the new features can be used to design a profile with the same properties as native Capture One profiles.
  • You can now skip applying the curve directly to the LUT in the make-icc command, using the -T flag. This would be used for raw converters that apply the curve after the ICC profile. I don't know if any such converter exists, but if you need it you can now do it.
2015-08-14
Version 0.9.2, changes:
  • Tested making an ICC profile for Capture One (was a while ago), and made slight updates to the docs accordingly.
  • Improved ICC LUT rendering, which now avoids a "black hole" in the extreme blue area (caused by XYZ space clipping) that could cause extreme blues to become black.
    • The black hole can still be observed in DNG profiles, but due to the different LUT design in DNG profiles the blues are not pushed into that undefined area for real images, so I haven't yet found a reason to fix it there. The black hole is actually not an error: it's outside the human gamut and undefined in XYZ space, which then clips to zero.
  • Updated basic workflow docs to show the use of the neutral tone reproduction operator, and some basic LUT relaxation recipes.
  • Fixed a minor blending bug in the look operators.
2015-08-11
Version 0.9.1, changes:
  • The target patch matching reports now also come as 16 bit ProPhoto TIFF images, so you can see by eye how large the mismatch is. This makes it easier to get a quick overview of matching performance than reading plain text reports.
  • Now possible to run test-profile without a target (to just get plots etc).
  • Test-profile will in dump mode now output a gradient TIFF processed through the profile, so you can diagnose LUT smoothness. This is especially useful when designing a subjective look to verify that your look operators don't mess with the smoothness too much.
  • The make-dcp command will now test for hue shift discontinuity, and will abort (per default) if it detects it.
  • Extended look operators functionality with RGB-HSV and RGB-HSL color spaces, which may work better in various clipping scenarios.
2015-08-05
Version 0.9.0, changes:
  • DCamProf now supports making DNG and ICC profiles with subjective looks, as described in the look design documentation for the make-dcp command.
    • This feature is intended for subtle fine-tuning of the accurate look rather than making "Instagram"-style filters.
  • A new section about subjective looks in the documentation.
2015-07-29
Version 0.8.4, changes:
  • Now possible to disable lightness (or chromaticity) part of the LUT when making a profile, using negative relax parameters. This can be useful when making robust profiles.
2015-07-26
Version 0.8.3, changes:
  • Reworked tone reproduction operator.
    • While code changes are quite large, the principle is the same as the old one and the result is very similar.
    • Smoother transition into clipping, better curve analysis, fixed "grayness" issue in the shadow areas.
    • Some extra focus on tuning for skin tone rendering.
    • It's based on the same code as for the upcoming "Perceptual" curve in the RawTherapee project which I have contributed, with the help of expert eyes from members of the RawTherapee team.
  • Now possible to customize the neutral tone reproduction operator via a JSON configuration file.
  • The tone curve will be dumped to the report directory (tc.dat and tc-srgb.dat)
2015-07-06
Version 0.8.2, changes:
  • Added tone reproduction operator support to ICC profiles, that is, the same functionality that was previously available only for DNG profiles.
  • Added a small section in the documentation about tone curves for matrix-only profiles.
2015-07-03
Version 0.8.1, changes:
  • Fixed fatal bug in neutral tone reproduction operator.
  • Added scaling rolloff to avoid oversaturated extreme highlights in neutral tone reproduction operator.
2015-06-26
Version 0.8.0, changes:
  • Dropped the -k parameter and instead added a proper tone reproduction operator to make-dcp command. Now you can apply a subjective contrast curve without the color shift and over-saturation of the standard RGB curves. I intend to add the functionality to make-icc later, but it's not available in this release.
    • The neutral tone reproduction operator provided in this release is an early release, it's likely I will tune it further in coming releases.
  • Now possible to provide custom curves to make-dcp and make-icc.
  • Added a new section on tone curves, which I think is an important read as tone curves fundamentally affect color reproduction. The section has several example pictures that give insight into the effect of tone curves and profiles.
  • Changed transfer function parameter -t to -f, as tone reproduction took over -t (and added -T and -o).
  • Common tags like copyright strings and description can now be set directly from the command line for the make-dcp and make-icc commands.
  • Include DefaultBlackRender=none for DNG profiles, to avoid automatic black-level adjustment in some converters. If you're a Lightroom user you're probably used to automatic black level adjustment and may want it, if so provide the -B parameter to make-dcp.
  • Fixed bug that caused incorrect parsing of 3D DNG LUTs.
  • DNG 3D LUTs can now be plotted.
  • The make-dcp command now takes three values for the -h parameter; that is, divisions for the value axis were added to support 3D LUTs.
  • Improved scaling and normalization algorithm in match-spectra command for better matching.
  • Added standard illuminants D55 and D75 to the built-in spectral database.
  • Transposed built-in CC24 data set so it's 4 rows with 6 columns rather than the other way around (to match Argyll's CC24 layout).
2015-06-09
Version 0.7.4, changes:
  • Added possibility to adjust chroma of reference values (-k parameter to make-target, make-profile and test-profile), useful to compensate look for profiles intended for strong tone curves.
  • Changed default: make-dcp now sets a linear curve per default.
2015-06-07
Version 0.7.3, a quick update due to some bad bugs introduced in 0.7.2:
  • Fixed bug in TIFF flatfield correction that could cause over-exposure.
  • Fixed bug causing mix up of -C and -S flags in make-profile and test-profile.
  • Made it possible to totally exclude a class from matrix optimization by assigning weight zero.
  • Made it possible to address class name in exclude list, useful when patches are named the same in different classes.
  • The LUT now takes the CIEDE2000 kL/C/H weights into account.
2015-06-05
Version 0.7.2, changes:
  • I've added a fairly large section in the documentation which describes chromatic adaptation transforms. I think it's an important aspect of profiling if you make a profile for some other illuminant than D50, so do read it.
  • CAT is now enabled per default, so the -C option to make-target, make-profile and test-profile now has an inverted meaning.
  • Added -S parameter to make-target, make-profile and test-profile which enables rendering of virtual spectra if spectra are missing in the target; this generally improves the performance of the relighting transform.
  • For quick workflows you can now directly write .dcp and .icc from the make-profile command. To support this the -c option for camera name was renamed to -n to work in all three commands.
  • I've done a lot of testing concerning glare-related measurement errors, resulting in some new recommendations in the documentation.
  • Added linearization feature to testchart-ff, using the -L flag, which in some circumstances can reduce the effect of glare.
  • Added possibility to add a distributed step wedge to self-generated targets, using the -g parameter.
2015-05-30
Version 0.7.1, changes:
  • Added new command average-targets, useful if you want to reduce noise in measurements.
  • Added new command match-spectra, useful for comparing spectra of different targets.
  • Added possibility to flatfield correct TIFF files directly with the testchart-ff command, useful for targets that aren't speckled with white patches.
  • Read spectra are no longer resampled when written to .ti3 unless necessary.
  • Added -C option to make-target, make-profile and test-profile which causes the XYZ D50 reference values to be calculated via CAT02 from calibration illuminant values, rather than recalculated directly for D50 from spectra, useful if you want the profile to model color appearance differences between illuminants.
  • Replaced CAT02 with Bradford in cases where CAT is used for "relighting" rather than "perceptual" results, as Bradford generally performs better for this particular task while CAT02 is better at perceptual modeling.
  • Added nve-lutd.dat, a higher density grid useful when zooming in plots of the native LUT.
2015-05-23
Version 0.7.0. Changes:
  • Added -b and -B flags for white balancing control in make-profile and test-profile.
  • New default: make-profile will now per default re-balance the target reference values so that the whitest patch is considered 100% neutral even if it isn't. This is because most users will expect that color-picking the white patch produces a perfect white balance.
  • Added -x parameter to make-profile so you can provide a list of patches to exclude from the target, intended to be used when you're doing repeated re-generation runs to test removing problematic patches.
  • Added a new section "white balance and camera profiles" which describes the relationship between your raw converter's white balance and the camera profile, it's a good read.
  • Added new command "make-testchart" that generates an Argyll .ti1 used to make a real test target chart using Argyll's printtarg and an inkjet printer.
  • Added new command "testchart-ff" which is used to compensate for uneven light in a test chart photo.
  • Patch labels are now kept when processing .ti3 files, are shown in patch matching reports and can also be plotted, like this in gnuplot-speak: 'target-xyz.dat' pt 7 lc rgb var, 'target-xyz.dat' using 1:2:3:5 with labels offset 2
  • For convenience make-target no longer requires assigning a class name when reprocessing .ti3 files.
2015-05-18
Patch release 0.6.4. Fixed overall stats bug. Added possibility to show matrix result only on make-profile (-L), useful when doing repeated runs tuning weights for a matrix-only profile. Minor adjustment of matrix optimizer. Updated make-profile weighting documentation with some more tips.

2015-05-17
Patch release 0.6.3. Fixed a plot bug introduced in 0.6.2. I also adjusted whitepoint preservation handling: now the LUT (per default) excludes patches close to the whitepoint; as the matrix is whitepoint-preserving (neutral patches are already optimized as well as possible) it doesn't make sense to stretch there. Added sign to the error tables for lightness and chroma so one can see if a color is too light or too dark, too saturated or too desaturated. Added two more error vector plots.

2015-05-16
Patch release 0.6.2. Added ICC support for test-profile, including transfer function reversal, and also some ICC plots. Now ICC support is as good as DCP support.

2015-05-13
Patch release 0.6.1. Fixed various ICC bugs, added possibility to provide transfer functions which means Capture One ICCs can now be generated. It's now also possible to make LUT ICCs. Test-profile for ICC is still missing, but otherwise ICC support should be complete. To read transfer functions a new dependency was added, libtiff. The native LUT now applies whitepoint preservation.

2015-05-11
Here's 0.6.0. It's now possible to make ICC profiles (make-icc), matrix-only for linear pipelines only to start with (-L flag must be enabled). Added possibility to add a tone curve when making a DCP. Extended profile format (can't use old profile.json files, sorry). Added icc2json and json2icc commands. Added -I parameter to make-target so you can separate RGB and XYZ illuminants also in that command. Increased parsing flexibility for Argyll-like files (various CGATS), should handle patchtool files better.

2015-05-05
Patch release 0.5.4. Fixed bug in RGB/XYZ levels when merging several targets, fixed a few Windows compile bugs. DCamProf now supports emissive spectra in the target files (added SAMPLE_TYPE column, "R" or "E"). When merging targets the special patches "illuminant" and "white" (new) are now always kept even if there are nearby patches.

2015-05-03
Patch release 0.5.3. Added possibility to read Argyll SPECT files (produced by Argyll's illumread) as illuminants. Now it's possible to make dual-illuminant DCPs directly with the make-dcp command. When running the command without parameters there is now a full list of exif light sources and their temperatures, a useful list when choosing a suitable calibration illuminant. Added a new command txt2ti3 to convert raw text files with spectral data to .ti3 that make-target can read, useful when getting spectral data from various third-party sources.

2015-05-02
Patch release 0.5.2. A slight adjustment of observer remapping in make-dcp and some update of observer documentation.

2015-05-01
New patch release 0.5.1. I hadn't tested properly on Lightroom. It turned out that Lightroom cannot handle too high precision in the matrix rationals, and that it doesn't like the standard observer WP being different from 1931_2. I have now changed the default observer to 1931_2 as it's an easier default to use, and made an automatic remapping in the make-dcp tool to handle the case when a different observer was used during profile creation.

2015-04-30
Here's the first release of DCamProf to the public (0.5.0). This is an early one, and while you can make camera profiles with it it's still in a "hackish" state, with probably some bugs left and certainly slow. There are lots of silly loops over loops here and there. My excuse is that I've been focused on getting things working first rather than getting stuck optimizing for speed. As usual with these kinds of projects it has taken far more time than I initially planned, but then it's far more feature-rich too!

What is DCamProf?

DCamProf is a free and open-source command line tool for generating camera profiles and performing tasks related to camera profiles and profiling.

To generate a camera profile you need either the camera spectral sensitivity functions (SSFs) or a measured target. DCamProf has no measurement functionality, but you can use the free and open-source Argyll CMS to get a .ti3 file with measurement data which DCamProf can read.

Here's a feature list:

  • Generate camera profiles from test target measurements or camera spectral sensitivity functions (SSFs).
  • Import (and export) measurement data from Argyll.
  • Detailed control of matrix and LUT optimizers to hand-tune the trade-off between accuracy and smoothness.
  • Test profile color matching performance, with free choice of illuminant.
  • Save reports and data files for plotting (using gnuplot for example).
  • Simulate reflective spectra all the way to the locus.
  • Built-in spectral database with Munsell, Macbeth CC24, spectra from nature and common illuminants.
  • Import spectral data to be used in targets or as illuminants.
  • Analyze camera color separation performance under different illuminants using SSFs.
  • Native camera profile format, can be converted to DNG profiles (DCP) and ICC profiles.
  • Support for ICC raw converters that apply pre-processing curves, such as Capture One.
  • Apply a subjective film curve while keeping neutral realistic colors.
  • Optionally design and embed a subjective look in your profile.
  • Decode and hand-edit ICC/DCP profiles, and re-encode them again.
  • Generate own test charts.
  • Correct uneven lighting in test chart photos (flat-field correction).
Note that many features are related to camera SSFs, and indeed you get the most out of DCamProf if you have SSFs available. You don't need them to make great profiles though; having them is more about flexibility and convenience than quality. With SSFs you can also learn a lot about how cameras work by testing various scenarios: the efficiency of a specific target design, how a profile performs under a different illuminant, etc.

The reason I started this software was that 1) Argyll can't make DCPs, and 2) I was not pleased with the commercially available alternatives for making your own camera profiles. Too much hidden under the hood, too little control, and many indications that the quality of the finished profiles was not that good. Then I added the SSF ability and the software grew into something more than just a profile maker; now you can say it's a camera color rendering simulator as well.

The software is quite technical, but if you can use Argyll you can use DCamProf. There is also a separate tutorial on how to make profiles using DCamProf, intended to complement the reference documentation found on this page.

Downloading and building DCamProf

Download the source code for DCamProf v0.9.7. It's developed on Linux, and compiling it there is easy as all third-party dependencies should be available as standard packages in your Linux distribution. It should also be relatively easy to build on Mac OS X. On Windows it's easy to build using Cygwin or Mingw. It won't work with Microsoft's own compilers though, as they refuse to support the C99 standard, and I'm not in the mood to drop that just to let Microsoft be lazy and ignore standards.

DCamProf uses OpenMP to make use of all your CPU cores in parallel. You can build it without OpenMP (remove the -fopenmp flags from the Makefile) but some aspects of the program will then run much slower as it will use only one core. At the time of writing Mac OS X's standard compiler clang doesn't support OpenMP (it's in the works though), unless you build your own clang from source.

An alternative to building it on your OS X or Windows platform is simply to install Linux in a virtual machine and run it there; make sure you give the virtual machine access to all cores. An added bonus is then easy access to Argyll, gnuplot, exiftool and other related tools.

How DCamProf models cameras

DCamProf regards the perfect camera as a colorimetric camera, that is, one whose SSFs match the color matching functions for the CIE XYZ color space. No real camera is colorimetric, so the goal of profiling is to make the camera perform as closely as possible to one.

DCamProf assumes that the camera is linear, that is, if you for example double the intensity of a certain spectrum, the raw values will also double and their relation will not change. This is indeed true for any normal digital camera today, with the possible exceptions of extreme underexposure and values very close to clipping, where there can be non-linear effects.

The linearity assumption means that the correction lookup table only needs to be indexed by chromaticity (that is, saturation and hue, but not lightness), but the output still needs correction factors for all three dimensions, as some colors would be rendered too dark or too light with a single fixed factor throughout the full lightness range. That is, DCamProf works with a LUT with 2D input and 3D output, commonly referred to as a 2.5D LUT.

DCamProf does allow you to apply a subjective look on top of the accurate colorimetric 2.5D profile. It will then use a full 3D LUT so you can make lightness-dependent adjustments, but the colorimetric part always stays 2.5D.
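
To make the 2.5D idea concrete, here is a minimal Python sketch of a LUT that is indexed on 2D chromaticity only but outputs correction factors for all three channels. This is an illustration only, not DCamProf's actual code; the grid size and the nearest-neighbour lookup are simplifications I've chosen for brevity.

```python
# Sketch of a 2.5D LUT: 2D input (chromaticity), 3D output (one
# correction factor per channel). Illustration only, not DCamProf's
# actual data structures or interpolation.

GRID = 4  # hypothetical grid resolution

def chromaticity(r, g, b):
    """Project a linear RGB triplet onto 2D chromaticity (drops lightness)."""
    s = r + g + b
    return (r / s, g / s)

def apply_25d_lut(lut, rgb):
    """Look up correction factors by chromaticity and scale all channels.
    Nearest-neighbour lookup; a real implementation would interpolate."""
    x, y = chromaticity(*rgb)
    gx = min(int(x * (GRID - 1) + 0.5), GRID - 1)
    gy = min(int(y * (GRID - 1) + 0.5), GRID - 1)
    fr, fg, fb = lut[gy][gx]
    r, g, b = rgb
    return (r * fr, g * fg, b * fb)

# An identity LUT leaves every color unchanged.
identity = [[(1.0, 1.0, 1.0)] * GRID for _ in range(GRID)]

# Because the index depends only on chromaticity, a color and the same
# color at half the exposure land on the same LUT entry:
print(apply_25d_lut(identity, (0.4, 0.3, 0.2)))   # (0.4, 0.3, 0.2)
print(apply_25d_lut(identity, (0.2, 0.15, 0.1)))  # (0.2, 0.15, 0.1)
```

The key point is that darkening a color (scaling all three channels equally) does not change which LUT entry corrects it, which is exactly what the linearity assumption above justifies.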

2.5D vs 3D LUT

With a 2.5D LUT we assume that the same color in a darker shade will have the same spectrum shape, only scaled down. This is true if you render colors darker by reducing the camera exposure in a fixed condition. However, if we compare a dark and a light color of the same hue and saturation in printed media, the spectrum shapes can differ, because a typical print technology will alter the colorant mix (e.g. inks) depending on lightness. In some cases lightness is controlled by adding a spectrally flat white or black colorant, and in those cases spectrum shapes are retained, but that is not always the case.

This means that our linearity assumption breaks, as the relative mix of camera raw values may differ slightly between dark and light colors, and in this case a full 3D LUT could make a more exact correction. However, this only makes sense in highly controlled conditions when copying known media (such as printed photographs), that is, when you're using the camera just like a flatbed scanner. The light source must be fixed, the camera exposure must be fixed, and the camera profile must be designed using a target made from the same materials as the objects you shoot.

As a 3D LUT only makes sense in this very narrow use case DCamProf supports only 2.5D (so far). If you really need a 3D LUT you can use Argyll, but you're then limited to ICC profiles. For strict reproduction work that may be a better approach.

Note that commercial raw converters often use 3D LUTs, not to achieve better colorimetric accuracy though but to make subjective "look" adjustments, which you also can do with DCamProf with its "look operator" functionality.

Basic workflow for making a DNG profile using a test target

  1. Get or make a physical test target. The classic Macbeth/X-Rite 24 patch color checker is a fine choice.
  2. Get or make a reference file for the test target, preferably containing reflectance spectra.
    • If you're using that 24 patch color checker, like most will be, the reference file with reflectance spectra assembled by BabelColor is the second best choice if you can't measure it yourself. This data should be valid for the nowadays more popular "ColorChecker Passport" product too.
    • The above BabelColor CGATS text file example needs some conversion to be used with Argyll. To save you some time I've done it for you, look for "cc24_ref.cie" in the DCamProf distribution.
  3. Shoot your test target under the desired light source. Store in raw format.
  4. Convert the raw file to a 16 bit linear TIFF without white balancing.
    • You can use a recent version of RawTherapee (export for profiling, with disabled white balance), or DCRaw
      • dcraw -v -r 1 1 1 1 -o 0 -H 0 -T -W -g 1 1 <rawfile>
    • If you have an odd camera format and you want to use the profile in Adobe's products it may be safer to convert to DNG first using Adobe's DNG converter, as raw decoding of proprietary formats may differ a little concerning application of black levels, white levels and calibration data.
    • Crop so only the target is visible, and rotate if needed. Argyll is very sensitive to target orientation. If you use some image editor to do this make sure that the full 16 bit range is kept, that is don't use 8 bit Gimp. If you use RawTherapee you can crop and rotate in there.
  5. Use Argyll scanin command to generate a .ti3 file.
    • It needs the target reference file, test target layout file and raw image as 16 bit TIFF as input.
    • scanin -v -dipn rawfile.tif ColorChecker.cht cc24_ref.cie
    • The scanin command will generate a diag.tif which shows patch matching (look at it to see that it matched) and a rawfile.ti3 file which contains the raw values read from rawfile.tif together with reference data from the cc24_ref.cie file.
  6. Use DCamProf to make a profile from Argyll's rawfile.ti3 target file.
    • dcamprof make-profile rawfile.ti3 profile.json
    • The above command doesn't specify any illuminants, which means that the profile will be made for D50 and the rawfile.ti3 must contain reflectance spectra (it will if the example cc24_ref.cie is used) or have its XYZ values related to D50. To change the calibration illuminant use the -i parameter, and if the .ti3 lacks reflectance spectra specify its XYZ illuminant using -I.
    • Per default DCamProf prioritizes accuracy over smoothness, as most users will expect as good a target match as possible per default. However, that is usually not the best trade-off, so here are a few "recipes" for making smoother profiles:
      • Tell the optimizer that up to 1.5 DE is okay if it improves smoothness, and also that lightness errors are the least severe and hue errors the most severe:
        • dcamprof make-profile -w all 1.5,1,8,2,1 rawfile.ti3 profile.json
      • Same as above, with some added relaxation of the LUT bending, for an even smoother result:
        • dcamprof make-profile -w all 1.5,1,8,2,1 -l 0.1,0.1 rawfile.ti3 profile.json
      • Only correct hue and saturation, skip lightness (which I recommend if you have shot the target very casually, as it then likely has unreliable lightness values due to uneven light and glare). This is actually what Adobe's own DNG Profile Editor does, and is probably the best default if you don't intend to diagnose the result (with plotting and/or looking at test gradients) and adjust smoothing accordingly.
        • dcamprof make-profile -w all 1.5,1,8,2,1 -l -1,0 rawfile.ti3 profile.json
      • See the documentation for the make-profile command for an in-depth explanation of the weighting and LUT smoothing parameters.
  7. Convert the native format profile to a DCP.
    • dcamprof make-dcp -n "Camera manufacturer and model" -d "My Profile" profile.json profile.dcp
    • For many raw converters the camera manufacturer and model must exactly match what the raw converter is expecting. For example, if using Adobe Lightroom the name must match the name Lightroom uses for its own DCPs.
    • The description tag (set by -d, "My Profile" in this example) will be the one shown in the profile select box in for example Adobe Lightroom.
    • The above example makes a colorimetric profile without a curve. If you want to embed a tone curve and make use of DCamProf's neutral tone reproduction (which really makes more sense for a general purpose profile), you can do it like this:
      • dcamprof make-dcp -n "Camera manufacturer and model" -d "My Profile" -t acr profile.json profile.dcp
  8. Optionally use DCamProf's dcp2json and json2dcp commands to do any manual edits of the DCP file, such as changing profile name and copyright.
  9. The DNG profile is now ready to use in your raw converter.

Basic workflow for making an ICC profile using a test target

Making an ICC profile is almost the same as making a DNG profile. You can actually follow the exact same workflow and run the make-icc command at the end instead of make-dcp; the native profile format can be converted to both types. However, some raw converters using ICC profiles apply some sort of pre-processing, such as a curve, before the ICC profile is applied, which must be taken into account. Capture One is one such raw converter.

The steps that are the same as in the DNG profile case are only briefly described here, so look there if you need further details.

  1. Get a physical target with reference file, and shoot a raw file in desired light.
  2. Export to a tiff for profiling in the raw converter you will be using.
    • How you do it varies depending on raw converter, look in its documentation or search on the 'net.
      • Capture One: select ICC profile "Phase One Effects: No Color Correction" and curve "Linear Response" (or similar), and rotate/crop to show only the target. Then export variants as "16 bit TIFF" with "Embed camera profile". Note: you probably don't want to use "Linear Scientific", as it is a special mode which disables highlight reconstruction. Of course there should be no clipping in the profiling TIFF, but as the profile should be used with the same curve when finished, "Linear Response" is better for all-around use.
    • White balance setting does not matter, unless you intend to let the profile correct a camera preset (not a common use case, see make-icc reference documentation for more information).
    • If you can choose a curve, choose linear.
      • DCamProf must somehow be able to calculate the linear data. If the exported TIFF contains a transfer function tag, like from Capture One, the curve choice doesn't really matter as it can be linearized anyway. I still recommend using a linear curve during profiling, for precision reasons.
      • If the exported TIFF doesn't contain a transfer function tag, you should export with a linear curve, unless you can find out the transfer function some other way. Note that "linear" doesn't always mean exactly linear, so you may need to find out the transfer function anyway.
  3. Use Argyll scanin command to generate a .ti3 file.
    • scanin -v -dipn target.tif ColorChecker.cht cc24_ref.cie
  4. If the raw converter applies a pre-processing curve (like Capture One), re-process the .ti3 file to get linear RGB data.
    • dcamprof make-target -X -f target.tif -p target.ti3 new-target.ti3
    • DCamProf gets the pre-process curve from the transfer function tag in the profiling tiff. You can also extract it separately with the dcamprof tiff-tf command, but as the make-target command can handle the tiff directly it's generally not needed.
    • If the pre-processing curve cannot be obtained from the TIFF, you need to supply it in a JSON file (still the -f parameter); see the provided data example for formatting.
  5. Use DCamProf to make a native profile from the linear .ti3 file.
    • dcamprof make-profile new-target.ti3 profile.json
    • See the DNG profile case for alternative command lines that prioritize smoothness over accuracy (which typically is a good idea).
  6. Convert the native format profile to an ICC profile.
    • If a pre-processing curve was used it must be given:
      • dcamprof make-icc -n "Camera manufacturer and model" -f target.tif profile.json profile.icc
    • ...otherwise the -f parameter is skipped:
      • dcamprof make-icc -n "Camera manufacturer and model" profile.json profile.icc
    • If you want to embed a tone curve and make use of DCamProf's neutral tone reproduction, do it like this (shown both with and without pre-processing):
      • dcamprof make-icc -n "Camera manufacturer and model" -f target.tif -t acr profile.json profile.icc
      • dcamprof make-icc -n "Camera manufacturer and model" -t acr profile.json profile.icc
      • The "-t acr" option will apply Adobe's standard film curve; you can also design your own, or import one via the tiff-tf command.
      • The curve will be applied to the profile's LUT.
      • If you're using Capture One, read the section about Capture One and curves.
  7. The ICC profile is now ready to use in your raw converter.
Note that some ICC raw converters do additional processing beyond just a curve and white balance; they may for example do some sort of pre-matrixing. If the stripped profiling TIFF looks much more saturated than a corresponding TIFF from a DCRaw or DNG profiling workflow, it's likely that some pre-matrixing has been applied. As you profile based on the raw converter's own profiling TIFF this doesn't matter, except that the native format profile generated in the process will not be compatible with any other raw converter.
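As a sketch of the linearization in steps 2 and 4: if the transfer function is known (linear in, encoded out), recovering linear data is just a matter of inverting it. A minimal Python illustration, where a made-up gamma-style curve stands in for the real data DCamProf reads from the TIFF's transfer function tag:

```python
import numpy as np

# Hypothetical pre-processing curve: a sampled transfer function mapping
# linear input (0..1) to the encoded values stored in the profiling TIFF.
# DCamProf reads the real curve from the TIFF's transfer function tag;
# here we fabricate a simple gamma-like encoding just to illustrate.
linear_in = np.linspace(0.0, 1.0, 1024)
encoded_out = linear_in ** (1.0 / 2.2)          # toy "film-like" encoding

def linearize(encoded, curve_in, curve_out):
    """Invert a monotonic transfer function by swapping its axes:
    interpolate encoded -> linear instead of linear -> encoded."""
    return np.interp(encoded, curve_out, curve_in)

# A patch value read from the pre-processed TIFF...
patch_encoded = 0.5
patch_linear = linearize(patch_encoded, linear_in, encoded_out)
print(round(patch_linear, 4))   # 0.5**2.2, about 0.2176
```

The same inversion works for any monotonic curve, which is why the exact shape of the raw converter's pre-processing doesn't matter as long as it's recorded in the tag.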

Capture One and curves

If your raw converter allows choosing the curve separately, as Capture One does, that actually goes against DCamProf's color rendition principle. With DCamProf the profile itself applies the curve, and if you want different curves you simply render different profiles, one for each curve. The reason is that tone curves can fundamentally affect color rendition, as described in the "tone curves and camera profiles" section. That Capture One doesn't alter the ICC profile when the curve is switched is, I'd say, broken color science; however, due to the mild shape of their curves and the fact that the curve is applied before the ICC profile, the color appearance is not affected that much. They actually have a mixed approach: some of the curve is applied separately before the ICC and some is applied by the profile's LUT. This mixed approach makes the color appearance more stable between the different curves than it otherwise would have been, but it also makes the result with "Linear Response" far from actually linear.

In any case, you can assume that their bundled profiles have been optimized for the default curve and the others will provide somewhat sub-optimal color, although the difference is not huge.

If you'd like your DCamProf profile to work the same way, do as follows:

  1. Make a profile with "Linear Response", that is, everything in the standard workflow up to and including the make-profile command.
  2. Export two profiling TIFFs: one with "Linear Response" (let's call it linear.tif) and the other with the desired curve, usually "Auto" or "Film Standard" (let's call it curve.tif).
  3. Extract the actual shape of the curve from the TIFF files, using the tiff-tf command:
    • dcamprof tiff-tf -f linear.tif curve.tif tone-curve.json
    • It may warn that the curves don't end at 1.0; don't worry about that. Capture One leaves some headroom, probably to keep margin for highlight reconstruction or white balance adjustments. The tiff-tf command will automatically compensate.
  4. Make a preliminary ICC profile adapted for the curve's pre-processing, and also apply the tone curve.
    • dcamprof make-icc -n "Camera manufacturer and model" -f curve.tif -t tone-curve.json profile.json preliminary-profile.icc
    • As the pre-processing curve and the tone curve are the same they cancel each other out, so the profile is linear in terms of contrast, but the neutral tone reproduction operator will have done its work with the appropriate curve, so the profile becomes optimized for the desired curve.
  5. Load the preliminary profile and compare with the bundled one. The DCamProf profile probably has lower contrast, because the native profile applies some extra contrast in the LUT. Use the curve tool inside Capture One to match the contrast. An exact match is not necessary; whatever you find most pleasing is the best for you.
  6. Transfer the coordinates from the curve tool to a JSON file DCamProf can read, let's call it modifier-curve.json. Find an example below.
  7. Re-run make-icc with the extra curve cascaded.
    • dcamprof make-icc -n "Camera manufacturer and model" -f curve.tif -t tone-curve.json -t modifier-curve.json profile.json profile.icc
  8. The profile is now ready to use. The look will be optimized for the chosen curve, but it will look okay too for "Linear Response", just like Capture One's bundled profiles. Of course, there will be the residual curve left (the modifier-curve) when you use the profile in "Linear Response" mode, the same way it is for native profiles.
The modifier curve is suitably designed with the curve tool inside Capture One. Load your first profile generated in the workflow above, and then edit a curve to your liking. Then you copy the handle values into a text file with the JSON tone curve format, like this:
{
    "CurveType": "Spline",
    "CurveHandles": [
      [ 0,0 ],
      [ 14,8 ],
      [ 27,20 ],
      [ 115, 118 ],
      [ 229, 233 ],
      [ 255, 255 ]
    ],
    "CurveMax": 255,
    "CurveGamma": 1.8
}
Capture One uses 0-255 as their range in the curve, and the curve works with gamma 1.8.
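To see how such a curve file could be interpreted, here is a minimal Python sketch that reads the JSON above and applies it to a linear value, doing the gamma 1.8 encode/evaluate/decode round trip. DCamProf fits a smooth spline through the handles; plain linear interpolation is used here just for illustration:

```python
import json
import numpy as np

# The modifier-curve JSON from the text: handles in Capture One's 0-255
# range, with the curve operating in gamma 1.8 space.
curve = json.loads("""
{ "CurveType": "Spline",
  "CurveHandles": [[0,0],[14,8],[27,20],[115,118],[229,233],[255,255]],
  "CurveMax": 255,
  "CurveGamma": 1.8 }
""")

def apply_curve(linear, c):
    """Apply the curve to a linear 0..1 value. DCamProf uses a spline
    through the handles; np.interp (piecewise linear) stands in here."""
    xs, ys = zip(*c["CurveHandles"])
    xs = np.array(xs, dtype=float) / c["CurveMax"]
    ys = np.array(ys, dtype=float) / c["CurveMax"]
    g = c["CurveGamma"]
    encoded = linear ** (1.0 / g)          # linear -> gamma 1.8 space
    out = np.interp(encoded, xs, ys)       # evaluate curve on handles
    return out ** g                        # back to linear

print(round(float(apply_curve(0.18, curve)), 4))
```

Note that the gamma only defines the space the handles live in; the end points map 0 to 0 and 1 to 1, so the curve stays a well-formed tone curve in linear terms.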

Basic workflow for making a profile from camera SSFs

If you have the camera's spectral sensitivity functions you can skip the target shooting process.
  1. Format your camera's SSF data into a JSON file that DCamProf can read.
    • Use the distributed examples as a guide for how the JSON file should be formatted.
    • If you don't have the equipment or knowledge to measure your camera's SSFs, you can look in the SSF links section and see if you're lucky and can find your camera in one of the sources.
  2. Generate a "virtual" target with your desired spectral data.
    • You can use DCamProf's built-in spectral database or use its ability to generate spectra, or import from some other spectral source (see provided import_spectra.txt for formatting, it's the Argyll .ti3 format, but you can use a subset).
    • Here's a basic example when we just generate a color checker from the built-in spectral database target:
      • dcamprof make-target -c ssf.json -p cc24 target.ti3
    • The resulting target.ti3 contains reflectance spectra for all patches, plus XYZ reference values and RGB values for the camera rendered using the SSFs found in ssf.json.
  3. Make the profile
    • dcamprof make-profile -c ssf.json target.ti3 profile.json
    • We don't really need to provide the camera's SSF again (ssf.json) as the target file already contains rendered RGB and XYZ values, but it's a good habit since then the RGB (and XYZ) values will be regenerated from spectra each time which is convenient and reduces the risk of making mistakes.
    • If the SSFs are of high quality you will typically get a considerably better match with this than if you have shot a test target. This means that there is often less need of weighting and LUT relaxation when rendering the profile.
  4. Convert the native profile to a DNG profile or an ICC profile.
    • See the description for the basic workflow using a test target for more details.
In this example workflow we keep the illuminants at the default, D50. As we let the spectral information follow through the workflow, we can change the calibration illuminant late in the process, when making the profile:
  dcamprof make-profile -c ssf.json -i StdA target.ti3 profile.json
Note that as SSFs are generally measured from real raw data without pre-processing, profiles generated from SSFs won't work for ICC raw converters that do pre-processing before applying the ICC.
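The rendering in step 2 boils down to a per-channel integral of illuminant, reflectance and sensitivity over wavelength. A toy Python sketch with fabricated Gaussian spectra (not real camera data):

```python
import numpy as np

# Sketch of what make-target does when rendering camera RGB from spectra:
# for each channel, integrate illuminant x reflectance x sensitivity.
# All spectra below are made-up Gaussians, standing in for real SSF data.
wl = np.arange(380, 781, 5, dtype=float)            # wavelength grid, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

ssf = {"r": gauss(600, 40), "g": gauss(540, 40), "b": gauss(460, 30)}
illuminant = np.ones_like(wl)                       # flat, equal-energy light
reflectance = gauss(550, 60)                        # a greenish patch

raw = {ch: float(np.sum(s * illuminant * reflectance))
       for ch, s in ssf.items()}
scale = max(raw.values())
rgb = {ch: v / scale for ch, v in raw.items()}      # normalize so max = 1
print({ch: round(v, 3) for ch, v in rgb.items()})   # green channel dominates
```

The XYZ reference values in the .ti3 are produced the same way, just with the CIE color matching functions in place of the camera SSFs.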

Choosing test target

Due to natural limitations of camera profiling precision it's quite hard to improve on the classic 24 patch Macbeth color checker when it comes to making profiles for all-around use. It's more important to have a good reference measurement of the test target than to have many patches. If you don't believe me please feel free to make your own experiments with DCamProf; by using camera SSFs you can simulate profiling with both few and many patches and compare target matching between them.

DCamProf allows you to use any target you like, though; you can even print your own and use a spectrometer and Argyll to get reference values. Although darker repeats of colors do not hurt, there's not much gain from them as the LUT is 2.5D, so an IT-8 style target layout (where many patches are just repeats in darker shades) does not make that much sense.

Dark patches are problematic as they are more sensitive to glare and noise (both in camera and spectrometer measurement), so an ideal target has as light colors as possible for a given chromaticity.

The profiling process requires at least one white (or neutral gray) patch. It's no problem if it's slightly off-white, though. The target should preferably also contain one black patch, which should be the darkest patch in the target. This black patch is used to monitor glare. If feasible, the "black" should be made as light as possible while still darker than the darkest colored patch. If the black patch is significantly darker than the darkest colored patch, we may detect a glare issue that in actuality only affects the black patch.

The white (and black) patches should preferably have a very flat spectral reflectance, as it makes glare monitoring more accurate.

Most targets have a gray scale step wedge which can be used for linearization. Digital cameras have linear sensors, but the linearity can be hurt by glare (and flare). Normally it's much better to reduce glare to a minimum during shooting than trying to linearize afterwards, as glare distortion is a more complex process than just affecting linearity.

(Semi-)glossy targets, such as X-Rite's ColorChecker SG, are extremely glare-prone and therefore hard to use. They cannot be shot outdoors, but must be shot indoors in a pitch-dark room with controlled light. Due to their difficulty during measurement, the end result is often a worse profile than one made from a matte target. I recommend first getting good results with a matte target before starting to experiment with a semi-glossy one. Those targets often receive bad reviews simply because the users have not minimized glare when shooting them.

If you have the camera's SSFs you can use the built-in spectral databases (or import your own) rather than shooting real test targets. In that case you will probably want to select spectral data that matches what you are going to shoot, for example reflectance spectra from nature if you are a landscape photographer.

The classic 24 patch Macbeth color checker, originally devised in the 1970s. Despite its age it still holds up well for designing profiles, thanks to relatively saturated colors with a relatively large spread. As seen in the u'v' chromaticity diagram (with locus, AdobeRGB and Pointer's gamut) there's still space to fill, though, and some patches occupy almost the same chromaticity coordinate, which is not that useful when making 2.5D LUTs.

Making your own target

Using the make-testchart command you can make your own target. Here's a workflow, showing making a target for an A4 sheet, and using a Colormunki Photo for scanning the patches:
  1. Generate test patches in Argyll's .ti1 format:
    • dcamprof make-testchart -l 15 -d 14.5,12.3 -O -p 210 target.ti1
    • In the above example we specify the chart layout with -l, -d and -O parameters, so that white patches can be placed optimally for flatfield correction later on. The layout must match what Argyll's printtarg is going to generate.
  2. Generate a .tif for printing, a .cht file for chart recognition and .ti2 file for scanning using Argyll's printtarg command:
    • printtarg -v -S -iCM -h -r -T300 -p A4 target
    • It's important that we use the -r flag, otherwise Argyll will randomize the patch positions which can break flatfield correction.
    • In the above command we choose an A4 sheet and Colormunki half-size patches; it will make 210 patches on one sheet, as we have generated. Even if you don't have a Colormunki instrument, the generated patch size is nice for a photographic test chart.
  3. Print the TIFF file on an OBA-free matte or semi-gloss paper, with color management disabled, that is, the same way you would print a test chart for printer profiling.
  4. Measure the reflectance spectra of the patches (this will make a .ti3 file):
    • chartread -v -H -T0.4 target
  5. Convert the .ti3 to a reference .cie file to be used with Argyll's scanin later.
    • spec2cie -v -i D50 target.ti3 target.cie
  6. Now you have the printed test chart, a target.cht chart recognition file and a target.cie reference spectra file which can be used in the profiling workflows.
The quality of your own target will depend on the spectral qualities of your printer. A modern inkjet printer with several inks will have better spectral qualities than many other print technologies, but will still not be as good as the special print techniques used for commercial test targets. If you are curious about target performance you can use the SSF functionality of DCamProf to make simulations. Despite spectral limitations it seems that such targets perform at least as well as a CC24, or sometimes even better, when it comes to making profiles that match real colors.

Semi-gloss targets will get very high saturation patches, but those are difficult for the camera to match and it's hard to shoot such targets without glare issues. They may also be harder to measure accurately with the spectrometer if it has limited range (some consumer spectrometers start at 420nm) or issues with glare. Making a matte target may be better in practice, although you can't get deep violet colors that way.

Test target reference files

The foundation of profiling using test targets is that the profiling software knows what CIE XYZ coordinate each color patch corresponds to, or even better which reflectance spectrum each color patch has so the software can calculate the XYZ values internally.

Higher end test targets may be individually measured, so you get a CGATS text file with reference values which Argyll's scanin tool can use directly. If you get a standard 24 patch Macbeth color checker you probably don't have an individual reference file, and then a generic file like the one provided with DCamProf (cc24_ref.cie) will have to do. Having the reflectance spectra is strongly preferred over pre-calculated XYZ values, so do get that if you can. The problem with pre-calculated values and no spectra is that when changing illuminants the software cannot re-calculate XYZ from scratch using spectral data, but must rely on a chromatic adaptation transform, which is less exact. There's also a higher risk for the user to mess up by forgetting to inform DCamProf of which illuminant the XYZ values relate to. If there's spectral data, the reference values are always re-generated from scratch to fit the currently used illuminant, which is both exact and convenient.
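The advantage of spectral data can be illustrated with a toy calculation: recompute XYZ from spectra under a new illuminant, and compare with a crude diagonal chromatic adaptation applied to the old XYZ values. Everything below uses Gaussian stand-ins, not real CIE data:

```python
import numpy as np

# Why spectra beat pre-computed XYZ: with spectra, XYZ is recomputed
# exactly for a new illuminant; with XYZ only, a chromatic adaptation
# transform (CAT) must approximate the change. Toy Gaussians stand in
# for the CIE color matching functions and the illuminants here.
wl = np.arange(380, 781, 5, dtype=float)

def g(c, w):
    return np.exp(-0.5 * ((wl - c) / w) ** 2)

cmf = np.stack([g(600, 45) + 0.35 * g(440, 25),     # toy x-bar
                g(555, 45),                         # toy y-bar
                1.8 * g(450, 25)])                  # toy z-bar
ill_d = np.ones_like(wl)                            # flat "daylight"
ill_a = np.linspace(0.2, 1.8, wl.size)              # reddish "tungsten"
patch = g(520, 50)                                  # greenish reflectance

def xyz(ill, refl):
    v = cmf @ (ill * refl)
    return v / (cmf[1] @ ill)                       # normalize white Y to 1

exact = xyz(ill_a, patch)                           # recomputed from spectra
white_d = xyz(ill_d, np.ones_like(wl))
white_a = xyz(ill_a, np.ones_like(wl))
approx = xyz(ill_d, patch) * (white_a / white_d)    # crude diagonal CAT
print(np.round(exact, 3), np.round(approx, 3))      # close, but not equal
```

Real CATs (Bradford etc.) do better than this naive diagonal scaling, but the principle stands: the adapted values are an approximation, while spectral recomputation is exact.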

If you have a spectrometer (usually designed for printer profiling) you can measure your target and generate your own reference file with spectra. Using Argyll, you do it like this:

  1. Create or find an Argyll .ti2 text file which contains the test target layout needed for the spectrometer scan. Note that Argyll is distributed with .ti2 files for many of the popular commercial test targets, the file is called ColorChecker.ti2 for the Macbeth 24.
  2. Scan the target with Argyll's chartread (exclude the .ti2 suffix, for most Argyll commands the suffix should be excluded):
    • chartread -v -H target
    • Note that some targets may have too small patches to be read successfully with your instrument. For example an X-Rite ColorChecker Passport cannot be read by an X-Rite Colormunki spectrometer.
  3. Convert the resulting .ti3 file (which contains complete spectra for each patch) to a new .ti3 file with reference CIE XYZ values with your desired illuminant.
    • spec2cie -v -i D65 target.ti3 reference.cie
    • In the above example "D65" was chosen, but you can also choose "A" or "D50", or any other supported by the spec2cie tool.
    • As the spectral data will be kept in the file it does not really matter what illuminant (or observer) you use, you can change that again when generating the profile with DCamProf. The described method is however also compatible with a standard Argyll workflow.
  4. The resulting reference.cie can now be used together with Argyll's scanin tool.
  5. If you don't have a usable .cht file for the chart layout (more detailed layout information than is needed for the spectrometer scan), you need to generate one. If you have generated your own chart using Argyll's printtarg you can add the -s (or -S) parameter to get the .cht file. If you haven't used printtarg it's unfortunately a bit of a headache to make your own .cht. You can use the scanin tool as a help for that (using the -g parameter), but it's quite messy with lots of manual edits. At the time of writing I have not tried doing it myself; as long as you're using a reasonably popular target there will be a .cht file distributed with Argyll, and if you make your own using Argyll you can make the .cht when calling printtarg.
It's probably better to measure your own target and get full spectral information than to use a typical pre-generated reference file with only XYZ values for some pre-defined illuminant. Whether it really is better depends on the precision of your instrument, the sample-to-sample variation of test targets and the quality of the provided reference file. It's not possible to know in advance what will be best; you can try both and see what you like the most. If there's some serious problem with the reference file it's usually noticed when making the profile, as the LUT must make extreme stretches etc.

In some cases you may get the reference spectra in a format that Argyll can't read directly. Argyll is delivered with a few conversion tools to handle other common text formats: cb2ti3, kodak2ti3 and txt2ti3. You may also be helped by making a dummy conversion using DCamProf, like this: dcamprof make-target -p input.txt -a "name" output.ti3, and sometimes you may have to do some manual edits in a text editor too to get the data into a format Argyll accepts.

Shooting test targets

To consider:
  • Even light from the desired illuminant
  • Avoid reflections and glare on the target, difficult with glossy and semi-glossy targets.
  • Avoid colored reflections from nearby surfaces
  • Minimize vignetting
  • Minimize perspective distortion
  • Avoid over- or underexposure
If you are simulating daylight using an artificial light source it's better to use a high temperature halogen lamp, such as a Solux lamp on overdrive, than a fluorescent light. Using a lower temperature halogen lamp and putting an 80A filter on the camera lens is another option. Or just shoot outside in real daylight, but that only works with matte targets.

Avoid reflections from nearby colored surfaces that may distort the color of the light source. If shooting outdoors, an open space with someone holding up the test target away from the body is a good alternative.

I recommend defocusing very slightly so you won't capture the structure of the target patches' surface and instead get fields of pure color. If your camera lacks an anti-alias filter this also makes sure you get no color aliasing issues. Shoot at a typically quite small aperture, say f/8 on 135 full-frame.

Argyll's scanin is sensitive to perspective distortion, so try to shoot as straight on as possible, and correct any residual rotation/perspective in the raw conversion.

If you know what you are doing you can push the exposure a little extra to get optimal "expose to the right" (ETTR) and thus as low noise as possible. But be careful: clipped colors will be a disaster in terms of results. I usually exposure-bracket a few shots and check the levels in the linear raw conversion to verify that there is no clipping.

Uneven lighting is a common problem in camera profiling. The typical recommendation is to make sure you have even lighting (at least two lights if not shooting outdoors) and shoot the target small in the center (to minimize vignetting). However, if you employ DCamProf's flatfield correction (the testchart-ff command) you can relax the even-lighting requirement quite a bit. Flatfield correction evens out the light with high accuracy, so you only need to make sure all parts of the target have sufficient light to avoid noisy patches. Some halogen lights may have an outer rim of light with a different color temperature. This is not well corrected by flatfield correction, so make sure the target is lit with the same light spectrum all over.
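The core idea of flatfield correction can be sketched in a few lines: white patches spread over the chart sample the illumination falloff, a smooth surface is fitted to them, and every patch is divided by that surface. This toy Python example uses a simple plane fit and made-up patch data; DCamProf's actual testchart-ff correction is more sophisticated:

```python
import numpy as np

# Hypothetical chart: patch centers on a grid, with the true light
# falling off 30% from left to right across the chart.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 5))
pos = np.column_stack([xs.ravel(), ys.ravel()])
falloff = 1.0 - 0.3 * pos[:, 0]

true_refl = rng.uniform(0.1, 0.9, len(pos))         # "real" patch values
white_idx = [0, 5, 24, 29]                          # corner patches are white
true_refl[white_idx] = 0.9
measured = true_refl * falloff                      # what the camera sees

# Fit gain(x, y) = a*x + b*y + c through the white patches only,
# then divide it out of every patch.
A = np.column_stack([pos[white_idx], np.ones(len(white_idx))])
coef, *_ = np.linalg.lstsq(A, measured[white_idx] / 0.9, rcond=None)
gain = np.column_stack([pos, np.ones(len(pos))]) @ coef
corrected = measured / gain

print(np.abs(corrected - true_refl).max())          # ~0 (float precision)
```

This is also why the chart layout matters in the make-testchart workflow above: the white patches must be placed so they sample the falloff well, and the patch positions must not be randomized by printtarg.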

Using fewer lights (maybe only one) and compensating with flatfield correction can be a smart strategy when shooting glossy targets, as it's easier to keep the rest of the room dark. Room darkness is very important to reduce glare, which is a real issue with (semi-)glossy targets.

Glossy and semi-glossy targets allow for higher saturation colors on the patches, but are also more difficult to shoot as they produce glare. Glare is minimized by being in a pitch-dark room and having the light(s) outside the "family of angles". If the target were replaced with a mirror you should only barely see the dark room and camera in it, certainly not any lights. A long lens narrows down the family of angles, and a projecting light source (like a halogen spotlight) and dark/black cloth around the target ensure that as little stray light as possible bounces around in the room.

This may look like a perfect target shot: even diffuse outdoor light, no visible reflections. However, as the target is semi-glossy, the surrounding diffuse light coming from all directions adds to the direct reflection component (glare), so the contrast of the target is lowered and the photograph will not match reflectance spectra measurements. Semi-glossy targets must be shot in indoor lab setups with dark surroundings and projecting light(s) outside the family of angles.
(Semi-)glossy targets are virtually impossible to shoot accurately outdoors as you cannot shoot from a dark position; that is, if you put a mirror where the target is you will likely clearly see the camera and yourself, which means you will have glare. If you still shoot in that light the result will be affected by glare and have lower contrast; the dynamic range easily drops from 7 stops (typical range in a semi-glossy target) to about half of that. This won't be visible until you make a side-by-side comparison or note poor profiling results (typically an over-saturated profile with some bad non-linearities).

Veiling glare is a lens limitation on how large a dynamic range it can capture. It's typically between 0.3% and 0.5% for high quality lenses; the fewer lens elements and the better the coating, the lower the veiling glare. I mention it as you may have heard of it, but compared to other forms of glare it is negligible, so you don't need to worry about it. Do avoid lens flare though: the lens must be in shadow, use a lens hood and make sure there are no light sources pointing towards the camera. Also make sure the viewfinder is closed tight so no light comes in that way.
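The dynamic range limit implied by those percentages is easy to compute: a uniform stray-light floor of g (as a fraction of full scale) caps the usable range at log2(1/g) stops:

```python
import math

# Veiling glare puts a floor under the shadows: a stray-light level of
# g (fraction of full scale) limits usable dynamic range to log2(1/g).
for glare in (0.003, 0.005):        # the 0.3%-0.5% range cited for good lenses
    stops = math.log2(1.0 / glare)
    print(f"{glare:.1%} veiling glare -> about {stops:.1f} stops")
# 0.3% -> about 8.4 stops, 0.5% -> about 7.6 stops
```

So even at its worst, veiling glare still leaves more range than the ~7 stops of a typical semi-glossy target, which is why the other glare sources dominate in practice.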

If you shoot a glossy target, be prepared for issues with dark patches, as those are affected most by glare. Removing them from the measurement (using an exclude list to the make-profile command, for example) can be a better way to solve the problem than trying to correct the measurement error in other ways. Due to the many difficulties with semi-glossy targets I recommend simultaneously making a profile from a matte target so you have a profile to sanity-check against.

In theory a gray scale step wedge in the target could be used to correct glare. With DCamProf you can enable linearization in the testchart-ff to compensate glare-induced non-linearity. However, glare distorts more than just linearity meaning that linearization will only help to some extent, so don't rely on it. You can indeed improve results this way, but it often ends up worse than just excluding the darkest patches (those that are most affected by glare) from profiling. So the recommendation with DCamProf is to reduce glare to a minimum, and keep an extra eye on the performance of dark patches, and exclude them if they seem problematic.

White balance and camera profiles

The white balance setting in your raw converter and your camera profile interact, so before making profiles it's good to have some insight into how.

Both DCPs and ICCs make corrections on white-balanced data, that is, you feed the profile a white-balanced image. For DCPs it might seem that you don't, as the "ColorMatrix" works on unbalanced image data (more on that later), but the actual color rendering is decided by the "ForwardMatrix" and the LUT, which work on the white-balanced image.

Naturally this means that in order for the profile to make the "correct" adjustments it must be used with the exact same white balance as was used during profile design. Which white balance is used during design? By default DCamProf will re-balance the target such that the whitest patch in the target is considered 100% neutral (real targets usually differ 1-2 DE from perfect), which means that picking white balance on the whitest patch gives the best balance. You can disable this (-B to make-profile), and then DCamProf calculates the optimal white balance automatically, which is when camera white matches the calibration illuminant reflected by a 100% perfect white patch; that is usually slightly different from the whitest patch in the target. In either case it's a picked white balance, not the "As Shot" camera preset balance (there is an ICC special case, though, where you can design a profile for a camera white balance preset).
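What a "picked" white balance means in raw terms can be sketched as per-channel multipliers that make the white patch neutral. The raw values below are made up purely for illustration:

```python
# Sketch of "picking white balance on the whitest patch" in raw terms:
# per-channel multipliers that map the patch's raw RGB to neutral.
raw_white = {"r": 0.42, "g": 0.80, "b": 0.55}   # whitest patch, linear raw (made up)

# Normalize against green, the usual convention:
mult = {ch: raw_white["g"] / v for ch, v in raw_white.items()}
print({ch: round(m, 3) for ch, m in mult.items()})

balanced = {ch: raw_white[ch] * mult[ch] for ch in raw_white}
# all three channels are now equal -> the patch renders as neutral
```

Shifting the white balance away from this pick scales every raw color by a different set of multipliers, which is exactly the "cast on all colors" that moves colors to other start positions in the profile's LUT.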

A well-behaved profile, that is one with only small and wide-area stretches in the LUT, will be robust against slightly different white balance, so it won't matter if you set it a little bit off to get a warmer or cooler look, for example. A profile with strong and very localized stretches (not a good profile!) may make sudden strange color changes when you shift white balance. This is because changing white balance applies a cast on all colors, which means the colors move to other start positions in the LUT and get corrections that were intended for other neighboring colors; if there are strong localized corrections the result can become quite off.

Wouldn't it be better if the ideal profile white balance was applied first, then the profile, and then your own user-selected white-balance? Yes, if the illuminant would always be the same as the one used when shooting the target, but if you shoot outdoors that's not the case. And in any case that's not how raw converters work so you can't have it that way even if you'd like it.

The take-away message is that for ideal profile results you should always set the white balance to represent white as well as possible, and if you want a creative cast, for example a bluer, colder look, you should ideally apply that look with other color tools rather than the white balance setting. However, many (most?) raw converters don't make it easy to apply a cool/warm look in any way other than the white balance setting, so that's what we usually end up doing anyway. If you've made a well-behaved profile (which you should) that should not be any real problem. Yes, the profile corrections will not be as exact as at the designed white balance, but if you're creating a look anyway that won't matter.

The most robust profiles concerning white balance changes are pure matrix-only profiles (no LUT), as they are 100% linear.

DCP-specific white balance properties

DCPs are a bit special when it comes to white balance; they have a more immediate connection to it than ICC profiles. The embedded "ColorMatrix" is not used for any color corrections, but to figure out the connection between a camera raw RGB balance (internal white balance multipliers) and illuminant temperature and tint. When you use the camera's "As Shot" white balance, the raw converter will display the corresponding temperature and tint. That is, if you change profile to one with a different ColorMatrix, the "As Shot" temperature/tint will change even if the multipliers are exactly the same. Ideally the temperature/tint should of course show the "truth", the actual correlated color temperature of the illuminant for that white balance, but it's an approximation that may differ quite a lot between profiles; for a temperature around 5000K a variation of several hundred Kelvin is normal. Naturally a profile is best at estimating temperatures close to the one used when the profile was made.

If you, instead of using the "As Shot" white balance, select a different one with temperature and tint, the ColorMatrix is used to calculate the corresponding white balance multipliers, at least in Adobe Lightroom (other raw converters may use a hard-coded white-balance model rather than the profile-provided ColorMatrix). This means that if you change profile to one with a different ColorMatrix, the temp/tint will in this case stay the same but the actual multipliers will change, and thus the actual visual appearance; that is, you get a shift in white balance.

A DNG profile contains the calibration illuminant as an EXIF lightsource tag, that is, there is a limited set of pre-defined light sources to choose from. For a single-illuminant DNG profile this tag is not actually used, though, so it can be set to anything. If you provide DCamProf with a custom illuminant spectrum during profiling, the resulting DCP will contain "Other" as the lightsource tag, that is, no information about what temperature the profile was designed for, but as said that's no problem.

However, if you don't provide the spectrum and instead specify a completely wrong illuminant, say you shoot the target under Tungsten but tell DCamProf it's D50, the calculated color matrix will be made against incorrect XYZ reference values and the resulting profile will be bad at estimating light temperatures. For single-illuminant profiles that still won't affect the color correction, though.

Dual-illuminant profiles are an exception. In that case you have two matrices, usually one for StdA and one for D65. Both are used to calculate the temperature and tint, and the derived temperature is then used to mix the two ForwardMatrices; that is, if it's exactly between the 6500K of D65 and the 2850K of StdA, 50% of each is used. This means that the temperature derivation has some effect on the forward matrix and thus some effect on the color correction. So if you intend to make a dual-illuminant profile it's required to provide a proper EXIF lightsource for each, and for the profile to make accurate temperature estimations the actual lights used during profiling should match the EXIF lightsource temperatures as well as possible. It doesn't have to be exact though, as any reasonable camera should have similar matrices over at least some temperature range.
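The mixing can be sketched as follows. The stand-in matrices are arbitrary, and the blend is linear in Kelvin to match the 50%-at-midpoint description above; note that Adobe's actual implementation reportedly interpolates in reciprocal temperature (mired), so treat this as illustrative only:

```python
import numpy as np

# Sketch of dual-illuminant ForwardMatrix mixing between the StdA and
# D65 calibration points. The matrices are arbitrary stand-ins, not
# real camera data; a real ForwardMatrix maps balanced raw RGB to XYZ.
T_A, T_D65 = 2850.0, 6500.0
fm_a = np.eye(3)                      # stand-in ForwardMatrix for StdA
fm_d65 = 2.0 * np.eye(3)              # stand-in ForwardMatrix for D65

def mixed_forward_matrix(temp):
    """Blend the two matrices by the estimated scene temperature,
    clamping outside the calibrated range."""
    t = np.clip((temp - T_A) / (T_D65 - T_A), 0.0, 1.0)
    return (1.0 - t) * fm_a + t * fm_d65

mid = mixed_forward_matrix((T_A + T_D65) / 2)   # 4675K: 50% of each
print(mid[0, 0])                                # 1.5
```

This is why a wrong temperature estimate (from a bad ColorMatrix or lightsource tag) bleeds into the color correction for dual-illuminant profiles, while single-illuminant profiles are immune.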

Note that a DCP profile cannot be made to "correct" white balance, that is change your "As Shot" white balance multipliers to something else. In some reproduction setups you may want to do that, and for this you need to use an ICC profile instead.

If you design your own profile with DCamProf and use it in, for example, Adobe Lightroom, it is as discussed highly likely that you will get a white balance shift compared to the bundled profile. This doesn't mean that there is something wrong with your profile, but simply that your calibration setup and matrix optimizations did not exactly match Adobe's. The shift can be problematic though if you want to apply your profile to images that previously used the bundled profile together with a custom white balance setting. Avoiding it is fortunately simple: just copy the color matrix from the bundled profile, which can be done directly in the make-dcp command. As the actual color correction sits in the forward matrix and LUTs, this change of color matrix will not affect your color rendition (except for the slight effect caused by the dual-illuminant mixing described earlier); you just get rid of the white balance shift.

ICC-specific white balance properties

Unlike DCPs, ICC profiles have no connection to the raw converter's white balance setting. When DCamProf calculates its native profiles it makes both a ColorMatrix and a ForwardMatrix, but if you convert to an ICC profile only the ForwardMatrix is used, as ICC has no element to identify illuminant color temperature the way DCPs have.

Raw converters that use ICC profiles have some method other than the profile to figure out a suitable temperature/tint to show in the user interface. It may be a hard-coded ColorMatrix, hard-coded preset values or some other model.

Normally ICC profiles are designed to not affect the user white balance, so when you change profile to an entirely different one the white balance will still not change (except for tiny changes related to correction of neutrals). However ICC profiles can change the white balance if designed for that. One application could be to make an ICC profile that changes the camera's "As Shot" white balance to match a specific light source used in a reproduction setup. DCamProf can make such a profile if you instruct it to, as described in the make-icc reference documentation. This feature is unique to ICC, you can't make it with DCP as the DCP design prohibits white balance alterations by the profile.

Color for extreme white balances

Raw converters are designed such that "white" should be white, but for extreme color temperatures (candle light, Nordic winter dusk etc) this does not match how the eye experiences the scene. In such cases you will have to adjust the look creatively to taste, as the available color models do not support profiling those situations in any accurate way.

Tone curves and camera profiles

An example image rendered with a linear tone curve using an accurate colorimetric profile. The exposure has been increased to make the image easier to compare with the others that have tone curves (which typically have a quite strong brightening component).

This should be used as reference when evaluating accuracy of colors. However, it does look flatter than the eye experienced in the brighter real scene, which is a normal appearance phenomenon. This means that we need to apply some sort of curve even when we want a neutral realistic look.

Same profile, but now with the DNG default curve, which is a modified RGB curve. Note the garish colors. Light colors are also desaturated; this is not so easily seen in this picture, although the white shirt has lost much of its original slight blue cast. Desaturation issues can be seen more clearly in light blue skies, for example.

DNG uses a hue-stabilized RGB curve (constant HSV hue) so it's better at retaining hue than a standard RGB curve (which most ICC-based raw converters use).

Same profile, but with the tone curve applied on luminance only, while hue and saturation are kept constant. As the luminance channel, the J of CIECAM02 Jab is used here (similar to L of the better-known Lab).

Intuitively one may expect this to be truest to the original, but as seen it looks desaturated. This is because in human vision color appearance is tightly connected to scene contrast, so if you increase contrast, saturation must also be increased to keep the original appearance.

Same profile, here with DCamProf's built-in neutral tone reproduction operator. Color appearance is now very close to the original linear curve, but we have increased the global contrast so the photo displayed on a screen appears truer to the real scene.

Adobe Camera Raw's profile with the intended tone curve (same as in the other images). It looks pretty natural, but there are some issues with saturated colors: the reds are too saturated, while the purple and bright yellow-green have too little saturation. Additionally, skin-tones are slightly over-saturated and yellowish, and again the slight blue tint of the white shirt has been lost.

While some errors can be side effects of the curve, they're mainly deliberate subjective adjustments by Adobe's profile designers, made with the purpose of achieving a designed "look", like films had in analog photography. DCamProf's tone-curve operator intends to stay true to the color appearance of the original scene and leave subjective adjustments to the photographer.

Adobe Camera Raw's profile with linear tone curve. Here we can clearly see that it's not a "scene-referred" profile. The profile has been adapted for the S-shaped DNG tone curve and is therefore desaturated.

Note that comparing all these pictures directly on this web page may be hard, as color shifts slightly with viewing angle. To compare critically, download the files and look straight at them while flipping through them in an image viewer. The images were made during development so the result from the current version may differ a little, but these images should give you an idea of what the typical differences are and how large they are.

The tone curve is perhaps the most under-estimated factor in camera color reproduction. Here's why I think it deserves much attention:
  • As soon as you apply a tone curve you significantly alter the look of the colors.
  • The well-known raw converters' tone curves don't work well with linear "accurate" profiles, but will produce garish colors.
  • Ignoring the effect of tone curves is what makes many self-made profiles fail for all-around use.
  • There is no established standard tone curve that produces neutral results as the standard color appearance models like CIECAM02 don't embrace the tone curve concept.
    • Applying the curve on the L channel in Lab or V channel in HSV or L in HSL won't cut it. A custom tone reproduction operator is required.

Per default a profile is designed to produce accurate colors with a linear tone curve. This makes sense as cameras match colors better if there is a linear conversion, especially when making a matrix-only profile which by nature is linear. In other words, a linear profile strives to make the camera into a colorimetric measurement device.

A linear tone curve is the right thing for reproduction work, for example when we shoot a painted artwork and print on corresponding media. In this case the input "scene" and output media have the same dynamic range and will be displayed in similar conditions. However in general-purpose photography the actual scene has typically considerably higher dynamic range than the output media, that is the distance between the darkest shadow and the brightest color is higher than we can reproduce on screen or paper.

The solution to this problem since the early days of photography is to apply an S-shaped tone curve. In film the curve compresses highlights and shadows about equally (a sigmoid curve), while in digital photography there's been a shift towards compressing highlights more than shadows, which also brightens the image about a stop or so as a side effect. This suits digital cameras better as it retains more highlight detail. The principle is the same though, that is increased slope at the midtones with compressed shadows and highlights.
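A minimal illustration of such a digital-style S-curve: increased midtone slope, compressed shadows and highlights, and a brightening of roughly a stop. The particular function and constants here are just an example for illustration, not DCamProf's curve:

```python
import math

def s_curve(x, contrast=6.0, pivot=0.18):
    """Map linear [0,1] through a sigmoid centered below middle gray,
    which both adds midtone contrast and brightens the image."""
    def f(v):
        return 1.0 / (1.0 + math.exp(-contrast * (v - pivot)))
    # normalize so that 0 maps to 0 and 1 maps to 1
    lo, hi = f(0.0), f(1.0)
    return (f(x) - lo) / (hi - lo)
```

With these example constants, middle gray (0.18) comes out around 0.33, roughly a one-stop brightening, while both endpoints are compressed.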

The need to compress highlights and shadows is obvious (otherwise we would not fit the scene's original range into the lower dynamic range available on screen), but do we really need to increase midtone contrast? The usual explanation is that the output media has lower contrast than the real scene and thus we need to compensate to restore the original contrast. While this can be said to be true for matte paper, a calibrated screen will produce appropriate contrast for midtones. It surely cannot shine as bright as the sun and (probably) not make shadows as dark as in real life, but midtone contrast is accurate. In typical workflows we create the image first for the screen and then make further adaptations for prints (screen-to-print matching is a separate and well-documented subject), so when it comes to camera profiles it makes the most sense to compare with screen output, which is what we will do here.

If we increase the midtone contrast with our tone curve, we will exaggerate it. For a typical curve type this is mainly seen as increased saturation, as increased contrast separates the color channels more, which leads to more saturation. Okay, so this is wrong then? Well, it's not that simple. Let's say we display a shot of a sunny outdoor scene. Although the midtone contrast on the screen can be rendered correctly, the overall luminance is much lower. This makes the Stevens and Hunt color appearance phenomena come into play: the brighter a scene is, the more colorful (=saturated) and contrasty it appears. That is, to make the displayed photo appear closer to the real scene we need to increase both lightness contrast and colorfulness, which an S-shaped tone curve does for us.

So is all good then with the tone curves applied by typical raw converters? No. In fact, if we want a neutral and realistic starting point it's sort of a disaster. Most converters apply a pure RGB curve, which has little to do with perceptual accuracy. Lightroom and many DNG raw converters apply a slightly different RGB curve that reduces hue shift problems (HSV hue is kept constant), but in most situations it still looks almost identical to a pure RGB curve. Which RGB space the curve is applied in varies between converters, which also affects the result. In Lightroom/DNG it's always applied in the huge linear Prophoto color space, while many ICC raw converters apply it in a smaller color space.
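The hue-stabilized variant can be sketched roughly like this, in the spirit of the DNG reference code (this is an illustration, not a verbatim port; gamma_curve is a placeholder for the real tone curve): the curve is applied to the largest and smallest channel, and the middle channel is interpolated so that the HSV hue is unchanged.

```python
def gamma_curve(x):
    return x ** (1 / 2.2)   # placeholder tone curve, monotonically increasing

def hue_stable_tone(rgb, f=gamma_curve):
    idx = sorted(range(3), key=lambda i: rgb[i])      # indices of lo, mid, hi
    lo, mid, hi = (rgb[i] for i in idx)
    flo, fhi = f(lo), f(hi)
    # interpolate the middle channel so (mid-lo)/(hi-lo) is preserved,
    # which keeps the HSV hue constant
    fmid = fhi if hi == lo else flo + (fhi - flo) * (mid - lo) / (hi - lo)
    out = [0.0, 0.0, 0.0]
    out[idx[0]], out[idx[1]], out[idx[2]] = flo, fmid, fhi
    return out
```

Since HSV hue depends only on which channel is largest and on the ratio (mid-lo)/(hi-lo), and a monotonic curve preserves the channel ordering, the hue comes out unchanged.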

Let's start with the RGB tone curve problems. It will increase saturation more than is reasonable to compensate for the Stevens and Hunt effects, so you get a saturated look. You might like that, but it's not realistic. Another problem is that for highly saturated colors one or more channels may reach into the compressed sections in the highlights or shadows, and that leads to a non-linear change of color, that is a hue shift. Typically the desired lightening and desaturation effect (the transition into clipping) masks the hue shift so it's not a huge problem, but it's there.
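The hue shift of a plain per-channel curve is easy to demonstrate numerically. A simple gamma-like brightening stands in here for the converter's real S-curve (an assumption made purely for illustration); applied per channel it pushes a saturated red several degrees toward yellow:

```python
import colorsys

def per_channel_curve(rgb, f=lambda v: v ** (1 / 2.2)):
    # a plain RGB curve: the same function applied to each channel
    return tuple(f(c) for c in rgb)

saturated_red = (0.9, 0.2, 0.05)
h_before = colorsys.rgb_to_hsv(*saturated_red)[0]
h_after = colorsys.rgb_to_hsv(*per_channel_curve(saturated_red))[0]
hue_shift_deg = (h_after - h_before) * 360.0   # several degrees toward yellow
```

Neutrals are unaffected (all channels move identically), which is why the shift mostly shows up in saturated colors.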

Then there is the color space problem. If the RGB tone curve is applied in a large color space such as one with Prophoto primaries (like in the DNG case) one or more channels can be pushed outside the output color space (typically sRGB or AdobeRGB) so we get clipping and thus a quite large hue shift. Some raw converters partially repair this through gamut mapping (Lightroom does), but still there may be residual hue shift.

To battle the various RGB tone curve issues, bundled profiles typically contain subjective adjustments. For example the profile may desaturate high-saturation reds to avoid color space clipping. Naturally this means that the same profile used with a linear curve will produce too little saturation in the reds. That is, a profile must be designed specifically for the intended curve.

I think this is bad design. In fact one could argue that staying with RGB curves (and similar) has inhibited the development of good profiling tools and makes it unnecessarily hard to get natural colors in our photos.

It doesn't have to be this way; the RGB tone curve is legacy from the 1990s, when its low computational cost was one of the reasons to use it. It can also be seen as a nostalgic connection to film photography. In the film days the film had to produce the subjective look too, so exaggerated contrast and saturation were desirable properties. This thinking has been kept in most raw converters today, even though we now have every possibility to start off neutral and design our own look rather than relying on bundled looks. The RGB tone curve produces a saturated look that many like to have in their end result, but as said it still doesn't work well for profiles that aren't specifically adapted for it. Using a DCamProf neutral linear profile and applying an RGB tone curve will produce a garish look.

Tone reproduction operators

In the research world the problem of mapping colorimetric values from a real scene to the limited dynamic range of a screen or print is well-known and the subject of many scientific papers. The scientific term for the "tone curve" that compresses the dynamic range to fit is "tone reproduction operator", which instead of a simple global tone curve can be scene-dependent and spatially varying, what we in the photography world call "tone mapping".

In science the goal is generally to make as exact an appearance match as possible; for example, if we have shot a scene at a very low luminance level (at night), the eye's night vision with its desaturated color is modeled as well. Modeling all aspects of human vision at the scene and at reproduction becomes complex and is still a very active area of research.

Current raw converters are not designed for this type of advanced appearance modeling and it's generally not what a creative photographer is interested in. For example, in night photography we typically want to make use of the camera's ability to "see" more saturated colors than our eye can.

There is a middle way though. While we do want to increase contrast and don't really mind that it will be more than realistic for scenes not shot in bright sunlight, RGB tone curve color shifts are not beneficial. That is, the tone reproduction operator we want for general-purpose photography is a basic S-shaped tone curve just like in traditional photography, but without color shifts. This middle way has not received much attention in the research world though. Once computers got powerful enough, researchers moved away from the "simple" tone curve models into tone mapping.

While tone mapping is useful in many cases, it's better handled separately in practical photography. It doesn't replace the need for a tone curve-based operator, it's just a complement. There is no widely used "standard" operator with this property though, so I had to come up with my own for DCamProf.

DCamProf's neutral tone reproduction operator

With DCamProf I've chosen the approach to render accurate neutral linear profiles (scene-referred), and then develop a new spatially uniform tone reproduction operator that doesn't have the hue shift and over-saturation problems of the commonly used RGB curve. This means that the profile can be developed just like a "reproduction profile" and no subjective tuning is required to adapt for the RGB curve's issues.

This operator can be applied when generating a DCP or ICC profile so you can achieve the intended look in your raw converter.

It has the following properties:

  • Luminance is the same as a DCP RGB-HSV curve applied in linear Prophoto space
    • This means that the contrast and brightening will be the same as a standard DCP/RGB curve, so you can use the same curve shape.
  • The contrast (the curve shape) is a subjective choice made when generating the profile.
    • Typically one chooses a contrast that looks realistic for bright sunny outdoor scenes, and thus exaggerates contrast for other scenes, which typically is what you want subjectively.
    • If we'd want a realistic contrast curve for all types of scenes the curve would have to be scene-specific, and then the raw converter would have to work with linear profiles and apply the curve itself. However, most raw converters are designed for profiles that have a fixed curve applied.
  • Hue is kept constant.
  • The contrast of the curve is measured and the saturation is adapted accordingly so the perceptual impression is that saturation is kept constant.
    • Higher contrast needs higher saturation, otherwise the image would look desaturated.
  • Highlights are desaturated (less so than in an RGB tone curve though) to make a nice transition into clipping.
  • Non-linear elements are used where it adds perceptual accuracy, but the algorithm has been kept as simple as possible.
  • Compared to an RGB tone curve this operator increases saturation less and lacks hue shifts, typically noted most on bright colors like a sky.
Most will be satisfied with the default weights, but if you like, the weights can be tuned manually by providing a JSON configuration file. See the ntro_conf.json file in the data-examples directory for a documented example (it shows the default weights).

The operator makes no local adjustments, and as it's just a part of a camera profile it couldn't do that anyway. This means that only the curve is analyzed for contrast. As an image can vary in contrast locally (for example a large flat blue sky has low contrast even if the curve is a steep S-curve), the eye's perception of color also varies a little over the image surface, and thus some areas may receive a bit too much saturation or too little. This is not a large problem, but something to be aware of when evaluating results.
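The saturation adaptation described in the property list can be illustrated with a drastically simplified sketch. This is not DCamProf's actual algorithm and the constants are made up for the illustration; the idea is only to show the principle of measuring the curve's midtone contrast and scaling chroma by a related factor:

```python
import math

def midtone_gain(curve, lo=0.1, hi=0.4):
    """Average slope of the tone curve over a midtone range,
    measured as stops out per stop in (log-log slope)."""
    return (math.log(curve(hi)) - math.log(curve(lo))) / \
           (math.log(hi) - math.log(lo))

def compensated_chroma(chroma, curve, strength=0.5):
    """Boost chroma when the curve raises midtone contrast above 1.0,
    to keep perceived saturation constant (Hunt/Stevens compensation).
    'strength' is a made-up tuning constant for this illustration."""
    gain = midtone_gain(curve)
    return chroma * (1.0 + strength * (gain - 1.0))
```

A linear curve has gain 1.0 and leaves chroma unchanged; a contrasty curve yields a gain above 1.0 and a corresponding chroma boost.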

DNG profile implementation notes

When making a DNG profile the operator is implemented through the LookTable and curve. So if you strip away the LookTable and curve you have the standard linear colorimetric profile.

The DNG profile LUTs are not as flexible as ICC LUTs; most notably you cannot alter grays, that is you cannot increase their saturation or change their lightness (value). As the LUT works with multipliers on saturation it's logical that you cannot increase saturation from zero. However, it's not logical that value cannot be scaled. Some DNG profile implementations support scaling grays (as the LUT format itself does), but the public DNG reference code as well as Adobe's products ignore the value multipliers for grays and instead set them to 1.0, that is no change.

This means that you cannot implement a curve directly in the LUT, as grays cannot be darkened or brightened (which a curve requires). The workaround is to embed a tone curve (which can scale grays), predict the result of that curve and reverse the undesired aspects to get the intended result. This is how DCamProf does it. There is one potential problem though: the DNG specification does not specify how the tone curve should work, so there may be raw converters out there that do not use Adobe's hue-stabilized RGB curve variant, and if so you will not get the desired output.

If you come across such a raw converter and want to use this tone reproduction operator, please let me know.
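The predict-and-reverse workaround can be sketched conceptually like this. All functions here are illustrative placeholders, not DCamProf code: we predict what the converter's embedded curve will produce, compute what the intended operator would produce, and store the per-color difference as a LookTable-style correction (hue rotation, saturation multiplier, value multiplier):

```python
import colorsys

def lut_entry(linear_rgb, embedded_curve, intended_operator):
    """Compute the HSV correction that turns the predicted result of the
    embedded tone curve into the intended operator output."""
    predicted = colorsys.rgb_to_hsv(*embedded_curve(linear_rgb))
    intended = colorsys.rgb_to_hsv(*intended_operator(linear_rgb))
    hue_shift = (intended[0] - predicted[0]) * 360.0
    sat_scale = intended[1] / predicted[1] if predicted[1] > 0 else 1.0
    val_scale = intended[2] / predicted[2] if predicted[2] > 0 else 1.0
    return hue_shift, sat_scale, val_scale
```

If the prediction matches the intended output exactly, the correction is the identity (no hue shift, multipliers of 1.0), which is the expected degenerate case.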

The LookTable will per default be gamma-encoded for the value divisions. This makes perceptually better use of the range (that is, higher density in the shadows), meaning that the default 15 value divisions should be enough for most curves. Some older or simpler raw converters may not support the gamma encoding tag though, and if so you can disable it.

Verifying tone reproduction accuracy

Once you have applied a curve you can no longer do normal automatic Delta E comparisons to check for accuracy. By definition a curve adds a lot of lightness "errors" as it applies contrast, and we also add saturation "errors" to perceptually compensate for the increased contrast. The one-to-one delta E comparisons only work for linear profiles.

There are no readily available color science models to help us out here, so the only method at hand is to verify by eye. To do this you make a linear profile first, which can be measured for accuracy, and use that as reference. Then you copy an image so you have two versions, one with the linear profile applied and one with the curve, and do A/B swapping. It's important to do swapping and let the eye adapt for a couple of seconds; if you compare side by side the eye will be confused by the two different contrast levels displayed simultaneously.

A photograph with faces in it is one good reference point, as our eyes are very good at detecting subtle differences in skin tones. I also recommend testing a sunny outdoor landscape scene, where you can check if the applied contrast is suitable (that is, look globally and get a feel for whether the scene looks as contrasty as in real life, but without exaggeration). Check if the color of the blue sky seems right; hue shifts in light tones are typical for simpler curves. I also recommend testing a photo with various high-saturation colors, which you can find naturally in flowers or as artificial colors in for example toys or sports clothing. High-saturation testing is a bit difficult as you can run into color space clipping. Using a wide gamut screen will certainly not hurt in this case.

As mentioned in the description of DCamProf's neutral tone reproduction operator, there are limitations with operators that can only apply a global adjustment without adapting to the image content (a limitation that applies to all profiles). Keep this in mind when evaluating the result.

The goal DCamProf strives for is a neutral starting point even when a curve has been applied, and then you can subjectively add saturation to your liking in the raw converter.

Scene-referred vs output-referred

You have probably heard/read that "DNG profiles are scene-referred and ICC profiles are output-referred", and in the next sentence it's said that scene-referred is better. What does this mean?

A scene-referred camera profile simply means that the purpose of the profile is to correct the colors so the output represents a true linear colorimetric measurement of the original scene. In other words we want the XYZ values for the standard observer, or any reversible conversion thereof. That is what we in everyday speech would call an accurate linear profile, which DCamProf makes per default.

An output-referred camera profile should instead produce output that can be directly connected to a screen or printer ICC profile and produce a pleasing output for that media. As discussed, for cameras this means in practice that there should be some sort of tone-curve applied to get a pleasing midtone contrast and compressed highlights. In other words if the camera profile converts to XYZ space, those XYZ values should already have the curve applied and also any other subjective adjustments.

It's true that the ICC standard is written such that it expects camera profiles to work this way. However, raw converters that use ICC profiles don't necessarily follow this intention. Some let the ICC profile make a scene-referred conversion, some make a mix between scene-referred and output-referred (letting the profile do subjective color adjustments, but not apply a curve), and only a few do it the ICC standard way and make the ICC profile fully output-referred.

While DNG profiles can be 100% scene-referred, they can also have a "LookTable" LUT and/or a tone curve, which are subjective adjustments for output, effectively making the profile output-referred. Adobe's own profiles have these types of adjustments, and are thus output-referred. I think scene-referred vs output-referred is a bit of a confusing concept, as DNG profiles support both natively and ICC profiles support either in practice depending on raw converter design.

To support all-around use of scene-referred profiles the raw converter must have a type of tone reproduction operator that can change contrast without distorting color, otherwise scene-referred will only make sense with the linear curve. Of the big name raw converters few (none?) have such an operator but instead require profiles to be adapted for a curve. This is why DCamProf supports applying its own tone reproduction operator directly in the profile; raw converters in general are simply not up to using scene-referred profiles with their internal tone curves.

Tone curves and matrix-only profiles

To compensate for an RGB tone curve, that is to get good color after it has been applied, the profile needs to make non-linear adjustments. This is not possible with matrix-only profiles, as they by nature are 100% linear.

However, a matrix profile made to match a matte target, such as the classic CC24, will most likely render high-saturation colors with too low saturation, and will thus produce a less garish look together with an RGB tone curve than a colorimetric LUT profile would (which can accurately reproduce high-saturation colors as well).

It's generally not a good idea to try to get a good match of high-saturation colors for a matrix profile in any case, as that will reduce the precision of the more important normal range of colors. That is, a good matrix profile is generally a bit desaturated and therefore works okay (although not perceptually accurately) together with an RGB tone curve in most circumstances.

DCamProf does not provide any functionality to adapt matrix-only profiles for tone curves (unless you add a LUT on top, which you can for DNG profiles), so if you intend to use your matrix profile with an RGB-like curve, make sure you don't design it against colors with too high saturation.

The look of over-exposure

Digital cameras clip the raw channels straight off when over-exposed which may not result in a pleasing look, even with a rolloff in the profile's tone curve. So some raw converters apply some special rendering of over-exposed shots to simulate a more film-like behavior, mostly by desaturation.

This is not standardized and cannot be controlled by the camera profile. There should be no need to do so either, but it's good to be aware of this if you compare output of the same camera profile in two different raw converters. If the shot is over-exposed the raw converter itself may affect the look. Naturally if you lower exposure of a clipped image the raw converter's highlight reconstruction algorithm will affect the look, which also is outside the scope of a camera profile.

Chromatic adaptation transforms in camera profiling

If the light of a scene changes from say a blueish daylight (D65) to a reddish tungsten light (StdA) and we give our eyes some time to adapt, the colors will still look approximately the same. This is the eye's chromatic adaptation, and the phenomenon that colors appear the same despite a new light color is called "color constancy".

However, the eye is only approximately color constant, that is some colors will appear slightly different under the new light. In color science the chromatic adaptation behavior of the eye/brain has been tested with various psychophysical experiments where test persons match colors under different lights, in order to find "corresponding color sets". The corresponding color under a different light can be a different sample, which is an example of "color inconstancy".

These experiments have then served as the basis for developing chromatic adaptation transforms (CATs), mathematical models of human vision's chromatic adaptation behavior. A CAT thus models both the color constant and the inconstant parts of adaptation.

A CAT does the following: given a CIE XYZ tristimulus value under a source illuminant, it predicts the XYZ tristimulus value under a destination illuminant that provides the same color appearance. The illuminants are given as whitepoints, so the CAT does not need any spectral data.

In camera profiling a chromatic adaptation transform is needed when the calibration illuminant is different from D50. The reason is that the profile connection space is always D50 (for both ICC and DNG profiles), that is the color rendering pipeline in raw converters needs the profile to output colors relative to D50, which can then be converted further to colors for your screen or printer.

If the profile is made for say tungsten light (StdA, 2850K), we then need to convert those XYZ coordinates to corresponding colors under D50. This can be done with a CAT, and the current best for these tasks is the CAT that comes with CIECAM02, called CAT02. However, CATs are still far from perfect. There are challenges concerning the accuracy of the experimental data they are based on, and the experiments also cover a limited illuminant range (usually StdA to D65) and a limited range of colors. In addition, CATs are designed with various trade-offs to make them mathematically easier to use. And finally, these transforms work on tristimulus values only, for both colors and illuminants; any knowledge of spectral information won't contribute.
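The core of a CAT02-style transform is small: a von Kries scaling in the "sharpened" CAT02 LMS space. The sketch below assumes full adaptation (the CIECAM02 degree-of-adaptation factor D set to 1) and uses the published CAT02 matrix:

```python
# CAT02 chromatic adaptation, full adaptation (D = 1): convert XYZ to the
# sharpened CAT02 LMS space, scale by the ratio of destination to source
# white, and convert back.

CAT02 = [[ 0.7328, 0.4296, -0.1624],
         [-0.7036, 1.6975,  0.0061],
         [ 0.0030, 0.0136,  0.9834]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_inv(m):
    # 3x3 inverse via the adjugate matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def cat02_adapt(xyz, white_src, white_dst):
    lms = mat_vec(CAT02, xyz)
    ws, wd = mat_vec(CAT02, white_src), mat_vec(CAT02, white_dst)
    adapted = [l * wd[i] / ws[i] for i, l in enumerate(lms)]
    return mat_vec(mat_inv(CAT02), adapted)
```

By construction the source white maps exactly to the destination white, and identical whites give an identity transform.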

"Relighting" transform

There's another type of chromatic transform which is sometimes needed in camera profiling. Let's say we have the XYZ value under D50 for a test target patch, and we want to predict which XYZ value we will get from the same patch lit by StdA. That is, we're relighting the patch. If we have the reflectance spectrum of the patch and the destination illuminant it's straightforward: we just calculate the new XYZ values the normal way with spectral integration.
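The spectral integration amounts to summing reflectance times illuminant power against the standard observer functions at each wavelength. A sketch with made-up coarse sample data (real tables cover roughly 380-730 nm at 5 or 10 nm steps):

```python
# Relighting by spectral integration: XYZ of a reflective patch under a
# given illuminant, normalized so a perfect white reflector gets Y = 1.0.

def relight_xyz(reflectance, illuminant, observer):
    """reflectance, illuminant: lists sampled at the same wavelengths;
    observer: list of (xbar, ybar, zbar) tuples at those wavelengths."""
    norm = sum(s * o[1] for s, o in zip(illuminant, observer))
    xyz = [0.0, 0.0, 0.0]
    for r, s, o in zip(reflectance, illuminant, observer):
        for i in range(3):
            xyz[i] += r * s * o[i] / norm
    return xyz
```

Changing the illuminant list is all it takes to relight the same reflectance spectrum under a different light.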

However some reference files provided with commercial test targets only have XYZ coordinates, and if we don't have a spectrometer to measure the target ourselves then we need to make a transform without having any spectra at hand.

This transform is not the same as a CAT. A CAT finds a corresponding color and models the color inconstancy aspects of human vision. However, as human vision is approximately color constant, many software applications use a CAT anyway when a relighting transform is called for, and there's not much else to do as the established color appearance models do not provide any other transform. There is no standardized name for the "relighting transform", which means that "CAT" is sometimes used in the literature for this as well, causing some confusion. In this documentation "relighting transform" will be used.

With DCamProf there is a better alternative for relighting than using a CAT. If the reflectance spectrum is missing DCamProf can generate a virtual spectrum which matches the given XYZ coordinate, and that spectrum can then be lit by any illuminant. Of course the rendered spectrum will not exactly match the unknown real spectrum, but tests made on various sets show that for most colors this method outperforms both Bradford CAT and CAT02. Rendering virtual spectra often gets you within 1 DE from the correct answer, while the CAT is often in the range 2-4 DE.

Performance of CATs

The performance of a relighting transform is easy to verify as long as you have spectral data, and there are plenty of databases with various spectra.

With a CAT the only data to verify against are the corresponding color experiments, and CAT02 generally wins when it comes to the established models. However, as discussed, all of these models are rather approximate, and the question arises whether they introduce more errors than they fix. A CAT02 conversion from StdA to D65 will be about 3-4 DE on average from the corresponding color set experiments. Performance is probably not as good outside the StdA to D65 range, as the reference experiments do not cover a wider range than that.

It would be most interesting to compare a CAT with simple spectral relighting, as the latter is usually available when profiling. When using the relighting transform as a CAT we assume perfect color constancy, which is indeed wrong, but on the other hand the error will be no larger than the range of color inconstancy, which presumably is quite small. Unfortunately the corresponding color experiments don't have spectral data, so there is no way to make this comparison. What we can see though is that relighting is about 3 DE on average from CAT02, with up to 6-7 in saturated reds and yellow-greens.

From these results a fair guess is that a CAT is indeed better at predicting the color inconstancy aspects of human vision than just keeping perfect color constancy (that is, relighting from spectra), but also that relighting may be more robust and may have smaller appearance errors in some ranges.

When are CATs and relighting used?

If you make a D50 profile and have D50 XYZ target reference values, no CAT or relighting is required. If you like you can make a D50 profile even if the actual light used when shooting the target is not D50. What will happen then is that the look will be as if lit by D50, but the profile will only work as intended in the light used at shooting time (if you make a DCP its light temperature estimation will be off too, but that does not hurt performance in any way).

DCamProf needs target reference values as illuminated by the calibration illuminant (= the light the target was shot under). Why? There are two reasons: one is to calculate the color matrix which is used in DNG profiles to estimate light temperatures, and the other is to know the color appearance under that light, so that we can use a CAT to get corresponding colors for D50 (the profile connection space) which the color correction is made for.

As the reference file is often calculated for D50, a relighting is often required. If spectra are available in the target file this is done by spectral calculation, which gives very accurate results. If spectra are missing, a relighting transform has to be applied.

DCamProf also needs D50 reference values, as this is the profile connection space where the color correction matrix (the "forward matrix") and LUT work. If the actual look of the calibration illuminant should be retained we also need to model the color inconstancy aspects of human color vision, and then a CAT is used: we take the reference values calculated for the calibration illuminant and transform those to D50 via the CAT.
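
A CAT of this kind works by scaling cone-like responses by the ratio of the two white points. A minimal sketch using the well-known linear Bradford matrix (CAT02 works the same way with a different cone matrix; this is a generic illustration, not DCamProf's actual code):

```python
import numpy as np

# Bradford cone-response matrix (published values).
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

def bradford_cat(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white to dst_white (whites as XYZ, Y = 100)."""
    rho_s = M @ np.asarray(src_white, float)
    rho_d = M @ np.asarray(dst_white, float)
    scale = np.diag(rho_d / rho_s)           # von Kries-style scaling in cone space
    return np.linalg.inv(M) @ scale @ M @ np.asarray(xyz, float)

STDA = [109.85, 100.0, 35.58]                # CIE Standard Illuminant A white point
D50  = [ 96.42, 100.0, 82.49]                # D50 white point

adapted = bradford_cat(STDA, STDA, D50)      # the source white maps onto the D50 white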

With DCamProf you can, if you want, force color-constant behavior; D50 values will then be calculated via relighting rather than a CAT, assuming target spectra are available. If you are making a reproduction profile this is likely what you want.

Note that if we don't make a DNG profile, or don't care about its ability to estimate light temperatures, and we would rather use color-constant behavior than a CAT, the reference values for the calibration illuminant won't matter.

Summary:

  • For D50 profiles with D50 reference values in the target, neither CAT nor relighting will be used, as it's not needed. This is the most common use case.
  • Relighting is performed to get XYZ reference values for the calibration illuminant (unless the XYZ reference values already match it).
  • The (often relighted) calibration illuminant reference values are used for two things, and depending on context these may or may not be used in the final profile:
    1. To derive the "color matrix" used by DNG profiles for light temperature estimation. It's not used by ICC profiles.
    2. To serve as the starting point for the CAT02 transform to D50, the illuminant for the profile connection space where the color correction matrix (the "forward matrix") and LUT work, that is, where the actual color correction takes place. If color-constant behavior is enabled (-C flag), this step is skipped.
  • If color-constant behavior is enabled (-C flag), relighting rather than a CAT is used to get the D50 reference values. As reference files typically contain D50 values to start with, relighting is generally not necessary.
  • If no spectra are available in the target file and no virtual spectra are generated, a Bradford CAT is used as a "poor man's" relighting transform. It's generally better to enable virtual spectra generation (-S flag) in this situation, as it provides more accurate results.
  • Color-constant behavior is generally desired in reproduction (copy) applications, while using a CAT to model real appearance is typically preferred for general-purpose profiles.

Testing CAT-designed profiles

If CAT02 was employed when designing the profile, for example to keep the color appearance of colors under tungsten light, you should test the profile with the same criteria. Using DCamProf's test-profile command you can just mirror the parameters from make-profile. If you use some external software for testing it will likely not apply a CAT but rather expect perfect color constancy. In that case you should either not use that software for testing, or redesign your profile with the -C flag, that is, disable the CAT.

Subjective looks in camera profiles

The camera profiles bundled with commercial raw converters generally don't try to reproduce a neutral, accurate color appearance, but instead apply a designed look. The central aspect is of course that they apply a tone curve, as discussed separately in the tone curve section, but the appearance of colors is also adjusted with the intention to produce a more "pleasing" result than an accurate profile would. It can be about rendering smoother and less reddish caucasian skin tones, or more saturated colors to make images "pop".

This is very similar to how color films worked: few films had very accurate color, but instead different types of subjective color that could suit a subject more or less well. Contrast (tone curve) differed between films too. That is, one can say that commercial camera profiles build on the film tradition. Although with digital technology we could design the look separately from the profile (using the raw converter adjustments, or a photo editor), the traditional way with preset looks is still alive and well.

How these subjective profiles are grouped differs between raw converters. The illuminant selection (typically tungsten, flash and daylight) is not about subjectivity but about adapting the camera response to a light source, but it's often part of the profile choice unless it's automatically derived from white balance (dual-illuminant DNG profiles have it built in). Then there's often a subjective choice depending on intended subject; "portrait", "product" and "landscape" are common genres. Sometimes the tone curve is integrated into the profile (lower contrast for portrait, higher contrast for product and landscape), sometimes you can select it separately. As the tone curve affects color appearance I think it's better to have it integrated in the profile, so you know the designer has had full control over the end result.

In any modern raw converter you can as a user make many different color adjustments, as well as contrast adjustments. So why should the camera profile make these adjustments? Wouldn't it be better if camera profiles were just as accurate as possible, and you as a user chose color and curve adjustments using the readily available tools in the raw converter?

Well, first there is tradition, as already discussed, which is probably the strongest reason why profile design has stayed this way. Choosing a profile is like choosing a film which renders the scene with colors and contrast in some way you prefer. But it's also non-trivial to make these subjective color adjustments, which is another key reason to provide the user with presets. Profiles don't make simple global adjustments like pulling the saturation slider; instead there are subtle adjustments here and there, to make skin color look flattering, to slightly increase separation in foliage, etc. They may contain lightness-dependent hue adjustments ("hue twists"), for example making shadows more saturated and cooler (bluer) and highlights warmer (redder). We also know that adjusting contrast will change color appearance in ways which can be difficult to compensate for. The average user may not have the skill or interest to do this type of fine-tuning. Of course the raw converter could still separate the look from the profile by having look presets that it applies on top of an accurate colorimetric profile (which I personally think would be a better design), but few if any raw converters work that way today.

In fact, few raw converters actually have adjustment tools that allow making the typical fine adjustments you find in profiles. Capture One has the "Color Editor" which is useful for some of these tunings, but Lightroom, for example, is quite limited in this regard.

When it comes to companies that produce both cameras and raw converters, like Phase One and Hasselblad (and well, most other camera manufacturers too, but the medium format makers' color rendition stands out at least in terms of reputation), the profiles with their subtle subjective adjustments are part of the trade secrets that sell cameras. While the camera hardware does play a very important role in how colors are rendered, the camera profile makes the largest difference and is thus very important in differentiating from the competition. The camera makers would probably not like to put this responsibility on the user.

So we have these profiles because of tradition, because they are a way for camera and raw converter makers to differentiate themselves, and because it's quite difficult to make the subtle adjustments yourself; for most users it's just easier to get a preset look from the profile.

Should your custom profile apply a subjective look?

When you make your own profile using DCamProf you will per default get a profile designed for accuracy, without the fine-tuned subjective adjustments found in typical commercial profiles. When applying a curve, DCamProf will through its neutral tone reproduction operator keep color appearance as true to the original as possible.

Is this a problem? Shouldn't we have some adjustments for skin tones and other subjects? Well, it's up to you to decide. First it should be noted that the neutral tone reproduction operator already does some of the adjustments you would expect: overall saturation is increased, saturation is increased in shadows and dampened for high saturation colors, etc. This is not to create a look, but to compensate for the appearance changes caused by the contrast curve, and I'd say this is the most important aspect of the "subjective" adjustments you find in the bundled commercial profiles too.

Whether you want further adjustments that actually change the appearance of colors depends on what type of subjects you shoot, what type of workflow you have and how much control you want during the workflow. If you shoot portraits of caucasian people you will probably want to adjust many of them to contain less red, and maybe even out the hues. You'd probably want to make slightly different adjustments from time to time, but you may still be helped by using a profile that has some skin tone adjustments built in to give you a better starting point. In that case you may want a specific "portrait" profile.

Don't forget though that any subjective adjustment in a profile will be global, so if it for example adjusts "skin tones" it will change any skin-like colors, even on entirely different objects. If you instead edit in Photoshop or a similar application there are selection tools to isolate actual skin in the frame so you can modify only that, which of course makes more sense but does require time-consuming post-processing work for each image.

Also note that skin tones vary a lot from person to person, and also vary depending on light, makeup, tanning, etc. Naturally this means that a profile that's good for one type of condition may be less good for others. Still, some commercial raw converters have one subjective look that is supposed to suit any subject (Hasselblad's "Natural Color Solution" for example). If the profile makes quite small deviations from accuracy it can work quite well, but it should still be seen as a compromise.

If you do apply heavy manual post-processing to achieve a specific look, it probably doesn't make much sense to have a subjectively fine-tuned profile from the start, as no trace will be left of the original look anyway. In that case I would prefer a neutral starting point, so I have an accurate baseline to work from and actually know what appearance changes have been made.

A profile with a designed look is of course put to best use when you don't make many adjustments at all, or only minor ones. If you have hundreds of images from a wedding, a profile with some skin tone optimizations would probably not hurt. Also, if your raw converter lacks tools to smooth skin tones you may want a profile that does that for you. You may also simply like the concept of selecting a preset look depending on subject, like having a portrait, a landscape and a product profile.

So whether you want a neutral profile or one with a designed look depends mainly on how you want to work, and to some extent also on the capabilities of your raw converter.

Designing your own subjective look

With DCamProf you can optionally design a subjective look and put it into the profile. This is not an easy task, especially as DCamProf has no graphical user interface, but if you have a good bit of patience it can be done.

Here are a few examples of subjective adjustments you can find in profiles:

  • Overall saturation increase of normally saturated colors
  • Overall saturation decrease of high saturation colors
    • This can be seen as a form of gamut mapping, reducing saturation so pictures become easier to print. Note that many raw converters are capable of dynamic gamut mapping, so it may not be wise to put it statically into the profile.
  • Bringing reds and yellows closer together in the skintone range
  • Render caucasian skin more golden and less reddish, by altering hue and maybe desaturating skintone reds.
  • Make darker tones cooler (bluer) and more saturated
  • Make midtones and highlights warmer (redder)
  • Reduce chroma of close-to-neutral colors (make gray more gray)
There are more things too, and there's no "right" set of adjustments. There is huge variation between manufacturers in how they do it; just look at how different the same camera can look in different raw converters. If you are uncertain of what you like yourself, just experiment and don't be too nervous about it. As there haven't been many tools available for making profiles there's a lot of romanticizing of various raw converters' abilities to make great color. It's not that hard, and it's certainly not guaranteed that the manufacturer's taste concerning which adjustments should or shouldn't be made is better than yours. The manufacturer often tries to design a look that will impress the average user, and if you're into profiling your own camera you're probably not one of those.
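
To make the "hue twist" idea from the list above concrete, here is a sketch of a lightness-dependent hue and chroma adjustment in CIE LCh. The numbers are invented for illustration; they are not taken from any real profile or from DCamProf:

```python
# Hypothetical "hue twist": shadows are pushed cooler (toward blue) and get extra
# chroma, highlights are pushed warmer (toward red). All constants are made up.
def hue_twist(L, C, h):
    """Shift hue (degrees) and chroma as a function of lightness L (0-100)."""
    t = L / 100.0
    dh = -8.0 * (1.0 - t) + 5.0 * t      # hue shift: negative (cooler) in shadows
    cs = 1.0 + 0.10 * (1.0 - t)          # chroma boost fading with lightness
    return L, C * cs, (h + dh) % 360.0

L2, C2, h2 = hue_twist(25.0, 30.0, 40.0) # a dark reddish color gets cooler and richer
```

A real profile would encode such adjustments as LUT entries rather than a formula, but the effect on each color is of this kind.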

When you develop your look it can be worthwhile to first produce a set of TIFF files of representative test images generated with other profiles you like (or don't like) so you have something to compare against.

In general, and especially when it comes to skin tones, I can recommend studying the subject of color correction. Not least, you will see things that a profile cannot and should not do: local adjustments, or adapting to conditions specific to the image. For example, if a person wears brightly colored clothing this can affect the tone of the skin, and naturally a profile that corrects for that will do badly in other conditions.

File formats

JSON

DCamProf uses JSON as a base for its own file formats. Open the files that come in the data-examples directory in the DCamProf archive for documentation. The JSON parser in DCamProf has been modified to parse floating point numbers with maximum possible precision.

If you get a JSON format error in your hand-edited files it can be hard to figure out where it is; in that case you can use one of the online JSON validators, like JSONLint.
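
If you prefer an offline check, any strict JSON parser will report the error location; for example Python's standard json module (a generic illustration, unrelated to DCamProf's own parser):

```python
import json

# A deliberately broken snippet: note the double comma in the array.
broken = '{ "name": "my profile", "values": [1, 2,, 3] }'

try:
    json.loads(broken)
    location = None
except json.JSONDecodeError as e:
    location = (e.lineno, e.colno)   # 1-based line and column of the first error
```

Running the parser over the whole hand-edited file this way points you straight at the offending line.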

Argyll .ti3 (and similar)

DCamProf reads Argyll .ti3 files produced by the scanin tool. Note that the Argyll .ti3 format is rich in features and DCamProf only cares about a subset of it. It expects to get RGB measurement triplets matched with XYZ reference values, and possibly (hopefully) spectral data.

DCamProf can also generate .ti3 files and will then add some columns specific to DCamProf. Files remain compatible with Argyll though as unknown columns are ignored.

The .ti3 format (or rather an even more reduced subset of it) is also used when you want to import spectral data when you make a target to be processed by camera SSFs. An example of this exists in the data-examples directory.

DCamProf can also understand formats similar to .ti3, such as files coming from Babelcolor's patchtool.

Argyll .sp

With Argyll spotread you can read ambient light to a spectrum file, and this can be fed directly to DCamProf as an illuminant.

Argyll .ti1

The DCamProf make-testchart and testchart-ff commands use Argyll's .ti1 format to specify a test chart layout.

DCP

DCamProf can read and write DNG camera profiles (DCPs).

ICC

DCamProf can read and write ICC version 2 camera profiles.

Text

DCamProf can import spectral databases as raw text data formatted in various ways using the txt2ti3 command (not to be mixed up with Argyll's command with the same name).

Command reference

DCamProf is a collection of tools built into a single binary. The first parameter specifies the command (tool) you want to run, followed by command-specific arguments:
  dcamprof <command> [command-specific flags] <command args>
If you run the binary without parameters you get a list of all commands and their flags. Run dcamprof -v if you just want to check the version.

The basic workflow is:

  1. Make a target file containing test patches with camera RGB and reference XYZ values, and preferably also the reflectance spectra. This is either done with Argyll from test target raw photos, or by using the make-target command to render values based on provided camera SSFs.
  2. Make a camera profile using the target file, using the command make-profile. This will output a generic profile in DCamProf's own JSON format.
  3. Convert a DCamProf profile to a standardized format, using the command make-dcp or make-icc.
  4. Optionally manually edit the result (copyright strings etc) by using the dcp/icc2json and json2dcp/icc commands.
  5. Optionally evaluate target matching performance using the test-profile command.
Additionally you can use the make-target command to generate new RGB and XYZ values based on your chosen illuminant and observer. This requires the full spectrum of target patches, and to make RGB values you also need the camera's SSFs. For convenience, value re-generation is also supported directly in the make-profile and test-profile commands.

Here follows a description of each command available.

make-target

  dcamprof make-target <flags, with inputs> <output.ti3>
Make a target file which contains raw camera RGB values paired with reference XYZ values, and (optionally) spectral reflectance. The file format is Argyll's .ti3, with some DCamProf extensions.

If you're using Argyll for measuring a target you don't need to use this command, but you can still use it to regenerate XYZ values with a different observer for example (this requires that the .ti3 file contains spectral data).

If you have your camera's SSFs you don't need to shoot any physical target; you can instead render the .ti3 file from scratch using this command.

Overview of flags:

  • -c <ssf.json>, camera's spectral sensitivity functions, only needed if you want to (re-)generate camera raw RGB values.
  • -o <observer>, only required when (re-)generating XYZ reference values from spectra, normally the default 1931_2 is a good choice.
  • -i <target illuminant>, only required when (re-)generating RGB values from spectra (default: D50)
  • -I <XYZ reference illuminant>, only required when (re-)generating XYZ from spectra (default: same as target illuminant)
  • -C, don't model color inconstancy, that is use relighting instead of a chromatic adaptation transform.
  • -p <patches.ti3>, include patch set, in Argyll .ti3 format. The file can be produced by Argyll, DCamProf or any other software with a compatible format. It can contain XYZ and RGB values, and preferably it should contain spectral reflectance of the patches too. If spectra are available, the XYZ and RGB values are re-generated when possible (unless the -R and/or -X parameters are provided).
  • -a <name>, assign a (new) class name to the previously included patch set (-p). Class names are a DCamProf extension to the .ti3 format (that is, Argyll files lack them). Class names are useful when assembling a single target file from multiple spectral sources and you want to weight them differently during profile making. See the documentation for make-profile for further details.
  • -f <file.tif | tf.json>, linearize imported RGB values to match transfer function in provided tiff/json, generally only required in some ICC workflows.
  • -S, render spectra for inputs that lack it.
  • -g <generated grid spacing>, adjust the grid spacing when generating spectral grids. The spacing is given in u'v' chromaticity distance, default is 0.03.
  • -d <distance>, minimum u'v' chromaticity distance between patches of different classes (default is 0.02). If you mix different spectral sources, for example greens from nature in one set and overlapping greens from artificial sources in another, this can lead to a messy-looking target and give contradicting optimization goals for certain colors. DCamProf can handle contradicting spectra well, but to keep the target cleaner you can use this parameter (which is enabled per default; set it to 0 to disable). The patch set listed first on the command line takes priority, that is, overlapping patches of later sets are dropped.
  • -b <distance>, exclude a patch if there is a lighter patch with the same chromaticity. Suggested chromaticity distance 0.004 (default: not active). As DCamProf makes a 2.5D LUT, darker patches with the same chromaticity will not really add much value, so to clean up the target you can choose to remove them. If kept, they will be grouped together with lighter colors and used for average correction.
  • -X, -R, don't regenerate XYZ/RGB values of imported patch sets. Per default target values are regenerated to match chosen observer, illuminant and camera SSF, if all required information is available. This is usually the best, but if you for some reason want to keep the reference values provided in the imported file use these flags.
  • -n, exclude spectra in output (default: include if all inputs have it). Targets which include spectra are more flexible, as XYZ (and RGB) values can be regenerated with a different observer/illuminant/camera, but they make a larger file which is harder to read. If you don't need spectra you can exclude them. Note that if some of the inputs lack spectra the output will not have any either.
  • -r <dir>, directory to save informational reports and plots.
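
Several of the flags above (-g, -d, -b) measure distance in the u'v' chromaticity plane. For reference, here is how such a distance can be computed from XYZ values; these are the standard CIE 1976 UCS formulas, not DCamProf code:

```python
import math

def uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity coordinates from XYZ."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def uv_distance(xyz1, xyz2):
    """Euclidean distance in the u'v' chromaticity plane."""
    u1, v1 = uv_prime(*xyz1)
    u2, v2 = uv_prime(*xyz2)
    return math.hypot(u1 - u2, v1 - v2)

# Example: distance between the D50 and D65 white points is about 0.023,
# roughly the same magnitude as the 0.02 default for the -d flag.
d50 = (96.42, 100.0, 82.49)
d65 = (95.04, 100.0, 108.88)
dist = uv_distance(d50, d65)
```

This gives a feel for the scale: the default 0.02 class-separation distance is about as large as the chromaticity difference between D50 and D65 white.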

Built-in spectral data

DCamProf has a few spectral databases built-in. These come from freely available sources, see the acknowledgments for information.
  • cc24 -- spectral reflectance of the classic Macbeth 24 patch color checker
  • kuopio-natural -- spectral reflectance of colors occurring in typical nature in Finland, leaves, flowers etc.
  • munsell -- spectral reflectance of the full 1600 patch Munsell glossy patch set.
  • munsell-bright -- subset of Munsell, only the lightest and most saturated colors included.
This is a good start which you can do a lot with, but I'm looking for more spectral data to include in future releases of DCamProf, so if you know of some good source please let me know.

Generated spectral data

DCamProf has a spectral rendering algorithm that can make reflectance spectra to match any given XYZ coordinate for the chosen observer and illuminant. It's sort of an impossible task, as there is an infinite number of spectra to choose from. In this infinite set DCamProf finds a smooth spectrum which has similar properties to real reflectance spectra.

Although not a full substitute for real measured data, it can be used for experiments, testing profile performance, establishing a baseline, or filling in areas where you don't have real spectral data. And indeed, a profile rendered completely from generated spectra will work; try it if you like.

You can generate spectra along the chromaticity border of a gamut and optionally fill the inside with a grid of patches. The samples are always made as light as possible (as high reflectance as possible) for the given chromaticity. Extremely saturated colors are by necessity narrow-band and will thus be darker than less saturated colors.

The gamuts available are locus, pointer, srgb, adobergb and prophoto. Add a "-grid" suffix, e.g. "pointer-grid", to create a grid. The grid spacing can be adjusted with the -g parameter. Gamuts with extreme or even out-of-human-gamut colors, like locus and prophoto, will cause the spectral renderer to fail to produce spectra at some chromaticity coordinates; this is normal.

Be warned that spectral data generation is very processing intensive. DCamProf uses OpenMP to process several patches in parallel on all available cores, but it can still take minutes to produce a grid, or even hours if it's really dense.

[Figure] A generated reflectance spectrum made by DCamProf (blue) together with a measured spectrum from a real Munsell color patch (red). Both lead to the same XYZ coordinate when integrated with the observer's CMFs. That is, this shows one example of two different spectra that produce the identical color to the observer.

The DCamProf spectral generator strives for smooth spectra, and its result is thus a little bit more rounded than the Munsell patch in this example.

Importing raw text data

Spectral data is often delivered in text files with simply the floating point values listed on rows. Much of the data in the spectral databases linked here is in such a text format.

The separate command txt2ti3 (not to be confused with Argyll's command with the same name) can be used to convert those raw text files into .ti3 that make-target can read.

The flags should be self-explanatory so just run dcamprof without parameters to get the information.

Example: import text spectral data (here from Lippmann2000 found in the spectral databases section) and form a target where cc24 fills out where the imported data don't have patches:

  dcamprof txt2ti3 -a "caucasian" -s 1 -f 400,700,2 \
    Reflect_AllCaucasian_400_700_2nm.txt caucasian.ti3
  dcamprof make-target -p caucasian.ti3 -p cc24 output.ti3

Reflectance vs emissive spectra

The default type of spectrum in a target is a reflectance spectrum, that is, how much of the light is reflected at each wavelength. Most spectral data is of this type. A reflectance spectrum is first multiplied with the illuminant to form an emissive spectrum, which is then integrated with the observer.

It's however also possible to specify emissive spectra, that is, light sources or reflective objects with an illuminant reflected off them. If you want to define a transmissive object, such as a backlit leaf, you specify it as an emissive spectrum, like a filtered light source.

In the .ti3 file the column SAMPLE_TYPE says "R" for reflective spectra and "E" for emissive. This is a DCamProf extension and is thus ignored by Argyll.
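
The handling difference between the two sample types can be sketched like this (a simplified illustration with arbitrary array inputs, not DCamProf's actual code):

```python
import numpy as np

def to_xyz(sample, sample_type, cmf, illuminant):
    """Integrate a sample to XYZ; "R" samples are lit by the illuminant first.

    cmf is a 3 x N array of color matching functions, illuminant and sample are
    length-N arrays on the same wavelength grid.
    """
    if sample_type == "R":
        emissive = sample * illuminant       # reflectance: needs light on it
    elif sample_type == "E":
        emissive = sample                    # emissive: already light
    else:
        raise ValueError("SAMPLE_TYPE must be 'R' or 'E'")
    k = 100.0 / (cmf[1] @ illuminant)        # scale so a perfect white gets Y = 100
    return k * (cmf @ emissive)
```

With this convention a 50% flat reflector under any illuminant lands at Y = 50, while an emissive sample is integrated as-is.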

Observers

The observer is a mathematical model of the eye, defining its spectral sensitivity functions, or color matching functions (CMFs). It's not intended to exactly match the eye's cone response, but to provide equivalent results. The observer's CMFs have been mathematically transformed to work better in real applications.

When you integrate these CMFs with a spectrum you get the CIE XYZ tristimulus coordinates. That is, the observer is a key element in modeling what colors we see.

As there's no method to actually measure the signals the eye sends to the brain, the CMFs have been derived from the results of color matching experiments. The precision is thus dependent on the color matching skills of the people involved in the experiments.

The original observer was published as early as 1931, and it's still the number one standard observer. This is not because it's the most exact one, but because the CIE standard organization will not accept new standards unless significant improvement is made. Some minor improvements have been made over the years, but the original 1931 standard observer holds up well enough.

There are 2 and 10 degree variants of observers. This simply refers to how large a part of the visual field the tested color patch covers. With the narrower 2 degree angle the eye is slightly better at color separation, but the 10 degree variant generally matches real situations better. The 1931 observer is a 2 degree observer, and the first standardized 10 degree observer was published in 1964.

DCamProf contains a number of observers; you can see a list when running the command without parameters. I'd like to use the 2006 observer as the default as it's more accurate than the original 1931, and I'd also rather use the 10 degree observer as I think it matches real situations better than the 2 degree. However, as most color management software expects a 1931_2 observer, and all the common color spaces (sRGB, AdobeRGB, ProPhoto) are defined with a 1931_2 observer, I've chosen that as the default. Only experiment with changing observer when you have full spectral information though: changing observer will change the XYZ values slightly, so you can't use a reference file whose XYZ values were generated for a different observer, for example.

If you change observer, note that evaluation of profile-making results must be made with the same observer, otherwise you will get larger Delta E than you should.

To get the desired results with a different observer one needs at some point to transform to colors for the 1931_2 observer, as for example both DCP and ICC require that. Currently this transform model is very simplistic in DCamProf, so the results will probably not be as good as they could be. Therefore it's currently best to stay with the default 1931_2 observer.

Examples

Re-generate XYZ reference values with a new illuminant (D65) and observer (using the default 1931_2) for an Argyll-generated .ti3 file:
  dcamprof make-target -I D65 -p argyll.ti3 output.ti3
Generate target files from scratch using camera SSFs and the built-in database:
  dcamprof make-target -c 5dmk2-ssf.json -i StdA -I D50 -p cc24 output.ti3
  dcamprof make-target -c 5dmk2-ssf.json -i StdA -I D50 -p cc24 -p munsell output.ti3
Use the spectral generator to make targets from scratch:
  dcamprof make-target -c 5dmk2-ssf.json -i 7500K -I D50 -g 0.01 -p pointer-grid output.ti3
  dcamprof make-target -c 5dmk2-ssf.json -i D65 -I D50 -p pointer -p srgb-grid output.ti3
Generate a border around the Pointer's gamut and use the reserved word "illuminant" to get the spectrum of the illuminant (D65 here) into the patch set, which is necessary as with only the border there would be no white patch:
  dcamprof make-target -c 5dmk2-ssf.json -i D65 -I D50 -p pointer -p illuminant output.ti3
...and then we do the same thing by using the reserved word "white" to get a perfect white reflective spectrum, which really is smarter as the reflective white will still work if we later change the illuminant:
  dcamprof make-target -c 5dmk2-ssf.json -i D65 -I D50 -p pointer -p white output.ti3
Re-generate both RGB and XYZ values from a previously created file which contains spectral information, use D65 for the RGB values and D50 for the XYZ values:
  dcamprof make-target -c 5dmk2-ssf.json -i D65 -I D50 -p input.ti3 output.ti3
Assemble a target from imported spectra and built-in database:
  dcamprof make-target -p input1.txt -a "class1" -p input2.txt -a "class2" -p cc24 output.ti3
Note that in this last case no SSF is provided, and while the input text files might have RGB values, no RGB values can be generated for the built-in cc24, so the output will contain dummy values (zeroes) for those RGB triplets. That is, to be used for making a profile it needs to be run through make-target again to re-generate RGB values with provided camera SSFs. For convenience the make-profile and test-profile commands support re-generation directly, so you usually don't need to re-generate reference values separately with the make-target command.

If you are using Argyll source files it's preferred that you include spectra throughout the workflow, so that the XYZ reference values will be re-generated with the observer chosen in DCamProf. If the XYZ reference values come without spectra from a source you cannot control, it's important to know which illuminant (and observer, nearly always 1931_2) was used, so that you can later inform make-profile of that.

make-profile

  dcamprof make-profile [flags] <input-target.ti3> <output-profile.json | .icc | .dcp>
Make a camera profile based on an Argyll .ti3 target file, either generated by Argyll from a raw test target photo, or by dcamprof make-target. The target file contains test patches with raw RGB values from the camera coupled with reference CIE XYZ coordinates of the patches, and possibly also the spectral reflectance of each patch.

The output is written in DCamProf's own native format, which can be converted later on, or if you are satisfied with the default conversion flags you can directly write a DNG or ICC profile.

Overview of flags:

  • -n <camera name>, optional camera name. If you write a DCP directly it's important to set it.
  • -w, -W, weighting to control trade-off between smoothness and accuracy, described in a separate section below.
  • -M, ignore DE weights for matrix optimization, described in a separate section below.
  • -l <l,c>, LUT lightness and chromaticity relax parameters (default: 0,0). Coarse relax of the LUT stretching, which can be used as an alternative or complement to normal weighting (-w). A suitable starting point can be -l 0.1,0.1. You can also disable one dimension completely by providing a negative parameter, for example to disable lightness correction: -l -1,0.
  • -d <distance>, minimum u'v' chromaticity distance between patches when optimizing LUT, default 0.02. Close patches will be grouped together and an average correction is made.
  • -o, observer, default 1931_2. If target XYZ values are not re-generated (that is, the target lacks spectra) this must match the observer used when the XYZ values were originally generated. If not known, the best guess is generally 1931_2, which is the default.
  • -c <ssf.json>, camera's spectral sensitivity functions, only needed if you want to regenerate camera raw RGB values from spectral information in the target file.
  • -i <calibration illuminant>, this is the illuminant the target was shot under, that is the illuminant the target file RGB values were generated for. Can be specified as an exif light-source name or number, xy coordinate, XYZ coordinate, a spectrum.json file or an Argyll SPECT file (produced by Argyll's illumread). To allow any target value re-generation from spectra it must be a source with known spectrum. If camera SSF is provided (-c) the RGB values will be re-generated.
  • -I specifies the illuminant for the XYZ reference values. Can be specified as an exif light-source name or number, xy coordinate, XYZ coordinate or a spectrum.json file. If spectral information is provided in the target, the XYZ values will be re-generated according to the chosen illuminant (and observer) when possible, and this parameter is then ignored. If there is no spectral information it's however important that the illuminant and observer match what was used for the target.
  • -C, don't model color inconstancy, that is use relighting instead of a chromatic adaptation transform.
  • -S, render spectra for inputs that lack it.
  • -B, don't re-balance target so the most neutral patch becomes 100% neutral. Per default the target D50 XYZ values used for color corrections are remapped slightly such that the whitest patch in the target equals 100% neutral (in reality they usually differ 1-2 DE). This means that the ideal white balance for the profile will be the same as picking the whitest patch, which is what most will expect. By enabling this flag there will be no re-balancing, and instead the ideal white will be the true white, which is typically 1-2 DE different from the white patch. This is more of mathematical interest than having a real visible effect.
  • -b <patch name or index>, manually point out most neutral patch in target. Per default DCamProf will search and find the most neutral among the lightest patches in the target, in some cases it may not be the lightest white but maybe a neutral gray below. If you want to make sure it picks a specific patch you can specify it with this parameter.
  • -k <chroma delta>, adjust chroma of XYZ reference values, intended for subjective look adjustment.
  • -x <exclude.txt>, text file with sample id to exclude from target, one id per line, or class + id. The purpose of this file is to make it simple to remove possibly problematic patches and re-generate the profile to evaluate changes.
  • -m, -f, pre-generated matrices if you want to skip the matrix finder step.
  • -s, run an alternate (much) slower matrix optimization algorithm which can find a little better result.
  • -L, skip LUT in informational report. LUT is always generated anyway, but if you intend to make a matrix profile in the end it can be useful to show the DE report on the matrix only while you do repeated runs tuning weights.
  • -r <dir>, directory to save informational reports and plots.
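The re-balancing described for the -B flag can be sketched as a per-channel scaling of the target XYZ values so that the chosen white patch lands exactly on the D50 white point. This is an illustrative assumption of the idea, not DCamProf's actual code; the function names are my own:

```python
# Hedged sketch of the default re-balancing: scale all target D50 XYZ
# values per channel so the chosen white patch becomes 100% neutral.
D50_WHITE = (0.96422, 1.00000, 0.82521)  # D50 white point, 2-degree observer

def rebalance(patches_xyz, white_patch_xyz):
    """Per-channel scaling so white_patch_xyz maps to the D50 white point."""
    scale = [w / p for w, p in zip(D50_WHITE, white_patch_xyz)]
    return [tuple(c * s for c, s in zip(xyz, scale)) for xyz in patches_xyz]

# A white patch that is slightly off neutral (1-2 DE off is typical):
white = (0.9580, 0.9930, 0.8300)
patches = [white, (0.2000, 0.1200, 0.0500)]
rebalanced = rebalance(patches, white)
print(rebalanced[0])  # the white patch is now exactly the D50 white point
```

With -B this remapping is skipped and the true white, rather than the white patch, becomes the ideal white balance.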

Illuminants

It's important that you get illuminants right in order to generate a correct profile. The .ti3 file format does not contain information on which illuminant was used for the camera raw RGB or XYZ values. This means that you must keep track of that yourself and provide the information to DCamProf via the -i and -I parameters.

There are a few possible scenarios:

  • Target file has no spectral information, camera RGB values were created for the desired calibration illuminant, XYZ reference values for some other illuminant. Re-generation is not possible.
  • Target has spectral information, DCamProf knows the spectrum of the calibration illuminant, XYZ values in the files don't matter as DCamProf will re-generate from spectra. RGB values must still match calibration illuminant.
  • Target has spectral information, DCamProf has illuminant spectrum and you provide camera's SSFs. Both RGB and XYZ values will be re-generated from spectra.
For optimal results you want to avoid the first case, that is, provide a target with spectral information and a calibration illuminant with known spectrum. Then all XYZ values will be re-generated from spectra. If the target lacks spectra you can choose to simulate them by enabling the -S flag. It cannot exactly recreate the original unknown spectra of course, but if DCamProf has to perform a relighting transform the results will generally be more accurate than without simulated spectra.

In the most flexible case you have the camera's SSFs too. In this case also the RGB values are regenerated for the calibration illuminant you choose.

If you lack camera SSFs the RGB values from the file will be used directly. That typically means that the file comes from Argyll scanin of your converted raw shot of a physical test target, and will by nature contain the RGB values for the light that illuminated the test target at the time of shooting. In this case it depends on the use case whether it's important that the calibration illuminant you specify matches the real one, as follows:

  • If you intend to make a DNG profile and you want it to be good at estimating the light temperature, it's important to match the calibration illuminant. For a single-illuminant DNG profile the light temperature estimate is only informational; for a dual-illuminant profile it controls forward matrix mixing and therefore has a more direct effect on the color correction result. You can read more about this in the white balance section.
    • Note that for dual-illuminant DNG profiles the calibration illuminant must match a known EXIF lightsource.
  • If you intend to model color appearance with CAT (enabled per default), it's important to match the calibration illuminant so the CAT gets the appropriate starting point when converting to the profile connection space which is D50.
  • If you disable CAT (-C), that is enable 100% perfect color constancy, the calibration illuminant does not affect the result, except for the DNG aspects covered in the first bullet. That is in this case it's purely informational.

DCamProf will need XYZ values for both the calibration illuminant and the "profile connection space" which always is D50 (same for ICC and DCP). A target file only contains XYZ values for one illuminant, and thus the other or both must be calculated. If there is no spectral information Bradford CAT will be used, which does not provide as precise results as when calculating from spectra. With the -S flag you can enable rendering of virtual spectra which often gives a bit better result than using the Bradford CAT.

If you have spectra the XYZ values will be generated for the calibration illuminant first, and then converted via CAT02 to the profile connection space D50, and in that case it's of course important that the calibration illuminant is reasonably truthful. The purpose of using CAT in this case is to simulate the minor color appearance differences that occur due to the illuminant. You can disable this behavior with the -C flag.
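To see what such a chromatic adaptation does, here is a minimal sketch of the Bradford CAT (the fallback used when spectra are missing) adapting an XYZ color from StdA to D50. The matrix values are the standard Bradford transform; the helper names are my own, and this is an illustration rather than DCamProf's implementation:

```python
# Standard Bradford chromatic adaptation: convert to a sharpened cone
# space, scale by the ratio of destination/source white points, convert back.
M = [[ 0.8951,  0.2664, -0.1614],
     [-0.7502,  1.7135,  0.0367],
     [ 0.0389, -0.0685,  1.0296]]
M_INV = [[ 0.9869929, -0.1470543,  0.1599627],
         [ 0.4323053,  0.5183603,  0.0492912],
         [-0.0085287,  0.0400428,  0.9684867]]

A_WHITE   = (1.09850, 1.00000, 0.35585)  # StdA white point, 2-degree observer
D50_WHITE = (0.96422, 1.00000, 0.82521)  # D50 white point

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def bradford_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white to dst_white via Bradford cones."""
    src = mat_vec(M, src_white)
    dst = mat_vec(M, dst_white)
    cone = mat_vec(M, xyz)
    scaled = [c * dst[i] / src[i] for i, c in enumerate(cone)]
    return mat_vec(M_INV, scaled)

# The StdA white point itself lands on the D50 white point:
print([round(c, 4) for c in bradford_adapt(A_WHITE, A_WHITE, D50_WHITE)])
```

CAT02 used in the spectra case has the same structure but a different cone matrix.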

In any case if you shoot the target in for example outdoor daylight you don't need to worry if you don't really know the exact temperature, guess one of D50 (midday sunny) or D65 (midday overcast). If you have a spectrometer you can bring a laptop and use Argyll's spotread to read the spectrum of the light and find out what the correlated color temperature is so you get help to choose the closest one. You can actually feed the actual measured spectrum to DCamProf as well, which makes a difference if CAT is enabled, and will make the color matrix as accurate as possible.

Here's the Argyll command to use to read the illuminant spectrum: spotread -H -T -a -s

(If you run spotread with -S (capital S) you get a spectral plot for each measurement which can be interesting. It's a bit user-unfriendly though, the program may seem to lock up. You need to activate the plot window and press space to get back to the program.)

If you lack reflectance spectra in the target file, the specified XYZ illuminant must match the one used for the values in the target. The values could for example originate from a target manufacturer reference file, and are then often relative to D50 or D65. Make sure to look it up so you can provide the correct one. Unlike the calibration illuminant, it's really important that this one is exactly right.

If you have measured the XYZ reference values yourself using a spectrometer you should have spectra in the target file. If not, they have probably disappeared along the way; look over the workflow and see if you can provide DCamProf with spectral information.

Weighting

Weighting is controlled with the -w parameter and is used to specify the importance of colors. There are two purposes: one is to inform the matrix optimizer so it will try to make a better fit for one set of colors at the cost of another. The other is to specify a maximum accepted error (in Delta E) for each class of colors; this is used in matrix optimization but more importantly to relax the LUT bending. A LUT can always stretch, compress and bend to match the target patches exactly, but that can result in sharp and even inverted bends causing ugly gradient transitions (typically most visible in photos with strongly blurred out-of-focus backgrounds where one color transitions into another). In this case it's better to relax the fitting, and the LUT optimizer will automatically relax in the best way based on the provided acceptable delta E levels.

If you create a matrix-only profile matrix fitting is obviously important, but if you make a LUT it's generally not worthwhile fine-tuning matrix weights, as the LUT will correct the residual error anyway. Feel free to experiment though.

The matrix optimization result is mostly affected by the actual patches provided. Re-weighting them will have some effect, but changing patch set has typically a much larger effect.

When using a LUT it can be better to let the matrix optimizer ignore the delta E relaxing weights, as it may provide a better starting point for the LUT. Use the -M parameter to control this.

To assign different weights to different groups of patches the target file must be split into "classes" (=groups of patches), specified through a "SAMPLE_CLASS" column in the file. The idea is that you can have a naming such as "skin", "forest_green", "textiles" etc and then for example assign greater importance to skin-tones.

Class names in the target file are a DCamProf concept and are not available in Argyll-generated files. By running an Argyll file through dcamprof make-target -p argyll.ti3 -a name out.ti3 you can add a class column, and then edit the text file manually and change names to split into more classes if you like. That way you can split even a 24-patch color checker into several classes. However, the main purpose of class-splitting is to be used when you have a number of distinct patch sets of different spectral types, as you typically have when making a target directly with a camera's SSFs.

DCamProf applies a pre-weighting per default (the user weighting is added on top); this is to handle the situation when you combine several patch sets with different density. Some patch sets may have lots of patches concentrated around some specific color, while another may have few patches widely separated. So that the dense sets don't totally dominate, there's a pre-weighting based on Delta E distances that normalizes all patches. This is generally a good thing, but if you really want "1 patch = 1 unit" you can disable this normalization by adding -W. This normalization only affects the matrix optimizer; the LUT optimizer only looks at the max acceptable delta E deviations.

If you don't have any class names (or all patches are in the same class) there's no value in providing a matrix weight. However the delta E relaxation can still be useful.

Finding the right weights is a trial-and-error process. Dump reports and plots (-r report_dir) and visualize the results, see the section on report directory files for more examples.

For each class you can assign up to five weights using the -w parameter, these are in order:

  1. Maximum acceptable DE deviation
  2. Matrix optimization weight
  3. CIE DE2000 kL
  4. CIE DE2000 kC
  5. CIE DE2000 kH

Here follows a few examples:

"-w skin 0,4 -w nature 6,1": the class "skin" should have full LUT correction (0 Delta E) and have 4 times the weight than ordinary patches during matrix optimization, while the class "nature" can be relaxed with up to 6 DE for a smoother LUT, matrix optimizer weight set to 1 (no change).

"-w cc24 0,1 -w glossy 4,0": the class "cc24" should have full LUT correction (0 Delta E) and unchanged weight, while the class "glossy" can be relaxed with up to 6 DE for a smoother LUT, and is totally excluded from the matrix optimizer (weight 0). Excluding a class from the matrix optimizer can be useful when you have split a target in one normal color part and one with difficult-to-match extreme saturation colors. Then you may not want the extreme colors disturb the matching of the normal colors, as the LUT may get a more difficult starting point in that case.

"-w all 3.5": "all" is a reserved name to point out all patches in the target file. Here the LUT optimizer is instructed to relax all patches up to 3.5 DE to make a smoother LUT.

"-w all 0,1,4,1,1": 0 delta E, unchanged weighting (1), and then three weighting parameters for delta E, which is the lightness, chroma and hue k weights for the CIEDE2000 algorithm. A higher value means less priority, so here we say we're less bothered with lightness than chroma (saturation) and hue. Lowering the importance of lightness is often a good idea, it's probably the least disturbing to have errors in.

Lowering the importance of chroma is also often a good idea; this will typically push the matrix towards less saturation overall (as less saturation gives the optimizer more room to fine-tune the other parameters) and improve hue accuracy. Try "-w all 0,1,4,4,1" for example.
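The structural role of the k weights can be illustrated with a simplified weighted delta E. The real CIEDE2000 formula adds scaling and rotation terms, but the kL, kC and kH weights divide the lightness, chroma and hue differences in the same way; this simplified formula is for illustration only:

```python
import math

def weighted_de(dL, dC, dH, kL=1.0, kC=1.0, kH=1.0):
    """Simplified weighted delta E: each k divides its term, so a larger
    k means errors in that dimension count for less (lower priority)."""
    return math.sqrt((dL / kL) ** 2 + (dC / kC) ** 2 + (dH / kH) ** 2)

# A pure lightness error of 4:
print(weighted_de(4, 0, 0))        # counts fully with default weights
print(weighted_de(4, 0, 0, kL=4))  # down-weighted, as with "-w all 0,1,4,1,1"
```

With kL=4 the same lightness error contributes only a quarter as much, which is why raising kL tells the optimizer to spend its precision on chroma and hue instead.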

Note that all this weighting stuff is only about matching the specific provided patches, the profiler can't magically make the camera work better. For example, if a camera is bad at separating green colors, it will still be bad even if the particular green patches in the target have been mapped correctly.

When adjusting weights for matrix optimization, use the -L parameter to get the summary report on the matrix only, and then do trial and error. Note that the matrix optimizer is limited by the requirement to preserve the whitepoint, and all factors are somewhat interconnected. For example, if you don't care about lightness but want perfect hue it may not help that much to increase the kL parameter to very large values, because reducing lightness precision too much can reduce hue precision too. By making several runs, adjusting parameters step-wise, you'll see where the limits are.

I've experimented with more advanced weighting parameters, such as making it possible to specify that too-saturated colors are worse than under-saturated ones, but I came to the conclusion that it only added complexity without giving predictable or valuable results. For the saturation parameter the matrix will typically naturally strive for less saturated results, as that gives more room to refine the other parameters.

Still, the results can seem a bit random at times; it can happen that if you decrease the importance of a parameter its precision still increases. This is a natural result of a problem with an infinite number of approximate solutions: changing a parameter changes the whole equation in unpredictable ways, so new and better minima might appear. Targets with few patches, such as the CC24, are more likely to cause the weighting results to appear random than targets with many patches.

Color matrix and forward matrix

You may have noted that I have adopted the DNG profile names of matrices also for the native DCamProf format. This is simply because the names are familiar. It doesn't lock the native format to DNG profiles.

The forward matrix which operates in D50 XYZ space using D50 as the reference illuminant is not unique to DNG profiles, it's used for ICC profiles too. A matrix-only ICC profile can be said to contain a forward matrix. As the conversion from the calibration illuminant to D50 is needed by both profile standards DCamProf has adopted the forward matrix.

The color matrix is however DNG-specific, it's used for estimating the temperature and tint of the scene illuminant. It won't be used when generating an ICC profile.

Looking at DCRaw internals we find the color matrix again though ("cam_xyz" in DCRaw-speak); it uses a D65 color matrix per camera to render its default colors. So you can use DCamProf to contribute color matrices to DCRaw or other software that use DCRaw-style matrices.

White balance

You can control white balance settings with the -b and -B parameters. Per default DCamProf will make a profile which expects the white balance to be set by color picking the most neutral light patch. In some cases the target "white" is actually considerably less neutral than a darker neutral gray patch and that will then be picked instead as reference. If you are going to use the target as white balance setter for a scene it's safest to specify a specific patch as reference, you do that with -b.

However, the most accurate correction is achieved if you let DCamProf optimize towards a virtual 100% neutral patch; this will typically place the ideal white balance a little bit off the real target white. As it's only about 1-2 DE it's really only of mathematical interest, and should not make a visible difference in any normal circumstance. If you want this you enable the -B flag.

Note that this only affects the forward matrix (which is used for the color corrections), the values used for color matrix calculation will not be re-balanced as it doesn't make sense; the color matrix is not used for color correction but only for figuring out the light's temperature and tint and thus re-balancing its data would only reduce its precision.

If you're working with SSFs and virtual targets you probably already have a perfect white in the target, and then this setting will make no difference of course.

Profile-making tips

Do experiment! Learn how to use a plotting tool and plot results. To get a general feel of how profiling works in practice you can play around with one of the example camera SSFs, and then use the acquired knowledge and feel when you tune settings for your targeted camera (where you may not have SSFs).

What you will see is that there is no such thing as a perfect result, and the farther from the white-point you get, the tougher it will be to compensate errors. While it can be fun to try to get a profile that works all the way out to the locus, it will hurt performance for common colors. It's generally better to maximize performance for colors you're actually going to shoot. Pointer's gamut approximates the limit of how saturated real reflective colors can be; colors outside it need to be represented by emissive (or transmissive) light like lasers and diodes. It's generally not worthwhile trying to get a good match outside Pointer's gamut. If you have the camera's SSFs you can plot and see how well the camera can actually separate colors; you will probably see that there are some issues when it comes to extremely saturated colors, and no camera profile can compensate for that.

Consider that a perfect match to a specific color checker does not mean that the color precision is perfect, not even for those colors. It's only perfect for the particular spectra that color checker has, somewhat compromised by various measurement errors throughout the profile-making process. Therefore I suggest always applying some LUT relaxation to smooth profiles at least a little. As true perfection cannot be had, it's better to make sure color transitions are smoothly rendered.

Using the -m and -f parameters you can experiment with using separate parameters for optimizing the matrix and the LUT. If the LUT seems to need a lot of strange stretching it may be because the matrices are no good, and in some cases it might be worthwhile to render them separately, perhaps with a different patch set even (which is feasible when you're using SSFs).

If you see very large errors after matrix-only correction, say 10 DE or more, the LUT may get a tough job and be forced to make extreme stretches that can cause bad gradients and an unpredictable profile. One way to test a profile for robustness is to load it in a raw converter, show a color checker with many colors, and change the white balance. If some color suddenly changes much faster than the others, the LUT is probably making a strong local stretch at some point. Of course you can see this by plotting as well, but the white balance test is a good sanity check.

Modern cameras should get a decent match with the matrix alone; if you see large errors, such as 10 DE or more, it's likely that there is something wrong with your input data, such as poor lighting of the test target, glare, bad reference values or reflectance spectra.

Make sure to check what the dynamic range test shows (printed in the console output when running make-profile). Example output:

Camera G on darkest patch(es) is 9.8% lighter compared to observer Y.
  Y dynamic range is 4.78 stops, G dynamic range is 4.64 stops, difference
  0.14 stops. A small difference is normal, while a large indicates that there
  is glare.
In the above example there's only a 0.14 stop difference, and up to about 0.25 should be okay (that is, negligible effect on the profiling result). If it's more than that you should either try to linearize the data (using the testchart-ff command), or better, redo the measurement with less glare, or simply exclude the darkest patches from the profiling process. Linearization may work in the range from 0.25 to say 0.60 stops; if the difference is larger than that it will not be precise enough, so you'll need to exclude the worst patches (the darkest ones) or redo the measurement.

Note that you can only trust the dynamic range test result if the target has pure black patches. If the darkest patch is colored there is a large risk that the result is misleading.
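The stop figures in the test output are just base-2 logarithms of light ratios, so you can verify them yourself. A small sketch (the function name is my own, not DCamProf's):

```python
import math

def dynamic_range_stops(white_level, dark_level):
    """Dynamic range in stops between the lightest and darkest level."""
    return math.log2(white_level / dark_level)

# If camera G on the darkest patch is 9.8% lighter than observer Y says
# it should be, the G dynamic range is smaller by log2(1.098) stops:
diff = math.log2(1.098)
print(round(diff, 2))  # ~0.13, consistent with the example's rounded 0.14
```

Glare adds a roughly constant light offset, which affects the darkest patches the most; that is why a large stop difference between G and Y indicates glare.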

Examples

In all examples below I assume the target file has reflectance spectra. If not you need to specify the XYZ reference values illuminant using the -I parameter.

Example 1: basic profile making with default parameters, using calibration illuminant StdA (calibration illuminant = the light source the target was shot under):

  dcamprof make-profile -i StdA target.ti3 profile.json
Example 2: assuming we have a target with cc24 and pointer border, we make a smoother LUT by relaxing patch matching, 0.5 delta E on cc24 and 4 delta E on the pointer border. We still let the matrix make best match it can without relaxation (-M). We also reduce the importance of lightness matching through setting of the CIE DE2000 weights (4,1,1). By providing camera's SSF (-c) the RGB values will be re-generated for the given illuminant (D65). Plotting data is saved to the "dump" directory (-r).
  dcamprof make-profile -r dump -c ssf.json -i D65 -M -w cc24 0.5,1,4,1,1 \
    -w pointer 4,1,4,1,1 target.ti3 profile.json
Example 3: make matrices using one target, and the LUT using another by running make-profile twice:
  dcamprof make-profile -i D65 target1.ti3 m.json
  dcamprof make-profile -i D65 -m m.json -f m.json target2.ti3 profile.json

test-profile

  dcamprof test-profile [flags] [target.ti3] <profile.json | profile.dcp>
The test profile command is used to test how well a profile can match a specific target, or if you skip the target it will just run some diagnostics on the profile.

It will print a text summary on the console, for deeper information you should use the -r parameter to dump text files and plots. If you skip the target you should generally provide -r to get any useful information.

Overview of flags:

  • -o <observer>, used if patch values are re-generated, default 1931_2.
  • -c <ssf.json>, camera SSFs, used to re-generate target RGB values, or if you want to analyze the camera's color separation performance.
  • -i <test illuminant>, the illuminant the test is run under, which per default is the same as the profile's calibration illuminant.
  • -I <target XYZ reference values illuminant>, default is same as the test illuminant. Only required if the target lacks spectral data.
  • -C, don't model color inconstancy, that is use relighting instead of a chromatic adaptation transform.
  • -S, render spectra for inputs that lack it.
  • -b, -B, white balance settings, see make-profile for documentation.
  • -w <r,g,b> | m<r,g,b>, provide camera WB as RGB levels or RGB multipliers. Per default white balance is derived from target, or when provided from the camera's SSFs.
  • -k <chroma delta> adjust chroma of XYZ reference values, see make-profile for documentation
  • -L, skip LUT. If the profile has a LUT but you want to test how it performs with only matrix correction enable this flag.
  • -f <file.tif | tf.json> de-linearize RGB values in target, that is run provided transfer function backwards. This is only relevant for ICC profiles made for raw converters that apply a transfer function, such as Capture One.
  • -r <dir>, directory to save informational reports and plots.
As always, a target file which contains spectra is preferable, so that XYZ reference values can be re-generated rather than having to be converted using a chromatic adaptation transform.

White balance

Per default DCamProf will calculate the optimal white balance to match the target as well as possible. This is analogous to setting white balance in your raw converter with the white balance picker on the white patch on a color checker. You can adjust this white balance behavior in the make-profile command, and if you have done that you should mirror the same settings in the test-profile command.

Anyway, if you instead want to test how the profile will match colors when the camera is set to a different white balance (such as a camera preset) you can provide a custom white balance via the -w setting.

It's given as a balance between red, green and blue, or as channel multipliers. To find out what multipliers a camera is using you can use exiftool on a raw file. White balance can be stored in different ways depending on the raw format, and it's out of the scope of this documentation to cover it in full. In most cases it's some sort of multipliers, and often green is repeated twice, like this:

  WB RGGB Levels Daylight : 15673 8192 8192 10727
And then you simply provide "-w m15673,8192,10727" to DCamProf; note the "m", which says that we provide white balance as multipliers rather than the actual resulting balance between the channels, which is 1/m.

When DCamProf prints a white balance it will show the balance normalized to 1.0, meaning that the above example is translated to 0.52,1,0.76.
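The conversion from multipliers to the normalized balance DCamProf prints can be sketched like this (a small illustration of the 1/m relation above, not DCamProf's actual code):

```python
def multipliers_to_balance(mr, mg, mb):
    """Channel balance is 1/multiplier, normalized so green = 1.0."""
    balance = [1.0 / mr, 1.0 / mg, 1.0 / mb]
    return [round(b / balance[1], 2) for b in balance]

# The "WB RGGB Levels Daylight" example above (green given twice, used once):
print(multipliers_to_balance(15673, 8192, 10727))  # [0.52, 1.0, 0.76]
```

A large red multiplier thus corresponds to a small red balance: the multiplier compensates for how little red signal the sensor records under that light.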

Analyzing camera color separation performance

There is a special feature embedded in the test-profile command, which is that if you provide the camera's SSF you can get an analysis of the camera's color separation performance. This is a pure "hardware" test and has thus no relation to the profile so if you are only interested in this result you can provide any dummy profile.

To get a sane result you need a highly populated grid of patches to test with. I recommend generating a locus grid, like this:

  dcamprof make-target -c cam-ssf.json -p locus-grid -g 0.01 locus-grid.ti3
This will take quite some time, but once generated you can reuse this grid with any camera since when you provide the SSF and illuminants the RGB and XYZ values will be regenerated from spectra:
  dcamprof test-profile -r dump1 -c cam-ssf.json -i D50 locus-grid.ti3 any-profile.json
To get the plot you need to provide the -r parameter, and then the file is named ssf-csep.dat. You can plot it for example with this gnuplot script:
  unset key
  set palette rgbformula 30,31,32
  set cbrange [0:300]
  plot 'gmt-locus.dat' using 1:2:4 w l lw 4 lc rgb var, \
    'ssf-csep.dat' pt 5 ps 2 lt palette, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-pointer.dat' using 1:2:4 w l lw 2 lc rgb var
What you see is a heat-map in a u'v' chromaticity diagram, here limited to 300 max. Each dot shows how much the camera signal will change in 16 bits (65536 steps) for 1 delta E change in chromaticity (= change in hue and saturation with constant lightness). No current camera is really 16 bit, this is just used as a fixed reference to get a number in a comfortable-to-read range. For this type of test you should not worry about a camera's dynamic range and read noise, shot noise will be the limiting factor.

A black dot means that the signal change is zero and thus the camera cannot separate color at that chromaticity location and no profile can ever change that.
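My reading of the metric described above can be sketched as follows. This is an illustrative assumption of how such a figure could be computed from neighboring patches, not DCamProf's actual implementation, and the names are my own:

```python
def separation_per_de(raw_a, raw_b, delta_e):
    """Raw signal change (in 16-bit units, 65536 steps) per 1 delta E of
    chromaticity change between two neighboring patches; raw values 0..1."""
    change = max(abs(a - b) for a, b in zip(raw_a, raw_b))
    return change * 65536 / delta_e

# Two neighboring patches 1 delta E apart whose raw RGB values barely
# differ give a low figure, i.e. poor color separation there:
value = separation_per_de((0.500, 0.400, 0.300), (0.501, 0.400, 0.300), 1.0)
print(value)  # ~65 out of 65536 steps: a weak, noise-prone signal change
```

A figure of zero means the two chromaticities produce identical raw signals, the black-dot case where no profile can recover the distinction.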

The test is run against the target provided, and it expects a dense grid-like layout of patches; if your target is coarse the results can be misleading. The locus grid generated in this example uses reflectance spectra, so the colors tested are all related to the illuminant; the colors are as light as the illuminant allows for that chromaticity. This means more saturated colors are naturally a bit darker and thus harder to separate. However, they become harder for the eye too. Often cameras will show good separation capability in the purple range, and that is partly because the eye is relatively poor there. As the values are related to Delta E they are related to the eye's capability (as modeled by the observer's color matching functions).

The diagram always shows values relative to a D50 white point. You can test with a different illuminant using the -i parameter. You will see the result changing, but note that the coordinates are always remapped to D50 in the diagram.

Note that the generated locus grid will not go all the way to the edge of the line of purples. This is because the line of purples is actually black (it's at the border of the eye's sensitivity), so moving in a bit gives saner colors. The spectral generator can still have some issues reaching all the way to the locus and the line of purples, so you may see some gaps.

This diagram in u'v' chromaticity coordinates shows the color separation capability of a Canon EOS 5D Mark II. The locus, Pointer's gamut and the AdobeRGB gamut are shown as reference. Only points that have a patch in the provided target will be plotted, so here you see some gaps at the borders where there are no test patches.

The unit of the heat map is how many 16-bit units (65536 steps) the camera raw signal changes if the color chromaticity changes by 1 CIEDE2000 unit. The test reflectance spectra are a generated grid related to a D50 illuminant, made as bright as possible for each chromaticity coordinate.

The darker the heat (lower signal difference), the worse the color separation; if it's zero the camera can't differentiate at all. For complete information about limits you need to relate to photon shot noise as well, which is outside the scope of this document. What we can see is that the camera runs into problems towards the locus, mainly on the cyan side and towards the red corner. We also see it's good at purples, which is partly because the eye is not as good there and it thus takes more distance to reach one delta E.

We can also see that the diagram is a bit "worried", and that we have a notable minimum inside AdobeRGB towards the red corner on the purple side. Some odd minima here and there and the messy look are typical, as the SSFs differ greatly from the observer's CMFs. We see smoother behavior in the green area, because there all three SSFs are involved in producing the signal.

DCP/ICC vs native DCamProf profiles

You can test both DCPs/ICCs and native profiles. DCPs will be rendered according to the DNG specification, but tone curve and baseline exposure is ignored. If a DCP contains both HueSatMap and LookTable only the HueSatMap is applied (as the DNG profile intention is such that HueSatMap should be about accuracy, LookTable about a subjective "look"). DCamProf will print information about this when run.

By design DCPs cannot represent colors outside the Prophoto gamut triangle, so if you're doing testing with extreme colors close to the locus you will see clipping to the Prophoto gamut edge. Otherwise a DCP should perform about the same as a native profile.

If you test a profile from Adobe or another commercial raw converter you will likely see rather large color errors. This is because those profiles are not designed to reproduce accurate colors, but rather to provide a subjective "look", like film. This applies to both DCPs and ICCs.

ICC profiles don't have the ProPhoto gamut limit that DCPs have; both matrix and LUT ICC profiles can, like native profiles, cover the full human gamut.

Gradient testing

Example crop from the gradient test file showing a few poor transitions, such as the yellow vertical band in the center
If you enable -r <dump> a generated gradient TIFF file will be dumped, first without any processing as gradient-ref.tif and then processed through the profile including the LUT(s) as gradient.tif. This means that the content in gradient-ref.tif corresponds to white-balanced "raw" camera data, and the output is what that becomes when processed through the profile.

The purpose of this is to diagnose the smoothness of the profile's LUT as a complement to plotting. Note that as the gradient goes through all combinations (with some spacing) there will be some "impossible" raw values too, for example maximum blue but no red and green output. It's quite common that a profile clips or produces artifacts in those areas, but this is no problem as they will never appear in real images.

Dumping this artificial gradient image is also very useful for verifying smoothness during the design of a subjective look using look operators.

The RGB primaries in the output are ProPhoto, and an ICC is embedded in the files. Beware that poor gradients and clipping are likely to occur due to the screen's color management, so turning it off temporarily when analyzing the more saturated parts of the image may be worthwhile. Use the unprocessed gradient-ref.tif as a sanity check; if that shows a banded gradient it's probably due to the display's color management.

Examples

Example 1: test how well profile.dcp matches target.ti3 under illuminant StdA, and write text files and plot data to the directory "dump" (it's assumed target.ti3 has spectra, if not you need to provide the -I parameter too):
  dcamprof test-profile -r dump -i StdA target.ti3 profile.dcp
Example 2: test how well the profile will match colors with a camera white balance preset (found out via exiftool for example):
  dcamprof test-profile -r dump -w m15673,8192,10727 -i D65 target.ti3 profile.json
Example 3: disable the profile's LUT and see how well the matrix matches the target (note that some DCPs designed with other tools are made such that the matrix is very far from correct color and the LUT is required to get close):
  dcamprof test-profile -r dump -L -i D65 target.ti3 profile.dcp
Example 4: don't run any patch matching test, but only dump plots and reports:
  dcamprof test-profile -r dump profile.dcp

make-dcp

  dcamprof make-dcp [flags] <profile.json> [profile2.json] <output.dcp>
Converts a profile in DCamProf native format to Adobe's DNG Camera Profile (DCP) format, which can be used directly in various raw converters. There's really not much to this command; generally you only run it with the -n flag to specify a unique camera name.

Overview of flags:

  • -n <unique camera name>, must match what raw converters are expecting, provide within quotes.
  • -d <profile name>, the profile name tag string, used by some raw converters (like Lightroom) in the select box when choosing profile to use, so come up with a name that makes the profile easy to identify. If spaces in the string, provide within quotes.
  • -c <copyright>, the copyright tag string. If spaces in the string, provide within quotes.
  • -b <baseline exposure offset>, optionally set the baseline exposure offset tag.
  • -B, exclude the DefaultBlackRender=None tag, meaning that some converters will then do automatic black level adjustment. If you're a Lightroom user you're probably used to automatic black level adjustment and may want it also for your DCamProf profile, and then you should enable this flag.
  • -i <calibration illuminant 1>, specify a different calibration illuminant 1 than the tag found in the source profile, useful if the source has "lsOther" and you're making a dual-illuminant profile.
  • -I <calibration illuminant 2>, specify a different calibration illuminant 2 than the tag found in the source profile, useful if the source has "lsOther" and you're making a dual-illuminant profile.
  • -m <other.dcp> copy illuminant(s) and color matrices from the provided DCP. Do this if you want your profile to calculate white balance the exact same way as the provided profile. This is useful if you need to avoid a white balance shift.
  • -h <hdiv,sdiv,vdiv>, hue and saturation divisions of LUTs (default: 90,30,15). The value divisions are only used for 3D LUTs.
  • -v <max curve matching error>, used to automatically calculate value divisions needed for the LookTable when applying a neutral tone operator. The default should do.
  • -F, skip forward matrix, will generate an old-style DNG profile without forward matrix, this is not recommended but may in some rare situations be necessary as some ancient software doesn't support forward matrices.
  • -L, skip LUT (= matrix-only profile)
  • -O, disable forward matrix whitepoint remapping. Generally not a good idea to disable as it may render the profile unusable in some DCP software.
  • -G, skip gamma-encoding of 3D LUTs. This only applies if a 3D LUT is used, that is a tone reproduction operator is applied. Normally the value channel in the LUT is gamma encoded as it better matches the eye's lightness sensitivity and we get a better use of value divisions. It may lead to compatibility issues with older/simpler DNG software though. If using this flag, consider increasing value divisions to retain precision.
  • -H, allow hue shift discontinuity between LUT entry neighbors. Most (probably all) DNG pipelines don't support this, so it's generally a bad idea to allow it.
  • -t <linear | none | acr | custom.json>, embed/apply a tone curve. For colorimetric accuracy you should have no curve, or set it to "linear" as some raw converters apply a curve if the DCP has none. To apply a default film-curve, which may yield a more pleasing look, choose "acr" which is the default curve by Adobe and used by the DNG reference code. Note that the tone reproduction operator (-o) will affect how this curve is used. Default: "linear". Curves can be cascaded, that is you can provide -t more than once.
  • -o <neutral | standard | custom.json>, tone reproduction operator (default: neutral). Will only be applied if a non-linear curve is applied (-t parameter).
  • -r <dir>, directory to save informational reports and plots

HueSatMap LUT generator

The DNG HueSatMap is generated from the native 2.5D LUT in the DCamProf profile. This is done by sampling it at the hue and saturation divisions provided. The default is 90,30 (controlled with the -h parameter), which is a quite dense table, and there's little reason to change that. If you do want to change it, it's probably to reduce the table size to get a smaller profile.

The DCamProf native LUT is spline-interpolated while a HueSatMap is linearly interpolated. This means that you may get smoother gradient transitions if you have a bit denser HueSatMap than needed for actual target matching. Therefore I think the 90,30 density is quite good to have even if the profile is based on very few patches.
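The effect of table density on linear interpolation can be illustrated in isolation. This is a generic sketch (not DCamProf code): it measures the worst-case piecewise-linear error when sampling a smooth curve at different densities, showing why a denser HueSatMap gives smoother transitions even if the profile was built from few patches.

```python
import math

def lin_interp(xs, ys, x):
    """Piecewise-linear lookup; xs must be ascending and bracket x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] * (1 - t) + ys[i + 1] * t
    return ys[-1]

def max_error(divs):
    """Worst-case error of linearly interpolating a smooth curve
    (here sin(pi*x) as a stand-in) sampled at 'divs' divisions."""
    xs = [i / divs for i in range(divs + 1)]
    ys = [math.sin(math.pi * x) for x in xs]
    return max(abs(lin_interp(xs, ys, j / 1000) - math.sin(math.pi * j / 1000))
               for j in range(1001))

# Error shrinks roughly with the square of the division count.
print(max_error(10), max_error(30), max_error(90))
```

The same principle applies to the HueSatMap: linear interpolation between entries converges quickly to the spline-smooth native LUT as the table gets denser.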

If you dump plotting data with the -r parameter you will get data for the HueSatMap so you can visualize it. This is useful if you experiment with the table density.

Example plot for comparing native LUT with HSM LUT (useful to see if you should adjust HSM table size):

The plot shows a zoomed in section of the HSM LUT (blue dots) and the native LUT (beige grid).

  splot \
    'nve-lut.dat' w l lc "beige", \
    'hsm-lut.dat' pt 1 lc "blue", \
    'gmt-prophoto.dat' w l lc "red", \
    'gmt-locus.dat' w l lw 4 lc rgb var
The HSM LUT operates in linear ProPhoto RGB space, converted to HSV. This means that in a u'v' coordinate system it looks very dense close to the white point, and then becomes gradually less dense.

Manual edits

When running the make-dcp command you can specify many but not all tags. If you want to adjust some of the remaining tags you need to do this manually using the dcp2json and json2dcp commands:
  1. dcamprof dcp2json input.dcp dcp-profile.json
  2. edit dcp-profile.json using a text editor
  3. dcamprof json2dcp dcp-profile.json output.dcp

Dual-illuminant DNG profiles

If you want to make a dual illuminant profile you make two separate native profiles and provide them both to make-dcp, like this:
  dcamprof make-dcp profile1.json profile2.json dual.dcp
The lower temperature illuminant should be listed first, and you must have illuminants with known temperature, i.e. you cannot have "Other", which the profile will have if you have used a custom calibration illuminant. If so, specify illuminants using the -i and -I parameters, setting EXIF names that match the temperatures of your custom illuminants as closely as possible.

Note that the light source temperature is the only thing that matters to DNG Profiles, it makes no difference if it's a fluorescent (peaky spectrum) or tungsten (halogen, smooth spectrum), so if your calibration illuminant was a 3500K halogen lamp, the EXIF light source "WhiteFluorescent" (3525K) is the best choice.
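Picking the nearest EXIF name by temperature can be sketched like this. The temperature table below is an abbreviated, nominal set of values (the full EXIF LightSource list is longer; the figures are assumptions apart from the 3525K WhiteFluorescent value mentioned above):

```python
# Approximate correlated color temperatures for a few EXIF LightSource
# values (abbreviated and nominal; illustrative only).
EXIF_CCT = {
    "StdA": 2850,
    "WhiteFluorescent": 3525,
    "D50": 5000,
    "D55": 5500,
    "D65": 6504,
    "D75": 7500,
}

def nearest_exif_illuminant(temp_k):
    """Pick the EXIF light source whose nominal CCT is closest."""
    return min(EXIF_CCT, key=lambda name: abs(EXIF_CCT[name] - temp_k))

# The 3500K halogen example from the text:
print(nearest_exif_illuminant(3500))  # WhiteFluorescent
```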

DCamProf performs no sanity check on your illuminant listing, so if you use "Other" or place the highest temperature light source first, the resulting profile may not work in your raw converter.

The most common dual-illuminant combination is StdA and D65. It generally makes little sense to combine say D50 and D65 as they're too close. The general idea of dual-illuminant profiles is to make a generic profile that works in varied light conditions, and then you want to combine two light sources whose white points are relatively widely spaced. Look at the color temperatures plotted in a chromaticity diagram, for example, to get an idea of how much they differ.
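You can quantify that spacing yourself. This sketch converts nominal xy chromaticities of the white points (commonly cited 2-degree observer values, rounded to four digits) to CIE 1976 u'v' and measures the distance:

```python
import math

def xy_to_uv(x, y):
    """CIE 1976 u'v' from xy chromaticity."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def uv_dist(xy1, xy2):
    u1, v1 = xy_to_uv(*xy1)
    u2, v2 = xy_to_uv(*xy2)
    return math.hypot(u1 - u2, v1 - v2)

# Nominal white point chromaticities (xy, 2-degree observer, rounded)
STD_A = (0.4476, 0.4074)
D50 = (0.3457, 0.3585)
D65 = (0.3127, 0.3290)

# StdA-D65 white points are several times farther apart than D50-D65,
# which is why StdA + D65 is the useful pairing.
print(round(uv_dist(STD_A, D65), 4), round(uv_dist(D50, D65), 4))
```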

Avoiding white balance shift

If you have used a previous profile and custom white balance in your raw converter, applying your new profile will likely cause a white balance shift. See the section on DCP-specific white balance properties for a description why this can occur.

If you want to avoid this you need to replace the color matrix/matrices in your new profile with those found in the old, using the -m parameter. As color matrices are only used for white point temperature calculations and not for actual color corrections, this will not affect color rendition. White point color temperature prediction is fully taken over from the old profile, though.

Tone reproduction

If you're making a profile for reproduction work you should not apply any curve, likewise if the targeted raw converter is designed for linear colorimetric (scene-referred) profiles. This is the default. However, many raw converters expect general-purpose profiles to apply a contrast-increasing "film curve", and in the case of DNG profiles this curve is embedded in the profile itself.

Per default DNG raw converters use a type of RGB curve that has some color distortion issues, as discussed in the tone curves section. DCamProf can instead apply its own curve type (via 3D LookTable corrections) which is more neutral. This is enabled per default (controlled by the -o parameter), but will only be used if a curve is applied (-t parameter). Its properties are discussed in the section about DCamProf's neutral tone reproduction operator. You may also want to read the DNG profile implementation notes regarding this before using it.

The supplied curve is one of the built-ins "linear", "none" and "acr" (Adobe Camera Raw's default curve, which is a good choice in most circumstances), a custom curve in a JSON file, or a RawTherapee curve (.rtc) file. The JSON file format can be the same as for the transfer function, but only the "GreenTRC" tag will be used, or "TRC" or "GrayTRC" if those are available. You can also provide a "ProfileToneCurve" from a DNG profile. As usual all other tags are ignored, so you can provide a full JSON of a DNG profile (as produced by the dcp2json command).

The RawTherapee .rtc format is supported, but only for "Spline" curves. It's a simple text file format with XY handles for a spline curve in sRGB gamma (both X and Y axes are gamma-scaled), see the data-examples directory for an example. If you wish you can design the curve using RawTherapee and export it from there.

The current tone reproduction operators are "neutral", and "standard", which in the DNG profile case means just embedding the curve and making no change; the raw converter will then likely apply an RGB type of curve. You can also provide the name of a JSON file that contains custom weights for the neutral tone reproduction operator. See the ntro_conf.json file in the data-examples directory for further details. Normally you should not need to provide custom weights, but should, for example, the auto curve analysis lead to a too large or too small chroma scaling factor, you can set it manually using the configuration file.

Some raw converters are meant to be used with colorimetric profiles without any curve, but may still not have any good tone reproduction operator built in; that is, it's very hard to achieve realistic colors as soon as you apply contrast. In that case it may still be worthwhile to apply the tone reproduction in the profile, if the raw converter supports both ways (which is the common case).

Designing a subjective look

An example image rendered with a neutral profile (that is, colorimetrically accurate, with a contrast S-curve and DCamProf's neutral tone reproduction operator).
Same image but here with a designed subjective look. Without layering the image on top it will be difficult to see any difference, and this is how it should be. A successful designed look is typically very close to neutral. The most visible change in this look is that yellows and greens have been warmed up.
You can extend the configuration file for the tone reproduction operator with "look operators". The purpose of these is to design a subjective look which is applied on top of the neutral tone reproduction.

Be warned that this is not an easy task, especially since DCamProf lacks a graphical user interface. The process of designing a look means rendering lots of profiles with minor adjustments and comparing until you are satisfied with the result. It requires that you have a good eye for color and know what you want to achieve.

The intention of DCamProf's "look operators" is to make very subtle adjustments, small deviations from the neutral look. That is it's not intended to make "filters" as seen on Instagram and other popular social network services.

Some key concepts:

  • The array of look operators is applied after the tone reproduction operator.
  • The look operators are applied in order one after another, so the order matters.
  • With a "Blend" array inside each look operator you define to which part of the gamut it should be applied and how it should be blended in.
  • The main color space is CIECAM02 JCh (Lightness, Chroma, Hue), but you also have access to RGB space(s) with ProPhoto primaries.
  • You can change the color of neutrals with the operator, but it will only work for ICC profiles (DNG profiles do not apply changes to neutrals by design), so if you design for DNG make sure you blend out the effect towards the neutral axis (if needed).
As already touched upon, it will be very difficult to drastically change the look and get good results, so if you don't like how DCamProf renders colors in its neutral mode at all, you are in trouble, as the adjusted profile will typically still be quite close to neutral. In that case I suggest using some other software, as DCamProf is foremost about neutral and realistic color rendering.

Available look operators:

  • AddHue
    • Note that the "Curves" operator is typically more useful when you want to modify hue.
  • AddLightness
  • AddChroma
  • ScaleChroma
  • ScaleLightness
  • SetTemperature
    • Used to warm up or cool down. A common adjustment is a slight warmup of midtones and highlights while cooling down shadows.
    • 5000K tint 0 is always reference temperature (=no change), lower temperatures will cool down, higher will warm up.
  • Curves
    • RGB curves operating with ProPhoto primaries, can be used for traditional skin tone tuning using RGB/CMY curves.
    • User-selectable gamma by number, or specifying "sRGB".
  • Stretch
    • Compress or expand in one to three dimensions (CIECAM02 JCh). Usually this is used in the hue dimension to either reduce hue spread or increase it in some range.
    • You can for example use it to even out skin tones.

As there is no GUI you need to work with trial and error. Using a raw converter that can quickly and effectively load new profiles (like RawTherapee) is necessary to keep your sanity. To see which colors will be affected in an image (that is, what area the "Blend" section covers), a good approach is to use "ScaleChroma" with "Value" 0; all colors covered by the blend will then be monochrome (set "BlendInvert" to true if you want the inverse).

For example if you want to target skin tones you adjust the "Blend" section so only the faces become monochrome, and then you can use this section for various adjustments in the real profile.

Curves are used in blending, and in the "Curves" and "Stretch" operators. There are three types of curves: "Linear", "Spline" and "RoundedStep". The "RoundedStep" is just a step function with an S-curve transition; the other two are self-explanatory. Be warned that it's difficult to design a spline blind, as it easily suffers from overshoots. You can test curves in gnuplot or design curves in RawTherapee. The RGB curves operator can be mirrored exactly in RawTherapee by selecting the ProPhoto working space and selecting "sRGB" gamma in the operator, so you can design the curves operator look there, export the curves, open them in a text editor, reformat and put them in your JSON file.
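As an illustration of the "RoundedStep" idea, here is a generic smoothstep sketch. This is the textbook S-curve step, not necessarily DCamProf's exact transition shape:

```python
def rounded_step(x, lo, hi):
    """Step from 0 to 1 with a smooth S-curve transition between lo and hi.

    Uses the classic smoothstep polynomial 3t^2 - 2t^3; an illustration
    of the concept, not DCamProf's actual curve implementation.
    """
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    t = (x - lo) / (hi - lo)
    return t * t * (3.0 - 2.0 * t)
```

Such a curve is flat (zero slope) at both ends of the transition, which is what makes the step free of visible kinks when used for blending.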

When blending in various look operators there is a risk that you disturb the overall smoothness of the profile, perhaps because you're making too strong adjustments within a too narrow blending zone. An effective way to diagnose this is to use the test-profile command and dump an image with processed gradients.

To see how the syntax works and get further documentation, look in the data-examples directory to find a documented example.

Hue shift discontinuity

The DCP LUT works in RGB-HSV space, which means that the hue is defined as an angle from 0 to 360 degrees, and modifications to the hue are defined as an offset to the input hue angle.

When the input hue angle falls in between two LUT table entries the offset is interpolated. For example, if entry A says "add +40 degrees" and entry B says "add -30 degrees" and the input angle falls exactly in between, the average is calculated as "(+40 + -30)/2 = +5 degrees".

If we have a large hue shift, say going from +170 to -170, the actual difference between those two neighbors is only 20 degrees and the average would be +/-180, but most DNG pipelines (probably all) don't support hue shift discontinuity and simply calculate this as "(+170 + -170) / 2 = 0". I'd say this is a bug; hue angle discontinuity is a well-known caveat when working with this type of coordinate system, something that well-designed code handles. The discontinuity is just in the math (it must wrap around somewhere), not in the actual hue transition.
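The wrap-aware averaging described above can be sketched like this (generic code, not the DNG reference implementation):

```python
def naive_hue_avg(a, b):
    """Plain average of two hue offsets in degrees: what pipelines
    without wrap handling effectively do."""
    return (a + b) / 2.0

def wrapped_hue_avg(a, b):
    """Average two hue offsets along the shorter arc, wrapping at 180."""
    diff = ((b - a + 180.0) % 360.0) - 180.0   # shortest signed distance
    mid = a + diff / 2.0
    return ((mid + 180.0) % 360.0) - 180.0     # normalize to [-180, 180)

print(naive_hue_avg(170.0, -170.0))    # 0.0   (the artifact)
print(wrapped_hue_avg(170.0, -170.0))  # -180.0 (i.e. +/-180, the intent)
```

For neighbors without a wrap (+40 and -30) both functions agree and give the +5 degrees from the earlier example; they only differ when the shorter arc crosses the +/-180 boundary.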

Unfortunately Adobe's DNG reference code doesn't handle the wrap, and thus probably all software supporting DNG profiles don't either. Therefore make-dcp will per default abort if it detects a hue shift discontinuity.

Fortunately it's very unlikely that a discontinuity would occur in a normal colorimetric profile. It can quite easily happen when you design a subjective look with look operators though, and the solution is then generally to fade out the operator on the "HSV-Saturation" axis.

The built-in DNG pipeline in DCamProf uses the DNG reference code and will thus cause discontinuity artifacts just like the others. This means that you can see discontinuity artifacts when dumping a test gradient.

Observer remapping

DNG profiles have linear ProPhoto as working space, which is defined with the 1931_2 observer. That is, raw converters using DNG profiles expect the D50 white point to map to 1931_2's D50. If you have used a different observer you will get slightly different XYZ values, and the D50 white point will thus have a slightly different coordinate. There may be a 1-2 delta E difference.

Many raw converters sanity-check the profiles to see that the whitepoint in the forward matrix matches 1931_2 D50, and if not they consider the DCP invalid and refuse to load it.

Therefore DCamProf will also do this check, and if it detects a different white point it assumes a different observer has been used in profile making and adapts the matrices and LUT generation with a linear Bradford transform.

This transform is certainly not perfect for transforming from one observer to another, but as the coordinate shift between observers is so small, the error of the transform is probably considerably less than the overall accuracy errors in the profiling process, so I think one should not need to worry. Some brief testing I've made confirms this.
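For reference, a linear Bradford adaptation looks like this. It's a standard textbook sketch using the published Bradford matrix, not DCamProf's actual code:

```python
# Bradford cone response matrix and its inverse (standard published values)
M_BFD = [[ 0.8951,  0.2664, -0.1614],
         [-0.7502,  1.7135,  0.0367],
         [ 0.0389, -0.0685,  1.0296]]
M_BFD_INV = [[ 0.9869929, -0.1470543,  0.1599627],
             [ 0.4323053,  0.5183603,  0.0492912],
             [-0.0085287,  0.0400428,  0.9684867]]

def _matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def bradford_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white to dst_white (linear Bradford):
    convert to cone-like responses, scale per channel by the white point
    ratio, convert back."""
    s = _matvec(M_BFD, src_white)
    d = _matvec(M_BFD, dst_white)
    lms = _matvec(M_BFD, xyz)
    lms = [c * dw / sw for c, sw, dw in zip(lms, s, d)]
    return _matvec(M_BFD_INV, lms)

# Sanity check: adapting the source white itself lands on the destination white
D65 = [0.95047, 1.00000, 1.08883]
D50 = [0.96422, 1.00000, 0.82521]
print(bradford_adapt(D65, D65, D50))
```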

As the default observer is 1931_2 this remapping will only take place if you have changed the observer (-o parameter) when making the profile. If you want to compare errors you can run a test-profile on both the native profile and the resulting DCP. The native profile will not need observer remapping. Note that the mapping from the native LUT to the HSM LUT will also generate slight differences from the native profile. Make sure you provide the desired observer in test-profile too, otherwise you will see large errors.

The color matrix is not remapped; as it's not used for the LUT, and the difference between observers is far smaller than the error you can expect from a plain matrix conversion, it's kept as is.

Examples

Basic conversion, this is what you will do most of the time (replace name with your specific camera name):
  dcamprof make-dcp -n "Canon EOS 5D mark II" profile.json profile.dcp
Dual-illuminant profile with the illuminants specified (overrides tags in source profiles):
  dcamprof make-dcp -n "Canon EOS 5D mark II" -i StdA -I D65 profile1.json profile2.json profile.dcp

dcp2json, json2dcp

  dcamprof dcp2json <camera.dcp> [<dcp.json>]
  dcamprof json2dcp <dcp.json> <camera.dcp>
Convert DCP profiles to and from JSON format, useful for making manual edits.

make-icc

  dcamprof make-icc [flags] <profile.json> <output.icc>
Converts a profile in DCamProf native format to an ICC profile which can be used directly in various raw converters. Note that ICC profiles that work for one raw converter may not work in the next.

Overview of flags:

  • -n <camera name>, actually the ICC "description" tag, may contain what you like but camera name is a good idea.
  • -c <copyright>, the copyright tag string. If spaces in the string, provide within quotes.
  • -s <CLUT side division>, how many divisions the LUT cube side should be divided into, default is 33.
  • -p <lablut | xyzlut | matrix>, profile type (default: lablut if input has LUT otherwise matrix).
  • -L, skip LUT of input profile, the output profile can still be a LUT if you force it to with the -p parameter.
  • -W, let profile correct white balance, usually not desired except possibly in some specific reproduction setups.
  • -f <file.tif | tf.json>, adapt ICC to match transfer function in provided tiff/json, only required for raw converters that apply a curve to the raw data before applying the ICC.
  • -t <none | acr | custom.json>, apply a tone curve to the LUT. For colorimetric accuracy you should have no curve. To apply a default film-curve, which may yield a more pleasing look, choose "acr". You can also supply a custom curve. Note that the tone reproduction operator (-o) will affect how this curve is used. Default: "none". Curves can be cascaded, that is you can provide -t more than once.
  • -o <neutral | standard | custom.json>, tone reproduction operator (default: neutral). Will only be applied if a non-linear curve is applied (-t parameter).
  • -T, don't apply tone curve to LUT. Used if the raw pipeline will apply an RGB curve after the ICC profile is applied. Note that this is not common, if the raw pipeline applies a curve separate from the ICC it's normally done before the ICC is applied.
  • -r <dir>, directory to save informational reports and plots

Compatibility

While ICC profiles in general are rigidly standardized, how camera ICC profiles are applied in raw converters is not. They are rarely applied directly on a linear non-white-balanced image as DNG profiles always are; rather there is some pre-processing step before, and possibly a post-processing step after. This means that ICC profiles are not as easy to move between different software as DNG profiles; you may need to design your ICC profile specifically for one raw converter.

I intend to support the most popular raw converters. I think DCamProf already supports most of them, but I haven't tested all, so if you find any compatibility issue let me know. I cannot promise I will implement support for every ICC-using raw converter though; if it's too messy I won't support it.

DCamProf supports raw converters which either provide demosaiced linear raw data as input to the ICC, or the same with a curve. If a curve is applied that must be taken into account during the workflow, see the ICC example workflow for further information.

White balance

Most (probably all) ICC-using raw converters will apply the camera's white balance before the ICC profile is applied. You can see this if you export a file for profiling, if the white balance seems applied, then it is.

Still, a camera's "as shot" white balance rarely matches the calibration illuminant perfectly, that is, a perfectly white patch will not be perfectly white but have a slight tint. DCamProf, which knows the XYZ coordinates for each patch and thus what white should be, can correct for this if you'd like.

However, this would mean that when the profile is loaded the white balance will change so that a perfect white (it rarely exists in the target, so it's interpolated) becomes RGB 1,1,1. This might be what you want, but likely not. Probably you want to keep the camera's original white balance, and therefore this is the default when DCamProf makes ICC profiles. DCamProf will simply make sure that the profile maps camera RGB 1,1,1 to D50, that is, use the native "forward matrix" mapping as is.

Note that since DCamProf normalizes the white balancing when making its native profile, it doesn't matter which white balance the test image had, meaning that you can convert the same native profile to both a DCP and an ICC profile, even when it was made from non-white-balanced data (as DCP requires).

Are there cases when you do want the ICC profile to correct the white balance? Yes, for example in a fixed light reproduction setup when you want to use a white balance preset on the camera (easy to remember and recall) but still get as correct white balance as possible in the final image, then the ICC profile should correct it. To do so supply the -W flag when making the ICC. For this to work the native profile must have been made from a white-balanced test image though (using the camera's preset of interest).

ICC profile type and ICC LUTs

DCamProf can make a pure matrix profile (with shaper curves if a transfer function is provided via the -f parameter), or a LUT profile with either camera RGB to XYZ or camera RGB to Lab conversion.

By specifying the type you can make a LUT profile even if the input does not have a LUT, which may be useful for testing in some cases.

An ICC LUT is always 3D: a simple table with RGB triplets in and corresponding XYZ or Lab triplets out. Ideally you would have a table entry for every possible RGB combination, which would be 65536^3 for 16-bit data, but that would fill your hard disk with just the ICC profile, so it's not a good idea. Instead a sparse grid is used (33 entries per side by default) and the values in between are interpolated.

DCamProf generates the ICC 3D LUT by sampling the native 2.5D LUT (so the actual transformations by the LUT will still be 2.5D, of course), and applies an input curve to get better perceptual spacing of the LUT cube divisions. ICC LUT resolution can at times be a problem; if you have trouble matching some patches you can try increasing the cube divisions from the default 33, but be warned that the size of the ICC file grows very fast. A reasonable test value is 128, which will give you about a 12 megabyte ICC profile, and then reduce from there towards the default 33.
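The cubic size growth is easy to estimate with a back-of-the-envelope sketch. It assumes 3 output channels at 16 bits per grid point and ignores fixed header overhead, so it approximates the CLUT payload only:

```python
def clut_payload_bytes(side, channels=3, bytes_per_value=2):
    """Approximate CLUT payload size: side^3 grid points, each holding
    an output triplet of 16-bit values (header overhead not counted)."""
    return side ** 3 * channels * bytes_per_value

# Payload grows with the cube of the side division:
for side in (33, 64, 128):
    print(side, clut_payload_bytes(side) / 1e6, "MB")
```

The default 33 gives roughly 0.2 MB of table data, while 128 lands at about 12.6 MB, consistent with the ~12 megabyte profile mentioned above.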

DCamProf can make an RGB to XYZ or an RGB to Lab LUT. In theory the results should be identical. More effort has been put into the Lab LUT concerning smoothing etc., so I recommend using that, which is also the default.

Tone reproduction

The tone reproduction functionality is largely the same as described for DNG profiles, so I recommend reading that first. The difference is that ICC profiles don't embed a separate curve and look table; instead the curve modifies the single LUT directly.

Many ICC raw converters apply a curve on the side though (Capture One, for example). In that case you should employ a linear curve during profiling and keep that linear curve when using the finished profile, as the LUT itself applies the tone curve.

If you want to apply a subjective look you can do so, as documented in the subjective look design section. A difference from DNG profiles is that ICC profiles will allow you to change the color of neutrals.

Plotting ICC LUTs

You can add the -r <report_dir> flag to get report files which include ICC plot files. As ICC LUTs are 3D they are a bit cumbersome to visualize. You can plot all points in the 3D LUT "cube" by plotting icc-lut.dat, but it may be better to plot one slice at a time using the icc-lutXX.dat files. The main thing to look for is whether the LUT seems dense enough to replicate the stretching that is in the native 2.5D LUT. You don't want it to be overkill dense either, as that will make the ICC file larger than needed, and LUT ICCs are always a bit large because they are always 3D.

Plotting the full 3D LUT with error vectors and target:
  splot \
    'icc-lut.dat' w d lc "beige", \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'target-icc-lutve.dat' w vec lw 2 lc "black", \
    'targetd50-xyz.dat' pt 5 ps 2 lc rgb var
The example shows the default LUT with 33x33x33 points; it still becomes very dense in a 3D plot. The sides of the "gamut" are quite sharp and the shape is boxy; this is because the LUT reaches its full range and clips (this is outside the real color range though, so don't worry).
The same plot as above, but now with just a slice:
  splot \
    'icc-lut10.dat' w d lc "beige", \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'target-icc-lutve.dat' w vec lw 2 lc "black", \
    'targetd50-xyz.dat' pt 5 ps 2 lc rgb var
There are 20 slices indexed 00 to 19; here we plot index 10, which covers 0.50 to 0.55 in the native LUT lightness range (which is Lab lightness scaled to the 0.0 - 1.0 range).
The same plot as above, that is a LUT slice with target and error vectors, now viewed straight from above and zoomed in on a detail around skin-like colors.

We see here that the profile is less accurate on darker colors (longer error vectors), while spot on for the brighter ones. The beige crosses show the LUT points in the slice. They come in close-by pairs as the slice spans two levels (look from the side to see), so for the actual "2D" density think of each nearby pair as one point.
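The slice files divide the native LUT lightness range into 20 equal bands; a minimal sketch of the index mapping (the function name is made up):

```python
def slice_index(lightness, slices=20):
    """Map native LUT lightness (0.0-1.0, i.e. Lab L/100) to a slice index,
    assuming 20 equal-width slices as produced in the report directory."""
    return min(int(lightness * slices), slices - 1)

print(slice_index(0.5))   # 10 -> file icc-lut10.dat covers 0.50-0.55
print(slice_index(0.98))  # 19, the last slice
```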

icc2json, json2icc

  dcamprof icc2json <camera.icc> [<icc.json>]
  dcamprof json2icc <icc.json> <camera.icc>
Convert ICC profiles to and from JSON format.

ICC is a large standard and supports many types of devices in addition to cameras, such as printers, scanners and monitors. DCamProf's ICC parsing is focused only on ICC version 2 camera profiles; it will ignore any irrelevant tags and refuse to parse ICC profiles that are not camera profiles. That is, it's intended to look at and edit camera profiles, nothing else. This means that icc2json does not work well as a general ICC disassembler. If you really need to see all tags in an ICC profile you can for example use Argyll's iccdump tool.

tiff-tf

  dcamprof tiff-tf [flags] <target.tif> [<transfer-function.json>]
Extract transfer function (TIFFTAG_TRANSFERFUNCTION) from a TIFF file and write it to a JSON file. The transfer function is a linearization curve; that is, if the data has been made non-linear by a tone curve, the transfer function will be the inverse of that tone curve.

The extracted transfer function can then be used in other relevant commands such as make-icc to linearize data. However, as make-icc/make-dcp etc can take the TIFF file directly, extracting it first is generally only for informational purposes.

You can however also calculate a tone curve using this command (as the difference between two transfer functions), which cannot be done with any of the other commands.

  • -R, skip reconstruction. The transfer functions are defined using integers, and due to rounding there are often several entries in a row with the same number. Per default DCamProf will reconstruct those values with a robust linear interpolation. If you don't want that to happen you enable this flag.
  • -f <linear.tif | linear.json>, reference TIFF / JSON with the transfer function corresponding to linear response. This is then used to convert the provided tiff to a tone-curve in linear space rather than a transfer function.
Some raw converters, like Capture One, apply the tone curve before the ICC profile. If you want to extract that tone curve to use in a DCamProf workflow you need to remove the transfer function for the linear component. Do the following: export one TIFF with linear response, "linear.tif", and one with the desired curve, "curve.tif", and then run the command:
  dcamprof tiff-tf -f linear.tif curve.tif tone-curve.json
The output will then contain a tone curve in linear space calculated by applying the transfer function from "linear.tif" to the inverse of "curve.tif". This tone curve can then be provided to make-icc or make-dcp with the -t parameter.
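Conceptually, the calculation chains the linear reference's transfer function after the inverse of the curved file's transfer function. A sketch of that math, assuming both transfer functions are sampled as monotonic encoded-to-linear arrays (names are illustrative, not DCamProf's code):

```python
# Derive a tone curve from two transfer functions. Each transfer function
# maps encoded values (0..1) to linear values (0..1); the tone curve is the
# linear reference's transfer function applied to the inverse of the curved
# file's transfer function.
import numpy as np

def tone_curve(tf_linear, tf_curve):
    """tf_linear, tf_curve: 1D arrays sampling encoded->linear mappings."""
    n = len(tf_curve)
    x = np.linspace(0.0, 1.0, n)                 # linear input values
    enc = np.linspace(0.0, 1.0, n)               # encoded sample positions
    # Invert tf_curve: for linear input x, find the encoded value mapping to it
    inv_curve = np.interp(x, tf_curve, enc)      # assumes tf_curve monotonic
    # Run those encoded values through the linear reference's transfer function
    return np.interp(inv_curve, np.linspace(0.0, 1.0, len(tf_linear)), tf_linear)

# Example: linear reference is identity, curve.tif was exported with gamma 1/2.2
enc = np.linspace(0.0, 1.0, 256)
tf_lin = enc.copy()          # identity: already linear
tf_crv = enc ** 2.2          # linearization of a gamma 1/2.2 encoding
tc = tone_curve(tf_lin, tf_crv)  # recovers the gamma 1/2.2 tone curve
print(round(float(tc[128]), 3))  # ~0.731, i.e. (128/255)^(1/2.2)
```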

txt2ti3

  dcamprof txt2ti3 <input.txt> <output.ti3>
Import spectral data from a text file, further described in the make-target section.

make-testchart

  dcamprof [flags] make-testchart <output.ti1>
Generate an Argyll .ti1 file (like Argyll's own targen) that can then be used with Argyll's printtarg command to make a test chart that can be printed. Overview of flags:
  • -p <patch count>, choose number of patches to generate, default is 100.
  • -w <percentage white patches>, specify the percentage of white patches. The target will be speckled with white patches which then can be used as anchors for flatfield correction. Default: 20%.
  • -b <black patch count>, black patches don't really contribute to profiling, but it's good to have a few for sanity checking contrast and exposure. The default count is 5; they will be evenly spread out over the target.
  • -g <gray steps>, if you want a linearization step wedge specify here how many in-between gray levels there should be. The number of gray patches on each level is the same as the black count.
  • -l <layout row count>, specify the intended row count of the target. Specifying layout is required if you want an optimal white patch distribution.
  • -d <layout row relative height>,<column relative width>, relative width and height of patches, you can specify it in any unit you like as it's only relative. Default: 1,1 (square patches).
  • -O, specify this flag if the chart layout has even columns offset a half patch. Argyll's printtarg makes Colormunki style targets this way.
  • -r <dir>, directory to save informational reports and plots.
Currently this command is very basic: it only supports RGB output (as most inkjet printers today are controlled as pseudo RGB devices), and you can only control the patch count, not which patches are generated. Generation starts with one white patch, and then patches are spread out with as large a perceptual distance between them as possible, with the constraint that only the lightest possible color of a given hue and chroma is used. That is, there will for example be no brown patch, as brown is actually dark orange. The rationale is that since the LUT is 2.5D it's only necessary to profile the lightest colors; any darker colors will be grouped together in a chroma group anyway.
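The spreading strategy reads like greedy farthest-point sampling; a toy sketch of that idea, where plain Euclidean distance in an RGB cube stands in for the perceptual metric (none of this is DCamProf's actual code):

```python
# Illustrative greedy farthest-point selection: start from white, then
# repeatedly pick the candidate color farthest from all chosen patches.
import random

def spread_patches(candidates, count, dist):
    """candidates[0] is white; dist is the distance metric."""
    chosen = [candidates[0]]
    while len(chosen) < count:
        best = max(candidates,
                   key=lambda c: min(dist(c, p) for p in chosen))
        chosen.append(best)
    return chosen

# Toy example in a plain RGB cube with Euclidean distance
random.seed(1)
cands = [(1.0, 1.0, 1.0)] + [tuple(random.random() for _ in range(3))
                             for _ in range(200)]
euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
patches = spread_patches(cands, 10, euclid)
print(len(patches))  # 10 patches, starting with white
```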

The patch placement in terms of perceptual distance will not be perfect as the command is unaware of the printer's profile, but as the coverage is intended to be dense it doesn't matter that much. If it becomes popular to make your own targets I may further develop this command to support printer profiles and more.

Today's inkjet printers typically have more colorants than older models, which means that the spectra can be a bit more varied. However the spectral variation will still suffer compared to commercial test targets made with special printing techniques. Your mileage may vary.

The test chart generator's intention is to fill the gamut, so it will need quite many patches to not miss any corner. 50 patches is probably more than enough, but if you're printing an A4 sheet you can just fill it even if that is a bit overkill. You can increase the white patch percentage to save ink.

With the -b and -g parameters you can add step wedges for linearization. This might be an advantage for targets that will be shot in situations where glare can be an issue.

testchart-ff

  dcamprof [flags] testchart-ff <input.ti1> <input.ti3> [<input2.ti3>] <output.ti3>
...or
  dcamprof testchart-ff <input.tif> <flatfield.tif> <output.tif>
Either flatfield correct .ti3 data or a linear .tif file. If you're correcting a .ti3 file it must be from a target speckled with white patches, and the layout needs to be specified via a .ti1 file and the layout flags. If you are correcting a .tif file, the input files must be 16 bit linear gamma TIFF files.

It's also possible to linearize .ti3 files; this requires a neutral step wedge in the file, or, even better, neutral step wedge patches spread out over the whole surface. Overview of flags:

  • -l, -d, and -O layout specification flags working the same as for the make-testchart command.
  • -L, enable linearization.
  • -r <dir>, directory to save informational reports and plots.
If you shoot indoors and have only one light it's difficult to get even light. In this example the difference between the lightest and darkest white patch is as much as 1 stop. The flatfield correction algorithm uses all the white patches as anchors and makes one thin plate spline surface per channel for the correction. While I recommend having more even light than shown here, this will work.
If you have been really careful when shooting your target to get uniform light, the difference from applying flatfield correction will be negligible, so it's certainly not mandatory. If you shoot a large target indoors with only one light, however, it's most likely that you need to flatfield correct.

If your target is speckled with white patches you don't need to shoot an extra flatfield shot; correction can be made directly on the .ti3 data. When the target is photographed we know that if the lighting is perfect all white patches should give the same RGB values. Light is never 100% uniform though, so the white patches will vary. Based on the positions of those white patches and their variations, thin plate spline correction maps are created that scale all patch values to match uniform light.
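The principle can be sketched as follows. Note that a fitted plane stands in here for DCamProf's per-channel thin plate spline, just to keep the example short, and all names are illustrative:

```python
# Simplified flatfield sketch: fit a smooth surface per channel through the
# white-patch values and divide every patch by it. DCamProf uses one thin
# plate spline per channel; a least-squares plane stands in here.
import numpy as np

def flatfield(positions, values, white_idx):
    """positions: (N,2) patch centers; values: (N,3) RGB;
    white_idx: indices of the white patches used as anchors."""
    corrected = np.empty_like(values)
    wp = positions[white_idx]
    # Design matrices for a plane a*x + b*y + c
    A = np.column_stack([wp[:, 0], wp[:, 1], np.ones(len(wp))])
    Afull = np.column_stack([positions[:, 0], positions[:, 1],
                             np.ones(len(positions))])
    for ch in range(3):
        coef, *_ = np.linalg.lstsq(A, values[white_idx, ch], rcond=None)
        surface = Afull @ coef                 # estimated light falloff
        corrected[:, ch] = values[:, ch] * surface.mean() / surface
    return corrected

# Toy 6x4 target where the light falls off linearly across the chart
pos = np.array([[x, y] for y in range(4) for x in range(6)], float)
true = np.full((24, 3), 0.5)
falloff = 1.0 - 0.05 * pos[:, 0:1]
measured = true * falloff
whites = np.array([0, 5, 18, 23])              # corner white patches
corrected = flatfield(pos, measured, whites)
print(np.allclose(corrected, corrected[0]))    # True: even "light" again
```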

The indexes of white, black and gray patches are found from the provided .ti1 file. If you have used the make-testchart command you already have such a file; if not you need to use a text editor to hack your own that matches your target. It doesn't need to be a full-featured .ti1; testchart-ff only looks at the RGB values: 100,100,100 is white, 0,0,0 is black, and any value in-between with all components equal is gray (the actual value is not important, the gray value is taken from the .ti3 XYZ reference).

Most commercial targets are unfortunately not speckled with white patches, and then you need to pre-process the TIFF file before you feed it to Argyll's scanin. First shoot the target; then, with the exact same lighting, place an equally large or larger white card in the exact same position as the target and shoot it from the exact same camera position. Then make the exact same crop/rotation of both files and export to linear 16 bit TIFF. The image must be cropped enough that only the white section of the card is visible; if any surroundings or edges of the card are visible the result will not be good.

Feed those TIFF files to the testchart-ff command and you will get a new flat-field corrected output file which you then can feed to Argyll's scanin.

Another alternative is to print a chart with only white patches (i.e. only a grid) that exactly matches the target you have, and swap it in for a second shot (light and camera setup must of course be stable). You then run testchart-ff with this extra .ti3 file: first the layout .ti1 file (showing only whites in this case), then the white target .ti3, then your real target .ti3, and finally the output .ti3 file. This is a somewhat cumbersome way to do flatfield correction; in this case it would probably be simpler to shoot a gray or white card instead and pre-process the TIFF file.

There are specific white card products to buy, but these are quite expensive. Instead you can for example use an unprinted high quality photo paper (without see-through); I recommend a smooth matte OBA-free paper. Make sure it lies perfectly flat just like the target. It does not matter if the card is slightly off-white; flatfield correction just corrects differences from the global average, so the color of the card does not matter (if the card is colored, the global average changes too, so it cancels out).

Linearization

You can enable linearization with the -L flag. It only works on .ti3 files, so if you have a TIFF you can flatfield it first, then scan it and then linearize the .ti3 file by running this command again. A flatfield pass is always run first (if possible), then linearization is applied.

Camera sensors are linear, but glare can distort the response, and linearization has some limited ability to compensate for that. When you run this command or make-profile, the dynamic range of camera green is compared to the observer Y reference values; if there's a large difference (but not too large), linearization might improve results.

Note that linearization is a very crude way to cancel out glare, it's much more approximate than flatfield correction is for evening out light, so it's not a good idea to rely on this.

If the G vs Y dynamic range differs by less than about 0.25 stops, linearization will likely not make any visible difference and it's better not to apply it. If it's more than say 0.6 stops, the glare has distorted the data so much that, while linearization will improve results, it's typically better to exclude the darkest patches (=those most affected by glare) from profiling. In the in-between range you may find linearization useful.
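The decision rule above can be summarized in a short sketch. The 0.25 and 0.6 stop thresholds come from the text; the function name is made up:

```python
# Sketch of the G vs Y dynamic range check described above.
def linearization_advice(drange_g_stops, drange_y_stops):
    """Both arguments: dynamic range in stops (log2 of max/min)."""
    diff = abs(drange_g_stops - drange_y_stops)
    if diff < 0.25:
        return "skip linearization (no visible difference)"
    if diff > 0.6:
        return "too much glare: exclude the darkest patches instead"
    return "linearization may help"

# Camera green spans 6.0 stops while observer Y reference spans 6.4 stops
print(linearization_advice(6.0, 6.4))  # linearization may help
```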

I do recommend doing everything you can to minimize glare so that linearization will not be required. While it's perfectly okay to rely on flatfield correction, as it can accurately even out light, it's not a good idea to rely on linearization. Matte targets can be shot both outdoors and indoors with low enough glare that it can be neglected. Semi-glossy targets generally cannot be shot outdoors at all (just too much glare), and shot right indoors the level should be low enough that you should not need to linearize. In other words, linearization is a function you hopefully will not need to use.

average-targets

  dcamprof average-targets <input1.ti3> [<input2.ti3> ...] <output.ti3>
If you have problems with too much noise in the darkest patches in your test target photos, you can make multiple shots, convert all to .ti3 files and then average them using this command. Averaging shots is an alternative to classic HDR merging and has the advantage that all shots are fully usable and thus scannable by Argyll's scanin command.

Of course you can do averaging/merging of images in other software too and make a new image which you then feed to Argyll's scanin, however you must then be absolutely sure that the software produces 100% linear results.
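What the command does to the patch values can be sketched as a plain per-patch average, which shrinks random noise by the square root of the number of shots (illustrative only; DCamProf reads real .ti3 files):

```python
# Average per-patch RGB values from several shots of the same target.
def average_shots(shots):
    """shots: list of dicts mapping patch name -> (r, g, b)."""
    names = shots[0].keys()
    return {name: tuple(sum(s[name][c] for s in shots) / len(shots)
                        for c in range(3))
            for name in names}

# Two noisy readings of the same dark patch
shots = [{"A1": (0.020, 0.030, 0.010)},
         {"A1": (0.024, 0.028, 0.012)}]
avg = average_shots(shots)
print(avg["A1"])  # averaged patch values, ~ (0.022, 0.029, 0.011)
```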

match-spectra

  dcamprof match-spectra [flags] <reference.ti3> <match.ti3> <output-match.ti3> [<output-ref.ti3>]
Find the spectra in match.ti3 that best match the spectra in reference.ti3, either as seen by an observer or by camera SSFs.

Overview of flags:

  • -o <observer>, observer for DE comparison, default 1931_2.
  • -i <test illuminant>, the illuminant the comparison is run under, default D50.
  • -c <ssf.json>, camera SSFs, if provided these will be used instead of the observer for patch spectrum comparison, and then Euclidean distance is used as error value instead of CIEDE2000.
  • -S, scale spectra (that is adapt lightness) in output to better match the reference spectra.
  • -N, normalize patches before comparison, that is equalize lightness instead of keeping the original lightness differences.
  • -U, don't allow repeats of the same spectrum in the output. Without this flag, if the best match for a given patch is also the best for another, it's written to the output each time.
  • -E, consider all spectra as emissive. DCamProf supports a tag in the .ti3 files that says whether a spectrum is emissive or not. This flag causes the tag to be ignored and all spectra to be considered emissive, that is they will not be integrated with the test illuminant.
  • -e <max DE>, maximum DE to consider a match acceptable; the default is infinite, that is the best match is included regardless of error.
  • -r <dir>, directory to save informational reports and plots.

This command is typically not used in any profiling workflow, but is instead used for informational purposes. You can for example test how well the "skintone patches" of your commercial target matches real skintones from a spectral database.

As DCamProf's camera profiles are 2.5D it often makes sense to scale lightness to match, both in the comparison (-N) and in the output (-S). If you specify one output it will contain the spectra from match.ti3 that matched, and if you specify two, the second output will contain the patches from reference.ti3 for which an acceptable match was found. Per default there's no error limit and non-unique matches are allowed, and then the second output will be a copy of reference.ti3.

If a report directory is given (-r), spectra and XYZ coordinate plots for inputs and outputs are written there.

Report directory files

When DCamProf is run with the -r <report_dir> flag enabled it will write data files for plotting and report text files. The files are suitable to plot with gnuplot, but you can use any other plotting software if you like, as they're just text files with numbers in columns.

Text files

The report text files contain patch matching reports:
  • cm-patch-errors.txt, color matrix patch matching errors
  • fm-patch-errors.txt, forward matrix patch matching errors
  • patch-errors.txt, patch matching with full LUT correction (if any)
A patch matching row looks like this:
A1 RGB 0.076 0.095 0.040 XYZref 0.130 0.113 0.057 XYZcam 0.129 0.112 0.054 \
  sRGB #7C5547 #7C5445 DE 0.60 DE LCh -0.23 +0.46 0.31 (dark brown)
First there's the patch name (A1 in this example), then the camera raw RGB values (0.0 - 1.0 range), then the CIE XYZ reference values (0.0 - 1.0 range), then the XYZ values the profile transform came up with, then the sRGB values of the reference and the profile result (note that these will only be accurate if the color is within the sRGB gamut), and finally CIEDE2000 values for the color difference between the reference and the converted value, related to the test illuminant.

The first delta E value is the total with 1,1,1 k weights; the following three consider lightness (L), chroma (=saturation, C) and hue (h) separately. The lightness and chroma values have a sign so you can see if the color is lighter (+) or darker (-) than it should be, and more saturated (+) or less saturated (-) than it should be. In the above example we see that most of the color difference sits in chroma (0.46 delta E), and the color is a tiny bit too dark and too saturated.
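A small sketch of how to read the signed components (the helper function is made up; the sign convention is as described above):

```python
# Interpret the signed DE LCh components from a patch report row.
def describe_lch(dl, dc, dh):
    parts = []
    if dl:
        parts.append("lighter" if dl > 0 else "darker")
    if dc:
        parts.append("more saturated" if dc > 0 else "less saturated")
    if dh:
        parts.append("hue shifted")
    return ", ".join(parts) or "perfect match"

# Values from the example row: DE LCh -0.23 +0.46 0.31
print(describe_lch(-0.23, +0.46, 0.31))  # darker, more saturated, hue shifted
```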

Finally there's a text name of the color. This text name is highly approximate and may not really be that correct, but it roughly points out the type of color in lightness (light, dark), chroma (grayish, strong, vivid etc) and hue. Look at the corresponding image files if you want the reports with actual colored squares to represent the patches.

Image files

Crop from a patch matching report image. To make it easier to see the difference the patch square has been split diagonally. The reference value is in the top left half, and the profile result in the other.
A few TIFF image files can be dumped:
  • cm-patch-errors.tif, fm-patch-errors.tif, patch-errors.tif, same as the text files patch matching reports, but showing the actual patches as colored squares.
    • The patch square is split diagonally, the top left half shows the reference value and the bottom-right what the profile produces.
    • The files are 16 bit TIFF in ProPhoto RGB color space (ICC profile is embedded).
    • Each file comes in five versions with different ordering: first in target order, then in overall DE order, and then ordered by lightness, chroma and hue errors.
    • As these are ProPhoto RGB files it's very important that you have display color management enabled, otherwise they will show the wrong colors (typically desaturated).
    • If you have super-saturated patches in the target they might clip (rarely in the image itself, but in the color conversion to the screen), which will of course hurt the ability to accurately evaluate the error visually.
  • gradient-ref.tif, gradient.tif, generated gradient images for diagnosing profile smoothness.

Plot files

Which files are produced varies a bit between commands and parameters used, but many will be the same; for example, if the command processes a target it will produce files related to the target.

Most files use u'v' chromaticity coordinates, and where there's lightness it's CIE Luv / CIE Lab lightness divided by 100. The division by 100 is there to bring it to about the same scale as u'v'. This is the same 3D space the DCamProf LUT operates in, and it's roughly "perceptually uniform", that is moving a certain distance in the diagram corresponds to a certain color difference. However, as the space is linear and lightness is normalized, it's not as uniform as it could be, especially towards the line of purples, which in reality goes towards black and is thus hard for the eye to differentiate.
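For reference, converting XYZ to these plot coordinates uses the standard CIE formulas (Yn is the white point Y; the function name is made up):

```python
# XYZ -> (u', v', L/100): the coordinate space used by the plot files.
def xyz_to_plot_coords(X, Y, Z, Yn=1.0):
    d = X + 15 * Y + 3 * Z
    u, v = 4 * X / d, 9 * Y / d            # u'v' chromaticity (CIE 1976 UCS)
    t = Y / Yn
    # CIE lightness, with the linear segment for very dark values
    L = 116 * t ** (1 / 3) - 16 if t > (6 / 29) ** 3 else (29 / 3) ** 3 * t
    return u, v, L / 100                    # L scaled to roughly match u'v'

# D50 white point (X, Y, Z approx. 0.9642, 1.0, 0.8249)
u, v, l = xyz_to_plot_coords(0.9642, 1.0, 0.8249)
print(round(u, 4), round(v, 4), round(l, 2))  # ~0.2092 ~0.4881 1.0
```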

Here's a list of data files you can find in the report directory after a run:

  • Spectra for observer, SSF and illuminant:
    • cmf-x.dat, cmf-y.dat, cmf-z.dat, the observer's color matching functions.
    • ssf-r.dat, ssf-g.dat, ssf-b.dat, the camera's spectral sensitivity functions.
    • illuminant.dat, emissive spectrum for the illuminant.
    • illuminant-d50.dat, emissive spectrum for the standard illuminant D50.
  • Common gamuts for u'v' chromaticity diagrams, these are flat but have three columns to be compatible in 3D plots:
    • gmt-srgb.dat, sRGB gamut
    • gmt-adobergb.dat, Adobe RGB gamut
    • gmt-prophoto.dat, ProPhoto gamut
    • gmt-pointer.dat, Pointer's gamut
    • gmt-locus.dat, spectral locus for the chosen observer
  • Files related to the test target, for 3D plots with u'v' and L/100 except the spectra which are 2D (see also the target files in the LUT section further down):
    • target-xyz.dat, XYZ reference values for the patches, usually for the calibration illuminant
    • target-spectra.dat, reflectance spectra for the patches
    • target-xyz-<classname>.dat, target-spectra-<classname>.dat, same as above split per target class name.
    • targetd50-*, D50 versions of above. Note that spectra are the same regardless of illuminant as it's the reflectance spectra.
    • Live update of spectra and patches during target generation, can be fun to plot via a gnuplot loop to see spectral generation progress graphically:
      • live-patches.dat, XYZ reference values for the chosen illuminant.
      • live-spectra.dat, reflectance spectra for the patches.
  • Files related to the LUTs, the target files are all relative to D50 as the LUT works in that space:
    • nve-lut.dat, native LUT stretching in u'v' difference (addition), plus the L multiplier shown as a 1/10th of the difference from 1.0. The reason for the strange L scale is that the LUT stretching on the L scale should be fairly perceptually equal to the chromaticity stretch. That is any bend on the surface should have equal perceptual effect regardless of axis.
    • nve-lutd.dat, same as nve-lut.dat but the grid is sampled with higher density, useful for zoomed in or high resolution plots.
    • nve-ref.dat, a plain grid showing a LUT with no correction factors, can be used to plot a reference to compare.
    • nve-lutv.dat, vectors that show the difference from nve-ref.dat to nve-lut.dat
    • hsm-lut.dat, hsm-lutv.dat, hsm-ref.dat, same as the nve-* files, but for the DCP HueSatMap LUT.
    • lkt-lut.dat, lkt-lutv.dat, lkt-ref.dat, same as the nve-* files, but for the DCP LookTable LUT.
    • lkt-lutXX.dat, hsm-lutXX.dat, replace XX with 00 to value divisions-1, shows each value slice from a DCP 3D LUT. Will not be produced for 2.5D LUTs.
    • icc-lut.dat, all points in the ICC 3D LUT plotted in the same space as nve-lut.dat.
    • icc-lutXX.dat, replace XX with 00 to 19, shows slices of the ICC 3D LUT.
    • target-nve-lut.dat, the target patches' XYZ positions after native LUT correction.
    • target-nve-lutvm.dat, vectors showing the difference between matrix-only correction and LUT correction.
    • target-nve-lutve.dat, vectors showing the difference between target reference values (targetd50-xyz.dat) and the profile's final values after LUT, that is the error vectors. For a perfect match these are all zero length.
    • target-nve-lutve2.dat, same as *lutve.dat, but the length of the vector is CIEDE2000, divided by 100 to fit in the u'v' scale.
    • target-nve-lutve3.dat, same as *lutve2.dat, but colors normalized to the lightest possible value first, that is what the error would be if the color was light; this normally increases the error for dark colors.
    • target-hsm-lut.dat, target-hsm-lutvm.dat, target-hsm-lutve*.dat, same as the target-nve-* files, but for the DCP LUT.
    • target-icc-lut.dat, target-icc-lutve*.dat, same as the target-nve-* files, but for the ICC LUT. Note that the *-lutvm.dat doesn't exist for ICC as there is usually no XYZ matrix.
  • Other files
    • ssf-csep.dat, camera color separation performance, documented separately.
    • tf-r.dat, tf-g.dat, tf-b.dat, transfer functions for linearizing RGB values.
    • tc.dat, tc-srgb.dat, tone curve in linear and sRGB gamma encoding (both axes).
    • target-ref*, target-match*, target*, target-refm*, target spectra and XYZ plots written by the match-spectra command.
    • lin-curves.dat, linearization curves from the testchart-ff command (only when linearization is enabled).

Example gnuplot scripts

As patch colors are often involved I recommend using gnuplot with a gray background rather than the default white. If you use the X11 terminal you do this by starting gnuplot with the following command: gnuplot -background gray. All examples here are adapted for a gray background.

In gnuplot you do 2D plots with the plot command, and 3D plots with splot. It's often useful to view a 3D plot in 2D though, and thanks to gnuplot's isometric perspective viewing a 3D plot straight from above makes it perfectly 2D.

You can rotate a 3D plot using the mouse, and you can zoom in by right-clicking and drawing a zoom-in-box. Type reset and replot to return to the original view. It's not a quick thing to master gnuplot, but with the help of the example scripts here you should be able to get around and do the tasks necessary for visualizing DCamProf data.

You can label the axes etc, but I usually make it simple and just remove all labels with unset key.

Plotting SSF and observer CMF:
  plot \
    'cmf-x.dat' w l lc "pink", \
    'cmf-y.dat' w l lc "greenyellow", \
    'cmf-z.dat' w l lc "cyan", \
    'ssf-r.dat' w l lc "red", \
    'ssf-g.dat' w l lc "green", \
    'ssf-b.dat' w l lc "blue"
Basic plot for a test target, first the target spectra in 2D:
  plot 'target-spectra.dat' w l lc rgb var
The example shows cc24
...and then the target patches in 3D:
  set grid
  splot \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'target-xyz.dat' pt 5 lc rgb var
Not shown in the picture, but you can also get text labels beside each patch by adding: 'target-xyz.dat' using 1:2:3:5 with labels offset 2
A suitable plot after a make-profile or test-profile run with a target with relative few patches (such as a cc24):
  splot \
    'nve-lut.dat' w l lc "beige", \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'target-nve-lutvm.dat' w vec lw 2 lc "black", \
    'targetd50-xyz.dat' pt 5 ps 2 lc rgb var
The image shows a zoomed in section, viewed directly from above, so we see a 2D chromaticity diagram with the LUT stretching in the chromaticity dimension. The black LUT vectors are only a little visible as the matrix alone makes a fair match.
A plot after a test-profile run with a dense target, such as a locus grid:
  splot \
    'nve-lut.dat' w l lc "beige", \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-prophoto.dat' w l lc "blue", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'target-nve-lutve.dat' w vec lc "black"
Here we only plot the error vectors: the actual color (reference XYZ) is at the start of each arrow and where it ends up after profiling is at the arrow's end. For a perfect profile on a perfect camera the vector lengths would thus be zero over the whole field. As we can see in the example, errors typically grow large towards the locus; the matrix even moves points outside the human gamut.
A plot after a test-profile run with a DCP profile:
  splot \
    'hsm-lutv.dat' w vec lc "beige", \
    'gmt-locus.dat' w l lw 4 lc rgb var, \
    'gmt-adobergb.dat' w l lc "red", \
    'gmt-prophoto.dat' w l lc "blue", \
    'gmt-pointer.dat' w l lw 2 lc rgb var, \
    'targetd50-xyz.dat' pt 5 ps 1.2 lc rgb var
Here we plot the DCP HSM LUT as vectors; it can't be plotted as a grid like the native LUT. The vectors show each table position at the vector start and its shift in chromaticity and lightness at the vector end. Note that a DCP HSM LUT actually changes values through multiplication in linear ProPhoto RGB HSV space; that's why the LUT looks like a star fitted into the ProPhoto triangle with high density at the white point. The lightness axis has been transformed to match the same scale as the native LUT so the LUTs can be compared directly.

Be careful with gnuplot's auto-scaling of axes. The lightness axis in a LUT often gets greatly exaggerated or compressed because it's not plotted at the same scale as chromaticity. Use the set view equal command to turn equal scaling on/off (xyz = equal scaling on all axes, xy = the default, meaning chromaticity equal and lightness scaled to fit).

  set view equal xyz
  set view equal xy
With equal scale on the L axis a LUT typically looks very flat as L adjustments are generally minor.

More example scripts are found throughout the documentation.

Call for spectral databases and camera SSFs

DCamProf contains some built-in spectral data that has been retrieved from public sources. I'd like to have more. A database with spectral reflectance of human skin is currently the most desired, useful for rendering portrait profiles.

There are reference standard sets such as the ISO TR 16066, but those are not free and cannot be freely redistributed so I can't include that in DCamProf.

If you know of any database you think is useful for inclusion please let me know.

The other aspect is camera SSFs. It's quite complicated and/or costly to measure camera SSFs so most users will not be able to do that and thus have to rely on public sources. If you can provide camera SSFs or have links to sources I have missed please let me know.

Links to camera SSFs

Links to spectral databases

Acknowledgments

I'd like to thank those that have made camera SSFs and spectral databases available, without those DCamProf would not have been possible in its current form. Currently DCamProf has spectral databases from University of Eastern Finland and BabelColor, see the section with links to spectral databases for references.

I also would like to thank all early adopters for testing the software.

Thanks to Mike Hutt for the Nelder-Mead simplex implementation, which is used in DCamProf for solving various complex multi-variable optimization problems. I also want to thank Jarno Elonen for publishing a thin plate spline implementation which served as the base for the DCamProf TPS used for getting a smooth LUT.

The copyright for the TPS source is required to be repeated in the documentation, so here it is:

  Copyright (C) 2003, 2004 by Jarno Elonen

  Permission to use, copy, modify, distribute and sell this
  software and its documentation for any purpose is hereby
  granted without fee, provided that the above copyright
  notice appear in all copies and that both that copyright
  notice and this permission notice appear in supporting
  documentation. The authors make no representations about
  the suitability of this software for any purpose. It is
  provided "as is" without express or implied warranty.



(c) Copyright 2015 - Anders Torger.