Hi Paul,

I think option 1 where you scan the same note and same location multiple times as a new sample is wrong – it is NOT a new sample, and would unfairly tighten the spread of results. (Multiple scanning within a single sample ID is good).

Keeping the model simple will help the speed of scanning – stick to white paper under the note as standard. (I would test whether a single sheet is enough; the illumination does penetrate somewhat.)

You may need to have Condition as an attribute, with New, Used, Grubby as values.

Given the complex patterns on a note, how will you cope with slight inaccuracies in the area illuminated? Will you use a jig? Otherwise that variance may be greater than that of bad notes.

Also – how do you hope to recognise forgeries? Do you have access to some bad notes, or will you try to detect differences from expected scans, i.e. outliers? I would not know how to do that, but maybe CP can offer good advice!
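For what it's worth, one common way to flag "differences from expected scans" without any bad notes is a simple statistical outlier test: fit the mean and covariance of feature vectors taken from known-good scans, then flag any new scan whose Mahalanobis distance from that reference is unusually large. The sketch below is purely illustrative – the feature vectors, the synthetic data, and the threshold rule are all my assumptions, not anything from your setup.

```python
import numpy as np

def fit_reference(scans):
    """Fit mean and (pseudo-)inverse covariance from known-good scan features."""
    X = np.asarray(scans, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pinv guards against a singular covariance
    return mean, cov_inv

def mahalanobis(x, mean, cov_inv):
    """Distance of one feature vector from the genuine-note reference."""
    d = np.asarray(x, dtype=float) - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical data: each row stands in for a feature vector from one genuine scan.
rng = np.random.default_rng(0)
good = rng.normal(loc=10.0, scale=0.5, size=(50, 4))
mean, cov_inv = fit_reference(good)

# Crude threshold: worst genuine distance plus 50% headroom.
threshold = max(mahalanobis(row, mean, cov_inv) for row in good) * 1.5

suspect = np.array([14.0, 6.0, 10.0, 10.0])  # far from the genuine cluster
print(mahalanobis(suspect, mean, cov_inv) > threshold)
```

The appeal of this kind of one-class approach is exactly that it only needs genuine notes to train on; the obvious weakness is that it will also flag the jig/illumination variance mentioned above if that isn't controlled first.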