Feb 06, 2023

DStretch Tattoo Protocol: Full step-by-step protocol for identification and visualization of tattoos on preserved archaeological remains using the ImageJ plugin DStretch

  • 1Department of Paleoanthropology, University of Tübingen, Institute of Archaeological Science, D-72074 Tübingen, Germany;
  • 2Tennessee Division of Archaeology, Nashville, Tennessee, USA
  • Dominik Göldner: ORCID: 0000-0002-5432-3273;
  • Aaron Deter-Wolf: ORCID: 0000-0002-0882-0455
Protocol Citation: Dominik Göldner, Aaron Deter-Wolf. 2023. DStretch Tattoo Protocol: Full step-by-step protocol for identification and visualization of tattoos on preserved archaeological remains using the ImageJ plugin DStretch. protocols.io https://dx.doi.org/10.17504/protocols.io.n92ldp52xl5b/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: Working
We use this protocol and it's working.
Created: December 31, 2022
Last Modified: February 06, 2023
Protocol Integer ID: 74611
Keywords: ink, body mods, body modification, art, body painting, #decorrelation stretch, open methods, open archaeology, open science, archaeology, prehistory, mummy, mummies, tattooed, inked
Disclaimer
This protocol shows real human remains. The image is reproduced herein with permission of the President and Fellows of Harvard College, on behalf of the Peabody Museum of Archaeology and Ethnology. View this protocol with caution and respect. Researchers should always seek official permission from descendant communities and curating institutions, as appropriate to the region and culture in question, before recording, processing, or publishing images of human remains. Always treat human remains and associated digital data with respect and dignity, and adhere to professional best practices regarding scientific and ethical standards.

Read the complete protocol before applying it to your research. We advise running trials using your own data to follow each step of the protocol. Please remember to always cite this protocol when you follow it, be it partially, to its full extent, or when modified. Protocols published on Protocols.io are not peer-reviewed. All of the steps described reflect the authors' experiences with various subjects and equipment.

The decorrelation stretch process in DStretch® depends on a variety of factors, including, but not limited to, light conditions, camera equipment, recording parameters, image quality, and colors present on the subject. The specific results of any DStretch® enhancement will therefore be different for every image. The decorrelation stretch algorithm relies on color differences to work properly. Therefore, in instances where preserved remains exhibit darkened or weathered skin due to taphonomic processes, tattoo pigments may not be recognizably visible even after applying this protocol. In such instances, other, more sophisticated techniques such as multispectral imaging should be considered. The DStretch® program is also available as a standalone mobile app for Android and iOS systems; however, the functionality of that app has not been tested by us.

Our efforts suggest consistent results can be obtained by following the present protocol. Nevertheless, our approach cannot guarantee that your image data will perform equally well. In such cases, we encourage researchers to experiment with the DStretch® software to identify alternative workflows. The application of DStretch® to your image data may also yield satisfactory results when our protocol is not strictly followed. Although not tested by us, future projects are encouraged to evaluate the use of colorchecker cards to color-calibrate digital photographs, which might lead to better colorimetric transformations.

Individuals with dyschromatopsia might have problems recognizing and interpreting specific colors in decorrelation-stretch output images. The Grayscale Mode in DStretch® (not demonstrated here) may be used in some cases to bypass this problem.
Abstract
Archaeological preservation of intentional skin modifications such as tattoos is rare but can yield important insights into the human past with respect to the belief systems, culture, and art of past societies. Tattooing can be briefly described as an invasive process of inserting pigments into the skin with the intention of creating permanent motifs. Preserved tattoos have been identified on naturally and deliberately mummified human remains from archaeological sites of many cultures across the globe, dating back at least 5,000 years before present (Deter-Wolf 2022; Deter-Wolf et al. 2016).

Preserved tattoos on mummified remains are often documented using standard photography. However, taphonomic processes contributing to preservation may darken or shift natural skin color, rendering tattoos indistinct or invisible to the naked eye. Multispectral imaging using specialized camera equipment has been applied with great success to bypass this problem (e.g., Alvrus et al. 2001; Austin and Gobeil 2017; Barkova and Pankova 2005; Friedman 2017; Gaber et al. 1995; Hansen et al. 1991; Samadelli et al. 2015; Smith and Zimmerman 1975), but comes with the disadvantage of requiring specialized equipment that, even with the recent increased availability of consumer-grade digital imaging technology, may be prohibitively expensive.

The DStretch® software plugin presents another cost-effective and easy tool to visualize ancient, preserved tattoos. This low-cost software plugin for the open-source image processing and analysis freeware ImageJ© (Schneider et al. 2012) also functions as a stand-alone app for mobile Android and iOS devices. The DStretch® algorithm was initially developed in remote sensing (Gillespie et al. 1986) and was later adapted by Jon Harman (2005) for the visualization of faded or ephemeral rock art (e.g., Evans and Mourad 2018; Harman 2015; Gunn et al. 2014; Gunn et al. 2010), and has subsequently been used on other archaeological subjects (e.g., Emmitt et al. 2021; Gonzales et al. 2019). The program has also recently been used by Anne Austin and colleagues (Austin 2022; Austin and Arnette 2022; Austin and Gobeil 2017) to visualize tattoos preserved on archaeological remains from ancient Egypt and 19th century Europe. The mathematical background of the decorrelation-stretch transformation has been well documented (e.g., Kaur and Sohi 2017; Campbell 1996; Gillespie et al. 1986). However, to our knowledge, detailed descriptions of its applicability, especially with respect to the DStretch® software plugin, are so far widely lacking. Since 2017 the senior author (DG) has experimented with new ways of implementing DStretch® in recording archaeological subjects, including preserved tattoos and bone discolorations (Göldner in preparation), while Deter-Wolf has used the technology to evaluate tattoos on mummified remains from the Andes region of South America (Deter-Wolf et al., in preparation).

This protocol summarizes the results of our work in a reproducible and understandable framework for other researchers to use when visualizing preserved tattoos on mummified remains in both archaeological and museum settings. It represents the first complete guidelines for applying this method, covering image collection and post-processing, decorrelation stretch transformation, and subsequent documentation, including drawings, descriptions, and reconstruction drawings. This protocol can also be used in the same or slightly modified manner for other types of color-decorated objects and specimens and even on colored surface texture files derived from laser- or light-based three-dimensional (3D) scans and photogrammetry (e.g., Gonzáles et al. 2019; Emmitt et al. 2021; Keppler et al. in preparation). With this effort, we aim to establish a standard for visualizations of preserved tattoos using a reproducible and citeable method in order to improve scientific practice in the field of ancient tattoo analysis. We hope that interested scientists will adopt this protocol as a standard for their work in visualizing, identifying, reconstructing, and documenting their findings.
Image Attribution
The single photograph used in this protocol depicts the mummified, tattooed right hand (dorsal view) of an adult individual whose biological sex and original site provenience are unknown. These remains, which include both the right hand and forearm, are housed at the Peabody Museum of Archaeology and Ethnology, Harvard University, Cambridge, Massachusetts, USA (museum number 80-61-30/24005). Museum accession data identifies the region and culture of origin as “Peru/Ancient Graves.” They were donated to the museum in 1880 by Dr. W. Sturgis Bigelow and have not been previously published. The image is reproduced herein with permission of the President and Fellows of Harvard College, on behalf of the Peabody Museum of Archaeology and Ethnology. The original photograph was taken by co-author Deter-Wolf on September 9, 2019 in the collections facility of the Peabody Museum of Archaeology and Ethnology using an Olympus E-PL1 camera (14-42mm Zuiko Micro Four Thirds lens; ISO 500, 10/60s shutter speed; focal length: 14mm; 12.2 megapixel) on a black cloth background with a certified ABFO No. 2 L-shaped photomacrographic metric scale manufactured by Tri-Tech Forensics. Lighting included a combination of indirect natural light and overhead fluorescent lighting. The camera was hand-held and triggered manually. The image was recorded in the native Olympus Raw Image File Format (ORF) as an 8-bit RGB digital photograph (4032 x 3024 pixels) and subsequently converted into a high-quality JPEG file using the free but proprietary IrfanView software and its publicly available plugins. The converted image was then edited in Adobe Photoshop to eliminate the background and replace it with a solid black fill, as described in the protocol. The image was also cropped to eliminate the view of the forearm.
Guidelines
Text written in italics refers to software commands.

Supplementary materials:

Additional information is published in the supplementary materials on Zenodo.org (DOI: 10.5281/zenodo.7607277).

The Supplementary Materials of this protocol include:

  • Acknowledgments

  • ProCreate® Brush Information

  • List of References (Citations)
  • Photography and Post-Processing Documentation Sheet I (excluding fields to document DStretch® transformation)

  • Photography and Post-Processing Documentation Sheet II (including fields to document DStretch® transformation)


Time required to apply this protocol:

  • Taking and post-processing photographs: around 30 to 90 minutes

  • DStretch® enhancement: 10 minutes

  • Documentation: depending on the complexity of the existing tattoos, up to several hours.
Materials
Hardware:

  • DSLR Camera and accessories (e.g., batteries, charger, data storage medium like a SD card and external hard drives for backup copies)

  • Computer or laptop

  • Sun cover (if photos are taken outside in the field)

  • Externally directed light source (if photos are taken in an imaging lab, e.g., studio lights)

  • Matte-black paper or cloth for the background (size according to the subject dimensions; optional but advised)

  • Camera tripod (optional)

  • UV remote control (optional)

  • Digital drawing device (e.g., drawing tablet and pen or stylus). Device used in this protocol: Apple® iPad® (6th generation; version: iPadOS 15.5) with Apple Pencil® (1st generation).


Software:


  • DStretch® (Harman 2005; low-cost [$50]; proprietary software; for purchase contact Jon Harman via https://www.dstretch.com/ or directly via DStretch@prodigy.net). DStretch® version used in this protocol: 8.22.


  • IrfanView© Plugin to convert RAW images to JPEG files (free; proprietary software; download via https://www.irfanview.de/). Version used in this protocol: 4.60.

  • Image processing software of your choice, such as GIMP (free; download via https://www.gimp.org/) or Adobe® Illustrator® or Photoshop® (high-cost; proprietary software; purchase and download via https://www.adobe.com/products/photoshop.html); the latter has been used by us in this protocol (Adobe® Photoshop® CS6). Version used in this protocol: 13.0 x64.

  • (Optional): Illustration software for tattoo drawings and reconstructions used alongside digital drawing devices. Illustration software used in this protocol: ProCreate® (low-cost [$10-15]; proprietary software; purchase and download only for Apple® iPad® from the App Store via https://apps.apple.com/us/app/procreate/id425073498). Version used in this protocol: 5.2.6.


Supplementary materials are published on Zenodo.org (DOI: 10.5281/zenodo.7607277).
Safety warnings
Useful information on subject safety during photography of osteological subjects is summarized in the FOROST Skull Photography Protocol (Báez-Molgado et al. 2013).
For a general guideline on (small) object photography in archaeology, which can also be applied to human remains, see the SOAP protocol published by Cerasoni & Rodrigues (2022).
Before start
Mathematical background of DStretch®

DStretch® stands for decorrelation stretch, the technique on which the program's algorithm is based. It uses a multivariate Karhunen-Loève transformation, which is closely related to the Hotelling transformation and principal component analysis (Gunn et al. 2010; Harman 2005). Colors in captured digital images are usually highly intercorrelated (figs. 1.1 and 1.3), a phenomenon also known as inter-channel or band-to-band correlation, in which small differences between similar color hues may be difficult to distinguish with the naked eye (Harman 2015). DStretch® uses decorrelation to remove this color correlation and highlight differences between hues.

During the decorrelation stretch process, the three color values of all pixels in a digital image are treated as coordinates in a three-dimensional color space (Gillespie et al., 1986). In the first step, a linear transformation rotates the original color space into a new one that diagonalizes the correlation matrix of the color channels. Then, a contrast stretch equalizes (normalizes) the variance along the eigenvectors. At this point, the colors are uncorrelated and stretched out to occupy a wider range of the color space. Finally, the image colors are rotated back into an approximation of the original color coordinate space using an inverse linear transformation. This results in a decorrelated image with extreme false colors that display much higher contrast and visible differences between hues (figs. 1.2 and 1.4). When the DStretch® plugin is used to transform a loaded image, all steps are run rapidly and automatically. However, DStretch® also offers multiple options to run customized and more advanced transformations, which require more user experience and practice with the software. In this protocol, we demonstrate several of these more sophisticated functions.

Fig. 1.1 to 1.4: (Upper Left) Cropped but untransformed image taken by A. Deter-Wolf at the Peabody Museum of Archaeology and Ethnology, Harvard University. (Upper Right) Image transformed using the YBK color channel in DStretch®. (Lower Left) Spatial color distribution of the pixel colors of the original image in the 3D RGB color space. In this graph, the color pixels are highly intercorrelated, which is expected for most “natural” images. (Lower Right) Widespread 3D pixel color distribution after the YBK DStretch® transformation. The 3D plots were generated with the free ImageJ© plugin Color Inspector 3D/Color Histogram© by Barthel (2005).
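For readers who want to see the three steps (diagonalization, normalization, inverse rotation) in concrete form, the following Python sketch implements a basic decorrelation stretch with NumPy. This is an illustrative approximation, not DStretch®'s actual code; the `target_std` parameter is our stand-in for a strength setting comparable to the plugin's Scale value.

```python
import numpy as np

def decorrelation_stretch(img, target_std=50.0):
    """Basic decorrelation stretch of an RGB image (H x W x 3).

    1) Diagonalize the channel covariance matrix (rotate into the
       eigenbasis), 2) normalize the variance along each eigenvector and
       stretch it to a common target standard deviation, 3) rotate back.
    """
    pixels = img.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # step 1: diagonalization
    scale = target_std / np.sqrt(eigvals + 1e-12)     # step 2: normalization/stretch
    transform = eigvecs @ np.diag(scale) @ eigvecs.T  # step 3: rotate back
    stretched = (pixels - mean) @ transform + mean
    return np.clip(stretched, 0, 255).reshape(img.shape)

# Synthetic "photo": three highly intercorrelated channels, as in fig. 1.3
rng = np.random.default_rng(0)
base = rng.uniform(60, 180, (64, 64, 1))
img = np.concatenate([base + rng.normal(0, 5, (64, 64, 1)) for _ in range(3)], axis=2)
out = decorrelation_stretch(img)
```

After the stretch, the channel correlations drop to near zero while small hue differences are exaggerated, which is what produces the extreme false colors seen in fig. 1.2.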
General tips for DStretch® image enhancements:

In the DStretch® version (8.22) used for this protocol, more than 11 predefined color channels (e.g., RGB, YBK, LAB, LBK, etc.) can be used to create instant color transformations. Each color channel is designed to enhance a specific hue present in the input image (Campbell, 1996; Harman, 2015; Kaur and Sohi, 2017).

General: The "general enhancers" including LAB, LDS, RGB, YBR, YDS, and YDT, will improve the general hue contrast between colors, meaning that they will enhance multiple colors without suppressing others (Harman 2015).
Red: The LDS and YDT color channels are good at distinguishing between different hues of red (Harman 2015). If the focus is only on red, YRE works better than YRD. LRE, CRGB, RGB00, and LDS also work well, although they cannot distinguish between different red shades. YRE is sensitive to faint red, and YRD produces more natural images of red colors. When targeting red hues, the flattening option (Flat button) in DStretch® is not necessary.

Yellow: YYE is extremely effective at enhancing yellow hues, which will be transformed into distinct browns in the generated colorspace while suppressing black and red colors.

Black/white: Both blacks and whites are difficult for DStretch® to process due to a lack of actual color information. However, most archaeological objects do not display uniform blacks or whites. Color channels like YBK and LBK that suppress other hues can reveal good results for blacks, whereas LAB, LWE, and YWE can be useful for whites.

"L" color channel enhancements in DStretch® are a bit slower but tend to produce better results. LAB tends to be less sensitive to digital artifacts (random color-pixel noise) caused by JPEG compression. CRGB tends to produce some "wild colors." DStretch® also gives the opportunity to customize color spaces using the YXX and LXX buttons, which lead to a new menu within the plugin. We will not discuss this option in further detail but advise interested people to experiment with it. In cases where the predefined color spaces will not perform well on your selected images, this may still yield meaningful results.

Tab. 1: DStretch® color channels and their respective color enhancements.


Preparation phase
Camera setup:

Charge your camera batteries and spare batteries before taking images. Digitally clean the camera image sensor on a regular basis if the option is provided by your camera model. Likewise, if necessary, manually clean your camera sensor and lens before taking pictures.

Images recorded using a DSLR camera with a large sensor will show reduced image noise. Although larger megapixel values do not necessarily yield better images, it is recommended to use the highest megapixel value possible. Set the camera to record images in RAW or uncompressed TIFF file format. JPEG is a format that compresses the image data, which will eventually lead to quality loss. If you must take images in JPEG format, use the highest JPEG quality possible as well as the highest resolution setting (i.e., the highest megapixel value). Set the camera to the lowest ISO value possible (Harman 2015). Use appropriate shutter speed and aperture (f-stop or f-number) settings that accommodate your lighting setup. We advise you to use a wide focus field in combination with the automatic focus option on your camera and lens.

To produce accurate and consistent results, you should mount the camera to a tripod. Make sure that the camera is securely fixed to the tripod and that the tripod itself has a stable stand. Use weights if necessary to stabilize its position. If available for your camera model, use a remote control to take stable pictures. This will minimize the occurrence of camera blur caused by minor hand and body movements (e.g., shaking, breathing, or even blood flow) that might appear when holding the camera in your hands when taking images.
Subject preparation:

Coordinate with the curating institution to determine whether it is necessary and possible to clean the subject or its region of interest. If so, clean the subject carefully and without causing any damage, according to the instructions of the curating institution. Once the subject is ready, place it in your imaging setup and position it to avoid falls or incidental damage. General recommendations for subject safety measures are summarized in the FOROST Skull Photography Protocol (Báez-Molgado et al. 2013) and can be directly transferred to other kinds of archaeological subjects. Generally, make sure that the subject is always safe. The main potential causes of damage are the handling and transport of the subject, the subject falling from the photography setup, and camera equipment falling onto the subject. Therefore, always ensure that the subject cannot fall from the setup (e.g., from a table) to the ground and that the camera or lights cannot fall over and onto it. To prevent these situations, use proper materials and secure stands, keep safety distances, and move slowly around the setup. Always follow institutional protocols regarding the use of silicone gloves and other protective clothing when handling human remains.
Environmental setting:

Create a properly shadowed and evenly illuminated environment. Avoid over-exposure. If you use studio lights or similar light sources, place them so that shadows are minimized. In low-light situations, use a flash if necessary.
Scale bar:

Always include a scale bar for reference in your images. Choose an appropriate scale bar size that fits the dimensions of the subject. The scale bar should indicate centimeter and millimeter units. Place the scale bar next to the subject so it does not cover the region of interest. Avoid touching the subject with the scale bar, as it might cause superficial damage. Try to use a black-and-white-only scale bar, as colors can negatively affect the DStretch® transformation. We recommend shooting the same image twice (but only if a tripod is used to keep the camera in the same position): the first photograph includes the scale bar, while the scale bar is removed for the second photograph. This image pair can be used alongside standard operations in ImageJ© to create a neat digital scale bar. A separate and detailed protocol for inserting digital scale bars in scientific photographs was previously published by the senior author on Protocols.io (Göldner 2022).
Background setting:

If working in a controlled environment, e.g., an imaging laboratory or photographic studio, place the subject on a uniform, monochrome black background. Colored backgrounds and scale bars should be avoided because they may interfere with the DStretch® transformation process. Also avoid white backgrounds, as the white hue may reflect light back onto the subject and/or camera lens, creating overexposed areas. As background material, we recommend matte-black paper sheets or non-reflective cloth of appropriate dimensions to fit the size of the subject. Make sure that the background exceeds the subject and fills the entire imaging field of the camera. Use a professional photography light tent in conjunction with inbuilt or external studio lights if one is available. Photographic backgrounds can be replaced by an artificial black background during image post-processing (see below). Use a soft brush or air blower (e.g., a camera lens cleaner) to remove any dust and larger particles that might accumulate in the background. Coordinate with the curating institution or agency regarding the collection of dust and particle material, which, depending on the setting and condition of the subject, may contain fragments of human remains.
Imaging phase:

Following setup, briefly recheck everything and then begin the actual imaging process. Begin by taking overview images of the entire subject, followed by details of the region(s) of interest. During this stage, we recommend the collection of multiple images from each camera position in order to account for potential issues with focus or lighting. After images have been collected, copy and save them onto two storage devices to reduce data loss in case of unforeseeable accidents.
File conversion:

You may convert RAW or TIFF images to high-quality JPEG files using a converter program that works with the RAW or TIFF file format of your camera. Note that different camera brands use different RAW file formats (e.g., Canon® uses CRW, CR2 or CR3 files, Nikon® uses NEF files, Olympus® uses ORF files, and Sony® uses ARW files). Always keep the original RAW or TIFF files as backups and do not delete them. Save the newly generated high-quality JPEG files to a new subfolder in your project folder. Name all folders accordingly, e.g., "SITE_MUSEUM-ID_RAW-IMAGE-NUMBER," etc. If applicable, turn the "noise suppression" option off when converting files.

For the example presented herein, we used an ORF file, which is the native RAW image file format for Olympus® cameras. To convert the ORF files to JPEGs, we used the free image processing software IrfanView© and associated official IrfanView© plugins. This software enables you to convert either a single photograph or multiple images as a batch using the Batch Conversion function from the File menu. Before saving the converted files, select the highest JPEG quality possible (i.e., 100) in the saving and conversion options within the IrfanView© Conversion dialog. Note that IrfanView© is also able to convert other RAW file formats from different camera brands.
Photography setting documentation:

For reproducibility and transparency, always document the selected camera parameters and report them alongside your images in publications and/or within the supplementary materials of your studies. These parameters should include the following information: Camera type, camera brand, camera model, lens brand, lens type, lens model, ISO, shutter speed, lens-subject-distance, light conditions, optional equipment used, etc. You will find a pre-generated template called "Photography and Post-Processing Documentation Sheet I" (DOI: 10.5281/zenodo.7607277) in the Supplementary Materials of this protocol that includes all this information that you can use to document your camera specifications and photography parameter settings. This sheet also provides the opportunity to reliably document your DStretch® transformation process.
Image post-processing:

Open each image file and check it individually. Delete blurry or otherwise insufficient images, and if necessary, retake images. For this reason, the checking and image retaking step should be done immediately after the images have been taken the first time to ensure consistent results (e.g., by reducing the effect of daylight changes if operating in the field). We also recommend that you not change or disable the camera or photo setup until all image data has been collected and checked. The easiest way to review images is to simply use the storage medium (e.g., SD card) from your camera to transfer the data to a computer or laptop.

In any instances where it was not possible to take close-up images, remove the background from the digital image and replace it with a plain black color (hex color code: #000000), using the image processing software of your choice, such as GIMP or Adobe® Photoshop®, and the Magic Wand or Wizard Selection Tool. If necessary, also adjust the contrast, light, and shadow settings to reduce overly exposed or dark areas. Note that DStretch® also provides options to adjust the brightness and contrast using the Auto Contrast and Flat options in the DStretch® Main Panel. Additionally, within the DStretch® Main Panel, a button called "CB" is provided that can be used to even out the RGB color balance of the original image. This option is especially useful when the image has an overall color cast (Harman 2015). Whatever DStretch® adjustments you perform, always make sure to document these changes using the provided documentation sheet called "Photography and Post-Processing Documentation Sheet II" in the Supplementary Materials (DOI: 10.5281/zenodo.7607277). This way, you ensure that the same result can be easily reproduced.
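As an illustration of the background replacement step, the short Python sketch below blanks dark background pixels to pure #000000 with NumPy. This is a simplified, hypothetical stand-in for a Magic Wand selection in GIMP or Photoshop; the luminance threshold of 40 is an assumption that would need adjusting per image.

```python
import numpy as np

def fill_background_black(img, threshold=40):
    """Set every pixel darker than `threshold` (mean RGB) to pure black.

    Crude luminance-based stand-in for a Magic Wand background selection:
    works only when the background cloth is clearly darker than the skin.
    """
    luminance = img.astype(np.float64).mean(axis=2)
    out = img.copy()
    out[luminance < threshold] = 0   # hex #000000
    return out

# Tiny synthetic example: one dark "cloth" row above brighter "skin" rows
photo = np.full((4, 4, 3), 150, dtype=np.uint8)
photo[0, :] = 25
cleaned = fill_background_black(photo)
```

In real use the selection should of course be checked visually; a plain threshold cannot distinguish dark background from dark, taphonomically altered skin.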

Fig. 2: Peabody Museum # 80-61-30/24005.0, a mummified arm and hand from an unknown Peruvian archaeological site, donated to the Peabody Museum of Archeology and Ethnology in 1880. Original photograph taken by A. Deter-Wolf at the Peabody Museum of Archaeology and Ethnology, Harvard University.
Fig. 3: Original photograph after background removal and cropping.




Image enhancement with DStretch®
Open the ImageJ© app (fig. 4).

Fig. 4: Opened ImageJ© program.

Load your image into ImageJ©. You can open the image by dragging and dropping it from the folder into the ImageJ© toolbar (the blank gray bar beneath the icons toolbar) or by using File\Open\navigate to the target folder and select image file\Open.
Within ImageJ© open the image in the DStretch® plugin via Plugins\DStretch (fig. 5).

Fig. 5: Cropped image input into ImageJ© and opened in the DStretch® plugin.

Optional step: If there is a background, as shown in the example, use one of ImageJ©'s selection tools to select a large portion of the subject, including the region of interest that contains color information for the tattoo and surrounding skin (fig. 6). The exact pixel coordinate and selection dimension values of the selected area are reported within the gray bar beneath the icons toolbar of the main ImageJ© menu. The dimensions of the active selection in the shown example are: x = 836, y = 892, w = 1154, and h = 470.

Fig. 6: The selected area eliminates the background while including color information for the tattoo and surrounding preserved skin.

For fast visualization, first set the Scale to an appropriate level like 15 (the default value), and then click on the YBK color channel button (fig. 7). The Scale value affects the enhancement transformation strength; normally, a value between 10 and 15 works for most images. Test different Scale values and choose a value that reveals the visually best results. Afterwards, turn off the active selection by clicking anywhere within the image panel with the left mouse button.

The resulting images at this point often show a great distinction between the skin and tattoo color and can be used for publication. Follow the next steps to learn about more options that can be useful for subsequent drawing and interpretation steps. Note that there are also sections describing mummy tattoo drawings, reconstruction, and description towards the end of this protocol.

Fig. 7: YBK transformed input image in extreme false colors.

For fast DStretch® transformation with subsequent hue shift and tattoo color extraction:
Set the Scale to an appropriate level, like 15 (the default value), and then click on the YBK color channel button (as described in step 5; fig. 7).
Click on the Expert button at the bottom of the Main Panel. This will open the transformed input image in a new DStretch® panel called Expert (fig. 8).
Within the Expert panel, manually adjust the value of the Hue Shift bar so that the tattoos become more visible (fig. 8). A value of 96 degrees (deg) proved effective in the current workflow using the YBK color channel and also works with the LBK channel transformation. However, this depends on the color of the tattoo pigments, the photo quality, and lighting conditions. Use a hue shift that works best for your image data, but make sure to note and report the values used. The hue shift will result in a color distinction that is often better than a color transformation using the predefined color channels alone. However, note that this works best when there is already a relatively high contrast in the original and fast-transformed image. Hue-shifting transformed images with low levels of hue difference between the skin and the tattoo pigments may not lead to higher contrasts. Follow the steps from this point forward to back project the highlighted tattoo colors onto the original image.
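Conceptually, a hue shift rotates every pixel's hue angle while leaving lightness and saturation untouched. The Python sketch below shows this with the standard-library colorsys module; it is an illustrative approximation of what the Expert panel's Hue Shift slider does, not DStretch®'s implementation, and the per-pixel loop is only practical for small images.

```python
import colorsys
import numpy as np

def hue_shift(img, degrees):
    """Rotate the hue of each RGB pixel by `degrees`, preserving
    lightness and saturation (HLS color model)."""
    out = np.empty_like(img)
    shift = (degrees % 360) / 360.0
    height, width, _ = img.shape
    for i in range(height):
        for j in range(width):
            r, g, b = img[i, j] / 255.0
            h, l, s = colorsys.rgb_to_hls(r, g, b)
            rgb = colorsys.hls_to_rgb((h + shift) % 1.0, l, s)
            out[i, j] = np.round(np.array(rgb) * 255)
    return out

# A pure red pixel shifted by 120 degrees becomes pure green
red = np.array([[[255, 0, 0]]], dtype=np.uint8)
shifted = hue_shift(red, 120)
```

This makes clear why the shift helps: hues that sit close together on the color wheel can be rotated into a region where the eye (and the subsequent hue mask) separates them more easily.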

Fig. 8: YBK transformed input image shown in the Expert dialog after shifting the hue to 96 degrees.

Back project the extracted tattoo colors to the original image:
After hue shifting in the Expert panel, click on the Hue Mask Panel button beneath the Hue Shift section (fig. 9).

Fig. 9: Hue-shifted input image shown in the Hue Mask Panel. First, the HSL range was determined by moving the cursor over tattooed areas. After range determination, the Min Hue and Max Hue were adjusted to 240 and 360 degrees, respectively.

In order to extract the pixels corresponding to the tattoo and to eliminate the surrounding colors of the skin, the HSL range of the region of interest must be determined:

Slowly slide the cursor over the tattooed area(s) and note the HSL values shown in the gray bar beneath the icon bar of the main ImageJ© menu panel (fig. 9). The HSL value differs from pixel to pixel, but similarly colored pixels have similar HSL values. It is advisable to zoom in closely, as scattered noise pixels with extremely different HSL values are regularly hidden within visually monochromatic regions. Avoid these pixels, which are easily identified by their distinct hue. At this point, you have two options:

1) Estimate a rough mean value of the pixels that characterize your region of interest and define a suitable minimum-to-maximum range around it. The approximate mean in our example was 300; around this mean, we determined a range from a minimum of 240 to a maximum of 360, i.e., a deviation of ±60 around the mean.

2) Explore the HSL values across different areas of your region of interest to get a feeling for the range. Once you are confident in your choice of minimum and maximum, subtract and add 10, respectively, to obtain a range that is likely to perform well.

Both options may require some time and experimentation to define a suitable range that excludes most of the surrounding areas.
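The first option above can be approximated numerically: sample pixels from the region of interest, compute a mean hue, and pad it by a chosen deviation. A hedged sketch (the function name is ours, and the ±60 padding mirrors the example values in the text):

```python
import colorsys

def hue_range(pixels, pad=60):
    """Estimate a (min, max) hue window in degrees from sampled
    (r, g, b) region-of-interest pixels, padded by +/- `pad` degrees.
    NB: a plain mean ignores hue wrap-around near 0/360 deg."""
    hues = [colorsys.rgb_to_hls(r / 255, g / 255, b / 255)[0] * 360
            for r, g, b in pixels]
    mean = sum(hues) / len(hues)
    return max(mean - pad, 0), min(mean + pad, 360)
```

With pixels sampled around a magenta-toned tattoo (hue near 300), this returns approximately (240, 360), matching the range used in the worked example.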

Once the HSL range has been identified in the previous step, the minimum-to-maximum hue range can be entered in the Hue Mask Panel. In our case, we set the Min Hue value to 240 (default value = 0) and the Max Hue value to 360 deg (= default value) (fig. 9), according to the previously determined values.
Click on the Do Mask button. This will open a small new window called Lightness Test (fig. 10).

Fig. 10: Opened Lightness Test window before hue mask creation.

Leave all four values in the Lightness Test window at their defaults (S min = 0.00, S max = 1.00, L min = 0.00, L max = 1.00) and click on the OK button. As a result, you will get just the extracted tattoos on a plain black background (fig. 11).
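The Do Mask step combined with the Lightness Test can be pictured as a per-pixel filter: a pixel is kept only if its hue falls inside the [Min Hue, Max Hue] window and its saturation and lightness fall inside the Lightness Test windows; all other pixels become black. A hedged sketch using the standard colorsys module (the function name is ours, and the defaults mirror the values used in the example, not DStretch®'s internals):

```python
import colorsys

BLACK = (0, 0, 0)

def hue_mask(pixel, hue_min=240, hue_max=360,
             s_min=0.0, s_max=1.0, l_min=0.0, l_max=1.0):
    """Keep an (r, g, b) pixel whose hue (in degrees) and saturation/
    lightness fall inside the given windows; otherwise return black."""
    h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in pixel))
    keep = (hue_min <= h * 360 <= hue_max
            and s_min <= s <= s_max
            and l_min <= l <= l_max)
    return pixel if keep else BLACK
```

With the default saturation and lightness windows left fully open (0.00 to 1.00), as recommended above, only the hue window does the filtering, which is why the result shows the extracted tattoo colors alone on a plain black background.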

Fig. 11: Extracted tattoo pixel color information following the Do Mask command.

Optional step: Floating artifact pixels may persist after the previous extraction step. The Clean button in the Hue Mask Panel offers an option to filter out this additional digital "noise." Click on the Clean button to open a new window called Hue Mask Cleaning and specify the parameters as follows (fig. 12): Leave the Type (= Clean) and Number of extra dilations (= 1) parameters at their default settings. Increasing the two remaining options (Cleaning strength and Number of iterations) makes DStretch® remove noisy pixels more aggressively. If these values are set too high, the region of interest may also be affected and shrink in size, so some trials are needed to find a proper equilibrium. In the illustrated example, we had the most success with Cleaning strength values of 3 to 5 and Number of iterations of 5 to 6. Again, these values will depend on your specific image properties. Depending on the amount and size of the noise, the cleaning step may not always be necessary, as small artifacts may not be visible in the final image following back projection of the extracted tattoo color information.
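The effect of the cleaning parameters can be pictured as a morphological pass over the binary mask: kept pixels with too few kept neighbors are dropped, and the pass is repeated for a number of iterations. A simplified sketch (not DStretch®'s actual algorithm; here `strength` plays the role of a minimum-neighbor threshold):

```python
def clean_mask(mask, strength=4, iterations=5):
    """Iteratively drop mask pixels (True values in a 2-D grid of
    booleans) that have fewer than `strength` kept 8-neighbors."""
    rows, cols = len(mask), len(mask[0])
    for _ in range(iterations):
        nxt = [row[:] for row in mask]
        for r in range(rows):
            for c in range(cols):
                if not mask[r][c]:
                    continue
                neighbors = sum(
                    mask[rr][cc]
                    for rr in range(max(r - 1, 0), min(r + 2, rows))
                    for cc in range(max(c - 1, 0), min(c + 2, cols))
                    if (rr, cc) != (r, c))
                if neighbors < strength:
                    nxt[r][c] = False  # weakly connected pixel -> noise
        mask = nxt
    return mask
```

In this sketch, raising `strength` or `iterations` also erodes the edges and corners of genuine regions, which mirrors the protocol's warning that overly aggressive settings shrink the region of interest itself.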

Fig. 12: Hue Mask Cleaning panel with adjusted cleaning parameters.

Press the OK button in the Hue Mask Cleaning panel after you have adjusted the cleaning parameters. A new window called Hue Mask will open, asking whether to Keep the cleaned image? Press OK to confirm. This will result in a cleaner image with fewer pixel artifacts (fig. 13).

Fig. 13: Cleaned color pixel extraction.

Click on the Add To HM Out button in the Hue Mask Panel to back-project the extracted tattoo information to the original image. This will first open a window called Hue Mask, which asks you to "Replace HM Out image?" Press the OK button to complete the process. You have now projected the extracted and hue-shifted tattoo color information back to the original input image (fig. 14).
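Back projection then amounts to a per-pixel composite: wherever the cleaned mask kept a (non-black) pixel, the extracted hue-shifted color replaces the original; everywhere else the untransformed image shows through. A minimal sketch (the function name is ours):

```python
def back_project(original, extracted, black=(0, 0, 0)):
    """Overlay extracted (masked) pixels onto the original image.
    Both images are 2-D grids of (r, g, b) tuples of equal shape;
    black pixels in `extracted` are treated as transparent."""
    return [[ext if ext != black else orig
             for orig, ext in zip(orig_row, ext_row)]
            for orig_row, ext_row in zip(original, extracted)]
```

This is why only the highlighted tattoo information appears in false color in the final image (fig. 14), while the surrounding skin retains its original, untransformed appearance.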

Fig. 14: Final image showing the extracted and back-projected tattoo color pixel information after YBK transformation onto the original and untransformed input image.

Save the final image:
Save the final image as a JPEG file by pressing the Save JPG button on the right side of the menu. This will open a new dialog window called Save File as Jpeg, which asks you to adjust the JPEG file's image quality.
Adjust the JPEG quality value to 100 to save the image at the highest possible quality. In this dialog you will also find the Save matrix also? option, which saves a TXT file containing numeric metadata on the applied transformation matrix. This option may be relevant if you want to apply the same basic transformation to a batch of images. However, we have not found it particularly useful so far and consider transparent documentation of the values and steps used more important.
To save the JPEG image file, press OK. Navigate to the target folder, name the file, and click on save.
Subsequent tattoo documentation
Open an illustration software program of your choice, such as Adobe® Illustrator®, Photoshop®, or ProCreate®, on a desktop, laptop, or tablet. Although we used ProCreate® on an Apple® iPad for the following steps, the basic concept and progression remain the same regardless of software or environment. The following steps require a digital drawing device, such as a tablet with a drawing pen if you work with a computer or laptop, or an ApplePen when using an Apple® iPad.
Open your illustration software and import the original, untransformed input image and the DStretch®-transformed output images as base layers. For easier navigation, rename the layers accordingly (fig. 15).

Fig. 15: All transformed and untransformed images of the same subject and region of interest, opened in ProCreate® installed on an Apple® iPad.

On top of these base layers, create three new empty layers. The lowest of these new layers will be used to indicate areas of uncertainty (see step 6). The tattoos will be drawn in the middle layer (see step 4), and the body outlines will be drawn in the top layer (see step 5). Rename these three layers accordingly (fig. 15).
To redraw the tattoos, navigate to the appropriate layer and select a round brush with a small brush size and black color (hex color code: #000000). Information on the ProCreate® brush used in this example can be found in the Supplementary Materials (DOI: 10.5281/zenodo.7607277).
Outline each tattoo in the Tattoos layer (fig. 15), tracing all edges as closely as possible (fig. 16.2). If necessary, zoom in for a better view and steadier results (fig. 16.1). It also helps to switch between the different base reference layers to get a better feel for the outlines, which may appear more clearly in different base images. Be sure to fully outline any internal features and non-tattooed areas, which will otherwise be infilled in the next step, and make sure that each outline is entirely closed so that the tattooed area can be filled with the bucket tool in the next step. If needed, use the normal round brush with a distinct gray color (hex color code: #797979) to outline areas where the tattoo is not clearly visible and the outline is uncertain (not needed in this example).
Once a tattoo part is completely outlined, fill the outlined area(s) using the paint bucket tool (in black color; hex color code: #000000 and #797979 for uncertain tattoos) (fig. 16.3 and 17).
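The requirement that each outline be fully closed follows from how a paint-bucket flood fill works: starting from the clicked pixel, the fill spreads to same-colored neighbors and stops only at differently colored pixels, so any gap in the outline lets the fill leak into the rest of the canvas. A generic sketch of the algorithm (not ProCreate®'s implementation):

```python
from collections import deque

def flood_fill(grid, start, fill):
    """Flood-fill a 2-D grid of color values from `start` (row, col),
    replacing the 4-connected region of the start color with `fill`."""
    rows, cols = len(grid), len(grid[0])
    target = grid[start[0]][start[1]]
    if target == fill:
        return grid
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == target:
            grid[r][c] = fill
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```

With a closed outline ('K' pixels) around an interior region, the fill stays inside; remove one outline pixel and the same call would flood the background as well.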

Fig. 16.1 to 16.3: (Left to right) Detail of tattooed area, outlined and filled in the new layer using Brush and Paint Bucket tools.

Continue with the next tattooed area(s) until all parts have been outlined and filled out.
Once all tattoos have been traced and redrawn, switch to the uppermost Body Outline layer (fig. 15) to trace the outlines of the imaged body part, including anatomical features and taphonomic damage. Following this step, it is possible to show the tattoo illustration as a standalone image (fig. 17). For this step, continue using a round brush with a slightly thicker diameter (fig. 18.3 and 18.4).
Lastly, switch to the Uncertainty layer (fig. 15) in which missing and/or damaged tattoo areas may be depicted. This may include areas in which tattoos are not visible even following the application of DStretch® and areas where the skin is too distorted, folded, cracked, or otherwise taphonomically damaged. Use a colorblind-friendly, blue-dotted brush (hex color code: #3154b0) to fill these areas. We used the ProCreate® standard Textures/Decimals brush (brush size = 9, brush opacity set to 100%, and then the entire layer's opacity reduced to 60% for better visibility of overlaying layers) (fig. 18.4).
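Reducing the layer's opacity to 60% corresponds to standard alpha blending: each visible pixel is a weighted mix of the layer color and whatever lies beneath it. A minimal sketch of the per-channel arithmetic (illustrative; the exact compositing in your illustration software may differ):

```python
def blend(fg, bg, opacity=0.60):
    """Composite a foreground (r, g, b) pixel over a background pixel
    at the given layer opacity (0.0-1.0): out = a*fg + (1 - a)*bg."""
    return tuple(round(opacity * f + (1 - opacity) * b)
                 for f, b in zip(fg, bg))
```

At 60% opacity, the blue uncertainty marks therefore remain clearly visible while still letting the underlying layers show through.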

Fig. 17: Completed tattoo drawings, body outlines, and uncertain and damaged areas.

Use the export settings of your chosen illustration software to save the resulting images in your preferred format(s) (e.g., JPEG, PNG, TIFF). We recommend exporting at least four versions of the images, including the following:

a) Untransformed input image + tattoo drawings (without anatomical outlines; fig. 18.5)

b) Transformed input image + tattoo drawings (without anatomical outlines; fig. 18.6)

c) Anatomical outlines + tattoo drawings (without missing information; fig. 18.7)

d) Anatomical outlines + tattoo drawings (with missing information; fig. 18.8)

Once all images have been exported, you must add a digital scale bar to any image that you wish to publish. The senior author has recently published a brief protocol on how to implement digital scale bars into images for archaeologists and anthropologists (Göldner 2022). Please refer to that protocol for further instructions. Examples of the current effort with inserted scale bars are shown in figures 18.1 to 18.8.

As recommended by Quellec et al. (2015), transformed DStretch® images should always be published alongside the original, untransformed images. Furthermore, all values and applied changes should be indicated, ideally in the image caption or supplementary materials of your publication. As applicable, it may be simplest to cite this (or another) protocol when it is applied, while also noting any modifications, alongside the images and the "Photography and Post-Processing Documentation Sheet" provided in the Supplementary Materials of this protocol (link: ; DOI: ).

Fig. 18.1 to 18.8 (top to bottom and left to right): Full set showing all final images. 1) Original input image; 2) after YBK transformation; 3) after YBK transformation and a 96-degree hue shift; 4) after YBK transformation, 96-degree hue shift, pixel cleaning, and back projection to the untransformed input image; 5) tattoo drawings on the original, untransformed image; 6) tattoo drawings on the YBK transformed image; 7) tattoo drawings and body outline; 8) tattoo drawings and body outline, with indication of damaged areas. Tattoo drawings created by Dominik Göldner.

Tattoo reconstruction drawing
When considering publication or presentation of original photographs showing mummified, tattooed individuals, as well as any images processed using the DStretch® technique outlined herein, always be considerate of the desires of descendant communities, the ethical codes and professional obligations of archaeologists and museum professionals, and any institutional, governmental, or publisher policies and procedures regarding the analysis and display of human remains (e.g., Licata et al. 2020; McManamon 2017; Marquez-Grant and Fibiger 2011). In many instances, reconstruction drawings based on digital documentation and analysis can provide clearer depictions of preserved tattoos while avoiding the exhibition of sensitive images.

To this end, a reconstruction drawing of tattoos can be created using the illustration software of your choice. In the example described here, we used the ProCreate® software installed on an Apple® iPad® in combination with an ApplePen®. To create the drawing, open a new project file. First, create a template of the body part where the analyzed tattoo is located. For the current project, we created a template of the right hand as a base layer. Above this, we created another layer in which we drew the tattoo reconstructions.

Use your enhanced images and tattoo drawings as references to draw the tattoos, importing the enhanced images as background layers in the new project file if necessary. The reconstruction process is the most subjective part of this protocol and demands more sophisticated skills, potentially calling for the services of a professional illustrator or artist. If possible, create multiple reconstructions to build a broader sense of potential outcomes, which may also widen the interpretive frame (fig. 29). Once the reconstruction is done, turn off any unnecessary layers before exporting or saving the final image.

Fig. 29: Tattoo reconstruction drawings showing a total of 16 rhomboid motifs, each of which can be subdivided into four smaller rhomboids with hollow centers. Versions created by each author exhibit small variations and demonstrate potential interpretive value in creating multiple reconstruction drawings. The images are not to scale. Dominik Göldner created the left reconstruction drawing, and Aaron Deter-Wolf created the right image.


Tattoo description
Descriptions of the identified tattoos are an integral part of the documentation and should be reported in the results section of your study. Aim to give a concise but detailed description of what can be observed. Use the enhanced images and include additional indicators, such as colorblind-friendly arrows or detail boxes, to support the written descriptions. When providing the initial description of the tattoos, rely on simple, descriptive language. Avoid interpretations of what the motifs may mean or symbolize; include that information in the study's interpretation section instead. If possible, compare your description with those in related publications in order to incorporate common terminology and identify any culture-specific terminology. Describe the hue of the preserved tattoos as they appear to the naked eye, and do not speculate as to the pigments, tools, or techniques used in their creation without citing supporting historical or physical data.

When describing the tattoos, use geometric descriptors to reference shape, and provide a count of subfeatures if several similar motifs are present. Use anatomical terms to describe the location, direction, and position of the tattoos on the body. Providing metric measurements is encouraged, although disclaimers should be issued regarding shrinking or stretching of the skin and tattoos due to post-depositional processes. It can be useful to try to understand the distortion direction and patterns of the dried skin, as this might help you mentally visualize an approximation of the original, undistorted tattoo. Be cautious with this, however, as such a description can easily become overly subjective.