DeepTraCE
Deep-learning based pipeline for analysis of whole-brain light sheet microscopy images

DeepTraCE is a deep-learning based pipeline for analysis of whole-brain light sheet microscopy images. Our pipeline is optimized for recognition of cortical axons and builds largely on TRAILMAP (Friedmann et al., 2020). The final output of this pipeline includes skeletonized images of the axon segmentations from each brain, transformed to a common atlas, as well as quantifications of labeling density in each brain region.
**This is the original step-by-step version of DeepTraCE. A streamlined version of the pipeline, written entirely in Python, is available at https://github.com/jcouto/DeepTraCE/tree/gui**
Code written by Michael Gongwer and Drew Friedmann.
Models can be found at https://ucla.box.com/s/zc4ib0mo297h3wdbbjdd237r2mzd1crf
Sample data can be found at https://ucla.box.com/s/2bm6disejkwrmspugh8s4bmc3yq4p531
DeepTraCE Pipeline
Step 1: Imaging
In our protocol, we image axons in the 640nm channel and autofluorescence in the 488nm channel. This generates two stacks of individual TIF images separated into two folders, one for each channel (examples below). We will be using the 640nm channel to extract the axons and the 488nm channel to align the brains to an atlas, which allows automated quantification by brain region.
<pre> 488nm (autofluorescence) 640nm (axons) <img src="https://user-images.githubusercontent.com/52982623/150884325-964e3445-2c41-4307-a399-f067f617367b.png" width="200"> <img src="https://user-images.githubusercontent.com/52982623/150884307-d6850d3c-bff5-4dc4-8e40-d672646ae1c2.png" width="200"> </pre>

Step 2: TRAILMAP
TRAILMAP is the Deep-Learning pipeline used to convert the raw 640nm image stack to an image stack of the same size containing a map of the probability that each pixel contains an axon. Pixel values of 1 indicate high likelihood that an axon is contained in that region, and pixel values of 0 indicate low likelihood. This probability is calculated using a 3D convolutional network that has been trained to recognize axons.
<pre> 640nm (axons) Probability map (TRAILMAP output) <img src="https://user-images.githubusercontent.com/52982623/150884307-d6850d3c-bff5-4dc4-8e40-d672646ae1c2.png" width="200"> <img src="https://user-images.githubusercontent.com/52982623/150887072-0f8c9b11-595d-415d-9779-f373741df6d2.png" width="200"> </pre>

In the DeepTraCE pipeline, we perform this segmentation three times, once with each of three models optimized to recognize axons at varying depths in the tissue. The segmentations from these three models will be concatenated in a later step.
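To make the probability map concrete, here is a toy sketch of how such a map can be binarized into an axon mask. The 3×3 "image" and the 0.5 cutoff are made up for illustration only; they are not values used by TRAILMAP or DeepTraCE.

```python
# Toy illustration (not part of TRAILMAP): binarize an axon probability
# map by thresholding. Values near 1 indicate a likely axon, values
# near 0 indicate background. Map and cutoff are invented for the demo.
prob_map = [
    [0.02, 0.91, 0.88],
    [0.10, 0.95, 0.07],
    [0.01, 0.85, 0.04],
]

THRESHOLD = 0.5  # illustrative cutoff only; real thresholds are tuned later

binary = [[1 if p >= THRESHOLD else 0 for p in row] for row in prob_map]
print(binary)  # -> [[0, 1, 1], [0, 1, 0], [0, 1, 0]]
```

The bright column of high-probability pixels survives as 1s; everything else is zeroed out.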
- Install TRAILMAP & install necessary dependencies (if this is already done skip to part b)
- Clone GitHub repository to drive
- Open Anaconda Prompt (install Anaconda if not already installed)
- Create an environment for TRAILMAP and install dependencies using the following line:
conda create -n trailmap_env tensorflow-gpu=2.1 opencv=3.4.2 pillow=7.0.0
*for troubleshooting see readme on github
- Select model to use for axon segmentation
- From the GitHub repository, open segment_brain_batch.py in a Python editor and, in line 18, change the path to the location of the model weights you would like to use
- Activate TRAILMAP environment & enter TRAILMAP directory
- Open Anaconda prompt
- Activate environment using the following line:
conda activate trailmap_env
- Enter directory using the following, using the actual directory in place of the one shown:
cd C:/Users/Michael/Documents/TRAILMAP
- Run TRAILMAP inference step on the 640nm channel of the brain of interest
- Enter the following line, replacing input_folder1, input_folder2, etc. with the directories for the brain(s):
python segment_brain_batch.py input_folder1 input_folder2 input_folder3
- This outputs the axon probability map to the same directory as the original folder, but with “seg-” added to the beginning of the folder name.
- Repeat segmentation with other models
- Change the name of the segmentation folders so they are not overwritten when you re-segment the brain with a new model (example: change name of the folder “seg-640_NAc1” to “model1_seg-640_NAc1”)
- Repeat steps b through d above using the other models
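The renaming step above can be scripted if you have many brains. This is a hedged sketch, not part of the DeepTraCE code; the folder name follows the README's "seg-640_NAc1" example, and the paths are placeholders you would adapt to your data.

```python
# Hedged sketch: rename a TRAILMAP output folder so a second model's
# segmentation does not overwrite the first (per the step above).
from pathlib import Path

def tag_segmentation(seg_dir: str, model_name: str) -> Path:
    """Rename e.g. 'seg-640_NAc1' to 'model1_seg-640_NAc1'."""
    src = Path(seg_dir)
    dst = src.with_name(f"{model_name}_{src.name}")
    src.rename(dst)
    return dst

# Example usage (hypothetical path; the folder must exist):
# tag_segmentation("D:/data/seg-640_NAc1", "model1")
```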
NOTE: In Windows Explorer, to copy the path of a file, use Shift+Right-click and select "Copy as path"
Step 3: Scale brain to a 10μm space and convert to 8-bit using ImageJ
From this point forward, we will want our brains in a 10μm space (where each voxel is 10μm × 10μm × 10μm), as this is the resolution of the atlas we use for registration. We perform this downscaling step in ImageJ using values calculated from our imaging parameters, and if you would like to do this in batch we have provided a macro for this purpose. In addition, this macro converts the files to 8-bit as opposed to 16- or 32-bit, as 8-bit images are most compatible with the later steps. If you would like to do this manually, use the Image->Scale and Image->Type functions in ImageJ; the instructions below are for use of the macro.
These steps must be performed on both the raw 488nm channel and the probability map from TRAILMAP, as these are the two images that feed into the next step of the pipeline.
- Open ImageJ (NOTE: our macros are compatible with ImageJ v1.52 but not v1.53)
- Click “plugins” → “macros” → “edit” and open the macro file for brain scaling (macro 1)
- In line 2, add the path for the first .tif file of each brain you want to analyze to the array, separating them by commas
- You will want to scale both the TRAILMAP output and the 488nm channel for each brain. If you are planning to concatenate multiple models, you will need to do this for all of the probability maps.
- NOTE: macros are very finicky; you may need to escape each backslash in the directory paths by doubling it (\ becomes \\)
- Run the macro using ctrl-R
- This will output the brain to a separate folder in the same directory, with “_scaled” on the end
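The arithmetic behind this downscaling can be sketched in Python. The voxel sizes below are hypothetical stand-ins for your own imaging parameters, and the 8-bit conversion shown is the simplest linear rescaling (ImageJ's actual conversion depends on the current display range).

```python
# Hedged sketch of the scaling math the ImageJ macro performs.
# Target is a 10 um isotropic atlas space; input voxel sizes are
# hypothetical examples, not values prescribed by DeepTraCE.
TARGET_UM = 10.0

def scale_factors(xy_um: float, z_um: float) -> tuple:
    """Per-axis scale factors to bring a stack into 10 um isotropic space."""
    return (xy_um / TARGET_UM, xy_um / TARGET_UM, z_um / TARGET_UM)

def to_8bit(value_16bit: int) -> int:
    """Map a 16-bit intensity (0-65535) onto 8-bit (0-255), full range."""
    return value_16bit * 255 // 65535

# Hypothetical acquisition: 2 um pixels in x/y, 5 um steps in z
print(scale_factors(2.0, 5.0))  # -> (0.2, 0.2, 0.5)
print(to_8bit(65535))           # -> 255
```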
Step 3b (optional step to improve alignment): Manually rotate each brain in ImageJ
We have found that elastix, the program used to align the brain with an atlas, particularly struggles with alignment of brains that require large degrees of rotation around the X, Y, or Z axis. To account for this, we incorporated a step where we manually rotate the brains to be flush with the atlas using an ImageJ plugin called TransformJ. In our imaging protocol, we image slightly past the midline, which allows visualization of blood vessels along the midline in the autofluorescence channel. Our goal in this rotation is to make those midline veins all appear in the same z plane, thus making the brain flush with the atlas.
- Install ImageScience’s TransformJ (https://imagescience.org/meijering/software/transformj/)
- In ImageJ, go to help->update
- Click “add update site,” check “ImageScience,” and close. After restarting ImageJ, TransformJ should be installed.
- Open 10um scaled 488nm image in ImageJ
- Assess rotation of the image by scrolling to the midline and checking whether all blood vessels are visible in the same z plane or whether they are rotated. If they are rotated, assess which direction the image needs to be rotated to flatten them into a single plane.
- Click plugins->ImageScience->TransformJ->TransformJ Rotate
- Select approximate degrees of rotation for each axis.
- Positive rotation in the X axis brings top of screen toward you
- Positive rotation in the Y axis brings right side of screen towards you
- Positive rotation in the Z axis brings top of screen to the right (clockwise)
- Click “ok” to run the rotation. Check if the blood vessels are now aligned. If not, repeat the process with different angles of rotation until they all appear in the same plane. Record the angles of rotation used for the final image.
- Make a substack of the rotated image in the z plane (image->stacks->tools->create substack) to crop out black borders that do not contain brain tissue. Record the slices that were included in the substack.
- Save the image as “10umrc.tif” (10um image, rotated and cropped).
- Open the 10um segmented 640nm image in ImageJ and apply the exact same transformation.
Step 4: Register brain autofluorescence channel (488nm) to atlas using Elastix
Alignment/registration is the process of taking your raw data and warping it such that it is aligned with a standardized reference atlas. This is performed in a command-line-based program called Elastix (https://elastix.lumc.nl/). The first step involves aligning the autofluorescence channel to an atlas image, and the next step involves applying the exact same transformation to the channel containing the label of interest, which is done in Transformix (a subsection of the Elastix program). Once your brain is aligned to an atlas, you then know which groups of pixels correspond to each brain region, and this information can be used for quantification and visualization.
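Once every voxel carries an atlas region ID, per-region quantification reduces to a tally. The sketch below uses toy 1-D arrays and invented region IDs purely to illustrate the idea; it is not the DeepTraCE quantification code.

```python
# Hedged sketch: after registration, each voxel has an atlas region ID,
# so labeling density per region is axon voxels / total voxels per ID.
# Arrays and IDs below are toy data, flattened to 1-D for simplicity.
from collections import Counter

atlas_ids = [101, 101, 101, 205, 205, 205, 205, 0]  # 0 = outside the brain
axon_mask = [1,   0,   1,   1,   0,   0,   0,   0]  # binarized segmentation

voxels_per_region = Counter(i for i in atlas_ids if i != 0)
axons_per_region = Counter(
    i for i, ax in zip(atlas_ids, axon_mask) if ax and i != 0
)

density = {r: axons_per_region[r] / voxels_per_region[r]
           for r in voxels_per_region}
print(density)  # region 101: 2 of 3 voxels labeled; region 205: 1 of 4
```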
- Open Command Prompt. If elastix is not installed, follow the directions in the elastix manual.
- Ensure that you have the 10um cropped atlas and the affine and bspline parameter files
- Run the following command, replacing the placeholders with the correct paths, using affine as parameter 1 and bspline as parameter 2
