Tutorial: Topaz Integration in cryoSPARC

Last Updated: November 29, 2019

The Topaz wrapper in cryoSPARC, introduced in v2.12, provides access to the deep learning models used in Topaz, both to automatically pick particles given a subset of previously picked particles and to denoise micrographs. This new wrapper consists of four jobs:

  • Topaz Train
  • Topaz Cross Validation
  • Topaz Extract
  • Topaz Denoise

The first three jobs relate to particle picking and the final job relates to micrograph denoising.

Topaz is a particle detection tool created by Tristan Bepler and Alex J. Noble:

Bepler, T., Morin, A., Rapp, M. et al. Positive-unlabeled convolutional neural networks for particle picking in cryo-electron micrographs. Nat Methods 16, 1153–1160 (2019) doi:10.1038/s41592-019-0575-8
Bepler, T., Noble, A.J., Berger, B. Topaz-Denoise: general deep denoising models for cryoEM. bioRxiv 838920 (2019) doi: https://doi.org/10.1101/838920

Structura Biotechnology Inc. and cryoSPARC neither license Topaz nor distribute Topaz binaries. Please ensure you have your own copy of Topaz installed and licensed under the terms of its GNU General Public License v3.0.


Setup

Installing Topaz

As cryoSPARC only wraps installed Topaz instances rather than providing Topaz directly, Topaz must first be installed in a Python environment accessible to cryoSPARC.

It is not recommended to install Topaz into the Anaconda Python environment that ships with cryoSPARC, as that environment may be destroyed upon an update to cryoSPARC. It is best to use your own Anaconda (Python 3) environment, creating a new one if one doesn't already exist. Please ensure the Topaz executable is accessible by the cryoSPARC user account, and accessible on each worker node.

Follow the instructions for installing Topaz using Anaconda found here under the Installation section: https://github.com/tbepler/topaz
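
For example, a dedicated environment can be created and Topaz installed into it as follows. This is a minimal sketch; the channel names and Python version follow the Topaz README at the time of writing, so defer to the repository instructions if they differ:

# create a dedicated environment so cryoSPARC updates cannot remove Topaz
conda create -n topaz python=3.6
conda activate topaz
# install Topaz from its Anaconda channel, with PyTorch as a dependency
conda install topaz -c tbepler -c pytorch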

Once Topaz is installed, the path to its executable must be found. The instructions below detail how to find it.

Finding Default Topaz Executable Path

The following is one possible method of finding this path on Linux. This method assumes that Topaz was installed using Anaconda or pip.

  1. If you have not already done so, follow the instructions in the Topaz GitHub Repository to install Topaz using Anaconda.

  2. Open a Terminal and enter:

which topaz
  3. The previous command will output the path to the default Topaz executable. If this command does not output the desired path, try the instructions below for finding instance-specific Topaz executable paths.
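
To confirm that the executable actually runs under the cryoSPARC user account, invoke it directly; printing its built-in help is a safe check:

topaz --help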

Finding Python Instance Specific Topaz Executable Path

  1. If you have not already done so, follow the instructions in the Topaz GitHub Repository to install Topaz using Anaconda.

  2. Open a Terminal and enter:

which python
  3. Take the output path of the "which python" command and remove directories from the end of the path until the path ends in a directory named "bin". Insert this altered path in place of <PATH> in the command below, then enter the command into the Terminal.

find <PATH> -executable -type f -name "topaz"

  4. The previous command will output every path to a Topaz executable within the current Python instance. There should be only one path output, but if multiple paths appear, inspect each executable and determine which one should be used for particle picking.
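
The two steps above can be combined into a single command. This sketch assumes a standard Anaconda layout in which the Topaz executable sits in the same bin directory as the environment's python:

# strip the trailing "python" from the interpreter path, then search that bin directory
find "$(dirname "$(which python)")" -maxdepth 1 -type f -name topaz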

Reference and Workflow

Particle Picking: Topaz Train and Cross Validation

To perform particle picking using Topaz, a model must first be trained using either the Topaz Train job or the Topaz Cross Validation job. Both of these jobs require the same inputs and produce the same outputs, as listed below:

Inputs

  • Particle Picks
  • Micrographs

Outputs

  • Topaz Model
  • Micrographs

Parameters

The Topaz Train and Topaz Cross Validation jobs share the same set of parameters. The basic parameters are detailed below:

  • Path to Topaz Executable

    • The absolute path to the Topaz executable that will run the job.
  • Downsampling Factor

    • The factor by which to downsample micrographs. It is highly recommended to downsample micrographs to reduce memory load and improve model performance. For example, a recommended downsampling factor for a K2 Super Resolution (7676x7420) dataset (e.g. EMPIAR-10025) is 16.
  • Learning Rate

    • The value that determines the extent by which model weights are updated. Higher values cause training to approach an optimum faster but may prevent the model from reaching the optimum itself, potentially resulting in worse final accuracy.
  • Minibatch Size

    • The number of examples that are used within each batch during training. Lower values will improve model accuracy at the cost of increased training time.
  • Number of Epochs

    • The number of iterations through the entire dataset that training performs. A higher number of epochs naturally leads to longer training times. The number of epochs does not have to be optimized, as the train and cross validation jobs automatically output the model from the epoch with the highest precision.
  • Epoch Size

    • The number of updates that occur each epoch. Increasing this value will increase the amount of training performed in each epoch in exchange for slower training speed.
  • Train-Test Split

    • The fraction of the dataset to use for training. For example, a value of 0.2 will use 80% of the input micrographs for training and the remaining 20% for testing. It is highly recommended to use a train-test split greater than 0.
  • Expected Number of Particles

    • The average expected number of particles within each micrograph. This value does not have to be exact, but a reasonably accurate value is necessary for Topaz to perform well. This parameter has no default value and must be set by the user. It should be noted that if this parameter is lower than the average number of labeled picks input into the training job, the training job will switch to the PN loss function, which was experimentally found to perform worse than the GE-binomial loss function.
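
For reference, these jobs invoke the standalone Topaz command line internally. As a rough sketch only, some of the basic parameters above correspond to flags on a standalone topaz train call; the flag names below follow the Topaz README at the time of writing and the paths are hypothetical, so verify against topaz train --help before relying on them:

# -n is the expected number of particles per micrograph (hypothetical paths throughout)
topaz train -n 300 \
    --train-images processed/micrographs/ \
    --train-targets processed/particles.txt \
    --save-prefix saved_models/model \
    -o saved_models/training_log.txt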

The advanced parameters are detailed below:

  • Pixel sampling factor / Number of iterations / Score threshold

    • Parameters that affect the preprocessing of micrographs. It is recommended not to change these parameters.
  • Loss function

    • The loss function used to train the model. GE-binomial is recommended. The PU loss function is a non-negative risk estimator approach, and PN is a naive approach in which unlabeled data is treated as negative during training. Both of these loss functions were found to perform poorly compared to the GE-binomial and GE-KL loss functions in the paper "Positive-unlabeled convolutional neural networks for particle picking in cryo-electron micrographs" by Bepler, T. et al. [1], the developers of Topaz. The same paper found that while GE-binomial and GE-KL performed similarly in most cases, GE-binomial outperformed GE-KL in a few, hence the recommendation.
  • Slack

    • The weight on the loss function if GE-binomial or GE-KL is selected as the loss function. It is recommended to keep the slack at -1, which uses the default value. If you wish to change this parameter, it is recommended to read the paper by Bepler, T. et al. [1] first.
  • Autoencoder weight

    • The weight on the reconstruction error of the autoencoder. An autoencoder weight of 0 will disable the autoencoder. According to the paper by Bepler, T. et al. [1], the autoencoder improves classifier performance when using fewer labeled data points. However, the degree of improvement diminishes with more labeled data points until it begins to negatively affect classifier performance due to over-regularization. The paper recommends using an autoencoder weight of 10 / N when N ≤ 250 and an autoencoder weight of 0 otherwise, where N is the number of labeled data points (see the short sketch after this list).
  • Regularization

    • The L2 regularization parameter on the loss function. Values less than 1 can be used to improve model performance, but values greater than 1 are likely to impede training.
  • Model architecture

    • ResNet (residual network) is a neural network architecture popular for mitigating the vanishing gradient problem. Note that average pooling cannot be used with the ResNet8 model architecture.
    • Conv stands for convolutional neural network, a popular architecture for computer vision problems. Note that max pooling cannot be used with Conv model architectures.
    • According to the Topaz GitHub page, ResNet8 provides a balance of good performance and receptive field size. Conv31 and Conv63, which have smaller receptive fields, can be useful when less complex models are desired. Conv127 should not be used unless quite complex models are required. The following are the receptive fields for each architecture as shown in the aforementioned GitHub page:

      • resnet8 [receptive field = 71]
      • conv127 [receptive field = 127]
      • conv63 [receptive field = 63]
      • conv31 [receptive field = 31]
  • Number of units in base layer

    • The number of units in the base layer. The ResNet8 model architecture will double the number of units during convolutions and pooling. For the Conv model architectures, the scaling of units can be specified using the Unit scaling parameter.
  • Dropout rate

    • The probability that a unit is disabled for one batch iteration during training. Dropout is sometimes useful for preventing overfitting. Low dropout rates greater than 0 and less than 0.5 can be used when the Topaz model begins overfitting during training.
  • Batch normalization

    • Enabling batch normalization reduces the internal covariate shift of hidden units during training. This enables higher learning rates, reduces overfitting, and provides some regularization. It is recommended to use batch normalization.
  • Pooling method

    • Pooling method is the type of layer used to reduce the spatial complexity of layers within the model. Pooling methods improve training speed in exchange for some information loss.
    • Max pooling uses the max of the values within the pooling kernel as the output value of the kernel. Note that max pooling cannot be used with Conv model architectures.
    • Average pooling uses the average of the values within the pooling kernel as the output value of the kernel. Note that average pooling cannot be used with the ResNet8 model architecture.
    • There is no strong recommendation regarding pooling method.
  • Unit scaling

    • The factor by which to scale the number of units during convolutions and pooling when using Conv model architectures.
  • Encoder network unit scaling

    • The factor by which to scale the number of units during convolutions and pooling within the autoencoder architecture. Only applies when an autoencoder weight greater than 0 is used.
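
As a quick illustration of the autoencoder weight heuristic referenced above (10 / N for N ≤ 250, 0 otherwise), consider the following shell sketch; the pick count N=200 is a made-up example value:

# N = number of labeled particle picks used for training (hypothetical value)
N=200
if [ "$N" -le 250 ]; then
    # awk performs the floating-point division 10 / N
    awk -v n="$N" 'BEGIN { printf "autoencoder weight: %.3f\n", 10 / n }'
else
    echo "autoencoder weight: 0"
fi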

The Topaz Cross Validation job includes unique parameters that enable the user to select which parameter to vary and how to vary the parameter during training. These parameters are:

  • Parameter to Optimize
  • Number of Cross Validation Folds
  • Initial Value to begin with during Cross Validation
  • Value to Increment Parameter by during Cross Validation
  • Number of Times to Increment Parameter

The first parameter selects which parameter to vary. The number of cross validation folds indicates how many training runs to perform for each candidate value during cross validation. The initial value, the increment value, and the number of increments specify which values to test. For example, choosing the parameters found in the table below will result in the Topaz Cross Validation job testing each of the learning rates 0.0001, 0.0002, and 0.0003 two times. After finding the learning rate yielding the best results, it will use that learning rate to perform the final training.

Example Parameters

| Parameter | Value |
| --- | --- |
| Parameter to optimize | Learning rate |
| Number of cross validation folds | 2 |
| Initial value to begin with | 0.0001 |
| Value to increment by | 0.0001 |
| Number of times to increment parameter | 3 |
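
Conceptually, these example parameters describe the following sweep, where each echo stands in for a full training run:

# initial value 0.0001, incremented by 0.0001, 3 values, 2 folds each
for lr in 0.0001 0.0002 0.0003; do
    for fold in 1 2; do
        echo "train candidate model: learning rate $lr, held-out fold $fold"
    done
done
# the value with the best held-out precision is then used for one final training run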

There are other advanced training and model parameters, such as the selection of the pooling layer or encoder network, that will not be discussed in depth in this introductory user guide. These parameters can potentially improve the Topaz model. It should be noted that some of these parameters are incompatible with certain model architectures; the job will output an error if incompatible parameters are used together. The following parameter combinations are forbidden:

  • Parameters incompatible with ResNet architecture:

    • Average Pooling
    • Autoencoder/Encoding network
  • Parameters incompatible with Convolutional Neural Network architecture:

    • Max Pooling
    • Dropout

Similarities and Differences between Topaz Train and Topaz Cross Validation

The Topaz Train and Topaz Cross Validation jobs serve the same purpose: both use particle picks and micrographs to produce models that can then be used to automatically pick particles.

The Topaz Cross Validation job is different in that it runs multiple instances of the Topaz Train job while varying a specified parameter, enabling the job to find an optimal value for a certain parameter. The Topaz Cross Validation job then uses the optimal parameter value to perform one last Topaz Train job and produces a usable model. However, a key disadvantage of the Topaz Cross Validation job is that it is significantly slower than the standard Topaz Train job.

It is recommended to use the Topaz Train job for training the Topaz model, and to use the Topaz Cross Validation job only when attempting to find the optimal value for a particular parameter.

Interpreting Training Results from Train and Cross Validation

Once training using either the Topaz Train or Topaz Cross Validation job is complete, it will output a plot indicating the performance on the training set over each epoch. If a train-test split greater than 0 is used, a plot of the performance on the test set is also output. The test plot is a better indicator of the overall training results than the training plot and should be used to interpret the results whenever available. The x-axis indicates the epoch and the y-axis indicates the precision, a measure of how accurate the model is. For a successfully trained model, the test plot will show precision gradually increasing with each epoch.

If the precision begins to decrease after increasing for several epochs, the model has begun to overfit to the training set. However, the job automatically selects the model from the epoch with the highest precision; therefore, assuming the precision was improving before overfitting set in, the job will output a version of the model from before it began overfitting.

Below is an example of a test plot from a well-performing Topaz model.

[Image: test plot from a well-performing Topaz model]

The Topaz Cross Validation job also features a plot that presents the results of the cross validation and the performance at each value. An example of a cross validation plot using the example parameters shown previously can be found below:

[Image: cross validation plot using the example parameters above]

Reference and Workflow

Particle Picking: Topaz Extract

Inputs

  • Topaz Model
  • Micrographs

Outputs

  • Particle Picks
  • Micrographs

The Topaz Extract job features various parameters, the most notable of which is the particle threshold parameter. This parameter determines the minimum score at which to accept particle picks. If a Topaz Extract job produces too many particles, the issue may be solved simply by increasing this parameter. However, an improved threshold can be conveniently selected using the Inspect Particle Picks job in cryoSPARC, as detailed in the next section.
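
For intuition on what the threshold does: standalone Topaz writes picks as a tab-delimited table whose columns include the pick coordinates and a model score, per the Topaz README. Thresholding is then simply a filter on the score column. The sketch below assumes a four-column image_name/x_coord/y_coord/score layout, a hypothetical file name, and an arbitrary cutoff of 0:

# keep the header row plus any pick whose model score (column 4) is at least 0
awk -F '\t' 'NR == 1 || $4 >= 0' predicted_particles.txt > thresholded_particles.txt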

Interpreting Results from Topaz Extract

The particle picks from the Topaz Extract job can be inspected, and a threshold applied, using the Inspect Particle Picks job. This job interacts with picks from Topaz Extract differently in that it enables a user to apply a threshold based on the Topaz model score rather than the power score: the power score threshold slider in the Inspect Particle Picks job operates on the Topaz model score instead. In the image below, the extracted picks have a lower bound threshold of 15 and an upper bound threshold of 48 applied. These numbers are model scores assigned by Topaz, indicating how confident the model is in each particle pick.

[Image: Inspect Particle Picks applied to Topaz Extract results]

Once the picks are output from the Inspect Particle Picks job, the picks and the micrographs from which they were extracted must be passed through the Extract from Micrographs job in cryoSPARC. This updates the CTF information within the particle picks and makes the picks compatible with other cryoSPARC jobs such as Ab-Initio Reconstruction.

Tutorial

T20S Proteasome: Particle Picking

Step 1 - Preprocess Data

  1. Preprocess the T20S subset by completing steps 1-12 in the tutorial found here: https://cryosparc.com/docs/tutorials/t20s/
  2. Ensure that the Inspect Picks and Select 2D jobs from the linked tutorial are completed, as they will be required for training the Topaz model. The outputs from these two jobs will be used as inputs for the Topaz-related jobs.

Step 2 - Create Training Job

  1. Select Deep Picker (Topaz Train) from the Job Builder. Drag and drop the micrographs output from the completed Inspect Picks job and the particles_selected output from the completed Select 2D job into the micrographs and particles inputs respectively.
[Image: Topaz Train job setup in the T20S tutorial]
  2. Use the file browser (activated by clicking the folder icon) to locate the Topaz executable path found earlier for the Path to Topaz Executable field. Instructions on how to find the Topaz executable path can be found above.
  3. Modify the Downsampling factor parameter to 16. This parameter reduces the size of the input micrographs by the given factor and is often necessary to conform to a system's memory constraints.
  4. Modify the Expected number of particles parameter to 300.
  5. Queue the job.
  6. The job trains a Topaz model on the subset of 20 micrographs from the T20S tutorial. It is highly recommended to train deep picker models on subsets of micrographs, as acquiring training picks for all micrographs takes time and is not required. Once the Topaz model has learned from a sufficient subset of the micrographs, it can extract particle picks from the entire dataset.

Step 3 - Create Extraction Job

  1. Select Deep Picker (Topaz Extract) from the Job Builder. Drag and drop both the topaz_model and micrographs outputs from the Topaz Train job into the corresponding inputs on the Job Builder.
[Image: Topaz Extract job setup in the T20S tutorial]
  2. Queue the job.
  3. The job uses the trained Topaz model to infer picks from the input micrographs. Even though in this tutorial the job extracts from the same micrographs used to train the model, a properly trained model will extract picks from the micrographs that were not among the training picks.

Step 4 - Acquire Particles from Extraction

  1. Select Extract from Micrographs from the Job Builder. Drag and drop both the micrographs and the particles outputs from the Topaz Extract job into the corresponding inputs on Job Builder.
  2. Queue the job.
  3. This job will update the particle picks with information required for further processing.
  4. Select 2D Classification from the Job Builder. Drag and drop the particles output from the Extract from Micrographs job into the particles input of the 2D Classification job.
  5. Queue the job.
  6. Select Select 2D classes from the Job Builder. Drag and drop both outputs of the 2D Classification job into their corresponding inputs in the Select 2D classes job.
  7. Queue the job.
  8. Wait for the job status to change to "Waiting" and then select the particle templates that should be kept for further processing.
  9. The 2D Classification and Select 2D classes jobs serve to filter out unwanted particles from the particle picking. Once the Select 2D classes job is complete, the particles output from the job can be used as particle picks for further processing down the pipeline.

Next Steps

Now that a basic Topaz pipeline has been completed, the more advanced aspects of particle picking with Topaz can be explored. The following are some of these aspects:

  • Ideally, deep picking models are trained on a subset of micrographs and then perform inference on an entire dataset, as mentioned before. The Topaz model trained in this tutorial can be applied to the entire T20S dataset rather than the subset used here. Refinement results will generally improve with the resulting increased number of picks.
  • The Topaz Train and Topaz Cross Validation jobs have many training parameters that can be fine-tuned to affect the quality of the model.

Reference and Workflow

Micrograph Denoising: Topaz Denoise

The Topaz Denoise job has the following inputs and outputs.

Inputs

  • Micrographs
  • Denoise Model
  • Training Micrographs

Outputs

  • Denoised Micrographs
  • Topaz Denoise Model

These inputs and outputs determine which kind of model is used to denoise the micrographs. How they affect the model selection is detailed in the Specifying Model section.

Parameters

The key parameters are detailed below:

  • General Settings

    • Path to Topaz Executable

      • The absolute path to the Topaz executable that will run the denoise job.
    • Number of Plots to Show

      • The number of side-by-side micrograph comparisons to show at the end of the job.
  • Denoising Parameters

    • Normalize Micrographs

      • Specify whether to normalize the micrographs prior to denoising.
    • Shape of Split Micrographs

      • The shape of micrographs after they have been split into patches. The shape of the split micrographs will be (x, x), where x is the input parameter.
    • Padding around Each Split Micrograph

      • The padding to set around each split micrograph.
  • Training Parameters

    • Learning Rate

      • The value that determines how quickly training approaches an optimum. Higher values cause training to approach an optimum faster but may prevent the model from reaching the optimum itself, potentially resulting in worse final accuracy.
    • Minibatch Size

      • The number of examples that are used within each batch during training. Lower values will improve model accuracy at the cost of significantly increased training time.
    • Number of Epochs

      • The number of iterations through the entire dataset that training performs. A higher number of epochs naturally leads to longer training times. The number of epochs does not have to be optimized, as the job will automatically output the model from the best epoch.
    • Criteria

      • The criterion (loss function) used when training the denoising model.
    • Crop Size

      • The size of each micrograph after random cropping is performed during data augmentation.
    • Number of Loading Threads

      • The number of threads to use for loading data during training.
  • Pretrained Parameters

    • Model Architecture

      • U-Net (unet) is a convolutional neural network architecture that convolves the input down to a suitable bottleneck shape and then deconvolves the data, concatenating the outputs of the corresponding convolution layers during the deconvolutions.
      • U-Net Small (unet-small) is the same as U-Net except with fewer layers.
      • FCNN (fcnn) stands for fully convolutional neural network, a standard architecture used in many computer vision tasks.
      • Affine (affine) applies an affine transformation via a single convolution.
      • Prior to Topaz version 0.2.3, only the L2 model was available.
  • Compute Settings

    • Use CPU for Training

      • Specify whether to only use CPU for training.
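
As with the picking jobs, Topaz Denoise wraps the standalone command line internally. Below is a minimal sketch of a standalone denoising call using the pretrained model, with hypothetical directory names; verify the flag names against topaz denoise --help:

# denoise every micrograph in micrographs/ and write the results to denoised/
topaz denoise -o denoised/ micrographs/*.mrc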

Specifying Model

To denoise micrographs using the Topaz Denoise job, users have the option of using the provided pretrained model or training a model for immediate and future use. Thus the user must select which model to use from three general categories. These categories are:

  1. Provided pretrained model
  2. Model to be trained by user
  3. Model previously trained by user

Which approach is used depends on the job inputs and the build parameters. However, regardless of model specification, the Topaz Denoise job requires the micrographs to be denoised to be connected to the micrographs input slot. The job inputs required to select each category are summarized in the table below and detailed further beneath it.

| Model Category | "denoise_model" Slot Connected | "training_micrographs" Slot Connected |
| --- | --- | --- |
| Provided pretrained model | False | False |
| Model to be trained by user | False | True |
| Model previously trained by user | True | False |

Using the provided pretrained model

To use the provided pretrained model, the denoise_model and training_micrographs input slots must be empty.

Using model to be trained by user

To train a model for immediate and future use, imported movies that were not pre-processed must be input into the training_micrographs input slot and the denoise_model input slot must be empty.

When the job is complete, it will output the trained model through the topaz_denoise_model output, allowing the trained model to be used in other Topaz Denoise jobs. How to use this output is specified in the Using model previously trained by user section below.

Using model previously trained by user

To use a previously trained model, pass the topaz_denoise_model output from the Topaz Denoise job with the trained model into the denoise_model input slot. The training_micrographs input slot must remain empty.

Interpreting Denoising Results

Once the Topaz Denoise job is complete, it will output micrograph comparisons, the number of which depends on the Number of plots to show build parameter. Each comparison features two micrographs on the same row: the micrograph on the left is the original prior to denoising, and the micrograph on the right is its denoised version. This side-by-side comparison serves to show the user the effect of the denoising.

[Image: side-by-side comparison of an original and denoised micrograph]

When a Topaz Denoise job is used to train a model, a plot of the training and validation losses will also be shown. Both curves should decrease over time. If the training loss is decreasing while the validation loss is increasing, the model has overfit and the training parameters must be tuned. The simplest approach to resolving overfitting is to reduce the learning rate.

[Image: training and validation loss curves from Topaz Denoise]

Tutorial

T20S Proteasome: Micrograph Denoising

Step 1 - Preprocess Data

  1. Preprocess the T20S subset by completing steps 1-6 in the tutorial found here: https://cryosparc.com/docs/tutorials/t20s/
  2. Ensure that the Import Movies and CTF estimation jobs from the linked tutorial are completed, as they will be required for the Topaz Denoise job. The outputs from these two jobs will be used as inputs for the denoising job.

Step 2 - Create Denoising Job

  1. Select Topaz Denoise from the Job Builder. Drag and drop the exposures_success output from the completed CTF estimation job into the micrographs input.
[Image: Topaz Denoise job setup in the T20S tutorial]
  2. Use the file browser (activated by clicking the folder icon) to locate the Topaz executable path found earlier for the Path to Topaz Executable field. Instructions on how to find the Topaz executable path can be found above.
  3. Queue the job.
  4. The job runs the provided pretrained Topaz denoising model on the subset of 20 micrographs from the T20S tutorial.
  5. Once the job is complete, observe the micrograph images output in the log. Depending on the value of the Number of plots to show parameter, the job will show side-by-side micrograph comparisons, with the original micrograph on the left and the denoised micrograph on the right. This helps determine whether the denoised micrographs will be useful in picking or other related tasks.
[Image: denoised micrograph comparison from the Topaz Denoise job]

Step 3 - Create Denoising Job for Training

  1. Select Topaz Denoise from the Job Builder. Drag and drop the exposures_success output from the completed CTF estimation job and the imported_movies output from the completed Import Movies job into the micrographs and training_micrographs inputs respectively.
[Image: Topaz Denoise training job setup in the T20S tutorial]
  2. Use the file browser to locate the same Topaz executable path used in step 2. See step 2.2 for more details.
  3. Queue the job.
  4. The job trains a new Topaz denoising model on the subset of 20 micrographs from the T20S tutorial. Once the training is complete, it will use the model to denoise the input micrographs. When the denoising is complete, the job will output both the denoised micrographs and the newly trained model.
  5. As in step 2.5, observe the micrograph comparisons between the original and denoised micrographs.
  6. When training a new model, the job will also output plots of the training and validation loss. Both curves should decrease over time. If the training loss is decreasing while the validation loss is increasing, the model has overfit and the training parameters must be tuned. The simplest approach to resolving overfitting is to reduce the learning rate.
[Image: training and validation loss plots from the Topaz Denoise training job]
  7. The job will output denoised micrographs that barely look denoised. That is because the training data is the exact subset of data being denoised. When a greater variety of training data is used to train a model, the denoising will be much more noticeable.

Step 4 - Create Denoising Job for Newly Trained Model

  1. Select Topaz Denoise from the Job Builder. Drag and drop the exposures_success output from the completed CTF estimation job and the topaz_denoise_model output from the completed Topaz Denoise job from step 3 into the micrographs and denoise_model inputs respectively.
[Image: Topaz Denoise job setup using the newly trained model]
  2. Use the file browser to locate the same Topaz executable path used in step 2. See step 2.2 for more details.
  3. Queue the job.
  4. The job uses the previously trained Topaz denoising model on the same subset of 20 micrographs. This step serves to demonstrate how to use trained Topaz denoising models. When using trained models outside of this tutorial, the input micrographs should be different from those used to train the model.
  5. As in step 2.5, observe the micrograph comparisons between the original and denoised micrographs. The output denoised micrographs should be nearly identical to those from step 3, as the job is denoising the same micrographs using the same model as the job from step 3.

Next Steps

Denoised micrographs mainly serve to improve particle picking with the Manual Picker and with deep learning pickers such as the Topaz particle picker. Denoised micrographs have no impact on preprocessing methods such as Motion Correction and CTF Estimation, nor do they affect the performance of other pickers such as the Blob Picker and Template Picker. However, denoised micrographs can help visualize particles in the Inspect Particle Picks job after using any of the pickers, including the aforementioned Blob Picker and Template Picker.

To observe this functionality, continue the T20S tutorial, found here https://cryosparc.com/docs/tutorials/t20s/, until step 9. At step 9 of the linked tutorial, pass the denoised micrographs from step 2 of this tutorial instead of the micrographs output from the Template Picker.
