usage:tracking

  
Some features can also be useful for telling the algorithm when something has gone wrong. For example, an object may mis-segment, maintaining approximately the same position but substantially reducing its apparent length. As long as cell length is included as a feature (and assuming it is fairly robust), the algorithm will be able to automatically detect this mis-segmentation and throw it out.
  
The checkboxes in the **Features for inclusion** panel allow you to choose the features within your dataset that fulfil the above criteria.
Once you have chosen the features you wish to include, click **Calculate!**. Once this has finished processing, a histogram will appear in the top left-hand axes. This can be used to select the first of the user-defined parameters, denoted as **Proportion of training links to use**. Changing the value of this parameter with either the slider or the edit box will alter the position of the vertical red line plotted on top of the histogram.
  
This histogram indicates the initial distribution of frame-frame link distances, before any feature reweighting has been performed. Typically, it contains two peaks: one towards the left containing accurate links, and one further to the right containing inaccurate links. If this is the case, it is usually best to choose a value of the **Proportion of training links to use** parameter that splits these two populations. An example is shown below:
  
{{ :usage:unnormalisedstepdistribution.png?nolink&400 |}}
If there are more than two peaks, choose a value that splits the left-most peak from the others.
  
If you see only a single peak, try changing the **Features to use for model training** radio button selection to **All features**. This will switch the training portion of the algorithm from using only object position to assign the initial set of links to using the entire set of selected features. If object position is of a similar reliability to the other features included in the model training, adding them into the training stage can improve the accuracy of initial link assignment. If this //still// results in a single peak, revert to **Only centroids** and choose a **Proportion of training links to use** that sits just to the right of the peak.
  
Once you have finalised your selection, click **Calculate!** again to generate your final statistical model of the dataset.
  * Green and blue objects have been assigned links between the two frames. The colour of linked objects remains constant between the two frames.
  * Yellow objects are objects in frame A that have not had a link assigned to an object in frame B, as their associated score was below the currently selected linking threshold.
  * Purple objects are objects in frame B that have not had a link assigned from an object in frame A.
  
In the case above, most objects have been assigned links and are correspondingly coloured green or blue. The exception to this is the pair of objects in the bottom left-hand corner of the image. They are initially segmented correctly in frame A, but become fused in frame B. Because of this mis-segmentation, the tracking algorithm has correctly rejected any link to or from these objects.
Firstly, it can be used to verify that each feature is properly normalised. If they have been properly normalised, the displayed scatterplot should be isotropic (radially symmetric) and centred on the origin. Below are examples of well-normalised (left) and poorly normalised (right) features:
  
<WRAP half column centeralign>
{{ :usage:well-normalised.png?nolink&400 }}
</WRAP>

<WRAP half column centeralign>
{{ :usage:poorly-normalised.png?nolink&400 }}
</WRAP>
  
In the second case, the 'squashed' shape of the scatterplot indicates that the feature used for the y-axis has been poorly normalised. This can be confirmed by choosing other features to display on the x-axis: if the y-axis feature has not been properly normalised, each resulting scatterplot will appear non-isotropic.
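As a toy illustration of what to look for (synthetic data only, not FAST code): well-normalised feature differences have unit variance, so plotting two of them against one another gives a roughly circular cloud, whereas a feature whose variance is far from 1 squashes the cloud along its axis:

<code matlab>
% Toy illustration: frame-frame differences of two properly normalised
% features each have unit variance, giving an isotropic scatterplot.
dGood1 = randn(1000,1);       % well-normalised feature differences
dGood2 = randn(1000,1);
dBad   = 0.2*randn(1000,1);   % poorly normalised: variance far from 1

subplot(1,2,1), scatter(dGood1, dGood2, '.'), axis equal
title('Well-normalised')
subplot(1,2,2), scatter(dGood1, dBad, '.'), axis equal
title('Poorly normalised')
</code>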
=== Track length filtering ===
  
Various processes can generate short tracks that are not useful for analysis. Mis-segmentations can result in 'objects' that appear for only a single frame, objects can move in and out of the field of view over a very short period of time, or highly stringent tracking can break tracks into many short pieces. The extent of these effects can be seen by looking at the track length distribution, plotted in the lower right panel once tracking has finished:
  
{{ :usage:finaldistribution.png?nolink&300 }}
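If you prefer to filter short tracks after the fact in your own scripts, a minimal sketch (using the **length** field of the output **procTracks** structure, documented below; the threshold value is hypothetical) is:

<code matlab>
% Minimal post-hoc sketch (not the GUI's own filter): retain only tracks
% lasting at least minLength timepoints, using procTracks.length.
minLength  = 10;                                        % hypothetical threshold
longTracks = procTracks([procTracks.length] >= minLength);
</code>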
  * **toMappings** and **rawToMappings:** Structures for mapping from the 'slice' representation of **trackableData** to the 'track' representation of **procTracks**. **rawToMappings** contains the mappings generated by the tracking algorithm, **toMappings** contains the mappings once manual correction has been applied.
  * **fromMappings** and **rawFromMappings:** Structures for mapping from the 'track' representation back to the 'slice' representation. These allow features that can only be calculated in the track representation (such as object speed) to be converted back to the slice representation.

The 'slice' and 'track' representations of the dataset are stored in **trackableData** and **procTracks**, respectively. In general, the following statements should be true:

<code matlab>
procTracks(a).x(b) == trackableData.Centroid{toMappings{a}(b,1)}(toMappings{a}(b,2),1)
trackableData.Centroid{c}(d,1) == procTracks(fromMappings{c}(d,1)).x(fromMappings{c}(d,2))
</code>

for all a, b, c and d. These expressions generalise to fields other than **trackableData.Centroid** and **procTracks.x**.

  * **procTracks:** The full set of track data.
  * **linkStats:** The statistics associated with each of the features, calculated during the model training stage.
  
  * **x** and **y:** The instantaneous coordinates of the object over time. Each is a $t \times 1$ vector.
  * **theta** and **vmag:** The instantaneous direction of motion (in degrees) and speed of the object. Each is a $(t-1) \times 1$ vector.
  * **times:** The timepoints the object's position was sampled at. As gaps in tracks can be bridged, this list of timepoints is not necessarily contiguous. A $t \times 1$ vector.
  * **length:** Total length (in timepoints) of the track.
  * **start** and **end:** Start and end timepoints of the track.
  * **age:** The age of the object relative to the start of the track at each timepoint. Equivalent to **times** - **start**.
  * **interpolated:** A $(t-1) \times 1$ logical vector indicating whether there was a gap in the track at this timepoint. If so, values in all other fields for this timepoint have been linearly interpolated from surrounding values.
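As a brief illustration (a sketch using only the fields documented above, not a FAST function), per-track summaries can be computed directly from **procTracks**:

<code matlab>
% Illustrative sketch: summarise each track using the fields listed above.
meanSpeed = arrayfun(@(t) mean(t.vmag), procTracks);              % mean frame-frame speed
netDisp   = arrayfun(@(t) hypot(t.x(end) - t.x(1), ...
                                t.y(end) - t.y(1)), procTracks);  % start-to-end displacement
duration  = [procTracks.length];                                  % track lengths, in timepoints
</code>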
  
Depending on the options selected in the [[usage:feature_extraction|feature extraction module]], additional fields may also be available:
===== Video demonstration =====
  
====Part 1====
{{ youtube>EW4hl439Xp4?large }}

====Part 2====
{{ youtube>GckUtXZcGkY?large }}
  
  • usage/tracking.1573490851.txt.gz
  • Last modified: 2019/11/11 16:47
  • by pseudomoaner