usage:tracking

Last modified: 2021/03/06 15:44 by pseudomoaner
Once you have chosen the features you wish to include, click **Calculate!**. Once this has finished processing, a histogram will appear in the top-left hand axes. This can be used to select the first of the user-defined parameters, denoted as **Proportion of training links to use**. Changing the value of this parameter with either the slider or the edit box will alter the position of the vertical red line plotted on top of the histogram.
  
This histogram indicates the initial distribution of frame-frame link distances, before any feature reweighting has been performed. Typically, it contains two peaks, one towards the left containing accurate links, and one further to the right containing the inaccurate links. If this is the case, it is usually best to select the **Proportion of training links to use** parameter to split these populations in two. An example is shown below:
  
{{ :usage:unnormalisedstepdistribution.png?nolink&400 |}}
If there are more than two peaks, choose a value that splits the left-most peak from the others.
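The underlying idea — placing the cutoff in the valley between the left-most peak and the rest, then expressing it as a proportion of links — can be sketched numerically. The following Python/numpy sketch is purely illustrative (the function and variable names are hypothetical, and the tracking module performs its own internal calculation):

```python
import numpy as np

def training_proportion(link_dists, bins=50):
    """Suggest a training proportion by locating the valley between the
    left-most histogram peak and the rest (illustrative only)."""
    counts, edges = np.histogram(link_dists, bins=bins)
    # First local maximum = the left-most peak.
    peak = next(i for i in range(1, len(counts) - 1)
                if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1])
    # Deepest bin to the right of that peak = the valley.
    valley = peak + 1 + int(np.argmin(counts[peak + 1:]))
    threshold = edges[valley]
    # Proportion of links shorter than the valley distance.
    return float(np.mean(link_dists < threshold))

# Synthetic bimodal link distances: accurate links near 1, inaccurate near 10.
rng = np.random.default_rng(0)
dists = np.concatenate([rng.normal(1.0, 0.2, 800),
                        rng.normal(10.0, 1.0, 200)])
print(round(training_proportion(dists), 2))  # close to 0.8
```

For this synthetic example of 800 accurate and 200 inaccurate links, the suggested proportion comes out close to 0.8, matching the visual rule of thumb described above.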
  
If you see only a single peak, try changing the **Features to use for model training** radio button selection to **All features**. This will switch the training portion of the algorithm from using only object position to assign the initial set of links to using the entire set of selected features. If object position is of a similar reliability to the other features included in the model training, adding them into the training stage can improve the accuracy of initial link assignment. If this //still// results in a single peak, revert back to **Only centroids** and choose a **Proportion of training links to use** that sits just to the right of the peak.
  
Once you have finalised your selection, click **Calculate!** again to generate your final statistical model of the dataset.
Firstly, it can be used to verify that each feature is properly normalised. If they have been properly normalised, the displayed scatterplot should be isotropic (radially symmetric) and centred on the origin. Below are examples of well-normalised (left) and poorly normalised (right) features:
  
<WRAP half column centeralign>
{{ :usage:well-normalised.png?nolink&400 }}
</WRAP>

<WRAP half column centeralign>
{{ :usage:poorly-normalised.png?nolink&400 }}
</WRAP>
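What "well-normalised" means can be made concrete with a small numerical sketch (Python/numpy here, purely illustrative; the module applies its own normalisation internally): z-scoring each feature centres every axis of the scatterplot on zero and gives it unit spread.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two hypothetical feature differences on very different scales, e.g. a
# centroid displacement (~pixels) and an area change (~hundreds of px^2).
d_pos = rng.normal(0.0, 2.0, 1000)
d_area = rng.normal(50.0, 300.0, 1000)
raw = np.column_stack([d_pos, d_area])  # poorly normalised: anisotropic, off-centre

# Z-score each column: subtract the mean, divide by the standard deviation.
norm = (raw - raw.mean(axis=0)) / raw.std(axis=0)

print(np.allclose(norm.mean(axis=0), 0.0))  # True: centred on the origin
print(np.allclose(norm.std(axis=0), 1.0))   # True: equal spread on both axes
```

Plotted as a scatter, `raw` would resemble the right-hand (poorly normalised) example above — stretched along one axis and shifted off-centre — while `norm` would resemble the left-hand one.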
  
  
  * **x** and **y:** The instantaneous coordinates of the object over time. Each is a $t \times 1$ vector.
  * **theta** and **vmag:** The instantaneous direction of motion (in degrees) and speed of the object. Each is a $(t-1) \times 1$ vector.
  * **times:** The timepoints the object's position was sampled at. As gaps in tracks can be bridged, this list of timepoints is not necessarily contiguous. A $t \times 1$ vector.
  * **length:** Total length (in timepoints) of the track.
  * **start** and **end:** Start and end timepoints of the track.
  * **age:** The age of the object relative to the start of the track at each timepoint. Equivalent to **times** - **start**.
  * **interpolated:** $(t-1) \times 1$ logical vector indicating whether there was a gap in the track at this timepoint. If so, values in all other fields for this timepoint have been linearly interpolated from surrounding values.
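To make these field shapes concrete, here is a hypothetical sketch in Python/numpy (not the software's own data structures): for a track with $t$ sampled positions, the velocity-derived fields have $t-1$ entries, and **age** is simply **times** minus **start**.

```python
import numpy as np

# Hypothetical track sampled at t = 5 timepoints (not contiguous: frame 3 missing).
times = np.array([1, 2, 4, 5, 6])
x = np.array([0.0, 1.0, 3.0, 4.0, 4.0])
y = np.array([0.0, 0.0, 2.0, 3.0, 4.0])

dx, dy = np.diff(x), np.diff(y)
vmag = np.hypot(dx, dy)                 # speed: (t-1) entries
theta = np.degrees(np.arctan2(dy, dx))  # direction of motion in degrees

start, end = times[0], times[-1]
length = len(times)
age = times - start                     # age relative to the track start

print(len(vmag), len(theta))  # 4 4
print(age.tolist())           # [0, 1, 3, 4, 5]
```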
  
Depending on the options selected in the [[usage:feature_extraction|feature extraction module]], additional fields may also be available:
===== Video demonstration =====
  
====Part 1====
{{ youtube>EW4hl439Xp4?large }}

====Part 2====
{{ youtube>GckUtXZcGkY?large }}
  