Suffice it to say, I finished creating a function to compare two dynamic texture video sequences. The function fits a dynamic texture model to each video, then passes the two models to the Martin distance function, which computes the distance between them from the subspace angles between the two models.
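The comparison pipeline can be sketched roughly as below. This is a minimal Python sketch, not the actual implementation: it assumes each video arrives as a matrix of flattened frames, fits the standard SVD-based dynamic texture model (observation matrix C and state transition matrix A), and computes the Martin distance from the cosines of the principal angles between finite-horizon observability subspaces. The function names, the state dimension, and the horizon length are my own choices.

```python
import numpy as np

def fit_dynamic_texture(frames, n_states=3):
    """Fit a linear dynamic texture model to a video.

    frames: (T, pixels) array, one flattened frame per row.
    Returns (A, C): state transition and observation matrices.
    """
    Y = frames.T.astype(float)                    # (pixels, T)
    Y -= Y.mean(axis=1, keepdims=True)            # remove the mean frame
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                           # spatial appearance basis
    X = np.diag(s[:n_states]) @ Vt[:n_states]     # state trajectory (n, T)
    # Least-squares fit of the dynamics: X[:, t+1] ~ A @ X[:, t]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C

def martin_distance(A1, C1, A2, C2, horizon=20):
    """Martin distance between two models via subspace angles.

    Uses finite observability matrices [C; CA; CA^2; ...] as a
    practical approximation to the extended observability subspaces.
    """
    def obs(A, C):
        blocks, M = [], C
        for _ in range(horizon):
            blocks.append(M)
            M = M @ A
        return np.vstack(blocks)

    Q1, _ = np.linalg.qr(obs(A1, C1))             # orthonormal subspace bases
    Q2, _ = np.linalg.qr(obs(A2, C2))
    cos_theta = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    cos_theta = np.clip(cos_theta, 1e-12, 1.0)    # guard against log(0) / >1 rounding
    # d^2 = -ln prod(cos^2 theta_i) = -2 * sum(ln cos theta_i)
    return -2.0 * np.sum(np.log(cos_theta))
```

A model compared with itself yields a distance of (numerically) zero, which is why the diagonal of a pairwise distance matrix built this way is uninformative.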
Next I created a confusion matrix for the 8 segmented fire videos and 8 segmented non-fire videos. Each entry is the Martin distance times its complex conjugate, i.e. the squared magnitude of the distance. On the x axis, the movies run fire videos 1-8 followed by non-fire videos 1-8; the y axis uses the same ordering.
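Building the matrix amounts to an all-pairs loop over the fitted models. A sketch, assuming a `distance_fn(m1, m2)` that returns the (possibly numerically complex-valued) Martin distance between two fitted models; multiplying by the complex conjugate recovers the real squared magnitude used in the matrix:

```python
import numpy as np

def pairwise_distance_matrix(models, distance_fn):
    """All-pairs squared-magnitude distances between fitted models.

    models: list of fitted model objects (hypothetical representation).
    distance_fn: callable returning the Martin distance for a pair;
    the value may be complex due to numerical effects, so each entry
    is stored as d * conj(d) = |d|^2.
    """
    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = distance_fn(models[i], models[j])
            D[i, j] = (d * np.conj(d)).real   # squared magnitude
    return D
```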
Here's that confusion matrix, with the diagonal set to infinity so that each video's trivial zero-distance match with itself is excluded:
The following are examples of segmented fire texture videos and segmented non-fire videos.
And here are the picture previews of the movies used to create the confusion matrix.
The following tables show the closest non-self match for each video. Remember, videos 1-8 are fire videos and 9-16 are non-fire videos. As you can see, the only misclassification is video 8; every other fire video is matched with another fire video, and every non-fire video is matched with another non-fire video.
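The matching step above is just a nearest-neighbor lookup on the distance matrix with the diagonal masked out. A small sketch of that step (the function names and the `n_fire` parameter are mine):

```python
import numpy as np

def nearest_non_self_matches(D):
    """For each video, the index of its closest *other* video."""
    D = D.astype(float).copy()
    np.fill_diagonal(D, np.inf)    # exclude each video's match with itself
    return np.argmin(D, axis=1)

def classify_as_fire(D, n_fire=8):
    """Label video i as fire if its nearest match is a fire video.

    Assumes rows/columns 0..n_fire-1 are the fire videos.
    """
    return nearest_non_self_matches(D) < n_fire
```

With the 16x16 matrix from the confusion-matrix step, `classify_as_fire(D)` reproduces the tables: a fire video is counted as correctly classified when its nearest non-self neighbor also falls in the fire block.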