Tuesday, January 27, 2009

I'm fired up!

Alrighty! Finally exterminated all of the bugs in the perceptron code. The following are the characteristics of the multilayer perceptron that I use for fire color detection:
  • 2 layers
  • 1 hidden node
  • 3 inputs, where each input is a color channel (e.g. R,G,B or H,S,V etc.)
  • trained for 100 epochs, reaching a mean squared error of 0.0033
More than 1 hidden node doesn't seem to help much and only ends up slowing the classifier down. Currently the classifier is trained on RGB color data.
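For concreteness, here's a minimal sketch of an MLP with exactly that shape (3 inputs, 1 hidden node, 1 output) written with NumPy. The sigmoid activations, learning rate, and weight ranges are just illustrative choices here, not a copy of my actual training code:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # 3 inputs -> 1 hidden node -> 1 output (two layers of weights)
    W1 = rng.uniform(-0.5, 0.5, size=(3, 1))   # input-to-hidden weights
    b1 = np.zeros(1)
    W2 = rng.uniform(-0.5, 0.5, size=(1, 1))   # hidden-to-output weight
    b2 = np.zeros(1)

    def forward(X):
        h = sigmoid(X @ W1 + b1)               # hidden activation, shape (N, 1)
        y = sigmoid(h @ W2 + b2)               # output in [0, 1], shape (N, 1)
        return h, y

    def train(X, t, epochs=100, lr=0.5):
        """X: (N, 3) z-scaled color values; t: (N, 1) labels (1 = fire)."""
        global W1, b1, W2, b2
        for _ in range(epochs):
            h, y = forward(X)
            # Backpropagate the mean squared error through both sigmoid layers
            d_out = (y - t) * y * (1.0 - y)
            d_hid = (d_out @ W2.T) * h * (1.0 - h)
            W2 -= lr * (h.T @ d_out) / len(X)
            b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * (X.T @ d_hid) / len(X)
            b1 -= lr * d_hid.mean(axis=0)
        return float(((forward(X)[1] - t) ** 2).mean())   # final MSE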

The following is an example of an image with fire in it and how the perceptron-based fire color classifier would label the image pixels:

At first glance you might say that fire color classification fails miserably because it tends to label lots of non-fire pixels as fire. However, the goal of this first classifier was to get a very high true positive rate (and therefore a low false negative rate), regardless of the false positive rate. This is because this classifier will be used in concert with other feature detection algorithms such as motion and maybe texture, and those additional classifiers will help bring the false positive rate down.
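To make the "in concert" idea concrete, a pixel would only survive as a fire candidate if every stage flags it. A tiny sketch of that combination (the mask names are placeholders; the motion and texture detectors don't exist yet):

    import numpy as np

    def combine_masks(color_mask, motion_mask):
        """Both inputs are boolean (height, width) arrays. A pixel stays a
        fire candidate only if the color classifier AND the motion detector
        both flag it, which is what should knock the false positives down."""
        return np.logical_and(color_mask, motion_mask)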

Lastly, you can see the new color palettes of the positive (fire) data and negative (non-fire) data:

Saturday, January 24, 2009

No luck with Perceptrons...yet

Well it turns out there were a few bugs/problems with my perceptron code. First off there was a problem reading in the input data which threw off the z-scaling. Then the z-scaling had some overflow problems which I fixed. Lastly, the initial weights were being set badly causing the perceptron to suck. I finally got it to work correctly on XOR.
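For reference, the z-scaling in question is just per-channel standardization, and the fixes amounted to accumulating in a wide floating-point type and starting from small random weights. A rough sketch of both, assuming NumPy (illustrative, not the actual code):

    import numpy as np

    def z_scale(X):
        """Standardize each column (color channel) to zero mean and unit variance.
        Doing the sums in float64 avoids the overflow you get when accumulating
        lots of 8-bit pixel values in a narrow integer type."""
        X = np.asarray(X, dtype=np.float64)
        mean = X.mean(axis=0)
        std = X.std(axis=0)
        std[std == 0] = 1.0                    # guard against a constant channel
        return (X - mean) / std, mean, std

    def init_weights(shape, seed=0):
        """Small random initial weights; bad initialization was the last thing
        keeping the perceptron from learning XOR."""
        rng = np.random.default_rng(seed)
        return rng.uniform(-0.5, 0.5, size=shape)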

Unfortunately, when I tried to get it to learn the difference between fire and non-fire pixels it could only get an MSE of 0.18, which is a big failure. I suspect there's far too much similarity and overlap between fire pixels and non-fire pixels, especially in the near-white color range. I'm going to try manipulating the training data to reduce this overlap and see if that helps.

If that doesn't work then I'm going to see whether learning on HSV channels or L*a*b* channels works better than the RGB channels I've tried so far.
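The conversion itself should be trivial, since it happens before the pixels ever reach the perceptron. A sketch using OpenCV's color conversion (the file name is a placeholder, and the flags assume OpenCV's default BGR channel order):

    import cv2

    img_bgr = cv2.imread("fire_frame.png")               # OpenCV loads images as BGR

    # Alternative color spaces to feed the perceptron instead of raw RGB:
    img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)   # hue, saturation, value
    img_lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)   # CIE L*a*b*

    # Each pixel is still a 3-vector, so the perceptron's 3-input layout is unchanged.
    pixels = img_hsv.reshape(-1, 3).astype("float32")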

Wednesday, January 21, 2009

So close and yet so fire

This week I've been working on creating the necessary architecture to classify parts of images as being fire or not based on the pixel color values. I've also created a large training and testing set of labeled fire images to use once everything is working. The following is a description of how the system works so far:

  1. Run a script to go through a directory of raw fire images and make sure there are no pixels with the rgb value (255,0,0) since this is going to be the indicator color in the next stages of this process. Also create a copy of all of the original images. There is now a directory of slightly modified images called 'target' and a directory of the original images called 'original'.
  2. Use a paint program and manually label all target image fire pixels by coloring them with the rgb value (255,0,0).
  3. Run a script which reads each target image and, wherever it finds a pixel with the color (255,0,0), grabs the color value at the corresponding pixel location in the original image and writes it out to a file (there's a sketch of this step below).
  4. Repeat steps 1-3 for positive and negative example images to create the positive and negative data sets.
  5. Get rid of all duplicate values in the data sets using the Unix 'sort' and 'uniq' commands.
  6. Run another script to convert the raw positive and negative sets into training and testing sets of mixed negative and positive examples.
  7. Run a training program to train the perceptron classifier.
  8. Feed a testing image to the final fire color classifier program which uses the trained perceptron to classify fire pixels in the input image.
Currently there's a bug somewhere in the training and/or classification stage of the process which I need to hunt down and eliminate with extreme prejudice. Other than that everything is looking good.
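As promised in step 3, here's roughly what the extraction script does. The function and file names are placeholders and I'm using OpenCV's Python bindings for illustration; the real script just has to pair each target image with its original and dump the masked colors:

    import cv2
    import numpy as np

    def extract_labeled_colors(target_path, original_path, out_path):
        """Wherever the hand-labeled target image is pure red (255,0,0), append
        the corresponding pixel's color from the ORIGINAL image to a text file."""
        target = cv2.imread(target_path)        # OpenCV reads images in BGR order
        original = cv2.imread(original_path)

        # RGB (255,0,0) is (0, 0, 255) in OpenCV's BGR layout
        mask = np.all(target == (0, 0, 255), axis=2)

        with open(out_path, "a") as f:
            for b, g, r in original[mask]:
                f.write(f"{r} {g} {b}\n")        # one "R G B" line per labeled pixel

    # Step 5's duplicate removal stays a Unix one-liner:  sort colors.txt | uniq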

On a related note, the following are the color palettes of fire and non-fire pixel colors:

Monday, January 12, 2009

Call me Prometheus

This last week I finished creating the training and testing sets of positive fire videos. I used "Windows Movie Maker" to cut and cleanup the raw fire videos I'd taken over winter break. The result was a bunch of .wmv files which I then converted to .avi files using "STOIK Video Converter 2" because this is the format that OpenCV seems to like.

Next I read through all of my research material on video-based fire detection. The following is a list of the different techniques and algorithms that were used to detect fire in video:
  1. Detect fire-colored pixels using Gaussian-smoothed color histograms.
  2. Detect the temporal variation of fire pixels by computing the average pixel-by-pixel intensity difference for fire pixels and for non fire pixels and subtracting the latter from the former.
  3. Detect smoke by noticing a decrease in the variance of the color histogram of the image, caused by the increased homogeneity of the image due to smoke's blurring effect.
  4. Also, observe the change in density of the smoke by observing the power spectra in the frequency domain.
  5. Detect the color variation of fire pixels in the wavelet domain.
  6. Detect the zero-crossings of the wavelet transform coefficients. This can be done in the pixel intensity range or in the R-color channel. Analysis is done in both the spatial and frequency domains using filter banks.
  7. Use HMMs on the color of the target pixel.
  8. Train a neural network on the H, S, and I channels of each pixel to classify the pixel as being a fire pixel or not based on the pixel color characteristics.
  9. Look for the following characteristics of smoke in images: (1) smoke smooths object edges, and since edges correspond to extrema in the wavelet domain, a decrease in these extrema is an indicator of the presence of smoke; (2) smoke decreases the values of the image's chrominance channels.
  10. Fire detection based on time derivatives. A pixel is determined to be a fire pixel based on intensity-weighted time derivatives.
  11. Thresholded temporal color variation. If the temporal color variation of a pixel passes a threshold then label the pixel as being a fire pixel (a rough sketch of this one follows below).
I've started working on a simple pixel-color-based perceptron classifier as the first step in a multi-feature fire detection algorithm. More on that in my next post.
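Since item 11 is the easiest one on the list to picture, here's a rough sketch of that idea: just frame differencing with a threshold I made up, not anything taken verbatim from the papers:

    import numpy as np

    def temporal_variation_mask(frames, threshold=30.0):
        """frames: list of same-sized BGR frames; the threshold is an arbitrary
        placeholder value, not one from any of the papers."""
        frames = [f.astype(np.float32) for f in frames]
        # Mean absolute per-pixel color change over consecutive frame pairs
        diffs = [np.abs(a - b).mean(axis=2) for a, b in zip(frames[:-1], frames[1:])]
        variation = np.mean(diffs, axis=0)
        return variation > threshold           # True where a pixel flickers enough to look like fire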

Sunday, January 4, 2009

First Post

The goal of this project is to research and develop algorithms for detecting fire in video data. The following is a diagram of how such a system could be deployed in the real world. My first goals are to:
  1. acquire video editing software
  2. install and set up OpenCV