Well, my MNIST idea failed to improve my results, which kills one avenue of improvement. I have another in mind, but it will have to wait till I get back from a brief holiday.
So, details. My code goes roughly like this:
1) convert the images into 5x5 tiles, then use those tiles to find a dictionary of features.
2) go over the images again, and map each 5x5 tile to the feature number of the most similar feature in the dictionary (which creates a second-order image).
3) add up the feature numbers to form a superposition.
4) apply log(1 + x) to the coefficients of the kets.
5) run a type of k-nearest-neighbours using my similarity metric (instead of Euclidean distance).
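The steps above can be sketched in plain numpy. This is only my reading of the pipeline, not the actual code: the post doesn't say how the feature dictionary is learned (here I just sample tiles as a stand-in), and the exact similarity metric isn't spelled out, so I've assumed the common min-over-max form for non-negative vectors.

```python
import numpy as np

def extract_tiles(image, size=5):
    # Step 1: split an image into non-overlapping size x size tiles.
    h, w = image.shape
    tiles = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            tiles.append(image[r:r + size, c:c + size].ravel())
    return np.array(tiles)

def build_dictionary(all_tiles, k, seed=0):
    # Toy feature dictionary: k randomly chosen tiles. A stand-in for
    # whatever clustering/learning the real code uses (an assumption).
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(all_tiles), size=k, replace=False)
    return all_tiles[idx]

def encode(image, dictionary, size=5):
    # Steps 2-4: map each tile to its nearest dictionary feature,
    # count the feature hits, then apply log(1 + x) to the counts.
    tiles = extract_tiles(image, size)
    counts = np.zeros(len(dictionary))
    for t in tiles:
        f = np.argmin(((dictionary - t) ** 2).sum(axis=1))
        counts[f] += 1
    return np.log1p(counts)

def simm(a, b):
    # Assumed similarity metric: sum of minima over sum of maxima,
    # giving 1 for identical vectors and 0 for disjoint ones.
    num = np.minimum(a, b).sum()
    den = np.maximum(a, b).sum()
    return num / den if den > 0 else 0.0

def knn_predict(query, examples, labels, k=10):
    # Step 5: k-nearest-neighbours by similarity (not Euclidean distance).
    sims = [simm(query, e) for e in examples]
    top = np.argsort(sims)[::-1][:k]
    votes = {}
    for i in top:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

All names here are made up for the sketch; the point is just the shape of the data flow: image → tiles → feature counts → log-compressed superposition → similarity-based vote.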
The idea was to use the training examples to find weights for the features, on the assumption that some features in the dictionary carry more information than others. Sounds plausible, right? Anyway, I tried a couple of times, and each time the results were 1% worse. Doh! Down from 95% to 94% for 10 neighbours. On the plus side, it does suggest my feature superpositions are somewhat tolerant of noise, which I think is a good thing.
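One way the weighting idea could look in code. To be clear, this is my guess at a plausible scheme, not the one that was tried: score each feature by how much its average activation varies across digit classes, then fold the weights into the similarity.

```python
import numpy as np

def feature_weights(encodings, labels):
    # Hypothetical weighting: features whose mean activation differs a
    # lot between classes (relative to their overall mean) get higher
    # weight. The post doesn't say what scheme was actually used.
    encodings = np.asarray(encodings)
    labels = np.asarray(labels)
    class_means = np.array([encodings[labels == c].mean(axis=0)
                            for c in sorted(set(labels.tolist()))])
    overall = encodings.mean(axis=0) + 1e-9
    return class_means.std(axis=0) / overall

def weighted_simm(a, b, w):
    # Weighted min-over-max similarity (same caveat: the unweighted
    # metric itself is an assumption).
    num = (w * np.minimum(a, b)).sum()
    den = (w * np.maximum(a, b)).sum()
    return num / den if den > 0 else 0.0
```

Any such reweighting distorts the geometry the k-NN vote relies on, which may be one reason the weighted runs came out slightly worse.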
The next idea: repeat steps (1) and (2) several times, i.e. create third-order, then fourth-order, etc. images, and see if that improves things. I suspect/hope it will, but that has to wait until I have time to code it up.
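The iteration step might look something like this. Again a sketch under my assumptions: it keeps the second-order image as a 2-D grid of feature numbers so it can itself be tiled in the next round.

```python
import numpy as np

def second_order_image(image, dictionary, size=5):
    # Steps (1) and (2) fused: replace each size x size tile by the
    # index of its nearest dictionary feature, producing a smaller
    # "feature-number image" that can be fed back into the same step.
    h, w = image.shape
    H, W = h // size, w // size
    out = np.zeros((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            t = image[i * size:(i + 1) * size,
                      j * size:(j + 1) * size].ravel()
            out[i, j] = np.argmin(((dictionary - t) ** 2).sum(axis=1))
    return out
```

One wrinkle worth noting: feature numbers are nominal labels, so comparing raw indices by squared distance in the next round is questionable; something like a one-hot encoding of the indices might be needed before re-tiling, but that's a design choice the post leaves open.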
Interestingly enough, my method is largely unsupervised, i.e. no back-prop. But that is probably only interesting if I can get my results much closer to those of other methods.
In other news it was my birthday yesterday.