K-NN_Demo

For each sample in the testing set (i.e. each sample to be classified or predicted), compute the distance between it and every sample in the training set, take the `K` training samples that are closest to it, and see which class holds the majority among those `K` neighbours; that majority class is then taken as the classification result. Basically, it's a voting process.
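Below is a minimal sketch of that voting step in C. It is illustrative only and not taken from `K-NN_Demo.c`: the fixed `FEATURES` size matches the 16 attributes of the letter data used later, and the names (`knn_classify`, `dist2`) are assumptions.

```c
#include <limits.h>

#define FEATURES 16   /* attributes per sample; the letter data set uses 16 */

/* squared Euclidean distance between two feature vectors */
static long dist2(const int *a, const int *b)
{
    long d = 0;
    for (int i = 0; i < FEATURES; i++) {
        long diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

/* classify one query sample: majority vote among its k nearest training samples */
char knn_classify(const int train[][FEATURES], const char *labels, int n,
                  const int *query, int k)
{
    int votes[26] = {0};               /* one bucket per letter 'A'..'Z' */
    int used[n];                       /* training samples already picked as neighbours */
    for (int i = 0; i < n; i++) used[i] = 0;

    for (int round = 0; round < k; round++) {
        long best = LONG_MAX;
        int best_i = 0;
        for (int i = 0; i < n; i++) {  /* global search over the whole training set */
            if (used[i]) continue;
            long d = dist2(train[i], query);
            if (d < best) { best = d; best_i = i; }
        }
        used[best_i] = 1;
        votes[labels[best_i] - 'A']++; /* the neighbour votes for its own class */
    }

    int winner = 0;
    for (int c = 1; c < 26; c++)
        if (votes[c] > votes[winner]) winner = c;
    return (char)('A' + winner);
}
```

This naive version rescans the distances once per neighbour (O(k * n) per query); sorting the distances once or keeping a small heap would be faster, but the plain scan keeps the voting idea obvious. With `K = 1` (as in the run below) it degenerates to picking the single nearest neighbour.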

Pros and Cons of K-nearest-neighbour algorithm

Pros
  1. Works on classification problems with a large sample size.
  2. Reasonably efficient (relatively speaking).
  3. The concept is easy to understand and then implement.
  4. It is an 'incremental learning' model, like Naive Bayes: the model does not need to be retrained as the input data grow.
Cons
  1. Classifying each sample requires a global search over the entire training set, which hurts efficiency when the data set is huge (due to the amount of distance computation).
  2. Misclassification happens when the class distribution is uneven (the majority class is favoured over the minority).
  3. Needs more physical memory as the data size increases, since all training samples are kept in memory.

UCI letter recognition data set

UCI ML Repo

The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts) which were then scaled to fit into a range of integer values from 0 through 15. We typically train on the first 16000 items and then use the resulting model to predict the letter category for the remaining 4000. See the article cited on the UCI repository page for more details.

Attribute Information:
  1. lettr capital letter (26 values from A to Z)
  2. x-box horizontal position of box (integer)
  3. y-box vertical position of box (integer)
  4. width width of box (integer)
  5. high height of box (integer)
  6. onpix total # on pixels (integer)
  7. x-bar mean x of on pixels in box (integer)
  8. y-bar mean y of on pixels in box (integer)
  9. x2bar mean x variance (integer)
  10. y2bar mean y variance (integer)
  11. xybar mean x y correlation (integer)
  12. x2ybr mean of x * x * y (integer)
  13. xy2br mean of x * y * y (integer)
  14. x-ege mean edge count left to right (integer)
  15. xegvy correlation of x-ege with y (integer)
  16. y-ege mean edge count bottom to top (integer)
  17. yegvx correlation of y-ege with x (integer)
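In the UCI file each record is one comma-separated line: the letter first, followed by the 16 integer attributes in the order listed above (e.g. `T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8`). Here is a minimal parsing sketch in C; the `Sample` struct and `parse_line` name are illustrative, not taken from `K-NN_Demo.c`:

```c
#include <stdio.h>

#define FEATURES 16

typedef struct {
    char label;            /* lettr: 'A'..'Z' */
    int  x[FEATURES];      /* x-box .. yegvx, each scaled to 0..15 */
} Sample;

/* parse one record; returns 1 on success, 0 on a malformed line */
int parse_line(const char *line, Sample *s)
{
    int n = sscanf(line, "%c,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d",
                   &s->label,
                   &s->x[0],  &s->x[1],  &s->x[2],  &s->x[3],
                   &s->x[4],  &s->x[5],  &s->x[6],  &s->x[7],
                   &s->x[8],  &s->x[9],  &s->x[10], &s->x[11],
                   &s->x[12], &s->x[13], &s->x[14], &s->x[15]);
    return n == 1 + FEATURES;
}
```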
My Data Input
The training set contains 19900 samples and the testing set contains 100 samples. This is not an optimal sampling split, but since I'm only demoing the algorithm, so what the heck.
├── test100.txt
└── train19900.txt
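A loader along these lines could read one of those files into memory before classification; `load_samples` is hypothetical and builds on the `Sample`/`parse_line` sketch above rather than the actual demo code:

```c
#include <stdio.h>

/* assumes the Sample struct and parse_line() from the parsing sketch above */

/* read up to max_n records from path into out[]; returns the number loaded */
int load_samples(const char *path, Sample *out, int max_n)
{
    FILE *fp = fopen(path, "r");
    if (!fp) { perror(path); return 0; }

    char line[256];
    int n = 0;
    while (n < max_n && fgets(line, sizeof line, fp)) {
        if (parse_line(line, &out[n]))
            n++;                      /* silently skip malformed or blank lines */
    }
    fclose(fp);
    return n;
}
```

Typical use would be something like `load_samples("train19900.txt", train, 19900)` followed by `load_samples("test100.txt", test, 100)`.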

Compile and Run

$> gcc K-NN_Demo.c -o knn
$> ./knn

It will then prompt for the K to use in the K-NN algorithm; here I'm using 1.

************************************************************************************************
* K-NN Demo using Data Set: UCI Letter Recognition *
* - [Yang (Simon) Guo] - *
************************************************************************************************

rows of training data set: 19900
rows of test data set: 100
input K in K-nearest-neighbour algorithm: 1

Results

-------------------TEST RESULT SUMMARY-------------------
right prediction: 97
wrong prediction: 3
accuracy: 97.000% [0.970000]

Check out this project on GitHub