Friday, March 3, 2017

Hidden variable based predictor vs. functions

I have created a minimalistic example demonstrating that hidden variables are beneficial: a small program that compares a predictor generated by zubr with simple function predictors. You may take a look at my code (it is included in the JAR, licensed under GPL 3.0):

http://www.pawelbiernacki.net/TestHiddenVariableBasedPredictors.jar

You may also run the program directly from my server:

http://www.pawelbiernacki.net/TestHiddenVariableBasedPredictors.jnlp

It calculates the scores (the number of correct guesses divided by the number of all guesses) for various lengths of the test sequence. The predictor with id = -1 is the zubr-generated optimizer; the other predictors are plain functions. All 16 possible functions are tested; they have the ids 0..15.
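To make the function predictors concrete, here is a minimal sketch of how they can be enumerated and scored. This is my own reconstruction for illustration, not the actual code from the JAR; in particular, I am assuming (as the count of 16 suggests) that they are the Boolean functions of two inputs, so the id, read as a 4-bit truth table, fully determines the function.

// A minimal sketch of a stateless function predictor, assuming (as the count
// of 16 suggests) Boolean functions of two inputs. My reconstruction for
// illustration, not the actual code from the JAR.
public final class FunctionPredictor {
    private final int truthTable; // the id 0..15, read as a 4-bit truth table

    public FunctionPredictor(int id) {
        this.truthTable = id;
    }

    // The prediction depends only on the current inputs - no history, no state.
    public boolean predict(boolean a, boolean b) {
        int index = (a ? 2 : 0) + (b ? 1 : 0);
        return ((truthTable >> index) & 1) == 1;
    }

    // Score = number of correct guesses divided by the number of all guesses.
    public static double score(FunctionPredictor p, boolean[] a, boolean[] b,
                               boolean[] target) {
        int correct = 0;
        for (int t = 0; t < target.length; t++) {
            if (p.predict(a[t], b[t]) == target[t]) {
                correct++;
            }
        }
        return (double) correct / target.length;
    }
}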

As you can see from the scores, the hidden-variable-based predictor outperforms the functions (its score reaches 0.92 for a test sequence of length 19, while the best function achieves only 0.70).

The difference between the function predictors and the hidden-variable-based predictor is that the functions are stateless, while the hidden-variable-based predictor has a state. Its state is a belief - a probability distribution over the set of states (vectors of hidden variable values). My point is that the IF THEN construct is too weak to achieve AI, because IF THEN takes into account only the current state and ignores the history. Hidden variables are all about history - they are a natural way to compress our knowledge of it. That is why optimizers and predictors based on hidden variables are so much better than stateless ones.
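To show what "compressing the history into a belief" means, here is a minimal sketch of such a belief update, written as a standard Bayesian filter. The transition and emission tables here are hypothetical placeholders - in reality zubr generates the model - but the update itself is the textbook construction.

import java.util.Arrays;

// A minimal sketch of the belief that gives the hidden-variable-based
// predictor its state. The transition and emission tables are hypothetical
// placeholders; the real model is generated by zubr.
public final class Belief {
    private final double[] p;            // p[s] = current probability of hidden state s
    private final double[][] transition; // transition[s][s2] = P(s2 | s)
    private final double[][] emission;   // emission[s][o] = P(observation o | state s)

    public Belief(double[] initial, double[][] transition, double[][] emission) {
        this.p = initial.clone();
        this.transition = transition;
        this.emission = emission;
    }

    // One Bayesian filter step: predict the next hidden state, then condition
    // on the latest observation. The whole history is thus compressed into p.
    public void update(int observation) {
        double[] next = new double[p.length];
        for (int s2 = 0; s2 < p.length; s2++) {
            double sum = 0;
            for (int s = 0; s < p.length; s++) {
                sum += p[s] * transition[s][s2];
            }
            next[s2] = sum * emission[s2][observation];
        }
        double norm = Arrays.stream(next).sum();
        for (int s2 = 0; s2 < p.length; s2++) {
            p[s2] = next[s2] / norm;
        }
    }

    public double[] distribution() {
        return p.clone();
    }
}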

Scores (rows: predictor id, columns: test sequence length):

id    1     2     3     4     5     6     7     8     9     10
-1    0     0.25  0.5   0.63  0.7   0.75  0.79  0.81  0.83  0.85
 0    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45
 1    0     0.25  0.42  0.44  0.45  0.46  0.46  0.47  0.47  0.47
 2    0     0.25  0.33  0.31  0.3   0.29  0.29  0.28  0.28  0.28
 3    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45
 4    0     0.25  0.42  0.5   0.55  0.58  0.61  0.63  0.64  0.65
 5    0     0.25  0.33  0.38  0.3   0.33  0.36  0.38  0.33  0.35
 6    0     0.25  0.5   0.5   0.4   0.42  0.5   0.5   0.44  0.45
 7    0     0.25  0.33  0.31  0.3   0.29  0.29  0.28  0.28  0.28
 8    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45
 9    0     0.25  0.5   0.5   0.4   0.42  0.5   0.5   0.44  0.45
10    0     0.25  0.33  0.38  0.3   0.33  0.36  0.38  0.33  0.35
11    0     0.25  0.42  0.44  0.45  0.46  0.46  0.47  0.47  0.47
12    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45
13    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45
14    0     0.25  0.42  0.5   0.55  0.58  0.61  0.63  0.64  0.65
15    0     0.25  0.33  0.38  0.4   0.42  0.43  0.44  0.44  0.45

Take a look at the numbers (the first row, with id = -1, is the hidden-variable-based predictor).

You may wonder why I used the term "predictor" rather than "optimizer". This has to do with the nature of my example: the action performed by this optimizer is a bet - it tries to predict the next value of one of the input variables. I will discuss the example itself in more detail later.
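To illustrate how such a bet could be computed from the belief of the earlier sketch, the predictor can simply guess the observation with the highest expected probability. Again, this is my own reconstruction under the same hypothetical transition/emission tables, not zubr's generated code.

// Sketch of the bet: guess the observation with the highest probability under
// the current belief. Uses the hypothetical transition/emission tables from
// the Belief sketch above.
public static int bet(double[] belief, double[][] transition, double[][] emission) {
    int states = belief.length;
    int observations = emission[0].length;
    int best = 0;
    double bestProbability = -1;
    for (int o = 0; o < observations; o++) {
        double pObservation = 0;
        for (int s2 = 0; s2 < states; s2++) {
            // Probability of reaching hidden state s2 after one step...
            double pS2 = 0;
            for (int s = 0; s < states; s++) {
                pS2 += belief[s] * transition[s][s2];
            }
            // ...times the probability of observing o there.
            pObservation += pS2 * emission[s2][o];
        }
        if (pObservation > bestProbability) {
            bestProbability = pObservation;
            best = o;
        }
    }
    return best; // the value the predictor bets on
}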

Zubr can be downloaded from https://sourceforge.net/projects/perkun/.
