Sunday, February 19, 2017

IF THEN problems

In the old days we imagined that Artificial Intelligence could easily be achieved just by using the IF THEN statement known from programming languages like Pascal. We were wrong. Why? I think the reason is still not quite clear, even though we know algorithms like Q-learning or minimax. The problem with IF THEN is that it ignores the hidden variables. It is based on the visible (input) variables only, and building complex logical expressions on them does not help. With IF THEN we concentrate on the visible variables and ignore the history.

The hidden variables are all about history. If we know how the world works (we know its "model", to say it in the Perkun/Wlodkowic terms) and we know the history, then we can figure out what the hidden variable values look like, building the "belief", i.e. a probability distribution over the hidden states.
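To make the belief idea concrete, here is a minimal sketch of such an update in C++. Everything in it (the two hidden states, the transition matrix T, the observation likelihoods O, the example history) is made up for illustration; Perkun performs an equivalent computation internally over its own model:

#include <iostream>

int main() {
    // Hypothetical model: T[s][s'] is the probability of moving from
    // hidden state s to s' (under the single action), and O[s][obs] is
    // the probability of observing obs while in hidden state s.
    const double T[2][2] = {{0.9, 0.1}, {0.2, 0.8}};
    const double O[2][2] = {{0.7, 0.3}, {0.4, 0.6}};

    double belief[2] = {0.5, 0.5};   // uniform prior over the hidden states
    const int history[] = {0, 1, 1}; // an example observation history

    // Bayesian filtering: belief'(s') ~ sum_s belief(s) * T[s][s'] * O[s'][obs]
    for (int obs : history) {
        double next[2] = {0.0, 0.0};
        for (int s2 = 0; s2 < 2; ++s2)
            for (int s1 = 0; s1 < 2; ++s1)
                next[s2] += belief[s1] * T[s1][s2] * O[s2][obs];
        const double norm = next[0] + next[1];
        for (int s = 0; s < 2; ++s)
            belief[s] = next[s] / norm;
    }
    std::cout << "belief: " << belief[0] << ", " << belief[1] << "\n";
    return 0;
}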

Ignoring the hidden variables is not the only IF THEN problem. Simply mapping the input variables to actions does not explain to the computer why these actions should be performed. It is much better to express this in terms of a payoff function (as in minimax or Perkun). Mapping game states to a payoff function allows us to compare the game states (by comparing their "images" under the payoff function).
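As a toy illustration (not Perkun's actual API), comparing states through their payoff images might look like this in C++; the states, successors and payoff values are invented for the example:

#include <iostream>

int main() {
    // Hypothetical payoff assigned to each of three game states.
    const double payoff[3] = {0.0, 1.0, -1.0};
    // Hypothetical successor state reached by each of two actions
    // from the current state.
    const int successor[2] = {1, 2};

    // The payoff function lets us compare states, and therefore actions:
    // choose the action leading to the successor with the highest payoff.
    int best = 0;
    for (int a = 1; a < 2; ++a)
        if (payoff[successor[a]] > payoff[successor[best]])
            best = a;
    std::cout << "best action: " << best
              << " (payoff " << payoff[successor[best]] << ")\n";
    return 0;
}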

It is trivial to explain why hidden variables are necessary. Imagine an automaton which can perform a single action and sees on its input the values 0,0,1,1,0,0,1,1,... and so on. What is the successor of 0? Either 0 or 1, each with 50% probability. What is the successor of 1? Again either 0 or 1, also with 50% probability. But if you introduce a hidden variable, so that the state is denoted by both the input variable and the hidden variable, then you can present the automaton as a four-state cycle (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0) -> ..., where the first component is the input variable and the second one is the hidden variable.

Now the automaton has become deterministic, due to the introduced hidden variable! In Perkun you would describe it as follows:

values
{
    value zero, one;
}
variables
{
    input variable alpha:{zero,one};
    hidden variable beta:{zero,one};
    output variable action:{zero};
}
payoff {}
model
{
    set({alpha=>zero,beta=>zero},{action=>zero},{alpha=>one,beta=>zero},1.0);
    set({alpha=>one,beta=>zero},{action=>zero},{alpha=>one,beta=>one},1.0);
    set({alpha=>one,beta=>one},{action=>zero},{alpha=>zero,beta=>one},1.0);
    set({alpha=>zero,beta=>one},{action=>zero},{alpha=>zero,beta=>zero},1.0);
}

I have omitted the payoff function for convenience; there is only one possible action anyway. But the point is that by introducing hidden variables even non-deterministic automata can become deterministic! Or at least less non-deterministic!
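To see the determinism concretely, here is a small C++ simulation (not Perkun itself) that just walks the transition table from the model above; started in the state (alpha=0, beta=1), it reproduces the sequence 0,0,1,1,0,0,1,1:

#include <iostream>

int main() {
    // Start in (alpha=0, beta=1); the model above then cycles
    // (0,1) -> (0,0) -> (1,0) -> (1,1) -> (0,1) -> ...
    int alpha = 0, beta = 1;
    for (int step = 0; step < 8; ++step) {
        std::cout << alpha << (step < 7 ? "," : "\n");
        // Deterministic transitions copied from the Perkun model:
        if (alpha == 0 && beta == 0)      { alpha = 1; beta = 0; }
        else if (alpha == 1 && beta == 0) { alpha = 1; beta = 1; }
        else if (alpha == 1 && beta == 1) { alpha = 0; beta = 1; }
        else /* alpha==0, beta==1 */      { alpha = 0; beta = 0; }
    }
    return 0;
}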

The conclusion is that IF THEN is not bad at all, but what we deal with is something much more complex, possibly with thousands of hidden variables. And we have to take them into account, at least probabilistically, just like Perkun (or Wlodkowic) does.

You can download Perkun (and Wlodkowic) from https://sourceforge.net/projects/perkun/.


