## Thursday, August 27, 2015

### Perkun. An example with hidden variables.

You are probably curious how Perkun (http://sourceforge.net/projects/perkun/) deals with variables it cannot perceive. Hidden variables are, after all, something that distinguishes Perkun from other programming languages. Let us build an example.

```
values
{
value a,b,c,d;
value do_nothing, switch;
}
variables
{
input variable alpha:{a,b};
hidden variable beta:{c,d};
output variable gamma:{do_nothing, switch};
}
payoff
{
set({alpha=>a},0.0);
set({alpha=>b},100.0);
}
```

As you can see, there are three variables: the input variable alpha, the hidden variable beta, and the output variable gamma. For each fixed value of gamma we can show the model as a directed graph, because the model is still deterministic (all probabilities are 0.0 or 1.0). Let us show the directed graph for gamma=do_nothing:

This one is easy: each state (a pair alpha/beta) remains as it was. Let us now show the directed graph for gamma=switch:
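The graph images themselves are not reproduced here, but we can sketch what they encode as Python transition tables. This is a hedged reconstruction: the do_nothing self-loops follow from the text, the switch edges for the alpha=a states are inferred from the session transcript below, and the switch edges leaving the alpha=b states cannot be recovered from the post.

```python
# States are (alpha, beta) pairs; the model is deterministic,
# so each graph is a function from state to state.
STATES = [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]

# gamma = do_nothing: every state maps to itself (self-loops).
DO_NOTHING = {s: s for s in STATES}

# gamma = switch: only the edges visible in the session transcript
# are listed; the alpha=b edges are unknown (assumption: partial table).
SWITCH = {
    ("a", "c"): ("a", "d"),  # inferred from the transcript below
    ("a", "d"): ("b", "d"),  # inferred from the transcript below
}

for s in STATES:
    print(s, "--do_nothing-->", DO_NOTHING[s])
```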

Now it is slightly more complex. Let us see how Perkun deals with the file. Run:

```
$ perkun abcd.perkun
```

It responds with:

```
loop with depth 1
I expect the values of the variables: alpha
perkun>
```

Enter a. Looking at the graphs, you cannot tell whether the state is "a,c" or "a,d". That is the point: the variable beta is hidden!

```
perkun> a
belief:
beta=c alpha=a 0.5
beta=d alpha=a 0.5
optimal action:
gamma=switch
perkun>
```

Now something interesting happens. Perkun chooses switch as the value of the output variable gamma, even though it does not know precisely which state it is in: there is a 50% probability for each of "a,c" and "a,d". That is correct. Now enter a again.
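We can replicate this belief update with a small Python sketch. The update rule is the standard one for a deterministic hidden-state model: push the belief through the transition table for the chosen action, then keep only the states consistent with the new observation and renormalize. The switch transitions used here are my inference from the transcript, not the (unshown) model section of abcd.perkun.

```python
# Inferred switch transitions for the alpha=a states (assumption).
SWITCH = {("a", "c"): ("a", "d"),
          ("a", "d"): ("b", "d")}

def update(belief, action, observed_alpha):
    """Predict with the deterministic transition table, then
    condition on the observed value of alpha and renormalize."""
    predicted = {}
    for state, p in belief.items():
        nxt = action[state]
        predicted[nxt] = predicted.get(nxt, 0.0) + p
    posterior = {s: p for s, p in predicted.items() if s[0] == observed_alpha}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# First observation "a": beta is hidden, so the belief is uniform.
belief = {("a", "c"): 0.5, ("a", "d"): 0.5}
# Act switch, then observe "a" again:
belief = update(belief, SWITCH, "a")
print(belief)  # {('a', 'd'): 1.0} -- Perkun's "beta=d alpha=a 1"
```

Only (a,d) can produce the observation a after switch, so the belief collapses onto it, exactly as in the session below.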

```
perkun> a
belief:
beta=c alpha=a 0
beta=d alpha=a 1
optimal action:
gamma=switch

perkun>
```

Look at the "belief" in the response. Now Perkun is sure it is in the state "a,d"! This is implied by the model. So in some cases we can gain certainty about the hidden variables. In general, though, Perkun deals with them in terms of probability, so its decision making can be biased by what it believes. But isn't that also the case for us? Let us now enter b:

```
perkun> b
belief:
beta=c alpha=b 0
beta=d alpha=b 1
optimal action:
gamma=do_nothing
perkun>
```

Again Perkun is sure about the hidden variable! It knows it is in the state "b,d". It chooses do_nothing because the payoff says it "likes" to see alpha=b.

Now terminate the session, restart it, and enter b:

```
loop with depth 1
I expect the values of the variables: alpha
perkun> b
belief:
beta=c alpha=b 0.5
beta=d alpha=b 0.5
optimal action:
gamma=do_nothing
perkun>
```

Note that Perkun is now not sure whether it is in the state "b,c" or "b,d"! But it does not care: alpha is b, which is what it likes, so it chooses do_nothing. Sometimes we simply do not need to know the hidden variables to make the right decision!
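Why the hidden variable is irrelevant here can be made explicit with a one-step expected-payoff calculation, which is what a depth-1 loop amounts to: score each action by the expected payoff of the resulting observation under the current belief. The function and variable names below are mine, a sketch rather than Perkun's actual implementation.

```python
# Payoff from the abcd.perkun file: alpha=a is worth 0.0, alpha=b 100.0.
PAYOFF = {"a": 0.0, "b": 100.0}

# Under do_nothing every state is a self-loop (as stated in the post).
DO_NOTHING = {s: s for s in [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]}

def expected_payoff(belief, action):
    """Expected payoff of the next alpha value after taking the action."""
    return sum(p * PAYOFF[action[s][0]] for s, p in belief.items())

# After the restart the belief over beta is uniform, but it does not matter:
belief = {("b", "c"): 0.5, ("b", "d"): 0.5}
print(expected_payoff(belief, DO_NOTHING))  # 100.0
```

From either (b,c) or (b,d), do_nothing keeps alpha=b, so the expected payoff is 100.0, the maximum possible, whatever the belief over beta says. The optimal action is therefore determined without ever resolving the hidden variable.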