Friday, September 4, 2015

Avoiding surprises with embedded Perkun.

I have already mentioned that in some cases Perkun (http://sourceforge.net/projects/perkun/) can get "surprised". This happens when the model claims that some input values are impossible, and then one of them actually occurs. See the code:

values
{
        value false, true;
        value hello;
}

variables
{
        input variable what_I_can_see:{false, true};
        output variable action:{hello};
}

payoff
{
        set({what_I_can_see=>false},0.0);
        set({what_I_can_see=>true},1.0);
}

model
{

        set({what_I_can_see=>false },{action=>hello },{what_I_can_see=>false },0.0);
        set({what_I_can_see=>false },{action=>hello },{what_I_can_see=>true },1.0);
        set({what_I_can_see=>true },{action=>hello },{what_I_can_see=>false },1.0);
        set({what_I_can_see=>true },{action=>hello },{what_I_can_see=>true },0.0);

}

loop(1);


The input values must alternate: false, true, false, true, and so on. If you tell Perkun that it sees "false" twice in a row, it gets surprised. You will probably want to avoid this situation when you embed Perkun in your own programs. To do that, redefine the virtual function:

void optimizer::on_error_in_populate_belief_for_consequence(const belief & b1, const action & a, const visible_state & vs, belief & target) const;

You must do it in a new class inherited from perkun::optimizer_with_all_data (see the previous post). In Perkun Wars I redefine it in the class npc. Instead of throwing an error I call make_uniform on the target belief. This makes Perkun assume a uniform (and therefore reasonable) belief distribution once it gets "surprised".

Another way is to build the model without zeros in the set instructions; you could replace them with, for example, 0.01. Then no observation is impossible, and Perkun never gets surprised.
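For example, the model above could be smoothed like this (assuming the transition probabilities for each state/action pair should still sum to 1, the former 1.0 entries become 0.99):

```
model
{
        set({what_I_can_see=>false },{action=>hello },{what_I_can_see=>false },0.01);
        set({what_I_can_see=>false },{action=>hello },{what_I_can_see=>true },0.99);
        set({what_I_can_see=>true },{action=>hello },{what_I_can_see=>false },0.99);
        set({what_I_can_see=>true },{action=>hello },{what_I_can_see=>true },0.01);
}
```

With this model, seeing "false" twice in a row is merely unlikely rather than impossible.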
