Friday, October 28, 2016

What is so special about thragos?

When you install the game thragos-0.0.0 you can simply walk around (control with the arrow keys), and occasionally you run into an NPC, which causes the game to stop and lets you make a decision. The NPC chooses one of the ten actions. Sometimes it attacks you, sometimes it practices or makes a weapon. It also happens that the NPC asks whether you are a wizard or whether you have a weapon. Why?

Well - that is what thragos is all about. The Wlodkowic specification executed by the game is in the unpacked tarball, in the file wlodkowic/thragos.wlodkowic. Please enter the directory wlodkowic and grep the file thragos.wlodkowic for "estimate" and "set":

grep "estimate" thragos.wlodkowic | grep "set"

It responds with 1480 lines similar to this one:
set({he_is_a_wizard=>{false},he_has_a_weapon=>{false},he_is_stronger_than_me=>{false},
reward=>{false},test_result=>{false,true,none},
i_am_a_wizard=>{false,true},i_have_a_weapon=>{false,true},i_can_see_him=>{false,true},
he_has_used_a_weapon=>{false,true},he_has_used_magic=>{false,true}},
{action=>{estimate_whether_he_is_a_wizard}},
{he_is_a_wizard=>{false},he_has_a_weapon=>{false},he_is_stronger_than_me=>{false},
reward=>{none},test_result=>{false},i_am_a_wizard=>{false},i_have_a_weapon=>{true},
i_can_see_him=>{false},he_has_used_a_weapon=>{false},he_has_used_magic=>{false}},1.0);


Let's split this instruction into its three parts: the initial state query, the action, and the terminal state query (followed by the probability):
set({he_is_a_wizard=>{false},he_has_a_weapon=>{false},he_is_stronger_than_me=>{false},
reward=>{false},test_result=>{false,true,none},
i_am_a_wizard=>{false,true},i_have_a_weapon=>{false,true},i_can_see_him=>{false,true},
he_has_used_a_weapon=>{false,true},he_has_used_magic=>{false,true}},

{action=>{estimate_whether_he_is_a_wizard}},

{he_is_a_wizard=>{false},he_has_a_weapon=>{false},he_is_stronger_than_me=>{false},
reward=>{none},test_result=>{false},i_am_a_wizard=>{false},i_have_a_weapon=>{true},
i_can_see_him=>{false},he_has_used_a_weapon=>{false},he_has_used_magic=>{false}},1.0);

I can assure you that the terminal state query always contains reward=>{none}. What does that mean? It means that just asking the question gives no direct benefit to the agent. In spite of that the agent sometimes chooses to LEARN something about the hidden variables. Now this is what I call real Machine Learning. It is implied by the algorithm itself. The NPC will not only perform actions like making a weapon or practising, it will also query its environment for the hidden variables. Not always, of course. But sometimes.

Tuesday, October 25, 2016

thragos-0.0.0 released!

Download it from https://sourceforge.net/projects/thragos/. It has status alpha.

It requires:

Friday, October 21, 2016

THRAGOS - screenshot

Today I was rewriting my old 3d objects dumper, which was necessary since the Blender folks have changed quite a lot in their API. Thragos works more or less, but I still need to add many things.

The most important part is the AI - it works between the NPCs and the player, but the NPCs do not meet each other yet. I will change this.

Sunday, October 9, 2016

THRAGOS - 0 errors

Finally I eliminated all the errors reported by Wlodkowic for my model. The diagram looks as follows:

I want to write a game demonstrating how to use Wlodkowic in your own projects (just like PerkunWars for Perkun). I imagine I would run a child process for each pair of characters.
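A minimal sketch of that plan in C++, using plain POSIX fork and pipe (the character names and the message below are placeholders, not anything from the actual sources), could look like this:

// Sketch only: one child process per pair of characters, talking to the parent
// over a pipe (plain POSIX, nothing Wlodkowic-specific).
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> characters = {"player", "npc_1", "npc_2"};
    std::vector<pid_t> children;

    for (size_t i = 0; i < characters.size(); i++)
        for (size_t j = i + 1; j < characters.size(); j++)
        {
            int fd[2];
            if (pipe(fd) == -1) return 1;

            const pid_t pid = fork();
            if (pid == 0)
            {
                // child: this is where the optimizer for the pair would run
                close(fd[0]);
                const std::string msg = "pair " + characters[i] + "/" + characters[j] + " ready\n";
                if (write(fd[1], msg.c_str(), msg.size()) < 0) _exit(1);
                close(fd[1]);
                _exit(0);
            }

            // parent: keep the reading end, close the writing end
            close(fd[1]);
            char buffer[128];
            const ssize_t n = read(fd[0], buffer, sizeof(buffer) - 1);
            if (n > 0) { buffer[n] = '\0'; std::cout << buffer; }
            close(fd[0]);
            children.push_back(pid);
        }

    for (const pid_t pid : children) waitpid(pid, nullptr, 0);
    return 0;
}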


Saturday, October 8, 2016

THRAGOS - 647 errors

The number of remaining errors dropped to 647.
Currently I have 77 Prolog rules generated by my Perl script.

Once I finish with the errors I will need to add the "illegal" and "impossible" commands to the model, and then start the implementation in C++. I thought about using the gtkglextmm library; it looks nice, and I could reuse the 3D models I created for chrzaszcz2.

Friday, October 7, 2016

A fantasy game THRAGOS - still 1120 errors

I found a way to generate the Prolog rules I need with Perl. First I made a draft Wlodkowic specification for a simple fantasy game ("Thragos"). It contains the following actions:
  • escape
  • attack_with_bare_hands
  • attack_with_weapon
  • attack_with_magic
  • learn_magic
  • practice_to_become_stronger
  • make_a_weapon
  • estimate_whether_he_is_a_wizard
  • estimate_whether_he_has_a_weapon
  • estimate_whether_he_is_stronger
After putting all the "unimportant" rules into the specification there are 444 states. This (along with the 10 actions) makes the "test();" command produce 4440 errors (444*10). The number of set instructions in the model should be 1971360 (444*10*444), but some (in fact most) of them will contain zeros.

I will show you the diagram I produced for the current rules generated by my Perl script:



You can see that about 25% of the errors remain (precisely 1120 errors). Initially I tried to write the Prolog rules manually, but I gave up.

I imagine I would write a simple game based on Wlodkowic where the characters would try to optimize their situation by fighting each other, learning magic, producing weapons and so on. They would also communicate with each other, for example to estimate whether someone is a wizard or has a weapon.

The Wlodkowic variables for this project are:

variables
{
    input variable reward:{none,false,true};
    input variable test_result:{none,false,true};
    input variable i_am_a_wizard:{false,true};
    input variable i_have_a_weapon:{false,true};
    input variable i_can_see_him:{false,true};
    input variable he_has_used_a_weapon:{false,true};
    input variable he_has_used_magic:{false,true};

    hidden variable he_is_a_wizard:{false,true};
    hidden variable he_has_a_weapon:{false,true};
    hidden variable he_is_stronger_than_me:{false,true};
   
    output variable action:{escape,attack_with_bare_hands, attack_with_weapon, attack_with_magic,
        learn_magic, practice_to_become_stronger, make_a_weapon,
        estimate_whether_he_is_a_wizard,
        estimate_whether_he_has_a_weapon,
        estimate_whether_he_is_stronger};
}


Friday, September 30, 2016

Wlodkowic 0.1.4

Perkun & Wlodkowic 0.1.4 have been published! Download them from https://sourceforge.net/projects/perkun/.

Wlodkowic 0.1.4 contains the following improvements:
  • reports the expected value of payoff per action
  • Prolog generator
  • test command
The first feature simply means that when making the optimal decision (in the loop), a real number denoting the action's "gain" (the expected value of the payoff) is printed out for every action considered.

A Prolog generator is created (like in Perkun) with the command:

cout << prolog generator << eol;

Unlike in Perkun, it creates only the model (more precisely, the "set" commands).

There is also a new command:

test();

It checks the model for inconsistencies and prints them out to the standard error stream.


Thursday, September 22, 2016

97000 set instructions

Sure. Don't comment, don't download, I will do my job anyway... Just kidding. You are absolutely right. When I have a game based on wlodkowic I will just put the following macro in its configure.ac:

PKG_CHECK_MODULES([WLODKOWIC], [libwlodkowic >= 0.1.3],[],
[AC_MSG_ERROR([Download the perkun package from https://sourceforge.net/projects/perkun/])])


That is all. So it really makes no sense to download it before you can see it in action.

BUT. But. I created a Wlodkowic specification for a simple game. I checked the number of states: 1152. I added the "unimportant" rules. Now the number of states dropped to 139. That is more realistic. With 5 possible actions it turns out to have a model of about 97000 set instructions (139*139*5). In Perkun there were the Prolog generators to help with that. In Wlodkowic there is no such thing. I need to write the code that generates the set instructions. Or even better, write the code that writes the code... You got my point ;)))

I have no idea how to do it. I will try with Prolog; I have nothing better than that.
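Whichever language it ends up being written in, such a generator is essentially a triple loop over (state, action, state). Just to illustrate the idea and the scale, a rough C++ sketch could look like this (the states, the actions and the probability rule below are dummies, not the real Thragos model):

// dummy generator: one set(...); line per (initial state, action, terminal state)
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // each "state" is already rendered as its query text - these are just dummies
    const std::vector<std::string> states  = {"{a=>{false}}", "{a=>{true}}"};
    const std::vector<std::string> actions = {"{action=>{wait}}", "{action=>{go}}"};

    long printed = 0;
    for (const std::string& initial : states)
        for (const std::string& action : actions)
            for (const std::string& terminal : states)
            {
                // placeholder rule: uniform distribution over the terminal states
                const double probability = 1.0 / states.size();
                if (probability == 0.0) continue;   // a real rule would skip the zeros
                std::cout << "set(" << initial << "," << action << ","
                          << terminal << "," << probability << ");\n";
                ++printed;
            }
    std::cerr << printed << " set instructions generated\n";
    return 0;
}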

Wednesday, September 21, 2016

Perkun & Wlodkowic released

Finally I got Wlodkowic completed. Take a look at the Perkun package (https://sourceforge.net/projects/perkun/ - download perkun-0.1.3.tar.gz).

I am especially proud of the apriori section - now you do not need to touch the C++ code to define the a-priori distribution. For example:


values
{
    value false, true;
}
variables
{
    input variable a:{false,true};
    hidden variable c:{false,true};
    output variable b:{false,true};
}
knowledge {}
payoff
{
    set({a=>false}, 0.0);
    set({a=>true}, 1.0);
}

model {}
apriori
{
set({a=>{false}},{c=>{false}}, 0.3);
set({a=>{false}},{c=>{true}}, 0.7);

set({a=>{true}},{c=>{false}}, 0.9);
set({a=>{true}},{c=>{true}}, 0.1);
}


But the real power of Wlodkowic is in its state reduction potential (putting the "unimportant" rules into the knowledge section).

Because of the "unimportant" rules each input variable may now have multiple values, so unlike Perkun, Wlodkowic does not simply read one line for all variables. Instead it asks for each variable separately and expects all of its values, separated by spaces, on a single line. Their order is not important. Take a look at the test files in src2.

There is no code for Prolog or Haskell generators in Wlodkowic (I was thinking about it, but it seems rather complex).

Tuesday, September 20, 2016

Embedding Wlodkowic

I keep developing Wlodkowic (you may download the current Perkun package to get it - https://sourceforge.net/projects/perkun/). I thought I would point out an important difference between Wlodkowic and Perkun. When you embed Perkun in your own programs (like in PerkunWars) you only need to instantiate a subclass of perkun::optimizer_with_all_data and call the method parse. This means that all of its activities are done directly by the parser. This is, however, not a very convenient behavior. Wlodkowic, by contrast, will store the commands in a dedicated data structure, and when embedding it you will need to call two methods:
  • parse
  • execute_commands
This is not important for you as long as you simply use wlodkowic directly (as a command line application), but it is important when embedding it.
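To make the difference concrete, here is a small self-contained C++ stand-in (this is NOT the real libwlodkowic API, only an illustration of the pattern): parsing merely stores the commands, and a separate call executes them later.

// stand-in for the two-phase behaviour: parse() stores, execute_commands() runs
#include <functional>
#include <iostream>
#include <string>
#include <vector>

class two_phase_interpreter
{
    std::vector<std::function<void()>> commands;    // filled during parsing
public:
    // phase 1: "parsing" - here we just queue one command per input line
    void parse(const std::vector<std::string>& lines)
    {
        for (const std::string& line : lines)
            commands.push_back([line]() { std::cout << "executing: " << line << "\n"; });
    }
    // phase 2: run everything that was stored
    void execute_commands()
    {
        for (auto& command : commands) command();
    }
};

int main()
{
    two_phase_interpreter w;
    w.parse({"loop(2);", "test();"});   // nothing is executed yet
    w.execute_commands();               // now the stored commands run
    return 0;
}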

I am working on Wlodkowic now; the main algorithm (the one used in the loop) will be the same as in Perkun. If you remember, it is the algorithm implemented in the class optimizer (perkun::optimizer or wlodkowic::optimizer). It is the heart of Perkun (and Wlodkowic).

Sunday, September 18, 2016

Wlodkowic - a new language

I have started implementing a new language based on Perkun. Its name is Wlodkowic (to honor Paweł Włodkowic). It comes as a part of the Perkun package, download Perkun 0.1.2 (from http://sourceforge.net/projects/perkun/) in order to check it.

It is not finished yet; it's just the first milestone. You may run the tests (run "configure" and then "make check"). Its sources are in the directory "src2". Its tests do not require installation (unlike the Perkun tests).

The syntax changes:
  • there are two additional sections - knowledge and apriori
  • the section "knowledge" may contain "unimportant" rules
  • the queries may contain the whole equivalence classes instead of values
I think it is better to see an example. Take a look at the file src2/t8.wlodkowic. Its section "knowledge" (following the "variables" section) contains four "unimportant" rules.

knowledge
{
    unimportant({patient_is_alive=>false},patient_complains_about_x=>[{false,true}]);
    unimportant({patient_is_alive=>false},disease_1=>[{false,true}]);
    unimportant({patient_is_alive=>false},blood_pressure=>[{low,average,high}]);
    unimportant({patient_complains_about_x=>false},blood_pressure=>[{low,average},{high}]);
}


The first one says that if the patient is dead we do not care whether he complains about anything. The second and third say the same about disease_1 and blood_pressure. The fourth one is interesting - it says that we only need to distinguish two classes of blood pressure, namely {low, average} and {high}, IF THE PATIENT does not complain about x. Note that we use the square brackets ([]) to enclose the equivalence classes.

Wlodkowic is not fully functional yet, but it can print out the visible states and the states. Note that with these "unimportant" rules there are only 11 states, while without them there are 24 states.

The "apriori" section follows the "model" section and it will contain the knowledge about the a-priori beliefs. It is not implemented yet. I thought that Perkun really lacks it and in realistic examples Wlodkowic will have an advantage.

Wlodkowic comes with a companion library (libwlodkowic) which is completely separate from Perkun. I thought it would be better this way.

Tuesday, September 13, 2016

I have an idea...

I have an idea how to improve Perkun. Do you recall the last example (the medical one)? Perkun creates "visible_states" (a kind of object in memory) for all combinations of the input variable values. Each visible state can cost a substantial amount of memory. I realized that it is not important to differentiate between the visible states:

{patient_is_alive=>false,complaints_1=>false,complaints_2=>false,...}
{patient_is_alive=>false,complaints_1=>true,complaints_2=>false,...}
{patient_is_alive=>false,complaints_1=>false,complaints_2=>true,...}
{patient_is_alive=>false,complaints_1=>true,complaints_2=>true,...}


It is not necessary, because the patient is dead. On the other hand it may be very important to do it when the patient is alive:

{patient_is_alive=>true,complaints_1=>false,complaints_2=>false,...}
{patient_is_alive=>true,complaints_1=>true,complaints_2=>false,...}
{patient_is_alive=>true,complaints_1=>false,complaints_2=>true,...}
{patient_is_alive=>true,complaints_1=>true,complaints_2=>true,...}

So there is a certain asymmetry in the importance of the variables. I can imagine that for some diseases it makes no difference whether the blood pressure is low or average, but high blood pressure is dangerous. So it is too much to say that the blood pressure is not important in this case, but it could be said that only two classes matter: {low,average} and {high}. I call them classes by analogy to the equivalence classes in algebra.

Maybe using the knowledge (what is important and what is not) I could save a lot of memory. This would allow my medical example to grow to more realistic proportions. Say, hundreds of variables.
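A tiny self-contained C++ illustration of the idea (nothing Perkun-specific, just the variables from the medical example above): if we canonicalize a visible state before using it as a map key, all the states that differ only in the "unimportant" variables share a single entry.

// collapse visible states that differ only in "unimportant" variables
#include <initializer_list>
#include <iostream>
#include <map>
#include <string>

struct visible_state
{
    bool patient_is_alive;
    bool complaints_1;
    bool complaints_2;
};

// canonical key: when the patient is dead the complaints are "unimportant",
// so we drop them from the key
std::string canonical_key(const visible_state& s)
{
    if (!s.patient_is_alive) return "alive=0";
    return "alive=1,c1=" + std::to_string(s.complaints_1) +
           ",c2=" + std::to_string(s.complaints_2);
}

int main()
{
    std::map<std::string, int> per_state_data;      // stands in for the per-state objects
    for (bool alive : {false, true})
        for (bool c1 : {false, true})
            for (bool c2 : {false, true})
                ++per_state_data[canonical_key({alive, c1, c2})];

    // 8 raw combinations, but only 5 distinct entries are stored
    std::cout << per_state_data.size() << " states stored\n";
    return 0;
}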

I do not have software which would allow optimization using my algorithm with this improvement - ignoring the unimportant variables (in some cases). I think I will try to write it, first in C++, then possibly port it to Java.

My primary languages are C++ and Perl (I also know Python), but I think that for the project's usability it is better to have it as a compiled library (as in C++ or Java).

Sunday, September 11, 2016

Perkun - a medical example.

Perkun is highly experimental and it is NOT suitable for games with two players. What is it good for? I thought I would come up with a simple example demonstrating how Perkun might be used in medicine. I have no medical background (I am just a programmer), so the example will be really simple:


values
{
    value wait;
    value false, true;
    value none, low, average, high;
    value treatment_a,treatment_b,treatment_c;
    value test_1,test_2,test_3,test_4, test_5;
}

variables
{
    input variable patient_is_alive:{false,true};
    input variable complaints_1:{false,true};
    input variable complaints_2:{false,true};
   
    input variable test_result:{none, low,average,high};
   
    hidden variable disease_1:{false,true}, disease_2:{false,true}, disease_3:{false,true};
   
    hidden variable body_parameter_1:{low,average,high};
    hidden variable body_parameter_2:{low,average,high};   
   
    output variable action:{wait, treatment_a,treatment_b,treatment_c, test_1,test_2,test_3,test_4, test_5};
}


We have a patient (the input variable patient_is_alive is the only thing that matters, i.e. affects our payoff). Then we have complaints (only two) - the input variables that we immediately get from our patient. The last input variable is a test_result. Note that we have five different tests and we use the same input variable for their result. I assume we get the result immediately after applying the test.

There are three hidden variables for three different diseases and, in addition, two hidden variables for body parameters. In real life I imagine there would be thousands of body parameters (which probably makes my implementation unusable). But I am just introducing the idea. The patient is an automaton which can be controlled either with treatments or with tests. See the output variable ("action") - it contains both the treatments and the tests. From the Perkun point of view there is no difference between a treatment and an experiment (test). They can be applied interchangeably. What is important is that Perkun always updates the belief with the newly acquired knowledge.

There is one more drawback - I would expect the Perkun user to inherit from the class perkun::optimizer_with_all_data and provide his own implementation of the method void set_apriori_belief(perkun::belief * b) const. It is necessary to provide actual numbers for the initial belief. At present, for any disease Perkun would assume 50% true and 50% false, which is not the case in nature.

So what is the difference between Perkun and a normal Bayesian classifier? It is the use of hidden variables and the algorithm that plans updating the belief depending on the experiment/test results - the algorithm that plans using tests and treatments interchangeably, just to keep the expected value of the payoff maximal. Perkun is not a diagnostic system, it is an optimization system. It is not trying to minimize the entropy of the belief distribution. Its only objective is to maximize the payoff. So it is perfectly happy with leaving some knowledge unknown.
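For the curious, here is my own schematic reconstruction in plain C++ of the kind of planning described above; it is NOT the actual perkun::optimizer code, and the toy model, actions and probabilities are invented for the sketch. The idea: pick the action that maximizes the expected payoff under the current belief over the hidden variables, simulating the observations and the Bayes updates of the belief a few moves ahead.

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Belief = std::map<std::string, double>;   // hidden state -> probability

// toy placeholders - in a real specification these come from the variables/model/payoff
const std::vector<std::string> actions = {"wait", "test_1", "treatment_a"};
const std::vector<std::string> visible = {"ok", "bad"};
const std::vector<std::string> hidden  = {"healthy", "sick"};

double payoff(const std::string& v) { return v == "ok" ? 1.0 : 0.0; }

// P(next visible, next hidden | hidden, action) - a made-up toy model:
// treatment_a cures with probability 0.8, test_1 reveals the hidden state,
// otherwise the observation is only a noisy hint
double model(const std::string& h, const std::string& a,
             const std::string& v2, const std::string& h2)
{
    double p_h2;
    if (a == "treatment_a")
        p_h2 = (h2 == "healthy") ? (h == "healthy" ? 1.0 : 0.8)
                                 : (h == "healthy" ? 0.0 : 0.2);
    else
        p_h2 = (h2 == h) ? 1.0 : 0.0;

    double p_v2;
    if (a == "test_1")
        p_v2 = ((h2 == "healthy") == (v2 == "ok")) ? 1.0 : 0.0;
    else
        p_v2 = (h2 == "healthy") ? (v2 == "ok" ? 0.9 : 0.1)
                                 : (v2 == "ok" ? 0.2 : 0.8);
    return p_h2 * p_v2;
}

double best_value(const Belief& b, int depth);

// expected payoff of playing a under belief b, then playing optimally for depth-1 moves
double expected_value(const Belief& b, const std::string& a, int depth)
{
    double total = 0.0;
    for (const std::string& v2 : visible)
    {
        Belief b2;                              // belief after observing v2
        double p_v2 = 0.0;
        for (const std::string& h2 : hidden)
        {
            double p = 0.0;
            for (const auto& [h, ph] : b) p += ph * model(h, a, v2, h2);
            b2[h2] = p;
            p_v2 += p;
        }
        if (p_v2 <= 0.0) continue;
        for (auto& [h2, p] : b2) p /= p_v2;     // Bayes update
        total += p_v2 * (payoff(v2) + best_value(b2, depth - 1));
    }
    return total;
}

double best_value(const Belief& b, int depth)
{
    if (depth == 0) return 0.0;
    double best = -1e300;
    for (const std::string& a : actions)
        best = std::max(best, expected_value(b, a, depth));
    return best;
}

int main()
{
    const Belief b = {{"healthy", 0.5}, {"sick", 0.5}};     // initial 50/50 belief
    for (const std::string& a : actions)
        std::cout << a << " -> expected payoff " << expected_value(b, a, 3) << "\n";
    return 0;
}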


Saturday, September 10, 2016

Perkun2 vs. minimax

I thought it might be useful to compare Perkun2 with minimax.

In minimax:
  • there are no hidden variables
  • each move is deterministic
  • the players have exactly opposite payoffs
In Perkun2:
  • hidden variables are allowed (even hidden for both players)
  • each move can be stochastic
  • the players' payoff functions are independent
However, if you use only the probabilities 0.0 and 1.0 in the model's set instructions, the model is deterministic. It would be relatively easy to design an algorithm like minimax that considers two independent payoffs for the two players. Then the players could cooperate (unlike in minimax games). But the real killer feature is the hidden variables. They make the players more than just mappings INPUT->OUTPUT; they allow them to take the game history into account in order to better predict the consequences of the moves.

Even though the Perkun2 syntax allows n players (more than two), the implementation of the algorithm does not support it yet. Please use no more than two players for now. I will try to fix this.

What about Perkun? It is almost like Perkun2, but it allows only a single player.

Wednesday, September 7, 2016

WebPerkun

I have created a Spring MVC web application accessible under:

http://www.pawelbiernacki.net/PerkunWebApplication/index.html

It parses a Perkun specification and allows you to play with Perkun a bit.

Limitations:
  • something is wrong with the "surprise" value
  • empty payoff/model are initialized randomly (old feature)
It is based on JPerkun (Java port of Perkun available from my website).

For the specification provided - try input values "true" and "false".

Saturday, September 3, 2016

The magic of hidden variables

Why introduce hidden variables? There are good reasons to do so. First, they allow you to improve the predictability of your model. In some cases they can even make your model deterministic. On the other hand, the number of hidden variables should be minimized. Entities should not be multiplied beyond necessity (the principle known as Ockham's Razor). Second, there may be ways to reveal the hidden variables. If there are, this gives you a chance to predict the consequences of your actions better.

It is a rather philosophical question whether the hidden variables are "real". Whatever makes your model better is real. You could imagine a computer program (similar to Perkun/Perkun2) that introduces the hidden variables on its own.

Let us take a look at example4 in the Perkun2 tarball (directory examples/example4). Use Perkun2 version 0.0.3 from https://sourceforge.net/projects/perkun2/. First execute the command:

> perkun2 example4_final_code_stupid.perkun2

It expects the values of the variables "response" and "reward". "Response" does not affect the payoff function, while "reward" does. The "response" is used to reveal the answer to the question asked by the computer. There are two hidden variables:
  • hidden variable secret:{false,true};
  • hidden variable computer_is_asking:{false,true};
The "secret" is something that the agent computer does not know (but the other agent, human, knows it! It is an input variable for him!). The objective of the program is to say what is the value of "secret". It may choose action "false", "true", "none" or "ask" (see the output variable action).

On prompt "Perkun2 (computer) >" please type "none none":

Perkun2 (computer) > none none

The computer chooses "action=false". Why? There is a 50% chance that it is right. Not very smart of it. Why not ask the human (who knows the secret)? Let us type "none false" on the prompt "Perkun2 (human)":

Perkun2 (human) > none false

This means that the "response" is "none" (as previously), but the "reward" is "false", i.e. not good (see the payoff function). On the next prompt type "none false" again:

Perkun2 (computer) > none false

Now the computer realizes it got punished and changes its decision - the action is "true". But it can do better than that. Exit the session and run:

> perkun2 example4_final_code_smart.perkun2

(You may check that the only difference between example4_final_code_smart.perkun2 and example4_final_code_stupid.perkun2 is the argument of the command loop.) Now we run the command loop with game tree depth 3.

Type "none none" on the first prompt:

Perkun2 (computer) > none none

Now the decision is to ask the human for the secret! This does not change the input though, so we can type "none none" on the next prompt:

Perkun2 (human) > none none

On the next prompt the computer is expecting a response from the human. The first input variable (response) should be true or false, depending on what the actual value of the secret is. Let us assume the secret is true:

Perkun2 (computer) > true none

Now the chosen action is true! That is, in the "smart" version (which differs only in the game tree depth) the computer first chooses the action that reveals the value of the secret and then, depending on the human's response, answers accordingly.

How does the computer know that the human knows the truth? Well - this is implied by the definition of the agent human (check that the secret is an input variable for this agent). Why does it assume that the human will not lie? This is implied by the model of the agent human. We could also have a model that assumes the human always lies; in that case the computer would ask and then choose the opposite response (action=false on response=true and action=true on response=false).

Instead of asking a human it could be anything else, like performing a complex calculation or checking on the Internet. You should remember that using extra actions that reveal the values of hidden variables may cost extra depth of the game tree (i.e. the argument passed to "loop" must be greater).

Initially the computer does not know what the value of the secret is. Perkun/Perkun2 are able to plan actions that do not directly give them any reward, and to plan the usage of the knowledge that is supposed to be revealed. This is a very important feature of Perkun/Perkun2! They plan performing experiments and using the knowledge gained in those experiments, whatever it turns out to be.

Wednesday, August 31, 2016

Perkun in C++ notation

You may think that the Perkun/Perkun2 syntax is rather odd. I was thinking about a comparison - for example with C++. And I came up with the following example:

variables
{
input variable a:{false,true},b:{false,true},c:{false,true};
output variable action:{false,true};
hidden variable d:{false,true},e:{false,true},f:{false,true};
}

Now imagine the following class in C++-like pseudocode:

#include <map>
#include <tuple>
class x
{
private:
    // the hidden variables; operator< lets us use them as a std::map key
    struct hidden
    {
        bool d, e, f;
        bool operator<(const hidden& o) const { return std::tie(d,e,f) < std::tie(o.d,o.e,o.f); }
    };
    std::map<hidden, double> distribution;   // the belief over the hidden variables
public:
    bool get_decision(bool a, bool b, bool c) const;
};

In other words we have a class with a probability distribution over the Cartesian product of the hidden variables' domains and a single function mapping the input variables to the (single) output variable. Note that if there were no hidden variables then our distribution would disappear and we would have a stateless class (or simply a function) get_decision.

You might ask what the benefit is of using Perkun rather than simply hard-coding the function that maps input to output. I think that in simple cases it may indeed be simpler to hard-code it. But it is a trap. By hard-coding it we do not explain to the machine WHY the mapping looks the way it does; we only provide the end result. We might end up with hard-coded solutions for myriads of different input sets. It may even be that the idea of telling the machine how to derive the result from simple components (payoff, model) will be treated as highly experimental. Well, Perkun/Perkun2 are highly experimental. In spite of that I think they open intriguing possibilities as new programming technologies.

BTW, the fact that I usually use only a single output variable is just a convention. You are free to use multiple output variables.




Tuesday, August 30, 2016

PerkunWars - a simple fantasy game based on Perkun.

I would like to introduce a small project you can download from https://sourceforge.net/projects/perkunwars/ - PerkunWars - a simple fantasy game.

The game has one player (Valder), three NPCs (Thragos, Dorban, Pregor) and a vampire. There are three locations (Wyzima, Shadizar and Novigrad). Click on a town to move there (you may also use a menu action).

If you meet any NPCs you may chat with them. The chat window will display something like:

Pregor >I was staying here, in Shadizar. The vampire? He is in Wyzima. I am staying here...
Thragos >I was staying here, in Shadizar. The vampire? He may be in Wyzima. He may be in Novigrad. I am staying here...


You may have noticed that Pregor and Thragos are avoiding the vampire. There is one more NPC - Dorban, who is a witcher and therefore is constantly hunting the vampire. You may attack the vampire yourself (garlic and holy water are helpful). But it makes no sense to attack him alone. You should attack him when the NPCs are around.


This simple game is based on Perkun. Open the file src/perkun_wars.cc. You will see the code creating pipes and forks. For each NPC a child process is created (running the function main_perkun). An instance of the class npc is created, and it parses the Perkun specification. What is npc? See the file inc/perkun_wars.h. You will see the code:

class npc: public perkun::optimizer_with_all_data

So for each NPC (Dorban, Pregor and Thragos) there is an instance of perkun::optimizer_with_all_data with some redefined virtual functions, parsing a Perkun specification!

Now take a look at the directory perkun/final_code. It contains the Perkun specifications named after the NPCs. Take a look at the perkun/final_code/pregor_general.perkun. Note the payoff function:

payoff
{
set({where_is_Pregor=>place_Wyzima, do_I_see_vampire=>false}, 100.0);
set({where_is_Pregor=>place_Wyzima, do_I_see_vampire=>true}, 0.0);
set({where_is_Pregor=>place_Shadizar, do_I_see_vampire=>false}, 100.0);
set({where_is_Pregor=>place_Shadizar, do_I_see_vampire=>true}, 0.0);
set({where_is_Pregor=>place_Novigrad, do_I_see_vampire=>false}, 100.0);
set({where_is_Pregor=>place_Novigrad, do_I_see_vampire=>true}, 0.0);
}



Now check the payoff for Dorban (the file perkun/final_code/dorban_general.perkun):

payoff
{
set({where_is_Dorban=>place_Wyzima, do_I_see_vampire=>false}, 0.0);
set({where_is_Dorban=>place_Wyzima, do_I_see_vampire=>true}, 100.0);
set({where_is_Dorban=>place_Shadizar, do_I_see_vampire=>false}, 0.0);
set({where_is_Dorban=>place_Shadizar, do_I_see_vampire=>true}, 100.0);
set({where_is_Dorban=>place_Novigrad, do_I_see_vampire=>false}, 0.0);
set({where_is_Dorban=>place_Novigrad, do_I_see_vampire=>true}, 100.0);
}


Did you notice something? Of course Dorban's payoff function makes him follow the vampire, while Pregor's makes him avoid it. This is how we use the payoff function.

Dorban's variables are:
variables
{
input variable where_is_Dorban:{place_Wyzima,place_Shadizar,place_Novigrad};
input variable do_I_see_vampire:{false,true};
output variable action:{do_nothing,goto_Wyzima,goto_Shadizar,goto_Novigrad};
hidden variable where_is_vampire:{place_Wyzima,place_Shadizar,place_Novigrad};
}


Note that we have one hidden variable here - where_is_vampire. Of course Dorban knows where the vampire is if he can see him (i.e. it is impossible for the vampire to be elsewhere when Dorban can see him). This is solved with the command "impossible" in Dorban's model. But in general it is a hidden variable.





New releases of Perkun and Perkun2. The problem with Perkun2 has been solved!

There is a new release of Perkun (0.1.1) and Perkun2 (0.0.3).
You may download them from https://sourceforge.net/projects/perkun/ and https://sourceforge.net/projects/perkun2/.

In Perkun there are no substantial changes, just a couple of new getters. In Perkun2 there are:
  • an extra parameter (agent name) passed to the print_prompt sub in Perl
  • enhanced code to interpret the actions
  • modified procedure loop
Please be warned that I did not test it with more than two agents. In fact I am sure it will get "surprised" when more than two agents are in place. (The term "surprised" has a strictly defined meaning in the world of Perkun.)

I am currently trying to implement something similar to https://sourceforge.net/projects/perkunwars/ - a simple game demonstrating how to use both Perkun and Perkun2 in your own projects.

Wednesday, August 24, 2016

What's wrong with Perkun2?

I wish to explain more precisely what is wrong with Perkun2. Imagine a specification:

values {}
agents
{
  agent a1 {}
  agent a2 {}
  ...
  agent an {}
}
interfaces
{
  interface a1 => a2 {}
  ...
  interface an => a1 {}
}

In other words we have n agents performing their actions in turn, according to the pattern a1,a2,...,an,a1,a2,...,an,...

Now the algorithm I propose in Perkun2 consists of two parts:
  • how to optimize the actions (output variable values) in order to maximize the expected value of the payoff function
  • how to interpret what has actually happened (how to update the belief)
The first part (optimization) seems to work - that is why I decided to publish Perkun2. I overlooked the second part, however, and left the interpretation of the input variables just as it was in Perkun. But in Perkun2 there are multiple agents. Even though the agent a1 tries to find the optimal action, it is also observing the other agents playing. It is not able to see their decisions directly, but it can observe its input variables as they change. For example, after agent a2's move the input variables of agent a1 may have changed. The same holds for agent a3, and so on up to an. So the first agent looks for the optimal action only once (on his own move), but observes the game n times (he observes what happens after his own move as well as what happens after his opponents' moves). Note that a1 can only see HIS input variables, which may be hidden for the agents a2,...,an.
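Schematically, the intended behaviour is something like the following self-contained C++ stand-in (this is only an illustration of the idea, not the Perkun2 code or its API):

// a1 chooses an action only on its own move, but it reads and interprets
// its input variables once per move of every agent (n times per round)
#include <iostream>
#include <string>
#include <vector>

struct my_agent
{
    std::string name = "a1";

    std::string read_my_inputs()                    // ask for a1's input variables
    {
        std::string line;
        std::getline(std::cin, line);
        return line;
    }
    void update_belief(const std::string& inputs)   // the interpretation step
    {
        std::cout << name << " interprets: " << inputs << "\n";
    }
    std::string find_optimal_action()               // the optimization step
    {
        return "some_action";
    }
};

int main()
{
    my_agent a1;
    const std::vector<std::string> turn_order = {"a1", "a2", "a3"};

    for (int round = 0; round < 2; ++round)
        for (const std::string& mover : turn_order)
        {
            if (mover == a1.name)                   // a1 optimizes only on its own move
                std::cout << a1.name << " plays " << a1.find_optimal_action() << "\n";
            a1.update_belief(a1.read_my_inputs());  // ...but observes every move
        }
    return 0;
}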

What is a little problematic is the fact that the agent ai (i>1) has its model and payoff defined in terms of its own variables. Quite logical. We have to use the interfaces to map these variables to the ones observable by the agent a1.

I am sorry about the trouble again. I will try to fix it. I started with a small improvement: passing the current agent name to the print_prompt sub. For example you may type:

<<PERL
$$Perkun2::Optimizer::optimizer{print_prompt} =
    sub
    {
        my ($this, $agent_name) = @_;
        print "SUPER PERKUN2 (", $agent_name, ")> ";
    };
   
PERL


(This enhancement is not published yet!) This way Perkun2 will ask for the input variables before each agent's move:

SUPER PERKUN2 (a1)>
SUPER PERKUN2 (a2)>
...
SUPER PERKUN2 (an)>
SUPER PERKUN2 (a1)>
SUPER PERKUN2 (a2)>
...

But it will find the optimal decision only for the agent a1 (even though it is observing the game for all agents).



Friday, August 19, 2016

WARNING: Perkun2 contains a bug!

There is a serious bug in Perkun2. It fails to interpret the input variable values properly when multiple agents are in place. I created a model for two players and found out that Perkun2 always gets "surprised".

I am very sorry about the bug, I will try to fix it.

The optimization itself seems to work properly.

Sunday, August 7, 2016

Perkun 0.1.0 and Perkun2 0.0.2 released!

I have released Perkun 0.1.0. There are some changes in Perkun's behaviour:
  • the default model and default payoff function are filled with zeros
  • the test_model code is disabled
  • the Prolog generators are better (they skip the zeros in the model set instructions, thus producing smaller Perkun code) 
I have also noticed that there were some clashes in the function naming, making it impossible to use both Perkun and Perkun2 as libraries. I fixed the problem by releasing a new Perkun2 - version 0.0.2! I have checked that using Perkun and Perkun2 as libraries (at the same time) works now.

Friday, March 4, 2016

Perkun2 0.0.1 - with embedded Perl!

I have published Perkun2 version 0.0.1 (https://sourceforge.net/projects/perkun2/). It contains Perl! After the "interfaces" section you may type:

values{}
agents{}
interfaces{}
<<PERL
print "this is Perl code\n";
PERL

There is also a predefined object $Perkun2::Optimizer::optimizer that supports multiple callbacks called by Perkun2. For example this is how you can register a callback "print_prompt":

<<PERL
$$Perkun2::Optimizer::optimizer{print_prompt} = sub { print "Hello: "; };
PERL


Other callbacks are:
  • execute
  • on_action
  • print_current_belief
  • print_initial_loop_message
See the examples in the folder examples/examples_perl.

The Prolog generators created by the version 0.0.1 are modified so that they do not insert the model set instructions that would contain probability 0.0. This makes the generated specifications significantly smaller.



Sunday, February 28, 2016

Perkun2 0.0.0

I have written and published Perkun2 (https://sourceforge.net/projects/perkun2/). It is similar to Perkun but supports multiple intelligent agents performing the actions interchangeably. Take a look at the website (http://www.pawelbiernacki.net/software/perkun2/index_pl.jsp), download Perkun2, see the examples.