Tuesday, October 17, 2017

Set commands with conditions

An update to my idea. I have a syntax for the "set" command from BOBR:

set({where_am_I=>(X:place)},{optimal_action=>(A:action)},{where_am_I=>(Y:place)},1.0):-there_is_a_connection(X,Y),has_target(A,Y);

The optional condition follows ":-", as in Prolog. Each logical placeholder is declared together with its class (in the example above X has the class "place"). The condition in the example above is:

there_is_a_connection(X,Y),has_target(A,Y)

I imagine I will write an engine in Java that resolves the condition. This engine will be included in the Java code created by BOBR.
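To make the idea concrete, here is a minimal sketch of such an engine (my own illustration - the class and method names are hypothetical, not BOBR's actual API). It only verifies a conjunction of goals against a set of ground facts under concrete bindings; a real engine would also have to search for the bindings:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical condition engine: stores ground facts as strings and
// checks whether a conjunction of goal templates holds under given bindings.
public class ConditionEngine {
    private final Set<String> facts = new HashSet<>();

    public void assertFact(String fact) {
        facts.add(fact);
    }

    // Instantiate each goal by substituting the placeholder bindings,
    // then require every instantiated goal to be a known fact.
    public boolean holds(List<String> goals, Map<String, String> bindings) {
        for (String goal : goals) {
            String ground = goal;
            for (Map.Entry<String, String> e : bindings.entrySet())
                ground = ground.replaceAll("\\b" + e.getKey() + "\\b", e.getValue());
            if (!facts.contains(ground))
                return false; // one failed goal makes the whole conjunction fail
        }
        return true;
    }

    public static void main(String[] args) {
        ConditionEngine engine = new ConditionEngine();
        engine.assertFact("there_is_a_connection(place_Wyzima,place_Shadizar)");
        engine.assertFact("has_target(goto_place_Shadizar,place_Shadizar)");
        System.out.println(engine.holds(
            List.of("there_is_a_connection(X,Y)", "has_target(A,Y)"),
            Map.of("X", "place_Wyzima", "Y", "place_Shadizar", "A", "goto_place_Shadizar")));
    }
}
```

Even this verification-only form shows where the ":-" condition would plug into the generated code.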

Saturday, October 14, 2017

Assertions about objects

I just got an idea. If you still recall the BOBR project - I thought of adding assertions (Prolog-like facts and rules) to the specification.

class boolean;
object false:boolean, true:boolean;
assert equal(false, not(true));
assert equal(true, not(false));


In fact you could also tell BOBR about the possible connections between the cities in PerkunWars:

class place;
object place_Wyzima:place, place_Shadizar:place, place_Novigrad:place;

assert there_is_a_connection(place_Wyzima, place_Shadizar);
assert there_is_a_connection(place_Shadizar, place_Novigrad);

assert there_is_a_path(X,Y):-there_is_a_connection(X,Y);
assert there_is_a_path(X,Y):-there_is_a_connection(X,Z), there_is_a_path(Z,Y);


This would allow you to deduce that there is a path from place_Wyzima to place_Novigrad.
It is just an idea. Of course my dream would be to use this knowledge somehow to produce the model. I will think about it.
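The deduction the two rules encode is simply the transitive closure of there_is_a_connection. A sketch of it in Java (my own illustration - the names are hypothetical, this is not bobr code):

```java
import java.util.*;

// there_is_a_path as the transitive closure of there_is_a_connection,
// computed with a depth-first search over the connection graph.
public class PathDeduction {
    private final Map<String, List<String>> connections = new HashMap<>();

    public void addConnection(String from, String to) {
        connections.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Mirrors the recursive rule:
    // there_is_a_path(X,Y) :- there_is_a_connection(X,Z), there_is_a_path(Z,Y).
    public boolean thereIsAPath(String from, String to) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        stack.push(from);
        while (!stack.isEmpty()) {
            for (String next : connections.getOrDefault(stack.pop(), List.of())) {
                if (next.equals(to)) return true;
                if (visited.add(next)) stack.push(next);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        PathDeduction kb = new PathDeduction();
        kb.addConnection("place_Wyzima", "place_Shadizar");
        kb.addConnection("place_Shadizar", "place_Novigrad");
        System.out.println(kb.thereIsAPath("place_Wyzima", "place_Novigrad")); // prints true
    }
}
```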

Wednesday, October 11, 2017

Perkun presentation

I have recorded a film - the Perkun presentation - and uploaded it to YT.
It is the same presentation I gave for AIHelsinki.

Tuesday, October 10, 2017

YT film about Perkun

This is a short film demonstrating Perkun specifications and the sessions for two examples from PerkunWars.

Monday, October 9, 2017

AIHelsinki has published my Perkun slides

You can download my Perkun slides from
http://www.aihelsinki.com/past-event/aihelsinki-14th-session-september-27/

I am very grateful to AIHelsinki, thank you!

The mathematics behind the algorithm is not shown on the slides, but they give a good overview of what this is all about. The examples are taken from the PerkunWars game (https://sourceforge.net/projects/perkunwars/). Maybe I will write a little about the game again:

You can be in one of three places: Wyzima, Novigrad, Shadizar. There are some NPCs (non-player characters). There is also a vampire. You can attack him, but it is only a good idea when there are some NPCs around, because they will help you. One NPC is Dorban - a witcher who is constantly looking for the vampire. The two other NPCs (Pregor and Thragos) avoid the vampire.

All the NPCs are controlled by the Perkun interpreter. You can "chat" with them to see what their opinion is. If you take a look at the Pregor and Dorban specifications you will see that their payoff functions are different - Pregor "obtains" 100 points for not seeing the vampire and 0 points when he sees him. Dorban - on the contrary - "likes" to see the vampire (you can tell from his payoff function).

If you take a look at the source code of PerkunWars you will see the class "npc" instantiated. This class inherits from the Perkun optimizer (perkun::optimizer_with_all_data). There is a separate process running for each NPC; these processes "talk" with the main process through pipes.



Wednesday, September 27, 2017

Today's presentation or what I did not say

I gave the presentation today. I did not mention the auxiliary commands like "cout << model << eol;" or "cout << prolog generator << eol;" - I was mainly talking about Perkun. As an example I used the game PerkunWars (which runs a Perkun interpreter for each NPC in the game). I discussed the difference between Pregor and Dorban (a human and a witcher): the different behavior is achieved by different payoffs in Pregor's and Dorban's specifications.

Models. I did not say much about how difficult it is to create a model. If anybody takes a look at the "perkun" folder of the PerkunWars package it will become clear that we need to know Perl/Prolog to do that. The specifications for Dorban and Pregor are 286 kB each, a size determined by the size of the models.

Friday, September 1, 2017

Perkun lecture at AIHelsinki on 27th September

I am going to give a short lecture (15 minutes) about Perkun at an event organized by www.aihelsinki.com on 27th September 2017. I will speak generally about Perkun and demonstrate examples taken from PerkunWars.

Sunday, July 16, 2017

bobr-0.0.0 released!

I have released the tool I was writing about. It is available from my server:

http://www.pawelbiernacki.net/bobr-0.0.0.tar.gz

It is a Java code generator (just like zubr), but I decided not to put it into the Perkun package.


Thursday, July 13, 2017

Spheres

I have an idea how to limit the number of hidden variable value combinations we search through. Imagine we have the following n hidden variables:

hidden variable v1:{false,true,none};
hidden variable v2:{false,true,none};
...
hidden variable vn:{false,true,none};

Then for any visible state we specify an initial point:

v1=>none,
v2=>none,
...
vn=>none

Instead of the whole space we only search a "sphere" whose grade is defined as the number of hidden variables that differ from the initial point (the center). For grade 0 we only have the center. For grade 1 we have:

v1=>none,
v2=>none,
...
v(i-1)=>none,
vi=>false or true,
v(i+1)=>none,
...
vn=>none

Thus we only have one variable - vi - that differs from the center. For grade 2 we will have two such variables. For grade n all the hidden variables will differ from the center:

v1=>false or true,
v2=>false or true,
...
vn=>false or true

My idea is to search through a sphere of the k-th grade around a given center point. The center points may differ depending on the visible state.

I will implement the bobr code generator so that it only searches through such spheres.
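The size of such a sphere is easy to compute: at grade g exactly g of the n variables leave the center, each taking one of 2 values (false or true instead of none), so the sphere of grade k contains the sum over g = 0..k of C(n,g) * 2^g assignments. A small sketch of this count (my own illustration, not bobr code):

```java
// Counts the points of a "sphere" of grade k around the center assignment
// (all variables = none), when each variable that leaves the center can
// take 2 other values (false or true). My own illustration, not bobr code.
public class SphereSearch {

    // sphereSize(n, k) = sum over grade g = 0..k of C(n, g) * 2^g
    public static long sphereSize(int n, int k) {
        long total = 0;
        for (int grade = 0; grade <= k; grade++)
            total += binomial(n, grade) * (1L << grade);
        return total;
    }

    // binomial coefficient C(n, r), computed multiplicatively
    static long binomial(int n, int r) {
        long b = 1;
        for (int i = 0; i < r; i++)
            b = b * (n - i) / (i + 1);
        return b;
    }

    public static void main(String[] args) {
        // 20 hidden variables: the full space has 3^20 (about 3.5 billion)
        // assignments, while the grade-2 sphere has only 801 of them
        System.out.println(sphereSize(20, 2)); // prints 801
    }
}
```

For k = n the sphere is the whole space: sphereSize(3, 3) gives 27 = 3^3.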

My inspiration was programming itself. When we write a program we do not search through the whole space of programs, since it is huge. Instead we move through the space of programs step by step.

Monday, July 10, 2017

Bobr - a templates parser

I have written a small tool (bobr) that is capable of parsing my variable templates. It is not published yet.

In a quite realistic example I observed that the generated input variables as well as the output variable (with many values) produce a relatively small space to search, while the hidden variables produce a huge one. I will not be able to search through it all, so I thought of changing the algorithm so that it only searches through some small subspace.

I will also face the problem of how to represent the model in terms of the hidden variable templates.

This is the template code I was parsing:


class boolean, person, place, profession, weapon;

object none;
object false:boolean, true:boolean;
object Dorban:person, Pregor:person, Thragos:person;
object warrior:profession, wizard:profession, thief:profession;
object Wyzima:place, Shadizar:place, Novigrad:place;
object bare_hands:weapon, axe:weapon, magic:weapon;

input variable reward:{false, true, none};
input variable response:{false, true, none};
input variable can_I_see_(X:person):boolean;
input variable do_I_have_(X:weapon):boolean;
input variable am_I_a_(X:profession):boolean;
input variable where_am_I:place;

output variable action:{
    goto_(X:place),
    do_nothing,
    attack_(X:person)_with_(Y:weapon),
    steal_(X:person)_(Y:weapon),
    tell_(X:person)_that_(Y:person)_has_(Z:weapon),
    tell_(X:person)_that_(Y:person)_is_a_(Z:profession),
    tell_(X:person)_that_(Y:person)_is_in_(Z:place),
    ask_(X:person)_whether_(Y:person)_has_(Z:weapon),
    ask_(X:person)_whether_(Y:person)_is_a_(Z:profession),
    ask_(X:person)_whether_(Y:person)_is_in_(Z:place)
    };

hidden variable (X:person)_has_(Y:weapon):boolean;
hidden variable (X:person)_is_a_(Y:profession):boolean;
hidden variable (X:person)_is_in_(Y:place):boolean;

hidden variable (X:person)_thinks_that_(Y:person)_has_(Z:weapon):boolean;
hidden variable (X:person)_thinks_that_(Y:person)_is_a_(Z:profession):boolean;
hidden variable (X:person)_thinks_that_(Y:person)_is_in_(Z:place):boolean;



Sunday, July 9, 2017

My dream

Some day someone will write a program based on the Perkun algorithm that is capable of adding hidden variables dynamically, with the ad hoc additions triggered by observations. Adding hidden variables could possibly be trained, or maybe even planned by the planning algorithm itself.

My dream would be to have a way of extending the model by the newly created hidden variables. I imagine that a model M(i+1) would be somehow derived from a model M(i) by extending it with a hidden variable (or several ones). This way starting from a model M(0) we would be able to achieve arbitrarily complex models. And the model M(0) would contain no hidden variables at all - just the pure transition probabilities for the input variables and the output variables.

I thought of ignoring all the separate states within a visible state so that we do not produce a Cartesian product of the hidden variables' values. Let us pretend the hidden variables are independent, unless their dependency is very important. I think I will try this.

I may have a chance to present Perkun at http://www.aihelsinki.com/ in autumn. The people from AIHelsinki were kind enough to allow me that, although I am not a scientist. This project is just a hobby of mine, so I really appreciate their kindness.

Thursday, June 29, 2017

Hidden variables templates

I have an idea. I do not have any program yet that would parse the code below, and beyond the syntax I do not have much. But the idea is interesting: it is about building the hidden variables automatically.


class place, character, boolean;

object false:boolean, true:boolean;

object Wyzima:place, Novigrad:place, Shadizar:place;

object Dorban:character, Pregor:character, Thragos:character;

hidden variable (A:character)_can_see_(B:character):boolean;
hidden variable (A:character)_has_told_(B:character)_that_(C:variable)_is_(D:boolean):boolean;
hidden variable (A:character)_is_in_(B:place):boolean;
hidden variable (A:character)_thinks_that_(B:variable)_is_(C:boolean):boolean;


The last four lines contain hidden variable templates. I imagine the parser would use them to generate hidden variables automatically. For example the first template would create the following variables:

hidden variable Dorban_can_see_Dorban:boolean;
hidden variable Dorban_can_see_Pregor:boolean;
hidden variable Dorban_can_see_Thragos:boolean;
hidden variable Pregor_can_see_Dorban:boolean;
hidden variable Pregor_can_see_Pregor:boolean;
hidden variable Pregor_can_see_Thragos:boolean;
hidden variable Thragos_can_see_Dorban:boolean;
hidden variable Thragos_can_see_Pregor:boolean;
hidden variable Thragos_can_see_Thragos:boolean;
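The expansion itself is just a Cartesian product over the objects of the classes mentioned in the template. A sketch of what I imagine the parser would do (hypothetical code - the tool does not exist yet):

```java
import java.util.ArrayList;
import java.util.List;

// Expands a binary hidden variable template such as
// (A:character)_can_see_(B:character) over the declared objects.
public class TemplateExpansion {

    // every pair (a, b) of objects yields one generated variable name
    public static List<String> expand(String infix, List<String> objects) {
        List<String> result = new ArrayList<>();
        for (String a : objects)
            for (String b : objects)
                result.add(a + infix + b);
        return result;
    }

    public static void main(String[] args) {
        List<String> characters = List.of("Dorban", "Pregor", "Thragos");
        for (String name : expand("_can_see_", characters))
            System.out.println("hidden variable " + name + ":boolean;"); // 9 lines
    }
}
```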

The second template uses a boolean hidden variable to build another variable upon it. This way we can achieve indefinitely many variables, but of course I would expect the recursion to stop somewhere.

My problem is how to express the model in terms of the hidden variables generated this way. I have no idea how to do it. Maybe something like logical rules would work?


Tuesday, June 6, 2017

Vampisoft

I have started a software company of my own - www.vampisoft.com. I will do subcontracting, so let me know if you have any code to write (C/C++, Java, Perl, Python). I will also try to popularize my Open Source projects, especially Perkun.

Monday, May 22, 2017

To believe vs. to see

The input variables (in perkun/wlodkowic/zubr) represent what is directly perceived. The hidden variables used to construct the agent's state represent what the agent believes. What we believe may be more important than what we see. The hidden variables can model unknown parameters of the world - the hidden processes running beneath the surface of the visible. All in all, the hidden variables are a tool for compactly representing the past, the history.


Thursday, March 9, 2017

Is zubr better than perkun or wlodkowic?

Just to recall: perkun and wlodkowic are interpreters. Zubr, on the contrary, is a code generator. Is it better? It has one advantage: its generated code does not create all the visible states, i.e. all the situations possible in the game, up front. It works in a different way - much closer to chess playing programs - building the game tree dynamically.

Both perkun/wlodkowic and the zubr generated code contain my optimization algorithm - the same algorithm maximizing the expected value of the payoff function.

Zubr generates Java code, which I consider an advantage.

All the three tools come in the perkun package: https://sourceforge.net/projects/perkun/

If you have a C++ program that needs my optimization algorithm then it is better to link it against libperkun or libwlodkowic. I have written two small games demonstrating how to do it: https://sourceforge.net/projects/perkunwars/ (for perkun) and https://sourceforge.net/projects/thragos/ (for wlodkowic). They both create separate processes for the perkun/wlodkowic interpreters and communicate with the parent process through pipes. Feel free to take a look at their source code.

There are, however, some things you might consider a disadvantage of zubr. For example the model - you have to hardcode it in the getModelProbability method; there is no syntax for a zubr model. The same holds for the payoff (method getPayoff). Wlodkowic offers an extra section for the apriori belief - again, in zubr this requires an extra method.

Zubr also has no syntax to inform the optimizer about impossible states or illegal actions. This should be resolved with an extra feature - the iterators. I hope to explain them later. You may also take a look at the zubr man page and the code it generates.

In the recent posts I walked through the zubr examples stored in the "examples" folder of the perkun package. I tried to demonstrate that a state based on hidden variables is beneficial for the quality of prediction/optimization. I think it is time for a major example using zubr - something like PerkunWars for perkun or Thragos for wlodkowic.


Wednesday, March 8, 2017

example22_hidden_variables_based_predictor.zubr

Do you want proof that hidden variables allow better optimization? Here it is.

Imagine an optimizer that takes two input variables instead of one. The Perkun section of the zubr specification looks as follows:

values
{
    value FALSE, TRUE;
}

variables
{
    input variable alpha:{FALSE, TRUE}, reward:{FALSE, TRUE};
    hidden variable gamma:{FALSE, TRUE};
    output variable action:{FALSE, TRUE};
}


There are two input variables now: alpha and reward. What is the semantics? Alpha follows the sequence FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE,... and so on, independently of the agent's action. But the agent does not know where in the sequence we begin. The action is a bet - an attempt to predict the next alpha. Depending on the action the agent receives a reward - immediate information about whether the prediction was correct. Reward TRUE means the prediction was right, FALSE means no reward.

You can execute the program directly from my server:

http://www.pawelbiernacki.net/hiddenVariableBasedPredictor.jnlp

For example let us start with FALSE, FALSE. The program sets its initial belief to gamma=>FALSE at 50% and gamma=>TRUE at 50%. The chosen action is FALSE (he bets the next alpha will be FALSE). Let us assume he was wrong and the next alpha will be TRUE. So there will be no reward; enter TRUE, FALSE.

Now he knows that gamma is FALSE (the belief reflects this). The action will be TRUE - he thinks the next alpha will be TRUE. Let's confirm his expectations: enter TRUE, TRUE. Now gamma=>TRUE and the action => FALSE.

In short - thanks to the hidden variables based state his prediction will always be correct after the first two signals. He will always get the reward TRUE. Only in the beginning is there an uncertainty (reflected by the belief).
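One way to picture what happens to the belief is a discrete Bayes filter. The sketch below is my own illustration, not the code zubr generates: it tracks the phase of the cycle FALSE, FALSE, TRUE, TRUE directly, which is equivalent to the pair (alpha, gamma) of the specification. Starting from the uniform belief, two observations are enough to pin the phase down:

```java
import java.util.Arrays;

// A discrete Bayes filter over the phase of the cycle FALSE, FALSE, TRUE, TRUE.
// My own illustration of the belief update; not the code zubr generates.
public class BeliefUpdate {
    static final boolean[] CYCLE = {false, false, true, true};

    // One step: shift the belief one phase forward (the deterministic
    // transition), zero the phases inconsistent with the observed alpha,
    // and renormalize.
    public static double[] update(double[] belief, boolean observedAlpha) {
        double[] next = new double[4];
        for (int p = 0; p < 4; p++)
            next[(p + 1) % 4] += belief[p];
        double sum = 0;
        for (int p = 0; p < 4; p++) {
            if (CYCLE[p] != observedAlpha) next[p] = 0;
            sum += next[p];
        }
        for (int p = 0; p < 4; p++)
            next[p] /= sum;
        return next;
    }

    public static void main(String[] args) {
        double[] belief = {0.25, 0.25, 0.25, 0.25};   // uniform: phase unknown
        belief = update(belief, false);                // first FALSE: two phases remain
        System.out.println(Arrays.toString(belief));
        belief = update(belief, false);                // second FALSE: phase certain
        System.out.println(Arrays.toString(belief));
    }
}
```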

When you compare this optimizer (in fact, this predictor) with functions based merely on the input variables you will see that no function can beat him. I found two functions that are pretty good:

f1(FALSE, FALSE) = FALSE
f1(FALSE, TRUE) = FALSE
f1(TRUE, FALSE) = TRUE
f1(TRUE, TRUE) = FALSE

f2(FALSE, FALSE) = FALSE
f2(FALSE, TRUE) = TRUE
f2(TRUE, FALSE) = TRUE
f2(TRUE, TRUE) = TRUE

I tested all 16 possible functions - only f1 and f2 come close, and even they keep making mistakes (after the first two signals). On the contrary, our predictor generated by zubr can make only one mistake; after the first two signals he makes no more mistakes.
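This claim can be checked mechanically. The simulation below is my own (the bit encoding of the 16 functions and the starting phase are my assumptions, so the exact scores differ from the table produced by my test program): it scores every function f(alpha, reward) against the cycle and compares it with a predictor that keeps a state. Any pair of consecutive alphas occurs at exactly one place in the cycle, so after two observations the stateful predictor never errs again:

```java
// Compares all 16 functions f(alpha, reward) -> bet with a stateful
// phase-tracking predictor on the cycle FALSE, FALSE, TRUE, TRUE.
// My own simulation; the encoding and the start phase are assumptions.
public class PredictorComparison {
    static final boolean[] CYCLE = {false, false, true, true};

    // Score of function number id over `steps` bets, starting at phase 0.
    // Bit (2*alpha + reward) of id encodes the bet f(alpha, reward).
    public static double scoreFunction(int id, int steps) {
        int correct = 0;
        boolean reward = false; // no reward before the first bet
        for (int t = 0; t < steps; t++) {
            boolean alpha = CYCLE[t % 4];
            int idx = (alpha ? 2 : 0) + (reward ? 1 : 0);
            boolean bet = ((id >> idx) & 1) == 1;
            reward = (bet == CYCLE[(t + 1) % 4]);
            if (reward) correct++;
        }
        return (double) correct / steps;
    }

    // A predictor with a state: the pair (previous alpha, current alpha)
    // determines the phase uniquely, so from the second observation on
    // the next alpha is known with certainty.
    public static double scoreStateful(int steps) {
        int correct = 0;
        Boolean prev = null;
        for (int t = 0; t < steps; t++) {
            boolean alpha = CYCLE[t % 4];
            boolean bet;
            if (prev == null) {
                bet = false; // arbitrary first bet under total uncertainty
            } else {
                int phase = !prev && !alpha ? 1 : !prev && alpha ? 2
                          : prev && alpha ? 3 : 0;
                bet = CYCLE[(phase + 1) % 4];
            }
            if (bet == CYCLE[(t + 1) % 4]) correct++;
            prev = alpha;
        }
        return (double) correct / steps;
    }

    public static void main(String[] args) {
        double best = 0;
        for (int id = 0; id < 16; id++)
            best = Math.max(best, scoreFunction(id, 1000));
        System.out.println("best function:      " + best);
        System.out.println("stateful predictor: " + scoreStateful(1000));
    }
}
```

In this run the best functions settle at three correct bets per cycle of four, while the stateful predictor only risks its very first bet.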

If you take a look at the file example22_hidden_variables_based_predictor.zubr (unpack perkun and see the "examples" folder) you will see that we use a custom dialog (extending JDialog) in the method getInput. This was necessary because we have two input variables here. You may process the example file with zubr:

zubr example22_hidden_variables_based_predictor.zubr > MyOptimizer.java

The resulting Java code can be compiled (remember to place it in a package "optimizer").

What is the conclusion? For the case discussed here the optimizer/predictor with a state is much better than any function based on the input variables. The state should be based on hidden variables (it is not the only possibility, but the most natural one). This was the problem with AI - we tried to achieve it with IF THEN, and IF THEN can only see the current situation. The hidden variables are a natural way to compress our knowledge about the past - the history.



Tuesday, March 7, 2017

example21_set_apriori_belief.zubr

In the examples I assume we have a good world model (for example we know the sequence FALSE, FALSE, TRUE, TRUE on MOVE) but we do not know exactly where we begin. If we initially get FALSE then MOVE could lead to another FALSE or to TRUE. This implies that the initial belief (probability distribution) must reflect this uncertainty. But even though we do not know the hidden variables initially, we may know more than nothing about them. For instance if we are a doctor talking with a patient we may introduce a hidden variable "patient_has_cancer". But we should not assume 50% for TRUE and 50% for FALSE, as zubr does by default. Instead we should apply the natural probability distribution of cancer in the population, i.e. use a so-called apriori belief.

This requires us to tell zubr we will define the method setAPrioriBelief:

%option setaprioribelief own

Then in the definition section we provide the implementation:

protected void setAPrioriBelief(Belief target) {
    for (StateIterator i = createNewStateIterator(target.getVisibleState()); !i.getFinished(); i.increment()) {
        State si = i.createState();
        target.addState(si);
       
        if (si.getVariableValue("gamma") == Value.FALSE)
            target.setProbability(si, 0.3f);
        else
            target.setProbability(si, 0.7f);       
    }
}


As you can see we iterate over all possible states using a StateIterator, create the states and add them to the target (a Belief). We will talk about the iterators later, so take them for granted for now. Once we have populated the belief with states we may query them for hidden variable values and set the probabilities. Note that we chose to set 30% for gamma=>FALSE and 70% for gamma=>TRUE.

Now process the example with zubr and compile the java outcome:

zubr example21_set_apriori_belief.zubr > MyOptimizer.java

You can also execute the program directly from my server:

http://www.pawelbiernacki.net/aprioriOptimizer.jnlp

Have you noticed the small change after the first signal? The belief is not 50%/50% any more, but 30%/70%! This can become important in more real-world examples.

Download zubr from https://sourceforge.net/projects/perkun/.

Monday, March 6, 2017

example20_hidden_variables.zubr

This is our first example with the hidden variables. The Perkun section of the zubr specification looks as follows:

values
{
    value FALSE, TRUE;
    value MOVE, DONT_MOVE;
}

variables
{
    input variable alpha:{FALSE, TRUE}; // alpha may have value FALSE or TRUE   
    hidden variable gamma:{FALSE, TRUE};
    output variable beta:{MOVE, DONT_MOVE};
}


What is it good for? Imagine an automaton that can do either MOVE or DONT_MOVE. When it constantly does MOVE then the input will be:

FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE,...

But it is not known where in the sequence we begin. So even though the automaton knows that two FALSEs come in succession, when it gets a FALSE it does not know whether it was the first FALSE in the sequence or the second one.

The payoff function makes the program "like" TRUE as input and "dislike" FALSE.

You may process the example with zubr to obtain the Java optimizer code:

zubr example20_hidden_variables.zubr > MyOptimizer.java

Here is the link to the program (you can run it directly from my server):

http://www.pawelbiernacki.net/hiddenVariablesOptimizer.jnlp

There are three scenarios possible:

1.  TRUE -> DONT_MOVE
    TRUE -> DONT_MOVE
    TRUE -> DONT_MOVE
    ...

2.  FALSE -> MOVE
    FALSE -> MOVE
    TRUE -> MOVE
    TRUE -> DONT_MOVE
    TRUE -> DONT_MOVE
    ...

3.  FALSE -> MOVE
    TRUE -> MOVE
    TRUE -> DONT_MOVE
    TRUE -> DONT_MOVE
    ...

You can see that the program created by zubr behaves a little as if it were nondeterministic: sometimes it responds to TRUE with MOVE, sometimes with DONT_MOVE.

In fact it is completely deterministic, but it has a state, which is a belief (a probability distribution over the set of two possible facts - gamma => FALSE and gamma => TRUE). This belief changes depending on the performed actions and the obtained results. Because it has this additional knowledge (the belief) the optimizer can permit itself, for example, to choose MOVE when it knows that after MOVE a TRUE will still follow. On the contrary, in the first scenario after TRUE it does not know whether another TRUE will follow, therefore it chooses DONT_MOVE.

This is the important point I want to make: a state is very important for successful optimization, and hidden variables are a natural way to construct such states. Second, the optimizers can be deterministic and still better than functions based on the input variables. In the case discussed here it is easy to construct a function that performs just as well as the zubr generated optimizer:

f(FALSE) = MOVE
f(TRUE) = DONT_MOVE

So in this case a function is just as good as the zubr optimizer, but in more complex cases the functions just cannot beat the optimizers. We will discuss such an example later. The hidden variable based optimizers differ from the functions insofar as they have a deeper "understanding" of the outer world.

Download zubr from https://sourceforge.net/projects/perkun/.

Sunday, March 5, 2017

example19_get_payoff.zubr

What is the purpose of an optimizer? It attempts to maximize the expected value of the so-called payoff function. In this example we are finally implementing a method specifying the payoff function. First we have to tell zubr about it:

%option getpayoff own // method getPayoff

Then in the definition section we provide the implementation:


protected float getPayoff(VisibleState vs) {

    switch (vs.getVariableValue("alpha"))
    {
        case FALSE:
            return 0.0f;
       
        case TRUE:
            return 100.0f; // TRUE is "better" than FALSE
    }
    return 0.0f;
}


This way we make our optimizer prefer alpha=TRUE and dislike alpha=FALSE. The example can be processed as usual, with zubr:

zubr example19_get_payoff.zubr > MyOptimizer.java

There are two possible decisions: MOVE and DONT_MOVE. The Perkun section in the zubr specification looks as follows:

values
{
    value FALSE, TRUE;
    value MOVE, DONT_MOVE;
}

variables
{
    input variable alpha:{FALSE, TRUE}; // alpha may have value FALSE or TRUE   
    output variable beta:{MOVE, DONT_MOVE};
}


You can execute the final program directly from my server:
http://www.pawelbiernacki.net/getPayoffOptimizer.jnlp

As is easy to anticipate, the optimizer will do MOVE after FALSE, while it will do DONT_MOVE after TRUE. We still don't have hidden variables here, but the example is sufficient to introduce the getPayoff method.

An interesting observation: if there are no hidden variables then the optimizer can be replaced by a simple function - but only then. The optimizers with hidden variables can be much better than any function mapping input to output, as was shown previously.

The next example will be based on hidden variables, which is what makes zubr interesting.

Saturday, March 4, 2017

example18_simple_automaton.zubr

In this example we also define the method "execute", which is performed once the optimizer finds the optimal decision:

%option execute own // method execute

In the definition section we provide the implementation:

protected void execute(Action a) {
    JOptionPane.showMessageDialog(frame, a.getVariableValue("beta").getName(), "Action", JOptionPane.INFORMATION_MESSAGE);
}


The model in this automaton requires you to enter FALSE, TRUE, FALSE, TRUE,... and so on if the optimizer chooses MOVE, and a constant signal if it chooses DONT_MOVE.

You can run the program from my server:

http://www.pawelbiernacki.net/simpleAutomaton.jnlp

The optimizer always chooses MOVE. What's wrong with it? We should define one more method, getPayoff, that allows discriminating between the input signals. Then the optimizer will choose the output signals that maximize the expected value of the payoff.

I would like to add some comments concerning the difference between perkun and zubr. Perkun creates all possible visible states in the very beginning, and then for each visible state it generates all possible states. A visible state contains a vector of input variable values, while a state contains a vector of hidden variable values. With an increasing number of hidden variables its demand for memory grows exponentially. On the contrary, zubr generates code that creates the game tree dynamically, like a chess playing program. The code generated by zubr therefore demands much less memory than an equivalent program in Perkun, but is slower.

Zubr and perkun (and wlodkowic) support hidden variables. This is a killer feature - the hidden variables allow us to compress our knowledge about the history and are a natural way to do it. The examples discussed so far contained no hidden variables, but that will change.

The code generated by zubr contains my optimization algorithm (just like perkun and wlodkowic). This algorithm has not been documented yet; I tried to write some documents about it, but the result was not satisfactory to me. If you want to understand the algorithm please take a look at the source code (classes perkun::optimizer or wlodkowic::optimizer).

When you link libperkun or libwlodkowic into your own program you have to obey the rules of the GPL 3.0, but when you create Java code with zubr you may use it just as you would use the output of yacc/bison (http://www.gnu.org/software/bison/manual/html_node/Conditions.html). You are free to use the code generated by zubr in proprietary programs.

Download zubr (and perkun + wlodkowic) from: https://sourceforge.net/projects/perkun/.

Friday, March 3, 2017

Hidden variable based predictor vs. functions

I have created a minimalistic example demonstrating that hidden variables are beneficial. I have written a small program that compares a predictor created by zubr with function predictors. You may take a look at my code (it is included in the JAR, license GPL 3.0):

http://www.pawelbiernacki.net/TestHiddenVariableBasedPredictors.jar

You may also run the program directly from my server:

http://www.pawelbiernacki.net/TestHiddenVariableBasedPredictors.jnlp

It calculates the scores (the number of correct guesses divided by the number of all guesses) for various lengths of the test sequence. The predictor with id = -1 is a zubr generated optimizer; the other predictors are simply functions. All 16 possible functions are tested; they have the ids 0..15.

As you can see the hidden variable based predictor outperforms the functions (its score is 0.92 for the test sequence of length 19, while the best function achieves only 0.70).

The difference between the function predictors and the hidden variable based predictor is that the functions are stateless, while the hidden variable based predictor does have a state. Its state is a belief - a probability distribution over the set of states (vectors of hidden variable values). My point is that the IF THEN construct is too weak to achieve AI, because IF THEN takes into account only the current situation, ignoring the history. The hidden variables are all about history - they are a natural way to compress our knowledge about it. That is why the optimizers/predictors based on hidden variables are so much better than the stateless predictors.

id 1 2 3 4 5 6 7 8 9 10
-1 0 0.25 0.5 0.63 0.7 0.75 0.79 0.81 0.83 0.85
0 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45
1 0 0.25 0.42 0.44 0.45 0.46 0.46 0.47 0.47 0.47
2 0 0.25 0.33 0.31 0.3 0.29 0.29 0.28 0.28 0.28
3 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45
4 0 0.25 0.42 0.5 0.55 0.58 0.61 0.63 0.64 0.65
5 0 0.25 0.33 0.38 0.3 0.33 0.36 0.38 0.33 0.35
6 0 0.25 0.5 0.5 0.4 0.42 0.5 0.5 0.44 0.45
7 0 0.25 0.33 0.31 0.3 0.29 0.29 0.28 0.28 0.28
8 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45
9 0 0.25 0.5 0.5 0.4 0.42 0.5 0.5 0.44 0.45
10 0 0.25 0.33 0.38 0.3 0.33 0.36 0.38 0.33 0.35
11 0 0.25 0.42 0.44 0.45 0.46 0.46 0.47 0.47 0.47
12 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45
13 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45
14 0 0.25 0.42 0.5 0.55 0.58 0.61 0.63 0.64 0.65
15 0 0.25 0.33 0.38 0.4 0.42 0.43 0.44 0.44 0.45

Take a look at the numbers (the first row is the hidden variable based predictor).

You may wonder why I used the term "predictor" rather than "optimizer". Well, this has something to do with the nature of my example. The action performed by this optimizer is a bet - it tries to predict the next value of one of the input variables. I will discuss the example used here later.

Zubr can be downloaded from https://sourceforge.net/projects/perkun/.

Thursday, March 2, 2017

example17_get_model_probability.zubr

In this example we tell zubr we will provide our own implementation of the method getModelProbability:

%option getmodelprobability own // method getModelProbability

Then in the definition section we provide the implementation:


protected float getModelProbability(VisibleState vs1, State s1, Action a, VisibleState vs2, State s2) {

    // here we can query the visible states for input variables
    // the states for hidden variables (there are none at present)
    // and the action for the output variables (also none at present)
   
    // vs1 and s1 represent the current visible state and state
    // a represents the action
    // vs2 and s2 represent the future visible state and state
   
    System.out.println("current alpha => " + vs1.getVariableValue("alpha").getName());
    System.out.println("future alpha => " + vs2.getVariableValue("alpha").getName());

    return 0.5f;
}


VisibleState, State and Action are classes created by zubr. We can use their method getVariableValue to query them for the input variable values, hidden variable values and output variable values, respectively. The method getVariableValue returns a Value (an enum created by zubr).

As usual we can process our example with zubr and compile the resulting Java class:

zubr example17_get_model_probability.zubr > MyOptimizer.java

You can run the program directly from my server:
http://www.pawelbiernacki.net/getModelProbabilityOptimizer.jnlp

However, it only consumes the input; no output is shown. In order to see what the optimizer's decisions are, we will have to redefine the method "execute".

To cancel the execution, press the "Cancel" button in the getInput dialog box.


Wednesday, March 1, 2017

example16_on_error_in_populate_belief_for_consequence.zubr

In this example we will add an error handling method. First we have to tell zubr we will do it. In the declaration section add:

%option onerrorinpopulatebeliefforconsequence own 
// with this option we tell zubr we will provide our own implementation of the method 
// onErrorInPopulateBeliefForConsequence

Then in the definition section we provide its implementation:


// this is the implementation of the method we promised to provide:

protected void onErrorInPopulateBeliefForConsequence(Belief formerBelief, Action formerAction, VisibleState vs) {
    JOptionPane.showMessageDialog(frame, "error in populate belief for consequence", "Error", JOptionPane.ERROR_MESSAGE);
    System.exit(0);
}


As you can see, the method takes the following arguments:
  • Belief
  • Action
  • VisibleState
They are all classes defined by zubr. The error occurs when, given the belief, we attempt to perform an action and obtain an unexpected visible state as a result. Our implementation ignores the arguments and simply displays a dialog message, then exits.

If you download zubr (https://sourceforge.net/projects/perkun/) and process the file example16_on_error_in_populate_belief_for_consequence.zubr from the "examples" folder with it, you will obtain the optimizer Java code:

zubr example16_on_error_in_populate_belief_for_consequence.zubr > MyOptimizer.java

This Java code can be compiled and executed. You can run it from my server:

http://www.pawelbiernacki.net/optimizerWithErrorHandling.jnlp

This program just allows you to input the alpha (twice) and then exits, displaying our error message. Why? The problem is that we did not provide any model, i.e. whatever happens will be unexpected for our optimizer. In the next example we will provide the model and things will become more interesting.





Tuesday, February 28, 2017

example15_get_input.zubr

An optimizer created by zubr communicates with the outer world via two methods:
  • getInput
  • execute
What is an optimizer? From our point of view the optimizer runs in a loop:

    while (loopIsRunning):
        getInput(...)
        ....
        execute(...)


In this example we tell zubr that the method getInput will be provided by us. The other one, "execute", will be the default one. We will provide an implementation of getInput which enables the user to set one input variable to one of the two values it may have.

// This is an example zubr specification.

package optimizer;

%option class MyOptimizer    // here we modify the target class name, it is optional
%option getinput own    // with this option we tell zubr we will provide our own implementation of the method getInput

// We can import Java packages here.
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import java.awt.Dimension;


In the middle section we provide the Perkun code declaring one input variable, alpha, which may have one of the two values - FALSE or TRUE.

// here we put the Perkun code (values and variables)

values
{
    value FALSE, TRUE;
}

variables
{
    input variable alpha:{FALSE, TRUE};   
}


The method getInput looks as follows:
// this is the implementation of the method we promised to provide:
protected void getInput(Map<Variable, Value> m) {

    Object[] possibilities = {"FALSE", "TRUE"};

    String n = (String)JOptionPane.showInputDialog(frame, "Alpha=?", "Continue?", JOptionPane.PLAIN_MESSAGE,
         null, possibilities, "FALSE");
    if (n == null) {
        loopIsRunning = false; // here we tell the optimizer we want to break the loop
    }
    else if ("FALSE".equals(n)) {
        m.put(mapNameToVariable.get("alpha"), Value.FALSE);
    }
    else if ("TRUE".equals(n)) {
        m.put(mapNameToVariable.get("alpha"), Value.TRUE);
    }
}

It fills the map Map<Variable, Value> named "m" with values for all input variables, depending on the decision made by the user. We are free to use the map Map<String, Variable> named "mapNameToVariable" to access the variable "alpha". Value is an enum created by zubr. Note that the strings are compared with equals rather than ==, since == compares references in Java.

In more complex cases we will possibly have more input variables, then we will provide a different dialog.
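For instance, a getInput asking for several variables might be sketched as follows. Everything here is hypothetical: to keep the sketch self-contained the Swing dialog is abstracted into an 'ask' function, and plain strings stand in for the zubr-generated Variable and Value types.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical sketch of a getInput handling several input variables.
public class MultiInputSketch {
    static Map<String, String> getInput(String[] variables, UnaryOperator<String> ask) {
        Map<String, String> m = new HashMap<>();
        for (String name : variables) {
            String answer = ask.apply(name); // stands in for JOptionPane.showInputDialog
            if (answer == null) {
                return null; // the user cancelled - the caller should break the loop
            }
            m.put(name, answer);
        }
        return m;
    }

    public static void main(String[] args) {
        // a real implementation would show a dialog per variable
        Map<String, String> m = getInput(new String[] {"alpha", "beta"}, name -> "FALSE");
        System.out.println(m);
    }
}
```

A real multi-variable dialog could of course present all the variables at once instead of asking one by one; the essential contract is only that every input variable ends up in the map.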

When we run this example with zubr we will obtain a Java code:

zubr example15_get_input.zubr > MyOptimizer.java

After compiling and running it, a dialog will be shown that lets you set the variable "alpha" to either FALSE or TRUE. After you enter the alpha value twice (and press OK) an error will occur. Moreover, this error will not be reported, and we would like a Swing dialog to be shown when it happens. How can we achieve this? We must provide an important error handling method, which will be discussed in the next example. The default version of this method does nothing, so the error has to be handled in some other way.

Saturday, February 25, 2017

example14_my_optimizer.zubr

Next example - example14_my_optimizer.zubr. In the declaration section we have the following code:


// This is an example zubr specification.

package optimizer;

%option class MyOptimizer    // here we modify the target class name, it is optional

// We can import Java packages here.
import javax.swing.JFrame;
import java.awt.Dimension;


The instruction %option class MyOptimizer is not Java code; it is a zubr option. It tells zubr that we want to change the default target class name. Nothing complicated. We must take it into account and modify the class OptimizerThread accordingly:


protected static class OptimizerThread extends Thread {
    private MyOptimizer optimizer;

    public void run() {
        optimizer = new MyOptimizer();
        optimizer.loop(1);
      }
}


Now it creates MyOptimizer rather than Optimizer. Also note that we added "package optimizer;" in the declaration section - this means that we have to place the resulting Java code in a package named "optimizer" (you have to create the package). As usual we can process this example with zubr:

zubr example14_my_optimizer.zubr > MyOptimizer.java

The resulting code should be compiled with a Java compiler and executed. But it still only instantiates an optimizer and runs the loop, without any means to communicate with the outer world. How do we make the communication happen? Be patient, this will be covered in the next examples.

For the Windows users - I have created an installer: http://www.pawelbiernacki.net/perkun.msi.

Download zubr from https://sourceforge.net/projects/perkun/.






Friday, February 24, 2017

example13_with_optimizer_thread.zubr

Next example from the "examples" folder of the perkun package. The definition section contains the code:

// the OptimizerThread class creates an instance of the Optimizer class (created by zubr)
// and runs the loop.

protected static class OptimizerThread extends Thread {
    private Optimizer optimizer;

    public void run() {
        optimizer = new Optimizer();
        optimizer.loop(1);
      }
}

public static void main(String[] args) {
    frame = new JFrame();
    frame.setTitle("Optimizer");
    frame.setSize(new Dimension(800, 600));
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setVisible(true);
   
    OptimizerThread t = new OptimizerThread();
   
    t.start();
}
In short - we create a thread that creates an optimizer and calls its method "loop". The optimizer runs in the loop as long as the predefined boolean variable loopIsRunning equals true. This variable is defined by zubr - you do not need to define it, but you can access it in the Optimizer.

Try to execute the example with zubr:

zubr example13_with_optimizer_thread.zubr > Optimizer.java

That is all - it will create a Java class with the code shown above included. But the optimizer (although running) does not communicate with the outer world. That is no good for us. We need to define some of the optimizer's methods to do it. We will do so in the next examples.

Download zubr from https://sourceforge.net/projects/perkun/.






Thursday, February 23, 2017

example12_with_frame.zubr

This is another zubr example from the Perkun's "examples" folder. It merely demonstrates that we can open a Swing frame when running the program. There are still no Perkun values or variables. Also the optimizer class is not instantiated yet.


// This is an example zubr specification.


// We can import Java packages here.
import javax.swing.JFrame;
import java.awt.Dimension;

%%

// here we put the Perkun code (values and variables)

values {}
variables {}

%%

// here we put Java code to be included in the result class

private static JFrame frame;

public static void main(String[] args) {
    frame = new JFrame();
    frame.setTitle("Optimizer");
    frame.setSize(new Dimension(800, 600));
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setVisible(true);   
}


Try to process the example file with zubr:

zubr example12_with_frame.zubr > Optimizer.java

Then compile Optimizer.java with your Java compiler and run it. It will open a frame. When you close the frame the program terminates. Not a big deal. The example just shows how to include Java into your zubr specification.

Download zubr from https://sourceforge.net/projects/perkun/.


Wednesday, February 22, 2017

example11_simple.zubr

I thought I would present the subsequent zubr examples from the "examples" folder in the package perkun. Let us begin with example11_simple.zubr:


// This is an example zubr specification.

%%

// here we put the Perkun code (values and variables)

values {}
variables {}

%%

// here we put Java code to be included in the result class


In the above example we have a legal zubr specification. We begin with a declaration section, where we can place, for example, Java imports. Then comes a separator %% followed by a Perkun section with two subsections - values and variables. Then comes another separator %% followed by the definition section, where we can place the Java code to be included within the result class.

Let us run:
zubr example11_simple.zubr > Optimizer.java

This will produce Java code. The code will contain an Optimizer class, which needs to be instantiated. Then we can run the instance's "loop" method.

This simple example does not contain any Perkun variables; its purpose is just to demonstrate the zubr concept. The next examples will generate Java code based on the Java Swing package.

Download the zubr tool with the package: https://sourceforge.net/projects/perkun/.


zubr - an optimizer generator tool producing Java code in Perkun 0.1.6

I made some improvements for zubr and wrote several examples. They can be found in the "examples" folder. Zubr comes as a companion tool in the Perkun package:

https://sourceforge.net/projects/perkun/

The examples are:
  • example11_simple.zubr
  • example12_with_frame.zubr 
  • example13_with_optimizer_thread.zubr
  • example14_my_optimizer.zubr
  • example15_get_input.zubr
  • example16_on_error_in_populate_belief_for_consequence.zubr
  • example17_get_model_probability.zubr
  • example18_simple_automaton.zubr
  • example19_get_payoff.zubr
  • example20_hidden_variables.zubr
  • example21_set_apriori_belief.zubr
They are ordered by increasing complexity. I recommend running the Java code they produce. In order to try out an example (say, the last one) run:

zubr example21_set_apriori_belief.zubr > MyOptimizer.java

Then create a Java application project (for example in NetBeans), put the file MyOptimizer.java in the source folder (package "optimizer") and enjoy a simple optimizer generated by zubr!

I decided that only the Perkun values and variables are passed in the zubr specification - the apriori probabilities and the model probabilities are passed in pure Java (there is no special zubr syntax for them). It is better that way.



Tuesday, February 21, 2017

Perkun 0.1.5 released!

Perkun 0.1.5 has been released! It contains the "zubr" tool. In the "examples" folder there is an example zubr specification (example10_java.zubr). If you process the file with zubr you will get Java code containing the class MyOptimizer.

There is no manual for zubr yet; I will write one.

Monday, February 20, 2017

An idea: zubr - a Java code generator

I have an idea. I want to create a version of Perkun/Wlodkowic that works just like typical chess playing programs - i.e. without generating all the possible visible states. I want to use code that generates a game tree in runtime - based on the current state. In order to do that I will need a more powerful language than Perkun - I will need to write code in a general purpose programming language. One solution would be to embed an interpreter (like I did in Perkun2). Another one would be to enhance my own language. Another - the most interesting one - would be to create a code generator.

I mean a tool similar to yacc/bison with a specification that contains code in the target language. I choose the target language to be Java rather than C (for multiple reasons). The idea is to write a new tool (probably in C++) which would take a specification similar to this one:

//
// This is a zubr specification.
// When processed with zubr it generates Java code.
// The idea is based on yacc/bison generated parsers.

package optimizer;

%option class MyOptimizer
%option getinput own

import javax.swing.JFrame;
import javax.swing.JOptionPane;
import java.awt.Dimension;

%%

values
{
    value FALSE, TRUE;
}

variables
{
    input variable alpha:{FALSE, TRUE};
    hidden variable beta:{FALSE, TRUE};
    output variable chi:{FALSE, TRUE};
}

payoff
{
    set({alpha=>FALSE},0.0);
    set({alpha=>TRUE},100.0);
}

apriori
{
}

%%

// here we have remaining code (in Java) to be included in the optimizer class


 

I chose the name "zubr", which is derived from the Polish word "żubr", meaning the European bison.

As you can see, I borrow quite a lot from the yacc/bison syntax. The middle part (between the %% separators) has Wlodkowic syntax, but the model generator will be written in Java.

The optimizer derived this way might be inherited from or instantiated directly and used in your own Java program. The most important goal would be to achieve code based on a Perkun specification with hundreds, maybe thousands of hidden variables. The current implementation of Perkun/Wlodkowic is not capable of that.

The tool "zubr" will be packaged in Perkun (hopefully in the version 0.1.5). In fact I already have a small prototype (based on Wlodkowic code).



Sunday, February 19, 2017

IF THEN problems

In the ancient times we imagined that Artificial Intelligence could easily be achieved just by using the IF THEN statement known from programming languages like Pascal. We were wrong. Why? I think the reason is still not quite clear, even though we know algorithms like Q-learning and minimax. The problem with IF THEN is that it ignores the hidden variables. It is based on the visible (input) variables only, and building complex logical expressions on them does not help. With IF THEN we concentrate on the visible variables, ignoring the history.

The hidden variables are all about history. If we know how the world works (we know its "model" to say it in the Perkun/Wlodkowic terms) and we know the history then we can figure out what the hidden variables values look like - building the "belief", i.e. the probability distribution over the states.

Ignoring the hidden variables is not the only IF THEN problem. Simply mapping the input variables to actions does not explain to the computer why these actions should be performed. It is much better to express it in terms of the payoff function (as in minimax or Perkun). Mapping game states to a payoff function allows us to compare the game states (by comparing their "images" under the payoff function).
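As a toy illustration of this comparison (the state names and payoff values here are invented for the sketch): once states map to payoffs, a program can rank them directly instead of being told which action to take.

```java
import java.util.Map;

// Toy illustration: comparing two game states by their payoff "images".
// The state names and payoff values are invented for this sketch.
public class PayoffSketch {
    public static void main(String[] args) {
        Map<String, Double> payoff = Map.of(
                "alpha_is_true", 100.0,
                "alpha_is_false", 0.0);
        // Instead of an IF THEN rule prescribing an action, we rank the
        // states by payoff and prefer actions leading to the better one.
        String preferred = payoff.get("alpha_is_true") > payoff.get("alpha_is_false")
                ? "alpha_is_true" : "alpha_is_false";
        System.out.println(preferred); // prints alpha_is_true
    }
}
```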

It is trivial to explain why hidden variables are necessary. Imagine an automaton which can perform a single action and sees on input the values 0,0,1,1,0,0,1,1,... and so on. What is the 0's successor? Either 0 or 1, with 50% probability. What is the 1's successor? Again either 0 or 1, also with 50% probability. But if you introduce a hidden variable, so that the state is denoted by both the input variable and the hidden variable, then you can present the automaton as follows:

Now the automaton has become deterministic, due to the introduced hidden variable! In Perkun you would describe it as follows:




values
{
    value zero, one;
}
variables
{
    input variable alpha:{zero,one};
    hidden variable beta:{zero,one};
    output variable action:{zero};
}
payoff {}
model
{
    set({alpha=>zero,beta=>zero},{action=>zero},{alpha=>one,beta=>zero},1.0);
    set({alpha=>one,beta=>zero},{action=>zero},{alpha=>one,beta=>one},1.0);
    set({alpha=>one,beta=>one},{action=>zero},{alpha=>zero,beta=>one},1.0);
    set({alpha=>zero,beta=>one},{action=>zero},{alpha=>zero,beta=>zero},1.0);
}
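The four transitions above can be simulated directly. Here is a minimal self-contained sketch in which plain ints stand in for the Perkun values zero and one, starting from the state (alpha=zero, beta=one):

```java
public class HiddenVariableAutomaton {
    // Returns the observed alpha sequence of the deterministic automaton,
    // starting from (alpha, beta) and following the four transitions of
    // the Perkun model above.
    static String observe(int alpha, int beta, int steps) {
        StringBuilder observed = new StringBuilder();
        for (int i = 0; i < steps; i++) {
            observed.append(alpha);
            if (alpha == 0 && beta == 0)      { alpha = 1; beta = 0; } // (zero,zero) -> (one,zero)
            else if (alpha == 1 && beta == 0) { alpha = 1; beta = 1; } // (one,zero)  -> (one,one)
            else if (alpha == 1 && beta == 1) { alpha = 0; beta = 1; } // (one,one)   -> (zero,one)
            else                              { alpha = 0; beta = 0; } // (zero,one)  -> (zero,zero)
        }
        return observed.toString();
    }

    public static void main(String[] args) {
        System.out.println(observe(0, 1, 8)); // prints 00110011
    }
}
```

Note that the visible alpha alone repeats 0,0,1,1,... while each (alpha, beta) pair has exactly one successor - which is the whole point of the hidden variable.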

I have omitted the payoff function for convenience; anyway, there is only one possible action. But the point is that by introducing hidden variables even non-deterministic automata can become deterministic! Or at least less non-deterministic!

The conclusion is that IF THEN is not bad at all, but we are dealing with something much more complex, possibly with thousands of hidden variables. And we have to take them into account, at least probabilistically, just like Perkun (or Wlodkowic) does.

You can download Perkun (and Wlodkowic) from https://sourceforge.net/projects/perkun/.