## Wednesday, December 25, 2019

### kuna-0.0.0 has been published!

Version 0.0.0 of kuna has been published! It can be downloaded from https://www.perkun.org/Download. Take a look at the examples folder; it contains some working examples. I will spend some time documenting it.

## Thursday, December 19, 2019

### kuna - a new language

I have created a new language named "kuna". It is not published yet. It resembles Perkun, but it does not keep the whole model in memory; instead, it calculates the model values from rules whenever necessary. It is therefore somewhat more powerful than Perkun: it allows more input and hidden variables.

In order to create kuna I used some of the Perkun code; in particular, the parser and the optimization algorithm are taken almost directly from Perkun. The syntax is similar. Kuna comes as a library, just like Perkun, so it is possible to use it in your own C++ programs.

A kuna program consists of three sections: values, variables, and knowledge (the last describing both the payoff and the model).


## Tuesday, December 3, 2019

### www.perkun.org

I have created a website dedicated to perkun, wlodkowic, zubr and perkun2: www.perkun.org.

## Saturday, February 2, 2019

### A simple Perkun example

The Perkun code below is a very simple example. There are two state variables: a (an input variable) and b (a hidden variable). There is also an output variable c. All variables take boolean values.

Combinations of the output variable values are actions. In this example there are two actions: c=>false and c=>true. For each action we can represent the model as a directed graph, for example:

c=>false: (graph figure not reproduced here)

Every edge has the label 1.0, which corresponds to the transition probability.

And c=>true: (graph figure not reproduced here)

We can see that in each of the graphs the states contain the state variables, i.e. a and b. The last graph can be rearranged into: (graph figure not reproduced here)

The two subgraphs labeled "a=>false" and "a=>true" are called the "visible states" in Perkun. Each of them contains two states with fixed input variable values, differing only in the hidden variable values.
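To make the grouping into visible states concrete, here is a small Python sketch (purely illustrative, not part of Perkun): it enumerates the four states as (a, b) pairs and groups them by the value of the input variable a, which is exactly what the two subgraphs express.

```python
from itertools import product

# Each state is a pair (a, b) of boolean state-variable values.
states = list(product([False, True], repeat=2))

# Group states into "visible states": states sharing the same
# input-variable value a, differing only in the hidden variable b.
visible_states = {}
for a, b in states:
    visible_states.setdefault(a, []).append((a, b))

print(visible_states[False])  # the "a=>false" visible state
print(visible_states[True])   # the "a=>true" visible state
```

Each visible state holds two full states, because only the hidden variable b is free once a is observed.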

And here is the Perkun code:

```
#!perkun

values
{
	value false, true;
}

variables
{
	input variable a:{false, true};
	hidden variable b:{false, true};
	output variable c:{false, true};
}

payoff
{
	set({a=>false}, 0.0);
	set({a=>true}, 100.0);
}

model
{
	# {c=>false}
	set({a=>false, b=>false}, {c=>false}, {a=>false, b=>false}, 1.00000);
	set({a=>false, b=>false}, {c=>false}, {a=>false, b=>true}, 0.00000);
	set({a=>false, b=>false}, {c=>false}, {a=>true, b=>false}, 0.00000);
	set({a=>false, b=>false}, {c=>false}, {a=>true, b=>true}, 0.00000);

	# {c=>true}
	set({a=>false, b=>false}, {c=>true}, {a=>false, b=>false}, 0.00000);
	set({a=>false, b=>false}, {c=>true}, {a=>false, b=>true}, 1.00000);
	set({a=>false, b=>false}, {c=>true}, {a=>true, b=>false}, 0.00000);
	set({a=>false, b=>false}, {c=>true}, {a=>true, b=>true}, 0.00000);

	# {c=>false}
	set({a=>false, b=>true}, {c=>false}, {a=>false, b=>false}, 0.00000);
	set({a=>false, b=>true}, {c=>false}, {a=>false, b=>true}, 1.00000);
	set({a=>false, b=>true}, {c=>false}, {a=>true, b=>false}, 0.00000);
	set({a=>false, b=>true}, {c=>false}, {a=>true, b=>true}, 0.00000);

	# {c=>true}
	set({a=>false, b=>true}, {c=>true}, {a=>false, b=>false}, 0.00000);
	set({a=>false, b=>true}, {c=>true}, {a=>false, b=>true}, 0.00000);
	set({a=>false, b=>true}, {c=>true}, {a=>true, b=>false}, 1.00000);
	set({a=>false, b=>true}, {c=>true}, {a=>true, b=>true}, 0.00000);

	# {c=>false}
	set({a=>true, b=>false}, {c=>false}, {a=>false, b=>false}, 0.00000);
	set({a=>true, b=>false}, {c=>false}, {a=>false, b=>true}, 0.00000);
	set({a=>true, b=>false}, {c=>false}, {a=>true, b=>false}, 1.00000);
	set({a=>true, b=>false}, {c=>false}, {a=>true, b=>true}, 0.00000);

	# {c=>true}
	set({a=>true, b=>false}, {c=>true}, {a=>false, b=>false}, 0.00000);
	set({a=>true, b=>false}, {c=>true}, {a=>false, b=>true}, 0.00000);
	set({a=>true, b=>false}, {c=>true}, {a=>true, b=>false}, 0.00000);
	set({a=>true, b=>false}, {c=>true}, {a=>true, b=>true}, 1.00000);

	# {c=>false}
	set({a=>true, b=>true}, {c=>false}, {a=>false, b=>false}, 0.00000);
	set({a=>true, b=>true}, {c=>false}, {a=>false, b=>true}, 0.00000);
	set({a=>true, b=>true}, {c=>false}, {a=>true, b=>false}, 0.00000);
	set({a=>true, b=>true}, {c=>false}, {a=>true, b=>true}, 1.00000);

	# {c=>true}
	set({a=>true, b=>true}, {c=>true}, {a=>false, b=>false}, 1.00000);
	set({a=>true, b=>true}, {c=>true}, {a=>false, b=>true}, 0.00000);
	set({a=>true, b=>true}, {c=>true}, {a=>true, b=>false}, 0.00000);
	set({a=>true, b=>true}, {c=>true}, {a=>true, b=>true}, 0.00000);
}

loop(1);
```
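The model section is fully deterministic: every transition probability is 0 or 1, so each state/action pair has exactly one successor. A minimal Python sketch (an illustration of the table, not Perkun's API) encodes the same transitions and checks that the probabilities for each state and action sum to 1, as a well-formed model requires. Under c=>false every state maps to itself; under c=>true the states cycle (F,F) -> (F,T) -> (T,F) -> (T,T) -> (F,F).

```python
F, T = False, True

# (a, b, action c) -> (a', b'), taken from the set(...) entries above.
next_state = {
    (F, F, F): (F, F), (F, T, F): (F, T),
    (T, F, F): (T, F), (T, T, F): (T, T),
    (F, F, T): (F, T), (F, T, T): (T, F),
    (T, F, T): (T, T), (T, T, T): (F, F),
}

def p(state, action, new_state):
    """Transition probability, mirroring the set(...) entries."""
    return 1.0 if next_state[state + (action,)] == new_state else 0.0

states = [(F, F), (F, T), (T, F), (T, T)]
for s in states:
    for c in (F, T):
        # Each row of the transition table must sum to 1.
        assert sum(p(s, c, s2) for s2 in states) == 1.0
print("all rows sum to 1")
```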

When we execute the code with Perkun, we enter the interactive mode:

```
loop with depth 1
I expect the values of the variables: a
perkun>
```

Let's type "false":

```
belief:
b=false a=false 0.500000
b=true a=false 0.500000

optimal action:
c=true

perkun>
```

We can see that Perkun is not sure whether we are in the state "a=>false,b=>false" or "a=>false,b=>true"; both have a belief probability of 50%. Let's type "false" again:

```
belief:
b=false a=false 0.00000
b=true a=false 1.00000

optimal action:
c=true

perkun>
```

Now it knows we are in the state "a=>false,b=>true" and wants us to "move", i.e. to execute the action c=>true. Let us type "true":

```
belief:
b=false a=true 1.00000
b=true a=true 0.00000

optimal action:
c=false

perkun>
```

Now it has finally got what it likes, "a=>true" (see the payoff). Therefore it asks us to stay in this state, i.e. to perform the action c=>false.
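The belief values printed in the session follow from standard Bayesian filtering over the hidden state. A short Python sketch (an illustration, not Perkun's internals) reproduces them: start from a uniform belief over b, push the belief through the chosen action's transitions, then condition on the observed value of the input variable a.

```python
F, T = False, True
states = [(F, F), (F, T), (T, F), (T, T)]  # (a, b)

# Deterministic transitions from the model section:
# (a, b, action c) -> (a', b')
step = {
    (F, F, F): (F, F), (F, T, F): (F, T), (T, F, F): (T, F), (T, T, F): (T, T),
    (F, F, T): (F, T), (F, T, T): (T, F), (T, F, T): (T, T), (T, T, T): (F, F),
}

def update(belief, action, observed_a):
    """Predict with the action, then condition on the observed input a."""
    predicted = {s: 0.0 for s in states}
    for s, pr in belief.items():
        predicted[step[s + (action,)]] += pr
    posterior = {s: pr for s, pr in predicted.items() if s[0] == observed_a}
    total = sum(posterior.values())
    return {s: pr / total for s, pr in posterior.items()}

# First observation a=false: uniform belief over the hidden variable b.
belief = {(F, F): 0.5, (F, T): 0.5}
# Perkun chooses c=>true; we then observe a=false again.
belief = update(belief, T, F)
print(belief)  # all mass on (a=>false, b=>true), as in the session
# Perkun chooses c=>true again; we then observe a=true.
belief = update(belief, T, T)
print(belief)  # all mass on (a=>true, b=>false), as in the session
```

The first observation alone cannot distinguish the two hidden states, which is why Perkun prints 0.5 for each; the second observation resolves the ambiguity, exactly as in the transcript.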

