## Sunday, July 16, 2017

I have released the tool I was writing about. It is available from my server:

http://www.pawelbiernacki.net/bobr-0.0.0.tar.gz

It is a Java code generator (just like zubr), but I decided not to put it into the Perkun package.

## Thursday, July 13, 2017

### Spheres

I have an idea for limiting the size of the hidden variable space we search through. Imagine we have the following n hidden variables:

hidden variable v1:{false,true,none};

hidden variable v2:{false,true,none};

...

hidden variable vn:{false,true,none};

Then for any visible state we specify an initial point:

v1=>none,

v2=>none,

...

vn=>none

Instead of the whole space, we search only a "sphere", whose grade is defined as the number of hidden variables that differ from the initial point (the center). For grade 0 we have only the center. For grade 1 we have:

v1=>none

v2=>none

...

v(i-1)=>none

vi=>false or true

v(i+1)=>none

...

vn=>none

Thus only one variable, vi, differs from the center. For grade 2 there are two such variables; for grade n all the hidden variables differ from the center:

v1=>false or true,

v2=>false or true,

...

vn=>false or true

My idea is to search through a sphere of the k-th grade around a given center point. The center point may differ depending on the visible state.

I will implement the bobr code generator so that it only searches through such spheres.

My inspiration was programming itself: when we write a program, we do not search through the whole space of programs, since it is huge. Instead we move through the space of programs step by step.
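The sphere search described above can be sketched as follows. This is a minimal illustration, not bobr's actual implementation; the `Sphere` class and its method names are hypothetical. It assumes each hidden variable takes values {false, true, none} and the center assigns none everywhere:

```java
import java.util.ArrayList;
import java.util.List;

public class Sphere {
    // Enumerate every assignment of n ternary variables {false, true, none}
    // in which at most `grade` variables differ from the all-"none" center.
    // An assignment is encoded as a String: 'n' = none, 'f' = false, 't' = true.
    static List<String> sphere(int n, int grade) {
        List<String> result = new ArrayList<>();
        enumerate(new char[n], 0, grade, result);
        return result;
    }

    private static void enumerate(char[] state, int pos, int budget, List<String> out) {
        if (pos == state.length) {
            out.add(new String(state));
            return;
        }
        state[pos] = 'n';                 // keep this variable at the center
        enumerate(state, pos + 1, budget, out);
        if (budget > 0) {                 // spend one unit of the grade to deviate
            state[pos] = 'f';
            enumerate(state, pos + 1, budget - 1, out);
            state[pos] = 't';
            enumerate(state, pos + 1, budget - 1, out);
        }
    }
}
```

The sphere of grade k contains sum over i from 0 to k of C(n,i)·2^i points; for n = 10 and k = 2 that is 201 assignments instead of the full 3^10 = 59049.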

## Monday, July 10, 2017

### Bobr - a templates parser

I have written a small tool (bobr) that is capable of parsing my variable templates. It is not published yet.

In a fairly realistic example I observed that the generated input variables together with the output variable (even one with many values) produce a relatively small space to search, while the hidden variables produce a huge one. I will not be able to search through it all, so I am thinking of changing the algorithm so that it searches only some small subspace.

I will also face the problem of how to represent the model in terms of the hidden variable templates.

This is the template code I was parsing:

class boolean, person, place, profession, weapon;

object none;

object false:boolean, true:boolean;

object Dorban:person, Pregor:person, Thragos:person;

object warrior:profession, wizard:profession, thief:profession;

object Wyzima:place, Shadizar:place, Novigrad:place;

object bare_hands:weapon, axe:weapon, magic:weapon;

input variable reward:{false, true, none};

input variable response:{false, true, none};

input variable can_I_see_(X:person):boolean;

input variable do_I_have_(X:weapon):boolean;

input variable am_I_a_(X:profession):boolean;

input variable where_am_I:place;

output variable action:{

goto_(X:place),

do_nothing,

attack_(X:person)_with_(Y:weapon),

steal_(X:person)_(Y:weapon),

tell_(X:person)_that_(Y:person)_has_(Z:weapon),

tell_(X:person)_that_(Y:person)_is_a_(Z:profession),

tell_(X:person)_that_(Y:person)_is_in_(Z:place),

ask_(X:person)_whether_(Y:person)_has_(Z:weapon),

ask_(X:person)_whether_(Y:person)_is_a_(Z:profession),

ask_(X:person)_whether_(Y:person)_is_in_(Z:place)

};

hidden variable (X:person)_has_(Y:weapon):boolean;

hidden variable (X:person)_is_a_(Y:profession):boolean;

hidden variable (X:person)_is_in_(Y:place):boolean;

hidden variable (X:person)_thinks_that_(Y:person)_has_(Z:weapon):boolean;

hidden variable (X:person)_thinks_that_(Y:person)_is_a_(Z:profession):boolean;

hidden variable (X:person)_thinks_that_(Y:person)_is_in_(Z:place):boolean;
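To see how large the hidden space from the templates above actually is, we can count the instantiated hidden variables: 3 persons, 3 weapons, 3 professions and 3 places, with each template expanded over all combinations of its class arguments. This is a back-of-the-envelope sketch; the class and method names are mine, not bobr's:

```java
public class HiddenVariableCount {
    // Count the boolean hidden variables produced by instantiating the six
    // templates over 3 persons, 3 weapons, 3 professions and 3 places.
    static int countHiddenVariables() {
        int persons = 3, weapons = 3, professions = 3, places = 3;
        int has        = persons * weapons;                // (X)_has_(Y): 9
        int isA        = persons * professions;            // (X)_is_a_(Y): 9
        int isIn       = persons * places;                 // (X)_is_in_(Y): 9
        int thinksHas  = persons * persons * weapons;      // (X)_thinks_that_(Y)_has_(Z): 27
        int thinksIsA  = persons * persons * professions;  // (X)_thinks_that_(Y)_is_a_(Z): 27
        int thinksIsIn = persons * persons * places;       // (X)_thinks_that_(Y)_is_in_(Z): 27
        return has + isA + isIn + thinksHas + thinksIsA + thinksIsIn; // 108
    }
}
```

That gives 108 boolean hidden variables, hence 2^108 (roughly 3·10^32) joint hidden states, which is why the hidden space is hopeless to search exhaustively while the input/output space stays small.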

## Sunday, July 9, 2017

### My dream

Some day someone will write a program based on the Perkun algorithm that will be capable of adding hidden variables dynamically, with the ad hoc additions triggered by observations. Adding hidden variables might itself be learned, or perhaps planned by the planning algorithm itself.

My dream would be to have a way of extending the model with newly created hidden variables. I imagine that a model M(i+1) would somehow be derived from a model M(i) by extending it with a hidden variable (or several of them). This way, starting from a model M(0), we would be able to reach arbitrarily complex models. The model M(0) would contain no hidden variables at all - just the pure transition probabilities for the input variables and the output variables.

I have thought of ignoring all the separate states within a visible state, so that we do not produce a Cartesian product of the hidden variable values. Let us pretend the hidden variables are independent, unless their dependency is very important. I think I will try this.
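The independence assumption can be sketched like this. The `FactoredBelief` class below is hypothetical, not Perkun's actual representation: instead of storing 2^n probabilities for the joint distribution over n boolean hidden variables, we store only n marginals and multiply them whenever a joint probability is needed:

```java
public class FactoredBelief {
    private final double[] pTrue; // pTrue[i] = P(v_i = true); independence assumed

    public FactoredBelief(double[] pTrue) {
        this.pTrue = pTrue;
    }

    // Probability of a complete assignment under the independence assumption:
    // simply the product of the per-variable marginals.
    public double probability(boolean[] assignment) {
        double p = 1.0;
        for (int i = 0; i < pTrue.length; i++)
            p *= assignment[i] ? pTrue[i] : 1.0 - pTrue[i];
        return p;
    }
}
```

With n hidden variables this stores n numbers instead of 2^n; the price is exactly what the paragraph above accepts - any dependency between the hidden variables is lost unless it is modeled separately.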

I may have a chance to present Perkun at http://www.aihelsinki.com/ in autumn. The people from AIHelsinki were kind enough to allow me this, although I am not a scientist. This project is just a hobby of mine, so I really appreciate their kindness.
