Friday, April 30, 2021

svarog-0.0.4 has been released!

A new version of svarog (0.0.4) has been released! It is available at:

https://github.com/pawelbiernacki/svarog

It contains two new commands:

cout << estimate(<depth>,<granularity>) << eol;

and

cout << precalculate(<depth>,<granularity>) << eol;

Both <depth> and <granularity> are integer parameters. <depth> is the depth of the game tree and should be a small integer. <granularity> is the number of values per dimension used when discretizing the hypercube of beliefs; it should also be a small integer, preferably 2.

The file examples/example6_precalculated.svarog was generated automatically by svarog for depth 1 and granularity 2. It is not very intelligent (depth 1 does not allow for much), but it shows that svarog can already use the precalculated knowledge.

The command "estimate" does not produce a valid Svarog specification - it skips the planning (determining the actions). It can be used to estimate the number of "on belief" clauses that the "precalculate" command would generate.

WARNING: svarog-daemons do not benefit from the precalculated knowledge yet.


Wednesday, April 28, 2021

Hyperball, not a hypercube

Due to the normalization, the space of possible beliefs is an n-dimensional hyperball with radius 1.0, not a hypercube. In spite of that, Svarog first generates a hypercube; for example, for three states and granularity 2 there would be:

(0,0,1)

(0,1,0)

(0,1,1)

(1,0,0)

(1,0,1)

(1,1,0)

(1,1,1)

Then, when doing the precalculations, it applies the normalization:

(0,0,1.0)

(0,1.0,0)

(0,0.5,0.5)

(1.0,0,0)

(0.5,0,0.5)

(0.5,0.5,0)

(0.33,0.33,0.33)

The tuple (0,0,0) is not taken into account, since it cannot be normalized and does not make sense as a belief.
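The discretization described above can be sketched in a few lines of Python (this is an illustration of the idea, not Svarog's actual implementation; the function name is my own):

```python
from itertools import product

def belief_nodes(states, granularity):
    """Enumerate the discrete hypercube (granularity values per
    dimension, one dimension per state), then normalize each tuple
    so that its components sum to 1.0. The all-zeros tuple is
    skipped, since it cannot be normalized."""
    nodes = []
    for point in product(range(granularity), repeat=states):
        total = sum(point)
        if total == 0:
            continue  # (0,0,...,0) does not make sense as a belief
        nodes.append(tuple(x / total for x in point))
    return nodes

# Three states, granularity 2 -> the seven normalized tuples above.
for node in belief_nodes(3, 2):
    print(node)
```

For three states and granularity 2 this yields exactly the seven tuples listed above, e.g. (0,1,1) becomes (0.0, 0.5, 0.5) and (1,1,1) becomes the thirds.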

Tuesday, April 27, 2021

Precalculations in svarog-0.0.4

There will be a new command in svarog-0.0.4 (which has not been published yet). The command syntax is:

cout << precalculate(<depth>,<granularity>) << eol;

Example:

cout << precalculate(4,2) << eol;

This command generates (and prints to the standard output) a Svarog specification with precalculated knowledge about the optimal actions for the given depth of planning and granularity.

The depth is the same parameter as the one passed to the loop command: the depth (or height) of the game tree. It should be a small integer.

The granularity should be a small integer, preferably 2. Svarog will discretize the space of beliefs, which is a hypercube of n dimensions, n being the number of possible states for a given visible state. When planning, it will then be able to use the precalculated knowledge by searching for the node in the hypercube closest to the actual belief.

 

WARNING: For a reasonable specification, running this command can take a couple of days.

It should significantly improve the performance when planning. 

In some cases the number of possible states is still too large to build this discrete hypercube, even for a small granularity (2). For example, when the number of possible states equals 32, the number of nodes in the discrete hypercube is 2^32, i.e. about 4.29497e+09. In such cases (whenever the number of nodes is greater than 1024) Svarog will mark the visible state as "too complex". When these visible states are encountered, the planning should be done normally, although for the "future beliefs" we may still benefit from the precalculated knowledge cache.
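The arithmetic behind the "too complex" threshold is simple enough to check directly (the helper names below are my own, not part of Svarog):

```python
def hypercube_nodes(states, granularity):
    """Number of nodes in the discrete hypercube of beliefs:
    granularity values per dimension, one dimension per state.
    (This count includes the non-normalizable all-zeros node.)"""
    return granularity ** states

def too_complex(states, granularity, limit=1024):
    """The threshold described above: with more than `limit`
    nodes, the visible state would be marked "too complex"."""
    return hypercube_nodes(states, granularity) > limit

print(hypercube_nodes(32, 2))  # 4294967296, i.e. about 4.29e+09
print(too_complex(32, 2))      # True
print(too_complex(10, 2))      # exactly 1024 nodes -> False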

The idea is that the computer analyses the given Svarog specification and performs planning for hypothetical visible states and some beliefs. It then has a list of "rules" of the form:

<visible state> x <belief> -> <optimal action>

It will clearly not cover all possible beliefs, but we will be able to find the rule closest to the actual belief in terms of the Cartesian distance.
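A minimal sketch of such a lookup, assuming a hypothetical rule table (the visible state names and actions below are invented for illustration; Svarog's internal representation may differ):

```python
from math import dist  # Euclidean (Cartesian) distance, Python 3.8+

# Hypothetical precalculated rule table:
# (visible state, belief node) -> optimal action.
rules = {
    ("corridor", (1.0, 0.0, 0.0)): "attack",
    ("corridor", (0.0, 1.0, 0.0)): "flee",
    ("corridor", (0.5, 0.0, 0.5)): "wait",
}

def closest_action(visible_state, belief, rules):
    """Pick the rule for `visible_state` whose belief node lies
    closest to the actual belief, by Cartesian distance."""
    candidates = [(node, action) for (vs, node), action in rules.items()
                  if vs == visible_state]
    node, action = min(candidates, key=lambda na: dist(na[0], belief))
    return action

print(closest_action("corridor", (0.4, 0.1, 0.5), rules))  # "wait"
```

Here the actual belief (0.4, 0.1, 0.5) does not match any precalculated node exactly, but (0.5, 0, 0.5) is the nearest one, so its action is chosen.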


Thursday, April 15, 2021

dorban-0.0.0 - demo for Svarog

I have written a small demo program for Svarog. It is called dorban and can be downloaded from https://sourceforge.net/projects/dorban/ .

I will describe it on my website https://www.perkun.org .