Tuesday, June 1, 2021

I think he thinks I think ...

I have realized what is needed to make a successful planning/optimizing algorithm for multiple intelligent agents. We need a special belief that covers not only our agent's hidden variables but also what he thinks the others think. And what he thinks the others think he thinks. And so on, up to a certain level. Agents in a simple world would perceive each other as agents and would try to model each other's psychology.

My first attempt to construct a language that does this failed - it was Perkun2. I have thought about making something like Svarog for the multiple-agents problem, but I have no idea yet how it should work.
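Just to illustrate the idea, here is a toy Python sketch of nested beliefs (my own sketch, not part of Perkun2 or Svarog; all names in it are hypothetical). A belief at level 0 is the agent's distribution over the hidden states, and each deeper level wraps the belief the agent attributes to the other agent:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class NestedBelief:
      # distribution over the hidden states, e.g. {"pregor_is_alive": 0.8, "pregor_is_dead": 0.2}
      distribution: dict
      # what I think the other agent believes (None at the deepest level)
      about_other: Optional["NestedBelief"] = None

  def uniform_nested_belief(states, level):
      # "I think he thinks I think ..." up to the given level
      dist = {s: 1.0 / len(states) for s in states}
      if level == 0:
          return NestedBelief(dist)
      return NestedBelief(dist, uniform_nested_belief(states, level - 1))

  # level 2: my belief, what I think he believes, and what I think he thinks I believe
  belief = uniform_nested_belief(["pregor_is_alive", "pregor_is_dead"], level=2)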

Tuesday, May 25, 2021

svarog-0.0.8 and dorban-0.0.2

There is a new Svarog and a new Dorban out there. In Svarog I have found a way to handle situations where there are too many possible states to discretize the space of beliefs. It is much faster now. The new Dorban uses it and is, consequently, much faster.

https://sourceforge.net/projects/dorban/

https://github.com/pawelbiernacki/svarog

The current intelligence level is 4; in spite of that, the response time on my computer is below 1 s.


Thursday, May 6, 2021

svarog-0.0.6 - Perl scripts to parallelize the precalculations.

If you have a multi-core machine, you will probably want to parallelize the precalculations. The new Svarog (0.0.6) contains four Perl scripts that help with this. While you are free to write your own scripts using the svarog-merge tool (introduced in 0.0.5), it is more convenient to use these scripts:

  • svarog_generate_precalculate_tasks.pl
  • svarog_generate_precalculate_shell.pl
  • svarog_generate_control_shell.pl
  • svarog_generate_merge_shell.pl

To use them, perform the following steps:

  1. create a new directory and enter it
  2. copy a valid Svarog specification (without any commands) into it, for example as specification.svarog 
  3. copy specification.svarog into create_visible_states.svarog and append "cout << visible states << eol;" to it.
  4. execute: svarog create_visible_states.svarog > visible_states.txt
  5. check the number of lines in visible_states.txt; in Dorban's case it was 360
  6. execute: svarog_generate_precalculate_tasks.pl 4 2 specification.svarog visible_states.txt
  7. execute: svarog_generate_precalculate_shell.pl 360 > precalculate.sh
  8. execute: bash precalculate.sh

This starts the precalculations on all your processors in parallel. To check whether they have terminated, you can use a control shell:

  1. execute: svarog_generate_control_shell.pl 360 > control.sh
  2. execute: bash control.sh 

If there are any errors, it probably means that some precalculations have not terminated yet. You can use the ps shell command to check whether any svarog processes are still running. Once control.sh reports no errors, you will want to merge the Svarog specifications created by your tasks in the KNOWLEDGE directory:

  1. execute: svarog_generate_merge_shell.pl 360 > merge.sh
  2. execute: bash merge.sh

The resulting Svarog specification containing the precalculated knowledge will be written to the file result.svarog.

You should of course replace 360 in the above examples with the actual number of visible states in your case.

Replace 4 (the first argument of svarog_generate_precalculate_tasks.pl) with the depth you want for the precalculations. It affects the time required for the precalculations, but not the size of the result.
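If you want to automate steps 4-7 (including counting the visible states), a small wrapper along the following lines could be used. This is only a sketch of mine, not something shipped with Svarog; it assumes that svarog and the svarog_generate_*.pl scripts are on your PATH and that specification.svarog and create_visible_states.svarog already exist in the current directory:

  #!/usr/bin/env python3
  # Sketch of a wrapper around steps 4-7 above (not part of Svarog itself).
  import subprocess

  DEPTH, GRANULARITY = 4, 2  # same meaning as in the steps above

  # step 4: let svarog print the visible states
  with open("visible_states.txt", "w") as out:
      subprocess.run(["svarog", "create_visible_states.svarog"], stdout=out, check=True)

  # step 5: count the visible states (360 in Dorban's case)
  with open("visible_states.txt") as f:
      amount = sum(1 for _ in f)
  print("visible states:", amount)

  # step 6: generate the precalculation tasks
  subprocess.run(["svarog_generate_precalculate_tasks.pl", str(DEPTH), str(GRANULARITY),
                  "specification.svarog", "visible_states.txt"], check=True)

  # step 7 and the later steps: generate the shell scripts; run them yourself
  # afterwards with "bash precalculate.sh", "bash control.sh" and "bash merge.sh"
  for script, target in [("svarog_generate_precalculate_shell.pl", "precalculate.sh"),
                         ("svarog_generate_control_shell.pl", "control.sh"),
                         ("svarog_generate_merge_shell.pl", "merge.sh")]:
      with open(target, "w") as out:
          subprocess.run([script, str(amount)], stdout=out, check=True)

Generating control.sh and merge.sh up front is harmless, since they only do something when you run them by hand later.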


Wednesday, May 5, 2021

svarog-merge and a new precalculate command

svarog 0.0.5 has been released. It contains two enhancements:

  • a svarog-merge tool
  • enhanced command cout << precalculate() << eol

 

The enhanced command cout << precalculate() << eol allows precalculating knowledge for a specific visible state. The syntax is:

cout << precalculate(<depth>,<granularity>,<query>) << eol;

For example:

cout << precalculate(3,2,{has_dorban_won_a_fight=>none ,where_is_dorban=>place_Poznan ,can_dorban_see_pregor=>true ,can_dorban_see_pregor_is_alive=>true ,can_dorban_see_vampire=>true ,is_pregor_following_dorban=>true }) << eol;

The result is a valid Svarog specification with the knowledge precalculated only for this visible state.

The tool svarog-merge takes two files as arguments:

svarog-merge <file1> <file2>

It adds to the file1 specification all precalculated visible states from file2 that are missing in file1. The precalculated knowledge section must already exist in file1 (even if it is empty), and the precalculated knowledge in both files must have the same depth and granularity.

Using the new "precalculate" command together with the svarog-merge tool makes it possible to run the precalculations effectively on multi-core machines (so that all processors have work to do).
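As an illustration of how the work could be split, here is a small Python sketch of mine (not part of Svarog). It assumes, hypothetically, a file visible_states.txt containing one visible state per line in the query format shown above, and a specification (without any commands) in specification.svarog; each generated task file can then be run by a separate svarog process and the results combined with svarog-merge:

  # Sketch only: one precalculation task per visible state (all file names hypothetical).
  DEPTH, GRANULARITY = 3, 2

  with open("visible_states.txt") as f:
      queries = [line.strip() for line in f if line.strip()]

  with open("specification.svarog") as f:
      specification = f.read()  # a valid specification without any commands

  for i, query in enumerate(queries):
      with open(f"task_{i}.svarog", "w") as task:
          task.write(specification)
          task.write(f"\ncout << precalculate({DEPTH},{GRANULARITY},{query}) << eol;\n")

  # each task_<i>.svarog can now be run by its own svarog process, and the
  # resulting specifications merged pairwise with svarog-merge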


Friday, April 30, 2021

svarog-0.0.4 has been released!

A new version of svarog (0.0.4) has been released! It is available at:

https://github.com/pawelbiernacki/svarog

It contains two new commands:

cout << estimate(<depth>,<granularity>) << eol;

and

cout << precalculate(<depth>,<granularity>) << eol;

Both <depth> and <granularity> are integer parameters. <depth> is the depth of the game tree and should be a small integer. <granularity> is the number of values per dimension used when discretizing the hypercube of beliefs; it should also be a small integer, preferably 2.

The file examples/example6_precalculated.svarog was generated automatically by Svarog for depth 1 and granularity 2. It is not very intelligent (since depth 1 is not much), but Svarog is already able to use the precalculated knowledge.

The command "estimate" does not produce a valid Svarog specification - it skips the planning (determining the actions). It can be used to estimate the number of "on belief" clauses that the "precalculate" command would generate.

WARNING: The svarog-daemons do not benefit from the precalculated knowledge yet.


Wednesday, April 28, 2021

Hyperball, not a hypercube

Due to the normalization, the space of possible beliefs is an n-dimensional hyperball with radius 1.0, not a hypercube. Nevertheless, Svarog first generates a hypercube; for example, for three states and granularity 2 there would be:

(0,0,1)
(0,1,0)
(0,1,1)
(1,0,0)
(1,0,1)
(1,1,0)
(1,1,1)

Then it applies the normalization (when doing precalculations):

(0,0,1.0)
(0,1.0,0)
(0,0.5,0.5)
(1.0,0,0)
(0.5,0,0.5)
(0.5,0.5,0)
(0.33,0.33,0.33)

The tuple (0,0,0) is not taken into account since it cannot be normalized and does not make sense.
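For illustration only, the enumeration and normalization above can be reproduced with a few lines of Python (my sketch, not Svarog source code; it assumes that with granularity 2 each coordinate of a hypercube node is either 0 or 1, as in the example):

  from itertools import product

  states, granularity = 3, 2

  for corner in product(range(granularity), repeat=states):
      total = sum(corner)
      if total == 0:
          continue  # (0,0,0) cannot be normalized, so it is skipped
      normalized = tuple(round(c / total, 2) for c in corner)
      print(corner, "->", normalized)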

Tuesday, April 27, 2021

Precalculations in svarog-0.0.4

There will be a new command in svarog-0.0.4 (which has not been published yet). The command syntax is:

cout << precalculate(<depth>,<granularity>) << eol;

Example:

cout << precalculate(4,2) << eol;

This command generates (and prints to the standard output) a Svarog specification with precalculated knowledge about the optimal actions for the given planning depth and granularity.

The depth is the same parameter as the one passed to the loop command - it is the depth (or height) of the game tree. It should be a small integer.

The granularity should be a small integer, preferably 2. Svarog discretizes the space of beliefs, which is an n-dimensional hypercube, with n being the number of possible states for the given visible state. Then, when planning, it can use the precalculated knowledge: it searches for the node of the hypercube that is closest to the actual belief.

WARNING: For a reasonable specification, running this command can take a couple of days.

It should significantly improve the performance when planning. 

In some cases the number of possible states is still too large to build this discrete hypercube, even for a small granularity (2). For example, when the number of possible states equals 32, the number of nodes in the discrete hypercube is 2^32 = 4.29497e+09. In this case (whenever the number of nodes is greater than 1024) Svarog denotes the visible state as "too complex". When these visible states are encountered, the planning is done normally, although for the "future beliefs" we may still benefit from the precalculated (cached) knowledge.
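With granularity 2, each of the n coordinates of a node takes one of two values, so the discrete hypercube has 2^n nodes. The figures above can be checked with a short Python sketch (mine, not Svarog source code):

  # Sketch: size of the discrete hypercube of beliefs and the "too complex" check.
  def hypercube_nodes(possible_states, granularity=2):
      return granularity ** possible_states

  for n in (3, 10, 32):
      nodes = hypercube_nodes(n)
      print(n, "states ->", nodes, "nodes, too complex:", nodes > 1024)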

The idea is that the computer analyses the given Svarog specification and performs planning for hypothetical visible states and some beliefs. Then it will have a list of "rules" of the form:

<visible state> x <belief> -> <optimal action>

It will clearly not cover all possible beliefs, but we will be able to find the rule closest to the actual belief in terms of the Cartesian distance.
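A toy Python sketch of such a lookup (the rules, belief points, and action names below are invented only for illustration; this is not Svarog's internal representation):

  import math

  # hypothetical precalculated rules for one visible state: belief node -> optimal action
  rules = [
      ((1.0, 0.0, 0.0), "attack_vampire"),
      ((0.5, 0.5, 0.0), "go_to_Poznan"),
      ((0.33, 0.33, 0.33), "do_nothing"),
  ]

  def closest_action(belief):
      # return the action of the rule whose belief node is closest in Cartesian distance
      def distance(rule):
          node, _ = rule
          return math.sqrt(sum((a - b) ** 2 for a, b in zip(node, belief)))
      return min(rules, key=distance)[1]

  print(closest_action((0.9, 0.1, 0.0)))  # -> attack_vampire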


Thursday, April 15, 2021

dorban-0.0.0 - demo for Svarog

I have written a small demo program for Svarog. It is called Dorban and can be downloaded from https://sourceforge.net/projects/dorban/ .

I will describe it on my website https://www.perkun.org .