Some day someone will write a program based on the Perkun algorithm that is capable of adding hidden variables dynamically, with the addition triggered ad hoc by certain observations. Adding hidden variables could possibly even be trained, or perhaps planned by the planning algorithm itself.
My dream would be to have a way of extending the model with the newly created hidden variables. I imagine that a model M(i+1) would somehow be derived from a model M(i) by extending it with a hidden variable (or several of them). This way, starting from a model M(0), we would be able to reach arbitrarily complex models. The model M(0) would contain no hidden variables at all - just the pure transition probabilities for the input variables and the output variables.
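To make the idea concrete, here is a minimal sketch in Python (not Perkun code, and all names here are my own illustration): M(0) is a table of transition probabilities over visible variables only, and each extension step copies the old transitions into a state space enlarged by one hidden variable, splitting the probability mass uniformly over the new hidden values so that M(i+1) initially behaves exactly like M(i) and can then be retrained.

```python
import itertools


def make_m0(states, actions):
    """M(0): no hidden variables -- just transition probabilities
    P(next_state | state, action), here initialized uniformly."""
    uniform = {s: 1.0 / len(states) for s in states}
    return {(s, a): dict(uniform) for s, a in itertools.product(states, actions)}


def extend_with_hidden(model, hidden_values):
    """Derive M(i+1) from M(i) by adding one hidden variable.

    Each old state s becomes (s, h) for every value h of the new
    variable; the new transition distribution is copied from the old
    one and split uniformly over the next hidden value, so the new
    model's visible behaviour is initially identical to the old one."""
    k = len(hidden_values)
    new_model = {}
    for (state, action), dist in model.items():
        for h in hidden_values:
            new_model[((state, h), action)] = {
                (s2, h2): p / k
                for s2, p in dist.items()
                for h2 in hidden_values
            }
    return new_model


# M(0) -> M(1): two visible states, two actions, one binary hidden variable.
m0 = make_m0(states=["wet", "dry"], actions=["wait", "heat"])
m1 = extend_with_hidden(m0, hidden_values=["h0", "h1"])
```

Repeating `extend_with_hidden` would give M(2), M(3) and so on, each strictly richer than the last.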
I have thought of ignoring the separate states within a visible state so that we do not produce a Cartesian product of the hidden variables' values. Let us pretend the hidden variables are independent, unless their dependency is very important. I think I will try this.
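The payoff of that independence assumption can be sketched in a few lines of Python (again just an illustration, not Perkun's actual implementation): a joint belief over the hidden variables needs one entry per combination, while the factored belief keeps only one marginal distribution per variable, and a joint probability is then just the product of the marginals.

```python
def joint_size(hidden_domains):
    """Full Cartesian product: one belief entry per combination
    of hidden values -- exponential in the number of variables."""
    n = 1
    for domain in hidden_domains:
        n *= len(domain)
    return n


def factored_belief(hidden_domains):
    """Pretend the hidden variables are independent: keep one
    (here uniform) marginal distribution per variable instead of
    a joint table -- linear in the number of variables."""
    return [{v: 1.0 / len(domain) for v in domain} for domain in hidden_domains]


def joint_probability(belief, assignment):
    """Under the independence assumption, the probability of a full
    assignment is simply the product of the per-variable marginals."""
    p = 1.0
    for marginal, value in zip(belief, assignment):
        p *= marginal[value]
    return p


# Ten binary hidden variables: 1024 joint entries vs. 20 numbers.
domains = [["true", "false"]] * 10
belief = factored_belief(domains)
```

The dependencies that really matter could still be modelled explicitly, for instance by merging a few strongly coupled variables into one.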
I may have a chance to present Perkun at http://www.aihelsinki.com/ in autumn. The people from AIHelsinki were kind enough to allow me to do so, even though I am not a scientist. This project is just a hobby of mine, so I really appreciate their kindness.