Distributed dynamic decision-making and learning under uncertainty in complex and changing situations are emerging as key competencies required to support future information-based systems. The Bayesian paradigm is acknowledged to provide a consistent and rigorous theoretical basis for joint learning and dynamic decision-making, and the established theory already provides a class of efficient adaptive strategies. However, this approach fails to overcome the computational-complexity barrier encountered in complex settings. This project aims to create a theoretical and algorithmic basis for a mathematically rigorous yet computationally tractable Bayesian distributed dynamic decision-making system, fully scalable in the number of local decision makers.
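To make the paradigm concrete, the following toy sketch (not the project's method; arm probabilities and horizon are hypothetical) shows Bayesian joint learning and decision-making on a two-armed Bernoulli bandit: a conjugate Beta posterior is updated after each observation, and actions are chosen by Thompson sampling from that posterior.

```python
import random

random.seed(0)
TRUE_P = [0.4, 0.7]      # hypothetical success probabilities, unknown to the agent
alpha = [1.0, 1.0]       # Beta posterior parameters: prior successes + 1
beta = [1.0, 1.0]        # Beta posterior parameters: prior failures + 1

def choose_arm():
    # Thompson sampling: draw one success probability from each arm's
    # posterior and act greedily on the draws (decision-making step).
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    return max(range(2), key=lambda i: samples[i])

rewards = 0
for _ in range(1000):
    arm = choose_arm()
    r = 1 if random.random() < TRUE_P[arm] else 0
    rewards += r
    alpha[arm] += r      # Bayesian update: Beta prior is conjugate,
    beta[arm] += 1 - r   # so the posterior stays Beta (learning step)

print(rewards)           # total reward accumulated over 1000 rounds
```

In this conjugate single-agent setting the posterior update is trivially cheap; the project's challenge is precisely that such tractability is lost in general multi-participant, multi-step settings.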
The project will develop theory, algorithms, and software for Bayesian distributed dynamic decision-making, taking a qualitatively new step towards a generic theory of multi-participant, multi-step decision-making in complex dynamic situations, and will transform that theory into a generic algorithmic and software toolset.
The theory and its conversion into a practical tool will provide:
Applications to non-trivial problems will be used to measure the project's success. Simulation, pilot-plant, and real-life tests (in the rolling-mill industry) will serve this purpose.