Speaker: Igor Kiselev
Classical dynamic approaches to online learning and optimization cope with statistical fluctuations in the incoming data by continually retraining their models, which is computationally intractable for real-world problems of practical interest and inappropriate in time-critical scenarios. In this talk, we present a distributed multi-agent approach to online multi-objective optimization: the task is modeled as a dynamic distributed resource allocation problem (i-ODRN), and a game-theoretic, market-based method of multi-agent negotiation is applied to obtain an implicit global quasi-optimal solution. The developed multi-agent allocation algorithm differs from conventional methods in being dynamic, incremental, and continuous. The goal-driven behavior of the autonomous agents is supported by a multi-objective decision-making model, which enables the allocation algorithm to operate with non-standard optimization criteria and makes it suitable for exploratory data analysis using various measures of similarity. We demonstrate the applicability and efficiency of the approach with two implemented knowledge-based multi-agent systems for NP-hard optimization problems: a continuous transportation scheduling system for the dynamic multi-vehicle pickup and delivery problem with soft time windows (a dynamic m-PDPSTW), and an online unsupervised learning system for continuous agglomerative hierarchical clustering of streaming data.
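To give a rough feel for the market-based allocation idea mentioned in the abstract, the sketch below shows a toy sealed-bid auction in which tasks arrive one at a time and vehicle agents bid their marginal cost to take on each task. This is only an illustrative assumption of how such negotiation can work, not the speaker's actual algorithm; all names (Agent, auction_allocate, marginal_cost) and the 1-D cost model are hypothetical.

```python
# Toy market-based task allocation: tasks are auctioned as they arrive,
# each agent bids its marginal cost, and the cheapest bidder wins.
# Purely illustrative; not the i-ODRN algorithm described in the talk.
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    location: float                      # 1-D position for simplicity
    tasks: list = field(default_factory=list)

    def marginal_cost(self, task_location: float) -> float:
        # Bid = extra distance incurred by appending the new task
        # to the end of the agent's current route (crude insertion heuristic).
        last = self.tasks[-1] if self.tasks else self.location
        return abs(task_location - last)

def auction_allocate(agents, task_location):
    # Sealed-bid auction: collect bids, award the task to the lowest bidder.
    bids = {a.name: a.marginal_cost(task_location) for a in agents}
    winner = min(agents, key=lambda a: bids[a.name])
    winner.tasks.append(task_location)
    return winner.name, bids

if __name__ == "__main__":
    random.seed(0)
    fleet = [Agent("v1", 0.0), Agent("v2", 5.0), Agent("v3", 10.0)]
    # Tasks arrive one at a time (dynamic, incremental allocation).
    for t in [random.uniform(0, 10) for _ in range(5)]:
        winner, bids = auction_allocate(fleet, t)
        rounded = {k: round(v, 1) for k, v in bids.items()}
        print(f"task at {t:4.1f} -> {winner}  bids={rounded}")
```

Because each new task triggers only a local round of bidding rather than a global re-optimization, allocation can proceed incrementally as data stream in, which is the property the abstract contrasts with continual retraining.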
Friday, November 7, 2008, 11:30 am EST (GMT -05:00)