This dissertation presents several new algorithms and heuristics for constraint satisfaction problems, as well as an extensive and systematic empirical evaluation of these new techniques. The goal of the research is to develop algorithms that are effective on large and hard constraint satisfaction problems.
The dissertation presents several new combination algorithms. The BJ+DVO algorithm combines backjumping with a dynamic variable ordering heuristic that exploits a forward-checking style look-ahead. A new value-selection heuristic, Look-ahead Value Ordering (LVO), can be combined with BJ+DVO to yield BJ+DVO+LVO. A new learning, or constraint recording, technique called jump-back learning is described; it is particularly effective because it reuses effort that has already been expended by BJ+DVO, and it can be combined with either BJ+DVO or BJ+DVO+LVO. Learning is also shown to be helpful for solving optimization problems cast as a series of constraint problems with successively tighter cost-bound constraints: the constraints recorded by learning are reused in subsequent attempts to find a solution with a lower cost bound.
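To make the ingredients concrete, the following is a minimal sketch of backtracking search with forward checking and a dynamic "smallest remaining domain first" variable ordering, which is the general idea behind a forward-checking-based DVO heuristic. It is an illustrative assumption of how such a search is typically structured, not the dissertation's exact BJ+DVO algorithm (no backjumping or learning is shown):

```python
def solve(domains, constraints):
    """domains: {var: set of values}; constraints: {(u, v): set of allowed
    (a, b) value pairs}.  Returns a satisfying assignment dict, or None."""
    assignment = {}

    def consistent(u, a, v, b):
        # A pair of assignments is consistent unless a constraint forbids it.
        if (u, v) in constraints:
            return (a, b) in constraints[(u, v)]
        if (v, u) in constraints:
            return (b, a) in constraints[(v, u)]
        return True

    def backtrack():
        if len(assignment) == len(domains):
            return dict(assignment)
        # Dynamic variable ordering: choose the unassigned variable with the
        # smallest current domain ("fail first").
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        for value in sorted(domains[var]):
            assignment[var] = value
            # Forward checking: prune inconsistent values from the domains of
            # future variables, remembering the prunings so they can be undone.
            pruned, ok = [], True
            for other in domains:
                if other in assignment:
                    continue
                for val in list(domains[other]):
                    if not consistent(var, value, other, val):
                        domains[other].discard(val)
                        pruned.append((other, val))
                if not domains[other]:   # a future domain was wiped out
                    ok = False
                    break
            if ok:
                result = backtrack()
                if result is not None:
                    return result
            for other, val in pruned:    # undo forward checking
                domains[other].add(val)
            del assignment[var]
        return None

    return backtrack()
```

For example, 2-coloring a three-variable chain (`A != B`, `B != C`) expressed with allowed pairs `{(0, 1), (1, 0)}` is solved immediately, while adding the constraint `A != C` makes the instance unsolvable and `solve` returns `None`.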
The algorithms are evaluated in the dissertation by their performance on three types of problems. Extensive use is made of random binary constraint satisfaction problems, which are generated according to certain parameters. By varying the parameters across a range of values it is possible to assess how the relative performance of algorithms is affected by characteristics of the problems. A second random problem generator creates instances modeled on scheduling problems from the electric power industry. Third, algorithms are compared on a set of DIMACS Challenge problems drawn from circuit analysis.
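A common four-parameter model for random binary constraint satisfaction problems controls the number of variables, the domain size, the number of constraints, and the tightness of each constraint. The generator below follows that widely used model as an illustrative assumption; the dissertation's exact generator and parameterization may differ:

```python
import random

def random_binary_csp(n, d, c, t, seed=None):
    """Generate a random binary CSP in a common four-parameter model:
    n variables, domain size d, c binary constraints placed on distinct
    variable pairs, each constraint forbidding t of the d*d value pairs.
    Returns (domains, constraints), where constraints maps a variable
    pair to its set of *allowed* value pairs."""
    rng = random.Random(seed)
    domains = {v: set(range(d)) for v in range(n)}
    all_edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    constraints = {}
    for edge in rng.sample(all_edges, c):
        pairs = [(a, b) for a in range(d) for b in range(d)]
        forbidden = set(rng.sample(pairs, t))
        constraints[edge] = set(pairs) - forbidden
    return domains, constraints
```

Sweeping `c` (density) and `t` (tightness) across a grid of values is how one probes the regions of easy and hard instances that such studies compare algorithms on.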
The dissertation presents the first systematic study of the empirical distribution of the computational effort required to solve randomly generated constraint satisfaction problems. When solvable and unsolvable problems are considered separately, the distribution of work on each type can be approximated by a parametric family of continuous probability distributions: unsolvable problems are well fit by the lognormal distribution, while the work on solvable problems can be roughly modeled by the Weibull distribution. Both distributions can be highly skewed, with a long, heavy right tail.
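The practical signature of such a heavy right tail is that the sample mean is pulled far above the median by rare, very expensive runs. The small simulation below illustrates this for lognormally distributed "work"; the parameters `mu = 0, sigma = 1.5` are illustrative assumptions, not values fitted in the dissertation:

```python
import random
import statistics

def tail_summary(samples):
    """Summarize skew via the mean, median, and 99th-percentile sample."""
    xs = sorted(samples)
    mean = statistics.fmean(xs)
    median = xs[len(xs) // 2]
    p99 = xs[int(0.99 * len(xs))]
    return mean, median, p99

rng = random.Random(0)
# Simulated per-instance "work" drawn from a lognormal distribution.
samples = [rng.lognormvariate(0.0, 1.5) for _ in range(100_000)]
mean, median, p99 = tail_summary(samples)
# For a lognormal, mean = exp(mu + sigma^2 / 2) exceeds the median
# exp(mu), so a few very hard instances dominate the average cost.
```

This is why studies of search cost report the full distribution rather than the mean alone: the average is dominated by the tail.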