Introduction
The CSP Library consists of a set of classes that
can be used to efficiently represent and solve
Constraint Satisfaction Problems (CSPs).
This library is not a general-purpose solver. It
does not support many features that are useful in
representing real-life problems (like continuous
domains, n-ary constraints, composite
representations, etc.). It is, however, a useful
testbench for experimenting with new algorithms and
ideas, running performance measurements, and so on.
The design behind the CSP Library emphasizes
flexibility rather than raw performance. The goal
has been to create a library that is easy to use
in new experiments, by providing an infrastructure
in which researchers can play with new algorithms
and ideas. It has not been designed to squeeze every
last drop of performance out of a particular
algorithm.
The library has mechanisms for allowing search to
use different decompositions at different levels of
the tree, whereby a decomposition can decide it
would be overkill to continue the search in the
remaining subproblem and can yield control to a less
sophisticated search method that is more efficient
on small problems. The library implements
retraction plugins, like Chronological or CBJ, that
can simply be plugged into any decomposition or
filter. It can hide parts of a problem and allow a
decomposition to temporarily work on a subproblem;
it allows chaining any number of heuristics for
the purpose of breaking ties; it allows multiple
constraints between the same pair of variables; and
it has the ability to create problems with many
types of values, not just integers. The library can
represent small problems as well as large ones and
has no artificial limitations implemented solely for
the purpose of improving performance on certain
types of problems or benchmarks. The library sports
a sophisticated, low-overhead logging mechanism that
can be compiled into optimized code with minimal
impact. All this flexibility comes at a slight cost
in performance. That is not to say, however, that
the CSP Library is inefficient; it is not.
However, hand-coded algorithms that work on integers
only and allocate everything statically will
probably be faster.
The graph above is a representation of the RLFAP
CELAR SCEN11 problem, drawn by the CSP Library using
AT&T's Graphviz graph drawing software.
News
Version 1.0.1 (May 6th, 2009):

Updated to work with GCC 4.3 on 64-bit Linux machines.

Fixed an obscure bug/crash in Solver::branching()
that only happened on 64-bit machines.

Changed to use SIZE_T_MAX = static_cast<size_t>(-1) instead of
UINT_MAX, since size_t is 8 bytes on 64-bit machines.
Version 1.0.0 (January 1st, 2008):

Ported to Mac OS X (Tiger and Leopard).

Added support for computing and performing refutation
analysis on leftovers, i.e. those insoluble subtrees that
make up the proof of optimality in optimization problems
(or the part of the search tree that comes between the
last solution and the end of the search tree when looking
for all solutions).

Added support for computing quasi-optimal and
restricted-optimal refutations for optimization problems.

Added support for optimization problems. An objective
function for methods such as branch-and-bound can be
implemented by deriving from csp::Objective. The RLFAP
problem can be used as an example.

Added support for computing heat maps that help visualize
the internals of search, i.e. where the bulk of the effort
goes over a population of instances.

Added support for modeling quasigroup problems as binary
problems, using global constraints, or anything
in between, i.e. hybrid models using both binary and
global (all-different) constraints.

Unary constraints are now properly supported.

Better documentation, generated with doxygen. `make html'
and `make latex' now generate HTML, DVI, PostScript and
PDF documentation in the doc/html and doc/latex
directories. The HTML documentation is available online
here and the other formats are
available through the menus on this page. The old Texinfo
documentation has been removed.

Removed the IDC branching methods, as they have not been
used in a long time and had problems to begin with.

Added support for global constraints. Implemented a
propagate()-based filter that runs the propagate() method
of all the constraints until there are no more changes to
be propagated. The filter is called Propagate and it is
now the default command-line filter (equivalent of
-f propagate).

Implemented GAC-Schema as per "Arc consistency for
general constraint networks: preliminary results" by
Christian Bessiere and Jean-Charles Regin. The object is
csp::constraints::Table. It is based on a set of tuples
of allowed value combinations.

Implemented a very inefficient AllDifferent global
constraint based on GAC-Schema, as a quick exercise. It
runs out of memory for more than 8 variables, but it is
useful for testing. See csp::constraints::AllDifferent.

Implemented a real AllDifferent global constraint that
works for permutation constraints only (for now) because
it does not implement free path analysis yet. It is good
for the likes of n-queens and quasigroups. See
csp::constraints::PermutationAllDifferent.

Added the possibility of controlling the heuristics used in
the pre-refutation-analysis rapid-randomized-restarts loop
that attempts to find lower upper bounds for the
refutations of insoluble subtrees. The heuristics can be
specified using the --ra-rrr-heuristic flag.

Implemented heuristic equivalence (see "Heavy-Tailed
Phenomena in Satisfiability and Constraint Satisfaction
Problems" by Carla P. Gomes, Bart Selman, Nuno Crato
and Henry Kautz). This is passed as an additional
colon-separated field in the heuristic specification.
Setting this to a value H greater than 0 means all choices
that receive a score within (100*H) percent of the highest
score are considered equally good. This is useful for
expanding the choice of tie-breaking. Quality and
equivalence are mutually exclusive, i.e. if quality is
less than 1, then equivalence must be 0, and if
equivalence is greater than 0, then quality must be 1.
Example:
-h min-domain:1::off:0.15

Refactored the code. The object called Decomposition is
now called Solver. The decompositions have been renamed
branchings, and instead of being derived from
Decomposition they are derived from a much lighter object
called Branching. The Maintaining Consistency branching
method (MC) is now called Binary. The code analyzing
refutations is now a separate branching method called
RefutationAnalysis. The code common to all examples is
much, much cleaner.

Added code to compute restricted optimal refutations, i.e.
refutations that only include the variables involved in
the corresponding actual refutation. The command-line
flag is --ra-restricted.

All --ffa* command-line options have been renamed to
--ra* (from refutation analysis).

Renamed csp_pareto_levy to csp_cdf and csp_power_law
to csp_pdf.

Renamed --phase-transition and PHASE_TRANSITION to
--crossover-point and CROSSOVER_POINT.

Added code for computing the variables involved in each
refutation. Useful for comparisons between actual,
optimal and quasi-optimal refutations, among other things.
The command-line flag is --refutation-variables.
Version 0.4.2 (May 18th, 2005):

The QuasiGroup example has been enhanced to handle
Quasigroup with Holes problems using both random and
balanced holes. The code solving quasigroup existence
problems has been disabled, as the constraint model was
incorrect. The Quasigroup with Holes problems are known
to exhibit heavy tails and have been used, together with
random problems, in our analysis of search heuristics,
optimal refutations and heavy tails.

The network streaming code has been taken out, as it
has not been used in a while and it serves no purpose in my
current research.

The variable orderings described in "Boosting
systematic search by weighting constraints" by
Boussemart et al. have been implemented. The names are
max-weighted-degree-variable, min-weighted-degree-variable,
min-domain/weighted-degree-variable and
max-domain/weighted-degree-variable.

The naming scheme for the variable and value ordering
heuristics command-line flags has been made more
consistent. All variable ordering heuristics end with
-variable and all value ordering heuristics end with
-value.

The min-*-inconsistency value ordering heuristics have
been renamed to min-*-conflicts. See "Look-ahead value
ordering for constraint satisfaction problems" by
Daniel Frost and Rina Dechter.

The max-domain-size heuristic (which really means largest
post-propagation minimum domain size) has been
implemented. See "Look-ahead value ordering for
constraint satisfaction problems" by Daniel Frost and
Rina Dechter. The command-line flags are
-h max-min-domain-value and -h min-min-domain-value
for the anti-heuristic.

The experiment scripts now plot both scattered and average
graphs.

Value ordering heuristic analysis is now possible for the Random,
Lexical and SolutionProbability heuristics.

When the value ordering analysis is turned on, the library prints out
the proximity of each bad selection to a perfect choice.

New variable ordering heuristic that selects the variables
with the highest/lowest conflict count (over all their
values). The command-line flags for the static versions
of the heuristic are -h max-static-impact and
-h min-static-impact (for the anti-heuristic). For the
dynamic versions the command-line flags are
-h max-dynamic-impact and -h min-dynamic-impact (for the
anti-heuristic).

New variable and value ordering heuristics based on the
best post-propagation branching contribution. The command-line
flags are -h min-branching-variable and
-h max-branching-variable (for the variable ordering
heuristic and anti-heuristic), and -h min-branching-value
and -h max-branching-value (for the value ordering
heuristic and anti-heuristic).

Code to analyze per-mistake-point fail-firstness and
compute optimal refutations has been added. The command-line
option is --fail-firstness-analysis, or --ffa. Such an
analysis can take a very long time; running it for
anything but the smallest problems requires a good-size
cluster. See all the --ffa* command-line flags for
details. Separate search trees are generated for each
mistake point if --ffa and --search-tree are both given.

The postprocessing of data in the experiment*.sh scripts
has been greatly enhanced.

The rlfap/celar/scen11 directory contains subproblems of RLFAP
SCEN11 useful for benchmarking and fail-firstness analysis.

The Basic decomposition has been renamed to KWay.

The Solo examples have been removed. Not useful anymore.

Support for computing the backdoor has been added, as
described in "Backdoors To Typical Case Complexity" by
Ryan Williams, Carla P. Gomes and Bart Selman, and in "The
Backdoor Key: A Path to Understanding Problem Hardness" by
Yongshao Ruan, Henry Kautz and Eric Horvitz. The command-line
option is --backdoor.

Support for computing the list of branching variables (on
the way to a solution) and their assignments. The command-line
option is --branching-assignments.

Support for drawing graphs of subproblems has been added.
See the --problem-graph-depth command-line option for
details. The library now generates two graphs, one as
soon as search reaches the given depth, and another right
before search backtracks from that depth. The purpose of
the first is to allow the visualization of very difficult
problems for which search times out. The purpose of the
second is to incorporate information gathered when a
solution is found (i.e. backdoors).

Support for averaging the number of loose constraints per
depth has been added. Such constraints (tightness=0) are
interesting because they may artificially and
unnecessarily connect otherwise disconnected subproblems,
making it harder to make ordering decisions during search.

Constraints that become loose (tightness=0) can now be
optionally disabled. This may speed things up, mostly
because certain heuristics (like min-domain/degree) become
more accurate, taking into account only those constraints
that matter in computing the degree. The removal of loose
constraints is not on by default, but can be turned on
with the --deactivate-loose-constraints command-line flag.

Added support for printing the exact variables and values
selected on the last path traversed (usually the one
containing the solution). This includes the wrong
selections that led to failure points. The command-line
option is --last-path-selections.

Changed the computation of the number of heuristic
failures to account for failed propagations.

The Jumper class has been renamed "Retraction"
to better follow existing CSP terminology.

Some code cleanups:

the flags controlling decomposition statistics and
those that control decomposition features are grouped
into separate BitFlags objects.

printing statistics is no longer done by the library,
but rather by the code using it; the library now
provides the outside world read-only access to the
statistics so that they can be processed in any way
(rather than just printed).

The value ordering heuristic mistakes are reported after
each solution if --heuristic-mistakes is given.

The total and per-depth number of backtracks, as well as
the number of backtracks per mistake point, are reported
after each solution if --backtracks is given.

The --graph command-line option has been renamed to
--problem-graph.

The code can now generate a plot of the search tree in
AT&T's Graphviz format. Nodes are properly labeled with
the corresponding decision if the --last-path-selections
command-line option is given. The corresponding command-line
options are --search-tree and --search-tree-depth.

The bell-flattening technique has been removed from the
code, as a better understanding of search made its
uselessness clear.
Version 0.4.1 (April 20th, 2004):

Support for generating problem graphs using AT&T's
Graphviz DOT format has been added. To generate
the graphs, first use the --graph=FILE command-line
option, then run one of these commands to lay out the
graph:
dot FILE.dot -Tpng -o FILE.png
fdp FILE.dot -Tpng -o FILE.png
neato FILE.dot -Tpng -o FILE.png
circo FILE.dot -Tpng -o FILE.png
twopi FILE.dot -Tpng -o FILE.png
twopi is the only one that processed the RLFAP SCEN11
problem in a reasonable amount of time, but dot generates
nicer layouts. dot needed 400 MB of RAM to generate the
RLFAP SCEN11 graph, and GIMP could not load the resulting
PNG file.

The greedy processing of variables whose domains have
been reduced to a single value is now off by default.
It can be turned on by calling
Decomposition::setGreedyVariableProcessing() or with the
--greedy command-line flag. Greedy variable processing
improves performance most of the time by significantly
reducing the depth of the search tree, but I think having
it on by default adds another factor to the complexity of
the search, making an accurate performance analysis more
difficult.

Added code to compute the number of times a variable is
selected at each depth. The command-line option is
--variable-selection-counters.

Because some of the statistics gathered by the library
are really expensive to compute (notably the average
constrainedness and average tightness), the Decomposition
object and the corresponding command-line options have
been changed to allow different statistics to be computed
independently. The corresponding command-line flags are:
--average-uninstantiated-variables
--average-modified-variables
--average-domain-size
--average-active-constraints
--average-degree
--average-constrainedness
--average-density
--average-tightness
--extra-propagation-statistics
--variable-selection-counters

The Decomposition object now supports a concept called
"optimistic search", in both static and dynamic form.
The corresponding command-line options are
--static-optimism and --dynamic-optimism. They both
specify a percentage of values to be removed from the
domain of a variable, either before search (in the static
version) or during search. Each variable's domain of
values is sorted according to the main value ordering
heuristic, and a percentage of the values is discarded
(i.e. not considered during search), on the theory that
they are not really useful. This is based on a study of
the quality of the MinInconsistency heuristic which showed
experimentally that if the current domain of a variable is
sorted according to the heuristic preference, most of the
time the value suggested by the heuristic is very close to
one or more values that are perfect choices (i.e. they
lead to a solution). Discarding values that are
considered "bad" by the heuristic may produce a problem
that still contains at least one of the solutions of the
original problem while being smaller and thus easier to
solve. The only decompositions currently supporting
dynamic optimism are MC and Basic.

A new variable ordering heuristic called
"domain wipeout" has been implemented. It can select
variables in order of the number of times they have caused
propagation failures due to their domain being wiped out.
The command-line flags are -h max-dwos and -h min-dwos
(for the anti-heuristic).

Added support for calculating the number of
search tree nodes per depth as well as the number
of failures of the value ordering heuristic.

Added support for the Radio Link Frequency Assignment
Benchmarks, as provided by the Centre
d'Electronique de l'Armement (France). See
B. Cabon, S. de Givry, L. Lobjois, T. Schiex and
J.P. Warners, CONSTRAINTS journal, 1998. A copy
of the RLFAP CELAR SCEN11 data files has been
included in the examples/rlfap/celar/scen11
directory. To run the tests, try:
./csp_rlfap -d mc -f ac3 -q --greedy \
  -h min-domain/degree -h rand-var \
  -h min-dynamic-inconsistency -h rand-var -h rand-val \
  --data-directory=rlfap/celar/scen11

All variable and value ordering heuristics now have
access to the decomposition they're plugged into, so
that more information is available to them for the
purpose of making a selection.

A new script (peak-experiment.sh) has been added
to draw a 3D graph of a problem's peak of
difficulty.

The experiment.sh script now compresses the log so that
experiments take significantly less disk space.

Various new search statistics can now be computed during
search: the average domain size per depth, the average
number of active constraints per depth, the average
variable degree per depth, the average constrainedness
per depth, and the average constraint density and
tightness per depth. They can be turned on by specifying
the --statistics command-line option. Note that
constrainedness is computed as defined in "The
Constrainedness of Search" by Ian P. Gent, Patrick
Prosser and Toby Walsh, 1999, for constraint satisfaction
problems.

Random problems can now be saved and reloaded.
The corresponding command-line options are
--save-problem and --load-problem.

The quality of certain value ordering heuristics can now
be analyzed (random problems only). This works by running
the search twice. The first time, we calculate all the
solutions to the problem and store them in a file (using
the --save-solutions command-line option). The second
time, we load the saved solutions using the
--load-solutions command-line option and turn the
analysis on by invoking the heuristic with "on" as the
third option in its colon-separated list of flags:
./csp_random --load-problem=p.txt --load-solutions=s.txt \
  -d mc -f ac3 -h min-domain -h min-inconsistency:1::on \
  -h rand-var -h rand-val --fancy

The Minimum Inconsistency heuristic now has a
dynamic version. The command-line flags are:
-h min-static-inconsistency
-h min-dynamic-inconsistency
-h max-static-inconsistency
-h max-dynamic-inconsistency

Some heuristics have been changed so that their
logic can be reversed. As a result, several new
heuristics are now available: Maximum Domain,
Minimum Degree, Maximum Domain/Degree, Maximum
Inconsistency, etc.

A new variable ordering heuristic has been
implemented: "Maximum Solution
Probability". This heuristic selects a
variable based on the probability of a solution
being found in the subproblem that would remain
to be solved if that variable were to be
selected. The heuristic comes in both static
(constraint probabilities are computed only once,
before search) and dynamic versions. Both are
very expensive. The command-line options to
activate them are:
-h max-static-sp-var
-h max-dynamic-sp-var
-h min-static-sp-var
-h min-dynamic-sp-var

The "Maximum Solution Probability" concept has also been
implemented as a value ordering heuristic. This value
ordering heuristic returns the list of values that, when
selected, maximize the probability of finding a solution
in the remaining subproblem after consistency is
restored. The anti-heuristic is also available. The
command-line options are -h max-sp-val and -h min-sp-val.

It is now possible to set a certain quality for each
heuristic. For instance, specifying the min-domain
heuristic with -h min-domain:0.5 instead of just
-h min-domain will cause the heuristic to sort the
variables according to their domain size, identify the
groups of variables that have equal values, create
another vector with just one variable from each such
group, pick the one in the middle (consistent with our
0.5 quality selection) and then pass its entire group to
the next heuristic for tie-breaking. This can be useful
in studying the impact of a good or bad heuristic on the
performance of various algorithms.

Heuristics can now be made active only at certain depths
of the search tree. The depths are specified in terms of
the percentage of instantiated variables, and the
intervals where a heuristic is to become active are
specified after the quality, as a comma-separated list.
For example, -h min-domain:0.7:10-20,85-95 specifies that
the minimum domain heuristic is to be applied with a
quality of 0.7 at depths where the percentage of
instantiated variables is between 10 and 20% or between
85 and 95%.

The Goldilocks code as well as the related experiment
script have been updated, making it possible to reproduce
the experiments in the paper. See the
examples/goldilocks-experiment.sh script and the .config
files in the examples/experiments/goldilocks directory.

The random problem generator used a slightly different
definition of constraint density than the one used by
other researchers in the field. Specifically, it was
using the definition found in "Understanding and
Improving the MAC Algorithm" by Daniel Sabin & Eugene
Freuder, CP97: "the fraction of the possible constraints
beyond the minimum n-1, that the problem has". A CSP
with n variables requires at least n-1 constraints for
its graph to be connected. If the graph is not connected,
the subgraphs can be treated as independent problems and
solved separately, which was the rationale behind the old
definition. It seems, however, that most researchers have
standardized on a density definition that is the fraction
of all possible constraints that the problem has. The
random problem generator has been changed to use this
standard definition by default, but can still use the old
one if the --non-standard-density command-line option is
given.

All the examples (including the random problem generator)
now use by default a new pseudo-random number generator,
as presented in William H. Press et al.'s "Numerical
Recipes in C", Second Edition with corrections (1994),
page 282. This should make it easier for other
researchers to reproduce our results (as well as for us
to reproduce theirs), and will also ensure that, given a
set of parameters, the same problem is generated on all
supported platforms. The system's pseudo-random number
generator is still available as an option (through
--system-prng).

The random problem generator used so far was "model B".
To make it easier to reproduce other people's
experiments, models A, C and D were implemented. The
models are explained in examples/Random.cc. A command-line
option (--model) can be used to select a particular
model. See "Random Constraint Satisfaction: Flaws and
Structure" by Ian P. Gent, Ewan MacIntyre, Patrick
Prosser, Barbara Smith and Toby Walsh for an in-depth
discussion of these models.

The random problem generator now defaults to generating
flawless problem instances. See "Random Constraint
Satisfaction: Flaws and Structure" by Ian P. Gent, Ewan
MacIntyre, Patrick Prosser, Barbara Smith and Toby Walsh
for a discussion of this topic. Briefly, the models
currently used by random problem generators are
asymptotically uninteresting because they start
generating trivially insoluble problems. The random
problem generator can still be forced to create flawed
problem instances by using the --flawed command-line
option.

Improved the experiment.sh script to be able to average
results over several runs of the same problem created
with different seeds. Random problems now have MD5
signatures, for comparison purposes.

Implemented AC3d. See M.R.C. van Dongen's paper
"AC-3d, an Efficient Arc-Consistency Algorithm with a
Low Space-Complexity", which is available online from
his web page.

Implemented AC8. Same as AC3, but keeping a queue of
variables rather than constraints. See A. Chmeiss and
P. Jegou: "Efficient path-consistency propagation",
International Journal on Artificial Intelligence Tools,
7(2):121-142, 1998.

Implemented AC2000. Based on AC8, but improves slightly
upon it by keeping track of the most recently pruned
values and only checking for supports for a value if at
least one such support has been removed from the other
variable involved in the constraint. See "Refining the
Basic Constraint Propagation Algorithm" by Christian
Bessiere and Jean-Charles Regin.

Implemented AC2001. Based on AC8, but improves slightly
upon it by keeping an extra integer to remember the last
support checked. See "Refining the Basic Constraint
Propagation Algorithm" by Christian Bessiere and
Jean-Charles Regin.

A command-line option that controls the debug level has
been added to all examples (-V). It defaults to
"notice".

A command-line option that allows the user to verify the
problem's consistency without running a search has been
added to all examples (--consistency-only).

Added support for plotting the number of
constraint checks.

Added support for plotting the average total
number of backtracks.

Added support for plotting the value ordering
heuristic failures.

Added support for plotting 3D graphs of the
average number of backtracks per depth.

Added support for plotting 3D graphs of the
number of value ordering heuristic failures per
depth.

Fixed a bug in the random problem generator that
was causing the bit array used by constraints to
represent valid pairs to be incorrectly resized.

Renamed AC3p (again), as that name seems to be taken
too. It's now called AC3x.

Added a "Dummy" filter that can be
combined with the "Basic" decomposition
to create a "Generate And Test" search
algorithm.

Added support for building under KDE's KDevelop 3.0.x.

Loads of other minor fixes and improvements.
Version 0.4.0 (December 11th, 2003):

The library now uses wide characters/strings everywhere.

Small fixes throughout the library, too many to mention.

The API documentation can now be generated with Doxygen.

Implemented a mechanism for restricting the search to a
subset of the original problem. See the Decomposition
object for details.

Added an implementation of the Local Changes (LC)
algorithm, as presented in "Solution Reuse in Dynamic
Constraint Satisfaction Problems" by Gerard
Verfaillie and Thomas Schiex.

Added an implementation of the Local Changes (LC)
algorithm combined with MAC-like additional constraint
propagation and the ability to select a new variable after
removing failed values from the domain of the current
variable.

The MinInconsistency value ordering heuristic has been
fixed.

AC5 was renamed AC3 Plus as it was really just an
attempt at improving AC3 rather than an AC5 filter.

AC3 and AC3p have been fixed so that they do not assume
there is at most one constraint between any pair of
variables.

Made the strategy for instantiating variables more
flexible, allowing both propagate() and
propagateAndSearch() to mark only certain variables as
instantiable. Previously both these functions would
automatically instantiate any variable that had its
domain reduced to a single value. Very useful for LC.
See Decomposition::propagateAndSearch() for details.

Improved the variable selection mechanism. Reselecting a
variable that has already been selected at the current
depth is now optionally possible. It is also possible to
specify a set of variables whose selection is to be
avoided. See Decomposition::selectVariable() for details.

Floating point variables (single & double precision)
have been implemented as variables with discrete domains.

Slightly reorganized the way header files are included,
eliminating unnecessary dependencies and reducing
compilation time.

--enable-predictable is no longer a compile-time option,
but rather a runtime option.

Better logging, which can be activated separately for
each library component: decomposition, filter, jumper,
etc. It should be very helpful in tracking down problems
and following the algorithm's logic. See the -l flag in
the examples; it takes a comma-separated list of patterns
matching logging categories. All messages logged by a CSP
library component use the component's C++ namespace as
the category. The -i/--indentation command-line option
will cause the logging to be indented consistently with
the search depth, making it even easier to follow the
flow of the decomposition algorithms.

The code now compiles with the upcoming version of GCC
(3.4).

make pch: a new target that builds the precompiled
headers supported in GCC 3.4. It still needs some work
for better integration in the dependency system.
Hopefully new versions of the GNU tools will automate
that.

The library has been ported to Microsoft Visual C++ .NET
7.0. It has not been tested under 7.1, but with a bit of
luck it should just work :). Note that since GCC and
Visual C++ use different STL implementations, the
library's search algorithm may follow a different path on
one platform than it does on the other for the same exact
problem and configuration. This is normal. One way to
minimize such differences would be to always add the
lexical variable and value ordering heuristics as the last
heuristics in their respective lists.

A bug in Cygwin's C++ compiler (GCC) prevents code using
wcout/wcerr from compiling for now. Hopefully some future
update to Cygwin's compiler will fix this.

A lot of effort has been put into making the library work
better with the latest versions of GNU Autoconf, GNU
Automake and GNU Libtool. Many cosmetic and
installationrelated changes.

A better explanation of the naming scheme used for
decompositions, filters and search algorithms is now in
use and can be found in doc/NamingScheme.txt. As part of
cleaning up the naming scheme, the MAC, IDCMAC and LCMAC
decompositions have been renamed to MC, IDCMC and LCMC.
When combined with filters like FP that do not restore
arc consistency, the old names were confusing, as they
suggested that the resulting search algorithm was
maintaining arc consistency, which was false.

The AIEDAM 2003 version of the Goldilocks paper is now
included with the distribution.
Documentation
This description of the library is a bit out of date. The
general structure of the library has not changed, but a lot
more classes and algorithms have been added, and they're
completely described in the API documentation. Reading the News might also be a
good idea.
API
Download
Source Code: