LLAMA results

All results were produced using the cross-validation splits in the repository, with 10 folds and 1 repetition.
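Such splits partition the instance set into folds of (nearly) equal size, each used once as the test set. A minimal sketch of how 10-fold splits with a single repetition could be generated; the seed and shuffling scheme are illustrative assumptions, not the repository's actual procedure:

```python
import random

def cv_folds(n_instances, n_folds=10, seed=0):
    """Assign each instance index to one of n_folds cross-validation folds.

    Illustrative sketch only: the seed and round-robin assignment are
    assumptions, not the splits shipped in the repository.
    """
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    # Deal shuffled indices round-robin so fold sizes differ by at most 1.
    folds = [[] for _ in range(n_folds)]
    for pos, i in enumerate(idx):
        folds[pos % n_folds].append(i)
    return folds

folds = cv_folds(100)
```

Each fold then serves as the test set once, with the remaining nine folds used for training.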
The best values within a type (i.e., baseline (excluding vbs), classif, regr, and cluster) and performance measure (i.e., percentage solved, PAR10, MCP) are colored green. Furthermore, the three best values across all groups within a performance measure are colored pink, and the absolute best value is colored red.

Performance is measured in three ways: the fraction of successfully solved instances (succ), the penalized average runtime with penalty factor 10 (PAR10), and the misclassification penalty (MCP), i.e., the additional runtime incurred over the virtual best solver.

algo      model                    succ   par10  mcp
baseline  vbs                      1.000  0.160  0.000
baseline  singleBest               0.990  0.373  0.123
baseline  singleBestByPar          0.990  0.373  0.123
baseline  singleBestBySuccesses    0.990  0.391  0.141
classif   rpart                    0.770  2.456  0.108
classif   randomForest             0.780  2.320  0.066
classif   ksvm                     0.790  2.222  0.067
cluster   XMeans                   0.760  2.566  0.130
regr      lm                       0.610  4.079  0.307
regr      rpart                    0.750  2.611  0.100
regr      randomForest             0.780  2.309  0.064
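The three measures in the table can be computed from per-instance runtimes. A minimal sketch, assuming a vector of runtimes for the selected solver, the virtual best solver's runtimes, and a timeout `cutoff`; the definitions follow standard algorithm-selection conventions (unsolved instances count as 10 times the cutoff in PAR10), and the numbers in the usage example are illustrative, not the scenario's data:

```python
def scores(selected_times, best_times, cutoff):
    """Compute succ, PAR10 and MCP for one selector.

    selected_times: runtime of the chosen solver on each instance
    best_times:     runtime of the virtual best solver (VBS) per instance
    cutoff:         timeout; unsolved instances count as 10 * cutoff in PAR10
    """
    n = len(selected_times)
    solved = [t < cutoff for t in selected_times]
    succ = sum(solved) / n
    par10 = sum(t if ok else 10 * cutoff
                for t, ok in zip(selected_times, solved)) / n
    # MCP: average extra runtime over the VBS, without the penalty factor.
    mcp = sum(min(t, cutoff) - b
              for t, b in zip(selected_times, best_times)) / n
    return succ, par10, mcp

# Illustrative data: two instances, both solved within a cutoff of 10.
print(scores([1.0, 5.0], [1.0, 2.0], cutoff=10))  # (1.0, 3.0, 1.5)
```

Note that the VBS achieves MCP 0.000 by construction, which matches the first row of the table.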

The following default feature steps were used for model building:

base

Number of presolved instances: 0

The cost of using the feature steps (adapted for presolving) is 0, or 0 on average.

The feature steps correspond to the following 95 / 95 instance features:

c_avg_deg_cons, c_avg_dom_cons, c_avg_domdeg_cons, c_bounds_d, c_bounds_r,
c_bounds_z, c_cv_deg_cons, c_cv_dom_cons, c_cv_domdeg_cons, c_domain,
c_ent_deg_cons, c_ent_dom_cons, c_ent_domdeg_cons, c_logprod_deg_cons, c_logprod_dom_cons,
c_max_deg_cons, c_max_dom_cons, c_max_domdeg_cons, c_min_deg_cons, c_min_dom_cons,
c_min_domdeg_cons, c_num_cons, c_priority, c_ratio_cons, c_sum_ari_cons,
c_sum_dom_cons, c_sum_domdeg_cons, d_array_cons, d_bool_cons, d_bool_vars,
d_float_cons, d_float_vars, d_int_cons, d_int_vars, d_ratio_array_cons,
d_ratio_bool_cons, d_ratio_bool_vars, d_ratio_float_cons, d_ratio_float_vars, d_ratio_int_cons,
d_ratio_int_vars, d_ratio_set_cons, d_ratio_set_vars, d_set_cons, d_set_vars,
gc_diff_globs, gc_global_cons, gc_ratio_diff, gc_ratio_globs, o_deg,
o_deg_avg, o_deg_cons, o_deg_std, o_dom, o_dom_avg,
o_dom_deg, o_dom_std, s_bool_search, s_first_fail, s_goal,
s_indomain_max, s_indomain_min, s_input_order, s_int_search, s_labeled_vars,
s_other_val, s_other_var, s_set_search, v_avg_deg_vars, v_avg_dom_vars,
v_avg_domdeg_vars, v_cv_deg_vars, v_cv_dom_vars, v_cv_domdeg_vars, v_def_vars,
v_ent_deg_vars, v_ent_dom_vars, v_ent_domdeg_vars, v_intro_vars, v_logprod_deg_vars,
v_logprod_dom_vars, v_max_deg_vars, v_max_dom_vars, v_max_domdeg_vars, v_min_deg_vars,
v_min_dom_vars, v_min_domdeg_vars, v_num_aliases, v_num_consts, v_num_vars,
v_ratio_bounded, v_ratio_vars, v_sum_deg_vars, v_sum_dom_vars, v_sum_domdeg_vars

Algorithm and Feature Subset Selection

To gain better insight into the scenario, forward selection was applied to the solvers and features to determine whether small subsets achieve comparable performance. Following this approach, we reduced the number of solvers from 22 to 1, resulting in a PAR10 score of 2.565 for the reduced model. Analogously, the model built on 1 of the original 82 features achieved a PAR10 score of 2.128. The selected features and solvers are listed below:
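Forward selection greedily grows a subset, at each step adding the candidate (solver or feature) that most improves the score, and stops when no addition helps. A generic sketch of this procedure; the `evaluate` function, which would return a cross-validated PAR10 score in this setting, is a hypothetical placeholder, and LLAMA's own selection routine may differ in detail:

```python
def forward_selection(candidates, evaluate):
    """Greedy forward selection over a list of candidates.

    evaluate(subset) -> score to minimize (e.g. cross-validated PAR10).
    Generic sketch; not LLAMA's exact implementation.
    """
    selected = []
    best = evaluate(selected)
    improved = True
    while improved and len(selected) < len(candidates):
        improved = False
        remaining = [c for c in candidates if c not in selected]
        # Score every one-element extension of the current subset.
        score, cand = min((evaluate(selected + [c]), c) for c in remaining)
        if score < best:
            selected.append(cand)
            best = score
            improved = True
    return selected, best
```

With a toy scoring function that rewards "a" and "b" but charges for subset size, the procedure picks exactly those two candidates and then stops, since adding anything else would worsen the score.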

Selected Features:
s_goal

Selected Solvers:
LCG.Glucose.free