from symbolfit.symbolfit import *
Dataset¶
Five inputs are needed, which can be python lists or numpy arrays (more options will be added in the future!):
- x: independent variable (bin center locations).
- y: dependent variable.
- y_up: upward uncertainty in y per bin.
- y_down: downward uncertainty in y per bin.
- bin_widths_1d: bin widths in x.

Notes:
- Elements in both y_up and y_down should be non-negative values.
- These values are the "delta" in y:
  - y + y_up = y shifted up by one standard deviation.
  - y - y_down = y shifted down by one standard deviation.
- If the dataset has no uncertainties, set both y_up and y_down to ones with the same shape as x.
x = [12.5, 37.5, 62.5, 87.5, 112.5, 137.5, 162.5, 187.5, 212.5, 237.5, 262.5, 287.5, 312.5, 337.5, 362.5, 387.5, 412.5, 437.5, 462.5, 487.5]
y = [10.234884262084961, 122.1119384765625, 338.9125061035156, 810.2549438476562, 649.0571899414062, 351.8170166015625, 248.619873046875, 186.88763427734375, 141.754150390625, 103.42931365966797, 78.36450958251953, 60.3994255065918, 49.005863189697266, 33.54744338989258, 27.76025390625, 25.299283981323242, 19.729631423950195, 14.033162117004395, 15.06820011138916, 9.641764640808105]
y_up = [3.199200566092248, 11.050427072134475, 18.409576478113657, 28.464977495997715, 25.476600831771226, 18.756785881423355, 15.767684454189048, 13.670685216087149, 11.906055198537633, 10.170020337229811, 8.852373104570296, 7.771706730608908, 7.000418786736781, 5.7920154859852175, 5.268800044246317, 5.029839359395411, 4.441804973650936, 3.746086239931536, 3.8817779575072504, 3.105119102515732]
y_down = [3.199200566092248, 11.050427072134475, 18.409576478113657, 28.464977495997715, 25.476600831771226, 18.756785881423355, 15.767684454189048, 13.670685216087149, 11.906055198537633, 10.170020337229811, 8.852373104570296, 7.771706730608908, 7.000418786736781, 5.7920154859852175, 5.268800044246317, 5.029839359395411, 4.441804973650936, 3.746086239931536, 3.8817779575072504, 3.105119102515732]
bin_widths_1d = [25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25]
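Before fitting, it can help to sanity-check that the five inputs satisfy the requirements above (one entry per bin, non-negative uncertainties). The helper below is a hypothetical sketch, not part of SymbolFit:

```python
import numpy as np

# Hypothetical helper (not part of SymbolFit): sanity-check the five inputs
# against the requirements listed above.
def check_dataset(x, y, y_up, y_down, bin_widths_1d):
    arrays = [np.asarray(a, dtype=float) for a in (x, y, y_up, y_down, bin_widths_1d)]
    x, y, y_up, y_down, widths = arrays
    # Every input must have one entry per bin.
    assert x.shape == y.shape == y_up.shape == y_down.shape == widths.shape
    # Uncertainties are "deltas" in y, so they must be non-negative.
    assert np.all(y_up >= 0) and np.all(y_down >= 0)
    return True

check_dataset(x=[12.5, 37.5, 62.5],
              y=[10.2, 122.1, 338.9],
              y_up=[3.2, 11.1, 18.4],
              y_down=[3.2, 11.1, 18.4],   # symmetric uncertainties
              bin_widths_1d=[25, 25, 25])
```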
Plot the dataset to see what we will be fitting to:
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(figsize = (6, 4))
plt.errorbar(np.array(x).flatten(),
             np.array(y).flatten(),
             yerr = [np.array(y_down).flatten(), np.array(y_up).flatten()],
             xerr = np.array(bin_widths_1d) / 2,
             fmt = '.', c = 'black', ecolor = 'grey', capsize = 0,
             )
plt.savefig('img/toy1/dataset.png')
Configure the fit¶
Configure PySR to define the function space to be searched by symbolic regression:
from pysr import PySRRegressor
import sympy
pysr_config = PySRRegressor(
    model_selection = 'accuracy',
    niterations = 200,
    maxsize = 60,
    binary_operators = [
        '+', '*'
    ],
    unary_operators = [
        'exp',
        'gauss(x) = exp(-x*x)',
        'tanh',
    ],
    nested_constraints = {
        'tanh': {'tanh': 0, 'exp': 0, 'gauss': 0, '*': 2},
        'exp': {'tanh': 0, 'exp': 0, 'gauss': 0, '*': 2},
        'gauss': {'tanh': 0, 'exp': 0, 'gauss': 0, '*': 2},
        '*': {'tanh': 1, 'exp': 1, 'gauss': 1, '*': 2},
    },
    extra_sympy_mappings = {
        'gauss': lambda x: sympy.exp(-x*x),
    },
    loss = 'loss(y, y_pred, weights) = (y - y_pred)^2 * weights',
)
Here, we allow two binary operators (+, *) and three unary operators (exp, gauss, tanh) when searching for functional forms. The custom-defined gauss is included because this dataset has a peak; you can define any other function suited to the shapes in your data.
Nested constraints are imposed to prohibit overly nested compositions such as exp(exp(x)).
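Note that the custom gauss operator is defined twice: once in Julia syntax for the search ('gauss(x) = exp(-x*x)') and once in extra_sympy_mappings so that discovered expressions can be exported to sympy. A quick check of the sympy side:

```python
import sympy

# The sympy-side definition matching the Julia operator gauss(x) = exp(-x*x).
gauss = lambda v: sympy.exp(-v * v)

xsym = sympy.Symbol('x')
expr = gauss(2 * xsym)            # e.g. gauss(2*x), as it may appear in a candidate
print(expr)                       # exp(-4*x**2)
print(float(expr.subs(xsym, 0)))  # the peak value at x = 0 is 1.0
```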
The loss function is a weighted MSE, where the weight is set from the squared uncertainty by default in SymbolFit.
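As a standalone illustration of such a loss (a sketch, not SymbolFit's internal code), here we assume inverse-variance weights w = 1/sigma^2, which makes the weighted MSE behave like a per-bin chi-square:

```python
import numpy as np

def weighted_mse(y, y_pred, sigma):
    # Loss of the form (y - y_pred)^2 * weights, with the illustrative
    # (assumed) choice weights = 1 / sigma^2, i.e. a chi-square per bin.
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return np.mean((np.asarray(y, dtype=float) - np.asarray(y_pred, dtype=float)) ** 2 * w)

# Bins with larger uncertainty contribute less to the loss:
print(weighted_mse(y=[10.0, 122.0, 339.0],
                   y_pred=[12.0, 120.0, 340.0],
                   sigma=[3.2, 11.1, 18.4]))
```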
For the full list of PySR options, please see the PySR documentation.
Configure SymbolFit with the PySR config and for the re-optimization process:
model = SymbolFit(
    # Dataset: x, y, y_up, y_down.
    x = x,
    y = y,
    y_up = y_up,
    y_down = y_down,
    # PySR configuration of the function space.
    pysr_config = pysr_config,
    # Constrain the maximum function size, overwriting maxsize in pysr_config.
    # Set a higher value for more complex shapes, or when a lower one does not fit well.
    max_complexity = 60,
    # Whether to scale the input x to lie within 0 and 1 during the fits for numerical stability,
    # since large x could lead to overflow when there is e.g. exp(x) -> exp(10000).
    # Set this to False when your x values are at or close to O(1); otherwise True is recommended.
    # After the fits, the functions are unscaled to reflect the original dataset.
    input_rescale = True,
    # Whether to scale y during the fits for numerical stability;
    # options (when input_rescale is True) are: None / 'mean' / 'max' / 'l2'.
    # This is useful to stabilize fits when your y values are very large or very small.
    # After the fits, the functions are unscaled to reflect the original dataset.
    scale_y_by = 'mean',
    # Set a maximum standard error (%) for all parameters to avoid bad fits during re-optimization.
    # In the refit loop, when any parameter returns a standard error larger than max_stderr,
    # the fit is considered failed and is retried with fewer or different combinations of varying
    # parameters, freezing some parameters to their initial values so they stay fixed during re-optimization.
    # This avoids bad fits when the objective is too complex to minimize, which could cause some
    # parameters to have unrealistically large standard errors.
    # In most cases 10 < max_stderr < 100 suffices.
    max_stderr = 20,
    # Use y_up and y_down to weight the MSE loss during the SR search and re-optimization.
    fit_y_unc = True,
    # Set a random seed to get the same batch of functional forms every time (single-threaded),
    # or set None to explore more functions each time (multi-threaded and faster).
    # In most cases the function space is huge: retrying the fits with the exact same configuration
    # but a different random seed can yield a completely different set of candidate functions.
    # So if the candidate functions are not satisfactory this time, rerun a few more times with
    # random_seed = None or a different seed each time.
    random_seed = None,
    # Custom loss weights to set "(y - y_pred)^2 * loss_weights", overriding the weights from y_up and y_down.
    loss_weights = None,
)
SymbolFit it!¶
Run the fits: SR fit to search for functional forms -> parameterization -> re-optimization fit for improved best-fits and uncertainty estimation -> evaluation.
model.fit()
Compiling Julia backend...
[ Info: Started!
Expressions evaluated per second: 4.880e+05
Head worker occupation: 16.8%
Progress: 1073 / 3000 total iterations (35.767%)
====================================================================================================
Hall of Fame:
---------------------------------------------------------------------------------------------------
Complexity Loss Score Equation
1 1.640e-01 1.594e+01 y = 0.217
2 1.639e-01 3.594e-04 y = tanh(0.217)
3 1.639e-01 3.114e-04 y = 0.06188 + 0.14463
4 1.530e-01 6.836e-02 y = gauss(x₀) * 0.28092
5 1.498e-01 2.151e-02 y = gauss(x₀) * exp(-1.0052)
7 1.482e-01 5.400e-03 y = (gauss(x₀) + -0.26503) * gauss(-0.6659)
8 1.253e-01 1.673e-01 y = gauss(0.65944 + (-2.1137 * x₀)) * 0.65944
9 9.068e-02 3.237e-01 y = gauss(x₀ + x₀) * tanh(12.389 * x₀)
10 3.253e-02 1.025e+00 y = 0.10657 + (gauss(x₀ * -4.135) * (32.753 * x₀))
12 2.345e-02 1.636e-01 y = 0.10123 + ((2.9992 * x₀) * (gauss(x₀ * 3.7439) * 7.7795))
13 2.314e-02 1.327e-02 y = ((7.8283 * gauss(x₀ * 3.4351)) * tanh(x₀ * 3.0737)) + 0.09...
8083
14 2.023e-02 1.343e-01 y = ((gauss(x₀ * 5.4203) * 9.2954) * (32.695 * (x₀ * x₀))) + 0...
.12963
15 1.697e-02 1.761e-01 y = ((tanh(x₀ * 9.2977) * gauss(x₀ * -4.135)) * (32.753 * x₀))...
+ 0.1142
16 1.696e-02 6.705e-04 y = ((tanh(x₀ * 9.2977) * gauss(x₀ * -4.135)) * (32.753 * tanh...
(x₀))) + 0.11635
17 1.539e-02 9.692e-02 y = ((x₀ * 32.793) * (tanh(x₀ * 9.537) * gauss(x₀ * (-4.4861 +...
x₀)))) + 0.10611
19 1.522e-02 5.442e-03 y = 0.10611 + ((gauss(x₀ * (-4.4861 + x₀)) * tanh(9.537 * x₀))...
* ((x₀ * 32.793) + x₀))
20 1.435e-02 5.909e-02 y = ((gauss(5.4203 * x₀) * 9.2954) * (32.695 * (x₀ * x₀))) + (...
gauss(-1.3555 * x₀) * tanh(x₀))
21 1.043e-02 3.190e-01 y = (gauss(x₀ + x₀) * (x₀ + x₀)) + ((32.679 * (x₀ * x₀)) * (9....
2407 * gauss(5.5369 * x₀)))
22 7.417e-03 3.409e-01 y = gauss(1.7489 * x₀) + (((exp(5.0023) * ((x₀ + x₀) * x₀)) + ...
-0.95158) * gauss(-2.9382 * (x₀ + x₀)))
24 7.374e-03 2.899e-03 y = gauss((x₀ * 0.72341) + x₀) + (((exp(4.9937) * (x₀ * (x₀ + ...
x₀))) + -0.95158) * gauss(-2.9382 * (x₀ + x₀)))
25 7.366e-03 1.106e-03 y = gauss((-1.6765 + -0.044027) * x₀) + (gauss(-2.9382 * (x₀ +...
x₀)) * (-0.95173 + (exp(5.0023) * tanh((x₀ + x₀) * x₀))))
26 7.037e-03 4.568e-02 y = gauss(x₀ + (0.72341 * x₀)) + (((exp(4.9937) * (x₀ * (x₀ + ...
x₀))) + -0.95158) * gauss(((x₀ + -0.026455) + x₀) * -3.0667))
27 6.967e-03 1.000e-02 y = gauss(x₀ + (0.72341 * x₀)) + (gauss(((x₀ + -0.026455) + x₀...
) * -3.0667) * ((exp(4.9937) * (tanh(x₀) * (x₀ + x₀))) + -0.95...
158))
28 5.399e-03 2.549e-01 y = gauss((x₀ + x₀) + -0.20793) + (((((x₀ + x₀) * x₀) * exp(4....
9033)) + -1.2121) * gauss(((x₀ + (-0.20649 + x₀)) + x₀) * -2.7...
071))
29 1.259e-03 1.456e+00 y = 0.085168 + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) * x₀) +...
(gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * (x₀ * (exp(3.0085)...
+ 0.68374))))
30 1.159e-03 8.256e-02 y = 0.085168 + ((gauss(-7.6216 * ((x₀ + -0.32232) + x₀)) * ((e...
xp(3.0085) + x₀) * x₀)) + (tanh(x₀) * (exp(2.3478) * gauss((x₀...
+ x₀) + x₀))))
31 1.085e-03 6.599e-02 y = tanh(0.085168) + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) *...
tanh(x₀)) + (gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * ((x₀ *...
exp(3.0085)) + x₀)))
32 1.085e-03 1.387e-04 y = 0.085168 + ((gauss(-7.6216 * ((x₀ + -0.32232) + x₀)) * (x₀...
* ((exp(3.0085) + 0.8134) + x₀))) + ((exp(2.3478) * gauss((x₀...
+ x₀) + x₀)) * tanh(x₀)))
33 1.085e-03 2.694e-05 y = tanh(0.085168) + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) *...
tanh(x₀)) + (gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * (x₀ * ...
((0.8134 + exp(3.0085)) + x₀))))
35 1.085e-03 1.770e-05 y = 0.085168 + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) * tanh(...
x₀)) + (gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * (x₀ * ((exp(...
0.8134 * -0.22212) + exp(3.0085)) + x₀))))
38 1.084e-03 3.302e-05 y = 0.085168 + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) * tanh(...
x₀)) + (gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * (tanh(x₀) * ...
(((exp(3.0085) + x₀) + (exp(x₀) + -0.25604)) + x₀))))
39 7.348e-04 3.892e-01 y = 0.078115 + ((((exp(2.9582) * x₀) + x₀) * gauss(((-0.32242 ...
+ x₀) + x₀) * -7.6252)) + (gauss(x₀ + (x₀ * x₀)) * (((exp(2.29...
47) * gauss((x₀ + x₀) + x₀)) + gauss(x₀)) * x₀)))
40 6.427e-04 1.340e-01 y = 0.072888 + ((((exp(2.8813) * x₀) + 0.40023) * gauss(((-0.3...
2682 + x₀) + x₀) * -7.6383)) + (gauss(x₀ + (x₀ * x₀)) * (((exp...
(2.3297) * gauss((x₀ + x₀) + x₀)) + gauss(x₀)) * tanh(x₀))))
46 4.749e-04 5.041e-02 y = ((((0.24985 + ((((exp(1.8774 + 0.53518) + exp(x₀)) + x₀) *...
gauss(x₀ + (x₀ + x₀))) + tanh(2.1761))) * x₀) * gauss(x₀ + x₀...
)) + (gauss(-8.4402 * ((x₀ + -0.32594) + x₀)) * (exp(2.9794) *...
x₀))) + (-0.17567 * -0.32378)
48 4.149e-04 6.756e-02 y = ((((0.24985 + ((((exp(1.8774 + 0.53518) + exp(x₀)) + x₀) *...
gauss(x₀ + (x₀ + x₀))) + tanh(2.1761))) * x₀) * gauss(x₀ + x₀...
)) + (gauss((x₀ + -8.4402) * ((x₀ + -0.32594) + x₀)) * (exp(2....
9794) * x₀))) + (-0.17567 * -0.32378)
49 4.135e-04 3.427e-03 y = ((((0.24985 + ((((exp(1.8774 + 0.53518) + exp(x₀)) + x₀) *...
gauss(x₀ + (x₀ + x₀))) + tanh(2.1761))) * x₀) * gauss(x₀ + x₀...
)) + (gauss((x₀ + -8.4402) * ((x₀ + -0.32594) + x₀)) * (exp(2....
9794) * x₀))) + (-0.17567 * tanh(-0.32378))
51 4.034e-04 1.239e-02 y = ((((0.24985 + ((((exp(1.8774 + 0.53518) + exp(x₀)) + x₀) *...
gauss(x₀ + (x₀ + x₀))) + tanh(2.1761))) * x₀) * gauss(x₀ + x₀...
)) + (gauss((x₀ + -8.4402) * ((x₀ + -0.32594) + x₀)) * ((0.249...
85 + exp(2.9794)) * x₀))) + (-0.17567 * tanh(-0.32378))
53 3.753e-04 3.604e-02 y = (((x₀ * (0.27902 + (((x₀ + (exp(1.8872 + 0.54503) + exp(x₀...
))) * gauss(x₀ + (x₀ + x₀))) + tanh(2.1776)))) * gauss(x₀ + x₀...
)) + (gauss((x₀ + -8.4488) * ((x₀ + -0.32654) + x₀)) * (((0.57...
728 + 0.5323) + exp(2.9226)) * x₀))) + (-0.17337 * tanh(-0.322...
6))
58 3.262e-04 2.803e-02 y = ((((x₀ * 1.6412) * (gauss(x₀ * x₀) + (gauss(x₀ + (x₀ + x₀)...
) * ((exp(2.1727) + -1.2614) + tanh(x₀))))) * gauss(((0.0752 +...
x₀) + x₀) + (-0.10677 * x₀))) + (gauss(-1.2783 * ((-7.0971 + ...
(x₀ + x₀)) * (x₀ + (x₀ + -0.32725)))) * (x₀ * (exp(2.9968) + x...
₀)))) + 0.058048
---------------------------------------------------------------------------------------------------
====================================================================================================
Press 'q' and then <enter> to stop execution early.
Expressions evaluated per second: 4.950e+05
Head worker occupation: 15.6%
Progress: 2255 / 3000 total iterations (75.167%)
====================================================================================================
Hall of Fame:
---------------------------------------------------------------------------------------------------
Complexity Loss Score Equation
1 1.640e-01 1.594e+01 y = 0.217
2 1.639e-01 3.594e-04 y = tanh(0.217)
3 1.639e-01 3.114e-04 y = 0.06188 + 0.14463
4 1.530e-01 6.836e-02 y = gauss(x₀) * 0.28092
5 1.498e-01 2.151e-02 y = gauss(x₀) * exp(-1.0052)
7 1.482e-01 5.400e-03 y = (gauss(x₀) + -0.26503) * gauss(-0.6659)
8 3.234e-02 1.522e+00 y = x₀ * (23.069 * gauss(3.5955 * x₀))
10 2.345e-02 1.606e-01 y = ((gauss(x₀ * -3.7425) * 23.295) * x₀) + 0.10125
11 2.344e-02 4.798e-04 y = 0.10116 + (tanh(x₀) * (23.277 * gauss(3.698 * x₀)))
12 2.268e-02 3.301e-02 y = 0.083037 + (gauss((-4.1027 + x₀) * x₀) * (x₀ * 23.803))
14 1.977e-02 6.866e-02 y = 0.1395 + ((32.458 * (x₀ * x₀)) * (8.4834 * gauss(5.3467 * ...
x₀)))
15 1.687e-02 1.588e-01 y = ((32.833 * x₀) * (tanh(x₀ * 9.5802) * gauss(x₀ * -4.1588))...
) + 0.11537
17 1.538e-02 4.630e-02 y = 0.1069 + ((gauss((-4.4745 + x₀) * x₀) * tanh(9.5485 * x₀))...
* (32.803 * x₀))
18 7.381e-03 7.338e-01 y = (gauss(x₀ * -5.871) * (-0.95161 + (x₀ * (x₀ * exp(5.694)))...
)) + gauss(x₀ * 1.7184)
19 7.380e-03 1.815e-04 y = (gauss(x₀ * -5.871) * (-0.95161 + (tanh(x₀) * (x₀ * exp(5....
694))))) + gauss(x₀ * 1.7184)
20 7.362e-03 2.403e-03 y = ((gauss(x₀ * -5.8823) * (-0.96074 + (x₀ * (x₀ * exp(5.6827...
))))) + 0.014515) + gauss(x₀ * 1.7725)
22 7.285e-03 5.269e-03 y = gauss((x₀ * 0.72581) + x₀) + (gauss(5.818 * x₀) * (((exp(5...
.6816) * x₀) * (x₀ + -0.004605)) + -0.95048))
23 7.277e-03 1.141e-03 y = gauss((x₀ * 0.72581) + x₀) + (gauss(5.818 * x₀) * (((exp(5...
.6816) * x₀) * tanh(x₀ + -0.004605)) + -0.95048))
24 1.175e-03 1.823e+00 y = ((3.4969 * gauss((x₀ + (x₀ + -0.34781)) * -7.7541)) + ((ex...
p(2.3163) * gauss((x₀ + x₀) + x₀)) * x₀)) + 0.080337
26 1.148e-03 1.139e-02 y = (((0.0023219 + gauss(x₀ + (x₀ + x₀))) * (x₀ * exp(2.3264))...
) + (3.5148 * gauss(-7.7226 * ((x₀ + -0.34772) + x₀)))) + 0.06...
5625
27 1.148e-03 3.902e-04 y = (((0.0023219 + gauss(x₀ + (x₀ + x₀))) * (x₀ * exp(2.3264))...
) + (3.5148 * gauss(-7.7226 * ((x₀ + -0.34772) + x₀)))) + tanh...
(0.065625)
28 1.085e-03 5.654e-02 y = (((exp(2.3478) * gauss(x₀ + (x₀ + x₀))) * tanh(x₀)) + ((x₀...
* exp(3.0557)) * gauss((x₀ + (x₀ + -0.32255)) * -7.6165))) + ...
0.084971
30 1.085e-03 2.453e-05 y = ((gauss(-7.6216 * ((x₀ + -0.32232) + x₀)) * ((x₀ * exp(3.0...
085)) + x₀)) + ((exp(2.3478) * gauss((x₀ + x₀) + x₀)) * tanh(x...
₀))) + 0.085168
31 1.085e-03 3.421e-05 y = tanh(0.085168) + (((exp(2.3478) * gauss((x₀ + x₀) + x₀)) *...
tanh(x₀)) + (gauss(-7.6216 * ((-0.32232 + x₀) + x₀)) * ((x₀ *...
exp(3.0085)) + x₀)))
32 9.554e-04 1.270e-01 y = ((gauss(((x₀ + -1.2332) * (x₀ + (-0.3416 + x₀))) * 7.5116)...
* 3.5) + ((exp(2.3504) * gauss(x₀ + (x₀ + x₀))) * tanh(x₀))) ...
+ (0.098012 * exp(-0.14255))
36 6.990e-04 7.813e-02 y = 0.059704 + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀) + x₀...
) * exp(x₀ + 2.3874)) * x₀))) + (gauss(-8.307 * ((-0.32418 + x...
₀) + x₀)) * (exp(3.0125) * x₀)))
37 4.955e-04 3.441e-01 y = 0.059704 + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀) + x₀...
) * exp(x₀ + 2.3874)) * tanh(x₀)))) + (gauss(-8.307 * ((-0.324...
18 + x₀) + x₀)) * (exp(3.0125) * x₀)))
38 4.930e-04 5.074e-03 y = 0.059704 + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀) + x₀...
) * exp(x₀ + 2.3874)) * tanh(x₀)))) + (gauss(-8.307 * ((-0.324...
18 + x₀) + x₀)) * (exp(3.0125) * tanh(x₀))))
39 4.929e-04 5.347e-05 y = tanh(0.059704) + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀...
) + x₀) * exp(x₀ + 2.3874)) * tanh(x₀)))) + (gauss(-8.307 * ((...
-0.32418 + x₀) + x₀)) * (exp(3.0125) * tanh(x₀))))
41 4.884e-04 4.594e-03 y = 0.059704 + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀) + x₀...
) * (exp(x₀ + 2.3874) + -0.72816)) * tanh(x₀)))) + (gauss(-8.3...
07 * ((-0.32418 + x₀) + x₀)) * ((exp(3.0125) * x₀) + x₀)))
42 4.713e-04 3.578e-02 y = 0.059704 + ((gauss(x₀ + x₀) * (x₀ + ((gauss((x₀ + x₀) + x₀...
) * (exp(x₀ + 2.3874) + tanh(-0.72816))) * tanh(x₀)))) + (gaus...
s(-8.307 * ((-0.32418 + x₀) + x₀)) * ((exp(3.0125) + x₀) * x₀)...
))
44 4.147e-04 6.388e-02 y = ((gauss((-8.309 + x₀) * ((-0.32627 + x₀) + x₀)) * (((exp(2...
.9541) + -0.47421) + exp(x₀)) * x₀)) + (gauss(x₀ + x₀) * (x₀ +...
(((exp(2.4892) * gauss((x₀ + x₀) + x₀)) + 0.53028) * x₀)))) +...
tanh(0.0423)
47 3.581e-04 4.897e-02 y = 0.059704 + (((((gauss((x₀ + x₀) + x₀) * (exp(2.3874) + gau...
ss(x₀))) + tanh(0.61525 + (x₀ * x₀))) * tanh(x₀)) * gauss(x₀ +...
(x₀ * x₀))) + (gauss(((-0.32418 + x₀) + x₀) * -8.307) * (x₀ *...
(x₀ + exp(3.0125)))))
48 3.579e-04 3.620e-04 y = tanh(0.059704) + (((((gauss((x₀ + x₀) + x₀) * (exp(2.3874)...
+ gauss(x₀))) + tanh(0.61525 + (x₀ * x₀))) * tanh(x₀)) * gaus...
s(x₀ + (x₀ * x₀))) + (gauss(((-0.32418 + x₀) + x₀) * -8.307) *...
(x₀ * (x₀ + exp(3.0125)))))
50 3.557e-04 3.209e-03 y = 0.059704 + ((gauss(-8.307 * ((-0.32418 + x₀) + x₀)) * ((((...
exp(3.0125) + x₀) + x₀) + x₀) * tanh(x₀))) + (gauss(x₀ + (x₀ *...
1.167)) * (x₀ + (((gauss((x₀ + x₀) + x₀) * exp(2.3874 + (x₀ +...
-0.010221))) + x₀) * tanh(x₀)))))
52 2.559e-04 1.645e-01 y = 0.067319 + (((tanh(x₀) * (gauss(x₀) + exp(2.9419))) * gaus...
s(((-0.32354 + x₀) + x₀) * 8.3469)) + ((tanh(x₀) * (((exp(2.37...
42) * gauss((((x₀ + -0.33763) + x₀) + x₀) + x₀)) + x₀) + tanh(...
1.34))) * gauss(x₀ + ((0.71107 * x₀) * (x₀ + x₀)))))
54 2.288e-04 5.603e-02 y = 0.067319 + (((tanh(x₀) * (gauss(x₀) + exp(2.9419))) * gaus...
s(((-0.32354 + x₀) + x₀) * 8.3469)) + ((tanh(x₀) * ((((exp(2.3...
742) * gauss((((x₀ + -0.33763) + x₀) + x₀) + x₀)) + 0.20091) +...
x₀) + tanh(0.92297))) * gauss(x₀ + ((0.71107 * x₀) * (x₀ + x₀...
)))))
56 2.286e-04 4.020e-04 y = 0.067319 + (((tanh(x₀) * (gauss(0.067319 + x₀) + exp(2.941...
9))) * gauss(((-0.32354 + x₀) + x₀) * 8.3469)) + ((tanh(x₀) * ...
((((exp(2.3742) * gauss((((x₀ + -0.33763) + x₀) + x₀) + x₀)) +...
0.20091) + x₀) + tanh(0.92297))) * gauss(x₀ + ((0.71107 * x₀)...
* (x₀ + x₀)))))
58 2.284e-04 4.564e-04 y = 0.067319 + (((tanh(x₀) * (gauss((x₀ + -0.046513) + x₀) + e...
xp(2.9419))) * gauss(((-0.32354 + x₀) + x₀) * 8.3469)) + ((tan...
h(x₀) * ((((exp(2.3742) * gauss((((x₀ + -0.33763) + x₀) + x₀) ...
+ x₀)) + 0.20091) + x₀) + tanh(0.92297))) * gauss(x₀ + ((0.711...
07 * x₀) * (x₀ + x₀)))))
---------------------------------------------------------------------------------------------------
====================================================================================================
Press 'q' and then <enter> to stop execution early.
Checking if pysr_model_temp.pkl exists...
Loading model from pysr_model_temp.pkl
Re-optimizing parameterized candidate function 1/44...
Re-optimizing parameterized candidate function 2/44...bad fits 2/2...
Re-optimizing parameterized candidate function 3/44...bad fits 2/2...
Re-optimizing parameterized candidate function 4/44...bad fits 2/2...
Re-optimizing parameterized candidate function 5/44...bad fits 2/2...
Re-optimizing parameterized candidate function 6/44...bad fits 2/2...
>>> loop of re-parameterization with less NDF for bad fits 3/4...
Re-optimizing parameterized candidate function 7/44...
Re-optimizing parameterized candidate function 8/44...bad fits 4/4...
>>> loop of re-parameterization with less NDF for bad fits 1/4...
Re-optimizing parameterized candidate function 9/44...
>>> loop of re-parameterization with less NDF for bad fits 1/4...
Re-optimizing parameterized candidate function 10/44...
>>> loop of re-parameterization with less NDF for bad fits 2/8...
Re-optimizing parameterized candidate function 11/44...
>>> loop of re-parameterization with less NDF for bad fits 2/8...
Re-optimizing parameterized candidate function 12/44...
>>> loop of re-parameterization with less NDF for bad fits 3/8...
Re-optimizing parameterized candidate function 13/44...
>>> loop of re-parameterization with less NDF for bad fits 3/8...
Re-optimizing parameterized candidate function 14/44...
>>> loop of re-parameterization with less NDF for bad fits 2/8...
Re-optimizing parameterized candidate function 15/44...
>>> loop of re-parameterization with less NDF for bad fits 9/16...
Re-optimizing parameterized candidate function 16/44...
>>> loop of re-parameterization with less NDF for bad fits 9/16...
Re-optimizing parameterized candidate function 17/44...
>>> loop of re-parameterization with less NDF for bad fits 1/16...
Re-optimizing parameterized candidate function 18/44...
>>> loop of re-parameterization with less NDF for bad fits 3/32...
Re-optimizing parameterized candidate function 19/44...
>>> loop of re-parameterization with less NDF for bad fits 3/32...
Re-optimizing parameterized candidate function 20/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 21/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 22/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 23/44...
>>> loop of re-parameterization with less NDF for bad fits 3/64...
Re-optimizing parameterized candidate function 24/44...
>>> loop of re-parameterization with less NDF for bad fits 3/64...
Re-optimizing parameterized candidate function 25/44...
>>> loop of re-parameterization with less NDF for bad fits 3/64...
Re-optimizing parameterized candidate function 26/44...
>>> loop of re-parameterization with less NDF for bad fits 6/64...
Re-optimizing parameterized candidate function 27/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 28/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 29/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 30/44...
>>> loop of re-parameterization with less NDF for bad fits 3/64...
Re-optimizing parameterized candidate function 31/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 32/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 33/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 34/44...
>>> loop of re-parameterization with less NDF for bad fits 5/64...
Re-optimizing parameterized candidate function 35/44...
>>> loop of re-parameterization with less NDF for bad fits 5/64...
Re-optimizing parameterized candidate function 36/44...
>>> loop of re-parameterization with less NDF for bad fits 5/64...
Re-optimizing parameterized candidate function 37/44...
>>> loop of re-parameterization with less NDF for bad fits 1/32...
Re-optimizing parameterized candidate function 38/44...
>>> loop of re-parameterization with less NDF for bad fits 1/128...
Re-optimizing parameterized candidate function 39/44...
>>> loop of re-parameterization with less NDF for bad fits 1/256...
Re-optimizing parameterized candidate function 40/44...
>>> loop of re-parameterization with less NDF for bad fits 1/256...
Re-optimizing parameterized candidate function 41/44...
>>> loop of re-parameterization with less NDF for bad fits 1/128...
Re-optimizing parameterized candidate function 42/44...
>>> loop of re-parameterization with less NDF for bad fits 1/256...
Re-optimizing parameterized candidate function 43/44...
>>> loop of re-parameterization with less NDF for bad fits 1/256...
Re-optimizing parameterized candidate function 44/44...
>>> loop of re-parameterization with less NDF for bad fits 1/256...
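The re-optimization loop above refits each candidate's constants as free parameters. As a standalone illustration of the idea (an assumed sketch using scipy, not SymbolFit's actual refit code), take a simple candidate form from the Hall of Fame, e.g. y = c + a*x*gauss(b*x) at complexity 10 in rescaled x, refit it, and read parameter standard errors off the covariance diagonal:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate form c + a * x * gauss(b * x), with gauss(v) = exp(-v*v),
# mimicking the Hall of Fame entry at complexity 10.
def candidate(x, a, b, c):
    return c + a * x * np.exp(-(b * x) ** 2)

# Toy data: a peaked spectrum in rescaled x (0 to 1) with small Gaussian noise.
rng = np.random.default_rng(0)
xs = np.linspace(0.025, 0.975, 20)
sig = np.full_like(xs, 0.05)
ys = candidate(xs, 30.0, 4.0, 0.1) + rng.normal(0.0, sig)

# Refit the constants as free parameters, as the re-optimization step does.
popt, pcov = curve_fit(candidate, xs, ys, p0=[30.0, 4.0, 0.1], sigma=sig)
rel_stderr_pct = 100.0 * np.abs(np.sqrt(np.diag(pcov)) / popt)
# A refit could be flagged as "bad" if any relative standard error
# exceeded a max_stderr threshold (%), as configured above.
print(popt, rel_stderr_pct)
```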
Save results to output files¶
Save results to csv tables:
- candidates.csv: saves all candidate functions and evaluations in a csv table.
- candidates_reduced.csv: saves a reduced version with the essential information, without intermediate results.
model.save_to_csv(output_dir = 'output_Toy_dataset_1/')
Saving full results >>> output_Toy_dataset_1/candidates.csv
Saving reduced results >>> output_Toy_dataset_1/candidates_reduced.csv
Plot results to pdf files:
- candidates.pdf: plots all candidate functions with associated uncertainties one by one for fit quality evaluation.
- candidates_sampling.pdf: plots all candidate functions with total uncertainty coverage generated by sampling parameters.
- candidates_gof.pdf: plots the goodness-of-fit scores.
- candidates_correlation.pdf: plots the correlation matrices for the parameters of the candidate functions.
model.plot_to_pdf(
    output_dir = 'output_Toy_dataset_1/',
    bin_widths_1d = bin_widths_1d,
    #bin_edges_2d = bin_edges_2d,
    plot_logy = False,
    plot_logx = False,
    sampling_95quantile = False,
)
Plotting candidate functions 44/44 >>> output_Toy_dataset_1/candidates.pdf
Plotting candidate functions (sampling parameters) 44/44 >>> output_Toy_dataset_1/candidates_sampling.pdf
Plotting correlation matrices 44/44 >>> output_Toy_dataset_1/candidates_correlation.pdf
Plotting goodness-of-fit scores >>> output_Toy_dataset_1/candidates_gof.pdf