For a long time, I’ve been dreaming of building a small world (on a computer) where I can experiment with evolution. Too ambitious? Perhaps. We already know that order and complexity can emerge from simple ingredients (e.g. cellular automata). How about intelligence? Will it emerge out of simple rules?
To be more specific, I want to make a slightly more complicated version of a cellular automaton with somewhat more realistic, life-like entities. I’d like to build an environment in which creatures can think, act, learn, reproduce, and die. Changes introduced by reproduction will create diversity among them, and natural selection will keep the entities that fit the current environment. The process goes on, and in the end, in the ideal scenario I’m dreaming of, progressively more intelligent entities would emerge from such an environment.
I know this project will not be easy. Many people have probably already tried and failed. Can I contribute at all by doing it as a hobby? I don’t know. Will I have a unique perspective on the problem? Maybe. Even if I don’t achieve much, that’s fine. I’ll learn something valuable through the process (at the very least, more programming skills).
There are two important ingredients for this. The first is a genetic algorithm implemented through inheritance and natural selection. I won’t use an explicit training objective; instead, the entities simply have to survive in the environment. If they thrive, they survive. The second is neural networks whose structure can change through evolution and which can be trained over the course of a single life.
Articles that I will write in this blog will be like a journal for this exploration. In each article, I’ll focus on a small topic. Let’s see how it goes. I’m excited.
Please follow these instructions to get my software on your computer.
A good endpoint of data analysis is a model that explains your data, and building one typically requires nonlinear function fitting. In Matlab, unless you have toolboxes, you are left with the ‘fminsearch’ function, which has limited capability. It would be nice to do more sophisticated nonlinear fitting without a toolbox. In this post, I’ll introduce a very nice external function, ‘fminuit’, that does exactly that.
Fminuit is based on CERN‘s function-fitting library Minuit. You can find a full explanation of what Fminuit is on its website. Briefly, Minuit is a program that has been used at CERN for decades to fit high-energy physics models to petabytes of data collected there. It is very robust, efficient, feature-rich, and highly controllable. Fminuit is a Matlab wrapper around Minuit. My git repository includes only a binary version of fminuit for 64-bit Intel Macs. Windows and Linux binaries are available from the Fminuit page, so please download one and put it on your Matlab path. (You can also compile it yourself.)
Fminuit itself is already powerful software, but I wrapped it further using my ErrorNum class to make it even more user-friendly. Let’s take a look at the beginning of the help of my ‘MinuitFitEN’ function.
function res = MinuitFitEN(func, initp, x, y, varargin)
func - fit function handle func(p, x)
initp - (nPar vec) estimated initial parameters
x - (any) 2nd argument of func
y - (any) ErrorNum data to fit including error
-- from here optional parameters (default)
'Extradim' - (scalar) extra degree of freedom to subtract (0)
'LowerLimit' - (nPar vec) lower bound of the parameters (false)
'UpperLimit' - (nPar vec) upper bound of the parameters (false)
'StepSize' - (nPar vec) step sizes of the parameters (false)
'minfunc' - (String) function to minimize ('PoissonNLL')
'command' - (String) command to give to fminuit
('set pri -1; scan; minimize; simplex; improve; minos')
You can run a nonlinear function fit with code as simple as the following example. First, let’s construct the data to fit.
x = 0:0.2:3;
n_obs = 30; % number of observations
sigma = 5; % STD of the data
% Through the fit, you need to estimate these variables
base = 2; % base of the power function
offset = 3; % offset from 0
% Construct your data based on the parameters above
% Note the last term is adding errors in observations.
y_obs = repmat(base.^x, n_obs, 1) + offset + sigma * randn(n_obs, length(x));
y_err = ErrorNum.create(y_obs, 1); % ErrorNum.create calculates SEM
figure(101);
plotEN(x, y_err, '.', 'capsize', 0); % EN functions are for the ErrorNum class.
This yields a figure like this. As expected, you get data points that roughly obey the power function, with some fluctuations. Now, let’s fit a function to it.
powfunc = @(p, x) p(1).^x + p(2); % this is your model function to fit the data.
% you need to guess the initial parameters
base_guess = 3;
offset_guess = 1;
initial_p = [base_guess, offset_guess];
fitresult = MinuitFitEN(powfunc, initial_p, x, y_err, 'minfunc', 'Chi2');
In the first line, I define an anonymous function as my model. If your model is in a .m file, you can get its handle with something like ‘@function_name’. With that, the fitting is done. The last two arguments of MinuitFitEN specify the type of fit (i.e. the function to minimize). Here I am doing a chi-square fit, so I choose ‘Chi2’. Since I do a lot of analysis on the spikes of neurons, the default is set to ‘PoissonNLL’, the negative log-likelihood function for the Poisson distribution. In applications where Gaussian statistics apply (like this example), ‘Chi2’ should be the choice.
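Conceptually, the two options correspond to minimizing the following objective functions (a simplified sketch of the standard definitions, not necessarily MinuitFitEN’s exact internals):

```matlab
powfunc = @(p, x) p(1).^x + p(2); % same model as above
% Chi-square: appropriate when errors are (approximately) Gaussian
chi2 = @(p, x, y, yerr) sum(((y - powfunc(p, x)) ./ yerr).^2);
% Poisson NLL: appropriate for count data such as spike counts
% (terms independent of p, such as log(y!), are dropped)
poiss_nll = @(p, x, y) sum(powfunc(p, x) - y .* log(powfunc(p, x)));
```

Minuit then searches the parameter space for the p that minimizes the chosen objective.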
Let’s take a look at the result of parameter estimation.
The output of the MinuitFitEN function is a structure that contains the estimated parameters with errors. The errors are one-sigma errors, so (assuming Gaussian statistics) there is a ~68% chance that the true value lies within the error. The first value should be 2, but it’s slightly off. The second value is nicely within our expectation, 3. The fit seems to be working! It is very nice to always have the error attached to your results. The output is, again, an instance of the ErrorNum class, so we get a reasonably formatted text output without adjusting anything.
MinuitFitEN also calculates goodness-of-fit statistics. They are stored in these variables.
>> fitresult.redchi2
ans =
1.5140
>> fitresult.redchi2sig
ans =
0.9033
‘redchi2’ stands for ‘reduced chi-squared’. I won’t give a detailed explanation of this value here; please refer to Wikipedia for its definition and usage. In short, if this value is close to 1, you are doing a reasonable fit. If it is too small, you are overfitting (or overestimating the error); if it is too large, you are underfitting (or underestimating the error).
‘redchi2sig’ gives you the significance of the goodness of your fit. It is like a p-value for the goodness-of-fit. For example, if it is larger than 0.99, your reduced chi-squared is one that a perfect fit would produce with only 1% probability. To say that your model explains the data well, you want a moderate value (e.g. 0.05 < redchi2sig < 0.95).
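If you want to compute such a significance yourself, the standard definition uses the chi-squared CDF, which is available in base Matlab through the regularized incomplete gamma function (a sketch based on the textbook definition; I haven’t verified MinuitFitEN’s exact internals):

```matlab
dof = length(x) - length(initial_p); % degrees of freedom (minus any 'Extradim')
chi2_total = fitresult.redchi2 * dof; % recover the un-reduced chi-squared
% chi-squared CDF without the Statistics Toolbox:
% chi2cdf(v, k) == gammainc(v/2, k/2)
redchi2sig = gammainc(chi2_total / 2, dof / 2);
```

A value near 0.5 means your chi-squared is typical of a correct model; values near 0 or 1 mean it is suspiciously small or large.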
These are important statistics, and I used them extensively in the last two papers I wrote (Ito et al., 2017; Ito et al., 2020). (These papers contain real-world examples of function fitting.) However, they are also difficult concepts to grasp if you are a beginner at function fitting. In future posts, I will demonstrate more cases showing how to use these values. I will also demonstrate more advanced usage of this function (such as setting parameter boundaries, using more complicated functions, etc.), but for now, please refer to its help to see how to use it.
In the last post, I explained the basic usage of the ErrorNum class. In this post, I will explain a little more advanced usage of the class, including the following topics.
How to use covariance for error calculation
Significance test of the value
Conversion of a value using a function and its derivative
How to use covariance for error calculation
Sometimes, two variables have errors that do not vary independently. Here is an example.
>> x = 4 + randn(100, 1);
>> y = x + 0.5 * randn(100, 1);
>> figure;plot(x, y, '.');
>> axis([0, 8, 0, 8]);
The resulting plot is,
In this case, the errors of x and y covary positively: when x is high, y also tends to be high, and vice versa. As a result, x + y will have a larger error than x - y, as illustrated in the next image.
If two values have finite covariance, a naive addition or subtraction in the ErrorNum class will give an incorrect error estimation.
>> x_en = ErrorNum.create(x, 1);
>> y_en = ErrorNum.create(y, 1);
>> x_en + y_en % error is the individual errors added in quadrature
ans =
ErrorNum:
8.3 ± 1.5
>> x_en - y_en % this error should be smaller
ans =
ErrorNum:
0.0 ± 1.5
In the ErrorNum class, you can address this by supplying the covariance of the two variables. There are dedicated methods that invoke covariance-enabled calculation.
% calculate covariance and feed it into addition.
>> cov_xy = cov(x, y); % note, cov gives a covariance 'matrix', so pick up (1, 2) or (2, 1) element.
>> x_en.covadd(y_en, cov_xy(1, 2))
ans =
ErrorNum:
8 ± 2
>> x_en.covsub(y_en, cov_xy(1, 2))
ans =
ErrorNum:
0.0 ± 0.4
Now the error of the subtraction is reduced from 1.5 to 0.4. By construction, y is x plus additional noise of 0.5 * randn, so getting a value close to 0.5 makes sense.
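These numbers follow the standard propagation rule var(x ± y) = var(x) + var(y) ± 2 cov(x, y), which you can check directly in plain Matlab, independently of ErrorNum:

```matlab
c = cov(x, y); % 2x2 sample covariance matrix of x and y
err_add = sqrt(c(1,1) + c(2,2) + 2 * c(1,2)) % error of x + y
err_sub = sqrt(c(1,1) + c(2,2) - 2 * c(1,2)) % error of x - y, ~0.5 by construction
```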
Significance test of the value
Once you have calculated a value, you often want to test its significance. The ErrorNum class can give you a simple p-value against zero, assuming the error is Gaussian.
%% significance test (vs. zero, assuming Gaussian errors)
>> a = ErrorNum(3, 1);
>> b = ErrorNum(2, 1);
>> a.sig % this works
ans =
0.0027
>> sig(a) % this also works
ans =
0.0027
>> sig(a - b) % this also works
ans =
0.4795
>> (a - b).sig % this doesn't work.
Error: File: ErrorNum_demo_advanced.m Line: 34 Column: 8
Invalid use of operator.
Note that there are two ways to invoke a class method: dot notation and function notation. Beware of dot notation: it only works on a variable that already holds an ErrorNum instance, not directly on the result of an expression.
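If you prefer dot notation, the workaround is simply to store the intermediate result in a variable first:

```matlab
d = a - b; % store the ErrorNum result in a variable
d.sig      % dot notation now works on the variable
```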
Conversion of a value using a function and its derivative
Another case where error calculation is complicated is when you convert a number with some function. Let’s look at the following example.
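Here is a sketch of such a conversion (I’m assuming the method is called ‘convert’ and takes the function and its derivative as handles; please check the class help for the exact signature):

```matlab
x_en = ErrorNum(10, 1)               % 10 ± 1
% apply log(x); the linearized error is |dy/dx| * err_x = err_x / x
y_en = x_en.convert(@log, @(x) 1./x)
```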
In general, normal built-in functions do not work on an ErrorNum (e.g. log(x) gives you an error), but by using this convert function and supplying its derivative directly, you can apply the function with appropriate (linearized) error propagation. Note that the same error in x is correctly shrunk in y, depending on the value of x.
Please follow these instructions to get my software on your computer.
Proper treatment of errors is essential in data analysis, but it is troublesome to deal with the error of every value you compute in your code. This Matlab class makes data and error analysis a little easier. Take a look at the following example. (Empty lines are omitted.)
% you know two values with an error
>> en1 = ErrorNum(5, 0.3)
en1 =
ErrorNum:
5.0 ± 0.3
>> en2 = ErrorNum(2, 0.2)
en2 =
ErrorNum:
2.0 ± 0.2
% You can do simple arithmetic on these values.
>> en1 + en2
ans =
ErrorNum:
7.0 ± 0.4
>> en1 * en2
ans =
ErrorNum:
10.0 ± 1.2
>> en1 / en2
ans =
ErrorNum:
2.5 ± 0.3
These calculations obey the basic error propagation rules. The results are displayed with proper significant figures inferred from the error: values are rounded at the leading digit of the error, except when that leading digit is 1, in which case one more digit is kept.
% You need not worry about digits much smaller than the error.
>> ErrorNum(1.2531, 0.2)
ans =
ErrorNum:
1.3 ± 0.2
>> ErrorNum(1.2531, 0.02)
ans =
ErrorNum:
1.25 ± 0.02
>> ErrorNum(1.2531, 0.002)
ans =
ErrorNum:
1.253 ± 0.002
This avoids a typical beginner’s mistake in science: unnecessarily reporting a 10-digit number for every result.
An ErrorNum can also be a vector or matrix. Currently, only element-wise calculations are supported.
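For example (a sketch based on the description above; I’m assuming the constructor accepts matching value and error arrays):

```matlab
v1 = ErrorNum([1, 2, 3], [0.1, 0.2, 0.3]);
v2 = ErrorNum([4, 5, 6], [0.4, 0.5, 0.6]);
v1 + v2 % element-wise addition with element-wise error propagation
```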