In the case of regression trees, node risk is computed as the sum of
squared errors. To get a value that is meaningful to compare against,
it needs to be normalized by the number of samples in the node (or,
more generally, by the sum of sample weights in the node). Otherwise
the sum of squared errors depends strongly on the number of samples in
the node, and comparing it with the `regressionAccuracy` parameter is
not very meaningful. After normalization, `node_risk` is in fact the
sample variance of all samples in the node, which makes much more sense
and appears to be what was originally intended by the code, given that
node risk is later used as a split termination criterion via
```
sqrt(node.node_risk) < params.getRegressionAccuracy()
```
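A minimal sketch of the normalized quantity, assuming per-node accumulators for the sum of weights, of weighted responses, and of weighted squared responses (the names below are illustrative, not the actual fields of the tree implementation):
```
// Normalized node risk: weighted sample variance of the responses in the node.
//   sum_w  - sum of sample weights in the node
//   sum_y  - weighted sum of responses
//   sum_y2 - weighted sum of squared responses
double normalizedNodeRisk(double sum_w, double sum_y, double sum_y2)
{
    double mean = sum_y / sum_w;
    return sum_y2 / sum_w - mean * mean;   // E[y^2] - (E[y])^2
}
// The split termination check then compares a standard deviation with the
// requested accuracy: sqrt(normalizedNodeRisk(...)) < params.getRegressionAccuracy()
```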
- removed tr1 usage (dropped in C++17)
- moved includes of vector/map/iostream/limits into ts.hpp
- require opencv_test + anonymous namespace (added compile check; see the test skeleton after this list)
- fixed norm() usage (checks must go through cvtest::norm) and other conflicting functions
- added missing license headers
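An illustrative skeleton of a test file laid out to these requirements (the suite and test names are made up for the example):
```
#include "test_precomp.hpp"

namespace opencv_test { namespace {

TEST(ML_Example, uses_cvtest_norm)
{
    cv::Mat a = cv::Mat::zeros(3, 3, CV_32F);
    cv::Mat b = cv::Mat::zeros(3, 3, CV_32F);
    // checks must go through cvtest::norm, not cv::norm
    EXPECT_LE(cvtest::norm(a, b, cv::NORM_INF), 1e-6);
}

}} // namespace opencv_test::<anonymous>
```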
* Simulated Annealing for ANN_MLP training method (see the usage sketch after this commit list)
* EXPECT_LT
* just to test new data
* manage RNG
* Try again
* Just run buildbot with new data
* try to understand
* Test layer
* New data- new test
* Force RNG in backprop
* Use Impl to avoid virtual method
* reset all weights
* try to solve ABI
* retry
* ABI solved?
* still a problem with dynamic_cast
* Something is wrong
* Solved?
* disable backprop test
* remove ANN_MLP_ANNEALImpl
* Disable weight in varmap
* Add example for SimulatedAnnealing
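A hedged sketch of how the annealing method might be selected, assuming the anneal-related setters exposed on cv::ml::ANN_MLP; the layer sizes and the temperature schedule below are arbitrary example values:
```
#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

Ptr<ANN_MLP> createAnnealedMLP()
{
    Ptr<ANN_MLP> ann = ANN_MLP::create();
    Mat layers = (Mat_<int>(3, 1) << 2, 8, 1);   // input, hidden, output sizes
    ann->setLayerSizes(layers);
    ann->setActivationFunction(ANN_MLP::SIGMOID_SYM, 1, 1);
    ann->setTrainMethod(ANN_MLP::ANNEAL);   // instead of BACKPROP / RPROP
    ann->setAnnealInitialT(12.0);           // starting temperature
    ann->setAnnealFinalT(0.15);             // temperature at which cooling stops
    ann->setAnnealCoolingRatio(0.95);       // geometric cooling factor per step
    ann->setAnnealItePerStep(10);           // iterations tried at each temperature
    return ann;
}
```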
* export SVM::trainAuto to python #7224 (see the usage sketch after this group)
* workaround for ABI compatibility of SVM::trainAuto
* add parameter comments to new SVM::trainAuto function
* Export ParamGrid member variables
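A hedged C++ sketch of the newly exported overload, assuming it takes samples and responses directly instead of a TrainData object and cross-validates over the default parameter grids; the synthetic data and the k-fold value are only examples:
```
#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

int main()
{
    // synthetic two-class problem: the label follows the sign of the first feature
    Mat samples(100, 2, CV_32F), responses(100, 1, CV_32S);
    randu(samples, Scalar::all(-1), Scalar::all(1));
    for (int i = 0; i < samples.rows; i++)
        responses.at<int>(i) = samples.at<float>(i, 0) > 0 ? 1 : -1;

    Ptr<SVM> svm = SVM::create();
    svm->setKernel(SVM::RBF);
    // Mat-based overload: no TrainData needed, C and gamma are selected
    // by 5-fold cross-validation over the default ParamGrids
    svm->trainAuto(samples, ROW_SAMPLE, responses, 5);
    return 0;
}
```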
Finished support for several samples, needs regression testing
Gave the function a more relevant name (getVotes); see the usage sketch below
Finished implicit implementation
Removed printf, finished regression testing
Fixed conversion warning
Finished test for Rtrees
Fixed documentation
Initialized variable
Added doxygen documentation
Added parameter name
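A hedged usage sketch for the renamed getVotes function, assuming the RTrees::getVotes(samples, results, flags) form in which the first row of the output lists the class labels and each following row holds the per-class vote counts for one input sample; the training data below is synthetic:
```
#include <iostream>
#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

int main()
{
    // synthetic two-class problem
    Mat samples(60, 2, CV_32F), responses(60, 1, CV_32S);
    randu(samples, Scalar::all(0), Scalar::all(1));
    for (int i = 0; i < samples.rows; i++)
        responses.at<int>(i) = samples.at<float>(i, 0) > 0.5f ? 1 : 0;

    Ptr<RTrees> rtrees = RTrees::create();
    rtrees->train(samples, ROW_SAMPLE, responses);

    Mat votes;
    rtrees->getVotes(samples.rowRange(0, 5), votes, 0);  // votes for the first 5 samples
    std::cout << votes << std::endl;   // labels row followed by one row per sample
    return 0;
}
```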