Merge pull request #861 from albenoit:master

Vadim Pisarevsky 2013-05-12 12:53:14 +04:00 committed by OpenCV Buildbot
commit f1e8f69d2c
13 changed files with 616 additions and 283 deletions

View File

@ -21,7 +21,7 @@ In this tutorial you will learn how to:
General overview
================
The proposed model originates from Jeanny Herault's research at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is involved in image processing applications with `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer) lab. This is not a complete model but it already present interesting properties that can be involved for enhanced image processing experience. The model allows the following human retina properties to be used :
The proposed model originates from Jeanny Herault's research [herault2010]_ at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is used in image processing applications at the `Listic <http://www.listic.univ-savoie.fr>`_ lab (code maintainer and user). This is not a complete model, but it already presents interesting properties that can be exploited for an enhanced image processing experience. The model allows the following human retina properties to be used :
* spectral whitening that has 3 important effects: cancelling of high spatio-temporal frequency signals (noise), enhancement of mid-frequency details and reduction of low-frequency luminance energy. This *all in one* property directly cleans the visual signal of the classical undesired distortions introduced by image sensors and the input luminance range.
@ -37,7 +37,7 @@ In the figure below, the OpenEXR image sample *CrissyField.exr*, a High Dynamic
:alt: A High dynamic range image linearly rescaled within range [0-255].
:align: center
In the following image, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. On this picture, noise in significantly removed, local details hidden by strong luminance contrasts are enhanced. Output image keeps its naturalness and visual content is enhanced.
In the following image, applying the ideas proposed in [benoit2010]_, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. On this picture, noise is significantly removed and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and the visual content is enhanced. Color processing is based on the color multiplexing/demultiplexing method proposed in [chaix2007]_.
.. image:: images/retina_TreeHdr_retina.jpg
:alt: A High dynamic range image compressed within range [0-255] using the retina.
@ -86,19 +86,23 @@ This model can be used basically for spatio-temporal video effects but also in t
* performing motion analysis also taking benefit of the previously cited properties.
Literature
==========
For more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
.. [benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Please have a look at the reference work of Jeanny Herault that you can read in his book :
Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
.. [herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
This retina filter code includes the research contributions of PhD/research colleagues whose code has been adapted by the author :
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene phD color mosaicing/demosaicing and his reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper:
* take a look at *imagelogpolprojection.hpp* to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. ====> more information in the above cited Jeanny Heraults's book.
.. [chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling, which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from discussions with Jeanny. More information in Jeanny Herault's book cited above.
Code tutorial
=============
@ -229,68 +233,67 @@ Now, everything is ready to run the retina model. I propose here to allocate a r
.. code-block:: cpp
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;

// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
    myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else // -> else allocate "classical" retina :
    myRetina = cv::createRetina(inputFrame.size());
Once done, the proposed code writes a default xml file that contains the default parameters of the retina. This is useful for making your own config using this template. Here, the generated template xml file is called *RetinaDefaultParameters.xml*.
.. code-block:: cpp
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
In the following line, the retina attempts to load another xml file called *RetinaSpecificParameters.xml*. If you created it and introduced your own setup, it will be loaded; otherwise, the default retina parameters are used.
.. code-block:: cpp
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
It is not required here but just to show it is possible, you can reset the retina buffers to zero to force it to forget past events.
.. code-block:: cpp
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
Now, it is time to run the retina ! First create some output buffers ready to receive the two retina channels outputs
.. code-block:: cpp
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
Then, run retina in a loop, load new frames from video sequence if necessary and get retina outputs back to dedicated buffers.
.. code-block:: cpp
// processing loop with no stop condition
while(true)
{
// if using video stream, then, grabbing a new frame, else, input remains the same
if (videoCapture.isOpened())
videoCapture>>inputFrame;
// processing loop with no stop condition
while(true)
{
    // if using video stream, then, grabbing a new frame, else, input remains the same
    if (videoCapture.isOpened())
        videoCapture>>inputFrame;
// run retina filter on the loaded input frame
myRetina->run(inputFrame);
// Retrieve and display retina output
myRetina->getParvo(retinaOutput_parvo);
myRetina->getMagno(retinaOutput_magno);
cv::imshow("retina input", inputFrame);
cv::imshow("Retina Parvo", retinaOutput_parvo);
cv::imshow("Retina Magno", retinaOutput_magno);
cv::waitKey(10);
}
    // run retina filter on the loaded input frame
    myRetina->run(inputFrame);

    // Retrieve and display retina output
    myRetina->getParvo(retinaOutput_parvo);
    myRetina->getMagno(retinaOutput_magno);
    cv::imshow("retina input", inputFrame);
    cv::imshow("Retina Parvo", retinaOutput_parvo);
    cv::imshow("Retina Magno", retinaOutput_magno);
    cv::waitKey(10);
}
That's done ! But if you want to make the system robust, take care to manage exceptions. The retina can throw some when it receives irrelevant data (no input frame, wrong setup, etc.).
Then, I recommend surrounding all the retina code with a try/catch block like this :
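A minimal sketch of such a guard (a hypothetical wrapper, not the exact sample code; the retina typically reports problems through cv::Exception) :

.. code-block:: cpp

    try
    {
        // ... allocate the retina and run the processing loop shown above ...
    }
    catch (const cv::Exception &e)
    {
        std::cerr << "Error using Retina : " << e.what() << std::endl;
    }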
@ -323,29 +326,28 @@ Once done open the configuration file *RetinaDefaultParameters.xml* generated by
.. code-block:: cpp
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.0e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>5.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.</horizontalCellsGain>
<hcellsTemporalConstant>1.</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.0e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>1.2e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.7e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.01</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here are some hints, but the best parameter setup actually depends more on what you want to do with the retina than on the input images you feed it. Apart from the more specific case of High Dynamic Range (HDR) images, which require a dedicated setup for a given luminance compression objective, the retina behaviour should be rather stable from content to content. Note that OpenCV is able to handle such HDR formats thanks to its OpenEXR image compatibility.
@ -381,7 +383,7 @@ This parameter set tunes the neural network connected to the photo-receptors, th
* **horizontalCellsGain** here is a critical parameter ! If you are not interested in the mean luminance and want to focus on detail enhancement, then set it to zero. But if you want to keep some environment luminance data, let some low spatial frequencies pass into the system by setting a higher value (<1).
* **hcellsTemporalConstant** similar to photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths input data. Here, a high value generates a high retina after effect while a lower value makes the retina more reactive.
* **hcellsTemporalConstant** similar to photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths input data. Here, a high value generates a high retina after effect while a lower value makes the retina more reactive. This value should be lower than **photoreceptorsTemporalConstant** to limit strong retina after effects.
* **hcellsSpatialConstant** is the spatial constant of the low pass filter of these cells. It specifies the lowest spatial frequency allowed to reach the following processing stages. Visually, a high value leads to very low spatial frequency processing and produces salient halo effects. Lower values reduce this effect, but the limit is: do not go lower than the value of **photoreceptorsSpatialConstant**. These 2 parameters actually specify the spatial band-pass of the retina (see the code sketch below).
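If you prefer to set these values from code rather than through the XML file, the *setupOPLandIPLParvoChannel* method documented later exposes the same parameters. A possible call, with illustrative values only (defaults everywhere except a non-zero horizontalCellsGain), could be :

.. code-block:: cpp

    // keep some mean luminance (horizontalCellsGain = 0.3) while preserving the default spatial band-pass
    myRetina->setupOPLandIPLParvoChannel(true,   // colorMode
                                         true,   // normaliseOutput
                                         0.7f,   // photoreceptorsLocalAdaptationSensitivity
                                         0.5f,   // photoreceptorsTemporalConstant
                                         0.53f,  // photoreceptorsSpatialConstant
                                         0.3f,   // horizontalCellsGain
                                         0.5f,   // HcellsTemporalConstant
                                         7.f,    // HcellsSpatialConstant
                                         0.7f);  // ganglionCellsSensitivity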

(Three new binary image files added — previews not shown: 13 KiB, 22 KiB and 19 KiB.)

View File

@ -5,48 +5,81 @@ Retina : a Bio mimetic human retina model
Retina
======
.. ocv:class:: Retina : public Algorithm
.. ocv:class:: Retina
Introduction
++++++++++++
Class which provides the main controls to the Gipsa/Listic labs human retina model. Spatio-temporal filtering modelling the two main retina information channels :
Class which provides the main controls to the Gipsa/Listic labs human retina model. This is a non-separable spatio-temporal filter modelling the two main retina information channels :
* foveal vision for detailled color vision : the parvocellular pathway).
* foveal vision for detailed color vision : the parvocellular pathway.
* periphearal vision for sensitive transient signals detection (motion and events) : the magnocellular pathway.
* peripheral vision for sensitive transient signal detection (motion and events) : the magnocellular pathway.
**NOTE : See the Retina tutorial in the tutorial/contrib section for complementary explanations.**
From a general point of view, this filter whitens the image spectrum and corrects luminance thanks to local adaptation. Another important property is its ability to filter out spatio-temporal noise while enhancing details.
This model originates from Jeanny Herault's work [Herault2010]_. It has been used in Alexandre Benoit's PhD and his current research [Benoit2010]_ (he currently maintains this module within OpenCV). It includes the work of other of Jeanny's PhD students, such as [Chaix2007]_, and the log-polar transformations of Barthelemy Durette described in Jeanny's book.
The retina can be settled up with various parameters, by default, the retina cancels mean luminance and enforces all details of the visual scene. In order to use your own parameters, you can use at least one time the *write(String fs)* method which will write a proper XML file with all default parameters. Then, tweak it on your own and reload them at any time using method *setup(String fs)*. These methods update a *Retina::RetinaParameters* member structure that is described hereafter. ::
**NOTES :**
class Retina
* For ease of use in computer vision applications, the two retina channels are applied homogeneously on all the input images. This does not follow the real retina topology, but it can still be approximated using the log sampling capabilities proposed within the class.
* See the extended retina description and code usage in the tutorial/contrib section for complementary explanations.
Preliminary illustration
++++++++++++++++++++++++
As a preliminary presentation, let's start with a visual example. We propose to apply the filter on a low quality color jpeg image with backlight problems. Here is the considered input... *"Well, my eyes were able to see more than this strange black shadow..."*
.. image:: images/retinaInput.jpg
:alt: a low quality color jpeg image with backlight problems.
:align: center
Below, the retina foveal model applied on the entire image with default parameters. Here contours are enforced and halo effects are deliberately visible with this configuration. See the parameters discussion below and increase horizontalCellsGain towards 1 to remove them.
.. image:: images/retinaOutput_default.jpg
:alt: the retina foveal model applied on the entire image with default parameters. Here contours are enforced, luminance is corrected and halo effects are voluntary visible with this configuration, increase horizontalCellsGain near 1 to remove them.
:align: center
Below, a second retina foveal model output applied on the entire image with a parameter setup focused on naturalness perception. *"Hey, I now recognize my cat, looking at the mountains at the end of the day !"*. Here contours are enforced, luminance is corrected but halos are avoided with this configuration. The backlight effect is corrected and highlight details are still preserved. Thus, even on a low quality jpeg image, if some luminance information remains, the retina is able to reconstruct a proper visual signal. Such a configuration is also useful for compressing High Dynamic Range (*HDR*) images to 8bit images, as discussed in [Benoit2010]_ and in the demonstration codes discussed below.
As shown at the end of the page, the parameter changes from the defaults are :
* horizontalCellsGain=0.3
* photoreceptorsLocalAdaptationSensitivity=ganglioncellsSensitivity=0.89.
.. image:: images/retinaOutput_realistic.jpg
:alt: the retina foveal model applied on the entire image with 'naturalness' parameters. Here contours are enforced but are avoided with this configuration, horizontalCellsGain is 0.3 and photoreceptorsLocalAdaptationSensitivity=ganglioncellsSensitivity=0.89.
:align: center
As observed in this preliminary demo, the retina can be set up with various parameters. By default, as shown on the figure above, the retina strongly reduces mean luminance energy and enforces all details of the visual scene. Luminance energy and halo effects can be modulated (from exaggerated to cancelled, as shown on the two examples). In order to use your own parameters, you can call at least once the *write(String fs)* method, which will write a proper XML file with all default parameters. Then, tweak it on your own and reload it at any time using the method *setup(String fs)*. These methods update a *Retina::RetinaParameters* member structure that is described hereafter. XML parameters file samples are shown at the end of the page.
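A minimal sketch of this write/tweak/reload cycle, assuming a retina instance *myRetina* allocated with *createRetina* (file names are just examples) :

.. code-block:: cpp

    // write the default parameters once, so you get an XML template to edit
    myRetina->write("RetinaDefaultParameters.xml");
    // ... edit the file by hand and save it, e.g. as RetinaSpecificParameters.xml ...
    // then reload your custom setup at any time
    myRetina->setup("RetinaSpecificParameters.xml");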
Here is an overview of the abstract Retina interface; allocate an instance with the *createRetina* functions. ::
class Retina : public Algorithm
{
public:
// parameters setup instance
struct RetinaParameters; // this class is detailed later
// constructors
Retina (Size inputSize);
Retina (Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
// main method for input frame processing
void run (const Mat &inputImage);
void run (InputArray inputImage);
// output buffers retrieval methods
// -> foveal color vision details channel with luminance and noise correction
void getParvo (Mat &retinaOutput_parvo);
void getParvo (std::valarray< float > &retinaOutput_parvo);
const std::valarray< float > & getParvo () const;
void getParvo (OutputArray retinaOutput_parvo);
void getParvoRAW (OutputArray retinaOutput_parvo);// retrieve original output buffers without any normalisation
const Mat getParvoRAW () const;// retrieve original output buffers without any normalisation
// -> peripheral monochrome motion and events (transient information) channel
void getMagno (Mat &retinaOutput_magno);
void getMagno (std::valarray< float > &retinaOutput_magno);
const std::valarray< float > & getMagno () const;
void getMagno (OutputArray retinaOutput_magno);
void getMagnoRAW (OutputArray retinaOutput_magno); // retrieve original output buffers without any normalisation
const Mat getMagnoRAW () const;// retrieve original output buffers without any normalisation
// reset retina buffers... equivalent to closing your eyes for some seconds
void clearBuffers ();
// retrieve input and output buffers sizes
Size inputSize ();
Size outputSize ();
Size getInputSize ();
Size getOutputSize ();
// setup methods with specific parameters specification of global xml config file loading/write
void setup (String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
@ -63,11 +96,15 @@ The retina can be settled up with various parameters, by default, the retina can
void activateContoursProcessing (const bool activate);
};
// Allocators
cv::Ptr<Retina> createRetina (Size inputSize);
cv::Ptr<Retina> createRetina (Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
Description
+++++++++++
Class which allows the `Gipsa <http://www.gipsa-lab.inpg.fr>`_ (preliminary work) / `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer) labs retina model to be used. This class allows human retina spatio-temporal image processing to be applied on still images, images sequences and video sequences. Briefly, here are the main human retina model properties:
Class which allows the `Gipsa <http://www.gipsa-lab.inpg.fr>`_ (preliminary work) / `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer and user) labs retina model to be used. This class allows human retina spatio-temporal image processing to be applied on still images, image sequences and video sequences. Briefly, here are the main human retina model properties:
* spectral whitening (mid-frequency detail enhancement)
@ -83,19 +120,23 @@ Use : this model can be used basically for spatio-temporal video effects but als
* performing motion analysis also taking benefit of the previously cited properties (check out the magnocellular retina channel output, by using the provided **getMagno** methods)
Literature
==========
For more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
.. [Benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Please have a look at the reference work of Jeanny Herault that you can read in his book :
Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
.. [Herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
This retina filter code includes the research contributions of PhD/research colleagues whose code has been adapted by the author :
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene phD color mosaicing/demosaicing and his reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper:
* take a look at *imagelogpolprojection.hpp* to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. ====> more informations in the above cited Jeanny Heraults's book.
.. [Chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling, which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from discussions with Jeanny. More information in Jeanny Herault's book cited above.
Demos and experiments !
=======================
@ -132,20 +173,24 @@ Methods description
The main methods used to control the retina model are detailed here.
Retina::Retina
++++++++++++++
Ptr<Retina>::createRetina
+++++++++++++++++++++++++
.. ocv:function:: Retina::Retina(Size inputSize)
.. ocv:function:: Retina::Retina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod = RETINA_COLOR_BAYER, const bool useRetinaLogSampling = false, const double reductionFactor = 1.0, const double samplingStrenght = 10.0 )
.. ocv:function:: Ptr<Retina> createRetina(Size inputSize)
.. ocv:function:: Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod = RETINA_COLOR_BAYER, const bool useRetinaLogSampling = false, const double reductionFactor = 1.0, const double samplingStrenght = 10.0 )
Constructors
Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance
:param inputSize: the input frame size
:param colorMode: the chosen processing mode : with or without color processing
:param colorSamplingMethod: specifies which kind of color sampling will be used
:param colorSamplingMethod: specifies which kind of color sampling will be used :
* RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
* RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3, GBRGBRGBR...
* RETINA_COLOR_BAYER: standard bayer sampling
:param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
:param reductionFactor: only useful if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, a reduction of the output is allowed without precision loss)
:param samplingStrenght: only useful if param useRetinaLogSampling=true, specifies the strength of the log scale that is applied
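For example, a foveated color retina using log sampling, whose output is reduced by a factor of 2, could be allocated as follows (values for illustration, taken from the tutorial sample) :

.. code-block:: cpp

    // log sampling enabled: favour foveal vision, subsample the periphery, output reduced by 2, log strength 10
    cv::Ptr<cv::Retina> myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);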
@ -178,55 +223,46 @@ Retina::clearBuffers
Retina::getParvo
++++++++++++++++
.. ocv:function:: void Retina::getParvo( Mat & retinaOutput_parvo )
.. ocv:function:: void Retina::getParvo( std::valarray<float> & retinaOutput_parvo )
.. ocv:function:: const std::valarray<float> & Retina::getParvo() const
.. ocv:function:: void Retina::getParvo( OutputArray retinaOutput_parvo )
.. ocv:function:: void Retina::getParvoRAW( OutputArray retinaOutput_parvo )
.. ocv:function:: const Mat Retina::getParvoRAW() const
Accessor of the details channel of the retina (models foveal vision)
Accessor of the details channel of the retina (models foveal vision). Warning, the getParvoRAW methods return buffers that are not rescaled within the range [0;255], while the non-RAW method returns a matrix normalized to that range.
:param retinaOutput_parvo: the output buffer (reallocated if necessary), format can be :
* a Mat, this output is rescaled for standard 8bits image processing use in OpenCV
* a 1D std::valarray Buffer (encoding is R1, R2, ... Rn), this output is the original retina filter model output, without any quantification or rescaling
* RAW methods actually return a 1D matrix (encoding is R1, R2, ... Rn, G1, G2, ..., Gn, B1, B2, ...Bn), this output is the original retina filter model output, without any quantification or rescaling.
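As an illustration of this planar layout, here is a hypothetical sketch of how the RAW parvo buffer could be rebuilt into a 3-channel floating point image. It assumes a color retina and that the RAW buffer comes back as a single-row CV_32F matrix of length 3*rows*cols; adapt it to the actual buffer shape you observe :

.. code-block:: cpp

    cv::Mat raw;
    myRetina->getParvoRAW(raw);                       // assumed planar layout: R1..Rn, G1..Gn, B1..Bn
    cv::Size outSize = myRetina->getOutputSize();
    int n = outSize.width * outSize.height;           // pixels per color plane
    std::vector<cv::Mat> planes;
    for (int c = 0; c < 3; ++c)
        planes.push_back(raw.colRange(c * n, (c + 1) * n).reshape(1, outSize.height));
    cv::Mat parvoRaw;
    cv::merge(planes, parvoRaw);                      // still floating point, not rescaled to [0;255]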
Retina::getMagno
++++++++++++++++
.. ocv:function:: void Retina::getMagno( Mat & retinaOutput_magno )
.. ocv:function:: void Retina::getMagno( std::valarray<float> & retinaOutput_magno )
.. ocv:function:: const std::valarray<float> & Retina::getMagno() const
.. ocv:function:: void Retina::getMagno( OutputArray retinaOutput_magno )
.. ocv:function:: void Retina::getMagnoRAW( OutputArray retinaOutput_magno )
.. ocv:function:: const Mat Retina::getMagnoRAW() const
Accessor of the motion channel of the retina (models peripheral vision)
Accessor of the motion channel of the retina (models peripheral vision). Warning, the getMagnoRAW methods return buffers that are not rescaled within the range [0;255], while the non-RAW method returns a matrix normalized to that range.
:param retinaOutput_magno: the output buffer (reallocated if necessary), format can be :
* a Mat, this output is rescaled for standard 8bits image processing use in OpenCV
* a 1D std::valarray Buffer (encoding is R1, R2, ... Rn), this output is the original retina filter model output, without any quantification or rescaling
* RAW methods actually return a 1D matrix (encoding is M1, M2,... Mn), this output is the original retina filter model output, without any quantification or rescaling.
Retina::getParameters
+++++++++++++++++++++
Retina::getInputSize
++++++++++++++++++++
.. ocv:function:: Retina::RetinaParameters Retina::getParameters()
Retrieve the current parameters values in a *Retina::RetinaParameters* structure
:return: the current parameters setup
Retina::inputSize
+++++++++++++++++
.. ocv:function:: Size Retina::inputSize()
.. ocv:function:: Size Retina::getInputSize()
Retrieve retina input buffer size
:return: the retina input buffer size
Retina::outputSize
++++++++++++++++++
Retina::getOutputSize
+++++++++++++++++++++
.. ocv:function:: Size Retina::outputSize()
.. ocv:function:: Size Retina::getOutputSize()
Retrieve the retina output buffer size, which can differ from the input size if a spatial log transformation is applied
@ -244,7 +280,7 @@ Retina::printSetup
Retina::run
+++++++++++
.. ocv:function:: void Retina::run(const Mat & inputImage)
.. ocv:function:: void Retina::run(InputArray inputImage)
Method which allows the retina to be applied to an input image; after run, the encapsulated retina module is ready to deliver its outputs using dedicated accessors, see the getParvo and getMagno methods
@ -273,7 +309,7 @@ Retina::setup
:param retinaParameterFile: the parameters filename
:param applyDefaultSetupOnFailure: set to true if an error must be thrown on error
:param fs: the open Filestorage which contains retina parameters
:param newParameters: a parameters structures updated with the new target configuration
:param newParameters: a parameters structure updated with the new target configuration. You can retrieve the current parameters structure using the method *Retina::RetinaParameters Retina::getParameters()* and update it before running the *setup* method.
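A short sketch of that cycle, assuming *myRetina* is an instance obtained from *createRetina* (the field names come from the *RetinaParameters* structure shown below; the value is illustrative) :

.. code-block:: cpp

    // fetch the current setup, adjust one field and push it back to the retina
    cv::Retina::RetinaParameters params = myRetina->getParameters();
    params.OPLandIplParvo.horizontalCellsGain = 0.3f; // keep some mean luminance
    myRetina->setup(params);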
Retina::write
+++++++++++++
@ -335,7 +371,7 @@ Retina::RetinaParameters
photoreceptorsTemporalConstant(0.5f),// the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
photoreceptorsSpatialConstant(0.53f),// the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
horizontalCellsGain(0.0f),//gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
hcellsTemporalConstant(1.f),// the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
hcellsTemporalConstant(1.f),// the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors. Reduce to 0.5 to limit retina after effects.
hcellsSpatialConstant(7.f),//the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
ganglionCellsSensitivity(0.7f)//the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.7
{};// default setup
@ -359,3 +395,60 @@ Retina::RetinaParameters
struct OPLandIplParvoParameters OPLandIplParvo;
struct IplMagnoParameters IplMagno;
};
Retina parameters files examples
++++++++++++++++++++++++++++++++
Here is the default configuration file of the retina module. It gives results such as the first retina output shown on the top of this page.
.. code-block:: cpp
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.01</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here is the "realistic" setup used to obtain the second retina output shown on the top of this page.
.. code-block:: cpp
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>8.9e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.3</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>8.9e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>

View File

@ -85,10 +85,8 @@ enum RETINA_COLORSAMPLINGMETHOD
RETINA_COLOR_BAYER//!< standard bayer sampling
};
class RetinaFilter;
/**
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used.
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used with OpenCV.
* This retina model allows spatio-temporal image processing (applied on still images, video sequences).
* As a summary, these are the retina model properties:
* => It applies a spectral whithening (mid-frequency details enhancement)
@ -110,7 +108,7 @@ class RetinaFilter;
* _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
* ====> more informations in the above cited Jeanny Heraults's book.
*/
class CV_EXPORTS Retina {
class CV_EXPORTS Retina : public Algorithm {
public:
@ -119,13 +117,13 @@ public:
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
OPLandIplParvoParameters():colorMode(true),
normaliseOutput(true),
photoreceptorsLocalAdaptationSensitivity(0.7f),
photoreceptorsTemporalConstant(0.5f),
photoreceptorsLocalAdaptationSensitivity(0.75f),
photoreceptorsTemporalConstant(0.9f),
photoreceptorsSpatialConstant(0.53f),
horizontalCellsGain(0.0f),
hcellsTemporalConstant(1.f),
horizontalCellsGain(0.01f),
hcellsTemporalConstant(0.5f),
hcellsSpatialConstant(7.f),
ganglionCellsSensitivity(0.7f){};// default setup
ganglionCellsSensitivity(0.75f){};// default setup
bool colorMode, normaliseOutput;
float photoreceptorsLocalAdaptationSensitivity, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, hcellsTemporalConstant, hcellsSpatialConstant, ganglionCellsSensitivity;
};
@ -135,7 +133,7 @@ public:
parasolCells_beta(0.f),
parasolCells_tau(0.f),
parasolCells_k(7.f),
amacrinCellsTemporalCutFrequency(1.2f),
amacrinCellsTemporalCutFrequency(2.0f),
V0CompressionParameter(0.95f),
localAdaptintegration_tau(0.f),
localAdaptintegration_k(7.f){};// default setup
@ -146,34 +144,15 @@ public:
struct IplMagnoParameters IplMagno;
};
/**
* Main constructor with most commun use setup : create an instance of color ready retina model
* @param inputSize : the input frame size
*/
Retina(Size inputSize);
/**
* Complete Retina filter constructor which allows all basic structural parameters definition
* @param inputSize : the input frame size
* @param colorMode : the chosen processing mode : with or without color processing
* @param colorSamplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only usefull if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, then a reduction of the output is allowed without precision leak
* @param samplingStrenght: only usefull if param useRetinaLogSampling=true, specifies the strenght of the log scale that is applied
*/
Retina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual ~Retina();
/**
* retrieve retina input buffer size
*/
Size inputSize();
virtual Size getInputSize()=0;
/**
* retrieve retina output buffer size
*/
Size outputSize();
virtual Size getOutputSize()=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
@ -182,8 +161,7 @@ public:
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
virtual void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
@ -192,7 +170,7 @@ public:
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
virtual void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
@ -201,31 +179,30 @@ public:
* @param newParameters : a parameters structures updated with the new target configuration
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(RetinaParameters newParameters);
virtual void setup(RetinaParameters newParameters)=0;
/**
* @return the current parameters setup
*/
Retina::RetinaParameters getParameters();
* @return the current parameters setup
*/
virtual struct Retina::RetinaParameters getParameters()=0;
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const String printSetup();
virtual const String printSetup()=0;
/**
* write xml/yml formated parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( String fs ) const;
virtual void write( String fs ) const=0;
/**
* write xml/yml formated parameters information
* @param fs : a cv::Filestorage object ready to be filled
*/
virtual void write( FileStorage& fs ) const;
virtual void write( FileStorage& fs ) const=0;
/**
* setup the OPL and IPL parvo channels (see biological model)
@ -242,7 +219,7 @@ public:
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strengh of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 230
*/
void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
virtual void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7)=0;
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
@ -256,41 +233,41 @@ public:
* @param localAdaptintegration_tau: specifies the temporal constant of the low pas filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pas filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
virtual void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7)=0;
/**
* method which allows retina to be applied on an input image, after run, encapsulated retina module is ready to deliver its outputs using dedicated acccessors, see getParvo and getMagno methods
* @param inputImage : the input cv::Mat image to be processed, can be gray level or BGR coded in any format (from 8bit to 16bits)
*/
void run(const Mat &inputImage);
virtual void run(InputArray inputImage)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getParvo(Mat &retinaOutput_parvo);
virtual void getParvo(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is the original retina filter model output, without any quantification or rescaling
* @param retinaOutput_parvo : a cv::Mat header filled with the internal parvo buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getParvo(std::valarray<float> &retinaOutput_parvo);
virtual void getParvoRAW(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getMagno(Mat &retinaOutput_magno);
virtual void getMagno(OutputArray retinaOutput_magno)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is the original retina filter model output, without any quantification or rescaling
* @param retinaOutput_magno : a cv::Mat header filled with the internal retina magno buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getMagno(std::valarray<float> &retinaOutput_magno);
virtual void getMagnoRAW(OutputArray retinaOutput_magno)=0;
// original API level data accessors : get buffers addresses...
const std::valarray<float> & getMagno() const;
const std::valarray<float> & getParvo() const;
// original API level data accessors : get buffers addresses from a Mat header, similar to getParvoRAW and getMagnoRAW...
virtual const Mat getMagnoRAW() const=0;
virtual const Mat getParvoRAW() const=0;
/**
* activate color saturation as the final step of the color demultiplexing process
@ -298,58 +275,27 @@ public:
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
*/
void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0);
virtual void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0)=0;
/**
* clear all retina buffers (equivalent to opening the eyes after a long period of eye close ;o)
*/
void clearBuffers();
virtual void clearBuffers()=0;
/**
* Activate/deactivate the Magnocellular pathway processing (motion information extraction), by default, it is activated
* @param activate: true if Magnocellular output should be activated, false if not
*/
void activateMovingContoursProcessing(const bool activate);
virtual void activateMovingContoursProcessing(const bool activate)=0;
/**
* Activate/deactivate the Parvocellular pathway processing (contours information extraction), by default, it is activated
* @param activate: true if Parvocellular (contours information extraction) output should be activated, false if not
*/
void activateContoursProcessing(const bool activate);
protected:
// Parameteres setup members
RetinaParameters _retinaParameters; // structure of parameters
// Retina model related modules
std::valarray<float> _inputBuffer; //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
// pointer to retina model
RetinaFilter* _retinaFilter; //!< the pointer to the retina module, allocated with instance construction
/**
* exports a valarray buffer outing from HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
* @param grayMatrixToConvert the valarray to export to OpenCV
* @param nbRows : the number of rows of the valarray flatten matrix
* @param nbColumns : the number of rows of the valarray flatten matrix
* @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
* @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
*/
void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, Mat &outBuffer);
/**
*
* @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to gray or RGB valarray buffer that will be processed by the retina model
* @param outputValarrayMatrix : the output valarray
* @return the input image color mode (color=true, gray levels=false)
*/
bool _convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
//! private method called by constructors, gathers their parameters and use them in a unified way
void _init(const Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual void activateContoursProcessing(const bool activate)=0;
};
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize);
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
}
#endif /* __OPENCV_CONTRIB_RETINA_HPP__ */

View File

@ -76,19 +76,231 @@
namespace cv
{
Retina::Retina(const cv::Size inputSz)
class RetinaImpl : public Retina
{
public:
/**
* Main constructor with the most common use setup : create an instance of a color-ready retina model
* @param inputSize : the input frame size
*/
RetinaImpl(Size inputSize);
/**
* Complete Retina filter constructor which allows all basic structural parameters definition
* @param inputSize : the input frame size
* @param colorMode : the chosen processing mode : with or without color processing
* @param colorSamplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only usefull if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, then a reduction of the output is allowed without precision leak
* @param samplingStrenght: only usefull if param useRetinaLogSampling=true, specifies the strenght of the log scale that is applied
*/
RetinaImpl(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual ~RetinaImpl();
/**
* retreive retina input buffer size
*/
Size getInputSize();
/**
* retreive retina output buffer size
*/
Size getOutputSize();
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if read XML file is not valid
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if read XML file is not valid
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if read XML file is not valid
* @param newParameters : a parameters structures updated with the new target configuration
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(Retina::RetinaParameters newParameters);
/**
* @return the current parameters setup
*/
struct Retina::RetinaParameters getParameters();
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const String printSetup();
/**
* write xml/yml formated parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( String fs ) const;
/**
* write xml/yml formated parameters information
* @param fs : a cv::Filestorage object ready to be filled
*/
virtual void write( FileStorage& fs ) const;
/**
* setup the OPL and IPL parvo channels (see biological model)
* OPL is referred to as the Outer Plexiform Layer of the retina; it allows the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy)
* IPL parvo is the OPL's next processing stage; it refers to the Inner Plexiform layer of the retina and allows high contour sensitivity in foveal vision.
* for more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* @param colorMode : specifies if (true) color is processed of not (false) to then processing gray level image
* @param normaliseOutput : specifies if (true) output is rescaled between 0 and 255 of not (false)
* @param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)
* @param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param horizontalCellsGain: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strengh of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 230
*/
void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
/**
* set parameter values for the Inner Plexiform Layer (IPL) magnocellular channel
* this channel processes signals output from the OPL processing stage in peripheral vision; it allows motion information enhancement and is decorrelated from the details channel. See the reference paper for more details.
* @param normaliseOutput : specifies if the output is rescaled between 0 and 255 (true) or not (false)
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frames, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 200
* @param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
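/**
 * Illustrative usage sketch (editor's addition; the values below simply mirror the declaration defaults and are examples only).
 * Assumes a retina instance obtained from the createRetina() factory defined further below in this file.
 * @code
 *   // re-apply the magno channel defaults explicitly
 *   retina->setupIPLMagnoChannel(true, 0.f, 0.f, 7.f, 1.2f, 0.95f, 0.f, 7.f);
 * @endcode
 */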
/**
* method which allows the retina to be applied to an input image; after run, the encapsulated retina module is ready to deliver its outputs using the dedicated accessors, see getParvo and getMagno methods
* @param inputImage : the input cv::Mat image to be processed, can be gray level or BGR coded in any format (from 8 bits to 16 bits)
*/
void run(InputArray inputImage);
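/**
 * Illustrative processing loop (editor's sketch; videoCapture and the output Mats are hypothetical variable names).
 * @code
 *   cv::VideoCapture videoCapture(0);
 *   cv::Mat frame, parvoOut, magnoOut;
 *   while (videoCapture.read(frame))
 *   {
 *       retina->run(frame);            // feed the retina with the new frame
 *       retina->getParvo(parvoOut);    // 8-bit details/foveal output
 *       retina->getMagno(magnoOut);    // 8-bit motion/peripheral output
 *   }
 * @endcode
 */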
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8-bit image processing use in OpenCV
*/
void getParvo(OutputArray retinaOutput_parvo);
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : a cv::Mat header filled with the internal parvo buffer of the retina module. This output is the original retina filter model output, without any quantization or rescaling
*/
void getParvoRAW(OutputArray retinaOutput_parvo);
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is rescaled for standard 8-bit image processing use in OpenCV
*/
void getMagno(OutputArray retinaOutput_magno);
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : a cv::Mat header filled with the internal retina magno buffer of the retina module. This output is the original retina filter model output, without any quantization or rescaling
*/
void getMagnoRAW(OutputArray retinaOutput_magno);
// original API level data accessors : get buffers addresses from a Mat header, similar to getParvoRAW and getMagnoRAW...
const Mat getMagnoRAW() const;
const Mat getParvoRAW() const;
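/**
 * Illustrative sketch (editor's note, assuming these RAW accessors are reachable through the Retina interface returned by createRetina()):
 * the parameterless accessors return cv::Mat headers pointing to the internal float buffers (no copy), flat as (number of samples x 1, CV_32F),
 * while the OutputArray versions above copy the data.
 * @code
 *   const cv::Mat parvoHeader = retina->getParvoRAW();   // header on the internal buffer, no copy
 *   cv::Mat parvoCopy;
 *   retina->getParvoRAW(parvoCopy);                      // deep copy of the same buffer
 * @endcode
 */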
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
*/
void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0);
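/**
 * Illustrative sketch (editor's addition; values are examples only).
 * @code
 *   retina->setColorSaturation(true, 3.0f);  // milder saturation than the default factor of 4.0
 *   retina->setColorSaturation(false);       // or bypass the final color saturation stage entirely
 * @endcode
 */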
/**
* clear all retina buffers (equivalent to opening the eyes after a long period of eye closure ;o)
*/
void clearBuffers();
/**
* Activate/deactivate the Magnocellular pathway processing (motion information extraction), by default, it is activated
* @param activate: true if Magnocellular output should be activated, false if not
*/
void activateMovingContoursProcessing(const bool activate);
/**
* Activate/deactivate the Parvocellular pathway processing (contours information extraction), by default, it is activated
* @param activate: true if Parvocellular (contours information extraction) output should be activated, false if not
*/
void activateContoursProcessing(const bool activate);
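/**
 * Illustrative sketch (editor's note): when only the details channel is needed, the magnocellular pathway can be switched off to save computation.
 * @code
 *   retina->activateMovingContoursProcessing(false);  // skip the magno (motion) channel
 *   retina->activateContoursProcessing(true);         // keep the parvo (details) channel
 * @endcode
 */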
private:
// Parameters setup members
RetinaParameters _retinaParameters; // structure of parameters
// Retina model related modules
std::valarray<float> _inputBuffer; //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
// pointer to retina model
RetinaFilter* _retinaFilter; //!< the pointer to the retina module, allocated with instance construction
/**
* exports a valarray buffer coming from HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
* @param grayMatrixToConvert the valarray to export to OpenCV
* @param nbRows : the number of rows of the flattened valarray matrix
* @param nbColumns : the number of columns of the flattened valarray matrix
* @param colorMode : a flag which mentions if the matrix is color (true) or gray level (false)
* @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
*/
void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer);
/**
* converts an input OpenCV cv::Mat image into the gray or RGB valarray buffer format used internally by the retina model
* @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to a gray or RGB valarray buffer that will be processed by the retina model
* @param outputValarrayMatrix : the output valarray
* @return the input image color mode (color=true, gray levels=false)
*/
bool _convertCvMat2ValarrayBuffer(InputArray inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
//! private method called by constructors, gathers their parameters and uses them in a unified way
void _init(const Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
};
// smart pointers allocation :
Ptr<Retina> createRetina(Size inputSize){ return new RetinaImpl(inputSize); }
Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght){return new RetinaImpl(inputSize, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);}
// RetinaImpl code
RetinaImpl::RetinaImpl(const cv::Size inputSz)
{
_retinaFilter = 0;
_init(inputSz, true, RETINA_COLOR_BAYER, false);
}
Retina::Retina(const cv::Size inputSz, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
RetinaImpl::RetinaImpl(const cv::Size inputSz, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
{
_retinaFilter = 0;
_init(inputSz, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);
};
Retina::~Retina()
RetinaImpl::~RetinaImpl()
{
if (_retinaFilter)
delete _retinaFilter;
@ -97,23 +309,22 @@ Retina::~Retina()
/**
* retrieve retina input buffer size
*/
Size Retina::inputSize(){return cv::Size(_retinaFilter->getInputNBcolumns(), _retinaFilter->getInputNBrows());}
Size RetinaImpl::getInputSize(){return cv::Size(_retinaFilter->getInputNBcolumns(), _retinaFilter->getInputNBrows());}
/**
* retrieve retina output buffer size
*/
Size Retina::outputSize(){return cv::Size(_retinaFilter->getOutputNBcolumns(), _retinaFilter->getOutputNBrows());}
Size RetinaImpl::getOutputSize(){return cv::Size(_retinaFilter->getOutputNBcolumns(), _retinaFilter->getOutputNBrows());}
void Retina::setColorSaturation(const bool saturateColors, const float colorSaturationValue)
void RetinaImpl::setColorSaturation(const bool saturateColors, const float colorSaturationValue)
{
_retinaFilter->setColorSaturation(saturateColors, colorSaturationValue);
}
struct Retina::RetinaParameters Retina::getParameters(){return _retinaParameters;}
struct Retina::RetinaParameters RetinaImpl::getParameters(){return _retinaParameters;}
void Retina::setup(String retinaParameterFile, const bool applyDefaultSetupOnFailure)
void RetinaImpl::setup(String retinaParameterFile, const bool applyDefaultSetupOnFailure)
{
try
{
@ -137,7 +348,7 @@ void Retina::setup(String retinaParameterFile, const bool applyDefaultSetupOnFai
}
}
void Retina::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure)
void RetinaImpl::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure)
{
try
{
@ -176,7 +387,7 @@ void Retina::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure)
}catch(Exception &e)
{
printf("Retina::setup: resetting retina with default parameters\n");
printf("RetinaImpl::setup: resetting retina with default parameters\n");
if (applyDefaultSetupOnFailure)
{
setupOPLandIPLParvoChannel();
@ -190,7 +401,7 @@ void Retina::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure)
printf("%s\n", printSetup().c_str());
}
void Retina::setup(cv::Retina::RetinaParameters newConfiguration)
void RetinaImpl::setup(cv::Retina::RetinaParameters newConfiguration)
{
// simply copy structures
memcpy(&_retinaParameters, &newConfiguration, sizeof(cv::Retina::RetinaParameters));
@ -198,49 +409,48 @@ void Retina::setup(cv::Retina::RetinaParameters newConfiguration)
setupOPLandIPLParvoChannel(_retinaParameters.OPLandIplParvo.colorMode, _retinaParameters.OPLandIplParvo.normaliseOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant, _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant, _retinaParameters.OPLandIplParvo.horizontalCellsGain, _retinaParameters.OPLandIplParvo.hcellsTemporalConstant, _retinaParameters.OPLandIplParvo.hcellsSpatialConstant, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
setupIPLMagnoChannel(_retinaParameters.IplMagno.normaliseOutput, _retinaParameters.IplMagno.parasolCells_beta, _retinaParameters.IplMagno.parasolCells_tau, _retinaParameters.IplMagno.parasolCells_k, _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency,_retinaParameters.IplMagno.V0CompressionParameter, _retinaParameters.IplMagno.localAdaptintegration_tau, _retinaParameters.IplMagno.localAdaptintegration_k);
}
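/* Illustrative round-trip sketch (editor's addition, not part of this commit): with a hypothetical retina instance
 * created through createRetina(), parameters can be read back, adjusted and re-applied through the structure-based setup() above.
 *   cv::Retina::RetinaParameters params = retina->getParameters();
 *   params.OPLandIplParvo.horizontalCellsGain = 0.3f;  // example value only
 *   retina->setup(params);
 */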
const String Retina::printSetup()
const String RetinaImpl::printSetup()
{
std::stringstream outmessage;
// displaying OPL and IPL parvo setup
outmessage<<"Current Retina instance setup :"
<<"\nOPLandIPLparvo"<<"{"
<< "\n==> colorMode : " << _retinaParameters.OPLandIplParvo.colorMode
<< "\n==> normalizeParvoOutput :" << _retinaParameters.OPLandIplParvo.normaliseOutput
<< "\n==> photoreceptorsLocalAdaptationSensitivity : " << _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity
<< "\n==> photoreceptorsTemporalConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant
<< "\n==> photoreceptorsSpatialConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant
<< "\n==> horizontalCellsGain : " << _retinaParameters.OPLandIplParvo.horizontalCellsGain
<< "\n==> hcellsTemporalConstant : " << _retinaParameters.OPLandIplParvo.hcellsTemporalConstant
<< "\n==> hcellsSpatialConstant : " << _retinaParameters.OPLandIplParvo.hcellsSpatialConstant
<< "\n==> parvoGanglionCellsSensitivity : " << _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity
<< "\n\t colorMode : " << _retinaParameters.OPLandIplParvo.colorMode
<< "\n\t normalizeParvoOutput :" << _retinaParameters.OPLandIplParvo.normaliseOutput
<< "\n\t photoreceptorsLocalAdaptationSensitivity : " << _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity
<< "\n\t photoreceptorsTemporalConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant
<< "\n\t photoreceptorsSpatialConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant
<< "\n\t horizontalCellsGain : " << _retinaParameters.OPLandIplParvo.horizontalCellsGain
<< "\n\t hcellsTemporalConstant : " << _retinaParameters.OPLandIplParvo.hcellsTemporalConstant
<< "\n\t hcellsSpatialConstant : " << _retinaParameters.OPLandIplParvo.hcellsSpatialConstant
<< "\n\t parvoGanglionCellsSensitivity : " << _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity
<<"}\n";
// displaying IPL magno setup
outmessage<<"Current Retina instance setup :"
<<"\nIPLmagno"<<"{"
<< "\n==> normaliseOutput : " << _retinaParameters.IplMagno.normaliseOutput
<< "\n==> parasolCells_beta : " << _retinaParameters.IplMagno.parasolCells_beta
<< "\n==> parasolCells_tau : " << _retinaParameters.IplMagno.parasolCells_tau
<< "\n==> parasolCells_k : " << _retinaParameters.IplMagno.parasolCells_k
<< "\n==> amacrinCellsTemporalCutFrequency : " << _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency
<< "\n==> V0CompressionParameter : " << _retinaParameters.IplMagno.V0CompressionParameter
<< "\n==> localAdaptintegration_tau : " << _retinaParameters.IplMagno.localAdaptintegration_tau
<< "\n==> localAdaptintegration_k : " << _retinaParameters.IplMagno.localAdaptintegration_k
<< "\n\t normaliseOutput : " << _retinaParameters.IplMagno.normaliseOutput
<< "\n\t parasolCells_beta : " << _retinaParameters.IplMagno.parasolCells_beta
<< "\n\t parasolCells_tau : " << _retinaParameters.IplMagno.parasolCells_tau
<< "\n\t parasolCells_k : " << _retinaParameters.IplMagno.parasolCells_k
<< "\n\t amacrinCellsTemporalCutFrequency : " << _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency
<< "\n\t V0CompressionParameter : " << _retinaParameters.IplMagno.V0CompressionParameter
<< "\n\t localAdaptintegration_tau : " << _retinaParameters.IplMagno.localAdaptintegration_tau
<< "\n\t localAdaptintegration_k : " << _retinaParameters.IplMagno.localAdaptintegration_k
<<"}";
return outmessage.str().c_str();
}
void Retina::write( String fs ) const
void RetinaImpl::write( String fs ) const
{
FileStorage parametersSaveFile(fs, cv::FileStorage::WRITE );
write(parametersSaveFile);
}
void Retina::write( FileStorage& fs ) const
void RetinaImpl::write( FileStorage& fs ) const
{
if (!fs.isOpened())
return; // basic error case
@ -267,7 +477,7 @@ void Retina::write( FileStorage& fs ) const
fs<<"}";
}
void Retina::setupOPLandIPLParvoChannel(const bool colorMode, const bool normaliseOutput, const float photoreceptorsLocalAdaptationSensitivity, const float photoreceptorsTemporalConstant, const float photoreceptorsSpatialConstant, const float horizontalCellsGain, const float HcellsTemporalConstant, const float HcellsSpatialConstant, const float ganglionCellsSensitivity)
void RetinaImpl::setupOPLandIPLParvoChannel(const bool colorMode, const bool normaliseOutput, const float photoreceptorsLocalAdaptationSensitivity, const float photoreceptorsTemporalConstant, const float photoreceptorsSpatialConstant, const float horizontalCellsGain, const float HcellsTemporalConstant, const float HcellsSpatialConstant, const float ganglionCellsSensitivity)
{
// retina core parameters setup
_retinaFilter->setColorMode(colorMode);
@ -290,7 +500,7 @@ void Retina::setupOPLandIPLParvoChannel(const bool colorMode, const bool normali
}
void Retina::setupIPLMagnoChannel(const bool normaliseOutput, const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float V0CompressionParameter, const float localAdaptintegration_tau, const float localAdaptintegration_k)
void RetinaImpl::setupIPLMagnoChannel(const bool normaliseOutput, const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float V0CompressionParameter, const float localAdaptintegration_tau, const float localAdaptintegration_k)
{
_retinaFilter->setMagnoCoefficientsTable(parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, V0CompressionParameter, localAdaptintegration_tau, localAdaptintegration_k);
@ -307,16 +517,16 @@ void Retina::setupIPLMagnoChannel(const bool normaliseOutput, const float paraso
_retinaParameters.IplMagno.localAdaptintegration_k = localAdaptintegration_k;
}
void Retina::run(const cv::Mat &inputMatToConvert)
void RetinaImpl::run(InputArray inputMatToConvert)
{
// first convert input image to the compatible format : std::valarray<float>
const bool colorMode = _convertCvMat2ValarrayBuffer(inputMatToConvert, _inputBuffer);
const bool colorMode = _convertCvMat2ValarrayBuffer(inputMatToConvert.getMat(), _inputBuffer);
// process the retina
if (!_retinaFilter->runFilter(_inputBuffer, colorMode, false, _retinaParameters.OPLandIplParvo.colorMode && colorMode, false))
throw cv::Exception(-1, "Retina cannot be applied, wrong input buffer size", "Retina::run", "Retina.h", 0);
throw cv::Exception(-1, "RetinaImpl cannot be applied, wrong input buffer size", "RetinaImpl::run", "RetinaImpl.h", 0);
}
void Retina::getParvo(cv::Mat &retinaOutput_parvo)
void RetinaImpl::getParvo(OutputArray retinaOutput_parvo)
{
if (_retinaFilter->getColorMode())
{
@ -329,26 +539,52 @@ void Retina::getParvo(cv::Mat &retinaOutput_parvo)
}
//retinaOutput_parvo/=255.0;
}
void Retina::getMagno(cv::Mat &retinaOutput_magno)
void RetinaImpl::getMagno(OutputArray retinaOutput_magno)
{
// reallocate output buffer (if necessary)
_convertValarrayBuffer2cvMat(_retinaFilter->getMovingContours(), _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), false, retinaOutput_magno);
//retinaOutput_magno/=255.0;
}
// original API level data accessors : copy buffers if size matches
void Retina::getMagno(std::valarray<float> &magnoOutputBufferCopy){if (magnoOutputBufferCopy.size()==_retinaFilter->getMovingContours().size()) magnoOutputBufferCopy = _retinaFilter->getMovingContours();}
void Retina::getParvo(std::valarray<float> &parvoOutputBufferCopy){if (parvoOutputBufferCopy.size()==_retinaFilter->getContours().size()) parvoOutputBufferCopy = _retinaFilter->getContours();}
// original API level data accessors : copy buffers if size matches, reallocate if required
void RetinaImpl::getMagnoRAW(OutputArray magnoOutputBufferCopy){
// get magno channel header
const cv::Mat magnoChannel=cv::Mat(getMagnoRAW());
// copy data
magnoChannel.copyTo(magnoOutputBufferCopy);
}
void RetinaImpl::getParvoRAW(OutputArray parvoOutputBufferCopy){
// get parvo channel header
const cv::Mat parvoChannel=cv::Mat(getParvoRAW());
// copy data
parvoChannel.copyTo(parvoOutputBufferCopy);
}
// original API level data accessors : get buffers addresses...
const std::valarray<float> & Retina::getMagno() const {return _retinaFilter->getMovingContours();}
const std::valarray<float> & Retina::getParvo() const {if (_retinaFilter->getColorMode())return _retinaFilter->getColorOutput(); /* implicite else */return _retinaFilter->getContours();}
const Mat RetinaImpl::getMagnoRAW() const {
// create a cv::Mat header for the valarray
return Mat(_retinaFilter->getMovingContours().size(),1, CV_32F, (void*)get_data(_retinaFilter->getMovingContours()));
}
const Mat RetinaImpl::getParvoRAW() const {
if (_retinaFilter->getColorMode()) // check if color mode is enabled
{
// create a cv::Mat table (for RGB planes as a single vector)
return Mat(_retinaFilter->getColorOutput().size(), 1, CV_32F, (void*)get_data(_retinaFilter->getColorOutput()));
}
// otherwise, output is gray level
// create a cv::Mat header for the valarray
return Mat( _retinaFilter->getContours().size(), 1, CV_32F, (void*)get_data(_retinaFilter->getContours()));
}
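/* Illustrative sketch (editor's note): the RAW accessors above return flat (N x 1, CV_32F) headers on the internal buffers;
 * for a gray level retina they can be reshaped to the output dimensions without copying, assuming getOutputSize() and the
 * parameterless getParvoRAW() are exposed on the Retina interface returned by createRetina():
 *   cv::Mat flatParvo = retina->getParvoRAW();
 *   cv::Mat parvo2D   = flatParvo.reshape(1, retina->getOutputSize().height);
 */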
// private method called by constructors
void Retina::_init(const cv::Size inputSz, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
void RetinaImpl::_init(const cv::Size inputSz, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
{
// basic error check
if (inputSz.height*inputSz.width <= 0)
throw cv::Exception(-1, "Bad retina size setup : size height and with must be superior to zero", "Retina::setup", "Retina.h", 0);
throw cv::Exception(-1, "Bad retina size setup : size height and with must be superior to zero", "RetinaImpl::setup", "RetinaImpl.h", 0);
unsigned int nbPixels=inputSz.height*inputSz.width;
// resize buffers if size does not match
@ -369,25 +605,27 @@ void Retina::_init(const cv::Size inputSz, const bool colorMode, RETINA_COLORSAM
printf("%s\n", printSetup().c_str());
}
void Retina::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, cv::Mat &outBuffer)
void RetinaImpl::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
{
// fill output buffer with the valarray buffer
const float *valarrayPTR=get_data(grayMatrixToConvert);
if (!colorMode)
{
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8U);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j)
{
cv::Point2d pixel(j,i);
outBuffer.at<unsigned char>(pixel)=(unsigned char)*(valarrayPTR++);
outMat.at<unsigned char>(pixel)=(unsigned char)*(valarrayPTR++);
}
}
}else
{
const unsigned int doubleNBpixels=_retinaFilter->getOutputNBpixels()*2;
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8UC3);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j,++valarrayPTR)
@ -398,25 +636,27 @@ void Retina::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrix
pixelValues[1]=(unsigned char)*(valarrayPTR+_retinaFilter->getOutputNBpixels());
pixelValues[0]=(unsigned char)*(valarrayPTR+doubleNBpixels);
outBuffer.at<cv::Vec3b>(pixel)=pixelValues;
outMat.at<cv::Vec3b>(pixel)=pixelValues;
}
}
}
}
bool Retina::_convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, std::valarray<float> &outputValarrayMatrix)
bool RetinaImpl::_convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
{
const Mat inputMatToConvert=inputMat.getMat();
// first check input consistency
if (inputMatToConvert.empty())
throw cv::Exception(-1, "Retina cannot be applied, input buffer is empty", "Retina::run", "Retina.h", 0);
throw cv::Exception(-1, "RetinaImpl cannot be applied, input buffer is empty", "RetinaImpl::run", "RetinaImpl.h", 0);
// retrieve color mode from image input
int imageNumberOfChannels = inputMatToConvert.channels();
// convert to float AND fill the valarray buffer
typedef float T; // define here the target pixel format, here, float
const int dsttype = DataType<T>::depth; // output buffer is float format
const int dsttype = DataType<T>::depth; // output buffer is float format
if(imageNumberOfChannels==4)
{
@ -429,7 +669,7 @@ bool Retina::_convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, std::
};
planes[3] = cv::Mat(inputMatToConvert.size(), dsttype); // last channel (alpha) does not point to the valarray (not useful in our case)
// split the color cv::Mat into 4 planes... it fills the valarray directly
cv::split(cv::Mat_<Vec<T, 4> >(inputMatToConvert), planes);
cv::split(Mat_<Vec<T, 4> >(inputMatToConvert), planes);
}
else if (imageNumberOfChannels==3)
{
@ -455,11 +695,11 @@ bool Retina::_convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, std::
return imageNumberOfChannels>1; // return bool : false for gray level image processing, true for color mode
}
void Retina::clearBuffers() {_retinaFilter->clearAllBuffers();}
void RetinaImpl::clearBuffers() {_retinaFilter->clearAllBuffers();}
void Retina::activateMovingContoursProcessing(const bool activate){_retinaFilter->activateMovingContoursProcessing(activate);}
void RetinaImpl::activateMovingContoursProcessing(const bool activate){_retinaFilter->activateMovingContoursProcessing(activate);}
void Retina::activateContoursProcessing(const bool activate){_retinaFilter->activateContoursProcessing(activate);}
void RetinaImpl::activateContoursProcessing(const bool activate){_retinaFilter->activateContoursProcessing(activate);}
} // end of namespace cv

View File

@ -338,8 +338,11 @@ void RetinaColor::runColorDemultiplexing(const std::valarray<float> &multiplexed
}
// compute the gradient of the luminance
#ifdef MAKE_PARALLEL // call the TBB-based parallel gradient computation
cv::parallel_for_(cv::Range(2,_filterOutput.getNBrows()-2), Parallel_computeGradient(_filterOutput.getNBcolumns(), _filterOutput.getNBrows(), &(*_luminance)[0], &_imageGradient[0]));
#else
_computeGradient(&(*_luminance)[0]);
#endif
// adaptively filter the submosaics to get the adaptive densities, here the buffer _chrominance is used as a temp buffer
_adaptiveSpatialLPfilter(&_RGBmosaic[0], &_chrominance[0]);
_adaptiveSpatialLPfilter(&_RGBmosaic[0]+_filterOutput.getNBpixels(), &_chrominance[0]+_filterOutput.getNBpixels());

View File

@ -333,6 +333,53 @@ namespace cv
}
}
};
class Parallel_computeGradient: public cv::ParallelLoopBody
{
private:
float *imageGradient;
const float *luminance;
unsigned int nbColumns, doubleNbColumns, nbRows, nbPixels;
public:
Parallel_computeGradient(const unsigned int nbCols, const unsigned int nbRws, const float *lum, float *imageGrad)
:imageGradient(imageGrad), luminance(lum), nbColumns(nbCols), doubleNbColumns(2*nbCols), nbRows(nbRws), nbPixels(nbRws*nbCols){};
virtual void operator()( const Range& r ) const {
for (int idLine=r.start;idLine!=r.end;++idLine)
{
for (unsigned int idColumn=2;idColumn<nbColumns-2;++idColumn)
{
const unsigned int pixelIndex=idColumn+nbColumns*idLine;
// horizontal and vertical local gradients
const float verticalGrad=fabs(luminance[pixelIndex+nbColumns]-luminance[pixelIndex-nbColumns]);
const float horizontalGrad=fabs(luminance[pixelIndex+1]-luminance[pixelIndex-1]);
// neighborhood horizontal and vertical gradients
const float verticalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-doubleNbColumns]);
const float horizontalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-2]);
const float verticalGrad_n=fabs(luminance[pixelIndex+doubleNbColumns]-luminance[pixelIndex]);
const float horizontalGrad_n=fabs(luminance[pixelIndex+2]-luminance[pixelIndex]);
const float horizontalGradient=0.5f*horizontalGrad+0.25f*(horizontalGrad_p+horizontalGrad_n);
const float verticalGradient=0.5f*verticalGrad+0.25f*(verticalGrad_p+verticalGrad_n);
// compare local gradient means and fill the appropriate filtering coefficient value that will be used in adaptive filters
if (horizontalGradient<verticalGradient)
{
imageGradient[pixelIndex+nbPixels]=0.06f;
imageGradient[pixelIndex]=0.57f;
}
else
{
imageGradient[pixelIndex+nbPixels]=0.57f;
imageGradient[pixelIndex]=0.06f;
}
}
}
}
};
#endif
};
}

View File

@ -211,10 +211,10 @@ static void drawPlot(const cv::Mat curve, const std::string figureTitle, const i
*/
if (useLogSampling)
{
retina = new cv::Retina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
retina = cv::createRetina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
retina = new cv::Retina(inputImage.size());
retina = cv::createRetina(inputImage.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
retina->write("RetinaDefaultParameters.xml");

View File

@ -280,10 +280,10 @@ static void loadNewFrame(const std::string filenamePrototype, const int currentF
*/
if (useLogSampling)
{
retina = new cv::Retina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
retina = cv::createRetina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
retina = new cv::Retina(inputImage.size());
retina = cv::createRetina(inputImage.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
retina->write("RetinaDefaultParameters.xml");

View File

@ -111,10 +111,10 @@ int main(int argc, char* argv[]) {
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
myRetina = cv::createRetina(inputFrame.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");

View File

@ -39,7 +39,7 @@ int main(int argc, char* argv[]) {
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen tweak it with your own retina parameters."<<std::endl;
// basic input arguments checking
if (argc<2)
{
@ -100,10 +100,12 @@ int main(int argc, char* argv[]) {
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
{
myRetina = cv::createRetina(inputFrame.size());
}
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
@ -137,7 +139,7 @@ int main(int argc, char* argv[]) {
}
}catch(cv::Exception e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
std::cerr<<"Error using Retina or end of video sequence reached : "<<e.what()<<std::endl;
}
// Program end message