Table 1

| Model proposed | Hardware type | Method | Adaptability | Bioinspiration | Disparity range | Resolution | Power consumption |
|---|---|---|---|---|---|---|---|
| Shi and Tsang, 2003 [18] | Mixed analog & digital (Gabor filter chips, AER protocol, and a Xilinx CPLD) | Binocular energy model | Nonadaptive | Emulates disparity-tuned complex cells | 3 disparities | Low-level vision | — |
| Díaz et al., 2007 [13] | Digital (Gabor filters on an FPGA-based SoC usable in embedded systems) | Modified phase-based technique | Adaptive (can dynamically adjust the number of disparities) | Takes multiple disparity estimates and integrates the results to emulate computations by many neurons in parallel | Configurable depending on the image; max −24 to +24 | Subpixel | — |
| Shimonomura et al., 2008 [19] | Mixed analog & digital (aVLSI silicon retinas, Gabor chips representing simple cells, and an FPGA to compute disparity) | Energy model | Nonadaptive | Inspired by the hierarchical organization of simple and complex cells | 5 disparities | Low-level vision | 225 mW |
| Mandal et al., 2010 [20] | Mixed analog & digital (massively parallel SIMD current-mode analog matrix processor and an FPGA-based microcontroller) | Binocular energy model | Nonadaptive | Bioinspired through its use of the binocular energy model | 3 disparities | Low-level vision | 250 mW |
| Rogister et al., 2012 [21] | Digital & software (AER silicon retina for input; the remaining processing is done in software) | — | Nonadaptive | Inspired by the asynchronous event-based dynamics of the brain | — | Low-level vision | — |
| Our model | Pure analog, based on floating-gate MOSFETs (ts-WTA as the building block) | Position shift | Adaptive (the cell can learn any disparity during the learning phase) | Inspired by the brain's local hierarchical processing, cortical plasticity, and columnar architecture | 3 disparities, extendable to more | Low-level vision | 180 mW (learning phase); 60 mW (detection phase) |
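Several entries in the Method column ([18], [19], [20]) rely on the binocular energy model: quadrature pairs of Gabor-shaped simple-cell responses from the two eyes are summed and squared, and the resulting complex-cell energy peaks at the stimulus disparity. The sketch below is a minimal, generic 1-D illustration of that idea combined with a position-shift search, not the implementation of any system in the table; the function names (`gabor_pair`, `binocular_energy`) and all parameter values are illustrative assumptions.

```python
import numpy as np

def gabor_pair(size=21, sigma=4.0, freq=0.25):
    """Quadrature pair of 1-D Gabor filters (even/odd simple-cell receptive fields).
    Filter size, bandwidth, and frequency are illustrative choices."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def binocular_energy(left, right, shifts):
    """Complex-cell energy for each candidate disparity (position-shift variant)."""
    even, odd = gabor_pair(size=left.size)
    energies = []
    for d in shifts:
        r = np.roll(right, -d)        # undo a hypothesized rightward shift of d pixels
        e = left @ even + r @ even    # summed even simple-cell responses of both eyes
        o = left @ odd + r @ odd      # summed odd simple-cell responses of both eyes
        energies.append(e**2 + o**2)  # squared quadrature sum = binocular energy
    return np.array(energies)

# Toy stimulus: the right view is the left view displaced by 3 pixels,
# so the energy should peak at a candidate disparity of 3.
left = np.zeros(21)
left[10] = 1.0                        # a single bright point
right = np.roll(left, 3)
shifts = list(range(-5, 6))
est = shifts[int(np.argmax(binocular_energy(left, right, shifts)))]  # -> 3
```

In a 2-D system the same energy computation runs at every image location and the disparity map is read out from the per-location peak; the adaptive designs in the table differ mainly in whether the set of candidate disparities is fixed in hardware or learned/configured at run time.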