International Journal of Biomedical Imaging

Volume 2015 (2015), Article ID 109804, 11 pages

http://dx.doi.org/10.1155/2015/109804

## Recovering 3D Shape with Absolute Size from Endoscope Images Using RBF Neural Network

^{1}Department of Computer Science, Chubu University, 1200 Matsumotocho, Kasugai 487-8501, Japan
^{2}Department of Electronics and Electrical Engineering, IIT Guwahati, Guwahati 781039, India
^{3}Department of Computer Science, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
^{4}Department of Gastroenterology, Aichi Medical University, 1-1 Karimata, Yazako, Nagakute 480-1195, Japan

Received 31 October 2014; Accepted 10 March 2015

Academic Editor: Richard H. Bayford

Copyright © 2015 Seiya Tsuda et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In medical diagnosis, the status of a polyp is judged from its size and 3D shape as observed in a medical endoscope image. However, the medical doctor makes this judgment empirically from the 2D endoscope image, and more accurate 3D shape recovery from the 2D image is in demand to support the judgment. As a fast method of shape recovery, the VBW (Vogel-Breuß-Weickert) model has been proposed to recover 3D shape under the conditions of point light source illumination and perspective projection. However, the VBW model recovers only a relative shape; the shape cannot be recovered with its exact size. Here, a shape modification step is introduced to recover the exact shape from the VBW result. An RBF neural network (RBF-NN) is introduced to learn the mapping between input and output: the input is the surface gradient parameters output by the VBW model for a generated sphere, and the output is the true gradient parameters of that sphere. The learned mapping modifies the gradient, and the depth is recovered from the modified gradient parameters. The performance of the proposed approach is confirmed via computer simulation and real experiments.

#### 1. Introduction

Endoscopy allows medical practitioners to observe the interior of hollow organs and other body cavities in a minimally invasive way. Sometimes, diagnosis requires assessment of the 3D shape of the observed tissue. For example, the pathological condition of a polyp is often related to its geometrical shape. Medicine is an important area of application for computer vision technology. Specialized endoscopes with a laser light beam head [1] or with two cameras mounted in the head [2] have been developed, and many approaches are based on stereo vision [3]; however, such designs enlarge the endoscope and impose a burden on the patient. Here, we consider a general purpose endoscope, of the sort still most widely used in medical practice.

Here, shape recovery from an endoscope image is considered. Shape from shading (SFS) [4] and an SFS approach [6] based on the Fast Marching Method (FMM) [5] have been proposed. These approaches assume orthographic projection, while an extension of FMM to perspective projection is proposed in [7] and a further extension to both point light source illumination and perspective projection in [8]. Recent extensions include generating a Lambertian image from the original multiple color images [9, 10]. Applications of FMM include a solution [11] to the oblique light source problem using neural network learning [12].

Iwahori et al. [13] developed Radial Basis Function Neural Network (RBF-NN) photometric stereo; RBF-NN is a powerful tool for multidimensional nonparametric approximation of an input-output mapping.

Recently, the VBW model [14], which is based on solving the Hamilton-Jacobi equation, has been proposed to recover shape from an image taken under the conditions of point light source illumination and perspective projection. However, the result recovered by the VBW model is relative: it gives much smaller values of surface gradient and height distribution than the true values. That is, the VBW model alone cannot obtain the exact shape and size.

This paper proposes a new approach to recover 3D shape with absolute size from a 2D image taken under both point light source illumination and perspective projection. While the VBW model recovers only a relative shape at relative scale, the proposed approach obtains absolute depth by modifying the surface gradient with an RBF neural network. The final purpose of this approach is to support the medical diagnosis of whether a polyp is benign or malignant by recovering its 3D shape with absolute size.

The proposed approach generates a Lambertian sphere model. The VBW model is applied to the generated sphere and its shape is recovered. An RBF-NN is then trained on this sphere to improve the accuracy of the recovered shape: the input of the neural network is the surface gradient parameters obtained via the VBW model, and the output is the corresponding true values.

The proposed approach is evaluated via computer simulation and real experiments, and it is confirmed that the obtained shape is improved.

#### 2. VBW Model

The VBW model [14] calculates the depth from the viewpoint under the conditions of point light source illumination and perspective projection by solving the Hamilton-Jacobi equations [15] combined with the models of Faugeras and Prados [16, 17]. As a further condition, Lambertian reflectance is assumed for the target object.

The following processing is applied at each point of the image. First, the initial value for the depth is given using (1) as in [18], where I represents the normalized image intensity and f is the focal length of the lens.

Next, the combination of gradient parameters which gives the minimum gradient is selected from the depth differences of neighboring points. The depth is then calculated from (2), and the process is repeated until the depth no longer changes from that at the previous stage. Here, (x, y) represent the image coordinates, Δt represents the time step, (p, q) represent the minimum gradients in the x and y directions, and f represents the coefficient of the perspective projection.
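The iterate-until-the-depth-stops-changing structure described above can be sketched as a generic fixed-point loop. This is only an illustration of the convergence criterion; `update_step` is a placeholder for the actual Hamilton-Jacobi depth update of (2), which is not reproduced here.

```python
import numpy as np

def vbw_iterate(depth0, update_step, dt=0.1, tol=1e-6, max_iter=10000):
    """Repeat a depth update until the depth map stops changing,
    as in the VBW iteration. `update_step(depth, dt)` stands in for
    the actual update of Eq. (2)."""
    depth = depth0.astype(float)
    for _ in range(max_iter):
        new_depth = update_step(depth, dt)
        # stop when the depth no longer changes from the previous stage
        if np.max(np.abs(new_depth - depth)) < tol:
            return new_depth
        depth = new_depth
    return depth
```

For example, a toy update that relaxes the depth toward a target map converges under this stopping rule.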

Note that the shape obtained via the VBW model has a relative scale, not an absolute one. This means that the obtained surface gradients are smaller than the actual values.

#### 3. Proposed Approach

##### 3.1. NN Learning for Modification of Surface Gradient

When uniform Lambertian reflectance is assumed, the intensity depends on the dot product of the surface normal vector and the light source vector, combined with the inverse square law of illuminance. The image intensity of the surface is determined as

E = ρ (s · n) / r²  (3)

where E is the image intensity, ρ is the reflectance parameter, s is the unit vector toward the point light source, n is the unit surface normal vector, and r is the distance between the point light source and the surface point.
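Equation (3) can be evaluated directly for a single surface point. The following minimal sketch assumes the light source at the origin, so the direction toward the light is the negated position vector of the surface point.

```python
import numpy as np

def lambertian_intensity(point, normal, rho=1.0):
    """Image intensity of a Lambertian surface point lit by a point
    light source at the origin, Eq. (3): E = rho * (s . n) / r^2."""
    point = np.asarray(point, float)
    r = np.linalg.norm(point)      # distance to the light source
    s = -point / r                 # unit vector toward the source
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)      # unit surface normal
    # clamp the dot product: back-facing points receive no light
    return rho * max(float(s @ n), 0.0) / r**2
```

For a point at distance 2 on the optical axis whose normal faces the source, this gives ρ · 1 / 4.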

The basic assumption is that both the point light source and the center of the lens are located at the origin of the coordinate system, and the image projection is perspective projection. That is, the object is viewed and illuminated from the same viewpoint. An actual endoscope image has color textures and specular reflectance; the approach proposed in [19] can convert the original input image into a uniform Lambertian gray scale image.

The VBW model gives a relative result with respect to the true size and shape, and it assumes that a Lambertian image is used as the recovery target. The result gives smaller values of the surface gradient and the depth than the true ones. Here, modification of the surface gradient and improvement of the recovered shape are considered. First, the surface gradient (p, q) at each point is modified with a neural network (NN), and then the depth is recovered from the modified surface gradient parameters. An RBF-NN (Radial Basis Function Neural Network) [12] is used to learn the modification of the surface gradient obtained by the VBW model.
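The gradient-modification mapping can be sketched as a small RBF network from VBW gradient estimates (p, q) to true gradients. This is a minimal illustration with Gaussian basis functions at fixed centers and output weights solved by linear least squares, a common RBF-NN training scheme; the paper's exact network configuration (number of centers, basis width) is not specified here.

```python
import numpy as np

def _rbf_design(X, centers, sigma):
    # Gaussian RBF activations for every (sample, center) pair
    d2 = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, Y, centers, sigma=0.5):
    """Fit output weights mapping VBW gradients X=(p, q) to true
    gradients Y by linear least squares over fixed RBF centers."""
    Phi = _rbf_design(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def rbf_predict(X, centers, W, sigma=0.5):
    """Modified gradient parameters for new VBW estimates X."""
    return _rbf_design(X, centers, sigma) @ W
```

As a sanity check, a network trained on samples where the true gradient is a uniform scaling of the VBW estimate reproduces that scaling on the training points.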

Expanding (3) with the perspective projection parameters derives (4), where (x, y) are the image coordinates, f is the focal length of the lens, and Z is the depth.

A sphere image is synthesized using (4), and the VBW model is applied to this sphere image. Surface gradient parameters (p, q) are obtained using forward differences of the depth obtained from the VBW model. The calculated (p, q) and the corresponding true (p, q) for the synthesized sphere are given to the RBF-NN as the input vector and the output vector, respectively, and NN learning is applied. After learning, the NN can be used to modify the recovered shape for other images. An original endoscope image is shown in Figure 1(a), and the Lambertian image generated using [19] is shown in Figure 1(b) as an example.
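The sphere synthesis step can be sketched by applying (3) directly under perspective projection with the light source and lens center at the origin: for each pixel, intersect the viewing ray with the sphere and evaluate the Lambertian intensity with inverse-square attenuation. All numeric parameters below (image size, focal length, sphere position and radius, reflectance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def render_sphere(size=65, f=100.0, center_z=150.0, radius=30.0, rho=2.0e4):
    """Synthesize a Lambertian sphere image lit from the viewpoint,
    using E = rho * (s . n) / r^2 (Eq. (3))."""
    img = np.zeros((size, size))
    half = size // 2
    c = np.array([0.0, 0.0, center_z])     # sphere center on the optical axis
    for v in range(size):
        for u in range(size):
            x, y = u - half, v - half
            d = np.array([x, y, f], float)
            d /= np.linalg.norm(d)         # unit viewing ray through pixel
            # ray-sphere intersection: |t*d - c|^2 = radius^2
            b = d @ c
            disc = b ** 2 - (c @ c - radius ** 2)
            if disc < 0:
                continue                   # ray misses the sphere
            t = b - np.sqrt(disc)          # nearest intersection
            p = t * d                      # surface point
            n = (p - c) / radius           # outward unit normal
            s = -p / np.linalg.norm(p)     # unit vector toward the light
            img[v, u] = rho * max(float(s @ n), 0.0) / float(p @ p)
    return img
```

The brightest pixel lies at the image center, where the surface faces the light directly and is closest to it, and intensity falls off toward the sphere's limb.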