Figure 1: Geometric decontouring for a grayscale image. (a) An input grayscale image. (b) Converting (a) to a 3D mesh; note the false contours that arise. (c) Our network eliminates the false contours while preserving the fine details. (d-e) Close-ups of (b) and (c).
An example:
For details, please refer to our paper A Deep Residual Network for Geometric Decontouring.
Figure 2: Architecture of our network for geometric detail restoration from an 8-bit grayscale image. Our network consists of two convolutional layers and several residual blocks. The gray dots indicate repetition of the residual blocks. The output of our network is the predicted residual errors.
In this paper, we formulate the geometric decontouring as a constrained optimization problem from a geometric perspective.
Because surface shading is more sensitive to normal vectors than to height values,
we define the optimization objective based on the local orientation of the surface.
Therefore, given a grayscale image $h_g$, our goal is to restore the original height map $h_o$.
If the original normals are recovered, the fine geometric details are restored along with them.
To this end, we design a neural network equipped with an appropriate activation function.
To predict the rounding errors, our GDCNet naturally adopts a residual learning scheme.
Specifically, our network is trained to learn a residual mapping function $\Phi$.
Our GDCNet takes as input an 8-bit grayscale image $h_g$ and outputs the predicted residual errors.
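The residual scheme can be illustrated with a minimal NumPy sketch. The names below (`quantize_to_8bit`, `restore`, and the `oracle` standing in for a trained $\Phi$) are hypothetical, introduced only to show how rounding to 8 bits creates the errors and how adding a predicted residual back onto the input undoes them:

```python
import numpy as np

def quantize_to_8bit(h, h_min, h_max):
    """Map a float height map into [0, 255] and round, as in 8-bit storage.
    The rounding step is what introduces the staircase-like false contours."""
    g = (h - h_min) / (h_max - h_min) * 255.0
    return np.round(g)

def restore(h_g, predict_residual):
    """Residual learning scheme: the network predicts only the rounding
    errors, and the restored height map is the input plus that prediction."""
    return h_g + predict_residual(h_g)

# Toy example: a smooth ramp quantized to 8 bits.
h_o = np.linspace(0.0, 1.0, 256).reshape(16, 16)   # "original" heights
h_g = quantize_to_8bit(h_o, 0.0, 1.0)              # 8-bit grayscale input
oracle = lambda g: h_o * 255.0 - g                 # hypothetical perfect residual predictor
h_rec = restore(h_g, oracle)                       # recovers h_o (in 8-bit scale)
```

With a perfect residual predictor the restoration is exact; the network's job is to approximate that residual from the quantized input alone.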
The primary task of our network is to restore the details lost to the rounding errors. An 8-bit grayscale image here is actually a height map, which can be viewed as a 2-manifold surface, and the local orientation of a surface plays an important role in the amount of light it reflects. This orientation at any point $(u,v)$ is given by the unit surface normal

$$ \mathbf{n}(h(u,v)) = \frac{\left(-\frac{\partial h}{\partial u},\, -\frac{\partial h}{\partial v},\, 1\right)}{\left\|\left(-\frac{\partial h}{\partial u},\, -\frac{\partial h}{\partial v},\, 1\right)\right\|_2}, $$

where the derivatives are discretely approximated with the forward difference over the nearest neighboring pixels. Given a set of height pairs $(h_g, h_o)$, we learn the parameters of the network $\Phi$ by minimizing
$$ \begin{split} \mathbf{L} &= \sum_{(u,v) \in \Omega} \left\| \mathbf{n}(h_o(u,v)) - \mathbf{n}\big(h_g(u,v)+\Phi(h_g(u,v))\big) \right\|_2^2 \\ &= \sum_{(u,v) \in \Omega} \left\| \mathbf{n}(h_o(u,v)) - \mathbf{n}(\phi(u,v)) \right\|_2^2 \end{split} $$
where $\phi(u,v) = h_g(u,v) + \Phi(h_g(u,v))$ denotes the restored height map and $\Omega$ is the image domain.
Driven by this loss function, our network is trained to repair the normal vectors over the local region of each pixel in the grayscale image, thereby indirectly repairing the height map.
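The normal-based loss above can be sketched in NumPy. This is an illustrative reference implementation, not the training code: `normals` follows the standard height-field normal $(-\partial h/\partial u, -\partial h/\partial v, 1)$ with forward differences (last row/column replicated, a boundary choice assumed here), and `decontour_loss` evaluates $\mathbf{L}$ for a given residual:

```python
import numpy as np

def normals(h):
    """Unit surface normals of a height map, n proportional to
    (-dh/du, -dh/dv, 1), with derivatives approximated by forward
    differences over the nearest neighboring pixels."""
    hu = np.diff(h, axis=1, append=h[:, -1:])   # forward difference along u
    hv = np.diff(h, axis=0, append=h[-1:, :])   # forward difference along v
    n = np.stack([-hu, -hv, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def decontour_loss(h_o, h_g, residual):
    """Squared L2 distance between the normals of the original height map
    h_o and those of the restored map h_g + residual, summed over pixels."""
    diff = normals(h_o) - normals(h_g + residual)
    return float(np.sum(diff ** 2))

# A smooth surface and its quantized (contoured) version.
h_o = np.outer(np.linspace(0.0, 4.0, 32), np.linspace(0.0, 4.0, 32))
h_g = np.round(h_o)

l_quantized = decontour_loss(h_o, h_g, np.zeros_like(h_g))  # contours distort normals
l_perfect = decontour_loss(h_o, h_g, h_o - h_g)             # near zero for a perfect residual
```

Note that the zero residual leaves a large normal discrepancy even though the height error is at most half a quantization step, which is exactly why the loss is defined on normals rather than on heights.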
Figure 3: Some samples of our Height-Grayscale Dataset. The images shown in the top row are height maps, and the images in the bottom row are the corresponding grayscale images. A height map encodes the geometric details, so here we render it as a surface.
The structure tree of the dataset:

```
./Height-Grayscale-Dataset
├── gray
│   ├── 0.png
│   └── ...
├── height
│   ├── 0.mat
│   └── ...
└── mask
    ├── 0.png
    └── ...
```
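A minimal loader for one sample of this layout might look as follows. This is a sketch under stated assumptions: the variable name `"height"` inside the `.mat` files and the `load_sample` helper are hypothetical, and it uses SciPy and Pillow for I/O (the snippet builds a tiny synthetic sample so it runs standalone):

```python
import os, tempfile
import numpy as np
from scipy.io import loadmat, savemat
from PIL import Image

def load_sample(root, index):
    """Load (gray, height, mask) for sample `index` from the dataset tree."""
    gray = np.asarray(Image.open(os.path.join(root, "gray", f"{index}.png")))
    mat = loadmat(os.path.join(root, "height", f"{index}.mat"))
    height = mat["height"]  # assumed variable name inside the .mat file
    mask = np.asarray(Image.open(os.path.join(root, "mask", f"{index}.png"))) > 0
    return gray, height, mask

# Build a tiny synthetic sample mimicking the directory layout above.
root = tempfile.mkdtemp()
for sub in ("gray", "height", "mask"):
    os.makedirs(os.path.join(root, sub))
h = np.random.rand(8, 8)  # float height map
Image.fromarray(np.round(h * 255).astype(np.uint8)).save(os.path.join(root, "gray", "0.png"))
savemat(os.path.join(root, "height", "0.mat"), {"height": h})
Image.fromarray(np.full((8, 8), 255, np.uint8)).save(os.path.join(root, "mask", "0.png"))

gray, height, mask = load_sample(root, 0)
```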
The dataset can be downloaded here.
If you find our work or Dataset useful to your research, please consider citing:
@article{pg2020GDCNet,
journal = {Computer Graphics Forum},
title = {A Deep Residual Network for Geometric Decontouring},
author = {Ji, Zhongping and Zhou, Chengqin and Zhang, Qiankan and Zhang, Yu-Wei and Wang, Wenping},
volume = {39},
number = {7},
pages = {27--41},
year = {2020},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14124}
}