<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-JGJP9W0E2J"></script>
<script>
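// Standard gtag.js bootstrap: gtag() pushes commands onto window.dataLayer,
// where they queue until the async gtag.js script above loads and processes them.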
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-JGJP9W0E2J');
</script>
<title>David Recasens</title>
<meta name="author" content="David Recasens">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/seal_icon.png">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<strong><em><name>David Recasens</name></em></strong>
</p>
<p>I am a PhD student in computer vision and deep learning. My research focuses on 3D reconstruction, camera odometry, 2D-3D flow, and depth and uncertainty estimation in monocular non-rigid dynamic scenes, such as endoscopies.
</p>
<p>
I am pursuing my PhD (started in January 2021) under the guidance of <a href="https://scholar.google.com/citations?user=j_sMzokAAAAJ&hl=es">Prof. Javier Civera</a> at the <a href="http://robots.unizar.es/">Robotics Lab</a> of the <a href="http://www.unizar.es/university-zaragoza">Universidad de Zaragoza</a>. From September 2021 to March 2022 I was a visiting researcher at the <a href="https://cvg.ethz.ch/">Computer Vision and Geometry Group (CVG)</a> at <a href="https://ethz.ch/en.html">ETH Zurich</a>, Switzerland, directed by <a href="https://scholar.google.com/citations?user=YYH0BjEAAAAJ&hl=en">Prof. Marc Pollefeys</a> and supervised by <a href="https://scholar.google.de/citations?user=biytQP8AAAAJ&hl=en">Martin R. Oswald</a>.
</p>
<p style="text-align:center">
<a href="mailto:recasens@unizar.es">Email</a>  / 
<a href="https://www.linkedin.com/in/david-recasens-lafuente/">LinkedIn</a>  / 
<!-- <a href="data/JonBarron-CV.pdf">CV</a>  /  -->
<!-- <a href="data/JonBarron-bio.txt">Bio</a>  /  -->
<a href="https://scholar.google.es/citations?user=Q1ocp7wAAAAJ&hl=es">Google Scholar</a>  / 
<a href="https://github.com/DavidRecasens/">Github</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/profile_photo_circle_cropped.png"><img style="width:100%;max-width:100%" alt="profile photo" src="images/profile_photo_circle_cropped.png" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<!-- ##### Paper 3 ##### -->
<tr onmouseout="edam_stop()" onmouseover="edam_start()">
<td width="33%" valign="top">
<img src="images/drunkards.gif" width="100%" style="border-style: none">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<strong><em><papertitle>The Drunkard’s Odometry: Estimating Camera Motion in Deforming Scenes</papertitle></em></strong>
<br>
<strong>David Recasens</strong>,
<a href="http://people.inf.ethz.ch/moswald/">Martin R. Oswald</a>,
<a href="https://people.inf.ethz.ch/pomarc/">Marc Pollefeys</a>,
<a href="https://janovas.unizar.es/sideral/CV/javier-civera-sancho">Javier Civera</a>
<br>
<a href="https://nips.cc/"><em>NeurIPS</em></a>, 2023
<br>
<a href="https://davidrecasens.github.io/TheDrunkard'sOdometry/">project page</a> /
<a href="https://arxiv.org/abs/2306.16917">arXiv paper</a> /
<a href="https://youtu.be/wL8JDg6bemg">video</a> /
<a href="https://github.com/UZ-SLAMLab/DrunkardsOdometry">GitHub</a>
<p></p>
<p>We present the <em>Drunkard’s Dataset</em>, a challenging collection of synthetic data targeting visual navigation and reconstruction in deformable environments, and the <em>Drunkard’s Odometry</em>, a novel monocular RGB-D deformable odometry method that decomposes the estimated optical flow into rigid-body camera motion and non-rigid scene deformation.
</p>
</td>
</tr>
<!-- ##### Paper 2 ##### -->
<tr>
<td width="33%" valign="top">
<img src="images/Uncertain-Endo-Depths.gif" width="100%" style="border-style: none">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<strong><em><papertitle>On the Uncertain Single-View Depths in Endoscopies</papertitle></em></strong>
<br>
<a href="https://jrodriguezpuigvert.github.io/">Javier Rodríguez-Puigvert</a>,
<strong>David Recasens</strong>,
<a href="https://janovas.unizar.es/sideral/CV/javier-civera-sancho">Javier Civera</a>,
<a href="http://webdiis.unizar.es/~rmcantin/">Rubén Martínez-Cantín</a>
<br>
<a href="https://conferences.miccai.org/2022/en/"><em>MICCAI</em></a>, 2022
<br>
<a href="https://sites.google.com/unizar.es/uncertain-depth-endoscopies">project page</a> /
<a href="https://link.springer.com/chapter/10.1007/978-3-031-16437-8_13">MICCAI2022 paper</a> /
<a href="https://arxiv.org/abs/2112.08906">arXiv paper</a> /
<a href="https://drive.google.com/file/d/15i_TyPTq6DHQF4gVIfa7RZ6Pq_Ql4ipk/view?usp=sharing">video demo</a>
<p></p>
<p>The first in-depth study of Bayesian deep networks for single-view depth estimation in colonoscopies.
</p>
</td>
</tr>
<!-- ##### Paper 1 ##### -->
<tr>
<td width="33%" valign="top">
<img src="images/Endo-Depth-and-Motion.gif" width="100%" style="border-style: none">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<strong><em><papertitle>Endo-Depth-and-Motion: Reconstruction and Tracking in Endoscopic Videos using Depth Networks and Photometric Constraints</papertitle></em></strong>
<br>
<strong>David Recasens</strong>,
<a href="https://webdiis.unizar.es/~jlamarca/">José Lamarca</a>,
<a href="https://webdiis.unizar.es/~jmfacil/">José M. Fácil</a>,
<a href="https://janovas.unizar.es/sideral/CV/jose-maria-martinez-montiel">José María M. Montiel</a>, <br>
<a href="https://janovas.unizar.es/sideral/CV/javier-civera-sancho">Javier Civera</a>
<br>
<a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7083369"><em>RA-L</em></a> <em>and</em> <a href="https://www.iros2021.org/"><em>IROS</em></a>, 2021
<br>
<a href="https://davidrecasens.github.io/EndoDepthAndMotion/">project page</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/9478277">RA-L paper</a> /
<a href="https://arxiv.org/abs/2103.16525">arXiv paper</a> /
<a href="https://youtu.be/YfXkK9R0htE">IROS 2021 video presentation</a> /
<a href="https://youtu.be/G1XWIyEbvPc">video demo</a> /
<a href="https://github.com/UZ-SLAMLab/Endo-Depth-and-Motion">GitHub</a>
<p></p>
<p>A pipeline that estimates 6-degree-of-freedom camera poses and dense 3D scene models from monocular endoscopic videos.
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;font-size:small;">
<br>
Based on <a href="https://github.com/jonbarron/website">Jon Barron</a>'s template.
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>