<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Project Showcase for 3D Vision - FiG-NeRF Style.">
<title>Project Showcase</title>
<link href="https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css">
<style>
body {
font-family: 'Roboto', sans-serif;
margin: 0;
padding: 0;
background-color: #f4f4f4;
color: #333;
}
header {
text-align: center;
padding: 20px 0;
background-color: #ffffff;
border-bottom: 2px solid #e0e0e0;
}
header h1 {
font-size: 2.5rem;
margin: 0;
}
header .subtitle {
font-size: 1.2rem;
color: #555;
margin-top: 5px;
}
main {
max-width: 800px;
margin: 50px auto;
padding: 0 20px;
background-color: #fff;
box-shadow: 0 4px 10px rgba(0,0,0,0.1);
border-radius: 10px;
}
main h2 {
text-align: center;
color: #007BFF;
font-size: 1.75rem;
margin-bottom: 20px;
}
.authors {
text-align: center;
margin-bottom: 10px;
}
.authors span {
margin-right: 10px;
}
.conference {
text-align: center;
font-weight: bold;
color: #e74c3c;
font-size: 1.25rem;
margin-top: 5px;
}
.arxiv-btn {
display: inline-block;
margin-top: 10px;
background-color: #333;
color: #fff;
padding: 10px 20px;
border-radius: 25px;
text-decoration: none;
font-size: 0.9rem;
font-weight: 500;
}
.arxiv-btn:hover {
background-color: #555;
}
.project-image {
text-align: center;
margin: 20px 0;
}
.project-image img {
max-width: 100%;
border-radius: 10px;
}
.abstract-section {
padding: 20px;
}
.abstract-section h3 {
font-size: 1.5rem;
margin-bottom: 15px;
text-align: center;
}
.abstract-section p {
font-size: 1.1rem;
line-height: 1.6;
text-align: justify;
}
footer {
text-align: center;
padding: 20px;
background-color: #333;
color: #fff;
}
footer p {
margin: 0;
font-size: 0.9rem;
}
footer a {
color: #007BFF;
text-decoration: none;
}
</style>
</head>
<body>
<header>
<h1>FiG-NeRF</h1>
<p class="subtitle">Figure-Ground Neural Radiance Fields for 3D Object Category Modelling</p>
</header>
<main>
<h2>Presented by:</h2>
<div class="authors">
<span>Christopher Xie<sup>1</sup></span>,
<span>Keunhong Park<sup>1</sup></span>,
<span>Ricardo Martin-Brualla<sup>2</sup></span>,
<span>Matthew Brown<sup>2</sup></span>
</div>
<div class="authors">
<span><sup>1</sup>University of Washington</span>,
<span><sup>2</sup>Google Research</span>
</div>
<div class="conference">
International Conference on 3D Vision (3DV), 2021
</div>
<a href="https://arxiv.org/abs/2011.12948" class="arxiv-btn" target="_blank">
<i class="ai ai-arxiv"></i> arXiv
</a>
<div class="project-image">
<img src="path_to_your_image.jpg" alt="Project Image">
<p>FiG-NeRF can learn high quality 3D object category models from casually captured images of objects.</p>
</div>
<div class="abstract-section">
<h3>Abstract</h3>
<p>
We investigate the use of Neural Radiance Fields (NeRF) to learn high quality 3D object category models from collections of input images.
In contrast to previous work, we are able to do this whilst simultaneously separating foreground objects from their varying backgrounds.
We achieve this via a 2-component NeRF model, FiG-NeRF, that prefers explanation of the scene as a geometrically constant background and a deformable foreground that represents the object category.
</p>
<p>
Our method produces accurate and crisp amodal segmentations, and we show it can be used for photorealistic rendering from arbitrary viewpoints.
We evaluate it on casually captured images of objects with complex backgrounds, demonstrating high-fidelity reconstructions.
</p>
</div>
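<div class="abstract-section">
<h3>Method Sketch</h3>
<p>
To make the two-component idea above concrete, the snippet below is a minimal, illustrative sketch of how a foreground field and a background field can be composited along a single ray, with the per-sample foreground weight yielding a soft segmentation mask. It is not the authors' released code: the array inputs and the <code>composite_two_nerfs</code> function are hypothetical stand-ins for the paper's actual networks and deformation model.
</p>
<pre style="background-color:#f8f8f8; padding:15px; border-radius:10px; overflow-x:auto; font-size:0.9rem;"><code>import numpy as np

def composite_two_nerfs(sigma_fg, rgb_fg, sigma_bg, rgb_bg, deltas):
    """Volume-render one ray from a foreground and a background field.

    sigma_*: (N,) densities sampled along the ray
    rgb_*:   (N, 3) colors at the same samples
    deltas:  (N,) distances between adjacent samples
    """
    eps = 1e-10
    # Densities add; each sample's color is the density-weighted
    # mixture of the two component colors.
    sigma = sigma_fg + sigma_bg
    rgb = (sigma_fg[:, None] * rgb_fg
           + sigma_bg[:, None] * rgb_bg) / (sigma[:, None] + eps)

    # Standard NeRF alpha compositing along the ray.
    alpha = 1.0 - np.exp(-sigma * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + eps]))
    weights = alpha * transmittance

    pixel_color = (weights[:, None] * rgb).sum(axis=0)
    # The fraction of each sample's weight owed to the foreground gives
    # a soft foreground mask, i.e. a segmentation estimate for this pixel.
    fg_mask = (weights * sigma_fg / (sigma + eps)).sum()
    return pixel_color, fg_mask
</code></pre>
<p>
In this scheme the background field would be shared and static across the capture, while the foreground field varies per object instance; the paper's foreground deformation model is omitted here for brevity.
</p>
</div>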
</main>
<footer>
<p>© 2024 Your Name. All Rights Reserved. | <a href="https://yourwebsite.com">Your Website</a></p>
</footer>
</body>
</html>