<!DOCTYPE html>
<html class="theme-next muse use-motion" lang>
<head><meta name="generator" content="Hexo 3.8.0">
<!-- hexo-inject:begin --><!-- hexo-inject:end --><meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<meta name="theme-color" content="#222">
<meta http-equiv="Cache-Control" content="no-transform">
<meta http-equiv="Cache-Control" content="no-siteapp">
<link href="/lib/fancybox/source/jquery.fancybox.css?v=2.1.5" rel="stylesheet" type="text/css">
<link href="/lib/font-awesome/css/font-awesome.min.css?v=4.6.2" rel="stylesheet" type="text/css">
<link href="/css/main.css?v=5.1.4" rel="stylesheet" type="text/css">
<link rel="apple-touch-icon" sizes="180x180" href="/images/apple-touch-icon-next.png?v=5.1.4">
<link rel="icon" type="image/png" sizes="32x32" href="/images/favicon-32x32-next.png?v=5.1.4">
<link rel="icon" type="image/png" sizes="16x16" href="/images/favicon-16x16-next.png?v=5.1.4">
<link rel="mask-icon" href="/images/logo.svg?v=5.1.4" color="#222">
<meta name="keywords" content="Hexo, NexT">
<meta property="og:type" content="website">
<meta property="og:title" content="Wenyan Cong">
<meta property="og:url" content="http://yoursite.com/index.html">
<meta property="og:site_name" content="Wenyan Cong">
<meta property="og:locale" content="default">
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="Wenyan Cong">
<script type="text/javascript" id="hexo.configurations">
var NexT = window.NexT || {};
var CONFIG = {
root: '/',
scheme: 'Muse',
version: '5.1.4',
sidebar: {"position":"left","display":"post","offset":12,"b2t":false,"scrollpercent":false,"onmobile":false},
fancybox: true,
tabs: true,
motion: {"enable":true,"async":false,"transition":{"post_block":"fadeIn","post_header":"slideDownIn","post_body":"slideDownIn","coll_header":"slideLeftIn","sidebar":"slideUpIn"}},
duoshuo: {
userId: '0',
author: 'Author'
},
algolia: {
applicationID: '',
apiKey: '',
indexName: '',
hits: {"per_page":10},
labels: {"input_placeholder":"Search for Posts","hits_empty":"We didn't find any results for the search: ${query}","hits_stats":"${hits} results found in ${time} ms"}
}
};
</script>
<link rel="canonical" href="http://yoursite.com/">
<title>Wenyan Cong</title><!-- hexo-inject:begin --><!-- hexo-inject:end -->
</head>
<body itemscope itemtype="http://schema.org/WebPage" lang="default">
<!-- hexo-inject:begin --><!-- hexo-inject:end --><div class="container sidebar-position-left
page-home">
<div class="headband"></div>
<header id="header" class="header" itemscope itemtype="http://schema.org/WPHeader">
<div class="header-inner"><div class="site-brand-wrapper">
<div class="site-meta ">
<div class="custom-logo-site-title">
<a href="/" class="brand" rel="start">
<span class="logo-line-before"><i></i></span>
<span class="site-title">Wenyan Cong</span>
<span class="logo-line-after"><i></i></span>
</a>
</div>
<p class="site-subtitle"></p>
</div>
<div class="site-nav-toggle">
<button>
<span class="btn-bar"></span>
<span class="btn-bar"></span>
<span class="btn-bar"></span>
</button>
</div>
</div>
<nav class="site-nav">
<ul id="menu" class="menu">
<li class="menu-item menu-item-home">
<a href="/" rel="section">
<i class="menu-item-icon fa fa-fw fa-home"></i> <br>
Home
</a>
</li>
<li class="menu-item menu-item-tags">
<a href="/tags/" rel="section">
<i class="menu-item-icon fa fa-fw fa-tags"></i> <br>
Tags
</a>
</li>
<li class="menu-item menu-item-categories">
<a href="/categories/" rel="section">
<i class="menu-item-icon fa fa-fw fa-th"></i> <br>
Categories
</a>
</li>
<li class="menu-item menu-item-archives">
<a href="/archives/" rel="section">
<i class="menu-item-icon fa fa-fw fa-archive"></i> <br>
Archives
</a>
</li>
<li class="menu-item menu-item-search">
<a href="javascript:;" class="popup-trigger">
<i class="menu-item-icon fa fa-search fa-fw"></i> <br>
Search
</a>
</li>
</ul>
<div class="site-search">
<div class="popup search-popup local-search-popup">
<div class="local-search-header clearfix">
<span class="search-icon">
<i class="fa fa-search"></i>
</span>
<span class="popup-btn-close">
<i class="fa fa-times-circle"></i>
</span>
<div class="local-search-input-wrapper">
<input autocomplete="off" placeholder="Searching..." spellcheck="false" type="text" id="local-search-input">
</div>
</div>
<div id="local-search-result"></div>
</div>
</div>
</nav>
</div>
</header>
<main id="main" class="main">
<div class="main-inner">
<div class="content-wrap">
<div id="content" class="content">
<section id="posts" class="posts-expand">
<article class="post post-type-normal" itemscope itemtype="http://schema.org/Article">
<div class="post-block">
<link itemprop="mainEntityOfPage" href="http://yoursite.com/2019/05/25/DailyReading-20200525/">
<span hidden itemprop="author" itemscope itemtype="http://schema.org/Person">
<meta itemprop="name" content>
<meta itemprop="description" content>
<meta itemprop="image" content="/images/avatar.gif">
</span>
<span hidden itemprop="publisher" itemscope itemtype="http://schema.org/Organization">
<meta itemprop="name" content="Wenyan Cong">
</span>
<header class="post-header">
<h1 class="post-title" itemprop="name headline">
<a class="post-title-link" href="/2019/05/25/DailyReading-20200525/" itemprop="url">Daily Reading 20200525</a></h1>
<div class="post-meta">
<span class="post-time">
<span class="post-meta-item-icon">
<i class="fa fa-calendar-o"></i>
</span>
<span class="post-meta-item-text">Posted on</span>
<time title="Post created" itemprop="dateCreated datePublished" datetime="2019-05-25T20:46:22+08:00">
2019-05-25
</time>
</span>
<span class="post-category">
<span class="post-meta-divider">|</span>
<span class="post-meta-item-icon">
<i class="fa fa-folder-o"></i>
</span>
<span class="post-meta-item-text">In</span>
<span itemprop="about" itemscope itemtype="http://schema.org/Thing">
<a href="/categories/Paper-Note/" itemprop="url" rel="index">
<span itemprop="name">Paper Note</span>
</a>
</span>
</span>
</div>
</header>
<div class="post-body" itemprop="articleBody">
<h3 id="Image-to-image-translation-for-cross-domain-disentanglement"><a href="#Image-to-image-translation-for-cross-domain-disentanglement" class="headerlink" title="Image-to-image translation for cross-domain disentanglement"></a>Image-to-image translation for cross-domain disentanglement</h3><p>posted on: NIPS2018</p>
<p>In this paper, they combine image translation and domain disentanglement and propose the concept of cross-domain disentanglement. They separate the latent representation into a shared part and an exclusive part: the shared part contains information common to both domains, while the exclusive part contains only the factors of variation particular to each domain. Their network consists of image translation modules and cross-domain auto-encoders; the image translation modules follow an encoder-decoder architecture. </p>
<ul>
<li><p>Given an input image, the encoder outputs a latent representation that is further separated into a shared part S and an exclusive part E. To guarantee correct disentanglement they use two techniques (see the gradient-reversal sketch after this list): 1) based on the intuition that reconstructing Y-domain images from Ex should be impossible, they attach a small decoder and apply a gradient reversal layer (GRL) at its first layers; adversarial training then forces Ex to contain exclusive features only. 2) To constrain the shared features of both domains to carry the same information, they apply an L1 loss between them and add noise to suppress small signals. </p></li>
<li><p>During disentangling, since higher-resolution features contain both shared and exclusive information, they relax the bottleneck by increasing the size of the latent representation when encoding the shared part, and use fully connected layers for the exclusive part.</p></li>
<li><p>The decoder takes as input the shared representation and a random noise vector that serves as the exclusive part. To enforce that the exclusive features and the noise have similar distributions, a discriminator pushes the distribution of Ex toward N(0,1); and to keep the noise from being ignored, they reconstruct the latent representation under an L1 loss.</p></li>
</ul>
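<p>A minimal sketch of the gradient reversal layer (GRL) idea in PyTorch; the class and variable names here are illustrative, not taken from the paper's code:</p>
<pre><code class="python">import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by
    -lambd on the backward pass, so whatever sits before this layer
    is trained to make the module after it fail."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage in this paper's setting (hypothetical names): the exclusive
# features Ex pass through the GRL into a small decoder that tries to
# reconstruct a Y-domain image; the reversed gradient pushes the
# encoder to strip that information out of Ex.
#   fake_y = small_decoder(grad_reverse(Ex))
#   loss   = torch.nn.functional.l1_loss(fake_y, y)
</code></pre>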
<p>The cross-domain auto-encoders take the exchanged shared part and the exclusive part as input and reconstruct the original image under an L1 loss. This gives the encoder an extra incentive to put domain-specific properties in the exclusive representation.</p>
<p>Their experiments are conducted mainly on MNIST variations. 1) Without any labels, their model generates diverse outputs that belong to the other domain. 2) Given a reference image from the other domain, it also performs domain-specific translation by exchanging the exclusive parts. 3) By interpolating the exclusive and shared representations, it generates smoothly transformed images. 4) Using Euclidean distance between features, it performs cross-domain retrieval both semantically and stylistically. All experiments demonstrate the effectiveness of their cross-domain disentanglement.</p>
<h4 id="Pros"><a href="#Pros" class="headerlink" title="Pros:"></a>Pros:</h4><ol>
<li><p>Though the model is trained only on the simple MNIST variations, it can be applied to bidirectional multimodal image translation on more complex datasets.</p>
</li>
<li><p>It is not constrained to cross-domain spatial correspondence as pix2pix and BicycleGAN are; the disentanglement is general and practical.</p>
</li>
</ol>
<h4 id="Cons"><a href="#Cons" class="headerlink" title="Cons:"></a>Cons:</h4><ol>
<li>Though applying a GRL to domain disentanglement is novel, the results of their ablation study indicate that it is not as useful as their analysis suggests. </li>
</ol>
</div>
<footer class="post-footer">
<div class="post-eof"></div>
</footer>
</div>
</article>
<article class="post post-type-normal" itemscope itemtype="http://schema.org/Article">
<div class="post-block">
<link itemprop="mainEntityOfPage" href="http://yoursite.com/2019/05/22/DailyReading-20200522/">
<span hidden itemprop="author" itemscope itemtype="http://schema.org/Person">
<meta itemprop="name" content>
<meta itemprop="description" content>
<meta itemprop="image" content="/images/avatar.gif">
</span>
<span hidden itemprop="publisher" itemscope itemtype="http://schema.org/Organization">
<meta itemprop="name" content="Wenyan Cong">
</span>
<header class="post-header">
<h1 class="post-title" itemprop="name headline">
<a class="post-title-link" href="/2019/05/22/DailyReading-20200522/" itemprop="url">Daily Reading 20200522</a></h1>
<div class="post-meta">
<span class="post-time">
<span class="post-meta-item-icon">
<i class="fa fa-calendar-o"></i>
</span>
<span class="post-meta-item-text">Posted on</span>
<time title="Post created" itemprop="dateCreated datePublished" datetime="2019-05-22T20:46:22+08:00">
2019-05-22
</time>
</span>
<span class="post-category">
<span class="post-meta-divider">|</span>
<span class="post-meta-item-icon">
<i class="fa fa-folder-o"></i>
</span>
<span class="post-meta-item-text">In</span>
<span itemprop="about" itemscope itemtype="http://schema.org/Thing">
<a href="/categories/Paper-Note/" itemprop="url" rel="index">
<span itemprop="name">Paper Note</span>
</a>
</span>
</span>
</div>
</header>
<div class="post-body" itemprop="articleBody">
<h3 id="Shapes-and-Context-In-the-Wild-Image-Synthesis-amp-Manipulation"><a href="#Shapes-and-Context-In-the-Wild-Image-Synthesis-amp-Manipulation" class="headerlink" title="Shapes and Context: In-the-Wild Image Synthesis & Manipulation"></a>Shapes and Context: In-the-Wild Image Synthesis & Manipulation</h3><p>posted on: CVPR2019</p>
<p>In image synthesis and manipulation, recent works are mainly learning-based parametric methods. In this paper, they propose a data-driven model with no learning that interactively synthesizes in-the-wild images from semantic label masks. Their model is controllable and interpretable, and proceeds in stages: (1) global scene context: filter the training examples by their labels and the pixel overlap of those labels with the query; (2) instance shape consistency: search over instance boundaries and extract shapes with similar context; (3) local part consistency: a finer-grained constraint for regions the global shape match cannot capture; (4) pixel-level consistency: analogous to part consistency, fill the holes remaining after (2) and (3).</p>
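<p>To make stage (2) concrete, here is a toy version of shape retrieval in NumPy: rank candidate instance masks by overlap with the query shape. This is my simplification; the paper's matching additionally scores the surrounding label context.</p>
<pre><code class="python">import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / max(union, 1)

def best_shape_match(query, candidates):
    # pick the candidate whose shape best overlaps the query instance
    return int(np.argmax([iou(query, c) for c in candidates]))

# toy usage with random binary masks
rng = np.random.default_rng(0)
query = rng.random((64, 64)) > 0.5
candidates = [rng.random((64, 64)) > 0.5 for _ in range(3)]
print(best_shape_match(query, candidates))
</code></pre>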
<p>In their quantitative comparison, they measure image realism with FID scores and measure image quality by comparing the segmentation output of the synthesized image against that of the original. Compared with pix2pix and pix2pix-HD, their method generates images that are both high-quality and realistic. In the qualitative comparison, a user study indicates that their results are preferred over pix2pix's, and the method generates diverse outputs without additional effort. </p>
<h4 id="Pros"><a href="#Pros" class="headerlink" title="Pros:"></a>Pros:</h4><ol>
<li>Compared to parametric methods, their work has notable advantages: 1) it is not limited to a specific training dataset or distribution; 2) it performs better with more data, whereas parametric methods tend to perform worse; 3) it can generate arbitrarily high-resolution images; 4) it can generate an exponentially large set of viable synthesized images; 5) it is highly controllable and interpretable.</li>
</ol>
<h4 id="Cons"><a href="#Cons" class="headerlink" title="Cons:"></a>Cons:</h4><ol>
<li>The synthesized images have good structural and semantic consistency, but the appearance of different instances is not consistent, making the results visually unpleasant.</li>
</ol>
</div>
<footer class="post-footer">
<div class="post-eof"></div>
</footer>
</div>
</article>
<article class="post post-type-normal" itemscope itemtype="http://schema.org/Article">
<div class="post-block">
<link itemprop="mainEntityOfPage" href="http://yoursite.com/2019/05/21/DailyReading-20200521/">
<span hidden itemprop="author" itemscope itemtype="http://schema.org/Person">
<meta itemprop="name" content>
<meta itemprop="description" content>
<meta itemprop="image" content="/images/avatar.gif">
</span>
<span hidden itemprop="publisher" itemscope itemtype="http://schema.org/Organization">
<meta itemprop="name" content="Wenyan Cong">
</span>
<header class="post-header">
<h1 class="post-title" itemprop="name headline">
<a class="post-title-link" href="/2019/05/21/DailyReading-20200521/" itemprop="url">Daily Reading 20200521</a></h1>
<div class="post-meta">
<span class="post-time">
<span class="post-meta-item-icon">
<i class="fa fa-calendar-o"></i>
</span>
<span class="post-meta-item-text">Posted on</span>
<time title="Post created" itemprop="dateCreated datePublished" datetime="2019-05-21T20:46:22+08:00">
2019-05-21
</time>
</span>
<span class="post-category">
<span class="post-meta-divider">|</span>
<span class="post-meta-item-icon">
<i class="fa fa-folder-o"></i>
</span>
<span class="post-meta-item-text">In</span>
<span itemprop="about" itemscope itemtype="http://schema.org/Thing">
<a href="/categories/Paper-Note/" itemprop="url" rel="index">
<span itemprop="name">Paper Note</span>
</a>
</span>
</span>
</div>
</header>
<div class="post-body" itemprop="articleBody">
<h3 id="GeneGAN-Learning-Object-Transfiguration-and-Attribute-Subspace-from-Unpaired-Data"><a href="#GeneGAN-Learning-Object-Transfiguration-and-Attribute-Subspace-from-Unpaired-Data" class="headerlink" title="GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data"></a>GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data</h3><p>posted on: BMVC2017</p>
<p>GeneGAN is a deterministic generative model that learns disentangled attribute subspaces from weakly labeled data by adversarial training. Fed two unpaired sets of images (with and without an object), GeneGAN uses an encoder to split each image into two parts: an object-attribute part and a background part. The object attribute may be eyeglasses, a smile, a hairstyle, or a lighting condition. By swapping the object feature fed to the decoder, GeneGAN generates different styles of the same person, such as turning a smiling face into a non-smiling one. Besides a reconstruction loss and the usual adversarial loss, they also present a nulling loss, which disentangles object features from background features, and a parallelogram loss, which constrains the pixel values of the child objects against those of the parent objects. Their experiments are conducted on aligned faces.</p>
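<p>A hedged sketch of the feature swap at the heart of GeneGAN (PyTorch). The encoder/decoder interfaces and the exact form of the nulling term are my reading of the paper, not its released code:</p>
<pre><code class="python">import torch
import torch.nn.functional as F

def genegan_losses(encoder, decoder, img_with, img_without):
    """img_with contains the object (e.g. a smile), img_without does
    not; the encoder splits each image into a background half and an
    object-attribute half."""
    bg_a, obj_x = encoder(img_with)     # Ax encodes to (A, x)
    bg_b, obj_e = encoder(img_without)  # B0 encodes to (B, e)

    # reconstruction: recombining the halves should recover the inputs
    rec = (F.l1_loss(decoder(bg_a, obj_x), img_with)
           + F.l1_loss(decoder(bg_b, obj_e), img_without))

    # nulling (one plausible form): push the object code of the
    # object-free image toward zero, so object information cannot
    # hide in the background half
    nulling = obj_e.abs().mean()

    # crossbreeding: the "children" swap object codes; discriminators
    # (omitted) judge them, and the parallelogram constraint ties
    # parents and children together in pixel space
    child_bx = decoder(bg_b, obj_x)                    # B gains the object
    child_a0 = decoder(bg_a, torch.zeros_like(obj_e))  # A loses it
    return rec, nulling, child_bx, child_a0
</code></pre>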
<h4 id="Pros"><a href="#Pros" class="headerlink" title="Pros:"></a>Pros:</h4><ol>
<li><p>Compared with CycleGAN, GeneGAN is simpler, with only one generator and one discriminator, and performs well on attribute transfiguration for face images from the CelebA and Multi-PIE databases.</p>
</li>
<li><p>The way it learns from weakly labeled unpaired data is inspiring: two unpaired sets of images, with and without some object, are effectively a 0/1 labeling over all training data.</p>
</li>
</ol>
<h4 id="Cons"><a href="#Cons" class="headerlink" title="Cons:"></a>Cons:</h4><ol>
<li><p>The constraints they present hold only approximately, so there is potential leakage of information between the object and background parts. </p>
</li>
<li><p>The object feature is not clearly defined. For eyeglasses it may cover color, type, and size, while for hairstyle it mainly captures hair direction rather than color. Perhaps this simply follows previous works, but it leaves me wondering.</p>
</li>
</ol>
</div>
<footer class="post-footer">
<div class="post-eof"></div>
</footer>
</div>
</article>
<article class="post post-type-normal" itemscope itemtype="http://schema.org/Article">
<div class="post-block">
<link itemprop="mainEntityOfPage" href="http://yoursite.com/2019/05/20/DailyReading-20200520/">
<span hidden itemprop="author" itemscope itemtype="http://schema.org/Person">
<meta itemprop="name" content>
<meta itemprop="description" content>
<meta itemprop="image" content="/images/avatar.gif">
</span>
<span hidden itemprop="publisher" itemscope itemtype="http://schema.org/Organization">
<meta itemprop="name" content="Wenyan Cong">
</span>
<header class="post-header">
<h1 class="post-title" itemprop="name headline">
<a class="post-title-link" href="/2019/05/20/DailyReading-20200520/" itemprop="url">Daily Reading 20200520</a></h1>
<div class="post-meta">
<span class="post-time">
<span class="post-meta-item-icon">
<i class="fa fa-calendar-o"></i>
</span>
<span class="post-meta-item-text">Posted on</span>
<time title="Post created" itemprop="dateCreated datePublished" datetime="2019-05-20T20:46:22+08:00">
2019-05-20
</time>
</span>
<span class="post-category">
<span class="post-meta-divider">|</span>
<span class="post-meta-item-icon">
<i class="fa fa-folder-o"></i>
</span>
<span class="post-meta-item-text">In</span>
<span itemprop="about" itemscope itemtype="http://schema.org/Thing">
<a href="/categories/Paper-Note/" itemprop="url" rel="index">
<span itemprop="name">Paper Note</span>
</a>
</span>
</span>
</div>
</header>
<div class="post-body" itemprop="articleBody">
<h3 id="DRIT-Diverse-Image-to-Image-Translation-via-Disentangled-Representations"><a href="#DRIT-Diverse-Image-to-Image-Translation-via-Disentangled-Representations" class="headerlink" title="DRIT++: Diverse Image-to-Image Translation via Disentangled Representations"></a>DRIT++: Diverse Image-to-Image Translation via Disentangled Representations</h3><p>posted on: ECCV2018</p>
<p>This paper is somewhat like MUNIT in that it treats image translation as a one-to-many multimodal mapping with unpaired data. To generate diverse outputs from unpaired training data, they propose a disentangled representation framework in which input images are embedded into two spaces: a domain-invariant content space capturing information shared across domains, and a domain-specific attribute space.</p>
<p>1) To achieve representation disentanglement, they apply two strategies: weight sharing and a content discriminator. Weight sharing, similar to UNIT, ties the last layer of the content encoders to the first layer of the generators. To further constrain the content representations of both domains to encode the same information, they propose a content discriminator D_c and a content adversarial loss: D_c tries to distinguish which domain a content feature came from, while the content encoders try to fool it. </p>
<p>2) To handle unpaired training data, they propose a cross-cycle consistency composed of two I2I translations, a forward and a backward one: the attribute representations are exchanged twice, and the model tries to reconstruct the original images, as in the sketch below. </p>
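<p>A compact sketch of the cross-cycle consistency (PyTorch). Ec, Ea, and G stand for the content encoders, attribute encoders, and generators of the two domains; the .X/.Y access is my shorthand, not the authors' API:</p>
<pre><code class="python">import torch.nn.functional as F

def cross_cycle_loss(Ec, Ea, G, x, y):
    """x is an image from domain X, y from domain Y."""
    cx, ax = Ec.X(x), Ea.X(x)   # content / attribute of x
    cy, ay = Ec.Y(y), Ea.Y(y)   # content / attribute of y

    # forward translation: first attribute swap
    u = G.X(cy, ax)             # y's content rendered with x's attribute
    v = G.Y(cx, ay)             # x's content rendered with y's attribute

    # backward translation: re-encode and swap the attributes back
    cu, au = Ec.X(u), Ea.X(u)
    cv, av = Ec.Y(v), Ea.Y(v)
    x_hat = G.X(cv, au)
    y_hat = G.Y(cu, av)

    # two swaps should land back on the original images
    return F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)
</code></pre>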
<p>There are several other loss functions: 1) a self-reconstruction loss, which reconstructs the original image by encoding and decoding; 2) a domain adversarial loss, which encourages G to generate realistic images in each domain; 3) a latent regression loss, inspired by BicycleGAN, which enforces reconstruction of the latent attribute vector; 4) a KL loss, which aligns the attribute representation with a Gaussian prior; 5) a mode-seeking regularization, which improves diversity. </p>
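<p>Of these, the mode-seeking term is the easiest to state in code. A sketch in the spirit of MSGAN, which I believe DRIT++ adopts; G here takes a content code and an attribute code, my shorthand:</p>
<pre><code class="python">import torch

def mode_seeking_loss(G, content, z1, z2, eps=1e-5):
    """Penalize two attribute codes that map to similar images: this
    ratio shrinks only when the generator spreads nearby codes over
    visibly distinct outputs."""
    d_img = torch.mean(torch.abs(G(content, z1) - G(content, z2)))
    d_z = torch.mean(torch.abs(z1 - z2))
    return d_z / (d_img + eps)   # minimized during training
</code></pre>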
<p>For metrics, they adopt FID to evaluate image quality, LPIPS to evaluate diversity, and JSD and NDB to measure the similarity between the distributions of real and generated images. Their model generalizes to multi-domain and high-resolution I2I translation. </p>
<h4 id="Pros"><a href="#Pros" class="headerlink" title="Pros:"></a>Pros:</h4><ol>
<li><p>Though embedding images into content and attribute spaces is similar to MUNIT, their content adversarial loss pushes the content space to carry as little domain-specific information as possible, which is better motivated than MUNIT's approach.</p>
</li>
<li><p>The cross-cycle consistency loss addresses the absence of paired training data in a cyclic way.</p>
</li>
<li><p>Their experiments are comprehensive and convincing.</p>
</li>
</ol>
<h4 id="Cons"><a href="#Cons" class="headerlink" title="Cons:"></a>Cons:</h4><ol>
<li>Their user study reports no details about the number of users or test images, which makes it less convincing. Also, the image quality is noticeably worse than CycleGAN's.</li>
</ol>
</div>
<footer class="post-footer">
<div class="post-eof"></div>
</footer>
</div>
</article>
<article class="post post-type-normal" itemscope itemtype="http://schema.org/Article">
<div class="post-block">
<link itemprop="mainEntityOfPage" href="http://yoursite.com/2019/05/19/DailyReading-20200519/">
<span hidden itemprop="author" itemscope itemtype="http://schema.org/Person">
<meta itemprop="name" content>
<meta itemprop="description" content>
<meta itemprop="image" content="/images/avatar.gif">
</span>
<span hidden itemprop="publisher" itemscope itemtype="http://schema.org/Organization">
<meta itemprop="name" content="Wenyan Cong">
</span>
<header class="post-header">
<h1 class="post-title" itemprop="name headline">
<a class="post-title-link" href="/2019/05/19/DailyReading-20200519/" itemprop="url">Daily Reading 20200519</a></h1>
<div class="post-meta">
<span class="post-time">
<span class="post-meta-item-icon">
<i class="fa fa-calendar-o"></i>
</span>
<span class="post-meta-item-text">Posted on</span>
<time title="Post created" itemprop="dateCreated datePublished" datetime="2019-05-19T20:46:22+08:00">
2019-05-19
</time>
</span>
<span class="post-category">
<span class="post-meta-divider">|</span>
<span class="post-meta-item-icon">
<i class="fa fa-folder-o"></i>
</span>
<span class="post-meta-item-text">In</span>
<span itemprop="about" itemscope itemtype="http://schema.org/Thing">
<a href="/categories/Paper-Note/" itemprop="url" rel="index">
<span itemprop="name">Paper Note</span>
</a>
</span>
</span>
</div>
</header>
<div class="post-body" itemprop="articleBody">
<h3 id="Multimodal-Unsupervised-Image-to-Image-Translation"><a href="#Multimodal-Unsupervised-Image-to-Image-Translation" class="headerlink" title="Multimodal Unsupervised Image-to-Image Translation"></a>Multimodal Unsupervised Image-to-Image Translation</h3><p>posted on: CVPR2019</p>
<p>In this paper, they try to solve image completion in a pluralistic way: given a masked input, the model can generate multiple diverse, plausible outputs, unlike previous methods that produce only one. To obtain a distribution to sample the missing foreground from, they combine a CVAE with the "instance blind" approach, and they explain why using either directly is infeasible: the CVAE learns a low-variance prior, and instance-blind training is unstable. They therefore propose a network with two parallel training paths: 1) the reconstructive path, similar to instance blind, tries to reconstruct the original image and yields a smooth prior distribution over the missing foreground; 2) the generative path predicts the latent prior distribution of the missing regions conditioned on the visible pixels. During testing, only the generative path is used to infer outputs. The network is based on LS-GAN. The loss function combines a distribution regularization (KL divergence), an appearance matching loss (applied to the whole image for the reconstructive path, and to the missing foreground only for the generative path), and an adversarial loss. </p>
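<p>The distribution regularization is a KL divergence between two diagonal Gaussians: the posterior from the reconstructive path and the conditional prior from the generative path. A sketch of that term (PyTorch; the variable names are mine):</p>
<pre><code class="python">import torch

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) over a diagonal-Gaussian
    latent; pulls the reconstructive path's posterior toward the
    generative path's conditional prior."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q
                + (var_q + (mu_q - mu_p) ** 2) / var_p
                - 1.0)
    return kl.sum(dim=1).mean()

# at test time only the generative path runs: sample z from the
# predicted conditional prior and decode it with the visible pixels
# z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
</code></pre>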
<p>Note that they also present a short+long term attention layer, a combination of a self-attention layer and contextual flow. Short-term attention is placed within the decoder to harness distant spatial context, while long-term attention is placed between the encoder and decoder to capture feature-to-feature context.</p>
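<p>The short-term branch builds on standard self-attention. For reference, a minimal SAGAN-style block (the paper's layer additionally fuses a long-term, encoder-to-decoder branch, which this sketch omits):</p>
<pre><code class="python">import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Minimal self-attention over a feature map: every position
    attends to every other, letting the decoder borrow distant
    spatial context."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key   = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # b, hw, c//8
        k = self.key(x).flatten(2)                     # b, c//8, hw
        attn = torch.softmax(q @ k, dim=-1)            # b, hw, hw
        v = self.value(x).flatten(2)                   # b, c, hw
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out
</code></pre>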
<h4 id="Pros"><a href="#Pros" class="headerlink" title="Pros:"></a>Pros:</h4><ol>