
Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection


Beijing Jiaotong University, YanShan University, A*Star

Figure: overall pipeline.

Reference GitHub repository for the paper Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection.

@misc{tan2023rethinking,
      title={Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection}, 
      author={Chuangchuang Tan and Huan Liu and Yao Zhao and Shikui Wei and Guanghua Gu and Ping Liu and Yunchao Wei},
      year={2023},
      eprint={2312.10461},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

News 🆕

  • 2024/02: NPR is accepted by CVPR 2024! Congratulations and thanks to all my co-authors!
  • 2024/05: 🤗Online Demo

Environment setup

Classification environment: We recommend installing the required packages by running the command:

pip install -r requirements.txt

To help ensure that the results are reproducible, we recommend keeping your environment consistent with the packages listed in requirements.txt.
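
For example, recording the library versions used for each run makes results easier to reproduce later. The snippet below is a minimal, repo-independent sketch (it only assumes torch and torchvision were installed via requirements.txt):

# print_env.py -- illustrative helper, not part of this repository:
# records the Python and core library versions used for an experiment.
import sys

import torch
import torchvision

print("python     :", sys.version.split()[0])
print("torch      :", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA       :", torch.version.cuda, "| available:", torch.cuda.is_available())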

Getting the data

| Dataset     | Paper                        | Url         |
|-------------|------------------------------|-------------|
| Train set   | CNNDetection CVPR2020        | Baidudrive  |
| Val set     | CNNDetection CVPR2020        | Baidudrive  |
| Table1 Test | CNNDetection CVPR2020        | Baidudrive  |
| Table2 Test | FreqNet AAAI2024             | googledrive |
| Table3 Test | DIRE ICCV2023                | googledrive |
| Table4 Test | UniversalFakeDetect CVPR2023 | googledrive |
| Table5 Test | Diffusion1kStep              | googledrive |

To download the datasets with the provided script:

pip install gdown==4.7.1

chmod 777 ./download_dataset.sh

./download_dataset.sh

Directory structure

The datasets are expected to be organized as follows:

datasets
|-- ForenSynths_train_val
|   |-- train
|   |   |-- car
|   |   |-- cat
|   |   |-- chair
|   |   `-- horse
|   |-- val
|   |   |-- car
|   |   |-- cat
|   |   |-- chair
|   |   `-- horse
|   `-- test
|       |-- biggan
|       |-- cyclegan
|       |-- deepfake
|       |-- gaugan
|       |-- progan
|       |-- stargan
|       |-- stylegan
|       `-- stylegan2
`-- Generalization_Test
    |-- ForenSynths_test       # Table1
    |   |-- biggan
    |   |-- cyclegan
    |   |-- deepfake
    |   |-- gaugan
    |   |-- progan
    |   |-- stargan
    |   |-- stylegan
    |   `-- stylegan2
    |-- GANGen-Detection       # Table2
    |   |-- AttGAN
    |   |-- BEGAN
    |   |-- CramerGAN
    |   |-- InfoMaxGAN
    |   |-- MMDGAN
    |   |-- RelGAN
    |   |-- S3GAN
    |   |-- SNGAN
    |   `-- STGAN
    |-- DiffusionForensics     # Table3
    |   |-- adm
    |   |-- ddpm
    |   |-- iddpm
    |   |-- ldm
    |   |-- pndm
    |   |-- sdv1_new
    |   |-- sdv2
    |   `-- vqdiffusion
    |-- UniversalFakeDetect    # Table4
    |   |-- dalle
    |   |-- glide_100_10
    |   |-- glide_100_27
    |   |-- glide_50_27
    |   |-- guided             # Also known as ADM.
    |   |-- ldm_100
    |   |-- ldm_200
    |   `-- ldm_200_cfg
    `-- Diffusion1kStep        # Table5
        |-- DALLE
        |-- ddpm
        |-- guided-diffusion   # Also known as ADM.
        |-- improved-diffusion # Also known as IDDPM.
        `-- midjourney
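
After downloading, a quick way to confirm that everything unpacked into the layout above is to check for the expected top-level folders. The sketch below is an illustrative helper, not part of the repository; the paths simply mirror the tree above, and the ./datasets root may need adjusting:

# check_datasets.py -- sanity-check that the expected dataset folders exist (illustrative helper).
from pathlib import Path

ROOT = Path("./datasets")  # adjust if your datasets live elsewhere

expected = [
    "ForenSynths_train_val/train",
    "ForenSynths_train_val/val",
    "ForenSynths_train_val/test",
    "Generalization_Test/ForenSynths_test",     # Table1
    "Generalization_Test/GANGen-Detection",     # Table2
    "Generalization_Test/DiffusionForensics",   # Table3
    "Generalization_Test/UniversalFakeDetect",  # Table4
    "Generalization_Test/Diffusion1kStep",      # Table5
]

for rel in expected:
    path = ROOT / rel
    status = "ok" if path.is_dir() else "MISSING"
    print(f"{status:7s} {path}")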


Training the model

CUDA_VISIBLE_DEVICES=0 ./pytorch18/bin/python train.py --name 4class-resnet-car-cat-chair-horse --dataroot ./datasets/ForenSynths_train_val --classes car,cat,chair,horse --batch_size 32 --delr_freq 10 --lr 0.0002 --niter 50

Testing the detector

Modify the dataroot in test.py, then run:

CUDA_VISIBLE_DEVICES=0 ./pytorch18/bin/python test.py --model_path ./NPR.pth  --batch_size {BS}

Detection Results

When testing on AIGCDetectBenchmark, set no_resize and no_crop to True, and set batch_size to 1. To handle images with odd height or width, add the following code in network/resnet.py:

# Crop one row/column so that both spatial dimensions are even (handles odd-sized inputs).
n, c, w, h = x.shape
if w % 2 == 1:
    x = x[:, :, :-1, :]
if h % 2 == 1:
    x = x[:, :, :, :-1]
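
As a standalone sanity check (not repository code), the sketch below applies the same cropping to a dummy tensor with odd spatial dimensions:

# Standalone check of the odd-size cropping logic above (not part of network/resnet.py).
import torch

x = torch.randn(1, 3, 257, 513)  # dummy batch with odd spatial dimensions
n, c, w, h = x.shape
if w % 2 == 1:
    x = x[:, :, :-1, :]
if h % 2 == 1:
    x = x[:, :, :, :-1]
print(x.shape)  # torch.Size([1, 3, 256, 512]) -- both spatial dims are now even
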
Results on AIGCDetectBenchmark:

| Generator  | CNNSpot | FreDect | Fusing | GramNet | LNP   | LGrad | DIRE-G | DIRE-D | UnivFD | RPTCon | NPR  |
|------------|---------|---------|--------|---------|-------|-------|--------|--------|--------|--------|------|
| ProGAN     | 100.00  | 99.36   | 100.00 | 99.99   | 99.67 | 99.83 | 95.19  | 52.75  | 99.81  | 100.00 | 99.9 |
| StyleGan   | 90.17   | 78.02   | 85.20  | 87.05   | 91.75 | 91.08 | 83.03  | 51.31  | 84.93  | 92.77  | 96.1 |
| BigGAN     | 71.17   | 81.97   | 77.40  | 67.33   | 77.75 | 85.62 | 70.12  | 49.70  | 95.08  | 95.80  | 87.3 |
| CycleGAN   | 87.62   | 78.77   | 87.00  | 86.07   | 84.10 | 86.94 | 74.19  | 49.58  | 98.33  | 70.17  | 90.3 |
| StarGAN    | 94.60   | 94.62   | 97.00  | 95.05   | 99.92 | 99.27 | 95.47  | 46.72  | 95.75  | 99.97  | 99.6 |
| GauGAN     | 81.42   | 80.57   | 77.00  | 69.35   | 75.39 | 78.46 | 67.79  | 51.23  | 99.47  | 71.58  | 85.4 |
| Stylegan2  | 86.91   | 66.19   | 83.30  | 87.28   | 94.64 | 85.32 | 75.31  | 51.72  | 74.96  | 89.55  | 98.1 |
| WFIR       | 91.65   | 50.75   | 66.80  | 86.80   | 70.85 | 55.70 | 58.05  | 53.30  | 86.90  | 85.80  | 60.7 |
| ADM        | 60.39   | 63.42   | 49.00  | 58.61   | 84.73 | 67.15 | 75.78  | 98.25  | 66.87  | 82.17  | 84.9 |
| Glide      | 58.07   | 54.13   | 57.20  | 54.50   | 80.52 | 66.11 | 71.75  | 92.42  | 62.46  | 83.79  | 96.7 |
| Midjourney | 51.39   | 45.87   | 52.20  | 50.02   | 65.55 | 65.35 | 58.01  | 89.45  | 56.13  | 90.12  | 92.6 |
| SDv1.4     | 50.57   | 38.79   | 51.00  | 51.70   | 85.55 | 63.02 | 49.74  | 91.24  | 63.66  | 95.38  | 97.4 |
| SDv1.5     | 50.53   | 39.21   | 51.40  | 52.16   | 85.67 | 63.67 | 49.83  | 91.63  | 63.49  | 95.30  | 97.5 |
| VQDM       | 56.46   | 77.80   | 55.10  | 52.86   | 74.46 | 72.99 | 53.68  | 91.90  | 85.31  | 88.91  | 90.1 |
| Wukong     | 51.03   | 40.30   | 51.70  | 50.76   | 82.06 | 59.55 | 54.46  | 90.90  | 70.93  | 91.07  | 91.7 |
| DALLE2     | 50.45   | 34.70   | 52.80  | 49.25   | 88.75 | 65.45 | 66.48  | 92.45  | 50.75  | 96.60  | 99.6 |
| Average    | 70.78   | 64.03   | 68.38  | 68.67   | 83.84 | 75.34 | 68.68  | 71.53  | 78.43  | 89.31  | 91.7 |

To reproduce the GenImage results reported below, three changes are needed: (1) replace the "resize" transform with "translate and duplicate"; (2) set the random seed to 70; (3) during testing, set no_crop to False.

(1)

dset = datasets.ImageFolder(
    root,
    transforms.Compose([
        # rz_func,
	transforms.Lambda(lambda img: translate_duplicate(img, opt.cropSize)),
	crop_func,
	flip_func,
	transforms.ToTensor(),
	transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]))

import math

from PIL import Image

def translate_duplicate(img, cropSize):
    """Tile an image that is smaller than cropSize until both sides reach at least cropSize."""
    if min(img.size) < cropSize:
        width, height = img.size

        new_width = width * math.ceil(cropSize / width)
        new_height = height * math.ceil(cropSize / height)

        # Paste copies of the original image side by side to fill the enlarged canvas.
        new_img = Image.new('RGB', (new_width, new_height))
        for i in range(0, new_width, width):
            for j in range(0, new_height, height):
                new_img.paste(img, (i, j))
        return new_img
    else:
        return img
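
For example, with the function above in scope, an image smaller than the crop size is tiled until both sides reach at least cropSize (the sizes below are illustrative):

# Usage sketch for translate_duplicate (standalone example; assumes the function above is defined).
from PIL import Image

small = Image.new("RGB", (100, 150))            # image smaller than the crop size
tiled = translate_duplicate(small, cropSize=224)
print(tiled.size)  # (300, 300): 100*ceil(224/100) x 150*ceil(224/150)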

(2) Set the random seed to 70.
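
A minimal sketch of fixing the seed to 70 across the usual sources of randomness (the repository may already handle this through its own options; this is only illustrative):

# Fix the random seed to 70 (illustrative; the repo's own seeding code may differ).
import random

import numpy as np
import torch

SEED = 70
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)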

(3) During testing, set no_crop to False, and use the following test configuration:

vals =       ['ADM', 'biggan', 'glide', 'midjourney', 'sdv5', 'vqdm', 'wukong']
multiclass = [ 0,     0,        0,       0,            0,      0,      0      ]

The model for these results is trained with sdv4 as the training set, using a random seed of 70:

./pytorch18/bin/python train.py --dataroot {GenImage Path} --name sdv4_bs32_ --batch_size 32 --lr 0.0002 --niter 1 --cropSize 224 --classes sdv4

Pretrained checkpoint.

| Generator  | Acc. | A.P. |
|------------|------|------|
| ADM        | 87.8 | 96.0 |
| biggan     | 80.7 | 89.8 |
| glide      | 93.2 | 99.1 |
| midjourney | 91.7 | 97.9 |
| sdv5       | 94.4 | 99.9 |
| vqdm       | 88.7 | 96.1 |
| wukong     | 94.0 | 99.7 |
| Mean       | 90.1 | 96.9 |

Acknowledgments

This repository borrows partially from CNNDetection.
