Create my_dataset.py #1642

Status: Open. Wants to merge 133 commits into base: master.

133 commits:
3c3703b
Create my_dataset.py
andrea2399 Apr 15, 2024
591c710
Update base_options.py
andrea2399 Apr 15, 2024
cb27518
Rename my_dataset.py to mydataset_dataset.py
andrea2399 Apr 15, 2024
87567e8
Update test.py
andrea2399 Apr 18, 2024
1cf457f
Update test.py
andrea2399 Apr 18, 2024
e9f69d6
Update test.py
andrea2399 Apr 18, 2024
33fa1a5
Update test.py
andrea2399 Apr 18, 2024
9d85be3
Update test.py
andrea2399 Apr 18, 2024
8b3354e
Update test.py
andrea2399 Apr 18, 2024
f4250ca
Update test.py
andrea2399 Apr 18, 2024
96ab16d
Update test.py
andrea2399 Apr 18, 2024
7f3c987
Update mydataset_dataset.py
andrea2399 Apr 29, 2024
bfe1533
Create myaligned_dataset
andrea2399 Apr 29, 2024
d1b417e
Rename myaligned_dataset to myaligned_dataset.py
andrea2399 Apr 29, 2024
4c49dee
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
cbc4db8
Update base_options.py
andrea2399 Apr 29, 2024
9a219d0
Update pix2pix_model.py
andrea2399 Apr 29, 2024
0cb5690
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
5118228
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
7bfc662
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
1bf9f22
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
68394eb
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
0359b58
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
c51ee70
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
3cdd11a
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
eda641f
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
9e2dcfc
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
3883e79
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
1341e82
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
64ca15e
Update myaligned_dataset.py
andrea2399 Apr 29, 2024
c1b5bb5
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
df7194a
Update aligned_dataset.py
andrea2399 Apr 30, 2024
3f7564d
Update pix2pix_model.py
andrea2399 Apr 30, 2024
cd9cd8b
Update pix2pix_model.py
andrea2399 Apr 30, 2024
1c92805
Update aligned_dataset.py
andrea2399 Apr 30, 2024
92e4c01
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
7576995
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
ef35b49
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
ea29eef
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
f12f694
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
b11ea38
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
afd31f9
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
7b427b3
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
97452ab
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
9108017
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
e4015cd
Update myaligned_dataset.py
andrea2399 Apr 30, 2024
1a72cb0
Update myaligned_dataset.py
andrea2399 May 1, 2024
5751656
Update myaligned_dataset.py
andrea2399 May 2, 2024
c79807e
Update test.py
andrea2399 May 2, 2024
ed70cb1
Update myaligned_dataset.py
andrea2399 May 2, 2024
b20583e
Update myaligned_dataset.py
andrea2399 May 2, 2024
632aa2a
Update myaligned_dataset.py
andrea2399 May 2, 2024
ae0bb9b
Update myaligned_dataset.py
andrea2399 May 2, 2024
285a435
Update myaligned_dataset.py
andrea2399 May 2, 2024
2323540
Update myaligned_dataset.py
andrea2399 May 2, 2024
86a2c1d
Update myaligned_dataset.py
andrea2399 May 2, 2024
0043e10
Update myaligned_dataset.py
andrea2399 May 6, 2024
5cecd20
Update myaligned_dataset.py
andrea2399 May 6, 2024
533fb29
Update myaligned_dataset.py
andrea2399 May 6, 2024
d09f93a
Update myaligned_dataset.py
andrea2399 May 6, 2024
78865c6
Update myaligned_dataset.py
andrea2399 May 7, 2024
5b58ad5
Update myaligned_dataset.py
andrea2399 May 7, 2024
f83c364
Update base_options.py
andrea2399 May 7, 2024
bef7112
Update base_options.py
andrea2399 May 7, 2024
e83c50c
Update base_options.py
andrea2399 May 7, 2024
45f0626
Update base_options.py
andrea2399 May 7, 2024
8537982
Update base_dataset.py
andrea2399 May 7, 2024
03fb2b8
Update base_dataset.py
andrea2399 May 7, 2024
0a0e767
Update base_dataset.py
andrea2399 May 7, 2024
7714ef2
Update base_dataset.py
andrea2399 May 7, 2024
c6fe797
Update base_dataset.py
andrea2399 May 7, 2024
28e3adb
Update base_options.py
andrea2399 May 7, 2024
1fd07a9
Update base_dataset.py
andrea2399 May 7, 2024
02b46a6
Update base_options.py
andrea2399 May 7, 2024
e5b9555
Update myaligned_dataset.py
andrea2399 May 8, 2024
42f45f8
Update base_dataset.py
andrea2399 May 8, 2024
515ae53
Update base_dataset.py
andrea2399 May 8, 2024
1caf152
Update base_dataset.py
andrea2399 May 8, 2024
17a87a7
Update base_dataset.py
andrea2399 May 8, 2024
65e6e5c
Update base_dataset.py
andrea2399 May 8, 2024
c5f369e
Update myaligned_dataset.py
andrea2399 May 8, 2024
e373d3f
Update base_dataset.py
andrea2399 May 8, 2024
a3006f2
Update base_dataset.py
andrea2399 May 8, 2024
75c1a2f
Update base_dataset.py
andrea2399 May 8, 2024
1e3da6c
Update myaligned_dataset.py
andrea2399 May 8, 2024
4e32320
Update base_dataset.py
andrea2399 May 8, 2024
6ce2f83
Update myaligned_dataset.py
andrea2399 May 8, 2024
e2c8843
Update base_dataset.py
andrea2399 May 8, 2024
9f6f717
Update myaligned_dataset.py
andrea2399 May 8, 2024
dd02d32
Update base_dataset.py
andrea2399 May 8, 2024
aeb3cb8
Update base_dataset.py
andrea2399 May 8, 2024
2f5145d
Update base_dataset.py
andrea2399 May 8, 2024
b113c71
Update myaligned_dataset.py
andrea2399 May 8, 2024
c0e0e56
Update base_dataset.py
andrea2399 May 8, 2024
4cc5452
Update base_dataset.py
andrea2399 May 8, 2024
d6f2408
Update base_dataset.py
andrea2399 May 8, 2024
6652854
Update base_dataset.py
andrea2399 May 8, 2024
624b5f6
Update myaligned_dataset.py
andrea2399 May 9, 2024
fdb12bf
Update myaligned_dataset.py
andrea2399 May 9, 2024
3c67b94
Update myaligned_dataset.py
andrea2399 May 9, 2024
668c07b
Update base_dataset.py
andrea2399 May 9, 2024
ed66ae1
Update myaligned_dataset.py
andrea2399 May 9, 2024
66730a3
Update myaligned_dataset.py
andrea2399 May 9, 2024
f03fa1a
Update base_dataset.py
andrea2399 May 13, 2024
987f0d2
Update myaligned_dataset.py
andrea2399 May 13, 2024
ea5b10a
Update base_dataset.py
andrea2399 May 13, 2024
2ff26a3
Update base_dataset.py
andrea2399 May 13, 2024
1bb687a
Update base_dataset.py
andrea2399 May 13, 2024
45a5c10
Update base_dataset.py
andrea2399 May 13, 2024
a6b5f03
Update base_dataset.py
andrea2399 May 13, 2024
8288b69
Update base_dataset.py
andrea2399 May 13, 2024
dab221d
Update base_dataset.py
andrea2399 May 13, 2024
1385e3d
Update base_dataset.py
andrea2399 May 13, 2024
a09cc76
Update base_dataset.py
andrea2399 May 13, 2024
956a88e
Update base_dataset.py
andrea2399 May 13, 2024
bc610c4
Update base_dataset.py
andrea2399 May 13, 2024
2fc8fea
Update base_dataset.py
andrea2399 May 13, 2024
95f5fac
Update base_dataset.py
andrea2399 May 13, 2024
c52b804
Update base_dataset.py
andrea2399 May 13, 2024
353b24f
Update base_dataset.py
andrea2399 May 13, 2024
005d36e
Update myaligned_dataset.py
andrea2399 May 13, 2024
981c559
Update base_dataset.py
andrea2399 May 13, 2024
50792a5
Update test.py
andrea2399 May 13, 2024
98be9a0
Update base_dataset.py
andrea2399 May 13, 2024
2b497cc
Update myaligned_dataset.py
andrea2399 May 13, 2024
e389e2b
Update test.py
andrea2399 May 14, 2024
5aec677
Update test.py
andrea2399 May 14, 2024
7fba170
Update test.py
andrea2399 May 14, 2024
7eb025b
Update base_dataset.py
andrea2399 May 14, 2024
dc862a3
Update test.py
andrea2399 May 14, 2024
b239f85
Update myaligned_dataset.py
andrea2399 May 14, 2024
0a31f48
Update test.py
andrea2399 May 14, 2024
ffb38b0
Update base_dataset.py
andrea2399 May 14, 2024
1 change: 1 addition & 0 deletions data/aligned_dataset.py
@@ -1,4 +1,5 @@
 import os
+import numpy as np
 from data.base_dataset import BaseDataset, get_params, get_transform
 from data.image_folder import make_dataset
 from PIL import Image
25 changes: 13 additions & 12 deletions data/base_dataset.py
@@ -78,10 +78,11 @@ def get_params(opt, size):
     return {'crop_pos': (x, y), 'flip': flip}


-def get_transform(opt, params=None, grayscale=False, method=transforms.InterpolationMode.BICUBIC, convert=True):
+def get_transform(opt, params=None, grayscale=False, method=transforms.InterpolationMode.NEAREST, convert=True):
     transform_list = []
-    if grayscale:
-        transform_list.append(transforms.Grayscale(1))
+    #if grayscale:
+    #    transform_list += [transforms.ToTensor()]
+    #    transform_list.append(transforms.Grayscale(1))
     if 'resize' in opt.preprocess:
         osize = [opt.load_size, opt.load_size]
         transform_list.append(transforms.Resize(osize, method))
@@ -90,18 +91,18 @@

     if 'crop' in opt.preprocess:
         if params is None:
-            transform_list.append(transforms.RandomCrop(opt.crop_size))
+            transform_list.append(transforms.CenterCrop(opt.crop_size))
         else:
             transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.crop_size)))

-    if opt.preprocess == 'none':
-        transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=4, method=method)))
-
-    if not opt.no_flip:
-        if params is None:
-            transform_list.append(transforms.RandomHorizontalFlip())
-        elif params['flip']:
-            transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))
+    #if opt.preprocess == 'none':
+    #    transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=4, method=method)))
+    #if not opt.no_flip:
+    #    if params is None:
+    #        transform_list.append(transforms.RandomHorizontalFlip())
+    #    elif params['flip']:
+    #        transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))

     if convert:
         transform_list += [transforms.ToTensor()]
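The hunk above swaps BICUBIC for NEAREST interpolation and RandomCrop for CenterCrop, which keeps paired A/B preprocessing deterministic and prevents resizing from inventing pixel values absent from the source TIFFs. A minimal sketch of that effect using PIL alone (the array contents are made up for illustration):

```python
import numpy as np
from PIL import Image

# A tiny two-valued "image": bicubic resampling blends across the edge and
# creates intermediate values, nearest-neighbour keeps the original value set.
label = np.zeros((8, 8), dtype=np.uint8)
label[:, 4:] = 255
img = Image.fromarray(label)

up_nearest = np.array(img.resize((16, 16), Image.NEAREST))
up_bicubic = np.array(img.resize((16, 16), Image.BICUBIC))

print(sorted(np.unique(up_nearest)))    # still only [0, 255]
print(len(np.unique(up_bicubic)) > 2)   # bicubic introduced new values
```

This matters most when pixel intensities carry exact physical meaning, as raw TIFF data often does.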
111 changes: 111 additions & 0 deletions data/myaligned_dataset.py
@@ -0,0 +1,111 @@
+from data.base_dataset import BaseDataset, get_transform
+from data.image_folder import make_dataset
+import tifffile as tiff
+import numpy as np
+import torch
+from torchvision import transforms
+from PIL import Image
+
+
+class MyAlignedDataset(BaseDataset):
+    """Custom aligned dataset class for TIFF images."""
+
+    def __init__(self, opt):
+        """Initialize the dataset class.
+
+        Parameters:
+            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
+        """
+        BaseDataset.__init__(self, opt)
+        self.dir_AB = opt.dataroot  # Assuming data is organized in pairs in the same directory
+        self.AB_paths = sorted(make_dataset(self.dir_AB, opt.max_dataset_size))
+        input_nc = self.opt.output_nc if self.opt.direction == 'BtoA' else self.opt.input_nc
+        output_nc = self.opt.input_nc if self.opt.direction == 'BtoA' else self.opt.output_nc
+        #self.transform = transforms.Compose([
+        #    transforms.Grayscale(1),
+        #    transforms.ToTensor()
+        #])
+        self.transform = get_transform(opt, grayscale=(input_nc == 1))
+
+    def __getitem__(self, index):
+        """Return a data point and its metadata information.
+
+        Parameters:
+            index (int) -- a random integer for data indexing
+
+        Returns:
+            a dictionary containing A, B, A_paths, and B_paths
+                A (tensor) -- an image in the input domain
+                B (tensor) -- its corresponding image in the target domain
+                A_paths (str) -- path to the input image
+                B_paths (str) -- path to the target image
+        """
+        AB_path = self.AB_paths[index]
+        AB = tiff.imread(AB_path)
+        w, h = AB.shape[-1] // 2, AB.shape[-2]
+        A = Image.fromarray(AB[:, :w])
+        B = Image.fromarray(AB[:, w:])
+        #A = A.convert('L')
+        #B = B.convert('L')
+        '''
+        A_array = np.array(A)
+        B_array = np.array(B)
+
+        if np.array_equal(A, B):
+            print("Images A and B are equal.")
+
+        # Convert image to NumPy array and print all values
+        A_array = np.array(A)
+        print("\nAll values of image A:\n", A_array)
+        unique_values_A = np.unique(A_array)
+        num_unique_values_A = len(unique_values_A)
+        print("Number of unique values in image A:", num_unique_values_A)
+        print("Shape:", A_array.shape)
+        print("Type:", A_array.dtype)
+        print("Min value:", np.min(A_array))
+        print("Max value:", np.max(A_array))
+
+        B_array = np.array(B)
+        print("\nAll values of image B:\n", B_array)
+        unique_values_B = np.unique(B_array)
+        num_unique_values_B = len(unique_values_B)
+        print("Number of unique values in image B:", num_unique_values_B)
+        print("Shape:", B_array.shape)
+        print("Type:", B_array.dtype)
+        print("Min value:", np.min(B_array))
+        print("Max value:", np.max(B_array), "\n")
+        '''
+        # apply the same transform to both A and B
+        A = self.transform(A)
+        B = self.transform(B)
+
+        '''
+        if np.array_equal(A, B):
+            print("Images A and B are equal after transform.")
+
+        A_array_after = np.array(A)
+        print("\nAll values of image A after transform:\n", A_array_after)
+        unique_values_A_after = np.unique(A_array_after)
+        num_unique_values_A_after = len(unique_values_A_after)
+        print("Number of unique values in image A after transform:", num_unique_values_A_after)
+        print("Shape:", A_array_after.shape)
+        print("Type:", A_array_after.dtype)
+        print("Min value:", np.min(A_array_after))
+        print("Max value:", np.max(A_array_after))
+
+        B_array_after = np.array(B)
+        print("\nAll values of image B after transform:\n", B_array_after)
+        unique_values_B_after = np.unique(B_array_after)
+        num_unique_values_B_after = len(unique_values_B_after)
+        print("Number of unique values in image B after transform:", num_unique_values_B_after)
+        print("Shape:", B_array_after.shape)
+        print("Type:", B_array_after.dtype)
+        print("Min value:", np.min(B_array_after))
+        print("Max value:", np.max(B_array_after), "\n")
+        '''
+        return {'A': A, 'B': B, 'A_paths': AB_path, 'B_paths': AB_path}
+
+    def __len__(self):
+        """Return the total number of images in the dataset."""
+        return len(self.AB_paths)
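`__getitem__` above assumes each TIFF stores the input and target images side by side and splits them on the middle column. A self-contained sketch of that split, with a plain NumPy array standing in for `tiff.imread` and made-up values:

```python
import numpy as np
from PIL import Image

# Fake side-by-side pair: left half (A) is all 10s, right half (B) all 200s.
AB = np.hstack([np.full((4, 3), 10, dtype=np.uint8),
                np.full((4, 3), 200, dtype=np.uint8)])

w = AB.shape[-1] // 2           # width of one half
A = Image.fromarray(AB[:, :w])  # left half -> input domain
B = Image.fromarray(AB[:, w:])  # right half -> target domain

print(np.array(A).mean(), np.array(B).mean())  # 10.0 200.0
```

Note that both halves then pass through the same `get_transform` pipeline, which is why the deterministic CenterCrop change in base_dataset.py matters: a RandomCrop applied independently to A and B would break the pixel alignment.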
44 changes: 44 additions & 0 deletions data/mydataset_dataset.py
@@ -0,0 +1,44 @@
+from data.base_dataset import BaseDataset, get_transform
+from data.image_folder import make_dataset
+import tifffile as tiff
+from PIL import Image
+
+
+class MyDataset(BaseDataset):
+    """Custom dataset class."""
+
+    def __init__(self, opt):
+        """Initialize this dataset class.
+
+        Parameters:
+            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
+        """
+        BaseDataset.__init__(self, opt)
+        self.A_paths = sorted(make_dataset(opt.dataroot, opt.max_dataset_size))
+        input_nc = self.opt.output_nc if self.opt.direction == 'BtoA' else self.opt.input_nc
+        self.transform = get_transform(opt, grayscale=(input_nc == 1))
+
+    def __getitem__(self, index):
+        """Return a data point and its metadata information.
+
+        Parameters:
+            index (int) -- a random integer for data indexing
+
+        Returns a dictionary that contains A and A_paths
+            A (tensor) -- an image in one domain
+            A_paths (str) -- the path of the image
+        """
+        A_path = self.A_paths[index]
+        A_img = tiff.imread(A_path)  # Load the TIFF image
+        print("Image values before the dataloader:")
+        print(A_img)
+        A_img = Image.fromarray(A_img.squeeze(), mode='L')  # Convert the NumPy array to a PIL image
+        print("Image values after the initial conversion:")
+        print(A_img)
+        A = self.transform(A_img)
+        print("Image values after transform:")
+        print(A)  # print the transformed tensor, not the untouched PIL image
+        return {'A': A, 'A_paths': A_path}
+
+    def __len__(self):
+        """Return the total number of images in the dataset."""
+        return len(self.A_paths)
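The `A_img.squeeze()` call above matters because a TIFF reader may return a single-channel image with a leading channel axis, which PIL rejects in 8-bit grayscale mode. A small sketch of the conversion (array values are made up):

```python
import numpy as np
from PIL import Image

# A reader can hand back shape (1, H, W); squeeze() drops the length-1 axis
# so PIL accepts the array as a single 8-bit grayscale ('L') image.
arr = np.arange(12, dtype=np.uint8).reshape(1, 3, 4)
img = Image.fromarray(arr.squeeze(), mode='L')

print(img.mode, img.size)  # 'L' (4, 3) -- PIL reports (width, height)
```

One caveat: `mode='L'` assumes 8-bit data; 16-bit or float TIFFs would need a different PIL mode or a pure-tensor path.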
3 changes: 2 additions & 1 deletion models/pix2pix_model.py
@@ -29,7 +29,8 @@ def modify_commandline_options(parser, is_train=True):
         By default, we use vanilla GAN loss, UNet with batchnorm, and aligned datasets.
         """
         # changing the default values to match the pix2pix paper (https://phillipi.github.io/pix2pix/)
-        parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned')
+        #parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned')
+        parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='myaligned')
         if is_train:
             parser.set_defaults(pool_size=0, gan_mode='vanilla')
             parser.add_argument('--lambda_L1', type=float, default=100.0, help='weight for L1 loss')
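Changing the default to `dataset_mode='myaligned'` works because the repo resolves dataset modes by naming convention rather than an explicit registry: the mode string selects a module and a class derived from it. The sketch below is my reconstruction of that convention, not the repo's actual lookup code in `data/__init__.py`:

```python
def resolve_dataset_names(dataset_mode: str):
    # Assumed convention: --dataset_mode X loads data/X_dataset.py and picks
    # the class whose lowercased name equals X (underscores removed) + 'dataset',
    # e.g. 'myaligned' -> MyAlignedDataset in data/myaligned_dataset.py.
    module_name = f"data.{dataset_mode}_dataset"
    class_key = dataset_mode.replace('_', '') + 'dataset'
    return module_name, class_key

print(resolve_dataset_names('myaligned'))
```

Under this convention the new files in the PR, `mydataset_dataset.py` (class `MyDataset`) and `myaligned_dataset.py` (class `MyAlignedDataset`), are picked up without touching any registration code.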
6 changes: 3 additions & 3 deletions options/base_options.py
@@ -26,8 +26,8 @@ def initialize(self, parser):
         parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')
         # model parameters
         parser.add_argument('--model', type=str, default='cycle_gan', help='chooses which model to use. [cycle_gan | pix2pix | test | colorization]')
-        parser.add_argument('--input_nc', type=int, default=3, help='# of input image channels: 3 for RGB and 1 for grayscale')
-        parser.add_argument('--output_nc', type=int, default=3, help='# of output image channels: 3 for RGB and 1 for grayscale')
+        parser.add_argument('--input_nc', type=int, default=1, help='# of input image channels: 3 for RGB and 1 for grayscale')
+        parser.add_argument('--output_nc', type=int, default=1, help='# of output image channels: 3 for RGB and 1 for grayscale')
         parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in the last conv layer')
         parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in the first conv layer')
         parser.add_argument('--netD', type=str, default='basic', help='specify discriminator architecture [basic | n_layers | pixel]. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator')
@@ -38,7 +38,7 @@ def initialize(self, parser):
         parser.add_argument('--init_gain', type=float, default=0.02, help='scaling factor for normal, xavier and orthogonal.')
         parser.add_argument('--no_dropout', action='store_true', help='no dropout for the generator')
         # dataset parameters
-        parser.add_argument('--dataset_mode', type=str, default='unaligned', help='chooses how datasets are loaded. [unaligned | aligned | single | colorization]')
+        parser.add_argument('--dataset_mode', type=str, default='myaligned', help='chooses how datasets are loaded. [unaligned | aligned | single | colorization | mydataset | myaligned]')
         parser.add_argument('--direction', type=str, default='AtoB', help='AtoB or BtoA')
         parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly')
        parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data')
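The base_options.py hunk only changes argparse defaults (1-channel grayscale I/O and the `myaligned` dataset mode), so every experiment can still override them on the command line. A hypothetical minimal parser reproducing just the changed defaults illustrates the behavior:

```python
import argparse

# Sketch of the three changed flags; the real BaseOptions defines many more.
parser = argparse.ArgumentParser()
parser.add_argument('--input_nc', type=int, default=1)
parser.add_argument('--output_nc', type=int, default=1)
parser.add_argument('--dataset_mode', type=str, default='myaligned')

opt = parser.parse_args([])  # no CLI args -> new defaults apply
print(opt.input_nc, opt.output_nc, opt.dataset_mode)  # 1 1 myaligned

opt = parser.parse_args(['--input_nc', '3'])  # still overridable per run
print(opt.input_nc)  # 3
```

Baking project-specific values into shared defaults is convenient for one user's workflow but does change behavior for everyone else, which is worth flagging in PR review.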
18 changes: 16 additions & 2 deletions test.py
@@ -26,7 +26,9 @@
 See training and test tips at: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md
 See frequently asked questions at: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/qa.md
 """
+import tifffile
 import os
+import numpy as np
 from options.test_options import TestOptions
 from data import create_dataset
 from models import create_model
@@ -76,5 +78,17 @@
         img_path = model.get_image_paths()  # get image paths
         if i % 5 == 0:  # save images to an HTML file
             print('processing (%04d)-th image... %s' % (i, img_path))
-        save_images(webpage, visuals, img_path, aspect_ratio=opt.aspect_ratio, width=opt.display_winsize, use_wandb=opt.use_wandb)
-    webpage.save()  # save the HTML
+        for label, image_numpy in visuals.items():
+            image_path = img_path[0] if len(img_path) == 1 else img_path[i]
+            image_name, ext = os.path.splitext(os.path.basename(image_path))
+            save_path = os.path.join(web_dir, f'{image_name}_{label}.tiff')
+            image_numpy = image_numpy.cpu().numpy()
+            # Save the image as a TIFF using tifffile
+            image_numpy = (image_numpy + 1) / 2
+            tifffile.imwrite(save_path, image_numpy, dtype='float32')  # .astype(np.uint16))
+            if opt.use_wandb:
+                wandb.save(save_path)
+
+        #save_images(webpage, visuals, img_path, aspect_ratio=opt.aspect_ratio, width=opt.display_winsize, use_wandb=opt.use_wandb)
+    #webpage.save()  # save the HTML
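The `(image_numpy + 1) / 2` line in the new save loop undoes the usual [-1, 1] normalization of the generator's tanh output before the result is written as a float32 TIFF. The rescaling step in isolation (values made up):

```python
import numpy as np

# Network outputs lie in [-1, 1]; shift and scale them to [0, 1] before
# writing a float32 TIFF (the PR uses tifffile.imwrite for the actual write).
fake = np.array([-1.0, -0.5, 0.0, 1.0], dtype=np.float32)
rescaled = (fake + 1) / 2
print(rescaled)  # [0.   0.25 0.5  1.  ]
```

Saving float32 TIFFs preserves the full output range, at the cost of files that most image viewers will not display without rescaling; the commented-out `.astype(np.uint16)` hints at an integer-output alternative that was considered.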