
ResNet18-on-CIFAR-10

In this repository, I have applied the ResNet-18 architecture to the CIFAR-10 dataset.

Test accuracy climbs above 85% by around the 35th epoch with ResNet-18, and the best test accuracy over the 40-epoch run is 86.43% (epoch 39).
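
For context, the learning rate in the logs below rises from 0.004 to a peak of 0.1 around epoch 12 and then anneals to roughly 4e-07 by epoch 40, which is consistent with PyTorch's OneCycleLR schedule at its default settings, and the 391 batches per epoch correspond to a batch size of 128 over the 50,000 training images. The sketch below is an assumed reconstruction along those lines, not the repository's exact code; the momentum and weight-decay values in particular are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

def cifar_resnet18(num_classes: int = 10) -> nn.Module:
    """torchvision ResNet-18 with a CIFAR-friendly stem for 32x32 inputs."""
    model = models.resnet18(num_classes=num_classes)
    # The ImageNet stem (7x7 stride-2 conv followed by max-pooling) discards too
    # much resolution at 32x32; a 3x3 stride-1 conv without pooling is a common fix.
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()
    return model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = cifar_resnet18().to(device)

# Momentum and weight decay are placeholder values; batch size 128 gives the
# 391 steps per epoch seen in the logs (50,000 / 128, rounded up).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# With the default div_factor=25, final_div_factor=1e4 and pct_start=0.3, this
# yields the 0.004 -> 0.1 -> 4e-07 learning-rate trace over 40 epochs.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=40, steps_per_epoch=391)
```

Note that OneCycleLR overrides the optimizer's initial learning rate, so under this setup only max_lr and the schedule's divisor defaults determine the values that appear in the logs.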

Training and Testing Logs:

Epoch: 1 Learning_Rate [0.0040000000000000036] Loss=1.761474370956421 Batch_id=390 Accuracy=26.69: 100% 391/391 [01:02<00:00, 6.96it/s]
Test set: Average loss: 0.0137, Accuracy: 3484/10000 (34.84%)

Epoch: 2 Learning_Rate [0.005944337266504118] Loss=1.4846844673156738 Batch_id=390 Accuracy=38.04: 100% 391/391 [01:01<00:00, 7.11it/s]
Test set: Average loss: 0.0129, Accuracy: 4017/10000 (40.17%)

Epoch: 3 Learning_Rate [0.011619830424103306] Loss=1.5220177173614502 Batch_id=390 Accuracy=44.88: 100% 391/391 [01:01<00:00, 7.28it/s]
Test set: Average loss: 0.0115, Accuracy: 4891/10000 (48.91%)

Epoch: 4 Learning_Rate [0.020566684770626315] Loss=1.357172966003418 Batch_id=390 Accuracy=50.48: 100% 391/391 [01:00<00:00, 6.51it/s]
Test set: Average loss: 0.0123, Accuracy: 4694/10000 (46.94%)

Epoch: 5 Learning_Rate [0.03206007937590945] Loss=1.0350217819213867 Batch_id=390 Accuracy=54.42: 100% 391/391 [00:59<00:00, 6.56it/s]
Test set: Average loss: 0.0170, Accuracy: 4515/10000 (45.15%)

Epoch: 6 Learning_Rate [0.04516888776288231] Loss=0.9740197062492371 Batch_id=390 Accuracy=59.03: 100% 391/391 [00:59<00:00, 6.58it/s]
Test set: Average loss: 0.0111, Accuracy: 5689/10000 (56.89%)

Epoch: 7 Learning_Rate [0.058831112237117685] Loss=0.9719946980476379 Batch_id=390 Accuracy=64.70: 100% 391/391 [00:59<00:00, 7.34it/s]
Test set: Average loss: 0.0078, Accuracy: 6582/10000 (65.82%)

Epoch: 8 Learning_Rate [0.07193992062409055] Loss=1.978492021560669 Batch_id=390 Accuracy=36.47: 100% 391/391 [00:59<00:00, 6.59it/s]
Test set: Average loss: 0.0157, Accuracy: 1916/10000 (19.16%)

Epoch: 9 Learning_Rate [0.08343331522937368] Loss=1.709959626197815 Batch_id=390 Accuracy=22.48: 100% 391/391 [00:59<00:00, 6.59it/s]
Test set: Average loss: 0.0133, Accuracy: 3543/10000 (35.43%)

Epoch: 10 Learning_Rate [0.0923801695758967] Loss=1.245566725730896 Batch_id=390 Accuracy=39.42: 100% 391/391 [00:59<00:00, 6.61it/s]
Test set: Average loss: 0.0115, Accuracy: 4859/10000 (48.59%)

Epoch: 11 Learning_Rate [0.09805566273349588] Loss=0.9965742826461792 Batch_id=390 Accuracy=57.24: 100% 391/391 [00:59<00:00, 6.60it/s]
Test set: Average loss: 0.0097, Accuracy: 5741/10000 (57.41%)

Epoch: 12 Learning_Rate [0.1] Loss=1.6655919551849365 Batch_id=390 Accuracy=62.31: 100% 391/391 [00:59<00:00, 6.62it/s]
Test set: Average loss: 0.0115, Accuracy: 4894/10000 (48.94%)

Epoch: 13 Learning_Rate [0.09968561175222017] Loss=0.9053387641906738 Batch_id=390 Accuracy=68.07: 100% 391/391 [00:59<00:00, 7.35it/s]
Test set: Average loss: 0.0090, Accuracy: 6188/10000 (61.88%)

Epoch: 14 Learning_Rate [0.09874640062350874] Loss=1.1221002340316772 Batch_id=390 Accuracy=68.60: 100% 391/391 [00:58<00:00, 7.43it/s]
Test set: Average loss: 0.0072, Accuracy: 6878/10000 (68.78%)

Epoch: 15 Learning_Rate [0.09719417773875232] Loss=0.5830186605453491 Batch_id=390 Accuracy=72.51: 100% 391/391 [00:59<00:00, 6.60it/s]
Test set: Average loss: 0.0066, Accuracy: 7231/10000 (72.31%)

Epoch: 16 Learning_Rate [0.09504846320134738] Loss=0.8894365429878235 Batch_id=390 Accuracy=74.35: 100% 391/391 [00:59<00:00, 6.61it/s]
Test set: Average loss: 0.0065, Accuracy: 7348/10000 (73.48%)

Epoch: 17 Learning_Rate [0.09233624061657436] Loss=0.7495899200439453 Batch_id=390 Accuracy=76.26: 100% 391/391 [00:59<00:00, 6.62it/s]
Test set: Average loss: 0.0061, Accuracy: 7478/10000 (74.78%)

Epoch: 18 Learning_Rate [0.089091617757105] Loss=0.5970357060432434 Batch_id=390 Accuracy=77.40: 100% 391/391 [00:58<00:00, 6.63it/s]
Test set: Average loss: 0.0061, Accuracy: 7539/10000 (75.39%)

Epoch: 19 Learning_Rate [0.08535539763797113] Loss=0.6495188474655151 Batch_id=390 Accuracy=77.70: 100% 391/391 [00:59<00:00, 6.61it/s]
Test set: Average loss: 0.0060, Accuracy: 7472/10000 (74.72%)

Epoch: 20 Learning_Rate [0.0811745653949763] Loss=0.7715296149253845 Batch_id=390 Accuracy=79.69: 100% 391/391 [00:58<00:00, 6.64it/s]
Test set: Average loss: 0.0065, Accuracy: 7434/10000 (74.34%)

Epoch: 21 Learning_Rate [0.07660169741935154] Loss=0.5066594481468201 Batch_id=390 Accuracy=80.13: 100% 391/391 [00:59<00:00, 6.61it/s]
Test set: Average loss: 0.0057, Accuracy: 7795/10000 (77.95%)

Epoch: 22 Learning_Rate [0.07169430017913009] Loss=0.8729343414306641 Batch_id=390 Accuracy=80.95: 100% 391/391 [00:59<00:00, 6.62it/s]
Test set: Average loss: 0.0054, Accuracy: 7852/10000 (78.52%)

Epoch: 23 Learning_Rate [0.06651408704194597] Loss=0.48248404264450073 Batch_id=390 Accuracy=81.97: 100% 391/391 [00:59<00:00, 6.61it/s]
Test set: Average loss: 0.0051, Accuracy: 8006/10000 (80.06%)

Epoch: 24 Learning_Rate [0.06112620219362893] Loss=0.6306929588317871 Batch_id=390 Accuracy=82.80: 100% 391/391 [00:58<00:00, 7.40it/s]
Test set: Average loss: 0.0054, Accuracy: 7931/10000 (79.31%)

Epoch: 25 Learning_Rate [0.055598401412270175] Loss=0.5138142108917236 Batch_id=390 Accuracy=83.26: 100% 391/391 [00:59<00:00, 6.56it/s]
Test set: Average loss: 0.0050, Accuracy: 8057/10000 (80.57%)

Epoch: 26 Learning_Rate [0.0500002] Loss=0.5265241861343384 Batch_id=390 Accuracy=83.90: 100% 391/391 [00:59<00:00, 6.60it/s]
Test set: Average loss: 0.0048, Accuracy: 8088/10000 (80.88%)

Epoch: 27 Learning_Rate [0.04440199858772983] Loss=0.4352378845214844 Batch_id=390 Accuracy=84.43: 100% 391/391 [00:59<00:00, 6.57it/s]
Test set: Average loss: 0.0044, Accuracy: 8261/10000 (82.61%)

Epoch: 28 Learning_Rate [0.03887419780637107] Loss=0.7814135551452637 Batch_id=390 Accuracy=82.02: 100% 391/391 [00:59<00:00, 6.57it/s]
Test set: Average loss: 0.0062, Accuracy: 7570/10000 (75.70%)

Epoch: 29 Learning_Rate [0.03348631295805405] Loss=0.5283379554748535 Batch_id=390 Accuracy=85.21: 100% 391/391 [00:59<00:00, 6.62it/s]
Test set: Average loss: 0.0044, Accuracy: 8264/10000 (82.64%)

Epoch: 30 Learning_Rate [0.028306099820869922] Loss=0.3929148018360138 Batch_id=390 Accuracy=86.40: 100% 391/391 [00:58<00:00, 6.65it/s]
Test set: Average loss: 0.0042, Accuracy: 8389/10000 (83.89%)

Epoch: 31 Learning_Rate [0.023398702580648485] Loss=0.6126952171325684 Batch_id=390 Accuracy=86.89: 100% 391/391 [00:58<00:00, 6.64it/s]
Test set: Average loss: 0.0046, Accuracy: 8246/10000 (82.46%)

Epoch: 32 Learning_Rate [0.0188258346050237] Loss=0.15967121720314026 Batch_id=390 Accuracy=87.85: 100% 391/391 [00:58<00:00, 6.66it/s]
Test set: Average loss: 0.0040, Accuracy: 8456/10000 (84.56%)

Epoch: 33 Learning_Rate [0.014645002362028864] Loss=0.3884727358818054 Batch_id=390 Accuracy=88.39: 100% 391/391 [00:58<00:00, 6.65it/s]
Test set: Average loss: 0.0039, Accuracy: 8449/10000 (84.49%)

Epoch: 34 Learning_Rate [0.010908782242895003] Loss=0.2804277837276459 Batch_id=390 Accuracy=89.33: 100% 391/391 [00:59<00:00, 6.63it/s]
Test set: Average loss: 0.0038, Accuracy: 8528/10000 (85.28%)

Epoch: 35 Learning_Rate [0.0076641593834256404] Loss=0.3510071635246277 Batch_id=390 Accuracy=89.56: 100% 391/391 [00:58<00:00, 6.65it/s]
Test set: Average loss: 0.0037, Accuracy: 8523/10000 (85.23%)

Epoch: 36 Learning_Rate [0.004951936798652629] Loss=0.2674649655818939 Batch_id=390 Accuracy=90.27: 100% 391/391 [00:59<00:00, 6.58it/s]
Test set: Average loss: 0.0036, Accuracy: 8614/10000 (86.14%)

Epoch: 37 Learning_Rate [0.002806222261247683] Loss=0.19625303149223328 Batch_id=390 Accuracy=90.96: 100% 391/391 [00:59<00:00, 6.60it/s]
Test set: Average loss: 0.0035, Accuracy: 8626/10000 (86.26%)

Epoch: 38 Learning_Rate [0.0012539993764912555] Loss=0.31562408804893494 Batch_id=390 Accuracy=91.21: 100% 391/391 [00:59<00:00, 6.57it/s]
Test set: Average loss: 0.0035, Accuracy: 8621/10000 (86.21%)

Epoch: 39 Learning_Rate [0.0003147882477798485] Loss=0.15003962814807892 Batch_id=390 Accuracy=91.34: 100% 391/391 [00:59<00:00, 6.58it/s]
Test set: Average loss: 0.0034, Accuracy: 8643/10000 (86.43%)

Epoch: 40 Learning_Rate [4e-07] Loss=0.18976888060569763 Batch_id=390 Accuracy=91.46: 100% 391/391 [00:59<00:00, 6.60it/s]

Test set: Average loss: 0.0035, Accuracy: 8631/10000 (86.31%)
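
For reference, here is a minimal, assumed sketch (not the repository's exact code) of train and test loops that would emit log lines in the format shown above: a tqdm progress bar carrying the epoch, current learning rate, last batch loss, batch index, and running training accuracy, and a test pass that accumulates per-batch mean losses and divides by the 10,000 test samples, which is what makes the reported "Average loss" values so small (around 0.003 to 0.017).

```python
import torch
import torch.nn.functional as F
from tqdm import tqdm

def train(model, device, train_loader, optimizer, scheduler, epoch):
    model.train()
    correct, processed = 0, 0
    pbar = tqdm(train_loader)
    for batch_idx, (data, target) in enumerate(pbar):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped per batch, not per epoch
        correct += output.argmax(dim=1).eq(target).sum().item()
        processed += len(data)
        pbar.set_description(
            f"Epoch: {epoch} Learning_Rate {scheduler.get_last_lr()} "
            f"Loss={loss.item()} Batch_id={batch_idx} "
            f"Accuracy={100.0 * correct / processed:.2f}")

def test(model, device, test_loader):
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # Summing per-batch mean losses and dividing by the dataset size
            # reproduces the small "Average loss" magnitudes in the logs above.
            test_loss += F.cross_entropy(output, target).item()
            correct += output.argmax(dim=1).eq(target).sum().item()
    test_loss /= len(test_loader.dataset)
    acc = 100.0 * correct / len(test_loader.dataset)
    print(f"Test set: Average loss: {test_loss:.4f}, "
          f"Accuracy: {correct}/{len(test_loader.dataset)} ({acc:.2f}%)")
```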
