Commit b7456de
deploy: b605a87
atticus-carter committed Sep 9, 2024
1 parent 9d0fde4 commit b7456de
Showing 31 changed files with 1,500 additions and 462 deletions.
4 changes: 2 additions & 2 deletions _sources/book/LA2.5.ipynb
@@ -17,12 +17,12 @@
"By the end of this section, you will:\n",
"- Understand the importance of selecting appropriate image augmentations for training machine learning models in underwater imagery analysis.\n",
"- Learn how to apply image transformations such as shearing, cropping, adding noise, and adjusting brightness using **PIL** and **OpenCV**.\n",
"- Develop a deeper understanding of how augmentations improve model robustness in real-world scenarios, such as handling camera misalignment, murky water conditions, and variable lighting during **AUV surveys**.\n",
"- Develop a deeper understanding of how augmentations improve model robustness in real-world scenarios, such as handling camera misalignment, murky water conditions, and variable lighting during **AUV surveys**\n",
"\n",
"---\n",
"## The Theory Behind Choosing Augmentations for Training Imagery\n",
"\n",
"When training machine learning models, particularly in **highky variable condition** tasks such as underwater image analysis, image augmentation is a powerful tool to improve model robustness and performance. Augmentation creates variations of the training data, allowing models to generalize better to real-world conditions. For example, **shearing** can simulate the effects of camera tilt commonly seen in **AUV surveys**, where slight angles and shifts can distort how objects are captured. By applying shear transformations, we teach models to recognize objects even when they are skewed due to misalignment during transect movement. Another useful augmentation is **cropping**, which mimics scenarios where cameras, such as stationary underwater systems, capture only part of an object—often seen in situations where fish or coral are cut off at the edges of the frame. Training with cropped images helps models learn to detect partial objects and improves their robustness in handling incomplete data. In murky underwater environments with sediment or low visibility, **adding noise** can replicate the challenge of detecting objects in degraded imagery. Noise simulates particles in the water, preparing models to distinguish features despite visual interference. Similarly, **brightness adjustments** are crucial for dealing with varying lighting conditions that change with depth or time of day. By exposing models to images with different brightness levels, they become more adaptable to fluctuations in light intensity. Additionally, **rotations** help models handle misalignment that occurs naturally in dynamic underwater environments, where cameras may not always be perfectly horizontal. Lastly, **blurring** can simulate motion or water flow, which is useful when image clarity is compromised by movement during data collection. Tailoring these augmentations to the challenges of underwater surveys helps build models that are more adaptable, improving detection accuracy and resilience in diverse conditions. The following sections provide examples of how to implement these augmentations in **PIL** and **OpenCV**.\n",
"When training machine learning models, particularly in **highly variable condition** tasks such as underwater image analysis, image augmentation is a powerful tool to improve model robustness and performance. Augmentation creates variations of the training data, allowing models to generalize better to real-world conditions. For example, **shearing** can simulate the effects of camera tilt commonly seen in **AUV surveys**, where slight angles and shifts can distort how objects are captured. By applying shear transformations, we teach models to recognize objects even when they are skewed due to misalignment during transect movement. Another useful augmentation is **cropping**, which mimics scenarios where cameras, such as stationary underwater systems, capture only part of an object—often seen in situations where fish or coral are cut off at the edges of the frame. Training with cropped images helps models learn to detect partial objects and improves their robustness in handling incomplete data. In murky underwater environments with sediment or low visibility, **adding noise** can replicate the challenge of detecting objects in degraded imagery. Noise simulates particles in the water, preparing models to distinguish features despite visual interference. Similarly, **brightness adjustments** are crucial for dealing with varying lighting conditions that change with depth or time of day. By exposing models to images with different brightness levels, they become more adaptable to fluctuations in light intensity. Additionally, **rotations** help models handle misalignment that occurs naturally in dynamic underwater environments, where cameras may not always be perfectly horizontal. Lastly, **blurring** can simulate motion or water flow, which is useful when image clarity is compromised by movement during data collection. Tailoring these augmentations to the challenges of underwater surveys helps build models that are more adaptable, improving detection accuracy and resilience in diverse conditions. The following sections provide examples of how to implement these augmentations in **PIL** and **OpenCV**.\n",
"\n",
":::{note}\n",
"The following is meant to be a place for you to come back and reference these crucial augmentations in future activities. \n",
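The paragraph above walks through shearing, cropping, noise, brightness, rotation, and blurring as augmentations for underwater imagery. As a quick reference, the sketch below shows one way those transformations could be applied with PIL and OpenCV; the input file name and the parameter values are assumptions for illustration, not the notebook's own code.

```python
# Minimal augmentation sketch with PIL and OpenCV (illustrative, not the notebook's code).
import cv2
import numpy as np
from PIL import Image, ImageEnhance

img = Image.open("auv_frame.jpg").convert("RGB")   # hypothetical input frame
w, h = img.size

# Crop: mimic a subject cut off at the edge of the frame.
cropped = img.crop((w // 4, h // 4, w, h))

# Brightness: simulate lighting that changes with depth or time of day.
brightened = ImageEnhance.Brightness(img).enhance(1.5)

# Rotation: handle a camera that is not perfectly horizontal.
rotated = img.rotate(15, expand=True)

# Switch to an OpenCV (NumPy, BGR) array for the remaining transforms.
arr = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)

# Shear: approximate camera tilt during transect movement.
shear_matrix = np.float32([[1, 0.2, 0], [0, 1, 0]])
sheared = cv2.warpAffine(arr, shear_matrix, (w, h))

# Gaussian noise: stand in for suspended sediment and low visibility.
noise = np.random.normal(0, 25, arr.shape).astype(np.float32)
noisy = np.clip(arr.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Blur: simulate motion or water flow during capture.
blurred = cv2.GaussianBlur(arr, (7, 7), 0)

for name, result in [("cropped", cropped), ("brightened", brightened), ("rotated", rotated)]:
    result.save(f"{name}.jpg")
for name, result in [("sheared", sheared), ("noisy", noisy), ("blurred", blurred)]:
    cv2.imwrite(f"{name}.jpg", result)
```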
6 changes: 2 additions & 4 deletions _sources/book/LA2.75.ipynb
@@ -409,12 +409,10 @@
"\n",
"1. **Download and extract the ZIP files**:\n",
" - **randomcrab.zip** contains 35 images of crabs.\n",
"\n",
" https://drive.google.com/file/d/1ARAFOQ7CNsuPFIM4QaQBU89nQ35GMSlj/view?usp=sharing\n",
" - [Download randomcrab.zip](./assets/randomcrab.zip)\n",
"\n",
" - **randomfish.zip** contains 35 images of fish.\n",
"\n",
" https://drive.google.com/file/d/1RRqS8k_-k_w7ZqGYCjVHwxgYqWlOO4zZ/view?usp=sharing\n",
" - [Download randomfish.zip](./assets/randomfish.zip)\n",
"\n",
"\n",
"\n",
7 changes: 4 additions & 3 deletions _sources/book/LA5.ipynb
@@ -32,7 +32,9 @@
"In this lesson, we focus on **image classification**. The images used in this dataset are not annotated with bounding boxes because they are assumed to contain only one of the two possible classes—either \"crab\" or \"rockfish.\" This simplifies the problem, allowing us to rely on the image file names and folder structure to determine the class labels. No separate annotation files are necessary.\n",
"\n",
":::{note}\n",
"The following activity requires a small dataset download, you can download it here: https://drive.google.com/file/d/1aX3nUPtPWp3ScgGU1q4JPkZRLAyuo8ax/view?usp=sharing\n",
"The following activity requires a small dataset download, you can download it here: - [SHRCrabsandFishClassification.zip](https://drive.google.com/file/d/1aX3nUPtPWp3ScgGU1q4JPkZRLAyuo8ax/view?usp=sharing)\n",
"\n",
"Alternatively, you can use a dataset modified in the previous lesson!\n",
":::\n",
"\n",
"---\n"
@@ -568,8 +570,7 @@
"- **Training Loss and Validation Loss**: These graphs show how the model’s loss decreases during training. Loss is a measure of how well the model’s predictions match the true labels.\n",
"\n",
"Interpreting these graphs is one of the most important skills in CV. They help you assess whether the model is **overfitting** (performing well on training but poorly on validation) or **underfitting** (performing poorly on both training and validation). \n",
"\n",
"The next section will cover different metrics in greater depth and what constitutes good training results.\n"
"\n"
]
},
{
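A quick way to inspect the training and validation loss curves discussed above is to plot them side by side and look for a widening gap. The loss values below are illustrative placeholders, not results from the lesson's model.

```python
# Sketch: plotting training vs. validation loss to judge over/underfitting.
import matplotlib.pyplot as plt

train_loss = [0.92, 0.61, 0.44, 0.33, 0.26, 0.21, 0.18, 0.15]  # placeholder values
val_loss   = [0.95, 0.68, 0.55, 0.50, 0.49, 0.51, 0.54, 0.58]  # placeholder values
epochs = range(1, len(train_loss) + 1)

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.title("A widening train/validation gap suggests overfitting")
plt.show()
```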