The global textile industry impacts nearly every human being on the planet and had an estimated size of $1,000.3 billion in 2020. It includes the production, refinement, and sale of both synthetic and natural fibers used in thousands of industries.
The high demand for quality textiles has led to the application of automated, AI-based quality control in textile production in recent years. This is due in part to technical advances, the growing use of modeling and simulation, and the high probability of errors and defects in textile production.
With the growing use of machine learning (ML) in the Industrial IoT (IIoT) and Industry 4.0, we set out to build an image recognition model in PerceptiLabs that could analyze images of textiles to determine whether or not they contain stains. A model like this could be used in conjunction with real-time camera or video feeds in textile manufacturing plants to quickly catch defects and improve quality control.
The original dataset is unbalanced with 68 non-defect images and 398 images with different types of stain defects in polyester and cotton fabrics. To eliminate potential biases during training, we made a balanced dataset using data augmentation (applying random rotation, along with vertical and horizontal flips) to increase the number of non-defect images from 68 to 408. Figure 1 shows some example images from this dataset:
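The augmentation step can be sketched as follows. This is an illustrative Python/Pillow version, not the exact pipeline we used; the `augment` function and the commented-out file handling are assumptions for the example.

```python
# Sketch of the augmentation used to balance the classes: each non-defect
# image gets a random rotation plus optional horizontal and vertical flips.
import random
from PIL import Image

def augment(img: Image.Image) -> Image.Image:
    """Return a randomly rotated and flipped copy of the input image."""
    out = img.rotate(random.uniform(0, 360))  # random rotation, size preserved
    if random.random() < 0.5:
        out = out.transpose(Image.FLIP_LEFT_RIGHT)  # horizontal flip
    if random.random() < 0.5:
        out = out.transpose(Image.FLIP_TOP_BOTTOM)  # vertical flip
    return out

# Generating 340 augmented copies grows the class from 68 to 408 images:
# originals = [Image.open(p) for p in non_defect_paths]
# for i in range(340):
#     augment(random.choice(originals)).save(f"augmented_{i:03d}.png")
```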
When loading the data via PerceptiLabs' Data Wizard, we resized the images to 224x224 to improve computation time, keeping all three (RGB) channels as input to the pre-trained model. To map the classifications to the images, we created a .csv file that associates each image file with the appropriate classification label (stain or defect_free) for loading the data into PerceptiLabs. Below is a partial example of how the .csv file looks:
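A sketch of such a file; the filenames and column headers here are illustrative, not taken from the actual dataset:

```csv
image,label
stain_001.png,stain
stain_002.png,stain
clean_001.png,defect_free
```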
Our model was built with three Components:
The model uses transfer learning via MobileNetV2 as shown in Figure 2:
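In Keras terms, the transfer-learning setup can be sketched roughly as below. This is a minimal approximation, not the actual PerceptiLabs model graph: MobileNetV2 acts as a frozen feature extractor, with a small classification head (an assumption here) for the two labels.

```python
# Minimal transfer-learning sketch: pre-trained MobileNetV2 with its
# classifier removed, followed by a simple two-class softmax head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained ImageNet weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # stain / defect_free
])
```

Freezing the base network means only the small head is trained, which is what keeps training times short on a dataset of this size.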
Training and Results
We trained the model in batches of 32 across 10 epochs, using the Adam optimizer, a learning rate of 0.001, and a cross-entropy loss function. With a training time of around 122 seconds, we achieved a training accuracy of 93.79% and a validation accuracy of 83.23%.
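The same training configuration, expressed in Keras terms (PerceptiLabs handles this internally; the placeholder model and dataset names below are illustrative):

```python
# Training setup matching the description: Adam at lr=0.001,
# cross-entropy loss, batch size 32, 10 epochs.
import tensorflow as tf

# Placeholder standing in for the MobileNetV2-based model above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, batch_size=32, epochs=10)
```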
Figure 3 shows PerceptiLabs' Statistics view during training:
Figure 3: PerceptiLabs' Statistics View during training.
Figures 4 and 5 below show the loss and accuracy across the 10 epochs during training:
In Figure 4 we can see that both training and validation loss rapidly decreased in the first epoch. Training loss remained fairly stable for the remainder of the epochs while validation loss steadily increased, indicating that we could have stopped the training earlier to reduce the overfitting.
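One common way to realize this "stop earlier" idea outside of PerceptiLabs is Keras' `EarlyStopping` callback, sketched here as an assumption about how one might set it up (the `patience` value is arbitrary):

```python
# Stop training when validation loss stops improving, and roll back
# to the weights from the best epoch.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=2, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=10,
#           callbacks=[early_stop])
```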
In Figure 5 we can see that training accuracy remained relatively stable after the first two epochs with a gradual decrease towards the end, while validation accuracy remained fairly stable throughout.
A model like this could be used for computer vision-based quality control in manufacturing. When paired with real-time images or video feeds of materials on production lines, this ML model provides a strong foundation for automating the identification of material defects. The model itself could also be used as the basis for transfer learning to create models for detecting defects in other types of materials.
This use case is an example of how image recognition can be used in manufacturing. If you want to build a deep learning model similar to this, run PerceptiLabs and check out the repo we created for this use case on GitHub. Also be sure to check out our other material-defect blog: Use Case: Defect Detection in Metal Surfaces.