Industrial IoT is playing a significant role in Industry 4.0, especially for automating manufacturing processes. With the integration of sensors, cameras, and AI at the edge, organizations can now automate numerous processes such as visual quality control inspections.
Take, for example, the manufacture of wood veneers. An important part of the process involves drying the wood sheets at temperatures of up to 320°F to bring them to a moisture content of around 8% to 12% [1]. After drying, the sheets must pass several checks to verify that they have dried correctly and meet quality control standards.
To help automate this verification process, we set out to build an image recognition model in PerceptiLabs that could identify veneers as either dry or wet. A model like this could potentially help manufacturers automate the process of identifying veneers with too much moisture content.
To train our model, we used images from the Veneer21 dataset [2]. The original dataset comprises high-resolution .png images (generally over 4000x4000 pixels) divided into two classes: wet and dry wood veneers. Using PerceptiLabs' Data Wizard, we resized the images to 224x224 pixels. Figure 1 shows some example images of dry veneers from this dataset:
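PerceptiLabs' Data Wizard performs this resizing for you. Purely to illustrate the idea, here is a minimal NumPy sketch of downscaling a high-resolution image array to 224x224 — the function name and the nearest-neighbour interpolation are our own choices, and the Data Wizard's actual resampling method may differ:

```python
import numpy as np

def resize_nearest(img: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Nearest-neighbour downscale of an H x W (x C) image array."""
    h, w = img.shape[:2]
    # For each output pixel, pick the nearest source row/column.
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

# A stand-in for one of the dataset's ~4000x4000 images:
high_res = np.zeros((4000, 4000, 3), dtype=np.uint8)
small = resize_nearest(high_res)
print(small.shape)  # → (224, 224, 3)
```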
To map the classifications to the images, we created a .csv file that associates each image file with the appropriate classification number (0=dry and 1=wet) for loading the data using PerceptiLabs' Data Wizard. Below is a partial example of how the .csv file looks:
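The snippet below is a hedged sketch of building such a file with Python's csv module. The filenames and the header names are our own placeholders, not the dataset's actual contents; only the two-column shape (image filename plus 0/1 label) mirrors the mapping described above:

```python
import csv
import io

# Hypothetical filenames; labels follow the scheme above (0 = dry, 1 = wet).
rows = [
    ("dry_0001.png", 0),
    ("dry_0002.png", 0),
    ("wet_0001.png", 1),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "label"])  # header names are an assumption
writer.writerows(rows)
print(buf.getvalue())
```

In practice you would write to a real file (e.g. `open("labels.csv", "w", newline="")`) and point the Data Wizard at it.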
Our model was built with just three Components.
Figure 2 shows the model's topology in PerceptiLabs:
Training and Results
We trained the model for 5 epochs in batches of 32, using the Adam optimizer, a learning rate of 0.001, and a cross-entropy loss function. With a training time of around 22 minutes and 10 seconds, we achieved a training accuracy of 100% and a validation accuracy of 99.7%. Figure 3 shows PerceptiLabs' Statistics view during training.
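To make these hyperparameters concrete, the NumPy sketch below shows the two ingredients named above: the cross-entropy loss for a single prediction, and one Adam update step with the same learning rate (0.001). This is an illustration of the underlying math, not PerceptiLabs' internal implementation:

```python
import numpy as np

def cross_entropy(probs: np.ndarray, label: int) -> float:
    """Negative log-likelihood of the true class."""
    return float(-np.log(probs[label]))

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t is the 1-based step count."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# A confident, correct prediction incurs a small loss:
print(cross_entropy(np.array([0.99, 0.01]), label=0))
```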
Figures 4 and 5 below show the accuracy and loss across the five epochs during training:
Here we can see that accuracy rose and loss fell most sharply during the first epoch for both training and validation; both then remained stable across the remaining epochs.
A model like this could be used to detect manufacturing defects on a production line. For example, it could analyze photos or video frames from cameras on the factory floor that capture veneer sheets as they pass along the assembly line. Any sheets identified as wet could then be flagged for further inspection by factory workers. The model could also serve as the basis for transfer learning, producing additional models that detect defects in other types of materials or products.
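As a sketch of how that flagging step might look downstream of the classifier (the function name, threshold, and class ordering are our own assumptions, not part of this use case's repo):

```python
from typing import List, Sequence

def flag_wet_sheets(probs: Sequence[Sequence[float]],
                    wet_class: int = 1,
                    threshold: float = 0.5) -> List[int]:
    """Return indices of frames whose wet-class probability meets the threshold."""
    return [i for i, p in enumerate(probs) if p[wet_class] >= threshold]

# Example: per-frame [P(dry), P(wet)] softmax outputs from the model
batch = [[0.98, 0.02], [0.30, 0.70], [0.55, 0.45]]
print(flag_wet_sheets(batch))  # → [1]
```

In production, a lower threshold would flag more borderline sheets for human review, trading extra inspections for fewer missed wet veneers.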
This use case is an example of how image recognition can be used to help in manufacturing. If you want to build a deep learning model similar to this, run PerceptiLabs and check out the repo we created for this use case on GitHub.
[2] T. Jalonen, F. Laakom, M. Gabbouj and T. Puoskari, "Visual Product Tracking System Using Siamese Neural Networks," in IEEE Access, vol. 9, pp. 76796-76805, 2021, doi: 10.1109/ACCESS.2021.3082934.