IE251 Visual Test in the LEGO Smart Factory


Production Line Architecture

The era of cyber technology has benefited the manufacturing sector, and it is therefore important for engineering education to incorporate this technology while teaching core engineering knowledge. Industrial engineering students must be prepared for the future (or rather, the most recent developments) of manufacturing. The goal of this module is therefore twofold: to teach manufacturing systems engineering fundamentals and to introduce recent manufacturing technology through the Smart Factory.

The defect sorting mechanism is to be completed in the second half of the semester, over a seven-week period. Each group is asked to design the handling system from the first machine to the second machine, as well as the defect sorting mechanism (the gray areas in the figure).


Defect Sorting Machine in LEGO Smart Factory

The LEGO Smart Factory introduces the concept of visual tests to the students. Defective products are to be sorted from good ones.

The second machine of the LEGO Production Line is upgraded to perform defect sorting. A camera installed on the machine captures the product image, and a convolutional neural network classifies the product. The LEGO Smart Factory then transports the product according to the classification result.

The students' task is to build the feeder and machine 1 from the instruction manual and to design the rest of the system themselves.


LEGO Production Line Structure

A complete production line is assembled from LEGO EV3 components. A detailed element-by-element construction is provided by LEGO Digital Designer, a computer-aided modeling tool for LEGO robotics. MATLAB Support for LEGO EV3 is deployed to run the LEGO production line, so the students also learn how to run, program, analyze, and improve it.

The LEGO Production Line follows the two-machine-one-buffer system, a fundamental production line structure whose interactions between elements constitute the central knowledge of a production line. By understanding this system, students are able to synthesize and evaluate a real production line, and conceivably extend that knowledge to more complicated manufacturing systems.

The model of the LPL follows the two-machine-one-buffer system. Raw material enters the system at machine one, passes through the buffer to machine two, and finally exits the system after the process is complete. Each machine has specific failure, repair, and processing rates, so the dynamics between these elements create specific events in the production line. Starvation at machine two happens when machine one fails and the buffer is empty. Blockage happens when machine two fails, machine one is working, and the buffer becomes full. In this system, machine one is assumed to have an unlimited supply of raw material, and machine two is assumed to have unlimited storage.
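The dynamics above can be sketched as a small discrete-time simulation. This is an illustrative model only, not part of the course code; the failure probability p, repair probability r, and buffer capacity N are assumed values.

```matlab
% Minimal sketch: two-machine-one-buffer line in discrete time.
% Parameters below are illustrative assumptions, not course values.
p1 = 0.05; r1 = 0.3;   % machine 1 failure / repair probabilities per slot
p2 = 0.05; r2 = 0.3;   % machine 2 failure / repair probabilities per slot
N  = 5;                % buffer capacity
T  = 10000;            % number of simulated time slots

up1 = true; up2 = true; b = 0;           % machine states and buffer level
produced = 0; starved = 0; blocked = 0;

for t = 1:T
    % Machines fail and get repaired at random
    if up1, up1 = rand > p1; else, up1 = rand < r1; end
    if up2, up2 = rand > p2; else, up2 = rand < r2; end

    % Machine 1 pushes a part into the buffer unless it is full (blockage)
    if up1
        if b < N, b = b + 1; else, blocked = blocked + 1; end
    end
    % Machine 2 pulls a part from the buffer unless it is empty (starvation)
    if up2
        if b > 0, b = b - 1; produced = produced + 1;
        else, starved = starved + 1; end
    end
end
fprintf('Throughput: %.3f parts/slot, starved %d slots, blocked %d slots\n', ...
        produced / T, starved, blocked);
```

Running the loop with different p, r, and N values lets students see how buffer capacity trades off against starvation and blockage frequency.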


The LEGO Production Line is constructed from LEGO robotics components. It requires two EV3 bricks, two large motors, four small motors, five color sensors, and two touch sensors. All of these elements are coordinated, together with static LEGO Technic components, to form a functional automated production line.


Machine 1


Machine 2

Convolutional Neural Network for Visual Test

The production line must capture an image during the second process and classify it as good or defective. A convolutional neural network (CNN) is well suited to perform this classification. A CNN is a deep learning algorithm that recognizes patterns and classifies images effectively; it is a class of deep neural networks inspired by the human biological process of learning and recognizing an object, in this case a defective or good product. The algorithm sends an image through a network, which returns the class of the image. In the LPL, the final class is either a good product or a defective product. Additional classes can be added when needed.

The network consists of multiple layers that detect different features of the image, such as edges, brightness, or more complex structures. Many network architectures are available; in the LPL, VGG16 is chosen. For learning convenience, a MATLAB program has been pre-developed for use in the LPL. Details can be found in the next sections.

User Interface: image collection, network training, image classification

The following section describes the code that controls the LEGO factory. Following the structure of CNN transfer learning, there are three main functions in this system: data collection, training, and classification.


The main program provides an interface for selecting the step the user is going to perform. When the user selects operation mode 1, 2, or 3, the screen is directed to the data collection, training, or classification screen, respectively.
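The mode-selection logic can be pictured as a simple switch. This is a hypothetical sketch, not the actual course script; the function names dataCollection, trainNetworkScreen, and qualityTest are placeholders.

```matlab
% Hypothetical sketch of the MAIN menu dispatch logic.
% Function names are placeholders, not the actual course code.
mode = input('Select mode (1 = data collection, 2 = training, 3 = classification): ');
switch mode
    case 1
        dataCollection();      % live camera view with capture buttons
    case 2
        trainNetworkScreen();  % prompts for batch size, learning rate, epochs
    case 3
        qualityTest();         % classify a product with the trained network
    otherwise
        disp('Invalid selection.');
end
```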

When mode 1 is selected, another screen appears with a live visualization of the camera. This allows the user to insert the chip and press the appropriate button to save the image as training data. When the button Capture White Image is pressed, a function called snap save captures the image (hence “snap”) and saves it to the folder matching the pressed button, in this case the folder titled WHITE.
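A capture-and-save routine of this kind might look as follows. This is an illustrative sketch, not the actual course function; it assumes a webcam object created with MATLAB's USB webcam support, and the function and file names are placeholders.

```matlab
% Illustrative sketch of a "snap save" routine (placeholder names).
% Assumes cam is a webcam object from MATLAB's USB webcam support.
function snapSave(cam, className)
    % className: class folder to save into, e.g. 'WHITE', 'BLUE', or 'DEFECT'
    img = snapshot(cam);                           % capture one frame
    if ~exist(className, 'dir')
        mkdir(className);                          % create the class folder
    end
    fname = fullfile(className, ...
        ['img_' datestr(now, 'yyyymmdd_HHMMSSFFF') '.png']);
    imwrite(img, fname);                           % save under the class folder
end
```

Saving images into one folder per class is what later allows the labels to be derived directly from the folder names.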


When mode 2 is selected, a screen prompts the user to fill in the batch size, learning rate, and number of epochs for the training. Before the actual training, the network has to be prepared so that MATLAB's built-in trainNetwork function can be used. This function requires the images, the labels, the layers, and all the training parameters.

First, the function creates three temporary folders corresponding to the three classes to be trained. Each training image is then labeled as ‘WHITE’, ‘DEFECT’, or ‘BLUE’ in an array called trainLabelArray; the same treatment applies to the validation data. The system uses the VGG16 network to train on the data.
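A VGG16 transfer-learning setup of this kind can be sketched with MATLAB's Deep Learning Toolbox as below. This is a sketch under assumptions, not the course program: the folder names match the classes described above, but the hyperparameter values and layer names are illustrative.

```matlab
% Sketch of VGG16 transfer learning with the Deep Learning Toolbox.
% Folder names follow the class folders above; hyperparameters are
% illustrative, not the course defaults.
imds = imageDatastore({'WHITE', 'BLUE', 'DEFECT'}, ...
    'LabelSource', 'foldernames');
imds.ReadFcn = @(f) imresize(imread(f), [224 224]);   % VGG16 input size

net = vgg16;                         % pretrained network
layers = net.Layers;
% Replace the final fully connected and classification layers so the
% network outputs the three LPL classes instead of 1000 ImageNet classes.
layers(end-2) = fullyConnectedLayer(3, 'Name', 'fc_lpl');
layers(end)   = classificationLayer('Name', 'out_lpl');

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 16, ...         % batch size entered by the user
    'InitialLearnRate', 1e-4, ...    % learning rate entered by the user
    'MaxEpochs', 5, ...              % number of epochs entered by the user
    'Plots', 'training-progress');   % the GUI progress window

trainedNetwork = trainNetwork(imds, layers, opts);
save('trainedNetwork.mat', 'trainedNetwork');
```

Saving the result as trainedNetwork.mat is what makes the classification step possible later.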


Classification is only possible when the file trainedNetwork.mat is present in the folder. The input image is classified using the trained network. The output of the classification is a variable called labelnum, which takes one of three categorical values: 1, 2, or 3, standing for white, blue, and defective, respectively.
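The classification step might be sketched as follows. This is illustrative, not the course code; it assumes trainedNetwork.mat was saved by the training step and that cam is an existing webcam object.

```matlab
% Sketch of the classification step (assumes trainedNetwork.mat exists
% and cam is a webcam object created earlier).
if ~isfile('trainedNetwork.mat')
    error('Train the network first: trainedNetwork.mat not found.');
end
S = load('trainedNetwork.mat');                 % contains the trained CNN

img = imresize(snapshot(cam), [224 224]);       % match the VGG16 input size
label = classify(S.trainedNetwork, img);        % categorical class label
labelnum = double(label);                       % category index 1, 2, or 3
```

Converting the categorical label with double() yields the underlying category index, matching the 1/2/3 encoding of labelnum described above.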

The MAIN program offers three functionalities through its interface: data collection, network training, and the quality control test.

In DATA COLLECTION, users input a product into the machine and capture its image based on the color. The image is stored automatically as training or validation data.

After the data is collected, users TRAIN THE NETWORK in the second tab. A MATLAB GUI appears to show the progress and outcome of the training.

To run the QUALITY CONTROL test, users use the third tab on the main page. The program automatically classifies the product.

DOWNLOAD the detailed instructions, building manual, machine-building manual, and programming script here.