Nature Communications | April 2023
Material extrusion has long been a cornerstone of additive manufacturing (AM) due to its affordability, versatility, and ability to work with multiple materials. However, its susceptibility to errors—ranging from minor dimensional discrepancies to complete build failures—has hindered its broader application in critical sectors such as healthcare, aerospace, and robotics. Traditionally, these errors are managed by skilled operators who manually monitor the printing process, detect errors, and make necessary adjustments. This method is not only time-consuming but also prone to human error, especially when dealing with new materials or unfamiliar printers. At Matta, we saw an opportunity to leverage AI to enhance this process, reducing human oversight and improving accuracy.
In collaboration with the Institute for Manufacturing at the University of Cambridge, we developed CAXTON (Collaborative Autonomous Extrusion Network), an AI-powered system designed to integrate seamlessly with existing extrusion-based 3D printers. Using inexpensive webcams and a multi-head deep convolutional neural network, CAXTON offers a robust solution for real-time error detection and correction, pushing the boundaries of what’s possible in 3D printing.
Unlike existing systems that rely on human-labelled data, CAXTON autonomously labels errors by measuring deviations from optimal printing parameters. This innovative approach allows the system to not only identify errors but also to correct them in real-time. By connecting a network of printers, CAXTON facilitates continuous data collection and collaborative learning, enabling the system to improve its accuracy and generalisability across different printers, materials, and geometries.
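The autonomous labelling idea can be sketched in a few lines: each sampled parameter value is binned relative to its known-good setting, so every captured frame gets labels without a human in the loop. The three-class scheme and the ±10% band below are illustrative assumptions, not the paper's exact thresholds.

```python
def label_parameter(value, good_value, tolerance=0.10):
    """Return 'low', 'good', or 'high' for one printing parameter,
    judged relative to its known-good setting."""
    ratio = value / good_value
    if ratio < 1.0 - tolerance:
        return "low"
    if ratio > 1.0 + tolerance:
        return "high"
    return "good"

# Every captured frame inherits the labels of the parameters active
# when it was taken, e.g. for flow rate (%) and Z offset (mm):
frame_labels = {
    "flow_rate": label_parameter(60, good_value=100),    # under-extrusion
    "z_offset": label_parameter(0.08, good_value=0.08),  # nominal
}
print(frame_labels)  # {'flow_rate': 'low', 'z_offset': 'good'}
```

Because the labels come from the commanded parameters rather than human inspection, the labelling cost of growing the dataset is essentially zero.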
The system’s ability to automatically generate training data means it can create a large, diverse dataset without the need for extensive human intervention. This is a significant advantage, as it allows CAXTON to operate continuously, learning and adapting to new challenges on the fly. The result is a system that can detect and correct multiple parameters simultaneously, ensuring higher quality and fewer errors in the final print.
To train CAXTON, we generated a comprehensive dataset of 1.2 million images of parts printed with polylactic acid (PLA). Each image was labelled with the corresponding printing parameters, such as hotend temperature, bed temperature, flow rate, lateral speed, and Z offset. The data collection process was fully automated, with images captured every 0.4 seconds during printing. This fine-grained labelling lets the network learn interactions between parameters at a scale and consistency no human operator could match.
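One training record can be pictured as an image paired with the full parameter state active when it was captured. The field names below mirror the parameters listed above; the record layout itself is an illustrative assumption, and the capture cadence is taken from the text.

```python
from dataclasses import dataclass

CAPTURE_INTERVAL_S = 0.4  # one frame captured every 0.4 s during printing

@dataclass
class PrintSample:
    """One labelled frame: an image plus the parameter state at capture time."""
    image_path: str
    hotend_temp_c: float
    bed_temp_c: float
    flow_rate_pct: float
    lateral_speed_pct: float
    z_offset_mm: float

sample = PrintSample("frames/000123.jpg", 205.0, 60.0, 100.0, 100.0, 0.08)
print(sample.flow_rate_pct)  # 100.0
```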
The core of CAXTON’s intelligence lies in its multi-head deep residual attention network. This architecture uses a single shared backbone for feature extraction, with separate output heads for each parameter. By updating the shared backbone during training, the network learns how different parameters interact, which is crucial for predicting the optimal settings for each print. This approach not only improves the accuracy of individual parameter predictions but also allows the network to recognise multiple solutions to the same problem, similar to how a skilled operator might.
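The shared-backbone, multi-head layout can be sketched in PyTorch. The tiny convolutional stack below stands in for the deep residual attention network, and the head names and three-class output per parameter are assumptions for illustration; the structural point is that every head's gradients update the one shared backbone.

```python
import torch
import torch.nn as nn

class MultiHeadPrintNet(nn.Module):
    """Shared feature extractor with one classification head per parameter."""

    def __init__(self, n_classes_per_head=3):
        super().__init__()
        # Stand-in backbone; the real system uses a deep residual
        # attention network here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output head per printing parameter; during training the
        # losses from all heads backpropagate into the shared backbone.
        self.heads = nn.ModuleDict({
            name: nn.Linear(32, n_classes_per_head)
            for name in ("flow_rate", "lateral_speed", "z_offset", "hotend_temp")
        })

    def forward(self, x):
        features = self.backbone(x)
        return {name: head(features) for name, head in self.heads.items()}

model = MultiHeadPrintNet()
logits = model(torch.randn(1, 3, 64, 64))
print({name: tuple(t.shape) for name, t in logits.items()})
```

Sharing the backbone is what forces the network to learn features useful for predicting all parameters at once, rather than one parameter in isolation.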
CAXTON’s real-time correction capabilities were rigorously tested on a variety of printers and materials. The system successfully adjusted printing parameters in response to detected errors, significantly improving the final print quality. For example, when tested with four different thermoplastics, CAXTON was able to correct errors caused by incorrect initial settings, resulting in successful prints even under challenging conditions.
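A feedback-correction step of this kind can be sketched as follows: per-parameter class predictions drive small adjustments to the printer's settings. The update rule and step sizes are illustrative assumptions, not CAXTON's actual control policy.

```python
# Per-parameter step sizes (illustrative values).
STEP = {"flow_rate": 10.0, "z_offset": 0.02}

def correct(settings, predictions):
    """Nudge each parameter toward nominal based on its predicted class."""
    updated = dict(settings)
    for name, cls in predictions.items():
        if cls == "low":
            updated[name] += STEP[name]
        elif cls == "high":
            updated[name] -= STEP[name]
        # "good" -> leave the parameter unchanged
    return updated

settings = {"flow_rate": 100.0, "z_offset": 0.08}
print(correct(settings, {"flow_rate": "low", "z_offset": "good"}))
# {'flow_rate': 110.0, 'z_offset': 0.08}
```

Run once per captured frame, a loop like this converges misconfigured settings back toward workable values as the print progresses.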
What’s particularly impressive is CAXTON’s ability to discover optimal parameters for materials it hasn’t encountered before. This was demonstrated using ABS-X, a material not included in the original training dataset. The system adapted to the new material, adjusting parameters like flow rate and Z offset in real-time to ensure a successful print. This ability to generalise across different materials and setups highlights the potential of AI to make 3D printing more versatile and reliable.
Understanding why an AI makes certain decisions is crucial, especially when deploying it in production environments. To ensure transparency, we used visualisation techniques such as guided backpropagation and Gradient-weighted Class Activation Mapping (GradCAM). These methods help us see which features the network focuses on when making predictions, providing insights into the network’s decision-making process.
For instance, GradCAM visualisations revealed that the network focuses primarily on the most recent extrusion from the nozzle, which is essential for making timely corrections. These insights are invaluable for verifying the robustness of the network and ensuring that it makes reliable decisions, even in complex scenarios.
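The GradCAM idea itself is compact: average the gradients flowing into a chosen convolutional layer to weight its activation maps, then combine them into a coarse saliency map. The toy model below is an assumption for illustration; in practice the hooks would target a layer of the trained network.

```python
import torch
import torch.nn as nn

# Toy stand-in model; real usage would hook a conv layer of the
# trained multi-head network instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)
target_layer = model[0]

# Capture the layer's activations on the forward pass and its
# incoming gradients on the backward pass.
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # per-channel importance
cam = torch.relu((weights * acts["a"]).sum(dim=1))   # weighted activation map
cam = cam / (cam.max() + 1e-8)                       # normalise to [0, 1]
print(cam.shape)  # torch.Size([1, 32, 32])
```

The resulting map can be upsampled and overlaid on the input frame to show which regions, such as the latest extrusion near the nozzle, drove the prediction.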
To further test CAXTON’s generalisability, we applied the system to different printer setups, including a Lulzbot Taz 6 with a different nozzle size and filament type. Despite these changes, CAXTON successfully corrected errors and completed prints that would have otherwise failed. The system was also tested on a direct ink writing setup, printing with viscous materials like PDMS, mayonnaise, and ketchup. Even in these challenging conditions, CAXTON adjusted the flow rate and Z offset effectively, demonstrating its versatility across different AM methods.
CAXTON represents a significant advancement in material extrusion AM by offering real-time error detection and correction capabilities that far surpass traditional methods. By automating data acquisition and labelling, CAXTON generates a training dataset that enables robust, generalisable error correction across a wide range of printers, materials, and geometries. The deep multi-head neural network at its core can predict multiple parameters simultaneously, learn from its mistakes, and even discover new optimal printing parameters, making it a powerful tool for improving the reliability and efficiency of 3D printing.
Read the paper
Read the code
Douglas A. J. Brion
Sebastian W. Pattinson