Experimental Indication to Improve the NN Learning Accuracy by Integrity Constraints From the NN Training Data


Various approaches exist to improve the classification rate of neural networks (NNs). The application of integrity constraints (ICs, or short: constraints) to improve this rate, however, is novel. This paper investigates the effectiveness of ICs for improving NN performance, in particular when the amount of training data is reduced. The study starts by applying ICs to the initial NN classification, focusing on the development of data-set-specific constraints. These constraints are derived by machine learning algorithms such as multiple linear regression. The method applies these constraints to the misclassified records from different tests with the aim of reducing the misclassification rate. The effectiveness of this approach is quantified by comparing the original misclassification rates with those after applying the ICs; a significant reduction was observed in three different test cases. In Test 1, for example, the misclassification rate decreased from 0.78% to 0.19%, a reduction of 75.6%. Similar improvements were observed in the subsequent tests, underlining the potential of ICs to improve classification accuracy. Thus, this study provides evidence that the NNIC approach is a valuable tool for mitigating misclassification problems in neural network applications. The combination of training a NN and subsequently applying ICs derived from the training data (the NNIC approach) is new, and the experimental results indicate that it improves the classification rate.
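The NNIC idea described above can be illustrated with a minimal sketch. The paper itself does not publish code, so everything below is an assumption: two synthetic classes whose features follow a linear relation, a deliberately weak stand-in classifier (`weak_nn`) in place of a trained NN, and an IC learned per class by multiple linear regression on the training data (the constraint being "the residual of the regression stays within three standard deviations"). Predictions that violate the IC of their predicted class, but satisfy exactly one other class's IC, are reassigned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (hypothetical): class 0 follows y = 2x,
# class 1 follows y = -x + 5, both with small Gaussian noise.
n = 200
x0 = rng.uniform(0, 4, n); y0 = 2 * x0 + rng.normal(0, 0.2, n)
x1 = rng.uniform(0, 4, n); y1 = -x1 + 5 + rng.normal(0, 0.2, n)

def fit_ic(x, y):
    """Derive an IC via linear regression: y ~ w0 + w1*x,
    tolerance = 3 standard deviations of the training residuals."""
    A = np.column_stack([np.ones_like(x), x])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w, 3 * (y - A @ w).std()

w0, tol0 = fit_ic(x0, y0)
w1, tol1 = fit_ic(x1, y1)

def satisfies(w, tol, x, y):
    # IC check: point lies within the tolerance band of the class line
    return abs(y - (w[0] + w[1] * x)) <= tol

def weak_nn(x, y):
    # Stand-in for a trained NN; it ignores x, so it misclassifies
    # points far along either class line.
    return 0 if y < 2.5 else 1

# (x, y, true label); the middle points expose the weak classifier
test = [(1.0, 2.0, 0), (3.0, 6.0, 0), (1.0, 4.0, 1), (3.5, 1.5, 1)]

baseline, corrected = [], []
for x, y, _ in test:
    pred = weak_nn(x, y)
    baseline.append(pred)
    ics = {0: satisfies(w0, tol0, x, y), 1: satisfies(w1, tol1, x, y)}
    # NNIC step: if the predicted class's IC is violated and exactly
    # one other class's IC holds, reassign the prediction.
    if not ics[pred]:
        alt = [c for c, ok in ics.items() if ok]
        if len(alt) == 1:
            pred = alt[0]
    corrected.append(pred)

print("baseline :", baseline)    # 2 of 4 misclassified
print("with ICs :", corrected)   # all 4 correct
```

On this toy data the IC check repairs the two baseline errors, mirroring the kind of misclassification-rate reduction reported in the tests; the specific data, tolerance rule, and reassignment policy are illustrative choices, not the paper's exact procedure.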

Advances in Artificial Intelligence and Machine Learning
Alexander Maximilian Röser
Senior Consultant @WEPEX