Numerical Evidence That the Power of Artificial Neural Networks Limits Strong AI


A famous definition of AI is based on the terms weak and strong AI, going back to McCarthy. An open question is the characterization of these terms, i.e., the transition from weak to strong AI. Hardly any research results are known for this complex and important question. In this paper we investigate how the size and structure of a Neural Network (NN) limit the learnability of a training sample and can thus be used to discriminate between weak and strong AI (domains). Furthermore, the size of the training sample is a primary parameter for estimating the training effort with the big-O function. The number of training repetitions required may also limit the tractability of learning and is investigated as well. The results are illustrated with an analysis of a feedforward NN and a training sample for a language with 1,000 words, including the effort for the training repetitions.
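The kind of effort estimate the abstract refers to can be sketched as a back-of-envelope calculation. The snippet below is purely illustrative and assumes a hypothetical fully connected feedforward architecture (one hidden layer of 100 units for a 1,000-word vocabulary) and an assumed repetition count; the paper's actual network structure and constants are not given here.

```python
def feedforward_params(layer_sizes):
    """Count weights and biases of a fully connected feedforward NN
    with the given layer widths (input, hidden..., output)."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def training_effort(sample_size, repetitions, params):
    """Rough O(repetitions * sample_size * params) operation count,
    treating one pass over one sample item as ~params operations."""
    return repetitions * sample_size * params

# Hypothetical example: 1,000-word vocabulary encoded one-hot,
# one hidden layer of 100 units, 500 training repetitions (assumed values).
p = feedforward_params([1000, 100, 1000])
effort = training_effort(sample_size=1000, repetitions=500, params=p)
print(p, effort)
```

This makes the abstract's point concrete: the sample size and the repetition count each enter the effort estimate as multiplicative factors, so either one alone can render training intractable for a large enough domain.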

Advances in Artificial Intelligence and Machine Learning