Is the Universal Machine really universal?

Turing’s Universal Machine [1] can be shown to be not so universal: it is concerned only with what is, and is not, computable, showing that an algorithm might never reach an answer.
Classical reasoning, deduction and induction, was described by Aristotle. Running a computer program can be viewed as a deductive process: given some input values (premises) and a program (an argument), some output (a conclusion) can be reached. In this analogy, the creation of a computer program is an inductive process, and one which is traditionally expensive and manual. [Although I am not claiming that all NP problems are ones of induction!]
However, if the creation of functions and classes is itself represented as a program, a self-generating activity, we can generate programs in a deductive process. For example, we might write “List l = new LinkedList();”; to change this to a different data structure, say an array, we’d have to rewrite this text. This is an expensive process. I use Eclipse, so such refactoring is a cinch. But if we can generate code (well, that’s what refactoring is!) we need only supply a parameter in our input to determine the class generated in the output source code. This might sound like we’re just pushing the complexity onto the input parameters, but that is simply a shortcoming of representing software as structured text. Writing is a learnt skill; it is the difficult task!
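The idea of a parameter determining the class in the generated source can be sketched in a few lines of Java. This is a minimal illustration, not part of Eclipse or Enguage; the generator and its parameter names are invented here:

```java
// A minimal sketch of parameter-driven code generation: the data
// structure named in the emitted source text is chosen by an input
// parameter, rather than by rewriting the text by hand.
public class DeclarationGenerator {

    // Emit a Java declaration for the given variable; the concrete
    // data structure is selected by the 'structure' parameter.
    public static String declare(String name, String structure) {
        switch (structure) {
            case "linked": return "List<String> " + name + " = new LinkedList<>();";
            case "array":  return "String[] " + name + " = new String[10];";
            default:       return "List<String> " + name + " = new ArrayList<>();";
        }
    }

    public static void main(String[] args) {
        // Switching data structure is now a change of input, not of text.
        System.out.println(declare("l", "linked"));
        System.out.println(declare("l", "array"));
    }
}
```

Here the “complexity pushed onto the input” is just one string; the point is that the output source is deduced from it rather than written.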
We can view Eclipse as an ‘engine’: a relatively small program working on a relatively large dataset, which is source code. This pattern can also be seen in computer games (such as the DOOM engine and a WAD datafile). It can also be seen in chatbots, where the datafile may be AIML, or in the JavaScript of smart speakers. But these are all returning to the production of source code as a written artefact. Which, while you’ve been reading this, remains a difficult task.
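The engine-plus-data pattern mentioned above can be sketched minimally as a chatbot whose behaviour comes entirely from its dataset. The class and its pattern/response format are invented for illustration (a stand-in for something AIML-like), not taken from any of the systems named:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of the engine-plus-data pattern: the 'engine' is a
// small, fixed program; all of its behaviour comes from the dataset.
public class ChatEngine {
    // The 'datafile': pattern/response pairs loaded into the engine.
    private final Map<String, String> responses = new LinkedHashMap<>();

    public void learn(String pattern, String response) {
        responses.put(pattern.toLowerCase(), response);
    }

    // The 'engine': interpret an utterance against the dataset.
    public String respond(String utterance) {
        return responses.getOrDefault(utterance.toLowerCase(), "I don't understand.");
    }

    public static void main(String[] args) {
        ChatEngine e = new ChatEngine();
        e.learn("hello", "Hi there!");       // behaviour set by data, not code
        System.out.println(e.respond("Hello"));
    }
}
```

Note that changing what the bot says requires editing only data; the engine itself never changes, which is the essence of the pattern.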
The self-generating process, however, is intrinsic to speech. The key to Enguage is that the data itself is natural language. Indeed Enguage, as a ‘langUAGE ENGine’, only understands a dozen or so utterances; all else is produced from these [2]. What is more, recent examples have demonstrated Peircean reasoning: abduction, determining the conditions required for a program to generate a given output [3, 4].

[1] Turing, A. M. (1937) On Computable Numbers. doi:10.1112/plms/s2-42.1.230
[2] Wheatman, M. J. (2014) An Autopoietic Repertoire.
[3] Wheatman, M. J. (2016) What Google Doesn’t Know.
[4] Wheatman, M. J. (in press) On Why and Because: Reasoning with Natural Language.