Is the Universal Machine really universal?

Turing’s Universal Machine [1] is, arguably, not so universal: it is concerned only with what is, and is not, computable, showing that an algorithm might not reach an answer.
Classical reasoning, deduction and induction, was described by Aristotle. Running a computer program can be viewed as a deductive process: given some input values (premises) and a program (an argument), some output (a conclusion) can be reached. In this analogy, the creation of a computer program would be an inductive process, and is one which is traditionally an expensive, manual process. [Although, I am not claiming that all NP problems are ones of induction!]
However, if the creation of functions and classes is itself represented as a program, a self-generating activity, we can generate programs in a deductive process. For example, we might write “List l = new List();”; to change this to a different data structure, say an Array, we’d have to rewrite this text. This is an expensive process. I use Eclipse so that such refactoring is a cinch. But if we could generate code (well, that’s what refactoring is!) we need only supply a parameter in our input to determine the class generated in the output source code. This might sound like we’re just pushing the complexity onto the input parameters, but that is simply a shortcoming of representing software as structured text. Writing is a learnt skill; it is the difficult task!
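The idea above can be sketched in a few lines of Java: instead of hand-editing the declaration, a tiny generator takes the class name as an input parameter and emits the source text. The method name and shape here are purely illustrative, not part of Enguage.

```java
// A minimal sketch: the data structure becomes a parameter of the
// input, rather than a fragment of source text to be rewritten.
public class DeclGen {
    // Generate a declaration for a given interface, implementation and name.
    static String declare(String iface, String impl, String name) {
        return iface + " " + name + " = new " + impl + "();";
    }

    public static void main(String[] args) {
        // Switching data structure is now a change of input, not of text.
        System.out.println(declare("List", "ArrayList", "l"));
        System.out.println(declare("List", "LinkedList", "l"));
    }
}
```

Trivial, of course; the point is that the refactoring step has moved from editing text to choosing a parameter.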
We can view Eclipse as an ‘engine’: a relatively small program working on a relatively large dataset, which is source code. This pattern can also be seen in computer games (such as the DOOM engine and a WAD datafile). It can also be seen in chatbots, where the datafile may be AIML, or JavaScript in smart speakers. But these all return to the production of source code as a written artefact. Which, while you’ve been reading this, remains a difficult task.
The self-generating process, however, is intrinsic to speech. The key to Enguage is that the data itself is natural language. Indeed Enguage, as a ‘langUAGE ENGine’, only understands a dozen or so utterances; all else is produced from these [2]. What is more, recent examples have demonstrated Peircean reasoning: abduction, determining the conditions required for a program to generate a given output [3, 4].

[1] Turing, A. M. (1937) On Computable Numbers. doi:10.1112/plms/s2-42.1.230
[2] Wheatman, M. J. (2014) An Autopoietic Repertoire.
[3] Wheatman, M. J. (2016) What Google Doesn’t Know.
[4] Wheatman, M. J. (in press) On Why and Because: Reasoning with Natural Language.

Is Enguage Rules-based or Statistical NLP?

Neither. Perhaps, as an interface, Enguage isn’t NLP in the traditional sense? A trite argument might run that traditional NLP techniques haven’t worked, so why look for a new answer in old failures?

The running of a computer program can be likened to following a logical argument. Given some premises, and some rules, a conclusion can be reached. Similarly, given a program and some inputs, an output can be achieved. So perhaps all software is rules-based?

But in natural language, who creates these rules? Unless you believe, as did Kant, in pre-defined, a priori, meanings, we need to be able to create meaning through speech, as we go. We need to be able to build, to say:

to the phrase “hello” reply “hello to you too”.

The use of statistics, on the other hand, seems less prescriptive than rules, if it looks to the uses of language to determine possible meanings. However, if statistics are simply being used to decide between two possible syntax trees (does this utterance mean A or B?), the approach has its own pitfalls, and I will blog at some point on the difficulties of the syntax-semantics dyad.

As a post-script: I am often asked what dictionary Enguage uses, and my reply of none is perhaps a difficult one to comprehend. But it is the case that dictionaries do not define what words are about to mean; they record the meanings of words as they have been used. Similarly, Enguage records meaning in utterances, not words: utterances which are used to address concepts. But it is worth mentioning here, briefly, that there are two sides to the notion of meaning. An utterance has a structure composed of the contents of the stop list and concept names; plus, individual words will have personal meanings to the speaker: nobody knows what I mean by coffee when I say i need a coffee.

Who is Felicity?

Felicity, here, is not a person; it is the ability to find the right expression. It was used in 1955 by John Austin to explain human understanding, in his lecture series at Harvard, published posthumously [1]. Understanding, it goes, is not found in the words said; nor in some intended meaning, which would require knowledge of the unseen mind. Austin modelled it as the reaction to an utterance as a whole. If the speaker said sit down, and the listener sat down, the situation could be seen as felicitous: the right words had been found. The key to machine understanding can also be found in this idea.

This becomes central to machine understanding through the use of social conventions, which dictate the felicity of an utterance. So, if I said hello and the machine replied hello to you too, then I know that we are conversing and we’re going to get on just fine. But if I said hello and it replied go away, or even worse, error in line 30, the breaking of this social convention would cause a gut reaction in me: this would be infelicitous. Our relationship is in trouble. It is not a matter of positive and negative replies, but of positive and negative attitudes.

Software already has a simple analogue of felicity in the exception mechanism. While a function can normally return positive and negative values, when an exceptional error occurs it can raise, or throw, an exception, allowing the calling program to proceed appropriately. Enguage explicitly models social conventions in the repertoire of utterances surrounding a concept: what is said and what are the possible replies. These have to be modelled so that the user is certain as to how the software has used the utterance: an unequivocal reply. This leads onto the notion of disambiguation. But that’s another story.
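The analogy above can be sketched in Java: replies within the repertoire are ordinary return values, felicitous in Austin’s sense, while an utterance outside it breaks the convention and is raised as an exception. The repertoire and method names here are illustrative assumptions, not Enguage’s actual API.

```java
// A sketch of the felicity/exception analogy: in-repertoire
// utterances get a reply; out-of-repertoire ones break the
// convention and throw.
public class Felicity {
    static String reply(String utterance) {
        if (utterance.equals("hello"))   return "hello to you too"; // felicitous
        if (utterance.equals("goodbye")) return "goodbye";          // felicitous
        // Anything outside the repertoire breaks the social convention:
        throw new IllegalArgumentException("I don't understand");   // infelicitous
    }

    public static void main(String[] args) {
        System.out.println(reply("hello"));
        try {
            reply("flibble");
        } catch (IllegalArgumentException e) {
            System.out.println("infelicity: " + e.getMessage());
        }
    }
}
```

The caller, like Austin’s listener, reacts to the utterance as a whole: it either proceeds with the reply or handles the breakdown.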

[1] Austin, J. L., How to do Things With Words, OUP (1962)

Is Enguage Just a Chatbot?

The simple answer is no. But why not?

The raison d’être of a chatbot is to keep a person talking for a certain length of time, convinced that it is a human. Alan Turing’s original idea [1] talked of creating an imaginary world, but went on to describe how this could be supported by tricks to give hints about the origin of the correspondent. Joseph Weizenbaum’s original chatbot, ELIZA [2], did a stunning job with a keyword search: if the user typed “mother”, ELIZA would select one from several replies, such as “Why do you mention your mother?” I don’t have the source to his original version, but I’m reasonably familiar with David Ahl’s version [3].
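The keyword technique described above can be sketched in a few lines of Java: scan the input for a known keyword and return a canned reply. This is an illustration of the general technique only, not Weizenbaum’s (or Ahl’s) actual code, and the keywords and replies are assumptions.

```java
import java.util.Map;

// An ELIZA-style keyword search: no parsing, no understanding,
// just a scan for a trigger word and a canned response.
public class KeywordBot {
    static final Map<String, String> replies = Map.of(
        "mother", "Why do you mention your mother?",
        "dream",  "What does that dream suggest to you?"
    );

    static String respond(String input) {
        for (Map.Entry<String, String> e : replies.entrySet())
            if (input.toLowerCase().contains(e.getKey()))
                return e.getValue();
        return "Tell me more."; // default when no keyword matches
    }

    public static void main(String[] args) {
        System.out.println(respond("I was thinking about my mother"));
    }
}
```

Note that the bot never fails to reply; the default response papers over any input it cannot match, which is precisely the trick Enguage refuses to play.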

Enguage doesn’t do a keyword search. It must match an entire utterance for it to be understood; and it is quite happy to tell you it doesn’t understand! It could, however, be used to create a chatbot; try:

On “PHRASE-BEFORE mother PHRASE-AFTER”, reply “why do you mention your mother”.

But it is more than this. Enguage was written as a reaction to the lack of action in ELIZA: it serves as an interface to a machine. You can run programs, and interact with databases from Enguage; so, is it just a competitor to Alexa, or Watson?

Enguage is more than this, too. Enguage only understands twelve or so utterances, but these support the construction of the interpretant: the hypothetical cognitive mechanism giving the ability to interpret. This gives a machine the ability to form understanding, which can be done by voice. So, while it would be difficult to program Alexa using Alexa (because the Alexa Skills Kit is based in JavaScript on a website), Enguage is open to users saying what they mean.

[1] Turing, A. M., Computing Machinery and Intelligence, Mind, 236 (Oct., 1950)

[2] Weizenbaum, J., ELIZA – A Computer Program for the Study of Natural Language Communication between Man and Machine, CACM, 1966

[3] Ahl, D. H., More Basic Computer Games, ISBN: 9780894801372