Is Enguage Rules-based or Statistics-based NLP?

Neither. Perhaps, as an interface, Enguage isn't NLP in the traditional sense?  A trite argument might run that traditional NLP techniques haven't worked, so why look for a new answer in old failures?

The running of a computer program can be likened to following a logical argument. Given some premises, and some rules, a conclusion can be reached. Similarly, given a program and some inputs, an output can be achieved. So perhaps all software is rules-based?

But in natural language, who creates these rules? Unless you believe, as did Kant, in pre-defined, a priori meanings, we need to be able to create meaning through speech, as we go. We need to be able to build, to say:

to the phrase hello reply hello to you too.
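To make this concrete, here is a minimal sketch of the idea of building meaning at runtime rather than baking it in beforehand. This is not Enguage's actual API; the class and method names are hypothetical, but it shows the shape of the claim: a reply to hello does not exist until it is taught through use.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, not Enguage's API: meanings are not pre-defined
// but are added at runtime, one utterance at a time.
public class PhraseBook {
    private final Map<String, String> replies = new LinkedHashMap<>();

    // "to the phrase <phrase> reply <reply>" -- building meaning as we go
    public void learn(String phrase, String reply) {
        replies.put(phrase.toLowerCase(), reply);
    }

    public String interpret(String utterance) {
        return replies.getOrDefault(utterance.toLowerCase(), "I don't understand");
    }

    public static void main(String[] args) {
        PhraseBook book = new PhraseBook();
        System.out.println(book.interpret("hello")); // I don't understand
        book.learn("hello", "hello to you too");     // meaning created in use
        System.out.println(book.interpret("hello")); // hello to you too
    }
}
```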

The use of statistics, on the other hand, seems less prescriptive than rules, if it looks to the uses of language to determine possible meanings. However, if statistics are simply being used to decide between two possible syntax trees, does this utterance mean A or B, then this approach has its own pitfalls, and I will blog at some point on the difficulties of the syntax-semantics dyad.

As a post-script: I often get asked what dictionary Enguage uses, and my reply of none is perhaps a difficult one to comprehend?  But it is the case that dictionaries do not define what words are about to mean; they record meanings of words as they have been used.  Similarly, Enguage records meaning in utterances, not words: utterances which are used to address concepts. But it is worth mentioning here briefly that there are two sides to the notion of meaning.  An utterance has a structure composed of the contents of the stop list and concept names; plus, individual words will have personal meanings to the speaker: nobody knows what I mean by coffee when I say i need a coffee.
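As a rough illustration of that first side of meaning, the split of an utterance into structural words and concept words, here is a small sketch. The stop list below is invented for the example, not Enguage's own, and the class is hypothetical; the point is only that the structure of i need a coffee is carried by the common words, while coffee is left as a concept whose meaning belongs to the speaker.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch with an illustrative stop list (not Enguage's):
// the structural words of an utterance versus the words naming concepts.
public class UtteranceStructure {
    private static final Set<String> STOP = Set.of("i", "a", "the", "to", "need");

    public static void main(String[] args) {
        List<String> words = Arrays.asList("i need a coffee".split(" "));
        List<String> structure = words.stream()
                .filter(STOP::contains).collect(Collectors.toList());
        List<String> concepts = words.stream()
                .filter(w -> !STOP.contains(w)).collect(Collectors.toList());
        System.out.println("structure: " + structure); // [i, need, a]
        System.out.println("concepts:  " + concepts);  // [coffee]
    }
}
```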