Natural Language Processing


There have been high hopes for Natural Language Processing. Natural
Language Processing, also known simply as NLP, is part of the broader field of
Artificial Intelligence, the effort to make machines think. Computers may
appear intelligent as they crunch numbers and process information with blazing
speed. In truth, computers understand nothing but on and off and are limited
to exact instructions. But since the invention of the computer, scientists
have been attempting to make computers not only appear intelligent but be
intelligent. A truly intelligent computer would not be limited to rigid
computer-language commands but would instead be able to process and understand
the English language. This is the concept behind Natural Language Processing.
The phases a message goes through during NLP are the message itself,
syntax, semantics, pragmatics, and finally the intended meaning (M. A.
Fischer, 1987). Syntax is the grammatical structure. Semantics is the literal
meaning. Pragmatics is world knowledge, knowledge of the context, and a model
of the sender. Only when syntax, semantics, and pragmatics are all applied can
accurate Natural Language Processing take place.
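To make this stage model concrete, here is a minimal sketch in Python. The
three stages and their roles come from the model above; the tiny lexicon, the
example sentence, and the function names are invented purely for illustration.

```python
# A toy pipeline in the shape of Fischer's stage model: message -> syntax ->
# semantics -> pragmatics -> intended meaning. Every rule below is an
# invented toy example, not a real NLP system.

def parse_syntax(message: str) -> list[str]:
    """Syntax: recover grammatical structure (here, just tokenize)."""
    return message.lower().rstrip(".!?").split()

def apply_semantics(tokens: list[str]) -> dict[str, str]:
    """Semantics: attach a literal meaning to each token from a tiny lexicon."""
    lexicon = {"bank": "financial institution OR river edge",
               "opened": "began operating"}
    return {tok: lexicon.get(tok, tok) for tok in tokens}

def apply_pragmatics(meanings: dict[str, str], context: str) -> dict[str, str]:
    """Pragmatics: use knowledge of the context to pick among literal senses."""
    if "bank" in meanings and context == "finance":
        meanings["bank"] = "financial institution"
    return meanings

message = "The bank opened."
intended = apply_pragmatics(apply_semantics(parse_syntax(message)),
                            context="finance")
print(intended)
# {'the': 'the', 'bank': 'financial institution', 'opened': 'began operating'}
```

Note how the syntax and semantics stages alone cannot decide what "bank"
means; only the pragmatics stage, with its knowledge of context, settles it.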
Alan Turing made a prediction about NLP in 1950 (Daniel Crevier, 1994, page 9):
"I believe that in about fifty years' time it will be possible to
program computers ... to make them play the imitation game so well that an
average interrogator will not have more than 70 per cent chance of making the
right identification after five minutes of questioning."
But in 1950, computer technology was limited. Because of these
limitations, NLP programs of that day focused on exploiting the strengths the
computers did have. For example, a program called SYNTHEX tried to determine
the meaning of sentences by looking up each word in its encyclopedia. Another
early approach was Noam Chomsky's at MIT. He believed that language could be
analyzed without any reference to semantics or pragmatics, simply by looking
at the syntax. Neither of these techniques worked. Scientists realized that
their Artificial Intelligence programs did not think the way people do, and
since people are far more intelligent than those programs, they decided to
make their programs think more the way a person would. So in the late 1950s,
scientists shifted from trying to exploit the capabilities of computers to
trying to emulate the human brain (Daniel Crevier, 1994).
Ross Quillian at Carnegie Mellon wanted to program the associative
aspects of human memory to create better NLP programs (Daniel Crevier, 1994).
Quillian's idea was to determine the meaning of a word from the words around
it. For example, consider these sentences: "After the strike, the president
sent him away." "After the strike, the umpire sent him away." Even though
these sentences are identical except for one word, they have very different
meanings because the word "strike" takes on a different sense in each.
Quillian said the meaning of "strike" should be determined by looking at the
subject. In the first sentence, the word "president" makes "strike" mean a
labor dispute. In the second sentence, the word "umpire" makes "strike" mean
that a batter has swung at a baseball and missed.
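A few lines of Python make the mechanism concrete. This is only a sketch of
the idea, not Quillian's actual semantic-network program; the lookup table
and function names below are invented for illustration.

```python
# A toy version of Quillian's idea: pick the sense of an ambiguous word by
# looking at the sentence's subject, as in the president/umpire example.
# The rule table and function names are invented, not Quillian's.

SENSES = {
    # (ambiguous word, subject) -> sense
    ("strike", "president"): "labor dispute",
    ("strike", "umpire"): "a swing and a miss at a pitched ball",
}

def disambiguate(word: str, subject: str) -> str:
    """Return the sense of `word` implied by the sentence's subject."""
    return SENSES.get((word, subject), f"unknown sense of '{word}'")

print(disambiguate("strike", "president"))  # labor dispute
print(disambiguate("strike", "umpire"))     # a swing and a miss at a pitched ball
```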
In 1958, Joseph Weizenbaum had a different approach to Artificial
Intelligence, which he discusses in this quote (Daniel Crevier, 1994, page 133):
"Around 1958, I published my first paper, in the commercial magazine
Datamation. I had written a program that could play a game called "five in a
row." It's like ticktacktoe, except you need rows of five exes or noughts to
win. It's also played on an unbounded board; ordinary coordinate paper will
do. The program used a ridiculously simple strategy with no look ahead, but
it could beat anyone who played at the same naive level. Since most people
had never played the game before, that included just about everybody.
Significantly, the paper was entitled "How to Make a Computer Appear
Intelligent," with "appear" emphasized. In a way, that was a forerunner to my
later ELIZA, to establish my status as a charlatan or con man. But the other
side of the coin was that I freely stated it. The idea was to create the
powerful illusion that the computer was intelligent. I went to considerable
trouble in the paper to explain that there wasn't much behind the scenes,
that the machine wasn't thinking. I explained the strategy well enough that
anybody could write that program, which is the same thing I did with ELIZA."
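Weizenbaum does not spell out his strategy, so the Python sketch below is
only a guess at the flavor of a "ridiculously simple strategy with no look
ahead": score each candidate move in isolation and never simulate the
opponent's replies. Every name and rule in it is an assumption.

```python
# A hedged sketch of a no-look-ahead five-in-a-row player. Weizenbaum's
# actual rules are not given in the quote; this greedy heuristic is invented
# to show what "no look ahead" means.

from itertools import product

DIRECTIONS = [(1, 0), (0, 1), (1, 1), (1, -1)]

def line_length(board, cell, player):
    """Longest run of `player`'s marks that playing at `cell` would create."""
    best = 1
    for dx, dy in DIRECTIONS:
        run = 1
        for sign in (1, -1):
            x, y = cell
            while board.get((x + sign * dx, y + sign * dy)) == player:
                run += 1
                x, y = x + sign * dx, y + sign * dy
        best = max(best, run)
    return best

def choose_move(board, me, opponent):
    """Greedy: take the empty cell next to existing marks that most extends
    my longest line or blocks the opponent's. No search, no look-ahead."""
    candidates = {(x + dx, y + dy)
                  for (x, y) in board
                  for dx, dy in product((-1, 0, 1), repeat=2)} - board.keys()
    return max(candidates or {(0, 0)},
               key=lambda c: max(line_length(board, c, me),
                                 line_length(board, c, opponent)))

# Unbounded board as a dict from (x, y) coordinates to marks.
board = {(0, 0): "X", (1, 0): "X", (5, 5): "O"}
print(choose_move(board, me="O", opponent="X"))  # e.g. (2, 0) or (-1, 0)
```

Against a naive opponent, greedily extending your own longest line while
blocking theirs is often enough to look clever, which is exactly
Weizenbaum's point about appearing, rather than being, intelligent.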
ELIZA was a program written by Joe Weizenbaum which communicated with its
user while impersonating a psychotherapist. Weizenbaum wrote the program to
demonstrate the tricky alternatives to having programs analyze syntax,
semantics, or pragmatics. One of ELIZA's