Author of the work: name hidden by the uploader; 5 August 2012, 23:54; lecture
It is commonly recognised that acquaintance with at least some of the currently used procedures of linguistic investigation is of considerable importance both for language learners and for prospective teachers: it enables them to observe how linguists obtain answers to certain questions and helps in the preparation of teaching material. It also helps language learners to become good observers of how language works, and this is the only lasting way to become better users of the language.
It goes without saying that, to be useful in teaching, statistics should deal with meanings as well as sound-forms, as not all word-meanings are equally frequent. Besides, the number of meanings exceeds by far the number of words. The total number of different meanings recorded and illustrated in the OED for the first 500 words of the Thorndike Word List is 14,070; for the first thousand it is nearly 25,000. Naturally not all of these meanings should be included in the list of the first two thousand most commonly used words. Statistical analysis of meaning frequencies resulted in the compilation of A General Service List of English Words with Semantic Frequencies. The semantic count is a count of the frequency of occurrence of the various senses of the 2,000 most frequent words as found in a study of five million running words. It is based on the differentiation of meanings in the OED, and the frequencies are expressed as percentages, so that the teacher and textbook writer may find it easier to understand and use the list. An example will make the procedure clear.
room
  1. 'space': takes less room, not enough room to turn round (in), make room for, (figurative) room for improvement (12%)
  2. 'part of a house': come to my room, bedroom, sitting room, drawing room, bathroom (83%)
  3. plural, 'suite, lodgings': my room in college, to let rooms (2%)
It can easily be observed from the semantic count above that the meaning 'part of a house' (sitting room, drawing room, etc.) makes up 83% of all occurrences of the word room and should be included in the list of meanings to be learned by beginners, whereas the meaning 'suite, lodgings' is not essential, as it makes up only 2% of all occurrences of this word.
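The arithmetic of such a semantic count can be sketched in Python. The sense tags and counts below are invented to mirror the percentages quoted above; they are not the actual data of the General Service List.

```python
from collections import Counter

# Hypothetical sense-tagged occurrences of "room" in a corpus sample;
# the tags and counts are illustrative only.
occurrences = (["space"] * 12
               + ["part of a house"] * 83
               + ["suite, lodgings"] * 2
               + ["other"] * 3)

def semantic_count(tagged_tokens):
    """Return each sense's share of all occurrences as a rounded percentage."""
    counts = Counter(tagged_tokens)
    total = sum(counts.values())
    return {sense: round(100 * n / total) for sense, n in counts.items()}

shares = semantic_count(occurrences)
# A teaching list would keep only senses above some threshold, e.g. 10%.
core_senses = [s for s, pct in shares.items() if pct >= 10]
```

A threshold of this kind is exactly the decision the count supports: 'part of a house' clears it easily, 'suite, lodgings' does not.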
Statistical methods have also been applied to various theoretical problems of meaning. An interesting attempt was made by G. K. Zipf to study the relation between polysemy and word frequency by statistical methods. Having discovered that there is a direct relationship between the number of different meanings of a word and its relative frequency of occurrence, Zipf proceeded to find a mathematical formula for this correlation. He came to the conclusion that the number of different meanings of a word tends to be equal to the square root of its relative frequency (with the possible exception of the few dozen most frequent words). This was summed up in the formula m = F^(1/2), where m stands for the number of meanings and F for relative frequency. This formula is known as Zipf's law.
Though numerous corrections to this law have been suggested, still there is no reason to doubt the principle itself, namely, that the more frequent a word is, the more meanings it is likely to have.
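The correlation is straightforward to compute. A minimal sketch, taking a word's relative frequency in whatever units the count uses:

```python
import math

def expected_meanings(relative_frequency):
    """Zipf's law, m = F**(1/2): the number of meanings of a word tends
    to equal the square root of its relative frequency (the few dozen
    most frequent words excepted)."""
    return math.sqrt(relative_frequency)

# A word four times as frequent as another is expected to have only
# twice as many meanings:
ratio = expected_meanings(400) / expected_meanings(100)   # → 2.0
```

The sub-linear growth is the point: frequency rises much faster than polysemy.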
One of the most promising trends in statistical enquiries is the analysis of collocability of words. It is observed that words are joined together according to certain rules. The linguistic structure of any string of words may be described as a network of grammatical and lexical restrictions.1
The set of lexical restrictions is very complex. On the standard probability scale the set of (im)possibilities of combination of lexical units ranges from zero (impossibility) to one (certainty).
Of considerable significance in this respect is the fact that the high frequency of individual lexical items does not forecast a high frequency for the word-group formed by these items. Thus, e.g., the adjective able and the noun man are both included in the list of the 2,000 most frequent words; the word-group an able man, however, is very rarely used.
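This observation can be made concrete by comparing a word-group's observed frequency with the frequency that independence of its members would predict. The toy corpus below is invented for illustration:

```python
from collections import Counter

# A toy corpus, invented for illustration only.
tokens = "an able man helps ; the man is able ; an old man sleeps".split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def expected_if_independent(w1, w2):
    """Bigram count predicted if the two items combined purely by chance."""
    return unigrams[w1] * unigrams[w2] / n

observed = bigrams[("able", "man")]                  # actual co-occurrences
predicted = expected_if_independent("able", "man")
# Frequent items, infrequent pair: it is the ratio observed/predicted,
# not the unigram counts, that a collocability study must measure.
```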
The importance of frequency analysis of word-groups is indisputable as in speech we actually deal not with isolated words but with word-groups. Recently attempts have been made to elucidate this problem in different languages both on the level of theoretical and applied lexicology and lexicography.
It should be pointed out, however, that the statistical study of vocabulary has some inherent limitations.
Firstly, the statistical approach is purely quantitative, whereas most linguistic problems are essentially qualitative. To put it in simpler terms, quantitative research implies that one knows what to count, and this knowledge is reached only through a long period of qualitative research carried out on the basis of certain theoretical assumptions.
For example, even simple numerical word counts presuppose a qualitative definition of the lexical items to be counted. Various questions arise in this connection, e.g. is the orthographical unit work to be considered one word or two different words: work n and (to) work v? Are all word-groups to be viewed as consisting of so many words, or are some of them to be counted as single, self-contained lexical units? We know that in some dictionaries word-groups of the type by chance, at large, in the long run, etc. are counted as one item though they consist of at least two words; in others they are not counted at all but viewed as peculiar cases of usage of the notional words chance, large, run, etc. Naturally the results of the word counts largely depend on the basic theoretical assumption, i.e. on the definition of the lexical item.1
We also need to use qualitative description of the language in deciding whether we deal with one item or more than one, e.g. in sorting out two homonymous words and different meanings of one word.2 It follows that before counting homonyms one must have a clear idea of what difference in meaning is indicative of homonymy. From the discussion of the linguistic problems above we may conclude that an exact and exhaustive definition of the linguistic qualitative aspects of the items under consideration must precede the statistical analysis.
Secondly, we must admit that not all linguists have the mathematical equipment necessary for applying statistical methods. In fact, what is often referred to as statistical analysis is a purely numerical count of this or that linguistic phenomenon, not involving the use of any mathematical formula, which in some cases may be misleading.
Thus, statistical analysis is applied in different branches of linguistics, including lexicology, as a means of verification and as a reliable criterion for the selection of language data, provided a qualitative description of the lexical items is available.
§ 3. Immediate Constituents Analysis
The theory of Immediate Constituents (IC) was originally elaborated as an attempt to determine the ways in which lexical units are relevantly related to one another. It was discovered that combinations of such units are usually structured into hierarchically arranged sets of binary constructions. For example in the word-group a black dress in severe style we do not relate a to black, black to dress, dress to in, etc. but set up a structure which may be represented as a black dress / in severe style. Thus the fundamental aim of IC analysis is to segment a set of lexical units into two maximally independent sequences or ICs thus revealing the hierarchical structure of this set. Successive segmentation results in Ultimate Constituents (UC), i.e. two-facet units that cannot be segmented into smaller units having both sound-form and meaning. The Ultimate Constituents of the word-group analysed above are: a | black | dress | in | severe | style.
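Such a hierarchy of binary cuts is naturally modelled as a nested structure. A minimal Python sketch, assuming the bracketing given above (the inner cut of a black dress into a | black dress is one plausible segmentation, chosen for illustration):

```python
# The word-group as a hierarchy of binary cuts; leaves (strings) are
# the ultimate constituents.
ic_tree = (("a", ("black", "dress")),      # a | black dress
           ("in", ("severe", "style")))    # in | severe style

def ultimate_constituents(node):
    """Apply successive binary segmentation down to the UCs."""
    if isinstance(node, str):
        return [node]
    left, right = node
    return ultimate_constituents(left) + ultimate_constituents(right)

ucs = ultimate_constituents(ic_tree)
# → ['a', 'black', 'dress', 'in', 'severe', 'style']
```

Each internal node records one binary IC cut; flattening the tree recovers exactly the UC string given in the text.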
The meaning of the sentence, word-group, etc. and its binary IC segmentation are interdependent. For example, fat major's wife may mean either that 'the major is fat' or that 'his wife is fat'. The former semantic interpretation presupposes the IC analysis fat major's | wife, whereas the latter reflects a different segmentation into ICs, namely fat | major's wife.
It must be admitted that this kind of analysis is arrived at by reference to intuition and it should be regarded as an attempt to formalise one’s semantic intuition.
It is mainly to discover the derivational structure of words that IC analysis is used in lexicological investigations. For example, the verb denationalise has both a prefix de- and a suffix -ise (-ize). To decide whether this word is a prefixal or a suffixal derivative we must apply IC analysis.1 The binary segmentation of the string of morphemes making up the word shows that *denation or *denational cannot be considered independent sequences as there is no direct link between the prefix de- and nation or national. In fact no such sound-forms function as independent units in modern English. The only possible binary segmentation is de | nationalise, therefore we may conclude that the word is a prefixal derivative. There are also numerous cases when identical morphemic structure of different words is insufficient proof of the identical pattern of their derivative structure which can be revealed only by IC analysis. Thus, comparing, e.g., snow-covered and blue-eyed we observe that both words contain two root-morphemes and one derivational morpheme. IC analysis, however, shows that whereas snow-covered may be treated as a compound consisting of two stems snow + covered, blue-eyed is a suffixal derivative as the underlying structure as shown by IC analysis is different, i.e. (blue+eye)+-ed.
It may be inferred from the examples discussed above that ICs represent the word-formation structure while the UCs show the morphemic structure of polymorphic words.
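The decision procedure for denationalise can be sketched as a lookup against a list of attested free-standing units; the list below is a toy fragment, not a real word list of modern English:

```python
# A toy set of attested units; real work would consult a full word list.
attested = {"nation", "national", "nationalise"}

def first_ic_cut(word, prefix, suffix):
    """Pick the binary segmentation whose remainder is an attested unit."""
    stem_after_prefix = word[len(prefix):]
    stem_before_suffix = word[:-len(suffix)]
    if word.startswith(prefix) and stem_after_prefix in attested:
        return (prefix + "-", stem_after_prefix)      # prefixal derivative
    if word.endswith(suffix) and stem_before_suffix in attested:
        return (stem_before_suffix, "-" + suffix)     # suffixal derivative
    return None

cut = first_ic_cut("denationalise", "de", "ise")
# → ('de-', 'nationalise'): since *denation and *denational are
# unattested, the word is a prefixal derivative.
```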
§ 4. Distributional Analysis and Co-occurrence
Distributional analysis in its various forms is commonly used nowadays by lexicologists of different schools of thought. By the term distribution we understand the occurrence of a lexical unit relative to other lexical units of the same level (words relative to words / morphemes relative to morphemes, etc.). In other words by this term we understand the position which lexical units occupy or may occupy in the text or in the flow of speech. It is readily observed that a certain component of the word-meaning is described when the word is identified distributionally. For example, in the sentence The boy — home the missing word is easily identified as a verb — The boy went, came, ran, etc. home. Thus, we see that the component of meaning that is distributionally identified is actually the part-of-speech meaning but not the individual lexical meaning of the word under analysis. It is assumed that sameness / difference in distribution is indicative of sameness / difference in part-of-speech meaning.
It is also observed that in a number of cases words have different lexical meanings in different distributional patterns. Compare, e.g., the lexical meaning of the verb to treat in the following: to treat somebody well, kindly, etc. — 'to act or behave towards', where the verb is followed by a noun + an adverb, and to treat somebody to ice-cream, champagne, etc. — 'to supply with food, drink, entertainment, etc. at one's own expense', where the verb is followed by a noun + the preposition to + another noun. Compare also the meaning of the adjective ill in different distributional structures, e.g. ill look, ill luck, ill health, etc. (ill + N — 'bad') and fall ill, be ill, etc. (V + ill — 'sick').
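The correlation between distributional pattern and sense can be sketched as pattern matching. The surface patterns below are crude stand-ins for the structures N + adv and N + to + N; real work would operate on POS-tagged text, not raw strings:

```python
import re

# Each surface pattern is paired with the sense it selects; the regular
# expressions are rough approximations for illustration only.
TREAT_SENSES = [
    (re.compile(r"\btreat \w+ to\b"),
     "to supply with food, drink, entertainment"),
    (re.compile(r"\btreat \w+ (well|kindly|badly)\b"),
     "to act or behave towards"),
]

def sense_of_treat(phrase):
    """Choose the sense of 'treat' from the pattern it occurs in."""
    for pattern, gloss in TREAT_SENSES:
        if pattern.search(phrase):
            return gloss
    return None

sense_of_treat("treat somebody to ice-cream")
sense_of_treat("treat somebody kindly")
```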
The interdependence of distribution and meaning can be also observed at the level of word-groups. It is only the distribution of otherwise completely identical lexical units that accounts for the difference in the meaning of water tap and tap water. Thus, as far as words are concerned the meaning by distribution may be defined as an abstraction on the syntagmatic level.
It should also be noted that not only words within word-groups but whole word-groups themselves may acquire a certain denotational meaning due to a certain distributional pattern to which this particular meaning is habitually attached. For example, the word preceding ago habitually denotes a certain period of time (an hour, a month, a century, etc. ago), and the whole word-group denotes a certain temporal unit. In this particular distributional pattern any word is bound to acquire an additional lexical meaning of a certain period of time, e.g. a grief ago (E. Cummings), three cigarettes ago (A. Christie), etc. The words a grief and a cigarette are understood as indicating a certain period of time, and the word-groups as denoting temporal units. This is also true of the meaning of the most unusual word-groups or sentences, e.g. griefs of joy (E. Cummings) (cf. days of joy, nights of grief, etc.), to deify one's razorblade (E. Cummings) (cf. to sharpen the knife).
Distributional pattern as such seems to possess a component of meaning not to be found in the individual words making up the word-group or the sentence. Thus, the meaning 'make somebody do smth by means of something' cannot be traced back to the lexical meanings of the individual words in to coax somebody into accepting the suggestion. The distributional pattern itself seems to impart this meaning to the whole irrespective of the meaning of the verb used in this structure, i.e. in the pattern V + N + into + Ving verbs of widely different lexical meaning may be used. One can say, e.g., to kiss somebody into doing smth, to flatter somebody into doing smth, to beat somebody into doing smth, etc.; in all these word-groups one finds the meaning 'to make somebody do something', which is actually imparted by the distributional pattern.
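A rough recogniser for the pattern V + N + into + V-ing can be sketched as follows; the regular expression is a surface approximation, not a grammatical analysis, and the slot labels are assumptions for illustration:

```python
import re

# V + N + into + V-ing: whichever verb fills the first slot, the whole
# pattern imparts the meaning 'make somebody do something'.
PATTERN = re.compile(r"\b(\w+) (\w+) into (\w+ing)\b")

def pattern_reading(phrase):
    """Return (V, N, V-ing) if the phrase instantiates the pattern."""
    m = PATTERN.search(phrase)
    return m.groups() if m else None

# Verbs of widely different lexical meaning fit the same slot:
for phrase in ("coax him into accepting",
               "kiss her into agreeing",
               "beat them into submitting"):
    assert pattern_reading(phrase) is not None
```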
The same set of lexical items can mean different things in different syntactic arrangements, as illustrated by: John thought he had left Mary alone. Mary alone thought he had left John. Had he alone thought Mary left John?
As can be inferred from the above, distributional analysis is mainly applied by the linguist to find out sameness or difference of meaning. It is assumed that the meaning of any lexical unit may be viewed as made up of the lexical meaning of its components and of the meaning of the pattern of their arrangement, i.e. their distributional meaning. This may perhaps be best illustrated by the semantic analysis of polymorphic words. The word singer, e.g., has the meaning of 'one who sings or is singing' not only due to the lexical meaning of the stem sing- and the derivational morpheme -er (= active doer), but also because of the meaning of their distributional pattern. A different pattern of arrangement of the same morphemes, *ersing, changes the whole into a meaningless string of sounds.1
Distribution of stems in a compound makes part of the lexical meaning of the compound word. Compare, e.g., different lexical meanings of the words formed by the same stems bird and cage in bird-cage and cage-bird.
It is also assumed that productivity largely depends on the distributional meaning of lexical units. Distributional meaning accounts for the possibility of making up and understanding a lexical item that has never been heard or used before but whose distributional pattern is familiar to the speaker and the hearer. Thus, though such words as kissable, hypermagical, smiler (She is a charming smiler), etc. cannot be found in any dictionary, their meaning is easily understood on the analogy of other words having the same distributional pattern, e.g. v + -able -> A, as in readable, eatable and kissable.
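Coining and glossing such nonce words by analogy can be sketched in a few lines; the gloss template 'that can be V-ed' is a naive assumption of this sketch and fails for irregular participles:

```python
def coin_able(verb):
    """v + -able -> A: coin a (possibly unattested) adjective by analogy
    with readable, eatable, etc.  The gloss template is naive string
    surgery, for illustration only."""
    return verb + "able", f"that can be {verb}ed"

coin_able("kiss")   # → ('kissable', 'that can be kissed')
```

The adjective kissable appears in no dictionary, yet the pattern makes both its form and its meaning predictable.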
From the discussion of the distributional analysis above it should not be inferred that difference in distribution is always indicative of the difference in meaning and conversely that sameness of distribution is an absolutely reliable criterion of sameness of meaning.
It was pointed out above that, as a rule, the distribution of stems in a compound word predicts a certain component of meaning, as the stem that stands first is understood as modifying the one that follows (cf. bird-cage and cage-bird). In certain cases, however, the meaning, or to be more exact one of the word-meanings, may be structured differently. Firstly, in morphologically non-motivated words distributional structure is not correlated with a certain meaning. For instance, in the words apple-sauce, plum-sauce, etc. we actually see that the item sauce is modified by the stems apple-, plum-, etc.; hence these words may be semantically interpreted as 'kind of sauce made of apples, plums, etc.'. One of the meanings of the word apple-sauce, however — 'nonsense', 'insincere flattery' — is in no way connected with the distributional structure of the stems. This is observed in all non-motivated words. Secondly, it is common knowledge that words used in identical distributional patterns may have different meanings. Compare, e.g., the meaning of the verb to move in the pattern to move + N: 1. cause to change position (e.g. move the chair, the piano, etc.), 2. arouse, work on the feelings of smb. (e.g. to move smb. deeply). In cases of this type distributional analysis, traditionally understood as analysis on the level of different parts of speech, as an abstraction on the syntagmatic level, is of little help in the analysis of sameness or difference of lexical meaning.
Distributional analysis, however, is not as a rule confined to the analysis on the part-of-speech level or in general on the grammatical level but is extended to the lexical level.
The essential difference between grammar and lexis is that grammar deals with an obligatory choice between a comparatively small and limited number of possibilities, e.g. between the man and the men depending on the form of the verb to be, cf. The man is walking, The men are walking, where the selection of the singular number excludes the selection of the plural. Lexis accounts for the much wider possibilities of choice between, say, man, soldier, fireman and so on. Lexis is thus said to be a matter of choice between open sets of items, while grammar is one between closed systems.1 The possibilities of choice between lexical items are not limitless, however. Lexical items containing certain semantic components are usually observed only in certain positions. In phrases such as all the sun long, a grief ago and farmyards away the deviation consists in the nouns sun, grief, farmyards occupying a position where normally only members of a limited list of words appear (in this case nouns of linear measurement such as inches, feet, miles). The difference between the normal lexical paradigm and the ad hoc paradigm can be represented as follows:
  inches, feet, yards, etc.  }  away  (normal)
  farmyards, griefs, etc.    }  away  (deviant)
Cf. also “half an hour and ten thousand miles ago” (Arthur C. Clarke), “She is feeling miles better today.” (Nancy Mitford)
Distribution defined as the occurrence of a lexical unit relative to other lexical units can be interpreted as co-occurrence of lexical items and the two terms can be viewed as synonyms.
It follows that by the term distribution we understand the aptness of a word in one of its meanings to collocate or co-occur with a certain group, or certain groups, of words having some common semantic component. In this case distribution may be treated on the level of semantic classes or subclasses of lexical units. Thus, e.g., it is common practice to subdivide animate nouns into nouns denoting human beings and non-humans (animals, birds, etc.). Inanimate nouns are usually subdivided into concrete and abstract (cf., e.g., table, book, flower and joy, idea, relation), which may be further classified into lexico-semantic groups, i.e. groups of words joined together by a common concept, e.g. nouns denoting pleasurable emotions (joy, delight, rapture, etc.), nouns denoting mental aptitude (cleverness, brightness, shrewdness, etc.). We observe that the verb to move followed by nouns denoting inanimate objects (move + Nin) as a rule has the meaning 'cause to change position'; when, however, this verb is followed by nouns denoting human beings (move + Nanim pers), it will usually have another meaning, i.e. 'arouse, work on the feelings of'. In other cases the classification of nouns into animate / inanimate may be insufficient for the semantic analysis, and it may be necessary to single out different lexico-semantic groups, as, e.g., in the case of the adjective blind. Any collocation of this adjective with a noun denoting a living being (animate) (blind + Nan) will bring out the meaning 'without the power to see' (blind man, cat, etc.). Blind followed by a noun denoting inanimate objects or abstract concepts may have different meanings depending on the lexico-semantic group the noun belongs to. Thus, blind will have the meaning 'reckless, thoughtless, etc.' when combined with nouns denoting emotions (blind passion, love, fury, etc.)
and the meaning ‘hard to discern, to see’ in collocation with nouns denoting written or typed signs (blind handwriting, blind type, etc.).
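Sense selection by lexico-semantic group can be sketched as a two-step lookup; the group assignments below are a toy fragment following the discussion above, not a full classification:

```python
# Step 1: assign each noun to a lexico-semantic group (toy fragment).
GROUP = {"man": "animate", "cat": "animate",
         "passion": "emotion", "love": "emotion", "fury": "emotion",
         "handwriting": "written signs", "type": "written signs"}

# Step 2: each group selects one sense of 'blind' (glosses from the text).
BLIND_SENSE = {"animate": "without the power to see",
               "emotion": "reckless, thoughtless",
               "written signs": "hard to discern, to see"}

def sense_of_blind(noun):
    """Select the meaning of 'blind' from the collocate's semantic group;
    returns None for nouns outside the toy classification."""
    return BLIND_SENSE.get(GROUP.get(noun))

sense_of_blind("fury")          # → 'reckless, thoughtless'
sense_of_blind("handwriting")   # → 'hard to discern, to see'
```

The animate/inanimate split alone would conflate the last two senses; the finer lexico-semantic groups are what keep them apart.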
In the analysis of word-formation patterns the investigation on the level of lexico-semantic groups is commonly used to find out the word-meaning, the part of speech, the lexical restrictions of the stems, etc. For example, the analysis of the derivational pattern n + -ish -> A shows that the suffix -ish is practically never combined with noun-stems which denote units of time, units of space, etc. (*hourish, *mileish, etc.). The overwhelming majority of adjectives in -ish are formed from noun-stems denoting living beings (wolfish, clownish, boyish, etc.).
It follows that distribution may be viewed as the place of a lexical item relative to other lexical items on the level of semantic classes and sub-classes.
The analysis of lexical collocability in word-groups is widely applied for different purposes: to find out typical, most commonly used collocations in modern English, to investigate the possibility / impossibility of certain types of meaning in certain types of collocations, and so on.
It stands to reason that certain lexical items rarely if ever co-occur because of extra-linguistic factors. There are no restrictions inherent in the grammar or vocabulary of the English language that would make co-occurrence of the participle flying with the noun rhinoceros impossible, yet we may be reasonably certain that the two words are unlikely to co-occur.
What we describe as meaning by collocation or meaning by co-occurrence is actually a blend of extra-linguistic and intra-linguistic components of meaning. One or the other component may prevail. For instance, one may argue that the meaning of the adjective good is different in good doctor, good mother, good milkman, etc. because we know that a good doctor is 'a doctor who gives his patients adequate medical care and treatment', whereas a good mother is 'a mother who takes care of the needs of her children and cares for them adequately'. Here it is naturally the extra-linguistic factors that account for the difference in meaning.
Of greatest importance for language teaching, however, is the investigation of lexical restrictions in collocability that are of a purely intralinguistic nature and cannot be accounted for by logical considerations. This can perhaps be best illustrated by comparing the collocability of correlated words in different languages. In the English language, e.g., the verb to seize may be combined with nouns denoting different kinds of emotions: I was seized with joy, grief, etc., whereas in Russian one can say на меня напала тоска, отчаяние, сомнение ('I was seized with melancholy, despair, doubt'), etc., but the collocations напала радость, надежда ('seized with joy, hope') are impossible; that is to say, the Russian verb cannot be combined with nouns denoting pleasurable emotions.
About this work: Methods and Procedures of Lexicological Analysis