The constraints extend to more cases than just quantifiers and negatives. For example, the same constraints seem to define at least in part the limits of conjunction reduction.
(1) John claimed that he robbed the bank and he claimed that he shot Sam.
(2) John claimed that he robbed the bank and claimed that he shot Sam.
(3) John claimed that he robbed the bank and shot Sam.
In (1) and (2), it is understood that two claims have been made. In (3), it is understood that only one claim has been made. This suggests that (1) and (2) are derived from an underlying structure like (4), while (3) is derived from something like (5).
In (4) there are two instances of claim, while in (5) there is only one. The following question now arises: If conjunction reduction applies to (4) so that it yields (2), what is to keep it from applying further to yield (5)? If it applies freely to yield (5) from (4), we would get the incorrect result that (3) should be synonymous with (1).
Observe that in (4) and commands claim, but claim does not command and. In (5) the reverse is true: claim commands and, but not vice versa. To say that (5) cannot be derived from (4) is to say that the asymmetric command relation between and and claim, cannot be reversed in the course of a derivation. But that is the same as allowing constraint 2 to hold for and and claim.
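The command relation at work here is mechanical enough to check by machine. The following Python sketch is my own toy encoding, not part of the original argument; the tree shapes and labels are schematic stand-ins for (4) and (5). It takes A to command B iff the lowest S dominating A also dominates B:

```python
# Toy encoding (not from the text): A commands B iff the lowest S
# dominating A also dominates B.  Trees are (label, children) pairs;
# leaves are plain strings.

def paths_to(tree, target, path=()):
    """Yield the root-to-occurrence label path for each occurrence of target."""
    label, children = tree
    path = path + (label,)
    for child in children:
        if child == target:
            yield path
        elif isinstance(child, tuple):
            yield from paths_to(child, target, path)

def commands(tree, a, b):
    """True if some occurrence of a commands some occurrence of b."""
    for pa in paths_to(tree, a):
        s_depth = max(i for i, lab in enumerate(pa) if lab == "S")
        for pb in paths_to(tree, b):
            # b is dominated by a's lowest S iff the paths share that prefix
            if pb[:s_depth + 1] == pa[:s_depth + 1]:
                return True
    return False

# Schematic stand-in for (4): two conjoined claim-clauses under the top S.
tree4 = ("S", [("S", [("V", ["claim"]), ("NP", ["S1"])]),
               "and",
               ("S", [("V", ["claim"]), ("NP", ["S2"])])])

# Schematic stand-in for (5): one claim over an embedded conjunction.
tree5 = ("S", [("V", ["claim"]),
               ("NP", [("S", [("S", ["S1"]), "and", ("S", ["S2"])])])])

print(commands(tree4, "and", "claim"), commands(tree4, "claim", "and"))  # True False
print(commands(tree5, "claim", "and"), commands(tree5, "and", "claim"))  # True False
```

Run on these schematic trees, the checker confirms the asymmetry described above: in (4) and commands claim but not conversely, while in (5) the reverse holds.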
This constraint will hold for other predicates like claim as well, though this is not obvious and to discuss it here would take us far afield.
It should also be noted that, although constraint 2 holds for and and verbs like claim, constraint 1 does not. For example, consider
(6) (a) John claimed something outrageous and something quite reasonable.
(b) John claimed something outrageous and he claimed something quite reasonable.
(6 a) involves two claims, and so is synonymous with (6 b). Let us assume then that (6 a) is derived from (6 b) by conjunction reduction. Prior to conjunction reduction, as in (6 b'), and commands the two occurrences of claim. After conjunction reduction, claim precedes and. If claim and and obeyed constraint 1, this would be impossible. Exceptions to constraints, like this one, are not rare.
The fact that claim and and obey constraint 2, but not constraint 1, can be made the basis for an explanation of a rather remarkable minimal pair:
(7) (a) John claimed that he robbed the bank and that Sam shot him.
(b) John claimed that he robbed the bank and Sam shot him.
In (7 a) John is making two claims, while in (7 b) he is making one. How can we account for this fact? Let us assume, as is usual, that the complementizer that is Chomsky-adjoined to the S of an object complement, as in (8 a). Other alternatives are given in (8 b) and (8 c); since they will yield exactly the same result, there is no need to choose among (8 a, b, and c) for the sake of this argument. (8 a) seems at present to be the least problematic alternative:
In order to explain the sentences in question, we need to show that the transformation inserting the complementizer that inserts only one occurrence of that complementizer per noun phrase complement, even if the complement S is a conjunction. This can be shown readily, if one looks at sentences of the form:
(9) NP1 and NP2 are both correct.
‘Both’ can occur in sentences of this form if there are exactly two NPs conjoined in the subject. Thus, there are no sentences of the form (10) or (11):
(10) *NP1 are both correct.
(11) *NP1 and NP2 and NP3 and NP4 are both correct.
Let NP1 and NP2 in (9) each contain a pair of conjoined sentences as its complement.
Observe the following facts:
(13) *That Sam robbed the bank and Bill shot him are both correct.
(14) That Sam robbed the bank and Bill shot him and that Sally got pregnant and her mother spanked her are both correct.
(15) *That Sam robbed the bank and that Bill shot him and that Sally got pregnant and that her mother spanked her are both correct.
These sentences indicate that the rule of complementizer placement may introduce at most one occurrence of that for each noun phrase complement. (13) provides evidence that that is not subject to conjunction reduction. Since the sentence
(16) That Sam robbed the bank and that Bill shot him are both correct.
is grammatical, and since (13) would result if conjunction reduction applied to that, it appears that conjunction reduction may not apply to that. Let us now return to:
(7) (a) John claimed that he robbed the bank and that Sam shot him.
(b) John claimed that he robbed the bank and Sam shot him.
Since (7 a) contains two occurrences of that, it must contain two noun phrase complements, and since (7 b) contains only one occurrence of that, it cannot contain two noun phrase complements.
The essential feature of the analysis in (7 a') is that (7 a) is represented as containing a noun phrase conjunction, not a sentence conjunction. Thus, (7 a) and (7 b) would differ in structure in that the former would contain an NP-conjunction, whereas the latter would contain an S-conjunction. Given this analysis, the difference in meaning between (7 a) and (7 b) is an automatic consequence of the fact that claim and and obey constraint 2, but not constraint 1. Since (7 a') does not contain an embedded S-conjunction, the only possible source would be that of (4), where and commands claim and two claims are indicated. Since constraint 1 does not apply to claim and and, such a derivation is possible. (7 b'), on the other hand, contains an embedded conjunction, and so has two conceivable sources: (4) and (5). But since and commands claim in (4) while claim commands and (but not vice versa) in (7 b'), such a derivation is ruled out by the fact that constraint 2 holds for and and claim. Thus, the only possible derivation for (7 b') would be from (5), which indicates only one claim. Thus, the difference in meaning between (7 a) and (7 b) is explained by the fact that and and claim obey constraint 2, but not constraint 1.
Constraint 2 works for or as well as for and, as the following examples show:
(17) (Either) John claimed that he robbed the bank or he claimed that he shot Sam.
(18) John either claimed that he robbed the bank or claimed that he shot Sam.
(19) John claimed that he either robbed the bank or shot Sam.
These sentences parallel (1)-(3). Thus, it should be clear that constraint 2 applies to or.
Constraint 1 holds for or, though not for and. Consider the following examples, pointed out by R. Lakoff [46]:
(20) Either you may answer the question or not.
(21) You may either answer the question or not.
(20) and (21) are not synonymous. (20) says that there are two possibilities: either it is the case that you are permitted to answer the question or it is not the case that you are permitted to answer the question. (20) exhausts the range of possibilities. (21), on the other hand, says that you are permitted the choice of answering or not answering. Lakoff points out that the difference in meaning can be accounted for, given Ross’ analysis of modals as verbs that take complements. And the difference between (20) and (21) is paralleled by the difference between (22) and (23), where there is an overt verb with a sentential complement.
(22) Either you are permitted to answer the question, or not.
(23) You are permitted either to answer the question or not.
She proposes that (20) and (21) differ in structure as do (24) and (25):
In (24) or commands may and not vice versa. In (25) may commands or and not vice versa.
However, in derived structure this asymmetry of command can be neutralized.
In (26) and (27), the S’s have been pruned, and so may and either command each other in both cases. The asymmetric command relation of (24) and (25) is neutralized. However, (26) you (either) may answer the question or not has the meaning of (24), while (27) you may either answer the question or not has the meaning of (25). The generalization is that if either precedes may in derived structure, it must command may in underlying structure, and conversely. Thus, we have exactly the situation of constraint 1. And under Ross’ analysis of modals the fact that the constraints work for modals follows from the fact that the constraints work for the corresponding verbs taking complements (e.g., permit).
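Constraint 1, so stated, is a checkable relation between underlying command and derived word order. Below is a rough Python rendering of it; the sets and word lists are my own schematic assumptions, not the author's formalism. A derivation is licensed only if the leftmost of the two items in derived structure is the one that asymmetrically commands the other in underlying structure:

```python
# My schematic rendering of constraint 1: if Li precedes Lj in derived
# (shallow) structure, then Li must asymmetrically command Lj in
# underlying structure, and conversely.

def satisfies_constraint_1(underlying_commands, derived_order, li, lj):
    """underlying_commands: set of (a, b) pairs meaning 'a asymmetrically
    commands b' in underlying structure.  derived_order: left-to-right
    items of the derived structure."""
    if derived_order.index(li) < derived_order.index(lj):
        return (li, lj) in underlying_commands
    return (lj, li) in underlying_commands

# (24): 'either/or' commands 'may'.  (26) keeps either before may: licensed.
assert satisfies_constraint_1(
    {("either", "may")},
    ["you", "either", "may", "answer", "the", "question", "or", "not"],
    "either", "may")

# (27) puts may before either, so it cannot derive from (24): blocked.
assert not satisfies_constraint_1(
    {("either", "may")},
    ["you", "may", "either", "answer", "the", "question", "or", "not"],
    "either", "may")
```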
Adverbs which are understood as predicates that take complements show the same property. Compare (28) and (29):
(28) (a) It isn’t obvious that John is a communist.
(b) It is obvious that John isn’t a communist.
(29) (a) John isn’t obviously a communist.
(b) John obviously isn’t a communist.
(28a and b) have underlying structures like:
If we assume that obviously is derived from obvious by a rule of adverb-lowering, and if it is assumed that obvious is one of those predicates taking a complement for which the constraints of section 2 hold, then it follows that (29 a) should have the meaning of (28 a), and that (29 b) should have the meaning of (28 b). Since not precedes obvious in (29 a), it must command obvious in underlying structure (30 a). Since obvious precedes not in (29 b), it must command not in underlying structure, as in (30 b).
The word only also obeys the constraints, though this follows automatically from the meaning of only. Only Bill means Bill and no one other than Bill. Since the latter expression contains a quantifier, we would expect the constraints of the previous section to hold. They do.
(31) (a) John didn’t hit Bill and no one else.
(b) Bill and no one else wasn’t hit by John.
(32) (a) John didn’t hit only Bill.
(b) Only Bill wasn’t hit by John.
The (a) sentences contain the reading It wasn’t the case that there was no one other than Bill that John hit. The (b) sentences contain the reading There wasn’t anyone other than Bill that John didn’t hit. This is exactly what the constraints predict.
It should be noted again that the difference between subject and non-subject position in the clause has nothing to do with these constraints. Sentences like
(33) I talked to few girls about only those problems.
(34) I talked about only those problems to few girls.
show the predicted difference in meaning even though both few and only are in the VP in both examples.
It should be clear from these examples that at least some global derivational constraints do not serve just to limit the scope of application of a single rule, but rather can limit the applicability of a whole class of rules - in this case, quantifier-lowering, conjunction-reduction, and adverb-lowering, together with rules like passive that interact with them. This result is similar to Ross’ findings [65] that certain constraints hold for all movement rules of a certain form, not just for individual rules. Similar results were found by Postal [57] in his investigation of the crossover principle.
It has been known for some time that global derivational constraints have exceptions, as well as being subject to dialectal and idiolectal variation. Consider, for example, Ross’ constraints on movement transformations. The coordinate structure constraint, if violated at any point in a derivation, yields ill-formed sentences, e.g., *Someone and John left, but I don’t know who and John left. However, if the coordinate node is later deleted by some transformation, the sentence may be acceptable, e.g., Someone and John left, but I don’t know who. Thus, the coordinate structure constraint applies throughout derivations, but with the above exception, which takes precedence over the constraint. Ross’ constraints are also subject to dialectal and idiolectal variation. For the majority of English speakers, sentences like John didn’t see the man who had stolen anything and John didn’t believe the claim that anyone left are ill-formed, as predicted by Ross’ complex NP-constraint. However, for a great many speakers the latter sentence is grammatical, and for some speakers even the former sentence is grammatical. So, it is clear that the global derivational constraints discovered by Ross are subject to such variation.
It should not be surprising that the global derivational constraints also have a range of exceptions and are subject to dialectal and idiolectal variation. For example, constraint 2 does not hold for the rule of not-transportation, the existence of which has been demonstrated by R. Lakoff [45]. Thus, when Li is a nonlogical predicate and Lj is a negative, the constraint does not hold. Similarly, constraint 1 does not hold when Li is a negative and Lj is an auxiliary verb. Thus, John cannot go can mean It is not the case that John can go. Like Ross’ constraints, constraints 1 and 2 admit of a great deal of dialectal and idiolectal variation. There are a great many people (more than one-third of the people I’ve asked) for whom constraint 1 does not hold for quantifiers and negatives. For such people, Few books were read by many men is ambiguous, as is The target wasn’t hit by many arrows. Other sorts of differences also show up in the constraints. For example, for some people constraints 1 and 2 mention surface structure, not just shallow structure. Individuals with such constraints will find that I dissuaded Bill from dating many girls can mean There are many girls that I dissuaded Bill from dating. Guy Carden [5] has pointed out that some speakers differ as to whether a constraint can hold at one level as opposed to holding throughout the grammar. McCawley [50] has shown that for some speakers the no-double-negative constraint holds only late in the grammar (at shallow structure). These speakers can get sentences like: John doesn’t like Brahms and Bill doesn’t like Brahms, but not Sam - he loves Brahms. (Before deletion, this would have the structure underlying *Sam doesn’t not love Brahms.) However, many people find such sentences impossible. Carden has pointed out that this could be accounted for if the no-double-negative constraint held throughout the grammar down to shallow structure for such speakers.
With other speakers, the same constraint seems to hold over other segments of the derivation (cf. Carden [5]).
Such facts seem to show that constraints 1 and 2 are the norm from which individuals may vary. It is not clear at present how such variations from the norm can best be described, and it would seem that the basic theory will eventually have to be revised to account for such variations on basic constraints.
As the above discussion, as well as those of Postal [57] and Ross [65] have shown, global derivational constraints are pervasive in grammar. For example, interactions between transformational rules and presuppositions are handleable in a natural way using derivational constraints. Consider Kim Burt’s observation (cf. Lakoff [43]) that future will can optionally delete if it is presupposed that the speaker is sure that the event will happen. Suppose the rule of will-deletion is given by the local derivational constraint (C1 C2). Suppose tree-condition C3 describes the presupposition in question. Then Burt’s observation can be stated in the form:
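A constraint of this kind, a local rule (C1, C2) that may apply only when a presuppositional condition C3 holds, can be sketched in code. The Python below is my own toy encoding, not Burt's actual rule: the sentence is treated as a flat word list and the presupposition as a labeled token, both simplifying assumptions.

```python
# Sketch (my encoding, not Burt's rule): a local derivational constraint
# (C1, C2) that may apply only when presupposition C3 holds.

def make_gated_rule(c1_matches, apply_c2, c3_presupposed):
    """Build a rule: the structural change C1 -> C2 applies only under C3."""
    def rule(structure, presuppositions):
        if c1_matches(structure) and c3_presupposed(presuppositions):
            return apply_c2(structure)
        return structure
    return rule

# will-deletion: drop future 'will' only if it is presupposed that the
# speaker is sure the event will happen.
will_deletion = make_gated_rule(
    c1_matches=lambda words: "will" in words,
    apply_c2=lambda words: [w for w in words if w != "will"],
    c3_presupposed=lambda ps: "speaker-sure-event-happens" in ps,
)

s = ["the", "Yankees", "will", "play", "tomorrow"]
print(will_deletion(s, {"speaker-sure-event-happens"}))  # 'will' deleted
print(will_deletion(s, set()))                           # 'will' retained
```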
Presuppositions of coreferentiality can be treated in the same way. For example, consider the shallow-structure constraint that states that a pronoun cannot both precede and command its antecedent. Suppose C1 states that two NPs are coreferential, C2 states that the pronoun precedes the antecedent, and C3 states that the pronoun commands the antecedent. The constraint would then be of the form:
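This filter, too, is mechanical: with coreference (C1) taken as given, a shallow structure is marked ill-formed when the pronoun both precedes (C2) and commands (C3) its antecedent. A minimal Python sketch follows (mine, not the author's notation; command facts are supplied as a set rather than computed from trees):

```python
# My sketch of the filter: with coreference (C1) given, a shallow
# structure is ill-formed if the pronoun both precedes (C2) and
# commands (C3) its antecedent.

def wellformed(shallow_order, command_pairs, pronoun, antecedent):
    precedes = shallow_order.index(pronoun) < shallow_order.index(antecedent)
    commands = (pronoun, antecedent) in command_pairs
    return not (precedes and commands)

# 'He left before John spoke' (he = John): precedes and commands -> out.
assert not wellformed(["he", "left", "before", "John", "spoke"],
                      {("he", "John")}, "he", "John")

# 'Before John spoke, he left': commands but does not precede -> fine.
assert wellformed(["before", "John", "spoke", "he", "left"],
                  {("he", "John")}, "he", "John")
```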
Another phenomenon that can be handled naturally in the basic theory is Halliday’s [24], [25] notion of ‘focus’. Halliday [24] describes focus in the following terms:
the information unit, realized as the tone group, represents the speaker’s organization of the discourse into message units: the information focus, realized as the location of the tonic, represents his organization of the components of each such unit such that at least one such component, that which is focal, is presented as not being derivable from the preceding discourse. If the information focus is unmarked (focus on the final lexical item), the nonfocal components are unspecified with regard to presupposition, so that the focal is merely cumulative in the message (hence the native speaker’s characterization of it as ‘emphatic’). If the information focus is marked (focus elsewhere than on the final lexical item), the speaker is treating the non-focal components as presupposed. (Halliday [24], p. 8).
Halliday’s account of focus has been adopted by Chomsky [10]. Assume for the moment that Halliday’s account of focus as involving the location of stress on surface structure constituents were correct. Then the content of the sentence would be divided into a presupposed part and a part which is ‘new’ or focused upon. Recall that derivational constraints enable one to trace the history of nodes throughout a derivation. This is necessary if one is to pick out which parts of P1 correspond to which surface structure constituents. Given such a notion, the correspondence between PR and FOC, which are part of the semantic representation, and the corresponding surface constituents can be stated by a global derivational constraint. Thus, the Halliday-Chomsky notion of ‘focus’ can be approached naturally within the basic theory. What is needed to make such representations precise is a precise definition of the notion ‘semantic content corresponding to derived structure constituents’.
Of course, the Halliday-Chomsky account of focus is not quite correct. For example, Halliday and Chomsky assume that the constituent bearing main stress in the surface structure is the focus, and therefore that the lexical items in that constituent provide new rather than presupposed information. This is not in general the case. Consider (37):
(37) The TALL girl left.
Here the main stress is on TALL, which should be the focus according to Halliday and Chomsky, and should therefore be new, not given, information. However, in (37) TALL is understood as modifying the noun in the same way as the restrictive relative clause who was tall. Since restrictive relative clauses are presupposed, it follows that in (37) it is presupposed, not asserted, that the girl being spoken of was tall. Thus, the meaning of the lexical item TALL cannot be new information. Another possible candidate for focus might be the whole NP the tall girl. But none of the lexical content of this NP is new information, since it is presupposed that the individual under discussion exists, it is presupposed that that individual is a girl, and it is presupposed that she is tall. None of this is new. In (37) it is presupposed that some girl left, and it is presupposed that some girl is tall. The new information is that the girl who was presupposed to have left is coreferential with the girl who was presupposed to be tall. The semantic content of the focus is an assertion of coreferentiality. In this very typical example of focus, the lexical-semantic content of the surface structure constituent bearing main stress has nothing whatever to do with the semantic content of the focus.
So far, we have assumed that the Halliday-Chomsky account of focus in terms of surface structure constituents is basically correct. But this too is obviously mistaken. Consider (38):
(38) (a) John looked up a girl who he had once met in Chicago.
(b) John looked a girl up who he had once met in Chicago.
(a') and (b') have very different surface structure constituents. Assuming that main stress falls on ‘Chicago’ in both cases, Chomsky and Halliday would predict that these sentences should be different in focus possibilities and in corresponding presuppositions, and that therefore they should answer different questions and have quite different semantic representations. But it is clear that they do not answer different questions and do not make different presuppositions. Thus, it is clear that focus cannot be defined purely in terms of surface structure constituents. Rather, it seems that derived structure at some earlier point in derivations is relevant.
These difficulties notwithstanding, it is clear that the phenomenon of focus does involve global derivational constraints of some sort involving derived structure. Halliday certainly deserves credit for the detailed work he has done in this area, despite the limitations of working only with surface structure. Generative semantics should provide a natural framework for continuing Halliday’s line of research.
Another notion which can be handled naturally within the framework of generative semantics is that of ‘topic’. Klima has observed that sentences like the following differ as to topic:
(39) (a) It is easy to play sonatas on this violin.
(b) This violin is easy to play sonatas on.
(c) Sonatas are easy to play on this violin.
(a) is neutral with respect to topic. (b) requires ‘this violin’ to be the topic, while (c) requires ‘sonatas’. There are of course predicates in English which relate topics to the things they are topics of. For example:
(40) (a) My story is about this violin.
(b) That discussion concerned sonatas.
The predicates ‘be about’ and ‘concern’ are two-place relations, whose arguments are a description of a proposition or discourse and the item which is the topic of that proposition or discourse. Thus, the (a), (b), and (c) sentences of (41) and (42) are synonymous with respect to topic as well as to the rest of their content:
(41) (a) Concerning sonatas, it is easy to play them on this violin.
(b) Concerning sonatas, they are easy to play on this violin.
(c) Sonatas are easy to play on this violin.
(42) (a) About this violin, it is easy to play sonatas on it.
(b) About this violin, it is easy to play sonatas on.
(c) This violin is easy to play sonatas on.
If the topics mentioned in the clause containing ‘concern’ or ‘about’ differ from the superficial subjects in these sentences, then there is a conflict of topics and ill-formedness results, unless it is assumed that the sentences can have more than one topic.
(43) ?*About sonatas, this violin is easy to play them on.
(44) ?*Concerning this violin, sonatas are easy to play on it.
These are well-formed only for those speakers who admit more than one topic in such sentences.
These considerations would indicate that the notion ‘topic’ of a sentence is to be captured by a two-place relation having the meaning of ‘concerns’ or ‘is about’. If the set of presuppositions contains such a two-place predicate whose arguments are P1 and some NP, then it will be presupposed that that NP is the topic of P1. Thus, the notion ‘topic’ may well turn out to be a special case of a presupposition. Since a semantic specification of ‘concerns’ and ‘is about’ is needed on independent grounds, it is possible that the special slot for TOP in semantic representation is unnecessary. Whether all cases of topic will turn out to be handleable in this way remains, of course, to be seen. Whichever turns out to be true, it is clear that the facts of (39) can be handled readily by derivational constraints. Assume that there is a rule which substitutes ‘this violin’ and ‘sonatas’ for ‘it’ in (39). Let (C1, C2) describe this operation. Let C3 describe the topic relation obtaining between P1 and the NP being substituted. Then the facts of (39) can be described by the following derivational constraint:
Note that this has exactly the form of the constraint describing the deletion of future will. Global derivational constraints linking transformations and presuppositions have this form. Of course, it may turn out to be the case that a more general characterization of the facts of (39) is possible, namely, that surface subjects in some class of sentences are always topics. In that case, there would be a derivational constraint linking presuppositions and surface structure. In any event, the theory of generative semantics seems to provide an adequate framework for further study of the notion ‘topic’.
Another sort of phenomenon amenable to treatment in the basic theory is lexical presupposition. As Fillmore (personal communication) has pointed out, ‘Leslie is a bachelor’ presupposes that Leslie is male, adult and human and asserts that he is unmarried. Similarly, ‘Sam assassinated Harry’ presupposes that Harry is an important public figure and asserts that Sam killed him. Thus, lexical insertion transformations for ‘bachelor’ and ‘assassinate’ must be linked to presuppositional information.
Thus far, most of the examples of global derivational constraints mention semantic representations in some way. However, this is not true in general. For example, Ross’ [65] constraints are purely syntactic. Another example of a purely syntactic global derivational constraint has been discussed by Harold King [38]. King noted that contraction of auxiliaries, as in ‘John’s tall’, ‘The concert’s at 5 o’clock’, etc., cannot occur when a constituent immediately following the auxiliary to be contracted has been deleted. For example:
(46) (a) Max is happier than Sam is these days.
(b) *Max is happier than Sam’s these days.
(47) (a) Rich though John is, I still like him.
(b) *Rich though John’s, I still like him.
(48) (a) The concert is this afternoon.
(b) The concert’s this afternoon.
(c) Tell John that the concert is in the auditorium this afternoon.
(d) Tell John where the concert is this afternoon.
(e) *Tell John where the concert’s this afternoon.
(f) Tell John that the concert’s this afternoon.
In (e) the locative adverb has been moved from after is; in (f) no such movement has taken place.
Since contraction is an automatic consequence of an optional rule of stress-lowering, the general principle is that stress-lowering on an auxiliary cannot take place if at any point earlier in the derivation any rule has deleted a constituent immediately following the auxiliary. Let (C1, C2) be the rule of stress-lowering for Auxj. Let C3 = Xi — Auxj — A — Xk, where A is any constituent, and C4 = Xi — Auxj — Xk.
The constraint is:
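King's generalization can also be checked mechanically, by scanning every pair of adjacent stages in a derivation and blocking contraction if the constituent right after the auxiliary ever disappears. The Python below is my own rough encoding, not King's formulation: tokens carry ids so the same auxiliary can be tracked across stages, and comparative deletion is modeled as simple token removal.

```python
# My encoding of King's constraint: contraction (stress-lowering) of an
# auxiliary is blocked if, anywhere earlier in the derivation, a rule
# removed the constituent immediately following that auxiliary.
# Tokens are (id, word) pairs so an auxiliary can be tracked across stages.

def can_contract(derivation, aux_id):
    for before, after in zip(derivation, derivation[1:]):
        ids_before = [tid for tid, _ in before]
        ids_after = [tid for tid, _ in after]
        if aux_id not in ids_before or aux_id not in ids_after:
            continue
        i = ids_before.index(aux_id)
        if i + 1 == len(ids_before):
            continue  # nothing followed the auxiliary at this stage
        follower = ids_before[i + 1]
        j = ids_after.index(aux_id)
        if j + 1 == len(ids_after) or ids_after[j + 1] != follower:
            return False  # the following constituent was removed
    return True

# (46): 'Max is happier than Sam is happy' -> comparative deletion
# removes 'happy', the constituent after the second 'is'.
stage1 = [(1, "Max"), (2, "is"), (3, "happier"), (4, "than"),
          (5, "Sam"), (6, "is"), (7, "happy")]
stage2 = [t for t in stage1 if t[0] != 7]

print(can_contract([stage1, stage2], 6))  # False: *'than Sam's'
print(can_contract([stage1, stage2], 2))  # True:  'Max's happier'
```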
A wide range of examples of global derivational constraints not mentioning semantic representation will be discussed in Lakoff [43]. The redundancy rules discussed by R. Lakoff [44] are further examples of this sort. The exact nature and extent of global derivational constraints is, of course, to be determined through future investigation. It should be clear, however, that a wide variety of such constraints do exist. Thus, the basic theory, in its account of global derivational constraints, goes far beyond the standard theory and the Aspects theory, which included only a very limited variety of such constraints.
The basic theory is, of course, not obviously correct, and is open to challenge on empirical issues of all sorts. However, before comparing theories of grammar, one should first check to see that there are empirical differences between the theories. Suppose, for example, one were to counterpose to the basic theory, or generative semantics, an ‘interpretive theory’ of grammar. Suppose one were to construct such an interpretive theory in the following way. Take the class of sequences of phrase-markers (P1, ..., Pi, ..., Pn) where all lexical insertion rules occur in a block between P1 and Pi and all upward-toward-the-surface cyclic rules apply after Pi. Call Pi ‘deep structure’. Assume that P1, ..., Pn are limited only by local derivational constraints, except for those global constraints that define the cycle and rule ordering. Call P1, ..., Pn the ‘syntactic part’ of the derivation. Assume in addition that semantic representation SR = (P-m, PR, TOP, FOC, ...), where P-m is a phrase-marker in some ‘semantically primitive’ notation, as suggested by Chomsky [10] in his account of the ‘standard theory’. Then a full derivation will be a sequence of phrase-markers:
P-m, ..., P-j, ..., P0, P1, ..., Pi, ..., Pn
Call P0, ..., P-m the semantic part of the derivation. Assume that the sequences of phrase-markers P0, ..., P-j are defined by local derivational constraints and global derivational constraints that do not mention any stage of the derivation after Pi, the ‘deep structure’. Call these constraints ‘deep structure interpretation rules’. Assume that the sequences of phrase-markers P-j, ..., P-m are defined by local derivational constraints paired with global derivational constraints that may mention Pi and Pn as well as P-j, ..., P-m, PR, TOP, FOC, etc. Call these constraints ‘surface structure interpretation rules’. (If such global constraints may mention not only Pi and Pn, but all points in between, then we will call them ‘intermediate structure interpretation rules’.) It should be clear that such an ‘interpretive theory of grammar’ is simply a restricted version of the basic theory. One can look at the deep structure interpretation rules as operations ‘going from’ P1 to P-j, which are able to ‘look back’ only as far as Pi. And we could look at the surface structure interpretation rules as operations going from the ‘output’ of the deep structure interpretation rules, P-j, to SR, while being able to ‘look back’ to Pi and Pn. However, as Chomsky [10] points out, the notion of ‘directionality’ is meaningless, and so there is no empirical difference between these operations and derivational constraints. Thus, such ‘interpretive theories’ are no different in empirical consequences from the basic theory, restricted in the above way, provided that such interpretive theories assume that semantic representations are of the same form as phrase-markers or are notational variants thereof. The only empirical differences are the ways in which the basic theory is assumed to be constrained, for example, the question as to whether levels like Pi, P1, and P-j exist.
As we saw above, there is reason to believe that a level Pi does not exist, and no one has ever given any reasons for believing that a level P-j exists, that is, that ‘deep structure interpretation rules’ are segregated off from ‘surface structure interpretation rules’.
So far, no interpretive theory this explicit has been proposed. The only discussion of what might be called an interpretive theory which goes into any detail at all is given by Jackendoff [33], who discusses both surface and intermediate structure interpretation rules. However, Jackendoff explicitly refuses to discuss the nature of semantic representation and what the output of his interpretive rules is supposed to look like, so that it is impossible to determine whether his interpretive theory when completed by the addition of an account of semantic representation will be simply a restricted version of the theory of generative semantics. Jackendoff claims that semantic representations are not identical to syntactic representations ([33] p. 2), but he does not discuss this claim. However, the empirical nature of the issue is clear: Are Jackendoff’s interpretation rules simply notational variants of derivational constraints? Do his rules map phrase-markers into phrase-markers? (The only examples he gives do, in fact, do this.) Will the output of his rules be phrase-markers, or notational variants thereof? Of course, such questions are unanswerable in the absence of an account of the form of his rules and the form of their output.
Although Jackendoff does not give any characterization of the output of his envisioned interpretation rules, he does discuss a number of examples in terms of the vague notions ‘sentence-scope’ and ‘VP-scope’. Many of the examples he discusses overlap with those discussed above in connection with global derivational constraints 1 and 2. For example, he discusses sentences like ‘Many of the arrows didn’t hit the target’ and ‘The target wasn’t hit by many of the arrows’, claiming that the difference in interpretation can be accounted for by what he calls a difference in scope, which boils down to the question of whether the element in question is inside the VP or not. If he were to provide some reasonable output for his rules, then his scope-difference proposal might be made to match up with those subcases of constraint 1 where Li is in subject position and Lj is dominated by VP. The overlap is due to the fact that the subject NP precedes VP. However, there are certain crucial cases which decide between constraint 1 and the extended Jackendoff proposal, namely, cases where the two elements in question are both in the VP. Since they would not differ in VP-scope, the Jackendoff proposal would predict that the relative order of the elements should not affect the meaning. Constraint 1, however, would predict a meaning difference just as in the other cases, where the leftmost element in shallow structure commanded the other element in semantic representation. We have already seen some examples of cases like this:
(50) (a) John talked to few girls about many problems.
(b) John talked about many problems to few girls.
(51) (a) I talked to few girls about only those problems.
(b) I talked about only those problems to few girls.
These sentences show the meaning difference predicted by constraint 1, but not by Jackendoff’s scope-difference proposal. Other examples involve adverbs like carefully, quickly, and stupidly, which he claims occur within the scope of the VP when they have a manner interpretation (as opposed to sentence adverbs like evidently, which he says have sentence-scope and are not within the VP). Since Jackendoff permits some adverbs like stupidly to have both VP-scope and sentence-scope with differing interpretations, all of the following examples will contain the sentence adverb evidently just to force the VP-scope interpretation for the other adverbs, since a sentence may contain only one sentence adverb.
(52) (a) John evidently had carefully sliced the bagel quickly.
(b) John evidently had quickly sliced the bagel carefully.
(53) (a) John evidently had carefully sliced few bagels.
(b) John evidently had sliced few bagels carefully.
(54) (a) John evidently had stupidly given none of his money away.
(b) John evidently had given none of his money away stupidly.
In each pair, both of the relevant words (carefully and quickly in (52), carefully and few in (53), stupidly and none in (54)) would be within the scope of the VP according to Jackendoff, and so, according to his theory, the (a) and (b) sentences should be synonymous. They obviously are not, and the difference in their meaning is predicted by constraint 1. Thus, constraint 1 handles a range of cases that Jackendoff’s scope-difference proposal inherently cannot handle.2
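The competing predictions can be made concrete in a toy model. Everything below, including the function names, is my own illustration rather than anything in the original text: constraint 1 is modeled as tying relative scope to left-to-right order in shallow structure, while a pure VP-scope account records only whether each element is inside the VP, and so assigns no relative scope to two VP-internal elements.

```python
# Toy sketch (my own construction, not from the text) contrasting the two
# proposals on sentences like (50a/b), where both logical elements ('few'
# and 'many') are inside the VP.

def scope_by_constraint_1(shallow_order):
    """Constraint 1, as modeled here: the leftmost logical element in
    shallow structure commands the others in semantic representation,
    so surface order determines scope order."""
    return list(shallow_order)

def scope_by_vp_scope(shallow_order, in_vp):
    """A VP-scope account, as modeled here: only whether each element is
    inside the VP is recorded, so VP-internal elements are unordered."""
    return frozenset(e for e in shallow_order if in_vp[e])

# (50a) 'John talked to few girls about many problems'
a = scope_by_constraint_1(["few", "many"])
# (50b) 'John talked about many problems to few girls'
b = scope_by_constraint_1(["many", "few"])
print(a != b)  # True: constraint 1 predicts a meaning difference

in_vp = {"few": True, "many": True}
print(scope_by_vp_scope(["few", "many"], in_vp)
      == scope_by_vp_scope(["many", "few"], in_vp))
# True: the VP-scope account predicts (50a) and (50b) are synonymous
```

The point of the sketch is only that the VP-scope representation collapses the (a) and (b) orders, while the constraint-1 representation keeps them distinct, matching the observed meaning difference.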
But Jackendoff’s proposal is inadequate in another respect as well. In order to generate a sentence like ‘Not many arrows hit the target’ he would need a phrase-structure rule expanding determiner as an optional negative followed by a quantifier (Det → (NEG) Q). The meaning of the NEG relative to the quantifier would be given by his sentence-scope interpretation rule, since ‘not many’ is part of the subject ‘not many arrows’ in the above sentence. This interpretation rule makes no use of the fact that not happens to precede many in this example (and is interpreted as commanding many), since Jackendoff’s interpretation rule would depend in this case on subject (sentence-scope) position, not on the left-to-right order of negative and quantifier. Thus, in Jackendoff’s treatment, it is an accident that NEG happens to precede the quantifier with this meaning. Jackendoff’s scope rule would give exactly the same result if the NEG had followed the quantifier within the subject, that is, if the impossible *many not arrows existed. Thus, Jackendoff’s phrase-structure rule putting the NEG in front of the quantifier misses the fact that this order is explained by constraint 1.
On the whole I would say that the discussions of surface and intermediate structure interpretation rules found in Chomsky [10], Jackendoff [33] and Partee [55] do not deal with the real issues. As we have seen, such rules are equivalent to transformations plus global derivational constraints, given the assumption that semantic representations can be given in terms of phrase-markers. We know that transformations are needed in any theory of grammar, and we know that global derivational constraints are also needed on independent grounds, as in rule ordering, Ross’ constraints [65], R. Lakoff’s redundancy rules [44], Harold King’s contraction cases [38], and the myriad of other cases discussed in Lakoff [43]. Thus, surface and intermediate structure interpretation rules are simply examples of derivational constraints, local and global, which are needed independently. The real issues raised in such works are: (i) Can semantic representation be given in terms of phrase-markers or a notational variant? (ii) Is there a level of ‘deep structure’ following lexical insertion and preceding all cyclic rules? (iii) What are the constraints that hold at the levels of shallow structure and surface structure? These are empirical questions. Question (i) is discussed in Lakoff [43] (forthcoming), where it is shown that, to the limited extent to which we know anything about semantic representations, they can be given in terms of phrase-markers; (ii) was discussed in the previous section and will be discussed more thoroughly in the following section; (iii) has been discussed in some detail by Perlmutter [56] and Ross [65]. It seems to me that many of the regularities concerning nominalizations that have been noted by Chomsky, and other low-level regularities noted by Jackendoff [33] and Emonds [14], are instances of constraints on shallow or surface structure.
1 I assume that either arises as follows. The underlying structure of a disjunction is:
The rule of or-copying yields:
Then the leftmost or changes to either. In initial position in a sentence, either optionally deletes. All ors except the last optionally delete. And works in a similar fashion.
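The derivation described in this footnote is mechanical enough to sketch as a toy procedure. The function and flag names below are my own, the optional steps are modeled as boolean flags, and the missing tree diagrams are flattened into word lists; this is an illustration of the rule sequence, not the author's formalism.

```python
# Toy sketch (my own construction) of the footnote's either/or derivation:
# 1. or-copying places 'or' before every disjunct;
# 2. the leftmost 'or' becomes 'either';
# 3. sentence-initial 'either' optionally deletes;
# 4. all 'or's except the last optionally delete.

def derive_disjunction(disjuncts, drop_initial_either=False, drop_nonfinal_ors=False):
    # Step 1: or-copying
    words = []
    for d in disjuncts:
        words.append("or")
        words.append(d)
    # Step 2: leftmost 'or' -> 'either'
    words[0] = "either"
    # Step 3: optional deletion of sentence-initial 'either'
    # (in this flat model the disjunction is always sentence-initial)
    if drop_initial_either:
        words = words[1:]
    # Step 4: optional deletion of all 'or's but the last
    if drop_nonfinal_ors:
        or_positions = [i for i, w in enumerate(words) if w == "or"]
        if or_positions:
            last = or_positions[-1]
            words = [w for i, w in enumerate(words) if w != "or" or i == last]
    return " ".join(words)

print(derive_disjunction(["John", "Bill", "Harry"]))
# either John or Bill or Harry
print(derive_disjunction(["John", "Bill", "Harry"],
                         drop_initial_either=True, drop_nonfinal_ors=True))
# John Bill or Harry
```

With neither optional step applied the procedure yields the fully marked ‘either ... or ... or ...’ form; applying both yields the plain list with a single final ‘or’, as the footnote describes.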
2 The facts in (52)-(54) are independent of the phenomenon of attraction to focus discussed by Jackendoff [33]. If heavy stress is placed on ‘John’ in these sentences, ‘John’ is made the focus. This does not in any way affect the interpretation of the relative scopes of the adverbs, quantifiers and negatives mentioned in these sentences.