I've been working on a "whig history" (WH) of generative grammar. A WH is a kind of rational reconstruction which, if doable, serves to reconstruct the logical development of a field of inquiry. WHs, then, are not “real” histories. Rather, they present the past as “an inevitable progression towards ever greater…enlightenment.” Real history is filled with dead ends, lucky breaks, misunderstandings, confusions, petty rivalries, and more. WHs are not. They focus on the “successful chain of theories and experiments that led to the present-day science, while ignoring failed theories and dead ends” (see here). The value of WHs is that they expose the cumulative nature of a given trajectory of inquiry. One sign of a real science is that it has a cumulative structure, and given that many think the history of Generative Grammar (GG) fails to have such a structure, many conclude that this tells against the GG enterprise. However, the "many" are wrong: GG has a perfectly respectable WH, and both empirically and theoretically its development has been cumulative. In a word, we've made loads of progress. But that is not the topic of this post. What is?
As I went about reconstructing the relation between current minimalist theory and earlier GB theory, I came to appreciate just how powerful the No Tampering Condition (NTC) really is (I know, I know, I should have understood this before, but, dim bulb that I am, I didn't). I understand the NTC as follows: the inputs to a given grammatical operation must be preserved in the outputs of that operation. In effect, the NTC is a conservation principle that says that structure can be created but not destroyed. Replacing an expression with a trace of that expression destroys (i.e. fails to preserve) the input structure in the output, and so the GB conception of traces is theoretically inadmissible in a minimalist theory that assumes the NTC (which, let me remind you, is a very nice computational principle and part of most (all?) current minimalist proposals).
The NTC has many other virtues as well. For example, it derives the fact that movement rules cannot "lower" and that movement (at least within a single rooted sub-"tree") is always to a "c-commanding" position. Those of you who have listened to any of the Chomsky lectures I posted earlier will understand why I have used scare quotes above. If you don't know why and don't want to listen to the lectures, ask David Pesetsky. He can tell you.
At any rate, the NTC also suffices to derive the GB Projection Principle and the MP Extension Condition. In addition, it suffices to eliminate trace theory as a theoretical option (viz. co-indexed empty categories that are residues of movement: [e]1). Why? Because traces cannot exist in the input to the derivation, and so, given the NTC, they cannot exist in the output either. Thus, given the NTC, the only way to implement the Projection Principle is via the Copy Theory. This is all very satisfying theoretically for the usual minimalist reasons. However, it also raises a question in my mind, which I would like to ask here.
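To make the copy/trace contrast concrete, here is a toy sketch (entirely my own illustration, with made-up representations; no one's actual formalism works with Python tuples) in which syntactic objects are nested tuples and the NTC is read as "the input object must survive intact as a subpart of the output":

```python
# Toy sketch (my own illustration, not a standard formalism): syntactic
# objects as nested tuples; the NTC read as "the input must occur intact
# somewhere inside the output."

def contains(obj, part):
    """True if `part` occurs intact somewhere inside `obj`."""
    if obj == part:
        return True
    if isinstance(obj, tuple):
        return any(contains(child, part) for child in obj)
    return False

# Input: a VP containing the object "what", i.e. [VP ate what]
vp = ("ate", "what")

# Copy theory: re-merge "what" at the root, leaving the original in place.
copy_output = ("what", vp)

# Trace theory: replace "what" inside VP with a trace, then re-merge.
trace_output = ("what", ("ate", "t1"))

print(contains(copy_output, vp))   # True  -- input preserved, NTC respected
print(contains(trace_output, vp))  # False -- input destroyed, NTC violated
```

The point of the sketch is just that trace substitution rewrites the inside of the input object, while copying only builds on top of it.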
Why doesn't the NTC rule out feature valuation? One of the current grammatical operations within MP grammars is AGREE. What it does is relate two expressions (heads, actually) in a Probe/Goal configuration, and the goal "values" the features of the probe. Now, the way I've understood this is that the Probe is akin to a property, something like P(x) (maybe with a lambdaish binder, but who really cares), and the goal serves to turn that 'x' into some value, turning P(x) into, say, P(phi) (if you want, via something like lambda conversion, but again who really cares). At any rate, here's my question: doesn't this violate the NTC? After all, the input to AGREE is P(x) and the output is, e.g., P(phi), so the input is not preserved in the output. Doesn't this violate a strict version of the NTC?
Note, interestingly, that feature checking per se is consistent with the NTC, as no feature changing/valuing need go on to "check" whether sets of features are "compatible." However, if I understand the valuation idea, it is thought to go beyond mere bookkeeping. It is intended to change the feature composition of a probe based on the feature composition of the goal. Indeed, it is precisely for this reason that phases are required to strip off the valued yet uninterpretable features before Transfer. But if AGREE changes feature matrices then it seems incompatible with the NTC.
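Here is a toy sketch of that contrast (again entirely my own gloss, with invented representations, not a worked-out theory of AGREE): checking can be a read-only compatibility test, while valuation by its nature replaces the input feature specification:

```python
# Toy sketch (my own gloss, not an official formalization of AGREE):
# a probe modeled as a dict whose phi-slot is unvalued, written "x".

probe = {"phi": "x"}    # the probe's unvalued feature: P(x)
goal  = {"phi": "3sg"}  # the goal's valued feature

def check(p, g):
    """Checking: merely test compatibility; neither input is altered."""
    return p["phi"] in ("x", g["phi"])

def value(p, g):
    """Valuation: produce a probe whose slot carries the goal's value."""
    return {**p, "phi": g["phi"]}

ok = check(probe, goal)
print(ok, probe)        # True {'phi': 'x'}  -- checking preserves P(x)

valued = value(probe, goal)
print(valued)           # {'phi': '3sg'}     -- P(x) has become P(3sg);
                        # the input specification is gone from the output
```

On this toy reading, `check` is NTC-compliant bookkeeping, while `value` is exactly the kind of input-destroying rewrite the NTC forbids.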
The same line of reasoning suggests that feature lowering is also incompatible with the NTC. To wit: if features really transfer from C to T or from v to V (whether by being copied from the higher head to the lower one, or copied to the lower and then deleted from the higher), then again the NTC in its strongest form seems to be violated.
So, my question: are theories that adopt feature valuation and feature lowering inconsistent with the NTC or not? Note, we can massage the NTC so that it does not apply to such feature "checking" operations. But then we could equally massage the NTC so that it does not prohibit traces. We can, after all, do anything we wish. For example, current theory stipulates that pair merge, unlike set merge, is not subject to Extension, viz. the NTC (though I think that Chomsky is not happy with this, given some oblique remarks he made in lecture 3). However, if the NTC is strictly speaking incompatible with these two operations, then it is worth knowing, as it would seem to be theoretically very consequential. For example, a good chunk of phase theory, as currently understood, depends on these operations, and were we to discover that they are incompatible with the NTC, this might (IMO, likely does) have consequences for Darwin's Problem.
So, all you thoroughly modern minimalists out there: what say you?