
Wednesday, December 18, 2013

Some big news

Here’s some big news: scientists are human! They cherish their own views and find it bracing when the views of their intellectual opponents fail to gain traction. Moreover, given the evidently hierarchical organization of scientific investigation (people do not enter the game on an equal footing, prestige is not evenly allocated across institutions or practitioners, journals are managed by people with skin in the game, friends like to help friends, careers big and small depend on the relative success of one’s ideas, etc.), it should come as no surprise that some try (and succeed) to prevent good ideas that gore sufficiently prominent oxen from seeing the light of day. How? By trashing the grant proposals that would fund them, by making it hard for them to get into print, by withholding conference invitations from the unfashionable, etc. The official line is that though scientists as individuals may lack the requisite impartiality to give a wide range of views a fair hearing, Science (btw, always capitalized) as an institution allows good ideas to bubble up to the surface, given enough time. In other words, Science encourages what scientists don’t.

Let’s assume that this optimistic tale is correct (but who knows, really). It is still intriguing to see just how strong the forces of intellectual self-interest can be and just how long “given enough time” can be. It can be quite a long time. A Nobel Prize eventually came to Barry Marshall and Robin Warren for their work showing ulcers to be bacterial in origin, but only after a long fight with a medical establishment that was sure they were due to stress (here). Recently, a little window into the ways that thought leaders try to protect their favorites opened in the NYT’s science section (see here). Nathan Myhrvold recently disputed some findings in dinosaurology in PLoS ONE. The paper he criticizes was published in Nature in 2001, the lead author being Gregory Erickson. Myhrvold claimed to find a flaw in how growth rates were computed. The details, though interesting, are not important here. What is intriguing is how Erickson responded. He tried to stop publication of Myhrvold’s paper, claiming that the paper, if published, would “hurt our field by producing inherently flawed growth curves, misrepresenting the work of others, and stands to drive a wedge between labs that are currently cordial with one another.” This did not prevent publication in PLoS ONE, an open access journal, but it surely also helped that Myhrvold is himself a really big deal (here). One can imagine how things might have gone had the issue been raised by a somewhat less prominent critic, say a graduate student submitting her first publication and looking for a “prestigious” venue, like Nature, in which to publish it.

I don’t wish to suggest that there is anything nefarious in all of this. It’s another “crooked timber of humanity” sort of thing. However, I suspect that this sort of political maneuvering is more efficacious in smaller fields like linguistics, especially given the paucity of publication outlets. Acceptance rates at our major journals are very low, and so it is easier to prevent unfortunate ideas from seeing the light of day.

On a more personal note, I have found that reviewers often mistake agreement with the position propounded for a criterion of favorable review. I don’t mean that I have personally felt this about my own work (though of course, whenever any paper of mine is rejected I am sure that it is because the reviewers have it in for me!), but I have been asked on more than one occasion, concerning a review I submitted, how I could recommend publication despite my professed view that the position being explored was wrong. It seems that (some) editors find it odd that one might think a paper incorrect and yet worthy of publication. I don’t have to agree with a paper to find it interesting and provocative. Indeed, its interest may lie precisely in advancing a view that I disagree with.

This problem is further exacerbated by the peer review process, I believe. Peer review pursues the lowest common denominator. It’s hard enough to convince one person that an off-beat idea is worth investigating. It’s harder still to convince five.[1] I once asked a friend of mine whether he believed that Einstein’s 1905 papers could have been published today. They were very quirky by the standards of the time: quite informal, from a patent clerk, sent to the most prestigious physics journal in the field. He thought it unlikely. Luckily, the journal’s editor was Max Planck, and editors then were more like impresarios than scholarly bureaucrats. Today, an editor who is too interesting risks eviction. I personally believe that this is what happened to Jacques Mehler at Cognition. His journal was too different (and IMO, by far the most interesting cog sci journal ever), a problem remedied by appointing Gerry Altmann to replace him.[2]

One of the hopes for the web is that it will allow debate over ideas to flourish without having to be squeezed through the narrow review portals of the major journals, something that Paul Krugman notes (here) has begun to happen in economics. Maybe something similar will occur in linguistics, and debates that are currently largely confined to the review process (where submitters and reviewers hash it out endlessly) will be held in public and given a good airing (but see here: Mark Liberman notes that most linguistics institutions lag far behind other scientific fields in disseminating new ideas; a holdover, perhaps, from our illustrious humanistic philological tradition? The timeless humanities don’t pursue “cutting edge research.”).

Let me end: I am curious to know how singular my impressions are. The above may just be the maunderings of an embittered isolate. Do you find that institutional Linguistics does a good job of finessing the all too natural self-serving tendencies of linguists? Do you find that the journals and granting agencies try to promote worthwhile debate and investigation? If not, can you think of how things might be improved? I’d love to know.



[1] The inherent empiricism of linguistics, its basic lack of interest in theory as opposed to data (no doubt another holdover from our philological past), is another serious impediment to encouraging new ways of looking at things. But this is a topic for another day.
[2] For a sense of how dramatic a change this was, take a look at Mehler’s farewell address in Cognition (here) coupled with Altmann’s initial editorial foray (here). The indicated concerns could not be more different.

Sunday, December 15, 2013

A helpful holiday tip

Rummaging through my favorite blogs I came across a video that will be unbelievably helpful to many of you: festive bottle in hand but caught out without a corkscrew. This has happened to me more than once and it really puts a damper on the festivities. There are many ways to finesse this problem (no opener) when beer bottles are the targets of opportunity. But I have never seen a way to de-cork an alluring bottle sans apparatus. One way of avoiding this kind of problem is to buy only screw tops, and there are many delicious wines that now come in this format. However, why cut yourself off from the delights of France and Italy just to avoid the chance of being corkscrewless? Better to watch this helpful educational video. Happy Holidays!!

Thursday, December 12, 2013

Simultaneous rule application: Help!!!

Lately I have been thinking about something and have gotten stuck. Very stuck. This post is a request for help. Here’s the problem. It concerns some current minimalist technology and how it fits with the bigger framework assumptions of the enterprise. Here’s what I don’t quite get: what it means to say that rules apply “all at once” at Spell Out. Let me elaborate.

A recent minimalist innovation is the proposal that congeries of operations apply simultaneously at Spell Out (SO). The idea of operations applying all at once is not in and of itself problematic, for it is easy to imagine that many rules can apply “in parallel.” However, when rules so apply, they are informationally encapsulated in the sense that the output of rule A does not condition the application of rule B. ‘Condition’ here means that A neither feeds nor bleeds B’s application. When rules do feed and bleed one another, then the idea that they all apply “all at once” is hard (at least for me) to understand, for if the application of B logically requires information about the output of A, how could they apply “in parallel”? But if they are not applying “in parallel,” what exactly does it mean to say that the rules apply “all at once”?
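To make the encapsulation point concrete, here is a minimal sketch in Python. The rewrite rules and the toy input are invented for illustration (this is no one’s actual grammatical proposal): applied sequentially, rule A feeds rule B; applied simultaneously, each rule sees only the original input, and the feeding relation simply cannot arise.

```python
# A toy sketch (mine, not anyone's actual proposal) of sequential vs.
# simultaneous rule application. Rule A rewrites 'a' as 'b'; rule B
# rewrites 'b' as 'c'; the input string is invented for illustration.

RULES = [("a", "b"),   # Rule A
         ("b", "c")]   # Rule B

def apply_sequentially(form, rules):
    """Each rule sees the previous rule's output, so A can feed B."""
    for old, new in rules:
        form = form.replace(old, new)
    return form

def apply_simultaneously(form, rules):
    """Every rule sees only the original input: the rules are
    informationally encapsulated, so no feeding or bleeding."""
    mapping = dict(rules)
    return "".join(mapping.get(ch, ch) for ch in form)

print(apply_sequentially("a", RULES))    # 'c': A's output feeds B
print(apply_simultaneously("a", RULES))  # 'b': B never sees A's output
```

If B’s application logically depends on A’s output, only the sequential regime can express it; that is exactly the puzzle about what “all at once” could mean when the operations feed and bleed one another.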

One answer to this question is that I am getting entangled in a preconception, namely that my confusion is the consequence of a “derivational” mindset (DM). The DM picture treats derivations like proofs, each line licensed by some rule applying to the preceding lines.[1] The “all at once” idea rejects this picture and suggests in its place a more model theoretic idiom in which sets of constraints together vet a given object for well-formedness. An object is well formed not because it is derivable by sequentially applied rules, but because, no matter how it was constructed, it meets all the relevant constraints. This should be familiar to those with GBish or OTish educations, for GB and OT are very much “freely generate and filter” kinds of models, the filters being the relevant constraints.[2] If this is correct, then the current suggestion about simultaneous rule application at SO is more accurately understood as a proposal to dump the derivational conception of grammar characteristic of earlier minimalism in favor of a constraint-based approach of the GB variety.
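For concreteness, here is a toy generate-and-filter sketch, again in Python. The generator and both constraints are hypothetical placeholders (they stand in for things like Principle A or Minimality, not for any actual proposal): candidates are produced freely, and each is then vetted against all the constraints at once.

```python
# A toy generate-and-filter sketch. Candidates are generated freely and
# then vetted, all at once, by a set of constraints. The generator and
# both constraints are invented placeholders, not any actual proposal.

from itertools import product

def generate(words):
    """Freely generate toy 'structures': orderings plus a choice of head."""
    for order in product(words, repeat=len(words)):
        for head in words:
            yield {"order": order, "head": head}

def no_repeats(obj):                 # placeholder constraint 1
    return len(set(obj["order"])) == len(obj["order"])

def head_first(obj):                 # placeholder constraint 2
    return obj["order"][0] == obj["head"]

CONSTRAINTS = [no_repeats, head_first]

# An object is well formed iff it satisfies every constraint; no
# constraint cares how (or in what order) the object was built.
well_formed = [c for c in generate(["the", "dog"])
               if all(con(c) for con in CONSTRAINTS)]
print(well_formed)
```

Note that the filters inspect only the finished object; the derivational history plays no role, which is precisely what makes conditions on derivations hard to restate in this idiom.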

Note that, to get something like this to work, we would need some way of constructing the objects that the constraints inspect. In GB this was the province of Phrase Structure Rules and ‘Move alpha.’ These two kinds of rules applied freely and generated the structures and dependencies that filters like Principles A/B/C, the ECP, Subjacency, etc. vetted. In an MP setting, it is harder to see how this gets done, at least to me. Recall that in an MP setting, there is only one “rule” (i.e. Merge). So, I assume that it would generate the relevant structures and these would be subsequently vetted. In effect, the operations of the computational system (i.e. Merge, Agree and anything else, e.g. Feature Transfer, Probing, ???) would apply freely and then the result would be vetted for adequacy. What would this consist in? Well, I assume checking the resultant structures for Minimality, Extension, Inclusiveness, etc. The problem, then, would be to translate these principles, which are easy enough to picture when thought of derivationally, into constraints on freely generated structures. I confess that I am not sure how to do this. Consider the Extension condition. How is one to state this as a well-formedness condition on derived structures rather than on the operations that determine how the structures are derived? Ditto on steroids for Derivational Economy (aka Merge over Move), or the idea that shorter derivations trump longer ones, or determining what constitutes a chain (which are the copies that form a chain?). Are there straightforward ways of coding these as output conditions on freely generated objects? If so, what are they?
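To see why Extension in particular resists restatement as an output condition, consider one last toy sketch (again mine, with invented labels). A root-extending derivation and a counter-cyclic one can produce the identical final object, so a filter that inspects only the output has nothing to distinguish them by; the condition seems to live in the derivational history, which the output does not record.

```python
# Trees are nested tuples; Merge simply pairs two objects.
def merge(a, b):
    return (a, b)

def countercyclic_merge(tree, target, new):
    """Insert `new` as sister to `target` inside `tree`, i.e. Merge
    below the root, violating Extension (which demands that Merge
    extend the root)."""
    if tree == target:
        return (tree, new)
    if isinstance(tree, tuple):
        return tuple(countercyclic_merge(t, target, new) for t in tree)
    return tree

# Cyclic derivation: every step extends the root.
cyclic = merge("DP1", merge("V", "DP2"))              # ('DP1', ('V', 'DP2'))

# Counter-cyclic derivation: DP2 is tucked in under an existing root.
counter = countercyclic_merge(merge("DP1", "V"), "V", "DP2")

print(cyclic == counter)  # True: same output, different derivational histories
```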

There is another, subsidiary, more conceptual concern. In early Minimalism output conditions (aka filters) were understood as Bare Output Conditions (BOCs). BOCs were legibility conditions that the interfaces, particularly CI, imposed on linguistic products. Now, BOCs were not intended to be linguistic, though they imposed conditions on linguistic objects. This means that whatever filter one proposes needs to have a BOC kind of interpretation. This was always my problem with, for example, the Minimal Link Condition (MLC). Do we really think that chains are CI objects and that “thoughts” impose locality conditions on their interacting parts? Maybe, but I’m dubious. I can see minimality arising naturally as a computational fact about how derivations proceed. I find it harder to see it as a reflection of how thoughts are constructed. However, whatever one thinks of the MLC, understanding Economy or Extension or Phase Impenetrability or Inclusiveness as BOCs seems, at least to me, more challenging still.

Things get even hairier, I think, when one considers the range of operations supposed to happen “all at once.” So, for example, if the features of T are inherited from C (as currently assumed) and I-merge is conditioned by Agree, then this suggests that DPs move to Spec T conditional on C having merged with T. But any such movement must violate Extension. The idea seems to be that this is not a problem if all the indicated operations apply simultaneously. But how is this accomplished? How can I-merge be conditioned (fed) by features that are only available under operations that require that a certain structure exists (i.e. C and “TP” have E-merged) but whose existence would preclude Merging the DP (doing so would violate Extension)? One answer: screw Extension. Is this what is being suggested? If not, what?

So, I throw myself on the mercy of those who have a better grasp of the current technology. What is involved in doing operations “all at once”? Are we dumping derivations and returning to a generate-and-filter model? What do we do with apparent feeding and bleeding relations and the dependencies that exploit these notions? Which principles are we to retain, and which dispense with? Extension? Economy? Minimality? How do the rules/operations work? Sample examples of “derivations” would be nice to see. If anyone knows the answer to all or any of these questions, please let me know.



[1] A strong version of this is that only the immediately preceding line can influence what happens “next.”
[2] OT’s filters are ranked, whereas GB filters were not.  However, I don’t believe that this difference makes a difference for my problem.

Tuesday, December 10, 2013

Chomsky: Fishing for compliments

Of course, just as I pooped all over the NYTs for relentlessly aiming to sink the Chomsky program in linguistics and cog-sci so as to discredit his politics, they ran an opinion piece by Stanley Fish (here) that is nothing if not sympathetic. Fish describes the recent Dewey Lectures that Chomsky gave at Columbia and, IMO, perfectly captures the tone of a Chomsky presentation. I have heard Chomsky deliver several such lecture series in my time, one of the earliest being the Woodbridge Lectures, also at Columbia, which became Rules and Representations. Fish gets exactly right what makes these Chomsky events so impressive.

First, the lack of histrionics. Chomsky's presentational style IS a bit boring, something Fish says he aims for (with success). He speaks in a low-key monotone and cracks very few jokes (and those he cracks are not that funny!). What marks his talks is a kind of relentless logical inevitability. Starting from very simple premises, he reaches what feel like very substantial conclusions by taking very small steps. Chomsky at his best (which IMO is the usual case) doesn't lecture at you. Rather, he is more like an intellectual tour guide taking you on a very exciting ride, showing you connections that, once he points them out, seem obvious and, as I said before, close to inevitable. As the tour is so wonderful, there is no need for the tour guide to be entertaining, and this allows the ideas to take center stage and shine. As Fish notes, this makes for a wonderful intellectual experience, the kind of thing that one would love to be the academic norm.

There are two other features of a good Chomsky talk that Fish also highlights.

First, it is always very well informed. He is prodigiously well read and is able to recruit what he has absorbed very quickly to make a point. He understands his critics extremely well, and so when he disagrees, the disagreements point to a serious divergence of views. I know of no other working intellectual who has engaged his critics so widely and persistently.

Second, he respects his audience. He answers virtually every question put to him in a pretty honest way. Not everyone is satisfied when he is done. But to a degree that I have found rare in these sorts of venues, he takes all questions seriously and tries to get to the intellectual nub of the matter. For such a gifted polemicist, IMO, Chomsky doesn't engage much in ad hominem attacks. He relentlessly argues against positions he finds weak, muddled, and based on unexamined presuppositions, but rarely does he cast aspersions on the moral or intellectual virtues of those who hold these views. As I've pointed out before, given how much everyone loves their own ideas best, being on the receiving end of a Chomsky critique is no picnic and can feel very personal even when it is only (only!) one's ideas that are being excoriated. Nonetheless, in my experience (which has been extensive, even on the receiving end!), Chomsky rarely engages in personal attacks in public debate. Like all humans, he has his views about the people he engages (I assume), but unlike most, when engaging intellectual concerns he sticks to the intellectual agenda. Any position is fair game. The people who hold them are not.

Some in the comments section of the NYT piece remarked how nice it would be were Columbia to make these lectures available online. Let me second that sentiment. I can think of few more pleasant ways to while away a snow day (like the one I am enjoying now).

Last point: as Colin Phillips noted in the e-mail bringing this Fish piece to my attention: before you let the NYTs off the hook, recall that this was an opinion piece, not a news report. Still, I do feel a bit sheepish. Don't worry, it won't last long.

Thursday, December 5, 2013

Huh?

It appears that all languages have the functional equivalent of ‘huh.’ Note, I said functional equivalent, for being a Canadian, I am not a huh-er but an eh-er (pronounced ‘Ay’). Still, as reported in the NYT, the LATimes and HuffPost, there is an article in a recent issue of PLoS ONE by some Max Planckers (MP) from Nijmegen who surveyed 10 different kinds of languages and found (as the NYT reports) “a remarkable similarity among the ‘Huhs?’ All the words had a single syllable, and they were typically limited to a low-front vowel, something akin to an ‘ah’ or an ‘eh’” (where this puts Canucks like me is unclear given that it’s quite definitely ‘Ay’ not ‘eh,’ but whatever, huh?, I mean Eh!). The MPs and the press report this as the discovery of yet another universal, this one based on its utility in communication (or, as Herb Clark is quoted as putting it: “You can’t have a conversation without the ability to make repairs. It is a universal need, no matter what kind of conversation you have.”). So another universal.

Of course, not my kind of universal, for, as I’ve pointed out before, an FL universal need not be manifest in every language (e.g. a language without movement will not display island effects), and something that shows up in every language need not be an FL universal. In short, Chomsky Universals are not Greenberg Universals.

I would not have yet again belabored this point but for a paragraph in the NYT’s report quoting Dr. Enfield (one of the authors), who thinks that this is yet another “challenge to the dominant view that language is primarily a matter of inborn grammatical structure…”.[1] A position held by you-know-who. Who with any knowledge of the issues would think that this fact, assuming it is true, could pose any sort of challenge to the Chomsky conception of FL/UG? It can only arise from thinking that Chomsky’s universals are Greenberg’s universals. In other words, it can only arise as a supposition if you have no idea what you are talking about, and this, I am sad to report, is quite common when it comes to discussions of Chomsky, UG and language. Here is another instance of the same misunderstanding.

So, what to make of this research? For me, not much. I am quite sure that there are lots of universals that have no etiology in FL/UG. I bet that every language has a word for ‘mom’ and ‘dad,’ and I am pretty confident I know why. But it makes a great story, right? After all, isn’t any story about language that includes the claim that Chomsky is wrong worthwhile just for that reason? It is for the NYT. Read the three press reports and you will notice that only the NYT decided to take the ‘Chomsky is wrong’ angle. This is missing from the LATimes and the HuffPost. In fact, as ‘Chomsky’ does not appear in the body of the PLoS ONE paper (not even in the references), I surmise that the NYT chased Dr. Enfield down to get this quote. I wonder why? It must be for balance: to make up for all those articles in the NYT stating that Chomsky is right. Yeah, that’s it: ‘all the news that’s fit to print,’ or should it be ‘fair and balanced’? Nothing else makes sense, right?





[1] What ‘Huh?’ has to do with grammatical structure is beyond me, btw.

Wednesday, December 4, 2013

Dreams of a unified theory; a great big juicy problem (Yay!!)

The intricacies of A’-syntax are one of the glories of GB.[1] The unification of Ross’s islands in terms of Subjacency and the discovery of ECP dependencies (especially the adjunct/argument distinction), coupled with wide-ranging investigations of these effects in a large variety of different kinds of languages, marked a high point in Generative Grammar. This all changed with the Minimalist (M) “Revolution” (yes, these are scare quotes). Thereafter, Island and ECP effects mostly fell from the hot topics list (compare post-M work with that done in the 80s and early 90s, when it seemed that every other paper/book was about A’-dependencies and their island/ECP restrictions). Moreover, though early M was chock full of discussions of Superiority, an A’-effect, it was mainly theoretically interesting for the light it threw on Minimality and Shortest Move/Attract rather than for how it bore on Islands or the ECP. Indeed, from where I sit, the bulk of the interesting work within M has been on A rather than A’ dependencies.[2]

Moreover, whereas there has been interesting research aiming to unify various grammatical modules, Subjacency and the ECP have resisted theoretical integration, at least interesting versions thereof. It is possible, indeed easy, to translate bounding theory or barriers into phase terminology.[3] However, nothing particularly insightful is gained in doing this. It is also possible to unify Islands with Minimality given the right use of features placed in appropriate edge positions, but IMO little has been gained to date in so proceeding. So Island and ECP effects, once the pride of theoretical syntax, have become a backwater, and a slightly embarrassing one, for three related reasons.

First, though it is pretty easy to translate Subjacency (viz. bounding theory) into phase terms, this translation simply duplicates the peccadillos of the earlier approaches (e.g. where we stipulated bounding nodes, we now stipulate (strong) phases; where we stipulated escape hatches (C yes, D no), we now stipulate phase edges (both which phases have any to use and how many they have)).

Second, ad hoc as this is, it’s good compared to the problems the ECP throws up. For example, the ECP is conceptually a trace licensing requirement. Where does this leave us when we replace traces with copies, as M does? Do copies need licensing? Why, if they are simply different occurrences of a single expression? Moreover, how do we code the difference between adjuncts and arguments? What makes the former so restricted when compared to the latter?

Last, the obvious redundancy between Subjacency and the ECP raises serious M questions. Both involve the same island-like configurations, yet they are entirely different licensing conditions. Talk about redundancy! One of Subjacency or the ECP is bad enough, but both? Argh!!

So, A’-syntax raises M issues, and a natural hope is to dispose of these problems by placing them in someone else’s trash bin. And there have been several attempts to do just this, e.g. Kluender & Kutas, Sag & Hofmeister, Hawkins, among others. The idea has been to treat island effects as a reflection of processing complexity, the latter arising when parsers try to relate elements outside an island (fillers) to positions (gaps) within an island. It is well known that filler/gap dependencies impose a memory/storage cost, as the process of relating a filler to a gap requires keeping the filler “live” until it’s discharged in the appropriate position. Interestingly, there is independent psycho-ling evidence that the cost of keeping elements active can depend on the details of the parse quite independently of whether islands are involved (e.g. the beginnings of finite clauses induce load, as does the parsing of definites).[4] Island effects, on this view, are just the sum total of these island-independent processing costs. In effect, islands are just structures where these other, independently manifested, costly requirements converge. If true, this idea could, with some work, let M off the island hook.[5] Wouldn’t that be nice?

It would be, but I personally doubt that this strategy will work out. The main problem is that it seems very hard to explain the unacceptability profiles of island effects in processing terms. A recent volume, Experimental Syntax and Island Effects (of which I am co-editor, though Jon Sprouse did all the really heavy lifting and deserves all the credit), reviews the basic issues. The main take-home message is that, when considered in detail, the relevant cited complexity inducers (e.g. definiteness) do not eliminate the structural contributions of islands to perceived acceptability, though they can modulate it (viz. the super-additive effects of islands remain even if the severity of the unacceptability can be manipulated). Many of the papers in the volume address these issues in detail (see especially those by Jon Sprouse, Matt Wagers, and Colin Phillips). The book also contains good representatives of the processing “complexity” alternative, and the interested reader is encouraged to take a look at the papers (WARNING: being a co-editor forbids me, in good conscience, from advocating purchase, but I believe that many would consider this book a perfect holiday gift even for those with no interest in the relevant intellectual issues, e.g. it’s really heavy and would make a perfect paperweight or door stopper).

A nice companion piece to the papers in the above volume, which I have recently read, seconds the conclusion that island effects have a structural source. The paper (here) is by Yoshida, Kazanina, Pablos and Sturt (YKPS), and it explores the problem in a very clever way. Here’s a quick review.

YKPS start from the assumption that if the problem is one of the processing complexity of islands, then any dependency into an island that is computed online (as filler/gap dependencies are) should show island-like properties even if that dependency is not the product of movement. They identify forward cataphora (e.g. his1 managers revealed that [island the studio that notified Jeffrey Stewart1 about the new film] selected a novel for the script) as one such dependency. YKPS show that the indicated referential dependency is calculated online just as filler/gap dependencies are (both are very greedy in fixing the dependency). However, in contrast to movement dependencies, pronoun resolution in forward cataphora does not exhibit island effects. The argument is easy to follow and the conclusion strikes me as pretty solid, but read it and judge for yourself. What I liked about it is that it is a classic example of a typical linguistic argument form: YKPS identify a dog that doesn’t bark. If parsing complexity is the relevant variable, then it needs to explain both why some dependencies exhibit island effects and, just as importantly, why some do not. In other words, negative data counts! The absence of island effects is as much a datum as its presence, though it is often ignored.[6] As YKPS put it:

Complexity accounts, which attribute island effects to the effect of processing complexity of the online dependency formation process, need to explain why the same complexity does not affect (my emphasis, NH) the formation of cataphoric dependencies. (17)

So, it seems to me that islands are here to stay, even if their presence in UG embarrasses minimalists.

Three points and I end. First, the argument that YKPS present is another nice example of how psycho-techniques can be used to advance syntactic ends. How so? Well, it is critical to YKPS’s point that forward cataphora involves the same kind of processing strategy (active filler) as the regular filler/gap dependencies one finds in movement, despite the dependencies being entirely different grammatically. This is what makes it possible to compare the two kinds of processes and to conclude from their different behavior wrt islands that structural effects cannot be reduced to parsing complexity (a prima facie very reasonable hypothesis, and one that might even be nice were it true!).[7]

Second, the complexity theory of islands pertains to Subjacency Effects. The far harder problem, as I mentioned earlier, involves ECP effects. Indeed, were Subjacency Effects reduced to complexity effects, the presence of ECP effects in the very same configurations would become even more puzzling, at least to me. At any rate, both problems remain, and await a decent M analysis.

Third, let me end with some personal intellectual history. I taught a course on the old GB A’ material with Howard Lasnik this semester (a great experience, thx Howard) and have become pretty convinced that finding a way simply to recapitulate ECP and Island effects in M terms is by no means trivial. To see this, I invite you simply to try to translate the GB theories into an M-acceptable idiom. Even this is pretty hard to do, and a simple translation still leaves one short of an M-acceptable account. Conclusion? This is still a really juicy research topic for the unificationally inclined, i.e. a great Minimalist research topic.



[1] I take GB to be the logical culmination of work that first developed as the Extended Standard Theory. Moreover, I here, again, take GB to be one of several kissing cousins, such as GPSG, LFG, HPSG.
[2] This is a bird’s eye evaluation and there are notable exceptions to this coarse generalization. Here is one very conspicuous exception: how ellipsis obviates island effects. Lasnik and Merchant have turned this into a small, very productive industry. The main theoretical effect has been to make us reconsider what makes an island islandy. The ellipsis effects have revived an interpretation that has some roots in Ross: that it is not the illicit dependency that matters but the phonological realization thereof. Islands, on this view, are PF rather than syntactic effects. At any rate, this is really interesting stuff, which has led us to understand Island Effects in new ways.
[3] At least if one allows D to be a phase, something some (e.g. Chomsky) have only grudgingly accepted.
[4] Rick Lewis has some nice models of this based on empirical work by Gibson.
[5] Of course, more work needs doing. For example, one needs to explain why ellipsis obviates these processing effects (see note 2).
[6] Note 5 indicates another bit of negative data that needs explanation on the complexity account. One might think, for example, that having to infer elided structure would add to complexity and thus increase the unacceptability of island violations, contrary to what we in fact find.
[7] Very reasonable indeed, as witnessed by Chomsky’s extensive efforts in ‘On Wh Movement’ to argue against the supposition that island effects are simple complexity effects.