There's been a very vigorous discussion in the comment sections to this post which expatiates on the falsifiability of proposals within generative grammar of the Chomskyan variety. It interweaves with a second point: the value of formalization. The protagonists are Alex Clark and David Pesetsky. It should come as no surprise to readers that I agree with David here. However, the interchange is worth reading and I recommend it to your attention precisely because the argument is a "canonical" one in the sense that they represent two views about the generative enterprise that we will surely hear again (though I hope that very soon David's views prevail, as they have with me).
Though David has said what I would have said (though much better) let me add three points.
First, nobody can be against formalization. There is nothing wrong with it (though there is nothing inherently right about it either). However, in my experience its value lies not in making otherwise vague theories testable. Indeed, as we generally evaluate a particular formalization in terms of whether it respects the theoretical and empirical generalizations of the account it is formalizing, it is hard to see how formalization per se can be the feature that makes an account empirically evaluable. This comes out very clearly, for example, in the recent formalization of minimalism by Collins and Stabler. At virtually every point they assure the reader that a central feature of the minimalist program is being coded in such and such a way. And, in doing this, they make formal decisions with serious empirical and theoretical consequences, decisions that could call into question the utility of the particular formalization. For example, the system does not tolerate sidewards movement, and whether this formalization is empirically and theoretically useful may rest on whether UG allows sidewards movement or not. But the theoretical and empirical adequacy of sidewards movement, and of formalizations that encode it or not, is not a question that any given proposed formalization addresses or can address (as Collins and Stabler know). So, whatever the utility of formalization, to date, with some exceptions (I will return to two), it does not primarily lie in making theories that would otherwise be untestable, testable.
So what is the utility? I think that when done well, formalization allows us to clarify the import of our basic concepts. It can lay bare what the conceptual dependencies between our basic concepts are. Tim Hunter's thesis is a good example of this, I think. His formalization of some basic minimalist concepts allows us to reconceptualize them and consequently extend them empirically (at least in principle). So too with Alex Drummond's formal work on sidewards movement and Merge over Move. I mention these two because this work was done here at UMD and I was exposed to the thinking as it developed. It is not my intention to suggest that there is not other equally good work out there.
Second, any theory comes to be tested only with the help of very many ancillary hypotheses. I confess to feeling that lots of critics of Generative Grammar would benefit by reading the work criticizing naive falsificationism (Lakatos, Cartwright, Hacking, and a favorite of mine, Laymon). As David emphasizes, and I could not agree more, it is not that hard to find problems with virtually every proposal. Given this, the trick is to evaluate proposals despite their evident shortcomings. The true/false dichotomy might be a useful idealization within formal theory, but it badly distorts actual scientific practice, where the aim is to find better theories. We start from the reasonable assumption that our best theories are nonetheless probably false. We all agree that the problems are hard and that we don't know as much as we would like. Active research consists in trying to find ways of evaluating these acknowledged false accounts so that we can develop better ones. And where the improving ideas will come from is often quite unclear. Let me give a couple of examples of how vague the most progressive ideas can be.
Consider the germ theory of disease. What is it? It entered as roughly the claim that some germs cause some diseases sometimes. Not one of those strongly refutable claims. Important? You bet. It started people thinking in entirely new ways and we are all the beneficiaries of this.
The Atomic Hypothesis is in the same ball park. Big things are made up of smaller things. This was an incredibly important idea (Feynman, I think, thought this was the most important scientific idea ever). Progress comes from many sources, formalization being but one. And even pretty labile theories can be tested, as, e.g. the germ theory was.
Third: Alex Clark suggests that only formal theories can address learnability concerns. I disagree. One can provide decent evidence that something is not learnable without this (think of Crain's stuff or the conceptual arguments against the learnability of island conditions). This is not to dispute that formal accounts can and have helped illuminate important matters (I am thinking of Yang's stuff in particular, but a lot of the stuff done by Berwick and his students is, IMO, terrific). However, I confess that I would be very suspicious of formal learnability results that "proved" that Binding Theory was learnable, or that Movement locality theory (aka Subjacency) was, or that the ECP or structure dependence was. The reasons for taking these phenomena as indications of deep grammatical structural principles are so convincing (to me) that they currently form boundary conditions on admissible formal results.
As I said, the discussion is worth reading. I suspect that minds will not be changed, but this does not make going through it (at least once anyhow) any less worthwhile.