A brief grading break: scaling up open peer review?

In the interests of being upfront, I admit to being very cynical about invoking "community," as communities are just as likely to be unpleasant cesspits filled with murderous escapees from the Ninth Circle as they are to be warm nests of mutually helpful participants.  Hence my ongoing caution about open peer review as a large-scale endeavor, rather than as the relatively circumscribed project it has been so far.  Specifically, I think that people are underestimating the amount of intensive moderation required to make open peer review viable, especially when controversial topics are involved.  For example, Metafilter, the online community in which I've participated since 2001, requires multiple full- and part-time paid moderators working 24/7 to keep hot-topic posts from devolving into full-blown firefights (and they don't always succeed, either).  And this is a community with a strong sense of norms.

Moderation like this is difficult work, and most academic editors are not, in fact, going to be able to devote the necessary time to it.  Alas, academic communities are just as likely to devolve into battle, as anyone who has spent too much time subscribing to listservs--or, nowadays, reading social media--no doubt recognizes.

For that reason, my skepticism meter tends to rise when I hear about, say, unblinded peer review as a solution to the nasty-feedback problem: requiring people to use their real names on social media seems not to have made them any less willing to pop off, so it's not entirely clear to me that doing the same with peer review will have much of an effect.  I suspect, in other words, that there will have to be much more gate-keeping involved than the more "open" versions of open peer review tend to envision.