At least, Slashdot says so.
The computer geek in me says "sweet!", the biologist in me says "See? I'm useful!" and the futurist in me says "this is a good way to combat singularities!". I'm sure you can follow the first two, dear reader; the last might require a bit of explanation.
Most bioinformatics is designed to mine signal out of vast tracts of noise, in environments where we have only the most rudimentary knowledge of the protocols involved. Network analysis is a little different, in that we have much better knowledge of the structure of the corpus of data -- but this article represents a different way of looking at it. Up until now, protocol analysis has largely been a theory-based, a priori kind of science: deduce from first principles (i.e., the protocol spec) the shape of the interesting signal, then look for that. This technique takes the opposite approach: develop a general method for finding interesting things, and then let it loose.
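To make that "let it loose" idea concrete, here's a minimal sketch of what a spec-free interesting-thing detector might look like. Everything in it is my own illustration, not anything from the article: it learns a baseline byte-frequency distribution from a sample of ordinary traffic, then flags any message whose bytes are statistically surprising under that baseline -- no knowledge of the protocol spec required.

```python
from collections import Counter
import math

def byte_distribution(messages):
    """Pool all bytes from sample messages into a frequency distribution."""
    counts = Counter(b for msg in messages for b in msg)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def surprise(message, baseline, floor=1e-6):
    """Average negative log-likelihood of a message's bytes under the
    baseline.  Bytes the baseline has never seen get a tiny floor
    probability, so unheard-of bytes score as very surprising."""
    return sum(-math.log(baseline.get(b, floor)) for b in message) / len(message)

def flag_anomalies(traffic, baseline, threshold):
    """Return the messages whose surprise exceeds the threshold."""
    return [msg for msg in traffic if surprise(msg, baseline) > threshold]

# Hypothetical traffic: a few plausible HTTP-ish requests as the baseline,
# plus one blob of NOP-sled-looking bytes the detector has never seen.
normal = [b"GET /index.html", b"GET /about.html", b"GET /news.html"]
baseline = byte_distribution(normal)
odd = flag_anomalies(normal + [b"\x90\x90\x90\x90\xcc\xcc"], baseline, threshold=5.0)
```

The point of the sketch is that nothing in it knows what HTTP is; the weird message falls out purely because it doesn't look like everything else. Real versions of this idea would use richer features than single-byte frequencies, but the shape of the approach is the same.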
Of course, both approaches converge on the same signal in theory, assuming perfect knowledge of the protocol and perfect interesting-thing detection. In practice, protocol designers don't always know how their protocols are going to be used, and interesting-thing detection technology isn't perfect either. So they get different things! Who knows, maybe this technique could find truths about the way protocols actually get used that the designers didn't think of. Perhaps this would be a way to analyze large quantities of data without needing a priori knowledge of what you're looking for -- which would be a good thing to be able to do in a world that moves faster than your brain can keep up with.
See, I did have a point!