I just got asked to peer review a paper for a fairly big machine learning journal...
I'm in a bit of a moral bind, unfortunately. The paper has a simple, but somewhat cool application of machine learning to a field that could really use it (I have no idea how much one is allowed to say about papers for review, so I'm erring on the safe side).
However, the paper is 24 pages long, and it took me much longer than it should have to extract the page's worth of actual meat. It's as if the author was afraid that their idea was too simple, and so they had to throw as much jargon and convolution into the paper as possible.
While this sort of mess is common in machine learning, I feel like I'd be endorsing poor writing if I just let it pass. The only other sizable complaint I have is that they used simulated data to check the validity of their assumptions (and did nothing at all to explain how their simulation isn't itself based on those same assumptions). Bad use of simulated data alone isn't enough to get a paper rejected in the ML world, so I'd really be forcing a value judgement of "I think you should write your papers to convey information instead of to impress people" onto them if I said "no".
Have any of you ever hit this?