So we did two new sequencing experiments, and loaded them together on our new high-throughput machine. Part A needs software A, and part B needs software B, but running software A will analyze both A and B (it's just that the A-B mix isn't what you want, so you ignore it), and vice versa. Either program will tell you how much data there is, though, for both types of data.
So software A told us the whole thing was a disaster. Don't bother to look at the details; there's just not enough data there to conclude anything. Naturally we were very unhappy about that, and were preparing to call up the supplier and complain - again - or at least get a reason why - again - though they usually have no idea why.
In turning things every which way before calling, we decided to run software B to see if the B data was really as bad as all that. And it wasn't. But lo and behold, the A data looked pretty good too. Since the two experiments are really quite similar, it seemed reasonable to us that software B would be perfectly capable of analyzing data A, in spite of the supplier swearing up and down it couldn't.
And really, both sets of data are perfectly fine.
It's just that software A decided it didn't like something or other about it, and tossed 80% of our perfectly good sequence in the trash. I really wish we could get it to stop doing that. New technology is great. But it can take a while for data processing tools to catch up, and they haven't yet.