DDA vs DIA - A debate

Posted on 13th April, 2020

Food for thought... a debate on data acquisition approaches.

 

“This house proposes that with the introduction and rapid take-up of SWATH data acquisition, DDA will not survive for five more years.”

 

FOR

It was once said that a week is a long time in politics, but what constitutes a long time in mass spectrometry? Five years? Recent mass spectrometry instrument development has progressed at a startling rate, and with it, better and more ingenious ways of using the instruments have emerged. Faster and faster acquisition rates coupled with huge sensitivity improvements have enabled record numbers of peptides and proteins to be identified and quantified. Currently, the vast majority of approaches to proteomic analysis hinge on Data-Dependent Acquisition (DDA); however, I intend to demonstrate that this dominance is rapidly coming to an end.

Data-Independent Acquisition (DIA), and in particular SWATH, is revolutionizing proteomics and is set to bury DDA for good. Gone will be the days of missed peptide fragmentation due to the stochastic nature of DDA sampling. Instead, since every ion entering the collision cell is subjected to fragmentation, post-acquisition processing has the potential to identify many more of the lower abundance peptides often “invisible” to DDA.
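
For readers who prefer to see the idea in concrete terms, here is a minimal sketch, in Python, of how a SWATH-style method steps fixed, wide isolation windows across the precursor range so that every precursor is fragmented in every cycle. The 400-1250 m/z range, 25 Da window width and swath_windows helper are purely illustrative assumptions, not any instrument's actual defaults.

```python
# Illustrative sketch only: a SWATH-style DIA cycle steps fixed, wide
# isolation windows across the precursor m/z range, so every precursor is
# fragmented in every cycle. The 400-1250 m/z range, 25 Da window width and
# 1 Da overlap are assumed values for illustration, not instrument defaults.

def swath_windows(mz_start=400.0, mz_end=1250.0, width=25.0, overlap=1.0):
    """Return (lower, upper) isolation windows covering the precursor range."""
    windows = []
    lower = mz_start
    while lower < mz_end:
        upper = min(lower + width, mz_end)
        windows.append((lower, upper))
        if upper >= mz_end:
            break
        lower = upper - overlap  # small overlap so no precursor falls in a gap
    return windows

if __name__ == "__main__":
    for low, high in swath_windows():
        print(f"MS/MS recorded for all precursors in {low:.1f}-{high:.1f} m/z")
```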

Current software approaches are addressing the difficulty of achieving this efficiently, however, within a relatively short time frame, I predict that this will be solved and with it the demise of DDA will be complete. Five years is a long time in proteomics – too long for DDA to survive.

 

AGAINST

As was rightly pointed out above, DDA is the basis for the vast majority of proteomics experiments globally today. Even though DIA has been around, in one form or another, for over a decade, it has still not been possible to convince most practitioners of its superiority and, therefore, of its ability to supersede DDA. The early QTOFs were only capable of acquiring one full scan and perhaps three or four MS/MS scans per cycle. However, the modern, super-fast TripleTOF 6600 can now handle acquisition speeds of up to 50 Hz, with high-quality product ion spectra from meaningful (sub-microgram) amounts of digest on column. Unlike instruments which rely on image current for detection, a modern, fast QTOF does not generate “empty” spectra at full throttle.

At present, DDA on TripleTOFs is capable of generating far more successful peptide IDs than an instrument operating in DIA mode. The limitations of DDA are not, as suggested, missed triggering of MS/MS due to spectral saturation, but rather a consequence of ionisation suppression in the source. This suppression phenomenon is equally detrimental to DIA.

I therefore contend that, far from being dead in five years’ time, DDA will remain as dominant as it is today, provided that the recent advances in QTOF speed and sensitivity continue apace.

 

FOR

There is no disputing the fact that DDA has facilitated enormous advances in proteomics over the past twenty years or so. However, it has an inherent weakness in that a precursor ion will either trigger MS/MS fragmentation or be missed completely. User-defined intensity thresholds must be set globally in order to enable triggering: set too high, lower abundance precursors never trigger at all; set too low, fragmentation may occur at an inappropriate point on the chromatographic peak. Optimal fragmentation should be achieved at or around the peak apex, which is not easy to achieve with any degree of consistency using DDA methods. A “snapshot” of product ions is produced, with only one or two MS/MS spectra per precursor possible, whereas MS/MS data are obtained across the entire chromatographic peak with a DIA acquisition.
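
To make the point concrete, here is a rough sketch, in Python, of the “top-N above a global intensity threshold” logic a DDA method applies to each survey scan. The select_precursors helper and all numbers are hypothetical illustrations; the point is simply that anything below the threshold, or outside the top N, is never fragmented in that cycle.

```python
# Illustrative sketch of DDA "top-N" precursor selection in one duty cycle:
# only the N most intense precursors above a global intensity threshold
# trigger MS/MS, so anything below the threshold (or outside the top N)
# in that survey scan is never fragmented. Numbers are illustrative only.

def select_precursors(survey_scan, top_n=10, intensity_threshold=1000.0,
                      excluded=None):
    """survey_scan: list of (mz, intensity); excluded: m/z values on dynamic exclusion."""
    excluded = excluded or set()
    candidates = [
        (mz, inten) for mz, inten in survey_scan
        if inten >= intensity_threshold and round(mz, 2) not in excluded
    ]
    # Fragment the most intense candidates first; everything else is skipped.
    candidates.sort(key=lambda peak: peak[1], reverse=True)
    return [mz for mz, _ in candidates[:top_n]]

if __name__ == "__main__":
    scan = [(512.27, 2.4e4), (640.31, 9.5e2), (722.85, 5.1e3)]
    print(select_precursors(scan))  # 640.31 falls below the threshold and is missed
```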

In addition, triplicate injections of the same sample clearly demonstrate that lower abundance precursor ions are often not present in all three data sets acquired via DDA. I would concede that fast and sensitive acquisitions do, to some extent, mitigate this, and that acquisitions of sub-microgram digest loadings at 50 Hz cycling work very well on a TripleTOF. However, such acquisition speeds are unrealistic for other classes of instrument, which often require the chromatography column to be overloaded in order for the fastest acquisition rates to be utilized.

Possibly the greatest strength of DIA is the ability to return to the captured data at a later stage for re-interrogation, in the knowledge that all fragment ions will have been recorded multiple times. A peptide missed in a DDA run is missed forever, whereas DIA has the potential to reveal it post-acquisition, using ion extraction. The ability of DIA to generate a comprehensive digital map for present and future interrogation is clearly a massive advantage.
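
As a rough illustration of what post-acquisition re-interrogation means in practice, the Python sketch below pulls an extracted ion chromatogram for a single fragment m/z out of DIA spectra recorded in every cycle. The extract_xic helper, the in-memory data layout and the tolerance are assumptions made purely for illustration; real pipelines work from the stored raw or mzML files.

```python
# Illustrative sketch of post-acquisition re-interrogation of DIA data:
# extract a fragment ion chromatogram (XIC) for a "future" peptide of
# interest from MS/MS spectra that were recorded in every cycle.
# The data layout and tolerance are assumptions for illustration only.

def extract_xic(dia_run, fragment_mz, tol_ppm=20.0):
    """dia_run: list of (retention_time, [(mz, intensity), ...]) MS/MS spectra
    from the isolation window covering the precursor of interest."""
    tol = fragment_mz * tol_ppm / 1e6
    trace = []
    for rt, peaks in dia_run:
        signal = sum(inten for mz, inten in peaks if abs(mz - fragment_mz) <= tol)
        trace.append((rt, signal))
    return trace  # (retention time, summed intensity) pairs across the peak

if __name__ == "__main__":
    run = [(30.1, [(684.35, 1.2e3)]),
           (30.2, [(684.34, 4.8e3)]),
           (30.3, [(684.36, 9.0e2)])]
    print(extract_xic(run, 684.35))
```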

 

AGAINST

Well-established proteomics software packages, such as Mascot and ProteinPilot, are used worldwide to great effect and form the basis of most DDA data analysis. The challenges associated with DIA data analysis are immense, and at present no single pipeline is able to extract all of the information held within a DIA dataset. Research labs and core facilities do not have the luxury of waiting for such long-term developments to take place.

Current, off-the-shelf DDA methods allow very fast method set-up and sample-to-useful-data turnaround times. A deep-penetrance DIA experiment is only as good as its library. This can take an extraordinary amount of time and effort to generate, and often involves off-line fractionation and multiple reacquisitions (in DDA, I might add) in order to obtain a “comprehensive” library. DDA, on the other hand, utilises well-established, proven algorithms with no preconceptions as to which proteins may or may not be present. And that is to say nothing of modifications.

Five years is quite simply not long enough for the DIA school to get its act together, and so I can say with absolute confidence that DDA is here for the long game, and will still be the dominant mode of proteomics data acquisition throughout the 2020s.

 

Concluding remarks - FOR

The dinosaurs amongst us will cling on to the familiar rather than embrace the latest advances. But just as the dinosaurs died out some sixty-five million years ago, in five years’ time it will be the turn of data-dependent acquisition. DIA data is effectively future-proofed in that it can be re-interrogated at a later stage for “future” peptides of interest. With DDA, you get what you get and the rest is lost forever. R.I.P. DDA, it’s been good knowing you, but your time is now over!

 

Concluding remarks - AGAINST

The death of DDA in a mere five years is pie in the sky. Increased QTOF acquisition speeds, coupled with a well-established bioinformatics infrastructure and the multiplexing capabilities of isobaric tagging reagents, ensure that there will remain a dominant space for DDA analyses, at least for the foreseeable future. DIA simply isn’t mature enough to supersede DDA, and won’t be for many years to come. Nice try, but DDA methods work, and they work well. So if it ain’t broke, don’t fix it. DIA has its place, but that has to be in conjunction with DDA, not instead of it.

 

FEEDBACK

You have now heard a number of arguments in favour of, and opposed to, the premise that “With the introduction and rapid take-up of SWATH data acquisition, DDA will not survive for five more years.”

 

Now have your say.         

 

Please VOTE either FOR or AGAINST and if you would like to contribute to this month’s debate, please add a COMMENT on this Blog.
