Management of Myelodysplastic Syndromes in High-Risk Patients - Episode 3
Yazan Madanat, MD, discusses available MDS risk assessment tools and provides insight into his preferred clinical assessment methods.
Mikkael A. Sekeres, MD: Yazan, can you go into a little more detail about how risk is assessed? You mentioned there are a lot of different risk stratification systems we could use. Do you think one is clearly better than the others? What do you use practically in the clinic?
Yazan Madanat, MD: That’s a great question. It’s a challenge to figure out which tool is best. The IPSS [International Prognostic Scoring System] was the first and was devised in 1997. It relies on chromosomal abnormalities, the number of cytopenias, and the blast percentage, but it’s quite outdated in how it classifies disease. It included patients with 21% to 30% blasts, whom nowadays we would classify as having AML [acute myeloid leukemia]. It also included only patients who were treated with supportive care at that time, and it has been validated only at diagnosis, so it definitely has its limitations. The appeal is its simplicity: it groups patients into 4 risk categories. Low and intermediate-1 can be grouped together as lower-risk disease, and intermediate-2 and high can be grouped together as higher-risk disease.
When it was revised in 2012 as the IPSS-R [revised IPSS], the model focused more on cytogenetic risk, with more thorough detail across 5 cytogenetic risk categories. That means cytogenetics alone can push a patient into higher-risk disease, even if the patient doesn’t have as many cytopenias or a high blast percentage. The revision made the model a little more complex and added an intermediate category. In some clinical trials, intermediate is counted as lower-risk disease, while in others it’s counted as higher-risk disease, which creates some confusion about what to do with that category. One analysis proposed a cutoff at a score of 3.5: patients with a score of 3.5 or less are lower risk, and those with a higher score should be grouped together as higher risk.
I usually use both the IPSS and the IPSS-R and see whether they correlate, because then I know I’m dealing with 1 disease risk. If they don’t correlate, you always question which is going to be more accurate. Some of the work Aziz Nazha, MD, published looks at how well each score performs; he presented 3 nice cases in which each of the scoring systems had deficiencies. These systems have been validated to about 70% accuracy, or 80% at best, so there are always some inaccuracies. I use them as a guide, not as a definitive prediction that a patient is going to progress at a given time point.
Then there are the WHO [World Health Organization] Prognostic Scoring System and the MD Anderson Scoring System. The advantage of the WHO system is that it’s dynamic, so it can be applied at diagnosis and at later time points, and it includes both de novo and therapy-related MDS [myelodysplastic syndromes]. However, it’s not widely used, and because it doesn’t account for cytopenias other than anemia, it’s not a great scoring system for all patients with MDS. The MD Anderson Scoring System can be a little complex to use as a quick guide in the clinic, but it also includes therapy-related MDS. The most commonly used scoring systems remain the IPSS and its revised version. We’re hopeful that a molecular IPSS score will be available at some point.
Transcript Edited for Clarity