HTB

Monitoring patients on antiretroviral therapy in resource-limited settings with viral load, CD4 count, or clinical observation alone

Polly Clayden, HIV i-Base

The majority of roll-out programmes in resource-limited settings will introduce antiretrovirals using clinical monitoring, without viral load testing and often without monitoring of CD4 cell counts.

One of the more controversial presentations was from Andrew Phillips, who presented results from a paper recently published in the Lancet that used a model of HIV progression and the effect of antiretroviral therapy to determine the consequences of different monitoring strategies for survival and the development of resistance. [1, 2]

The investigators used a model (HIV Synthesis) that was originally developed for well-resourced settings. HIV Synthesis is a stochastic computer simulation model that tracks the progression of HIV and the effect of antiretroviral therapy in individual simulated patients.

The model generates information including age, viral load and CD4 count. Each patient’s data are updated every three months, showing the current number of active drugs in the regimen, the viral load and the level of adherence. For each antiretroviral, the model keeps track of current use, previous use, past virological failure, past stopping due to toxicity, and resistance to the drug.
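
For illustration only, here is a minimal sketch of how such per-patient, per-drug state and the three-monthly update step might be represented in code. All names, and the deliberately crude viral load and CD4 dynamics, are assumptions made for this sketch; the published HIV Synthesis model is far more detailed.

    from dataclasses import dataclass, field

    @dataclass
    class DrugState:
        # Per-drug flags the article says the model keeps track of
        in_current_regimen: bool = False
        previously_used: bool = False
        past_virological_failure: bool = False
        stopped_for_toxicity: bool = False
        resistant: bool = False

    @dataclass
    class Patient:
        age: float
        viral_load_log10: float
        cd4_count: float
        adherence: float                            # 0.0 to 1.0
        drugs: dict = field(default_factory=dict)   # drug name -> DrugState

        def active_drugs(self) -> int:
            # Drugs in the current regimen to which the virus is not resistant
            return sum(1 for d in self.drugs.values()
                       if d.in_current_regimen and not d.resistant)

    def quarterly_update(patient: Patient) -> None:
        # One 3-month step: age advances; viral load and CD4 respond (very
        # crudely here) to the number of active drugs and to adherence
        patient.age += 0.25
        suppression = patient.active_drugs() * patient.adherence
        patient.viral_load_log10 = max(1.7, patient.viral_load_log10
                                       - 0.5 * suppression + 0.3)
        patient.cd4_count += 25 * suppression - 10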

The model was adapted to reflect the situation in resource-limited settings and the modifications included:

  • WHO stage 3 and 4 events in place of CDC grade B and C disease, particularly to model pulmonary TB.
  • Increased risk of non-HIV mortality.
  • Use of single dose NVP in a proportion of women for mother-to-child transmission prophylaxis (assumed to result in NVP resistance).
  • Interruptions in drug supply.

The model included only adult patients. Patients were eligible for antiretrovirals according to WHO guidelines and received a first-line regimen of d4T, 3TC, and NVP; the second-line regimen was AZT, ddI and LPV/r. Switching due to treatment failure was determined by the monitoring strategy; some individual drug substitutions due to toxicity were assumed.

The investigators simulated outcomes of using different strategies for 6-monthly monitoring of antiretroviral therapy to determine when to switch to second-line therapy (a rough coding of these rules is sketched after the list):

  1. Viral load, with virological failure defined as >500 copies/mL or >10,000 copies/mL, occurring after more than 6 months continuously on antiretrovirals.
  2. CD4 cell count, with failure defined as a 50% decline from the peak during the current period of continuous treatment, or a 33% decline over a 6-month period. Additionally, the CD4 cell count must be <200 cells/mm3 and the patient must have been on antiretroviral therapy continuously for more than 9 months.
  3. Clinical events, with failure defined by a new WHO stage 3 or 4 event; by two new WHO stage 3 events or a new WHO stage 4 event (the multiple WHO stage 3/new WHO stage 4 event criterion); or by a new WHO stage 4 event alone. The patient must have been on antiretroviral therapy for more than 6 months.
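
As a rough illustration of how these switch rules might be expressed, the sketch below codes the thresholds quoted above. The function names, the simplified patient-history arguments and the rule labels are assumptions; they are not taken from the published model.

    def meets_viral_load_criterion(vl_copies: float, months_on_art: int,
                                   threshold: float = 500) -> bool:
        # Strategy 1: viral load above threshold (500 or 10,000 copies/mL)
        # after more than 6 months continuously on antiretrovirals
        return months_on_art > 6 and vl_copies > threshold

    def meets_cd4_criterion(cd4_now: float, cd4_peak_on_art: float,
                            cd4_6_months_ago: float, months_on_art: int) -> bool:
        # Strategy 2: 50% decline from the on-treatment peak, or a 33% decline
        # over 6 months; CD4 must also be <200 cells/mm3 after >9 months on ART
        if months_on_art <= 9 or cd4_now >= 200:
            return False
        decline_from_peak = cd4_now <= 0.5 * cd4_peak_on_art
        decline_over_6_months = cd4_now <= (1 - 0.33) * cd4_6_months_ago
        return decline_from_peak or decline_over_6_months

    def meets_clinical_criterion(new_stage3_events: int, new_stage4_events: int,
                                 months_on_art: int,
                                 rule: str = "multiple_stage3_or_new_stage4") -> bool:
        # Strategy 3 variants, counting new WHO stage events after >6 months on ART
        if months_on_art <= 6:
            return False
        if rule == "any_stage3_or_4":
            return new_stage3_events >= 1 or new_stage4_events >= 1
        if rule == "multiple_stage3_or_new_stage4":
            return new_stage3_events >= 2 or new_stage4_events >= 1
        if rule == "new_stage4_only":
            return new_stage4_events >= 1
        raise ValueError(f"unknown rule: {rule}")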

Of the patients generated by the model at the start of ART, 58% were women; the median age was 30 years, and the median baseline CD4 count and viral load were 66 cells/mm3 and 5.4 log10 copies/mL respectively. 32% of patients had had TB previously and 13% had received single-dose NVP for mother-to-child transmission prophylaxis.

Estimates of the risk of virological failure using viral load >500 copies/mL were 16% by 1 year, 28% by 5 years, 37% by 10 years, and 51% by 20 years.

A small proportion of patients met the CD4 count or clinical switch criteria before viral load exceeded 500 copies/mL. Among patients who met the viral load >500 copies/mL criterion first, the investigators assessed the proportion fulfilling each of the other criteria within one year of doing so: 87% for viral load >10,000 copies/mL; 26% for CD4 decline from peak; 25% for current CD4 decline; 32% for a new stage 3/4 event; 19% for multiple stage 3 events or a new stage 4 event; and 10% for a new stage 4 event. The median time to meeting the CD4 count criteria was approximately 4 years.

Using viral load >500 copies/mL as the switch criterion, 83% of patients had resistance to NVP, 85% to 3TC, and 26% had any thymidine analogue mutation at the time of switch. When the multiple WHO stage 3/new WHO stage 4 event criterion was used, 90% had resistance to NVP, 90% to 3TC, and 55% had thymidine analogue mutations at the time of switch.

These mutations meant that the mean number of active drugs in the second-line regimen was 2.71 in those switched according to viral load and 2.37 in those switched according to clinical events.

When the investigators looked at survival, it was greatest in patients switched according to viral load >500 copies/mL, but the differences compared with the other criteria were modest.

Survival was estimated as mean life-years lived over each time period; the percentages in parentheses express these as a proportion of the maximum possible for each horizon. The investigators found that death rates were not constant over each period and were much higher in the first year. At 5 years, mean person-years of potential life were 4.14 (83%), 4.09 (82%) and 4.09 (82%) using the viral load >500 copies/mL, CD4 decline from peak, and multiple stage 3/new stage 4 criteria respectively. At 10 years these values were 7.74 (77%), 7.52 (75%) and 7.53 (75%), and at 20 years 13.46 (67%), 12.76 (64%) and 12.71 (64%), for the same criteria respectively.
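
Reading the percentages as a share of the maximum possible years is an interpretation rather than something spelled out above, but a quick check shows it reproduces the quoted figures for the viral load >500 copies/mL criterion:

    # Mean person-years of potential life (viral load >500 copies/mL criterion)
    # divided by the length of each follow-up horizon
    for years_lived, horizon in [(4.14, 5), (7.74, 10), (13.46, 20)]:
        print(f"{years_lived}/{horizon} = {years_lived / horizon:.0%}")
    # 4.14/5 = 83%, 7.74/10 = 77%, 13.46/20 = 67%, matching the text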

The proportion of life-years spent with viral load <50 copies/mL was highest with viral load monitoring: 73%, 74% and 69% at 5, 10 and 20 years using the viral load >500 copies/mL criterion, versus 69%, 68% and 63% using the multiple WHO stage 3/new WHO stage 4 event criterion.

When the investigators looked at the risk of transmission of resistant virus, and at the proportion of life-years in which patients had resistance together with a viral load over 1000 copies/mL (as these patients will be more infectious), they found lower percentages of life-years with resistance to some drugs when the switch criterion was viral load >500 copies/mL than with the other criteria.

In their discussion in the Lancet article, the investigators wrote: “Although viral load monitoring provided a moderate improvement in survival compared with other monitoring strategies, and would be predicted to reduce resistance accumulation and the ability to transmit resistant HIV to others, at around $3500 per life-year gained (compared with a strategy of new WHO stage 3/4 events), at current costs such monitoring is unlikely to be cost effective in most resource-limited settings.”

They explained that, since resistance to NVP and 3TC is almost always present at the time of virological failure of the WHO first-line regimen, it is largely resistance to d4T and AZT at the start of the second regimen that will differ between the viral load and other monitoring strategies. They found that the mean number of active drugs in the second-line regimen was only moderately lower when the multiple WHO stage 3/new WHO stage 4 strategy was used than with viral load monitoring, even though the clinical monitoring strategy allows the virus to accumulate thymidine analogue mutations over more than 4 years.

They noted that data on resistance have largely been based on patients in the north with subtype B virus.

They predict that if d4T is replaced with tenofovir in first-line regimens (as is increasingly happening in resource-limited settings), their overall conclusions are unlikely to differ.

Additionally, if second-line regimens different from the one in the model are used (such as abacavir, ddI, and LPV/r, or tenofovir, ddI, and LPV/r), these do not include AZT, so the investigators predict they would be, if anything, less sensitive to the accumulation of thymidine analogue mutations during first-line failure. Their conclusions would therefore probably hold for alternative second-line regimens.

They concluded: “In summary, our results suggest that use of antiretroviral therapy without monitoring of viral load or CD4 cell count does not have marked detrimental effects on patient survival or on development of resistance. This finding is particularly relevant in view of the limited array of antiretroviral combinations available to the developing world. Access to antiretroviral therapy should be expanded to all settings as rapidly as possible; lack of access to laboratory monitoring should not be allowed to hinder this process.”

Comment

In an editorial accompanying the article, David Moore and Jonathan Mermin from HBAC write: “These results might seem surprising to clinicians familiar with HIV treatment in high-income countries, where the value of laboratory monitoring has been accepted since the introduction of highly active antiretroviral therapy in the mid-1990s.”

They rightly point out that the results from DART, which are anticipated next year, will further inform the question.

References:

  1. Phillips AN. How do monitoring strategies influence antiretroviral treatment outcome in the developing world? Plenary Session 3, 14th Annual BHIVA Conference, 23-25 April 2008, Belfast.
  2. Phillips AN, Pillay D, Miners AH et al. Outcomes from monitoring of patients on antiretroviral therapy in resource-limited settings with viral load, CD4 cell count, or clinical observation alone: a computer simulation model. Lancet 2008; 371: 1443.

Links to other websites are current at date of posting but not maintained.