PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal

The value of confirmatory testing in early infant HIV diagnosis programmes in South Africa: A cost-effectiveness analysis

Tue, 21/11/2017 - 23:00

by Lorna Dunning, Jordan A. Francke, Divya Mallampati, Rachel L. MacLean, Martina Penazzato, Taige Hou, Landon Myer, Elaine J. Abrams, Rochelle P. Walensky, Valériane Leroy, Kenneth A. Freedberg, Andrea Ciaranello

Background

The specificity of nucleic acid amplification tests (NAATs) used for early infant diagnosis (EID) of HIV infection is <100%, leading some HIV-uninfected infants to be incorrectly identified as HIV-infected. The World Health Organization recommends that infants undergo a second NAAT to confirm any positive test result, but implementation is limited. Our objective was to determine the impact and cost-effectiveness of confirmatory HIV testing for EID programmes in South Africa.

Methods and findings

Using the Cost-effectiveness of Preventing AIDS Complications (CEPAC)–Pediatric model, we simulated EID testing at age 6 weeks for HIV-exposed infants without and with confirmatory testing. We assumed a NAAT cost of US$25, NAAT specificity of 99.6%, NAAT sensitivity of 100% for infants infected in pregnancy or at least 4 weeks prior to testing, and a mother-to-child transmission (MTCT) rate at 12 months of 4.9%; we simulated guideline-concordant rates of testing uptake, result return, and antiretroviral therapy (ART) initiation (100%). After diagnosis, infants were linked to and retained in care for 10 years (false-positive) or lifelong (true-positive). All parameters were varied widely in sensitivity analyses. Outcomes included number of infants with false-positive diagnoses linked to ART per 1,000 ART initiations, life expectancy (LE, in years) and per-person lifetime HIV-related healthcare costs. Both without and with confirmatory testing, LE was 26.2 years for HIV-infected infants and 61.4 years for all HIV-exposed infants; clinical outcomes for truly infected infants did not differ by strategy. Without confirmatory testing, 128/1,000 ART initiations were false-positive diagnoses; with confirmatory testing, 1/1,000 ART initiations were false-positive diagnoses. Because confirmatory testing averted costly HIV care and ART in truly HIV-uninfected infants, it was cost-saving: total cost US$1,790/infant tested, compared to US$1,830/infant tested without confirmatory testing. Confirmatory testing remained cost-saving unless NAAT cost exceeded US$400 or the HIV-uninfected status of infants incorrectly identified as infected was ascertained and ART stopped within 3 months of starting. Limitations include uncertainty in the data used in the model, which we examined with sensitivity and uncertainty analyses. 
We also excluded clinical harms to HIV-uninfected infants incorrectly treated with ART after false-positive diagnosis (e.g., medication toxicities); including these outcomes would further increase the value of confirmatory testing.

Conclusions

Without confirmatory testing, in settings with MTCT rates similar to that of South Africa, more than 10% of ART initiations in infants may reflect false-positive diagnoses. Confirmatory testing prevents inappropriate HIV diagnosis, is cost-saving, and should be adopted in all EID programmes.
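
The intuition behind these results is the low positive predictive value (PPV) of a single NAAT when transmission rates are low. The sketch below is back-of-the-envelope arithmetic using the abstract's base-case inputs, not the CEPAC-Pediatric simulation itself; the model's larger false-positive fraction (128/1,000 ART initiations) plausibly reflects testing at 6 weeks of age, before many of the 12-month transmissions have occurred, so prevalence among those tested is lower than 4.9%.

```python
# Illustrative positive-predictive-value (PPV) arithmetic for a single NAAT;
# a deliberate simplification of the CEPAC-Pediatric model described above.
def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity                # truly infected infants testing positive
    false_pos = (1 - prevalence) * (1 - specificity)   # uninfected infants testing positive
    return true_pos / (true_pos + false_pos)

# Base-case inputs from the abstract: 4.9% MTCT, 100% sensitivity, 99.6% specificity
single = ppv(0.049, 1.0, 0.996)
# Confirmatory testing: assuming a second, independent NAAT (an illustrative
# assumption), the combined false-positive rate is (1 - 0.996) squared.
confirmed = ppv(0.049, 1.0, 1 - (1 - 0.996) ** 2)
print(f"single NAAT PPV:  {single:.3f}")
print(f"confirmed PPV:    {confirmed:.5f}")
```

Even this simplified arithmetic shows roughly 7% of positive results being false without confirmation, and almost none with it.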

HIV pre-exposure prophylaxis and early antiretroviral treatment among female sex workers in South Africa: Results from a prospective observational demonstration project

Tue, 21/11/2017 - 23:00

by Robyn Eakle, Gabriela B. Gomez, Niven Naicker, Rutendo Bothma, Judie Mbogua, Maria A. Cabrera Escobar, Elaine Saayman, Michelle Moorhouse, W. D. Francois Venter, Helen Rees, on behalf of the TAPS Demonstration Project Team

Background

Operational research is required to design delivery of pre-exposure prophylaxis (PrEP) and early antiretroviral treatment (ART). This paper presents the primary analysis of programmatic data, as well as demographic, behavioural, and clinical data, from the TAPS Demonstration Project, which offered both interventions to female sex workers (FSWs) at 2 urban clinic sites in South Africa.

Methods and findings

The TAPS study was conducted between 30 March 2015 and 30 June 2017, with the enrolment period ending on 31 July 2016. TAPS was a prospective observational cohort study with 2 groups receiving interventions delivered in existing service settings: (1) PrEP as part of combination prevention for HIV-negative FSWs and (2) early ART for HIV-positive FSWs. The main outcome was programme retention at 12 months of follow-up. Of the 947 FSWs initially seen in clinic, 692 were HIV tested. HIV prevalence was 49%. Among those returning to clinic after HIV testing and clinical screening, 93% of the women who were HIV-negative were confirmed as clinically eligible for PrEP (n = 224/241), and 41% (n = 110/270) of the women who were HIV-positive had CD4 counts within National Department of Health ART initiation guidelines at assessment. Of the remaining women who were HIV-positive, 93% were eligible for early ART (n = 148/160). From those eligible, 98% (n = 219/224) and 94% (n = 139/148) took up PrEP and early ART, respectively. At baseline, a substantial fraction of women had a steady partner, worked in brothels, and were born in Zimbabwe. Of those enrolled, 22% on PrEP (n = 49/219) and 60% on early ART (n = 83/139) were seen at 12 months; we observed high rates of loss to follow-up: 71% (n = 156/219) and 30% (n = 42/139) in the PrEP and early ART groups, respectively. Little change over time was reported in consistent condom use or the number of sexual partners in the last 7 days, with high levels of consistent condom use with clients and low use with steady partners in both study groups. There were no seroconversions on PrEP and 7 virological failures on early ART among women remaining in the study. Reported adherence to PrEP varied over time between 70% and 85%, whereas over 90% of participants reported taking pills daily while on early ART. Data on provider-side costs were also collected and analysed. 
The total cost of service delivery was approximately US$126 for PrEP and US$406 for early ART per person-year. The main limitations of this study include the lack of a control group, which was not included due to ethical considerations; clinical study requirements imposed when PrEP was not approved through the regulatory system, which could have affected uptake; and the timing of the implementation of a national sex worker HIV programme, which could have also affected uptake and retention.

Conclusions

PrEP and early ART services can be implemented within routine FSW services in high-prevalence urban settings. We observed good uptake of both PrEP and early ART; however, retention rates for PrEP were low. Retention rates for early ART were similar to those under the current standard of care. While the cost of the interventions was higher than previously published estimates, there is potential for cost reduction at scale. The TAPS Demonstration Project results provided the basis for the first government PrEP and early ART guidelines and the rollout of the national sex worker HIV programme in South Africa.

Closing the gaps in the HIV care continuum

Tue, 21/11/2017 - 23:00

by Ruanne V. Barnabas, Connie Celum

In a Perspective, Ruanne Barnabas and Connie Celum discuss the implications of the accompanying Link4Health and Engage4Health studies for HIV care in sub-Saharan Africa.

HIV self-testing among female sex workers in Zambia: A cluster randomized controlled trial

Tue, 21/11/2017 - 23:00

by Michael M. Chanda, Katrina F. Ortblad, Magdalene Mwale, Steven Chongo, Catherine Kanchele, Nyambe Kamungoma, Andrew Fullem, Caitlin Dunn, Leah G. Barresi, Guy Harling, Till Bärnighausen, Catherine E. Oldenburg

Background

HIV self-testing (HIVST) may play a role in addressing gaps in HIV testing coverage and as an entry point for HIV prevention services. We conducted a cluster randomized trial of 2 HIVST distribution mechanisms compared to the standard of care among female sex workers (FSWs) in Zambia.

Methods and findings

Trained peer educators in Kapiri Mposhi, Chirundu, and Livingstone, Zambia, each recruited 6 FSW participants. Peer educator–FSW groups were randomized to 1 of 3 arms: (1) delivery (direct distribution of an oral HIVST from the peer educator), (2) coupon (a coupon for collection of an oral HIVST from a health clinic/pharmacy), or (3) standard-of-care HIV testing. Participants in the 2 HIVST arms received 2 kits: 1 at baseline and 1 at 10 weeks. The primary outcome was any self-reported HIV testing in the past month at the 1- and 4-month visits, as HIVST can replace other types of HIV testing. Secondary outcomes included linkage to care, HIVST use in the HIVST arms, and adverse events. Participants completed questionnaires at 1 and 4 months following peer educator interventions. In all, 965 participants were enrolled between September 16 and October 12, 2016 (delivery, N = 316; coupon, N = 329; standard of care, N = 320); 20% had never tested for HIV. Overall HIV testing at 1 month was 94.9% in the delivery arm, 84.4% in the coupon arm, and 88.5% in the standard-of-care arm (delivery versus standard of care risk ratio [RR] = 1.07, 95% CI 0.99–1.15, P = 0.10; coupon versus standard of care RR = 0.95, 95% CI 0.86–1.05, P = 0.29; delivery versus coupon RR = 1.13, 95% CI 1.04–1.22, P = 0.005). Four-month rates were 84.1% for the delivery arm, 79.8% for the coupon arm, and 75.1% for the standard-of-care arm (delivery versus standard of care RR = 1.11, 95% CI 0.98–1.27, P = 0.11; coupon versus standard of care RR = 1.06, 95% CI 0.92–1.22, P = 0.42; delivery versus coupon RR = 1.05, 95% CI 0.94–1.18, P = 0.40). At 1 month, the majority of HIV tests were self-tests (88.4%). HIV self-test use was higher in the delivery arm compared to the coupon arm (RR = 1.14, 95% CI 1.05–1.23, P = 0.001) at 1 month, but there was no difference at 4 months. 
Among participants reporting a positive HIV test at 1 (N = 144) and 4 months (N = 235), linkage to care was non-significantly lower in the 2 HIVST arms compared to the standard-of-care arm. There were 4 instances of intimate partner violence related to study participation, 3 of which were related to HIV self-test use. Limitations include the self-reported nature of study outcomes and overall high uptake of HIV testing.
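
The effect measures above are risk ratios with 95% CIs. A minimal sketch of that calculation is below; the event counts are reconstructed from the abstract's percentages and arm sizes, and the naive log-scale Wald interval ignores the cluster randomization, so the trial's published CIs are wider than this calculation yields.

```python
import math

def risk_ratio(events1, n1, events2, n2):
    """Unadjusted risk ratio with a naive log-scale Wald 95% CI.
    The trial's published CIs account for clustering by peer-educator
    group, so they are wider than this calculation."""
    p1, p2 = events1 / n1, events2 / n2
    rr = p1 / p2
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)  # SE of log(RR)
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# Approximate 1-month testing counts: 94.9% of 316 (delivery arm)
# versus 88.5% of 320 (standard of care)
rr, lo, hi = risk_ratio(round(0.949 * 316), 316, round(0.885 * 320), 320)
print(f"RR = {rr:.2f} (naive 95% CI {lo:.2f}-{hi:.2f})")
```

The point estimate matches the reported delivery-versus-standard-of-care RR of 1.07; only the interval width differs, because of the clustering adjustment.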

Conclusions

In this study among FSWs in Zambia, we found that HIVST was acceptable and accessible. However, HIVST may not substantially increase HIV cascade progression in contexts where overall testing and linkage are already high.

Trial registration

ClinicalTrials.gov NCT02827240

Postmenopausal hormone therapy and risk of stroke: A pooled analysis of data from population-based cohort studies

Fri, 17/11/2017 - 23:00

by Germán D. Carrasquilla, Paolo Frumento, Anita Berglund, Christer Borgfeldt, Matteo Bottai, Chiara Chiavenna, Mats Eliasson, Gunnar Engström, Göran Hallmans, Jan-Håkan Jansson, Patrik K. Magnusson, Peter M. Nilsson, Nancy L. Pedersen, Alicja Wolk, Karin Leander

Background

Recent research indicates a favourable influence of postmenopausal hormone therapy (HT) if initiated early, but not late, on subclinical atherosclerosis. However, the clinical relevance of timing of HT initiation for hard end points such as stroke remains to be determined. Further, no previous research has considered the timing of initiation of HT in relation to haemorrhagic stroke risk. The importance of the route of administration, type, active ingredient, and duration of HT for stroke risk is also unclear. We aimed to assess the association between HT and risk of stroke, considering the timing of initiation, route of administration, type, active ingredient, and duration of HT.

Methods and findings

Data on HT use reported by the participants in 5 population-based Swedish cohort studies, with baseline investigations performed during the period 1987–2002, were combined in this observational study. In total, 88,914 postmenopausal women who reported data on HT use and had no previous cardiovascular disease diagnosis were included. Incident events of stroke (ischaemic, haemorrhagic, or unspecified) and haemorrhagic stroke were identified from national population registers. Laplace regression was employed to assess crude and multivariable-adjusted associations between HT and stroke risk by estimating percentile differences (PDs) with 95% confidence intervals (CIs). The fifth and first PDs were calculated for stroke and haemorrhagic stroke, respectively. Crude models were adjusted for age at baseline only. The final adjusted models included age at baseline, level of education, smoking status, body mass index, level of physical activity, and age at menopause onset. Additional variables evaluated for potential confounding were type of menopause, parity, use of oral contraceptives, alcohol consumption, hypertension, dyslipidaemia, diabetes, family history of cardiovascular disease, and cohort. During a median follow-up of 14.3 years, 6,371 first-time stroke events were recorded; of these, 1,080 were haemorrhagic. Following multivariable adjustment, early initiation (<5 years since menopause onset) of HT was associated with a longer stroke-free period than never use (fifth PD, 1.00 years; 95% CI 0.42 to 1.57), but there was no significant extension to the time period free of haemorrhagic stroke (first PD, 1.52 years; 95% CI −0.32 to 3.37). When considering timing as a continuous variable, the stroke-free and the haemorrhagic stroke-free periods were maximal if HT was initiated approximately 0–5 years from the onset of menopause. 
If single conjugated equine oestrogen HT was used, late initiation of HT was associated with a shorter stroke-free (fifth PD, −4.41 years; 95% CI −7.14 to −1.68) and haemorrhagic stroke-free (first PD, −9.51 years; 95% CI −12.77 to −6.24) period than never use. Combined HT when initiated late was significantly associated with a shorter haemorrhagic stroke-free period (first PD, −1.97 years; 95% CI −3.81 to −0.13), but not with a shorter stroke-free period (fifth PD, −1.21 years; 95% CI −3.11 to 0.68) than never use. Given the observational nature of this study, the possibility of uncontrolled confounding cannot be excluded. Further, immortal time bias, also related to the observational design, cannot be ruled out.
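
Laplace regression reports survival-percentile differences rather than hazard ratios: the fifth PD is the difference between exposure groups in the time by which the earliest 5% of strokes have occurred. The sketch below is a concept illustration on synthetic, uncensored event times (the distributions and numbers are invented, not study data); a real analysis must also handle censoring, which Laplace regression accommodates.

```python
import random

random.seed(0)
# Hypothetical stroke-free times in years for two exposure groups
# (synthetic data for illustration only, not the pooled cohort's).
never_use = sorted(random.expovariate(1 / 40.0) for _ in range(10_000))
early_ht = sorted(random.expovariate(1 / 45.0) for _ in range(10_000))

def percentile(sorted_values, p):
    """Simple nearest-rank percentile for an already-sorted list (p in [0, 100])."""
    idx = min(len(sorted_values) - 1, int(p / 100 * len(sorted_values)))
    return sorted_values[idx]

# Fifth percentile difference (fifth PD): how much later the earliest 5% of
# strokes occur with early HT initiation than with never use.
fifth_pd = percentile(early_ht, 5) - percentile(never_use, 5)
print(f"fifth PD: {fifth_pd:.2f} years")
```

A positive fifth PD corresponds to a longer stroke-free period, matching the direction of the reported 1.00-year estimate for early initiation versus never use.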

Conclusions

When initiated early in relation to menopause onset, HT was not associated with increased risk of incident stroke, regardless of the route of administration, type of HT, active ingredient, and duration. Generally, these findings held also for haemorrhagic stroke. Our results suggest that the initiation of HT 0–5 years after menopause onset, as compared to never use, is associated with a decreased risk of stroke and haemorrhagic stroke. Late initiation was associated with elevated risks of stroke and haemorrhagic stroke when conjugated equine oestrogen was used as single therapy. Late initiation of combined HT was associated with haemorrhagic stroke risk.

Core Outcome Set-STAndards for Development: The COS-STAD recommendations

Thu, 16/11/2017 - 23:00

by Jamie J. Kirkham, Katherine Davis, Douglas G. Altman, Jane M. Blazeby, Mike Clarke, Sean Tunis, Paula R. Williamson

Background

The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]).

Methods and findings

An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1–9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD contains 11 minimum standards that are the minimum design recommendations for all COS development projects. The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process.
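
The Delphi consensus rule described above, at least 70% of voting participants in every stakeholder group scoring the item 7 to 9, amounts to a per-group containment check. A minimal sketch, using invented ratings rather than COS-STAD responses:

```python
# Sketch of the COS-STAD Delphi consensus rule: an item enters the minimum
# standards only if >= 70% of voting participants in *each* stakeholder
# group rate it between 7 and 9 on the 1-9 importance scale.
def reaches_consensus(scores_by_group, threshold=0.70):
    for group, scores in scores_by_group.items():
        critical = sum(1 for s in scores if 7 <= s <= 9)
        if critical / len(scores) < threshold:
            return False  # one group short of 70% blocks inclusion
    return True

# Hypothetical ratings for one candidate item (not real COS-STAD data)
item = {
    "COS developers":  [9, 8, 7, 7, 9, 6],    # 5/6 = 83%
    "journal editors": [8, 8, 9, 7],          # 4/4 = 100%
    "patients":        [7, 6, 5, 9, 8, 7, 7], # 5/7 = 71%
}
print(reaches_consensus(item))  # True: every group clears 70%
```

Requiring the threshold in each group separately, rather than overall, prevents a large stakeholder group from outvoting a smaller one.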

Conclusions

The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using reasonable methods.

Prospects for passive immunity to prevent HIV infection

Tue, 14/11/2017 - 23:00

by Lynn Morris, Nonhlanhla N. Mkhize

In a Perspective, Lynn Morris and Nonhlanhla Mkhize discuss the prospects for broadly neutralizing antibodies to be used in preventing HIV infection.

Safety, pharmacokinetics, and immunological activities of multiple intravenous or subcutaneous doses of an anti-HIV monoclonal antibody, VRC01, administered to HIV-uninfected adults: Results of a phase 1 randomized trial

Tue, 14/11/2017 - 23:00

by Kenneth H. Mayer, Kelly E. Seaton, Yunda Huang, Nicole Grunenberg, Abby Isaacs, Mary Allen, Julie E. Ledgerwood, Ian Frank, Magdalena E. Sobieszczyk, Lindsey R. Baden, Benigno Rodriguez, Hong Van Tieu, Georgia D. Tomaras, Aaron Deal, Derrick Goodman, Robert T. Bailer, Guido Ferrari, Ryan Jensen, John Hural, Barney S. Graham, John R. Mascola, Lawrence Corey, David C. Montefiori, on behalf of the HVTN 104 Protocol Team, and the NIAID HIV Vaccine Trials Network

Background

VRC01 is an HIV-1 CD4 binding site broadly neutralizing antibody (bnAb) that is active against a broad range of HIV-1 primary isolates in vitro and protects against simian-human immunodeficiency virus (SHIV) when delivered parenterally to nonhuman primates. It has been shown to be safe and well tolerated after short-term administration in humans; however, its clinical and functional activity after longer-term administration has not been previously assessed.

Methods and findings

HIV Vaccine Trials Network (HVTN) 104 was designed to evaluate the safety and tolerability of multiple doses of VRC01 administered either subcutaneously or by intravenous (IV) infusion and to assess the pharmacokinetics and in vitro immunologic activity of the different dosing regimens. Additionally, this study aimed to assess the effect that the human body has on the functional activities of VRC01 as measured by several in vitro assays. Eighty-eight healthy, HIV-uninfected, low-risk participants were enrolled in 6 United States clinical research sites affiliated with the HVTN between September 9, 2014, and July 15, 2015. The median age of enrollees was 27 years (range, 18–50); 52% were White (non-Hispanic), 25% identified as Black (non-Hispanic), 11% were Hispanic, and 11% were non-Hispanic people of diverse origins. Participants were randomized to receive the following: a 40 mg/kg IV VRC01 loading dose followed by five 20 mg/kg IV VRC01 doses every 4 weeks (treatment group 1 [T1], n = 20); eleven 5 mg/kg subcutaneous (SC) VRC01 (treatment group 3 [T3], n = 20); placebo (placebo group 3 [P3], n = 4) doses every 2 weeks; or three 40 mg/kg IV VRC01 doses every 8 weeks (treatment group 2 [T2], n = 20). Treatment groups T4 and T5 (n = 12 each) received three 10 or 30 mg/kg IV VRC01 doses every 8 weeks, respectively. Participants were followed for 32 weeks after their first VRC01 administration and received a total of 249 IV infusions and 208 SC injections, with no serious adverse events, dose-limiting toxicities, nor evidence for anti-VRC01 antibodies observed. Serum VRC01 levels were detected through 12 weeks after final administration in all participants who received all scheduled doses. Mean peak serum VRC01 levels of 1,177 μg/ml (95% CI: 1,033, 1,340) and 420 μg/ml (95% CI: 356, 494) were achieved 1 hour after the IV infusion series of 30 mg/kg and 10 mg/kg doses, respectively. 
Mean trough levels at week 24 in the IV infusion series of 30 mg/kg and 10 mg/kg doses, respectively, were 16 μg/ml (95% CI: 10, 27) and 6 μg/ml (95% CI: 5, 9), concentrations that neutralize a majority of circulating strains in vitro (50% inhibitory concentration [IC50] > 5 μg/ml). Post-infusion/injection serum VRC01 retained expected functional activity (virus neutralization, antibody-dependent cellular cytotoxicity, phagocytosis, and virion capture). The limitations of this study include the relatively small sample size of each VRC01 administration regimen and missing data from participants who were unable to complete all study visits.

Conclusions

VRC01 administered as either an IV infusion (10–40 mg/kg) given monthly or bimonthly, or as an SC injection (5 mg/kg) every 2 weeks, was found to be safe and well tolerated. In addition to maintaining drug concentrations consistent with neutralization of the majority of tested HIV strains, VRC01 concentrations from participants’ sera were found to avidly capture HIV virions and to mediate antibody-dependent cellular phagocytosis, suggesting a range of anti-HIV immunological activities, warranting further clinical trials.

Trial registration

ClinicalTrials.gov NCT02165267

Treatment guidelines and early loss from care for people living with HIV in Cape Town, South Africa: A retrospective cohort study

Tue, 14/11/2017 - 23:00

by Ingrid T. Katz, Richard Kaplan, Garrett Fitzmaurice, Dominick Leone, David R. Bangsberg, Linda-Gail Bekker, Catherine Orrell

Background

South Africa has undergone multiple expansions in antiretroviral therapy (ART) eligibility from an initial CD4+ threshold of ≤200 cells/μl to providing ART for all people living with HIV (PLWH) as of September 2016. We evaluated the association of programmatic changes in ART eligibility with loss from care, both prior to ART initiation and within the first 16 weeks of starting treatment, during a period of programmatic expansion to ART treatment at CD4+ ≤ 350 cells/μl.

Methods and findings

We performed a retrospective cohort study of 4,025 treatment-eligible, non-pregnant PLWH accessing care in a community health center in Gugulethu Township affiliated with the Desmond Tutu HIV Centre in Cape Town. The median age of participants was 34 years (IQR 28–41 years), almost 62% were female, and the median CD4+ count was 173 cells/μl (IQR 92–254 cells/μl). Participants were stratified into 2 cohorts: an early cohort, enrolled into care at the health center from 1 January 2009 to 31 August 2011, when guidelines mandated that ART initiation required CD4+ ≤ 200 cells/μl, pregnancy, advanced clinical symptoms (World Health Organization [WHO] stage 4), or comorbidity (active tuberculosis); and a later cohort, enrolled into care from 1 September 2011 to 31 December 2013, when the treatment threshold had been expanded to CD4+ ≤ 350 cells/μl. Demographic and clinical factors were compared before and after the policy change using chi-squared tests to identify potentially confounding covariates, and logistic regression models were used to estimate the risk of pre-treatment (pre-ART) loss from care and early loss within the first 16 weeks on treatment, adjusting for age, baseline CD4+, and WHO stage. Compared with participants in the later cohort, participants in the earlier cohort had significantly more advanced disease: median CD4+ 146 cells/μl versus 214 cells/μl (p < 0.001), 61.1% WHO stage 3/4 disease versus 42.8% (p < 0.001), and pre-ART mortality of 34.2% versus 16.7% (p < 0.001). In total, 385 ART-eligible PLWH (9.6%) failed to initiate ART, of whom 25.7% died before ever starting treatment. Of the 3,640 people who started treatment, 58 (1.6%) died within the first 16 weeks in care, and an additional 644 (17.7%) were lost from care within 16 weeks of starting ART. PLWH who did start treatment in the later cohort were significantly more likely to discontinue care in <16 weeks (19.8% versus 15.8%, p = 0.002). 
After controlling for baseline CD4+, WHO stage, and age, this effect remained significant (adjusted odds ratio [aOR] = 1.30, 95% CI 1.09–1.55). As such, it remains unclear if early attrition from care was due to a “healthy cohort” effect or to overcrowding as programs expanded to accommodate the broader guidelines for treatment. Our findings were limited by a lack of generalizability (given that these data were from a single high-volume site where testing and treatment were available) and an inability to formally investigate the effect of crowding on the main outcome.
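
The adjusted odds ratio above can be sanity-checked against a crude (unadjusted) odds ratio computed directly from the reported discontinuation proportions; the closeness of the two suggests that adjustment for baseline CD4+, WHO stage, and age moved the estimate only slightly. A minimal sketch:

```python
def odds_ratio(p1, p2):
    """Crude odds ratio from two proportions (no covariate adjustment)."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Early discontinuation (<16 weeks on ART): 19.8% in the later cohort
# versus 15.8% in the earlier cohort, as reported in the abstract.
crude = odds_ratio(0.198, 0.158)
print(f"crude OR = {crude:.2f}")  # close to the adjusted aOR of 1.30
```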

Conclusions

Over one-quarter of this ART-eligible cohort did not achieve the long-term benefits of treatment due to early mortality, ART non-initiation, or early ART discontinuation. Those who started treatment in the later cohort appeared to be more likely to discontinue care early, and this outcome appeared to be independent of CD4+ count or WHO stage. Future interventions should focus on those most at risk for early loss from care as programs continue to expand in South Africa.

A combination intervention strategy to improve linkage to and retention in HIV care following diagnosis in Mozambique: A cluster-randomized study

Tue, 14/11/2017 - 23:00

by Batya Elul, Matthew R. Lamb, Maria Lahuerta, Fatima Abacassamo, Laurence Ahoua, Stephanie A. Kujawski, Maria Tomo, Ilesh Jani

Background

Concerning gaps in the HIV care continuum compromise individual and population health. We evaluated a combination intervention strategy (CIS) targeting prevalent barriers to timely linkage and sustained retention in HIV care in Mozambique.

Methods and findings

In this cluster-randomized trial, 10 primary health facilities in the city of Maputo and Inhambane Province were randomly assigned to provide the CIS or the standard of care (SOC). The CIS included point-of-care CD4 testing at the time of diagnosis, accelerated ART initiation, and short message service (SMS) health messages and appointment reminders. A pre–post intervention 2-sample design was nested within the CIS arm to assess the effectiveness of CIS+, an enhanced version of the CIS that additionally included conditional non-cash financial incentives for linkage and retention. The primary outcome was a combined outcome of linkage to care within 1 month and retention at 12 months after diagnosis. From April 22, 2013, to June 30, 2015, we enrolled 2,004 out of 5,327 adults ≥18 years of age diagnosed with HIV in the voluntary counseling and testing clinics of participating health facilities: 744 (37%) in the CIS group, 493 (25%) in the CIS+ group, and 767 (38%) in the SOC group. Fifty-seven percent of the CIS group achieved the primary outcome versus 35% in the SOC group (relative risk [RR]CIS vs SOC = 1.58, 95% CI 1.05–2.39). Eighty-nine percent of the CIS group linked to care on the day of diagnosis versus 16% of the SOC group (RRCIS vs SOC = 9.13, 95% CI 1.65–50.40). There was no significant benefit of adding financial incentives to the CIS in terms of the combined outcome (55% of the CIS+ group achieved the primary outcome, RRCIS+ vs CIS = 0.96, 95% CI 0.81–1.16). Key limitations include the use of existing medical records to assess outcomes, the inability to isolate the effect of each component of the CIS, non-concurrent enrollment of the CIS+ group, and exclusion of many patients newly diagnosed with HIV.

Conclusions

The CIS showed promise for making much needed gains in the HIV care continuum in our study, particularly in the critical first step of timely linkage to care following diagnosis.

Trial registration

ClinicalTrials.gov NCT01930084

Virological response and resistance among HIV-infected children receiving long-term antiretroviral therapy without virological monitoring in Uganda and Zimbabwe: Observational analyses within the randomised ARROW trial

Tue, 14/11/2017 - 23:00

by Alexander J. Szubert, Andrew J. Prendergast, Moira J. Spyer, Victor Musiime, Philippa Musoke, Mutsa Bwakura-Dangarembizi, Patricia Nahirya-Ntege, Margaret J. Thomason, Emmanuel Ndashimye, Immaculate Nkanya, Oscar Senfuma, Boniface Mudenge, Nigel Klein, Diana M. Gibb, A. Sarah Walker, the ARROW Trial Team

Background

Although WHO recommends viral load (VL) monitoring for those on antiretroviral therapy (ART), availability in low-income countries remains limited. We investigated long-term VL and resistance in HIV-infected children managed without real-time VL monitoring.

Methods and findings

In the ARROW factorial trial, 1,206 children initiating ART in Uganda and Zimbabwe between 15 March 2007 and 18 November 2008, with a median age of 6 years and a median CD4% of 12%, were randomised to monitoring with or without 12-weekly CD4 counts and to receive 2 nucleoside reverse transcriptase inhibitors (2NRTI, mainly abacavir+lamivudine) with a non-nucleoside reverse transcriptase inhibitor (NNRTI) or 3 NRTIs as long-term ART. All children had VL assayed retrospectively after a median of 4 years on ART; those with >1,000 copies/ml were genotyped. Three hundred and sixteen children had VL and genotypes assayed longitudinally (at least every 24 weeks). Overall, 67 (6%) switched to second-line ART and 54 (4%) died. In children randomised to WHO-recommended 2NRTI+NNRTI long-term ART, 308/378 (81%) monitored with CD4 counts versus 297/375 (79%) without had VL <1,000 copies/ml at 4 years (difference = +2.3% [95% CI −3.4% to +8.0%]; P = 0.43), with no evidence of differences in intermediate/high-level resistance to 11 drugs. Among children with longitudinal VLs, only 5% of child-time post–week 24 was spent with persistent low-level viraemia (80–5,000 copies/ml) and 10% with VL rebound ≥5,000 copies/ml. No child resuppressed <80 copies/ml after confirmed VL rebound ≥5,000 copies/ml. A median of 1.0 (IQR 0.0–1.5) additional NRTI mutation accumulated over 2 years’ rebound. Nineteen out of 48 (40%) VLs 1,000–5,000 copies/ml were immediately followed by resuppression <1,000 copies/ml, but only 17/155 (11%) VLs ≥5,000 copies/ml resuppressed (P < 0.0001). Main study limitations are that analyses were exploratory and treatment initiation used 2006 criteria, without pre-ART genotypes.

Conclusions

In this study, children receiving first-line ART in sub-Saharan Africa without real-time VL monitoring had good virological and resistance outcomes over 4 years, regardless of CD4 monitoring strategy. Many children with detectable low-level viraemia spontaneously resuppressed, highlighting the importance of confirming virological failure before switching to second-line therapy. Children experiencing rebound ≥5,000 copies/ml were much less likely to resuppress, but NRTI resistance increased only slowly. These results are relevant to the increasing numbers of HIV-infected children receiving first-line ART in sub-Saharan Africa with limited access to virological monitoring.

Trial registration

ISRCTN Registry, ISRCTN24791884

Bioequivalence of twice-daily oral tacrolimus in transplant recipients: More evidence for consensus?

Tue, 14/11/2017 - 23:00

by Simon Ball

In this Perspective on the clinical trial by Rita Alloway and colleagues, Simon Ball explains the benefits to healthcare systems and individual patients of the bioequivalence established between generic and brand-name formulations of an immunosuppressive drug in transplant recipients.

Bioequivalence between innovator and generic tacrolimus in liver and kidney transplant recipients: A randomized, crossover clinical trial

Tue, 14/11/2017 - 23:00

by Rita R. Alloway, Alexander A. Vinks, Tsuyoshi Fukuda, Tomoyuki Mizuno, Eileen C. King, Yuanshu Zou, Wenlei Jiang, E. Steve Woodle, Simon Tremblay, Jelena Klawitter, Jost Klawitter, Uwe Christians

Background

Although the generic drug approval process has a long-term successful track record, concerns remain for approval of narrow therapeutic index generic immunosuppressants, such as tacrolimus, in transplant recipients. Several professional transplant societies and publications have generated skepticism of the generic approval process. Three major areas of concern are that the pharmacokinetic properties of generic products and the innovator (that is, “brand”) product in healthy volunteers may not reflect those in transplant recipients, bioequivalence between generic and innovator may not ensure bioequivalence between generics, and high-risk patients may have specific bioequivalence concerns. Such concerns have been fueled by anecdotal observations and retrospective and uncontrolled published studies, while well-designed, controlled prospective studies testing the validity of the regulatory bioequivalence testing approach for narrow therapeutic index immunosuppressants in transplant recipients have been lacking. Thus, the present study prospectively assesses bioequivalence between innovator tacrolimus and 2 generics in individuals with a kidney or liver transplant.

Methods and findings

From December 2013 through October 2014, a prospective, replicate dosing, partially blinded, randomized, 3-treatment, 6-period crossover bioequivalence study was conducted at the University of Cincinnati in individuals with a kidney (n = 35) or liver transplant (n = 36). Abbreviated New Drug Applications (ANDA) data that included manufacturing and healthy individual pharmacokinetic data for all generics were evaluated to select the 2 most disparate generics from innovator, and these were named Generic Hi and Generic Lo. During the 8-week study period, pharmacokinetic studies assessed the bioequivalence of Generic Hi and Generic Lo with the Innovator tacrolimus and with each other. Bioequivalence of the major tacrolimus metabolite was also assessed. All products fell within the US Food and Drug Administration (FDA) average bioequivalence (ABE) acceptance criteria of a 90% confidence interval contained within the confidence limits of 80.00% and 125.00%. Within-subject variability was similar for the area under the curve (AUC) (range 12.11–15.81) and the concentration maximum (Cmax) (range 17.96–24.72) for all products. The within-subject variability was utilized to calculate the scaled average bioequivalence (SCABE) 90% confidence interval. The calculated SCABE 90% confidence interval was 84.65%–118.13% and 80.00%–125.00% for AUC and Cmax, respectively. The more stringent SCABE acceptance criteria were met for all product comparisons for AUC and Cmax in both individuals with a kidney transplant and those with a liver transplant. European Medicines Agency (EMA) acceptance criteria for narrow therapeutic index drugs were also met, with the only exception being in the case of Brand versus Generic Lo, in which the upper limits of the 90% confidence intervals were 111.30% (kidney) and 112.12% (liver). These were only slightly above the upper EMA acceptance criteria limit for an AUC of 111.11%. 
SCABE criteria were also met for AUC of the major tacrolimus metabolite, 13-O-desmethyl tacrolimus, although it failed the EMA criterion. No acute rejections were observed, and there were no differences in renal function across all individuals or in liver function among individuals with a liver transplant (Tukey honest significant difference [HSD] test for multiple comparisons). Fifty-two percent and 65% of individuals with a kidney or liver transplant, respectively, reported an adverse event. The exact McNemar test for paired categorical data, with adjustment for multiple comparisons, was used to compare adverse event rates among the products. No statistically significant differences among any pairs of products were found for any adverse event code or for adverse events overall. Limitations of this study include that the observations were made under strictly controlled conditions that did not capture the impact of nonadherence or food intake on possible pharmacokinetic differences. Generic Hi and Generic Lo were selected on the basis of bioequivalence data in healthy volunteers because pharmacokinetic data in transplant recipients were not available for all products. The safety data should be interpreted in light of the small number of participants and the short observation periods. Lastly, only the 1 mg tacrolimus strength was studied.
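The average bioequivalence (ABE) criterion applied above requires the 90% confidence interval for the test/reference geometric mean ratio to lie entirely within 80.00%–125.00%. The sketch below is a minimal illustration of that check, assuming per-subject log-ratios of exposure (AUC or Cmax) as input; it uses a normal approximation rather than the t-distribution, omits the replicate-design scaling used for SCABE, and the data are illustrative, not from the study.

```python
import math

def abe_90ci(log_ratios):
    """90% CI (as percentages) for the geometric mean test/reference
    ratio, from per-subject log(test/reference) exposure ratios.
    Normal approximation for simplicity (a real analysis would use
    the t-distribution and a mixed-effects model)."""
    n = len(log_ratios)
    mean = sum(log_ratios) / n
    var = sum((x - mean) ** 2 for x in log_ratios) / (n - 1)
    se = math.sqrt(var / n)
    z = 1.645  # two-sided 90% normal quantile
    return 100 * math.exp(mean - z * se), 100 * math.exp(mean + z * se)

def meets_abe(lo, hi):
    """FDA average bioequivalence window: CI within 80.00%-125.00%."""
    return lo >= 80.00 and hi <= 125.00

# Illustrative log-ratios for 6 hypothetical subjects
lo, hi = abe_90ci([0.01, -0.02, 0.03, 0.0, -0.01, 0.02])
print(meets_abe(lo, hi))  # True
```

A tighter within-subject variability narrows the interval, which is why the SCABE approach scales the acceptance limits to the observed variability.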

Conclusions

Using an innovative, controlled bioequivalence study design, we observed equivalence between tacrolimus innovator and 2 generic products as well as between 2 generic products in individuals after kidney or liver transplantation following current FDA bioequivalence metrics. These results support the position that bioequivalence for the narrow therapeutic index drug tacrolimus translates from healthy volunteers to individuals receiving a kidney or liver transplant and provides evidence that generic products that are bioequivalent with the innovator product are also bioequivalent to each other.

Trial registration

ClinicalTrials.gov NCT01889758.

Association between the 2012 Health and Social Care Act and specialist visits and hospitalisations in England: A controlled interrupted time series analysis

Tue, 14/11/2017 - 23:00

by James A. Lopez Bernal, Christine Y. Lu, Antonio Gasparrini, Steven Cummins, J. Frank Wharam, Steven B. Soumerai

Background

The 2012 Health and Social Care Act (HSCA) in England introduced one of the largest healthcare reforms in the history of the National Health Service (NHS). It gave control of £67 billion of the NHS budget for secondary care to general practitioner (GP)–led Clinical Commissioning Groups (CCGs). An expected outcome was that patient care would shift away from expensive hospital and specialist settings towards less expensive community-based models. However, there is little evidence for the effectiveness of this approach. In this study, we aimed to assess the association between the NHS reforms and hospital admissions and outpatient specialist visits.

Methods and findings

We conducted a controlled interrupted time series analysis to examine rates of outpatient specialist visits and inpatient hospitalisations before and after the implementation of the HSCA. We used national routine hospital administrative data (Hospital Episode Statistics) on all NHS outpatient specialist visits and inpatient hospital admissions in England between 2007 and 2015 (with a mean of 26.8 million new outpatient visits and 14.9 million inpatient admissions per year). As a control series, we used equivalent data on hospital attendances in Scotland. Primary outcomes were: total, elective, and emergency hospitalisations, and total and GP-referred specialist visits. Both countries had stable trends in all outcomes at baseline. In England, after the policy, there was a 1.1% (95% CI 0.7%–1.5%; p < 0.001) increase in total specialist visits per quarter and a 1.6% (95% CI 1.2%–2.0%; p < 0.001) increase in GP-referred specialist visits per quarter, equivalent to 12.7% (647,000 over the 5,105,000 expected) and 19.1% (507,000 over the 2,658,000 expected) more visits per quarter by the end of 2015, respectively. In Scotland, there was no change in specialist visits. Neither country experienced a change in trends in hospitalisations: changes in slope for total, elective, and emergency hospitalisations in England were −0.2% (95% CI −0.6% to 0.2%; p = 0.257), −0.2% (95% CI −0.6% to 0.1%; p = 0.235), and 0.0% (95% CI −0.5% to 0.4%; p = 0.866) per quarter. We are unable to exclude confounding due to other events occurring around the time of the policy. However, we limited the likelihood of such confounding by including relevant control series, in which no changes were seen.
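The reported per-quarter slope changes compound over the post-policy period to give the cumulative excess figures above. The following is a minimal sketch, assuming roughly 11 post-policy quarters (HSCA implementation in April 2013 through the end of 2015 — an assumed count, not stated in the abstract).

```python
def cumulative_excess(quarterly_change, quarters):
    """Relative excess implied by a compounding per-quarter trend
    change (e.g. 0.016 = +1.6% per quarter)."""
    return (1 + quarterly_change) ** quarters - 1

# GP-referred specialist visits: +1.6%/quarter over ~11 quarters
print(round(100 * cumulative_excess(0.016, 11), 1))  # 19.1 (% excess)
# Total specialist visits: +1.1%/quarter over ~11 quarters
print(round(100 * cumulative_excess(0.011, 11), 1))  # 12.8 (% excess)
```

The second figure is slightly above the reported 12.7%, which plausibly reflects the exact quarter count or the fitted segmented-regression model rather than this crude compounding.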

Conclusions

Our findings suggest that giving control of healthcare budgets to GP-led CCGs was not associated with a reduction in overall hospitalisations and was associated with an increase in specialist visits.

Evidence-based restructuring of health and social care

Tue, 14/11/2017 - 23:00

by Aziz Sheikh

In this Perspective, Aziz Sheikh discusses research to evaluate health policy changes in the provision of care, commenting on a study by James Lopez Bernal and colleagues that examined specialist-dominated hospital care versus community-based care in the United Kingdom.

Perinatal mortality associated with induction of labour versus expectant management in nulliparous women aged 35 years or over: An English national cohort study

Tue, 14/11/2017 - 23:00

by Hannah E. Knight, David A. Cromwell, Ipek Gurol-Urganci, Katie Harron, Jan H. van der Meulen, Gordon C. S. Smith

Background

A recent randomised controlled trial (RCT) demonstrated that induction of labour at 39 weeks of gestational age has no short-term adverse effect on the mother or infant among nulliparous women aged ≥35 years. However, the trial was underpowered to address the effect of routine induction of labour on the risk of perinatal death. We aimed to determine the association between induction of labour at ≥39 weeks and the risk of perinatal mortality among nulliparous women aged ≥35 years.

Methods and findings

We used English Hospital Episode Statistics (HES) data collected between April 2009 and March 2014 to compare perinatal mortality between induction of labour at 39, 40, and 41 weeks of gestation and expectant management (continuation of pregnancy to either spontaneous labour, induction of labour, or caesarean section at a later gestation). Analysis was by multivariable Poisson regression with adjustment for maternal characteristics and pregnancy-related conditions. Among the cohort of 77,327 nulliparous women aged 35 to 50 years delivering a singleton infant, 33.1% had labour induced: these women tended to be older and more likely to have medical complications of pregnancy, and their infants were more likely to be small for gestational age. Induction of labour at 40 weeks (compared with expectant management) was associated with a lower risk of in-hospital perinatal death (0.08% versus 0.26%; adjusted risk ratio [adjRR] 0.33; 95% CI 0.13–0.80, P = 0.015) and meconium aspiration syndrome (0.44% versus 0.86%; adjRR 0.52; 95% CI 0.35–0.78, P = 0.002). Induction at 40 weeks was also associated with a slightly increased risk of instrumental vaginal delivery (adjRR 1.06; 95% CI 1.01–1.11, P = 0.020) and emergency caesarean section (adjRR 1.05; 95% CI 1.01–1.09, P = 0.019). The number needed to treat (NNT) analysis indicated that 562 (95% CI 366–1,210) inductions of labour at 40 weeks would be required to prevent 1 perinatal death. Limitations of the study include the reliance on observational data in which gestational age is recorded in weeks rather than days. There is also the potential for unmeasured confounders and under-recording of induction of labour or perinatal death in the dataset.
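The NNT above is the reciprocal of the absolute risk reduction. A crude sketch using the unadjusted perinatal death risks (0.26% under expectant management versus 0.08% with induction at 40 weeks) comes close to the reported NNT of 562, which is derived from the adjusted model.

```python
def number_needed_to_treat(risk_control, risk_treated):
    """NNT = 1 / absolute risk reduction (risks as proportions)."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1 / arr

# Crude NNT from the unadjusted perinatal death risks; the paper's
# 562 comes from the covariate-adjusted model.
print(round(number_needed_to_treat(0.0026, 0.0008)))  # 556
```

The wide confidence interval reported for the NNT (366–1,210) reflects the rarity of perinatal death, which makes the absolute risk reduction imprecise.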

Conclusions

Bringing forward the routine offer of induction of labour from the current recommendation of 41–42 weeks to 40 weeks of gestation in nulliparous women aged ≥35 years may reduce overall rates of perinatal death.

Validity of a minimally invasive autopsy for cause of death determination in maternal deaths in Mozambique: An observational study

Wed, 08/11/2017 - 23:00

by Paola Castillo, Juan Carlos Hurtado, Miguel J. Martínez, Dercio Jordao, Lucilia Lovane, Mamudo R. Ismail, Carla Carrilho, Cesaltina Lorenzoni, Fabiola Fernandes, Sibone Mocumbi, Zara Onila Jaze, Flora Mabota, Anelsio Cossa, Inacio Mandomando, Pau Cisteró, Alfredo Mayor, Mireia Navarro, Isaac Casas, Jordi Vila, Maria Maixenchs, Khátia Munguambe, Ariadna Sanz, Llorenç Quintó, Eusebio Macete, Pedro Alonso, Quique Bassat, Jaume Ordi, Clara Menéndez

Background

Despite global health efforts to reduce maternal mortality, rates continue to be unacceptably high in large parts of the world. Feasible, acceptable, and accurate postmortem sampling methods could provide the necessary evidence to improve the understanding of the real causes of maternal mortality, guiding the design of interventions to reduce this burden.

Methods and findings

The validity of a minimally invasive autopsy (MIA) method in determining the cause of death was assessed in an observational study of 57 maternal deaths by comparing the results of the MIA with those of the gold standard (complete diagnostic autopsy [CDA], which includes any available clinical information). Concordance between the MIA and the gold standard diagnostic categories was assessed by the kappa statistic, and the sensitivity, specificity, and positive and negative predictive values, with their 95% confidence intervals (95% CIs), were estimated for each diagnostic category. The main limitation of the study is that both the MIA and the CDA involve some degree of subjective interpretation in the attribution of cause of death. A cause of death was identified in the CDA in 98% (56/57) of cases, with indirect obstetric conditions accounting for 32 (56%) deaths and direct obstetric complications for 24 (42%) deaths. Nonobstetric infectious diseases (22/32, 69%) and obstetric hemorrhage (13/24, 54%) were the most common causes of death among indirect and direct obstetric conditions, respectively. Thirty-six (63%) women were HIV positive, and HIV-related conditions accounted for 16 (28%) of all deaths. Cerebral malaria caused 4 (7%) deaths. The MIA identified a cause of death in 86% of women. The overall concordance of the MIA with the CDA was moderate (kappa = 0.48, 95% CI: 0.31–0.66). The two methods agreed in 68% of the diagnostic categories, and agreement was higher for indirect (91%) than for direct obstetric causes (38%). All HIV infections and cerebral malaria cases were identified in the MIA. The main limitation of the technique is its relatively low performance for identifying obstetric causes of death in the absence of clinical information.
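The concordance measure above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance from the marginal totals. The following is a minimal sketch for a square MIA-versus-CDA agreement table; the example table is made up for illustration and is not study data.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (list of rows);
    rows index one method's categories, columns the other's."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n       # observed agreement
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Made-up 2-category illustration (not the study's diagnostic table):
print(round(cohens_kappa([[20, 5], [5, 20]]), 2))  # 0.6
```

A kappa of 0.48, as reported, sits in the conventionally "moderate" agreement band, consistent with the abstract's interpretation.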

Conclusions

The MIA procedure could be a valuable tool to determine the causes of maternal death, especially for indirect obstetric conditions, most of which are infectious diseases. The information provided by the MIA could help to prioritize interventions to reduce maternal mortality and to monitor progress towards achieving global health targets.

Cardiovascular disease (CVD) and chronic kidney disease (CKD) event rates in HIV-positive persons at high predicted CVD and CKD risk: A prospective analysis of the D:A:D observational study

Tue, 07/11/2017 - 23:00

by Mark A. Boyd, Amanda Mocroft, Lene Ryom, Antonella d’Arminio Monforte, Caroline Sabin, Wafaa M. El-Sadr, Camilla Ingrid Hatleberg, Stephane De Wit, Rainer Weber, Eric Fontas, Andrew Phillips, Fabrice Bonnet, Peter Reiss, Jens Lundgren, Matthew Law

Background

The Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study has developed predictive risk scores for cardiovascular disease (CVD) and chronic kidney disease (CKD, defined as confirmed estimated glomerular filtration rate [eGFR] ≤ 60 ml/min/1.73 m2) events in HIV-positive people. We hypothesized that participants in D:A:D at high (>5%) predicted risk for both CVD and CKD would be at even greater risk for CVD and CKD events.

Methods and findings

We included all participants with complete risk factor (covariate) data and baseline eGFR > 60 ml/min/1.73 m2 to calculate CVD and CKD risk scores; CKD events were defined by a confirmed (>3 months apart) eGFR < 60 ml/min/1.73 m2 thereafter. We calculated CVD and CKD event rates by predicted 5-year CVD and CKD risk groups (≤1%, >1%–5%, >5%) and fitted Poisson models to assess whether CVD and CKD risk group effects were multiplicative. A total of 27,215 participants contributed 202,034 person-years of follow-up: 74% were male, with a median (IQR) age of 42 (36, 49) years and a median (IQR) baseline year of follow-up of 2005 (2004, 2008). D:A:D risk equations predicted 3,560 (13.1%) participants at high CVD risk, 4,996 (18.4%) at high CKD risk, and 1,585 (5.8%) at both high CKD and high CVD risk. CVD and CKD event rates by predicted risk group were multiplicative. Participants at high CVD risk had a 5.63-fold (95% CI 4.47, 7.09, p < 0.001) increase in CKD events compared to those at low risk; participants at high CKD risk had a 1.31-fold (95% CI 1.09, 1.56, p = 0.005) increase in CVD events compared to those at low risk. Participants’ CVD and CKD risk groups had multiplicative predictive effects, with no evidence of an interaction (p = 0.329 and p = 0.291 for CKD and CVD, respectively). The main study limitation is the difference in ascertainment between the clinically defined CVD endpoints and the laboratory-defined CKD endpoints.
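Under the multiplicative (no-interaction) Poisson model described above, the predicted event rate in the group at high risk for both outcomes is the baseline rate multiplied by both rate ratios. The following is a minimal sketch; the baseline rate and the second rate ratio in the example are placeholders, not study values.

```python
def rate_per_1000py(events, person_years):
    """Crude event rate per 1,000 person-years of follow-up."""
    return 1000 * events / person_years

def joint_rate_no_interaction(base_rate, rr_a, rr_b):
    """Multiplicative model: the doubly exposed group's rate is the
    baseline rate times the product of the two rate ratios."""
    return base_rate * rr_a * rr_b

# Hypothetical baseline CKD rate of 2.0/1,000 py, scaled by the
# reported 5.63-fold high-CVD-risk effect and a placeholder 2.0-fold
# second effect.
print(joint_rate_no_interaction(2.0, 5.63, 2.0))  # 22.52
```

The non-significant interaction terms reported (p = 0.329 and p = 0.291) indicate the data are compatible with exactly this kind of product-of-ratios prediction.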

Conclusions

We found that people at high predicted risk for both CVD and CKD have substantially greater risks for both CVD and CKD events compared with those at low predicted risk for both outcomes, and compared to those at high predicted risk for only CVD or CKD events. This suggests that CVD and CKD risk in HIV-positive persons should be assessed together. The results further encourage clinicians to prioritise addressing modifiable risks for CVD and CKD in HIV-positive people.

HIV prevalence and behavioral and psychosocial factors among transgender women and cisgender men who have sex with men in 8 African countries: A cross-sectional analysis

Tue, 07/11/2017 - 23:00

by Tonia Poteat, Benjamin Ackerman, Daouda Diouf, Nuha Ceesay, Tampose Mothopeng, Ky-Zerbo Odette, Seni Kouanda, Henri Gautier Ouedraogo, Anato Simplice, Abo Kouame, Zandile Mnisi, Gift Trapence, L. Leigh Ann van der Merwe, Vicente Jumbe, Stefan Baral

Introduction

Sub-Saharan Africa bears more than two-thirds of the worldwide burden of HIV; however, data among transgender women from the region are sparse. Transgender women across the world face significant vulnerability to HIV. This analysis aimed to assess HIV prevalence as well as psychosocial and behavioral drivers of HIV infection among transgender women compared with cisgender (non-transgender) men who have sex with men (cis-MSM) in 8 sub-Saharan African countries.

Methods and findings

Respondent-driven sampling targeted cis-MSM for enrollment. Data collection took place at 14 sites across 8 countries: Burkina Faso (January–August 2013), Côte d’Ivoire (March 2015–February 2016), The Gambia (July–December 2011), Lesotho (February–September 2014), Malawi (July 2011–March 2012), Senegal (February–November 2015), Swaziland (August–December 2011), and Togo (January–June 2013). Surveys gathered information on sexual orientation, gender identity, stigma, mental health, sexual behavior, and HIV testing. Rapid tests for HIV were conducted. Data were merged, and mixed effects logistic regression models were used to estimate relationships between gender identity and HIV infection. Among 4,586 participants assigned male sex at birth, 937 (20%) identified as transgender or female, and 3,649 were cis-MSM. The mean age of study participants was approximately 24 years, with no difference between transgender participants and cis-MSM. Compared to cis-MSM participants, transgender women were more likely to experience family exclusion (odds ratio [OR] 1.75, 95% CI 1.42–2.16, p < 0.001), rape (OR 1.95, 95% CI 1.63–2.36, p < 0.001), and depressive symptoms (OR 1.30, 95% CI 1.12–1.52, p < 0.001). Transgender women were more likely to report condomless receptive anal sex in the prior 12 months (OR 2.44, 95% CI 2.05–2.90, p < 0.001) and to be currently living with HIV (OR 1.81, 95% CI 1.49–2.19, p < 0.001). Overall HIV prevalence was 25% (235/926) in transgender women and 14% (505/3,594) in cis-MSM. When adjusted for age, condomless receptive anal sex, depression, interpersonal stigma, law enforcement stigma, and violence, and the interaction of gender with condomless receptive anal sex, the odds of HIV infection for transgender women were 2.2 times greater than the odds for cis-MSM (95% CI 1.65–2.87, p < 0.001). Limitations of the study included sampling strategies tailored for cis-MSM and merging of datasets with non-identical survey instruments.
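The prevalences reported above yield a crude odds ratio that can be checked directly from the 2x2 table; it differs from the reported OR of 1.81, which comes from the mixed-effects models accounting for clustering by study site. A minimal sketch:

```python
def crude_odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# HIV status by gender identity, from the prevalences reported above:
# transgender women 235/926 positive, cis-MSM 505/3,594 positive.
or_crude = crude_odds_ratio(235, 926 - 235, 505, 3594 - 505)
print(round(or_crude, 2))  # 2.08
```

That the crude value (about 2.08) exceeds the model-based 1.81 is unsurprising, since the mixed-effects estimate absorbs between-site variation in prevalence.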

Conclusions

In this study in sub-Saharan Africa, we found that HIV burden and stigma differed between transgender women and cis-MSM, indicating a need to address gender diversity within HIV research and programs.

Reaching global HIV/AIDS goals: What got us here, won't get us there

Tue, 07/11/2017 - 23:00

by Wafaa M. El-Sadr, Katherine Harripersaud, Miriam Rabkin

In a Perspective, Wafaa El-Sadr and colleagues discuss tailored approaches to treatment and prevention of HIV infection.