PLoS Medicine

A Peer-Reviewed Open-Access Journal

Postmenopausal hormone therapy and risk of stroke: A pooled analysis of data from population-based cohort studies

Fri, 17/11/2017 - 23:00

by Germán D. Carrasquilla, Paolo Frumento, Anita Berglund, Christer Borgfeldt, Matteo Bottai, Chiara Chiavenna, Mats Eliasson, Gunnar Engström, Göran Hallmans, Jan-Håkan Jansson, Patrik K. Magnusson, Peter M. Nilsson, Nancy L. Pedersen, Alicja Wolk, Karin Leander

Background

Recent research indicates a favourable influence of postmenopausal hormone therapy (HT) if initiated early, but not late, on subclinical atherosclerosis. However, the clinical relevance of timing of HT initiation for hard end points such as stroke remains to be determined. Further, no previous research has considered the timing of initiation of HT in relation to haemorrhagic stroke risk. The importance of the route of administration, type, active ingredient, and duration of HT for stroke risk is also unclear. We aimed to assess the association between HT and risk of stroke, considering the timing of initiation, route of administration, type, active ingredient, and duration of HT.

Methods and findings

Data on HT use reported by the participants in 5 population-based Swedish cohort studies, with baseline investigations performed during the period 1987–2002, were combined in this observational study. In total, 88,914 postmenopausal women who reported data on HT use and had no previous cardiovascular disease diagnosis were included. Incident events of stroke (ischaemic, haemorrhagic, or unspecified) and haemorrhagic stroke were identified from national population registers. Laplace regression was employed to assess crude and multivariable-adjusted associations between HT and stroke risk by estimating percentile differences (PDs) with 95% confidence intervals (CIs). The fifth and first PDs were calculated for stroke and haemorrhagic stroke, respectively. Crude models were adjusted for age at baseline only. The final adjusted models included age at baseline, level of education, smoking status, body mass index, level of physical activity, and age at menopause onset. Additional variables evaluated for potential confounding were type of menopause, parity, use of oral contraceptives, alcohol consumption, hypertension, dyslipidaemia, diabetes, family history of cardiovascular disease, and cohort. During a median follow-up of 14.3 years, 6,371 first-time stroke events were recorded; of these, 1,080 were haemorrhagic. Following multivariable adjustment, early initiation (<5 years since menopause onset) of HT was associated with a longer stroke-free period than never use (fifth PD, 1.00 years; 95% CI 0.42 to 1.57), but there was no significant extension to the time period free of haemorrhagic stroke (first PD, 1.52 years; 95% CI −0.32 to 3.37). When considering timing as a continuous variable, the stroke-free and the haemorrhagic stroke-free periods were maximal if HT was initiated approximately 0–5 years from the onset of menopause. 
If single conjugated equine oestrogen HT was used, late initiation of HT was associated with a shorter stroke-free (fifth PD, −4.41 years; 95% CI −7.14 to −1.68) and haemorrhagic stroke-free (first PD, −9.51 years; 95% CI −12.77 to −6.24) period than never use. Combined HT when initiated late was significantly associated with a shorter haemorrhagic stroke-free period (first PD, −1.97 years; 95% CI −3.81 to −0.13), but not with a shorter stroke-free period (fifth PD, −1.21 years; 95% CI −3.11 to 0.68) than never use. Given the observational nature of this study, the possibility of uncontrolled confounding cannot be excluded. Further, immortal time bias, also related to the observational design, cannot be ruled out.

Conclusions

When initiated early in relation to menopause onset, HT was not associated with increased risk of incident stroke, regardless of the route of administration, type of HT, active ingredient, or duration. Generally, these findings also held for haemorrhagic stroke. Our results suggest that initiation of HT 0–5 years after menopause onset, as compared to never use, is associated with a decreased risk of stroke and haemorrhagic stroke. Late initiation was associated with elevated risks of stroke and haemorrhagic stroke when conjugated equine oestrogen was used as single therapy. Late initiation of combined HT was associated with increased haemorrhagic stroke risk.

Core Outcome Set-STAndards for Development: The COS-STAD recommendations

Thu, 16/11/2017 - 23:00

by Jamie J. Kirkham, Katherine Davis, Douglas G. Altman, Jane M. Blazeby, Mike Clarke, Sean Tunis, Paula R. Williamson

Background

The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]).

Methods and findings

An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1–9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD comprises 11 minimum standards recommended for the design of all COS development projects. The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process.
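The Delphi consensus rule described above can be sketched in a few lines (an illustration only, not the authors' analysis code; the group names and scores below are hypothetical):

```python
# Illustrative sketch of the COS-STAD Delphi consensus rule: an item
# enters the minimum standards only if at least 70% of the voting
# participants in EVERY stakeholder group rate it 7-9 on the 1-9 scale.

def group_support(scores):
    """Fraction of voters in one group scoring the item 7-9."""
    return sum(1 for s in scores if 7 <= s <= 9) / len(scores)

def reaches_consensus(votes_by_group, threshold=0.70):
    """votes_by_group maps a stakeholder group name to its 1-9 scores."""
    return all(group_support(v) >= threshold for v in votes_by_group.values())

# Hypothetical example: trialists support the item strongly enough,
# but patient representatives do not, so consensus fails.
votes = {
    "clinical trialists": [9, 8, 7, 7, 9, 6, 8, 7, 9, 8],       # 9/10 = 90%
    "patient representatives": [9, 8, 5, 6, 4, 7, 6, 5, 6, 3],  # 3/10 = 30%
}
print(reaches_consensus(votes))  # False
```

Because the 70% threshold must hold within each stakeholder group separately, strong support from one group cannot outvote weak support from another.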

Conclusions

The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using reasonable methods.

Prospects for passive immunity to prevent HIV infection

Tue, 14/11/2017 - 23:00

by Lynn Morris, Nonhlanhla N. Mkhize

In a Perspective, Lynn Morris and Nonhlanhla Mkhize discuss the prospects for broadly neutralizing antibodies to be used in preventing HIV infection.

Safety, pharmacokinetics, and immunological activities of multiple intravenous or subcutaneous doses of an anti-HIV monoclonal antibody, VRC01, administered to HIV-uninfected adults: Results of a phase 1 randomized trial

Tue, 14/11/2017 - 23:00

by Kenneth H. Mayer, Kelly E. Seaton, Yunda Huang, Nicole Grunenberg, Abby Isaacs, Mary Allen, Julie E. Ledgerwood, Ian Frank, Magdalena E. Sobieszczyk, Lindsey R. Baden, Benigno Rodriguez, Hong Van Tieu, Georgia D. Tomaras, Aaron Deal, Derrick Goodman, Robert T. Bailer, Guido Ferrari, Ryan Jensen, John Hural, Barney S. Graham, John R. Mascola, Lawrence Corey, David C. Montefiori, on behalf of the HVTN 104 Protocol Team, and the NIAID HIV Vaccine Trials Network

Background

VRC01 is an HIV-1 CD4 binding site broadly neutralizing antibody (bnAb) that is active against a broad range of HIV-1 primary isolates in vitro and protects against simian-human immunodeficiency virus (SHIV) when delivered parenterally to nonhuman primates. It has been shown to be safe and well tolerated after short-term administration in humans; however, its clinical and functional activity after longer-term administration has not been previously assessed.

Methods and findings

HIV Vaccine Trials Network (HVTN) 104 was designed to evaluate the safety and tolerability of multiple doses of VRC01 administered either subcutaneously or by intravenous (IV) infusion and to assess the pharmacokinetics and in vitro immunologic activity of the different dosing regimens. Additionally, this study aimed to assess the effect that the human body has on the functional activities of VRC01 as measured by several in vitro assays. Eighty-eight healthy, HIV-uninfected, low-risk participants were enrolled in 6 United States clinical research sites affiliated with the HVTN between September 9, 2014, and July 15, 2015. The median age of enrollees was 27 years (range, 18–50); 52% were White (non-Hispanic), 25% were Black (non-Hispanic), 11% were Hispanic, and 11% were non-Hispanic people of diverse origins. Participants were randomized to receive the following: a 40 mg/kg IV VRC01 loading dose followed by five 20 mg/kg IV VRC01 doses every 4 weeks (treatment group 1 [T1], n = 20); eleven doses every 2 weeks of either 5 mg/kg subcutaneous (SC) VRC01 (treatment group 3 [T3], n = 20) or placebo (placebo group 3 [P3], n = 4); or three 40 mg/kg IV VRC01 doses every 8 weeks (treatment group 2 [T2], n = 20). Treatment groups T4 and T5 (n = 12 each) received three 10 or 30 mg/kg IV VRC01 doses every 8 weeks, respectively. Participants were followed for 32 weeks after their first VRC01 administration and received a total of 249 IV infusions and 208 SC injections, with no serious adverse events, dose-limiting toxicities, or evidence of anti-VRC01 antibodies observed. Serum VRC01 levels were detected through 12 weeks after final administration in all participants who received all scheduled doses. Mean peak serum VRC01 levels of 1,177 μg/ml (95% CI: 1,033, 1,340) and 420 μg/ml (95% CI: 356, 494) were achieved 1 hour after the IV infusion series of 30 mg/kg and 10 mg/kg doses, respectively.
Mean trough levels at week 24 in the IV infusion series of 30 mg/kg and 10 mg/kg doses, respectively, were 16 μg/ml (95% CI: 10, 27) and 6 μg/ml (95% CI: 5, 9), concentrations that neutralize a majority of circulating strains in vitro (50% inhibitory concentration [IC50] > 5 μg/ml). Post-infusion/injection serum VRC01 retained expected functional activity (virus neutralization, antibody-dependent cellular cytotoxicity, phagocytosis, and virion capture). The limitations of this study include the relatively small sample size of each VRC01 administration regimen and missing data from participants who were unable to complete all study visits.

Conclusions

VRC01 administered as either an IV infusion (10–40 mg/kg) given monthly or bimonthly, or as an SC injection (5 mg/kg) every 2 weeks, was found to be safe and well tolerated. In addition to maintaining drug concentrations consistent with neutralization of the majority of tested HIV strains, VRC01 concentrations from participants’ sera were found to avidly capture HIV virions and to mediate antibody-dependent cellular phagocytosis, suggesting a range of anti-HIV immunological activities, warranting further clinical trials.

Trial registration

ClinicalTrials.gov NCT02165267

Treatment guidelines and early loss from care for people living with HIV in Cape Town, South Africa: A retrospective cohort study

Tue, 14/11/2017 - 23:00

by Ingrid T. Katz, Richard Kaplan, Garrett Fitzmaurice, Dominick Leone, David R. Bangsberg, Linda-Gail Bekker, Catherine Orrell

Background

South Africa has undergone multiple expansions in antiretroviral therapy (ART) eligibility from an initial CD4+ threshold of ≤200 cells/μl to providing ART for all people living with HIV (PLWH) as of September 2016. We evaluated the association of programmatic changes in ART eligibility with loss from care, both prior to ART initiation and within the first 16 weeks of starting treatment, during a period of programmatic expansion to ART treatment at CD4+ ≤ 350 cells/μl.

Methods and findings

We performed a retrospective cohort study of 4,025 treatment-eligible, non-pregnant PLWH accessing care in a community health center in Gugulethu Township affiliated with the Desmond Tutu HIV Centre in Cape Town. The median age of participants was 34 years (IQR 28–41 years), almost 62% were female, and the median CD4+ count was 173 cells/μl (IQR 92–254 cells/μl). Participants were stratified into 2 cohorts: an early cohort, enrolled into care at the health center from 1 January 2009 to 31 August 2011, when guidelines mandated that ART initiation required CD4+ ≤ 200 cells/μl, pregnancy, advanced clinical symptoms (World Health Organization [WHO] stage 4), or comorbidity (active tuberculosis); and a later cohort, enrolled into care from 1 September 2011 to 31 December 2013, when the treatment threshold had been expanded to CD4+ ≤ 350 cells/μl. Demographic and clinical factors were compared before and after the policy change using chi-squared tests to identify potentially confounding covariates, and logistic regression models were used to estimate the risk of pre-treatment (pre-ART) loss from care and early loss within the first 16 weeks on treatment, adjusting for age, baseline CD4+, and WHO stage. Compared with participants in the later cohort, participants in the earlier cohort had significantly more advanced disease: median CD4+ 146 cells/μl versus 214 cells/μl (p < 0.001), 61.1% WHO stage 3/4 disease versus 42.8% (p < 0.001), and pre-ART mortality of 34.2% versus 16.7% (p < 0.001). In total, 385 ART-eligible PLWH (9.6%) failed to initiate ART, of whom 25.7% died before ever starting treatment. Of the 3,640 people who started treatment, 58 (1.6%) died within the first 16 weeks in care, and an additional 644 (17.7%) were lost from care within 16 weeks of starting ART. PLWH who did start treatment in the later cohort were significantly more likely to discontinue care in <16 weeks (19.8% versus 15.8%, p = 0.002). 
After controlling for baseline CD4+, WHO stage, and age, this effect remained significant (adjusted odds ratio [aOR] = 1.30, 95% CI 1.09–1.55). As such, it remains unclear if early attrition from care was due to a “healthy cohort” effect or to overcrowding as programs expanded to accommodate the broader guidelines for treatment. Our findings were limited by a lack of generalizability (given that these data were from a single high-volume site where testing and treatment were available) and an inability to formally investigate the effect of crowding on the main outcome.
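The adjusted odds ratio above can be sanity-checked against the reported crude proportions (a back-of-the-envelope illustration, not the authors' analysis, which additionally adjusted for baseline CD4+, WHO stage, and age):

```python
# Crude odds ratio for early discontinuation, later vs. earlier cohort,
# computed from the reported proportions (19.8% vs. 15.8%). The paper's
# adjusted estimate is aOR = 1.30 (95% CI 1.09-1.55); the crude value
# lands nearby, as expected.

def odds(p):
    return p / (1 - p)

p_later, p_earlier = 0.198, 0.158
crude_or = odds(p_later) / odds(p_earlier)
print(round(crude_or, 2))  # 1.32
```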

Conclusions

Over one-quarter of this ART-eligible cohort did not achieve the long-term benefits of treatment due to early mortality, ART non-initiation, or early ART discontinuation. Those who started treatment in the later cohort appeared to be more likely to discontinue care early, and this outcome appeared to be independent of CD4+ count or WHO stage. Future interventions should focus on those most at risk for early loss from care as programs continue to expand in South Africa.

A combination intervention strategy to improve linkage to and retention in HIV care following diagnosis in Mozambique: A cluster-randomized study

Tue, 14/11/2017 - 23:00

by Batya Elul, Matthew R. Lamb, Maria Lahuerta, Fatima Abacassamo, Laurence Ahoua, Stephanie A. Kujawski, Maria Tomo, Ilesh Jani

Background

Concerning gaps in the HIV care continuum compromise individual and population health. We evaluated a combination intervention strategy (CIS) targeting prevalent barriers to timely linkage and sustained retention in HIV care in Mozambique.

Methods and findings

In this cluster-randomized trial, 10 primary health facilities in the city of Maputo and Inhambane Province were randomly assigned to provide the CIS or the standard of care (SOC). The CIS included point-of-care CD4 testing at the time of diagnosis, accelerated ART initiation, and short message service (SMS) health messages and appointment reminders. A pre–post intervention 2-sample design was nested within the CIS arm to assess the effectiveness of CIS+, an enhanced version of the CIS that additionally included conditional non-cash financial incentives for linkage and retention. The primary outcome was a combined outcome of linkage to care within 1 month and retention at 12 months after diagnosis. From April 22, 2013, to June 30, 2015, we enrolled 2,004 out of 5,327 adults ≥18 years of age diagnosed with HIV in the voluntary counseling and testing clinics of participating health facilities: 744 (37%) in the CIS group, 493 (25%) in the CIS+ group, and 767 (38%) in the SOC group. Fifty-seven percent of the CIS group achieved the primary outcome versus 35% in the SOC group (relative risk [RR], CIS versus SOC: 1.58, 95% CI 1.05–2.39). Eighty-nine percent of the CIS group linked to care on the day of diagnosis versus 16% of the SOC group (RR, CIS versus SOC: 9.13, 95% CI 1.65–50.40). There was no significant benefit of adding financial incentives to the CIS in terms of the combined outcome (55% of the CIS+ group achieved the primary outcome; RR, CIS+ versus CIS: 0.96, 95% CI 0.81–1.16). Key limitations include the use of existing medical records to assess outcomes, the inability to isolate the effect of each component of the CIS, non-concurrent enrollment of the CIS+ group, and exclusion of many patients newly diagnosed with HIV.

Conclusions

The CIS showed promise for making much needed gains in the HIV care continuum in our study, particularly in the critical first step of timely linkage to care following diagnosis.

Trial registration

ClinicalTrials.gov NCT01930084

Virological response and resistance among HIV-infected children receiving long-term antiretroviral therapy without virological monitoring in Uganda and Zimbabwe: Observational analyses within the randomised ARROW trial

Tue, 14/11/2017 - 23:00

by Alexander J. Szubert, Andrew J. Prendergast, Moira J. Spyer, Victor Musiime, Philippa Musoke, Mutsa Bwakura-Dangarembizi, Patricia Nahirya-Ntege, Margaret J. Thomason, Emmanuel Ndashimye, Immaculate Nkanya, Oscar Senfuma, Boniface Mudenge, Nigel Klein, Diana M. Gibb, A. Sarah Walker, the ARROW Trial Team

Background

Although WHO recommends viral load (VL) monitoring for those on antiretroviral therapy (ART), availability in low-income countries remains limited. We investigated long-term VL and resistance in HIV-infected children managed without real-time VL monitoring.

Methods and findings

In the ARROW factorial trial, 1,206 children initiating ART in Uganda and Zimbabwe between 15 March 2007 and 18 November 2008, at a median age of 6 years and with a median CD4% of 12%, were randomised to monitoring with or without 12-weekly CD4 counts and to receive 2 nucleoside reverse transcriptase inhibitors (2NRTI, mainly abacavir+lamivudine) with a non-nucleoside reverse transcriptase inhibitor (NNRTI) or 3 NRTIs as long-term ART. All children had VL assayed retrospectively after a median of 4 years on ART; those with >1,000 copies/ml were genotyped. Three hundred and sixteen children had VL and genotypes assayed longitudinally (at least every 24 weeks). Overall, 67 (6%) switched to second-line ART and 54 (4%) died. In children randomised to WHO-recommended 2NRTI+NNRTI long-term ART, 308/378 (81%) monitored with CD4 counts versus 297/375 (79%) without had VL <1,000 copies/ml at 4 years (difference = +2.3% [95% CI −3.4% to +8.0%]; P = 0.43), with no evidence of differences in intermediate/high-level resistance to 11 drugs. Among children with longitudinal VLs, only 5% of child-time post–week 24 was spent with persistent low-level viraemia (80–5,000 copies/ml) and 10% with VL rebound ≥5,000 copies/ml. No child resuppressed <80 copies/ml after confirmed VL rebound ≥5,000 copies/ml. A median of 1.0 (IQR 0.0–1.5) additional NRTI mutation accumulated over 2 years of rebound. Nineteen out of 48 (40%) VLs 1,000–5,000 copies/ml were immediately followed by resuppression <1,000 copies/ml, but only 17/155 (11%) VLs ≥5,000 copies/ml resuppressed (P < 0.0001). Main study limitations are that the analyses were exploratory and that treatment initiation used 2006 criteria, without pre-ART genotypes.

Conclusions

In this study, children receiving first-line ART in sub-Saharan Africa without real-time VL monitoring had good virological and resistance outcomes over 4 years, regardless of CD4 monitoring strategy. Many children with detectable low-level viraemia spontaneously resuppressed, highlighting the importance of confirming virological failure before switching to second-line therapy. Children experiencing rebound ≥5,000 copies/ml were much less likely to resuppress, but NRTI resistance increased only slowly. These results are relevant to the increasing numbers of HIV-infected children receiving first-line ART in sub-Saharan Africa with limited access to virological monitoring.

Trial registration

ISRCTN Registry, ISRCTN24791884

Bioequivalence of twice-daily oral tacrolimus in transplant recipients: More evidence for consensus?

Tue, 14/11/2017 - 23:00

by Simon Ball

In this Perspective on the clinical trial by Rita Alloway and colleagues, Simon Ball explains the benefits to healthcare systems and individual patients of the bioequivalence established between generic and brand-name formulations of an immunosuppressive drug in transplant recipients.

Bioequivalence between innovator and generic tacrolimus in liver and kidney transplant recipients: A randomized, crossover clinical trial

Tue, 14/11/2017 - 23:00

by Rita R. Alloway, Alexander A. Vinks, Tsuyoshi Fukuda, Tomoyuki Mizuno, Eileen C. King, Yuanshu Zou, Wenlei Jiang, E. Steve Woodle, Simon Tremblay, Jelena Klawitter, Jost Klawitter, Uwe Christians

Background

Although the generic drug approval process has a long-term successful track record, concerns remain for approval of narrow therapeutic index generic immunosuppressants, such as tacrolimus, in transplant recipients. Several professional transplant societies and publications have generated skepticism of the generic approval process. Three major areas of concern are that the pharmacokinetic properties of generic products and the innovator (that is, “brand”) product in healthy volunteers may not reflect those in transplant recipients, bioequivalence between generic and innovator may not ensure bioequivalence between generics, and high-risk patients may have specific bioequivalence concerns. Such concerns have been fueled by anecdotal observations and retrospective and uncontrolled published studies, while well-designed, controlled prospective studies testing the validity of the regulatory bioequivalence testing approach for narrow therapeutic index immunosuppressants in transplant recipients have been lacking. Thus, the present study prospectively assesses bioequivalence between innovator tacrolimus and 2 generics in individuals with a kidney or liver transplant.

Methods and findings

From December 2013 through October 2014, a prospective, replicate dosing, partially blinded, randomized, 3-treatment, 6-period crossover bioequivalence study was conducted at the University of Cincinnati in individuals with a kidney (n = 35) or liver transplant (n = 36). Abbreviated New Drug Application (ANDA) data that included manufacturing and healthy individual pharmacokinetic data for all generics were evaluated to select the 2 generics most disparate from the innovator, and these were named Generic Hi and Generic Lo. During the 8-week study period, pharmacokinetic studies assessed the bioequivalence of Generic Hi and Generic Lo with the innovator tacrolimus and with each other. Bioequivalence of the major tacrolimus metabolite was also assessed. All products fell within the US Food and Drug Administration (FDA) average bioequivalence (ABE) acceptance criteria of a 90% confidence interval contained within the confidence limits of 80.00% and 125.00%. Within-subject variability was similar for the area under the curve (AUC) (range 12.11–15.81) and the concentration maximum (Cmax) (range 17.96–24.72) for all products. The within-subject variability was utilized to calculate the scaled average bioequivalence (SCABE) acceptance limits, against which the 90% confidence intervals were compared. The calculated SCABE acceptance limits were 84.65%–118.13% and 80.00%–125.00% for AUC and Cmax, respectively. The more stringent SCABE acceptance criteria were met for all product comparisons for AUC and Cmax in both individuals with a kidney transplant and those with a liver transplant. European Medicines Agency (EMA) acceptance criteria for narrow therapeutic index drugs were also met, with the only exception being in the case of Brand versus Generic Lo, in which the upper limits of the 90% confidence intervals were 111.30% (kidney) and 112.12% (liver). These were only slightly above the upper EMA acceptance criteria limit for an AUC of 111.11%.
SCABE criteria were also met for the major tacrolimus metabolite 13-O-desmethyl tacrolimus for AUC, but it failed the EMA criterion. Using the Tukey honest significant difference (HSD) test for multiple comparisons, no acute rejections were observed, there were no differences in renal function across all individuals, and there were no differences in liver function among individuals with a liver transplant. Fifty-two percent and 65% of all individuals with a kidney or liver transplant, respectively, reported an adverse event. The Exact McNemar test for paired categorical data with adjustments for multiple comparisons was used to compare adverse event rates among the products. No statistically significant differences among any pairs of products were found for any adverse event code or for adverse events overall. Limitations of this study include that the observations were made under strictly controlled conditions that did not allow for the impact of nonadherence or feeding on the possible pharmacokinetic differences. Generic Hi and Lo were selected based upon bioequivalence data in healthy volunteers because no pharmacokinetic data in recipients were available for all products. The safety data should be interpreted in light of the small number of participants and the short observation periods. Lastly, only the 1 mg tacrolimus strength was utilized in this study.
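The acceptance windows described above reduce to a simple interval-containment check (an illustrative sketch, not the study's analysis code; the lower CI limit in the example is hypothetical, only the 111.30% upper limit comes from the abstract):

```python
# FDA average bioequivalence (ABE) requires the 90% CI of the
# test/reference pharmacokinetic ratio to lie within 80.00%-125.00%;
# the EMA narrow-therapeutic-index window for AUC is 90.00%-111.11%.

def within(ci, lo, hi):
    return lo <= ci[0] and ci[1] <= hi

def passes_abe(ci):
    return within(ci, 80.00, 125.00)

def passes_ema_nti(ci):
    return within(ci, 90.00, 111.11)

# Brand vs. Generic Lo AUC in kidney recipients: upper limit 111.30%
# per the abstract (lower limit here is hypothetical). It passes the
# FDA ABE window but just misses the stricter EMA window.
ci = (95.0, 111.30)
print(passes_abe(ci), passes_ema_nti(ci))  # True False
```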

Conclusions

Using an innovative, controlled bioequivalence study design, we observed equivalence between tacrolimus innovator and 2 generic products as well as between 2 generic products in individuals after kidney or liver transplantation following current FDA bioequivalence metrics. These results support the position that bioequivalence for the narrow therapeutic index drug tacrolimus translates from healthy volunteers to individuals receiving a kidney or liver transplant and provides evidence that generic products that are bioequivalent with the innovator product are also bioequivalent to each other.

Trial registration

ClinicalTrials.gov NCT01889758.

Association between the 2012 Health and Social Care Act and specialist visits and hospitalisations in England: A controlled interrupted time series analysis

Tue, 14/11/2017 - 23:00

by James A. Lopez Bernal, Christine Y. Lu, Antonio Gasparrini, Steven Cummins, J. Frank Wharham, Steven B. Soumerai

Background

The 2012 Health and Social Care Act (HSCA) in England led to one of the largest healthcare reforms in the history of the National Health Service (NHS). It gave control of £67 billion of the NHS budget for secondary care to general practitioner (GP)-led Clinical Commissioning Groups (CCGs). An expected outcome was that patient care would shift away from expensive hospital and specialist settings, towards less expensive community-based models. However, there is little evidence for the effectiveness of this approach. In this study, we aimed to assess the association between the NHS reforms and hospital admissions and outpatient specialist visits.

Methods and findings

We conducted a controlled interrupted time series analysis to examine rates of outpatient specialist visits and inpatient hospitalisations before and after the implementation of the HSCA. We used national routine hospital administrative data (Hospital Episode Statistics) on all NHS outpatient specialist visits and inpatient hospital admissions in England between 2007 and 2015 (with a mean of 26.8 million new outpatient visits and 14.9 million inpatient admissions per year). As a control series, we used equivalent data on hospital attendances in Scotland. Primary outcomes were: total, elective, and emergency hospitalisations, and total and GP-referred specialist visits. Both countries had stable trends in all outcomes at baseline. In England, after the policy, there was a 1.1% (95% CI 0.7%–1.5%; p < 0.001) increase in total specialist visits per quarter and a 1.6% increase in GP-referred specialist visits (95% CI 1.2%–2.0%; p < 0.001) per quarter, equivalent to 12.7% (647,000 over the 5,105,000 expected) and 19.1% (507,000 over the 2,658,000 expected) more visits per quarter by the end of 2015, respectively. In Scotland, there was no change in specialist visits. Neither country experienced a change in trends in hospitalisations: changes in slope for total, elective, and emergency hospitalisations were −0.2% (95% CI −0.6%–0.2%; p = 0.257), −0.2% (95% CI −0.6%–0.1%; p = 0.235), and 0.0% (95% CI −0.5%–0.4%; p = 0.866) per quarter in England. We are unable to exclude confounding due to other events occurring around the time of the policy. However, we limited the likelihood of such confounding by including relevant control series, in which no changes were seen.
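The core of an interrupted time series analysis is a segmented regression with level- and slope-change terms at the policy date. The sketch below is a minimal single-series illustration on simulated quarterly data, not the study's model (which also incorporated the Scottish control series); all numbers here are invented for the example:

```python
# Minimal segmented-regression sketch of an interrupted time series:
# fit y = b0 + b1*t + b2*post + b3*(t - t0)*post by ordinary least
# squares, where b3 is the change in slope after the intervention.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(36, dtype=float)      # 36 quarters of data
t0 = 20                             # policy introduced at quarter 20
post = (t >= t0).astype(float)

# Simulated outcome: shallow baseline trend, then an extra
# 1.1-per-quarter rise after the policy, plus noise.
y = 100 + 0.1 * t + 1.1 * (t - t0) * post + rng.normal(0, 0.3, t.size)

X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[3], 2))  # estimated slope change, close to the true 1.1
```

A controlled design like the study's adds the same segmented terms for the control series and tests whether the post-policy change differs between the two series.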

Conclusions

Our findings suggest that giving control of healthcare budgets to GP-led CCGs was not associated with a reduction in overall hospitalisations and was associated with an increase in specialist visits.

Evidence-based restructuring of health and social care

Tue, 14/11/2017 - 23:00

by Aziz Sheikh

In this Perspective, Aziz Sheikh discusses research to evaluate health policy changes in the provision of care, commenting on a study by James Lopez Bernal and colleagues that examined specialist-dominated hospital care versus community-based care in the United Kingdom.

Perinatal mortality associated with induction of labour versus expectant management in nulliparous women aged 35 years or over: An English national cohort study

Tue, 14/11/2017 - 23:00

by Hannah E. Knight, David A. Cromwell, Ipek Gurol-Urganci, Katie Harron, Jan H. van der Meulen, Gordon C. S. Smith

Background

A recent randomised controlled trial (RCT) demonstrated that induction of labour at 39 weeks of gestational age has no short-term adverse effect on the mother or infant among nulliparous women aged ≥35 years. However, the trial was underpowered to address the effect of routine induction of labour on the risk of perinatal death. We aimed to determine the association between induction of labour at ≥39 weeks and the risk of perinatal mortality among nulliparous women aged ≥35 years.

Methods and findings

We used English Hospital Episode Statistics (HES) data collected between April 2009 and March 2014 to compare perinatal mortality between induction of labour at 39, 40, and 41 weeks of gestation and expectant management (continuation of pregnancy to either spontaneous labour, induction of labour, or caesarean section at a later gestation). Analysis was by multivariable Poisson regression with adjustment for maternal characteristics and pregnancy-related conditions. Among the cohort of 77,327 nulliparous women aged 35 to 50 years delivering a singleton infant, 33.1% had labour induced: these women tended to be older and more likely to have medical complications of pregnancy, and the infants were more likely to be small for gestational age. Induction of labour at 40 weeks (compared with expectant management) was associated with a lower risk of in-hospital perinatal death (0.08% versus 0.26%; adjusted risk ratio [adjRR] 0.33; 95% CI 0.13–0.80, P = 0.015) and meconium aspiration syndrome (0.44% versus 0.86%; adjRR 0.52; 95% CI 0.35–0.78, P = 0.002). Induction at 40 weeks was also associated with a slightly increased risk of instrumental vaginal delivery (adjRR 1.06; 95% CI 1.01–1.11, P = 0.020) and emergency caesarean section (adjRR 1.05; 95% CI 1.01–1.09, P = 0.019). The number needed to treat (NNT) analysis indicated that 562 (95% CI 366–1,210) inductions of labour at 40 weeks would be required to prevent 1 perinatal death. Limitations of the study include the reliance on observational data in which gestational age is recorded in weeks rather than days. There is also the potential for unmeasured confounders and under-recording of induction of labour or perinatal death in the dataset.
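The NNT figure follows directly from the absolute risk reduction (a back-of-the-envelope check, not the authors' adjusted analysis):

```python
# NNT = 1 / absolute risk reduction. With in-hospital perinatal death
# risks of 0.26% under expectant management and 0.08% with induction at
# 40 weeks, the crude NNT is ~556; the paper reports 562 from the
# adjusted model, so the crude value lands close by.

risk_expectant = 0.0026
risk_induction = 0.0008
nnt = 1 / (risk_expectant - risk_induction)
print(round(nnt))  # 556
```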

Conclusions

Bringing forward the routine offer of induction of labour from the current recommendation of 41–42 weeks to 40 weeks of gestation in nulliparous women aged ≥35 years may reduce overall rates of perinatal death.

Validity of a minimally invasive autopsy for cause of death determination in maternal deaths in Mozambique: An observational study

Wed, 08/11/2017 - 23:00

by Paola Castillo, Juan Carlos Hurtado, Miguel J. Martínez, Dercio Jordao, Lucilia Lovane, Mamudo R. Ismail, Carla Carrilho, Cesaltina Lorenzoni, Fabiola Fernandes, Sibone Mocumbi, Zara Onila Jaze, Flora Mabota, Anelsio Cossa, Inacio Mandomando, Pau Cisteró, Alfredo Mayor, Mireia Navarro, Isaac Casas, Jordi Vila, Maria Maixenchs, Khátia Munguambe, Ariadna Sanz, Llorenç Quintó, Eusebio Macete, Pedro Alonso, Quique Bassat, Jaume Ordi, Clara Menéndez

Background

Despite global health efforts to reduce maternal mortality, rates continue to be unacceptably high in large parts of the world. Feasible, acceptable, and accurate postmortem sampling methods could provide the necessary evidence to improve the understanding of the real causes of maternal mortality, guiding the design of interventions to reduce this burden.

Methods and findings

The validity of a minimally invasive autopsy (MIA) method in determining the cause of death was assessed in an observational study in 57 maternal deaths by comparing the results of the MIA with those of the gold standard (complete diagnostic autopsy [CDA], which includes any available clinical information). Concordance between the MIA and the gold standard diagnostic categories was assessed by the kappa statistic, and the sensitivity, specificity, positive and negative predictive values and their 95% confidence intervals (95% CI) to identify the categories of diagnoses were estimated. The main limitation of the study is that both the MIA and the CDA include some degree of subjective interpretation in the attribution of cause of death. A cause of death was identified in the CDA in 98% (56/57) of cases, with indirect obstetric conditions accounting for 32 (56%) deaths and direct obstetric complications for 24 (42%) deaths. Nonobstetric infectious diseases (22/32, 69%) and obstetric hemorrhage (13/24, 54%) were the most common causes of death among indirect and direct obstetric conditions, respectively. Thirty-six (63%) women were HIV positive, and HIV-related conditions accounted for 16 (28%) of all deaths. Cerebral malaria caused 4 (7%) deaths. The MIA identified a cause of death in 86% of women. The overall concordance of the MIA with the CDA was moderate (kappa = 0.48, 95% CI: 0.31–0.66). Both methods agreed in 68% of the diagnostic categories and the agreement was higher for indirect (91%) than for direct obstetric causes (38%). All HIV infections and cerebral malaria cases were identified in the MIA. The main limitation of the technique is its relatively low performance for identifying obstetric causes of death in the absence of clinical information.
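The agreement statistics used here (kappa, sensitivity, specificity) can all be derived from a 2×2 table comparing the MIA against the gold-standard CDA. A minimal sketch with hypothetical counts, not the study's data:

```python
# Cohen's kappa plus sensitivity/specificity from a 2x2 table comparing a test method
# (e.g. MIA) against a gold standard (e.g. CDA). Counts below are hypothetical.
def agreement_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return kappa, sensitivity, specificity

k, se, sp = agreement_stats(tp=20, fp=5, fn=10, tn=15)
print(round(k, 2), round(se, 2), round(sp, 2))  # 0.4 0.67 0.75
```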

Conclusions

The MIA procedure could be a valuable tool to determine the causes of maternal death, especially for indirect obstetric conditions, most of which are infectious diseases. The information provided by the MIA could help to prioritize interventions to reduce maternal mortality and to monitor progress towards achieving global health targets.

Cardiovascular disease (CVD) and chronic kidney disease (CKD) event rates in HIV-positive persons at high predicted CVD and CKD risk: A prospective analysis of the D:A:D observational study

Tue, 07/11/2017 - 23:00

by Mark A. Boyd, Amanda Mocroft, Lene Ryom, Antonella d’Arminio Monforte, Caroline Sabin, Wafaa M. El-Sadr, Camilla Ingrid Hatleberg, Stephane De Wit, Rainer Weber, Eric Fontas, Andrew Phillips, Fabrice Bonnet, Peter Reiss, Jens Lundgren, Matthew Law

Background

The Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study has developed predictive risk scores for cardiovascular disease (CVD) and chronic kidney disease (CKD, defined as confirmed estimated glomerular filtration rate [eGFR] ≤ 60 ml/min/1.73 m2) events in HIV-positive people. We hypothesized that participants in D:A:D at high (>5%) predicted risk for both CVD and CKD would be at even greater risk for CVD and CKD events.

Methods and findings

We included all participants with complete risk factor (covariate) data, baseline eGFR > 60 ml/min/1.73 m2, and a confirmed (>3 months apart) eGFR < 60 ml/min/1.73 m2 thereafter to calculate CVD and CKD risk scores. We calculated CVD and CKD event rates by predicted 5-year CVD and CKD risk groups (≤1%, >1%–5%, >5%) and fitted Poisson models to assess whether CVD and CKD risk group effects were multiplicative. A total of 27,215 participants contributed 202,034 person-years of follow-up: 74% male, median (IQR) age 42 (36, 49) years, median (IQR) baseline year of follow-up 2005 (2004, 2008). D:A:D risk equations predicted 3,560 (13.1%) participants at high CVD risk, 4,996 (18.4%) participants at high CKD risk, and 1,585 (5.8%) participants at both high CKD and high CVD risk. CVD and CKD event rates by predicted risk group were multiplicative. Participants at high CVD risk had a 5.63-fold (95% CI 4.47, 7.09, p < 0.001) increase in CKD events compared to those at low risk; participants at high CKD risk had a 1.31-fold (95% CI 1.09, 1.56, p = 0.005) increase in CVD events compared to those at low risk. Participants’ CVD and CKD risk groups had multiplicative predictive effects, with no evidence of an interaction (p = 0.329 and p = 0.291 for CKD and CVD, respectively). The main study limitation is the difference in the ascertainment of the clinically defined CVD endpoints and the laboratory-defined CKD endpoints.
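"Multiplicative" here means the event rate is log-additive in the two risk-group effects: under a Poisson model with no interaction, the rate in the joint high-CVD/high-CKD group is the baseline rate times the two rate ratios. A sketch with hypothetical numbers (the abstract's 5.63 and 1.31 refer to different endpoints, so they are not combined here):

```python
# Under a multiplicative (log-additive) Poisson model with no interaction, the
# event rate in the joint high-risk group equals the baseline rate times the two
# marginal rate ratios. All numbers here are hypothetical, for illustration only.
import math

base_rate = 2.0     # events per 1,000 person-years, low CVD & low CKD risk (hypothetical)
rr_high_cvd = 5.0   # hypothetical rate ratio, high vs low CVD risk
rr_high_ckd = 1.3   # hypothetical rate ratio, high vs low CKD risk

expected_joint = base_rate * rr_high_cvd * rr_high_ckd   # no-interaction prediction

# Equivalent statement on the log scale: the log-rates simply add.
log_joint = math.log(base_rate) + math.log(rr_high_cvd) + math.log(rr_high_ckd)

print(round(expected_joint, 2))        # 13.0
print(round(math.exp(log_joint), 2))   # 13.0 (same prediction)
```

An interaction test, as reported in the study, asks whether the observed joint rate departs from this product.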

Conclusions

We found that people at high predicted risk for both CVD and CKD have substantially greater risks for both CVD and CKD events compared with those at low predicted risk for both outcomes, and compared to those at high predicted risk for only CVD or CKD events. This suggests that CVD and CKD risk in HIV-positive persons should be assessed together. The results further encourage clinicians to prioritise addressing modifiable risks for CVD and CKD in HIV-positive people.

HIV prevalence and behavioral and psychosocial factors among transgender women and cisgender men who have sex with men in 8 African countries: A cross-sectional analysis

Tue, 07/11/2017 - 23:00

by Tonia Poteat, Benjamin Ackerman, Daouda Diouf, Nuha Ceesay, Tampose Mothopeng, Ky-Zerbo Odette, Seni Kouanda, Henri Gautier Ouedraogo, Anato Simplice, Abo Kouame, Zandile Mnisi, Gift Trapence, L. Leigh Ann van der Merwe, Vicente Jumbe, Stefan Baral

Introduction

Sub-Saharan Africa bears more than two-thirds of the worldwide burden of HIV; however, data among transgender women from the region are sparse. Transgender women across the world face significant vulnerability to HIV. This analysis aimed to assess HIV prevalence as well as psychosocial and behavioral drivers of HIV infection among transgender women compared with cisgender (non-transgender) men who have sex with men (cis-MSM) in 8 sub-Saharan African countries.

Methods and findings

Respondent-driven sampling targeted cis-MSM for enrollment. Data collection took place at 14 sites across 8 countries: Burkina Faso (January–August 2013), Côte d’Ivoire (March 2015–February 2016), The Gambia (July–December 2011), Lesotho (February–September 2014), Malawi (July 2011–March 2012), Senegal (February–November 2015), Swaziland (August–December 2011), and Togo (January–June 2013). Surveys gathered information on sexual orientation, gender identity, stigma, mental health, sexual behavior, and HIV testing. Rapid tests for HIV were conducted. Data were merged, and mixed effects logistic regression models were used to estimate relationships between gender identity and HIV infection. Among 4,586 participants assigned male sex at birth, 937 (20%) identified as transgender or female, and 3,649 were cis-MSM. The mean age of study participants was approximately 24 years, with no difference between transgender participants and cis-MSM. Compared to cis-MSM participants, transgender women were more likely to experience family exclusion (odds ratio [OR] 1.75, 95% CI 1.42–2.16, p < 0.001), rape (OR 1.95, 95% CI 1.63–2.36, p < 0.001), and depressive symptoms (OR 1.30, 95% CI 1.12–1.52, p < 0.001). Transgender women were more likely to report condomless receptive anal sex in the prior 12 months (OR 2.44, 95% CI 2.05–2.90, p < 0.001) and to be currently living with HIV (OR 1.81, 95% CI 1.49–2.19, p < 0.001). Overall HIV prevalence was 25% (235/926) in transgender women and 14% (505/3,594) in cis-MSM. When adjusted for age, condomless receptive anal sex, depression, interpersonal stigma, law enforcement stigma, and violence, and the interaction of gender with condomless receptive anal sex, the odds of HIV infection for transgender women were 2.2 times greater than the odds for cis-MSM (95% CI 1.65–2.87, p < 0.001). Limitations of the study included sampling strategies tailored for cis-MSM and merging of datasets with non-identical survey instruments.
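The crude odds ratio implied by the prevalence figures above can be checked directly from the 2×2 counts; the mixed-effects estimates in the text additionally adjust for covariates and site clustering, which is why they differ:

```python
# Crude odds ratio of prevalent HIV, transgender women versus cis-MSM, from the
# counts reported above (235/926 vs 505/3,594). The mixed-effects model estimates
# in the text adjust for covariates and clustering, so they differ from this.
hiv_tg, n_tg = 235, 926
hiv_msm, n_msm = 505, 3594

odds_tg = hiv_tg / (n_tg - hiv_tg)
odds_msm = hiv_msm / (n_msm - hiv_msm)
crude_or = odds_tg / odds_msm

print(round(crude_or, 2))  # ≈ 2.08
```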

Conclusions

In this study in sub-Saharan Africa, we found that HIV burden and stigma differed between transgender women and cis-MSM, indicating a need to address gender diversity within HIV research and programs.

Reaching global HIV/AIDS goals: What got us here, won't get us there

Tue, 07/11/2017 - 23:00

by Wafaa M. El-Sadr, Katherine Harripersaud, Miriam Rabkin

In a Perspective, Wafaa El-Sadr and colleagues discuss tailored approaches to treatment and prevention of HIV infection.

Effectiveness of a combination strategy for linkage and retention in adult HIV care in Swaziland: The Link4Health cluster randomized trial

Tue, 07/11/2017 - 23:00

by Margaret L. McNairy, Matthew R. Lamb, Averie B. Gachuhi, Harriet Nuwagaba-Biribonwoha, Sean Burke, Sikhathele Mazibuko, Velephi Okello, Peter Ehrenkranz, Ruben Sahabo, Wafaa M. El-Sadr

Background

Gaps in the HIV care continuum contribute to poor health outcomes and increase HIV transmission. A combination of interventions targeting multiple steps in the continuum is needed to achieve the full beneficial impact of HIV treatment.

Methods and findings

Link4Health, a cluster-randomized controlled trial, evaluated the effectiveness of a combination intervention strategy (CIS) versus the standard of care (SOC) on the primary outcome of linkage to care within 1 month plus retention in care at 12 months after HIV-positive testing. Ten clusters of HIV clinics in Swaziland were randomized 1:1 to CIS versus SOC. The CIS included point-of-care CD4+ testing at the time of an HIV-positive test, accelerated antiretroviral therapy (ART) initiation for treatment-eligible participants, mobile phone appointment reminders, health educational packages, and noncash financial incentives. Secondary outcomes included each component of the primary outcome, mean time to linkage, assessment for ART eligibility, ART initiation and time to ART initiation, viral suppression defined as HIV-1 RNA < 1,000 copies/mL at 12 months after HIV testing among patients on ART ≥6 months, and loss to follow-up and death at 12 months after HIV testing. A total of 2,197 adults aged ≥18 years, newly tested HIV positive, were enrolled from 19 August 2013 to 21 November 2014 (1,096 CIS arm; 1,101 SOC arm) and followed for 12 months. The median participant age was 31 years (IQR 26–39), and 59% were women. In an intention-to-treat analysis, 64% (705/1,096) of participants at the CIS sites achieved the primary outcome versus 43% (477/1,101) at the SOC sites (adjusted relative risk [RR] 1.52, 95% CI 1.19–1.96, p = 0.002). 
Participants in the CIS arm versus the SOC arm had the following secondary outcomes: linkage to care regardless of retention at 12 months (RR 1.08, 95% CI 0.97–1.21, p = 0.13), mean time to linkage (2.5 days versus 7.5 days, p = 0.189), retention in care at 12 months regardless of time to linkage (RR 1.48, 95% CI 1.18–1.86, p = 0.002), assessment for ART eligibility (RR 1.20, 95% CI 1.07–1.34, p = 0.004), ART initiation (RR 1.16, 95% CI 0.96–1.40, p = 0.12), mean time to ART initiation from time of HIV testing (7 days versus 14 days, p < 0.001), viral suppression among those on ART for ≥6 months (RR 0.97, 95% CI 0.88–1.07, p = 0.55), loss to follow-up at 12 months after HIV testing (RR 0.56, 95% CI 0.40–0.79, p = 0.002), and death (N = 78) within 12 months of HIV testing (RR 0.80, 95% CI 0.46–1.35, p = 0.41). Limitations of this study include a small number of clusters and the inability to evaluate the incremental effectiveness of individual components of the combination strategy.
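The primary-outcome comparison can be approximated from the raw counts. A minimal sketch of the crude risk ratio with a log-scale Wald confidence interval; this ignores the cluster design and covariate adjustment, which is why the published adjusted RR of 1.52 differs:

```python
# Crude risk ratio for the primary outcome (linkage within 1 month plus 12-month
# retention), with a log-scale Wald 95% CI. This ignores cluster randomization and
# adjustment, so it differs from the published adjusted RR of 1.52.
import math

events_cis, n_cis = 705, 1096   # CIS arm
events_soc, n_soc = 477, 1101   # SOC arm

rr = (events_cis / n_cis) / (events_soc / n_soc)
se_log_rr = math.sqrt(1/events_cis - 1/n_cis + 1/events_soc - 1/n_soc)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(round(rr, 2), round(lo, 2), round(hi, 2))  # ≈ 1.48 (1.37–1.61)
```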

Conclusions

A combination strategy inclusive of 5 evidence-based interventions aimed at multiple steps in the HIV care continuum was associated with a significant increase in linkage to care plus 12-month retention. This strategy offers promise of enhanced outcomes for HIV-positive patients.

Trial registration

ClinicalTrials.gov NCT01904994.

Measuring success: The challenge of social protection in helping eliminate tuberculosis

Tue, 07/11/2017 - 23:00

by Priya B. Shete, David W. Dowdy

In this Perspective on the research article by William Rudgard and colleagues, Priya Shete and coauthor discuss the challenges of measuring the impact of social protection programs such as cash transfers.

Comparison of two cash transfer strategies to prevent catastrophic costs for poor tuberculosis-affected households in low- and middle-income countries: An economic modelling study

Tue, 07/11/2017 - 23:00

by William E. Rudgard, Carlton A. Evans, Sedona Sweeney, Tom Wingfield, Knut Lönnroth, Draurio Barreira, Delia Boccia

Background

Illness-related costs for patients with tuberculosis (TB) amounting to ≥20% of pre-illness annual household income predict adverse treatment outcomes and have been termed “catastrophic.” Social protection initiatives, including cash transfers, are endorsed to help prevent catastrophic costs. With this aim, cash transfers may be provided either to defray the TB-related costs of households with a confirmed TB diagnosis (termed a “TB-specific” approach) or to increase the income of households at high risk of TB in order to strengthen their economic resilience (termed a “TB-sensitive” approach). The impact of cash transfers provided under each of these approaches might vary. We undertook an economic modelling study from the patient perspective to compare the potential of these 2 cash transfer approaches to prevent catastrophic costs.

Methods and findings

Model inputs for 7 low- and middle-income countries (Brazil, Colombia, Ecuador, Ghana, Mexico, Tanzania, and Yemen) were retrieved by literature review and included countries' mean patient TB-related costs, mean household income, mean cash transfers, and estimated TB-specific and TB-sensitive target populations. Analyses were completed for drug-susceptible (DS) TB-related costs in all 7 countries, and additionally for drug-resistant (DR) TB-related costs in the 1 country with available data. All cost data were reported in 2013 international dollars ($). The target population for TB-specific cash transfers was poor households with a confirmed TB diagnosis, and for TB-sensitive cash transfers was poor households already targeted by countries’ established poverty-reduction cash transfer programme. Cash transfers offered in countries, unrelated to TB, ranged from $217 to $1,091/year/household. Before cash transfers, DS TB-related costs were catastrophic in 6 out of 7 countries. If cash transfers were provided with a TB-specific approach, alone they would be insufficient to prevent DS TB catastrophic costs in 4 out of 6 countries, and when increased enough to prevent DS TB catastrophic costs would require a budget between $3.8 million (95% CI: $3.8 million–$3.8 million) and $75 million (95% CI: $50 million–$100 million) per country. If instead cash transfers were provided with a TB-sensitive approach, alone they would be insufficient to prevent DS TB-related catastrophic costs in any of the 6 countries, and when increased enough to prevent DS TB catastrophic costs would require a budget between $298 million (95% CI: $219 million–$378 million) and $165,367 million (95% CI: $134,085 million–$196,425 million) per country. DR TB-related costs were catastrophic both before and after TB-specific or TB-sensitive cash transfers in that country.
Sensitivity analyses showed our findings to be robust to imputation of missing TB-related cost components, and use of 10% or 30% instead of 20% as the threshold for measuring catastrophic costs. Key limitations were using national average data and not considering other health and social benefits of cash transfers.
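The catastrophic-cost definition above is a simple threshold rule, and the two cash-transfer approaches enter the calculation differently (defraying costs versus raising income). A minimal sketch with hypothetical monetary values:

```python
# Catastrophic TB costs: patient costs >= 20% of pre-illness annual household income.
# A TB-specific transfer defrays costs; a TB-sensitive transfer raises income.
# All monetary values below are hypothetical, in international dollars.
def is_catastrophic(tb_costs, household_income, threshold=0.20):
    return tb_costs >= threshold * household_income

income, costs, transfer = 2000.0, 500.0, 300.0

before = is_catastrophic(costs, income)                    # 500 >= 400  -> True
tb_specific = is_catastrophic(costs - transfer, income)    # 200 >= 400  -> False
tb_sensitive = is_catastrophic(costs, income + transfer)   # 500 >= 460  -> True

print(before, tb_specific, tb_sensitive)  # True False True
```

With these hypothetical numbers the same transfer prevents catastrophic costs only under the TB-specific approach, illustrating why the two approaches differ in efficiency per dollar.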

Conclusions

A TB-sensitive cash transfer approach to increase all poor households’ income may have broad benefits by reducing poverty, but is unlikely to be as effective or affordable for preventing TB catastrophic costs as a TB-specific cash transfer approach to defray TB-related costs only in poor households with a confirmed TB diagnosis. Preventing DR TB-related catastrophic costs will require considerable additional investment whether a TB-sensitive or a TB-specific cash transfer approach is used.

HIV-1 persistence following extremely early initiation of antiretroviral therapy (ART) during acute HIV-1 infection: An observational study

Tue, 07/11/2017 - 23:00

by Timothy J. Henrich, Hiroyu Hatano, Oliver Bacon, Louise E. Hogan, Rachel Rutishauser, Alison Hill, Mary F. Kearney, Elizabeth M. Anderson, Susan P. Buchbinder, Stephanie E. Cohen, Mohamed Abdel-Mohsen, Christopher W. Pohlmeyer, Remi Fromentin, Rebecca Hoh, Albert Y. Liu, Joseph M. McCune, Jonathan Spindler, Kelly Metcalf-Pate, Kristen S. Hobbs, Cassandra Thanh, Erica A. Gibson, Daniel R. Kuritzkes, Robert F. Siliciano, Richard W. Price, Douglas D. Richman, Nicolas Chomont, Janet D. Siliciano, John W. Mellors, Steven A. Yukl, Joel N. Blankson, Teri Liegler, Steven G. Deeks

Background

It is unknown whether extremely early initiation of antiretroviral therapy (ART) may lead to long-term ART-free HIV remission or cure. We therefore studied 2 individuals recruited from a pre-exposure prophylaxis (PrEP) program who started prophylactic ART an estimated 10 days (Participant A; 54-year-old male) and 12 days (Participant B; 31-year-old male) after infection, with peak plasma HIV RNA of 220 copies/mL and 3,343 copies/mL, respectively. Extensive testing of blood and tissue for HIV persistence was performed, and PrEP Participant A underwent analytical treatment interruption (ATI) following 32 weeks of continuous ART.

Methods and findings

Colorectal and lymph node tissues, bone marrow, cerebrospinal fluid (CSF), plasma, and very large numbers of peripheral blood mononuclear cells (PBMCs) were obtained longitudinally from both participants and were studied for HIV persistence in several laboratories using molecular and culture-based detection methods, including a murine viral outgrowth assay (mVOA). Both participants initiated PrEP with tenofovir/emtricitabine during very early Fiebig stage I (detectable plasma HIV-1 RNA, antibody negative) followed by 4-drug ART intensification. Following peak viral loads, both participants experienced full suppression of HIV-1 plasma viremia. Over the following 2 years, no further HIV could be detected in blood or tissue from PrEP Participant A despite extensive sampling from ileum, rectum, lymph nodes, bone marrow, CSF, circulating CD4+ T cell subsets, and plasma. No HIV was detected from tissues obtained from PrEP Participant B, but low-level HIV RNA or DNA was intermittently detected from various CD4+ T cell subsets. Over 500 million CD4+ T cells were assayed from both participants in a humanized mouse outgrowth assay. Three of 8 mice infused with CD4+ T cells from PrEP Participant B developed viremia (50 million input cells/surviving mouse), but only 1 of 10 mice infused with CD4+ T cells from PrEP Participant A (53 million input cells/mouse) experienced very low level viremia (201 copies/mL); sequence confirmation was unsuccessful. PrEP Participant A stopped ART and remained aviremic for 7.4 months, rebounding with HIV RNA of 36 copies/mL that rose to 59,805 copies/mL 6 days later. ART was restarted promptly. Rebound plasma HIV sequences were identical to those obtained during acute infection by single-genome sequencing. Mathematical modeling predicted that the latent reservoir size was approximately 200 cells prior to ATI and that only around 1% of individuals with a similar HIV burden may achieve lifelong ART-free remission.
Furthermore, we observed that lymphocytes expressing the tumor marker CD30 increased in frequency weeks to months prior to detectable HIV-1 RNA in plasma. This study was limited by the small sample size, which was a result of the rarity of individuals presenting during hyperacute infection.
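The rebound kinetics reported for Participant A (36 rising to 59,805 copies/mL over 6 days) imply a growth rate that can be back-calculated; a rough sketch assuming simple exponential growth between the two measurements, which is an assumption, not a result from the study:

```python
# Back-of-envelope rebound kinetics for Participant A, assuming simple exponential
# growth of plasma HIV-1 RNA between the two measurements quoted in the abstract.
import math

v0, v1 = 36.0, 59805.0   # copies/mL at rebound detection and 6 days later
days = 6.0

growth_rate = math.log(v1 / v0) / days          # per-day exponential growth rate
doubling_time = math.log(2) / growth_rate       # days per doubling

print(round(growth_rate, 2), round(doubling_time, 2))  # ≈ 1.24 /day, ≈ 0.56 days
```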

Conclusions

We report HIV relapse despite initiation of ART at one of the earliest possible stages of acute HIV infection. Near complete or complete loss of detectable HIV in blood and tissues did not lead to indefinite ART-free HIV remission. However, the small numbers of latently infected cells in individuals treated during hyperacute infection may be associated with prolonged ART-free remission.