Sensory feedback may smooth walking with a prosthetic leg

A prosthetic leg that elicits the sensation of knee motion and the feeling of the sole of the foot touching the ground may improve walking performance and reduce phantom limb pain, according to a proof-of-concept study with two patients.

With the bionic leg system, the patients performed better during clinically important tests indoors and outdoors, study author Stanisa Raspopovic, PhD, explained during a press briefing about the research. The findings were published in Nature Medicine.

The results indicate that the use of sensory feedback “could be common practice” in prosthetic devices in the future, he said. Dr. Raspopovic is a researcher at the Swiss Federal Institute of Technology Zürich and a founder of SensArs Neuroprosthetics, which is based in Lausanne, Switzerland.

Neural prosthetics allow the nervous system and external devices to interact. These brain-machine interfaces may improve quality of life for patients with brain or spinal cord injuries, degenerative disease, or loss of limbs.

“Conventional leg prostheses do not convey sensory information about motion or interaction with the ground to above-knee amputees, thereby reducing confidence and walking speed in the users,” the study authors wrote. Users may also have high levels of mental and physical fatigue, and the lack of physiologic feedback from the extremity to the brain may contribute to the generation of phantom limb pain.

To evaluate whether neural sensory feedback restoration could address these issues, investigators conducted a study with two patients who had undergone transfemoral amputations as a result of traumatic events. The patients were implanted with four intraneural stimulation electrodes in the remaining tibial nerve. The prosthetic leg device included sensors to represent foot touch and pressure and knee joint angle. The sensors transmitted sensory signals to the nervous system through the stimulation electrodes in the tibial nerve.

When the patients walked outdoors over a path traced in the sand, “participants’ speeds were significantly higher when sensory feedback was provided,” the authors wrote. One participant walked 3.56 m/min faster, and the other walked 5.68 m/min faster.

The participants also rated their confidence in the prosthesis on a scale from 0 to 10. For patient 1, self-rated confidence improved from 4.85 to 7.71 with the device. Patient 2 reported a confidence level that climbed from 2.7 to 5.55.

When tested indoors, both patients reached a 0.5 km/hour higher speed on the treadmill when stimulation was provided and both had a lower mean rate of oxygen uptake during the sensory feedback trials, the study authors reported.

Levels of phantom limb pain also decreased significantly after 10-minute stimulation sessions, but not during control sessions.

Longer studies with more patients are required, and fully implantable devices without transcutaneous cables need to be developed, the authors wrote.

Grants from the European Research Council, European Commission, and Swiss National Science Foundation funded the research. Dr. Raspopovic and two coauthors hold shares of SensArs Neuroprosthetics, a start-up company dealing with the commercialization of neurocontrolled artificial limbs.

SOURCE: Petrini FM et al. Nat Med. 2019 Sep 9. doi: 10.1038/s41591-019-0567-3.


MYSTIC trial analysis IDs mutations prognostic of mNSCLC outcomes

Patients with metastatic non–small cell lung cancer (mNSCLC) in the randomized, phase 3 MYSTIC trial who had mutations in the tumor suppressor genes KEAP1 or STK11 experienced poorer outcomes than did patients without the mutations, according to an exploratory analysis of trial data.

Mutations in ARID1a, however, were associated with improved overall survival (OS) among patients in the trial who were treated with durvalumab + tremelimumab.

Profiling of circulating tumor DNA from 943 evaluable baseline plasma specimens showed a median OS of 7.4 vs. 12.9 months in the 170 patients with KEAP1 mutations (KEAP1m) vs. the 773 with KEAP1 wild type (KEAP1wt; hazard ratio, 1.64), and 6.8 vs. 12.6 months in patients with STK11 mutations (STK11m) vs. the 796 with STK11 wild type (STK11wt; HR, 1.52), Naiyer A. Rizvi, MD, the Price Family Professor of Medicine, director of Thoracic Oncology, and codirector of Cancer Immunotherapy at Columbia University Irving Medical Center, New York, reported at the World Conference on Lung Cancer.

Objective response rates (ORR) in the groups, respectively, were 17.6% vs. 27.7% for KEAP1m vs. KEAP1wt, and 16.3% vs. 27.6% for STK11m vs. STK11wt, Dr. Rizvi said at the conference, sponsored by the International Association for the Study of Lung Cancer.

This was regardless of MYSTIC trial treatment arm; the open-label, multicenter, global trial compared durvalumab monotherapy or durvalumab plus tremelimumab with platinum-based chemotherapy for the first-line treatment of patients with epidermal growth factor receptor and anaplastic lymphoma kinase wild-type, locally advanced or metastatic NSCLC.


Mutations in the ARID1a gene, however, had no impact on OS (12.6 vs. 11.4 months in 114 vs. 829 patients with ARID1am vs. ARID1awt; HR, 0.94); the corresponding ORRs were 35.1% vs. 24.6%.

When comparing outcomes by treatment arm, the ORRs with chemotherapy were 15.1% vs. 34% for KEAP1m vs. KEAP1wt, 12.2% vs. 33.6% for STK11m vs. STK11wt, and 28.1% vs. 31% for ARID1am vs. ARID1awt.

The ORRs in the durvalumab arm were 16.7% vs. 25.2% for KEAP1m vs. KEAP1wt, 14.5% vs. 25.7% for STK11m vs. STK11wt, and 25.6% vs. 23.4% for ARID1am vs. ARID1awt; in the durvalumab + tremelimumab arm, they were 20.6% vs. 23.9% for KEAP1m vs. KEAP1wt and 21.6% vs. 23.6% for STK11m vs. STK11wt.

“The key finding here is really the ARID1a response,” Dr. Rizvi said, noting the “pretty impressive response rates” of 51.3% with ARID1am vs. 19.4% for ARID1awt.

The relationship between gene alterations and response to anti–programmed death-1 (PD-1) therapy with and without anti–CTLA-4 therapy is not well characterized. These findings, which suggest that KEAP1 and STK11 mutations are prognostic for OS in mNSCLC and that ARID1am may be predictive of OS benefit in patients receiving durvalumab + tremelimumab, provide insight into the potential impact of specific mutations on response to immunotherapy, Dr. Rizvi said.

“STK11 and KEAP1 mutations ... are relatively common mutations – they are actually the third and fourth most common mutations in lung cancer after p53 and KRAS,” he said, adding that they influence outcomes and need to be factored into analyses of outcomes in lung cancer. “ARID1am patients were about 10% of the population and they did particularly well with durvalumab and tremelimumab, and I think these exploratory analyses can help us think about how we use [tumor mutational burden] and outcomes among cancer patients in future trials.”

The MYSTIC trial was sponsored by AstraZeneca. Dr. Rizvi disclosed royalties related to intellectual property/patents filed by MSKCC and Personal Genome Diagnostics.

SOURCE: Rizvi N et al. WCLC 2019: Abstract OA04.07.


Unusually Early-Onset Plantar Verrucous Carcinoma


To the Editor:

Verrucous carcinoma (VC) is a rare type of squamous cell carcinoma characterized by a well-differentiated low-grade tumor with a high degree of keratinization. First described by Ackerman1 in 1948, VC presents on the skin or oral and genital mucosae with minimal atypical cytologic findings.1-3 It most commonly is seen in late middle-aged men (85% of cases) and presents as a slow-growing mass, often of more than 10 years’ duration.2,3 Verrucous carcinoma frequently is observed at 3 particular anatomic sites: the oral cavity, known as oral florid papillomatosis; the anogenital area, known as Buschke-Löwenstein tumor; and on the plantar surface, known as epithelioma cuniculatum.2-13

A 19-year-old man presented with an ulcerated lesion on the right big toe of 2 years’ duration. He reported that the lesion had gradually increased in size and was painful when walking. Physical examination revealed an ulcerated lesion on the right big toe with purulent inflammation and necrosis, unclear edges, and border nodules containing a fatty, yellowish, foul-smelling material (Figure 1). Microscopic examination of purulent material from deep within the primary lesion revealed gram-negative rods and gram-positive diplococci. Ehrlich-Ziehl-Neelsen staining and culture in Lowenstein-Jensen medium were negative for mycobacteria. Histologic examination and fungal culture were not diagnostic for fungal infection.

Figure 1. Ulcerated lesion on the right great toe with purulent inflammation and necrosis, unclear edges, and border nodules containing a fatty, yellowish, foul-smelling material. The lesion was composed of smaller papulonodular structures, giving an irregular appearance.


The differential diagnosis included tuberculosis cutis verrucosa, subcutaneous mycoses, swimming pool granuloma, leishmania cutis, chronic pyoderma vegetans, and VC. A punch biopsy of the lesion showed chronic nonspecific inflammation, hyperkeratosis, parakeratosis, and pseudoepitheliomatous hyperplasia. A repeat biopsy performed 15 days later also showed nonspecific inflammation. At the initial presentation, an anti–human immunodeficiency virus test was negative. A purified protein derivative (PPD) skin test was positive, with a 17-mm induration, and a sputum test was negative for Mycobacterium tuberculosis. A chest radiograph was normal. We considered the positive PPD skin test to be clinically insignificant; we did not find an accompanying tuberculosis infection, and the high exposure to atypical tuberculosis in developing countries such as Turkey, where the patient resided, often explains a positive PPD test.



At the initial presentation, radiography of the right big toe revealed porotic signs and cortical irregularity of the distal phalanx. A deep incisional biopsy of the lesion was performed for pathologic and microbiologic analysis. Ehrlich-Ziehl-Neelsen staining was negative, fungal elements could not be observed, and there was no growth in Lowenstein-Jensen medium or on Sabouraud dextrose agar. Polymerase chain reaction for human papillomavirus, M tuberculosis, and atypical mycobacteria was negative. Periodic acid–Schiff staining was negative for fungal elements. Histopathologic examination revealed an exophytic as well as endophytic squamous cell proliferation infiltrating deeper layers of the dermis with a desmoplastic stroma (Figure 2). Slight cytologic atypia was noted. A diagnosis of VC was made based on the clinical and histopathologic findings. The patient’s right big toe was amputated by the plastic surgery team 6 months after the initial presentation.

Figure 2. A and B, Exophytic as well as endophytic squamous cell proliferation, infiltrating deeper layers of the dermis with a desmoplastic stroma (H&E, original magnification ×20 and ×40).


The term epithelioma cuniculatum was first used in 1954 to describe plantar VC. The term cuniculus is Latin for rabbit burrow.3 At the distal part of the plantar surface of the foot, VC presents as an exophytic, funguslike mass with abundant keratin-filled sinuses.14 When pressure is applied to the lesion, a greasy, yellowish, foul-smelling material with the consistency of toothpaste emerges from the sinuses. The lesion resembles pyoderma vegetans and may present with secondary infections (eg, Staphylococcus aureus, gram-negative bacteria, fungal infection) and/or ulcerations. Its appearance resembles an inflammatory lesion more than a neoplasm.6 Sometimes the skin surrounding the lesion may be a yellowish color, giving the impression of a plantar wart.3,4 In most cases, in situ hybridization demonstrates a human papillomavirus genome.2-5,10 Other factors implicated in the etiopathogenesis of VC include chronic inflammation; a cicatrix associated with a condition such as chronic cutaneous tuberculosis, ulcerative leprosy, dystrophic epidermolysis bullosa, or chronic osteomyelitis4; recurrent trauma3; and/or lichen planus.2,4 Despite its slow development and benign appearance, VC may cause severe destruction affecting surrounding bony structures and may ultimately require amputation.2,4 In its early stages, VC can be mistaken for a benign tumor or other benign lesion, such as giant seborrheic keratosis, giant keratoacanthoma, eccrine poroma, or verruciform xanthoma, potentially leading to an incorrect diagnosis.5



Histopathologic examination, especially of superficial biopsies, generally reveals squamous cell proliferation demonstrating minimal pleomorphism and cytologic atypia with sparse mitotic figures.4-6 Diagnosis of VC can be challenging if the endophytic proliferation, which characteristically pushes into the dermis and even deeper tissues at the base of the lesion, is not seen. This feature is uncommon in squamous cell carcinomas.3,4,6 Histopathologic detection of koilocytes can lead to difficulty in distinguishing VC from warts.5 The growth of lesions is exophytic in plantar verrucae, whereas in VC it may be either exophytic or endophytic.4 At early stages, it is difficult to distinguish VC from pseudoepitheliomatous hyperplasia caused by chronic inflammation, as well as from tuberculosis and subcutaneous mycoses.3,6 In these situations, possible responsible microorganisms must be sought out. Amelanotic malignant melanoma and eccrine poroma also should be considered in the differential diagnosis.3,5 If the biopsy specimen is obtained superficially and is fragmented, the diagnosis is more difficult, making deep biopsies essential in suspicious cases.4 Excision is the best treatment, and Mohs micrographic surgery may be required in some cases.2,3,11 It is important to consider that radiotherapy may lead to anaplastic transformation and metastasis.2 Metastasis to lymph nodes is very rare, and the prognosis is excellent when complete excision is performed.2 Recurrence may be observed.4

Our case of plantar VC is notable because of the patient’s young age; the typical age of onset is late middle age (ie, the fifth and sixth decades of life). A long-standing lesion that is therapy resistant and without a detectable microorganism should be investigated for malignancy with repeated deep biopsies, regardless of the patient’s age, as demonstrated in our case.

References
  1. Ackerman LV. Verrucous carcinoma of the oral cavity. Surgery. 1948;23:670-678.
  2. Schwartz RA. Verrucous carcinoma of the skin and mucosa. J Am Acad Dermatol. 1995;32:1-21.
  3. Kao GF, Graham JH, Helwig EB. Carcinoma cuniculatum (verrucous carcinoma of the skin): a clinicopathologic study of 46 cases with ultrastructural observations. Cancer. 1982;49:2395-2403.
  4. Mc Kee PH, ed. Pathology of the Skin. 2nd ed. London, England: Mosby-Wolfe; 1996.
  5. Schwartz RA, Stoll HL. Squamous cell carcinoma. In: Freedberg IM, Eisen AZ, Wolff K, et al, eds. Fitzpatrick’s Dermatology in General Medicine. 5th ed. New York, NY: Mc-Graw Hill; 1999:840-856.
  6. MacKie RM. Epidermal skin tumours. In: Rook A, Wilkinson DS, Ebling FJG, et al, eds. Textbook of Dermatology. 5th ed. Oxford, United Kingdom: Blackwell Scientific; 1992:1500-1556.
  7. Yoshtatsu S, Takagi T, Ohata C, et al. Plantar verrucous carcinoma: report of a case treated with Boyd amputation followed by reconstruction with a free forearm flap. J Dermatol. 2001;28:226-230.
  8. Van Geertruyden JP, Olemans C, Laporte M, et al. Verrucous carcinoma of the nail bed. Foot Ankle Int. 1998;19:327-328.
  9. Sanchez-Yus E, Velasco E, Robledo A. Verrucous carcinoma of the back. J Am Acad Dermatol. 1986;14(5 pt 2):947-950.
  10. Noel JC, Peny MO, Detremmerie O, et al. Demonstration of human papillomavirus type 2 in a verrucous carcinoma of the foot. Dermatology. 1993;187:58-61.
  11. Mora RG. Microscopically controlled surgery (Mohs’ chemosurgery) for treatment of verrucous squamous cell carcinoma of the foot (epithelioma cuniculatum). J Am Acad Dermatol. 1983;8:354-362.
  12. Kathuria S, Rieker J, Jablokow VR, et al. Plantar verrucous carcinoma (epithelioma cuniculatum): case report with review of the literature. J Surg Oncol. 1986;31:71-75.
  13. Brownstein MH, Shapiro L. Verrucous carcinoma of skin: epithelioma cuniculatum plantare. Cancer. 1976;38:1710-1716.
  14. Ho J, Diven DG, Butler PJ, et al. An ulcerating verrucous plaque on the foot. verrucous carcinoma (epithelioma cuniculatum). Arch Dermatol. 2000;136:547-548, 550-551.
Author and Disclosure Information

Dr. Seremet is from the Department of Dermatology, Ataturk Training and Research Hospital, Izmir, Turkey. Drs. Erdemir, Kiremitci, and Gunel are from the Department of Dermatology, Istanbul Training and Research Hospital, Turkey. Dr. Demirkesen is from the Department of Pathology, Cerrahpaşa Medical Faculty, University of Istanbul.

The authors report no conflict of interest.

Correspondence: Sıla Seremet, MD, Department of Dermatology, Ataturk Training and Research Hospital, 35360 Basin Sitesi, Izmir, Turkey ([email protected]).


Practice Points

  • Verrucous carcinoma (VC) frequently is observed at 3 particular anatomic sites: the oral cavity, the anogenital area, and the plantar surface.
  • Plantar VC is rare, with a male predominance and most patients presenting in the fifth to sixth decades of life.
  • Differentiating VC from benign tumors may be difficult, especially if only superficial biopsies are taken. Multiple biopsies and close clinical correlation are required before a definite diagnosis is possible.

How does diet affect the risk of IBD?


Evidence suggests that diet may cause incident inflammatory bowel disease (IBD) and induce associated symptoms, according to a lecture delivered at the 2019 Freston Conference, sponsored by the American Gastroenterological Association.

Although much of the literature is consistent, it also contains discordant findings, and many questions remain unanswered. “We need more rigorous studies, and particularly more interventions, to truly understand the role diet may play in patients with IBD,” said Ashwin N. Ananthakrishnan, MD, MPH, associate professor of medicine at Massachusetts General Hospital in Boston.

Food can cause symptoms in IBD

Many patients with IBD are convinced that their diet caused their disease. A relevant point for physicians to consider is that these patients are at least as likely as the general population to have intolerance or sensitivity to food components such as lactose and gluten. In a prospective questionnaire study of 400 consecutive patients with IBD in the United Kingdom, 48% expressed the belief that diet could initiate IBD, and 57% said that diet could trigger a flare-up. In addition, 60% of respondents reported worsening of symptoms after eating certain foods, and about two-thirds deprived themselves of their favorite foods to prevent relapses (Inflamm Bowel Dis. 2016;22[1]:164-70). A French study found similar results. “Clearly there’s something there,” said Dr. Ananthakrishnan. Patients’ beliefs about the relationship between food and their symptoms are not simply misconceptions, he added.

A Canadian study published in 2016 found that almost one-third of patients with IBD avoid many food groups. “But there is significant heterogeneity in the foods that are avoided, and sometimes we mistake this heterogeneity for a lack of association between diet and symptoms in IBD,” said Dr. Ananthakrishnan. A larger number of patients avoid certain foods during periods of active disease, which suggests that food exacerbates their symptoms, he added. The same study showed that patients with IBD have more restrictive diets than do community controls. Patients eat fewer fruits and vegetables and generally consume less iron-rich food and less protein-rich food than healthy controls. GI intolerance, rather than professional advice, is the most common reason that patients with IBD restrict their diets (JPEN J Parenter Enteral Nutr. 2016;40[3]:405-11.).

A cross-sectional survey of 130 patients with IBD and 70 controls yielded similar results. Among patients, GI symptoms that resulted from consuming foods were not related to disease activity, disease location, or prior surgery. Patients with IBD tended to have a greater frequency of GI intolerance to foods than did controls (Scand J Gastroenterol. 1997;32[6]:569-71.).

Diet may cause intestinal inflammation

International research has recorded increases in the consumption of sugar and fat (particularly saturated fat) and concomitant decreases in fiber consumption during the past several decades. The incidence of IBD has increased in parallel with these dietary changes, following a remarkably similar trajectory, said Dr. Ananthakrishnan. The correlation between dietary changes and IBD incidence “holds true even more strikingly in countries that are now experiencing Westernization,” he added. These countries have undergone more rapid dietary changes, and their IBD incidence has doubled or tripled. The transition to “less traditional diets” appears to promote intestinal inflammation, said Dr. Ananthakrishnan.

An analysis of data from the European Prospective Investigation into Cancer (EPIC) study found an association between high consumption of sugar and soft drinks, together with low consumption of vegetables, and risk of ulcerative colitis (Inflamm Bowel Dis. 2016;22[2]:345-54.). A subsequent analysis of data from two prospective Swedish cohorts, however, found no association between consumption of sugary beverages and risk of Crohn’s disease or ulcerative colitis (Clin Gastroenterol Hepatol. 2019;17[1]:123-9.).

Although the data on sugar are mixed, data on the association between other macronutrient groups and risk of IBD are more consistent. When Dr. Ananthakrishnan and colleagues examined data from the Nurses’ Health Study, they found that the highest quintile of dietary fiber intake was associated with a 40% reduction in risk of Crohn’s disease, compared with the lowest quintile. The observed reduction of risk seemed to be greatest for fiber derived from fruits. Fiber from cereals, whole grains, or legumes, however, did not affect risk of Crohn’s disease (Gastroenterology. 2013;145[5]:970-7.).
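For readers translating that percentage back into the effect measures such cohort studies report, a relative risk reduction is simply one minus the hazard ratio; the line below is generic arithmetic, not the study’s exact estimate.

$$\text{relative risk reduction} = 1 - \text{HR} \quad\Rightarrow\quad \text{HR} \approx 1 - 0.40 = 0.60$$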

A separate analysis of the Nurses’ Health Study suggested that high intake of n-3 polyunsaturated fatty acids (PUFAs) and low intake of n-6 PUFAs were associated with a 31% reduction in risk of ulcerative colitis and a 15% reduction in the risk of Crohn’s disease. These data were consistent with a previous analysis of EPIC data that found that high intake of n-6 PUFAs was associated with increased risk of ulcerative colitis (Gut. 2009;58[12]:1606-11.). Other analyses indicate that genetic polymorphisms likely modify the association between PUFAs and risk of ulcerative colitis, said Dr. Ananthakrishnan. “There may be an additional layer of complexity beyond just measuring your dietary intake.”

In addition to macronutrients, micronutrients can modify a patient’s risk of ulcerative colitis or Crohn’s disease. When Dr. Ananthakrishnan and colleagues examined the Nurses’ Health Study, they found an inverse association between vitamin D intake and risk of Crohn’s disease (Gastroenterology. 2012;142[3]:482-9.). In a separate study, they found that a zinc intake greater than 16 mg/day was associated with reduced risk of Crohn’s disease (Int J Epidemiol. 2015;44[6]:1995-2005.).

Patients aged older than 40 years and patients of European ancestry tend to be overrepresented in cohort studies, which reduces the generalizability of their conclusions, said Dr. Ananthakrishnan. Furthermore, cohort studies have not produced consistent findings regarding the relationship between various dietary components and risk of IBD. Nevertheless, the data suggest that dietary patterns may be associated with incident Crohn’s disease or ulcerative colitis.

 

An influence of diet on IBD risk is plausible

One mechanism through which diet may exercise a causal influence on the risk of IBD is by affecting the microbiome. In 2011, investigators studied 98 healthy volunteers who answered questionnaires about their diet. The researchers also used 16S rDNA sequencing to characterize the participants’ stool samples. A diet high in animal protein, amino acids, and saturated fats was associated with large populations of Bacteroides. A diet low in fat and in animal protein, but high in carbohydrates and simple sugars, was associated with large populations of Prevotella. When the investigators conducted a controlled-feeding study of 10 participants, microbiome composition changed within 1 day of initiating a high-fat-and-low-fiber or a low-fat-and-high-fiber diet (Science. 2011;334[6052]:105-8.). A more recent study showed that the diversity of the microbiome increased with the adoption of an animal-based diet (Nature. 2014;505[7484]:559-63.).
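As a rough, hypothetical illustration of this type of diet-microbiome analysis, and not the cited studies’ actual pipeline, the sketch below converts a tiny invented 16S count table into relative abundances and compares one genus between two self-reported diet groups. Every column name, grouping, and number here is an assumption made up for illustration.

```python
# Hypothetical sketch: relate 16S-derived relative abundances to diet groups.
# All data, column names, and groupings are invented for illustration.
import pandas as pd
from scipy.stats import mannwhitneyu

# Invented count table: rows = participants, columns = genera, plus a
# self-reported diet label from a food-frequency questionnaire.
counts = pd.DataFrame(
    {
        "Bacteroides": [1200, 900, 300, 250],
        "Prevotella": [100, 150, 1100, 1300],
        "diet": ["protein_fat", "protein_fat", "carbohydrate", "carbohydrate"],
    },
    index=["p1", "p2", "p3", "p4"],
)

genera = counts.drop(columns="diet")
relative_abundance = genera.div(genera.sum(axis=1), axis=0)  # proportions per participant

# Compare Bacteroides relative abundance between the two diet groups.
protein_fat = relative_abundance.loc[counts["diet"] == "protein_fat", "Bacteroides"]
carbohydrate = relative_abundance.loc[counts["diet"] == "carbohydrate", "Bacteroides"]
stat, p_value = mannwhitneyu(protein_fat, carbohydrate, alternative="two-sided")

print(f"Median Bacteroides fraction, protein/fat diet:  {protein_fat.median():.2f}")
print(f"Median Bacteroides fraction, carbohydrate diet: {carbohydrate.median():.2f}")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
```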

 

 

Diet also may exert a causal influence on IBD risk by altering the intestinal barrier. In an experimental model, 5-mg/mL concentrations of fiber from plantain and broccoli significantly reduced the translocation of Escherichia coli through a human intestinal epithelial barrier (Gut. 2010;59[10]:1331-9.). Increased fiber intake may thus result in reduced intestinal inflammation, said Dr. Ananthakrishnan.

Observational evidence thus supports an effect of diet on the risk of IBD, and experimental evidence indicates that this effect is biologically plausible. Nevertheless, “there are many missing links,” and further study will clarify the role of diet in IBD incidence, said Dr. Ananthakrishnan.

AGA offers education for your patients about IBD, including lifestyle and nutrition management, in the AGA GI Patient Center at https://www.gastro.org/practice-guidance/gi-patient-center/topic/inflammatory-bowel-disease-ibd.

REPORTING FROM FRESTON CONFERENCE 2019


Interview with Andrew Solomon, MD, on diagnosing multiple sclerosis


Andrew Solomon, MD, is a neurologist and Associate Professor in the Department of Neurological Sciences and Division Chief of Multiple Sclerosis at The University of Vermont. We sat down to talk with Dr. Solomon about his experience with multiple sclerosis (MS) misdiagnosis and what can be done to improve MS diagnosis going forward.

 

How prevalent is the misdiagnosis of MS and what are the effects that it has on patients?

The first thing to clarify is what we mean by MS misdiagnosis. In this case, we’re talking about patients who are incorrectly assigned a diagnosis of MS. MS is hard to diagnose. Sometimes we take too long to diagnose people who have MS, sometimes we incorrectly diagnose MS in people who don’t have it.

 

We don’t have very good data in terms of how frequent misdiagnosis is, but we have some. The earliest data we have are from the 1980s. A study published in 1988 examined approximately 500 patients who had been diagnosed with MS during life, died between the 1960s and the 1980s, and underwent postmortem examination. Six percent of those people who had a diagnosis of MS during their lifetime didn’t actually have MS.1

 

In 2012, we did a survey of 122 MS specialists.2 We asked them if they had encountered patients incorrectly assigned a diagnosis of MS in the past year, and 95% of them had seen such a patient. This was not the most scientific study because it was just a survey and subject to recall bias. Still, many of these MS providers recalled having seen three to ten such patients in the last year where they strongly felt that a pre-existing MS diagnosis made by another provider was incorrect.

 

Another study was recently published by Dr. Kaisey.3 She looked at new patient referrals to two academic MS centers on the West Coast over 12 months. She found that, of 241 patients referred to the two MS centers with a pre-existing diagnosis of MS, almost one in five was subsequently found not to have MS. This included 17% of patients at Cedars Sinai and 19% at UCLA. That’s an alarmingly high number of patients. And that’s the best data we have right now.

 

We don’t know how representative that number is of other MS centers and clinical practice nationally, but the bottom line is that misdiagnosis of MS is, unfortunately, fairly common, and there are many risks to patients associated with misdiagnosis, as well as costs to our health care system.

 

Can you elaborate on the risks to patients?

We published a study where we looked at records from 110 patients who had been incorrectly assigned a diagnosis of MS. Twenty-three MS specialists from 4 different MS centers participated in this study that was published in Neurology in 2016.4

 

Seventy percent of these patients had been receiving disease modifying therapy for MS, which certainly has risks and side effects associated with it. Twenty-four percent received disease modifying therapy with a known risk of progressive multifocal leukoencephalopathy (PML), a frequently fatal brain infection associated with these therapies. Approximately 30% of these patients who did not have MS were on disease modifying therapy for 3 to 9 years, and 30% were on disease modifying therapy for 10 years or more.

 

Dr. Kaisey’s study also supports our findings. She found approximately 110 patient-years of exposure to unnecessary disease modifying therapy in the misdiagnosed patients in her study.3 Patients suffer side effects in addition to unnecessary risk related to these therapies.
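Patient-years of exposure are simply the sum of each misdiagnosed patient’s time on therapy, so a handful of long exposures can dominate the total. The snippet below shows only that arithmetic; the durations are invented for illustration and are not data from the cited studies.

```python
# Illustrative arithmetic only: patient-years = sum of individual exposure times.
# The durations below are invented and are not data from the cited studies.
years_on_unnecessary_dmt = [0.5, 2.0, 3.5, 7.0, 11.0, 4.0]  # one entry per misdiagnosed patient
total_patient_years = sum(years_on_unnecessary_dmt)
print(f"{len(years_on_unnecessary_dmt)} patients, "
      f"{total_patient_years:.1f} patient-years of unnecessary therapy")
```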

 

It’s also important to emphasize that many of these patients received inadequate treatment for their correct diagnoses.

 

How did the 2017 revision to the McDonald criteria address the challenge of MS diagnosis?

The problem of MS misdiagnosis is prominently discussed in the 2017 McDonald criteria.5 Unfortunately, one of the likely causes of misdiagnosis is that many clinicians may not read the full manuscript of the McDonald criteria itself. They instead rely on summary cards or reproductions—condensed versions of how to diagnose MS—which often don’t really provide the full context that would allow physicians to think critically and avoid a misdiagnosis.

 

MS is still a clinical diagnosis, which means we’re reliant on physician decision-making to confirm a diagnosis of MS. There are multiple steps to it.

 

First, we must determine if somebody has a syndrome typical for MS and objective evidence of a CNS lesion. After that, the McDonald criteria can be applied to determine whether the patient meets dissemination in time and dissemination in space and has no other explanation for the syndrome before a diagnosis of MS is made. Each of those steps relies on thoughtful clinical decision-making and may be prone to error.
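To make the sequence concrete, here is a deliberately simplified sketch of that stepwise logic. It is not a reproduction of the 2017 McDonald criteria (it omits, for example, the CSF provisions, the temporal rules for dissemination in time, and the nuance in the published glossary); the field names and thresholds are illustrative assumptions meant only to show how each step gates the next.

```python
# Deliberately simplified sketch of the stepwise logic described above.
# NOT a reproduction of the 2017 McDonald criteria; consult the full
# published criteria (reference 5) before applying them to any patient.
from dataclasses import dataclass, field

# MS-typical regions commonly considered for dissemination in space
# (simplified; note that the optic nerve is deliberately excluded).
DIS_REGIONS = {"periventricular", "cortical_juxtacortical", "infratentorial", "spinal_cord"}

@dataclass
class Workup:
    typical_syndrome: bool                 # syndrome typical for MS
    objective_cns_lesion: bool             # objective evidence corroborating it
    lesion_regions: set = field(default_factory=set)  # regions with T2 lesions
    dissemination_in_time: bool = False    # e.g., new lesion on follow-up imaging
    better_explanation: bool = False       # an alternative diagnosis fits better

def supports_ms_diagnosis(w: Workup) -> bool:
    # Step 1: the criteria apply only after a typical syndrome with
    # objective evidence of a CNS lesion is confirmed.
    if not (w.typical_syndrome and w.objective_cns_lesion):
        return False
    # Step 2: dissemination in space (simplified here as lesions in at
    # least 2 of the 4 MS-typical regions).
    dissemination_in_space = len(w.lesion_regions & DIS_REGIONS) >= 2
    # Step 3: dissemination in time, with no better explanation.
    return dissemination_in_space and w.dissemination_in_time and not w.better_explanation

example = Workup(
    typical_syndrome=True,
    objective_cns_lesion=True,
    lesion_regions={"periventricular", "spinal_cord"},
    dissemination_in_time=True,
)
print(supports_ms_diagnosis(example))  # True under this simplified sketch
```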

 

Knowing the ins and outs and the details of each of those steps is important. Reading the diagnostic criteria is probably the first step in becoming skilled at MS diagnosis. It’s important to know the criteria were not developed as a screening tool. The neurologist is essentially the screening tool. The diagnostic criteria can’t be used until an MS-typical syndrome with objective evidence of a CNS lesion is confirmed.

 

In what way were the previous criteria flawed?

I wouldn’t say any of the MS diagnostic criteria were flawed; they have evolved along with data in our field that have helped us make a diagnosis of MS earlier in many patients. When using the criteria, it’s important to understand the types of patients in the cohorts used to validate our diagnostic criteria. They were primarily younger than 50, and usually Caucasian. They had only syndromes typical for MS with objective evidence of CNS damage corroborating these syndromes. Using the criteria more broadly in patients who do not fit this profile can reduce their specificity and lead to misdiagnosis.

 

For determination of MRI dissemination in time and dissemination in space, there are also some misconceptions that frequently lead to misdiagnosis. Knowing which areas are required for dissemination in space is crucial. For example, the optic nerve currently is not an area that can be used to fulfill dissemination in space, which is a mistake people frequently make. Knowing that “juxtacortical” means touching the cortex and “periventricular” means touching the ventricle is also very important. This is a mistake that’s often made as well, and many disorders present with MRI lesions near, but not touching, the cortex or ventricle. Knowing each element of our diagnostic criteria and what those terms specifically mean is important. In the 2017 McDonald criteria there’s an excellent glossary that helps clinicians understand these terms and how to use them appropriately.5

 

 

What more needs to be done to prevent MS misdiagnosis?

First, we need to figure out how to better educate clinicians on how to use our diagnostic criteria appropriately.

 

We recently completed a study that suggests that residents in training, and even MS specialists, have trouble using the diagnostic criteria. This study was presented at the American Academy of Neurology Annual meeting but has not been published yet. Education on how to use the diagnostic criteria, and in which patients to use the criteria (and in which patients the criteria do not apply) is important, particularly when new revisions to the diagnostic criteria are published.

 

We published a paper recently that provided guidance on how to avoid misdiagnosis using the 2017 McDonald criteria and how to approach patients in whom the diagnostic criteria do not apply.6 Sometimes additional clinical, laboratory, or MRI evaluation and monitoring is required in such patients to either confirm a diagnosis of MS or determine that a patient does not have MS.

 

 

We also desperately need biomarkers that may help us diagnose MS more accurately in patients who have neurological symptoms and an abnormal MRI and are seeing a neurologist for the first time. There’s research going on now to find relevant biomarkers in the form of blood tests, as well as MRI assessments. In particular, ongoing research focused on the MRI finding we have termed the “central vein sign” suggests this approach may be helpful as an MRI-specific biomarker for MS.7,8 We need multicenter studies evaluating this and other biomarkers in patients who come to our clinics for a new evaluation for MS, to confirm that they are accurate. We need more researchers working on how to improve diagnosis of MS.

 

 

References:

 

1. Engell T. A clinico-pathoanatomical study of multiple sclerosis diagnosis. Acta Neurol Scand. 1988;78(1):39-44.

2. Solomon AJ, Klein EP, Bourdette D. "Undiagnosing" multiple sclerosis: the challenge of misdiagnosis in MS. Neurology. 2012;78(24):1986-1991.

3. Kaisey M, Solomon AJ, Luu M, Giesser BS, Sicotte NL. Incidence of multiple sclerosis misdiagnosis in referrals to two academic centers. Mult Scler Relat Disord. 2019;30:51-56.

4. Solomon AJ, Bourdette DN, Cross AH, et al. The contemporary spectrum of multiple sclerosis misdiagnosis: a multicenter study. Neurology. 2016;87(13):1393-1399.

5. Thompson AJ, Banwell BL, Barkhof F, et al. Diagnosis of multiple sclerosis: 2017 revisions of the McDonald criteria. Lancet Neurol. 2018;17(2):162-173.

6. Solomon AJ, Naismith RT, Cross AH. Misdiagnosis of multiple sclerosis: impact of the 2017 McDonald criteria on clinical practice. Neurology. 2019;92(1):26-33.

7. Sati P, Oh J, Constable RT, et al; NAIMS Cooperative. The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative. Nat Rev Neurol. 2016;12(12):714-722.

8. Sinnecker T, Clarke MA, Meier D, et al. Evaluation of the central vein sign as a diagnostic imaging biomarker in multiple sclerosis [published online ahead of print August 19, 2019]. JAMA Neurol. 2019. doi:10.1001/jamaneurol.2019.2478.


Postinflammatory Hyperpigmentation Following Treatment of Hyperkeratosis Lenticularis Perstans With Tazarotene Cream 0.1%


To the Editor:

Hyperkeratosis lenticularis perstans (HLP), or Flegel disease, is a rare keratinization disorder characterized by asymptomatic, red-brown, 1- to 5-mm papules with irregular horny scales commonly seen on the dorsal feet and lower legs.1 Hyperkeratosis lenticularis perstans is notorious for being difficult to treat. Various treatment options, including 5-fluorouracil, topical and oral retinoids, vitamin D3 derivatives, psoralen plus UVA therapy, and dermabrasion, have been explored but none have proven to be consistently effective.

A woman in her 50s presented with an asymptomatic eruption on the legs and thighs that had been present for the last 20 years. She had been misdiagnosed by multiple outside providers with atopic dermatitis and was treated with topical steroids without considerable improvement. Upon initial presentation to our clinic, physical examination of this woman with Fitzpatrick skin type II revealed multiple hyperpigmented, red-brown, 2- to 6-mm papules on the extensor surfaces of the lower legs and upper thighs (Figure, A). A 3-mm punch biopsy of a lesion on the right upper thigh revealed hyperkeratosis and parakeratosis with basal layer degeneration and a perivascular lymphocytic infiltrate. The clinical and histopathologic findings were consistent with HLP.

The patient was started on treatment with 5-fluorouracil cream on the right leg and tazarotene cream 0.1% on the left leg to determine which agent would work best. After 9 weeks of treatment, slight improvement was observed on both legs, but the lesions were still erythematous (Figure, B). Treatment was continued, and after 14 weeks complete resolution of the lesions was noted on both legs; however, postinflammatory hyperpigmentation (PIH) was observed on the left leg, which had been treated with tazarotene (Figure, C). The patient was lost to follow-up prior to treatment of the PIH.

A, On initial presentation, multiple, hyperpigmented, red-brown, 2- to 6-mm papules on the extensor surface of the legs and thighs were observed. B, After 9 weeks of treatment with 5-fluorouracil cream on the right leg and tazarotene cream 0.1% on the left leg, slight improvement was noted, but the lesions were still erythematous. C, After 14 weeks of treatment, there was complete resolution of lesions on both legs; however, postinflammatory hyperpigmentation was observed on the left leg, which had been treated with tazarotene.

Postinflammatory hyperpigmentation is an acquired excess of pigment due to a prior disease process such as an infection, allergic reaction, trauma, inflammatory disease, or drug reaction. In our patient, this finding was unusual because tazarotene has been shown to be an effective treatment of PIH.2,3

In PIH, there is either abnormal production or distribution of melanin pigment in the epidermis and/or dermis. Several mechanisms for PIH have been suggested. One potential mechanism is disruption of the basal cell layer due to dermal lymphocytic inflammation, causing melanin to be released and trapped by macrophages present in the dermal papillae. Another possible mechanism is epidermal hypermelanosis, in which the release and oxidation of arachidonic acid to prostaglandins and leukotrienes alters immune cells and melanocytes, causing an increase in melanin and increased transfer of melanin to keratinocytes in the surrounding epidermis.4

Treatment of PIH can be a difficult and prolonged process, especially when a dermal rather than epidermal melanosis is observed. Topical retinoids, topical hydroquinone, azelaic acid, corticosteroids, tretinoin cream, glycolic acid, and trichloroacetic acid have been shown to be effective in treating epidermal PIH. Tazarotene is a synthetic retinoid that has been proven to be an effective treatment of PIH3; however, in our patient the PIH progressed with treatment. One plausible explanation is that irritation caused by the medication led to further PIH.2,5



It is uncommon for tazarotene to cause PIH. Hyperpigmentation is listed as an adverse effect observed during the postmarketing experience according to one manufacturer6 and the US Food and Drug Administration; however, details about prior incidents of hyperpigmentation have not been reported in the literature. Our case is unique because both treatments showed considerable improvement in HLP, but more PIH was observed on the tazarotene-treated leg.

References
  1. Bean SF. Hyperkeratosis lenticularis perstans. A clinical, histopathologic, and genetic study. Arch Dermatol. 1969;99:705-709.
  2. Callender V, St. Surin-Lord S, Davis E, et al. Postinflammatory hyperpigmentation: etiologic and therapeutic considerations. Am J Clin Dermatol. 2011;12:87-99.
  3. McEvoy G. Tazarotene (topical). In: AHFS Drug Information. Bethesda, MD: American Society of Health-System Pharmacists, Inc; 2014:84-92.
  4. Lacz N, Vafaie J, Kihiczak N, et al. Postinflammatory hyperpigmentation: a common but troubling condition. Int J Dermatol. 2004;43:362-365.
  5. Tazorac (tazarotene) cream [package insert]. Irvine, CA: Allergan, Inc; 2013.
  6. Tazorac (tazarotene) gel [package insert]. Irvine, CA: Allergan, Inc; 2014.
Author and Disclosure Information

From the Department of Dermatology, University of Texas Medical Branch at Galveston.

The authors report no conflict of interest.

Correspondence: Kristyna Gleghorn, MD, University of Texas Medical Branch at Galveston, 4.112 McCullough, 301 University Blvd, Galveston, TX 77555 ([email protected]).



Practice Points

  • Hyperkeratosis lenticularis perstans is a rare keratinization disorder that presents with asymptomatic red-brown papules with irregular horny scales on the lower extremities.
  • Hyperkeratosis lenticularis perstans can be difficult to diagnose and treat. Hematoxylin and eosin staining generally will show hyperkeratosis and parakeratosis with basal layer degeneration and a perivascular lymphocytic infiltrate.
  • Tazarotene cream 0.1% is a synthetic retinoid sometimes used for treatment of hyperpigmentation, but it also can cause postinflammatory hyperpigmentation.

Fox Chase faculty receive grants for cancer research, education

Article Type
Changed
Mon, 09/09/2019 - 08:00

 



Faculty members at Fox Chase Cancer Center have received grants to promote education about liver cancer, study pancreatic and breast cancer, and examine burnout among physician assistants (PAs).

Eric D. Tetzlaff

Eric D. Tetzlaff, a PA at Fox Chase in Philadelphia, has received a 3-year grant from the Association of Physician Assistants in Oncology. With this $15,000 grant, Mr. Tetzlaff plans to conduct a longitudinal study that will explore burnout among PAs working in oncology.

The goals of his study are to “understand the impact of the attitudes of oncology PAs regarding teamwork, expectations for their professional role, type of collaborative practice, organizational context of the job environment, and moral distress, on burnout and career satisfaction,” according to Fox Chase.

Dr. Jaye Gardiner

Jaye Gardiner, PhD, a postdoctoral researcher in the Edna Cukierman laboratory at Fox Chase, has received a $163,500 grant from the American Cancer Society. With this grant, Dr. Gardiner will investigate the role of tumor stroma in pancreatic cancer.

Dr. Gardiner plans to explore how cancer-associated fibroblasts in the pancreatic stroma “communicate with one another and how this communication is altered in tumor-promoting versus tumor-restricting conditions,” according to Fox Chase.

Dr. Dietmar J. Kappes

Dietmar J. Kappes, PhD, a professor of blood cell development and cancer and director of the Transgenic Mouse Facility at Fox Chase, has received a 5-year grant from the National Institutes of Health. With this $626,072 grant, Dr. Kappes will investigate the role of the transcription factor ThPOK in breast cancer.

Dr. Kappes and colleagues previously found a link between high cytoplasmic levels of ThPOK and poor outcomes in breast cancer. Now, Dr. Kappes plans to “further elucidate the role of ThPOK in breast cancer by combining novel animal models and molecular approaches,” according to Fox Chase.

Evelyn González

Evelyn González, senior director of Fox Chase's Office of Community Outreach, and Shannon Lynch, PhD, who is with the Cancer Prevention and Control program, have received a 2-year grant from the Pennsylvania Department of Human Services.
 

The pair will use this $125,000 grant to provide liver cancer and hepatitis education to communities in the Philadelphia area with the greatest burden of liver cancer and related risk factors. Dr. Lynch will find these at-risk communities, and the Office of Community Outreach will work with partner groups in those areas to provide bilingual education about hepatitis and how it relates to liver cancer.

Dr. Shannon Lynch



Movers in Medicine highlights career moves and personal achievements by hematologists and oncologists. Did you switch jobs, take on a new role, climb a mountain? Tell us all about it at [email protected], and you could be featured in Movers in Medicine.


Chronic hypertension in pregnancy increased 13-fold since 1970

Article Type
Changed
Fri, 09/13/2019 - 10:17

 

The rate of chronic hypertension during pregnancy has increased significantly in the United States since 1970 and is more common in older women and in black women, according to a population-based, cross-sectional analysis.

Jovanmandic/Getty Images

Researchers analyzed data from more than 151 million women with delivery-related hospitalizations in the United States between 1970 and 2010 and found that the rate of chronic hypertension in pregnancy increased steadily over time from 1970 to 1990, plateaued from 1990 to 2000, then increased again to 2010.

The analysis revealed an average annual increase of 6% – which was higher among white women than among black women – and an overall 13-fold increase from 1970 to 2010. These increases appeared to be independent of rates of obesity and smoking. The findings were published in Hypertension.
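
As a rough consistency check (this assumes steady exponential growth over the 40-year period and is not a calculation taken from the paper), a 13-fold increase between 1970 and 2010 corresponds to an average annual growth rate of about 6.6%, broadly in line with the reported average annual increase of roughly 6%:

\[ r = 13^{1/40} - 1 \approx 0.066, \qquad (1 + 0.066)^{40} \approx 13 \]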

The rates of chronic hypertension also increased with maternal age, among both black and white women.

“The strong association between age and rates of chronic hypertension underscores the potential for both biological and social determinants of health to influence risk,” wrote Cande V. Ananth, PhD, from Rutgers University, New Brunswick, N.J., and coauthors. “The period effect in chronic hypertension in pregnancy is thus largely a product of the age effect and the increasing mean age at first birth in the U.S.”

The overall prevalence of chronic hypertension in pregnancy was 0.63%, but it was twofold higher in black women, compared with white women (1.24% vs. 0.53%). The authors noted that black women experienced disproportionately higher rates of ischemic placental disease, pregestational and gestational diabetes, preterm delivery, and perinatal mortality, which may be a consequence of higher rates of obesity, social disadvantage, smoking, and less access to care.

“This disparity may also be related to the higher tendency of black women to develop vascular disease at an earlier age than white women, which may also explain why the age-associated increase in chronic hypertension among black women is relatively smaller than white women,” they wrote. “The persistent race disparity in chronic hypertension is also a cause for continued concern and underscores the role of complex population dynamics that shape risks.”

This was the largest study to evaluate changes in the prevalence of chronic hypertension in pregnancy over time and particularly how the prevalence is influenced by age, period, and birth cohort.

In regard to the 13-fold increase from 1970 to 2010, the researchers suggested that changing diagnostic criteria for hypertension, as well as earlier access to prenatal care, may have played a part. For example, the American College of Cardiology recently modified its guidelines to classify patients with a systolic blood pressure of 130-139 mm Hg or a diastolic blood pressure of 80-89 mm Hg as having stage 1 hypertension, which they noted would increase the prevalence rates of chronic hypertension during pregnancy.

The researchers reported having no outside funding and no conflicts of interest.

SOURCE: Ananth CV et al. Hypertension. 2019 Sept 9. doi: 10.1161/HYPERTENSIONAHA.119.12968.


Abstracts Presented at the 2019 AVAHO Annual Meeting (Digital Edition)

Article Type
Changed
Wed, 09/09/2020 - 14:14
September 20-22, 2019 ► Minneapolis, MN

Can we eradicate malaria by 2050?

Article Type
Changed
Wed, 09/11/2019 - 14:22

 

A new report by members of the Lancet Commission on Malaria Eradication has called for ending malaria in Africa within a generation, specifically aiming at the year 2050.

Courtesy NIAID
This image shows a malaria-infected red blood cell.

The Lancet Commission on Malaria Eradication is a joint endeavor between The Lancet and the University of California, San Francisco. It was convened in 2017 to consider the feasibility and affordability of malaria eradication and to identify priority actions for achieving that goal, a goal the commission considered “a necessary one given the never-ending struggle against drug and insecticide resistance and the social and economic costs associated with a failure to eradicate.”

Between 2000 and 2017, the worldwide annual incidence of malaria declined by 36%, and the annual death rate declined by 60%, according to the report. In 2007, Bill and Melinda Gates proposed that controlling malaria was not enough and that complete eradication was the only scientifically and ethically defensible objective. This goal was adopted by the World Health Organization and other interested parties, and by 2015, global strategies and a potential timeline for eradication had been developed.

“Global progress has stalled since 2015 and the malaria community is now at a critical moment, faced with a decision to either temper its ambitions as it did in 1969 or recommit to an eradication goal,” according to the report.

In the report, the authors used new modeling analysis to estimate plausible scenarios for the distribution and intensity of malaria in 2030 and 2050. Socioeconomic and environmental trends, together with enhanced access to high-quality diagnosis, treatment, and vector control, could lead to a “world largely free of malaria” by 2050, but with pockets of low-level transmission persisting across a belt of Africa.

Current statistics lend weight to the promise of eventual eradication, according to the report.

Between 2000 and 2017, 20 countries – constituting about one-fifth of the 106 malaria-endemic countries in 2000 – eliminated malaria transmission within their borders, reporting zero indigenous malaria cases for at least 1 year. However, this was counterbalanced by the fact that between 2015 and 2017, 55 countries had an increase in cases, and 38 countries had an increase in deaths.

“The good news is that 38 countries had incidences of fewer than ten cases per 1,000 population in 2017, with 25 countries reporting fewer than one case per 1,000 population. The same 38 countries reported just 5% of total malaria deaths. Nearly all of these low-burden countries are actively working towards national and regional elimination goals of 2030 or earlier,” according to the report.

The analysis undertaken for the report consisted of the following four steps (a rough, illustrative sketch of how steps 1-3 might be implemented follows the list):

1. Development of a machine-learning model to capture associations between malaria endemicity data and a wide range of socioeconomic and environmental geospatial covariates.

2. Mapping of covariate estimates to the years 2030 and 2050 on the basis of projected global trends.

3. Application of the associations learned in the first step to projected covariates generated in the second step to estimate the possible future global landscape of malaria endemicity.

4. Use of a mathematical transmission model to explore the potential effect of differing levels of malaria interventions.
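
The report describes this pipeline only at the level of the four steps above. As a purely illustrative sketch of what steps 1-3 might look like (not the commission's actual code, covariates, or data; the model choice, array shapes, and trend factors below are assumptions), a generic supervised-learning approach in Python could be:

# Illustrative sketch only: a generic version of steps 1-3 described above.
# Covariates, model choice, and projection factors are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Step 1: learn associations between present-day covariates and malaria
# endemicity. Rows stand for locations; the four columns stand for
# hypothetical covariates (e.g., income, urbanization, temperature, rainfall).
X_2017 = rng.random((10_000, 4))
endemicity_2017 = rng.random(10_000)  # e.g., parasite prevalence on a 0-1 scale

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_2017, endemicity_2017)

# Step 2: project the covariates to 2050 under assumed socioeconomic and
# environmental trends (here, a crude uniform scaling per covariate).
X_2050 = X_2017 * np.array([1.5, 1.3, 1.05, 0.95])

# Step 3: apply the learned associations to the projected covariates to
# estimate a plausible future endemicity surface.
endemicity_2050 = model.predict(X_2050)
print("Projected mean endemicity, 2050:", round(float(endemicity_2050.mean()), 3))

Step 4, the mechanistic transmission model used to explore different levels of intervention, is a separate class of model and is not sketched here.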

 

 

The report indicates that annual spending of $6 billion or more is required, while current global expenditure is approximately $4.3 billion. An additional investment of $2 billion per year is necessary, with a quarter of the funds coming from increased development assistance from external donors and the rest from government health spending in malaria-endemic countries, according to the report.
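
In dollar terms, the proposed split of the additional $2 billion per year works out as follows (simple arithmetic on the figures above, not additional data from the report):

\[ 0.25 \times \$2\,\text{billion} \approx \$0.5\,\text{billion (external donors)}, \qquad 0.75 \times \$2\,\text{billion} \approx \$1.5\,\text{billion (endemic-country governments)} \]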

However, other areas of concern remain, including the current lack of effective, widely deployable technologies to counter outdoor-biting mosquitoes, though such tools are expected to become available within the next decade, according to the report.

In terms of the modeling used in the report, the authors noted that past performance does not “capture the effect of mass drug administration or mass chemoprevention because these interventions are either relatively new or have yet to be applied widely. These underestimates might be counteracted by the absence of drug or insecticide resistance from our projections, which result in overly optimistic estimates for the continued efficacy of current tools.”

The commission was launched in October 2017 by the Global Health Group at the University of California, San Francisco. The commission built on the 2010 Lancet Malaria Elimination Series, “which evaluated the operational, technical, and financial requirements for malaria elimination and helped shape and build early support for the eradication agenda,” according to the report.

SOURCE: Feachem RGA et al. Lancet. 2019 Sept 8. doi: 10.1016/S0140-6736(19)31139-0.


 
