Handheld device highly sensitive in detecting amblyopia; can be used in children as young as 2 years of age

A handheld vision screening device to test for amblyopia and strabismus has been found to have a sensitivity of 100%, a specificity of 85%, and a median acquisition time of 28 seconds, according to a study published in the Journal of American Association for Pediatric Ophthalmology and Strabismus.

The prospective study involved 300 children recruited from two Kaiser Permanente Southern California pediatric clinics. The patients, aged 24-72 months, were first screened for amblyopia and strabismus by trained research staff using the device, called the Pediatric Vision Scanner (PVS). Each child then underwent a comprehensive eye examination by a pediatric ophthalmologist who was masked to the screening results.

With the gold-standard ophthalmologist examination, six children (2%) were identified as having amblyopia and/or strabismus. The PVS identified all six of these children, yielding a sensitivity of 100%. PVS findings were falsely positive for 45 children (15%), yielding a specificity of 85%. The positive predictive value was 26.0% (95% confidence interval, 12.4%-32.4%), and the negative predictive value was 100% (95% CI, 97.1%-100%).
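
To see how these four measures relate to the underlying counts, the short Python sketch below computes them from a standard 2x2 screening table. The counts in the example are hypothetical and are not the study's raw data; the function is ours, for illustration only.

```python
# Minimal sketch: screening-test metrics from a 2x2 table.
# The tp/fp/tn/fn counts below are hypothetical, not the study's raw data.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return sensitivity, specificity, PPV, and NPV as fractions of 1."""
    return {
        "sensitivity": tp / (tp + fn),  # affected children the test flags
        "specificity": tn / (tn + fp),  # unaffected children the test passes
        "ppv": tp / (tp + fp),          # flagged children who truly have disease
        "npv": tn / (tn + fn),          # passed children who truly do not
    }

# Hypothetical example: 8 of 10 affected children flagged, and 12 of 190
# unaffected children flagged in error.
print(screening_metrics(tp=8, fp=12, tn=178, fn=2))
# {'sensitivity': 0.8, 'specificity': 0.936..., 'ppv': 0.4, 'npv': 0.988...}
```

Because positive predictive value falls as disease prevalence falls, a highly sensitive and reasonably specific test can still have a modest PPV when only about 2% of the screened children actually have the condition, which is the trade-off Dr. Shah addresses below.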

The findings suggest that the device could be used to screen for amblyopia, according to Shaival S. Shah, MD, the study’s first author, who is a pediatric ophthalmologist and regional section lead of pediatric ophthalmology, Southern California Permanente Medical Group.

“A strength of this device is that it is user friendly and easy to use and very quick, which is essential when working with young children,” said Dr. Shah in an interview. He noted that the device could be used for children as young as 2 years.

Dr. Shah pointed out that, because the children were recruited from pediatricians’ offices, the study reflects more of a “real-world setting” than if they had been recruited from a pediatric ophthalmology clinic.

Dr. Shah added that, with a negative predictive value of 100%, the device is highly reliable at informing the clinician that amblyopia is not present. “It did have a positive predictive value of 26%, which needs to be considered when deciding one’s vision screening strategy,” he said.

A limitation of the study is that there was no head-to-head comparison with another screening device, noted Dr. Shah. “While it may have been more useful to include another vision screening device to have a head-to-head comparison, we did not do this to limit complexity and cost.”

Michael J. Wan, MD, FRCSC, a pediatric ophthalmologist at The Hospital for Sick Children (SickKids) in Toronto and an assistant professor at the University of Toronto, told this news organization that the device has multiple strengths, including a quick acquisition time and an excellent detection rate for amblyopia and strabismus in children as young as 2 years.

“It is highly reliable at informing the clinician that amblyopia is not present,” said Dr. Wan, who was not involved in the study. “The PVS uses an elegant mechanism to test for amblyopia directly (as opposed to other screening devices, which only detect risk factors). This study demonstrates the impressive diagnostic accuracy of this approach. With a study population of 300 children, the PVS had a sensitivity of 100% and specificity of 85% (over 90% in cooperative children). This means that the PVS would detect essentially all cases of amblyopia and strabismus while minimizing the number of unnecessary referrals and examinations.”

He added that, although the study included children as young as 2 years, only 2.5% of the children were unable to complete the PVS test. “Detecting amblyopia in children at an age when treatment is still effective has been a longstanding goal in pediatric ophthalmology,” said Dr. Wan, who described the technology as user friendly. “Based on this study, the search for an accurate and practical pediatric vision screening device appears to be over.”

Dr. Wan said it would be useful to replicate this study with a different population to confirm the findings.

Dr. Shah and Dr. Wan disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Data sharing to improve AI used in breast-imaging research

A large dataset of digital breast tomosynthesis (DBT) images should help advance the artificial intelligence (AI) algorithms used for breast cancer imaging, researchers report.

The curated dataset, which consists of 22,032 DBT volumes associated with 5,610 studies from 5,060 patients, was published online in JAMA Network Open. The studies were divided into four types: normal studies (91.4%), actionable studies that required additional imaging but no biopsy (5.0%), benign biopsied studies (2.0%), and studies that detected cancer (1.6%).

To develop and evaluate their deep-learning model for the detection of architectural distortions and masses, the researchers used a test set of 460 studies from 418 patients with cancer. Their algorithm reached a breast-based sensitivity of 65% at two false positives per DBT volume.
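
The phrase “at two false positives per DBT volume” describes an operating point on a free-response ROC curve: the detection threshold is chosen so that the model raises, on average, no more than two spurious findings per volume, and sensitivity is then measured at that threshold. The sketch below illustrates that bookkeeping with hypothetical candidate scores and labels; it is not the study’s code or data, and for simplicity it computes candidate-level rather than breast-based sensitivity.

```python
# Minimal sketch: sensitivity at a fixed false-positive budget per volume.
# Scores, labels, and volume IDs are hypothetical; this is not the study's
# code or data, and it uses candidate-level (not breast-based) sensitivity.
import numpy as np

def sensitivity_at_fp_per_volume(scores, is_true_lesion, volume_ids,
                                 max_fp_per_volume=2.0):
    """scores: model confidence for each candidate finding;
    is_true_lesion: 1 if the candidate matches an annotated lesion, else 0;
    volume_ids: which DBT volume each candidate came from."""
    scores = np.asarray(scores, dtype=float)
    truth = np.asarray(is_true_lesion, dtype=int)
    n_volumes = len(set(volume_ids))
    n_lesions = int(truth.sum())

    best_sensitivity = 0.0
    # Sweep thresholds from strict to lenient; keep the highest sensitivity
    # whose false-positive count still fits the per-volume budget.
    for threshold in np.sort(np.unique(scores))[::-1]:
        kept = scores >= threshold
        fp_per_volume = (kept & (truth == 0)).sum() / n_volumes
        if fp_per_volume <= max_fp_per_volume:
            best_sensitivity = (kept & (truth == 1)).sum() / n_lesions
    return best_sensitivity

# Hypothetical candidates pooled from three volumes:
print(sensitivity_at_fp_per_volume(
    scores=[0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3],
    is_true_lesion=[1, 0, 1, 0, 0, 1, 0],
    volume_ids=["v1", "v1", "v2", "v2", "v3", "v3", "v3"],
))
```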

“The main focus of this publication is on the dataset, rather than on a specific hypothesis,” said principal researcher Maciej A. Mazurowski, PhD, scientific director of the Duke Center for Artificial Intelligence in Radiology in Durham, N.C.

“We have publicly shared a large dataset of digital breast tomosynthesis images, which are sometimes referred to as 3D mammograms, for more than 5,000 patients. There are two purposes for sharing data like these. One is to improve research and development of machine-learning algorithms. You can train models with these data. The other reason, maybe even more important, is to provide a benchmark to test algorithms,” he said in an interview.

The large-scale sharing of data is a key step toward transparency in science, said Dr. Mazurowski. “It is about making sure results can be easily reproduced and setting benchmarks.”

The dataset includes masses and architectural distortions that were annotated by two experienced radiologists, but does not include annotations for calcifications and/or microcalcifications.

This lack of calcifications is a limitation of the study, said Jean Seely, MD, professor of radiology at the University of Ottawa, who is president of the Canadian Society of Breast Imaging and regional lead for the Ontario Breast Screening Program.

“About 45% of invasive breast cancers are diagnosed based on calcifications,” she explained.

Still, although the sensitivity of the AI algorithm was not high (65%) – the average sensitivity of 2D mammography is 85% – the researchers should be commended for releasing such a large dataset, said Dr. Seely.

“The fact that they have made it publicly available is very, very useful,” she said, adding that the dataset can be leveraged in future breast-imaging research.

Although DBT is much better at identifying breast cancers than mammography, DBT exams take about 30% more time to read.

“There’s a lot of work being done in artificial intelligence in breast imaging to not only improve the workflow for breast radiologists, but also to help with the diagnosis and detection,” she noted. “Anything that helps improve the confidence and the accuracy of the radiologist is really what we’re aiming for right now.”

The size and the content of this dataset will contribute to breast-imaging research, said Jaron Chong, MD, of the department of medical imaging at Western University in London, Ontario, who is chair of the AI Standing Committee at the Canadian Association of Radiologists.

“The contribution could be valuable in the long term because DBT is a rare dataset in comparison to conventional 2D mammography,” said Dr. Chong. “Most existing datasets have focused on two-dimensional imaging. We might see more research papers reference this dataset in the future, iterating and improving upon this article’s algorithm performance.”

Dr. Mazurowski reports serving as an adviser to Gradient Health. Dr. Seely is an unpaid principal investigator for the Ottawa site of the Tomosynthesis Mammographic Imaging Screening Trial (TMIST). Dr. Chong has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
