When assessing the severity of COVID-19 from lung ultrasound (LUS) frames, both anatomical features (e.g., the pleural line and the presence of consolidations) and sonographic artifacts, such as A-lines and B-lines, are important. While ultrasound devices aim to provide an accurate visualization of the anatomy, the orientation of the sonographic artifacts differs between probe types. This difference makes it challenging to design a single deep neural network capable of handling all probe types.
In this work we improve upon Roy et al. (2020): we train a simple deep neural network to assess the severity of COVID-19 from LUS data. To handle both linear and convex probes in a unified manner, we employ two strategies. First, we augment the input frames of convex probes with a "rectified" version in which A-lines and B-lines assume a horizontal/vertical orientation close to that obtained with linear probes. Second, we explicitly inform the network of the presence of important anatomical features and artifacts: we use a known Radon-based method to detect the pleural line and B-lines, and feed the detected lines as additional inputs to the network.
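The Radon transform maps bright straight lines in an image to sharp peaks in a sinogram, which is what makes it suitable for localizing the (near-horizontal) pleural line and (near-vertical) B-lines. The sketch below is illustrative only, not the authors' implementation: function names are hypothetical, and a minimal Radon transform is built from rotation plus column sums rather than a library routine.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sinogram(frame, thetas):
    """Minimal Radon transform of a grayscale frame: for each angle,
    rotate the frame and sum along the vertical axis. A bright straight
    line in the frame produces a sharp peak in the sinogram."""
    projections = []
    for theta in thetas:
        rot = rotate(frame, angle=theta, reshape=False, order=1)
        projections.append(rot.sum(axis=0))
    return np.stack(projections, axis=1)  # shape: (width, n_angles)

def dominant_line_angle(sino, thetas):
    """Angle (degrees) of the strongest line response in the sinogram.
    Near-vertical structures (B-line-like) peak near 0 degrees;
    near-horizontal structures (pleural/A-line-like) near 90 degrees."""
    _, col = np.unravel_index(np.argmax(sino), sino.shape)
    return float(thetas[col])
```

In this spirit, the angle and offset of each detected peak can be turned back into a line mask and stacked as an extra input channel alongside the raw frame.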
[Michael Roberts, Oz Frank, Shai Bagon, Yonina C. Eldar, and Carola-Bibiane Schönlieb. "AI and Point of Care Image Analysis for COVID-19." In Artificial Intelligence in Covid-19, pp. 85-119. Springer, Cham, 2022.]
[Oz Frank, Nir Schipper, Mordehay Vaturi, Gino Soldati, Andrea Smargiassi, Riccardo Inchingolo, Elena Torri, Tiziano Perrone, Federico Mento, Libertario Demi, Meirav Galun, Yonina C. Eldar, and Shai Bagon. "Integrating Domain Knowledge Into Deep Networks for Lung Ultrasound With Applications to COVID-19." IEEE Transactions on Medical Imaging (2021).]
[A recorded talk at Acoustics Virtually Everywhere, the 179th Meeting of the Acoustical Society of America.]