
Challenging prospects

Deep learning-based CADs in the healthcare sector hold promise, but concerns around explainability, data shortage, and data breaches need to be addressed beforehand


Healthcare facilities and organisations are gradually shifting their focus towards the application of artificial intelligence (AI) in disease diagnosis and patient treatment. Deep learning, a branch of AI, trains prediction models using enormous amounts of data and sophisticated algorithms. Their deep, hierarchical structures allow these algorithms to learn non-linear patterns in data with great accuracy and precision. Deep learning research has shown promising results in biological image processing, disease identification, and the development of therapeutic systems for surgical and preoperative assistance. Furthermore, deep learning has made huge strides in the past several years in how quickly and precisely machines interpret and handle large amounts of data. The need of the hour is thus to explore the implementation of deep learning-based computer-aided diagnostic (CAD) systems in the medical and healthcare industries, and to identify the socio-technical challenges related to trust and legal issues, explainability, and reliability.
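
At its core, such a CAD system is a trained classifier that maps a medical image to a diagnostic prediction. The following is a minimal sketch of that idea, assuming PyTorch; the tiny architecture and the two-class (normal/abnormal) task are illustrative assumptions, not a production diagnostic model.

```python
# Minimal sketch of a deep learning-based CAD classifier, assuming PyTorch.
# The architecture and the two-class task are illustrative only.
import torch
import torch.nn as nn

class TinyCAD(nn.Module):
    """A small convolutional classifier for single-channel medical scans."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64 -> 32
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCAD()
scan = torch.randn(1, 1, 128, 128)      # one stand-in 128x128 scan
logits = model(scan)
probs = torch.softmax(logits, dim=1)    # diagnostic class probabilities
print(probs)
```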

Explainability

A good understanding of medical data, techniques for processing it, and the use of CAD systems is required to use resources and data in the healthcare sector efficiently, and to produce outcomes that can be trusted. A CAD system takes the user's medical records as input and generates a diagnostic prediction using deep learning models that engineers have trained. While the models are optimised to resist a range of attacks and attenuations, most proposed AI systems are considered "black box" models that lack explanatory power. There is a growing effort to create medically explainable artificial intelligence (XAI) systems that show a human how the CAD system reached a decision. That means the inner workings and predictions of models are made openly available so that human analysts can correctly interpret them to build confidence in the model's predictions, find errors or omissions, and identify potential limitations in the model. Embedding an XAI module in the CAD system provides users with both numerical and visual explanations, affirming transparency and building trust. To support the expansion of CAD research, users are given the option to contribute their data to the healthcare facility's data-storage cloud, and they can report misdiagnoses to the development team for evaluation and improvement of the CAD system. Although CADs are trained on real data and thoroughly tested in the field before being deployed in the medical industry, they are retrained multiple times on user feedback to gain user confidence, and the performance of the retrained models is continuously evaluated under the guidance of medical practitioners. Once a model is standardised, ordinary people can readily interpret it and compare it with other reliable methods.
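
A common way to produce the visual explanations described above is to highlight which pixels most influenced the model's prediction. The sketch below shows one of the simplest such XAI techniques, a gradient-based saliency map; it assumes a PyTorch classifier such as the TinyCAD sketch above, and stands in for more elaborate explanation methods.

```python
# Sketch of a gradient-based saliency map, one of the simplest visual
# XAI techniques; works with any PyTorch image classifier.
import torch

def saliency_map(model, scan: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d(class score)/d(pixel)|: how strongly each pixel influenced
    the diagnosis, which can be rendered as a heatmap for the user."""
    scan = scan.clone().requires_grad_(True)
    score = model(scan)[0, target_class]    # score of the diagnosed class
    score.backward()                        # gradients w.r.t. input pixels
    return scan.grad.abs().squeeze()        # per-pixel influence

# Hypothetical usage with the earlier sketch:
# model, scan = TinyCAD(), torch.randn(1, 1, 128, 128)
# heatmap = saliency_map(model, scan, target_class=0)  # overlay on the scan
```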

Insufficient data

Despite regular validation of AI-based CADs, deep neural networks can produce incorrect classifications when their inputs are noisy. Prediction results are often impacted when medical images are significantly degraded by noise, losses, and motion artefacts during acquisition. Multiplicative noise is often present in biomedical imaging modalities such as MRI, computed tomography, ultrasound, and positron emission tomography (PET). Denoising changes the spatial and temporal distribution as well as the contrast of medical images, so a significant amount of detail is lost. Robustness also matters for trust: if AI systems are not well protected against malicious attacks, this can cause distrust towards AI among ordinary people. If a malicious third party tries to introduce some form of adversarial attack, CADs must be prepared to deal with it; without resistance to these attacks, users will lose trust in CAD. Meanwhile, linear algorithms cannot eliminate signal-dependent multiplicative noise, which is mainly described by complex Rayleigh and Gamma models. In addition, because both the noise and the edges of medical images contain high frequencies, linear filters often do not provide sufficient performance when denoising. Eliminating artefacts poses a major challenge for researchers, engineers, and biomedical professionals working on healthcare decision-support systems.
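
To make the filtering point concrete, the sketch below contrasts a linear mean filter with a non-linear median filter on an image corrupted by Gamma-distributed multiplicative noise. It is a toy illustration using NumPy and SciPy; the synthetic image, the noise parameters, and the error measure are all assumptions chosen for demonstration.

```python
# Toy contrast of a linear filter vs a non-linear filter on
# signal-dependent multiplicative (speckle) noise, using NumPy/SciPy.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                           # bright toy "organ"

# Multiplicative Gamma noise: observed = clean * noise, with E[noise] = 1
noise = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)
noisy = clean * noise

linear = ndimage.uniform_filter(noisy, size=5)      # linear mean filter
nonlin = ndimage.median_filter(noisy, size=5)       # non-linear median filter

def edge_error(img: np.ndarray) -> float:
    """Mean absolute error in a strip around the organ's right edge,
    where linear averaging tends to blur the high-frequency boundary."""
    return float(np.abs(img - clean)[:, 92:100].mean())

print(f"edge error, linear: {edge_error(linear):.3f}")
print(f"edge error, median: {edge_error(nonlin):.3f}")
```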

Users of deep learning-based CADs may also receive erroneous diagnostic findings because of the unavailability of data, or the use of imbalanced data, during training. An ideal medical image dataset should have the right metadata, identifiers, and image volumes. Images should be properly annotated with ground truths and be distributed under appropriate licences within the deep learning research community. Captions and data generated by the imaging method are part of the metadata. Medical imaging data is time-consuming and expensive to collect, and a large volume of authentically annotated medical imaging data is still lacking, which hinders the development of deep learning in medical imaging. To overcome this scarcity, researchers have begun creating synthetic datasets. The deep learning model treats the new images as unseen data, yet they still increase the effective volume of the image dataset.
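
One simple route to such synthetic data is to generate plausible variants of the scarce images that already exist. The sketch below illustrates this with basic augmentations, assuming torchvision; the specific transforms and parameters are illustrative, and real projects may instead rely on generative models to synthesise entirely new images.

```python
# Sketch of expanding a scarce medical-image dataset with simple
# synthetic variants (flips, rotations, intensity jitter); assumes
# torchvision. Parameters here are illustrative assumptions.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),       # small anatomical tilt
    transforms.ColorJitter(brightness=0.2),      # scanner-intensity drift
])

scan = torch.rand(1, 128, 128)                   # one stand-in scan
synthetic = [augment(scan) for _ in range(8)]    # 1 image -> 8 variants
print(len(synthetic), synthetic[0].shape)
```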

Privacy and security

One of the most precious resources in the healthcare system is health data, which contains the important health details of every person who has ever registered for treatment or diagnostics at a public or private health institution. Medical data has been digitally archived in recent years, and sharing it is governed by several regulations. The German government (Bundestag) approved the Patient Data Protection Act (PDSG) in 2020 to completely automate the country's healthcare sector. Medical data has lately become easier to access thanks to several databases, organisations, and websites, including Harvard Dataverse, UK Biobank, Kaggle, and IEEE Dataport. Registered data scientists can use the licensed data these organisations trade to train machine learning and deep learning algorithms. However, these medical databases can be compromised by external malware, so the systems holding them require anti-virus programmes and strong cyber-security measures such as effective password policies, firewalls, and penetration testing. If malicious programmes steal personal health information, patients may experience harassment, cyberbullying, paranoia, or mental distress. Regular privacy audits can ensure user trust and proper use, and security standards must protect users against unauthorised use.
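
One basic safeguard in this direction is to pseudonymise patient identifiers before records ever reach a shared research database. The sketch below shows the idea with a keyed hash, using only Python's standard library; the key and record fields are hypothetical, and a real deployment would pair this with proper key management and access control.

```python
# Sketch of pseudonymising patient identifiers before sharing records.
# A keyed hash (HMAC) is used so identifiers cannot be reversed or
# guessed without the secret key; key management is assumed elsewhere.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"    # hypothetical key

def pseudonymise(patient_id: str) -> str:
    """Map a real identifier to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient": pseudonymise("MRN-001234"), "diagnosis": "pneumonia"}
print(record)
```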

While deep learning in healthcare encounters challenges, the future of AI in healthcare holds promise, from disease diagnosis to AI-assisted robotic surgical procedures. Deep learning in medical imaging cannot progress unless perception-based AI is used to build public confidence. While it is crucial to explain to the user how the system operates, how the model is created, and what effect it has, deep learning-based designs also need to become adaptable, taking into account quality deterioration over time. Beyond addressing AI explainability challenges, deep learning-based designs must be transparent without sacrificing user privacy and security, and without bias in how data is collected, labelled, and processed and how models are operated. Furthermore, by merging responsibility with XAI, the technical, ethical, and social components of deep learning-based design for medical image analysis may be maximised for better decision-making and usability.

The writer is Associate Professor, Dept of Computer Science, Techno International New Town. Views expressed are personal
