At ECR 2023, I had the privilege of joining a panel session of clinicians and technology experts from across the radiology profession to discuss some of the myths and realities surrounding the sharing of data within our industry. The panel debated topics such as data being the new currency in healthcare, the legality and boundaries of handling sensitive patient data, and the important distinction between personalised and anonymised data. Our open discussion also sparked some intriguing questions from the audience; so intriguing, in fact, that we wanted to share these insights and explore their importance for the future of AI in radiology in more detail.
Tackling this issue starts with elaborating on the concept of data selling versus revenue sharing. The latter is the preferred route, as it ensures everyone is properly reimbursed, but both lawmakers and healthcare organisations need to make such a scheme attractive enough for hospitals to see the benefits and actively participate. It's important to remember, however, that the patient must always come first in this type of discussion. Patient data may be the new healthcare currency, but that does not mean it is up for sale. Let alone up for grabs.
It's also important to see the benefits for the patient - both in how their data can actively help everyone through the development of more effective tools and through reimbursement - and for the radiologist (and the wider hospital setting), where such data can lead to profoundly better workflows.
If a hospital invests in the power of data and develops its own software tools, there's an opportunity for revenue and IP sharing. These tools could be shared for free, but hospitals will want to see a monetary benefit to making their tools and research available to institutions that haven't spent the resources to make them happen.
There's also the challenge of data protection. The laws surrounding the use of data differ between countries, and many present obstacles to the development of AI. Lawmakers aren't future-proofing legislation to support the growth of tools such as these. Medical data flows more freely in the US, but without more stringent oversight, there's a greater risk of data being accidentally leaked or misplaced.
All data has a value, and all data has a cost. How those two factors are ultimately calculated, and how one affects the other, remains a topic of continued debate. Data has a value that's inherent to the patient themselves, which we discuss under question three below, but that value is also shaped by the hospital through the work and research conducted using that data.
For AI developers, data siloed within a hospital is useless; they need access to that information to develop their platforms further. The additional challenge, then, is incentivising hospitals to see the benefits of selling data and to understand the legalities surrounding data sharing within healthcare. Hospitals need to see the value in sharing siloed data, both from an ethical standpoint and in how it can financially benefit the institution.
As we’ve discussed, there’s an argument to be made that the value of data is inherent to the patient themselves and that if their data is sold, then they should be reimbursed as part of that data transaction. There's also the issue of insurance and how those payments can be attributed to the overall cost of that data.
This becomes even more complex when you sell large packets containing clinical data related to thousands of patients. From an administrative standpoint, is it possible to calculate the data’s value based on its sale and reimburse each patient individually? Even if a patient signs an acknowledgment that their data may be shared, how do you determine the value of one patient’s data over another?
The problem grows more complex when you consider that a signed consent form may not cover all future testing and development methods. The key here is to gain broader consent: an agreement recognising that certain data holds an ethical benefit for all of humanity and that consenting to future development is a noble gesture.
Bonus – understanding the difference between personalised and anonymised data
Patient data is highly sensitive, so it demands consistent and reliable protection. But it’s important to dispel some of the myths surrounding the handling of such information, especially if it ultimately benefits the clinician and the care they can deliver. Sharing sensitive data is perfectly normal and something we shouldn’t shy away from. Data protection regulations are there to protect data, not isolate it.
It’s also vital to understand the difference between patient data and machine data, where one covers the very personal medical records of a patient, while the other is a packet of data covering the performance of a particular piece of hardware or software. For instance, the new EU Data Act covers the latter and ensures that any data that can benefit society and the economies of the European Union flows freely. But all these regulations still add complexity when it comes to receiving, handling, and sending both types of data.
The concept many have begun to accept is the receipt of consented or anonymised data that can be used to improve systems such as decision-making tools and software applications. Gaining that consent is vitally important: not only does it ensure a medical practice is handling data legally and with full transparency to the subject of that data, but it also gives the patient insight into how that data will ultimately improve care for everyone.
If you'd like to hear more about this topic, you can visit the ECR website and access a full recording of the discussion.
Better yet, follow us on LinkedIn to check in on the progress of the Odelia project when it comes to data sharing to train AI models.
Main image disclosure: generated with DALL·E (2023-04-07), depicting MRI data being exchanged between servers.