THE CMG VOICE

AI brings hope for earlier detection of breast cancer

Any family who has experienced an unexpected cancer diagnosis understands how quickly life can change: someone can look and feel perfectly fine, go in for a routine screening test (whether a mammogram, blood test, or colonoscopy), and then suddenly everything is different. A positive test result, followed by more specific tests, confirms a diagnosis of cancer. Even in the best of circumstances, the diagnosis and treatment typically cause both physical and emotional harm. As technology advances, AI brings hope for earlier detection of breast cancer.

Recently, a breast cancer survivor who is also a researcher with years of experience in AI decided to try to apply AI to the detection – and really the prediction – of breast cancer.

Her name is Regina Barzilay, and she and her team laid out their hypothesis in a recent article in the Journal of Clinical Oncology. Essentially, they intend to prove that AI can scan through thousands and thousands of older mammograms and learn which features eventually lead to breast cancer. That would, hopefully, allow the program to read a mammogram in real time and predict with accuracy whether the patient is likely to develop breast cancer as far as five years out, before the cancer even manifests.

How does it work, exactly? In a rare example of transparency, the designers of the program don’t exactly know themselves. They just know that it does.

The team has already fed hundreds of thousands of mammograms into the program to “teach” it how to accurately predict which patients would develop breast cancer. Every time the program got one correct, it was “rewarded” by a mathematical adjustment to the model, and every time it got one wrong, it was “penalized” in similar fashion.
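For readers curious what “rewarded” and “penalized” mean in practice, here is a minimal sketch of that kind of training loop. This is not the MIT team’s actual model (theirs is a deep neural network trained on images); it is a toy logistic-regression example with made-up numeric features, where a wrong prediction produces a large corrective adjustment to the model and a correct one produces a small adjustment.

```python
import math

# Toy, hypothetical data: each "mammogram" is reduced to two numeric
# features, and the label is 1 if the patient later developed breast
# cancer, 0 otherwise. These values are illustrative only.
data = [
    ([0.9, 0.2], 1),
    ([0.1, 0.8], 0),
    ([0.8, 0.3], 1),
    ([0.2, 0.9], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

def predict(features):
    # Logistic model: returns the estimated probability (0 to 1)
    # that this patient develops breast cancer.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(200):
    for features, label in data:
        p = predict(features)
        # The "reward"/"penalty" is this error signal: when the
        # prediction is far from the truth, the adjustment is large;
        # when it is close, the adjustment is small.
        error = label - p
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

# After training, the model can score a new, unseen set of features.
risk = predict([0.85, 0.25])
```

After enough passes over the historical data, the adjustments accumulate into a model that assigns higher risk scores to feature patterns that preceded cancer in the training set.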

So far, the program has been correct about 76% of the time, and trials are about to begin to further study its abilities. If and when it is ready for prime time, there will be interesting questions about how far health care providers can rely on the information, whether it performs the same or differently across ethnic and racial groups, and whether its use is appropriate in a particular circumstance.

In the context of harm caused to patients due to wrongful medical care, it remains to be seen whether AI in this context will have an appreciable, positive impact.

You can read more about this here.