Revolutionizing Endometrial Cancer Classification: The Impact of the Endometrial ProMisE Classifier

NSMP endometrial cancer is the most common molecular subtype of endometrial cancer, and although most patients with this subtype have excellent outcomes, there is tremendous variability in prognosis. Earlier this year, we published a study demonstrating that two key features, tumor grade and estrogen receptor status, enable stratification of clinical outcomes within NSMP endometrial cancers and can be used to direct care.

To discuss the development of the Endometrial ProMisE Molecular Classifier and the new NSMP study, we are delighted to have Dr. Jessica McAlpine and Dr. Amy Jamieson, esteemed gynecologic cancer surgeons at Vancouver General Hospital, who have been instrumental in the development and implementation of this innovative tool.

Together, they will share insights on how this classifier is revolutionizing the classification of endometrial cancer and its clinical significance for patients.

Note: this conversation has been edited for length and clarity.


Q1 – What is the Endometrial ProMisE Classifier and how did we get here?

Dr. Jessica McAlpine: The traditional system of categorizing tumors based on histomorphology, or how they look under a microscope, has worked well in ovarian cancer but has been problematic in endometrial cancer. Expert pathologists often disagree on how to classify tumors, particularly high-grade ones, leading to inconsistent pathology diagnoses, challenges in interpreting treatment efficacy in clinical trials, and an inability to provide accurate prognostic information. This hindered research and patient care, and highlighted the need for a more objective and reproducible classification system.

In 2013, a breakthrough came with The Cancer Genome Atlas (TCGA) project, a study that provided in-depth molecular profiling of endometrial cancer. However, the methods used by the TCGA were not practical and could not be transferred to the clinic. Our team set out to develop a pragmatic classifier that could use standard pathology material to identify four molecular subtypes of endometrial cancer with prognostic value. The four subtypes: 1) POLE mutated (POLEmut), 2) mismatch repair deficient (MMRd), 3) p53 abnormal (p53abn), and 4) no specific molecular profile (NSMP). We utilized simple immunohistochemistry and focused sequencing, and with the help of Imagia Canexia Health, incorporated the POLE test. Taken together, these molecular features enable classification and can be assessed reliably by any pathology lab.
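In published descriptions of ProMisE, the three test results are applied in a fixed order of precedence to assign a single subtype to each tumor. A minimal sketch of that decision logic (illustrative only; the clinical assay interpretation is more nuanced than three booleans):

```python
def promise_subtype(pole_mutated: bool, mmr_deficient: bool, p53_abnormal: bool) -> str:
    """Assign a ProMisE molecular subtype from three test results.

    The decision is sequential: a pathogenic POLE mutation takes
    precedence, then MMR deficiency by immunohistochemistry, then
    abnormal p53 staining; tumors with none of these features are
    'no specific molecular profile' (NSMP).
    """
    if pole_mutated:
        return "POLEmut"
    if mmr_deficient:
        return "MMRd"
    if p53_abnormal:
        return "p53abn"
    return "NSMP"
```

The sequential order matters: a tumor can carry more than one feature, and the precedence rule is what makes the assignment unambiguous and reproducible across labs.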


Q2 – What was the clinical significance of the Endometrial ProMisE Classifier and its impact on patient care?

Dr. Amy Jamieson: The clinical significance of the Endometrial ProMisE Classifier has been tremendous. The World Health Organization recommended its incorporation into routine endometrial cancer pathology reporting in 2020, and that same year the European clinical management guidelines changed to incorporate molecular classification into treatment decisions, with other international organizations (NCCN, FIGO) following. Molecular classification has impacted patient care by guiding preoperative imaging, surgical decisions, and treatment plans.


Q3 – Can you walk us through the highlights of your most recent publication in Modern Pathology?

Dr. Amy Jamieson: The aim of the study was to identify key features associated with outcomes in endometrial cancers with a specific molecular subtype called NSMP, which has diverse features and clinical outcomes. The study looked at over a thousand cases of NSMP endometrial cancers and analyzed various clinical, pathological, immunohistochemical, and genetic features.

Through this analysis, we found that two critical features, tumor grade and estrogen receptor status, could stratify prognosis. Patients in the low-risk NSMP subset, defined as grade 1 or 2, estrogen receptor-positive tumors, had excellent outcomes, with a very low five-year disease recurrence rate of 1.6% across all stages and 1.4% for stage 1. The majority (84%) of NSMP endometrial cancers fell into this low-risk group, which is exciting as it represents a large proportion of patients with excellent outcomes. Combining these 'low-risk NSMP' cases with one of the other, less common molecular subtypes, POLEmut endometrial cancers, together they encompass 50% of all endometrial cancer patients with excellent outcomes who may not need any adjuvant treatment (i.e., who do not need chemotherapy or radiation).

Dr. Jessica McAlpine: Exactly, and adding to that, I think this hints at the next frontier in care. We're so focused on making sure that we treat those patients who need additional therapy, but you cannot overstate the impact of not having to give someone toxic radiation and toxic chemotherapy with long-term consequences. As clinicians, it makes us really excited to be able to safely identify people who, after surgery, could step away from any additional therapy and simply be followed with routine visits.
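The low-risk stratification rule described above reduces to a simple conjunction in code. This is a sketch for illustration only; real risk assessment involves the full molecular classification and clinicopathologic review:

```python
def is_low_risk_nsmp(grade: int, er_positive: bool) -> bool:
    """Low-risk NSMP per the study: grade 1 or 2 AND estrogen
    receptor (ER)-positive. All other NSMP tumors fall outside the
    low-risk subset and may warrant additional therapy."""
    return grade in (1, 2) and er_positive
```

Per the study, this rule captured 84% of NSMP cases, with a five-year recurrence rate of only 1.6% across all stages.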


The ProMisE classifier has revolutionized the classification of endometrial cancer, providing a more objective and reproducible system that has a significant impact on patient care. With its incorporation into clinical practice and guidelines, it has become an invaluable tool for gynecological cancer surgeons, medical oncologists and radiation oncologists in making informed treatment decisions for patients with endometrial cancer. The future looks promising as more countries and organizations adopt this innovative classifier to ensure access and equity, further improving patient outcomes in the field of gynecologic cancers.


Thank you Dr. Jamieson and Dr. McAlpine for taking the time to discuss this important topic. For additional information about this study, please reach out to

Poster sessions at AACR: a recap

We were grateful for the opportunity to attend and present at the AACR Annual Meeting last week, where three of our scientists highlighted some of the exciting work happening in our Vancouver lab.

1- Method for identifying microsatellite instability high DNA abnormality samples, presented by Rosalia Aguirre-Hernandez, Computational Biology Senior Manager

Sequencing costs can add up quickly for tissue biopsies. This poster presents a machine learning algorithm that identifies microsatellite instability (MSI) samples without the use of matched normal tissue, which typically doubles the cost of next-generation sequencing.

Tissue samples with high microsatellite instability (MSI-H) can indicate cancerous tumors that are sensitive to certain types of cancer treatments. The team at ICH developed a method for classifying a tissue sample as MSI-H without having to use normal tissue from the same person, thus decreasing the cost of sequencing. The MSI detection algorithm the team developed can accurately identify samples with MSI-H tumors. When the algorithm is used in a clinical setting, these patients can be directed to treatments such as immune checkpoint inhibitors.
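The ICH algorithm itself is not described in detail here, but tumor-only MSI callers commonly work by scoring each microsatellite locus against a baseline built from unmatched normal samples and calling MSI-H when the fraction of unstable loci crosses a threshold. A toy sketch under that general assumption (the scoring and the 20% threshold are illustrative, not ICH's actual parameters):

```python
def classify_msi(unstable_flags: list[bool], threshold: float = 0.2) -> str:
    """Call MSI status from per-locus instability flags, tumor-only.

    unstable_flags: one boolean per microsatellite locus, True if that
    locus' repeat-length distribution deviates from a panel-of-normals
    baseline. MSI-H is called when the unstable fraction reaches the
    threshold; otherwise the sample is microsatellite stable (MSS).
    """
    if not unstable_flags:
        raise ValueError("no microsatellite loci evaluated")
    fraction = sum(unstable_flags) / len(unstable_flags)
    return "MSI-H" if fraction >= threshold else "MSS"
```

The appeal of this family of approaches is exactly what the poster highlights: the per-patient normal sample, and the sequencing cost that comes with it, is replaced by a reusable baseline.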

During the conference, we received several questions asking if our algorithm was publicly available. If you would like to learn more about this study or discuss how we can work together, please reach out to

Click here to view the poster


2- Liquid biopsy testing in metastatic or advanced breast cancer patients during the COVID-19 pandemic, presented by Benjamin Furman, Senior Bioinformatician

Through Project ACTT (Access to Cancer Testing and Treatment), we provided free liquid biopsy testing to a large number of Canadian cancer patients at a time when traditional testing was difficult to access during the COVID-19 pandemic. The data from these samples allows us to develop insights into the mutational landscape of various cancer types.

This particular poster highlighted a population of Canadian women with metastatic breast cancer who were tested using a liquid biopsy gene panel to identify biomarkers that could match them with targeted therapies. Over 50% of the samples were identified as hormone receptor-positive, with greater than 60% harboring PIK3CA and ESR1 ctDNA mutations. Studies have shown that metastatic PIK3CA-mutated, ER-positive/HER2-negative tumors are predicted to respond to alpelisib therapy, which has FDA and Health Canada approval. This means that patients with these mutations can be matched with the targeted therapy for treatment.

With help from our partners, we will continue to offer free testing to Canadians with certain cancer types. To build on this work, we hope to explore this data in more detail, contributing to the scientific field of cancer genetics.

Click here to view the poster


3- Development of a one-step molecular classifier for endometrial carcinoma using an amplicon-based gene panel and next generation sequencing technology, presented by Melissa McConechy, Principal Scientist

The original ProMisE (pragmatic molecular classification tool) test requires multiple steps to derive the molecular subtypes for endometrial cancer. Clinically, tests can be performed at multiple centres and at different times, and results may not reach patients until after treatment has already begun. This defeats the purpose of molecular testing for endometrial cancer, since results can help inform more effective treatment decisions. This poster presents a study conducted by the ICH team using a one-step DNA-based test, Find It. Results were compared alongside ProMisE test results with the goal of recapitulating the prognostic value of ProMisE by producing concordant results with a single test. Results showed that Find It alone can perform the molecular classification, replacing two separate tests.

A paper about this study is also being developed and will soon be submitted for publication.

Click here to view the poster


For more information about any of these posters and studies, or if you’d like to discuss how we can work together to bring these tests to your lab, hospital, or cancer care center, please reach out to

Cutting-edge precision oncology, where the patient is

The theme for this year’s World Cancer Day is Close the Care Gap. Health equity is a pillar of our work at Imagia Canexia Health. So today our Chief Medical Officer, Dr. David Huntsman, is opening the conversation to showcase how our community-centered approach to cancer care is improving access to genomic-driven cancer care for all.

Geographic inequity

There are myriad reasons why outcomes for patients vary significantly between urban and rural communities (link). A key driver of geographic inequity is baked into current practice models. Current cancer management revolves around substantial physical infrastructure, including specialized surgical centers, radiation oncology bunkers, imaging services, and clinical laboratories. Not only does distance to these centers create inconvenience for patients, sample testing and distribution, and care teams, but as centralized systems, they are inherently vulnerable to delays and cancellations, an issue greatly amplified during the COVID-19 pandemic (link).

A decentralized model of cancer care

As cancer treatments, especially targeted therapies and immunotherapy, improve, there is a huge opportunity to change this model. To bring treatment to the patient. To enable outcomes independent of geography. To paraphrase Siddhartha Mukherjee (link), one of the greatest challenges of 21st-century medicine is to bring our most advanced medical knowledge and technology to every person on the planet, where and when they need it.

For a patient, time and precision are the most valuable assets. Time to diagnosis is critical, as is timely identification of the right treatment and effective monitoring (link). Liquid biopsies (link) and locally performed next-generation sequencing technologies (link) are essential tools to transform many of the steps in this journey, making precision oncology a reality in community-based care.

Locally delivered precision oncology

These two key technologies are transforming reliance on urban academic centers into a distributed, robust, and flexible care model. For patients and physicians, locally delivered liquid biopsies offer the benefits of precision, speed, and reduced inconvenience. Simple blood draws, rather than invasive, sometimes dangerous, and sometimes impossible physical biopsies, can often identify features of cancer that inform life-changing treatment decisions and enable monitoring of treatment efficacy. In the future, these may replace imaging-based monitoring for some cancers. As these technologies advance, most common cancers will be amenable to cell-free testing. For patients, physicians, and healthcare systems, evolving our models of care will improve outcomes while concurrently removing factors that drive geographic inequity.

Guided by this vision, we are working to bring high-value next-generation sequencing tests to more community-based laboratories. Curious about what this could mean for your healthcare system? Talk to us at

About the Author
Dr. David Huntsman, Chief Medical Officer at Imagia Canexia Health, is a professor in the Departments of Pathology and Laboratory Medicine and Obstetrics and Gynecology at the University of British Columbia.

A look back at the history of the Human Genome Project

Today’s blog post comes from guest writer Stanislav Volik, who has worked in genomics since the 1990s. His PhD thesis was one of the first genomics theses defended in Russia. His genomics focus has been on cancer studies, specifically breast and prostate cancer. With two colleagues, he invented and patented a paired-end sequencing approach for deciphering the structure of tumour genomes in the early 2000s, before NGS made it feasible to sequence tumour DNA.


With the recent release of a complete human genome by the Telomere-to-Telomere consortium, I found myself reflecting on the history of our collective efforts to achieve a better understanding of our genetic heritage. We could say that this year marks the coming of age of the Human Genome Project (HGP). Twenty-one years ago, the first drafts of the human genome sequence were published by the public, National Institutes of Health-led International Human Genome Sequencing Consortium and the commercial entity Celera Genomics, founded by Craig Venter. The “First Draft”, of course, was exactly that: about 90% of the euchromatic (generally gene-rich) regions were analyzed. This prompted a string of follow-up press conferences and articles describing ever more complete versions of the whole genome sequence, until about three years later, on October 21, 2004, the International Human Genome Sequencing Consortium published the capstone paper titled “Finishing the euchromatic sequence of the human genome”. By any measure, this is one of the most towering scientific and technological achievements of the late 20th century. One of the most interesting aspects of its completion is the way in which available technology shaped the strategy and even the politics around this monumental endeavour.

Alta, Utah – The birthplace of the Human Genome Project [Image Source]

Where it all began

The timeline of the HGP is still available on the Oak Ridge National Laboratory website archive. Even in its current, barely functioning form, it reveals a fascinating story of an idea that seemed impossible when, in 1984, a group of 19 scientists found themselves snowed in at a ski resort in Alta, Utah. They grappled with the problem of identifying DNA mutations in Hiroshima and Nagasaki nuclear attack survivors and their children. Existing methods could not identify the then-expected number of mutations, but the advent of molecular cloning, pulsed-field gel electrophoresis, and other wonders of technology gave everybody the feeling that a solution was possible. Charles DeLisi, the newly appointed director of the Office of Health and Environmental Research at the Department of Energy (DOE), read a draft of the Alta report in October 1985, and while reading it first had the idea of a dedicated human genome project. The next year, the Human Genome Initiative was proposed by the DOE after a feasibility workshop in Santa Fe, New Mexico. In 1987, it was endorsed and the first budget estimate appeared. Finally, in 1990, the National Institutes of Health (NIH) and the DOE announced the first five-year plan, titled “Understanding Our Genetic Inheritance: The US Human Genome Project”. The project was announced with an approximate annual budget of $200M and a stated goal of completing the sequencing of the first human genome in 15 years (a total of $3B in 1990 dollars, equivalent to approximately $6B today).

1980 Nobel laureates P. Berg, W. Gilbert and F. Sanger (left to right)

The Maxam, Gilbert, and Sanger race

In 1985, the concept of sequencing the whole human genome was truly revolutionary scientific thinking at its best, since no appropriate technology was ready for such a task. Only four years had passed since the 1980 Nobel Prize in Chemistry was shared between P. Berg for his “fundamental studies of the biochemistry of nucleic acids, with particular regard to recombinant-DNA” and W. Gilbert and F. Sanger (the second Nobel Prize for the latter) for “their contributions concerning the determination of base sequences in nucleic acids”. But it was not quite clear which of Gilbert’s or Sanger’s approaches to sequencing would prove the more efficient. Maxam and Gilbert developed a purely chemical method of sequencing nucleic acids that required many chemical steps but could be performed on double-stranded DNA. Sanger’s approach, on the other hand, required single-stranded DNA. In the early and mid-1980s, both methods were still widely used, and the advantages of Sanger’s approach, with its reliability (given access to high-quality enzymes and nucleotides) and longer reads, were just being established. Both approaches had limited read length (approximately 200-250 bases for Maxam-Gilbert and 350-500 bases for Sanger) and required the genomic DNA to be fragmented prior to analysis. Given the realities of fully manual slab-gel sequencing, this meant that determining the sequence of a single average human mRNA was an achievement worthy of publication in a fairly high-impact journal. With an average analysis time of ~6 hours per ready-to-sequence DNA fragment, an average read length of 350-500 bases, and 10-20 DNA fragments analyzed per slab gel, a qualified post-doc at that time achieved a throughput of a whopping 1.7-2.0 kb per hour.
With a haploid human genome size of ~3 billion bases, one was looking at a bare minimum of 171 years for a single station to sequence perfectly ordered, minimally overlapping fragments that could then be assembled into the final reference sequence.
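That back-of-the-envelope figure checks out, taking the optimistic end of the throughput range and assuming round-the-clock operation:

```python
genome_bases = 3_000_000_000   # haploid human genome, ~3 Gb
bp_per_hour = 2_000            # optimistic manual slab-gel throughput

hours = genome_bases / bp_per_hour   # 1.5 million station-hours
years = hours / (24 * 365)           # continuous, 24/7 operation
print(round(years))                  # -> 171
```

And that is the best case: it assumes perfectly ordered, minimally overlapping fragments with no redundancy, no failed runs, and no assembly overhead.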

Mapping it out

There was one caveat: this set of minimally overlapping genomic DNA fragments did not exist yet. It was not immediately clear whether anybody would be able to create one, or how to order the fragments into a full sequence, given that the human genome contains numerous highly repetitive sequences longer than the average read length of existing technologies. It became apparent that an absolute prerequisite for achieving the stated goal of creating a human genome reference sequence was a physical map of the genome, containing information on the order and physical spacing of genomic features that could be identified in sequenceable fragments. This would allow ordering of the multitude of reads necessary to determine the human genome sequence. Consequently, much effort was spent by the broad scientific community over the course of the next 14 or so years (counting from the fateful Alta meeting) on developing ever more detailed sets of human genome physical maps and ever more complete libraries of ever larger DNA fragments (clones), which were produced and mapped back to the genome using increasingly sophisticated molecular biology techniques. This work was very much supported by the scientific community, not only because it was deemed absolutely necessary for the success of the project, but also because it was “fair”, allowing even relatively small groups to contribute meaningfully to the success of this huge endeavour.

Sanger wins and gets automated

In parallel with the massive efforts aimed at creating a comprehensive physical map of the human genome, much effort was focused on streamlining and then automating DNA sequencing in order to drastically increase throughput. Sanger sequencing won this battle since it proved easier to automate: no complicated chemical reactions were required, and as an additional bonus it offered longer read lengths. But the most important factor was that the biological machinery of DNA synthesis used by this technology proved sufficiently robust and versatile to allow labeling of the nucleotides, first with biotin and later with fluorescent dyes, obviating the need for radioactive labeling. In 1984, Fritz Pohl reported the first method for non-radioactive colorimetric DNA sequencing. In 1986, Leroy Hood’s group published a method for automated fluorescence-based DNA sequence analysis, a technology that allowed Applied Biosystems to offer the first automated DNA sequencers (ABI 370/373), machines that enabled the first massive sequencing projects, such as the effort to catalog all expressed human genes using “Expressed Sequence Tags” (ESTs). In 1995, another breakthrough instrument was released (ABI Prism 310) that did away with the pesky problem of pouring flawless, large, thin (down to 0.4 mm) gels, which greatly simplified and sped up the sequencing process. Finally, in 1997, the ABI 3700 capillary sequencer was released, boasting 96 capillaries, a configuration that “gives the 3700 system the capacity to analyze simultaneously 96 samples as often as 16 times per day for a total of 16 × 96 = 1,536 samples a day”, as the ABI brochure touted. In other words, users could expect to receive a whopping 768 kb of sequence daily.

Venter causes outrage

This unprecedented increase in sequencing capacity suddenly made another approach feasible: de novo sequencing of complex genomes without the construction of ordered genomic fragment libraries, and without the long and very expensive process of physical mapping, an approach that came to be known as “shotgun” sequencing. The theoretical feasibility of such an approach was established in 1995 by Leroy Hood’s team. In a paper titled “Pairwise End Sequencing: A Unified Approach to Genomic Mapping and Sequencing”, they demonstrated that a large, complex genome can be sequenced using just a collection of randomly cloned fragments of at least two very different sizes, which would be randomly subcloned, sequenced, and ordered based on the identification of these paired end sequences in the contigs assembled from subclones. A mere two years later, in 1997, Craig Venter, the founder of The Institute for Genomic Research and then of Celera Genomics, announced that his team would “single-handedly sequence the human genome” in just three years for $300M, or one tenth of the originally estimated cost of the public International Human Genome Project.

Needless to say, Venter’s announcement caused an uproar in the genomics community. First, it appeared to make obsolete all the huge effort spent on physical map construction and ordering clone libraries. Second, it put leaders and political supporters of the public HGP in a really bad light: after spending ten times Venter’s budget and working on the project for seven years since its official launch in 1990, their proposed timeline for releasing the draft sequence was still seven years away (2005). And finally, the scientific community was outraged by Venter’s plans to offer paid access to the genomic sequence to commercial entities. I still remember the charged atmosphere at the Cold Spring Harbor meeting in 1997 when Venter made his announcement. Nobody knew the details (there was no internet as we know it today), only rumors about closed-door talks between the NIH and the Wellcome Trust. It was very late that day, around 10 pm, when Craig Venter came to the podium to present his idea. He was essentially booed off the stage by the outraged audience. Francis Collins, then director of the National Human Genome Research Institute, and the then-head of the Wellcome Trust came to the podium and proclaimed that the public HGP would not be beaten; that the Wellcome Trust would devote whatever resources were needed to ensure the “competitiveness” of the public HGP and to ensure that everybody would have free and unfettered access to its results.

Craig Venter (left) and Francis Collins (right) with then-US President Bill Clinton, announcing the first map of the Human Genome Project [Image Source]

Be that as it may, Venter’s initiative did result in a substantial reevaluation of the HGP strategy. In the end, both teams (Venter’s and the HGP) used a hybrid of shotgun sequencing plus physical mapping information for the first human genome assemblies, resulting in groundbreaking simultaneous publications in 2001. And the animosity towards Craig Venter didn’t last long in the genomics community: a few years later, many of the people who booed in 1997 were applauding his talk, to the same audience, on the first large-scale metagenomic project.

Final comments

Looking back over the many years of my professional life, witnessing the completion of the first HGP was surely the experience of a lifetime. Essentially, the HGP set a new paradigm in biological studies, serving as a prime catalyst for the development of revolutionary new technologies that became tectonic forces in their own right, making some massive efforts obsolete yet opening many new paths. This pattern continued with the next topic to be addressed: the actual genetic diversity of humans and how we can use this knowledge to meaningfully impact our lives. This could not be accomplished using the first-generation sequencing technologies that enabled the HGP’s success. The next phase of breakthroughs followed, leading to the emergence of next-generation sequencing (NGS) technologies, which finally made it routine not only to sequence individual genomes but also to study the genomes and transcriptomes of single cells. Stay tuned for our next blog post, where we dive deeper into the next phase of technology development: NGS.

The Promise of Liquid Biopsies to Improve Health Equity in Lung Cancer

With headlines like these, it is clear that liquid biopsies have some major implications for lung cancers:

Fast, Precise Cancer Care Is Coming to a Hospital Near You
Liquid biopsy and non-small cell lung cancer: are we looking at the tip of the iceberg?
The Quest for Cancer-Detecting Blood Tests Speeds Up

Now, several years after the initial race to create a simpler blood test for cancer patients, where do we stand today, and what does this mean for the daily race against the clock in treating lung cancer?

Dr. Kam Kafi, Imagia Canexia Health’s VP of Oncology, is here to break down the use of liquid biopsy in lung cancer patients and its future potential.


I) What is Liquid Biopsy? 

Liquid biopsies are minimally invasive tests capable of detecting cancer cells or tumor-derived genetic material released into the blood.


Collecting tumor tissue through biopsies is considered the gold standard for diagnosing and treating cancer. However, a tissue biopsy has always had its limits—it’s painful, and sometimes when a doctor takes a sample, they miss the spot where the cancer is.

Blood tests could obviate the need for surgeons to cut tissue samples from suspicious lumps and lesions and make it possible to reveal cancer lurking in places needles and scalpels cannot safely reach. They could also determine what type of cancer is taking root and what treatment might work best to squash it. Furthermore, they are easily repeatable, making them attractive for monitoring patients who have been diagnosed and treated, where traditionally repeated imaging is required, which is less sensitive and comes with radiation exposure.

What makes the search difficult is that in certain types of cancer, there might not be a lot of DNA being released. It is like finding a needle in a haystack, because the DNA shed by a cancer can represent just 0.1% of all the cell-free DNA floating around in the blood. Although far from perfect, recent advances in technology and the growing number of patients tested are helping overcome these limitations.
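That 0.1% figure has direct consequences for how deeply a sample must be sequenced. A simple binomial sketch (illustrative numbers, not any vendor's specification) shows the chance of even drawing a mutant fragment at a given depth:

```python
from math import comb

def prob_detect(depth: int, vaf: float = 0.001, min_reads: int = 1) -> float:
    """Probability of sampling at least `min_reads` mutant fragments when
    a variant is present at allele fraction `vaf` and the locus is covered
    by `depth` independent reads (binomial model, ignoring sequencing
    errors and other real-world noise)."""
    p_miss = sum(comb(depth, k) * vaf**k * (1 - vaf) ** (depth - k)
                 for k in range(min_reads))
    return 1 - p_miss

print(f"{prob_detect(1000):.2f}")   # ~0.63: 1,000x coverage still misses 1 in 3
print(f"{prob_detect(5000):.2f}")   # ~0.99 at 5,000x coverage
```

This toy model already explains why liquid biopsy assays rely on very deep sequencing and error-suppression techniques: at modest coverage, a real 0.1% variant is simply absent from the data a third of the time.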


II) Impact of liquid biopsy in lung cancer

Lung cancer is one of the deadliest cancers in the world, and the National Cancer Institute estimates it will kill over 150,000 people a year, despite a growing number of novel therapies. Part of the problem is that many different mutations can cause the most common form of the disease, and while the genetic profile of a tumor can tell doctors which treatment to prescribe, finding the mutation is an exercise in trial and error. Screen for the most common mutation, then try that drug. If it doesn’t work, screen again and try something else. It can take weeks or even months to find an optimal treatment, costing time patients don’t have.

Currently liquid biopsies are not used to diagnose cancer but rather to monitor disease progression or to detect genetic mutations in the tumor that could suggest which drug should be used to treat the disease.


Historically, the first clinical application of liquid biopsy was in advanced non-small cell lung cancer (NSCLC). These pioneering studies were followed by the advent and adoption of next-generation sequencing (NGS), which widened the spectrum of detectable mutations, leading to several new opportunities.

First, whenever a druggable alteration is found, matched therapy with a specific medication might be offered. Second, the presence of certain mutations or co-mutations can provide prognostic and predictive information, helping oncologists decide on next steps (such as simply monitoring or adding treatments). Last, the detection of tumor-specific genetic alterations at baseline and the measurement of their variation during treatment can be useful for monitoring the effectiveness of treatment early on and following the course of the disease. If we see the amount of cancer DNA in the blood go down, that’s good; if it starts coming back up, we have an early warning signal.
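That monitoring logic can be sketched as a simple trend check over serial draws. The thresholds below are purely illustrative; clinical interpretation of serial ctDNA is far more involved:

```python
def ctdna_signal(vafs: list[float], rise_factor: float = 2.0) -> str:
    """Interpret serial ctDNA variant allele fractions, oldest first.

    A latest value at or below every earlier draw suggests response;
    a rebound above `rise_factor` times the prior nadir (or any signal
    after an undetectable nadir) is an early-warning flag.
    """
    if len(vafs) < 2:
        return "insufficient data"
    nadir, latest = min(vafs[:-1]), vafs[-1]
    if latest <= nadir:
        return "responding"
    if nadir == 0 or latest >= rise_factor * nadir:
        return "early warning"
    return "stable"
```

Because blood draws are easy to repeat, this kind of serial measurement is exactly where liquid biopsy has an edge over imaging-based follow-up.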


III) What’s next? Future implications for monitoring, screening and health equity

Innovations in the development of liquid biopsy platforms over the past decade have led to a growing number of regulatory approvals for blood-based tests that are transforming precision cancer care for patients with advanced disease. These tests have increased their range of clinical applications in cancer treatment to now include monitoring cancer growth, detecting genetic mutations, identifying signs of relapse, and predicting sensitivity to immunotherapy. 

Over the next five years, liquid biopsy will be much more about enabling standards of care in oncology for new applications, including the detection of molecular measurable residual disease (MRD), and in cancer screening and treatment selection in earlier-stage disease, where intervention can improve survival for patients.


In 2020, the FDA approved two comprehensive genomic profiling liquid biopsy tests, and the number of regulatory approvals continues to grow.

Despite the increasing popularity of liquid biopsy testing in oncology care, its use is currently limited to a handful of commercial offerings that are mostly available in the U.S., still expensive, and not widely used in the community care settings where the majority of patients are treated.

What we need now is to leverage existing infrastructure to enable access to liquid biopsies in-house, at the point of care. This is exactly what we’re working on at ICH. By adopting a distributed testing model, we provide community cancer centers with all the know-how and technology (AI, bioinformatics, reporting) to bring testing in-house, so that they can provide cutting-edge testing to their patients using the infrastructure already in place.

Through this approach, we reduce disparities in cancer testing by bringing down the price of liquid biopsies, speeding up turnaround times (samples don’t need to be shipped out), and ensuring greater patient data privacy, as all information stays within the institution.

As the technology is democratized and becomes more affordable, it directly increases health equity by reaching larger populations that traditionally don’t have the same resources as top-tier hospitals in major cities. And it’s getting noticed…

The growing adoption of the technology around the world, from Asia to the Middle East to South America, signals a brighter future for cancer patients everywhere.

To learn more about Imagia Canexia Health solutions and how you can bring quality NGS testing in-house, get in touch with us by writing to


CGC 2022 Key Takeaways

Last week, we were in St. Louis to attend the Cancer Genomics Consortium Annual Meeting. It was great to be back in person, listening to all the presentations and discussions about work and advancements being made in the field of cancer genomics.

Here are the key takeaways we took from the meeting:

Interpretation and reporting is one of the key bottlenecks of NGS testing
A common issue brought up in several presentations is the challenge of next-generation sequencing (NGS) data reporting, as well as the interpretation of reports. Kilannin Krysiak from Clinical Interpretation of Variants in Cancer (CIViC) gave a presentation on the need to update interpretation resources because of the complex nature of variant relationships. Not only is the data complex, but clinical guidelines are constantly being updated. Knowledge bases therefore also need regular updating to ensure proper clinical reporting.

In addition to data reporting, the interpretation of these reports poses its own challenges. This can be attributed to the individual expertise of the person interpreting the data, but also to the fact that not everyone is using the same data source. Valerie Barbie from the Swiss Institute of Bioinformatics provided an overview of a government-funded project in Switzerland to build a clinical infrastructure that allows research to leverage data. The goal is for hospitals to be onboarded and gain access to this central database. They have also developed the Swiss Variant Interpretation Platform (SVIP) for Oncology in an effort to help clinicians. Rather than just pushing information out to clinicians, the platform takes input from a panel of experts during a review cycle. In her presentation, Valerie Barbie explained that they recognized the value of clinicians’ experience, but that this expertise and knowledge was not being documented or captured in any way. The platform enables clinicians to challenge and complement the data for better interpretation.

Is whole genome sequencing (WGS) the future?
It’s difficult to say whether WGS will be the future testing modality for cancer care, but several attendees of the meeting believe so. Not only do they believe this to be the case, they are already performing WGS for solid tumor profiling. When asked, “why not whole exome sequencing first?”, their response was that a whole exome sequencing (WES) test would give only incrementally more information than a 500-gene panel, and they believe a sub-1,000-gene panel would capture the majority of short-range, coding-region variants that may impact cancer. On the other hand, WGS provides an important view into structural and large-scale variations that WES cannot, which could better inform clinicians about the patient’s cancer. Marcin Imieliński from the New York Genome Center demonstrated how his team’s novel bioinformatics approaches were able to identify 90% of structural variants using short-read sequencing technology, so it’s clear that the toolbox for making sense of WGS data is growing.

Advancements in the last five years are making in-house testing more desirable
The topic of in-house vs. send-out testing was covered in a presentation by Ravindra Kolhe from Augusta University. NGS testing has advanced significantly in the last five years, and as more cancer centers and oncologists get comfortable with the data, there is a greater desire to establish precision medicine or precision oncology in-house. The cost of sequencing has also gone down while clinical utility continues to grow, strengthening the case for bringing testing in-house. One point Ravindra Kolhe made in his presentation is that NGS testing is no longer a cost center: many institutions are realizing there is revenue to be made by bringing testing in-house.

Thuy Phung from the University of South Alabama outlined why they decided to work with Imagia Canexia Health to bring cancer testing in-house. In addition to being more cost-effective, they avoid the long wait times associated with sending tests out. Internal testing and processing capabilities at a local health center also simplify the process and benefit underserved populations by bringing testing to a community that may not otherwise have access to it, bridging the health equity gap.

To learn more about our solution and how we can help your organization bring liquid biopsy testing in-house, contact us at

Health Canada Authorizes Imagia Canexia Health Insights Platform to be Manufactured and Distributed Across Canada

Imagia Canexia Health (ICH), a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions, today announced that Health Canada now permits the Imagia Canexia Health Insights Platform (ICHIP)—held under ICH’s Imagia Healthcare subsidiary—to be manufactured and deployed across Canada. Imagia Canexia Health offers a unique clinical solution that enables oncologists to quickly generate reports, including therapeutic and clinical trial recommendations. ICHIP extends this capability by providing detailed molecular and computational genome analysis, from targeted next-generation sequencing (NGS) data, for individual cancer patients. This new approval advances Imagia Canexia Health’s mission to combine groundbreaking genomics, oncology, artificial intelligence, and informatics so health systems can provide cost-effective testing in-house.

Next-generation sequencing (NGS) data are generated from human tissue or blood samples on Illumina’s NextSeq and MiSeq instruments. ICHIP then uses AI to detect and analyze genomic variants, match interpretations, identify potential clinical trials, and generate a cancer-treatment results report to support oncologists’ treatment decisions. This information is used in conjunction with other clinical and diagnostic findings to make the most informed care-management decisions.

“The ability to now implement ICHIP locally at our Cancer Center means clinicians will have a report informed by the clinical context, creating even more robust real-time data insights to support their expertise,”

said Dr. Bryan Lo, the Medical Director, Molecular Oncology Diagnostics Lab, at The Ottawa Hospital and Eastern Ontario Regional Laboratory Association, who is currently using ICH’s software to report out liquid biopsy cases.

“Cancer is a constant fight against time, and novel technology like ICHIP, which helps make faster decisions, is invaluable to our patients’ lives.”

“ICH is proud of our Health Canada Medical Device Establishment Licence, as it is important for commercializing ICHIP, which can now be manufactured and distributed across the country,”

said Imagia Canexia Health CEO Geralyn Ochab.

“This milestone reinforces ICH’s commitment to having our products reach patients and improve Canadians’ lives.”

About Imagia Canexia Health

Imagia Canexia Health (ICH) is a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions. Leveraging AI-based informatics for treatment selection and monitoring, oncologists now have leading clinical decision support right at their fingertips. With a network of over 20 hospitals and reference labs worldwide, ICH ensures that doctors have the right insights to deliver cost-effective cancer testing to patients no matter where they seek treatment. Join ICH in closing the health-equity gap in cancer:

Peter Weltman
(415) 340-2040‬

Imagia Canexia Health names Molecular Biologist Vincent Funari, PhD, as Chief Science Officer

Funari’s appointment provides Imagia Canexia Health with a distinguished biology and data science leader to further the company’s precision cancer care mission.

Imagia Canexia Health, a genomics-based cancer treatment testing company that accelerates access to precision care by combining AI expertise with advanced molecular biopsy solutions, today announced Vincent Funari, Ph.D., as the company’s new Chief Science Officer (CSO). Vincent will apply his two decades of unique proficiency—leading at the intersection of molecular biology and advanced data science technology—to deliver precision cancer treatments for patients no matter where they live.

Throughout his career, he has co-authored more than 45 peer-reviewed genomics papers in publications including Nature Immunology, JAMA, Science, Nature Genetics, AJHG, and Genomics. Vincent joins ICH to realize its vision of combining advanced genomics, oncology, artificial intelligence, and informatics so health systems can provide cost-effective cancer testing in-house.

“I’ve spent my entire career leveraging technology to create the biggest changes in bioinformatics, and it’s with great pleasure that I bring this experience to Imagia Canexia Health as the company’s Chief Science Officer,” said Vincent Funari, Ph.D. “Cancer remains one of the biggest battlegrounds of our time, and leading ICH’s scientific efforts to provide treatment testing for more people is incredibly motivating.”

“Vincent brings an unprecedented level of institutional knowledge to Imagia Canexia Health, especially his influential work at the intersection of science and technology,” said Imagia Canexia Health CEO Geralyn Ochab. “As we continue to close the health equity gap through data-driven cancer care, it’s with great confidence that we have Vincent leading our scientific strategy.”


The Evolution of Digital Health



Health is becoming digital health, encompassing everything from electronic patient records to telemedicine and mobile health — spurred on by the pandemic. But the next evolution will involve artificial intelligence. While the potential of AI is enormous, there are still a number of challenges to delivering impactful solutions for clinical adoption.

While digital health isn’t new, there’s still a big gap in adoption, which has been slow and disjointed. We can already do a lot today, from digital diagnostics and remote patient monitoring to software-based therapeutic interventions. So where does AI fit in?

Artificial intelligence refers to the ability of a computer system to make a prediction or decision on a specific task that historically would require human cognition. Most of the capabilities available today can be categorized as Artificial Narrow Intelligence (ANI), which means it can assist with or take over specific focused tasks, without the ability to self-expand functionality.

On the other hand, machine learning (ML) is a category within AI that allows a computer system to act without being explicitly programmed, acquiring knowledge through data, observations and interactions that allow it to generalize to new settings.

How AI fits into digital health

As part of the AI-driven collaborative discovery process, health-care organizations need to first access data from silos across departments. This data then requires AI-assisted contextualization, exploration and annotation so it can be used for data-driven study design and AI model development. It’s critical to standardize this discovery process, making it repeatable and reproducible. Throughout the process, health-care organizations should consider privacy and potential bias.

Each of these steps, however, has its challenges. In preparing medical imaging data for machine learning, for example, there are challenges with availability of annotated data and potential biases that could affect generalizability of AI algorithms, according to an article in Radiology. New approaches such as federated learning and interactive reporting may ultimately improve data availability for AI applications in imaging.

In the U.S., there’s been a big push for the clinical adoption of electronic health records (EHRs), which starts with digitizing health records to provide insights at a patient level and, eventually, at a population level. Recommendations can then be pushed back to EHRs for clinical decision support.

In Canada, we’re further behind; EHRs aren’t widely used in all aspects of care. One of the biggest barriers to the more widespread use of EHRs is that physicians spend more than half their time on data entry and administrative tasks rather than face-to-face visits with patients, which results in declining quality of care. But these digital tools are becoming more user friendly, particularly as the pandemic accelerates the transition to digital health.

With mobile health, we’re also getting self-serve tools into the hands of patients — so we’re moving from encounter-based medicine to patient-centric care. For example, a monitoring tool on a patient’s wearable device could monitor blood pressure 24/7 in between appointments.

But these types of digital tools produce a lot of data. And there are related challenges. What’s the context of that data? What’s the quality of that data? Are there inherent biases? Digitalization of big data creates new challenges when it comes to interpreting data and making predictions or decisions.


The challenges ahead

Humans can typically weigh only around five to 15 metrics when making a decision. So with three months’ worth of data and millions of data points, it’s beyond the capacity of a single individual to make an informed recommendation. AI is trained on specific data and ‘learns’ from new data, providing a level of automation that’s narrow in scope but extremely high speed.

That’s the promise of AI: to offload the manual data crunching and provide high-speed recommendations on multiple variables, ultimately providing more patient-centric care. But we’re not there yet. Health-care institutes have an abundance of new data, but they’re unsure of its value. And it could reduce health equity because not everyone has access to it.

While the quality and quantity of AI research in health care is rapidly growing, the number and scope of usable products are still limited. When we consider how much of that research is being translated into physician use or patient care, we’ve seen a very limited number of FDA-approved algorithms. Of those, the majority have a very narrow spectrum of utility. And they’ve already been flagged for risks because there’s a known lack of complete data, meaning they’re not diverse enough for the real world.



While we’re seeing interesting applications of AI across industries, in health care it’s not only lagging but there are fundamental issues that still need to be addressed. According to an article in Digital Medicine about the “inconvenient truth” of AI in health care, to realize the potential of AI across health systems two fundamental issues need to be addressed: the issues of data ownership and trust, and the investment in data infrastructure to work at scale.


Data ownership, trust and scale

We need to strengthen data quality, governance, security and interoperability for AI systems to work. But the infrastructure to realize this at scale doesn’t currently exist. Health data components are sitting in silos; they’re not interoperable and they’re of varying quality. Because of this variability, it’s difficult for physicians to ‘mine’ the data and make equitable, patient-centred decisions.

A deep learning health-care system first requires a digital knowledge base (including patient demographics and clinical data, as well as past clinical outcomes), followed by AI analyses for diagnosis and treatment selection, as well as clinical decision support where the recommendations are discussed between patient and clinician. This data is then added to the knowledge base to continue the process.

But there are several issues with this process. On the data side, scientific data often isn’t FAIR (findable, accessible, interoperable and reusable), which means AI models carry an inherent bias toward the data set of the institute where they were trained, with no mechanism to ‘train’ the AI somewhere else or to let different systems learn from each other. As a result, the model cannot overcome its inherent biases.


From a business perspective, it’s also hard to sell institutional transformation. Most health-care institutes are relegated to using a software-as-a-service solution with a pre-trained model, which applies to a very limited data set. These algorithms have utility in a particular setting, but that’s where it ends, and that means there’s no resulting structural or long-term change within health-care institutes.


Adopting a deep learning approach

For organizations to truly adopt a deep learning approach, it needs to be deeply embedded in their infrastructure to answer multiple narrow questions in a scalable way. Data needs to be accessible, searchable and usable, whether on-premise or in the cloud. It requires quality control, structuring and labelling.

But each of these steps is slow and labour-intensive. To train a single AI model, it’s necessary to first ingest relevant data, process it to make it accessible and searchable, and then allow users to annotate and contextualize it. While there are AI-assisted mechanisms that can speed up this process, those mechanisms need to be part of the infrastructure.

On top of that, data in health-care institutes is typically low quality; it’s not anonymized and has no context, so it’s not always usable. When designing an AI model at a single site, where institutional or geographical bias needs to be de-risked, teams need a way to repeat the process at other institutes and allow the model to train on and learn from that diversity.

It’s a big challenge, to say the least. While each of these tasks can be addressed by technology, if they’re not standardized and interoperable — across technologies and institutes — then they’re not scalable. And that’s where we are today, which means many of these FDA-approved algorithms are failing in the wild.


Overcoming bias in AI models

So how do you overcome these limitations and ensure you’re not introducing bias into your models? Our approach is to provide a collaborative framework, bringing the tools to the experts and allowing the AI models to overcome current limitations or friction points at each step in the process.

At each health-care institute, we provide a data hub that ingests and indexes data, making it searchable and accessible. We use language processing to sift through the data, contextualize it and make it easier and faster for clinicians to search for an appropriate group of patients they want to use in their studies.

When clinicians are looking at this data, they’re also being asked for their expertise on a particular use case — and that knowledge becomes available to everyone else. This allows institutes to leverage their expertise and translate the knowledge of domain experts, while at the same time speeding up data maturation.

And because this entire process is standardized and reproducible across different institutions, two different hospitals — even in two different languages — are able to benefit. If we allow the learnings to be exchanged, rather than the data itself, we’re able to maintain patient privacy and data ownership, solving two critical issues with AI in health-care settings.
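As a rough illustration of exchanging the learnings rather than the data, here is a toy federated-averaging sketch: each hospital fits a model on its own patients, and only the fitted coefficients leave the site. This is the generic textbook pattern, not a description of any specific product’s internals.

```python
# Toy federated averaging: each hospital fits a linear model on its own
# patients; only coefficients (the "learnings") leave the site, never raw data.
import numpy as np

def local_fit(X, y):
    """Ordinary least squares fit on one site's private data."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, len(y)

def federated_average(site_results):
    """Combine per-site coefficients, weighted by each site's sample count."""
    total = sum(n for _, n in site_results)
    return sum(coef * (n / total) for coef, n in site_results)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):                  # three hospitals, different sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append(local_fit(X, y))        # raw X, y never shared

global_w = federated_average(sites)
print(np.round(global_w, 2))             # close to [ 2. -1.]
```

Real federated learning iterates this exchange over many rounds for deep models, but the privacy property is the same: the combined model benefits from every site’s patients while each site’s raw data stays behind its own walls.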

Through these learnings, health-care institutes can develop a meta model that performs a task in a way that allows them to see the variability of a patient population. This meta model not only understands bias, but it can be redeployed with parameters that can be adjusted for a particular practice.

This, in turn, can help to address the issue of digital health equity. Clinical trials are typically run out of a few Centres of Excellence, which means data is only collected on people within a certain radius of those centres. In a distributed learning framework, infrastructure is provided to all institutes, reducing the bias of those Centres of Excellence. That means if health data is captured in Nunavut, for example, it can be included in the learnings, even without AI experts based in Nunavut.


The future of AI in healthcare

When it comes to AI, there’s still a big delta between the most advanced institutes and the average institute. But the pandemic has brought to light many of the inefficiencies in our health-care system. Many departments still have to manually calculate the best way to deal with supply and demand, optimize schedules and deal with backlogs.

We’re already seeing the use of statistical or machine learning models by insurance companies to predict things such as hospital readmission risks or understand high-risk patients based on socioeconomic factors. This can help to ensure patients get the specialty care they need and don’t get bounced around until they land on the right set of care providers.

We’re just starting to tap the potential of AI in health-care settings. In an emergency room, for example, it can be used to triage high-risk patients faster. During appointments, it can be used by clinicians to get a more complete picture of exam results to ensure nothing is missed. This reduces risk, and also helps us move toward more personalized health care.

Artificial intelligence is not meant to replace clinicians, but rather to help them focus on what matters: patients, rather than manual data entry and administrative tasks. When properly implemented, it can help clinicians better serve their patients while reducing burnout.

But AI in health care isn’t a magic bullet. It’s more of a digital elixir: a medical solution that brings together data science, machine learning and deep learning that can help clinicians transform data into better patient care.

How AI can help unlock the clinical power of genomic data


Article’s message in a nutshell

Genomic data has the potential to be clinically useful, but its use today is very limited – this potential has not been realized. Imagia Canexia Health is filling this gap by applying cutting edge AI/ML/DL technologies to enable multi-site analysis of genomic data, contextualized in patients’ real life clinical journey, while respecting their privacy.


The power of genomics


From my uncontrollable distaste for cilantro to my family’s increased risk of developing breast cancer, a lot of information is encoded in my genome. This 3.2-billion-letter sequence of A, T, C, and Gs, contained in every single cell of my body, not only shapes how I taste food and my health risks; it is also completely unique to me and can therefore be used to identify me. It is also partly shared within my family, the ethnic groups I belong to, and the entire human population. Because this information is so unique and so powerful, it was once thought that accessing it could eradicate disease altogether. So how is genetic data used in healthcare today?


Genetic testing is available in a variety of clinical scenarios. From prenatal genetic testing to newborn screening, hereditary cancer screening, rare disease diagnosis, or determining a patient’s likelihood of responding well to specific treatments, genetics has permeated many domains of medicine. In my family, for instance, where many women were affected by breast and other cancers, doctors decided to investigate whether genetics played a role in our family’s health. They first identified a BRCA-2 genetic mutation in my great aunt, which gave her an increased risk of developing breast or ovarian cancer. They recommended that all women in the family get tested, and those who tested positive for the mutation were offered frequent follow-ups, and even preventative surgeries to minimize their risk of developing the disease. The predictive power of this genetic information is just one of the ways in which genomics can impact cancer care. Indeed, all cancerous cells harbor genetic alterations that, if identified and understood properly, can help us detect cancer early, predict how a specific tumor will respond to a treatment, and match a patient with a specific drug.


Genetic sequencing technologies are most commonly used in oncology, cardiology and immunology, and are continuously improving. From testing for a single letter change, or “single nucleotide variant,” at a precise location in the genome, to the analysis of the 3.2 billion letters, or “base pairs,” that compose the entire human genome, the technologies have improved dramatically, and the cost (in time and money) of producing this data has dropped at a remarkable rate. To demonstrate this, we geneticists like to compare what it took to first sequence the human genome (over 2 billion dollars, an international team of hundreds of scientists, and a total of 13 years[1]) to what it takes today: a whole genome sequence can be produced on one machine in a couple of days, for less than 2,000 dollars.


However, one may argue that there is still a lot of progress to be made. We are far from having solved all major health issues, and the prognosis for most patients diagnosed with cancer today is still grim. Of course, genetics can’t solve it all, and many other factors, such as our environment, diet and lifestyle, play a major role in our likelihood of developing diseases. Still, we have virtually no understanding of what the majority of our genome actually does (the sum of all 20,000 genes represents only around 2% of our full genome sequence!), and we have only scratched the surface of how our genes interact with each other and with other elements in our bodies. So, let’s recap. Each of the 30 trillion cells in a human body contains a 3.2-billion-letter code, written in an alphabet of 4 letters, within which 20,000 genes account for roughly 64 million letters, leaving billions of letters outside of genes… Every second, each cell activates a specific combination of genes to perform its core functions, and when errors accumulate in cells, this can produce tumors and lead to cancer. In the end, understanding genomics looks like a “big data” problem; so, could AI help move the needle?
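The back-of-the-envelope figures just mentioned can be restated as simple arithmetic. The snippet below only re-derives numbers from the text (3.2 billion bases, genes as roughly 2% of the genome), plus a raw-storage estimate at the information-theoretic minimum of 2 bits per letter; compressed real-world file formats differ.

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
genome_length = 3_200_000_000        # base pairs per cell
coding_fraction = 0.02               # genes ~2% of the genome
coding_bases = int(genome_length * coding_fraction)
print(f"{coding_bases:,} bases in genes")        # 64,000,000

# Raw storage at 2 bits per base (4 letters -> 2 bits):
bytes_per_genome = genome_length * 2 // 8
print(f"~{bytes_per_genome / 1e9:.1f} GB per genome, uncompressed")  # ~0.8 GB
```

Note that actual sequencing output is far larger than 0.8 GB per patient, because each position is read many times over (coverage) and quality scores are stored alongside the bases; that is why whole-genome files run to tens of gigabytes.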



Current limitations, and how we at Imagia Canexia Health are addressing them


First, even though it is becoming more common in Canada, generating clinical-grade genomic data is still expensive, not part of routine clinical care for most patients, and available only in large research hospitals. The data produced is so large (several GB per patient for a whole genome sequence) that it is not stored in hospital Electronic Health Records (EHRs) and is therefore not readily available for research. The genetic information stored in patients’ health records is often in the form of a text report from a clinical geneticist describing the presence or absence of genetic mutations, together with an interpretation of how this affects patient care.

To address this gap, at Imagia Canexia Health we have launched research projects that look at ways to infer genetic status by analysing standard-of-care clinical images. For instance, we are analysing Computed Tomography (CT) and Positron Emission Tomography (PET) scan images of patients with lung cancer who have had a genetic test (RNAseq, the sequencing of RNA, which is the product of active genes). In these cancers, which are the most common in adults in Canada[2], genetic tests are used in the clinic to define the most appropriate treatment. However, testing requires a biopsy of the tumor, which is an invasive procedure, and results can take a long time to obtain. If our machine learning algorithms can find markers in the images that predict genetic test results, this could allow faster, more efficient matching of patients with the best treatment. Being able to generate genetic insights without having to run a full range of expensive genetic tests could mean increasing access to personalized medicine in Canada.


Second, clinically generated genomic data that is accessible for research is not only scarce but also critically lacking in diversity. Most genomic data generated to date is from people of European ancestry[3], and this heavily impacts our ability to interpret genetic mutations, whose frequency and mechanism of action sometimes differ across populations[4].



Just like in the genomics community, the issue of bias is heavily discussed in the Artificial Intelligence community, and researchers globally are grappling with this problem[5]. One way to address it is to share data, because more powerful, reproducible and generalizable results can be achieved if more, and more diverse, data are produced and shared across institutions and across provinces.


However, two issues arise when you want to share patient data: competition is fierce, and there are privacy and other legal concerns. Sharing patient data broadly can be perceived as risky: complex federal and provincial regulations are at play to protect patient privacy, especially if the data contains personal information or is considered “identifiable,” such as a whole genome sequence. As data custodians, healthcare institutions are in charge of ensuring this data is secured and appropriately protected, which sometimes generates a reluctance to share. For companies, this data may also contain information and knowledge protected by Intellectual Property provisions. And for researchers, who rely on scarce and extremely competitive funding to produce data and generate publishable results, sharing data can mean losing one’s competitive advantage.


To address these problems, Imagia has developed a technological solution: our EVIDENS platform is based on the concept of federated learning, where raw patient data always remain within the institution in which they were produced, and only insights on the data are shared. Clinicians and researchers are able to collaborate across multiple institutes without ever sharing any raw data, which allows us to overcome the lack of diversity and sample size while alleviating major privacy concerns (for more details, see our previous blog post).


This innovative approach is also used in the new Digital Health and Discovery Platform (DHDP), a federally funded, pan-Canadian initiative co-led by Imagia and the Terry Fox Research Institute. The DHDP aims to accelerate precision medicine by bringing together leaders in the fields of Artificial Intelligence and healthcare. Partners in the DHDP are also developing ways to engage public and private partners in mutually beneficial projects to stimulate innovation and the commercialization of clinical products.


Third, there is a lack of standardization in genomic data generation, analysis and interpretation. Although the great majority of genomic data is produced on machines engineered by the global industry leader Illumina, the way clinicians and researchers go from sequencing machine outputs to clinical interpretation varies greatly. This is not to say that there are no standardization efforts in progress, the most notable being led by the Global Alliance for Genomics and Health (GA4GH). Imagia is actively participating in this effort by working directly with Illumina on a project aimed notably at generating and testing the efficiency of standard genomic pipelines (see our press release here). The problem with standards is that even when they exist, it is challenging to get large groups to use them. To incentivize the community to adopt a standardised approach, the state-of-the-art technological and software tools that we and others are developing will be baked directly into the DHDP platform, and we will help fund projects that use them, giving Canadian researchers a strong incentive to include them in their research practices.


Finally, genomic data is most useful when interpreted in the context of a patient’s clinical journey. Genetic data alone is often not enough to gain a full understanding of a patient’s condition, which can only be achieved by combining multiple sources of data: patient records, clinician reports, medical test results (e.g., laboratory blood tests), imaging data, and more.



In response to this challenge, our EVIDENS platform supports ingestion of multiple sources of data, and we have developed advanced artificial intelligence methods to efficiently and reliably combine these rich datasets. For instance, we are working on a project to develop a machine learning (ML) algorithm that can process a combination of clinical data, pathology reports, genomic data, and clinical imaging data from lung cancer patients. This allows us to build more powerful models and increases our potential for discoveries.
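A common pattern for combining such heterogeneous inputs is "late fusion": encode each modality separately into a shared feature space, concatenate the embeddings, and feed the result to a predictive head. The toy sketch below illustrates the pattern only; the feature dimensions, encoders, and risk score are made-up placeholders, not our lung-cancer model.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(x, W):
    """Project one modality into a shared feature space (toy linear encoder)."""
    return np.tanh(x @ W)

# Hypothetical feature vectors for one patient (dimensions are illustrative).
clinical = rng.normal(size=8)   # e.g. age, stage, lab values
genomic = rng.normal(size=20)   # e.g. mutation indicators
imaging = rng.normal(size=16)   # e.g. radiomic features

# Per-modality encoders mapping into a common 4-dimensional space.
W_c, W_g, W_i = (rng.normal(size=(d, 4)) for d in (8, 20, 16))

# Late fusion: concatenate per-modality embeddings, then score.
fused = np.concatenate([embed(clinical, W_c),
                        embed(genomic, W_g),
                        embed(imaging, W_i)])
w_head = rng.normal(size=fused.shape[0])
risk_score = 1.0 / (1.0 + np.exp(-fused @ w_head))  # sigmoid, in (0, 1)
```

In practice each encoder would be a trained network suited to its modality (e.g. a CNN for imaging), but the fusion step, concatenating learned embeddings before prediction, is the same.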


Our hope for the future


Our vision at Imagia Canexia Health is that genomic data, combined with other clinical data and analysed with cutting-edge AI/ML technologies, has the potential to help more patients affected by high-burden diseases in Canada. To take on this challenge, we are partnering with Canadian and global leaders in genomics. Because patients are at the center of everything we do, our team has developed technological solutions to ensure that patient data is always secure and protected, and that privacy is respected throughout our pipeline. We are actively developing methods to generate discoveries that can be translated into better diagnosis and treatment for all patients, even those who have not had a genetic test. There is still a long way to go before whole genome sequencing becomes routine clinical practice, but in the meantime, we believe that AI/ML methods can help unlock the clinical potential of genomic data.

