Episode 4: Extracting clinical insights from comprehensive genomic data - Dr. Bratic-Hench

Mélanie Moxhet

Dec. 1, 2020


Dr. Bratic-Hench, University Hospital (Basel)

The massive parallel sequencing of nucleic acids in healthy and neoplastic tissues has significantly boosted our understanding of cancer biology, but it also calls for standardised data interpretation. How do you analyse and process the genomic data from large panel tests? Where do you find the most relevant sources for interpretation? How could interpretation solutions help to upscale and harmonise such processes?

Dr. Ivana Bratic-Hench is a molecular biology expert on the NGS diagnostics team of the Institute for Medical Genetics and Pathology at the University Hospital Basel. In this episode of your ONCOmmunity podcast, she discusses the ever-changing landscape of biomarker interpretation and big data in molecular diagnostics.

Data curation is the critical process behind data interpretation. Data curation also fulfils the very important purpose of quality control.


0:06 Welcome to ONCOmmunity, the molecular oncology podcast by ONCODNA. Today, we want to discuss how data interpretation in genomics is changing and how the implementation of interpretation solutions could help to upscale and harmonize such processes. Therefore, we’re speaking with Dr. Ivana Bratic-Hench, molecular biologist expert at the NGS Diagnostic Team of the Institute for Medical Genetics and Pathology at the University Hospital Basel. Welcome, and thank you for joining.


Dr. Ivana Bratic-Hench

0:42 Thank you, and hello, everyone. I’m glad to be here and to share my expertise with you.



0:49 There are numerous public databases for oncological genomics, such as Cosmic, TCGA, OncoKB, ClinVar, gnomAD, and many others. How do you find or combine the most relevant sources for your data interpretation?


Dr. Ivana Bratic-Hench

1:06 Thank you for bringing up this question. The massive parallel sequencing of nucleic acids in healthy and neoplastic tissues has significantly boosted our understanding of cancer biology. On the downside, this development has led not only to a wealth of useful data but also to a wealth of poorly understood data.


To eventually put sequencing data in a clinical context, the organizing tools you mentioned are an absolute prerequisite. They have, however, been developed in different contexts and, therefore, are not necessarily adapted for clinical use.


1:44 Our understanding of certain genetic variants has dramatically increased over the past decade, but dropping prices per sequenced base, in particular with massive sequencing approaches, have led to the simultaneous assessment of many genes, most of which are, fortunately, irrelevant for the particular patient.


Parallel sequencing brings along harmonization of data acquisition but also calls for standardized data interpretation. To date, this has remained a time-consuming, laborious process, even though assist systems are in use, as you mentioned.


2:28 The fact that a variant of unknown significance today might become a therapeutic target tomorrow is a paradigm shift in medical oncology. In this process, trustworthy interpretations are required. Whether this can be achieved by combining databases initially built for different purposes into one big metadatabase remains, I would say, a matter of hot debate today.


I personally think it would not make sense to mix predicted biological significance with experimental or clinical evidence.


Here in Basel, we currently curate our own annotation of variants based on all previous sequencing runs, which we have collected in an in-house-developed database. Here, we primarily rely on the expert panel suggestions from the ClinVar database and only exceptionally use other databases, for example, COSMIC, TCGA, cBioPortal, etc.


3:34 In-house data curation also fulfills a very important purpose, namely quality control. Parallel sequencing kits produce panel-specific artifacts that we stumble upon during routine use. Such artifacts are not necessarily identified during our validation processes. Once we identify such an artifact, we record it internally and describe its context in our database.


Now, when it comes to variant interpretation, we are currently using a three-tiered system, which means we classify variants into three groups: variants with known clinical significance, variants of unknown relevance, and variants that likely represent benign polymorphisms. In particular, for the latter category, we sometimes consult population-based databases.


4:34 Very often, we get the question from clinicians about the context of detected somatic variants with respect to clinical trials. In order to report that, we are using commercial solutions. At the moment, we are using the Oncomine Reporter system from Thermo Fisher, but there are other solutions on the market.


One of the equivalent databases is OncoKDM. We are currently also testing this database and trying to integrate it into our diagnostic analysis pipeline.


And how do we ensure diagnostic quality? To ensure it, our institution follows a review rule for almost all newly diagnosed cancers, but also for somatic variants. This is of particular relevance in unclear cases, where at least two internal experts regularly discuss the case before signing out the report.


In this way, we deliver concise clinical reports that integrate histological, genetic, and clinical findings.



5:44 We already discussed the diversity of molecular biomarkers in oncology and the challenge of the many existing guidelines around them. How do you address this issue?


Dr. Ivana Bratic-Hench

5:55 This is a very important question that you raise. The major obstacle hampering harmonized reporting between laboratories is the fact that guidelines are not aligned, either at the international or the national level. We are in regular discussion with our local oncologists at the University Hospital Basel about their particular preferences on our reporting scheme.


In general, we follow a tumor type-focused reporting scheme and also report negative findings according to the clinical context. For example, we would report BRAF wild-type status in the case of melanomas.


6:39 This facilitates patient-centered management, which we value much more than national or international reporting guidelines. However, we do stay up to date with locally running treatment protocols and systematically assess and report on decision-making biomarkers. And our reporting style largely follows the Swiss national guidelines for genetic laboratory reporting.


For transparency reasons, in every one of our reports we add concise descriptions of the methods used, which also contain statements of the respective limits of detection and links to the respective sources of information.



7:29 Where and how do you see your data interpretation process growing and improving in the future? What do you need for this to happen?


Dr. Ivana Bratic-Hench

7:38 To answer your question, I would like to stress that data curation is the critical process behind data interpretation. Hence, combining multiple layers of evidence will likely increase classification granularity. This could be achieved by integrating different data from different sources beyond sequencing technology. For example, immunohistochemistry could confirm loss of expression of a particular cell cycle regulator downstream of a detected variant and would underline its pathogenicity. Whether this information should be recorded as image data or categorically remains a matter of debate today.


Notably, significant differences in immunohistochemistry and FISH protocols exist between laboratories, which makes this interpretation somewhat difficult. Hence, integrating expert-reviewed information directly might be more straightforward.


8:45 Also, presentation of findings at molecular tumor boards is a routine practice in Basel. Here, the respective examiner presents the data after the case has been outlined by the treating oncologist; afterwards, an in-depth expert discussion about potential management strategies takes place, and sometimes even raw data are reviewed during the board meeting. This is why it’s very important to maintain transparency in every report that we make.


To summarize, here in Basel, not only genetic but also transcription and epigenetic profiles are diagnostically assessed in many tumor biopsies on a routine basis. As technologies rapidly develop, I could foresee that we will very soon have pocket-sized profiling devices on every desk that would, for example, perform whole genome, epigenome, and transcriptome sequencing within a short time, probably even during the consultation, and we are almost at that speed with analysis of the epigenetic data of patient samples.


9:58 On one side, that’s very fast, and it’s good: the patients would get a very fast diagnosis. But on the other side, we will generate significantly more information than any physician could grasp, and that makes manual interpretation impossible.


Hence, this manual process needs to be shifted into the hands of experts, who, in a joint effort, keep the underlying database updated and, very importantly, keep it as clinically safe as possible.


We are already at this point with regard to epigenetic data, which we use in brain tumor classification. Whether this will also become possible with somatic variants in the near future remains to be seen.


Thank you very much for giving me the opportunity to speak about this very important topic with you today.



10:55 Thank you very much, Dr. Bratic-Hench, for your insights and explanations. After digging into the process of genomic data interpretation, we will be talking with a trained cancer nurse about her experiences with genomics and molecular diagnostics and how to apply them in daily clinical routine, in the next episode of ONCOmmunity, the molecular oncology podcast by ONCODNA.