Genalice was started in 2011 from a dream. You are reading this blog on our website, so you probably know what makes us tick.
In all the years we have worked to make high-quality genetic analysis tools available and affordable ‘for all’ in the fight against complex diseases, we had one single focus: finding biomarkers in any way possible to support diagnosis and treatment. This is manifest in LINK, our correlation and association engine, and in Map, which takes care of the heavy lifting of large-scale data processing for short-read alignment, variant calling and population calling on a tiny compute and storage footprint.
The quality concern
We pride ourselves on providing high-quality output, because one cannot be careful enough when it comes to health. This applies not only to humans, but to animals and plants as well, especially when they become part of the human food chain or share our homes.
It is clear that this quality consideration must be further complemented with regulations, controls and certifications to avoid operational errors. When humans care for humans, we bring in compassion and empathy, yet we run the risk of human error. This must be managed to strike a balance between correctness, timeliness and care.
Some time ago, an article passed on by Bas Tolhuis in my team triggered one thing we had really never considered. It concerns a case study in which a genetic mutation reached the statistical level of being indicative for diagnosis and treatment. It passed all the defined criteria. The application in the clinic complied with regulations and adhered to the set procedures, yet it turned out to be incorrect.
The cohorts that were used (and verified) appeared to prove the mutation correct. However, when tested in a much larger group, it turned out that the variants exist in the DNA of many people (who were clearly not part of the control group) who do not show the symptoms and were not diagnosed with the disease.
How to solve this?
One could say: “This makes the case for using very large cohorts of case and control groups for studies”… I’ll leave it to the study designers and grant committees to judge the merit.
If we assume for a moment the plausible outcome that we indeed need to check more samples to minimize the chance of identifying incorrect genetic markers, I believe we have two options:
- Expand every genetic research project to include more samples;
- Define a large-scale genetic crosscheck to guard entrance to the clinic.
A safety net
I would vote for the latter. It is by far more effective and controllable than adding weight and cost to many research projects, which already face constrained budgets. We should only move to very large cohorts if and when the question and the impact of the result warrant it.
I further think that the safety net must be under government control, even though the operation or service could be contracted to specialists from the private sector. The mechanism of such a ‘genetic safety crosscheck’ would be similar to the process of identifying a marker. Only in this case, the (control) cohort would be a huge, ‘vanilla’ group. It would include the disease target group, the phenotype data should be shallow (e.g. sex, year of birth, sick or not sick), and the test would only indicate a doubt level.
When a set of mutations deemed significant for a case appears in a group that does not have the expected ‘signature’ for the case at hand, this gives reason for further investigation. Even a simple genotype-only test may work as a starting point. If the (complex) mutation frequency ratio does not match the expected ratio of a disease group, this is a red flag. Note that the safety net is only meant to raise red flags.
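To make the idea concrete, here is a minimal sketch of such a frequency crosscheck. Everything in it is an illustrative assumption: the function name, the carrier counts, and the `max_ratio` threshold are hypothetical, not part of any actual Genalice product or procedure. It simply compares how often a candidate marker appears in a large ‘vanilla’ cohort versus the case group, and raises a red flag when the marker is surprisingly common in the general population.

```python
def crosscheck(carriers_vanilla, size_vanilla,
               carriers_case, size_case,
               max_ratio=0.1):
    """Raise a red flag when a candidate marker is too common in a large
    'vanilla' (unselected) cohort relative to the case cohort.

    Returns (red_flag, freq_vanilla, freq_case). The max_ratio threshold
    is an illustrative assumption; a real service would use a proper
    statistical test and disease-prevalence data."""
    freq_vanilla = carriers_vanilla / size_vanilla
    freq_case = carriers_case / size_case
    # If the marker's frequency in the general population exceeds
    # max_ratio of its case-group frequency, doubt is warranted.
    red_flag = freq_case > 0 and (freq_vanilla / freq_case) > max_ratio
    return red_flag, freq_vanilla, freq_case

# A marker carried by 400 of 500 patients but also by 200,000 of
# 1,000,000 unselected samples would be flagged for further review.
flag, fv, fc = crosscheck(200_000, 1_000_000, 400, 500)
```

Note that, as in the text above, the outcome is only a doubt level: a flag means “investigate further”, never “reject the marker”.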
A blog like this over-simplifies and does not take all the pitfalls and hurdles into account. Still, I believe such a large digital genetic resource will pay for itself by minimizing the cost of error and accelerating test acceptance in healthcare. The business case should, of course, be worked out by health economists.
Ideally, the complete population (everyone who has a health record) would be genotyped and become part of the resource. I personally think that every citizen who wants to benefit from public health can be called to action to contribute, by consent, to such a well-protected safety net.
Clearly, the check must be available to all researchers prior to submitting a paper. This makes it a very powerful public service, or a fee-based one for the non-public sector, unless, of course, sponsorships warrant free access.
The future, available today
Technology is moving very fast and already allows the above to be engineered at very low cost and conducted at the scale of millions of samples.
So, triggered by this ‘message in a bottle’, I think the genomics industry’s drive to increase the number of biomarkers by one or two orders of magnitude warrants setting up a watchman (or watchmen) to prevent human suffering and the deployment of expensive treatments and care in vain. This is almost a requirement rather than an option.