Last week, the FDA sent a sternly worded letter to the personal genomics company 23andMe, arguing that the company is marketing an unapproved diagnostic device. Many have weighed in on this, but I’d like to highlight a thoughtful post by Mike Eisen.
Eisen makes the important point that interpreting the genetics literature is complicated, and a company (like 23andMe) that provides this interpretation as a service could potentially add value. I’d like to add a simple point: this is absolutely not limited to genetics. In fact, there are already many software applications that calculate your risk for various diseases based on standard (i.e. non-genetic) epidemiology. For example, here’s a (NIH-based) site for calculating your risk of having a heart attack:
And here’s a site for calculating your risk of having a stroke in the next 10 years:
And here’s one for diabetes. And colorectal cancer. And breast cancer. And melanoma. And Parkinson’s.
I don’t point this out because it leads to an obvious conclusion; it doesn’t. But all of the scientific points made about risk prediction from 23andMe (the models are not very predictive, they’re missing a lot of important variables, there are likely errors in measurements, etc.) of course apply to traditional epidemiology as well. Ultimately, I think a lot rides on the question: what is the aspect of 23andMe that sets them apart from these websites and makes them more suspect? Is it because they focus on genetic risk factors rather than “traditional” risk factors (though note several of these sites ask about family history, which of course implicitly includes genetic information)? Is it the fact that they’re a for-profit company selling a product? Is it something about the way risks are reported, or the fact that risks for many diseases are presented on a single site? Is it because some genetic risk factors (like BRCA1) have strong effects, while standard epidemiological risk factors are usually of small effect? Or is it something else?
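Those calculators all work the same way under the hood: a handful of risk factors are converted into log-odds, summed, and passed through a logistic function to give an absolute risk. Here is a minimal sketch of that approach; the coefficients below are invented purely for illustration and do not come from any real published model.

```python
import math

def ten_year_risk(log_odds_intercept, factors):
    """Combine per-factor log-odds contributions into an absolute risk
    via the logistic function, as Framingham-style calculators do."""
    z = log_odds_intercept + sum(factors.values())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for a toy heart-attack model (not real data)
risk = ten_year_risk(
    log_odds_intercept=-4.0,
    factors={"age_65_plus": 0.9, "smoker": 0.7, "high_ldl": 0.5},
)
print(f"10-year risk: {risk:.1%}")
```

Note that nothing in this arithmetic cares whether a predictor is a cholesterol measurement or a genotype; a SNP would enter the sum exactly like any other risk factor, which is part of why the distinction between genetic and "traditional" risk calculators is hard to draw.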
The ongoing debate about whether, what, when and how to feed back incidental findings (IFs) from whole genome sequencing continues to rage on both sides of the Atlantic following the American College of Medical Genetics and Genomics’ controversial recommendations on reporting IFs released last month. In an unexpected twist, the authors of the guidance have now written “a clarification” in response to the many criticisms that have been raised, including here on GenomesUnzipped. The clarification covers five points – autonomy, children, labs, communication and interpretation.
Continue reading ‘ACMG guidelines on IFs – responding to the response…’
By now, we’re probably all familiar with Niels Bohr’s famous quote that “prediction is very difficult, especially about the future”. Although Bohr’s experience was largely in quantum physics, the same problem is true in human genetics. Despite a plethora of genetic variants associated with disease – with frequencies ranging from ultra-rare to commonplace, and effects ranging from protective to catastrophic – variants for which we can accurately predict the severity, onset and clinical implications are still few and far between. Phenotypic heterogeneity is the norm even for many rare Mendelian variants, and despite the heritable nature of many common diseases, genomic prediction is rarely good enough to be clinically useful.
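A toy calculation shows why even a genuinely associated variant often adds little clinical value. Under a simple multiplicative model, overall prevalence can be decomposed into carrier and non-carrier absolute risks; the numbers below are invented for illustration only.

```python
def stratified_risks(prevalence, carrier_freq, relative_risk):
    """Decompose overall disease prevalence into absolute risks for
    carriers and non-carriers of a risk variant, assuming a simple
    multiplicative relative-risk model."""
    # prevalence = p0 * (1 - f) + (r * p0) * f  =>  solve for baseline p0
    baseline = prevalence / (1 - carrier_freq + relative_risk * carrier_freq)
    return baseline, relative_risk * baseline

# Hypothetical numbers: a common variant (20% carrier frequency) with a
# relative risk of 1.5 for a disease affecting 1% of the population
non_carrier, carrier = stratified_risks(0.01, 0.20, 1.5)
print(f"non-carriers: {non_carrier:.2%}, carriers: {carrier:.2%}")
```

In this made-up example, carriers move from roughly a 0.9% to a 1.4% absolute risk – a real difference, but hardly one you would build a clinical decision on, which is exactly the gap between statistical association and useful prediction.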
The breadth of genomic complexity was really brought home to me a few weeks ago while listening to a range of fascinating talks at the Genomic Disorders 2013 conference. Set against a policy backdrop that includes the recent ACMG guidelines recommending opportunistic screening of 57 genes, and ongoing rumblings in the UK about the 100,000 NHS genomes, the lack of predictability in genomic medicine is rather sobering. For certain genes and diseases, we can or will be able to make accurate and clinically useful predictions; but for many, we can’t and won’t. So what’s the problem? In short, context matters – genomic, environmental and phenotypic. Here are six reasons why genomic prediction is hard, all of which were covered by one or more speakers at Genomic Disorders (I recommend reading to the end – the last one on the list is rather surprising!):
Continue reading ‘Why predicting the phenotypic effect of mutations is hard’
One of the major bioethical debates in clinical genetics and genomics research is the issue of what to do with incidental or secondary findings (IFs) unrelated to the original clinical or research question. Every genome contains thousands of rare variants, including a surprising number of loss of function variants, as well as hundreds of variants associated with common disease and dozens linked with recessive conditions. As whole genome or exome sequencing is used more routinely in non-anonymised cohorts – such as the 100,000 patient genomes to be sequenced by the UK NHS – these variants will be uncovered and linked to an increasing number of individuals. What should we do with them?
Robert Green of Brigham and Women’s Hospital in Boston, who co-chairs the American College of Medical Genetics (ACMG) working group on secondary findings, was quoted in a Nature blog last year saying, “we don’t think it’s going to be a sustainable strategy for the evolving practice of genomic medicine to ignore secondary findings of medical importance”. But just saying it doesn’t make it so. There are still numerous questions that need to be addressed – you can be part of the debate by participating in the Sanger Institute’s Genomethics survey.
Continue reading ‘Do we have an obligation to look?’
The recent announcement that the UK Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years” raises a number of questions, with which the Department of Health are no doubt grappling as I write. I’ve previously discussed the thorny issue of using targeted versus whole genome sequencing to maximize diagnostic yield and benefit patients. However, one of the great achievements of next generation sequencing technologies is to make the assay – actually sequencing the genome (or some portion of it) – one of the easier parts of clinical genomics. Although laboratories will have to be suitably equipped, staffed and flexibly managed to deal with high sample throughput and ever-changing scientific specifications, the biggest challenge will be to implement genomic knowledge in the clinic.
Continue reading ‘£100M for whole patient genomes – an implementation challenge’
On 10th December 2012, UK Prime Minister David Cameron launched a Report on the Strategy for UK Life Sciences One Year On by announcing that the Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years”. This ambitious initiative – which will focus initially on cancer, rare diseases and infectious diseases – aims to train a new generation of genetic scientists, stimulate the UK life sciences industry and “revolutionise” patient care.
There is no doubt that this investment offers a major opportunity for the UK to firmly establish itself as a world leader in medical genomics. However, deciding how best to use the £100M to maximise patient benefit will be a challenge. There are numerous implementation issues, outlined in the PHG Foundation’s response to the announcement. Not least of these is the urgent need for informatics provision to facilitate storage, processing, annotation, interpretation and secure access to both genomic and phenotypic data. This will involve determining appropriate ethical and operational standards across a broad range of questions.
But there is one particularly crucial question that needs to be answered early on: what is the most appropriate assay to use for clinical implementation? All the literature released by the Government, and quoted extensively by the press, states quite categorically that the money will be used for “sequencing whole genomes”. Surely this can’t really be true? (I certainly hope it’s just coincidence that if you multiply a £1000 genome by 100,000 patients you reach the magic figure of £100 million…) If it is the case, there are several major problems.
Continue reading ‘£100M for whole patient genomes – revolutionising genetic diagnostics or squandering NHS cash?’
About a year ago on this site, I discussed a model for addressing some of the major problems in scientific publishing. The main idea was simple: replace the current system of pre-publication peer review with one in which all research is immediately published and only afterwards sorted according to quality and community interest. This post generated a lot of discussion; in conversations since, however, I’ve learned that almost anyone who has thought seriously about the role of the internet in scientific communication has had similar ideas.
The question, then, is not whether dramatic improvements in the system of scientific publication are possible, but rather how to implement them. There is now a growing trickle of papers posted to pre-print servers ahead of formal publication. I am hopeful that this is bringing us close to dispensing with one of the major obstacles in the path towards a modern system of scientific communication: the lack of rapid and wide distribution of results.*
Continue reading ‘The first steps towards a modern system of scientific publication’
The recent announcement of a new journal sponsored by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust generated a bit of discussion about the issues in the scientific publishing process it is designed to address—arbitrary editorial decisions, slow and unhelpful peer review, and so on. Left unanswered, however, is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with? What value does the existence of these journals add? In this post, I will argue that largely cutting journals out of scientific publishing would be an unconditionally good thing, and that the only thing keeping this from happening is the absence of a “killer app”.
Google Scholar in 2015?
Continue reading ‘Why publish science in peer-reviewed journals?’
Disclaimer: Genomes Unzipped received 12 free kits from Lumigenix for review purposes, and Dan Vorhaus has provided legal advice to the company. We plan to release a full review of the Lumigenix service in early July.
Last month three direct-to-consumer (DTC) genetic testing companies opened their mailboxes to find a slightly ominous but entirely expected letter from the FDA. The three recipients (Lumigenix, American International Biotechnology Services and Precision Quality DNA) received substantively equivalent letters, with the FDA warning each company that its genetic testing service “appears to meet the definition of a device as that term is defined in section 201(h) of the Federal Food Drug and Cosmetic Act,” and that the agency would like to meet with company representatives “to discuss whether the service [they] are promoting requires review by FDA and what information [they] would need to submit in order for [their] product to be legally marketed.”
Translated from bureaucratese, that means that the FDA views these services as ones that may need to be formally reviewed by the agency and either approved or cleared before they can be legally sold. The FDA letter asks each company to describe its service and to explain either (1) why it does not require FDA approval or (2) how the company plans to pursue such approval.
This is a strategy that the FDA has pursued with a growing cadre of DTC service providers. These letters (currently 23 and counting) represent the only public and company-specific actions the agency has taken to date with respect to DTC genetic testing. While many DTC letter recipients are engaged in dialogue with the FDA, those conversations have occurred beyond the public’s view. Until now.
Continue reading ‘DTC Genetic Testing and the FDA: is there an end in sight to the regulatory uncertainty?’
[Editor’s Note: This guest post is contributed by Blaine Bettinger. Blaine is the author of The Genetic Genealogist, a blog that examines the intersection of genetics and ancestry, and a patent attorney at Bond, Schoeneck & King in Syracuse, NY.]
As you may have heard, I recently made my 23andMe and Family Tree DNA autosomal testing results available for download online at “mygenotype,” and dedicated the information to the public domain (if dedicating DNA sequence to the public domain is even possible – I’m currently doing some research in this area and expect to write more in the future). [Editor’s Note: see additional comments on personal genomics data in the public domain at the end of this post.]
At “mygenotype” you can download the following:
My Family Tree DNA Results:
- Affymetrix Autosomal DNA Results (2010)
- Affymetrix X-Chromosome DNA Results (2010)
- Illumina Autosomal DNA Results (2011)
- Illumina X-Chromosome DNA Results (2011)
My 23andMe Results:
- V2 Results (2008)
- V3 Results (2010)
- Y-DNA Results (2010)
- mtDNA Results (2010)
You can also find my SNPedia Promethease reports:
In addition to my genome, Razib Khan of Gene Expression has a spreadsheet of approximately 48 other genomes that are available for download online.
A Challenge To YOU
Now that the information is out there, available to anyone who might be interested, it remains to be seen who will actually make use of it.
Continue reading ‘My Genome Online – A Challenge To You’