One of the major bioethical debates in clinical genetics and genomics research is the issue of what to do with incidental or secondary findings (IFs) unrelated to the original clinical or research question. Every genome contains thousands of rare variants, including a surprising number of loss of function variants, as well as hundreds of variants associated with common disease and dozens linked with recessive conditions. As whole genome or exome sequencing is used more routinely in non-anonymised cohorts – such as the 100,000 patient genomes to be sequenced by the UK NHS – these variants will be uncovered and linked to an increasing number of individuals. What should we do with them?
Robert Green of Brigham and Women’s Hospital in Boston, who co-chairs the American College of Medical Genetics (ACMG) working group on secondary findings, was quoted in a Nature blog last year saying, “we don’t think it’s going to be a sustainable strategy for the evolving practice of genomic medicine to ignore secondary findings of medical importance”. But just saying it doesn’t make it so. There are still numerous questions that need to be addressed – you can be part of the debate by participating in the Sanger Institute’s Genomethics survey.
- What constitutes an IF? As I’ve said before, genomic variants range from common to rare, and from irrelevant to disastrous. Potential IFs might include misattributed paternity, carrier status for recessive diseases, or a dominant oncogenic mutation. Many people advocate limiting IFs to a short defined and agreed list of clinically valid and actionable variants associated with serious diseases.
- Is there a duty to disclose IFs to patients and/or research participants, and if so, whose duty is it? Do clinicians and/or scientists who have both access to genomic information and the competence to interpret it have a moral responsibility to re-contact research participants and disclose genomic findings? Unfortunately, most scientific researchers lack the clinical experience to feed back results directly to patients, and most clinicians lack the skills required to interpret the variants accurately. Moreover, almost everyone lacks the informatics architecture needed to make feedback to patients logistically feasible! Even if there is a duty to disclose IFs, it’s unclear how this duty should be fulfilled and what proportion of research or healthcare funding should be reserved for the purpose.
- Should individuals be able to choose what IFs to receive? Rather than impose a system upon patients or research participants, some people advocate offering different options. Couples might want to know their carrier status for numerous recessive diseases; individuals already on medication might want to know whether they are fast or slow drug metabolisers; and so on. Some people might not want to receive anything, and I would strongly defend their right not to know. However, the level of genomic knowledge required to make an informed choice about what information to receive – or indeed not to receive – is enormous.
- Who has a right to your genetic information? Protecting patient confidentiality is at the heart of current medical practice, but by its very nature genomic information is familial. IFs may have significant implications for family members rather than for the individual under investigation. Because of our common genetic heritage, there may also be substantial health benefits in sharing genomic variation much more widely. Then again, how would you feel if you found your own genome in a public database, complete with an undisclosed, highly penetrant BRCA mutation?
Reframing the debate
One of the things that has always bothered me about the IF debate in genomics is the idea that these findings are ‘incidental’ at all – that we just can’t help but stumble upon significant variants while reading people’s genomes. In this fantasy world, researchers and clinicians who don’t share IFs with people must be hiding them somewhere, stowed away in a drawer marked ‘Confidential’. I think this wholly misleading scenario is partly a result of the frequent comparison with medical imaging, where radiologists really can’t help but see IFs. However, the situation in genomics is less like a spot-the-difference picture and more like a well-worded Google search. Because the genome is so vast, analyses have to be targeted at particular types of variants based on the clinical or research question. Assessing someone’s genome for a hereditary predisposition to breast cancer, for example, is unlikely to incidentally throw up the fact that they also have a dominant neurodegenerative disease – the mutations are in different locations and will not both be extracted by a computational search for either one of them. Simply stumbling upon clinically actionable IFs in a whole genome sequence is highly unlikely (with the exception of co-located IFs, where a single variant is associated with multiple diseases, which are both incidental and unavoidable).
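To make the point concrete, here is a minimal, purely illustrative sketch (the gene panel and variant records are hypothetical, not real pipeline code) of why a targeted analysis never ‘stumbles’ on off-target findings:

```python
# Toy illustration: a targeted genomic analysis only reports variants in the
# genes the clinical question asked about. The panel and variant records are
# hypothetical stand-ins for a real annotated call set.

BREAST_CANCER_PANEL = {"BRCA1", "BRCA2", "PALB2", "TP53"}

variant_calls = [
    ("BRCA2", "c.5946delT", "known pathogenic frameshift"),
    ("HTT", "CAG repeat expansion", "dominant neurodegenerative disease"),
    ("CFTR", "F508del", "recessive carrier status"),
]

def targeted_search(calls, panel):
    """Return only the calls that fall inside the requested gene panel."""
    return [call for call in calls if call[0] in panel]

for gene, variant, note in targeted_search(variant_calls, BREAST_CANCER_PANEL):
    print(f"{gene} {variant}: {note}")

# Only the BRCA2 variant is reported. The HTT and CFTR findings are never
# examined by this query; surfacing them would require a second, deliberate
# search -- which is exactly why they are not "incidental".
```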
Gliwa and Berkman [Am J Bioethics 13 (2013): 32-42] have finally posed the right question: do researchers have a duty to actively look for certain types of genetic findings? To address this issue, they drew on the ancillary care literature, which focuses on the “circumstances under which investigators should provide care for research participants which is not independently required for safe, scientifically sound trials”. Three criteria are presented for judging when there might be an obligation to actively seek IFs: (1) benefit to the patient, (2) uniqueness of access to the data, and (3) reasonableness of the burden on the research team. They conclude that because the potential benefits are low (due to our very incomplete understanding of the genome) and the burden is high (as analysing and confirming variants for feedback would require considerable investment of time and resources), there is currently no obligation to feed back IFs. However, they argue that if both these factors change, and it remains difficult or economically infeasible for patients to access the information via other sources, an obligation to disclose IFs might arise in future.
Throwing away the “stumble strategy” would allow disclosure of specific classes of variants to become an evidence-based decision rather than a process of random luck. This surely has to be a move in the right direction. However, even assuming that a list of clinically reportable variants could be agreed, interpretation of those variants in asymptomatic individuals who lack family history of a given disease will be incredibly challenging, and there is every chance of doing more harm than good. Given this, would you want to know?
Susan Wolf’s conclusion on this issue is to recommend “return the data”: the raw data, the disk, without worrying about interpretation. If a participant wants it interpreted, they can find experts who might do so for them. The researcher doesn’t have to learn the art.
That said, it’s incredible to me that we have so little support for what should be a simple problem: plug the genome in, get warnings out. It’s not like image analysis (where humans massively outclass computers): genomes are very standard. A deletion of three bases in a particular location causes CFTR-F508Del. This is a very well-understood variant. Where’s the database for that? Maybe ClinVar will solve this? We’re still waiting. Demand more, push grant-funding agencies to fund more people tackling this issue. Fund different people if the current ones are doing a bad job. I refuse to believe this is so difficult.
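To illustrate what I mean by “plug the genome in, get warnings out”, here is a minimal sketch. The lookup table below is hypothetical and its coordinates are illustrative only – the complaint is precisely that no authoritative, machine-readable version of it exists yet:

```python
# Sketch of a "warnings out" lookup. ACTIONABLE is a hypothetical, hand-made
# table; the coordinates are illustrative only and should not be trusted.

ACTIONABLE = {
    # (chrom, pos, ref, alt) on some agreed genome build -> warning text
    ("7", 117199644, "ATCT", "A"): "CFTR F508del: cystic fibrosis carrier",
    ("17", 41276045, "T", "C"): "hypothetical BRCA1 pathogenic variant",
}

def warnings_for(genotype_calls):
    """Yield a warning for every call found in the actionable table."""
    for call in genotype_calls:
        if call in ACTIONABLE:
            yield ACTIONABLE[call]

# A personal genome reduced to its variant calls.
my_variants = [("7", 117199644, "ATCT", "A"), ("2", 12345, "G", "T")]

for warning in warnings_for(my_variants):
    print(warning)  # -> CFTR F508del: cystic fibrosis carrier
```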
With respect to “public database” — participants in the Personal Genome Project are warned that their genomes may contain variants only realized to be seriously predictive years after the data has been made public. This is in part intractable: there will be predictive variants that aren’t discovered/published/studied until years after the genome is made public. In other words, great interpretation won’t completely resolve this risk. And researchers have an obligation to explain those risks to participants.
Anyway, in the absence of the interpretations-that-should-exist, I support the simple solution: return the raw data. Give the participant the disk, give them the autonomy to figure it out and the researcher is relieved of the burden of playing gatekeeper.
What follows is not my position.
However, to play devil’s advocate: one problem with “just returning the data”, when the data are research data, is how much confidence one should hold in that data.
One well-rehearsed limitation is the quality of the data – a researcher is running research-grade protocols on research-grade machines. Maybe these are comparable to clinical-quality data in some cases, but in others the data are (by design) of lower resolution than one would accept for a clinical decision. For example, the HLA typing produced in our lab has been widely used to impute HLA types from GWAS data – but it should not be used for tissue typing for transplantation, and it seems the product was withdrawn when someone tried to use it for clinical purposes.
This could be explained to people.
Another, perhaps more serious limitation, is that a proportion of data will be attributed to the wrong person. I have (with many others) “fixed” data behind the scenes to get samples to join up across projects that have emerged from the WTCCC – this is not always possible, search for “we excluded 16 samples” in this. These are good research labs, contributing good quality samples to a good research facility – and still a few samples (and even plates) become misidentified between experiments. It happens even to the best of us.
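(For the curious, the kind of check that catches – or misses – these swaps is simple in principle: compare genotypes for the same nominal sample across experiments at a small fingerprint panel. The SNPs, genotypes and threshold below are all made up for illustration.)

```python
# Toy sample-identity check: compare genotype calls for the same nominal
# sample across two experiments at a small fingerprint panel of SNPs.
# All data and the concordance threshold are illustrative.

project_a = {
    "sample_01": ["AA", "AG", "GG", "CC", "TT"],
    "sample_02": ["GG", "GG", "AA", "TT", "CC"],
}
# Hypothetically, a plate swap has exchanged the two samples downstream.
project_b = {
    "sample_01": ["GG", "GG", "AA", "TT", "CC"],
    "sample_02": ["AA", "AG", "GG", "CC", "TT"],
}

def concordance(calls_a, calls_b):
    """Fraction of fingerprint SNPs with identical genotype calls."""
    return sum(a == b for a, b in zip(calls_a, calls_b)) / len(calls_a)

for sample, calls in project_a.items():
    c = concordance(calls, project_b[sample])
    status = "OK" if c >= 0.9 else "POSSIBLE SWAP"
    print(f"{sample}: concordance {c:.2f} -> {status}")
```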
CFTR-F508Del, she says. Hmm, let’s google that.
Google top hit: Wikipedia.
http://en.wikipedia.org/wiki/%CE%94F508
ΔF508. Opening sentence: “ΔF508 (delta-F508) is a shorthand that most likely refers to CFTRΔF508, also referred to as F508del-CFTR.”
None of those are the representation Madeleine chose.
Google hits 2, 3, and 4:
- Correction of the F508del-CFTR protein processing defect in vitro by … (http://www.ncbi.nlm.nih.gov/pubmed/21976485)
- Rescue of F508del-CFTR by RXR motif inactivation triggers … (http://www.ncbi.nlm.nih.gov/pubmed/20044041)
- Most F508del-CFTR Is Targeted to Degradation at an Early Folding … (mcb.asm.org/content/25/12/5242.short)
We searched for CFTR-F508Del, but PageRank shows four names that are preferred:
- ΔF508
- delta-F508
- CFTRΔF508
- F508del-CFTR
Hits 7 and 8 finally use the CFTR-F508Del form of the name. But they aren’t actually about the variant in question – they are about a chemical that interacts with it:
http://www.emdmillipore.com/chemicals/cftr-f508del-corrector-km11060/EMD_BIO-219676/p_AFmb.s1OUeEAAAErxIEqPdB8
> Where’s the database for that? Maybe ClinVar will solve this? We’re still waiting.
It does have a ‘standard’ name: rs113993960.
http://www.ncbi.nlm.nih.gov/SNP/snp_ref.cgi?rs=113993960
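(A quick sketch of resolving that rsID programmatically, assuming NCBI’s E-utilities esummary endpoint for db=snp accepts the numeric part of the identifier – worth checking against the current E-utilities documentation before relying on it.)

```python
# Sketch: fetch the dbSNP summary record for rs113993960 via NCBI E-utilities.
# Assumes db=snp esummary accepts the numeric rsID; verify against current docs.
import json
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
url = f"{EUTILS}?db=snp&id=113993960&retmode=json"

with urllib.request.urlopen(url) as response:
    summary = json.load(response)

# Print (a slice of) whatever record comes back for rs113993960.
print(json.dumps(summary.get("result", {}), indent=2)[:2000])
```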
Maybe this is http://xkcd.com/927/
But until we embrace some true standard, we’re going to be stuck with “the absence of the interpretations-that-should-exist.”
I agree with Madeleine Ball that one should be able to return the raw data, at least if it was obtained in a CLIA lab. I feel that it is very problematic to sequence live humans only in research settings, without any methods in place for returning results or incorporating them into the electronic medical record. The initial germline exome and/or whole genome sequencing for each live human should be performed in a clinical-grade (CLIA-certified, in America) manner, so that genetic results can be returned. This is exactly what 23andMe are doing for their genotyping, and I expect they will perform exome and whole genome sequencing in CLIA labs in the future for anything that they offer to the general public.
I have written about this in various places, including a guest post on this website: https://genomesunzipped.org/2012/02/guest-post-time-to-bring-human-genome-sequencing-into-the-clinic.php
My colleague and I argue in an article in press for an analytic-interpretive split involving the return of “raw” CLIA data, so that anyone can get the data interpreted downstream.
Lyon, G.J.* and Segal J.P.*, Practical, ethical and regulatory considerations for the evolving medical and research genomics landscape, Applied & Translational Genomics 2013. http://www.sciencedirect.com/science/article/pii/S2212066113000021
We mention in this paper the recent report from the Presidential Commission for the Study of Bioethical Issues, which made the following recommendation:
“Recommendation 4.1
Funders of whole genome sequencing research, relevant clinical entities, and the commercial sector should facilitate explicit exchange of information between genomic researchers and clinicians, while maintaining robust data protection safeguards, so that whole genome sequence and health data can be shared to advance genomic medicine. Performing all whole genome sequencing in CLIA-approved laboratories would remove one of the barriers to data sharing. It would help ensure that whole genome sequencing generates high-quality data that clinicians and researchers can use to draw clinically relevant conclusions. It would also ensure that individuals who obtain their whole genome sequence data could share them more confidently in patient-driven research initiatives, producing more meaningful data.”
My last point is that the 1 Million Veteran Program (http://www.research.va.gov/mvp/ ) does NOT currently have any plans to return results or incorporate anything into medical records. In fact, I am not even sure they plan to perform the WGS in the Illumina CLIA-certified lab. This decision by the VA throws away any real chance of incorporating genomic data into clinical care, although there was a time when some at the VA were claiming that the results would be returned and placed in the EHR, thereby engendering a “therapeutic misconception” on the part of the veterans. It is certainly one option to decree that none of the genomic data will go back to the participants, but this is simply a waste of money and effort, and it also duplicates clinical efforts, thus wasting precious resources.
In conclusion, the VA (and the US Government) is throwing away a real opportunity to lead in the implementation of genomic medicine in clinical care, compared with what the UK is now trying to do with its projected clinical-grade sequencing of 100,000 NHS patients.
Mike Cariaso:
dbSNP IDs are an imperfect solution because they refer to positions, not specific variants. There are tri-allelic dbSNP entries. While you might find the nomenclature I used a bit odd, it’s not far from HGVS recommendations. At any rate, you had no trouble figuring out what well known variation I was referring to. It has many alternative names because it is famous.
In addition, dbSNP IDs don’t address my issue: the lack of a database that tells us which variants are clinically actionable. I think you may have missed my point: we need a database that tells us which variants predict serious, actionable, pathogenic disease. Nomenclature is beside the point; we can map things into whatever flavor of nomenclature you want from the original data. I used a nomenclature that was more likely to be recognized by people familiar with cystic fibrosis.
Gholson Lyon:
I agree it is tragic (but unsurprising) that the 1 Million Veteran Program has no plan to return data to individuals.
I respectfully disagree that CLIA certification should be a requirement for the return of research data. As I understand it, not all lawyers believe that CLIA is currently required for return of research data. And in many cases no such certification will exist (e.g. return of microbiome data).
Return of data can be conceptualized as existing on a bridge between “research” and “clinic”. The researcher has no business being a clinician, and any requirements in that direction ask him to perform in a role he lacks the skills for. CLIA certification is part of a path we shouldn’t be walking.
For example: CLIA certification of Illumina sequencing will not cover the handling of samples before and after they were in Illumina’s hands. Should the whole stream of research be certified? Sample collection performed by the researchers? Informatics and data handling procedures? If we don’t certify the whole workflow that produced the data, why should certification at one particular point give us much assurance? The chain is only as strong as its weakest link. Neil notes the very real danger of sample swaps.
In addition, CLIA certification leaves a lot to be desired. I’ve recently confronted the fact that it fails to certify informatic trustworthiness. The consequence has been Illumina’s production — under CLIA aegis — of informatically embarrassing data. Look at the third line in their “analytically validated” personal genome file (“genome.block.anno.vcf” on hu032C04’s PGP profile). It looks like someone merged a bunch of separately generated files to create a “whole genome” file, but sloppily took only the headers from their “chr18” run. Headers contain data vital to interpreting the whole of a file. Quality assurance of informatics is critical if we expect people to be automatically interpreting these files for clinical purposes.
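(As an illustration of the kind of automated sanity check I would expect “informatic trustworthiness” to include – this is only a sketch, with the file path as a placeholder – one could at least confirm that the contigs declared in the header match the contigs actually used by the records.)

```python
# Minimal (g)VCF sanity check: do the records use any contigs that the
# ##contig header lines never declared? The file path is a placeholder.
import gzip
import sys

def check_vcf_contigs(path):
    header_contigs, record_chroms = set(), set()
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        for line in fh:
            if line.startswith("##contig=<ID="):
                contig = line.split("ID=")[1].split(",")[0].rstrip(">\n")
                header_contigs.add(contig)
            elif not line.startswith("#"):
                record_chroms.add(line.split("\t", 1)[0])
    missing = record_chroms - header_contigs
    if missing:
        print(f"Records use contigs missing from the header: {sorted(missing)}")
    else:
        print("All record contigs are declared in the header.")

if __name__ == "__main__":
    check_vcf_contigs(sys.argv[1])  # e.g. genome.block.anno.vcf.gz
```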
We could demand *more* standards and certification, since it doesn’t appear to have been enough. But I don’t think that’s the answer. I don’t think the *law* is an appropriate way to regulate this. We have IRBs.
IRBs can determine whether participant populations for a given study are allowed to receive the data. As Neil notes, researchers can educate participants about the research-grade quality of the data. If the IRB judges the data too sensitive, the education insufficient, and/or the data insufficiently trustworthy — then researchers can be blocked from returning data at the IRB level.
At the PGP we do return data, including preliminary interpretations. Participants are warned of many sources of error (including sample swaps!) and are told the data may not be used medically — they should consult a healthcare professional to confirm and follow up on potential findings. As a result, one participant discovered he had an undiagnosed case of essential thrombocytosis.
P.S. It has been pointed out to me that downloading over a GB to see what I’m talking about is a bit much to expect! (Although I will note this geek tip: you can run zless on a download in progress.)
For readers’ convenience, I’ve made a Google doc with the first 100 lines of the Illumina’s CLIA-certified gVCF file with “chr18” in the header: http://goo.gl/A2NE0
Madeleine et al.,
We have CLIA due to substantial quackery that occurred in the 1960s in the absence of any regulation of clinical testing, and this included companies giving back negative Pap smear results WITHOUT doing the test, followed by women dying of undiagnosed cervical cancer. I am myself a PGP participant, and I am fully aware that my own genome was not sequenced in a CLIA-approved laboratory. But this is research that I consented to, as part of an IRB-approved protocol. That is NOT the same as clinical care, and I would like to see America move toward sequencing exomes and whole genomes for thousands and eventually millions of people in a clinical-grade (industrial) manner, with return of results and incorporation into the electronic medical record. I want very much for the 1 Million Veteran Program to embrace this idea. However, we will NOT achieve this in America in the absence of any regulation (i.e. CLIA), and I would far prefer that people embrace some oversight with CLIA rather than having the FDA step in and regulate this terrain heavily. The entire CLIA process of exome and whole genome sequencing should include: 1) saliva/blood sample processing, 2) sequencing data generation, AND 3) interpretation (please see our paper for more detail). We propose in our paper an analytic-interpretive split, so that “raw data” can be generated in CLIA labs in Steps 1 and 2, delivered back to people on hard drives and/or via the internet, and then interpreted downstream (perhaps by multiple players) in Step 3 in CLIA-certified bioinformatics pipelines (which could be in the plug-and-play manner that you describe). See the PDF preprint here (we are making corrections now to the page proofs, so stay tuned for the final paper): http://www.sciencedirect.com/science/article/pii/S2212066113000021
I hope you will read our paper, as we articulate our perspective fully there. My main purpose in making this argument is that I very much want the PGP, the 1 Million Veteran Program, and other initiatives to scale up to millions of people, with return of results and incorporation into the electronic medical record. 23andMe has already demonstrated that one can return genotyping results “under CLIA aegis”, including BRCA1 mutations, via a web portal, and I believe all exome and WGS sequencing projects involving live human beings should embrace this, so that we can finally move toward a world of individualized and preventive medicine. All of this will occur eventually with the industrialization of WGS, much like what is now occurring with genotyping microarrays and what has already occurred with magnetic resonance imaging, but I would prefer it happen sooner rather than later!
I feel a productive way forward would be to explore WGS of Jewish populations, because of their well-documented genealogy, from our forefathers through to familial disease pedigrees. We will also gain a lot from exploring the pedigrees of outstanding artists, scientists, Nobelists, etc.
We are running such a project; if interested, please write to me via
http://www.JDNAF.org
Emanuel Yakobson, Ph.D.
Professor of Comparative Ethnic Genomics
University of Latvia, Riga
Weizmann Institute of Science
Molecular Genetics
Rehovot, Israel
Gholson,
Thank you for that link, but I think we may be talking past each other. I am not disagreeing with all regulation, I am disagreeing with bad regulation. Specifically, I disagree with currently requiring CLIA certification for the return of whole genome results from research. One concern I have is that this advocacy will only encourage researchers to use the lack of CLIA certification as a defense for not returning results. I also believe mandating CLIA for all human subjects research will block competition in the sequencing industry, resulting in technological stagnation. Genome prices will only continue to drop with competition and further technology improvements.
It’s notable that you hold 23andMe up as an exemplary actor in terms of regulation. 23andMe decided to take a stand against regulatory actions it saw as overly paternalistic, pushing back against the FDA’s threatening letters and forging ahead to provide ApoE4 testing to its customers. Later, 23andMe did begin to pursue FDA clearance – after forcefully demonstrating the consumer demand for these results, and after demonstrating that the jumping-off-buildings speculations failed to materialize.
It’s important to look at the reality of CLIA in whole genome sequencing in its current incarnation. As I mentioned earlier, Illumina is returning personal genomes with flaws in basic informatic data. At the same time, CLIA certification has clearly been difficult for competitors like Complete Genomics to acquire. My belief is that these issues with Illumina’s product are a consequence not only of insufficient regulation on the part of CLIA, but also of Illumina’s complacency in the genome sequencing market – a position aided by advocacy for CLIA-certified genomes in all research, which scares consumers away from their uncertified competitors.
Yes, in an ideal world, we should want good regulation, and your piece proposes regulation for more aspects of genomic data. But it’s far from trivial to put those ideals into practice, and CLIA certification is currently failing to assure some basic aspects of the data produced. And I think that’s not surprising: the translation of research into clinic takes time, and part of that timescale is the footwork and experience needed to figure out what appropriate regulation is. Once the work is done, regulation can be used as products roll out into the clinical realm — but give regulators time to do a better job and let CLIA be for clinical data, not research. If CLIA or other WGS regulation matures into a useful assurance of quality, then IRBs might require that certification as standard practice. Until then, we should encourage researchers to be free to explore, develop, and apply the new technologies needed for future improvements in genomics.
This is a great and obviously thought-provoking (and comment-provoking) post from Caroline. I don’t have much to add in this particular forum, though I largely echo many of Madeleine’s points above.
In particular, Gholson, as you know – and as Madeleine points out – there is far from unanimous agreement that CLIA does or should apply to research results, as opposed to clinical results. While I understand your desire to bring all data up to the highest common denominator, I think that:
1) There are important costs associated with CLIA certification and compliance that cannot be ignored (see, e.g., here: http://www.genomeweb.com/genomeweb-feature-core-labs-eye-clinic-familiarity-clia-needed), and the time and money invested in pursuing and maintaining the same can detract from other valuable research efforts;
2) CLIA is not a guarantee of analytically valid data, including for the reasons that Madeleine mentions above, much less of clinically valid or useful data. That’s just not CLIA’s domain. I suspect CLIA may be more than is necessary for many research uses, and less than is necessary for many clinical uses; and
3) Related to the two points above, it’s really important to respect the differences between clinical care and scientific research, while realizing that individuals – and their data – have an important role to play in both. I don’t think we need a one-size-fits-all approach here.
Thanks to all for the great conversation!
These are complex issues, but I’ll try to keep this comment short.
In regard to return of results, I published my viewpoint on this two years ago. See here: http://lyonlab.cshl.edu/publications/There_is_nothing_incidental_about_unrelated_findings.pdf
In regard to the comments by Dan and Madeleine, the 1 Million Veteran Program (1MVP) is already hiding behind CLIA to justify NOT returning results. Their website ( http://www.research.va.gov/mvp/veterans.cfm ) today states:
Will results from my blood tests be forwarded to me?
It will not be possible to give participants results of the blood tests. Due to regulations under the Clinical Laboratory Improvement Ammendments (CLIA), we are legally unable to return research results to participants. Results from the blood tests will not be placed in participants’ electronic health record. Participants should discuss any health concerns with their doctor or other health care provider, who can arrange any necessary and appropriate tests.
My suggested solution is to demand that they perform the sequencing under CLIA aegis, so that they DO return results AND incorporate them into the EMR, perhaps collaborating with 23andMe (or some other entity) on a web-based system through which to return results. Your suggested solution appears to be to say that it is OK to sequence 1 million veterans in research labs (with basically only IRB oversight) AND to give them back their results (rather like what is happening now with the Personal Genome Project), which does not appear to be workable under the current law (i.e. CLIA). Just as an aside, I do have a paper under review demonstrating that the false negative rate for Complete Genomics “whole” genomes (v2 pipeline) is high, and I also believe there are MANY other, unrelated reasons why CG has not gotten CLIA certified.
I realize this is not a “one size fits all” scenario, but the 1 million veterans that are supposed to be getting sequenced in coming years could actually benefit from receiving their results, moving them (and us as a society) closer to individualized and preventive medicine. But, this won’t happen if policymakers, companies and researchers keep hiding behind CLIA to justify NOT returning results, as then we’ll just have a few “anointed” academics and companies “siloed” with a bunch of research-grade data. I don’t think the solution is to abolish the requirement for CLIA, but rather to demand that the 1MVP adopt and require CLIA certification for a standardized sequencing package, so that raw data CAN be collated, returned, and widely shared.
My last point is that I am reading “Looking Backward” right now (recommended by Bert Gold via Twitter), and it is really sad to see how little progress we have made as a society in the past 140 years. See here: http://www.amazon.com/Looking-Backward-Dover-Thrift-Editions/dp/0486290387/ref=sr_1_1?ie=UTF8&qid=1363780645&sr=8-1&keywords=looking+backward
Can this be true?
I.e. the government may decide that vendors cannot call data ‘clinical’, nor place it in official health records, unless it has the correct stamp – but that has nothing to do with giving data in general to data subjects. It would be more helpful to say:
Anyhow, today this came up:
http://www.genomeweb.com/sequencing/acmg-recommends-labs-return-some-incidental-genetic-findings-doctors-patients
If you can’t read that, the report it is reporting on is:
http://www.acmg.net/docs/ACMG_Releases_Highly-Anticipated_Recommendations_on_Incidental_Findings_in_Clinical_Exome_and_Genome_Sequencing.pdf
While this is entirely about the clinical space, having a group of eminent people dig out a concrete list of “incidental” findings which they recommend should be given back should (IMHO) also lead to similar considerations in the research space.
The ACMG Recommendations literally use the terms “incidental” and “secondary” interchangeably. When did this occur? As this great blog post points out, searching for specific secondary findings is not at all “incidental”.
After looking at hundreds of families’ worth of genetic information, I can tell you exactly how many legitimate incidental findings I’ve had: ONE.
These guidelines are basically demanding that sequencing and analysis providers include secondary finding analysis as part of all their products. Not only that, but they seem to require manual curation and explicit reporting:
“The Working Group recognized that there is no single database currently available that represents an accurately curated compendium of known pathogenic variants, nor is there an automated algorithm to identify all novel variants meeting criteria for pathogenicity. Therefore, evaluation and reporting of positive findings in these genes may require significant manual curation.”
In other words, the idea Madeline Ball mentioned to simply return all the raw data (which is actually something I thought may cover this in the future) appears to not be enough.
Moreover, and I’ll be blunt here, this stuff isn’t free. If everyone is going to be forced to do significant additional analyses to obtain secondary findings, there is a cost associated with that, and the cost will come back to the clinician (or researcher). And, as they state:
“The Working Group acknowledged that there was insufficient evidence about benefits, risks and costs of disclosing incidental findings to make evidence-based recommendations”
So good luck getting them reimbursed. In fact, they even admit that later on:
“We recognize that laboratories that adopt these recommendations may add significant costs to at least some of their sequencing reports with primer design and Sanger confirmation of positive findings, evidence review, report generation and sign-out. We do not know the implications that this may have on reimbursement for clinical sequencing.”
I think everyone appreciates a set of guidelines, but the omissions from this particular set leave too many unanswered questions. Not the least of which is, “Who is going to pay for the secondary findings nobody wanted in the first place?”