This is a guest post by Peter Cheng and Eliana Hechter from the University of California, Berkeley.
Suppose that you’ve had your DNA genotyped by 23andMe or some other DTC genetic testing company. Then an article shows up in your morning newspaper or journal (like this one) and suddenly there’s an additional variant you want to know about. You check your raw genotypes file to see if the variant is present on the chip, but it isn’t! So what next? [Note: the most recent 23andMe chip does include this variant, although older versions of their chip do not.]
Genotype imputation is a process used for predicting, or “imputing”, genotypes that are not assayed by a genotyping chip. The process compares the genotyped data from a chip (e.g. your 23andMe results) with a reference panel of genomes (supplied by big genome projects like the 1000 Genomes or HapMap projects) in order to make predictions about variants that aren’t on the chip. If you want a technical review of imputation (and the program IMPUTE in particular), we recommend Marchini & Howie’s 2010 Nature Reviews Genetics article. However, the following figure provides an intuitive understanding of the process.
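As a toy illustration of the idea, imputation can be sketched as finding the reference haplotypes that agree with the chip data at the typed sites, then reading off their alleles at the untyped sites. The haplotypes and sites below are invented for illustration; real tools like IMPUTE use hidden Markov models over phased reference panels rather than exact matching:

```python
# Toy sketch of genotype imputation. All data here is made up;
# real reference panels (1000 Genomes, HapMap) contain thousands of
# phased haplotypes over millions of sites.

# Reference panel: phased haplotypes over five biallelic sites (alleles 0/1).
reference_haplotypes = [
    [0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0],
]

# The chip assayed only sites 0, 2 and 4; sites 1 and 3 are untyped.
typed_sites = [0, 2, 4]

def impute(chip_haplotype):
    """Find reference haplotypes matching the chip data at the typed
    sites, then average their alleles at each untyped site."""
    matches = [h for h in reference_haplotypes
               if all(h[s] == chip_haplotype[s] for s in typed_sites)]
    imputed = {}
    for site in range(5):
        if site not in typed_sites:
            # Dosage estimate: fraction of matching haplotypes carrying allele 1.
            imputed[site] = sum(h[site] for h in matches) / len(matches)
    return imputed

# A chip haplotype with None at the unassayed sites.
print(impute([1, None, 1, None, 0]))  # prints {1: 0.0, 3: 0.5}
```

The fractional value at site 3 reflects genuine uncertainty: the matching reference haplotypes disagree there, which is why imputation programs report probabilities rather than hard genotype calls.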
Continue reading ‘Learning more from your 23andMe results with Imputation’
The recent announcement that the UK Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years” raises a number of questions, with which the Department of Health are no doubt grappling as I write. I’ve previously discussed the thorny issue of using targeted versus whole genome sequencing to maximize diagnostic yield and benefit patients. However, one of the great achievements of next generation sequencing technologies is to make the assay – actually sequencing a genome (or some portion of it) – one of the easier parts of clinical genomics. Although laboratories will have to be suitably equipped, staffed and flexibly managed to deal with high sample throughput and ever-changing scientific specifications, the biggest challenge will be to implement genomic knowledge in the clinic.
Continue reading ‘£100M for whole patient genomes – an implementation challenge’
On 10th December 2012, UK Prime Minister David Cameron launched a Report on the Strategy for UK Life Sciences One Year On by announcing that the Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years”. This ambitious initiative – which will focus initially on cancer, rare diseases and infectious diseases – aims to train a new generation of genetic scientists, stimulate the UK life sciences industry and “revolutionise” patient care.
There is no doubt that this investment offers a major opportunity for the UK to firmly establish itself as a world-leader in medical genomics. However, deciding how best to use the £100M to maximise patient benefit will be a challenge. There are numerous implementation issues, outlined in the PHG Foundation’s response to the announcement. Not least of these is the urgent need for informatics provision to facilitate storage, processing, annotation, interpretation and secure access to both genomic and phenotypic data. This will involve determining appropriate ethical and operational standards across a broad range of questions.
But there is one particularly crucial question that needs to be answered early on: what is the most appropriate assay to use for clinical implementation? All the literature released by the Government, and quoted extensively by the press, states quite categorically that the money will be used for “sequencing whole genomes”. Surely this can’t really be true? (I certainly hope it’s just coincidence that if you multiply a £1000 genome by 100,000 patients you reach the magic figure of £100 million…) If it is the case, there are several major problems.
Continue reading ‘£100M for whole patient genomes – revolutionising genetic diagnostics or squandering NHS cash?’
Here at Genomes Unzipped we love genomes. But there is more to biology than genomics, and more to understanding your own body than personal genetic tests. To understand the human body, you have to look not just at the DNA present, but also at which genes are turned on in which tissues, which cells are being produced in what numbers, what compounds are circulating in your blood, and even what other organisms are living on your body. However, for the interested consumer, the non-genetic aspects of personalized medicine have generally been less accessible than the genetic ones. This post discusses a few companies that are trying to fill this gap, and that are looking to the general public to crowd-fund their products.
A quick note: I have not investigated these companies in detail, and, as with all crowd-funding, you should be aware that a company may not manage to produce the product as described (or even make it at all).
Continue reading ‘Crowd-funding personalized bioscience’
Following the Genomes Unzipped post entitled “Exaggerations and errors in the promotion of genetic ancestry testing”, we received a request to reply from Jim Wilson. Jim Wilson is the chief scientist of BritainsDNA. He is not the one who gave the BBC interview that prompted the Genomes Unzipped post but he is a key contributor to the science behind BritainsDNA. We are keen to tell both sides of this story and this post is an opportunity for BritainsDNA to state their arguments and motivation. -VP
I saw Vincent Plagnol’s post here on Genomes Unzipped about the promotion of genetic ancestry testing and felt compelled to respond. While I did not give the interview that was the subject of the post, I am the chief scientist at BritainsDNA and I feel that the post was biased in presenting only one side of the story and thus misrepresenting the situation. Perhaps I can offer another perspective for readers.
Continue reading ‘Response to “Exaggerations and errors in the promotion of genetic ancestry testing”’
One thing we have done at Genomes Unzipped is report on what is on the market for consumers interested in learning more from their genetic data. While we have generally found positive things to say about this market, there are also many exaggerated claims, especially when direct-to-consumer genetics companies make inferences about an individual’s ancestors. An example came up last summer in a BBC Radio 4 interview with Alistair Moffat of Britain’s DNA. This post will discuss the scientific basis of some of the claims made in that interview.
But first of all, what is my motivation for writing this post? After all, there are quite a few genetic ancestry companies like Britain’s DNA making similar claims. Why single out this BBC Radio 4 interview? The main reason is that listening to it prompted my UCL colleagues David Balding and Mark Thomas to put questions to the Britain’s DNA scientific team; the questions have not been satisfactorily answered. Instead, a threat of legal action was issued by solicitors for Mr Moffat. Any type of legal threat is an ominous sign in an academic debate. This motivated me to point out some of the incorrect, or at the very least exaggerated, statements made in this interview. Importantly, while I received comments from several people on this post, the opinion presented here is entirely mine and does not involve any of my colleagues at Genomes Unzipped.
Continue reading ‘Exaggerations and errors in the promotion of genetic ancestry testing’
As part of the Personal Genome Project (PGP), my genome was recently sequenced by Complete Genomics. My PGP profile, including the sequence, is here, and their report on my genome is here. As I play around with the best ways to analyze these data, I’ll write additional posts, but for now I’ve noticed only one thing: I’m almost surprised by how unsurprising my full genome sequence is.
According to the PGP’s genome annotator, I have two variants of “high” clinical relevance. The first is the APOE4 allele, which Luke had already reported that I carry. The second is a variant that causes alpha-1-antitrypsin deficiency, which is also typed by 23andMe.
Of course, this is all quite reassuring. Long-time readers will remember that last year I was briefly worried that I might have Brugada syndrome. I do not carry any of the known pathogenic mutations (modulo worries about false negatives); this of course is now unsurprising, but would have been really nice information to have, say, when I was talking with a cardiologist last year.
As I mentioned a few weeks ago, we recently published a large study into the genetics of inflammatory bowel disease (IBD), which included a number of analyses digging into the biology and evolutionary history of IBD genetic risk. Gratifyingly, our paper has stimulated a lot of discussion among other scientists, which has generated several ideas about future directions for this work. One question raised by several population-genetics experts at ASHG concerned our natural selection analysis, and in particular our claim to have discovered an enrichment of balancing selection in IBD loci. In the paper, we found clear signals of natural selection on IBD loci, a subset of which we interpreted as balancing selection. In this post I will set out how I came to this conclusion, but then outline an alternative explanation for the results: recent local positive selection in Europeans.
Continue reading ‘Looking closer at natural selection in inflammatory bowel disease’
Many of the Genomes Unzipped team are spending the week at the American Society of Human Genetics meeting in San Francisco. This year the coverage of the meeting on Twitter is more intense than ever before, and social media is becoming an increasingly mainstream component of the conference. Chris Gunter, Jonathan Gitlin, Jeannine Mjoseth, Shirley Wu and I will be presenting a workshop on social media use for scientists this evening, and we prepared these guidelines for those interested in live coverage of meetings.
- Check the conference social media guidelines first.
If there aren’t any, ask an organizer what the rules are. If there is no formal policy, you may want to take the initiative and ask speakers if they’re OK with their talks being tweeted.
- Use the right #hashtag when you tweet.
This ensures that everything written about a meeting is aggregated in a single channel. When you search a hashtag it filters those posts for you.
- Remember that people are listening.
Twitter is a public conversation. Don’t say anything you wouldn’t be prepared to tell the speaker to their face. Also, bear in mind that your boss and potential employers may be following.
- Remember that people are listening who aren’t at the meeting.
In general, leave off the conference hashtag for in-jokes and social chatter unless it’s likely to be genuinely entertaining to outsiders.
- Be careful tweeting new findings.
If a speaker is presenting unpublished data, don’t write about it unless you’re sure they’re happy to share.
- Do your best to ensure that your tweets don’t misrepresent presented material.
Add as much context as you can, and actively correct misunderstandings that arise about something you tweet.
- Add value by contributing your specific area(s) of expertise to provide insight into presented material.
Don’t just be the fifth person to tweet the easy soundbite from the plenary; instead, explain the unappreciated but profound scientific significance of their fourteenth slide.
- At the same time, don’t tweet everything a speaker says.
One to three key take-home messages per talk is usually enough, unless a presentation is particularly fascinating.
- Don’t swamp the hashtag by quote-tweeting everyone else.
Use the official retweet function, or “break the hashtag” (for instance, delete the # character) in your quote-tweets.
- If you’re organizing a conference, be proactive with a social media policy.
Make sure both the presenters and the audience at the meeting are aware in advance what this policy is.
Out in Nature this week is a paper by three Genomes Unzipped authors reporting 71 new genetic associations with inflammatory bowel disease (IBD). This breaks the record for the largest number of associations for any common disease, and includes many new and interesting biological insights that you should all go and read about in the paper itself (pay-to-access, I’m afraid) or on the Sanger Institute’s website.
One thing that we did not discuss in the paper was genetic prediction of IBD (i.e. using the risk variants we have discovered to predict who will or will not develop the disease). In this post I want to outline some of the situations in which we have considered using genetic risk prediction of IBD, and discuss whether any of them would actually work in practice.
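For context on what such prediction involves: the standard approach is a polygenic score, summing each variant’s log odds ratio weighted by the number of risk alleles a person carries, under an assumed additive (multiplicative-odds) model. Here is a minimal sketch; the variant names, odds ratios and genotypes below are invented placeholders, not the IBD associations from the paper:

```python
import math

# Hypothetical risk variants: per-allele odds ratio and the number of
# risk alleles carried (0, 1 or 2). Illustrative values only.
variants = {
    "rs_a": {"odds_ratio": 1.3, "genotype": 2},  # homozygous for the risk allele
    "rs_b": {"odds_ratio": 1.1, "genotype": 1},  # one risk allele
    "rs_c": {"odds_ratio": 0.8, "genotype": 0},  # protective allele, not carried
}

def polygenic_score(variants):
    """Sum log odds ratios weighted by risk-allele count, assuming
    independent additive effects on the log-odds scale."""
    return sum(v["genotype"] * math.log(v["odds_ratio"])
               for v in variants.values())

score = polygenic_score(variants)  # higher score = higher predicted risk
```

The resulting score only ranks individuals by relative risk; turning it into a useful clinical prediction also depends on disease prevalence and on how much of the heritability the known variants capture, which is exactly the question the post goes on to discuss.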
Continue reading ‘Dozens of new IBD genes, but can they predict disease?’