Tag Archive for 'sequencing'


Are synthetic associations a man-made phenomenon?

Early last year David Goldstein and colleagues published a provocative paper claiming that many GWAS associations are driven not by common variants of modest effect (the canonical common disease – common variant hypothesis underpinning GWAS) but instead by a local cluster of lower-frequency variants that have much bigger effects on disease risk. They dubbed this hypothesized phenomenon “synthetic association”, and the term quickly became a genetics buzzword. The paper was widely discussed in both the specialist and mainstream media, and caused quite a stir among academic statistical geneticists.

That debate has been re-opened today by a set of Perspectives in PLoS Biology: a rebuttal by us (Carl & Jeff) and our colleagues at Sanger, a rebuttal by Naomi Wray, Shaun Purcell and Peter Visscher, a rebuttal to the rebuttals by David Goldstein and an editorial by Robert Shields to tie it all together.

Continue reading ‘Are synthetic associations a man-made phenomenon?’

A Googol of Genomes?

[Editor’s Note: this was originally posted over at the Genomics Law Report but we’d like to survey Genomes Unzipped readers as well. How many complete genomes do you think will be sequenced in 2011? Poll is at bottom.]

Earlier this week we took a look back at 2010 and offered our projections for the coming year in personal genomics. Topic #1, just as it was last year: the $1,000 genome.

In hindsight, it might have been ill-advised to offer predictions about the near-term future of genome sequencing during the same week in which one of the year’s major industry conferences (the JP Morgan annual Healthcare Conference) is taking place.

Continue reading ‘A Googol of Genomes?’

Solving Medical Mysteries Using Sequencing

There is a real “wow” paper out in pre-print at the journal Genetics in Medicine. It is a wonderful example of the application of cutting edge sequencing technology to solve a medical mystery. Even better, the authors also include an auxiliary discussion about the medical and ethical issues surrounding the diagnosis, which raises some interesting issues about the transition from research to clinical sequencing.

The Case

A child manifested severe inflammation of the bowel at 15 months; antibiotics failed to clear it up, and he started to lose weight. Standard treatments had only sporadic effects, and only aggressive intervention with immunosuppressants, surgery and full bowel clearing could slow the disease down, which is not a long-term solution. No cause could be found; the patient’s immune system seemed to be acting abnormally, but all tests for the known congenital immune deficiencies came back negative. The doctors could try a full bone-marrow transplant, but without knowing what was causing the disease, and where it was localised, they had no way of knowing whether such an extreme intervention would be successful.

Such a severe and early onset disease is likely to be genetic, but testing immune genes at random to find the mutation could take years before it turned anything up. Meanwhile, the child was seriously malnourished, and at times required daily wound care under general anaesthetic. A few years ago this might have been the end of the story.

Continue reading ‘Solving Medical Mysteries Using Sequencing’

Saturday Links

Due to a communication breakdown, no-one wrote a Friday Links post yesterday, so today we have a Saturday Links to make up for it.

Steve Hsu has a very appropriately named post, News from the future, about the Beijing Genomics Institute. The BGI is the largest genome sequencing center in China and one of the largest in the world; it is also growing faster than any other, loading up on a shedload of high-tech HiSeq machines.

Steve reports that the BGI are claiming that their sequencing rate will soon reach 1,000 genomes per day, at a cost of about $5k (£3.2k) each. To put a slight downer on these amazing numbers, he clarifies that this might refer to 10X genomes, which would realistically mean ~300 high-quality genomes a day at $15k (£9.6k) each. Either way, Steve’s post is worth a read if you want to keep an eye on how fast whole-genome sequencing is progressing, perhaps with an eye to when you’re ready to shell out to get your own done.
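For those who like to see the arithmetic spelled out, here is a minimal back-of-envelope sketch of the 10X-versus-30X conversion. All numbers are the figures quoted above, not official BGI specifications, and 30X is just the conventional target for a "high quality" genome.

```python
# Back-of-envelope sketch of the 10X-vs-30X arithmetic.
# Numbers are the post's quoted figures, not official BGI specs.

LOW_COVERAGE = 10            # coverage behind the "1000 genomes per day" claim
HIGH_COVERAGE = 30           # coverage typically wanted for a high-quality genome
GENOMES_PER_DAY_LOW = 1000
COST_PER_LOW_GENOME = 5_000  # USD

# Total daily output, measured in genome-equivalents of 1X coverage
daily_coverage = GENOMES_PER_DAY_LOW * LOW_COVERAGE  # 10,000X per day

high_quality_per_day = daily_coverage // HIGH_COVERAGE
cost_per_high_genome = COST_PER_LOW_GENOME * HIGH_COVERAGE // LOW_COVERAGE

print(high_quality_per_day, "high-quality genomes per day")  # ~333
print("$", cost_per_high_genome, "per genome")               # $15,000
```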

A question for the comments: how cheap would a whole-genome sequence have to get before you’d order one?

Continue reading ‘Saturday Links’

Friday Links

Over at Your Genetic Genealogist, CeCe Moore talks about investigating evidence of low-level Ashkenazi Jewish descent in her 23andMe data. What I like about this story is how much digging CeCe did; after one tool threw up a “14% Ashkenazi” result, she looked for similar evidence in 23andMe’s tool. She then ran the same analysis on her mother’s DNA, finding no apparent Ashkenazi heritage, and to top it all off got her paternal uncle genotyped, which showed even greater Ashkenazi similarity. [LJ]

A paper out in PLoS Medicine looks at the interaction between genetics and physical activity in obesity. The take-home message is pretty well summarized in the figure to the left; genetic predispositions are less important in determining BMI for those who engage in frequent physical exercise than for those who remain inactive. This illustrates the importance of including non-genetic risk factors in disease prediction; not only because they are very important in their own right (the paper demonstrates that physical activity is about as predictive of BMI as known genetic factors), but also because information on environmental influences allows better calibration of genetic risk. [LJ]

Trends in Genetics have published an opinion piece in their most recent issue outlining the types of genetic variants we might expect to see for common human diseases (defined by allele frequency and risk), and how exome and whole-genome sequencing could be used to find them. They give a brief, relatively jargon-free overview of gene-mapping techniques that have previously been used, and discuss how sequencing can take this research further, particularly for the previously less tractable category of low-frequency variants that confer a moderate level of disease risk. [KIM]

More Sanger shout outs this week; Sanger Institute postdoc Liz Murchison, along with the rest of the Cancer Genome Project, have announced the sequencing of the Tasmanian Devil genome. The CGP is interested in the Tasmanian Devil due to a rare, odd and nasty facial cancer, which is passed from Devil to Devil by biting. In fact, all the tumours are descended from the tumour of one individual; 20 years or so on, and 80% of the Devil population has been wiped out by the disease. As well as a healthy genome, the team also sequenced two tumour genomes, in the hope of learning more about what mutations made the cells turn cancerous, and what makes the cancer so unusual.

I have to say, this isn’t going to be an easy job; assembling a high-quality reference genome of an under-studied organism is a lot of work, especially using Illumina’s short-read technology, and identifying and making sense of tumour mutations is equally difficult. Add to this the fact that the tumour genome comes from a different individual to the healthy one, and it all adds up to a project of unprecedented scope. On the other hand, the key to saving a species from extinction could rest on this sticky bioinformatics problem, and if anyone is in a position to deal with it, it’s the Cancer Genome Project. [LJ]

Tasmanian Devil image from Wikimedia Commons.

Friday Links

A lot of the Genomes Unzipped crew seem to be away on holiday at the moment, so today’s Links post may lack the authorial diversity that you’re accustomed to.

I just got around to reading the August edition of PLoS Genetics, and found a valuable study from the Keck School of Medicine in California. The authors looked at the effect of known common variants in five American ethnic groups (European, African, Hawaiian, Latino and Japanese Americans), to assess how similar or different the effect sizes were across the groups.

The authors calculated odds ratios for each variant in each ethnic group, and looked for evidence of heterogeneity in odds ratios. They find that, in general, the odds ratios tend to show surprisingly little variation between ethnic groups; the direction of risk was the same in almost all cases, and the mean odds ratio was roughly equal across populations (the authors note that this pretty effectively shoots down David Goldstein’s “synthetic association” theory of common variation). One interesting exception was that the effect size of the known T2D variants was significantly larger in Japanese Americans, who had a mean odds ratio of 1.20, compared to 1.08-1.13 for other ethnic groups. The graph to the left shows the distribution of odds ratios in European and Japanese Americans.
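For readers curious how this kind of heterogeneity analysis works mechanically, here is a minimal sketch: per-group odds ratios computed from 2x2 allele-count tables, combined with Cochran's Q, the standard inverse-variance-weighted test for heterogeneity of effect sizes. The counts below are made up for illustration, not taken from the paper.

```python
import math

def log_odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Woolf's method: log odds ratio and its standard error from 2x2 allele counts."""
    log_or = math.log((case_alt * ctrl_ref) / (case_ref * ctrl_alt))
    se = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    return log_or, se

# Hypothetical allele counts per group: (case alt, case ref, ctrl alt, ctrl ref)
groups = {
    "European": (300, 700, 250, 750),
    "Japanese": (360, 640, 250, 750),
}

stats = {name: log_odds_ratio(*counts) for name, counts in groups.items()}

# Cochran's Q: inverse-variance-weighted test for heterogeneity of effects
weights = {name: 1 / se ** 2 for name, (_, se) in stats.items()}
pooled = sum(w * stats[n][0] for n, w in weights.items()) / sum(weights.values())
Q = sum(w * (stats[n][0] - pooled) ** 2 for n, w in weights.items())

for name, (log_or, _) in stats.items():
    print(f"{name}: OR = {math.exp(log_or):.2f}")
print(f"Cochran's Q = {Q:.2f} on {len(groups) - 1} df")
```

A large Q relative to a chi-square with (number of groups − 1) degrees of freedom is evidence that the odds ratios genuinely differ between groups, as the paper reports for the T2D variants in Japanese Americans.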

These sorts of datasets will be very useful for personal genomics in the future, as a decade of European-centered genetics research has left non-Europeans somewhat in the lurch with regards to disease risk predictions. However, the problem with the approach in this paper is that even in a study as large as this (6k cases, 7k controls), the error bounds on the odds ratios within each group are still pretty large. [LJ]

Over at the Guardian Science Blog, Dorothy Bishop explains the difference between learning that a trait is heritable (e.g. from twin studies), and mapping a specific gene “for” a trait (e.g. via GWAS). Her conclusion is worth repeating:

The main message is that we need to be aware of the small effect of most individual genes on human traits. The idea that we can test for a single gene that causes musical talent, optimism or intelligence is just plain wrong. Even where reliable associations are found, they don’t correspond to the kind of major influences that we learned about in school biology. And we need to realise that twin studies, which consider the total effect of a person’s genetic makeup on a trait, often give very different results from molecular studies of individual genes.

There are also interesting questions to be asked about why there is such a gap between heritabilities estimated by twin studies and the heritability that can be explained by GWAS results. That, however, is a question for another day. [LJ]

Another article just released in PLoS Genetics provides a powerful illustration of just how routine whole-genome sequencing is now becoming for researchers: the authors report on complete, high-coverage genome sequence data for twenty individuals. The samples included 10 haemophilia patients and 10 controls, taken as part of a larger study looking at the genetic factors underlying resistance to HIV infection. While this is still a small sample size by the standards of modern genomics, there are a few interesting insights that can be gleaned from the data: for instance, the researchers argue from their data that each individual has complete inactivation of 165 protein-coding genes due to genetic variants predicted to disrupt gene function. I’ll be following up on this claim in a future post. [DM]

Finally, a quick shout-out to our fellow Sanger researchers, including Verneri Anttila and Aarno Palotie, along with everyone else in the International Headache Genetics Consortium, for finding the first robust genetic association with migraine. They looked at 3,279 cases and >10k controls (and another 3,202 cases to check their results), and found that the variant rs1835740 was significantly associated with the disease.

To tie in with the above story, in the region of 40-65% of variation in migraine is heritable, but only about 2% of this was explained by the rs1835740 variant. However, explaining heritability isn’t the main point of GWAS studies: a little follow-up found that rs1835740 was correlated with expression of the gene MTDH, which in turn suggests a defect in glutamate transport; hopefully this new discovery will help shed some light on the etiology of the disease. [LJ]
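The gap between 40-65% heritability and the ~2% explained is easy to see from the standard formula for the additive variance contributed by a single biallelic variant, 2p(1−p)β². Here is a quick sketch with purely illustrative numbers (not estimates from the migraine study):

```python
def variance_explained(p, beta):
    """Additive variance from one SNP with allele frequency p and
    per-allele effect beta, on a trait standardized to variance 1."""
    return 2 * p * (1 - p) * beta ** 2

# Illustrative values only; not taken from the migraine paper
v = variance_explained(p=0.2, beta=0.05)
h2 = 0.5  # suppose ~50% of variation in liability is heritable

print(f"SNP explains {100 * v:.3f}% of trait variance")
print(f"...and {100 * v / h2:.3f}% of the heritable variance")
```

With realistic common-variant effect sizes, the variance explained by any one SNP is tiny, which is why a single robust association like rs1835740 barely dents the heritability even while pointing at useful biology.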

Friday links

Welcome to the inaugural Friday links post. We’ll be using these posts to share interesting articles stumbled across by Unzipped members during the week.

We’re still tweaking the format, but the basic idea will be a brief paragraph of commentary followed by the initials of the person who wrote it.

Dan Koboldt reviews a recent paper reporting the use of whole-genome sequencing to find the mutation responsible for a severe genetic disease. Interestingly, in this case the disease was undiagnosed, and the causal variant was used to produce a diagnosis of sitosterolemia; more interestingly, this diagnosis had already been ruled out by another test, that was shown to be a false negative. [DM]

ScienceNews reports that researchers from the University of Copenhagen have got permission to sequence the genome of Sitting Bull, the Native American war chief who led the Battle of Little Bighorn. I don’t know exactly what they intend to learn from the genome scientifically, but it seems like this might serve primarily as a monument to a major figure in Native American resistance. So the question I have is this: how can we go from a genome sequence (which is generally just a text file on a computer) to a public remembrance, something akin to the 1989 postage stamp shown to the left? [LJ]

Two papers in the current issue of Nature Genetics highlight recent inroads made in understanding the genetics of infectious disease susceptibility. The first found an association between risk of meningococcal disease and CFH, a gene previously implicated in age-related macular degeneration. The second identified a susceptibility locus for tuberculosis in African samples. Paul de Bakker and Amalio Telenti have a nice News and Views piece about them as well, remarking on this welcome advance not only in understanding infection, but also in using GWAS to gain insight about disease risk in non-Europeans. [JCB]

Update: Dan Frost from the GoldenHelix blog has drawn our attention to a thought-provoking post on the future of GWAS studies. The post suggests that much of the missing heritability in complex disease is hiding in the set of variants that are badly tagged by existing chips, and proposes that GWAS studies in the future may include a sequencing phase to discover new variants in cases, followed by genotyping using custom genotype chips to capture this variation. The question, from my point of view, is how many common SNPs are there that aren’t well tagged by existing chips, and thus how much heritability could be hidden there? This is exactly the sort of question that the 1000 Genomes dataset was designed to answer. [LJ]

How widespread personal genomics could benefit molecular biology

While the majority of the buzz surrounding personal genomics has to do with prediction of disease risk and other medical applications, there’s clearly the potential for these sorts of technologies to influence basic science as well. In this post, I’ll lay out one such potential application: the use of personal genomics in understanding basic molecular biology, in particular the biology of transcriptional regulation in humans.

Continue reading ‘How widespread personal genomics could benefit molecular biology’

Personal genomics: the importance of sequencing

Those of us who live and breathe genomics get very excited about sequencing DNA. Genomes Unzipped will be sure to cover the constant battles between sequencing companies to produce complete and accurate genome sequences for low prices; from our point of view, ‘low prices’ means affordable for consumers, or less than £1000 or so for a full sequence of an individual.

But why do we care about sequencing? You can go to a company like 23andMe and get a genotyping chip done; this won’t give you your full DNA sequence, but it will give you information about half a million sites on your genome, at the much lower cost of around £300. The sites picked for these chips are ones that are most variable in the population, and those that are well-studied. Why do we care about the rest? What more does sequencing give you?
Continue reading ‘Personal genomics: the importance of sequencing’

