Archive for the 'Uncategorized' Category

Genomics England and the 100,000 genomes

The UK’s ambitious plan to sequence 100,000 whole genomes of NHS patients over the next 3-5 years, announced by the UK Prime Minister in December last year, sparked interest and curiosity throughout the UK genetics community. Undeterred by the enormity of the task, a new company, Genomics England Limited (GeL), was set up in June of this year by the Department of Health, tasked with delivering the UK100K genome project. Yesterday, they held what I’m sure will be the first of many ‘Town Hall’ engagement events to inform and consult clinicians, scientists, patients and the public on their nascent plans.

So what did we learn? First, let’s be clear on the aims. GeL’s remit is to deliver 100,000 whole genome sequences of NHS patients by the end of 2017. No fewer patients, no less sequence. At its peak, GeL will produce 30,000 whole genome sequences per year. There’s no getting away from the fact that this is an extremely ambitious plan! But fortunately, the key people at GeL are under no illusions about the fact that theirs is a near impossible task. Continue reading ‘Genomics England and the 100,000 genomes’

Uncovering functional variation in humans by genome and transcriptome sequencing

Dr. Tuuli Lappalainen is a postdoctoral researcher at Stanford University, where she works on functional genetic variation in human populations and specializes in population-scale RNA-sequencing. She kindly agreed to write a guest post on her recent publication in Nature, “Uncovering functional variation in humans by genome and transcriptome sequencing”, which describes work done while she was at the University of Geneva. -DM

In a paper published online today in Nature we describe results of the largest RNA-sequencing study of multiple human populations to date, and provide a comprehensive map of how genetic variation affects the transcriptome. This was achieved by RNA-sequencing of individuals that are part of the 1000 Genomes sample set, thus adding a functional dimension to the most important catalogue of human genomes. In this blog post, I will discuss how our findings shed light on genetic associations to disease.

As genome-wide studies are providing an increasingly comprehensive catalogue of genetic variants that predispose to various diseases, we are faced with a huge challenge: what do these variants actually do in the cell? Understanding the biological mechanisms underlying diseases is essential to develop interventions, but traditional molecular biology follow-up is not really feasible for the thousands of discovered GWAS loci. Thus, we need high-throughput approaches for measuring genetic effects at the cellular level, which is an intermediate between the genome and the disease. The cellular trait most amenable to such analysis is the transcriptome, which we can now measure reliably and robustly by RNA-sequencing (as shown by our companion paper in Nature Biotechnology).

In this project, several European institutes of the Geuvadis Consortium sequenced mRNA and small RNA from lymphoblastoid cell lines of 465 individuals in the 1000 Genomes sample set. The idea of gene expression analysis of genetic reference samples is not new (see e.g. papers by Stranger et al., Pickrell et al. and Montgomery et al.), but the bigger scale and better quality enable discovery of exciting new biology.
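
For readers less familiar with eQTL analysis, the core test is refreshingly simple: regress each gene’s expression level on genotype at a nearby variant. Here is a minimal sketch in Python with simulated data (the effect size and the bare, covariate-free regression are illustrative assumptions; the real analysis pipeline is considerably more involved):

```python
# Minimal eQTL test: regress a gene's expression on genotype at one SNP.
# Simulated data -- effect size and the absence of covariates are
# illustrative assumptions, not the actual Geuvadis pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 465  # matches the number of mRNA-sequenced individuals

genotypes = rng.integers(0, 3, size=n)             # 0/1/2 alternate alleles
expression = 0.4 * genotypes + rng.normal(size=n)  # expression with a genotype effect

slope, intercept, r, p_value, stderr = stats.linregress(genotypes, expression)
print(f"effect size (beta) = {slope:.3f}, p = {p_value:.2e}")
```
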
Continue reading ‘Uncovering functional variation in humans by genome and transcriptome sequencing’

Ten guidelines for tweeting at conferences

Many of the Genomes Unzipped team are spending the week at the American Society of Human Genetics meeting in San Francisco. This year the coverage of the meeting on Twitter is more intense than ever before, and social media is becoming an increasingly mainstream component of the conference. Chris Gunter, Jonathan Gitlin, Jeannine Mjoseth, Shirley Wu and I will be presenting a workshop on social media use for scientists this evening, and we prepared these guidelines for those interested in live coverage of meetings.

  1. Check the conference social media guidelines first.
    If there aren’t any, ask an organizer what the rules are. If there is no formal policy, you may want to take the initiative and ask speakers if they’re OK with their talks being tweeted.
  2. Use the right #hashtag when you tweet.
    This ensures that everything written about a meeting is aggregated in a single channel, so anyone searching for the hashtag sees all of the conference coverage in one place.
  3. Remember that people are listening.
    Twitter is a public conversation. Don’t say anything you wouldn’t be prepared to tell the speaker to their face. Also, bear in mind that your boss and potential employers may be following.
  4. Remember that people are listening who aren’t at the meeting.
    In general, leave off the conference hashtag for in-jokes and social chatter unless it’s likely to be genuinely entertaining to outsiders.
  5. Be careful tweeting new findings.
    If a speaker is presenting unpublished data, don’t write about it unless you’re sure they’re happy to share.
  6. Do your best to ensure that your tweets don’t misrepresent presented material.
    Add as much context as you can, and actively correct misunderstandings that arise about something you tweet.
  7. Add value by contributing your specific area(s) of expertise to provide insight into presented material.
    Don’t just be the fifth person to tweet the easy soundbite from the plenary; instead, explain the unappreciated but profound scientific significance of their fourteenth slide.
  8. At the same time, don’t tweet everything a speaker says.
    One to three key take-home messages per talk is usually enough, unless a presentation is particularly fascinating.
  9. Don’t swamp the hashtag by quote-tweeting everyone else.
    Use the official retweet function, or “break the hashtag” (for instance, delete the # character) in your quote-tweets.
  10. If you’re organizing a conference, be proactive with a social media policy.
    Make sure both the presenters and the audience at the meeting are aware in advance what this policy is.

Heritability and twins, yet again

Slate’s Brian Palmer has written an astonishingly ignorant critique of the use of twin studies to estimate the heritability of complex traits. Razib has a pithy response, in which he refers to the Slate piece as “a sloppy mishmash”: there’s just so much wrong with the piece (beginning with its first sentence: “One of the main messages of science over the last couple of decades is that genes are destiny”) that it’s hard to know where to start pulling it apart.

Fortunately there’s no need for a point-by-point response here: Luke wrote a lengthy response to another ignorant critique of twin studies late last year, and his cautious defense of the methodology is just as pertinent here. In addition, it’s been reassuring to see that the comments thread at Slate has been almost universally negative.

As Luke noted last year, there are some valid criticisms that can be pointed at twin studies, although none of these fundamentally undermine the value of these studies for understanding human genetics. It’s a shame that Palmer chose to ignore these substantive criticisms in favour of sweeping dismissals and eugenic slurs.
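
For readers who haven’t seen the method being attacked, it’s worth showing just how simple the classical twin calculation is. Here’s a back-of-the-envelope sketch using Falconer’s formula, with made-up correlations (modern twin studies fit structural equation models rather than using this shortcut):

```python
# Back-of-the-envelope ACE decomposition from twin correlations
# (Falconer's method). The correlations are invented for illustration.
r_mz = 0.75  # trait correlation between monozygotic (identical) twin pairs
r_dz = 0.45  # trait correlation between dizygotic (fraternal) twin pairs

h2 = 2 * (r_mz - r_dz)  # additive genetic variance: heritability
c2 = 2 * r_dz - r_mz    # shared (family) environment
e2 = 1 - r_mz           # unique environment and measurement error

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
# prints: h2 = 0.60, c2 = 0.15, e2 = 0.25
```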

Friday Links: New genes for multiple sclerosis, and a new list of DTC genomics companies

This week sees the publication of a large study of the genetics of multiple sclerosis. A consortium of 23 research groups gathered together data on nearly 10,000 MS sufferers, and discovered 29 new genetic variants that contribute to disease risk. Overall, genetic variants for MS can now explain around 20% of the overall heritability of the disease, and these variants highlight pathways that are likely to be important in the disease (such as T-helper-cell differentiation). Notably, this study is published in Nature, which is pretty rare for genome-wide association studies such as this. Perhaps related to this is the wonderful degree of detail included in the figures, such as the ancestry plots of individuals in the study. It is also surprisingly readable, containing just four pages of main article, with the nitty-gritty relegated to 100 pages of supplementary text. [LJ]
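
For those wondering where a figure like “around 20% of the heritability” comes from, here’s a hedged sketch of the standard additive-model tally. All the numbers are invented, and approximating liability-scale effects by log odds ratios is rough, so treat this as a cartoon of the real calculation:

```python
# Rough sketch of how "X% of heritability explained" is tallied up:
# under an additive model, a variant with risk allele frequency p and
# liability-scale effect beta contributes 2*p*(1-p)*beta^2 of variance.
# Frequencies, odds ratios and total heritability are all invented.
import math

variants = [
    # (risk allele frequency, odds ratio)
    (0.30, 1.20),
    (0.15, 1.10),
    (0.45, 1.08),
]

explained = sum(2 * p * (1 - p) * math.log(or_) ** 2 for p, or_ in variants)
total_h2 = 0.5  # assumed heritability from family/twin studies

print(f"fraction of heritability explained: {explained / total_h2:.1%}")
```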

The Genetics and Public Policy Center have released an updated version of their list of direct-to-consumer genetic testing companies. You can view the list as a rather user-unfriendly massive PDF matrix of companies versus diseases tested here. The list is certainly not as useful as it could be – for instance, there are no indications of test price or quality, and whole-genome sequencing companies are shown as not testing for any disease, rather than (effectively) testing for all diseases – but it would be a good starting point for a crowd-sourcing project to produce a more comprehensive database. Hmmm… [DM]

23andMe publishes new findings for Parkinson’s disease

[Figure 1 from the 23andMe Parkinson’s disease paper in PLoS Genetics]

The Genomes Unzipped members have spent a lot of time discussing their 23andMe genotyping data, so it makes sense to follow up on the recent scientific publications from this company. This new publication from 23andMe is particularly newsworthy because, while 23andMe had already reported new findings for common traits, this is as far as I can tell the first time that a direct-to-consumer genetics company has tackled a major disease. Here it is Parkinson’s disease (PD), a relatively common condition that has been a focus of 23andMe for a long time now. The study identifies two new PD loci. The authors also replicated the vast majority of published findings, confirming the validity of their approach and cementing their role as a significant player in the field of common disease genetics.

I should also mention that I was involved in a companion paper that will be published shortly in the same journal (delayed by technical issues, hopefully only for a matter of days), and therefore my enthusiasm about this study may be somewhat biased.

Continue reading ‘23andMe publishes new findings for Parkinson’s disease’

Interpretome: new online tools for analysing personal genome data

Konrad Karczewski and colleagues from Stanford have put together a very handy set of online tools for analysing personal genomic data. The tools work within your browser (Chrome and Firefox only, so the ~18% of you who continue to use Internet Explorer now have yet another incentive to change), meaning your genetic data never actually leaves your computer. They currently work with raw, unzipped data from 23andMe and Lumigenix. The tools were developed initially for use in Stanford’s pioneering Genomics and Personalized Medicine elective course for graduate and medical students, in which students had the opportunity to explore their own 23andMe or Lumigenix data. Karczewski has some background over at his personal blog.

Once you’ve pointed Interpretome to your raw data file (top right-hand corner) and assigned your ancestry you can start playing with the tools – for instance, you can calculate your type 2 diabetes risk or warfarin dose, or estimate the fraction of your genome inherited from Neandertals. A caveat: I’m writing this post without carefully checking the output from any of these analyses, so as always in personal genomics, interpret your results with caution.
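
For the curious, risk calculators of this kind generally work by updating the average population odds with a multiplicative factor for each risk genotype. The sketch below illustrates the idea with invented numbers; it is not Interpretome’s actual code, and its independence assumption is a well-known simplification:

```python
# How genetic risk calculators of this kind typically work (a sketch
# under assumptions, not Interpretome's actual code): start from the
# average population risk and multiply the odds by a per-genotype factor.
# All numbers below are invented for illustration.
population_risk = 0.08  # assumed average lifetime risk of type 2 diabetes

# Hypothetical odds ratios for your genotype at three risk SNPs,
# each relative to the population average.
genotype_odds_ratios = [1.3, 0.9, 1.1]

odds = population_risk / (1 - population_risk)
for odds_ratio in genotype_odds_ratios:
    odds *= odds_ratio  # naively assumes independent, multiplicative effects

risk = odds / (1 + odds)
print(f"estimated risk: {risk:.1%}")  # ~10.1% here, vs the 8% average
```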

I suspect one of the more popular suites of tools will be the PCA package, which allows you to place your genetic data in the context of worldwide patterns of genetic variation. Here the authors have pre-calculated the crucial information (the PCA loadings) for each SNP in the 23andMe data-set, allowing them to very quickly calculate your position in a worldwide genetic map containing thousands of individuals. Here’s my 23andMe v3 data (black square) projected onto a genetic map of Europe created with POPRES samples. The picture isn’t quite as pretty as the one in the 2008 Nature paper using the same cohort – the Interpretome team haven’t applied the same extensive filters to remove extraneous features from the data, they’ve had to work with the smaller number of SNPs that overlap with the 23andMe v3 chip, and you need to plot PC1 vs PC4 before you start seeing something that resembles a map of Europe – but it’s enough to give you a sense of where you fit. I was unsurprised to find myself sitting smack in the middle of the British cluster.
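
The projection trick deserves a quick illustration: with loadings and allele frequencies precomputed from the reference panel, placing a new genome on the map reduces to a few dot products, which is why it runs comfortably in the browser. Here’s a sketch with random arrays standing in for the real precomputed data:

```python
# Projecting one genome onto precomputed principal components -- the
# trick that avoids re-running PCA on the reference panel. All arrays
# here are random stand-ins for the real precomputed loadings,
# reference allele frequencies and your genotype calls.
import numpy as np

rng = np.random.default_rng(1)
n_snps, n_pcs = 100_000, 4

freqs = rng.uniform(0.05, 0.95, size=n_snps)  # reference allele frequencies
loadings = rng.normal(size=(n_snps, n_pcs))   # per-SNP PCA loadings
genotypes = rng.binomial(2, freqs)            # your 0/1/2 genotype calls

# Standardize with the reference panel's frequencies, then project:
# one dot product per principal component.
standardized = (genotypes - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
coords = standardized @ loadings

print("PC1, PC4:", coords[0], coords[3])  # the pair that maps Europe here
```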

Anyway, go and check it out, and send it to your friends. We’re delighted to see such a handy package released free to the public – kudos to everyone involved in putting the website together. We’ll likely be posting a more thorough review of the site once we’ve had time to test the tools out on a range of Genomes Unzipped data-sets.

Responsible and effective use of personal genomes

This is the last of three posts from panellists in the Race to the $1000 Genome session today at the Cheltenham Science Festival – this time by Genomes Unzipped’s own Caroline Wright.

As the previous posts from Clive Brown and Adam Rutherford have indicated, there has long been enormous hype and hope surrounding the human genome project and the prospect of a $1000 genome. But what do these developments really mean for the general public? What do we need to know – either as individuals or as health care providers – before we can decide whether it’s worth having a genome sequenced?

Before starting to unpick some of the issues involved in the responsible and effective use of personal genome sequences, it’s worth reviewing how, where and why someone might actually have their genome sequenced. There are currently three essentially different and non-equivalent contexts in which an individual could have their genome sequenced:
Continue reading ‘Responsible and effective use of personal genomes’

Our genetic data are now officially in the public domain

We’ve finally found the time to formally sort out licensing for Genomes Unzipped. As you can see at the bottom of this post, most content on the Genomes Unzipped website is now available under a Creative Commons Attribution-ShareAlike 3.0 Unported License, meaning that it can be reused so long as proper attribution is given and the resulting product is distributed under the same or a similar license. We’ve made an exception in one special case: all of our raw genetic data are now available under the Creative Commons CC0 public domain option. That means we waive copyright to our genetic information, so it can be used for any purpose without restriction or attribution.

One of our goals in this project is to encourage people to develop novel tools for mining information from genetic data, using our data as a testing ground. Adding the CC0 license makes it explicit that our data are intended to be a community resource: if you want to use them to test your new analysis tool, or to see how much data you can expect to get from a genetic test, or even to make an artwork, feel free.
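
To make the “testing ground” idea concrete: the raw data files are simple enough that a usable parser fits in a dozen lines of Python. Here’s a minimal sketch for the 23andMe-style format (the file name and example SNP in the usage note are hypothetical):

```python
# A minimal parser for 23andMe-style raw data files -- the kind of
# quick tool the CC0 data is meant to enable. The format is
# tab-separated rsid, chromosome, position and genotype, with lines
# starting '#' treated as comments.
def read_genotypes(path):
    """Return a dict mapping rsid -> (chromosome, position, genotype)."""
    genotypes = {}
    with open(path) as handle:
        for line in handle:
            if line.startswith("#"):
                continue  # skip header comments
            rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
            genotypes[rsid] = (chrom, int(pos), genotype)
    return genotypes

# Usage (hypothetical file name):
# data = read_genotypes("genome_data.txt")
# print(data["rs4988235"])  # lactase persistence SNP, e.g. ('2', 136608646, 'AA')
```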

Finally, a teaser: the pool of genetic data available from the group is set to expand over the next few weeks. Stay tuned…

Genomes Unzipped content is available for reuse under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

