Thursday, December 2, 2010

Article Recommendation: Glioblastoma Subtypes Defined Using Data from TCGA

After reading Verhaak et al. 2010 in Cancer Cell, I was impressed by this very good study analyzing data from an important resource for genomics research.

The authors were able to define gene signatures that distinguish 4 subtypes of glioblastoma.  The experimental design was pretty straightforward, and the results were quite clear.  Most importantly, their predictive model was trained on a relatively large set of 173 patient samples and validated on an even larger set of 260 patient samples (from 5 independent studies).
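For readers who have not worked with this kind of study design, here is a minimal sketch of what "train on one cohort, validate on an independent cohort" looks like in code. It is not the authors' actual pipeline: the data below are random placeholders, and scikit-learn's logistic regression simply stands in for whatever classifier a real analysis would use.

```python
# Hypothetical sketch only: random placeholder data standing in for real
# expression matrices; the published analysis and gene signature differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Rows = patients, columns = gene expression values (placeholder sizes).
X_train = rng.normal(size=(173, 500))   # training cohort
y_train = rng.integers(0, 4, size=173)  # 4 subtype labels
X_valid = rng.normal(size=(260, 500))   # independent validation samples
y_valid = rng.integers(0, 4, size=260)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The key design point: performance is judged on samples the model never saw.
print("validation accuracy:", accuracy_score(y_valid, model.predict(X_valid)))
```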

The study focused mostly on data provided by The Cancer Genome Atlas (TCGA).  TCGA is a database that contains various types of genomic data (gene/miRNA expression, gene/miRNA copy number, DNA sequence/polymorphism, and DNA methylation), and most or all types of genomic data are available for each patient in the database.  This provides a unique opportunity to integrate many different types of data, usually for a large number of clinical samples.  For anyone not aware of this resource, I would strongly recommend checking out the links provided above as well as the original TCGA paper (also on glioblastoma) published in Nature.

Sunday, November 21, 2010

The Personal Benefits of Self-Regulation

Although dishonest individuals do not always experience immediate repercussions for their unethical behavior, there are a number of benefits to having the self-discipline and courage to pursue an honest career that truly helps other people: the development of ethical habits early in one's professional career is likely to pay off later in life.

There can be significant long-term consequences to dishonest behavior, as shown by an increasing number of retractions of papers from scientific journals and a noticeable presence of scientific misconduct in the news.  For example, Anil Potti resigned from Duke University after it was revealed that he published forged results and included inaccurate information in his CV (such as falsely claiming that he was a Rhodes Scholar).  However, there are also a number of less drastic consequences that do not involve formal punishment for bad behavior.

Embellishing results early in one's career can make downstream research more difficult.  For example, inaccurate predictions will make it difficult to get positive results from follow-up experiments meant to validate a preliminary hypothesis.  Also, many scientific disciplines offer rotations for graduate students, and it will be more difficult to recruit top-notch graduate students if other labs can offer more interesting projects with a better experimental design.  It is difficult to maintain a steady stream of publications in a lab with few or no competent personnel.

Although scientists who forge results on a regular basis may not necessarily have to worry about downstream analysis (because all of their results are false anyways), individuals who make false claims on a regular basis are more likely to be caught by others attempting to verify important results.

Networking and social interactions will also be more difficult for individuals with a reputation for behaving dishonestly.   Even if the general public is not aware of a person's reputation, individuals  who behave unethically on a regular basis will probably have difficulties developing a close network of friends.

As my grandmother used to say, the common saying shouldn't be "practice makes perfect" but rather "practice makes permanent" because all habits (good or bad) are difficult to break.  If aspiring scientists start engaging in unethical conduct, then it will become increasingly hard to break those habits at a later stage in their career.

Tuesday, August 17, 2010

Is it worth the effort to develop personalized cancer treatments?

Robert Langreth has published a series of articles about personalized cancer treatments in Forbes Magazine (see Part I, Part II, Part III, and/or this summary in GenomeWeb).  In a nutshell, the author's main point (in the first article) is that it's very difficult to develop new drugs, and the costs of producing drugs that only help a small proportion of patients may outweigh the benefits of those drugs.

I agree with the author in that I don't think it's reasonable to expect to produce an individualized drug for every possible mutation that can cause a disease (such as cancer).  However, I think genetic studies can still help improve treatments for several reasons.

First, there are a wide variety of tools that physicians can use to help patients, and I think it is wrong to view this argument from an "all-or-nothing" point of view.  Pfizer's response in the third article also criticizes the "all-or-nothing" thinking, although they cite the need for "accumulated modest advances" while I am saying it is good to have more options.  For example, the first article mentions the need to develop better surgical methods.  I'd like to see more personalized drug treatments as well as new surgical technologies.  I imagine there will be certain circumstances where a personalized drug therapy is ideal and certain circumstances where surgery will be necessary.  If we reach the point where even 30% of patients can receive personalized drug treatments, then I think that is pretty good.

Second, genetic tools can guide the use of existing drugs (or drugs in clinical trials that were not originally designed for individuals with a specific mutation).  For example, a drug company may come very close to bringing a drug to the market, but then realize that the drug has severe side-effects for certain individuals.  If the individuals with the severe side-effects can be identified ahead of time, then the company can avoid a total loss of their research costs by targeting individuals without a particular mutation.  Furthermore, scientists can discover new functions for drug candidates (as happened with Viagra), so genetic information may be able to provide researchers with alternative uses for existing drugs (or novel drug candidates).

Finally, it's important to keep in mind that cancer is the 2nd leading cause of death (in the US).  I'm sure there is going to be a point where a mutation in a particular gene (or related pathway) is too rare to warrant developing a personalized treatment, but drugs that can decrease mortality in even 10-20% of patients may still be able to help a large number of people.

Thursday, August 12, 2010

Benefits to a 3-tier system for DTC genetic testing

The popular genetic testing company 23andMe has two rankings for genetic associations present in the scientific literature: Established Research Reports and Preliminary Research Reports.  The relatively recent GAO report on genetic testing claimed that 23andMe provided "reports that showed conflicting predictions for the same DNA and profile, but did not explain how to interpret these different results" (and the response from 23andMe can be viewed here).  However, I think this system provides a useful basis to improve education about genomics research and genetic testing.  In fact, I think it would be even better to provide a 3-tier system to describe genetic associations.

In particular, I think genetic associations can be classified as "Based upon Preliminary Evidence," "Based Upon Reproducible Evidence," or "Therapeutically Useful."  In this case, I would consider "Reproducible" associations to be equivalent to 23andMe's "Established Research Reports."  I would consider "Therapeutically Useful" associations to be those that have been proven useful in terms of significantly reducing patient mortality or morbidity.
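To make the proposal a little more concrete, here is a small hypothetical sketch of how a testing company might tag each reported association with one of the three tiers. The class names, fields, and example record are all my own illustrations, not an existing product's data model.

```python
# Hypothetical illustration of the 3-tier labeling scheme described above.
from dataclasses import dataclass
from enum import Enum

class EvidenceTier(Enum):
    PRELIMINARY = "Based upon Preliminary Evidence"
    REPRODUCIBLE = "Based upon Reproducible Evidence"  # ~ 23andMe "Established Research Reports"
    THERAPEUTIC = "Therapeutically Useful"             # proven to reduce mortality or morbidity

@dataclass
class GeneticAssociation:
    variant: str
    condition: str
    tier: EvidenceTier
    publications: list[str]  # so customers can read the primary literature themselves

report = [
    GeneticAssociation("rs1234567", "example condition",
                       EvidenceTier.PRELIMINARY, ["PMID:00000000"]),
]
for assoc in report:
    print(f"{assoc.variant} ({assoc.condition}): {assoc.tier.value}")
```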

The "Therapeutically Useful" classification is important because there could be many reasons why individuals with a particular mutation may have a increased risk of dying from cancer that is statistically significant, but information about that particular mutation may not be important for informative for making medical decisions.  For example, models for genetic predisposition may be complicated for certain diseases, and it may be necessary to incorporate currently unknown information about other mutations in order to provide an accurate estimate of risk to develop a given disease.   Also, there are cases where environmental factors may be more important than genetic factors.  And the list goes on.

In practice, I think the FDA could play a role in helping define the third category of genetic tests, and I can think of at least two ways to implement this.  First, genetic testing companies could post some sort of "FDA-approved" icon for tests that have been shown to produce positive results when applied in a clinical setting.  Second, the FDA could post a listing of genetic tests that have been proven useful through clinical trials and provide a list of appropriate treatments that correspond to a given test result.

There are benefits to having access to genetic information that doesn't necessarily meet the criteria for a "Therapeutically Useful" test.  For example, there may be no "FDA-approved" diagnostic for a particular problem and an experimental prediction may be the best possible resource.  Furthermore, the FDA has indicated that it does not see a need to regulate the release of "raw genetic information," and it is acceptable to communicate information about genetic associations through other means.  For example, the FDA would never censor an article in the New York Times about a new discovery or preliminary result.  Genetic tests that are "Based upon Preliminary Evidence" or "Based Upon Reproducible Evidence" are basically indicating that "Hey, you have this mutation that has been described in the scientific literature."  If it is too confusing to provide separate predictions for each tier of genetic associations, then genetic testing companies should at least be able to provide a list of publications describing a mutation of interest (and possibly provide a brief summary of the findings).

Information from DTC genetic testing can also fundamentally increase understanding about human genetics and contribute to scientific research, as indicated by 23andMe's publication in PLoS Genetics.  In fact, 23andMe could actually help establish "Therapeutically Useful" tests if customers could upload clinical information that is directly incorporated into their models.  Likewise, companies like PatientsLikeMe could help test the therapeutic value of genetic tests if patients could upload their genetic information.

In general, I think individuals should take as much time as they reasonably can to research a topic prior to making a life-altering decision.  Even if a diagnostic is 98% accurate, what if you happen to be in the minority that gets a false-positive?  Even well-established tests can have false positives.  Important information can be gained from independent tests, consulting with a physician or genetic counselor (or getting second opinions from multiple professionals), or even talking to friends who might have gone through similar situations.
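To see why a "98% accurate" test can still mislead, consider a quick Bayes' theorem calculation. The sensitivity, specificity, and prevalence below are invented purely for illustration: the point is that when a condition is rare, most positive results can still be false positives.

```python
# Illustrative only: positive predictive value of a "98% accurate" test
# when the condition affects 1 in 100 people (all numbers are made up).
sensitivity = 0.98   # P(test positive | condition present)
specificity = 0.98   # P(test negative | condition absent)
prevalence  = 0.01   # P(condition present)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(condition | positive test) = {ppv:.1%}")  # ~33% -- most positives are false
```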

Although I understand that a "3-tier" system for genetic testing may be confusing for people at first, I think this system could be a useful tool to educate the public about genetic testing and encourage individuals to take a more active role in making medical decisions.  In fact, a recent post by John Timmer concluded with the suggestion that heavy regulation of the DTC testing industry will probably not be necessary if a sufficiently large proportion of the general public takes the initiative to better educate themselves.

Sunday, June 27, 2010

Paper on Microarray Analysis of "Watchful Waiting" Prostate Cancer Cohort

Last week, I noticed an interesting paper that was published in BMC Medical Genomics this March.

The authors of this paper wanted to use microarrays to develop an effective prostate cancer diagnostic defined by gene expression patterns.  More specifically, the authors were studying tissue samples taken from the Swedish "Watchful Waiting" cohort.  This large collection of patients developed prostate cancer between 1977 and 1999.  The length of follow-up time for the clinical data recorded in this study is significantly longer than in any other attempt to develop a microarray-based prostate cancer diagnostic.  In some cases, clinical information about individuals in this cohort was recorded over 2 decades before the microarray was even invented.

There were two aspects of this study I found particularly interesting.  First, it is pretty rare to find a cohort studied as carefully as the Watchful Waiting cohort.  Second, the authors concluded that "none of the predictive models using molecular profiles significantly improved over models using clinical variables only."
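As a rough illustration of the comparison the authors describe, here is a hedged sketch of fitting a clinical-variables-only model and a clinical-plus-expression model and comparing their cross-validated performance. The data are random placeholders and the feature names are my own assumptions; this is not the study's actual analysis.

```python
# Hypothetical comparison: clinical-only model vs. clinical + expression model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
clinical = rng.normal(size=(n, 3))       # e.g., age, tumor grade, stage (placeholders)
expression = rng.normal(size=(n, 100))   # gene expression features (placeholders)
outcome = rng.integers(0, 2, size=n)     # lethal vs. indolent disease (placeholder)

auc_clinical = cross_val_score(LogisticRegression(max_iter=1000),
                               clinical, outcome, scoring="roc_auc", cv=5).mean()
auc_combined = cross_val_score(LogisticRegression(max_iter=1000),
                               np.hstack([clinical, expression]), outcome,
                               scoring="roc_auc", cv=5).mean()
print(f"clinical-only AUC: {auc_clinical:.2f}, clinical+expression AUC: {auc_combined:.2f}")
# The study's conclusion corresponds to the second number failing to beat the first.
```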

The findings of this study seem to agree with an earlier post where I mentioned two earlier studies showing that GWAS data did not significantly improve risk models for heart disease and type II diabetes.  Although those studies utilized fundamentally different tools for analysis (the earlier studies looked at genomic sequence whereas this newer study examined gene expression patterns), it was interesting to see examples of cases where genomic technology has not been able to improve upon existing clinical diagnostics.

Of course, these studies leave the reader asking several important questions.  For example, why do these large studies produce negative results?  How long will it take for genomic research to make substantial impacts on clinical diagnostics and therapeutics?  What are the practical limits for developing applications based upon medical genomic research?

I'm not going to pretend that I know the answers to all of these questions.  Although I'm certain that genomic research will ultimately prove disappointing for some major studies, this paper did provide some hope that genomic research can still pave the way for future breakthroughs.

For example, the authors discuss how there is significant heterogeneity within and between prostate cancer samples - expression patterns in one region of a given tumor can be significantly different from other regions of that same tumor, and this makes it especially difficult to compare gene expression patterns between different tumors.  It is also important to determine the optimal time to take tissue samples for analysis; samples taken too early may not yield clinically useful information, and feasible treatments may no longer exist when a diagnostic is applied during a late stage of cancer development.  The authors also point out that several other diagnostic microarray studies resulted in similar lists of prostate cancer biomarkers.  In other words, microarray analysis can probably yield reasonably accurate results - the problem is that the biomarkers aren't a significant improvement over current diagnostics.

I find it encouraging that the authors have a plausible explanation for their negative results and that independent microarray studies have come to similar conclusions, and I continue to be hopeful that genomics research can help achieve important medical breakthroughs in the future.

-------

FYI, Nakagawa et al. 2008 is also an excellent prostate cancer study utilizing microarray data.

Monday, June 21, 2010

Seth Berkley's TED Talk on HIV/Flu Vaccine Development

Seth Berkley's recent TED talk focuses on HIV and influenza vaccine research.  In general, I think the talk does a good job of reviewing why it is so hard to develop an AIDS vaccine or a universal flu vaccine.  However, there were times when I thought that Dr. Berkley was overselling the research results.

For example, Dr. Berkley says "let's take a look at a video that we're debuting at TED, for the first time, on how an effective HIV vaccine might work."  Now, I think the video does do a very good job illustrating the general principle of how vaccines work, but it does not provide any specific details regarding how an effective HIV vaccine can be developed.

At another point in the video, Dr. Berkley suggests that a universal flu vaccine can be created by designing vaccines that target conserved regions on the surface of the influenza virus.  These proteins would be located in roughly the equivalent of the blue region of the following 3D rendition of a flu virus (from http://johnfenzel.typepad.com/john_fenzels_blog/images/flu_image.jpg):


As you might imagine, these conserved proteins have not been used as vaccine targets because scientists believed that the immune system would not respond well to them: the H and N spikes (the green and yellow structures in the picture above) would block most antibodies produced during the immune response.  Judging from a quick search of the internet, it seems like most images agree with the picture shown above (and in the TED talk).

To be fair, I found one example of a flu virus with less densely packed surface proteins, and the candidate proteins (M2e proteins) may be large enough to clear enough room to interact with the host antibodies.  However, I fear this new vaccine design may be based on data that shows encouraging results during pre-clinical research but is not very effective during clinical trials.  That said, I would obviously be pleasantly surprised if this design does lead to a successful universal flu vaccine, and I honestly do think Dr. Berkley does a good job of broadly describing how new technology can aid in rational vaccine design.

I also thought that Dr. Berkley did an excellent job describing how changes in vaccine production could significantly increase the effectiveness of flu vaccines.  Namely, Dr. Berkley points out that flu vaccines have been produced from chicken eggs ever since the 1940s.  Different flu strains vary in their ability to grow in chicken eggs, and production of flu vaccines using chicken eggs takes "more than half a year."  Dr. Berkley proposes a method that would allow companies to produce flu vaccines in E. coli.  I think this is an excellent strategy that could significantly improve the process of vaccine development.

I think it is also worth mentioning that Dr. Berkley does acknowledge how hard it is to predict the future of vaccine development.  When asked to give a time line for an effective HIV vaccine, Dr. Berkley responds "everybody says it's 10 years, but it's been 10 years every 10 years."  In general, it is important for people to interpret preliminary research findings with a grain of salt.

Overall, I think Dr. Berkley does a good job providing an interesting talk about a very important subject.

Tuesday, May 18, 2010

Should the FDA regulate direct-to-consumer genetic testing?

Last week, Walgreens reversed its decision to provide "spit kits" for Pathway Genomics' genetic tests due to a letter from the FDA requiring Pathway Genomics to either get FDA approval or explain why they are exempt from approval.  This week, CVS made a similar decision to postpone selling the Pathway tests.

Recently, I have noticed a number of blog posts that seemed to side with the FDA.  For example, 80beats has criticized the Pathway Genomics tests in terms of usefulness, legality, and unpredictable public response.  Genomeboy has complained that Pathway Genomics is not being very transparent in terms of explaining their analysis (especially in comparison to 23andMe).

I agree that a lack of transparency and usefulness would be serious problems for genetic tests.  As mentioned in my first blog post, there are indeed many problems with the current accuracy of genetic tests (in terms of discrepancies between companies, incorrect prediction of clearly known characteristics such as eye color, etc.).  I think it is also essential that companies provide you with your SNP data so that you can try to seek alternative opinions regarding how to interpret the genetic test.  I was also very disturbed that the CSO of Pathway Genomics claims that "[Pathway Genomics] don't feel that [they are] practicing medicine" even though they wish to sell genetic tests in drugstores.

That said, I think FDA regulation should only be used as an absolute last resort.  First, I think many other criticisms are not warranted.  For example, the Wall Street Journal has a nice article about how the negative public response to genetic tests has been exaggerated.  Second, I think it will be valuable for consumers to have access to objective analysis of the accuracy and usefulness of commercially available genetic tests (conducted through either public or private means), but I don't think that necessarily has to be done through FDA regulation.  For example, a large database (perhaps something like PatientsLikeMe) of genetic test results, medical history, and lifestyle changes can help provide the necessary information to consumers.  It will take a significant amount of time to thoroughly examine these genetic tests, and I think this analysis can be conducted much more quickly (and perhaps even better) if the public has direct access to the tests.

I would be OK with some sort of warning label stating that the tests are not completely accurate and that other analysis should be taken into consideration when making medical decisions, but I think it would be too extreme to completely remove these tests from the market.

Friday, May 7, 2010

What defines a "good" baby?

Yesterday, I noticed this New York Times video showing that babies would prefer "good" puppets after viewing puppet shows with "good" and "bad" characters.  The conclusion was that infants may already have the ability to determine right and wrong behavior.

I think this study could be interesting insofar as it shows that babies may have the ability to make abstract connections and infer which individuals are likely to be helpful without directly interacting with them.  For example, it is one thing for the baby to learn that it can get fed if it cries.  It takes an additional cognitive step to be able to guess that somebody is likely to help you without ever being helped by that person in the past.

However, I do not think this study really says anything about the origins of "good" or "bad" behavior in the baby itself.  Having the ability to determine that somebody is friendly does not necessarily mean that you have a conscience or a desire to emulate that behavior.  Con artists can manipulate individuals who have a desire to help other people.  Serial killers have lured their victims into cars by pretending to be weak or injured and asking for assistance.  In other words, I do not think the behavior shown by the babies in the study is a reliable indicator of "good" behavior later in life.

That said, I don't believe that moral behaviors absolutely must be learned from parents or other moral authority figures.  If I recall correctly, there have been other studies with babies that actually require the babies to perform truly altruistic actions.  I'm also certain there are several situations where individuals would be "nice" based on rational decision making (even in the absence of the individual's ability to empathize with other people).  I just disagree with the conclusions drawn from this specific study.

Thursday, April 22, 2010

Obligations associated with publicly funded research

When I was reading this Effect Measure post, I was reminded of the on-going effort to make all federally funded research available to the public.

I certainly believe that research funded by the public should be considered property of the public.  In fact, I think this should also apply to patents.  If public funds are used to discover a novel therapeutic or diagnostic, then I don't think that product should be subject to considerable mark-up to recover research costs since the public has already paid for the initial investment and has had to pay the costs for all the biomedical research that did not bring new products to the market.  I think a similar logic should apply to discoveries that are made using funds from non-profit organizations (such as March of Dimes, etc.).

Of course, there is going to be a messy gray area where part of the discovery was made using public funds (perhaps the discovery of a genetic association, for example) while some of the funding was provided through private means (a potential example might be the costs of clinical trials).  I should also make clear that I'm not against the use of patents - I just think that we need to make sure that the public is not being double charged for research costs.  Perhaps this could be implemented by setting a cap on the percentage of the price that is allowed to go towards profits for any product directly resulting from research that was conducted using public or non-profit funds.  I think this is a moral obligation of scientists, and I think this could help limit the escalating costs of health care.

Tuesday, April 13, 2010

Do patients report symptoms better than physicians?

There is a very interesting article in the New York Times today that discusses how doctors need to pay more attention to patient complaints.  For example, the author opens the article discussing how she stopped taking Bextra after she started to develop a large red blister on her tongue.  Her physician said the symptom was most likely a coincidence, but the drug was taken off the market shortly thereafter because it caused dangerous side effects (including mouth blisters).  Although it is impossible to say what caused the mouth blister in this case, the author does a good job of demonstrating that the doctor should have taken her complaint more seriously.

I found this article to be especially exciting because it emphasizes that patients can directly provide powerful information for evaluating medical treatments and diagnostics.  This reminds me of the success of medical databases based upon information provided directly from patients, such as PatientsLikeMe.  Interestingly, the article also describes a side effect database from the FDA called MedWatch, which allows doctors and patients to report negative symptoms experienced with various treatments.  I was very excited to see that even the FDA appreciated the value of information reported directly from patients.  I think a system similar to MedWatch could significantly improve the process of conducting clinical trials.

Of course, I should emphasize that the article is not trying to say that medical information should only be based upon information from patients.  Physicians definitely need to play a role in assessing medical treatments.  However, I think databases of patient feedback can be a powerful tool that can help reshape health care and drug development.

Wednesday, April 7, 2010

Book Review of “The Decision Tree”

“The Decision Tree” is an excellent book about personalized medicine that inspires readers to take a more active role in their health care. “The Decision Tree” is written by Thomas Goetz, who is a popular science writer and executive editor of Wired. He also has an MPH from UC Berkeley.

I’ve also found lots of videos with Thomas talking about personalized medicine (including a TED talk and a FORA.tv talk).  If you can’t get a hold of his book, I would recommend at least checking out some of these videos.

There is too much information in this book to relay in a single blog post. Therefore, I will only summarize the most interesting take-home messages. I’ll also provide a list of useful web-based tools that I learned about from this book.

Selected Take-Home Messages:

1) “Control over destiny” is important for your health - The Whitehall II study showed that social status was the strongest risk factor for heart disease. Even after adjusting for known risk factors for heart disease, people with low social status still had more than twice the risk of dying of heart disease. Thomas says that lack of control by itself can cause stress sufficient to lead to chronic illness. The need for people to take an active role in maintaining their health is emphasized throughout the book.

2) Tracking your own health statistics is a valuable form of preventative medicine – Thomas discusses how self-monitoring with constant number crunching can work like a self-imposed Hawthorne Effect. Some success stories include Weight Watchers and Nike+.

3) Development of a drug facts box can help patients decide which medications they wish to take – Similar to the nutrition facts label currently on food, this would allow individuals to quickly assess the effectiveness and associated side-effects of a specific drug. I think this could significantly improve consumer knowledge about drug treatments, and I am glad to hear that the FDA is considering a recommendation to require a drug facts box on pharmaceutical labels.

4) Poor market incentives and CT scans – Although a relatively recent study showed that 85% of nodules discovered via CT scan were stage I lung cancer and that 92% of the 375 patients who had these tumors removed were still alive 10 years later, a follow-up study showed that the mortality rate for those receiving CT scans does not significantly differ from those without CT scans, because CT scans primarily discover slow-growing, non-lethal tumors. This is especially important because the surgery associated with removing lung cancer has a 2-5 percent mortality rate by itself (so you don’t want to undergo surgery unless it is really necessary). Thomas also discusses how CT scans are an anomaly in that they are a technology that does not follow Moore’s law, meaning that CT scans have actually become more expensive over time.  He attributes this market failure to a lack of price transparency (you can’t shop around, and different companies in the same area may pay significantly different prices for CT scanners), the ability to pass costs to patients and insurers (no incentive to avoid giving needless tests), and a lack of automation (you can’t run a CT scan without a trained radiologist). I think these examples all speak to the need for a health care system that does not encourage needless testing and can correct at least some of these market failures that are driving up healthcare prices.


Health Care on the Web:

1) PatientsLikeMe – as mentioned in my previous post, PatientsLikeMe contains a database of information provided directly from patients and serves as a community for individuals to discuss symptoms and treatments associated with various diseases.

2) CureTogether – similar to PatientsLikeMe, but without "expert advice" to guide data analysis. Thomas describes advantages and disadvantages to this system.

3) Adjuvant! – also similar to PatientsLikeMe, but this is a database where physicians share information that cannot directly be accessed by patients. This database is specifically intended to help design combination treatments for cancer patients.  I think this could potentially be a useful model for assessing the effectiveness of diagnostics and treatments when more formal restrictions are required (such as FDA approval).

4) NIH Gene Test Website – provides a list of currently available genetic tests.

5) UT-San Antonio Prostate Risk Calculator - 75% of men over 80 have some form of prostate cancer, but less than 5% of those will actually die from prostate cancer. Therefore, PSA test results should be weighed with other risk factors before deciding to follow an intensive treatment for prostate cancer.

Friday, April 2, 2010

Why Do Genomic Cancer Diagnostics Cost So Much?

After reading the introduction to this PLoS ONE article, I started to wonder why there are several published microarray expression profiles for cancer progression yet relatively few microarray-based diagnostics used in a clinical setting. Although this PLoS ONE paper focuses on analysis of ovarian cancer (and mentions the lack of a clinical microarray diagnostic for ovarian cancer), the paper also cites the current use of a breast cancer diagnostic called MammaPrint.

After reading the wikipedia entry on MammaPrint, I was surprised to learn that it took 5 years for the diagnostic to reach the market following the initial publication showing that the expression profiles for a set of 70 genes could successfully predict cancer progression. This information is important because more aggressive treatments early in cancer progression may be able to help cancer patients who would otherwise have a high mortality rate (as predicted by their gene expression profile). I was also surprised to learn the high price of both MammaPrint and its competitor Oncotype DX. Although I do not think I can provide a complete answer to why these prices are so high, I would like to first demonstrate that the price of these tests far exceeds the cost of conducting the test and then discuss how I think these costs could be offset by decreasing the amount of time and effort that it takes to bring a medical diagnostic tool to the market.

Based upon their wikipedia entries, the MammaPrint diagnostic costs $4,200 and the Oncotype DX test costs $3,978. To give you an idea of how much it actually costs to carry out this kind of test, a full service microarray analysis (including labor and data analysis) of an Agilent Whole Human Genome Microarray costs $350 for on-campus customers at the UT-Southwestern Microarray facility. This is comparable to the cost at most of the microarray facilities that I have worked with, and Agilent produces high quality microarrays. Now, most laboratory kits have a warning that they are “intended for research purposes only,” and this is probably true for the human Agilent array. However, I think this warning is mostly to avoid litigation and not due to a severe lack of technical accuracy, and I expect the actual cost for a clinical microarray test to be in the hundreds (not thousands) of dollars. I’m sure that this high price is the product of a combination of factors, such as research costs, legal costs, patent law, and the US healthcare system. However, I’m going to focus on ways to potentially cut research costs because that is the area that I know most about.
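To put the gap between list price and bench cost in perspective, here is the back-of-the-envelope arithmetic, using the list prices from the Wikipedia entries and the $350 on-campus full-service rate quoted above. It is a rough comparison only, since the clinical tests also carry validation, regulatory, and overhead costs that a research array does not.

```python
# Rough markup estimate using the prices quoted above.
list_prices = {"MammaPrint": 4200, "Oncotype DX": 3978}
full_service_array_cost = 350  # quoted on-campus rate for a full-service research array

for test, price in list_prices.items():
    print(f"{test}: ${price} list price, roughly {price / full_service_array_cost:.0f}x "
          f"the cost of a full-service research array")
# MammaPrint comes out around 12x, Oncotype DX around 11x.
```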

Now, I want to make clear that the initial publication of a potential diagnostic test is not sufficient to prove the widespread effectiveness of that test. For example, the microarray test for ovarian cancer in the PLoS ONE article had substantially better predictive power on the training dataset than when applied to a new dataset. So I do think follow-up studies were necessary to prove the effectiveness of MammaPrint.  However, I still don’t think it should have taken 5 years to test the effectiveness of this diagnostic, and I think effectiveness can be determined with less government regulation.

Before MammaPrint could be put into widespread use, it had to gain FDA approval. This required multiple verification studies, and this is the crucial event that defines the 5 year gap between initial publication and availability on the free market. First off, I don’t think FDA approval should be necessary for diagnostics. I do think physicians need some way to quickly assess the effectiveness of a medical diagnostic and/or therapeutic, but I think there are better ways to determine the effectiveness of a given treatment. For example, a relatively recently posted TED talk by Jamie Heywood discusses how his start-up PatientsLikeMe, developed by three MIT engineers, can assess medical treatments more quickly and effectively than clinical trials. This website analyzes a database of information provided by patients, and therapeutic effectiveness can be assessed immediately based upon currently available data. At the very least, I think this company could be an excellent model for a more formal system using data from physicians that does not carry all the restrictions of a clinical trial. These changes should decrease the cost of medical care: companies claim that these price markups are necessary to recoup the costs of research and development, and a more streamlined process for assessing the effectiveness of treatments would decrease those research costs.

Wednesday, March 24, 2010

Democratizing Scientific Research

Today, I came across a very interesting blog post by Christina Agapakis (via a GenomeWeb article). This post focuses on efforts to "democratize scientific research.” For the most part, the article discusses a post on another blog and the efforts by a group called DIYbio, which stands for “do-it-yourself biology.” She also emphasizes the need to make formal changes in the way institutions conduct scientific research, and I particularly liked some of her suggestions for improvement:

"What if there were more opportunities for high-paying technical jobs in science for people without advanced degrees? What if there were more biotech vocational programs to learn the skills you would need to work in these jobs? What if it were easier and cheaper for groups of scientists and engineers everywhere to turn ideas and hypotheses into technology and knowledge? What if there were real ways for knowledge to become power for that kid living in South Central LA?"

In regard to the issues brought up directly in her post, I think Christina is a little too harsh on DIYbio. I do agree that the organization's efforts are probably not sufficient to help intelligent individuals “[living] in a community plagued by violence and poverty” reach their full potential, and it is probably reasonable to assume that most of the participants are “white, middle class, and primarily male.” However, I think DIYbio positively contributes to the production of new ideas and products that would not otherwise be feasible. To be fair, Christina does say that DIYbio has helped promote “scientific participation and enthusiasm” and could be a useful model for developing programs for underprivileged individuals, but her opinions about DIYbio are mostly negative. She also claims that DIYbio perpetuates “the myth of the Victorian Gentleman Scientist,” who “[pursues] a 'pure' science not because of an interest in money and free of any state control but because of a deep curiosity with the power of the natural world.” I certainly agree that the Gentleman Scientist is an undesirable model for scientists that is unlikely to produce practical research and does not provide a reasonable venue for research for anyone without prior financial security. However, I honestly didn’t get that vibe from reading over the DIYbio website, although I should admit that my knowledge is limited because I hadn't heard of DIYbio before today.

In general, I am a big supporter of anything that removes politics from scientific research because I think personal and professional bias compromises the objective judgment that is so critical to good science. I think this requires fair assessment of scientific progress that emphasizes the impact of a specific result (and the effectiveness of the methods to achieve that result) and limits the role of authority as much as possible. For example, I think it would help if the peer-review process for scientific journals was a double blind process. Currently, the reviewers know the author names and institutional affiliations, but the authors do not usually know the identity of their reviewers. I know of some exceptions to the latter case but not the former (please correct my ignorance if you know of some examples). Of course, this would not be a perfect solution. Some reviewers may be able to recognize the work of established scientists from the biological systems and methods employed in previous publications, and well-connected scientists will probably be friends with some reviewers and can give them a heads up that they have submitted an article for peer review. This is slightly tangential to the topic of DIYbio, but my ideas are relevant to the broad concept of "democratizing" scientific research by formal changes in institutional policies.

There is also a very interesting (and long) response to Christina’s post that does a good job discussing how “[it] is entirely possible to develop high level expertise outside of the formal system” and also noting that certain avenues of research (like “programming and electronics”) are much more feasible for the everyday scientist than research in biology or chemistry. Materials for biological research can be harder to acquire and more prone to regulation. For certain types of biological research, such as biomedical research, it is also relatively difficult to bring therapeutics to the market without a formal research position, in part due to the need to run clinical trials and gain FDA approval. The author of the comment specifically mentions over-regulation. I agree that some regulation is definitely unnecessary and hinders scientific progress (for example, I personally think the FDA should only limit the use of drugs due to toxicity without also considering effectiveness, and instead allow physicians to decide on their own if the treatment is effective and superior to alternative methods), but I do think that there is some rationale for not allowing everyone access to stuff like anthrax.

In short, I think informal biology research is a good thing that can benefit society (as acknowledged in Christina’s concluding paragraph), and I also think it would also benefit society to employ formal institutional changes to encourage individuals from varied backgrounds to conduct scientific research.

Wednesday, March 3, 2010

Who should be responsible for genetic counseling?

Today, I read a GenomeWeb article regarding a debate on whether Myriad Genetics’ BRCA test (BRACAnalysis) should be interpreted by primary care physicians or genetic counselors. Myriad claims that primary care physicians can and should interpret the test results, but critics claim this can result in inaccurate interpretation of test results.

Certain mutations in BRCA1 or BRCA2 genes can lead to an increased risk of developing breast and/or ovarian cancer. I am trying to keep track of notes about BRCA risk here, and prevalence of high risk cancer genes here.  So, I welcome input from others, and I have modified some of the content in this paragraph since the original post.

At least one survey from Medco indicates “most [doctors] believe that personal genomic information can be useful in their care for patients and help them make treatment decisions [but] the majority said they do not know enough about such tests.” In contrast, a different survey reports that “community-based physicians appeared to be successful incorporating BRCA1/2 testing into their practices.” However, a majority of these doctors utilized the assistance of some sort of genetics expert when making decisions about patient care. Myriad also emphasizes the paucity of genetic counselors, but critics have pointed out that Myriad fails to inform doctors that telephone-based genetic counseling is available and, in fact, required by certain insurance companies such as United Healthcare and Aetna.

I do think patients should be given the option of talking to a genetic counselor, and I think most physicians would benefit from at least informally discussing the details of a specific genetic test with a genetics expert. In general, patients should always seek out second opinions if they do not feel comfortable with their physician’s advice, and I would also advocate that patients conduct some independent research. For example, the National Cancer Institute provides a lot of useful information on BRCA1/2.

More than 10 years after the original post, I think some of my impressions have changed.  However, I still believe that changes in medical training and public education about the role that genetic counselors can play in making medical decisions is important. I also still hope that legal intervention may not be necessary to improve the interpretation of medical diagnostics.

Change Log:

[post not created with change log]

4/23/2021 - add links to newer posts with notes about estimating BRCA risk / prevalence (since I think the numbers in the original post might have given the wrong impression).

Also, change concluding paragraph, which originally included this link, where I think I down-played the importance of oversight (and I myself have certainly been submitting FDA MedWatch reports for multiple more recent genomic results).

Friday, February 26, 2010

Temple Grandin's TED Talk on Autism

After listening to Temple Grandin’s TED talk, “The world needs all kinds of minds,” I spent a lot of time learning more about Temple and her accomplishments. As a child, Temple Grandin suffered from severe autism, but as an adult she became famous for developing more effective and humane protocols in the cattle industry, and she became a prominent speaker helping others understand autism. She posits that her acute visual thinking (and hampered verbal abilities) helped her gain insight into the animal mind and that this is the cause of her success in the cattle industry.

In addition to her TED talk, I also listened to the BBC special “The Woman Who Thinks like a Cow.” I would strongly recommend watching this video if you want to gain some insight into Temple’s life and accomplishments. I also plan to check out one of her books sometime in the near future.

I think Temple does a fantastic job of accomplishing two things: 1) helping the audience understand how autistic people think and 2) emphasizing the need for individuals with “autistic-like” traits and encouraging those with “unique [minds]” to find jobs that suit their abilities.

For reasons of brevity, I will simply encourage you to watch the TED talk if you want to gain insight into how an autistic mind works. However, I would like to think more carefully about Temple’s second point.

Temple emphasizes that society needs different types of thinkers for different types of jobs. For example, she often remarks that an autistic mind is well-suited for a job in Silicon Valley. Now, I do agree that we should be aware that people (such as highly functioning autistic individuals) can be very talented at certain things but bad at other tasks. However, I think she occasionally takes this argument a little too far.

Namely, I think Temple overestimates the role of autistic individuals in society and underestimates the tragic life of severely autistic individuals. Prior to this TED talk, she claimed “[society] would still be socializing around the fire if it was not for autistic individuals.”  When questioned about this claim during the question and answer session after her TED talk, she responded “Who do you think made the first spears? An Asperger guy.” While it is true that many highly functioning autistic individuals can have a heightened sense of perception and analytical abilities, it is not safe to assume that social skills are always antagonistic to analytical thinking. There are many individuals who have both verbal and visual talents. For example, a good physician needs both analytical and communication skills in order to succeed at his or her job. Although I agree that most people have a dominant style of thinking, overall aptitude varies significantly between individuals, and it is not safe to assume that verbal and visual skills are mutually exclusive. To be fair, Temple does mention that about one-half of autistic individuals will never learn how to talk and therefore cannot maintain a job and independent lifestyle. For this reason, I question Temple’s broad claim that autism genes are good for society. Autism is a spectrum of disorders, and severe autism devastates the lives of many individuals. Even Temple has to take antidepressants in order to carry out everyday functions (as I learned from the BBC special). Therefore, I think it is good to have individuals with a “brush” of autism, but I also think that society would benefit from the development of novel therapeutics that could correct the developmental delays in communication associated with autism and/or genetic diagnostics that can help predict if a child is likely to develop a case of severe autism.

Tuesday, February 23, 2010

Traditional versus Genetic Risk Factors

Yesterday, I started reading this GenomeWeb article discussing the utility of genome-wide association studies (GWAS) in testing drug safety. This article largely agreed with my earlier post. In fact, the article mentions the utility of genetic diagnostics for prescribing abacavir (Ziagen, an HIV drug), as described in “The Language of Life.” The article also mentions new studies to determine genetic risk factors for flucloxacillin (an antibiotic) and clozapine (an antipsychotic). In general, this article provides good support for my earlier claim that drug sensitivity is the most exciting area of GWAS research.

However, the article also mentions that scientists may have overestimated the strength of GWAS in predicting risk to diseases such as heart disease and type II diabetes. This slightly contradicts my most recent post where I claim that genetic predictors of type II diabetes are probably “pretty good.” Therefore, I decided it might be worthwhile to reevaluate my claims.

I think the recent type II diabetes study is well designed, and I was surprised by the results. The authors calculate multiple models for traditional (e.g. age, sex, family history, waist circumference, body mass index, smoking behavior, cholesterol levels) and genetic risk factors. They found that traditional models identified onset of type II diabetes with a 20-30% success rate (at a 5% false positive rate), while the gene-based model had only about a 6.5% success rate (at a 5% false positive rate). The GenomeWeb article also mentions an interview with the senior director of research at 23andMe (a genetic diagnostic company), who claims that some rare variants may impart especially high risk to certain individuals and that genetic tests should therefore complement risk assessment using traditional risk factors. Readers should also take a look at Web Table B of the diabetes study, which calculates the predictive power of individual variants. For example, TCF7L2 had one of the strongest associations with type II diabetes in the GWAS Catalog, and this gene did in fact have a significantly higher rate of incidence of diabetes for individuals with a certain set of variants. However, the risk of developing diabetes only increased from 5.4% to 8.6%. Therefore, I think these time-consuming diagnostic studies (lasting 10+ years) are unfortunately essential to determine the practicality of genetic tests for diseases whose genetic basis is either complex or unclear.
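Spelling out the TCF7L2 arithmetic quoted above makes the relative-versus-absolute risk distinction clearer (the 5.4% and 8.6% figures come from the study's supplementary table, as cited above; the interpretation in the comments is mine):

```python
# Relative vs. absolute risk for the TCF7L2 example quoted above.
baseline_risk = 0.054   # risk of developing type II diabetes without the risk genotype
carrier_risk  = 0.086   # risk with the high-risk set of variants

relative_risk = carrier_risk / baseline_risk
absolute_increase = carrier_risk - baseline_risk
print(f"relative risk: {relative_risk:.2f}x")         # ~1.6x sounds dramatic...
print(f"absolute increase: {absolute_increase:.1%}")  # ...but is only ~3.2 percentage points
```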

On the other hand, I don’t think the results of the heart disease study are very surprising. After checking the GWAS Catalog for genetic associations with myocardial infarction and stroke (the diseases examined in the study), I only found a small amount of information on these diseases. There was only one study on myocardial infarction, which only had one significant variant (rs10757278-G, p-value = 1 x 10^-20), and two studies on stroke with weak associations (p-values between 9 x 10^-6 and 1 x 10^-9) and no overlapping associations between the two studies. The heart disease study also used a gene model with 101 variants, which is much larger than the number of significant associations listed in the GWAS catalog (unless the authors considered diseases not explicitly described in the abstract). Unlike the diabetes study, the authors did not provide a table describing how accurately individual variants could predict the onset of heart disease.

Of course, genetic models could improve when novel variants are discovered and/or better models for complex diseases are developed. I also think genetic models would naturally be more accurate for simple diseases that are associated with mutations in single gene. In the meantime, I think consumers need to currently interpret genetic risk factors for complex diseases with a grain of salt, but I think it is still worth getting excited about using genomic tools to predict novel drug targets and discover genetic sensitivities to drugs that are currently on the market or in clinical trials.

Tuesday, February 16, 2010

Playing with the GWAS Catalog


When I was looking at an article in PLoS Biology, I noticed that the abstract listed a comprehensive government database for genome-wide association studies (the “GWAS Catalog”). This database provides a lot of interesting information. In order to get a feel for the data in the GWAS Catalog, I looked at the data for four specific diseases (autism, prostate cancer, type I diabetes, and type II diabetes).

[If you are a non-scientist looking at this database, stronger genetic associations (which should more accurately predict genetic predisposition to a disease) should have low p-values and should be reproducible between different studies.]
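As a concrete illustration of that filtering rule, here is a hypothetical snippet that keeps only associations with very small p-values reported by more than one study. The records are invented, and the 5 x 10^-8 cutoff is simply the commonly used genome-wide significance threshold, not the catalog's own inclusion criterion.

```python
# Hypothetical filter: keep associations that are both strong and reproducible.
from collections import defaultdict

# Invented records in the spirit of the GWAS Catalog: (variant, disease, p-value, study).
records = [
    ("rs0000001", "disease X", 3e-33, "study A"),
    ("rs0000001", "disease X", 1e-12, "study B"),
    ("rs0000002", "disease X", 8e-06, "study C"),
]

THRESHOLD = 5e-8  # commonly used genome-wide significance cutoff
studies_per_variant = defaultdict(set)
for variant, disease, pval, study in records:
    if pval <= THRESHOLD:
        studies_per_variant[(variant, disease)].add(study)

reproducible = [key for key, studies in studies_per_variant.items() if len(studies) >= 2]
print(reproducible)  # only rs0000001 passes: significant in two independent studies
```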

1) Autism - There were 3 studies included in the GWAS Catalog. The first two studies identified the same exact region (but with slightly different variants), and the third study identified a different but nearby region. Although I think that there is probably something interesting going on in this region of chromosome 5, I don’t think it is worth getting very excited about the specific variants identified in these studies. For example, the p-values for the autism studies are the highest (least significant) of the four diseases that I analyzed (meaning autism has the weakest genetic component and/or the genetic component of autism is the most complex to model). Furthermore, the most recent study showed that the expression levels of SEMA5A (one of the genes listed in the GWAS Catalog for autism) are very similar for autistic and normal people (see Fig 2. if you have access to this article). The authors of this study claim that gene expression in autistic patients is significantly lower than in normal patients, but I think the statistical significance may be due to an over-fitting problem because they only look at 20 autism patients and 10 control patients (and I have a hard time believing this was enough data to adjust for “age at brain acquisition, post-mortem interval and sex”). The genes with the strongest genetic association in the first study (CDH10 and CDH9) also have similar expression patterns in both autistic and control patients, and the authors of this first study report that this difference is not statistically significant. Of course, the autism variants may be non-functional yet retain similar gene expression levels, but I would still seriously question the strength of any of the specific variants listed in these studies.

2) Prostate Cancer – I looked at the data for prostate cancer, type I diabetes, and type II diabetes because variants for these three diseases are included in at least two of the three major genomic tests listed in “The Language of Life.”  More specifically, the three major genomic testing companies gave completely different predictions regarding Dr. Collins’ risk of getting prostate cancer. The GWAS Catalog lists 11 studies (10 of which have significant associations), and the genetic associations for prostate cancer were much stronger than for autism (p-values equal 3 x 10^-33 vs. 2 x 10^-10, respectively). Highly significant genetic associations were found within the 8q24.21 and 17q12 regions in several independent studies, but many associations are only found in individual studies. According to “The Language of Life”, deCODE has 13 variants for prostate cancer, Navigenics has 9 variants, and 23andMe has 5 variants. Based upon what I’ve seen in the GWAS Catalog, I think that there probably are at least 5 strong, reproducible variants that could be used to calculate genetic predisposition to prostate cancer, but I am not certain if there are 13 variants with well-established genetic associations. However, calculating genetic association for several variants at the same time can be tricky (a simplified sketch of this kind of multi-variant calculation appears after this list), and the difference in test results may be a problem with the underlying models for calculating genetic association more so than the individual variants considered for the analysis.

3) Type I Diabetes – Type I diabetes has a very strong genetic component, and the molecular basis for this disease is well understood. In these respects, the data in the GWAS Catalog are a good reflection of what is known about this disease. The strongest associations had the lowest p-value out of all the diseases considered (5 x 10^-134 for a variant within the Major Histocompatibility Complex, or MHC), and either MHC or HLA (which is part of the MHC) had the strongest genetic association for 4 out of the 8 studies in the GWAS Catalog. This makes a lot of sense because the MHC displays antigens to the immune system (thereby telling the body which cells to attack) and type I diabetes is due to an autoimmune response where the immune system attacks and destroys the insulin-producing beta cells in the pancreas. It bothered me that some studies reported pretty different results, but that is why I think that it is necessary to only use reproducible associations for genetic testing.

4) Type II Diabetes – The GWAS Catalog contained 15 studies on type II diabetes (12 of which had significant results), the highest number among the four diseases that I looked at. The strongest associations for type II diabetes had p-values similar to prostate cancer, but higher than type I diabetes. This makes sense because type I diabetes has a stronger genetic component, so its associations should be stronger. The 8 genes listed as predictors of type II diabetes in "The Language of Life" (TCF7L2, IGF2BP2, CDKN2A, CDKAL1, KCNJ11, HHEX, SLC30A8, and PPARG) were pretty well represented among the different studies listed in the GWAS Catalog, so I bet the predictors of genetic predisposition to type II diabetes are pretty good.
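
To make the "reproducible associations only" idea concrete, here is a minimal sketch (in Python) of how one might filter a local export of the GWAS Catalog down to variants reported in at least two independent studies and then combine them into a simple log-additive risk score. The file name, column names, and the choice of a median odds ratio are my own assumptions for illustration; this is not the Catalog's actual export format, nor the algorithm used by deCODE, Navigenics, or 23andMe.

import csv
import math
from collections import defaultdict

GENOME_WIDE_SIGNIFICANCE = 5e-8  # conventional threshold for GWAS hits

def load_associations(path, disease):
    """Read a tab-delimited catalog export, keeping genome-wide significant hits.

    Assumed (hypothetical) columns: snp, disease, pubmed_id, p_value, odds_ratio.
    """
    hits = defaultdict(list)  # snp -> list of (pubmed_id, p_value, odds_ratio)
    with open(path) as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            if row["disease"].lower() != disease.lower():
                continue
            p = float(row["p_value"])
            if p < GENOME_WIDE_SIGNIFICANCE:
                hits[row["snp"]].append((row["pubmed_id"], p, float(row["odds_ratio"])))
    return hits

def reproducible_snps(hits, min_studies=2):
    """Keep only SNPs reported by at least `min_studies` independent studies."""
    return {snp: records for snp, records in hits.items()
            if len({pubmed for pubmed, _, _ in records}) >= min_studies}

def log_additive_score(snps, genotype):
    """Naive risk score: sum of log(odds ratio) per risk allele carried.

    `genotype` maps SNP id -> number of risk alleles (0, 1, or 2).  This
    ignores linkage disequilibrium and gene-gene interactions, which is
    part of what makes multi-variant risk calculations tricky.
    """
    score = 0.0
    for snp, records in snps.items():
        ordered = sorted(odds for _, _, odds in records)
        median_or = ordered[len(ordered) // 2]  # median odds ratio across studies
        score += genotype.get(snp, 0) * math.log(median_or)
    return score

if __name__ == "__main__":
    # "gwas_catalog_export.tsv" is a placeholder file name for illustration.
    hits = load_associations("gwas_catalog_export.tsv", "Prostate cancer")
    kept = reproducible_snps(hits, min_studies=2)
    print(len(kept), "reproducible variants kept for the risk score")

A real model would also need to handle linkage disequilibrium between nearby variants and differences in effect sizes across populations, which is one reason different companies can reach different conclusions even from overlapping variant lists.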

Monday, February 15, 2010

PLoS Medicine Debate on Medical Patents

The most recent issue of PLoS Medicine contains an article with three mini-essays regarding the debate “Are Patents Impeding Medical Care and Innovation?”.

In a nutshell, I would say this paper describes four main arguments from those who argue for refinement of the current patent system for drugs and medical devices. First, drug companies develop only a small proportion of drugs for diseases affecting developing countries. Second, many existing drugs are too expensive for use in developing countries. Third, pharmaceutical companies are producing relatively fewer drugs at higher cost, and some attribute this to patent law. Fourth, patents can dissuade newcomers from developing technologies in a market that already contains patented products. Other arguments have been made against certain types of medical patents, but I think these are the four main arguments discussed throughout the article. Of course, proponents of medical patents argue that patents are necessary in order to provide the proper incentives for innovation.

Evidence within the article (especially from the author of the second mini-essay) indicates that patent law may not be responsible for the third problem (fewer new drugs at higher cost) and that the fourth concern (barriers to entry/innovation) may be exaggerated. Furthermore, I think the fourth argument applies to all industries, at least to some extent. Therefore, I will mostly focus on the arguments regarding the impact of patent law on developing countries.

The authors of the third mini-essay cite the statistic that "malaria, pneumonia, diarrhea, and tuberculosis…account for 21% of the global disease burden, [but] receive 0.31% of all public and private funds devoted for health research." The fact that the health problems of developing countries receive a low proportion of both public and private funding suggests to me that this problem is not caused entirely by patent laws. To be honest, I think it makes sense for people to want to spend a higher proportion of their money on problems that directly affect them, so I would probably expect global needs to exceed funding no matter what.

The authors of the third mini-essay describe a good approach to cutting the cost of drugs for developing countries. The non-profit Drugs for Neglected Diseases initiative "finances R&D up front and offers the outcome of its research on a nonexclusive basis to generic producers". Universities also hold patents on a number of important drugs, so scientists from various non-profit organizations (such as universities) could negotiate deals with companies to produce and sell their drugs at a reduced cost. Although it was not mentioned in this article, some drug companies already offer drugs to developing countries at a reduced cost or donate patents to non-profit organizations.

The author of the first mini-essay mentions that some people believe that a prize system could replace the current patent system. I completely disagree with implementing a prize system to replace patent law, but it is possible that a prize system could complement innovation in the non-profit sector (this is already done on various scales, but more prizes certainly wouldn’t hurt).

That said, I don’t think patent laws are absolutely perfect. For example, I think we may reach a point where limitations need to be set for patents on genetic information. A popular example of this problem (also mentioned in this article) would be Myriad Genetics’ patent on BRCA1/2.

In general, I really like this article format. I wish the mini-essays were more cleanly divided into "pro," "con," and "unsure" categories, but this is a very minor issue that does not significantly detract from a well-written and well-organized article. On a different note, a debate article like this has not been published in PLoS Medicine since August 2009 (although several were published in the months prior to that), and I would personally like to see a lot more articles like this.

I am a huge fan of the PLoS journals in general, and I think PLoS articles make excellent choices for journal-article reviews in blog entries because they are open access. So you can definitely count on more PLoS journal reviews from me in the future!

Sunday, February 14, 2010

Review of "The Language of Life"

I have wanted to learn more about the current status of personalized genomics for some time, and I was hoping the release of Dr. Francis Collins’ new book “The Language of Life” could help bring me up to speed. Dr. Collins is the current director of the NIH, and he was also the head of the Human Genome Project.

Overall, I like the book, and I think Dr. Collins does a good job presenting facts objectively, providing both optimistic and pessimistic evidence. However, readers should be careful to distinguish between “potential” applications and current applications of personalized medicine; the potential applications greatly outnumber the tools currently in widespread use.

I have included relatively brief summaries of the main applications of personalized medicine and some cool factoids that I gleaned from the book:

Applications:

1) Personalized drug treatments - This is the aspect of personalized medicine that I find most exciting. Adverse drug reactions are the fifth leading cause of death in the United States (although some of these problems are due to human error, rather than genetic sensitivities). Dr. Collins discusses the current use of diagnostic tests to guide prescriptions for 6-MP (leukemia), Warfarin (blood clot/heart attack), Ziagen (HIV), and Herceptin (breast cancer); a toy illustration of this kind of genotype-guided dosing appears after this list. There are also several drug sensitivities that can be revealed using genetic tests (such as 23andMe), but such genetic testing is not currently standard practice. Many other potential applications of personalized drug treatments are discussed, and Dr. Collins also covers the numerous drugs that were developed after the genetic basis of various diseases was discovered using genetic/genomic tools.

2) Assessment of risk factors in your own genome – This is the topic of the book's introduction. Dr. Collins discusses his own family history and his interpretation of genetic tests provided by 23andMe, deCODE, and Navigenics. He also provides a list of his positive results at the end of the book. Although exciting progress has been made in this area, I think these tests need to be more accurate. For example, the three tests did not even agree on whether Dr. Collins has an increased or decreased predisposition to prostate cancer (the difference was due to which genetic variants were considered as part of each test). 23andMe also predicted that Dr. Collins would have brown eyes, when his eyes are actually blue. New genetic associations are constantly being published, and I think companies need to be conservative and only test for reproducible associations discovered by independent studies.

3) Assessing risk factors when planning children – This is certainly the most controversial aspect of personalized medicine. Although couples can get individual genomic tests and simply forgo having children (or staying together) if they are both carriers for a severe, recessive disease (like cystic fibrosis), a more aggressive and controversial route would be preimplantation genetic diagnosis (PGD). PGD involves in vitro fertilization, conducting genetic tests on the fertilized embryos, and only implanting the embryos that are free from serious diseases. This technology is already in use, but it will obviously raise concerns both for those who believe that life begins at conception and for those who fear a GATTACA-esque future of designer babies. In fact, Dr. Collins reports that 42% of PGD clinics would be willing to apply this procedure for sex selection, and a California lab currently advertises selection for eye and hair color. Dr. Collins proposes regulating which traits can or cannot be selected for during this process, and he also points out that defining every trait genetically (as presented in GATTACA) would be impossible: many traits are determined more by environment than genetics, and the number of embryos needed to produce the right combination of desired traits would be impractically large. It is also worth keeping in mind the current accuracy of genetic tests (recall that Dr. Collins was predicted to have brown eyes when his eyes are actually blue). I am generally not a fan of government regulation, but I do see how this could get out of hand, and I would at least advocate giving people the facts necessary to view these tests with a critical eye.

4) Gene therapy and stem cells – Although I wouldn't usually consider these to be "personalized medicine", these therapies are discussed in depth in the final chapter of the book. Collins discusses case studies of gene therapy for LCA (a disease that causes blindness) and X-linked SCID ("bubble boy") patients. In an earlier chapter, Dr. Collins describes a case study in which stem cells carrying two mutant copies of CCR5, implanted in the bone marrow of a leukemia patient, were able to confer resistance to HIV. Collins also discusses the use of iPS cells (reprogrammed from normal adult cells, not embryonic stem cells) to cure sickle-cell anemia in mice, but he notes potential complications (for example, one of the four genes used to induce these cells is an oncogene and may cause cancer in human patients). However, it is important to remember that case studies can sometimes provide atypically good results.
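
Since the 6-MP example in the first item depends on a single, well-characterized gene (TPMT, the enzyme that inactivates thiopurine drugs), here is a toy sketch of what genotype-guided dosing can look like in code. The metabolizer categories reflect the general idea that slow TPMT metabolizers need much less 6-MP, but the specific dose factors, function name, and numbers are hypothetical placeholders for illustration, not clinical guidance and not anything taken from the book.

# Hypothetical sketch of genotype-guided dosing; the categories and numbers
# below are illustrative placeholders, not clinical recommendations.

# TPMT metabolizer status affects how quickly thiopurines like 6-MP are
# inactivated; poor metabolizers accumulate active drug and need far less.
TPMT_DOSE_FACTOR = {
    "normal metabolizer": 1.0,        # standard starting dose
    "intermediate metabolizer": 0.5,  # placeholder reduction
    "poor metabolizer": 0.1,          # placeholder drastic reduction
}

def adjust_6mp_dose(standard_dose_mg, tpmt_status):
    """Scale a standard 6-MP dose by a genotype-dependent factor."""
    try:
        factor = TPMT_DOSE_FACTOR[tpmt_status.lower()]
    except KeyError:
        raise ValueError("Unknown TPMT status: %r" % tpmt_status)
    return standard_dose_mg * factor

if __name__ == "__main__":
    print(adjust_6mp_dose(75, "Poor metabolizer"))  # -> 7.5

The real clinical logic is obviously richer than a three-entry lookup table, but the basic point is the same: a genetic test result changes the prescription before the first dose is ever given.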

Cool Factoids:

1) There is currently a free government website that allows people to record and analyze their family history. Dr. Collins describes this as currently “the single most important source of information about your future health”, and I personally think this would be a cool extra credit activity for high school students to see how they can apply genetics to real-world problems…that is assuming students can prove they have used the website without handing their medical history over to their biology teacher.

2) Free tools already exist that allow people to keep digital copies of their medical records, which can be rapidly accessed by designated health care providers. Two such tools are provided in the book: Google Health and Microsoft HealthVault.

3) In addition to the discussion of Dr. Collins' family history and genomic analysis, he also describes catching malaria and TB (on separate occasions) as a volunteer physician and a medical intern, respectively. Of course, he successfully recovered in both cases.

In conclusion, I think "The Language of Life" provides a good review of the progress of genomics in biomedical research, and I would especially recommend it to nonscientists who want to learn more about genomic medicine.
 
Creative Commons License
My Biomedical Informatics Blog by Charles Warden is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License.