Club EvMed: Candidate gene studies have taught us little about trait genetics but a lot about the fallibility of the scientific process
Care to comment on this? https://www.science.org/doi/10.1126/sciadv.abi5884
Other end of the spectrum from candidate genes.
I watched a recent TEDx talk delivered by a young molecular geneticist (from Taipei?) whose lab has invented a well-targeted CRISPR-based method to perform single base pair editing. He mentioned that thousands of genetic diseases are based on single base pair mutations. Example he offered: a disease that causes very rapid aging (10x+ normal) from time of birth. Are many of these based on non-replicated candidate gene studies?
Now is a good time to put your questions and comments in Chat…or prepare to raise your hand
Without knowing the exact list of variants & disorders it's hard to say, but often those variants have very high penetrance and/or are detected in GWAS cohorts. Because of their penetrance, they often have more mechanistic connections to health than these candidate gene hypotheses in psychology (e.g. knockout a protein vs "this seems to do something to brain development")
As I understand it, there are variants that have big effect sizes for disease, but they're very rare variants. Not the same as e.g. serotonin transporter polymorphisms
What approach do you advocate for testing hypotheses based on signals of selection?
Whole exome and genome sequencing in conjunction with standardized clinical assessment are more reliable ways to connect variants with diseases or conditions.
Why are current CG studies still being performed without the necessary power? Do you think it is largely because of publication bias? I can understand some of the past hype around CGs, but I find it more difficult to understand why the issue of inadequate statistical power persists.
How did these replication studies handle differences in populations/ancestry between the cohorts?
I also have this question about inter-population comparisons and how this is handled in GWAS. What about doing between-population comparisons? Others argue you will find something significant, but that it is not appropriate to do this, and that analyses should instead only be applied within populations.
Even so, a recent study in JAMA using large biobank datasets suggests that average penetrance for pathogenic or presumed loss-of-function variants is about 7%.
Clinical studies such as those being discussed are different from field studies conducted by many evolutionary biologists. What study designs or analytic strategies could be successful and credible?
Meredith Spence Beaulieu
We’ll be opening up for discussion soon! You can keep posting questions in the chat, or feel free to raise your hand at any point to ask a question or raise a point. We’ll call on you after the presentation (To raise hand, click “Reactions” at the bottom of your screen, then “Raise hand.”)
Thanks for looking at my chat question. I must leave to teach. I’ll watch the tape! Great work!! Paul Watson
Perhaps current measurements of the outcomes are inadequate?
What do these different replicability problems, across the paradigms, have in common? How do they differ? For population genetics: what proportion of the problem is poor design? What proportion is publication bias? What proportion is intractable noise? "Environmental" sensitivity? Nonlinear interactions between heritable genetic variants? Is our basic model for how the biology works just broken and wrong?
Do you see your role in this as a sort of whistleblower? Are you as a scientist insulated from any potential pushback from individuals who may be offended by your work?
Just commenting on the culture of data sharing - are you saying this as it relates to academics or are you finding DTC personal ancestry/genomic companies are starting to share data as well?
But many journals are now limiting the amount of supplemental material (including the journal Genetics!).
Big problems in medical journals (not correlational or epidemiological) also with false positives
Yep, cancer biology has a replicability crisis that mirrors psychology's
What Brandon said freaks me out all the time!
If researchers were encouraged to publish negative results, might this problem have been detected earlier? Part of the sampling error issue may be that the bias toward publishing positive results (associations in this case) has skewed what might have become apparent earlier if other papers had had the opportunity to be published.
But even CG x E studies showed poor imagination about what the relevant Es were… and how they tune G effects.
Have to run, but great talk, very interesting! Such an important topic. Thanks Matt & all
Have to run, but I really enjoyed this talk & discussion
Tenure leads to more exploration and better science -- that's what we have always claimed.
Great stuff, thank you!
Thank you! Great talk
Very important topic. Thanks very much!
Great presentation, thank you!
Thanks for this presentation - lots to think about.
Such a great discussion; please expand on it at the conference!