Real-Time qRT-PCR standard curves... efficiency is too high
In my Real-Time qRT-PCR experiments, I employ the standard curve method for quantification of gene expression. However, standard curves seem to be a huge hit-or-miss procedure for me, even with genes that are well-established to work well with Real-Time such as GAPDH.
At times, I am able to produce excellent standard curves with slopes at approximately -3.3 (~100% efficiency); the associated melting curves for each dilution are excellent as well.
However, most of the time I get excellent melting curves for each dilution, but the slope of my standard curve falls at around -2.6 (~142% efficiency)!
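For reference, the efficiencies quoted above come from the usual relation E = 10^(-1/slope) - 1, where the slope is from the fit of Ct against log10 template quantity. A minimal sketch of the arithmetic (plain Python, just to illustrate the conversion):

# Convert a standard-curve slope (Ct vs. log10 quantity) into amplification efficiency.
def efficiency_from_slope(slope):
    # E = 1.0 means 100% efficiency (perfect doubling every cycle)
    return 10 ** (-1.0 / slope) - 1.0

for slope in (-3.32, -3.3, -2.6):
    print(f"slope {slope:5.2f} -> efficiency ~{efficiency_from_slope(slope) * 100:.0f}%")
# slope -3.32 -> efficiency ~100%
# slope -3.30 -> efficiency ~101%
# slope -2.60 -> efficiency ~142%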
Can anyone offer an explanation for this strange dilemma? And can anyone suggest a way to solve this problem?
One thing I can think of is that I'm getting non-uniform amplification efficiencies at different RNA concentrations. FYI, I typically use 100 ng/uL RNA for "1x" and make serial dilutions up to 100x. I understand I really should be using a larger range of dilutions, but dilutions greater than 100x just don't work out for my genes/primers.
Thanks for any help I can get.
What is the range of the Cts? Did you see any contamination in your negative control?
My Ct values for the standard curves usually range from around 14-20, and the negative controls are showing no DNA contamination.
What instrument are you using?
Dear YuJ,
Are you using SYBR Green as the reporter dye, and are you using an iCycler?
What is your lowest dilution in your standard?
This is what always happens to me. If I run a SYBR Green assay with standards ranging from 10e6 down to 1, I get >100% efficiency.
So what I did was deselect the last two dilutions (10 and 1), and the efficiency became ~100% with a slope of ~-3.3.
I think this is due to primer-dimer formation contributing false signal to your reaction, and the effect of the primer-dimers is large enough to affect your data, especially in the low-concentration standards.
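What Hadrian describes (dropping the lowest standards and refitting) amounts to re-running the linear fit of Ct against log10 quantity on the remaining points. A rough sketch with made-up Ct values (assuming numpy is available), showing how two compressed low-end points flatten the slope and inflate the apparent efficiency:

import numpy as np

# Hypothetical Cts for a 10-fold dilution series (1e6 down to 1 "copies").
# The two lowest points come up earlier than expected (e.g. primer-dimer signal).
log10_qty = np.array([6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0])
ct        = np.array([14.0, 17.3, 20.6, 23.9, 27.2, 29.0, 30.2])

def slope_and_efficiency(x, y):
    slope, intercept = np.polyfit(x, y, 1)            # ordinary least-squares line
    return slope, 10 ** (-1.0 / slope) - 1.0

print(slope_and_efficiency(log10_qty, ct))            # all 7 points: slope ~ -2.8, E ~ 127%
print(slope_and_efficiency(log10_qty[:-2], ct[:-2]))  # drop the 2 lowest: slope -3.3, E ~ 101%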
Best regards
Hadrian
Hi all,
I am indeed using SYBR Green chemistry (ABI SYBR Green PCR Mix), but the instrument I use is the Applied Biosystems 7900HT.
I have tried removing certain dilutions in all possible permutations before plotting the trendlines, but the outcomes are inconsistent between experiments. I also don't believe I have any primer-dimers because of the very pronounced product peaks and complete absence of primer-dimer peaks from the dissociation/melting data.
Hi YuJ,
Could you please describe how you prepare your quantitative standards?
Thanks
Hadrian
Hi Hadrian,
My quantitative standards are prepared as follows...
1. RNA isolation using TRIZOL reagent following manufacturer instructions plus an additional overnight purification step with ethanol (100%) and sodium acetate (3 M).
2. Spectrophotometric determination of [RNA]. A260/A280 ratio is usually at around 1.8.
3. Serial dilution of RNA at 1x, 10x, 25x, 75x, 100x, and sometimes 1000x (with 1x being 100 ng/uL diluted from stock, 10x being 10 ng/uL, etc.) using DEPC-treated water; see the sketch after this list for how these dilutions map onto standard-curve input values. I make sure to vortex each tube very well before pipetting for the next dilution.
4. Reagents in RT-PCR reactions include: Multiscribe Reverse Transcriptase (ABI), RNaseOUT RNase Inhibitor (Invitrogen), SYBR Green 2x PCR Mix (ABI). These, plus DEPC-treated water, are combined as a master mix.
5. Primer concentrations have been previously optimized for each primer. In the case of GAPDH, I determined the ideal concentration to use would be 0.06 uM per reaction for both the forward and reverse primer.
6. In each reaction tube, I first add the water and master mix, then the primers, and then finally the appropriate RNA template.
7. Each tube is vortexed and loaded in triplicate into the appropriate optical reaction plate. Plate is centrifuged to get rid of air bubbles and placed into the ABI 7900HT.
8. Thermocycler is set for a 45C RT phase (30min), 95C melting + 60C annealing phase (40 cycles), and then an additional dissociation phase at the end for generating the dissociation curves.
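Side note on step 3: a minimal sketch of how those dilution factors map onto the concentrations and log10 values that end up on the x-axis of the standard curve, taking the 100 ng/uL starting point from the list above:

import math

stock_ng_per_ul = 100.0                       # "1x" from step 3
dilution_factors = [1, 10, 25, 75, 100, 1000]

for d in dilution_factors:
    conc = stock_ng_per_ul / d                # ng/uL loaded for that standard
    print(f"{d:>5}x -> {conc:8.3f} ng/uL, log10 = {math.log10(conc):+.2f}")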
I probably included a lot of irrelevant details, but hopefully this is what you meant by how I prepare my quantitative standards.
Thanks a lot.
Dear YuJ,
Thanks for your description. I would strongly suggest that you use in vitro transcription (IVT) to prepare your GAPDH mRNA rather than using TRIzol-extracted total RNA.
The reason is that when you use TRIzol extraction, you get total RNA, not GAPDH-specific mRNA. So when you quantitate it spectrophotometrically, the reading will be off, and it will not give you a standard curve with a slope of ~-3.3.
You may try cloning the GAPDH gene into a plasmid and in vitro transcribing (IVT) it into GAPDH mRNA --> treat with DNase I to completely remove DNA contamination --> stop the DNase reaction --> purify the mRNA --> quantitate your mRNA --> convert xx ug/uL into xx RNA copies/uL --> perform 10-fold serial dilutions --> run RT-PCR.
This should give you a better result.
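The "convert xx ug/uL into xx RNA copies/uL" step is the usual mass-to-copy-number calculation for single-stranded RNA. A minimal sketch, assuming ~340 g/mol per ribonucleotide and a hypothetical transcript length (not Hadrian's exact numbers):

AVOGADRO = 6.022e23        # molecules per mole
MW_PER_NT = 340.0          # approximate g/mol per ribonucleotide for ssRNA

def rna_copies_per_ul(conc_ng_per_ul, transcript_length_nt):
    # copies/uL = (grams per uL) / (grams per mole for the whole transcript) * Avogadro
    grams_per_ul = conc_ng_per_ul * 1e-9
    return grams_per_ul / (transcript_length_nt * MW_PER_NT) * AVOGADRO

# Hypothetical example: 50 ng/uL of a ~1,300 nt IVT transcript.
print(f"{rna_copies_per_ul(50, 1300):.2e} copies/uL")   # ~6.8e10 copies/uL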
Optional:
If you want to make sure that your IVT GAPDH mRNA is free of DNA contamination, you may run an RT-PCR and a PCR (no RT) simultaneously. Your (no RT) PCR should not give you any band. If it does, treat your standard again with DNase I.
Note:
When you do the cloning, please include 10-20 bp of extra flanking sequence at both the 3' and 5' ends of your actual mRNA target. This will provide better stability against exonucleases and prolong the shelf life of your standard.
I would recommend using Ambion products for the IVT.
Good luck.
Hadrian
Thank you for your advice, Hadrian, but I do not believe that method is suitable for me.
Although I've only mentioned GAPDH serial dilution as an example, I am actually conducting a gene expression study for some other genes, merely using GAPDH as an internal control.
Also, when I made the spectrophotometric measurements, I was just interested in the A260/A280 ratio as a measure of RNA purity and a rough estimate of the total RNA concentration in the stock tube. Indeed I am measuring total RNA, but gene-specific primers are used in the RT-PCR for actual quantification.
Sorry, I should have first mentioned what I'm actually doing instead of assuming that others can read my mind.
What volume of RNA are you actually putting into your RT reaction? I suggest doing your dilutions after you do your RT reaction. Diluting your RNA affects the kinetics of your RT reactions as well as your PCR reactions unless you compensate your primers and enzymes relative to your RNA dilutions. This is more true with hexamer primers than with oligo(dT) or gene-specific primers.
A constant concentration of hexamers across a series of RNA dilutions will produce smaller cDNA products the larger your dilution. Add on top of that the differing amount of cDNA used for PCR, and you have a lot of variables that affect the efficiency of your PCR.
However, it might be that you are putting too much cDNA into your PCR, which can cause inhibition leading to late amplification that becomes less and less of a factor the greater your dilution factor becomes, thus compressing your data points and producing a slope whose magnitude is well below 3.3.
Hi YuJ,
Since you are doing an expression study, wouldn't it be more appropriate to run relative quantification rather than absolute quantification?
Just wondering.
Regards
Hadrian
I think there is inhibition in your PCR. Two cases are possible: you put too much template into your PCR, or there is an inhibitor in your template (carried over from the extraction). When you dilute your template, you also dilute the inhibitor, so the amplification is more efficient. That's why you see an efficiency >100%. To be sure of that, try higher dilutions of your template; the apparent efficiency should decrease.
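One way to see this in the raw data is to look at the Ct spacing between successive 10-fold dilutions: at ~100% efficiency it should be close to 3.32 cycles, and with an inhibitor that gets diluted out the spacing is compressed at the concentrated end and only approaches the ideal value at higher dilutions. A rough sketch with made-up Cts:

import math

# Hypothetical Cts for a 10-fold dilution series where an inhibitor dilutes out
# along with the template: spacing is too small at high input, widening as you dilute.
cts = [14.0, 16.0, 18.6, 21.7, 25.0, 28.3]   # 1x, 10x, 100x, ...

ideal = math.log2(10)                         # ~3.32 cycles per 10-fold step at 100% efficiency
for i in range(1, len(cts)):
    print(f"step {i}: dCt = {cts[i] - cts[i-1]:.2f}  (ideal ~{ideal:.2f})")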
I am having the same problem, i.e. high efficiency, and I too think it is due to the presence of an inhibitor that gets diluted out as I dilute my samples. All my dilutions have similar Ct values, and in fact sometimes the more dilute sample gives a lower Ct value! Does anybody know how to get rid of this inhibitor from the extract?
-Basu
Basu- I am having the exact same problem. The advice I got from ABI was to purify my RNA samples with a column clean-up. So, I did the purification last Friday using the RNeasy MinElute Clean-up Kit. I got excellent RNA purity - I started with a purity of about 1.7 and ended up with purities over 2. However, I had large sample loss - about 50-80%. (Qiagen says I should recover 80-90% of my RNA samples with their columns, but so far I have not had good luck with recovery.) I am going to try RT PCR on these samples now and hopefully improve my efficiencies!