Most trading journals fail because they log entries and exits but not state. A real journal answers 12 specific questions per trade — what you were trying to prove with the trade, what you ignored from your plan, whether you would have taken the trade if it were your last of the day, what you felt at entry, and eight more. These questions surface the emotional and behavioural patterns that raw trade data misses. Over a sample of 50 to 100 trades, the answers cluster in ways that show you exactly which biases are driving your worst outcomes. This page is the full list with explanations of what each question actually tells you, how to answer it without turning the journal into homework, and how the questions map onto the AI journal feature that can answer several of them automatically.
If you have ever started a trading journal and stopped within six weeks, you are the rule, not the exception. The failure rate on manual journaling is somewhere north of 80 percent for discretionary traders who try it without a specific framework.
The reason is not laziness. It is that most journals are structured to log what a spreadsheet could have logged — instrument, entry, exit, P&L. That data is necessary, but it is not where the edge in journaling comes from. The edge comes from capturing the information that only the trader has access to: what they were thinking, what they were feeling, what they ignored from their plan, what they were trying to prove. Without those, the journal is a ledger. With those, the journal becomes a mirror.
The 12 questions below are the specific prompts that, answered over a sample of 50 to 100 trades, surface the patterns that are quietly driving your worst outcomes. They are deliberately short. They do not ask you to write essays. They ask for the one piece of data per question that, stacked over time, turns into something actionable.
Question 1: What setup was this?

Name it. Not "long breakout." The specific name from your playbook. "Opening range breakout with retest." "Continuation pullback into prior day high." If the setup does not have a name in your playbook, the trade was not from the playbook. Note that.
This question does two things. Over time, it lets you see the win rate and expectancy of each named setup separately, which is usually different from what you think. It also surfaces the setups you are trading that are not on the playbook at all — the trades you slid in because something "looked good" in the moment. Those show up as "no name" entries, and when you tally them at the end of the month, the pattern is usually ugly.
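The per-setup tally this enables can be sketched in a few lines. Everything below is illustrative: the setup names, the journal rows, and the use of R-multiples as the outcome measure are assumptions, not a prescribed format.

```python
from collections import defaultdict

# Hypothetical journal rows: (setup name, outcome in R-multiples).
# "no name" marks trades that were not on the playbook.
trades = [
    ("opening range breakout with retest", 2.0),
    ("opening range breakout with retest", -1.0),
    ("opening range breakout with retest", 1.5),
    ("continuation pullback into prior day high", -1.0),
    ("continuation pullback into prior day high", 2.5),
    ("no name", -1.0),
    ("no name", -1.0),
    ("no name", 0.5),
]

def expectancy_by_setup(rows):
    """Return {setup name: (average R per trade, trade count)}."""
    buckets = defaultdict(list)
    for name, r in rows:
        buckets[name].append(r)
    return {name: (sum(rs) / len(rs), len(rs)) for name, rs in buckets.items()}

for name, (avg_r, n) in sorted(expectancy_by_setup(trades).items()):
    print(f"{name}: {avg_r:+.2f}R average over {n} trades")
```

Even on this toy sample, the "no name" bucket carries a negative expectancy while the named setups are positive, which is the monthly tally the paragraph above describes.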
Question 2: Was this trade on your plan?

Yes or no. If no, why did you take it.
Traders who plan the night before and then trade a different plan the next morning are common. The journal catches this. A trade that was not on the plan is a trade that was generated in session, which means it was generated in a more stressed, more reactive state than the planned trades. Across a sample, these in-session-generated trades almost always have worse expectancy than planned trades. If you can see that in your own data, you stop taking as many of them.
Question 3: What did you feel at entry?

One word. Calm. Eager. Nervous. Frustrated. Bored. Sharp. Vengeful. Relieved. You are not writing a mood diary. You are tagging a context variable.
Over 100 trades, you will see that specific emotional states correlate with specific outcomes. Calm and sharp usually produce clean trades. Eager, vengeful, and bored usually produce losses. The correlation is not complicated. What the question does is force you to notice the state before you rationalize it. You cannot write "eager" on a trade and then pretend later that the entry was purely technical.
Question 4: What were you trying to prove with this trade?

This is the most revealing question on the list and the one most traders dislike. The honest answers are things like: I wanted to make back the last loss. I wanted to prove I could take bigger size. I wanted to show myself I am not afraid. I wanted to be right about the top I called yesterday.
None of these are valid reasons to take a trade. All of them are reasons traders take trades. The question does not make the motivation go away. It makes it visible. A trader who can write "I was trying to make back the last loss" on their own journal entry is in a much better position to catch the same motivation in real time next week.
Question 5: What did you ignore from your plan?

Every trade that deviates from the plan deviates in a specific way. Maybe the stop was wider. Maybe the setup was 80% of the real setup. Maybe the session time was past your normal cutoff. Maybe the size was different from plan. Write down the specific deviation.
Across a sample, the pattern of deviations is usually concentrated. Most traders deviate in one or two specific ways, repeatedly, not in a random spread. Once you can see that you widen stops on 30% of your losing trades, you can build a rule that blocks stop widening, and that single change often recovers more than any "better setup" would.
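A sketch of that tally, with invented deviation tags and counts; the concentration in one or two categories is the point, not these particular numbers.

```python
from collections import Counter

# Hypothetical deviation tags pulled from the losing trades in a journal.
# The tally shows deviations concentrating, not spreading randomly.
losing_trade_deviations = [
    "widened stop", "widened stop", "entered before trigger", "widened stop",
    "none", "widened stop", "oversized", "widened stop", "none", "widened stop",
]

counts = Counter(losing_trade_deviations)
total = len(losing_trade_deviations)
for tag, n in counts.most_common():
    print(f"{tag}: {n}/{total} losing trades")
```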
Question 6: If this were your last trade of the day, would you still take it?

This question has a specific diagnostic purpose. It separates real setups from trades you took to stay in the game.
A real setup is one you would take regardless of whether other opportunities are coming. A marginal setup becomes attractive only because you are going to have more chances after it — if this one fails, another will come. But if you had to pick one trade for the entire day, would this be the one. If the answer is yes, the setup is real. If the answer is "well, not really, but I wanted to trade," the setup is a session-filler, and session-fillers as a category tend to lose money over time.
Question 7: What was your plan for being wrong?

Not where the stop was. What you were going to do, emotionally and process-wise, if the trade stopped out. The plan could be: take the loss, note it, continue to the next setup. Or: take the loss, end the session. Or: take the loss, stand up and walk away for 10 minutes before the next trade.
Trades entered without a plan for being wrong tend to produce the worst emotional aftermath — the trader is surprised by the loss even though statistically they should have expected it, and the surprise is what drives the revenge trade that follows. Having a pre-committed plan for loss defuses this almost entirely.
Question 8: What signal did you see and dismiss?

Honest, post-hoc. The bear divergence you saw and dismissed. The weak volume on the breakout. The higher timeframe was against you. The VWAP was between you and your target. In retrospect, what was the signal you downweighted at entry.
The point is not to beat yourself up. The point is to calibrate your filter for next time. Over a sample, there is usually one or two specific pieces of information you repeatedly downweight — often because confirming it would have meant skipping the trade. Making that visible in the journal lets you add it as an explicit filter next time.
Question 9: What was the process grade?

A, B, C, or D. Grade the trade on whether it was executed correctly — setup valid, entry at trigger, stop at rule, exit at rule — not on whether it made money.
This is the single most important question for separating signal from variance in your journal. A winning trade graded C reinforces bad execution habits every time you credit the win to your skill. A losing trade graded A is information about variance, not about your process. Over 100 trades, the correlation between your grades and your P&L should be positive, but not perfect — because variance exists. If they are too tightly correlated, you are grading on outcome, not on process, and the journal is not doing its job.
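One way to run that check is a plain correlation between the letter grade, mapped to a number, and each trade's outcome. The journal sample, the grade mapping, and the R-multiple outcomes below are all invented for illustration.

```python
# Letter grades mapped to numbers so they can be correlated with outcomes.
GRADE_VALUE = {"A": 4, "B": 3, "C": 2, "D": 1}

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical journal sample: (process grade, outcome in R-multiples).
journal = [("A", 2.0), ("A", -1.0), ("B", 1.5), ("C", 1.0), ("C", -1.0), ("D", -1.0)]
grades = [GRADE_VALUE[g] for g, _ in journal]
outcomes = [r for _, r in journal]

correlation = pearson(grades, outcomes)
print(f"grade/outcome correlation: {correlation:.2f}")
```

A moderately positive value like this one is the healthy pattern the paragraph above describes; a value pinned near 1.0 would suggest you are grading on outcome rather than process.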
Question 10: What one change would you make?

One specific change. Not "trade better." A concrete adjustment. "Wait for the retest instead of entering on the initial break." "Size 1 contract instead of 2 on mid-morning setups." "Pass on this setup when the VWAP is between entry and target." The change should be small enough to actually implement on the next occurrence.
This question is where the journal stops being a log and becomes a feedback loop. The changes you name here are the evolution of your playbook in slow motion. If you review your journal monthly, you should see the named changes appear in your actual behaviour on the next few trades of that setup. If they do not, the journal is not actually driving iteration — it is just documenting inaction.
Question 11: What was the account context?

Were you up or down on the day. Up or down on the week. Where was your trailing drawdown floor. Were you close to a profit target or a daily loss limit.
Account context changes trade quality. Traders take different trades at the start of an evaluation than they do at 90% completion. They take different trades on a green week than on a red one. The journal should capture this so you can see whether your expectancy differs by account context — most traders' does, often dramatically, and the expectancy is usually worse near account-level pressure points like the profit target or the drawdown floor.
Question 12: How would you explain this trade to a mentor?

Two or three sentences. Plain language. Would a disciplined mentor sign off on this trade, or would they find something wrong with it.
This is the integrity question. It is the backstop that catches trades where you have rationalized your way through the first 11 questions but know, at some level, that the trade was not a good one. Writing the imagined mentor conversation forces a frank self-assessment that is hard to duck. Over time, the question trains better honesty, which is the foundation of any useful journal.
Do not try to answer all 12 on every trade. Here is the workable version.
Every trade: Questions 1, 2, 3, 5, 9. Setup name, planned or not, feeling at entry, deviation from plan, process grade. Thirty seconds per trade.
Best and worst trade of each session: All 12 questions. Two trades per day at most, about 5 to 8 minutes each.
Weekly review: Look back at the week's deep entries. What patterns repeat. What biases are showing up. What one change will you make for next week.
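The five every-trade answers fit a small record like the following sketch. The class and field names are assumptions for illustration, not part of any described tool; the question numbers in the comments follow the mapping above.

```python
from dataclasses import dataclass

# A minimal record for the five every-trade questions.
@dataclass
class QuickEntry:
    setup_name: str   # Question 1: name from the playbook, or "no name"
    on_plan: bool     # Question 2: was the trade planned in advance
    feeling: str      # Question 3: one-word state at entry
    deviation: str    # Question 5: specific deviation from plan, "" if none
    grade: str        # Question 9: process grade, A through D

entry = QuickEntry(
    setup_name="opening range breakout with retest",
    on_plan=True,
    feeling="calm",
    deviation="",
    grade="A",
)
```

Five short fields is the whole thirty-second version; the deep-dive questions stay as free text on the two trades per day that get them.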
If you are starting from zero and have not been journaling at all, here is a sustainable ramp.
Days 1 to 3: answer Questions 1, 2, 3, and 9 on every trade. Setup name, planned or not, feeling at entry, process grade. Four questions, 20 to 30 seconds per trade. Nothing else.
Days 4 to 7: add Question 5 (what did I ignore from my plan) on every trade. Still fast. You are now capturing five data points per trade, which is enough to start seeing patterns in your own journal by the weekend.
Days 8 to 14: pick one trade per session — the best or worst of the day, alternating — and answer all 12 questions on that single trade. Everything else stays at the five-question level.
After two weeks of this, you have roughly 50 five-question entries and 10 deep-dive entries, which is enough data to start pulling out specific behavioural patterns — the setup you take most often, the emotional states that correlate with losses, the deviations that repeat. That is the inflection point where the journal goes from feeling like homework to feeling like a tool.
The traders who abandon journaling in week three are almost always the ones who tried to answer all 12 questions on every trade from day one. It was too much. They burned out. Start narrow. Expand when the narrow version is solid.
One final note. The AI journal handles the first layer automatically — setup tagging, context variables, process metadata — which leaves your manual writing for the introspective questions that actually require you to sit with the answer. See the AI journal guide for how that split works in practice.