We analyzed 10.2 million transactions flagged as high-risk by our platform between January 2024 and September 2025 — looking specifically at how fraud patterns vary by calendar period. What we found was not subtle. Fraud rates, attack type distribution, and the specific techniques used shift significantly across the calendar year in patterns that are consistent, predictable, and largely unaccounted for in how most institutions calibrate their detection systems.
Static models trained on annual or multi-year data average out these seasonal patterns. The result is detection that's well-calibrated for the annual average but miscalibrated for the periods that matter most — the peaks where fraud concentrates and where the cost of miscalibration is highest.
The Q4 compression effect
The most pronounced seasonal pattern in our data is the Q4 compression: fraud activity compresses into a shorter, more intense period between late October and the first week of January. Across the 10.2 million flagged transactions, 34% of confirmed fraud incidents fell in Q4, against 26% of total transaction volume — an over-representation ratio of roughly 1.3x.
More significant than the volume is the composition. Q4 fraud is disproportionately card-not-present fraud targeting e-commerce transactions. The Q4 CNP fraud rate in our dataset was 2.1x the Q2 baseline. This is partly a volume effect — e-commerce transaction volume is substantially higher in Q4 — but the fraud rate itself (fraud incidents per transaction) rises, not just the fraud count. Fraudsters concentrate effort in Q4 because the signal-to-noise ratio for detection systems degrades: high transaction volumes create detection noise, staffing patterns mean response times are longer, and merchants are sometimes more permissive about friction in order to protect holiday conversion rates.
The specific Q4 peak in our data is the 72-hour window starting the Wednesday before Thanksgiving through Black Friday. In 2024, fraud attempts in that window ran at 3.4x the October baseline rate. Models not calibrated for that spike either miss more fraud than usual (if calibrated for the baseline) or block more legitimate transactions (if calibrated for the peak).
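One way to handle a known spike window without retraining the whole model is a seasonally adjusted decision threshold. The sketch below is purely illustrative: the threshold values, adjustment sizes, and date windows are assumptions chosen to mirror the patterns described above, not parameters from our dataset.

```python
from datetime import date

# Illustrative only: a static score threshold tuned for the annual average,
# plus seasonal adjustments for known high-fraud windows. All numbers here
# are assumptions for the sketch, not measured values.
BASELINE_THRESHOLD = 0.80  # flag transactions scoring above this

# ((start_month, start_day), (end_month, end_day), threshold adjustment)
SEASONAL_ADJUSTMENTS = [
    ((11, 20), (12, 2), -0.10),  # Thanksgiving/Black Friday window: flag more aggressively
    ((2, 1), (4, 30), -0.05),    # tax season: more sensitive to account-opening fraud
]

def risk_threshold(d: date) -> float:
    """Return the score threshold in effect on date d."""
    for start, end, adjustment in SEASONAL_ADJUSTMENTS:
        # Tuple comparison works for windows that do not cross a year boundary.
        if start <= (d.month, d.day) <= end:
            return BASELINE_THRESHOLD + adjustment
    return BASELINE_THRESHOLD

print(round(risk_threshold(date(2024, 11, 29)), 2))  # Black Friday -> 0.7
print(round(risk_threshold(date(2024, 6, 15)), 2))   # off-season -> 0.8
```

A lookup table like this is the crudest form of seasonal calibration; it trades some precision for the ability to reason about and audit exactly when and why sensitivity changes.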
Tax season and account opening fraud
The second major seasonal pattern is a February-April concentration of account opening fraud and synthetic identity activity. This aligns directly with US tax season dynamics: tax refunds increase consumer deposits, which creates both opportunity (higher balances to target via account takeover) and cover (new account openings are more common during tax season, making synthetic identity openings less anomalous in aggregate).
In our dataset, synthetic identity fraud attempts were 2.7x more frequent in February through April than in the September-November baseline. Account takeover attempts targeting high-balance accounts peaked in March — coinciding with the period when direct deposit tax refunds are appearing in accounts. We also saw a distinct pattern of authorized push payment fraud in March, where victims were targeted with fraudulent IRS-themed communications directing them to transfer refund-related funds.
For institutions recalibrating fraud models annually or semi-annually, a model trained on data that underweights the February-April fraud concentration will be underprepared for tax season: it will be slower to catch the seasonal attack types, and it may also generate elevated false positives on legitimate tax-season behavior it wasn't trained to interpret correctly.
Summer travel and the geographic anomaly problem
The June-August period shows a distinct pattern driven by legitimate behavior that degrades fraud detection performance: travel. A larger proportion of legitimate transactions during summer months occur in geographies that deviate from the cardholder's home market. This is exactly the signal that many fraud models use to flag account takeover and card-not-present fraud.
In our data, false positive rates for detection systems that weight geographic anomaly heavily rise by approximately 40% in June-August relative to baseline. The legitimate customers triggering these false positives are disproportionately high-value, high-tenure accounts — exactly the customers whose false decline experience is most costly from a churn perspective.
The correct calibration response is to reduce the weight on geographic anomaly signals during June-August and compensate with higher sensitivity on behavioral anomaly signals that aren't affected by travel (transaction timing, device patterns, authentication behavior). This is achievable but requires explicit seasonal recalibration — it doesn't happen automatically in a static model.
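The reweighting described above can be sketched as a seasonal swap of feature weights in a simple linear score. The feature names and weight values below are illustrative assumptions, not production parameters, and a real model would learn these weights rather than hard-code them.

```python
# Sketch of seasonal feature reweighting. Feature names and weights are
# assumptions for illustration only.
ANNUAL_WEIGHTS = {
    "geo_anomaly": 0.35,
    "device_anomaly": 0.25,
    "timing_anomaly": 0.20,
    "auth_anomaly": 0.20,
}

# June-August: down-weight geography, shift weight to travel-stable signals
# (device, transaction timing, authentication behavior).
SUMMER_WEIGHTS = {
    "geo_anomaly": 0.15,
    "device_anomaly": 0.30,
    "timing_anomaly": 0.25,
    "auth_anomaly": 0.30,
}

def risk_score(features: dict, month: int) -> float:
    """Weighted sum of anomaly signals, with a summer-specific weight set."""
    weights = SUMMER_WEIGHTS if month in (6, 7, 8) else ANNUAL_WEIGHTS
    return sum(weights[name] * features.get(name, 0.0) for name in weights)

# A traveling cardholder: strong geographic anomaly, weak behavioral anomalies.
tx = {"geo_anomaly": 0.9, "device_anomaly": 0.1, "timing_anomaly": 0.1, "auth_anomaly": 0.1}
print(round(risk_score(tx, month=7), 2))  # July: 0.22, below typical alert levels
print(round(risk_score(tx, month=2), 2))  # February: 0.38, much closer to flagging
```

The same transaction profile scores markedly lower in July, which is the intended effect: geography alone should not drive a summer decline when the travel-stable signals look normal.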
The back-to-school gift card spike
A pattern we didn't expect before this analysis: a distinct spike in gift card fraud in the late July to mid-August window. Gift card fraud — purchasing gift cards using compromised payment credentials, then rapidly liquidating them — is a year-round activity, but it concentrated in this window at 1.8x the baseline rate in both 2024 and 2025.
The back-to-school retail period drives higher gift card transaction volumes as a legitimate category, which provides cover for fraudulent gift card purchases. The detection challenge is that gift card purchases are a legitimate and common purchase type during this period, making velocity-based flagging less effective — the threshold at which gift card purchase velocity becomes anomalous is higher than at other times of year, and fraudulent purchases can stay below that elevated threshold.
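The velocity-threshold problem described above can be made concrete with a small sketch. The limits and date window here are assumptions chosen to illustrate the mechanism: the same purchase velocity that is anomalous in May can sit below the elevated back-to-school threshold in August.

```python
from datetime import date

# Illustrative velocity rule with a seasonally raised threshold. The limits
# and the late-July-to-mid-August window are assumptions for the sketch.
GIFT_CARD_VELOCITY_LIMIT = 3   # gift card purchases per 24h that trigger review
BACK_TO_SCHOOL_LIMIT = 6       # elevated threshold during the back-to-school window

def velocity_limit(d: date) -> int:
    """Return the gift card velocity threshold in effect on date d."""
    in_back_to_school = (d.month == 7 and d.day >= 20) or (d.month == 8 and d.day <= 15)
    return BACK_TO_SCHOOL_LIMIT if in_back_to_school else GIFT_CARD_VELOCITY_LIMIT

def flag_gift_card_velocity(purchases_last_24h: int, d: date) -> bool:
    return purchases_last_24h > velocity_limit(d)

# Four gift card purchases in a day: flagged in May, but within the elevated
# back-to-school threshold in early August -- exactly the gap fraud exploits.
print(flag_gift_card_velocity(4, date(2025, 5, 10)))  # True
print(flag_gift_card_velocity(4, date(2025, 8, 5)))   # False
```

This is why velocity rules alone are weak in this window: the seasonal threshold has to rise to accommodate legitimate volume, and fraud can deliberately operate in the gap between the off-season and in-season limits.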
Post-holiday returns fraud: January concentration
January shows a concentration of a different fraud type: returns fraud. The period immediately after holiday return windows open sees elevated rates of policy abuse and organized retail fraud — exploiting return policies using purchases made in November-December with compromised cards, or using sophisticated receipt manipulation and empty-box return schemes.
This manifests in financial institution data as chargeback disputes with specific characteristics: disputes filed shortly after return windows open, concentrated in electronics and luxury goods categories, with unusually high dispute win rates by merchants. For institutions providing dispute management services, January dispute volume requires additional capacity that static annual staffing models don't account for.
Practical implications for detection calibration
The actionable implication of seasonal fraud patterns is straightforward: fraud detection models should be recalibrated on a quarterly cycle at minimum, not annually. A model calibrated in January is increasingly miscalibrated by June. A model calibrated in September is well-positioned for Q4 but increasingly stale by the following spring.
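The quarterly cadence can be enforced with something as simple as a staleness check in the model-ops pipeline. The sketch below assumes a hypothetical metadata field recording the last calibration date; the 90-day interval reflects the quarterly minimum recommended above.

```python
from datetime import date

# Staleness check for a quarterly recalibration cycle. The 90-day interval
# corresponds to the quarterly minimum; the field name is hypothetical.
RECALIBRATION_INTERVAL_DAYS = 90

def model_is_stale(last_calibrated: date, today: date) -> bool:
    """True if the model has gone longer than one quarter without recalibration."""
    return (today - last_calibrated).days > RECALIBRATION_INTERVAL_DAYS

# A model calibrated in mid-January is overdue well before June.
print(model_is_stale(date(2025, 1, 15), date(2025, 6, 1)))  # True
print(model_is_stale(date(2025, 1, 15), date(2025, 3, 1)))  # False
```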
For institutions using third-party detection platforms, the relevant questions to ask your vendor are: how frequently are models retrained, and on what data window? Are seasonal patterns explicitly modeled as features, or averaged out in the training data? What's the process for emergency recalibration if a new seasonal pattern emerges that wasn't present in historical training data?
The 10.2 million transactions we analyzed are a snapshot of patterns that will evolve. What they establish clearly is that "average" fraud pattern calibration is below-average performance during the periods that matter most. Seasonal awareness isn't a nice-to-have in fraud operations — it's a baseline competency that the data says most programs need to take more seriously.