A lot of link-review systems still do the same lazy thing. They score the submitted URL and call it done.

Attackers count on that.

If the visible link looks harmless, they can hide the real destination one or two hops later. Sometimes more. That is why single-hop review misses so much obvious badness.

Attackers split the flow on purpose

Redirect chains give attackers room to separate the job:

  • one domain for delivery
  • one domain for tracking
  • one domain for filtering
  • one domain for the actual phishing page or payload

That setup is useful for a simple reason. If one hop gets blocked, the whole campaign does not always die with it. The final destination can change while the lure stays the same.

What you miss when you stop at hop one

If you only score the first URL, you can miss:

  • a destination hiding behind a common shortener
  • tracking or filtering infrastructure in the middle
  • a final host that changes by region or user agent
  • client-side redirects triggered by the page itself
  • redirects that fire only after a delay on the page

So when someone says, "the submitted URL looked clean," that is not much of a conclusion anymore.
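To make the multi-hop point concrete, here is a minimal sketch of following a chain hop by hop instead of trusting the submitted URL. The fetch function is injected so the chain logic is testable without network access; in production it would issue a real HTTP request with automatic redirects disabled. All names and the hop limit here are illustrative assumptions, not LinkShield's actual API.

```python
# Follow a redirect chain one hop at a time, recording every hop.
# fetch(url) must return (status_code, location_header_or_None).
from urllib.parse import urljoin

MAX_HOPS = 10  # refuse to follow unbounded chains

def follow_chain(url, fetch):
    """Return every hop as (url, status, location) tuples."""
    hops = []
    seen = {url}
    current = url
    for _ in range(MAX_HOPS):
        status, location = fetch(current)
        hops.append((current, status, location))
        if status not in (301, 302, 303, 307, 308) or not location:
            break  # no more header-level redirects at this hop
        nxt = urljoin(current, location)  # Location may be relative
        if nxt in seen:
            break  # redirect loop
        seen.add(nxt)
        current = nxt
    return hops

# Fake fetcher standing in for real HTTP: shortener -> tracker -> payload.
fake = {
    "https://short.example/x": (301, "https://track.example/r?c=9"),
    "https://track.example/r?c=9": (302, "/go"),
    "https://track.example/go": (302, "https://payload.example/login"),
    "https://payload.example/login": (200, None),
}
chain = follow_chain("https://short.example/x", lambda u: fake[u])
# The submitted shortener URL and the real destination are three hops apart.
```

Note that stopping after the first lookup would have scored only `short.example` and never seen `payload.example` at all.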

The chain is evidence

Each hop tells you something useful:

  • how old the domain is
  • whether the hosting overlaps with known junk
  • whether the same parameters show up across campaigns
  • whether the certificate or ASN lines up with other bad infrastructure
  • whether the link behaves differently in different environments

If you only save the final page, you lose the part that explains how the user got there in the first place.
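Those per-hop signals can be turned into something an analyst can read at a glance. The sketch below flags a single hop against the checks listed above; the field names, thresholds, and inputs are assumptions for illustration, not a production scoring model.

```python
# Turn per-hop context into human-readable review flags.
# Thresholds (30 days) and field names are illustrative assumptions.
def hop_signals(hop, known_bad_asns, campaign_params):
    """Return a list of flags for one hop.

    `hop` is a dict with keys: domain_age_days, asn, query_params.
    """
    flags = []
    if hop["domain_age_days"] < 30:
        flags.append("domain registered < 30 days ago")
    if hop["asn"] in known_bad_asns:
        flags.append(f"ASN {hop['asn']} overlaps known bad infrastructure")
    reused = set(hop["query_params"]) & campaign_params
    if reused:
        flags.append(f"query params seen in other campaigns: {sorted(reused)}")
    return flags

flags = hop_signals(
    {"domain_age_days": 4, "asn": 64500, "query_params": {"cid", "u"}},
    known_bad_asns={64500},
    campaign_params={"cid"},
)
```

Each flag is tied to one hop, so the evidence survives even when the final page changes later.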

Good review means logging every hop

For each step in the chain, you want:

  1. the exact URL before and after normalization
  2. the response code
  3. the Location header or client-side redirect behavior
  4. domain, ASN, and certificate context
  5. timing, including any deliberate delay before the redirect fired
  6. whether the result changed by geography, user agent, or session

Now you have something an analyst can reason about instead of a vague "it redirected somewhere."
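One way to make "log every hop" concrete is a record per hop carrying the six fields above. The field names in this sketch are an assumption for illustration, not a fixed schema.

```python
# A per-hop evidence record covering the six fields listed above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HopRecord:
    raw_url: str                   # exact URL as encountered
    normalized_url: str            # after normalization
    status_code: Optional[int]     # None for client-side redirects
    redirect_via: Optional[str]    # Location header, "meta-refresh", "js", ...
    domain: str
    asn: Optional[int]
    cert_issuer: Optional[str]     # certificate context
    elapsed_ms: float              # timing
    variants: dict = field(default_factory=dict)  # geo/UA/session differences

hop = HopRecord(
    raw_url="https://Short.Example/x",
    normalized_url="https://short.example/x",
    status_code=301,
    redirect_via="https://track.example/r",
    domain="short.example",
    asn=64500,
    cert_issuer="Example CA",
    elapsed_ms=84.0,
)
```

A chain is then just an ordered list of these records, which is what makes later replay and grouping possible.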

Headers are not enough

Attackers do not stick to clean 301 and 302 flows. They use:

  • meta refresh tags
  • JavaScript window.location
  • timer-based redirects
  • hidden form submits
  • challenge pages that forward only after validation

That means serious link review has to look at page behavior, not just headers.
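As a first pass, two of the patterns above can be caught with simple scans of the page body. This is a hedged heuristic sketch only: real pages need a proper HTML and JavaScript parser, and obfuscated redirects will slip past regexes like these.

```python
# First-pass detection of client-side redirects that never appear
# in a Location header: meta refresh and window.location assignment.
# A heuristic sketch, not a substitute for real page analysis.
import re

META_REFRESH = re.compile(
    r"""<meta[^>]+http-equiv=["']?refresh["']?[^>]*url=([^"'>\s]+)""",
    re.IGNORECASE,
)
JS_LOCATION = re.compile(
    r"""window\.location(?:\.href)?\s*=\s*["']([^"']+)["']""",
    re.IGNORECASE,
)

def client_side_redirects(html):
    """Return candidate redirect targets found in page content."""
    return META_REFRESH.findall(html) + JS_LOCATION.findall(html)

page = '''
<meta http-equiv="refresh" content="3; url=https://payload.example/a">
<script>setTimeout(function(){ window.location = "https://payload.example/b"; }, 2000);</script>
'''
found = client_side_redirects(page)
```

Note the second example: the redirect sits inside a timer, so it only fires after a delay, which is exactly the behavior a headers-only review never sees.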

Churn is a clue too

One signal people still underuse is redirect instability.

If the same submitted URL lands on different final hosts over short stretches, that usually means somebody is actively working the infrastructure. Maybe they are rotating mirrors. Maybe they are swapping destinations by region. Maybe they are dodging takedowns. Whatever the reason, it is not normal.
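Instability like that is easy to track mechanically: remember where each submitted URL resolved recently and flag it when the set of final hosts grows. The window and threshold below are arbitrary assumptions for illustration.

```python
# Flag redirect churn: one submitted URL resolving to too many
# different final hosts within a short window. Window and threshold
# are illustrative assumptions.
from collections import defaultdict

class ChurnTracker:
    def __init__(self, window_s=3600, max_hosts=2):
        self.window_s = window_s
        self.max_hosts = max_hosts
        self.sightings = defaultdict(list)  # url -> [(ts, final_host)]

    def record(self, url, final_host, ts):
        """Record one resolution; return True if churn exceeds the threshold."""
        cutoff = ts - self.window_s
        recent = [(t, h) for t, h in self.sightings[url] if t >= cutoff]
        recent.append((ts, final_host))
        self.sightings[url] = recent
        return len({h for _, h in recent}) > self.max_hosts

tracker = ChurnTracker()
tracker.record("https://short.example/x", "a.example", ts=0)
tracker.record("https://short.example/x", "b.example", ts=600)
flagged = tracker.record("https://short.example/x", "c.example", ts=1200)
# Three distinct final hosts inside an hour trips the flag.
```

Legitimate links occasionally move too, which is why this is a signal to surface for review, not an automatic verdict.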

Why this matters in practice

Following the chain helps in boring, practical ways:

  • moderators can see where the link actually went
  • abuse teams can group related URLs together
  • analysts can explain why a block happened
  • engineering teams can replay the same chain later

That is a lot better than arguing over a clean-looking first hop.

One simple rule

If you only inspect the first URL, you are reviewing the bait, not the attack.

LinkShield follows the full chain, records what happened at each step, and shows the real destination before you make a call. Get started with LinkShield if you want that evidence instead of guesswork.