
LinkedIn's 360Brew Broke Your Outreach: Fix It Before Q2

Luke Henrik·Apr 27, 2026·8 min read

If your team's reply rates fell off a cliff between February and mid-March, you're not running a bad list. You're running into 360Brew.

LinkedIn quietly retired its old patchwork of ranking models — separate systems for feed, search, notifications, and messaging — and replaced them with a single 150-billion-parameter foundation model. The shift was technical plumbing on the surface. Underneath, it changed how every connection request, InMail, and cold message gets scored before it reaches a recipient's inbox.

Most of the analysis floating around right now treats 360Brew as a creator-economy story: who's losing feed reach, what content formats win. Almost nobody is connecting it to outbound. That's the gap. 360Brew now reads your profile as a credibility signal and grades your outreach against it for coherence, relevance, and authenticity. Pattern-matched sequences that worked in Q4 2025 are getting deprioritized — not banned, just buried.

What 360Brew Actually Changed for Outreach

The old LinkedIn ranking stack used dozens of narrow models, each trained on identifiers — your member ID, the recipient's ID, message length, recency. It was effectively a lookup engine. 360Brew is a reasoning engine. It reads your profile, the recipient's profile, your message text, and your relationship history as semantic inputs and produces a single relevance score.

Petya Savova's analysis of the published 360Brew paper makes the implication clear: the system isn't matching IDs anymore, it's evaluating whether your outreach makes sense given who you claim to be. A sales coach pitching CFO software triggers a coherence drop. A headline that says "Helping B2B SaaS scale" attached to a message about recruiting healthcare workers triggers another.

This is why classic automation playbooks broke. The old detection layer looked for behavioral patterns — too many requests per hour, identical message hashes, browser fingerprints. 360Brew looks at meaning. You can be fully under the ~100 weekly connection request cap, fully compliant on cadence, and still get throttled because your message-to-profile coherence score is low.
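
360Brew's internals aren't public, but you can approximate the coherence check before you hit send with off-the-shelf sentence embeddings. A minimal sketch in Python, assuming the open-source sentence-transformers library; the model choice, the 0.35 floor, and the profile text are our illustrative inventions, not LinkedIn values:

# Rough proxy for a message-to-profile coherence check. The embedding
# model and the 0.35 floor are illustrative assumptions; calibrate the
# floor against your own reply data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def coherence_score(sender_profile: str, message: str) -> float:
    """Cosine similarity between who the sender claims to be and what they sent."""
    profile_emb = model.encode(sender_profile, convert_to_tensor=True)
    message_emb = model.encode(message, convert_to_tensor=True)
    return util.cos_sim(profile_emb, message_emb).item()

# Invented example profile and message for illustration.
profile = ("Founder at RevFlow. Writes about RevOps, attribution, "
           "and pipeline reporting for Series B SaaS teams.")
message = ("Saw your post about consolidating your attribution stack. "
           "We helped a Series B team rebuild reporting in 11 days.")

print(f"coherence: {coherence_score(profile, message):.2f}")
# Below ~0.35, treat the sequence as incoherent: fix the profile or the
# pitch before sending.

The absolute number means little on its own. What matters is the gap between your coherent and incoherent sequences once you score a few dozen of each.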

The Five Patterns Getting Deprioritized Right Now

We pulled aggregate data from LinkedCamp accounts that saw reply-rate drops of more than 35% between January and March 2026. Five patterns kept surfacing.

1. Profile-message mismatch. Senders whose headline, About section, and recent activity don't align with what they're pitching. 360Brew flags this as low authority. If your profile says "Founder" but your messages all sound like a cold SDR script, the system reads incoherence.

2. Templated first sentences. The 360Brew paper highlights a "Lost in Distance" effect: the model weights the opening tokens of a message disproportionately. Openers like "I came across your profile and was impressed" or "Saw you're the [Title] at [Company]" hash to near-identical embeddings across millions of messages. They get scored as low-novelty before the recipient ever sees them (the sketch after this list shows how to test your own templates). We covered the A/B data on this in 360Brew Is Deprioritizing AI Openers.

3. Generic personalization tokens. {firstName}, {company}, and {title} merge fields are visible to the model as syntactic shells. When the surrounding sentence reads identically across 10,000 messages, swapping the token doesn't fool anything. 360Brew rewards specificity in the non-token portion of the sentence.

4. Engagement-pod warmups. Accounts that ran reciprocal-like pods to lift their Social Selling Index (SSI) are now showing depressed acceptance rates. The model appears to penalize unnatural engagement graphs: sudden reciprocal liking patterns between accounts with no semantic content overlap.

5. Sequence-style follow-ups with no context shift. Three-touch and five-touch sequences where each follow-up references the prior message ("just bumping this up," "following up on my note last week") get compressed into a single low-value thread. The model treats it as one weak signal, not five touches.
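
Pattern 2 is easy to test against your own template library. The script below reuses the same embedding approach as the coherence sketch above and flags opener pairs that collapse to near-identical vectors; the 0.85 cutoff is our assumption, so tune it against openers you already know perform differently:

# Flags opener templates that embed to near-duplicates (pattern 2).
# The 0.85 cutoff is an illustrative assumption, not a published value.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented example openers for illustration.
openers = [
    "I came across your profile and was impressed with your background.",
    "Saw you're the Head of RevOps and was impressed with your background.",
    "Saw your post last week about consolidating your attribution stack.",
]

embs = model.encode(openers, convert_to_tensor=True)
sims = util.cos_sim(embs, embs)  # pairwise cosine similarity matrix

for i, j in combinations(range(len(openers)), 2):
    sim = sims[i][j].item()
    if sim > 0.85:
        print(f"near-duplicate ({sim:.2f}):")
        print(f"  {openers[i]}")
        print(f"  {openers[j]}")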

How 360Brew Reads a Connection Request

Walk through what happens when you send a request with a note. The model ingests four inputs simultaneously: your full profile (headline, About, recent posts, work history), the recipient's profile, the note text, and any prior interaction graph between you. It outputs a relevance probability that determines whether the request lands as a normal notification, gets buried in "My Network," or — at the low end — gets auto-suppressed entirely.

Three signals carry the most weight in our testing.

First, profile authority for the topic you're messaging about. If you message a VP of Engineering about DevOps tooling and your last six LinkedIn posts are about DevOps tooling, the score climbs. If your activity feed is empty or off-topic, it falls.

Second, message specificity in the first 12-15 tokens. The opener has to read as if it was written for this person: not just their name, but a referenced detail the model can verify against the recipient's profile.

Third, interaction graph realism. Have you engaged with the recipient's content before? Do you share group memberships or mutual connections that show real overlap, not bought ones? The old system counted mutuals. The new system reads whether the mutuals make sense.
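
In practice we turn those three signals into a pre-send gate. To be clear, this is an internal QA heuristic, not the model itself: the weights reflect what moved reply rates in our testing, and the 0.6 threshold is an assumption to calibrate per account:

# Pre-send QA gate mirroring the three signals above. The weights and the
# 0.6 pass threshold are our assumptions from internal testing, not values
# from the 360Brew paper.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    topic_authority: float     # 0-1: recent posts cover the pitch topic
    opener_specificity: float  # 0-1: first sentence cites a verifiable recipient detail
    graph_realism: float       # 0-1: prior engagement or genuine mutual overlap

def presend_score(s: RequestSignals) -> float:
    # Weighted blend; topic authority and opener specificity dominate.
    return 0.4 * s.topic_authority + 0.4 * s.opener_specificity + 0.2 * s.graph_realism

req = RequestSignals(topic_authority=0.8, opener_specificity=0.9, graph_realism=0.3)
score = presend_score(req)
print(f"pre-send score: {score:.2f}")
if score < 0.6:
    print("Hold the send: strengthen the weakest signal first.")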

A Before/After Sequence Rewrite

Here's a real LinkedCamp sequence we rewrote for an agency client targeting Heads of RevOps at Series B SaaS companies. Reply rate before the rewrite: 4.2%. After: 14.7%.

Before (Touch 1, connection request note):

Hi {firstName}, I came across your profile and was impressed with your background at {company}. I'd love to connect and share some ideas on how we're helping RevOps leaders scale their pipeline. Open to a quick chat?

After (Touch 1):

{firstName} — saw your post last week about consolidating your attribution stack. We just helped a Series B team kill three tools and rebuild reporting in HubSpot in 11 days. Worth comparing notes?

What changed: the opener references a verifiable post. The middle removes generic claims and substitutes a specific outcome with a number. The close asks for a peer conversation, not a sales call. 360Brew can verify the post reference against the recipient's activity history, which lifts the coherence score immediately.

We applied the same logic to touches 2 and 3. The follow-ups stopped saying "just bumping this" and instead introduced a new piece of context each time — a relevant case study, a question about a specific tool in their stack, a reference to a hire they'd just announced. Each touch became a standalone signal rather than a thread of weak ones.

The 7-Day Pre-Q2 Audit Checklist

If you run an SDR team or an agency, this is the audit we recommend completing before April 1.

Day 1 — Profile coherence pass. Pull every sender's profile. Confirm the headline, About, and last 10 posts align with what they're pitching. If a sender pitches RevOps but posts about marathon running, their coherence score is dragging the whole sequence down.

Day 2 — Opener audit. Export the first sentence of every active message template. Run a quick similarity check; the embedding script from pattern 2 works, or even pasting them side by side. Anything that hashes to a near-duplicate across more than two sequences gets rewritten. Pull from our 2026 opener templates for replacement patterns.

Day 3 — Personalization token review. For every merge field, ask: does the sentence still make sense if I delete the token? If yes, the token is decoration, not personalization. Rewrite the sentence to require the specific detail. (A helper script for this test follows the checklist.)

Day 4 — Sequence cadence rebuild. Restructure follow-ups so each touch introduces new context rather than referencing the prior message. Three strong standalone touches beat five linked weak ones.

Day 5 — Engagement graph cleanup. If you ran pods or warmup tools, pause them. Have senders engage organically with 5-10 ICP accounts per day for two weeks before resuming outreach.

Day 6 — Volume recalibration. Drop weekly send volume by 30% temporarily. Acceptance rates matter more than send rates under 360Brew: a 25% acceptance rate on 60 requests books the same 15 accepted invites as 15% on 100, with 40 fewer ignored requests dragging down your sender score.

Day 7 — Measurement reset. Establish new baselines. Old benchmarks from 2025 — RAIN Group's research showing buyers respond to relevance, the Bridge Group's SDR benchmark of roughly 23% connection acceptance — are still directionally useful, but your absolute numbers have shifted. Track week-over-week, not against last quarter.
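
A small helper makes the Day 3 test fast to run across a whole template library. It strips merge fields and prints the residue; the templates here are invented, and the {token} regex assumes curly-brace merge fields like the examples in this post, so adjust it for your tool:

# Day 3 helper: strip merge fields, inspect what's left.
import re

TOKEN = re.compile(r"\{[A-Za-z_]+\}")  # matches {firstName}, {company}, etc.

# Invented example templates for illustration.
templates = [
    "Hi {firstName}, I came across your profile and was impressed "
    "with your background at {company}.",
    "{firstName}, your post on killing three attribution tools mirrors "
    "what we're seeing at {company}'s stage.",
]

for t in templates:
    # Remove tokens, collapse leftover whitespace, trim stray commas.
    residue = re.sub(r"\s{2,}", " ", TOKEN.sub("", t)).strip(" ,")
    print(f"template: {t}")
    print(f"residue:  {residue}\n")
    # If the residue still reads as a complete pitch, the tokens are
    # decoration: rewrite the sentence so it needs the specific detail.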


What This Means for Automation Tools

A fair question: is automation dead? No, but the value proposition shifted. The old job of automation was scale — sending more requests, faster, without getting flagged. The new job is precision — sending fewer, better-targeted, profile-coherent messages that pass 360Brew's relevance filter.

LinkedCamp accounts that adapted in February are seeing reply rates recover toward Q4 2025 levels. The ones running 2024 playbooks are still bleeding. The dividing line is whether the operator treats automation as a volume tool or a sequencing tool.

If you're using AI to draft messages, the March 2026 authenticity update added a second layer of scrutiny on top of 360Brew's semantic checks. Human review of AI drafts isn't optional anymore — it's a survival mechanism.

What to Watch Heading Into Q2

Three shifts to track over the next 60 days.

The first is whether LinkedIn extends 360Brew's scoring to InMail in the same way it now scores connection requests. Sales Navigator users have largely been insulated so far, but the architecture suggests parity is coming.

The second is the Social Selling Index. SSI was a leading indicator under the old system. Under 360Brew, it's a lagging one — your score adjusts after the model has already changed how it treats your outreach. Don't optimize for SSI; optimize for the underlying signals SSI eventually reflects.

The third is whether industry benchmarks recalibrate. The LinkedIn State of Sales report and HubSpot's outbound research will start publishing 2026 numbers over the summer. Expect average reply rates across the platform to settle at a new, lower baseline, and the gap between top-quartile and median performers to widen significantly. The teams who did the audit work in Q1 will sit in that top quartile.

TL;DR
  • LinkedIn replaced its old ranking stack with 360Brew, a single foundation model that scores outreach for semantic coherence between sender profile, recipient profile, and message text.
  • Five patterns are getting deprioritized: profile-message mismatch, templated openers, generic personalization tokens, engagement-pod warmups, and sequence-style follow-ups with no context shift.
  • Rewriting openers to reference verifiable recipient details and restructuring follow-ups as standalone touches recovered reply rates from 4.2% to 14.7% in our test sequence.
  • The 7-day pre-Q2 audit covers profile coherence, opener uniqueness, personalization depth, sequence cadence, engagement graph health, volume recalibration, and benchmark reset.
  • Automation isn't dead — but its job shifted from volume to precision. Top-quartile teams will pull away from median performers through Q2 and beyond.

Ready to try LinkedCamp?

14-day free trial, dedicated IP, AI agents — start outbound in under an hour.