Operator Manual · Read once, reference always
Bitcoin Storm · Sub-Affiliate DD · Manual v1.0

How to use the
DD form, well.

A walkthrough for the Lead Operator
What this manual is, and why it exists

The Sub-Affiliate Due Diligence form (DD) was designed to do one job: let one Lead Operator triage two hundred applicants down to a credible shortlist of fifty in roughly twenty-five hours of audit time, with confidence in every decline and every promotion.

The DD form is the tool. This manual is the user guide. It explains, section by section and item by item, why each part of the DD looks the way it does, what each scoring item is actually testing, and how to apply the audit consistently across applicants you've never met. Read it once before your first batch. Reference it whenever an applicant doesn't fit the standard pattern.

Nothing in this manual replaces operator judgement. The DD is a tool that enables good judgement by removing the cognitive load of remembering what to check for and where to be sceptical. You decide. The DD makes deciding faster.

Contents
  1. The mental model behind the DD
  2. The end-to-end workflow — inbound application to verdict
  3. Part A walkthrough — what every applicant section is probing for
  4. Audit Block A — Identity legitimacy
  5. Audit Block B — Audience quality (the heart of the audit)
  6. Audit Block C — Compliance posture
  7. Audit Block D — Capacity and fit
  8. The verdict mechanic — when to push back on a Red
  9. Applicant patterns you'll see at scale
  10. Operator failure modes to avoid
  11. When to delegate audit work to a deputy
Section 01

The mental model
behind the DD.

Before you touch the form, internalise three principles. The whole DD is built on these. If you understand them, you'll handle edge cases the form doesn't anticipate. If you don't, you'll second-guess the form when it's right.

Principle 01

The DD is a self-filtering document. The questions are designed so applicants who shouldn't be sub-affiliates either won't fill it in honestly, won't fill it in completely, or will fill it in with answers that flag themselves. Roughly one in four applicants will eliminate themselves before you score a single item. Trust that part of the design and don't chase incomplete forms. An applicant who can't fill in a DD form for a paid affiliate role won't fill in monthly attribution reports for an eighteen-month engagement either.

Principle 02

You're filtering for honesty more than impressiveness. Audience quality matters, but so does honest disclosure. An applicant with a 4K real engaged audience who tells you exactly that is more valuable than one with a 40K bot-inflated audience who describes themselves as a "thought leader." The DD weights honesty by making honest disclosure cheap (one paragraph) and dishonesty expensive (immediate termination if discovered).

Principle 03

Speed is a feature. The audit is designed to take 10–15 minutes per applicant, not 60–90 minutes. That speed is intentional. Accept that the DD will sometimes get edge cases wrong rather than slow down to chase certainty on every applicant. The 10K-per-sub-affiliate cap and the hard performance gates protect you from any individual mistake. A bad applicant who slips through an audit will fail the Month 2 gate and exit. A good applicant whom you reject incorrectly is one of fifty — you have replacements. The cost of an individual audit error is much lower than the cost of a slow audit.

"The DD is not perfect. It is fast and good enough. The cap and the gates do the rest."
Section 02

The end-to-end workflow.

From the moment an applicant first contacts you to the moment you sign or decline. Eight steps.

1
Inbound contact — the soft filter. Applicants reach you via outreach you initiated (Twitter DM, LinkedIn, podcast pitch reply) or referral. Reply with a short message: "Thanks — please complete this DD form and return as a single PDF or document. I review applications in batches every two weeks." This message itself filters out the unserious. The two-week batch cycle prevents you from being whittled down by individual chasers.
2
Receive the completed DD. Save to a single folder, named: YYYY-MM-DD_LastName_FirstName_DD.pdf. Resist the urge to read it carefully on receipt. Batch processing is faster than rolling assessment.
3
Five-minute scan-pass. Open each form and check only: did they complete the form? Did they answer Section 06 hard-stops? Is Section 09 ("Why You") a real paragraph or filler? About 20–25% of applications fail this scan and go straight to a "Decline — incomplete" file. Do not score these. Do not respond beyond a one-line "Thanks for applying — not progressing on this round."
4
Full audit. Open each surviving form alongside Part B (the audit rubric). Work through Blocks A, B, C, D in order. Each item is a scored G/A/R box. 10–15 minutes per applicant, no exceptions. If an item is taking longer than two minutes, mark it Amber and move on — the verdict mechanic will catch genuinely problematic applicants at the tally stage.
5
Tally and verdict. Count Greens, Ambers, Reds across all four blocks. Apply the verdict rule: 0–1 Reds → Proceed; 2 Reds → Clarify; 3+ Reds → Decline. Sanctions hit (item A.04 Red) is automatic decline regardless. Mark the verdict box and write a one-paragraph note in the operator notes field.
6
Clarify-batch. Applicants flagged "Clarify" get one email asking the specific question: "Your audience analytics show X — the application says Y. Can you reconcile?" Give them seven days to respond. No response = decline by default. Honest response that resolves the flag = re-score and Proceed. Defensive or evasive response = decline.
7
Interview the Proceeds. 30-minute video call. The DD has done the audit; the interview is for what the form can't catch — voice, judgement, fit. Three questions: "Walk me through how you'd run a Bitcoin Storm campaign in your first 60 days." "What would make you exit early?" "What questions do you have for me?" The third is the most revealing.
8
Contract or hold. Successful interviews receive the sub-affiliate agreement and the Sub-Affiliate House Rules for signature. Unsuccessful interviews receive a one-line email: "Thanks for the time — not progressing. Best of luck." File DD forms (decline and proceed both) for twelve months per the document retention policy.
Section 03

Part A walkthrough —
what every section is probing for.

Each section of the applicant-facing form has a deliberate structural purpose. Knowing what each is testing helps you read the answers correctly.

01 Identity

What it's probing for: a real person you can find online, contract with, and pay.

Why structured this way

We ask for legal name and public handle separately because they often differ. We ask for tax residency separately from country of residence because some applicants live abroad but pay tax somewhere else — relevant for both AML and for understanding which jurisdictional regulators apply to them. The trading-entity field is optional because not all applicants operate via a company — sole traders are perfectly acceptable.

What to look for

Names that match the public identity of the channels they list. Phone numbers in the country they claim to live in. Email addresses on a domain that aligns with the channel (a podcast called "Wealth Frequency" with an email at wealthfrequency.com is normal; one at @gmail.com with a Russian-domain handle, less so).

02 Audience & channels

What it's probing for: verifiable audience size and a paper trail of recent posts so you can scrape engagement signals manually.

Why structured this way

The five-most-recent-post URL field is the most important on the entire form. It gives you the data to manually run audience-quality checks (Block B in the audit) without having to find their channel yourself. The platform analytics screenshot requirement filters out applicants whose claimed numbers can't be substantiated — if they refuse to share or send a low-quality screenshot, that is itself the signal.

What to look for

A primary channel that's at least 12 months old. Engagement that scales with follower count (not flat across follower-count ranges). Five recent post URLs that actually work and load. Analytics screenshots that match the numbers in the form (not low-quality screenshots of unknown provenance).

03 Audience composition

What it's probing for: whether the audience can actually be addressed by the protocol.

Why structured this way

The protocol cannot accept US persons. A 100K-follower channel that is 70% US-based has 30K addressable followers. We need to know that early so capacity calculations are honest. Language matters too — an English-speaking podcast targeting Spanish-only audiences has a translation problem you need to factor in.

What to look for

An audience composition that genuinely fits the protocol's eligibility map. UK / EU / non-restricted Asia-Pacific / Latin America / non-restricted Africa are all good. Estimated US share above 60% is a real problem — not necessarily disqualifying, but capacity must be discounted heavily.

04 Track record

What it's probing for: prior commercial behaviour as a paid promoter, and whether they have a history of being removed from things.

Why structured this way

Past sponsorship behaviour is the single best predictor of future sponsorship behaviour. We ask for outcomes ("with what outcome") because anyone can list past clients — not everyone can describe what they actually delivered. A clean record of three small completed promotions beats a list of twelve unfinished engagements. The "removed by counterparty" question filters for applicants who've burnt bridges — an honest "yes, here's why" can be fine; a "no" later contradicted by easy due-diligence is not.

What to look for

Detail. Specifics. Outcomes. Vague answers ("worked with several brands in the space") are amber. Refusals to answer ("confidential under NDA") are amber unless they offer to share with you privately. Outright lies caught by 30 seconds of Googling are red.

05 Commercial setup

What it's probing for: whether you can pay them and they can receive payment lawfully.

Why structured this way

Sole traders are fine; limited companies are fine; applicants who can't tell you what they are aren't fine. Tax registration matters because an applicant operating below the radar tax-wise is one regulatory enquiry away from being unable to issue you an invoice. Bank country and currency only at this stage — full account details belong on the contract, not the DD.

What to look for

Clear answers. "Sole trader, registered for self-assessment in the UK, no VAT, no insurance" is a complete answer for a smaller operator. Refusal to disclose tax registration status is a red flag in its own right, whatever the stated reason.

06 Compliance & conflicts — the hard-stops

What it's probing for: deal-killers and disclosed risk.

Why structured this way

Eight yes/no questions that take the applicant 90 seconds to answer and that, if answered honestly, cover roughly 95% of the regulatory and reputational risk surface. "Yes" with a paragraph of context is often acceptable. "No" later disproven by any check you run is automatic termination, even after contract. The form itself states this. Applicants who lie on these questions are filtering themselves out.

What to look for

Cleanly answered, honestly contextualised. A "yes, I was a director of a company that wound down voluntarily in 2019, here's the public record" is fine. Repeated "yes" answers without context aren't. Any "no" that you can disprove with a Google search in 60 seconds is a hard decline.

07 Capacity

What it's probing for: calibration. Specifically, whether they understand their own audience.

Why structured this way

The 6-month, 12-month, 18-month estimate format forces them to think about pacing. An applicant who claims 10K users in 6 months but 10K total at 18 months is implicitly claiming a year of zero growth, which is implausible pacing. An applicant who claims 200 users in 6 months and 10K at 18 months is implicitly claiming a 50× acceleration — possible but rare and worth asking about. The "evidence" field is what saves you from optimistic but unsupported claims.

What to look for

Self-aware, sober numbers. The best answers will say "I think 1,500-3,000 over 18 months is realistic for my audience" rather than confidently asserting 10K. Calibrated humility is more valuable than ambitious overclaim. The applicant who hits 7K users having promised 3K is a hero. The one who promised 10K and delivers 4K is a problem.
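
The calibration checks above are simple arithmetic, and a short sketch makes the two failure patterns concrete. The function and the flag wording are illustrative only, not part of the DD form; the operator still makes the call.

```python
def capacity_flags(m6, m12, m18):
    """Return calibration warnings for a 6/12/18-month cumulative user-estimate triple."""
    flags = []
    if not (m6 <= m12 <= m18):
        # Later totals are cumulative, so they must include earlier ones.
        flags.append("non-monotonic: later totals must include earlier ones")
    if m6 > 0 and m18 == m6:
        # Same total at 6 and 18 months implies a year of zero growth.
        flags.append("zero growth after month 6: implausible pacing")
    elif m6 > 0 and m18 / m6 >= 50:
        # A 50x jump between month 6 and month 18 needs evidence.
        flags.append("implied 50x+ acceleration: possible but rare, ask for evidence")
    return flags

# The two examples from the walkthrough:
capacity_flags(10_000, 10_000, 10_000)  # flags zero growth after month 6
capacity_flags(200, 2_000, 10_000)      # flags the implied 50x acceleration
```

A sober answer such as 500 / 1,500 / 3,000 produces no flags at all, which is exactly the profile the "What to look for" paragraph describes.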

08 Working style

What it's probing for: whether they're manageable across an 18-month engagement.

Why structured this way

Performance is one thing, manageability is another. A high-performing operator who never responds to messages is a worse hire than a moderate-performing operator you can actually communicate with. Timezone matters when you're coordinating across 50 sub-affiliates — an applicant who refuses any synchronous time creates ongoing friction.

What to look for

Honest, sustainable answers. "Same-day response during business hours, monthly call yes" is gold. "Available 24/7" is suspicious — nobody is. "Async only, no calls" can work but only with very experienced operators.

09 Why you

What it's probing for: voice. Specifically, whether the applicant has one.

Why structured this way

The single most predictive question on the form, despite being qualitative. An applicant who can write 200 honest, voice-driven words about why they want this role will write 200 hours of campaign content in their own voice for the same audience. An applicant who writes 200 words of LinkedIn-management-consultant filler will produce filler content. This is the section where AI-generated applications fail loudest.

What to look for

References to specific protocol mechanics (the 3× multiplier, the 275 BTC, the 5-year cycle, the 80/20 split). Personal voice ("the thing that struck me about", "I keep coming back to"). Specific commitments ("I'd push this to my newsletter readership and a couple of mid-tier UK podcasts I have relationships with"). Avoid: generic enthusiasm, "Bitcoin is the future", management-speak, AI-generic constructions.

10 Declarations & signature

What it's probing for: formal commitment to the truthfulness of everything above.

Why structured this way

Four explicit confirmations and a signature give the form legal weight. An applicant who has signed and submitted has affirmed the answers. If anything turns out to be false post-contract, the signature is the basis for termination without severance. This is not theoretical: in a 50-sub-affiliate programme over 18 months, two or three terminations are statistically expected, and the signed DD is what makes them clean.

What to look for

A real signature, a real date. Refusal to sign or "agree by typing your name" workarounds are red flags — an operator who won't sign isn't ready for the engagement.

Section 04

Audit Block A —
Identity legitimacy.

Block A is the prerequisite for everything else. If you can't establish that the applicant is who they say they are, audience quality scores don't matter. Four items, ~3 minutes total per applicant.

A.01 Name & channel cross-reference

What you're testing: does the legal name belong to the public identity of the channel?

Method · 30 seconds
Search "[Legal name] [channel handle]" on Google. Real operators show up: podcast directories, LinkedIn, conference speaker lists, news mentions. The connection should be findable in the first page of results.

Green: clear hits linking name to channel. Amber: adjacent hits but no direct link — ask for clarification. Red: nothing at all, or hits suggesting a different person owns the channel.

A.02 Account age vs. follower count

What you're testing: whether the channel grew naturally or was bought.

Method · 60 seconds
Use the "channel start date" field from Section 02. Calculate followers per month. Normal organic growth in the Bitcoin / finance space sits at 100–500 followers per month. 2,000+ per month sustained over many months requires evidence: a viral thread, a media appearance, a paid promotion, a referenced spike. No evidence = bought.

Green: consistent growth proportional to age. Amber: faster than typical but applicant offers a credible explanation. Red: implausible growth with no public catalyst.
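
The banding can be sketched as a few lines of Python, using the manual's 100–500 a month benchmark. The function and its cut-offs are our simplification of the rubric; the evidence judgement stays with the operator.

```python
def growth_score(followers, channel_age_months, has_catalyst=False):
    """Score A.02: followers-per-month against the 100-500 organic benchmark."""
    per_month = followers / channel_age_months
    if per_month <= 500:
        return "Green"   # consistent with organic growth in the niche
    if has_catalyst:
        return "Amber"   # faster than typical, but a credible explanation exists
    return "Red"         # implausible growth with no public catalyst

growth_score(6_000, 24)                       # 250/month -> "Green"
growth_score(48_000, 12)                      # 4,000/month, no catalyst -> "Red"
growth_score(48_000, 12, has_catalyst=True)   # same numbers, explained -> "Amber"
```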

A.03 Adverse media search

What you're testing: whether the applicant has a public history of fraud, scams, or investigation.

Method · 60 seconds
Run four Google queries: "[Legal name] fraud", "[Legal name] scam", "[Legal name] investigation", "[Legal name] lawsuit". Skim the first page of each. False positives (common name, unrelated person) are amber, not red — bring them up in clarification. Direct matches are red.

This step takes the same time on every applicant, so do it on every applicant, not just the suspicious ones. Catching one bad actor pays for hundreds of clean searches.

A.04 Sanctions list check

What you're testing: whether contracting with the applicant would breach UK / US / EU sanctions.

Method · 30 seconds
UK OFSI consolidated list, US OFAC SDN list, and EU consolidated list are all free public databases with search functions. Type the applicant's name. Any match — even partial — is red until cleared. This is a Green-or-Red item; there is no Amber.

A sanctions match is automatic decline regardless of any other audit score. Counsel must be informed. The DD form must be filed. The applicant gets a one-line response: "Thanks for applying; we are unable to progress."

Section 05

Audit Block B —
audience quality.

Block B is the core of the audit. It's where most of the genuine filtering happens. Six items, ~6–8 minutes per applicant. Pay attention here even when you're tired.

Why audience quality is the heaviest lift

Identity (Block A) is binary; either they're real or they're not. Compliance (Block C) is mostly captured by the form's hard-stop questions. Capacity (Block D) is calibration. Audience quality is where the actual value of the sub-affiliate is determined. A 40K-follower account with 5% real engagement (2K real engaged people) is more valuable than a 200K-follower account with 0.3% engagement (600 real engaged people). The DD's audience block is what surfaces that distinction.

B.01 Engagement-to-follower ratio

What you're testing: whether the audience actually engages with the content.

Method · 90 seconds
Open the five recent post URLs the applicant supplied. For each: total reactions (likes + reposts) divided by follower count, expressed as a percentage. Average across the five.

Benchmarks: Real Bitcoin/finance accounts engage 1–5%. Mass-market accounts can be lower. Bot-inflated accounts run below 0.5%. Green >1%, Amber 0.5–1%, Red <0.5%. Don't be misled by single high-performing posts — the average is the signal.
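
The B.01 arithmetic, as a sketch. The function name is ours; the thresholds are the benchmarks above.

```python
def engagement_score(reactions_per_post, followers):
    """Score B.01: average reactions across the supplied posts as a % of followers."""
    avg_pct = 100 * sum(reactions_per_post) / len(reactions_per_post) / followers
    if avg_pct > 1.0:
        return avg_pct, "Green"
    if avg_pct >= 0.5:
        return avg_pct, "Amber"
    return avg_pct, "Red"

# A 40K account whose five recent posts drew these reaction counts:
engagement_score([700, 650, 820, 400, 980], 40_000)  # ~1.78% -> "Green"
# The bot-inflated profile the benchmarks describe:
engagement_score([90, 110, 70, 120, 80], 40_000)     # ~0.24% -> "Red"
```

Note that the average across all five posts is what gets banded; a single viral post in the list cannot rescue four dead ones.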

B.02 Follower username pattern check — the highest-value test

What you're testing: whether the followers themselves look like real people.

Method · 60 seconds
Click on the channel's followers tab. Scroll through the first three or four screens. Look at the names, profile photos, and bios.

Real audience pattern: realistic first-and-last names, real profile photos (varied, not stock), bios that reference jobs/interests/cities, accounts with their own content history.

Bot audience pattern (this is the test you want to memorise): usernames like Crypto_King_8472, MoonHodler_99012, Sarah__Investor_4421, default avatars or stock photos, identical bios across many accounts, accounts that follow 5,000 people but have 12 followers themselves, accounts created in the last six months.

This single check catches the highest proportion of inflated audiences in the shortest time. If this comes up Red, two of your other items are likely Red too — but you'd already have made up your mind.
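
For illustration only, the bot-username shape above can be caricatured as a regex: a word or two joined by underscores with a long trailing digit run. This is a crude proxy for the eyeball check, not a replacement; real follower audits also weigh photos, bios, and follower/following ratios.

```python
import re

# Crude heuristic for the bot pattern described above (illustrative only):
# letters, optional underscore-joined words, then an underscore and 3+ digits.
BOT_PATTERN = re.compile(r"^[A-Za-z]+(?:_+[A-Za-z]+)*_+\d{3,}$")

def looks_botlike(username):
    """True if the username matches the caricatured bot-farm naming pattern."""
    return bool(BOT_PATTERN.match(username))

looks_botlike("Crypto_King_8472")     # True
looks_botlike("Sarah__Investor_4421") # True
looks_botlike("sarahjones")           # False
```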

B.03 Comment-to-like ratio & comment quality

What you're testing: whether the engagement is from people thinking, or from people pressing buttons.

Method · 90 seconds
On the same 5 posts you used for B.01: count comments as a percentage of likes, and read 3–5 comments per post.

Real engagement: comments are 5–20% of likes. Comments are sentences. Comments are on-topic.

Bot engagement: likes vastly outnumber comments (1,000 likes / 4 comments). Comments are emoji strings, "Great post!", "🔥🔥🔥", or repetitive recycled phrases that appear on multiple posts.

One signal off (e.g., low ratio but quality comments) = Amber. Both signals off = Red.
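
The two-signal logic is mechanical enough to sketch. The function is illustrative; "do the comments look real" remains a human read of 3–5 comments per post.

```python
def comment_score(likes, comments, comments_look_real):
    """Score B.03: comment volume (5-20% of likes) plus comment quality."""
    ratio_ok = likes > 0 and 0.05 <= comments / likes <= 0.20
    signals_off = (not ratio_ok) + (not comments_look_real)  # 0, 1, or 2
    return ["Green", "Amber", "Red"][signals_off]

comment_score(1_000, 120, True)   # healthy ratio, real comments -> "Green"
comment_score(1_000, 4, True)     # low ratio, quality comments  -> "Amber"
comment_score(1_000, 4, False)    # both signals off             -> "Red"
```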

B.04 Engagement velocity over time

What you're testing: whether the engagement profile has changed unnaturally.

Method · 90 seconds
Click on posts from 6–12 months ago. Compare engagement to recent posts. Is there a smooth growth curve, or a step-change?

Real growth: engagement scales smoothly with follower-count growth over time. Bought engagement: a sudden 10× jump that doesn't correspond to any public event — no viral thread, no media break, no launch. Real spikes are explainable; bought spikes aren't.

B.05 Audience-niche match

What you're testing: whether the audience would actually be interested in a Bitcoin protocol.

Method · 90 seconds
Sample 30–50 followers (from the followers tab). What does the audience look like as a whole?

A "Bitcoin podcaster" whose followers are mostly K-pop accounts, OnlyFans models, MLM enthusiasts, or generic lifestyle accounts has either bought followers or has an audience that won't convert. Neither outcome is useful to the protocol. Real Bitcoin-niche audiences cluster — the followers will themselves be commentators on Bitcoin, business owners, finance professionals, software developers, with overlap into the broader sound-money discourse.

B.06 Free third-party audit cross-check (optional)

What you're testing: what an independent tool says about the same channel.

Method · 2 minutes (optional)
SparkToro for X (free tier), Modash for Instagram (free tier), Social Blade for YouTube. Type the handle, read the audience-quality score.

Optional because you don't need it if Blocks B.01–B.05 already give you a clear picture. Useful as a tiebreaker when the audit is mixed. Free third-party tools aren't perfect but they catch obvious manipulation patterns reliably.

Section 06

Audit Block C —
compliance posture.

Three items. ~3 minutes total. The applicant has already done most of the work for you in Section 06 of Part A; your job is to spot inconsistencies and missing disclosures.

C.01 Hard-stop questions consistency

What you're testing: whether the applicant's stated answers in Section 06 match what your other checks (A.03 adverse media, A.04 sanctions, C.02 corporate registry) revealed.

Method · 60 seconds
Cross-reference the applicant's eight yes/no answers in Section 06 against findings from A.03 and A.04. Anything inconsistent?

Green: all answers consistent with your findings; any disclosed yeses come with sensible context. Amber: a "yes" was disclosed but the context is thin — ask for more. Red: a "no" answer is contradicted by what you found. Inconsistency between disclosure and findings is the highest-weight Red on the entire DD. A bot-inflated audience is recoverable through the cap; a dishonest applicant is not.

C.02 Corporate registry check

What you're testing: if the applicant operates via a registered company, whether that company has a clean record.

Method · 2 minutes
UK applicants: Companies House (free). US applicants: state Secretary of State business search; SEC EDGAR for registered entities. Most other jurisdictions have free national registries. Look at: directorships current vs. resigned, prior insolvencies, dissolved companies, any flagged conduct, charges and mortgages.

Skip this item if the applicant is a sole trader (no entity to check). Otherwise, run it. Most applicants will be clean. The ones who aren't will save you a costly mistake.

C.03 Conflict of interest scan

What you're testing: whether the applicant is currently promoting any product the protocol would consider a conflict.

Method · 90 seconds
Scroll through the applicant's last 90 days of posts on their primary channel. Look for: paid partnerships with other Bitcoin treasury protocols, prize-pool products, savings DApps, lottery products, exchange referrals, NFT promotions.

Cross-reference against Section 06 disclosure. A clean record + no disclosure = Green. A current promotion that they disclosed honestly + offered to exit = Amber, address in interview. A current undisclosed promotion = Red.

Section 07

Audit Block D —
capacity & fit.

Three items, ~2 minutes total. The applicant has self-assessed; your job is to sanity-check.

D.01 Stated capacity vs. visible audience

What you're testing: arithmetic. Does their 18-month delivery target square with their visible audience size?

Method · 60 seconds
Take their 18-month delivery target (Section 07). Divide by their addressable audience (Section 03 minus US share). The result is the implied conversion rate they're claiming.

Bitcoin-niche conversion benchmarks: 0.1–1% of an audience converts to a paid action over a 12–18 month campaign. So a 10K-follower account claiming 10K conversions = 100% conversion rate, implausible. Same account claiming 200–500 = 2–5% conversion, ambitious but possible. A 100K account claiming 10K = 10% conversion, very ambitious; needs supporting evidence.
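
The D.01 arithmetic in one function, reproducing the walkthrough's examples. The function name is ours; the benchmarks are as stated above.

```python
def implied_conversion(target_18m, followers, us_share):
    """D.01 arithmetic: claimed conversions over the non-US addressable audience, as a %."""
    addressable = followers * (1 - us_share)  # Section 03 figure minus US share
    return 100 * target_18m / addressable

# The walkthrough's examples, assuming a 0% US share for simplicity:
implied_conversion(10_000, 10_000, 0.0)   # 100.0% -> implausible
implied_conversion(350, 10_000, 0.0)      # 3.5%   -> ambitious but possible
implied_conversion(10_000, 100_000, 0.0)  # 10.0%  -> needs supporting evidence
```

A non-zero US share tightens the arithmetic further: the same 10K target on a 100K account that is 30% US implies a 14.3% conversion rate, not 10%.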

D.02 Geographic fit

What you're testing: US-share concentration risk.

Method · 30 seconds
Read Section 03's stated US-audience share.

Green: below 25% US. Amber: 25–60% US — capacity must be discounted accordingly; sub-affiliate is still viable. Red: above 60% US — the addressable audience is too small to justify a sub-affiliate slot regardless of audience quality.

D.03 "Why You" answer quality

What you're testing: the applicant's voice and protocol fluency. This is the most predictive single item on the entire DD.

Method · 60 seconds
Read Section 09 of Part A. Read it aloud, mentally or actually. Three questions: (1) Is it specific to the protocol? Does it reference real mechanics — the 3× multiplier, the founding cohort, the 5-year cycle, the 80/20 split? (2) Does it have voice — would you recognise it as written by a person, not by a brand or an AI? (3) Does it commit to anything specific — an audience, a tactic, a timeline?

Green: all three. Specific, voiced, committal. Amber: two of three — readable and personal but generic about the protocol, or vice versa. Red: AI-generic, no voice, no specifics. The AI-generic pattern is recognisable: balanced sentence rhythm, ChatGPT-cadence transitions ("Furthermore...", "Moreover..."), management-consultant vocabulary, no concrete protocol references.

Section 08

The verdict mechanic.

Once you've scored all sixteen items across the four blocks, the verdict tally is mechanical.

The simple rule

0–1 Reds total → Proceed. Send to interview. Interview validates whether the form's positive signal holds up in conversation.

2 Reds total → Clarify. Send a single email naming the two specific concerns. Seven days to respond. Honest response that resolves both = re-score and re-decide. No response or evasive response = decline.

3+ Reds total → Decline. One-line email: "Thanks for applying — not progressing on this round."

Sanctions match (item A.04 Red) → Automatic decline. Bypasses everything else. Counsel notified. DD filed.
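
The tally rule is mechanical enough to state as a few lines of Python. The function name is ours, not part of the DD; it encodes exactly the four branches above.

```python
def verdict(reds, sanctions_hit=False):
    """The Section 08 tally rule: sanctions first, then the total Red count."""
    if sanctions_hit:
        return "Decline"   # A.04 Red bypasses everything else
    if reds <= 1:
        return "Proceed"   # 0-1 Reds: send to interview
    if reds == 2:
        return "Clarify"   # one email, seven days, specific questions
    return "Decline"       # 3+ Reds

verdict(0)                      # "Proceed"
verdict(2)                      # "Clarify"
verdict(1, sanctions_hit=True)  # "Decline" regardless of the tally
```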

When to override the verdict

The mechanic is intentionally rigid. But there are three legitimate override scenarios:

1
Three Reds, all in the same block, all explainable. Example: an applicant whose three Reds are all in Block D (capacity / fit) because they're a young creator with a small but real audience — not lying, just not yet at the scale the rubric expects. You may override to Clarify if you see strategic value (e.g., breakout potential in a niche audience). Document the reasoning in operator notes.
2
Zero Reds but a strong gut "no". If something feels off but you can't isolate the signal — tone, register, the way they answered Section 09 — trust it and decline. Document the reason as "operator judgement — insufficient fit signal." You don't owe applicants an explanation beyond the standard one-line response.
3
Two Reds plus a vouch from a high-trust source. Example: an applicant referred to you by a sub-affiliate already on the books, who personally vouches for them. Override to Clarify rather than Decline, but require the referrer to vouch in writing. Documented vouching shifts liability.

Outside these three scenarios, follow the verdict mechanic. Overrides should be rare — one or two per 200 applicants. Frequent overrides are a sign the rubric needs revision, not that you have better instincts than the rubric.

Flagging deputy-track candidates during audit

A small subset of applicants will produce unusually clean audits — all four blocks Green or near-Green, organised communication, demonstrated leadership in their existing audience. These are not just "Proceed" applicants, they are future deputy candidates. Around month three of operations the Lead Operator promotes approximately five sub-affiliates from inside the cohort to deputy roles, each overseeing roughly eight of the others. The deputy stipend is paid on top of the underlying sub-affiliate retainer; the role is a tier-up promotion, not a separate hire.

When you flag a deputy-track candidate during audit, add a tag in the registry: "deputy-track candidate" alongside the standard verdict. The flag is internal only — do not mention promotion possibilities to the applicant during onboarding. Promote them only after they have demonstrated three months of clean delivery, not on the strength of the audit alone. The audit identifies the candidate pool; the first ninety days of delivery confirm the choice.

"A clean audit is necessary but not sufficient for the deputy track. Three months of clean delivery is the actual qualification."
Section 09

Applicant patterns
you'll see at scale.

Across 200 applications, recurring archetypes emerge. Recognising them shortcuts the audit. Five common patterns and how to handle each.

The Inflated Influencer

40K–200K followers, claims to be a "Bitcoin thought leader," low engagement-to-follower ratio, follower base shows clear bot patterns, comments are emoji strings. Section 09 is generic, often AI-flavoured. Will not disclose any past sponsorship issues.

Block B: 4–6 Reds · D.03: Red
Verdict: Decline. Most common applicant in the open-pitch lane. The rubric handles them automatically.
The Honest Small Operator

2K–15K real followers, 2–5% engagement, comments are real conversations, audience clearly fits the niche, Section 07 capacity numbers are sober (200–1,500 over 18 months). Section 09 is specific, written in their own voice, mentions concrete tactics. May not have run paid promotions before.

Mostly Greens · maybe one Amber on prior track record
Verdict: Proceed to interview. The most undervalued category. One of these is worth five Inflated Influencers and won't burn out by month six.
The Over-Promising Pro

Real audience, real engagement, professional setup, registered company, prior promotion experience — but Section 07 claims 10K users in 18 months on a 20K addressable audience. 50% conversion rate is implausible. Section 09 is glossy and on-brand but light on specific protocol references.

Mostly Greens · D.01: Red · D.03: Amber
Verdict: Clarify. Ask: "What's the basis for the 10K target on a 20K-addressable audience?" An honest re-calibration is recoverable. Doubling down without evidence is not.
The Wrong-Niche Veteran

Solid 50K+ audience, real engagement, professional history — but the audience is finance generally, or wealth management, or an adjacent space, with little Bitcoin specificity. Section 09 is well-written but reads as "I run promotions for a living" not "I find this protocol interesting."

Block B: Greens · B.05: Amber/Red · D.03: Amber
Verdict: Lean Decline. Real operator, wrong fit. The audience won't convert at the rate the protocol needs. A polite decline preserves the relationship for a future, more aligned engagement.
The Quiet Heavyweight

Mid-sized audience (10K–40K), exceptional engagement (5–10%), comments are substantive, audience is laser-focused on the right niche. Section 04 lists prior promotions with measurable outcomes ("3,400 paid signups for X over 6 months"). Section 09 is short, sober, and references three specific protocol mechanics without showing off. Probably referred by another sub-affiliate.

All Greens
Verdict: Proceed urgently. Don't lose them. These applicants get fast-tracked to interview within five days. Lock them in.
Section 10

Operator failure modes
to avoid.

Patterns of operator behaviour that undermine the DD's effectiveness. Read these once and don't do them.

F.01 Audit-creep

Spending 45 minutes on a single applicant when the rubric is designed for 10–15 minutes. The cost: you process 25% of the volume. The risk: you start to favour applicants who let you go deep over those who fit cleanly. Discipline yourself to the time budget.

F.02 The friendly outlier override

An applicant you like personally produces three Reds. You override to Proceed because "the rubric doesn't capture how interesting they are." Two months in they fail the gates and you're embarrassed. The rubric was right; you knew them too well to score honestly. If you can't be neutral, hand the audit to a deputy.

F.03 Sympathetic ear for excuses

Clarify-stage responses become long stories about why the audience metrics are unfairly low ("the algorithm changed", "I had a baby", "I was offline for three months"). Treat these as Decline-by-explanation. The 50 sub-affiliate slots will be filled either way; an applicant whose primary message is excuses will produce excuses for missed gates too.

F.04 Skipping Block A on "obviously good" applicants

Applicant looks great so you skip the sanctions check. In 199 cases out of 200 you'll find nothing; the 200th case is why you run the check on every applicant. Block A is the only block where consistency of process matters more than judgement — just do it on every form, in the same order, every time.

F.05 Letting the inbox dictate the pace

Applicants who chase get prioritised; quiet applicants get buried. This is the wrong order. The Quiet Heavyweight pattern is precisely the applicant who doesn't chase. Set a two-week batch cycle, communicate it on inbound, and stick to it. Chasers don't get exceptions.

F.06 Treating the DD as the contract

The DD is the gate. The contract is the contract. Once you've decided to Proceed, the sub-affiliate signs the proper agreement and the Sub-Affiliate House Rules. Don't try to relitigate audit findings inside the contract or vice versa.

Section 11

When to delegate audit work.

The DD is designed for one Lead Operator to run end-to-end across 200 applicants in roughly 25–30 hours of audit time, plus 15 hours of interviews and clarifications. That's a workable burden for one person inside a six-week recruitment window.

If the inbound volume goes higher — 400 or 500 applications, which can happen if a programme launch goes broader than expected — the audit volume crosses the line where one person can sustain quality. At that point, three options:

1
Stop accepting applications until you've cleared the queue. The cleanest option. Communicate the closure on outbound channels. Re-open in two weeks once you're caught up. This preserves quality at the cost of slowing throughput.
2
Delegate Block A and Block C to a deputy. These are the most mechanical blocks — mostly checking against external lists and registries. A trusted deputy (assistant, contractor) can score Blocks A and C. You score Blocks B and D, and final verdict. Halves the audit time per applicant.
3
Delegate the entire audit to a deputy with sample-checking. If you have a deputy you trust deeply (they've shadowed you on 30+ audits), they run the full audit and you sample-check 10% of their decisions. This scales but introduces decision drift — the deputy will have slightly different instincts. Worth doing only after the deputy has demonstrated calibration on a meaningful sample.
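The 10% sample-check in option 3 works best when the sample is reproducible, so a disputed verdict can later be traced to whether it was or wasn't in the reviewed set. A minimal sketch, assuming application IDs are simple strings and using a fixed seed (both assumptions, not programme requirements):

```python
import random


def sample_for_review(decisions: list[str], rate: float = 0.10,
                      seed: int = 42) -> list[str]:
    """Pick a reproducible ~10% sample of a deputy's verdicts to spot-check.

    A fixed seed makes the sample auditable: re-running with the same
    inputs always selects the same applications.
    """
    k = max(1, round(len(decisions) * rate))  # always check at least one
    rng = random.Random(seed)
    return rng.sample(decisions, k)


applications = [f"APP-{i:03d}" for i in range(200)]
review_set = sample_for_review(applications)
print(len(review_set))  # 20 of 200 at a 10% rate
```

If the deputy's error rate in the sample exceeds what you'd tolerate across the full batch, widen the sample or pull the delegation back rather than arguing individual cases.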
"The DD's job is to make audit decisions consistent across applicants. Delegation's job is to make audit time scalable across applicants. Don't sacrifice the first to achieve the second."
Manual status: This document is the operator's guide to using the Sub-Affiliate Due Diligence form effectively. It is intended for the Lead Operator and any deputy who handles audit work. It is not shared with applicants — applicants see only the DD form itself. The manual is a living document and may be updated as the programme matures and new patterns emerge. Updates require seven days' written notice to anyone with access to the manual.