The Sub-Affiliate Due Diligence form (DD) was designed to do one job: let one Lead Operator triage two hundred applicants down to a credible shortlist of fifty in roughly twenty-five hours of audit time, with confidence in every decline and every promotion.
The DD form is the tool; this manual is the user guide. It explains, section by section and item by item, why each part of the DD looks the way it does, what each scoring item is actually testing, and how to apply the audit consistently across applicants you've never met. Read it once before your first batch. Refer back to it whenever an applicant doesn't fit the standard pattern.
Nothing in this manual replaces operator judgement. The DD is a tool that enables good judgement by removing the cognitive load of remembering what to check for and where to be sceptical. You decide. The DD makes deciding faster.
Before you touch the form, internalise three principles. The whole DD is built on these. If you understand them, you'll handle edge cases the form doesn't anticipate. If you don't, you'll second-guess the form when it's right.
The DD is a self-filtering document. The questions are designed so applicants who shouldn't be sub-affiliates either won't fill it in honestly, won't fill it in completely, or will fill it in with answers that flag themselves. Roughly one in four applicants will eliminate themselves before you score a single item. Trust that part of the design and don't chase incomplete forms. An applicant who can't fill in a DD form for a paid affiliate role won't fill in monthly attribution reports for an eighteen-month engagement either.
You're filtering for honesty more than impressiveness. Audience quality matters, but so does honest disclosure. An applicant with a 4K real engaged audience who tells you exactly that is more valuable than one with a 40K bot-inflated audience who describes themselves as a "thought leader." The DD weights honesty by making honest disclosure cheap (one paragraph) and dishonesty expensive (immediate termination if discovered).
Speed is a feature. The audit is designed to take 10–15 minutes per applicant, not 60–90 minutes. That speed is intentional. Accept that the DD will sometimes get edge cases wrong rather than slow down to chase certainty on every applicant. The 10K-per-sub-affiliate cap and the hard performance gates protect you from any individual mistake. A bad applicant who slips through an audit will fail the Month 2 gate and exit. A good applicant whom you reject incorrectly is one of fifty — you have replacements. The cost of a missed decline at the audit stage is much lower than the cost of a slow audit.
The process runs from the moment an applicant first contacts you to the moment you sign or decline. Eight steps.
File each submission as YYYY-MM-DD_LastName_FirstName_DD.pdf. Resist the urge to read it carefully on receipt. Batch processing is faster than rolling assessment.

Each section of the applicant-facing form has a deliberate structural purpose. Knowing what each is testing helps you read the answers correctly.
What it's probing for: a real person you can find online, contract with, and pay.
We ask for legal name and public handle separately because they often differ. We ask for tax residency separately from country of residence because some applicants live abroad but pay tax somewhere else — relevant for both AML and for understanding which jurisdictional regulators apply to them. The trading-entity field is optional because not all applicants operate via a company — sole traders are perfectly acceptable.
Names that match the public identity of the channels they list. Phone numbers in the country they claim to live in. Email addresses on a domain that aligns with the channel (a podcast called "Wealth Frequency" with an email at wealthfrequency.com is normal; one at @gmail.com with a Russian-domain handle, less so).
What it's probing for: verifiable audience size and a paper trail of recent posts so you can scrape engagement signals manually.
The five-most-recent-post URL field is the single most important field on the entire form. It gives you the data to manually run audience-quality checks (Block B in the audit) without having to find their channel yourself. The platform analytics screenshot requirement filters out applicants whose claimed numbers can't be substantiated — if they refuse to share or send a low-quality screenshot, that is itself the signal.
A primary channel that's at least 12 months old. Engagement that scales with follower count (not flat across follower-count ranges). Five recent post URLs that actually work and load. Analytics screenshots that match the numbers in the form (not blurry, low-quality screenshots of unknown provenance).
What it's probing for: whether the audience can actually be addressed by the protocol.
The protocol cannot accept US persons. A 100K-follower channel that is 70% US-based has 30K addressable followers. We need to know that early so capacity calculations are honest. Language matters too — an English-speaking podcast targeting Spanish-only audiences has a translation problem you need to factor in.
An audience composition that genuinely fits the protocol's eligibility map. UK / EU / non-restricted Asia-Pacific / Latin America / non-restricted Africa are all good. Estimated US share above 60% is a real problem — not necessarily disqualifying, but capacity must be discounted heavily.
What it's probing for: prior commercial behaviour as a paid promoter, and whether they have a history of being removed from things.
Past sponsorship behaviour is the single best predictor of future sponsorship behaviour. We ask for outcomes ("with what outcome") because anyone can list past clients — not everyone can describe what they actually delivered. A clean record of three small completed promotions beats a list of twelve unfinished engagements. The "removed by counterparty" question filters for applicants who've burnt bridges — an honest "yes, here's why" can be fine; a "no" later contradicted by easy due-diligence is not.
Detail. Specifics. Outcomes. Vague answers ("worked with several brands in the space") are amber. Refusals to answer ("confidential under NDA") are amber unless they offer to share with you privately. Outright lies caught by 30 seconds of Googling are red.
What it's probing for: whether you can pay them and they can receive payment lawfully.
Sole traders are fine; limited companies are fine; applicants who can't tell you what they are aren't fine. Tax registration matters because an applicant operating below the radar tax-wise is one regulatory enquiry away from being unable to issue you an invoice. Bank country and currency only at this stage — full account details belong on the contract, not the DD.
Clear answers. "Sole trader, registered for self-assessment in the UK, no VAT, no insurance" is a complete answer for a smaller operator. Refusal to disclose tax registration status is a red flag, and not only for tax reasons: evasiveness there predicts evasiveness elsewhere.
What it's probing for: deal-killers and disclosed risk.
Eight yes/no questions that take the applicant 90 seconds to answer and that, if answered honestly, cover roughly 95% of the regulatory and reputational risk surface. "Yes" with a paragraph of context is often acceptable. "No" later disproven by any check you run is automatic termination, even after contract. The form itself states this. Applicants who lie on these questions are filtering themselves out.
Cleanly answered, honestly contextualised. A "yes, I was a director of a company that wound down voluntarily in 2019, here's the public record" is fine. Repeated "yes" answers without context aren't. Any "no" that you can disprove with a Google search in 60 seconds is a hard decline.
What it's probing for: calibration. Specifically, whether they understand their own audience.
The 6-month, 12-month, 18-month estimate format forces them to think about pacing. An applicant who claims 10K users in 6 months but 10K total at 18 months is claiming a full year of zero growth, which is either a misreading of the question or a self-contradiction; query it either way. An applicant who claims 200 users in 6 months and 10K at 18 months is implicitly claiming a 50× acceleration — possible but rare and worth asking about. The "evidence" field is what saves you from optimistic but unsupported claims.
Self-aware, sober numbers. The best answers will say "I think 1,500–3,000 over 18 months is realistic for my audience" rather than confidently asserting 10K. Calibrated humility is more valuable than ambitious overclaim. The applicant who hits 7K users having promised 3K is a hero. The one who promised 10K and delivers 4K is a problem.
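The pacing check reduces to a few lines if you want to script it. A minimal sketch, assuming the estimates are cumulative; the function name and flag wording are mine, not part of the DD:

```python
def capacity_flags(m6: int, m12: int, m18: int) -> list[str]:
    """Sanity-check the cumulative 6/12/18-month user estimates from Section 07."""
    flags = []
    if not (m6 <= m12 <= m18):
        # Cumulative totals can't shrink; a decrease means the applicant
        # misread the question or is contradicting themselves.
        flags.append("estimates decrease over time: query the applicant")
    if 0 < m6 and m18 >= 50 * m6:
        # e.g. 200 users at month 6 vs 10K at month 18 implies a 50x acceleration.
        flags.append(f"implied {m18 / m6:.0f}x acceleration: possible but rare, ask")
    return flags

# capacity_flags(200, 2_000, 10_000)
# -> ["implied 50x acceleration: possible but rare, ask"]
```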
What it's probing for: whether they're manageable across an 18-month engagement.
Performance is one thing, manageability is another. A high-performing operator who never responds to messages is a worse hire than a moderate-performing operator you can actually communicate with. Timezone matters when you're coordinating across 50 sub-affiliates — an applicant who refuses any synchronous time creates ongoing friction.
Honest, sustainable answers. "Same-day response during business hours, monthly call yes" is gold. "Available 24/7" is suspicious — nobody is. "Async only, no calls" can work but only with very experienced operators.
What it's probing for: voice. Specifically, whether the applicant has one.
The single most predictive question on the form, despite being qualitative. An applicant who can write 200 honest, voice-driven words about why they want this role will write 200 hours of campaign content in their own voice for the same audience. An applicant who writes 200 words of LinkedIn-management-consultant filler will produce filler content. This is the section where AI-generated applications fail loudest.
References to specific protocol mechanics (the 3× multiplier, the 275 BTC, the 5-year cycle, the 80/20 split). Personal voice ("the thing that struck me about", "I keep coming back to"). Specific commitments ("I'd push this to my newsletter readership and a couple of mid-tier UK podcasts I have relationships with"). Red flags: generic enthusiasm, "Bitcoin is the future", management-speak, AI-generic constructions.
What it's probing for: formal commitment to the truthfulness of everything above.
Four explicit confirmations and a signature give the form legal weight. An applicant who has signed and submitted has affirmed the answers. If anything turns out to be false post-contract, the signature is the basis for termination without severance. This is not theoretical: in a 50-sub-affiliate programme over 18 months, two or three terminations are statistically expected, and the signed DD is what makes them clean.
A real signature, a real date. Refusal to sign or "agree by typing your name" workarounds are red flags — an operator who won't sign isn't ready for the engagement.
Block A is the prerequisite for everything else. If you can't establish that the applicant is who they say they are, audience quality scores don't matter. Four items, ~3 minutes total per applicant.
What you're testing: does the legal name belong to the public identity of the channel?
Green: clear hits linking name to channel. Amber: adjacent hits but no direct link — ask for clarification. Red: nothing at all, or hits suggesting a different person owns the channel.
What you're testing: whether the channel grew naturally or was bought.
Green: consistent growth proportional to age. Amber: faster than typical but applicant offers a credible explanation. Red: implausible growth with no public catalyst.
What you're testing: whether the applicant has a public history of fraud, scams, or investigation.
This step takes the same time on every applicant, so do it on every applicant, not just the suspicious ones. Catching one bad actor pays for hundreds of clean searches.
What you're testing: whether contracting with the applicant would breach UK / US / EU sanctions.
A sanctions match is automatic decline regardless of any other audit score. Counsel must be informed. The DD form must be filed. The applicant gets a one-line response: "Thanks for applying; we are unable to progress."
Block B is the core of the audit. It's where most of the genuine filtering happens. Six items, ~6–8 minutes per applicant. Pay attention here even when you're tired.
Identity (Block A) is binary; either they're real or they're not. Compliance (Block C) is mostly captured by the form's hard-stop questions. Capacity (Block D) is calibration. Audience quality is where the actual value of the sub-affiliate is determined. A 40K-follower account with 5% real engagement (2K real engaged people) is more valuable than a 200K-follower account with 0.3% engagement (600 real engaged people). The DD's audience block is what surfaces that distinction.
What you're testing: whether the audience actually engages with the content.
Benchmarks: Real Bitcoin/finance accounts engage 1–5%. Mass-market accounts can be lower. Bot-inflated accounts run below 0.5%. Green >1%, Amber 0.5–1%, Red <0.5%. Don't be misled by single high-performing posts — the average is the signal.
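If you want the benchmark applied mechanically across the five submitted posts, a sketch like the following works. Counting likes plus comments plus shares as "engagement" is my assumption, not something the form prescribes:

```python
def engagement_verdict(followers: int, posts: list[dict]) -> str:
    """Average engagement across the five submitted posts against the
    B.01 thresholds: Green >1%, Amber 0.5-1%, Red <0.5%.
    Each post dict carries manually scraped counts,
    e.g. {"likes": 420, "comments": 31, "shares": 12}."""
    rates = [
        (p["likes"] + p["comments"] + p.get("shares", 0)) / followers
        for p in posts
    ]
    avg = sum(rates) / len(rates)  # the average is the signal, not one viral post
    if avg > 0.01:
        return f"Green ({avg:.2%})"
    return f"Amber ({avg:.2%})" if avg >= 0.005 else f"Red ({avg:.2%})"
```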
What you're testing: whether the followers themselves look like real people.
Real audience pattern: realistic first-and-last names, real profile photos (varied, not stock), bios that reference jobs/interests/cities, accounts with their own content history.
Bot audience pattern (this is the test you want to memorise): usernames like Crypto_King_8472, MoonHodler_99012, Sarah__Investor_4421, default avatars or stock photos, identical bios across many accounts, accounts that follow 5,000 people but have 12 followers themselves, accounts created in the last six months.
This single check catches the highest proportion of inflated audiences in the shortest time. If this comes up Red, two of your other items are likely Red too, and by that point you'll already have made up your mind.
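The username shape is regular enough to pre-screen with a pattern match. This is a rough heuristic of mine, not part of the rubric; it flags the Crypto_King_8472 shape and nothing subtler:

```python
import re

# Wordlike tokens joined by underscores, ending in a run of 3+ digits:
# matches Crypto_King_8472, MoonHodler_99012, Sarah__Investor_4421.
BOT_NAME = re.compile(r"^[A-Za-z]+(?:_+[A-Za-z]+)*_+\d{3,}$")

def bot_name_share(usernames: list[str]) -> float:
    """Fraction of a sampled follower list whose handles match the bot shape."""
    return sum(1 for u in usernames if BOT_NAME.match(u)) / len(usernames)
```

A high share corroborates the manual read; a low share proves nothing, since bought followers can carry plausible handles.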
What you're testing: whether the engagement is from people thinking, or from people pressing buttons.
Real engagement: comments are 5–20% of likes. Comments are sentences. Comments are on-topic.
Bot engagement: likes vastly outnumber comments (1,000 likes / 4 comments). Comments are emoji strings, "Great post!", "🔥🔥🔥", or repetitive recycled phrases that appear on multiple posts.
One signal off (e.g., low ratio but quality comments) = Amber. Both signals off = Red.
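As a sketch, the two signals combine like this. `comments_look_real` stands in for your manual read of the comment text and is not something a script can decide for you:

```python
def comment_verdict(likes: int, comments: int, comments_look_real: bool) -> str:
    """B.03: real engagement runs roughly 5-20% comments-to-likes;
    1,000 likes / 4 comments (0.4%) fails the ratio outright."""
    ratio_ok = likes > 0 and comments / likes >= 0.05
    if ratio_ok and comments_look_real:
        return "Green"
    if ratio_ok or comments_look_real:
        return "Amber"  # one signal off
    return "Red"        # both signals off
```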
What you're testing: whether the engagement profile has changed unnaturally.
Real growth: engagement scales smoothly with follower-count growth over time. Bought engagement: a sudden 10× jump that doesn't correspond to any public event — no viral thread, no media break, no launch. Real spikes are explainable; bought spikes aren't.
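A crude way to surface the jumps worth asking about, assuming you've tabulated per-month engagement totals from the post history. The 10× threshold mirrors the example above and is not an official cutoff:

```python
def unexplained_spikes(monthly_engagement: list[int],
                       catalyst_months: set[int]) -> list[int]:
    """Indices of months where engagement jumps >=10x month-on-month
    with no known public catalyst (viral thread, media break, launch)."""
    return [
        m for m in range(1, len(monthly_engagement))
        if monthly_engagement[m - 1] > 0
        and monthly_engagement[m] / monthly_engagement[m - 1] >= 10
        and m not in catalyst_months
    ]

# unexplained_spikes([900, 950, 9_800, 10_100], set()) -> [2]
```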
What you're testing: whether the audience would actually be interested in a Bitcoin protocol.
A "Bitcoin podcaster" whose followers are mostly K-pop accounts, OnlyFans models, MLM enthusiasts, or generic lifestyle accounts has either bought followers or has an audience that won't convert. Neither outcome is useful to the protocol. Real Bitcoin-niche audiences cluster — the followers will themselves be commentators on Bitcoin, business owners, finance professionals, software developers, with overlap into the broader sound-money discourse.
What you're testing: what an independent tool says about the same channel.
Optional because you don't need it if Blocks B.01–B.05 already give you a clear picture. Useful as a tiebreaker when the audit is mixed. Free third-party tools aren't perfect but they catch obvious manipulation patterns reliably.
Three items. ~3 minutes total. The applicant has already done most of the work for you in Section 06 of Part A; your job is to spot inconsistencies and missing disclosures.
What you're testing: whether the applicant's stated answers in Section 06 match what your other checks (A.03 adverse media, A.04 sanctions, C.02 corporate registry) revealed.
Green: all answers consistent with your findings; any disclosed yeses come with sensible context. Amber: a "yes" was disclosed but the context is thin — ask for more. Red: a "no" answer is contradicted by what you found. Inconsistency between disclosure and findings is the highest-weight Red on the entire DD. A bot-inflated audience is recoverable through the cap; a dishonest applicant is not.
What you're testing: if the applicant operates via a registered company, whether that company has a clean record.
Skip this item if the applicant is a sole trader (no entity to check). Otherwise, run it. Most applicants will be clean. The ones who aren't will save you a costly mistake.
What you're testing: whether the applicant is currently promoting any product the protocol would consider a conflict.
Cross-reference against Section 06 disclosure. A clean record + no disclosure = Green. A current promotion that they disclosed honestly + offered to exit = Amber, address in interview. A current undisclosed promotion = Red.
Three items, ~2 minutes total. The applicant has self-assessed; your job is to sanity-check.
What you're testing: arithmetic. Does their 18-month delivery target square with their visible audience size?
Bitcoin-niche conversion benchmarks: 0.1–1% of an audience converts to a paid action over a 12–18 month campaign. So a 10K-follower account claiming 10K conversions = 100% conversion rate, implausible. Same account claiming 200–500 = 2–5% conversion, ambitious but possible. A 100K account claiming 10K = 10% conversion, very ambitious; needs supporting evidence.
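The same arithmetic as a sketch, with band labels of my own choosing layered on the 0.1–1% benchmark:

```python
def conversion_plausibility(claimed_users: int, addressable: int) -> str:
    """D.01: benchmark is 0.1-1% of the addressable audience converting
    over a 12-18 month Bitcoin-niche campaign."""
    rate = claimed_users / addressable
    if rate <= 0.01:
        return f"within benchmark ({rate:.1%})"
    if rate <= 0.05:
        return f"ambitious but possible ({rate:.1%}): ask for evidence"
    return f"implausible ({rate:.1%}): Red without strong supporting evidence"

# conversion_plausibility(10_000, 20_000) -> "implausible (50.0%): ..."
```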
What you're testing: US-share concentration risk.
Green: below 25% US. Amber: 25–60% US — capacity must be discounted accordingly; sub-affiliate is still viable. Red: above 60% US — the addressable audience is too small to justify a sub-affiliate slot regardless of audience quality.
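Putting the D.02 thresholds and the Section 03 discount together; a sketch, with the tuple return as my choice rather than the form's:

```python
def us_share_verdict(followers: int, us_share: float) -> tuple[str, int]:
    """D.02: Green <25% US, Amber 25-60% (discount capacity), Red >60%.
    Also returns the addressable (non-US) audience the capacity maths use."""
    addressable = int(followers * (1 - us_share))
    if us_share < 0.25:
        return "Green", addressable
    if us_share <= 0.60:
        return "Amber", addressable  # viable, but discount capacity accordingly
    return "Red", addressable        # addressable base too small for a slot

# us_share_verdict(100_000, 0.70) -> ("Red", 30000), the 30K case from Section 03
```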
What you're testing: the applicant's voice and protocol fluency. This is the most predictive single item on the entire DD.
Green: all three. Specific, voiced, committal. Amber: two of three — readable and personal but generic about the protocol, or vice versa. Red: AI-generic, no voice, no specifics. The AI-generic pattern is recognisable: balanced sentence rhythm, ChatGPT-cadence transitions ("Furthermore...", "Moreover..."), management-consultant vocabulary, no concrete protocol references.
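You score D.03 by reading, but a pre-read triage can order the pile. The marker lists below are illustrative guesses, not a vetted classifier; treat the output as a hint about where to slow down:

```python
# Illustrative marker lists: tune them against real Section 09 answers.
PROTOCOL_MARKERS = ("3x multiplier", "275 btc", "5-year cycle", "80/20 split")
AI_CADENCE = ("furthermore", "moreover", "in conclusion", "it is worth noting")

def section09_triage(text: str) -> str:
    """Counts concrete protocol references versus AI-cadence transitions."""
    t = text.lower()
    protocol = sum(m in t for m in PROTOCOL_MARKERS)
    cadence = sum(m in t for m in AI_CADENCE)
    if protocol >= 2 and cadence == 0:
        return "likely Green: read to confirm the voice"
    if protocol == 0 and cadence >= 2:
        return "likely Red: read to confirm"
    return "ambiguous: read carefully"
```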
Once you've scored all sixteen items across the four blocks, the verdict tally is mechanical.
0–1 Reds total → Proceed. Send to interview. Interview validates whether the form's positive signal holds up in conversation.
2 Reds total → Clarify. Send a single email naming the two specific concerns. Seven days to respond. Honest response that resolves both = re-score and re-decide. No response or evasive response = decline.
3+ Reds total → Decline. One-line email: "Thanks for applying — not progressing on this round."
Sanctions match (item A.04 Red) → Automatic decline. Bypasses everything else. Counsel notified. DD filed.
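Because the tally is mechanical, it reduces to a few lines. A minimal sketch, with the response wording abbreviated:

```python
def dd_verdict(red_count: int, sanctions_match: bool) -> str:
    """Map the sixteen-item tally to a verdict. A.04 bypasses everything."""
    if sanctions_match:
        return "Decline (sanctions): notify counsel, file the DD"
    if red_count <= 1:
        return "Proceed: send to interview"
    if red_count == 2:
        return "Clarify: one email, two named concerns, seven days"
    return "Decline: one-line response"
```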
The mechanic is intentionally rigid. But there are three legitimate override scenarios:
Outside these three scenarios, follow the verdict mechanic. Overrides should be rare — one or two per 200 applicants. Frequent overrides are a sign the rubric needs revision, not that you have better instincts than the rubric.
A small subset of applicants will produce unusually clean audits — all four blocks Green or near-Green, organised communication, demonstrated leadership in their existing audience. These are not just "Proceed" applicants, they are future deputy candidates. Around month three of operations the Lead Operator promotes approximately five sub-affiliates from inside the cohort to deputy roles, each overseeing roughly eight of the others. The deputy stipend is paid on top of the underlying sub-affiliate retainer; the role is a tier-up promotion, not a separate hire.
When you flag a deputy-track candidate during audit, add a tag in the registry: "deputy-track candidate" alongside the standard verdict. The flag is internal only — do not mention promotion possibilities to the applicant during onboarding. Promote them only after they have demonstrated three months of clean delivery, not on the strength of the audit alone. The audit identifies the candidate pool; the first ninety days of delivery confirm the choice.
Across 200 applications, recurring archetypes emerge. Recognising them shortcuts the audit. Five common patterns and how to handle each.
40K–200K followers, claims to be a "Bitcoin thought leader," low engagement-to-follower ratio, follower base shows clear bot patterns, comments are emoji strings. Section 09 is generic, often AI-flavoured. Will not disclose any past sponsorship issues.
Typical audit outcome: Block B: 4–6 Reds; D.03: Red.

2K–15K real followers, 2–5% engagement, comments are real conversations, audience clearly fits the niche, Section 07 capacity numbers are sober (200–1,500 over 18 months). Section 09 is specific, written in their own voice, mentions concrete tactics. May not have run paid promotions before.

Typical audit outcome: mostly Greens; maybe one Amber on prior track record.

Real audience, real engagement, professional setup, registered company, prior promotion experience — but Section 07 claims 10K users in 18 months on a 20K addressable audience. 50% conversion rate is implausible. Section 09 is glossy and on-brand but light on specific protocol references.

Typical audit outcome: mostly Greens; D.01: Red; D.03: Amber.

Solid 50K+ audience, real engagement, professional history — but the audience is finance generally, or wealth management, or an adjacent space, with little Bitcoin specificity. Section 09 is well-written but reads as "I run promotions for a living" not "I find this protocol interesting."

Typical audit outcome: Block B: Greens; B.05: Amber/Red; D.03: Amber.

The Quiet Heavyweight: mid-sized audience (10K–40K), exceptional engagement (5–10%), comments are substantive, audience is laser-focused on the right niche. Section 04 lists prior promotions with measurable outcomes ("3,400 paid signups for X over 6 months"). Section 09 is short, sober, and references three specific protocol mechanics without showing off. Probably referred by another sub-affiliate.

Typical audit outcome: all Greens.

Patterns of operator behaviour that undermine the DD's effectiveness. Read these once and don't do them.
Spending 45 minutes on a single applicant when the rubric is designed for 10–15 minutes. The cost: you process 25% of the volume. The risk: you start to favour applicants who let you go deep over those who fit cleanly. Discipline yourself to the time budget.
An applicant you like personally produces three Reds. You override to Proceed because "the rubric doesn't capture how interesting they are." Two months in they fail the gates and you're embarrassed. The rubric was right; you knew them too well to score honestly. If you can't be neutral, hand the audit to a deputy.
Clarify-stage responses become long stories about why the audience metrics are unfairly low ("the algorithm changed", "I had a baby", "I was offline for three months"). Treat these as Decline-by-explanation. The 50 sub-affiliate slots will be filled either way; an applicant whose primary message is excuses will produce excuses for missed gates too.
Applicant looks great so you skip the sanctions check. You won't, in 199 cases out of 200, find anything. The 200th is why you do the check on every applicant. Block A is the only block where speed of process matters more than judgement — just do it on every form, in the same order, every time.
Applicants who chase get prioritised; quiet applicants get buried. This is the wrong order. The Quiet Heavyweight pattern is precisely the applicant who doesn't chase. Set a two-week batch cycle, communicate it on inbound, and stick to it. Chasers don't get exceptions.
The DD is the gate. The contract is the contract. Once you've decided to Proceed, the sub-affiliate signs the proper agreement and the Sub-Affiliate House Rules. Don't try to relitigate audit findings inside the contract or vice versa.
The DD is designed for one Lead Operator to run end-to-end across 200 applicants in roughly 25–30 hours of audit time, plus 15 hours of interviews and clarifications. That's a workable burden for one person inside a six-week recruitment window.
If the inbound volume goes higher — 400 or 500 applications, which can happen if a programme launch goes broader than expected — the audit volume crosses the line where one person can sustain quality. At that point, three options: