Overview
First Query (formerly Haven Influence) is an influencer marketing agency I co-founded, where I ran Campaign Operations. We were a lean team, which meant I owned a wide range of responsibilities, from building the technical infrastructure to running campaigns end-to-end. The core problem we solved: brands were wasting hours on manual creator vetting, with no reliable signal on audience quality or real engagement. I built the infrastructure to fix that, then ran the actual campaigns on top of it.
What I built and owned:
Built custom scraping software to qualify 12,000+ creators by engagement rate, audience quality, and growth trajectory (a minimal sketch of the filter logic follows this list)
Owned the full campaign lifecycle: creator outreach, negotiation, contracts, creative briefs, and final approvals
Built internal dashboards (code on GitHub, deployed on Vercel) with Supabase as the data layer, using SQL to keep performance data structured and current (see the second sketch after this list)
Used Cursor to iterate on the dashboards and fix bugs when things broke
Set up outreach infrastructure in Apollo, Clay, Smartleads, and Infraforge, and used AI to clean data, draft outreach, and summarise client meetings into clear next steps
Monitored campaign KPIs (engagement, CTR, conversions) and adjusted the creator mix to improve ROAS
Replaced manual sourcing processes with a scalable, reusable creator database
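To show the shape of the qualification pass: hard thresholds on each signal before a creator enters the pipeline. This is a minimal sketch, not the production scraper; the threshold values, field names, and example profiles are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative thresholds; real values were tuned per niche and campaign.
MIN_ENGAGEMENT_RATE = 0.02   # interactions per follower across recent posts
MIN_AUDIENCE_QUALITY = 0.75  # estimated share of real, active followers
MIN_GROWTH_RATE = 0.01       # month-over-month follower growth

@dataclass
class Creator:
    handle: str
    followers: int
    avg_likes: float
    avg_comments: float
    audience_quality: float  # 0..1, from follower-sample heuristics
    growth_rate: float       # month-over-month follower growth

def engagement_rate(c: Creator) -> float:
    """Average interactions per follower across recent posts."""
    return (c.avg_likes + c.avg_comments) / max(c.followers, 1)

def qualifies(c: Creator) -> bool:
    """Hard filter: every threshold must pass; no single vanity
    metric (e.g. raw follower count) can compensate for the others."""
    return (
        engagement_rate(c) >= MIN_ENGAGEMENT_RATE
        and c.audience_quality >= MIN_AUDIENCE_QUALITY
        and c.growth_rate >= MIN_GROWTH_RATE
    )

# Example: scraped profiles in, qualified shortlist out.
scraped = [
    Creator("fit_anna", 48_000, 1_900, 140, 0.88, 0.03),
    Creator("mega_promo", 900_000, 4_000, 90, 0.41, 0.00),
]
shortlist = [c for c in scraped if qualifies(c)]
print([c.handle for c in shortlist])  # ['fit_anna']
```

The big account fails here on engagement and audience quality, which is exactly the "vanity metric" trap the filter exists to catch.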
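And a rough sketch of the write path that kept the dashboards current, using the supabase-py client. The table name, column names, environment variable names, and values are assumptions for illustration, not the real schema.

```python
import os
from supabase import create_client

# Credentials come from the environment; create_client is the
# standard supabase-py entry point.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Upsert keeps one current row per creator+campaign, so the dashboard
# can query this table directly and always sees the latest numbers.
supabase.table("creator_performance").upsert(
    {
        "creator_handle": "fit_anna",    # illustrative values
        "campaign_id": "optinourish_q3",
        "engagement_rate": 0.0425,
        "ctr": 0.018,
        "conversions": 112,
    },
    on_conflict="creator_handle,campaign_id",
).execute()
```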
Growth Process
Influencer marketing has an AARRR (Acquisition, Activation, Retention, Referral, Revenue) problem. Most teams optimise for Acquisition (reach, impressions) without closing the loop on Revenue (did it convert?). At First Query, I built the ops layer to connect those two ends, starting with smarter creator selection, then tracking the full funnel through to ROAS.
The approach:
Engagement as a filter, not a vanity metric: I set minimum thresholds on engagement rate, audience authenticity, and niche fit before any creator entered the pipeline
Test small, then scale: ran initial pilots before committing full budgets, scaling up only when performance data justified it
ROAS as the acceptance test: every campaign had a clear revenue-per-spend target, and creator selection was adjusted mid-flight when the numbers weren't tracking (a worked example follows this list)
Toolstack: Apollo, Clay, Infraforge, custom scraping scripts, GitHub/Vercel/Supabase dashboards, Smartleads for outreach sequencing, Cursor for dashboard fixes, AI for data cleaning and meeting summaries
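To make the acceptance test concrete: ROAS is attributed revenue divided by creator spend, checked per creator against the pre-agreed target. A small sketch with illustrative numbers, not client data:

```python
def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: attributed revenue per unit of creator spend."""
    return revenue / spend

TARGET_ROAS = 3.0  # illustrative revenue-per-spend target agreed pre-launch

# Hypothetical mid-flight snapshot of two creators in one campaign.
creators = {
    "fit_anna":   {"spend": 2_000, "revenue": 7_400},  # 3.7x -> keep scaling
    "mega_promo": {"spend": 5_000, "revenue": 6_500},  # 1.3x -> adjust mix
}

for handle, kpi in creators.items():
    r = roas(kpi["revenue"], kpi["spend"])
    action = "scale" if r >= TARGET_ROAS else "adjust mix"
    print(f"{handle}: ROAS {r:.1f}x -> {action}")
```

Running the check per creator rather than per campaign is what made mid-flight adjustment possible: the underperformer gets cut or renegotiated without pausing the whole campaign.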
The Challenge
Influencer marketing is a data problem that most teams treat as a relationship problem. Creator vetting was entirely manual: no standardised scoring, no quality controls, no way to move fast without missing signals that matter (fake followers, low-intent audiences, engagement that doesn't convert).
The side effect: campaigns were expensive to start and hard to stop. There was no test-then-scale structure, no mid-campaign adjustment logic, and no clear ROAS target tying the creator fee to a real business outcome.
Results
Numbers I can point to:
12,000+ influencers qualified and scored through the scraping system, replacing a process that previously took days per creator
3x+ faster creator sourcing vs. manual benchmarks
Measurably improved ROAS for OptiNourish (DTC supplement brand) through tighter creator selection and mid-campaign KPI adjustments
Reusable creator database built as a durable asset, not just a one-campaign list
What this proved: when you treat creator selection as a data problem and ROAS as the acceptance test, influencer marketing becomes a lot less expensive to get right.
