What we are showing you (and why we cannot show everything)
Thanks for the feedback. We understand this is unfamiliar territory for most people, so we have built an online demo site showing some of the 768 dimensions we use.
The fraud detection paradox
Our system tracks 768 different measurements to identify fake streams and protect artist revenue. We are showing you a small sample of these dimensions below. We cannot reveal all 768 for a critical security reason: if attackers know exactly what we measure, they can design their fraud operations to fake those specific patterns.
This is like a bank explaining its vault has "multiple security layers" without publishing the exact alarm codes and guard schedules.
Sample dimensions we are disclosing (approximately 15% of the total system)
Timing patterns (when streams happen)
- Hour-of-day distribution: Real humans stream more in evenings and weekends; bots often run 24/7 with unnatural consistency
- Session length: Humans listen for varied periods; bots often have identical session durations
- Time between plays: Real listeners have natural gaps; coordinated fake accounts show suspicious synchronization
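As a simplified sketch of the hour-of-day signal, the entropy of an account's hourly distribution separates round-the-clock bots from listeners who cluster in waking hours. The data and cut-offs below are illustrative only, not our production criteria:

```python
from collections import Counter
from math import log2

def hour_entropy(stream_hours):
    """Shannon entropy of an account's hour-of-day distribution.

    Near-maximal entropy (log2(24) ~ 4.58 bits) suggests streams spread
    evenly across all 24 hours with unnatural consistency; real humans
    cluster in evenings and weekends, giving much lower entropy.
    """
    counts = Counter(stream_hours)
    total = len(stream_hours)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical accounts: a bot streaming uniformly vs. an evening listener.
bot_hours = list(range(24)) * 10                  # every hour, every day
human_hours = [19, 20, 20, 21, 21, 21, 22] * 10   # clustered in the evening

print(hour_entropy(bot_hours))    # ~4.58 bits, the maximum for 24 buckets
print(hour_entropy(human_hours))  # well under 2 bits
```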
Location patterns (where streams come from)
- Geographic consistency: Real users stay in reasonable locations; bots jump between countries impossibly fast
- Timezone alignment: Listening patterns should match local time zones; mismatches indicate VPN or proxy manipulation
- IP reputation: We check if streaming comes from known bot networks or suspicious data centers
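The geographic-consistency check can be sketched as an "impossible travel" test: two streams whose implied travel speed exceeds a commercial jet cannot come from one real person. The 900 km/h cut-off and coordinates below are illustrative placeholders:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(lat1, lon1, t1, lat2, lon2, t2, max_kmh=900):
    """Flag a pair of streams whose implied speed exceeds max_kmh.

    Timestamps t1, t2 are in seconds; max_kmh is a hypothetical cut-off
    roughly matching commercial aircraft speed.
    """
    hours = abs(t2 - t1) / 3600
    if hours == 0:
        return True  # two locations at the same instant
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Hypothetical: a stream from Berlin, then one from Tokyo 30 minutes later.
print(impossible_travel(52.52, 13.40, 0, 35.68, 139.69, 1800))  # True
```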
Device patterns (how people listen)
- Device consistency: Real users typically have 1-3 devices; fraud operations often show dozens of accounts per device
- Platform behavior: Real listeners use normal phones and computers; bots often use server infrastructure pretending to be mobile devices
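A minimal sketch of the accounts-per-device signal, grouping stream events by device fingerprint. The threshold of 3 accounts is an illustrative placeholder; real cut-offs remain confidential:

```python
from collections import defaultdict

def flag_shared_devices(events, max_accounts=3):
    """Return device IDs hosting an implausible number of distinct accounts.

    events is an iterable of (account_id, device_id) pairs; max_accounts
    is a hypothetical threshold, not our production value.
    """
    accounts_by_device = defaultdict(set)
    for account_id, device_id in events:
        accounts_by_device[device_id].add(account_id)
    return {dev for dev, accts in accounts_by_device.items()
            if len(accts) > max_accounts}

# Hypothetical log: a shared family device vs. a device farming 40 accounts.
events = [("user1", "devA"), ("user2", "devA"),
          *[(f"bot{i}", "devZ") for i in range(40)]]
print(flag_shared_devices(events))  # {'devZ'}
```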
Engagement patterns (how people interact)
- Song completion rates: Real listeners skip songs they dislike; bots often play everything to 100%
- Genre diversity: Real humans explore multiple styles; many bot operations focus narrowly to maximize specific track plays
- Social connections: Real users follow friends and share music; fake accounts exist in isolation or artificial networks
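The completion-rate signal above can be sketched as follows; each entry is the fraction of a track played, and both the play-count floor and the 0.99 threshold are illustrative assumptions:

```python
def completion_score(completions):
    """Mean fraction of each play completed (1.0 = played to the end)."""
    return sum(completions) / len(completions)

def suspiciously_perfect(completions, min_plays=50, threshold=0.99):
    """Real listeners skip tracks they dislike, so a completion score
    pinned near 1.0 across many plays is a bot-like pattern.
    Thresholds here are hypothetical placeholders."""
    return len(completions) >= min_plays and completion_score(completions) >= threshold

print(suspiciously_perfect([1.0] * 200))           # True: every play finished
print(suspiciously_perfect([1.0, 0.3, 0.8] * 70))  # False: natural skipping
```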
Why we hold back the other 653 dimensions
Security through selective disclosure
Fraud operations are businesses. They invest in technology to evade detection. If we published our complete detection model, sophisticated attackers would:
- Build simulation systems that test their fake streams against our exact criteria
- Optimize their bots to pass every measurement we disclosed
- Render our fraud detection worthless within weeks
By revealing only a representative sample, we show transparency about our approach (mathematics-based behavioral analysis) while maintaining operational security (specific thresholds, weight factors, and interaction effects remain confidential).
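As a toy illustration of what stays confidential, per-dimension signals could be combined into a single risk score with a weighted average. Every signal value and weight below is an invented placeholder, not our actual model:

```python
def risk_score(signals, weights):
    """Weighted average of per-dimension risk signals, each in [0, 1].

    The real system's weight factors and interaction effects between
    dimensions are confidential; this is the simplest possible combiner.
    """
    return sum(weights[k] * signals[k] for k in weights) / sum(weights.values())

# Hypothetical signals and weights for four disclosed dimension groups.
signals = {"timing": 0.9, "location": 0.7, "device": 0.95, "engagement": 0.8}
weights = {"timing": 2.0, "location": 1.0, "device": 3.0, "engagement": 1.5}

print(round(risk_score(signals, weights), 3))  # 0.873
```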
Industry precedent
Banks do not publish exactly how they detect counterfeit currency. Credit card companies do not reveal precisely how fraud scoring works. Security researchers who discovered vulnerabilities in Google, Anthropic, and OpenAI systems withheld attack details until patches were available.
We follow the same principle: demonstrate capability and methodology without compromising effectiveness.
What does this mean?
Your revenue is protected by a comprehensive system measuring hundreds of behavioral factors that bots cannot fake simultaneously. We are showing you enough to understand it is thorough and grounded in real patterns, but not so much that attackers can reverse-engineer it.
When you see payment numbers, you can trust they reflect genuine listeners analyzed across all 768 dimensions, not just the sample we disclosed here.
Competitive moat: Our fraud detection is not a list of features competitors can copy from a product page. It is a complex behavioral model where effectiveness depends on keeping specific detection criteria confidential.
Revenue protection: Industry fraud rates of 5-15% directly impact margins. Our system protects unit economics by catching manipulation other platforms miss, but only if we maintain operational security around detection methods.
Regulatory readiness: We maintain detailed documentation of all 768 dimensions for audit purposes, regulatory inquiries, and dispute resolution. Selective public disclosure balances transparency with security, a position regulators understand and accept.
The disclosure balance
What we show:
- Overall approach: behavioral analysis using 768 numeric dimensions
- Representative samples: timing, location, device, engagement patterns
- Architecture: text-free AI processing, human oversight, blockchain audit trails
- Results: 95% detection accuracy, sub-2% false positive rate
What we protect:
- Exact dimension definitions and calculation methods
- Specific threshold values triggering different risk levels
- Weight factors determining how dimensions combine
- Interaction effects between multiple dimensions
- Detection strategies for emerging attack techniques
This balance allows you to evaluate our security depth while preserving the operational secrecy that makes it effective.
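To see what the headline figures above mean in practice, the 95% detection rate and sub-2% false positive rate can be combined with a fraud base rate via Bayes' rule. The 10% base rate below is an assumed mid-range value from the industry's 5-15% span, not a measured figure:

```python
def precision_at_base_rate(detection_rate, false_positive_rate, fraud_base_rate):
    """Of streams flagged as fraud, what fraction actually are fraud?

    True positives over all positives, given how common fraud is in the
    overall stream population.
    """
    tp = detection_rate * fraud_base_rate
    fp = false_positive_rate * (1 - fraud_base_rate)
    return tp / (tp + fp)

# 95% detection, 2% false positives, assumed 10% fraud base rate.
print(round(precision_at_base_rate(0.95, 0.02, 0.10), 3))  # 0.841
```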
"What if I need full technical details for due diligence?"
Qualified security auditors can review complete documentation under NDA during formal due diligence. We provide full disclosure to authorized parties while maintaining public operational security.
Bottom line
We are showing you enough of our 768-dimension fraud detection system to demonstrate sophistication and effectiveness. We are protecting the rest to ensure attackers cannot reverse-engineer and evade it. This balance serves both artists (revenue protection) and investors (sustainable unit economics) while maintaining the operational security that makes the system valuable.