Everywhere you look, something is “AI.” Papers, pitches, product pages. The volume alone makes it hard to tell what's real. PubMed, for example, went from about 1,000 AI-tagged papers in 2015 to 30,000+ in 2024. University technology transfer offices (TTOs) now see the same flood in disclosures.
That hype has a cost. If we can't separate signal from noise, we misjudge what's protectable, who actually invented what, and whether the value lives in a technical mechanism or in data, weights, and pipelines. We also stumble in diligence when buyers ask for evidence, safety plans, and licensing clarity.
This tool offers a practical framework to distinguish genuine value from noise in AI-labeled disclosures and to translate credible value into the right protection and deal path.
Classify the “AI” first, then run the five-signal test. If it clears, pick protection that matches where the value lives.
Fast taxonomy
AI as the core invention — mechanism-level novelty that changes how the computation itself behaves.
AI as an enabling capability — value lives in integration, calibration, and operating points, not in a new algorithm.
AI as a bolt-on — a generic model with no unique data or algorithm and no measurable lift.
Five signals
Benchmark lift over a credible baseline on external or prospective data; tie the lift to real-world outcomes where relevant.
Defensible data provenance: documented rights and consents, reproducible pipelines, bias assessment, governance.
§101 survivability via a concrete computational improvement, not mere recitation of the domain.
Safety/regulatory lifecycle: intended use, operating bounds, human-in-the-loop, verification & validation, security, monitoring, rollback; map to the NIST AI Risk Management Framework (AI RMF) and use a Predetermined Change Control Plan (PCCP) where applicable.
Upstream licenses compatible with outbound promises: field-of-use (FoU) restrictions, flow-downs, redistribution rights, export controls.
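The five signals above can be read as a strict gate: a disclosure clears only if every signal holds. A minimal sketch, assuming a flat boolean record per signal; all names and the lift threshold here are hypothetical illustrations, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class Disclosure:
    """Hypothetical record of a disclosure's answers to the five signals."""
    lift_vs_baseline: float      # relative improvement over the credible baseline
    external_data: bool          # evaluated on external/prospective data?
    provenance_defensible: bool  # rights, reproducible pipelines, bias review
    s101_mechanism: bool         # concrete computational improvement claimed?
    lifecycle_plan: bool         # monitoring, V&V, rollback, AI RMF mapping
    licenses_compatible: bool    # upstream terms permit the outbound promises

def clears_five_signals(d: Disclosure, min_lift: float = 0.02) -> bool:
    """Pass only if every signal holds; min_lift is an illustrative threshold."""
    return (
        d.lift_vs_baseline >= min_lift
        and d.external_data
        and d.provenance_defensible
        and d.s101_mechanism
        and d.lifecycle_plan
        and d.licenses_compatible
    )

# A large lift measured only on internal data still fails the gate:
weak = Disclosure(0.10, False, True, True, True, True)
strong = Disclosure(0.10, True, True, True, True, True)
```

A real triage would weight the signals and attach evidence pointers, but an AND gate matches the "if it clears" framing above: one failed signal means the value claim is not yet credible.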
Evidence pack (captured from the signals)
As you answer the five signals, record the artifacts below. They form the evidence pack for licensing and diligence.
Model card & evaluation report — external test set, baseline, failure modes, hardware footprint.
Data provenance memo — sources, rights/consents, territoriality, retention, reproducible pipelines.
IP map — what's patentable vs what stays a trade secret; document significant human contributions.
Change-management plan — monitoring, drift triggers, rollback; map to NIST AI RMF; use a PCCP if regulated.
License stack — inventory of all third-party models, data, and libraries with their terms.
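For diligence, it helps to treat the evidence pack as a machine-checkable inventory rather than a loose folder. A minimal sketch, with hypothetical artifact names mirroring the list above:

```python
# Hypothetical required artifacts; keys mirror the evidence-pack list above.
REQUIRED_ARTIFACTS = {
    "model_card_and_eval_report",
    "data_provenance_memo",
    "ip_map",
    "change_management_plan",
    "license_stack",
}

def missing_artifacts(pack: dict) -> set:
    """Return required artifacts that are absent or empty in an evidence pack."""
    return {name for name in REQUIRED_ARTIFACTS if not pack.get(name)}

# Example pack: the IP map is empty and no change-management plan was filed.
pack = {
    "model_card_and_eval_report": "eval_report_v2.pdf",
    "data_provenance_memo": "provenance_memo.md",
    "ip_map": "",
    "license_stack": ["base model: Apache-2.0", "dataset: CC-BY-4.0"],
}
```

Flagging gaps this way before a buyer asks keeps the diligence conversation on the merits rather than on missing paperwork.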