Separating AI Signal from Noise

Everywhere you look, something is “AI.” Papers, pitches, product pages. The volume alone makes it hard to tell what's real. PubMed, for example, went from about 1,000 AI-tagged papers in 2015 to 30,000+ in 2024. University technology transfer offices (TTOs) now see the same flood in disclosures.

That hype has a cost. If we can't separate signal from noise, we misjudge what's protectable, who actually invented what, and whether the value lives in a technical mechanism or in data, weights, and pipelines. We also stumble in diligence when buyers ask for evidence, safety plans, and licensing clarity.

The rules are tightening too. The USPTO holds that only natural persons can be named as inventors. The FDA offers a pathway for AI/ML device updates through predetermined change control plans (PCCPs). The EU AI Act is phasing in. All of this means TTOs need a more sophisticated way to evaluate AI-related inventions.

This tool offers a practical framework to distinguish genuine value from noise in AI-labeled disclosures and to translate credible value into the right protection and deal path.

Developed by Saurin Parikh, Senior Technology Licensing Specialist at UR Ventures, University of Rochester.

How to use • Taxonomy + Five Signals

Classify the “AI” first, then run the five-signal test. If it clears, pick protection that matches where the value lives.

Fast taxonomy

  • AI as the core invention — mechanism-level novelty that changes how the computation itself behaves.
  • AI as an enabling capability — value in integration, calibration, and operating points rather than in the model itself.
  • AI as a bolt-on — no unique data or algorithm, and no measurable performance lift.

Five signals

  1. Benchmark lift vs. a credible baseline on external or prospective data; tie to outcomes where relevant.
  2. Defensible data provenance: rights, reproducible pipelines, bias assessment, and governance.
  3. §101 survivability through a concrete computational improvement, not mere recitation of the domain.
  4. Safety/regulatory lifecycle: intended use, operating bounds, human-in-the-loop, verification & validation, security, monitoring, rollback; map to the NIST AI RMF and use a PCCP where applicable.
  5. Upstream licenses compatible with outbound promises: field-of-use (FoU) terms, flow-downs, redistribution, export controls.
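The triage above (classify first, then gate on the five signals) can be sketched as a simple record. This is an illustrative sketch only; the class name, field names, and the all-or-nothing pass rule are assumptions, not part of the framework.

```python
from dataclasses import dataclass

TAXONOMY = ("core", "enabling", "bolt-on")  # fast taxonomy categories

@dataclass
class Disclosure:
    name: str
    taxonomy: str              # one of TAXONOMY
    benchmark_lift: bool       # signal 1: lift vs. credible baseline
    data_provenance: bool      # signal 2: rights, pipelines, governance
    s101_survivable: bool      # signal 3: concrete computational improvement
    lifecycle_plan: bool       # signal 4: V&V, monitoring, rollback
    license_stack_clear: bool  # signal 5: upstream terms compatible

    def signals_passed(self) -> int:
        # Count how many of the five signals clear.
        return sum([self.benchmark_lift, self.data_provenance,
                    self.s101_survivable, self.lifecycle_plan,
                    self.license_stack_clear])

    def triage(self) -> str:
        # Bolt-ons are noise by definition; everything else is gated
        # on clearing all five signals (an assumed, strict rule).
        if self.taxonomy == "bolt-on":
            return "deprioritize"
        if self.signals_passed() == 5:
            return "advance: match protection to where the value lives"
        return "gather evidence for missing signals"
```

In practice a TTO might weight the signals rather than require all five; the strict gate here just makes the decision logic explicit.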

Evidence pack (captured from the five signals)

As you answer the five signals, record the artifacts below. They form the evidence pack for licensing and diligence.

  • Model card & evaluation report — external test set, baseline, failure modes, hardware footprint.
  • Data provenance memo — sources, rights/consents, territoriality, retention, reproducible pipelines.
  • IP map — what's patentable vs. what stays a trade secret; document significant human contributions.
  • Change-management plan — monitoring, drift triggers, rollback; map to NIST AI RMF; use a PCCP if regulated.
  • License stack — all third-party models/data/libs and their terms.
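The artifact list above can double as a completeness check during diligence. A minimal sketch, assuming a dictionary keyed by artifact name; the key names and helper function are illustrative, not part of the tool.

```python
# Illustrative evidence-pack template; keys mirror the artifact list,
# field contents are assumptions for demonstration.
EVIDENCE_PACK = {
    "model_card": ["external test set", "baseline", "failure modes",
                   "hardware footprint"],
    "data_provenance_memo": ["sources", "rights/consents", "territoriality",
                             "retention", "reproducible pipelines"],
    "ip_map": ["patentable vs. trade secret", "human contributions"],
    "change_management_plan": ["monitoring", "drift triggers", "rollback",
                               "NIST AI RMF mapping", "PCCP if regulated"],
    "license_stack": ["third-party models", "data", "libraries", "terms"],
}

def missing_artifacts(pack: dict) -> list:
    """Return artifact names that are absent or empty in a candidate pack."""
    return [name for name in EVIDENCE_PACK if not pack.get(name)]
```

Running `missing_artifacts` against a partially assembled pack lists what diligence still needs before a licensing conversation.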

Maintained by sauriiiin · Last updated 2026-04-14