
Trust Issues? Keep the Receipts: Verify Software Supply Chains and Spot Deepfakes Faster

By now, most security teams have accepted a grim reality: you can do everything “right” and still get burned by something you didn’t build, didn’t write, and didn’t even know you were running.

A dependency update. A container base image. A CI/CD plugin. A third-party library that went unmaintained, got hijacked, and quietly started shipping surprises.

And that’s just the software side.

On the communications side, a well-timed deepfake can trigger the kind of confusion that incident responders hate most: the kind where the problem isn’t malware, it’s people making decisions based on something that looks real.

DigitalERA’s 2026 Resolution #3 is about addressing both problems with one concept (one we’ll try to make digestible without stripping out its technical substance):

Build trust infrastructure: provenance for code, data, and media.
Real-life meaning: make authenticity cheap to verify and tampering expensive to pull off.

 

[Download our 90-Day Plan for Trust Infrastructure: to stop relying on trust-by-default for your most important systems and communications]

 

What’s “Trust Infrastructure,” Really?

Forget the buzzword. Trust infrastructure is the plumbing that helps you answer, quickly and defensibly, four key questions:

    • Did this build actually come from our pipeline?
    • Is this container image exactly what we tested and approved?
    • Can we trace this AI model to its training data and tuning steps?
    • Is this “CEO message” authentic, or the start of a very bad day?

It’s not one product. It’s a set of practices and controls that turns “I think” into “I can prove it.”

 

Why This Is Suddenly a Big Deal

Analyst shops like Gartner and Forrester have been pointing in the same direction: provenance is moving from “security nerd topic” to “audit, fraud, and risk topic.” A few forces are driving that shift:

1) Supply chain compromises keep paying off

Attackers go where trust is implicit. If they can slip into what you already accept as “safe,” they bypass a lot of your defenses.

2) AI systems bring new attack surfaces, and new evidence problems

AI pipelines have more ingredients: datasets, feature stores, open-source libraries, base models, fine-tunes, plugins. When something goes wrong, you need lineage, not vibes (not talking about vibe coding here).

3) Deepfakes and disinformation are now operational threats

The fastest way to cause damage isn’t always breaking into a network. Sometimes it’s breaking confidence: in executives, in brand channels, in internal workflows.

When a fake voice note “from the CFO” can move money, provenance becomes a control, not a curiosity.

 

The Three Building Blocks: SBOM/MLBOM, Attestation, and Media Provenance

1) SBOM + MLBOM: “What’s in this thing?”

An SBOM (Software Bill of Materials) is a standardized inventory of what’s inside your software: libraries, dependencies, versions.

An MLBOM (Machine Learning Bill of Materials) applies the same logic to AI: model versions, training datasets, evaluation sets, libraries, feature pipelines, and deployment artifacts.

Why it matters: You can’t manage supply chain risk if you can’t say what you’re running, especially in crown-jewel systems.

Technical examples (illustrative, and yes, a couple are endorsements):

    • SBOM standards: SPDX, CycloneDX
    • Common generation/scanning patterns: Syft/Grype, Trivy, SCA tooling integrated into CI
    • For binaries and legacy artifacts where “SBOMs are hard,” approaches like binary composition analysis can help uncover what’s actually inside shipped executables. This is where tools like Netrise can be useful from a technical standpoint: extracting component signals when source transparency isn’t available.

Real-life meaning: If you don’t know what you’re running, attackers do, and they’re counting on it.
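To make the SBOM idea concrete, here’s a minimal Python sketch that scans a toy CycloneDX-style document for components with known-bad versions. The component names, versions, and the `KNOWN_BAD` advisory map are all hypothetical; in a real pipeline the SBOM would come from a generator like Syft or Trivy and be matched against a live vulnerability feed.

```python
import json

# A toy CycloneDX-style SBOM snippet. Component names and versions
# here are hypothetical, for illustration only.
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "leftpad-ng", "version": "1.2.3"},
    {"type": "library", "name": "fastcrypto", "version": "0.9.1"}
  ]
}
"""

# Hypothetical advisory feed: component name -> known-bad versions.
KNOWN_BAD = {"fastcrypto": {"0.9.1", "0.9.2"}}

def affected_components(sbom_text: str) -> list[str]:
    """Return components whose pinned version appears in an advisory."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        if comp["version"] in KNOWN_BAD.get(comp["name"], set()):
            hits.append(f'{comp["name"]}@{comp["version"]}')
    return hits

print(affected_components(SBOM_JSON))  # → ['fastcrypto@0.9.1']
```

This is the whole point of keeping an SBOM: when the next advisory drops, “are we affected?” becomes a lookup, not an investigation.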

2) Attestation: “How was it built, and can we prove it?”

SBOMs tell you ingredients. Attestation tells you the chain of custody.

An SBOM tells you what’s inside an application. Attestation proves that a specific artifact came from a specific source, was built in an approved pipeline, and wasn’t swapped on the way to production.

In practice, teams lean on frameworks like SLSA (to mature build provenance) and in-toto-style metadata (to record what steps produced which artifacts), then enforce it with signing + verification and, increasingly, transparency-style logging.

The operational pattern is simple: builds and container images get signed, provenance is attached to releases, and deployments are blocked when signatures or attestations don’t verify.

Kubernetes is often where this becomes enforceable at scale. Admission controls and policy engines can require verified signatures and provenance, especially in crown-jewel namespaces, so only approved artifacts can run.

Translation for the rest of us: don’t just claim “we built it.” Make the software prove it before it gets anywhere near production.
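The sign-at-build, verify-at-deploy pattern can be sketched in a few lines of Python. HMAC stands in here for real asymmetric signing (think cosign and Sigstore), and the builder and repo fields are invented; the point is the shape of the control: provenance is recorded and signed at build time, and deployment is refused when either the signature or the artifact digest fails to verify.

```python
import hashlib
import hmac
import json

# HMAC is a stand-in for real asymmetric signing; key and fields are
# hypothetical, sketching the build-time/deploy-time split.
PIPELINE_KEY = b"ci-pipeline-signing-key"

def build_attestation(artifact: bytes) -> dict:
    """At build time: record the artifact digest plus provenance, then sign."""
    statement = {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "builder": "ci.example.internal/pipeline-42",   # hypothetical
        "source_repo": "git.example.internal/app",      # hypothetical
    }
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(PIPELINE_KEY, payload, hashlib.sha256).hexdigest()
    return {"statement": statement, "signature": sig}

def admit(artifact: bytes, attestation: dict) -> bool:
    """At deploy time: block unless signature and digest both verify."""
    payload = json.dumps(attestation["statement"], sort_keys=True).encode()
    expected = hmac.new(PIPELINE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["signature"]):
        return False  # provenance record was tampered with
    return (hashlib.sha256(artifact).hexdigest()
            == attestation["statement"]["artifact_sha256"])

image = b"approved container image bytes"
att = build_attestation(image)
print(admit(image, att))                   # genuine artifact: True
print(admit(b"swapped artifact", att))     # swapped on the way: False
```

In a Kubernetes setup, the `admit` step is what an admission controller or policy engine performs before a pod is allowed to schedule.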

3) Watermarking and Content Provenance: “Is this real?”

Deepfakes turned authenticity into a security problem, and it doesn’t belong to the SOC alone. Legal and Comms need a verification workflow as much as IR does, because the blast radius is financial, reputational, and operational.

The highest-risk targets are predictable: executive messages, customer payment/billing notices, public incident updates, investor relations, and official audio/video statements. The technical response is to establish content provenance (C2PA-style metadata where applicable), use watermarking when it makes operational sense, and cryptographically sign high-stakes public statements. Pair that with canonical “source of truth” channels so employees and customers know where to verify quickly.

Goal: make authenticity checks fast under pressure, so “is this real?” doesn’t turn into a 45-minute Slack debate while attackers cash out.
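As a minimal illustration of that “pause, verify, proceed” default, here’s a toy Python check against a canonical channel that publishes digests of official statements. Real deployments would use C2PA metadata or cryptographic signatures rather than a bare digest list, and the statement text here is invented.

```python
import hashlib

# Sketch of a "canonical channel": the org publishes SHA-256 digests of
# official statements on a trusted page. Statement text is hypothetical.
OFFICIAL_DIGESTS = {
    hashlib.sha256(b"Q3 incident update: services restored 14:00 UTC").hexdigest(),
}

def is_official(message: bytes) -> bool:
    """Pause-and-verify: does this message match a published digest?"""
    return hashlib.sha256(message).hexdigest() in OFFICIAL_DIGESTS

print(is_official(b"Q3 incident update: services restored 14:00 UTC"))  # True
print(is_official(b"URGENT: CFO needs a wire approved now"))            # False
```

The mechanism matters less than the workflow: anyone in the company should know where the source of truth lives and how to check against it in under a minute.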

 

The Outcome: What Leadership Actually Gets

Trust infrastructure won’t stop every attack. What it does is shrink two of the most expensive forms of chaos that modern organizations face.

The first is supply chain chaos: the frantic scramble to answer questions like, “Are we affected?” and “What exactly ran in production?” When artifacts are signed, attested, and traceable, incident response stops being guesswork. You can identify impacted systems faster, isolate what changed, and explain it without hand-waving.

The second is authenticity chaos: the moment when someone forwards a screenshot, an audio clip, or an email supposedly from a senior executive, and the organization has to decide whether to act. Provenance and verification workflows create a safer default: pause, verify, proceed with evidence.

Done well, this translates into outcomes executives understand immediately. Fraud and impersonation become harder to pull off. Incident forensics get faster and cleaner because you can prove what was deployed and when. Audits stop feeling like archaeology because the evidence trail exists by design, not by scramble.

In other words, trust infrastructure doesn’t just reduce risk. It reduces confusion, the kind attackers count on most.


TL;DR

Security teams love detection. Executives love assurances. Attackers love implicit trust.

Resolution #3 is about replacing assumptions with evidence:

Prove what you run. Prove what you ship. Prove what you say.

Because in 2026, the question won’t always be “Are we compromised?”

Sometimes it’ll be: “Is that even real?”