High-risk obligations take effect from August 2026

EU AI Act compliance,
backed by real cloud evidence.

Yaragudi reads your live AWS, GCP and Azure infrastructure and produces article-by-article compliance verdicts — using AI to reason over real evidence, not questionnaires.

layer2_report_2026-04-28.json · Live

Assessment

Acme Fraud Detection Platform
High-Risk · AWS · ap-southeast-2 · Score 56/100

Article 9 · Risk Management — PARTIAL · Claude confidence 78%
Article 10 · Data Governance — COMPLIANT · Claude confidence 91%
Article 11 · Tech Documentation — PARTIAL · Claude confidence 74%
Article 12 · Logging & Records — COMPLIANT · Claude confidence 88%
Article 13 · Transparency — PARTIAL · Claude confidence 69%
Article 14 · Human Oversight — NON-COMPLIANT · Claude confidence 83%
Article 15 · Accuracy & Security — COMPLIANT · Claude confidence 86%
Article 43 · Conformity — NON-COMPLIANT · Claude confidence 79%
Article 72 · Post-Market Monitor — PARTIAL · Claude confidence 81%

Reads real evidence directly from AWS, GCP, and Azure — analysed with Claude.

The problem

Self-attestation won't survive a regulator audit.

The EU AI Act is the world's first horizontal AI regulation. It applies to any organisation that puts an AI system on the EU market — wherever they're headquartered. Penalties reach €35 million or 7% of global turnover, and Article 43 conformity assessments require documented evidence, not a signed statement.

Aug 2026

High-risk obligations take effect in full

€35M

Maximum fine — or 7% of global turnover, whichever is higher

9 articles

Of evidence-driven obligations Yaragudi covers automatically

The old way

  • Quarterly questionnaires sent to engineering
  • Static PDFs that go stale within weeks
  • Big-4 consultants charging €25–80k per assessment
  • No link between policy documents and live infrastructure
  • Auditor asks for evidence — team scrambles for screenshots

The Yaragudi way

  • Live cloud scans — AWS, GCP, Azure — running on demand or on schedule
  • Article-by-article verdicts traced to exact resources and configurations
  • AI reasoning over real evidence, not multiple-choice surveys
  • Remediation plans with effort estimates and timelines
  • Audit pack ready to export the moment a regulator asks

How it works

From AI system to audit-ready report in three steps.

01

Classify the AI system

You describe the AI system in plain language. Claude maps it to one of the four EU AI Act risk tiers — Unacceptable, High, Limited, or Minimal — with reasoning and confidence so you can challenge the verdict.

~30 seconds
02

Scan your live cloud

Yaragudi connects to AWS, GCP, and Azure with read-only credentials and pulls evidence per article: encryption coverage, access controls, audit logging, monitoring schedules, model registries — anything a regulator would actually ask for.

~2 minutes per cloud
03

Get evidence-backed verdicts

Claude reasons over the evidence and returns a per-article verdict — Compliant, Partial, or Non-Compliant — with the specific gaps, remediation steps, effort estimates, and timelines. Export as JSON, DOCX or PDF for the audit pack.

Article 9 → 72

Coverage

Four pillars. Nine articles.
Real evidence on every verdict.

We mirror how the EU AI Act is organised — risk and governance first, then data, then operations, then conformity. Each pillar maps to the specific articles your DPA and auditor will actually ask about.

01 · Articles 9, 14

Risk & Governance

The safety culture and human oversight obligations every high-risk AI system must demonstrate before going live.

Article 9

Risk Management System

Continuous identification, evaluation, and mitigation of AI system risks across the lifecycle.

Article 14

Human Oversight

Effective human-in-the-loop controls — including override, monitoring, and stop mechanisms.

02 · Articles 10, 11

Data & Documentation

What you trained on, how it was governed, and the technical record auditors and the EU AI Office will request.

Article 10

Data Governance

Quality, relevance, and bias controls on training, validation, and testing datasets.

Article 11

Technical Documentation

Annex IV-aligned documentation kept current for the lifetime of the AI system.

03 · Articles 12, 13, 15

Operational Controls

Runtime guarantees — logging, transparency, accuracy and security — that prove the system stays within its declared envelope.

Article 12

Record Keeping & Logging

Automatic event logging sufficient for traceability and post-market investigation.

Article 13

Transparency & Information

Clear, accessible information for deployers about capabilities, limitations, and intended use.

Article 15

Accuracy, Robustness & Cybersecurity

Performance benchmarks, resilience to adversarial inputs, and protection against unauthorised access.

04 · Articles 43, 72

Conformity & Lifecycle

Pre-market conformity assessment plus the post-market monitoring that catches drift, incidents, and emerging risks.

Article 43

Conformity Assessment

Documented evidence that high-risk AI systems meet all applicable requirements before market placement.

Article 72

Post-Market Monitoring

Active monitoring for incidents, performance degradation, and emerging risks after deployment.

Why Yaragudi

Compliance tools chase paperwork.
We chase evidence.

Retrieval-Augmented Reasoning

Claude doesn't speculate — it reasons over evidence we retrieve from your live infrastructure. Every verdict is traceable to the resource it came from.

Multi-cloud native

First-class connectors for AWS, GCP and Azure — built for organisations whose AI systems span more than one cloud, not bolt-ons.

Article-by-article granularity

We don't summarise compliance into a vague green/red. Every Article from 9 to 72 gets its own verdict, gaps, and remediation plan.

Built on Claude

Powered by Anthropic's Claude — the same models trusted by enterprises for high-stakes reasoning. No fine-tuning on your data, ever.

Pricing

Transparent pricing.
No procurement runaround.

All plans include the full 9-article coverage and use Claude for evidence analysis. Annual billing available with 2 months free on Growth.

Pilot Scan

€7,500 one-time

A single end-to-end Layer 2 assessment of one AI system in one cloud — including remediation report and live walkthrough.

  • 1 AI system, 1 cloud (AWS, GCP or Azure)
  • Full classification + 9-article scan
  • Evidence-backed remediation plan
  • 60-min findings walkthrough with our team
  • DOCX + JSON audit pack
Start a pilot
Most popular

Growth

€3,500 / month

Continuous compliance for fast-moving teams. Up to 5 AI systems, all three clouds, monthly scans, alerts on drift.

  • Up to 5 AI systems
  • AWS + GCP + Azure all included
  • Monthly scheduled scans
  • Slack alerts on compliance drift
  • Quarterly review with our team
  • Email support, 1 business day SLA
Talk to sales

Enterprise

from €90,000 / year

For organisations with dozens of AI systems, complex multi-region footprints, and procurement-heavy compliance teams.

  • Unlimited AI systems
  • Continuous monitoring + custom scan cadence
  • SSO, SCIM, audit log export
  • Dedicated success manager
  • Jira / ServiceNow integrations
  • Custom SLAs and DPAs
Contact sales

Need on-prem deployment or air-gapped environments? Talk to us.

Security & data handling

Built to pass the security review,
not to slow it down.

Read-only credentials

Our cloud connectors require nothing more than read-only roles — ReadOnlyAccess on AWS, Viewer on GCP, Reader on Azure. We can scan but never modify your infrastructure.
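For AWS, a least-privilege alternative to the broad managed read-only policy is a custom policy scoped to the evidence checks you need. The statement below is an illustrative sketch with example actions, not Yaragudi's actual permission set:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyComplianceScan",
      "Effect": "Allow",
      "Action": [
        "s3:GetEncryptionConfiguration",
        "cloudtrail:DescribeTrails",
        "logs:DescribeLogGroups",
        "kms:ListKeys"
      ],
      "Resource": "*"
    }
  ]
}
```

All four actions are read-only API calls, so the scanner can inspect configurations but has no path to modify them.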

Evidence stays in your tenant

Scan results are streamed directly to your Yaragudi workspace. Optionally self-host the engine inside your own VPC for air-gapped environments.

No model training on your data

We use the Claude API with a no-training agreement. Your prompts, evidence, and reports are never used to train models — Anthropic's or anyone else's.

Encryption everywhere

TLS 1.3 in transit, AES-256 at rest. Per-customer encryption keys for Enterprise. Secrets stored in your cloud's native KMS, never in our database.

SOC 2 Type II in progress

We are in the active observation period for SOC 2 Type II with a Big-4 auditor. A Type I report is available today on request under NDA.

Data residency

Choose EU, UK or US data residency. Enterprise plans support customer-chosen residency for both compute and storage.

Frequently asked questions

Answers your CISO will ask first.

When do EU AI Act obligations actually take effect?

Prohibited practices have applied since February 2025, and general-purpose AI obligations since August 2025. The bulk of high-risk system obligations — including most of the articles Yaragudi covers — apply in full from 2 August 2026. If your AI system falls into a high-risk category (Annex III), that is your deadline to be evidence-ready.

Request access

Let's see if Yaragudi is the right fit.

Tell us a little about your AI footprint and your timeline. We reply personally within one business day, and the first call is a no-pitch 30-minute conversation focused on whether we can actually help.

Prefer email? hello@yaragudi.com

What we'll cover

Your AI estate, current compliance gaps, fit assessment, and pricing.

Who you'll meet

A founder. No SDR sales chain, no qualification scripts.

What happens next

If we're a fit, we propose a pilot. If we're not, we'll point you to who is.

Cloud(s) in use

We'll only use this to respond to your enquiry. No marketing emails, no third-party sharing.