Testing & Sandbox

Sandbox Environment

Every developer account includes access to a fully isolated sandbox environment at sandbox.api.aiffinity.me. The sandbox mirrors the production API surface but uses synthetic user profiles, ephemeral storage, and rate limits that are relaxed for development.

// Point your SDK at the sandbox
import { ProviderPlatformClient } from '@aiffinity/provider-platform-sdk';

const client = new ProviderPlatformClient({
  clientId: process.env.AIFFINITY_SANDBOX_CLIENT_ID,
  clientSecret: process.env.AIFFINITY_SANDBOX_CLIENT_SECRET,
  environment: 'sandbox',
});

Creating Test Scenarios & Golden Fixtures

Golden fixtures are pre-defined request/response pairs that serve as the baseline for your capability tests. They ensure deterministic validation and catch regressions when you update your runtime.

Defining a fixture

Create a fixtures/ directory in your project root. Each fixture is a JSON file containing the input parameters and expected output:

// fixtures/current-weather-sunny.json
{
  "name": "Sunny day in Berlin",
  "capability": "current_weather",
  "input": {
    "params": { "location": "Berlin, DE" }
  },
  "expected": {
    "status": "ok",
    "data": {
      "location": "Berlin, DE",
      "temperature_c": 22,
      "condition": "sunny",
      "humidity_pct": 45
    }
  },
  "tolerances": {
    "temperature_c": { "delta": 5 },
    "humidity_pct": { "delta": 10 }
  }
}
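
The delta tolerance is an absolute-difference allowance on numeric fields. The helper below is an illustrative sketch of that comparison (my reading of the fixture format above, not the SDK's actual implementation):

```typescript
// Illustrative sketch of fixture comparison with per-field tolerances.
// A numeric field passes if |actual - expected| <= delta; any field
// without a tolerance entry must match exactly. This mirrors the
// fixture format above but is an assumption, not the SDK's real code.
type Tolerances = Record<string, { delta: number }>;

function fieldMatches(
  field: string,
  expected: unknown,
  actual: unknown,
  tolerances: Tolerances
): boolean {
  const tol = tolerances[field];
  if (tol && typeof expected === "number" && typeof actual === "number") {
    return Math.abs(actual - expected) <= tol.delta;
  }
  return expected === actual;
}

// temperature_c 26 vs expected 22 with delta 5: within tolerance
console.log(fieldMatches("temperature_c", 22, 26, { temperature_c: { delta: 5 } })); // true
// condition has no tolerance entry, so an exact match is required
console.log(fieldMatches("condition", "sunny", "cloudy", {})); // false
```

With the tolerances in the fixture above, a runtime reporting 26 °C still passes, while any deviation in `condition` fails.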

Running fixtures

# Validate all fixtures in the directory
npx @aiffinity/provider-platform-sdk test \
  --fixtures ./fixtures \
  --runtime http://localhost:3000

# Run a single fixture
npx @aiffinity/provider-platform-sdk test \
  --fixture ./fixtures/current-weather-sunny.json \
  --runtime http://localhost:3000

Generating fixtures from live data

You can snapshot a live response to create a baseline fixture automatically:

npx @aiffinity/provider-platform-sdk snapshot \
  --capability current_weather \
  --params '{"location":"Berlin, DE"}' \
  --runtime http://localhost:3000 \
  --output ./fixtures/current-weather-berlin.json

Conformance Suite

The conformance suite is a comprehensive validation framework that evaluates your package across 6 dimensions. Every package version must score at least 80 out of 100 to be eligible for submission.

| Dimension | What it checks | Max score |
| --- | --- | --- |
| Schema | JSON Schema validity, required fields, type correctness, enum compliance | 20 |
| Contracts | Capability descriptor completeness, surface guidance validity, refresh intervals | 20 |
| Performance | Response latency (p50 < 500ms, p99 < 2s), payload size limits, TTL compliance | 15 |
| Error Handling | Graceful degradation, correct error envelope format, retry headers, timeout behavior | 15 |
| Security | Auth token handling, no credential leakage in responses, HTTPS enforcement, webhook signature verification | 15 |
| Documentation | Capability descriptions, parameter documentation, example payloads, changelog presence | 15 |

The suite runs automatically when you call client.packages.validate() or use the CLI. Each dimension is scored independently, and no per-dimension minimum is enforced: a strong dimension can offset a weak one, as long as the aggregate score reaches 80.
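
The pass/fail rule can be sketched as follows (a minimal sketch; the dimension names mirror the table above, and the report shape is an assumption modeled on the SDK's validate() result):

```typescript
// Minimal sketch of the aggregate scoring rule: sum every dimension's
// earned points and compare against the 80-point submission threshold.
// DimensionScore/Report are assumed shapes, not the SDK's actual types.
interface DimensionScore {
  earned: number;
  max: number;
}

type Report = Record<string, DimensionScore>;

function aggregateScore(report: Report): number {
  return Object.values(report).reduce((sum, d) => sum + d.earned, 0);
}

function passesConformance(report: Report, minScore = 80): boolean {
  // No per-dimension minimum: only the aggregate is checked.
  return aggregateScore(report) >= minScore;
}

// Example using the per-dimension maximums from the table above.
const report: Report = {
  schema: { earned: 18, max: 20 },
  contracts: { earned: 20, max: 20 },
  performance: { earned: 14, max: 15 },
  errorHandling: { earned: 12, max: 15 },
  security: { earned: 15, max: 15 },
  documentation: { earned: 13, max: 15 },
};

console.log(aggregateScore(report)); // 92
console.log(passesConformance(report)); // true
```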

Local Testing with CLI

The fastest way to validate during development is to run the conformance suite locally against your capability descriptor file:

# Validate a local descriptor (no running runtime needed)
npx @aiffinity/provider-platform-sdk validate --local ./capability.json

Local validation checks the schema, contracts, and documentation dimensions (up to 55 points). It does not exercise performance, error handling, or security — those require a running runtime.

Full local validation with a running runtime

# Start your runtime in one terminal
node handler.js

# Run the full suite against it
npx @aiffinity/provider-platform-sdk validate \
  --local ./capability.json \
  --runtime http://localhost:3000 \
  --full

The --full flag exercises all 6 dimensions by sending synthetic requests to your runtime endpoint, measuring latency, testing error paths, and verifying auth handling.

CI integration

# GitHub Actions example
- name: Run conformance suite
  run: |
    npx @aiffinity/provider-platform-sdk validate \
      --local ./capability.json \
      --runtime http://localhost:3000 \
      --full \
      --min-score 80 \
      --output json > conformance-report.json

- name: Check threshold
  run: |
    SCORE=$(jq '.score' conformance-report.json)
    if [ "$SCORE" -lt 80 ]; then
      echo "Conformance score $SCORE is below threshold (80)"
      exit 1
    fi

Remote Testing

Once your runtime is deployed, run the conformance suite against the deployed endpoint to validate real-world conditions:

# Test against a deployed runtime
npx @aiffinity/provider-platform-sdk validate \
  --package-id $PACKAGE_ID \
  --version-id $VERSION_ID \
  --runtime https://weather.acme.com/aiffinity \
  --full

Remote testing exercises your runtime under production-realistic conditions. The same validation can also be run from the SDK:

// SDK approach for remote testing
const result = await client.packages.validate(packageId, versionId, {
  runtimeUrl: 'https://weather.acme.com/aiffinity',
  full: true,
});

console.log(`Score: ${result.score}/100`);
console.log(`Passed: ${result.passed}`);
for (const [dim, score] of Object.entries(result.dimensions)) {
  console.log(`  ${dim}: ${score.earned}/${score.max}`);
}

Certification Scores

After the conformance suite completes, you receive a detailed certification report. Here is an example of a passing report:

| Dimension | Earned | Max | Status |
| --- | --- | --- | --- |
| Schema | 18 | 20 | Pass |
| Contracts | 20 | 20 | Pass |
| Performance | 14 | 15 | Pass |
| Error Handling | 12 | 15 | Pass |
| Security | 15 | 15 | Pass |
| Documentation | 13 | 15 | Pass |

Total: 92/100 (Certified)

Each dimension includes a breakdown of individual checks with pass/fail results and actionable feedback. Access the full report from the developer console or export it as JSON:

npx @aiffinity/provider-platform-sdk validate \
  --package-id $PACKAGE_ID \
  --version-id $VERSION_ID \
  --output json > report.json

# View dimension breakdowns
jq '.dimensions[] | {name, earned, max, checks: [.checks[] | select(.passed == false)]}' report.json

Trace Store for Debugging

Every capability execution in the sandbox is recorded in the trace store. Traces capture the full lifecycle of a request: inbound parameters, runtime invocation, response payload, and timing metadata.

Trace IDs look like trc_8f2a1b4c-9d3e-4f5a-b6c7-d8e9f0a1b2c3.

Viewing traces

# List recent traces for a capability
npx @aiffinity/provider-platform-sdk traces list \
  --capability current_weather \
  --limit 20

# View a specific trace
npx @aiffinity/provider-platform-sdk traces get \
  --trace-id trc_8f2a1b4c-9d3e-4f5a-b6c7-d8e9f0a1b2c3

SDK trace inspection

const traces = await client.sandbox.traces.list({
  capability: 'current_weather',
  limit: 20,
  since: '2026-04-01T00:00:00Z',
});

for (const trace of traces.items) {
  console.log(`[${trace.id}] ${trace.status} ${trace.duration_ms}ms`);

  // Inspect request/response pair
  const detail = await client.sandbox.traces.get(trace.id);
  console.log('  Request:', JSON.stringify(detail.request.params));
  console.log('  Response:', JSON.stringify(detail.response.data));

  // Check for errors
  if (detail.error) {
    console.log('  Error:', detail.error.type, detail.error.detail);
  }
}

Trace contents

Each trace record contains:

- the trace ID (for example trc_8f2a1b4c-9d3e-4f5a-b6c7-d8e9f0a1b2c3)
- execution status and duration in milliseconds
- the inbound request parameters
- the runtime's response payload
- error details (type and detail), when the execution failed
Regression Analysis

When you submit a new version of your package, Aiffinity automatically runs drift detection against the previous published version. This catches unintentional breaking changes before they reach users.

What drift detection checks

Drift detection compares the new version's capability descriptors and schemas against the previously published version, classifying each change as either breaking or a warning and reporting the affected path with a description.

# Run drift detection manually between two versions
npx @aiffinity/provider-platform-sdk drift \
  --package-id $PACKAGE_ID \
  --from-version $OLD_VERSION_ID \
  --to-version $NEW_VERSION_ID

// SDK drift detection
const drift = await client.packages.detectDrift(packageId, {
  fromVersion: oldVersionId,
  toVersion: newVersionId,
});

console.log(`Breaking changes: ${drift.breaking.length}`);
console.log(`Warnings: ${drift.warnings.length}`);

for (const change of drift.breaking) {
  console.log(`  [BREAKING] ${change.path}: ${change.description}`);
}
for (const warning of drift.warnings) {
  console.log(`  [WARNING] ${warning.path}: ${warning.description}`);
}

Drift detection results are included in the review submission. Packages with breaking changes require explicit acknowledgment and a migration guide before they can be approved.
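
As an illustration of the breaking/warning distinction (my example classification, not the platform's actual rules): a parameter that becomes newly required breaks existing callers, while a relaxed requirement is merely worth a warning.

```typescript
// Illustrative sketch in the spirit of the drift report above: diff the
// required-parameter lists of two versions and classify each change.
// The rule used here (newly required = breaking, no longer required =
// warning) is an assumption, not Aiffinity's actual algorithm.
interface Change {
  severity: "breaking" | "warning";
  path: string;
  description: string;
}

function diffRequiredParams(oldRequired: string[], newRequired: string[]): Change[] {
  const changes: Change[] = [];
  for (const p of newRequired) {
    if (!oldRequired.includes(p)) {
      // Existing callers do not send this parameter yet.
      changes.push({
        severity: "breaking",
        path: `params.${p}`,
        description: `parameter "${p}" is now required`,
      });
    }
  }
  for (const p of oldRequired) {
    if (!newRequired.includes(p)) {
      // Relaxing a requirement does not break existing calls.
      changes.push({
        severity: "warning",
        path: `params.${p}`,
        description: `parameter "${p}" is no longer required`,
      });
    }
  }
  return changes;
}

const changes = diffRequiredParams(["location"], ["location", "units"]);
console.log(changes);
```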

Running Tests via SDK

The SDK provides a programmatic interface for all testing operations, making it easy to integrate into your existing test frameworks.

Jest / Vitest integration

import { describe, it, expect } from 'vitest';
import { ProviderPlatformClient } from '@aiffinity/provider-platform-sdk';

const client = new ProviderPlatformClient({
  clientId: process.env.AIFFINITY_SANDBOX_CLIENT_ID!,
  clientSecret: process.env.AIFFINITY_SANDBOX_CLIENT_SECRET!,
  environment: 'sandbox',
});

describe('Weather capability conformance', () => {
  it('should pass the full conformance suite', async () => {
    const result = await client.packages.validate(
      process.env.PACKAGE_ID!,
      process.env.VERSION_ID!,
      { full: true }
    );

    expect(result.score).toBeGreaterThanOrEqual(80);
    expect(result.passed).toBe(true);
  });

  it('should score at least 15/20 on schema', async () => {
    const result = await client.packages.validate(
      process.env.PACKAGE_ID!,
      process.env.VERSION_ID!,
    );

    expect(result.dimensions.schema.earned).toBeGreaterThanOrEqual(15);
  });

  it('should meet performance thresholds', async () => {
    const result = await client.packages.validate(
      process.env.PACKAGE_ID!,
      process.env.VERSION_ID!,
      { full: true, runtimeUrl: process.env.RUNTIME_URL! }
    );

    expect(result.dimensions.performance.earned).toBeGreaterThanOrEqual(12);
  });

  it('should have no breaking drift from previous version', async () => {
    const drift = await client.packages.detectDrift(
      process.env.PACKAGE_ID!,
      {
        fromVersion: process.env.PREV_VERSION_ID!,
        toVersion: process.env.VERSION_ID!,
      }
    );

    expect(drift.breaking).toHaveLength(0);
  });
});

Fixture testing via SDK

import { loadFixtures, runFixtures } from '@aiffinity/provider-platform-sdk/testing';

const fixtures = await loadFixtures('./fixtures');
const results = await runFixtures(client, fixtures, {
  runtimeUrl: 'http://localhost:3000',
});

for (const r of results) {
  console.log(`${r.passed ? 'PASS' : 'FAIL'} ${r.fixture.name}`);
  if (!r.passed) {
    console.log(`  Expected: ${JSON.stringify(r.expected)}`);
    console.log(`  Received: ${JSON.stringify(r.actual)}`);
    console.log(`  Diff: ${JSON.stringify(r.diff)}`);
  }
}