DF-002 · Design Framework

1.0 What to Measure (and Why)

In AI-shaped work, the question is no longer “Can someone use AI tools?” Most people can. The differentiator is whether a person can collaborate with AI while governing consequences — producing valuable outcomes while strengthening judgment instead of outsourcing it.

The latest thinking at Coincentives Labs: AI fluency should be measured as governance quality in human–AI collaboration — not as recall, tool lists, or output polish.


2.0 The AI Fluency Engine

The AI Fluency Engine is our current framework for describing and evaluating collaboration quality with AI. It combines: (1) four governance functions (what is governed), and (2) three evolutionary phases (how mature the governance posture is).

[Figure] AI Fluency Engine — four governance functions across three evolutionary phases: a practical map of what is governed (functions) and how governance matures (phases).

2.1 The Four Governance Functions (What is governed)

Governance functions describe the recurring places where consequences are shaped in the collaboration: scope, substance, risk, and durable value.

  • Communicate governs scope: intent, constraints, boundaries, and “done.”
  • Co-Create governs substance: exploration and synthesis that expand possibilities without chaos.
  • Challenge governs risk: validity, assumptions, trade-offs, and uncertainty.
  • Curate governs durability: reusable artifacts, decision rules, and embedded value.

2.2 The Three Evolutionary Phases (How governance matures)

Phases describe an evolution of posture, not ranks: Curious (exploratory), Confident (reliable), and Certified (proof-ready).


3.0 The Engine in Action: Functions Across Phases

Below is a practical view of what each governance function looks like as it matures. The descriptions reflect consequence governance on two fronts: external outcomes (quality, risk, impact) and internal outcomes (augmentation rather than offloading).

a. Communicate — Governing Scope and Intent

Communicate governs the scope of consequences: what is being attempted, under what constraints, for whom, and what “done” means.

  • Curious: experiments with framing; tests how different constraints change outcomes.
  • Confident: consistently specifies intent, constraints, priorities, and non-goals before generation.
  • Certified: produces scope that others can reuse—clear decision context, traceable rationale, and auditable “done.”

b. Co-Create — Governing Substance and Possibility

Co-Create governs the substance of consequences: controlled expansion of options and synthesis into usable direction.

  • Curious: explores multiple approaches; uses AI to expand the space without committing too early.
  • Confident: generates credible alternatives and selects with explicit criteria and trade-offs.
  • Certified: synthesizes best elements into a coherent plan others can execute—not just a list of options.

c. Challenge — Governing Risk, Validity, and Soundness

Challenge governs the validity of consequences: assumptions, uncertainty, evidence, and risk. This is where augmentation is protected and offloading is prevented.

  • Curious: notices inconsistencies and probes uncertainty; asks “what could be wrong?”
  • Confident: separates fact from inference; demands checks; corrects errors rather than accepting plausibility.
  • Certified: establishes defensible decision rules—what evidence is sufficient, what risks are acceptable, and why.

d. Curate — Governing Durability and Embedded Value

Curate governs whether collaboration creates durable value: reusable artifacts, institutional memory, and workflows that preserve judgment rather than replace it.

  • Curious: identifies outputs worth preserving; begins documenting decisions and constraints.
  • Confident: converts results into reusable checklists, templates, and next-action plans.
  • Certified: produces repeatable systems: reusable artifacts + principles/decision rules that transfer across contexts.
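For teams that want to apply the matrix above systematically (e.g. in a rubric or assessment tool), the two axes can be encoded as a small data model. The sketch below is illustrative, not part of the framework itself; in particular, the conservative roll-up rule (overall posture equals the least mature function) is an assumption chosen for this example, and the rubric text is abbreviated from Section 3.0.

```python
from enum import Enum

class Function(Enum):
    """The four governance functions: what is governed."""
    COMMUNICATE = "scope"
    CO_CREATE = "substance"
    CHALLENGE = "risk"
    CURATE = "durability"

class Phase(Enum):
    """The three evolutionary phases: how governance matures."""
    CURIOUS = 1      # exploratory
    CONFIDENT = 2    # reliable
    CERTIFIED = 3    # proof-ready

# One descriptor per (function, phase) cell, abbreviated from Section 3.0.
RUBRIC = {
    (Function.COMMUNICATE, Phase.CERTIFIED):
        "Reusable scope: decision context, rationale, auditable 'done'",
    (Function.CHALLENGE, Phase.CONFIDENT):
        "Separates fact from inference; demands checks; corrects errors",
    # ... remaining cells follow the same pattern
}

def overall_posture(scores: dict[Function, Phase]) -> Phase:
    """Assumed roll-up rule: a collaboration posture is only as mature
    as its least mature governance function."""
    return min(scores.values(), key=lambda phase: phase.value)

# Example: strong scoping and curation cannot compensate for weak challenge.
scores = {
    Function.COMMUNICATE: Phase.CERTIFIED,
    Function.CO_CREATE: Phase.CONFIDENT,
    Function.CHALLENGE: Phase.CURIOUS,
    Function.CURATE: Phase.CONFIDENT,
}
print(overall_posture(scores).name)  # the least mature function caps the posture
```

The min-based roll-up makes the rubric hard to game: excelling at Curate while skipping Challenge still reads as Curious, which matches the framework's emphasis on preventing offloading.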

4.0 How to Use the AI Fluency Engine

This framework is meant to turn vague claims into defensible proof. Whether you are hiring or job searching, you can use it as a checklist: for each governance function, ask which phase the available evidence actually supports.

Turn doctrine into evidence

We measure AI fluency as governed collaboration, and we turn that measurement into evidence (and optional proof-of-skill) that holds up even when people optimize for the metric.