Product

What can someone actually use today, and what can they expect soon?

Open People builds systems that keep people in the decision loop. Below is what is in development, what is live today, and what is experimental.

Every tool includes three capabilities: people can see what AI is doing, people can correct AI when it drifts, and people keep final authority over outcomes. All three put human oversight ahead of automation.

In Development

Alignment Monitoring Tools

Tools that show operators what an AI system is doing, let them correct it when it drifts, and keep final authority over outcomes in human hands. A minimal sketch of the pattern follows the capability list.

Capabilities

  • Show operators what AI systems are doing in real time
  • Enable operators to pause or correct AI behavior
  • Track behavior over time to detect when AI drifts
  • Generate reports that show how AI behavior matches human intent
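
To make the pattern concrete, here is a minimal sketch of such an oversight loop in Python. The `Monitor` class, its drift threshold, and the `intent_score` input are illustrative assumptions, not Open People's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an oversight loop: every AI action is logged,
# scored against operator intent, and held for review when it drifts.

@dataclass
class ActionRecord:
    timestamp: datetime
    action: str
    intent_score: float  # 1.0 = matches operator intent, 0.0 = full drift

@dataclass
class Monitor:
    drift_threshold: float = 0.7  # below this, pause and ask a human
    history: list[ActionRecord] = field(default_factory=list)

    def observe(self, action: str, intent_score: float) -> bool:
        """Log an action in real time; return False if it should be paused."""
        record = ActionRecord(datetime.now(timezone.utc), action, intent_score)
        self.history.append(record)
        return intent_score >= self.drift_threshold

    def report(self) -> str:
        """Summarize how well behavior matched human intent over time."""
        if not self.history:
            return "no actions observed"
        mean = sum(r.intent_score for r in self.history) / len(self.history)
        paused = sum(r.intent_score < self.drift_threshold for r in self.history)
        return f"{len(self.history)} actions, mean intent score {mean:.2f}, {paused} paused for review"

monitor = Monitor()
if not monitor.observe("reorder supplier inventory", intent_score=0.45):
    print("paused for operator review")  # the person, not the AI, decides what happens next
print(monitor.report())
```

The shape is the point: the monitor never acts on its own. It logs, scores, and hands the decision back to a person.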

Live

Research Environments

Working environments where we test alignment tools with real stakeholders. Our flagship environment is a fine-dining restaurant that operates as a research laboratory.

Capabilities

  • Test alignment protocols in real operations
  • Collect data on how tools perform under pressure
  • Refine tools based on operator feedback
  • Apply findings across domains (dining, government, infrastructure), as sketched below
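
To illustrate the last point, here is a sketch of one oversight check reused across domains, where only the intent rules change. The domain rules, action strings, and function names are invented for this example.

```python
# Hypothetical illustration of one alignment protocol reused across
# domains: the oversight check stays the same, only the rules change.

DOMAIN_RULES = {
    "dining":         lambda action: "comp" not in action,
    "government":     lambda action: "citizen data" not in action,
    "infrastructure": lambda action: "shutdown" not in action,
}

def requires_review(domain: str, action: str) -> bool:
    """True when a human must sign off before the action proceeds."""
    return not DOMAIN_RULES[domain](action)

for domain, action in [
    ("dining", "comp table 12 dessert"),
    ("government", "summarize permit backlog"),
    ("infrastructure", "schedule shutdown of pump 3"),
]:
    status = "hold for operator" if requires_review(domain, action) else "proceed"
    print(f"{domain}: {action} -> {status}")
```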

Experimental

Executive Decision Frameworks

Frameworks that help executives make decisions about AI deployment, risk assessment, and governance. A sketch of one such rubric follows the capability list.

Capabilities

  • Translate safety research into decision criteria
  • Assess risks of deploying AI in specific contexts
  • Design governance structures for high-stakes deployments
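
One hypothetical form such a framework can take is a weighted rubric. The criteria, weights, and thresholds below are invented for illustration and are not a published Open People framework.

```python
# Hypothetical rubric: score a proposed AI deployment against weighted
# criteria derived from safety research. All values here are invented.

CRITERIA = {
    "reversibility":     0.30,  # can a human undo the AI's decisions?
    "observability":     0.25,  # can operators see what it is doing?
    "blast_radius":      0.25,  # how many people does a failure touch?
    "operator_training": 0.20,  # are the humans in the loop prepared?
}

def assess(scores: dict[str, float]) -> str:
    """scores maps each criterion to 0.0 (worst) through 1.0 (best)."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    total = sum(CRITERIA[name] * scores[name] for name in CRITERIA)
    if total >= 0.75:
        return f"deploy with standard oversight (score {total:.2f})"
    if total >= 0.50:
        return f"deploy only with enhanced human review (score {total:.2f})"
    return f"do not deploy in this context (score {total:.2f})"

print(assess({
    "reversibility": 0.9,
    "observability": 0.6,
    "blast_radius": 0.4,
    "operator_training": 0.8,
}))
```

The thresholds are the governance lever: an executive team can tighten or loosen them without touching the criteria themselves.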

For detailed research frameworks and technical documentation, see our Research page.