Research
Frameworks for solving human–AI alignment
The frameworks below—Codex, Cortex, Assistant, Temporal—address one problem: how to let AI learn without eroding human agency. Each tackles a specific aspect of that problem.
Together, they form a system for building tools that keep people in the decision loop. We introduce them here as research tools. Their practical applications appear in our products.
Codex
A framework for encoding human intentions into AI systems and verifying that behavior matches those intentions.
Problem it solves
AI systems often behave differently than intended, even with clear instructions. This mismatch causes harm in healthcare, finance, and critical infrastructure.
Approach
Codex encodes human values, organizational goals, and ethical boundaries into AI systems. It includes mechanisms that check whether behavior matches declared intentions. When behavior drifts, it alerts operators so they can intervene.
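The encode-then-verify loop described above can be sketched in a few lines. This is a minimal illustration, not Codex itself: the `Intent` and `IntentRegistry` names, the constraint-as-predicate representation, and the example spending rule are all hypothetical.

```python
# Hypothetical sketch: declared intentions become checkable constraints,
# and every proposed action is verified against them before it runs.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Intent:
    name: str
    check: Callable[[dict], bool]  # True if the action satisfies this intent

@dataclass
class IntentRegistry:
    intents: list[Intent] = field(default_factory=list)

    def declare(self, name: str, check: Callable[[dict], bool]) -> None:
        self.intents.append(Intent(name, check))

    def verify(self, action: dict) -> list[str]:
        """Return the names of declared intents this action violates."""
        return [i.name for i in self.intents if not i.check(action)]

registry = IntentRegistry()
# Illustrative organizational boundary: never spend beyond the approved limit.
registry.declare(
    "no_unapproved_spend",
    lambda a: a.get("amount", 0) <= a.get("approved_limit", 0),
)

violations = registry.verify({"amount": 500, "approved_limit": 100})
# violations == ["no_unapproved_spend"] -> alert an operator to intervene
```

A non-empty violation list is the "drift" signal: rather than silently proceeding, the system surfaces the mismatch to a human.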
Cortex
Real environments where we test alignment protocols with real stakeholders under real pressure.
Problem it solves
Alignment research often happens in isolation. Researchers rarely see how their protocols perform with actual stakeholders, under operational pressure, in live environments.
Approach
Cortex environments are real operations—restaurants, government offices, infrastructure projects—where we deploy alignment tools and refine them based on operator feedback and observed behavior.
Assistant
Interfaces that show operators what AI systems are doing and enable intervention when behavior drifts.
Problem it solves
AI systems often operate as black boxes. Operators cannot see what is happening, cannot intervene when needed, and cannot maintain accountability.
Approach
Assistant provides dashboards that show system behavior in human-readable ways. It enables operators to pause, correct, or override AI behavior. It keeps human judgment central, not peripheral.
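The pause/correct/override pattern can be illustrated with a small sketch. Everything here is hypothetical (the `OperatorConsole` name, the log format, the lambda-wrapped action); it only shows the shape of the idea: actions are recorded in human-readable form, and a paused console holds them for review instead of executing.

```python
# Hypothetical operator control surface: log every AI action legibly,
# and let a human pause execution at any time.
from enum import Enum
from typing import Callable, Optional

class Mode(Enum):
    RUNNING = "running"
    PAUSED = "paused"

class OperatorConsole:
    def __init__(self) -> None:
        self.mode = Mode.RUNNING
        self.log: list[str] = []

    def record(self, action: str, reason: str) -> None:
        self.log.append(f"{action}: {reason}")  # human-readable audit trail

    def pause(self) -> None:
        self.mode = Mode.PAUSED

    def resume(self) -> None:
        self.mode = Mode.RUNNING

    def execute(self, action: str, reason: str, run: Callable[[], str]) -> Optional[str]:
        self.record(action, reason)
        if self.mode is Mode.PAUSED:
            return None  # held for operator review, not executed
        return run()

console = OperatorConsole()
result = console.execute("send_invoice", "monthly billing run", lambda: "sent")
console.pause()
held = console.execute("send_invoice", "retry", lambda: "sent")
# result == "sent"; held is None because the operator paused the system
```

The design choice worth noting: the human's judgment sits in the execution path itself, not in an after-the-fact report.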
Temporal
Protocols for maintaining alignment as systems evolve, contexts change, and new risks emerge.
Problem it solves
Alignment is not a one-time setup. Systems drift over time. Contexts change. New failure modes emerge. Static alignment mechanisms become outdated.
Approach
Temporal provides monitoring systems and update protocols that keep alignment mechanisms current. It tracks behavior over time, detects drift, and alerts operators when mechanisms need updates.
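One simple way to track behavior over time and detect drift is to compare a rolling mean of some behavior metric against a fixed baseline. The sketch below assumes a single scalar metric and a hypothetical `DriftMonitor` class; real drift detection would involve richer statistics, but the alert-on-divergence shape is the same.

```python
# Hypothetical drift monitor: alert when the recent mean of a behavior
# metric diverges from its baseline by more than a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, threshold: float, window: int = 50):
        self.baseline = baseline
        self.threshold = threshold
        self.recent: deque[float] = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record one observation; return True if drift exceeds the threshold."""
        self.recent.append(value)
        mean = sum(self.recent) / len(self.recent)
        return abs(mean - self.baseline) > self.threshold

monitor = DriftMonitor(baseline=0.9, threshold=0.05, window=10)
drifted = any(monitor.observe(v) for v in [0.91, 0.89, 0.70, 0.68, 0.66])
# drifted is True: by the third observation the rolling mean has moved
# more than 0.05 away from the 0.9 baseline, so operators are alerted
```

A `True` return is the cue to notify operators that the alignment mechanism itself may need an update.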
How this connects to the product
These frameworks inform the tools we build. For example:
- Codex principles power our alignment monitoring tools
- Cortex environments are our research laboratories
- Assistant interfaces power our dashboards and intervention mechanisms
- Temporal protocols ensure our systems maintain alignment over time
If you are evaluating Open People as a partner or investor, these frameworks show our research depth. If you are a potential user, see our Product page for what you can use today.