When Fair Pay Isn’t Simple: Designing Trust in a Complex AI Research System

Company

Prolific

Timeline

~2 weeks

Role

Product Designer

Tools

Figma, Miro, Notion & Qualtrics

When is pay fair in a system built on time, automation, and uncertainty?

In an AI research platform, small design choices around compensation had outsized effects on trust. This project looked at how fairness could be made clear and defensible within a complex, multi-stakeholder system.

At Prolific, researchers design studies that participants complete in exchange for compensation. A recurring issue began to surface: participants sometimes didn’t complete a full hour of work yet still received full payment. Researchers, on the other hand, struggled to balance fairness with increasingly tight budgets.

Solving this meant rethinking how compensation was calculated and communicated - in a way that felt fair and transparent to participants, while remaining sustainable for researchers.

This wasn’t just a billing problem. It was a trust problem.

Why this problem mattered

Some studies are made up of fixed tasks that don’t always take the full allotted time. Historically, participants were paid the full hourly rate regardless, which meant researchers were often paying for work that hadn’t been done — an unsustainable pattern at scale.

At the same time, participants reported feeling uneasy when receiving full pay for less time spent. Many weren’t sure whether they had “done the right thing”.

Fairness needed to exist not just in logic, but in perception.

Discovery: seeing all sides

To ground the work in reality, I looked beyond the interface:

  • Insights from the support team showed that participants frequently contacted Prolific with questions about payment and fairness

  • Concepting sessions with engineering clarified what was technically feasible within existing APIs and systems

From this, three core truths emerged:

  1. Researchers need to manage budgets sustainably

  2. Participants need to feel fairly treated and understand how their pay is calculated

  3. Some of the ambiguity in how payment was calculated lay outside our control

These truths became our north star.

Ideation & design: clarity over complexity

Rather than reinvent the compensation system, we took a lean, transparency-first approach.

We focused design efforts on the places where compensation appears most often:

  • Study cards

  • Study dashboard previews

  • Submission confirmation screens

My design goals were deliberately simple:

  • Clearly explain how rewards work

  • Use plain, human language

  • Stay within existing UI patterns

We developed Dynamic Rewards - a model where participants receive compensation proportional to the work completed, clearly surfaced at key moments of engagement.
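
To make the model concrete, here is a minimal sketch of how a proportional reward calculation could work, in TypeScript. The interface, the cap at the estimated time, and the minimum floor are illustrative assumptions on my part, not Prolific's actual implementation.

interface Submission {
  minutesSpent: number;      // time the participant actually spent
  estimatedMinutes: number;  // researcher's estimate for the full study
  hourlyRatePence: number;   // reward rate, in pence per hour
}

// Pay in proportion to time spent, capped at the full estimated reward and
// floored at a platform minimum so very short submissions are still paid fairly.
function dynamicReward(s: Submission, minimumPence = 0): number {
  const cappedMinutes = Math.min(s.minutesSpent, s.estimatedMinutes);
  const proportional = (cappedMinutes / 60) * s.hourlyRatePence;
  return Math.max(Math.round(proportional), minimumPence);
}

// Example: a 60-minute study at £9.00/hour, finished in 40 minutes,
// pays £6.00 rather than the full £9.00.
dynamicReward({ minutesSpent: 40, estimatedMinutes: 60, hourlyRatePence: 900 }); // 600

Capping at the estimated time keeps the experience unchanged for participants who use the full hour; only the unused time is treated differently.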

Through rapid iteration, we moved from cluttered, ambiguous labels to messaging participants quickly understood: what they would be paid, and why.
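
As an illustration of where that language could land, here is a sketch of the kind of plain, human confirmation message the model implies; the wording is my own reconstruction, not the copy that shipped.

// Illustrative copy only, not Prolific's shipped strings.
function confirmationMessage(minutesSpent: number, rewardPence: number): string {
  const pounds = (rewardPence / 100).toFixed(2);
  return `You spent ${minutesSpent} minutes on this study, so your reward is £${pounds}. ` +
    `Finishing early is fine: you're paid for the time you spend.`;
}

confirmationMessage(40, 600);
// "You spent 40 minutes on this study, so your reward is £6.00. Finishing early is fine: you're paid for the time you spend."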

Key takeaways

1. Less is more with compensation language
Over-explaining created confusion. Clear, minimal language at the right moments mattered most.

2. Collaboration needs structure
With many invested stakeholders, direction could drift quickly. Strong decision ownership and checkpoints kept momentum.

3. Fair systems must feel fair
Trust is emotional. A system can be logically fair, but if it doesn’t feel fair, it fails users.

Where we go next

The MVP laid a strong foundation. Next steps include:

  • Refining messaging based on real-world behaviour

  • Exploring broader engagement metrics beyond time

  • Adding automated guidance for researchers setting up Dynamic Rewards

This project pushed my thinking beyond UI - into ethics, clarity, and trust - and continues to shape how I approach complex experience problems.
