Reward Strategies
Design reward mechanisms that incentivize quality contributions and drive engagement.
Overview
As the platform operator, you decide how many Activity Points (or custom tokens) to award for each prompt. ProjectZero provides the infrastructure; you provide the reward logic.
Key Principle
Reward amounts should reflect the value a prompt brings to your platform. High-quality, detailed prompts that generate better LLM responses should earn more rewards than simple queries.
Factors to Consider
Common factors that can influence reward amounts:
Prompt Quality
Detailed, well-structured prompts that produce high-quality LLM outputs
- Length and detail level
- Clarity and specificity
- Technical complexity
User Contribution History
Reward consistent, high-value contributors more generously
- Total prompts submitted
- Historical quality scores
- User tier/status
Engagement Metrics
Track how users interact with LLM responses
- Response upvotes/ratings
- Follow-up questions
- Session duration
Platform Goals
Align rewards with your business objectives
- Encourage specific prompt types
- Drive activity during slow periods
- Reward early adopters
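How you combine these factors is a policy decision; ProjectZero does not prescribe a formula. As a rough sketch, one option is to normalize each factor to a 0-1 score and blend them with weights that reflect your priorities. The factor names, weights, and the 10-100 AP range below are illustrative assumptions, not part of the ProjectZero API.
// Illustrative only: factor names, weights, and the AP range are assumptions
interface RewardFactors {
  promptQuality: number      // 0-1, e.g. from length/clarity heuristics or an ML score
  contributorHistory: number // 0-1, e.g. normalized tier or historical quality
  engagement: number         // 0-1, e.g. normalized ratings on past responses
  platformPriority: number   // 0-1, e.g. 1 for prompt types you want to encourage
}
function combineFactors(factors: RewardFactors): number {
  // Weights are a policy choice and should sum to 1
  const weights = { promptQuality: 0.5, contributorHistory: 0.2, engagement: 0.2, platformPriority: 0.1 }
  const score =
    factors.promptQuality * weights.promptQuality +
    factors.contributorHistory * weights.contributorHistory +
    factors.engagement * weights.engagement +
    factors.platformPriority * weights.platformPriority
  // Map the 0-1 composite score onto a 10-100 AP reward
  return Math.floor(10 + score * 90)
}
With every factor at its midpoint this yields 55 AP; tune the weights until typical prompts land near your budgeted average.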
Example Strategies
1. Fixed Reward (Simple)
Award the same amount for every prompt. Easiest to implement, but doesn't incentivize quality.
function calculateReward(prompt: string): number {
  return 25 // Fixed 25 AP for every prompt
}
Best for: MVP launches and simple platforms where all prompts have similar value.
2. Length-Based (Basic Quality Proxy)
Reward based on prompt length, assuming longer prompts are more detailed.
function calculateReward(prompt: string): number {
  const length = prompt.length
  if (length < 50) return 10   // Short query
  if (length < 200) return 25  // Medium detail
  if (length < 500) return 50  // Detailed prompt
  return 100                   // Very detailed
}
Pros: Simple, encourages detail. Cons: Can be gamed with filler text.
3. ML-Based Quality Score
Use machine learning to predict prompt quality based on features.
async function calculateReward(prompt: string): Promise<number> {
  // Extract features
  const features = {
    length: prompt.length,
    wordCount: prompt.split(' ').length,
    hasQuestionMark: prompt.includes('?'),
    avgWordLength: calculateAvgWordLength(prompt),
    technicalTerms: countTechnicalTerms(prompt),
  }
  // ML model predicts quality score (0-100)
  const qualityScore = await mlModel.predict(features)
  // Map quality to reward
  const baseReward = 10
  const bonusReward = Math.floor(qualityScore * 0.9) // 0-90 bonus
  return baseReward + bonusReward // 10-100 AP
}
Best for: Mature platforms with labeled training data for quality assessment.
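The snippet assumes an mlModel client plus two feature helpers that are not shown. A minimal sketch of those helpers, assuming a simple keyword list stands in for real technical-term detection, might look like this:
// Hypothetical helpers for the feature extraction above
function calculateAvgWordLength(prompt: string): number {
  const words = prompt.split(/\s+/).filter((w) => w.length > 0)
  if (words.length === 0) return 0
  return words.reduce((sum, w) => sum + w.length, 0) / words.length
}
function countTechnicalTerms(prompt: string): number {
  // Assumed keyword list; tailor it to your domain
  const terms = ['api', 'async', 'database', 'algorithm', 'regression', 'kubernetes']
  const lower = prompt.toLowerCase()
  return terms.filter((t) => lower.includes(t)).length
}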
4. User Tier-Based
Reward based on user contribution history and tier.
interface User {
  tier: 'bronze' | 'silver' | 'gold' | 'platinum'
  totalPrompts: number
  avgQualityScore: number
  consecutiveDays: number // days in a row with at least one prompt, used for the streak bonus
}
function calculateReward(prompt: string, user: User): number {
  const baseReward = 25
  // Tier multipliers
  const multipliers = {
    bronze: 1.0,
    silver: 1.2,
    gold: 1.5,
    platinum: 2.0,
  }
  const tierBonus = baseReward * (multipliers[user.tier] - 1)
  // Streak bonus (consecutive days)
  const streakBonus = user.consecutiveDays * 2
  return Math.floor(baseReward + tierBonus + streakBonus)
}
5. Engagement-Based (Post-Mint)
Initial reward + bonus based on response quality/engagement.
// Initial mint
const initialReward = 25
mintPrompt(promptHash, author, initialReward)
// Track engagement
trackEngagement(promptHash, {
  userRating: 5,
  followUpQuestions: 3,
  sessionDuration: 600, // seconds
})
// Award bonus later (separate transaction or off-chain)
async function awardEngagementBonus(promptHash: string) {
  const engagement = await getEngagement(promptHash)
  let bonus = 0
  if (engagement.userRating >= 4) bonus += 20
  if (engagement.followUpQuestions >= 2) bonus += 15
  if (engagement.sessionDuration >= 300) bonus += 10
  if (bonus > 0) {
    await mintAdditionalReward(promptHash, bonus)
  }
}
Best for: Platforms with strong engagement metrics. Rewards truly valuable prompts.
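The example leaves trackEngagement, getEngagement, and mintAdditionalReward abstract; they are platform-side pieces you supply. A minimal sketch, assuming engagement is tracked off-chain in memory before the bonus is minted (mintAdditionalReward would wrap whatever minting call you already use), could look like this:
// Hypothetical off-chain engagement store backing the example above
interface Engagement {
  userRating: number        // e.g. 1-5 stars
  followUpQuestions: number
  sessionDuration: number   // seconds
}
const engagementStore = new Map<string, Engagement>()
function trackEngagement(promptHash: string, engagement: Engagement): void {
  engagementStore.set(promptHash, engagement)
}
async function getEngagement(promptHash: string): Promise<Engagement> {
  return engagementStore.get(promptHash) ?? { userRating: 0, followUpQuestions: 0, sessionDuration: 0 }
}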
Reward Economics
Design sustainable reward economics that balance user incentives with platform costs.
Activity Points vs. Custom Tokens
Activity Points (Default)
- ✓ Simple ERC-20 token
- ✓ No monetary value
- ✓ Used for leaderboards, badges
- ✓ Low regulatory risk
Custom ERC-20 Token
- ✓ Branded to your platform
- ✓ Can add utility (subscriptions, NFTs)
- ✓ Potentially tradeable
- ⚠ Consider legal implications
Budget Considerations
// Example: Budget 100,000 AP/month
const monthlyBudget = 100_000
const expectedPrompts = 5_000 // prompts/month
// Average reward should be <= budget / prompts
const avgReward = monthlyBudget / expectedPrompts // 20 AP
// Distribution
const rewards = {
  low: 10,      // 50% of prompts
  medium: 25,   // 30% of prompts
  high: 50,     // 15% of prompts
  premium: 100, // 5% of prompts
}
// Weighted average = 10*0.5 + 25*0.3 + 50*0.15 + 100*0.05 = 25 AP
// This exceeds the 20 AP target, so tighten the tiers or percentages if needed
Tip: Start with conservative rewards. It's easier to increase rewards later than to decrease them (which can demotivate users).
Best Practices
Start Simple, Iterate
Begin with fixed or length-based rewards. Collect data, then refine your strategy.
Monitor Distribution
Track reward distribution weekly. If 90% of users get the minimum, increase differentiation.
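A quick way to spot this, assuming you log each minted reward amount, is to bucket the past week's rewards and check what share sits at the floor:
// Hypothetical weekly check over logged reward amounts
function shareAtMinimum(rewards: number[], minimum: number): number {
  if (rewards.length === 0) return 0
  const atMinimum = rewards.filter((r) => r === minimum).length
  return atMinimum / rewards.length // e.g. 0.9 means 90% of prompts earned the floor
}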
Prevent Gaming
Implement anti-spam measures: rate limits, duplicate detection, minimum quality thresholds.
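One possible shape for these checks is a guard that rejects spammy submissions before any AP is minted; the thresholds and helper names below are assumptions, not ProjectZero features.
// Illustrative anti-gaming guard; thresholds and storage are assumptions
const recentHashes = new Set<string>()                // duplicate detection (e.g. reset daily)
const promptTimestamps = new Map<string, number[]>()  // per-user rate limiting
function isEligibleForReward(userId: string, prompt: string, promptHash: string): boolean {
  // Minimum quality threshold: reject trivially short prompts
  if (prompt.trim().length < 20) return false
  // Duplicate detection: identical prompts earn nothing after the first submission
  if (recentHashes.has(promptHash)) return false
  recentHashes.add(promptHash)
  // Rate limit: at most 20 rewarded prompts per user per hour
  const now = Date.now()
  const window = (promptTimestamps.get(userId) ?? []).filter((t) => now - t < 60 * 60 * 1000)
  if (window.length >= 20) return false
  window.push(now)
  promptTimestamps.set(userId, window)
  return true
}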
Communicate Clearly
Tell users how rewards are calculated. Transparency builds trust and guides behavior.
Dynamic Reward Adjustment
Adjust rewards based on real-time platform conditions.
function calculateDynamicReward(prompt: string, context: PlatformContext): number {
  let baseReward = 25
  // Time-based multipliers
  const hour = new Date().getHours()
  if (hour >= 2 && hour <= 6) {
    baseReward *= 1.5 // Encourage off-peak usage
  }
  // Supply-demand balancing
  const dailyPromptsToday = context.promptsToday
  const dailyTarget = context.dailyTarget
  if (dailyPromptsToday < dailyTarget * 0.5) {
    baseReward *= 1.3 // Boost rewards when activity is low
  } else if (dailyPromptsToday > dailyTarget * 1.5) {
    baseReward *= 0.8 // Reduce when over budget
  }
  // Category-based incentives
  if (isHighPriorityCategory(prompt)) {
    baseReward *= 1.2
  }
  return Math.floor(baseReward)
}
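The example assumes a PlatformContext object and an isHighPriorityCategory helper that ProjectZero does not provide. A minimal sketch of both, using a keyword check as a stand-in for real categorization, could be:
// Assumed shapes for the dynamic example above; not part of ProjectZero
interface PlatformContext {
  promptsToday: number // prompts submitted so far today
  dailyTarget: number  // prompts/day you budgeted for
}
function isHighPriorityCategory(prompt: string): boolean {
  // Stand-in for real categorization (e.g. an ML classifier or a tag lookup)
  const priorityKeywords = ['security', 'compliance', 'migration']
  const lower = prompt.toLowerCase()
  return priorityKeywords.some((k) => lower.includes(k))
}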