Prompt Engineering Comparison Hub
3 papers - average viability 4.3
Top Papers
- PEEM: Prompt Engineering Evaluation Metrics for Interpretable Joint Evaluation of Prompts and Responses (7.0)
PEEM is a framework for interpretable evaluation of prompts and responses in large language models, enhancing prompt design and optimization.
- Prompt Architecture Determines Reasoning Quality: A Variable Isolation Study on the Car Wash Problem (4.0)
Proposes a reasoning framework to improve large language models' performance on implicit-constraint tasks such as the car wash problem.
- Prompt Readiness Levels (PRL): A Maturity Scale and Scoring Framework for Production-Grade Prompt Assets (2.0)
A framework for assessing the maturity and production readiness of prompt engineering assets in generative AI.