3 papers - avg viability 4.0
LLMORPH automates LLM testing by generating new test cases from existing ones, uncovering model inconsistencies without human labels.
An approach to testing AI-integrated software that uses relationships between test executions as scalable test oracles.
A method to test the faithfulness of LLM self-explanations in varying semantic contexts.
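The label-free consistency checking these papers describe can be illustrated with a minimal metamorphic-style sketch. This is an assumed illustration, not the actual LLMORPH method: `metamorphic_check`, `toy_model`, and the paraphrase transform are hypothetical names, and the toy model stands in for a real LLM call.

```python
def metamorphic_check(model, prompt, transform, agree=lambda a, b: a == b):
    """Query the model on a prompt and a semantics-preserving variant.
    A disagreement flags a likely inconsistency -- no human labels needed."""
    return agree(model(prompt), model(transform(prompt)))

# Toy stand-in for an LLM: answers by keyword, but is (buggily) case-sensitive.
def toy_model(prompt):
    return "yes" if "safe" in prompt else "no"

# A transformation that should not change the meaning of the prompt.
paraphrase = lambda p: p.replace("safe", "SAFE")

consistent = metamorphic_check(toy_model, "Is this water safe to drink?", paraphrase)
print(consistent)  # False -> inconsistency surfaced without any ground-truth label
```

In practice the transform would be a paraphrase or semantic-context shift and `agree` a semantic-equivalence check, but the oracle idea is the same: the relationship between two executions replaces a labeled expected output.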