Generation AI

AI Trust, Eval Frameworks, and Why Data Quality Matters

Episode Summary

In this episode of Generation AI, hosts JC and Ardis tackle one of the most pressing concerns in higher education today: how to trust AI outputs. They explore the psychology of trust in technology, the evaluation frameworks used to measure AI accuracy, and how Retrieval Augmented Generation (RAG) helps ground AI responses in factual data. The conversation offers practical insights for higher education professionals who want to implement AI solutions but worry about accuracy and reliability. Listeners will learn how to evaluate AI systems, what questions to ask vendors, and why having public-facing content is crucial for effective AI implementation.

Episode Notes

Introduction: The Trust Challenge in AI (00:00:06)

The Psychology of Trust in AI (00:03:35)

Evaluating AI Outputs: The Evals Framework (00:11:41)

Retrieval Augmented Generation (RAG) Explained (00:27:23)

Addressing Common AI Trust Concerns (00:33:31)

Conclusion: Building Earned Trust (00:38:11)