There is always some risk of hallucination (when a chatbot generates untrue information in its response) with any large language model. Oxford Insight's AI Study Assistant attempts to mitigate this by grounding all responses in reliable sources from your course's text.
However, it is essential to exercise judgment when using AI outputs as part of your study. We always recommend reading in full the text sections referenced in the Assistant's responses.
Have further questions? Contact Us