Comments
Follow the full discussion on Reddit.
Hi, I’m Jonathan, LastMile AI’s ML Engineer. We’re building tools to evaluate LLM outputs in production – specifically for RAG applications. Right now, we’re focused on hallucination detection: given the data retrieved during a RAG query, is the response faithful to that data, or hallucinatory?
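To make the task concrete: a faithfulness check takes the retrieved context and the model's response and scores how well the response is grounded in that context. The sketch below is a naive lexical-overlap baseline, not LastMile's actual method (a production detector would use an NLI model or an LLM judge); all names and thresholds here are illustrative.

```python
import re

def faithfulness_score(context: str, response: str) -> float:
    """Fraction of content words in the response that also occur in the context.

    A toy stand-in for hallucination detection: low overlap suggests the
    response introduces claims not supported by the retrieved data.
    """
    def content_words(text: str) -> set[str]:
        # Keep alphabetic tokens longer than 3 chars as rough "content" words.
        return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

    ctx, resp = content_words(context), content_words(response)
    if not resp:
        return 1.0  # an empty response asserts nothing unsupported
    return len(resp & ctx) / len(resp)

context = "The Eiffel Tower is 330 metres tall and located in Paris."
faithful = "The Eiffel Tower stands 330 metres tall."
hallucinated = "The Eiffel Tower was built in Berlin in 1920."
print(faithfulness_score(context, faithful))      # 0.8
print(faithfulness_score(context, hallucinated))  # 0.5
```

Lexical overlap catches only crude hallucinations; paraphrases score low and copied-but-recombined facts score high, which is exactly why real RAG evaluators lean on semantic entailment rather than string matching.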