Andrew Ng's LLM Quality & Safety Course

Andrew Ng's LLM Quality & Safety Course

Andrew Ng's New Course on LLM Quality and Safety

Andrew Ng, in collaboration with WhyLabs, has launched a new course on ensuring the quality and safety of Large Language Models (LLM) at DeepLearning.ai. Led by Bernease Herman, a senior data scientist at WhyLabs, this one-hour course focuses on best practices for monitoring LLM systems.

Course Overview

  • Instructor: Bernease Herman
  • Duration: 1 hour
  • Collaboration: DeepLearning.ai and WhyLabs
  • Focus: Quality and safety of LLM applications

Key Points

  • Open source communities allow rapid prototyping of LLM applications.
  • Quality and safety have been significant barriers to practical deployment.
  • Risks include hallucinations, data leakage, and prompt injections.

Common Issues Addressed

  1. Hallucinations: LLM generating incorrect information.
  2. Data Leakage: Exposure of sensitive personal information.
  3. Prompt Injections: Tricking the LLM into undesirable actions.

Course Goals

  • Understand potential issues with LLM systems.
  • Learn best practices to mitigate problems.
  • Discover and create metrics for monitoring safety and quality.

Join the Course

If you're interested in improving the quality and safety of your LLM applications, you can join the course here. This initiative is part of Andrew Ng's ongoing efforts to empower learners with knowledge and expertise in the field of AI and deep learning, building on the success of his previous courses on Generative AI and its applications.

Read more