Andrew Ng's New Course on LLM Quality and Safety
Andrew Ng, in collaboration with WhyLabs, has launched a new course at DeepLearning.ai on ensuring the quality and safety of large language model (LLM) applications. Led by Bernease Herman, a senior data scientist at WhyLabs, this one-hour course covers best practices for monitoring LLM systems.
Course Details
- Instructor: Bernease Herman
- Duration: 1 hour
- Collaboration: DeepLearning.ai and WhyLabs
- Focus: Quality and safety of LLM applications
Why This Course
- Open-source communities enable rapid prototyping of LLM applications.
- Quality and safety concerns have been significant barriers to production deployment.
- Risks include hallucinations, data leakage, and prompt injections.
Common Issues Addressed
- Hallucinations: The LLM generates plausible-sounding but incorrect information.
- Data Leakage: The LLM exposes sensitive or personally identifiable information.
- Prompt Injections: Crafted inputs trick the LLM into taking undesirable actions.
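To make these issues concrete, here is a minimal sketch of how data leakage and prompt injection can be surfaced with simple pattern-based heuristics. All names and patterns below are illustrative assumptions, not material from the course; production monitoring would use far more robust detectors.

```python
import re

# Hypothetical heuristics for two of the issues above (not the course's code).

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
INJECTION_PHRASES = ["ignore previous instructions", "disregard the above"]

def leakage_flags(text: str) -> int:
    """Count pattern matches that may indicate leaked personal data."""
    return len(EMAIL_RE.findall(text)) + len(SSN_RE.findall(text))

def injection_score(prompt: str) -> float:
    """Fraction of known injection phrases present in the prompt."""
    lowered = prompt.lower()
    hits = sum(phrase in lowered for phrase in INJECTION_PHRASES)
    return hits / len(INJECTION_PHRASES)
```

Scores like these can be logged per request and tracked over time, turning one-off checks into ongoing monitoring signals.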
What You'll Learn
- Understand potential issues with LLM systems.
- Learn best practices for mitigating these problems.
- Discover and create metrics for monitoring safety and quality.
Join the Course
If you're interested in improving the quality and safety of your LLM applications, you can join the course at DeepLearning.ai. This initiative is part of Andrew Ng's ongoing effort to equip learners with practical expertise in AI and deep learning, building on the success of his previous courses on generative AI and its applications.