Prompt Engineering
In this section, we will learn how to use a large language model (LLM) to quickly build new and powerful applications.
Using the OpenAI API, we’ll be able to quickly build capabilities that learn to innovate and create value in ways that were cost-prohibitive, highly technical, or simply impossible before now. This section will describe how LLMs work, provide best practices for prompt engineering, and show how LLM APIs can be used in applications for a variety of tasks, including:
- Summarizing (e.g., summarizing user reviews for brevity)
- Inferring (e.g., sentiment classification, topic extraction)
- Transforming text (e.g., translation, spelling & grammar correction)
- Expanding (e.g., automatically writing emails)
We will also go over two key principles for writing effective prompts
- how to systematically engineer good prompts, and
- learn to build a custom chatbot.
Base LLM
Predicts next word, based on text training data
Instruction Tuned LLM
FIne-tune on instructions and good attempts at following those instructions
More practical applications are focused on RLHF: Reinforcement Learning with Human Feedback make the system more helpful and that’s what we’ll cover next.