Prompt Engineering and Generative AI - Fundamentals
Develop essential engineering skills with expert instruction and practical examples.
About This Course
This course covers the fundamental concepts of Prompt Engineering and Generative AI. It is organized into sections on the Fundamentals of Prompt Engineering, Retrieval Augmented Generation, Fine-tuning a large language model (LLM), and Guardrails for LLMs.

Section on Prompt Engineering Fundamentals: The first segment defines prompt engineering, presents best practices for writing prompts, and works through an example of a prompt given to the Gemini-Pro model, with references for further reading.
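As an illustration of such a prompt (a sketch, not code taken from the course), the snippet below sends a single, clearly scoped instruction to the Gemini-Pro model through the google-generativeai Python SDK; the API key value and the prompt text are placeholders.

```python
import google.generativeai as genai

# Assumes a Google AI Studio API key; the value here is a placeholder.
genai.configure(api_key="YOUR_API_KEY")

# Load the Gemini-Pro model and send one clearly scoped prompt.
model = genai.GenerativeModel("gemini-pro")
prompt = (
    "You are a helpful technical writer. "
    "Explain prompt engineering in two sentences for a software engineer."
)
response = model.generate_content(prompt)
print(response.text)
```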
The second segment explains what it means to stream a response from a large language model and shows examples of giving specific instructions to the Gemini-Pro model, along with the temperature and token-count parameters. The third segment explains the Zero-Shot Prompting technique with examples using the Gemini model. The fourth segment covers the Few-Shot and Chain-of-Thought Prompting techniques, also with Gemini examples.
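The sketch below, illustrative rather than course code, streams a Gemini-Pro response with the temperature and output-token limit set explicitly, and then sends a few-shot prompt containing a worked, chain-of-thought style example; the sentiment-classification task is an assumption chosen for illustration.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")
config = genai.types.GenerationConfig(temperature=0.2, max_output_tokens=256)

# Streaming: print chunks of the answer as they are generated.
for chunk in model.generate_content(
    "List three best practices for writing prompts.",
    generation_config=config,
    stream=True,
):
    print(chunk.text, end="")

# Few-shot prompt that includes a worked (chain-of-thought style) example.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Reasoning: The review praises battery life and the screen, so the tone is favourable.
Sentiment: Positive

Review: "The app crashes every time I open it."
Reasoning:"""
print(model.generate_content(few_shot_prompt, generation_config=config).text)
```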
Subsequent segments in this section set up a Google Colab notebook to work with the GPT model from OpenAI and provide examples of the Tree-of-Thoughts prompting technique, including the Tree-of-Thoughts implementation from Langchain applied to a 4x4 Sudoku puzzle.

Section on Retrieval Augmented Generation (RAG): The first segment defines the Retrieval Augmented Generation prompting technique, discusses its merits, and applies it to a CSV file using the Langchain framework. The second segment walks through a detailed RAG pipeline built with Langchain that combines the Arxiv Loader, the FAISS vector database, and a Conversational Retrieval Chain (a minimal sketch of such a pipeline follows below). The third segment explains how to evaluate the response from a Large Language Model (LLM) using the RAGAS framework.
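The sketch below outlines such a pipeline under the assumption that the langchain, langchain-community, langchain-openai, faiss-cpu and arxiv packages are installed and that an OpenAI API key is set in the OPENAI_API_KEY environment variable; exact import paths vary between Langchain versions, and the arXiv query and question are placeholders.

```python
from langchain_community.document_loaders import ArxivLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import ConversationalRetrievalChain

# 1. Load a few papers from arXiv (the query string is a placeholder).
docs = ArxivLoader(query="retrieval augmented generation", load_max_docs=2).load()

# 2. Split the documents into chunks and index them in a FAISS vector store.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
vectorstore = FAISS.from_documents(splitter.split_documents(docs), OpenAIEmbeddings())

# 3. Build a conversational retrieval chain: retrieve relevant chunks, then answer.
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)

# 4. Ask a question; chat_history would carry earlier turns of the conversation.
result = chain.invoke({"question": "What problem does RAG address?", "chat_history": []})
print(result["answer"])
```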
The fourth segment shows how Langsmith complements the RAGAS framework for evaluating LLM responses. The fifth segment explains how to create text embeddings with the Gemini model and use them for document search, as sketched below.

Section on Large Language Model Fine-tuning: The first segment summarizes the prompting techniques with examples involving LLMs from the Hugging Face repository and explains the difference between prompting an LLM and fine-tuning an LLM.
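To illustrate the embedding-based document search mentioned in the fifth RAG segment (a sketch, not course code), the snippet below embeds a few short documents and a query with the Gemini embedding model and ranks the documents by cosine similarity; the document texts and the embedding model name are assumptions.

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

documents = [
    "Prompt engineering is the practice of designing effective inputs for language models.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Fine-tuning adapts a pretrained model to a specific task with additional training.",
]

# Embed each document and the query with the Gemini embedding model.
doc_vectors = [
    genai.embed_content(model="models/embedding-001",
                        content=text,
                        task_type="retrieval_document")["embedding"]
    for text in documents
]
query_vector = genai.embed_content(model="models/embedding-001",
                                   content="How can I search for similar documents quickly?",
                                   task_type="retrieval_query")["embedding"]

# Rank documents by cosine similarity to the query and print the best match.
def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(range(len(documents)), key=lambda i: cosine(doc_vectors[i], query_vector))
print(documents[best])
```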