Analyze the user prompts provided to a language model. Which scenario exemplifies prompt injection (jailbreaking)?
What does accuracy measure in the context of fine-tuning results for a generative model?
What is the primary purpose of LangSmith Tracing?
When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?
Which is the main characteristic of greedy decoding in the context of language model word prediction?
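For illustration, a minimal sketch of the idea behind greedy decoding: at every step the single highest-probability token is selected (argmax), with no sampling. The `next_token_logits` callable is a hypothetical stand-in for a real model's forward pass.

```python
import numpy as np

def softmax(logits):
    # Convert raw logits into a probability distribution over the vocabulary.
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def greedy_decode(next_token_logits, prompt_ids, max_new_tokens=20, eos_id=0):
    """Greedy decoding: deterministically pick the most probable next token."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        probs = softmax(next_token_logits(ids))
        token = int(np.argmax(probs))  # always the single highest-probability token
        ids.append(token)
        if token == eos_id:
            break
    return ids
```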
Why is it challenging to apply diffusion models to text generation?
When should you use the T-Few fine-tuning method for training a model?
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
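For reference, a minimal runnable version of the snippet above, assuming LangChain's `PromptTemplate` class (the import path may be `langchain_core.prompts` depending on the version); the template string and values are illustrative only. Every placeholder listed in `input_variables` must appear in the template, and each must be supplied when the prompt is formatted.

```python
from langchain.prompts import PromptTemplate

# The names in input_variables correspond to the {placeholders} in the template.
template = "You are a helpful assistant. Answer {human_input} about {city}."
prompt = PromptTemplate(input_variables=["human_input", "city"], template=template)

print(prompt.format(human_input="a question", city="Paris"))
```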
What does "Loss" measure in the evaluation of OCI Generative AI fine-tuned models?
How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
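A minimal sketch of temperature-scaled softmax, illustrating the effect the question asks about: dividing the logits by the temperature sharpens the distribution when temperature < 1 and flattens it when temperature > 1. The logits are illustrative.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    # Lower temperature concentrates probability on the top tokens;
    # higher temperature spreads probability across more of the vocabulary.
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - np.max(scaled))
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, temperature=0.5))  # sharper, closer to greedy
print(softmax_with_temperature(logits, temperature=2.0))  # flatter, more random sampling
```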
What is the purpose of Retrievers in LangChain?
Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?
How are documents usually evaluated in the simplest form of keyword-based search?
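In its simplest form, keyword-based search scores each document by the presence or frequency of the query's terms, with no semantic understanding. A toy sketch of such term-overlap scoring (the documents and query are illustrative):

```python
def keyword_score(query, document):
    # Count how many distinct query terms appear in the document.
    query_terms = set(query.lower().split())
    doc_terms = set(document.lower().split())
    return len(query_terms & doc_terms)

docs = [
    "OCI Generative AI offers pretrained foundational models",
    "Fine-tuning updates model weights on custom data",
]
query = "pretrained models in OCI"
ranked = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)
print(ranked[0])  # the document sharing the most query terms ranks first
```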
Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?
What is LangChain?
Which Oracle Accelerated Data Science (ADS) class can be used to deploy a Large Language Model (LLM) application to OCI Data Science model deployment?
Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?
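A minimal sketch contrasting the two truncation strategies: top k keeps a fixed number of the most probable tokens, while top p keeps the smallest set of top tokens whose cumulative probability reaches p, so the pool size adapts to the distribution. The probabilities are illustrative.

```python
import numpy as np

def top_k_filter(probs, k):
    # Keep only the k most probable tokens, then renormalize.
    idx = np.argsort(probs)[::-1][:k]
    kept = np.zeros_like(probs)
    kept[idx] = probs[idx]
    return kept / kept.sum()

def top_p_filter(probs, p):
    # Keep the smallest set of top tokens whose cumulative probability >= p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1
    kept = np.zeros_like(probs)
    kept[order[:cutoff]] = probs[order[:cutoff]]
    return kept / kept.sum()

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_k_filter(probs, k=2))    # fixed-size candidate pool
print(top_p_filter(probs, p=0.8))  # pool size varies with the distribution
```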
Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?
Which LangChain component is responsible for generating the linguistic output in a chatbot system?