
Oracle 1z0-1127-25 Oracle Cloud Infrastructure 2025 Generative AI Professional Exam Practice Test

Page: 1 / 9
Total 88 questions

Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 1

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A. They are completely dissimilar
B. They are unrelated
C. They are similar in direction
D. They have the same magnitude
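
For intuition, here is a minimal sketch (assuming NumPy is installed; the example vectors are made up) of how cosine distance is computed: it is 1 minus cosine similarity, so two embeddings that point in the same direction have a distance of 0 even when their magnitudes differ.

```python
# Minimal cosine-distance sketch (assumes numpy is installed).
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ~0.0: same direction, different magnitude
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))            # 1.0: orthogonal directions
```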

Question 2

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

Options:

A. Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.
B. PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
C. Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.
D. Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.
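
As a rough illustration of the PEFT idea, the sketch below freezes a stand-in "pretrained" model and trains only a small added layer, which is why far fewer parameters (and much less compute and data) are involved than in full fine-tuning. It assumes PyTorch is installed; the layer sizes and the single-adapter setup are invented for illustration.

```python
# PEFT-style sketch: freeze the base model, train only a small added module.
import torch.nn as nn

base = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))  # stand-in "pretrained" model
for p in base.parameters():
    p.requires_grad = False           # full fine-tuning would leave all of these trainable

adapter = nn.Linear(768, 768)         # the small, new set of trainable parameters

trainable = sum(p.numel() for p in adapter.parameters())
total = sum(p.numel() for p in base.parameters()) + trainable
print(f"trainable parameters: {trainable:,} of {total:,}")
```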

Question 3

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

Options:

A. Document Loaders
B. Vector Stores
C. LangChain Application
D. LLMs

Question 4

In the simplified workflow for managing and querying vector data, what is the role of indexing?

Options:

A. To convert vectors into a non-indexed format for easier retrieval
B. To map vectors to a data structure for faster searching, enabling efficient retrieval
C. To compress vector data for minimized storage usage
D. To categorize vectors based on their originating data type (text, images, audio)
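
The toy sketch below illustrates the indexing idea, using random-hyperplane hashing as a stand-in for a production ANN index such as HNSW or IVF (the sizes and names are illustrative, and NumPy is assumed): vectors are mapped into a data structure so that a query is compared against a small candidate set rather than every stored vector.

```python
# Toy vector index: map each vector to a hash bucket for faster searching.
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 64))
planes = rng.normal(size=(8, 64))                  # 8 random hyperplanes -> 8-bit bucket key

def bucket_key(v):
    return tuple((planes @ v > 0).astype(int))

index = {}
for i, v in enumerate(vectors):
    index.setdefault(bucket_key(v), []).append(i)  # vector -> data structure (bucketed index)

query = rng.normal(size=64)
candidates = index.get(bucket_key(query), [])      # search one bucket, not all 1000 vectors
print(f"candidates searched: {len(candidates)} of {len(vectors)}")
```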

Question 5

What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?

Options:

A. Overfitting
B. Underfitting
C. Data Leakage
D. Model Drift

Question 6

What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?

Options:

A. It allows the LLM to access a larger dataset.
B. It eliminates the need for any training or computational resources.
C. It provides examples in the prompt to guide the LLM to better performance with no training cost.
D. It significantly reduces the latency for each model request.
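
For context, the sketch below shows what few-shot prompting looks like in practice (the reviews and labels are invented): the examples live entirely in the prompt, so the model is guided at inference time and no weights are updated, hence no training cost.

```python
# Few-shot prompt sketch: in-context examples, no fine-tuning involved.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It stopped working after a week." -> Negative
Review: "Setup was quick and painless." ->"""

# The prompt, examples included, is sent to the LLM as-is; the model is
# expected to continue the pattern (here, with "Positive").
print(few_shot_prompt)
```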

Question 7

What does accuracy measure in the context of fine-tuning results for a generative model?

Options:

A. The number of predictions a model makes, regardless of whether they are correct or incorrect
B. The proportion of incorrect predictions made by the model during an evaluation
C. How many predictions the model made correctly out of all the predictions in an evaluation
D. The depth of the neural network layers used in the model
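
A quick worked example of the accuracy calculation (the predictions and labels are made up): correct predictions divided by all predictions in the evaluation.

```python
# Accuracy = correct predictions / total predictions.
predictions = ["cat", "dog", "dog", "cat", "bird"]
labels      = ["cat", "dog", "cat", "cat", "bird"]

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"accuracy = {accuracy:.2f}")   # 4 of 5 correct -> 0.80
```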

Question 8

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?

Options:

A. 480 unit hours
B. 240 unit hours
C. 744 unit hours
D. 20 unit hours
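
A worked arithmetic sketch for this sizing question, under the assumption (taken from OCI's documented sizing for fine-tuning dedicated AI clusters, so verify against current documentation) that the cluster consumes two units while it is active:

```python
# Unit hours = units per cluster x hours the cluster is active (assumed sizing).
units_per_cluster = 2        # assumption: a fine-tuning cluster uses 2 units
hours_active = 10 * 24       # 10 days
print(units_per_cluster * hours_active)   # 480 unit hours
```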

Question 9

How are chains traditionally created in LangChain?

Options:

A. By using machine learning algorithms
B. Declaratively, with no coding required
C. Using Python classes, such as LLMChain and others
D. Exclusively through third-party software integrations
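
For reference, the sketch below shows the traditional, class-based way of composing a chain with Python classes such as PromptTemplate and LLMChain. It assumes an older langchain release where these names, and the FakeListLLM test stub used here in place of a real model client, are importable as shown; newer releases deprecate or relocate them.

```python
# Traditional LangChain chain built from Python classes (legacy-style API).
from langchain.llms import FakeListLLM        # test stub standing in for a real LLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = FakeListLLM(responses=["A vector store indexes embeddings for fast lookup."])
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write one sentence about {topic}.",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="vector stores"))
```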

Question 10

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

Options:

A. Summarization models
B. Generation models
C. Translation models
D. Embedding models

Question 11

What is LangChain?

Options:

A. A JavaScript library for natural language processing
B. A Python library for building applications with Large Language Models
C. A Java library for text summarization
D. A Ruby library for text generation

Question 12

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

Options:

A. 25 unit hours
B. 40 unit hours
C. 20 unit hours
D. 30 unit hours

Question 13

When does a chain typically interact with memory in a run within the LangChain framework?

Options:

A. Only after the output has been generated.
B. Before user input and after chain execution.
C. After user input but before chain execution, and again after core logic but before output.
D. Continuously throughout the entire chain execution process.

Question 14

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

Options:

A. Increasing temperature removes the impact of the most likely word.
B. Decreasing temperature broadens the distribution, making less likely words more probable.
C. Increasing temperature flattens the distribution, allowing for more varied word choices.
D. Temperature has no effect on the probability distribution; it only changes the speed of decoding.
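
A short sketch of temperature scaling over a toy three-token vocabulary (the logits are made up): dividing the logits by a larger temperature flattens the softmax distribution, while a smaller temperature sharpens it around the most likely token.

```python
# Temperature scaling: higher temperature -> flatter distribution.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]
print(softmax_with_temperature(logits, 0.5))  # peaked: most mass on the top token
print(softmax_with_temperature(logits, 2.0))  # flatter: more varied word choices
```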

Question 15

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

Options:

A. It specifies a string that tells the model to stop generating more content.
B. It assigns a penalty to frequently occurring tokens to reduce repetitive text.
C. It determines the maximum number of tokens the model can generate per response.
D. It controls the randomness of the model’s output, affecting its creativity.
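
Purely as an illustration of the concept (the service applies the stop sequence during decoding rather than as post-processing, and the strings here are invented), the sketch below cuts text off at the first occurrence of a stop string:

```python
# Conceptual stop-sequence illustration: output ends at the given string.
def apply_stop_sequence(text, stop):
    return text.split(stop)[0]

raw = "Dear customer, thanks for reaching out.\n---\nInternal notes: do not send."
print(apply_stop_sequence(raw, stop="\n---\n"))   # only the text before the stop string remains
```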

Question 16

What is the purpose of frequency penalties in language model outputs?

Options:

A. To ensure that tokens that appear frequently are used more often
B. To penalize tokens that have already appeared, based on the number of times they have been used
C. To reward the tokens that have never appeared in the text
D. To randomly penalize some tokens to increase the diversity of the text
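
A small sketch of the usual frequency-penalty formulation (the tokens, logits, and penalty value are made up): each token's logit is reduced in proportion to how many times that token has already been generated.

```python
# Frequency penalty: subtract penalty * (times already generated) from each logit.
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty):
    counts = Counter(generated_tokens)
    return {tok: logit - penalty * counts[tok] for tok, logit in logits.items()}

logits = {"the": 3.0, "cat": 2.5, "sat": 2.0}
print(apply_frequency_penalty(logits, ["the", "the", "cat"], penalty=0.5))
# "the" is penalized twice as heavily as "cat"; "sat" is untouched.
```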

Question 17

What does the Loss metric indicate about a model's predictions?

Options:

A. Loss measures the total number of predictions made by a model.
B. Loss is a measure that indicates how wrong the model's predictions are.
C. Loss indicates how good a prediction is, and it should increase as the model improves.
D. Loss describes the accuracy of the right predictions rather than the incorrect ones.
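
For intuition, here is a tiny sketch of a per-token cross-entropy loss, one common loss for generative models: the lower the probability the model assigned to the correct token, the larger (worse) the loss.

```python
# Cross-entropy for a single token: loss grows as the prediction gets more wrong.
import math

def cross_entropy(prob_of_correct_token):
    return -math.log(prob_of_correct_token)

print(cross_entropy(0.9))   # ~0.105: confident and correct -> low loss
print(cross_entropy(0.1))   # ~2.303: badly wrong -> high loss
```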

Question 18

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

Options:

A. "Top p" selects tokens from the "Top k" tokens sorted by probability.
B. "Top p" assigns penalties to frequently occurring tokens.
C. "Top p" limits token selection based on the sum of their probabilities.
D. "Top p" determines the maximum number of tokens per response.

Question 19

What do embeddings in Large Language Models (LLMs) represent?

Options:

A. The color and size of the font in textual data
B. The frequency of each word or pixel in the data
C. The semantic content of data in high-dimensional vectors
D. The grammatical structure of sentences in the data

Question 20

How can the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

Options:

A. Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.
B. Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.
C. Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.
D. Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.

Question 21

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?

Options:

A. Updates the weights of the base model during the fine-tuning process
B. Serves as a designated point for user requests and model responses
C. Evaluates the performance metrics of the custom models
D. Hosts the training data for fine-tuning custom models

Question 22

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
B. PEFT modifies all parameters and is typically used when no training data exists.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.

Question 23

In which scenario is soft prompting especially appropriate compared to other training styles?

Options:

A. When there is a significant amount of labeled, task-specific data available.
B. When the model needs to be adapted to perform well in a different domain it was not originally trained on.
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training.
D. When the model requires continued pre-training on unlabeled data.

Question 24

What is the characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

Options:

A. It updates all the weights of the model uniformly.
B. It selectively updates only a fraction of weights to reduce the number of parameters.
C. It selectively updates only a fraction of weights to reduce computational load and avoid overfitting.
D. It increases the training time as compared to Vanilla fine-tuning.

Question 25

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?

Options:

A. Linear relationships; they simplify the modeling process
B. Semantic relationships; crucial for understanding context and generating precise language
C. Hierarchical relationships; important for structuring database queries
D. Temporal relationships; necessary for predicting future linguistic trends

Question 26

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

Options:

A. When the LLM already understands the topics necessary for text generation
B. When the LLM does not perform well on a task and the data for prompt engineering is too large
C. When the LLM requires access to the latest data for generating outputs
D. When you want to optimize the model without any instructions
