
Oracle 1z0-1110-23 Oracle Cloud Infrastructure Data Science 2023 Professional Exam Practice Test


Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 1

As a data scientist, you are working on a global health data set that has data from more than 50 countries. You want to encode three features, 'countries', 'race', and 'body organ', as categories.

Which option would you use to encode the categorical features?

Options:

A.

OneHotEncoder()

B.

DataFrameLabelEncoder()

C.

show_in_notebook()

D.

auto_transform()
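One-hot encoding is the standard way to turn categorical features like these into model inputs. A minimal sketch using scikit-learn's OneHotEncoder (column names and sample values are made up; assumes scikit-learn 1.2+, where the flag is sparse_output rather than sparse):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy frame with the three categorical features from the question.
df = pd.DataFrame({
    "countries": ["India", "Brazil", "Kenya"],
    "race": ["A", "B", "A"],
    "body organ": ["heart", "lung", "heart"],
})

# handle_unknown="ignore" keeps inference from failing on unseen categories.
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
encoded = encoder.fit_transform(df)

# Each category level becomes its own 0/1 indicator column.
encoded_df = pd.DataFrame(encoded, columns=encoder.get_feature_names_out())
print(encoded_df.head())
```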

Question 2

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

Options:

A.

show_corr()

B.

to_xgb()

C.

compute()

D.

show_in_notebook()
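For context, a hedged sketch of how such a summary is produced with the ADS SDK (the DatasetFactory import path and show_in_notebook() follow the classic ADS API; the object-storage path is hypothetical):

```python
from ads.dataset.factory import DatasetFactory

# Open the data set (the path is a placeholder for any supported source).
ds = DatasetFactory.open("oci://my-bucket@my-namespace/health_records.csv")

# show_in_notebook() renders the summary described in the question:
# feature types, number of observations, and per-feature distributions.
ds.show_in_notebook()
```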

Question 3

Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from reference libraries and index websites such as scikit-learn?

Options:

A.

DataLabeling

B.

DatasetBrowser

C.

SecretKeeper

D.

ADSTuner
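A hedged sketch of browsing reference data sets with the ADS DatasetBrowser class (it assumes the DatasetBrowser.sklearn() entry point documented for ADS; the data-set name and availability may vary by version):

```python
from ads.dataset.dataset_browser import DatasetBrowser

# Browse the data sets bundled with scikit-learn.
sklearn_browser = DatasetBrowser.sklearn()
print(sklearn_browser.list())        # names of the available reference data sets

ds = sklearn_browser.open("iris")    # returns an ADS data set object
ds.show_in_notebook()
```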

Question 4

As you are working in your notebook session, you find that your notebook session does not have enough compute CPU and memory for your workload. How would you scale up your notebook session without losing your work?

Options:

A.

Ensure your files and environments are written to the block volume storage under the /home/datascience directory, deactivate the notebook session, and activate the notebook session with a larger compute shape selected.

B.

Download your files and data to your local machine, delete your notebook session, provision a new notebook session on a larger compute shape, and upload your files from your local machine to the new notebook session.

C.

Deactivate your notebook session, provision a new notebook session on a larger compute shape, and re-create all your file changes.

D.

Create a temporary bucket in Object Storage, write all your files and data to Object Storage, delete your notebook session, provision a new notebook session on a larger compute shape, and copy your files and data from your temporary bucket onto your new notebook session.

Question 5

You are attempting to save a model from a notebook session to the model catalog by using the Accelerated Data Science (ADS) SDK, with resource principal as the authentication signer, and you get a 404 authentication error. Which two should you look for to ensure permissions are set up correctly?

Options:

A.

The model artifact is saved to the block volume of the notebook session.

B.

A dynamic group has rules matching the notebook sessions in its compartment.

C.

The policy for your user group grants manage permissions for the model catalog in this compartment.

D.

The policy for a dynamic group grants manage permissions for the model catalog in its compartment.

E.

The networking configuration allows access to Oracle Cloud Infrastructure services through a Service Gateway.
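For reference, a minimal sketch of the resource principal setup this question probes; the dynamic-group rule and policy statement in the comments use placeholder names and OCIDs:

```python
import ads

# Use the notebook session's resource principal as the signer for model catalog calls.
ads.set_auth(auth="resource_principal")

# IAM setup this relies on (names and OCIDs below are placeholders):
#   Dynamic group rule:
#     ALL {resource.type = 'datasciencenotebooksession',
#          resource.compartment.id = '<compartment-ocid>'}
#   Policy statement:
#     allow dynamic-group <my-dynamic-group> to manage data-science-models
#       in compartment <my-compartment>
```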

Question 6

As a data scientist, you use the Oracle Cloud Infrastructure (OCI) Language service to train custom models. Which types of custom models can be trained?

Options:

A.

Image classification, Named Entity Recognition (NER)

B.

Text classification, Named Entity Recognition (NER)

C.

Sentiment Analysis, Named Entity Recognition (NER)

D.

Object detection, Text classification

Question 7

Which two statements are true about published conda environments?

Options:

A.

They are curated by Oracle Cloud Infrastructure (OCI) Data Science.

B.

The odsc conda init command is used to configure the location of published conda environments.

C.

Your notebook session acts as the source to share published conda environments with team members.

D.

You can only create a published conda environment by modifying a Data Science conda environment.

E.

In addition to service job run environment variables, conda environment variables can be used in Data Science Jobs.

Question 8

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, total number of observations, and data distributions. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

Options:

A.

show_in_notebook()

B.

to_xgb()

C.

compute()

D.

show_corr()

Question 9

You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?

Options:

A.

Data access

B.

Feature engineering

C.

Model type selection

D.

Model validation
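A minimal sketch of the feature-engineering step described in the question: deriving a morning/afternoon/evening category from a timestamp (the column name and cut-off hours are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"event_time": pd.to_datetime([
    "2023-05-01 08:15", "2023-05-01 13:40", "2023-05-01 19:05",
])})

def part_of_day(hour: int) -> str:
    # Illustrative cut-offs for the three categories.
    if hour < 12:
        return "morning"
    if hour < 17:
        return "afternoon"
    return "evening"

# The engineered feature replaces the raw timestamp as a model input.
df["time_of_day"] = df["event_time"].dt.hour.map(part_of_day)
print(df)
```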

Question 10

You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?

Options:

A.

Topic classification

B.

Table extraction

C.

Sentiment analysis

D.

Sentence diagramming

E.

Punctuation correction

Question 11

Using Oracle AutoML, you are tuning hyperparameters on a supported model class and have specified a time budget. AutoML terminates computation once the time budget is exhausted. What would you expect AutoML to return if the time budget is exhausted before hyperparameter tuning is completed?

Options:

A.

The current best-known hyperparameter configuration is returned.

B.

A random hyperparameter configuration is returned.

C.

A hyperparameter configuration with a minimum learning rate is returned.

D.

The last generated hyperparameter configuration is returned
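A hedged sketch of tuning with a time budget through the ADS SDK; it assumes the ADSTuner and TimeBudget classes from the ADS hyperparameter-optimization module and uses scikit-learn's iris data for illustration:

```python
from ads.hpo.search_cv import ADSTuner
from ads.hpo.stopping_criterion import TimeBudget
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)

# Tune until the time budget (in seconds) is exhausted; tuning may run
# asynchronously depending on the ADS version.
tuner = ADSTuner(SGDClassifier(), cv=3, scoring="f1_weighted")
tuner.tune(X, y, exit_criterion=[TimeBudget(30)])

# When the budget runs out, the best-known configuration found so far is reported.
print(tuner.best_params)
```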

Question 12

You are using Oracle Cloud Infrastructure (OCI) Anomaly Detection to train a model to detect anomalies in pump sensor data. How does the required False Alarm Probability setting affect an anomaly detection model?

Options:

A.

It is used to disable the reporting of false alarms.

B.

It changes the sensitivity of the model to detecting anomalies.

C.

It determines how many false alarms occur before an error message is generated.

D.

It adds a score to each signal indicating the probability that it's a false alarm.

Question 13

You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

Options:

A.

Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

B.

Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.

C.

Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.

D.

Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
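A minimal sketch of the "develop your PySpark application" step once a PySpark conda environment is active in the notebook session; the object-storage path and column name are hypothetical:

```python
from pyspark.sql import SparkSession

# Assumes the notebook session is running a PySpark conda environment.
spark = (
    SparkSession.builder
    .appName("emr-pattern-analysis")
    .getOrCreate()
)

# The object-storage path and column name below are illustrative only.
records = spark.read.json("oci://my-bucket@my-namespace/emr/*.json")
records.printSchema()

# A simple aggregation to start looking for patterns across records.
records.groupBy("diagnosis_code").count().orderBy("count", ascending=False).show(10)
```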

Question 14

You are a data scientist with a set of text and image files that need annotation, and you want to use Oracle Cloud Infrastructure (OCI) Data Labeling. Which of the following THREE annotation classes are supported by the tool?

Options:

A.

Object Detection

B.

Named Entity Extraction

C.

Classification (single/multi label)

D.

Key-Point and Landmark

E.

Polygonal Segmentation

F.

Semantic Segmentation

Question 15

You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?

Options:

A.

Model type selection

B.

Model validation

C.

Data access

D.

Feature engineering

Question 16

You are a data scientist working inside a notebook session, and you attempt to pip install a package from a public repository that is not included in your conda environment. After running this command, you get a network timeout error. What might be missing from your networking configuration?

Options:

A.

FastConnect to an on-premises network.

B.

Primary Virtual Network Interface Card (VNIC).

C.

NAT Gateway with public internet access.

D.

Service Gateway with private subnet access

Question 17

You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?

Options:

A.

Table extraction

B.

Punctuation correction

C.

Sentence diagramming

D.

Topic classification

E.

Sentiment analysis

Question 18

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

Options:

A.

Create and upload score.py and runtime.yaml.

B.

Create and upload the API signing key and config file.

C.

Import the REST API.

D.

Call the ADS command to enable AI integration
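A minimal sketch of checking that the API signing key and config file are in place before constructing an AI service client (the config path shown is the OCI SDK default; AIServiceLanguageClient is the standard Language service client in the OCI Python SDK):

```python
import oci

# Load and validate the API signing key / config file that SDK clients rely on.
config = oci.config.from_file("~/.oci/config", "DEFAULT")
oci.config.validate_config(config)   # raises if required fields or the key file are missing

# Any OCI AI service client can then be built from this config, for example
# the Language service client.
language_client = oci.ai_language.AIServiceLanguageClient(config)
```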

Question 19

You have trained three different models on your data set using Oracle AutoML. You want to visualize the behavior of each of the models, including the baseline model, on the test set. Which class should be used from the Accelerated Data Science (ADS) SDK to visually compare the models?

Options:

A.

EvaluationMetrics

B.

ADSEvaluator

C.

ADSExplainer

D.

ADSTuner
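A hedged sketch of comparing several trained models with ADSEvaluator; the ADSModel/ADSData wrappers follow the classic ADS API, and the scikit-learn estimators and data set are illustrative:

```python
from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Wrap fitted estimators so ADS can evaluate them side by side.
lr = ADSModel.from_estimator(LogisticRegression(max_iter=5000).fit(X_tr, y_tr))
rf = ADSModel.from_estimator(RandomForestClassifier().fit(X_tr, y_tr))

evaluator = ADSEvaluator(
    ADSData.build(X=X_te, y=y_te),
    models=[lr, rf],
    training_data=ADSData.build(X=X_tr, y=y_tr),
)

# Renders metric tables and comparison charts for all models on the test set.
evaluator.show_in_notebook()
```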

Question 20

You have trained a machine learning model on Oracle Cloud Infrastructure (OCI) Data Science, and you want to save the code and associated pickle file in a Git repository. To do this, you have to create a new SSH key pair to use for authentication. Which SSH command would you use to create the public/private key pair in the notebook session?

Options:

A.

ssh-agent

B.

ssh-copy-id

C.

ssh-add

D.

ssh-keygen

Question 21

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

Options:

A.

Call the Accelerated Data Science (ADS) command to enable AI integration

B.

Create and upload the API signing key and config file

C.

Import the REST API

D.

Create and upload execute.py and runtime.yaml

Question 22

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

Options:

A.

Define the compute scaling strategy.

B.

Configure the deployment infrastructure.

C.

Define the inference server dependencies.

D.

Execute the inference logic code
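A minimal sketch of a score.py artifact file; the load_model() and predict() functions follow the model catalog convention, and the pickle file name is illustrative:

```python
import os
import pickle

_model = None

def load_model():
    """Load and cache the serialized model shipped inside the artifact."""
    global _model
    if _model is None:
        artifact_dir = os.path.dirname(os.path.realpath(__file__))
        with open(os.path.join(artifact_dir, "model.pkl"), "rb") as f:
            _model = pickle.load(f)
    return _model

def predict(data, model=load_model()):
    """Run the inference logic on the incoming payload and return a JSON-serializable result."""
    return {"prediction": model.predict(data).tolist()}
```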

Question 23

You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog. How would you design the authentication mechanism for the job?

Options:

A.

Package your personal OCI config file and keys in the job artifact.

B.

Use the resource principal of the job run as the signer in the job code, ensuring there is a dynamic group for this job run with appropriate access to Object Storage and the model catalog.

C.

Store your personal OCI config file and keys in the Vault, and access the Vault through the job run resource principal.

D.

Create a pre-authenticated request (PAR) for the Object Storage bucket, and use that in the job code.
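A hedged sketch of job code that authenticates with the job run's resource principal (the namespace, bucket, and object names are placeholders):

```python
import io
import oci

# Obtain the resource principal signer available inside the job run.
signer = oci.auth.signers.get_resource_principals_signer()
object_storage = oci.object_storage.ObjectStorageClient(config={}, signer=signer)

# Read the sensitive training data from Object Storage.
resp = object_storage.get_object(
    namespace_name="my-namespace",
    bucket_name="training-data",
    object_name="customers.csv",
)
data = io.BytesIO(resp.data.content)
# ... train the model on `data`, then save it to the model catalog with the same signer.
```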

Question 24

Six months ago, you created and deployed a model that predicts customer churn for a call centre. Initially, it was yielding quality predictions. However, over the last two months, users are questioning the credibility of the predictions. Which two methods would you employ to verify the accuracy of the model?

Options:

A.

Retrain the model

B.

Validate the model using recent data

C.

Drift monitoring

D.

Redeploy the model

E.

Operational monitoring
