How can a Snowflake user access a JSON object, given the following table? (Select TWO).
src:salesperson.name
src:salesPerson.name
src:salesperson.Name
SRC:salesperson.name
SRC:salesperson.Name
To access a JSON object in Snowflake, dot notation is used: the path to the object is specified after the column name containing the JSON data. Column identifiers are case-insensitive, so src and SRC refer to the same column, but the JSON element names in the path are case-sensitive, so name and Name are different keys. The two correct answers are therefore src:salesperson.name and SRC:salesperson.name. References: [COF-C02] SnowPro Core Certification Exam Study Guide
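A short illustration of this behavior (the table and column names are hypothetical):

```sql
-- Assume a table CAR_SALES with a VARIANT column SRC holding
-- JSON like {"salesperson": {"name": "Alice"}} (hypothetical names).
SELECT src:salesperson.name FROM car_sales;  -- works: identifier case is ignored
SELECT SRC:salesperson.name FROM car_sales;  -- works: same column, same JSON path
SELECT src:salesperson.Name FROM car_sales;  -- NULL: JSON keys are case-sensitive
```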
Where is Snowflake metadata stored?
Within the data files
In the virtual warehouse layer
In the cloud services layer
In the remote storage layer
Snowflake’s architecture is divided into three layers: database storage, query processing, and cloud services. The metadata, which includes information about the structure of the data, the SQL operations performed, and the service-level policies, is stored in the cloud services layer. This layer acts as the brain of the Snowflake environment, managing metadata, query optimization, and transaction coordination.
What is a characteristic of the Snowflake Query Profile?
It can provide statistics on a maximum number of 100 queries per week.
It provides a graphic representation of the main components of the query processing.
It provides detailed statistics about which queries are using the greatest number of compute resources.
It can be used by third-party software using the Query Profile API.
The Snowflake Query Profile provides a graphic representation of the main components of the query processing. This visual aid helps users understand the execution details and performance characteristics of their queries.
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Grant the OWNERSHIP privilege on the schema.
Grant the OWNERSHIP privilege on the database.
Grant the MANAGE GRANTS global privilege.
Grant ALL privileges on the schema.
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which query contains a Snowflake hosted file URL in a directory table for a stage named bronzestage?
list @bronzestage;
select * from directory(@bronzestage);
select metadata$filename from @bronzestage;
select * from table(information_schema.stage_directory_file_registration_history(
stage_name=>'bronzestage'));
The query that contains a Snowflake hosted file URL in a directory table for a stage named bronzestage is select * from directory(@bronzestage). This query retrieves a list of all files on the stage along with metadata, including the Snowflake file URL for each file.
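A sketch of querying the directory table, assuming the directory first needs to be enabled on the stage (bronzestage is from the question):

```sql
-- Enable and populate the directory on the stage if not already done
ALTER STAGE bronzestage SET DIRECTORY = (ENABLE = TRUE);
ALTER STAGE bronzestage REFRESH;

-- Each row includes the Snowflake-hosted FILE_URL for a staged file
SELECT relative_path, size, file_url
FROM DIRECTORY(@bronzestage);
```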
Data storage for individual tables can be monitored using which commands and/or objects? (Choose two.)
SHOW STORAGE BY TABLE;
SHOW TABLES;
Information Schema -> TABLE_HISTORY
Information Schema -> TABLE_FUNCTION
Information Schema -> TABLE_STORAGE_METRICS
To monitor data storage for individual tables, use the SHOW TABLES command, which reports the number of rows and bytes stored for each table, together with the Information Schema view TABLE_STORAGE_METRICS, which provides detailed storage utilization metrics. SHOW STORAGE BY TABLE is not a valid Snowflake command. References: Snowflake Documentation
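A sketch of reading per-table storage from the Information Schema view (database and schema names are placeholders):

```sql
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes
FROM mydb.information_schema.table_storage_metrics
WHERE table_schema = 'PUBLIC'
ORDER BY active_bytes DESC;
```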
Which of the following is the Snowflake Account_Usage.Metering_History view used for?
Gathering the hourly credit usage for an account
Compiling an account's average cloud services cost over the previous month
Summarizing the throughput of Snowpipe costs for an account
Calculating the funds left on an account's contract
The Snowflake Account_Usage.Metering_History view is used to gather the hourly credit usage for an account. This view provides details on the credits consumed by various services within Snowflake for the last 365 days.
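For example, hourly credit consumption over the last week could be pulled as follows (requires access to the shared SNOWFLAKE database):

```sql
SELECT service_type, start_time, end_time, credits_used
FROM snowflake.account_usage.metering_history
WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY start_time;
```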
Which languages require that User-Defined Function (UDF) handlers be written inline? (Select TWO).
Java
Javascript
Scala
Python
SQL
User-Defined Function (UDF) handlers must be written inline for JavaScript and SQL. These languages allow the UDF logic to be included directly within the SQL statement that creates the UDF.
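A minimal inline SQL UDF for illustration (the function name and body are hypothetical):

```sql
CREATE OR REPLACE FUNCTION area_of_circle(radius FLOAT)
  RETURNS FLOAT
  AS 'pi() * radius * radius';  -- handler written inline in the CREATE statement

SELECT area_of_circle(2.0);
```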
Query parsing and compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Cloud services layer
Compute layer
Storage layer
Cloud agnostic layer
Query parsing and compilation in Snowflake occur within the cloud services layer. This layer is responsible for various management tasks, including query compilation and optimization
Which of the following are considerations when using a directory table when working with unstructured data? (Choose two.)
A directory table is a separate database object.
Directory tables store data file metadata.
A directory table will be automatically added to a stage.
Directory tables do not have their own grantable privileges.
Directory table data cannot be refreshed manually.
Directory tables in Snowflake store metadata about the data files in a stage. They are not separate database objects; a directory table is an implicit object layered on a stage. Directory tables do not have grantable privileges of their own.
How can a data provider ensure that a data consumer is going to have access to the required objects?
Enable the data sharing feature in the account and validate the view.
Use the CURRENT_ROLE and CURRENT_USER functions to validate secure views.
Use the CURRENT_ function to authorize users from a specific account to access rows in a base table.
Set the SIMULATED_DATA_SHARING_CONSUMER session parameter to the name of the consumer account for which access is being simulated.
To verify that a data consumer will have access to the required objects, the data provider can set the SIMULATED_DATA_SHARING_CONSUMER session parameter to the name of the consumer account. Queries against secure views in that session then return the results that the specified consumer account would see, allowing the provider to validate the share before the consumer uses it. References: Snowflake documentation on validating secure views for data sharing.
What is the recommended way to change the existing file format type of my_format from CSV to JSON?
ALTER FILE FORMAT my_format SET TYPE=JSON;
ALTER FILE FORMAT my_format SWAP TYPE WITH JSON;
CREATE OR REPLACE FILE FORMAT my_format TYPE=JSON;
REPLACE FILE FORMAT my_format TYPE=JSON;
The TYPE of an existing file format cannot be changed with ALTER FILE FORMAT, so the recommended way is to recreate the object with CREATE OR REPLACE FILE FORMAT my_format TYPE=JSON. This replaces the CSV definition with a JSON definition under the same name. References: Snowflake documentation on CREATE FILE FORMAT and ALTER FILE FORMAT.
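For reference, recreating the format with the new type looks like this (my_format as in the question):

```sql
CREATE OR REPLACE FILE FORMAT my_format
  TYPE = JSON;
```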
How would a user execute a series of SQL statements using a task?
Include the SQL statements in the body of the task: CREATE TASK mytask .. AS INSERT INTO target1 SELECT .. FROM stream_s1 WHERE .. INSERT INTO target2 SELECT .. FROM stream_s1 WHERE ..
A stored procedure can have only one DML statement per stored procedure invocation, and therefore the user should sequence stored procedure calls in the task definition: CREATE TASK mytask .... AS call stored_proc1(); call stored_proc2();
Use a stored procedure executing multiple SQL statements and invoke the stored procedure from the task. CREATE TASK mytask .... AS call stored_proc_multiple_statements_inside();
Create a task for each SQL statement (e.g. resulting in task1, task2, etc.) and string the series of SQL statements by having a control task calling task1, task2, etc. sequentially.
To execute a series of SQL statements using a task, a user would use a stored procedure that contains multiple SQL statements and invoke this stored procedure from the task. References: Snowflake Documentation.
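A sketch of the pattern (the warehouse, schedule, and procedure names are placeholders):

```sql
CREATE OR REPLACE TASK mytask
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  CALL stored_proc_multiple_statements_inside();

ALTER TASK mytask RESUME;  -- tasks are created in a suspended state
```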
How does Snowflake allow a data provider with an Azure account in central Canada to share data with a data consumer on AWS in Australia?
The data provider in Azure Central Canada can create a direct share to AWS Asia Pacific, if they are both in the same organization.
The data consumer and data provider can form a Data Exchange within the same organization to create a share from Azure Central Canada to AWS Asia Pacific.
The data provider uses the GET DATA workflow in the Snowflake Data Marketplace to create a share between Azure Central Canada and AWS Asia Pacific.
The data provider must replicate the database to a secondary account in AWS Asia Pacific within the same organization then create a share to the data consumer's account.
Snowflake allows data providers to share data with consumers across different cloud platforms and regions through database replication. The data provider must replicate the database to a secondary account in the target region or cloud platform within the same organization, and then create a share to the data consumer’s account. This process ensures that the data is available in the consumer’s region and on their cloud platform, facilitating seamless data sharing. References: Sharing data securely across regions and cloud platforms | Snowflake Documentation
How can a user improve the performance of a single large complex query in Snowflake?
Scale up the virtual warehouse.
Scale out the virtual warehouse.
Enable standard warehouse scaling.
Enable economy warehouse scaling.
Scaling up the virtual warehouse in Snowflake involves increasing the compute resources available for a single warehouse, which can improve the performance of large and complex queries by providing more CPU and memory resources. References: Based on general cloud data warehousing knowledge as of 2021.
Which privilege must be granted to a share to allow secure views the ability to reference data in multiple databases?
CREATE_SHARE on the account
SHARE on databases and schemas
SELECT on tables used by the secure view
REFERENCE_USAGE on databases
To allow secure views the ability to reference data in multiple databases, the REFERENCE_USAGE privilege must be granted on each database that contains objects referenced by the secure view. This privilege is necessary before granting the SELECT privilege on a secure view to a share.
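The grant sequence might look like this (the database, schema, view, and share names are placeholders):

```sql
-- database1 contains the secure view; database2 holds objects it references
GRANT USAGE ON DATABASE database1 TO SHARE my_share;
GRANT USAGE ON SCHEMA database1.sch TO SHARE my_share;
GRANT REFERENCE_USAGE ON DATABASE database2 TO SHARE my_share;
GRANT SELECT ON VIEW database1.sch.secure_v TO SHARE my_share;
```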
When should a user consider disabling auto-suspend for a virtual warehouse? (Select TWO).
When users will be using compute at different times throughout a 24/7 period
When managing a steady workload
When the compute must be available with no delay or lag time
When the user does not want to have to manually turn on the warehouse each time it is needed
When the warehouse is shared across different teams
Disabling auto-suspend for a virtual warehouse is recommended when there is a steady workload, which ensures that compute resources are always available. Additionally, it is advisable to disable auto-suspend when immediate availability of compute resources is critical, eliminating any startup delay
What is the purpose of using the OBJECT_CONSTRUCT function with the COPY INTO command?
Reorder the rows in a relational table and then unload the rows into a file
Convert the rows in a relational table to a single VARIANT column and then unload the rows into a file.
Reorder the data columns according to a target table definition and then unload the rows into the table.
Convert the rows in a source file to a single VARIANT column and then load the rows from the file to a variant table.
The OBJECT_CONSTRUCT function is used with the COPY INTO command to convert the rows in a relational table to a single VARIANT column, which can then be unloaded into a file. This is useful for transforming table data into a semi-structured JSON format
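A sketch of the unload pattern (the stage and table names are placeholders):

```sql
COPY INTO @my_stage/unload/
FROM (
  SELECT OBJECT_CONSTRUCT(*)   -- each row becomes one JSON object
  FROM my_relational_table
)
FILE_FORMAT = (TYPE = JSON);
```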
Which operations are handled in the Cloud Services layer of Snowflake? (Select TWO).
Security
Data storage
Data visualization
Query computation
Metadata management
The Cloud Services layer in Snowflake is responsible for various services, including security (like authentication and authorization) and metadata management (like query parsing and optimization). References: Based on general cloud architecture knowledge as of 2021.
What is the name of the SnowSQL file that can store connection information?
history
config
snowsql.cnf
snowsql.pubkey
The SnowSQL file that can store connection information is named ‘config’. By default it is located in the ~/.snowsql/ directory and stores connection parameters such as the account name, user name, and default database and warehouse. References: Snowflake documentation on SnowSQL configuration.
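A typical ~/.snowsql/config connection entry might look like this (all values are placeholders):

```ini
[connections.my_conn]
accountname = myorg-myaccount
username = jdoe
dbname = mydb
schemaname = public
warehousename = my_wh
```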
Which feature is integrated to support Multi-Factor Authentication (MFA) at Snowflake?
Authy
Duo Security
One Login
RSA SecurID Access
Snowflake integrates Duo Security to support Multi-Factor Authentication (MFA). This feature provides increased login security for users connecting to Snowflake, and it is managed completely by Snowflake without the need for users to sign up separately with Duo.
For non-materialized views, what column in Information Schema and Account Usage identifies whether a view is secure or not?
CHECK_OPTION
IS_SECURE
IS_UPDATEABLE
TABLE_NAME
In the Information Schema and Account Usage, the column that identifies whether a view is secure or not is IS_SECURE.
How does a scoped URL expire?
When the data cache clears.
When the persisted query result period ends.
The encoded URL access is permanent.
The length of time is specified in the expiration_time argument.
A scoped URL expires when the persisted query result period ends, which is typically after the results cache expires. This is currently set to 24 hours
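Scoped URLs are generated with the BUILD_SCOPED_FILE_URL function; a sketch (the stage and path are placeholders):

```sql
SELECT BUILD_SCOPED_FILE_URL(@my_stage, 'folder/data.csv');
-- The returned URL is valid only until the persisted query result expires (24 hours)
```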
A data provider wants to share data with a consumer who does not have a Snowflake account. The provider creates a reader account for the consumer following these steps:
1. Created a user called "CONSUMER"
2. Created a database to hold the share and an extra-small warehouse to query the data
3. Granted the role PUBLIC the following privileges: Usage on the warehouse, database, and schema, and SELECT on all the objects in the share
Based on this configuration what is true of the reader account?
The reader account will automatically use the Standard edition of Snowflake.
The reader account compute will be billed to the provider account.
The reader account can clone data the provider has shared, but cannot re-share it.
The reader account can create a copy of the shared data using CREATE TABLE AS...
The reader account compute will be billed to the provider account. In Snowflake, when a provider creates a reader account for a consumer who does not have a Snowflake account, the compute resources used by the reader account are billed to the provider’s account. This allows the consumer to query the shared data without incurring any costs. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which objects together comprise a namespace in Snowflake? (Select TWO).
Account
Database
Schema
Table
Virtual warehouse
In Snowflake, a namespace is comprised of a database and a schema. The combination of a database and schema uniquely identifies database objects within an account
Which statement accurately describes a characteristic of a materialized view?
A materialized view can query only a single table.
Data accessed through materialized views can be stale.
Materialized view refreshes need to be maintained by the user.
Querying a materialized view is slower than executing a query against the base table of the view.
A characteristic of a materialized view is that the data accessed through it can be stale. This is because the data in a materialized view may not reflect the latest changes in the base tables until the view is refreshed
For the ALLOWED VALUES tag property, what is the MAXIMUM number of possible string values for a single tag?
10
50
64
256
For the ALLOWED VALUES tag property, the maximum number of possible string values for a single tag is 256. This allows for a wide range of values to be assigned to a tag when it is set on an object
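For illustration, allowed values are declared when the tag is created or altered (the tag name and values are hypothetical):

```sql
CREATE TAG cost_center
  ALLOWED_VALUES 'finance', 'engineering', 'marketing';

ALTER TAG cost_center ADD ALLOWED_VALUES 'sales';
```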
Which Snowflake tool would be BEST to troubleshoot network connectivity?
SnowCLI
SnowUI
SnowSQL
SnowCD
SnowCD (Snowflake Connectivity Diagnostic Tool) is the best tool provided by Snowflake for troubleshooting network connectivity issues. It helps diagnose and resolve issues related to connecting to Snowflake services
Which statement describes how Snowflake supports reader accounts?
A reader account can consume data from the provider account that created it and combine it with its own data.
A consumer needs to become a licensed Snowflake customer as data sharing is only supported between Snowflake accounts.
The users in a reader account can query data that has been shared with the reader account and can perform DML tasks.
The SHOW MANAGED ACCOUNTS command will view all the reader accounts that have been created for an account.
Snowflake supports reader accounts, which are a type of account that allows data providers to share data with consumers who are not Snowflake customers. However, for data sharing to occur, the consumer needs to become a licensed Snowflake customer because data sharing is only supported between Snowflake accounts. References: Introduction to Secure Data Sharing | Snowflake Documentation.
The first user assigned to a new account, ACCOUNTADMIN, should create at least one additional user with which administrative privilege?
USERADMIN
PUBLIC
ORGADMIN
SYSADMIN
The first user assigned to a new Snowflake account, typically with the ACCOUNTADMIN role, should create at least one additional user with the USERADMIN administrative privilege. This role is responsible for creating and managing users and roles within the Snowflake account. References: Access control considerations | Snowflake Documentation
What happens when a database is cloned?
It does not retain any privileges granted on the source object.
It replicates all granted privileges on the corresponding source objects.
It replicates all granted privileges on the corresponding child objects.
It replicates all granted privileges on the corresponding child schema objects.
When a database is cloned in Snowflake, the clone itself does not retain the privileges granted on the source database; however, all privileges granted on the corresponding child objects (the schemas and the objects within them) are replicated to the clone’s child objects. Privileges on the cloned database itself must be granted again as needed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What effect does WAIT_FOR_COMPLETION = TRUE have when running an ALTER WAREHOUSE command and changing the warehouse size?
The warehouse size does not change until all queries currently running in the warehouse have completed.
The warehouse size does not change until all queries currently in the warehouse queue have completed.
The warehouse size does not change until the warehouse is suspended and restarted.
It does not return from the command until the warehouse has finished changing its size.
The WAIT_FOR_COMPLETION = TRUE parameter in an ALTER WAREHOUSE command ensures that the command does not return until the warehouse has completed resizing. This means that the command will wait until all the necessary compute resources have been provisioned and the warehouse size has been changed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
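For example, resizing a warehouse with the blocking behavior might look like this (the warehouse name is a placeholder):

```sql
ALTER WAREHOUSE my_wh SET
  WAREHOUSE_SIZE = XLARGE
  WAIT_FOR_COMPLETION = TRUE;  -- blocks until the resize has finished
```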
Which Snowflake edition enables data sharing only through Snowflake Support?
Virtual Private Snowflake
Business Critical
Enterprise
Standard
The Snowflake edition that enables data sharing only through Snowflake Support is the Virtual Private Snowflake (VPS). By default, VPS does not permit data sharing outside of the VPS environment, but it can be enabled through Snowflake Support.
Which stream type can be used for tracking the records in external tables?
Append-only
External
Insert-only
Standard
The stream type that can be used for tracking the records in external tables is insert-only. Insert-only streams are supported exclusively on external tables: they track row inserts only and do not record delete or update operations. There is no stream type named ‘external’.
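A sketch of creating such a stream (the stream and table names are placeholders):

```sql
CREATE OR REPLACE STREAM my_ext_stream
  ON EXTERNAL TABLE my_external_table
  INSERT_ONLY = TRUE;  -- required for external tables; only inserts are tracked
```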
Network policies can be applied to which of the following Snowflake objects? (Choose two.)
Roles
Databases
Warehouses
Users
Accounts
Network policies in Snowflake can be applied to users and accounts. These policies control inbound access to the Snowflake service and internal stages, allowing or denying access based on the originating network identifiers.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
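A sketch of creating and applying a network policy (the names and CIDR range are placeholders):

```sql
CREATE NETWORK POLICY corp_only
  ALLOWED_IP_LIST = ('192.168.1.0/24');

ALTER ACCOUNT SET NETWORK_POLICY = corp_only;   -- account-level
ALTER USER jdoe SET NETWORK_POLICY = corp_only; -- user-level override
```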
What persistent data structures are used by the search optimization service to improve the performance of point lookups?
Micro-partitions
Clustering keys
Equality searches
Search access paths
The search optimization service in Snowflake uses persistent data structures known as search access paths to improve the performance of point lookups. These structures enable efficient retrieval of data by reducing the amount of data scanned during queries.
Search Access Paths:
Search access paths are special indexing structures maintained by the search optimization service.
They store metadata about the distribution of data within tables, enabling faster lookups for specific values.
Point Lookups:
Point lookups involve searching for a specific value within a column.
By leveraging search access paths, Snowflake can quickly locate the exact micro-partition containing the value, minimizing the amount of data scanned.
Performance Improvement:
The use of search access paths significantly reduces query execution time for point lookups.
This is especially beneficial for large tables where scanning all micro-partitions would be computationally expensive.
References:
Snowflake Documentation: Search Optimization Service
Snowflake Documentation: Understanding Search Access Paths
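For illustration, search optimization is enabled per table, after which point lookups can use the search access paths (the table name is a placeholder):

```sql
ALTER TABLE my_table ADD SEARCH OPTIMIZATION;

-- A point lookup that can benefit from search access paths
SELECT * FROM my_table WHERE id = 42;
```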
What are the main differences between the account usage views and the information schema views? (Select TWO).
No active warehouse is needed to query account usage views, but one is needed to query information schema views.
Account usage views do not contain data about tables but information schema views do.
Account usage views contain dropped objects but information schema views do not.
Data retention for account usage views is 1 year but is 7 days to 6 months for information schema views, depending on the view.
Information schema views are read-only but account usage views are not.
The account usage views in Snowflake provide historical usage data about the Snowflake account, and they retain this data for a period of up to 1 year. These views include information about dropped objects, enabling audit and tracking activities. On the other hand, information schema views provide metadata about database objects currently in use, such as tables and views, but do not include dropped objects. The retention of data in information schema views varies, but it is generally shorter than the retention for account usage views, ranging from 7 days to a maximum of 6 months, depending on the specific view.References: Snowflake Documentation on Account Usage and Information Schema
What is the MINIMUM Snowflake edition that must be used in order to see the ACCESS_HISTORY view?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
The ACCESS_HISTORY view in Snowflake provides detailed information about queries executed in the account, including metadata such as the time of execution, the user, and the SQL text of the queries. This view is available in the Snowflake Enterprise edition and higher editions. The Standard edition does not include this feature.
References:
Snowflake Documentation: Access History
Snowflake Editions: Snowflake Pricing
Which roles can make grant decisions to objects within a managed access schema? (Select TWO)
ACCOUNTADMIN
SECURITYADMIN
SYSTEMADMIN
ORGADMIN
USERADMIN
Managed Access Schemas: These are a special type of schema designed for fine-grained access control in Snowflake.
Roles with Grant Authority:
ACCOUNTADMIN: The top-level administrative role can grant object privileges on all objects within the account, including managed access schemas.
SECURITYADMIN: Can grant and revoke privileges on objects within the account, including managed access schemas.
Important Note: The ORGADMIN role focuses on organization-level management, not object access control.
Who can create and manage reader accounts? (Select TWO).
A user with ACCOUNTADMIN role
A user with securityadmin role
A user with SYSADMIN role
A user with ORGADMIN role
A user with CREATE ACCOUNT privilege
In Snowflake, reader accounts are special types of accounts that allow data sharing with external consumers without them having their own Snowflake account. Reader accounts are created and managed with the CREATE MANAGED ACCOUNT command, which can be executed by a user with the ACCOUNTADMIN role or by a user whose role has been granted the global CREATE ACCOUNT privilege. The ORGADMIN role manages the regular accounts of an organization but does not create reader accounts.
References:
Snowflake Documentation: Creating and Managing Reader Accounts
Which type of workload is recommended for Snowpark-optimized virtual warehouses?
Workloads with ad hoc analytics
Workloads that have large memory requirements
Workloads with unpredictable data volumes for each query
Workloads that are queried with small table scans and selective filters
Snowpark-optimized virtual warehouses in Snowflake are designed to efficiently handle workloads with large memory requirements. Snowpark is a developer framework that allows users to write code in languages like Scala, Java, and Python to process data in Snowflake. Given the nature of these programming languages and the types of data processing tasks they are typically used for, having a virtual warehouse that can efficiently manage large memory-intensive operations is crucial.
Understanding Snowpark-Optimized Virtual Warehouses:
Snowpark allows developers to build complex data pipelines and applications within Snowflake using familiar programming languages.
These virtual warehouses are optimized to handle the execution of Snowpark workloads, which often involve large datasets and memory-intensive operations.
Large Memory Requirements:
Workloads with large memory requirements include data transformations, machine learning model training, and advanced analytics.
These operations often need to process significant amounts of data in memory to perform efficiently.
Snowpark-optimized virtual warehouses are configured to provide the necessary memory resources to support these tasks, ensuring optimal performance and scalability.
Other Considerations:
While Snowpark can handle other types of workloads, its optimization for large memory tasks makes it particularly suitable for scenarios where data processing needs to be done in-memory.
Snowflake’s ability to scale compute resources dynamically also plays a role in efficiently managing large memory workloads, ensuring that performance is maintained even as data volumes grow.
References:
Snowflake Documentation: Introduction to Snowpark
Snowflake Documentation: Virtual Warehouses
When working with table MY_TABLE that contains 10 rows, which sampling query will always return exactly 5 rows?
SELECT * FROM MY_TABLE SAMPLE SYSTEM (5);
SELECT * FROM MY_TABLE SAMPLE BERNOULLI (5);
SELECT * FROM MY_TABLE SAMPLE (5 ROWS);
SELECT * FROM MY_TABLE SAMPLE SYSTEM (1) SEED (5);
In Snowflake, SAMPLE (5 ROWS) ensures an exact count of 5 rows is returned from MY_TABLE, regardless of table size. This is different from SAMPLE SYSTEM or SAMPLE BERNOULLI, which use percentage-based sampling, potentially returning varying row counts based on probabilistic methods.
The ROWS option is deterministic and does not depend on percentage, making it ideal when an exact row count is required.
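Side by side, for the 10-row MY_TABLE from the question:

```sql
SELECT * FROM my_table SAMPLE (5 ROWS);       -- always exactly 5 rows
SELECT * FROM my_table SAMPLE BERNOULLI (5);  -- each row kept with 5% probability; count varies
SELECT * FROM my_table SAMPLE SYSTEM (5);     -- ~5% of data blocks; count varies
```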
What does the Remote Disk I/O statistic in the Query Profile indicate?
Time spent reading from the result cache.
Time spent reading from the virtual warehouse cache.
Time when the query processing was blocked by remote disk access.
The level of network activity between the Cloud Services layer and the virtual warehouse.
The Remote Disk I/O statistic in the Query Profile reflects time spent waiting on remote disk access, which can occur when data needs to be retrieved from external storage (remote). This metric is crucial for identifying bottlenecks related to I/O delays, often suggesting a need for performance optimization in data retrieval paths.
The other options relate to caching and network activity, but Remote Disk I/O specifically measures the wait time for data access from remote storage locations.
Authorization to execute CREATE <object> statements comes from which role?
Primary role
Secondary role
Application role
Database role
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the user's primary role. The primary role of a user or session specifies the set of privileges (including creation privileges) that the user has. While users can have multiple roles, only the primary role is used to determine what objects the user can create unless explicitly specified in the session.
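A short illustration of the session roles involved (the role and object names are placeholders):

```sql
USE ROLE analyst;                 -- sets the primary role for the session
USE SECONDARY ROLES ALL;          -- secondary roles add privileges for other operations
CREATE TABLE mydb.sch.t (i INT);  -- authorized by the primary role (ANALYST) only
```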
Which Snowflake table is an implicit object layered on a stage, where the stage can be either internal or external?
Directory table
Temporary table
Transient table
A table with a materialized view
A directory table in Snowflake is an implicit object layered on a stage, whether internal or external. It allows users to query the contents of a stage as if it were a table, providing metadata about the files stored in the stage, such as filenames, file sizes, and last modified timestamps.
References:
Snowflake Documentation: Directory Tables
When does a materialized view get suspended in Snowflake?
When a column is added to the base table
When a column is dropped from the base table
When a DML operation is run on the base table
When the base table is reclustered
A materialized view in Snowflake gets suspended when structural changes that could impact the view's integrity are made to the base table, such as when a column is dropped from the base table. Dropping a column on which a materialized view depends can invalidate the view's data. To maintain data consistency and prevent the materialized view from serving stale or incorrect data, Snowflake automatically suspends the materialized view.
Upon suspension, the materialized view does not reflect changes to the base table until it is refreshed or re-created. This ensures that only accurate and current data is presented to users querying the materialized view.
References:
Snowflake Documentation on Materialized Views: Materialized Views
Which Query Profile metrics will provide information that can be used to improve query performance? (Select TWO).
Synchronization
Remote disk IO
Local disk IO
Pruning
Spillage
Two key metrics in Snowflake’s Query Profile that provide insights for performance improvement are:
Remote Disk IO: This measures the time the query spends waiting on remote disk access, indicating potential performance issues related to I/O bottlenecks.
Pruning: This metric reflects how effectively Snowflake’s micro-partition pruning is reducing the data scanned. Better pruning (more partitions excluded) leads to faster query performance, as fewer micro-partitions need to be processed.
These metrics are essential for identifying and addressing inefficiencies in data retrieval and storage access, optimizing overall query performance.
Which function can be used to convert semi-structured data into rows and columns?
TABLE
FLATTEN
PARSE_JSON
JSON_EXTRACT_PATH_TEXT
To convert semi-structured data into rows and columns in Snowflake, the FLATTEN function is utilized.
FLATTEN Function: This function takes semi-structured data (e.g., JSON) and transforms it into a relational table format by breaking down nested structures into individual rows. This process is essential for querying and analyzing semi-structured data using standard SQL operations.
Example Usage:
SELECT
f.value:attribute1 AS attribute1,
f.value:attribute2 AS attribute2
FROM
my_table,
LATERAL FLATTEN(input => my_table.semi_structured_column) f;
References:
Snowflake Documentation on FLATTEN
Which table function is used to perform additional processing on the results of a previously-run query?
QUERY_HISTORY
RESULT_SCAN
DESCRIBE_RESULTS
QUERY_HISTORY_BY_SESSION
The RESULT_SCAN table function is used in Snowflake to perform additional processing on the results of a previously-run query. It allows users to reference the result set of a previous query by its query ID, enabling further analysis or transformations without re-executing the original query.
References:
Snowflake Documentation: RESULT_SCAN
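A minimal sketch of RESULT_SCAN in use, post-processing the result of the most recent query in the session:

```sql
-- Re-use the previous query's result set without re-executing it.
SELECT COUNT(*) FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
-- A specific query ID can also be passed (the ID below is a placeholder):
-- SELECT * FROM TABLE(RESULT_SCAN('01a2b3c4-0000-0000-0000-000000000000'));
```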
When an object is created in Snowflake, who owns the object?
The public role
The user's default role
The current active primary role
The owner of the parent schema
In Snowflake, when an object is created, it is owned by the role that is currently active. This active role is the one that is being used to execute the creation command. Ownership implies full control over the object, including the ability to grant and revoke access privileges. This is specified in Snowflake's documentation under the topic of Access Control, which states that "the role in use at the time of object creation becomes the owner of the object."
References:
Snowflake Documentation: Object Ownership
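A sketch illustrating this behavior (role and object names are hypothetical):

```sql
-- The active primary role at creation time becomes the owner.
USE ROLE analyst_role;
CREATE TABLE sales_summary (region STRING, total NUMBER);
SHOW GRANTS ON TABLE sales_summary;  -- OWNERSHIP is held by ANALYST_ROLE
```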
Granting a user which privileges on all virtual warehouses is equivalent to granting the user the global MANAGE WAREHOUSES privilege?
MODIFY, MONITOR and OPERATE privileges
OWNERSHIP and USAGE privileges
APPLYBUDGET and AUDIT privileges
MANAGE LISTING AUTO FULFILLMENT and RESOLVE ALL privileges
Granting a user the MODIFY, MONITOR, and OPERATE privileges on all virtual warehouses in Snowflake is equivalent to granting the global MANAGE WAREHOUSES privilege. These privileges collectively provide comprehensive control over virtual warehouses.
MODIFY Privilege:
Allows users to change the configuration of the virtual warehouse.
Includes resizing the warehouse and changing its properties (such as auto-suspend settings).
MONITOR Privilege:
Allows users to view the status and usage metrics of the virtual warehouse.
Enables monitoring of performance and workload.
OPERATE Privilege:
Grants the ability to start and stop the virtual warehouse.
Includes pausing and resuming operations as needed.
References:
Snowflake Documentation: Warehouse Privileges
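A sketch of the two equivalent approaches (warehouse and role names are hypothetical):

```sql
-- Per-warehouse grants, repeated for every warehouse in the account:
GRANT MODIFY, MONITOR, OPERATE ON WAREHOUSE wh_etl TO ROLE wh_admin;
-- The single global alternative:
GRANT MANAGE WAREHOUSES ON ACCOUNT TO ROLE wh_admin;
```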
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be assigned to which role?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
USERADMIN
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be granted to the SYSADMIN role. This recommendation stems from the best practices for implementing a least-privilege access control model. Because role privileges are inherited up the hierarchy, rolling all custom roles up to SYSADMIN ensures that system administrators can manage every object created by those roles, while account-level administration remains reserved for ACCOUNTADMIN, which Snowflake advises using sparingly.
References:
Snowflake Documentation on Access Control: Managing Access Control
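Following Snowflake's documented pattern of rolling custom roles up to SYSADMIN, a sketch of building such a hierarchy (custom role names are hypothetical):

```sql
-- Build a custom role hierarchy and attach its top-most role to SYSADMIN.
CREATE ROLE reporting_role;
CREATE ROLE data_eng_role;
GRANT ROLE reporting_role TO ROLE data_eng_role;  -- custom hierarchy
GRANT ROLE data_eng_role TO ROLE SYSADMIN;        -- top-most custom role
```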
What is the MINIMUM Snowflake edition that supports database replication?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
The minimum Snowflake edition that supports database replication is the Standard edition; per Snowflake's documentation, database and share replication are available to all accounts, while failover/failback requires Business Critical. Database replication allows data to be replicated between different Snowflake accounts or regions, providing high availability and disaster recovery capabilities.
References:
Snowflake Documentation: Database Replication
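A sketch of enabling replication and creating a secondary database (the database name and account identifiers are placeholders):

```sql
-- On the source account, enable replication to a target account:
ALTER DATABASE my_db ENABLE REPLICATION TO ACCOUNTS myorg.account2;
-- On the target account, create and refresh the secondary database:
CREATE DATABASE my_db AS REPLICA OF myorg.account1.my_db;
ALTER DATABASE my_db REFRESH;
```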
Which statement describes Snowflake tables?
Snowflake tables are a logical representation of underlying physical data.
Snowflake tables are the physical instantiation of data loaded into Snowflake.
Snowflake tables require that clustering keys be defined to perform optimally.
Snowflake tables are owned by a user.
In Snowflake, tables represent a logical structure through which users interact with the stored data. The actual physical data is stored in micro-partitions managed by Snowflake, and the logical table structure provides the means by which SQL operations are mapped to this data. This architecture allows Snowflake to optimize storage and querying across its distributed, cloud-based data storage system. References: Snowflake Documentation on Tables
Which file function provides a URL with access to a file on a stage without the need for authentication and authorization?
GET_RELATIVE_PATH
GET_PRESIGNED_URL
BUILD_STAGE_FILE_URL
BUILD_SCOPED_FILE_URL
The GET_PRESIGNED_URL file function in Snowflake provides a URL with access to a file on a stage without requiring authentication and authorization. This is particularly useful for sharing data files stored in Snowflake stages with external parties securely and conveniently. The presigned URL generated by this function gives temporary access to the file, which expires after a specified duration.
Example usage of GET_PRESIGNED_URL (the stage name, file path, and expiration are placeholders):
SELECT GET_PRESIGNED_URL(@my_stage, 'path/to/file.csv', 3600);
This function generates a URL that can be used to directly access a file in the stage, making it easier to share data without compromising security.
What can be used to process unstructured data?
External tables
The COPY INTO command