In a PDI transformation, you are retrieving data from a large lookup table using a Database Lookup step. To improve performance, you enable caching in the step and use the 'Load all data from table' option.
In this scenario, which three statements are correct about the data flow of the Database Lookup step? (Choose three.)
Which PDI step or entry processes data within the Hadoop cluster?
To simplify the migration of a PDI solution from one environment to another, you need to externalize all database connection details.
Which two methods will satisfy this requirement? (Choose two.)
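As background for this question: one common way to externalize connection details in PDI is to define variables in the kettle.properties file (read from the .kettle directory in the user's home, or from KETTLE_HOME) and reference them in the database connection dialog with ${...} syntax. A minimal sketch, with assumed variable names and values:

```properties
# ~/.kettle/kettle.properties (one copy per environment; names below are illustrative)
DB_HOST=dev-db.example.com
DB_PORT=5432
DB_NAME=sales_dw
DB_USER=etl_user
```

In the transformation's database connection settings, the host name field would then contain ${DB_HOST}, the port ${DB_PORT}, and so on, so the same transformation runs unchanged in each environment.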
You want to make a dynamic PDI transformation that is driven by variables loaded from a properties file.
Which free-form text fields within a step can be configured with variables?
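For context on the variable syntax this question relies on: in Spoon, fields that accept variables are marked with a dollar-sign (diamond) icon, and a variable is referenced as ${VARIABLE_NAME}. A small illustrative example, with an assumed variable and file name:

```properties
# Properties file loaded into the transformation (illustrative values)
INPUT_DIR=/data/incoming
OUTPUT_DIR=/data/processed
```

A Text file input step's filename field could then be set to ${INPUT_DIR}/sales.csv, resolving at run time to /data/incoming/sales.csv in this example.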
Which three file formats are splittable on HDFS? (Choose three.)
A customer has an existing PDI job that calls a transformation. They want to execute the transformation through Spark on their Hadoop cluster.
Which change must be made to satisfy this requirement?
You are migrating to a new version of the Pentaho server and you want to import your old repository.
What are two methods to accomplish this task? (Choose two.)
You have used four copies of the Sort rows step in parallel on the local JVM using the 'Change number of copies to start' option.
Each of the sorted streams needs to be merged together to ensure the proper sort sequence.
Which step should be used in this scenario?
A transformation is running in a production environment and you want to monitor it in real time.
Which tool should you use?