1z0-067

150 MINUTES FOR 102 QUESTIONS

60% CORRECT ANSWERS REQUIRED TO PASS

#############################

QUESTION 1
In your database, the TBS_PERCENT_USED parameter is set to 60 and the TBS_PERCENT_FREE parameter is set to 20.
Which two storage-tiering actions might be automated when using Information Lifecycle Management (ILM) to automate data movement?

A. The movement of all segments to a target tablespace with a higher degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED
B. Setting the target tablespace to read-only after the segments are moved
C. The movement of some segments to a target tablespace with a higher degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED
D. Taking the target tablespace offline after the segments are moved
E. The movement of some blocks to a target tablespace with a lower degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED

###

B,C
The threshold for activating tiering policies is based on two parameters:
TBS_PERCENT_USED
TBS_PERCENT_FREE
Both values can be controlled by the DBMS_ILM_ADMIN package.
TBS_PERCENT_USED and TBS_PERCENT_FREE default to 85 and 25, respectively. Hence, whenever the source tablespace's usage percentage goes beyond 85 percent, any tiering policy specified on its objects will be executed and objects will be moved to the target tablespace until the source tablespace becomes at least 25 percent free. Note that it is possible to add a custom condition to tiering policies to enable movement of data based on conditions other than how full the tablespace is.
In addition, the READ ONLY option must be explicitly specified for the target tablespace.

Translation note: "tier" = storage level/layer; a "tiering policy" is a policy that moves data between storage tiers.
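As a sketch, the thresholds and a tiering policy like the one described above can be set as follows (the sales table and the low_cost_store tablespace are hypothetical names):

SQL> EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 60)
SQL> EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 20)
SQL> -- Move segments to a different storage tier; make the target read-only afterwards
SQL> ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_store READ ONLY;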

###

QUESTION 2
You want to consolidate backup information and centrally manage backup and recovery scripts for multiple databases running in your organization.
Which two backup solutions can be used?
A. RMAN recovery catalog
B. RMAN Media Management Library
C. Enterprise Manager Cloud Control
D. Enterprise Manager Database Express
E. Oracle Secure Backup

###

Correct Answer: AC

Purpose of the Recovery Catalog
A recovery catalog is a database schema used by RMAN to store metadata about one or more Oracle databases. Typically, you store the catalog in a dedicated database. A recovery catalog provides the following benefits:
•    A recovery catalog creates redundancy for the RMAN repository stored in the control file of each target database. The recovery catalog serves as a secondary metadata repository. If the target control file and all backups are lost, then the RMAN metadata still exists in the recovery catalog.
•    A recovery catalog centralizes metadata for all your target databases. Storing the metadata in a single place makes reporting and administration tasks easier to perform.
•    A recovery catalog can store metadata history much longer than the control file. This capability is useful if you have to do a recovery that goes further back in time than the history in the control file. The added complexity of managing a recovery catalog database can be offset by the convenience of having the extended backup history available.
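A minimal sketch of putting a target database under a recovery catalog (the catalog owner rco and the service name catdb are assumed names):

RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG rco@catdb
RMAN> REGISTER DATABASE;
RMAN> REPORT SCHEMA;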

QUESTION 3
You want RMAN to make duplicate copies of data files when using the backup command.
What must you set using the RMAN configure command to achieve this?
A. MAXSETSIZE TO 2;
B. DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
C. CHANNEL DEVICE TYPE DISK FORMAT '/disk1/%U' , '/disk2/%U';
D. DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;

###

Correct Answer: D (duplexing)

You can use the CONFIGURE … BACKUP COPIES command to specify how many copies of each backup piece should be created on the specified device type for the specified type of file. This type of backup is known as a duplexed backup set. The CONFIGURE settings for duplexing only affect backups of datafiles, control files, and archived logs into backup sets, and do not affect image copies.
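For example, combining the duplexing setting with a two-destination channel format (the /disk1 and /disk2 paths are assumptions) places one copy of each backup piece on each disk:

RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/disk1/%U', '/disk2/%U';
RMAN> BACKUP DATABASE;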

###
QUESTION 4
You create a table with the PERIOD FOR clause to enable the use of the Temporal Validity feature of Oracle Database 12c.
Examine the table definition:
create table employees
(empno number, salary number,
deptid number, name varchar2(100),
period for employee_time);
Which three statements are true concerning the use of the Valid Time Temporal feature for the EMPLOYEES table?

A. The valid time columns employee_time_start and employee_time_end are automatically created.
B. The same statement may filter on both transaction time and valid temporal time by using the AS OF TIMESTAMP and PERIOD FOR clauses.
C. The valid time columns are not populated by the Oracle Server automatically.
D. The valid time columns are visible by default when the table is described.
E. Setting the session valid time using
DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME sets the visibility for data manipulation language (DML), data definition language (DDL), and queries performed by the session.

###

A ok
B ok
C ok
D wrong (*)
E wrong (DDL is not affected)
(*) Options for visibility:
ALL – sets the visibility of temporal data to the full table (the default temporal table visibility)
CURRENT – sets the visibility of temporal data to currently valid data within the valid time period, at the session level
ASOF – sets the visibility of temporal data to data valid as of the given time, as defined by the timestamp
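A short sketch of the behavior described above, using the EMPLOYEES table from the question (the inserted values are illustrative):

SQL> -- The valid time columns are created automatically but must be populated explicitly
SQL> INSERT INTO employees (empno, salary, deptid, name, employee_time_start, employee_time_end)
     VALUES (100, 5000, 10, 'KING', SYSTIMESTAMP, NULL);
SQL> -- Query only the rows valid at a given time
SQL> SELECT empno, name FROM employees
     AS OF PERIOD FOR employee_time SYSTIMESTAMP;
SQL> -- Set session-level valid time visibility (affects DML and queries, not DDL)
SQL> EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT')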

####

QUESTION 5
Which two statements are true when row-archival management is enabled?
A. Visibility of the ORA_ARCHIVE_STATE column is controlled by the row archival visibility session parameter.
B. The ORA_ARCHIVE_STATE column is updated manually or by a program that can reference activity tracking columns, to indicate that a row is no longer considered active.
C. The row archival visibility session parameter defaults to all rows.
D. The ORA_ARCHIVE_STATE column is visible if it is referenced in the select list of a query.
E. The ORA_ARCHIVE_STATE column is updated automatically by the database based on activity tracking columns, to indicate that a row is no longer considered active.

###

A is wrong   -> ORA_ARCHIVE_STATE is a hidden column; it is only displayed when referenced in a query
B is correct -> UPDATE emp_arch_copy SET ora_archive_state = DBMS_ILM.ARCHIVESTATENAME(1)
C is wrong   -> the default is ACTIVE, not all rows
D is correct -> ORA_ARCHIVE_STATE is a hidden column; it appears only when listed explicitly in a query (not in DESCRIBE)
E is wrong   -> the column is updated manually (or by a program), not automatically by the database

To manage in-database archiving for a table, you must enable ROW ARCHIVAL for the table, manipulate the ORA_ARCHIVE_STATE hidden column of the table, and specify either ACTIVE or ALL for the ROW ARCHIVAL VISIBILITY session parameter.
The hidden column is only displayed if specified in a query. First, describe the table structure of HR.emp_arch. Notice that the ora_archive_state column is not listed.

Use the DBMS_ILM.ARCHIVESTATENAME function to update the ora_archive_state value for employee_id 102 and 103.
update emp_arch_copy set ora_archive_state=dbms_ilm.archivestatename(1)
where employee_id in (102, 103);

The session parameter ROW ARCHIVAL VISIBILITY has a default value of ACTIVE, which means that users can only see the active rows of a table.
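Putting the pieces together, a minimal sketch (emp_arch_copy as in the notes above):

SQL> ALTER TABLE emp_arch_copy ROW ARCHIVAL;
SQL> -- Mark two rows as archived (manual update; 1 = archived, 0 = active)
SQL> UPDATE emp_arch_copy SET ora_archive_state = DBMS_ILM.ARCHIVESTATENAME(1)
     WHERE employee_id IN (102, 103);
SQL> -- Only active rows are visible by default; make all rows visible:
SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
SQL> SELECT employee_id, ora_archive_state FROM emp_arch_copy;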

###

QUESTION 6 / 156
Which two resources might be prioritized between competing pluggable databases (PDBs) when creating a multitenant container database (CDB) plan using Oracle
Database Resource Manager?
A. maximum undo per consumer group
B. maximum idle time for a session in a PDB
C. parallel server limit
D. CPU
E. maximum number of sessions for a PDB

###

Correct Answer: CD
The directives control allocation of the following resources to the PDBs:
CPU
Parallel execution servers
In a CDB with multiple PDBs, some PDBs typically are more important than others.
The Resource Manager enables you to prioritize and limit the resource usage of specific PDBs.

With the Resource Manager, you can:
Specify that different PDBs should receive different shares of the system resources
Limit the CPU usage of a particular PDB
Limit the number of parallel execution servers that a particular PDB can use
Limit the resource usage of different sessions connected to a single PDB
Monitor the resource usage of PDBs
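A sketch of a CDB plan that prioritizes CPU shares and the parallel server limit between PDBs (the plan and PDB names are hypothetical):

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'newcdb_plan');
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'newcdb_plan',
         pluggable_database    => 'pdb1',
         shares                => 3,    -- CPU share relative to other PDBs
         utilization_limit     => 70,   -- max % of CPU
         parallel_server_limit => 50);  -- max % of parallel execution servers
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /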

###

QUESTION 7 / 157
Which three types of failures are detected by the Data Recovery Advisor (DRA)?
A. loss of a non-critical data file
B. loss of a control file
C. physical data block corruption
D. logical data block corruption
E. loss of an archived redo log file

###

A ok
B ok
C ok
D is wrong (the DRA only handles some logical data block corruptions)
E is wrong (the documentation does not mention archived redo logs; restoring those is a plain RMAN matter)

Data Recovery Advisor can diagnose failures such as the following:
Components such as datafiles and control files that are not accessible because they do not exist, do not have the correct access permissions, have been taken offline, and so on
Physical corruptions such as block checksum failures and invalid block header field values
Inconsistencies such as a data file that is older than other database files
I/O failures such as hardware errors, operating system driver failures, and exceeding operating system resource limits (for example, the number of open files)
The Data Recovery Advisor may detect or handle some logical corruptions. In general, corruptions of this type require help from Oracle Support Services.
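In RMAN, the Data Recovery Advisor is driven with the following commands (a sketch of the usual sequence):

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;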

###

QUESTION 8 / 158
You want to capture column group usage and gather extended statistics for better cardinality estimates for the customers table in the SH schema.

1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.
2. Execute the dbms_stats.seed_col_usage (null,'SH',500) procedure.
3. Execute the required queries on the customers table.
4. Issue the select dbms_stats.report_col_usage('SH', 'customers') from dual statement.
Identify the correct sequence of steps.
A. 3, 2, 1, 4
B. 2, 3, 4, 1
C. 4, 1, 3, 2
D. 3, 2, 4, 1

###

Correct Answer: B (2, 3, 4, 1)
The correct sequence is:

2 Oracle must observe a representative workload, in order to determine the appropriate column groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE
3 execute query or run explain plan
4 DBMS_STATS.REPORT_COL_USAGE
1 DBMS_STATS.CREATE_EXTENDED_STATS

Step 1 (2). Seed column usage
Oracle must observe a representative workload, in order to determine the appropriate column groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.
Step 2: (3) You don't need to execute all of the queries in your work during this window. You can simply run explain plan for some of your longer running queries to ensure column group information is recorded for these queries.
Step 3. (1) Create the column groups
At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window. You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table.This function requires just two arguments, the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered on the table.
Note:
*    DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object.
*    The Oracle SQL optimizer has traditionally been ignorant of the implied relationships between data columns within the same table. While the optimizer analyzes the distribution of values within a column, it does not collect value-based relationships between columns.
*    Creating extended statistics
Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats:
1 - The first step is to create column histograms for the related columns.
2 - Next, run dbms_stats.create_extended_stats to relate the columns together. Unlike a traditional procedure that is invoked via an execute ("exec") statement, Oracle extended statistics are created via a select statement.
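The answer sequence (2, 3, 4, 1) can be sketched as follows, optionally finishing with a statistics-gathering step:

SQL> -- (2) Start monitoring column usage for 500 seconds
SQL> EXEC DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500)
SQL> -- (3) Run the representative queries (or EXPLAIN PLAN for them)
SQL> -- (4) Report the captured column usage
SQL> SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;
SQL> -- (1) Create the extended statistics for the detected column groups
SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;
SQL> -- Column group statistics are maintained from now on
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS')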

###

QUESTION 9 / 159
Examine the initialization parameter that is set in the PFILE:
DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata/'
You execute the following command to create the CDB1 container database (CDB):
SQL> CREATE DATABASE CDB1
DEFAULT TABLESPACE users
DEFAULT TEMPORARY TABLESPACE temp
UNDO TABLESPACE undotbs1
ENABLE PLUGGABLE DATABASE
SEED
SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
SYSAUX DATAFILES SIZE 100M;
Which three statements are true?

A. It creates a multitenant container database with a root and a seed pluggable database (PDB) that are opened in read-write and read-only modes, respectively.
B. The files created for both the root and seed databases use Oracle Managed Files (OMF).
C. It creates a multitenant container database with the root and seed databases opened and one PDB mounted.
D. It sets the users tablespace as the default for both the root and seed databases.
E. undotbs1 is used as the undo tablespace for both the root and seed databases.
F. It creates a multitenant container database with the root database opened and the seed database mounted.

###

A is correct
B is correct
C is wrong for sure
D is wrong for sure
E is correct
F is wrong for sure

C: no PDB is created, only the seed.
D is incorrect because the seed contains only the SYSTEM and SYSAUX tablespaces.
F: the seed is always open in read-only mode.

###

QUESTION 10 / 160
Examine the steps to configure Oracle Secure Backup (OSB) for use with RMAN:
1.Create media families for data files and archived redo log files.
2.Configure database backup storage selectors or RMAN media management parameters.
3.Create an OSB user preauthorized for RMAN operations.
4.Configure RMAN Access to the OSB SBT.
5.Disable Non-Uniform Memory Access (NUMA) awareness by setting the ob_ignore_numa parameter to 0.
Identify the steps in the correct order.
A. 1, 4, 3, 2, 5
B. 1, 3, 4, 5, 2
C. 4, 3, 1, 2, 5
D. 4, 3, 5, 1, 2

###

C 4 3 1 2 5
Oracle Secure Backup enables reliable data protection through file-system backup to tape.
Configuring Oracle Secure Backup for Use with RMAN
To configure Oracle Secure Backup for use with RMAN, perform the following steps in Oracle Secure Backup:
1.    Configure RMAN access to the Oracle Secure Backup SBT.
See Also:
"Configuring RMAN Access to the Oracle Secure Backup SBT Library"
2.    Create an Oracle Secure Backup user preauthorized for RMAN operations.
Note:
This is a required step. An RMAN backup operation fails without it.
See Also:
"Creating a Preauthorized Oracle Secure Backup User"
3.    It is recommended that you create media families for data files and archived redo logs. If you do not create your own media families, then by default RMAN uses the RMAN-DEFAULT media family.
See Also:
"Creating Media Families for RMAN Backups"
4.    Optionally, configure database backup storage selectors or RMAN media management parameters. These settings give you more fine-grained control over storage selection for backups.
See Also:
•    "Creating a Database Backup Storage Selector in Enterprise Manager"
•    "Setting Media Management Parameters in RMAN"
5.    Optionally, disable NUMA-awareness by setting the OB_IGNORE_NUMA to 0.
The default value of this parameter is 1, thus making Oracle Secure Backup NUMA-aware. This ensures that, for a database backup or restore operation, the Oracle shadow process and the Oracle Secure Backup data service are located in the same NUMA region or node.

###

QUESTION 11 / 5
Examine the RMAN command:
RMAN> SET ENCRYPTION ON IDENTIFIED BY <password> FOR ALL TABLESPACES;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
Which type of encryption is used for the backup performed by using this command?
A. password-mode encryption
B. dual-mode encryption
C. transparent encryption
D. default encryption

###

B is correct (dual-mode: a password is given without the ONLY keyword).

A would require: SET ENCRYPTION ON IDENTIFIED BY password ONLY
C is the default mode: SET ENCRYPTION ON
D is a distractor, similar to A.


Transparent Encryption of Backups
This is the default mode and uses the Oracle wallet.
A wallet is a password-protected container used to store authentication and signing credentials,
including private keys, certificates, and trusted certificates needed by SSL.
SET ENCRYPTION ON;

Password Encryption of Backups
This mode uses only password protection. You must provide a password when creating and restoring encrypted backups.
SET ENCRYPTION ON IDENTIFIED BY password ONLY

Dual Mode Encryption of Backups
This mode requires either the wallet or a password.
SET ENCRYPTION ON IDENTIFIED BY password

###

QUESTION 12 / 6
The following parameters are set for your Oracle 12c database instance:
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE
OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE
You want to manage the SQL plan evolution task manually. Examine the following steps:

1.Set the evolve task parameters.
2.Create the evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function.
3.Implement the recommendations in the task by using the
DBMS_SPM.IMPLEMENT_EVOLVE_TASK function.
4.Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function.
5.Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function.
Identify the correct sequence of steps.
A. 2, 4, 5
B. 2, 1, 4, 3, 5
C. 1, 2, 3, 4, 5
D. 1, 2, 4, 5

###

Correct Answer: B
1.    Create an evolve task (2)
2.    Optionally, set the evolve task parameters (1)
3.    Execute the evolve task (4)
4.    Implement the recommendations in the task (3)
5.    Report on the task outcome (5)

Note: the two parameters are independent:
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=false
OPTIMIZER_USE_SQL_PLAN_BASELINES=true
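The same sequence as a PL/SQL sketch (the SQL handle SQL_abc123 is a hypothetical value; in practice you take it from DBA_SQL_PLAN_BASELINES):

SQL> DECLARE
       tname VARCHAR2(128);
       ename VARCHAR2(128);
       cnt   PLS_INTEGER;
     BEGIN
       -- (2) Create the evolve task
       tname := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_abc123');
       -- (1) Optionally set task parameters
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(tname, 'TIME_LIMIT', 300);
       -- (4) Execute the task
       ename := DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => tname);
       -- (3) Implement the accepted recommendations
       cnt := DBMS_SPM.IMPLEMENT_EVOLVE_TASK(task_name => tname);
       -- (5) Report the outcome
       DBMS_OUTPUT.PUT_LINE(DBMS_SPM.REPORT_EVOLVE_TASK(task_name => tname));
     END;
     /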

###

QUESTION 13 / 7
You created a database with DBCA by using one of the Oracle supplied templates.
Which is the default permanent tablespace for all users except DBSNMP and OUTLN?
A. USERS
B. SYSTEM
C. SYSAUX
D. EXAMPLE




###


Correct Answer: A
Explanation:
This tablespace is used to store permanent user objects and data. Like the TEMP tablespace, every database should have a tablespace for permanent user data
that is assigned to users. Otherwise, user objects are created in the SYSTEM tablespace, which is not good practice. In the preconfigured database, USERS is
assigned as the default tablespace, and space for all objects created by non-system users comes from this tablespace. For system users, the default permanent
tablespace remains SYSTEM.

###

QUESTION 14 / 8
Your database is running in archivelog mode. Examine the parameters for your database instance:
LOG_ARCHIVE_DEST_1='LOCATION=/disk1/arch MANDATORY'
LOG_ARCHIVE_DEST_2='LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_3='LOCATION=/disk3/arch'
LOG_ARCHIVE_DEST_4='LOCATION=/disk4/arch'
LOG_ARCHIVE_MIN_SUCCEED_DEST=2
While the database is open, you notice that the destination set by the log_archive_dest_1 parameter is not available.
All redo log groups have been used.
What happens at the next log switch?
A. The database instance hangs and the redo log files are not overwritten.
B. The archived redo log files are written to the fast recovery area until the mandatory destination is made available.
C. The database instance is shutdown immediately.
D. The destination set by the log_archive_dest parameter is ignored and the archived redo
   log files are created in the next two available locations to guarantee archive log success.

###

Correct Answer: A

OK RIGHT ANSWER IS A!
The failure of any mandatory destination, including a mandatory standby destination,
makes the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter irrelevant.

MANDATORY and OPTIONAL
You can specify a policy for reusing online redo log files using the OPTIONAL or MANDATORY attributes.
If a destination is optional, archiving to that destination may fail,
yet the online redo log file is available for reuse and may be overwritten eventually.
If the archival operation of a mandatory destination fails, online redo log files cannot be overwritten.
If neither the MANDATORY nor the OPTIONAL attribute is specified, the default is OPTIONAL.
At least one destination must succeed even if all destinations are designated to be optional.
The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter (where n is an integer from 1 to 10)
specifies the number of destinations that must archive successfully before the log writer process
can overwrite the online redo log files. All mandatory destinations and non-standby optional
destinations contribute to satisfying the LOG_ARCHIVE_MIN_SUCCEED_DEST=n count.
For example, you can set the parameter as follows:

# Database must archive to at least two locations before
# overwriting the online redo log files.
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
When determining how to set your parameters, note that:
This attribute does not affect the data protection mode for the destination.
You must have at least one local destination, which you can declare OPTIONAL or MANDATORY.
At least one local destination is operationally treated as mandatory, because the minimum value for the LOG_ARCHIVE_MIN_SUCCEED_DEST
parameter is 1.
The failure of any mandatory destination, including a mandatory standby destination,
makes the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter irrelevant.

The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value cannot be greater than the number of destinations,
nor greater than the number of mandatory destinations plus the number of optional local destinations.
If you defer a mandatory destination, and the online redo log file is overwritten without transferring the redo data to the standby site,
then you must transfer the redo log file to the standby site manually.
The BINDING column of the V$ARCHIVE_DEST fixed view specifies how failure affects the archival operation.

MANDATORY
Specifies that the transmission of redo data to the destination must succeed before the local online redo log
file can be made available for reuse.
OPTIONAL
Specifies that successful transmission of redo data to the destination is not required
before the online redo log file can be made available for reuse.
If the value set for the LOG_ARCHIVE_MIN_SUCCEED_DEST
parameter (that defines the minimum number of destinations that
must receive redo data successfully before the log writer process on the primary
database can reuse the online redo log file) is met, the online redo log file is marked for reuse.

The following example shows the MANDATORY attribute:
LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_3='SERVICE=denver MANDATORY'
LOG_ARCHIVE_DEST_STATE_3=ENABLE

Keep switching log files until all redo log groups are full: the first log switch succeeds, the second also succeeds, but the next one hangs the database, because the mandatory destination is unavailable and no redo log group can be reused.
The Oracle documentation is not wrong: the question clearly states that all redo log groups have been used.
Run the test yourself and you will see that A is the correct answer.

If the value set for LOG_ARCHIVE_MIN_SUCCEED_DEST is satisfied,
the online redo log file is marked for reuse.

MANDATORY designates the destination (possibly the only one, since by default the minimum succeed count is at least 1)
that must succeed before the overwrite is allowed: the transmission of redo data to that destination
must succeed before the local online redo log file can be made available for reuse.

The failure of any destination designated as MANDATORY
makes LOG_ARCHIVE_MIN_SUCCEED_DEST irrelevant.

###

QUESTION 15 / 9
Identify three scenarios in which RMAN will use backup sets to perform active database duplication.
A. when the duplicate ... from active database command contains the section size clause
B. when you perform active database duplication on a database with flashback disabled
C. when you specify set encryption before the duplicate ... from active database command
D. when the number of auxiliary channels allocated is equal to or greater than the number of target channels
E. when you perform active database duplication on a database that has read-only tablespaces

###

A,C,D
Starting with Oracle Database 12c Release 1 (12.1), RMAN can use backup sets to transfer the source database files that need to be duplicated. The backup sets are transferred over the network to the auxiliary database. Backup sets can be encrypted for additional security. Specify the encryption algorithm by using the SET ENCRYPTION ALGORITHM command before the DUPLICATE command.
RMAN uses backup sets to perform active database duplication when the connection to the target database is established using a net service name and any one of the following conditions is satisfied:
The DUPLICATE … FROM ACTIVE DATABASE command contains either the USING BACKUPSET, USING COMPRESSED BACKUPSET, or SECTION SIZE clause.
The number of auxiliary channels allocated is equal to or greater than the number of target channels allocated.
Reference:http://docs.oracle.com/database/121/BRADV/rcmdupdb.htm#BRADV298

###

QUESTION 16 / 10
Which two statements are true about recovering logically corrupted tables or table partitions from an RMAN backup?
A. Tables or table partitions can be recovered by using an auxiliary instance only.
B. Tables or table partitions with a foreign key cannot be recovered.
C. Tables or table partitions can be recovered only when the database is in mount state.
D. Tables or table partitions from the system and sysaux tablespaces cannot be recovered.
E. Tables with not null constraints cannot be recovered.

###

A,D

A is absolutely correct, as the question clearly states that the recovery is done from an RMAN backup (RMAN always uses an auxiliary instance for table recovery).
D is correct: tables and table partitions from the SYSTEM and SYSAUX tablespaces cannot be recovered.

not B: evidently wrong; a foreign key does not prevent recovery
not C: the database must be open (read-write), not mounted
not E: tables with NOT NULL constraints can be recovered; only tables with named NOT NULL constraints cannot be recovered with the REMAP option
###

QUESTION 17 / 11
Your database is running in archivelog mode and a nightly backup of the database,
along with an autobackup of the control file, is taken by using RMAN. Because
of a media failure, the SPFILE and the control files are lost.
Examine the steps to restore the SPFILE and the control file to mount the database:


1.Set DBID of the target database in RMAN.
2.Start the database instance by using the startup force nomount command in RMAN.
3.Restore the control files from the backup.
4.Mount the database.
5.Restore the SPFILE from the autobackup.
6.Create a PFILE from the recovered SPFILE.
7.Restart the instance in nomount state.



Identify the required steps in the correct order.
A. 1, 2, 5, 3, 6, 4
B. 1, 2, 3, 5, 6, 4
C. 2, 1, 5, 7, 3, 4
D. 2, 1, 5, 6, 7, 4, 3

###

Answer: C
Not A, because the control file must be restored after restarting the instance in NOMOUNT with the restored SPFILE.
The question of whether or not to set the DBID first is a red herring; the NOMOUNT restart mentioned above is what makes the difference.
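The answer sequence (2, 1, 5, 7, 3, 4), spelled out in RMAN (the DBID value is hypothetical):

RMAN> STARTUP FORCE NOMOUNT;                -- (2) dummy instance, default parameters
RMAN> SET DBID 3847560521;                  -- (1) hypothetical DBID of the target
RMAN> RESTORE SPFILE FROM AUTOBACKUP;       -- (5)
RMAN> STARTUP FORCE NOMOUNT;                -- (7) restart with the restored SPFILE
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;  -- (3)
RMAN> ALTER DATABASE MOUNT;                 -- (4)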

###

QUESTION 18 / 12
After implementing full Oracle Data Redaction, you change the default value for the number data type as follows:
SQL> SELECT number_value FROM redaction_values_for_type_full;

NUMBER_VALUE
------------
           0

SQL> EXEC DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES(-1)
PL/SQL procedure successfully completed.

SQL> SELECT number_value FROM redaction_values_for_type_full;

NUMBER_VALUE
------------
          -1
After changing the value, you notice that FULL redaction continues to redact numeric data with a zero.
What must you do to activate the new default value for numeric full redaction?
A. Re-enable redaction policies that use FULL data redaction.
B. Re-create redaction policies that use FULL data redaction.
C. Re-connect the sessions that access objects with redaction policies defined on them.
D. Flush the shared pool.
E. Restart the database instance.

###

Correct Answer: E
After you run this procedure, restart the database.

Explanation: About Altering the Default Full Data Redaction Value You can alter the default displayed values for full Data Redaction polices. By default, 0 is the
redacted value when Oracle Database performs full redaction (DBMS_REDACT.FULL) on a column of the NUMBER data type. If you want to change it to another
value (for example, 7), then you can run the DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES procedure to modify this value. The modification applies to
all of the Data Redaction policies in the current database instance. After you modify a value, you must restart the database for it to take effect.
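For context, a sketch of a FULL redaction policy whose numeric output is governed by that default value (the schema, table, column, and policy names are hypothetical):

SQL> BEGIN
       DBMS_REDACT.ADD_POLICY(
         object_schema => 'HR',
         object_name   => 'EMPLOYEES',
         policy_name   => 'redact_salary',
         column_name   => 'SALARY',
         function_type => DBMS_REDACT.FULL,
         expression    => '1=1');  -- always redact
     END;
     /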

###

QUESTION 19 / 13
You want to create a guaranteed restore point for your database by executing the command:
SQL> CREATE RESTORE POINT dbrsp1 GUARANTEE FLASHBACK DATABASE;
Identify two prerequisites for the successful execution of this command.
A. The database must be running in archivelog mode.
B. Flashback Database must be enabled
C. Fast Recovery Area must be enabled.
D. The recyclebin must be enabled for the database.
E. Undo retention guarantee must be enabled.
F. A database backup must be taken.

###

Correct Answer: AC

You must have created a fast recovery area before creating a guaranteed restore point.
You need not enable flashback database before you create the restore point. However, if flashback database is not enabled, then the first guaranteed restore point you create on this database must be created when the database is mounted.
The database must be in ARCHIVELOG mode if you are creating a guaranteed restore point.
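A sketch of satisfying both prerequisites before creating the restore point (the FRA size and path are assumptions):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> CREATE RESTORE POINT dbrsp1 GUARANTEE FLASHBACK DATABASE;
SQL> ALTER DATABASE OPEN;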


###

QUESTION 20 / 14
Your database has a table customers that contains the columns cust_name, amt_due, and old_status.
Examine the commands executed and their output:
SQL> UPDATE customers SET amt_due = amt_due + amt_due*1.1 WHERE cust_name = 'JAMES';
1 row updated.
SQL> ALTER TABLE customers DROP COLUMN old_status;
Table altered.
SQL> UPDATE customers SET amt_due = amt_due + amt_due*1.5 WHERE cust_name = 'JAMES';
1 row updated.
SQL> COMMIT;
SQL> SELECT versions_xid AS xid, versions_startscn AS start_scn,
versions_endscn AS end_scn, versions_operation AS operation, amt_due
FROM customers VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE WHERE cust_name = 'JAMES';

XID               START_SCN  END_SCN  OPERATION  AMT_DUE
----------------  ---------  -------  ---------  -------
07002f00c1030000    1706337                   U     3300

Why is it that only one update is listed by the Flashback Version Query?

A. Supplemental logging is not enabled for the database.
B. The undo data that existed for versions of rows before the change to the table structure is invalidated.
C. The db_flashbACK_retention_target parameter is set to a lower value and the undo data pertaining to the first transaction is flushed out.
D. Undo retention guarantee is not enabled.
E. Flashback Data Archive is full after the first update statement.

###

B
The undo data that existed for the row versions before the change to the table structure is invalidated.
Limitations and Restrictions on Flashback Query
• Flashback Query is Not available after restarting the database.
• You cannot specify a subquery in the expression of the AS OF clause.
• You cannot use the VERSIONS clause in flashback queries to temporary , external tables, fixed tables, or tables that are part of a cluster.
• You cannot use the VERSIONS clause in flashback queries to views. However, you can use the VERSIONS syntax in the defining query of a view.
• You cannot specify this clause if you have specified query_name in the query_table_expression.
• Flashback Query does not undo anything. It is only a query mechanism. You can take the output from a Flashback Query and perform an undo yourself in many circumstances.
• Flashback Query does not tell you what changed. LogMiner does that.
• Flashback Query can undo changes and can be very efficient if you know the rows that need to be moved back in time. You can use it to move a full table back in time, but this is very expensive if the table is large since it involves a full table copy.
• Flashback Query does not work through DDL operations that modify columns, or drop or truncate tables.
• LogMiner is very good for getting change history, but it gives you changes in terms of deltas (insert, update, delete), not in terms of the before and after image of a row. These can be difficult to deal with in some applications.

###

QUESTION 21 / 15
Which two methods can be used to add an Oracle 11g database to a multitenant container database (CDB) as a pluggable database (PDB)?
A. Use the DBMS_PDB package to plug the Oracle 11g database into the existing CDB as a PDB.
B. Use the create database ... enable pluggable database statement to create a PDB by copying data files from PDB$SEED and use Data Pump to load data from
the Oracle 11g database into the newly created PDB.
C. Pre-create a PDB in the CDB and use Data Pump to load data from the complete database export of the Oracle 11g database into the newly created PDB.
D. Pre-create a PDB in the CDB and use the network_link and parallel parameters with Data Pump import to import data from the Oracle 11g database to the newly created PDB.
E. Upgrade the Oracle 11g database to a 12c non-CDB and use the dbms_pdb.describe procedure to plug the database as a new PDB into the CDB.

###

Correct Answer: DE
A wrong because of the version (an 11g database cannot be plugged in directly)
B wrong - CREATE DATABASE ... ENABLE PLUGGABLE DATABASE is used to create a CDB, not a PDB
C wrong because it relies on a complete database export/import
D correct
E is certain
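Option D might be sketched as follows; the PDB name, admin user, and database link are illustrative, not from the question:

```sql
-- Pre-create and open the target PDB in the 12c CDB.
CREATE PLUGGABLE DATABASE pdb11g ADMIN USER pdbadmin IDENTIFIED BY secret;
ALTER PLUGGABLE DATABASE pdb11g OPEN;

-- Then import straight over a database link to the 11g source,
-- with no intermediate dump file (run from the OS shell):
-- $ impdp system@pdb11g full=y network_link=src11g parallel=4
```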

###

QUESTION 22 / 16
In which three scenarios is media recovery required?

A. when a tablespace is accidentally dropped from a database
B. when archived redo log files are lost
C. when data files are lost
D. when one of the online redo log members is corrupted
E. when all control files are lost

###

A ok
B no, you restore the archived logs instead
C ok (as with A)
D no (a single corrupted online log member does not require media recovery)
E ok (loss of all control files requires media recovery)


ACE is the correct answer
Media recovery implies that a hardware problem occurred

###

QUESTION 23
In the SPFILE, UNDO_TABLESPACE is set to undotbs.
You rename the undotbs undo tablespace:
ALTER TABLESPACE undotbs RENAME TO undotbs_old;
Which statement is true?
A. The tablespace will be renamed but the data file headers will not be updated.
B. The statement will fail because you cannot rename an undo tablespace.
C. The tablespace will be renamed and all the changes will be logged in the alert log.
D. The tablespace will be renamed and a message written to the alert log indicating that you should change the corresponding initialization parameter.
E. You must set the undo_tablespace parameter to some other tablespace name before renaming undotbs.

###

C is correct (this implies an SPFILE is in use) - the tablespace is renamed and all the changes are logged in the alert log
D would have been correct if a PFILE were in use

If the tablespace is an undo tablespace and if the following conditions are met, then the tablespace name is changed to the new tablespace name in the server parameter file (SPFILE).
The server parameter file was used to start up the database.
The tablespace name is specified as the UNDO_TABLESPACE for any instance.
If a traditional initialization parameter file (PFILE) is being used then a message is written to the alert log stating that the initialization parameter file must be manually changed.
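A minimal sketch of the scenario above, assuming the instance was started with an SPFILE:

```sql
-- Rename the active undo tablespace; because an SPFILE is in use and
-- undotbs is the UNDO_TABLESPACE, Oracle updates the SPFILE entry itself
-- and logs the changes in the alert log.
ALTER TABLESPACE undotbs RENAME TO undotbs_old;

-- The parameter should now reflect the new name:
SHOW PARAMETER undo_tablespace
```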

###

QUESTION 24 / 18
Which two statements are true about dropping a pluggable database (PDB)?
A. A PDB must be in mount state or it must be unplugged.
B. The data files associated with a PDB are automatically removed from disk.
C. A dropped and unplugged PDB can be plugged back into the same multitenant container database (CDB) or other CDBs.
D. A PDB must be in closed state.
E. The backups associated with a PDB are removed.
F. A PDB must have been opened at least once after creation.

###

AC

A is correct
B would be true only if INCLUDING DATAFILES is specified
C is correct
D and F refer to the unplug operation
E is wrong - the associated backups are not removed

The DROP PLUGGABLE DATABASE statement drops a PDB. You can drop a PDB when you want to move the PDB from one CDB to another or when you no longer need the PDB.
When you drop a PDB, the control file of the CDB is modified to eliminate all references to the dropped PDB. Archived redo log files and backups associated with the PDB are not removed, but you can use Oracle Recovery Manager (RMAN) to remove them.
When dropping a PDB, you can either keep or delete the PDB's data files by using one of the following clauses:
KEEP DATAFILES, the default, retains the data files.
The PDB's temp file is removed even when KEEP DATAFILES is specified because the temp file is no longer needed.
INCLUDING DATAFILES removes the data files from disk.
If a PDB was created with the SNAPSHOT COPY clause, then you must specify INCLUDING DATAFILES when you drop the PDB.
The following prerequisites must be met:
The PDB must be in mounted mode, or it must be unplugged.
See "Modifying the Open Mode of PDBs".
See "Unplugging a PDB from a CDB".
The current user must have SYSDBA or SYSOPER administrative privilege, and the privilege must be either commonly granted or locally granted in the PDB. The user must exercise the privilege using AS SYSDBA or AS SYSOPER at connect time.

SYS@cdb> select con_id,name,open_mode from v$pdbs;

    CON_ID NAME                                     OPEN_MODE
---------- ---------------------------------------- ----------
         2 PDB$SEED                                 READ ONLY
         3 PDB1                                     READ WRITE
         4 PDB2                                     READ WRITE
         5 PDB3                                     MOUNTED

SYS@cdb>DROP PLUGGABLE DATABASE PDB3 INCLUDING DATAFILES;

Pluggable database dropped.


Unplugging a PDB
The following prerequisites must be met:
•    The current user must have SYSDBA or SYSOPER administrative privilege, and the privilege must be either commonly granted or locally granted in the PDB. The user must exercise the privilege using AS SYSDBA or AS SYSOPER at connect time.
•    The PDB must have been opened at least once.
•    The PDB must be closed. In an Oracle Real Application Clusters (Oracle RAC) environment, the PDB must be closed on all instances


###

QUESTION 25 / 19
On your Oracle 12c database, you invoke SQL*Loader to load data into the employees table in the hr schema by issuing the command:
$ sqlldr hr/hr@pdb table=employees
Which two statements are true about the command?
A. It succeeds with default settings if the employees table exists in the hr schema.
B. It fails because no SQL*Loader data file location is specified.
C. It fails if the hr user does not have the create any directory privilege.
D. It fails because no SQL*Loader control file location is specified.
E. It succeeds and creates the employees table in the HR schema.

###

AC
Specifying only TABLE= invokes SQL*Loader express mode: no control file is needed and, by default, the data is read from employees.dat in the current directory, so the load succeeds with defaults if the table exists (A). Express mode normally loads through an external table and may need to create a directory object, which is why the CREATE ANY DIRECTORY privilege matters (C).

###

QUESTION 26 / 20
Which three RMAN persistent settings can be set for a database?
A. backup retention policy
B. default backup device type
C. default section size for backups
D. when synchronizing backups, which were not performed by using RMAN, with the RMAN repository
E. when listing backups that are required for recovery operations
(this question may be flawed: it is not clear what the third correct answer should be)

###

AB
A -CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
B -CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(OB_DEVICE=tape1)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/disk1/oracle/dbs/snapcf_ev.f'; # default

###

QUESTION 26 bis


Which three RMAN persistent settings can be set for a database?
A. default destinations for backups
B. multiple backup device types for a single backup
C. default section size for backups
D. default backup device type
E. backup retention policy

###

ADE

Why not B? To be studied.
B. multiple backup device types for a single backup

Perhaps because device types can be configured persistently, but there is no persistent configuration entry that lets a single backup use multiple device types.

What is certain is that
duplexing cannot be used for disk and tape at the same time.

###

QUESTION 27 / 21
Your production database is running in archivelog mode. You use RMAN with a recovery catalog to back up your database to media and the database is uniquely identified in the recovery catalog.
You want to create a test database from the production database and allow the production database to remain open during the duplicate process. You restore the
database backups to a new host with the same directory structure as the production database and want to use the recovery catalog for future backups after the
database is successfully restored to the new host.
How would you achieve this?
A. by using the RMAN switch command to set the new location for the data files
B. by using the RMAN duplicate command with nofilenamecheck to recover the database to the new host
C. by using the RMAN duplicate command with dbid and set newname for tablespace to recover the database to the new host
D. by creating a new database in the new host, and then using the RMAN recover command

###

Answer: B
The FROM ACTIVE DATABASE clause is not specified. By not specifying this clause, you instruct RMAN to perform backup-based duplication.
The DBID of the source database is specified because the source database name prod is not unique in the recovery catalog.
The NOFILENAMECHECK option is specified because it is necessary when the duplicate database files use the same names as the source database files (same directory structure).
DUPLICATE command automatically assigns the duplicate database a different DBID so that it can be registered in the same recovery catalog as the source database.
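A hedged sketch of the duplication described above; the connect strings, database names, and DBID are illustrative:

```sql
-- Backup-based duplication: connect to the recovery catalog and the
-- auxiliary instance on the new host; no TARGET connection is made,
-- so the production database stays open and untouched.
-- $ rman CATALOG rco@catdb AUXILIARY sys@test1

DUPLICATE DATABASE prod DBID 1234567890 TO test1
  NOFILENAMECHECK;   -- same directory structure as the source host
```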

###

QUESTION 28 / 22
Identify two scenarios in which the RMAN crosscheck command can be used.
A. when checking for backups that are not required as per the retention policy
B. when updating the RMAN repository if any of the archived redo log files have been deleted without using RMAN to do the deletes
C. when updating outdated information about backups that disappeared from disk or media or became corrupted and inaccessible
D. when synchronizing backups, which were not performed by using RMAN, with the RMAN repository
E. when listing backups that are required for recovery operations

###

A wrong
B correct
C correct
D wrong
E wrong
All sources agree.
(C) Crosschecks update outdated RMAN repository information about backups whose repository records do not match their physical status. For example, if a user removes archived logs from disk with an operating system command, the repository still indicates that the logs are on disk, when in fact they are not.
If the backup is on disk, then the CROSSCHECK command determines whether the header of the file is valid. If the backup is on tape, then the command simply checks that the backup exists. The possible status values for backups are AVAILABLE, UNAVAILABLE, and EXPIRED
The CROSSCHECK command does not delete operating system files, and it does not remove RMAN repository records of backups that are not available at the time of the crosscheck. It only updates the (B) repository records with the status of the backups. Use the DELETE command to remove records of expired backups from the RMAN repository.
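Typical use of the commands described above, as a sketch at the RMAN prompt:

```sql
-- Compare repository records with what is physically on disk/tape;
-- missing or corrupt pieces are marked EXPIRED, nothing is deleted.
CROSSCHECK BACKUP;
CROSSCHECK ARCHIVELOG ALL;

-- Stale records must then be removed explicitly:
DELETE EXPIRED BACKUP;
```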

###

QUESTION 29 / 23
A database is running in archivelog mode.
You want to back up a 10 TB data file belonging to the users tablespace.
The backup of the data file is too slow.
What type of backup do you recommend to improve the performance of the backup?
A. image copy backup by using RMAN
B. multisection image copy backup by using RMAN
C. multisection parallel backup by using RMAN
D. cold backup after taking the tablespace offline
E. cold backup after placing the tablespace in backup mode

###

C
multi-section enables parallel copying and is faster,
whereas an image copy copies everything from the data file, including the unused space.

Duplexing is not possible when using image copies.
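A sketch of option C; the section size and channel count are illustrative:

```sql
-- Multiple channels let RMAN back up the 10 TB data file in parallel,
-- one 500 GB section per channel, each section a separate backup piece.
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
BACKUP TABLESPACE users SECTION SIZE 500G;
```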

###

QUESTION 30 / 24
Automatic Undo Management is enabled for your database.
You want a user to retrieve metadata and historical data for a given transaction or for transactions in a given time interval.

Which three are prerequisites to fulfill this requirement?
A. Minimal supplemental logging must be enabled.
B. The database must be running in archivelog mode.
C. Flashback Data Archive must be created and the flashback archive administer system privilege must be granted to the user.
D. The flashback any table privilege must be granted to the user.
E. The select any transaction privilege must be granted to the user.
F. The recycle bin parameter must be set to on.

###

ABE
Not D because it is not mandatory (you can grant flashback on the individual table, for example)

16.2.3 Configuring Your Database for Flashback Transaction
To configure your database for the Flashback Transaction feature, you or your database administrator must:
With the database mounted but not open, enable ARCHIVELOG:
ALTER DATABASE ARCHIVELOG;
Open at least one archive log:
ALTER SYSTEM ARCHIVE LOG CURRENT;
If not done, enable minimal and primary key supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
If you want to track foreign key dependencies, enable foreign key supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
For Oracle Flashback Transaction Query
Grant the SELECT ANY TRANSACTION privilege.
To allow execution of undo SQL code retrieved by an Oracle Flashback Transaction Query, grant SELECT, UPDATE, DELETE, and INSERT privileges for specific tables.

https://docs.oracle.com/database/121/ADFNS/adfns_flashback.htm#ADFNS621


###

QUESTION 31 / 25
Examine these Data Pump commands to export and import objects from and to the same database.
The dba has not yet created users hr1 and oe1.
$ expdp system/manager
schemas = hr,oe
directory = EXP_DIR
dumpfile = export.dat
include = table

$ impdp system/manager
schemas = hr1,oe1
directory = EXP_DIR
dumpfile = export.dat
remap_schema=hr:hr1, oe:oe1
What will happen when running these commands?

A. expdp will fail because no path has been defined for the dumpfile.
B. expdp will succeed but impdp will fail because the users do not exist.
C. impdp will create two users called hr1 and oe1 and import all objects to the new schemas.
D. impdp will create two users called hr1 and oe1 and import only the tables owned by the hr and oe schemas to the hr1 and oe1 schemas, respectively.

###

Answer: B (tested by me; the expdp uses INCLUDE=TABLE, so only tables are exported)

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39002: invalid operation
ORA-39165: Schema OE1 was not found.
ORA-39165: Schema HR1 was not found.

###

QUESTION 32 / 26
Which two statements are true about a multitenant architecture?
A. Each pluggable database (PDB) has its own initialization parameter file.
B. A PDB can have a private undo tablespace.
C. Log switches occur only at the container database level.
D. A PDB can have a private temporary tablespace.
E. Each PDB has a private control file.

###

CD

###

QUESTION 33 / 27
Examine the command to create a pluggable database (PDB):
SQL> CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
FILE_NAME_CONVERT = ('/disk1/oracle/pdb1/', '/disk2/oracle/pdb2/') PATH_PREFIX = '/disk2/oracle/pdb2';
Which two statements are true?

A. The pluggable database pdb2 is created by cloning pdb1 and is in mount state.
B. Details about the metadata describing pdb2 are stored in an XML file in the'/disk2/oracle/pdb2/' directory.
C. The tablespace specifications of pdb2 are the same as pdb1.
D. All database objects belonging to common users in PDB1 are cloned in PDB2.
E. pdb2 is created with its own private undo and temp tablespaces.

###

AC

A correct
B NO - it would be true only if the XML metadata file had been created (that happens on unplug, not on cloning)
C correct - the files are converted, but the tablespace specifications remain the same
D NO - false, in the sense that the whole database is cloned, not the objects belonging to common users as such
E NO - clearly false: PDBs cannot have their own undo tablespace

###

QUESTION 34 / 28
Which three tasks can be automatically performed by the Automatic Data Optimization feature of Information Lifecycle Management (ILM)?
A. tracking the most recent read time for a table segment in a user tablespace
B. tracking the most recent write time for a table segment in a user tablespace
C. tracking insert time by row for table rows
D. tracking the most recent write time for each block in a table segment (or: tracking the most recent write time for a table block)
E. tracking the most recent read time for a table segment in the sysaux tablespace
F. tracking the most recent write time for a table segment in the sysaux tablespace

###

ABD

Incorrect:
Not E, Not F When Heat Map is enabled, all accesses are tracked by the in-memory activity
tracking module. Objects in the SYSTEM and SYSAUX tablespaces are not tracked.
* To implement your ILM strategy, you can use Heat Map in Oracle Database to track data access
and modification.
Heat Map provides data access tracking at the segment-level and data modification tracking at the
segment and row level.
* To implement your ILM strategy, you can use Heat Map in Oracle Database to track data access
and modification. You can also use Automatic Data Optimization (ADO) to automate the
compression and movement of data between different tiers of storage within the database.
Automatic Data Optimization with Oracle Database 12c

Heat Map automatically tracks modification and query timestamps at the row and segment levels.
Automatic Data Optimization (ADO) automatically moves and compresses data according to user-defined policies based on the information collected by Heat Map.
Enables implementation of automated storage and compression tiering.
Supports OLTP and Data Warehousing compression tiering.

can automate compression and storage tiering policies based on the actual usage of the data.
Segment level Heat Map tracks the time of last modification and access of tables and partitions.
Row level Heat Map tracks modification times for individual rows (aggregated to the block level).

###

QUESTION 35 / 29
Which two are direct benefits of the multiprocess, multithreaded architecture of Oracle Database 12c when it is enabled?
A. Reduced logical I/O
B. Reduced virtual memory utilization
C. Improved Serial Execution performance
D. Reduced physical I/O
E. Reduced CPU utilization

###

BE

Benefits can be seen at CPU (E), Memory Usage (B), System Reliability and Parallel Operations level.

###

QUESTION 36 / 30
Examine the steps/operations performed during the RMAN backup operation by using Oracle Secure Backup (OSB):
1. Start the RMAN client by using the RMAN target / command.
2. Start the RMAN client by using the OSB user.
3. RMAN creates the backup pieces.
4. Run the RMAN backup command with the sbt channels.
5. OSB creates a backup job and assigns a unique identifier.
6. OSB creates a backup job request through the OSB sbt library.
7. OSB stores metadata about RMAN backup pieces in the OSB catalog.
8. OSB starts the backup operation.
9. OSB updates the RMAN catalog.

Identify the required steps/operations performed in correct order.
A. 1, 4, 6, 5, 8, 3, 9
B. 1, 6, 4, 5, 8, 3, 9
C. 2, 4, 6, 5, 8, 3, 7
D. 2, 4, 5, 8, 3, 7, 9

###

C

###

QUESTION 37 / 31
You want to back up a database such that only formatted blocks are backed up.
Which statement is true about this backup operation?

A. The backup must be performed in mount state.
B. The tablespace must be taken offline.
C. All files must be backed up as backup sets.
D. The database must be backed up as an image copy.

###

C -> All files must be backed up as backup sets.

###

QUESTION 38 / 32
You wish to enable an audit policy for all database users, except sys, system, and scott.
You issue the following statements:

SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYS;
SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYSTEM;
SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SCOTT;
For which database users is the audit policy now active?
A. all users except sys
B. all users except scott
C. all users except sys and scott
D. all users except sys, system, and scott

###

B - Oracle Database uses the last exception user list


The unified audit policy only can have either the BY clause or the EXCEPT clause, but not both for the same policy.
If you run multiple AUDIT statements on the same unified audit policy but specify different BY users, then Oracle Database audits all of these users.

--->
If you run multiple AUDIT statements on the same unified audit policy but specify different EXCEPT users, then Oracle Database uses the last exception user list, not any of the users from the preceding lists. This means the effect of the earlier AUDIT POLICY ... EXCEPT statements are overridden by the latest AUDIT POLICY ... EXCEPT statement.

You can only enable common unified audit policies for common users.
In a multitenant environment, you can enable a common audit policy only from the root and a local audit policy only from the PDB to which it applies.
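To verify which exception list survived, one can query the unified-audit dictionary view; a sketch, assuming the 12c view and the rule above:

```sql
-- Only the last EXCEPT list is kept, so this should show the policy
-- enabled for all users except SCOTT:
SELECT policy_name, enabled_opt, user_name
FROM   audit_unified_enabled_policies
WHERE  policy_name = 'ORA_DATABASE_PARAMETER';
```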

###

QUESTION 39 / 33
Your database instance is started using an SPFILE. You are connected to cdb$root, as a DBA.
You issue:
SQL> ALTER SYSTEM SET STATISTICS_LEVEL=ALL SCOPE=BOTH;
Which two statements are true about the statistics level parameter?
A. It is immediately set to all in the SPFILE and the CDB instance.
B. It is immediately set to all in only those pluggable databases (PDBs) where the value is set to typical.
C. It is immediately set to all only for cdb$root.
D. It is immediately set to all in all PDBs where the statistics_level parameter is not set.
E. It is set to all for all PDBs only in the SPFILE.

###

AD

B wrong: typical here means the PDB has its own (different) explicit setting, so it does not inherit from the CDB
C wrong because the value is also set in the PDBs whose parameter is still at typical (the default)
D correct: the value takes effect wherever statistics_level is still at the default (typical), i.e., not explicitly set in the PDB
E wrong

STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }
It is an ISPDB_MODIFIABLE parameter (so it can be set for each individual PDB).
Tested.
Note: if the PDBs' parameter values match the CDB's, a query on v$system_parameter reports no differences;
ISPDB_MODIFIABLE parameters become visible per PDB only when they differ from the CDB.

http://docs.oracle.com/database/121/ADMIN/cdb_admin.htm#ADMIN13650
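A quick way to check this, assuming the tested behavior noted above:

```sql
-- One row for the CDB-wide value; additional per-container rows appear
-- only for PDBs that have overridden the parameter.
SELECT con_id, name, value
FROM   v$system_parameter
WHERE  name = 'statistics_level';
```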

###
QUESTION 39bis

You are connected to a pluggable database (PDB) as a common user with DBA privileges.
The STATISTICS_LEVEL parameter is PDB_MODIFIABLE. You execute the following:
SQL> ALTER SYSTEM SET STATISTICS_LEVEL = ALL SID = '*' SCOPE = SPFILE;
Which is true about the result of this command?
   

A. The STATISTICS_LEVEL parameter is set to all whenever this PDB is re-opened.
B. The STATISTICS_LEVEL parameter is set to ALL whenever any PDB is reopened.
C. The STATISTICS_LEVEL parameter is set to all whenever the multitenant container database (CDB) is restarted.
D. Nothing happens; because there is no SPFILE for each PDB, the statement is ignored.


###

A


###

QUESTION 40 / 34
You are administering a multitenant container database (CDB).
Identify two ways to access a pluggable database (PDB) that is open in read-only mode.
A. by using the connect statement as a local user having only the set container privilege
B. by using easy connect
C. by using external authentication
D. as a common user with the set container privilege
E. by executing the alter session set container command as a local user

###

BD

tested
B works
D works

local user > PDB
root user > CDB and PDB
common user with alter session set container > CDB, with the ability to switch to a PDB

###

QUESTION 41 / 35
In which situation can you use Flashback Database?

A. when undoing a shrink data file operation
B. when retrieving a dropped tablespace
C. when returning to a point in time before the restoration or re-creation of a control file
D. when returning to a point in time before the most recent open resetlogs operation

###

D > unequivocally correct
B > could be correct if, when the tablespace was dropped, the data files were not dropped with it

A) runs into a specific Flashback Database limitation (a data file shrink cannot be undone)
C) runs into another limitation (restoring or re-creating the control file discards all flashback logs)


The basic procedure for using Flashback Database is to reverse an unwanted OPEN RESETLOGS (D)


B is correct insofar as Flashback Database is used to recover from logical errors (not from media failures);
we do not know, however, whether whoever dropped the tablespace specified INCLUDING CONTENTS AND DATAFILES, in which case it would not work

C) at the moment the control file is re-created or restored, all the flashback logs accumulated up to that point are discarded, so Flashback Database could go back at most to that moment, regardless of the retention settings

The database restores the version of each block that is immediately before the target time, but
you cannot use FLASHBACK DATABASE to return to a point in time before the restore or re-creation of a control file.

Then you can use Flashback Database to reverse the OPEN RESETLOGS.

Shut down the database, mount it, and re-check the flashback window. If the resetlogs SCN is still within the flashback window, then use this form of the FLASHBACK DATABASE command:

RMAN> FLASHBACK DATABASE TO BEFORE RESETLOGS;

Both could be correct.


############################


But Testking lists two answers, even though the question says "situation" and not "situations".
In that case the second answer would be B, because
you can use flashback to retrieve a dropped tablespace, and therefore a table as well.
Use the FLASHBACK DATABASE statement to return the database to a past time or system change number (SCN). This statement provides a fast alternative to performing incomplete database recovery.
Following a FLASHBACK DATABASE operation, in order to have write access to the flashed-back database, you must reopen it with an ALTER DATABASE OPEN RESETLOGS statement.
You must have the SYSDBA system privilege. A flash recovery area must have been prepared for the database.
The database must have been put in FLASHBACK mode with an ALTER DATABASE FLASHBACK ON
statement unless you are flashing the database back to a guaranteed restore point. The database must be mounted but not open.

###

QUESTION 42 / 36
For your database, an incremental level 1 backup is taken every week day.
On Tuesday, before the backup is performed, you add a new tablespace.
You execute the command:

RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG WEEKLY DATABASE;

Which statement is true about the execution of the command?
   A. It returns an error because there is no level 0 backup available for new data files.
   B. It performs an image copy backup of new data files, and a level 1 incremental backup of all other data files.
   C. It performs a level-0 backup of all data files including those that belong to the new tablespace.
   D. It performs an image copy backup of all data files including those that belong to the new tablespace.
   E. It performs a backup as a backup set of all data files including those that belong to the new tablespace.

###

B is correct

The BACKUP INCREMENTAL LEVEL 1... FOR RECOVER OF COPY WITH TAG...
command does not actually always create a level 1 incremental backup.
If there is no level 0 image copy backup of a particular datafile,
then executing this command creates an image copy backup of the datafile
on disk with the specified tag instead of creating the level 1 backup.
Thus, the first time the script runs, it creates the image copy of the datafile needed
to begin the cycle of incremental updates.
In the second run and all subsequent runs, it produces level 1 incremental backups of the datafile.
The RECOVER COPY OF DATABASE WITH TAG... command causes RMAN to apply any available
incremental level 1 backups to a set of datafile copies with the specified tag.
If there is no incremental backup or no datafile copy, the command generates a message but does not generate an error.
The first time the script runs, this command has no effect, because there is neither a datafile copy nor a level 1 incremental backup.
The second time the script runs, there is a datafile copy
(created by the first BACKUP command), but no incremental level 1 backup,
so again, the command has no effect.
On the third run and all subsequent runs, there is a datafile copy
and a level 1 incremental from the previous run, so the level 1 incremental
is applied to the datafile copy, bringing the datafile copy up to the checkpoint SCN of the level 1 incremental.
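The incrementally updated backup cycle described above is typically scripted like this (tag taken from the question):

```sql
RUN {
  -- Roll the previous level 1 (if any) into the image copies...
  RECOVER COPY OF DATABASE WITH TAG 'WEEKLY';
  -- ...then take today's level 1; for any data file without a level 0
  -- image copy yet (e.g. the new tablespace), this creates the copy instead.
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'WEEKLY' DATABASE;
}
```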

###

QUESTION 43 / 37
Which three conditions must be true for unused block compression to be used automatically while performing backups by using RMAN?
A. The compatible initialization parameter is set to 10.2 or higher.
B. There are no guaranteed restore points defined for the database.
C. The default device for the backup must be set to disk.
D. The tablespaces are locally managed.
E. The fast recovery area is less than 50 percent free.

###

ABD (compatible parameter - no guaranteed restore points - locally managed)

During unused block compression, RMAN does not check each block. Instead, RMAN reads the bitmaps that indicate what blocks are currently allocated and then only reads the blocks that are currently allocated.

Unused block compression is turned on automatically when all of the following five conditions are true:
----------------------------------------------------------------------------------------------------------------
UNUSED BLOCK COMPRESSION IS ON AUTOMATICALLY
The COMPATIBLE initialization parameter is set to 10.2 or higher. (A)
There are currently no guaranteed restore points defined for the database.(B)
The data file is locally managed.(D)
The data file is being backed up to a backup set as part of a full backup or a level 0 incremental backup.
The backup set is created on disk, or Oracle Secure Backup is the media manager. (C)
----------------------------------------------------------------------------------------------------------------

A feature by which RMAN reduces the size of data file backup sets by skipping data blocks.
RMAN always skips blocks that have never been used. Under certain conditions,
which are described in the BACKUP AS BACKUPSET entry in Oracle Database Backup and Recovery Reference,
RMAN also skips previously used blocks that are not currently being used.

I think C is excluded because the default device does not need to be set to disk;
what matters is that either OSB is the media manager or the backup set is created on disk (regardless of the default).

Reference:http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmcncpt.htm#BRADV89481 (See unused block compression)

###

QUESTION 44 / 151
Your database supports a Decision Support System (DSS) workload that involves the execution of complex queries. Currently, the database is running with peak workload.
You want to analyze some of the most resource-intensive statements cached in the library cache.
What must you run to receive recommendations on the efficient use of indexes and materialized views to improve query performance?

A. SQL Performance Analyzer (SQL Performance Analyzer enables you to assess the performance impact of any system change resulting in changes to SQL execution plans and performance characteristics)
B. SQL Access Advisor (indexes and materialized views - a tuning tool that provides advice on materialized views, indexes, and materialized view logs)
C. SQL Tuning Advisor (SQL profiles and SQL restructuring)
D. Automatic Workload Repository (AWR) report
E. Automatic Database Diagnostic Monitor (ADDM analyzes AWR data to identify bottlenecks: CPU, memory, I/O, application, concurrency, contention)

###


Correct Answer: B


QUESTION 45 / 152
You install "Oracle Grid Infrastructure for a standalone server" on a host on which the orcl1 and orcl2 databases both have their instances running.
Which two statements are true?

A. Both orcl1 and orcl2 are automatically added to the Oracle Restart configuration.
B. All database listeners running from the database home are automatically added to the Oracle Restart configuration.
C. The srvctl add database command must be used to add orcl1 and orcl2 to the Oracle Restart configuration.
D. The crsctl start has command must be used to start software services for Oracle Automatic Storage Management (ASM) after the "Oracle Grid Infrastructure for a standalone server" installation is complete.
E. All databases subsequently created by using the Database Configuration Assistant (DBCA) are automatically added to the Oracle Restart configuration.

###

A wrong
B wrong
C OK (srvctl is the command to use)
D wrong (srvctl is used instead)
E OK

If you install Oracle Restart by installing the Oracle Grid Infrastructure for a standalone server and then create your database, the database is automatically added to the Oracle Restart configuration (E), and is then automatically restarted when required. However, if you install Oracle Restart on a host computer on which a database already exists, you must manually add the database, the listener, the Oracle Automatic Storage Management (Oracle ASM)  (C) instance, and possibly other components to the Oracle Restart configuration.
After configuring Oracle Restart to manage your database, you may want to:
•    Add additional components to the Oracle Restart configuration.
•    Remove components from the Oracle Restart configuration.
•    Temporarily suspend Oracle Restart management for one or more components.
•    Modify the Oracle Restart configuration options for an individual component.


Correct Answer: CE


QUESTION 46 / 153
In your multitenant container database (CDB) that contains pluggable databases (PDBs), the hr user executes the following commands to create and grant privileges on a procedure:

CREATE OR REPLACE PROCEDURE
create_test_v (v_emp_id NUMBER, v_ename VARCHAR2, v_salary NUMBER, v_dept_id NUMBER)
IS
BEGIN
INSERT INTO hr.test VALUES (v_emp_id, v_ename, v_salary, v_dept_id);
END;
/
GRANT EXECUTE ON create_test_v TO john, jim, smith, king;
How can you prevent users having the execute privilege on the create_test_v procedure from inserting values into tables on which they do not have any privileges?

A. Create the create_test procedure with definer's rights.
B. Grant the execute privilege to users with grant option on the create_test procedure.
C. Create the create_test procedure with invoker's rights.
D. Create the create_test procedure as part of a package and grant users the execute privilege on the package.

###


Correct Answer: C
You can control access to privileges that are necessary to run user-created procedures by using definer's rights, which execute with the privileges of the owner, or with invoker's rights, which execute with the privileges of the user running the procedure.
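Based on the procedure in the question, the invoker's-rights variant can be sketched like this (a sketch, not the exam's exact text; the default without an AUTHID clause is definer's rights):

```sql
-- Invoker's rights: the procedure runs with the privileges of the CALLER,
-- so a user without INSERT on hr.test cannot use it to insert rows.
CREATE OR REPLACE PROCEDURE create_test_v (
  v_emp_id  NUMBER,
  v_ename   VARCHAR2,
  v_salary  NUMBER,
  v_dept_id NUMBER)
AUTHID CURRENT_USER   -- omit this (or write AUTHID DEFINER) for definer's rights
IS
BEGIN
  INSERT INTO hr.test VALUES (v_emp_id, v_ename, v_salary, v_dept_id);
END;
/
```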

################################################################

QUESTION 47 / 154 (to be verified)
You must unload data from the orders, order_items, and products database tables to four files using external tables.
CREATE TABLE orders_ext
(order_id, order_date, product_id, product_name, quantity)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY ext_dir
LOCATION ('orders1.dmp','orders2.dmp','orders3.dmp','orders4.dmp')
)
PARALLEL
AS
SELECT o.order_id, o.order_date, p.product_id, p.product_name, i.quantity
FROM orders o, products p, order_items i
WHERE o.order_id = i.order_id AND i.product_id = p.product_id;

You execute the command shown in the Exhibit, but only two files are created.
Which parameter must be changed so that four files are created?

A. TYPE
B. LOCATION
C. PARALLEL
D. DEFAULT DIRECTORY
E. ORGANIZATION EXTERNAL

###


Correct Answer: C (PARALLEL)


The PARALLEL clause enables parallel query on the data sources. The granule of parallelism is by default a data source, but parallel access within a data source is implemented whenever possible. For example, if PARALLEL=3 were specified, then more than one parallel execution server could be working on a data source.
But, parallel access within a data source is provided by the access driver only if all of the following conditions are met:




###

QUESTION 48 / 155
Users report this error message when inserting rows into the orders table:
ERROR at line 1:
ORA-01654: unable to extend index USERS.ORDERS_IND by 8 in tablespace INDEXES
You determine that the indexes tablespace is out of space and there is no free space on the filesystem used by the Oracle database.
Which two must you do to fix this problem without affecting currently executing queries?
A. drop and re-create the index
B. coalesce the orders_ind index
C. coalesce the indexes tablespace
D. perform an online table rebuild using dbms_redefinition
E. rebuild the index online moving it to another tablespace that has enough free space for the index

###

BE

###

QUESTION 49 / 38
Evaluate these statements:
CREATE TABLE purchase_orders
(po_id NUMBER(4),
po_date TIMESTAMP,
supplier_id NUMBER(6),
po_total NUMBER(8,2),
CONSTRAINT order_pk PRIMARY KEY (po_id))
PARTITION BY RANGE (po_date)
(PARTITION q1 VALUES LESS THAN (TO_DATE('01-apr-2007','dd-mon-yyyy')),
 PARTITION q2 VALUES LESS THAN (TO_DATE('01-jul-2007','dd-mon-yyyy')),
 PARTITION q3 VALUES LESS THAN (TO_DATE('01-oct-2007','dd-mon-yyyy')),
 PARTITION q4 VALUES LESS THAN (TO_DATE('01-jan-2008','dd-mon-yyyy')));

CREATE TABLE purchase_order_items
(po_id NUMBER(4) NOT NULL,
product_id NUMBER(6) NOT NULL,
unit_price NUMBER(8,2),
quantity NUMBER(8),
CONSTRAINT po_items_fk
FOREIGN KEY (po_id) REFERENCES purchase_orders(po_id))
PARTITION BY REFERENCE (po_items_fk);

Which two statements are true?
A. Partitions of purchase_order_items are assigned unique names based on a sequence.
B. The purchase_orders and purchase_order_items tables are created with four partitions each.
C. purchase_order_items table partitions exist in the same tablespaces as the purchase_orders table partitions.
D. The purchase_order_items table inherits the partitioning key by duplicating the key columns from the parent table.
E. Partition maintenance operations on the purchase_order_items table require disabling the foreign key constraint.

###

A is wrong: no sequence is involved
B OK: the child table builds its partitions like the parent's
C OK: same tablespaces
D is wrong: there is no duplication of key columns
E is wrong: precisely because of the reference-partitioning feature

partition by reference
This creates partitions identical to those in the parent table.
Note that the child table declares no partitioning column of its own, yet it is partitioned along the parent's partitioning key.
The clause partition by reference takes the name of the foreign key in the partition definition.
This instructs Oracle Database 11g to confirm that the partitioning is done per the scheme used in the parent table (in this case, customers).
Note the NOT NULL constraint on column cust_id; this is required for reference partitioning.

Reference partitioning also allows the DBA to maintain both sets of partitions by only managing partitions on the parent table.

The ability to partition by reference removes the necessity for managing the partitions on the child table.
Similarly, reference partitioning also eliminates the need
to include unnecessary duplicate columns from the parent table to enable equi-partitioning of the child table.  

###

QUESTION 50 / 39
Which four actions are possible during an Online Datafile Move operation?
A. Creating and dropping tables in the datafile being moved
B. Performing file shrink of the data file being moved
C. Querying tables in the datafile being moved
D. Performing Block Media Recovery for a data block in the datafile being moved
E. Flashing back the database
F. Executing DML statements on objects stored in the datafile being moved

###

Correct Answer: ACDF
B and E are wrong.


For anyone who thinks E is correct, refer to Oracle Database 12c: New Features for Administrators, Chapter 8, page 50.

An Online Move data file operation is not compatible when:
• The data file is an OFFLINE data file
• A concurrent flashback database operation is executing
• A media recovery is completing
• A file shrink operation or a tablespace offline/drop operation involving the same file is in progress

But it is compatible with:
• Block media recovery
• ALTER TABLESPACE READ ONLY or READ WRITE operations
• Data file extension operation
• Tablespace/database online backup mode involving the same file

###

QUESTION 51 / 40
Examine the command used to perform an incremental level-0 backup:
RMAN>BACKUP INCREMENTAL LEVEL 0 DATABASE;
To enable block change tracking, after the incremental level 0 backup, you issue the command:
SQL>ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE'/mydir/rman_change_track.f';
To perform an incremental level-1 cumulative backup, you issue the command:
RMAN>BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
Which two statements are true in the preceding situation?

A. The block change tracking data is used only from the next incremental backup.
B. The incremental level 1 backup fails because a block change tracking file is created after the level 0 backup.
C. The incremental level 1 backup does not use change tracking data for accomplishing the backup.
D. The block change tracking file scans all blocks and creates a bitmap for the blocks backed up in the level 0 backup.
E. The block change tracking data is used for the next incremental level 1 backup only after the next level 0 backup.

###

C ok
E ok


RMAN's change tracking feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.

After enabling change tracking, the first level 0 incremental backup still has to scan the entire datafile, as the change tracking file does not yet reflect the status of the blocks.
Subsequent incremental backups that use this level 0 as a parent will take advantage of the change tracking file.

Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking files themselves generally require little maintenance after initial configuration.

Change tracking is disabled by default, because it does introduce some minimal performance overhead on your database during normal operations. However, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks are changed between backups. If your backup strategy involves incremental backups, then you should enable change tracking.
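To verify the behavior described above, the tracking status and its actual use by RMAN can be checked from two dynamic views (a sketch; output depends on your backup history):

```sql
-- Is block change tracking enabled, and where is the tracking file?
SELECT status, filename, bytes
FROM   v$block_change_tracking;

-- USED_CHANGE_TRACKING is 'YES' only for incremental backups that could
-- actually use the tracking file (i.e., taken after a level 0 parent exists).
SELECT file#, incremental_level, used_change_tracking, completion_time
FROM   v$backup_datafile
ORDER  BY completion_time;
```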

###

QUESTION 52 / 41
You specified the warning and critical thresholds for a locally managed tablespace to be 60% and 70%, respectively.
From the tablespace space usage metrics, you find that the space usage has reached the specified warning threshold value, but no alerts have been generated.
What could be the reason for this?
A. The event parameter was not set.
B. The sql_trace parameter is set to false.
C. Enterprise Manager was not used.
D. The statistics_level parameter is set to basic.
E. The time_statistics parameter is set to false.

###

Correct Answer: D
STATISTICS_LEVEL values: TYPICAL (collects the typical statistics), ALL (collects all of them), BASIC (collection disabled)
“Database metrics are not computed by Oracle when the initialization parameter called statistics_level is set to BASIC.”
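The thresholds from the question could be set per tablespace with DBMS_SERVER_ALERT (a sketch; the tablespace name USERS is an assumption):

```sql
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '60',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '70',
    observation_period      => 1,     -- minutes
    consecutive_occurrences => 1,
    instance_name           => NULL,  -- database-wide
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/
```

Even with the thresholds set, STATISTICS_LEVEL=BASIC means the metric is never computed, so no alert fires.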

###

QUESTION 53 / 42
You are administering a multitenant container database (CDB) cdb1 that is running in archivelog mode and contains pluggable databases (PDBs), pdb_1 and pdb_2.
While opening pdb_1, you get an error:
SQL> alter pluggable database pdb_1 open;

ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: '/u01/app/oracle/oradata/cdb1/pdb_1/example01.dbf'
To repair the failure, you open an RMAN session for the target database CDB$ROOT. You execute the following as the first command:
RMAN>REPAIR FAILURE;
Which statement describes the consequence of the command?

A. The command performs the recovery and closes the failure.
B. The command produces an error because RMAN is not connected to the target database pdb_1.
C. The command produces an error because the ADVISE FAILURE command was not executed before the REPAIR FAILURE command.
D. The command executes successfully, performs recovery, and opens PDB_1.

###

D (worth testing to confirm)


The recommended workflow is to run
LIST FAILURE to display failures,
ADVISE FAILURE to display repair options, and
REPAIR FAILURE to fix the failures.

In the current release,
Data Recovery Advisor can only be used to diagnose and repair data corruptions
in non-CDBs and the root of a multitenant container database (CDB).
Data Recovery Advisor is not supported for pluggable databases (PDBs).

The following operations are not available when you connect as target directly to a PDB:

    Back up archived logs
    Delete archived logs
    Delete archived log backups
    Restore archived logs (RMAN does restore archived logs when required during media recovery.)
    Point-in-time recovery (PITR)
    TSPITR
    Table recovery
    Duplicate database
    Flashback operations
    Running Data Recovery Advisor
    Report/delete obsolete
    Register database
    Import catalog
    Reset database
    Configuring the RMAN environment (using the CONFIGURE command)

###   

QUESTION 54 / 43
What can be automatically implemented after the SQL Tuning Advisor is run as part of the Automated Maintenance Task?
A. statistics recommendations
B. SQL profile recommendations
C. SQL statement restructure recommendations
D. creation of materialized views to improve query performance


###

B

The SQL Tuning Advisor takes one or more SQL statements as an input and invokes the Automatic Tuning Optimizer
to perform SQL tuning on the statements.
The output of the SQL Tuning Advisor is in the form of an advice or recommendations, along with a rationale for each recommendation and its expected benefit.
The recommendation relates to collection of statistics on objects,
creation of new indexes,
--restructuring of the SQL statement,
--or creation of a SQL profile.
You can choose to accept the recommendation to complete the tuning of the SQL statements.

Oracle Database can automatically tune SQL statements by identifying problematic SQL statements and implementing tuning recommendations using the SQL Tuning Advisor during system maintenance windows. You can also run the SQL Tuning Advisor selectively on a single or a set of SQL statements that have been identified as problematic.

During the tuning process, all recommendation types are considered and reported, but only SQL profiles can be implemented automatically.

###

QUESTION 55 / 44
You use RMAN with a recovery catalog to back up your database.
The backups and the archived redo log files are backed up to media daily.
Because of a media failure, the entire database along with the recovery catalog database is lost.
Examine the steps required to recover the database:
1.Restore an autobackup of the server parameter file.
2.Restore the control file.
3.Start up the database instance in nomount state.
4.Mount the database.
5.Restore the data files.
6.Open the database with the resetlogs option.
7.Recover the data files.
8.Set DBID for the database.

Identify the required steps in the correct order.
A. 1, 8, 3, 2, 4, 5, 7, 6
B. 8, 1, 3, 2, 4, 5, 7, 6
C. 1, 3, 2, 4, 8, 5, 6, 7
D. 8, 3, 2, 4, 5, 7, 6
E. 8, 1, 3, 2, 4, 5, 6

###

Correct Answer: B



If you want to restore the control file from autobackup,
the database must be in a NOMOUNT state.
You must first set the DBID for your database,
and then use the RESTORE CONTROLFILE FROM AUTOBACKUP command:

The entire database is lost, so RMAN would not recognize the target at connection time; therefore the DBID must be set first.


If the database is up at the time of the loss of the SPFILE, connect to the target database. For example, run:
% rman TARGET /

If the database is not up when the SPFILE is lost, and you are not using a recovery catalog, then you must set the DBID of the target database. See "Determining your DBID" for details on determining your DBID.

Shut down the instance and restart it without mounting. When the SPFILE is not available, RMAN starts the instance with a dummy parameter file. For example:

RMAN> STARTUP FORCE NOMOUNT;
Restore the server parameter file. If restoring to the default location, then run:

RMAN> RESTORE SPFILE FROM AUTOBACKUP;
If restoring to a nondefault location, then you could run commands as in the following example:

RMAN> RESTORE SPFILE TO '/tmp/spfileTEMP.ora' FROM AUTOBACKUP;



SET DBID 676549873;                    (8)
RUN                                    (1)
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RESTORE SPFILE FROM AUTOBACKUP;
}
STARTUP NOMOUNT;                       (3)
RESTORE CONTROLFILE FROM AUTOBACKUP;   (2)
ALTER DATABASE MOUNT;                  (4)
RESTORE DATABASE;                      (5)
RECOVER DATABASE;                      (7)
ALTER DATABASE OPEN RESETLOGS;         (6)

###

QUESTION 56 / 45
Which three statements are true about the startup and shutdown of multitenant container databases (CDBs) and pluggable databases (PDBs)?

A. A PDB opened in restricted mode allows only local users to connect.
B. When a CDB is open in restricted mode, PDBs must also be opened in restricted mode.
C. When a CDB is in mount state, PDBs are automatically placed in mount state.
D. All PDBs must be shut down before shutting down a CDB instance.
E. When a CDB instance is started, PDBs can be placed in open state by
   using database triggers or by executing the alter pluggable database command.

###

BCE

A false
B OK: seems correct; if the CDB is in restricted mode, even though a PDB shows OPEN it does not actually allow connections
C OK: seems correct (by default the PDBs are in MOUNT state even on a normal open; I also tested with SAVE STATE)
D false
E OK: seems correct


CREATE OR REPLACE TRIGGER open_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/

###

QUESTION 57 / 46
A telecom company wishes to generate monthly bills to include details of customer calls, listed in order of time of call.
Which table organization allows for generating the bills with minimum degree of row sorting?

A. a hash cluster
B. an index cluster
C. a partitioned table
D. a sorted hash cluster
E. a heap table with a rowid column

###


Correct Answer: D

A sorted hash cluster stores the rows for each hash key (for example, the telephone number) physically ordered by a sort key (for example, the call timestamp), so the bill details can be returned in call-time order with a minimum of sorting.
    Hashing is not advantageous in the following situations:
    Most queries on the table retrieve rows over a range of cluster key values. For example,
    in full table scans or queries such as the following, a hash function cannot be used to determine the location of specific hash keys. Instead, the equivalent of a full table scan must be done to fetch the rows for the query.

    SELECT . . . WHERE cluster_key < . . . ;
    With an index, key values are ordered in the index, so cluster key values that satisfy the WHERE clause of a query can be found with relatively few I/Os.
    The table is not static, but instead is continually growing. If a table grows without limit, the space required over the life of the table (its cluster) cannot be predetermined.
    Applications frequently perform full-table scans on the table and the table is sparsely populated. A full-table scan in this situation takes longer under hashing.
    You cannot afford to preallocate the space that the hash cluster will eventually need.
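A sorted hash cluster for this scenario could be sketched as follows (object names and sizing are illustrative, not from the question):

```sql
-- Rows hash on the telephone number; within each hash key they are kept
-- physically ordered by call time, so returning a customer's calls in
-- time order needs no extra sort.
CREATE CLUSTER calls_cluster (
  telephone_no NUMBER,
  call_time    TIMESTAMP SORT)
HASHKEYS 10000
HASH IS telephone_no
SIZE 256;

CREATE TABLE customer_calls (
  telephone_no NUMBER,
  call_time    TIMESTAMP SORT,
  duration_sec NUMBER)
CLUSTER calls_cluster (telephone_no, call_time);
```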



QUESTION 58 / 47
Examine the following steps of privilege analysis for checking and revoking excessive, unused privileges granted to users:
1. Create a policy to capture the privileges used by a user for privilege analysis.
2. Generate a report with the data captured for a specified privilege capture.
3. Start analyzing the data captured by the policy.
4. Revoke the unused privileges.
5. Compare the used and unused privileges' lists.
6. Stop analyzing the data.
Identify the correct sequence of steps.
A. 1, 3, 5, 6, 2, 4
B. 1, 3, 6, 2, 5, 4
C. 1, 3, 2, 5, 6, 4
D. 1, 3, 5, 2, 6, 4

###

B (guessed)
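The sequence in option B maps onto the DBMS_PRIVILEGE_CAPTURE package roughly like this (a sketch; the policy name my_capture and the revoke example are made up):

```sql
-- 1. Create a capture policy for the whole database
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'my_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
END;
/
-- 3. Start analyzing (capturing) privilege use
EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('my_capture');
--    ... run the workload for a representative period ...
-- 6. Stop analyzing
EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('my_capture');
-- 2. Generate the report data for the capture
EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('my_capture');
-- 5. Compare the used and unused privilege lists
SELECT * FROM dba_used_privs   WHERE capture = 'my_capture';
SELECT * FROM dba_unused_privs WHERE capture = 'my_capture';
-- 4. Revoke what turned out to be unused, e.g.:
--    REVOKE DELETE ANY TABLE FROM scott;
```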

###

QUESTION 59 / 48
Your multitenant container database (CDB) cdb1 that is running in archivelog
mode contains two pluggable databases (PDBs), pdb2_1 and pdb2_2, both of which are open.
RMAN is connected to the target database pdb2_1.

RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
Which statement is true about the execution of this command to back up the database?

A. All data files belonging to pdb2_1 are backed up and all archive log files are deleted.
B. All data files belonging to pdb2_1 are backed up along with the archive log files.
C. Only the data files belonging to pdb2_1 are backed up.
D. This command gives an error because archive log files can be backed up only when RMAN is connected to the root database.

###

C seems correct, and not B, because when you back up a PDB directly you cannot act on the archived logs (they are generated globally and belong to the CDB, the higher-level entity).

Connected directly to a PDB, there are no archived logs to back up.

###

QUESTION 60 / 49
You notice that the performance of your production 24/7 Oracle 12c database has significantly degraded.
Sometimes you are not able to connect to the instance because it hangs.
You do not want to restart the database instance.
How can you detect the cause of the degraded performance?

A. Enable Memory Access Mode, which reads performance data from SGA.
B. Use emergency monitoring to fetch data directly from SGA for analysis.
C. Run Automatic Database Diagnostic Monitor (ADDM) to fetch information from the latest Automatic Workload Repository (AWR) snapshots.
D. Use Active Session History (ASH) data and hang analysis in regular performance monitoring,
E. Run ADDM in diagnostic mode.

###

B. Use emergency monitoring to fetch data directly from SGA for analysis.
Correct.

Hang analysis can be performed using oradebug hanganalyze, as described here.

$ sqlplus / as sysdba

SQL> oradebug hanganalyze 3
On versions prior to 11g, you can run hang analyze from a preliminary connection,
which may help if you are trying to connect to a hung database, so a normal connection is not possible.

$ sqlplus -prelim / as sysdba

###

QUESTION 61 / 50
You issue commands in SQL*Plus as the Oracle owner, to enable multithreading for your UNIX- based Oracle 12c database:
CONNECT / AS SYSDBA
ALTER SYSTEM SET THREADED_EXECUTION= TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
You then restart the instance and get an error:
STARTUP
ORA-01031:insufficient privileges
Why does the startup command return the error shown?

A. because the threaded architecture requires exiting from sql*pIus and reconnecting with sql*PIus / as sysdba before issuing a startup command
B. because the threaded architecture requires issuing a new connect / as sysdba from within sql*pIus before issuing a startup command
C. because the threaded architecture requires authentication using a password file before issuing a startup command
D. because the threaded architecture requires connecting to the instance via a listener before issuing a startup command
E. because the threaded architecture requires restarting the listener before issuing a startup command

###

C ok

You need a connection that is authenticated through the password file.
With THREADED_EXECUTION=TRUE, operating system authentication is no longer allowed, so you must log in with a password, for example: CONNECT sys/password AS SYSDBA

###

QUESTION 62 / 51
Your multitenant container database (CDB) cdb1,
which has no startup triggers and contains multiple pluggable databases (PDBs),
is started up by using the command:

SQL>STARTUP
Which two statements are true about the successful execution of the command?

A. All redo log files are opened.
B. The root, the seed, and all the PDBs are opened in read-write mode.
C. All the PDBs are opened in read-write mode.
D. All the PDBs are in closed state.
E. Only the root database is opened in read-write mode.

###

A E

B is clearly wrong
C D: by default the PDBs are opened in MOUNT state,
  but a SAVE STATE or a trigger could change that,
  so it is not 100% certain that the PDBs are all closed or all open,
  i.e., all in the same situation

###

QUESTION 63 / 53
Examine the commands executed to monitor database operations:
$> conn sys/oracle@prod as sysdba
SQL> VAR eid NUMBER
SQL>EXEC :eid :=
DBMS_SQL_MONITOR.BEGIN_OPERATION('batch_job', FORCED_TRACKING=>'Y');
Which two statements are true?
A. Database operations will be monitored only when they consume a significant amount of resource.
B. Database operations for all sessions will be monitored.
C. Database operations will be monitored only if the STATISTICS_LEVEL parameter is set to TYPICAL
   and CONTROL_MANAGEMENT_PACK_ACCESS is set to DIAGNOSTIC+TUNING.
D. Only DML and DDL statements will be monitored for the session.
E. All subsequent statements in the session will be treated as one database operation and will be monitored.

###

C E

(C)
Setting the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter
to DIAGNOSTIC+TUNING (default) enables monitoring of database operations. Real-Time SQL
Monitoring is a feature of the Oracle Database Tuning Pack.
Note:
* The DBMS_SQL_MONITOR package provides information about Real-time SQL Monitoring and
Real-time Database Operation Monitoring.

(not B)
BEGIN_OPERATION Function starts a composite database operation in the current session.

(E)
FORCE_TRACKING – forces the composite database operation to be tracked when the
operation starts. You can also use the string variable ‘Y’.

(not A)
NO_FORCE_TRACKING – the operation will be tracked only when it has consumed at
least 5 seconds of CPU or I/O time. You can also use the string variable ‘N’.
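A matching end of the operation, and a way to see it being tracked, could look like this (a sketch continuing the question's session; 'batch_job' and :eid come from the question):

```sql
-- Everything executed between BEGIN_OPERATION and END_OPERATION is tracked
-- as one composite database operation named 'batch_job'.
EXEC DBMS_SQL_MONITOR.END_OPERATION('batch_job', :eid);

-- The tracked operation is visible in Real-Time SQL Monitoring:
SELECT dbop_name, dbop_exec_id, status
FROM   v$sql_monitor
WHERE  dbop_name = 'batch_job';
```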

###

QUESTION 64 / 54

Examine the command:
$expdp SYSTEM
FULL=YES
DUMPFILE=dpump_dir1:full1%U.dmp,dpump_dir2:full2%U.dmp
FILESIZE=400M
PARALLEL=3
JOB_NAME=expfull
Which statement is true about the execution of the command?

A. It fails because the log file parameter is not specified.
B. It fails because no absolute path is specified for the log file and dump file.
C. It succeeds and exports the full database, simultaneously creating three copies of dump files at
three different locations.
D. It succeeds and exports the full database, simultaneously creating three dump files at three
different locations, but the total number of dump files can exceed three.

###

D OK: it works even without a log file; it creates three or more pieces, each no larger than 400 MB.
Data Pump performance can be improved by using the PARALLEL parameter.
This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read.


B clearly wrong.
C wrong: if the dump files exceeded 400 MB, there would be more than three of them.

A is not correct:
  even if LOGFILE is not specified, as long as the default logical directory has a valid path,
  LOGFILE= is not required
  and a log file named after the job (jobname.log) is created.


### 

QUESTION 65 / 55

You notice that the performance of your production 24x7 Oracle 12c database has significantly degraded.
Sometimes, you are not able to connect to the database instance because it hangs.
How can you detect the cause of the degraded performance?

A. by performing emergency monitoring using Real-Time Automatic Database Diagnostic Monitor (ADDM) to fetch data directly from SGA for analysis
B. by running ADDM to fetch information from the latest Automatic Workload Repository (AWR) snapshots
C. by using Active Session History (ASH) data and performing hang analysis
D. by running ADDM in diagnostic mode



#####

A

https://oracle-base.com/articles/12c/emergency-monitoring-em12c

not B: at that moment we do not want data from the latest snap_id (which could date back who knows how long)
not C: the correct answer is the emergency-monitoring option of EM Cloud Control, which can also be reached directly through system views, oradebug, and so on
not D: ADDM is a diagnostic mode for normal situations



###

QUESTION 66 / 56
Automatic Shared Memory Management (ASMm) is enabled for your database instance, but parameters for the managed components are not defined.
You execute this command:
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 100M;
Which statement is true?
A. The minimum size for the standard buffer cache is 100 MB.
B. The maximum size for the standard buffer cache is 100 MB.
C. The minimum space guaranteed in the buffer cache for any server process is 100 MB.
D. The maximum space in the buffer cache that can be released for dynamic distribution is 100 MB.
E. The minimum size for all buffer caches is 100 MB.

###

A is correct.

DB_CACHE_SIZE specifies the size of the DEFAULT buffer pool for buffers
If the parameter is specified, then the user-specified value indicates a minimum value for the memory pool.

###

QUESTION 67 / 57
You created a tablespace with this statement:
CREATE BIGFILE TABLESPACE adtbs
DATAFILE '/proddb/data/adtbs.dbf' SIZE 10G;
The tablespace is nearly full and you need to avoid any out of space errors for the load of a 5 gig table.
Which two alter statements will achieve this?
A. ALTER TABLESPACE adtbs RESIZE 20G;
B. ALTER TABLESPACE adtbs ADD DATAFILE;
C. ALTER TABLESPACE adtbs AUTOEXTEND ON;
D. ALTER TABLESPACE adtbs ADD DATAFILE '/proddb/data/adtbs1.dbf' SIZE 10G;
E. ALTER TABLESPACE adtbs MODIFY DATAFILE '/proddb/data/adtbs.dbf' AUTOEXTEND ON;

###

A - C

SQL> ALTER TABLESPACE bigfile_tbs resize 1G;

Tablespace altered.

SQL> ALTER TABLESPACE bigfile_tbs AUTOEXTEND ON;

Tablespace altered.

SQL> ALTER TABLESPACE adtbs MODIFY DATAFILE 'C:\APP\DANIELEFERRARI\ORADATA\CDB\PDB2\ADBF.DMP' AUTOEXTEND ON
                       *
ERROR at line 1:
ORA-02142: missing or invalid ALTER TABLESPACE option

###

QUESTION 68 / 58
Which two statements are true regarding the Oracle Data Pump export and import operations?
A. You cannot export data from a remote database.
B. You can rename tables during import.
C. You can overwrite existing dump files during export.
D. You can compress data but not metadata during export.

###

BC

A wrong: it can be done (for example, via NETWORK_LINK)
B correct: REMAP_TABLE takes two values; the first is the old name of the table with its schema (hr.superheros) and the second is the new name you want to give the table (superheros_copy)
C correct: overwriting is permitted (REUSE_DUMPFILES=YES)
D wrong: the COMPRESSION parameter accepts ALL, DATA_ONLY, METADATA_ONLY, or NONE, so metadata can be compressed as well


impdp  hr/hr@ORCL  DIRECTORY = demo  DUMPFILE = superhero.dmp  LOGFILE = sh_imp.log  TABLES = superheros  REMAP_TABLE = hr.superheros:superheros_copy;

###

QUESTION 69 / 59
You have installed two 64G flash devices to support the Database Smart Flash Cache feature on your database server that is running on Oracle Linux.
You have set the DB_FLASH_CACHE_FILE parameter:
DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
How should the Db_flash_cache_size be configured to use both devices?

A. Set DB_FLASH_CACHE_SIZE=64G.
B. Set DB_FLASH_CACHE_SIZE=64G, 64G.
C. Set DB_FLASH_CACHE_SIZE=128G.
D. DB_FLASH_CACHE_SIZE is automatically configured by the instance at startup.

###

B

In Oracle 12c you can define multiple files/devices, with matching sizes, for the "Database Smart Flash Cache" area.
In previous releases only one file/device could be defined.
DB_FLASH_CACHE_FILE = /dev/sda, /dev/sdb, /dev/sdc
DB_FLASH_CACHE_SIZE = 32G, 32G, 64G
So above settings defines 3 devices which will be in use by "DB Smart Flash Cache"
/dev/sda size 32G
/dev/sdb size 32G
/dev/sdc size 64G
New view V$FLASHFILESTAT it's used to determine the cumulative latency and read counts of each file|device and compute the average latency
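
Applied to this question, a sketch of the two init.ora entries for the pair of 64G devices:

DB_FLASH_CACHE_FILE = /dev/flash_device_1, /dev/flash_device_2
DB_FLASH_CACHE_SIZE = 64G, 64G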

###

QUESTION 70 / 60
You are required to migrate your 11.2.0.3 database to an Oracle 12c database.
Examine the list of steps that might be used to accomplish this task:

1.Place all user-defined tablespaces in read-only mode on the source database.
2.Use the RMAN convert command to convert data files to the target platform's endian format, if required.
3.Perform a full transportable export on the source database with the parameters
VERSION=12, TRANSPORTABLE=ALWAYS, and FULL=Y.
4. Transport the data files for all the user-defined tablespaces.
5. Transport the export dump file to the target database.
6. Perform an import on the target database by using the full, network_link, and transportable_datafiles parameters.
7. Perform an import on the target database by using the full and transportable_datafiles parameters.
   Identify the required steps in the correct order.

A. 1, 3, 5, 4, 2, and 6
B. 1, 2, 4, 6, 5, 3, and 7
C. 1, 2, 4, and 7
D. 2, 4, 5, 6, and 7

###

Correct Answer: A
Pairing: steps 1 and 3 go together (the tablespaces must be read-only before the full transportable export).
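
A sketch of the commands behind steps 3 and 7 (directory, dump file, and data file names are hypothetical):

expdp system FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 DIRECTORY=dp_dir DUMPFILE=full_tts.dmp

impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full_tts.dmp TRANSPORT_DATAFILES='/u01/oradata/tgt/users01.dbf'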

###

QUESTION 71 / 61
Your multitenant container database (CDB) cdb1 that is running in archivelog mode contains two pluggable databases (PDBs), pdb2_1 and pdb2_2. RMAN is
connected to the target database pdb2_1.
Examine the command executed to back up pdb2_1:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Which statement is true about the execution of this command?


A. It fails because archive log files cannot be backed up using a connection to a PDB.
B. It succeeds but only the data files belonging to the pdb2_1 pluggable database are backed up.
C. It succeeds and all data files belonging to pdb2_1 are backed up along with the archive log files.
D. It fails because the pluggable clause is missing.

###

B is the correct answer

As it is a PDB, there are no archive logs to backup. Hence, option C is wrong and B is right.
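
To back up the archived logs together with the PDB, RMAN must be connected to the CDB root instead; a sketch (service names are hypothetical):

RMAN> CONNECT TARGET sys@cdb1
RMAN> BACKUP PLUGGABLE DATABASE pdb2_1 PLUS ARCHIVELOG;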

###

QUESTION 72 / 62
View the Exhibit showing steps to create a database resource manager plan.
SQL> execute dbms_resource_manager.create_pending_area();
PL/SQL procedure successfully completed.
SQL> exec dbms_resource_manager.create_consumer_group (consumer_group=>'OLTP', comment=>'online user');
PL/SQL procedure successfully completed.
SQL> exec dbms_resource_manager.create_plan(plan=>'PRIUSERS', comment=>'dss prio');
SQL> exec dbms_resource_manager.create_plan_directive(plan=>'PRIUSERS', group_or_subplan=>'OLTP', comment=>'online grp', cpu_p1=>60);
PL/SQL procedure successfully completed.
After executing the steps in the exhibit you execute this procedure, which results in an error:
SQL> EXECUTE dbms_resource_manager.validate_pending_area ();

What is the reason for the error?

A. The pending area is automatically submitted when creating plan directives.
B. The procedure must be executed before creating any plan directive.
C. The sys_group group is not included in the resource plan.
D. The other_groups group is not included in the resource plan.
E. Pending areas can not be validated until submitted.

###

Correct Answer: D

OTHER_GROUPS

This consumer group contains all sessions that have not been assigned to a consumer group.
Every resource plan must contain a directive to OTHER_GROUPS.
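
A sketch of the missing directive that would let the pending area validate, reusing the exhibit's plan name (the 40% CPU value is illustrative):

SQL> exec dbms_resource_manager.create_plan_directive(plan=>'PRIUSERS', group_or_subplan=>'OTHER_GROUPS', comment=>'everyone else', cpu_p1=>40);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();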

###

QUESTION 73 / 63
Your database is running in noarchivelog mode.
One of the data files belonging to the system tablespace is corrupted.
You notice that all online redo logs have been overwritten since the last backup.
Which method would you use to recover the data file?

A. Shut down the instance if not already shut down, restore all data files belonging to the system tablespace from the last backup, and restart the instance.
B. Shut down the instance if not already shut down, restore the corrupted data file belonging to the system tablespace from the last backup, and restart the
instance.
C. Shut down the instance if not already shut down, restore all data files for the entire database from the last backup, and restart the instance.
D. Mount the database, restore all data files belonging to the system tablespace from the last backup, and open the database.

###

C

C => noarchivelog + all redo logs overwritten => the only option is restoring the entire database from the last backup.

###

QUESTION 74 / 64
You execute the RMAN commands:
RMAN> BACKUP VALIDATE DATABASE;
RMAN> RECOVER CORRUPTION LIST;
Which task is performed by these commands?

A. Corrupted blocks, if any, are repaired in the backup created.
B. Only those data files that have corrupted blocks are backed up.
C. Corrupted blocks in the data files are checked and repaired before performing the database backup.
D. The database is checked for physically corrupt blocks and any corrupted blocks are repaired.

###

D

You can either use RECOVER CORRUPTION LIST
to recover all blocks reported in the V$DATABASE_BLOCK_CORRUPTION view,
or specify the data file number and block number or the tablespace and data block address (DBA).
RMAN can only perform complete recovery of individual blocks.

By default, if Flashback Database is enabled,
then RMAN searches the flashback logs for good copies of corrupt blocks.
By default, if the target database exists in a Data Guard environment,
then the RECOVER ... BLOCK command can automatically retrieve blocks from a physical standby database to a primary database and vice versa.
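
Individual blocks can also be repaired explicitly; a sketch (file and block numbers are illustrative):

RMAN> RECOVER DATAFILE 8 BLOCK 13 DATAFILE 2 BLOCK 199;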

###

QUESTION 75 / 144
You are connected to a pluggable database (PDB) as a common user with the sysdba privilege.
The PDB is open and you issue the shutdown immediate
command.
What is the outcome?
A. The PDB is closed.
B. The PDB is placed in mount state.
C. The command executes only if the common user is granted the set container privilege for the PDB.
D. The command results in an error because the PDB can be shut down only by a local user.

###

A <- when connected to a PDB, the SHUTDOWN IMMEDIATE statement is equivalent to ALTER PLUGGABLE DATABASE CLOSE: it closes the PDB.
B

The statement SHUTDOWN IMMEDIATE when connected to a PDB is equivalent to ALTER PLUGGABLE DATABASE CLOSE. It closes the PDB.
When the current container is a PDB, the SQL*Plus SHUTDOWN command closes the PDB.
After the SHUTDOWN command is issued on a PDB successfully, it is in mounted mode
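
A minimal session sketch (pdb1 is a hypothetical PDB name):

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> SHUTDOWN IMMEDIATE                      -- same effect as ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE
SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> SELECT name, open_mode FROM v$pdbs;     -- pdb1 now shows MOUNTED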


QUESTION 76 / 145
Which three statements are true about the SQL*Loader utility?
A. It can be used to load data from multiple external files into multiple tables.
B. It can be used to extract and reorganize data from external files, and then load it into a table.
C. It can be used to load data from external files using direct path only. (direct path is not exclusive)*
D. It can be used to create tables using data that is stored in external files.
E. It can be used to generate unique sequential values in specified columns while loading data.

###

A B E

•    Load data from multiple datafiles during the same load session.
A    Load data into multiple tables during the same load session.
•    Specify the character set of the data.
•    Selectively load data (you can load records based on the records' values).
B    Manipulate the data before loading it, using SQL functions.
E    Generate unique sequential key values in specified columns.
•    Use the operating system's file system to access the datafiles.
•    Load data from disk, tape, or named pipe.
•    Generate sophisticated error reports, which greatly aids troubleshooting.
•    Load arbitrarily complex object-relational data.
•    Use secondary datafiles for loading LOBs and collections.
•    Use either conventional or direct path loading.
    While conventional path loading is very flexible, direct path loading provides  
    superior loading performance.*

External Table Loads

An external table load creates an external table for data that is contained in a datafile.
The load executes INSERT statements to insert the data from the datafile into the target table.

The advantages of using external table loads over conventional path and direct path loads are as follows:

An external table load attempts to load datafiles in parallel.
If a datafile is big enough, it will attempt to load that file in parallel.

An external table load allows modification of the data being loaded by using SQL functions and PL/SQL functions as part of the INSERT statement that is used to create the external table.


###

QUESTION 77 / 146

While performing database backup to tape via the media manager interface, you notice that tape streaming is not happening because RMAN is not sending data blocks fast enough to the tape drive.

Which two actions would you take for tape streaming to happen during the backup?

A.    Configure backup optimization.
B.    Configure the channel to increase maxopenfiles.
C.    Configure a backup policy by using incremental backups.
D.    Configure the channel to increase capacity with the rate parameter.
E.    Configure the channel to adjust the tape buffer size by using the BLKSIZE option.
F.    Configure large_pool, if not done already. Alternatively, you can increase the size of LARGE_POOL.

###

BE

Write Phase for System Backup Tape (SBT)
When backing up to SBT, RMAN gives the media management software a stream of bytes and associates a unique name with this stream. All details of how and where that stream is stored are handled entirely by the media manager. Thus, a backup to tape involves the interaction of both RMAN and the media manager.
RMAN Component of the Write Phase for SBT
The RMAN-specific factors affecting the SBT write phase are analogous to the factors affecting disk reads. In both cases, the buffer allocation, slave processes, and synchronous or asynchronous I/O affect performance.
Allocation of Tape Buffers
If you back up to or restore from an SBT device, then by default the database allocates four buffers for each channel for the tape writers (or reads if restoring data as shown in Table 23-1). The size of the tape I/O buffers is platform-dependent. You can change this value with the PARMS and BLKSIZE parameters of the ALLOCATE CHANNEL or CONFIGURE CHANNEL command.
Tuning the Read Phase
RMAN may not be able to send data blocks to the output device fast enough to keep it occupied. For example, during an incremental backup, RMAN only backs up blocks changed since a previous data file backup as part of the same strategy. If you do not turn on block change tracking, then RMAN must scan whole data files for changed blocks, and fill output buffers as it finds such blocks. If few blocks changed, and if RMAN is making an SBT backup, then RMAN may not fill output buffers fast enough to keep the tape drive streaming.
You can improve backup performance by adjusting the level of multiplexing, which is number of input files simultaneously read and then written into the same RMAN backup piece. The level of multiplexing is the minimum of the MAXOPENFILES setting on the channel and the number of input files placed in each backup set. The following table makes recommendations for adjusting the level of multiplexing.
Table 23-3 Adjusting the Level of Multiplexing

ASM   Striped Disk     Recommendation
No    Yes              Increase the level of multiplexing: determine which is the minimum, MAXOPENFILES or the number of files in each backup set, and then increase this value. This increases the rate at which RMAN fills tape buffers, making it more likely that buffers are sent to the media manager fast enough to maintain streaming.
No    No               Increase the MAXOPENFILES setting on the channel.
Yes   Not applicable   Set the MAXOPENFILES parameter on the channel to 1 or 2.
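
Both recommendations map to channel settings; a sketch (the buffer size and file count are illustrative):

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt BLKSIZE=1048576 MAXOPENFILES=12;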


QUESTION 78 / 147
You are administering a multitenant container database (CDB) cdb1.
Examine the command and its output:
SQL> show parameter file
NAME                   TYPE     VALUE
---------------------- -------- ------
db_create_file_dest    string
db_file_name_convert   string
db_files               integer  200
You verify that sufficient disk space is available and that no file currently exists in the '/u01/app/oracle/oradata/cdb1/salesdb' location.
You plan to create a new pluggable database (PDB) by using the command:
SQL> CREATE PLUGGABLE DATABASE salespdb
ADMIN USER salesadm IDENTIFIED BY password
ROLES=(dba)
DEFAULT TABLESPACE sales
DATAFILE '/u01/app/oracle/oradata/cdb1/salesdb/sales01.dbf' SIZE 250M AUTOEXTEND ON
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/', '/u01/app/oracle/oradata/cdb1/salesdb/')
STORAGE (MAXSIZE 2G)
PATH_PREFIX='/u01/app/oracle/oradata/cdb1/SALESPDB';

Which statement is true?

A. SALESPDB is created and is in mount state.
B. PDB creation fails because the DB_FILE_NAME_CONVERT parameter is not set in the CDB.
C. SALESPDB is created and is in read/write mode.
D. PDB creation fails because a default temporary tablespace is not defined for SALESPDB.

###

The correct answer is A (test the statement yourself: it creates the PDB, which is left in mount state).


QUESTION 79 / 148
You want to migrate your Oracle 11g database as a pluggable database (PDB) in a multitenant container database (CDB).
The following are the possible steps to accomplish this task:
1. Place all the user-defined tablespace in read-only mode on the source database.
2. Upgrade the source database to a 12c version.
3. Create a new PDB in the target container database.
4. Perform a full transportable export on the source database with the VERSION parameter set to 12 using the expdp utility.
5. Copy the associated data files and export the dump file to the desired location in the target database.
6. Invoke the Data Pump import utility on the new PDB database as a user with the DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable
import options.
7. Synchronize the PDB on the target container database by using the DBMS_PDS.SYNC_ODB function.
Identify the correct order of the required steps

A. 2, 1, 3, 4, 5, 6
B. 1, 3, 4, 5, 6, 7
C. 1, 4, 3, 5, 6, 7
D. 2, 1, 3, 4, 5, 6, 7
E. 1, 5, 6, 4, 3, 2


###

C

Steps

1. Create a directory in source database to store the export dump files.
2. Set the user and application tablespace in the source database as READ ONLY
3. Export the source database using expdp with parameters version=12.0, transportable=always and full=y
4. Copy the dumpfile and datafiles for tablespaces containing user /application data.
5. Create a new PDB in the destination CDB using create pluggable database command.
6. Create a directory in the destination PDB pointing to the folder containing the dump file or create a directory for dump file and move the dump file there.
7. Create an entry in tnsnames.ora for the new PDB.
8. Import into the target using impdp with the FULL=Y and TRANSPORT_DATAFILES parameters. Make sure the account has the IMP_FULL_DATABASE role.
9. Restore the tablespaces to READ-WRITE in source database.


http://www.oracle.com/technetwork/database/upgrade/upgrading-oracle-database-wp-12c-1896123.pdf
Upgrading to Oracle Database 12c Examples of Using Full Transportable Export/Import
page 11
1 – Set user tablespaces in the source database to READ ONLY.
2 – From the Oracle Database 11g Release 2 (11.2.0.3) environment, export the metadata and any data residing in administrative tablespaces
from the source database using the FULL=Y and TRANSPORTABLE=ALWAYS parameters.
3 – Copy the tablespace data files from the source system to the destination system.
4 – Create a CDB on the destination system, including a PDB into which you will import the source database.
5 – In the Oracle Database 12c environment, connect to the pre-created PDB and import the dump file.


###

QUESTION 80 / 149
You want to consolidate databases for the CRM, ERP, and SCM applications by migrating them to pluggable databases (PDBs).
You have already created a test system to support the consolidation of databases in a multitenant container database (CDB) that has multiple PDBs.
What is the easiest way to perform capacity planning for consolidation?

A. capturing the most resource-intensive SQL statements in a SQL Tuning Set on the production system and using the SQL Performance Analyzer on the test system
B. capturing the workload on the production system and replaying the workload for one PDB at a time on the test system
C. capturing the workload on the production system and using Consolidated Database Replay to replay the workload of all production systems simultaneously for all PDBs
D. capturing the most resource-intensive SQL statements in a SQL Tuning Set on the production system and using the SQL Tuning Advisor on the test system

###

Answer: C
You want to consolidate databases for the CRM, ERP, and SCM applications by migrating them to pluggable databases
You can use Consolidated Database Replay to combine the captured workloads
from the three applications and replay them concurrently on PDBs

###

QUESTION 81 / 150
Identify three benefits of unified auditing.
A. It helps to reduce disk space used to store an audit trail in a database
B. It guarantees zero-loss auditing.
C. It reduces overhead on a database caused by auditing, by having a single audit trail.
D. An audit trail cannot be modified because it is read-only.
E. It automatically audits Recovery Manager (RMAN) events.

###

Correct Answer: CDE

###

QUESTION 82 / 65
Examine the backup requirement for your company:
1) Every Sunday, a backup of all used data file blocks is performed.
2) Every Wednesday and Friday, a backup of all the changed blocks since last Sunday's backup is performed.
3) On all the other days, a backup of only the changed blocks since the last day's backup is performed.
Which backup strategy satisfies the requirements?

A. level 0 backup on Sunday, cumulative incremental backup on Wednesday and Friday, and differential incremental level 1 backup on all the other days
B. level 0 backup on Sunday, differential incremental backup on Wednesday and Friday, and cumulative incremental level 1 backup on all the other days
C. full database backup on Sunday, level 0 backup on Wednesday and Friday, and cumulative incremental level 1 backup on all the other days
D. full database backup on Sunday, level 0 backup on Wednesday and Friday, and differential incremental level 1 backup on all the other days

###

Answer: A

•    Sunday
An incremental level 0 backup backs up all blocks that have ever been in use in this database.
•    Monday - Saturday
A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday through Saturday backs up all blocks changed since the Sunday backup.
•    The cycle is repeated for the next week.
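
The strategy in answer A maps to these RMAN commands (a sketch):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # Sunday: all used data file blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # Wednesday and Friday: changes since Sunday's level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # other days: differential, changes since the last backup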

###

QUESTION 83 / 66
Your database is running in archivelog mode. Examine the initialization parameters you plan to set for your database instance.
LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arch'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arch'
LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arch MANDATORY'
Identify the statement that correctly describes these settings.

###

A. An online redo log file is not allowed to be overwritten
   if the archived log file cannot be created in any of the log_archive_dest_N destinations.

B. Optional destinations cannot use the fast recovery area.

C. An online redo log file is not allowed to be overwritten if the archived log file cannot be created
   in the location specified for log_archive_dest_4.

D. These settings work only if log_archive_min_succeed_dest is set to a value of 4.


correct Answer: everyone says A

###
To me it seems C.

A is correct only if LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 4 ----
C is correct without knowing the value of LOG_ARCHIVE_MIN_SUCCEED_DEST:
if archiving to a MANDATORY destination fails, the online redo logs cannot be overwritten.

The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter specifies the number of destinations
that must be archived successfully before the logs can be overwritten.

###

QUESTION 83 bis / 66
Your database is running in archivelog mode. Examine the parameters for your database instance:
LOG_ARCHIVE_DEST_1 ='LOCATION=/disk1/arch MANDATORY'
LOG_ARCHIVE_DEST_2 ='LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_3 ='LOCATION=/disk3/arch'
LOG_ARCHIVE_DEST_4 ='LOCATION=/disk4/arch'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
While the database is open, you notice that the destination set by the log_archive_dest_1 parameter is not
available. All redo log groups have been used.
What happens at the next log switch?

A. The database instance hangs and the redo log files are not overwritten.
B. The archived redo log files are written to the fast recovery area until the mandatory destination is made
available.
C. The database instance is shutdown immediately.
D. The destination set by the log_archive_dest parameter is ignored and the archived redo log files are
created in the next two available locations to guarantee archive log success.

###

A

LOG_ARCHIVE_MIN_SUCCEED_DEST=n determines the minimum number of destinations
to which the database must successfully archive a redo log group before it can reuse online log files

###

QUESTION 84 / 67
Which three statements correctly describe the relationship amongst jobs, programs, and schedules within the Oracle Job Scheduler?
A. A job is specified as part of a program definition.
B. A program can be used in the definition of multiple jobs.
C. A program and job can be specified as part of a schedule definition.
D. A program and schedule can be specified as part of a job definition.
E. A program and window can be specified as part of a job definition.

###

BCD
A is wrong – a job and a program are separate entities
E is wrong – a window can't be specified in a job definition
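
A sketch of how a program and a schedule combine in a job definition (all names are hypothetical):

SQL> BEGIN
       DBMS_SCHEDULER.CREATE_JOB(
          job_name      => 'nightly_stats_job',
          program_name  => 'gather_stats_prog',
          schedule_name => 'nightly_sched',
          enabled       => TRUE);
     END;
     /

The same program_name could be reused by any number of other jobs (statement B).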

###

QUESTION 85 / 68
Which two statements describe the relationship between a window, a resource plan, and a job class?
A. A window specifies a resource plan that will be activated when that window becomes active.
B. A window specifies a job class that will be activated when that window becomes active.
C. A job class specifies a window that will be open when that job class becomes active.
D. A window in association with a resource plan controls a job class allocation.
E. A window in association with a job class controls a resource allocation.

###

AE

A
Each window specifies the resource plan to activate when the window opens (becomes active).
E
Each job class specifies a resource consumer group or a database service.
A WINDOW is an INTERVAL.

Windows work with job classes to control resource allocation.
Each window specifies the resource plan to activate when the window opens (becomes active),
and each job class specifies a resource consumer group or specifies a database service, which can map to a consumer group.
A job that runs within a window, therefore, has resources allocated to it according to the consumer group of its job class and the resource plan of the window.

###

QUESTION 86 / 69
Which two are prerequisites for creating a backup-based duplicate database?

A. connecting to the target database and a recovery catalog to execute the duplicate command
B. creating a password file for an auxiliary instance
C. connecting to an auxiliary instance
D. matching the database identifier (DBID) of the source database and the duplicate database
E. creating an SPFILE for the target database

###

Not A -> you connect to the original database (TARGET); a recovery catalog is not required
B -> step 1
C -> we must connect to the auxiliary instance to perform the restore
Not D -> the DBIDs cannot be the same; no doubt about this one
Not E -> the original database (TARGET) is already up, so what would creating an SPFILE be for?


Prerequisites Common to All Forms of Duplication
RMAN must be connected as AUXILIARY to the instance of the duplicate database.
The instance of the duplicate database is called the auxiliary instance.
The auxiliary instance must be started with the NOMOUNT option.

    Step 1: Create an Oracle Password File for the Auxiliary Instance
    Step 2: Establish Oracle Net Connectivity to the Auxiliary Instance
    Step 3: Create an Initialization Parameter File for the Auxiliary Instance
    Step 4: Start the Auxiliary Instance with SQL*Plus

A password file is required for the auxiliary instance only if one of the following conditions is true:
You use the RMAN client on a host other than the destination host.
You duplicate from an active database.

The auxiliary instance must be available through Oracle Net if either of the following conditions is met:

You use the RMAN client on a host other than the destination host.
You duplicate from an active database.
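
A sketch of a backup-based duplication, run after starting the auxiliary instance in NOMOUNT (net service names are hypothetical):

$ rman TARGET sys@prod AUXILIARY sys@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb NOFILENAMECHECK;

RMAN assigns the duplicate its own DBID, which is why answer D is wrong.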

###

QUESTION 87 / 70
Which three statements are true about Oracle Secure Backup (OSB)?
A. It can encrypt client data written to tape.
B. It can be used to take image copy backups to tape.
C. It can be used to manage tape backup and restore operations for multiple databases.
D. It can be used along with an RMAN recovery catalog for maintaining records of backups in a tape library.
E. It can be used to perform file system backups at the file, directory, file system, or raw partition level.

###

ACE

###


QUESTION 88 / 71
LDAP_DIRECTORY_SYSAUTH is set to YES.
Users requiring DBA access have been granted the sysdba enterprise role in Oracle Internet Directory (OID).
SSL has been configured for the database and OID, and the password file has been configured for the database.
User scott with the sysdba privilege tries to connect remotely using this command:
$ sqlplus scott/tiger@DB01 AS SYSDBA, where DB01 is the net service name.
Which authentication method will be attempted first?
A. authentication by password file
B. authentication by using certificates overSSL
C. authentication by using the Oracle Internet Directory
D. authentication by using the local OS of the database server

###

A
If the database is configured to use a password file for remote authentication, Oracle Database checks the password file first.

###

QUESTION 89 / 72
Your database is running in archivelog mode and regular nightly backups are taken.
Due to a media failure, the current online redo log group, which has one
member, is lost and the instance is aborted.
Examine the steps to recover the online redo log group and move it to a new location.

1.Restore the corrupted redo log group.
2.Restore the database from the most recent database backup.
3.Perform an incomplete recovery.
4.Relocate the member of the damaged online redo log group to a new location.
5.Open the database with the resetlogs option.
6. Issue a checkpoint and clear the log.
Identify the required steps in the correct order.

A. 1, 3, 4, 5
B. 6, 3, 4, 5
C. 2, 3, 4, 5
D. 6, 4, 3, 5

###

Correct answer: C

The instance was aborted, so step 6 (checkpoint and clear the log) is not possible (not B, not D).
Difference between 1 and 2:
all members of the log group are lost.
--The database should be in the mount state for v$log access to retrieve the SCN
--Restore the ENTIRE database to the determined SCN
Hence step 2, and the correct answer is C.

###

QUESTION 90 / 73
You are administering a multitenant container database (CDB) that contains two pluggable databases (PDBs),
pdb1 and pdb2. You are connected to pdb2 as a common user with DBA privileges.
The statistics_level parameter is PDB modifiable.
As the user sys, execute the following command on pdb2:

SQL> ALTER SYSTEM SET STATISTICS_LEVEL=ALL SID='*' SCOPE=SPFILE;
Which statement is true about the result of this command?

A. The statistics_level parameter is set to all when any of the PDBs is reopened.
B. The statistics_level parameter is set to all only for PDB2 when it is reopened.
C. The statistics_level parameter is set to all when the root database is restarted.
D. The statement is ignored because there is no SPFILE for a PDB.

###

B is correct!
Not C:
when you restart the CDB (the PDB stays in mount), the parameter is set, for example, to TYPICAL for everyone,
and only when you reopen the PDB does it become ALL.

###

QUESTION 91 / 74
Examine the command to back up the ASM metadata:
ASMCMD>md_backup /backup/ASM_backup
In which three situations can you use the backup?

A. when one or more disks in an ASM disk group are lost
B. when the data file on an ASM disk group gets corrupted
C. when one of the disks in a disk group is accidentally unplugged
D. when one or more file directory paths are accidentally deleted from an ASM disk group
E. when all the ASM disk groups for the ASM instance are lost

###



ADE <<< The key word is METADATA: it does not protect against corruptions or disk losses. It creates a backup file that allows the creation
of one or more DISK GROUPS.


The MD_BACKUP command creates a backup file containing metadata for one or more disk groups. By default all the mounted disk groups are included in the backup file.
D
The backed-up disk group metadata can be directly restored upon failure of the disk, or asmcmd can generate a script that is later used to re-create the disk groups and all of their dependencies from SQL*Plus.
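
A sketch of restoring the metadata for a lost disk group from that backup (the disk group name is hypothetical):

ASMCMD> md_restore --full -G data /backup/ASM_backup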

QUESTION 92 / 75
You are administering a database that supports data warehousing workload and Is running in noarchivelog mode. You use RMAN to perform a level 0 backup on
Sundays and level 1 Incremental backups on all the other days of the week.
One of the data files is corrupted and the current online redo log file is lost because of a media failure.
You want to recover the data file.
Examine the steps involved in the recovery process:

1.Shut down the database instance.
2.Start up the database instance in nomount state.
3.Mount the database.
4.Take the data file offline.
5.Put the data file online.
6.Restore the control file.
7.Restore the database.
8.Restore the data file.
9.Open the database with the resetlog option.
10.Recover the database with the noredo option.
11.Recover the data file with the noredo option.
Identify the required steps in the correct order.
A. 4, 8, 11, 5
B. 1, 3, 8, 11, 9
C. 1, 2, 6, 3, 7, 10, 9
D. 1, 3, 7, 10, 9
E. 1, 2, 6, 3, 8, 11, 9

###

C (the database must be restored because not only the data file but also the redo log was lost)??
Remember that the db was in noarchivelog mode.
###

QUESTION 93 / 76
Examine the commands:
SQL> ALTER SESSION SET RECYCLEBIN = ON;
Session altered.
SQL> DROP TABLE emp; --(First EMP table)
Table dropped.
SQL> CREATE TABLE emp (id NUMBER CONSTRAINT emp_id_idx PRIMARY KEY, name VARCHAR2 (15), salary NUMBER(7,2) );
Table created.
You then execute multiple INSERT statements to insert rows into EMP table and drop the table again:
SQL> DROP TABLE emp; -- (Second EMP table)
Table dropped.
SQL> FLASHBACK TABLE emp TO BEFORE DROP;
Which statement is true about the FLASHBACK command?
A. It recovers the structure, data, and indexes of the first emp table.
B. It recovers only the structure of the second emp table.
C. It returns an error because two tables with the same name exist in the recycle bin.
D. It recovers the structure, data, and indexes of the second emp table.

###

D
(the PDFs report A, but that makes no sense)
If you know that the employees table has been dropped multiple times, and you want to retrieve the oldest version, query the USER_RECYCLEBIN table to determine the system-generated name, and then use that name in the FLASHBACK TABLE statement. (System-generated names in your database will differ from those shown here.)
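
A sketch of recovering an older copy by its system-generated name (the BIN$ name is a placeholder for the value returned by the query):

SQL> SELECT object_name, original_name, droptime FROM user_recyclebin;
SQL> FLASHBACK TABLE "BIN$..." TO BEFORE DROP RENAME TO emp_old;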

###

QUESTION 94 / 77
Which three statements are true about the keystore storage framework for transparent data encryption?
A. It facilitates and helps to enforce keystore backup requirements.
B. It handles encrypted data without modifying applications.
C. It enables a keystore to be stored only in a file on a file system.
D. It enables separation of duties between the database administrator and the security administrator.
E. It transparently decrypts data for the database users and applications that access this data.
F. It helps to track encryption keys and implement requirements such as keystore password rotation and master encryption key reset or re-key operations

###

BDE
NOT A (refers to keystore backup requirements)
NOT C (the framework simplifies encryption management; the keystore is not limited to a file on a file system)
NOT F (same as above)


•    As a security administrator, you can be sure that sensitive data is encrypted and therefore safe in the event that the storage media or data file is stolen.
•    Using TDE helps you address security-related regulatory compliance issues.
•    You do not need to create auxiliary tables, triggers, or views to decrypt data for the authorized user or application. Data from tables is transparently decrypted for the database user and application. An application that processes sensitive data can use TDE to provide strong data encryption with little or no change to the application.
•(E)    Data is transparently decrypted for database users and applications that access this data. Database users and applications do not need to be aware that the data they are accessing is stored in encrypted form.
•    You can encrypt data with zero downtime on production systems by using online table redefinition or you can encrypt it offline during maintenance periods. (See Oracle Database Administrator's Guide for more information about online table redefinition.)
•(B)    You do not need to modify your applications to handle the encrypted data. The database manages the data encryption and decryption.
•    Oracle Database automates TDE master encryption key and keystore management operations. The user or application does not need to manage TDE master encryption keys.

###

QUESTION 95 / 78
You want to reduce fragmentation and reclaim unused space for the sales table but not its dependent objects.
During this operation, you want to ensure the following:
. Long-running queries are not affected.
. No extra space is used.
. Data manipulation language (DML) operations on the table succeed at all times throughout the process.
. Unused space is reclaimed both above and below the high water mark.

Which ALTER TABLE option would you recommend?

B. DEALLOCATE UNUSED
C. SHRINK SPACE CASCADE
D. SHRINK SPACE COMPACT
E. ROW STORE COMPRESS BASIC

D

Segment shrink reclaims unused space both above and below the high water mark.
In contrast, space deallocation reclaims unused space only above the high water mark.
In shrink operations, by default, the database compacts the segment, adjusts the high water mark, and releases the reclaimed space.
COMPACT -> use when you have long-running queries
CASCADE -> extends the operation to dependent objects
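A minimal sketch of the recommended approach for the sales table (assuming row movement must be enabled first):

```sql
-- Shrink requires row movement on the table:
ALTER TABLE sales ENABLE ROW MOVEMENT;

-- COMPACT only defragments and moves rows below the high water mark; the HWM
-- itself is not yet adjusted, so long-running queries and concurrent DML are
-- not disturbed:
ALTER TABLE sales SHRINK SPACE COMPACT;

-- Later, during a quiet period, complete the shrink and release the space:
ALTER TABLE sales SHRINK SPACE;
```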

###

QUESTION 96 / 79
You have a production Oracle 12c database running on a host.
You want to install and create databases across multiple new machines that do not have any Oracle database software installed.
You also want the new databases to have the same directory structure and components as your existing 12c database.

The steps in random order:

1.Create directory structures similar to the production database on all new machines.
2.Create a response file for Oracle Universal Installer (OUI) with the same configurations as the production database.
3.Create a database clone template for the database.
4.Run the Database Configuration Assistant (DBCA) to create the database.
5.Run OUI in graphical mode on each machine.
6.Run OUI in silent mode using the OUI response file.

Identify the required steps in the correct sequence to achieve
the requirement with minimal human intervention.

A. 2, 1, 6, and 4
B. 2, 3, and 6
C. 3, 1, 5, and 6
D. 2, 3, 1, and 6
E. 1, 5, and 4

###

D
This seems correct to me.
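The sequence 2, 3, 1, 6 can be sketched as follows (paths, database names, and response-file names are illustrative assumptions, not from the question):

```shell
# Step 2: create an OUI response file matching the production installation
./runInstaller -record -destinationFile /tmp/install_db.rsp

# Step 3: create a clone template from the production database
dbca -silent -createCloneTemplate -sourceDB prod -templateName prod_clone.dbc

# Step 1: recreate the production directory structure on each new machine, then
# Step 6: run OUI in silent mode with the response file on each machine
./runInstaller -silent -responseFile /tmp/install_db.rsp
```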

###

QUESTION 97 / 80
For which two requirements would you use the Database Resource Manager?

A. limiting the CPU used per database call
B. specifying the maximum number of concurrent sessions allowed for a user (LICENSE_MAX_SESSIONS)
C. specifying the amount of private space a session can allocate in the shared pool of the SGA
D. limiting the degree of parallelism of operations performed by a user or group of users
E. specifying an idle time limit that applies to sessions that are idle and blocking other sessions

###

DE
How Does the Database Resource Manager Address These Problems?
The Oracle Database Resource Manager helps to overcome these problems by allowing the database more control over how machine resources are allocated.
Specifically, using the Database Resource Manager, you can:
•    Guarantee certain users a minimum amount of processing resources regardless of the load on the system and the number of users
•    Distribute available processing resources by allocating percentages of CPU time to different users and applications. In a data warehouse, a higher percentage may be given to ROLAP (relational on-line analytical processing) applications than to batch jobs.
•    Limit the degree of parallelism of any operation performed by members of a group of users (D)
•    Create an active session pool. This pool consists of a specified maximum number of user sessions allowed to be concurrently active within a group of users. Additional sessions beyond the maximum are queued for execution, but you can specify a timeout period, after which queued jobs will terminate.
•    Allow automatic switching of users from one group to another group based on administrator defined criteria. If a member of a particular group of users creates a session that executes for longer than a specified amount of time, that session can be automatically switched to another group of users with different resource requirements.
•    Prevent the execution of operations that the optimizer estimates will run for a longer time than a specified limit
•    Create an undo pool. This pool consists of the amount of undo space that can be consumed by a group of users.
•    Limit the amount of time that a session can be idle. This can be further defined to mean only sessions that are blocking other sessions. (E)
•    Configure an instance to use a particular method of allocating resources. You can dynamically change the method, for example, from a daytime setup to a nighttime setup, without having to shut down and restart the instance.
•    Allow the cancellation of long-running SQL statements and the termination of long-running sessions.
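A sketch of directives covering D and E (plan and group names are illustrative; the OTHER_GROUPS directive is required for the plan to validate):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'DAY_PLAN', comment => 'daytime plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING', comment => 'reporting users');
  -- (D) limit the degree of parallelism for the group;
  -- (E) terminate sessions idle for 5 minutes while blocking other sessions
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DAY_PLAN',
    group_or_subplan         => 'REPORTING',
    comment                  => 'limit DOP and idle blockers',
    parallel_degree_limit_p1 => 4,
    max_idle_blocker_time    => 300);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'required catch-all directive');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```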

###

QUESTION 98 / 81
Your multitenant container database (CDB) contains multiple pluggable databases (PDBs).
You execute the command to create a common user:

SQL> CREATE USER c##a_admin
IDENTIFIED BY password
DEFAULT TABLESPACE users
QUOTA 100M ON users
TEMPORARY TABLESPACE temp;

Which statement is true about the execution of the command?

A. The common user is created in the CDB and all the PDBs, and uses the users and temp tablespaces of the CDB to store schema objects.
B. The command succeeds only if all the PDBs have the users and temp tablespaces.
C. The command gives an error because the container=all clause is missing.
D. The command succeeds and sets the default permanent tablespace of a PDB as the default tablespace for the c##a_admin user if the users tablespace does not exist in that PDB.

###

B

SYS@cdb>CREATE USER c##b_admin IDENTIFIED BY password
  2  DEFAULT TABLESPACE users QUOTA 100M ON users
  3  TEMPORARY TABLESPACE temp;
CREATE USER c##b_admin IDENTIFIED BY password
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB2
ORA-00959: tablespace 'USERS' does not exist

###
    
QUESTION 99 / 82

Which two statements are true about the Automatic Diagnostic Repository (ADR)?
A. The ADR base is shared across multiple instances.
B. The ADR base keeps all diagnostic information in binary format.
C. The ADR can be used to store statspack snapshots to diagnose database performance issues.
D. The ADR can be used for problem diagnosis even when the database instance is down.
E. The ADR is used to store Automatic Workload Repository (AWR) snapshots.

###

AD

The data is then stored in the Automatic Diagnostic Repository (ADR)—a file-based
repository outside the database—where it can later be retrieved by incident number and analyzed.
D

The ADR is a file-based repository for database diagnostic data such as traces, dumps, the alert log, health monitor reports, and more. It has a unified directory structure across multiple instances and multiple products. Beginning with Release 11g, the database, Automatic Storage Management (ASM), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own home directory within the ADR. For example, in an Oracle Real Application Clusters environment with shared storage and ASM, each database instance and each ASM instance has an ADR home directory. ADR's unified directory structure, consistent diagnostic data formats across products and instances, and a unified set of tools enable customers and Oracle Support to correlate and analyze diagnostic data across multiple instances
A
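Because the ADR is file-based, the ADRCI command-line tool can browse it even when the instance is down (the ADR home path is illustrative):

```shell
adrci> show homes
adrci> set home diag/rdbms/orcl/orcl
adrci> show alert -tail 20
adrci> show incident
```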


###

QUESTION 100 / 83
user_data is a non encrypted tablespace containing tables with data.
You must encrypt all data in this tablespace.
Which three methods can do this?
A. Use Data Pump.
B. Use ALTER TABLE. . .MOVE
C. Use CREATE TABLE AS SELECT
D. Use alter tablespace to encrypt the tablespace after enabling row movement on all its tables.
E. Use alter tablespace to encrypt the tablespace.

###

A, B, and C are correct (see http://www.oracle.com/technetwork/testcontent/o19tte-086996.html).
Finally, note that you can only create encrypted tablespaces; you cannot modify existing tablespaces to encrypt them. So, when you need existing data in encrypted tablespaces, the best solution is to first create encrypted tablespaces and then move the objects from the unencrypted tablespaces to them.

The following are restrictions for encrypted tablespaces:

You cannot encrypt an existing tablespace with an ALTER TABLESPACE statement. However, you can use Data Pump or SQL statements such as CREATE TABLE AS SELECT or ALTER TABLE MOVE to move existing table data into an encrypted tablespace.

Encrypted tablespaces are subject to restrictions when transporting to another database. See "Limitations on Transportable Tablespace Use".

When recovering a database with encrypted tablespaces (for example after a SHUTDOWN ABORT or a catastrophic error that brings down the database instance), you must open the Oracle wallet after database mount and before database open, so the recovery process can decrypt data blocks and redo.
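A sketch of the move-to-encrypted-tablespace approach (tablespace, file, and table names are illustrative; the keystore must already be open):

```sql
-- Create an encrypted target tablespace:
CREATE TABLESPACE user_data_enc
  DATAFILE '/u01/oradata/orcl/user_data_enc01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- (B) move an existing table into it:
ALTER TABLE sales MOVE TABLESPACE user_data_enc;

-- (C) or copy it with CREATE TABLE AS SELECT:
CREATE TABLE sales_enc TABLESPACE user_data_enc AS SELECT * FROM sales;
```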

###

QUESTION 101 / 84
Which two statements are true about a common user?
A. A common user connected to a pluggable database (PDB) can exercise privileges across other PDBs.
B. A common user with the create user privilege can create other common users, as well as local users.
C. A common user can be granted only a common role.
D. A common user can have a local schema in a PDB.
E. A common user always uses the global temporary tablespace that is defined at the CDB level as the default temporary tablespace

###

BD
(other sources say ACE)

A common user with the necessary privileges can switch between PDBs.
A common user is a database user that has the same identity in the root and in every existing and future PDB.
Every common user can connect to and perform operations within the root, and within any PDB in which it has privileges.
common user with the appropriate privileges can switch between containers, a common user in the root can administer PDBs.
A common user can perform administrative tasks specific to the root or PDBs, such as plugging and unplugging PDBs, changing their state, or specifying the temporary tablespace for the multitenant container database (CDB).
Only common users who have the appropriate privileges can navigate between containers that belong to a CDB. For example, common users can perform the following operations across multiple PDBs:
Granting privileges to common users or common roles
Running an ALTER DATABASE statement that specifies the recovery clauses that apply to the entire CDB
Running an ALTER PLUGGABLE DATABASE statement to change the state of a given PDB while connected to the root.
A local user connected to a PDB can also change its state, given appropriate privileges.
A user can only perform common operations on a common role, for example, granting privileges commonly to the role, when the following criteria are met:
The user is a common user whose current container is root.
The user has the SET CONTAINER privilege granted commonly, which means that the privilege applies in all containers.
The user has the privilege controlling the ability to perform the specified operation, and this privilege has been granted commonly.
A privilege that is granted commonly can be used in every existing and future container.
Only common users can grant privileges commonly, and only if the grantee is common.

A common user can grant privileges to another common user or to a common role.

The grantor must be connected to the root and must specify CONTAINER=ALL in the GRANT statement.
Both system and object privileges can be commonly granted. (Object privileges become actual only with regard to the specified object.)
When a common user connects to or switches to a given container, this user's ability to perform various activities (such as creating a table) is controlled by both the commonly granted and locally granted privileges this user has.
Do not grant privileges to PUBLIC commonly.
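A sketch of the difference between a common and a local grant (user and PDB names are illustrative):

```sql
-- Connected to the root: a commonly granted privilege applies in all
-- existing and future containers:
GRANT CREATE SESSION TO c##a_admin CONTAINER=ALL;

-- Switch to one PDB and grant locally: the privilege, and any schema
-- objects c##a_admin creates there (answer D), exist only in that PDB:
ALTER SESSION SET CONTAINER = pdb1;
GRANT CREATE TABLE TO c##a_admin CONTAINER=CURRENT;
```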


###

QUESTION 102 / 85
You are administering a database that supports a data warehousing workload and is running in noarchivelog mode.
You use RMAN to perform a level 0 backup on
Sundays and level 1 incremental backups on all the other days of the week.
One of the data files is corrupted and the current online redo log file is lost because of a media failure.
Which action must you take for recovery?

A. Restore the data file, recover it by using the recover datafile noredo command, and use the resetlogs option to open the database.
B. Restore the control file and all the data files, recover them by using the recover database noredo command,
   and use the resetlogs option to open the database.
C. Restore all the data files, recover them by using the recover database command, and open the database.
D. Restore all the data files, recover them by using the recover database noredo command, and use the resetlogs option to open the database.

###

B

startup force nomount;
restore controlfile from autobackup;
alter database mount;
restore database;
recover database noredo;
alter database open resetlogs;

###

QUESTION 102 bis / 75

You are administering a database that supports a data warehousing workload and is running in NOARCHIVELOG mode. You use RMAN to perform a level 0 backup on Sundays and level 1 incremental backups on all the other days of the week.

One of the data files is corrupted and the current online redo log file is lost because of a media failure. You want to recover the data file.

Examine the steps involved in the recovery process:

1.    Shut down the database instance.
2.    Start up the database instance in nomount state.
3.    Mount the database.
4.    Take the data file offline.
5.    Put the data file online.
6.    Restore the control file.
7.    Restore the database.
8.    Restore the data file.
9.    Open the database with the resetlog option.
10.    Recover the database with the noredo option.
11.    Recover the data file with the noredo option.

Identify the required steps in the correct order.

A.    4, 8, 11, 5
B.    1, 3, 8, 11, 9
C.    1, 2, 6, 3, 7, 10, 9
D.    1, 3, 7, 10, 9
E.    1, 2, 6, 3, 8, 11, 9

###

C
First NOMOUNT, then restore the control file, mount, and restore the data files of the entire database.
Not E (it restores only the data file).
With the database in NOARCHIVELOG mode, the whole database must be recovered, not just the specific data file.
Not B, not D: we cannot simply mount the database; the control file and data files must be restored first.
Not A: the first step is a shutdown.

###

QUESTION 103 / 86
Which three statements are true about Oracle Restart?

A. It can be configured to automatically attempt to restart various components after a hardware or software failure.
B. While starting any components, it automatically attempts to start all dependencies first and in proper order.
C. It can be configured to automatically restart a database in case of normal shutdown of the database instance.
D. It can be used to only start Oracle components.
E. It runs periodic check operations to monitor the health of Oracle components.

###

ABE

A
Oracle Restart improves the availability of your Oracle database. When you install Oracle Restart,
various Oracle components can be automatically restarted after a hardware or software failure or whenever your database host computer restarts

B
Oracle Restart ensures that Oracle components are started in the proper order, in accordance with component dependencies.
E
Oracle Restart runs periodic check operations to monitor the health of these components.
If a check operation fails for a component, the component is shut down and restarted.
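Components managed by Oracle Restart are registered and controlled through the srvctl utility; a sketch (database name and Oracle home path are illustrative):

```shell
# Register a database with Oracle Restart, then start and monitor it:
srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1
srvctl start database -db orcl
srvctl status database -db orcl
srvctl config database -db orcl
```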

###

QUESTION 104 / 87
Examine the parameters for your database instance:
optimizer_adaptive_reporting_only Boolean FALSE
optimizer_capture_sql_plan_baselines Boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 12.1.0.1

Which three statements are true about the process of automatic optimization by using statistics feedback?

A. The optimizer automatically changes a plan during subsequent execution of a SQL statement if there is a huge difference in optimizer estimates and execution statistics.
B. The optimizer can re-optimize a query only once using cardinality feedback.
C. The optimizer enables monitoring for cardinality feedback after the first execution of a query.
D. The optimizer does not monitor cardinality feedback if dynamic sampling and multicolumn statistics are enabled.
E. After the optimizer identifies a query as a re-optimization candidate, statistics collected by the collectors are submitted to the optimizer.

###

ACD

C:    During the first execution of a SQL statement, an execution plan is generated as usual.
D:    if multi-column statistics are not present for the relevant combination of columns,
        the optimizer can fall back on cardinality feedback.

(not B)* Cardinality feedback. This feature, enabled by default in 11.2, is intended to improve plans for repeated executions.

optimizer_dynamic_sampling optimizer_features_enable

Dynamic sampling or multi-column statistics allow the optimizer to more accurately estimate selectivity of conjunctive predicates.

Note:

* OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer. Range of values. 0 to 10

Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to automatically improve plans for queries that are executed repeatedly, for which the optimizer does not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason for the misestimate, cardinality feedback may be able to help.
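One way to observe the mechanism in 12c (the SQL text filter is illustrative): after the first execution of a misestimated statement, its child cursor is flagged for re-optimization, and the next parse uses the collected statistics feedback.

```sql
SELECT sql_id, child_number, is_reoptimizable
FROM   v$sql
WHERE  sql_text LIKE 'SELECT * FROM sales%';
```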

Other sources say the answer is ACE.

###

QUESTION 105 / 88
RMAN is connected to the target database prod1 and an auxiliary instance in nomount state.
Examine the command to create a duplicate database:
RMAN> DUPLICATE TARGET DATABASE TO dup1
FROM ACTIVE DATABASE
NOFILENAMECHECK      <-- the hosts are different
PASSWORD FILE        <-- copies the password file
SPFILE;              <-- RMAN copies or restores the server parameter file of the source database to the default location for the auxiliary instance on the destination host
               
Which two statements are true about the execution of the duplicate command?
A. All archive redo log files are automatically copied to the duplicate database.
B. The duplicate database has the same directory structure as the source database.
C. The duplicate database is created by using the backups created during the execution of  the duplicate command.
D. The password file and SPFILE for the duplicate database dup1 are created in their respective default locations.
E. The duplicate database is created without using RMAN backups and prod1 is allowed to remain open during duplication.

###

BE
Here it seems there are three correct answers:
B D E
D might be wrong because, strictly speaking, the password file and SPFILE would be copied (not created) to their default destinations.

Not A -> another certainly wrong answer: the archived redo log files are copied only if needed for the SCN that is chosen automatically, not "all archived redo logs".
B correct ->
Not C -> we can rule this out: a full image copy over the network is used (since 11g), not backups created during the command.
D -> seems correct: the password file is copied by default for standby databases; for non-standby databases it is copied only if the PASSWORD FILE option is specified (our case). SPFILE indicates that RMAN copies the server parameter file from the source database to the duplicate database; no initialization parameters previously set in the duplicate database are used.
E correct -> see below; it seems the only one with some certainty.

active database duplication is
A duplicate database that is created over a network without restoring backups of the target database. This technique is an alternative to backup-based duplication.

RMAN automatically copies the server parameter file to the destination host,
restarts the auxiliary instance with the server parameter file,
copies all necessary database files and archived redo logs over the network to the destination host,
and recovers the database.
Finally, RMAN opens the database with the RESETLOGS option to create the online redo log. dupdb is the DB_NAME of the duplicate database.


 If duplicating a database on the same host as the source database, then make sure that NOFILENAMECHECK is not set
 This means the hosts are different.
 See also:

 Example - Duplicating to a Host with the Same Directory Structure (Active)

 DUPLICATE TARGET DATABASE
   TO dupdb
   FROM ACTIVE DATABASE
   SPFILE
   NOFILENAMECHECK;

Prerequisites Specific to Active Database Duplication

When you execute DUPLICATE with FROM ACTIVE DATABASE, at least one normal target channel and at least one AUXILIARY channel are required.

When you connect RMAN to the source database as TARGET, you must specify a password, even if RMAN uses operating system authentication. The source database must be mounted or open. If the source database is open, then archiving must be enabled. If the source database is not open, then it must have been shut down consistently.

When you connect RMAN to the auxiliary instance, you must provide a net service name. This requirement applies even if the auxiliary instance is on the local host.

The source database and auxiliary instances must use the same SYSDBA password, which means that both instances must have password files. You can create the password file with a single password so you can start the auxiliary instance and enable the source database to connect to it.

The DUPLICATE behavior for password files varies depending on whether your duplicate database acts as a standby database. If you create a duplicate database that is not a standby database, then RMAN does not copy the password file by default. You can specify the PASSWORD FILE option to indicate that RMAN should overwrite the existing password file on the auxiliary instance. If you create a standby database, then RMAN copies the password file to the standby host by default, overwriting the existing password file. In this case, the PASSWORD FILE clause is not necessary.

You cannot use the UNTIL clause when performing active database duplication.
RMAN chooses a time based on when the online data files have been completely copied,
so that the data files can be recovered to a consistent point in time.

You want the duplicate database filenames to be the same as the source database filenames, and if the databases are in different hosts, then you must specify NOFILENAMECHECK.



###

QUESTION 106 / 89
A user issues a query on the sales table and receives the following error:
ERROR at line 1:
ORA-01565: error in identifying file '/u0l/app/oracle/oradata/ORCL/temp01.dbf'
ORA-27037: unable to obtain file status
Which two actions would you take to recover the temporary tablespace?

A. Drop the temp01.dbf file, and then re-create the temp file.
B. Add a new temp file to the temporary tablespace and drop the temp01.dbf file.
C. Shut down the database instance, start up the database instance in mount state, create a new temporary tablespace, and then open the database.
D. Take the temporary tablespace offline, recover the missing temp file, and then bring the temporary tablespace online.
E. Create a new temporary tablespace and assign it as the default to the user.

###

BE
C and D are certainly wrong.
That leaves A, but the sequence should be offline, drop, and add (not create), and only when the original path is no longer available.

If for any reason your temporary tablespace becomes unavailable, you can also re-create it yourself. Since there
are never any permanent objects in temporary tablespaces, you can simply re-create them as needed. Here is an
example of how to create a locally managed temporary tablespace:
CREATE TEMPORARY TABLESPACE temp TEMPFILE
'/u01/dbfile/o12c/temp01.dbf' SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
If your temporary tablespace exists but the temporary data files are missing, you can simply add the temporary
data file(s) as shown here:
ALTER TABLESPACE temp ADD TEMPFILE
'/u01/dbfile/o12c/temp01.dbf' SIZE 500M REUSE;
If you have to restore your database to a location that didn’t previously exist (say, you had a complete failure and
the original mount points are now unavailable), you may need to first take offline and drop the original temporary
tablespace tempfile and then add the tempfile to the new directory. For example:
SQL> alter database tempfile '/u01/dbfile/o12c/temp01.dbf' offline;
SQL> alter database tempfile '/u01/dbfile/o12c/temp01.dbf' drop;
SQL> alter tablespace temp add tempfile '/u02/dbfile/o12c/temp01.dbf' size 500m;

###

QUESTION 107 / 90
Your database supports an online transaction processing (OLTP) workload in which one of the applications creates a temporary table for a session and performs transactions on it.
This consumes a lot of undo tablespace and is affecting undo retention.
Which two actions would you take to solve this problem?
A. Enable temporary undo for the database.
B. Enable undo retention guarantee.
C. Increase the size of the redo log buffer.
D. Enable Automatic Memory Management (AMM).
E. Increase the size of the temporary tablespace.

###

AE
Temporary undo records are stored in the database's temporary tablespaces and thus are not logged in the redo log. When temporary undo is enabled, some of the segments used by the temporary tablespaces store the temporary undo, and these segments are called temporary undo segments. When temporary undo is enabled, it might be necessary to increase the size of the temporary tablespaces to account for the undo records.
Enabling temporary undo provides the following benefits:
Temporary undo reduces the amount of undo stored in the undo tablespaces.
Less undo in the undo tablespaces can result in more realistic undo retention period requirements for undo records.
Temporary undo reduces the size of the redo log.
Performance is improved because less data is written to the redo log, and components that parse redo log records, such as LogMiner, perform better because there is less redo data to parse.
Temporary undo enables data manipulation language (DML) operations on temporary tables in a physical standby database with the Oracle Active Data Guard option. However, data definition language (DDL) operations that create temporary tables must be issued on the primary database.

B – wrong because this would make matters worse
C – wrong because it is not related to the issue of UNDO or TEMP
D – wrong because it is not related to the issue of UNDO or TEMP
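A sketch of the two actions (the temp file path and sizes are illustrative):

```sql
-- (A) Enable temporary undo for the database; undo generated by temporary
-- tables then goes to the temporary tablespace instead of the undo tablespace:
ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

-- (E) The temporary tablespace now also holds temporary undo, so it may
-- need to grow:
ALTER DATABASE TEMPFILE '/u01/oradata/orcl/temp01.dbf' RESIZE 2G;
```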

###

QUESTION 108 / 91
Which two statements are true about service creation for pluggable databases (PDBs)?
A. When a PDB is created, a service is automatically started in the instance with the same name as the PDB.
B. The default service that is automatically created by a database at the time of PDB creation can be dropped, provided a new additional service is created.
C. A database managed by Oracle Restart can have additional services created or existing services modified by using the srvctl utility for each PDB.
D. Only a common user can create additional services for a PDB.
E. When a PDB is created, a service with the same name as the PDB is created in the PDB.

###

CE

Not A: the service is not started, it is created (and in the PDB, not the instance).
Not B: the default PDB service must not be dropped.
Not D!
Creating, Modifying, or Removing a Service for a PDB
You can create, modify, or remove a service with a PDB property in the following ways:
If your single-instance database is being managed by Oracle Restart or your Oracle RAC database is being managed by Oracle Clusterware, then use the Server Control (SRVCTL) utility to create, modify, or remove the service.
To create a service for a PDB using the SRVCTL utility, use the add service command and specify the PDB in the -pdb parameter. If you do not specify a PDB in the -pdb parameter when you create a service, then the service is associated with the root.
To modify the PDB property of a service using the SRVCTL utility, use the modify service command and specify the PDB in the -pdb parameter. To remove a service for a PDB using the SRVCTL utility, use the remove service command.
You can use other SRVCTL commands to manage the service, such as the start service and stop service commands, even if they do not include the -pdb parameter.
The PDB name is not validated when you create or modify a service with the SRVCTL utility.
However, an attempt to start a service with invalid PDB name results in an error.
If your database is not being managed by Oracle Restart or Oracle Clusterware, then use the DBMS_SERVICE package to create or remove a database service.
When you create a service with the DBMS_SERVICE package, the PDB property of the service is set to the current container. Therefore, to create a service with a PDB property set to a specific PDB using the DBMS_SERVICE package, run the CREATE_SERVICE procedure when the current container is that PDB. If you create a service using the CREATE_SERVICE procedure when the current container is the root, then the service is associated with the root.
You cannot modify the PDB property of a service with the DBMS_SERVICE package. However, you can remove a service in one PDB and create a similar service in a different PDB. In this case, the new service has the PDB property of the PDB in which it was created.
You can also use other DBMS_SERVICE subprograms to manage the service, such as the START_SERVICE and STOP_SERVICE procedures. Use the DELETE_SERVICE procedure to remove a service.
Oracle recommends using the SRVCTL utility to create and modify services. However, if you do not use the SRVCTL utility, then you can use the DBMS_SERVICE package.
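As a sketch of the two approaches described above (database, PDB, and service names are hypothetical):

```sql
-- Hypothetical names: database CDB1, PDB SALESPDB, service SALESSVC.
-- With Oracle Restart / Clusterware, use SRVCTL from the OS shell:
--   srvctl add service -db cdb1 -service salessvc -pdb salespdb
--   srvctl start service -db cdb1 -service salessvc
-- Without Oracle Restart / Clusterware, use DBMS_SERVICE while the
-- current container is the target PDB:
ALTER SESSION SET CONTAINER = salespdb;
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'salessvc',
                              network_name => 'salessvc');
  DBMS_SERVICE.START_SERVICE('salessvc');
END;
/
```

Because CREATE_SERVICE takes its PDB property from the current container, switching containers first is what ties the service to the PDB.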


###

QUESTION 109 / 92
You want to move your existing recovery catalog to another database.
Examine the steps:
1) Export the catalog data by using the Data Pump Export utility in the source database.
2) Create a recovery catalog user and grant the necessary privileges in the target database.
3) Create a recovery catalog by using the create catalog command.
4) Import the catalog data into the new recovery catalog by using the Data Pump Import utility in the target database.
5) Import the source recovery catalog schema by using the import catalog command.
6) Connect to the destination database.
7) Connect as catalog to the destination recovery catalog schema.
Identify the option with the correct sequence for moving the recovery catalog.

A. 1, 6, 4
B. 2, 3, 7, 5
C. 1, 2, 6, 4
D. 1, 2, 3, 6, 5

###

B

Because IMPORT CATALOG is a dedicated feature that imports the catalog directly from the source into the destination.



Create a recovery catalog on the destination database, but do not register any databases in the new catalog.
"Creating a Recovery Catalog" explains how to perform this task.
Import the source catalog into the catalog created in the preceding step.
"Importing a Recovery Catalog" explains how to perform this task.

Start RMAN and connect as CATALOG to the destination recovery catalog schema.
For example:
% rman
RMAN> CONNECT CATALOG 111cat@destdb;
Import the source recovery catalog schema, specifying the connection string for the source catalog.
For example, enter the following command to import the catalog owned by 102cat on database srcdb:
IMPORT CATALOG 102cat@srcdb;
A variation is to import metadata for a subset of the target databases registered in the source catalog.
You can specify the databases by DBID or database name, as shown in the following examples:
IMPORT CATALOG 102cat@srcdb DBID=1423241, 1423242;

###

QUESTION 110 / 93

Examine the command and its output:
SQL> DROP TABLE EMPLOYEE;
SQL> SELECT object_name AS recycle_name, original_name, type FROM recyclebin;
RECYCLE_NAME                             ORIGINAL_NAME   TYPE
---------------------------------------- --------------- -------
BINSgk31sj/3akk5hg3j21kl5j3d==$0         EMPLOYEE        TABLE
You then successfully execute the command:
SQL> FLASHBACK TABLE "BINSgk31sj/3akk5hg3j21kl5j3d==$0" TO BEFORE DROP;
Which two statements are true?
A. It flashes back the employee table and all the constraints associated with the table.
B. It automatically flashes back all the indexes on the employee table.
C. It automatically flashes back any triggers defined on the table.
D. It flashes back only the structure of the table and not the data.
E. It flashes back the data from the recycle bin and the existing data in the original table is permanently lost.

###

AC
A is only partially correct, because not ALL constraints are restored (foreign key constraints are lost).

Triggers and constraints are restored as well except foreign key constraints. Note that all restored, table-related objects will be restored with their recycle bin names, rather than their original names. So, you might want to make a note of the original names before you do the restore. Also, materialized views (Mviews) that are dependent on the tables being dropped are dropped and are not saved in the recycle bin, so they are lost forever.
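A minimal sketch of the rename step mentioned above (the table is assumed to have had an index EMP_IDX; the BIN$ name is illustrative):

```sql
-- Restore the dropped table under its original name.
FLASHBACK TABLE employee TO BEFORE DROP;

-- Restored indexes, triggers, and constraints keep their recycle-bin
-- names, so rename them back manually:
ALTER INDEX "BIN$abcdefghijk==$0" RENAME TO emp_idx;
```

Querying USER_INDEXES after the flashback shows which BIN$ names need renaming.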

###

QUESTION 111 / 94
You want the execution of large database operations to suspend, and then resume, in the event of space allocation failures.
You set the value of the initialization parameter resumable_timeout to 3600.
Which two statements are true?
A. A resumable statement can be suspended and resumed only once during execution.
B. Data Manipulation Language (DML) operations are resumable, provided that they are not embedded in a PL/SQL block.
C. A suspended statement will report an error if no corrective action has taken place during a timeout period.
D. Before a statement executes in resumable mode, the alter session enable resumable statement must be issued in its session.
E. Suspending a statement automatically results in suspending a transaction and releasing all the resources held by the transaction.

###

CD

not A -- suspended and resumed multiple times during execution
not B
C
D
not E

A statement executes in a resumable mode only if its session
has been enabled for resumable space allocation by one of the following actions:
•    The RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value.
•    The ALTER SESSION ENABLE RESUMABLE statement is issued.

When a resumable statement is suspended the following actions are taken:
•    The error is reported in the alert log.
•    The system issues the Resumable Session Suspended alert.
•    If the user registered a trigger on the AFTER SUSPEND system event, the user trigger is executed.
A user supplied PL/SQL procedure can access the error message data using the
DBMS_RESUMABLE package and the DBA_ or USER_RESUMABLE view.
A resumable statement can be suspended and resumed multiple times during execution.

“A statement executes in resumable mode only if its session has been enabled for resumable space allocation by one of the following actions:

– The ALTER SESSION ENABLE RESUMABLE statement is issued in the session before the statement executes when the RESUMABLE_TIMEOUT initialization parameter is set to a nonzero value.

– The ALTER SESSION ENABLE RESUMABLE TIMEOUT timeout_value statement is issued in the session before the statement executes, and the timeout_value is a nonzero value.”

So resumable mode must always be enabled for the session; the resumable_timeout parameter changes only the required syntax of the ALTER SESSION statement.
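A sketch of a session-level resumable operation (table names and paths are hypothetical):

```sql
-- Enable resumable space allocation for this session with a one-hour
-- timeout and a name that shows up in DBA_RESUMABLE.
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'big_load';

-- A large operation that may suspend on a space allocation failure:
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales;

-- If it suspends, fix the cause within the timeout, for example:
-- ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/users02.dbf' SIZE 2G;

ALTER SESSION DISABLE RESUMABLE;
```

If no corrective action is taken within the timeout, the suspended statement reports the original space error.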

###

QUESTION 112 / 95
Your database is running in Archivelog mode and Automatic Undo Management is enabled.
Which two tasks should you perform before enabling Flashback Database?

A. Enable minimal supplemental logging.
B. Ensure that the db_flashback_retention_target parameter is set to a point in time (in minutes) to which the database can be flashed back.
C. Enable the recyclebin.
D. Enable undo retention guarantee.
E. Enable Fast Recovery Area.

###

BE
Before enabling Flashback Database, you can optionally set DB_FLASHBACK_RETENTION_TARGET to the length of the desired flashback window in minutes.

You must have a fast recovery area enabled, because flashback logs can only be stored in the fast recovery area. "Fast recovery area" is the newer name for "flash recovery area".
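A sketch of the setup (paths and sizes are illustrative):

```sql
-- Fast recovery area must exist before Flashback Database can be enabled.
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';

-- Optional: desired flashback window in minutes (2880 = 2 days).
ALTER SYSTEM SET db_flashback_retention_target = 2880;

-- Enable Flashback Database (in 12c this works with the database open).
ALTER DATABASE FLASHBACK ON;
```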

###

QUESTION 113 / 96
Consider the following scenario for your database:
- Backup optimization is enabled in RMAN.
- The recovery window is set to seven days in RMAN.
- The most recent backup to disk for the tools tablespace was taken on March 1, 2013.
- The tools tablespace is read-only since March 2, 2013.
On March 15, 2013, you issue the RMAN command to back up the database to disk.
Which statement is true about the backup of the tools tablespace?

A. The RMAN backup fails because the tools tablespace is read-only.
B. RMAN skips the backup of the tools tablespace because backup optimization is enabled.
C. RMAN creates a backup of the tools tablespace because backup optimization is applicable only for the backups written to media.
D. RMAN creates a backup of the tools tablespace because no backup of the tablespace exists within the seven-day recovery window.

###

The answer is D.
When you turn on backup optimization, all backup commands will skip backups of any file if it has not changed and if it has already been backed up to the allocated device type.  A file can be any dbf file, an archived redo log or an RMAN "backup set".  Here are some of the main features of the RMAN configure backup optimization on command:
•    In order to back up the flash recovery area itself using RMAN, you must set configure backup optimization to ON.
•    Setting backup optimization on stops the backups of Read Only Tablespaces (ROT), whenever a valid backup of the tablespace already exists in the RMAN catalog database.
•    If backup optimization is enabled, then RMAN skips backups of archived logs that have already been backed up to the allocated device.
On February 21, when you issue a command to back up tablespace tools to tape, RMAN backs it up even though it did not change after the January 3 backup (because it is read-only). RMAN makes the backup because no backup of the tablespace exists within the 7-day recovery window.
This behavior enables the media manager to expire old tapes. Otherwise, the media manager would be forced to keep the January 3 backup of tablespace tools indefinitely. By making a more recent backup of tablespace tools on February 21, RMAN enables the media manager to expire the tape containing the January 3 backup.
With a recovery window-based retention policy:
For backups to tape, RMAN takes another backup of a file, even if a backup of an identical file exists, if the most recent backup is older than the configured recovery window. This is done to allow media to be recycled after the media expires.
For backups to disk, RMAN skips taking the backup if an identical file is available from a backup on disk, even if that backup is older than the beginning of the recovery window. The retention policy causes RMAN to retain the old backup for as long as it is needed.
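The scenario's configuration as a sketch in RMAN:

```sql
-- Persistent settings from the scenario:
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

-- On March 15, this still backs up the read-only TOOLS tablespace,
-- because its only backup (March 1) is outside the 7-day window:
RMAN> BACKUP DEVICE TYPE DISK DATABASE;
```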

###

QUESTION 114 / 97
You set the following parameters in the parameter file and restart the database instance:

MEMORY_MAX_TARGET=0
MEMORY_TARGET=500M
PGA_AGGREGATE_TARGET=90M
SGA_TARGET=270M

Which two statements are true?
A. The memory_max_target parameter is automatically set to 500 MB.
B. The pga_aggregate_target and sga_target parameters are automatically set to zero.
C. The value of the memory_max_target parameter remains zero for the database instance.
D. The lower limits of the pga_aggregate_target and sga_target parameters are set to 90 MB and 270 MB respectively.
E. The instance does not start up because Automatic Memory Management (AMM) is enabled but pga_aggregate_target and sga_target parameters are set to
nonzero values.

###

AD
In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, then the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, then the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a nonzero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.

With MEMORY_TARGET set, the SGA_TARGET setting becomes the minimum size of the SGA and the PGA_AGGREGATE_TARGET setting becomes the minimum size of the instance PGA. By setting both of these to zero as shown, there are no minimums, and the SGA and instance PGA can grow as needed as long as their sum is less than or equal to the MEMORY_TARGET setting. The sizing of SQL work areas remains automatic.
You can omit the statements that set the SGA_TARGET and PGA_AGGREGATE_TARGET parameter values to zero and leave either or both of the values as positive numbers. In this case, the values act as minimum values for the sizes of the SGA or instance PGA.
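As a sketch, the scenario's settings in a text initialization parameter file, with the behavior described above noted inline:

```sql
-- Text init.ora fragment for the scenario (file contents, shown as comments):
-- memory_target=500M          -- AMM enabled; memory_max_target defaults to 500M
-- pga_aggregate_target=90M    -- acts as the minimum instance PGA
-- sga_target=270M             -- acts as the minimum SGA

-- After startup, verify the derived value:
SHOW PARAMETER memory_max_target
```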

###

QUESTION 115 / 98
Your database supports an OLTP workload. Examine the output of the query:
SQL> SELECT target_mttr, estimated_mttr
     FROM v$instance_recovery;

TARGET_MTTR ESTIMATED_MTTR
----------- --------------

To ensure faster instance recovery, you set the fast_start_mttr_target initialization parameter to 30.
What is the effect of this setting on the database?

A. Automatic checkpoint tuning is disabled.
B. The frequency of log switches is increased.
C. The overhead on database performance is increased because of frequent writes to disk.
D. The MTTR advisor is disabled.

###

C is correct.
Not A.
Not D.

Setting fast_start_mttr_target to a nonzero value enables checkpoint tuning.

FAST_START_MTTR_TARGET enables you
to specify the number of seconds the database takes to perform crash recovery
of a single instance. Based on internal statistics, incremental checkpoint
automatically adjusts the checkpoint target to meet the requirement of
FAST_START_MTTR_TARGET.
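A sketch of setting the target and checking the resulting write overhead via the advisor view:

```sql
-- Target crash recovery time of 30 seconds; forces more aggressive
-- incremental checkpointing (more frequent writes to disk).
ALTER SYSTEM SET fast_start_mttr_target = 30;

-- Compare the target with the current estimate:
SELECT target_mttr, estimated_mttr
FROM   v$instance_recovery;
```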

###

QUESTION 116 / 99
Which three statements are true about persistent lightweight jobs?
A. A user cannot set privileges on them.
B. They generate large amounts of metadata.
C. They may be created as fully self-contained jobs.
D. They must reference an existing Scheduler Program.
E. They are useful when users need to create a large number of jobs quickly.

###
ADE

A ok
B wrong
C wrong: they are not fully self-contained.
D ok
E ok
Lightweight Jobs
Use lightweight jobs when you have many short-duration jobs that run frequently.
Lightweight jobs have the following characteristics:
   Unlike regular jobs, they are not schema objects.
   They have a significant improvement in create and drop time over regular jobs because they do not have the overhead of creating a schema object.
   They have lower average session creation time than regular jobs.
   They have a small footprint on disk for job metadata and runtime data.
You designate a lightweight job by setting the job_style job attribute to 'LIGHTWEIGHT'.
The other job style is 'REGULAR', which is the default.
Like programs and schedules, regular jobs are schema objects.
In releases before Oracle Database 11g Release 1, regular jobs were the only job style supported by the Scheduler.
A regular job offers the maximum flexibility but does entail some overhead when it is created or dropped.
The user has fine-grained control of the privileges on the job, and the job can have as its action a program or a stored procedure owned by another user.
If a relatively small number of jobs that run infrequently need to be created, then regular jobs are preferred over lightweight jobs.
A lightweight job must reference a program object (program) to specify a job action.
The program must be already enabled when the lightweight job is created,
and the program type must be either 'PLSQL_BLOCK' or 'STORED_PROCEDURE'.
Because lightweight jobs are not schema objects, you cannot grant privileges on them.
A lightweight job inherits privileges from its specified program.
Thus, any user who has a certain set of privileges on the program has corresponding privileges on the lightweight job.
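A sketch of a lightweight job referencing an enabled program, as required (program and job names are hypothetical):

```sql
BEGIN
  -- The program must be enabled and of type PLSQL_BLOCK or STORED_PROCEDURE.
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'purge_prog',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN DELETE FROM app_log WHERE ts < SYSDATE - 7; END;',
    enabled        => TRUE);

  -- job_style => 'LIGHTWEIGHT' designates a lightweight job.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'purge_job',
    program_name    => 'purge_prog',
    repeat_interval => 'FREQ=MINUTELY',
    job_style       => 'LIGHTWEIGHT',
    enabled         => TRUE);
END;
/
```

A lightweight job cannot specify an inline job_action; the program reference is what makes quick creation of many jobs possible.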

The ADMINISTER RESOURCE MANAGER system privilege can 
be granted and revoked only with the DBMS_RESOURCE_MANAGER_PRIVS
package, not with the usual GRANT and REVOKE commands.



###

QUESTION 117 / 100
You restore and recover your database to a new host by using an existing RMAN open database backup.
Which step must you perform next?
A. Execute catproc.sql to recompile invalid PL/SQL modules.
B. Open the database with the resetlogs option.
C. Set a new database identifier (DBID) for the newly restored database.
D. Use the RMAN set newname and switch commands to switch to new files.



###


A: recompiles the packages; not the required next step.
B <<<<<<<<<<<<<<<<<<<<<<<<< OK
C: this is done automatically by RMAN.
D: relocation of the files, if needed, happens during the restore phase.




    If possible, restore or re-create all relevant network files such as tnsnames.ora and listener.ora and a password file.

    Start RMAN and connect to the target database instance.

    At this stage, no initialization parameter file exists. If you have set ORACLE_SID and ORACLE_HOME, then you can use operating system authentication to connect as SYSDBA. For example, start RMAN as follows:

    % rman
    RMAN> CONNECT TARGET /

    Specify the DBID for the target database with the SET DBID command, as described in "Restoring the Server Parameter File".
    For example, enter the following command:

    SET DBID 676549873;

    Run the STARTUP NOMOUNT command.
    When the server parameter file is not available, RMAN attempts to start the instance with a dummy server parameter file.
    Allocate a channel to the media manager and then restore the server parameter file from autobackup.
    For example, enter the following command to restore the server parameter file from Oracle Secure Backup:

    RUN
    {
      ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
      RESTORE SPFILE FROM AUTOBACKUP;
    }

    Restart the instance with the restored server parameter file.

    STARTUP FORCE NOMOUNT;

    Write a command file to perform the restore and recovery operation, and then execute the command file. The command file should do the following:

        Allocate a channel to the media manager.

        Restore a control file autobackup (see "Performing Recovery with a Backup Control File and No Recovery Catalog").

        Mount the restored control file.

        Catalog any backups not recorded in the repository with the CATALOG command.

        Restore the data files to their original locations. If volume names have changed, then run SET NEWNAME commands before the restore operation and perform a switch after the restore operation to update the control file with the new locations for the data files, as shown in the following example.

        Recover the data files. RMAN stops recovery when it reaches the log sequence number specified.

    RMAN> RUN
    {
      # Manually allocate a channel to the media manager
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
      # Restore autobackup of the control file. This example assumes that you have
      # accepted the default format for the autobackup name.
      RESTORE CONTROLFILE FROM AUTOBACKUP;
      #  The set until command is used in case the database
      #  structure has changed in the most recent backups, and you want to
      #  recover to that point in time. In this way RMAN restores the database
      #  to the same structure that the database had at the specified time.
      ALTER DATABASE MOUNT;
      SET UNTIL SEQUENCE 1124 THREAD 1;
      RESTORE DATABASE;
      RECOVER DATABASE;
    }

    The following example of the RUN command shows the same scenario except with new file names for the restored data files:

    RMAN> RUN
    {
      #  If you must restore the files to new locations,
      #  use SET NEWNAME commands:
      SET NEWNAME FOR DATAFILE 1 TO '/dev/vgd_1_0/rlvt5_500M_1';
      SET NEWNAME FOR DATAFILE 2 TO '/dev/vgd_1_0/rlvt5_500M_2';
      SET NEWNAME FOR DATAFILE 3 TO '/dev/vgd_1_0/rlvt5_500M_3';
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
      RESTORE CONTROLFILE FROM AUTOBACKUP;
      ALTER DATABASE MOUNT;
      SET UNTIL SEQUENCE 124 THREAD 1;
      RESTORE DATABASE;
      SWITCH DATAFILE ALL; # Update control file with new location of data files.
      RECOVER DATABASE;
    }

    If recovery was successful, then open the database and reset the online logs:

    ALTER DATABASE OPEN RESETLOGS;


###

QUESTION 118 / 101
Which two statements are true about unified auditing?

A. A unified audit trail captures audit information from unified audit policies and audit settings.
B. Unified auditing is enabled by executing make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME.
C. Audit records are created for all users except sys.
D. Audit records are created only for the DML and DDL operations performed on database objects.
E. Unified auditing is enabled by setting the audit_trail parameter to db, extended.
F. A unified audit trail resides in a read-only table in the audsys schema in the system tablespace.

###

A, B
F is wrong: the tablespace is SYSAUX, not SYSTEM.


https://docs.oracle.com/database/121/DBSEG/auditing.htm#DBSEG343
In unified auditing, the unified audit trail captures audit information from a variety of sources.
Audit records (including SYS audit records) from unified audit policies and AUDIT settings
Fine-grained audit records from the DBMS_FGA PL/SQL package
Oracle Database Real Application Security audit records
Oracle Recovery Manager audit records
Oracle Database Vault audit records
Oracle Label Security audit records
Oracle Data Mining records
Oracle Data Pump
Oracle SQL*Loader Direct Load
https://docs.oracle.com/database/121/TDPSG/GUID-BF747771-01D1-4BFB-
8489-08988E1181F6.htm#TDPSG55281
Enable the unified auditing executable.
UNIX: Run the following command:
make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME
https://docs.oracle.com/database/121/DBSEG/auditing.htm#DBSEG1024
The unified audit trail, which resides in a read-only table in the AUDSYS schema in the SYSAUX tablespace.

###

QUESTION 119 / 102
Your database is running in archivelog mode.
You are taking a backup of your database by using RMAN with a recovery catalog. Because of a media failure, one of
the data files and all the control files are lost.
Examine the steps to recover the database:

1.Restore the control files by using the RMAN restore controlfile command.
2.Mount the database.
3.Restore the data files by using the RMAN restore database command.
4.Open the database with the resetlogs option.
5.Recover the data files by using the RMAN recover using backup controlfile command.
6. Start the database instance in nomount state.
7. Connect to the target database by using a recovery catalog.
8.Open the database.
9.Restore the data file.
10.Recover the data file.
Identify the required steps in the correct order.

A. 7, 6, 1, 2, 3, 5, 4
B. 7, 2, 1, 3, 5, 8
C. 7, 6, 1, 2, 9, 10, 8
D. 7, 6, 1, 2, 9, 10, 4

###

Correct Answer: D

7. Connect to the target database by using a recovery catalog.
6. Start the database instance in nomount state.
1.Restore the control files by using the RMAN restore controlfile command.
2.Mount the database.
9.Restore the data file.
10.Recover the data file.
4.Open the database with the resetlogs option.

Finally:
ALTER DATABASE OPEN RESETLOGS;
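The sequence above as an RMAN sketch (catalog connect string and file number are illustrative):

```sql
RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG rco@catdb
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATAFILE 4;
RMAN> RECOVER DATAFILE 4;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

With a recovery catalog, RMAN can locate the control file autobackup without a SET DBID, and OPEN RESETLOGS is required because a restored (backup) control file was used.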

###

QUESTION 120 / 103
You plan to use the In-Database Archiving feature of Oracle Database 12c,
and store rows that are inactive for over three months, in Hybrid Columnar Compressed
(HCC) format.
Which three storage options support the use of HCC?

A. ASM disk groups with ASM disks consisting of Exadata Grid Disks.
B. ASM disk groups with ASM disks consisting of LUNS on any Storage Area Network array
C. ASM disk groups with ASM disks consisting of any zero padded NFS-mounted files
D. Database files stored in ZFS and accessed using conventional NFS mounts.
E. Database files stored in ZFS and accessed using the Oracle Direct NFS feature
F. Database files stored in any file system and accessed using the Oracle Direct NFS feature
G. ASM disk groups with ASM disks consisting of LUNs on Pillar Axiom Storage arrays

###

Correct Answer: AEG
Section: (none)
Explanation
Explanation/Reference:
Explanation: HCC requires the use of Oracle Storage Exadata (A), Pillar Axiom (G) or Sun ZFS Storage Appliance (ZFSSA).
Note:
* Hybrid Columnar Compression, initially only available on Exadata, has been extended to support Pillar Axiom and Sun ZFS Storage Appliance (ZFSSA) storage
when used with Oracle Database Enterprise Edition 11.2.0.3 and above
* Oracle offers the ability to manage NFS using a feature called Oracle Direct NFS (dNFS). Oracle Direct NFS implements NFS V3 protocol within the Oracle
database kernel itself. Oracle Direct NFS client overcomes many of the challenges associated with using NFS with the Oracle Database with simple configuration,
better performance than traditional NFS clients, and offers consistent configuration across platforms.

###

QUESTION 121 / 104
You notice a performance change in your production Oracle 12c database.
You want to know which change caused this performance difference.
Which method or feature should you use?

A. Compare Period ADDM report
B. AWR Compare Period report
C. Active Session History (ASH) report
D. taking a new snapshot and comparing it with a preserved snapshot

###

A is correct.

Not B: the AWR Compare Periods report does not map the root causes to performance changes.


In addition to letting you compare performance between two time periods, the AWR Compare Periods Reports capability also lets you compare performance between DB Replay capture and replay-and even between two different DB replays.
Regardless of the nature of comparison, you still need to analyze the huge volumes of performance differentials between them. The AWR Compare Period Reports don’t map the root causes to performance changes.
Oracle 12c offers you something that takes you further in your search for the reason for a performance change – the Compare Period ADDM Report. This report performs a cause-to-effect analysis, making it simpler to understand why performance deviated from a base time.
Reference: Oracle 12c New Features – By Sam Alapati

The Automatic Workload Repository (AWR) Compare Periods report enables you to compare database performance between two periods of time.



Explanation: The awrddrpt.sql report is the Automated Workload Repository Compare Period Report.
The awrddrpt.sql script is located in the $ORACLE_HOME/
rdbms/admin directory.
Incorrect:
Not A: Compare Period ADDM
Use this report to perform a high-level comparison of one workload replay to its capture or to another replay of the same capture.
Only workload replays that contain at least 5 minutes of database time can be compared using this report.

explanation from the PDF


###

QUESTION 122 / 105
Which parameter must be set to which value to implement automatic PGA memory management?
A. Set memory_target to zero.
B. Set STATISTICS_LEVEL to BASIC.
C. Set pga_aggregate_target to a nonzero value.
D. Set pga_aggregate_target and sga_target to the same value.
E. Set sga_target to zero.

###

C
When automatic memory management is not enabled, the default method for the instance PGA is automatic PGA memory management.

###

QUESTION 123 / 106
Examine the following set of RMAN commands:
RMAN> CONFIGURE CHANNEL dc1 DEVICE TYPE DISK FORMAT '/u02/backup/%U';
RMAN> RUN
{
ALLOCATE CHANNEL ch1 DEVICE TYPE DISK;
EXECUTE SCRIPT arc_backup;
}
Which statement is true about the RMAN run block execution?

A. The script is executed and both dc1 and ch1 channels are used for script execution.
B. The execution of the script fails because multiple channels cannot exist simultaneously.
C. The persistent configuration parameter, dc1, is overridden because a new channel is allocated in the RMAN run block.
D. The new channel, ch1, is ignored because a channel has been configured already.

###

C

Allocating a channel manually inside a RUN block overrides the automatic channel configured with CONFIGURE CHANNEL for the duration of that block, so only ch1 is used to execute the script.


###

QUESTION 124 / 107
You create two Resource Manager plans, one for night time workloads, the other for day time.
How would you make the plans switch automatically?

A. Use job classes.
B. Use scheduler windows.
C. Use the mapping rule for the consumer groups.
D. Set the switch_time plan directive for both plans.
E. Use scheduler schedules.

###

B is correct

The resource manager is only activated when a default resource plan is assigned.  Only one resource plan can be active at any given time.  Resource plan switches can be automated using scheduler windows or performed manually by setting the resource_manager_plan parameter using the alter system command as shown below.
alter system set resource_manager_plan = day_plan;
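A sketch of the automated alternative with scheduler windows (window names and schedules are illustrative; the plan names come from the question scenario):

```sql
BEGIN
  -- When a window opens, the Scheduler activates its resource plan;
  -- when it closes, the previous plan is restored.
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'day_window',
    resource_plan   => 'DAY_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=8',
    duration        => INTERVAL '12' HOUR);

  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'night_window',
    resource_plan   => 'NIGHT_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=20',
    duration        => INTERVAL '12' HOUR);
END;
/
```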

###

QUESTION 125 / 108
Which three statements are true about Consolidated Database Replay?

A. The workload capture and replay systems must have the same operating system (OS).
B. Multiple workload captures from multiple databases can be replayed simultaneously on all pluggable databases (PDBs) in a multitenant container database
(CDB).
C. A subset of the captured workload can be replayed.
D. The number of captured workloads must be the same as the number of PDBs in a multitenant CDB.
E. Multiple replay schedules can be defined for a consolidated replay and during replay initialization, you can select from any of the existing replay schedules.

###

BCE

D is clearly wrong; just read it.

And A is also false.

Consolidated Database Replay supports multiple workloads captured from one or multiple systems running Oracle Database 9i Release 2 (release 9.2.0.8.0) or higher on one or multiple operating systems. For example, you can use workloads captured from one system running Oracle Database 9i Release 2 (release 9.2.0.8.0) on HP-UX and another system running Oracle Database 10g Release 2 (release 10.2.0.4.0) on AIX.

Consolidated Database Replay enables you to replay multiple workloads captured from one or multiple systems concurrently. During the replay, every workload capture that is consolidated will start to replay when the consolidated replay begins. Depending on the use case, you can employ various workload scale-up techniques when using Consolidated Database Replay.

Database Replay enables you to capture a workload on the production system and replay it on a test system. This can be very useful when evaluating or adopting new database technologies because these changes can be tested on a test system without affecting the production system. However, some situations may require you to replay multiple workloads concurrently to accurately predict how additional workloads are handled by a system.
For example, you may want to conduct stress testing on a system by adding workloads to an existing workload capture and replaying them together. You may also want to perform scale-up testing by folding an existing workload capture or remapping database schemas.
Consolidated Database Replay enables you to consolidate multiple workloads captured from one or multiple systems and replay them concurrently on a test system. This enables you to perform more comprehensive testing such as stress testing and scale-up testing.

###

QUESTION 126 / 109
Which two statements are true about Flashback Version Query?
A. The result of a query can be used as part of a DML statement.
B. It can be used to create views.
C. It can be used only if Flashback Data Archive is enabled for a table.
D. It retrieves all versions of rows that exist in a time interval, including the start time and end time.
E. It can be used to retrieve the SQL that is required to undo a row change and the user responsible for the change.

###

D E

A wrong
B wrong
C wrong: Flashback Version Query does not require Flashback Data Archive to be enabled.
D correct
E correct (together with Flashback Transaction Query)


Using Oracle Flashback Version Query

Use Oracle Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A row version is created whenever a COMMIT statement is executed.


###

QUESTION 127 / 110
Which three statements are true about unplugging a pluggable database (PDB)?
A. The PDB must be open in read only mode.
B. The PDB must be closed.
C. The unplugged PDB becomes a non-CDB.
D. The unplugged PDB can be plugged into the same multitenant container database (CDB)
E. The unplugged PDB can be plugged into another CDB.
F. The PDB data files are automatically removed from disk.


###

Correct Answer: BDE
A wrong
B ok
C wrong
D ok
E ok
F wrong

Explanation:
B, not A: The PDB must be closed before unplugging it.

D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way. There is no supported
way to handle the conversion of such columns automatically. This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference.

E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so
that the PDB's datafiles can remain in place.

###

QUESTION 128 / 111
You are administering a multitenant container database (CDB) that contains multiple pluggable databases (PDBs). You are connected to cdb$root as the sys user.
You execute the commands:
SQL> CREATE USER C##ADMIN IDENTIFIED BY orcl123;
SQL> CREATE ROLE C##CONNECT;
SQL> GRANT CREATE SESSION, CREATE TABLE, SELECT ANY TABLE TO C##CONNECT;
SQL> GRANT C##CONNECT to C##ADMIN CONTAINER=ALL;

Which statement is true about the c##connect role?

A. It is created only in cdb$root and cannot be granted to the c##admin user with the container=all clause.
B. It is granted to the c##admin user only in the CDB.
C. It is granted to the c##admin user in all PDBs and can be granted only to a local user in a PDB.
D. It is granted to the c##admin user in all PDBs and can be granted object and system privileges for a PDB.

###

D

It seems correct: the role is granted to the C##ADMIN user in all PDBs, and it can be granted object and system privileges for a PDB.

C should be wrong because of 'only'.

Roles and Privileges Granted Commonly
A common user or role may be commonly granted a privilege (CONTAINER=ALL).
The privilege is granted to this common user or role in all existing and future containers.
For example, a SELECT ANY TABLE privilege granted commonly to common user c##dba applies to this user in all containers.
A user or role may receive a common role granted commonly.

As mentioned in a footnote on Table 18-6,
a common role may receive a privilege granted locally.
Thus, a common user can be granted a common role,
and this role may contain locally granted privileges.
For example, the common role c##admin may be granted the SELECT ANY TABLE privilege that is local to hrpdb.
Locally granted privileges in a common role apply only in the container in which the privilege was granted.
Thus, the common user with the c##admin role does not have the right to exercise an hrpdb-contained privilege
in salespdb or any PDB other than hrpdb.

D is right, I tried it out:

1) First of all I executed the statements given:

$ sqlplus / as sysdba
SQL> CREATE USER C##ADMIN IDENTIFIED BY orcl123;
User created.
SQL> CREATE ROLE C##CONNECT;
Role created.
SQL> GRANT CREATE SESSION, CREATE TABLE, SELECT ANY TABLE TO C##CONNECT;
Grant succeeded.
SQL> GRANT C##CONNECT to C##ADMIN CONTAINER=ALL;
Grant succeeded.

2) After that I changed container, logged into PDB1:

SQL> alter session set container=pdb1;
Session altered.

3) I queried the granted roles of C##ADMIN:

SQL> select GRANTED_ROLE from dba_role_privs where GRANTEE='C##ADMIN';

GRANTED_ROLE
--------------------------------
C##CONNECT

So, as we see the C##CONNECT-Role is granted to C##ADMIN in all containers. This proves that answer “D” is right.

So concerning the answer “C”.

The following steps prove that “C” is wrong:

1) I connected to the root and I created another common user named c##demo:

SQL> CREATE USER C##demo IDENTIFIED BY demo;
User created.

2) After that I connected to PDB1 as SYS and queried the roles of C##DEMO (no roles granted yet):

SQL> connect sys/oracle@pdb1 as sysdba
Connected.
SQL> select GRANTED_ROLE from dba_role_privs where GRANTEE='C##DEMO';
no rows selected

3) My attempt to grant a common role to a common user in PDB1 was successful !!!

SQL> grant C##CONNECT to C##demo;
Grant succeeded.

Conclusion: C is wrong because it states that the role C##CONNECT can be granted only to a local user in a PDB (“only” is wrong).



###

QUESTION 129 / 112
Examine the RMAN command:
RMAN> BACKUP VALIDATE DATABASE;
Which statement is true about the execution of the command?

A. Block change tracking must be enabled before executing this command.
B. The database must be running in archivelog mode for the successful execution of this command.
C. A complete database backup must exist before executing this command.
D. The command checks for blocks containing all zeros, an invalid checksum, or a corrupt block header.
E. The command checks for blocks that contain a valid checksum and matching headers and footers, but that has logically inconsistent contents.

###

A   block change tracking only speeds up incremental backups - not required
B   archivelog mode is not required
C   a complete backup is not required
D OK - the command validates for physical corruption only
E   for logical corruption you must add -> CHECK LOGICAL
   
The main purpose of RMAN validation is to check for corrupt blocks and missing files. You can also use RMAN to determine whether backups can be restored. You can use the following RMAN commands to perform validation: For example, you can validate that all database files and archived logs can be backed up by running a command as shown in the following example. This command checks for physical corruptions only.
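For example, at the RMAN prompt:

```sql
-- Physical checks only (D):
BACKUP VALIDATE DATABASE;

-- Add CHECK LOGICAL to also test for logical corruption (E):
BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- Any corrupt blocks found are recorded here:
SELECT * FROM v$database_block_corruption;
```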

###

QUESTION 130 / 113
Which three conditions must be met before you create a Virtual Private Catalog (VPC)?
A. A base recovery catalog should exist.
B. The owner of VPC cannot own recovery catalog.
C. At least one target database should be registered in the recovery catalog.
D. The register database privilege should be granted to the virtual catalog owner.
E. The recovery_catalog_owner role should be granted to the virtual catalog owner.

###

A, D, E

A -> OK, they are two different things, but the base catalog must exist because the virtual catalog is a set of views and synonyms on the base catalog.
The recovery catalog can be a base recovery catalog, which is a database schema that contains RMAN metadata for a set of target databases. A virtual private catalog is a set of synonyms and views that enable user access to a subset of a base recovery catalog.
B wrong
C wrong (not required) -> A connection to a target database is not required.
D -> OK: if you are creating a virtual private catalog, then the base recovery catalog owner must have used the RMAN GRANT command to grant either the CATALOG or REGISTER privilege
E -> OK
Prerequisites
Execute this command only at the RMAN prompt. RMAN must be connected to the recovery catalog database either through the CATALOG command-line option or the CONNECT CATALOG command, and the catalog database must be open. A connection to a target database is not required.The recovery catalog owner, whether the catalog is a base recovery catalog or a virtual private catalog, must be granted the RECOVERY_CATALOG_OWNER role. This user must also be granted space privileges in the tablespace where the recovery catalog tables will reside. The recovery catalog is created in the default tablespace of the recovery catalog owner. If you are creating a virtual private catalog, then the base recovery catalog owner must have used the RMAN GRANT command to grant either the CATALOG or REGISTER privilege (see Example 2-57). See the CONNECT CATALOG description for restrictions for RMAN client connections to a virtual catalog when the RMAN client is from release Oracle Database 10g or earlier.

###

QUESTION 131 / 114
Which two statements are true regarding SecureFile lobs?
A. The amount of undo retained is user controlled.
B. They can be used only for nonpartitioned tables.
C. Fragmentation is minimized by using variable-sized chunks.
D. They support random reads and writes of encrypted LOB data.

###

A wrong - undo retention is unrelated to SecureFiles
B wrong (see LOB_partition_storage - SecureFiles can be used with partitioned tables)
C OK (chunks) - you may specify the chunk size when creating a table that stores a LOB
D ok SecureFiles Intelligent Encryption, available with the Oracle Advanced Security Option, introduces a new encryption facility for LOBs. The data is encrypted using Transparent Data Encryption (TDE), which allows the data to be stored securely, and still allows for random read and write access.
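A sketch of a SecureFiles LOB definition (the ENCRYPT clause assumes the Advanced Security Option and a configured TDE wallet):

```sql
CREATE TABLE documents (
  doc_id NUMBER,
  body   CLOB
)
LOB (body) STORE AS SECUREFILE (
  ENCRYPT   -- TDE-based; still allows random reads and writes of the LOB (D)
);
```

With SecureFiles, space inside the segment is managed automatically in variable-sized chunks (C); a CHUNK clause, if given, is treated only as advice.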

###

QUESTION 132 / 115
Which three statements are true about compression of backup sets?
A. Compressed backups can only be written to media.
B. Binary compression creates performance overhead during a backup operation.
C. Unused blocks below the high-water mark are not backed up.
D. Compressed backups cannot have section size defined during a backup operation
E. It works only for locally managed tablespaces.

###

BCE

For any use of the BACKUP command that creates backupsets,
you can take advantage of RMAN's support for binary compression of backupsets,
by using the  AS COMPRESSED BACKUPSET option to the BACKUP command.
The resulting backupsets are compressed using an algorithm optimized
for efficient compression of Oracle database files. No extra uncompression
steps are required during recovery if you use RMAN's integrated compression.
The primary disadvantage of using RMAN binary compression
is performance overhead during backups and restores.
Backup performance while creating compressed backupsets is CPU bound.
If you have more than one CPU, you can use increased parallelism to run jobs on multiple CPUs and thus improve performance.
A wrong
D wrong
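For example, at the RMAN prompt:

```sql
-- One-off compressed backup:
BACKUP AS COMPRESSED BACKUPSET DATABASE;

-- Or make it the default for disk backups:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

-- BASIC needs no extra license; LOW/MEDIUM/HIGH require Advanced Compression:
CONFIGURE COMPRESSION ALGORITHM 'BASIC';
```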

###

QUESTION 133 / 116
Which three statements are true about the database instance startup after an instance failure?

A. The RECO process recovers the uncommitted transactions at the next instance startup.
B. Online redo log files and archived redo log files are required to complete the rollback stage of instance recovery.
C. Uncommitted changes are rolled back to ensure transactional consistency.
D. The SMON process automatically performs the database recovery.
E. Media recovery is required to complete the database recovery.
F. Changes committed before the failure, which were not written to the data files, are re-applied.

###
C, D, F

A wrong (the recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions)
B wrong - the first step (cache recovery) is rolling forward, which needs only the online redo log; archived redo logs are not required for the rollback stage
C correct - rolling back transactions is the second step of the recovery process (transaction recovery)
D correct
E wrong (instance recovery is automatic; no media recovery is needed)
F correct - this is part of cache recovery (roll forward)
The goal of crash and instance recovery is to restore the data block changes located in the cache of the terminated instance and to close the redo thread that was left open. Instance and crash recovery use only online redo log files and current online datafiles. Oracle Database recovers the redo threads of the terminated instances together. The online redo log is a set of operating system files that record all changes made to any database block, including data, index, and rollback segments, whether the changes are committed or uncommitted. All changes to Oracle Database blocks are recorded in the online redo log.
The first step of recovery from an instance or media failure is called cache recovery or rolling forward, and involves reapplying all of the changes recorded in the redo log to the datafiles. Because rollback data is also recorded in the redo log, rolling forward also regenerates the corresponding rollback segments.
Rolling forward proceeds through as many redo log files as necessary to bring the database forward in time. Rolling forward usually includes online redo log files (instance recovery or media recovery) and could include archived redo log files (media recovery only).Transaction Recovery. After the roll forward, any changes that were not committed must be undone. Oracle Database applies undo blocks to roll back uncommitted changes in data blocks that were either written before the failure or introduced by redo application during cache recovery. This process is called rolling back or transaction recovery.

###

QUESTION 134 / 117
You are administering a multitenant container database (CDB) cdb1 that has multiple pluggable
databases (PDBs). As the sys user on cdb$root, you execute the commands:
SQL> CREATE USER C##ADMIN IDENTIFIED BY orcl123;
SQL> GRANT CREATE SESSION to C##ADMIN CONTAINER=ALL;
SQL> GRANT CREATE USER TO C##ADMIN CONTAINER=ALL;

Which two statements are true about the c##admin user that is created in all PDBs?

A. It can create only local users in all the PDBs.
B. It has a common schema for all the PDBs.
C. It can create common users only when it is logged in to the CDB.
D. It can create only local users in the CDB.
E. It can be granted only common roles in the PDBs.

###


A wrong
B correct - common users have a schema in every current and future PDB
C correct - it can create common users only when logged in to the CDB root
D wrong
E wrong in general - a common user can also be granted local roles in a PDB

###

QUESTION 135 / 119
Which three requirements must be met before a tablespace can be transported across different platforms?
A. Both the source and target databases must use the same character set.
B. The platforms of both the source and target databases must have the same endian format.
C. The compatible parameter value must be the same in the source and target databases.
D. The minimum compatibility level for both the source and target databases must be 10.0.0.
E. The tablespace to be transported must be in read-only mode.

###

A correct
B wrong - the endian formats may differ (the datafiles can be converted with RMAN CONVERT)
C wrong
D correct - for different platforms the minimum compatibility level is 10.0.0
E correct

The following table shows the minimum compatibility requirements of the source and target tablespace in various scenarios. The source and target database need not have the same compatibility setting.
Same platform:        8.0  - 8.0
Different block size: 9.0  - 9.0
Different platform:   10.0 - 10.0
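A sketch of the cross-platform transport steps (names and paths are hypothetical):

```sql
-- 1. Make the tablespace read-only (E):
ALTER TABLESPACE ts_data READ ONLY;

-- 2. Export the metadata with Data Pump (host command):
--    expdp system DIRECTORY=dp_dir DUMPFILE=ts_data.dmp TRANSPORT_TABLESPACES=ts_data

-- 3. Only if the endian formats differ, convert the datafiles, e.g. in RMAN:
--    CONVERT TABLESPACE ts_data TO PLATFORM 'Linux x86 64-bit' FORMAT '/tmp/%U';
```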

###

QUESTION 136 / 126
You execute the commands on a multitenant container database CDB1 that has multiple pluggable databases:
$ . oraenv
ORACLE_SID = [oracle] ? cdb1
The oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle
$ rman target /
Recovery Manager: Release 12.1.0.0.2 - Production on Fri Jul 19 05:18:33 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Connected to target database:CDB1 (DBID=782249327)
RMAN>SELECT name FROM v$tablespace;
Which statement is true about the execution of the last command?

A. It succeeds and displays all the tablespaces that belong to the root database.
B. It fails and returns an error because a connection is not made by using the sysdba privilege.
C. It succeeds and displays all the tablespaces that belong to the root and pluggable databases.
D. It fails and returns an error because SQL commands cannot be executed at the RMAN prompt.

###

C

New RMAN feature: SQL statements can be executed directly at the RMAN prompt, and when connected to the CDB root all tablespaces (root and PDBs) are displayed.

###

QUESTION 137 / 127
Which Oracle Database component is audited by default if the Unified Auditing option is enabled?
A. Oracle Data Pump
B. Oracle Recovery Manager (RMAN)
C. Oracle Label Security
D. Oracle Database Vault
E. Oracle Real Application Security

###

B - RMAN is audited by default.

###

QUESTION 138 / 128
Which two statements are true about tablespaces in multitenant container databases (CDBs)?
A. Default permanent tablespaces can be shared across pluggable databases (PDBs).
B. The current container must be set to root to create or modify the default temporary tablespace or tablespace group for a CDB.
C. Each PDB can have its own default temporary tablespace.
D. The default permanent tablespace for a PDB can be changed only by a local user with the required permissions.
E. The amount of space that each PDB can use in a shared temporary tablespace must be set at the CDB level.

###

A is wrong
B correct
C correct
D wrong - it can also be changed from the root of the CDB
E wrong - it is optional, not mandatory

A PDB can either have its own temporary tablespace, or, if it is created without one, it can share the temporary tablespace with the CDB.

A permanent tablespace can be associated with only one container.
When a tablespace is created in a container, it is associated with that container.
The CDB can have an UNDO tablespace.

###

QUESTION 139 / 129
When is the UNDO_RETENTION parameter value ignored by a transaction?
A. when the data file of the undo tablespace is autoextensible
B. when there are multiple undo tablespaces available in a database
C. when the undo tablespace is of a fixed size and retention guarantee is not enabled
D. when Flashback Database is enabled

###

C is correct
Reference: http://docs.oracle.com/cd/B19306_01/server.102/b14231/undo.htm (undo retention, see the bullets)
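For example, with a fixed-size undo tablespace the retention can be honored again by enabling the guarantee (tablespace name is hypothetical):

```sql
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Check the current setting:
SELECT tablespace_name, retention
FROM   dba_tablespaces
WHERE  contents = 'UNDO';
```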

###

QUESTION 140 / 130
Which two options can be configured for an existing database by using the Database Configuration Assistant (DBCA)?
A. Database Resident Connection Pooling
B. Oracle Suggested Backup Strategy
C. Database Vault in ORACLE_HOME
D. Nondefault block size tablespaces
E. Configure Label Security

###

It’s C and E
Just test it by running DBCA on an existing 12c database.

###

QUESTION 141 / 131
You have set the value of the NLS_TIMESTAMP_TZ_FORMAT parameter to YYYY-MM-DD. The default format of which two data types would be affected by this
setting?
A. DATE
B. TIMESTAMP
C. INTERVAL YEAR TO MONTH
D. INTERVAL DAY TO SECOND
E. TIMESTAMP WITH LOCAL TIME ZONE


###

Correct Answer: BE

NLS_TIMESTAMP_TZ_FORMAT defines the default date format for the TIMESTAMP and TIMESTAMP WITH LOCAL TIME ZONE data types. It is used with the TO_CHAR and TO_TIMESTAMP_TZ functions
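The effect can be observed in a session, for example on SYSTIMESTAMP (a TIMESTAMP WITH TIME ZONE value, which this parameter also formats):

```sql
ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD';

SELECT SYSTIMESTAMP FROM dual;   -- now rendered using the new default format
```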


#######################################################################################
#######################################################################################


Question 142 / 1
Which two statements are true about scheduling operations in a pluggable database (PDB)?
A. A job defined in a PDB runs only if that PDB is open.
B. Scheduler attribute setting is performed only at the CDB level.
C. Scheduler objects created by users can be exported or imported using Data Pump.
D. Scheduler jobs for a PDB can be created only by common users.
E. Scheduler jobs for a PDB can be defined only at the container database (CDB) level.


###

AC

A correct
B wrong
C correct
D wrong
E wrong

The course material says:
In general, all scheduler objects created by the user can be exported/imported into the PDB using data pump.
Predefined scheduler objects will not get exported and that means that any changes made
to these objects by the user will have to be made once again after the database
has been imported into the pluggable database.
However, this is how import/export works currently.
A job defined in a PDB will run only if a PDB is open.
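A sketch of creating a job while connected to the PDB (all names are hypothetical):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```

Being a user-created scheduler object, the job can be exported/imported with Data Pump (C), and it runs only while its PDB is open (A).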

###

QUESTION 143 / 2

A complete database backup to media is taken for your database every day.
Which three actions would you take to improve backup performance?

A. Set the backup_tape_io_slaves parameter to true.
B. Set the dbwr_io_slaves parameter to a nonzero value if synchronous I/O is in use.
C. Configure large pool if not already done.
D. Remove the rate parameter, if specified, in the allocate channel command.
E. Always use RMAN compression for tape backups rather than the compression provided by media manager.
F. Always use synchronous I/O for the database.

###

B, C, D (LIV)
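A sketch of the corresponding settings (values are illustrative):

```sql
-- C: size the large pool so RMAN I/O buffers do not come from the shared pool
ALTER SYSTEM SET large_pool_size = 64M;

-- B: with synchronous I/O, simulate asynchronous I/O via I/O slaves
ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;   -- static, needs a restart

-- D: in RMAN, allocate channels without the RATE clause, e.g.
--    ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
--    (RATE deliberately throttles backup throughput)
```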

###

QUESTION 144 / 8
Your database is running in archivelog mode. Examine the parameters for your database instance:
LOG_ARCHIVE_DEST_1 ='LOCATION=/disk1/arch MANDATORY'
LOG_ARCHIVE_DEST_2 ='LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_3 ='LOCATION=/disk3/arch'
LOG_ARCHIVE_DEST_4 ='LOCATION=/disk4/arch'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2

While the database is open, you notice that the destination set by the log_archive_dest_1 parameter is not
available. All redo log groups have been used.
What happens at the next log switch?

A. The database instance hangs and the redo log files are not overwritten.
B. The archived redo log files are written to the fast recovery area until the mandatory destination is made
available.
C. The database instance is shutdown immediately.
D. The destination set by the log_archive_dest parameter is ignored and the archived redo log files are
created in the next two available locations to guarantee archive log success.

###

A

DEST_1 is MANDATORY: if archiving to a mandatory destination fails, the redo log group cannot be reused. Since all redo log groups have already been used, at the next log switch the instance hangs and the redo log files are not overwritten.

###

QUESTION 145 / 4
You notice performance degradation in your production Oracle 12c database.
You want to know what caused this performance difference.
Which method or feature should you use?

A. Database Replay
B. Automatic Database Diagnostic Monitor (ADDM) Compare Period report
C. Active Session History (ASH) report
D. SQL Performance Analyzer

###

B
ADDM Report. This report performs a cause-to-effect analysis, making it simpler to understand why performance deviated from a base time.

Reference: http://docs.oracle.com/cd/E24628_01/server.121/e17635/tdppt_degrade.htm

###

QUESTION 146 / 3
For which three pieces of information can you use the RMAN list command?

A. stored scripts in the recovery catalog
B. available archived redo log files
C. backup sets and image copies that are obsolete
D. backups of tablespaces
E. backups that are marked obsolete according to the current retention policy


###

A, B, D
(for obsolete backups you use REPORT, not LIST)

About the LIST Command

The primary purpose of the LIST command is to list backup and copies. For example, you can list:

– Backups and proxy copies of a database, tablespace, datafile, archived redo log, or control file
– Backups that have expired
– Backups restricted by time, path name, device type, tag, or recoverability
– Archived redo log files and disk copies
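For example, at the RMAN prompt:

```sql
LIST SCRIPT NAMES;                 -- A: stored scripts in the recovery catalog
LIST ARCHIVELOG ALL;               -- B: available archived redo log files
LIST BACKUP OF TABLESPACE users;   -- D: backups of a tablespace
REPORT OBSOLETE;                   -- obsolete backups need REPORT, not LIST
```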

###

QUESTION 147 / 52

Examine the resources consumed by a database instance whose current Resource Manager plan is
displayed.
SQL> SELECT name, active_sessions, queue_length,
consumed_cpu_time, cpu_waits, cpu_wait_time
FROM v$rsrc_consumer_group;

NAME             ACTIVE_SESSIONS  QUEUE_LENGTH  CONSUMED_CPU_TIME  CPU_WAITS  CPU_WAIT_TIME
---------------  ---------------  ------------  -----------------  ---------  -------------
OTHER_GROUPS                  12             0                  0          0              0
SYSTEM_GROUP                  49             0                  0          0              0
DSS_QUERIES                    4             2               5412    5464546       84748478

Which two statements are true?

A. An attempt to start a new session by a user belonging to DSS_QUERIES fails with an error.
B. An attempt to start a new session by a user belonging to OTHER_GROUPS fails with an error.
C. The CPU_WAIT_TIME column indicates the total time that sessions in the consumer group waited for
the CPU due to resource management.
D. The CPU_WAIT_TIME column indicates the total time that sessions in the consumer group waited for
the CPU due to I/O waits and latch or enqueue contention.
E. A user belonging to the DSS_QUERIES resource consumer group can create a new session but the
session will be queued.

###

CE

NAME              - Name of the consumer group
ACTIVE_SESSIONS   - Number of currently active sessions in the consumer group
QUEUE_LENGTH      - Number of sessions waiting in the queue
CONSUMED_CPU_TIME - Cumulative amount of CPU time consumed by all sessions in the consumer group
*CPU_WAIT_TIME    - Cumulative amount of time all sessions in the consumer
            group had to wait for CPU because of resource management.
            This does not include waits due to latch or enqueue contention,
            I/O waits, and so on

A wrong
B wrong
C ok *
D wrong
E ok

###

QUESTION 148 / 125

Which statement is true about the loss or damage of a temp file that belongs to the temporary tablespace of a pluggable database (PDB)?
A.    The PDB is closed and the temp file is re-created automatically when the PDB is opened.
B.    The PDB is closed and requires media recovery at the PDB level.
C.    The PDB does not close and the temp file is re-created automatically whenever the container database (CDB) is opened.
D.    The PDB does not close and starts by using the default temporary tablespace defined for the CD

###

C
Corretta
If a temp file belonging to a PDB temporary tablespace is lost or damaged, sessions that need that temporary space (for example, for sorting) receive an error when their SQL statements execute.
The PDB can open with a missing temporary file. If any of the temporary files do not exist when the PDB is opened, they are automatically re-created. They are also automatically recreated at CDB startup.

The PDB does not close (this rules out A and B).
The temp file is therefore re-created when the CDB is opened (or when the PDB is opened).
But the PDB does not necessarily start with the CDB default temporary tablespace - it can have its own (not D).




###

QUESTION 149 / 120

Examine the output:

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     376
Next log sequence to archive   378
Current log sequence           378
Which three types of files are automatically placed in the fast recovery area?

A.    Flashback data archives (FDA)
B.    Archived redo log files
C.    Control file autobackups
D.    Server parameter file (SPFILE)
E.    Recovery Manager (RMAN) backup pieces

###

Not A (the fast recovery area holds flashback logs, not flashback data archives)
B OK - certain
C OK - certain
Not D
E OK (even if calling them backup sets here is a little unsettling)

Overview of the Fast Recovery Area

The fast recovery area can contain
control files,
online redo logs,
archived redo logs,
flashback logs,
and RMAN backups.

Files in the recovery area are permanent or transient.
Permanent files are active files used by the database instance. All files that are not permanent are transient. In general, Oracle Database eventually deletes transient files after they become obsolete under the backup retention policy or have been backed up to tape.

The fast recovery area is an Oracle Database managed space that can be used
to hold RMAN disk backups,
control file autobackups
and archived redo log files.
The files placed in this location are maintained by Oracle Database and the generated file names are maintained in Oracle Managed Files (OMF) format.

###

QUESTION 150 / 123

In your database, there are tablespaces that were read-only when the last backup was taken.
These tablespaces have not been made read/write since then.
You want to perform an incomplete recovery on the database by using a backup control file.
What precaution must you take for the read-only tablespaces before performing an incomplete recovery?

A.    All the read-only tablespaces should be taken offline.
B.    All the read-only tablespaces should be restored separately.
C.    All the read-only tablespaces should be renamed to have the MISSINGnnnn format.
D.    All the read-only tablespaces should be made online with logging disabled.

###

A ok
Take data files from read-only tablespaces offline before doing recovery with a backup control file,
and then bring the files online at the end of media recovery.
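A sketch of the sequence (the datafile path is hypothetical):

```sql
-- Before recovering with a backup control file:
ALTER DATABASE DATAFILE '/u01/oradata/ro_ts01.dbf' OFFLINE;

-- ... RESTORE DATABASE; RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL ...;
-- ... ALTER DATABASE OPEN RESETLOGS;

-- After media recovery completes:
ALTER DATABASE DATAFILE '/u01/oradata/ro_ts01.dbf' ONLINE;
```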

###

QUESTION 151 / 124

Examine the RMAN commands executed in your database:
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO disk;
RMAN> CONFIGURE DEVICETYPE DISK BACKUP TYPE TO BACKUPSET;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

You issue the command:

RMAN> BACKUP DATABASE;

Which two statements are true about the command?

A.    It performs a log switch.
B.    It creates compressed backup sets by using binary compression by default.
C.    It backs up only the used blocks in data files.
D.    It backs up data files, the control file, and the server parameter file.
E.    It creates a backup of only the control file whenever the database undergoes a structural change.

###

C, D

C:
RMAN backup sets automatically use unused block compression.
D:
If CONFIGURE CONTROLFILE AUTOBACKUP is ON (by default it is OFF), then RMAN automatically backs up the control file and server parameter file after every backup and after database structural changes.
Not E: spfile is also backed up.

The command specifies BACKUPSET, not COMPRESSED BACKUPSET (compression is not a default).
A backup set contains only the used blocks (unused blocks are not backed up).
###

QUESTION 152 / 133

You issue the RMAN commands:

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO disk;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

Which three tasks are performed by the BACKUP DATABASE command?

A.    switching the online redo log file
B.    backing up all data files as image copies and archive log files, and deleting those archive log files
C.    backing up only the used blocks in the data files
D.    backing up all used and unused blocks in the data files
E.    backing up all archived log files and marking them as obsolete

###

ABD

A,B,D
You’ll find about A here
https://docs.oracle.com/database/121/BRADV/rcmbckba.htm#BRADV89516

###

QUESTION 153 / 135

Examine the command to duplicate a database:

RMAN> DUPLICATE TARGET DATABASE TO cdb
PLUGGABLE DATABASE pdb1, pdb5;
Which two statements are true about the DUPLICATE command?

A.    The SPFILE is copied along with the data files of the pluggable databases (PDBs). The root and the seed database in the container database (CDB) are also duplicated.
B.    A backup of pdb1 and pdb5 must exist before executing the command.
C.    The duplicate command first creates a backup, and then duplicates the PDBs by using the backup.
D.    An auxiliary instance must be started with the initialization parameter ENABLE_PLUGGABLE_DATABASE set to TRUE.

###

C, D

Not A
Not B (doubtful)
C
D: the init parameter ENABLE_PLUGGABLE_DATABASE must be set to TRUE.

When duplicating a whole CDB or one more PDBs:
•    You must create the auxiliary instance as a CDB.
    To do so, start the instance with the following declaration in the initialization parameter file:
•    enable_pluggable_database=TRUE


QUESTION 154 / 137

Identify three reasons for using a recovery catalog with Recovery Manager (RMAN).

A.    to store backup information of multiple databases in one place
B.    to restrict the amount of space that is used by backups
C.    to maintain a backup for an indefinite period of time by using the KEEP FOREVER clause
D.    to store RMAN scripts that are available to any RMAN client that can connect to target databases registered in the recovery catalog
E.    to automatically delete obsolete backups after a specified period of time

###

A
not B
C
D
Not E - obsolete backups are deleted on demand (DELETE OBSOLETE), not automatically

The answer is ACD.
E is not true because you could do that with the retention policy even if you are not connected to a recovery catalog.
keepOption – Overrides any configured retention policy for this backup so that the backup is not considered obsolete, as shown in Example 2-25.
You can use the KEEP syntax to generate archival database backups that satisfy business or legal requirements. The KEEP setting is an attribute of the backup set (not individual backup piece) or image copy.
Note: You cannot use KEEP with BACKUP BACKUPSET.
With the KEEP syntax, you can keep the backups so that they are considered obsolete after a specified time (KEEP UNTIL), or make them never obsolete (KEEP FOREVER). As shown in Example 2-26, you must be connected to a recovery catalog when you specify KEEP FOREVER.
Note: You can use CHANGE to alter the status of a backup generated with KEEP.
Note: You cannot use KEEP UNTIL with PLUS ARCHIVELOG.

###

QUESTION 155 / 140

The CATDB12c database contains an Oracle Database 12c catalog schema owned by the rc12c user.
The CATDB11 database contains an Oracle Database 11g catalog schema owned by the rc11 user.
A database with DBID=1423241 is registered in the CATDB11 catalog.
Both the recovery catalog databases are open.

In the CATDB12c database, you execute the commands:

$ rman

RMAN> CONNECT CATALOG rc12c/pass12c@catdb12c

RMAN> IMPORT CATALOG rc11/pwdcat11@catdb11 DBID=1423241;

What is the outcome of the import?

A.    It fails because the target database and recovery catalog database are of different versions.
B.    It succeeds and all global scripts in the RC11 catalog that have the same name as existing global scripts in the RC12C catalog are automatically renamed.
C.    It succeeds but the database is not automatically registered in the RC12C catalog.
D.    It fails because RMAN is not connected to the target database with DBID=1423241.

###

Answer: A
https://docs.oracle.com/database/121/RCMRF/rcmsynta026.htm#RCMRF198
The version of the source recovery catalog schema must be equal to the current version of the destination recovery catalog schema. If they are not equal, then upgrade the schemas to the same version.

###

QUESTION 156 / 141
You issue the command:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Which statement is true about the command?

A.    It creates a copy of the control file and stores it in the location specified in the diagnostic_dest initialization parameter.
B.    It creates a file that contains the SQL statement, which is required to re-create the control file.
C.    It updates the alert log file with the location and contents of the control file.
D.    It creates a binary backup of the control file.

###

B
•  Back up the control file to a binary file (duplicate of existing control file) using the following statement (D)
ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
•  Produce SQL statements that can later be used to re-create your control file (B)
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This command writes a SQL script to a trace file where it can be captured and edited to reproduce the control file. View the alert log to determine the name and location of the trace file.
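The trace file hunt can be avoided with the TO TRACE AS variant, which writes the script to a file you name; for contrast, the binary form (answer D's behavior) is also shown:

```sql
-- Write the CREATE CONTROLFILE script to a chosen file
-- instead of locating the trace file via the alert log:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/ctlfile.sql';

-- Binary duplicate of the current control file (answer D):
ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
```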

###

QUESTION 157 / 118
View the SPFILE parameter settings in the Exhibit.

You issue this command and get errors:

SQL> STARTUP
ORA-00824: cannot set SGA_TARGET or MEMORY_TARGET due to existing internal settings, see alert log for more information

Why did the instance fail to start?

A.    because pga_aggregate_target is not set
B.    because statistics_level is set to basic
C.    because memory_target and memory_max_target cannot be equal
D.    because sga_target and memory_target are both set

###

B

SQL> startup nomount
ORA-01078: failure in processing system parameters
ORA-00824: cannot set SGA_TARGET or MEMORY_TARGET due to existing internal settings
ORA-00848: STATISTICS_LEVEL cannot be set to BASIC with SGA_TARGET or MEMORY_TARGET
SQL>

Explanation:
Setting SGA Target Size
You enable the automatic shared memory management feature by setting the SGA_TARGET
parameter to a nonzero value. This parameter sets the total size of the SGA. It replaces the
parameters that control the memory allocated for a specific set of individual components, which
are now automatically and dynamically resized (tuned) as needed.
Note:
The STATISTICS_LEVEL initialization parameter must be set to TYPICAL (the default) or ALL for
automatic shared memory management to function.
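One way out of this state, since the instance will not start with the current SPFILE, is to rebuild a PFILE, correct STATISTICS_LEVEL, and regenerate the SPFILE. A sketch with hypothetical file paths:

```sql
-- Extract a text PFILE from the broken SPFILE (works from an idle instance):
SQL> CREATE PFILE='/tmp/init_fix.ora' FROM SPFILE='/u01/app/oracle/dbs/spfileorcl.ora';

-- Edit /tmp/init_fix.ora so that: statistics_level=TYPICAL

SQL> STARTUP PFILE='/tmp/init_fix.ora';
SQL> CREATE SPFILE FROM PFILE='/tmp/init_fix.ora';
```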


###

QUESTION 158 / 121
Which two statements are true about Resource Manager plans
for individual pluggable databases (PDB plans) in a multitenant container database (CDB)?

A.    If no PDB plan is enabled for a pluggable database, then all sessions for that PDB are treated to an equal degree of the resource share of that PDB.
B.    In a PDB plan, subplans may be used with up to eight consumer groups.
C.    If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups across all PDBs in the CDB.
D.    If no PDB plan is enabled for a pluggable database, then the PDB share in the CDB plan is dynamically calculated.
E.    If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups based on the shares provided to the PDB in the CDB plan and the shares provided to the consumer groups in the PDB plan.

###

AE

A: Setting a PDB resource plan is optional. If not specified, all sessions within the PDB are treated equally. In a non-CDB database, workloads within a database are managed with resource plans. In a PDB, workloads are also managed with resource plans, also called PDB resource plans. The functionality is similar except for the following differences:

Non-CDB database:
Multi-level resource plans
Up to 32 consumer groups
Subplans

PDB:
Single-level resource plans only
Up to 8 consumer groups (not B)
No subplans


https://docs.oracle.com/database/121/ADMIN/cdb_dbrm.htm#ADMIN13774

When you do not explicitly define directives for a PDB, the PDB uses the default directive.
Procedures for managing the directives for PDBs in the CDB plan (run in the root, inside a pending area created with CREATE_PENDING_AREA):
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE
DBMS_RESOURCE_MANAGER.DELETE_CDB_PLAN_DIRECTIVE
DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE

To create a resource plan inside the PDB:

In SQL*Plus, ensure that the current container is a PDB.
Create a pending area using the CREATE_PENDING_AREA procedure.
Create, modify, or delete consumer groups using the CREATE_CONSUMER_GROUP procedure.
Map sessions to consumer groups using the SET_CONSUMER_GROUP_MAPPING procedure.
Create the PDB resource plan using the CREATE_PLAN procedure.
Create PDB resource plan directives using the CREATE_PLAN_DIRECTIVE procedure.
Validate the pending area using the VALIDATE_PENDING_AREA procedure.
Submit the pending area using the SUBMIT_PENDING_AREA procedure.
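The steps above can be sketched as a single PL/SQL block, run while connected to the PDB. Plan, group, user, and mapping names here are invented for illustration:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- consumer group and session mapping
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OLTP_GROUP',
    comment        => 'online users');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'APPUSER',
    consumer_group => 'OLTP_GROUP');
  -- single-level PDB resource plan with one directive
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'PDB_PLAN',
    comment => 'PDB resource plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'PDB_PLAN',
    group_or_subplan => 'OLTP_GROUP',
    comment          => 'favor OLTP sessions',
    mgmt_p1          => 75);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```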

###

QUESTION 158 bis / k

The following steps are typically used to create a complex resource plan using the command-line interface:

1. Creation of resource plans
2. Creation of a pending area
3. Creation of plan directives
4. Creation of resource consumer groups
5. Validation of the pending area
6. Submission of the pending area

In which sequence should a complex resource plan be created?

A. 1, 2, 4, 3, 5
B. 1, 3, 4, 2, 6
C. 2, 1, 4, 3, 6
D. 2, 4, 1, 3, 5

###

C

###

QUESTION 159 / 122

In a database supporting an OLTP workload, tables are frequently updated on both key and non-keycolumns.
Reports are also generated by joining multiple tables.
Which table organization or type would provide the best performance for this hybrid workload?

A.    heap table with a primary key index
B.    external table
C.    hash clustered table
D.    global temporary table
E.    index clustered table

###

A
(to be verified)


###

QUESTION 160 / 132

Which statement is true about Enterprise Manager (EM) express in Oracle Database 12c?

A.    By default, EM express is available for a database after database creation.
B.    You can use EM express to manage multiple databases running on the same server.
C.    You can perform basic administrative tasks for pluggable databases by using the EM express interface.
D.    You cannot start up or shut down a database instance by using EM express.
E.    You can create and configure pluggable databases by using EM express.

###   

C

A and D also seem correct

EM Express is built inside the database. Note:

Oracle Enterprise Manager Database Express (EM Express)
is a web-based database management tool that is built inside the Oracle Database.
It supports key performance management and basic database administration functions.
From an architectural perspective, EM Express has no mid-tier or middleware components,
ensuring that its overhead on the database server is negligible.

The three main categories of administrative tasks are:
    Configuration
    Storage
    Security


###

QUESTION 161 / 134

As part of a manual upgrade process, after installing the software for Oracle Database 12c
and preparing the new Oracle home, you shut down the existing single-instance database.
Which step should you perform next to start the upgrade of the database?

A.    Start up the database instance by using the new location of the server parameter file and run the catuppst.sql script to generate informational messages and log files during the upgrade.
B.    Start up the database instance by using the new location of the server parameter file and run the catctl.pl script from the new Oracle home to use parallel upgrade options that reduce down time.
C.    Start up the database instance by using the STARTUP UPGRADE command and gather fixed object statistics to minimize the time needed for recompilation.
D.    Start up the database instance by using the STARTUP UPGRADE command, which opens the existing database, and then performs additional upgrade operations.

###

B
startup upgrade pfile=pfile_name

Run the catctl.pl script from the new Oracle home as described in this step.
The Parallel Upgrade Utility, catctl.pl, provides parallel upgrade options that reduce downtime.

###

QUESTION 162 / 136

Which three statements are true regarding the use of the Database Migration Assistant for Unicode (DMU)?

A.    A DBA can check specific tables with the DMU
B.    The database to be migrated must be opened read-only.
C.    The release of the database to be converted can be any release since 9.2.0.8.
D.    The DMU can report columns that are too long in the converted character set.
E.    The DMU can report columns that are not represented in the converted character set.

###

ADE

Explanation/Reference:

A: In certain situations, you may want to exclude selected columns
or tables from scanning or conversion steps of the migration process.

D: Exceed column limit

The cell data will not fit into a column after conversion.
E: Need conversion
the cell data needs to be converted, because its binary representation in
the target character set is different than the representation in the current character set,
but neither length limit issues nor invalid representation issues have been found.



###

QUESTION 163 / 138

Which two statements are true regarding Oracle Data Pump?

A.    EXPDP and IMPDP are the client components of Oracle Data Pump.
B.    DBMS_DATAPUMP PL/SQL packages can be used independently of the Data Pump clients.
C.    Oracle Data Pump export and import operations can be performed only by users with the SYSDBA privilege.
D.    Oracle Data Pump imports can be done from the export files generated in the Original Export Utility.
E.    EXPDP and IMPDP use the procedures provided by DBMS_METADATA to execute export and import commands.

###

AB

###

QUESTION 164 / 142

You create a default Flashback Data Archive FLA1 and enable it for the EMPLOYEES table in the HR schema.
After a few days, you want to alter the EMPLOYEES table by executing the command:
SQL> ALTER TABLE EMPLOYEES ADD PHONE NUMBER(12);

Which statement is true about the execution of the command?

A.    It gives an error because DDL statements cannot be executed on a table that is enabled for Flashback Data Archive.
B.    It executes successfully and all metadata related to the EMPLOYEES table before altering the table definition is purged from Flashback Data Archive.
C.    It executes successfully and continues to store metadata related to the EMPLOYEES table.
D.    It executes successfully but Flashback Data Archive is disabled for the EMPLOYEES table.

###

C
Flashback data archives retain historical data across data definition language (DDL)
changes to the database as long as the DDL change does not affect the structure of the table. The one exception to this rule is that flashback data archives do retain historical data when a column is added to the table.
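A minimal sketch reproducing the scenario (archive and table names as in the question; the tablespace name is invented):

```sql
-- Create a default Flashback Data Archive and enable it for the table:
CREATE FLASHBACK ARCHIVE DEFAULT fla1
  TABLESPACE fda_ts QUOTA 10G RETENTION 1 YEAR;

ALTER TABLE hr.employees FLASHBACK ARCHIVE fla1;

-- This DDL succeeds and history tracking continues (answer C):
ALTER TABLE hr.employees ADD (phone NUMBER(12));
```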

###

QUESTION 165 / 143

Examine the commands executed in CDB$ROOT of your multitenant container database (CDB)
that has multiple pluggable databases (PDBs):

SQL> CREATE ROLE c##role1 CONTAINER=ALL;
SQL> GRANT CREATE SESSION, CREATE TABLE TO c##role1 CONTAINER=ALL;
SQL> CREATE USER c##admin IDENTIFIED BY orcl123;
SQL> GRANT c##role1 TO c##admin CONTAINER=ALL;
SQL> GRANT SELECT ON DBA_USERS TO c##role1 CONTAINER=ALL;

Which statement is true about granting the SELECT privilege on the DBA_USERS view to the c##role1 role?

A.    The command fails and gives an error because object privileges cannot be granted to a common user.
B.    The command fails because container is not set to current.
C.    The command succeeds and the common user c##admin can create a session and query the DBA_USERS view in CDB$ROOT and all the PDBs.
D.    The command succeeds and the common user c##admin can create a session in CDB$ROOT and all the PDBs, but can only query the DBA_USERS view in CDB$ROOT.
E.    The command succeeds and the common user c##admin can create a session and query the DBA_USERS view only in CDB$ROOT.

###

C
Verified by testing.

###

QUESTION 166

Which three statements are true about a job chain?

A. It can be executed using event-based or time-based schedules.
B. It can contain a nested chain of jobs.
C. It can be used to implement dependency-based scheduling.
D. It cannot invoke the same program or nested chain in multiple steps in the chain.
E. It cannot have more than one dependency.

###

ABC
Chains are the means by which you can implement dependency based scheduling, in which jobs are started depending on the outcomes of one or more previous jobs.
DBMS_SCHEDULER.DEFINE_CHAIN_STEP
DBMS_SCHEDULER.DEFINE_CHAIN_EVENT_STEP
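A sketch of a two-step dependency-based chain built with these procedures; all chain, step, and program names are invented:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'load_chain');
  -- each step runs an existing Scheduler program
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('load_chain', 'step_load',  'load_prog');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('load_chain', 'step_index', 'rebuild_prog');
  -- dependency-based scheduling (answer C): the second step starts
  -- only when the first step succeeds
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('load_chain', 'TRUE',
                                   'START step_load');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('load_chain', 'step_load SUCCEEDED',
                                   'START step_index');
  DBMS_SCHEDULER.ENABLE('load_chain');
  -- the enabled chain can then be launched by a time-based or
  -- event-based job (answer A)
END;
/
```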

###

QUESTION 167

Because of logical corruption of data in a table, you want to recover the table from an
RMAN backup to a specified point in time.
Examine the steps to recover this table from an RMAN backup:

1.Determine which backup contains the table that needs to be recovered.
2.Issue the recover table RMAN command with an auxiliary destination defined and the
point in time specified.
3.Import the Data Pump export dump file into the auxiliary instance.
4.Create a Data Pump export dump file that contains the recovered table on a target
database.
Identify the required steps in the correct order.

A. 1, 4, 3, 2
B. 1, 2, 4
C. 1, 4, 3
D. 1, 2

###

B (1 2 4)

1. (1)Determines which backup contains the tables or table partitions that need to be recovered, based on the point in time specified for the recovery.
2. (2)Creates an auxiliary database and recovers the specified tables or table partitions, until the specified point in time, into this auxiliary database.
3. (4)Creates a Data Pump export dump file that contains the recovered tables or table partitions.
4. (Optional) Imports the Data Pump export dump file into the target instance.
5. (Optional) Renames the recovered tables or table partitions in the target database.
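Steps 1, 2, and 4 collapse into a single RMAN command; a sketch with invented schema, time, and paths:

```sql
RMAN> RECOVER TABLE hr.employees
      UNTIL TIME "TO_DATE('2023-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
      AUXILIARY DESTINATION '/u01/aux'       -- auxiliary instance files
      DATAPUMP DESTINATION '/u01/dump'
      DUMP FILE 'emp_recover.dmp'
      NOTABLEIMPORT;  -- keep only the export dump; skip the import
```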

###

QUESTION 168

Examine the command:
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
In which two scenarios is this command required?

A. The current online redo log file is missing.
B. A data file belonging to a noncritical tablespace is missing.
C. All the control files are missing.
D. The database backup is older than the control file backup.
E. All the data files are missing.

###

CD

"In cancel-based recovery, recovery proceeds by prompting you with the suggested filenames of archived redo log files.
Recovery stops when you specify CANCEL instead of a filename or when all redo has been applied to the datafiles.

Cancel-based recovery is better than change-based or time-based recovery if you want to control which archived log terminates recovery. For example,
you may know that you have lost all logs past sequence 1234,
so you want to cancel recovery after log 1233 is applied.
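A sketch of the session, assuming a backup control file has already been restored:

```sql
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- press Enter to apply each suggested archived log, then, after
-- the last good log (e.g. sequence 1233) has been applied, type:
CANCEL
-- a RESETLOGS open is mandatory after recovery with a backup control file:
SQL> ALTER DATABASE OPEN RESETLOGS;
```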

###

QUESTION 169
Which two are prerequisites for setting up Flashback Data Archive?

A. Fast Recovery Area should be defined.
B. Undo retention guarantee should be enabled.
C. Supplemental logging should be enabled.
D. Automatic Undo Management should be enabled.
E. All users using Flashback Data Archive should have unlimited quota on the Flashback
   Data Archive tablespace.
F. The tablespace in which the Flashback Data Archive is created should have Automatic
   Segment Space Management (ASSM) enabled.

###

D F
The quota can be adjusted dynamically at any time by adding more tablespaces to the flashback archive or by modifying the quota on a tablespace which is already part of the flashback archive.
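Both adjustments in a sketch; archive and tablespace names are invented:

```sql
-- Change the quota on a tablespace already in the archive:
ALTER FLASHBACK ARCHIVE fla1
  MODIFY TABLESPACE fda_ts QUOTA 20G;

-- Add another tablespace to the archive:
ALTER FLASHBACK ARCHIVE fla1
  ADD TABLESPACE fda_ts2 QUOTA 10G;
```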

###

QUESTION 170
The environment variable ORACLE_BASE is set to /u01/app/oracle
and ORACLE_HOME is set to /u01/app/oracle/product/12.1.0/db1.
You want to check the diagnostic files created as part of the Automatic Diagnostic Repository (ADR).
Examine the initialization parameters set in
your database. What is the location of the ADR base?

NAME                   TYPE    VALUE
audit_file_dest        string  /u01/app/oracle/admin/eml2rep/adump
background_dump_dest   string
core_dump_dest         string
db_create_file_dest    string
db_recovery_file_dest  string  /u01/app/oracle/fast_recovery_area
diagnostic_dest        string


A. It is set to /u01/app/oracle/product/12.1.0/db_1/log.
B. It is set to /u01/app/oracle/admin/eml2rep/adump.
C. It is set to /u01/app/oracle.
D. It is set to /u01/app/oracle/flash_recovery_area.

###

C
ADR_BASE

The Automatic Diagnostic Repository (ADR) is a directory structure that is stored outside of the database. It is therefore available for problem diagnosis when the database is down.
The ADR root directory is known as ADR base. Its location is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null, the database sets DIAGNOSTIC_DEST upon startup as follows:
If environment variable ORACLE_BASE is set, DIAGNOSTIC_DEST is set to the directory designated by ORACLE_BASE.
If environment variable ORACLE_BASE is not set, DIAGNOSTIC_DEST is set to ORACLE_HOME/log.
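The effective locations can be confirmed from a running instance via V$DIAG_INFO:

```sql
SQL> SELECT name, value
     FROM   v$diag_info
     WHERE  name IN ('ADR Base', 'ADR Home', 'Diag Trace');
-- With ORACLE_BASE=/u01/app/oracle and diagnostic_dest unset,
-- 'ADR Base' resolves to /u01/app/oracle (answer C).
```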

###

QUESTION 171

You want to export the pluggable database (PDB) hr_pdb1 from the multitenant container
database (CDB) CDB1 and import it into the CDB2 CDB as the emp_pdb1 PDB.

Examine the list of possible steps required to perform the task:
1.Create a PDB named emp_pdb1.
2.Export the hr_pdb1 PDB by using the full clause.
3.Open the emp_pdb1 PDB.
4.Mount the emp_pdb1 PDB.
5.Synchronize the emp_pdb1 PDB in restricted mode.
6.Copy the dump file to the Data Pump directory.
7.Create a Data Pump directory in the emp_pdb1 PDB.
8.Import data into emp_pdb1 with the full and remap clauses.
9.Create the same tablespaces in emp_pdb1 as in hr_pdb1 for new local user objects.
Identify the required steps in the correct order.

A. 2, 1, 3, 7, 6, and 8
B. 2, 1, 4, 5, 3, 7, 6, 9, and 8
C. 2, 1, 3, 7, 6, 9, and 8
D. 2, 1, 3, 5, 7, 6, and 8

###

C
The trick here is to understand that all the data is exported using the Data Pump utility, and therefore creating the tablespaces is necessary.

###

QUESTION 172
You wish to create jobs to satisfy these requirements:
1. Automatically bulk load data from a flat file.
2. Rebuild indexes on the SALES table after completion of the bulk load.
How would you create these jobs?

A. Create both jobs by using Scheduler raised events.
B. Create both jobs using application raised events.
C. Create one job to rebuild indexes using application raised events and another job to
   perform bulk load using Scheduler raised events.
D. Create one job to rebuild indexes using Scheduler raised events and another job to
   perform bulk load by using events raised by the application.

###

C
The bulk loader would be started in response to a file watcher scheduler event and the indexes would be rebuilt in response to an application event raised by the bulk loader.
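A sketch of the two event-based jobs; the program, queue, file watcher, and event names are all invented, and the programs are assumed to exist already:

```sql
BEGIN
  -- Job 1: bulk load fires on a Scheduler-raised file watcher event
  -- (queue_spec names the file watcher):
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'bulk_load_job',
    program_name    => 'load_prog',
    event_condition => NULL,
    queue_spec      => 'flat_file_watcher',
    enabled         => TRUE);

  -- Job 2: index rebuild fires on an application-raised event that
  -- the loader enqueues into a user queue when it finishes:
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'rebuild_idx_job',
    program_name    => 'rebuild_prog',
    event_condition => 'tab.user_data.event_name = ''LOAD_DONE''',
    queue_spec      => 'load_events_q, event_agent',
    enabled         => TRUE);
END;
/
```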

###

QUESTION 173

Your Oracle 12c multitenant container database (CDB) contains multiple pluggable
databases (PDBs). In the PDB hr_pdb, the common user c##admin and the local user
b_admin have only the connect privilege.
You create a common role c##role1 with the create table and select any table privileges.
You then execute the commands:

SQL> GRANT c##role1 TO c##admin CONTAINER=ALL;
SQL> CONN sys/oracle@HR_PDB as sysdba
SQL> GRANT c##role1 TO b_admin CONTAINER=CURRENT;
Which two statements are true?

A. C##admin can create and select any table, and grant the c##role1 role to users only in
   the root container.
B. B_admin can create and select any table in both the root container and hr_pdb.
C. c##admin can create and select any table in the root container and all the PDBs.
D. B_admin can create and select any table only in hr_pdb.
E. The grant c##role1 to b_admin command returns an error because container should be
   set to ALL.

###

C D

###

QUESTION 174

Examine the commands executed in the root container of your multitenant container
database (CDB) that has multiple pluggable databases (PDBs):
SQL> CREATE USER c##a_admin IDENTIFIED BY orcl123;
SQL> CREATE ROLE c##role1 CONTAINER=ALL;
SQL> GRANT CREATE VIEW TO C##roleI CONTAINER=ALL;
SQL> GRANT c##role1 TO c##a_admin CONTAINER=ALL;
SQL> REVOKE c##role1 FROM c##a_admin;
What is the result of the revoke command?

A. It executes successfully and the c##role1 role is revoked from the c##a_admin user only
   in the root container.
B. It fails and reports an error because the container=all clause is not used.
C. It executes successfully and the c##rocl1 role is revoked from the c##a_admin user in the
   root database and all the PDBs.
D. It fails and reports an error because the comtainer=current clause is not used.

###

B

A is not correct; B is correct.

A common privilege or role granted with CONTAINER=ALL cannot be revoked without specifying CONTAINER (because the default is CONTAINER=CURRENT).

If the current container is the root:
Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege, or role from a common user or common role. The privilege or role is revoked from the user or role only in the root. This clause does not revoke privileges granted with CONTAINER = ALL.
Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a common object, or role from a common user or common role. The privilege or role is revoked from the user or role across the entire CDB. This clause can revoke only a privilege or role granted with CONTAINER = ALL from the specified common user or common role. This clause does not revoke privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges that depend on the commonly granted privilege being revoked are also revoked.
If you omit this clause, then CONTAINER = CURRENT is the default.


SQL> REVOKE c##role1 FROM c##a_admin;
REVOKE c##role1 FROM c##a_admin
*
ERROR at line 1:
ORA-01951: ROLE ‘C##ROLE1’ not granted to ‘C##A_ADMIN’

SQL> REVOKE c##role1 FROM c##a_admin CONTAINER=ALL;

Revoke succeeded.

###

QUESTION 175

Examine the RMAN command:
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
Which prerequisite must be met
before accomplishing the backup?

A. Oracle wallet for the encryption must be set up.
B. All the tablespaces in the database must be encrypted.
C. The password for the encryption must be set up.
D. Oracle Database Vault must be enabled.

###

A

Configured encryption uses transparent encryption. For transparent encryption, you must create a wallet, and it must be open.

Transparent encryption then occurs automatically after you have issued the
CONFIGURE ENCRYPTION FOR DATABASE ON or CONFIGURE ENCRYPTION FOR TABLESPACE ON command.
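A sketch of the wallet setup that must precede the backup; the keystore path and password are invented, and the keystore location is assumed to be configured (e.g. in sqlnet.ora):

```sql
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
     '/u01/app/oracle/admin/orcl/wallet' IDENTIFIED BY "WalletPwd1";
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN
     IDENTIFIED BY "WalletPwd1";
SQL> ADMINISTER KEY MANAGEMENT SET KEY
     IDENTIFIED BY "WalletPwd1" WITH BACKUP;

-- With the wallet created and open, the commands from the question work:
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```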

###

QUESTION 176

A database is running in archivelog mode. The database contains locally managed
tablespaces. Examine the RMAN command:
RMAN> BACKUP
AS COMPRESSED BACKUPSET
SECTION SIZE 1024M
DATABASE;
Which statement is true about the execution of the command?

A. The backup succeeds only if all the tablespaces are locally managed.
B. The backup succeeds only if the RMAN default device for backup is set to disk.
C. The backup fails because you cannot specify section size for a compressed backup.
D. The backup succeeds and only the used blocks are backed up with a maximum backup
   piece size of 1024 MB.

###

D

COMPRESSED Enables binary compression.
RMAN compresses the data written into the backup set to reduce the overall size of the backup set. All backups that create backup sets can create compressed backup sets. Restoring compressed backup sets is no different from restoring uncompressed backup sets.
RMAN applies a binary compression algorithm as it writes data to backup sets. This compression is similar to the compression provided by many media manager vendors. When backing up to a locally attached tape device, compression provided by the media management vendor is usually preferable to the binary compression provided by BACKUP AS COMPRESSED BACKUPSET. Therefore, use uncompressed backup sets and turn on the compression provided by the media management vendor when backing up to locally attached tape devices. You should not use RMAN binary compression and media manager compression together.
Some CPU overhead is associated with compressing backup sets. If the target database is running at or near its maximum load, then you may find the overhead unacceptable. In most other circumstances, compressing backup sets saves enough disk space to be worth the CPU overhead.
SECTION SIZE sizeSpec Specifies the size of each backup section produced during a data file backup.
By setting this parameter, RMAN can create a multisection backup. In a multisection backup, RMAN creates a backup piece that contains one file section, which is a contiguous range of blocks in a file. All sections of a multisection backup are the same size. You can create a multisection backup for a data file, but not a data file copy.
File sections enable RMAN to create multiple steps for the backup of a single large data file. RMAN channels can process each step independently and in parallel, with each channel producing one section of a multisection backup set.
If you specify a section size that is larger than the size of the file, then RMAN does not use multisection backup for the file. If you specify a small section size that would produce more than 256 sections, then RMAN increases the section size to a value that results in exactly 256 sections.
Depending on where you specify this parameter in the RMAN syntax, you can specify different section sizes for different files in the same backup job.
Note: You cannot use SECTION SIZE with MAXPIECESIZE or with INCREMENTAL LEVEL 1.

###

QUESTION 177 (from 1Z0-060 ... to be studied)
http://www.aiotestking.com/oracle/which-three-situations-will-data-not-be-redacted-2/


###

QUESTION 178 (encountered on the test and not present in the dumps)

You install Oracle Grid Infrastructure standalone and issue the following command:
crsctl start has
Which two existing components get automatically added to the Oracle Restart configuration?

A. Oracle CSSD services
B. The database whose instance is running
C. Oracle Notification Services
D. Oracle healthcheck services
E. Oracle Net Listener
