
2 SQL Performance Methodology

This chapter describes the recommended methodology for SQL tuning. This chapter contains the following topics:

2.1 Designing Your Application

This section contains the following topics:

2.1.1 Data Modeling

Data modeling is important to successful application design. Perform this modeling in a way that accurately represents the business practices. Heated debates may occur about the correct data model, but the important thing is to apply the greatest modeling effort to the entities affected by the most frequent business transactions.

In the modeling phase, there is a great temptation to spend too much time modeling non-core data elements, which increases development lead times. Modeling tools can rapidly generate schema definitions and are useful when a fast prototype is required.

2.1.2 Writing Efficient Applications

In the design and architecture phase of any system development, care should be taken to ensure that the application developers understand SQL execution efficiency. To achieve this goal, the development environment must support the following characteristics:

  • Good database connection management

    Connecting to the database is an expensive operation that does not scale well. Therefore, a best practice is to minimize the number of concurrent connections to the database. A simple system, where a user connects at application initialization, is ideal. However, in a web-based or multitiered application in which application servers multiplex database connections to users, this approach can be difficult. Design these types of applications to pool database connections rather than reestablish connections for each user request.

  • Good cursor usage and management

    Minimizing parsing activity on the system is just as important as maintaining user connections. Parsing is the process of interpreting a SQL statement and creating an execution plan for it. This process has many phases, including syntax checking, security checking, execution plan generation, and loading shared structures into the shared pool. There are two types of parse operations:

    • Hard parsing

      A SQL statement is submitted for the first time, and no match is found in the shared pool. Hard parses are the most resource-intensive and the least scalable, because they perform all the operations involved in a parse.

    • Soft parsing

      A SQL statement is submitted for the first time, and a match is found in the shared pool. The match can be the result of previous execution by another user. The SQL statement is shared, which is optimal for performance. However, soft parses are not ideal, because they still require syntax and security checking, which consume system resources.

    Because parsing activity should be kept to a minimum, application developers should design their applications to parse SQL statements once and execute them many times. This is done through cursors. Experienced SQL programmers should be familiar with the concept of opening and re-executing cursors.

  • Effective use of bind variables

    Application developers must also ensure that SQL statements are shared within the shared pool. To achieve this goal, use bind variables to represent the parts of the query that change from execution to execution. If this is not done, then the SQL statement is likely to be parsed once and never re-used by other users. To ensure that SQL is shared, use bind variables and do not use string literals with SQL statements. For example:

    Statement with string literals:

    SELECT * FROM employees 
      WHERE last_name LIKE 'KING';
    

    Statement with bind variables:

    SELECT * FROM employees 
      WHERE last_name LIKE :1;
    

    The following example shows the results of some tests on a simple OLTP application:

    Test                         #Users Supported
    No Parsing all statements           270 
    Soft Parsing all statements         150
    Hard Parsing all statements          60
    Re-Connecting for each Transaction   30
    

    These tests were performed on a four-CPU computer. The differences increase as the number of CPUs on the system increases.
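    The following is a minimal sketch, not part of the original test results, showing the parse-once, execute-many pattern with a bind variable, and one way to check cursor sharing afterward. The loop count, column list, and V$SQL filter are illustrative only:

    DECLARE
      CURSOR emp_cur (p_last_name VARCHAR2) IS
        SELECT employee_id, salary
        FROM   employees
        WHERE  last_name LIKE p_last_name;  -- cursor parameter becomes a bind variable
      v_id     employees.employee_id%TYPE;
      v_salary employees.salary%TYPE;
    BEGIN
      FOR i IN 1 .. 100 LOOP
        OPEN emp_cur ('KING');              -- re-executes the same shared cursor
        FETCH emp_cur INTO v_id, v_salary;
        CLOSE emp_cur;
      END LOOP;
    END;
    /

    -- One row with a high execution count indicates good sharing; many
    -- near-identical rows differing only in literals indicate a problem.
    -- (The exact normalized statement text stored in V$SQL may differ.)
    SELECT sql_id, executions, sql_text
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT EMPLOYEE_ID%'
    ORDER  BY executions DESC;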

2.2 Deploying Your Application

This section contains the following topics:

2.2.1 Deploying in a Test Environment

The testing process consists mainly of functional and stability testing. At some point in the process, you must also perform performance testing.

The following list describes some simple rules for performance testing an application. If the results are correctly documented, then they provide important information for the production application and for the capacity planning process after the application has gone live.

  • Use the Automatic Database Diagnostic Monitor (ADDM) and SQL Tuning Advisor for design validation

  • Test with realistic data volumes and distributions

    All testing must be done with fully populated tables. The test database should contain data representative of the production system in terms of data volume and cardinality between tables. All the production indexes should be built, and the schema statistics should be populated correctly (see the sketch at the end of this list).

  • Use the correct optimizer mode

    Perform all testing with the optimizer mode that you plan to use in production.

  • Test single-user performance

    Test a single user on an idle or lightly used database for acceptable performance. If a single user cannot achieve acceptable performance under ideal conditions, then multiple users cannot achieve acceptable performance under real conditions.

  • Obtain and document plans for all SQL statements

    Obtain an execution plan for each SQL statement. Use this process to verify that the optimizer is obtaining an optimal execution plan, and that the relative cost of the SQL statement is understood in terms of CPU time and physical I/Os. This process assists in identifying the heavy use transactions that require the most tuning and performance work in the future.

  • Attempt multiuser testing

    This process is difficult to perform accurately, because user workload and profiles might not be fully quantified. However, transactions performing DML statements should be tested to ensure that there are no locking conflicts or serialization problems.

  • Test with the correct hardware configuration

    Test with a configuration as close to the production system as possible. Using a realistic system is particularly important for network latencies, I/O subsystem bandwidth, and processor type and speed. Failing to use this approach may result in an incorrect analysis of potential performance problems.

  • Measure steady state performance

    When benchmarking, it is important to measure the performance under steady state conditions. Each benchmark run should have a ramp-up phase, where users are connected to the application and gradually start performing work on the application. This process allows for frequently cached data to be initialized into the cache and single execution operations—such as parsing—to be completed before the steady state condition. Likewise, after a benchmark run, a ramp-down period is useful so that the system frees resources, and users cease work and disconnect.
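The following is a minimal sketch, not part of the original checklist, that scripts two of the items above for a hypothetical SH test schema: populating representative optimizer statistics and obtaining a documented execution plan. The schema name and test query are assumptions for illustration:

-- Confirm the optimizer mode matches the production setting (SQL*Plus)
SHOW PARAMETER optimizer_mode

-- Populate schema statistics so the test optimizer sees production-like inputs
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SH'
,   estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
,   cascade          => TRUE   -- gather index statistics as well
  );
END;
/

-- Obtain and document the plan for a statement under test
EXPLAIN PLAN FOR
  SELECT COUNT(*)
  FROM   sh.sales
  WHERE  promo_id = 999;       -- illustrative predicate

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);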

2.2.2 Rollout Strategies

When new applications are rolled out, two strategies are commonly adopted:

  • Big Bang approach - all users migrate to the new system at once

  • Trickle approach - users slowly migrate from existing systems to the new one

Both approaches have merits and disadvantages. The Big Bang approach depends on thorough testing of the application at the required scale, but has the advantage of minimal data conversion and synchronization with the old system, because it is simply switched off. The Trickle approach allows debugging of scalability issues as the workload increases, but might mean that data must be migrated to and from legacy systems as the transition takes place.

It is difficult to recommend one approach over the other, because each technique has associated risks that could lead to system outages as the transition takes place. Certainly, the Trickle approach allows profiling of real users as they are introduced to the new application, and allows the system to be reconfigured while only affecting the migrated users. This approach affects the work of the early adopters, but limits the load on support services. Thus, unscheduled outages only affect a small percentage of the user population.

The decision on how to roll out a new application is specific to each business. Any adopted approach has its own unique pressures and stresses. The more knowledge you derive from the testing process, the better you can judge which rollout approach is best.


17 Gathering Diagnostic Data with SQL Test Case Builder

A SQL test case is a set of information that enables a developer to reproduce the execution plan for a specific SQL statement that has encountered a performance problem. SQL Test Case Builder is a tool that automatically gathers information needed to reproduce the problem in a different database instance.

This chapter contains the following topics:

17.1 Purpose of SQL Test Case Builder

In many cases, a reproducible test case makes it easier to resolve SQL-related problems. SQL Test Case Builder automates the sometimes difficult and time-consuming process of gathering and reproducing as much information as possible about a problem and the environment in which it occurred.

The output of SQL Test Case Builder is a set of scripts in a predefined directory. These scripts contain the commands required to re-create all the necessary objects and the environment. After the test case is ready, you can create a zip file of the directory and move it to another database, or upload the file to Oracle Support.

17.2 Concepts for SQL Test Case Builder

This section contains the following topics:

17.2.1 SQL Incidents

In the fault diagnosability infrastructure of Oracle Database, an incident is a single occurrence of a problem. A SQL incident is a SQL-related problem. When a problem (critical error) occurs multiple times, the database creates an incident for each occurrence. Incidents are timestamped and tracked in the Automatic Diagnostic Repository (ADR). Each incident is identified by a numeric incident ID, which is unique within the ADR.

SQL Test Case Builder is accessible any time on the command line. In Oracle Enterprise Manager Cloud Control (Cloud Control), the SQL Test Case pages are only available after a SQL incident is found.


17.2.2 What SQL Test Case Builder Captures

SQL Test Case Builder captures permanent information such as the query being executed, table and index definitions (but not the actual data), PL/SQL packages and program units, optimizer statistics, SQL plan baselines, and initialization parameter settings. Starting in Oracle Database 12c, SQL Test Case Builder also captures and replays transient information, including information only available as part of statement execution.

SQL Test Case Builder supports the following:

  • Adaptive plans

    SQL Test Case Builder captures inputs to the decisions made regarding adaptive plans, and replays them at each decision point (see "Adaptive Plans"). For adaptive plans, the final statistics value at each buffering statistics collector is sufficient to decide on the final plan.

  • Automatic memory management

    The database automatically handles the memory requested for each SQL operation. Actions such as sorting can affect performance significantly. SQL Test Case Builder keeps track of the memory activities, for example, where the database allocated memory and how much it allocated.

  • Dynamic statistics

    Regathering dynamic statistics on a different database does not always generate the same results, for example, when data is missing (see "Dynamic Statistics"). To reproduce the problem, SQL Test Case Builder exports the dynamic statistics result from the source database. In the testing database, SQL Test Case Builder reuses the same values captured from the source database instead of regathering dynamic statistics.

  • Multiple execution support

    SQL Test Case Builder can capture dynamic information accumulated during multiple executions of the query. This capability is important for automatic reoptimization (see "Automatic Reoptimization").

  • Compilation environment and bind values replay

    The compilation environment setting is an important part of the query optimization context. SQL Test Case Builder captures nondefault settings altered by the user when running the problem query in the source database. If any nondefault parameter values are used, SQL Test Case Builder re-establishes the same values before running the query.

  • Object statistics history

    The statistics history for objects is helpful to determine whether a plan change was caused by a change in statistics values. DBMS_STATS stores the history in the data dictionary. SQL Test Case Builder stores this statistics data into a staging table during export. During import, SQL Test Case Builder automatically reloads the statistics history data into the target database from the staging table.

  • Statement history

    The statement history is important for diagnosing problems related to adaptive cursor sharing, statistics feedback, and cursor sharing bugs. The history includes execution plans and compilation and execution statistics.

17.2.3 Output of SQL Test Case Builder

The output of the SQL Test Case Builder is a set of files that contains the commands required to re-create all the necessary objects and the environment. By default, SQL Test Case Builder stores the files in the following location, where incnum refers to the incident number and runnum refers to the run number:

$ADR_HOME/incident/incdir_incnum/SQLTCB_runnum

For example, a valid output file name could be as follows:

$ORACLE_HOME/log/diag/rdbms/dbsa/dbsa/incident/incdir_2657/SQLTCB_1

You can specify a nondefault location by creating an Oracle directory and invoking DBMS_SQLDIAG.EXPORT_SQL_TESTCASE, as in the following example:

CREATE OR REPLACE DIRECTORY my_tcb_dir_exp AS '/tmp';
 
DECLARE
  tco CLOB;   -- variable to receive the test case output
BEGIN 
  DBMS_SQLDIAG.EXPORT_SQL_TESTCASE (
    directory => 'MY_TCB_DIR_EXP'   -- directory object names are stored in uppercase
,   sql_text  => 'SELECT COUNT(*) FROM sales'
,   testcase  => tco
);
END;
/

See Also:

Oracle Database Administrator's Guide to learn about the structure of the ADR repository

17.3 User Interfaces for SQL Test Case Builder

You can access SQL Test Case Builder either through Cloud Control or using PL/SQL on the command line.

17.3.1 Graphical Interface for SQL Test Case Builder

Within Cloud Control, you can access SQL Test Case Builder from the Incident Manager page or the Support Workbench page.

17.3.1.1 Accessing the Incident Manager

This task explains how to navigate to the Incident Manager from the Incidents and Problems section on the Database Home page.

To access the Incident Manager: 

  1. Access the Database Home page, as described in "Accessing the Database Home Page in Cloud Control."

  2. In the Incidents and Problems section, locate the SQL incident to be investigated.

    In the following example, the ORA-600 error is a SQL incident.

    Description of incidents_and_problems.gif follows

  3. Click the summary of the incident.

    The Problem Details page of the Incident Manager appears.

    Description of problem_details.gif follows



17.3.1.2 Accessing the Support Workbench

This task explains how to navigate to the Support Workbench from the Oracle Database menu.

To access the Support Workbench: 

  1. Access the Database Home page, as described in "Accessing the Database Home Page in Cloud Control."

  2. From the Oracle Database menu, select Diagnostics, then Support Workbench.

    The Support Workbench page appears, with the incidents listed in a table.


See Also:

Online help for Cloud Control

17.3.2 Command-Line Interface for SQL Test Case Builder

You can use the DBMS_SQLDIAG package to perform tasks relating to SQL Test Case Builder. This package consists of various subprograms for the SQL Test Case Builder, some of which are listed in Table 17-1.

Table 17-1 SQL Test Case Functions in DBMS_SQLDIAG

Procedure                        Description

EXPORT_SQL_TESTCASE              Exports a SQL test case to a user-specified directory

EXPORT_SQL_TESTCASE_DIR_BY_INC   Exports a SQL test case corresponding to the incident ID passed as an argument

EXPORT_SQL_TESTCASE_DIR_BY_TXT   Exports a SQL test case corresponding to the SQL text passed as an argument

IMPORT_SQL_TESTCASE              Imports a SQL test case into a schema



See Also:

Oracle Database PL/SQL Packages and Types Reference to learn more about the DBMS_SQLDIAG package

17.4 Running SQL Test Case Builder

This tutorial explains how to run SQL Test Case Builder using Cloud Control.

Assumptions

This tutorial assumes the following:

  • You ran the following EXPLAIN PLAN statement as user sh, which causes an internal error:

    EXPLAIN PLAN FOR
      SELECT unit_cost, sold
      FROM   costs c,
             ( SELECT /*+ merge */ p.prod_id, SUM(quantity_sold) AS sold
               FROM   products p, sales s
               WHERE  p.prod_id = s.prod_id
               GROUP BY p.prod_id ) v
      WHERE  c.prod_id = v.prod_id;
    
  • In the Incidents and Problems section on the Database Home page, a SQL incident generated by the internal error appears.

To run SQL Test Case Builder: 

  1. Access the Incident Manager, as explained in "Accessing the Incident Manager".

  2. Click the Incidents tab.

    The Problem Details page appears.

    Description of problem_details_summary.gif follows

  3. Click the summary for the incident.

    The Incident Details page appears.

    Description of incident_details.gif follows

  4. In Guided Resolution, click View Diagnostic Data.

    The Incident Details: incident_number page appears.

    Description of incident_details_num.gif follows

  5. In the Application Information section, click Additional Diagnostics.

    The Additional Diagnostics subpage appears.

    Description of additional_diagnostics.gif follows

  6. Select SQL Test Case Builder, and then click Run.

    The Run User Action page appears.

    Description of run_user_action.gif follows

  7. Select a sampling percentage (optional), and then click Submit.

    After processing completes, the Confirmation page appears.

    Description of sqltcb_confirmation.gif follows

  8. Access the SQL Test Case files in the location described in "Output of SQL Test Case Builder".


See Also:

Online help for Cloud Control


22 Managing SQL Profiles

This chapter contains the following topics:

22.1 About SQL Profiles

A SQL profile is a database object that contains auxiliary statistics specific to a SQL statement. Conceptually, a SQL profile is to a SQL statement what object-level statistics are to a table or index. SQL profiles are created when a DBA invokes SQL Tuning Advisor (see "About SQL Tuning Advisor").

This section contains the following topics:

22.1.1 Purpose of SQL Profiles

When profiling a SQL statement, SQL Tuning Advisor uses a specific set of bind values as input, and then compares the optimizer estimate with values obtained by executing fragments of the statement on a data sample. When significant variances are found, SQL Tuning Advisor bundles corrective actions together in a SQL profile, and then recommends its acceptance.

The corrected statistics in a SQL profile can improve optimizer cardinality estimates, which in turn leads the optimizer to select better plans. SQL profiles provide the following benefits over other techniques for improving plans:

  • Unlike hints and stored outlines, SQL profiles do not tie the optimizer to a specific plan or subplan. SQL profiles fix incorrect estimates while giving the optimizer the flexibility to pick the best plan in different situations.

  • Unlike hints, no changes to application source code are necessary when using SQL profiles. The use of SQL profiles by the database is transparent to the user.

22.1.2 Concepts for SQL Profiles

A SQL profile is a collection of auxiliary statistics on a query, including all tables and columns referenced in the query. The profile stores this information in the data dictionary. The optimizer uses this information at optimization time to determine the correct plan.


Note:

The SQL profile contains supplemental statistics for the entire statement, not individual plans. The profile does not itself determine a specific plan.

A SQL profile contains, among other statistics, a set of cardinality adjustments. The cardinality measure is based on sampling the WHERE clause rather than on statistical projection. A profile uses parts of the query to determine whether the estimated cardinalities are close to the actual cardinalities and, if a mismatch exists, uses the corrected cardinalities. For example, if a SQL profile exists for SELECT * FROM t WHERE x=5 AND y=10, then the profile stores the actual number of rows returned.

When choosing plans, the optimizer has the following sources of information:

  • The environment, which contains the database configuration, bind variable values, optimizer statistics, data set, and so on

  • The supplemental statistics in the SQL profile

Figure 22-1 shows the relationship between a SQL statement and the SQL profile for this statement. The optimizer uses the SQL profile and the environment to generate an execution plan. In this example, the plan is in the SQL plan baseline for the statement.

Figure 22-1 SQL Profile

Description of Figure 22-1 follows

If either the optimizer environment or the SQL profile changes, then the optimizer can create a new plan. As tables grow, or as indexes are created or dropped, the plan for a SQL profile can change. The profile continues to be relevant even if the data distribution or access path of the corresponding statement changes. In general, you do not need to refresh SQL profiles.

Over time, profile content can become outdated, and the performance of the SQL statement may degrade. The statement may appear as high-load or top SQL, in which case the Automatic SQL Tuning task again captures it as high-load SQL. You can then implement a new SQL profile for the statement.

Internally, a SQL profile is implemented using hints that address different types of problems. These hints do not specify any particular plan. Rather, the hints correct errors in the optimizer estimation algorithm that lead to suboptimal plans. For example, a profile may use the TABLE_STATS hint to set object statistics for tables when the statistics are missing or stale.

22.1.2.1 SQL Profile Recommendations

As explained in "SQL Profiling", SQL Tuning Advisor invokes Automatic Tuning Optimizer to generate SQL profile recommendations. Recommendations to implement SQL profiles occur in a finding, which appears in a separate section of the SQL Tuning Advisor report.

When you implement (or accept) a SQL profile, the database creates the profile and stores it persistently in the data dictionary. However, the SQL profile information is not exposed through regular dictionary views.

Example 22-1 SQL Profile Recommendation

In this example, the database found a better plan for a SELECT statement that uses several expensive joins. The database recommends running DBMS_SQLTUNE.ACCEPT_SQL_PROFILE to implement the profile, which enables the statement to run 98.53% faster.

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------
 
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement. Choose
  one of the following SQL profiles to implement.
 
  Recommendation (estimated benefit: 99.45%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name => 'my_task',
            object_id => 3, task_owner => 'SH', replace => TRUE);
 
  Validation results
  ------------------
  The SQL profile was tested by executing both its plan and the original plan
  and measuring their respective execution statistics. A plan may have been
  only partially executed if the other could be run to completion in less time.
 
                           Original Plan  With SQL Profile  % Improved
                           -------------  ----------------  ----------
  Completion Status:             PARTIAL          COMPLETE
  Elapsed Time(us):            15467783            226902      98.53 %
  CPU Time(us):                15336668            226965      98.52 %
  User I/O Time(us):                  0                 0
  Buffer Gets:                  3375243             18227      99.45 %
  Disk Reads:                         0                 0
  Direct Writes:                      0                 0
  Rows Processed:                     0               109
  Fetches:                            0               109
  Executions:                         0                 1
 
  Notes
  -----
  1. The SQL profile plan was first executed to warm the buffer cache.
  2. Statistics for the SQL profile plan were averaged over next 3 executions.

Sometimes SQL Tuning Advisor may recommend implementing a profile that uses the Automatic Degree of Parallelism (Auto DOP) feature. A parallel query profile is only recommended when the original plan is serial and when parallel execution can significantly reduce the elapsed time for a long-running query.

When it recommends a profile that uses Auto DOP, SQL Tuning Advisor gives details about the performance overhead of using parallel execution for the SQL statement in the report. For parallel execution recommendations, SQL Tuning Advisor may provide two SQL profile recommendations, one using serial execution and one using parallel.

The following example shows a parallel query recommendation. In this example, a degree of parallelism of 7 improves response time significantly at the cost of increasing resource consumption by almost 25%. You must decide whether the reduction in database throughput is worth the improvement in response time.

  Recommendation (estimated benefit: 99.99%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile to use parallel execution
    for this statement.
    execute dbms_sqltune.accept_sql_profile(task_name => 'gfk_task',
            object_id => 3, task_owner => 'SH', replace => TRUE,
            profile_type => DBMS_SQLTUNE.PX_PROFILE);
 
  Executing this query parallel with DOP 7 will improve its response time
  82.22% over the SQL profile plan. However, there is some cost in enabling
  parallel execution. It will increase the statement's resource consumption by
  an estimated 24.43% which may result in a reduction of system throughput.
  Also, because these resources are consumed over a much smaller duration, the
  response time of concurrent statements might be negatively impacted if
  sufficient hardware capacity is not available.
 
  The following data shows some sampled statistics for this SQL from the past
  week and projected weekly values when parallel execution is enabled.
 
                                 Past week sampled statistics for this SQL
                                 -----------------------------------------
  Number of executions                                                   0
  Percent of total activity                                            .29
  Percent of samples with #Active Sessions > 2*CPU                       0
  Weekly DB time (in sec)                                            76.51
 
                              Projected statistics with Parallel Execution
                              --------------------------------------------
  Weekly DB time (in sec)                                            95.21

22.1.2.2 SQL Profiles and SQL Plan Baselines

You can use SQL profiles with or without SQL plan management. No strict relationship exists between the SQL profile and the plan baseline. If a statement has multiple plans in a SQL plan baseline, then a SQL profile is useful because it enables the optimizer to choose the lowest-cost plan in the baseline.

22.1.3 User Interfaces for SQL Profiles

Oracle Enterprise Manager Cloud Control (Cloud Control) usually handles SQL profiles as part of automatic SQL tuning.

On the command line, you can manage SQL profiles with the DBMS_SQLTUNE package. To use the APIs, you must have the ADMINISTER SQL MANAGEMENT OBJECT privilege.
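For example, the following is a minimal sketch of granting this privilege, assuming a hypothetical administrative user named tune_admin:

GRANT ADMINISTER SQL MANAGEMENT OBJECT TO tune_admin;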


22.1.4 Basic Tasks for SQL Profiles

This section explains the basic tasks involved in managing SQL profiles. Figure 22-2 shows the basic workflow for implementing, altering, and dropping SQL profiles.

Figure 22-2 Managing SQL Profiles

Description of Figure 22-2 follows

Typically, you manage SQL profiles in the following sequence:

  1. Implement a recommended SQL profile.

    "Implementing a SQL Profile" describes this task.

  2. Obtain information about SQL profiles stored in the database.

    "Listing SQL Profiles" describes this task.

  3. Optionally, modify the implemented SQL profile.

    "Altering a SQL Profile" describes this task.

  4. Drop the implemented SQL profile when it is no longer needed.

    "Dropping a SQL Profile" describes this task.

To tune SQL statements on another database, you can transport both a SQL tuning set and a SQL profile to a separate database. "Transporting a SQL Profile" describes this task.


See Also:

Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_SQLTUNE package

22.2 Implementing a SQL Profile

Implementing (also known as accepting) a SQL profile means storing it persistently in the database. A profile must be implemented before the optimizer can use it as input when generating plans.

22.2.1 About SQL Profile Implementation

As a rule of thumb, implement a SQL profile recommended by SQL Tuning Advisor. If the database recommends both an index and a SQL profile, then either use both or use the SQL profile only. If you create an index, then the optimizer may need the profile to pick the new index.

In some situations, SQL Tuning Advisor may find an improved serial plan in addition to an even better parallel plan. In this case, the advisor recommends both a standard and a parallel SQL profile, enabling you to choose between the best serial and best parallel plan for the statement. Implement a parallel plan only if the improvement in response time is worth the decrease in throughput.

To implement a SQL profile, execute the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure. Some important parameters are as follows:

  • profile_type

    Set this parameter to REGULAR_PROFILE for a SQL profile without a change to parallel execution, or PX_PROFILE for a SQL profile with a change to parallel execution.

  • force_match

    This parameter controls statement matching. Typically, an accepted SQL profile is associated with the SQL statement through a SQL signature that is generated using a hash function. This hash function changes the SQL statement to uppercase and removes all extra white space before generating the signature. Thus, the same SQL profile works for all SQL statements in which the only differences are case and white space.

    By setting force_match to true, the SQL profile additionally targets all SQL statements that have the same text after the literal values in the WHERE clause have been replaced by bind variables. This setting may be useful for applications that use only literal values because it enables SQL with text differing only in its literal values to share a SQL profile. If both literal values and bind variables are in the SQL text, or if force_match is set to false (default), then the literal values in the WHERE clause are not replaced by bind variables.
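    For example, with force_match set to true, statements like the following two (illustrative queries, not from the original text) map to the same signature and can share one SQL profile, because they differ only in a literal value:

    SELECT * FROM sales WHERE promo_id = 33;
    SELECT * FROM sales WHERE promo_id = 999;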


See Also:

Oracle Database PL/SQL Packages and Types Reference for information about the ACCEPT_SQL_PROFILE procedure

22.2.2 Implementing a SQL Profile

This section shows how to use the ACCEPT_SQL_PROFILE procedure to implement a SQL profile.

Assumptions

This tutorial assumes the following:

  • The SQL Tuning Advisor task STA_SPECIFIC_EMP_TASK includes a recommendation to create a SQL profile.

  • The name of the SQL profile is my_sql_profile.

  • The PL/SQL block accepts a profile that uses parallel execution (profile_type).

  • The profile uses force matching.

To implement a SQL profile: 

  • Connect SQL*Plus to the database with the appropriate privileges, and then execute the ACCEPT_SQL_PROFILE function.

    For example, execute the following PL/SQL:

    DECLARE
      my_sqlprofile_name VARCHAR2(30);
    BEGIN
      my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE ( 
        task_name    => 'STA_SPECIFIC_EMP_TASK'
    ,   name         => 'my_sql_profile'
    ,   profile_type => DBMS_SQLTUNE.PX_PROFILE
    ,   force_match  => true 
    );
    END;
    /
    

See Also:

Oracle Database PL/SQL Packages and Types Reference to learn about the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure

22.3 Listing SQL Profiles

The data dictionary view DBA_SQL_PROFILES displays information about SQL profiles, which the database stores persistently. The statistics are in an Oracle internal format, so you cannot query profiles directly. However, you can list profiles.

To list SQL profiles: 

  • Connect SQL*Plus to the database with the appropriate privileges, and then query the DBA_SQL_PROFILES view.

    For example, execute the following query:

    COLUMN category FORMAT a10
    COLUMN sql_text FORMAT a20
    
    SELECT NAME, SQL_TEXT, CATEGORY, STATUS
    FROM   DBA_SQL_PROFILES;
    

    Sample output appears below:

    NAME                           SQL_TEXT             CATEGORY   STATUS
    ------------------------------ -------------------- ---------- --------
    SYS_SQLPROF_01285f6d18eb0000   select promo_name, c DEFAULT    ENABLED
                                   ount(*) c from promo
                                   tions p, sales s whe
                                   re s.promo_id = p.pr
                                   omo_id and p.promo_c
                                   ategory = 'internet'
                                    group by p.promo_na
                                   me order by c desc
    

See Also:

Oracle Database Reference to learn about the DBA_SQL_PROFILES view

22.4 Altering a SQL Profile

You can alter attributes of an existing SQL profile using the attribute_name parameter of the ALTER_SQL_PROFILE procedure.

The CATEGORY attribute determines which sessions can apply a profile. View the CATEGORY attribute by querying DBA_SQL_PROFILES.CATEGORY. By default, all profiles are in the DEFAULT category, which means that all sessions in which the SQLTUNE_CATEGORY initialization parameter is set to DEFAULT can use the profile.

By altering the category of a SQL profile, you determine which sessions are affected by profile creation. For example, by setting the category to DEV, only sessions in which the SQLTUNE_CATEGORY initialization parameter is set to DEV can use the profile. Other sessions do not have access to the SQL profile and execution plans for SQL statements are not impacted by the SQL profile. This technique enables you to test a profile in a restricted environment before making it available to other sessions.

The example in this section assumes that you want to change the category of the SQL profile so it is used only by sessions with the SQL profile category set to TEST, run the SQL statement, and then change the profile category back to DEFAULT.

To alter a SQL profile: 

  1. Connect SQL*Plus to the database with the appropriate privileges, and then use the ALTER_SQL_PROFILE procedure to set the attribute_name.

    For example, execute the following code to set the attribute CATEGORY to TEST:

    VARIABLE pname VARCHAR2(30)
    EXEC :pname := 'my_sql_profile'

    BEGIN 
      DBMS_SQLTUNE.ALTER_SQL_PROFILE ( 
         name            =>  :pname
      ,  attribute_name  =>  'CATEGORY'
      ,  value           =>  'TEST'      
      );
    END;
    /
    
  2. Change the initialization parameter setting in the current database session.

    For example, execute the following SQL:

    ALTER SESSION SET SQLTUNE_CATEGORY = 'TEST';
    
  3. Test the profiled SQL statement.

  4. Use the ALTER_SQL_PROFILE procedure to set the attribute_name.

    For example, execute the following code to set the attribute CATEGORY to DEFAULT:

    VARIABLE pname VARCHAR2(30)
    EXEC :pname := 'my_sql_profile'

    BEGIN 
      DBMS_SQLTUNE.ALTER_SQL_PROFILE ( 
         name            =>  :pname
      ,  attribute_name  =>  'CATEGORY'
      ,  value           =>  'DEFAULT'   
      );
    END;
    /
    

22.5 Dropping a SQL Profile

You can drop a SQL profile with the DROP_SQL_PROFILE procedure.

Assumptions

This section assumes the following:

  • You want to drop my_sql_profile.

  • You want to ignore errors raised if the name does not exist.

To drop a SQL profile: 

  • Connect SQL*Plus to the database with the appropriate privileges, and then call the DBMS_SQLTUNE.DROP_SQL_PROFILE procedure.

    The following example drops the profile named my_sql_profile:

    BEGIN
      DBMS_SQLTUNE.DROP_SQL_PROFILE ( 
        name   => 'my_sql_profile' 
    ,   ignore => TRUE   -- do not raise an error if the profile does not exist
    );
    END;
    /
    

22.6 Transporting a SQL Profile

You can transport SQL profiles. This operation involves exporting the SQL profile from the SYS schema in one database to a staging table, and then importing the SQL profile from the staging table into another database. You can transport a SQL profile to any Oracle database created in the same release or later.

Table 22-1 shows the main procedures and functions for transporting SQL profiles.

Table 22-1 APIs for Transporting SQL Profiles

Procedure or Function    Description

CREATE_STGTAB_SQLPROF    Creates the staging table used for copying SQL profiles from one system to another.

PACK_STGTAB_SQLPROF      Moves profile data out of the SYS schema into the staging table.

UNPACK_STGTAB_SQLPROF    Uses the profile data stored in the staging table to create profiles on this system.


The following graphic shows the basic workflow of transporting SQL profiles:

Description of tgsql_vm_066.png follows

Assumptions

This tutorial assumes the following:

  • You want to transport my_profile from a production database to a test database.

  • You want to create the staging table in the dba1 schema.

To transport a SQL profile: 

  1. Connect SQL*Plus to the database with the appropriate privileges, and then use the CREATE_STGTAB_SQLPROF procedure to create a staging table to hold the SQL profiles.

    The following example creates my_staging_table in the dba1 schema:

    BEGIN
      DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF ( 
        table_name  => 'my_staging_table'
    ,   schema_name => 'dba1' 
    );
    END;
    /
    
  2. Use the PACK_STGTAB_SQLPROF procedure to export SQL profiles into the staging table.

    The following example populates dba1.my_staging_table with the SQL profile my_profile:

    BEGIN
      DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (  
        profile_name         => 'my_profile'
    ,   staging_table_name   => 'my_staging_table'
    ,   staging_schema_owner => 'dba1' 
    );
    END;
    / 
    
  3. Move the staging table to the database where you plan to unpack the SQL profiles.

    Move the table using your utility of choice. For example, use Oracle Data Pump or a database link.

  4. On the database where you plan to import the SQL profiles, use UNPACK_STGTAB_SQLPROF to unpack SQL profiles from the staging table.

    The following example shows how to unpack SQL profiles in the staging table:

    BEGIN
      DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
         replace            => true
    ,    staging_table_name => 'my_staging_table'
    );
    END;
    /
    


4 Query Optimizer Concepts

This chapter describes the most important concepts relating to the query optimizer. This chapter contains the following topics:

4.1 Introduction to the Query Optimizer

The query optimizer (called simply the optimizer) is built-in database software that determines the most efficient method for a SQL statement to access requested data.

This section contains the following topics:

4.1.1 Purpose of the Query Optimizer

The optimizer attempts to generate the best execution plan for a SQL statement. The best execution plan is defined as the plan with the lowest cost among all considered candidate plans. The cost computation accounts for factors of query execution such as I/O, CPU, and communication.

The best method of execution depends on myriad conditions including how the query is written, the size of the data set, the layout of the data, and which access structures exist. The optimizer determines the best plan for a SQL statement by examining multiple access methods, such as full table scan or index scans, and different join methods such as nested loops and hash joins.

Because the database has many internal statistics and tools at its disposal, the optimizer is usually in a better position than the user to determine the best method of statement execution. For this reason, all SQL statements use the optimizer.

Consider a user who queries records for employees who are managers. If the database statistics indicate that 80% of employees are managers, then the optimizer may decide that a full table scan is most efficient. However, if statistics indicate that few employees are managers, then reading an index followed by a table access by rowid may be more efficient than a full table scan.
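For example, the query might take the following form; the predicate that identifies managers is hypothetical and shown only for illustration:

SELECT employee_id, last_name
FROM   hr.employees
WHERE  job_id LIKE '%MAN';  -- few matching rows favor an index scan plus table
                            -- access by rowid; many rows favor a full table scan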

4.1.2 Cost-Based Optimization

Query optimization is the overall process of choosing the most efficient means of executing a SQL statement. SQL is a nonprocedural language, so the optimizer is free to merge, reorganize, and process the query in any order.

The database optimizes each SQL statement based on statistics collected about the accessed data. When generating execution plans, the optimizer considers different access paths and join methods. Factors considered by the optimizer include:

  • System resources, which include I/O, CPU, and memory

  • Number of rows returned

  • Size of the initial data sets

The cost is a number that represents the estimated resource usage for an execution plan. The optimizer assigns a cost to each possible plan, and then chooses the plan with the lowest cost. For this reason, the optimizer is sometimes called the cost-based optimizer (CBO) to contrast it with the legacy rule-based optimizer (RBO).


Note:

The optimizer may not make the same decisions from one version of Oracle Database to the next. In recent versions, the optimizer might make different decisions because better information is available and more optimizer transformations are possible.

4.1.3 Execution Plans

An execution plan describes a recommended method of execution for a SQL statement. The plan shows the combination of steps that Oracle Database uses to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement.

In Figure 4-1, the optimizer generates two possible execution plans for an input SQL statement, uses statistics to calculate their costs, compares their costs, and chooses the plan with the lowest cost.

Figure 4-1 Execution Plans

Description of Figure 4-1 follows

4.1.3.1 Query Blocks

As shown in Figure 4-1, the input to the optimizer is a parsed representation of a SQL statement. Each SELECT block in the original SQL statement is represented internally by a query block. A query block can be a top-level statement, subquery, or unmerged view (see "View Merging").

In Example 4-1, the SQL statement consists of two query blocks. The subquery in parentheses is the inner query block. The outer query block, which is the rest of the SQL statement, retrieves names of employees in the departments whose IDs were supplied by the subquery.

Example 4-1 Query Blocks

SELECT first_name, last_name
FROM   hr.employees
WHERE  department_id 
IN     (SELECT department_id 
        FROM   hr.departments 
        WHERE  location_id = 1800);

The query form determines how query blocks are interrelated.


See Also:

Oracle Database Concepts for an overview of SQL processing

4.1.3.2 Query Subplans

For each query block, the optimizer generates a query subplan. The database optimizes query blocks separately from the bottom up. Thus, the database optimizes the innermost query block first and generates a subplan for it, and then generates the plan for the outer query block, which represents the entire query.

The number of possible plans for a query block rises exponentially with the number of objects in the FROM clause. For example, the number of possible plans for a join of five tables is significantly higher than the number of possible plans for a join of two tables.

4.1.3.3 Analogy for the Optimizer

One analogy for the optimizer is an online trip advisor. A cyclist wants to know the most efficient bicycle route from point A to point B. A query is like the directive "I need the most efficient route from point A to point B" or "I need the most efficient route from point A to point B by way of point C." The trip advisor uses an internal algorithm, which relies on factors such as speed and difficulty, to determine the most efficient route. The cyclist can influence the trip advisor's decision by using directives such as "I want to arrive as fast as possible" or "I want the easiest ride possible."

In this analogy, an execution plan is a possible route generated by the trip advisor. Internally, the advisor may divide the overall route into several subroutes (subplans), and calculate the efficiency for each subroute separately. For example, the trip advisor may estimate one subroute at 15 minutes with medium difficulty, an alternative subroute at 22 minutes with minimal difficulty, and so on.

The advisor picks the most efficient (lowest cost) overall route based on user-specified goals and the available statistics about roads and traffic conditions. The more accurate the statistics, the better the advice. For example, if the advisor is not frequently notified of traffic jams, road closures, and poor road conditions, then the recommended route may turn out to be inefficient (high cost).

4.2 About Optimizer Components

The optimizer contains three main components, which are shown in Figure 4-2.

Figure 4-2 Optimizer Components

Description of Figure 4-2 follows

A set of query blocks represents a parsed query, which is the input to the optimizer. The optimizer performs the following operations:

  1. Query transformer

    The optimizer determines whether it is helpful to change the form of the query so that the optimizer can generate a better execution plan. See "Query Transformer".

  2. Estimator

    The optimizer estimates the cost of each plan based on statistics in the data dictionary. See "Estimator".

  3. Plan Generator

    The optimizer compares the costs of plans and chooses the lowest-cost plan, known as the execution plan, to pass to the row source generator. See "Plan Generator".

4.2.1 Query Transformer

For some statements, the query transformer determines whether it is advantageous to rewrite the original SQL statement into a semantically equivalent SQL statement with a lower cost. When a viable alternative exists, the database calculates the cost of the alternatives separately and chooses the lowest-cost alternative. Chapter 5, "Query Transformations" describes the different types of optimizer transformations.

Figure 4-3 shows the query transformer rewriting an input query that uses OR into an output query that uses UNION ALL.

Figure 4-3 Query Transformer

Description of Figure 4-3 follows
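As an illustration (the queries are not taken from the figure), the transformer might consider rewriting a disjunctive query into a semantically equivalent UNION ALL form, where the LNNVL function keeps the two branches from returning duplicate rows:

-- Input query containing OR
SELECT employee_id, last_name
FROM   hr.employees
WHERE  department_id = 50
OR     job_id = 'SH_CLERK';

-- A semantically equivalent UNION ALL form
SELECT employee_id, last_name
FROM   hr.employees
WHERE  department_id = 50
UNION ALL
SELECT employee_id, last_name
FROM   hr.employees
WHERE  job_id = 'SH_CLERK'
AND    LNNVL(department_id = 50);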

4.2.2 Estimator

The estimator is the component of the optimizer that determines the overall cost of a given execution plan. The estimator uses three different types of measures to achieve this goal:

  • Selectivity

    The percentage of rows in the row set that the query selects, with 0 meaning no rows and 1 meaning all rows. Selectivity is tied to a query predicate, such as WHERE last_name LIKE 'A%', or a combination of predicates. A predicate becomes more selective as the selectivity value approaches 0 and less selective (or more unselective) as the value approaches 1.


    Note:

    Selectivity is an internal calculation that is not visible in the execution plans.

  • Cardinality

    The cardinality is the estimated number of rows returned by each operation in an execution plan. This input, which is crucial to obtaining an optimal plan, is common to all cost functions. Cardinality can be derived from the table statistics collected by DBMS_STATS, or derived after accounting for effects from predicates (filter, join, and so on), DISTINCT or GROUP BY operations, and so on.

  • Cost

    This measure represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work.

As shown in Figure 4-4, if statistics are available, then the estimator uses them to compute the measures. The statistics improve the degree of accuracy of the measures.

Figure 4-4 Estimator

Description of Figure 4-4 follows

For the query shown in Example 4-1, the estimator uses selectivity, cardinality, and cost measures to produce its total cost estimate of 3:

----------------------------------------------------------------------------------
| Id| Operation                    |Name             |Rows|Bytes|Cost(%CPU)| Time|
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT             |                 |  10|  250| 3 (0)| 00:00:01|
| 1 |  NESTED LOOPS                |                 |    |     |      |         |
| 2 |   NESTED LOOPS               |                 |  10|  250| 3 (0)| 00:00:01|
|*3 |    TABLE ACCESS FULL         |DEPARTMENTS      |   1|    7| 2 (0)| 00:00:01|
|*4 |    INDEX RANGE SCAN          |EMP_DEPARTMENT_IX|  10|     | 0 (0)| 00:00:01|
| 5 |   TABLE ACCESS BY INDEX ROWID|EMPLOYEES        |  10|  180| 1 (0)| 00:00:01|
----------------------------------------------------------------------------------

4.2.2.1 Selectivity

The selectivity represents a fraction of rows from a row set. The row set can be a base table, a view, or the result of a join. The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_id = 'SH_CLERK'.


Note:

Selectivity is an internal calculation that is not visible in execution plans.

A predicate filters a specific number of rows from a row set. Thus, the selectivity of a predicate indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means that no rows are selected from a row set, whereas a selectivity of 1.0 means that all rows are selected. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0.

The optimizer estimates selectivity depending on whether statistics are available:

  • Statistics not available

    Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, the optimizer either uses dynamic statistics or an internal default value. The database uses different internal defaults depending on the predicate type. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than for a range predicate (last_name > 'Smith') because an equality predicate is expected to return a smaller fraction of rows.

  • Statistics available

    When statistics are available, the estimator uses them to estimate selectivity. Assume there are 150 distinct employee last names. For an equality predicate last_name = 'Smith', selectivity is the reciprocal of the number n of distinct values of last_name, which in this example is .006 because the query selects rows that contain 1 out of 150 distinct values.

    If a histogram exists on the last_name column, then the estimator uses the histogram instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates, especially for columns that have data skew. See Chapter 11, "Histograms."
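    The column-level inputs described above are visible in the data dictionary. The following is a minimal sketch (not part of the original text) that shows the number of distinct values and whether a histogram exists for the example column:

    SELECT column_name, num_distinct, histogram
    FROM   all_tab_col_statistics
    WHERE  owner       = 'HR'
    AND    table_name  = 'EMPLOYEES'
    AND    column_name = 'LAST_NAME';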

4.2.2.2 Cardinality

The cardinality is the estimated number of rows returned by each operation in an execution plan. For example, if the optimizer estimate for the number of rows returned by a full table scan is 100, then the cardinality for this operation is 100. The cardinality value appears in the Rows column of the execution plan.

The optimizer determines the cardinality for each operation based on a complex set of formulas that use both table and column level statistics, or dynamic statistics, as input. The optimizer uses one of the simplest formulas when a single equality predicate appears in a single-table query, with no histogram. In this case, the optimizer assumes a uniform distribution and calculates the cardinality for the query by dividing the total number of rows in the table by the number of distinct values in the column used in the WHERE clause predicate.

For example, user hr queries the employees table as follows:

SELECT first_name, last_name
FROM   employees
WHERE  salary='10200';

The employees table contains 107 rows. The current database statistics indicate that the number of distinct values in the salary column is 58. Thus, the optimizer estimates the cardinality of the result set as 2, rounding up the result of 107/58 = 1.84.

Cardinality estimates must be as accurate as possible because they influence all aspects of the execution plan. Cardinality is important when the optimizer determines the cost of a join. For example, in a nested loops join of the employees and departments tables, the number of rows in employees determines how often the database must probe the departments table. Cardinality is also important for determining the cost of sorts.
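As noted above, the estimate appears in the Rows column of the execution plan. The following minimal sketch (not part of the original text) displays it for the query above:

EXPLAIN PLAN FOR
  SELECT first_name, last_name
  FROM   employees
  WHERE  salary = 10200;

-- The Rows column of the displayed plan contains the cardinality estimate
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);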

4.2.2.3 Cost

The optimizer cost model accounts for the I/O, CPU, and network resources that a query is predicted to use. The cost is an internal numeric measure that represents the estimated resource usage for a plan. The lower the cost, the more efficient the plan.

The execution plan displays the cost of the entire plan, which is indicated on line 0, and each individual operation. For example, the following plan shows a cost of 14.

EXPLAINED SQL STATEMENT:
------------------------
SELECT prod_category, AVG(amount_sold) FROM   sales s, products p WHERE
 p.prod_id = s.prod_id GROUP BY prod_category
 
Plan hash value: 4073170114
 
----------------------------------------------------------------------
| Id  | Operation                | Name                 | Cost (%CPU)|
----------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                      |    14 (100)|
|   1 |  HASH GROUP BY           |                      |    14  (22)|
|   2 |   HASH JOIN              |                      |    13  (16)|
|   3 |    VIEW                  | index$_join$_002     |     7  (15)|
|   4 |     HASH JOIN            |                      |            |
|   5 |      INDEX FAST FULL SCAN| PRODUCTS_PK          |     4   (0)|
|   6 |      INDEX FAST FULL SCAN| PRODUCTS_PROD_CAT_IX |     4   (0)|
|   7 |    PARTITION RANGE ALL   |                      |     5   (0)|
|   8 |     TABLE ACCESS FULL    | SALES                |     5   (0)|
----------------------------------------------------------------------

The cost is an internal unit that you can use for plan comparisons. You cannot tune or change it.

The access path determines the number of units of work required to get data from a base table. The access path can be a table scan, a fast full index scan, or an index scan.

  • Table scan or fast full index scan

    During a table scan or fast full index scan, the database reads multiple blocks from disk in a single I/O. Therefore, the cost of the scan depends on the number of blocks to be scanned and the multiblock read count value.

  • Index scan

    The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor.

The join cost represents the combination of the individual access costs of the two row sets being joined, plus the cost of the join operation.
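For example, you can see how the cost model weighs different access paths by explaining the same query twice, once forced to a full table scan and once forced to an index access, and then comparing the Cost column of the two plans. The following is a minimal sketch that assumes the hr sample schema and that the primary key index on employees is named emp_emp_id_pk; the hints, index name, and predicate are illustrative only.

-- Cost of a full table scan for a highly selective predicate
EXPLAIN PLAN FOR
  SELECT /*+ FULL(e) */ last_name
  FROM   hr.employees e
  WHERE  employee_id = 101;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Cost of the same query using the primary key index
EXPLAIN PLAN FOR
  SELECT /*+ INDEX(e emp_emp_id_pk) */ last_name
  FROM   hr.employees e
  WHERE  employee_id = 101;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

For a predicate this selective, the single-block index access is typically assigned a lower cost than the multiblock full scan, which illustrates how the cost model combines the number of blocks, the multiblock read count, and the clustering factor.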

4.2.3 Plan Generator

The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. Many plans are possible because of the various combinations that the database can use to produce the same result. The optimizer picks the plan with the lowest cost.

Figure 4-5 shows the optimizer testing different plans for an input query.

Figure 4-5 Plan Generator


The following snippet from an optimizer trace file shows some computations that the optimizer performs:

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  DEPARTMENTS[D]#0  EMPLOYEES[E]#1
 
***************
Now joining: EMPLOYEES[E]#1
***************
NL Join
  Outer table: Card: 27.00  Cost: 2.01  Resp: 2.01  Degree: 1  Bytes: 16
Access path analysis for EMPLOYEES
. . .
  Best NL cost: 13.17
. . .
SM Join
  SM cost: 6.08
     resc: 6.08 resc_io: 4.00 resc_cpu: 2501688
     resp: 6.08 resp_io: 4.00 resp_cpu: 2501688
. . .
SM Join (with index on outer)
  Access Path: index (FullScan)
. . .
HA Join
  HA cost: 4.57
     resc: 4.57 resc_io: 4.00 resc_cpu: 678154
     resp: 4.57 resp_io: 4.00 resp_cpu: 678154
Best:: JoinMethod: Hash
       Cost: 4.57  Degree: 1  Resp: 4.57  Card: 106.00 Bytes: 27
. . .

***********************
Join order[2]:  EMPLOYEES[E]#1  DEPARTMENTS[D]#0
. . .
 
***************
Now joining: DEPARTMENTS[D]#0
***************
. . .
HA Join
  HA cost: 4.58
     resc: 4.58 resc_io: 4.00 resc_cpu: 690054
     resp: 4.58 resp_io: 4.00 resp_cpu: 690054
Join order aborted: cost > best plan cost
***********************

The trace file shows the optimizer first trying the departments table as the outer table in the join. The optimizer calculates the cost for three different join methods: nested loops join (NL), sort merge join (SM), and hash join (HA). The optimizer picks the hash join as the most efficient method:

Best:: JoinMethod: Hash
       Cost: 4.57  Degree: 1  Resp: 4.57  Card: 106.00 Bytes: 27

The optimizer then tries a different join order, using employees as the outer table. This join order costs more than the previous join order, so it is abandoned.

The optimizer uses an internal cutoff to reduce the number of plans it tries when finding the lowest-cost plan. The cutoff is based on the cost of the current best plan. If the current best cost is large, then the optimizer explores alternative plans to find a lower cost plan. If the current best cost is small, then the optimizer ends the search swiftly because further cost improvement is not significant.
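Trace output such as the preceding snippet is typically obtained by enabling the optimizer trace event in a session before the statement is hard parsed. The following is a minimal sketch; event 10053 is the conventional optimizer trace event, the trace file identifier is an arbitrary label, and the query is illustrative only.

ALTER SESSION SET tracefile_identifier = 'optimizer_trace';  -- arbitrary label added to the trace file name
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

SELECT e.last_name, d.department_name
FROM   hr.employees e, hr.departments d
WHERE  e.department_id = d.department_id;

ALTER SESSION SET EVENTS '10053 trace name context off';

The statement must be hard parsed for the optimizer computations to appear in the trace, and the resulting file is written to the session trace directory under the Automatic Diagnostic Repository.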

4.3 About Automatic Tuning Optimizer

The optimizer performs different operations depending on how it is invoked. The database provides the following types of optimization:

  • Normal optimization

    The optimizer compiles the SQL and generates an execution plan. The normal mode generates a reasonable plan for most SQL statements. Under normal mode, the optimizer operates with strict time constraints, usually a fraction of a second, during which it must find an optimal plan.

  • SQL Tuning Advisor optimization

    When SQL Tuning Advisor invokes the optimizer, the optimizer is known as Automatic Tuning Optimizer. In this case, the optimizer performs additional analysis to further improve the plan produced in normal mode. The optimizer output is not an execution plan, but a series of actions, along with their rationale and expected benefit for producing a significantly better plan.
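For example, you can invoke Automatic Tuning Optimizer manually by creating and running a tuning task with the DBMS_SQLTUNE package. The following is a minimal sketch; the SQL ID and task name are hypothetical placeholders.

DECLARE
  tname VARCHAR2(128);
BEGIN
  -- Create a tuning task for a statement already in the shared SQL area
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(
             sql_id    => 'gwp663cqh5qbf',      -- hypothetical SQL ID
             task_name => 'tune_demo_task');

  -- Run Automatic Tuning Optimizer on the statement
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_demo_task');
END;
/

-- Display the findings, rationale, and expected benefit
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_demo_task') FROM DUAL;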

4.4 About Adaptive Query Optimization

In Oracle Database, adaptive query optimization is a set of capabilities that enables the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics. Adaptive optimization is helpful when existing statistics are not sufficient to generate an optimal plan.

The adaptive query optimization feature set has two components: adaptive plans, which enable run-time plan adjustments, and adaptive statistics, which provide additional information to the optimizer. These components are described in the following sections.

4.4.1 Adaptive Plans

An adaptive plan enables the optimizer to defer the final plan decision for a statement until execution time. The ability of the optimizer to adapt a plan, based on information learned during execution, can greatly improve query performance.

Adaptive plans are useful because the optimizer occasionally picks a suboptimal default plan because of a cardinality misestimate. The ability to adapt the plan at run time based on actual execution statistics results in a more optimal final plan. After choosing the final plan, the optimizer uses it for subsequent executions, thus ensuring that the suboptimal plan is not reused.

4.4.1.1 How Adaptive Plans Work

An adaptive plan contains multiple predetermined subplans, and an optimizer statistics collector. A subplan is a portion of a plan that the optimizer can switch to as an alternative at run time. For example, a nested loops join could be switched to a hash join during execution. An optimizer statistics collector is a row source inserted into a plan at key points to collect run-time statistics. These statistics help the optimizer make a final decision between multiple subplans.

During statement execution, the statistics collector gathers information about the execution, and buffers some rows received by the subplan. Based on the information observed by the collector, the optimizer chooses a subplan. At this point, the collector stops collecting statistics and buffering rows, and permits rows to pass through instead. On subsequent executions of the child cursor, the optimizer continues to use the same plan unless the plan ages out of the cache, or a different optimizer feature (for example, adaptive cursor sharing or statistics feedback) invalidates the plan.

The database uses adaptive plans when OPTIMIZER_FEATURES_ENABLE is 12.1.0.1 or later, and the OPTIMIZER_ADAPTIVE_REPORTING_ONLY initialization parameter is set to the default of false (see "Controlling Adaptive Optimization").
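The following sketch shows one way to check and change these settings; the values shown are illustrative only, and OPTIMIZER_ADAPTIVE_REPORTING_ONLY can be set at either the session or the system level.

-- Check the current settings (SQL*Plus)
SHOW PARAMETER optimizer_features_enable
SHOW PARAMETER optimizer_adaptive_reporting_only

-- Reporting-only mode: adaptive information is gathered and reported,
-- but the default plan is not changed at run time
ALTER SESSION SET optimizer_adaptive_reporting_only = true;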

4.4.1.2 Adaptive Plans: Join Method Example

Example 4-2 shows a join of the order_items and product_information tables. An adaptive plan for this statement shows two possible plans, one with a nested loops join and the other with a hash join.

Example 4-2 Join of order_items and product_information

SELECT product_name  
FROM   order_items o, product_information p  
WHERE  o.unit_price = 15 
AND    quantity > 1  
AND    p.product_id = o.product_id

A nested loops join is preferable if the database can avoid scanning a significant portion of product_information because its rows are filtered by the join predicate. If few rows are filtered, however, then scanning the right table in a hash join is preferable.

The following graphic shows the adaptive process. For the query in Example 4-2, the adaptive portion of the default plan contains two subplans, each of which uses a different join method. The optimizer automatically determines when each join method is optimal, depending on the cardinality of the left side of the join.

The statistics collector buffers enough rows coming from the order_items table to determine which join method to use. If the row count is below the threshold determined by the optimizer, then the optimizer chooses the nested loops join; otherwise, the optimizer chooses the hash join. In this case, the row count coming from the order_items table is above the threshold, so the optimizer chooses a hash join for the final plan, and disables buffering.


After the optimizer determines the final plan, DBMS_XPLAN.DISPLAY_CURSOR displays the hash join. The Note section of the execution plan indicates whether the plan is adaptive, as shown in the following sample plan:

----------------------------------------------------------------------------------------------------------------
|Id | Operation          | Name                |Starts|E-Rows|A-Rows|   A-Time   |Buffers|Reads|OMem|1Mem|O/1/M|
----------------------------------------------------------------------------------------------------------------
|  0| SELECT STATEMENT   |                     | 1 |   |  13 |00:00:00.10 |  21 |  17 |       |       |        |
|* 1|  HASH JOIN         |                     | 1 | 4 |  13 |00:00:00.10 |  21 |  17 |  2061K|  2061K|   1/0/0|
|* 2|   TABLE ACCESS FULL| ORDER_ITEMS         | 1 | 4 |  13 |00:00:00.07 |   5 |   4 |    |        |          |
|  3|   TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 1 | 288 |00:00:00.03 |  16 |  13 |    |        |          |
----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
   2 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1))
 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
Note
-----
   - this is an adaptive plan

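To see both the active and the inactive operations of an adaptive plan, you can pass the ADAPTIVE format modifier to DBMS_XPLAN.DISPLAY_CURSOR, as in the following sketch. Run it immediately after executing the query so that the query is still the last statement in the session; inactive plan rows are typically marked with a minus sign in the Id column.

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT => '+ADAPTIVE'));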
4.4.1.3 Adaptive Plans: Parallel Distribution Methods

Typically, parallel execution requires data redistribution to perform operations such as parallel sorts, aggregations, and joins. Oracle Database can use many different data distribution methods. The database chooses the method based on the number of rows to be distributed and the number of parallel server processes in the operation.

For example, consider the following alternative cases:

  • Many parallel server processes distribute few rows.

    The database may choose the broadcast distribution method. In this case, the entire result set is sent to all of the parallel server processes.

  • Few parallel server processes distribute many rows.

    If data skew is encountered during the data redistribution, then it could adversely affect the performance of the statement. The database is more likely to pick a hash distribution to ensure that each parallel server process receives an equal number of rows.

The hybrid hash distribution technique is an adaptive parallel data distribution that does not decide the final data distribution method until execution time. The optimizer inserts statistic collectors in front of the parallel server processes on the producer side of the operation. If the actual number of rows is less than a threshold, defined as twice the degree of parallelism chosen for the operation, then the data distribution method switches from hash to broadcast. Otherwise, the data distribution method is a hash.

The following graphic shows a hybrid hash join between the departments and employees tables. A statistics collector is inserted in front of the parallel server processes scanning the departments table. The distribution method is based on the run-time statistics. In this example, the number of rows is less than the threshold of twice the degree of parallelism, so the optimizer chooses a broadcast technique for the departments table.


In the following alternative example, the threshold is 16, or twice the specified DOP of 8. Because the number of rows (27) is greater than the threshold (16), the optimizer chooses a hash rather than a broadcast distribution. Note the statistics collector in step 10 of the plan.

EXPLAIN PLAN FOR 
  SELECT /*+ parallel(8) full(e) full(d) */ department_name, sum(salary)
  FROM   employees e, departments d
  WHERE  d.department_id=e.department_id
  GROUP BY department_name;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY); 

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
Plan hash value: 3213496516
 
----------------------------------------------------------------------------------------------------------------
|Id | Operation                          | Name      |Rows|Bytes|Cost   | Time     |   TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT                   |           | 27 | 621 | 6 (34)| 00:00:01 |       |      |            |
| 1 |  PX COORDINATOR                    |           |    |     |       |          |       |      |            |
| 2 |   PX SEND QC (RANDOM)              |  :TQ10003 | 27 | 621 | 6 (34)| 00:00:01 | Q1,03 | P->S |   QC (RAND)|
| 3 |    HASH GROUP BY                   |           | 27 | 621 | 6 (34)| 00:00:01 | Q1,03 | PCWP |            |
| 4 |     PX RECEIVE                     |           | 27 | 621 | 6 (34)| 00:00:01 | Q1,03 | PCWP |            |
| 5 |      PX SEND HASH                  |  :TQ10002 | 27 | 621 | 6 (34)| 00:00:01 | Q1,02 | P->P |        HASH|
| 6 |       HASH GROUP BY                |           | 27 | 621 | 6 (34)| 00:00:01 | Q1,02 | PCWP |            |
|*7 |        HASH JOIN                   |           |106 |2438 | 5 (20)| 00:00:01 | Q1,02 | PCWP |            |
| 8 |         PX RECEIVE                 |           | 27 | 432 | 2  (0)| 00:00:01 | Q1,02 | PCWP |            |
| 9 |          PX SEND HYBRID HASH       |  :TQ10000 | 27 | 432 | 2  (0)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
|10 |           STATISTICS COLLECTOR     |           |    |     |       |          | Q1,00 | PCWC |            |
|11 |            PX BLOCK ITERATOR       |           | 27 | 432 | 2  (0)| 00:00:01 | Q1,00 | PCWC |            |
|12 |             TABLE ACCESS FULL      |DEPARTMENTS| 27 | 432 | 2  (0)| 00:00:01 | Q1,00 | PCWP |            |
|13 |         PX RECEIVE                 |           |107 | 749 | 2  (0)| 00:00:01 | Q1,02 | PCWP |            |
|14 |          PX SEND HYBRID HASH (SKEW)|  :TQ10001 |107 | 749 | 2  (0)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
|15 |           PX BLOCK ITERATOR        |           |107 | 749 | 2  (0)| 00:00:01 | Q1,01 | PCWC |            |
|16 |            TABLE ACCESS FULL       | EMPLOYEES |107 | 749 | 2  (0)| 00:00:01 | Q1,01 | PCWP |            |
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
 
Note
-----
   - Degree of Parallelism is 8 because of hint
 
32 rows selected.

See Also:

Oracle Database VLDB and Partitioning Guide to learn more about parallel data redistribution techniques

4.4.2 Adaptive Statistics

The quality of the plans that the optimizer generates depends on the quality of the statistics. Some query predicates become too complex to rely on base table statistics alone, so the optimizer augments these statistics with adaptive statistics.

The following topics describe types of adaptive statistics:

4.4.2.1 Dynamic Statistics

During the compilation of a SQL statement, the optimizer decides whether to use dynamic statistics by considering whether the available statistics are sufficient to generate an optimal execution plan. If the available statistics are insufficient, then the optimizer uses dynamic statistics to augment the statistics. One type of dynamic statistics is the information gathered by dynamic sampling. The optimizer can use dynamic statistics for table scans, index access, joins, and GROUP BY operations, thus improving the quality of optimizer decisions.
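You can also request dynamic sampling explicitly for an individual statement or session. The following sketch uses the DYNAMIC_SAMPLING hint at level 5 on an hr.employees query and then sets the session-level parameter to the automatic level; the table, predicate, and levels are illustrative only.

-- Hint-level dynamic sampling for one statement
SELECT /*+ DYNAMIC_SAMPLING(e 5) */ COUNT(*)
FROM   hr.employees e
WHERE  salary > 10000
AND    commission_pct IS NOT NULL;

-- Level 11 lets the optimizer decide automatically when to use dynamic statistics
ALTER SESSION SET optimizer_dynamic_sampling = 11;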


See Also:

"Dynamic Statistics" to learn more about dynamic statistics and optimizer statistics in general

4.4.2.2 Automatic Reoptimization

Whereas adaptive plans help decide between multiple subplans, they are not feasible for all kinds of plan changes. For example, a query with an inefficient join order might perform suboptimally, but adaptive plans do not support adapting the join order during execution. In these cases, the optimizer considers automatic reoptimization. In contrast to adaptive plans, automatic reoptimization changes a plan on subsequent executions after the initial execution.

At the end of the first execution of a SQL statement, the optimizer uses the information gathered during execution to determine whether automatic reoptimization is worthwhile. If the execution information differs significantly from the optimizer estimates, then the optimizer looks for a replacement plan on the next execution. The optimizer uses the information gathered during the previous execution to help determine an alternative plan. The optimizer can reoptimize a query several times, each time learning more and further improving the plan.

4.4.2.2.1 Reoptimization: Statistics Feedback

A form of reoptimization known as statistics feedback (formerly known as cardinality feedback) automatically improves plans for repeated queries that have cardinality misestimates. The optimizer can estimate cardinalities incorrectly for many reasons, such as missing statistics, inaccurate statistics, or complex predicates.

The basic process of reoptimization using statistics feedback is as follows:

  1. During the first execution of a SQL statement, the optimizer generates an execution plan.

    The optimizer may enable monitoring for statistics feedback for the shared SQL area in the following cases:

    • Tables with no statistics

    • Multiple conjunctive or disjunctive filter predicates on a table

    • Predicates containing complex operators for which the optimizer cannot accurately compute selectivity estimates

    At the end of execution, the optimizer compares its initial cardinality estimates to the actual number of rows returned by each operation in the plan during execution. If estimates differ significantly from actual cardinalities, then the optimizer stores the correct estimates for subsequent use. The optimizer also creates a SQL plan directive so that other SQL statements can benefit from the information obtained during this initial execution.

  2. After the first execution, the optimizer disables monitoring for statistics feedback.

  3. If the query executes again, then the optimizer uses the corrected cardinality estimates instead of its usual estimates.

Example 4-3 Statistics Feedback

This example shows how the database uses statistics feedback to adjust incorrect estimates.

  1. The user oe runs the following query of the orders, order_items, and product_information tables:

    SELECT o.order_id, v.product_name
    FROM   orders o,
           ( SELECT order_id, product_name
             FROM   order_items o, product_information p
             WHERE  p.product_id = o.product_id
             AND    list_price < 50
             AND    min_price < 40 ) v
    WHERE  o.order_id = v.order_id
    
  2. Querying the plan in the cursor shows that the estimated number of rows (E-Rows) is far lower than the actual number of rows (A-Rows).

    Example 4-4 Actual Rows and Estimated Rows

    ------------------------------------------------------------------------------------------------------------
    | Id| Operation             | Name                |Starts|E-Rows|A-Rows|   A-Time  |Buffers|OMem|1Mem|O/1/M|
    ------------------------------------------------------------------------------------------------------------
    |  0| SELECT STATEMENT      |                     |     1|      |  269 |00:00:00.10|   1338|    |    |     |
    |  1|  NESTED LOOPS         |                     |     1|     1|  269 |00:00:00.10|   1338|    |    |     |
    |  2|   MERGE JOIN CARTESIAN|                     |     1|     4| 9135 |00:00:00.04|     33|    |    |     |
    |* 3|    TABLE ACCESS FULL  | PRODUCT_INFORMATION |     1|     1|   87 |00:00:00.01|     32|    |    |     |
    |  4|    BUFFER SORT        |                     |    87|   105| 9135 |00:00:00.01|      1|4096|4096|1/0/0|
    |  5|     INDEX FULL SCAN   | ORDER_PK            |     1|   105|  105 |00:00:00.01|      1|    |    |     |
    |* 6|   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |  9135|     1|  269 |00:00:00.03|   1305|    |    |     |
    ------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
       6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID")
    
  3. The user oe reruns the following query of the orders, order_items, and product_information tables:

    SELECT o.order_id, v.product_name
    FROM   orders o,
           ( SELECT order_id, product_name
             FROM   order_items o, product_information p
             WHERE  p.product_id = o.product_id
             AND    list_price < 50
             AND    min_price < 40 ) v
    WHERE  o.order_id = v.order_id;
    
  4. Querying the plan in the cursor shows that the optimizer used statistics feedback (as shown in the Note section) for the second execution, and also chose a different plan.

    Example 4-5 Actual Rows and Estimated Rows

    ----------------------------------------------------------------------------------------------------------------
    | Id| Operation              |Name            |Starts|E-Rows|A-Rows|   A-Time  |Buffers|Reads|OMem |1Mem |O/1/M|
    ----------------------------------------------------------------------------------------------------------------
    |  0| SELECT STATEMENT       |                   |  1|      |   269|00:00:00.03|    60 |   1 |     |     |     |
    |  1|  NESTED LOOPS          |                   |  1|   269|   269|00:00:00.03|    60 |   1 |     |     |     |
    |* 2|   HASH JOIN            |                   |  1|   313|   269|00:00:00.03|    39 |   1 |1321K|1321K|1/0/0|
    |* 3|    TABLE ACCESS FULL   |PRODUCT_INFORMATION|  1|    87|    87|00:00:00.01|    15 |   0 |     |     |     |
    |  4|    INDEX FAST FULL SCAN|ORDER_ITEMS_UK     |  1|   665|   665|00:00:00.02|    24 |   1 |     |     |     |
    |* 5|   INDEX UNIQUE SCAN    |ORDER_PK           |269|     1|   269|00:00:00.01|    21 |   0 |     |     |     |
    ----------------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
       3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
       5 - access("O"."ORDER_ID"="ORDER_ID")
     
    Note
    -----
       - statistics feedback used for this statement
    
    
    

    In the preceding output, the estimated number of rows (269) matches the actual number of rows.
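To confirm that a cursor has been flagged for reoptimization, one approach is to query V$SQL, which in this release exposes an IS_REOPTIMIZABLE column. The following sketch is illustrative only; the text filter simply identifies the preceding query.

SELECT sql_id, child_number, is_reoptimizable
FROM   v$sql
WHERE  sql_text LIKE 'SELECT o.order_id, v.product_name%';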

4.4.2.2.2 Reoptimization: Performance Feedback

Another form of reoptimization is performance feedback. This reoptimization helps improve the degree of parallelism automatically chosen for repeated SQL statements when PARALLEL_DEGREE_POLICY is set to ADAPTIVE.

The basic process of reoptimization using performance feedback is as follows:

  1. During the first execution of a SQL statement, when PARALLEL_DEGREE_POLICY is set to ADAPTIVE, the optimizer determines whether to execute the statement in parallel, and if so, which degree of parallelism to use.

    The optimizer chooses the degree of parallelism based on the estimated performance of the statement. Additional performance monitoring is enabled for all statements.

  2. At the end of the initial execution, the optimizer compares the following:

    • The degree of parallelism chosen by the optimizer

    • The degree of parallelism computed based on the performance statistics (for example, the CPU time) gathered during the actual execution of the statement

    If the two values vary significantly, then the database marks the statement for reparsing, and stores the initial execution statistics as feedback. This feedback helps better compute the degree of parallelism for subsequent executions.

  3. If the query executes again, then the optimizer uses the performance statistics gathered during the initial execution to better determine a degree of parallelism for the statement.


Note:

Even if PARALLEL_DEGREE_POLICY is not set to ADAPTIVE, statistics feedback may influence the degree of parallelism chosen for a statement.
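The following sketch shows how you might enable adaptive degree of parallelism for the instance, assuming you have the ALTER SYSTEM privilege; the parameter can also be set at the session level.

-- ADAPTIVE enables automatic degree of parallelism plus performance feedback
ALTER SYSTEM SET parallel_degree_policy = ADAPTIVE;

-- Verify the setting (SQL*Plus)
SHOW PARAMETER parallel_degree_policy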

4.4.2.3 SQL Plan Directives

A SQL plan directive is additional information that the optimizer uses to generate a more optimal plan. For example, during query optimization, when deciding whether the table is a candidate for dynamic statistics, the database queries the statistics repository for directives on a table. If the query joins two tables that have a data skew in their join columns, a SQL plan directive can direct the optimizer to use dynamic statistics to obtain an accurate cardinality estimate.

The optimizer collects SQL plan directives on query expressions rather than at the statement level. In this way, the optimizer can apply directives to multiple SQL statements. The database automatically maintains directives, and stores them in the SYSAUX tablespace. You can manage directives using the package DBMS_SPD.
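For example, the following sketch shows one way to inspect and manage directives; the SH schema filter is illustrative only.

-- List directives created for objects owned by SH
SELECT d.directive_id, o.object_name, d.type, d.state, d.reason
FROM   dba_sql_plan_directives d, dba_sql_plan_dir_objects o
WHERE  d.directive_id = o.directive_id
AND    o.owner = 'SH';

-- Force directives cached in memory to be persisted to the SYSAUX tablespace
EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE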

4.5 About Optimizer Management of SQL Plan Baselines

SQL plan management is a mechanism that enables the optimizer to automatically manage execution plans, ensuring that the database uses only known or verified plans (see Chapter 23, "Managing SQL Plan Baselines"). This mechanism can build a SQL plan baseline, which contains one or more accepted plans for each SQL statement.

The optimizer can access and manage the plan history and SQL plan baselines of SQL statements. This capability is central to the SQL plan management architecture. In SQL plan management, the optimizer has the following main objectives:

  • Identify repeatable SQL statements

  • Maintain plan history, and possibly SQL plan baselines, for a set of SQL statements

  • Detect plans that are not in the plan history

  • Detect potentially better plans that are not in the SQL plan baseline

The optimizer uses the normal cost-based search method.
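As a minimal sketch of the mechanics described above, the following statements enable automatic plan capture, load the plan of a specific cached statement into a baseline, and list the stored baselines; the SQL ID is a hypothetical placeholder.

-- Capture plans automatically for repeatable statements (default is false)
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = true;

-- Or load the plan of a specific cached statement manually
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'gwp663cqh5qbf');  -- hypothetical SQL ID
END;
/

-- List the plan baselines known to the database
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;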

Oracle Database SQL Tuning Guide, 12c Release 1 (12.1)

Oracle® Database

SQL Tuning Guide

12c Release 1 (12.1)

E49106-05

June 2014


Oracle Database SQL Tuning Guide, 12c Release 1 (12.1)

E49106-05

Copyright © 2014, Oracle and/or its affiliates. All rights reserved.

Primary Author: Lance Ashdown

Contributing Authors: Maria Colgan, Tom Kyte

Contributors: Pete Belknap, Ali Cakmak, Sunil Chakkappen, Immanuel Chan, Deba Chatterjee, Chris Chiappa, Dinesh Das, Leonidas Galanis, William Endress, Bruce Golbus, Katsumi Inoue, Kevin Jernigan, Shantanu Joshi, Adam Kociubes, Allison Lee, Sue Lee, David McDermid, Colin McGregor, Ted Persky, Ekrem Soylemez, Hong Su, Murali Thiyagarajah, Mark Townsend, Randy Urbano, Bharath Venkatakrishnan, Hailing Yu

Contributor:  The Oracle Database 12c documentation is dedicated to Mark Townsend, who was an inspiration to all who worked on this release.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.


Glossary

accepted plan

In the context of SQL plan management, a plan that is in a SQL plan baseline for a SQL statement and thus available for use by the optimizer. An accepted plan contains a set of hints, a plan hash value, and other plan-related information.

access path

The means by which the database retrieves data from a database. For example, a query using an index and a query using a full table scan use different access paths.

adaptive cursor sharing

A feature that enables a single statement that contains bind variables to use multiple execution plans. Cursor sharing is "adaptive" because the cursor adapts its behavior so that the database does not always use the same plan for each execution or bind variable value.

adaptive optimizer

A feature of the optimizer that enables it to adapt plans based on run-time statistics.

adaptive plan

An execution plan that changes after optimization because run-time conditions indicate that optimizer estimates are inaccurate. An adaptive plan has different built-in plan options. During the first execution, before a specific subplan becomes active, the optimizer makes a final decision about which option to use. The optimizer bases its choice on observations made during the execution up to this point. Thus, an adaptive plan enables the final plan for a statement to differ from the default plan.

adaptive query optimization

A set of capabilities that enables the adaptive optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics. Adaptive optimization is helpful when existing statistics are not sufficient to generate an optimal plan.

antijoin

A join that returns rows that fail to match the subquery on the right side. For example, an antijoin can list departments with no employees. Antijoins use the NOT EXISTS or NOT IN constructs.

Automatic Database Diagnostic Monitor (ADDM)

ADDM is self-diagnostic software built into Oracle Database. ADDM examines and analyzes data captured in Automatic Workload Repository (AWR) to determine possible database performance problems.

automatic optimizer statistics collection

The automatic running of the DBMS_STATS package to collect optimizer statistics for all schema objects for which statistics are missing or stale.

automatic initial plan capture

The mechanism by which the database automatically creates a SQL plan baseline for any repeatable SQL statement executed on the database. Enable automatic initial plan capture by setting the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES initialization parameter to true (the default is false).

See repeatable SQL statement.

automatic reoptimization

The ability of the optimizer to automatically change a plan on subsequent executions of a SQL statement. Automatic reoptimization can fix any suboptimal plan chosen due to incorrect optimizer estimates, from a suboptimal distribution method to an incorrect choice of degree of parallelism.

automatic SQL tuning

The work performed by Automatic SQL Tuning Advisor when it runs as an automated task within system maintenance windows.

Automatic SQL Tuning Advisor

SQL Tuning Advisor when run as an automated maintenance task. Automatic SQL Tuning runs during system maintenance windows as an automated maintenance task, searching for ways to improve the execution plans of high-load SQL statements.

See SQL Tuning Advisor.

Automatic Tuning Optimizer

The optimizer when invoked by SQL Tuning Advisor. In SQL tuning mode, the optimizer performs additional analysis to check whether it can further improve the plan produced in normal mode. The optimizer output is not an execution plan, but a series of actions, along with their rationale and expected benefit for producing a significantly better plan.

Automatic Workload Repository (AWR)

The infrastructure that provides services to Oracle Database components to collect, maintain, and use statistics for problem detection and self-tuning.

AWR snapshot

A set of data for a specific time that is used for performance comparisons. The delta values captured by the snapshot represent the changes for each statistic over the time period. Statistics gathered by AWR are queried from memory. You can display the gathered data in both reports and views.

baseline

In the context of AWR, the interval between two AWR snapshots that represent the database operating at an optimal level.

bind-aware cursor

A bind-sensitive cursor that is eligible to use different plans for different bind values. After a cursor has been made bind-aware, the optimizer chooses plans for future executions based on the bind value and its cardinality estimate.

bind-sensitive cursor

A cursor whose optimal plan may depend on the value of a bind variable. The database monitors the behavior of a bind-sensitive cursor that uses different bind values to determine whether a different plan is beneficial.

bind variable

A placeholder in a SQL statement that must be replaced with a valid value or value address for the statement to execute successfully. By using bind variables, you can write a SQL statement that accepts inputs or parameters at run time. The following query uses v_empid as a bind variable:

SELECT * FROM employees WHERE employee_id = :v_empid;

bind variable peeking

The ability of the optimizer to look at the value in a bind variable during a hard parse. By peeking at bind values, the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan.

bitmap join index

A bitmap index for the join of two or more tables.

bitmap piece

A subcomponent of a single bitmap index entry. Each indexed column value may have one or more bitmap pieces. The database uses bitmap pieces to break up an index entry that is large in relation to the size of a block.

B-tree index

An index organized like an upside-down tree. A B-tree index has two types of blocks: branch blocks for searching and leaf blocks that store values. The leaf blocks contain every indexed data value and a corresponding rowid used to locate the actual row. The "B" stands for "balanced" because all leaf blocks automatically stay at the same depth.

bulk load

A CREATE TABLE AS SELECT or INSERT INTO ... SELECT operation.

cardinality

The number of rows that is expected to be or actually is returned by an operation in an execution plan. Data has low cardinality when the number of distinct values in a column is low in relation to the total number of rows.

Cartesian join

A join in which one or more of the tables does not have any join conditions to any other tables in the statement. The optimizer joins every row from one data source with every row from the other data source, creating the Cartesian product of the two sets.

child cursor

The cursor containing the plan, compilation environment, and other information for a statement whose text is stored in a parent cursor. The parent cursor is number 0, the first child is number 1, and so on. Child cursors reference the same SQL text as the parent cursor, but are different. For example, two queries with the text SELECT * FROM t use different cursors when they reference two different tables named t.

cluster scan

An access path for a table cluster. In an indexed table cluster, Oracle Database first obtains the rowid of one of the selected rows by scanning the cluster index. Oracle Database then locates the rows based on this rowid.

column group

A set of columns that is treated as a unit.

column group statistics

Extended statistics gathered on a group of columns treated as a unit.

column statistics

Statistics about columns that the optimizer uses to determine optimal execution plans. Column statistics include the number of distinct column values, low value, high value, and number of nulls.

complex view merging

The merging of views containing the GROUP BY or DISTINCT keywords.

composite database operation

In a database operation, the activity between two points in time in a database session, with each session defining its own beginning and end points. A session can participate in at most one composite database operation at a time.

concurrency

Simultaneous access of the same data by many users. A multiuser database management system must provide adequate concurrency controls so that data cannot be updated or changed improperly, compromising data integrity.

concurrent statistics gathering mode

A mode that enables the database to simultaneously gather optimizer statistics for multiple tables in a schema, or multiple partitions or subpartitions in a table. Concurrency can reduce the overall time required to gather statistics by enabling the database to fully use multiple CPUs.

condition

A combination of one or more expressions and logical operators that returns a value of TRUE, FALSE, or UNKNOWN.

cost

A numeric internal measure that represents the estimated resource usage for an execution plan. The lower the cost, the more efficient the plan.

cost-based optimizer (CBO)

The legacy name for the optimizer. In earlier releases, the cost-based optimizer was an alternative to the rule-based optimizer (RBO).

cost model

The internal optimizer model that accounts for the cost of the I/O, CPU, and network resources that a query is predicted to use.

cumulative statistics

A count such as the number of block reads. Oracle Database generates many types of cumulative statistics for the system, sessions, and individual SQL statements.

cursor

A handle or name for a private SQL area in the PGA. Because cursors are closely associated with private SQL areas, the terms are sometimes used interchangeably.

cursor cache

See shared SQL area.

cursor merging

Combining cursors to save space in the shared SQL area. If the optimizer creates a plan for a bind-aware cursor, and if this plan is the same as an existing cursor, then the optimizer can merge the cursors.

data flow operator (DFO)

The unit of work between data redistribution stages in a parallel query.

data skew

Large variations in the number of duplicate values in a column.

database operation

A set of database tasks defined by end users or application code, for example, a batch job or ETL processing.

default plan

For an adaptive plan, the execution plan initially chosen by the optimizer using the statistics from the data dictionary. The default plan can differ from the final plan.

disabled plan

A plan that a database administrator has manually marked as ineligible for use by the optimizer.

degree of parallelism

The number of parallel execution servers associated with a single operation. Parallel execution is designed to effectively use multiple CPUs. Oracle Database parallel execution framework enables you to either explicitly choose a specific degree of parallelism or to rely on Oracle Database to automatically control it.

dense key

A numeric key that is stored as a native integer and has a range of values.

dense grouping key

A key that represents all grouping keys whose grouping columns come from a particular fact table or dimension.

dense join key

A key that represents all join keys whose join columns come from a particular fact table or dimension.

density

A decimal number between 0 and 1 that measures the selectivity of a column. Values close to 1 indicate that the column is unselective, whereas values close to 0 indicate that this column is more selective.

direct path read

A single or multiblock read into the PGA, bypassing the SGA.

driving table

The table to which other tables are joined. An analogy from programming is a for loop that contains another for loop. The outer for loop is the analog of a driving table, which is also called an outer table.

dynamic performance view

A view created on dynamic performance tables, which are virtual tables that record current database activity. The dynamic performance views are called fixed views because they cannot be altered or removed by the database administrator. They are also called V$ views because they begin with the string V$ (GV$ in Oracle RAC).

dynamic statistics

An optimization technique in which the database executes a recursive SQL statement to scan a small random sample of a table's blocks to estimate predicate selectivities.

dynamic statistics level

The level that controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics. Set the dynamic statistics level using either the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter or a statement hint.

enabled plan

In SQL plan management, a plan that is eligible for use by the optimizer.

endpoint number

A number that uniquely identifies a bucket in a histogram. In frequency and hybrid histograms, the endpoint number is the cumulative frequency of endpoints. In height-balanced histograms, the endpoint number is the bucket number.

endpoint repeat count

In a hybrid histogram, the number of times the endpoint value is repeated, for each endpoint (bucket) in the histogram. By using the repeat count, the optimizer can obtain accurate estimates for almost popular values.

endpoint value

An endpoint value is the highest value in the range of values in a histogram bucket.

equijoin

A join whose join condition contains an equality operator.

estimator

The component of the optimizer that determines the overall cost of a given execution plan.

execution plan

The combination of steps used by the database to execute a SQL statement. Each step either retrieves rows of data physically from the database or prepares them for the user issuing the statement. You can override execution plans by using hints.

execution tree

A tree diagram that shows the flow of row sources from one step to another in an execution plan.

expression

A combination of one or more values, operators, and SQL functions that evaluates to a value. For example, the expression 2*2 evaluates to 4. In general, expressions assume the data type of their components.

expression statistics

A type of extended statistics that improves optimizer estimates when a WHERE clause has predicates that use expressions.

extended statistics

A type of optimizer statistics that improves estimates for cardinality when multiple predicates exist or when predicates contain expressions.

extensible optimizer

An optimizer capability that enables authors of user-defined functions and indexes to create statistics collection, selectivity, and cost functions that the optimizer uses when choosing an execution plan. The optimizer cost model is extended to integrate information supplied by the user to assess CPU and I/O cost.

extension

A column group or an expression. The statistics collected for column groups and expressions are called extended statistics.

external table

A read-only table whose metadata is stored in the database but whose data is stored in files outside the database. The database uses the metadata describing external tables to expose their data as if they were relational tables.

filter condition

A WHERE clause component that eliminates rows from a single object referenced in a SQL statement.

final plan

In an adaptive plan, the plan that executes to completion. The default plan can differ from the final plan.

fixed object

A dynamic performance table or its index. The fixed objects are owned by SYS. Fixed object tables have names beginning with X$ and are the base tables for the V$ views.

fixed plan

An accepted plan that is marked as preferred, so that the optimizer considers only the fixed plans in the SQL plan baseline. You can use fixed plans to influence the plan selection process of the optimizer.

frequency histogram

A type of histogram in which each distinct column value corresponds to a single bucket. An analogy is sorting coins: all pennies go in bucket 1, all nickels go in bucket 2, and so on.

full outer join

A combination of a left and right outer join. In addition to the inner join, the database uses nulls to preserve rows from both tables that have not been returned in the result of the inner join. In other words, full outer joins join tables together, yet show rows with no corresponding rows in the joined tables.

full table scan

A scan of table data in which the database sequentially reads all rows from a table and filters out those that do not meet the selection criteria. All data blocks under the high water mark are scanned.

global temporary table

A special temporary table that stores intermediate session-private data for a specific duration.

hard parse

The steps performed by the database to build a new executable version of application code. The database must perform a hard parse instead of a soft parse if the parsed representation of a submitted statement does not exist in the shared SQL area.

hash cluster

A type of table cluster that is similar to an indexed cluster, except the index key is replaced with a hash function. No separate cluster index exists. In a hash cluster, the data is the index.

hash collision

Hashing multiple input values to the same output value.

hash function

A function that operates on an arbitrary-length input value and returns a fixed-length hash value.

hash join

A method for joining large data sets. The database uses the smaller of two data sets to build a hash table on the join key in memory. It then scans the larger data set, probing the hash table to find the joined rows.

hash scan

An access path for a table cluster. The database uses a hash scan to locate rows in a hash cluster based on a hash value. In a hash cluster, all rows with the same hash value are stored in the same data block. To perform a hash scan, Oracle Database first obtains the hash value by applying a hash function to a cluster key value specified by the statement, and then scans the data blocks containing rows with that hash value.

hash table

An in-memory data structure that associates join keys with rows in a hash join. For example, in a join of the employees and departments tables, the join key might be the department ID. A hash function uses the join key to generate a hash value. This hash value is an index in an array, which is the hash table.

hash value

In a hash cluster, a unique numeric ID that identifies a bucket. Oracle Database uses a hash function that accepts an infinite number of hash key values as input and sorts them into a finite number of buckets. Each hash value maps to the database block address for the block that stores the rows corresponding to the hash key value (department 10, 20, 30, and so on).

hashing

A mathematical technique in which an infinite set of input values is mapped to a finite set of output values, called hash values. Hashing is useful for rapid lookups of data in a hash table.

heap-organized table

A table in which the data rows are stored in no particular order on disk. By default, CREATE TABLE creates a heap-organized table.

height-balanced histogram

A histogram in which column values are divided into buckets so that each bucket contains approximately the same number of rows.

hint

An instruction passed to the optimizer through comments in a SQL statement. The optimizer uses hints to choose an execution plan for the statement.

histogram

A special type of column statistic that provides more detailed information about the data distribution in a table column.

hybrid hash distribution technique

An adaptive parallel data distribution that does not decide the final data distribution method until execution time.

hybrid histogram

An enhanced height-based histogram that stores the exact frequency of each endpoint in the sample, and ensures that a value is never stored in multiple buckets.

implicit query

A component of a DML statement that retrieves data without a subquery. An UPDATE, DELETE, or MERGE statement that does not explicitly include a SELECT statement uses an implicit query to retrieve the rows to be modified.

in-memory scan

A table scan that retrieves rows from the In-Memory Column Store.

incremental statistics maintenance

The ability of the database to generate global statistics for a partitioned table by aggregating partition-level statistics.

index

Optional schema object associated with a nonclustered table, table partition, or table cluster. In some cases indexes speed data access.

index cluster

A table cluster that uses an index to locate data. The cluster index is a B-tree index on the cluster key.

index clustering factor

A measure of row order in relation to an indexed value such as employee last name. The more scattered the rows among the data blocks, the lower the clustering factor.

index fast full scan

A scan of the index blocks in unsorted order, as they exist on disk. This scan reads the index instead of the table.

index full scan

The scan of an entire index in key order.

index-organized table

A table whose storage organization is a variant of a primary B-tree index. Unlike a heap-organized table, data is stored in primary key order.

index range scan

An index range scan is an ordered scan of an index that has the following characteristics:

  • One or more leading columns of an index are specified in conditions.

  • 0, 1, or more values are possible for an index key.

index range scan descending

An index range scan in which the database returns rows in descending order.

index skip scan

An index scan that occurs when the initial column of a composite index is "skipped" or not specified in the query. For example, if the composite index key is (cust_gender,cust_email), then the query predicate does not reference the cust_gender column.

index statistics

Statistics about indexes that the optimizer uses to determine whether to perform a full table scan or an index scan. Index statistics include B-tree levels, leaf block counts, the index clustering factor, distinct keys, and number of rows in the index.

index unique scan

A scan of an index that returns either 0 or 1 rowid.

inner join

A join of two or more tables that returns only those rows that satisfy the join condition.

inner table

In a nested loops join, the table that is not the outer table (driving table). For every row in the outer table, the database accesses all rows in the inner table. The outer loop is for every row in the outer table and the inner loop is for every row in the inner table.

join

A statement that retrieves data from multiple tables specified in the FROM clause of a SQL statement. Join types include inner joins, outer joins, and Cartesian joins.

join condition

A condition that compares two row sources using an expression. The database combines pairs of rows, each containing one row from each row source, for which the join condition evaluates to true.

join elimination

The removal of redundant tables from a query. A table is redundant when its columns are only referenced in join predicates, and those joins are guaranteed to neither filter nor expand the resulting rows.

join factorization

A cost-based transformation that can factorize common computations from branches of a UNION ALL query. Without join factorization, the optimizer evaluates each branch of a UNION ALL query independently, which leads to repetitive processing, including data access and joins. Avoiding an extra scan of a large base table can lead to a huge performance improvement.

join method

A method of joining a pair of row sources. The possible join methods are nested loops, sort merge, and hash joins. A Cartesian join requires one of the preceding join methods.

join order

The order in which multiple tables are joined together. For example, for each row in the employees table, the database can read each row in the departments table. In an alternative join order, for each row in the departments table, the database reads each row in the employees table.

To execute a statement that joins more than two tables, Oracle Database joins two of the tables and then joins the resulting row source to the next table. This process continues until all tables are joined into the result.

join predicate

A predicate in a WHERE or JOIN clause that combines the columns of two tables in a join.

key vector

A data structure that maps between dense join keys and dense grouping keys.

latch

A low-level serialization control mechanism used to protect shared data structures in the SGA from simultaneous access.

left join tree

A join tree in which the left input of every join is the result of a previous join.

left table

In an outer join, the table specified on the left side of the OUTER JOIN keywords (in ANSI SQL syntax).

library cache

An area of memory in the shared pool. This cache includes the shared SQL areas, private SQL areas (in a shared server configuration), PL/SQL procedures and packages, and control structures such as locks and library cache handles.

library cache hit

The reuse of SQL statement code found in the library cache.

library cache miss

During SQL processing, the act of searching for a usable plan in the library cache and not finding it.

maintenance window

A contiguous time interval during which automated maintenance tasks run. The maintenance windows are Oracle Scheduler windows that belong to the window group named MAINTENANCE_WINDOW_GROUP.

manual plan capture

The user-initiated bulk load of existing plans into a SQL plan baseline.

materialized view

A schema object that stores a query result. All materialized views are either read-only or updatable.

multiblock read

An I/O call that reads multiple database blocks. Multiblock reads can significantly speed up full table scans.

NDV

Number of distinct values. The NDV is important in generating selectivity estimates.

nested loops join

A type of join method. A nested loops join determines the outer table that drives the join, and for every row in the outer table, probes each row in the inner table. The outer loop is for each row in the outer table and the inner loop is for each row in the inner table. An analogy from programming is a for loop inside of another for loop.

nonequijoin

A join whose join condition does not contain an equality operator.
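
For example, the following query (assuming the hr sample schema) joins employees to jobs on a salary range rather than on an equality condition:

SELECT e.last_name, j.job_title
FROM   hr.employees e, hr.jobs j
WHERE  e.salary BETWEEN j.min_salary AND j.max_salary;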

nonjoin column

A column that appears only in predicates referencing a single table, rather than in a join predicate.

nonpopular value

In a histogram, any value that does not span two or more endpoints. Any value that is not nonpopular is a popular value.

noworkload statistics

Optimizer system statistics gathered when the database simulates a workload.

on-demand SQL tuning

The manual invocation of SQL Tuning Advisor. Any invocation of SQL Tuning Advisor that is not the result of an Automatic SQL Tuning task is on-demand tuning.

optimization

The overall process of choosing the most efficient means of executing a SQL statement.

optimizer

Built-in database software that determines the most efficient way to execute a SQL statement by considering factors related to the objects referenced and the conditions specified in the statement.

optimizer cost model

The model that the optimizer uses to select an execution plan. The optimizer selects the execution plan with the lowest cost, where cost represents the estimated resource usage for that plan. The optimizer cost model accounts for the I/O, CPU, and network resources that the query will use.

optimizer environment

The totality of session settings that can affect execution plan generation, such as the work area size or optimizer settings (for example, the optimizer mode).

optimizer goal

The prioritization of resource usage by the optimizer. Using the OPTIMIZER_MODE initialization parameter, you can set the optimizer goal to best throughput or best response time.

optimizer statistics

Details about the database and its objects that the optimizer uses to select the best execution plan for each SQL statement. Categories include table statistics such as number of rows, index statistics such as B-tree levels, system statistics such as CPU and I/O performance, and column statistics such as number of nulls.

optimizer statistics collection

The gathering of optimizer statistics for database objects. The database can collect these statistics automatically, or you can collect them manually by using the system-supplied DBMS_STATS package.

optimizer statistics collector

A row source inserted into an execution plan at key points to collect run-time statistics for use in adaptive plans.

optimizer statistics preferences

The default values of the parameters used by automatic statistics collection and the DBMS_STATS statistics gathering procedures.
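
For example, the following sketch sets the STALE_PERCENT preference for a single table, assuming the sh sample schema:

BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'SH',
    tabname => 'SALES',
    pname   => 'STALE_PERCENT',
    pvalue  => '5');
END;
/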

outer join

A join condition using the outer join operator (+) with one or more columns of one of the tables. The database returns all rows that meet the join condition. The database also returns all rows from the table without the outer join operator for which there are no matching rows in the table with the outer join operator.
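
For example, the following query (assuming the hr sample schema) returns every department, including departments that have no employees:

SELECT d.department_name, e.last_name
FROM   hr.departments d, hr.employees e
WHERE  d.department_id = e.department_id(+);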

outer table

See driving table

parallel execution

The application of multiple CPU and I/O resources to the execution of a single database operation.

parallel query

A query in which multiple processes work together simultaneously to run a single SQL query. By dividing the work among multiple processes, Oracle Database can run the statement more quickly. For example, four processes retrieve rows for four different quarters in a year instead of one process handling all four quarters by itself.

parent cursor

The cursor that stores the SQL text and other minimal information for a SQL statement. The child cursor contains the plan, compilation environment, and other information. When a statement first executes, the database creates both a parent and child cursor in the shared pool.

parse call

A call to Oracle to prepare a SQL statement for execution. The call includes syntactically checking the SQL statement, optimizing it, and then building or locating an executable form of that statement.

parsing

The stage of SQL processing that involves separating the pieces of a SQL statement into a data structure that can be processed by other routines.

A hard parse occurs when the SQL statement to be executed is either not in the shared pool, or it is in the shared pool but it cannot be shared. A soft parse occurs when a session attempts to execute a SQL statement, and the statement is already in the shared pool, and it can be used.

partition maintenance operation

A partition-related operation such as adding, exchanging, merging, or splitting table partitions.

partition-wise join

A join optimization that divides a large join of two tables, one of which must be partitioned on the join key, into several smaller joins.

pending statistics

Unpublished optimizer statistics. By default, the optimizer uses published statistics but does not use pending statistics.
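
For example, the following sketch gathers pending statistics for a table and then tests them in the current session, assuming the sh sample schema:

BEGIN
  -- Keep newly gathered statistics unpublished (pending)
  DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'PUBLISH', 'FALSE');
  DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES');
END;
/
-- Make the optimizer use the pending statistics in this session only
ALTER SESSION SET OPTIMIZER_USE_PENDING_STATISTICS = TRUE;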

performance feedback

This form of automatic reoptimization helps improve the degree of parallelism automatically chosen for repeated SQL statements when PARALLEL_DEGREE_POLICY is set to ADAPTIVE.

pipelined table function

A PL/SQL function that accepts a collection of rows as input. You invoke the table function as the operand of the table operator in the FROM list of a SELECT statement.
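
For example, the following sketch defines a pipelined function that doubles each value fetched from an input cursor, and then invokes it with the table operator. The hr sample schema is assumed, and num_tab and double_sal are illustrative names.

CREATE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION double_sal(c IN SYS_REFCURSOR)
  RETURN num_tab PIPELINED
IS
  v NUMBER;
BEGIN
  LOOP
    FETCH c INTO v;
    EXIT WHEN c%NOTFOUND;
    PIPE ROW (v * 2);  -- emit each transformed row as it is produced
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(double_sal(CURSOR(SELECT salary FROM hr.employees)));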

plan evolution

The manual change of an unaccepted plan in the SQL plan history into an accepted plan in the SQL plan baseline.

plan generator

The part of the optimizer that tries different access paths, join methods, and join orders for a given query block to find the plan with the lowest cost.

plan selection

The optimizer's attempt to find a matching plan in the SQL plan baseline for a statement after performing a hard parse.

plan verification

Comparing the performance of an unaccepted plan to a plan in a SQL plan baseline and ensuring that the unaccepted plan performs better before it is accepted.

popular value

In a histogram, any value that spans two or more endpoints. Any value that is not popular is a nonpopular value.

predicate pushing

A transformation technique in which the optimizer "pushes" the relevant predicates from the containing query block into the view query block. For views that are not merged, this technique improves the subplan of the unmerged view because the database can use the pushed-in predicates to access indexes or to use as filters.

private SQL area

An area in memory that holds a parsed statement and other information for processing. The private SQL area contains data such as bind variable values, query execution state information, and query execution work areas.

proactive SQL tuning

Using SQL tuning tools to identify SQL statements that are candidates for tuning before users have complained about a performance problem.

See reactive SQL tuning, SQL tuning.

projection view

An optimizer-generated view that appears in queries in which a DISTINCT view has been merged, or a GROUP BY view is merged into an outer query block that also contains GROUP BY, HAVING, or aggregates.

See simple view merging, complex view merging.

query

An operation that retrieves data from tables or views. For example, SELECT * FROM employees is a query.

query block

A top-level SELECT statement, subquery, or unmerged view.

query optimizer

See optimizer.

reactive SQL tuning

Diagnosing and fixing SQL-related performance problems after users have complained about them.

See proactive SQL tuning, SQL tuning.

recursive SQL

Additional SQL statements that the database must issue to execute a SQL statement issued by a user. The generation of recursive SQL is known as a recursive call. For example, the database generates recursive calls when data dictionary information is not available in memory and so must be retrieved from disk.

reoptimization

See automatic reoptimization.

repeatable SQL statement

A statement that the database parses or executes after recognizing that it is tracked in the SQL statement log.

response time

The time required to complete a unit of work.

See throughput.

result set

In a query, the set of rows generated by the execution of a cursor.

right join tree

A join tree in which the right input of every join is the result of a previous join.

right table

In an outer join, the table specified on the right side of the OUTER JOIN keywords (in ANSI SQL syntax).

rowid

A globally unique address for a row in a table.

row set

A set of rows returned by a step in an execution plan.

row source

An iterative control structure that processes a set of rows in an iterated manner and produces a row set.

row source generator

Software that receives the optimal plan from the optimizer and outputs the execution plan for the SQL statement.

row source tree

A collection of row sources produced by the row source generator. The row source tree for a SQL statement shows information such as table order, access methods, join methods, and data operations such as filters and sorts.

sample table scan

A scan that retrieves a random sample of data from a simple table or a complex SELECT statement, such as a statement involving joins and views.
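
For example, the following query (hr sample schema assumed) reads a random sample of approximately 10 percent of the rows in the table:

SELECT * FROM hr.employees SAMPLE (10);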

sampling

Gathering statistics from a random subset of rows in a table.

selectivity

A value indicating the proportion of a row set retrieved by a predicate or combination of predicates, for example, WHERE last_name = 'Smith'. A selectivity of 0 means that no rows pass the predicate test, whereas a value of 1 means that all rows pass the test.

The adjective selective means roughly "choosy." Thus, a highly selective query returns a low proportion of rows (selectivity close to 0), whereas an unselective query returns a high proportion of rows (selectivity close to 1).
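
As a simplified illustration, assume a 107-row table whose last_name column has 26 distinct values, a uniform distribution, and no histogram. For the predicate last_name = 'Smith', the optimizer typically estimates the selectivity as 1/26 (about 0.04), giving an estimated cardinality of roughly 107 x 0.04, or about 4 rows.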

semijoin

A join that returns rows from the first table when at least one match exists in the second table. For example, you list departments with at least one employee. The difference between a semijoin and a conventional join is that rows in the first table are returned at most once. Semijoins use the EXISTS or IN constructs.
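
For example, the following query (hr sample schema assumed) lists each department with at least one employee, returning each department at most once:

SELECT d.department_name
FROM   hr.departments d
WHERE  EXISTS (SELECT NULL
               FROM   hr.employees e
               WHERE  e.department_id = d.department_id);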

shared pool

Portion of the SGA that contains shared memory constructs such as shared SQL areas.

shared SQL area

An area in the shared pool that contains the parse tree and execution plan for a SQL statement. Only one shared SQL area exists for a unique statement. The shared SQL area is sometimes referred to as the cursor cache.

simple database operation

A database operation consisting of a single SQL statement or PL/SQL procedure or function.

simple view merging

The merging of select-project-join views. For example, a query joins the employees table to a subquery that joins the departments and locations tables.

soft parse

Any parse that is not a hard parse. If a submitted SQL statement is the same as a reusable SQL statement in the shared pool, then Oracle Database reuses the existing code. This reuse of code is also called a library cache hit.

sort merge join

A type of join method. The join consists of a sort join, in which both inputs are sorted on the join key, followed by a merge join, in which the sorted lists are merged.

SQL Access Advisor

SQL Access Advisor is internal diagnostic software that recommends which materialized views, indexes, and materialized view logs to create, drop, or retain.

SQL compilation

In the context of Oracle SQL processing, this term refers collectively to the phases of parsing, optimization, and plan generation.

SQL handle

A string value derived from the numeric SQL signature. Like the signature, the handle uniquely identifies a SQL statement. It serves as a SQL search key in user APIs.

SQL ID

For a specific SQL statement, the unique identifier of the parent cursor in the library cache. A hash function applied to the text of the SQL statement generates the SQL ID. The V$SQL.SQL_ID column displays the SQL ID.

SQL incident

In the fault diagnosability infrastructure of Oracle Database, a single occurrence of a SQL-related problem. When a problem (critical error) occurs multiple times, the database creates an incident for each occurrence. Incidents are timestamped and tracked in the Automatic Diagnostic Repository (ADR).

SQL management base (SMB)

A logical repository that stores statement logs, plan histories, SQL plan baselines, and SQL profiles. The SMB is part of the data dictionary and resides in the SYSAUX tablespace.

SQL plan baseline

A set of one or more accepted plans for a repeatable SQL statement. Each accepted plan contains a set of hints, a plan hash value, and other plan-related information. SQL plan management uses SQL plan baselines to record and evaluate the execution plans of SQL statements over time.

SQL plan capture

Techniques for capturing and storing relevant information about plans in the SQL management base (SMB) for a set of SQL statements. Capturing a plan means making SQL plan management aware of this plan.

SQL plan directive

Additional information that the optimizer uses to generate a more optimal plan. For example, a SQL plan directive might instruct the optimizer to collect missing statistics or gather dynamic statistics. The optimizer collects SQL plan directives on query expressions rather than at the statement level, so the directives are usable for multiple SQL statements.

SQL plan history

The set of plans generated for a repeatable SQL statement over time. The history contains both SQL plan baselines and unaccepted plans.

SQL plan management

SQL plan management is a preventative mechanism that records and evaluates the execution plans of SQL statements over time. SQL plan management can prevent SQL plan regressions caused by environmental changes such as a new optimizer version, changes to optimizer statistics, system settings, and so on.

SQL processing

The stages of parsing, optimization, row source generation, and execution of a SQL statement.

SQL profile

A set of auxiliary information built during automatic tuning of a SQL statement. A SQL profile is to a SQL statement what statistics are to a table. The optimizer can use SQL profiles to improve cardinality and selectivity estimates, which in turn leads the optimizer to select better plans.

SQL profiling

The verification and validation by the Automatic Tuning Optimizer of its own estimates.

SQL signature

A numeric hash value computed using a SQL statement text that has been normalized for case insensitivity and white space. It uniquely identifies a SQL statement. The database uses this signature as a key to maintain SQL management objects such as SQL profiles, SQL plan baselines, and SQL patches.

SQL statement log

When automatic SQL plan capture is enabled, a log that contains the SQL ID of SQL statements that the optimizer has evaluated over time. A statement is tracked when it exists in the log.

SQL test case

A problematic SQL statement and related information needed to reproduce the execution plan in a different environment. A SQL test case is stored in an Oracle Data Pump file.

SQL test case builder

A database feature that gathers information related to a SQL statement and packages it so that a user can reproduce the problem on a different database. The DBMS_SQLDIAG package is the interface for SQL test case builder.

SQL trace file

A server-generated file that provides performance information on individual SQL statements. For example, the trace file contains parse, execute, and fetch counts, CPU and elapsed times, physical reads and logical reads, and misses in the library cache.
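
For example, the following sketch traces a single statement in the current session (hr sample schema assumed); the resulting trace file can then be formatted with the TKPROF utility:

ALTER SESSION SET SQL_TRACE = TRUE;
SELECT COUNT(*) FROM hr.employees;
ALTER SESSION SET SQL_TRACE = FALSE;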

SQL tuning

The process of improving SQL statement efficiency to meet measurable goals.

SQL Tuning Advisor

Built-in database diagnostic software that optimizes high-load SQL statements.

See Automatic SQL Tuning Advisor.

SQL tuning set (STS)

A database object that includes one or more SQL statements along with their execution statistics and execution context.
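
For example, the following sketch creates an empty SQL tuning set named my_sts (an illustrative name) that you could later populate with statements:

BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'my_sts',
    description => 'Example tuning set');
END;
/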

star schema

A relational schema whose design represents a dimensional data model. The star schema consists of one or more fact tables and one or more dimension tables that are related through foreign keys.

statistics feedback

A form of automatic reoptimization that automatically improves plans for repeated queries that have cardinality misestimates. The optimizer may estimate cardinalities incorrectly for many reasons, such as missing statistics, inaccurate statistics, or complex predicates.

stored outline

A set of hints for a SQL statement. The hints in stored outlines direct the optimizer to choose a specific plan for the statement.

subplan

A portion of an adaptive plan that the optimizer can switch to as an alternative at run time. A subplan can consist of multiple operations in the plan. For example, the optimizer can treat a join method and the corresponding access path as one unit when determining whether to change the plan at run time.

subquery

A query nested within another SQL statement. Unlike implicit queries, subqueries use a SELECT statement to retrieve data.

subquery unnesting

A transformation technique in which the optimizer transforms a nested query into an equivalent join statement, and then optimizes the join.

synopsis

A set of auxiliary statistics gathered on a partitioned table when the INCREMENTAL value is set to true.
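
For example, the following sketch enables incremental statistics, and therefore synopsis maintenance, for a partitioned table, assuming the sh sample schema:

BEGIN
  DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'INCREMENTAL', 'TRUE');
  DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES');
END;
/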

system statistics

Statistics that enable the optimizer to use CPU and I/O characteristics of the system, such as CPU speed, I/O seek time, and I/O transfer speed.

table cluster

A schema object that contains data from one or more tables, all of which have one or more columns in common. In table clusters, the database stores together all the rows from all tables that share the same cluster key.

table expansion

A transformation technique that enables the optimizer to generate a plan that uses indexes on the read-mostly portion of a partitioned table, but not on the active portion of the table.

table statistics

Statistics about tables that the optimizer uses to determine table access cost, join cardinality, join order, and so on. Table statistics include row counts, block counts, empty blocks, average free space per block, number of chained rows, average row length, and staleness of the statistics on the table.

throughput

The amount of work completed in a unit of time.

See response time.

top frequency histogram

A variation of a frequency histogram that ignores nonpopular values that are statistically insignificant, thus producing a better histogram for highly popular values.

tuning mode

One of the two optimizer modes. When running in tuning mode, the optimizer is known as the Automatic Tuning Optimizer. In tuning mode, the optimizer determines whether it can further improve the plan produced in normal mode. The optimizer output is not an execution plan, but a series of actions, along with their rationale and expected benefit for producing a significantly better plan.

unaccepted plan

A plan for a statement that is in the SQL plan history but has not been accepted into the SQL plan baseline.

unselective

A relatively large fraction of rows from a row set. A query becomes more unselective as the selectivity approaches 1. For example, a query that returns 999,999 rows from a table with one million rows is unselective. A query of the same table that returns one row is selective.

user response time

The time between when a user submits a command and receives a response.

See throughput.

vector I/O

A type of I/O in which the database obtains a set of rowids and sends them, batched in an array, to the operating system, which performs the read.

view merging

The merging of a query block representing a view into the query block that contains it. View merging can improve plans by enabling the optimizer to consider additional join orders, access methods, and other transformations.

workload statistics

Optimizer statistics for system activity in a specified time period.


3 SQL Processing

This chapter explains how Oracle Database processes SQL statements. Specifically, it explains how the database processes DDL statements to create objects, DML statements to modify data, and queries to retrieve data.

This chapter contains the following topics:

3.1 About SQL Processing

SQL processing is the parsing, optimization, row source generation, and execution of a SQL statement. Depending on the statement, the database may omit some of these stages. Figure 3-1 depicts the general stages of SQL processing.

Figure 3-1 Stages of SQL Processing


3.1.1 SQL Parsing

As shown in Figure 3-1, the first stage of SQL processing is parsing. This stage involves separating the pieces of a SQL statement into a data structure that other routines can process. The database parses a statement when instructed by the application, which means that only the application, and not the database itself, can reduce the number of parses.

When an application issues a SQL statement, the application makes a parse call to the database to prepare the statement for execution. The parse call opens or creates a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information. The cursor and private SQL area are in the program global area (PGA).

During the parse call, the database performs the following checks:

  • Syntax check

  • Semantic check

  • Shared pool check

The preceding checks identify the errors that can be found before statement execution. Some errors cannot be caught by parsing. For example, the database can encounter deadlocks or errors in data conversion only during statement execution.


See Also:

Oracle Database Concepts to learn about deadlocks

3.1.1.1 Syntax Check

Oracle Database must check each SQL statement for syntactic validity. A statement that breaks a rule for well-formed SQL syntax fails the check. For example, the following statement fails because the keyword FROM is misspelled as FORM:

SQL> SELECT * FORM employees;
SELECT * FORM employees
         *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

3.1.1.2 Semantic Check

The semantics of a statement are its meaning. Thus, a semantic check determines whether a statement is meaningful, for example, whether the objects and columns in the statement exist. A syntactically correct statement can fail a semantic check, as shown in the following example of a query of a nonexistent table:

SQL> SELECT * FROM nonexistent_table;
SELECT * FROM nonexistent_table
              *
ERROR at line 1:
ORA-00942: table or view does not exist

3.1.1.3 Shared Pool Check

During the parse, the database performs a shared pool check to determine whether it can skip resource-intensive steps of statement processing. To this end, the database uses a hashing algorithm to generate a hash value for every SQL statement. The statement hash value is the SQL ID shown in V$SQL.SQL_ID. This hash value is deterministic within a version of Oracle Database, so the same statement in a single instance or in different instances has the same SQL ID.
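
For example, the following query is a sketch of how you might view the SQL ID and plan hash values for cursors in the shared SQL area; the literal text in the filter is illustrative.

SELECT sql_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'SELECT * FROM sh.sales%';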

When a user submits a SQL statement, the database searches the shared SQL area to see if an existing parsed statement has the same hash value. The hash value of a SQL statement is distinct from the following values:

  • Memory address for the statement

    Oracle Database uses the SQL ID to perform a keyed read in a lookup table. In this way, the database obtains possible memory addresses of the statement.

  • Hash value of an execution plan for the statement

    A SQL statement can have multiple plans in the shared pool. Typically, each plan has a different hash value. If the same SQL ID has multiple plan hash values, then the database knows that multiple plans exist for this SQL ID.

Parse operations fall into the following categories, depending on the type of statement submitted and the result of the hash check:

  • Hard parse

    If Oracle Database cannot reuse existing code, then it must build a new executable version of the application code. This operation is known as a hard parse, or a library cache miss.


    Note:

    The database always performs a hard parse of DDL.

    During the hard parse, the database accesses the library cache and data dictionary cache numerous times to check the data dictionary. When the database accesses these areas, it uses a serialization device called a latch on required objects so that their definition does not change. Latch contention increases statement execution time and decreases concurrency.

  • Soft parse

    A soft parse is any parse that is not a hard parse. If the submitted statement is the same as a reusable SQL statement in the shared pool, then Oracle Database reuses the existing code. This reuse of code is also called a library cache hit.

    Soft parses can vary in how much work they perform. For example, configuring the session shared SQL area can sometimes reduce the amount of latching in the soft parses, making them "softer."

    In general, a soft parse is preferable to a hard parse because the database skips the optimization and row source generation steps, proceeding straight to execution.

Figure 3-2 is a simplified representation of a shared pool check of an UPDATE statement in a dedicated server architecture.

Figure 3-2 Shared Pool Check


If a check determines that a statement in the shared pool has the same hash value, then the database performs semantic and environment checks to determine whether the statements have the same meaning. Identical syntax is not sufficient. For example, suppose two different users log in to the database and issue the following SQL statements:

CREATE TABLE my_table ( some_col INTEGER );
SELECT * FROM my_table;

The SELECT statements for the two users are syntactically identical, but two separate schema objects are named my_table. This semantic difference means that the second statement cannot reuse the code for the first statement.

Even if two statements are semantically identical, an environmental difference can force a hard parse. In this context, the optimizer environment is the totality of session settings that can affect execution plan generation, such as the work area size or optimizer settings (for example, the optimizer mode). Consider the following series of SQL statements executed by a single user:

ALTER SESSION SET OPTIMIZER_MODE=ALL_ROWS;
ALTER SYSTEM FLUSH SHARED_POOL;               # optimizer environment 1
SELECT * FROM sh.sales;

ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS;  # optimizer environment 2
SELECT * FROM sh.sales;

ALTER SESSION SET SQL_TRACE=true;             # optimizer environment 3
SELECT * FROM sh.sales;

In the preceding example, the same SELECT statement is executed in three different optimizer environments. Consequently, the database creates three separate shared SQL areas for these statements and forces a hard parse of each statement.




3.1.2 SQL Optimization

During the optimization stage, Oracle Database must perform a hard parse at least once for every unique DML statement and performs the optimization during this parse. The database never optimizes DDL unless it includes a DML component such as a subquery that requires optimization. Chapter 4, "Query Optimizer Concepts" explains the optimization process in more detail.

3.1.3 SQL Row Source Generation

The row source generator is software that receives the optimal execution plan from the optimizer and produces an iterative execution plan that is usable by the rest of the database. The iterative plan is a binary program that, when executed by the SQL engine, produces the result set.

The execution plan takes the form of a combination of steps. Each step returns a row set. The next step either uses the rows in this set, or the last step returns the rows to the application issuing the SQL statement.

A row source is a row set returned by a step in the execution plan along with a control structure that can iteratively process the rows. The row source can be a table, view, or result of a join or grouping operation.

The row source generator produces a row source tree, which is a collection of row sources. The row source tree shows the following information:

  • An ordering of the tables referenced by the statement

  • An access method for each table mentioned in the statement

  • A join method for tables affected by join operations in the statement

  • Data operations such as filter, sort, or aggregation

Example 3-1 shows the execution plan of a SELECT statement when AUTOTRACE is enabled. The statement selects the last name, job title, and department name for all employees whose last names begin with the letter A. The execution plan for this statement is the output of the row source generator.
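
For example, in SQL*Plus you might display only the execution plan, without fetching rows, by enabling AUTOTRACE as follows; SET AUTOTRACE ON would instead run the query and also display the plan and statistics.

SET AUTOTRACE TRACEONLY EXPLAIN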

Example 3-1 Execution Plan

SELECT e.last_name, j.job_title, d.department_name 
FROM   hr.employees e, hr.departments d, hr.jobs j
WHERE  e.department_id = d.department_id
AND    e.job_id = j.job_id
AND    e.last_name LIKE 'A%';
 
Execution Plan
----------------------------------------------------------
Plan hash value: 975837011
 
---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     3 |   189 |     7  (15)| 00:00:01 |
|*  1 |  HASH JOIN                    |             |     3 |   189 |     7  (15)| 00:00:01 |
|*  2 |   HASH JOIN                   |             |     3 |   141 |     5  (20)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| EMPLOYEES   |     3 |    60 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | EMP_NAME_IX |     3 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL          | JOBS        |    19 |   513 |     2   (0)| 00:00:01 |
|   6 |   TABLE ACCESS FULL           | DEPARTMENTS |    27 |   432 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
   2 - access("E"."JOB_ID"="J"."JOB_ID")
   4 - access("E"."LAST_NAME" LIKE 'A%')
       filter("E"."LAST_NAME" LIKE 'A%')

3.1.4 SQL Execution

During execution, the SQL engine executes each row source in the tree produced by the row source generator. This step is the only mandatory step in DML processing.

Figure 3-3 is an execution tree, also called a parse tree, that shows the flow of row sources from one step to another in the plan in Example 3-1. In general, the order of the steps in execution is the reverse of the order in the plan, so you read the plan from the bottom up.

Each step in an execution plan has an ID number. The numbers in Figure 3-3 correspond to the Id column in the plan shown in Example 3-1. Initial spaces in the Operation column of the plan indicate hierarchical relationships. For example, if the name of an operation is preceded by two spaces, then this operation is a child of an operation preceded by one space. Operations preceded by one space are children of the SELECT statement itself.

Figure 3-3 Row Source Tree


In Figure 3-3, each node of the tree acts as a row source, which means that each step of the execution plan in Example 3-1 either retrieves rows from the database or accepts rows from one or more row sources as input. The SQL engine executes each row source as follows:

  • Steps indicated by the black boxes physically retrieve data from an object in the database. These steps are the access paths, or techniques for retrieving data from the database.

    • Step 6 uses a full table scan to retrieve all rows from the departments table.

    • Step 5 uses a full table scan to retrieve all rows from the jobs table.

    • Step 4 scans the emp_name_ix index in order, looking for each key that begins with the letter A and retrieving the corresponding rowid. For example, the rowid corresponding to Atkinson is AAAPzRAAFAAAABSAAe.

    • Step 3 retrieves from the employees table the rows whose rowids were returned by Step 4. For example, the database uses rowid AAAPzRAAFAAAABSAAe to retrieve the row for Atkinson.

  • Steps indicated by the clear boxes operate on row sources.

    • Step 2 performs a hash join, accepting row sources from Steps 3 and 5, joining each row from the Step 5 row source to its corresponding row in Step 3, and returning the resulting rows to Step 1.

      For example, the row for employee Atkinson is associated with the job name Stock Clerk.

    • Step 1 performs another hash join, accepting row sources from Steps 2 and 6, joining each row from the Step 6 source to its corresponding row in Step 2, and returning the result to the client.

      For example, the row for employee Atkinson is associated with the department named Shipping.

In some execution plans the steps are iterative and in others sequential. The hash join shown in Example 3-1 is sequential. The database completes the steps in their entirety based on the join order. The database starts with the index range scan of emp_name_ix. Using the rowids that it retrieves from the index, the database reads the matching rows in the employees table, and then scans the jobs table. After it retrieves the rows from the jobs table, the database performs the hash join.

During execution, the database reads the data from disk into memory if the data is not in memory. The database also takes out any locks and latches necessary to ensure data integrity and logs any changes made during the SQL execution. The final stage of processing a SQL statement is closing the cursor.

3.2 How Oracle Database Processes DML

Most DML statements have a query component. In a query, execution of a cursor places the results of the query into a set of rows called the result set.

Result set rows can be fetched either a row at a time or in groups. In the fetch stage, the database selects rows and, if requested by the query, orders the rows. Each successive fetch retrieves another row of the result until the last row has been fetched.

In general, the database cannot determine for certain the number of rows to be retrieved by a query until the last row is fetched. Oracle Database retrieves the data in response to fetch calls, so that the more rows the database reads, the more work it performs. For some queries the database returns the first row as quickly as possible, whereas for others it creates the entire result set before returning the first row.

3.2.1 Read Consistency

In general, a query retrieves data by using the Oracle Database read consistency mechanism. This mechanism, which uses undo data to show past versions of data, guarantees that all data blocks read by a query are consistent to a single point in time.

For an example of read consistency, suppose a query must read 100 data blocks in a full table scan. The query processes the first 10 blocks while DML in a different session modifies block 75. When the first session reaches block 75, it detects the change and uses undo data to retrieve the old, unmodified version of the data and construct a noncurrent version of block 75 in memory.


See Also:

Oracle Database Concepts to learn about multiversion read consistency

3.2.2 Data Changes

DML statements that must change data use the read consistency mechanism to retrieve only the data that matched the search criteria when the modification began. Afterward, these statements retrieve the data blocks as they exist in their current state and make the required modifications. The database must perform other actions related to the modification of the data such as generating redo and undo data.

3.3 How Oracle Database Processes DDL

Oracle Database processes DDL differently from DML. For example, when you create a table, the database does not optimize the CREATE TABLE statement. Instead, Oracle Database parses the DDL statement and carries out the command.

The database processes DDL differently because it is a means of defining an object in the data dictionary. Typically, Oracle Database must parse and execute many recursive SQL statements to execute a DDL statement. Suppose you create a table as follows:

CREATE TABLE mytable (mycolumn INTEGER);

Typically, the database would run dozens of recursive statements to execute the preceding statement. The recursive SQL would perform actions such as the following:

  • Issue a COMMIT before executing the CREATE TABLE statement

  • Verify that user privileges are sufficient to create the table

  • Determine which tablespace the table should reside in

  • Ensure that the tablespace quota has not been exceeded

  • Ensure that no object in the schema has the same name

  • Insert rows that define the table into the data dictionary

  • Issue a COMMIT if the DDL statement succeeded or a ROLLBACK if it did not
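
To observe the recursive statements for yourself, one approach is to trace the session while running the DDL and then format the resulting trace file with TKPROF. This is a sketch; tracing is covered later in this book.

ALTER SESSION SET SQL_TRACE = TRUE;
CREATE TABLE mytable (mycolumn INTEGER);
ALTER SESSION SET SQL_TRACE = FALSE;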


See Also:

Oracle Database Development Guide to learn about processing DDL, transaction control, and other types of statements


Part III

Query Execution Plans

This part contains the following chapters:


Part IX

SQL Controls

This part contains the following chapters:


Part I

SQL Performance Fundamentals

This part contains the following chapters:


Preface

This manual explains how to tune Oracle SQL.

This preface contains the following topics:

Audience

This document is intended for database administrators and application developers who perform the following tasks:

  • Generating and interpreting SQL execution plans

  • Managing optimizer statistics

  • Influencing the optimizer through initialization parameters or SQL hints

  • Controlling cursor sharing for SQL statements

  • Monitoring SQL execution

  • Performing application tracing

  • Managing SQL tuning sets

  • Using SQL Tuning Advisor or SQL Access Advisor

  • Managing SQL profiles

  • Managing SQL baselines

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documents

This manual assumes that you are familiar with the following documents:

To learn how to tune data warehouse environments, see Oracle Database Data Warehousing Guide.

Many examples in this book use the sample schemas, which are installed by default when you select the Basic Installation option with an Oracle Database. See Oracle Database Sample Schemas for information on how these schemas were created and how you can use them.

To learn about Oracle Database error messages, see Oracle Database Error Messages Reference. Oracle Database error message documentation is only available in HTML. If you are accessing the error message documentation on the Oracle Documentation CD, then you can browse the error messages by range. After you find the specific range, use your browser's find feature to locate the specific message. When connected to the Internet, you can search for a specific error message using the error message search feature of the Oracle online documentation.

Conventions

The following text conventions are used in this document:

Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


Index

A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W 

A

access paths, 3.1.4
execution plans, 6.1
full table scan, 10.2.3.1
full table scans, 11.1
adaptive plans, 4.4.1, 7.2.1, 7.2.1, 7.3.1, 14.2.1, 17.2.2
cardinality misestimates, 4.4.1
join methods, 4.4.1.2
optimizer statistics collector, 4.4.1.1
parallel distribution methods, 4.4.1.3
reporting mode, 14.2.1
subplans, 4.4.1.1
adaptive query optimization, 4.4
adaptive plans, 4.4.1, 7.2.1, 7.3.1, 14.2.1, 17.2.2
controlling, 14.2.4
dynamic statistics, 10.3.2
adaptive statistics, 4.4.2
automatic reoptimization, 4.4.2.2
dynamic statistics, 4.4.2.1
SQL plan directives, 4.4.2.3, 13.3.1
ADDM, 1.4.2.1.1
ALTER INDEX statement, A.1.7
ALTER SESSION statement
examples, 18.4.2
antijoins, 9.1.3
applications
implementing, 2.2
automatic reoptimization, 4.4.2.2, 7.2.1, 10.4.1.2
cardinality misestimates, 4.4.2.2.1
performance feedback, 4.4.2.2.2
statistics feedback, 4.4.2.2.1
automatic statistics collection, 12.2
Automatic Tuning Optimizer, 1.4.2.1.2
Automatic Workload Repository (AWR), 1.4.2.1.1

B

big bang rollout strategy, 2.2.2
bind variables, 15.1.2
bitmap indexes
inlist iterator, 7.2.5.5.3
on joins, A.6
when to use, A.5
BOOLEAN data type, 8.3.3.1
broadcast
distribution value, 7.2.6, 7.3.2
BYTES column
PLAN_TABLE table, 7.2.6, 7.3.2

C

cardinality, 1.4.2.1.2, 4.2.2, 4.4.1, 4.4.1.2, 4.4.2.2.1, 4.4.2.3, 6.2.1, 8.4.1.1, 10.2.1, 10.4.1.1, 11.1, 11.1, 11.3
CARDINALITY column
PLAN_TABLE table, 7.2.6, 7.3.2
cartesian joins, 9.2.4
clusters, A.8, A.8
sorted hash, A.9
column group statistics, 10.4.1.1
column groups, 13.3.1, 13.3.1.2
columns
cardinality, 4.2.2
to index, A.1.3
compilation, SQL, 10.4, 10.4.1.1, 10.4.3
composite indexes, A.1.4
composite partitioning
examples of, 7.2.5.2
concurrent statistics gathering, 12.4.7.1, 12.4.7.3, Glossary
consistent mode
TKPROF, 18.4.4.1
constraints, A.1.10
COST column
PLAN_TABLE table, 7.2.6, 7.3.2
create_extended_statistics, 13.3.2.2, 13.3.2.2
current mode
TKPROF, 18.4.4.1
CURSOR_NUM column
TKPROF_TABLE table, 18.4.5.3
CURSOR_SHARING initialization parameter, 15.2
cursors, SQL, 3.1.1

D

data
modeling, 2.1.1
data blocks, 3.2.1
data dictionary cache, 3.1.1.3
data flow operator (DFO), 5.7.2
Data Pump
Export utility
statistics on system-generated columns names, 13.7.2
Import utility
copying statistics, 13.7.2
data skew, 11.1
data types
BOOLEAN, 8.3.3.1
database operations
composite, 1.4.2.2.2, 16.1
definition, 1.4.2.2.2, 16.1
simple, 1.4.2.2.2, 16.1
database operations, monitoring, 1.4.2.2.2, 16.1
composite, 16.1.2.2
composite operations, 16.1
creating database operations, 16.3
enabling with hints, 16.2.2
enabling with initialization parameters, 16.2.1
Enterprise Manager interface, 16.1.3.1
generating a report, 16.4
PL/SQL interface, 16.1.3.2
purpose, 16.1.1
real-time SQL, 16.1
simple operations, 16.1
DATE_OF_INSERT column
TKPROF_TABLE table, 18.4.5.3
DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter, 8.2.2.2, 8.2.2.2
DBMS_ADVISOR package, 21.1.1
DBMS_MONITOR package
end-to-end application tracing, 18.1.1.2
DBMS_SQLTUNE package
SQL Tuning Advisor, 19.1.3.1, 20.3.1.2.1
dbms_stats functions
create_extended_statistics, 13.3.2.2, 13.3.2.2
drop_extended_stats, 12.5.5, 13.3.1.6, 13.3.2.4
gather_table_stats, 13.3.2.2, 13.3.2.2
show_extended_stats_name, 13.3.1.5
DBMS_STATS package, 21.1.2.3
DBMS_XPLAN package
displaying plan table output, 6.4
DDL (data definition language)
processing of, 3.3
deadlocks, 3.1.1
debugging designs, 2.2.1
dedicated server, 3.1.1.3
dense keys, 5.7.2.1
dense grouping keys, 5.7.2.1
dense join keys, 5.7.2.1
density, histogram, 11.3.2
DEPTH column
TKPROF_TABLE table, 18.4.5.3
designs
debugging, 2.2.1
testing, 2.2.1
validating, 2.2.1
development environments, 2.2
DIAGNOSTIC_DEST initialization parameter, 18.4.1
disabled constraints, A.1.10
DISTRIBUTION column
PLAN_TABLE table, 7.2.6, 7.3.2
domain indexes
and EXPLAIN PLAN, 7.2.5.6
using, A.7
drop_extended_stats, 12.5.5, 13.3.1.6, 13.3.2.4
dynamic statistics, 4.4.2.1, 10.3.2, 10.4, 10.4.2, 12.4.6, 17.2.2
controlling, 13.1
process, 13.1.1
sampling levels, 13.1.1
when to use, 13.1.3

E

enabled constraints, A.1.10
endpoint repeat counts, in histograms, 11.6
end-to-end application tracing, 1.4.2.2.3
action and module names, 18.1.1.2
creating a service, 18.1.1.2
DBMS_APPLICATION_INFO package, 18.1.1.2
DBMS_MONITOR package, 18.1.1.2
enforced constraints, A.1.10
examples
ALTER SESSION statement, 18.4.2
EXPLAIN PLAN output, 18.4.4.10
SQL trace facility output, 18.4.4.10
EXECUTE_TASK procedure, 21.2.4
execution plans, 3.1.1.3
adaptive, 4.4.1, 7.2.1, 7.2.1, 7.3.1
examples, 18.4.3.1
overview of, 6.1
TKPROF, 18.4.3.1, 18.4.3.2
V$ views, 7.3.1
viewing with the utlxpls.sql script, 6.3
execution trees, 3.1.4
EXPLAIN PLAN statement
access paths, 8.2.4.2, 8.2.5.3
and domain indexes, 7.2.5.6
and full partition-wise joins, 7.2.5.4
and partial partition-wise joins, 7.2.5.3
and partitioned objects, 7.2.5
basic steps, 6.3
examples of output, 18.4.4.10
execution order of steps in output, 6.3
invoking with the TKPROF program, 18.4.3.2
PLAN_TABLE table, 6.2.6
restrictions, 6.2.5
scripts for viewing output, 6.3
viewing the output, 6.3
extended statistics, 10.2.2
extensions, 10.4.1

F

fixed objects
gathering statistics for, 12.1, 12.4.5
frequency histograms, 11.4
FULL hint, A.1.6
full outer joins, 9.3.2.4
full partition-wise joins, 7.2.5.4
full table scans, 10.2.3.1, 11.1
function-based indexes, A.2

G

gather_table_stats, 13.3.2.2, 13.3.2.2
global temporary tables, 10.2.4

H

hard parsing, 2.1.2, 3.1.1.3
hash
distribution value, 7.2.6, 7.3.2
hash clusters
sorted, A.9
hash joins, 9.2.2
cost-based optimization, 9.1.3
hash partitions, 7.2.5
examples of, 7.2.5.1
hashing, A.9
height-balanced histograms, 11.5
high-load SQL
tuning, 12.1.2.1.2, 20.3.1.2.1
hints, optimizer, 1.4.2.2.4
FULL, A.1.6
NO_INDEX, A.1.6
NO_MONITOR, 16.2.2
histograms, 11
cardinality algorithms, 11.3
data skew, 11.1
definition, 11
density, 11.3.2
endpoint numbers, 11.3.1
endpoint repeat counts, 11.6
endpoint values, 11.3.1
frequency, 11.4
height-balanced, 11.5
hybrid, 11.6
NDV, 11
nonpopular values, 11.3.2
popular values, 11.3.2
purpose, 11.1
top frequency, 11.4
hybrid histograms, 11.6

I

ID column
PLAN_TABLE table, 7.2.6, 7.3.2
incremental statistics, 12.4.8.4, 12.4.8.5
index clustering factor, 10.2.3.1
INDEX hint, A.1.6
index statistics, 10.2.3
index clustering factor, 10.2.3.1
INDEX_COMBINE hint, A.1.6
indexes
avoiding the use of, A.1.6
bitmap, A.5
choosing columns for, A.1.3
composite, A.1.4
domain, A.7
dropping, A.1.1
enforcing uniqueness, A.1.9
ensuring the use of, A.1.5
function-based, A.2
improving selectivity, A.1.4
low selectivity, A.1.6
modifying values of, A.1.3
non-unique, A.1.9
rebuilding, A.1.7
re-creating, A.1.7
scans, 8.3.3
selectivity of, A.1.3
initialization parameters
DIAGNOSTIC_DEST, 18.4.1
INLIST ITERATOR operation, 7.2.5.5
inlists, 7.2.5.5
in-memory aggregation, 5.7
controls, 5.7.3
how it works, 5.7.2
purpose, 5.7.1
in-memory table scans, 8.2.5
controls, 8.2.5.2
example, 8.2.5.3
when chosen, 8.2.5.1
I/O
reducing, A.1.4

J

joins
antijoins, 9.1.3
cartesian, 9.2.4
full outer, 9.3.2.4
hash, 9.2.2
nested loops, 3.1.4, 9.2.1.1
nested loops and cost-based optimization, 9.1.3
order, 14.3.2
outer, 9.3.2
partition-wise
examples of full, 7.2.5.4
examples of partial, 7.2.5.3
full, 7.2.5.4
semijoins, 9.1.3
sort-merge and cost-based optimization, 9.1.3, 9.1.3

K

key vectors, 5.7.2.1

L

latches
parsing and, 3.1.1.3
library cache, 3.1.1.3
library cache miss, 3.1.1.3
locks
deadlocks, 3.1.1

M

manual plan capture, 23.1.2.2
MAX_DUMP_FILE_SIZE initialization parameter
SQL Trace, 18.4.1
modeling
data, 2.1.1
multiversion read consistency, 3.2.1

N

NDV, 11
nested loops joins, 9.2.1.1
cost-based optimization, 9.1.3
NO_INDEX hint, A.1.6
nonpopular values, in histograms, 11.3.2
NOT IN subquery, 9.1.3

O

OBJECT_INSTANCE column
PLAN_TABLE table, 7.2.6, 7.3.2
OBJECT_NAME column
PLAN_TABLE table, 7.2.6, 7.3.2
OBJECT_NODE column
PLAN_TABLE table, 7.2.6, 7.3.2
OBJECT_OWNER column
PLAN_TABLE table, 7.2.6, 7.3.2
OBJECT_TYPE column
PLAN_TABLE table, 7.2.6, 7.3.2
OPERATION column
PLAN_TABLE table, 7.2.6, 7.2.6, 7.3.2, 7.3.2
optimization, SQL, 4.1.2
optimizer
adaptive, 7.2.1
definition, 4.1
environment, 3.1.1.3
estimator, 4.2.2
execution, 3.1.4
goals, 14.2.3
purpose of, 4.1.1
row sources, 3.1.3, 3.1.3
statistics, 14.3
throughput, 14.2.3
OPTIMIZER column
PLAN_TABLE, 7.2.6, 7.3.2
optimizer environment, 3.1.1.3
optimizer hints, 1.4.2.2.4
FULL, A.1.6
MONITOR, 16.2.2
NO_INDEX, A.1.6
optimizer statistics
adaptive statistics, 4.4.2
automatic collection, 12.2
bulk loads, 10.3.3
cardinality, 11.1
collection, 12.1
column group, 10.4.1.1
column groups, 13.3.1
dynamic, 10.3.2, 10.4, 12.4.6, 13.1, 17.2.2
extended, 10.2.2
gathering concurrently, 12.4.7, 12.4.7.1, Glossary
gathering in parallel, 12.4.7.3
histograms, 11
incremental, 12.4.8.4, 12.4.8.5
index, 10.2.3
pluggable databases and, 12.2
preferences, 12.3.1
SQL plan directives, 10.4.1, 13.3.1
system, 12.5.1
temporary, 10.2.4
optimizer statistics collection, 12.1
optimizer statistics collectors, 4.4.1.1
OPTIONS column
PLAN_TABLE table, 7.2.6, 7.3.2
OTHER column
PLAN_TABLE table, 7.2.6, 7.3.2
OTHER_TAG column
PLAN_TABLE table, 7.2.6, 7.3.2
outer joins, 9.3.2

P

packages
DBMS_ADVISOR, 21.1.1
DBMS_STATS, 21.1.2.3
parallel execution
gathering statistics, 12.4.7.3
PARENT_ID column
PLAN_TABLE table, 7.2.6, 7.3.2
parse calls, 3.1.1
parsing, SQL, 3.1.1
hard, 2.1.2
hard parse, 3.1.1.3
parse trees, 3.1.4
soft, 2.1.2
soft parse, 3.1.1.3
partition maintenance operations, 12.4.8.4
PARTITION_ID column
PLAN_TABLE table, 7.2.6, 7.3.2
PARTITION_START column
PLAN_TABLE table, 7.2.6, 7.3.2
PARTITION_STOP column
PLAN_TABLE table, 7.2.6, 7.3.2
partitioned objects
and EXPLAIN PLAN statement, 7.2.5
partitioning
distribution value, 7.2.6, 7.3.2
examples of, 7.2.5.1
examples of composite, 7.2.5.2
hash, 7.2.5
range, 7.2.5
start and stop columns, 7.2.5.1
partition-wise joins
full, 7.2.5.4
full, and EXPLAIN PLAN output, 7.2.5.4
partial, and EXPLAIN PLAN output, 7.2.5.3
performance
viewing execution plans, 6.3
PLAN_TABLE table
BYTES column, 7.2.6, 7.3.2
CARDINALITY column, 7.2.6, 7.3.2
COST column, 7.2.6, 7.3.2
creating, 6.2.6, 6.2.6
displaying, 6.4
DISTRIBUTION column, 7.2.6, 7.3.2
ID column, 7.2.6, 7.3.2
OBJECT_INSTANCE column, 7.2.6, 7.3.2
OBJECT_NAME column, 7.2.6, 7.3.2
OBJECT_NODE column, 7.2.6, 7.3.2
OBJECT_OWNER column, 7.2.6, 7.3.2
OBJECT_TYPE column, 7.2.6, 7.3.2
OPERATION column, 7.2.6, 7.3.2
OPTIMIZER column, 7.2.6, 7.3.2
OPTIONS column, 7.2.6, 7.3.2
OTHER column, 7.2.6, 7.3.2
OTHER_TAG column, 7.2.6, 7.3.2
PARENT_ID column, 7.2.6, 7.3.2
PARTITION_ID column, 7.2.6, 7.3.2
PARTITION_START column, 7.2.6, 7.3.2
PARTITION_STOP column, 7.2.6, 7.3.2
POSITION column, 7.2.6, 7.3.2
REMARKS column, 7.2.6, 7.3.2
SEARCH_COLUMNS column, 7.2.6, 7.3.2
STATEMENT_ID column, 7.2.6, 7.3.2
TIMESTAMP column, 7.2.6, 7.3.2
pluggable databases
automatic optimizer statistics collection, 12.2
manageability features, 12.2
SQL management base, 23.1.5.1
SQL Tuning Advisor, 20.1.1, 21.1
SQL tuning sets, 19.1
popular values, in histograms, 11.3.2
POSITION column
PLAN_TABLE table, 7.2.6, 7.3.2
PRIMARY KEY constraint, A.1.9
private SQL areas
parsing and, 3.1.1
processes
dedicated server, 3.1.1.3
programming languages, 2.2

Q

queries
avoiding the use of indexes, A.1.6
ensuring the use of indexes, A.1.5

R

range
distribution value, 7.2.6, 7.3.2
examples of partitions, 7.2.5.1
partitions, 7.2.5
Real-Time Database Operations, 1.4.2.2.2
Real-Time SQL Monitoring, 1.4.2.2.2, 16.1
REBUILD clause, A.1.7
recursive calls, 18.4.4.5
recursive SQL, 3.3, 10.3.2, 10.4.3
REMARKS column
PLAN_TABLE table, 7.2.6, 7.3.2
reoptimization, automatic, 4.4.2.2, 7.2.1, 10.4.1.2
cardinality misestimates, 4.4.2.2.1
performance feedback, 4.4.2.2.2
statistics feedback, 4.4.2.2.1
result sets, SQL, 3.1.3, 3.2
rollout strategies
big bang approach, 2.2.2
trickle approach, 2.2.2
round-robin
distribution value, 7.2.6, 7.3.2
row source generation, 3.1.3
rowids
table access by, 8.2.3
rows
row set, 3.1.3
row source, 3.1.3
rowids used to locate, 8.2.3

S

SAMPLE BLOCK clause, 8.2.4
SAMPLE clause, 8.2.4
sample table scans, 8.2.4
scans
in-memory, 8.2.5
sample table, 8.2.4
SEARCH_COLUMNS column
PLAN_TABLE table, 7.2.6, 7.3.2
SELECT statement
SAMPLE clause, 8.2.4
selectivity
creating indexes, A.1.3
improving for an index, A.1.4
indexes, A.1.6
semijoins, 9.1.3
shared pool, 10.4.1.1
parsing check, 3.1.1.3
shared SQL areas, 3.1.1.3
show_extended_stats_name, 13.3.1.5
soft parsing, 2.1.2, 3.1.1.3
sort merge joins
cost-based optimization, 9.1.3
SQL
execution, 3.1.4
optimization, 4.1.2
parsing, 3.1.1
recursive, 3.3
result sets, 3.1.3, 3.2
stages of processing, 8.2, 8.3
SQL Access Advisor, 1.4.2.1.3, 21.1.1, 21.1.1
constants, 21.6.3
EXECUTE_TASK procedure, 21.2.4
SQL compilation, 10.4, 10.4.1.1, 10.4.3
SQL management base
pluggable databases and, 23.1.5.1
SQL parsing
parse calls, 3.1.1
SQL Performance Analyzer, 1.4.2.1.5
SQL plan baselines, 1.4.2.1.4, 23.1
displaying, 23.3
SQL plan capture, 23.1.2
SQL plan directives, 4.4.2.3, 10.4.1, 13.3.1
cardinality misestimates, 10.4.1.1
managing, 13.10
SQL plan management, 1.4.2.1.4
automatic plan capture, 23.1.2.1
introduction, 23.1
manual plan capture, 23.1.2.2
plan capture, 23.1
plan evolution, 23.1, 23.1.4
plan selection, 23.1, 23.1.3
plan verification, 23.1.4
purpose, 23.1.1
SQL plan baselines, 23.1
SQL plan capture, 23.1.2
SQL processing
semantic check, 3.1.1.2
shared pool check, 3.1.1.3
stages, 3.1
syntax check, 3.1.1.1
SQL profiles, 1.4.2.1.2
and SQL plan baselines, 23.1.1.2
SQL statements
avoiding the use of indexes, A.1.6
ensuring the use of indexes, A.1.5
execution plans of, 6.1
modifying indexed data, A.1.3
SQL Test Case Builder, 17.1
SQL test cases, 17
SQL trace facility, 1.4.2.2.3, 18.3.1, 18.4.3
example of output, 18.4.4.10
output, 18.4.4.1
statement truncation, 18.4.4.7
trace files, 18.4.1
SQL trace files, 1.4.2.2.3
SQL tuning
definition, 1.1
introduction, 1
tools overview, 1.4.2
SQL Tuning Advisor, 1.4.2.1.2
administering with APIs, 19.1.3.1, 20.3.1.2.1
input sources, 20.1.2.2, 21.1.2.1
pluggable databases and, 20.1.1, 21.1
using, 12.1.2.1.2, 20.3.1.2.1
SQL tuning sets
pluggable databases and, 19.1
SQL Tuning Sets
managing with APIs, 19.1
SQL, recursive, 10.4.3, 13.1
SQL_STATEMENT column
TKPROF_TABLE, 18.4.5.3
SQLTUNE_CATEGORY initialization parameter
determining the SQL Profile category, 22.4
start columns
in partitioning and EXPLAIN PLAN statement, 7.2.5.1
STATEMENT_ID column
PLAN_TABLE table, 7.2.6, 7.3.2
statistics, optimizer, 4.1.2, 14.3
adaptive statistics, 4.4.2
automatic collection, 12.2
bulk loads, 10.3.3
cardinality, 11.1
collection, 12.1
column group, 10.4.1.1
column groups, 13.3.1, 13.3.1.2
dynamic, 10.3.2, 10.4, 12.4.6, 13.1, 17.2.2
dynamic statistics, 10.4.2
exporting and importing, 13.7
extended, 10.2.2
gathering concurrently, 12.4.7
incremental, 12.4.8.4, 12.4.8.5
index, 10.2.3
limitations on restoring previous versions, 13.5.2
preferences, 12.3.1
system, 10.2.5, 12.5, 12.5.1
user-defined, 10.2.6
stop columns
in partitioning and EXPLAIN PLAN statement, 7.2.5.1
subqueries
NOT IN, 9.1.3
system statistics, 12.5.1

T

table statistics, 10.2.1
temporary tables, global, 10.2.4
testing designs, 2.2.1
throughput
optimizer goal, 14.2.3
TIMED_STATISTICS initialization parameter
SQL Trace, 18.4.1
TIMESTAMP column
PLAN_TABLE table, 7.2.6, 7.3.2
TKPROF program, 18.3.2, 18.4.3
editing the output SQL script, 18.4.5.2
example of output, 18.4.4.10
generating the output SQL script, 18.4.5.1
row source operations, 18.4.4.2
syntax, 18.4.3.2
using the EXPLAIN PLAN statement, 18.4.3.2
wait event information, 18.4.4.3
TKPROF_TABLE, 18.4.5.3, 18.4.5.3
top frequency histograms, 11.4
TRACEFILE_IDENTIFIER initialization parameter
identifying trace files, 18.4.1
tracing
consolidating with trcsess, 18.2
identifying files, 18.4.1
trcsess utility, 18.2
trickle rollout strategy, 2.2.2
tuning
logical structure, A.1.1

U

UNIQUE constraint, A.1.9
uniqueness, A.1.9
USER_DUMP_DEST initialization parameter
SQL Trace, 18.4.1
USER_ID column, TKPROF_TABLE, 18.4.5.3
user_stat_extensions, 13.3.1.5
UTLXPLP.SQL script
displaying plan table output, 6.4
for viewing EXPLAIN PLANs, 6.3
UTLXPLS.SQL script
displaying plan table output, 6.4
for viewing EXPLAIN PLANs, 6.3
used for displaying EXPLAIN PLANs, 6.3

V

V$SQL_PLAN view
using to display execution plan, 6.2.4.1
V$SQL_PLAN_STATISTICS view
using to display execution plan statistics, 6.2.4.1
V$SQL_PLAN_STATISTICS_ALL view
using to display execution plan information, 6.2.4.1
validating designs, 2.2.1

W

workloads, 2.2.1

24 Migrating Stored Outlines to SQL Plan Baselines

This chapter explains the concepts and tasks relating to stored outline migration. This chapter contains the following topics:


Note:

Starting in Oracle Database 12c, stored outlines are deprecated. Use SQL plan baselines instead, and migrate existing stored outlines as described in this chapter.

24.1 About Stored Outline Migration

A stored outline is a set of hints for a SQL statement. The hints direct the optimizer to choose a specific plan for the statement. A stored outline is a legacy technique for providing plan stability.

Stored outline migration is the user-initiated process of converting stored outlines to SQL plan baselines. A SQL plan baseline is a set of plans proven to provide optimal performance.

This section contains the following topics:

24.1.1 Purpose of Stored Outline Migration

This section assumes that you rely on stored outlines to maintain plan stability and prevent performance regressions. The goal of this section is to provide a convenient method to safely migrate from stored outlines to SQL plan baselines. After the migration, you can maintain the same plan stability that you had using stored outlines while being able to use the more advanced features provided by the SQL Plan Management framework.

Specifically, the section explains how to address the following problems:

  • Stored outlines cannot automatically evolve over time. Consequently, a stored outline may be optimal when you create it, but become a suboptimal plan after a database change, leading to performance degradation.

  • Hints in a stored outline can become invalid, as with an index hint on a dropped index. In such cases, the database still uses the outlines but excludes the invalid hints, producing a plan that is often worse than the original plan or the current best-cost plan generated by the optimizer.

  • For a SQL statement, the optimizer can only choose the plan defined in the stored outline in the currently specified category. The optimizer cannot choose from other stored outlines in different categories or the current cost-based plan even if they improve performance.

  • Stored outlines are a reactive tuning technique, which means that you only use a stored outline to address a performance problem after it has occurred. For example, you may implement a stored outline to correct the plan of a SQL statement that became high-load. In this case, you used stored outlines instead of proactively tuning the statement before it became high-load.

The stored outline migration PL/SQL API helps solve the preceding problems in the following ways:

  • SQL plan baselines enable the optimizer to use the same optimal plan and allow this plan to evolve over time.

    For a specified SQL statement, you can add new plans as SQL plan baselines after they are verified not to cause performance regressions.

  • SQL plan baselines prevent plans from becoming unreproducible because of invalid hints.

If hints stored in a plan baseline become invalid, then the plan may not be reproducible by the optimizer. In this case, the optimizer selects an alternative reproducible plan baseline or the current best-cost plan generated by the optimizer.

  • For a specific SQL statement, the database can maintain multiple plan baselines.

    The optimizer can choose from a set of optimal plans for a specific SQL statement instead of being restricted to a single plan per category, as required by stored outlines.

24.1.2 How Stored Outline Migration Works

This section explains how the database migrates stored outlines to SQL plan baselines. This information is important for performing the task of migrating stored outlines.

24.1.2.1 Stages of Stored Outline Migration

The following graphic shows the main stages in stored outline migration:

Description of pfgrf231.gif follows

The migration process has the following stages:

  1. The user invokes a function that specifies which outlines to migrate.

  2. The database processes the outlines as follows:

    1. The database copies information in the outline needed by the plan baseline.

      The database copies it directly or calculates it based on information in the outline. For example, the text of the SQL statement exists in both schemas, so the database can copy the text from outline to baseline.

    2. The database reparses the hints to obtain information not in the outline.

      The plan hash value and plan cost cannot be derived from the existing information in the outline, which necessitates reparsing the hints.

    3. The database creates the baselines.

  3. The database obtains missing information when it chooses the SQL plan baseline for the first time to execute the SQL statement.

    The compilation environment and execution statistics are only available during execution when the plan baseline is parsed and compiled.

The migration is complete only after the preceding phases complete.

24.1.2.2 Outline Categories and Baseline Modules

An outline is a set of hints, whereas a SQL plan baseline is a set of plans. Because they are different technologies, some functionality of outlines does not map exactly to functionality of baselines. For example, a single SQL statement can have multiple outlines, each of which is in a different outline category, but the only category that currently exists for baselines is DEFAULT.

The equivalent of a category for an outline is a module for a SQL plan baseline. Table 24-1 explains how outline categories map to modules.

Table 24-1 Outline Categories

Concept | Description | Default Value

Outline Category

Specifies a user-defined grouping for a set of stored outlines.

You can use categories to maintain different stored outlines for a SQL statement. For example, a single statement can have an outline in the OLTP category and the DW category.

Each SQL statement can have one or more stored outlines. Each stored outline is in one and only one outline category. A statement can have multiple stored outlines in different categories, but only one stored outline exists for each category of each statement.

During migration, the database maps each outline category to a SQL plan baseline module.

Default value: DEFAULT

Baseline Module

Specifies a high-level function being performed.

A SQL plan baseline can belong to one and only one module.

After an outline is migrated to a SQL plan baseline, the module name defaults to the outline category name.

Baseline Category

Only one SQL plan baseline category exists. This category is named DEFAULT. During stored outline migration, the module name of the SQL plan baseline is set to the category name of the stored outline.

A statement can have multiple SQL plan baselines in the DEFAULT category.

Default value: DEFAULT


When migrating stored outlines to SQL plan baselines, Oracle Database maps every outline category to a SQL plan baseline module with the same name. As shown in the following diagram, the outline category OLTP is mapped to the baseline module OLTP. After migration, DEFAULT is a super-category that contains all SQL plan baselines.

Description of pfgrf230.gif follows

24.1.3 User Interface for Stored Outline Migration

You can use the DBMS_SPM package to perform the stored outline migration. Table 24-2 describes the relevant functions in this package.

Table 24-2 DBMS_SPM Functions Relating to Stored Outline Migration

DBMS_SPM Function | Description

MIGRATE_STORED_OUTLINE

Migrates existing stored outlines to plan baselines.

Use either of the following formats:

  • Specify outline name, SQL text, outline category, or all stored outlines.

  • Specify a list of outline names.

ALTER_SQL_PLAN_BASELINE

Changes an attribute of a single plan or all plans associated with a SQL statement.

DROP_MIGRATED_STORED_OUTLINE

Drops stored outlines that have been migrated to SQL plan baselines.

The function finds stored outlines marked as MIGRATED in the DBA_OUTLINES view, and then drops these outlines from the database.
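
For example, the first MIGRATE_STORED_OUTLINE format in Table 24-2 accepts an attribute name and value. The following is only a hedged sketch of a call that migrates a single outline by name; the outline name STMT01 is hypothetical, so substitute a name from DBA_OUTLINES:

DECLARE
  my_report CLOB;
BEGIN
  -- Migrate one stored outline, identified by name (STMT01 is hypothetical)
  my_report := DBMS_SPM.MIGRATE_STORED_OUTLINE(
    attribute_name  => 'outline_name',
    attribute_value => 'STMT01');
END;
/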


You can control stored outline and plan baseline behavior with initialization and session parameters. Table 24-3 describes the relevant parameters. See Table 24-5 and Table 24-6 for an explanation of how these parameter settings interact.

Table 24-3 Parameters Relating to Stored Outline Migration

Initialization or Session Parameter | Description | Parameter Type

CREATE_STORED_OUTLINES

Determines whether Oracle Database automatically creates and stores an outline for each query submitted during the session.

Initialization parameter

OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES

Enables or disables the automatic recognition of repeatable SQL statements and the generation of SQL plan baselines for these statements.

Initialization parameter

USE_STORED_OUTLINES

Determines whether the optimizer uses stored outlines to generate execution plans.

Note: This is a session parameter, not an initialization parameter.

Session

OPTIMIZER_USE_SQL_PLAN_BASELINES

Enables or disables the use of SQL plan baselines stored in SQL Management Base.

Initialization parameter
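
Before changing these settings, you may want to confirm the current values. The following is a minimal SQL*Plus sketch for the SQL plan baseline initialization parameters; the stored outline settings are session-level in nature and may not be visible this way:

SQL> SHOW PARAMETER optimizer_capture_sql_plan_baselines
SQL> SHOW PARAMETER optimizer_use_sql_plan_baselines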


You can use database views to access information relating to stored outline migration. Table 24-4 describes the following main views.

Table 24-4 Views Relating to Stored Outline Migration

View | Description

DBA_OUTLINES

Describes all stored outlines in the database.

The MIGRATED column is important for outline migration and shows one of the following values: NOT-MIGRATED or MIGRATED. When the value is MIGRATED, the stored outline has been migrated to a plan baseline and is not usable.

DBA_SQL_PLAN_BASELINES

Displays information about the SQL plan baselines currently created for specific SQL statements.

The ORIGIN column indicates how the plan baseline was created. The value STORED-OUTLINE indicates the baseline was created by migrating an outline.
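
For example, the following query (an illustration using only the columns discussed in this chapter) lists the plan baselines that were created by migrating stored outlines:

SELECT SQL_HANDLE, PLAN_NAME, MODULE
FROM   DBA_SQL_PLAN_BASELINES
WHERE  ORIGIN = 'STORED-OUTLINE';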



See Also:


24.1.4 Basic Steps in Stored Outline Migration

This section explains the basic steps in using the PL/SQL API to perform stored outline migration. The basic steps are as follows:

  1. Prepare for stored outline migration.

    Review the migration prerequisites and determine how you want the migrated plan baselines to behave.

    See "Preparing for Stored Outline Migration".

  2. Do one of the following:

  3. Perform post-migration confirmation and cleanup.

    See "Performing Follow-Up Tasks After Stored Outline Migration".

24.2 Preparing for Stored Outline Migration

This section explains how to prepare for stored outline migration.

To prepare for stored outline migration: 

  1. Connect SQL*Plus to the database with SYSDBA privileges or the EXECUTE privilege on the DBMS_SPM package.

    For example, do the following to use operating system authentication to log on to a database as SYS:

    % sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    
  2. Query the stored outlines in the database.

    The following example queries all stored outlines that have not been migrated to SQL plan baselines:

    SELECT NAME, CATEGORY, SQL_TEXT
    FROM   DBA_OUTLINES
    WHERE  MIGRATED = 'NOT-MIGRATED';
    
  3. Determine which stored outlines meet the following prerequisites for migration eligibility:

    • The statement must not be a run-time INSERT AS SELECT statement.

    • The statement must not reference a remote object.

    • The outline must not be a private stored outline.

  4. Decide whether to migrate all outlines, specified stored outlines, or outlines belonging to a specified outline category.

    If you do not decide to migrate all outlines, then identify the outlines or categories that you intend to migrate.

  5. Decide whether the stored outlines migrated to SQL plan baselines use fixed plans or nonfixed plans:

    • Fixed plans

      A fixed plan is frozen. If a fixed plan is reproducible using the hints stored in the plan baseline, then the optimizer always chooses the lowest-cost fixed plan baseline over plan baselines that are not fixed. Essentially, a fixed plan baseline acts as a stored outline with valid hints.

      A fixed plan is reproducible when the database can parse the statement based on the hints stored in the plan baseline and create a plan with the same plan hash value as the one in the plan baseline. If one or more of the hints become invalid, then the database may not be able to create a plan with the same plan hash value. In this case, the plan is nonreproducible.

      If a fixed plan cannot be reproduced when parsed using its hints, then the optimizer chooses a different plan, which can be either of the following:

      • Another plan for the SQL plan baseline

      • The current cost-based plan created by the optimizer

      In some cases, a performance regression occurs because of the different plan, requiring SQL tuning.

    • Nonfixed plans

      If a plan baseline does not contain fixed plans, then SQL Plan Management considers the plans equally when picking a plan for a SQL statement.

  6. Before beginning the actual migration, ensure that the Oracle database meets the following prerequisites:

    • The database must be Enterprise Edition.

    • The database must be open and must not be in a suspended state.

    • The database must not be in restricted access (DBA), read-only, or migrate mode.

    • Oracle Call Interface (OCI) must be available.


See Also:


24.3 Migrating Outlines to Utilize SQL Plan Management Features

The goals of this task are as follows:

  • To allow SQL Plan Management to select from all plans in a plan baseline for a SQL statement instead of applying the same fixed plan after migration

  • To allow the SQL plan baseline to evolve in the face of database changes by adding new plans to the baseline

Assumptions

This tutorial assumes the following:

  • You migrate all outlines.

    To migrate specific outlines, see Oracle Database PL/SQL Packages and Types Reference for details about the DBMS_SPM.MIGRATE_STORED_OUTLINE function.

  • You want the module names of the baselines to be identical to the category names of the migrated outlines.

  • You do not want the SQL plans to be fixed.

    By default, generated plans are not fixed and SQL Plan Management considers all plans equally when picking a plan for a SQL statement. This situation permits the advanced feature of plan evolution to capture new plans for a SQL statement, verify their performance, and accept these new plans into the plan baseline.

To migrate stored outlines to SQL plan baselines: 

  1. Connect SQL*Plus to the database with the appropriate privileges.

  2. Call PL/SQL function MIGRATE_STORED_OUTLINE.

    The following sample PL/SQL block migrates all stored outlines to nonfixed plan baselines:

    DECLARE
      my_report CLOB;
    BEGIN
      my_report := DBMS_SPM.MIGRATE_STORED_OUTLINE( attribute_name => 'all' );
    END;
    /
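
After the migration, you can later evolve the baselines as new plans are found for these statements. The following is only a hedged sketch of one way to do this with the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function, using the sample SQL handle that appears later in this chapter:

DECLARE
  evolve_report CLOB;
BEGIN
  -- Verify and accept any unaccepted plans for one statement
  -- (the SQL handle below is the sample value used later in this chapter)
  evolve_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
    sql_handle => 'SYS_SQL_f44779f7089c8fab',
    verify     => 'YES',
    commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(evolve_report, 4000));
END;
/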
    

See Also:


24.4 Migrating Outlines to Preserve Stored Outline Behavior

The goal of this task is to migrate stored outlines to SQL plan baselines and preserve the original behavior of the stored outlines by creating fixed plan baselines. A fixed plan has higher priority over other plans for the same SQL statement. If a plan is fixed, then the plan baseline cannot be evolved. The database does not add new plans to a plan baseline that contains a fixed plan.

Assumptions

This tutorial assumes the following:

  • You want to migrate only the stored outlines in the category named firstrow.

    See Oracle Database PL/SQL Packages and Types Reference for syntax and semantics of the DBMS_SPM.MIGRATE_STORED_OUTLINE function.

  • You want the module names of the baselines to be identical to the category names of the migrated outlines.

To migrate stored outlines to plan baselines: 

  1. Connect SQL*Plus to the database with the appropriate privileges.

  2. Call PL/SQL function MIGRATE_STORED_OUTLINE.

    The following sample PL/SQL block migrates stored outlines in the category firstrow to fixed baselines:

    DECLARE
      my_report CLOB;
    BEGIN
      my_report := DBMS_SPM.MIGRATE_STORED_OUTLINE( 
        attribute_name => 'category', 
        attribute_value => 'firstrow',
        fixed => 'YES' );
    END;
    /
    

    After migration, the SQL plan baselines are in module firstrow and category DEFAULT.


See Also:


24.5 Performing Follow-Up Tasks After Stored Outline Migration

The goals of this task are as follows:

  • To configure the database to use plan baselines instead of stored outlines for stored outlines that have been migrated to SQL plan baselines

  • To create SQL plan baselines instead of stored outlines for future SQL statements

  • To drop the stored outlines that have been migrated to SQL plan baselines

This section explains how to set initialization parameters relating to stored outlines and plan baselines. The OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES and CREATE_STORED_OUTLINES initialization parameters determine how and when the database creates stored outlines and SQL plan baselines. Table 24-5 explains the interaction between these parameters.

Table 24-5 Creation of Outlines and Baselines

CREATE_STORED_OUTLINES Initialization Parameter | OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES Initialization Parameter | Database Behavior

false | false

When executing a SQL statement, the database does not create stored outlines or SQL plan baselines.

false | true

The automatic recognition of repeatable SQL statements and the generation of SQL plan baselines for these statements is enabled. When executing a SQL statement, the database creates only new SQL plan baselines (if they do not exist) with the category name DEFAULT for the statement.

true | false

Oracle Database automatically creates and stores an outline for each query submitted during the session. When executing a SQL statement, the database creates only new stored outlines (if they do not exist) with the category name DEFAULT for the statement.

category | false

When executing a SQL statement, the database creates only new stored outlines (if they do not exist) with the specified category name for the statement.

true | true

Oracle Database automatically creates and stores an outline for each query submitted during the session. The automatic recognition of repeatable SQL statements and the generation of SQL plan baselines for these statements is also enabled.

When executing a SQL statement, the database creates both stored outlines and SQL plan baselines with the category name DEFAULT.

category | true

Oracle Database automatically creates and stores an outline for each query submitted during the session. The automatic recognition of repeatable SQL statements and the generation of SQL plan baselines for these statements is also enabled.

When executing a SQL statement, the database creates stored outlines with the specified category name and SQL plan baselines with the category name DEFAULT.


The USE_STORED_OUTLINES session parameter (it is not an initialization parameter) and OPTIMIZER_USE_SQL_PLAN_BASELINES initialization parameter determine how the database uses stored outlines and plan baselines. Table 24-6 explains how these parameters interact.

Table 24-6 Use of Stored Outlines and SQL Plan Baselines

USE_STORED_OUTLINES Session Parameter | OPTIMIZER_USE_SQL_PLAN_BASELINES Initialization Parameter | Database Behavior

false | false

When choosing a plan for a SQL statement, the database does not use stored outlines or plan baselines.

false | true

When choosing a plan for a SQL statement, the database uses only SQL plan baselines.

true | false

When choosing a plan for a SQL statement, the database uses stored outlines with the category name DEFAULT.

category | false

When choosing a plan for a SQL statement, the database uses stored outlines with the specified category name.

If a stored outline with the specified category name does not exist, then the database uses a stored outline in the DEFAULT category if it exists.

true | true

When choosing a plan for a SQL statement, stored outlines take priority over plan baselines.

If a stored outline with the category name DEFAULT exists for the statement and is applicable, then the database applies the stored outline. Otherwise, the database uses SQL plan baselines. However, if the stored outline has the property MIGRATED, then the database does not use the outline and uses the corresponding SQL plan baseline instead (if it exists).

category | true

When choosing a plan for a SQL statement, stored outlines take priority over plan baselines.

If a stored outline with the specified category name or the DEFAULT category exists for the statement and is applicable, then the database applies the stored outline. Otherwise, the database uses SQL plan baselines. However, if the stored outline has the property MIGRATED, then the database does not use the outline and uses the corresponding SQL plan baseline instead (if it exists).


Assumptions

This tutorial assumes the following:

  • You have completed the basic steps in the stored outline migration (see "Basic Steps in Stored Outline Migration").

  • Some stored outlines may have been created before Oracle Database 10g.

    Hints in releases before Oracle Database 10g use a local hint format. After migration, hints stored in a plan baseline use the global hints format introduced in Oracle Database 10g.

To place the database in the proper state after the migration: 

  1. Connect SQL*Plus to the database with the appropriate privileges, and then check that SQL plan baselines have been created as the result of migration.

    Ensure that the plans are enabled and accepted. For example, enter the following query (partial sample output included):

    SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE
    FROM   DBA_SQL_PLAN_BASELINES;
    
    SQL_HANDLE                     PLAN_NAME  ORIGIN         ENA ACC FIX MODULE
    ------------------------------ ---------- -------------- --- --- --- ------
    SYS_SQL_f44779f7089c8fab       STMT01     STORED-OUTLINE YES YES NO  DEFAULT
    .
    .
    .
    
  2. Optionally, change the attributes of the SQL plan baselines.

    For example, the following statement changes the status of the baseline for the specified SQL statement to nonfixed:

    DECLARE
      v_cnt PLS_INTEGER;
    BEGIN 
      v_cnt := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(               
                               sql_handle=>'SYS_SQL_f44779f7089c8fab', 
                               attribute_name=>'FIXED', 
                               attribute_value=>'NO');
      DBMS_OUTPUT.PUT_LINE('Plans altered: ' || v_cnt);
    END;
    /
    
  3. Check the status of the original stored outlines.

    For example, enter the following query (partial sample output included):

    SELECT NAME, OWNER, CATEGORY, USED, MIGRATED 
    FROM   DBA_OUTLINES
    ORDER BY NAME;
    
    NAME       OWNER      CATEGORY   USED   MIGRATED
    ---------- ---------- ---------- ------ ------------
    STMT01     SYS        DEFAULT    USED   MIGRATED
    STMT02     SYS        DEFAULT    USED   MIGRATED
    .
    .
    .
    
  4. Drop all stored outlines that have been migrated to SQL plan baselines.

    For example, the following statement drops all stored outlines with status MIGRATED in DBA_OUTLINES:

    DECLARE
      v_cnt PLS_INTEGER;
    BEGIN 
      v_cnt := DBMS_SPM.DROP_MIGRATED_STORED_OUTLINE();
      DBMS_OUTPUT.PUT_LINE('Migrated stored outlines dropped: ' || v_cnt);
    END;
    /
    
  5. Set initialization parameters so that:

    • When executing a SQL statement, the database creates plan baselines but does not create stored outlines.

    • The database only uses stored outlines when the equivalent SQL plan baselines do not exist.

    For example, the following SQL statements instruct the database to create SQL plan baselines instead of stored outlines when a SQL statement is executed. The example also instructs the database to apply a stored outline in category allrows or DEFAULT only if it exists and has not been migrated to a SQL plan baseline. In other cases, the database applies SQL plan baselines instead.

    ALTER SYSTEM 
      SET CREATE_STORED_OUTLINES = false SCOPE = BOTH;
    
    ALTER SYSTEM 
      SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = true SCOPE = BOTH;
    
    ALTER SYSTEM 
       SET OPTIMIZER_USE_SQL_PLAN_BASELINES = true SCOPE = BOTH;
    
    ALTER SESSION
       SET USE_STORED_OUTLINES = allrows;
    

See Also:


Joins

9 Joins

This chapter contains the following topics:

9.1 About Joins

A join combines the output from exactly two row sources, such as tables or views, and returns one row source. The returned row source is the data set.

A join is characterized by multiple tables in the WHERE (non-ANSI) or FROM ... JOIN (ANSI) clause of a SQL statement. Whenever multiple tables exist in the FROM clause, Oracle Database performs a join.

A join condition compares two row sources using an expression. The join condition defines the relationship between the tables. If the statement does not specify a join condition, then the database performs a Cartesian join (see "Cartesian Joins"), matching every row in one table with every row in the other table.
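
For example, the following two queries express the same join of the hr sample schema tables in non-ANSI and ANSI syntax; removing the WHERE clause from the first query would cause a Cartesian join:

SELECT e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id;

SELECT e.last_name, d.department_name
FROM   employees e JOIN departments d
ON     e.department_id = d.department_id;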


See Also:

Oracle Database SQL Language Reference for a concise discussion of joins in Oracle SQL

9.1.1 Join Trees

Typically, a join tree is represented as an upside-down tree structure. As shown in Figure 9-1, table1 is the left table, and table2 is the right table. The optimizer processes the join from left to right. For example, if this graphic depicted a nested loops join, then table1 is the outer loop, and table2 is the inner loop.

Figure 9-1 Join Tree

Description of Figure 9-1 follows

The input of a join can be the result set from a previous join. If a join tree includes more than two branches, then the most common tree type is the left deep tree, which is illustrated in Figure 9-2. A left deep tree is a join tree in which every join has an input from a previous join, and this input is always on the left.

Figure 9-2 Left Deep Join Tree

Description of Figure 9-2 follows

A less common type of join tree is a right deep join tree, shown in Figure 9-3, in which every join has an input from a previous join, and this input is always on the right.

Figure 9-3 Right Deep Join Tree

Description of Figure 9-3 follows

Some join trees are hybrids of left and right trees, so that some joins have a right input from a previous join, and some joins have a left input from a previous join. Figure 9-4 gives an example of this type of tree.

Figure 9-4 Hybrid Left and Right Join Tree

Description of Figure 9-4 follows

In yet another variation, both inputs of the join are the results of a previous join.

9.1.2 How the Optimizer Executes Join Statements

The database joins pairs of row sources. When multiple tables exist in the FROM clause, the optimizer must determine which join operation is most efficient for each pair. The optimizer must make the following interrelated decisions:

  • Access paths

    As for simple statements, the optimizer must choose an access path to retrieve data from each table in the join statement. For example, the optimizer might choose between a full table scan or an index scan. See Chapter 8, "Optimizer Access Paths."

  • Join methods

    To join each pair of row sources, Oracle Database must decide how to do it. The "how" is the join method. The possible join methods are nested loop, sort merge, and hash joins. A Cartesian join requires one of the preceding join methods. Each join method has specific situations in which it is more suitable than the others. See "Join Methods."

  • Join types

    The join condition determines the join type. For example, an inner join retrieves only rows that match the join condition. An outer join retrieves rows that do not match the join condition. See "Join Types."

  • Join order

    To execute a statement that joins more than two tables, Oracle Database joins two tables and then joins the resulting row source to the next table. This process continues until all tables are joined into the result. For example, the database joins two tables, and then joins the result to a third table, and then joins this result to a fourth table, and so on.

9.1.3 How the Optimizer Chooses Execution Plans for Joins

When choosing an execution plan, the optimizer considers the following factors:

  • The optimizer first determines whether joining two or more tables results in a row source containing at most one row.

    The optimizer recognizes such situations based on UNIQUE and PRIMARY KEY constraints on the tables. If such a situation exists, then the optimizer places these tables first in the join order. The optimizer then optimizes the join of the remaining set of tables.

  • For join statements with outer join conditions, the table with the outer join operator typically comes after the other table in the condition in the join order.

    In general, the optimizer does not consider join orders that violate this guideline, although the optimizer overrides this ordering condition in certain circumstances. Similarly, when a subquery has been converted into an antijoin or semijoin, the tables from the subquery must come after those tables in the outer query block to which they were connected or correlated. However, hash antijoins and semijoins are able to override this ordering condition in certain circumstances.

The optimizer generates a set of execution plans, according to possible join orders, join methods, and available access paths. The optimizer then estimates the cost of each plan and chooses the one with the lowest cost.

The optimizer estimates the cost of a query plan by computing the estimated I/Os to be performed by the query plan and the estimated CPU required by the plan. These I/Os have specific costs associated with them: one cost for a single block I/O, and another cost for multiblock I/Os. Also, different functions and expressions have CPU costs associated with them. The optimizer determines the total cost of a query plan using these metrics. These metrics may be influenced by many initialization parameters and session settings at compile time, such as the DB_FILE_MULTIBLOCK_READ_COUNT setting, system statistics, and so on.

For example, the optimizer estimates costs in the following ways:

  • The cost of a nested loops join depends on the cost of reading each selected row of the outer table and each of its matching rows of the inner table into memory. The optimizer estimates these costs using statistics in the data dictionary (see "Introduction to Optimizer Statistics").

  • The cost of a sort merge join depends largely on the cost of reading all the sources into memory and sorting them.

  • The cost of a hash join largely depends on the cost of building a hash table on one of the input sides to the join and using the rows from the other side of the join to probe it.
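
You can display the optimizer's cost estimate for a join plan by using the EXPLAIN PLAN statement and the DBMS_XPLAN.DISPLAY function, as in the following sketch, which reuses the employees and departments join from the examples in this chapter:

EXPLAIN PLAN FOR
  SELECT e.last_name, d.department_name
  FROM   employees e, departments d
  WHERE  e.department_id = d.department_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);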


See Also:


9.2 Join Methods

A join method is the mechanism for joining two row sources. Depending on the statistics, the optimizer chooses the method with the lowest estimated cost.

As shown in Figure 9-5, each join method has two children: the driving (also called outer) row source and the driven-to (also called inner) row source.

Figure 9-5 Join Method

Description of Figure 9-5 follows

This section contains the following topics:

9.2.1 Nested Loops Joins

A nested loop joins an outer data set to an inner data set. For each row in the outer data set that matches the single-table predicates, the database retrieves all rows in the inner data set that satisfy the join predicate. If an index is available, then the database can use it to access the inner data set by rowid.

This section contains the following topics:

9.2.1.1 When the Optimizer Considers Nested Loops Joins

Nested loops joins are useful when the following conditions are true:

  • The database joins small subsets of data, or the database joins large sets of data with the optimizer mode set to FIRST_ROWS (see Table 14-1).


    Note:

    The number of rows expected from the join is what drives the optimizer decision, not the size of the underlying tables. For example, a query might join two tables of a billion rows each, but because of the filters the optimizer expects data sets of 5 rows each.

  • The join condition is an efficient method of accessing the inner table.

In general, nested loops joins work best on small tables with indexes on the join conditions. If a row source has only one row, as with an equality lookup on a primary key value (for example, WHERE employee_id=101), then the join is a simple lookup. The optimizer always tries to put the smallest row source first, making it the driving table.

Various factors enter into the optimizer decision to use nested loops. For example, the database may read several rows from the outer row source in a batch. Based on the number of rows retrieved, the optimizer may choose either a nested loop or a hash join to the inner row source (see "Adaptive Plans"). For example, if a query joins departments to driving table employees, and if the predicate specifies a value in employees.last_name, then the database might read enough entries in the index on last_name to determine whether an internal threshold is passed. If the threshold is not passed, then the optimizer picks a nested loop join to departments, and if the threshold is passed, then the database performs a hash join, which means reading the rest of employees, hashing it into memory, and then joining to departments.

If the access path for the inner loop is not dependent on the outer loop, then the result can be a Cartesian product: for every iteration of the outer loop, the inner loop produces the same set of rows. To avoid this problem, use other join methods to join two independent row sources.

9.2.1.2 How Nested Loop Joins Work

Think of a nested loop as two nested for loops. For example, if a query joins employees and departments, then a nested loop in pseudocode might be:

FOR erow IN (select * from employees where X=Y) LOOP
  FOR drow IN (select * from departments where erow is matched) LOOP
    output values from erow and drow
  END LOOP
END LOOP

The inner loop is executed for every row of the outer loop. The employees table is the "outer" data set because it is in the exterior for loop. The outer table is sometimes called a driving table. The departments table is the "inner" data set because it is in the interior for loop.

A nested loops join involves the following basic steps:

  1. The optimizer determines the driving row source and designates it as the outer loop.

    The outer loop produces a set of rows for driving the join condition. The row source can be a table accessed using an index scan, a full table scan, or any other operation that generates rows.

  2. The optimizer designates the other row source as the inner loop.

    The outer loop appears before the inner loop in the execution plan, as follows:

    NESTED LOOPS 
      outer_loop
      inner_loop 
    
  3. For every fetch request from the client, the basic process is as follows:

    1. Fetch a row from the outer row source

    2. Probe the inner row source to find rows that match the predicate criteria

    3. Repeat the preceding steps until all rows are obtained by the fetch request

    Sometimes the database sorts rowids to obtain a more efficient buffer access pattern.

9.2.1.3 Nested Nested Loops

The outer loop of a nested loop can itself be a row source generated by a different nested loop. The database can nest two or more outer loops to join as many tables as needed. Each loop is a data access method.

The following template shows how the database iterates through three nested loops:

SELECT STATEMENT
  NESTED LOOPS 3
    NESTED LOOPS 2          - Row source becomes OUTER LOOP 3.1
      NESTED LOOPS 1        - Row source becomes OUTER LOOP 2.1
        OUTER LOOP 1.1
        INNER LOOP 1.2  
      INNER LOOP 2.2
    INNER LOOP 3.2

The database orders the loops as follows:

  1. The database iterates through NESTED LOOPS 1:

    NESTED LOOPS 1 
      OUTER LOOP 1.1
      INNER LOOP 1.2
    

    The output of NESTED LOOP 1 is a row source.

  2. The database iterates through NESTED LOOPS 2, using the row source generated by NESTED LOOPS 1 as its outer loop:

    NESTED LOOPS 2       
      OUTER LOOP 2.1         - Row source generated by NESTED LOOPS 1
      INNER LOOP 2.2 
    

    The output of NESTED LOOPS 2 is another row source.

  3. The database iterates through NESTED LOOPS 3, using the row source generated by NESTED LOOPS 2 as its outer loop:

    NESTED LOOPS 3      
      OUTER LOOP 3.1         - Row source generated by NESTED LOOPS 2
      INNER LOOP 3.2
    

Example 9-1 Nested Nested Loops Join

Suppose you join the employees and departments tables as follows:

SELECT /*+ ORDERED USE_NL(d) */ e.last_name, e.first_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id=d.department_id
AND    e.last_name like 'A%';

The plan reveals that the optimizer chose two nested loops (Step 1 and Step 2) to access the data:

SQL_ID  ahuavfcv4tnz4, child number 0
-------------------------------------
SELECT /*+ ORDERED USE_NL(d) */ e.last_name, d.department_name FROM
employees e, departments d WHERE  e.department_id=d.department_id AND
 e.last_name like 'A%'
 
Plan hash value: 1667998133
 
----------------------------------------------------------------------------------
|Id| Operation                             |Name      |Rows|Bytes|Cost(%CPU)|Time|
----------------------------------------------------------------------------------
| 0| SELECT STATEMENT                      |             |  |   |5 (100)|        |
| 1|  NESTED LOOPS                         |             |  |   |       |        |
| 2|   NESTED LOOPS                        |             | 3|102|5   (0)|00:00:01|
| 3|    TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES   | 3| 54|2   (0)|00:00:01|
|*4|     INDEX RANGE SCAN                  | EMP_NAME_IX | 3|   |1   (0)|00:00:01|
|*5|    INDEX UNIQUE SCAN                  | DEPT_ID_PK  | 1|   |0   (0)|        |
| 6|   TABLE ACCESS BY INDEX ROWID         | DEPARTMENTS | 1| 16|1   (0)|00:00:01|
----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("E"."LAST_NAME" LIKE 'A%')
       filter("E"."LAST_NAME" LIKE 'A%')
   5 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

In this example, the basic process is as follows:

  1. The database begins iterating through the inner nested loop (Step 2) as follows:

    1. The database searches the emp_name_ix index for the rowids of all last names that begin with A (Step 4).

      For example:

      Abel,employees_rowid
      Ande,employees_rowid
      Atkinson,employees_rowid
      Austin,employees_rowid
      
    2. Using the rowids from the previous step, the database retrieves a batch of rows from the employees table (Step 3). For example:

      Abel,Ellen,80
      Abel,John,50
      

      These rows become the outer row source for the innermost nested loop.

      The batch step is typically part of adaptive execution plans. To determine whether a nested loop is better than a hash join, the optimizer needs to determine how many rows come back from the row source. If too many rows are returned, then the optimizer switches to a different join method.

    3. For each row in the outer row source, the database scans the dept_id_pk index to obtain the rowid in departments of the matching department ID (Step 5), and joins it to the employees rows. For example:

      Abel,Ellen,80,departments_rowid
      Ande,Sundar,80,departments_rowid
      Atkinson,Mozhe,50,departments_rowid
      Austin,David,60,departments_rowid
      

      These rows become the outer row source for the outer nested loop (Step 1).

  2. The database iterates through the outer nested loop as follows:

    1. The database reads the first row in outer row source.

      For example:

      Abel,Ellen,80,departments_rowid
      
    2. The database uses the departments rowid to retrieve the corresponding row from departments (Step 6), and then joins the result to obtain the requested values (Step 1).

      For example:

      Abel,Ellen,80,Sales
      
    3. The database reads the next row in the outer row source, uses the departments rowid to retrieve the corresponding row from departments (Step 6), and iterates through the loop until all rows are retrieved.

      The result set has the following form:

      Abel,Ellen,80,Sales
      Ande,Sundar,80,Sales
      Atkinson,Mozhe,50,Shipping
      Austin,David,60,IT
      

9.2.1.4 Current Implementation for Nested Loops Joins

Oracle Database 11g introduced a new implementation for nested loops that reduces overall latency for physical I/O. When an index or a table block is not in the buffer cache and is needed to process the join, a physical I/O is required. The database can batch multiple physical I/O requests and process them using a vector I/O instead of one at a time. A vector is an array. The database obtains a set of rowids, and then sends them in an array to the operating system, which performs the read.

As part of the new implementation, two NESTED LOOPS join row sources might appear in the execution plan where only one would have appeared in prior releases. In such cases, Oracle Database allocates one NESTED LOOPS join row source to join the values from the table on the outer side of the join with the index on the inner side. A second row source is allocated to join the result of the first join, which includes the rowids stored in the index, with the table on the inner side of the join.

Consider the query in "Original Implementation for Nested Loops Joins". In the current implementation, the execution plan for this query might be as follows:

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  | Bytes | Cost(%CPU)| Time      |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |    19 |   722 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                   |       |       |            |          |
|   2 |   NESTED LOOPS               |                   |    19 |   722 |     3   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | DEPARTMENTS       |     2 |    32 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |    10 |       |     0   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    10 |   220 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales')
   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

In this case, rows from the hr.departments table form the outer row source (Step 3) of the inner nested loop (Step 2). The index emp_department_ix is the inner row source (Step 4) of the inner nested loop. The results of the inner nested loop form the outer row source of the outer nested loop (Step 1). The hr.employees table (Step 5) is the inner row source of the outer nested loop.

For each fetch request, the basic process is as follows:

  1. The database iterates through the inner nested loop (Step 2) to obtain the rows requested in the fetch:

    1. The database reads the first row of departments to obtain the department IDs for departments named Marketing or Sales (Step 3). For example:

      Marketing,20
      

      This row set is the outer loop. The database caches the data in the PGA.

    2. The database scans emp_department_ix, which is an index on the employees table, to find employees rowids that correspond to this department ID (Step 4), and then joins the result (Step 2).

      The result set has the following form:

      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      
    3. The database reads the next row of departments, scans emp_department_ix to find employees rowids that correspond to this department ID, and then iterates through the loop until the client request is satisfied.

      In this example, the database only iterates through the outer loop twice because only two rows from departments satisfy the predicate filter. Conceptually, the result set has the following form:

      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      .
      .
      .
      Sales,80,employees_rowid
      Sales,80,employees_rowid
      Sales,80,employees_rowid
      .
      .
      .
      

      These rows become the outer row source for the outer nested loop (Step 1). This row set is cached in the PGA.

  2. The database organizes the rowids obtained in the previous step so that it can more efficiently access them in the cache.

  3. The database begins iterating through the outer nested loop as follows:

    1. The database retrieves the first row from the row set obtained in the previous step, as in the following example:

      Marketing,20,employees_rowid
      
    2. Using the rowid, the database retrieves a row from employees to obtain the requested values (Step 1), as in the following example:

      Michael,Hartstein,13000,Marketing
      
    3. The database retrieves the next row from the row set, uses the rowid to probe employees for the matching row, and iterates through the loop until all rows are retrieved.

      The result set has the following form:

      Michael,Hartstein,13000,Marketing
      Pat,Fay,6000,Marketing
      John,Russell,14000,Sales
      Karen,Partners,13500,Sales
      Alberto,Errazuriz,12000,Sales
      .
      .
      .
      

In some cases, a second join row source is not allocated, and the execution plan looks the same as it did before Oracle Database 11g. The following list describes such cases:

  • All of the columns needed from the inner side of the join are present in the index, and there is no table access required. In this case, Oracle Database allocates only one join row source.

  • The order of the rows returned might be different from the order returned in releases earlier than Oracle Database 12c. Thus, when Oracle Database tries to preserve a specific ordering of the rows, for example to eliminate the need for an ORDER BY sort, Oracle Database might use the original implementation for nested loops joins.

  • The OPTIMIZER_FEATURES_ENABLE initialization parameter is set to a release before Oracle Database 11g. In this case, Oracle Database uses the original implementation for nested loops joins.

9.2.1.5 Original Implementation for Nested Loops Joins

In the current release, both the new and original implementation are possible. For an example of the original implementation, consider the following join of the hr.employees and hr.departments tables:

SELECT e.first_name, e.last_name, e.salary, d.department_name
FROM   hr.employees e, hr.departments d
WHERE  d.department_name IN ('Marketing', 'Sales')
AND    e.department_id = d.department_id;

In releases before Oracle Database 11g, the execution plan for this query might appear as follows:

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |    19 |   722 |     3  (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    10 |   220 |     1  (0)| 00:00:01 |
|   2 |   NESTED LOOPS              |                   |    19 |   722 |     3  (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL        | DEPARTMENTS       |     2 |    32 |     2  (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN         | EMP_DEPARTMENT_IX |    10 |       |     0  (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales')
   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

For each fetch request, the basic process is as follows:

  1. The database iterates through the loop to obtain the rows requested in the fetch:

    1. The database reads the first row of departments to obtain the department IDs for departments named Marketing or Sales (Step 3). For example:

      Marketing,20
      

      This row set is the outer loop. The database caches the row in the PGA.

    2. The database scans emp_department_ix, which is an index on the employees.department_id column, to find employees rowids that correspond to this department ID (Step 4), and then joins the result (Step 2).

      Conceptually, the result set has the following form:

      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      
    3. The database reads the next row of departments, scans emp_department_ix to find employees rowids that correspond to this department ID, and iterates through the loop until the client request is satisfied.

      In this example, the database only iterates through the outer loop twice because only two rows from departments satisfy the predicate filter. Conceptually, the result set has the following form:

      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      Marketing,20,employees_rowid
      .
      .
      .
      Sales,80,employees_rowid
      Sales,80,employees_rowid
      Sales,80,employees_rowid
      .
      .
      .
      
  2. Depending on the circumstances, the database may organize the cached rowids obtained in the previous step so that it can more efficiently access them.

  3. For each employees rowid in the result set generated by the nested loop, the database retrieves a row from employees to obtain the requested values (Step 1).

    Thus, the basic process is to read a rowid and retrieve the matching employees row, read the next rowid and retrieve the matching employees row, and so on. Conceptually, the result set has the following form:

    Michael,Hartstein,13000,Marketing
    Pat,Fay,6000,Marketing
    John,Russell,14000,Sales
    Karen,Partners,13500,Sales
    Alberto,Errazuriz,12000,Sales
    .
    .
    .
    

9.2.1.6 Nested Loops Controls

For some SQL examples, the data is small enough for the optimizer to prefer full table scans and hash joins. However, you can add a USE_NL hint to instruct the optimizer to change the join method to nested loops. This hint instructs the optimizer to join each specified table to another row source with a nested loops join, using the specified table as the inner table.

The related USE_NL_WITH_INDEX ( table index ) hint instructs the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table. The index is optional. If no index is specified, then the nested loops join uses an index with at least one join predicate as the index key.
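
For example, the following hedged sketch requests a nested loops join that accesses departments as the inner table through its primary key index DEPT_ID_PK (the index shown in Example 9-1):

SELECT /*+ ORDERED USE_NL_WITH_INDEX(d DEPT_ID_PK) */ e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id;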

Example 9-2 Nested Loops Hint

Assume that the optimizer chooses a hash join for the following query:

SELECT e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id=d.department_id;

The plan looks as follows:

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |       |       |     5 (100)|          |
|*  1 |  HASH JOIN         |             |   106 |  2862 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| DEPARTMENTS |    27 |   432 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMPLOYEES   |   107 |  1177 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

To force a nested loops join using departments as the inner table, add the USE_NL hint as in the following query:

SELECT /*+ ORDERED USE_NL(d) */ e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id=d.department_id;

The plan looks as follows:

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |       |       |    34 (100)|          |
|   1 |  NESTED LOOPS      |             |   106 |  2862 |    34   (3)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMPLOYEES   |   107 |  1177 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DEPARTMENTS |     1 |    16 |     0   (0)|          |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

The database obtains the result set as follows:

  1. In the nested loop, the database reads employees to obtain the last name and department ID for an employee (Step 2). For example:

    De Haan,90
    
  2. For the row obtained in the previous step, the database scans departments to find the department name that matches the employees department ID (Step 3), and joins the result (Step 1). For example:

    De Haan,Executive
    
  3. The database retrieves the next row in employees, retrieves the matching row from departments, and then repeats this process until all rows are retrieved.

    The result set has the following form:

    De Haan,Executive
    Kochnar,Executive
    Baer,Public Relations
    King,Executive
    .
    .
    .
    

See Also:


9.2.2 Hash Joins

The database uses a hash join to join larger data sets. The optimizer uses the smaller of two data sets to build a hash table on the join key in memory, using a deterministic hash function to specify the location in the hash table in which to store each row. The database then scans the larger data set, probing the hash table to find the rows that meet the join condition.

9.2.2.1 When the Optimizer Considers Hash Joins

In general, the optimizer considers a hash join when the following conditions are true:

  • A relatively large amount of data must be joined, or a large fraction of a small table must be joined.

  • The join is an equijoin.

A hash join is most cost effective when the smaller data set fits in memory. In this case, the cost is limited to a single read pass over the two data sets.

Because the hash table is in the PGA, Oracle Database can access rows without latching them. This technique reduces logical I/O by avoiding the necessity of repeatedly latching and reading blocks in the database buffer cache.

If the data sets do not fit in memory, then the database partitions the row sources, and the join proceeds partition by partition. This can consume a significant amount of sort area memory and cause I/O to the temporary tablespace. Even so, this method can still be the most cost effective, especially when parallel query servers are used.
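
If you want to test a hash join that the optimizer did not choose, you can use the USE_HASH hint, which instructs the optimizer to join the specified table with a hash join. A minimal sketch, reusing the employees and departments join from earlier examples:

SELECT /*+ USE_HASH(d) */ e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id;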

9.2.2.2 How Hash Joins Work

A hashing algorithm takes a set of inputs and applies a deterministic hash function to generate a hash value between 1 and n, where n is the size of the hash table. In a hash join, the input values are the join keys. The output values are indexes (slots) in an array, which is the hash table.

9.2.2.2.1 Hash Tables

To illustrate a hash table, assume that the database hashes hr.departments in a join of departments and employees. The join key column is department_id. The first 5 rows of departments are as follows:

SQL> select * from departments where rownum < 6;
 
DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500

The database applies the hash function to each department_id in the table, generating a hash value for each. For this illustration, the hash table has 5 slots (it could have more or fewer). Because n is 5, the possible hash values range from 1 to 5. The hash function might generate the following values for the department IDs:

f(10) = 4
f(20) = 1
f(30) = 4
f(40) = 2
f(50) = 5

Note that the hash function happens to generate the same hash value of 4 for departments 10 and 30. This is known as a hash collision. In this case, the database puts the records for departments 10 and 30 in the same slot, using a linked list. Conceptually, the hash table looks as follows:

1    20,Marketing,201,1800
2    40,Human Resources,203,2400
3
4    10,Administration,200,1700 -> 30,Purchasing,114,1700
5    50,Shipping,121,1500
9.2.2.2.2 Hash Join: Basic Steps

A hash join of two row sources uses the following basic steps:

  1. The database performs a full scan of the smaller data set, and then applies a hash function to the join key in each row to build a hash table in the PGA.

    In pseudocode, the algorithm might look as follows:

    FOR small_table_row IN (SELECT * FROM small_table)
    LOOP
      slot_number := HASH(small_table_row.join_key);
      INSERT_HASH_TABLE(slot_number,small_table_row);
    END LOOP;
      
    
  2. The database probes the second data set, using whichever access mechanism has the lowest cost.

    Typically, the database performs a full scan of both the smaller and larger data set. The algorithm in pseudocode might look as follows:

    FOR large_table_row IN (SELECT * FROM large_table)
    LOOP
       slot_number := HASH(large_table_row.join_key);
       small_table_row := LOOKUP_HASH_TABLE(slot_number,large_table_row.join_key);
       IF small_table_row FOUND
       THEN
          output small_table_row + large_table_row;
       END IF;
    END LOOP;
    

    For each row retrieved from the larger data set, the database does the following:

    1. Applies the same hash function to the join column or columns to calculate the number of the relevant slot in the hash table.

      For example, to probe the hash table for department ID 30, the database applies the hash function to 30, which generates the hash value 4.

    2. Probes the hash table to determine whether rows exist in the slot.

      If no rows exist, then the database processes the next row in the larger data set. If rows exist, then the database proceeds to the next step.

    3. Checks the join column or columns for a match. If a match occurs, then the database either reports the rows or passes them to the next step in the plan, and then processes the next row in the larger data set.

      If multiple rows exist in the hash table slot, the database walks through the linked list of rows, checking each one. For example, if department 30 hashes to slot 4, then the database checks each row until it finds 30.

Example 9-3 Hash Joins

An application queries the oe.orders and oe.order_items tables, joining on the order_id column.

SELECT o.customer_id, l.unit_price * l.quantity
FROM   orders o, order_items l
WHERE  l.order_id = o.order_id;

The execution plan is as follows:

--------------------------------------------------------------------------
| Id  | Operation            |  Name        | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |              |   665 | 13300 |     8  (25)|
|*  1 |  HASH JOIN           |              |   665 | 13300 |     8  (25)|
|   2 |   TABLE ACCESS FULL  | ORDERS       |   105 |   840 |     4  (25)|
|   3 |   TABLE ACCESS FULL  | ORDER_ITEMS  |   665 |  7980 |     4  (25)|
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."ORDER_ID"="O"."ORDER_ID")

Because the orders table is small relative to the order_items table, which is 6 times larger, the database hashes orders. In a hash join, the data set for the hash table always appears first in the list of operations (Step 2). In Step 3, the database performs a full scan of the larger order_items table, probing the hash table for each row.

9.2.2.3 How Hash Joins Work When the Hash Table Does Not Fit in the PGA

The database must use a different technique when the hash table does not fit entirely in the PGA. In this case, the database uses a temporary space to hold portions (called partitions) of the hash table, and sometimes portions of the larger table that probes the hash table.

The basic process is as follows:

  1. The database performs a full scan of the smaller data set, and then builds an array of hash buckets in both the PGA and on disk.

    When the PGA hash area fills up, the database finds the largest partition within the hash table and writes it to temporary space on disk. The database stores any new row that belongs to this on-disk partition on disk, and all other rows in the PGA. Thus, part of the hash table is in memory and part of it on disk.

  2. The database takes a first pass at reading the other data set.

    For each row, the database does the following:

    1. Applies the same hash function to the join column or columns to calculate the number of the relevant hash bucket.

    2. Probes the hash table to determine whether rows exist in the bucket in memory.

      If the hashed value points to a row in memory, then the database completes the join and returns the row. If the value points to a hash partition on disk, however, then the database stores this row in the temporary tablespace, using the same partitioning scheme used for the original data set.

  3. The database reads each on-disk temporary partition one by one.

  4. The database joins the rows in each partition to the rows in the corresponding on-disk temporary partition of the other data set.
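
To check whether a hash join ran entirely in memory or spilled to temporary space as described above, one option is to query the V$SQL_WORKAREA view after executing the statement. The following query is only a sketch; it assumes that the OPERATION_TYPE value for hash joins begins with HASH-JOIN, and in practice you would also filter on the SQL_ID of the statement of interest.

SELECT sql_id, operation_type, last_execution,
       optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea
WHERE  operation_type LIKE 'HASH-JOIN%';

A LAST_EXECUTION value of OPTIMAL indicates that the join completed in memory, whereas one-pass and multipass values indicate that the database used temporary space as described in the preceding steps.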

9.2.2.4 Hash Join Controls

The USE_HASH hint instructs the optimizer to use a hash join when joining two tables together. See "Guidelines for Join Order Hints".
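
For example, the following variation of the query in Example 9-3 is a sketch of the hint syntax; whether the hint changes anything depends on the plan that the optimizer would otherwise choose:

SELECT /*+ USE_HASH(o l) */ o.customer_id, l.unit_price * l.quantity
FROM   orders o, order_items l
WHERE  l.order_id = o.order_id;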

9.2.3 Sort Merge Joins

A sort merge join is a variation on a nested loops join. The database sorts two data sets (the SORT JOIN operations), if they are not already sorted. For each row in the first data set, the database probes the second data set for matching rows and joins them (the MERGE JOIN operation), basing its start position on the match made in the previous iteration:

[Figure: the SORT JOIN and MERGE JOIN operations in a sort merge join (tgsql_vm_081.png, not reproduced here)]

9.2.3.1 When the Optimizer Considers Sort Merge Joins

A hash join requires one hash table and one probe of this table, whereas a sort merge join requires two sorts. The optimizer may choose a sort merge join over a hash join for joining large amounts of data when any of the following conditions is true:

  • The join condition between two tables is not an equijoin, that is, it uses an inequality condition such as <, <=, >, or >=.

    In contrast to sort merges, hash joins require an equality condition.

  • Because of sorts required by other operations, the optimizer finds it cheaper to use a sort merge.

    If an index exists, then the database can avoid sorting the first data set. However, the database always sorts the second data set, regardless of indexes.

A sort merge has the same advantage over a nested loops join as the hash join: the database accesses rows in the PGA rather than the SGA, reducing logical I/O by avoiding the necessity of repeatedly latching and reading blocks in the database buffer cache. In general, hash joins perform better than sort merge joins because sorting is expensive. However, sort merge joins offer the following advantages over a hash join:

  • After the initial sort, the merge phase is optimized, resulting in faster generation of output rows.

  • A sort merge can be more cost-effective than a hash join when the hash table does not fit completely in memory.

    A hash join with insufficient memory requires both the hash table and the other data set to be copied to disk. In this case, the database may have to read from disk multiple times. In a sort merge, if memory cannot hold the two data sets, then the database writes them both to disk, but reads each data set no more than once.

9.2.3.2 How Sort Merge Joins Work

As in a nested loops join, a sort merge join reads two data sets, but sorts them when they are not already sorted. For each row in the first data set, the database finds a starting row in the second data set, and then reads the second data set until it finds a nonmatching row. In pseudocode, the high-level algorithm might look as follows:

READ data_set_1 SORT BY JOIN KEY TO temp_ds1
READ data_set_2 SORT BY JOIN KEY TO temp_ds2
 
READ ds1_row FROM temp_ds1
READ ds2_row FROM temp_ds2

WHILE NOT eof ON temp_ds1,temp_ds2
LOOP
    IF ( temp_ds1.key = temp_ds2.key ) OUTPUT JOIN ds1_row,ds2_row
    ELSIF ( temp_ds1.key < temp_ds2.key ) READ ds1_row FROM temp_ds1
    ELSIF ( temp_ds1.key > temp_ds2.key ) READ ds2_row FROM temp_ds2
END LOOP

For example, the database sorts the first data set as follows:

10,20,30,40,50,60,70


The database sorts the second data set as follows:

20,20,40,40,40,40,40,60,70,70

The database begins by reading 10 in the first data set, and then starts at the beginning of data set 2:

20 too high, stop, get next ds1_row

The database proceeds to the second row of data set 1 (20). The database proceeds through the second data set as follows:

20 match, proceed
20 match, proceed
40 too high, stop, get next ds1_row

The database gets the next row in data set 1, which is 30. The database starts at the number of its last match, which was 20, and then walks through data set 2 looking for a match:

20 too low, proceed
20 too low, proceed
40 too high, stop, get next ds1_row 

The database gets the next row in data set 1, which is 40. The database starts at the number of its last match, which was 20, and then proceeds through data set 2 looking for a match:

20 too low, proceed
20 too low, proceed
40 match, proceed
40 match, proceed
40 match, proceed
40 match, proceed
40 match, proceed
60 too high, stop, get next ds1_row

As the database proceeds through data set 1, the database does not need to read every row in data set 2. This is an advantage over a nested loops join.

Example 9-4 Sort Merge Join Using Index

The following query joins the employees and departments tables on the department_id column, ordering the rows on department_id as follows:

SELECT e.employee_id, e.last_name, e.first_name, e.department_id, 
       d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id
ORDER BY department_id;

A query of DBMS_XPLAN.DISPLAY_CURSOR shows that the plan uses a sort merge join:

----------------------------------------------------------------------------------
|Id | Operation                    | Name        | Rows|Bytes |Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT             |             |     |      | 5(100)|          |
| 1 |  MERGE JOIN                  |             | 106 | 4028 | 5 (20)| 00:00:01 |
| 2 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |  27 |  432 | 2  (0)| 00:00:01 |
| 3 |    INDEX FULL SCAN           | DEPT_ID_PK  |  27 |      | 1  (0)| 00:00:01 |
|*4 |   SORT JOIN                  |             | 107 | 2354 | 3 (34)| 00:00:01 |
| 5 |    TABLE ACCESS FULL         | EMPLOYEES   | 107 | 2354 | 2  (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
       filter("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

The two data sets are the departments table and the employees table. Because an index orders the departments table by department_id, the database can read this index and avoid a sort (Step 3). The database only needs to sort the employees table (Step 4), which is the most CPU-intensive operation.

Example 9-5 Sort Merge Join Without an Index

You join the employees and departments tables on the department_id column, ordering the rows on department_id as follows. In this example, you specify NO_INDEX and USE_MERGE to force the optimizer to choose a sort merge:

SELECT /*+ USE_MERGE(d e) NO_INDEX(d) */ e.employee_id, e.last_name, e.first_name, 
       e.department_id, d.department_name
FROM   employees e, departments d
WHERE  e.department_id = d.department_id
ORDER BY department_id;

A query of DBMS_XPLAN.DISPLAY_CURSOR shows that the plan uses a sort merge join:

----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time    |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |       |       |     6 (100)|         |
|   1 |  MERGE JOIN         |             |   106 |  9540 |     6  (34)| 00:00:01|
|   2 |   SORT JOIN         |             |    27 |   567 |     3  (34)| 00:00:01|
|   3 |    TABLE ACCESS FULL| DEPARTMENTS |    27 |   567 |     2   (0)| 00:00:01|
|*  4 |   SORT JOIN         |             |   107 |  7383 |     3  (34)| 00:00:01|
|   5 |    TABLE ACCESS FULL| EMPLOYEES   |   107 |  7383 |     2   (0)| 00:00:01|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
       filter("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

Because the departments.department_id index is ignored, the optimizer performs a sort, which increases the combined cost of Step 2 and Step 3 by 67% (from 3 to 5).

9.2.3.3 Sort Merge Join Controls

The USE_MERGE hint instructs the optimizer to use a sort merge join. In some situations it may make sense to override the optimizer with this hint. For example, the optimizer might avoid the sort operation by accessing a large table through an index and single-block reads instead of a full table scan. However, this access path can be more expensive than a full table scan followed by the sort, so forcing a sort merge join with USE_MERGE can produce a better plan.


See Also:

Oracle Database SQL Language Reference to learn about the USE_MERGE hint

9.2.4 Cartesian Joins

The database uses a Cartesian join when one or more of the tables does not have any join conditions to any other tables in the statement. The optimizer joins every row from one data source with every row from the other data source, creating the Cartesian product of the two sets. Therefore, the total number of rows resulting from the join is calculated using the following formula, where rs1 is the number of rows in the first row set and rs2 is the number of rows in the second row set:

rs1 X rs2 = total rows in result set

9.2.4.1 When the Optimizer Considers Cartesian Joins

The optimizer uses a Cartesian join for two row sources in any of the following circumstances:

  • No join condition exists.

    In some cases, the optimizer could pick up a common filter condition between the two tables as a possible join condition.


    Note:

    If a Cartesian join appears in a query plan, it could be caused by an inadvertently omitted join condition. In general, if a query joins n tables, then n-1 join conditions are required to avoid a Cartesian join.

  • A Cartesian join is an efficient method.

    For example, the optimizer may decide to generate a Cartesian product of two very small tables that are both joined to the same large table.

  • The ORDERED hint specifies a table before the table to which it is joined.

9.2.4.2 How Cartesian Joins Work

At a high level, the algorithm for a Cartesian join looks as follows, where ds1 is typically the smaller data set, and ds2 is the larger data set:

FOR ds1_row IN ds1 LOOP
  FOR ds2_row IN ds2 LOOP
    output ds1_row and ds2_row
  END LOOP
END LOOP

Example 9-6 Cartesian Join

In this example, a user intends to perform an inner join of the employees and departments tables, but accidentally leaves off the join condition:

SELECT e.last_name, d.department_name
FROM   employees e, departments d

The execution plan is as follows:

----------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |Cost (%CPU)| Time  |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             |       |       |11 (100)|          |
|   1 |  MERGE JOIN CARTESIAN  |             |  2889 | 57780 |11   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL    | DEPARTMENTS |    27 |   324 | 2   (0)| 00:00:01 |
|   3 |   BUFFER SORT          |             |   107 |   856 | 9   (0)| 00:00:01 |
|   4 |    INDEX FAST FULL SCAN| EMP_NAME_IX |   107 |   856 | 0   (0)|          |
----------------------------------------------------------------------------------

In Step 1 of the preceding plan, the CARTESIAN keyword indicates the presence of a Cartesian join. The number of rows (2889) is the product of 27 and 107.

In Step 3, the BUFFER SORT operation indicates that the database is copying the data blocks obtained by the scan of emp_name_ix from the SGA to the PGA. This strategy avoids multiple scans of the same blocks in the database buffer cache, which would generate many logical reads and could cause resource contention.

9.2.4.3 Cartesian Join Controls

The ORDERED hint instructs the optimizer to join tables in the order in which they appear in the FROM clause. If the hint forces a join between two row sources that have no direct connection, then the optimizer must perform a Cartesian join.

Example 9-7 ORDERED Hint

In the following example, the ORDERED hint instructs the optimizer to join employees and locations, but no join condition connects these two row sources:

SELECT /*+ORDERED*/ e.last_name, d.department_name, l.country_id, l.state_province
FROM   employees e, locations l, departments d
WHERE  e.department_id = d.department_id
AND    d.location_id = l.location_id

The following execution plan shows a Cartesian product (Step 3) between locations (Step 6) and employees (Step 4), which is then joined to the departments table (Step 2):

----------------------------------------------------------------------------------
| Id  | Operation             | Name        | Rows  | Bytes |Cost (%CPU)|Time    |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |             |       |       | 37 (100)|          |
|*  1 |  HASH JOIN            |             |   106 |  4664 | 37   (6)| 00:00:01 |
|   2 |   TABLE ACCESS FULL   | DEPARTMENTS |    27 |   513 |  2   (0)| 00:00:01 |
|   3 |   MERGE JOIN CARTESIAN|             |  2461 | 61525 | 34   (3)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | EMPLOYEES   |   107 |  1177 |  2   (0)| 00:00:01 |
|   5 |    BUFFER SORT        |             |    23 |   322 | 32   (4)| 00:00:01 |
|   6 |     TABLE ACCESS FULL | LOCATIONS   |    23 |   322 |  0   (0)|          |
----------------------------------------------------------------------------------

See Also:

Oracle Database SQL Language Reference to learn about the ORDERED hint

9.3 Join Types

A join type is determined by the type of join condition. This section contains the following topics:

9.3.1 Inner Joins

An inner join (sometimes called a simple join) is a join that returns only rows that satisfy the join condition. Inner joins are either equijoins or nonequijoins.

9.3.1.1 Equijoins

An equijoin is an inner join whose join condition contains an equality operator. The following example is an equijoin because the join condition contains only an equality operator:

SELECT e.employee_id, e.last_name, d.department_name
FROM   employees e, departments d
WHERE  e.department_id=d.department_id;

In the preceding query, the join condition is e.department_id=d.department_id. If a row in the employees table has a department ID that matches the value in a row in the departments table, then the database returns the joined result; otherwise, the database does not return a result.

9.3.1.2 Nonequijoins

A nonequijoin is an inner join whose join condition contains an operator that is not an equality operator. The following query lists all employees whose hire date occurred when employee 176 (who is listed in job_history because he changed jobs in 2007) was working at the company:

SELECT e.employee_id, e.first_name, e.last_name, e.hire_date
FROM   employees e, job_history h
WHERE  h.employee_id = 176
AND    e.hire_date BETWEEN h.start_date AND h.end_date;

In the preceding example, the condition joining employees and job_history does not contain an equality operator, so it is a nonequijoin. Nonequijoins are relatively rare.

Note that a hash join requires at least a partial equijoin. The following SQL script contains an equality join condition (e1.empno = e2.empno) and a nonequality condition:

SET AUTOTRACE TRACEONLY EXPLAIN
SELECT *
FROM   scott.emp e1 JOIN scott.emp e2
ON     ( e1.empno = e2.empno
AND      e1.hiredate BETWEEN e2.hiredate-1 AND e2.hiredate+1 )

The optimizer chooses a hash join for the preceding query, as shown in the following plan:

Execution Plan
----------------------------------------------------------
Plan hash value: 3638257876
 
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   174 |     5  (20)| 00:00:01 |
|*  1 |  HASH JOIN         |      |     1 |   174 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |  1218 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMP  |    14 |  1218 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("E1"."EMPNO"="E2"."EMPNO")
       filter("E1"."HIREDATE">=INTERNAL_FUNCTION("E2"."HIREDATE")-1 AND
              "E1"."HIREDATE"<=INTERNAL_FUNCTION("E2"."HIREDATE")+1)

9.3.2 Outer Joins

An outer join returns all rows that satisfy the join condition and also returns some or all of those rows from one table for which no rows from the other satisfy the join condition. Thus, an outer join extends the result of a simple join.

In ANSI syntax, the OUTER JOIN clause specifies an outer join. In the FROM clause, the left table appears to the left of the OUTER JOIN keywords, and the right table appears to the right of these keywords. The left table is also called the outer table, and the right table is also called the inner table. For example, in the following statement the employees table is the left or outer table:

SELECT employee_id, last_name, first_name
FROM   employees LEFT OUTER JOIN departments
ON     (employees.department_id=departments.department_id);

Outer joins require the outer joined table to be the driving table. In the preceding example, employees is the driving table, and departments is the driven-to table.

This section contains the following topics:

9.3.2.1 Nested Loop Outer Joins

The database uses this operation to loop through an outer join between two tables. The outer join returns the outer (preserved) table rows, even when no corresponding rows are in the inner (optional) table.

In a standard nested loop, the optimizer chooses the order of tables—which is the driving table and which the driven table—based on the cost. However, in a nested loop outer join, the join condition determines the order of tables. The database uses the outer, row-preserved table to drive to the inner table.

The optimizer uses nested loops joins to process an outer join in the following circumstances:

  • It is possible to drive from the outer table to the inner table.

  • Data volume is low enough to make the nested loop method efficient.

For an example of a nested loop outer join, you can add the USE_NL hint to Example 9-8 to instruct the optimizer to use a nested loop. For example:

SELECT /*+ USE_NL(c o) */ cust_last_name,
       SUM(NVL2(o.customer_id,0,1)) "Count"
FROM   customers c, orders o
WHERE  c.credit_limit > 1000
AND    c.customer_id = o.customer_id(+)
GROUP BY cust_last_name;

9.3.2.2 Hash Join Outer Joins

The optimizer uses hash joins for processing an outer join when either of the following conditions is met:

  • The data volume is large enough to make the hash join method efficient.

  • It is not possible to drive from the outer table to the inner table.

The cost determines the order of tables. The outer table, including preserved rows, may be used to build the hash table, or it may be used to probe the hash table.

Example 9-8 shows a typical hash join outer join query, and its execution plan. In this example, all the customers with credit limits greater than 1000 are queried. An outer join is needed so that the query captures customers who have no orders.

Example 9-8 Hash Join Outer Joins

SELECT cust_last_name, SUM(NVL2(o.customer_id,0,1)) "Count"
FROM   customers c, orders o
WHERE  c.credit_limit > 1000
AND    c.customer_id = o.customer_id(+)
GROUP BY cust_last_name;

---------------------------------------------------------------------------------
| Id  | Operation           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |           |       |       |     7 (100)|          |
|   1 |  HASH GROUP BY      |           |   168 |  3192 |     7  (29)| 00:00:01 |
|*  2 |   HASH JOIN OUTER   |           |   318 |  6042 |     6  (17)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| CUSTOMERS |   260 |  3900 |     3   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| ORDERS    |   105 |   420 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("C"."CUSTOMER_ID"="O"."CUSTOMER_ID")
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
   3 - filter("C"."CREDIT_LIMIT">1000)
   4 - filter("O"."CUSTOMER_ID">0)

The query looks for customers which satisfy various conditions. An outer join returns NULL for the inner table columns along with the outer (preserved) table rows when it does not find any corresponding rows in the inner table. This operation finds all the customers rows that do not have any orders rows.

In this case, the outer join condition is the following:

customers.customer_id = orders.customer_id(+)

The components of this condition represent the following:

  • The outer table is customers.

  • The inner table is orders.

  • The join preserves the customers rows, including those rows without a corresponding row in orders.

You could use a NOT EXISTS subquery to return the rows. However, because you are querying all the rows in the table, the hash join performs better (unless the NOT EXISTS subquery is not nested).

In Example 9-9, the outer join is to a multitable view. The optimizer cannot drive into the view as it would in a normal join, and cannot push the join predicates into the view, so it builds the entire row set of the view.

Example 9-9 Outer Join to a Multitable View

SELECT c.cust_last_name, sum(revenue)
FROM   customers c, v_orders o
WHERE  c.credit_limit > 2000
AND    o.customer_id(+) = c.customer_id
GROUP BY c.cust_last_name;

----------------------------------------------------------------------------
| Id  | Operation              |  Name        | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |              |   144 |  4608 |    16  (32)|
|   1 |  HASH GROUP BY         |              |   144 |  4608 |    16  (32)|
|*  2 |   HASH JOIN OUTER      |              |   663 | 21216 |    15  (27)|
|*  3 |    TABLE ACCESS FULL   | CUSTOMERS    |   195 |  2925 |     6  (17)|
|   4 |    VIEW                | V_ORDERS     |   665 | 11305 |            |
|   5 |     HASH GROUP BY      |              |   665 | 15960 |     9  (34)|
|*  6 |      HASH JOIN         |              |   665 | 15960 |     8  (25)|
|*  7 |       TABLE ACCESS FULL| ORDERS       |   105 |   840 |     4  (25)|
|   8 |       TABLE ACCESS FULL| ORDER_ITEMS  |   665 | 10640 |     4  (25)|
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("O"."CUSTOMER_ID"(+)="C"."CUSTOMER_ID")
   3 - filter("C"."CREDIT_LIMIT">2000)
   6 - access("O"."ORDER_ID"="L"."ORDER_ID")
   7 - filter("O"."CUSTOMER_ID">0)

The view definition is as follows:

CREATE OR REPLACE view v_orders AS
SELECT l.product_id, SUM(l.quantity*unit_price) revenue, 
       o.order_id, o.customer_id
FROM   orders o, order_items l
WHERE  o.order_id = l.order_id
GROUP BY l.product_id, o.order_id, o.customer_id;

9.3.2.3 Sort Merge Outer Joins

When an outer join cannot drive from the outer (preserved) table to the inner (optional) table, it cannot use a hash join or a nested loops join. In this case, it uses a sort merge outer join.

The optimizer uses sort merge for an outer join in the following cases (an example of forcing this method follows the list):

  • A nested loops join is inefficient, typically because of data volumes.

  • The optimizer finds it is cheaper to use a sort merge over a hash join because of sorts required by other operations.
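
For example, adding the USE_MERGE hint to the query in Example 9-8 is one way to request a sort merge outer join. This is only a sketch; the optimizer may still choose a different method if it considers the hint inapplicable.

SELECT /*+ USE_MERGE(c o) */ cust_last_name,
       SUM(NVL2(o.customer_id,0,1)) "Count"
FROM   customers c, orders o
WHERE  c.credit_limit > 1000
AND    c.customer_id = o.customer_id(+)
GROUP BY cust_last_name;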

9.3.2.4 Full Outer Joins

A full outer join is a combination of the left and right outer joins. In addition to the inner join, rows from both tables that have not been returned in the result of the inner join are preserved and extended with nulls. In other words, a full outer join joins the tables together, yet also shows rows that have no corresponding rows in the other table.

Example 9-10 retrieves all departments and all employees in each department, but also includes:

  • Any employees without departments

  • Any departments without employees

Example 9-10 Full Outer Join

SELECT d.department_id, e.employee_id
FROM   employees e FULL OUTER JOIN departments d
ON     e.department_id = d.department_id
ORDER BY d.department_id;

The statement produces the following output:

DEPARTMENT_ID EMPLOYEE_ID
------------- -----------
           10         200
           20         201
           20         202
           30         114
           30         115
           30         116
...
          270
          280
                      178
                      207

125 rows selected.

Starting with Oracle Database 11g, Oracle Database automatically uses a native execution method based on a hash join for executing full outer joins whenever possible. When the database uses the new method to execute a full outer join, the execution plan for the query contains HASH JOIN FULL OUTER. Example 9-11 shows the execution plan for the query in Example 9-10.

Example 9-11 Execution Plan for a Full Outer Join

---------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            |   122 |  4758 |     6  (34)| 00:00:01  |
|   1 |  SORT ORDER BY          |            |   122 |  4758 |     6  (34)| 00:00:01  |
|   2 |   VIEW                  | VW_FOJ_0   |   122 |  4758 |     5  (20)| 00:00:01  |
|*  3 |    HASH JOIN FULL OUTER |            |   122 |  1342 |     5  (20)| 00:00:01  |
|   4 |     INDEX FAST FULL SCAN| DEPT_ID_PK |    27 |   108 |     2   (0)| 00:00:01  |
|   5 |     TABLE ACCESS FULL   | EMPLOYEES  |   107 |   749 |     2   (0)| 00:00:01  |
---------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

HASH JOIN FULL OUTER is included in the plan (Step 3), indicating that the query uses the hash full outer join execution method. Typically, when the full outer join condition between two tables is an equijoin, the hash full outer join execution method is possible, and Oracle Database uses it automatically.

To instruct the optimizer to consider using the hash full outer join execution method, apply the NATIVE_FULL_OUTER_JOIN hint. To instruct the optimizer not to consider using the hash full outer join execution method, apply the NO_NATIVE_FULL_OUTER_JOIN hint. The NO_NATIVE_FULL_OUTER_JOIN hint instructs the optimizer to exclude the native execution method when joining each specified table. Instead, the full outer join is executed as a union of a left outer join and an antijoin.
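
For example, the following variation of the query in Example 9-10 adds the NO_NATIVE_FULL_OUTER_JOIN hint at the statement level. This is only a sketch; the resulting plan depends on the release and the optimizer environment, but it would typically no longer contain the HASH JOIN FULL OUTER operation.

SELECT /*+ NO_NATIVE_FULL_OUTER_JOIN */ d.department_id, e.employee_id
FROM   employees e FULL OUTER JOIN departments d
ON     e.department_id = d.department_id
ORDER BY d.department_id;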

9.3.2.5 Multiple Tables on the Left of an Outer Join

In Oracle Database 12c, multiple tables may exist on the left of an outer-joined table. This enhancement enables Oracle Database to merge a view that contains multiple tables and appears on the left of an outer join.

In releases before Oracle Database 12c, a query such as the following was invalid, and would trigger an ORA-01417 error message:

SELECT t1.d, t3.c
FROM   t1, t2, t3
WHERE  t1.z = t2.z 
AND    t1.x = t3.x (+) 
AND    t2.y = t3.y (+);

Starting in Oracle Database 12c, the preceding query is valid.

9.3.3 Semijoins

A semijoin is a join between two data sets that returns a row from the first set when a matching row exists in the subquery data set. The database stops processing the second data set at the first match, so rows from the first data set are not duplicated when multiple rows in the second data set satisfy the subquery criteria.


Note:

Semijoins and antijoins are considered join types even though the SQL constructs that cause them are subqueries. They are internal algorithms that the optimizer uses to flatten subquery constructs so that they can be resolved in a join-like way.

9.3.3.1 When the Optimizer Considers Semijoins

A semijoin avoids returning a huge number of rows when a query only needs to determine whether a match exists. With large data sets, this optimization can result in significant time savings over a nested loops join that must loop through every record returned by the inner query for every row in the outer query. The optimizer can apply the semijoin optimization to nested loops joins, hash joins, and sort merge joins.

The optimizer may choose a semijoin in the following circumstances:

  • The statement uses either an IN or EXISTS clause.

  • The statement contains a subquery in the IN or EXISTS clause.

  • The IN or EXISTS clause is not contained inside an OR branch.

9.3.3.2 How Semijoins Work

The semijoin optimization is implemented differently depending on what type of join is used. The following pseudocode shows a semijoin for a nested loops join:

FOR ds1_row IN ds1 LOOP
  match := false;
  FOR ds2_row IN ds2 LOOP
    IF (ds1_row matches ds2_row) THEN
      match := true;
      EXIT -- stop processing second data set when a match is found
    END IF
  END LOOP
  IF (match = true) THEN 
    RETURN ds1_row
  END IF
END LOOP

In the preceding pseudocode, ds1 is the first data set, and ds2 is the subquery data set. The code obtains the first row from the first data set, and then loops through the subquery data set looking for a match. The code exits the inner loop as soon as it finds a match, and then begins processing the next row in the first data set.

Example 9-12 Semijoin Using WHERE EXISTS

The following query uses a WHERE EXISTS clause to list only the departments that contain employees:

SELECT department_id, department_name 
FROM   departments
WHERE EXISTS (SELECT 1
              FROM   employees 
              WHERE  employees.department_id = departments.department_id)

The execution plan reveals a NESTED LOOPS SEMI operation in Step 1:

----------------------------------------------------------------------------------
| Id| Operation          | Name              |Rows| Bytes |Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |                   |    |       |    2 (100)|          |
| 1 |  NESTED LOOPS SEMI |                   | 11 |   209 |    2   (0)| 00:00:01 |
| 2 |   TABLE ACCESS FULL| DEPARTMENTS       | 27 |   432 |    2   (0)| 00:00:01 |
|*3 |   INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 44 |   132 |    0   (0)|          |
----------------------------------------------------------------------------------

For each row in departments, which forms the outer loop, the database obtains the department ID, and then probes the employees.department_id index for matching entries. Conceptually, the index looks as follows:

10,rowid
10,rowid
10,rowid
10,rowid
30,rowid
30,rowid
30,rowid
...

If the first entry in the departments table is department 30, then the database performs a range scan of the index until it finds the first 30 entry, at which point it stops reading the index and returns the matching row from departments. If the next row in the outer loop is department 20, then the database scans the index for a 20 entry, and not finding any matches, performs the next iteration of the outer loop. The database proceeds in this way until all matching rows are returned.

Example 9-13 Semijoin Using IN

The following query uses an IN clause to list only the departments that contain employees:

SELECT department_id, department_name
FROM   departments
WHERE  department_id IN 
       (SELECT department_id 
        FROM   employees); 

The execution plan reveals a NESTED LOOPS SEMI operation in Step 1:

----------------------------------------------------------------------------------
| Id| Operation          | Name              |Rows| Bytes |Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |                   |    |       |    2 (100)|          |
| 1 |  NESTED LOOPS SEMI |                   | 11 |   209 |    2   (0)| 00:00:01 |
| 2 |   TABLE ACCESS FULL| DEPARTMENTS       | 27 |   432 |    2   (0)| 00:00:01 |
|*3 |   INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 44 |   132 |    0   (0)|          |
----------------------------------------------------------------------------------

The plan is identical to the plan in Example 9-12, "Semijoin Using WHERE EXISTS".

9.3.4 Antijoins

An antijoin is a join between two data sets that returns a row from the first set when a matching row does not exist in the subquery data set. Like a semijoin, an antijoin stops processing the subquery data set when the first match is found. Unlike a semijoin, the antijoin only returns a row when no match is found.

9.3.4.1 When the Optimizer Considers Antijoins

An antijoin avoids unnecessary processing when a query only needs to return a row when a match does not exist. With large data sets, this optimization can result in significant time savings over a nested loops join that must loop through every record returned by the inner query for every row in the outer query. The optimizer can apply the antijoin optimization to nested loops joins, hash joins, and sort merge joins.

The optimizer may choose an antijoin in the following circumstances:

  • The statement uses either the NOT IN or NOT EXISTS clause.

  • The statement has a subquery in the NOT IN or NOT EXISTS clause.

  • The NOT IN or NOT EXISTS clause is not contained inside an OR branch.

  • The statement performs an outer join and applies an IS NULL condition to a join column, as in the following example:

    SET AUTOTRACE TRACEONLY EXPLAIN
    SELECT emp.*
    FROM   emp, dept
    WHERE  emp.deptno = dept.deptno(+)
    AND    dept.deptno IS NULL
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1543991079
     
    ---------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |      |    14 |  1400 |     5  (20)| 00:00:01 |
    |*  1 |  HASH JOIN ANTI    |      |    14 |  1400 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| EMP  |    14 |  1218 |     2   (0)| 00:00:01 |
    |   3 |   TABLE ACCESS FULL| DEPT |     4 |    52 |     2   (0)| 00:00:01 |
    ---------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       1 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
     
    Note
    -----
       - dynamic statistics used: dynamic sampling (level=2)
    

9.3.4.2 How Antijoins Work

The antijoin optimization is implemented differently depending on what type of join is used. The following pseudocode shows an antijoin for a nested loops join:

FOR ds1_row IN ds1 LOOP
  match := true;
  FOR ds2_row IN ds2 LOOP
    IF (ds1_row matches ds2_row) THEN
      match := false;
      EXIT -- stop processing second data set when a match is found
    END IF
  END LOOP
  IF (match = true) THEN 
    RETURN ds1_row
  END IF
END LOOP

In the preceding pseudocode, ds1 is the first data set, and ds2 is the second data set. The code obtains the first row from the first data set, and then loops through the second data set looking for a match. The code exits the inner loop as soon as it finds a match, discards that row, and begins processing the next row in the first data set. A row from the first data set is returned only when no match is found.

9.3.4.3 How Antijoins Handle Nulls

For semijoins, IN and EXISTS are functionally equivalent. However, NOT IN and NOT EXISTS are not functionally equivalent. The difference is because of nulls. If a null value is returned to a NOT IN operator, then the statement returns no records. To see why, consider the following WHERE clause:

WHERE department_id NOT IN (null, 10, 20)

The database tests the preceding expression as follows:

WHERE (department_id != null) AND (department_id != 10) AND (department_id != 20)

For the entire expression to be true, each individual condition must be true. However, a null value cannot be compared to another value, so the department_id != null condition cannot be true, and thus the whole expression cannot be true. The following techniques enable a statement to return records even when nulls are returned to the NOT IN operator (a sketch of the first technique follows this list):

  • Apply an NVL function to the columns returned by the subquery.

  • Add an IS NOT NULL predicate to the subquery.

  • Implement NOT NULL constraints.
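
The following query is a sketch of the first technique applied to the departments and employees tables. It assumes that -1 is not a valid department ID, so NVL maps the nulls returned by the subquery to a value that cannot match any department:

SELECT department_id, department_name
FROM   departments
WHERE  department_id NOT IN
       (SELECT NVL(department_id, -1)
        FROM   employees);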

In contrast to NOT IN, the NOT EXISTS clause only considers predicates that return the existence of a match, and ignores any row that does not match or could not be determined because of nulls. If at least one row in the subquery matches the row from the outer query, then NOT EXISTS returns false. If no tuples match, then NOT EXISTS returns true. The presence of nulls in the subquery does not affect the search for matching records.

In releases earlier than Oracle Database 11g, the optimizer could not use an antijoin optimization when nulls could be returned by a subquery. However, starting in Oracle Database 11g, the ANTI NA (and ANTI SNA) optimizations described in the following examples enable the optimizer to use an antijoin even when nulls are possible.

Example 9-15 Antijoin Using NOT IN

Suppose that a user issues the following query with a NOT IN clause to list the departments that contain no employees:

SELECT department_id, department_name
FROM   departments
WHERE  department_id NOT IN 
       (SELECT department_id 
        FROM   employees);

The preceding query returns no rows even though several departments contain no employees. This result, which was not intended by the user, occurs because the employees.department_id column is nullable.

The execution plan reveals a NESTED LOOPS ANTI SNA operation in Step 2:

----------------------------------------------------------------------------------
| Id| Operation              | Name              |Rows|Bytes| Cost (%CPU) | Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT       |                   |    |     |  4 (100)|          |
|*1 |  FILTER                |                   |    |     |         |          |
| 2 |   NESTED LOOPS ANTI SNA|                   | 17 | 323 |  4  (50)| 00:00:01 |
| 3 |    TABLE ACCESS FULL   | DEPARTMENTS       | 27 | 432 |  2   (0)| 00:00:01 |
|*4 |    INDEX RANGE SCAN    | EMP_DEPARTMENT_IX | 41 | 123 |  0   (0)|          |
|*5 |   TABLE ACCESS FULL    | EMPLOYEES         |  1 |   3 |  2   (0)| 00:00:01 |
----------------------------------------------------------------------------------
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NULL)
   4 - access("DEPARTMENT_ID"="DEPARTMENT_ID")
   5 - filter("DEPARTMENT_ID" IS NULL)

ANTI SNA stands for "single null-aware antijoin," and ANTI NA stands for "null-aware antijoin." The null-aware operation enables the optimizer to use the antijoin optimization even on a nullable column. In releases earlier than Oracle Database 11g, the database could not perform antijoins on NOT IN queries when nulls were possible.

Suppose that the user rewrites the query by applying an IS NOT NULL condition to the subquery:

SELECT department_id, department_name
FROM   departments
WHERE  department_id NOT IN 
       (SELECT department_id 
        FROM   employees
        WHERE  department_id IS NOT NULL);

The preceding query returns 16 rows, which is the expected result. Step 1 in the plan shows a standard NESTED LOOPS ANTI join instead of an ANTI NA or ANTI SNA join because the subquery cannot return nulls:

----------------------------------------------------------------------------------
| Id | Operation          | Name              | Rows| Bytes | Cost (%CPU)| Time  |
----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |                   |     |       |  2 (100)|          |
|  1 |  NESTED LOOPS ANTI |                   |  17 |   323 |  2   (0)| 00:00:01 |
|  2 |   TABLE ACCESS FULL| DEPARTMENTS       |  27 |   432 |  2   (0)| 00:00:01 |
|* 3 |   INDEX RANGE SCAN | EMP_DEPARTMENT_IX |  41 |   123 |  0   (0)|          |
----------------------------------------------------------------------------------
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("DEPARTMENT_ID"="DEPARTMENT_ID")
       filter("DEPARTMENT_ID" IS NOT NULL)

Example 9-16 Antijoin Using NOT EXISTS

Suppose that a user issues the following query with a NOT EXISTS clause to list the departments that contain no employees:

SELECT department_id, department_name
FROM   departments d
WHERE  NOT EXISTS
       (SELECT null
        FROM   employees e
        WHERE  e.department_id = d.department_id)

The preceding query avoids the null problem for NOT IN clauses. Thus, even though the employees.department_id column is nullable, the statement returns the desired result.

Step 1 of the execution plan reveals a NESTED LOOPS ANTI operation, not the ANTI NA variant, which was necessary for NOT IN when nulls were possible:

----------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)|Time|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       | 2 (100)|        |
|   1 |  NESTED LOOPS ANTI |                   |    17 |   323 | 2   (0)|00:00:01|
|   2 |   TABLE ACCESS FULL| DEPARTMENTS       |    27 |   432 | 2   (0)|00:00:01|
|*  3 |   INDEX RANGE SCAN | EMP_DEPARTMENT_IX |    41 |   123 | 0   (0)|        |
----------------------------------------------------------------------------------
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

9.4 Join Optimizations

This section describes common join optimizations:

9.4.1 Bloom Filters

A Bloom filter, named after its creator Burton Bloom, is a low-memory data structure that tests membership in a set. A Bloom filter correctly indicates when an element is not in a set, but can incorrectly indicate when an element is in a set. Thus, false negatives are impossible but false positives are possible.

9.4.1.1 Purpose of Bloom Filters

Bloom filters are especially useful when the amount of memory needed to store the filter is small relative to the amount of data in the data set, and when most data is expected to fail the membership test.

Oracle Database uses Bloom filters to achieve various specific goals, including the following:

  • Reduce the amount of data transferred to slave processes in a parallel query, especially when the database discards most rows because they do not fulfill a join condition

  • Eliminate unneeded partitions when building a partition access list in a join, known as partition pruning

  • Test whether data exists in the server result cache, thereby avoiding a disk read

  • Filter members in Exadata cells, especially when joining a large fact table and small dimension tables in a star schema

Bloom filters can occur in both parallel and serial processing.

9.4.1.2 How Bloom Filters Work

A Bloom filter uses an array of bits to indicate inclusion in a set. For example, 8 elements (an arbitrary number used for this example) in an array are initially set to 0:

e1 e2 e3 e4 e5 e6 e7 e8
 0  0  0  0  0  0  0  0

This array represents a set. To represent an input value i in this array, three separate hash functions (an arbitrary number used for this example) are applied to i, each generating a hash value between 1 and 8:

f1(i) = h1
f2(i) = h2
f3(i) = h3

To store the value 17 in this array, the three hash functions are applied with i set to 17, and return the following hash values:

f1(17) = 5
f2(17) = 3
f3(17) = 5

In the preceding example, two of the hash functions happened to return the same value of 5, known as a hash collision. Because the distinct hash values are 5 and 3, the 5th and 3rd elements in the array are set to 1:

e1 e2 e3 e4 e5 e6 e7 e8
 0  0  1  0  1  0  0  0

Testing the membership of 17 in the set reverses the process. To test whether the set excludes the value 17, element 3 or element 5 must contain a 0. If a 0 is present in either element, then the set cannot contain 17. No false negatives are possible.

To test whether the set includes 17, both element 3 and element 5 must contain 1 values. However, if the test indicates a 1 for both elements, then it is still possible for the set not to include 17. False positives are possible. For example, the following array might represent the value 22, which also has a 1 for both element 3 and element 5:

e1 e2 e3 e4 e5 e6 e7 e8
 1  0  1  0  1  0  0  0
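
Purely as an illustration, and not as a description of how Oracle Database implements its Bloom filters internally, you can simulate several hash functions in SQL by calling ORA_HASH with different seed values. Each call maps the input value to one of the 8 slots; the values returned differ from the hand-worked example above, but the principle of setting the element at each returned position to 1 is the same:

-- Three simulated hash functions (seeds 1-3) mapping the value 17 to slots 1-8
SELECT ORA_HASH(17, 7, 1) + 1 AS h1,
       ORA_HASH(17, 7, 2) + 1 AS h2,
       ORA_HASH(17, 7, 3) + 1 AS h3
FROM   dual;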

9.4.1.3 Bloom Filter Controls

The optimizer automatically determines whether to use Bloom filters. To override optimizer decisions, you can use the hints PX_JOIN_FILTER and NO_PX_JOIN_FILTER.
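
For example, the following query is a sketch of the hint syntax, requesting a join filter on the sales row source. The PARALLEL hint and the table aliases mirror the scenario later in this section, and the optimizer may still decide that a filter is not worthwhile:

SELECT /*+ PARALLEL(s) PX_JOIN_FILTER(s) */ p.prod_name, s.quantity_sold
FROM   sh.sales s, sh.products p
WHERE  s.prod_id = p.prod_id;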


See Also:

Oracle Database SQL Language Reference to learn more about the bloom filter hints

9.4.1.4 Bloom Filter Metadata

The following dynamic performance views contain metadata about Bloom filters:

  • V$SQL_JOIN_FILTER

    This view shows the number of rows filtered out (FILTERED column) and tested (PROBED column) by an active Bloom filter (see the sample query after this list).

  • V$PQ_TQSTAT

    This view displays the number of rows processed through each parallel execution server at each stage of the execution tree. You can use it to monitor how much Bloom filters have reduced data transfer among parallel processes.
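
For example, the following query is a sketch that reports how selective each Bloom filter has been. The FILTERED and PROBED columns are described above; the QC_SESSION_ID column, which identifies the query coordinator session, is an assumption about the view's layout:

SELECT qc_session_id, filtered, probed,
       ROUND(100 * filtered / NULLIF(probed, 0), 1) AS pct_filtered
FROM   v$sql_join_filter;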

In an execution plan, a Bloom filter is indicated by the keywords JOIN FILTER in the Operation column and the prefix :BF in the Name column, as in Step 9 of the following plan snippet:

----------------------------------------------------------------------------
| Id  | Operation                  | Name     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
...
|   9 |      JOIN FILTER CREATE    | :BF0000  |  Q1,03 | PCWP |            |

In the Predicate Information section of the plan, filters that contain functions beginning with the string SYS_OP_BLOOM_FILTER indicate use of a Bloom filter.

9.4.1.5 Bloom Filters: Scenario

The following parallel query joins the sales fact table to the products and times dimension tables, and filters on fiscal week 18:

SELECT /*+ parallel(s) */ p.prod_name, s.quantity_sold
FROM   sh.sales s, sh.products p, sh.times t 
WHERE  s.prod_id = p.prod_id
AND    s.time_id = t.time_id
AND    t.fiscal_week_number = 18;

Querying DBMS_XPLAN.DISPLAY_CURSOR provides the following output:

SELECT * FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'BASIC,+PARALLEL,+PREDICATE'));

EXPLAINED SQL STATEMENT:
------------------------
SELECT /*+ parallel(s) */ p.prod_name, s.quantity_sold FROM sh.sales s,
sh.products p, sh.times t WHERE s.prod_id = p.prod_id AND s.time_id =
t.time_id AND t.fiscal_week_number = 18
 
Plan hash value: 1183628457
 
----------------------------------------------------------------------------
| Id  | Operation                  | Name     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |        |      |            |
|   1 |  PX COORDINATOR            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)      | :TQ10003 |  Q1,03 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED      |          |  Q1,03 | PCWP |            |
|   4 |     PX RECEIVE             |          |  Q1,03 | PCWP |            |
|   5 |      PX SEND BROADCAST     | :TQ10001 |  Q1,01 | S->P | BROADCAST  |
|   6 |       PX SELECTOR          |          |  Q1,01 | SCWC |            |
|   7 |        TABLE ACCESS FULL   | PRODUCTS |  Q1,01 | SCWP |            |
|*  8 |     HASH JOIN              |          |  Q1,03 | PCWP |            |
|   9 |      JOIN FILTER CREATE    | :BF0000  |  Q1,03 | PCWP |            |
|  10 |       BUFFER SORT          |          |  Q1,03 | PCWC |            |
|  11 |        PX RECEIVE          |          |  Q1,03 | PCWP |            |
|  12 |         PX SEND HYBRID HASH| :TQ10000 |        | S->P | HYBRID HASH|
|* 13 |          TABLE ACCESS FULL | TIMES    |        |      |            |
|  14 |      PX RECEIVE            |          |  Q1,03 | PCWP |            |
|  15 |       PX SEND HYBRID HASH  | :TQ10002 |  Q1,02 | P->P | HYBRID HASH|
|  16 |        JOIN FILTER USE     | :BF0000  |  Q1,02 | PCWP |            |
|  17 |         PX BLOCK ITERATOR  |          |  Q1,02 | PCWC |            |
|* 18 |          TABLE ACCESS FULL | SALES    |  Q1,02 | PCWP |            |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("S"."PROD_ID"="P"."PROD_ID")
   8 - access("S"."TIME_ID"="T"."TIME_ID")
  13 - filter("T"."FISCAL_WEEK_NUMBER"=18)
  18 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"S"."TIME_ID"))

A single server process scans the times table (Step 13), and then uses a hybrid hash distribution method to send the rows to the parallel execution servers (Step 12). The processes in set Q1,03 create a Bloom filter (Step 9). The processes in set Q1,02 scan sales in parallel (Step 18), and then use the Bloom filter to discard rows from sales (Step 16) before sending them on to set Q1,03 using hybrid hash distribution (Step 15). The processes in set Q1,03 hash join the times rows to the filtered sales rows (Step 8). The processes in set Q1,01 scan products (Step 7), and then send the rows to Q1,03 (Step 5). Finally, the processes in Q1,03 join the products rows to the rows generated by the previous hash join (Step 3).

The figure tgsql_vm_082.png illustrates this basic process.
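
To gauge how much data the Bloom filter eliminated in this scenario, you could query V$PQ_TQSTAT from the same session immediately after the statement completes, because the view shows statistics only for parallel statements run in the current session. The following sketch simply lists the row and byte counts for each table queue; comparing the producer row counts for :TQ10002 with the total number of rows in sales indicates approximately how many rows the filter discarded.

SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type DESC, process;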

9.4.2 Partition-Wise Joins

A partition-wise join is a join optimization that divides a large join of two tables, one of which must be partitioned on the join key, into several smaller joins. Partition-wise joins are either of the following:

  • Full partition-wise join

    Both tables must be equipartitioned on their join keys, or use reference partitioning (that is, be related by referential constraints). The database divides a large join into smaller joins between two partitions from the two joined tables.

  • Partial partition-wise join

    Only one table is partitioned on the join key. The other table may or may not be partitioned.


See Also:

Oracle Database VLDB and Partitioning Guide explains partition-wise joins in detail

9.4.2.1 Purpose of Partition-Wise Joins

Partition-wise joins reduce query response time by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. This technique significantly reduces response time and improves the use of CPU and memory. In Oracle Real Application Clusters (Oracle RAC) environments, partition-wise joins also avoid or at least limit the data traffic over the interconnect, which is the key to achieving good scalability for massive join operations.

9.4.2.2 How Partition-Wise Joins Work

When the database serially joins two partitioned tables without using a partition-wise join, a single server process performs the join, as shown in Figure 9-6. In this example, the join is not partition-wise because the server process joins every partition of table t1 to every partition of table t2.

Figure 9-6 Join That Is Not Partition-Wise


9.4.2.2.1 How a Full Partition-Wise Join Works

Figure 9-7 shows a full partition-wise join performed in parallel (it can also be performed in serial). In this case, the granule of parallelism is a partition. Each parallel execution server joins the partitions in pairs. For example, the first parallel execution server joins the first partition of t1 to the first partition of t2. The parallel execution coordinator then assembles the result.

Figure 9-7 Full Partition-Wise Join in Parallel


A full partition-wise join can also join partitions to subpartitions, which is useful when the tables use different partitioning methods. For example, customers is partitioned by hash, but sales is partitioned by range. If you subpartition sales by hash, then the database can perform a full partition-wise join between the hash partitions of customers and the hash subpartitions of sales.
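
The following DDL sketches that idea. The table names, columns, and partition counts are illustrative assumptions, not the definitions of the sh sample schema; the point is only that the hash partitioning of one table matches the hash subpartitioning of the other on the join key.

CREATE TABLE customers_pwj (
  cust_id    NUMBER PRIMARY KEY,
  cust_name  VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 16;

CREATE TABLE sales_pwj (
  cust_id    NUMBER NOT NULL,
  time_id    DATE   NOT NULL,
  amount     NUMBER
)
PARTITION BY RANGE (time_id)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 16
( PARTITION sales_h1_2014 VALUES LESS THAN (DATE '2014-07-01'),
  PARTITION sales_h2_2014 VALUES LESS THAN (DATE '2015-01-01') );

-- Because both tables hash on cust_id into the same number of
-- (sub)partitions, a parallel join on cust_id is a candidate for a
-- full partition-wise join between partitions and subpartitions.
SELECT /*+ PARALLEL(16) */ c.cust_name, SUM(s.amount)
FROM   sales_pwj s, customers_pwj c
WHERE  s.cust_id = c.cust_id
GROUP  BY c.cust_name;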

In the execution plan, the presence of a partition operation before the join signals the presence of a full partition-wise join, as in the following snippet:

|   8 |         PX PARTITION HASH ALL|
|*  9 |          HASH JOIN           |

See Also:

Oracle Database VLDB and Partitioning Guide explains full partition-wise joins in detail, and includes several examples

9.4.2.2.2 How a Partial Partition-Wise Join Works

In contrast, the example in Figure 9-8 shows a partial partition-wise join between t1, which is partitioned, and t2, which is not partitioned. Partial partition-wise joins, unlike their full partition-wise counterparts, must execute in parallel.

Because t2 is not partitioned, a set of parallel execution servers must generate partitions from t2 as needed. A different set of parallel execution servers then joins the t1 partitions to the dynamically generated partitions. The parallel execution coordinator assembles the result.

Figure 9-8 Partial Partition-Wise Join


In the execution plan, the operation PX SEND PARTITION (KEY) signals a partial partition-wise join, as in the following snippet:

|  11 |            PX SEND PARTITION (KEY)    |

See Also:

Oracle Database VLDB and Partitioning Guide explains partial partition-wise joins in detail, and includes several examples

Illustrator CS4 / converted from application/postscript to application/vnd.adobe.illustrator saved xmp.iid:9D0DCFC4C129E1118F64DCD87E7972B7 2011-12-18T14:36:15-08:00 Adobe Illustrator CS4 / converted from application/postscript to application/vnd.adobe.illustrator converted from application/postscript to application/vnd.adobe.illustrator saved xmp.iid:A10DCFC4C129E1118F64DCD87E7972B7 2011-12-18T15:02:57-08:00 Adobe Illustrator CS4 / converted from application/postscript to application/vnd.adobe.illustrator saved xmp.iid:FD6D4787FF38E2118F01A651C7FA23A0 2012-11-27T18:01:31-08:00 Adobe Illustrator CS4 / converted from application/postscript to application/vnd.adobe.illustrator converted from application/postscript to application/vnd.adobe.illustrator saved xmp.iid:768C7A2ED239E211BC18F71CC51051D2 2012-11-28T20:54:34-08:00 Adobe Illustrator CS4 / uuid:234df8fb-39d2-11e2-9ce6-b46a4caefbe3 adobe:docid:illustrator:234df8fa-39d2-11e2-9ce6-b46a4caefbe3 xmp.did:56E0F3695328E1119A13E802F162621B 1 485 549 3 1 72/1 72/1 2 8 8 8  '1PLTEGifńJOT嶸-15ӣĭe*RyYcnc{ƣx8cuwy淽Sqydgjܻ㝣s/[lrvڜݖ⎓xjwt||cktӘYtSZbk|{̊Hs[|ʚƧqLmɕ\ͫޭ̫[]t"&*o}v`nOuׂ׆뒶~ۢMq낡y⊰ВӆϬABC~}:::^_`nnpװ󡱸pӾϑQ {ZIDATx} \SW>ZB Iy q B0J4,DVkAPH*Q):(-tCf2 Er}INB ; 9\ZvlfOX~Ҟ=aI{6B.>XWg:k=c~0(9^(>s522(hǎ^^gz!0t^u's󱳻xEw߅@H31| ',vvv)Ϝ*T\{%1YmXW&Xǃ<_CBuO}qg.ݛ"n XwQz eu>- mN-T!5zx9t9sE|)zdX0*EsAi'/ ḺͫPRډ*aF(Ub5\#w΃m drTNV. L>amg1O&#ĭ,C]v{1t sqҮZë`HA|ώOXڸp_W y ^Aプ򂞤s5:TpMcW\ <%?ӰvAAƼ4Q;Ɲw@ "oփ"p#"=aж:)݈H3]> l|u #ȯE?=tۓVq#pj.Bs~\4gg/.eБ'I3` +~²Kn-0gh<*51@D[\zg&n6ڼTK][_<1!Oma &|',UdyG7lMM0*ΆLj<յUV o vH/zRw|v[C{vvގ;7c]D&7[ ͓Sܯws{ІcfBs>_cy. 4e0wG$<ɃL^B6Hi!gtؒ'\{BrBQwxIn 19I"4_e.7Sܫh 2;tA7ꆁRg0.Ҩ[h`XO8{tWHmjzx7Se Bxi ؿ`D0nHl A=74{~Woi5rRq*˹|G=c1u~M7,'T=:bH|lR5`8m7 JzrmO} i#0yxjG zȽnL7=$Y>1ohGׯʄwRԛƫ k'pLUTJ/]ai')ɬZ+F>FQ#OߍvXI]W.+=+y+Jx!\7Gbj.N}%r+Swrn]uN{BR,Oibv΅Bխ*юSדʔ*>f_%zLTPu"TdZ[n YvǗ :b5kT5*.Uh_ RbSǭD2XiGi]Xm7oqVp)Xx!wɾt+#~tKԻtj5 OfO&F|䰔 ҝZ0,fh2w iצpx;p='FnW-|k43W9FVdz݃n 7.rt\ӸO{x)SD@x<CXPE}XUHUy%HHhZT+xoaXZsoN't)D֥֨ߋvrp R&;5k:,կEŇRr8BT{ef9Uݰz^ʤݗ]W,?[ЬiwPy^:Oj -OG'q 4\+xWxULHD#a(Ey.^WzKnmŶ8 rH' re\b-qvR2Hފ:aq7. fyR4*]PC3EjkwlU̝H TQFDd\!'_Uk!TF33>L3yk t""4DzXbn-OA#̓#UI!Au_琪C 6̐])C\zGtk蘭Ʒ4#b9-+2U&B*^HINWAY NMI0BagzNwH)ʾmfq]rsp4WJ~S?%74H>ns+s T5ͻ*BrMYZT QEA\2D2e)5}GCrUUۍZcRL&+a"$sT / ;NSq#8p_FiA3ò$;wyW2ٓ[4k1[>^5B#0x-pG=qBPdեitts+?-fWУ2$z>RxΧXFHd{|/*dU (r`Sjo~wV9;'Kn(~WşE=*}~D.UEVH)v1mR7^:\jxB :8H,wo4lEM mP9HqqI*upUH a??[苞Ow52Is)QO=3^hӭ޸yMrܭH38`\pxtkiTsb$s.%&3)[H ^ }kӘ,3\ۻ}[vHfzcUհE3<t~Bc֡nrwBP0CޛGXTtΏ lmeu#~U~~1O&ᡲ񜓷v(f%Xu"L3~UU䀲LP̴sN[ 1yS+_"VZE쓌:%ѕJx5b\eH6|{UU/LkΙ mNox˷tV&qyS+MIKz*:7¢M8&;bHJEc:N>-*7Jda&nLq)sLi_O~rf74$ҭaaqҦ݃تr[E MTg@Z!JeM-u,imH+"7Gp˔sI0:n27xN52As1ۢchoQ*XYi9N}t.`({(7-KtBǪAaq5zR6pH&+g_Y+7 Ūcy=*}rúoa$o` '<({X1?'U8옓Y'8=b۪kF1(RE}\U? P,Wgj5 xC\phH6L&(Wab@+ @y ` 9R2 E 2h`~grC5@Y1!G5%K\~Y{.\e9s逹`}y|Rɘj繿'_ Ȩd2:.͊WztQC@ݻB9B9# `D% x?y.)=6;˾A.79E%$$ & 6do12&)ɄLo%%>H ~uVyg@ #&d,l'cLz_)ҰcHPb2vAzWQ&"GQ,?y<$F9mڴ_eN6Ȥ;ih{T{R ,{G1˜ ȕ`(S%pS6 |1 ƪ~w@''޲oo ,K;_=,0g6,ewzZ&ИrRS?P>A40(U¥6w-choW_x;(5;1'֭qr05֭Cˬ0N`dr<4v~$NN cd4H7G-I,6Q5 E)ǘrƌuԬ'=sr٩YMK:#Z>@܌X\!!-orCuQ }S <2:z(2N˦ܸx8~483;Ss`ޖ5v{//Ƕ=)1f6`qwbAaA&2+={|euF{Qe!.2ػ8JLҒѿ<[LZ"1+Ճ45%=ecS~=Dˌ%]@XN |%@HQҷ1θd9$Lr N6I}P `-{lF$LF}ks7:L_H+y|'r_.Xֲai )ٕ}\ugjbbNNO^3@c]N|:(qA)2 :-9z,7 |g{idJNZAUcXkvb缕o,_>tre<걭6.GˀRdf#0 W.g9Fqx@M>mJ krc,4.3 9DcMųQ)-2+ǦO_?7.һ"q@9Fs# /=\\&m城Z׮][ZmMG윕 ȏMh=dJ9_ YҲD*Q'<9;}  $.Zv .$.cM2_-/;{ۄuH 8(.QZv凯e@)O6+fZ72%)c2_0aA~=5P p<;2Zjt=O/(e[~|ɬzzݺu,-S*2K0#yڵge5Mbo-:M \`Nd7jO HkCQI ,W2 GS96AoH6kNo$*eڀ.Qُ6 ^L}\DŽ?RWc-5+uؔ2r%hA.a>.7Xg90{96I-O1, =_1]\I9O/&}%Z^9b3Zъ$;Ξ7i7ح)6{=SGl-1İ7ee`KfN%.8

ЮIf..fu,o8PZh9'{Q\~ P\!.?q!"jF%݉Yy=&vam٣ qyy2F*mo Գp+[?_^~פXmmc12xckѿܤWOy1ˍ&êHLSF?Uv/XA.jl_J& t#iİV8ݲ*/ɿurkܚ쳇?f¦Circhh[c&V}GncvR%rc&yrdOJ66z^nr\΢6'br?xY 8+(i\8 (٭-e榬 ,ELy/kwksHM,5rW\q/jL!y]1%m5.t(X>&79$g4hJ&rصMݧLŵ-AbA#)\-o7$6Ɛc<^ޖܞ\nmn YYt >u˝nr_ wMؼ c5 8950Lj}oT%Ҁb$Vjx9d)g'<q1~I/ҧ^'6vrMa%O= 3 _H얖3b7^ HiKښ\p{|(s4%N :5]?na9񖃫[[p;Y쥯? ͎G+Ww|آM$GbpXsI/|OXp'sVE1 \;NY{ص2zJO''geKi˔kƲe.+$dGl5~DŽ0k׮ߧ;ߟevKgXYYrpKc1/ϡAS) Oe4O25݆82cpZ,,4`y,y%6KEnQQv~:أ3ǧ$'(r1 }_?PpVAİ#F `X_)7m 2yꥯκ<Պ nJ.*4H#P0M }_M 6beÜ_F^eՅϜI@k@&4^e m=vǎ1D!Q*䉲BY][;s&eeeO23hoh>A^_ ~`mȋFrfuSeX00*7lʿ e;DK7NY˳(h؀yZrfma ,{2@kaIhfxD&Āey#Ɉ8 4쁒|~&e@9 _an,C0{@ W1$uYO ܊"< |uĂуd97fygk%+$6p#,/6Jk&N:F \V͘ &9x͊}hD=@&ĀX6yQZY&ⲹcr*E'+Xp`ِ\fk ~یܢeS9 r؈^^yxG־W9OO;Ć ^BWr0;kz1`aa shP+eGd 2qY.2-%-Zs"N@Y"E_Ч8/M#n-2C1+jRCOB_R25fjǰ@roZ6yeZˬ5T#dd~D!eL!Nlr$WW(@#n̢A@b*e4HG%NSw8YN8,ᬑ.* 7ۀ53C q`8q^'NԜ_5uٳ5$8UX} 4g0`%r&i崩d_2ټEJ&zG~ #q3NlvQC1T€ĨH;{,x LHlmsOH&zk"`EgۙG @_? 2X d z+zfAbKRc(E rz(;E_Xe#c6[5EaC juu5% P {Cl0[K_FZ|O$6-V F9Š;K?ʳ`K$6c W_ ct^U埭tRp Gv d06e $+#0AaᴺZdլYKTp{Ҹr7SRwXMԅj᫻C< &҆8 И`zІLF2/(K#ؽH%DR} $ jBdh w&^|=GWOrpdff"`EнY~!{Br0@n@n(g(SR$`SȺ,XLEGGOffy<,^> Ɖ7q}k2~yM\}?6iUP |}CC]ߺP|kckNU;:"~m|& D$4,c DLe2g׮MܸIc CחMjP9FZ񎐄-3u23ȶKwG2R1%ck|4nܤGH̀nb o5Rj9A:N_yrQJ'6#4(l4 O>ʃk##?7H_(?{)nXEE|=? |6qL C5Z ]S52$8F!2?`*AM|Yk@>Ǽמ2uqXS㯷L!>H/|Sc"t x1MG|`7pXf͛Mfy+q!Tԁԛn# PkhrX݅ S܁PHBj_&T#yp|TGJ v6^:2ee 6"6@=iGj& ,On^kCG<Q\!THТ&#v8^Uj=ŭկ-?Q ]7b-f5ee#x޾@@GS& XFj CThGF0YBֲ#G"%;tj{ޚ=ݜL i"(Y0mmt$돇#5-H$A 86$wQ/!>23)Ȫ_9g2zQ XXC(6a]KX>[ۥq@eVP7f1ЕcA ?gy[H53h6A=ߣH6Hy@FZK2ENh}4: R !UW.5qV ҩqH:ޡZ\(8**γUHxGkYqM5gj YS'v kr' -En6bfO'owq2g[`9mL\L|)--ݽFA|L|UZs:wwٳg*/G]=;#FD3:7nQu8QGmgVph0jwH=Ycҍyd(H0HwI H O(MF|`D1[cT6LrW#ޮ =ڊ\kƉ}z:ba܌:wd\=k1JNEu^>u}3t (Qjpl53T~]vCvqi Ҧ]3+ؔg,I,@Cؒm(S4WA@:3GۧP/+},SAaH^ƫKGE}Gr"^Zaf:Ei!R0y7$u`&%Et8MvF ccsMm @I~Se5݋M&}!981">_[r^/ܹ)vT,;c{ wG2z168Z޻5=hE}9u;ɟ2ٻzɑ"fe/{&Y'Na&tsM_]68e"''2% wuz02pd"`%IB%t''@Ɏ+%4c2zey~O_MXyPZWwn0\һNS q)뮠߂{vԲ N)j{CO!DrpyhE{Mlr1%YSOF'w3'W[ogw7+7v2ǁH0qNר?˯ag2!,LvJ~$,dOp.$Y4z"`e},*=S6676BvtI088}u: ,`vHhhW(p =L\X2j,r! ҇6_.B&ETRӽe'8YT2S[20PKջLLݕ(Sq8d(Xiffy,dEZG VWffnMkFX?>MeҌLѱzu2CB O\32vfMȪiSV8B`j) yg{84Ù+0Ify6WJd*􉬯τX_9ZޒL+K|0Lb=F#/d_k{IZ9VvCJ(TQALS?*;:pƲV`'fY;Yt:܉b3t2oNXz&&?JրebZ2@Ke%CdP-[BR'ӉZovq*{WS,7)T)rVv, A +2tŒC!M7xw="  _γE2_PըsB%j7C^cT4 &&S/+깜~W5!|@ 5Z@%U28dWa"dv"fY YBgjnSgmP,kxR _1]_p>Xs^A68<!ܠ 48B h,pf X= w 9^&.\>Wd-怛)ݩ 3:@,p&[2Bg:g89G]# `.c yGV!;>XބWuYI^)X㕗)M+?Ef @eF $4q382)gij'SJZz9PbGx/_3EPD j ݻF)Cz1R$/@rwy-f2_!L@:plz&0Wj+"|R \"Nm=~26LDֳ|6LFn5xO=(d 'Q u)TPifU0XK{zJl8>dh"5k\tAH):Vqh/"0)>§8م fsDrݙ W5E_dZF%ڭ_:4^$&C*cev )Wpu G@,S5R֬<$fBvOЊLҭՖ B0".dz8K01?j}Luz-S_b ^_QSOSXrwcժRj&bٰXկH.Y3y"Py3a(擶}ݟYl"–UĎT\᩼ukUBLVx`:{ 5MI"|$+S K0Hv5ZP5͎.+D 1V|oBQ5`BTs2WN#x擩}l8ԇEłW|n9f{E!)\rDn^^."7SNk(JfGx ZE=A ߸dŅga+'i23ùytYڍ{e˫ oJeoܓ wi%&L9} 8X?dHtkG={wW|()[Z.04Y2Sgm`=f.sdvG:mt2uV}nm2,VV ,+nU0!erLL"#IG^}Yc"`R'w+ UzVsr@3 2҇rڙfWx&_O8pʄ.^Eh>p4ՆuEbsrLm:؂4ʋv}.^ _ȓuD`iDz>pVkòk!{u<ӦC Π ŋל,y<׬M_vEp3`A v#q6Y8#uWmOM&aLmpw$;[gD? C Rzʔm8.Xz`Kb1H?N{,O#} c O 'ݫ{۸eߧ ,/uY+|ϔ+[&-ؙ=Op>Gؽe诅e29b|S^ᘲ 0vtSsn0n|!vOpꗱQh^lED&_G`I&)8?2e Ct.6:iv"` ݐi鲙ebB`i7&Lڬ,cD5%2 =Wj ǕĉbBT&͜XZ*?,o:A>ŹLen2B/QvAsl^Uj5'yN`zicߣ,-Qp$ʳZӐl`yJ+ egae'=bƭs~_/dP% V LݖZ~>HXΠ Uzd\Oq 'w: 7;[ڵO%}%,sV*H J{k{O_[ `24N2! 
tTHvee'3 WoZ5\oӎYNcۧ)NhkK2re)돊\yC1ou8=XNY6^|3φ\[vٚ첏StEuTDfIǀ@"y%sg|௩rr,e;-WwAUQX} #In7 k; խrfYNے  4;> 2Y5CJ@$Ry@"f< MR8r"9<5Aߡ_ 6{@ѽ,8T0  j7 gGT3ax(,o5iR '=϶*WnW օj9ru rwc-Rw29mr9Sx,osU` OSK$lMV\1H;릾Yx"j= 2=^Wؽv ^ ϸPe S"524@'Zsof89-2=^ypRު2%h2Ǖ!A_^ U( QQ]D~r$U-ʪ;+qqksUsk6Z}--,p6wӀ/VżfN&alpk_ E9BTH%sɡ uӏuⶂ6W?8q:ΘoIL'ӽ~܄@,T P(2NL@pr$IdƗeyHwpG[ HrN ߥnzZYԱmicv"UZz4]Qm߶X5ҪQcu+B62]@de)gϞԎ6N{肜iL9@BX~;öszUYc Fw1ˍj r w@#PʥFZvrr,JOtr|@v}ahS<1$kC]F4#X> 5(eiq ,o d."bݬRߞ8}l8QQQrвx%KTBwkH1G0"X(;=gfɟ`&tYdռ} $*M|tkB+N,-G+d _ޫG6Xs+J0 &1i] ?Uc!2#NX$gl 9rjV/RF)XnޥY#FPɏbk`gbcG`GObπb G:G%U .Gm(ؤ @|)^tMjQ4CQ$&;٬`I?A{?)N^]dF_hHN|Wҵ:9X~sb5r3 }a75ܫ Zo8o :-c/t!q4BZEG!5Z쵵񰖿9=cƌ2X8n~$]>2i[%hmm2K Zy4RLQ& jYU H c3#-"vS\\F Oq@ 01IP`QtV 1Hбe'2< =ƅֱL>:}Y7dL*Jd!^](7Ŋ5bjtޑe3fb;z/u?GdbXȆaM鬑鐇wIB |ob:}Ո|>Ӡ"t2$9?^E;7X~J_ %M:-N,T@ ӌ3.DŽ/pbouB57TSДx 7\d_ #\x,8}XS(mkǝswqOOT8U] km"FĞQAJ>Xþ ޸nSJlQͦMƝ9} wݾPAu' NN{3HO+;+͜✵ɛ+.% Y j:Hf|'"x-T!YV_|ӽcxЌ3Gey-K&#-.LhVjQTD+5`.@/@Z,FlCPvϕ{J=Z=?X)"?s7. Y=͑@-"˰2YQo >. RYyܳc4 Øc`/R<| n.--8p`D!DL>K.Yrחt'ʪϟo .((=?* w3QݏB&ClS\j'Οرcq)5JIo4|>ޱEAb opXo$3` 3y&Xk7T0;:2T싯L&Kr2% d6+z^Dn/Wu8qGU*DOt/d0k섃$k,@>#jiXPp{F .K;&k7ʄ]ϵB%{rd8OI%F c1-T;(;|Umd,2gOi'e5!DB̲@%cd) nGHʔ"ɱn23{!Y3? yfGrg 8ö}eHL빶[3#A=j4 YxOeгZ=⛯J>]z"POzY0H (p, 34rӌ-@|%tf)17WQ)wesk~(/Khl(CMKe>'뼫HxS)w߿ffy|4n7˽30*"7)lZd=Ae#W5K|~s?Eb~u+G 'R-津$"]ܫeߣl$ JVeÏjsiNlhS͐")K8 7Nu,o'?mbVy.-y+P/0G"m-3^hHFe߯ er}Ey!q$"Pkm#~r=?~?)XpEx"_ϭGBԟ[͔0o% H>Grb?mxق!⿡mաasO[db:`kxwlq:`>: PÖ]ȁ|DDuL|RԬ@2elp\dq_t@P1he5C{ )ܐ͛ m<ysP0h# 6&XLXGffhHͩc;{(gH4wнUI 8 ^MŒpAhQt'(ـG7w`c~NjLWq%UH6B7xdkYMx9q~QSϏ!EҀ ڸMoŜM&kH,e_e_j}6 krݥjQVSD}~"pwlzVװ  fȑןL'hjz0/DIl\J B0<;E3F45?H} \T732[q<0 ~1#"_9La8X(hYAHF$ѼS"QѰqnR/z'y~OQʻ>̜30QrZ{G7 N`,tCN'bhIIWKj$\P;2$w\ـkv<0l=EX_* x{B82Zh}͆P ?Q_-#NNw_FA9#B< 8)BUfs^ յc0* C'4{2cg>q艣(욐".E2mavO*zR`KF||1~=K L{djeB}8&-46Q8Fy DL&]mmFa̫.$kz.ziS114`?]Lľy'_1i:qj`ci10(`3=i GӵGu5F[lolL?}uSjt@i5}L3q Ds uˆ0ӽh om*/k1Ybp,շICЉVt\:laDO a20O7Lr=w!& G \%o[eL S_hP8}֣c!3_-> uMf#kD#^ҏ}70i4ci Pg7ahN>A5q{K༙:& UwES3ќ¡EAH6yy}u-yy&PK=1糬F-m24 5oj2cI1'+}-՗xۆOkCtG6_v\b,5 p"ܲqp:,En̢8âj: x cg1.u=11*Yhoyrd uc7jztTk7F|±(E6_O9L P/sx}3 e-&:m( UL CMzk"F|rl(moxYd@@ML> MFqC+IfPc8#S,G@/h. pڊr ٵ61T{Kg[McO>lG`.5q FᐑAyV01_麙¾ &Ft/]11 =ZDP8}@T vMs09&[d(`c@qP{Cp̢ m6_uP;  8H1in?Q&9_fΚqGգ,E"@yr 4_=Lƙ=LeWm'O@K^m,D|N({?%7uF?M#|5q<=dՉ?dZ[}k݌0e'Ïg ]:G9.Qd@m>>x4)"%!ZXye&fZ J@~L@oXlb]>aBGs4h~9ph(o}k'ARE/?.T#@KZ·y;  Db`L|_P`!K!wP\<jSʧ0-$a?>/5~{e Dy>4_ڴn(N qϞZjU.|=pme]/FI8!Ĥ#8VwW<>|& '#92J &}D_AȜih=90dĹ~"PT]nppӌ>Nıٓ8lC`SD(VkmW(@d X6LTt<|bqyď3+7})))&g!1Qtƭp{ ML߳賸7߇[zVDL.& g2 8y95YrچT]} 0-gE F[/\|G-^OlaCȲ}E"6;eaUCcNʶm$1Fqy 2xO=5eTէ+ɳǸ5l )E1azdg1YxbPfPTQ&8y9<WV(M('jomۦ7RR6_Jp0qY#Qݤ 6w/I-}tSO8[S*[S ` D\`fBփALBf-P8.3a>/#&SFnenG)J &ݑE2پ Z>VBҭA[ؠ7 #Sįh&xù578>a5PlH$מ4D/VE>Dc2/?] 
q?!ND_*6>}j@bphqwG!htym>Y౸H8O)`PNV$Jh =;LQq 1T azY^4_o]qp|Fv֩AߡCz#B3{ҥ0/&ld򦶋8Vҍj&N=!} /8)'ey1{]A6MgV(2nنnHёG2 aqiN6E&,M4[P4JiNy'ˑd-8=a=63Jwɥ9CsǾq/kf='h6`sD#G伞}w5K̄,YDL̋ISɻ`FlP8g[&Иt\f&ݓq){v?o.ʌ*1@[$0P$>_}9YeVK0n3]LVf.qvevf0.,?a0iq);إ)iKUްa=WZs!fD| a!Ӕ_GaVL2%8'k&.h.K2n R/s&o~pgt򺸗Y3fS@"nc|HI 5CIt{h(3;~ۭk!=˞ /}S56q";:rS .zO(\S7sbjt]yCO641y3.)kZ1#L$Sr NJ)͆&+&b2!r`e0:u^`7J2 ͛OEV(^ b;=8 Ob%B|3vhu%ou3bI12a] YLfYU_LggwY}b]L#@oKVjXĂ{w [0?^09SN1@*-MvíXSe{1~EÄo6|u Sac߽[iK}>"b9dwL@wO:]L&:&N{E4qwmxq{w޻7.Ox4'=a" [|1>\Yn1Hɕ_%D{&=W/:w(h[Kvȳd/b20ݳfL: V\v1-~\`?,Xc_uJZCtsj]W#oʻ9u֏!y( +W9LAdWRϭd^ż©WW[r?B.O@L.I=N6w6L,\ɳfM:99yyi.:9ޓ3bbɳNu9yi˞gk^bԩK)&sH&z"Ma(d#:mlC \&J>9ϡϾFlGپ(ۗel_vˎ}Q/;vˎ}Q/;eGپ(ۗel_v(ۗe2^jQ˿ *))u(آ 2@9ivGұl!m%m&;#8n1Vx5Q!**5DWJ0{f `킸rt:PgXNSx<^˪daqJ5L(tT9:;#`9R2r/eZǽxUfM\JQ:%t5Zq@cǩư r^F (QnT*y:q`֦qf^:˷R+Gar\ݲUk%;~yfo(s *5Gb,Z(#tc4Ҏ_~r-X=Kd:?`):;~t/kV.\>A~+b=$_Elb0)WŸ̑l*o g#VTT呰z#٨;Q%t Qu%vf@I ߇Y;ʶ؎+(q&׌>;#j K͠}_HɩZ/F{R,'eӎ(Q/;ʏj+t55.4T Y*;lMD C%2;[DOKjL(zhi!<$ av$ Fr ;NRcGy,)肱M)9/ŹEFoGYe »2&A$S*!PQZ˘T3^ 4q(v\Q[y<^Ů ']V/ XjG!|PuKD+#ʪuLV%B.`5v5uE+'U`zƯ45\c[H֘cdgt2YOUϊcrYRoG]!"WAU{2EHح<>31L dR2"ump#`G]V\ڍb,KnY+ܫ`LnʹR c -8ӵ:.k^<);ںPM%GǩԦGQBZګB OX);J;T @o*F7H8vU՚k7K J&Tj,0e/A&jxL[ Ȫb*Fw 5Q~hQg*:_V&L w\4c;*1\cUC2?&:aJ'A]+^vX[Ѧs hUiӵ՚ AI#4x |`|v)1mqXu(?;4|Z}`<+xݮ]R."j:EZ+*GU5?eK0.CrᢨVX+kfZ0u- :YFjJ@/?K~poO 㶘/} YPY Iob;!(WŽ: s vBL>+b|kG\M+SWC */l2E-dY xV3Q~(Qnu.3 !osqhtIg(a_َ50*3eVTicfutIM91>ʎCIbeuuIiMt59KN)cv*ž$.qWwObdu$=;(^McStvw0RV@k,;YU$oztQ+ĔD= Z{I2 V4bAD6]E'vj[pj>(7pZC>$Bs j9K/[0O6t>6 c(v m y&H-;z Cy,*Yث&X89tӺ{qP#QQ 8x`,pTIA50Uv-8BcrKQLNK{PQe: C'gРIqЙ0~Z>3ANZ$=BGtPCdÆ8ƿ$ȸ Fi4fV$43?M𨠌Z\`bAL"&AJ )̍( x bM/gD\ ;EFwp23}4Pf*+N7%]r(((P1RSI2P*T֔7ʕ0N*?...]. s0!jъtWUC): r'$ +#%QL!YL*骥LLH\q.Vje5iZ?!59+rE ᲄ2Й`lG-W-8*NV,mJ.L-S9*$ͱj[qU<-?ip!O%fzm@Ur%8OqjJ4BI\cw!rVp(դiU)JeRdjP U%C(rD &Skd _ y~^M xjHa(Rz]0~7 9V(}cxbmPO 2 ȁ>#K(d{*yPN!㥦17`+q$|X>ʈʖr3Ir^&vJ&i*Er2 ˨ GR^W*!YejIS: gpB<}q*,ׅQu阣#@Y d2!&ʤmŃDY E'p`h J Y \5" {Yaem&&s"eo%Հoͼ^;}(gɵpf7x|]vGozz.49pTMD*̄UB ˯(wʕZLǑd퐩:$f{!ȄG21Z+i<6AP+ GUJіoG`rS|RR'pZ g2VkT.Zq~ƪT˵,C*dj WSh4uQ@=vs ʀgJ)}:!!Y>*e,[ L??YPZ'Ðp4 ~[Q[ jW, /k8JSa ;H:Rȥ,h2 @ds 4 X2CQ5]V誗IAZiW:XR$:`}*Ւ d8߀d Ô7:f,  GطSjUN &t*O8N)u'O)ejDіK$RElcYk+He:bTR"/+#p.H7hx*9nI.)ud< AN I -tl0> Yh,Cq;^lժTc TR“KhX|W`qup-WTr/zj^,yP%qU5 .GcR APavo /jMAdLyWl? 2UtnŇp.]L*(!=s-?lEFA>HR^#傒(DY}8#" 2.Lr^z&=ШHGX9b@ ÊDPGF;zPd%o aZa dkPI#|%w ب"(HGBD;?XFU-i\'QQ">$kXDXFvvH^VQoY P:1T!bڧ1" 6}ېK([ζ;cB蔵GҴ(k-gE-vbŬôouVEΗ_@xK\/AQYKK?IJ~z2hA#r߬Na,TaԂ*w^{I.9,y]_FVYX!h12qs95U*_9N{dP#{`59>>Iv[v8ܨj`ys#WAd,KF]dP5k7aؾG YݲmXq#tGlO :{4ũKu !,ϸĔl.YQyXqɆG e`Ztw.-?E ȊH2YN]a"T`&K, Jn%%kH%JoJR59D^쁫Ud9vl\K,0撜HUeƆȜ%L%Ep (ߎ"mUdw : ;ae\zu$XÄ5.&8$%CnU6b' "1{& 903t'Ht]9,1@?Z(Q4w*Wg*{I h60>Z(W `(,sOOq4*h0vj+*,M`Vdqp0~ij%u'( mнTa~ 94~oV @J A O}dPWǶz} SҨzKMu7x.%&lhj =%1IUQdGQdpGGQ6Q& T0; D֦(Fe*]e #BxǂWCrfRRF "X,P,V3Q&"UT v/Q QA#+h;#$Ɇ7~TYV<\ Suo,/JiqV5PZ M=d-OxB?-[Ã9휥 Α4  Y\ OVXՐRX|~9VR:X'-S K55 0Bjb|csgr|t 尤qq]%q^k21N5x"8ju/|brB;{<纷,ij KIڰ$V6k+"`QAC2*h%iJ:`fDVxڰ?I|<]3U~6zPC{_ =?LW+!?ԱZ31F `+ xz})wɤc` `ű,\U4\)H᭭`O vhxDhoNtWNA~ 2n\$(jϯ{s=hxu(ןW*52HFNetLM(eVUk"O*Tt~Y]Z'P569GR.-Р%*|NөױQi[,K$ynW6KS!A2"XA#AdpSk!v׎xP'Ć e/i8R4rLV!ɕm*T,U8.vA:Lƫ۵ Oo:R[] KJI զU|MQ[ r8)D$ 1*hdmd r d\n*AFT-׶4 ̑j5TU0lAT(y?xuV$'CËj"|}My} Cr_Lž.ɂ{NrHPPT/D/50Kus'JOF}(A,F"@E}E?.j$1 ܽ ]Ho ,/!. 
%dpׁݔf4@yW]eՇ[_ േ3k0Q.22L Ɲ(,@XCi s/E Q̐E4 @gp2V  0xU3h 8rޅ" > V%BNJX}c(Ö drgX~K qU]lxysoRSڇ%{bT|b2|yuyl!fT*=U`^;BС sbG,ʨx5ƴzX+h&n,3# m> 9c _>żT]HZU?ô(\ko> (×X.\Kf%lk K-_UeLC&K39 ż d9~Gao\o& jL+KzxmڋB6N'K+j~|x.VI!I@EҘ@sdok_W$U8%S5RPdEE7I͡׸:ĻK KhK}+,ݡR^9'utvF;;rrZt!!_.7OːN@I҅F5R#o9)!!a|Xuoy"$ob"t;&^w]wR u7tQvi(7<%H冑}3a((7< L GHDkiW5y(}Gp^J}GyCa}ZJrՈC~fZOR%7ފ:g۶ś6mQTTހs:M(ϟ_Xyԫ ƟlQ?bc[㝣z.ހKd&=owM(C.##O_@H||CʍC89t$yySHj$5iX//ӧ׮=z~ϘU Κ,br3/ LL>pw/DD CqcۨZOx΁ l ؿQ0Bc-h&>yWGF^=lJya5>j_vL.jP\΁<֒L⻐K=g<|YW^/,>a >FD^iNNMpv{Rk~'rBl"wfPWK#DFLK˙Fl t_^t"L{aKR94Ôʆg'9k^~}aD_U5aޚ[nVziȤ(,3,{⩧{gtYUegݺup;&V1(ΙD feDqwmZZZ6? ?(%_(O `_/GȲfpxcI!C5(y ǥ,sg|'22B2Y tvv2rPnoڛwQxz2!!%tihBJ,2ܿ~~1~I`J(l/q9.](,{ϡ+,/Gǒ,?~^!y#5yyyޜ׮}ݿ dHJ&ѭe,s#>>;X^(t&2;bH\zZ#YFLL4F1D0,(c-΍o&#!|Q/x`_3jīw1<1˰08*^=㴹ϾY,_Gkcs.( :N9i|e,eY$ғ\& &earcdNNb믿)&"M?_ęgr. 1HdޝU$GkEfeRFke? PGQy{wv$_dsϽ:)YK Cщ~1kcp.C]&yryWgggHXs֊+^}ՊsYLl 9"S:R/ϛȲ\&O?+`e.x|'a2erss=)< ffU ˬ -8i9sys 3h.?xլ,Qf]ӰCqM9BH q.j\4Oqu}nū&ru.C1h8+QDlV{ew%W-ː"ʑ9IYjlb/:Hg}YR"2kasr)Y~vq>u _;ϭ vN̋8ߙXcY9tq|3,9;BJ1y?eq^h({[=RL^2t%S(IK!S's5}B.d]͔WA™7 :,pFz׾8;"=ʣ^x&:; O;IQWBC|Ñ?%n7_+w$%dW+".]nSxA@G q8ܧ7_ʏ}*By1!렘| Prp=U|Б#)>t#G ߩSzP&|`X wse"zxA/[n.&$e3Dw%!ߑ P΂(S8û(`lx6 ʛHY~';7lG,",C)Yi,˴/r42ˊqz'*=HPVDL. Ww|zxm8SrʥCLH eBJkc_mĦ &\̇LA+DByvd""G{X3a2_AGy2!%he+N d̢<_ \nz(6b=f:G 9g`@e mlʰ~(@ډGK#U]QalhXS=- bஷDCXh=@f,DJt @Fy'0M'e4@y=!$乜Pd9xeY>Jhl4`N#pܯo)i,({[P{K{Ἄe5x1+θp/5_ Wp!ut{w.6er/-~){ip7XEvu\ ruϠkp 0 4{&ZPf9k58+A_ MκQYǎ^?hpgx.k$1ujp9r 8VVy}{9|]q8q{{qƹ(@ivl*A8dȉcGO 8 G0?[ T6U~z|DӃUx{,Q ]-M?|]iuEh"t+G1Gȅb}O"mwma6銭<^۳q''8 Mӧ_+[7gJ0ULıRlRLE)$#q[A]f06 {||}wzEzܹsaK)FHqC/naלو HOCISSNj^FLOݛw83af7hhcqwId>J'Nvp'`=K)-, p E+Yi;pcG `4_R&Yی`W*xGuf SUVbfqw͌#kO<"aLet t}F62ekp 23vXd&t0g18růo-FI甂7aN! VGE$ʧ6D,/nQzqqQg ̝K%;@}<3%Jd~8$tߤgpy} WFNdl6%N@O2^d-&\8l$ \%P .O޵، UϘgyD|Q>s`c#*QdPj E+qྑo ٌjxV\]t@DAAA|Iť;O#Kwn\( דɓǠRX2_\`ߨ]خ]p8:W,x<]~x,sQU{4=8#Xe:$Ţ0572i!M9}#eL𘯭rr,k+ur5X~B9 y6#ifV-eZd#z/=8pYTȤZyB.`eO?0莀Q@4__#Л{M8 J 5ʒSgڈ`vfQ Nʌz.cO ,Tw g**F,w%8<8TxzDPm)E:&? P&đriʇeT z$XF"H ՕB\BI{0h(o)I \BqLeZ;yʪP VJ:i]~F#0pG@vtI̶p+i2ɕJ Og8~ZePzYgΨ?D>d یn#^skjz +xRYWVj{ ȸwїّkֈ@jG@3_M|-c/ըح< RJ%ϯVc$֮ !ʿYjG@8Ë\&WR"PTJeJ^c|Mj9(<W7_/]UDbP xURZRRZ#q4"̉G؈z.!8p$S f8_cG `dhGðdo g_7eF <;[L5nLw'`hY#Q\1|e~s/|dmlB]({U.2:2a] ;_{ǞΨ\hyJB=ǯ =K9'Os50=*S$ FA(([,XRBSnONn@/:::1ʙ<bUL=e]RwPa>Gkp\vv#NI39 (;p"٦ ʍWJW`M(A`gDWGinZ,MnNꖶ?2+0ѱBA"*NL2z8zt!s Ay:1}W{٨i J}ss[1$&Tt0Kvv]MnJbs+UHB}&\ t#BAJPVj͠ePFpZB\P^4q@12KMYYĹׂT*)>' (r\&Z")BͳqMܜ` M=y4 EBb#<]QӉ6\Q,5lXo&"b^R!T/neEJu{~p#b|9=,;{ӧ% )4E(QF r& \ɠ<ŜDm|2|$, 唁k[ZU$!GQ4(;4V43n:m-.>Ij& $E# z8;dm)As}z A7c~V{nTDAZ0Hnba85@~V/R˨ }-DQr"B| jNjKâ{Z'VwЫի0l60#)Pf5er(]ۛjQ2cyB+iľY )I0RGr:O_Co')M5e;78-DD'6i QDru~tJcG5@"^j`Ӻ^4ֳ?R%YKzxŐٕx+v\F$ɾ" X;^Q[x%sZÀ^_86ʚ\..N"څ^5Lp2cG>>?I 4H:TV;6 eN'"!<暬GO+}.Clbɋ5eK.H ~60jM`E'@2ؑ &EǓ-5վy$<xdFgMM"EN BoQ+2|;g d( s( ቗ҘZ dϺ =Z 9M6 D9e-p7!pi*DpAm\0Pj(ݩĤs׽ATL9wt||(:$up`> sy|~&0N9gPeVr- _M!{ha'/ft& ʐ} [D&)񼓋r;6W-@2TBfbb!FfY. 
% IGCCGE_+J*|WCF x:\`Ν$j͂(G8@9钒|C@y/(ax]Q<QI =UHU@n(@z0b" )]a`>WSIbz?1|䵔Ĭx4&NֻH*UGWlYyڼ>}W9"cJB uUz>)okedG<T= Ӥv+gKK:=\W 嶕ՊuzܢO+1ta!.D3Cn%ccR-19Gt`tgwH𫛰j:*0>t?q&c\PPZ0 -0`NԠ+rNLav#/$ t` 0$natӥt…~5vlj gC[Z3V ҜvA 9#"Jv%.ĕܜsáq0HC/"NSQbQZUўDBI 1K  Ncϵ.[7Fpp2(g/z-n+|rBs9;vid(;3Cq0^sQ+H*J7 #UVA粟iQ/F"9 H*78XrG{{wO[a"+"#`bj&[`'Do2j؁1968$hCtjjqAڀ/dQ.F!De7|mlZO(29v"fu kF(䨙`ֹF:^t:MEFGCEVT`MHdBAw;}g+ 5G_gBB -1E)'êei^YJ(&%eP15?'^;9}G:a4M@46P'\P^ U ƨDUaarMdE"Do(<\ycPKt "oI0X×?@ ݧׁ;?J&`oISC|譣n;p`Z.,`"nY7^ E!guէHAй\~+hޅ]W`wt`u>2ٻnSY 5klriUW"GՄO@[#L\wBنQƝ}igIrTXC,\H(_ƶw/TLXtntw%$n8"t !-Q>b9tc XC,,oG_`J2`I7SM $y@"f̿Tx;'k, qoi[?94ZI1d(j|:#uu {ًsh8g#fF9AT|zBXX(_#tukds %Gн퉷~ljˢ"#2EyNM熲|ˢjZ:f6vew yMP'^)Y%"â> 19"dHc X$ muuYR}}z. ܥ}:.sguhA.^xqjml!2/KxB$ K@;|::۟<@XCh(![sO-D}%n^5Au7KJqB5 E4 j=1,{}OΕT?E +$G 7`DXB l q Ϫ\.fs&!a!Piz^CjF4եx~ml=]Q;}mtu*Bo!p.Ct_WC_O-?}mcާt Q~*?@P}a^ol>=]jdrhd3tu},]]2P=PCϤK)yV\=]7ij<\+>?_J`g^ 彟?->-w5A.-d/ⲇLZWC?-N3Bp\[tq_}.BZN~Fr;/טmmdry;ynuPAꗌY]>w؁;g1pI_yn9ln!n}U di!=ĢvƼ5Jd&(Sr)L9"ids ?c l G7w 2Yc^[}ކFRVRGFm>&:o(_WY)NBlc k Kdmc 34y66 SDZ[ߧc7Ě3 yn^O. RBlG_Yq1Lk]h^_ .4ȾKV}.X)NB@0lslL'_'Cf~ofjAVcDyWЛqԾse-N66!:,{G!vzz,pW-}} Nrebn C=!;< v`6xؑ6=k Q!i*ʷNʻc2YŊ;ml~Lpr*̐,;(_(^ ?kd?ǚ\B{ERh(oyp ,$jϞMq2WFyN46!5@J?! />sسLv_B YF}Os˨ʕk+`!08]*i %`oVUȠrCqml-qY_d'DpD!#A_V\ևa+@2[Na(f֣D.E)j!&s9wKxjCE|QΣާd^A ykŜg(eR*᠜. Ͽc[s|f;{O%!8AAG^*$("٦1c/M1{ bCGmX2vͅs9*OI2p.cM]J VMx}_n![H4gc|C̓˔r22?ԉ7 V]}_|㻒rNj$X]Tb&>獲׻fs]A˜ff.LLɴS +ɗ\ c-=4{ PnM @V--87BܗF#u5C_䫫9⟶ __Z .0rS $&rVWϕtHQӼViV}؟p7NQ*fSOhs=,lmjeJOS3ak/q 'qeMN) .Hׂy_BʊO{/5eAuC V%qgv(M/&_0{) ͫG slǤ̅O]46Kerel1=R/A\O$~ P&ދ˛brrrmz5vGur%| un9y=k[Íbcrx]o[Am (ȄsxsDޗ(={k+QGIIY~?.ys(/9&q>EF^V{vvR~l2vd}MMẺMX(/G}>o_˃![[[/QD(0G/G}ʯP~y+_}^ 0ŘzIENDB`PKSi9PKDOEBPS/img/sql_id_tooltip.gifcGIF87aJތZcccccckkkkkkssssss{{{{{{111BBB΄Δ省19sJcsBJJks{֥{{Rks!BJcZRք{9ZJck9cBsƵss{ck,%33J%00ثQLee@O+0J e 3jlpT40dL+j2)\NŲ)-jςEA1#H'd9sTR<|zJXQj23R+U s[QlzKӥ.WR뒫߿*)#q-[\qR&L^v[,OSLC[Lx,V/V% dɔIѵZYEލL0cr Ug̻u]|1(+Yzp. ι˟J-W{/D)-yJSe)qKR^IG߄bOTZqYh1**)7HPД*v:j7rJj)Þ3:J{(Z}zÞfʪ+ؒ-{z .ETDzF祍2Jn\-zH>1 (K N물 nn'J*);.6*:)b()D*O+/ҳ*>'= (M *%6G$r]s-xwF]7}-ӝx= +Fߍ_uwbηgny}8৏^x眫.;bn~߶n7/.3O8nFPٳ#3н싏}~}ӿ߾c>w H-xz;Bm \v<ٳPo|  Aԛ2 %r+tL 1FoCٳ!@  .z⡘(JSW7>tᰍ*/k3+ тp$EXG#ܑeGDQG9>VZo HCv~_! 
JHJO/򕰴]cIes޲ f)LXbz+Biy>]T&.;jP̦0nz 8IrL:v~ @ρMBІD'JъZT(x`΍ԣ'HCJҒ3i7U~T}):YQDiDeMӧj;Ӣ:uj < fǦ+(է2ʛNTz(T*f@D+Yq.^ׯ±=UJŠ3(|̀R#(Zl8ةԱ (kYɾ4*|>uME+Yشvxi9Yְe'9RF;.f2ձqKV.7[V)qծmXG n;KѾ6V5oy ^}/rG_*oeNb09_:ؤM'AWł4ŕleG["\ױp Z~-ڶ0o%M7n`ֹf+LF8ښ-yԪvzd~‹jbJ`SsNel< OE4'MJ/TҖδ7mOLsӠu9=-RWVհ5 YָεwkUӺMbnuY),¡g`z ms۬v rNZzV  <6Mou&6_=O~{}o >fp\v6~_{ի: \fo5u۞|lP:z]g^ϽnWyON[~Ղx7σͿ@|7H_̓~b Vk>}k/}I?{g}\sҋ{y|KϾ{̫^/> >|S7?_yѧ{zwB{~g~Wxg~'7owo |0 y{Bz*v{3X1~/7hsŧ=HwX+H448 s~=o#y,z4Hׂ8dXfx灜G熀w(@`zWwVwwH|~h}L{qzՇ(U@ tf hw(1x}w(Lj/ `(pxhx}je(>ҧ(H؋N'xĸ،8Xh{xڸ؍X8X明긎ȁ؎(X~zApA`}h((ҏcH(} 8y7@:?!!9$iْ.0 2y5y8 ,Y;>@B9DYFyH>,:,,Y4WWpy(,Е,URْ#g- 49di.Wxz|ٗ~yKْ@ >POjyUf&(^ .b9*ٓbyٓԒi%939$ٔyVw2 (/Yqyu)铜 i hIyؙi iɛi)ι_:9)69≜ٜiڙ ܩ"9< Ĺɞ2g)Yّ 6yN>9)nY?Pɚy(cY:)N` (Q釚 9>yȚvgi5 f*:TZɢVZ\ڥ^t9azhڥ`lڦnڹp:tZv*rzz|:yڧZ:9yC:XV)y9əj[oʢ!:Z J,9:ę9'UV;9 ʣ!)Ujڦ,*2:Py_Y$iD_ʚ#^5j,p*گV*=kk+7 ۰=(*i<Y+:;kZ#+,۲.02;4[6{8:۰z(9;+ K=`+۰VY {LH[?0$ ۴;۵^`b;d=˰ D;09#=P[9E}+ K*Ij۶x뷆e;k)#R1 Ki{(jQW(+y`˸ۻgۻۻ[{ț5;8VkЫ{؛̛۽{[{㛾۾Ż;[{1+˿?+=(: m`.|!$ܱ# ++P*#ܲ; =?\B<{!!1|>3<ڰ@Uqf, W^~ >>^~ ">&~(*^,02>'4~8:m<@B>>>F~H^JNJP>T^3.VZ\` b^f~MhljpSr^v>t~z7|瀞~~4P芾莎6@]^-%nN1o1>{N...k>뺾>^~Ȟʾ;.~؞ھ)^^".@**p)P( ""pp?_n|s}m$_&($PM>68:<>@B?D_FHJLK/ R?QTVO\/"b?d_fhj;] 0 \`l~?LpppO?_?ooOH_ 0_؟__O{o?_ !8HXh8)9IYiy *:JZA QhZk{۫,0苜, =M]m}-` .>Mn~I/?Nn0~ <ѱ :GĉE1Ɓl2d'x2eJJ&0@4kڼΝ8Ō;~ 9ɔ p9͜;{ :ѤK>:լ[~ u ;PK \_PKDOEBPS/img/tgsql_vm_006.pngPNG  IHDR|OtEXtSoftwareAdobe ImageReadyqe<PLTEͼꚜ/23ȹ۹ǂdجdwvx{ᣳŞzVguxڼ܆rdeg]ݛ@@@써ˁŽәÊ沾̢䕦UWXiЉՋҸksvslܶϣ঴䣫dmt}ere˭[^aǩ;FOv欹wwZgYդ舖𞪵wLRV㟻ׅ{`$'烥GKN鋪о޿xnkISIԏ7::¦\|=F=ҨʖeɖّÈ~~ptpȶ$MHO}IDATx{L׺ߡ"C#֥(* ^\-EA+ h()%CPNbz7` MTvy=oڛHA<ݢ%#|%=߼௾"h#1:-/P!ߪ šMD;񗟿Ai4JТ/ZS#w ¾/a5'd1+)|85rNOS *0`S 䬑#J|Px"s"dx `Պ_̆ϵ-ܒ;v1|!%/X !-&1Di;_K./b bbH^7D^$Z_tXO?G=AvX̞( +:]/x*+2Iy>~m|C p`Ya[^e+HLP!:Y[tSbnӨT}"^bHKGhr|@WËz.ЦVEk /\n b<* n|Ч@2zz j1|BP15_0z8 ߟ z8©Uj . fi79f`%h-Vm@`oMNfT4jt0jZ=Pƒ-H;` EDJNk5C sLnVo\_<0+U:G+Dlm 8)c ̵(H~HR2'C/b^ߡB20@rڜX= ȻO_;1/DO/)Y壌_9o8uƿ/^|ڎ۱o`ߵT V1i4zMUG?o<$WVVGOWY9y{k+(L9o{ݔqV) 25<ߩrkHI%?3Gd7 *m?7< R FJ,$Qo% ^}79&??&<[џ?BR.*>襦:l|GD}jsWȿ}t7O?Ç >""aWJ'함=G/|=!d1@OS>P˟6h;o)5!SJo[BR|&),~÷|JvvfQ{c?i.HrzHl&j@϶e|Ns es թ|#UrCz~vѫ7i.~o9zh|>Ƈ5q?;Ir\R#W1[^y#Zr5q?[ s_-\OF޸:zߠzr^fĎ/~* [K.98߸"z#9ڳ%-\綾lUo܏/ɯl|Ù梍U#7(+F5C ~ Ry,|{%%G_p˿9~R^{{e%ySryk o?"\J—W-sW\rpѷ;]_5S1|qJK.fmWP\#`|/[0J!\^ސo]{ %Ƈ;6n޿~`7&|bSw=U8*gNSr_oϻ9,m)[eoTK>zuoڴTǍoؕw*~hA—;0uX7i.'e!g|.9qk~>U^Uuo| GKpFu?>e^əFH>E`mݏO Ao ə?'OWrUt?>9~4Du?|R{ڐ?ض#~ su?1|"^əH _C/utԵR9^əfGv?>in. |_y?9|ޟ23#4's}ud|_UO^aݡZSKuq?p>%1~/pـ_\%k拝OOSJR^əAd?b?',9^5_eIK?GOçH_R~cSN$U$|Ñ_ }~Xw|j'c#mKRki{S͙FH~1|a ^aݡmk ?_?;4<${ `6v}?rK8n٢[% OϦ˫<93X~2l%m)e|O>-RJKO~}re?; ?l4Gg>yÂ}?+9G>l'L\R PsP|%<ц>fJ!򿢓3=78ࢇG~1+qi.yoP:8]ф7X4zu7#1; 9_ml~ s3sZPoPЊc><;mcϟr#VaDK6р#'?o2UL?9 ǗdM28/oh?٧;~٠ս|LvwG1Adh ﴌhC&?/<{gCN~9zQ24S=%_ ? |C?챬K&dN9 0:#G/]_W_Dj'_4 <"pq=&8| H1;ܜ>զZ ?Ī~"WQ~^ڦ]4SI;<>v|/O;40UNX|Qj\ i/ȿw)ESB7L!S IMo]Q|O,eޔ)2 z }mb ȱrM?|:\H}Pou~j MÿP, xI:?F;؊!VN'O<J #Dξ*QTѣ7;%|JKlԎ 2#D}{T"O RбiHE(I3\[``.$$\X@d@DZ?j4#=dy>ط];g3I8|Ttee%V;@00y#5ZsG5GpvOF_-l7YУCo[L?%=q#=$O9jmMj;نH_^aO#/zcM5b0Z5jE ܴPdM۰!l1ߟe)=ܒ_=7%jeDZVЁ"aEEH?>=uh>?D7E (><;ӧ`zVm|IȚ䳼"dBEV *7Ɏ} |i#s{G@ڀH- - :j%,RgFɚ\VӔ }k mgQRhMlлˈJh o@cC4=\ET"v)TH^(qCwG8, ل6 ST7>OZw)ϮPìcgHւ2,'=l)zzX2ґ'(\ҫՂC!)V<!)*yzI>+YsȪ Ai5Xb(od K|7>\2u47"~ ZD?3\9r@Dy Aӧ*?G?/_>6\b}ʌxŵb AeI EbmldMO $j>-dۊ>>s#ź1`VkivR'>nRz^v >$gu>I*O!E CN,j<@hQQ'3tUK{~#>L)A]k AwKDth >)BEյzbmR:9Y &'/FǥMe#kl=؞#vTTpgyb^jGEl5e-h6PNzIZTjFqY[-JI_x<}6#g:>kt4tXrFY! 
mnT?"Ծ5ŢIxƍPd {?t!;^w!?:~Z/J5#W5LJz> &ORO?DvTȬ:KފHSc9_*z]#}}Pk;MBuI_{lƸꦀ8xҚ%ǡ'OݦNq^/]OOM_4}[I1<ƍ;[X;߅H~bc.JfA0>q/Mb@~S5%?_aR/'||'Ҏ).%mmuɴr[mdd|d%; 3,^>w)dde\v-#q&Schj ^jxb]<S\^## 8SSSsR$(OYAI&nߐgXn*P=5/oI~>?_TV~URio`VqN| [ {r$ ݏYESfFt{N!ȜvNPgH^}'O;(;;%tAȿAsגUlG-dϘh{ş AǙI?'~Ә&)lu(v:ϞM3I=4;X>Q+3wۑ?_sK3iꮵoL}++?~MpʱSٲG'({"k4& ڻxQٳ)𥄦U;9|Dٲ3xsIǩ1 X~,?X~>mѷvvN&^ȄHiڑv$vtMS8퀽gz4|).v>N4ZaL~\#lWDq T^~N@Oy9|]8!.]{3xӎTTyjHIIt&nx쒴#??) /`j v)rDib/4I>sK~uG!A>9M,?[ksH {7ǖ;\ۜa%3 |+?]p+~vJ1AǣO#AUH&I?I?}<]çW+xx3NA~=yLDWlSGOajZC(9~]1;viPQ{W{w<)7-y6cq=h˖MO۔R'*wuNytm z±4jS\a ^Gl{D~*6UyԬ2/3N~ ؤQʯptrV!ϋ{gW8,\[50zw}zثHknS=Qi[K=GɿiLYhQK5O(9;a{VAϡʏb?ˍ_|\.أNzTҮϋoY\q*i'}1z~e襥1G?/䬛I wT~uܰ0vxޑȟz gL~Qm;V >'.FڑIG#iKłا.7>H;S`ay?oLL~VB0{aus@~g_s?M.?f,'=J;wĞRL;/`Oǖz(Ǝo㷝v0Jϟڰ4N2ύ2^Dg`.)ǡf-M}~qwaRߠqR?n^X~d*DŽnڈ oL=ɯuma\_?]M}E{ػ cnÔ)|-9W >{h:#FQ/#]a® R~]r=7xgpdguh=~:&~N4bb'ÌD)&g񍤅]NNw-5-aL .4]Mwc'ڪbgsSz>7ńtJugWp3csrDc[w5ZWo5 n*/ ’XjA?G|?RB](ʒN=7/ # s vA@8إzQij2c6>ohh(y8'|[1|TzF~?N]N?K_"J.RDO=/ 5JzD ߔEۊ[U͹nBq;\)Bi MvghyIӱ ]KZ_tP~_ -E\)w㋨v2{MӃ&^aUA?n 1u4zG{Az˘9345->^͞6z]ejy/OO@ůIۺb N^//d'7-Gz"![f|U6zɁ&OOӅ~-~&zR8vx5j$5)kjZyZ|g$Ao 8gk*gS 6[|ɿL"-׃wyȎN6OR2up4j=)× };1>>@Z\iZg|o۱sR&DO_lp໊=K%3/$6< _kpz||)~.=g^^ 0O$?ٳgnw7w߾g>mNwgL?必PG"~%7}嗸֟_Әu_dU=k}gҮٽF%ۼ2h$S|Gη-$#m#S{H 9 3ۅoʥqc$iGg×1Z~tk2pX%8d??HXZ4H/CWr򗑿_g*Ӈkj)ʷLdG'W9o¶,|(}J'R)tq:Ҏ'5jQH:.R;|3HG⪏ nE÷lWt)39-U"+0|.d;@ȣ2_*?xS>ն]hDF3 _ `AJ_ ;e5}*;:YBSJ|n "wb& GjSfȺSѩsiGm99Ȓ‡[퐎滌< (W =&E{1|xdc/'AωH:{>֋;@e:\sr!sIͷg֊/?ĈDc sC]QY?YL~Nv zRJR]tRm:w,ݿF/>Fe;>:BVc;:䜯1D r ηc^HpH=HB^/p?|i,z>f|G' zG~1m3Dvq/q+;6 #';_>nɏ|ezIэ{^Ou$5}hW0|"ʎNbںO̙FEqGH {gPu2Gv q>{Bv>{ìh nBw)#1.Þ< uQ |:R?fmw9o nсTjw -x:[avsc= ?vq[ 3-SA`b+;:I}9dm/SvvQ.\2x_p+*~r|Yf#R ^Ի@.[!n㊽&Iu; |qM~tAo&U@A-{T#{ O!+tfN~D|;X&zt3-??_#;?ʎN&c;|y۞x@OYqOo>M7yj:7 `?-KWw쀒\yG.Gg^8yaWi _|5Q1|.꫚ | zZ\l륆N7=|tOf޳ 𩥜d%n#Wx/G{|=_',| z |ÇA?)|٫[!N;]]ϱ/ESf4X,mf.ZYLt.ؘNbkb?s3ۜE_Ո)3r7grR)&[T_,|<|X~ |=zm| ]_Df!N;SV nhy%n^~>}yQocdkթN _Bw}ʛY3տGY+˶ͼUCgL'AO[Z_R3c,ka6vU U'淌?BOgv~s+u,_ۭN.9'| 5Z7uW8,KEZےcoوY=O7,q>ǡ':uSNXE[8C=6[`o-/JvV؈$߯L0|l`˥ޑY8;1|"s+Mv}R%8+| 5==uٞTfڐ:4CWvJ듺a}|$Ic2oΖ> 3اj6El_|@ڮOBH;&ޕ_,ӷkT_17'?|(w`/%ͫJ e| _ee~ύ3#*i%sOD?} u(w_m8lmuSijJATaVQ)v> E%W3)egKX,,aN~tm5[U_|vb|tasʸ|{>#:[6FO;ukkjZAv)5t+?=k]Oz)HսA+fN)O>,zqAX3>.xpuCz#Ev'zg/|Rϣ}Kn}4 |qI֗UyzMb;|A+N/ <=q>蕌_bBZ>wĉ9s{R}Ni =򚏑C$l/<;I##s7>, IrSr> iwH Z~y7]o4^`Fժ;ؓu>&MYskkX$,gWh)tNR|wgv 8,ܷp!|֭ !B.""s$G'x6 |wV~K߃{XY~ۯv1z/~ZpoTX]Y|}6MS֖ƷmWV[{rmme_| s[r!|_h]WP'r&wy8Ï^LPǡV8BC?<3?_lM'v¤+,|6-+Q?Fv}nG9hyΧP񊝯S=cOT$tq5EEbdKlOc"lŭJ3 k&^`TOTg d_>s߳_ۥP@O~c'FiᣧZT/@ zʿf9|D~Y > = ϖ*iLJ/T^6Wv~YKs^ЛBOO)hWX)R/rU4zr|<& g1>.L.kӈ"j>=_$I;<%|X)ϧ(JA]HIГǠ4wKP謹֧~|Z嗝's咵]fXWSn,?Y7Tco}9r_Rk<|Xo,*= A>]u{je><(\)u>R%EǞ/Ȝ&Ƃ)=X@؁SO?-K/ AK~cOSF~@fuqɭ϶& 6{w%klp)vL&v9tM% "bjg=gϕo7'8S|+Ѯb7ʢgéS>WY֒76 3ηO(xN.[#wi:D'iFt>;'/D~V<6/41._-Տg/8Ow+jџXP z{zUНo϶ɓ}akFHW;8TSH\64]$C +k9IZ~~mw5>I_>)x绎%5_L?55B.*|eWI/Wx[qt ySiv &Ӿ}/W6*ǣ1:/O- ZN]b|cг}nW|s:O!ίL߁.Wv7B 5_Ks}+_JWNאVجsCki ٔ_>~ȿr!!v65o;4N|k<|Jg>.|pݘ%_pUGKnO*Z8JЃs 5G謹+d?o |{:|Ruo,9>}5S ?PYYƖfM x]I͗#WeeWTkqW>A%Rq͗Y*v>^ɑ2NSoːviSͽ5#-.U--k0}HO(;WEG]Wk:Vc}6_O߭jZhA$?AO{~=s϶wukp1d}em| U~6VMxٝnSF+ڮl[F>T!|8S;$/zP8KtEӮ=v/*=1|S8uyo|vI?H?5էJacAujhZ|?yGYrO)z؎ r+X>06z@|i9]-kEaBͷX~ |r&磝T#)l,I=Zx]%1+~du9@ V>.u+Q|K|R=R \ }r+vϚgz  |\7U96z|ŹN_UZ) H[vKRS64o-MMK+t-UwvMs>O?ǥp]+v;by|+'[E9w+fX|_C 6wI|-Rw(#41>~!j>(zW5zR&nM*W(tvB]W^ǩ:bݒ}|%_-WYһ>ە1U|D ߧ|sF+{+b{n͢ɂ|S]cCqSr>>oBPj=|$җfeΚtmuj1YH[:y'{5%99wkI.z'14T9 _[~.?kH, WAmB{ݳz÷97h9܄we Jͤww6%o@1u'[b^.fޮ] mV 5?}k4|w V%]t1z=u|xKՖs9_M}, A~N : vF}}%2UPo7+|kJJ >:gܹ;3g8GaKEEE}wڬ+zĬs.c}zmǷ~‡k snώQ#jf/9:yJ~|nWu>A/ '.+{/#?)Ձwۇyw6F3ikI3ed>UGo;== ױg—F'DFg̘q4Lߊ@ | 7|CMFdn/\#C"o|ɿl&WR$vk$ iX 
V~:'?6GޚZMRF91'{:8 I=o8f1|?l};5&i~¡w_V UW9S'N_AoMFrWB H O9;f̝mX'{ w_c|4Tn_K w3B>Iَަc#Qwq>=k};ޛ|yt`oɺ$ǣw`(v4*g~im pηNzEηs>V~+Bd‡6F͹EM?<GwVmLJϝ^{x|E/J=>z wTxY:|8۷o1A|t7kVx]-%r1:ʹ:y=@\=,SϽT9s>v$ nW|鹹֭;_)] ln_Gw=kwKg\Գ|};8XcH'.MNw5\szIz8҆*g?ݙ8_XBXdBdT<'Nl=А{B2? Pcj .|chF>񝝙YN=`a]8wk.cfͷ"l~~mBpV9S'N.vH~L>?f珩oFOݺ@Gs v>o=Ivѯ >kpΧ@z-ȑ3_Q5v{~1[Dw[[>l.|AwѻOɣ&gu>V~ML_\4~_Eq~gs2ڰߢj>UMlcgF͹ t|".^tɶ#v:K~π5CX/Z}|t͗^S\aM[~O|\ꅛѩd7жW'ꤋiw>4dOp2}k+17}{_#;PRiZ>u€ 8拭W |Ed}B'.#(2s==62c +D_h苡!~͞rVGYrXt]{W d'sO(C+o} w2UXwԤt65aoB}oz_sGW^^t>􀃿w2ydfrb;*GѷVrWnهz#}Q[ו[ jS%#Β=69&p.{_¹ˢ^pW 6169/rs g>|ֵ1i7[AZkYF*8_76p˿oܛѡlfv'^eo9wiϜ6mC55z6EX %xR5_cf ÷r=H}{$ wE+"rG/v>?(vu^X7ՒԊ Zs+xY/݌@fV=?T w ;9EM>!|}} s6L5HuUb,1"mGX#}h3r͹O k0zM RdOltn%y c۟:Œ4B׫K҄:g#F"ƍ-ͺbE C3PӴ/8:6VD8ݎuPATFS>+YvvU_ rϫ ƣ m2$}Z2Z?o^xn5 ?~IX56|<}[2SwzF\]%V~&뢥1:Kd;X0^; [ThVtI`$S']tr相zuдȀs(o@7Ż8_tkѿL9a53 "vn.'ꛖOsECSs£H/$F_|PMuD ~,lUjქjիjL%VK%fmE7 뢣w8M=O ;FO_DQ'aw't?w} ǹ.C[[>d3,Gʼn3o^=2S\o6ZB;ft2,] +*)s]'/vO&h*iK;ikY#;Ι#xgo |Y >4>))ٴv6yd75=Z-v>|%>Qw>F55./_3}9M͸e_3g9~vhw%\UW7C"s_i sS~,8~>,@v7 ;?G|~ > =z;hNbM&sM31ךXZeҊv)}| tX՜L\+!?$k5;N/,37Jv&obc|󑩖׺`ݏS^Z78#li㷵voO,5TAG ?M7>}Nkz+/Lohhd%}P~acvvnzy߶лZ~(+؟?K5n^N4ɜoF}:{/o8g胗C|Yhz}vJ"?$>n+ {/E+p!OxHk>a=iC#PFm&-WZZ|ʟ?~~ԗ=B1Ñ YNyeWhw~?Ǝ,CCZNd§C+{1`ۼO!wFF߮1|=Suw*'Y#_@|:__2kwA/Z~R{1k>z;^&irڋ[ktQp{{zscvipgwkJP!ٝoSI>Gϓ˿vk uep͎9z,s}]REcOPEkU){Fqc>IUMAއ|tw*=^qxe@?@bg}ԱOimW1ڴғ nv^XR^KX Gd=^|/ܣHOimW > ><~k ݮ e}Pc&୺qhۥQvޝyǎ ?qcc``ń1s@)z*=5EEu>dKl<Sv91jgAĘR+(A8u߰}?8yq Gv{x!yΨon,~ v}}]ٹ.7k)v*w_iosĉ mj|._jͷB?j\ԣGvʷQ;g}O^N~RxXK_]\ot3ksWb'8ዉ9SNe!|GJW^ fip ~7jmkחv _1q>"ZOG"OwdN1ꯡ7̼@]M,Z@vm1!7I-xj^E6.<5}>i_Ҟ=lyqqqۡ}-?N_6g_qKcߦrJ~ ߢCjʣH0hVgwh^|Ð_>y8dp>N==n2|}kgFO_#7?ZA÷i:MxI? ʟoV.\׿?{m GÙSo>zۙ9܅ y?qQ|%a2U$4Xg?bF~GϿ?bb?]hKzc:-| )Hg/.Ξ_Y4Ȭ'Woޕ8ŶQ(pc:7M& _ݚ/\ÁOqOCeY_ >E#~Og^/@he;Bb_P%-ç|XK<<{m2>y/M֨ft7(h-^(DU Tx}[o_'gJs/ _)7NG(?(DE-whC|e5Å*s07q;{{CQPHx.Z@oSOY 8bAr^5K_ł[7O^!K`z<W8ĻZg >rbSmnE!m xC,mmm32QA0e^G/2nw> ZahA`keD Ab#-󙷮i6|cf|;HppGQ9dX2񖍪C{g|k#/x&  Xe0%/jSL33 |j+ ՙ|5l;sl):vCBCA{!AmHJ.>|φ|Ź;^;+Zp{Kg:Ӎ!QXcC({&>cg{o0۞a;ߡOfφ wwJX=/<[]|!˽8/;;X7sjk${7uZxVwA"+J+|g3 ;t΁ϡ%/Cz\,|vv2),ͳ .֢Eno!St>>> +6o fs׍|8z VwvgI'qK=裣e3l!=T~8~>wsDp=qm1Q's0^kM(ʗP#_}+/?k5Ruל$gϷlc jlE%̀]ϒ>&ז/9tHvY,|Bٷu>2p /g&ϿtfdgG߉o0o ikL!9ǂ@m ߺD~Z"Ka.#_;]cǏ^2wUhsFN ]t|,|- i B_m9θU`tu ;j\./X<g=i9? 
|ss 6/zgtdv9|;#,!f-d ڰ[ +J_dvg࿈Flf<|c2r_Y>v߶|sYЬv$f{ } <Qqkq绾Ț5kCNIk pۻ$_H|pKՈیŒ3ȌPttjmGUty *[cCQ2*@|_7DzY5Fm۲rwg灬h} _8jA2tk]jCptr~IZcͽ| 3A{,.EuY{f ?LƏ|nDͨ ɱhg k;;jX%lrb.|lEf󽧰:;_>q{Y&j^#w>k#2+OGV>tjAev2h -A(֣$^uZU]g| KXs>Yg5q:7 |!KcTw;O|kPe} 7/'<ss/twO7F9O*:7ނ|b_PuXrVˊʹ,OI bb7s 'Gk}nW9}W1YP>v,@|5\`Z'Kx&x&YZɖ.=7MOzk8>{{%$> 1B_>}o䕗n)?zeq!7{n]<9}ܳhљk:߱jgzE{ҼO96?DxdG~OW4ofY0|L{뙠dnW@{ {3KӞ/5;9~%pDzm4h34|f㩝O=4Gn~^N0}h"I@DYZwĂ4{O~lE㫯K=$;;β?QϷ5J4?w:#<٧W5Ӳ:;ɼ]$9@cn}ׯ6rG 6"%8ZO3 N43TX?DI/#>+O*Y-*^ cvYhxu)ޏQG|^{bxtr"s\x$ "fLכiFZ {.œբbj>`~a_Zޅ~gpǯWG.ñ/jҜ"MxYKP?:I|'8P[wtt/Il^ɵ&Դiӆ Cۆ[y9)/Eol+-AmdёL[|t0JAa% dR_F]8> #G:\HǢc9Bc3~e>'r'-XGVOFKcACCv4.r6Έ x1W?ss}pdRX7 8_7ȍ؂ b6z6Am3׸M.p2[u {]Z!?q,Z7U b3j>H| 簔\9G?Yz_>·F_irx~ζG|_xq\җ;iOwrDܿ= И#IG#cG"uF Ts4?>k R^Z襬9[^ޕ,@w#ۢG1`cQ ?wa4Ϳ'q^.$r>>$ϑ+??Hn9|ʷm;r<1.ʕ=g $Jy#W EV~?VYT-(K,]Ėĩw:4ΚF⚯('Dێj?* q_/U6Ǜq-oo,}h?}w߇͗?hWtmuO<;|i֗.#m5 Ç?KT9%ΐ>ҽs;MDYc&m'#y5_GG׎|Q>Խ(Nx[; 8߃9[so!vlݶinF#F5r(X8O~HGnj>.z@753zé92 ȴ#4}vGkPb3e?{3-|X2YA9,<ȿ.hX9|E>ş_g{t2).\Mze>4ڝD#ĴmǠ #4%8F:8'LVCnFw_w|mGH;r: Hs?_qj>iR/Z=^uc$u%]W=9ՉZ>;'aSw2|_hO)^SccXѱuW;t"u_}묖LkQ Ĝ>Qm6!FlS뜏q>1'8ݒ~b ѷ|&V]x'kĜ/%.m)X?7|UMbΗqeD@~_*|t |R&mp>QNiGutħeJI<%GַFMC| 9_.:@S|)=ٳ ȟ{|5:(h0w|lD3G8volawVDϖDKF~䧏F~SCGO^q8bO;_cousYH[t-rO#y&7|L/ /W~%u'ȱ6>Oe?Q|s_ *|V%1b}dZ1:A9rnugUFl۩#bU 9VΧ@{QȂΗ?:l9LEǒ1+A𝏵A |0ڝWu襃ewc,R^>F9W^~Le8Ecatu:GwaUp|uxtmNpߏ~J9xMpm׮y⁔{1gc}ܴmHxGF ?>OS!O|9&x5uh(_ɧ>}DN#A50iVA˽ԺGttu`dӵuW3h 9jyFW_}XU5I;xQ書ۆ׭+6&?OSgWڮ$seU7 }ֹΗ/~*|) i<.zh|^5YO|gv]=-$ٝO3T| |⹇.D;!H_*|q{&)T*w@}b/O~OoBηlOa==Nj^<*xA%7|Lʠ<_g.O~~dw8 |RE.LDK X2 5D' [|˾1xηY?Ψ^Ի6-ꎔ zí{&֋5Pϓd=9#IYO[_ <|r|lKAq'(!eڔ3&|?mLP)?Ȩ .?yc9oUTGГ|YOk;;5w4_G{%|alr9<

;Ut'Zͱ |Iu_eo-zWo=3DoО3OyS\=gs>$ƓZA +xП^A'|aYO%U=Η-;,U3l[ ۟8pӝÜ߬Ѽ|?U |s>W |'ty_bUq̹s"}|-p'15wA7L:_~]O [9O v?w >^UӝOh/#e&_"|XO~'t͙TZ{,p>}#;K>|||!V*y_吟?ٟ} 9y:7BIBR\+Aנ?kj }gop zڃ0oM'|w!C!46]LZX5Ϧl?*]~њoUDA{۝m6;6Ao ܳk'-Eo\tr._@}QSr>Ujٜ=/_+,ٱ{,=yi''/\`v߽cɎ%О_y?m2os>[٪V}Ȼ /&+lN49=ڥJҍH1?ZKv{+O>$T YK/e_2U~x- T,o OmT4 *}ȅ& h%v>>!փ%Gby| ֬Y`|y\Fqr;_W9o|g[lK?>,Jxٖg*F~|' @/ dywFe*jbJҿ9Tqϗ>$?| 3(t3ua#n'ќzܱK;yr;R&K>X¦-[yawK%kL?ƗXVXvw8˯*7/KZ| aa99{<DŽ+EIb;Wx0c !b3p0`IFlxg%+,͖!t>nWk__m__V\cPKObӢ}Vr'曏v; %L/ |)4>VZktgUϻ|!>|b[<7څKtC4J;o?衃Qx"2B Oy/S{v{aW_&U^.3ʘu9+ǫvX~԰9^Vl4ͷ|_Dھ ٚR)C";_ |vG||Jtux(@ΚuaC>h\wl[^wN#p>t=>MAMm rYww:{`ʫ4kk>Wђ%EXV_Cp zvlɲ7/[|K{0M-˞>|whWŇKҫUneP~!}{|ϳCΓXm7Q݁/;]䫗/>! vp`r aJm f9N=@;(W!C5 7}>e|'إדK͞O= ͫܥ#HlVNP/m U z?Tb *,o4{?g;t+JZ G |ʸMbt%F-#/,Ut m]BmU٭qD}8* # ]"(c盅?bSDp)yi~ߎѵeęϾ|zmq-bUz 4>]||1| ^oL*o=Q|8$/|ǕH X~T(|/ >whW>z;_go'.]'+-˾Gl>ǿ~я ?]jrC_j;|,|ZVzJ=em8HCG0fy]9_qzԩϮ?u|P<,d= b:_rk!Sx`|VFݥf>bP\"K8-<0'۳:" ç҇Br bH9D[i #+vK/Qh Dw k9h'6hA KEu:oesi8_Ɛ+ Z+]תPj/ؕ㕓w3ݻwX- o͎or;'O8.sJGh@ r~6·sm"}B~|y| !PMtRzgpW9B.~xiwx?_o,ޛK|bبYzє.?W=wW_y(St.3o!U#:C.:GjX$/Kvl˷lYff9[{fRN.Э Y-}}v2TW'uA~ۀNT<)i]ϧ7\:_Wkܸ#4ͺc0s|h^X|Ėb'Z$\.!.VPR,S3nhg_" Pla)caxg?MoYoyOΞKͱ\j;g&Uխ}]*`$v> 2>F|O]>VGjʥ 9,TZ/U︴?A oi8C;:۩co AJp&|t~/_Cru/By#^ou!'RvȽrHIޣhp tv)Eo qiKøMbc,?tZv6N7` t}؆l)x U~J$;F쉇ͯU|^jo|R[iZ\l~b,jT(FRiN] j{΅Og9ֆ(aUm~&a uHI&0IlېF>OW~?Xbh!q[ 4nF~@~wpA.Mr.5ybȅF^˧Phজ>JPNm jPC!H0J|T\6/ ~Lj竷uR KO7 m2S,>KC7ʜZ46>A}ԉHRpO|@W)<VMIhRRHGhL IOS8pyH>: H|E/ {,5:&R!IV5*a]H~0%N8,Ӆ!z}^kequݤ?$ q;m6ZhdAloIy/Mj3R:ݤ!s8 9!n%첈Aji{7>ZC+dF/O8_W̭Ktq>r|! +|2j$]a# F5n 2_PSEaK?INΓ_oSr>m>DYM(JMNJ]un7 Ǽ>|.lEtPBz>hKziVK$NOJjV|FO5&_O zC,|I|%,%ϸ/jgsF!&|d]fK˄O' | }IcҴN .UHtV Lt$5#.Өj(NWĦղOn-Mt9-aU)  b&* Pe]ڍ&BFL}=q·k)ڮ=ewZ\ jE6Dik"萄"&2 dO|qi M.+ӯ ;1`֓Ӵ4jv'GTNRc|!!P> UgzRMN9M^pj# *]2 N1R^h$lyW |*N*2NB&TGEl@cAI^YK:· MlȅC&>ЫFjpeug mT8B3v&Ee OqOf9N_V;Y 5Am09e>X(Ln?, _Ȣ s>[QWݝ2dNXL,@_(^pZ:uvy .m|!.>$#M6c;NJ§2;mhUZz¦ҳQ6](|gXz)%~hGPgLNK@~ ngBP8|<|l 5-";,(O4k}!3Ha?d`'1_)eu>[yÛXn<i `:<āGlo2:Zjw);_B~wB +6k>N~3#}8 ' 0|7y'iF(v B%Rx$0Ffͬ)|1ͦ6pCb#9';_gW.l"~ϒ|.-,[!cr@Q 駅@.I?m|\OmjQPnp>nO0͇H'GX~>m>/, _H>p̾U&7؁H=ƒ(7 Bq_TW]V+Lh41t,3軃MY324p=Zn!C~9 _~URzf%#6- >K§K/Wjts;t"QE+TxfW>i>SCz{X7MPw\U3N^ߴVT~@&X~*$|POFN x`D8d2TC.^S>U/>p;_X' 7Ä x_;>ƵU 4 V #60,Wᛙ~*ywFoaJ^Κ|X鴶|BgBͩ⟝m.ˀϜ _8/8Kg3SU zݑꍨz_>ўwWD0a zPW:_;AH+>s&|f@ݗz88s95oO/d|UC.m|7 >ѪjGW]̓OaU{| *.qyz }×~0w{q`~f|<ͱO;_|巄x:*>[vׇgXy"f"Ȧ_>Ȑ L8c]5Ѫ%LPѝw&RGW8hA;i<>AD_=>'kUI:}⒆n@/U/&|x)$<4Zx$!c] حT-hJ|ci7"`T|j _ m|x[7>Wߐ9IUS$ ]VW(\E2o&qQlo.;bUsX[p ·ш;ɈK"dd5`!y|`dfHq8RJT;EBv>K뉩.&NNtうt& *Qf˦,)Ӥp\aU=[&Z"= 9^V+|* @vٯW )B2 r/O\7>^pEGrTfӯvhI6Z q_kRBQ,^ 6x&D~1G7|̚~"_@Hah5J )vtPuX6MP:=^l7>"G}F 7ݻXЙy[[|M髲k*pYN(B|RS$R"Bή(느#?t dP'=FE0^oUt>f pg{#3`4Pqlj$2jLe^o_k/u$&?c}BVz^²25Iإ R$*²2#Tny'_LEOt]^2h7'ek4 {/oڴaæMk-=JI/q˯*­&wk5qCMWRc~ao!WQX_>4 Se4aF~`T7%Ko="DJ>_&XV/E]Z4I$+C7HZn bZӷU>j<u-:vE6 ȚqfWKIe} {FPot_{AʄOY(2A'cH~/7hn:KCX1|U7oWKy'\'ove\԰+.Ye4PWbc v:Q\3 >-8\j441#djݶNL!SN2#n`Ç Q?:|!Ii>06:PV zQfr>6x.#fj"m|lb}7iG]"xqzpz"73lGQ>ML mV1Wj> dqwzz/NĂGB<|!]g5jp.mCצqB'( Ϲ[+_z5/|a)v; |3RGor0Uh]M&c#ǵ O'U リ>CM~B|~P"u>?ɐaR mo>J$/:ޮZoe7Ce /|:Wzr-s+. `1t* 1^w[Ы6n }CUr*d }txI7%'fEYߥXSYmbǡmh/|F(xl(G:qzmPX Zh;"mw?5lz-޲~oVOItv ‡uz>96A0KўgBԻt~C_i*~ңOw> 8_M]" :Uq ņtF0 \V~.,|6h2WZȿ@n>v/=<.B˭L.وwF? 
JoտaCcoOgmsKfڏμF zH?EGލFn>3'a6npzkX sMM |5>')?OM]$QF5kH\ȏ``6"z0&૩\?yc޺Ptdt8Ib6>3>ąoz/쭗x/Y^Q竪OO(yCZK"HC/dm QEȒSnW,\1̞Ғbax5ݺ;%.c@GACu$h I=E|&/5-dgoho{I(:#%xJ$dˋ"˨!uw |\[mTJ_~Kd 9{ZZҢoJFY֥t>$%.T=1gڥ/WK&ȟ|Bm()XhPugkܤ+|l| 6e#5V̄Eo瞺"P9?kC`H|u ȯ`H`6j*M]%*3g[fP5jŃoxO^ֵݬmYU6W4/w&}F6UjiQZ>?T:,|q)w ֫+`ߐU oՍ>nx>Tie~`ћe >p>ŞVc|3Sj=mrQnZؑ/'"Dz:QV=֙kd9E"Xɿw'$n|# 7ՇmaY84p|=y|3gWG(?M)g,rVjm27\|>tm 1{I?|3gfgoj2϶z(t'Xٜrp_[i1;5TUH#{K~% )Ɣ wt AQ&!xf7wH9]d14|ɞWkuw_jSƻ`J|u寅."9}_RU8VpW! n.VK,긽*54\ӝBL+&?J2ʆWq776㚏Pu^1{v&|NU%ň4Kk.j|e^>\W])Vٳ3 :ESF.@'i|o 'oEeΠku&1S@~gYx@/tp Gx,zozd΄oLeOvZ)ԶknRywv)W>Glc+ N\3ޟh=ʡؙ-㚻U,)VhUV5~mX_P/ۙiՇgΜ,5xݸQ㓡A.1̞ޞ8;-UH_7+2‡gsgfya"yNM;厐.ηjffTKZ*-6Pqߊ t8E@~̧XR5e9/y/嘤_=!>_烰g̈=; T*d=. ÇOKA7(*|hYڛW۔p%NI)t(<^zCmZc:poL͞3)&| Q!t>zinꄟ^IïW6y.D͞Y93#zXrG$k_|ԙaE/ͮ̐س+)zX{YU/3riHEA5]+j:l˄O79\܄a_̶n|\dpi79?Χ3 [)/nլ%7%Q6^k&fxчD˾ERŞe^c֜r7s-|EQҴtXP~[>@+~= ߮.zYRp=WWVfD?Swg:z)i+O@ee9UMߞs_rT vʽ;PYSufe5dϗ ?a|c/ZYwX@^c}Tr9w+K-B>z3 T {ߊa/g|]{})qwkxF7>d,wfe '|]ee{vͨhMC73+|MO(3@4%T~mJk32CK'֮f~ FJ>I]3^D׼ꭀQ10|w@RrG^|{3;g$K>C%?_ʖ]y?-*UtYk^=}0z-T=²w>֘kڕb;AvUOH= <[~-ѓG u>b'3)U&6Zrk⇁>'~Nӫ[EЛPrg%A*(UWnOzT= 9 5+s)>fcOEoᓼ&^:/_ͽߧ_CpCz^-/3/~0CޫлYgwne~"Iʮs^8?.!Ϋ&+ vəQ7W䶉?uM{|%QYiMx)^ˉ_5'C*|{ 1u;u?Ɩr|Rr@[%W7se?>@[~ȴ_E^Hs[%_GTRνw#TS%cBGL7||*W?c/uK9:/j]kuͽ}79SrfܾqpXv_roPJny{C;pQ>f{,ܚ·*b#`ex ٿ鮄t{ܲ% R?_{ბ׽e[i~ 0ݑڸIENDB`PK^PKD"OEBPS/img/mon_sql_exec_details.gifGIF87a5,ތsssRZccck{Z1JBB9B111kJJJ޽Bc{֥R{{RcJkss!{ƽBZss{s{{{BscR)Jss{ccBR19Rs9ZBR)))))Z1ccν{{Ɯ)sc1ν99{J111{JR9{kcZJ99kJcRνޔRֽ99kk{)ތBB9BBss!BJ)c!ZRJ1cJs9sc{sccs9RZ{{sRJssccքތƜ眜J{ck{sRs1kB1JksΔ!BZ!s)JB9ccRJ{9ZB{kcJss{RބZ!ތ)kZ!)BcZkB9k{Jkc{{{9{sk99s99k11k9Z{)ZBs))BB{ccBscZZ{kssZZk9ޔR1B!s֌BJRR祜cBZc{s{cZkkR)Z1֜ޜ{,5,H*\ȰÇ#JHŋ3jȱǏ CIɓ(S\ɲ0cʜI͛8sɳϟ@ JѣH'NӧPJJիXjʵWKٳQL$ZpʝKݻx ߿KB&[4ՙ"ǐ#K5˘3k̹ϠCs~ʗӨ7N@X8۸s.[{IKCB|⸉M "vIn &I;o֝W?ga`Q`&P:Y0Pj 4\A=@r몶⪫Net4Z3&SpsNw(NfL&HG'^xhE%fI-2& `܀*')Jӂ Ä.Lmhp~f_+: D4? =v@T*NjN4Ɏ+D 0t҄ttMF?}414EAY0Lݫ=P@I m6GtLa 1Cw]{$3kٸ-Ҿ<7G.M;0103A[ btF.y"H.qtg8&!6^ gqsGgs&xV:r# bkm2ߚr0+>0="p"$88NrZD!4akv(ظDl ZIIgĔmk]aňhV:CVjxC)QhTՊpUlFvHIG+bqwǛ@01y/$//vbܷ/!GP'Ia]r'"9aqGYC5ń].w6KSRx-&Rr78ԧ;^wo ՛ޤCduv3 x>iĘ^YiŒ+ F&{MW9J}W88D*> FKWs*Tjr ә4uCړ̓4ckݢjc+"b]8'xCtVX@[<JhfЮ4kc0B\Nh +[Hbl}=-~Ct^P,@KaĐWYɐ=zV/= {P` @4c|?E[ _hŇ6fHkZ87)*)hS1o6UxRµSZ7<šR)W*-UXqx=>õmR 7Wh:{`}hxw' -hܟmx*' Q(Sw~4&oR HyLVV&%9jݖO;)m[VCj^IV=<*9.Z{&s'٫)Є樇g"LPH~mG#|.`F0}}&\֯< BZ4}r|&(8.}{[Oٔd|)6uY%W8WrHw3V75r:V%D Hr?57ES5WSs[$OvR,'tRFt8vWOFbe5aTwq8uVCPRvrVNW=l+`whG?8vuvO21: ,59wkhű=lY#"7j4lxLtf{!=3i>JÓ&2\r&!w{d()0ڔ#b*PF  ^ܶ]0`%P&t3 ~}_ø~~O8Uِ 3 `8A~l@_QwabUCp4S(5dsWtf"Ka .w7#c S!v2}g>%ULr l&jɗȉ(X'1\Yˠ @`"mp#P `ɗh n dnhB0ֈ$n&^n؛&VBNG 밍hp DO8 S4C#e=4ݙam6lrrRqTC#Dwq47f56Hف/WD UYD}s4bU ɝ75F .8v=@ӡYO"$jgE6ؓz.$EV31֔@V3>@ro p?`h1  `q3'oɸO)}fA@7'e^A^^anbZxA)kja[:;tz88$Jh%f"fj6'ڨJ9l9+1w:1WjqŰ om;A 1P `  mB!nZqx1ٰ5э:,8+sܪuCފ+ʭN"s+e=ШMV,hf*h6 r;82Eʀ` P;p[YWЍ P! $[Gs*oW0W@j8G?rӶ Kưǀ 1 ;`)P%:+L۴R;1a+{X[1*9+4@P6T[fY@ @Q{t[vKzo:0 4V6qw{{$>qpsw|[$~+{۹gK[zATk0[{ۻ;[{[ۼЛ[{kӛ۽K[{㛾۾kH[{ػۿ[ 8\| <\| "<$\&|(*,NwkW-<4\6|8:<>@B@B=D]F}HJԕK0a9, $$-}W@ᡶ %\M^_-Llnpr=t]v}x*Um=L][+ʽGM\ێCڝ=ޮMݢM դ]}֭>9A k>.^q ">&~(ס-ߵ0lv%2.4nOr8nM;D^lnlmlJL8@^A1O~QX>]Z1S^C!a=_~?\慄pr>8eUS|9b倞Q^84n>:nLn.ʋ^|ns.n\CBڌ,ɂE39Y̾9'"|L$ܬҒإZ 9Y3-.V9լZMVm+.فMdf#9 M.9>g-$-)b]ўP3YM~?~)/)]ؒ!?x] 9+]νN-5Mְb)Ȟ?)R6/92b.T/gV4?|mC,}l0}iu:Ր]1.@La{/g @،oYЈ9Qo΄ 0b~#/"\srͼяo_/M789/-] .w۟h 3*OoخO9$Oz8Q@@@!9B $@8H%MDRJ-]SL5m9SN왲c7v8`GJ5!ҏ! h4iF*jDX Y9ݾgП:T)Fg=;Y!BWbƍYY&5Y'.M j`lڵmƝ[N;jQ@Rs-8kقOHlWzkϳ&\ZrF$u~@٭/Bv5vS4O?$p@2뭽.۬ATj 4A6k p: 1DG$DJ$ڐ*&hC ohFʐtG|H0B| EPR! 
2F$BR/'b!'D3M2-A9STN;3ݤ|OO?O@%J6'k8tB4RItRK/uRL7C E}SSOE5UMSe@UYYPG%\wW_6Xa%XcE6YeeQXj6ZiZkZm[o5Q8[sE7]ue7lۅ7^yur7_} 2Q7`&`v58aFxaη7 ^/8cmxc?cG&XodWfeu9fyfW>Yєogg/zhy܊3Ki:jj:kk;l&lF;m߆;n离nouA.uُ}v+Nvqv,}yGmh^N^yym{g^x~ٵg|ek}˗t?\8@ЀD`@6ڢ_fq &.ME8BЄ'Da UBЅ)! І7auCЇ?bwYwD":H%J&+A8E*VъWbE.vы_b =ŒgDcոF6эoc86`Fd^%9l.KAzD)Wty&F6ґd$%9IJVҒd&5II1MYR$JQvDe*UJVҕe,e9KZҎ˝vI O*`R3$&I*h3GH ? k\D8"d1y :e\|"y%OȰZȔ`9h@:PԠEhBPt36e᳢$0a2gz1RԤ'EiJURԥ/Jh&ެ 8\n{"c4J`ufH%T91A׌; u8EUAX\16^=s)ҚO|O^9Wk^Wկl`;Xְ͌U!NkLnDLevֳmhE;ZҖִEmjGDz db{@mYyɮ"uho3O_nq;GS%kNAvЙ:pᴋ3L @v]:,zkp~N7uy3sWGu;HsӤp4:׻^V'RtZ%eKvtd1pC \RNquc 41"J8Dr%vpGxp7#75#M.07%lW#S3zS&]^2Y斆9<UAwNsUǺsH ֩Ӂų .1-.;(69U\oio{>ww{w=9?-$sebr~pDW|5yw}7O殗>7s/^yx̀)}~q4|aUoNZsK|P/'/ɉ}˺\7\<3-xI 7vm}(}xˉ$+3K$O,ߢ=[*{T6 ̌ $4 ީ-03 ^A83z2) j@c?#2K>ªI /ת.b4.cSBF4F{( ՛z-/B 'T 9Šk-t1t/|09,C;!L#˩Cs5_A?PoAa?i9 lALMNOPQ$R4EP:?C2ek6h7hr<_`a$b4cDdTe,FF)4N"ՙAS(;P'<2j,(9.wl/k/BB /FӰ.0ʦ*{GǠ0 ?0}L0\:&>>FzG Í)4>ZH$:,3G3`YI$X?*,F28SciDʤTʥdʦtʧʨʩʪ|JٺHU4I7_I73. tb2@dq˷˸˹˺˻ w {. )l#8fNclǤaaI,E\JtTd֤Lyi3 -ƱҫbM>x,ތ$ddt TDO4Ͷ9@Tdt|O2 圗8=O10͔'2POLx  tA ]N PP %QP:mWd1mQQ QQ#%m%uR- X%%MRy-./01%253- 5e6u76U&'SR=SP%؅]Sx9CEDUEeFuGHIT^KLMLSDnQ(=@T$H]ЅA%BmZ[\]^_V_]a%b5cEVcTٸP R֏UQLnkV[@$ȅ\u q%r5sEtUuevuwm4yz{e8mt#jV , ؊*Z`؆#8X"i,=*4*7Tאّ%ْm[0Uٕeٖuٕ݌͌삯MUXa]Q.T& @F&"t-̜Һ{jJ0CڎL«l&bڀ]NS7ź,'Y`۶XR=ZM2Yt,c9 ۿ%5EU\ĥueY`,tR9 QB fW}V9Q۴^,3{mJIk̪ʽ\qTZMđ̝!2ۮLy3Ibr)X?pGx^X0#3s!ӮˠiC;X^-c^uŵ_ĕ_M!9]#cWä}C*t^(=Jū\5.*32 ]׊0//u94$䪭C'):᭥&޽kbCo+,VJF);x6bX"Ģ?e6 Ekka*V("pW `v[T6'3;G~,989c50<=>cV@ζZ\yh;.}1Li8c3bjsp |G)a\[d۽R:E[y$C"%cm+G`^# q&r6sFtVufvvgvc?ycCI[A k4gYIҌaBubeַci9X>v*[M>F~-*JeD WL^$Fh=3?\f䟸iT!HTfM7jk(޽-f5kg9. 6F>VHp:4."M`]: ṵ=K$LJVK%S;ٱDݑ]Y@pJ`$kV˓fMֲm-ܸrҭk.޼z/ } 4@ar a@P)>LP!ʇZ}дCVDH!PPTAXgܠlˇ<kڷs婆A ikJ6:I16RX;L)Pwx`Ry]KdO衇Si#Bui!…A!8"%x")XbBrmF ese&Oig=ZgYYTMHiMIz`!K;gY*=PPEQ)7cv UlGm:'`rrfn1'ǘc!x(" UiٷueB > WaOsF#zwǚ&GazaiiZap+++r"{l 8'B=P) ڣT ]za{ŗWJ䩕V!KI0n=݈d,q/bRZgٹSza8-kCm})J I:V~zU,P 05l z/ku&T+C}4I+4M;ӛt15U[}5Yk5W/m)ѵ(޶}>nƜmxzhLv-hV,BИko&P^@M!J\d}qBt7Pr f\#l SXJ"Lr/IV|%CT J/!r#?e67 hUHR܈,JRC-r^"(11}f<#F"DB"ica,6W<л .g w%x=!FNfPAb(Ab:擣$$SIB%a/"4Еv1w9%0 F5ĬmHGWTB%notC")}DL= %+Aܫ1pdgژ&Y(WڸagTI\4dJR#,m6K$(F3эr&B*ґ&=)JSҒ"Q+ˇr24)N[yМ򴧉کO*\TmZIN}*w թRglaUU]lc*ֱR-+Zyzִա<**ӶҵvjDWu~[a*W;=l_XT%v=T`RTme3jL&ђ--j_Բ3E`+Җcmmoւ-p; ؂L;U}.t!fvԥ,scJrFE.\E jjJdfweJ6E}kʎK egw?D!N:f7o ׹y+Qk٘{>^%x9x´Mb)dJ-IS;ni=K6i.NJ2EJ.xBGeb+["%pkH,ƏpfP&mbc蕐$[ *I+^cu`g&X)7@dKݸ&P2 ik^ z1 rtj+Oy0G?ӳL6$3 F*xo^U5nz9oJXZe^[$<DŽWSumƂ% [#R$mV ~ <|ZW]'{мkv֊H!<5'MZ`p2shEݰ%p֝;n u#UBuW7ovӀ~3ijKtxk6>>%, e ;ߩ,ra=mk>c%́r pc*f@zS]LKç> %OQ.OsB<[n;/OkԳo}N_{^߳Z'~|`-۲/ڿFiTKߠ씫QA|醿[zϧߴm/U?S"F6E.`vLVAfLp Ym f ༉4 V=UE\fBXA |_ 6R>%L&@PajU^j![T9%ZaNUUW #B`%`B B"q!"*J . PJT"!fb '2b MRED!&'"+~V(6-a!Qbb%bOm"' "6"TT6%"G"&Mc1¢4(^Q#"z#pcc8Z#9P6_:j9ax`?<#?$T̢@":d#0$A$E]RHh$AdDZFrdx큤 $VdJE#n..M$L$C$`XB5l@G bSBN>`+NʡU^?-$j%]Dt@l4G0ǒefO@pS)## !*`^GЉe]2>ڥeP,\`DU^M*&DUh:c#d* [9ހ-#IQ8-!Iei:L`o`j&).CLf.IRZ$@tZ*&Qtb'Vi#cޥILUT͍Hѳ-)]vuo`p~"9Q!Dtg4US>Stb%zic}.'}6h:>(`^(m(R(9vNrZ ~(7(җV`:Ψw(5樎r. 
)u4)B`>rM)2)0Vߗvipji,)_ini*V)u{) iuv_RG\ 梺T*jWjT\*fn*v~*****r"֪*檮*B*++*WRL6>+FN+r2kfn+Mk62+n+*++ƫΫ+v+ .E+k+6>,V,F6n,v~,ȆȎ,ɖɞ,ʦʮ,˶˾,Ƭ,͒C,,Ϛ-lv*-6>-FNJ^-fnn~- U)m]ƞڮ-ʱ#۾-bh-힪 ڭmkkZ..&..-w.FN.V^.frmj.膮.閮nrA.Ʈ.֮..".//&.*-Foj=VV^n_e/~//TQĢ//n>/Ư//j00nn7?0G+$C|,L00 Zpvf*hD FDBp 0p+ZCAT ks*1GO1ꈦ/B {pFp$ Ӱ0(q q#qpW1DZq_qeq@pqFwvgvowLms'j/7B|w>yzgusDp Ӿ7̮u~~wԒ7xsxQC諾g}xEp,AƳG_;B9ex݋y§y߇lɓ_˪'Oy{ǀ$!̃p~7}S|l7\s;@0Hࠁ"Fwb1 )a[A(*Xrש?t*WR#2E(QA < !aN!]8⊋\P E[Qg"aј;dxM#xqǑ'WysϡGSϋ@~,z/vx[U/W{WWm"ٸ?ߊѾ C\̶+т/B&͠"+.P8Tsm#>Z< %j1JkPqQy )(hXI@=R// #ꬼk-2ԋLt3O) 9Ms/+B:,@7(#5 hSQlG!TI)t# ;$PJKY T09"u>E}(SMoTJČ } QXye _ Va-VHLP̨(Dh)(X7$0!m{Դ^nɍ e/D=kWyWd̎(O߭V."mHX<n^Du1ba>W9]WY};H9e;*-umg P@yE#nXΌANZ饙nZX5E$S꫱q⮽.i.FMS;NΛ /px[oq֖߈1\9A' I/?Yo=a]iq]y߁^iWo硏Ƅ y^x"O_;>ۯ_ͷ?47qo~t!A *~J*8A n 2ȟE-y4 QB$CZ Bΐ5$< D%m#dE)Nk I*n]'e4ј0&+1$`8b0i52 {:ddɟAʁ~ ك#T(@" %/?rrKQJU*g0GF&xt$";HA^*)%$3"Ȳ&PNN2Ì(;UN՜I+J@r#O?FőZ9Kq`8d HyN8rYI}~R#et}Ԥ5PQbCAR2#)((;iH=V4̊(Ew)B$RwRt$/HG O}11$d&JO[.5FJ@F5(FI"M$l* TuB}HxȫeQe!5Iז*U#%K^YWԥ!5rXt#cZSNhtk #`|E`/YN %F Y6a,-ќ3i%`CY`rRme*^Vl׉I.vm(;P皳*a9Mvru.m8H.@ E"q^Η6_i eP'_/ `/:&*!a 1aωo%Abǿ|QW*vb˜5v?lcx8drB6H!M=bD'O5r>N-o]f1e6ќf5mvg9ϙug=}hAKr1ۑэvOhHKҽsiXi^>P;zԍ.bNTGӐ^u˵ޱؕ'ȫak]ռfupM|"d^6=]l=[hM[vp?dܠfrjݛnwn-m8E pӴ" .x{uqp#0=cH6ǣ z}y+x]H[|,.es[{8Es_9}ts L5oxN̋Xߛ伺 !BU]?zjz5^ׯxvS%}x#@1{gInl4bN`1oӉӜ<+d&| yu-zB7}"|GH ƿ~)"^7|p/lҍtMl7EZ+wɣۉ|A_GDooBD"!p"lM0 h'bm&a/b2"(OoO;N| R%n0wςl.p/d(`Bp -P/p" Ѱ/gcpP 0 0D~(5x GLHO ;O.+)̰P"DoW0 gq+dXN:qp=/0bA{Ŏ %"0g.ʤ/#c/t1!Ɛ. I + t0/1 "! "y+QQ!/rip!i//#;2% P7 ʮq$/2$]r wOD!-R!O/!8*qGRmpO#(#/)yrsK1*c(2(R/Rر-Q.oo O'2 Kn 2%QЮ*Q1Q+5_2C3*13P,.+/VoT6%31+ 7Ro2R]15F &ɲ:6s7U7S+3U/*2Su"=+.0/n0o0#WPS=S2ߏ21(A323ypoA3AI'"1%?59Q+--%0!57?BSC2!t*'E1+7T:H\BQ4;- /EsFo*mS!o-D=95p6/4ӎPSEIcc@EpFrH A.': HR!O4@'Q&YQ9/!JP"=G+2RR QC%uSKRUu4 {1q,YU PS7([EQ9G0NsGM/URQRu:Z_uWPV0Npq7?SXǐLmҞTH=ST\)LE3#4 P3S0O`[q@4873"#[nakta SDbR-SOSk_=UOob%!J%EvW%?1X#tOcoYI.R"/DNjh'T3?_O[!v6n5VhmZ4u /9cULQ/&NLimafomtՅS t6pnHcvXw&Yg!qf 7PE)oos(MM_Edrtyn~ bPT2UCnf9?fHre\vV6[?stz.u?nu aNzy 4NxAxWْy-%Ӭ7rawz-O7P_Vtw~w27w}7zKb7 ޷oT~7WX |WsXw|w#Xu7Mm] uz3'BY#؈@ؑ!UbX'XH^9.ֶ:Rb9vY&v9Y y9y9{#9%%$*%ٕ`9y hc38' ]y/D9y:1E C @ :%z)-1AEkYnryG虗= UzY]a:ezaU998 |:zڧ〣-lh>k~9FZRNڤY8 z:zڦy9n#v:s:z麮Z REY0c0#$8!{- =`4{d"@@/0:/{@B^ҺGqzx};{{%[U7.;#X;/=Hcڕ`;{; 8.AU(%!3.Ͱ#$Grr 5+sHCV><J <1\cjۗKbT#6Q%~=[D}s;X]ٗ] ? G>ٙ} U~7b=# ]k<1۝WYޅu> 13/?~$vݡMQ?UYKj^#+fjˑ\ż S^_\."{͍?Χ?cI9+AG(?^? ΁P T24AҼ := gJt`R; R t'+򻕿/tN? uROMuV_u 2+rCb 3fv Bn wrMwv]z;MF~ xx/<yONy1k\,{Nz8!Bꪯz뮿{p$, O,X^p4axp3^GϦ̚l&6R~ud㬫]YΔJKCصvbiZQZXn gjjP:q|h9%{U*Tݬf-UN^g7YȎ2+ 0}Ҥ@nvo Z-ֵ 3nFz6s=T4WabjrXUqL wF(jF޹ g^܇!X Yhp- P>3raPҳO{Z&2ԧNP6Z$,ȋ1Vճ2(m̋D7v`8r|ˊ8ӏ5c"R€,ky\kY!w"daln\>ןdܕ/ܐKz~jkaWfOHZ@n;MYnzX小_牙ѲNLPէ~,DŽ>4_ k>4U,go`z=rK1/K{\f^;`^<$T8^m;yݼ-=_t_Cx \h!Xhs> ^Rh?]k!bU^l븥%h#/ 8g|!G=aBl{l@^ٌua<t332W s"Chӟܤik}\׿;  Y0U<XdX56[GMdBPR/75&9_]<~-:mjH!%>anEC^EEO#9J+, oߞ"_/;:+S0Taaԯ},c':;)NK6wߋV&MǟQXE~2:iNœyjSIG gJycf~ 3Wgc8GHӀ thRDV{)'atJWRd'ށ~5C?A(CHT0^~`T1vvVhDo4iW~[ȅ]؅+*؂;NSi?hDN+FX&aX^hwy}(:O5 (HmH`oh`S~{(HbLN~Pv8(H3+4Q(HŘxX4DRxh(H -rxo(HOsƊX(1瘍B%ȏa؊4,A2*"#ɐB/Q"/.KҐ2* ~Ȍfhq ّ)#")+1q&.) 
*YDoԋ/'E.3(61.т/3)sq @h숐AFHY :N)W.l2)Z˸؎&b9ҒTy A>jQl(i$yb[)yIȗӁ~ِ)+i, " d.qY/qFFі))9)y%"ˁWKAUhE1IEQ IcEygᜂA͙y919+)n1@yKq)#,9:,p.""(q*HcI1M֙$Iz[i%[#I "3j9#y1v5Q8 *&--` yJRJbph% (0aYs+D10&zIs!09;AQ%`ETJ P(q0p J Qce?`ij)ǁ4H4*KOqb쁡Yx},%Zk*:a3z:](a}1 )py-0TJa zʙiG3i+[9zԡ xȊQ:UQ OAz+K;ҁ 0@Y)ѯfH:lZs tj*zA[bڬqѰuuNJbbڭW۳ HA1AB5{y jؕzb AsZ6roۣgk"ڜʶX겇xx96 QQp.DjZ4%=&5.+;& ZR)ڦT1z Q[WsMkE%J;%,>Ҿ;x;<`0/ Ҽ\ O )䫵d,9tK579+/\+ ,Ĕe%8k& /1 @N RL6ajAC {ClbIF[̚&p)ru)Z|\srACyƇ!1B|p0lɟJ^`l l[*H 94s+ɮ:̯<#IDߪ#l'(DȖý Lʬ,̵,̿)̑E|̑ Kp)Ϊ<϶,͚aImâ7A-M3 =3m !-dμ+L3= b-#3=9 ?0]0σ:8=l{xEmGIH sC%-<@8PmPZ$a-cMe=M0$+ k"`^{} ~}ym=0oqIOX:qTkaPWT@w$FJFlaexSifYKwk$4C1 0]({s۳MM׃ESndfyK ؂?OJW-жt31}A1H\\sOV.S.=qnreRq<ηq/dwb22}flυ?WQ(fxW?>5c8ۿyMzTRn~E3f* 0"<}L3|[bTniikqUi=be)i]~12v#N7FVlf;]؍ջw1V5Y&^56y=8GF^E^]/c3 S]xS/Cc/r<_a.͟%VHgށ)r~P-J4qN5>x%߀)GUjJ=.3 {~O/`WY[C}7| ۈVM5k}F#XXmme_YtN3HF65<~]4=jm!Ճ5x'H?8 /-GXy Bac*>mط3_/oR}(.hc0ߎ.MRi/)Qدoh3Y7_l (PA .dؐ@C%NXE5nG!E$YI)UdK+[D 9ussQHƄSQNZUYnիʙkT)PBU6)RW~[]y_aylC5 cȑ%O\e̙E fHᴉ.fl¥U>JkرeϦ]{h۹e[o߶&޻xK'WysKn__nBύUFSw/{}rϚOih +HONbKϧ"=C1C,Q ,NPpqDK1gq%z%`mG&LB.-C6ܩCArJ*QrFER!eqq$PsM66"3!$}rR'$I Q7/NԒ-=PTP,L9$ O!%0 N;4 F 9'%Μ0QcQ Q8ЕQ`EMR (UX @bZjEW|eTN3VsD5(r ]`uwrv:m wm_{sbX`Nw#bvfa}݈9‚C`feY8ɖ?&{H:^2-[u5oX^ ]u]0 y YL7ݭ{ c[Rŷ]F룒g.hpbs J\`#HZ@ PcyCϮ[/rt&=.͝f2qE<7ϗx;zׅ7^^c ڷܪ-6uowsj \:MF``h/h̓;޽eh)@ Z0,"peME`cXJFa*6A }CEypCKpTIF/d\DP5ض-< "O;e4'~I#\)Y:5j.|_8 \EQxUIʣYodi8.…\F$2oy-#,9X$kbUOS \mbIDxuK!/ ~P:Te=(pQft=ф95&.m #*9sl'LΖ𩓟jjyzAXPQK1D&ȨejQ]%>0j`]jXXQ@keХTή7>nxb]BaIL#ŋR wڨU)Xj_lվ(I-=km.~2&mVföYξL;xl -nu[N%nquh[yv2) \]'V 0ኻdwɋV 5 pMpӋ&旉NDKM 44SJ{`JUZx_z¸Q ^3aG[AM|Wn78`l)ᘴvU`HAZ<"1i'IkiZc,/c:1dcݷ @ p2u]NW4i RdYυl&o6;*bp  !t&鋩gYL3‹DZFo|;y6 Uf̕'jӟSiZgtTؚ0>i6 kdm}Ns׻}nzQ^!!̈́[&w}nt{2+` ;[=v}o|[w-n8yn-opץfxpG\R@Ti~'qS$!'yM~:..ƅM]b[ק}mq;yRq`_QSs'](ҝt? ia~p[Qz ]cR_>zW[. 8 ۷ts#d־Fn F8=$#/;*W|?n=8;v@!j<4,O?wﭧj/t_OS_[#7>M%g1Ϳ\1g['$q@ hq}_P?_O7#'_n::ؾ0@<ࢺ{=$?C ?;3 ۈп AӉ ,.,Ӟ3@[? B44Ck?cB8#9(|;C=*l+@l ӽ1=Eۛ==#HdR=D‘3LDEPdt|G|G}Gi$G3k{GUjdVsM0U&tB %rBTu7aX* | hIXNPZUYuN-ϒLZmxP eՆ1"d)j҄r[*ݜz'ܲ@RZ!”nMZݠX]O%YZJ[\mMA}M|YR}V:"e&9-1]jϭd Ђh.u-J-,=X\\ڑ Ny׃HM,GJ Fek]MmXZM^\?U@].sE^%^½_uKUYdYsY-M6lRMVU]mހ|^^5 <`[_t:ߝ]`$b '^0V 6Ua:vac5cliboQI[vca8Z-sEK?_2IJ%@!dzU_t|JQeRVMbBf IxɂReZe8e5ZVnDYe[faey]^Jefb~e|fi>h 䆌՜bPenfof9pgf䖕n^f&c:xg3kNe)^dneW:h\u&vFż'EyhXdP;$h_a|Y c44]i=iO[…>jHKGD4<8,i~AfCj{katf~}s ;PlkQ{FgUVfxg.j3dmƦl͆!@cfDLi{+kb#iE)N߄g/AsA$$5dl |&Bm[s;Z6Xo3IdmGnwkBl{ .C@8Dm=AV7.n6K۝)Txn&j?GE\Kqa k޻saoHM$vkoGAGqmnnpN?U=Խҋ=s[葆i57 n[>`4p&Tf< Y;Nj /q8c}evhCtJ+o,_t.mrnJuQgKgL\t QuXGR_S?yhuHu_:uu^>^vfoNt-g*VfO/wvmvvMOu7mVnlvtOwvwT?vNvXvuw{9vjwudv~wxwbG_xN}Edc=HKv>.bgW1D7i'q__My'ݖGvAάxObzzߏA YerpOP1` E~e͎ya<^|/|?|7|X/{9{LO_xo?zcPξh}}؏}هX}|S'ד|ޅ]_d ~~}{7uwkO~њ#wA.`  08p@hxp:xQŎ?X@1J\`24",KRǑ!A:ϢF/NZrG@Y3-Ѯ^5O!ǒ4ҦBM++ڹv/b/߾~ٳ!:u \"C,A9w -ztx-P@ R7HOb̈IwrȞ =:V,^X3ڗα[fV8GT+y=/(pE{T%KP@ j`ݗցEEV~wD\z8]lP'(vфn|]{偎;职OO 0@Q *ihAimPk[QS6o}QiKEƄ7%wQTaiWZqZх!_v wUK'q5gA'"HҩaOpaZ٣v&6y&*]#)(2:JcMUdT(]P&jUFTsɂUfrpM5f!@uB G /bX=6vDT$^|jJrŭ6*^-0c WKŪ\$ΚrgQNkyU73jg ūղ̖&iڕL+.Vt[p^WGM T]ڶjJ\zk/o AmrWC@"Gކ%q6UJW!2o.9T裓^,}.AiJyO^ n4Ѱ klўoH;4OGfnχUDL'Gu{ᄄxZ[ϻu^ހFtUE=fn|Ev_dbEBB0 %XQ2gr "! 
-`a%y lU Adobe Illustrator CS4 2011-12-06T18:29:27+16:00 2011-12-06T18:29:27+16:00 2011-12-06T18:29:27+16:00 8.500000 11.000000 Inches 1 True True Courier Courier Medium Type 1 002.004 False COM_____.PFB; COM_____.PFM Helvetica-Bold Helvetica Bold Type 1 001.007 False HVB_____.PFB; HVB_____.PFM Helvetica Helvetica Medium Type 1 001.006 False HV______.PFB; HV______.PFM Cyan Magenta Yellow Black New Color Swatch 30 C=0 M=0 Y=0 K=100 New Color Swatch 43 copy Default Swatch Group 0 C=100 M=100 Y=100 K=100 CMYK PROCESS 100.000000 100.000000 100.000000 100.000000 C=100 M=100 Y=100 K=100 CMYK PROCESS 100.000000 100.000000 100.000000 100.000000 C=100 M=100 Y=100 K=100 CMYK PROCESS 100.000000 100.000000 100.000000 100.000000 C=100 M=100 Y=100 K=100 CMYK PROCESS 100.000000 100.000000 100.000000 100.000000 K=0 GRAY PROCESS 0 K=100 GRAY PROCESS 255 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 K=25 GRAY PROCESS 63 K=50 GRAY PROCESS 127 K=75 GRAY PROCESS 191 K=100 GRAY PROCESS 255 C=25 M=0 Y=0 K=0 CMYK PROCESS 25.000000 0.000000 0.000000 0.000000 C=50 M=0 Y=0 K=0 CMYK PROCESS 50.000000 0.000000 0.000000 0.000000 C=75 M=0 Y=0 K=0 CMYK PROCESS 75.000000 0.000000 0.000000 0.000000 C=100 M=0 Y=0 K=0 CMYK PROCESS 100.000000 0.000000 0.000000 0.000000 C=25 M=25 Y=0 K=0 CMYK PROCESS 25.000000 25.000000 0.000000 0.000000 C=50 M=50 Y=0 K=0 CMYK PROCESS 50.000000 50.000000 0.000000 0.000000 C=75 M=75 Y=0 K=0 CMYK PROCESS 75.000000 75.000000 0.000000 0.000000 C=100 M=100 Y=0 K=0 CMYK PROCESS 100.000000 100.000000 0.000000 0.000000 C=0 M=25 Y=0 K=0 CMYK PROCESS 0.000000 25.000000 0.000000 0.000000 C=0 M=50 Y=0 K=0 CMYK PROCESS 0.000000 50.000000 0.000000 0.000000 C=0 M=75 Y=0 K=0 CMYK PROCESS 0.000000 75.000000 0.000000 0.000000 C=0 M=100 Y=0 K=0 CMYK PROCESS 0.000000 100.000000 0.000000 0.000000 C=0 M=25 Y=25 K=0 CMYK PROCESS 0.000000 25.000000 25.000000 0.000000 C=0 M=50 Y=50 K=0 CMYK PROCESS 0.000000 50.000000 50.000000 0.000000 C=0 M=75 Y=75 K=0 CMYK PROCESS 0.000000 75.000000 75.000000 0.000000 C=0 M=100 Y=100 K=0 CMYK PROCESS 0.000000 100.000000 100.000000 0.000000 C=0 M=0 Y=25 K=0 CMYK PROCESS 0.000000 0.000000 25.000000 0.000000 C=0 M=0 Y=50 K=0 CMYK PROCESS 0.000000 0.000000 50.000000 0.000000 C=0 M=0 Y=75 K=0 CMYK PROCESS 0.000000 0.000000 75.000000 0.000000 C=0 M=0 Y=100 K=0 CMYK PROCESS 0.000000 0.000000 100.000000 0.000000 C=25 M=0 Y=25 K=0 CMYK PROCESS 25.000000 0.000000 25.000000 0.000000 C=50 M=0 Y=50 K=0 CMYK PROCESS 50.000000 0.000000 50.000000 0.000000 C=75 M=0 Y=75 K=0 CMYK PROCESS 75.000000 0.000000 75.000000 0.000000 C=100 M=0 Y=100 K=0 CMYK PROCESS 100.000000 0.000000 100.000000 0.000000 C=25 M=13 Y=0 K=0 CMYK PROCESS 25.000000 12.500000 0.000000 0.000000 C=50 M=25 Y=0 K=0 CMYK PROCESS 50.000000 25.000000 0.000000 0.000000 C=75 M=38 Y=0 K=0 CMYK PROCESS 75.000000 37.500000 0.000000 0.000000 C=100 M=50 Y=0 K=0 CMYK PROCESS 100.000000 50.000000 0.000000 0.000000 C=13 M=25 Y=0 K=0 CMYK PROCESS 12.500000 25.000000 0.000000 0.000000 C=25 M=50 Y=0 K=0 CMYK PROCESS 25.000000 50.000000 0.000000 0.000000 C=38 M=75 Y=0 K=0 CMYK PROCESS 37.500000 75.000000 0.000000 0.000000 C=50 M=100 Y=0 K=0 CMYK PROCESS 50.000000 100.000000 0.000000 0.000000 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 C=0 M=25 Y=13 K=0 CMYK PROCESS 0.000000 25.000000 12.500000 0.000000 C=0 M=50 Y=25 K=0 CMYK PROCESS 0.000000 50.000000 25.000000 0.000000 C=0 M=75 Y=38 K=0 CMYK PROCESS 0.000000 75.000000 37.500000 0.000000 C=0 M=100 Y=50 K=0 CMYK PROCESS 0.000000 100.000000 50.000000 0.000000 C=0 
M=13 Y=25 K=0 CMYK PROCESS 0.000000 12.500000 25.000000 0.000000 C=0 M=25 Y=50 K=0 CMYK PROCESS 0.000000 25.000000 50.000000 0.000000 C=0 M=38 Y=75 K=0 CMYK PROCESS 0.000000 37.500000 75.000000 0.000000 C=0 M=50 Y=100 K=0 CMYK PROCESS 0.000000 50.000000 100.000000 0.000000 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 C=13 M=0 Y=25 K=0 CMYK PROCESS 12.500000 0.000000 25.000000 0.000000 C=25 M=0 Y=50 K=0 CMYK PROCESS 25.000000 0.000000 50.000000 0.000000 C=38 M=0 Y=75 K=0 CMYK PROCESS 37.500000 0.000000 75.000000 0.000000 C=50 M=0 Y=100 K=0 CMYK  PROCESS 50.000000 0.000000 100.000000 0.000000 C=25 M=0 Y=13 K=0 CMYK PROCESS 25.000000 0.000000 12.500000 0.000000 C=50 M=0 Y=25 K=0 CMYK PROCESS 50.000000 0.000000 25.000000 0.000000 C=75 M=0 Y=38 K=0 CMYK PROCESS 75.000000 0.000000 37.500000 0.000000 C=100 M=0 Y=50 K=0 CMYK PROCESS 100.000000 0.000000 50.000000 0.000000 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 C=25 M=13 Y=13 K=0 CMYK PROCESS 25.000000 12.500000 12.500000 0.000000 C=50 M=25 Y=25 K=0 CMYK PROCESS 50.000000 25.000000 25.000000 0.000000 C=75 M=38 Y=38 K=0 CMYK PROCESS 75.000000 37.500000 37.500000 0.000000 C=100 M=50 Y=50 K=0 CMYK PROCESS 100.000000 50.000000 50.000000 0.000000 C=25 M=25 Y=13 K=0 CMYK PROCESS 25.000000 25.000000 12.500000 0.000000 C=50 M=50 Y=25 K=0 CMYK PROCESS 50.000000 50.000000 25.000000 0.000000 C=75 M=75 Y=38 K=0 CMYK PROCESS 75.000000 75.000000 37.500000 0.000000 C=100 M=100 Y=50 K=0 CMYK PROCESS 100.000000 100.000000 50.000000 0.000000 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 C=13 M=25 Y=13 K=0 CMYK PROCESS 12.500000 25.000000 12.500000 0.000000 C=25 M=50 Y=25 K=0 CMYK PROCESS 25.000000 50.000000 25.000000 0.000000 C=38 M=75 Y=38 K=0 CMYK PROCESS 37.500000 75.000000 37.500000 0.000000 C=50 M=100 Y=50 K=0 CMYK PROCESS 50.000000 100.000000 50.000000 0.000000 C=13 M=25 Y=25 K=0 CMYK PROCESS 12.500000 25.000000 25.000000 0.000000 C=25 M=50 Y=50 K=0 CMYK PROCESS 25.000000 50.000000 50.000000 0.000000 C=38 M=75 Y=75 K=0 CMYK PROCESS 37.500000 75.000000 75.000000 0.000000 C=50 M=100 Y=100 K=0 CMYK PROCESS 50.000000 100.000000 100.000000 0.000000 C=0 M=0 Y=0 K=0 CMYK PROCESS 0.000000 0.000000 0.000000 0.000000 C=13 M=13 Y=25 K=0 CMYK PROCESS 12.500000 12.500000 25.000000 0.000000 C=25 M=25 Y=50 K=0 CMYK PROCESS 25.000000 25.000000 50.000000 0.000000 C=38 M=38 Y=75 K=0 CMYK PROCESS 37.500000 37.500000 75.000000 0.000000 C=50 M=50 Y=100 K=0 CMYK PROCESS 50.000000 50.000000 100.000000 0.000000 C=25 M=13 Y=25 K=0 CMYK PROCESS 25.000000 12.500000 25.000000 0.000000 C=50 M=25 Y=50 K=0 CMYK PROCESS 50.000000 25.000000 50.000000 0.000000 C=75 M=38 Y=75 K=0 CMYK PROCESS 75.000000 37.500000 75.000000 0.000000 C=100 M=50 Y=100 K=0 CMYK PROCESS 100.000000 50.000000 100.000000 0.000000 New Color Swatch 2 SPOT 100.000000 RGB 86 86 86 New Color Swatch 1 SPOT 100.000000 RGB 75 75 75 C=0 M=0 Y=0 K=100 SPOT 100.000000 RGB 0 0 0 New Color Swatch 30 SPOT 100.000000 RGB 204 204 204 New Color Swatch 43 copy SPOT 100.000000 RGB 102 153 204 New Color Swatch 58 SPOT 100.000000 RGB 128 153 179 New Color Swatch 51 SPOT 100.000000 RGB 230 241 255 New Color Swatch 62 SPOT 100.000000 RGB 64 102 139 New Color Swatch 55 SPOT 100.000000 RGB 204 230 255 New Color Swatch 3 SPOT 100.000000 RGB 221 237 237 New Color Swatch 61 SPOT 100.000000 RGB 102