
WHERE col = coalesce(:search_criteria, col)

Author : 모델광   Date : 2023-10-27 (Fri) 21:50   Views : 350
If you are an old-school Oracle user, you might be familiar with the following predicate:

WHERE column = NVL(:search_criteria, column)

When we use the above predicate, Oracle has a built-in optimization for handling bind variables and NULLs. You might think that when you migrate the above predicate to PostgreSQL, you can simply convert it as follows:

WHERE column = COALESCE(:search_criteria, column)

However, PostgreSQL has no built-in optimization mechanism for this predicate. In this note, I will explain why you should not use the "WHERE column = coalesce(:search_criteria, column)" predicate in PostgreSQL.

Here is the script I used to generate the test data.


drop table t1;
create unlogged table t1
as
select i as col1,
       i as col2,
       lpad(i::text,10,'0') col3,
       lpad('x',100,'x')  padding
  from generate_series(1,1000000) a(i);

create unlogged table t2 as select * from t1;

alter table t1 alter column col2 set not null;
create index t1_x02 on t1(col2);
alter table t1 add constraint t1_pk primary key(col1);
alter table t2 add constraint t2_pk primary key(col1);

analyze t1,t2;

Note that I deliberately added a not-null constraint on the col2 column. If the col2 column is nullable, the "col2 = COALESCE(:search_criteria, col2)" predicate returns an incorrect result.

For the reason why col2 must be constrained as not null, refer to this article:


https://www.postgresdba.com/bbs/board.php?bo_table=C05&wr_id=112&page=6
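The wrong-result risk can be shown with a minimal sketch (a hypothetical two-row value list, not the test tables above): when col2 is NULL and the search criterion is also NULL, the predicate collapses to NULL = NULL, which evaluates to NULL rather than true, so the row is silently dropped.

select count(*)
  from (values (1), (null)) v(col2)
 where col2 = coalesce(null::int, col2);
-- count is 1, not 2: the NULL row disappears, even though a NULL
-- search criterion is supposed to mean "return every row"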

Here is the migrated query from Oracle 12.2 to PostgreSQL 15.1.

select *
  from t1, t2
 where t1.col2 = coalesce(:1, t1.col2)
   and t1.col1 = t2.col1;

When I migrated the query from Oracle, all I did was substitute NVL with COALESCE.

When I assigned 100 to :1, the elapsed time was almost the same as in Oracle.

However, when I assigned null to :1, the elapsed time was almost twice as long as in Oracle. I had to scratch my head because I did not know the reason.

As usual, I checked the execution plan to work it out:
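For reference, the plans were produced with EXPLAIN (ANALYZE, BUFFERS). This is a sketch, assuming the bind value is spliced in as a literal (plain PostgreSQL SQL has no :1 bind syntax):

explain (analyze, buffers)
select *
  from t1, t2
 where t1.col2 = coalesce(100, t1.col2)  -- repeat with null::int in place of 100
   and t1.col1 = t2.col1;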


Nested Loop  (cost=0.85..16.89 rows=1 width=240) (actual time=0.034..0.036 rows=1 loops=1)
  Buffers: shared hit=3 read=5
  ->  Index Scan using t1_x02 on t1  (cost=0.42..8.44 rows=1 width=120) (actual time=0.024..0.025 rows=1 loops=1)
        Index Cond: (col2 = 100)
        Buffers: shared read=4
  ->  Index Scan using t2_pk on t2  (cost=0.42..8.44 rows=1 width=120) (actual time=0.006..0.006 rows=1 loops=1)
        Index Cond: (col1 = t1.col1)
        Buffers: shared hit=3 read=1
Planning:
  Buffers: shared hit=7 read=9
Planning Time: 0.268 ms
Execution Time: 0.053 ms

Nested Loop  (cost=0.42..60946.50 rows=5000 width=240) (actual time=0.045..1342.017 rows=1000000 loops=1)
  Buffers: shared hit=3979111 read=39071
  ->  Seq Scan on t1  (cost=0.00..30682.00 rows=5000 width=120) (actual time=0.035..130.210 rows=1000000 loops=1)
        Filter: (col2 = COALESCE(col2))
        Buffers: shared hit=33 read=18149
  ->  Index Scan using t2_pk on t2  (cost=0.42..6.05 rows=1 width=120) (actual time=0.001..0.001 rows=1 loops=1000000)
        Index Cond: (col1 = t1.col1)
        Buffers: shared hit=3979078 read=20922
Planning:
  Buffers: shared hit=16
Planning Time: 0.135 ms
Execution Time: 1392.376 ms


The first plan was obtained with the value 100 assigned to :1. The second plan was obtained with null assigned to :1. 
Can you notice any inefficiency in the second plan?

The optimizer estimates that it will get 5000 rows after scanning t1 and filtering by col2 = COALESCE(col2). Where does 5000 (= 1000000 * 0.005) come from? The coalesce() estimate is simply the "0.5% guess for equality" that the planner applies to most predicates of the form function(column_name).

Regarding the 0.5% rule, please refer to this article:


https://www.postgresdba.com/bbs/board.php?bo_table=C05&wr_id=143&page=5
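You can reproduce the 0.5% guess in isolation against the t1 table above; 1000000 * 0.005 = 5000:

explain
select *
  from t1
 where col2 = coalesce(null::int, col2);
-- the plan shows a Seq Scan on t1 with rows=5000,
-- i.e. the default 0.5% equality selectivity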

The incorrect cardinality estimate is the culprit behind the inefficient execution plan, which probes the t2_pk index 1000000 times.

So the question is:
How can we improve the performance of the query?

Well, actually the Oracle optimizer gives us the answer.

Here is the execution plan of the original query using NVL on Oracle 12.2.

-----------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |   1000K|
|   1 |  VIEW                                     | VW_ORE_F79C84EE |      1 |   1000K|   1000K|
|   2 |   UNION-ALL                               |                 |      1 |   1000K|   1000K|
|*  3 |    FILTER                                 |                 |      1 |        |      0 |
|   4 |     NESTED LOOPS                          |                 |      0 |      1 |      0 |
|   5 |      NESTED LOOPS                         |                 |      0 |      1 |      0 |
|   6 |       TABLE ACCESS BY INDEX ROWID BATCHED | T1              |      0 |      1 |      0 |
|*  7 |        INDEX RANGE SCAN                   | T1_X02          |      0 |      1 |      0 |
|*  8 |       INDEX UNIQUE SCAN                   | T2_PK           |      0 |      1 |      0 |
|   9 |      TABLE ACCESS BY INDEX ROWID          | T2              |      0 |      1 |      0 |
|* 10 |    FILTER                                 |                 |      1 |        |   1000K|
|* 11 |     HASH JOIN                             |                 |      1 |   1000K|   1000K|
|  12 |      TABLE ACCESS FULL                    | T2              |      1 |   1000K|   1000K|
|  13 |      TABLE ACCESS FULL                    | T1              |      1 |   1000K|   1000K|
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(:1 IS NOT NULL)
7 - access("T1"."COL2"=:1)
8 - access("T1"."COL1"="T2"."COL1")
10 - filter(:1 IS NULL)
11 - access("T1"."COL1"="T2"."COL1")

You can see in this plan how Oracle has split the query into two queries combined through UNION-ALL, with filter operations at operations 3 (:1 is not null) and 10 (:1 is null) allowing the runtime engine to execute only the appropriate branch. You will also notice that each branch was optimized separately; in this case the two branches get totally different paths because of the enormous difference in the estimated volumes of data.

So I wrote the transformed version of the original query and ran it on PostgreSQL 15.1.


select *
  from t1, t2
 where :1 is null
   and t1.col1 = t2.col1
union all
select *
  from t1, t2
 where :1 is not null
   and t1.col2 = :1
   and t1.col1 = t2.col1;
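If you want to run the rewritten query with a real bind variable rather than a literal substitution, a PREPARE/EXECUTE sketch works (the Oracle-style :1 placeholder becomes $1; search_t1 is just an illustrative statement name):

prepare search_t1(int) as
select *
  from t1, t2
 where $1 is null
   and t1.col1 = t2.col1
union all
select *
  from t1, t2
 where $1 is not null
   and t1.col2 = $1
   and t1.col1 = t2.col1;

execute search_t1(null);  -- returns the full join
execute search_t1(100);   -- returns the single matching row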

Here are the two execution plans, one with null assigned to :1 and the other with 100 assigned to :1.

Merge Join  (cost=2.95..103327.30 rows=1000000 width=240) (actual time=0.048..530.303 rows=1000000 loops=1)
  Merge Cond: (t1.col1 = t2.col1)
  Buffers: shared hit=2521 read=39313
  ->  Index Scan using t1_pk on t1  (cost=0.42..44165.43 rows=1000000 width=120) (actual time=0.036..152.671 rows=1000000 loops=1)
        Buffers: shared hit=3 read=20914
  ->  Index Scan using t2_pk on t2  (cost=0.42..44165.43 rows=1000000 width=120) (actual time=0.008..149.432 rows=1000000 loops=1)
        Buffers: shared hit=2518 read=18399
Planning:
  Buffers: shared hit=16
Planning Time: 0.177 ms
Execution Time: 561.953 ms


Nested Loop  (cost=0.85..16.89 rows=1 width=240) (actual time=0.024..0.026 rows=1 loops=1)
  Buffers: shared hit=3 read=5
  ->  Index Scan using t1_x02 on t1  (cost=0.42..8.44 rows=1 width=120) (actual time=0.016..0.017 rows=1 loops=1)
        Index Cond: (col2 = 100)
        Buffers: shared read=4
  ->  Index Scan using t2_pk on t2  (cost=0.42..8.44 rows=1 width=120) (actual time=0.005..0.005 rows=1 loops=1)
        Index Cond: (col1 = t1.col1)
        Buffers: shared hit=3 read=1
Planning:
  Buffers: shared hit=6 read=10
Planning Time: 0.258 ms
Execution Time: 0.041 ms


A key point to remember is that by removing the coalesce function we have given the optimizer a chance to get a better cardinality estimate.

Note that the cardinality estimate on t1 is correct now when we assign null to :1, which resulted in a merge join instead of a nested loop join.


Conclusion
When you use COALESCE in the where clause, the optimizer is poor at estimating the row count because it simply applies a 0.5% guess rule.

When you migrate the predicate column = nvl(:search_criteria, column) from Oracle, be cautious about replacing NVL with COALESCE. The two are functionally identical, but the optimizers treat them very differently. In PostgreSQL you have to split the query into two branches combined with union all so that the engine executes only the appropriate branch.

Addendum
If you want to run the experiment yourself on Oracle, here is the script I used to generate the data on Oracle 12.2.


drop table t1 purge;
create table t1
nologging
as
select
        rownum                          col1,
        rownum                          col2,
        lpad(rownum,10,'0')             col3,
        lpad('x',100,'x')               padding
from XMLTABLE('1 to 1000000')
;
drop table t2 purge;

create table t2 nologging as select * from t1;

alter table t1 add constraint t1_pk primary key(col1);
create index t1_x02 on t1(col2);
alter table t1 modify (col2 not null);
alter table t2 add constraint t2_pk primary key(col1);

exec dbms_stats.gather_table_stats('','T1');
exec dbms_stats.gather_table_stats('','T2');

select *
  from t1, t2
 where t1.col2 = nvl(:1, t1.col2)
   and t1.col1 = t2.col1;


   

postgresdba.com