• 1. Environment

Operating System: Aliyun ECS
Database: Oracle 11.2.0.1 ADG

  • 2. Background Knowledge

First, some background on bind variables.
When Oracle parses and executes a target SQL, it looks for a matching parent cursor in the library cache based on the hash value of the SQL text. This means that if two SQL texts differ even slightly, their hash values will almost certainly differ (and even if the hash values happen to collide, Oracle still compares the SQL text stored in the parent cursor), so a parse tree and execution plan cannot be shared between SQL statements whose texts are not exactly identical.
The biggest benefit of bind variables is that they can effectively reduce the number of hard parses in the system. They are not always necessary, however. For high-concurrency OLTP systems, bind variables should be used, because cutting down hard parses is a prerequisite for good performance and scalability on the database side. For OLAP/DSS systems, by contrast, the impact of hard parsing on overall performance is usually negligible, so you can reasonably choose not to use bind variables; even if you do, the improvement to system performance is very limited.
But we should not forget one feature, and drawback, of bind variables: bind peeking.
For a target SQL that uses bind variables, Oracle can determine its execution plan in one of two ways:
a. Use bind peeking.
b. Do not use bind peeking; instead, apply default selectivities (e.g. 5%) to predicates whose selectivity would otherwise depend on the specific input values.
Bind peeking is controlled by the hidden parameter _optim_peek_user_binds, whose default value is true, meaning peeking is enabled by default.
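Hidden parameters are not exposed through v$parameter. As a sketch (SYS access and the undocumented x$ views are assumed here, purely for illustration), the current setting can be checked with:

```sql
-- Hedged sketch: querying a hidden parameter requires SYS; x$ksppi/x$ksppcv
-- are undocumented internals, shown only to illustrate the check.
select i.ksppinm as name, v.ksppstvl as value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_optim_peek_user_binds';
```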
Bind peeking has long been criticized because, regardless of the specific values passed in later, subsequent executions reuse the parse tree and execution plan generated by the earlier hard parse. This was not alleviated until adaptive cursor sharing was introduced in Oracle 11g. The consequence is that the plan the CBO chose for one particular set of bind values may not be the optimal plan for the target SQL under other values, and it can cause the plan of the target SQL to change abruptly, directly affecting application performance.
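Whether adaptive cursor sharing is in play for a given statement can be checked with an illustrative query like the following (against the standard 11g v$sql columns):

```sql
-- is_bind_sensitive / is_bind_aware indicate whether adaptive cursor
-- sharing has marked the cursor and spawned bind-aware children.
select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, is_shareable
  from v$sql
 where sql_id = '&sql';
```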
Usually, you can use v$sql_bind_capture to obtain the captured bind values; I normally use the following statement:

select name, position, datatype_string, max_length, value_string,
       anydata.accesstimestamp(value_anydata)
  from v$sql_bind_capture
 where sql_id = '&sql';

The results are similar to those shown below.

The columns involved are:
NAME VARCHAR2(30) Name of the bind variable
POSITION NUMBER Position of the bind variable in the SQL statement
DATATYPE_STRING VARCHAR2(15) Textual representation of the bind datatype
MAX_LENGTH NUMBER Maximum bind length
VALUE_STRING VARCHAR2(4000) Value of the bind represented as a string
VALUE_ANYDATA ANYDATA Value of the bind represented using the self-descriptive Sys.AnyData datatype. This representation is useful to programmatically decode the value of the bind variable.
Here, the last column deserves special attention.
We usually decode it with ANYDATA.accesstimestamp(value_anydata). Per its description, this column holds the bind value represented with the self-descriptive Sys.AnyData data type, a representation useful for programmatically decoding the value of the bind variable.
The explanation from oracle-base is:
The ANYDATA type includes CONVERT* constructor functions for the majority of Oracle data types that can be accessed from SQL.
In other words, ANYDATA provides CONVERT* constructors for the common data types; for details, see:
https://oracle-base.com/articles/misc/anydata
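As a small self-contained illustration of the CONVERT*/ACCESS* pairing (against DUAL only), mirroring what we do with value_anydata:

```sql
-- Wrap a value with a Convert* constructor, then unwrap it with the
-- matching Access* function.
select anydata.accessdate(anydata.convertdate(sysdate))                as d,
       anydata.accesstimestamp(anydata.converttimestamp(systimestamp)) as ts
  from dual;
```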
Okay, with the groundwork laid, let's move on to the next step.

  • 3. Performance Problem Scenarios

Recently, after the previous round of SQL optimization, the nightly load had eased somewhat, but the situation was still not optimistic. This morning, looking at the elapsed time per snapshot, the 00:00 to 02:00 window had spiked again:

     17430 2017-04-18 21:00:53 2017-04-18 22:00:56 2.2418E+13 2.2417E+13        15.028409
     17431 2017-04-18 22:00:56 2017-04-18 23:00:58 2.2423E+13 2.2418E+13       84.6113666
     17432 2017-04-18 23:00:58 2017-04-19 00:00:01 2.2424E+13 2.2423E+13       12.7940667
     17433 2017-04-19 00:00:01 2017-04-19 01:00:53 2.2439E+13 2.2424E+13       245.712036
     17434 2017-04-19 01:00:53 2017-04-19 02:00:55 2.2452E+13 2.2439E+13       225.038459
     17435 2017-04-19 02:00:55 2017-04-19 03:00:58 2.2455E+13 2.2452E+13        40.387316
     17436 2017-04-19 03:00:58 2017-04-19 04:00:02 2.2460E+13 2.2455E+13       95.6910478
     17437 2017-04-19 04:00:02 2017-04-19 05:00:04 2.2461E+13 2.2460E+13       .588831683
     17438 2017-04-19 05:00:04 2017-04-19 06:00:07 2.2461E+13 2.2461E+13       2.96520707

With no better option, I collected the corresponding AWR report and took a look.

Statements similar to the following stood out:

select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = :1
           and t.currentdate = :2
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = :3
           and t.currentdate = :4
           and t.loanPath = '1')  

It is a fairly simple statement. Its execution plan looks as follows:

On paper the plan looks reasonable, so why does each execution take more than 4 seconds? Combined with heavy concurrent execution, this directly produced a pile of sessions queued up waiting.
Suspicious, I tried to check the actual execution plan of this SQL in SQL*Plus, but since it runs in the early morning, display_cursor could no longer show it; it had probably been aged out of the shared pool.

SQL> select * from table(dbms_xplan.display_cursor('9hkvc6qf0v5f4'));
PLAN_TABLE_OUTPUT
-----------------------------------------------
SQL_ID  9hkvc6qf0v5f4, child number 0
select count(*) as N_M1_Act from ( select t.loanacno  from
ReportOverdueFlowDtStatistics t where t.accountstatenew = '01' and
t.apporgcode=:1 and t.currentdate=:2 and t.loanPath='1' intersect
select t.loanacno from ReportOverdueFlowDtStatistics t where
t.accountstatenew = '02' and  t.apporgcode=:3 and t.currentdate=:4 and
t.loanPath='1' )
NOTE: cannot fetch plan for SQL_ID: 9hkvc6qf0v5f4, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)
13 rows selected.

So we fall back to display_awr to see what actually happened:

SQL> select * from table(dbms_xplan.display_awr('9hkvc6qf0v5f4'));                           
PLAN_TABLE_OUTPUT
--------------------------------------------------------------
SQL_ID 9hkvc6qf0v5f4
--------------------
select count(*) as N_M1_Act from ( select t.loanacno  from
ReportOverdueFlowDtStatistics t where t.accountstatenew = '01' and
t.apporgcode=:1 and t.currentdate=:2 and t.loanPath='1' intersect
select t.loanacno from ReportOverdueFlowDtStatistics t where
t.accountstatenew = '02' and  t.apporgcode=:3 and t.currentdate=:4 and
t.loanPath='1' )
Plan hash value: 740429392
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |   |   | 48989 (100)|      |
|   1 |  SORT AGGREGATE         |               |     1 |   |        |      |
|   2 |   VIEW              |               |     6 |   | 48989   (1)| 00:09:48 |
|   3 |    INTERSECTION         |               |   |   |        |      |
|   4 |     SORT UNIQUE         |               |   664 | 23240 | 48958   (1)| 00:09:48 |
|   5 |      TABLE ACCESS FULL      | REPORTOVERDUEFLOWDTSTATISTICS |   664 | 23240 | 48957   (1)| 00:09:48 |
|   6 |     SORT UNIQUE         |               |     6 |   210 |    31   (4)| 00:00:01 |
|   7 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |     6 |   210 |    30   (0)| 00:00:01 |
|   8 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |    38 |   |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
25 rows selected.

You can see that the branch above the INTERSECT was executed with a full table scan, and that is where the bulk of the time went, while the cost of the branch below the INTERSECT is negligible.
Oddly enough, this is exactly where the key question arises:
Question: why does one branch of the same SQL use the index while the other does not?
Next, look at the v$active_session_history view.
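The exact query used is not reproduced here; a sketch along the following lines (using the standard 11g ASH columns) surfaces the dominant waits and their blockers:

```sql
-- Group recent ASH samples by wait event, blocker and SQL to see which
-- session is holding everyone else up and what it is running.
select event, blocking_session, sql_id, count(*) as samples
  from v$active_session_history
 where event in ('read by other session', 'db file scattered read')
 group by event, blocking_session, sql_id
 order by samples desc;
```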

A large number of read by other session wait events can be seen.
A quick introduction to this wait event:
it was split off from the buffer busy waits event. Under what circumstances does it occur?
(1) One session is reading a data block from disk into the buffer cache while other sessions request the same block; if the block has not yet been fully read into the cache, those sessions wait on this event.
(2) Because some session is busy reading the block into memory, the event is usually accompanied by db file sequential read or db file scattered read; it rarely appears in isolation.
As the figure above shows, a large number of sessions waiting on read by other session are blocked by one and the same session, and that blocking session is performing a db file scattered read. So what are all these sessions executing?
Taking sql_id 9hkvc6qf0v5f4 as an example, the SQL statement below matches what the AWR report showed, corroborating the problem.

select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = :1
           and t.currentdate = :2
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = :3
           and t.currentdate = :4
           and t.loanPath = '1')  

So what SQL is the blocking session itself waiting to execute?
Picking one of its sql_ids at random, for example 65c0dsv0zbba4, we find:

select count(*) as M_Back_N
   from (select t.loanacno
           from ReportOverdueFlowDtStatistics t
          where (t.accountstatenew = '02' or t.accountstatenew = '03' or
                t.accountstatenew = '04' or t.accountstatenew = '05')
            and t.apporgcode = :1
            and t.currentdate = :2
            and t.loanPath = '1'
         intersect
         select t.loanacno
           from ReportOverdueFlowDtStatistics t
          where t.accountstatenew = '01'
            and t.apporgcode = :3
            and t.currentdate = :4
            and t.loanPath = '1')

Similar statements are likewise queued up awaiting execution.

  • 4. Analysis of Performance Problems

From the problem scenario above, we can roughly see the cause: the heavy time consumption and the large number of wait events all trace back to the poor performance of this family of similar SQL statements.
The question to emphasize again is:
Why does one branch of the execution plan use the index while the other does not?
First look at the index:
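The screenshot is omitted; the index's column order can be confirmed with a query like:

```sql
-- Column order of the composite index, leading column first.
select index_name, column_name, column_position
  from dba_ind_columns
 where index_name = 'IDX_RPTOFLOWDTSTATISTICS_02'
 order by column_position;
```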

It is clearly a composite index, leading with the currentdate column, followed by the apporgcode and accountstatenew columns.
The information statistics for the corresponding columns are as follows:

SQL>  select count(*) from ReportOverdueFlowDtStatistics;
  COUNT(*)
----------
  44865762
SQL> select accountstatenew,count(accountstatenew) from ReportOverdueFlowDtStatistics group by accountstatenew;
AC COUNT(ACCOUNTSTATENEW)
-- ----------------------
04         942986
01       36566202
02        1487274
03        1064456
05        4804844
SQL>  select apporgcode, count(apporgcode) from ReportOverdueFlowDtStatistics group by apporgcode;
APPORGCODE COUNT(APPORGCODE)
---------- -----------------
014401            354669
055101          1277
053401          1029
053301          1508
058101           276
018101          1870
052101           156
057201            84
012205             59239
013402            378892
017301            688490
016103            552585
012113             23313
053101           517
051101           477
059201           434
011203            525865
017209            167779
011507             66809
015502             23786
055201             9
014302            407479
011307             61394
014411             48066
059101            31
013405            527166
......
014506             58075
057401            46
217 rows selected.

From the query results, the ReportOverdueFlowDtStatistics table holds nearly 45 million rows. The currentdate column is a time series and naturally keeps changing. The apporgcode column has 217 distinct values with a very uneven distribution: several values account for 500,000-600,000 rows each, while the smallest has only 9. Of the five accountstatenew values, '01' alone accounts for roughly 36.5 million rows, and the rest are distributed fairly evenly.
The data types of the three columns are:
currentdate is DATE;
apporgcode is VARCHAR2;
accountstatenew is VARCHAR2.
So a composite index on these columns is feasible.
Next we sort the apporgcode values by row count:
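The sorting screenshot is omitted; it is simply the earlier aggregate with an ORDER BY:

```sql
-- Row count per apporgcode value, largest first.
select apporgcode, count(*) as cnt
  from ReportOverdueFlowDtStatistics
 group by apporgcode
 order by cnt desc;
```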

More than half of the values exceed 200,000 rows, and most exceed 100,000.
Then we lock onto one of these SQL statements, sql_id 9hkvc6qf0v5f4, as our example. The statement is as follows:

select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = :1
           and t.currentdate = :2
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = :3
           and t.currentdate = :4
           and t.loanPath = '1')

Let's query the values of its bind variables:
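The screenshot is omitted; the values were pulled with the v$sql_bind_capture query from section 2, substituting this sql_id:

```sql
-- Captured bind values for the problem statement, decoding the ANYDATA
-- column with accesstimestamp as before.
select name, position, datatype_string, value_string,
       anydata.accesstimestamp(value_anydata) as ts_value
  from v$sql_bind_capture
 where sql_id = '9hkvc6qf0v5f4';
```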

Then we take the first set of bind values and plug them into the query.
But a strange detail shows up here: the value captured for bind variable :2 is of TIMESTAMP type, reconstructed through ANYDATA. Let's test a conversion of this kind:

SQL>   select to_timestamp('2017-04-17 12:00:00','yyyy-mm-dd hh24:mi:ss') from dual;
TO_TIMESTAMP('2017-04-1712:00:00','YYYY-MM-DDHH24:MI:SS')
---------------------------------------------------------------------------
17-APR-17 12.00.00.000000000 PM

It is exactly the same value as the timestamp shown in the figure. But we know the currentdate column's data type is DATE, so we suspect an implicit conversion is taking place here.
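The direction of the conversion follows Oracle's datatype precedence: TIMESTAMP ranks above DATE, so in a DATE = TIMESTAMP comparison the DATE side is converted. This is easy to confirm on a scratch table (t_dt_demo is hypothetical, not part of the production schema):

```sql
-- Hypothetical demo table, not part of the production schema.
create table t_dt_demo (d date);
create index ix_t_dt_demo on t_dt_demo (d);

-- TIMESTAMP has higher precedence than DATE, so the column side is
-- converted: the plan shows INTERNAL_FUNCTION("D") and the index on d
-- cannot be used for access.
explain plan for select * from t_dt_demo where d = systimestamp;

-- With a DATE on the right-hand side the column is left untouched and
-- the index remains usable.
explain plan for select * from t_dt_demo where d = sysdate;
```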
Now that we have the values, we can test. Reproducing what happens in production amounts to applying to_timestamp to the currentdate predicate in the first branch:

SQL> select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = '011303'
           and t.currentdate = to_timestamp('2017-04-17 12:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = '011303'
           and t.currentdate = to_date('2017-04-18 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1');
Execution Plan
----------------------------------------------------------
Plan hash value: 4055505861
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     1 |   |  2589   (1)| 00:00:32 |
|   1 |  SORT AGGREGATE         |               |     1 |   |        |      |
|   2 |   VIEW              |               |     1 |   |  2589   (1)| 00:00:32 |
|   3 |    INTERSECTION         |               |   |   |        |      |
|   4 |     SORT UNIQUE         |               |     8 |   280 |  2583   (1)| 00:00:31 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |     8 |   280 |  2582   (1)| 00:00:31 |
|*  6 |       INDEX SKIP SCAN       | IDX_RPTOFLOWDTSTATISTICS_02   |  2675 |   |   586   (0)| 00:00:08 |
|   7 |     SORT UNIQUE         |               |     1 |    35 |     6  (17)| 00:00:01 |
|*  8 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |     1 |    35 |     5   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |     1 |   |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("T"."LOANPATH"='1')
   6 - access("T"."APPORGCODE"='011303' AND "T"."ACCOUNTSTATENEW"='01')
       filter("T"."APPORGCODE"='011303' AND "T"."ACCOUNTSTATENEW"='01' AND
          INTERNAL_FUNCTION("T"."CURRENTDATE")=TIMESTAMP' 2017-04-17 12:00:00.000000000')
   8 - filter("T"."LOANPATH"='1')
   9 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011303' AND "T"."ACCOUNTSTATENEW"='02')
Statistics
----------------------------------------------------------
      1  recursive calls
      0  db block gets
    906  consistent gets
    902  physical reads
      0  redo size
    525  bytes sent via SQL*Net to client
    519  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      2  sorts (memory)
      0  sorts (disk)
      1  rows processed

You can see that with to_timestamp an INDEX SKIP SCAN is used, and the main time is still spent in the first branch: logical and physical reads are both around 900. The predicate information confirms that an implicit conversion is indeed taking place, converting the column value to TIMESTAMP (the INTERNAL_FUNCTION on currentdate).
Next we rewrite to_timestamp back to the original to_date, so that no implicit conversion occurs:

SQL> select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = '011303'
           and t.currentdate = to_date('2017-04-17 12:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = '011303'
           and t.currentdate = to_date('2017-04-18 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1');
Execution Plan
----------------------------------------------------------
Plan hash value: 270485034
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     1 |   |    15  (14)| 00:00:01 |
|   1 |  SORT AGGREGATE         |               |     1 |   |        |      |
|   2 |   VIEW              |               |     1 |   |    15  (14)| 00:00:01 |
|   3 |    INTERSECTION         |               |   |   |        |      |
|   4 |     SORT UNIQUE         |               |     1 |    35 |     9  (12)| 00:00:01 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |     1 |    35 |     8   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |     5 |   |     4   (0)| 00:00:01 |
|   7 |     SORT UNIQUE         |               |     1 |    35 |     6  (17)| 00:00:01 |
|*  8 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |     1 |    35 |     5   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |     1 |   |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("T"."LOANPATH"='1')
   6 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-17 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011303' AND "T"."ACCOUNTSTATENEW"='01')
   8 - filter("T"."LOANPATH"='1')
   9 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011303' AND "T"."ACCOUNTSTATENEW"='02')
Statistics
----------------------------------------------------------
      1  recursive calls
      0  db block gets
      8  consistent gets
      6  physical reads
      0  redo size
    525  bytes sent via SQL*Net to client
    519  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      2  sorts (memory)
      0  sorts (disk)
      1  rows processed

We can see that there is now no implicit conversion: the plan is an INDEX RANGE SCAN, logical and physical reads are both under 10, and the predicate information shows the rows being located directly through access conditions.
Note, though, that even with to_timestamp, this particular value produced a skip scan rather than a full table scan.
The row count for '011303' is 96,379:

SQL> select apporgcode, count(apporgcode) from ReportOverdueFlowDtStatistics where apporgcode='011303' group by apporgcode;
APPORGCODE COUNT(APPORGCODE)
---------- -----------------
011303             96379

Then we take a value with a larger row count, such as '011503', which corresponds to about 560,000 rows:

SQL>  select apporgcode, count(apporgcode) from ReportOverdueFlowDtStatistics where apporgcode='011503' group by apporgcode;
APPORGCODE COUNT(APPORGCODE)
---------- -----------------
011503            559191

Then look at the execution plan.
With to_timestamp, the execution plan is as follows:

SQL> select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = '011503'
           and t.currentdate = to_timestamp('2017-04-17 12:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = '011503'
           and t.currentdate = to_date('2017-04-18 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1');
Execution Plan
----------------------------------------------------------
Plan hash value: 740429392
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     1 |   |   107K  (1)| 00:21:29 |
|   1 |  SORT AGGREGATE         |               |     1 |   |        |      |
|   2 |   VIEW              |               |    11 |   |   107K  (1)| 00:21:29 |
|   3 |    INTERSECTION         |               |   |   |        |      |
|   4 |     SORT UNIQUE         |               |  1447 | 50645 |   107K  (1)| 00:21:28 |
|*  5 |      TABLE ACCESS FULL      | REPORTOVERDUEFLOWDTSTATISTICS |  1447 | 50645 |   107K  (1)| 00:21:28 |
|   6 |     SORT UNIQUE         |               |    11 |   385 |    34   (3)| 00:00:01 |
|*  7 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |    11 |   385 |    33   (0)| 00:00:01 |
|*  8 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |    38 |   |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("T"."APPORGCODE"='011503' AND "T"."LOANPATH"='1' AND "T"."ACCOUNTSTATENEW"='01' AND
          INTERNAL_FUNCTION("T"."CURRENTDATE")=TIMESTAMP' 2017-04-17 12:00:00.000000000')
   7 - filter("T"."LOANPATH"='1')
   8 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011503' AND "T"."ACCOUNTSTATENEW"='02')
Statistics
----------------------------------------------------------
      1  recursive calls
      0  db block gets
     393452  consistent gets
     393393  physical reads
      0  redo size
    525  bytes sent via SQL*Net to client
    519  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      2  sorts (memory)
      0  sorts (disk)
      1  rows processed

Sure enough, just like the actual production plan: a full table scan, with an implicit conversion, and logical and physical reads of nearly 400,000. Performance is poor.
With to_date, however, the execution plan is completely different:

SQL> select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = '011503'
           and t.currentdate = to_date('2017-04-17 12:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = '011503'
           and t.currentdate = to_date('2017-04-18 00:00:00','yyyy-mm-dd hh24:mi:ss')
           and t.loanPath = '1');
Execution Plan
----------------------------------------------------------
Plan hash value: 270485034
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     1 |   |   677   (1)| 00:00:09 |
|   1 |  SORT AGGREGATE         |               |     1 |   |        |      |
|   2 |   VIEW              |               |    11 |   |   677   (1)| 00:00:09 |
|   3 |    INTERSECTION         |               |   |   |        |      |
|   4 |     SORT UNIQUE         |               |   246 |  8610 |   643   (1)| 00:00:08 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |   246 |  8610 |   642   (0)| 00:00:08 |
|*  6 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |   852 |   |     6   (0)| 00:00:01 |
|   7 |     SORT UNIQUE         |               |    11 |   385 |    34   (3)| 00:00:01 |
|*  8 |      TABLE ACCESS BY INDEX ROWID| REPORTOVERDUEFLOWDTSTATISTICS |    11 |   385 |    33   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN      | IDX_RPTOFLOWDTSTATISTICS_02   |    38 |   |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("T"."LOANPATH"='1')
   6 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-17 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011503' AND "T"."ACCOUNTSTATENEW"='01')
   8 - filter("T"."LOANPATH"='1')
   9 - access("T"."CURRENTDATE"=TO_DATE(' 2017-04-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
          "T"."APPORGCODE"='011503' AND "T"."ACCOUNTSTATENEW"='02')
Statistics
----------------------------------------------------------
      1  recursive calls
      0  db block gets
     42  consistent gets
     40  physical reads
      0  redo size
    525  bytes sent via SQL*Net to client
    519  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      2  sorts (memory)
      0  sorts (disk)
      1  rows processed 

You can see that the first branch now takes an INDEX RANGE SCAN, logical and physical reads are only about 40, and the predicate information again shows the rows being located directly via access conditions.

  • 5. Summary of Solutions
From the analysis above we can draw a conclusion. Because of bind variable peeking, the initial hard parse most likely happened with bind values corresponding to a large number of rows, and, as the actual execution plan shows, the to_timestamp implicit conversion was in effect. Given the apporgcode distribution (most values exceed 200,000 rows), adaptive cursor sharing apparently never produced a better plan: instead of an index skip scan, the first branch ended up as a full table scan and performance degraded. It is also possible that, because the peeked bind values always mapped to large row counts, the adaptive cursor sharing feature was simply never triggered.
The biggest sticking point, however, is the to_timestamp implicit conversion, which accounts for most of the performance problem; switching to to_date changes the outcome completely.
The issue was handed to the front-end developers: the fix is to modify the application so that the date bind is passed as a DATE, avoiding the ANYDATA/TIMESTAMP implicit conversion on the bound value.
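If the application fix takes time to ship, a stop-gap (a sketch of one option, not what was actually deployed here) is to cast the bind back to DATE inside the SQL itself, so the conversion lands on the bind value rather than on the column:

```sql
-- Sketch: CAST the TIMESTAMP binds to DATE so currentdate is compared
-- without INTERNAL_FUNCTION and the composite index stays usable.
select count(*) as N_M1_Act
  from (select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '01'
           and t.apporgcode = :1
           and t.currentdate = cast(:2 as date)
           and t.loanPath = '1'
        intersect
        select t.loanacno
          from ReportOverdueFlowDtStatistics t
         where t.accountstatenew = '02'
           and t.apporgcode = :3
           and t.currentdate = cast(:4 as date)
           and t.loanPath = '1');
```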