
PreparedStatementHandle leak leading to OOM #2264

Closed
nicolaslledo opened this issue Nov 27, 2023 · 14 comments · Fixed by #2272
Labels
Bug A bug in the driver. A high priority item that one can expect to be addressed quickly.
Milestone
12.6.0

Comments

@nicolaslledo

nicolaslledo commented Nov 27, 2023

Driver version

mssql-jdbc-12.4.2.jre8.jar

SQL Server version

Microsoft SQL Server 2019 (RTM-CU21) (KB5025808) - 15.0.4316.3 (X64) Jun 1 2023 16:32:31 Copyright (C) 2019 Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2019 Standard 10.0 (Build 17763: ) (Hypervisor)

Client Operating System

Windows Server 2019

JAVA/JVM version

version 1.8.0_362, vendor Amazon.com Inc.

Table schema

CREATE TABLE [sioux].[aff_interv_lig](
	[int_lig_id] [bigint] NOT NULL,
	[int_id] [bigint] NULL,
	[projet_id] [bigint] NULL,
	[tache_id] [bigint] NULL,
	[lov_nature_prest_id] [bigint] NULL,
	[designation] [varchar](max) NULL,
	[qte_prevue] [decimal](18, 0) NULL,
	[qte_vendue] [decimal](20, 9) NULL,
	[lov_fact_unit_id] [bigint] NULL,
	[prix_unitaire] [decimal](20, 9) NULL,
	[montant] [decimal](20, 9) NULL,
	[code_fact_id] [bigint] NULL,
	[code_tva_id] [bigint] NULL,
	[date_prevue] [datetime2](0) NULL,
	[date_realisee] [datetime2](0) NULL,
	[executant_id] [bigint] NULL,
	[structure_id] [bigint] NULL,
	[cli_prestation_id] [bigint] NULL,
	[production_flag] [varchar](max) NULL,
	[extourne_id] [bigint] NULL,
	[designation2] [varchar](max) NULL,
	[ressource_int_id] [bigint] NULL,
	[lov_nature_id] [bigint] NULL,
	[lov_famille_id] [bigint] NULL,
	[filiere_id] [bigint] NULL,
	[heure] [varchar](max) NULL,
	[vehicule_id] [bigint] NULL,
	[num_ticket] [varchar](max) NULL,
	[tonnage] [decimal](20, 9) NULL,
	[code_regroupement] [decimal](20, 9) NULL,
	[tour_valide] [varchar](max) NULL,
	[tour_ordre] [decimal](20, 0) NULL,
	[remorque_id] [bigint] NULL,
	[benne1_id] [bigint] NULL,
	[benne2_id] [bigint] NULL,
	[benne3_id] [bigint] NULL,
	[benne4_id] [bigint] NULL,
	[cli_catalogue_id] [bigint] NULL,
	[cli_catalogue_client_id] [bigint] NULL,
	[origine_ligne_id] [bigint] NULL,
	[lov_cat_vehicule_id] [bigint] NULL,
	[nb_heure] [decimal](20, 9) NULL,
	[flag_rvi] [varchar](max) NULL,
	[commande_ligne_id] [bigint] NULL,
	[rvi_id] [bigint] NULL,
	[lov_site_traitement_id] [bigint] NULL,
	[retour_tour_valide] [varchar](max) NULL,
	[bsd] [varchar](max) NULL,
	[lov_interv_tps_estim_id] [bigint] NULL,
	[date_traitement] [datetime2](0) NULL,
	[num_ot] [varchar](max) NULL,
	[site_traitement_id] [bigint] NULL,
	[bordereau_anc] [varchar](max) NULL,
	[bordereau_anc_libre] [varchar](max) NULL,
	[lov_code_dr_id] [bigint] NULL,
	[lov_qualif_trait_final_id] [bigint] NULL,
	[unites] [varchar](max) NULL,
	[lov_unites_id] [bigint] NULL,
	[nature_produit_id] [bigint] NULL,
	[unite_mesure1] [varchar](max) NULL,
	[unite_mesure2] [varchar](max) NULL,
	[date_debut] [datetime2](0) NULL,
	[date_fin] [datetime2](0) NULL,
	[commentaire_technicien] [varchar](max) NULL,
	[code_conso_id] [bigint] NULL,
	[code_rupture_id] [bigint] NULL,
	[flag_planifiee] [varchar](max) NULL,
	[last_modification] [datetime2](6) NULL,
	[ordre] [decimal](20, 9) NULL,
	[updated_at_utc] [datetime2](6) NULL,
 CONSTRAINT [pkc_aff_interv_lig] PRIMARY KEY CLUSTERED 
(
	[int_lig_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Problem description

PreparedStatementHandle instances are leaking. A batch job updates or inserts around 4 million rows in a single table.
For an unknown reason, although there are only two prepared statements, the cache defined with setStatementPoolingCacheSize (1000 in my case) is rapidly exhausted, and the number of references to com.microsoft.sqlserver.jdbc.SQLServerConnection$PreparedStatementHandle and com.microsoft.sqlserver.jdbc.SQLServerConnection$CityHash128Key instances keeps growing.

This leads to an OutOfMemoryError.
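
For context, statement pooling in this driver can be configured either through connection-string properties or through the driver-specific API; a minimal sketch of the setup described above (server name and credentials are placeholders, the cache size of 1000 comes from this report):

import java.sql.Connection;
import java.sql.DriverManager;
import com.microsoft.sqlserver.jdbc.SQLServerConnection;

public class PoolingConfig {
    public static void main(String[] args) throws Exception {
        // enable the handle cache via connection-string properties
        String url = "jdbc:sqlserver://myserver;databaseName=mydb"
                + ";disableStatementPooling=false;statementPoolingCacheSize=1000";
        try (Connection c = DriverManager.getConnection(url, "user", "pass")) {
            // equivalent runtime configuration through the driver-specific API
            SQLServerConnection sc = c.unwrap(SQLServerConnection.class);
            sc.setDisableStatementPooling(false);
            sc.setStatementPoolingCacheSize(1000);
        }
    }
}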

Expected behavior

Only two prepared statement handles should be held in the connection cache.

Actual behavior

[Screenshots: heap histograms showing steadily growing instance counts of SQLServerConnection$PreparedStatementHandle and SQLServerConnection$CityHash128Key]

Error message/stack trace

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
        at java.lang.StringBuilder.append(StringBuilder.java:141)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.<init>(SQLServerPreparedStatement.java:561)
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:512)
        at one40.aff_interv_lig_0_1.aff_interv_lig.tDBInput_3Process(aff_interv_lig.java:9265)
        at one40.aff_interv_lig_0_1.aff_interv_lig.tDBInput_2Process(aff_interv_lig.java:1944)
        at one40.aff_interv_lig_0_1.aff_interv_lig.tDBConnection_1Process(aff_interv_lig.java:1315)
        at one40.aff_interv_lig_0_1.aff_interv_lig.tSetGlobalVar_2Process(aff_interv_lig.java:1068)
        at one40.aff_interv_lig_0_1.aff_interv_lig.runJobInTOS(aff_interv_lig.java:15367)
        at one40.aff_interv_lig_0_1.aff_interv_lig.runJob(aff_interv_lig.java:15093)
        at one40.siouxflow_0_1.SiouxFlow.tRunJob_10Process(SiouxFlow.java:2238)
        at one40.siouxflow_0_1.SiouxFlow.runJobInTOS(SiouxFlow.java:7762)
        at one40.siouxflow_0_1.SiouxFlow.runJob(SiouxFlow.java:7479)
        at one40.flow_0_1.Flow.tRunJob_7Process(Flow.java:2571)
        at one40.flow_0_1.Flow.tRunJob_4Process(Flow.java:2291)
        at one40.flow_0_1.Flow.runJobInTOS(Flow.java:6290)
        at one40.flow_0_1.Flow.main(Flow.java:5880)

Any other details that can be helpful

The Java code generated by Talend Open Studio:

public void tDBInput_3Process(final java.util.Map<String, Object> globalMap) throws TalendException {
    java.util.Map<String, Object> resourceMap = new java.util.HashMap<String, Object>();
    try {
        if(resumeIt || globalResumeTicket) { // start the resume
            
            java.sql.Connection msSqlConnection = null; // MSSQL connection setup elided by the reporter
            
            String updateQuery = "UPDATE [aff_interv_lig] SET [int_id] = ?, ... WHERE [int_lig_id] = ?";
            java.sql.PreparedStatement stmtUpdate = msSqlConnection.prepareStatement(updateQuery);

            String insertQuery = "INSERT INTO [aff_interv_lig] ([int_lig_id], ...) VALUES (?,...)";
            java.sql.PreparedStatement stmtInsert = msSqlConnection.prepareStatement(insertQuery);
            
            java.sql.Connection connOracle = null;
            String driverOracle = "oracle.jdbc.OracleDriver";
            java.lang.Class.forName(driverOracle);

            String url_tDBInput_3 = "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(host=" + context.Sioux_Server + ")(port=" + context.Sioux_Port + "))(connect_data=(service_name=" + context.Sioux_ServiceName + ")))";
            connOracle = java.sql.DriverManager.getConnection(url_tDBInput_3, atnParamsPrope_tDBInput_3);
            

            java.sql.Statement stmt_tDBInput_3 = connOracle.createStatement( java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
            stmt_tDBInput_3.setFetchSize(10000);
            String dbquery_tDBInput_3 = "SELECT ..........";

            java.sql.ResultSet rsOracle = null;

            try {
                rsOracle = stmt_tDBInput_3.executeQuery(dbquery_tDBInput_3);
                java.sql.ResultSetMetaData rsmd_tDBInput_3 = rsOracle.getMetaData();
                int colQtyInRs_tDBInput_3 = rsmd_tDBInput_3.getColumnCount();

                String tmpContent_tDBInput_3 = null;

                while(rsOracle.next()) {

                    if(colQtyInRs_tDBInput_3 < 1) {
                        sioux.int_lig_id = null;
                    } else {

                        if(rsOracle.getObject(1) != null) {
                            sioux.int_lig_id = rsOracle.getLong(1);
                        } else {

                            sioux.int_lig_id = null;
                        }
                    }

                    { // start of Var scope
                        uiOds = null;
                        // # Output table : 'uiOds'
                        uiOds_tmp.int_lig_id = sioux.int_lig_id;
                        uiOds = uiOds_tmp;
                    } // end of Var scope

                    // Start of branch "uiOds"
                    if(uiOds != null) {
                        
                        int updateFlag_tDBOutput_1 = 0;
                        try {
                            if(uiOds.int_id == null) {
                                stmtUpdate.setNull(1, java.sql.Types.INTEGER);
                            } else {
                                stmtUpdate.setLong(1, uiOds.int_id);
                            }

                            updateFlag_tDBOutput_1 = stmtUpdate.executeUpdate();
                            updatedCount_tDBOutput_1 = updatedCount_tDBOutput_1 + updateFlag_tDBOutput_1;
                            if(updateFlag_tDBOutput_1 == 0) {

                                if(uiOds.int_lig_id == null) {
                                    stmtInsert.setNull(1, java.sql.Types.INTEGER);
                                } else {
                                    stmtInsert.setLong(1, uiOds.int_lig_id);
                                }

                                insertedCount_tDBOutput_1 = insertedCount_tDBOutput_1
                                        + stmtInsert.executeUpdate();
                                nb_line_tDBOutput_1++;
                            } else {
                                nb_line_tDBOutput_1++;

                            }
                        } catch (java.sql.SQLException e) {
                            // exception handling elided
                        }

                    } // End of branch "uiOds"


                }
            } finally {
                if(rsOracle != null) {
                    rsOracle.close();
                }
                if(stmt_tDBInput_3 != null) {
                    stmt_tDBInput_3.close();
                }
                if(connOracle != null && !connOracle.isClosed()) {
                    connOracle.close();
                }
            }

            if(stmtUpdate != null) {
                stmtUpdate.close();
                resourceMap.remove("pstmtUpdate_tDBOutput_1");
            }
            if(stmtInsert != null) {
                stmtInsert.close();
                resourceMap.remove("pstmtInsert_tDBOutput_1");
            }
            resourceMap.put("statementClosed_tDBOutput_1", true);
        } // end the resume


    } catch(java.lang.Exception e) {
        // .... (wraps e in a TalendException te; details elided)
        throw te;
    } catch(java.lang.Error error) {
        // ...
        throw error;
    } finally {
        // free memory for "tAggregateRow_1_AGGIN"
        try {

            if(resourceMap.get("statementClosed_tDBOutput_1") == null) {
                java.sql.PreparedStatement pstmtUpdateToClose_tDBOutput_1 = null;
                if((pstmtUpdateToClose_tDBOutput_1 = (java.sql.PreparedStatement) resourceMap.remove("pstmtUpdate_tDBOutput_1")) != null) {
                    pstmtUpdateToClose_tDBOutput_1.close();
                }
                java.sql.PreparedStatement pstmtInsertToClose_tDBOutput_1 = null;
                if((pstmtInsertToClose_tDBOutput_1 = (java.sql.PreparedStatement) resourceMap.remove("pstmtInsert_tDBOutput_1")) != null) {
                    pstmtInsertToClose_tDBOutput_1.close();
                }
            }

            if(resourceMap.get("statementClosed_tDBOutput_3") == null) {
                java.sql.PreparedStatement pstmtUpdateToClose_tDBOutput_3 = null;
                if((pstmtUpdateToClose_tDBOutput_3 = (java.sql.PreparedStatement) resourceMap.remove("pstmtUpdate_tDBOutput_3")) != null) {
                    pstmtUpdateToClose_tDBOutput_3.close();
                }
                java.sql.PreparedStatement pstmtInsertToClose_tDBOutput_3 = null;
                if((pstmtInsertToClose_tDBOutput_3 = (java.sql.PreparedStatement) resourceMap.remove("pstmtInsert_tDBOutput_3")) != null) {
                    pstmtInsertToClose_tDBOutput_3.close();
                }
            }

            /**
             * [tDBOutput_3 finally ] stop
             */

        } catch(java.lang.Exception e) {
            // ignore
        } catch(java.lang.Error error) {
            // ignore
        }
        resourceMap = null;
    }

    globalMap.put("tDBInput_3_SUBPROCESS_STATE", 1);
}

JDBC trace logs

@ecki
Contributor

ecki commented Nov 27, 2023

I think your sample code is hard to follow; you should include all looping logic and addBatch functionality, and try to use only the minimum number of columns possible, for readability.

If you add prepared statements to a batch, I think it will keep them all open until you close the batch (potentially even the connection?).

@Jeffery-Wasty
Contributor

Hi @nicolaslledo,

We'll look at this, but the above comment is right: this sample code is hard to follow, and if it can be simplified, that would be very helpful.

@nicolaslledo
Author

nicolaslledo commented Nov 28, 2023

I think your sample code is hard to follow; you should include all looping logic and addBatch functionality, and try to use only the minimum number of columns possible, for readability.

If you add prepared statements to a batch, I think it will keep them all open until you close the batch (potentially even the connection?).

I have edited the code. It is generated by Talend Open Studio, an ETL tool recently acquired by Qlik.
I cannot do much to get clean code, so I removed most of the duplicated code and renamed some variables.

addBatch is not used; Talend generates the code that way.

@nicolaslledo
Author

nicolaslledo commented Nov 28, 2023

I have a workaround for now:
I simply call sqlServerConnection.setDisableStatementPooling(true).

With version 6 of the MSSQL driver this made the process extremely slow.
With the current version, I can manage: three and a half hours for 4+ million inserts and 4 million more updates.
The memory footprint is minimal.
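
A minimal sketch of that workaround, assuming the job's MSSQL connection is unwrapped to the driver-specific SQLServerConnection (the same effect is available via the disableStatementPooling connection-string property):

import java.sql.Connection;
import java.sql.SQLException;
import com.microsoft.sqlserver.jdbc.SQLServerConnection;

class StatementPoolingWorkaround {
    // call once after opening the MSSQL connection used by the job
    static void disablePooling(Connection conn) throws SQLException {
        SQLServerConnection sqlServerConnection = conn.unwrap(SQLServerConnection.class);
        // with pooling disabled the driver caches no PreparedStatementHandle instances,
        // trading per-statement re-preparation for a flat memory footprint
        sqlServerConnection.setDisableStatementPooling(true);
    }
}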

@ecki
Contributor

ecki commented Nov 28, 2023

It looks like a loop reusing prepared statements (an update and an insert statement) that are not closed within each loop iteration. Maybe that's enough to reproduce the problem while statement pooling is in use.

@Jeffery-Wasty
Contributor

The repro code above does not use the MSSQL JDBC driver. Are you able to provide appropriate repro code for our driver?

@nicolaslledo
Author

nicolaslledo commented Nov 29, 2023

I tried to remove as much unwanted detail as I could.
Here is what the method does:

  • performs a SELECT query on an Oracle database
  • establishes a connection to an MS SQL database
  • prepares 2 statements:
    • an UPDATE PreparedStatement
    • an INSERT PreparedStatement
  • loops over the ResultSet:
    • maps the data
    • executes the update
    • if 0 rows were updated, executes the insert
  • when the loop is over (4 million iterations later), closes both PreparedStatements

What I saw:
When the pool is used, each executed statement computes a hash (CityHash128Key) and tries to retrieve a PreparedStatementHandle from the cache.
Two things happen:

  1. The hash varies from one execution to the next even though the query text stays the same. The change is in the "guessed" datatype: sometimes it is the scale of a decimal, sometimes a date parameter becomes a datetime2, or an int becomes a bigint. Each variation puts a new entry in the cache, and since 70 bind variables are provided 4 million times for the 2 prepared statements, the number of unique hashes goes through the roof. See the attachments for an example, and the sketch after this list:
     update-1.txt
     update-2.txt
99c99
< @P27 decimal(38,2),
---
> @P27 decimal(38,1),
112c112
< @P40 decimal(38,2),
---
> @P40 decimal(38,0),
  2. When the cache is full, some PreparedStatementHandle instances are progressively discarded, but somehow they stay in memory. Hence the OOM after nearly 9 million insert/update executions.
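
A minimal sketch of point 1, using a hypothetical single-parameter statement (the decimal(38,x) declarations mirror the diff above): the driver derives the declared scale from the BigDecimal value itself, so the same SQL text can yield different parameter signatures and therefore different cache keys.

import java.math.BigDecimal;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class ScaleVariation {
    // ps prepares the same SQL text in both executions, e.g. "UPDATE tab SET c1 = ?"
    static void twoCacheEntries(PreparedStatement ps) throws SQLException {
        ps.setBigDecimal(1, new BigDecimal("1.25")); // sent as @P1 decimal(38,2)
        ps.executeUpdate();                          // -> cache key A
        ps.setBigDecimal(1, new BigDecimal("1.5"));  // sent as @P1 decimal(38,1)
        ps.executeUpdate();                          // -> cache key B: a second handle for the same SQL
    }
}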

My two cents on this - 2 possible problems:

  1. maybe, really maybe, the hash key should not rely on the guessed datatypes, or the guess shouldn't be that precise
  2. the PreparedStatementHandle instances are not freed

@ecki
Contributor

ecki commented Nov 29, 2023

That’s a good analysis. I think the variations in signatures are (not only for the client) something that can be optimized a bit - for example, using 3 digits of precision and only more if needed? (I don’t think the types can be removed from the hash.) But how many different hashes does that create in your case?

The second point sounds more severe.

BTW: maybe it makes sense to chunk your batches to something like 10k; that also makes the transactions smaller (and would allow streaming/parallel preparation).
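
A minimal sketch of that chunking suggestion, assuming auto-commit is off and using illustrative names for the connection, Oracle ResultSet, and update statement from the job above:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class ChunkedCopy {
    static void copyInChunks(Connection conn, ResultSet rs, PreparedStatement stmtUpdate) throws SQLException {
        final int CHUNK = 10_000; // commit every 10k rows, as suggested
        long rows = 0;
        while (rs.next()) {
            stmtUpdate.setLong(1, rs.getLong(1)); // ... map the remaining parameters ...
            stmtUpdate.executeUpdate();
            if (++rows % CHUNK == 0) {
                conn.commit(); // keep each transaction small
            }
        }
        conn.commit(); // flush the final partial chunk
    }
}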

@Jeffery-Wasty
Contributor

Jeffery-Wasty commented Nov 29, 2023

Hi @nicolaslledo,

My confusion comes from this line:

String driverOracle = "oracle.jdbc.OracleDriver";

Are you not using the Oracle JDBC driver in this example? Has this problem been replicated with the MSSQL JDBC driver? Thank you for the breakdown, we'll take a look at this further if this is a MSSQL JDBC issue, but if not, you would need to reach out to the appropriate team.

@ecki
Contributor

ecki commented Nov 30, 2023

variations in signatures

I just found that 1.2.15 might reduce this part, as it no longer uses variable scale by default, as seen here #2248

@nicolaslledo
Author

Hi @nicolaslledo,

My confusion comes from this line:

String driverOracle = "oracle.jdbc.OracleDriver";

Are you not using the Oracle JDBC driver in this example? Has this problem been replicated with the MSSQL JDBC driver? Thank you for the breakdown, we'll take a look at this further if this is a MSSQL JDBC issue, but if not, you would need to reach out to the appropriate team.

Sorry for the confusion. Data is read from Oracle, then inserted/updated into SQL Server using the MS SQL driver.
I cannot provide all the code (where the driver is loaded and set in some Map) because it is irrelevant.
I'm using the MS SQL driver with the version I stated. In the screenshot showing the heap content, you'll see entries specific to MS SQL.

@nicolaslledo
Author

That’s a good analysis. I think the variations in signatures are (not only for the client) something that can be optimized a bit - for example, using 3 digits of precision and only more if needed? (I don’t think the types can be removed from the hash.) But how many different hashes does that create in your case?

The second point sounds more severe.

BTW: maybe it makes sense to chunk your batches to something like 10k; that also makes the transactions smaller (and would allow streaming/parallel preparation).

I upped the pool to 10K to delay the memory leak, and it eventually came, so I presume the number of distinct hashes is even higher. It doesn't only involve BigDecimal and DECIMAL, but also INT / BIGINT and DATETIME2 / DATE. Most columns may be null too.

As you point out, the main problem is the leak, even if the hash variation is what triggers it.

You're right, the developer was a bit rough with the transaction size and duration. ^^'
10K-row transactions are the target.

@Jeffery-Wasty
Contributor

Jeffery-Wasty commented Dec 11, 2023

Hi @nicolaslledo,

We may have a solution to this issue, if you are able to test. We're still not able to replicate the issue on our end, but the thought is that discarded prepared statement handles are not cleaned up often enough, being cleaned up only on connection close. This becomes a problem when running UPDATEs and INSERTs at the scale you describe above. We have moved the cleanup code to the start of every execute, which should, if this theory is correct, resolve the issue. The changes are in #2272. In the meantime, we will continue trying to replicate this on our end.
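
A toy model of the behavior being described (names are illustrative, not the actual driver code): handles evicted from the cache go onto a discard queue, and the fix drains that queue at every execute instead of waiting for connection close.

import java.util.concurrent.ConcurrentLinkedQueue;

class HandleCleanupModel {
    static final class Handle { final int serverHandle; Handle(int h) { serverHandle = h; } }

    private final ConcurrentLinkedQueue<Handle> discarded = new ConcurrentLinkedQueue<>();

    void onCacheEviction(Handle h) {
        discarded.add(h); // queued for sp_unprepare on the server
    }

    // Before the fix this ran only on connection close, so in a long-running job
    // the queue (and every handle it referenced) just kept growing.
    // The fix also runs it at the start of every statement execution.
    void drainDiscardedHandles() {
        Handle h;
        while ((h = discarded.poll()) != null) {
            // issue sp_unprepare(h.serverHandle) and drop the last reference
        }
    }
}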

@ecki
Contributor

ecki commented Dec 13, 2023

I have a reproducer; the following test class triggers the problem:

package net.eckenfels.test;

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

import org.testcontainers.containers.MSSQLServerContainer;
import org.testcontainers.utility.DockerImageName; // used by the commented-out image pin below

public class Main {
    public static void main(String[] args) throws SQLException
    {
        MSSQLServerContainer dbContainer = new MSSQLServerContainer<>(/*
                                                                       * DockerImageName.parse(
                                                                       * "mcr.microsoft.com/mssql/server:2017-CU12")
                                                                       */).acceptLicense();
        dbContainer.start();
        String url = dbContainer.getJdbcUrl() + ";disableStatementPooling=false;statementPoolingCacheSize=1000";
        String user = dbContainer.getUsername();
        String pass = dbContainer.getPassword();

        try (Connection c = DriverManager.getConnection(url, user, pass))
        {
            createTable(c);
            c.setAutoCommit(false); // or true, doesn't change the outcome
            try (PreparedStatement ps = c.prepareStatement(
                    "UPDATE tab SET c1=?, c2=?, c3=?, c4=?, c5=?, c6=?, c7=?, c8=?, c9=?, c10=?, c11=?, c12=?, c13=?, c14=?, c15=?, c16=?, c17=?, c18=?, c19=?, c20=? WHERE cKey=?"))
            {
                for (int i = 0; i < 10_000_000; i++) {
                    setArguments(i, ps);
                    ps.executeUpdate();
                    if (i % 100_000 == 0)
                        System.out.println(" " + i);
                }
            }
            c.commit();
        }
    }

    private static void setArguments(int i, PreparedStatement ps) throws SQLException
    {
        ps.setString(21, "key");
        for(int c = 1; c < 21; c++)
        {
            // for each iteration, use a different DECIMAL scale per column, binary-encoding the iteration number
            boolean bit = (i & (1 << (c-1))) != 0;
            BigDecimal num = bit ? new BigDecimal(1.1) : new BigDecimal(1);
            ps.setBigDecimal(c, num);
        }       
    }

    private static void createTable(Connection c) throws SQLException
    {
        try (Statement s = c.createStatement())
        {
            s.execute("CREATE TABLE tab (cKey VARCHAR(100), c1 DECIMAL, c2 DECIMAL, c3 DECIMAL,"
                +"c4 DECIMAL, c5 DECIMAL, c6 DECIMAL, c7 DECIMAL, c8 DECIMAL, c9 DECIMAL,"
                +"c10 DECIMAL, c11 DECIMAL, c12 DECIMAL, c13 DECIMAL, c14 DECIMAL, c15 DECIMAL,"
                +"c16 DECIMAL, c17 DECIMAL, c18 DECIMAL, c19 DECIMAL, c20 DECIMAL)");
            s.execute("INSERT INTO tab(cKey) VALUES('key')");
        }
    }
}

with the following pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>net.eckenfels.test</groupId>
    <artifactId>mssql-leaktest</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.release>11</maven.compiler.release>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <version.slf4j>2.0.9</version.slf4j>
        <version.testcontainers>1.19.2</version.testcontainers>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.microsoft.sqlserver</groupId>
            <artifactId>mssql-jdbc</artifactId>
            <version>12.4.2.jre11</version>
        </dependency>

        <!-- testcontainer/docker has conflicting versions -->
        <dependency>
            <artifactId>slf4j-api</artifactId>
            <groupId>org.slf4j</groupId>
            <version>${version.slf4j}</version>
        </dependency>
        <!-- make slf4j not complain (configure: -Dorg.slf4j.simpleLogger.defaultLogLevel=info) -->
        <dependency>
            <artifactId>slf4j-simple</artifactId>
            <groupId>org.slf4j</groupId>
            <version>${version.slf4j}</version>
        </dependency>

        <!-- testcontainer modules for testing various RDBMS -->
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers</artifactId>
            <version>${version.testcontainers}</version>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>mssqlserver</artifactId>
            <version>${version.testcontainers}</version>
        </dependency>
    </dependencies>


</project>

and the following command to start it:

mvn clean package exec:java '-Dexec.mainClass=net.eckenfels.test.Main'

This shows up quickly in the heap histogram (after 100,000 updates):

 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:        206412       77329032  [B (java.base@17.0.8)
   2:        203302        4879248  java.lang.String (java.base@17.0.8)
   3:        102548        3288448  [J (java.base@17.0.8)
   4:        102415        3277280  com.microsoft.sqlserver.jdbc.SQLServerConnection$PreparedStatementHandle
   5:        102416        2457984  com.microsoft.sqlserver.jdbc.SQLServerConnection$CityHash128Key
   6:        101432        2434368  java.util.concurrent.ConcurrentLinkedQueue$Node (java.base@17.0.8)
   7:        102583        1641328  java.util.concurrent.atomic.AtomicInteger (java.base@17.0.8)
   8:         10414        1240840  java.lang.Class (java.base@17.0.8)

The idea here is that for each execution I use a different combination of scales for the 20 decimals (basically binary-encoding the iteration number). But this could also happen with other types like NULL, DateTime, etc. (I guess).

In the default case with no cache, it's totally fine and does not leak.
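
For reference, a class histogram like the one above can be captured from the running JVM with stock JDK tooling, for example:

jmap -histo <pid> | head -10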

@Jeffery-Wasty Jeffery-Wasty linked a pull request Jan 3, 2024 that will close this issue
@lilgreenbird lilgreenbird added this to the 12.6.0 milestone Jan 16, 2024
@lilgreenbird lilgreenbird added the Bug A bug in the driver. A high priority item that one can expect to be addressed quickly. label Jan 16, 2024
@github-project-automation github-project-automation bot moved this to Closed Issues in MSSQL JDBC Aug 27, 2024