Benchmarking Bulk & Batch Insert with .NET and actionETL

In this article we look at the great database insert performance you can achieve with the actionETL dataflow and .NET. In particular, we examine how the choice of SQL database, database provider, and bulk insert vs. batch insert impacts performance.

Setup

  • The benchmark generates rows with the RepeatRowsSource worker
  • Batch Insert puts multiple rows in each insert statement; the AdbInsertTarget worker supports all actionETL database providers (see the plain ADO.NET sketch after this list).
    • Insert statements use (named or positional) parameters, where each parameter inserts one column value in one row of the table.
  • Bulk Insert requires a database-specific facility, and is available for MySQL-compatible and SQL Server databases via the AdbMySqlConnectorBulkInsertTarget and AdbSqlClientBulkInsertTarget workers.
  • The target table is unindexed with 10 columns (4 integer, 4 text, 2 decimal).
  • All databases and the actionETL .NET 5 benchmark run locally on the same Windows 10 host, an old 4-core Intel Core i7 4770K, 3.5GHz desktop PC from 2013 with a Samsung 840 EVO 1TB SSD.
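
For illustration, here is a minimal sketch of what a batched, parameterized insert looks like in plain ADO.NET against SQL Server, without actionETL. The MyTable table, its columns, and the helper itself are hypothetical; the AdbInsertTarget worker generates and executes statements like this for you:

using System.Linq;
using Microsoft.Data.SqlClient;

static class BatchInsertSketch
{
    // Builds one INSERT statement carrying all rows, with one parameter per value,
    // e.g. 3 rows => INSERT INTO MyTable (Data1, Text1)
    //                VALUES (@d0, @t0), (@d1, @t1), (@d2, @t2)
    public static void InsertBatch(
        string connectionString, (long Data1, string Text1)[] rows)
    {
        string sql = "INSERT INTO MyTable (Data1, Text1) VALUES "
            + string.Join(", ", rows.Select((_, i) => $"(@d{i}, @t{i})"));

        using var connection = new SqlConnection(connectionString);
        connection.Open();
        using var command = new SqlCommand(sql, connection);
        for (int i = 0; i < rows.Length; i++)
        {
            command.Parameters.AddWithValue($"@d{i}", rows[i].Data1);
            command.Parameters.AddWithValue($"@t{i}", rows[i].Text1);
        }
        command.ExecuteNonQuery(); // One round trip inserts the whole batch
    }
}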

Throughput

Here we generate and insert 1 million rows into the target table, and report the number of values inserted per second. 

Note: At larger volumes, the insert queries in this benchmark are mainly bottlenecked by the number of values to insert, which is why we report that number. To get the number of rows instead for our 10-column table, divide by 10.

We measure:

  • Batch Insert Default, which uses a batch size of 256 values per batch and 16,384 values per transaction (with our 10-column rows, roughly 25 rows per batch and 1,638 rows per transaction).
  • Batch Insert Tuned, which uses tuned values for batch and transaction sizes. 
    • Note: Tuning often means using the largest possible settings, but this must be done with caution, since it can overload the server with very large data volumes.
  • Bulk Insert, where supported.
[Chart: Throughput (values inserted per second) by database, provider, and insert method]

Several things stand out:

  • The Bulk Insert tests, as well as the (embedded database) SQLite Batch Insert Tuned test, are 2.5 to 15 times faster than the other tests and databases, and should (from a strict throughput standpoint) be used whenever possible.
  • Tuning the Batch Insert settings provides a 14% to 54% improvement for the non-SQLite tests, and a 443% improvement for the SQLite test.
  • The SqlClient Batch Insert test has unexpectedly low performance. In fact, our benchmark discovered a previously unknown SQL Server bottleneck. The SqlClient team is developing a workaround, and when it’s released we’ll take advantage of it in actionETL.
    • In preliminary testing with the unreleased workaround, SqlClient Batch Insert achieves roughly 12% better performance than ODBC SQL Server, which would make it the fastest non-embedded Batch Insert in our tests.
    • With actionETL you can however already use either SqlClient Bulk Insert or the ODBC SQL Server provider, both of which deliver great performance.

Throughput Scaling

In these tests we generate and insert between 1 row and 1 million rows, and again report the number of values inserted per second. The Batch Insert tests use default batch and transaction size settings.

We see that:

  • There is an order of magnitude performance difference between the fastest and slowest database-provider-API combination, both for small and large volumes. 
  • SqlClient Bulk is the fastest across all volumes.
  • Even though the database etc. is warmed up with an insert before measurements start, the overhead of opening a connection, initiating the inserts, etc. in the measured run is large enough to completely limit throughput up to 1,000 values (100 rows), and mostly also up to 10,000 values (1,000 rows).
  • For small volumes, Bulk Insert (on those databases that support it) has the same performance as Batch Insert.

Here is the same data (values inserted per second) in a table:

Rows Inserted         |     1 |     10 |     100 |     1,000 |    10,000 |   100,000 | 1,000,000
MySqlClient Batch     |   652 |  5,686 |  52,268 |   209,848 |   280,920 |   294,748 |   294,147
MySqlConnector Batch  |   136 |  1,339 |  12,860 |   109,279 |   331,885 |   430,429 |   432,934
MySqlConnector Bulk   |   137 |  1,371 |  13,058 |   126,161 |   733,025 | 1,751,191 | 2,095,514
Npgsql Batch          | 1,358 | 12,842 |  96,534 |   266,640 |   318,638 |   329,748 |   330,148
ODBC SQL Server Batch | 1,780 | 17,150 | 128,562 |   396,670 |   497,223 |   517,184 |   534,700
SqlClient Batch       | 1,838 | 17,364 | 102,841 |   204,082 |   225,870 |   229,410 |   232,350
SqlClient Bulk        | 1,956 | 19,464 | 181,010 | 1,289,693 | 3,569,541 | 4,382,258 | 4,382,097
SQLite Batch          |   501 |  4,849 |  42,224 |   378,911 |   652,616 |   732,463 |   814,030

In Summary

If you are not bottlenecked by other factors (slow network, slow inserts due to indexing, etc.), then your choice of database, provider and insert API combination can make a huge performance difference. 

One common way to realize these performance advantages is to perform record-level processing in the dataflow, load data into unindexed staging tables, and finish up with set-based processing in the database.
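
As a minimal sketch of that pattern (assuming a SQL Server target; the MyStagingTable and MyFactTable names are hypothetical), the set-based finish can be a single statement executed with plain ADO.NET after the dataflow load completes:

using Microsoft.Data.SqlClient;

static class StagingSketch
{
    // After the dataflow has loaded MyStagingTable, finish with a single
    // set-based statement that the database can optimize as one operation
    public static void LoadFromStaging(string connectionString)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();
        using var command = new SqlCommand(@"
            INSERT INTO MyFactTable (Data1, Text1)
            SELECT s.Data1, s.Text1
            FROM MyStagingTable AS s;", connection);
        command.ExecuteNonQuery();
    }
}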

Do use Bulk Insert where available for best performance.

Do benchmark your own use cases with different APIs, providers, and settings to pick the best combination.

And of course, actionETL helps by providing both Batch Insert and Bulk Insert across the various databases, and by making it easy to switch between them. Try it out yourself with the free Community edition!

Nitty-gritty

Database and Provider Details

Provider                 | Database              | Tuned Rows per Batch | Tuned Rows per Transaction
MySqlClient              | MariaDB 10.3          | 200                  | long.MaxValue
MySqlConnector           | MariaDB 10.3          | 200                  | long.MaxValue
MySqlConnector           | MariaDB 10.3          | N/A                  | N/A
Npgsql                   | PostgreSQL 12         | 200                  | long.MaxValue
System.Data.Odbc         | SQL Server 2019       | 99                   | long.MaxValue
Microsoft.Data.SqlClient | SQL Server 2019       | 13                   | long.MaxValue
Microsoft.Data.SqlClient | SQL Server 2019       | N/A                  | N/A
System.Data.SQLite       | SQLite.Core 1.0.113.6 | 25                   | long.MaxValue

Note: The two N/A rows are the Bulk Insert tests, which don't use the batch and transaction size settings. A rows per transaction limit of long.MaxValue is never reached, i.e. the whole load commits in a single transaction.

actionETL Code

The benchmark was run with actionETL 0.40.0 on .NET 5.

Dataflow row type:

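// One dataflow row with 10 columns (4 integer, 4 text, 2 decimal), matching the unindexed target table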
class MyRow
{
    public long Data1 { get; }
    public long Data2 { get; } = 2000;
    public long Data3 { get; } = 3000;
    public long Data4 { get; } = 4000;
    public string Text1 { get; } = "Text5";
    public string Text2 { get; } = "Text567890";
    public string Text3 { get; } = "Text56789012345";
    public string Text4 { get; } = "Text5678901234567890";
    public decimal Number1 { get; } = 3.14M;
    public decimal Number2 { get; } = 7.92M;

    public MyRow(long data) => Data1 = data;
}

Dataflow code for batch insert:

var source = new RepeatRowsSource<MyRow>(aw, "Generate Rows"
    , numberOfRows, new MyRow(42)) { SendTemplateRows = true };

var target = source.Output.Link.AdbInsertTarget("Insert Rows", cmca => cmca.AutoName()
    , connectionString.CreateConnectionBuilder(), tableName);
target.SetRowLimits(rowsPerBatch, rowsPerTransaction); // Optional: tune batch and transaction sizes

Dataflow target for MySQL bulk insert (replacing AdbInsertTarget above):

source.Output.Link.AdbMySqlConnectorBulkInsertTarget("Insert Rows"
    , cmca => cmca.AutoName(), connectionString.CreateConnectionBuilder()
    , tableName);

Dataflow target for SQL Server bulk insert (replacing AdbInsertTarget above):

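// SqlBulkCopyOptions.TableLock requests a bulk update table lock for the
// duration of the load, which typically improves bulk insert performance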
source.Output.Link.AdbSqlClientBulkInsertTarget("Insert Rows"
    , cmca => cmca.AutoName(), connectionString.CreateConnectionBuilder()
    , SqlBulkCopyOptions.TableLock, tableName);

Blazingly Fast ETL Dataflow with .NET and actionETL

The dataflow capabilities in actionETL have been heavily optimized and are fast – as in really fast. The high performance/low overhead architecture provides a great foundation for implementing complex and large volume ETL requirements.

Benchmark numbers below, and full sources on GitHub. Why not use the free trial and run it on your Windows or Linux machine?

tl;dr

My old desktop from 2013 can pump up to 180 million rows per second between two dataflow workers, some two orders of magnitude faster than most traditional data sources can keep up with on this machine, and up to 340 million rows per second in aggregate link throughput across its 4 cores:

It’s also achieved with quite small buffer sizes of 1024 rows, which conserves memory. The default buffer size of 256 rows is almost as fast in this benchmark, and using e.g. tiny 32-row or even smaller buffers when you have particularly wide rows is perfectly viable. You can even give different links in the same dataflow different buffer sizes.

With no data sources to slow the dataflow down, just pumping rows saturates my 4 CPU cores after about 3 links. Beyond that, the aggregate link throughput stays constant even with over a thousand dataflow source, transform, and target workers all pumping rows simultaneously, showing great scalability.

You can use large numbers of workers to solve complex problems well while retaining excellent performance.

In any real application, the high efficiency of the dataflow leaves almost all CPU cycles available for user logic and data source drivers.

With the powerful API, it only takes a few lines of code to create, connect, and run all the workers for all the benchmark cases.

Just the Facts, Ma’am

In the above chart, the blue “Many Sources, No Transforms” shows running many source workers, with a single target worker connected to each source:

The red “One Source, Many Transforms” shows running a single source worker, with many downstream transforms, terminated by a single target:

And the yellow “Multiple Sources and Transforms” shows running multiple source workers, and an equal number of transform and target workers per source as there are sources in total:

The specific workers used are RepeatRowsSource, MulticastTransform, and TrashTarget, as seen in the code further below.

In these benchmarks, we specifically test the movement of rows between workers, without interacting with external data sources or allocating new rows (I’ll come back to that in later articles).

  • The source worker or workers send a row 1 billion times over one link, or 500 million times over each of two links, all the way to 1 million rows over each of 1024 links, to the downstream workers. This way, the aggregate number of rows over all links is always 1 billion.
  • Each transform receives the upstream rows and sends them downstream
  • Each individual flow is terminated by a target that receives the upstream rows and throws them away
  • Aggregate Link Throughput is the sum of the number of rows passed through each link (1 billion in total), divided by the duration; e.g. for the first measured run below, 1,000,000,000 rows / 5.547 s ≈ 180.27 million rows/s

That old desktop from 2013 that runs the benchmarks? It has an Intel Core i7 4770K, 3.5GHz, 4 physical cores, hyper-threading disabled, 16GB of memory, and now Windows 10 Pro (full specification). And we’re running .NET Core 3.1.

Here’s the output from a run:

AggregateLinkRows, Workers, Sources, DownstreamLinksPerSource, TotalLinks, Duration (s), AggregateLinkThroughput (Million Rows/s)
 1000000000,      2,      1,      1,      1,   5.547,    180.27
 1000000000,      4,      2,      1,      2,   2.933,    340.92
 1000000002,      6,      3,      1,      3,   3.071,    325.62
 1000000000,      8,      4,      1,      4,   3.140,    318.50
 1000000000,     32,     16,      1,     16,   3.146,    317.90
 1000000000,    128,     64,      1,     64,   3.163,    316.19
 1000000000,    512,    256,      1,    256,   3.179,    314.61
 1000000512,   2048,   1024,      1,   1024,   3.267,    306.12
 1000000000,      3,      1,      2,      2,   4.933,    202.71
 1000000002,      4,      1,      3,      3,   3.439,    290.79
 1000000000,      5,      1,      4,      4,   3.494,    286.22
 1000000000,     17,      1,     16,     16,   3.089,    323.74
 1000000000,     65,      1,     64,     64,   3.064,    326.42
 1000000000,    257,      1,    256,    256,   3.009,    332.29
 1000000512,   1025,      1,   1024,   1024,   3.241,    308.53
 1000000000,      6,      2,      2,      4,   4.550,    219.77
 1000000000,     20,      4,      4,     16,   3.730,    268.11
 1000000000,     72,      8,      8,     64,   3.364,    297.30
 1000000000,    272,     16,     16,    256,   3.507,    285.12
 1000000512,   1056,     32,     32,   1024,   3.149,    317.59

Use the Source, Luke

Coding this is very easy. Here’s how to create, connect, and run all the workers for all the test cases – a good demonstration of the power of the library API:

const long desiredAggregateLinkRows = 1_000_000_000;
var testCases = new (int sources, int downstreamLinksPerSource)[]
{
	(1,1) // Warm up the JIT etc. to get consistent timings
	, (1,1), (2,1), (3,1), (4,1), (16,1), (64,1), (256,1)
		, (1024,1) // Many Sources, No Transforms
	, (1,2), (1,3), (1,4), (1,16), (1,64), (1,256)
		, (1,1024) // Single Source, Many Transforms 
	, (2,2), (4,4), (8,8), (16,16), (32,32) // Multiple Sources and Transforms 
};

foreach (var (numberOfSources, downstreamLinksPerSource) in testCases)
{
	int numberOfLinks = numberOfSources * downstreamLinksPerSource;
	long rowsPerSource = (long)Math.Ceiling(
		(double)desiredAggregateLinkRows / numberOfLinks);
	long actualAggregateLinkRows = numberOfSources * rowsPerSource 
		* downstreamLinksPerSource; // Avoid round-off errors

	var workerSystem = new WorkerSystem()
		.Root(ws =>
		{
			for (int s = 0; s < numberOfSources; s++)
			{
				var rrs = new RepeatRowsSource<MyRow>(ws, $"Source {s}"
					, rowsPerSource, new MyRow(42))
						{ SendTemplateRows = true };
				var output = rrs.Output;

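				// A single-output MulticastTransform simply passes each row downstream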
				for (int l = 1; l < downstreamLinksPerSource; l++)
					output = output.Link.MulticastTransform(
						$"Transform {s} - {l}", 1).TypedOutputs[0];

				output.Link.TrashTarget($"Target {s}");
			}
		});

	(await workerSystem.StartAsync()) // Run the worker system
		.ThrowOnFailure();
}

Winding Down

I think the key takeaways are that actionETL can move a massive number of rows around the dataflow without any noticeable slow-downs, and do it with memory-conserving small buffers.

Some future topics to look at include memory allocation and consumption, as well as data source and transform worker performance.

As mentioned, full sources on GitHub; try it out with the free trial and let me know in the comments below what throughput numbers you get!

Reduce .NET ETL Code Size by 23 Times with actionETL

In a fully documented example, actionETL required only 9kB of code to create a high performance and reusable custom Slowly Changing Dimension (SCD) worker, 23 times less than the 209kB used by a SQL Server® Integration Services (SSIS) implementation with similar functionality. What caused this stark difference?

Modern AppDev Techniques

The actionETL library is designed from the ground up to make ETL development very productive. By using well-known application development techniques, it provides excellent:

  • Reusability
  • Composability
  • Encapsulation
  • Testability
  • Extensibility
  • Refactoring
  • Source control and Continuous Integration/Continuous Delivery

In the SCD example, actionETL composability pays a huge dividend, where existing ‘control flow’ and dataflow workers are easily combined to create a new high performance and reusable custom worker:

Unlike with SSIS, actionETL ‘control flow’ and dataflow workers can be freely mixed and matched, and can also be used to create new custom workers.

Visual Designer

In contrast, SSIS cannot use existing control flow tasks or dataflow components when creating new tasks or components, not even via C++, and must therefore implement all required functionality from scratch. Most SSIS custom tasks and components also require significant UI code. Both aspects heavily inflate the amount of code that must be written and maintained.

Virtually all traditional ETL tools have the same heavy focus on a drag&drop visual designer as SSIS has. While this certainly helps in some ways, such as initially getting up to speed with the tool, these tools pay a very heavy price in terms of poor support for some or all of the modern AppDev traits listed above.

Whitepaper

If you want to learn more about how actionETL compares to SSIS and traditional ETL tools, check out the Thirteen Factors Crippling ETL Productivity whitepaper.