Benchmarking Bulk & Batch Insert with .NET and actionETL

UPDATE: actionETL 0.41.0 onward uses Microsoft.Data.SqlClient 4.x or later, and avoids the SqlClient Batch Insert performance issue mentioned below.

In this article, we look at the great database insert performance you can achieve with the actionETL dataflow and .NET. In particular, we look at how the choice of SQL database, database provider, and bulk insert vs. batch insert impacts performance.

Setup

  • The benchmark generates rows with the RepeatRowsSource worker
  • Batch Insert puts multiple rows in each insert statement. The AdbInsertTarget worker supports all actionETL database providers.
    • Insert statements use (named or positional) parameters, where each parameter carries one column value for one row in the table; see the illustrative statement after this list.
  • Bulk Insert requires a database-specific facility and is available for MySQL-compatible and SQL Server databases via the AdbMySqlConnectorBulkInsertTarget and AdbSqlClientBulkInsertTarget workers.
  • The target table is unindexed with 10 columns (4 integer, 4 text, 2 decimal).
  • All databases and the actionETL .NET 5 benchmark run locally on the same Windows 10 host, an old 4-core Intel Core i7 4770K 3.5GHz desktop PC from 2013 with a Samsung 840 EVO 1TB SSD.
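
To make the Batch Insert description above concrete, here is a sketch of the rough shape of one batched, parameterized insert statement. The table and parameter names are illustrative only (the column names follow the MyRow type shown under actionETL Code below); this is not the exact SQL the worker generates.

// Illustrative only: one batched insert, expressed as a C# string.
// Each parameter carries one column value of one row, so with 10 columns
// the default limit of 256 values per batch fits about 25 rows per statement.
const string illustrativeBatchedInsert = @"
    INSERT INTO BenchmarkTarget
        (Data1, Data2, Data3, Data4, Text1, Text2, Text3, Text4, Number1, Number2)
    VALUES
        (@p0,  @p1,  @p2,  @p3,  @p4,  @p5,  @p6,  @p7,  @p8,  @p9),
        (@p10, @p11, @p12, @p13, @p14, @p15, @p16, @p17, @p18, @p19);";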

Throughput

Here we generate and insert 1 million rows into the target table, and report the number of values inserted per second. 

Note: At larger volumes and in this benchmark, insert queries are mainly bottlenecked by the number of values to insert, which is why we report that number. To get the number of rows instead for our particular test, divide by 10.

We measure:

  • Batch Insert Default, which uses a batch size of 256 values per batch and 16,384 values per transaction.
  • Batch Insert Tuned, which uses tuned values for batch and transaction sizes.
    • Note: This often uses the largest possible settings, but it must be done with caution, since it can overload the server if used with very large data volumes.
  • Bulk Insert, where supported.
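
Note: With this 10-column target table, the default settings correspond to roughly 25 rows per batch (256 / 10) and about 1,638 rows per transaction (16,384 / 10); the tuned per-provider settings are listed under Database and Provider Details further down.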

[Chart: Throughput, in values inserted per second]

Several things stand out:

  • Bulk Insert as well as the (embedded database) SQLite Batch Insert Tuned tests are from 2.5 to 15 times faster than the other tests and databases, and should (from a strict throughput performance standpoint) be used whenever possible.
  • Tuning Batch Insert settings provides a 14% to 54% improvement for non-SQLite tests and a 443% improvement for the SQLite test.
  • UPDATE: actionETL 0.41.0 onward uses Microsoft.Data.SqlClient 4.x or later, and avoids this performance issue:
    • The SqlClient Batch Insert tests have unexpectedly low performance. In fact, our benchmark discovered a previously unknown SQL Server bottleneck. The SqlClient team is developing a workaround, and when it’s released we’ll take advantage of it in actionETL.
      • In preliminary testing with the unreleased workaround, SqlClient Batch Insert achieves a performance roughly 12% better than ODBC SQL Server, which would make it the fastest non-embedded Batch Insert in our tests.
      • With actionETL, you can however already use either SqlClient Bulk Insert or the ODBC SQL Server provider combination, which both deliver great performance.

Throughput Scaling

In these tests we generate and insert between 1 row and 1 million rows, and again report the number of values inserted per second. The Batch Insert tests use default batch and transaction size settings.

We see that:

  • There is an order-of-magnitude performance difference between the fastest and slowest database-provider-API combination, both for small and large volumes. 
  • SqlClient Bulk is the fastest across all volumes.
  • Even though the database etc. is warmed up with an insert before measurements are started, the overhead of opening a connection, initiating the inserts, etc. in the measured run is large enough to completely limit throughput up to 1,000 values (100 rows), and mostly also up to 10,000 values (1,000 rows); see the worked example after this list.
  • For small volumes, Bulk Insert (on those databases that support it) has the same performance as Batch Insert.
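
For example, taking the 1-row (10-value) column in the table below: SqlClient Bulk reports 1,956 values per second, so the entire measured run takes roughly 10 / 1,956 ≈ 5 ms, which is essentially the fixed cost of opening the connection and issuing the insert rather than the cost of the insert itself.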

Here is the same data in a table:

Rows Inserted          |     1 |     10 |     100 |     1,000 |    10,000 |   100,000 | 1,000,000
MySqlClient Batch      |   652 |  5,686 |  52,268 |   209,848 |   280,920 |   294,748 |   294,147
MySqlConnector Batch   |   136 |  1,339 |  12,860 |   109,279 |   331,885 |   430,429 |   432,934
MySqlConnector Bulk    |   137 |  1,371 |  13,058 |   126,161 |   733,025 | 1,751,191 | 2,095,514
Npgsql Batch           | 1,358 | 12,842 |  96,534 |   266,640 |   318,638 |   329,748 |   330,148
ODBC SQL Server Batch  | 1,780 | 17,150 | 128,562 |   396,670 |   497,223 |   517,184 |   534,700
SqlClient Batch        | 1,838 | 17,364 | 102,841 |   204,082 |   225,870 |   229,410 |   232,350
SqlClient Bulk         | 1,956 | 19,464 | 181,010 | 1,289,693 | 3,569,541 | 4,382,258 | 4,382,097
SQLite Batch           |   501 |  4,849 |  42,224 |   378,911 |   652,616 |   732,463 |   814,030

In Summary

If you are not bottlenecked by other factors (slow network, slow inserts due to indexing, etc.), then your choice of database, provider and insert API combination can make a huge performance difference. 

One common way to realize these performance advantages is to perform record-level processing in the dataflow, load data into unindexed staging tables, and finish up with set-based processing in the database.
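
As a sketch of that pattern (the table and column names are assumptions, and plain Microsoft.Data.SqlClient code stands in here for whatever mechanism you use to run the set-based SQL):

using Microsoft.Data.SqlClient;

// Sketch only: after bulk or batch loading the unindexed staging table,
// finish with a single set-based statement inside the database.
// sqlConnectionString and the table/column names are illustrative assumptions.
using var connection = new SqlConnection(sqlConnectionString);
connection.Open();
using var command = new SqlCommand(@"
    INSERT INTO dbo.FactSales (CustomerId, Amount)
    SELECT s.CustomerId, SUM(s.Amount)
    FROM staging.Sales AS s
    GROUP BY s.CustomerId;", connection);
command.ExecuteNonQuery();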

Do use Bulk Insert where available for best performance.

Do benchmark your own use cases with different API, providers, and settings to pick the best combination.
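
As a starting point, here is a minimal timing sketch that wraps the batch insert snippet from the actionETL Code section below in a worker system; numberOfRows, connectionString, and tableName are assumed to be defined as in those snippets, and MyRow is the row type listed there.

// Minimal sketch: time one batch insert configuration with a Stopwatch.
// Swap AdbInsertTarget for one of the bulk insert targets, or vary the
// batch/transaction settings, to compare combinations on your own hardware.
var stopwatch = System.Diagnostics.Stopwatch.StartNew();

var workerSystem = new WorkerSystem().Root(ws =>
{
    var source = new RepeatRowsSource<MyRow>(ws, "Generate Rows"
        , numberOfRows, new MyRow(42)) { SendTemplateRows = true };
    source.Output.Link.AdbInsertTarget("Insert Rows", cmca => cmca.AutoName()
        , connectionString.CreateConnectionBuilder(), tableName);
});

(await workerSystem.StartAsync()).ThrowOnFailure();

stopwatch.Stop();
Console.WriteLine($"{numberOfRows * 10 / stopwatch.Elapsed.TotalSeconds:N0} values/s");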

And certainly, actionETL helps you by providing both Batch Insert and Bulk Insert across the various databases, and by making it easy to switch between them. Try it out yourself with the free Community edition!

Nitty-gritty

Database and Provider Details

Provider                       | Database              | Tuned Rows per Batch | Tuned Rows per Transaction
MySqlClient Batch              | MariaDB 10.3          | 200                  | long.MaxValue
MySqlConnector Batch           | MariaDB 10.3          | 200                  | long.MaxValue
MySqlConnector Bulk            | MariaDB 10.3          | N/A                  | N/A
Npgsql Batch                   | PostgreSQL 12         | 200                  | long.MaxValue
System.Data.Odbc Batch         | SQL Server 2019       | 99                   | long.MaxValue
Microsoft.Data.SqlClient Batch | SQL Server 2019       | 13                   | long.MaxValue
Microsoft.Data.SqlClient Bulk  | SQL Server 2019       | N/A                  | N/A
System.Data.SQLite Batch       | SQLite.Core 1.0.113.6 | 25                   | long.MaxValue

actionETL Code

The benchmark was run with actionETL 0.40.0 on .NET 5.

Dataflow row type:

class MyRow
{
    public long Data1 { get; }
    public long Data2 { get; } = 2000;
    public long Data3 { get; } = 3000;
    public long Data4 { get; } = 4000;
    public string Text1 { get; } = "Text5";
    public string Text2 { get; } = "Text567890";
    public string Text3 { get; } = "Text56789012345";
    public string Text4 { get; } = "Text5678901234567890";
    public decimal Number1 { get; } = 3.14M;
    public decimal Number2 { get; } = 7.92M;

    public MyRow(long data) => Data1 = data;
}

Dataflow code for batch insert:

var source = new RepeatRowsSource<MyRow>(aw, "Generate Rows"
    , numberOfRows, new MyRow(42)) { SendTemplateRows = true };

source.Output.Link.AdbInsertTarget("Insert Rows", cmca => cmca.AutoName()
    , connectionString.CreateConnectionBuilder(), tableName);
source.SetRowLimits(rowsPerBatch, rowsPerTransaction); // Optional

Dataflow target for MySQL bulk insert (replacing AdbInsertTarget above):

source.Output.Link.AdbMySqlConnectorBulkInsertTarget("Insert Rows"
    , cmca => cmca.AutoName(), connectionString.CreateConnectionBuilder()
    , tableName);

Dataflow target for SQL Server bulk insert (replacing AdbInsertTarget above):

source.Output.Link.AdbSqlClientBulkInsertTarget("Insert Rows"
    , cmca => cmca.AutoName(), connectionString.CreateConnectionBuilder()
    , SqlBulkCopyOptions.TableLock, tableName);

Blazingly Fast ETL Dataflow with .NET and actionETL

The dataflow capabilities in actionETL have been heavily optimized and are fast – as in really fast. The high performance/low overhead architecture provides a great foundation for implementing complex and large volume ETL requirements.

Benchmark numbers below, and full sources on GitHub. Why not use the free trial and run it on your Windows or Linux machine?

tl;dr

My old desktop from 2013 can pump up to 180 million rows per second between two dataflow workers, some two orders of magnitude faster than most traditional data sources can keep up with on this machine, and up to 340 million rows per second in aggregate link throughput across its 4 cores:

It’s also achieved with quite small buffer sizes of 1024 rows, which conserves memory. The default buffer size of 256 rows is almost as fast in this benchmark, and using e.g. tiny 32-row or even smaller buffers when you have particularly wide rows is perfectly viable. You can even give different links in the same dataflow different buffer sizes.

With no data sources to slow the dataflow down, just pumping rows does bottleneck my 4 cores on CPU after about 3 links. After that, the aggregate link throughput stays constant even with over a thousand dataflow source, transform, and target workers all pumping rows simultaneously, showing great scalability.

You can use large numbers of workers to solve complex problems well while retaining excellent performance.

In any real application, the high efficiency of the dataflow leaves almost all CPU cycles available for user logic and data source drivers.

With the powerful API, it only takes a few lines of code to create, connect, and run all the workers for all the benchmark cases.

Just the Facts, Ma’am

In the above chart, the blue “Many Sources, No Transforms” shows running many source workers, with a single target worker connected to each source:

The red “One Source, Many Transforms” shows running a single source worker, with many downstream transforms, terminated by a single target:

And the yellow “Multiple Sources and Transforms” shows running multiple source workers, and an equal number of transform and target workers per source as there are sources in total:

The specific workers used are the RepeatRowsSource, MulticastTransform, and TrashTarget workers, as seen in the code further down.

In these benchmarks, we specifically test the movement of rows between workers, without interacting with external data sources or allocating new rows (I’ll come back to that in later articles).

  • The source worker or workers send a row to the downstream workers 1 billion times over one link, 500 million times over two links, and so on, down to roughly 1 million rows over each of 1024 links. This way, the aggregate number of rows over all links is always 1 billion.
  • Each transform receives the upstream rows and sends them downstream
  • Each individual flow is terminated by a target that receives the upstream rows and throws them away
  • Aggregate Link Throughput is the sum of the number of rows passed through each link (1 billion), divided by the time
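
As a worked example using the first measured row of the output further down: 1,000,000,000 aggregate link rows divided by the 5.547 second duration gives the reported throughput of 180.27 million rows per second.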

That old desktop from 2013 that runs the benchmarks? It has an Intel Core i7 4770K, 3.5GHz, 4 physical cores, hyper-threading disabled, 16GB of memory, and now Windows 10 Pro (full specification). And we’re running .NET Core 3.1.

Here’s the output from a run:

AggregateLinkRows, Workers, Sources, DownstreamLinksPerSource, TotalLinks, Duration (s), AggregateLinkThroughput (Million Rows/s)
 1000000000,      2,      1,      1,      1,   5.547,    180.27
 1000000000,      4,      2,      1,      2,   2.933,    340.92
 1000000002,      6,      3,      1,      3,   3.071,    325.62
 1000000000,      8,      4,      1,      4,   3.140,    318.50
 1000000000,     32,     16,      1,     16,   3.146,    317.90
 1000000000,    128,     64,      1,     64,   3.163,    316.19
 1000000000,    512,    256,      1,    256,   3.179,    314.61
 1000000512,   2048,   1024,      1,   1024,   3.267,    306.12
 1000000000,      3,      1,      2,      2,   4.933,    202.71
 1000000002,      4,      1,      3,      3,   3.439,    290.79
 1000000000,      5,      1,      4,      4,   3.494,    286.22
 1000000000,     17,      1,     16,     16,   3.089,    323.74
 1000000000,     65,      1,     64,     64,   3.064,    326.42
 1000000000,    257,      1,    256,    256,   3.009,    332.29
 1000000512,   1025,      1,   1024,   1024,   3.241,    308.53
 1000000000,      6,      2,      2,      4,   4.550,    219.77
 1000000000,     20,      4,      4,     16,   3.730,    268.11
 1000000000,     72,      8,      8,     64,   3.364,    297.30
 1000000000,    272,     16,     16,    256,   3.507,    285.12
 1000000512,   1056,     32,     32,   1024,   3.149,    317.59

Use the Source, Luke

Coding this is very easy; here's how to create, connect, and run all the workers for all the test cases – a good demonstration of the power of the library API:

const long desiredAggregateLinkRows = 1_000_000_000;
var testCases = new (int sources, int downstreamLinksPerSource)[]
{
	(1,1) // Warm up the JIT etc. to get consistent timings
	, (1,1), (2,1), (3,1), (4,1), (16,1), (64,1), (256,1)
		, (1024,1) // Many Sources, No Transforms
	, (1,2), (1,3), (1,4), (1,16), (1,64), (1,256)
		, (1,1024) // Single Source, Many Transforms 
	, (2,2), (4,4), (8,8), (16,16), (32,32) // Multiple Sources and Transforms 
};

foreach (var (numberOfSources, downstreamLinksPerSource) in testCases)
{
	int numberOfLinks = numberOfSources * downstreamLinksPerSource;
	long rowsPerSource = (long)Math.Ceiling(
		(double)desiredAggregateLinkRows / numberOfLinks);
	long actualAggregateLinkRows = numberOfSources * rowsPerSource 
		* downstreamLinksPerSource; // Avoid round-off errors

	var workerSystem = new WorkerSystem()
		.Root(ws =>
		{
			for (int s = 0; s < numberOfSources; s++)
			{
				var rrs = new RepeatRowsSource<MyRow>(ws, $"Source {s}"
					, rowsPerSource, new MyRow(42))
						{ SendTemplateRows = true };
				var output = rrs.Output;

				for (int l = 1; l < downstreamLinksPerSource; l++)
					output = output.Link.MulticastTransform(
						$"Transform {s} - {l}", 1).TypedOutputs[0];

				output.Link.TrashTarget($"Target {s}");
			}
		});

	(await workerSystem.StartAsync()) // Run the worker system
		.ThrowOnFailure();
}

Winding Down

I think the key takeaways are that actionETL can move a massive number of rows around the dataflow without any noticeable slow-downs, and do it with small, memory-conserving buffers.

Some future topics to look at include memory allocation and consumption, as well as data source and transform worker performance.

As mentioned, full sources on GitHub; try it out with the free trial and let me know in the comments below what throughput numbers you get!

SMO + LINQ for Entity Framework Code First Database Modifications

Database Aspects of Code First

Using Entity Framework (EF) 4.1 with the Code First approach seems like a good choice for my greenfield Web App – during development I can focus on the Web and C# .Net paradigms, and postpone the bulk of the database work until such time as my application and data model have stabilised. That said, there are several database aspects I must still address up front, including:

  • Decide and implement the high-level mapping between .Net classes and database tables. Morteza Manavi (blog | twitter) has a great series on this.
  • Set field properties such as string length, required vs. optional etc., which will get reflected in the database.
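
For example, string length and required vs. optional can be expressed with Data Annotations directly on the entity class; the class and property names below are placeholders rather than my actual model:

using System.ComponentModel.DataAnnotations;

// Placeholder entity: [Required] and [StringLength] flow through to the
// generated schema (a NOT NULL, limited-length string column).
public class MyTable
{
  public int Id { get; set; }

  [Required]
  [StringLength(100)]
  public string Name { get; set; }
}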

Since I plan to continue using Code First beyond the initial project setup, populating with test data, etc., it becomes desirable to make further changes to the database beyond what EF Data Annotations and the Fluent API support natively. These changes are usually put in the Seed() method of the inherited DbContext class; this method gets executed immediately after the database has been created.

EF 4.1 does not support adding unique keys, so I create them with a call to ExecuteSqlCommand() as follows:

context.Database.ExecuteSqlCommand(@"
  ALTER TABLE [WebApp].[MyTable] ADD CONSTRAINT 
    [UK_MyTable_Name] UNIQUE(Name); 
  ALTER TABLE [WebApp].[MySecondTable] ADD CONSTRAINT 
    [UK_MySecondTable_Name] UNIQUE(Name); 
  ");

For straightforward SQL I prefer the above approach, since it will allow me to later (if and when the database takes on a life of its own) add the SQL text to the database creation script generated by EF.

Key Names

While foreign keys generated by EF have names that can also be overridden, I notice that EF has not given the primary keys any names, and there is also no EF way of setting their names. SQL Server in turn auto generates primary key names such as PK__MyTable__0376AC45EB0148EF. Apart from the ugly GUID in the name, the MyTable part unfortunately gets truncated at 8 characters, so I decide to add proper names to the primary keys.

Ladislav Mrnka (stackoverflow) was kind enough to point me to his sample code for dropping and recreating primary keys using SQL. Since a database “with a life of its own” would not need this drop-create SQL code, I prefer in this case to stay in .Net land and use SQL Server Management Objects (SMO) for implementing the changes:

SMO + LINQ + EF = True

Using SMO requires adding a few references to the project as described here. Discovering and altering primary keys is quite straightforward, and I elect to use Language-Integrated Query (LINQ) instead of nested foreach statements, which makes for nicely concise code. Here's the complete Seed() method:

// Seed() method from the MyDbContext : DbContext class
public void Seed(WebAppEntities context)
{
  // Add unique keys
  context.Database.ExecuteSqlCommand(@"
    ALTER TABLE [WebApp].[MyTable] ADD CONSTRAINT
      [UK_MyTable_Name] UNIQUE(Name);
    ALTER TABLE [WebApp].[MySecondTable] ADD CONSTRAINT
      [UK_MySecondTable_Name] UNIQUE(Name);
    ");


  // Get a Server object from the existing connection
  var server = new Microsoft.SqlServer.Management.Smo.Server(
    new Microsoft.SqlServer.Management.Common.ServerConnection
      (context.Database.Connection as System.Data.SqlClient.SqlConnection));
  // Get a database object. Qualify names to avoid EF name clashes.
  Microsoft.SqlServer.Management.Smo.Database database =
    server.Databases[context.Database.Connection.Database];

  // Rename auto generated primary key names
  (
    from Microsoft.SqlServer.Management.Smo.Table table in database.Tables
    where table.Schema == "WebApp"
    from Microsoft.SqlServer.Management.Smo.Index index in table.Indexes
    where index.IndexKeyType ==
      Microsoft.SqlServer.Management.Smo.IndexKeyType.DriPrimaryKey
    // Create pairs of a name string and an Index object
    select new { table.Name, index }
  // ToList() separates reading the object model above from modifying it below
  ).ToList()
  // Use extension and anonymous methods to rename the primary keys
  .ForEach(primaryKey =>
    {
      // Name primary keys "PK_TableName"
      primaryKey.index.Rename(
        "PK_" + primaryKey.Name.Replace("[", "").Replace("]", ""));
      primaryKey.index.Alter();
    });
}

Summing Up

When my project has more .Net coding than database focus, I often prefer using the power, simplicity and IntelliSense of SMO to perform my database manipulations; maybe you’ll consider doing the same ?-)