EntityFrameworkCore extensions:
- Bulk operations (Insert, Update, Delete, Read, Upsert, Sync, Truncate)
- Batch operations (Delete, Update)
The library is lightweight and very efficient, covering the most commonly used CRUD operations.
It was selected among the top 20 EF Core extensions recommended by Microsoft.
The current version uses EF Core 5 and targets NetStandard 2.1, so it can be used in projects targeting Net 5.
At the moment it supports Microsoft SQL Server (2008+) and SQLite.
Under the hood it uses SqlBulkCopy for Insert, and for Update/Delete it combines BulkInsert with raw SQL MERGE.
For SQLite there is no BulkCopy; instead the library uses plain SQL combined with UPSERT.
Bulk tests cannot use UseInMemoryDb because the InMemory provider does not support Relational-specific methods.
Available on NuGet.
Package Manager Console command for installation: Install-Package EFCore.BulkExtensions
Nuget | Target | Used EF v. | For projects targeting |
---|---|---|---|
5.x | NetStandard 2.1 | EF Core 5.0 | Net 5+ |
3.x | NetStandard 2.0 | EF Core 3.x | NetCore(3.0+) or NetFrm(4.6.1+) |
2.x | NetStandard 2.0 | EF Core 2.x | NetCore(2.0+) or NetFrm(4.6.1+) |
1.x | NetStandard 1.4 | EF Core 1.0 | NetCore(1.0+) |
If you find this project useful you can mark it by leaving a GitHub Star ⭐.
If you would like to support the project by making a donation ($10) #BuyMeBeer, you are welcome to do so.
Please read CONTRIBUTING for details on the code of conduct and the process for submitting pull requests.
Want to contact us for hire (Development & Consulting): www.codis.tech
Usage is simple and straightforward.
Bulk extension methods are implemented on the DbContext class and can be used like this (both regular and Async methods are supported):
context.BulkInsert(entitiesList); context.BulkInsertAsync(entitiesList);
context.BulkInsertOrUpdate(entitiesList); context.BulkInsertOrUpdateAsync(entitiesList); //Upsert
context.BulkInsertOrUpdateOrDelete(entitiesList); context.BulkInsertOrUpdateOrDeleteAsync(entitiesList);//Sync
context.BulkUpdate(entitiesList); context.BulkUpdateAsync(entitiesList);
context.BulkDelete(entitiesList); context.BulkDeleteAsync(entitiesList);
context.BulkRead(entitiesList); context.BulkReadAsync(entitiesList);
context.Truncate<Entity>(); context.TruncateAsync<Entity>();
SQLite requires the package SQLitePCLRaw.bundle_e_sqlite3 with a call to SQLitePCL.Batteries.Init().
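A minimal sketch of that initialization, assuming 'MyDbContext' is a hypothetical context configured with UseSqlite:
SQLitePCL.Batteries.Init(); // has to run once before any SQLite access
using var context = new MyDbContext();
context.BulkInsert(entitiesList);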
Batch extension methods are implemented on IQueryable DbSet and can be used as in the following code segment.
They are executed as pure SQL; no check is made as to whether some entities were previously loaded into memory and are being tracked.
(updateColumns is an optional parameter in which PropertyNames are added explicitly when a property needs to be updated to its default value.)
// Delete
context.Items.Where(a => a.ItemId > 500).BatchDelete();
await context.Items.Where(a => a.ItemId > 500).BatchDeleteAsync();
// Update (using Expression arg.) supports Increment/Decrement
context.Items.Where(a => a.ItemId <= 500).BatchUpdate(a => new Item { Quantity = a.Quantity + 100 });
// can be as value '+100' or as variable '+incrementStep' (int incrementStep = 100;)
// Update (via simple object)
context.Items.Where(a => a.ItemId <= 500).BatchUpdate(new Item { Description = "Updated" });
await context.Items.Where(a => a.ItemId <= 500).BatchUpdateAsync(new Item { Description = "Updated" });
// Update (via simple object) - requires additional Argument for setting to Property default value
var updateColumns = new List<string> { nameof(Item.Quantity) }; // Update 'Quantity' to default value('0'-zero)
var q = context.Items.Where(a => a.ItemId <= 500);
int affected = q.BatchUpdate(new Item { Description = "Updated" }, updateColumns);//result assigned to variable
If Windows Authentication is used then the ConnectionString should contain Trusted_Connection=True; because SQL credentials are required to stay in the connection.
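For example, a connection string with Windows Authentication (server and database names here are placeholders):
Server=.\SQLEXPRESS;Database=MyDb;Trusted_Connection=True;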
When used directly, each of these operations is a separate transaction and is automatically committed.
If we need multiple operations in a single procedure then an explicit transaction should be used, for example:
using (var transaction = context.Database.BeginTransaction())
{
context.BulkInsert(entities1List);
context.BulkInsert(entities2List);
transaction.Commit();
}
The BulkInsertOrUpdate method can be used when both operations are needed in one connection to the database.
It does an Update when the PK (PrimaryKey) is matched, otherwise it does an Insert.
BulkInsertOrUpdateOrDelete effectively synchronizes table rows with the input data:
rows in the Db that are not found in the list will be deleted.
A partial sync can be done on a table subset using an expression set on the config with the method:
bulkConfig.SetSynchronizeFilter<Item>(a => a.Quantity > 0);
Not supported for SQLite, since SQLite only has an UPSERT statement. A way to achieve sync functionality there is to Select or BulkRead the existing data from the Db, split the list into sublists, and separately call the Bulk methods for BulkInsertOrUpdate and Delete.
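A hedged sketch of that workaround, assuming 'items' is the input list and matching on a unique Name column (a hypothetical choice; for very large lists, load the matches with BulkRead as in the example at the bottom instead of the IN query):
var inputNames = items.Select(a => a.Name).ToList();
var toDelete = context.Items.AsNoTracking().Where(a => !inputNames.Contains(a.Name)).ToList(); // rows missing from the input
context.BulkDelete(toDelete);
context.BulkInsertOrUpdate(items); // then upsert the input list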
BulkRead does a SELECT and JOIN based on one or more unique columns that are specified in the config's UpdateByProperties.
More info is in the example at the bottom.
Note: Bulk ops have an optional argument Type type that can be set to the Entity type when the list contains dynamic runtime objects or objects inherited from the Entity class.
Bulk methods can have an optional argument BulkConfig with properties (bool, int, object, List):
Property | Default value |
---|---|
PreserveInsertOrder | true |
SetOutputIdentity | false |
BatchSize | 2000 |
NotifyAfter | null |
BulkCopyTimeout | null |
EnableStreaming | false |
UseTempDB | false |
UniqueTableNameTempDb | true |
CustomDestinationTableName | null |
TrackingEntities | false |
WithHoldlock | true |
CalculateStats | false |
SqlBulkCopyOptions | Default |
PropertiesToInclude | null |
PropertiesToIncludeOnCompare | null |
PropertiesToIncludeOnUpdate | null |
PropertiesToExclude | null |
PropertiesToExcludeOnCompare | null |
PropertiesToExcludeOnUpdate | null |
UpdateByProperties | null |
EnableShadowProperties | false |
IncludeGraph | false |
OmitClauseExistsExcept | false |
DoNotUpdateIfTimeStampChanged | false |
SRID | 4326 |
DateTime2PrecisionForceRound | false |

Method: SetSynchronizeFilter<T>
If we want to change the defaults, BulkConfig should be added explicitly with one or more bool properties set to true, and/or int properties like BatchSize set to a different number.
The config also has a delegate Func for setting the underlying Connection/Transaction, e.g. in UnderlyingTest.
When doing an update we can choose to exclude one or more properties by adding their names to PropertiesToExclude, or, if we need to update fewer than half of the columns, PropertiesToInclude can be used. Setting both lists is not allowed.
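For example, a sketch that updates only two columns (both properties appear in the Item examples in this document):
var bulkConfig = new BulkConfig { PropertiesToInclude = new List<string> { nameof(Item.Quantity), nameof(Item.Description) } };
context.BulkUpdate(entitiesList, bulkConfig); // only Quantity and Description are written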
When using the BulkInsert_/OrUpdate methods, you may also specify the PropertiesToIncludeOnCompare and PropertiesToExcludeOnCompare properties. Adding a column name to PropertiesToExcludeOnCompare allows it to be inserted and updated, but the row will not be updated unless at least one of the other columns in that row changed. For example, if you are importing bulk data and want to keep an internal CreateDate or UpdateDate, you add those columns to PropertiesToExcludeOnCompare.
Another option that may be used in the same scenario is the pair PropertiesToIncludeOnUpdate and PropertiesToExcludeOnUpdate. These properties allow you to specify insert-only columns such as CreateDate and CreatedBy.
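A sketch for that scenario, using the CreateDate/UpdateDate/CreatedBy column names from the text above:
var bulkConfig = new BulkConfig
{
    PropertiesToExcludeOnCompare = new List<string> { "CreateDate", "UpdateDate" }, // ignored when comparing rows
    PropertiesToExcludeOnUpdate = new List<string> { "CreateDate", "CreatedBy" }    // insert-only columns
};
context.BulkInsertOrUpdate(entitiesList, bulkConfig);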
If we want to Insert only new entities and skip existing ones in the Db (Insert_if_not_Exist), then use BulkInsertOrUpdate with the config:
PropertiesToIncludeOnUpdate = new List<string> { "" }
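Complete usage of that config would look like this:
var bulkConfig = new BulkConfig { PropertiesToIncludeOnUpdate = new List<string> { "" } };
context.BulkInsertOrUpdate(entitiesList, bulkConfig); // new rows are inserted, existing ones are left unchanged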
Additionally there is UpdateByProperties for specifying custom properties by which we want the update to be done.
Using UpdateByProperties while also having an Identity column requires that the Id property be excluded.
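A sketch of a custom-matched update, matching on the Name column from the Item examples and excluding the Identity Id:
var bulkConfig = new BulkConfig
{
    UpdateByProperties = new List<string> { nameof(Item.Name) },   // match rows by Name instead of PK
    PropertiesToExclude = new List<string> { nameof(Item.ItemId) } // exclude the Identity column
};
context.BulkUpdate(entitiesList, bulkConfig);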
If NotifyAfter is not set it will have the same value as BatchSize. BulkCopyTimeout, when not set, has the SqlBulkCopy default of 30 seconds; if set to 0 it indicates no limit.
SetOutputIdentity has a purpose only when the PK has an Identity (usually an int type with AutoIncrement); if the PK is a (sequential) Guid created in the application, there is no need for it.
Also, tables with composite keys have no Identity column, so there is no such functionality in that case either.
var bulkConfig = new BulkConfig { SetOutputIdentity = true, BatchSize = 4000 };
context.BulkInsert(entList, bulkConfig);
context.BulkInsertOrUpdate(entList, new BulkConfig { SetOutputIdentity = true });
context.BulkInsertOrUpdate(entList, b => b.SetOutputIdentity = true); // example of BulkConfig set with Action arg.
PreserveInsertOrder is true by default and makes sure that entities are inserted into the Db in the order of entitiesList.
When the table has an Identity column (int autoincrement) and the list contains 0 values, they will temporarily be changed automatically from 0s into the range -N:-1.
Alternatively, they can be set manually to proper values for ordering (negative values are used to avoid conflict with existing ones in the Db).
Here a single Id value itself doesn't matter, since the Db will change it to the next in sequence; what matters is their mutual relationship for sorting.
Insertion order is implemented with TOP in conjunction with ORDER BY. stackoverflow:merge-into-insertion-order.
This config should remain true when SetOutputIdentity is set to true on an Entity containing a NotMapped property. issues/76
When using SetOutputIdentity, Id values will be updated to the new ones from the database.
With BulkInsertOrUpdate, rows to be updated have to match on the Id column, or on other unique column(s) if using UpdateByProperties.
For SQLite the combination of BulkInsertOrUpdate and automatic IdentityId setting will not work properly, since SQLite does not have the full MERGE capabilities of SqlServer. Instead, the list can be split into 2 lists, with separate calls to BulkInsert and BulkUpdate.
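A sketch of that split, assuming new entities can be recognized by an unset (0) Identity Id:
var newEntities = entitiesList.Where(a => a.ItemId == 0).ToList();
var existingEntities = entitiesList.Where(a => a.ItemId != 0).ToList();
context.BulkInsert(newEntities);
context.BulkUpdate(existingEntities);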
SetOutputIdentity is useful when BulkInsert is done to multiple related tables that have an Identity column.
After the Insert into the first table, we need the Id-s (if using Option 1) that were generated in the Db, because they are FK-s (ForeignKeys) in the second table.
It is implemented with OUTPUT as part of the MERGE query, so in this case even the Insert is not done directly to the TargetTable but to a TempTable, which is then merged with the TargetTable.
When used, the Id-s will be updated in entitiesList, and if PreserveInsertOrder is set to false then entitiesList will be cleared and reloaded.
Example of SetOutputIdentity with parent-child FK related tables:
int numberOfEntities = 1000;
var entities = new List<Item>();
var subEntities = new List<ItemHistory>();
for (int i = 1; i <= numberOfEntities; i++)
{
var entity = new Item { Name = $"Name {i}" };
entity.ItemHistories = new List<ItemHistory>()
{
new ItemHistory { Remark = $"Info {i}.1" },
new ItemHistory { Remark = $"Info {i}.2" }
};
entities.Add(entity);
}
// Option 1
using (var transaction = context.Database.BeginTransaction())
{
context.BulkInsert(entities, new BulkConfig { SetOutputIdentity = true });
foreach (var entity in entities) {
foreach (var subEntity in entity.ItemHistories) {
subEntity.ItemId = entity.ItemId; // setting FK to match its linked PK that was generated in DB
}
subEntities.AddRange(entity.ItemHistories);
}
context.BulkInsert(subEntities);
transaction.Commit();
}
// Option 2 using Graph (SQL Server only) - all entities in a relationship with the main ones in the list are BulkInsertUpdated
context.BulkInsert(entities, b => b.IncludeGraph = true);
When CalculateStats is set to true, the result is returned in BulkConfig.StatsInfo
(StatsNumber-Inserted/Updated/Deleted).
If used for a pure Insert (with batching) then SetOutputIdentity should also be configured, because a Merge is required.
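A sketch of reading the stats afterwards (the StatsInfo member names here follow the Inserted/Updated/Deleted naming above and are assumptions):
var bulkConfig = new BulkConfig { CalculateStats = true, SetOutputIdentity = true };
context.BulkInsertOrUpdate(entitiesList, bulkConfig);
var inserted = bulkConfig.StatsInfo.StatsNumberInserted; // assumed member name
var updated = bulkConfig.StatsInfo.StatsNumberUpdated;   // assumed member name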
TrackingEntities can be set to true if we want tracking of entities from BulkRead, or when SetOutputIdentity is set.
UseTempDB, when set, requires the BulkOperation to be inside a Transaction.
UniqueTableNameTempDb, when changed to false, makes the temp table name just 'Temp', without random numbers.
CustomDestinationTableName can be set with 'TableName' only or with 'Schema.TableName'.
EnableShadowProperties adds (normal) shadow properties and persists their values. It disables the automatic discriminator; use the manual method instead.
IncludeGraph, when set, merges all entities that have relations with the main ones from the list into their tables as well.
OmitClauseExistsExcept removes the EXISTS ... EXCEPT clause from the Merge statement; useful when triggers need to be activated even for unchanged data.
DoNotUpdateIfTimeStampChanged, if set, checks the TimeStamp for concurrency; rows with a conflict will not be updated.
SRID (Spatial Reference Identifier) - for SQL Server with NetTopologySuite.
DateTime2PrecisionForceRound: if a datetime2 db column has precision less than the default 7, for example 'datetime2(3)', SqlBulkCopy does Floor instead of Round; when this property is set, rounding is done in memory to make sure inserted values are the same as with regular SaveChanges.
SqlBulkCopyOptions is an Enum with the [Flags] attribute, which enables specifying one or more options:
Default, KeepIdentity, CheckConstraints, TableLock, KeepNulls, FireTriggers, UseInternalTransaction
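Since it is a flags enum, options can be combined with bitwise OR, for example:
var bulkConfig = new BulkConfig { SqlBulkCopyOptions = SqlBulkCopyOptions.KeepIdentity | SqlBulkCopyOptions.CheckConstraints };
context.BulkInsert(entitiesList, bulkConfig);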
SetSynchronizeFilter is a method that receives and sets an expression filter for entities to delete when using BulkInsertOrUpdateOrDelete.
The last optional argument is an Action for progress (example in EfOperationTest.cs RunInsert() with WriteProgress()).
context.BulkInsert(entitiesList, null, (a) => WriteProgress(a));
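A minimal sketch of such a callback (the decimal parameter type is an assumption based on percentage-style progress):
static void WriteProgress(decimal percentDone) => Console.WriteLine($"progress: {percentDone:P0}");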
The library supports Global Query Filters and Value Conversions as well.
Additionally, BatchUpdate and named Property work with the EnumToString conversion.
It can map OwnedTypes; the following are links with info on how to achieve
NestedOwnedTypes and
OwnedInSeparateTable.
Table splitting is somewhat specific but can be configured in the way shown in TableSplit.
With Computed and Timestamp columns it works in such a way that they are automatically excluded from the Insert. When combined with SetOutputIdentity, they will be selected back.
Spatial types, like Geometry, are also supported; if an Entity has one, the EXISTS ... EXCEPT clause is skipped because the type is not comparable.
Performance for bulk ops is measured with ActivitySources
named 'BulkExecute' (tags: 'operationType', 'entitiesCount').
Bulk extension methods can be overridden if required, for example to set AuditInfo.
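One hedged way to do this is to wrap the call in your own extension method that stamps audit fields first (IAuditable and its CreatedAt member are hypothetical):
public static class AuditBulkExtensions
{
    public static void BulkInsertAudited<T>(this DbContext context, IList<T> entities) where T : class, IAuditable
    {
        foreach (var e in entities) { e.CreatedAt = DateTime.UtcNow; } // hypothetical audit member
        context.BulkInsert(entities);
    }
}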
If you have problems with deadlocks, there is useful info in issue/46.
With a TPH (Table-Per-Hierarchy) inheritance model, it can be set in 2 ways.
The first is automatically by Convention, in which case the Discriminator column is not directly in the Entity but is a Shadow Property.
The second is to explicitly define a Discriminator property in the Entity and configure it with .HasDiscriminator().
An important remark regarding the first case: since we cannot set the Discriminator directly to a certain value, we first need to add the list of entities to the DbSet, where it will be set, and after that we can call the Bulk operation. Note that SaveChanges is not called, and we could optionally turn off ChangeTracking for performance. Example:
public class Student : Person { ... }
context.Students.AddRange(entities); // adding to Context so that Shadow property 'Discriminator' gets set
context.BulkInsert(entities);
TPT (Table-Per-Type) is, as of v5, partially supported.
When we need to Select from a big list by some unique property/column, use BulkRead (JOIN done in SQL) for efficiency:
// instead of Where..Contains (SQL IN operator), which will time out for a list with several thousand records
var entities = context.Items.Where(a => itemsNames.Contains(a.Name)).AsNoTracking().ToList();
// or a JOIN in memory, which loads the entire table
var entities = context.Items.Join(itemsNames, a => a.Name, p => p, (a, p) => a).AsNoTracking().ToList();
// USE
var items = itemsNames.Select(a => new Item { Name = a }).ToList(); // creating list of Items where only Name is set
var bulkConfig = new BulkConfig { UpdateByProperties = new List<string> { nameof(Item.Name) } };
context.BulkRead(items, bulkConfig); // Items list will be loaded from Db with data(other properties)
The following are performance measurements (in seconds, for SQL Server):
Operations\Rows | 100,000 EF | 100,000 EFBulk | 1,000,000 EFBulk |
---|---|---|---|
Insert | 38.98 s | 2.10 s | 17.99 s |
Update | 109.25 s | 3.96 s | 31.45 s |
Delete | 7.26 s | 2.04 s | 12.18 s |
Together | 70.70 s | 5.88 s | 56.84 s |
TestTable has 6 columns (Guid, string, string, int, decimal?, DateTime).
All were inserted and 2 of them (string, DateTime) were updated.
Test was done locally on following configuration: INTEL Core i5-3570K 3.40GHz, DDRIII 8GB x 2, SSD 840 EVO 128 GB.
For small data sets there is an overhead, since most Bulk ops need to create a Temp table and also drop it after finishing.
Probably good advice is to use Bulk ops for sets with more than 1000 rows.