Definition
Writing all of the generators will be one of the most time-consuming and bug-prone tasks in the beginning. However, the benefits clearly outweigh the downsides, the biggest ones being:
Materializers and inserters that can be debugged and inspected in a meaningful way
Compiler optimizations applied to the generated code, instead of relying purely on the JIT and hand-written IL
Far more maintainable code, as hand-written IL is very error-prone and hard to maintain
However, as mentioned, there are downsides which do hurt, although the trade-off is well worth it. The most notable are:
No reading or writing of hidden fields or accessors; this includes read-only properties and all accessors other than public ones
Not the theoretically most optimized materializer possible; however, less code needs to be loaded to accomplish the same task.
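For example, an entity like the following (purely illustrative, not one of the entities used later in the proposals) could no longer have its Id written by the generated code, because the setter is not public, whereas hand-written IL could bypass the accessibility check:

```csharp
public class AuditedEntity
{
    // Private setter: not writable from the generated C#.
    public int Id { get; private set; }

    // Public getter and setter: fully supported by the generated code.
    public string Name { get; set; }
}
```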
Query
Design
As mentioned before, the materializer itself will no longer be as optimized as theoretically possible. This comes down to a simple problem: there is no consistent and easy way to predict which columns an SQL query may return ahead of time (AOT), i.e. before the query has actually executed once.
This means that two parts play together while reading a DbDataReader:
The actual detection of which columns were returned.
This will be done by a dictionary compiled AOT, containing the column name of each property as the key and its absolute property index in the entity as the value. On the first execution of the query at runtime, these entries are mapped onto a new static array whose index is the column index of the reader and whose value is the property index (see the sketch below).
The actual materialization, which is reused by all queries with the same table, the same order of joins, and the same configuration options such as change tracking.
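To make these two parts concrete, here is a minimal sketch of how they could fit together, assuming the Person entity used in the proposals further down. All identifiers (ColumnToPropertyIndex, BuildColumnMapping, MaterializePerson, QueryPeopleAsync) are illustrative, not the actual generated names:

```csharp
// Compiled AOT: column name of each property -> absolute property index in the entity.
private static readonly Dictionary<string, int> ColumnToPropertyIndex = new()
{
    ["id"] = 0,
    ["name"] = 1,
    ["content"] = 2,
};

// One cached mapping per query, built on its first execution at runtime:
// index = column index in the DbDataReader, value = property index in the entity.
private static int[]? _columnMappingOfThisQuery;

private static int[] BuildColumnMapping(DbDataReader reader)
{
    var mapping = new int[reader.FieldCount];

    for (var columnIndex = 0; columnIndex < reader.FieldCount; columnIndex++)
    {
        mapping[columnIndex] = ColumnToPropertyIndex[reader.GetName(columnIndex)];
    }

    return mapping;
}

// The materializer itself only sees property indices, so it can be shared by all
// queries with the same table, join order and configuration options.
private static Person MaterializePerson(DbDataReader reader, int[] mapping)
{
    var person = new Person();

    for (var columnIndex = 0; columnIndex < mapping.Length; columnIndex++)
    {
        switch (mapping[columnIndex])
        {
            case 0: person.Id = reader.GetFieldValue<int>(columnIndex); break;
            case 1: person.Name = reader.GetFieldValue<string>(columnIndex); break;
            case 2: person.Content = reader.GetFieldValue<string>(columnIndex); break;
        }
    }

    return person;
}

// Per-query entry point: resolves the mapping once, then reuses the shared materializer.
public static async Task<List<Person>> QueryPeopleAsync(NpgsqlCommand cmd)
{
    await using var reader = await cmd.ExecuteReaderAsync();

    var mapping = _columnMappingOfThisQuery ??= BuildColumnMapping(reader);

    var people = new List<Person>();

    while (await reader.ReadAsync())
    {
        people.Add(MaterializePerson(reader, mapping));
    }

    return people;
}
```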
Insert
Design
The implementation of the insert doesn't change much, other than some parts of the inserter being reused.
Proposal
Single entity:
```csharp
public static void PersonInserter(NpgsqlCommand cmd, Person p)
{
    var parameters = cmd.Parameters;

    parameters.Add(new NpgsqlParameter<string>("@p0", p.Name));
    parameters.Add(new NpgsqlParameter<string>("@p1", p.Content));

    cmd.CommandText = "INSERT INTO people (name, content) VALUES (@p0, @p1) RETURNING id";
}

public static async Task Inserter(Person p)
{
    using var cmd = new NpgsqlCommand();

    PersonInserter(cmd, p);

    p.Id = (int)await cmd.ExecuteScalarAsync();
}
```
Multiple entities:
```csharp
public static void PersonInserter(NpgsqlParameterCollection parameters, StringBuilder commandText, Person[] people)
{
    commandText.Append("INSERT INTO people (name, content) VALUES ");

    var absoluteIndex = 0;

    for (int i = 0; i < people.Length; i++)
    {
        var p = people[i];

        var name1 = "@p" + absoluteIndex++;
        var name2 = "@p" + absoluteIndex++;

        commandText.Append('(').Append(name1).Append(',').Append(' ').Append(name2).Append(')').Append(',').Append(' ');

        parameters.Add(new NpgsqlParameter<string>(name1, p.Name));
        parameters.Add(new NpgsqlParameter<string>(name2, p.Content));
    }

    commandText.Length -= 2;

    commandText.Append(" RETURNING id");
}

public static async Task Inserter(Person[] p)
{
    using var cmd = new NpgsqlCommand();

    var commandText = new StringBuilder();

    PersonInserter(cmd.Parameters, commandText, p);

    cmd.CommandText = commandText.ToString();

    await using var reader = await cmd.ExecuteReaderAsync(p.Length == 1 ? CommandBehavior.SingleRow : CommandBehavior.Default);

    var index = 0;

    while (await reader.ReadAsync())
    {
        p[index++].Id = reader.GetFieldValue<int>(0);
    }
}
```
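For completeness, a sketch of how the generated inserter above could be consumed. As in the proposal itself, connection handling is omitted; the command created inside Inserter would still need an open NpgsqlConnection assigned before execution:

```csharp
var people = new[]
{
    new Person { Name = "Alice", Content = "first entry" },
    new Person { Name = "Bob", Content = "second entry" },
};

await Inserter(people);

// The ids produced by "RETURNING id" have been written back to the entities.
Console.WriteLine($"{people[0].Id}, {people[1].Id}");
```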
Update
Design
We used to store an array of booleans with a fixed length equal to the total number of theoretically updatable columns. This was suboptimal, as it required an array allocation and an iteration over the whole array even if only a single column was updated. In Reflow this will completely change.
From now on, changes will be stored in local numeric fields whose bits represent the changed columns. Additionally, the trackChanges flag will be stored in the same field to reduce memory usage even further. Assuming there are four updatable columns, there would be a single byte field: the least significant bit indicates whether changes should be tracked, the following four bits (from least significant to most significant) represent the columns, and the remaining three bits are unused. If there are more than 7 updatable fields, the field is changed to the smallest fitting larger numeric type, e.g. ushort/uint/ulong. For now the maximum number of updatable columns is restricted to 63 (as one bit is required for the trackChanges flag mentioned above).
While building the SQL required to actually update the columns in the database, we use an algorithm that iterates only over the set bits, which improves performance even further. Additionally, we no longer need to invoke a method to obtain the new value of each column. A small standalone illustration follows below.
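To illustrate the bit layout and the set-bit iteration in isolation, here is a small self-contained example. The values and masks are purely illustrative, assuming four updatable columns with tracking enabled and the first two columns changed:

```csharp
using System;
using System.Numerics;

// Bit 0: trackChanges flag.
// Bits 1-4: one bit per updatable column (least significant bit = first column).
// Bits 5-7: unused for an entity with four updatable columns.
byte state = 0b0000_0111; // tracking enabled, columns 0 and 1 changed

bool trackChanges = (state & 1) != 0;

// Drop the trackChanges bit so that bit i corresponds to column i.
byte section = (byte)(state >> 1);

// Iterate over set bits only: isolate the lowest set bit, handle that column,
// then clear the bit. The loop runs once per changed column, not once per column.
while (section != 0)
{
    int lowestSetBit = section & (~section + 1);
    int columnIndex = BitOperations.TrailingZeroCount(lowestSetBit);

    Console.WriteLine($"column {columnIndex} changed (trackChanges: {trackChanges})");

    section &= (byte)(section - 1);
}
```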
Proposal
```csharp
public static void Update(Person p, StringBuilder builder, NpgsqlParameterCollection parameters, ref uint absoluteIndex)
{
    if (p is not PersonProxy proxy)
        return;

    proxy.GetSectionChanges(0, out var section);

    while (section != 0)
    {
        string name = "@p" + absoluteIndex++;

        switch ((byte)(section & (byte)(~section + 1)))
        {
            case 1 << 0:
                builder.Append("id = ");
                parameters.Add(new NpgsqlParameter<int>(name, proxy.Id));
                break;
            case 1 << 1:
                builder.Append("name = ");
                parameters.Add(new NpgsqlParameter<string>(name, proxy.Name));
                break;
            default:
                throw new Exception();
        }

        builder.Append(name).Append(',').Append(' ');

        section &= (byte)(section - 1);
    }
}

public class Person
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public string Content { get; set; }
}

public class PersonProxy : Person
{
    private byte _hasChanges_1_8;

    public PersonProxy(bool trackChanges = false)
    {
        if (trackChanges)
            _hasChanges_1_8 |= 1;
    }

    public override int Id
    {
        get => base.Id;
        set
        {
            base.Id = value;

            if ((byte)(_hasChanges_1_8 & 1) != 0)
            {
                _hasChanges_1_8 |= 1 << 1;
            }
        }
    }

    public override string Name
    {
        get => base.Name;
        set
        {
            base.Name = value;

            if ((byte)(_hasChanges_1_8 & 1) != 0)
            {
                _hasChanges_1_8 |= 1 << 2;
            }
        }
    }

    public void GetSectionChanges(byte sectionIndex, out byte section)
    {
        switch (sectionIndex)
        {
            case 0:
                section = (byte)(_hasChanges_1_8 >> 1);
                break;
            default:
                throw new Exception();
        }
    }
}
```
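The proposal above only generates the SET fragment; how it is composed into a full UPDATE statement is not specified here. A hypothetical caller could look roughly like this (the UpdateAsync name and the WHERE clause handling are assumptions, and connection handling is again omitted):

```csharp
public static async Task UpdateAsync(Person p)
{
    using var cmd = new NpgsqlCommand();

    var builder = new StringBuilder("UPDATE people SET ");
    uint absoluteIndex = 0;

    Update(p, builder, cmd.Parameters, ref absoluteIndex);

    if (absoluteIndex == 0)
        return; // not a tracked proxy, or nothing changed

    builder.Length -= 2; // trim the trailing ", " appended by the generated fragment

    var name = "@p" + absoluteIndex;
    cmd.Parameters.Add(new NpgsqlParameter<int>(name, p.Id));
    builder.Append(" WHERE id = ").Append(name);

    cmd.CommandText = builder.ToString();

    await cmd.ExecuteNonQueryAsync();
}
```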
Delete
Definition
The deletion of entities doesn't change much in its implementation either, other than no longer accessing the primary key through a delegate.
Proposal
Single entity:
```csharp
public static async Task<int> Delete(Person p)
{
    using var cmd = new NpgsqlCommand();

    cmd.Parameters.Add(new NpgsqlParameter<int>("@p0", p.Id));

    cmd.CommandText = "DELETE FROM people WHERE id = @p0";

    return await cmd.ExecuteNonQueryAsync();
}
```
Multiple entities:
```csharp
public static async Task<int> Delete(Person[] p)
{
    if (p.Length == 0)
        return 0;

    using var cmd = new NpgsqlCommand();

    var parameters = cmd.Parameters;
    var commandText = new StringBuilder();

    commandText.Append("DELETE FROM people WHERE id IN (");

    for (int i = 0; i < p.Length; i++)
    {
        var name = "@p" + i;

        parameters.Add(new NpgsqlParameter<int>(name, p[i].Id));

        commandText.Append(name).Append(',').Append(' ');
    }

    commandText.Length -= 2;
    commandText.Append(')');

    cmd.CommandText = commandText.ToString();

    return await cmd.ExecuteNonQueryAsync();
}
```