Conversation

@SlightlyMadGargoyle
Contributor

Paraphrasing the code comments, the field length of a 'C' column is usually defined by the single FieldLength byte. The current implementation extends this by also combining the FieldLength and Decimal bytes, as is done by Clipper and FoxPro. This should work well enough, as standard DBFs should have a decimal count of 0.
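As a rough sketch of the two interpretations (function and parameter names are hypothetical, not the implementation's): in the standard form the length of a 'C' column is just the FieldLength byte, while in the Clipper/FoxPro extension the Decimal byte acts as a high byte.

```python
def char_field_length(field_length: int, decimal_count: int, extended: bool) -> int:
    """Length of a 'C' (character) column in a DBF field descriptor.

    Standard dBASE: the single FieldLength byte (max 255).
    Clipper/FoxPro extension: the Decimal byte is the high byte,
    allowing character fields longer than 255 bytes.
    """
    if extended:
        return field_length | (decimal_count << 8)
    return field_length
```

A field descriptor with FieldLength 44 and Decimal 1 is a 44-byte column under the standard reading, but a 300-byte column under the extended one, which is why a non-zero Decimal byte on a 'C' column is ambiguous.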

This is not always the case. I had a set of DBFs that had a non-zero decimal byte in a 'C' column. Odd, but oh well.

I've split the header's 'Read' into two phases. The first reads the header assuming standard field lengths. After reading the header, the record length is calculated by summing the field lengths and checked against the expected "RecordLength". If the lengths do not match, the header is re-processed assuming the extended 'C' field length format.
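The two-phase logic described above can be sketched roughly as follows (a minimal, self-contained illustration with hypothetical names and data shapes, not the PR's actual code):

```python
def resolve_field_lengths(raw_fields, expected_record_length):
    """raw_fields: list of (type_char, field_length, decimal_count) tuples.

    Phase 1 assumes standard single-byte 'C' lengths; if the summed
    record length disagrees with the header's RecordLength, phase 2
    re-interprets 'C' lengths in the extended Clipper/FoxPro form.
    """
    def clen(fl, dc, extended):
        # Extended form: the Decimal byte is the high byte of a 'C' length.
        return (fl | (dc << 8)) if extended else fl

    def record_len(extended):
        # +1 for the per-record deletion-flag byte that precedes each record.
        return 1 + sum(
            clen(fl, dc, extended) if t == "C" else fl
            for t, fl, dc in raw_fields
        )

    # Phase 1: check the standard interpretation against the header.
    extended = record_len(False) != expected_record_length
    # Phase 2 (only on mismatch): fall back to the extended interpretation.
    return [clen(fl, dc, extended) if t == "C" else fl
            for t, fl, dc in raw_fields]
```

Note that if neither interpretation matches the header's RecordLength, this sketch still falls back to the extended form; as the next paragraph explains, no stricter check is attempted.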

No further checking is done after that, as non-standard implementations could potentially have a non-standard "RecordLength" field as well.

… + 1) is correct according to the expected record length specified in the header, to avoid inappropriately reading the non-standard high byte of nFieldLength
