Remove useless match in liftTokenizer
#2899
Conversation
implicit final val jsonAstEncoder: Encoder[JsonValue[Json]] = astEncoder(_.value.toString(), "json")
implicit final val jsonAstDecoder: Decoder[JsonValue[Json]] = astDecoder(JsonValue(_))
implicit final val jsonbAstEncoder: Encoder[JsonbValue[Json]] = astEncoder(_.value.toString(), "jsonb")
implicit final val jsonbAstDecoder: Decoder[JsonbValue[Json]] = astDecoder(JsonbValue(_))
We don't need to compute them more than once
  implicit val tokenTokenizer: Tokenizer[Token] = Tokenizer[Token](identity)
  implicit val statementTokenizer: Tokenizer[Statement] =
    Tokenizer[Statement](identity)
- implicit def stringTokenTokenizer: Tokenizer[StringToken] =
+ implicit val stringTokenTokenizer: Tokenizer[StringToken] =
    Tokenizer[StringToken](identity)
- implicit def liftingTokenTokenizer: Tokenizer[ScalarLiftToken] =
+ implicit val liftingTokenTokenizer: Tokenizer[ScalarLiftToken] =
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
We don't need to compute them more than once
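The motivation for the def-to-val change can be sketched in plain Scala (the names below are hypothetical and not Quill's API): an implicit def body is re-evaluated on every implicit lookup, while an implicit val body runs exactly once.

```scala
// Minimal sketch with hypothetical names, not Quill's API: an `implicit def`
// body runs on every implicit resolution, an `implicit val` body runs once.
object DefVsVal {
  final case class Enc[A](name: String)

  var defRuns = 0
  var valRuns = 0

  // Re-evaluated each time the implicit is resolved
  implicit def strEnc: Enc[String] = { defRuns += 1; Enc("string") }

  // Evaluated once, when the enclosing object is initialized
  implicit val intEnc: Enc[Int] = { valRuns += 1; Enc("int") }

  def resolve[A](implicit e: Enc[A]): Enc[A] = e

  def main(args: Array[String]): Unit = {
    resolve[String]; resolve[String]
    resolve[Int]; resolve[Int]
    assert(defRuns == 2) // one evaluation per lookup
    assert(valRuns == 1) // cached for the lifetime of the object
  }
}
```

Since these tokenizers are pure and stateless, caching them as vals is safe and avoids repeated allocation.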
  case tag: ScalarTag => ScalarTagToken(tag)
  case tag: QuotationTag => QuotationTagToken(tag)
  case lift: ScalarLift => ScalarLiftToken(lift)
- // TODO Longer Explanation
- case lift: Tag => fail("Cannot tokenizer a non-scalar tagging.")
AFAICT, a Lift can't be a ScalarTag, nor a QuotationTag, nor a Tag.
Good catch! I think there was a point at which Tag was a sub-type of Lift but I changed that a while ago.
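Why the removed branch was unreachable can be sketched with a simplified, hypothetical hierarchy (not Quill's actual AST): once Tag no longer extends Lift, a match on a Lift scrutinee can never see a Tag.

```scala
// Simplified, hypothetical model of the hierarchy after Tag stopped
// extending Lift; not Quill's actual AST.
sealed trait Lift
final case class ScalarLift(value: Any) extends Lift

sealed trait Tag // now unrelated to Lift
final case class ScalarTag(uid: String) extends Tag

object LiftMatch {
  def tokenize(l: Lift): String = l match {
    case ScalarLift(v) => s"ScalarLiftToken($v)"
    // A `case t: Tag => ...` branch here could never fire: no value can be
    // both a Lift and a Tag, so the type test is fruitless and the branch
    // is dead code.
  }

  def main(args: Array[String]): Unit =
    assert(tokenize(ScalarLift(42)) == "ScalarLiftToken(42)")
}
```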
  new Tokenizer[T] {
-   private val stable = fallback(this)
+   private lazy val stable: Tokenizer[T] = fallback(this)
    override def token(v: T) = pf.applyOrElse(v, stable.token)
This was weird to me. Why would we eagerly compute the fallback value before knowing whether we'll ever need it? 🤔
Found this thanks to research done in #2886
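The effect of the lazy modifier can be sketched as follows (hypothetical, simplified names and signatures, not Quill's actual Tokenizer API): with lazy, the fallback tokenizer is only constructed the first time the partial function fails to match, and is then cached.

```scala
// Hypothetical sketch of the lazy-fallback pattern; names and signatures
// are simplified and are not Quill's actual API.
trait Tok[T] { def token(v: T): String }

object LazyFallback {
  var fallbackBuilt = 0

  def tokenizer[T](pf: PartialFunction[T, String])(fallback: => Tok[T]): Tok[T] =
    new Tok[T] {
      // `lazy` defers building the fallback until the first miss of `pf`
      private lazy val stable: Tok[T] = { fallbackBuilt += 1; fallback }
      def token(v: T): String = pf.applyOrElse(v, (x: T) => stable.token(x))
    }

  def main(args: Array[String]): Unit = {
    val t = tokenizer[Int]({ case 1 => "one" })(new Tok[Int] {
      def token(v: Int) = s"fallback($v)"
    })
    assert(t.token(1) == "one")
    assert(fallbackBuilt == 0) // never constructed while pf keeps matching
    assert(t.token(2) == "fallback(2)")
    t.token(3)
    assert(fallbackBuilt == 1) // built on the first miss, then cached
  }
}
```

The by-name parameter plus the lazy val gives at-most-once evaluation: zero constructions if the fallback is never reached, one if it is.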