Array Support #327
I've made some of my thoughts public in #302. I'd very much like to have array support built into the package so that users don't have to keep reinventing the wheel. But the most important thing would be to first come out with a detailed proposal for what the interface would look like. Once the interface has been agreed on, it shouldn't be more than a small matter of programming. |
Hey @johto, what is the current state of that interface? What is the best proposal so far and what is missing? Also, how about composite/row types? |
@tarikjn: I have not seen an interface addressing my comments in #302. Perhaps at this point I should just write an interface proposal I like, and then have everyone else shoot it down. If we ever reach a point where nobody is too unhappy with it, someone can write the code. (That I won't be doing any time soon). Re: composite/row types, I don't remember them having been mentioned before at any point. I haven't spent any time thinking about what their interface should look like, either. If you have time to spend on this, feel free to try and come up with something. I'll be happy to review an interface proposal. |
@johto I've read #302, but I wasn't able to follow the arguments (possibly because the PR diff has gone away). The approach there was along these lines:

```go
type IntArray []int

func (IntArray) Value() (driver.Value, error) {}
```
|
I'm thinking there should just be one array type that wraps an interface, and can Scan/Value into that interface, using something like the Unmarshaller in #302. Having separate wrappers per slice type seems less than ideal. If this seems like a reasonable approach, I'll start taking a stab at it. |
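To make that concrete, here is a minimal sketch of what such a reflection-based generic wrapper might look like. GenericArray is an invented name, and the quoting follows text-mode array-literal rules; none of this is an agreed interface, just an illustration of the idea:

```go
package main

import (
	"database/sql/driver"
	"fmt"
	"reflect"
	"strings"
)

// GenericArray is a hypothetical wrapper: it accepts any Go slice and
// renders it as a Postgres array literal in text format.
type GenericArray struct{ A interface{} }

// quoteElem applies Postgres array-element quoting: escape backslashes
// and double quotes, then wrap the element in double quotes.
func quoteElem(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`)
	s = strings.ReplaceAll(s, `"`, `\"`)
	return `"` + s + `"`
}

// Value implements driver.Valuer via reflection over the wrapped slice.
func (a GenericArray) Value() (driver.Value, error) {
	rv := reflect.ValueOf(a.A)
	if rv.Kind() != reflect.Slice {
		return nil, fmt.Errorf("pq: not a slice: %T", a.A)
	}
	elems := make([]string, rv.Len())
	for i := range elems {
		elems[i] = quoteElem(fmt.Sprint(rv.Index(i).Interface()))
	}
	return "{" + strings.Join(elems, ",") + "}", nil
}

func main() {
	v, _ := GenericArray{[]string{"a", `b "quoted"`}}.Value()
	fmt.Println(v) // {"a","b \"quoted\""}
}
```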
I would expect the overhead of the reflection-based "any slice" wrapper to be significant, which is why I think there should also be a couple of wrappers for the commonly used types. But perhaps we should just start with a generic wrapper and measure the difference.

However, now that I think about this a bit more, there's a clear problem: neither the Valuer nor the Scanner knows what the type oids are. So if you're given a slice of []byte, is that a []bytea or a []text? Our current implementation accepts both and sends the correct value anyway, since it knows the type of the parameter, but that's not good enough for the Valuer. So whatever happens, there's going to be a difference in behavior here. I'm not sure what to do about it, to be honest. This is probably only a problem with bytea vs. text, so maybe we should just specify that everything is sent as text with the generic wrapper, and have a ByteaArray{} for []bytea types. Any thoughts?

P.S. Just so we're clear, I think the wrapper should accept
|
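Following the ByteaArray suggestion above, one possible shape is sketched below. The name comes from the comment, but the encoding details are an assumption: hex format for bytea, with the backslash doubled because it sits inside a double-quoted array element.

```go
package main

import (
	"database/sql/driver"
	"encoding/hex"
	"fmt"
	"strings"
)

// ByteaArray always encodes as bytea[], sidestepping the question of
// whether a []byte element is text or bytea.
type ByteaArray [][]byte

// Value renders each element in Postgres hex format (\x...). Inside a
// double-quoted array element the backslash itself must be escaped,
// hence the doubled backslash in the output.
func (a ByteaArray) Value() (driver.Value, error) {
	elems := make([]string, len(a))
	for i, b := range a {
		elems[i] = `"\\x` + hex.EncodeToString(b) + `"`
	}
	return "{" + strings.Join(elems, ",") + "}", nil
}

func main() {
	v, _ := ByteaArray{{0xde, 0xad}, {0xbe, 0xef}}.Value()
	fmt.Println(v) // {"\\xdead","\\xbeef"}
}
```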
Fair enough on the reflection performance. I'm happy to create some benchmarks to test the difference. ByteaArray seems reasonable enough for now. Definitely agree about |
I think having a generic interface{} implementation is very useful, along with some wrappers for common types like []string. Then we can compare the performance and make both available.
|
@erykwalder do you have any working code or a fork where you're doing this? I am working on the same problem today and happy to help. We need array types as well. |
btw, an example gist I came across: https://gist.github.com/adharris/4163702 I also took the various parts from it and assembled this working example: https://gist.github.com/pkieltyka/06b5f25295bf24d7a5de |
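Both gists revolve around parsing the `{...}` literal that Postgres returns. A condensed sketch of that core idea follows — parseTextArray is a hypothetical helper, handles one-dimensional arrays only, and returns NULL as the literal string "NULL":

```go
package main

import "fmt"

// parseTextArray splits a one-dimensional Postgres array literal such
// as {a,"b,c",NULL} into its raw elements. Simplified: no nested
// arrays, and NULL comes back as the string "NULL".
func parseTextArray(s string) []string {
	if len(s) < 2 || s[0] != '{' || s[len(s)-1] != '}' {
		return nil
	}
	s = s[1 : len(s)-1] // strip { and }
	var out []string
	var cur []byte
	inQuotes := false
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c == '\\' && inQuotes && i+1 < len(s):
			i++
			cur = append(cur, s[i]) // take the escaped char verbatim
		case c == '"':
			inQuotes = !inQuotes
		case c == ',' && !inQuotes:
			out = append(out, string(cur))
			cur = cur[:0]
		default:
			cur = append(cur, c)
		}
	}
	return append(out, string(cur))
}

func main() {
	fmt.Println(parseTextArray(`{one,"two, with comma","a \"quote\""}`))
	// [one two, with comma a "quote"]
}
```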
If the Valuer is trying to encode a slice of []byte, how does it know which type is meant? This does seem like a reason to have some specific Valuer(s).
Only for prepared statements, yes? |
Which is the only thing we currently have; see e.g. #209. |
@pkieltyka https://github.com/jagregory/pq wrote an "Unmarshal" for pq arrays. Seems like a decent start, although I'm trying to think of a different way to do it. |
Hello... any action on this issue? I follow every issue in this project, and everything gets blocked by someone finding some nit-picky nuance and then failing to deliver. It's stagnating the library. |
That sounds like open source to me.
I think we're making great progress all the time. It's just hard to coordinate between several people when none of them have dedicated working hours for this or get paid for it. I know I have a long list of things I should look at, but finding the time and energy to do that is sometimes very difficult. |
Oh, also, there's ongoing work and discussion on arrays in #353, in case you missed that. |
I understand and I appreciate your work, it's a big project to maintain. |
there are 20 PRs tho :/ |
I guess it would be nice to see some PRs included in a |
moved to #364 |
There's an implementation of Int64Array in #364, though admittedly only for the Valuer part. Have you tested that PR? Did you hit any problems? |
I haven't seen that. But what I can say right now is that type checking could be more verbose (specific errors), and there is a lot of magic. And they do not implement |
Could you chime in on the discussion with a bit more specific criticism? I have no idea what you mean by "type checking could be more verbose". "A lot of magic" might be a valid criticism, but it's hard to address if we have no idea wherein you think the magic lies.
That's correct. But if we can get the Valuer part committed, perhaps there's hope for someone picking up the work needed to implement the Scanner interface on top of it. |
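To make the missing Scanner half concrete: layered on a parser like the parseTextArray sketch above, it might look roughly like this. Int64Array matches the name used in #364, but this is guesswork, not the PR's code:

```go
package main

import (
	"fmt"
	"strconv"
)

// Int64Array sketches the Scanner side, assuming the parseTextArray
// helper from the earlier sketch. Not the #364 implementation.
type Int64Array []int64

// Scan parses a text-format int8[] value into the receiver.
func (a *Int64Array) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("pq: cannot convert %T to Int64Array", src)
	}
	elems := parseTextArray(string(b)) // see the parser sketch above
	out := make(Int64Array, len(elems))
	for i, e := range elems {
		n, err := strconv.ParseInt(e, 10, 64)
		if err != nil {
			return err
		}
		out[i] = n
	}
	*a = out
	return nil
}
```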
I took a stab at adding support for arrays in this library, as I'm using them extensively in a project I'm working on. Instead of creating a new type that implements the Valuer interface, I wanted to be able to support regular Go slices and let the driver figure out the conversion. This is a start at doing that, and should work fairly well, but it hasn't been thoroughly tested. I wanted to see if I could get some feedback on this approach before I kept going further. With these changes, you could then do something like the following:

```go
id := 1
multi := []int64{5, 6, 7, 8, 9, 10}
stmt, err := db.Prepare("INSERT INTO example VALUES($1, $2)")
res, err := stmt.Exec(id, multi)
```

This should support a slice of any native driver.Value type, as well as any type that implements the Valuer interface. This is pretty much working; what I have left is to test all data types thoroughly and write tests for it. Let me know what you think, and if it's a good approach I can finish it up this weekend. |
This has been discussed before, and so far there has not been a consensus that this would be a good idea. I can see some relevant discussion in #327 and #302, but there might be more. |
@johto Thank you for the response. I've read over a lot of the discussion, but what I'd really like to understand is why there's opposition to supporting native slices. I understand that the reflection logic may be more expensive, but I really would hate to have to use structs for basic types (IntArray, BoolArray, StringArray, etc.) just because they're arrays. Are there any other concerns that I may have missed? I know that database/sql is already filled with reflection to do work on any basic type, so I'd like to see what else is a concern here. In addition, is there anything I can do to help come to a consensus on the right approach here? Would benchmarks comparing an array type that implements Valuer against native slices encoded using a ValueConverter help in this case? Anything else I can do? Thanks again! |
I personally use db.Query and db.Exec most of the time. ColumnConverter only works with prepared statements. Of course, it works, as you've shown. I can see this being pleasant for users of prepared statements, so I'm not opposed. However, I think ColumnConverter, Scanner, and Valuer should all be sharing the same internal functions. |
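For readers unfamiliar with the hook being discussed: a driver.Stmt can implement driver.ColumnConverter to hand back a per-parameter driver.ValueConverter, which is where a slice would be intercepted — and since it hangs off the Stmt, it only fires for prepared statements. A rough sketch of that routing (arrayConverter is an invented name; the actual proposal's code may differ, and the element quoting is omitted for brevity):

```go
package main

import (
	"database/sql/driver"
	"fmt"
	"reflect"
	"strings"
)

// arrayConverter is an illustrative ValueConverter that turns any Go
// slice (except []byte, which stays a bytea) into an array literal.
type arrayConverter struct{}

func (arrayConverter) ConvertValue(v interface{}) (driver.Value, error) {
	rv := reflect.ValueOf(v)
	if rv.Kind() == reflect.Slice && rv.Type().Elem().Kind() != reflect.Uint8 {
		elems := make([]string, rv.Len())
		for i := range elems {
			elems[i] = fmt.Sprint(rv.Index(i).Interface())
		}
		return "{" + strings.Join(elems, ",") + "}", nil
	}
	// Fall back to the standard conversion for everything else.
	return driver.DefaultParameterConverter.ConvertValue(v)
}

// A driver's Stmt would then hand this back per parameter:
//
//	func (s *stmt) ColumnConverter(idx int) driver.ValueConverter {
//		return arrayConverter{}
//	}

func main() {
	v, _ := arrayConverter{}.ConvertValue([]int64{5, 6, 7})
	fmt.Println(v) // {5,6,7}
}
```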
@cbandy I completely agree, but I didn't want to do a major refactor of encoding and decoding until I had gotten some feedback on this proof-of-concept. Since I also always use prepared statements, I didn't realize that ColumnConverter would not work for other cases. I'll have to look into that further. In addition, there are other improvements that I'm sure need to be made, but I wanted to get some feedback before I worked on this further. |
I personally think parameters should work the same way regardless of whether you used a prepared statement or just a straight up Query(). I don't see writing pq.Array() around every array parameter being an issue at all. Or is there something more here that I'm missing? |
I think this can be closed since #466 was merged. Please re-open if I am wrong. |
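For anyone who lands on this thread later: the API merged in #466 is pq.Array, which wraps a slice for both the Valuer and Scanner sides. Used roughly like this (the connection string and table are illustrative, error handling condensed):

```go
package main

import (
	"database/sql"
	"fmt"

	"github.com/lib/pq" // registers the "postgres" driver and provides pq.Array
)

func main() {
	db, err := sql.Open("postgres", "dbname=example sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Valuer side: wrap the slice in pq.Array when passing it as a parameter.
	_, err = db.Exec(`INSERT INTO posts (tags) VALUES ($1)`,
		pq.Array([]string{"go", "postgres"}))

	// Scanner side: wrap a pointer to the slice when scanning an array column.
	var tags []string
	err = db.QueryRow(`SELECT tags FROM posts WHERE id = $1`, 1).
		Scan(pq.Array(&tags))
	fmt.Println(tags, err)
}
```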
I've been browsing the issues and PRs to see what the status of array support is. Along with a bunch of others, I'd like to see support for arrays.
I'm curious as to what the consensus is as of now. Would it be nice for support to be baked into this package, or should it come from another package? Should it affect the encode/decode process, or simply be a set of scanners?
To speak more generally, what should be supported, what cases should be handled, and what should be avoided?
This is something I'm willing to work on; I just want to know what will be mergeable.