after_post_process is run even if validations fail #2178
Comments
Thank you for reporting, @jrochkind. This is indeed a bug.
I took a stab at this, but it gets complicated. I'm not sure it's possible to do this using ActiveSupport::Callbacks without changing some existing semantics. But any ideas or tips are welcome, and I'll try to find more time to work on it. I think this probably is a security issue.
I'm curious who wrote the README line "Post-processing will not even start if the attachment is not valid according to the validations. Your callbacks and processors will only be called with valid attachments.", and what made them think that would be true! Did this ever work like that? Are there any tests that were intended to verify that?
Looking at the git history, I believe it was written as a spec before the functionality was implemented, and then the functionality never was. The commit message says "reccommended course of action" (e377e85). Anyway, this seems like a pretty bad bug to me, with security implications, but I've not been successful at figuring out a nice way to solve it. It obviously either needs to be solved, or the README needs to be changed, preferably the former.
That was supposed to be how it worked, yes, because the validators are themselves callbacks and the chains should be halted if a callback returns false. Sadly, this does not appear to be the case (anymore? if it was in the past?).
That is indeed the question.
So, in debugging this, I discovered that ActiveSupport::Callbacks does not, by default, halt the chain when a callback returns false. But you can tell ActiveSupport::Callbacks to do so by passing a :terminator when the callbacks are defined. This may be all that's needed to fix the issue, along with a spec of course; if a spec isn't there, my guess would be it never did this. Except when I tried experimentally doing this, it seemed to cause other tests to fail, so there may be other things assuming the current non-halting behavior.
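The halting semantics under discussion can be sketched in plain Ruby. This is a simplified illustration of the behavior, not ActiveSupport's actual implementation; the class and method names are invented:

```ruby
# Simplified sketch of a callback chain. With no terminator configured
# (the ActiveSupport::Callbacks default), a callback returning false is
# ignored and later callbacks still run; with halt-on-false enabled,
# the chain stops, which is the behavior validators need here.
class Chain
  def initialize(halt_on_false: false)
    @halt_on_false = halt_on_false
    @callbacks = []
  end

  def append(&blk)
    @callbacks << blk
  end

  def run
    @callbacks.each do |cb|
      result = cb.call
      # Only a configured "terminator" turns a false return into a halt.
      return false if @halt_on_false && result == false
    end
    true
  end
end

log = []
chain = Chain.new(halt_on_false: true)
chain.append { log << :validate; false }  # a failing "validator"
chain.append { log << :post_process }     # should never run
chain.run   # => false
log         # => [:validate]
```

Without `halt_on_false: true`, the same two callbacks both run and `run` returns true, which mirrors the bug: the failing validator doesn't stop post-processing.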
Thanks for posting your findings, @jrochkind. Can that be set just for paperclip, and would it make it behave correctly?
Yes, it's set at the point ActiveSupport::Callbacks is set up in the Attachment class, just for that class. I think it would make paperclip behave right, but I didn't get so far as completely verifying it with a test; I gave up when I saw it seemed to make other specs in the Paperclip suite fail for mysterious reasons.
Related to #1960.
Ah, first reported 9 months ago; thanks, @tute. I think this bug has security implications, no? Should we be assuming that it's not going to be fixed?
Yeah, paperclip suffered a bit in the past several months, but we are getting it back on track.
It does.
I can't fix it myself in the foreseeable future, but I'll swiftly merge a PR that addresses this problem and publish releases. It will be fixed, but I don't know when.
Okay, I'll try to find another couple hours to try harder for a PR (although if someone beats me to it, I won't complain). What would be super helpful is if you could provide instructions for getting the test suite to pass on a clean master. It's more confusing to figure out whether my change broke anything when the test suite starts out red for me.

Thanks for the recognition that paperclip has been a bit abandoned, and the indication that you would like to get it back on track. As an aside, I'm probably not alone in that I would find it reassuring if you provided some details on what will change to make it get back on track (someone assigned to it, or given more time than before? etc.). My general impression of thoughtbot open source is that you release very well-designed and well-written code... which then typically receives essentially no support or maintenance. That is your right (you don't owe us anything), but it affects my decision about what to adopt, and some communication of intent would be helpful. Anyway, it's still very well-written and well-designed software; thanks for sharing.
❤️
This is unacceptable indeed. On my computer I get failures:
I will work on this, but I don't expect to have answers in the short term. Meanwhile, what I do is open PRs and check with CI, which is green (or we are doing something wrong). I will write an update addressing your third paragraph in a follow-up comment. Thanks for asking.
I created #2199 to keep track of that.
Here is some paperclip history, and our plans for its future.

Jon Yurek (@jyurek) gave birth to and mostly led paperclip by himself for about seven years (since Rails v2.0). Prem Sichanugrist (@sikachu) and Mike Burns (@mike-burns) helped him with regular contributions. Not long ago, Jon decided to move on to other projects. Paperclip was headless for about two years and saw sporadic code reviews on pull requests and a few releases. (Jon, Prem, and Mike, please correct me if I got something wrong.)

I am now maintaining it, and I will be responsible for moving forward issues and pull requests, and doing releases. I may have more or less bandwidth week after week, but I'll be present. That's a first reason why I can say that paperclip is now a healthier project.

There are currently some sore points, like the spoof detection and mime-type checks. But some major sore points are now gone, which makes paperclip easier to maintain.
This is a second reason why paperclip's future looks more promising.

Over the course of 2015, with Jon Moss's help (@maclover7), we took the open issues for paperclip down from over 220 to around 80. I want to cut that down by at least half still; meanwhile, it is much easier for me to keep track of the open issues and keep in mind their relative priorities and the work needed. This means fresh air, and that's the third change that happened recently that makes paperclip easier to maintain.

It is not an option for us to release products that people depend on, and then abandon them without ensuring future development "just because". We think that's unfair to the people who place their trust in our products and projects, and we think it's irresponsible as maintainers. If we believe we can't move the project forward, we will find a new team to do it, and until we find one, we will continue maintaining the project ourselves. When we publish work, we set expectations: unless stated otherwise in the form of early or beta releases, that expectation is of high-quality, maintainable, stable software. This implies continuity for as long as the work stays useful.

Paperclip had a bit of a rough time recently, but again, we are bringing it back to life. Thanks again for your comment and question. It helped me put my feelings and thoughts into words. :)
Spent some more time on it, but not making much headway. The code gets complicated.

There are 4-5 specs that fail on a clean checkout; they all seem S3-related. Maybe the tests are written assuming certain AWS ENV? I haven't looked into that, just trying to ignore them.

After last week's code archeology, my attempt was to change the define_callbacks method call (that method is from ActiveSupport::Callbacks) to define the callbacks with a :terminator that halts the chain on a false return. I was aware that this might break existing API too. It did cause 5-10 new specs to fail, but in ways where I can't quite figure out what's going on or why they're failing. A lot of the specs are written in very mock-heavy ways, which makes it hard to tell whether it's just a test failure or a real failure, and when the methods being mocked are somewhat obscure, it's hard to tell what's really being tested. For me, anyway.

Oddly, one of the specs that fails after my change seems like it was intended to spec the behavior we want here, but if so, it's obviously failing to do so. See: https://github.com/thoughtbot/paperclip/blob/master/spec/paperclip/attachment_spec.rb#L846 (yes, that's line 846). The very mock-heavy nature of this test makes it hard for me to figure out what's going on: if it means to spec what we're talking about and passes... what's wrong with the test?
That's good. We just merged three new commits into master fixing some failing tests.
Might this be related to the Rails 5 change? http://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html#halting-callback-chains-via-throw-abort
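The Rails 5 convention linked above replaces return-value halting with throwing :abort. The mechanism can be demonstrated with plain Ruby's catch/throw; this is an illustration of the mechanism, not Rails' actual implementation, and run_chain is an invented name:

```ruby
# A chain "completes" only if no callback throws :abort. catch returns
# nil when :abort is thrown with no value, so the chain reports a halt.
def run_chain(callbacks)
  completed = catch(:abort) do
    callbacks.each(&:call)
    true
  end
  completed == true
end

run_chain([-> { 1 + 1 }, -> { :ok }])          # => true  (chain completes)
run_chain([-> { throw :abort }, -> { raise }]) # => false (halted before raise)
```

Under this convention, returning false from a callback no longer halts anything, which is exactly why a fix built on false-return halting could need Rails 5-specific handling.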
It might be stubbing the system under test, and if so, it needs to be fixed as well.
Actually, I think I might be right near a solution; PR soon. The tests that were failing for me after my change were actually wrong: they were spec'ing the wrong behavior, the opposite of what it should be. I am not testing under Rails 5. That thing you mention (which I had not previously been aware of) might be a problem for us under Rails 5, and require Rails 5-specific code (bah). Does CI test with Rails 5 yet? It probably should.
Interesting. Good thing we are about to release a major version. Thanks for all your work.
It does: https://github.com/thoughtbot/paperclip/blob/master/.travis.yml#L7-L11
Bah, not quite as close to a solution as I thought; I am just mucking up the comments on this issue. I'll keep working on it, but if you want I can share what I've found and where I am: here, in a Slack channel somewhere, or in a separate PR with code. Let me know if you'd like me to, and what medium you prefer.
You can post "early drafts" of a PR. We can add many commits there as we learn about the problem, and do a final squash and rebase.
- Because the processors were called on assignment, instead of during saving, the validations could never work correctly. This is because the built-in validations use the values in the db columns to operate. However, since these are populated on assignment, the validations cannot run before the processors run. Moreover, any other type of validation not dependent on the db columns also cannot run, because the processors are called on assignment. The processors should be called during save, which allows for validations to occur.
- Fixed tests that assert the incorrect behavior
- Closes thoughtbot#2462, Closes thoughtbot#2321, Closes thoughtbot#2236, Closes thoughtbot#2178, Closes thoughtbot#1960, Closes thoughtbot#2204
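The ordering change that commit message describes can be sketched in a few lines of Ruby. All names here are invented for illustration and are not Paperclip's API: the "column" is populated on assignment, but processing waits for save, so validations can gate it:

```ruby
# Illustrative sketch: assignment fills the "db column", validation reads
# it, and post-processing only happens during save, after validation passes.
class FakeAttachment
  attr_reader :content_type, :processed

  def initialize
    @processed = false
  end

  def assign(filename, content_type)
    @filename = filename
    @content_type = content_type   # "column" populated on assignment
  end

  def valid?
    # Stand-in for a content-type validation reading the populated column.
    @content_type.to_s.start_with?("image/")
  end

  def save
    return false unless valid?     # validations gate the processors...
    @processed = true              # ...so processing only sees valid files
    true
  end
end

a = FakeAttachment.new
a.assign("notes.txt", "text/plain")
a.save        # => false
a.processed   # => false
```

Had processing happened inside assign, it would have run before valid? could ever be consulted, which is the bug this thread is about.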
Looks like there are PRs open that will be merged, thanks for the issue.
This has been fixed at https://github.com/kreeti/paperclip/commits/master.
Paperclip 4.3.6.

The README says: "Post-processing will not even start if the attachment is not valid according to the validations. Your callbacks and processors will only be called with valid attachments."

This seems to pretty clearly say that an after_post_process hook should not be called with an invalid attachment? But it seems not to behave as advertised.

Attaching a .txt file, I would expect the model to not be valid?, but for my i_am_after_post_process method not to be called. However, it is called, and the exception raised.

This is a problem because I have after_post_process hooks written assuming that if they're being called, we have a valid image file. They end up raising when given unexpected input (the text file), and I get an uncaught exception instead of simply a model that is not valid?, as I expect.

If I remove the after_post_process hook, I can confirm that the validations are working and flagging the model as not valid?, with an invalid attachment error.

Additionally, for the same reasons that content_type validations are now required, it seems a security problem to run after_post_process hooks on an attached file that failed its validations.

Am I missing something? Thanks!
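The setup this issue describes looks roughly like the following hypothetical model (a reconstruction: the attachment name, regexp, and validator options are assumptions, and the fragment assumes a Rails app with the paperclip gem):

```ruby
class User < ActiveRecord::Base
  has_attached_file :avatar

  # A .txt upload should fail this content-type validation...
  validates_attachment_content_type :avatar, content_type: /\Aimage\//

  # ...and per the README, this hook should then never run. The reported
  # bug is that it runs anyway and raises on non-image input.
  after_post_process :i_am_after_post_process

  def i_am_after_post_process
    # written assuming a valid image file
  end
end
```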