
Taking weighting seriously #487

Open · wants to merge 95 commits into master
Conversation

@gragusa gragusa commented Jul 15, 2022

This PR addresses several problems with the current GLM implementation.

Current status
In master, GLM/LM only accepts weights through the keyword wts. These weights are implicitly frequency weights.

With this PR
FrequencyWeights, AnalyticWeights, and ProbabilityWeights are all supported. The API is the following:

## Frequency weights
lm(@formula(y ~ x), df; wts=fweights(df.wts))
## Analytic weights
lm(@formula(y ~ x), df; wts=aweights(df.wts))
## Probability weights
lm(@formula(y ~ x), df; wts=pweights(df.wts))

The old behavior -- passing a plain vector via wts=df.wts -- is deprecated; for the moment, the array df.wts is coerced to FrequencyWeights.

To allow dispatching on the weights, CholPred takes a type parameter T<:AbstractWeights. Unweighted LM/GLM models use UnitWeights as that parameter.
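The choice of UnitWeights as the parameter for unweighted models is consistent with the algebra: with all weights equal to one, weighted least squares reduces exactly to OLS, so the unweighted fit is just a special case of the weighted machinery. A minimal sketch in Python/NumPy (illustrative only, not the PR's Julia code):

```python
# Sketch: with unit weights, the weighted normal equations
# (X' W X) b = X' W y collapse to ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)

def wls(X, y, w):
    """Solve the weighted normal equations (X' W X) b = X' W y."""
    Xw = w[:, None] * X
    return np.linalg.solve(X.T @ Xw, X.T @ (w * y))

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # plain OLS
beta_unit = wls(X, y, np.ones(50))             # UnitWeights analogue
assert np.allclose(beta_ols, beta_unit)
```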

This PR also implements residuals(r::RegressionModel; weighted::Bool=false) and modelmatrix(r::RegressionModel; weighted::Bool=false). The new signature for these two methods is pending in StatsAPI.

There are many changes I had to make to get everything working. Tests are passing, but some of the new features need new tests. Before implementing them, I wanted to make sure the overall approach was acceptable.

I have also implemented momentmatrix, which returns the estimating function of the estimator. I arrived at the conclusion that it does not make sense to have a keyword argument weighted, so I will amend JuliaStats/StatsAPI.jl#16 to remove that keyword from the signature.
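For a weighted least-squares fit, the estimating function is, row by row, w_i · x_i · (y_i − x_i′b), and its column sums vanish at the fitted coefficients, which is why the weights are already baked in and a weighted keyword would add nothing. A NumPy sketch of that identity (names are illustrative, not the PR's API):

```python
# Sketch: the estimating-function ("moment") matrix of a WLS fit has
# column sums of (numerically) zero at the fitted coefficients.
import numpy as np

rng = np.random.default_rng(1)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, -1.0]) + rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)

# WLS fit: solve (X' W X) b = X' W y
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Row i of the moment matrix is w_i * x_i * residual_i
M = w[:, None] * X * (y - X @ beta)[:, None]

# First-order conditions of the weighted fit
assert np.allclose(M.sum(axis=0), 0.0, atol=1e-8)
```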

Update

I think I covered all the suggestions/comments, with one exception that I still have to think about; maybe it can be addressed later. The new standard errors (the ones for ProbabilityWeights) also work in the rank-deficient case (and so does cooksdistance).

Tests are passing, and I think they cover everything I have implemented. I also added a section to the documentation about using weights and updated the jldoctests with the new signature of CholeskyPivoted.

To do:

  • Deal with weighted standard errors with rank deficient designs
  • Document the new API
  • Improve testing

Closes #186.

codecov-commenter commented Jul 16, 2022

Codecov Report

Attention: Patch coverage is 75.47771% with 77 lines in your changes missing coverage. Please review.

Project coverage is 85.66%. Comparing base (89493a4) to head (4fb18df).

Files with missing lines Patch % Lines
src/linpred.jl 73.50% 31 Missing ⚠️
src/glmfit.jl 78.30% 23 Missing ⚠️
src/lm.jl 73.17% 22 Missing ⚠️
src/glmtools.jl 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #487      +/-   ##
==========================================
- Coverage   90.33%   85.66%   -4.67%     
==========================================
  Files           8        8              
  Lines        1107     1277     +170     
==========================================
+ Hits         1000     1094      +94     
- Misses        107      183      +76     


lrnv commented Jul 20, 2022

Hey,

Would that fix the issue I am having: if rows of the data contain missing values, GLM discards those rows but does not discard the corresponding entries of df.weights, and then complains that there are too many weights?

I think the interface should allow passing the weights as a DataFrame column, which would take care of such things (as it does for the other variables).

gragusa commented Jul 20, 2022

Would that fix the issue I am having: if rows of the data contain missing values, GLM discards those rows but does not discard the corresponding entries of df.weights, and then complains that there are too many weights?

Not really, but it would be easy to add as a feature. Before digging further into this, though, I would like to know whether there is consensus on the approach of this PR.

alecloudenback commented Aug 14, 2022

FYI, this appears to fix #420; a PR was started in #432, but the author closed it for lack of time to investigate CI failures.

Here's the test case pulled from #432, which passes with the changes in #487.

@testset "collinearity and weights" begin
    rng = StableRNG(1234321)
    x1 = randn(100)
    x1_2 = 3 * x1
    x2 = 10 * randn(100)
    x2_2 = -2.4 * x2
    y = 1 .+ randn() * x1 + randn() * x2 + 2 * randn(100)
    df = DataFrame(y = y, x1 = x1, x2 = x1_2, x3 = x2, x4 = x2_2, weights = repeat([1, 0.5],50))
    f = @formula(y ~ x1 + x2 + x3 + x4)
    lm_model = lm(f, df, wts = df.weights)#, dropcollinear = true)
    X = [ones(length(y)) x1_2 x2_2]
    W = Diagonal(df.weights)
    coef_naive = (X'W*X)\X'W*y
    @test lm_model.model.pp.chol isa CholeskyPivoted
    @test rank(lm_model.model.pp.chol) == 3
    @test isapprox(filter(!=(0.0), coef(lm_model)), coef_naive)
end

Can this test set be added?

Is there any other feedback for @gragusa? It would be great to get this merged if it's good to go.

@nalimilan (Member)
Sorry for the long delay, I hadn't realized you were waiting for feedback. Looks great overall, please feel free to finish it! I'll try to find the time to make more specific comments.

@nalimilan nalimilan left a comment

I've read the code. Lots of comments, but all of these are minor. The main one is mostly stylistic: in most cases it seems that using if wts isa UnitWeights inside a single method (like the current structure) gives simpler code than defining several methods. Otherwise the PR looks really clean!

What are your thoughts regarding testing? There are a lot of combinations to test, and it's not easy to see how to integrate them into the current organization of the tests. One way would be to add code for each kind of test to each @testset that checks a given model family (or a particular case, like collinear variables). There's also the issue of testing the QR factorization, which isn't used by default.

Review threads (outdated, resolved): src/GLM.jl (2), src/glmfit.jl (3), src/lm.jl (4), test/runtests.jl (1)
bkamins commented Aug 31, 2022

A very nice PR. In the tests, can we have a test set that compares the results of aweights, fweights, and pweights for the same data (coefficients, predictions, covariance matrix of the estimates, p-values, etc.)?
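Such a test set would hinge on the fact that all three weight types yield the same coefficients but different standard errors. A NumPy sketch of the usual conventions (degrees-of-freedom corrections vary across packages, so the formulas below are common textbook choices rather than necessarily what this PR implements):

```python
# Sketch: same WLS coefficients, three variance conventions.
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
w = rng.uniform(1.0, 3.0, size=n)
k = X.shape[1]

XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))   # identical for all weight types
resid = y - X @ beta

# Frequency weights: each row counts w_i times, effective n = sum(w)
s2_f = (w * resid**2).sum() / (w.sum() - k)
se_f = np.sqrt(np.diag(s2_f * np.linalg.inv(XtWX)))

# Analytic weights: inverse-variance weights, effective n = number of rows
s2_a = (w * resid**2).sum() / (n - k)
se_a = np.sqrt(np.diag(s2_a * np.linalg.inv(XtWX)))

# Probability weights: sandwich (robust) variance from the moment matrix
M = w[:, None] * X * resid[:, None]
V = np.linalg.inv(XtWX) @ (M.T @ M) @ np.linalg.inv(XtWX)
se_p = np.sqrt(np.diag(V * n / (n - k)))
```

With non-unit weights, se_f and se_a differ by roughly the ratio of sum(w) to n, and se_p differs again, so a comparison test can pin down each convention separately.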

@ParadaCarleton
Hmm, did any of the people who worked on Survey.jl leave comments here? @iuliadmtru @aviks

gragusa commented Jun 16, 2023

I finally found the time to rebase this PR against the latest main repository. Tests pass locally; let's see whether they pass on the CI.

I have a few days of "free" time and would like to finish this. @nalimilan It is difficult to track the comments and which ones were addressed by the various commits. On my side, the main outstanding decision is about weight scaling. But before engaging in that conversation, I will add documentation so that whoever contributes to the discussion can do so coherently.

Tests passed!

@nalimilan (Member)
Cool. Do you need any input from my side?

@SamuelMathieu-code
Hi there! I wonder what will happen to this PR. As I understand it, one review from a person with write access is needed?

gragusa commented Feb 12, 2024 via email

@andreasnoack (Member)
@gragusa Any chance that you'd be able to look at the remaining items here? It would be good to get in for a 2.0 release.

gragusa commented Nov 19, 2024

@andreasnoack I merged my branch with base. Tests are passing (documentation is failing, but that is easy to fix). There were a few outstanding decisions to make (mostly about ftest and other peripheral methods), but I need to review the code and see where we stand. I only have a little time, but with some help I could add the finishing touches. For instance, JuliaStats/StatsAPI.jl#16 still has to be merged eventually.

@nalimilan (Member)
Thanks. Is there anything blocking in particular at JuliaStats/StatsAPI.jl#16?

@nalimilan (Member)
And regarding ftest and other "peripheral methods", better throw an error in unsupported cases for now so that the PR can be merged, and things can be implemented later.

gragusa commented Nov 25, 2024

@andreasnoack @nalimilan @bkamins ftest now throws if a model fitted with ProbabilityWeights is passed, and I added tests for it. I also added tests for loglikelihood (which also throws). Tests pass locally; we should be good to merge. After 5 years, this is a record-making PR!

gragusa commented Nov 25, 2024

Argh...7 commits to merge......

@andreasnoack (Member)
It looks like one of the last digits is flipping in a doctest. Would you be able to add a regex filter to that block?

Successfully merging this pull request may close these issues.

Path towards GLMs with fweights, pweights, and aweights