Rank update #331
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #331      +/-   ##
==========================================
- Coverage   95.91%   95.75%   -0.16%
==========================================
  Files          21       22       +1
  Lines        1492     1485       -7
==========================================
- Hits         1431     1422       -9
- Misses         61       63       +2
Continue to review full report at Codecov.
If I understand everything correctly, this looks pretty good. I have one question about assignment vs. dot assignment in a single method that needs attention; everything else is small.
The two big changes that I didn't comment on are:
- No default values for alpha and beta in the rank update methods. I think this is fine, but it is definitely a breaking change (admittedly on non-exported methods), so it's good to get it in now before the big release. Is this also in line with the related methods in LinearAlgebra? (See the sketch after this list.)
- Removal of a catch-all rankUpdate! method. I'm wondering if we should still have that in there, or maybe even something 'dumber', such as
function rankUpdate!(A::Any, B::Any, alpha, beta)
    error("We haven't implemented a method for $(typeof(A)), $(typeof(B)). Please file an issue on GitHub")
end
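For comparison, here is a hedged sketch of a fallback plus one specialized method. The rankUpdate!(C, A, alpha, beta) argument order, the Hermitian storage, and the BLAS delegation are assumptions for illustration, not the package's actual code; one relevant point is that LinearAlgebra.BLAS.syrk!, the closest standard-library relative, also takes alpha and beta with no defaults.

using LinearAlgebra

# Hypothetical sketch: a catch-all fallback that fails loudly instead of
# surfacing a bare MethodError.
function rankUpdate!(C::Any, A::Any, alpha, beta)
    error("rankUpdate! is not implemented for $(typeof(C)) and $(typeof(A)). ",
          "Please file an issue on GitHub.")
end

# One specialized method for dense symmetric storage: C := alpha*A*A' + beta*C,
# delegating to BLAS.syrk!, which likewise requires explicit alpha and beta.
function rankUpdate!(C::Hermitian{Float64,Matrix{Float64}}, A::Matrix{Float64}, alpha, beta)
    BLAS.syrk!(C.uplo, 'N', Float64(alpha), A, Float64(beta), C.data)
    return C
end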
src/linalg/rankUpdate.jl (Outdated)
end
C
end
=#
suggest final newline so that various things stop complaining
Suggested change: keep the closing =# and add a trailing newline after it.
@@ -531,7 +531,7 @@ end

nθ(m::LinearMixedModel) = length(m.parmap)

- StatsBase.nobs(m::LinearMixedModel) = first(size(m))
+ StatsBase.nobs(m::LinearMixedModel) = length(first(m.allterms).refs)
why change this? (not opposed, just curious)
@@ -52,6 +52,7 @@ end
@test loglikelihood(fit!(wm1)) ≈ loglikelihood(m1)
end

+ #= I don't see this testset as meaningful b/c diagonal A does not occur after amalgamation of ReMat's for the same grouping factor - D.B.
I'm trying to think of a case where a diagonal A could occur. Perhaps in one of the custom model types some of the ZiF spinoff projects are working on? Maybe this is something to revisit in a few weeks to months to see if we can trim further.
It is best to think of how rankUpdate! gets called. After pre- and post-multiplication by Lambda and inflation of the diagonal, the rankUpdate! methods get called with C being one of the diagonal blocks and A being a block to the left of C. So C is m.L[Block(2,2)] or higher, because there are no blocks to the left of m.L[Block(1,1)]. The cases that matter are when there is more than one grouping factor for the random effects. If A were to be diagonal then it would need to be generated from two scalar random-effects terms, as in (1|G) + (1|H), and the number of levels of G and H would need to be equal and the levels would need to match. You can get that from (1|G) + (0+x|G), but that is now collapsed to a single r.e. term, either through zerocorr(1+x|G) or by amalgamation of the random-effects terms. So I don't think there will be real-life cases where A is diagonal.
C is always a square matrix and either Diagonal or UniformBlockDiagonal or, through fill-in, Dense. The only special cases for A are Dense, SparseMatrixCSC or, in a new branch that I haven't pushed yet, KHotColumn. The last one is a special type of sparse matrix generated by nested grouping factors for vector-valued random effects, where each column of A has exactly K adjacent non-zeros.
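Since the KHotColumn branch is not public, the name, fields, and layout below are guesses for illustration only; a minimal sketch of such a column-structured type could look roughly like this:

# Hypothetical sketch: each column has exactly K adjacent non-zeros, stored
# densely together with the row offset at which the run starts.
struct KHotColumn{T} <: AbstractMatrix{T}
    m::Int                # number of rows
    offsets::Vector{Int}  # row index of the first non-zero in each column
    vals::Matrix{T}       # K × n matrix of the non-zero values, one column each
end

Base.size(A::KHotColumn) = (A.m, length(A.offsets))

function Base.getindex(A::KHotColumn{T}, i::Int, j::Int) where {T}
    k = i - A.offsets[j] + 1          # position within the column's K-long run
    return 1 ≤ k ≤ size(A.vals, 1) ? A.vals[k, j] : zero(T)
end

A specialized rankUpdate! method could then exploit that column j contributes only a K × K block, at rows offsets[j] through offsets[j] + K - 1, to A*A'.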
Right, so the only way that this could occur would be if someone had two identical grouping variables so that they wouldn't be amalgamated. Until we see a legitimate use case for that, I think we can safely ignore that possibility.
Thanks for the refresher! 😄
I don't see the benefit of adding a "catch-all" method to generate an error. Not having a method has the same effect, doesn't it?
I felt that this generic would be internal to the package, and I dropped the default values for alpha and beta because there is only one call to the generic in the package. It seemed easier to specify the values there than to add defaults for all the methods.
For us, yes. But for the average user, an explicit error message that says what to do next is friendlier than a bare MethodError. Ultimately, I'm neutral. Do we have a "normal" person we can ask?
I would value the opinions of either @christinabergmann or @debruine
In retrospect I think it is fine to go ahead with adding a fallback-error method like this. Would you (@palday) be willing to add a test to ensure test coverage?
@dmbates I'll take care of it. 😄
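A possible shape for that test (a sketch only: the module name MixedModels, the types used to trigger the fallback, and ErrorException as the thrown type are all assumptions here):

using LinearAlgebra, Test
using MixedModels

# Hypothetical coverage test: pick a matrix-type combination with no
# specialized rankUpdate! method and check that the informative error is
# thrown rather than a MethodError.
@testset "rankUpdate! fallback error" begin
    C = Symmetric(zeros(3, 3))
    A = UpperTriangular(ones(3, 3))
    @test_throws ErrorException MixedModels.rankUpdate!(C, A, 1.0, 1.0)
end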
@dmbates If you're happy with the new error message, please squash and merge.
Clean up rankUpdate! methods. In particular, use Hermitian{T, Diagonal{T, Vector{T}}} instead of branching on the diagonal case in updateL!. We want to use method dispatch instead of the if statements that I had in the code. This should be rebased on master after #329 is merged. Closes #290.
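To illustrate the dispatch idea, here is a minimal hypothetical sketch, assuming a rankUpdate!(C, A, alpha, beta) generic that computes C := alpha*A*A' + beta*C: instead of an if isa(C, Diagonal) branch inside updateL!, the Hermitian-wrapped Diagonal case gets its own method that touches only the diagonal entries.

using LinearAlgebra

# Hypothetical sketch: dispatch on the wrapped storage type rather than
# branching at run time. Only the diagonal of C is updated, which assumes the
# structure of the problem keeps the off-diagonal part of the update zero.
function rankUpdate!(C::Hermitian{T,Diagonal{T,Vector{T}}}, A::Matrix{T}, alpha, beta) where {T}
    d = C.data.diag
    @inbounds for i in eachindex(d)
        d[i] = beta * d[i] + alpha * sum(abs2, view(A, i, :))
    end
    return C
end

# Example call, purely illustrative:
# rankUpdate!(Hermitian(Diagonal(ones(3))), rand(3, 2), 1.0, 1.0)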