ENH: Add three solvers for differences of convex functions. #1307

Merged: 22 commits, Jun 11, 2018

Conversation

@sbanert (Contributor) commented Feb 27, 2018

This family of solvers makes it possible to handle, e.g., nonconvex regularizers that can be written as the difference of two convex functionals. The three solvers differ in whether they use proximal operators or subgradients for the different parts of the problem.

The main restriction is that none of the solvers can handle objective functions whose convex part is the composition of a functional with a linear operator.
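
For orientation, here is a minimal usage sketch of the plain ``dca`` solver on the toy problem used in the accompanying test, min_x a/2 (x - b)^2 - |x|. The functional construction is illustrative and assumes the standard ODL functionals expose the required ``convex_conj.gradient`` and ``gradient``; the PR's own test may set the problem up differently:

    import odl
    from odl.solvers import dca  # prox_dca and doubleprox_dc are added alongside

    space = odl.rn(3)
    a = 2.0
    b = space.element([2.0, -0.5, 0.1])

    # Convex part g(x) = a/2 * ||x - b||^2, "negative of the concave part" h(x) = ||x||_1
    g = (a / 2) * odl.solvers.L2NormSquared(space).translated(b)
    h = odl.solvers.L1Norm(space)

    x = space.zero()
    dca(x, g, h, niter=30)  # the initial point x is updated in place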

@kohr-h (Member) left a comment:

Nice work, and some very cool new stuff coming in! My comments are mostly wrt docs, and one efficiency thing.

y_n \\in \\partial h(x_n), \\qquad x_{n+1}
= \\mathrm{Prox}^{\\gamma}_g(x_n + \\gamma y_n)

Here, :math:`\\gamma` is the stepsize parameter ``gamma``.
Member:

I think that's obvious enough that you don't need to mention it explicitly.

= \\mathrm{Prox}^{\\gamma}_g(x_n + \\gamma y_n)

Here, :math:`\\gamma` is the stepsize parameter ``gamma``.

Member:

Perhaps a comment on the difference to the "simple" DCA? I guess the main advantage is that you don't need the subgradient of g, but only its prox.


[TA1997] Tao, P D, and An, L T H. *Convex analysis approach to d.c.
programming: Theory, algorithms and applications*. Acta Mathematica
Vietnamica, 22.1 (1997), pp 289--355.
Member:

Unless the text references it, I wouldn't add this citation. Without reference it's unclear how it relates to this function.

Initial point, updated in-place.
g : `Functional`
Convex part. Needs to provide a `Functional.convex_conj` with a
`Functional.proximal` factory.
Member:

gradient

Contributor Author:

I changed this. Maybe this is an artefact of diffing.

Member:

Same comment as above.

and the iteration is given by

.. math::
y_n \in \partial h(x_n), \qquad x_{n+1} \in \partial g^*(y_n)
Member:

Could you draw the connection to the gradient property?
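
(A note for readers of the thread: the connection in question is presumably that whenever :math:`g^*` is differentiable, the subdifferential reduces to a singleton,

.. math::
    \partial g^*(y_n) = \{\nabla g^*(y_n)\},

so the update can be computed as :math:`x_{n+1} = \nabla g^*(y_n)`, which is presumably why the implementation further down calls ``g.convex_conj.gradient``.)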

min_x a/2 (x - b)^2 - |x|

For finding possible (local) solutions, we consider several cases:
* x > 0 ==> ∂|.|(x) = 1, i.e., a necessary optimality condition is
Member:

I guess we need a unicode marker for this

Contributor Author:

Done.
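
(For completeness, a sketch of the case the truncated docstring excerpt above starts; the full derivation lives in the PR. For :math:`x > 0`, the optimality condition :math:`a(x - b) \in \partial|\cdot|(x)` becomes

.. math::
    a(x - b) = 1
    \quad\Longleftrightarrow\quad
    x = b + \frac{1}{a},

which is an admissible candidate whenever :math:`b + 1/a > 0`; comparing the cases then gives the solution set :math:`\{b + 1/a\}` for :math:`b > 1/a`, as quoted further down in this review.)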

raise ValueError('`g.domain` and `h.domain` need to be equal, but '
'{} != {}'.format(space, h.domain))
for _ in range(niter):
g.proximal(gamma)(x + gamma * h.gradient(x), out=x)
Member:

If you use x.lincomb in the argument, you can probably avoid one copy.
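
A possible shape of that rewrite (only a sketch, not the merged code; ``tmp`` is a hypothetical preallocated buffer, and it assumes ODL's in-place ``lincomb`` tolerates the aliasing shown):

    tmp = x.space.element()     # scratch element, allocated once
    prox = g.proximal(gamma)    # proximal operator built once, outside the loop
    for _ in range(niter):
        h.gradient(x, out=tmp)         # tmp <- grad h(x_n)
        tmp.lincomb(1, x, gamma, tmp)  # tmp <- x_n + gamma * grad h(x_n), in place
        prox(tmp, out=x)               # x   <- Prox_{gamma g}(tmp)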

for _ in range(niter):
g.proximal(gamma)(x + gamma * K.adjoint(y) -
gamma * phi.gradient(x), out=x)
h.convex_conj.proximal(mu)(y + mu * K(x), out=y)
Member:

same here
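
The same buffer-based pattern could apply to this double update (again only a sketch with hypothetical names ``tmp_x``, ``tmp_y``; it still allocates one temporary for ``phi.gradient(x)`` and assumes the aliasing in ``lincomb`` is safe):

    tmp_x = x.space.element()
    tmp_y = y.space.element()
    prox_g = g.proximal(gamma)
    prox_h_cc = h.convex_conj.proximal(mu)
    for _ in range(niter):
        K.adjoint(y, out=tmp_x)            # tmp_x <- K^* y_n
        tmp_x -= phi.gradient(x)           # tmp_x <- K^* y_n - grad phi(x_n)
        tmp_x.lincomb(1, x, gamma, tmp_x)  # tmp_x <- x_n + gamma * tmp_x
        prox_g(tmp_x, out=x)               # x <- Prox_{gamma g}(tmp_x)
        K(x, out=tmp_y)                    # tmp_y <- K x_{n+1}
        tmp_y.lincomb(1, y, mu, tmp_y)     # tmp_y <- y_n + mu * K x_{n+1}
        prox_h_cc(tmp_y, out=y)            # y <- Prox_{mu h^*}(tmp_y)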


.. math ::
x_{n+1} &= \mathrm{Prox}_{\gamma}^g (x_n + \gamma K^* y_n
- \gamma \nabla \varphi(x_n)), \\
Member:

Maybe a matter of taste, but I'd take out the common factor gamma.

g.proximal(gamma)(x + gamma * K.adjoint(y) -
gamma * phi.gradient(x), out=x)
h.convex_conj.proximal(mu)(y + mu * K(x), out=y)

Member:

If you break down the code to make it more efficient, perhaps it makes sense to again add a _simple implementation of the method:

for _ in range(niter):
    x = g.proximal(gamma)(x + gamma * (K.adjoint(y) - phi.gradient(x)))
    y = h.convex_conj.proximal(mu)(y + mu * K(x))

no=4&page=451&year=2003&ppage=462>`_. It solves the problem

.. math ::
\\min g(x) - h(x)
Member:

I overlooked this the first time. Please go for new-style r''' math-containing docstrings and no double backslashes.

Contributor Author:

The problem then is the broken link, which is itself too long for one line because of PEP8. I have not found a way to break the link without inserting additional whitespace.

Member:

Hm, too bad. In that case I'd prefer violating PEP8 and let the link go over the length limit, to still get the benefit of less ugly and error-prone math typing. Other opinions?

Contributor Author:

Okay, I changed this, and neither pytest --pep8 nor PEP8speaks is complaining.
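
(For readers following the thread, the "new-style" form being asked for looks roughly like this, assembled from snippets quoted elsewhere in this review; the raw string makes the single backslashes safe:

    def dca(x, g, h, niter, callback=None):
        r"""Subgradient DCA of Tao and An.

        Notes
        -----
        The iteration is given by

        .. math::
            y_n \in \partial h(x_n), \qquad x_{n+1} \in \partial g^*(y_n).
        """
)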

@aringh (Member) previously requested changes Mar 27, 2018 and left a comment:

Looks like a nice contribution! Some minor comments.


"""Solvers for the optimization of the difference of convex functions.

Collection of DCA and related methods which make use of structured optimization
Member:

Add explanation of the acronym DCA somewhere?

Convex part. Needs to provide a `Functional.convex_conj` with a
`Functional.gradient` method.
h : `Functional`
Negative of the concave part. Needs to provide a
Member:

I find the first sentence hard to understand. Should it be a convex or concave function, and why the negative? Can't we just write Convex functional. Needs to [...]

x : `LinearSpaceElement`
Initial point, updated in-place.
g : `Functional`
Convex part. Needs to provide a `Functional.convex_conj` with a
Member:

Can we write this as Needs to implement Functional.convex_conj.gradient.? Or is that more confusing?

Member:

To be more precise than just 👍, `Functional.convex_conj.gradient` will produce a dead link, so it's better to write ``g.convex_conj.gradient`` then.

See also
--------
"""
# `prox_dca`, `doubleprox_dc`
Member:

Put back in See also.

raise ValueError('`g.domain` and `h.domain` need to be equal, but '
'{} != {}'.format(space, h.domain))
for _ in range(niter):
g.convex_conj.gradient(h.gradient(x), out=x)
Member:

I think it would be better to do g_convex_conj = g.convex_conj outside the loop and then use g_convex_conj in the loop. If the initialization is heavy this should be faster, right? Poke @kohr-h

Member:

That's right, good point
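
Concretely, the hoisting being suggested might look like this (a sketch with illustrative names):

    g_cc_grad = g.convex_conj.gradient  # construct the operator once, outside the loop
    h_grad = h.gradient
    for _ in range(niter):
        g_cc_grad(h_grad(x), out=x)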

----------
[BB2016] Banert, S, and Bot, R I. *A general double-proximal gradient
algorithm for d.c. programming*. arXiv:1610.06538 [math.OC] (2016).
"""
Member:

A See also section?

callback(x)


def doubleprox_dc_simple(x, y, g, h, phi, K, niter, gamma, mu):
Member:

Should this be kept? If so, move it to a test of the method?

Contributor Author:

I’ve inserted it on @kohr-h’s request (#1307 (comment)), and in the test it is checked that both methods give exactly the same result. The same principle is already in ODL:

def admm_linearized_simple(x, f, g, L, tau, sigma, niter, **kwargs):
def adupdates_simple(x, g, L, stepsize, inner_stepsizes, niter,

Member:

I think it's good that the methods are next to each other so that, if things go seriously wrong, users can go for the "emergency fallback" that is (hopefully) bug-free. As long as it's not in __all__ it doesn't clutter anything.

Member:

Ok!

@@ -0,0 +1,90 @@
# coding=utf-8
Member:

Do we usually have this in the top of the files? @kohr-h

Contributor Author:

No, but without it, the ∂ symbol in the comment would cause a Python 2 error, see #1307 (comment).

Member:

In less than two years we can remove it again :-)

* {b + 1/a} if b > 1/a.
"""

# Set the problem parameters
Member:

Maybe add a short comment noting that this puts us in the third case?

assert float(y_simpl) == pytest.approx(float(y))

# All methods should give approximately one solution of the problem.
assert dist_dca == pytest.approx(0, abs=1e-6)
Member:

Should this not use the HIGH_ACCURACY and LOW_ACCURACY somehow? Or what are they used for?

Contributor Author:

I see this as a regression test. The test shows that this method reaches an accuracy of 1e-6 for the current configuration of ODL and the given number of iterations (and that tolerance is well above the level of mere rounding errors). Maybe I could change this to LOW_ACCURACY, but then it would not be clear that the values are not expected to be equal even in theory.

Member:

I don't have a strong opinion on this. We have something similar in the ray transform tests, but on the other hand things are not fully within our control there. @adler-j?

Contributor Author:

I’d like to leave this as it is. Instead I will add a clarifying comment.


from __future__ import division
import odl
from odl.solvers import (dca, prox_dca, doubleprox_dc)
Member:

no parentheses here

def dca(x, g, h, niter, callback=None):
r"""Subgradient DCA of Tao and An.

This algorithm solves a problem of the form::
Member:

To avoid the colon being printed in the doc, add a space before the ::


This algorithm solves a problem of the form::

min_x g(x) - h(x),
Member:

Would it be completely outrageous to use f and g as the other optimization methods do?

@kohr-h mentioned this pull request May 3, 2018
@sbanert (Contributor Author) commented May 3, 2018

I think I addressed everything now.

@sbanert (Contributor Author) commented May 3, 2018

That’s kind of strange. The failing Travis test says

>       assert dist_prox_cda == pytest.approx(0, abs=1e-6)
E       assert 1.5683290222057167e-09 == 0 ± 1.0e-06
E        +  where 0 ± 1.0e-06 = <function approx at 0x7f3f917972f0>(0, abs=1e-06)
E        +    where <function approx at 0x7f3f917972f0> = pytest.approx
odl/test/solvers/nonsmooth/difference_convex_test.py:92: AssertionError

This does not happen on my local machine, and I always thought that 1.56e-9 < 1e-6.

@sbanert (Contributor Author) commented May 3, 2018

After updating my conda packages, I get the same error. The possible reasons for this failure are thus among the following updates (I suspect that it might be the pytest update):

    anaconda-client:        1.6.6-py36h59e3ba0_0        --> 1.6.14-py36_0             
    anaconda-navigator:     1.6.11-py36_0               --> 1.8.4-py36_0              
    astroid:                1.6.0-py36_0                --> 1.6.3-py36_0              
    astropy:                2.0.3-py36h14c3975_0        --> 3.0.2-py36h3010b51_1      
    attrs:                  17.3.0-py36h5ab58ff_0       --> 17.4.0-py36_0             
    babel:                  2.5.1-py36_0                --> 2.5.3-py36_0              
    backports.weakref:      1.0rc1-py36_0               --> 1.0.post1-py36h39d5b32_0  
    bitarray:               0.8.1-py36h5834eb8_0        --> 0.8.1-py36h14c3975_1      
    bokeh:                  0.12.13-py36h2f9c1c0_0      --> 0.12.15-py36_0            
    bzip2:                  1.0.6-h6d464ef_2            --> 1.0.6-h9a117a8_4          
    ca-certificates:        2017.08.26-h1d4fec5_0       --> 2018.03.07-0              
    cairo:                  1.14.10-hdf128ce_6          --> 1.14.12-h7636065_2        
    certifi:                2017.11.5-py36hf29ccca_0    --> 2018.4.16-py36_0          
    cffi:                   1.11.2-py36h2825082_0       --> 1.11.5-py36h9745a5d_0     
    cloudpickle:            0.5.2-py36h84cdd9c_0        --> 0.5.2-py36_1              
    coverage:               4.4.2-py36hca7c4c5_0        --> 4.5.1-py36h14c3975_0      
    cryptography:           2.1.4-py36hd09be54_0        --> 2.2.2-py36h14c3975_0      
    cudatoolkit:            8.0-3                       --> 9.0-h13b8566_0            
    cudnn:                  6.0.21-cuda8.0_0            --> 7.1.2-cuda9.0_0           
    curl:                   7.55.1-h78862de_4           --> 7.59.0-h84994c4_0         
    cython:                 0.27.3-py36h1860423_0       --> 0.28.2-py36h14c3975_0     
    cytoolz:                0.8.2-py36h708bfd4_0        --> 0.9.0.1-py36h14c3975_0    
    dask:                   0.16.1-py36_0               --> 0.17.3-py36_0             
    dask-core:              0.16.1-py36_0               --> 0.17.3-py36_0             
    dbus:                   1.10.22-h3b5a359_0          --> 1.13.2-h714fa37_1         
    decorator:              4.1.2-py36hd076ac8_0        --> 4.3.0-py36_0              
    distributed:            1.20.2-py36_0               --> 1.21.6-py36_0             
    fastcache:              1.0.2-py36h5b0c431_0        --> 1.0.2-py36h14c3975_2      
    fontconfig:             2.12.4-h88586e7_1           --> 2.12.6-h49f89f6_0         
    glib:                   2.53.6-h5d9569c_2           --> 2.56.1-h000015b_0         
    graphite2:              1.3.10-hc526e54_0           --> 1.3.11-hf63cedd_1         
    greenlet:               0.4.12-py36h2d503a6_0       --> 0.4.13-py36h14c3975_0     
    gst-plugins-base:       1.12.2-he3457e5_0           --> 1.14.0-hbbd80ab_1         
    gstreamer:              1.12.2-h4f93127_0           --> 1.14.0-hb453b48_1         
    harfbuzz:               1.5.0-h2545bd6_0            --> 1.7.6-h5f0a787_1          
    heapdict:               1.0.0-py36h79797d7_0        --> 1.0.0-py36_2              
    hypothesis:             3.38.5-py36h196a6cc_0       --> 3.56.0-py36_0             
    imageio:                2.2.0-py36he555465_0        --> 2.3.0-py36_0              
    imagesize:              0.7.1-py36h52d8127_0        --> 1.0.0-py36_0              
    intel-openmp:           2018.0.0-hc7b2577_8         --> 2018.0.0-8                
    ipykernel:              4.7.0-py36h2f9c1c0_0        --> 4.8.2-py36_0              
    ipython:                6.2.1-py36h88c514a_1        --> 6.3.1-py36_0              
    ipywidgets:             7.1.0-py36_0                --> 7.2.1-py36_0              
    isort:                  4.2.15-py36had401c0_0       --> 4.3.4-py36_0              
    jdcal:                  1.3-py36h4c697fb_0          --> 1.4-py36_0                
    jedi:                   0.11.0-py36_2               --> 0.12.0-py36_1             
    jupyter:                1.0.0-py36h9896ce5_0        --> 1.0.0-py36_4              
    jupyter_client:         5.2.1-py36_0                --> 5.2.3-py36_0              
    libgcc-ng:              7.2.0-h7cc24e2_2            --> 7.2.0-hdf63c60_3          
    libgfortran-ng:         7.2.0-h9f7466a_2            --> 7.2.0-hdf63c60_3          
    libpng:                 1.6.32-hbd3595f_4           --> 1.6.34-hb9fc6fc_0         
    libprotobuf:            3.4.1-h5b8497f_0            --> 3.5.2-h6f1eeef_0          
    libsodium:              1.0.15-hf101ebd_0           --> 1.0.16-h1bed415_0         
    libssh2:                1.8.0-h2d05a93_3            --> 1.8.0-h9cfc8f7_4          
    libstdcxx-ng:           7.2.0-h7a57d05_2            --> 7.2.0-hdf63c60_3          
    libxcb:                 1.12-hcd93eb1_4             --> 1.13-h1bed415_1           
    libxml2:                2.9.4-h2e8b1d7_6            --> 2.9.8-hf84eae3_0          
    libxslt:                1.1.29-h78d5cac_6           --> 1.1.32-h1312cb7_0         
    llvmlite:               0.21.0-py36ha241eea_0       --> 0.22.0-py36ha27ea49_0     
    lxml:                   4.1.1-py36h4d89739_0        --> 4.2.1-py36h23eabaa_0      
    markdown:               2.6.9-py36_0                --> 2.6.11-py36_0             
    matplotlib:             2.1.1-py36ha26af80_0        --> 2.2.2-py36h0e671d2_1      
    mistune:                0.8.1-py36h3d5977c_0        --> 0.8.3-py36h14c3975_1      
    mkl:                    2018.0.1-h19d6760_4         --> 2018.0.2-1                
    msgpack-python:         0.5.1-py36h6bb024c_0        --> 0.5.6-py36h6bb024c_0      
    multipledispatch:       0.4.9-py36h41da3fb_0        --> 0.5.0-py36_0              
    navigator-updater:      0.1.0-py36h14770f7_0        --> 0.2.0-py36_0              
    networkx:               2.0-py36h7e96fb8_0          --> 2.1-py36_0                
    notebook:               5.2.2-py36h40a37e6_0        --> 5.4.1-py36_0              
    numba:                  0.36.2-np112py36hbcd2105_0  --> 0.37.0-np112py36ha11926f_0
    numpydoc:               0.7.0-py36h18f165f_0        --> 0.8.0-py36_0              
    olefile:                0.44-py36h79f9f78_0         --> 0.45.1-py36_0             
    openpyxl:               2.4.9-py36hb5dfbf6_0        --> 2.5.3-py36_0              
    openssl:                1.0.2n-hb7f436b_0           --> 1.0.2o-h20670df_0         
    packaging:              16.8-py36ha668100_1         --> 17.1-py36_0               
    pango:                  1.40.11-h8191d47_0          --> 1.41.0-hd475d92_0         
    parso:                  0.1.1-py36h35f843b_0        --> 0.2.0-py36_0              
    path.py:                10.5-py36h55ceabb_0         --> 11.0.1-py36_0             
    pathlib2:               2.3.0-py36h49efa8e_0        --> 2.3.2-py36_0              
    patsy:                  0.4.1-py36ha3be15e_0        --> 0.5.0-py36_0              
    pcre:                   8.41-hc27e229_1             --> 8.42-h439df22_0           
    pexpect:                4.3.1-py36_0                --> 4.5.0-py36_0              
    pillow:                 5.0.0-py36h3deb7b8_0        --> 5.1.0-py36h3deb7b8_0      
    pip:                    9.0.1-py36h6c6f9ce_4        --> 10.0.1-py36_0             
    ply:                    3.10-py36hed35086_0         --> 3.11-py36_0               
    protobuf:               3.4.1-py36h306e679_0        --> 3.5.2-py36hf484d3e_0      
    psutil:                 5.4.3-py36h14c3975_0        --> 5.4.5-py36h14c3975_0      
    py:                     1.5.2-py36h29bf505_0        --> 1.5.3-py36_0              
    pycodestyle:            2.3.1-py36hf609f19_0        --> 2.4.0-py36_0              
    pycrypto:               2.6.1-py36h6998063_1        --> 2.6.1-py36h14c3975_7      
    pylint:                 1.8.1-py36_0                --> 1.8.4-py36_0              
    pyodbc:                 4.0.21-py36h083aac6_0       --> 4.0.23-py36hf484d3e_0     
    pyqt:                   5.6.0-py36h0386399_5        --> 5.9.2-py36h751905a_0      
    pysocks:                1.6.7-py36hd97a5b1_1        --> 1.6.8-py36_0              
    pytables:               3.4.2-py36h3b5282a_2        --> 3.4.3-py36h02b9ad4_0      
    pytest:                 3.3.2-py36_0                --> 3.5.1-py36_0              
    python:                 3.6.4-hc3d631a_0            --> 3.6.5-hc3d631a_2          
    python-dateutil:        2.6.1-py36h88d3b88_1        --> 2.7.2-py36_0              
    pytz:                   2017.3-py36h63b9c63_0       --> 2018.4-py36_0             
    pyzmq:                  16.0.3-py36he2533c7_0       --> 17.0.0-py36h14c3975_0     
    qt:                     5.6.2-h974d657_12           --> 5.9.5-h7e424d6_0          
    qtpy:                   1.3.1-py36h3691cc8_0        --> 1.4.1-py36_0              
    ruamel_yaml:            0.11.14-py36ha2fb22d_2      --> 0.15.35-py36h14c3975_1    
    scipy:                  1.0.0-py36hbf646e7_0        --> 1.0.1-py36hfc37229_0      
    setuptools:             36.5.0-py36he42e2e1_0       --> 39.1.0-py36_0             
    simplegeneric:          0.8.1-py36h2cb9092_0        --> 0.8.1-py36_2              
    sip:                    4.18.1-py36h51ed4ed_2       --> 4.19.8-py36hf484d3e_0     
    sortedcollections:      0.5.3-py36h3c761f9_0        --> 0.6.1-py36_0              
    sortedcontainers:       1.5.9-py36_0                --> 1.5.10-py36_0             
    sphinx:                 1.6.6-py36_0                --> 1.7.4-py36_0              
    sphinx_rtd_theme:       0.2.4-py36_0                --> 0.3.0-py36_0              
    spyder:                 3.2.6-py36_0                --> 3.2.8-py36_0              
    sqlalchemy:             1.2.0-py36h14c3975_0        --> 1.2.7-py36h6b74fdf_0      
    sqlite:                 3.20.1-hb898158_2           --> 3.23.1-he433501_0         
    tensorflow:             1.3.0-0                     --> 1.5.0-0                   
    tensorflow-base:        1.3.0-py36h5293eaa_1        --> 1.5.0-py36hff88cb2_1      
    tensorflow-gpu:         1.3.0-0                     --> 1.5.0-0                   
    tensorflow-gpu-base:    1.3.0-py36cuda8.0cudnn6.0_1 --> 1.5.0-py36h8a131e3_0      
    tensorflow-tensorboard: 0.1.5-py36_0                --> 1.5.1-py36hf484d3e_1      
    terminado:              0.8.1-py36_0                --> 0.8.1-py36_1              
    tornado:                4.5.3-py36_0                --> 5.0.2-py36_0              
    typing:                 3.6.2-py36h7da032a_0        --> 3.6.4-py36_0              
    unixodbc:               2.3.4-hc36303a_1            --> 2.3.6-h1bed415_0          
    wheel:                  0.30.0-py36hfd4bba0_1       --> 0.31.0-py36_0             
    widgetsnbextension:     3.1.0-py36_0                --> 3.2.1-py36_0              
    xlsxwriter:             1.0.2-py36h3de1aca_0        --> 1.0.4-py36_0              
    xz:                     5.2.3-h55aa19d_2            --> 5.2.3-h5e939de_4          
    zeromq:                 4.2.2-hbedb6e5_2            --> 4.2.5-h439df22_0          
    zope.interface:         4.4.3-py36h0ccbf34_0        --> 4.5.0-py36h14c3975_0      

@kohr-h (Member) commented May 3, 2018

That’s kind of strange. The failing Travis test says

  assert dist_prox_cda == pytest.approx(0, abs=1e-6)

E assert 1.5683290222057167e-09 == 0 ± 1.0e-06
E + where 0 ± 1.0e-06 = <function approx at 0x7f3f917972f0>(0, abs=1e-06)
E + where <function approx at 0x7f3f917972f0> = pytest.approx
odl/test/solvers/nonsmooth/difference_convex_test.py:92: AssertionError

This does not happen on my local machine, and I always thought that 1.56e-9 < 1e-6.

😂 I'd assume that as well..
Apparently there have been some recent changes in pytest's approx function that have to do with NumPy array handling. Can you print the types and shapes of the quantities in question?

@kohr-h (Member) commented May 3, 2018

Apparently there have been some recent changes in pytest's approx function that have to do with NumPy array handling.

For reference, here is the PR with the relevant changes.

But since the old tests seem to be fine, I guess there's some subtle issue with your test.

@sbanert (Contributor Author) commented May 7, 2018

I tried to figure out what’s going on with the debugger.

(Pdb) p dist_prox_dca
1.5683290222057167e-09
(Pdb) p dist_dca
0.0
(Pdb) p type(dist_prox_dca)
<class 'numpy.float64'>
(Pdb) p type(dist_dca)
<class 'numpy.float64'>
(Pdb) p dist_prox_dca.shape
()
(Pdb) p dist_dca.shape
()

However, if I convert everything to float, it works fine (as does everything with NumPy 1.13).

@kohr-h (Member) commented May 9, 2018

This seems like an issue with pytest. Can you try to make a minimal example that doesn't involve ODL and file an issue with pytest if it's reproducible? Perhaps it has already been raised, though.
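
(If someone wants to attempt that, a minimal example in the spirit of the failing assertion could look like this; hypothetical, and whether it actually reproduces the failure depends on the pytest and NumPy versions installed:

    import numpy as np
    import pytest

    def test_numpy_scalar_approx():
        # Mimics the failing comparison: a NumPy scalar against pytest.approx
        dist = np.float64(1.5683290222057167e-09)
        assert dist == pytest.approx(0, abs=1e-6)
)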

@aringh (Member) commented Jun 11, 2018

I will poke this one again 😉 It is more or less good to go, no?

@sbanert (Contributor Author) commented Jun 11, 2018

I’m not aware of any open issues with this pull request. (Except for the bug report to pytest, which might be unnecessary anyway because it works fine with the newest numpy version.)

@kohr-h (Member) commented Jun 11, 2018

Merge branch 'master' into dcsolvers

We need to undo this, otherwise LGTM.

@sbanert (Contributor Author) commented Jun 11, 2018

Done. There is no way to remove this annoying “Update branch” button or change its function to a rebase, I assume?

@kohr-h (Member) commented Jun 11, 2018

Done. There is no way to remove this annoying “Update branch” button or change its function to a rebase, I assume?

No, unfortunately not. I guess everybody has to be burned at least once :-)

@kohr-h dismissed aringh's stale review June 11, 2018 14:53: "Everything addressed, merging now"

@kohr-h merged commit dff29c1 into odlgroup:master on Jun 11, 2018
@sbanert deleted the dcsolvers branch June 12, 2018 11:31