Heaviside step #1989
Conversation
Nice work, thanks for making NML2 a bit easier. A question on the design, though: if this is specifically for the support of NML2 semantics, why not implement it the same way (or add an alternative, say `hstep`, which has that (admittedly weird) behaviour)?
Because I wasn't sure if the `Math.signum` implementation was 'the standard' or just the easiest way to get a step function in Java. I couldn't find the official definition. The Python LEMS implementation also has the half-maximum convention explicitly spelled out, though, which makes it more official I think:

```python
elif func == "H":

    def heaviside_step(x):
        if x < 0:
            return 0
        elif x > 0:
            return 1
        else:
            return 0.5

    return "heaviside_step"
```

The LEMS paper also mentions
Which I guess makes the Java version the official standard... I'll change it to the half-maximum convention.
The problem is that IEEE 754 floats/doubles are signed, so we actually have ±0. Java might be different. Let's just define
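As a quick illustration of the signed-zero point (a Python sketch, assuming CPython's IEEE 754 doubles): the two zeros compare equal under the usual operators, but the sign bit is still observable via `math.copysign`.

```python
import math

# IEEE 754 doubles have two zeros: +0.0 and -0.0.
neg_zero = -0.0
pos_zero = 0.0

# Comparison operators treat them as equal...
print(neg_zero == pos_zero)          # True
print(neg_zero < 0.0)                # False: -0.0 is not "less than zero"

# ...but the sign bit survives, and copysign exposes it.
print(math.copysign(1.0, neg_zero))  # -1.0
print(math.copysign(1.0, pos_zero))  # 1.0
```

So an implementation that branches on `x < 0` / `x > 0` falls into the same branch for both zeros, while anything built directly on the sign bit would distinguish them.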
Co-authored-by: boeschf <[email protected]>
How would the half-maximum heaviside deviate here from the binary heaviside w.r.t. ±0? Something like this:

or

? Not that I hope that any production code ever relies on this behaviour :)
The current implementation behaves like the former variant (-0 and +0 are treated equally), which in my book is a good thing.
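To spell out why a comparison-based implementation treats the two zeros equally (a hedged Python sketch, not Arbor's actual code): since `-0.0 < 0` and `-0.0 > 0` are both false, both the half-maximum and the binary convention give the same answer for `-0.0` as for `+0.0`.

```python
def heaviside_half_max(x):
    """Half-maximum convention: H(0) == 0.5."""
    if x < 0:
        return 0.0
    elif x > 0:
        return 1.0
    return 0.5

def heaviside_binary(x):
    """Binary convention: H(0) == 1."""
    return 1.0 if x >= 0 else 0.0

# Both conventions treat the two IEEE 754 zeros identically, because
# comparisons against 0 behave the same for -0.0 and +0.0.
assert heaviside_half_max(-0.0) == heaviside_half_max(0.0) == 0.5
assert heaviside_binary(-0.0) == heaviside_binary(0.0) == 1.0
```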
This PR adds the Heaviside step function to Arbor's iexpr, needed for some inhomogeneous density mechanisms like the ones in this model. Arguably, one could also use segment groups, but this makes automated translation of an existing model using NeuroML's `H()` much, much easier. The implementation returns

which makes sense to me, but does not have to be the semantics that we follow. For example, the NeuroML/LEMS implementation uses the half-maximum convention, which has `H(0.0) == 0.5`.
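For what it's worth, the half-maximum convention is exactly what falls out of defining H through the mathematical sign function, H(x) = (1 + sgn(x)) / 2 with sgn(0) = 0. A small Python sketch (the names are illustrative, not Arbor's API):

```python
def sgn(x):
    # Mathematical sign: -1, 0, or +1 (bool subtraction yields an int).
    return (x > 0) - (x < 0)

def heaviside(x):
    # H(x) = (1 + sgn(x)) / 2 gives the half-maximum convention.
    return (1 + sgn(x)) / 2

print(heaviside(-3.0))  # 0.0
print(heaviside(0.0))   # 0.5
print(heaviside(2.5))   # 1.0
```

Since `-0.0 > 0` and `-0.0 < 0` are both false, this also returns 0.5 for both signed zeros, matching the behaviour discussed above.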