SCS_GPU: build libscsgpuindir against CUDA_full_jll #1294
Conversation
S/SCS_GPU/build_tarballs.jl (Outdated)

```julia
platforms = [
    Linux(:x86_64),
    Windows(:x86_64),
    MacOS(:x86_64),
]
```
Following a discussion we had with @maleadt a few weeks ago, my understanding is that this is going to work only on Linux.
I see, I've taken my cues from MAGMA:

Yggdrasil/M/MAGMA/build_tarballs.jl, line 37 in 54c9c4b:

```julia
platforms = [
```

It depends on CUDA_jll and builds for all of those platforms... [email protected] builds for those as well:

```julia
build_tarballs(ARGS, name, version, [], script,
```

@maleadt, could you chip in?
but as you predicted, Linux builds fine; macOS and Windows fail 😆
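For the record, this is roughly what the platform list reduces to once the non-Linux targets are dropped (a sketch using the same `Linux(:x86_64)` constructor as in the diff above; any `libc` constraint is left out):

```julia
# GPU build: the CUDA toolkit dependency is only available for Linux x86_64,
# so restrict the recipe to that platform.
platforms = [
    Linux(:x86_64),
]
```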
I don't think MAGMA ever worked at all.
Seeing how this library links against very specific libraries (e.g. …): regardless, for this build you'd better use the oldest version of CUDA that is compatible, because CUDA 11 needs NVIDIA driver 450+ (or so), which is 0% of users currently.
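A sketch of what pinning the build to an older CUDA could look like in `build_tarballs.jl`; the exact version number and the use of `BuildDependency`/`PackageSpec` here are assumptions for illustration, not necessarily what this PR ends up doing:

```julia
# Build against the oldest CUDA toolkit the library supports, so the
# resulting binary does not force users onto a very recent NVIDIA driver.
dependencies = [
    BuildDependency(PackageSpec(name="CUDA_full_jll", version=v"9.0.176")),
]
```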
We already have some infrastructure in place to make something like this happen (but we aren't really there yet) 😉 pinging @staticfloat
@maleadt I pinned CUDA to 9.0; what could be the design pattern for conditional use of SCS_GPU_jll in SCS?
Check if the libraries can be loaded? If you want to use this together with CUDA.jl, that's tricky, because CUDA.jl does its own artifact selection (the most recent one compatible with your driver), vs. the 9.0 you've selected here. So that could cause multiple copies of the CUDA libraries to be loaded.
:D I see; I made the necessary changes to SCS.jl locally. Is there an easy way to test it with SCS_GPU_jll locally as well? E.g. how do I install the SCS_GPU_jll artifact from a local path?
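One possible way to test against a locally built JLL, as a sketch (the `--deploy=local` flag and the resulting path are assumptions; see the BinaryBuilder docs for the authoritative workflow):

```julia
# 1. From the recipe directory, build and deploy the wrapper package locally:
#        julia build_tarballs.jl --verbose --deploy=local
#    (assumed to place an SCS_GPU_jll package under the Pkg "dev" directory)

using Pkg

# 2. Point the current project at the locally deployed wrapper:
Pkg.develop(path=joinpath(Pkg.devdir(), "SCS_GPU_jll"))

# 3. Verify that the product resolves to the locally built artifact:
using SCS_GPU_jll
@show SCS_GPU_jll.libscsgpuindir
```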
I'm trying to figure out a way of detecting CUDA libraries at runtime, but I can't really figure out how:

```julia
import Libdl
import SCS_jll
const indirect = SCS_jll.libscsindir
const direct = SCS_jll.libscsdir

if Sys.islinux() && Libdl.dlopen("libcublas.$(Libdl.dlext).9.0"; throw_error=false) !== nothing
    import SCS_GPU_jll
    const gpuindirect = SCS_GPU_jll.libscsgpuindir
end
```

This code will be run at precompile time, so the result of the check gets baked into the precompiled package.
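For contrast, a minimal sketch of what moving that check to load time could look like (assuming the solver types are already defined; how the `ccall` library name would then be bound is deliberately left out, which is exactly the complication discussed below):

```julia
import Libdl

const available_solvers = DataType[DirectSolver, IndirectSolver]

function __init__()
    # Runs on every `using SCS`, after precompilation, so the check reflects
    # the machine the package is loaded on rather than the build machine.
    if Sys.islinux() &&
       Libdl.dlopen("libcublas.$(Libdl.dlext).9.0"; throw_error=false) !== nothing
        push!(available_solvers, GpuIndirectSolver)
    end
end
```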
@maleadt @giordano I couldn't come up with anything better than this:

```julia
using Pkg; pkg"add [email protected]"
using CUDA_jll
using SCS

SCS.eval(:(
    import SCS_GPU_jll;
    const gpuindirect = SCS_GPU_jll.libscsgpuindir;
    GpuIndirectSolver in available_solvers || push!(available_solvers, GpuIndirectSolver);
    include("src/c_wrapper.jl");  # to @eval the ccalls depending on gpuindirect
))
```

See jump-dev/SCS.jl#187 (comment). If that's good enough, I'd ask to merge SCS_GPU here.
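(For context, the point of all this is so that users can then pick the GPU solver; a hypothetical usage sketch, where the `linear_solver` keyword of `SCS.Optimizer` is an assumption about SCS.jl's interface at the time:)

```julia
using SCS
# Hypothetical: select the GPU-backed indirect solver when constructing the
# optimizer; the `linear_solver` keyword name is an assumption, not taken
# from this thread.
optimizer = SCS.Optimizer(linear_solver = SCS.GpuIndirectSolver)
```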
Can't you …

@maleadt the problem is that in … If I …

Ah, yes, that's not ideal. With …

with dynamic …
@maleadt apparently I needed to go on holiday to figure out the solution ;-) The key point is that one can do

```julia
function __init__()
    @require CUDA_jll="e9e359dc-d701-5aa8-82ae-09bbf812ea83" include("c_wrapper_gpu.jl")
end
```

and the included `c_wrapper_gpu.jl` contains

```julia
if haskey(ENV, "JULIA_SCS_LIBRARY_PATH")
    @isdefined(libscsgpuindir) && push!(available_solvers, GpuIndirectSolver)
else
    import SCS_GPU_jll
    const gpuindirect = SCS_GPU_jll.libscsgpuindir
    push!(available_solvers, GpuIndirectSolver)
end
# [...] other GPU-specific code
```

See the recent changes in jump-dev/SCS.jl#187. Anyway:

```
~/.julia/dev/SCS scs_gpu_jll julia --project=.
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.3.0 (2019-11-26)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using CUDA_jll
[NVBLAS] NVBLAS_CONFIG_FILE environment variable is NOT set : relying on default config filename 'nvblas.conf'
[NVBLAS] Cannot open default config file 'nvblas.conf'
[NVBLAS] Config parsed
[NVBLAS] CPU Blas library need to be provided

julia> using SCS
[ Info: Precompiling SCS [c946c3f1-0d1f-5ce8-9dea-7daa1f7e2d13]

julia> SCS.available_solvers
3-element Array{DataType,1}:
 SCS.DirectSolver
 SCS.IndirectSolver
 SCS.GpuIndirectSolver

julia> SCS.gpuindirect
"libscsgpuindir.so"
```

and, without `using CUDA_jll` first:

```
~/.julia/dev/SCS scs_gpu_jll julia --project=.
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.3.0 (2019-11-26)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using SCS

julia> SCS.available_solvers
2-element Array{DataType,1}:
 SCS.DirectSolver
 SCS.IndirectSolver
```
@giordano if there are no objections, I'd like to merge it.
Builds only `libscsgpuindir` against `CUDA_full_jll`:

- this simplified the build script (only the GPU part is built), and I bit the bullet and tested the build locally ;)
- this moves #1289 into a separate package, following the advice of @giordano;
- I limited the platforms to only those supported by CUDA;
- here is the output of `ldd` on the BB-compiled library: …
- by comparison, this is what I get compiling on an external machine: …

From what I understand, any CUDA > 9.0 (or maybe even earlier) would do.

@maleadt As I understand what you said in the other thread, with such a compiled SCS_GPU one can still `import SCS_GPU_jll` without `CUDA_jll` etc. What would be the optimal way to discover CUDA at runtime?

At the moment we do the following: if SCS is compiled from source, then we possibly add `libscsgpuindir` (`libscsgpu` is the old name):
https://github.com/jump-dev/SCS.jl/blob/672b6711e924ca1af520b48db73da217bb00e6ba/deps/build.jl#L63

Then at precompile time we add it to a `const available_solvers`:
https://github.com/jump-dev/SCS.jl/blob/672b6711e924ca1af520b48db73da217bb00e6ba/src/c_wrapper.jl#L90

Would it be feasible to check for CUDA at `__init__` and populate `available_solvers` there?