Issue: Regression – Fix from #44 Does Not Work in Latest Version
Description

The fix implemented in PR #44 appears to no longer work in the latest version of TaylorDiff.jl.

This MWE:
```julia
using CUDA, TaylorDiff

v, direction = CuArray([0f0, 0f0]), CuArray([1.0f0, 0.0f0])
derivative(x -> sum(exp.(x)), v, direction, 2)  # directional derivative
```
gives:
```
ERROR: InvalidIRError: compiling MethodInstance for (::GPUArrays.var"#gpu_broadcast_kernel_linear#38")(::KernelAbstractions.CompilerMetadata{…}, ::CuDeviceVector{…}, ::Base.Broadcast.Broadcasted{…}) resulted in invalid LLVM IR
Reason: unsupported dynamic function invocation (call to make_seed)
Stacktrace:
 [1] _broadcast_getindex_evalf
   @ ./broadcast.jl:678
 [2] _broadcast_getindex
   @ ./broadcast.jl:651
 [3] getindex
   @ ./broadcast.jl:610
 [4] macro expansion
   @ ~/.julia/packages/GPUArrays/Mot2g/src/host/broadcast.jl:54
 [5] gpu_broadcast_kernel_linear
   @ ~/.julia/packages/KernelAbstractions/mD0Rj/src/macros.jl:97
 [6] gpu_broadcast_kernel_linear
   @ ./none:0
Hint: catch this exception as `err` and call `code_typed(err; interactive = true)` to introspect the erronous code with Cthulhu.jl
Stacktrace:
  [1] check_ir(job::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, args::LLVM.Module)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/validation.jl:167
  [2] macro expansion
    @ ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:382 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/TimerOutputs/6KVfH/src/TimerOutput.jl:253 [inlined]
  [4] macro expansion
    @ ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:381 [inlined]
  [5]
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/utils.jl:108
  [6] emit_llvm
    @ ~/.julia/packages/GPUCompiler/Nxf8r/src/utils.jl:106 [inlined]
  [7]
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:100
  [8] codegen
    @ ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:82 [inlined]
  [9] compile(target::Symbol, job::GPUCompiler.CompilerJob; kwargs::@Kwargs{})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:79
 [10] compile
    @ ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:74 [inlined]
 [11] #1147
    @ ~/.julia/packages/CUDA/1kIOw/src/compiler/compilation.jl:250 [inlined]
 [12] JuliaContext(f::CUDA.var"#1147#1150"{GPUCompiler.CompilerJob{…}}; kwargs::@Kwargs{})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:34
 [13] JuliaContext(f::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/driver.jl:25
 [14] compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/1kIOw/src/compiler/compilation.jl:249
 [15] actual_compilation(cache::Dict{…}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{…}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/execution.jl:237
 [16] cached_compilation(cache::Dict{…}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{…}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/Nxf8r/src/execution.jl:151
 [17] macro expansion
    @ ~/.julia/packages/CUDA/1kIOw/src/compiler/execution.jl:380 [inlined]
 [18] macro expansion
    @ ./lock.jl:273 [inlined]
 [19] cufunction(f::GPUArrays.var"#gpu_broadcast_kernel_linear#38", tt::Type{…}; kwargs::@Kwargs{…})
    @ CUDA ~/.julia/packages/CUDA/1kIOw/src/compiler/execution.jl:375
 [20] macro expansion
    @ ~/.julia/packages/CUDA/1kIOw/src/compiler/execution.jl:112 [inlined]
 [21] (::KernelAbstractions.Kernel{…})(::CuArray{…}, ::Vararg{…}; ndrange::Tuple{…}, workgroupsize::Nothing)
    @ CUDA.CUDAKernels ~/.julia/packages/CUDA/1kIOw/src/CUDAKernels.jl:103
 [22] _copyto!
    @ ~/.julia/packages/GPUArrays/Mot2g/src/host/broadcast.jl:71 [inlined]
 [23] copyto!
    @ ~/.julia/packages/GPUArrays/Mot2g/src/host/broadcast.jl:44 [inlined]
 [24] copy
    @ ~/.julia/packages/GPUArrays/Mot2g/src/host/broadcast.jl:29 [inlined]
 [25] materialize
    @ ./broadcast.jl:872 [inlined]
 [26] broadcast(::typeof(TaylorDiff.make_seed), ::CuArray{…}, ::CuArray{…}, ::Int64)
    @ Base.Broadcast ./broadcast.jl:810
 [27] make_seed
    @ ~/.julia/packages/TaylorDiff/hFSFr/src/derivative.jl:5 [inlined]
 [28] derivatives
    @ ~/.julia/packages/TaylorDiff/hFSFr/src/derivative.jl:41 [inlined]
 [29] derivative(f::Function, x::CuArray{Float32, 1, CUDA.DeviceMemory}, l::CuArray{Float32, 1, CUDA.DeviceMemory}, p::Int64)
    @ TaylorDiff ~/.julia/packages/TaylorDiff/hFSFr/src/derivative.jl:17
 [30] top-level scope
    @ REPL[4]:1
Some type information was truncated. Use `show(err)` to see complete types.
```
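For context on the value the MWE should return (not part of the original report): with f(x) = sum(exp.(x)), v = [0, 0], and direction = [1, 0], the call computes the second directional derivative d²/dt² f(v + t·direction) at t = 0, i.e. d²/dt² (eᵗ + 1) = 1. A minimal CPU finite-difference sketch in Python, with a made-up helper name, illustrates this:

```python
import math

def second_directional_derivative(f, v, direction, h=1e-4):
    # Central finite difference in the scalar parameter t of g(t) = f(v + t*direction).
    def g(t):
        return f([vi + t * di for vi, di in zip(v, direction)])
    return (g(h) - 2.0 * g(0.0) + g(-h)) / (h * h)

f = lambda x: sum(math.exp(xi) for xi in x)
v = [0.0, 0.0]
direction = [1.0, 0.0]
print(second_directional_derivative(f, v, direction))  # ≈ 1.0
```

This is only a numerical reference point; TaylorDiff.jl computes the same quantity exactly via Taylor-mode AD rather than finite differences.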
Additional Information

- Julia Version: 1.11.3
- TaylorDiff Version: v0.3.1
- Zygote Version: v0.6.75
- CUDA Version: v5.6.1