Add extension point for `AbstractModel`s with `AbstractVariableRef`s #92
It's unfortunate, but this would be my suggestion. Could you give an example of what you would require for the InfiniteOpt case? I am very reticent to start making things abstract before we absolutely need to.
I think this should work now if you do …

It should work with …
This would indeed be unfortunate, since any changes made to any of the predictors would have to be made in two places, which has been quite annoying on the OMLT team.
For activation functions, I think only a single extension point would be needed for variable generation:

```julia
function add_activation_variable(model::JuMP.AbstractModel, xi, lb, ub; kwargs...)
    yi = JuMP.@variable(model; kwargs...)
    _set_bounds_if_finite(yi, lb, ub)
    return yi
end
```

In the InfiniteOpt extension, I would add something of the form:

```julia
function add_activation_variable(model::InfiniteOpt.InfiniteModel, xi, lb, ub; kwargs...)
    prefs = InfiniteOpt.parameter_refs(xi) # queries the infinite parameters an input may depend on
    if isempty(prefs) # is the input finite?
        yi = JuMP.@variable(model; kwargs...)
    else
        yi = JuMP.@variable(model; variable_type = InfiniteOpt.Infinite(prefs...), kwargs...)
    end
    _set_bounds_if_finite(yi, lb, ub) # InfiniteOpt's version of this function (not an extension)
    return yi
end
```

The `add_predictor` methods would then become:

```julia
function add_predictor(model::JuMP.AbstractModel, ::ReLU, x::Vector)
    ub = last.(get_variable_bounds.(x))
    y = [
        add_activation_variable(model, x[i], 0, ub[i], base_name = "moai_ReLU[$i]")
        for i in eachindex(x)
    ]
    JuMP.@constraint(model, y .== max.(0, x))
    return y
end
```
```julia
function add_predictor(model::JuMP.AbstractModel, predictor::ReLUBigM, x::Vector)
    m = length(x)
    bounds = get_variable_bounds.(x)
    y = Vector{JuMP.variable_ref_type(model)}(undef, m)
    for i in 1:m
        y[i] = add_activation_variable(model, x[i], 0, last(bounds[i]), base_name = "moai_ReLU[$i]")
        lb, ub = bounds[i]
        z = add_activation_variable(model, x[i], nothing, nothing, binary = true)
        JuMP.@constraint(model, y[i] >= x[i])
        U = min(ub, predictor.M)
        JuMP.@constraint(model, y[i] <= U * z)
        L = min(max(0, -lb), predictor.M)
        JuMP.@constraint(model, y[i] <= x[i] + L * (1 - z))
    end
    return y
end
```

This API would also readily be compatible with the other activation functions, binary decision trees, and quantiles (a sketch for another activation follows the `Affine` code below). Note that this also makes … So then, the only layer that requires special treatment for full space is `Affine`:

```julia
# Variable extension point
function add_affine_variable(model::JuMP.AbstractModel, x, lb, ub; kwargs...)
    yi = JuMP.@variable(model; kwargs...)
    _set_bounds_if_finite(yi, lb, ub)
    return yi
end

# Method overloaded in the InfiniteOpt extension
function add_affine_variable(model::InfiniteOpt.InfiniteModel, x, lb, ub; kwargs...)
    prefs = InfiniteOpt.parameter_refs(x)
    if isempty(prefs) # are all of the inputs finite?
        yi = JuMP.@variable(model; kwargs...)
    else
        yi = JuMP.@variable(model; variable_type = InfiniteOpt.Infinite(prefs...), kwargs...)
    end
    _set_bounds_if_finite(yi, lb, ub) # InfiniteOpt's version of this function
    return yi
end

function add_predictor(model::JuMP.AbstractModel, predictor::Affine, x::Vector)
    m = size(predictor.A, 1)
    y = Vector{JuMP.variable_ref_type(model)}(undef, m)
    bounds = get_variable_bounds.(x)
    for i in 1:m
        y_lb, y_ub = predictor.b[i], predictor.b[i]
        for j in 1:size(predictor.A, 2)
            a_ij = predictor.A[i, j]
            lb, ub = bounds[j]
            y_ub += a_ij * ifelse(a_ij >= 0, ub, lb)
            y_lb += a_ij * ifelse(a_ij >= 0, lb, ub)
        end
        y[i] = add_affine_variable(model, x, y_lb, y_ub, base_name = "moai_Affine[$i]")
    end
    JuMP.@constraint(model, predictor.A * x .+ predictor.b .== y)
    return y
end
```
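To make the compatibility claim above concrete, here is a hedged sketch of how another activation could reuse the same extension point; the `Sigmoid` predictor type and the `moai_Sigmoid` base name are assumptions made by analogy with the `ReLU` code, not part of the proposal above:

```julia
# Hypothetical sketch: a Sigmoid layer built on the same extension point.
# `Sigmoid` and "moai_Sigmoid" are assumed names mirroring the ReLU pattern.
function add_predictor(model::JuMP.AbstractModel, ::Sigmoid, x::Vector)
    # Sigmoid outputs always lie in (0, 1), so those bounds are safe.
    y = [
        add_activation_variable(model, x[i], 0, 1, base_name = "moai_Sigmoid[$i]")
        for i in eachindex(x)
    ]
    JuMP.@constraint(model, [i in eachindex(x)], y[i] == 1 / (1 + exp(-x[i])))
    return y
end
```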
So this is actually one reason why I didn't want to support `AbstractModel`s. Could you use the following?

```julia
function add_predictor(model::InfiniteModel, predictor::AbstractPredictor, x::Vector)
    y_expr = add_predictor(model, ReducedSpace(predictor), x)
    y = @variable(model, [1:length(y_expr)])
    @constraint(model, y .== y_expr)
    return y
end
```
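A minimal sketch of why this sidesteps the tagging problem, assuming (as in the issue body below) that reduced-space reformulations return expressions that inherit the parameter dependence of `x`; the `Infinite(ξ)` tag on the binding variables is an assumed extra step an InfiniteOpt-aware version would take:

```julia
# Sketch: ReducedSpace returns expressions rather than new variables, so the
# ξ-dependence of x is carried along by y_expr automatically.
y_expr = add_predictor(model, ReducedSpace(ReLU()), x)  # e.g. max.(0, x[i](ξ))
# An InfiniteModel-aware fallback would then tag the binding variables as
# infinite (assumed usage of InfiniteOpt's @variable extension):
y = @variable(model, [1:length(y_expr)], Infinite(ξ))
@constraint(model, y .== y_expr)
```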
Yes, but only for layers that support `ReducedSpace`. For these, having an overloadable function like `add_activation_variable` would still be helpful.
Related to #83, `JuMP.AbstractModel`s like InfiniteOpt need to control how variables are generated during reformulation. Consider, for instance, a two-stage stochastic program that uses a NN predictor model where we would like to do:
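Something like the following hedged sketch, where the NN `predictor`, the dimensions, and the `Uniform` distribution are illustrative assumptions (`Distributions` provides the `~` syntax):

```julia
using InfiniteOpt, Distributions

model = InfiniteModel()
@infinite_parameter(model, ξ ~ Uniform(0, 1))  # second-stage uncertainty
@variable(model, x[1:2], Infinite(ξ))          # recourse inputs x(ξ)
# `predictor` stands in for some trained NN wrapped as a predictor (assumed)
y = add_predictor(model, predictor, x)
```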
We would want `y` to be `y(ξ)` since the input is `x(ξ)`, but instead first-stage variables `y` are returned, since the `Infinite(ξ)` tag is not used in `@variable` behind the scenes. Interestingly, this problem is avoided with reduced-space reformulations.

So, what I would need is some way to appropriately add tags to reformulation variables. Unfortunately, this might mean adding an extension point for each `add_predictor` method, since the relationships between the inputs and the generated variables vary.

Alternatively, I suppose I could overload every `add_predictor` method for `InfiniteModel`s, but then I would end up having to copy nearly all the code (only tweaking how the variables are generated).