Optimizing DICE using OptiMimi

Hey!

I am trying to optimize welfare using MimiDICE2016R2. I am posting the code that I am using below. However, I am unable to obtain a solution, and the code has been running for close to 24 hours. I am not sure if there is something obvious that I am missing or if there is a problem with how I have defined my optimization problem. I would appreciate any help on this.

# load the model
using MimiDICE2016R2

# get an instance of the model
m_opti = MimiDICE2016R2.get_model()

# run the model
run(m_opti)

# load packages required for optimization
using Mimi
using OptiMimi

# define objective function
function objective(model::Model)
    model[:welfare, :CUMCEMUTOTPER][100]
end

# define optimization problem
optprob = problem(m_opti, [:neteconomy, :emissions], [:S, :MIU], [0., 0.], [1., 1.], objective)

# obtain the solution
(maxf, maxx) = solution(optprob, () -> [0. for i in 1:200], verbose = true)

Hi @tejendrapsingh thank you for posting! For questions specifically about OptiMimi it might be best to reach out to @jrising (tagging him here in case he sees it) or post an Issue on the GitHub repository, since it is not a package that we Mimi developers actively maintain.

In my recent work I have used the NLopt package to do these optimizations; it isn’t as integrated with Mimi, but it seems to work quite well.

Some examples of where people have used that (might be slightly outdated but give the general idea):

These are using regionalized versions of DICE, so might be more complicated than you want, but could give you an idea of how to set up the problem.

Hi @tejendrapsingh. I’m not surprised that it takes a long time to do this directly. It’s a lot of parameters to optimize, and OptiMimi runs the whole model for each parameter combination.

You have a couple of options.

  • You can parameterize these time-series parameters. For example, you can say MIU(t) = alpha*t + beta, and then you only need two parameters. Getting the right parameterization is quite important, though (see the sketch after this list).
  • You can give the optimizer a better starting point. The closer you get to the optimal solution, the less searching it needs to do.
  • You can set up your Mimi model to use autodifferentiation, and then OptiMimi can use solvers that require gradients. It’s been a while since I tested this though.
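
A minimal sketch of the first suggestion (assuming MimiDICE2016R2's 100 five-year periods; the helper name miu_from_params is made up):

# map two parameters (alpha, beta) to a full MIU time series,
# clipped to the model's [0, 1] bounds on the abatement rate
miu_from_params(alpha, beta; nperiods = 100) = clamp.(alpha .* (1:nperiods) .+ beta, 0.0, 1.0)

# e.g. a schedule starting at 0.1 and rising by 0.01 per period:
# miu = miu_from_params(0.01, 0.1)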

Hope that helps!

Hey @lrennels and @jrising

Thanks for the suggestions. I am looking into these. I will report back on my progress soon.

This might be useful too, from @felixschaumann: Optimizing MimiDICE - #23 by felixschaumann

Thanks @lrennels !

I coded optimization for vanilla DICE2013. However, my optimization problem does not converge. I am posting the code below for your reference.

# load packages
using NLopt, Mimi, MimiDICE2013

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model())

    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = zeros(n_objectives)     

    # Create upper bound    
    upper_bound = ones(n_objectives)
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = ones(n_objectives) ./ 1000 
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model and the candidate abatement path as arguments
function construct_objective(m::Model, optimised_abatement::Array{Float64,1})
    # update abatement rate and run the model 
    update_param!(m, :MIU, optimised_abatement)
    run(m)
    return m[:welfare, :CUMCEMUTOTPER][end]  # cumulative welfare in the final period
end

Hi @tejendrapsingh, when I use the code above directly pasted into my environment, and then run

julia> m, opt_result = optimise_model()

My results do quickly converge, returning a model m that I look at with explore(m) as shown in the image, as well as the opt_result Dictionary below.

julia> opt_result
Dict{String, Any} with 3 entries:
  "Optimised policy vector" => [0.140766, 0.155643, 0.168098, 0.179659, 0.189477, 0.197352, 0.202184, 0.203448, 0.199977, 0.190742  …  0.0011875, 0.0011875, 0.0011875, 0.0011875, 0.0011875, 0.0011875, 0…
  "Maximum objective value" => 40983.8
  "Convergence result"      => :FTOL_REACHED

One diagnostic step might be to double-check that your optimization is running properly with println("blah") statements within the objective function, or even just by running a loop over various possible sets of rates and checking the results of the objective function with printing or graphing, e.g.

m = MimiDICE2013.get_model()
for constant_rate in collect(0:0.1:1)
    w = construct_objective(m, fill(constant_rate, 60))
    println("Welfare is $w")
end

to make sure the results differ and the function runs smoothly.

Other thoughts on what can help are below, but given this converges very quickly on my machine, I think it should be working fine on yours?


  1. You can add some flexibility around which years you are using as choice variables and then interpolate between them, which I do fairly crudely below, but you could clean this up. That can help with speed, or just with gut-checking strange answers.

  2. If you are using a local algorithm, make sure you look at different starting points, as the result might be sensitive to them; for example, I ran your code with a starting point of upper_bound ./ 2 and got quite different results.


# load packages
using NLopt, Mimi, MimiDICE2013, Interpolations

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model(); objective_years::Vector=collect(2010:5:2305))

    # define periods for which the optimization takes place
    n_objectives = length(objective_years)
    all_years = dim_keys(m, :time)

    # check that all objective years are in model years - a nothing will appear if
    # one of the objective_years is not found in all_years
    mismatches = findall(i -> isnothing(i), indexin(objective_years, all_years))
    if !isempty(mismatches)
        error("All objective years must be found in model `m` years.")
    end

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = zeros(n_objectives)     

    # Create upper bound    
    upper_bound = ones(n_objectives)
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = upper_bound ./ 1000
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x, objective_years, all_years))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model, the candidate abatement path, and the year vectors as arguments
function construct_objective(m::Model, optimised_abatement::Array{Float64,1}, objective_years::Vector, all_years::Vector)
    # update abatement rate and run the model 
    full_abatement_schedule = get_full_schedule(optimised_abatement, objective_years, all_years)
    update_param!(m, :MIU, full_abatement_schedule)
    run(m)
    return m[:welfare, :CUMCEMUTOTPER][end]  # cumulative welfare in the final period
end

function get_full_schedule(rates, objective_years, all_years)
    interp_linear_extrap = LinearInterpolation(objective_years, rates, extrapolation_bc = Line())
    full_schedule = interp_linear_extrap.(all_years)
    full_schedule[full_schedule .< 0] .= 0 # can't go below 0 -- only important for extrapolation
    full_schedule[full_schedule .> 1] .= 1 # can't go above 1 -- only important for extrapolation
    return full_schedule
end
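
For example, a hypothetical call that optimises one choice point every 25 years and lets the interpolation fill in the remaining model periods:

m, opt_result = optimise_model(objective_years = collect(2010:25:2285))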

Hey @lrennels!

Thanks a lot for these suggestions. I solved my previous problem. The issue was that NLopt was giving me “FTOL_REACHED” as the output; as I am new to this optimization routine, I was expecting “NLOPT_SUCCESS” and therefore got confused. I appreciate all the help.

I now have a new issue that I would appreciate a little help on. I am trying to optimize both the abatement rate and the savings rate. The way I am doing this is to pass one long parameter vector to the objective function. Is there a way I can pass two separate parameter vectors of length equivalent to the time dimension of the DICE2013 model? For instance, I would like to pass these to the `max_objective!` function in NLopt. I am posting the code for the long parameter vector approach and also what I would ideally like to do below.

Long parameter vector approach:

# set directory
cd("C:/Users/singh/Dropbox/ResearchWork/Potential/mimiDICE2016R2")

# load packages
using NLopt, Mimi, MimiDICE2013, CSV, DataFrames

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model())

    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = [0.039;zeros(n_objectives-1);zeros(n_objectives)]     

    # Create upper bound    
    upper_bound = [0.039;ones(28);1.2*ones(31);ones(n_objectives)]
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = [ones(n_objectives) ./ 1000;ones(n_objectives) ./ 1000] 
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives*2)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model and the stacked abatement/savings vector as arguments
function construct_objective(m::Model, optimised_abatement::Array{Float64,1})
    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # update the abatement and savings rates and run the model
    update_param!(m, :MIU, optimised_abatement[1:n_objectives])
    update_param!(m, :S, optimised_abatement[n_objectives+1:2*n_objectives])
    run(m)
    return m[:welfare, :UTILITY]
end

# run the optimization
model_opt = optimise_model()

# extract the policy vector
policy_opt = model_opt.opt_result["Optimised policy vector"]

# update the parameter and run model
m = MimiDICE2013.get_model()
update_param!(m, :MIU, policy_opt[1:60])
update_param!(m, :S, policy_opt[61:120])
run(m)

# export to CSV
CSV.write("optimized_model.csv", DataFrame((years = 2010:5:2305, 
abatement = m[:emissions, :MIU], 
cumulative_welfare = m[:welfare, :CUMCEMUTOTPER], 
savings = m[:neteconomy, :S])))

What I would ideally like to do:

# set directory
cd("C:/Users/singh/Dropbox/ResearchWork/Potential/mimiDICE2016R2")

# load packages
using NLopt, Mimi, MimiDICE2013, CSV, DataFrames

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model())

    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = [0.039;zeros(n_objectives-1);zeros(n_objectives)]     

    # Create upper bound    
    upper_bound = [0.039;ones(28);1.2*ones(31);ones(n_objectives)]
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = [ones(n_objectives) ./ 1000;ones(n_objectives) ./ 1000] 
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives*2)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model and separate abatement and savings vectors as arguments
function construct_objective(m::Model, optimised_abatement::Array{Float64,1}, optimised_savings::Array{Float64,1})
    # update the abatement and savings rates and run the model
    update_param!(m, :MIU, optimised_abatement)
    update_param!(m, :S, optimised_savings)
    run(m)
    return m[:welfare, :UTILITY]
end

# run the optimization
model_opt = optimise_model()
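
One possible way to wire this up (a sketch, not from the thread): NLopt itself still requires a single flat decision vector, but the closure passed to max_objective! can split that vector and call the two-argument construct_objective above, e.g.

# inside optimise_model, after n_objectives is defined:
max_objective!(opt, (x, grad) -> construct_objective(m, x[1:n_objectives], x[n_objectives+1:2*n_objectives]))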

Glad to hear it! Yes, that is the correct feedback, meaning it hit the FTOL that you set, so assuming you are happy with the convergence etc. then that is a “successful” result as far as I know.
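
For example, one could accept any of the normal stopping codes and warn otherwise (a sketch, where ret is the convergence_result returned by optimize):

ret in (:SUCCESS, :FTOL_REACHED, :XTOL_REACHED) || @warn "check convergence: $ret"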

Per your next question, we’re reaching the limits of my skills with NLopt and optimization, as I have not done as much work with optimization as with Mimi.jl itself.

Folks who might have done more with optimization include @jrising (focused more on OptiMimi) and it looks like perhaps @felixschaumann or @FrankErrickson.

Good luck, and do let me know as you go if you run into any Mimi.jl-centered issues or error messages that you have trouble with, as that is more in the domain of my (and this forum’s) expertise, and I’ll be more than happy to jump back in! The Julia Slack is also quite active and there might be some NLopt folks on there, or I’m guessing you can add Issues to the package as another way to be in touch with optimization experts.

@lrennels Thanks for the reply, and sorry for the delayed response. From other posts on the forum, I created code for optimizing the mitigation/abatement and savings rates. However, the results do not seem to make sense. In particular, they do not correlate well with the results from the GAMS optimization. I am posting the code below. I am unsure if I am missing something here, but any help is appreciated.

# set directory
cd("C:/Users/singh/Dropbox/ResearchWork/Potential/mimiDICE2016R2")

# load packages
using NLopt, Mimi, MimiDICE2013, CSV, DataFrames

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model())

    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = [0.039;zeros(n_objectives-1);zeros(n_objectives)]     

    # Create upper bound    
    upper_bound = [0.039;ones(28);1.2*ones(31);ones(n_objectives)]
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = upper_bound.*0.005
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives*2)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model and the stacked abatement/savings vector as arguments
function construct_objective(m::Model, optimised_abatement_sav::Array{Float64,1})
    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # update abatement and savings rate and run the model 
    update_param!(m, :MIU,  optimised_abatement_sav[1:n_objectives])
    update_param!(m, :S,    optimised_abatement_sav[n_objectives+1:2*n_objectives])
    run(m)
    return m[:welfare, :UTILITY]
end

# run the optimization
model_opt = optimise_model()

I am also tagging @jrising and @felixschaumann in case they can be of help.

I am also tagging @FrankErrickson in case he can be of help.

Hi @tejendrapsingh,
I’ve never tried to optimise two variables at the same time, but in principle I don’t see why your approach wouldn’t work. Mainly I was puzzled by your very low starting points for both variables; maybe it’s worth going for some higher values that are closer to what you’d expect for the emissions control rate and savings rate?
Cheers,
Felix
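
For instance, “higher values” might look like the starting point used later in this thread (MIU around 0.4 and S around 0.2, rather than near zero):

starting_point = [ones(n_objectives)*0.4; ones(n_objectives)*0.2]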

Hey @felixschaumann!

Thanks for the reply.

You are correct that the starting values were a bit weird. I made them more realistic, and the model seems to behave well. Thanks for the suggestion.

Cheers,
Tejendra

Hey @lrennels!

A quick question. I am playing around with the DICE2013 optimization. I want to do a baseline run optimization where damages are ignored. The way I do this is to set damages to zero by explicitly setting the damage quadratic term a2 to zero. However, the lower bound on the abatement rate binds, and the optimized abatement rate is zero. This is not what I would expect from a baseline run. Do you have a sense of why this could be happening? Is there something crucial I am missing here? I am posting my code below for reference.

# set directory
cd("C:/Users/singh/Dropbox/ResearchWork/Potential/mimiDICE2016R2")

# load packages
using NLopt, Mimi, MimiDICE2013, CSV, DataFrames

# write the function which takes as input an instance of DICE model
function optimise_model(m::Model=MimiDICE2013.get_model())

    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # define time by which the optimization should stop
    stop_time = 640

    # define the tolerance parameter for the functional value
    tolerance = 1e-6

    # determine the optimization algorithm
    optimization_algorithm = :LN_SBPLX

    # Create lower bound
    lower_bound = [0.039;zeros(n_objectives-1);zeros(n_objectives-10);ones(10)*0.25827]     

    # Create upper bound    
    upper_bound = [0.039;ones(28);1.2*ones(31);ones(n_objectives-10);ones(10)*0.25827]
    
    # Create initial condition for algorithm (this is a fraction of upper bound)
    starting_point = [ones(n_objectives)*0.4;ones(n_objectives)*0.2]
    
    # generate an optimization problem of NLopt type
    opt = Opt(optimization_algorithm, n_objectives*2)
    
    # Set the bounds.
    lower_bounds!(opt, lower_bound)
    upper_bounds!(opt, upper_bound)
    
    # Assign the objective function to maximize.
    max_objective!(opt, (x, grad) -> construct_objective(m, x))
    
    # Set termination time.
    maxtime!(opt, stop_time)
    
    # Set optimization tolerance (will stop if |Δf| / |f| < tolerance from one iteration to the next).
    ftol_rel!(opt, tolerance)
    
    # Optimize model.
    maximum_objective_value, optimised_policy_vector, convergence_result = optimize(opt, starting_point)

    # return the object: model and dictionary object
    return (m = m, opt_result = Dict([("Maximum objective value", maximum_objective_value),
    ("Optimised policy vector", optimised_policy_vector),
    ("Convergence result", convergence_result)]))
end
    
# define the objective function, taking the model and the stacked abatement/savings vector as arguments
function construct_objective(m::Model, optimised_abatement_sav::Array{Float64,1})
    # define periods for which the optimization takes place
    n_objectives = length(m.md.dim_dict[:time])

    # update abatement and savings rate and run the model 
    update_param!(m, :MIU,  optimised_abatement_sav[1:n_objectives])
    update_param!(m, :S,    optimised_abatement_sav[n_objectives+1:2*n_objectives])
    # update the damages parameter and set to zero
    update_param!(m, :a2, 0.00)
    run(m)
    return m[:welfare, :UTILITY]
end

# run the optimization
model_opt = optimise_model()

# extract the policy vector
policy_opt_abatement_sav = model_opt.opt_result["Optimised policy vector"]

# update the parameter and run model
m = MimiDICE2013.get_model()
update_param!(m, :MIU,  policy_opt_abatement_sav[1:60])
update_param!(m, :S,    policy_opt_abatement_sav[61:120])
run(m)

# export to CSV
CSV.write("./output/optimized_model_abatement_sav_baseline.csv", DataFrame((years = 2010:5:2305, 
abatement = m[:emissions, :MIU], 
cumulative_welfare = m[:welfare, :CUMCEMUTOTPER], 
savings = m[:neteconomy, :S],
emissions = m[:emissions, :CCA],
production = m[:grosseconomy, :YGROSS],
temperature = m[:climatedynamics, :TATM])))

Hi @tejendrapsingh, sorry for the delay. If the damage quadratic term a2 is set to zero, and therefore, as you say, damages from climate change are always zero regardless of changes in temperature or in the abatement rates that modulate temperature, then I think we would expect the optimizer to come out with an optimal abatement rate of zero, right?

Put another way: if abatement costs money but has no effect on damages (i.e. it does not reduce damages at all), then spending on abatement would never have a positive overall effect, and we would not choose to spend on it. Does that sound right? Maybe setting damages to zero isn’t what you’re looking for in this baseline run?
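
One way to sanity-check this reasoning with the construct_objective from the post above (a sketch; the 0.5 abatement path is arbitrary):

m = MimiDICE2013.get_model()
w_none = construct_objective(m, [zeros(60); fill(0.2, 60)])      # MIU = 0 everywhere
w_half = construct_objective(m, [fill(0.5, 60); fill(0.2, 60)])  # MIU = 0.5 everywhere
# construct_objective sets a2 = 0, so abatement is pure cost and
# w_none should exceed w_half
println("no abatement: $w_none, half abatement: $w_half")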
