I am currently comparing and contrasting various IAM models. In terms of Mimi, I have been reviewing the input SSP data for GIVE, FAIRv2 and FAIRv1.6.2.
Given that historical information should all be consistent (i.e. regardless of the SSP scenario, the historical emissions data should be the same), it appears that the SSP585 data is not consistent for what Leach et al. call Montreal gases and aerosols. For example, the ch2cl2 Montreal gas data for the 585 scenario differs from the 119, 245 and 370 scenarios. (Note: this all seems to stem from taking the averages from the Leach et al. paper, although, confusingly, the Python replication data (called as such in the MimiFAIRv2 files, GitHub - FrankErrickson/MimiFAIRv2.jl) appear to be consistent.)
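To make the check concrete, here is a minimal sketch of the kind of consistency test described above, using made-up numbers rather than the actual RCMIP values: the historical portion of each SSP scenario's emissions series should be identical, so any scenario that deviates from the others over historical years can be flagged.

```python
# Hypothetical illustration of the historical-consistency check: in
# RCMIP-style emissions data, the historical years of every SSP scenario
# should carry identical values. The scenario labels are real SSP names,
# but the emissions numbers below are invented for the example.
historical = {
    # scenario: ch2cl2-like historical emissions for two sample years
    "ssp119": [0.52, 0.61],
    "ssp245": [0.52, 0.61],
    "ssp370": [0.52, 0.61],
    "ssp585": [0.52, 0.58],  # deviates in a historical year
}

reference = historical["ssp119"]
tolerance = 1e-9

# Flag every scenario whose historical values differ from the reference.
mismatched = [
    scenario
    for scenario, values in historical.items()
    if any(abs(a - b) > tolerance for a, b in zip(values, reference))
]
print(mismatched)  # -> ['ssp585']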
Hi @rmjohnst, thank you for the question! I’m happy to help dig into this a bit and correct any error there might be. I’m going to move the question over to the MimiFAIRv2 repository, since that seems to be where the discrepancy is, and we can carry on the conversation there.
If you wouldn’t mind, could you expand on what you mean by this being due to averaging in Leach et al.? And I assume the files you are pointing to are those in the python_replication_data folder here?
Let me know if there are any discrepancies you want to look at in another model.
Thank you for taking the trouble to look into this.
Yes, you are right: the averages I was talking about (for the Montreal gases and aerosols) are those that appear in python_replication_data and that were, in turn, derived from the annual means in the file C:\Users\rjohn\.julia\packages\MimiFAIRv2\7ivpN\data\python_replication_data\Original Leach et al (rcmip-emissions-annual-means-v5-1-0.csv).
And I will surely be asking more questions about any inconsistencies I discover in the input data for the various models.
P.S. The Mimi Framework is a very impressive piece of work! I am slowly becoming accustomed to the API etc., but even now I am finding it very logical and efficient to work with.
@rmjohnst thank you. I am in touch with my colleagues and we’ll sort out the inputs here. The models on the Mimi platform are developed by a host of users, and many are still in development, especially those that have not been used in a recent publication, so these things can turn up as they are built. For MimiFAIRv2 I also aim to add a Monte Carlo simulation via the Mimi API at some point soon, since it is crucial to run that model in Monte Carlo mode.
The Mimi team hosts the platform, provides technical support and feature improvements, etc., but is not in charge of every model. To get accurate and timely responses, I would generally suggest posting a model-specific question as an Issue on the GitHub repository of the given model and tagging the developers. In this case, of course, several of the MimiFAIRv2 developers happen to be Mimi developers, so this worked as well! I’m just explaining why you may get pointed to the repository if you post package-specific questions here.
I’m glad to hear that the Framework is working well for you!