Port IMPROVER to Python 3 #558
Conversation
Force-pushed f078920 to 05bdd6b, then 9499430 to cd78a8b.
A few initial comments to get you started.
bin/improver-spot-extract
@@ -233,7 +233,7 @@ def main():
     # Loop through the requested diagnostics and load data the essential data
     # and any additional data that is required.
-    for diagnostic_key in diagnostics.keys():
+    for diagnostic_key in list(diagnostics.keys()):
Does this need to be a list?
Done.
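For background on why 2to3 inserts these list() wrappers: in Python 3, dict.keys() returns a live view rather than a list. Iterating the dict directly is fine, but a list copy is only needed when the loop mutates the dict. A minimal sketch (the diagnostics contents here are hypothetical):

```python
diagnostics = {"temperature": 1, "wind_speed": 2}  # hypothetical contents

# Iterating the dict (or its keys() view) directly is fine when the
# dict is not modified inside the loop.
for key in diagnostics:
    pass

# A list() copy is only required when entries are added or removed
# mid-iteration; mutating while iterating the view raises RuntimeError.
for key in list(diagnostics.keys()):
    if key == "wind_speed":
        del diagnostics[key]
```

This is why the reviewer asks "does this need to be a list?" at each of these sites: the loops here only read the dict, so the view can be iterated directly.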
@@ -416,7 +416,7 @@ def __repr__(self):
     """Represent the configured plugin instance as a string."""
     description = ('<WeightedBlendAcrossWholeDimension:'
                    ' coord = {0:s}, weighting_mode = {1:s},'
-                   ' coord_adjust = {2:s}>')
+                   ' coord_adjust = {2}>')
Why has this lost :s when the line above hasn't?
This is because coord_adjust can be None, rather than a string.
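The difference can be demonstrated in isolation; this sketch assumes nothing beyond the str.format behaviour discussed above:

```python
# Under Python 3, '{0:s}' demands an actual str, so a None value
# (as coord_adjust can be) raises TypeError.
try:
    "coord_adjust = {0:s}".format(None)
    outcome = "no error"
except TypeError:
    outcome = "TypeError"

# The bare '{0}' field falls back to str(), so None formats cleanly.
message = "coord_adjust = {0}".format(None)
```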
@@ -457,7 +457,7 @@ def process(self, cube, weights=None):
     if not isinstance(cube, iris.cube.Cube):
         msg = ('The first argument must be an instance of '
                'iris.cube.Cube but is'
-               ' {0:s}.'.format(type(cube)))
+               ' {0:s}.'.format(str(type(cube))))
There are other ways of doing this:
- potentially by removing the :s in the format specifier
- putting !s in the format specifier, which calls str
These aren't necessarily better than your current version; which do you think is best?
I've removed the :s format specifier and removed enforcing type(cube) to be a string.
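A short illustration of the two alternatives the reviewer mentions, using the built-in dict type as a stand-in for type(cube):

```python
# type(cube) is a class object, not a str, so a ':s' spec raises
# TypeError under Python 3; '!s' converts with str() first instead.
try:
    "{0:s}".format(dict)
    spec_result = "no error"
except TypeError:
    spec_result = "TypeError"

plain = "{0}".format(dict)        # bare field falls back to str()
converted = "{0!s}".format(dict)  # explicit str() conversion
```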
bin/improver-threshold
@@ -115,17 +115,17 @@ def main():
     thresholds_from_file = json.load(input_file)
     thresholds = []
     fuzzy_bounds = []
-    for key in thresholds_from_file.keys():
+    for key in list(thresholds_from_file.keys()):
Does this need to be a list?
Done.
@@ -876,7 +876,7 @@ def _apply_params(
     # If the coefficients are not available for the date, use the
     # raw ensemble forecast as the calibrated ensemble forecast.
-    if date not in optimised_coeffs.keys():
+    if date not in list(optimised_coeffs.keys()):
This can be changed to:
if date not in optimised_coeffs:
which is simpler
Done.
lib/improver/spotdata/main.py
     # Establish multiprocessing pool - each diagnostic processed on its
     # own thread.
     diagnostic_pool = mp.Pool(processes=n_diagnostic_threads)

     diagnostic_keys = [
-        diagnostic_name for diagnostic_name in diagnostics.keys()]
+        diagnostic_name for diagnostic_name in list(diagnostics.keys())]
only need diagnostic_keys = list(diagnostics.keys())
Done.
lib/improver/spotdata/main.py
@@ -192,7 +193,7 @@ def run_spotdata(diagnostics, ancillary_data, sites, config_constants,
     # Process diagnostics serially on one thread.
     resulting_cubes = CubeList()
     extrema_cubes = CubeList()
-    for key in diagnostics.keys():
+    for key in list(diagnostics.keys()):
for key in diagnostics:
Done.
@@ -122,7 +122,7 @@ def process(self, cube, sites, ancillary_data,
     """
     if self.method == 'fast_nearest_neighbour':
-        if 'orography' in ancillary_data.keys():
+        if 'orography' in list(ancillary_data.keys()):
just need if 'orography' in ancillary_data
Done.
lib/improver/spotdata/read_input.py
@@ -63,7 +63,7 @@ def get_method_prerequisites(method, diagnostic_data_path):
     }

     additional_diagnostics = {}
-    if method in prereq.keys():
+    if method in list(prereq.keys()):
if method in prereq
Done.
lib/improver/spotdata/site_data.py
@@ -147,9 +147,9 @@ def parse_input(self, site_data):
     """
     latitude_entries = [i_site for (i_site, site) in enumerate(site_data)
-                        if 'latitude' in site.keys()]
+                        if 'latitude' in list(site.keys())]
if 'latitude' in site
same thing below
Done.
@@ -254,7 +257,7 @@ def test_adding_argument_with_defined_kwargs_dict(self):
     parser.add_arguments(args_to_add)
     result_args = parser.parse_args()
-    result_args = vars(result_args).keys()
+    result_args = list(vars(result_args).keys())
I think everywhere in this file where you have something of the form list(vars(result_args).keys()), it can be simplified to just vars(result_args).
Done.
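To illustrate the simplification: vars() on an argparse Namespace returns its attribute dict, which supports membership tests and iteration directly. A minimal sketch with a hypothetical --threshold argument:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--threshold")  # hypothetical argument
args = parser.parse_args(["--threshold", "0.5"])

# vars() returns the Namespace's attribute dict; membership tests and
# iteration work on it directly, no list(...).keys() wrapper needed.
has_threshold = "threshold" in vars(args)
arg_names = list(vars(args))
```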
@@ -311,6 +314,8 @@ class Test_wrong_args_error(unittest.TestCase):
     """Test the wrong_args_error method."""

     @ManageWarnings(
Discussed trying an extra nested while, to try closing the file properly
Done.
@@ -399,7 +429,7 @@ def test_basic(self):
     optimised_coeffs, coeff_names = result
     self.assertIsInstance(optimised_coeffs, dict)
     self.assertIsInstance(coeff_names, list)
-    for key in optimised_coeffs.keys():
+    for key in list(optimised_coeffs.keys()):
for key in optimised_coeffs:
and same below
Done.
@@ -335,6 +363,8 @@ def test_mean_predictor_estimate_coefficients_nans(self):
     current_forecast_predictor = cube.collapsed(
         "realization", iris.analysis.MEAN)
     current_forecast_predictor.data = (
Why is this needed?
This has been added because otherwise you get this error:
test_EstimateCoefficientsForEnsembleCalibration.py:374: MaskedArrayFutureWarning: setting an item on a masked array which has a shared mask will not copy the mask and also change the original mask array in the future.
Check the NumPy 1.11 release notes for more information.
current_forecast_predictor.data[0][0][0] = np.nan
As the array is being specifically set to have NaN value, the masked values have been filled. This is due to a feature of cube.collapsed where the output cube seems to always be a masked array regardless of whether the input cube is a masked array.
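The NumPy behaviour behind that warning can be shown without iris. This sketch only illustrates why filling the masked values first sidesteps the shared-mask issue; the data values are hypothetical:

```python
import numpy as np

# A masked array like the one cube.collapsed produces (hypothetical data).
data = np.ma.masked_array([1.0, 2.0, 3.0], mask=[True, False, False])

# Filling the masked values first yields a plain ndarray, so the later
# NaN assignment cannot trigger the shared-mask MaskedArrayFutureWarning.
filled = data.filled(np.nan)
filled[1] = np.nan
```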
     upper_array = np.full(
-        forecast_at_percentiles[:, 0].shape, bounds_pairing[1])
+        forecast_at_percentiles[:, 0].shape, bounds_pairing[1],
+        dtype=np.int32)
Do you want this to be an int or a float?
For this unit test, I've changed this to be a float, as potentially this is more likely.
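The int-versus-float question comes down to how np.full picks its dtype. A small sketch with hypothetical shape and bound values:

```python
import numpy as np

shape = (3,)  # hypothetical shape and bound values
bound = 2

# Without dtype=, np.full infers the array dtype from the fill value,
# so a float bound gives a float array.
as_float = np.full(shape, float(bound))
as_int32 = np.full(shape, bound, dtype=np.int32)

# An explicit integer dtype silently truncates fractional fill values,
# which is why a float dtype is the safer choice for bounds.
truncated = np.full(shape, 2.7, dtype=np.int32)
```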
@@ -125,7 +125,8 @@ def set_up_cube(zero_point_indices=((0, 0, 7, 7),), num_time_points=1,
     cube.add_dim_coord(
         DimCoord(
-            range(num_realization_points), standard_name='realization'), 0)
+            list(range(num_realization_points)),
This change isn't needed.
Done.
@@ -340,7 +340,7 @@ def find_all_routes(graph, start, end, route=None):
     route = route + [start]
     if start == end:
         return [route]
-    if start not in graph.keys():
+    if start not in list(graph.keys()):
Don't need list
Done.
@@ -394,12 +394,12 @@ def process(self, cubes):
     # Construct graph nodes dictionary
     graph = {key: [self.queries[key]['succeed'], self.queries[key]['fail']]
-             for key in self.queries.keys()}
+             for key in list(self.queries.keys())}
Don't need list
Done.
@@ -65,7 +65,7 @@
     29: 'Thunder_Shower_Day',
     30: 'Thunder'}

-WX_DICT = OrderedDict(sorted(_WX_DICT_IN.items(), key=lambda t: t[0]))
+WX_DICT = OrderedDict(sorted(list(_WX_DICT_IN.items()), key=lambda t: t[0]))
Might not need list
Done.
@@ -81,9 +81,9 @@ def add_wxcode_metadata(cube):
     cube.standard_name = None
     cube.var_name = None
     cube.units = "1"
-    wx_keys = np.array(WX_DICT.keys())
+    wx_keys = np.array(list(WX_DICT.keys()))
Not sure about these lists
I think this list is actually needed, as otherwise the attribute is:
array(odict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]), dtype=object)
rather than
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30])
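This is reproducible with a cut-down dictionary (a two-entry stand-in for WX_DICT):

```python
import numpy as np
from collections import OrderedDict

wx_dict = OrderedDict([(0, "Clear_Night"), (1, "Sunny_Day")])  # stand-in

# Passing the view straight to np.array wraps the whole view in a
# 0-d object array instead of building a numeric array.
as_view = np.array(wx_dict.keys())

# list() materialises the keys first, giving the intended integer array.
as_list = np.array(list(wx_dict.keys()))
```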
@@ -1,4 +1,34 @@
 #!/usr/bin/env bats
 #!/usr/bin/env bats
You have this line twice now
Done.
@@ -165,11 +165,11 @@ def prepare_for_integration(self, cube):
     if self.direction_of_integration == "positive":
         integrated_cube = upper_bounds_cube.copy()
         integrated_cube.coord(self.coord_name_to_integrate).bounds = (
-            zip(lower_bounds, upper_bounds))
+            list(zip(lower_bounds, upper_bounds)))
Not sure if you need these lists in this file or not.
Removing this list causes the error:
ValueError: The shape of the bounds array should be points.shape + (n_bounds,)
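The underlying issue is that Python 3's zip() no longer returns a list. A sketch with hypothetical bounds values:

```python
import numpy as np

lower_bounds = [0, 1, 2]  # hypothetical bounds values
upper_bounds = [1, 2, 3]

# Python 3's zip() is a lazy iterator; NumPy wraps it in a 0-d object
# array instead of building the (points, n_bounds) array that iris's
# bounds shape check expects.
as_iterator = np.array(zip(lower_bounds, upper_bounds))

# list() materialises the pairs, restoring the Python 2 shape.
as_pairs = np.array(list(zip(lower_bounds, upper_bounds)))
```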
Force-pushed cd78a8b to 3aabaf8.
… -E pass. Unit tests pass, although a few tests still generate warnings, which needs further investigation. CLI tests need further work, as all the known good output will change, as a result of an upgrade to the hdf5 version that is recorded within each netcdf file.
… Argparse unclosed file warning is now ignored, however, an unclosed socket warning will sometimes remains.
…ng that seems to arise based on the interaction between the ensemble calibration and the spot database tests.
… the ManageWarnings utility is only used within the unit tests.
…yCoefficientsForEnsembleCalibration.py and test_SpotDatabase.py, I've ignored Deprecation Warnings, which I think are coming from pandas that are causing Deprecation Warnings to be raised locally.
…leCalibration.py are trapped and other causes of errors are resolved.
…idoc. It seems like Sphinx should support the use of sphinx.apidoc, rather than sphinx.ext.apidoc, however, it seems like something about this interface doesn't seem to be working.
Force-pushed 44b4119 to df46464.
… and removal of the use of the warnings handler within test_ArgParser.py.
…nary, as this syntax is more explicit.
Looking good now :)
Tests pass
I have checked through all the changes and they generally look good. I've made a few comments and have a couple of questions about the changes. Once they have been addressed I think I'm happy with it. I've run all the unit and CLI tests and they all seem fine, which is good!
"\nThe final iteration resulted in a percentage " | ||
"change that is greater than the" | ||
" accepted threshold "] | ||
WARNING_TYPES = [UserWarning, ImportWarning, FutureWarning, RuntimeWarning, |
Is there a reason there are multiples of UserWarning, and ImportWarning?
The reason that there are multiple UserWarning and ImportWarning is because these need to match up with the messages in the IGNORED_MESSAGES list i.e.:
| IGNORED_MESSAGES | WARNING_TYPES |
| --- | --- |
| "Collapsing a non-contiguous coordinate." | UserWarning |
| "Not importing directory .*sphinxcontrib'" | ImportWarning |
| "The pandas.core.datetools module is deprecated" | FutureWarning |
Therefore there needs to be the same number of warnings as there are IGNORED_MESSAGES, so that's why it looks like the warnings are repeated, as the different messages are associated with different types of warning. The default warning is a UserWarning, so in some instances, no WARNING_TYPES list is provided, if all of the messages are UserWarnings.
You're right that it's not obvious that these two lists need to be the same length. Maybe a dictionary would be an alternative interface e.g.
MESSAGES_AND_WARNINGS = {
"Collapsing a non-contiguous coordinate.": UserWarning,
"Not importing directory .*sphinxcontrib'": ImportWarning,
...
}
or a list of tuples:
MESSAGES_AND_WARNINGS = [(UserWarning, "Collapsing a non-contiguous coordinate."), (ImportWarning, "Not importing directory .*sphinxcontrib'"), ...]
No I don't think it is obvious, so it might be useful if it was easier to understand that is the reason there are multiple similar warnings? You could just put a comment, or as you suggested do a dictionary?
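A sketch of the suggested dictionary interface, wired into the standard warnings module directly; the message strings come from the discussion above, and the wiring here is illustrative rather than the actual ManageWarnings implementation:

```python
import warnings

# Each ignored message (a regex matched against the warning text) is
# paired explicitly with its category, as suggested above.
MESSAGES_AND_WARNINGS = {
    "Collapsing a non-contiguous coordinate.": UserWarning,
    "The pandas.core.datetools module is deprecated": FutureWarning,
}

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for message, category in MESSAGES_AND_WARNINGS.items():
        warnings.filterwarnings("ignore", message=message, category=category)

    # The first warning matches an ignore filter; the second does not.
    warnings.warn("Collapsing a non-contiguous coordinate.", UserWarning)
    warnings.warn("some other warning", UserWarning)
```

With the pairing made explicit in one structure, the two parallel lists can no longer drift out of sync.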
@@ -49,9 +49,23 @@
     _create_historic_forecasts, _create_truth)
 from improver.utilities.warnings_handler import ManageWarnings
This is super picky but you could probably just delete this extra line?
Done.
"\nThe final iteration resulted in a percentage " | ||
"change that is greater than the" | ||
" accepted threshold "] | ||
WARNING_TYPES = [UserWarning, ImportWarning, FutureWarning, RuntimeWarning, |
Again do you need multiples of UserWarning and ImportWarning?
"\nThe final iteration resulted in a percentage " | ||
"change that is greater than the" | ||
" accepted threshold "], | ||
warning_types=[UserWarning, ImportWarning, FutureWarning, |
Do you need multiples of UserWarning and ImportWarning?
@@ -137,7 +165,7 @@ def test_statsmodels_members(self, warning_list=None):
     plugin = Plugin(distribution, desired_units,
                     predictor_of_mean_flag=predictor_of_mean_flag)
     self.assertTrue(len(warning_list) == 1)
-    self.assertTrue(any(item.category == UserWarning
+    self.assertTrue(any(item.category == ImportWarning
Why has this changed from UserWarning to ImportWarning?
         Plugin(method, self.data_directory).process(self.cube)

     def test_invalid_output_path(self):
Do we not need this test anymore?
The reason that I removed this test was because it was causing a number of low-level warnings that looked like they were coming from the underlying C code:
.HDF5-DIAG: Error detected in HDF5 (1.10.1) thread 140639791306496:
#000: H5F.c line 491 in H5Fcreate(): unable to create file
major: File accessibilty
minor: Unable to open file
#001: H5Fint.c line 1247 in H5F_open(): unable to open file: time = Fri Jun 1 08:45:57 2018
, name = '/test_data.nc', tent_flags = 13
major: File accessibilty
minor: Unable to open file
#002: H5FD.c line 809 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: H5FDsec2.c line 346 in H5FD_sec2_open(): unable to open file: name = '/test_data.nc', errno = 13, error message = 'Permission denied', flags = 13, o_flags = 242
major: File accessibilty
minor: Unable to open file
I don't think there would be a way of ignoring these messages. Also, I thought that this test was a bit unusual because essentially it's testing what happens when you try to save a file to the root directory, so it's more of a test of the file system, rather than plugin. Therefore, I thought that the simplest option would be to remove this test.
Ok, that sounds fine to me.
* Modifications to upgrade IMPROVER to Python 3. Pycodestyle and pylint -E pass. Unit tests pass, although a few tests still generate warnings, which needs further investigation. CLI tests need further work, as all the known good output will change as a result of an upgrade to the hdf5 version that is recorded within each netcdf file.
* Edits to ignore relevant warnings arising when importing statsmodels. The argparse unclosed file warning is now ignored; however, an unclosed socket warning will sometimes remain.
* Update python version used in Travis.
* Further minor edits for warnings. There is still one DeprecationWarning that seems to arise from the interaction between the ensemble calibration and the spot database tests.
* Reconfigure the managing of warnings for ensemble calibration, so that the ManageWarnings utility is only used within the unit tests.
* Make unit test output verbose.
* As a result of some interaction between the warnings within test_ApplyCoefficientsForEnsembleCalibration.py and test_SpotDatabase.py, I've ignored DeprecationWarnings, which I think are coming from pandas and are causing DeprecationWarnings to be raised locally.
* Edits to ensure warnings raised in test_EstimateCoefficientsForEnsembleCalibration.py are trapped and other causes of errors are resolved.
* Print statement to try to debug Travis error.
* Modify warnings that are managed by test_statsmodels_members.
* Remove verbosity from unit test output.
* Sphinx documentation fix, as sphinx.apidoc has moved to sphinx.ext.apidoc. It seems like Sphinx should support the use of sphinx.apidoc rather than sphinx.ext.apidoc, but something about this interface doesn't seem to be working.
* Remove unused import of ManageWarnings.
* Include an explicit closing of the file that is opened within the setUpClass.
* Removal of unnecessary lists added by the 2to3 tool.
* Further edits to remove unnecessary conversion of variables to a list, and removal of the use of the warnings handler within test_ArgParser.py.
* Edits to remove enforcing of string formatting.
* Update to the type of error raised in CLI tests for a missing land mask.
* Re-added usage of keys() when iterating over the keys within a dictionary, as this syntax is more explicit.
* Further edits to explicitly access the keys of a dictionary, where appropriate.
* Removed extra blank line.
* latest: (1367 commits)
  * Move vicinity processing to threshold CLI (metoppv#581)
  * Refactored to allow for separate filling functions (metoppv#587)
  * Add warning if too many zero values for advection velocities (metoppv#585)
  * Impro 621 improve snow (metoppv#586)
  * metoppv#442: Add profiling support to CLIs (metoppv#554)
  * Update readthedocs config to include netcdftime
  * Add test for licence information (metoppv#582)
  * IMPRO-639: GenerateProbabilitiesFromMeanAndVariance plugin (metoppv#578)
  * Removal of renaming of coordinates within ensemble calibration, as this functionality is not used, as no input is provided with ensemble_member_id or ensemble_realization_id as a coordinate. (metoppv#579)
  * Add coverage badge to README
  * Try out coverage (metoppv#565)
  * Impro-672 use realizations rather than members (metoppv#577)
  * Modifications to allow neighbourhood processing of complex numbers (metoppv#570)
  * Add support for providing a verbose option for the unit tests. (metoppv#569)
  * IMPRO-618: Grid to grid interpolation (metoppv#566)
  * IMPRO-212: A faster percentile method. (metoppv#568)
  * Impro 624 add coordinate collapse (metoppv#567)
  * Add Python version to README.md
  * Port IMPROVER to Python 3 (metoppv#558) (metoppv#564)
  * Reordered KGO creation call position so it gets made even when test fails. (metoppv#562)
  * ...
Addresses #488
Description:
Porting to Python 3.
Pycodestyle, pylint -E and unit tests now seem to pass. Building the documentation works on Travis but not locally, although I haven't investigated this so far.
All CLI tests that don't require known good output pass. CLI tests that do require known good output currently all fail due to a change to hdf5libversion within each file, so all known good output needs regenerating; this can be done now that #538 has been merged. Note that this is a pull request to the Python 3 feature branch.
Testing: