
[WIP] With References check for equivalent sets instead of identical Set objects #1243

Closed
wants to merge 2 commits

Conversation

blnicho
Member

@blnicho blnicho commented Jan 7, 2020

Fixes #905

Summary/Motivation:

I think this will resolve an issue where, for some types of models that use both Pyomo.DAE and Pyomo.Network, the discretization and arc_expansion transformations had to be applied in a particular order. This PR makes a very small change in reference.py to how indexing sets are compared when iterating over the components in a slice: instead of checking whether the same Set objects are used, it checks whether equivalent sets are used.

@andrewlee94 could you check if this change resolves the ordering problem for IDAES models?

Changes proposed in this PR:

  • check for equivalent sets when creating Reference objects

Legal Acknowledgement

By contributing to this software project, I agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@blnicho blnicho requested a review from jsiirola January 7, 2020 00:53
@@ -410,7 +410,7 @@ def _identify_wildcard_sets(iter_stack, index):
             if len(index[i]) != len(level):
                 return None
             # if any subset differs
-            if any(index[i].get(j,None) is not _set for j,_set in iteritems(level)):
+            if any(index[i].get(j,None) != _set for j,_set in iteritems(level)):
Member
Can you comment on the need for this change? The new code will be slower (the "is" check is an O(1) operation, whereas "!=" is in general O(n)). Also, by not using "is", something like the following:

m = ConcreteModel()
m.b = Block([1,2])
m.I = Set(initialize=[1,2,3])
m.J = Set(initialize=[1,2,3])
m.b[1].x = Var(m.I)
m.b[2].x = Var(m.J)
s = m.b[:].x[:]

will identify wildcard sets for s as [m.b_index, m.I], even though m.b[2].x is indexed by m.J and not m.I. When we originally wrote this logic, the idea was to only return wildcard sets if the indexing sets were demonstrably "known".
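The identity-vs-equality distinction at the heart of this diff can be seen with plain Python objects. This is a toy sketch, not Pyomo code; ToySet is a hypothetical stand-in for a Pyomo Set object:

```python
# Toy sketch: ToySet is a hypothetical stand-in for a Pyomo Set object,
# used only to illustrate identity vs. element-wise equality.
class ToySet:
    def __init__(self, members):
        self.members = tuple(members)

    def __eq__(self, other):
        # O(n): walks every member before answering
        return isinstance(other, ToySet) and self.members == other.members

    def __hash__(self):
        return hash(self.members)

I = ToySet([1, 2, 3])
J = ToySet([1, 2, 3])

print(I is J)  # False: distinct objects (the original O(1) check)
print(I == J)  # True: equivalent members (the proposed O(n) check)
```

Under the original "is" check, I and J above would never be treated as the same wildcard set; under the proposed "!=" check, they would.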

Member Author

This fixes the toy model in issue #905 where a model containing a Block constructed using a rule like the one below had to have the discretization transformation come before the arc_expansion transformation.

m = ConcreteModel()

m.s = ContinuousSet(initialize=[0,10])

def block_rule(b, i):
    b.v = Var(['a','b'], initialize=1)
    return

m.b1 = Block(m.s, rule=block_rule)
m.b2 = Block(m.s, rule=block_rule)

m.p1 = Port()
m.p2 = Port()

m.r1 = Reference(m.b1[:].v[:])

The problem is that b1[0].v and b1[10].v are not identified as having the same indexing set, because a new Set is implicitly created from the values of the Python list for each block. This obscures the ContinuousSet index in the Reference component r1, so the indexed equality constraint eventually added by the arc_expansion transformation is not explicitly indexed by a ContinuousSet and is therefore not expanded by the discretization transformation.

I realize this change violates the original logic, which requires the user to be much more explicit when declaring their indexing sets. However, I also think that the implementation above is pretty common, and most users would expect b1[0].v and b1[10].v to be identified as having the same indexing set (they shouldn't have to know that a Set object is being created for them implicitly).

It's possible that there are edge cases where, if we were creating References for Params or Constraints (which aren't necessarily dense), checking "is not" vs. "!=" leads to more or less intuitive behavior, but for Vars I think it makes sense to check whether all the elements of the sets are the same.
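The effect of the one-line diff can be mimicked with plain Python. This is a mock of the comparison in _identify_wildcard_sets, not the actual reference.py code; frozenset stands in for the implicitly created Set objects:

```python
# Hypothetical mock of the comparison in _identify_wildcard_sets:
# `level` maps wildcard positions to the set seen first, and `index`
# holds the sets from the current component. frozenset stands in for
# the Set objects implicitly created from a Python list.
level = {0: frozenset(['a', 'b'])}  # set seen for b1[0].v
index = {0: frozenset(['a', 'b'])}  # equal-valued set for b1[10].v

# Original check: identity -- reports a mismatch because the two
# frozensets are distinct objects
mismatch_by_identity = any(index.get(j) is not s for j, s in level.items())

# Proposed check: equivalence -- reports no mismatch because the
# members are the same
mismatch_by_equality = any(index.get(j) != s for j, s in level.items())

print(mismatch_by_identity, mismatch_by_equality)  # True False
```

In the identity version the wildcard identification bails out and the ContinuousSet index is lost; in the equivalence version the common index survives.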

A couple other ideas for solving this problem of transformation ordering are:

  1. In the discretization transformation, check component indexing sets for the SetOf type and automatically try to expand the component if it is found (since it might contain an obscured ContinuousSet). I'm worried this could lead to strange side effects if the user does anything unusual, such as adding elements to non-ContinuousSets.

  2. Make References or SetOf aware of ContinuousSets - Seems weird to make core code aware of something in Pyomo.DAE

  3. Modify the wildcard identification functionality to preserve any indexing sets that are identified as the same even if it encounters ones that aren't the same at a different level - I'm not sure how hard this would be to implement

@jsiirola jsiirola (Member) commented Jan 7, 2020

Let me add another option:

  4. Modify the wildcard identification so that it handles SetOf differently (that is, use "is" for Set and "==" for SetOf).

That said, I think I like option 3. Looking at the wildcard identification, I suspect that we could support a partial identification without too much hassle. The challenge is that we would be potentially creating several pseudoset objects, and those objects would not be attached to the model. Given that I am in favor of moving toward not requiring all set objects to be attached to the model (see #45), I don't see too much of a problem going down that path. However, I think we can only handle "floating sets" correctly after we switch over to using the set rewrite (#326) by default.
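The type-dependent comparison suggested above could look roughly like the following. DeclaredSet and ImplicitSetOf are hypothetical stand-ins for Pyomo's Set and SetOf components, not the actual API:

```python
# Sketch of the suggested option (hypothetical names, not the Pyomo API):
# compare explicitly declared sets by identity, but implicitly created
# SetOf-style wrappers by value.
class DeclaredSet:
    """Stand-in for an explicitly declared Set."""
    def __init__(self, members):
        self.members = frozenset(members)

class ImplicitSetOf(DeclaredSet):
    """Stand-in for a SetOf implicitly created from a Python list."""
    pass

def sets_match(a, b):
    # "==" only for implicitly created sets; "is" everywhere else
    if isinstance(a, ImplicitSetOf) and isinstance(b, ImplicitSetOf):
        return a.members == b.members
    return a is b

I = DeclaredSet([1, 2, 3])
J = DeclaredSet([1, 2, 3])
s1 = ImplicitSetOf(['a', 'b'])
s2 = ImplicitSetOf(['a', 'b'])

print(sets_match(I, J))    # False: declared sets must be the same object
print(sets_match(s1, s2))  # True: implicit sets compare by value
```

This would keep the "demonstrably known" behavior for user-declared sets while still matching the implicitly created sets from the toy model in #905.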

@andrewlee94

@blnicho This appears to fix the problem. I just tried running the IDAES Tutorial 3 under both your branch and the IDAES tag (or something close to it).

With your branch, the order of the transformations did not matter, whilst on master it did.


codecov bot commented Jan 7, 2020

Codecov Report

Merging #1243 into master will increase coverage by 0.18%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1243      +/-   ##
==========================================
+ Coverage   70.83%   71.01%   +0.18%     
==========================================
  Files         475      534      +59     
  Lines       76879    81767    +4888     
==========================================
+ Hits        54457    58067    +3610     
- Misses      22422    23700    +1278     
Impacted Files Coverage Δ
pyomo/core/base/reference.py 97.50% <100.00%> (-0.42%) ⬇️
...contrib/pynumero/interfaces/nlp_transformations.py 50.00% <0.00%> (-1.04%) ⬇️
examples/mpec/munson1c.py 100.00% <0.00%> (ø)
pyomo/contrib/pynumero/examples/feasibility.py 0.00% <0.00%> (ø)
pyomo/pysp/plugins/ecksteincombettesextension.py 0.00% <0.00%> (ø)
pyomo/contrib/pynumero/examples/derivatives.py 0.00% <0.00%> (ø)
examples/pysp/baa99/ReferenceModel.py 100.00% <0.00%> (ø)
pyomo/solvers/wrappers.py 0.00% <0.00%> (ø)
pyomo/core/base/alias.py 0.00% <0.00%> (ø)
examples/pyomo/benders/subproblem.py 100.00% <0.00%> (ø)
... and 108 more


Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 642487c...20d8ac2.

@blnicho blnicho changed the title With References check for equivalent sets instead of identical Set objects [WIP] With References check for equivalent sets instead of identical Set objects Jan 15, 2020
@blnicho
Member Author

blnicho commented May 22, 2020

We decided not to go with the solution proposed in this PR, and it is no longer a high priority, so I'm going to close it.

@blnicho blnicho closed this May 22, 2020
Development

Successfully merging this pull request may close these issues.

DAE transformation and Arc expansion are order dependent
4 participants