[WIP] With References check for equivalent sets instead of identical Set objects #1243
Conversation
```diff
@@ -410,7 +410,7 @@ def _identify_wildcard_sets(iter_stack, index):
         if len(index[i]) != len(level):
             return None
         # if any subset differs
-        if any(index[i].get(j,None) is not _set for j,_set in iteritems(level)):
+        if any(index[i].get(j,None) != _set for j,_set in iteritems(level)):
```
Can you comment on the need for this change? The new code will be slower (`is` is an O(1) operation, whereas `!=` is in general O(n)). Also, by not using `is`, something like the following:

```python
m.b = Block([1,2])
m.I = Set(initialize=[1,2,3])
m.J = Set(initialize=[1,2,3])
m.b[1].x = Var(m.I)
m.b[2].x = Var(m.J)
s = m.b[:].x[:]
```

will identify the wildcard sets for `s` as `[m.b_index, m.I]`, even though `m.b[2].x` is indexed by `m.J` and not `m.I`. When we originally wrote this logic, the idea was to only return wildcard sets if the indexing sets were demonstrably "known".
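To illustrate the distinction at the heart of this discussion (a plain-Python sketch, not Pyomo code): two containers built from the same values pass an `==` equality check but fail an `is` identity check, and only the identity check is constant-time.

```python
# Two frozensets constructed from the same values: distinct objects,
# equal contents.
a = frozenset([1, 2, 3])
b = frozenset([1, 2, 3])

# Identity: a single pointer comparison, O(1); distinct objects fail it.
print(a is b)   # False

# Equality: element-wise comparison, O(n) in general; equal contents pass.
print(a == b)   # True
```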
This fixes the toy model in issue #905, where a model containing a Block constructed with a rule like the one below had to have the discretization transformation applied before the arc_expansion transformation.

```python
m = ConcreteModel()
m.s = ContinuousSet(initialize=[0,10])
def block_rule(b, i):
    b.v = Var(['a','b'], initialize=1)
    return
m.b1 = Block(m.s, rule=block_rule)
m.b2 = Block(m.s, rule=block_rule)
m.p1 = Port()
m.p2 = Port()
m.r1 = Reference(m.b1[:].v[:])
```

The problem is that `b1[0].v` and `b1[10].v` are not identified as having the same indexing set, because a new Set is implicitly created from the values of the Python list each time the rule fires. This obscures the ContinuousSet index in the Reference component `r1`, so the indexed equality constraint eventually added by the arc_expansion transformation is not explicitly indexed by a ContinuousSet and is therefore not expanded by the discretization transformation.

I realize this change violates the original logic, which requires the user to be much more explicit when declaring their indexing sets, but I also think the implementation above is pretty common, and most users would expect `b1[0].v` and `b1[10].v` to be identified as having the same indexing set (they shouldn't have to know that a Set object is being created for them implicitly).

It's possible there are edge cases where, if we were creating References for Params or Constraints (which aren't necessarily dense), checking `is not` vs. `!=` leads to more or less intuitive behavior, but for Vars I think it makes sense to check whether all the elements of the sets are the same.
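A simplified, hypothetical model of the comparison being changed (plain Python standing in for the Pyomo internals; `matches_level` and the dict-of-sets layout are illustrative, not the real `_identify_wildcard_sets` signature):

```python
def matches_level(index_level, level):
    """Return True if every subset in `level` has an equivalent
    (not necessarily identical) counterpart in `index_level`."""
    if len(index_level) != len(level):
        return False
    # The PR swaps an identity test (`is not`) for value equality (`!=`),
    # so two sets with the same elements now count as a match.
    return not any(index_level.get(j, None) != s for j, s in level.items())

# Two implicitly-created index sets with the same members:
level_a = {0: frozenset(['a', 'b'])}
level_b = {0: frozenset(['a', 'b'])}   # equal contents, distinct object

print(matches_level(level_a, level_b))  # True under `!=`; an identity check would fail
```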
A couple other ideas for solving this problem of transformation ordering are:

1. In the discretization transformation, check component indexing sets for the `SetOf` type and automatically try to expand the component if one is found (since it might contain an obscured ContinuousSet). I'm worried this could lead to weird side-effects if the user does anything unusual, like adding elements to non-ContinuousSets.
2. Make References or `SetOf` aware of ContinuousSets. It seems weird to make core code aware of something in Pyomo.DAE.
3. Modify the wildcard identification functionality to preserve any indexing sets that are identified as the same, even if it encounters ones that aren't the same at a different level. I'm not sure how hard this would be to implement.
Let me add another option:

4. Modify the wildcard identification so that it handles `SetOf` differently (that is, use `is` for `Set` and `==` for `SetOf`).
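Option 4 could be sketched roughly like this (pure Python with stand-in `Set`/`SetOf` classes; the real Pyomo types behave differently, so this only illustrates the type-dispatched comparison idea):

```python
class Set:
    """Stand-in for an explicitly declared Set component."""
    def __init__(self, members):
        self.members = frozenset(members)

class SetOf(Set):
    """Stand-in for an implicitly created SetOf component."""
    pass

def sets_match(a, b):
    # Use value equality only when both sides are implicit SetOf
    # objects; require object identity for explicit Sets.
    if isinstance(a, SetOf) and isinstance(b, SetOf):
        return a.members == b.members
    return a is b

i, j = Set([1, 2, 3]), Set([1, 2, 3])
p, q = SetOf(['a', 'b']), SetOf(['a', 'b'])
print(sets_match(i, j))  # False: distinct explicit Sets must be identical
print(sets_match(p, q))  # True: equivalent SetOf objects match by value
```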
That said, I think I like option 3. Looking at the wildcard identification, I suspect that we could support a partial identification without too much hassle. The challenge is that we would be potentially creating several pseudoset objects, and those objects would not be attached to the model. Given that I am in favor of moving toward not requiring all set objects to be attached to the model (see #45), I don't see too much of a problem going down that path. However, I think we can only handle "floating sets" correctly after we switch over to using the set rewrite (#326) by default.
@blnicho This appears to fix the problem. I just tried running the IDAES Tutorial 3 under both your branch and the IDAES tag (or something close to it). With your branch, the order of the transformations did not matter, whilst on master it did.
Codecov Report

```
@@            Coverage Diff             @@
##           master    #1243      +/-   ##
==========================================
+ Coverage   70.83%   71.01%   +0.18%
==========================================
  Files         475      534      +59
  Lines       76879    81767    +4888
==========================================
+ Hits        54457    58067    +3610
- Misses      22422    23700    +1278
```

Continue to review the full report at Codecov.
We decided not to go with the solution proposed in this PR, and it is no longer a high priority, so I'm going to close it.
Fixes #905
Summary/Motivation:
I think this will resolve an issue where, for some models using both Pyomo.DAE and Pyomo.Network, the discretization and arc_expansion transformations had to be applied in a particular order. This PR makes a very small change in `reference.py` to how indexing sets are compared when iterating over the components in a slice: instead of checking whether the same Set objects are used, it checks that equivalent sets are used.

@andrewlee94 could you check if this change resolves the ordering problem for IDAES models?
Changes proposed in this PR:
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: