operation-rebalancer pass modifies IR so as to require additional relinearization ops #1284
Comments
Repro'd. Here's the ILP printed in debug mode: https://gist.github.com/j2kun/5ddf0abe4673014e716e37d59611c743
I realized while studying the ILP that this is actually the fault of the operation rebalancer pass. It transforms the IR from

(original IR listing)

to

(rebalanced IR listing)

and this is what is given to `--optimize-relinearization`.
Maybe this would not be a problem if there were a cheap way to increase the level without doing a mul op?
Ohh, great catch! The rebalanced output isn't wrong from a pure "multiplicative depth" point of view. I wonder if a simple heuristic to group ctxt-ctxt and ctxt-ptxt muls together as much as possible would be a solution here?
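For illustration, here is a minimal self-contained sketch of what such a grouping heuristic could look like, using a toy operand type rather than HEIR's actual OperationBalancer data structures: partition the leaves of a multiplication chain so that ciphertext operands are paired with each other first and plaintext operands are folded in afterwards.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Toy operand model; names and types here are illustrative only and
// are not HEIR's actual data structures.
struct Operand {
  std::string name;
  bool isCiphertext;  // false means plaintext
};

// Reorder the leaves of a multiplication chain so that all ciphertext
// operands come first. A balancer that then builds the tree left-to-right
// pairs ctxt-ctxt muls with each other and folds the ctxt-ptxt muls in
// at the end, instead of interleaving them.
std::vector<Operand> groupCtxtFirst(std::vector<Operand> leaves) {
  std::stable_partition(leaves.begin(), leaves.end(),
                        [](const Operand &o) { return o.isCiphertext; });
  return leaves;
}

int main() {
  std::vector<Operand> leaves = {
      {"p", false}, {"x", true}, {"y", true}, {"z", true}};
  for (const Operand &o : groupCtxtFirst(leaves))
    std::cout << o.name << (o.isCiphertext ? " (ctxt) " : " (ptxt) ");
  std::cout << "\n";  // prints: x (ctxt) y (ctxt) z (ctxt) p (ptxt)
}
```

The point is only that keeping the ctxt-ctxt products adjacent in the balanced tree lets a single relinearization cover the high-dimension intermediate, instead of interleaving ctxt-ptxt muls into it.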
Mh, good question. Oh, and slightly off-topic, but I thought the mgmt infrastructure calls this "dimension" and reserves "level" for mod-chain related stuff?
If you have a ciphertext […]. Moreover, for multiplication, […]. The requirement that their "size" be the same is present in the current BGVOp/CKKSOp verifier. I thought about these possibilities when migrating the optimize-relinearization code, but thought that would require significant changes to the ILP model. Also, I do not know whether OpenFHE/Lattigo support such mixed-dimension add/mul.
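To make the "size" arithmetic concrete, here is a minimal sketch of the usual BGV/CKKS dimension rules (assuming a fresh ciphertext has size 2; this is not HEIR code):

```cpp
#include <algorithm>
#include <iostream>

// Resulting key-basis dimension ("size") of elementwise addition: the
// smaller ciphertext is conceptually padded with zero components.
int addSize(int a, int b) { return std::max(a, b); }

// Resulting size of multiplication without relinearization: the tensor
// product of ciphertexts with a and b components has a + b - 1 components.
int mulSize(int a, int b) { return a + b - 1; }

int main() {
  int fresh = 2;                              // fresh ciphertext
  int prod = mulSize(fresh, fresh);           // 3: ctxt-ctxt mul without relin
  std::cout << addSize(prod, fresh) << "\n";  // 3: mixed-size add
  std::cout << mulSize(prod, fresh) << "\n";  // 4: mixed-size mul
}
```

These are the same numbers that show up in the experiments further down the thread.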
I think this is the key issue: while it's theoretically possible to do this, I'm pretty sure OpenFHE yells at you if you try (though I haven't tested this out, and I've been wrong about the API limitations before 🙈).
I think the ILP changes would be easy: just remove some constraints that ensure operand args have the same key-basis dimension.
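For concreteness, a hedged sketch (with made-up symbols, not HEIR's actual ILP formulation) of the kind of constraint that would go away: writing $d_a$ and $d_b$ for the key-basis dimensions of an op's two operands and $d_o$ for its result, the equality

$$d_a = d_b$$

would be dropped in favor of tracking the result dimension directly, e.g. $d_o = \max(d_a, d_b)$ for additions and $d_o = d_a + d_b - 1$ for multiplications, before any relinearization chosen at that op.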
While I haven't totally read through this issue, I do remember that this was not implemented explicitly in the operation-balancer pass (see heir/lib/Transforms/OperationBalancer/OperationBalancer.cpp, lines 195 to 199 in 96c9775), and that Lawrence had follow-up items to group operands by their ptxt. In issue #836 he mentions: "The code makes an implicit assumption that intermediate operands have the same operation depth as other operands. See comments inside OperationBalancer.cpp for more details for how to improve on this."
I did the experiment and found both backends are fine with it. Maybe we can start by removing the restriction on "size" in the BGVOp/CKKSOp verifier. For OpenFHE, the code is:

```cpp
// ctxt-ctxt multiplication without relinearization
auto ciphertextMul = cryptoContext->EvalMultNoRelin(ciphertext1, ciphertext2);
std::cout << "Dimension: " << ciphertextMul->GetElements().size() << std::endl;
std::cout << "Dimension: " << ciphertext3->GetElements().size() << std::endl;
// mixed-dimension addition
auto ciphertextMulAndAdd = cryptoContext->EvalAdd(ciphertextMul, ciphertext3);
std::cout << "Dimension: " << ciphertextMulAndAdd->GetElements().size() << std::endl;
// mixed-dimension multiplication, still without relinearization
auto ciphertextMulAndMul = cryptoContext->EvalMultNoRelin(ciphertextMul, ciphertext3);
std::cout << "Dimension: " << ciphertextMulAndMul->GetElements().size() << std::endl;
```

The result is:
For Lattigo, the code is:

```go
fmt.Println("v1 degree:", v1.Degree())
// ctxt-ctxt multiplication without relinearization
mul, _ := v0.MulNew(v1, v2)
fmt.Println("mul degree:", mul.Degree())
// mixed-degree addition
mulAndAdd, _ := v0.AddNew(mul, v1)
fmt.Println("mulAndAdd degree:", mulAndAdd.Degree())
// mixed-degree multiplication
mulAndMul, _ := v0.MulNew(mulAndAdd, v1)
fmt.Println("mulAndMul degree:", mulAndMul.Degree())
```

The result is:
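(Correspondingly, with Lattigo's Degree() being the element count minus one: v1 has degree 1, mul degree 2, mulAndAdd degree 2, and mulAndMul degree 3.)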
The optimizer generally does a good job of pulling relins through adds, but I found a very odd case here where it will work on `(p*x) + ((x*x)+(y*y))` but not on `((x*x)+(y*y)) + (p*x)`.