
operation-rebalancer pass modifies IR so as to require additional relinearization ops #1284

Open · AlexanderViand-Intel opened this issue Jan 21, 2025 · 9 comments · May be fixed by #1295
Labels: dialect: mgmt (Issues concerning the mgmt dialect)

@AlexanderViand-Intel (Collaborator) commented Jan 21, 2025

The optimizer generally does a good job of pulling relins through adds, but I found a very odd case where it works on ((x*x)+(y*y)) + (p*x) but not on (p*x) + ((x*x)+(y*y)):

```mlir
// heir-opt --mlir-to-secret-arithmetic --secret-insert-mgmt-bgv --optimize-relinearization
func.func @bar(%x: i16 {secret.secret}, %y: i16 {secret.secret}, %p: i16) -> (i16) {
    %xx = arith.muli %x, %x : i16
    %yy = arith.muli %y, %y : i16
    %0 = arith.addi %xx, %yy : i16
    %xp = arith.muli %x, %p : i16
    %1 = arith.addi %0, %xp : i16
    func.return %1 : i16
}
```

This produces a single relin.

```mlir
// heir-opt --mlir-to-secret-arithmetic --secret-insert-mgmt-bgv --optimize-relinearization
func.func @bar(%x: i16 {secret.secret}, %y: i16 {secret.secret}, %p: i16) -> (i16) {
    %xx = arith.muli %x, %x : i16
    %yy = arith.muli %y, %y : i16
    %0 = arith.addi %xx, %yy : i16
    %xp = arith.muli %x, %p : i16
    %1 = arith.addi %xp, %0 : i16
    func.return %1 : i16
}
```

This produces two relins, despite the only change being the order of operands for %1.
@AlexanderViand-Intel added the dialect: mgmt label on Jan 21, 2025
@j2kun (Collaborator) commented Jan 23, 2025

Repro'd. Here's the ILP printed in debug mode: https://gist.github.com/j2kun/5ddf0abe4673014e716e37d59611c743

@j2kun (Collaborator) commented Jan 23, 2025

I realized while studying the ILP that this is actually the fault of the operation-rebalancer pass. It transforms the IR from

```mlir
%1 = arith.muli %input0, %input0 {secretness = true} : i16
%2 = arith.muli %input1, %input1 {secretness = true} : i16
%3 = arith.addi %1, %2 {secretness = true} : i16
%4 = arith.muli %input0, %arg2 {secretness = true} : i16
%5 = arith.addi %4, %3 {secretness = true} : i16
```

to

```mlir
%1 = arith.muli %input0, %input0 {secretness = true} : i16
%2 = arith.muli %input1, %input1 {secretness = true} : i16
%3 = arith.muli %input0, %arg2 {secretness = true} : i16
%4 = arith.addi %3, %1 : i16
%5 = arith.addi %4, %2 : i16
```

And this is what is given to optimize-relinearization, which handles it correctly. Note that the operands of the `%4 = arith.addi %3, %1` op span two different levels: %3 has level 1 while %1 has level 2, so the latter must be relinearized. This in turn causes the following op to have the same problem, which forces %2 to be relinearized as well.
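
To make the cascade concrete, here is a sketch of where the forced relinearizations end up (the mgmt.relinearize placement and syntax below are illustrative, not the pass's verbatim output):

```mlir
%1 = arith.muli %input0, %input0 : i16  // ctxt-ctxt: level 2
%2 = arith.muli %input1, %input1 : i16  // ctxt-ctxt: level 2
%3 = arith.muli %input0, %arg2 : i16    // ctxt-ptxt: level 1
%1r = mgmt.relinearize %1 : i16         // forced: %4 would mix levels 1 and 2
%4 = arith.addi %3, %1r : i16
%2r = mgmt.relinearize %2 : i16         // forced: %5 would mix levels again
%5 = arith.addi %4, %2r : i16
```

In the original ordering, %1 and %2 are added while both are still at level 2, so a single relinearization of their sum covers both squares before the final add.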

@j2kun (Collaborator) commented Jan 23, 2025

Maybe this would not be a problem if there were a cheap way to increase the level without doing a mul op?
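
(For reference, one textbook trick, which as far as I know the mgmt infrastructure does not model today, is to pad the shorter ciphertext with a zero component. Writing decryption as $\mathrm{Dec}(c_0, c_1) = c_0 + c_1 s$,

$$\mathrm{Dec}(c_0, c_1, 0) = c_0 + c_1 s + 0 \cdot s^2 = \mathrm{Dec}(c_0, c_1),$$

so the dimension grows from 2 to 3 at no multiplicative cost.)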

@j2kun changed the title from "Bug/Missed Optimization in --optimize-relinearization" to "operation-rebalancer pass modifies IR so as to require additional relinearization ops" on Jan 23, 2025
@AlexanderViand-Intel (Collaborator, Author) commented
> I realized while studying the ILP that this is actually the fault of the operation rebalancer pass.

Ohh, great catch! The rebalanced output isn't wrong from a pure "multiplicative depth" point of view. I wonder if a simple heuristic to group ctxt-ctxt and ctxt-ptxt muls together as much as possible would be a solution here?
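
Concretely, such a heuristic would steer the rebalancer toward a tree like the following sketch (same SSA names as the "from" listing above, mgmt.relinearize placement illustrative), keeping the two ctxt-ctxt products adjacent:

```mlir
%3 = arith.addi %1, %2 : i16     // both operands at the same (higher) level
%3r = mgmt.relinearize %3 : i16  // one relin covers both squares
%5 = arith.addi %4, %3r : i16
```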

> Maybe this would not be a problem if there were a cheap way to increase the level without doing a mul op?

Mh, good question. Oh, and slightly off-topic, but I thought the mgmt infrastructure calls this "dimension" and reserves "level" for mod-chain related stuff?

@ZenithalHourlyRate (Collaborator) commented

> Maybe this would not be a problem if there were a cheap way to increase the level without doing a mul op?

> Mh, good question. Oh, and slightly off-topic, but I thought the mgmt infrastructure calls this "dimension" and reserves "level" for mod-chain related stuff?

If you have ciphertexts x = (c0, c1, c2) and y = (c0', c1'), you can directly add them to get z = (c0 + c0', c1 + c1', c2), and decryption won't fail.

Moreover, for multiplication, x and y can also be multiplied without the requirement that their "dimension" (in mgmt terms, or "size" in CiphertextSpaceAttr, or "SkDegree"/"basis" from the decryption point of view) be the same.
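
As a sanity check on the degree arithmetic (writing $\mathrm{Dec}(c_0, \dots, c_d) = \sum_i c_i s^i$, with names as in the example above):

$$\mathrm{Dec}(x) + \mathrm{Dec}(y) = (c_0 + c_0') + (c_1 + c_1')\,s + c_2\,s^2,$$

which is exactly $\mathrm{Dec}(z)$, and

$$\mathrm{Dec}(x) \cdot \mathrm{Dec}(y) = \Big(\sum_{i=0}^{2} c_i s^i\Big)\Big(\sum_{j=0}^{1} c_j' s^j\Big)$$

has degree $2 + 1 = 3$, i.e. the product is a dimension-4 ciphertext: dimensions combine as $(d_x - 1) + (d_y - 1) + 1$.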

The requirement that their "size" be the same is enforced by the current BGVOp/CKKSOp verifiers.

I thought about these possibilities when migrating the optimize-relinearization code, but figured it would require significant changes to the ILP model.

Also, I do not know whether OpenFHE/Lattigo support such mixed-dimension add/mul.

@AlexanderViand-Intel (Collaborator, Author) commented

> If you have ciphertexts x = (c0, c1, c2) and y = (c0', c1'), you can directly add them to get z = (c0 + c0', c1 + c1', c2), and decryption won't fail.
>
> Moreover, for multiplication, x and y can also be multiplied without the requirement that their "dimension" (in mgmt terms, or "size" in CiphertextSpaceAttr, or "SkDegree"/"basis" from the decryption point of view) be the same.

> Also, I do not know whether OpenFHE/Lattigo support such mixed-dimension add/mul.

I think this is the key issue: while it's theoretically possible to do this, I'm pretty sure OpenFHE yells at you if you try (though I haven't tested this, and I've been wrong about API limitations before 🙈).

@j2kun (Collaborator) commented Jan 23, 2025

I think the ILP changes would be easy: just remove some constraints that ensure operand args have the same key-basis dimension.
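
A sketch of what that relaxation might look like, assuming the model has an integer variable $\mathrm{kb}_v$ tracking the key-basis degree of each SSA value $v$ (the variable names here are mine, not the actual model's): for an add $r = a + b$, replace the equality constraints

$$\mathrm{kb}_a = \mathrm{kb}_b = \mathrm{kb}_r$$

with

$$\mathrm{kb}_r \ge \mathrm{kb}_a, \qquad \mathrm{kb}_r \ge \mathrm{kb}_b,$$

so the result takes the larger operand basis, while relinearization remains the only way to decrease $\mathrm{kb}$.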

@asraa (Collaborator) commented Jan 23, 2025

> I wonder if a simple heuristic to group ctxt-ctxt and ctxt-ptxt muls together as much as possible would be a solution here?

While I haven't fully read through this issue, I do remember that this was explicitly not implemented in the operation-balancer pass:

```cpp
// TODO (#836): analyze the block to see which operations should be done
// secretly (i.e. under homomorphic encryption) to determine which
// operations are done plaintext and others over ciphertext. This is
// useful because we can then sort the operations in a way that minimizes
// the number of encodings.
```

and that Lawrence had follow-up items to group operands by their ptxt; in issue #836 he mentions: "The code makes an implicit assumption that intermediate operands have the same operation depth as other operands. See comments inside OperationBalancer.cpp for more details for how to improve on this."

@ZenithalHourlyRate (Collaborator) commented

>> Also, I do not know whether OpenFHE/Lattigo support such mixed-dimension add/mul.
>
> I think this is the key issue: while it's theoretically possible to do this, I'm pretty sure OpenFHE yells at you if you try (though I haven't tested this, and I've been wrong about API limitations before 🙈).

I did the experiment and found that both backends are fine with it. Maybe we can start by removing the restriction on "size" in the BGVOp/CKKSOp verifiers.

For OpenFHE, the code is:

```cpp
auto ciphertextMul = cryptoContext->EvalMultNoRelin(ciphertext1, ciphertext2);
std::cout << "Dimension: " << ciphertextMul->GetElements().size() << std::endl;
std::cout << "Dimension: " << ciphertext3->GetElements().size() << std::endl;
auto ciphertextMulAndAdd = cryptoContext->EvalAdd(ciphertextMul, ciphertext3);
std::cout << "Dimension: " << ciphertextMulAndAdd->GetElements().size() << std::endl;
auto ciphertextMulAndMul = cryptoContext->EvalMultNoRelin(ciphertextMul, ciphertext3);
std::cout << "Dimension: " << ciphertextMulAndMul->GetElements().size() << std::endl;
```

The result is:

```
Dimension: 3
Dimension: 2
Dimension: 3
Dimension: 4
```

For Lattigo, the code is:

```go
fmt.Println("v1 degree:", v1.Degree())
mul, _ := v0.MulNew(v1, v2)
fmt.Println("mul degree:", mul.Degree())
mulAndAdd, _ := v0.AddNew(mul, v1)
fmt.Println("mulAndAdd degree:", mulAndAdd.Degree())
mulAndMul, _ := v0.MulNew(mulAndAdd, v1)
fmt.Println("mulAndMul degree:", mulAndMul.Degree())
```

The result is:

```
v1 degree: 1
mul degree: 2
mulAndAdd degree: 2
mulAndMul degree: 3
```
