Address review comments
Signed-off-by: Sven Trieflinger <[email protected]>
strieflin committed Mar 8, 2023
1 parent d13c2ae commit 7bd3d38
Showing 13 changed files with 109 additions and 31 deletions.
23 changes: 23 additions & 0 deletions .github/workflows/auto-labeler.yaml
@@ -0,0 +1,23 @@
#
# Copyright (c) 2023 - for information on the respective copyright owner
# see the NOTICE file and/or the repository https://github.com/carbynestack/klyshko.
#
# SPDX-License-Identifier: Apache-2.0
#
name: 'Auto Labeler'

on:

# Allow workflow to be triggered manually
workflow_dispatch:

issues:
types:
- reopened
- opened

jobs:

# re-usable workflow @see https://docs.github.com/en/actions/using-workflows/reusing-workflows
auto-labeling:
uses: carbynestack/.github/.github/workflows/auto-labeler.yml@master
22 changes: 22 additions & 0 deletions .github/workflows/stale-issue-cleanup.yaml
@@ -0,0 +1,22 @@
#
# Copyright (c) 2023 - for information on the respective copyright owner
# see the NOTICE file and/or the repository https://github.com/carbynestack/klyshko.
#
# SPDX-License-Identifier: Apache-2.0
#
name: 'Stale Issue Cleanup'

on:

# Allow workflow to be triggered manually
workflow_dispatch:

# Trigger at 1:00 AM each day
schedule:
- cron: '0 1 * * *'

jobs:

# re-usable workflow @see https://docs.github.com/en/actions/using-workflows/reusing-workflows
stale-issue-cleanup:
uses: carbynestack/.github/.github/workflows/stale-actions.yml@master
35 changes: 29 additions & 6 deletions README.md
@@ -1,6 +1,5 @@
# Carbyne Stack Klyshko Correlated Randomness Generation

[![stability-wip](https://img.shields.io/badge/stability-wip-lightgrey.svg)](https://github.com/mkenney/software-guides/blob/master/STABILITY-BADGES.md#work-in-progress)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/3a07fd83b67647138b8ea660d16cdc35)](https://www.codacy.com/gh/carbynestack/klyshko/dashboard?utm_source=github.com&utm_medium=referral&utm_content=carbynestack/klyshko&utm_campaign=Badge_Grade)
[![codecov](https://codecov.io/gh/carbynestack/klyshko/branch/master/graph/badge.svg?token=6hRb7xRW6C)](https://codecov.io/gh/carbynestack/klyshko)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
@@ -46,7 +45,7 @@ Klyshko consists of three main components:
generated.
- A *Task* (kind: `TupleGenerationTask`) represents a local or remote
execution of a CRG. A task exposes the state of the invocation on a single
VCP. On the job level task states are aggregated into a job state. Remote
VCP. On the job level, task states are aggregated into a job state. Remote
tasks are proxied locally to make their state available to the job
controller. The task controller makes use of the
[Klyshko Integration Interface (KII)](#klyshko-integration-interface-kii) to
@@ -62,6 +61,26 @@ to orchestrate actions across VCPs.

To deploy Klyshko to your VC, perform the following steps:

### Provide VCP Configuration

> **NOTE**: This is a workaround until the Carbyne Stack Operator (see
> [CSEP-0053](https://github.com/carbynestack/carbynestack/pull/54)) is
> available.

Klyshko needs to know the overall number of VCPs in the VC and the zero-based
index of the local VCP. This is done by creating a configuration map with the
following content:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cs-vcp-config
data:
playerCount: <<NUMBER-OF-VCPS>>
playerId: <<ZERO-BASED-INDEX-OF-LOCAL-VCP>>
```
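For example, for the first player of a 2-party VC, the configuration map could be rendered and applied as follows (a sketch; the player values and namespace are illustrative, and note that ConfigMap `data` values must be strings, hence the quoting):

```shell
# Render the VCP configuration; adjust playerCount/playerId per VCP.
cat > cs-vcp-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cs-vcp-config
data:
  playerCount: "2"
  playerId: "0"
EOF

# Apply it to the namespace the Klyshko operator is deployed into, e.g.:
#   kubectl apply -f cs-vcp-config.yaml -n default
grep -q 'playerId' cs-vcp-config.yaml && echo "config rendered"
```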

### Install the operator

The Klyshko operator can be deployed either by building from source or by using
@@ -236,7 +255,12 @@ These are made available to CRGs by the Klyshko runtime as files in folder
The `deploy.sh` scripts in the `hack` folders (top-level and within modules) can
be used to (re-)deploy Klyshko to a 2-party Carbyne Stack VC setup as described
in the [tutorials](https://carbynestack.io/getting-started) on the Carbyne Stack
website.
website. To trigger (re-)deployment, the top-level script must be called from
the project root folder using

```shell
./hack/deploy.sh
```

### Logging

@@ -274,9 +298,8 @@ We use the following logging level convention in the Klyshko code base.

## License

Carbyne Stack *Klyshko Correlated Randomness Generation Subsystem* is
open-sourced under the Apache License 2.0. See the [LICENSE](LICENSE) file for
details.
Carbyne Stack *Klyshko Correlated Randomness Generation Service* is open-sourced
under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.

### 3rd Party Licenses

9 changes: 0 additions & 9 deletions klyshko-operator/PROJECT
@@ -7,15 +7,6 @@ plugins:
projectName: klyshko
repo: github.com/carbynestack/klyshko
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: carbnyestack.io
group: klyshko
kind: Job
path: github.com/carbynestack/klyshko/api/v1alpha1
version: v1alpha1
- api:
crdVersion: v1
namespaced: true
1 change: 1 addition & 0 deletions klyshko-operator/api/v1alpha1/groupversion_info.go
@@ -7,6 +7,7 @@ SPDX-License-Identifier: Apache-2.0

//+kubebuilder:object:generate=true
//+groupName=klyshko.carbnyestack.io

package v1alpha1

import (
@@ -37,6 +37,7 @@ type TupleGenerationSchedulerStatus struct {
}

//+kubebuilder:object:root=true
//+kubebuilder:resource:shortName=tgs;tgscheduler
//+kubebuilder:subresource:status

// TupleGenerationScheduler is the Schema for the TupleGenerationScheduler API.
1 change: 1 addition & 0 deletions klyshko-operator/api/v1alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

8 changes: 4 additions & 4 deletions klyshko-operator/castor/castor_client_test.go
@@ -27,7 +27,7 @@ var _ = Describe("Fetching telemetry", func() {
httpmock.DeactivateAndReset()
})

When("when Castor services responds with expected JSON data", func() {
When("when Castor service responds with expected JSON data", func() {

expectedTelemetry := Telemetry{TupleMetrics: []TupleMetrics{
{
@@ -55,7 +55,7 @@ var _ = Describe("Fetching telemetry", func() {
})
})

When("when Castor services responds with non-success status code", func() {
When("when Castor service responds with non-success status code", func() {
BeforeEach(func() {
httpmock.Activate()
httpmock.RegisterResponder(
@@ -89,7 +89,7 @@ var _ = Describe("Activating a tuple chunk", func() {
httpmock.DeactivateAndReset()
})

When("when Castor services responds with success status code", func() {
When("when Castor service responds with success status code", func() {
BeforeEach(func() {
httpmock.Activate()
httpmock.RegisterResponder(
@@ -106,7 +106,7 @@ })
})
})

When("when Castor services responds with bon-success status code", func() {
When("when Castor service responds with non-success status code", func() {
BeforeEach(func() {
httpmock.Activate()
httpmock.RegisterResponder(
2 changes: 1 addition & 1 deletion klyshko-operator/charts/klyshko-operator/README.md
@@ -54,7 +54,7 @@ deletes the release.

## Configuration

The following sections list the (main) configurable parameters of the
The following sections list the (main) configuration parameters of the
`klyshko-operator` chart and their default values. For the full list of
configuration parameters see [values.yaml](values.yaml).

3 changes: 3 additions & 0 deletions klyshko-operator/charts/klyshko-operator/templates/crds.yaml
@@ -132,6 +132,9 @@ spec:
kind: TupleGenerationScheduler
listKind: TupleGenerationSchedulerList
plural: tuplegenerationschedulers
shortNames:
- tgs
- tgscheduler
singular: tuplegenerationscheduler
scope: Namespaced
versions:
@@ -13,6 +13,9 @@ spec:
kind: TupleGenerationScheduler
listKind: TupleGenerationSchedulerList
plural: tuplegenerationschedulers
shortNames:
- tgs
- tgscheduler
singular: tuplegenerationscheduler
scope: Namespaced
versions:
26 changes: 15 additions & 11 deletions klyshko-operator/controllers/tuplegenerationjob_controller.go
@@ -71,17 +71,26 @@ func (r *TupleGenerationJobReconciler) Reconcile(ctx context.Context, req ctrl.R
logger := r.Logger.WithValues("Job.Key", jobKey)
logger.V(logging.DEBUG).Info("Reconciling tuple generation jobs")

// Get local player index to be used throughout the reconciliation loop
playerID, err := localPlayerID(ctx, &r.Client, req.Namespace)
if err != nil {
return ctrl.Result{RequeueAfter: 60 * time.Second},
fmt.Errorf("can't read playerId from VCP configuration for job %v: %w", req.Name, err)
}

// Cleanup if job has been deleted
job := &klyshkov1alpha1.TupleGenerationJob{}
err := r.Get(ctx, req.NamespacedName, job)
err = r.Get(ctx, req.NamespacedName, job)
if err != nil {
if apierrors.IsNotFound(err) {
// Job resource not available -> has been deleted
_, err := r.EtcdClient.Delete(ctx, jobKey.ToEtcdKey())
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to delete roster for job %v: %w", req.Name, err)
// Job resource not available -> has been deleted; delete the etcd entry iff we are the master
if playerID == 0 {
_, err := r.EtcdClient.Delete(ctx, jobKey.ToEtcdKey())
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to delete roster for job %v: %w", req.Name, err)
}
logger.V(logging.DEBUG).Info("Roster deleted")
}
logger.V(logging.DEBUG).Info("Roster deleted")
return ctrl.Result{}, nil
}
// Error reading the object - requeue the request.
@@ -95,7 +104,6 @@ func (r *TupleGenerationJobReconciler) Reconcile(ctx context.Context, req ctrl.R
return ctrl.Result{}, fmt.Errorf("failed to read resource for roster with key %v for task %v: %w", jobKey, req.Name, err)
}
if resp.Count == 0 {
playerID, err := localPlayerID(ctx, &r.Client, req.Namespace)
if playerID != 0 {
logger.V(logging.DEBUG).Info("Roster not available, retrying later")
return ctrl.Result{}, nil
@@ -114,10 +122,6 @@ }
}

// Create local task if not existing
playerID, err := localPlayerID(ctx, &r.Client, req.Namespace)
if err != nil {
return ctrl.Result{RequeueAfter: 60 * time.Second}, fmt.Errorf("can't read playerId from VCP configuration for job %v: %w", req.Name, err)
}
task := &klyshkov1alpha1.TupleGenerationTask{}
err = r.Get(ctx, types.NamespacedName{
Namespace: job.Namespace,
@@ -15,6 +15,7 @@ import (
"github.com/google/uuid"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"math/rand"
"time"

"k8s.io/apimachinery/pkg/runtime"
@@ -157,6 +158,11 @@ func (r *TupleGenerationSchedulerReconciler) cleanupFinishedJobs(ctx context.Con
return err
}
logger.V(logging.DEBUG).Info("Deleting finished jobs", "jobs", finishedJobs)
// Shuffling jobs to ensure that finished jobs do not accumulate while we try to delete
// the same finished job over and over again
rand.Shuffle(len(finishedJobs), func(i, j int) {
finishedJobs[i], finishedJobs[j] = finishedJobs[j], finishedJobs[i]
})
for _, j := range finishedJobs {
err := r.Delete(ctx, &j)
if err != nil {
