NXDRIVE-2711: Show that upload is still alive for very large files (Sourcery refactored) #4228

Closed
Changes from all commits (28 commits)
- b90806e: 14-sept (gitofanindya, Sep 14, 2023)
- ea94e57: NXDRIVE-2711:Show thta upload is still alive for very large file (Sep 22, 2023)
- 273e923: NXDRIVE-2711:Show thta upload is still alive for very large file (Sep 22, 2023)
- 915f44d: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Sep 26, 2023)
- d7308ca: NXDRIVE-2711: Upload is still alive (swetayadav1, Oct 5, 2023)
- d82ddbb: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 10, 2023)
- cf30aaa: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 10, 2023)
- 3382d47: Merge branch 'master' of https://github.com/nuxeo/nuxeo-drive into wi… (swetayadav1, Oct 10, 2023)
- 9664a78: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 10, 2023)
- 70481e6: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 11, 2023)
- 7524f02: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 11, 2023)
- bd016cd: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 12, 2023)
- 2968e81: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 12, 2023)
- 7a400a7: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 17, 2023)
- 900dab5: NXDRIVE-2711: Show that upload is still alive for very large files (swetayadav1, Oct 17, 2023)
- 643f317: NXDRIVE-2711: update testcase (swetayadav1, Oct 18, 2023)
- ab433bb: Merge branch 'master' of https://github.com/nuxeo/nuxeo-drive into wi… (swetayadav1, Oct 18, 2023)
- 894e6a6: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 25, 2023)
- 3fda071: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 25, 2023)
- 2de2bf4: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 25, 2023)
- 1c921c4: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 25, 2023)
- 8d90b25: Merge branch 'master' of https://github.com/nuxeo/nuxeo-drive into wi… (swetayadav1, Oct 25, 2023)
- 7cb3f27: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 26, 2023)
- e8501e3: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 26, 2023)
- ae9a1a8: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 26, 2023)
- 345078c: NXDRIVE-2711:Show that upload is still alive for very large files (swetayadav1, Oct 26, 2023)
- c21d4f1: NXDRIVE-2711: Sourcery refactored (swetayadav1, Nov 2, 2023)
- 893d1aa: 'Refactored by Sourcery' (Nov 2, 2023)
5 changes: 3 additions & 2 deletions docs/changes/5.3.3.md
@@ -12,7 +12,7 @@ Release date: `2023-xx-xx`

### Direct Transfer

- [NXDRIVE-2](https://jira.nuxeo.com/browse/NXDRIVE-2):
- [NXDRIVE-2711](https://jira.nuxeo.com/browse/NXDRIVE-2711): Show that upload is still alive for very large files

## GUI

@@ -83,6 +83,7 @@ Release date: `2023-xx-xx`
- Upgraded `typing-extensions` from 4.0.1 to 4.7.1
- Upgraded `vulture` from 2.3 to 2.9.1
- Upgraded `wcwidth` from 0.2.5 to 0.2.6

## Technical Changes

-
- Added `finalizing_status` attribute in LinkingAction class
27 changes: 14 additions & 13 deletions nxdrive/client/uploader/__init__.py
@@ -357,11 +357,9 @@
self.dao.update_upload(transfer)
transfer.is_dirty = False

# Handle status changes every time a chunk is sent
_transfer = self.get_upload(
if _transfer := self.get_upload(

[Codecov: added line nxdrive/client/uploader/__init__.py#L360 was not covered by tests]
doc_pair=transfer.doc_pair, path=transfer.path
)
if _transfer:
):
Comment on lines -360 to +362 (Contributor Author):
Function BaseUploader.upload_chunks refactored with the following changes:

This removes the following comments ( why? ):

# Handle status changes every time a chunk is sent

self._handle_transfer_status(_transfer)
else:
uploader.upload()
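
Note: the refactor above folds the separate lookup and truthiness test into a single assignment expression (the walrus operator, PEP 572). A minimal, self-contained sketch of the same pattern, with hypothetical names standing in for the real `BaseUploader.get_upload` API:

```python
from typing import Optional


def get_upload(uploads: dict, key: str) -> Optional[dict]:
    """Stand-in for BaseUploader.get_upload(): return the tracked upload or None."""
    return uploads.get(key)


def on_chunk_sent(uploads: dict, key: str) -> str:
    # Before: _transfer = get_upload(...); if _transfer: ...
    # After:  lookup and truthiness check in one assignment expression.
    if _transfer := get_upload(uploads, key):
        return f"handle status change for {_transfer['path']}"
    return "no tracked upload, start a fresh one"


print(on_chunk_sent({"doc-1": {"path": "/tmp/big.iso"}}, "doc-1"))
print(on_chunk_sent({}, "doc-2"))
```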
@@ -416,7 +414,7 @@
self._set_transfer_status(transfer, TransferStatus.ONGOING)
raise exc

def link_blob_to_doc( # type: ignore[return]
def link_blob_to_doc(
self,
command: str,
transfer: Upload,
@@ -451,15 +449,19 @@
kwargs["headers"] = headers
try:
doc_type = kwargs.get("doc_type", "")
if transfer.is_direct_transfer and doc_type and doc_type != "":
res = self._transfer_docType_file(transfer, headers, doc_type)
else:
res = self._transfer_autoType_file(command, blob, kwargs)

return res
return (
self._transfer_docType_file(transfer, headers, doc_type)
if transfer.is_direct_transfer and doc_type and doc_type != ""
else self._transfer_autoType_file(command, blob, kwargs)
)
except Exception as exc:
err = f"Error while linking blob to doc: {exc!r}"
log.warning(err)
action.finalizing_status = "Error"
if "TCPKeepAliveHTTPSConnectionPool" not in str(exc):
transfer.request_uid = str(uuid4())
self.dao.update_upload_requestid(transfer)
raise exc
finally:
action.finish_action()
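
Note: the widened `except` block above marks the action's `finalizing_status` as "Error" for the UI and, unless the exception text mentions `TCPKeepAliveHTTPSConnectionPool`, assigns a fresh `request_uid` and persists it before re-raising. A rough sketch of that control flow with the transfer and DAO mocked out; all names other than `request_uid` and the exception-string check are illustrative:

```python
from uuid import uuid4


class FakeTransfer:
    """Stand-in for the Upload transfer object; only request_uid matters here."""

    def __init__(self) -> None:
        self.request_uid = str(uuid4())


def persist_request_uid(transfer: FakeTransfer) -> None:
    # Stand-in for dao.update_upload_requestid(transfer).
    print(f"persisting request_uid={transfer.request_uid}")


def link_blob(transfer: FakeTransfer, do_link) -> None:
    """Regenerate the request UID on errors that are not keep-alive pool errors."""
    try:
        do_link()
    except Exception as exc:
        # Per the diff: keep-alive pool errors keep the original request UID;
        # any other error gets a fresh UID, persisted before re-raising.
        if "TCPKeepAliveHTTPSConnectionPool" not in str(exc):
            transfer.request_uid = str(uuid4())
            persist_request_uid(transfer)
        raise


def failing_link() -> None:
    raise RuntimeError("server returned 500 while linking blob to doc")


transfer = FakeTransfer()
old_uid = transfer.request_uid
try:
    link_blob(transfer, failing_link)
except RuntimeError:
    pass
print("request_uid regenerated:", transfer.request_uid != old_uid)
```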

@@ -498,11 +500,10 @@
data=content,
ssl_verify=self.verification_needed,
)
res = self.remote.fetch(
return self.remote.fetch(

[Codecov: added line nxdrive/client/uploader/__init__.py#L503 was not covered by tests]
f"{self.remote.client.api_path}/path{transfer.remote_parent_path}",
headers=headers,
)
return res
Comment on lines -501 to -505 (Contributor Author):
Function BaseUploader._transfer_docType_file refactored with the following changes:


@staticmethod
def _complete_upload(transfer: Upload, blob: FileBlob, /) -> None:
45 changes: 19 additions & 26 deletions nxdrive/dao/engine.py
@@ -690,7 +690,7 @@
"UPDATE States SET processor = 0 WHERE processor = ?", (processor_id,)
)
log.debug(f"Released processor {processor_id}")
return bool(c.rowcount > 0)
return c.rowcount > 0
Function EngineDAO.release_processor refactored with the following changes:


def acquire_processor(self, thread_id: int, row_id: int, /) -> bool:
with self.lock:
@@ -702,7 +702,7 @@
" AND processor IN (0, ?)",
(thread_id, row_id, thread_id),
)
return bool(c.rowcount == 1)
return c.rowcount == 1
Function EngineDAO.acquire_processor refactored with the following changes:


def _reinit_states(self, cursor: Cursor, /) -> None:
cursor.execute("DROP TABLE States")
@@ -748,7 +748,7 @@
c.execute(f"{update} WHERE id = ?", ("remotely_deleted", doc_pair.id))
if doc_pair.folderish:
c.execute(
update + " " + self._get_recursive_remote_condition(doc_pair),
f"{update} {self._get_recursive_remote_condition(doc_pair)}",
Function EngineDAO.delete_remote_state refactored with the following changes:

("parent_remotely_deleted",),
)
# Only queue parent
@@ -781,16 +781,7 @@
) -> int:
digest = None
if not info.folderish:
if is_large_file(info.size):
# We can't compute the digest of big files now as it will
# be done later when the entire file is fully copied.
# For instance, on my machine (32GB RAM, 8 cores, Intel NUC)
# it takes 23 minutes for 100 GB and 7 minute for 50 GB.
# This is way too much effort to compute it several times.
digest = UNACCESSIBLE_HASH
else:
digest = info.get_digest()

digest = UNACCESSIBLE_HASH if is_large_file(info.size) else info.get_digest()
Function EngineDAO.insert_local_state refactored with the following changes:

This removes the following comments ( why? ):

# We can't compute the digest of big files now as it will
# be done later when the entire file is fully copied.
# For instance, on my machine (32GB RAM, 8 cores, Intel NUC)
# it takes 23 minutes for 100 GB and 7 minute for 50 GB.
# This is way too much effort to compute it several times.

with self.lock:
c = self._get_write_connection().cursor()
pair_state = PAIR_STATES[("created", "unknown")]
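
Note: the collapsed conditional above keeps the original behaviour: files considered large get the `UNACCESSIBLE_HASH` placeholder and their digest is computed later, once the file is fully copied, because hashing tens of gigabytes at insert time takes many minutes. A standalone sketch of that decision; the 4 GiB threshold and the use of MD5 are assumptions for illustration, not Nuxeo Drive's actual settings:

```python
import hashlib

UNACCESSIBLE_HASH = "TO_COMPUTE"        # placeholder digest, as in the diff
LARGE_FILE_THRESHOLD = 4 * 1024 ** 3    # assumed threshold, illustrative only


def is_large_file(size: int) -> bool:
    return size > LARGE_FILE_THRESHOLD


def digest_for(content: bytes, size: int) -> str:
    """Hash small files right away; defer the digest for very large ones."""
    if is_large_file(size):
        # Hashing e.g. 100 GB can take tens of minutes, so it is computed
        # later, once the file is fully copied.
        return UNACCESSIBLE_HASH
    return hashlib.md5(content).hexdigest()


print(digest_for(b"hello", 5))              # real digest
print(digest_for(b"", 10 * 1024 ** 3))      # placeholder for a huge file
```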
@@ -1406,7 +1397,7 @@
condition = self._get_recursive_remote_condition(doc_pair)
else:
condition = self._get_recursive_condition(doc_pair)
c.execute("DELETE FROM States " + condition)
c.execute(f"DELETE FROM States {condition}")

[Codecov: added line nxdrive/dao/engine.py#L1400 was not covered by tests]
Comment on lines -1409 to +1400 (Contributor Author):
Function EngineDAO.remove_state refactored with the following changes:


def remove_state_children(
self, doc_pair: DocPair, /, *, remote_recursion: bool = False
@@ -1417,7 +1408,7 @@
condition = self._get_recursive_remote_condition(doc_pair)
else:
condition = self._get_recursive_condition(doc_pair)
c.execute("DELETE FROM States " + condition)
c.execute(f"DELETE FROM States {condition}")

[Codecov: added line nxdrive/dao/engine.py#L1411 was not covered by tests]
Comment on lines -1420 to +1411 (Contributor Author):
Function EngineDAO.remove_state_children refactored with the following changes:


def get_state_from_local(self, path: Path, /) -> Optional[DocPair]:
c = self._get_read_connection().cursor()
@@ -1485,15 +1476,10 @@
def queue_children(self, row: DocPair, /) -> None:
with self.lock:
c = self._get_write_connection().cursor()
children: List[DocPair] = c.execute(
"SELECT *"
" FROM States"
" WHERE remote_parent_ref = ?"
" OR local_parent_path = ?"
" AND " + self._get_to_sync_condition(),
if children := c.execute(
f"SELECT * FROM States WHERE remote_parent_ref = ? OR local_parent_path = ? AND {self._get_to_sync_condition()}",
(row.remote_ref, row.local_path),
).fetchall()
if children:
).fetchall():
Comment on lines -1488 to +1482 (Contributor Author):
Function EngineDAO.queue_children refactored with the following changes:

log.info(f"Queuing {len(children)} children of {row}")
for child in children:
self._queue_pair_state(child.id, child.folderish, child.pair_state)
@@ -1663,7 +1649,7 @@
version,
),
)
result = bool(c.rowcount == 1)
result = c.rowcount == 1
Comment on lines -1666 to +1652 (Contributor Author):
Function EngineDAO.synchronize_state refactored with the following changes:


# Retry without version for folder
if not result and row.folderish:
@@ -1694,7 +1680,7 @@
row.remote_parent_ref,
),
)
result = bool(c.rowcount == 1)
result = c.rowcount == 1

if not result:
log.debug(f"Was not able to synchronize state: {row!r}")
@@ -1869,7 +1855,7 @@
row = c.execute(
"SELECT COUNT(path) FROM RemoteScan WHERE path = ? LIMIT 1", (path,)
).fetchone()
return bool(row[0] > 0)
return row[0] > 0
Comment on lines -1872 to +1858 (Contributor Author):
Function EngineDAO.is_path_scanned refactored with the following changes:


def is_filter(self, path: str, /) -> bool:
path = self._clean_filter_path(path)
@@ -2381,6 +2367,13 @@
sql = "UPDATE Uploads SET batch = ? WHERE uid = ?"
c.execute(sql, (json.dumps(batch), upload.uid))

def update_upload_requestid(self, upload: Upload, /) -> None:
"""In case of error during linking, update request_uid for upload"""
with self.lock:
c = self._get_write_connection().cursor()
sql = "UPDATE Uploads SET request_uid = ? WHERE uid = ?"
c.execute(sql, (upload.request_uid, upload.uid))

def pause_transfer(
self,
nature: str,
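Note: the new `update_upload_requestid` method is a single parameterized UPDATE. A self-contained `sqlite3` sketch of the same statement against a throwaway `Uploads` table (the schema here is reduced to the two columns the query touches):

```python
import sqlite3
from uuid import uuid4

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Uploads (uid INTEGER PRIMARY KEY, request_uid TEXT)")
con.execute("INSERT INTO Uploads (uid, request_uid) VALUES (1, ?)", (str(uuid4()),))

# Equivalent of EngineDAO.update_upload_requestid(upload): persist the new UID.
new_request_uid = str(uuid4())
con.execute("UPDATE Uploads SET request_uid = ? WHERE uid = ?", (new_request_uid, 1))
con.commit()

print(con.execute("SELECT request_uid FROM Uploads WHERE uid = 1").fetchone()[0])
```

Running it prints the regenerated UID, mirroring what the uploader persists after a failed linking call.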
1 change: 1 addition & 0 deletions nxdrive/data/i18n/i18n.json
@@ -106,6 +106,7 @@
"DIRECT_TRANSFER_DETAILS": "[%1%] %2 of %3",
"DIRECT_TRANSFER_END": "Transfer done: \"%1\"",
"DIRECT_TRANSFER_ERROR": "Transfer error: \"%1\"",
"DIRECT_TRANSFER_FINALIZING_ERROR": "An error occurred during the transfer, it will resume shortly.",
"DIRECT_TRANSFER_NO_ACCOUNT": "Cannot use the Direct Transfer feature with no account, aborting.",
"DIRECT_TRANSFER_NOT_ALLOWED": "Direct Transfer of \"%1\" is not allowed for synced files.",
"DIRECT_TRANSFER_NOT_ENABLED": "The Direct Transfer feature is not enabled.",
8 changes: 8 additions & 0 deletions nxdrive/data/qml/TransferItem.qml
@@ -77,5 +77,13 @@ Rectangle {
}
}
}

ScaledText {
text: qsTr("DIRECT_TRANSFER_FINALIZING_ERROR") + tl.tr
color: secondaryText
visible: finalizing && finalizing_status
Layout.leftMargin: icon.width + 5
font.pointSize: point_size * 0.8
}
}
}
26 changes: 19 additions & 7 deletions nxdrive/engine/activity.py
@@ -54,8 +54,7 @@ def get_current_action(*, thread_id: int = None) -> Optional["Action"]:

@staticmethod
def finish_action() -> None:
action = Action.actions.pop(current_thread_id(), None)
if action:
if action := Action.actions.pop(current_thread_id(), None):
action.finish()

def finish(self) -> None:
@@ -69,9 +68,7 @@ def export(self) -> Dict[str, Any]:
}

def __repr__(self) -> str:
if not self.progress:
return str(self.type)
return f"{self.type}({self.progress}%)"
return f"{self.type}({self.progress}%)" if self.progress else str(self.type)
Comment on lines -72 to +71 (Contributor Author):
Function Action.__repr__ refactored with the following changes:



class IdleAction(Action):
@@ -131,8 +128,7 @@ def _connect_reporter(self, reporter: Optional[QApplication], /) -> None:
return

for evt in ("started", "progressing", "done"):
signal = getattr(reporter, f"action_{evt}", None)
if signal:
if signal := getattr(reporter, f"action_{evt}", None):
Comment on lines -134 to +131 (Contributor Author):
Function FileAction._connect_reporter refactored with the following changes:

getattr(self, evt).connect(signal)

@property
@@ -149,6 +145,15 @@ def progress(self, value: float, /) -> None:

self.progressing.emit(self)

@property
def finalizing_status(self) -> str:
return self._finalizing_status

@finalizing_status.setter
def finalizing_status(self, value: str, /) -> None:
self._finalizing_status = value
self.progressing.emit(self)

def get_percent(self) -> float:
if self.size < 0 or (self.empty and not self.uploaded):
return 0.0
@@ -257,6 +262,13 @@ def __init__(
doc_pair=doc_pair,
)
self.progress = size
self.finalizing_status = ""

def export(self) -> Dict[str, Any]:
return {
**super().export(),
"finalizing_status": self.finalizing_status,
}


def tooltip(doing: str): # type: ignore
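
To summarise the `activity.py` additions: `FileAction` gains a `finalizing_status` property whose setter re-emits the `progressing` signal, and `LinkingAction` initialises it to an empty string and includes it in `export()`, which is what the QML `TransferItem` reads. A Qt-free sketch of the same shape, with a plain callback standing in for the Qt signal; this is an illustration, not the real class hierarchy:

```python
from typing import Any, Callable, Dict


class FileAction:
    def __init__(self, on_progress: Callable[["FileAction"], None]) -> None:
        self._on_progress = on_progress   # stands in for the Qt "progressing" signal
        self._finalizing_status = ""

    @property
    def finalizing_status(self) -> str:
        return self._finalizing_status

    @finalizing_status.setter
    def finalizing_status(self, value: str) -> None:
        self._finalizing_status = value
        self._on_progress(self)           # notify the UI, as progressing.emit(self) does

    def export(self) -> Dict[str, Any]:
        return {"action": "Linking"}


class LinkingAction(FileAction):
    def __init__(self, on_progress: Callable[["FileAction"], None]) -> None:
        super().__init__(on_progress)
        self.finalizing_status = ""

    def export(self) -> Dict[str, Any]:
        # Merge the parent export with the extra key the QML TransferItem reads.
        return {**super().export(), "finalizing_status": self.finalizing_status}


action = LinkingAction(lambda a: print("progressing:", a.finalizing_status or "<empty>"))
action.finalizing_status = "Error"
print(action.export())
```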