Fix: Old method upload #535
Conversation
Needs a bit of work. Also, some tests checking the failure states (too big file, Content-Length mismatch) should be included.
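A possible shape for the too-big-file case, using the `aiohttp_client` fixture from pytest-aiohttp; the `/upload` route, the `make_test_app` factory and the size limit are placeholders for whatever the project actually exposes, and the Content-Length-mismatch case would additionally need a hand-crafted raw request:

```python
import aiohttp

MAX_UPLOAD_SIZE = 1024  # hypothetical limit configured in the app under test


async def test_upload_rejects_too_large_file(aiohttp_client):
    app = make_test_app(max_upload_size=MAX_UPLOAD_SIZE)  # hypothetical app factory
    client = await aiohttp_client(app)

    # Build a multipart body one byte larger than the configured limit.
    data = aiohttp.FormData()
    data.add_field("file_field", b"x" * (MAX_UPLOAD_SIZE + 1), filename="big.bin")

    resp = await client.post("/upload", data=data)
    assert resp.status == 413  # HTTPRequestEntityTooLarge
```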
LGTM
After doing some research, I discovered that `await request.post()` reads the WHOLE request body into memory, clearly defeating the purpose of much of the code here.
```python
def read_file_with_max_size(self, max_size: int) -> Union[bytes, None]:
    buffer_size = 64 * 1024  # 64 KB buffer size
    content = b""
    total_read = 0

    while True:
        chunk = self.file_field.file.read(buffer_size)

        if not chunk:
            break

        total_read += len(chunk)
        if total_read > max_size:
            raise web.HTTPRequestEntityTooLarge()

        content += chunk

    return content if content else None
```
This part is actually insufficient; at the point where we process the request, we need to use `await request.multipart()` instead:
https://docs.aiohttp.org/en/stable/web_quickstart.html#file-uploads
You might have noticed a big warning in the example above. The general issue is that aiohttp.web.BaseRequest.post() reads the whole payload in memory, resulting in possible OOM errors. To avoid this, for multipart uploads, you should use aiohttp.web.BaseRequest.multipart() which returns a multipart reader:
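For reference, a minimal sketch of what streaming the upload through the multipart reader could look like; the field handling and the size limit below are placeholders, not this project's actual code:

```python
from aiohttp import web

MAX_UNAUTHENTICATED_SIZE = 16 * 1024 * 1024  # hypothetical limit for this sketch


async def handle_upload(request: web.Request) -> web.Response:
    reader = await request.multipart()
    field = await reader.next()  # first part; a real handler should check field and field.name

    total = 0
    chunks = []
    while True:
        chunk = await field.read_chunk()  # 8192 bytes by default
        if not chunk:
            break
        total += len(chunk)
        if total > MAX_UNAUTHENTICATED_SIZE:
            raise web.HTTPRequestEntityTooLarge(
                max_size=MAX_UNAUTHENTICATED_SIZE, actual_size=total
            )
        chunks.append(chunk)

    content = b"".join(chunks)
    return web.Response(text=f"received {len(content)} bytes")
```

Because the body is consumed chunk by chunk, the handler can reject an oversized upload as soon as the limit is crossed instead of buffering the whole payload first.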
Closed in favor of #559
Problem:
We require the old upload method to remain functional. In some cases, it may crash due to the HTTP client not setting the multipart header for the file_field.
Solution:
If that information is not sent, we don't need to know the file size up front: we can simply read chunks until we reach either the maximum size for unauthenticated uploads or the end of the file.
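A minimal sketch of that fallback, assuming the raw request stream should be read when no usable size information is available; the limit name is a placeholder:

```python
from aiohttp import web

MAX_UNAUTHENTICATED_SIZE = 16 * 1024 * 1024  # hypothetical limit


async def read_body_with_limit(request: web.Request, max_size: int) -> bytes:
    chunks = []
    total = 0
    while True:
        chunk = await request.content.readany()  # next chunk from the stream, b"" at EOF
        if not chunk:
            break
        total += len(chunk)
        if total > max_size:
            raise web.HTTPRequestEntityTooLarge(max_size=max_size, actual_size=total)
        chunks.append(chunk)
    return b"".join(chunks)
```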