Allow for the firmware file size limit to be configurable (#1746)
This allows for the firmware file size upload limit to be adjusted via
runtime env var.

This also restricts large file uploads to only the firmwares API
route.

I'd also like this to be configurable per org, such that free users can
only upload smaller firmware sizes, but that's for a different
discussion.
joshk authored Jan 11, 2025
1 parent 38dee14 commit 9e597df
Showing 6 changed files with 49 additions and 7 deletions.
4 changes: 4 additions & 0 deletions config/runtime.exs
@@ -348,6 +348,10 @@ if config_env() == :prod do
end
end

+# Set a default max firmware upload size of 200MB for all environments
+config :nerves_hub, NervesHub.Firmwares.Upload,
+  max_size: System.get_env("FIRMWARE_UPLOAD_MAX_SIZE", "200000000") |> String.to_integer()

##
# SMTP settings.
#
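The hunk above reads the limit from an env var and falls back to 200 MB when `FIRMWARE_UPLOAD_MAX_SIZE` is unset. A minimal sketch of that parsing pattern (the override value shown in the comment is hypothetical):

```elixir
# Sketch of the env-var parsing used in runtime.exs above.
# Default is "200000000" (200 MB, in bytes) when the env var is unset.
# A deploy could override it, e.g. FIRMWARE_UPLOAD_MAX_SIZE=500000000 (hypothetical value).
max_size =
  System.get_env("FIRMWARE_UPLOAD_MAX_SIZE", "200000000")
  |> String.to_integer()

IO.inspect(max_size)
```

Note that `String.to_integer/1` raises if the env var holds a non-numeric string, so a bad value fails loudly at boot rather than silently falling back.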
2 changes: 1 addition & 1 deletion lib/nerves_hub/firmwares.ex
@@ -144,7 +144,7 @@ defmodule NervesHub.Firmwares do
Repo.rollback(error)
end
end,
-  timeout: 30_000
+  timeout: 60_000
)
end

2 changes: 1 addition & 1 deletion lib/nerves_hub/firmwares/upload/s3.ex
@@ -12,7 +12,7 @@ defmodule NervesHub.Firmwares.Upload.S3 do
def upload_file(source_path, %{"s3_key" => s3_key}) do
source_path
|> S3.Upload.stream_file()
-  |> S3.upload(bucket(), s3_key)
+  |> S3.upload(bucket(), s3_key, timeout: 60_000)
|> ExAws.request()
|> case do
{:ok, _} -> :ok
36 changes: 36 additions & 0 deletions lib/nerves_hub_web/dymanic_config_multipart.ex
@@ -0,0 +1,36 @@
defmodule NervesHubWeb.DymanicConfigMultipart do
@moduledoc """
A wrapper around `Plug.Parsers.MULTIPART` which allows for the `:length` opt (max file size)
to be set during runtime.
This also restricts large file uploads to the firmware upload api route.
This can later be expanded to allow for different file size limits based on the organization.
Thank you to https://hexdocs.pm/plug/Plug.Parsers.MULTIPART.html#module-dynamic-configuration
for the inspiration.
"""

@multipart Plug.Parsers.MULTIPART

def init(opts) do
opts
end

def parse(conn, "multipart", subtype, headers, opts) do
opts = @multipart.init([length: max_file_size(conn)] ++ opts)
@multipart.parse(conn, "multipart", subtype, headers, opts)
end

def parse(conn, _type, _subtype, _headers, _opts) do
{:next, conn}
end

defp max_file_size(conn) do
if String.match?(conn.request_path, ~r/^\/api\/orgs\/\w+\/products\/\w+\/firmwares$/) do
Application.get_env(:nerves_hub, NervesHub.Firmwares.Upload, [])[:max_size]
else
1_000_000
end
end
end
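The `max_file_size/1` check in the new module hinges on a path regex: only the firmware upload API route gets the configured limit, while every other multipart request falls back to the 1 MB default. A small sketch of that match (the org and product names are made up):

```elixir
# Sketch: the firmware-route regex from max_file_size/1 above.
firmware_route = ~r/^\/api\/orgs\/\w+\/products\/\w+\/firmwares$/

# The firmware upload route matches (hypothetical org/product names)...
IO.inspect(String.match?("/api/orgs/acme/products/widget/firmwares", firmware_route))
# ...while any other route does not, and so keeps the small default limit.
IO.inspect(String.match?("/api/devices", firmware_route))
```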
6 changes: 2 additions & 4 deletions lib/nerves_hub_web/endpoint.ex
@@ -71,11 +71,9 @@ defmodule NervesHubWeb.Endpoint do

plug(
Plug.Parsers,
-  parsers: [:urlencoded, :multipart, :json],
+  parsers: [:urlencoded, NervesHubWeb.DymanicConfigMultipart, :json],
   pass: ["*/*"],
-  # 1GB
-  length: 1_073_741_824,
-  json_decoder: Jason
+  json_decoder: Phoenix.json_library()
)

plug(Sentry.PlugContext)
6 changes: 5 additions & 1 deletion lib/nerves_hub_web/live/firmware.ex
@@ -46,7 +46,7 @@ defmodule NervesHubWeb.Live.Firmware do
accept: ~w(.fw),
max_entries: 1,
auto_upload: true,
-  max_file_size: 200_000_000,
+  max_file_size: max_file_size(),
progress: &handle_progress/3
)
|> render_with(&upload_firmware_template/1)
@@ -193,4 +193,8 @@ defmodule NervesHubWeb.Live.Firmware do
key = Enum.find(org_keys, &(&1.id == org_key_id))
"#{key.name}"
end

+  defp max_file_size() do
+    Application.get_env(:nerves_hub, NervesHub.Firmwares.Upload, [])[:max_size]
+  end
end
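Both the multipart plug and the LiveView read the limit through the same `Application.get_env/3` lookup, with `[]` as the default so the `[:max_size]` access returns `nil` rather than raising when the key is missing. A sketch of that lookup (the `put_env` call stands in for the runtime config):

```elixir
# Sketch: the app-env lookup shared by DymanicConfigMultipart and the LiveView.
# In the real app the value is set by config/runtime.exs; put_env stands in here.
Application.put_env(:nerves_hub, NervesHub.Firmwares.Upload, max_size: 200_000_000)

max_size = Application.get_env(:nerves_hub, NervesHub.Firmwares.Upload, [])[:max_size]
IO.inspect(max_size)
```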
