Sunday, March 10, 2013

Handling Multiple File Uploads in a Go Web Application

It took me some time to get this right as I couldn't find an easy-to-follow example on the web. Since this is a common use case, hopefully this post will help someone.

This example has a single template for the main webpage containing the file upload form. The backend handler serves this template on a GET request and handles the upload on a POST request. If you're not familiar with Go's html/template package, I would recommend reading this well-written document first.

The uploadHandler method is where the action happens. This handler responds to a GET request by displaying the upload form. The form POSTs to the same URL; the handler responds by parsing the posted form, saving the uploaded files, and displaying a success message.
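The handler in the project follows that shape. Here is a minimal sketch of the idea, assuming a form field named "uploadfile" and an existing ./uploads directory (both placeholders, not necessarily what the repository uses):

package main

import (
	"fmt"
	"html/template"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// A minimal upload form; the real project keeps this in a separate template file.
const uploadPage = `<html><body>
<form action="/upload" method="post" enctype="multipart/form-data">
  <input type="file" name="uploadfile" multiple>
  <input type="submit" value="Upload">
</form>
</body></html>`

var uploadTemplate = template.Must(template.New("upload").Parse(uploadPage))

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case "GET":
		// Display the upload form.
		uploadTemplate.Execute(w, nil)
	case "POST":
		// Parse the multipart form, keeping up to 32 MB in memory;
		// larger uploads spill over to temporary files on disk.
		if err := r.ParseMultipartForm(32 << 20); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// All files sent under the "uploadfile" field.
		files := r.MultipartForm.File["uploadfile"]
		for _, fh := range files {
			src, err := fh.Open()
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			defer src.Close()

			dst, err := os.Create(filepath.Join("./uploads", fh.Filename))
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			defer dst.Close()

			if _, err := io.Copy(dst, src); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
		}
		fmt.Fprintf(w, "Uploaded %d file(s) successfully.", len(files))
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}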

You can find the complete project at https://github.com/sanatgersappa/Go-MultipleFileUpload

Update: As pointed out by Luit in the comments, an alternative way of doing this is to use the mime/multipart.Reader exposed by r.MultipartReader() instead of r.ParseMultipartForm(). This approach has the advantage that it doesn't write to a temporary location on disk, but processes the bytes as they come in.
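For reference, the streaming variant Luit suggests might look roughly like this inside the POST branch of the handler; it is only a sketch, with the same hypothetical ./uploads destination and abbreviated error handling:

// Instead of r.ParseMultipartForm, read the parts one by one.
mr, err := r.MultipartReader()
if err != nil {
	http.Error(w, err.Error(), http.StatusInternalServerError)
	return
}
for {
	part, err := mr.NextPart()
	if err == io.EOF {
		break // no more parts in the request
	}
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Skip non-file form fields.
	if part.FileName() == "" {
		continue
	}
	dst, err := os.Create(filepath.Join("./uploads", part.FileName()))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// io.Copy streams the part straight to disk without buffering the
	// whole file in memory or writing a temporary file first.
	if _, err := io.Copy(dst, part); err != nil {
		dst.Close()
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	dst.Close()
}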

7 comments:

  1. Why not use a mime/multipart.Reader, by calling r.MultipartReader() instead of r.ParseMultipartForm(...)? I think that might make the code simpler, and it works just as well (maybe even better, no temporary files for big multipart uploads).

  2. Thanks. That's an alternate way of doing it. However, I have a feeling this might require a lot of memory on the server if someone dumps a largish file or two. Haven't tested it though.

  3. As far as I know, the Reader will simply fill the buffers that are always present when dealing with network stuff, but nothing more. I'd say the ParseMultipartForm method is the more dangerous one, as it simply buffers up to x bytes in memory, and stores the rest of the incoming files to disk.

    The multipart.Reader simply exposes a different way to read from the network connection, only accepting as much data from it as you try to read (plus a few bytes of buffering, perhaps). It's just a layer on top, enabling you to use it as an io.Reader for each multipart.Part, as if it's a simple file. No filesystem backing, or filling buffers at the mercy of the (possibly dangerous) request.

    If it doesn't work this way internally, I'd consider it to be a bug.

  4. Hmm... sounds about right. Will try it. Thanks.

  5. Ok. Tried it out and updated. Looks like a good approach.

  6. Nice. One more thing though, you can copy from an io.Reader to an io.Writer using the io.Copy(...) function.

  7. Done. Looks much cleaner now. Thanks again.
