My previous change reworking metadata generation was incorrect for two
reasons:
* It passed the original `io.Reader` (with no bytes remaining) to
`GenerateMetadata`
* It did not seek back to the beginning of the temporary file after
writing to it, causing any subsequent reads to return EOF.
This change fixes both issues, and S3 uploads now work correctly. We
should add test coverage for the S3 backend to prevent regressions like
this.
It turns out that the S3 API expects the additional `MetadataDirective:
REPLACE` option in order to update metadata. If this is not provided,
then metadata will simply be copied from the source object, which is not
what we wanted.
* Add PutMetadata function to storage backends
This function is not currently used, but it will be useful for helper
scripts that need to regenerate metadata on the fly, especially scripts
to migrate between storage backends. In the future, we can also use it
to automatically regenerate metadata if it is found to be missing or
corrupted.
* Add PutMetadata function to storage backend interface and
implementations
* Rework metadata generation to be more efficient and work better with
the PutMetadata function
* Add a basic test for metadata generation
* Change PutMetadata to take a Metadata type instead
This function is unlikely to be useful if it always regenerates the
metadata; instead, the caller should regenerate it when needed.
This change pulls in some code copied from net/http's fs.go so that we
can support If-Match/If-None-Match requests. This will make it easy to
put a caching proxy in front of linx-server instances. Request
validation will still happen as long as the proxy can contact the
origin, so expiration and deletion will still work as expected under
normal circumstances.
* Remove right margin from expiration dropdown on index
* Use flexbox for bin/story display
* Move Paste/Save button after expire dropdown, instead of before