The VFS directory cache layer didn't update directory entry properties
if they were reused after cache invalidation.
Update them unconditionally, as newDir sets them to the same value and
assigning a pointer is cheaper in both lines of code and CPU cycles than
a branch.
Also add a test exercising this behavior.
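The gist of the change, as a minimal sketch rather than rclone's actual code
(dirCache, entry, props and addEntry are hypothetical names):

```go
package dircache

import "time"

// props stands in for the directory entry properties the cache tracks.
type props struct {
	modTime time.Time
	size    int64
}

type entry struct {
	name  string
	props *props
}

type dirCache struct {
	entries map[string]*entry
}

// addEntry reuses a cached entry if present. Instead of branching on
// whether the properties changed, it always assigns the new pointer:
// newDir would set the same value anyway, and the unconditional
// assignment is cheaper than a comparison.
func (c *dirCache) addEntry(name string, p *props) *entry {
	e, ok := c.entries[name]
	if !ok {
		e = &entry{name: name}
		c.entries[name] = e
	}
	e.props = p // always update, even when the entry is reused
	return e
}
```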
Fixes #6335
Before this change, if --vfs-cache-mode writes or above was set and
--links was in use, then when a symlink was saved the VFS failed to
upload it. This meant that when the VFS was restarted the link wasn't
there any more.
This was caused by the local backend, which we use to manage the VFS
cache, picking up the global --links flag.
This patch makes sure that the internal instantiations of the local
backend in the VFS cache don't ever use the --links flag or the
--local-links flag even if specified on the command line.
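A hypothetical sketch of the approach, using illustrative names (localOptions,
TranslateSymlinks and newCacheBackendOptions are not rclone's real
identifiers): the options used to build the cache's internal local backend pin
symlink translation off regardless of the global flags.

```go
package vfscache

// localOptions stands in for the local backend's configuration.
type localOptions struct {
	TranslateSymlinks bool // what --links / --local-links would normally set
}

// newCacheBackendOptions derives the options for the VFS cache's internal
// local backend from the globally configured ones.
func newCacheBackendOptions(global localOptions) localOptions {
	opts := global
	// The cache must store symlink placeholder files verbatim, so never
	// translate symlinks here, whatever the command line says.
	opts.TranslateSymlinks = false
	return opts
}
```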
Fixes #8367
Before this change, after a sync only file modtimes were updated when not
using --copy-empty-src-dirs. This change ensures directory modtimes are
updated to match the source folder, regardless of copyEmptySrcDir. The flag
--no-update-dir-modtime (which previously did nothing) will disable this.
This adds tests to check dir modtimes are updated from source
when syncing even if they've changed in the destination.
This should work both with and without --copy-empty-src-dirs.
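A rough sketch of what the post-sync pass could look like, under assumed
names (Options, Dir, setDirModTime and updateDirModTimes are illustrative,
not rclone's real API):

```go
package syncdir

import (
	"context"
	"time"
)

// Options and Dir are hypothetical types for illustration only.
type Options struct{ NoUpdateDirModtime bool }

type Dir struct {
	RemotePath string
	ModTime    time.Time
}

// setDirModTime stands in for whatever the destination backend uses to
// set a directory's modification time.
func setDirModTime(ctx context.Context, remotePath string, t time.Time) error {
	return nil // placeholder
}

// updateDirModTimes runs after the sync and copies directory modtimes from
// the source, whether or not --copy-empty-src-dirs was used.
func updateDirModTimes(ctx context.Context, opt Options, srcDirs []Dir) error {
	if opt.NoUpdateDirModtime {
		return nil // --no-update-dir-modtime disables the pass entirely
	}
	for _, d := range srcDirs {
		if err := setDirModTime(ctx, d.RemotePath, d.ModTime); err != nil {
			return err
		}
	}
	return nil
}
```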
Commit 5f70918e2c introduced a new INFO log when making a directory, which
differs depending on
whether the backend supports setting directory metadata. This caused false
positives on the bisync createemptysrcdirs test.
This fixes it by ignoring that log line.
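A generic sketch of the kind of filtering involved, with an illustrative
regexp and function name rather than the bisync test framework's real ones:

```go
package bisync

import (
	"regexp"
	"strings"
)

// madeDirLog is an illustrative pattern for the INFO line logged when a
// directory is made; the exact wording varies by backend capability.
var madeDirLog = regexp.MustCompile(`INFO\s*:.*(Making directory|Made directory)`)

// stripIgnoredLogLines drops the backend-dependent directory-creation log
// line before the output is compared against the golden file.
func stripIgnoredLogLines(log string) string {
	var kept []string
	for _, line := range strings.Split(log, "\n") {
		if madeDirLog.MatchString(line) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}
```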
This shifts the behavior of the average loop to be a persistent loop
that gets resumed/paused when transfers & checks are started/completed.
Previously, the averageLoop was stopped on completion of
transfers & checks but failed to start again because it was protected
by a sync.Once.
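A simplified sketch of the pattern, not the actual accounting code: a single
long-lived goroutine owns the averaging ticker and is paused and resumed over
channels, instead of being stopped and guarded by a sync.Once that can never
fire again.

```go
package accounting

import "time"

type averager struct {
	start chan struct{}
	stop  chan struct{}
}

func newAverager() *averager {
	a := &averager{
		start: make(chan struct{}),
		stop:  make(chan struct{}),
	}
	go a.loop()
	return a
}

// loop runs for the lifetime of the stats object. Transfers and checks
// only flip it between running and paused; they never tear it down.
func (a *averager) loop() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	running := false
	for {
		select {
		case <-a.start:
			running = true
		case <-a.stop:
			running = false
		case <-ticker.C:
			if running {
				// recalculate the moving average of transfer speed here
			}
		}
	}
}
```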
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
Before this change, there was a bug affecting listing files when:
- a given bisync run had changes in the 2to1 direction
AND
- the run had NO changes in the 1to2 direction
AND
- at least one of the changed files changed AGAIN during the run
(specifically, after the initial march and before the transfers.)
In this situation, the listings on one side would still retain the prior version
of the changed file, potentially causing conflicts or errors.
This change fixes the issue by making sure that if we're updating the listings
on one side, we must also update the other. (We previously tried to skip it for
efficiency, but this failed to account for the possibility that a changed file
could change again during the run.)
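A minimal sketch of the rule the fix enforces, with hypothetical names:

```go
package bisync

// saveListings rewrites the listing snapshots after a run. Whenever one
// side's listing needs updating, both are rewritten, so a file that changed
// again mid-run cannot survive in a stale listing on the other side.
func saveListings(changed1to2, changed2to1 bool, save func(side string) error) error {
	if !changed1to2 && !changed2to1 {
		return nil // nothing moved, both listings are already current
	}
	// Update both sides, even if only one direction saw changes.
	if err := save("path1"); err != nil {
		return err
	}
	return save("path2")
}
```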
* Lower pacer minSleep to establish new connections faster
* Use Echo requests to check whether connections are working (required an upgrade of go-smb2)
* Only remount shares when needed
* Use context for connection establishment
* When returning a connection to the pool, only check the ones that encountered errors - see the sketch after this list
* Close connections in parallel
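An illustrative sketch only: smbConn and its Echo method stand in for the
go-smb2 session type and whatever keep-alive call the upgraded library
exposes, and the pool logic is heavily simplified.

```go
package smb

import "context"

// smbConn is a hypothetical interface over an SMB session.
type smbConn interface {
	Echo(ctx context.Context) error // assumed keep-alive / health check
	Close() error
}

type pool struct {
	conns chan smbConn
}

// putConn returns a connection to the pool. Connections that finished their
// work without error are trusted and recycled directly; only a connection
// that saw an error is probed with an Echo request first.
func (p *pool) putConn(ctx context.Context, c smbConn, lastErr error) {
	if lastErr != nil {
		if err := c.Echo(ctx); err != nil {
			_ = c.Close() // broken - drop it instead of recycling it
			return
		}
	}
	select {
	case p.conns <- c:
	default:
		_ = c.Close() // pool is full
	}
}
```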
In this commit we introduced support for client credentials flow:
65012beea4 lib/oauthutil: add support for OAuth client credential flow
This involved re-organising the oauth credentials.
Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.
This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.
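To illustrate the class of bug, a hedged sketch using golang.org/x/oauth2
directly rather than rclone's oauthutil wrapper; the endpoint URLs and the
backendRedirectURL parameter are stand-ins for whatever each backend actually
configures.

```go
package box

import "golang.org/x/oauth2"

// newOAuthConfig builds the oauth2 config for a backend.
func newOAuthConfig(clientID, clientSecret, authURL, tokenURL, backendRedirectURL string) *oauth2.Config {
	return &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		Endpoint:     oauth2.Endpoint{AuthURL: authURL, TokenURL: tokenURL},
		// The fix: use the redirect URL configured for this backend.
		// Hard-coding a single fixed URL here is what makes the provider
		// reject the request with redirect_uri_mismatch.
		RedirectURL: backendRedirectURL,
	}
}
```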
These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.
- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
Before this change the logic which makes sure we create all
directories could get confused by directories which started with
slashes and get into an infinite loop, consuming 100% of the CPU.
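A small self-contained illustration of how such a loop can spin forever (not
rclone's actual code): walking up with path.Dir never terminates for a key
with leading slashes, because path.Dir("/") returns "/" again, so the fix is
to strip the leading slashes first.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// parentsOf collects a directory and all of its parents.
func parentsOf(dir string) []string {
	// Without this TrimLeft, a key such as "//a/b/c" leaves the loop
	// stuck at "/" and it never terminates.
	dir = strings.TrimLeft(dir, "/")
	var parents []string
	for dir != "" && dir != "." {
		parents = append(parents, dir)
		dir = path.Dir(dir)
	}
	return parents
}

func main() {
	fmt.Println(parentsOf("//a/b/c")) // [a/b/c a/b a]
}
```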
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This meant we couldn't access objects with // in
them, which is valid for object keys (but not for file system paths).
This change may have consequences for users who were relying on rclone
to fix improper paths for them.
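A standard-library illustration of the underlying behaviour, assuming
bucket.Join cleaned keys in the same way path.Join does:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join cleans the result, collapsing the doubled slash, so the
	// key that reaches the backend is not the key the user asked for.
	fmt.Println(path.Join("bucket", "dir//file.txt")) // bucket/dir/file.txt

	// Joining without cleaning preserves a valid object key containing //.
	fmt.Println("bucket" + "/" + "dir//file.txt") // bucket/dir//file.txt
}
```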
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different ID length.
This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
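A hedged sketch of the retry logic: bloberror.HasCode and the
InvalidBlobOrBlock code come from the Azure SDK's azblob module, while
clearUncommittedBlocks and upload are hypothetical stand-ins for rclone's
internals.

```go
package azureblob

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror"
)

// uploadWithBlockRecovery runs upload once and, if the service reports
// InvalidBlobOrBlock, clears the stale uncommitted blocks and retries.
func uploadWithBlockRecovery(ctx context.Context,
	upload func(context.Context) error,
	clearUncommittedBlocks func(context.Context) error) error {
	err := upload(ctx)
	if err == nil || !bloberror.HasCode(err, bloberror.InvalidBlobOrBlock) {
		return err
	}
	// Uncommitted blocks from an earlier failed attempt (with a different
	// block ID length) can poison the blob - clear them and retry once.
	if clearErr := clearUncommittedBlocks(ctx); clearErr != nil {
		return clearErr
	}
	return upload(ctx)
}
```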
This implements multipart server side copy to improve copying from one
azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with --azureblob-upload-concurrency 500).
- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using the put block list API (sketched below)
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`
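A simplified sketch of the put block list approach, using hypothetical
helpers (stageBlockFromURL, commitBlockList) in place of the real Azure block
blob API calls; chunk sizing, concurrency control and the real block ID
scheme are omitted.

```go
package azureblob

import (
	"context"
	"encoding/base64"
	"fmt"
)

// multipartServerSideCopy copies the source blob chunk by chunk on the
// server side, then commits the block list to assemble the destination.
func multipartServerSideCopy(ctx context.Context, srcURL string, size, chunkSize int64,
	stageBlockFromURL func(ctx context.Context, blockID, srcURL string, offset, count int64) error,
	commitBlockList func(ctx context.Context, blockIDs []string) error) error {
	var blockIDs []string
	for offset, n := int64(0), 0; offset < size; offset, n = offset+chunkSize, n+1 {
		count := chunkSize
		if size-offset < count {
			count = size - offset
		}
		// Block IDs must be base64 encoded and the same length for every
		// block in the blob (ID generation is simplified here).
		blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%08d", n)))
		// Each block is copied server side directly from the source blob.
		if err := stageBlockFromURL(ctx, blockID, srcURL, offset, count); err != nil {
			return err
		}
		blockIDs = append(blockIDs, blockID)
	}
	return commitBlockList(ctx, blockIDs)
}
```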
Fixes #8249
This speeds up server side copies for small files which need to check
the copy status, by using an exponential ramp up of the time between
checks of the copy status endpoint.
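A minimal sketch of the polling strategy with assumed names and delays
(waitForCopy, the 100ms starting interval and the 10s cap are illustrative):

```go
package azureblob

import (
	"context"
	"time"
)

// waitForCopy polls copyDone, starting quickly and doubling the wait each
// time up to a cap, so small copies finish fast without hammering the
// status endpoint for large ones.
func waitForCopy(ctx context.Context, copyDone func(context.Context) (bool, error)) error {
	wait := 100 * time.Millisecond
	const maxWait = 10 * time.Second
	for {
		done, err := copyDone(ctx)
		if err != nil || done {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
		}
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}
```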
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:
BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.
This change removes the uncommitted blocks if a multipart upload is
aborted or fails.
If there was an existing destination file, it takes care not to
overwrite it by recommitting already committed blocks.
This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.
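A hedged sketch of the kind of block ID scheme described: a random
per-upload prefix plus the block number, base64 encoded so every ID in an
upload has the same length while no two uploads or blocks share an ID. The
exact layout rclone uses may differ.

```go
package azureblob

import (
	"crypto/rand"
	"encoding/base64"
	"encoding/binary"
)

type blockIDCreator struct {
	prefix [8]byte // random per upload
}

func newBlockIDCreator() (*blockIDCreator, error) {
	c := &blockIDCreator{}
	if _, err := rand.Read(c.prefix[:]); err != nil {
		return nil, err
	}
	return c, nil
}

// createBlockID makes the ID for block n of this upload: 8 random bytes
// identifying the upload followed by the big-endian block number, base64
// encoded to a constant 24-character string.
func (c *blockIDCreator) createBlockID(n uint64) string {
	var raw [16]byte
	copy(raw[:8], c.prefix[:])
	binary.BigEndian.PutUint64(raw[8:], n)
	return base64.StdEncoding.EncodeToString(raw[:])
}
```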
Fixes #5583