Use a shared API request limiter across recursive upload and
download traversal so folder detail fetches, file listings,
folder auth, and transfers can run concurrently under one
budget.
Refactor the traversal loops into task-driven pipelines while
preserving skip-if-exists, excludes, cleanup, and current
output behavior.
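The shared-budget idea can be sketched with a counting semaphore. The real code presumably uses an async primitive such as tokio::sync::Semaphore; this std-only sketch (all names illustrative) just shows one budget gating every request kind:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal counting semaphore: one budget shared by every request
/// kind (detail fetch, listing, auth, transfer).
struct RequestLimiter {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl RequestLimiter {
    fn new(budget: usize) -> Arc<Self> {
        Arc::new(Self { permits: Mutex::new(budget), cv: Condvar::new() })
    }

    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap(); // block until a permit returns
        }
        *p -= 1;
    }

    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let limiter = RequestLimiter::new(4);
    let gauge = Arc::new(Mutex::new((0usize, 0usize))); // (in_flight, peak)

    let handles: Vec<_> = (0..16)
        .map(|_| {
            let (l, g) = (Arc::clone(&limiter), Arc::clone(&gauge));
            thread::spawn(move || {
                l.acquire(); // blocks once the budget is exhausted
                {
                    let mut s = g.lock().unwrap();
                    s.0 += 1;
                    s.1 = s.1.max(s.0);
                }
                thread::sleep(Duration::from_millis(5)); // stand-in API call
                g.lock().unwrap().0 -= 1;
                l.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert!(gauge.lock().unwrap().1 <= 4); // peak concurrency never exceeds budget
}
```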
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Cache parsed auth state per remote and validate it with on-disk
file metadata so repeated authenticated API calls can skip
redundant open/read/JSON parse work within one process.

Centralize cache load, persist, and removal helpers in the cache
module, reuse them from login, logout, and whoami, and update
the refresh path to persist structured cache data directly.

Add targeted cache tests for memory reuse, invalidation after
external writes, persist updates, and cache removal.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Move token refresh checks into the shared Rust connection/API path so long-running authenticated operations stop reusing stale access tokens. This covers recursive download and upload traversal, recursive ls via the shared APIs, and direct authenticated commands such as cp, mv, rm, and chacl.
Also surface HTTP failures earlier in the affected API methods instead of failing later during response parsing.
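The per-request check can be as simple as comparing the token's expiry against the clock with a safety margin. A hedged sketch (function name, margin, and timestamp convention are assumptions, not the real API):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Decide whether the cached access token should be refreshed before
/// the next request. `expires_at` is a Unix timestamp in seconds;
/// `margin_secs` refreshes slightly early so requests already in
/// flight don't race the expiry instant.
fn token_needs_refresh(expires_at: u64, margin_secs: u64) -> bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(u64::MAX); // clock before epoch: force a refresh
    now.saturating_add(margin_secs) >= expires_at
}
```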
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Phase 5: Replace all Box<dyn Error> return types with anyhow::Result<T>
throughout the codebase. Replace string-based Err("msg".into()) and
format!(...).into() patterns with the bail!() and anyhow!() macros. Fix
dirs::home_dir().unwrap() in settings.rs to use a fallback path instead
of panicking when HOME is unset. Remove stray use std::error::Error
imports that are no longer needed.
Phase 6: Add From<&User> for CacheUser in models/user.rs and
From<&Laboratory>/From<&Laboratories> for CacheLaboratory/CacheLabsWrapper
in models/laboratory.rs. Simplify commands/login.rs to use .into()
conversions, removing the redundant to_cache_user() and to_cache_labs()
helper functions.
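A sketch of the Phase 6 conversion pattern. The field names here are invented; only the From<&T> shape and the .into() call site mirror the commit:

```rust
// Domain model as returned by the API (fields hypothetical).
struct User {
    id: u64,
    email: String,
}

// Slimmed-down shape persisted in the auth cache (fields hypothetical).
struct CacheUser {
    id: u64,
    email: String,
}

impl From<&User> for CacheUser {
    fn from(u: &User) -> Self {
        Self { id: u.id, email: u.email.clone() }
    }
}

fn main() {
    let u = User { id: 7, email: "a@b.example".into() };
    // login.rs can now write (&u).into() instead of a to_cache_user() helper:
    let cached: CacheUser = (&u).into();
    assert_eq!(cached.id, 7);
    assert_eq!(cached.email, "a@b.example");
}
```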
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The access token obtained at command startup could expire during a long
transfer session (e.g. uploading thousands of files or large files),
causing subsequent requests to fail with 401 Unauthorized.
Root cause: load_cache_with_token_refresh was called only once, and the
resulting MDRSConnection — including its now-stale token — was shared
across all parallel tasks via Arc. There was no mechanism to update the
token in the shared instance after creation.
Fix:
- Add MDRSConnection::with_token(&self, token) that creates a new
connection struct reusing the caller's HTTP client (cheap Arc clone,
shares the connection pool) but carrying a fresh Bearer token.
- In upload.rs and download.rs, call load_cache_with_token_refresh
inside each tokio::spawn task body, then create a task-local
connection via conn.with_token(fresh_token) before transferring the
file. The shared reqwest::Client (connection pool) is preserved.
cp.rs is unchanged: it makes only short server-side API calls with no
parallel tasks, so the risk of token expiry mid-operation is negligible.
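The with_token() scheme above can be sketched as follows. The HttpClient type is a stand-in for reqwest::Client (which shares its pool cheaply via Arc); only the sharing behavior is the point:

```rust
use std::sync::Arc;

// Placeholder for reqwest::Client; only its Arc-based sharing matters here.
struct HttpClient;

struct MDRSConnection {
    client: Arc<HttpClient>, // shared connection pool
    token: String,           // Bearer token for this task
}

impl MDRSConnection {
    /// New connection struct reusing the caller's HTTP client but
    /// carrying a fresh token (per the commit; body is illustrative).
    fn with_token(&self, token: &str) -> Self {
        Self {
            client: Arc::clone(&self.client), // cheap: refcount bump only
            token: token.to_string(),
        }
    }
}

fn main() {
    let conn = MDRSConnection {
        client: Arc::new(HttpClient),
        token: "stale".to_string(),
    };
    let fresh = conn.with_token("fresh");
    assert!(Arc::ptr_eq(&conn.client, &fresh.client)); // same pool
    assert_ne!(conn.token, fresh.token);               // new credentials
}
```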
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The previous implementation had two correctness issues:
1. flock on .tmp was ineffective for cross-process exclusion.
After fs::rename(), the .tmp inode disappears. A second process
opening .tmp gets a brand-new inode, so both processes hold flocks
on different inodes simultaneously — no mutual exclusion occurs.
2. The critical section was too narrow. The in-process tokio::Mutex
only serializes tasks within the same process. Two separate mdrs
processes could both read the cache, both decide a refresh was
needed, and both call the token-refresh endpoint before either had
written the new token back — risking double-refresh and potential
failures on servers that use refresh-token rotation.
Fix: introduce a dedicated `cache/{remote}.lock` file as the cross-
process advisory lock target. The lock file is never renamed, so its
inode remains stable for the entire critical section. The flock now
wraps the complete read-check-refresh-write cycle in
load_cache_with_token_refresh(), and the redundant flock on .tmp in
refresh_and_persist() is removed.
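The file layout that makes the lock sound can be sketched like this (paths and names are hypothetical; the flock call itself is elided to a comment since the exact locking API is platform code):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// The {remote}.lock file is created once and never renamed, so an
/// advisory flock taken on its handle stays attached to the same
/// inode for the whole read-check-refresh-write cycle. The cache
/// file itself is still replaced atomically via tmp + rename.
fn persist_cache(dir: &Path, remote: &str, data: &str) -> io::Result<()> {
    let lock_path = dir.join(format!("{remote}.lock"));
    let cache_path = dir.join(format!("{remote}.json"));
    let tmp_path = dir.join(format!("{remote}.json.tmp"));

    // In the real code an exclusive flock is taken on this handle and
    // held across the entire critical section.
    let _lock_file = fs::OpenOptions::new()
        .create(true)
        .write(true)
        .open(&lock_path)?;

    fs::write(&tmp_path, data)?;         // write to tmp first
    fs::rename(&tmp_path, &cache_path)?; // atomic replace; .lock untouched
    Ok(())
}
```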
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- api/files.rs: NFC-normalize filename before sending to server in
upload_file(). On macOS, local filenames may be NFD-encoded, which
would cause the server to store them as NFD instead of NFC.
- commands/download.rs: replace direct to_lowercase() subfolder
comparison with find_subfolder_by_name() helper, which already
applies NFC normalization on both sides.
- commands/cp.rs, mv.rs: apply nfc() to s_basename (source path
component from user input) for consistency with d_basename, so
the no-op identity check and find_*() calls use normalized strings.
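Why the normalization matters: macOS file systems may report names in NFD, where an accented letter is a base character plus a combining mark, while the server stores NFC, where it is a single precomposed code point. The two spellings render identically but are different byte strings, so comparisons and uploads must normalize first (the real code routes this through an nfc() helper):

```rust
fn main() {
    let nfd = "Re\u{0301}sume\u{0301}.txt"; // decomposed, as macOS may report it
    let nfc = "R\u{00e9}sum\u{00e9}.txt";   // precomposed, as the server stores it
    assert_ne!(nfd, nfc);                   // byte-wise different strings...
    assert_eq!(nfd.chars().count(), 12);    // ...with different lengths:
    assert_eq!(nfc.chars().count(), 10);    // combining marks are extra chars
}
```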
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
On Windows, std::fs::canonicalize returns an extended-length path
with the \\?\ prefix, which does not support forward slashes.
Joining paths with format!("{}/{}", ...) therefore caused os error
123 (ERROR_INVALID_NAME).
Replace all string-based path concatenation with PathBuf::join so
that the OS-appropriate separator is used on every platform.
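The fix in a nutshell (function name invented for illustration):

```rust
use std::path::{Path, PathBuf};

// Join with PathBuf::join instead of "/" string concatenation, so
// extended-length Windows paths such as \\?\C:\data keep working:
// the separator choice is the OS's, not ours.
fn child_path(base: &Path, name: &str) -> PathBuf {
    base.join(name)
}

fn main() {
    let p = child_path(Path::new("data"), "file.txt");
    // "data/file.txt" on Unix, "data\file.txt" on Windows.
    assert!(p.ends_with("file.txt"));
}
```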
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Fetch latest release from Gitea API using existing reqwest client
- Match release asset by BUILD_TARGET triple (supports .tar.gz and .zip)
- Compare versions; show confirmation prompt (skippable with -y/--yes)
- Download archive, extract binary, atomically replace self via self-replace
- Support private repositories via GITEA_TOKEN environment variable
- Expose BUILD_TARGET in build.rs for compile-time target triple detection
- Add .gitea/workflows/release.yml for multi-platform release builds on tag push
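The asset-matching step can be sketched as below. The asset names and the exact matching rule are assumptions; the real code reads the triple from the BUILD_TARGET value exposed by build.rs:

```rust
// An asset matches when its name contains the build target triple
// and uses one of the two supported archive formats.
fn matches_target(asset_name: &str, target: &str) -> bool {
    asset_name.contains(target)
        && (asset_name.ends_with(".tar.gz") || asset_name.ends_with(".zip"))
}

fn main() {
    let target = "x86_64-unknown-linux-gnu"; // would come from env!("BUILD_TARGET")
    assert!(matches_target("mdrs-x86_64-unknown-linux-gnu.tar.gz", target));
    assert!(!matches_target("mdrs-x86_64-pc-windows-msvc.zip", target));
    assert!(!matches_target("mdrs-x86_64-unknown-linux-gnu.sha256", target));
}
```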
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>