# rclone — rsync for Cloud Storage Across 70+ Providers

> rclone is a command-line tool to sync, copy, and mount files between any two of 70+ cloud storage providers (S3, GCS, Azure, Dropbox, Google Drive, OneDrive, etc.). The universal file-moving tool every sysadmin should know.

## Quick Use

```bash
# Install
curl https://rclone.org/install.sh | sudo bash

# Configure a remote (interactive)
rclone config
# Follow prompts: name it "gdrive", pick Google Drive, auth via browser

# Common commands
rclone ls gdrive:path/to/folder
rclone copy ~/photos gdrive:backups/photos
rclone sync ~/photos gdrive:backups/photos --dry-run
rclone mount gdrive:backups /mnt/gdrive --daemon
rclone check ~/photos gdrive:backups/photos
```

## Introduction

rclone, originally a one-person project by Nick Craig-Wood, is now the de facto tool for moving bytes between cloud storage services. It speaks 70+ protocols (every major cloud, WebDAV, SFTP, S3-compatible stores, Backblaze B2, Internet Archive, Storj, etc.) and lets you mount any of them as a local filesystem. With over 56,000 GitHub stars, rclone is used by sysadmins, photographers, researchers, and backup tools (it is the engine behind several commercial backup products). If it stores bytes, rclone probably talks to it.

## What rclone Does

rclone exposes a unified CLI across 70+ backends. You configure "remotes" (named credentials for each provider), then use verbs against them: `ls`, `copy`, `sync`, `move`, `check`, `mount`, `serve`. Advanced features include encryption at rest for any backend, bandwidth throttling, filtering, partial uploads, and checksum verification.
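Beyond the interactive `rclone config` wizard, remotes can also be created non-interactively, which is handy for provisioning scripts and CI. A minimal sketch, assuming an S3 remote; the remote name `mys3` and all credentials are placeholders:

```shell
# Create an S3 remote in one shot (key=value pairs; credentials are placeholders)
rclone config create mys3 s3 \
  provider=AWS access_key_id=AKIA... secret_access_key=SECRET region=us-east-1

# Obscure a plaintext password for pasting into rclone.conf (e.g., for a crypt remote)
rclone obscure 'my-secret-password'

# List top-level directories to confirm the remote works
rclone lsd mys3:
```

`rclone config create` writes the remote straight into `rclone.conf`, so the same script can bootstrap identical remotes on every machine.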
## Architecture Overview

```
rclone binary
    |
[rclone.conf] — per-remote credentials (~/.config/rclone/rclone.conf)
    |
[Backend drivers]
    s3, gcs, azure, dropbox, drive, onedrive, webdav, sftp, ftp, http,
    b2, wasabi, storj, mega, mailru, pcloud, yandex, smb, 70+ more
    |
[Core operations]
    copy / sync / move / ls / mount / serve / check / cleanup / dedupe / cryptcheck
    |
[Optional layers]
    crypt    — encrypt on-write, decrypt on-read
    union    — merge multiple backends as one
    cache    — local disk cache for slow backends
    chunker, combine, compress, alias
```

## Self-Hosting & Configuration

```ini
# ~/.config/rclone/rclone.conf
[gdrive]
type = drive
client_id = ...
client_secret = ...
scope = drive
token = {"access_token":"..."}

[b2]
type = b2
account = ...
key = ...

[b2-crypt]
type = crypt
remote = b2:my-bucket/encrypted
password = encrypted-password
password2 = encrypted-salt

[s3-wasabi]
type = s3
provider = Wasabi
access_key_id = ...
secret_access_key = ...
region = us-east-1
endpoint = s3.wasabisys.com
```

```bash
# Sync with filters + bandwidth limit + progress
rclone sync ~/photos b2-crypt:photos \
  --filter "+ *.jpg" --filter "+ *.raw" --filter "- *" \
  --bwlimit 10M --progress --transfers 4 --checkers 8

# Mount as filesystem (runs in background with --daemon)
rclone mount gdrive:backups /mnt/gdrive --vfs-cache-mode writes --daemon

# Serve as WebDAV for apps that speak WebDAV
rclone serve webdav gdrive:Work --addr :8080

# Verify integrity (compare checksums)
rclone check ~/photos b2-crypt:photos

# Migrate between clouds (e.g., Dropbox to Google Drive)
rclone sync dropbox:myfolder gdrive:myfolder --progress
```

## Key Features

- **70+ backends** — every major cloud, plus SFTP/WebDAV/HTTP/SMB/FTP
- **sync + copy + move + check** — rsync-style semantics across clouds
- **Mount as filesystem** — read/write any cloud as /mnt/x
- **Encryption at rest** — transparent crypt wrapper on any backend
- **Bandwidth throttling** — `--bwlimit` (supports a daily timetable), `--bwlimit-file` for per-file caps
- **Filters** — rsync-style include/exclude patterns
- **Resumable transfers** — chunked uploads with retry + integrity check
- **Cross-cloud migration** — direct server-to-server in many cases

## Comparison with Similar Tools

| Feature | rclone | rsync | aws s3 sync | Cyberduck | rsync.net (+ rsync) |
|---|---|---|---|---|---|
| Backend count | 70+ | SSH-based only | AWS/S3-compat only | ~20 | SFTP only |
| Scripting-friendly | Yes | Yes | Yes | GUI | Yes |
| Encryption | Built-in | Via SSH/LUKS | Via KMS | Via backend | Via SSH |
| Mount as FS | Yes | Via sshfs | Via s3fs | GUI mount | Via sshfs |
| Cross-cloud | Yes | No | No | Manual | No |
| Best for | Multi-cloud sync | Unix-to-Unix | AWS-only | Desktop GUI | SFTP backup |

## FAQ

**Q: Is rclone safe for my data?**
A: Yes — widely audited and used as the engine inside commercial backup products. For extra safety, wrap sensitive data with the crypt backend so encryption happens on your side before upload.

**Q: How fast is it?**
A: Very fast. With `--transfers 16 --checkers 32` and a good network, it saturates most links. Per-backend limits (e.g., Google Drive API quotas) often matter more than rclone itself.

**Q: rclone mount vs native clients?**
A: Native clients (Google Drive File Stream, the Dropbox daemon) sync files locally; rclone mount streams on demand. For backups plus occasional access, mount is lighter. For constant editing of cloud files, native clients are smoother.

**Q: Does it work offline?**
A: Mounting with `--vfs-cache-mode full` caches accessed files locally and flushes changes when back online. Not a replacement for Dropbox-style offline sync, but serviceable.

## Sources

- GitHub: https://github.com/rclone/rclone
- Website: https://rclone.org
- Author: Nick Craig-Wood
- License: MIT
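The filter flags shown earlier can also live in a standalone filter file via `--filter-from`, and `--bwlimit` accepts a daily timetable, which together make unattended backups practical. A cron-ready sketch; the remote name `b2-crypt`, all paths, and the schedule are placeholders:

```shell
#!/usr/bin/env sh
# Nightly photo backup sketch — "b2-crypt" and all paths are placeholders.
SRC="$HOME/photos"
DST="b2-crypt:photos"
FILTERS="$HOME/.config/rclone/photo-filters.txt"
# photo-filters.txt uses rclone filter syntax (first matching rule wins):
#   + *.jpg
#   + *.raw
#   - *

rclone sync "$SRC" "$DST" \
  --filter-from "$FILTERS" \
  --bwlimit "08:00,512k 23:00,off" \
  --log-file "$HOME/rclone-backup.log" --log-level INFO
```

A crontab entry such as `0 2 * * * /usr/local/bin/photo-backup.sh` runs it nightly; the `--bwlimit` timetable throttles transfers to 512 KiB/s during working hours and lifts the limit at 23:00.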