Block Storage
Persistent volumes, snapshots, and backups for your instances
Fugoku Block Storage provides persistent, high-performance SSD volumes that can be attached to instances.
Overview
Key features:
- Persistent: Data survives instance deletion
- Expandable: Resize volumes without data loss
- Triple-replicated: 99.999% durability
- High IOPS: NVMe SSD backend
- Snapshots: Point-in-time backups
- Portable: Detach and reattach to different instances
Pricing
- Storage: $0.10/GB/month
- Snapshots: $0.05/GB/month
- IOPS: Included (up to 20,000 IOPS per volume)
Examples:
100 GB volume: $10/month
500 GB volume: $50/month
1 TB volume: $100/month
10 TB volume: $1,000/month
Creating Volumes
Via Console
- Navigate to Storage → Volumes
- Click Create Volume
- Configure:
- Name: my-data
- Size: 100 GB (min 10 GB, max 10 TB)
- Region: lagos-1 (must match instance)
- Source: Blank or clone from snapshot
- Click Create
Volume ready in 10-30 seconds.
Via CLI
# Create blank volume
fugoku volumes create \
--name my-data \
--size 100 \
--region lagos-1
# Create from snapshot
fugoku volumes create \
--name restored-data \
--size 100 \
--region lagos-1 \
--snapshot snap-abc123
Via API
curl -X POST https://api.fugoku.com/v1/volumes \
-H "Authorization: Bearer $FUGOKU_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-data",
"size_gb": 100,
"region": "lagos-1"
}'
Attaching Volumes
Volumes must be attached to an instance before use.
Via Console
- Volume detail page → Attach to Instance
- Select instance
- Volume appears as a block device (e.g., /dev/vdb)
Via CLI
fugoku volumes attach my-data --instance web-1
Via API
curl -X POST https://api.fugoku.com/v1/volumes/my-data/attach \
-H "Authorization: Bearer $FUGOKU_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"instance_id": "web-1"}'
Formatting and Mounting
After attaching, format and mount the volume.
First-Time Setup
# SSH into instance
fugoku ssh web-1
# List block devices
lsblk
# NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# vda    252:0    0  80G  0 disk
# ├─vda1 252:1    0  79G  0 part /
# └─vda2 252:2    0   1G  0 part [SWAP]
# vdb    252:16   0 100G  0 disk
# Format the volume (ext4)
sudo mkfs.ext4 /dev/vdb
# Create mount point
sudo mkdir /mnt/data
# Mount volume
sudo mount /dev/vdb /mnt/data
# Verify
df -h /mnt/data
# Filesystem Size Used Avail Use% Mounted on
# /dev/vdb 98G 24K 93G 1% /mnt/data
Auto-Mount on Boot
Add to /etc/fstab:
# Get UUID (recommended over /dev/vdb)
sudo blkid /dev/vdb
# /dev/vdb: UUID="abc123-def456..." TYPE="ext4"
# Add to fstab
echo 'UUID=abc123-def456... /mnt/data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
# Test fstab
sudo mount -a
# Reboot and verify
sudo reboot
# (wait 30 seconds)
fugoku ssh web-1
df -h /mnt/data
Note: Use the nofail option so boot succeeds even if the volume isn't attached.
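The blkid-and-fstab steps above can be wrapped in a small helper that builds the entry and appends it only once. This is an illustrative sketch, not part of Fugoku's tooling: the function names and the FSTAB override (useful for dry runs) are our own.

```shell
# FSTAB can be overridden for testing; the real file needs root to modify.
FSTAB="${FSTAB:-/etc/fstab}"

make_fstab_entry() {
  # $1 = filesystem UUID (from blkid), $2 = mount point, $3 = fs type (default ext4)
  # nofail keeps boot working even if the volume is detached.
  printf 'UUID=%s %s %s defaults,nofail 0 2' "$1" "$2" "${3:-ext4}"
}

add_fstab_entry() {
  # Append the entry only if an identical line is not already present (idempotent).
  entry="$(make_fstab_entry "$@")"
  grep -qxF "$entry" "$FSTAB" 2>/dev/null || printf '%s\n' "$entry" >> "$FSTAB"
}

# Example: add_fstab_entry "abc123-def456" /mnt/data
```

Run sudo mount -a afterwards to confirm the entry mounts cleanly before rebooting.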
Alternative Filesystems
XFS (good for large files):
sudo apt install xfsprogs
sudo mkfs.xfs /dev/vdb
sudo mount /dev/vdb /mnt/data
Btrfs (advanced features):
sudo apt install btrfs-progs
sudo mkfs.btrfs /dev/vdb
sudo mount /dev/vdb /mnt/data
Detaching Volumes
Safely detach volumes for migration or maintenance.
Steps
# SSH into instance
fugoku ssh web-1
# Unmount volume
sudo umount /mnt/data
# Remove from fstab (optional, for permanent detach)
sudo nano /etc/fstab
# Comment out or delete the line for /dev/vdb
Via Console
Volume detail → Detach
Via CLI
fugoku volumes detach my-data
Note: Volume must be unmounted before detaching, or data corruption may occur.
Resizing Volumes
Expand volumes without data loss (shrinking not supported).
Resize Process
# Via CLI (can be done while attached)
fugoku volumes resize my-data --size 200
# Via Console
# Volume detail → Resize → Enter new size → Confirm
Expand Filesystem
After resizing, expand the filesystem:
# SSH into instance
fugoku ssh web-1
# For ext4
sudo resize2fs /dev/vdb
# For XFS
sudo xfs_growfs /mnt/data
# Verify
df -h /mnt/data
# Should show new size
Snapshots
Point-in-time backups of volumes.
Manual Snapshots
Via Console:
- Volume detail → Snapshots tab
- Click Create Snapshot
- Name snapshot
- Wait for completion (time depends on volume size and data changed)
Via CLI:
fugoku snapshots create my-data --name before-upgrade
Via API:
curl -X POST https://api.fugoku.com/v1/volumes/my-data/snapshots \
-H "Authorization: Bearer $FUGOKU_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "before-upgrade"}'
Automatic Snapshots
Schedule recurring snapshots.
Via Console: Volume detail → Snapshots → Enable Automatic
- Schedule: Hourly, Daily, Weekly
- Retention: 1-365 days
- Time: Preferred hours (for daily/weekly)
Via CLI:
fugoku snapshots enable \
--volume my-data \
--schedule daily \
--time 02:00 \
--retention 7
Example schedules:
- Hourly, keep 24 hours
- Daily at 2:00 AM, keep 7 days
- Weekly on Sunday at 3:00 AM, keep 4 weeks
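The arithmetic behind these schedules is simple: snapshots kept at steady state equals the retention window divided by the snapshot interval. A quick sketch (the helper function is ours, for illustration only):

```shell
# snapshots_retained INTERVAL_HOURS RETENTION_DAYS
# Prints how many snapshots a schedule keeps at steady state.
snapshots_retained() {
  awk -v ih="$1" -v rd="$2" 'BEGIN { print int(rd * 24 / ih) }'
}

snapshots_retained 24 7     # daily, keep 7 days   -> 7
snapshots_retained 1 1      # hourly, keep 24 hours -> 24
snapshots_retained 168 28   # weekly, keep 4 weeks  -> 4
```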
Restoring from Snapshots
Create new volume from snapshot:
fugoku volumes create \
--name restored-data \
--size 100 \
--region lagos-1 \
--snapshot snap-abc123
Overwrite existing volume:
- Detach volume from instance
- Volume detail → Restore tab
- Select snapshot
- Confirm (destroys current data)
Clone for testing:
Create copy of production data for staging:
# Snapshot production volume
fugoku snapshots create prod-db --name prod-snapshot
# Create staging volume from snapshot
fugoku volumes create \
--name staging-db \
--size 100 \
--region lagos-1 \
--snapshot prod-snapshot
# Attach to staging instance
fugoku volumes attach staging-db --instance staging-db-1
Performance
IOPS & Throughput
All volumes use NVMe SSDs with high performance:
- Random Read: Up to 20,000 IOPS
- Random Write: Up to 10,000 IOPS
- Sequential Read: Up to 500 MB/s
- Sequential Write: Up to 250 MB/s
Performance scales with volume size up to limits above.
Benchmarking
Test IOPS (random reads):
sudo apt install fio
# Random read test
sudo fio --name=randread \
--ioengine=libaio \
--iodepth=32 \
--rw=randread \
--bs=4k \
--direct=1 \
--size=1G \
--numjobs=4 \
--runtime=60 \
--group_reporting \
--filename=/mnt/data/testfile
Test throughput (sequential writes):
sudo fio --name=seqwrite \
--ioengine=libaio \
--iodepth=32 \
--rw=write \
--bs=128k \
--direct=1 \
--size=4G \
--numjobs=1 \
--runtime=60 \
--group_reporting \
--filename=/mnt/data/testfile
Expected results:
Random read: 15,000-20,000 IOPS
Sequential write: 200-250 MB/s
Optimization Tips
Use noatime mount option: Reduces write load by not updating access times.
# In /etc/fstab
UUID=abc123... /mnt/data ext4 defaults,noatime,nofail 0 2
Tune I/O scheduler:
For database workloads, use the mq-deadline or none scheduler (deadline or noop on older kernels).
# Check available schedulers
cat /sys/block/vdb/queue/scheduler
# Set scheduler
echo mq-deadline | sudo tee /sys/block/vdb/queue/scheduler
Increase readahead: For sequential workloads.
sudo blockdev --setra 2048 /dev/vdb
Use Cases
Database Storage
Separate data from OS for easier management:
# Create volume for PostgreSQL
fugoku volumes create \
--name postgres-data \
--size 200 \
--region lagos-1
# Attach to database instance
fugoku volumes attach postgres-data --instance db-1
# Format and mount
sudo mkfs.ext4 /dev/vdb
sudo mkdir /var/lib/postgresql
sudo mount /dev/vdb /var/lib/postgresql
echo '/dev/vdb /var/lib/postgresql ext4 defaults,noatime,nofail 0 2' | sudo tee -a /etc/fstab
# Install PostgreSQL
sudo apt install postgresql
# Data automatically stored on volume
File Storage
Shared files across multiple applications:
# Volume for uploads/media
fugoku volumes create \
--name app-uploads \
--size 500 \
--region lagos-1
# Attach to web server
fugoku volumes attach app-uploads --instance web-1
# Mount
sudo mkfs.ext4 /dev/vdb
sudo mkdir /var/www/uploads
sudo mount /dev/vdb /var/www/uploads
sudo chown www-data:www-data /var/www/uploads
Backup Storage
Dedicated volume for backups:
# Large volume for backups
fugoku volumes create \
--name backups \
--size 1000 \
--region lagos-1
# Attach to backup server
fugoku volumes attach backups --instance backup-1
# Mount
sudo mkfs.ext4 /dev/vdb
sudo mkdir /mnt/backups
sudo mount /dev/vdb /mnt/backups
# Run backup script
#!/bin/bash
BACKUP_DIR="/mnt/backups/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
pg_dump production_db > "$BACKUP_DIR/database.sql"
tar czf "$BACKUP_DIR/files.tar.gz" /var/www
Development Environments
Preserve dev work across instance rebuilds:
# Volume for project code
fugoku volumes create \
--name dev-projects \
--size 100 \
--region lagos-1
# Attach to dev instance
fugoku volumes attach dev-projects --instance dev-1
# Mount at home directory
sudo mkfs.ext4 /dev/vdb
sudo mkdir /home/ubuntu/projects
sudo mount /dev/vdb /home/ubuntu/projects
sudo chown ubuntu:ubuntu /home/ubuntu/projects
Data Migration
Between Instances
Move volume from one instance to another:
# Detach from source
fugoku ssh web-1
sudo umount /mnt/data
exit
fugoku volumes detach my-data
# Attach to destination
fugoku volumes attach my-data --instance web-2
# Mount on destination
fugoku ssh web-2
sudo mkdir /mnt/data
sudo mount /dev/vdb /mnt/data
Between Regions
Use snapshots to migrate data:
# In source region (lagos-1)
fugoku snapshots create my-data --name for-migration
# Copy snapshot to target region (coming Q3 2026)
fugoku snapshots copy snap-abc123 --to-region london-1
# In target region, create volume from snapshot
fugoku volumes create \
--name my-data-london \
--size 100 \
--region london-1 \
--snapshot snap-abc123
Current workaround (until snapshot copy available):
# Rsync over network
fugoku ssh web-lagos
rsync -avz /mnt/data/ ubuntu@web-london:/mnt/data/
Monitoring
Via Console
Volume detail → Metrics tab:
- Read IOPS
- Write IOPS
- Read throughput (MB/s)
- Write throughput (MB/s)
- Latency
Via CLI
fugoku volumes stats my-data
# Output:
# Read: 1,200 IOPS, 45 MB/s
# Write: 800 IOPS, 30 MB/s
# Latency: 0.5 ms avg
Disk Space Monitoring
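Beyond volume-level metrics, watch free space inside the instance itself. A cron-friendly sketch that exits non-zero once a mount passes a usage threshold (the function name and defaults are ours, not a Fugoku tool):

```shell
# disk_alert MOUNT [THRESHOLD]: warn when usage on MOUNT reaches THRESHOLD percent.
disk_alert() {
  mount_point="$1"
  threshold="${2:-80}"
  # df --output=pcent prints e.g. " 42%"; strip everything but the digits
  used=$(df --output=pcent "$mount_point" | tail -n 1 | tr -dc '0-9')
  if [ "$used" -ge "$threshold" ]; then
    echo "ALERT: $mount_point at ${used}% (threshold ${threshold}%)" >&2
    return 1
  fi
  return 0
}

# Hypothetical cron entry, assuming the function is saved as an executable script:
# 0 * * * * /usr/local/bin/disk_alert /mnt/data 80
```

Cron mails the alert output if mail delivery is configured; otherwise pipe it to your notification tool of choice.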
On instance:
# Check space
df -h /mnt/data
# Set up alert when 80% full
sudo apt install nagios-plugins-basic
check_disk -w 20% -c 10% -p /mnt/data
Security
Encryption
All volumes are encrypted at rest (AES-256).
- Default: Fugoku-managed keys
- Custom keys (coming Q4 2026): Bring your own encryption keys
Access Control
Volumes can only be attached to instances in your account.
Team permissions:
- Admin: Full control
- Developer: Attach/detach, create snapshots
- Read-Only: View only
Backups
Enable automatic snapshots for critical volumes:
fugoku snapshots enable \
--volume my-data \
--schedule daily \
--retention 30
Follow the 3-2-1 rule:
- 3 copies of data
- 2 different storage types
- 1 off-site backup
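File-level backups on a volume (like the dated directories in the backup-storage example above) also need pruning so they don't fill the disk. A minimal sketch; the function name and the BACKUP_ROOT override are ours:

```shell
# BACKUP_ROOT can be overridden for testing; default matches the backup-storage example.
BACKUP_ROOT="${BACKUP_ROOT:-/mnt/backups}"

prune_backups() {
  keep_days="${1:-7}"
  # -mtime +N matches directories last modified strictly more than N days ago
  find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +"$keep_days" \
    -exec rm -rf {} +
}

# Example: keep one week of dated backups
# prune_backups 7
```

Pair this with automatic snapshots: snapshots protect the whole volume, while pruning keeps the file-level copies within your retention policy.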
Troubleshooting
Volume won't attach
- Check region: Volume and instance must be in same region
- Check status: Volume must be "available" (not already attached)
- Retry: Detach and reattach
Can't mount volume
# Check if volume is attached
lsblk
# Check filesystem
sudo file -s /dev/vdb
# Should show filesystem type
# Force check (if ext4)
sudo e2fsck -f /dev/vdb
Performance issues
- Check IOPS limit: May be hitting 20k IOPS cap
- I/O wait: Run top and check %wa; high I/O wait indicates a bottleneck
- Test with fio: Benchmark to isolate the issue
top
# %wa > 10% indicates I/O bottleneck
# Check I/O stats
iostat -x 1
Snapshot failed
- Volume may be very active - pause writes temporarily
- Check disk space on volume
- Contact support if persists: support@fugoku.com
Best Practices
- Use volumes for persistent data - root disks are lost when an instance is deleted
- Enable automatic snapshots - daily backups for critical data
- Test restores - verify snapshots work before disaster strikes
- Monitor disk space - set up alerts at 80% full
- Use appropriate filesystem - ext4 for general use, XFS for large files
- Label volumes clearly - use descriptive names (prod-db, staging-uploads)
- Document mount points - keep inventory of what's mounted where
Limits
Per account:
- Volumes: 100 per region
- Total storage: 50 TB per region
- Snapshots: 500 per account
Need higher limits? Contact support@fugoku.com
Pricing Calculator
Volume: 100 GB × $0.10 = $10.00/month
Snapshot: 50 GB × $0.05 = $2.50/month
────────────────────────────────────
Total: $12.50/month
Snapshots use incremental storage (only changed blocks), so 10 snapshots of a 100 GB volume cost far less than 1 TB of snapshot storage.
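The arithmetic above is easy to script against the listed rates ($0.10/GB/month for volumes, $0.05/GB/month for snapshot storage). A sketch; the function name is ours:

```shell
# monthly_cost VOLUME_GB [SNAPSHOT_GB]: estimated monthly bill at current list prices.
monthly_cost() {
  awk -v vol="$1" -v snap="${2:-0}" \
    'BEGIN { printf "%.2f\n", vol * 0.10 + snap * 0.05 }'
}

monthly_cost 100 50   # the example above: prints 12.50
monthly_cost 1000     # 1 TB volume, no snapshots: prints 100.00
```

Remember that snapshot storage is billed on changed blocks, so actual snapshot GB is usually well below the volume size.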
Next Steps:
- Learn about Networking for private networks
- Read about Backups best practices
- Explore the API for automation
- Browse GPU Storage for ML datasets