
Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.

Here's what I did:

1. From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)".

2. I tried mounting the drive on the Ubuntu box:

       sudo mount /dev/sdf /vol

   and it told me:

       mount: /dev/sdf is not a block device

3. So I checked with fdisk and tried to mount from the new location:

       sudo fdisk -l
       sudo mount -v -t ext4 /dev/xvdf /vol

   and it told me it wasn't the right file system:

       mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
              missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

4. "dmesg | tail" gave the following error:

       EXT4-fs (sda1): VFS: Can't find ext4 filesystem
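A quick way to confirm whether the device actually holds a filesystem (an illustrative check, assuming the volume shows up as /dev/xvdf as fdisk reported; a blank, unformatted volume is reported as plain "data"):

    # Inspect the raw block device instead of trying to mount it.
    sudo file -s /dev/xvdf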

I also tried putting the configuration into the /etc/fstab file as instructed at http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same wrong-file-system error.

Questions:

Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?

Q2: What else do I need to do to get the EBS volume loaded? I thought it would all be taken care of for me when I attached it to an instance.

This may belong on a sysadmin-oriented Stack Exchange site. Nevertheless, it's exactly what I needed to find. Thank you for asking this!

Eric Hammond

Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:

1. Create the EBS volume.

2. Attach the EBS volume to /dev/sdf (EC2's external name for this particular device number).

3. Format a file system on /dev/xvdf (Ubuntu's internal name for this particular device number):

       sudo mkfs.ext4 /dev/xvdf

   Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.

4. Mount the file system (with an update to /etc/fstab so it stays mounted on reboot):

       sudo mkdir -m 000 /vol
       echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
       sudo mount /vol
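If it helps, here is a minimal sketch that strings those steps together and guards the destructive mkfs call. The file -s safety check is my own addition, and the sketch assumes the device and mount point names used above (/dev/xvdf and /vol):

    #!/bin/sh
    # Format and mount a NEW EBS volume, refusing to format a device
    # that already appears to contain a filesystem.
    DEVICE=/dev/xvdf       # Ubuntu's internal name for the attached volume
    MOUNT_POINT=/vol

    # "file -s" reports a blank, unformatted block device as plain "data".
    if sudo file -s "$DEVICE" | grep -q ': data$'; then
        sudo mkfs.ext4 "$DEVICE"
    else
        echo "Refusing to format: $DEVICE already seems to hold a filesystem." >&2
        exit 1
    fi

    sudo mkdir -m 000 "$MOUNT_POINT"
    echo "$DEVICE $MOUNT_POINT auto noatime 0 0" | sudo tee -a /etc/fstab
    sudo mount "$MOUNT_POINT"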


Just to be explicit, /dev/xvdf doesn't exist prior to your attaching the volume as /dev/sdf.
Thanks a lot for this! I was totally confused by the /mnt directory and wrongly assumed that my extra EBS volume (/dev/xvdf) that I told AWS to attach at instance creation was already mounted. Also, the mapping between what AWS shows (/dev/sdf) and what exists on Ubuntu (/dev/xvdf) tripped me up.
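In case the naming trips anyone else up, one way to see the actual device names from inside the instance (illustrative; lsblk ships with util-linux on stock Ubuntu):

    # List block devices; the volume attached as /dev/sdf in the
    # console typically appears here as xvdf.
    lsblk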
@scrapcodes: Fortunately, these are definitely the right steps for the original poster's question (new, unformatted EBS volume). They certainly may not be the right steps if you have a completely different situation (EBS volume created from snapshot containing existing filesystem).
Why does step four include the flag -m 000?
@JosephMornin Turning off all bits in the mode is a simple indicator that nobody should be allowed to do anything in this directory until a new file system is mounted here. It's a message that this directory has been created as a mount point. It is not required for functionality, but it sometimes avoids the mistake of creating files when the desired volume is not mounted.
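A tiny illustration of that point (hypothetical session; note that root bypasses the mode bits, so this only protects against non-root writes):

    sudo mkdir -m 000 /vol
    touch /vol/afile    # fails with "Permission denied" while nothing is mounted
    sudo mount /vol     # once the volume is mounted (per the fstab entry above),
                        # the mounted filesystem's own permissions apply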
Ramesh Sinha

Step 1: Create the volume.
Step 2: Attach it to your instance as the root volume.
Step 3: Run:

    sudo resize2fs -p /dev/xvde

Step 4: Restart Apache:

    sudo service apache2 restart

Step 5: Run:

    df -h

You can then see the total volume attached to your instance.
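For a partitioned root volume, the filesystem usually can't grow until the partition does. A hedged sketch of that variant (the device names /dev/xvda and /dev/xvda1 are assumptions, and growpart comes from the cloud-guest-utils package):

    # After enlarging the EBS volume in the AWS console:
    sudo growpart /dev/xvda 1     # grow partition 1 to fill the larger device
    sudo resize2fs /dev/xvda1     # grow the ext4 filesystem to fill the partition
    df -h /                       # confirm the new size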