
attaching disk #13

Open
paciorek opened this issue Oct 24, 2014 · 11 comments

Comments

@paciorek (Contributor)

Would be good to think more about this. I realized that /mnt has 40 GB, which is enough for what we are doing when they start up a single instance, but it disappears when an instance is stopped and restarted. So we may want to have some discussion about provisioning persistent disk, but in light of cost.

@aculich (Contributor) commented Oct 24, 2014

By default, the disk attached to EC2 instances, called the instance store, is ephemeral: all data will be lost if the instance is terminated, but it will persist if the instance is merely stopped or restarted.

Persistent storage that will survive termination of an instance requires attaching an EBS volume. I'll add this to the TODO list for after the policy/alarms that I'm working on now.

Also, note that the instance store is much larger than the 40GB root device. It is not automatically mounted by the OS, but if you need additional space without using EBS, it is possible to mount the extra storage.

In summary:

  • instance store survives stop/start
  • instance store is lost on terminate
  • instance store is larger than just the root volume
  • EBS volume for data that needs to survive a terminate
  • deploy instance store on /scratch mount point
  • deploy EBS on /persist or /results or ...
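Under the split above, the mounts might look like the following sketch. The device names (/dev/xvdb for the instance store, /dev/xvdf for an attached EBS volume) are assumptions and vary by instance type and by how the EBS volume was attached; note that mkfs erases whatever is on the device.

```shell
# Sketch of the proposed layout; device names are assumptions.
sudo mkdir -p /scratch /persist

# Instance store: fast and large, but lost on terminate.
sudo mkfs.ext4 /dev/xvdb     # DESTROYS any existing data on the device
sudo mount /dev/xvdb /scratch

# EBS volume: survives a terminate (if not set to delete on termination).
sudo mkfs.ext4 /dev/xvdf     # only the first time the volume is used
sudo mount /dev/xvdf /persist
```

Entries in /etc/fstab would be needed to make the mounts survive a reboot.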

@paciorek (Contributor, Author)

I'm seeing 40 GB in /mnt and just 8 GB in / when I start with the defaults for c3.xlarge.

Also, when I stopped an instance, stuff in /mnt disappeared. Thoughts?

I'm trying to figure out a good path for students, as they are already working on their problem set. R is causing problems: we need 20-30 GB available in its temporary disk space, but /tmp only has a few GB. I can either tell R to put its temp space elsewhere, or perhaps there is some way to allocate more of the disk to /tmp?
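For the first option: R picks its session temp directory from the TMPDIR (or TMP/TEMP) environment variable once, at startup, so it can be redirected to the large volume without resizing /tmp. A minimal sketch, assuming /mnt is writable:

```shell
# Redirect R's session temp directory to the large instance-store volume.
# R consults TMPDIR (then TMP, then TEMP) at startup to set tempdir().
mkdir -p /mnt/rtmp
export TMPDIR=/mnt/rtmp

# Any R session launched from this shell now puts its temp files there, e.g.:
#   R --no-save -e 'tempdir()'
```

Because tempdir() is fixed at startup, TMPDIR must be set before R is launched; setting it from inside a running session has no effect.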


@paciorek (Contributor, Author)

Arggh. R doesn't want to allow its temp directory to be on /mnt for some reason. So I really need to figure out a way to get more space available for /tmp.
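One guess at the cause (an assumption, not verified in the thread): /mnt is mounted owned by root and not world-writable, so R cannot create its per-session directory there. A sketch of a fix under that assumption:

```shell
# Hypothetical fix: create a world-writable, sticky-bit temp area on /mnt
# (mode 1777, the same permissions as a stock /tmp) and point R at it.
sudo mkdir -p /mnt/tmp
sudo chmod 1777 /mnt/tmp

# Make the setting persistent per user: R reads ~/.Renviron at startup.
echo 'TMPDIR=/mnt/tmp' >> ~/.Renviron
```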

@aculich (Contributor) commented Oct 24, 2014

This process will be faster if I drop by and we look at it together and iterate. I'll swing by at 10:30am if that works for you? I expect we might run into other problems along the way as you test out other things, so we might as well get everything sorted out ASAP; I'll worry about scripting it after the fact.

@paciorek (Contributor, Author)

I have a 10:10 meeting and should be back about 11. Does that work?


@aculich (Contributor) commented Oct 24, 2014

11am is fine.

@paciorek (Contributor, Author)

OK, I just got back, so I'm here now.


aculich added a commit that referenced this issue Oct 24, 2014
@aculich (Contributor) commented Oct 24, 2014

The setup-storage script will combine both instance-store volumes into a single logical volume using LVM.

Students only need to run the first time an instance is started:

sudo ./setup-storage

If there is already data in /mnt, it WILL DESTROY that data. There are currently no safeguards, so the script is dangerous to leave lying around in case someone accidentally runs it a second time (e.g. a bash-history mishap).
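The script's contents aren't shown in the thread, but combining two instance-store devices with LVM typically looks like the following. This is a hypothetical sketch, not the actual setup-storage script; /dev/xvdb and /dev/xvdc are the device names mentioned later in this thread, and the volume-group and logical-volume names are made up.

```shell
# Hypothetical LVM setup spanning both instance-store devices.
# Running this DESTROYS any existing data on /dev/xvdb and /dev/xvdc.
sudo pvcreate /dev/xvdb /dev/xvdc                    # register physical volumes
sudo vgcreate scratch_vg /dev/xvdb /dev/xvdc         # pool them in one volume group
sudo lvcreate -l 100%FREE -n scratch_lv scratch_vg   # one LV spanning the group
sudo mkfs.ext4 /dev/scratch_vg/scratch_lv
sudo mount /dev/scratch_vg/scratch_lv /mnt
```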

@aculich (Contributor) commented Oct 24, 2014

This will not yet fix the other problem of the hard-coded /tmp path in your existing code.

@aculich (Contributor) commented Oct 24, 2014

I have verified that rebooting or start/stop of the instance will NOT delete the files in /mnt. The instance store volumes work as I expected (whether or not you run the setup-storage script).

I think your workaround of symlinking /tmp to /mnt is the culprit causing the data to disappear, because of this standard Ubuntu default for /tmp, which automatically cleans /tmp when it is not a temporary filesystem (which is what happens when you symlink it to /mnt):

$ head -n25  /etc/init/mounted-tmp.conf
# mounted-tmp - Clean /tmp directory
#
# Cleans up the /tmp directory when it does not exist as a temporary
# filesystem.

description     "Clean /tmp directory"

start on (mounted MOUNTPOINT=/tmp) or (mounted MOUNTPOINT=/usr)
# The "/tmp" here is just a default and is overridden by the "start on"
# case above. It protects someone from running this job directly and
# having no $MOUNTPOINT defined.
env MOUNTPOINT=/tmp

task

script
    if [ x$MOUNTPOINT = x/tmp ] && [ ! -x /usr/bin/find ] ; then
        touch /tmp/.delayed_mounted_tmp_clean
        exit 0
    elif [ x$MOUNTPOINT = x/usr ] ; then
        [ -f /tmp/.delayed_mounted_tmp_clean ] || exit 0
        rm /tmp/.delayed_mounted_tmp_clean
        MOUNTPOINT=/tmp
    fi
...[snip]...
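If /tmp really must live on the instance store, one way to keep this cleanup job from firing (an assumption; not something tried in the thread) is Upstart's override mechanism, which marks a job manual so it never starts automatically:

```shell
# Stop Upstart from auto-starting the mounted-tmp cleanup job.
# A "manual" stanza in an override file disables automatic start
# without editing the packaged /etc/init/mounted-tmp.conf.
echo manual | sudo tee /etc/init/mounted-tmp.override
```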

@aculich (Contributor) commented Oct 24, 2014

What I stated above is INCORRECT!

As we verified in the console, stopping an instance does indeed lose all data on the instance store volumes, in this case the ones on /mnt, but they do survive a restart (just not a full stop).

What survives a stop and a start is the root device, in this case /dev/xvda, whereas the instance store volumes /dev/xvdb and /dev/xvdc do not survive the stop and start.

In the case of a terminate, the root volume will also be deleted unless you uncheck the Delete on Termination checkbox in the Add Storage section when launching the instance in the console.
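The same behavior can be set from the AWS CLI at launch time (a sketch; the AMI ID and key name are placeholders, not values from this thread):

```shell
# Launch an instance whose root EBS volume is kept after termination.
# ami-xxxxxxxx and MyKey are placeholder values.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type c3.xlarge \
    --key-name MyKey \
    --block-device-mappings \
    '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'
```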

You can read more here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html
