Thomas Cameron thomas.cameron@camerontech.com
To be clear, there are a LOT of ways to use Fedora on AWS.
The quickest/easiest way is to just use a Fedora image provided
by the Fedora project and then customize it. When you launch your
EC2 instance, go to community AMIs, and choose one from the Fedora
project and you're good to go! Or, you can use the Amazon EC2
Image Builder at https://aws.amazon.com/image-builder/.
That's a quick and painless way, as well.
I did this as a thought exercise/education thing. I wanted to
understand for myself what was involved in setting up an
image myself and making it work on AWS. I'm definitely not saying
this is the best way. It's strictly a case of my nerding
out to figure out a thing, and sharing that thing with the
internet. And to be clear, while I work for AWS, this is not an
AWS supported or even recommended thing. This is purely me
learning, and sharing it on my own time without anything to do
with AWS. There is no warranty for this, and if it breaks, you get
to keep all the pieces.
You can use cockpit or virt-manager. I used virt-manager and
kickstarted a Fedora 37 instance on a 3GB virtual disk.
Here's the kickstart file I created, and shared from a web server
on my homelab network:
# Use text install
text
# Reboot
reboot
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Use network installation
url --url="http://172.31.100.1/f37"
# Use update repository so that the system has the latest versions of everything
repo --name "Updates" --baseurl=https://mirrors.kernel.org/fedora/updates/37/Everything/x86_64/
%packages
@^custom-environment
unzip
%end
# Run the Setup Agent on first boot
firstboot --disable
# Generated using Blivet version 3.5.0
ignoredisk --only-use=vda
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
part biosboot --fstype="biosboot" --ondisk=vda --size=1
part / --fstype="xfs" --ondisk=vda --size=1 --grow
part /boot --fstype="xfs" --ondisk=vda --size=512
timesource --ntp-server=time.skylineservers.com
timesource --ntp-server=t2.time.gq1.yahoo.com
timesource --ntp-server=108.61.73.243
timesource --ntp-server=shed.galexander.org
# System timezone
timezone America/Chicago --utc
# Root password
rootpw --iscrypted [redacted]
%post
echo set enable-bracketed-paste off > /root/.inputrc
echo set enable-bracketed-paste off > /etc/skel/.inputrc
yum -y install cloud-init
systemctl enable cloud-init
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
rm -rf aws*
dnf install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
systemctl enable amazon-ssm-agent
# NOTE: could not make next line work during kickstart, must be done after first boot!
# dracut -f --add-drivers "nvme xen-netfront xen-blkfront"
%end
To generate the encrypted root password, run openssl passwd -6 and copy the resultant password string into your ks.cfg file:
openssl passwd -6
Password:
Verifying - Password:
$6$IkJ.duPRt0z1dZk6$m.MR4CNiyuvY1zh5fEYOT5iKQ4E5Eb4e/.uRbLrcv7dZYEU.KZl87ojG508zKotMjTronVftokpFz6h36Rys4/
I'm going to write this for anyone who can't kickstart and needs
to do these things manually. After you install Fedora in a VM,
selecting only "Fedora Custom Operating System," you need to do a
couple of things. We'll install cloud-init,
the latest version of the AWS CLI, the AWS SSM Agent, and we'll
set up the initial ramdisk using dracut.
To install cloud-init, you can just do it from the command line.
dnf -y install cloud-init
systemctl enable cloud-init
To install the latest version of AWS CLI, v2, run this command in
the virtual machine:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
I know, I know, I wish there were an RPM for this, but there's
not as of February 24th, 2023.
To install the AWS SSM Agent, use DNF:
dnf install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
Note that it also enables the service as part of the RPM
installation:
Your Fedora KVM instance may not have all the required
modules built into the initial ramdisk to boot in AWS. To add
them, use dracut with the -f argument to force a rebuild of the
initial ramdisk.
dracut -f --add-drivers "nvme xen-netfront xen-blkfront"
It will create a new initramfs file. In this screenshot, I've
shown that the initramfs file is created when you run the command:
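Since screenshots don't reproduce well here, one way to confirm the drivers actually made it into the rebuilt image is to inspect it with lsinitrd (which ships with dracut). This is just a sketch; the initramfs path below is the Fedora default, so adjust it if yours differs:

```shell
# look inside the rebuilt initramfs for the AWS drivers
# (path is the Fedora default: /boot/initramfs-<kernel-version>.img)
img="/boot/initramfs-$(uname -r).img"
if [ -r "$img" ]; then
    lsinitrd "$img" | grep -E 'nvme|xen-netfront|xen-blkfront'
else
    echo "no initramfs at $img; run this inside the VM"
fi
```

If the grep prints the nvme and xen module paths, the image should boot on both Nitro (nvme) and Xen-based instance types.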
You will want to clean up any stuff you have left behind. I
recommend cleaning up your bash history, your ssh key files, and
any log files which could leak any information about your homelab
setup.
I'm going to assume you ran the commands to set up this instance
as root. Removing your bash history is pretty easy. You should
also remove the zip file you created and the installation source
for the AWS CLI. In this screenshot, I show that the dot files are
left, but nothing else.
export HISTFILE=/dev/null
rm -f ~/.bash_history
rm -f ~/*
cd /etc/ssh
rm -f *key*
In this screenshot, I show what files are there, what files to
remove, and then what files remain:
You can remove all the files which could potentially leak
information about your homelab setup. Don't worry, when the AMI
boots, it will create new files with the correct ownership and
SELinux contexts.
cd /var/log
find . -type f | xargs rm -f
If you have to boot your image back up, make sure you clean out
the ssh key files, the log files, and the .bash_history file
again before you shut it down.
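If you'd rather not retype the cleanup every time, the steps above can be collected into one small script. This is my own sketch, not part of the original walkthrough: inside the VM you'd pass / as the argument; with no argument it points at a throwaway directory so an accidental run is harmless.

```shell
#!/bin/bash
# One-shot image scrub: ssh host keys, logs, and root's shell history.
# Pass the root to scrub as $1 (use / inside the VM before shutdown);
# defaults to an empty throwaway directory so a bare run does nothing.
ROOT="${1:-$(mktemp -d)}"

rm -f "$ROOT"/etc/ssh/*key*                                # host keys regenerate on first boot
find "$ROOT"/var/log -type f -exec rm -f {} + 2>/dev/null  # logs recreate with correct contexts
rm -f "$ROOT"/root/.bash_history                           # shell history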
Just repeat the commands you used inside the VM.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
Then run aws configure and fill
out your AWS access key and secret access key. In my screenshot,
it's just showing what I already had set up.
I am assuming you are already familiar with AWS. There are
already a ton of great tutorials and videos on S3 so I'm not going
to dive deep into them.
Create a bucket. In this example, I created a bucket called tcameron-fedora. Just remember your
bucket name needs to be globally unique. Also, as of January 2023,
AWS encrypts buckets by default (https://aws.amazon.com/blogs/aws/amazon-s3-encrypts-new-objects-by-default/).
This will be important later on.
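If you prefer the CLI over the console for this step, the bucket can be created with one command. The bucket name below is mine; since names are globally unique, substitute your own:

```shell
# create the bucket that will hold the raw disk image
# ("tcameron-fedora" is taken -- pick your own globally unique name)
aws s3 mb s3://tcameron-fedora
```

With no --region flag, the bucket is created in the region your aws configure defaults point at.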
In order to import a virtual machine, you have to create a role
called vmimport and assign it
permissions (referred to as policies by AWS). Most of the docs
I've found don't talk about all the policies you need to apply to
the role, so I put them all together in this document.
Create a text document called trust-policy.json.
Here's how it looks:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "vmie.amazonaws.com" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:Externalid": "vmimport"
                }
            }
        }
    ]
}
Now, to add the role to AWS, run this command:
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
If you want, you can specify the full path to the file, for example:
aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/thomas.cameron/trust-policy.json
Now assign permissions (policies) to that vmimport
role. Create a file called role-policy.json
which looks like the example below. You need to change the
bucket name to the bucket you created above! Note that,
because the buckets are encrypted by default, you also need to
grant privileges to access the Key Management Service (KMS), so I
added in the stanza with the KMS calls.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::tcameron-fedora",
                "arn:aws:s3:::tcameron-fedora/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant",
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:Encrypt",
                "kms:GenerateDataKey*",
                "kms:ReEncrypt*"
            ],
            "Resource": "*"
        }
    ]
}
To apply this policy, run this command:
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
If you want to specify the full path it would look something like
this:
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/thomas.cameron/role-policy.json
You need to convert the qcow2 image (/var/lib/libvirt/images/fedora37.qcow2)
to raw format. As root, I copied it from /var/lib/libvirt/images/fedora37.qcow2
to a temporary location and changed ownership to my regular Linux
user:
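The copy-and-chown from that screenshot looks roughly like this; the destination directory and username are from my setup, so substitute your own:

```shell
# work on a copy so libvirt's image stays untouched
# (destination path and username are examples from my homelab)
sudo cp /var/lib/libvirt/images/fedora37.qcow2 /home/thomas.cameron/
sudo chown thomas.cameron:thomas.cameron /home/thomas.cameron/fedora37.qcow2
```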
Then convert the image using qemu-img convert:
qemu-img convert -f qcow2 -O raw fedora37.qcow2 fedora37.raw
Now you can copy the image to the S3 bucket you created earlier.
In this example, it's going to tcameron-fedora:
aws s3 cp fedora37.raw s3://tcameron-fedora
When done, you should see the object in your bucket:
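If you're working without the console, you can confirm the upload from the CLI as well:

```shell
# list the bucket contents; fedora37.raw should appear in the output
aws s3 ls s3://tcameron-fedora
```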
Create a file called containers.json.
It should look like the example below. Substitute whatever
description you like, and change the S3Key value to the name of
the image file you uploaded.
{
    "Description": "Fedora",
    "Format": "raw",
    "UserBucket": {
        "S3Bucket": "tcameron-fedora",
        "S3Key": "fedora37.raw"
    }
}
Now import it by running the following command:
aws ec2 import-snapshot --disk-container file://containers.json
You will get a message which gives you the name of the ImportTaskId:
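The import runs asynchronously, so you can poll it with that ImportTaskId until the status reaches "completed". The task id below is a made-up placeholder; use the one from your own output:

```shell
# check on the snapshot import; SnapshotTaskDetail.Status moves
# through active -> completed (the task id here is a placeholder)
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0
```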