VM Templates and Clones
Contents
Introduction
A template is a fully pre-configured operating system image that can be used to deploy KVM virtual machines. Creating a dedicated template is usually preferred over cloning an existing VM.
Deploying virtual machines from templates is very fast and convenient, and if you use linked clones you can optimize your storage by using base images and thin provisioning.
Proxmox VE has included container-based templates since 2008, and beginning with the 3.x series, KVM templates can be created and deployed as well.
Definitions
Create VM Template
Templates are created by converting a VM to a template.
As soon as a VM is converted, it can no longer be started and its icon changes. If you want to modify an existing template, you need to deploy a full clone from it and repeat the steps above.
OS specific notes for Templates
For production use it is highly recommended that a template not include any data, user accounts, or SSH keys, so remove all of these before you convert the VM to a template. On Linux systems you should remove SSH host keys, persistent network MAC configuration, and user accounts and user data. Windows offers a number of tools for this, e.g. sysprep.
For testing purposes it may be useful to use a fully installed OS as a template.
GNU/Linux
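A hedged sketch of the cleanup described above for Linux guests; the exact paths vary by distribution, so treat every path here as an assumption:

```shell
# Run inside the VM, just before shutting it down and converting it to a
# template. All paths are distribution-dependent assumptions.
rm -f /etc/ssh/ssh_host_*                        # SSH host keys (regenerated on first boot)
truncate -s 0 /etc/machine-id                    # reset the machine ID
rm -f /etc/udev/rules.d/70-persistent-net.rules  # drop persistent MAC bindings
rm -rf /tmp/* /var/tmp/*                         # clear temporary data
history -c                                       # clear the shell history
```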
Windows 7
Deploy a VM from a Template
Right-click the template, and select "Clone".
Full Clone
A full clone VM is a complete copy and is fully independent from the original VM or VM Template, but it requires the same disk space as the original.
Linked Clone
A linked clone VM requires less disk space but cannot run without access to the base VM Template.
Linked clones work on these storages: files in raw, qcow2, or vmdk format (either on local storage or NFS); LVM-thin, ZFS, rbd, sheepdog, nexenta.
They are not supported with LVM and iSCSI storage.
A Programmer's Notes
My first experience with Proxmox VE
Proxmox Virtual Environment is a system that provides a simple and convenient web interface for managing virtual machines (using KVM) and containers (LXC) on your cluster of physical machines. In effect, with Proxmox you can build your own small Amazon Web Services on your own hardware. Overall the system is very similar to Parallels Virtual Automation, which we looked at earlier, except that it is distributed free of charge and as open source. Paid technical support is also available. As we will soon see, Proxmox handles its job no worse than PVA, and in some respects perhaps even better.
Installation
Download the ISO image from here and write it to a USB drive as usual with dd:
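The dd invocation might look like this; the ISO file name and the target device are placeholders, and dd will overwrite the target without asking:

```shell
# Double-check the target device (e.g. with lsblk) before running this.
dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress
sync
```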
Plug the drive into the future host machine. Remember that KVM requires a CPU with Intel VT-x or AMD-V. As far as I understand, all Intel Core i5 and Core i7 processors support hardware virtualization, but just in case, check the BIOS and the description of your specific CPU model on the manufacturer's website. You will also need a monitor and keyboard for the duration of the installation.
Important! Keep in mind that by default you can also log in to the server as root over SSH, using the same password.
Usage
The admin panel looks roughly like this:
To create a VM, you first need to upload an installation ISO image of the system. I personally experimented with FreeBSD. In the tree on the left, select Datacenter → proxmox → local, open the Content tab, and click Upload. Then click Create VM in the top right corner. The new-VM dialog is unremarkable; everything is simple and clear. After creation, tell the VM to Start. Then click Console → noVNC. As a result you connect to the VM over VNC right in the browser. All of this works in plain Chromium, with no Flash or Java applets. Awesome!
Now you can SSH directly into the container as the newly created user; sshd was already running in the container.
Proxmox VE supports cloning of virtual machines. Cloning of containers, as far as I could tell, is for some reason not implemented yet. In the tree, right-click a VM and choose Convert to Template. Right-click again and choose Clone. As a result you get a pile of copies of the same virtual machine — convenient.
To create backups you will need an NFS server. In principle, nothing prevents you from running one right on one of the VMs. Then, in the tree on the left, click Datacenter, open the Storage tab, and click Add → NFS. Enter the NFS server's IP address in the Server field and pick the exported directory from the Export drop-down. In the Content drop-down, click each item in turn so that all of them are added to the list. I have never seen such a non-standard UI control anywhere else!
Now verify that backup and restore work for both virtual machines and containers. Note that you can also schedule automatic backups. Besides backups, KVM has a snapshot mechanism that lets you save the state of a VM and roll back to a previously saved state. It looks very interesting in action; definitely try it.
I was unable to test Proxmox VE with multiple host machines, for lack of that many spare machines. However, according to this article in the official wiki, machines are joined into a cluster with a single command, after which everything works just the same. The open question is whether it all falls apart under network problems. I hope some readers who actively use Proxmox can shed light on this in the comments.
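According to the wiki, clustering boils down to a couple of commands; a sketch, where the cluster name and IP address are placeholders:

```shell
pvecm create mycluster   # on the first node
pvecm add 192.0.2.10     # on each additional node, pointing at an existing node
pvecm status             # verify cluster membership and quorum
```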
Conclusion
Finally, I would like to note a few things I did not particularly like about Proxmox:
Despite the issues mentioned, I still firmly approve of Proxmox. Remembering the pain and humiliation of using AWS, today I would prefer to rent physical machines and build my own IaaS on them with Proxmox, rather than use AWS (or Google Cloud, or Azure — by numerous accounts they all share the same problems). There are good reasons to believe such a setup would be no worse, because it could hardly be worse.
Do you use Proxmox VE, and if so, what are your impressions?
Proxmox 7 LXC Docker Portainer
A Linux container (LXC) template with Docker + Portainer
I will prepare a system and turn it into a template, since I often experiment with Docker containers, and Portainer provides a convenient web interface for managing them.
Creating the LXC in Proxmox
For creation I use the ubuntu 21.04 template.
Checking keyctl + Nesting
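Assuming the container got ID 100, the keyctl and Nesting features (which Docker inside an LXC container typically needs) can also be enabled from the Proxmox host shell:

```shell
# CT ID 100 is an assumption - substitute your container's ID.
pct set 100 --features keyctl=1,nesting=1
```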
Update and install all the necessary software.
Installing Docker in a Proxmox 7 LXC (Ubuntu 21.04)
Add Docker's official GPG key
Add the stable repository
Update and install
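The three steps above follow the official Docker installation instructions for Ubuntu; a sketch of the commands as they stood for this release, run inside the container:

```shell
apt-get update
apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add the stable repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
  > /etc/apt/sources.list.d/docker.list

# Update and install
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
```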
You can also reboot the container and make sure the service starts automatically and without errors.
Installing Portainer
I find it more convenient to keep the data in a directory than to create a separate volume, so I create the directories
Installing Portainer. Link to the official source
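A sketch based on the official Portainer CE instructions, with /opt/portainer used as the data directory instead of a named volume, as described above:

```shell
mkdir -p /opt/portainer

# /opt/portainer replaces the named volume from the official command.
docker run -d -p 8000:8000 -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/portainer:/data \
  portainer/portainer-ce
```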
Check the container status
Check the web interface
At this point I do nothing further. I close everything and create a backup/template, for quick future deployment of a ready-made Portainer.io system.
Your own Linux container (LXC) template on Proxmox
1. Convert to template
The simplest way to make a template is to right-click the container and choose Convert to template.
Afterwards you can quickly and conveniently spin up the pre-configured system, and it stays in the list alongside the other systems.
2. Own CT Templates (your own LXC template)
Usually I use the ready-made standard template:
ubuntu-21.04-standard_21.04-1_amd64.tar.gz
The goal is to create a similar template, but with my own settings and installed software.
I shut down the configured Portainer container and make a backup.
From now on, when creating a new LXC (Linux CT), my ready-made template with Portainer will be available for selection.
Which of the templates is the more correct one to use, and what caveats each approach has, I cannot yet say. I use them purely for convenience and have not encountered any problems.
Working with Proxmox in an IaC style
Next I will show how to create the required number of virtual machines with the characteristics you specify, using a Proxmox template and an Ansible playbook. We will run the playbook on the server where Proxmox is installed, so localhost is used as the host. It is assumed that you already have experience with Proxmox and Ansible separately. If anything needs clarifying, please ask in the comments.
As an example, two virtual machines (VMs) will be created:
Test bench specifications
Preliminary setup
Install the dependencies required by the proxmox_kvm module:
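The Ansible proxmox modules depend on the proxmoxer and requests Python packages; a sketch for a Debian-based Proxmox host:

```shell
apt-get update
apt-get install -y python3-pip
pip3 install proxmoxer requests
```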
Creating the template
The virtual machines will be created from a CentOS 7 image. Download the ISO image from the Download CentOS page and upload it to Proxmox.
After starting the VM and installing the operating system, log in and install the cloud-init package:
We will need this package to change VM settings before it boots. For example, we will be able to set an IP address and provide the public part of an SSH key.
If you do not need this, you can leave that parameter unchanged. In the same file you can also disable modules you are not going to use.
Go to the Cloud-Init tab and define a user account with a password.
(!) Note: you must set a username and password! Otherwise you will not be able to log in to a VM created from this template. We do not set network settings in the cloud-init parameters; they will be defined later, in the Ansible playbook.
So, at this point we have a virtual machine template based on CentOS 7.
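The GUI steps can also be done from the Proxmox shell; a hedged sketch, assuming VM ID 9000 and the local-lvm storage:

```shell
qm set 9000 --ide2 local-lvm:cloudinit      # attach a cloud-init drive
qm set 9000 --serial0 socket --vga serial0  # serial console, handy for cloud images
qm template 9000                            # convert the VM into a template
```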
Describing the Ansible variables
In the file we define the variables needed to authenticate against the Proxmox API (your values will differ, of course):
And the variables describing the characteristics of the virtual machines:
The final view of the file (vms.yml):
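As an illustration, such a vms.yml could look like the following; every credential, ID, and address here is a hypothetical placeholder:

```yaml
# Hypothetical example - adjust credentials and VM specs to your setup.
api_host: "127.0.0.1"
api_user: "root@pam"
api_password: "secret"
template_name: "centos7-cloudinit-template"

vms:
  vm1:
    vmid: 201
    cores: 2
    memory: 2048
    ip: "192.168.1.201/24"
    gw: "192.168.1.1"
  vm2:
    vmid: 202
    cores: 4
    memory: 4096
    ip: "192.168.1.202/24"
    gw: "192.168.1.1"
```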
Describing the Ansible playbook
We define the following logical blocks
Using the loop construct we iterate over all entries of the vms dictionary
The keys are located in the /tmp directory of the server where the playbook runs.
The final view of the playbook (create_vm.yml)
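A minimal sketch of what such a playbook can look like with the community.general.proxmox_kvm module; this is not the author's exact playbook, and all names, IDs, and paths are placeholders:

```yaml
# create_vm.yml - hedged sketch of the loop-based cloning described above.
- hosts: localhost
  vars_files:
    - vms.yml
  tasks:
    - name: Clone VMs from the template
      community.general.proxmox_kvm:
        api_host: "{{ api_host }}"
        api_user: "{{ api_user }}"
        api_password: "{{ api_password }}"
        node: proxmox
        clone: "{{ template_name }}"
        name: "{{ item.key }}"
        newid: "{{ item.value.vmid }}"
        full: yes
      loop: "{{ vms | dict2items }}"

    - name: Apply cloud-init network settings and the SSH key
      community.general.proxmox_kvm:
        api_host: "{{ api_host }}"
        api_user: "{{ api_user }}"
        api_password: "{{ api_password }}"
        node: proxmox
        name: "{{ item.key }}"
        update: yes
        ipconfig:
          ipconfig0: "ip={{ item.value.ip }},gw={{ item.value.gw }}"
        sshkeys: "{{ lookup('file', '/tmp/id_rsa.pub') }}"
      loop: "{{ vms | dict2items }}"
```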
Running the playbook
Run the playbook with the command
The result of the command should look like this:
Checking the result
Now we can log in to the created hosts using the IPs, user, and key we defined:
Check the host characteristics in the Proxmox interface:
Everything matches expectations.
Conclusion
In fairness, it should be said that there are other tools for a declarative approach, for example the Terraform plugin. Unfortunately, it is not well documented and not officially certified.
Linux Container
Containers are a lightweight alternative to fully virtualized machines (VMs). They use the kernel of the host system that they run on, instead of emulating a full operating system (OS). This means that containers can access resources on the host system directly.
The runtime cost of containers is low, usually negligible. However, there are some drawbacks that need to be considered:
Only Linux distributions can be run in Proxmox Containers. It is not possible to run other operating systems like, for example, FreeBSD or Microsoft Windows inside a container.
For security reasons, access to host resources needs to be restricted. Therefore, containers run in their own separate namespaces. Additionally some syscalls (user space requests to the Linux kernel) are not allowed within containers.
Proxmox VE uses Linux Containers (LXC) as its underlying container technology. The “Proxmox Container Toolkit” ( pct ) simplifies the usage and management of LXC, by providing an interface that abstracts complex tasks.
Containers are tightly integrated with Proxmox VE. This means that they are aware of the cluster setup, and they can use the same network and storage resources as virtual machines. You can also use the Proxmox VE firewall, or manage containers using the HA framework.
Our primary goal is to offer an environment that provides the benefits of using a VM, but without the additional overhead. This means that Proxmox Containers can be categorized as “System Containers”, rather than “Application Containers”.
If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox Qemu VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.
Technology Overview
Integrated into Proxmox VE graphical web user interface (GUI)
Easy to use command line tool pct
Access via Proxmox VE REST API
lxcfs to provide containerized /proc file system
Control groups (cgroups) for resource isolation and limitation
AppArmor and seccomp to improve security
Modern Linux kernels
Image based deployment (templates)
Container setup from host (network, DNS, storage, etc.)
Supported Distributions
The list of officially supported distributions can be found below.
Templates for the following distributions are available through our repositories. You can use the pveam tool or the graphical user interface to download them.
Alpine Linux
Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox.
For currently supported releases see:
Arch Linux
Arch Linux, a lightweight and flexible Linux® distribution that tries to Keep It Simple.
Arch Linux is using a rolling-release model, see its wiki for more details:
CentOS, Almalinux, Rocky Linux
CentOS / CentOS Stream
The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL)
For currently supported releases see:
Almalinux
An Open Source, community owned and governed, forever-free enterprise Linux distribution, focused on long-term stability, providing a robust production-grade platform. AlmaLinux OS is 1:1 binary compatible with RHEL® and pre-Stream CentOS.
For currently supported releases see:
Rocky Linux
Rocky Linux is a community enterprise operating system designed to be 100% bug-for-bug compatible with America’s top enterprise Linux distribution now that its downstream partner has shifted direction.
For currently supported releases see:
Debian
Debian is a free operating system, developed and maintained by the Debian project. A free Linux distribution with thousands of applications to meet our users’ needs.
For currently supported releases see:
Devuan
Devuan GNU+Linux is a fork of Debian without systemd that allows users to reclaim control over their system by avoiding unnecessary entanglements and ensuring Init Freedom.
For currently supported releases see:
Fedora
Fedora creates an innovative, free, and open source platform for hardware, clouds, and containers that enables software developers and community members to build tailored solutions for their users.
For currently supported releases see:
Gentoo
Gentoo is a highly flexible, source-based Linux distribution.
Gentoo is using a rolling-release model.
OpenSUSE
The makers’ choice for sysadmins, developers and desktop users.
For currently supported releases see:
Ubuntu
Ubuntu is the modern, open source operating system on Linux for the enterprise server, desktop, cloud, and IoT.
For currently supported releases see:
Container Images
Container images, sometimes also referred to as “templates” or “appliances”, are tar archives which contain everything to run a container.
Proxmox VE itself provides a variety of basic templates for the most common Linux distributions. They can be downloaded using the GUI or the pveam (short for Proxmox VE Appliance Manager) command line utility. Additionally, TurnKey Linux container templates are also available to download.
The list of available templates is updated daily through the pve-daily-update timer. You can also trigger an update manually by executing:
To view the list of available images run:
You can restrict this large list by specifying the section you are interested in, for example basic system images:
Before you can use such a template, you need to download them into one of your storages. If you’re unsure to which one, you can simply use the local named storage for that purpose. For clustered installations, it is preferred to use a shared storage so that all nodes can access those images.
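A typical pveam workflow looks like this; the template file name is only an example and will differ over time:

```shell
pveam update                      # refresh the template index
pveam available --section system  # list basic system images
pveam download local ubuntu-20.04-standard_20.04-1_amd64.tar.gz
pveam list local                  # show templates downloaded to the "local" storage
```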
You are now ready to create containers using that image, and you can list all downloaded images on storage local with:
You can also use the Proxmox VE web interface GUI to download, list and delete container templates.
pct uses them to create a new container, for example:
The above command shows you the full Proxmox VE volume identifiers. They include the storage name, and most other Proxmox VE commands can use them. For example you can delete that image later with:
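For example, assuming the template downloaded above and a free CT ID of 999:

```shell
pct create 999 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz

# The full volume identifier can later be used to delete the image:
pveam remove local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz
```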
Container Settings
General Settings
General settings of a container include
the Node : the physical server on which the container will run
the CT ID: a unique number in this Proxmox VE installation used to identify your container
Hostname: the hostname of the container
Resource Pool: a logical group of containers and VMs
Password: the root password of the container
SSH Public Key: a public key for connecting to the root account over SSH
Unprivileged container: this option lets you choose at creation time whether to create a privileged or unprivileged container.
Unprivileged Containers
Unprivileged containers use a new kernel feature called user namespaces. The root UID 0 inside the container is mapped to an unprivileged user outside the container. This means that most security issues (container escape, resource abuse, etc.) in these containers will affect a random unprivileged user, and would be a generic kernel security bug rather than an LXC issue. The LXC team thinks unprivileged containers are safe by design.
This is the default option when creating a new container.
If the container uses systemd as an init system, please be aware the systemd version running inside the container should be equal to or greater than 220.
Privileged Containers
Security in containers is achieved by using mandatory access control AppArmor restrictions, seccomp filters and Linux kernel namespaces. The LXC team considers this kind of container as unsafe, and they will not consider new container escape exploits to be security issues worthy of a CVE and quick fix. That’s why privileged containers should only be used in trusted environments.
You can restrict the number of visible CPUs inside the container using the cores option. This is implemented using the Linux cpuset cgroup (control group). A special task inside pvestatd tries to distribute running containers among available CPUs periodically. To view the assigned CPUs run the following command:
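For example, assuming container ID 100:

```shell
pct set 100 --cores 2   # limit the container to two cores
pct cpusets             # show which host CPUs each running container was assigned
```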
Containers use the host kernel directly. All tasks inside a container are handled by the host CPU scheduler. Proxmox VE uses the Linux CFS (Completely Fair Scheduler) scheduler by default, which has additional bandwidth control options.
cpulimit: You can use this option to further limit assigned CPU time. Please note that this is a floating point number, so it is perfectly valid to assign two cores to a container, but restrict overall CPU consumption to half a core.
cpuunits: This is a relative weight passed to the kernel scheduler. The larger the number, the more CPU time this container gets. The number is relative to the weights of all the other running containers. The default is 1024. You can use this setting to prioritize some containers.
Memory
Container memory is controlled using the cgroup memory controller.
memory: Limit overall memory usage. This corresponds to the memory.limit_in_bytes cgroup setting.
swap: Allows the container to use additional swap memory from the host swap space. This corresponds to the memory.memsw.limit_in_bytes cgroup setting, which is set to the sum of both values (memory + swap).
Mount Points
Use volume as container root. See below for a detailed description of all options.
mp[n]: [volume=]<volume>,mp=<path>[,acl=<1|0>][,backup=<1|0>][,mountoptions=<opt[;opt...]>][,quota=<1|0>][,replicate=<1|0>][,ro=<1|0>][,shared=<1|0>][,size=<DiskSize>]
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
Explicitly enable or disable ACL support.
Whether to include the mount point in backups (only used for volume mount points).
Extra mount options for rootfs/mps.
Path to the mount point as seen from inside the container.
Enable user quotas inside the container (not supported with zfs subvolumes)
Will include this volume to a storage replica job.
Read-only mount point
Mark this non-volume mount point as available on all nodes.
This option does not share the mount point automatically, it assumes it is shared already!
Volume size (read only value).
Volume, device or directory to mount into the container.
Currently there are three types of mount points: storage backed mount points, bind mounts, and device mounts.
Storage Backed Mount Points
Storage backed mount points are managed by the Proxmox VE storage subsystem and come in three different flavors:
Image based: these are raw images containing a single ext4 formatted file system.
ZFS subvolumes: these are technically bind mounts, but with managed storage, and thus allow resizing and snapshotting.
Directories: passing size=0 triggers a special case where instead of a raw image a directory is created.
The special option syntax STORAGE_ID:SIZE_IN_GiB for storage backed mount point volumes will automatically allocate a volume of the specified size on the specified storage. For example, a mount point value of thin1:10,mp=/path/in/container will allocate a 10 GiB volume on the storage thin1, replace the volume ID placeholder 10 with the allocated volume ID, and set up the mount point in the container at /path/in/container.
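On the command line the pattern above corresponds to a call like the following; the container ID 100 is an assumption:

```shell
pct set 100 -mp0 thin1:10,mp=/path/in/container
```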
Bind Mount Points
Bind mounts allow you to access arbitrary directories from your Proxmox VE host inside a container. Some potential use cases are:
Accessing your home directory in the guest
Accessing a USB device directory in the guest
Accessing an NFS mount from the host in the guest
Bind mounts are considered to not be managed by the storage subsystem, so you cannot make snapshots or deal with quotas from inside the container. With unprivileged containers you might run into permission problems caused by the user mapping and cannot use ACLs.
Device Mount Points
Device mount points allow you to mount block devices of the host directly into the container. Similar to bind mounts, device mounts are not managed by Proxmox VE’s storage subsystem, but the quota and acl options will be honored.
Device mount points should only be used under special circumstances. In most cases a storage backed mount point offers the same performance and a lot more features.
Network
Specifies network interfaces for the container.
Bridge to attach the network device to.
Controls whether this interface’s firewall rules should be used.
Default gateway for IPv4 traffic.
Default gateway for IPv6 traffic.
A common MAC address with the I/G (Individual/Group) bit not set.
IPv4 address in CIDR format.
IPv6 address in CIDR format.
Maximum transfer unit of the interface. (lxc.network.mtu)
Name of the network device as seen from inside the container. (lxc.network.name)
Apply rate limiting to the interface.
VLAN tag for this interface.
VLAN ids to pass through the interface.
Network interface type.
Automatic Start and Shutdown of Containers
To automatically start a container when the host system boots, select the option Start at boot in the Options panel of the container in the web interface or run the following command:
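The CLI equivalent; container ID 100 is a placeholder:

```shell
pct set 100 --onboot 1
```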
Start and Shutdown Order
If you want to fine tune the boot order of your containers, you can use the following parameters:
Start/Shutdown order: Defines the start order priority. For example, set it to 1 if you want the CT to be the first to be started. (We use the reverse startup order for shutdown, so a container with a start order of 1 would be the last to be shut down)
Startup delay: Defines the interval between this container start and subsequent container starts. For example, set it to 240 if you want to wait 240 seconds before starting other containers.
Shutdown timeout: Defines the duration in seconds Proxmox VE should wait for the container to be offline after issuing a shutdown command. By default this value is set to 60, which means that Proxmox VE will issue a shutdown request, wait 60s for the machine to be offline, and if after 60s the machine is still online will notify that the shutdown action failed.
Please note that containers without a Start/Shutdown order parameter will always start after those where the parameter is set, and this parameter only makes sense between the machines running locally on a host, and not cluster-wide.
If you require a delay between the host boot and the booting of the first container, see the section on Proxmox VE Node Management.
Hookscripts
Security Considerations
Containers use the kernel of the host system. This exposes an attack surface for malicious users. In general, full virtual machines provide better isolation. This should be considered if containers are provided to unknown or untrusted people.
To reduce the attack surface, LXC uses many security features like AppArmor, CGroups and kernel namespaces.
AppArmor
To trace AppArmor activity, use:
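The command referred to here is, per the Proxmox documentation, a kernel log search:

```shell
dmesg | grep apparmor
```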
Although it is not recommended, AppArmor can be disabled for a container. This brings security risks with it. Some syscalls can lead to privilege escalation when executed within a container if the system is misconfigured or if an LXC or Linux kernel vulnerability exists.
To disable AppArmor for a container, add the following line to the container configuration file located at /etc/pve/lxc/CTID.conf :
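The configuration line in question, as given in the Proxmox documentation, is:

```
lxc.apparmor.profile = unconfined
```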
Control Groups (cgroup)
cgroup is a kernel mechanism used to hierarchically organize processes and distribute system resources.
The main resources controlled via cgroups are CPU time, memory and swap limits, and access to device nodes. cgroups are also used to "freeze" a container before taking snapshots.
There are two versions of cgroups currently available, legacy and cgroupv2.
Since Proxmox VE 7.0, the default is a pure cgroupv2 environment. Previously a "hybrid" setup was used, where resource control was mainly done in cgroupv1 with an additional cgroupv2 controller which could take over some subsystems via the cgroup_no_v1 kernel command line parameter. (See the kernel parameter documentation for details.)
CGroup Version Compatibility
The main difference between pure cgroupv2 and the old hybrid environments regarding Proxmox VE is that with cgroupv2 memory and swap are now controlled independently. The memory and swap settings for containers can map directly to these values, whereas previously only the memory limit and the limit of the sum of memory and swap could be limited.
Another important difference is that the devices controller is configured in a completely different way. Because of this, file system quotas are currently not supported in a pure cgroupv2 environment.
cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2 (this includes all current major versions of container templates shipped by Proxmox VE), as do containers not using systemd as their init system (for example, Alpine Linux).
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment; you can either:
Upgrade the whole distribution to a newer release. For the examples above, that could be Ubuntu 18.04 or 20.04, or CentOS 8 (or RHEL/CentOS derivatives like AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug and security fixes, often new features as well, and of moving the EOL date further into the future.
Upgrade the container's systemd version. If the distribution provides a backports repository, this can be an easy and quick stop-gap measure.
Move the container, or its services, to a virtual machine. Virtual machines interact much less with the host, which is why decades-old OS versions can be installed there just fine.
Switch back to the legacy cgroup controller. Note that while this can be a valid solution, it is not a permanent one. There is a high likelihood that a future Proxmox VE major release, for example 8.0, will no longer support the legacy controller.
Changing CGroup Version
If file system quotas are not required and all containers support cgroupv2, it is recommended to stick to the new default.
To switch back to the previous version the following kernel command line parameter can be used:
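The parameter referred to is:

```
systemd.unified_cgroup_hierarchy=0
```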
See this section on editing the kernel boot command line on where to add the parameter.
Guest Operating System Configuration
Proxmox VE tries to detect the Linux distribution in the container, and modifies some files. Here is a short list of things done at container startup:
Modify /etc/hostname to set the container name
Modify /etc/hosts to allow lookup of the local hostname
Pass the complete network setup to the container
Pass information about DNS servers
Adapt the init system, for example, to fix the number of spawned getty processes
Set the root password when creating a new container
Rewrite ssh_host_keys so that each container has unique keys
Randomize crontab so that cron does not start at the same time on all containers
Changes made by Proxmox VE are enclosed by comment markers:
Those markers will be inserted at a reasonable location in the file. If such a section already exists, it will be updated in place and will not be moved.
Modification of a file can be prevented by adding a .pve-ignore. file for it. For instance, if the file /etc/.pve-ignore.hosts exists then the /etc/hosts file will not be touched. This can be a simple empty file created via:
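For the /etc/hosts example above:

```shell
touch /etc/.pve-ignore.hosts
```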
OS type detection is done by testing for certain files inside the container. Proxmox VE first checks the /etc/os-release file (/etc/os-release replaces the multitude of per-distribution release files, see https://manpages.debian.org/stable/systemd/os-release.5.en.html). If that file is not present, or does not contain a clearly recognizable distribution identifier, the following distribution-specific release files are checked; for Ubuntu, /etc/lsb-release is inspected (DISTRIB_ID=Ubuntu).
Container start fails if the configured ostype differs from the auto detected type.
Container Storage
The Proxmox VE LXC container storage model is more flexible than traditional container storage models. A container can have multiple mount points. This makes it possible to use the best suited storage for each application.
For example the root file system of the container can be on slow and cheap storage while the database can be on fast and distributed storage via a second mount point. See section Mount Points for further details.
Furthermore, local devices or local directories can be mounted directly using bind mounts. This gives access to local resources inside a container with practically zero overhead. Bind mounts can be used as an easy way to share data between containers.
FUSE Mounts
Because of existing issues in the Linux kernel’s freezer subsystem the usage of FUSE mounts inside a container is strongly advised against, as containers need to be frozen for suspend or snapshot mode backups.
If FUSE mounts cannot be replaced by other mounting mechanisms or storage technologies, it is possible to establish the FUSE mount on the Proxmox host and use a bind mount point to make it accessible inside the container.
Using Quotas Inside Containers
Quotas let you set limits inside a container on the amount of disk space each user can use.
This only works on ext4 image based storage types and currently only works with privileged containers.
Activating the quota option causes the following mount options to be used for a mount point: usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
This allows quotas to be used like on any other system. You can initialize the /aquota.user and /aquota.group files by running:
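The initialization commands referred to are, run inside the container (the user name in the last line is a placeholder):

```shell
quotacheck -cmug /   # create /aquota.user and /aquota.group
quotaon /            # enable quotas on the root file system
edquota -u someuser  # edit quotas for a user
```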
Then edit the quotas using the edquota command. Refer to the documentation of the distribution running inside the container for details.
Using ACLs Inside Containers
The standard POSIX Access Control Lists are also available inside containers. ACLs allow you to set more detailed file ownership than the traditional user/group/others model.
Backup of Container mount points
To include a mount point in backups, enable the backup option for it in the container configuration. For an existing mount point mp0, add backup=1 to enable it.
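A hedged example using pct set; the container ID (100), the volume name and the mount path are hypothetical:

```shell
# Re-set mp0 with its existing volume and path, appending backup=1:
pct set 100 -mp0 local:100/vm-100-disk-1.raw,mp=/mnt/data,backup=1
```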
When creating a new mount point in the GUI, this option is enabled by default.
To disable backups for a mount point, add backup=0 in the way described above, or uncheck the Backup checkbox on the GUI.
Replication of Container mount points
Backup and Restore
Container Backup
It is possible to use the vzdump tool for container backup. Please refer to the vzdump manual page for details.
Restoring Container Backups
Restoring container backups made with vzdump is possible using the pct restore command. By default, pct restore will attempt to restore as much of the backed up container configuration as possible. It is possible to override the backed up configuration by manually setting container options on the command line (see the pct manual page for details).
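A sketch of a simple restore; the CTID and archive name are hypothetical and depend on your backup storage:

```shell
# Restore CT 123 from a vzdump archive, overriding the hostname option:
pct restore 123 local:backup/vzdump-lxc-123-2024_01_01-12_00_00.tar.zst -hostname restored-ct
```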
pvesm extractconfig can be used to view the backed up configuration contained in a vzdump archive.
There are two basic restore modes, only differing by their handling of mount points:
“Simple” Restore Mode
If neither the rootfs parameter nor any of the optional mpX parameters are explicitly set, the mount point configuration from the backed up configuration file is restored using the following steps:
Extract mount points and their options from backup
Create volumes for storage backed mount points on the storage provided with the storage parameter (default: local ).
Extract files from backup archive
Add bind and device mount points to restored configuration (limited to root user)
Since bind and device mount points are never backed up, no files are restored in the last step, but only the configuration options. The assumption is that such mount points are either backed up with another mechanism (e.g., NFS space that is bind mounted into many containers), or not intended to be backed up at all.
This simple mode is also used by the container restore operations in the web interface.
“Advanced” Restore Mode
By setting the rootfs parameter (and optionally, any combination of mpX parameters), the pct restore command is automatically switched into an advanced mode. This advanced mode completely ignores the rootfs and mpX configuration options contained in the backup archive, and instead only uses the options explicitly provided as parameters.
This mode allows flexible configuration of mount point settings at restore time, for example:
Set target storages, volume sizes and other options for each mount point individually
Redistribute backed up files according to new mount point scheme
Restore to device and/or bind mount points (limited to root user)
Managing Containers with pct
The “Proxmox Container Toolkit” ( pct ) is the command line tool to manage Proxmox VE containers. It enables you to create or destroy containers, as well as control the container execution (start, stop, reboot, migrate, etc.). It can be used to set parameters in the config file of a container, for example the network configuration or memory limits.
CLI Usage Examples
Create a container based on a Debian template (provided you have already downloaded the template via the web interface)
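For example (the CT ID and template file name are assumptions; list the templates you actually have with pveam list local):

```shell
# Create CT 100 from a previously downloaded Debian template:
pct create 100 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst
```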
Start container 100
Start a login session via getty
Enter the LXC namespace and run a shell as root user
Display the configuration
Reduce the memory of the container to 512MB
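The examples above might look like this, assuming container ID 100:

```shell
pct start 100            # start container 100
pct console 100          # start a login session via getty
pct enter 100            # enter the LXC namespace and run a shell as root
pct config 100           # display the configuration
pct set 100 -memory 512  # reduce the memory of the container to 512MB
```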
Destroying a container always removes it from Access Control Lists and it always removes the firewall configuration of the container. You have to activate --purge if you want to additionally remove the container from replication jobs, backup jobs and HA resource configurations.
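For example (CT ID 100 assumed):

```shell
pct destroy 100          # removes the CT, its ACL entries and firewall config
pct destroy 100 --purge  # also removes it from replication, backup and HA configs
```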
Move a mount point volume to a different storage.
Reassign a volume to a different CT. This will remove the volume mp0 from the source CT and attach it as mp1 to the target CT. In the background, the volume is renamed so that the name matches the new owner.
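A sketch of both operations; the IDs, volume names and storage name are assumptions, and the exact syntax may differ between Proxmox VE versions (older releases spell the command pct move_volume):

```shell
# Move mount point volume mp0 of CT 100 to the storage other-storage:
pct move-volume 100 mp0 other-storage
# Reassign mp0 of CT 100 to CT 101, where it becomes mp1:
pct move-volume 100 mp0 --target-vmid 101 --target-volume mp1
```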
Obtaining Debugging Logs
In case pct start is unable to start a specific container, it might be helpful to collect debugging output by passing the --debug flag (replace CTID with the container’s ID):
Alternatively, you can use the following lxc-start command, which will save the debug log to the file specified by the -o output option:
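Both variants, with CTID as a placeholder for the container's ID:

```shell
pct start CTID --debug
# or, at the LXC level, with the debug log written to a file:
lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
```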
This command will attempt to start the container in foreground mode, to stop the container run pct shutdown CTID or pct stop CTID in a second terminal.
Migration
If you have a cluster, you can migrate your Containers with the pct migrate command.
This works as long as your Container is offline. If it has local volumes or mount points defined, the migration will copy the content over the network to the target host if the same storage is defined there.
Running containers cannot be live-migrated due to technical limitations. You can do a restart migration, which shuts down, moves and then starts a container again on the target node. As containers are very lightweight, this normally results in a downtime of only some hundreds of milliseconds.
A restart migration can be done through the web interface or by using the --restart flag with the pct migrate command.
A restart migration will shut down the Container and kill it after the specified timeout (the default is 180 seconds). Then it will migrate the Container like an offline migration and when finished, it starts the Container on the target node.
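For example (the CT ID and target node name are assumptions):

```shell
pct migrate 100 targetnode            # offline migration
pct migrate 100 targetnode --restart  # restart migration of a running container
```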
Configuration
For that reason, it is usually better to use the pct command to generate and modify those files, or do the whole thing using the GUI. Our toolkit is smart enough to instantaneously apply most changes to running containers. This feature is called “hot plug”, and there is no need to restart the container in that case.
In cases where a change cannot be hot-plugged, it will be registered as a pending change (shown in red color in the GUI). They will only be applied after rebooting the container.
File Format
The container configuration file uses a simple colon-separated key/value format. Each line has the following format:
Blank lines in those files are ignored, and lines starting with a # character are treated as comments and are also ignored.
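A short illustrative fragment (the values are made up):

```
# this is a comment
arch: amd64
ostype: debian
hostname: www
memory: 512
swap: 512
```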
It is possible to add low-level, LXC style configuration directly, for example:
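For example (the init command shown is hypothetical):

```
lxc.init.cmd: /sbin/my_own_init
```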
The settings are passed directly to the LXC low-level tools.
Snapshots
When you create a snapshot, pct stores the configuration at snapshot time into a separate snapshot section within the same configuration file. For example, after creating a snapshot called “testsnapshot”, your configuration file will look like this:
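An illustrative sketch (all values are made up): the current configuration gains a parent reference, and the snapshot state is kept in its own section:

```
# current configuration
memory: 512
swap: 512
parent: testsnapshot

[testsnapshot]
# configuration at snapshot time
memory: 512
swap: 512
snaptime: 1457170803
```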
Options
OS architecture type.
Console mode. By default, the console command tries to open a connection to one of the available tty devices. By setting cmode to console it tries to attach to /dev/console instead. If you set cmode to shell, it simply invokes a shell inside the container (no login).
Attach a console device (/dev/console) to the container.
The number of cores assigned to the container. A container can use all available cores by default.
Limit of CPU usage.
If the computer has 2 CPUs, it has a total of 2 units of CPU time. A value of 0 indicates no CPU limit.
CPU weight for a VM. Argument is used in the kernel fair scheduler. The larger the number is, the more CPU time this VM gets. Number is relative to the weights of all the other running VMs.
Try to be more verbose. For now this only enables debug log-level on start.
Description for the Container. Shown in the web-interface CT’s summary. This is saved as comment inside the configuration file.
features : [force_rw_sys= ] [,fuse= ] [,keyctl= ] [,mknod= ] [,mount= ] [,nesting= ]
Allow containers access to advanced features.
Allow using fuse file systems in a container. Note that interactions between fuse and the freezer cgroup can potentially cause I/O deadlocks.
For unprivileged containers only: Allow the use of the keyctl() system call. This is required to use docker inside a container. By default unprivileged containers will see this system call as non-existent. This is mostly a workaround for systemd-networkd, as it will treat it as a fatal error when some keyctl() operations are denied by the kernel due to lacking permissions. Essentially, you can choose between running systemd-networkd or docker.
Allow unprivileged containers to use mknod() to add certain device nodes. This requires a kernel with seccomp trap to user space support (5.3 or newer). This is experimental.
Allow mounting file systems of specific types. This should be a list of file system types as used with the mount command. Note that this can have negative effects on the container’s security. With access to a loop device, mounting a file can circumvent the mknod permission of the devices cgroup, mounting an NFS file system can block the host’s I/O completely and prevent it from rebooting, etc.
Allow nesting. Best used with unprivileged containers with additional id mapping. Note that this will expose procfs and sysfs contents of the host to the guest.
Script that will be executed during various steps in the container’s lifetime.
Set a host name for the container.
Amount of RAM for the VM in MB.
mp[n] : [volume= ] [,acl= ] [,backup= ] [,mountoptions= ] [,mp= ] [,quota= ] [,replicate= ] [,ro= ] [,shared= ] [,size= ]
Use volume as container mount point. Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.
Explicitly enable or disable ACL support.
Whether to include the mount point in backups (only used for volume mount points).
Extra mount options for rootfs/mps.
Path to the mount point as seen from inside the container.
Enable user quotas inside the container (not supported with zfs subvolumes)
Will include this volume to a storage replica job.
Read-only mount point
Mark this non-volume mount point as available on all nodes.
This option does not share the mount point automatically, it assumes it is shared already!
Volume size (read only value).
Volume, device or directory to mount into the container.
Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
net[n] : name= [,bridge= ] [,firewall= ] [,gw= ] [,gw6= ] [,hwaddr= ] [,ip= ] [,ip6= ] [,mtu= ] [,rate= ] [,tag= ] [,trunks= ] [,type= ]
Specifies network interfaces for the container.
Bridge to attach the network device to.
Controls whether this interface’s firewall rules should be used.
Default gateway for IPv4 traffic.
Default gateway for IPv6 traffic.
A common MAC address with the I/G (Individual/Group) bit not set.
IPv4 address in CIDR format.
IPv6 address in CIDR format.
Maximum transfer unit of the interface. (lxc.network.mtu)
Name of the network device as seen from inside the container. (lxc.network.name)
Apply rate limiting to the interface
VLAN tag for this interface.
VLAN IDs to pass through the interface.
Network interface type.
Specifies whether a VM will be started during system bootup.
Sets the protection flag of the container. This will prevent the container and its disks from being removed or updated.
rootfs : [volume=] [,acl= ] [,mountoptions= ] [,quota= ] [,replicate= ] [,ro= ] [,shared= ] [,size= ]
Use volume as container root.
Explicitly enable or disable ACL support.
Extra mount options for rootfs/mps.
Enable user quotas inside the container (not supported with zfs subvolumes)
Will include this volume to a storage replica job.
Read-only mount point
Mark this non-volume mount point as available on all nodes.
This option does not share the mount point automatically, it assumes it is shared already!
Volume size (read only value).
Volume, device or directory to mount into the container.
Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain nor nameserver.
Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown is done in reverse order. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped.
Amount of SWAP for the VM in MB.
Tags of the Container. This is only meta information.
Time zone to use in the container. If the option isn’t set, nothing is done. Can be set to host to match the host time zone, or to an arbitrary time zone option from /usr/share/zoneinfo/zone.tab.
Specify the number of ttys available to the container.
Makes the container run as unprivileged user. (Should not be modified manually.)
Reference to unused volumes. This is used internally, and should not be modified manually.