The version of the vegas kernel is too old, what should I do?
FATAL: kernel too old #190
otakutyrant commented Feb 15, 2021 •
I would like to provide a detailed report, but I am away from my PC and am posting this issue from my phone instead. Actually, I think there is little need to go into detail, because a new version of glibc was released today, so from now on everyone who updates their system will probably hit the same issue. I think it may be time to update the kernel of ArchWSL and make a release.
Update: the specific issue is that when you upgrade the system, you get “FATAL: kernel too old” after glibc and some programs are updated (maybe because they depend on it?), and then at every shell prompt.
claudiocabral commented Feb 15, 2021
Had the same issue; in the end I “solved” it by updating the kernel with these instructions and switching to WSL 2.
Enter-tainer commented Feb 15, 2021 •
Had the same issue; in the end I “solved” it by updating the kernel with these instructions and switching to WSL 2.
Is it possible for me to stay on WSL 1? WSL 2 has terrible I/O performance on Windows disks; a simple git status can take more than 10 seconds.
Esgariot commented Feb 15, 2021
It’s most likely due to the glibc 2.33-4 update which happened today.
Ca1se commented Feb 15, 2021
I have the same issue; Arch WSL couldn’t work after the glibc update.
claudiocabral commented Feb 15, 2021
Fijxu commented Feb 15, 2021
Had the same issue; in the end I “solved” it by updating the kernel with these instructions and switching to WSL 2.
otakutyrant commented Feb 16, 2021
Had the same issue; in the end I “solved” it by updating the kernel with these instructions and switching to WSL 2.
I studied those instructions and found out that installing a preview build of Windows 10 is required, but I do not want to put my working computer at risk by running an unstable system. It can’t be helped.
I googled how to upgrade the kernel of WSL but found no useful information.
krnets commented Feb 16, 2021 •
Set your distribution version to WSL 2:
If the version still shows 1, run the following and reboot:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
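For reference, the conversion itself is done with the wsl command; a likely sequence from a PowerShell prompt, assuming the distribution is registered under the name Arch (check the real name with the list command), would be:

wsl -l -v                  # list registered distributions and their WSL versions
wsl --set-version Arch 2   # convert the named distribution to WSL 2
wsl -l -v                  # confirm the version column now shows 2

The dism command above enables the Virtual Machine Platform feature that WSL 2 needs; run it and reboot if the conversion complains that the feature is missing.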
dynxer commented Feb 16, 2021 •
For those who want to stay on WSL 1 and want to fix the problem, a simple (but also risky) solution is to overwrite the files in the WSL installation directly with the ones from the old glibc package.
According to this document, WSL 1 stores Linux permissions in NTFS extended attributes. Overwriting the existing files in place (e.g. with dd) preserves those extended attributes, which is also safer.
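A purely hypothetical illustration of the in-place idea (the paths are placeholders, and this is not a recommendation): dd writes into the existing file, so the same NTFS file, and with it the extended attributes that WSL 1 uses for Linux ownership and permissions, is reused rather than recreated, whereas extracting an archive over it from Windows would create new files without those attributes.

# hypothetical paths: overwrite the broken library in place with the copy from the older package
dd if=old-glibc/usr/lib/libc.so.6 of=rootfs/usr/lib/libc.so.6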
devtryongit commented Feb 16, 2021 •
For those who want to stay on WSL 1 and want to fix the problem, a simple (but also risky) solution is to overwrite the files in the WSL installation directly with the ones from the old glibc package.
According to this document, WSL 1 stores Linux permissions in NTFS extended attributes. Overwriting the existing files in place (e.g. with dd) preserves those extended attributes, which is also safer.
In the end I had to install a new instance of Arch and set the server to the archive as suggested by @claudiocabral. Packages from the 14th of February are still ok.
Very annoying that I had to reinstall and reconfigure everything, though.
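For anyone following along, “set the server to the archive” usually means pointing pacman at an Arch Linux Archive snapshot and letting it downgrade; a minimal sketch, assuming the 14 February snapshot mentioned above:

# /etc/pacman.d/mirrorlist: comment out the normal mirrors and use the dated snapshot
Server = https://archive.archlinux.org/repos/2021/02/14/$repo/os/$arch

# then refresh and allow downgrades to the snapshot's package versions
pacman -Syyuu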
liyiliuxingyu commented Feb 17, 2021
dynxer commented Feb 17, 2021
After more testing, I have refined that solution. Here are the steps.
The whole point of the process is to make sure that the replaced files end up with the proper permissions. While this may be the safest set of steps, please note that it still carries risks. You can also try upgrading to WSL 2 first, downgrading glibc, and then going back to WSL 1; that should work too.
There is also glibc-linux4 in the AUR.
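A sketch of that alternative route, assuming the distribution is registered as Arch (wsl commands run from PowerShell on the Windows side; the downgrade itself can use the archive snapshot shown earlier):

wsl --set-version Arch 2   # WSL 2 ships a newer kernel, so the broken glibc still runs
# inside the distribution: downgrade glibc, e.g. pacman -Syyuu against the dated snapshot
wsl --set-version Arch 1   # switch back once glibc has been downgraded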
dynxer commented Feb 17, 2021
In the end I had to install a new instance of Arch and set the server to the archive as suggested by @claudiocabral. Packages from the 14th of February are still ok.
Very annoying that I had to reinstall and reconfigure everything, though.
I also found in testing that the shell closing immediately is probably because the overwritten files’ permissions were reset and they are no longer readable or executable.
If you still want to try to fix it, consider referring to the new comment I just posted. Following it should ensure that the overwritten files have the correct permissions from the start. Since you have already overwritten files once, you may need to replace the entire package’s files.
FATAL: kernel too old #256
csreynolds commented Feb 25, 2021 •
The last build works fine; the build from last night won’t even start.
My docker host is:
The arch-base image does the same thing since the changes on 2021-02-22.
It runs fine on my Fedora 33 installation, which is currently on kernel 5.10.17-200.fc33.
The problem is with the Arch Linux source. They defaulted to zstd compression, which is only supported in kernel 5.9+. Any host machine running a kernel below 5.9 will get this error.
Looking at this chart of distribution default kernel versions, it’s a pretty bold breaking change they made:
https://zonedstorage.io/distributions/linux/
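Whichever explanation turns out to apply (zstd, or the glibc change discussed further down), checking the host kernel first tells you whether a box is in the affected range:

uname -r                                    # kernel of the Docker host, e.g. 3.10.105 on many Synology models
docker info --format '{{.KernelVersion}}'   # the same information as reported by the Docker daemon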
DamageDoctor commented Feb 26, 2021
Same issue here
Synology DS1817+
DSM 6.2.3-25426 Update 3
Docker App 18.09.0-0513
wojo commented Feb 26, 2021
Same here on the latest DSM 6.2.3-25426 Update 3.
ChrisBaker97 commented Feb 26, 2021
DamageDoctor commented Feb 26, 2021
I ended up just rolling back to the previous version like @csreynolds. The joy of :latest 😉
groot-stuff commented Feb 26, 2021 •
Latest version pushed to dockerhub (2.0.4.dev38-g23a48dd01-3-02) results in the error below in both binhex-radarr and binhex-sonarr dockers on unRaid 6.8.3. When going to http://[IP]:8112/json in the browser an HTTP 405 error is returned.
Unable to communicate with Deluge. The operation has timed out.: ‘http://[IP]:8112/json’
Rolling back to 2.0.4.dev38_g23a48dd01-3-01 resolved the issue.
grayhat917 commented Feb 26, 2021
Same here. 1517+, kernel version 3.10.105.
Ashkaan commented Feb 26, 2021
neoKushan commented Feb 26, 2021
Had the same issue as others.
DamageDoctor commented Feb 26, 2021 •
The problem is with the Arch Linux source. They defaulted to zstd compression, which is only supported in kernel 5.9+. Any host machine running a kernel below 5.9 will get this error.
Looking at this chart of distribution default kernel versions, it’s a pretty bold breaking change they made:
https://zonedstorage.io/distributions/linux/
Super aggressive move. Time to turn off a bunch of docker image auto-updates I guess.
binhex commented Feb 27, 2021
The problem is with the Arch Linux source. They defaulted to zstd compression, which is only supported in kernel 5.9+. Any host machine running a kernel below 5.9 will get this error.
news here:
https://archlinux.org/news/moving-to-zstandard-images-by-default-on-mkinitcpio/
Looking at this chart of distribution default kernel versions, it’s a pretty bold breaking change they made:
https://zonedstorage.io/distributions/linux/
Super aggressive move. Time to turn off a bunch of docker image auto-updates I guess.
I don’t believe this is the issue, as I am running kernel 4.9.x with no problems; I THINK it’s probably related to the glibc changes, see link: https://bugs.archlinux.org/task/69563
Extract from the post; note the highlighted section:
quorn23 commented Feb 28, 2021 •
@binhex would that be the arch-delugevpn:test one? If so, could you push the image? The one on Docker Hub is out of date. (If that’s not what you mean by [testing], apologies.)
binhex commented Feb 28, 2021
@binhex would that be the arch-delugevpn:test one? If so, could you push the image? The one on Docker Hub is out of date. (If that’s not what you mean by [testing], apologies.)
I think you misunderstand: the glibc package 2.33-4 is included in both latest and test, therefore both will fail for Synology users. Sadly, I think this could be the end of the road for Synology support until the kernel gets bumped up to 4.4+.
neoKushan commented Mar 1, 2021
What a shame that would be. Hopefully DSM 7 will include a kernel bump (I’d be surprised if it doesn’t but none of the literature around it specifies), but that’s currently in Beta and some older Synology devices are probably going to be stranded.
For those devices, using the image binhex/arch-delugevpn:2.0.4.dev38_g23a48dd01-3-01 still works a charm for me. Until then, I guess we’re waiting. Maybe it’s time to kick my home server needs up a notch, my little synology is pushing itself hard as it is.
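A minimal way to pin that tag while waiting (the port comes from the comments above; the volume paths and the NET_ADMIN capability are assumptions about a typical VPN container setup, not the image's full documentation):

docker pull binhex/arch-delugevpn:2.0.4.dev38_g23a48dd01-3-01
docker run -d --name delugevpn --cap-add=NET_ADMIN \
  -p 8112:8112 \
  -v /path/to/config:/config -v /path/to/data:/data \
  binhex/arch-delugevpn:2.0.4.dev38_g23a48dd01-3-01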
c-hri-s commented Mar 1, 2021
I’m running a DS918+ on DSM 6.2.3-25426 Update 3 and I have Linux babynas 4.4.59+ #25426 SMP PREEMPT Mon Dec 14 18:48:50 CST 2020 x86_64 GNU/Linux synology_apollolake_918+
ChrisBaker97 commented Mar 2, 2021 •
@neoKushan it looks like people running the DSM 7 beta are having the same problem. Some brief research suggests that the kernel version is tied to the hardware, not to DSM, and that while Synology backports features and fixes into the older kernels, there is never a version bump. So it would seem that the only way to address this on the Synology end would be to buy a newer NAS.
That being said, I’m not sure what the downside to freezing this container at 2.0.4.dev38_g23a48dd01-3-01 would be for those of us on older Synology hardware, given that Deluge itself hasn’t been updated in almost three years anyway?
hot22shot commented Mar 4, 2021
@ChrisBaker97 as I run DSM 7 on my DS916 I can confirm that; Synology locks the kernel down per model and barely bumps it. FYI, I’m running Linux kernel version 3.10.108. That will definitely cause me some issues with Docker containers as time goes by.
neoKushan commented Mar 4, 2021
Ouch, I run a DS916+ myself but haven’t upgraded to DSM 7. Very poor show on Synology’s part, but I guess you could argue that it’s 5-year-old hardware.
I am personally outgrowing my DS916+ anyway, so I will be migrating to something a bit better and more modern in the near future, but this does suck for anyone running said hardware. More and more container images are going to migrate away from these old kernel versions as time goes on. As @ChrisBaker97 says, there’s probably no real issue freezing on the most recent working version of this particular image.
binhex commented Mar 4, 2021
That being said, I’m not sure what the downside to freezing this container at 2.0.4.dev38_g23a48dd01-3-01 would be for those of us on older Synology hardware, given that Deluge itself hasn’t been updated in almost three years anyway?
Sadly there is this (resolved in ‘latest’), which leaves you guys between a rock and a hard place: binhex/arch-qbittorrentvpn#80
ChrisBaker97 commented Mar 4, 2021
The more I learn, the more I’m leaning toward steering my Synology into being basically just a file server, while offloading the application and networking services to a separate device. I really don’t want the hassle of keeping up my own separate server, but I also never would’ve imagined that Synology was freezing the kernel on a version that was EOL over three years ago, so I guess I desire to have a little more control over the software environment than I can get from them.
hot22shot commented Mar 4, 2021
Can’t argue with you on that; my DSM will also go back to what it was in the beginning: a storage server.
I’ll move my software to a better supported platform.
And when the time comes to replace it, I’ll keep Synology’s limitations in mind.
neoKushan commented Mar 4, 2021
That being said, I’m not sure what the downside to freezing this container at 2.0.4.dev38_g23a48dd01-3-01 would be for those of us on older Synology hardware, given that Deluge itself hasn’t been updated in almost three years anyway?
Sadly there is this (resolved in ‘latest’), which leaves you guys between a rock and a hard place: binhex/arch-qbittorrentvpn#80
Well. Poo. That’s a humdinger of an issue.
Can’t argue with you on that; my DSM will also go back to what it was in the beginning: a storage server.
I’ll move my software to a better supported platform.
And when the time comes to replace it, I’ll keep Synology’s limitations in mind.
Thirding this one. I’ve already been researching and scoping where I go with this next, and my current solution would be a custom build running Unraid. The likes of FreeNAS (TrueNAS) can run applications, but realistically it’s more geared towards storage, whereas Unraid can manage your storage but has first-class support for running any containers you might want while having minimal management overhead. The software is slick, well supported, and does basically everything that DSM does (at least for my use case), as well as some other party pieces. Cobble together some commodity hardware, slap a load of drives in there, and you’ve got something functionally better than a Synology, but just as polished and easy to use. I’d recoup most of that cost from selling the DS916+ as well (which has held its value pretty well over the years).
I used to use a lot more functionality from DSM itself, but since I went down the container route I ended up using less and less of it and relying on containerised applications instead. Much easier to manage and keep updated, and no reliance on Synology to fix things when stuff breaks (which they’re notoriously slow to do). It was just a matter of time, really.
binhex commented Mar 4, 2021
FWIW, my main support base is unRAID and will be for the foreseeable future. I run a medium-sized home server and unRAID fits my needs well, the community is strong and friendly, and Limetech keep making improvements with each version (the latest release having JUST dropped). Yes, you have to pay for it, but the ROI for me has been outstanding.
ChrisBaker97 commented Mar 4, 2021
At the risk of turning this issue discussion into a forum post.
I was already in the market for a rackmount Synology to replace my DS1815+. Having recently completed a 30-month-long cycle upgrading the capacity of all eight drives, I’m really not in a position to easily switch to another vendor, since I’d have to come up with a way to park a ton of data if I can’t just move the drives over directly. I also see value in the Synology Hybrid RAID, which, as far as I know, isn’t functionality that you can easily replicate with the alternatives. (It sure was nice to be able to swap 8TB drives in for the old 4TB ones piecemeal as they failed over time, while gaining the additional storage immediately, rather than waiting until they were all upgraded first. It’ll be even nicer in DSM7, when it will no longer require a lengthy rebuild if you’re just swapping out for a larger drive pre-failure.)
For several years, I’ve been waiting for Synology to come out with an 8 to 12-bay rackmount unit that had sufficient processing power to do some serious video transcoding. The new RS1221+ comes close, but is still a bit of a disappointment there, as well as with its lack of built-in 10GbE and M.2 NVMe SSD slots (which can both admittedly be added, at additional expense, via an add-on card).
So I guess I begrudgingly feel like Synology adds enough value to justify their price premium, even when only serving as a NAS, and as much as I’d like to punish them for basically making Docker a ticking time bomb here, I think I’ve known for a while that I really should be running two separate appliances for storage and services anyway. I’ve definitely been looking at unRAID as an OS, and now I’m just wondering if I can squeeze enough processing power into a 1U shallow box, or if I need to go to 2U for no other reason than to fit an adequate heat sink. Another option I’ve been meaning to look into is a NUC.
neoKushan commented Mar 4, 2021 •
I also don’t want to turn this into a discussion on the matter (And apologies if this is too off topic) but for whatever it’s worth, after thinking about it earlier I’ve pulled the trigger on ordering the components to build my own unRAID system.
My requirements are fairly typical of a lot of people in terms of media consumption/plex transcoding but I’ll be running some hefty VMs/Containers (Like game servers) as well.
I could have saved money here by going with an earlier-generation Intel CPU, and 6 cores is almost certainly overkill for most. A Pentium transcodes just as well, but I have gone for a higher core count here for my other needs.
A motherboard with 4 memory slots and plenty of SATA ports. I only put in 2 sticks of memory for now, but wanted the 4 slots in case I need more in the future. 32GB of RAM is fine for my needs for now, but I can expand easily if I need to.
An NVMe drive for cache/scratch/performance where needed. 1TB is overkill for sure; a 256GB drive would probably be enough for most (if you need one at all), but this leaves me room for running more VMs/container images from it.
Finally, a decent, well-regarded case that can hold plenty of 3.5 inch disks. This one holds 8 + space for 2 more (2.5 or 3.5), which is great for my needs. If I had the room I’d have looked at a rackmount, but this is going to live in my office so acoustics are a factor here.
Not listed: SATA cables, an LSI card for more SATA ports and some SATA power splitters for the PSU. YMMV.
In case anyone is thinking of doing a similar thing themselves, you can use that as a base or look at the excellent NAS Killer build guides from serverbuilds for inspiration.
One last thing to add: I am SUPER glad I have my environment configured as a docker-compose script. I know I can’t use compose out of the box with unRAID but it’s not difficult to do and will make migrating a cinch.
FATAL: kernel too old
I unpacked a fresh stage3.
Are you kidding? That kernel is from the summer of 2009; it’s 2016 now and 4.5 is about to come out.
So what? I can’t change the kernel without access to the hardware.
It doesn’t even seem to be about libc:
Originally I wanted a uclibc or musl system, but there were no such stage3 tarballs.
Then you don’t need a stage3; bootstrap it yourself.
Are there any instructions on where to start?
Portage snapshots were already found for you in the other thread. The harder part will be the patch tarballs, which may no longer be on the mirrors.
For catalyst you need a working system of the same architecture.
The first thing that came to mind (a rough sketch follows the list):
1. Build a toolchain plus a minimal system (bash, coreutils, sed, grep, etc.) with crossdev
2. Run all of that under qemu-user
3. Use Catalyst to build a full stage3
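A rough sketch of steps 1-2 on a working amd64 Gentoo host (the ARM target tuple is only a guess, and qemu still needs binfmt registration afterwards):

# 1. cross toolchain, then a minimal system cross-built into /usr/<tuple>/
crossdev --stable -t armv6j-hardfloat-linux-gnueabi
armv6j-hardfloat-linux-gnueabi-emerge -av bash coreutils sed grep

# 2. user-mode qemu so the cross-built binaries can run on the host
echo 'QEMU_USER_TARGETS="arm"' >> /etc/portage/make.conf
USE="static-user" emerge -av app-emulation/qemu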
Time to reinstall Linux!
I took a stage3 six months ago and it worked.
An update of glibc recently landed in Debian testing:
Starting with version 2.21-1, the glibc requires a 3.2 or later Linux kernel. If you use an older kernel, please upgrade it *before* installing this glibc version. Failing to do so will end-up with the following failure:
Note: This obviously does not apply to non-Linux kernels.
They gave fair warning, as one should. But kernel 3.1 only just turned 4 years old; that is a fairly harsh requirement, I’d say.
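In practice the warning boils down to: upgrade the kernel first, or hold glibc back until you can. A minimal sketch of the hold on Debian (libc6 is the standard glibc runtime package):

apt-mark hold libc6      # keep the current glibc until the kernel is >= 3.2
# ... install and boot a newer kernel, then:
apt-mark unhold libc6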
FATAL: kernel too old
I hope this doesn’t mean I’ll have to reflash this damn device?!
In short, as I understand it, in this case I will indeed have to reflash it with a newer kernel. It all seems to be tied to the glibc libraries; if I can’t find an older version, there’s apparently no other way?
So there’s no way to do this without a lot of hassle, right? )
Such a shame.
OK, it’s all clear; there are two options:
1. Either flash a newer kernel version, but whether that will work on this device is doubtful given the age of the hardware, namely the ARM core itself.
2. Or look for older Arch packages, which is also unlikely to pan out, because if even the official site no longer has those versions, I can’t imagine where to dig them up.
My sincere condolences, but at the same time I admire the level of incredible nerdery.
Three years ago I couldn’t get Gentoo (where even now you can find fairly ancient kernels) running on armv6l, and you want to run fast-moving Arch on it. Even with a big stretch, even with great faith, you can’t say you have a chance.
I see, a pity. Well, thanks everyone for the help!
FATAL: kernel too old #914
pc10201 commented Apr 19, 2018
FATAL: kernel too old
CentOS release 6.7 (Final)
discordianfish commented Apr 19, 2018
Doesn’t it log in which line this error occurs? Not sure what is throwing it and whether we can prevent it.
davidbirdsong commented Apr 19, 2018
I ran into this too on CentOS 6.4-6.7, but simply building it with Go 1.10 seems to work around the issue.
pc10201 commented Apr 20, 2018
The log only contains “FATAL: kernel too old”. It does not have any line info.
pc10201 commented Apr 20, 2018
This is the core dump file:
core.zip
discordianfish commented Apr 20, 2018
@davidbirdsong Hrm, odd, 0.16.0-rc.2 is supposed to be built with Go 1.10.1.
The problem here appears to be that the glibc used via cgo doesn’t support this kernel version, which is probably why your own build works on your system.
Our images are built by the golang-builder, which in turn is built upon debian:sid, which appears to only support kernel 3.2+. Came across this: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864720
@sdurrheimer Do you have thoughts on how to fix this? Wondering if we could use an older Debian version, compile glibc ourselves, or use a different distro altogether.
That being said, this kernel is old and we probably won’t spend too much time fixing it. At least we should document this limitation though.
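A separate way to sidestep the check entirely is to build without cgo: the binary then never links glibc, so glibc’s startup kernel-version check (the source of “FATAL: kernel too old”) never runs. A minimal sketch, assuming the project builds with a plain go build (this is not the project’s official build recipe):

CGO_ENABLED=0 go build ./...   # pure-Go static build; no glibc, no kernel-version check at startup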