Apr 13 19:19:48.895268 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:19:48.895327 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:19:48.895351 kernel: KASLR enabled
Apr 13 19:19:48.895365 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:19:48.895378 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:19:48.895392 kernel: random: crng init done
Apr 13 19:19:48.895409 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:19:48.895424 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:19:48.895439 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:19:48.895457 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895472 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895486 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895501 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895515 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895533 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895552 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895568 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895583 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:48.895599 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:19:48.895614 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:19:48.895629 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:19:48.895645 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:48.895660 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 13 19:19:48.895675 kernel: Zone ranges:
Apr 13 19:19:48.895691 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:19:48.895709 kernel: DMA32 empty
Apr 13 19:19:48.895724 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:19:48.895740 kernel: Movable zone start for each node
Apr 13 19:19:48.895755 kernel: Early memory node ranges
Apr 13 19:19:48.895770 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:19:48.895786 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:19:48.895801 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:19:48.895817 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:19:48.895832 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:19:48.895848 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:19:48.895863 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:19:48.895879 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:48.895924 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:19:48.895944 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:19:48.895960 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:19:48.895983 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:19:48.896000 kernel: psci: Trusted OS migration not required
Apr 13 19:19:48.896016 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:19:48.896036 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:19:48.896105 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:19:48.896122 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:19:48.896139 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:19:48.896155 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:19:48.896172 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:19:48.896188 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:19:48.896205 kernel: CPU features: detected: Spectre-v4
Apr 13 19:19:48.896221 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:19:48.896237 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:19:48.896259 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:19:48.896274 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:19:48.896281 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:19:48.896288 kernel: alternatives: applying boot alternatives
Apr 13 19:19:48.896296 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:19:48.896303 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:19:48.896367 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:19:48.896377 kernel: Fallback order for Node 0: 0
Apr 13 19:19:48.896384 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:19:48.896390 kernel: Policy zone: Normal
Apr 13 19:19:48.896397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:19:48.896408 kernel: software IO TLB: area num 2.
Apr 13 19:19:48.896415 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:19:48.896422 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 13 19:19:48.896429 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:19:48.896436 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:19:48.896443 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:19:48.896450 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:19:48.896457 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:19:48.896464 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:19:48.896471 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:19:48.896478 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:19:48.896484 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:19:48.896493 kernel: GICv3: 256 SPIs implemented
Apr 13 19:19:48.896500 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:19:48.896506 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:19:48.896513 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:19:48.896520 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:19:48.896527 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:19:48.896534 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:19:48.896541 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:19:48.896548 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:19:48.896555 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:19:48.896561 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:19:48.896570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:48.896577 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:19:48.896584 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:19:48.896591 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:19:48.896597 kernel: Console: colour dummy device 80x25
Apr 13 19:19:48.896605 kernel: ACPI: Core revision 20230628
Apr 13 19:19:48.896612 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:19:48.896619 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:19:48.896626 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:19:48.896633 kernel: landlock: Up and running.
Apr 13 19:19:48.896641 kernel: SELinux: Initializing.
Apr 13 19:19:48.896649 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:48.896656 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:48.896663 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:48.896670 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:48.896677 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:19:48.896684 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:19:48.896691 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:19:48.896698 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:19:48.896706 kernel: Remapping and enabling EFI services.
Apr 13 19:19:48.896714 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:19:48.896721 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:19:48.896729 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:19:48.896736 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:19:48.896743 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:48.896750 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:19:48.896757 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:19:48.896764 kernel: SMP: Total of 2 processors activated.
Apr 13 19:19:48.896771 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:19:48.896779 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:19:48.896787 kernel: CPU features: detected: Common not Private translations
Apr 13 19:19:48.896799 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:19:48.896808 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:19:48.896815 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:19:48.896822 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:19:48.896830 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:19:48.896837 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:19:48.896846 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:19:48.896854 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:19:48.896862 kernel: alternatives: applying system-wide alternatives
Apr 13 19:19:48.896869 kernel: devtmpfs: initialized
Apr 13 19:19:48.896876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:19:48.896884 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:19:48.896891 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:19:48.896915 kernel: SMBIOS 3.0.0 present.
Apr 13 19:19:48.896926 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:19:48.896933 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:19:48.896941 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:19:48.896948 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:19:48.896956 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:19:48.896963 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:19:48.896971 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Apr 13 19:19:48.896978 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:19:48.896985 kernel: cpuidle: using governor menu
Apr 13 19:19:48.896994 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:19:48.897001 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:19:48.897009 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:19:48.897017 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:19:48.897024 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:19:48.897031 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:19:48.897039 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:19:48.897054 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:19:48.897062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:19:48.897071 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:19:48.897079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:19:48.897087 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:19:48.897094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:19:48.897101 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:19:48.897109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:19:48.897116 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:19:48.897123 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:19:48.897130 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:19:48.897140 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:19:48.897147 kernel: ACPI: Interpreter enabled
Apr 13 19:19:48.897155 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:19:48.897163 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:19:48.897171 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:19:48.897178 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:19:48.897186 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:19:48.897355 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:19:48.897438 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:19:48.897505 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:19:48.897570 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:19:48.897634 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:19:48.897644 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:19:48.897652 kernel: PCI host bridge to bus 0000:00
Apr 13 19:19:48.897722 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:48.897782 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:19:48.897844 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:48.897927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:19:48.898017 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:19:48.898111 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:19:48.898183 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:19:48.898250 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:48.898337 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.898405 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:19:48.898481 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.898551 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:19:48.898625 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.898692 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:19:48.898768 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.898834 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:19:48.899207 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.899306 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:19:48.899383 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.899452 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:19:48.899532 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.899599 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:19:48.899835 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.899928 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:19:48.900297 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:48.900370 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:19:48.900455 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:19:48.900523 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:19:48.900601 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:48.900791 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:19:48.900864 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:48.903071 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:48.903178 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:19:48.903260 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:19:48.903339 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:19:48.903409 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:19:48.903476 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:19:48.903554 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:19:48.903622 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:19:48.903701 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:19:48.903774 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:19:48.903844 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:19:48.904078 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:19:48.904160 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:19:48.904229 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:48.904316 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:48.904385 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:19:48.904452 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:19:48.904520 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:48.904601 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:19:48.904670 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:48.904737 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:48.904812 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:19:48.904877 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:19:48.904958 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:19:48.905031 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:19:48.905140 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:48.905210 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:48.905279 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:19:48.905348 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:19:48.905420 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:19:48.905491 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:19:48.905560 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:48.905629 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:48.905700 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:19:48.905769 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:48.905840 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:48.907620 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:19:48.907712 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:48.907780 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:48.907850 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:19:48.907933 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:48.908002 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:48.908098 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:19:48.908171 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:48.908257 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:48.908326 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:19:48.908393 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:48.908464 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:19:48.908531 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:48.908600 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:19:48.908667 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:48.908740 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:19:48.908806 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:48.908874 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:19:48.908955 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:48.909030 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:48.909113 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:48.909187 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:48.909256 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:48.909323 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:48.909392 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:48.909460 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:19:48.909528 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:48.909600 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:19:48.909672 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:19:48.909742 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:19:48.909808 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:19:48.909876 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:19:48.909955 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:19:48.910024 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:19:48.910136 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:19:48.910207 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:19:48.910280 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:19:48.910346 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:19:48.910412 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:19:48.910478 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:19:48.910543 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:19:48.910609 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:19:48.910675 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:19:48.910741 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:19:48.910810 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:19:48.910877 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:19:48.910956 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:19:48.911030 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:19:48.911120 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:19:48.911192 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:48.911260 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:19:48.911327 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:19:48.911399 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:19:48.911465 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:19:48.911531 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:48.911604 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:19:48.911676 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:19:48.911742 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:19:48.911809 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:19:48.911875 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:48.911975 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:48.912063 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:19:48.912138 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:19:48.912207 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:19:48.912280 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:19:48.912347 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:48.912423 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:48.912490 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:19:48.912556 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:19:48.912630 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:19:48.912697 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:48.912770 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:19:48.912843 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:19:48.912923 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:19:48.912993 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:19:48.913097 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:19:48.913170 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:48.913244 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:19:48.913314 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:19:48.913381 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:19:48.913454 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:19:48.913520 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:48.913586 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:48.913662 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:19:48.913731 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:19:48.913823 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:19:48.913895 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:19:48.914067 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:19:48.914145 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:48.914210 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:48.914278 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:19:48.914341 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:19:48.914405 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:48.914470 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:48.914535 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:19:48.914602 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:19:48.914672 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:19:48.914738 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:48.914804 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:48.914863 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:19:48.917124 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:48.917227 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:19:48.917296 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:19:48.917364 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:48.917432 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:19:48.917493 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:19:48.917553 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:48.917620 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:19:48.917681 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:19:48.917743 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:48.917810 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:19:48.917870 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:19:48.917966 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:48.918061 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:19:48.918130 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:19:48.918192 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:48.918265 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:19:48.918326 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:48.918390 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:48.918457 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:19:48.918520 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:48.918580 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:48.918647 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:19:48.918708 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:48.918768 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:48.918840 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:19:48.921509 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:19:48.921624 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:48.921636 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:19:48.921644 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:19:48.921652 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:19:48.921660 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:19:48.921668 kernel: iommu: Default domain type: Translated
Apr 13 19:19:48.921676 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:19:48.921684 kernel: efivars: Registered efivars operations
Apr 13 19:19:48.921692 kernel: vgaarb: loaded
Apr 13 19:19:48.921704 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:19:48.921712 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:19:48.921720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:19:48.921728 kernel: pnp: PnP ACPI init
Apr 13 19:19:48.921805 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:19:48.921817 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:19:48.921825 kernel: NET: Registered PF_INET protocol family
Apr 13 19:19:48.921833 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:19:48.921844 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:19:48.921852 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:19:48.921860 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:19:48.921868
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:19:48.921876 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:19:48.921884 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:48.921891 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:48.921920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:19:48.922007 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:19:48.922024 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:19:48.922032 kernel: kvm [1]: HYP mode not available Apr 13 19:19:48.922053 kernel: Initialise system trusted keyrings Apr 13 19:19:48.922063 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:19:48.922071 kernel: Key type asymmetric registered Apr 13 19:19:48.922079 kernel: Asymmetric key parser 'x509' registered Apr 13 19:19:48.922087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:19:48.922094 kernel: io scheduler mq-deadline registered Apr 13 19:19:48.922102 kernel: io scheduler kyber registered Apr 13 19:19:48.922113 kernel: io scheduler bfq registered Apr 13 19:19:48.922122 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:19:48.922202 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:19:48.922271 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:19:48.922337 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.922406 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:19:48.922472 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:19:48.922541 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.922611 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:19:48.922677 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:19:48.922742 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.922812 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:19:48.922889 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:19:48.923011 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.923101 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:19:48.923171 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:19:48.923242 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.923313 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:19:48.923380 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:19:48.923452 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.923521 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:19:48.923587 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:19:48.923653 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.923722 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:19:48.923788 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:19:48.923858 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.923869 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:19:48.924745 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 13 19:19:48.924829 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 13 19:19:48.924895 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:48.925005 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 13 19:19:48.925021 kernel: ACPI: button: Power Button [PWRB] Apr 13 19:19:48.925029 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 13 19:19:48.925140 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 13 19:19:48.925218 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 13 19:19:48.925230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:19:48.925238 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 13 19:19:48.925305 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 13 19:19:48.925317 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 13 19:19:48.925325 kernel: thunder_xcv, ver 1.0 Apr 13 19:19:48.925336 kernel: thunder_bgx, ver 1.0 Apr 13 19:19:48.925346 kernel: nicpf, ver 1.0 Apr 13 19:19:48.925353 kernel: nicvf, ver 1.0 Apr 13 19:19:48.925431 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:19:48.925494 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:19:48 UTC (1776107988) Apr 13 19:19:48.925504 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 19:19:48.925512 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 13 19:19:48.925520 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:19:48.925530 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:19:48.925538 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:19:48.925545 kernel: Segment Routing with IPv6 Apr 13 19:19:48.925553 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 13 19:19:48.925561 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:19:48.925568 kernel: Key type dns_resolver registered Apr 13 19:19:48.925576 kernel: registered taskstats version 1 Apr 13 19:19:48.925584 kernel: Loading compiled-in X.509 certificates Apr 13 19:19:48.925592 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:19:48.925601 kernel: Key type .fscrypt registered Apr 13 19:19:48.925609 kernel: Key type fscrypt-provisioning registered Apr 13 19:19:48.925616 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 13 19:19:48.925624 kernel: ima: Allocated hash algorithm: sha1 Apr 13 19:19:48.925632 kernel: ima: No architecture policies found Apr 13 19:19:48.925640 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 13 19:19:48.925648 kernel: clk: Disabling unused clocks Apr 13 19:19:48.925656 kernel: Freeing unused kernel memory: 39424K Apr 13 19:19:48.925664 kernel: Run /init as init process Apr 13 19:19:48.925671 kernel: with arguments: Apr 13 19:19:48.925681 kernel: /init Apr 13 19:19:48.925688 kernel: with environment: Apr 13 19:19:48.925696 kernel: HOME=/ Apr 13 19:19:48.925703 kernel: TERM=linux Apr 13 19:19:48.925713 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:19:48.925723 systemd[1]: Detected virtualization kvm. Apr 13 19:19:48.925731 systemd[1]: Detected architecture arm64. Apr 13 19:19:48.925741 systemd[1]: Running in initrd. Apr 13 19:19:48.925749 systemd[1]: No hostname configured, using default hostname. Apr 13 19:19:48.925757 systemd[1]: Hostname set to . 
Apr 13 19:19:48.925765 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:19:48.925774 systemd[1]: Queued start job for default target initrd.target. Apr 13 19:19:48.925782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:19:48.925790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:19:48.925799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 19:19:48.925809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:19:48.925817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 19:19:48.925826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 19:19:48.925835 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 19:19:48.925844 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 19:19:48.925852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:19:48.925861 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:19:48.925870 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:19:48.925878 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:19:48.925887 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:19:48.925895 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:19:48.925915 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:19:48.925923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:19:48.925931 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 13 19:19:48.925940 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:19:48.925948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:19:48.925959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:19:48.925967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:19:48.925977 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:19:48.925986 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 19:19:48.925994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:19:48.926002 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 19:19:48.926011 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 19:19:48.926019 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:19:48.926028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:19:48.926037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:48.926054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 19:19:48.926085 systemd-journald[237]: Collecting audit messages is disabled. Apr 13 19:19:48.926107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:19:48.926116 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 19:19:48.926125 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:19:48.926134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:48.926143 systemd-journald[237]: Journal started Apr 13 19:19:48.926163 systemd-journald[237]: Runtime Journal (/run/log/journal/574ab14712be439a82d71e6b0bf6259c) is 8.0M, max 76.6M, 68.6M free. 
Apr 13 19:19:48.917139 systemd-modules-load[238]: Inserted module 'overlay' Apr 13 19:19:48.928288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:19:48.932191 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:19:48.935373 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 19:19:48.937984 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 13 19:19:48.938918 kernel: Bridge firewalling registered Apr 13 19:19:48.942169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:19:48.946029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:19:48.953135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:19:48.956087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:19:48.965211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:19:48.971612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:19:48.978226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:19:48.988319 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 19:19:48.991971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:19:48.997941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:19:49.003461 dracut-cmdline[270]: dracut-dracut-053 Apr 13 19:19:49.005180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 19:19:49.008408 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:19:49.040668 systemd-resolved[279]: Positive Trust Anchors: Apr 13 19:19:49.040683 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:19:49.040716 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:19:49.051096 systemd-resolved[279]: Defaulting to hostname 'linux'. Apr 13 19:19:49.053272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:19:49.053952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:19:49.088933 kernel: SCSI subsystem initialized Apr 13 19:19:49.093927 kernel: Loading iSCSI transport class v2.0-870. Apr 13 19:19:49.100935 kernel: iscsi: registered transport (tcp) Apr 13 19:19:49.114201 kernel: iscsi: registered transport (qla4xxx) Apr 13 19:19:49.114267 kernel: QLogic iSCSI HBA Driver Apr 13 19:19:49.164307 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 19:19:49.171117 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 13 19:19:49.189969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 19:19:49.190065 kernel: device-mapper: uevent: version 1.0.3 Apr 13 19:19:49.190090 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 19:19:49.243970 kernel: raid6: neonx8 gen() 15638 MB/s Apr 13 19:19:49.256320 kernel: raid6: neonx4 gen() 15462 MB/s Apr 13 19:19:49.273300 kernel: raid6: neonx2 gen() 13157 MB/s Apr 13 19:19:49.289974 kernel: raid6: neonx1 gen() 10407 MB/s Apr 13 19:19:49.306967 kernel: raid6: int64x8 gen() 6922 MB/s Apr 13 19:19:49.323969 kernel: raid6: int64x4 gen() 7255 MB/s Apr 13 19:19:49.340970 kernel: raid6: int64x2 gen() 6087 MB/s Apr 13 19:19:49.357975 kernel: raid6: int64x1 gen() 5024 MB/s Apr 13 19:19:49.358090 kernel: raid6: using algorithm neonx8 gen() 15638 MB/s Apr 13 19:19:49.374976 kernel: raid6: .... xor() 11900 MB/s, rmw enabled Apr 13 19:19:49.375063 kernel: raid6: using neon recovery algorithm Apr 13 19:19:49.380310 kernel: xor: measuring software checksum speed Apr 13 19:19:49.380366 kernel: 8regs : 19783 MB/sec Apr 13 19:19:49.380388 kernel: 32regs : 19664 MB/sec Apr 13 19:19:49.381146 kernel: arm64_neon : 27007 MB/sec Apr 13 19:19:49.381189 kernel: xor: using function: arm64_neon (27007 MB/sec) Apr 13 19:19:49.432972 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 19:19:49.447196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:19:49.454232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:19:49.479514 systemd-udevd[455]: Using default interface naming scheme 'v255'. Apr 13 19:19:49.483013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:19:49.491816 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 13 19:19:49.509562 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Apr 13 19:19:49.547062 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:19:49.554162 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:19:49.606076 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:19:49.613185 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:19:49.633177 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:19:49.635578 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:19:49.636399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:19:49.639308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:19:49.647548 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:19:49.667197 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:19:49.703938 kernel: scsi host0: Virtio SCSI HBA Apr 13 19:19:49.708566 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 19:19:49.712197 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 19:19:49.750057 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 13 19:19:49.750283 kernel: ACPI: bus type USB registered Apr 13 19:19:49.750296 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 13 19:19:49.751209 kernel: usbcore: registered new interface driver usbfs Apr 13 19:19:49.751248 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 19:19:49.753920 kernel: usbcore: registered new interface driver hub Apr 13 19:19:49.755688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:19:49.755811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 19:19:49.757787 kernel: usbcore: registered new device driver usb Apr 13 19:19:49.757817 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 13 19:19:49.757843 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:19:49.758994 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:19:49.759148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:49.759755 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:49.771231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:49.790346 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 19:19:49.790558 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 19:19:49.792352 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 13 19:19:49.792521 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 13 19:19:49.793327 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 13 19:19:49.793459 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 13 19:19:49.793545 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 19:19:49.795954 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 19:19:49.796026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:49.800214 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 19:19:49.800382 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 19:19:49.800469 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 19:19:49.802551 kernel: GPT:17805311 != 80003071 Apr 13 19:19:49.802582 kernel: GPT:Alternate GPT header not at the end of the disk. 
Apr 13 19:19:49.802594 kernel: GPT:17805311 != 80003071 Apr 13 19:19:49.802609 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 19:19:49.802153 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:19:49.804646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:19:49.804663 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 19:19:49.805030 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 13 19:19:49.805925 kernel: hub 1-0:1.0: USB hub found Apr 13 19:19:49.806938 kernel: hub 1-0:1.0: 4 ports detected Apr 13 19:19:49.808959 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 13 19:19:49.811929 kernel: hub 2-0:1.0: USB hub found Apr 13 19:19:49.812124 kernel: hub 2-0:1.0: 4 ports detected Apr 13 19:19:49.831997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:19:49.856931 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (520) Apr 13 19:19:49.861933 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (501) Apr 13 19:19:49.870184 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 19:19:49.878091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 13 19:19:49.885394 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 19:19:49.886333 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 19:19:49.892315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:19:49.898116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:19:49.904435 disk-uuid[575]: Primary Header is updated. 
Apr 13 19:19:49.904435 disk-uuid[575]: Secondary Entries is updated. Apr 13 19:19:49.904435 disk-uuid[575]: Secondary Header is updated. Apr 13 19:19:49.912182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:19:49.915940 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:19:49.920950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:19:50.054393 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 19:19:50.191764 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 13 19:19:50.191834 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 19:19:50.192096 kernel: usbcore: registered new interface driver usbhid Apr 13 19:19:50.192932 kernel: usbhid: USB HID core driver Apr 13 19:19:50.297102 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 13 19:19:50.427975 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 13 19:19:50.482407 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 13 19:19:50.922937 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:19:50.923824 disk-uuid[576]: The operation has completed successfully. Apr 13 19:19:50.971443 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 19:19:50.971562 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:19:50.986202 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:19:50.993562 sh[593]: Success Apr 13 19:19:51.006926 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:19:51.062259 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 13 19:19:51.066051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:19:51.066724 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:19:51.083456 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:19:51.083532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:19:51.083558 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:19:51.083582 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:19:51.084305 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:19:51.090945 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 19:19:51.093334 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:19:51.095885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:19:51.103264 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:19:51.107150 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 19:19:51.117688 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:19:51.117732 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:19:51.117743 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:19:51.123608 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:19:51.123660 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:19:51.135389 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:19:51.137297 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:19:51.145975 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 13 19:19:51.150138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:19:51.232289 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:19:51.243121 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:19:51.264091 ignition[677]: Ignition 2.19.0
Apr 13 19:19:51.264100 ignition[677]: Stage: fetch-offline
Apr 13 19:19:51.266105 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:19:51.264135 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:51.268093 systemd-networkd[781]: lo: Link UP
Apr 13 19:19:51.264143 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:51.268097 systemd-networkd[781]: lo: Gained carrier
Apr 13 19:19:51.264305 ignition[677]: parsed url from cmdline: ""
Apr 13 19:19:51.269673 systemd-networkd[781]: Enumeration completed
Apr 13 19:19:51.264308 ignition[677]: no config URL provided
Apr 13 19:19:51.270272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:19:51.264313 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:51.271145 systemd[1]: Reached target network.target - Network.
Apr 13 19:19:51.264319 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:51.272073 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:51.264324 ignition[677]: failed to fetch config: resource requires networking
Apr 13 19:19:51.272079 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:51.264515 ignition[677]: Ignition finished successfully
Apr 13 19:19:51.272864 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:51.272867 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:51.273455 systemd-networkd[781]: eth0: Link UP
Apr 13 19:19:51.273459 systemd-networkd[781]: eth0: Gained carrier
Apr 13 19:19:51.273466 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:51.276645 systemd-networkd[781]: eth1: Link UP
Apr 13 19:19:51.276650 systemd-networkd[781]: eth1: Gained carrier
Apr 13 19:19:51.276658 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:51.281125 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:19:51.294831 ignition[785]: Ignition 2.19.0
Apr 13 19:19:51.294840 ignition[785]: Stage: fetch
Apr 13 19:19:51.295082 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:51.295092 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:51.295190 ignition[785]: parsed url from cmdline: ""
Apr 13 19:19:51.295193 ignition[785]: no config URL provided
Apr 13 19:19:51.295198 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:51.295206 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:51.295226 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 19:19:51.295868 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 19:19:51.321008 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:19:51.333989 systemd-networkd[781]: eth0: DHCPv4 address 178.105.7.28/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:19:51.496775 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 19:19:51.501944 ignition[785]: GET result: OK
Apr 13 19:19:51.502055 ignition[785]: parsing config with SHA512: d5bf9b3e87b56d0c62d47db7ee632ed948b3ef8d8dbb1b78a5f8e1e8fae2b24f97253a4aef0581ba48e4bed16dca40dad5315efca95b328dd01a14607280b659
Apr 13 19:19:51.507596 unknown[785]: fetched base config from "system"
Apr 13 19:19:51.508007 ignition[785]: fetch: fetch complete
Apr 13 19:19:51.507606 unknown[785]: fetched base config from "system"
Apr 13 19:19:51.508013 ignition[785]: fetch: fetch passed
Apr 13 19:19:51.507611 unknown[785]: fetched user config from "hetzner"
Apr 13 19:19:51.508069 ignition[785]: Ignition finished successfully
Apr 13 19:19:51.511618 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:19:51.518219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:19:51.533263 ignition[792]: Ignition 2.19.0
Apr 13 19:19:51.533273 ignition[792]: Stage: kargs
Apr 13 19:19:51.533450 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:51.537150 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:19:51.533460 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:51.534468 ignition[792]: kargs: kargs passed
Apr 13 19:19:51.534523 ignition[792]: Ignition finished successfully
Apr 13 19:19:51.546169 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:19:51.560709 ignition[799]: Ignition 2.19.0
Apr 13 19:19:51.560721 ignition[799]: Stage: disks
Apr 13 19:19:51.560941 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:51.560955 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:51.564261 ignition[799]: disks: disks passed
Apr 13 19:19:51.564398 ignition[799]: Ignition finished successfully
Apr 13 19:19:51.567950 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:19:51.568804 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:19:51.570598 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:19:51.571441 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:19:51.572596 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:19:51.573882 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:19:51.584246 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:19:51.604170 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 19:19:51.608084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:19:51.614050 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:19:51.669926 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:19:51.671122 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:19:51.672195 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:19:51.688238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:19:51.694234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:19:51.705944 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Apr 13 19:19:51.708476 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:51.708549 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:51.708628 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 19:19:51.713920 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:51.709652 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:19:51.709688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:19:51.720232 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:51.720260 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:51.713610 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:19:51.724610 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:19:51.726965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:19:51.779261 coreos-metadata[817]: Apr 13 19:19:51.778 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 19:19:51.781375 coreos-metadata[817]: Apr 13 19:19:51.780 INFO Fetch successful
Apr 13 19:19:51.784289 coreos-metadata[817]: Apr 13 19:19:51.783 INFO wrote hostname ci-4081-3-7-3-c59e9f41ff to /sysroot/etc/hostname
Apr 13 19:19:51.785632 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:19:51.789816 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:19:51.795594 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:19:51.800587 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:19:51.806499 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:19:51.911547 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:19:51.920109 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:19:51.923247 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:19:51.933936 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:51.962644 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:19:51.968551 ignition[934]: INFO : Ignition 2.19.0
Apr 13 19:19:51.969353 ignition[934]: INFO : Stage: mount
Apr 13 19:19:51.970020 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:51.970616 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:51.972559 ignition[934]: INFO : mount: mount passed
Apr 13 19:19:51.974155 ignition[934]: INFO : Ignition finished successfully
Apr 13 19:19:51.974856 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:19:51.978053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:19:52.084407 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:19:52.095288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:19:52.102953 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Apr 13 19:19:52.105179 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:52.105225 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:52.105252 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:52.108929 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:52.108967 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:52.111784 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:19:52.142212 ignition[963]: INFO : Ignition 2.19.0
Apr 13 19:19:52.143184 ignition[963]: INFO : Stage: files
Apr 13 19:19:52.143570 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:52.143570 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:52.145262 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:19:52.146209 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:19:52.146209 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:19:52.149472 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:19:52.150894 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:19:52.150894 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:19:52.149870 unknown[963]: wrote ssh authorized keys file for user: core
Apr 13 19:19:52.153676 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:19:52.153676 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:19:52.239973 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 19:19:52.414942 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:19:52.414942 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:19:52.417508 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:19:52.442256 systemd-networkd[781]: eth1: Gained IPv6LL
Apr 13 19:19:52.694456 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:19:52.917009 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:19:52.919207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 13 19:19:53.223740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:19:53.338164 systemd-networkd[781]: eth0: Gained IPv6LL
Apr 13 19:19:53.700519 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:19:53.700519 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:19:53.703212 ignition[963]: INFO : files: files passed
Apr 13 19:19:53.703212 ignition[963]: INFO : Ignition finished successfully
Apr 13 19:19:53.704538 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:19:53.715176 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:19:53.719152 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:19:53.722450 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:19:53.722554 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:19:53.742009 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:53.742009 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:53.744820 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:53.748980 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:19:53.750700 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:19:53.766227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:19:53.798277 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:19:53.798425 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:19:53.800026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:19:53.801625 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:19:53.803226 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:19:53.809188 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:19:53.823449 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:19:53.834237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:19:53.849078 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:19:53.849994 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:53.851314 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:19:53.852491 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:19:53.852616 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:19:53.854153 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:19:53.854759 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:19:53.856006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:19:53.857307 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:19:53.858439 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:19:53.859566 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:19:53.860673 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:19:53.861866 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:19:53.862966 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:19:53.864209 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:19:53.865197 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:19:53.865325 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:19:53.866682 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:19:53.867427 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:53.868626 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:19:53.872023 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:53.875643 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:19:53.875809 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:19:53.878801 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:19:53.878953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:19:53.880912 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:19:53.881009 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:19:53.882232 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:19:53.882324 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:19:53.895309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:19:53.900277 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:19:53.900990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:19:53.901125 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:19:53.903840 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:19:53.904102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:19:53.915450 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:19:53.915565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:19:53.921389 ignition[1016]: INFO : Ignition 2.19.0
Apr 13 19:19:53.921389 ignition[1016]: INFO : Stage: umount
Apr 13 19:19:53.922392 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:53.922392 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:53.923673 ignition[1016]: INFO : umount: umount passed
Apr 13 19:19:53.923673 ignition[1016]: INFO : Ignition finished successfully
Apr 13 19:19:53.928698 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:19:53.934490 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:19:53.934618 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:19:53.937308 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:19:53.937388 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:19:53.946398 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:19:53.946502 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:19:53.948852 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:19:53.948909 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:19:53.949509 systemd[1]: Stopped target network.target - Network.
Apr 13 19:19:53.951282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:19:53.951331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:19:53.956265 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:19:53.959133 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:19:53.962956 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:53.972708 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:19:53.977632 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:19:53.978563 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:19:53.978614 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:19:53.979754 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:19:53.979799 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:19:53.981220 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:19:53.981268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:19:53.982208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:19:53.982248 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:19:53.983301 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:19:53.984339 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:19:53.985553 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:19:53.985641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:19:53.986630 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:19:53.986714 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:19:53.987977 systemd-networkd[781]: eth0: DHCPv6 lease lost
Apr 13 19:19:53.993005 systemd-networkd[781]: eth1: DHCPv6 lease lost
Apr 13 19:19:53.995989 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:19:53.996265 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:19:53.999348 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:19:53.999572 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:19:54.001058 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:19:54.001111 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:54.010293 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:19:54.010799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:19:54.010856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:19:54.012161 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:19:54.012225 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:19:54.013873 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:19:54.013943 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:19:54.016074 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:19:54.016118 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:19:54.017449 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:19:54.031007 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:19:54.031241 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:19:54.039016 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:19:54.039276 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:19:54.041704 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:19:54.041764 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:54.043371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:19:54.043407 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:54.044592 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:19:54.044639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:19:54.046220 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:19:54.046264 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:19:54.047756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:19:54.047799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:54.061230 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:19:54.062455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:19:54.062551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:19:54.066396 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 19:19:54.066533 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:19:54.068390 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:19:54.068436 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:19:54.069594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:19:54.069632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:54.075682 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:19:54.077886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:19:54.079961 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:19:54.087155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:19:54.096617 systemd[1]: Switching root.
Apr 13 19:19:54.133257 systemd-journald[237]: Journal stopped
Apr 13 19:19:55.041957 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:19:55.042063 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:19:55.042078 kernel: SELinux: policy capability open_perms=1
Apr 13 19:19:55.042088 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:19:55.042098 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:19:55.042110 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:19:55.042120 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:19:55.042130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:19:55.042140 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:19:55.042150 kernel: audit: type=1403 audit(1776107994.256:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:19:55.042161 systemd[1]: Successfully loaded SELinux policy in 34.554ms.
Apr 13 19:19:55.042186 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.522ms.
Apr 13 19:19:55.042200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:19:55.042213 systemd[1]: Detected virtualization kvm.
Apr 13 19:19:55.042224 systemd[1]: Detected architecture arm64.
Apr 13 19:19:55.042234 systemd[1]: Detected first boot.
Apr 13 19:19:55.042244 systemd[1]: Hostname set to .
Apr 13 19:19:55.042254 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:19:55.042265 zram_generator::config[1058]: No configuration found.
Apr 13 19:19:55.042280 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:19:55.042291 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:19:55.042303 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:19:55.042314 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:19:55.042325 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:19:55.042336 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:19:55.042347 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:19:55.042358 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:19:55.042368 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:19:55.042383 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:19:55.042395 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:19:55.042406 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:19:55.042416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:55.042427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:55.042438 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:19:55.042449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:19:55.042459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:19:55.042469 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:19:55.042480 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:19:55.042492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:55.042503 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:19:55.042513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:19:55.042524 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:19:55.042534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:19:55.042545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:55.042561 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:19:55.042572 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:19:55.042582 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:19:55.042593 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:19:55.042603 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:19:55.042613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:55.042623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:55.042633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:55.042644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:19:55.042659 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:19:55.042672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:19:55.042683 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:19:55.042694 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:19:55.042704 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:19:55.042715 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:19:55.042729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:19:55.042742 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:19:55.042752 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:19:55.042763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:19:55.042773 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:19:55.042784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:19:55.042794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:19:55.042805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:19:55.042815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:19:55.042827 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:19:55.042838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:19:55.042849 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:19:55.042859 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:19:55.042869 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:19:55.042880 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:19:55.042891 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:19:55.045669 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:19:55.045695 kernel: ACPI: bus type drm_connector registered
Apr 13 19:19:55.045707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:19:55.045718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:19:55.045729 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:19:55.045740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:19:55.045751 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:19:55.045761 systemd[1]: Stopped verity-setup.service.
Apr 13 19:19:55.045772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:19:55.045782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:19:55.045794 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:19:55.045811 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:19:55.045822 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:19:55.045833 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:19:55.045844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:19:55.045856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:19:55.045869 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:19:55.045880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:55.045890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:55.045924 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:19:55.048014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:19:55.048092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:19:55.048106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:55.048117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:19:55.048127 kernel: loop: module loaded Apr 13 19:19:55.048138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:19:55.048182 systemd-journald[1125]: Collecting audit messages is disabled. Apr 13 19:19:55.048208 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:19:55.048221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:19:55.048233 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:19:55.048244 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:19:55.048255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 13 19:19:55.048265 kernel: fuse: init (API version 7.39) Apr 13 19:19:55.048275 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:19:55.048287 systemd-journald[1125]: Journal started Apr 13 19:19:55.048312 systemd-journald[1125]: Runtime Journal (/run/log/journal/574ab14712be439a82d71e6b0bf6259c) is 8.0M, max 76.6M, 68.6M free. Apr 13 19:19:54.742356 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:19:54.764761 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 19:19:54.765545 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 19:19:55.055005 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:19:55.055067 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:19:55.058013 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:19:55.059356 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:19:55.060056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:19:55.062223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:19:55.076472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:19:55.085993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:19:55.100794 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:19:55.101577 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:19:55.101623 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:19:55.103079 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. Apr 13 19:19:55.103092 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. 
Apr 13 19:19:55.105502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:19:55.116168 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:19:55.120246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:19:55.121055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:55.126285 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:19:55.135224 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:19:55.136801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:19:55.139460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:19:55.143375 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:19:55.147980 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:19:55.150391 systemd-journald[1125]: Time spent on flushing to /var/log/journal/574ab14712be439a82d71e6b0bf6259c is 73.525ms for 1132 entries. Apr 13 19:19:55.150391 systemd-journald[1125]: System Journal (/var/log/journal/574ab14712be439a82d71e6b0bf6259c) is 8.0M, max 584.8M, 576.8M free. Apr 13 19:19:55.237220 systemd-journald[1125]: Received client request to flush runtime journal. Apr 13 19:19:55.237329 kernel: loop0: detected capacity change from 0 to 200864 Apr 13 19:19:55.237370 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:19:55.155145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 19:19:55.158958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:19:55.162332 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:19:55.168954 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:19:55.185172 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:19:55.186023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:19:55.186826 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:19:55.188801 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:19:55.205333 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 19:19:55.241598 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:19:55.244597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:19:55.245646 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 19:19:55.246931 kernel: loop1: detected capacity change from 0 to 8 Apr 13 19:19:55.269753 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:19:55.276946 kernel: loop2: detected capacity change from 0 to 114328 Apr 13 19:19:55.281963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:19:55.308361 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Apr 13 19:19:55.308384 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Apr 13 19:19:55.314271 kernel: loop3: detected capacity change from 0 to 114432 Apr 13 19:19:55.329196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 13 19:19:55.354950 kernel: loop4: detected capacity change from 0 to 200864 Apr 13 19:19:55.374931 kernel: loop5: detected capacity change from 0 to 8 Apr 13 19:19:55.379172 kernel: loop6: detected capacity change from 0 to 114328 Apr 13 19:19:55.395925 kernel: loop7: detected capacity change from 0 to 114432 Apr 13 19:19:55.407476 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 19:19:55.408091 (sd-merge)[1201]: Merged extensions into '/usr'. Apr 13 19:19:55.415024 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:19:55.415061 systemd[1]: Reloading... Apr 13 19:19:55.534974 zram_generator::config[1227]: No configuration found. Apr 13 19:19:55.652200 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:19:55.687065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:19:55.735131 systemd[1]: Reloading finished in 319 ms. Apr 13 19:19:55.756194 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:19:55.759654 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:19:55.772251 systemd[1]: Starting ensure-sysext.service... Apr 13 19:19:55.776473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:19:55.794979 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:19:55.795020 systemd[1]: Reloading... Apr 13 19:19:55.823198 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 13 19:19:55.823833 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:19:55.824671 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:19:55.825101 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 13 19:19:55.825216 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 13 19:19:55.828369 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:19:55.828496 systemd-tmpfiles[1265]: Skipping /boot Apr 13 19:19:55.840369 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:19:55.840381 systemd-tmpfiles[1265]: Skipping /boot Apr 13 19:19:55.874927 zram_generator::config[1287]: No configuration found. Apr 13 19:19:55.984137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:19:56.030829 systemd[1]: Reloading finished in 235 ms. Apr 13 19:19:56.055692 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:19:56.057152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:19:56.075280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:19:56.079134 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:19:56.084446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:19:56.088520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:19:56.094560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 13 19:19:56.098454 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:19:56.108240 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:19:56.110698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:56.114156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:19:56.118429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:19:56.121016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:19:56.122189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:56.123869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:56.124094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:56.129193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:56.131742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:19:56.132493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:56.136841 systemd[1]: Finished ensure-sysext.service. Apr 13 19:19:56.138213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:19:56.148133 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 19:19:56.153834 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:19:56.169832 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 13 19:19:56.172179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:19:56.181576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:19:56.185312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:56.185495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:56.187202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:19:56.189363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:19:56.189519 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:56.190481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:19:56.203269 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Apr 13 19:19:56.209252 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:19:56.209416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:19:56.217828 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:19:56.231048 augenrules[1366]: No rules Apr 13 19:19:56.232511 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:19:56.238131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:19:56.248138 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:19:56.249658 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:19:56.259744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 13 19:19:56.260832 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:19:56.364561 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 19:19:56.366449 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:19:56.370424 systemd-networkd[1373]: lo: Link UP Apr 13 19:19:56.370736 systemd-networkd[1373]: lo: Gained carrier Apr 13 19:19:56.371532 systemd-networkd[1373]: Enumeration completed Apr 13 19:19:56.371543 systemd-timesyncd[1351]: No network connectivity, watching for changes. Apr 13 19:19:56.372021 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:19:56.387257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:19:56.414675 systemd-resolved[1335]: Positive Trust Anchors: Apr 13 19:19:56.414701 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:19:56.414741 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:19:56.416790 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 13 19:19:56.422607 systemd-resolved[1335]: Using system hostname 'ci-4081-3-7-3-c59e9f41ff'. 
Apr 13 19:19:56.424589 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:19:56.425550 systemd[1]: Reached target network.target - Network. Apr 13 19:19:56.426082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:19:56.467080 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:56.467090 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:19:56.467780 systemd-networkd[1373]: eth0: Link UP Apr 13 19:19:56.467784 systemd-networkd[1373]: eth0: Gained carrier Apr 13 19:19:56.467802 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:56.496894 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:56.497974 systemd-networkd[1373]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:19:56.498577 systemd-networkd[1373]: eth1: Link UP Apr 13 19:19:56.498581 systemd-networkd[1373]: eth1: Gained carrier Apr 13 19:19:56.498598 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 19:19:56.513945 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 19:19:56.519968 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1388) Apr 13 19:19:56.531022 systemd-networkd[1373]: eth0: DHCPv4 address 178.105.7.28/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 19:19:56.535978 systemd-networkd[1373]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 19:19:56.538234 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Apr 13 19:19:56.557630 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 19:19:56.557770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:56.561138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:19:56.565741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:19:56.569060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:19:56.569735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:56.569776 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:19:56.573873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:56.574772 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:56.582211 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:19:56.584442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 13 19:19:56.586061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:19:56.586437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:56.588885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:19:56.589079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:19:56.629272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:56.648259 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 13 19:19:56.648352 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 19:19:56.648368 kernel: [drm] features: -context_init Apr 13 19:19:56.649109 kernel: [drm] number of scanouts: 1 Apr 13 19:19:56.649915 kernel: [drm] number of cap sets: 0 Apr 13 19:19:56.649972 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 19:19:56.656012 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 19:19:56.663679 systemd-timesyncd[1351]: Contacted time server 37.120.176.133:123 (1.flatcar.pool.ntp.org). Apr 13 19:19:56.663748 systemd-timesyncd[1351]: Initial clock synchronization to Mon 2026-04-13 19:19:56.341752 UTC. Apr 13 19:19:56.664939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:19:56.665935 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 19:19:56.675568 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:19:56.681429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:19:56.681615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:19:56.694111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:56.695528 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:19:56.744987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:56.779847 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:19:56.784158 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:19:56.803425 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:19:56.834979 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:19:56.835960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:19:56.836568 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:19:56.837347 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:19:56.838235 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:19:56.839420 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:19:56.840266 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:19:56.841055 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:19:56.841706 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:19:56.841740 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:19:56.842317 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:19:56.843990 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Apr 13 19:19:56.846219 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:19:56.851021 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:19:56.853366 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:19:56.856234 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:19:56.858278 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:19:56.860325 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:19:56.861503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:19:56.861536 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:19:56.868932 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:19:56.865074 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:19:56.870852 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:19:56.873172 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:19:56.880145 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:19:56.884275 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:19:56.885429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:19:56.887521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:19:56.894105 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:19:56.897198 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 13 19:19:56.903892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 13 19:19:56.908107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:19:56.914758 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:19:56.916189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:19:56.916677 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:19:56.919183 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:19:56.924111 jq[1454]: false Apr 13 19:19:56.924164 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:19:56.932497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:19:56.932724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:19:56.955967 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Apr 13 19:19:56.956997 extend-filesystems[1457]: Found loop4 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found loop5 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found loop6 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found loop7 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda1 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda2 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda3 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found usr Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda4 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda6 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda7 Apr 13 19:19:56.956997 extend-filesystems[1457]: Found sda9 Apr 13 19:19:56.956997 extend-filesystems[1457]: Checking size of /dev/sda9 Apr 13 19:19:57.019167 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 13 19:19:57.019408 coreos-metadata[1452]: Apr 13 19:19:56.992 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 19:19:57.019408 coreos-metadata[1452]: Apr 13 19:19:56.994 INFO Fetch successful Apr 13 19:19:57.019408 coreos-metadata[1452]: Apr 13 19:19:56.994 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 19:19:57.019408 coreos-metadata[1452]: Apr 13 19:19:56.995 INFO Fetch successful Apr 13 19:19:56.965480 dbus-daemon[1453]: [system] SELinux support is enabled Apr 13 19:19:57.021676 extend-filesystems[1457]: Resized partition /dev/sda9 Apr 13 19:19:56.965644 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:19:57.030393 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:19:56.977786 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 13 19:19:56.977861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:19:57.048013 jq[1466]: true Apr 13 19:19:56.978894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:19:56.978928 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:19:56.992707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:19:57.051656 tar[1480]: linux-arm64/LICENSE Apr 13 19:19:57.051656 tar[1480]: linux-arm64/helm Apr 13 19:19:56.992864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:19:57.003539 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:19:57.054666 jq[1484]: true Apr 13 19:19:57.004947 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:19:57.038760 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:19:57.066462 update_engine[1465]: I20260413 19:19:57.066231 1465 main.cc:92] Flatcar Update Engine starting Apr 13 19:19:57.074040 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:19:57.075512 update_engine[1465]: I20260413 19:19:57.075094 1465 update_check_scheduler.cc:74] Next update check in 4m6s Apr 13 19:19:57.089094 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:19:57.114020 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1393) Apr 13 19:19:57.138639 systemd-logind[1463]: New seat seat0. 
Apr 13 19:19:57.141272 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:19:57.141297 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 13 19:19:57.141497 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:19:57.162732 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 13 19:19:57.179976 extend-filesystems[1486]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 19:19:57.179976 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 13 19:19:57.179976 extend-filesystems[1486]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 13 19:19:57.193627 extend-filesystems[1457]: Resized filesystem in /dev/sda9 Apr 13 19:19:57.193627 extend-filesystems[1457]: Found sr0 Apr 13 19:19:57.205069 bash[1522]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:19:57.196804 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 19:19:57.199009 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:19:57.203356 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:19:57.205574 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:19:57.219350 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:19:57.258439 systemd[1]: Starting sshkeys.service... Apr 13 19:19:57.286113 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:19:57.326435 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 13 19:19:57.359936 containerd[1489]: time="2026-04-13T19:19:57.355952392Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:19:57.387985 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:19:57.398411 coreos-metadata[1536]: Apr 13 19:19:57.397 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 13 19:19:57.403146 coreos-metadata[1536]: Apr 13 19:19:57.401 INFO Fetch successful Apr 13 19:19:57.409098 unknown[1536]: wrote ssh authorized keys file for user: core Apr 13 19:19:57.443182 containerd[1489]: time="2026-04-13T19:19:57.443132905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.446170 containerd[1489]: time="2026-04-13T19:19:57.446127547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:57.446318 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:19:57.446798 containerd[1489]: time="2026-04-13T19:19:57.446776507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.447627172Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.447779200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.447797666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.447855559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.447881512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448080760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448096769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448109400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448118383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448187295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.448980 containerd[1489]: time="2026-04-13T19:19:57.448388770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:57.450119 containerd[1489]: time="2026-04-13T19:19:57.448474996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:57.450119 containerd[1489]: time="2026-04-13T19:19:57.448488817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:19:57.450119 containerd[1489]: time="2026-04-13T19:19:57.448555233Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 19:19:57.450119 containerd[1489]: time="2026-04-13T19:19:57.448591858Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:19:57.450631 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:19:57.456020 systemd[1]: Finished sshkeys.service. Apr 13 19:19:57.462607 containerd[1489]: time="2026-04-13T19:19:57.462571821Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:19:57.462786 containerd[1489]: time="2026-04-13T19:19:57.462770110Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 19:19:57.462845 containerd[1489]: time="2026-04-13T19:19:57.462833801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463229995Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463254565Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463417496Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463652333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463749308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463766546Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463778908Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463791154Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463803593Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463815648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463828393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463843174Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463854768Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 13 19:19:57.465198 containerd[1489]: time="2026-04-13T19:19:57.463875154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463887439Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463935350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463950899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463962416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463975738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463987792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.463999694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464010673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464022114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464033861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464046914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464057664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464068836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464082464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465470 containerd[1489]: time="2026-04-13T19:19:57.464097245Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464116402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464128380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464138592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464246432Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464264629Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464275148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464287817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464302137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464315151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464324711Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:19:57.465750 containerd[1489]: time="2026-04-13T19:19:57.464337226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 19:19:57.465999 containerd[1489]: time="2026-04-13T19:19:57.464666505Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:19:57.465999 containerd[1489]: time="2026-04-13T19:19:57.464723554Z" level=info msg="Connect containerd service" Apr 13 19:19:57.465999 containerd[1489]: time="2026-04-13T19:19:57.464754190Z" level=info msg="using legacy CRI server" Apr 13 19:19:57.465999 containerd[1489]: time="2026-04-13T19:19:57.464760870Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:19:57.465999 containerd[1489]: time="2026-04-13T19:19:57.464838803Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469039687Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469244003Z" level=info msg="Start subscribing containerd event" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469303816Z" level=info msg="Start recovering state" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469379331Z" level=info msg="Start event monitor" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469426091Z" level=info msg="Start 
snapshots syncer" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469436610Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469443981Z" level=info msg="Start streaming server" Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469485866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469522721Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:19:57.471507 containerd[1489]: time="2026-04-13T19:19:57.469562149Z" level=info msg="containerd successfully booted in 0.115968s" Apr 13 19:19:57.470034 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:19:57.626097 systemd-networkd[1373]: eth1: Gained IPv6LL Apr 13 19:19:57.632952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:19:57.634286 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:19:57.641074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:19:57.649125 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:19:57.694054 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:19:57.772040 tar[1480]: linux-arm64/README.md Apr 13 19:19:57.788380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:19:58.138176 systemd-networkd[1373]: eth0: Gained IPv6LL Apr 13 19:19:58.397048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:19:58.407653 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:19:58.815735 kubelet[1566]: E0413 19:19:58.815688 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:19:58.819581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:19:58.819725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:19:59.152581 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:19:59.176661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:19:59.188360 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:19:59.194978 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:19:59.195382 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:19:59.203326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:19:59.214496 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:19:59.225465 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:19:59.233257 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 13 19:19:59.234382 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:19:59.235151 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:19:59.236070 systemd[1]: Startup finished in 787ms (kernel) + 5.568s (initrd) + 5.013s (userspace) = 11.369s. Apr 13 19:20:09.042547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 13 19:20:09.051165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:09.186239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:09.197214 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:09.244629 kubelet[1602]: E0413 19:20:09.244557 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:09.249143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:09.249489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:19.292568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:20:19.309325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:19.432233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:19.436996 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:19.481375 kubelet[1617]: E0413 19:20:19.481310 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:19.484542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:19.484700 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 19:20:29.542675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:20:29.553700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:29.685782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:29.705578 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:29.750187 kubelet[1632]: E0413 19:20:29.750118 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:29.753566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:29.753978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:35.702633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:20:35.717029 systemd[1]: Started sshd@0-178.105.7.28:22-50.85.169.122:55392.service - OpenSSH per-connection server daemon (50.85.169.122:55392). Apr 13 19:20:35.854038 sshd[1641]: Accepted publickey for core from 50.85.169.122 port 55392 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:35.856842 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:35.866194 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:20:35.881552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:20:35.887025 systemd-logind[1463]: New session 1 of user core. Apr 13 19:20:35.898381 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 13 19:20:35.909476 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:20:35.914009 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:20:36.022673 systemd[1645]: Queued start job for default target default.target. Apr 13 19:20:36.034581 systemd[1645]: Created slice app.slice - User Application Slice. Apr 13 19:20:36.034851 systemd[1645]: Reached target paths.target - Paths. Apr 13 19:20:36.035052 systemd[1645]: Reached target timers.target - Timers. Apr 13 19:20:36.037599 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:20:36.053889 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:20:36.054072 systemd[1645]: Reached target sockets.target - Sockets. Apr 13 19:20:36.054089 systemd[1645]: Reached target basic.target - Basic System. Apr 13 19:20:36.054158 systemd[1645]: Reached target default.target - Main User Target. Apr 13 19:20:36.054196 systemd[1645]: Startup finished in 133ms. Apr 13 19:20:36.054266 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:20:36.066254 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:20:36.182223 systemd[1]: Started sshd@1-178.105.7.28:22-50.85.169.122:55400.service - OpenSSH per-connection server daemon (50.85.169.122:55400). Apr 13 19:20:36.307590 sshd[1656]: Accepted publickey for core from 50.85.169.122 port 55400 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:36.311244 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:36.317111 systemd-logind[1463]: New session 2 of user core. Apr 13 19:20:36.323273 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:20:36.422377 sshd[1656]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.427935 systemd[1]: sshd@1-178.105.7.28:22-50.85.169.122:55400.service: Deactivated successfully. 
Apr 13 19:20:36.430728 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:20:36.431756 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:20:36.432886 systemd-logind[1463]: Removed session 2. Apr 13 19:20:36.466628 systemd[1]: Started sshd@2-178.105.7.28:22-50.85.169.122:55412.service - OpenSSH per-connection server daemon (50.85.169.122:55412). Apr 13 19:20:36.589025 sshd[1663]: Accepted publickey for core from 50.85.169.122 port 55412 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:36.590536 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:36.595726 systemd-logind[1463]: New session 3 of user core. Apr 13 19:20:36.604260 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:20:36.701725 sshd[1663]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.707431 systemd[1]: sshd@2-178.105.7.28:22-50.85.169.122:55412.service: Deactivated successfully. Apr 13 19:20:36.710067 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:20:36.711125 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:20:36.712990 systemd-logind[1463]: Removed session 3. Apr 13 19:20:36.729350 systemd[1]: Started sshd@3-178.105.7.28:22-50.85.169.122:55416.service - OpenSSH per-connection server daemon (50.85.169.122:55416). Apr 13 19:20:36.865071 sshd[1670]: Accepted publickey for core from 50.85.169.122 port 55416 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:36.867112 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:36.872669 systemd-logind[1463]: New session 4 of user core. Apr 13 19:20:36.879247 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 13 19:20:36.984146 sshd[1670]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.989599 systemd[1]: sshd@3-178.105.7.28:22-50.85.169.122:55416.service: Deactivated successfully. Apr 13 19:20:36.991642 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:20:36.993624 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:20:36.994872 systemd-logind[1463]: Removed session 4. Apr 13 19:20:37.018439 systemd[1]: Started sshd@4-178.105.7.28:22-50.85.169.122:55432.service - OpenSSH per-connection server daemon (50.85.169.122:55432). Apr 13 19:20:37.132051 sshd[1677]: Accepted publickey for core from 50.85.169.122 port 55432 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:37.135284 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:37.141532 systemd-logind[1463]: New session 5 of user core. Apr 13 19:20:37.147254 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:20:37.242044 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:20:37.242337 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:20:37.258084 sudo[1680]: pam_unix(sudo:session): session closed for user root Apr 13 19:20:37.274755 sshd[1677]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:37.281669 systemd[1]: sshd@4-178.105.7.28:22-50.85.169.122:55432.service: Deactivated successfully. Apr 13 19:20:37.283681 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:20:37.284776 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:20:37.285850 systemd-logind[1463]: Removed session 5. Apr 13 19:20:37.309337 systemd[1]: Started sshd@5-178.105.7.28:22-50.85.169.122:55442.service - OpenSSH per-connection server daemon (50.85.169.122:55442). 
Apr 13 19:20:37.435190 sshd[1685]: Accepted publickey for core from 50.85.169.122 port 55442 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:20:37.438761 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:20:37.444199 systemd-logind[1463]: New session 6 of user core.
Apr 13 19:20:37.456231 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:20:37.545130 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:20:37.545754 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:37.550346 sudo[1689]: pam_unix(sudo:session): session closed for user root
Apr 13 19:20:37.556878 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:20:37.557222 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:37.577399 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:20:37.581068 auditctl[1692]: No rules
Apr 13 19:20:37.581588 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:20:37.583108 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:20:37.588331 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:20:37.618707 augenrules[1710]: No rules
Apr 13 19:20:37.622028 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:20:37.625187 sudo[1688]: pam_unix(sudo:session): session closed for user root
Apr 13 19:20:37.642308 sshd[1685]: pam_unix(sshd:session): session closed for user core
Apr 13 19:20:37.648113 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:20:37.649561 systemd[1]: sshd@5-178.105.7.28:22-50.85.169.122:55442.service: Deactivated successfully.
Apr 13 19:20:37.653189 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:20:37.654377 systemd-logind[1463]: Removed session 6.
Apr 13 19:20:37.681442 systemd[1]: Started sshd@6-178.105.7.28:22-50.85.169.122:55450.service - OpenSSH per-connection server daemon (50.85.169.122:55450).
Apr 13 19:20:37.805544 sshd[1718]: Accepted publickey for core from 50.85.169.122 port 55450 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:20:37.807800 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:20:37.812527 systemd-logind[1463]: New session 7 of user core.
Apr 13 19:20:37.820233 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:20:37.906595 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:20:37.906890 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:38.208375 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:20:38.208401 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:20:38.464497 dockerd[1736]: time="2026-04-13T19:20:38.463797396Z" level=info msg="Starting up"
Apr 13 19:20:38.540662 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport773461107-merged.mount: Deactivated successfully.
Apr 13 19:20:38.573165 dockerd[1736]: time="2026-04-13T19:20:38.573101972Z" level=info msg="Loading containers: start."
Apr 13 19:20:38.672944 kernel: Initializing XFRM netlink socket
Apr 13 19:20:38.752766 systemd-networkd[1373]: docker0: Link UP
Apr 13 19:20:38.781791 dockerd[1736]: time="2026-04-13T19:20:38.781671937Z" level=info msg="Loading containers: done."
Apr 13 19:20:38.805597 dockerd[1736]: time="2026-04-13T19:20:38.804892645Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:20:38.805597 dockerd[1736]: time="2026-04-13T19:20:38.805088309Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:20:38.805597 dockerd[1736]: time="2026-04-13T19:20:38.805258569Z" level=info msg="Daemon has completed initialization"
Apr 13 19:20:38.846540 dockerd[1736]: time="2026-04-13T19:20:38.846360153Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:20:38.848067 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:20:39.378160 containerd[1489]: time="2026-04-13T19:20:39.378105607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 13 19:20:39.793022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 19:20:39.808175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:39.925312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:39.941382 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:20:40.002590 kubelet[1881]: E0413 19:20:40.002451 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:20:40.006496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:20:40.008119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:20:40.019950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666249161.mount: Deactivated successfully.
Apr 13 19:20:41.112332 containerd[1489]: time="2026-04-13T19:20:41.112249438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.114344 containerd[1489]: time="2026-04-13T19:20:41.114291125Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=24476988"
Apr 13 19:20:41.115768 containerd[1489]: time="2026-04-13T19:20:41.115697948Z" level=info msg="ImageCreate event name:\"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.119280 containerd[1489]: time="2026-04-13T19:20:41.119215864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.121510 containerd[1489]: time="2026-04-13T19:20:41.121457331Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"24473489\" in 1.74331208s"
Apr 13 19:20:41.121510 containerd[1489]: time="2026-04-13T19:20:41.121509616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\""
Apr 13 19:20:41.122524 containerd[1489]: time="2026-04-13T19:20:41.122427790Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 13 19:20:41.870832 update_engine[1465]: I20260413 19:20:41.870682 1465 update_attempter.cc:509] Updating boot flags...
Apr 13 19:20:41.921945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1955)
Apr 13 19:20:42.287806 containerd[1489]: time="2026-04-13T19:20:42.287659149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:42.289606 containerd[1489]: time="2026-04-13T19:20:42.289562293Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=19139662"
Apr 13 19:20:42.290966 containerd[1489]: time="2026-04-13T19:20:42.290659598Z" level=info msg="ImageCreate event name:\"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:42.295918 containerd[1489]: time="2026-04-13T19:20:42.295413256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:42.297086 containerd[1489]: time="2026-04-13T19:20:42.296985208Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"20617664\" in 1.174518134s"
Apr 13 19:20:42.297168 containerd[1489]: time="2026-04-13T19:20:42.297087778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\""
Apr 13 19:20:42.297788 containerd[1489]: time="2026-04-13T19:20:42.297759282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 13 19:20:43.353194 containerd[1489]: time="2026-04-13T19:20:43.353138722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.354918 containerd[1489]: time="2026-04-13T19:20:43.354839638Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=14195559"
Apr 13 19:20:43.357926 containerd[1489]: time="2026-04-13T19:20:43.355727959Z" level=info msg="ImageCreate event name:\"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.362912 containerd[1489]: time="2026-04-13T19:20:43.362849932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.364338 containerd[1489]: time="2026-04-13T19:20:43.364292345Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"15673579\" in 1.066498179s"
Apr 13 19:20:43.364338 containerd[1489]: time="2026-04-13T19:20:43.364335789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\""
Apr 13 19:20:43.364845 containerd[1489]: time="2026-04-13T19:20:43.364806792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 13 19:20:44.420630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974920708.mount: Deactivated successfully.
Apr 13 19:20:44.645360 containerd[1489]: time="2026-04-13T19:20:44.645293718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.646676 containerd[1489]: time="2026-04-13T19:20:44.646461180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=22697125"
Apr 13 19:20:44.647925 containerd[1489]: time="2026-04-13T19:20:44.647680407Z" level=info msg="ImageCreate event name:\"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.650383 containerd[1489]: time="2026-04-13T19:20:44.650138541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.651012 containerd[1489]: time="2026-04-13T19:20:44.650974454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"22696118\" in 1.286127259s"
Apr 13 19:20:44.651066 containerd[1489]: time="2026-04-13T19:20:44.651014738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\""
Apr 13 19:20:44.651655 containerd[1489]: time="2026-04-13T19:20:44.651623591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 13 19:20:45.256058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134216761.mount: Deactivated successfully.
Apr 13 19:20:46.131246 containerd[1489]: time="2026-04-13T19:20:46.131186636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.133095 containerd[1489]: time="2026-04-13T19:20:46.133053304Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498"
Apr 13 19:20:46.133417 containerd[1489]: time="2026-04-13T19:20:46.133383290Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.137163 containerd[1489]: time="2026-04-13T19:20:46.137129867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.138609 containerd[1489]: time="2026-04-13T19:20:46.138569742Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.486907628s"
Apr 13 19:20:46.138714 containerd[1489]: time="2026-04-13T19:20:46.138697752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Apr 13 19:20:46.140019 containerd[1489]: time="2026-04-13T19:20:46.139978413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 19:20:46.692707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794422038.mount: Deactivated successfully.
Apr 13 19:20:46.702962 containerd[1489]: time="2026-04-13T19:20:46.702051850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.704182 containerd[1489]: time="2026-04-13T19:20:46.704121254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 13 19:20:46.705183 containerd[1489]: time="2026-04-13T19:20:46.705097772Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.708176 containerd[1489]: time="2026-04-13T19:20:46.707847190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.709210 containerd[1489]: time="2026-04-13T19:20:46.708761263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 568.741646ms"
Apr 13 19:20:46.709210 containerd[1489]: time="2026-04-13T19:20:46.708795705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 13 19:20:46.709565 containerd[1489]: time="2026-04-13T19:20:46.709544085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 13 19:20:47.334537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76079601.mount: Deactivated successfully.
Apr 13 19:20:48.030398 containerd[1489]: time="2026-04-13T19:20:48.030336214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:48.032426 containerd[1489]: time="2026-04-13T19:20:48.032374962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139164"
Apr 13 19:20:48.033703 containerd[1489]: time="2026-04-13T19:20:48.033649654Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:48.037037 containerd[1489]: time="2026-04-13T19:20:48.036982015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:48.039098 containerd[1489]: time="2026-04-13T19:20:48.038870912Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.329220019s"
Apr 13 19:20:48.039098 containerd[1489]: time="2026-04-13T19:20:48.038933196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Apr 13 19:20:50.042107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 13 19:20:50.052099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:50.179103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:50.189554 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:20:50.233355 kubelet[2124]: E0413 19:20:50.230692 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:20:50.234337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:20:50.234602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:20:54.244342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:54.251464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:54.292830 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-7.scope)...
Apr 13 19:20:54.293091 systemd[1]: Reloading...
Apr 13 19:20:54.418951 zram_generator::config[2178]: No configuration found.
Apr 13 19:20:54.519227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:20:54.589174 systemd[1]: Reloading finished in 295 ms.
Apr 13 19:20:54.649477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:54.655619 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 19:20:54.656089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:54.662327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:54.802703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:54.814647 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:20:54.863198 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 19:20:54.863937 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:20:54.863937 kubelet[2228]: I0413 19:20:54.863608 2228 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 19:20:55.836070 kubelet[2228]: I0413 19:20:55.836012 2228 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 19:20:55.836070 kubelet[2228]: I0413 19:20:55.836048 2228 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:20:55.836070 kubelet[2228]: I0413 19:20:55.836071 2228 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 19:20:55.836070 kubelet[2228]: I0413 19:20:55.836077 2228 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 19:20:55.836382 kubelet[2228]: I0413 19:20:55.836298 2228 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 19:20:55.847203 kubelet[2228]: E0413 19:20:55.847130 2228 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.7.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 19:20:55.850926 kubelet[2228]: I0413 19:20:55.850268 2228 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:20:55.854888 kubelet[2228]: E0413 19:20:55.854827 2228 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:20:55.855012 kubelet[2228]: I0413 19:20:55.854924 2228 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:20:55.857187 kubelet[2228]: I0413 19:20:55.857115 2228 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 19:20:55.857380 kubelet[2228]: I0413 19:20:55.857329 2228 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:20:55.857854 kubelet[2228]: I0413 19:20:55.857359 2228 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-3-c59e9f41ff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 19:20:55.857854 kubelet[2228]: I0413 19:20:55.857776 2228 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 19:20:55.857854 kubelet[2228]: I0413 19:20:55.857787 2228 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 19:20:55.858175 kubelet[2228]: I0413 19:20:55.858050 2228 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 19:20:55.863126 kubelet[2228]: I0413 19:20:55.862846 2228 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:20:55.864304 kubelet[2228]: I0413 19:20:55.864285 2228 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 19:20:55.865639 kubelet[2228]: I0413 19:20:55.864601 2228 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:20:55.865639 kubelet[2228]: I0413 19:20:55.864637 2228 kubelet.go:387] "Adding apiserver pod source"
Apr 13 19:20:55.865639 kubelet[2228]: I0413 19:20:55.864648 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:20:55.866104 kubelet[2228]: I0413 19:20:55.866059 2228 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:20:55.866803 kubelet[2228]: I0413 19:20:55.866765 2228 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:20:55.866840 kubelet[2228]: I0413 19:20:55.866808 2228 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 19:20:55.866864 kubelet[2228]: W0413 19:20:55.866858 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 19:20:55.873998 kubelet[2228]: I0413 19:20:55.873258 2228 server.go:1262] "Started kubelet"
Apr 13 19:20:55.873998 kubelet[2228]: E0413 19:20:55.873460 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.7.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 19:20:55.873998 kubelet[2228]: E0413 19:20:55.873548 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.7.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-3-c59e9f41ff&limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 19:20:55.874142 kubelet[2228]: I0413 19:20:55.873997 2228 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:20:55.874142 kubelet[2228]: I0413 19:20:55.874056 2228 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 19:20:55.874429 kubelet[2228]: I0413 19:20:55.874405 2228 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:20:55.875939 kubelet[2228]: I0413 19:20:55.875892 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 19:20:55.883265 kubelet[2228]: E0413 19:20:55.881222 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.7.28:6443/api/v1/namespaces/default/events\": dial tcp 178.105.7.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-3-c59e9f41ff.18a600ddd90892cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-3-c59e9f41ff,UID:ci-4081-3-7-3-c59e9f41ff,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-3-c59e9f41ff,},FirstTimestamp:2026-04-13 19:20:55.873229519 +0000 UTC m=+1.052715343,LastTimestamp:2026-04-13 19:20:55.873229519 +0000 UTC m=+1.052715343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-3-c59e9f41ff,}"
Apr 13 19:20:55.883931 kubelet[2228]: I0413 19:20:55.883733 2228 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:20:55.884813 kubelet[2228]: I0413 19:20:55.884783 2228 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 19:20:55.886009 kubelet[2228]: I0413 19:20:55.885983 2228 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:20:55.886650 kubelet[2228]: I0413 19:20:55.886628 2228 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 19:20:55.886818 kubelet[2228]: E0413 19:20:55.886799 2228 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found"
Apr 13 19:20:55.889734 kubelet[2228]: E0413 19:20:55.888857 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-3-c59e9f41ff?timeout=10s\": dial tcp 178.105.7.28:6443: connect: connection refused" interval="200ms"
Apr 13 19:20:55.889734 kubelet[2228]: I0413 19:20:55.889167 2228 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 19:20:55.889734 kubelet[2228]: I0413 19:20:55.889197 2228 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 19:20:55.889734 kubelet[2228]: E0413 19:20:55.889493 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.7.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 19:20:55.889734 kubelet[2228]: I0413 19:20:55.889707 2228 factory.go:223] Registration of the systemd container factory successfully
Apr 13 19:20:55.889929 kubelet[2228]: I0413 19:20:55.889782 2228 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 19:20:55.891733 kubelet[2228]: E0413 19:20:55.891711 2228 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 19:20:55.892063 kubelet[2228]: I0413 19:20:55.892045 2228 factory.go:223] Registration of the containerd container factory successfully
Apr 13 19:20:55.900389 kubelet[2228]: I0413 19:20:55.900329 2228 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 19:20:55.901456 kubelet[2228]: I0413 19:20:55.901420 2228 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 19:20:55.901456 kubelet[2228]: I0413 19:20:55.901445 2228 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 19:20:55.901545 kubelet[2228]: I0413 19:20:55.901477 2228 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 19:20:55.901545 kubelet[2228]: E0413 19:20:55.901523 2228 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 19:20:55.908605 kubelet[2228]: E0413 19:20:55.908477 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.7.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 19:20:55.921381 kubelet[2228]: I0413 19:20:55.921241 2228 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 19:20:55.921381 kubelet[2228]: I0413 19:20:55.921264 2228 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 19:20:55.921381 kubelet[2228]: I0413 19:20:55.921284 2228 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:20:55.924822 kubelet[2228]: I0413 19:20:55.924793 2228 policy_none.go:49] "None policy: Start"
Apr 13 19:20:55.924822 kubelet[2228]: I0413 19:20:55.924819 2228 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 19:20:55.925007 kubelet[2228]: I0413 19:20:55.924832 2228 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 19:20:55.926720 kubelet[2228]: I0413 19:20:55.926701 2228 policy_none.go:47] "Start"
Apr 13 19:20:55.932124 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 19:20:55.945997 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 19:20:55.950062 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:20:55.962062 kubelet[2228]: E0413 19:20:55.961976 2228 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:20:55.962373 kubelet[2228]: I0413 19:20:55.962338 2228 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:20:55.962487 kubelet[2228]: I0413 19:20:55.962376 2228 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:20:55.964464 kubelet[2228]: I0413 19:20:55.963834 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:20:55.965803 kubelet[2228]: E0413 19:20:55.965767 2228 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:20:55.965874 kubelet[2228]: E0413 19:20:55.965812 2228 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-3-c59e9f41ff\" not found" Apr 13 19:20:56.017254 systemd[1]: Created slice kubepods-burstable-pod3dead6ea7c422540a5232601737323fa.slice - libcontainer container kubepods-burstable-pod3dead6ea7c422540a5232601737323fa.slice. Apr 13 19:20:56.023187 kubelet[2228]: E0413 19:20:56.023107 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.028988 systemd[1]: Created slice kubepods-burstable-pod19a80197416dcf5d0c9e3f7d294b11a3.slice - libcontainer container kubepods-burstable-pod19a80197416dcf5d0c9e3f7d294b11a3.slice. 
Apr 13 19:20:56.032120 kubelet[2228]: E0413 19:20:56.031774 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.034280 systemd[1]: Created slice kubepods-burstable-pod95248b4f55afe742f73d6098c01f2524.slice - libcontainer container kubepods-burstable-pod95248b4f55afe742f73d6098c01f2524.slice. Apr 13 19:20:56.038013 kubelet[2228]: E0413 19:20:56.037725 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.065725 kubelet[2228]: I0413 19:20:56.065060 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.065725 kubelet[2228]: E0413 19:20:56.065666 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.7.28:6443/api/v1/nodes\": dial tcp 178.105.7.28:6443: connect: connection refused" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.089708 kubelet[2228]: E0413 19:20:56.089554 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-3-c59e9f41ff?timeout=10s\": dial tcp 178.105.7.28:6443: connect: connection refused" interval="400ms" Apr 13 19:20:56.092006 kubelet[2228]: I0413 19:20:56.091869 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.092502 kubelet[2228]: I0413 19:20:56.092408 2228 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.092993 kubelet[2228]: I0413 19:20:56.092699 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.092993 kubelet[2228]: I0413 19:20:56.092765 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.092993 kubelet[2228]: I0413 19:20:56.092826 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.092993 kubelet[2228]: I0413 19:20:56.092887 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95248b4f55afe742f73d6098c01f2524-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-3-c59e9f41ff\" (UID: \"95248b4f55afe742f73d6098c01f2524\") " 
pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.093308 kubelet[2228]: I0413 19:20:56.092999 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.093308 kubelet[2228]: I0413 19:20:56.093070 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.093308 kubelet[2228]: I0413 19:20:56.093103 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.269670 kubelet[2228]: I0413 19:20:56.269071 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.269670 kubelet[2228]: E0413 19:20:56.269578 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.7.28:6443/api/v1/nodes\": dial tcp 178.105.7.28:6443: connect: connection refused" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.329750 containerd[1489]: time="2026-04-13T19:20:56.329666064Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-3-c59e9f41ff,Uid:3dead6ea7c422540a5232601737323fa,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:56.336592 containerd[1489]: time="2026-04-13T19:20:56.336539700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-3-c59e9f41ff,Uid:19a80197416dcf5d0c9e3f7d294b11a3,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:56.342479 containerd[1489]: time="2026-04-13T19:20:56.342357802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-3-c59e9f41ff,Uid:95248b4f55afe742f73d6098c01f2524,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:56.490892 kubelet[2228]: E0413 19:20:56.490811 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-3-c59e9f41ff?timeout=10s\": dial tcp 178.105.7.28:6443: connect: connection refused" interval="800ms" Apr 13 19:20:56.671706 kubelet[2228]: I0413 19:20:56.671589 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.672214 kubelet[2228]: E0413 19:20:56.672178 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.7.28:6443/api/v1/nodes\": dial tcp 178.105.7.28:6443: connect: connection refused" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:56.882040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835371946.mount: Deactivated successfully. 
Apr 13 19:20:56.889893 containerd[1489]: time="2026-04-13T19:20:56.889842493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:56.891541 containerd[1489]: time="2026-04-13T19:20:56.891497499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:56.892297 containerd[1489]: time="2026-04-13T19:20:56.892261178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:56.893489 containerd[1489]: time="2026-04-13T19:20:56.893449480Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:56.895022 containerd[1489]: time="2026-04-13T19:20:56.894916116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:20:56.896320 containerd[1489]: time="2026-04-13T19:20:56.896174861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:56.896320 containerd[1489]: time="2026-04-13T19:20:56.896253426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:56.898888 containerd[1489]: time="2026-04-13T19:20:56.898853520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:56.900828 
containerd[1489]: time="2026-04-13T19:20:56.900628173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.007468ms" Apr 13 19:20:56.902470 containerd[1489]: time="2026-04-13T19:20:56.902428146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.661198ms" Apr 13 19:20:56.906542 containerd[1489]: time="2026-04-13T19:20:56.906298627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.86562ms" Apr 13 19:20:57.035482 containerd[1489]: time="2026-04-13T19:20:57.034970720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:57.035482 containerd[1489]: time="2026-04-13T19:20:57.035045363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:57.035482 containerd[1489]: time="2026-04-13T19:20:57.035071285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.035482 containerd[1489]: time="2026-04-13T19:20:57.035198011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.042783 containerd[1489]: time="2026-04-13T19:20:57.041427842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:57.042783 containerd[1489]: time="2026-04-13T19:20:57.041548608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:57.042950 kubelet[2228]: E0413 19:20:57.042093 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.7.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:20:57.043365 containerd[1489]: time="2026-04-13T19:20:57.041567969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.044881 containerd[1489]: time="2026-04-13T19:20:57.044660204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:57.044881 containerd[1489]: time="2026-04-13T19:20:57.044716527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:57.044881 containerd[1489]: time="2026-04-13T19:20:57.044731648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.044881 containerd[1489]: time="2026-04-13T19:20:57.044801291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.045332 containerd[1489]: time="2026-04-13T19:20:57.045261754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:57.067426 systemd[1]: Started cri-containerd-c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989.scope - libcontainer container c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989. Apr 13 19:20:57.080454 systemd[1]: Started cri-containerd-36fb5d5aa19a2e1da4c032cc897b9ce03ef6ded441befdf2ec69de669432c05b.scope - libcontainer container 36fb5d5aa19a2e1da4c032cc897b9ce03ef6ded441befdf2ec69de669432c05b. Apr 13 19:20:57.085542 systemd[1]: Started cri-containerd-d811ab5fd9cae59abb0c86bddd450f17c3886cb4123951d17f83aaa3753e2898.scope - libcontainer container d811ab5fd9cae59abb0c86bddd450f17c3886cb4123951d17f83aaa3753e2898. Apr 13 19:20:57.136117 containerd[1489]: time="2026-04-13T19:20:57.135886845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-3-c59e9f41ff,Uid:19a80197416dcf5d0c9e3f7d294b11a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989\"" Apr 13 19:20:57.140611 containerd[1489]: time="2026-04-13T19:20:57.140569960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-3-c59e9f41ff,Uid:95248b4f55afe742f73d6098c01f2524,Namespace:kube-system,Attempt:0,} returns sandbox id \"36fb5d5aa19a2e1da4c032cc897b9ce03ef6ded441befdf2ec69de669432c05b\"" Apr 13 19:20:57.143310 containerd[1489]: time="2026-04-13T19:20:57.143229133Z" level=info msg="CreateContainer within sandbox \"c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:20:57.146071 containerd[1489]: time="2026-04-13T19:20:57.145567169Z" level=info msg="CreateContainer within 
sandbox \"36fb5d5aa19a2e1da4c032cc897b9ce03ef6ded441befdf2ec69de669432c05b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:20:57.149555 containerd[1489]: time="2026-04-13T19:20:57.149034103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-3-c59e9f41ff,Uid:3dead6ea7c422540a5232601737323fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d811ab5fd9cae59abb0c86bddd450f17c3886cb4123951d17f83aaa3753e2898\"" Apr 13 19:20:57.154599 containerd[1489]: time="2026-04-13T19:20:57.154561459Z" level=info msg="CreateContainer within sandbox \"d811ab5fd9cae59abb0c86bddd450f17c3886cb4123951d17f83aaa3753e2898\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:20:57.162778 containerd[1489]: time="2026-04-13T19:20:57.162698306Z" level=info msg="CreateContainer within sandbox \"36fb5d5aa19a2e1da4c032cc897b9ce03ef6ded441befdf2ec69de669432c05b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e872451ec5fec564c4048bad94c973586680834901048fe25d17c86869a70d32\"" Apr 13 19:20:57.164281 containerd[1489]: time="2026-04-13T19:20:57.163498386Z" level=info msg="StartContainer for \"e872451ec5fec564c4048bad94c973586680834901048fe25d17c86869a70d32\"" Apr 13 19:20:57.165153 containerd[1489]: time="2026-04-13T19:20:57.165064264Z" level=info msg="CreateContainer within sandbox \"c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0\"" Apr 13 19:20:57.166793 containerd[1489]: time="2026-04-13T19:20:57.165747499Z" level=info msg="StartContainer for \"2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0\"" Apr 13 19:20:57.174210 containerd[1489]: time="2026-04-13T19:20:57.174168840Z" level=info msg="CreateContainer within sandbox \"d811ab5fd9cae59abb0c86bddd450f17c3886cb4123951d17f83aaa3753e2898\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9341b7c67fa4362915cb1f8237a67d33740270fa51ed48bd0353f5d161caab07\"" Apr 13 19:20:57.175381 containerd[1489]: time="2026-04-13T19:20:57.175348059Z" level=info msg="StartContainer for \"9341b7c67fa4362915cb1f8237a67d33740270fa51ed48bd0353f5d161caab07\"" Apr 13 19:20:57.194265 systemd[1]: Started cri-containerd-2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0.scope - libcontainer container 2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0. Apr 13 19:20:57.208190 systemd[1]: Started cri-containerd-e872451ec5fec564c4048bad94c973586680834901048fe25d17c86869a70d32.scope - libcontainer container e872451ec5fec564c4048bad94c973586680834901048fe25d17c86869a70d32. Apr 13 19:20:57.217140 kubelet[2228]: E0413 19:20:57.217092 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.7.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-3-c59e9f41ff&limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:20:57.227232 systemd[1]: Started cri-containerd-9341b7c67fa4362915cb1f8237a67d33740270fa51ed48bd0353f5d161caab07.scope - libcontainer container 9341b7c67fa4362915cb1f8237a67d33740270fa51ed48bd0353f5d161caab07. 
Apr 13 19:20:57.250481 containerd[1489]: time="2026-04-13T19:20:57.250331048Z" level=info msg="StartContainer for \"2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0\" returns successfully" Apr 13 19:20:57.260715 kubelet[2228]: E0413 19:20:57.260575 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.7.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:20:57.278004 containerd[1489]: time="2026-04-13T19:20:57.277098586Z" level=info msg="StartContainer for \"e872451ec5fec564c4048bad94c973586680834901048fe25d17c86869a70d32\" returns successfully" Apr 13 19:20:57.293486 kubelet[2228]: E0413 19:20:57.292841 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-3-c59e9f41ff?timeout=10s\": dial tcp 178.105.7.28:6443: connect: connection refused" interval="1.6s" Apr 13 19:20:57.299091 containerd[1489]: time="2026-04-13T19:20:57.298944158Z" level=info msg="StartContainer for \"9341b7c67fa4362915cb1f8237a67d33740270fa51ed48bd0353f5d161caab07\" returns successfully" Apr 13 19:20:57.316246 kubelet[2228]: E0413 19:20:57.316204 2228 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.7.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.7.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:20:57.477917 kubelet[2228]: I0413 19:20:57.477460 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:57.932522 kubelet[2228]: E0413 19:20:57.931975 2228 kubelet.go:3216] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:57.938867 kubelet[2228]: E0413 19:20:57.937885 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:57.942679 kubelet[2228]: E0413 19:20:57.942623 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:58.942498 kubelet[2228]: E0413 19:20:58.942270 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:58.944865 kubelet[2228]: E0413 19:20:58.944734 2228 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.186943 kubelet[2228]: E0413 19:20:59.186905 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-3-c59e9f41ff\" not found" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.307119 kubelet[2228]: I0413 19:20:59.306858 2228 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.389095 kubelet[2228]: I0413 19:20:59.388559 2228 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.398531 kubelet[2228]: E0413 19:20:59.398495 2228 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 
19:20:59.398697 kubelet[2228]: I0413 19:20:59.398683 2228 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.403191 kubelet[2228]: E0413 19:20:59.403152 2228 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.403578 kubelet[2228]: I0413 19:20:59.403363 2228 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.407285 kubelet[2228]: E0413 19:20:59.407257 2228 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-3-c59e9f41ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:20:59.868301 kubelet[2228]: I0413 19:20:59.868015 2228 apiserver.go:52] "Watching apiserver" Apr 13 19:20:59.890211 kubelet[2228]: I0413 19:20:59.890173 2228 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:21:01.778887 systemd[1]: Reloading requested from client PID 2512 ('systemctl') (unit session-7.scope)... Apr 13 19:21:01.779332 systemd[1]: Reloading... Apr 13 19:21:01.896945 zram_generator::config[2549]: No configuration found. Apr 13 19:21:02.026609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:21:02.109825 systemd[1]: Reloading finished in 329 ms. Apr 13 19:21:02.146096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:21:02.155317 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 13 19:21:02.155520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:21:02.155568 systemd[1]: kubelet.service: Consumed 1.445s CPU time, 119.0M memory peak, 0B memory swap peak. Apr 13 19:21:02.161289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:21:02.300728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:21:02.311633 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:21:02.360721 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:21:02.360721 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:21:02.360721 kubelet[2597]: I0413 19:21:02.360088 2597 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:21:02.373809 kubelet[2597]: I0413 19:21:02.373746 2597 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:21:02.373809 kubelet[2597]: I0413 19:21:02.373781 2597 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:21:02.373809 kubelet[2597]: I0413 19:21:02.373806 2597 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:21:02.373809 kubelet[2597]: I0413 19:21:02.373813 2597 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:21:02.374102 kubelet[2597]: I0413 19:21:02.374078 2597 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:21:02.375890 kubelet[2597]: I0413 19:21:02.375570 2597 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:21:02.378693 kubelet[2597]: I0413 19:21:02.378315 2597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:21:02.383440 kubelet[2597]: E0413 19:21:02.383392 2597 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:21:02.383648 kubelet[2597]: I0413 19:21:02.383463 2597 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:21:02.386144 kubelet[2597]: I0413 19:21:02.386120 2597 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:21:02.386429 kubelet[2597]: I0413 19:21:02.386404 2597 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:21:02.386624 kubelet[2597]: I0413 19:21:02.386430 2597 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-3-c59e9f41ff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:21:02.386624 kubelet[2597]: I0413 19:21:02.386612 2597 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:21:02.386624 kubelet[2597]: I0413 19:21:02.386626 2597 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:21:02.386753 kubelet[2597]: I0413 19:21:02.386649 2597 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:21:02.386842 kubelet[2597]: I0413 19:21:02.386828 2597 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:21:02.387016 kubelet[2597]: I0413 19:21:02.387006 2597 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:21:02.387068 kubelet[2597]: I0413 19:21:02.387028 2597 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:21:02.387068 kubelet[2597]: I0413 19:21:02.387056 2597 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:21:02.389111 kubelet[2597]: I0413 19:21:02.387075 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:21:02.389979 kubelet[2597]: I0413 19:21:02.389958 2597 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:21:02.390621 kubelet[2597]: I0413 19:21:02.390591 2597 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:21:02.390674 kubelet[2597]: I0413 19:21:02.390627 2597 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:21:02.395566 kubelet[2597]: I0413 19:21:02.395543 2597 server.go:1262] "Started kubelet" Apr 13 19:21:02.396292 kubelet[2597]: I0413 19:21:02.396215 2597 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:21:02.396602 kubelet[2597]: I0413 19:21:02.396547 2597 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:21:02.396714 kubelet[2597]: I0413 19:21:02.396701 2597 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:21:02.397311 kubelet[2597]: I0413 19:21:02.397295 2597 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:21:02.399300 kubelet[2597]: I0413 19:21:02.399195 2597 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:21:02.404935 kubelet[2597]: I0413 19:21:02.404332 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:21:02.414431 kubelet[2597]: I0413 19:21:02.414392 2597 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:21:02.414620 kubelet[2597]: E0413 19:21:02.414585 2597 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-3-c59e9f41ff\" not found" Apr 13 19:21:02.415073 kubelet[2597]: I0413 19:21:02.415042 2597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:21:02.431262 kubelet[2597]: I0413 19:21:02.431067 2597 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:21:02.437327 kubelet[2597]: I0413 19:21:02.437272 2597 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:21:02.437487 kubelet[2597]: I0413 19:21:02.437405 2597 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:21:02.446999 kubelet[2597]: I0413 19:21:02.446857 2597 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:21:02.446999 kubelet[2597]: I0413 19:21:02.446985 2597 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:21:02.447282 kubelet[2597]: I0413 19:21:02.447174 2597 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:21:02.450110 kubelet[2597]: I0413 19:21:02.450067 2597 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:21:02.450110 kubelet[2597]: I0413 19:21:02.450106 2597 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:21:02.450252 kubelet[2597]: I0413 19:21:02.450127 2597 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:21:02.450436 kubelet[2597]: E0413 19:21:02.450405 2597 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:21:02.455693 kubelet[2597]: E0413 19:21:02.455148 2597 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:21:02.500078 kubelet[2597]: I0413 19:21:02.499975 2597 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:21:02.500078 kubelet[2597]: I0413 19:21:02.500008 2597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:21:02.500078 kubelet[2597]: I0413 19:21:02.500031 2597 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500528 2597 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500544 2597 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500560 2597 policy_none.go:49] "None policy: Start" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500569 2597 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500577 2597 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500666 2597 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:21:02.501626 kubelet[2597]: I0413 19:21:02.500676 2597 policy_none.go:47] "Start" Apr 13 19:21:02.507129 kubelet[2597]: E0413 19:21:02.507107 2597 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:21:02.507645 kubelet[2597]: I0413 19:21:02.507627 2597 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:21:02.507755 kubelet[2597]: I0413 19:21:02.507723 2597 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:21:02.508193 kubelet[2597]: I0413 19:21:02.508181 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:21:02.511007 kubelet[2597]: 
E0413 19:21:02.510980 2597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:21:02.551801 kubelet[2597]: I0413 19:21:02.551767 2597 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.552923 kubelet[2597]: I0413 19:21:02.552005 2597 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.554283 kubelet[2597]: I0413 19:21:02.552185 2597 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.612823 kubelet[2597]: I0413 19:21:02.612100 2597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.624990 kubelet[2597]: I0413 19:21:02.624536 2597 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.624990 kubelet[2597]: I0413 19:21:02.624623 2597 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.739687 kubelet[2597]: I0413 19:21:02.739121 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.739687 kubelet[2597]: I0413 19:21:02.739183 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " 
pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.739687 kubelet[2597]: I0413 19:21:02.739217 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.739687 kubelet[2597]: I0413 19:21:02.739289 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.739687 kubelet[2597]: I0413 19:21:02.739331 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.740139 kubelet[2597]: I0413 19:21:02.739372 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95248b4f55afe742f73d6098c01f2524-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-3-c59e9f41ff\" (UID: \"95248b4f55afe742f73d6098c01f2524\") " pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.740139 kubelet[2597]: I0413 19:21:02.739399 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3dead6ea7c422540a5232601737323fa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" (UID: \"3dead6ea7c422540a5232601737323fa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.740139 kubelet[2597]: I0413 19:21:02.739426 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.740139 kubelet[2597]: I0413 19:21:02.739461 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a80197416dcf5d0c9e3f7d294b11a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-3-c59e9f41ff\" (UID: \"19a80197416dcf5d0c9e3f7d294b11a3\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:02.774054 sudo[2634]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 19:21:02.774366 sudo[2634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 19:21:03.225483 sudo[2634]: pam_unix(sudo:session): session closed for user root Apr 13 19:21:03.389979 kubelet[2597]: I0413 19:21:03.388348 2597 apiserver.go:52] "Watching apiserver" Apr 13 19:21:03.438347 kubelet[2597]: I0413 19:21:03.438239 2597 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:21:03.486654 kubelet[2597]: I0413 19:21:03.486078 2597 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:03.499446 kubelet[2597]: E0413 19:21:03.499081 2597 kubelet.go:3222] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-ci-4081-3-7-3-c59e9f41ff\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" Apr 13 19:21:03.528951 kubelet[2597]: I0413 19:21:03.528642 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-3-c59e9f41ff" podStartSLOduration=1.528608906 podStartE2EDuration="1.528608906s" podCreationTimestamp="2026-04-13 19:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:03.512414404 +0000 UTC m=+1.192203063" watchObservedRunningTime="2026-04-13 19:21:03.528608906 +0000 UTC m=+1.208397565" Apr 13 19:21:03.533100 kubelet[2597]: I0413 19:21:03.531154 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-3-c59e9f41ff" podStartSLOduration=1.531133769 podStartE2EDuration="1.531133769s" podCreationTimestamp="2026-04-13 19:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:03.530398699 +0000 UTC m=+1.210187318" watchObservedRunningTime="2026-04-13 19:21:03.531133769 +0000 UTC m=+1.210922388" Apr 13 19:21:03.562702 kubelet[2597]: I0413 19:21:03.562256 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-3-c59e9f41ff" podStartSLOduration=1.5622372009999999 podStartE2EDuration="1.562237201s" podCreationTimestamp="2026-04-13 19:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:03.546451716 +0000 UTC m=+1.226240335" watchObservedRunningTime="2026-04-13 19:21:03.562237201 +0000 UTC m=+1.242025780" Apr 13 19:21:05.223042 sudo[1721]: pam_unix(sudo:session): session closed for user root Apr 13 19:21:05.241318 sshd[1718]: 
pam_unix(sshd:session): session closed for user core Apr 13 19:21:05.246636 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:21:05.249171 systemd[1]: sshd@6-178.105.7.28:22-50.85.169.122:55450.service: Deactivated successfully. Apr 13 19:21:05.252890 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:21:05.254122 systemd[1]: session-7.scope: Consumed 8.853s CPU time, 151.2M memory peak, 0B memory swap peak. Apr 13 19:21:05.255202 systemd-logind[1463]: Removed session 7. Apr 13 19:21:06.809811 kubelet[2597]: I0413 19:21:06.809661 2597 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:21:06.812398 containerd[1489]: time="2026-04-13T19:21:06.812254318Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:21:06.813384 kubelet[2597]: I0413 19:21:06.813338 2597 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:21:07.913842 systemd[1]: Created slice kubepods-besteffort-pod58450b4f_cfd0_4fb6_a934_f47cd56eb905.slice - libcontainer container kubepods-besteffort-pod58450b4f_cfd0_4fb6_a934_f47cd56eb905.slice. Apr 13 19:21:07.941371 systemd[1]: Created slice kubepods-burstable-pod817c6f1a_8085_43ee_a4c1_644ba116cd53.slice - libcontainer container kubepods-burstable-pod817c6f1a_8085_43ee_a4c1_644ba116cd53.slice. 
Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971043 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-config-path\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971096 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-hubble-tls\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971222 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58450b4f-cfd0-4fb6-a934-f47cd56eb905-kube-proxy\") pod \"kube-proxy-gdl89\" (UID: \"58450b4f-cfd0-4fb6-a934-f47cd56eb905\") " pod="kube-system/kube-proxy-gdl89" Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971249 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-bpf-maps\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971272 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-hostproc\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.971672 kubelet[2597]: I0413 19:21:07.971302 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cni-path\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972172 kubelet[2597]: I0413 19:21:07.971322 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-etc-cni-netd\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972172 kubelet[2597]: I0413 19:21:07.971339 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/817c6f1a-8085-43ee-a4c1-644ba116cd53-clustermesh-secrets\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972172 kubelet[2597]: I0413 19:21:07.971354 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-kernel\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972172 kubelet[2597]: I0413 19:21:07.971376 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstb5\" (UniqueName: \"kubernetes.io/projected/58450b4f-cfd0-4fb6-a934-f47cd56eb905-kube-api-access-rstb5\") pod \"kube-proxy-gdl89\" (UID: \"58450b4f-cfd0-4fb6-a934-f47cd56eb905\") " pod="kube-system/kube-proxy-gdl89" Apr 13 19:21:07.972172 kubelet[2597]: I0413 19:21:07.971393 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-xtables-lock\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972278 kubelet[2597]: I0413 19:21:07.971408 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6cg\" (UniqueName: \"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-kube-api-access-4l6cg\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972278 kubelet[2597]: I0413 19:21:07.971426 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58450b4f-cfd0-4fb6-a934-f47cd56eb905-lib-modules\") pod \"kube-proxy-gdl89\" (UID: \"58450b4f-cfd0-4fb6-a934-f47cd56eb905\") " pod="kube-system/kube-proxy-gdl89" Apr 13 19:21:07.972278 kubelet[2597]: I0413 19:21:07.971459 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-cgroup\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972278 kubelet[2597]: I0413 19:21:07.971474 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-lib-modules\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972278 kubelet[2597]: I0413 19:21:07.971489 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-net\") pod \"cilium-txp7v\" (UID: 
\"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:07.972380 kubelet[2597]: I0413 19:21:07.971535 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58450b4f-cfd0-4fb6-a934-f47cd56eb905-xtables-lock\") pod \"kube-proxy-gdl89\" (UID: \"58450b4f-cfd0-4fb6-a934-f47cd56eb905\") " pod="kube-system/kube-proxy-gdl89" Apr 13 19:21:07.972380 kubelet[2597]: I0413 19:21:07.971550 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-run\") pod \"cilium-txp7v\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " pod="kube-system/cilium-txp7v" Apr 13 19:21:08.061095 systemd[1]: Created slice kubepods-besteffort-pod1e52f299_9aba_4f98_af53_f9e67c8b6260.slice - libcontainer container kubepods-besteffort-pod1e52f299_9aba_4f98_af53_f9e67c8b6260.slice. 
Apr 13 19:21:08.072607 kubelet[2597]: I0413 19:21:08.072553 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e52f299-9aba-4f98-af53-f9e67c8b6260-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-4jh9q\" (UID: \"1e52f299-9aba-4f98-af53-f9e67c8b6260\") " pod="kube-system/cilium-operator-6f9c7c5859-4jh9q" Apr 13 19:21:08.072747 kubelet[2597]: I0413 19:21:08.072683 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd722\" (UniqueName: \"kubernetes.io/projected/1e52f299-9aba-4f98-af53-f9e67c8b6260-kube-api-access-zd722\") pod \"cilium-operator-6f9c7c5859-4jh9q\" (UID: \"1e52f299-9aba-4f98-af53-f9e67c8b6260\") " pod="kube-system/cilium-operator-6f9c7c5859-4jh9q" Apr 13 19:21:08.226099 containerd[1489]: time="2026-04-13T19:21:08.225718053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdl89,Uid:58450b4f-cfd0-4fb6-a934-f47cd56eb905,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:08.248127 containerd[1489]: time="2026-04-13T19:21:08.248051207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txp7v,Uid:817c6f1a-8085-43ee-a4c1-644ba116cd53,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:08.256930 containerd[1489]: time="2026-04-13T19:21:08.256789878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:08.256930 containerd[1489]: time="2026-04-13T19:21:08.256856840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:08.257440 containerd[1489]: time="2026-04-13T19:21:08.256872761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.257563 containerd[1489]: time="2026-04-13T19:21:08.257210533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.281131 systemd[1]: Started cri-containerd-f484f2f2fef69ca70bcee56cf8972976dad52f6b9428252811b7a8b9e18e1440.scope - libcontainer container f484f2f2fef69ca70bcee56cf8972976dad52f6b9428252811b7a8b9e18e1440. Apr 13 19:21:08.283805 containerd[1489]: time="2026-04-13T19:21:08.283640113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:08.284016 containerd[1489]: time="2026-04-13T19:21:08.283760878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:08.284016 containerd[1489]: time="2026-04-13T19:21:08.283817560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.286866 containerd[1489]: time="2026-04-13T19:21:08.285278892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.308136 systemd[1]: Started cri-containerd-186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2.scope - libcontainer container 186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2. 
Apr 13 19:21:08.324331 containerd[1489]: time="2026-04-13T19:21:08.324291439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdl89,Uid:58450b4f-cfd0-4fb6-a934-f47cd56eb905,Namespace:kube-system,Attempt:0,} returns sandbox id \"f484f2f2fef69ca70bcee56cf8972976dad52f6b9428252811b7a8b9e18e1440\"" Apr 13 19:21:08.334001 containerd[1489]: time="2026-04-13T19:21:08.333953423Z" level=info msg="CreateContainer within sandbox \"f484f2f2fef69ca70bcee56cf8972976dad52f6b9428252811b7a8b9e18e1440\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:21:08.339628 containerd[1489]: time="2026-04-13T19:21:08.339402297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txp7v,Uid:817c6f1a-8085-43ee-a4c1-644ba116cd53,Namespace:kube-system,Attempt:0,} returns sandbox id \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\"" Apr 13 19:21:08.343143 containerd[1489]: time="2026-04-13T19:21:08.343014265Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:21:08.351565 containerd[1489]: time="2026-04-13T19:21:08.351506407Z" level=info msg="CreateContainer within sandbox \"f484f2f2fef69ca70bcee56cf8972976dad52f6b9428252811b7a8b9e18e1440\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a4dc9dca4a14d16a1a8176a3aa26951e94d0dd2ba29f0a598dc489f97844920\"" Apr 13 19:21:08.352725 containerd[1489]: time="2026-04-13T19:21:08.352397399Z" level=info msg="StartContainer for \"6a4dc9dca4a14d16a1a8176a3aa26951e94d0dd2ba29f0a598dc489f97844920\"" Apr 13 19:21:08.373250 containerd[1489]: time="2026-04-13T19:21:08.373204819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4jh9q,Uid:1e52f299-9aba-4f98-af53-f9e67c8b6260,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:08.379358 systemd[1]: Started cri-containerd-6a4dc9dca4a14d16a1a8176a3aa26951e94d0dd2ba29f0a598dc489f97844920.scope 
- libcontainer container 6a4dc9dca4a14d16a1a8176a3aa26951e94d0dd2ba29f0a598dc489f97844920. Apr 13 19:21:08.410761 containerd[1489]: time="2026-04-13T19:21:08.410369821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:08.410761 containerd[1489]: time="2026-04-13T19:21:08.410427223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:08.410761 containerd[1489]: time="2026-04-13T19:21:08.410442744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.411153 containerd[1489]: time="2026-04-13T19:21:08.411085967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:08.419778 containerd[1489]: time="2026-04-13T19:21:08.419663792Z" level=info msg="StartContainer for \"6a4dc9dca4a14d16a1a8176a3aa26951e94d0dd2ba29f0a598dc489f97844920\" returns successfully" Apr 13 19:21:08.442356 systemd[1]: Started cri-containerd-383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62.scope - libcontainer container 383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62. 
Apr 13 19:21:08.486405 containerd[1489]: time="2026-04-13T19:21:08.485487574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4jh9q,Uid:1e52f299-9aba-4f98-af53-f9e67c8b6260,Namespace:kube-system,Attempt:0,} returns sandbox id \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\"" Apr 13 19:21:08.531626 kubelet[2597]: I0413 19:21:08.531317 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gdl89" podStartSLOduration=1.5313003630000002 podStartE2EDuration="1.531300363s" podCreationTimestamp="2026-04-13 19:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:08.515160509 +0000 UTC m=+6.194949128" watchObservedRunningTime="2026-04-13 19:21:08.531300363 +0000 UTC m=+6.211088942" Apr 13 19:21:13.211026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133275692.mount: Deactivated successfully. 
Apr 13 19:21:14.606230 containerd[1489]: time="2026-04-13T19:21:14.606123510Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:21:14.607314 containerd[1489]: time="2026-04-13T19:21:14.607271385Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 13 19:21:14.608862 containerd[1489]: time="2026-04-13T19:21:14.608540625Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:21:14.610949 containerd[1489]: time="2026-04-13T19:21:14.610889938Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.267814511s"
Apr 13 19:21:14.611057 containerd[1489]: time="2026-04-13T19:21:14.611040223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 13 19:21:14.614326 containerd[1489]: time="2026-04-13T19:21:14.614254563Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 13 19:21:14.619617 containerd[1489]: time="2026-04-13T19:21:14.618574617Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 19:21:14.635686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237575074.mount: Deactivated successfully.
Apr 13 19:21:14.639371 containerd[1489]: time="2026-04-13T19:21:14.639238299Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\""
Apr 13 19:21:14.641990 containerd[1489]: time="2026-04-13T19:21:14.641296563Z" level=info msg="StartContainer for \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\""
Apr 13 19:21:14.677399 systemd[1]: Started cri-containerd-377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0.scope - libcontainer container 377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0.
Apr 13 19:21:14.704816 containerd[1489]: time="2026-04-13T19:21:14.704748056Z" level=info msg="StartContainer for \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\" returns successfully"
Apr 13 19:21:14.720341 systemd[1]: cri-containerd-377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0.scope: Deactivated successfully.
Apr 13 19:21:14.903366 containerd[1489]: time="2026-04-13T19:21:14.903101264Z" level=info msg="shim disconnected" id=377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0 namespace=k8s.io
Apr 13 19:21:14.903366 containerd[1489]: time="2026-04-13T19:21:14.903187546Z" level=warning msg="cleaning up after shim disconnected" id=377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0 namespace=k8s.io
Apr 13 19:21:14.903366 containerd[1489]: time="2026-04-13T19:21:14.903204987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:21:15.534171 containerd[1489]: time="2026-04-13T19:21:15.534124848Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 19:21:15.553508 containerd[1489]: time="2026-04-13T19:21:15.553435597Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\""
Apr 13 19:21:15.554198 containerd[1489]: time="2026-04-13T19:21:15.554156979Z" level=info msg="StartContainer for \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\""
Apr 13 19:21:15.578096 systemd[1]: Started cri-containerd-f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9.scope - libcontainer container f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9.
Apr 13 19:21:15.601823 containerd[1489]: time="2026-04-13T19:21:15.601661468Z" level=info msg="StartContainer for \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\" returns successfully"
Apr 13 19:21:15.615233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:21:15.615441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:21:15.615508 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:21:15.624394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:21:15.624600 systemd[1]: cri-containerd-f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9.scope: Deactivated successfully.
Apr 13 19:21:15.631769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0-rootfs.mount: Deactivated successfully.
Apr 13 19:21:15.647730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:21:15.651156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9-rootfs.mount: Deactivated successfully.
Apr 13 19:21:15.655860 containerd[1489]: time="2026-04-13T19:21:15.655641194Z" level=info msg="shim disconnected" id=f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9 namespace=k8s.io
Apr 13 19:21:15.655860 containerd[1489]: time="2026-04-13T19:21:15.655716877Z" level=warning msg="cleaning up after shim disconnected" id=f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9 namespace=k8s.io
Apr 13 19:21:15.655860 containerd[1489]: time="2026-04-13T19:21:15.655725117Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:21:16.079324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873076893.mount: Deactivated successfully.
Apr 13 19:21:16.544785 containerd[1489]: time="2026-04-13T19:21:16.544634207Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 19:21:16.568692 containerd[1489]: time="2026-04-13T19:21:16.568645246Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\""
Apr 13 19:21:16.570136 containerd[1489]: time="2026-04-13T19:21:16.570093370Z" level=info msg="StartContainer for \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\""
Apr 13 19:21:16.605232 systemd[1]: Started cri-containerd-2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a.scope - libcontainer container 2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a.
Apr 13 19:21:16.633981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306871892.mount: Deactivated successfully.
Apr 13 19:21:16.645906 containerd[1489]: time="2026-04-13T19:21:16.645842798Z" level=info msg="StartContainer for \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\" returns successfully"
Apr 13 19:21:16.648242 systemd[1]: cri-containerd-2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a.scope: Deactivated successfully.
Apr 13 19:21:16.677278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a-rootfs.mount: Deactivated successfully.
Apr 13 19:21:16.703697 containerd[1489]: time="2026-04-13T19:21:16.703491044Z" level=info msg="shim disconnected" id=2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a namespace=k8s.io
Apr 13 19:21:16.703697 containerd[1489]: time="2026-04-13T19:21:16.703543246Z" level=warning msg="cleaning up after shim disconnected" id=2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a namespace=k8s.io
Apr 13 19:21:16.703697 containerd[1489]: time="2026-04-13T19:21:16.703551046Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:21:16.726927 containerd[1489]: time="2026-04-13T19:21:16.726500453Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:21:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:21:16.831585 containerd[1489]: time="2026-04-13T19:21:16.830375764Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:21:16.832648 containerd[1489]: time="2026-04-13T19:21:16.832616551Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 13 19:21:16.833929 containerd[1489]: time="2026-04-13T19:21:16.833882189Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:21:16.838141 containerd[1489]: time="2026-04-13T19:21:16.838109075Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.223777111s"
Apr 13 19:21:16.838269 containerd[1489]: time="2026-04-13T19:21:16.838251400Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 13 19:21:16.846466 containerd[1489]: time="2026-04-13T19:21:16.846396324Z" level=info msg="CreateContainer within sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 13 19:21:16.860182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021811398.mount: Deactivated successfully.
Apr 13 19:21:16.873582 containerd[1489]: time="2026-04-13T19:21:16.873504735Z" level=info msg="CreateContainer within sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\""
Apr 13 19:21:16.876869 containerd[1489]: time="2026-04-13T19:21:16.876063932Z" level=info msg="StartContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\""
Apr 13 19:21:16.908287 systemd[1]: Started cri-containerd-d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b.scope - libcontainer container d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b.
Apr 13 19:21:16.937250 containerd[1489]: time="2026-04-13T19:21:16.937201083Z" level=info msg="StartContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" returns successfully"
Apr 13 19:21:17.544300 containerd[1489]: time="2026-04-13T19:21:17.544249858Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 19:21:17.560231 containerd[1489]: time="2026-04-13T19:21:17.559446825Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\""
Apr 13 19:21:17.560554 containerd[1489]: time="2026-04-13T19:21:17.560522177Z" level=info msg="StartContainer for \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\""
Apr 13 19:21:17.603115 systemd[1]: Started cri-containerd-2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0.scope - libcontainer container 2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0.
Apr 13 19:21:17.661816 kubelet[2597]: I0413 19:21:17.661512 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-4jh9q" podStartSLOduration=2.3069008 podStartE2EDuration="10.661496708s" podCreationTimestamp="2026-04-13 19:21:07 +0000 UTC" firstStartedPulling="2026-04-13 19:21:08.487740534 +0000 UTC m=+6.167529113" lastFinishedPulling="2026-04-13 19:21:16.842336442 +0000 UTC m=+14.522125021" observedRunningTime="2026-04-13 19:21:17.578580028 +0000 UTC m=+15.258368727" watchObservedRunningTime="2026-04-13 19:21:17.661496708 +0000 UTC m=+15.341285287"
Apr 13 19:21:17.674562 containerd[1489]: time="2026-04-13T19:21:17.674515691Z" level=info msg="StartContainer for \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\" returns successfully"
Apr 13 19:21:17.677582 systemd[1]: cri-containerd-2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0.scope: Deactivated successfully.
Apr 13 19:21:17.710660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0-rootfs.mount: Deactivated successfully.
Apr 13 19:21:17.728508 containerd[1489]: time="2026-04-13T19:21:17.728446798Z" level=info msg="shim disconnected" id=2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0 namespace=k8s.io
Apr 13 19:21:17.729771 containerd[1489]: time="2026-04-13T19:21:17.729534350Z" level=warning msg="cleaning up after shim disconnected" id=2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0 namespace=k8s.io
Apr 13 19:21:17.729771 containerd[1489]: time="2026-04-13T19:21:17.729565031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:21:18.552843 containerd[1489]: time="2026-04-13T19:21:18.552714823Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 19:21:18.577434 containerd[1489]: time="2026-04-13T19:21:18.577375416Z" level=info msg="CreateContainer within sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\""
Apr 13 19:21:18.578572 containerd[1489]: time="2026-04-13T19:21:18.578440687Z" level=info msg="StartContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\""
Apr 13 19:21:18.609223 systemd[1]: Started cri-containerd-b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157.scope - libcontainer container b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157.
Apr 13 19:21:18.655135 containerd[1489]: time="2026-04-13T19:21:18.655003223Z" level=info msg="StartContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" returns successfully"
Apr 13 19:21:18.748020 kubelet[2597]: I0413 19:21:18.744877 2597 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 13 19:21:18.800976 systemd[1]: Created slice kubepods-burstable-pod29aab365_0b30_41f5_b11a_293b600a91a5.slice - libcontainer container kubepods-burstable-pod29aab365_0b30_41f5_b11a_293b600a91a5.slice.
Apr 13 19:21:18.811628 systemd[1]: Created slice kubepods-burstable-pod7ab4f8a7_9889_4a11_af85_cf7399af71cc.slice - libcontainer container kubepods-burstable-pod7ab4f8a7_9889_4a11_af85_cf7399af71cc.slice.
Apr 13 19:21:18.843744 kubelet[2597]: I0413 19:21:18.843694 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ab4f8a7-9889-4a11-af85-cf7399af71cc-config-volume\") pod \"coredns-66bc5c9577-lm2p7\" (UID: \"7ab4f8a7-9889-4a11-af85-cf7399af71cc\") " pod="kube-system/coredns-66bc5c9577-lm2p7"
Apr 13 19:21:18.843744 kubelet[2597]: I0413 19:21:18.843840 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76sf\" (UniqueName: \"kubernetes.io/projected/7ab4f8a7-9889-4a11-af85-cf7399af71cc-kube-api-access-k76sf\") pod \"coredns-66bc5c9577-lm2p7\" (UID: \"7ab4f8a7-9889-4a11-af85-cf7399af71cc\") " pod="kube-system/coredns-66bc5c9577-lm2p7"
Apr 13 19:21:18.844188 kubelet[2597]: I0413 19:21:18.843876 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29aab365-0b30-41f5-b11a-293b600a91a5-config-volume\") pod \"coredns-66bc5c9577-ftbld\" (UID: \"29aab365-0b30-41f5-b11a-293b600a91a5\") " pod="kube-system/coredns-66bc5c9577-ftbld"
Apr 13 19:21:18.844188 kubelet[2597]: I0413 19:21:18.843894 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnrw\" (UniqueName: \"kubernetes.io/projected/29aab365-0b30-41f5-b11a-293b600a91a5-kube-api-access-9bnrw\") pod \"coredns-66bc5c9577-ftbld\" (UID: \"29aab365-0b30-41f5-b11a-293b600a91a5\") " pod="kube-system/coredns-66bc5c9577-ftbld"
Apr 13 19:21:19.111003 containerd[1489]: time="2026-04-13T19:21:19.110663998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ftbld,Uid:29aab365-0b30-41f5-b11a-293b600a91a5,Namespace:kube-system,Attempt:0,}"
Apr 13 19:21:19.120506 containerd[1489]: time="2026-04-13T19:21:19.120097907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2p7,Uid:7ab4f8a7-9889-4a11-af85-cf7399af71cc,Namespace:kube-system,Attempt:0,}"
Apr 13 19:21:19.572999 kubelet[2597]: I0413 19:21:19.572377 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-txp7v" podStartSLOduration=6.30107324 podStartE2EDuration="12.572353307s" podCreationTimestamp="2026-04-13 19:21:07 +0000 UTC" firstStartedPulling="2026-04-13 19:21:08.341064156 +0000 UTC m=+6.020852735" lastFinishedPulling="2026-04-13 19:21:14.612344223 +0000 UTC m=+12.292132802" observedRunningTime="2026-04-13 19:21:19.570877625 +0000 UTC m=+17.250666284" watchObservedRunningTime="2026-04-13 19:21:19.572353307 +0000 UTC m=+17.252141926"
Apr 13 19:21:20.910168 systemd-networkd[1373]: cilium_host: Link UP
Apr 13 19:21:20.910320 systemd-networkd[1373]: cilium_net: Link UP
Apr 13 19:21:20.914108 systemd-networkd[1373]: cilium_net: Gained carrier
Apr 13 19:21:20.914475 systemd-networkd[1373]: cilium_host: Gained carrier
Apr 13 19:21:20.915118 systemd-networkd[1373]: cilium_net: Gained IPv6LL
Apr 13 19:21:20.915379 systemd-networkd[1373]: cilium_host: Gained IPv6LL
Apr 13 19:21:21.030306 systemd-networkd[1373]: cilium_vxlan: Link UP
Apr 13 19:21:21.030503 systemd-networkd[1373]: cilium_vxlan: Gained carrier
Apr 13 19:21:21.314369 kernel: NET: Registered PF_ALG protocol family
Apr 13 19:21:22.021794 systemd-networkd[1373]: lxc_health: Link UP
Apr 13 19:21:22.032124 systemd-networkd[1373]: lxc_health: Gained carrier
Apr 13 19:21:22.206100 systemd-networkd[1373]: lxcf2de345f77fc: Link UP
Apr 13 19:21:22.213923 kernel: eth0: renamed from tmpbb290
Apr 13 19:21:22.219924 systemd-networkd[1373]: lxcf2de345f77fc: Gained carrier
Apr 13 19:21:22.220119 systemd-networkd[1373]: lxcfd6feb7de052: Link UP
Apr 13 19:21:22.231718 kernel: eth0: renamed from tmp45f86
Apr 13 19:21:22.237415 systemd-networkd[1373]: lxcfd6feb7de052: Gained carrier
Apr 13 19:21:22.810507 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL
Apr 13 19:21:23.514184 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Apr 13 19:21:23.514541 systemd-networkd[1373]: lxcf2de345f77fc: Gained IPv6LL
Apr 13 19:21:24.218836 systemd-networkd[1373]: lxcfd6feb7de052: Gained IPv6LL
Apr 13 19:21:26.207740 containerd[1489]: time="2026-04-13T19:21:26.207420670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:21:26.207740 containerd[1489]: time="2026-04-13T19:21:26.207485631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:21:26.207740 containerd[1489]: time="2026-04-13T19:21:26.207510192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:21:26.207740 containerd[1489]: time="2026-04-13T19:21:26.207608514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:21:26.217096 containerd[1489]: time="2026-04-13T19:21:26.216500305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:21:26.217096 containerd[1489]: time="2026-04-13T19:21:26.216570507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:21:26.217096 containerd[1489]: time="2026-04-13T19:21:26.216585508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:21:26.217096 containerd[1489]: time="2026-04-13T19:21:26.216726111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:21:26.254095 systemd[1]: run-containerd-runc-k8s.io-45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c-runc.VRMltF.mount: Deactivated successfully.
Apr 13 19:21:26.269129 systemd[1]: Started cri-containerd-45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c.scope - libcontainer container 45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c.
Apr 13 19:21:26.271065 systemd[1]: Started cri-containerd-bb29017e32a0ce1992aa8d108a6c6b838747c2b012a6ab51ff4c1f4de0e891f9.scope - libcontainer container bb29017e32a0ce1992aa8d108a6c6b838747c2b012a6ab51ff4c1f4de0e891f9.
Apr 13 19:21:26.324638 containerd[1489]: time="2026-04-13T19:21:26.324597074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2p7,Uid:7ab4f8a7-9889-4a11-af85-cf7399af71cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb29017e32a0ce1992aa8d108a6c6b838747c2b012a6ab51ff4c1f4de0e891f9\""
Apr 13 19:21:26.333865 containerd[1489]: time="2026-04-13T19:21:26.333633509Z" level=info msg="CreateContainer within sandbox \"bb29017e32a0ce1992aa8d108a6c6b838747c2b012a6ab51ff4c1f4de0e891f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:21:26.341125 containerd[1489]: time="2026-04-13T19:21:26.341058502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ftbld,Uid:29aab365-0b30-41f5-b11a-293b600a91a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c\""
Apr 13 19:21:26.354965 containerd[1489]: time="2026-04-13T19:21:26.353636869Z" level=info msg="CreateContainer within sandbox \"45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:21:26.378420 containerd[1489]: time="2026-04-13T19:21:26.377700254Z" level=info msg="CreateContainer within sandbox \"bb29017e32a0ce1992aa8d108a6c6b838747c2b012a6ab51ff4c1f4de0e891f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fec0d77a61cfa67a82c3bc7fd4e2d4a364e54cfab19f0bef961860e39ae9044\""
Apr 13 19:21:26.379806 containerd[1489]: time="2026-04-13T19:21:26.379611504Z" level=info msg="StartContainer for \"8fec0d77a61cfa67a82c3bc7fd4e2d4a364e54cfab19f0bef961860e39ae9044\""
Apr 13 19:21:26.391932 containerd[1489]: time="2026-04-13T19:21:26.391851622Z" level=info msg="CreateContainer within sandbox \"45f86cb3fe0bb1f602a0bb6908b808395e99f092b828f5f5d23fc72e10e23c5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80cc8d500900cdd255b23506a8e06eb4133b9a51aa707288c964837be9b8a224\""
Apr 13 19:21:26.393653 containerd[1489]: time="2026-04-13T19:21:26.393595027Z" level=info msg="StartContainer for \"80cc8d500900cdd255b23506a8e06eb4133b9a51aa707288c964837be9b8a224\""
Apr 13 19:21:26.425055 systemd[1]: Started cri-containerd-8fec0d77a61cfa67a82c3bc7fd4e2d4a364e54cfab19f0bef961860e39ae9044.scope - libcontainer container 8fec0d77a61cfa67a82c3bc7fd4e2d4a364e54cfab19f0bef961860e39ae9044.
Apr 13 19:21:26.442080 systemd[1]: Started cri-containerd-80cc8d500900cdd255b23506a8e06eb4133b9a51aa707288c964837be9b8a224.scope - libcontainer container 80cc8d500900cdd255b23506a8e06eb4133b9a51aa707288c964837be9b8a224.
Apr 13 19:21:26.466637 containerd[1489]: time="2026-04-13T19:21:26.466525282Z" level=info msg="StartContainer for \"8fec0d77a61cfa67a82c3bc7fd4e2d4a364e54cfab19f0bef961860e39ae9044\" returns successfully"
Apr 13 19:21:26.482750 containerd[1489]: time="2026-04-13T19:21:26.482701023Z" level=info msg="StartContainer for \"80cc8d500900cdd255b23506a8e06eb4133b9a51aa707288c964837be9b8a224\" returns successfully"
Apr 13 19:21:26.593601 kubelet[2597]: I0413 19:21:26.593049 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lm2p7" podStartSLOduration=19.593020649 podStartE2EDuration="19.593020649s" podCreationTimestamp="2026-04-13 19:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:26.590254137 +0000 UTC m=+24.270042796" watchObservedRunningTime="2026-04-13 19:21:26.593020649 +0000 UTC m=+24.272809268"
Apr 13 19:21:26.632955 kubelet[2597]: I0413 19:21:26.630929 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ftbld" podStartSLOduration=18.630894393 podStartE2EDuration="18.630894393s" podCreationTimestamp="2026-04-13 19:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:26.610860953 +0000 UTC m=+24.290649612" watchObservedRunningTime="2026-04-13 19:21:26.630894393 +0000 UTC m=+24.310682972"
Apr 13 19:21:28.790892 kubelet[2597]: I0413 19:21:28.789434 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:23:14.294291 systemd[1]: Started sshd@7-178.105.7.28:22-50.85.169.122:32836.service - OpenSSH per-connection server daemon (50.85.169.122:32836).
Apr 13 19:23:14.423157 sshd[4007]: Accepted publickey for core from 50.85.169.122 port 32836 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:14.425728 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:14.433421 systemd-logind[1463]: New session 8 of user core.
Apr 13 19:23:14.437164 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 19:23:14.639322 sshd[4007]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:14.645425 systemd[1]: sshd@7-178.105.7.28:22-50.85.169.122:32836.service: Deactivated successfully.
Apr 13 19:23:14.648733 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 19:23:14.652157 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Apr 13 19:23:14.653858 systemd-logind[1463]: Removed session 8.
Apr 13 19:23:19.675460 systemd[1]: Started sshd@8-178.105.7.28:22-50.85.169.122:36278.service - OpenSSH per-connection server daemon (50.85.169.122:36278).
Apr 13 19:23:19.805996 sshd[4022]: Accepted publickey for core from 50.85.169.122 port 36278 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:19.809375 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:19.815103 systemd-logind[1463]: New session 9 of user core.
Apr 13 19:23:19.824768 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 19:23:20.009555 sshd[4022]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:20.014402 systemd[1]: sshd@8-178.105.7.28:22-50.85.169.122:36278.service: Deactivated successfully.
Apr 13 19:23:20.017099 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 19:23:20.019669 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Apr 13 19:23:20.021807 systemd-logind[1463]: Removed session 9.
Apr 13 19:23:25.042254 systemd[1]: Started sshd@9-178.105.7.28:22-50.85.169.122:36286.service - OpenSSH per-connection server daemon (50.85.169.122:36286).
Apr 13 19:23:25.164217 sshd[4036]: Accepted publickey for core from 50.85.169.122 port 36286 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:25.166427 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:25.172439 systemd-logind[1463]: New session 10 of user core.
Apr 13 19:23:25.180218 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 19:23:25.360250 sshd[4036]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:25.367424 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Apr 13 19:23:25.367855 systemd[1]: sshd@9-178.105.7.28:22-50.85.169.122:36286.service: Deactivated successfully.
Apr 13 19:23:25.371571 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 19:23:25.376519 systemd-logind[1463]: Removed session 10.
Apr 13 19:23:30.393239 systemd[1]: Started sshd@10-178.105.7.28:22-50.85.169.122:35534.service - OpenSSH per-connection server daemon (50.85.169.122:35534).
Apr 13 19:23:30.522124 sshd[4049]: Accepted publickey for core from 50.85.169.122 port 35534 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:30.525204 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:30.530390 systemd-logind[1463]: New session 11 of user core.
Apr 13 19:23:30.537339 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 19:23:30.713817 sshd[4049]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:30.720067 systemd[1]: sshd@10-178.105.7.28:22-50.85.169.122:35534.service: Deactivated successfully.
Apr 13 19:23:30.722706 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 19:23:30.724790 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Apr 13 19:23:30.725959 systemd-logind[1463]: Removed session 11.
Apr 13 19:23:30.747464 systemd[1]: Started sshd@11-178.105.7.28:22-50.85.169.122:35548.service - OpenSSH per-connection server daemon (50.85.169.122:35548).
Apr 13 19:23:30.873961 sshd[4062]: Accepted publickey for core from 50.85.169.122 port 35548 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:30.875562 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:30.881798 systemd-logind[1463]: New session 12 of user core.
Apr 13 19:23:30.890203 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 19:23:31.102819 sshd[4062]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:31.110224 systemd[1]: sshd@11-178.105.7.28:22-50.85.169.122:35548.service: Deactivated successfully.
Apr 13 19:23:31.111879 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 19:23:31.118730 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Apr 13 19:23:31.139326 systemd[1]: Started sshd@12-178.105.7.28:22-50.85.169.122:35562.service - OpenSSH per-connection server daemon (50.85.169.122:35562).
Apr 13 19:23:31.140840 systemd-logind[1463]: Removed session 12.
Apr 13 19:23:31.269832 sshd[4073]: Accepted publickey for core from 50.85.169.122 port 35562 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:31.272508 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:31.279177 systemd-logind[1463]: New session 13 of user core.
Apr 13 19:23:31.288303 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 19:23:31.455339 sshd[4073]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:31.463126 systemd[1]: sshd@12-178.105.7.28:22-50.85.169.122:35562.service: Deactivated successfully.
Apr 13 19:23:31.466072 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 19:23:31.467194 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Apr 13 19:23:31.468384 systemd-logind[1463]: Removed session 13.
Apr 13 19:23:36.491388 systemd[1]: Started sshd@13-178.105.7.28:22-50.85.169.122:35566.service - OpenSSH per-connection server daemon (50.85.169.122:35566).
Apr 13 19:23:36.617962 sshd[4085]: Accepted publickey for core from 50.85.169.122 port 35566 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:36.620217 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:36.626073 systemd-logind[1463]: New session 14 of user core.
Apr 13 19:23:36.634295 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 19:23:36.806177 sshd[4085]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:36.812061 systemd[1]: sshd@13-178.105.7.28:22-50.85.169.122:35566.service: Deactivated successfully.
Apr 13 19:23:36.815822 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 19:23:36.817026 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Apr 13 19:23:36.820532 systemd-logind[1463]: Removed session 14.
Apr 13 19:23:41.842384 systemd[1]: Started sshd@14-178.105.7.28:22-50.85.169.122:45742.service - OpenSSH per-connection server daemon (50.85.169.122:45742). Apr 13 19:23:41.968033 sshd[4100]: Accepted publickey for core from 50.85.169.122 port 45742 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:41.969454 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:41.973800 systemd-logind[1463]: New session 15 of user core. Apr 13 19:23:41.980084 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:23:42.155417 sshd[4100]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:42.160933 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:23:42.161751 systemd[1]: sshd@14-178.105.7.28:22-50.85.169.122:45742.service: Deactivated successfully. Apr 13 19:23:42.164942 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:23:42.166391 systemd-logind[1463]: Removed session 15. Apr 13 19:23:42.189292 systemd[1]: Started sshd@15-178.105.7.28:22-50.85.169.122:45754.service - OpenSSH per-connection server daemon (50.85.169.122:45754). Apr 13 19:23:42.309847 sshd[4113]: Accepted publickey for core from 50.85.169.122 port 45754 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:42.313142 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:42.321035 systemd-logind[1463]: New session 16 of user core. Apr 13 19:23:42.325205 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:23:42.588385 sshd[4113]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:42.593749 systemd[1]: sshd@15-178.105.7.28:22-50.85.169.122:45754.service: Deactivated successfully. Apr 13 19:23:42.596545 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:23:42.597681 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. 
Apr 13 19:23:42.598880 systemd-logind[1463]: Removed session 16. Apr 13 19:23:42.618331 systemd[1]: Started sshd@16-178.105.7.28:22-50.85.169.122:45762.service - OpenSSH per-connection server daemon (50.85.169.122:45762). Apr 13 19:23:42.752104 sshd[4124]: Accepted publickey for core from 50.85.169.122 port 45762 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:42.754769 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:42.759958 systemd-logind[1463]: New session 17 of user core. Apr 13 19:23:42.764142 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:23:43.557060 sshd[4124]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:43.562969 systemd[1]: sshd@16-178.105.7.28:22-50.85.169.122:45762.service: Deactivated successfully. Apr 13 19:23:43.565713 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:23:43.566753 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:23:43.569462 systemd-logind[1463]: Removed session 17. Apr 13 19:23:43.582452 systemd[1]: Started sshd@17-178.105.7.28:22-50.85.169.122:45766.service - OpenSSH per-connection server daemon (50.85.169.122:45766). Apr 13 19:23:43.719476 sshd[4140]: Accepted publickey for core from 50.85.169.122 port 45766 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:43.721049 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:43.727024 systemd-logind[1463]: New session 18 of user core. Apr 13 19:23:43.733225 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:23:44.050570 sshd[4140]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:44.057476 systemd[1]: sshd@17-178.105.7.28:22-50.85.169.122:45766.service: Deactivated successfully. Apr 13 19:23:44.057683 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. 
Apr 13 19:23:44.063136 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:23:44.065176 systemd-logind[1463]: Removed session 18. Apr 13 19:23:44.077170 systemd[1]: Started sshd@18-178.105.7.28:22-50.85.169.122:45768.service - OpenSSH per-connection server daemon (50.85.169.122:45768). Apr 13 19:23:44.214195 sshd[4151]: Accepted publickey for core from 50.85.169.122 port 45768 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:44.216638 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:44.227099 systemd-logind[1463]: New session 19 of user core. Apr 13 19:23:44.231159 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:23:44.399759 sshd[4151]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:44.406479 systemd[1]: sshd@18-178.105.7.28:22-50.85.169.122:45768.service: Deactivated successfully. Apr 13 19:23:44.409325 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:23:44.411225 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:23:44.412248 systemd-logind[1463]: Removed session 19. Apr 13 19:23:49.434389 systemd[1]: Started sshd@19-178.105.7.28:22-50.85.169.122:43612.service - OpenSSH per-connection server daemon (50.85.169.122:43612). Apr 13 19:23:49.559064 sshd[4167]: Accepted publickey for core from 50.85.169.122 port 43612 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:49.563175 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:49.569186 systemd-logind[1463]: New session 20 of user core. Apr 13 19:23:49.580238 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:23:49.748546 sshd[4167]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:49.753776 systemd[1]: sshd@19-178.105.7.28:22-50.85.169.122:43612.service: Deactivated successfully. 
Apr 13 19:23:49.757463 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:23:49.758666 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:23:49.760091 systemd-logind[1463]: Removed session 20. Apr 13 19:23:54.776516 systemd[1]: Started sshd@20-178.105.7.28:22-50.85.169.122:43624.service - OpenSSH per-connection server daemon (50.85.169.122:43624). Apr 13 19:23:54.911199 sshd[4179]: Accepted publickey for core from 50.85.169.122 port 43624 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:54.914365 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:54.920778 systemd-logind[1463]: New session 21 of user core. Apr 13 19:23:54.931177 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:23:55.109256 sshd[4179]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:55.115181 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:23:55.116044 systemd[1]: sshd@20-178.105.7.28:22-50.85.169.122:43624.service: Deactivated successfully. Apr 13 19:23:55.119296 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:23:55.120440 systemd-logind[1463]: Removed session 21. Apr 13 19:23:55.141285 systemd[1]: Started sshd@21-178.105.7.28:22-50.85.169.122:43638.service - OpenSSH per-connection server daemon (50.85.169.122:43638). Apr 13 19:23:55.261751 sshd[4191]: Accepted publickey for core from 50.85.169.122 port 43638 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:55.264033 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:55.271292 systemd-logind[1463]: New session 22 of user core. Apr 13 19:23:55.278097 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 13 19:23:57.314099 containerd[1489]: time="2026-04-13T19:23:57.313531599Z" level=info msg="StopContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" with timeout 30 (s)" Apr 13 19:23:57.318641 containerd[1489]: time="2026-04-13T19:23:57.318411581Z" level=info msg="Stop container \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" with signal terminated" Apr 13 19:23:57.355225 systemd[1]: cri-containerd-d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b.scope: Deactivated successfully. Apr 13 19:23:57.362717 containerd[1489]: time="2026-04-13T19:23:57.362674540Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:23:57.373658 containerd[1489]: time="2026-04-13T19:23:57.373605612Z" level=info msg="StopContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" with timeout 2 (s)" Apr 13 19:23:57.374630 containerd[1489]: time="2026-04-13T19:23:57.374493641Z" level=info msg="Stop container \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" with signal terminated" Apr 13 19:23:57.386753 systemd-networkd[1373]: lxc_health: Link DOWN Apr 13 19:23:57.386759 systemd-networkd[1373]: lxc_health: Lost carrier Apr 13 19:23:57.399464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b-rootfs.mount: Deactivated successfully. Apr 13 19:23:57.406016 systemd[1]: cri-containerd-b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157.scope: Deactivated successfully. Apr 13 19:23:57.406312 systemd[1]: cri-containerd-b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157.scope: Consumed 7.272s CPU time. 
Apr 13 19:23:57.417687 containerd[1489]: time="2026-04-13T19:23:57.417372456Z" level=info msg="shim disconnected" id=d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b namespace=k8s.io Apr 13 19:23:57.417687 containerd[1489]: time="2026-04-13T19:23:57.417683773Z" level=warning msg="cleaning up after shim disconnected" id=d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b namespace=k8s.io Apr 13 19:23:57.418165 containerd[1489]: time="2026-04-13T19:23:57.417695493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:57.439528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157-rootfs.mount: Deactivated successfully. Apr 13 19:23:57.447015 containerd[1489]: time="2026-04-13T19:23:57.446152558Z" level=info msg="shim disconnected" id=b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157 namespace=k8s.io Apr 13 19:23:57.447015 containerd[1489]: time="2026-04-13T19:23:57.446221837Z" level=warning msg="cleaning up after shim disconnected" id=b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157 namespace=k8s.io Apr 13 19:23:57.447015 containerd[1489]: time="2026-04-13T19:23:57.446230757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:57.447015 containerd[1489]: time="2026-04-13T19:23:57.446510754Z" level=info msg="StopContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" returns successfully" Apr 13 19:23:57.447944 containerd[1489]: time="2026-04-13T19:23:57.447866378Z" level=info msg="StopPodSandbox for \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\"" Apr 13 19:23:57.448127 containerd[1489]: time="2026-04-13T19:23:57.448107215Z" level=info msg="Container to stop \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:23:57.450485 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62-shm.mount: Deactivated successfully. Apr 13 19:23:57.466133 systemd[1]: cri-containerd-383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62.scope: Deactivated successfully. Apr 13 19:23:57.474492 containerd[1489]: time="2026-04-13T19:23:57.474439025Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:23:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:23:57.477282 containerd[1489]: time="2026-04-13T19:23:57.477150593Z" level=info msg="StopContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" returns successfully" Apr 13 19:23:57.478109 containerd[1489]: time="2026-04-13T19:23:57.478003583Z" level=info msg="StopPodSandbox for \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\"" Apr 13 19:23:57.478706 containerd[1489]: time="2026-04-13T19:23:57.478158221Z" level=info msg="Container to stop \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:23:57.478706 containerd[1489]: time="2026-04-13T19:23:57.478521497Z" level=info msg="Container to stop \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:23:57.478706 containerd[1489]: time="2026-04-13T19:23:57.478536137Z" level=info msg="Container to stop \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:23:57.478706 containerd[1489]: time="2026-04-13T19:23:57.478547136Z" level=info msg="Container to stop \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Apr 13 19:23:57.478706 containerd[1489]: time="2026-04-13T19:23:57.478558136Z" level=info msg="Container to stop \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:23:57.485514 systemd[1]: cri-containerd-186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2.scope: Deactivated successfully. Apr 13 19:23:57.521567 containerd[1489]: time="2026-04-13T19:23:57.521317673Z" level=info msg="shim disconnected" id=383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62 namespace=k8s.io Apr 13 19:23:57.521567 containerd[1489]: time="2026-04-13T19:23:57.521394712Z" level=warning msg="cleaning up after shim disconnected" id=383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62 namespace=k8s.io Apr 13 19:23:57.521567 containerd[1489]: time="2026-04-13T19:23:57.521406032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:57.524988 containerd[1489]: time="2026-04-13T19:23:57.524750993Z" level=info msg="shim disconnected" id=186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2 namespace=k8s.io Apr 13 19:23:57.524988 containerd[1489]: time="2026-04-13T19:23:57.524817752Z" level=warning msg="cleaning up after shim disconnected" id=186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2 namespace=k8s.io Apr 13 19:23:57.524988 containerd[1489]: time="2026-04-13T19:23:57.524831872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:57.539479 containerd[1489]: time="2026-04-13T19:23:57.538851107Z" level=info msg="TearDown network for sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" successfully" Apr 13 19:23:57.539479 containerd[1489]: time="2026-04-13T19:23:57.538891746Z" level=info msg="StopPodSandbox for \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" returns successfully" Apr 13 19:23:57.542747 containerd[1489]: 
time="2026-04-13T19:23:57.542691742Z" level=info msg="TearDown network for sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" successfully" Apr 13 19:23:57.542747 containerd[1489]: time="2026-04-13T19:23:57.542730421Z" level=info msg="StopPodSandbox for \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" returns successfully" Apr 13 19:23:57.568550 kubelet[2597]: E0413 19:23:57.567772 2597 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.673757 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-config-path\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.673833 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cni-path\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.673877 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd722\" (UniqueName: \"kubernetes.io/projected/1e52f299-9aba-4f98-af53-f9e67c8b6260-kube-api-access-zd722\") pod \"1e52f299-9aba-4f98-af53-f9e67c8b6260\" (UID: \"1e52f299-9aba-4f98-af53-f9e67c8b6260\") " Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.673979 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-bpf-maps\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: 
\"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.674022 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-run\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675001 kubelet[2597]: I0413 19:23:57.674051 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-net\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674089 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-kernel\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674091 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cni-path" (OuterVolumeSpecName: "cni-path") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674122 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-xtables-lock\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674161 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-hubble-tls\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674195 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/817c6f1a-8085-43ee-a4c1-644ba116cd53-clustermesh-secrets\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675574 kubelet[2597]: I0413 19:23:57.674230 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-etc-cni-netd\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674267 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-cgroup\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674298 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-lib-modules\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674332 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l6cg\" (UniqueName: \"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-kube-api-access-4l6cg\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674368 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e52f299-9aba-4f98-af53-f9e67c8b6260-cilium-config-path\") pod \"1e52f299-9aba-4f98-af53-f9e67c8b6260\" (UID: \"1e52f299-9aba-4f98-af53-f9e67c8b6260\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674401 2597 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-hostproc\") pod \"817c6f1a-8085-43ee-a4c1-644ba116cd53\" (UID: \"817c6f1a-8085-43ee-a4c1-644ba116cd53\") " Apr 13 19:23:57.675970 kubelet[2597]: I0413 19:23:57.674467 2597 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cni-path\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.678070 kubelet[2597]: I0413 19:23:57.674518 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-hostproc" (OuterVolumeSpecName: "hostproc") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678070 kubelet[2597]: I0413 19:23:57.675829 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678070 kubelet[2597]: I0413 19:23:57.675917 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678070 kubelet[2597]: I0413 19:23:57.675945 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678070 kubelet[2597]: I0413 19:23:57.675964 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678198 kubelet[2597]: I0413 19:23:57.675982 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.678198 kubelet[2597]: I0413 19:23:57.677834 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:23:57.678377 kubelet[2597]: I0413 19:23:57.678341 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.679590 kubelet[2597]: I0413 19:23:57.679553 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:23:57.679797 kubelet[2597]: I0413 19:23:57.679771 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.680996 kubelet[2597]: I0413 19:23:57.680962 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e52f299-9aba-4f98-af53-f9e67c8b6260-kube-api-access-zd722" (OuterVolumeSpecName: "kube-api-access-zd722") pod "1e52f299-9aba-4f98-af53-f9e67c8b6260" (UID: "1e52f299-9aba-4f98-af53-f9e67c8b6260"). InnerVolumeSpecName "kube-api-access-zd722". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:23:57.681105 kubelet[2597]: I0413 19:23:57.681091 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:23:57.683284 kubelet[2597]: I0413 19:23:57.683253 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e52f299-9aba-4f98-af53-f9e67c8b6260-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e52f299-9aba-4f98-af53-f9e67c8b6260" (UID: "1e52f299-9aba-4f98-af53-f9e67c8b6260"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:23:57.683402 kubelet[2597]: I0413 19:23:57.683215 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817c6f1a-8085-43ee-a4c1-644ba116cd53-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:23:57.684119 kubelet[2597]: I0413 19:23:57.684088 2597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-kube-api-access-4l6cg" (OuterVolumeSpecName: "kube-api-access-4l6cg") pod "817c6f1a-8085-43ee-a4c1-644ba116cd53" (UID: "817c6f1a-8085-43ee-a4c1-644ba116cd53"). InnerVolumeSpecName "kube-api-access-4l6cg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:23:57.775479 kubelet[2597]: I0413 19:23:57.775402 2597 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-bpf-maps\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775479 kubelet[2597]: I0413 19:23:57.775456 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-run\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775479 kubelet[2597]: I0413 19:23:57.775481 2597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-net\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775500 2597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-host-proc-sys-kernel\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775519 2597 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-xtables-lock\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775535 2597 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-hubble-tls\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775551 2597 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/817c6f1a-8085-43ee-a4c1-644ba116cd53-clustermesh-secrets\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775570 2597 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-etc-cni-netd\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775588 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-cgroup\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775605 2597 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-lib-modules\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.775762 kubelet[2597]: I0413 19:23:57.775623 2597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l6cg\" (UniqueName: 
\"kubernetes.io/projected/817c6f1a-8085-43ee-a4c1-644ba116cd53-kube-api-access-4l6cg\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.776182 kubelet[2597]: I0413 19:23:57.775640 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e52f299-9aba-4f98-af53-f9e67c8b6260-cilium-config-path\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.776182 kubelet[2597]: I0413 19:23:57.775656 2597 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/817c6f1a-8085-43ee-a4c1-644ba116cd53-hostproc\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.776182 kubelet[2597]: I0413 19:23:57.775673 2597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/817c6f1a-8085-43ee-a4c1-644ba116cd53-cilium-config-path\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:57.776182 kubelet[2597]: I0413 19:23:57.775689 2597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zd722\" (UniqueName: \"kubernetes.io/projected/1e52f299-9aba-4f98-af53-f9e67c8b6260-kube-api-access-zd722\") on node \"ci-4081-3-7-3-c59e9f41ff\" DevicePath \"\"" Apr 13 19:23:58.000605 kubelet[2597]: I0413 19:23:58.000471 2597 scope.go:117] "RemoveContainer" containerID="d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b" Apr 13 19:23:58.006687 containerd[1489]: time="2026-04-13T19:23:58.006186648Z" level=info msg="RemoveContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\"" Apr 13 19:23:58.012570 systemd[1]: Removed slice kubepods-besteffort-pod1e52f299_9aba_4f98_af53_f9e67c8b6260.slice - libcontainer container kubepods-besteffort-pod1e52f299_9aba_4f98_af53_f9e67c8b6260.slice. 
Apr 13 19:23:58.017013 containerd[1489]: time="2026-04-13T19:23:58.016323933Z" level=info msg="RemoveContainer for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" returns successfully" Apr 13 19:23:58.018014 kubelet[2597]: I0413 19:23:58.017858 2597 scope.go:117] "RemoveContainer" containerID="d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b" Apr 13 19:23:58.021998 systemd[1]: Removed slice kubepods-burstable-pod817c6f1a_8085_43ee_a4c1_644ba116cd53.slice - libcontainer container kubepods-burstable-pod817c6f1a_8085_43ee_a4c1_644ba116cd53.slice. Apr 13 19:23:58.022091 systemd[1]: kubepods-burstable-pod817c6f1a_8085_43ee_a4c1_644ba116cd53.slice: Consumed 7.355s CPU time. Apr 13 19:23:58.023285 containerd[1489]: time="2026-04-13T19:23:58.022781900Z" level=error msg="ContainerStatus for \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\": not found" Apr 13 19:23:58.024432 kubelet[2597]: E0413 19:23:58.023059 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\": not found" containerID="d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b" Apr 13 19:23:58.024432 kubelet[2597]: I0413 19:23:58.023091 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b"} err="failed to get container status \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7d5d3bb74518a8bc4c45c37e4d823cb49503823fc9b42f67fd6eab459b19f9b\": not found" Apr 13 19:23:58.024432 kubelet[2597]: I0413 19:23:58.023278 
2597 scope.go:117] "RemoveContainer" containerID="b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157" Apr 13 19:23:58.027405 containerd[1489]: time="2026-04-13T19:23:58.027316568Z" level=info msg="RemoveContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\"" Apr 13 19:23:58.032096 containerd[1489]: time="2026-04-13T19:23:58.032053194Z" level=info msg="RemoveContainer for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" returns successfully" Apr 13 19:23:58.033336 kubelet[2597]: I0413 19:23:58.032829 2597 scope.go:117] "RemoveContainer" containerID="2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0" Apr 13 19:23:58.036348 containerd[1489]: time="2026-04-13T19:23:58.035870191Z" level=info msg="RemoveContainer for \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\"" Apr 13 19:23:58.041480 containerd[1489]: time="2026-04-13T19:23:58.039940865Z" level=info msg="RemoveContainer for \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\" returns successfully" Apr 13 19:23:58.041608 kubelet[2597]: I0413 19:23:58.040198 2597 scope.go:117] "RemoveContainer" containerID="2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a" Apr 13 19:23:58.043403 containerd[1489]: time="2026-04-13T19:23:58.043132508Z" level=info msg="RemoveContainer for \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\"" Apr 13 19:23:58.046960 containerd[1489]: time="2026-04-13T19:23:58.046589629Z" level=info msg="RemoveContainer for \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\" returns successfully" Apr 13 19:23:58.047089 kubelet[2597]: I0413 19:23:58.046788 2597 scope.go:117] "RemoveContainer" containerID="f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9" Apr 13 19:23:58.049672 containerd[1489]: time="2026-04-13T19:23:58.049642314Z" level=info msg="RemoveContainer for 
\"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\"" Apr 13 19:23:58.053278 containerd[1489]: time="2026-04-13T19:23:58.053203194Z" level=info msg="RemoveContainer for \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\" returns successfully" Apr 13 19:23:58.053723 kubelet[2597]: I0413 19:23:58.053592 2597 scope.go:117] "RemoveContainer" containerID="377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0" Apr 13 19:23:58.054919 containerd[1489]: time="2026-04-13T19:23:58.054852135Z" level=info msg="RemoveContainer for \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\"" Apr 13 19:23:58.060780 containerd[1489]: time="2026-04-13T19:23:58.060694669Z" level=info msg="RemoveContainer for \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\" returns successfully" Apr 13 19:23:58.062924 kubelet[2597]: I0413 19:23:58.062638 2597 scope.go:117] "RemoveContainer" containerID="b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157" Apr 13 19:23:58.063114 containerd[1489]: time="2026-04-13T19:23:58.063060962Z" level=error msg="ContainerStatus for \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\": not found" Apr 13 19:23:58.063545 kubelet[2597]: E0413 19:23:58.063514 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\": not found" containerID="b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157" Apr 13 19:23:58.063590 kubelet[2597]: I0413 19:23:58.063560 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157"} err="failed to get 
container status \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\": rpc error: code = NotFound desc = an error occurred when try to find container \"b655d017029199fe7103c6de35fdb693f470df1905b954cc272123529dd90157\": not found" Apr 13 19:23:58.063617 kubelet[2597]: I0413 19:23:58.063591 2597 scope.go:117] "RemoveContainer" containerID="2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0" Apr 13 19:23:58.064058 containerd[1489]: time="2026-04-13T19:23:58.063999911Z" level=error msg="ContainerStatus for \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\": not found" Apr 13 19:23:58.064430 kubelet[2597]: E0413 19:23:58.064392 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\": not found" containerID="2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0" Apr 13 19:23:58.064494 kubelet[2597]: I0413 19:23:58.064440 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0"} err="failed to get container status \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f83f2fd3fd2e85dc8d4a66306037bb6a91b21248e31b81c31eee206093087d0\": not found" Apr 13 19:23:58.064494 kubelet[2597]: I0413 19:23:58.064467 2597 scope.go:117] "RemoveContainer" containerID="2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a" Apr 13 19:23:58.064766 containerd[1489]: time="2026-04-13T19:23:58.064689823Z" level=error msg="ContainerStatus for 
\"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\": not found" Apr 13 19:23:58.065329 kubelet[2597]: E0413 19:23:58.064853 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\": not found" containerID="2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a" Apr 13 19:23:58.065329 kubelet[2597]: I0413 19:23:58.064882 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a"} err="failed to get container status \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dd19bcde9dd4a4e8d4c41cfc9bd354b5b744ae68326f03706353e3d9902699a\": not found" Apr 13 19:23:58.065329 kubelet[2597]: I0413 19:23:58.064954 2597 scope.go:117] "RemoveContainer" containerID="f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9" Apr 13 19:23:58.068077 containerd[1489]: time="2026-04-13T19:23:58.068024425Z" level=error msg="ContainerStatus for \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\": not found" Apr 13 19:23:58.068634 kubelet[2597]: E0413 19:23:58.068605 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\": not found" 
containerID="f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9" Apr 13 19:23:58.068779 kubelet[2597]: I0413 19:23:58.068646 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9"} err="failed to get container status \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8c10f202cc230eda380bab2bffc817187b9681ed9220c5b655a9fc580d405f9\": not found" Apr 13 19:23:58.068825 kubelet[2597]: I0413 19:23:58.068786 2597 scope.go:117] "RemoveContainer" containerID="377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0" Apr 13 19:23:58.069097 containerd[1489]: time="2026-04-13T19:23:58.069061013Z" level=error msg="ContainerStatus for \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\": not found" Apr 13 19:23:58.069215 kubelet[2597]: E0413 19:23:58.069193 2597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\": not found" containerID="377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0" Apr 13 19:23:58.069269 kubelet[2597]: I0413 19:23:58.069222 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0"} err="failed to get container status \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"377306ca08534591fa702c4f8fb55e722b50ee3cba074ab6bcead18ed32448d0\": not found" Apr 13 
19:23:58.336026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62-rootfs.mount: Deactivated successfully. Apr 13 19:23:58.336393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2-rootfs.mount: Deactivated successfully. Apr 13 19:23:58.336601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2-shm.mount: Deactivated successfully. Apr 13 19:23:58.336800 systemd[1]: var-lib-kubelet-pods-1e52f299\x2d9aba\x2d4f98\x2daf53\x2df9e67c8b6260-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzd722.mount: Deactivated successfully. Apr 13 19:23:58.337114 systemd[1]: var-lib-kubelet-pods-817c6f1a\x2d8085\x2d43ee\x2da4c1\x2d644ba116cd53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4l6cg.mount: Deactivated successfully. Apr 13 19:23:58.337322 systemd[1]: var-lib-kubelet-pods-817c6f1a\x2d8085\x2d43ee\x2da4c1\x2d644ba116cd53-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 13 19:23:58.337507 systemd[1]: var-lib-kubelet-pods-817c6f1a\x2d8085\x2d43ee\x2da4c1\x2d644ba116cd53-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 13 19:23:58.454855 kubelet[2597]: I0413 19:23:58.454807 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e52f299-9aba-4f98-af53-f9e67c8b6260" path="/var/lib/kubelet/pods/1e52f299-9aba-4f98-af53-f9e67c8b6260/volumes" Apr 13 19:23:58.455459 kubelet[2597]: I0413 19:23:58.455434 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817c6f1a-8085-43ee-a4c1-644ba116cd53" path="/var/lib/kubelet/pods/817c6f1a-8085-43ee-a4c1-644ba116cd53/volumes" Apr 13 19:23:59.263887 sshd[4191]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:59.270979 systemd[1]: sshd@21-178.105.7.28:22-50.85.169.122:43638.service: Deactivated successfully. Apr 13 19:23:59.273205 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:23:59.273384 systemd[1]: session-22.scope: Consumed 1.281s CPU time. Apr 13 19:23:59.277046 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:23:59.289416 systemd-logind[1463]: Removed session 22. Apr 13 19:23:59.293318 systemd[1]: Started sshd@22-178.105.7.28:22-50.85.169.122:43640.service - OpenSSH per-connection server daemon (50.85.169.122:43640). Apr 13 19:23:59.416620 sshd[4353]: Accepted publickey for core from 50.85.169.122 port 43640 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:59.419169 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:59.425606 systemd-logind[1463]: New session 23 of user core. Apr 13 19:23:59.430180 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:24:00.900155 sshd[4353]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:00.908702 systemd[1]: sshd@22-178.105.7.28:22-50.85.169.122:43640.service: Deactivated successfully. Apr 13 19:24:00.909755 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:24:00.912138 systemd[1]: session-23.scope: Deactivated successfully. 
Apr 13 19:24:00.915829 systemd[1]: session-23.scope: Consumed 1.278s CPU time. Apr 13 19:24:00.927248 systemd-logind[1463]: Removed session 23. Apr 13 19:24:00.939230 systemd[1]: Started sshd@23-178.105.7.28:22-50.85.169.122:38816.service - OpenSSH per-connection server daemon (50.85.169.122:38816). Apr 13 19:24:00.953249 systemd[1]: Created slice kubepods-burstable-podb428f0a7_c3f0_43d4_b9a5_34c56a098220.slice - libcontainer container kubepods-burstable-podb428f0a7_c3f0_43d4_b9a5_34c56a098220.slice. Apr 13 19:24:01.063805 sshd[4365]: Accepted publickey for core from 50.85.169.122 port 38816 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:24:01.065484 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:01.075019 systemd-logind[1463]: New session 24 of user core. Apr 13 19:24:01.081162 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:24:01.098575 kubelet[2597]: I0413 19:24:01.098475 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-cni-path\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.098575 kubelet[2597]: I0413 19:24:01.098574 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-etc-cni-netd\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098684 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-lib-modules\") pod \"cilium-r8lbd\" (UID: 
\"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098725 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-xtables-lock\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098758 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-bpf-maps\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098791 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b428f0a7-c3f0-43d4-b9a5-34c56a098220-clustermesh-secrets\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098826 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b428f0a7-c3f0-43d4-b9a5-34c56a098220-cilium-ipsec-secrets\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.099784 kubelet[2597]: I0413 19:24:01.098887 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-host-proc-sys-net\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 
19:24:01.098952 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-host-proc-sys-kernel\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 19:24:01.099014 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b428f0a7-c3f0-43d4-b9a5-34c56a098220-cilium-config-path\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 19:24:01.099047 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgf9p\" (UniqueName: \"kubernetes.io/projected/b428f0a7-c3f0-43d4-b9a5-34c56a098220-kube-api-access-hgf9p\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 19:24:01.099082 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-cilium-run\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 19:24:01.099111 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b428f0a7-c3f0-43d4-b9a5-34c56a098220-hubble-tls\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100417 kubelet[2597]: I0413 19:24:01.099143 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-hostproc\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.100781 kubelet[2597]: I0413 19:24:01.099173 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b428f0a7-c3f0-43d4-b9a5-34c56a098220-cilium-cgroup\") pod \"cilium-r8lbd\" (UID: \"b428f0a7-c3f0-43d4-b9a5-34c56a098220\") " pod="kube-system/cilium-r8lbd" Apr 13 19:24:01.181244 sshd[4365]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:01.185457 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Apr 13 19:24:01.186669 systemd[1]: sshd@23-178.105.7.28:22-50.85.169.122:38816.service: Deactivated successfully. Apr 13 19:24:01.191082 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:24:01.202571 systemd-logind[1463]: Removed session 24. Apr 13 19:24:01.213163 systemd[1]: Started sshd@24-178.105.7.28:22-50.85.169.122:38824.service - OpenSSH per-connection server daemon (50.85.169.122:38824). Apr 13 19:24:01.275495 containerd[1489]: time="2026-04-13T19:24:01.275438165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8lbd,Uid:b428f0a7-c3f0-43d4-b9a5-34c56a098220,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:01.305217 containerd[1489]: time="2026-04-13T19:24:01.303529038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:01.305217 containerd[1489]: time="2026-04-13T19:24:01.303608917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:01.305217 containerd[1489]: time="2026-04-13T19:24:01.303630157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:01.305217 containerd[1489]: time="2026-04-13T19:24:01.303744756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:01.326141 systemd[1]: Started cri-containerd-ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a.scope - libcontainer container ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a. Apr 13 19:24:01.350796 containerd[1489]: time="2026-04-13T19:24:01.350749756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8lbd,Uid:b428f0a7-c3f0-43d4-b9a5-34c56a098220,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\"" Apr 13 19:24:01.352930 sshd[4374]: Accepted publickey for core from 50.85.169.122 port 38824 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:24:01.354282 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:01.360072 containerd[1489]: time="2026-04-13T19:24:01.359982822Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:24:01.364416 systemd-logind[1463]: New session 25 of user core. Apr 13 19:24:01.369204 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 13 19:24:01.382944 containerd[1489]: time="2026-04-13T19:24:01.381697960Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011\"" Apr 13 19:24:01.383683 containerd[1489]: time="2026-04-13T19:24:01.383633180Z" level=info msg="StartContainer for \"3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011\"" Apr 13 19:24:01.418152 systemd[1]: Started cri-containerd-3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011.scope - libcontainer container 3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011. Apr 13 19:24:01.449480 containerd[1489]: time="2026-04-13T19:24:01.449354749Z" level=info msg="StartContainer for \"3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011\" returns successfully" Apr 13 19:24:01.461752 systemd[1]: cri-containerd-3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011.scope: Deactivated successfully. 
Apr 13 19:24:01.529616 containerd[1489]: time="2026-04-13T19:24:01.529543410Z" level=info msg="shim disconnected" id=3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011 namespace=k8s.io Apr 13 19:24:01.529616 containerd[1489]: time="2026-04-13T19:24:01.529614049Z" level=warning msg="cleaning up after shim disconnected" id=3041439507129f0bee24439aedde72bcebab98834865cb82810c1bbbb84ba011 namespace=k8s.io Apr 13 19:24:01.532052 containerd[1489]: time="2026-04-13T19:24:01.529629849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:02.033942 containerd[1489]: time="2026-04-13T19:24:02.033768674Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:24:02.047765 containerd[1489]: time="2026-04-13T19:24:02.047465619Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd\"" Apr 13 19:24:02.048944 containerd[1489]: time="2026-04-13T19:24:02.048338651Z" level=info msg="StartContainer for \"bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd\"" Apr 13 19:24:02.082148 systemd[1]: Started cri-containerd-bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd.scope - libcontainer container bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd. Apr 13 19:24:02.114939 containerd[1489]: time="2026-04-13T19:24:02.114274282Z" level=info msg="StartContainer for \"bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd\" returns successfully" Apr 13 19:24:02.122636 systemd[1]: cri-containerd-bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd.scope: Deactivated successfully. 
Apr 13 19:24:02.147606 containerd[1489]: time="2026-04-13T19:24:02.147372397Z" level=info msg="shim disconnected" id=bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd namespace=k8s.io Apr 13 19:24:02.147606 containerd[1489]: time="2026-04-13T19:24:02.147436036Z" level=warning msg="cleaning up after shim disconnected" id=bfc3e3e6cfe35fe71d12dcf69748c0987aa46a60535d034773433927f4c60ecd namespace=k8s.io Apr 13 19:24:02.147606 containerd[1489]: time="2026-04-13T19:24:02.147447596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:02.447420 containerd[1489]: time="2026-04-13T19:24:02.447374206Z" level=info msg="StopPodSandbox for \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\"" Apr 13 19:24:02.448347 containerd[1489]: time="2026-04-13T19:24:02.447481085Z" level=info msg="TearDown network for sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" successfully" Apr 13 19:24:02.448347 containerd[1489]: time="2026-04-13T19:24:02.447494884Z" level=info msg="StopPodSandbox for \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" returns successfully" Apr 13 19:24:02.448347 containerd[1489]: time="2026-04-13T19:24:02.447968480Z" level=info msg="RemovePodSandbox for \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\"" Apr 13 19:24:02.448347 containerd[1489]: time="2026-04-13T19:24:02.447999560Z" level=info msg="Forcibly stopping sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\"" Apr 13 19:24:02.448347 containerd[1489]: time="2026-04-13T19:24:02.448051439Z" level=info msg="TearDown network for sandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" successfully" Apr 13 19:24:02.452433 containerd[1489]: time="2026-04-13T19:24:02.452272357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\": an error occurred when try to 
find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:24:02.452433 containerd[1489]: time="2026-04-13T19:24:02.452339117Z" level=info msg="RemovePodSandbox \"186803adb2283c2068816333d3ddd8c5e083150863eebe090061f4da85a447b2\" returns successfully" Apr 13 19:24:02.453288 containerd[1489]: time="2026-04-13T19:24:02.452978271Z" level=info msg="StopPodSandbox for \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\"" Apr 13 19:24:02.453288 containerd[1489]: time="2026-04-13T19:24:02.453065390Z" level=info msg="TearDown network for sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" successfully" Apr 13 19:24:02.453288 containerd[1489]: time="2026-04-13T19:24:02.453075910Z" level=info msg="StopPodSandbox for \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" returns successfully" Apr 13 19:24:02.453458 containerd[1489]: time="2026-04-13T19:24:02.453390066Z" level=info msg="RemovePodSandbox for \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\"" Apr 13 19:24:02.453458 containerd[1489]: time="2026-04-13T19:24:02.453415986Z" level=info msg="Forcibly stopping sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\"" Apr 13 19:24:02.453503 containerd[1489]: time="2026-04-13T19:24:02.453471866Z" level=info msg="TearDown network for sandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" successfully" Apr 13 19:24:02.457040 containerd[1489]: time="2026-04-13T19:24:02.456993311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:24:02.457040 containerd[1489]: time="2026-04-13T19:24:02.457059470Z" level=info msg="RemovePodSandbox \"383fb5dce24c2d3c72454a3a66b84938185a54fb83586f53f63e0f4957553d62\" returns successfully"
Apr 13 19:24:02.570240 kubelet[2597]: E0413 19:24:02.570129 2597 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 19:24:02.868137 update_engine[1465]: I20260413 19:24:02.868031 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 13 19:24:02.868137 update_engine[1465]: I20260413 19:24:02.868113 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 13 19:24:02.868949 update_engine[1465]: I20260413 19:24:02.868435 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 13 19:24:02.869725 update_engine[1465]: I20260413 19:24:02.869650 1465 omaha_request_params.cc:62] Current group set to lts
Apr 13 19:24:02.869893 update_engine[1465]: I20260413 19:24:02.869810 1465 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 13 19:24:02.869893 update_engine[1465]: I20260413 19:24:02.869850 1465 update_attempter.cc:643] Scheduling an action processor start.
Apr 13 19:24:02.869893 update_engine[1465]: I20260413 19:24:02.869882 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 19:24:02.870400 update_engine[1465]: I20260413 19:24:02.870101 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 13 19:24:02.870400 update_engine[1465]: I20260413 19:24:02.870265 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 19:24:02.870400 update_engine[1465]: I20260413 19:24:02.870289 1465 omaha_request_action.cc:272] Request:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]:
Apr 13 19:24:02.870400 update_engine[1465]: I20260413 19:24:02.870302 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 19:24:02.870874 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 13 19:24:02.872832 update_engine[1465]: I20260413 19:24:02.872762 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 19:24:02.873409 update_engine[1465]: I20260413 19:24:02.873353 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 19:24:02.874607 update_engine[1465]: E20260413 19:24:02.874550 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 19:24:02.874696 update_engine[1465]: I20260413 19:24:02.874668 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 13 19:24:03.039891 containerd[1489]: time="2026-04-13T19:24:03.039776913Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 19:24:03.059281 containerd[1489]: time="2026-04-13T19:24:03.059232089Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a\""
Apr 13 19:24:03.060266 containerd[1489]: time="2026-04-13T19:24:03.060224719Z" level=info msg="StartContainer for \"40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a\""
Apr 13 19:24:03.101137 systemd[1]: Started cri-containerd-40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a.scope - libcontainer container 40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a.
Apr 13 19:24:03.134445 containerd[1489]: time="2026-04-13T19:24:03.134180219Z" level=info msg="StartContainer for \"40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a\" returns successfully"
Apr 13 19:24:03.138249 systemd[1]: cri-containerd-40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a.scope: Deactivated successfully.
Apr 13 19:24:03.167938 containerd[1489]: time="2026-04-13T19:24:03.167664382Z" level=info msg="shim disconnected" id=40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a namespace=k8s.io
Apr 13 19:24:03.167938 containerd[1489]: time="2026-04-13T19:24:03.167741021Z" level=warning msg="cleaning up after shim disconnected" id=40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a namespace=k8s.io
Apr 13 19:24:03.167938 containerd[1489]: time="2026-04-13T19:24:03.167760261Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:03.181078 containerd[1489]: time="2026-04-13T19:24:03.181008256Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:24:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:24:03.215970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40569db04fdd49ea407b15150a336c3a4be26b55cbbda777abd3b1faee24cb6a-rootfs.mount: Deactivated successfully.
Apr 13 19:24:04.054503 containerd[1489]: time="2026-04-13T19:24:04.054325687Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 19:24:04.073023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889731621.mount: Deactivated successfully.
Apr 13 19:24:04.077733 containerd[1489]: time="2026-04-13T19:24:04.077585635Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f\""
Apr 13 19:24:04.079428 containerd[1489]: time="2026-04-13T19:24:04.078556746Z" level=info msg="StartContainer for \"a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f\""
Apr 13 19:24:04.114145 systemd[1]: Started cri-containerd-a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f.scope - libcontainer container a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f.
Apr 13 19:24:04.141391 systemd[1]: cri-containerd-a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f.scope: Deactivated successfully.
Apr 13 19:24:04.145572 containerd[1489]: time="2026-04-13T19:24:04.144973182Z" level=info msg="StartContainer for \"a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f\" returns successfully"
Apr 13 19:24:04.168173 containerd[1489]: time="2026-04-13T19:24:04.168099091Z" level=info msg="shim disconnected" id=a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f namespace=k8s.io
Apr 13 19:24:04.168173 containerd[1489]: time="2026-04-13T19:24:04.168161891Z" level=warning msg="cleaning up after shim disconnected" id=a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f namespace=k8s.io
Apr 13 19:24:04.168173 containerd[1489]: time="2026-04-13T19:24:04.168173210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:04.216513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a84b5eea31325436342bdda8d3351d00562e6d346f506a4d8eb7c6761eda687f-rootfs.mount: Deactivated successfully.
Apr 13 19:24:05.055227 containerd[1489]: time="2026-04-13T19:24:05.055125075Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 19:24:05.075720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065266739.mount: Deactivated successfully.
Apr 13 19:24:05.078035 containerd[1489]: time="2026-04-13T19:24:05.075959332Z" level=info msg="CreateContainer within sandbox \"ae5e9328723adae0ef07b07af1df9478244cdb299d5fd0c1a311495d55eb1f0a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756\""
Apr 13 19:24:05.078146 containerd[1489]: time="2026-04-13T19:24:05.078111514Z" level=info msg="StartContainer for \"c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756\""
Apr 13 19:24:05.112281 systemd[1]: Started cri-containerd-c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756.scope - libcontainer container c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756.
Apr 13 19:24:05.147375 containerd[1489]: time="2026-04-13T19:24:05.147258149Z" level=info msg="StartContainer for \"c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756\" returns successfully"
Apr 13 19:24:05.344935 kubelet[2597]: I0413 19:24:05.343524 2597 setters.go:543] "Node became not ready" node="ci-4081-3-7-3-c59e9f41ff" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T19:24:05Z","lastTransitionTime":"2026-04-13T19:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 13 19:24:05.501942 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 13 19:24:06.075553 kubelet[2597]: I0413 19:24:06.075044 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r8lbd" podStartSLOduration=6.07502174 podStartE2EDuration="6.07502174s" podCreationTimestamp="2026-04-13 19:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:06.074833181 +0000 UTC m=+183.754621760" watchObservedRunningTime="2026-04-13 19:24:06.07502174 +0000 UTC m=+183.754810359"
Apr 13 19:24:08.604981 systemd-networkd[1373]: lxc_health: Link UP
Apr 13 19:24:08.617335 systemd-networkd[1373]: lxc_health: Gained carrier
Apr 13 19:24:09.907510 systemd[1]: run-containerd-runc-k8s.io-c5e7e5a317f75bc03f52efa9d6904f2d1b05226528b71e25a958e975d55a5756-runc.6wzIqe.mount: Deactivated successfully.
Apr 13 19:24:10.298364 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Apr 13 19:24:12.871705 update_engine[1465]: I20260413 19:24:12.870990 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 19:24:12.871705 update_engine[1465]: I20260413 19:24:12.871333 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 19:24:12.871705 update_engine[1465]: I20260413 19:24:12.871579 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 19:24:12.872919 update_engine[1465]: E20260413 19:24:12.872857 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 19:24:12.873095 update_engine[1465]: I20260413 19:24:12.873065 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 13 19:24:14.327306 sshd[4374]: pam_unix(sshd:session): session closed for user core
Apr 13 19:24:14.332816 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit.
Apr 13 19:24:14.334093 systemd[1]: sshd@24-178.105.7.28:22-50.85.169.122:38824.service: Deactivated successfully.
Apr 13 19:24:14.336996 systemd[1]: session-25.scope: Deactivated successfully.
Apr 13 19:24:14.340048 systemd-logind[1463]: Removed session 25.
Apr 13 19:24:22.871860 update_engine[1465]: I20260413 19:24:22.871253 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 19:24:22.871860 update_engine[1465]: I20260413 19:24:22.871681 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 19:24:22.872976 update_engine[1465]: I20260413 19:24:22.872874 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 19:24:22.873981 update_engine[1465]: E20260413 19:24:22.873790 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 19:24:22.873981 update_engine[1465]: I20260413 19:24:22.873927 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 13 19:24:32.877000 update_engine[1465]: I20260413 19:24:32.876099 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 19:24:32.877000 update_engine[1465]: I20260413 19:24:32.876417 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 19:24:32.877000 update_engine[1465]: I20260413 19:24:32.876852 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 19:24:32.878222 update_engine[1465]: E20260413 19:24:32.878170 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878383 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878407 1465 omaha_request_action.cc:617] Omaha request response:
Apr 13 19:24:32.879205 update_engine[1465]: E20260413 19:24:32.878497 1465 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878518 1465 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878526 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878535 1465 update_attempter.cc:306] Processing Done.
Apr 13 19:24:32.879205 update_engine[1465]: E20260413 19:24:32.878554 1465 update_attempter.cc:619] Update failed.
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878563 1465 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878575 1465 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878583 1465 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878742 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878784 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 19:24:32.879205 update_engine[1465]: I20260413 19:24:32.878798 1465 omaha_request_action.cc:272] Request:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879205 update_engine[1465]:
Apr 13 19:24:32.879706 update_engine[1465]: I20260413 19:24:32.878808 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 19:24:32.879706 update_engine[1465]: I20260413 19:24:32.879012 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 19:24:32.879706 update_engine[1465]: I20260413 19:24:32.879169 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 19:24:32.880183 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 13 19:24:32.880828 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 13 19:24:32.880857 update_engine[1465]: E20260413 19:24:32.880344 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880389 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880398 1465 omaha_request_action.cc:617] Omaha request response:
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880405 1465 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880410 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880414 1465 update_attempter.cc:306] Processing Done.
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880420 1465 update_attempter.cc:310] Error event sent.
Apr 13 19:24:32.880857 update_engine[1465]: I20260413 19:24:32.880429 1465 update_check_scheduler.cc:74] Next update check in 44m46s
Apr 13 19:24:45.523552 systemd[1]: cri-containerd-2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0.scope: Deactivated successfully.
Apr 13 19:24:45.525110 systemd[1]: cri-containerd-2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0.scope: Consumed 5.364s CPU time, 20.9M memory peak, 0B memory swap peak.
Apr 13 19:24:45.549345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0-rootfs.mount: Deactivated successfully.
Apr 13 19:24:45.554716 containerd[1489]: time="2026-04-13T19:24:45.554642357Z" level=info msg="shim disconnected" id=2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0 namespace=k8s.io
Apr 13 19:24:45.555498 containerd[1489]: time="2026-04-13T19:24:45.555266598Z" level=warning msg="cleaning up after shim disconnected" id=2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0 namespace=k8s.io
Apr 13 19:24:45.555498 containerd[1489]: time="2026-04-13T19:24:45.555296438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:45.948288 kubelet[2597]: E0413 19:24:45.948236 2597 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55040->10.0.0.2:2379: read: connection timed out"
Apr 13 19:24:46.165503 kubelet[2597]: I0413 19:24:46.164703 2597 scope.go:117] "RemoveContainer" containerID="2c449f0136a0e2e3572c02dce0b4e39644a0ee7b353e2ec031517f5472a62ed0"
Apr 13 19:24:46.167127 containerd[1489]: time="2026-04-13T19:24:46.167084755Z" level=info msg="CreateContainer within sandbox \"c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 19:24:46.181494 containerd[1489]: time="2026-04-13T19:24:46.181446423Z" level=info msg="CreateContainer within sandbox \"c64142079a80167369e7619c91bed7bbc087ace3c6547f6c8fa6a7008c3f0989\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dc993c42e8ff72203304ae2eb90206b5c3429e9f1ec5b38140317600f6f9a351\""
Apr 13 19:24:46.183469 containerd[1489]: time="2026-04-13T19:24:46.182177785Z" level=info msg="StartContainer for \"dc993c42e8ff72203304ae2eb90206b5c3429e9f1ec5b38140317600f6f9a351\""
Apr 13 19:24:46.226303 systemd[1]: Started cri-containerd-dc993c42e8ff72203304ae2eb90206b5c3429e9f1ec5b38140317600f6f9a351.scope - libcontainer container dc993c42e8ff72203304ae2eb90206b5c3429e9f1ec5b38140317600f6f9a351.
Apr 13 19:24:46.269251 containerd[1489]: time="2026-04-13T19:24:46.269061115Z" level=info msg="StartContainer for \"dc993c42e8ff72203304ae2eb90206b5c3429e9f1ec5b38140317600f6f9a351\" returns successfully"
Apr 13 19:24:49.660256 kubelet[2597]: E0413 19:24:49.659884 2597 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54698->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-3-c59e9f41ff.18a60111d9abaf18 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-3-c59e9f41ff,UID:3dead6ea7c422540a5232601737323fa,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-3-c59e9f41ff,},FirstTimestamp:2026-04-13 19:24:39.22221852 +0000 UTC m=+216.902007139,LastTimestamp:2026-04-13 19:24:39.22221852 +0000 UTC m=+216.902007139,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-3-c59e9f41ff,}"