Apr 13 19:19:46.880088 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:19:46.880122 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:19:46.880135 kernel: KASLR enabled
Apr 13 19:19:46.880141 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:19:46.880147 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:19:46.880153 kernel: random: crng init done
Apr 13 19:19:46.880160 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:19:46.880167 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:19:46.880173 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:19:46.880181 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880188 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880194 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880200 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880207 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880215 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880223 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880230 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880236 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:19:46.880243 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:19:46.880250 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:19:46.880256 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:19:46.880263 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:46.880270 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 13 19:19:46.880276 kernel: Zone ranges:
Apr 13 19:19:46.880283 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:19:46.880291 kernel: DMA32 empty
Apr 13 19:19:46.880297 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:19:46.880304 kernel: Movable zone start for each node
Apr 13 19:19:46.880311 kernel: Early memory node ranges
Apr 13 19:19:46.880318 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:19:46.880325 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:19:46.880332 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:19:46.880338 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:19:46.880345 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:19:46.880351 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:19:46.880358 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:19:46.880365 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:19:46.880373 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:19:46.880380 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:19:46.880386 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:19:46.880396 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:19:46.880403 kernel: psci: Trusted OS migration not required
Apr 13 19:19:46.880410 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:19:46.880420 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:19:46.880427 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:19:46.880434 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:19:46.880441 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:19:46.880448 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:19:46.880455 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:19:46.880462 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:19:46.880469 kernel: CPU features: detected: Spectre-v4
Apr 13 19:19:46.880476 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:19:46.880483 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:19:46.880492 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:19:46.880499 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:19:46.880507 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:19:46.880514 kernel: alternatives: applying boot alternatives
Apr 13 19:19:46.880522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:19:46.880530 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:19:46.880537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:19:46.880544 kernel: Fallback order for Node 0: 0
Apr 13 19:19:46.880551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:19:46.880558 kernel: Policy zone: Normal
Apr 13 19:19:46.880565 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:19:46.880598 kernel: software IO TLB: area num 2.
Apr 13 19:19:46.880638 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:19:46.880648 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 13 19:19:46.880655 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:19:46.880672 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:19:46.880680 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:19:46.880687 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:19:46.880694 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:19:46.880701 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:19:46.880708 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:19:46.880715 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:19:46.880722 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:19:46.880731 kernel: GICv3: 256 SPIs implemented
Apr 13 19:19:46.880738 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:19:46.880745 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:19:46.880752 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:19:46.880759 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:19:46.880766 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:19:46.880773 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:19:46.880780 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:19:46.880787 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:19:46.880794 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:19:46.880801 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:19:46.880810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:46.880817 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:19:46.880824 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:19:46.880831 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:19:46.880838 kernel: Console: colour dummy device 80x25
Apr 13 19:19:46.880845 kernel: ACPI: Core revision 20230628
Apr 13 19:19:46.880852 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:19:46.880859 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:19:46.880866 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:19:46.880874 kernel: landlock: Up and running.
Apr 13 19:19:46.880882 kernel: SELinux: Initializing.
Apr 13 19:19:46.880889 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:46.880896 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:19:46.880904 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:46.880911 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:19:46.880918 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:19:46.880926 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:19:46.880933 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:19:46.880942 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:19:46.880951 kernel: Remapping and enabling EFI services.
Apr 13 19:19:46.880959 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:19:46.880966 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:19:46.880973 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:19:46.880980 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:19:46.880988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:19:46.880995 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:19:46.881001 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:19:46.881008 kernel: SMP: Total of 2 processors activated.
Apr 13 19:19:46.881017 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:19:46.881024 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:19:46.881032 kernel: CPU features: detected: Common not Private translations
Apr 13 19:19:46.881045 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:19:46.881054 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:19:46.881061 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:19:46.881068 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:19:46.881076 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:19:46.881083 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:19:46.881092 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:19:46.881100 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:19:46.881108 kernel: alternatives: applying system-wide alternatives
Apr 13 19:19:46.881122 kernel: devtmpfs: initialized
Apr 13 19:19:46.881130 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:19:46.881138 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:19:46.881145 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:19:46.881153 kernel: SMBIOS 3.0.0 present.
Apr 13 19:19:46.881162 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:19:46.881170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:19:46.881177 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:19:46.881185 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:19:46.881193 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:19:46.881200 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:19:46.881208 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Apr 13 19:19:46.881215 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:19:46.881223 kernel: cpuidle: using governor menu
Apr 13 19:19:46.881233 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:19:46.881240 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:19:46.881248 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:19:46.881260 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:19:46.881267 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:19:46.881275 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:19:46.881282 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:19:46.881291 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:19:46.881298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:19:46.881308 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:19:46.881315 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:19:46.881323 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:19:46.881331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:19:46.881338 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:19:46.881346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:19:46.881353 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:19:46.881361 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:19:46.881368 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:19:46.881377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:19:46.881385 kernel: ACPI: Interpreter enabled
Apr 13 19:19:46.881392 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:19:46.881400 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:19:46.881408 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:19:46.881415 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:19:46.881423 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:19:46.881589 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:19:46.881713 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:19:46.881790 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:19:46.881860 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:19:46.881934 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:19:46.881944 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:19:46.881952 kernel: PCI host bridge to bus 0000:00
Apr 13 19:19:46.882034 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:46.882107 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:19:46.882263 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:46.882327 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:19:46.882412 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:19:46.882494 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:19:46.882566 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:19:46.882636 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:46.882734 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.882811 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:19:46.882891 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.882964 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:19:46.883043 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.883135 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:19:46.883237 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.883312 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:19:46.883390 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.883461 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:19:46.883544 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.883618 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:19:46.883714 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.884055 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:19:46.884204 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.884282 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:19:46.884360 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:19:46.884429 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:19:46.884514 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:19:46.884585 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:19:46.884700 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:46.884788 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:19:46.884862 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:46.884933 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:46.885014 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:19:46.885093 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:19:46.885691 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:19:46.885789 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:19:46.885863 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:19:46.885948 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:19:46.886019 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:19:46.886106 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:19:46.886253 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:19:46.886325 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:19:46.886403 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:19:46.886473 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:19:46.886543 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:46.886628 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:19:46.886721 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:19:46.886795 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:19:46.886867 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:19:46.886940 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:19:46.887008 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:46.887078 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:19:46.887188 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:19:46.887268 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:19:46.887339 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:19:46.889266 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:19:46.889357 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:46.889428 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:19:46.889503 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:19:46.889574 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:19:46.889648 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:19:46.889742 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:19:46.889814 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:46.889886 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:19:46.889958 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:19:46.890025 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:46.890093 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:19:46.892283 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:19:46.892369 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:46.892439 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:19:46.892514 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:19:46.892584 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:46.892652 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:19:46.892745 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:19:46.892823 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:46.892894 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:19:46.892969 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:19:46.893039 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:46.893126 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:19:46.893204 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:46.893276 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:19:46.893351 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:46.893423 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:19:46.893492 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:46.893564 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:19:46.893633 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:46.893751 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:46.893825 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:46.893901 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:46.893970 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:46.894038 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:46.894107 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:46.896420 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:19:46.896496 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:46.896569 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:19:46.896644 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:19:46.896740 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:19:46.896813 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:19:46.896886 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:19:46.896955 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:19:46.897028 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:19:46.897121 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:19:46.897214 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:19:46.897306 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:19:46.897385 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:19:46.898416 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:19:46.898506 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:19:46.898578 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:19:46.898655 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:19:46.898777 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:19:46.898854 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:19:46.898932 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:19:46.899005 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:19:46.899075 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:19:46.899212 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:19:46.899326 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:19:46.899408 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:19:46.899480 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:19:46.899554 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:19:46.899629 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:19:46.899717 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:19:46.899788 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:46.899864 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:19:46.899938 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:19:46.900006 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:19:46.900072 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:19:46.900156 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:46.900234 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:19:46.900305 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:19:46.900373 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:19:46.900459 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:19:46.900536 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:19:46.900605 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:46.900694 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:19:46.900766 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:19:46.900833 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:19:46.900899 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:19:46.900967 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:46.901042 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:19:46.902226 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:19:46.902348 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:19:46.902421 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:19:46.902491 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:19:46.902561 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:46.902641 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:19:46.902736 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:19:46.902812 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:19:46.902891 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:19:46.902962 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:46.903030 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:46.903108 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:19:46.904824 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:19:46.904901 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:19:46.904979 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:19:46.905049 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:19:46.905873 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:46.905971 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:46.906043 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:19:46.906178 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:19:46.906264 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:46.906332 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:46.906405 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:19:46.906473 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:19:46.906549 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:19:46.906619 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:46.906704 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:19:46.906768 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:19:46.906832 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:19:46.906910 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:19:46.906975 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:19:46.907049 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:19:46.907142 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:19:46.907210 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:19:46.907274 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:19:46.907345 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:19:46.907409 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:19:46.907478 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:19:46.907550 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:19:46.907615 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:19:46.907739 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:19:46.907824 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:19:46.907891 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:19:46.907956 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:19:46.908035 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:19:46.908101 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:19:46.908184 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:19:46.908259 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:19:46.908333 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:19:46.908398 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:19:46.908470 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:19:46.908535 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:19:46.908598 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:19:46.908681 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:19:46.908750 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:19:46.908820 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:19:46.908831 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:19:46.908839 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:19:46.908848 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:19:46.908856 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:19:46.908864 kernel: iommu: Default domain type: Translated
Apr 13 19:19:46.908872 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:19:46.908881 kernel: efivars: Registered efivars operations
Apr 13 19:19:46.908891 kernel: vgaarb: loaded
Apr 13 19:19:46.908899 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:19:46.908907 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:19:46.908915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:19:46.908923 kernel: pnp: PnP ACPI init
Apr 13 19:19:46.909002 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:19:46.909013 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:19:46.909021 kernel: NET: Registered PF_INET protocol family
Apr 13 19:19:46.909030 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:19:46.909041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:19:46.909050 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:19:46.909058 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:19:46.909066
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:19:46.909075 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:19:46.909083 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:46.909091 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:19:46.909100 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:19:46.909195 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:19:46.909210 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:19:46.909219 kernel: kvm [1]: HYP mode not available Apr 13 19:19:46.909227 kernel: Initialise system trusted keyrings Apr 13 19:19:46.909236 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:19:46.909244 kernel: Key type asymmetric registered Apr 13 19:19:46.909252 kernel: Asymmetric key parser 'x509' registered Apr 13 19:19:46.909260 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:19:46.909269 kernel: io scheduler mq-deadline registered Apr 13 19:19:46.909277 kernel: io scheduler kyber registered Apr 13 19:19:46.909287 kernel: io scheduler bfq registered Apr 13 19:19:46.909296 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:19:46.909371 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:19:46.909444 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:19:46.909515 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.909588 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:19:46.909698 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:19:46.909777 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.909852 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:19:46.909923 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:19:46.909992 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.910066 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:19:46.910176 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:19:46.910252 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.910327 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:19:46.910397 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:19:46.910468 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.910539 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:19:46.910614 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:19:46.910694 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.910769 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:19:46.910840 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:19:46.910909 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.910998 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:19:46.911074 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:19:46.911184 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:19:46.911197 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:19:46.911269 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 13 19:19:46.911341 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 13 19:19:46.911411 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 13 19:19:46.911422 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:19:46.911434 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:19:46.911443 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:19:46.911518 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 13 19:19:46.911595 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 13 19:19:46.911606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:19:46.911614 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:19:46.911728 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 13 19:19:46.911741 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 13 19:19:46.911749 kernel: thunder_xcv, ver 1.0
Apr 13 19:19:46.911762 kernel: thunder_bgx, ver 1.0
Apr 13 19:19:46.911770 kernel: nicpf, ver 1.0
Apr 13 19:19:46.911778 kernel: nicvf, ver 1.0
Apr 13 19:19:46.911868 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:19:46.911936 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:19:46 UTC (1776107986)
Apr 13 19:19:46.911947 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:19:46.911955 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 13 19:19:46.911963 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:19:46.911974 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:19:46.911982 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:19:46.911991 kernel: Segment Routing with IPv6
Apr 13 19:19:46.911998 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:19:46.912006 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:19:46.912014 kernel: Key type dns_resolver registered
Apr 13 19:19:46.912023 kernel: registered taskstats version 1
Apr 13 19:19:46.912030 kernel: Loading compiled-in X.509 certificates
Apr 13 19:19:46.912039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:19:46.912052 kernel: Key type .fscrypt registered
Apr 13 19:19:46.912060 kernel: Key type fscrypt-provisioning registered
Apr 13 19:19:46.912068 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:19:46.912076 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:19:46.912084 kernel: ima: No architecture policies found
Apr 13 19:19:46.912092 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:19:46.912100 kernel: clk: Disabling unused clocks
Apr 13 19:19:46.912108 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:19:46.912159 kernel: Run /init as init process
Apr 13 19:19:46.912171 kernel: with arguments:
Apr 13 19:19:46.912179 kernel: /init
Apr 13 19:19:46.912187 kernel: with environment:
Apr 13 19:19:46.912195 kernel: HOME=/
Apr 13 19:19:46.912203 kernel: TERM=linux
Apr 13 19:19:46.912213 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:19:46.912223 systemd[1]: Detected virtualization kvm.
Apr 13 19:19:46.912232 systemd[1]: Detected architecture arm64.
Apr 13 19:19:46.912243 systemd[1]: Running in initrd.
Apr 13 19:19:46.912251 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:19:46.912260 systemd[1]: Hostname set to .
Apr 13 19:19:46.912269 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:19:46.912278 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:19:46.912286 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:46.912295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:46.912305 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:19:46.912316 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:19:46.912324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:19:46.912335 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:19:46.912346 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:19:46.912355 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:19:46.912364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:46.912373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:19:46.912384 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:19:46.912393 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:19:46.912402 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:19:46.912411 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:19:46.912420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:19:46.912429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:19:46.912438 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:19:46.912446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:19:46.912457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:46.912466 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:46.912475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:46.912484 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:19:46.912492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:19:46.912502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:19:46.912511 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:19:46.912519 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:19:46.912528 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:19:46.912538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:19:46.912547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:46.912556 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:19:46.912588 systemd-journald[238]: Collecting audit messages is disabled.
Apr 13 19:19:46.912611 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:19:46.912620 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:19:46.912629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:19:46.912637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:46.912649 systemd-journald[238]: Journal started
Apr 13 19:19:46.912680 systemd-journald[238]: Runtime Journal (/run/log/journal/d0c2d53268b9439d9bafd9a334fcf234) is 8.0M, max 76.6M, 68.6M free.
Apr 13 19:19:46.899362 systemd-modules-load[239]: Inserted module 'overlay'
Apr 13 19:19:46.920174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:19:46.921630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:46.924381 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:19:46.925177 kernel: Bridge firewalling registered
Apr 13 19:19:46.925428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:19:46.926715 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 13 19:19:46.928033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:19:46.938413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:19:46.940714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:19:46.945299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:19:46.954452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:46.956390 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:19:46.963432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:19:46.967842 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:19:46.975743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:19:46.986333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:19:46.990838 dracut-cmdline[269]: dracut-dracut-053
Apr 13 19:19:46.996750 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:19:47.017037 systemd-resolved[274]: Positive Trust Anchors:
Apr 13 19:19:47.017052 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:19:47.017083 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:19:47.027523 systemd-resolved[274]: Defaulting to hostname 'linux'.
Apr 13 19:19:47.029639 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:19:47.030372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:19:47.092193 kernel: SCSI subsystem initialized
Apr 13 19:19:47.097166 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:19:47.105195 kernel: iscsi: registered transport (tcp)
Apr 13 19:19:47.118213 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:19:47.118344 kernel: QLogic iSCSI HBA Driver
Apr 13 19:19:47.170787 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:19:47.175307 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:19:47.200494 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:19:47.200617 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:19:47.200652 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:19:47.253189 kernel: raid6: neonx8 gen() 15647 MB/s
Apr 13 19:19:47.270179 kernel: raid6: neonx4 gen() 15566 MB/s
Apr 13 19:19:47.287169 kernel: raid6: neonx2 gen() 13182 MB/s
Apr 13 19:19:47.304185 kernel: raid6: neonx1 gen() 10409 MB/s
Apr 13 19:19:47.321178 kernel: raid6: int64x8 gen() 6914 MB/s
Apr 13 19:19:47.338279 kernel: raid6: int64x4 gen() 7308 MB/s
Apr 13 19:19:47.355186 kernel: raid6: int64x2 gen() 6092 MB/s
Apr 13 19:19:47.372188 kernel: raid6: int64x1 gen() 5028 MB/s
Apr 13 19:19:47.372281 kernel: raid6: using algorithm neonx8 gen() 15647 MB/s
Apr 13 19:19:47.389192 kernel: raid6: .... xor() 11900 MB/s, rmw enabled
Apr 13 19:19:47.389277 kernel: raid6: using neon recovery algorithm
Apr 13 19:19:47.394504 kernel: xor: measuring software checksum speed
Apr 13 19:19:47.394596 kernel: 8regs : 19778 MB/sec
Apr 13 19:19:47.395413 kernel: 32regs : 19627 MB/sec
Apr 13 19:19:47.395454 kernel: arm64_neon : 26297 MB/sec
Apr 13 19:19:47.395494 kernel: xor: using function: arm64_neon (26297 MB/sec)
Apr 13 19:19:47.446201 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:19:47.460933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:19:47.468511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:19:47.482867 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Apr 13 19:19:47.486487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:19:47.498308 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:19:47.513016 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Apr 13 19:19:47.551634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:19:47.562518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:19:47.616525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:19:47.623308 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:19:47.649047 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:19:47.651488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:19:47.653451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:47.655029 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:19:47.663361 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:19:47.679765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:19:47.729568 kernel: scsi host0: Virtio SCSI HBA
Apr 13 19:19:47.733168 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 19:19:47.733217 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 19:19:47.751223 kernel: ACPI: bus type USB registered
Apr 13 19:19:47.753137 kernel: usbcore: registered new interface driver usbfs
Apr 13 19:19:47.757134 kernel: usbcore: registered new interface driver hub
Apr 13 19:19:47.757194 kernel: usbcore: registered new device driver usb
Apr 13 19:19:47.761993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:19:47.762142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:47.764358 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:47.765025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:19:47.765217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:47.768292 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:47.776433 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:19:47.787854 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:19:47.788080 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 13 19:19:47.788197 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 13 19:19:47.791142 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 13 19:19:47.791412 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 13 19:19:47.791508 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:19:47.795356 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 13 19:19:47.795573 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 13 19:19:47.795708 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 13 19:19:47.797147 kernel: hub 1-0:1.0: USB hub found
Apr 13 19:19:47.797322 kernel: hub 1-0:1.0: 4 ports detected
Apr 13 19:19:47.798150 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 13 19:19:47.799165 kernel: hub 2-0:1.0: USB hub found
Apr 13 19:19:47.799333 kernel: hub 2-0:1.0: 4 ports detected
Apr 13 19:19:47.799423 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 13 19:19:47.808620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:47.815681 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 13 19:19:47.815900 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 13 19:19:47.815992 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 13 19:19:47.816391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:19:47.819422 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 13 19:19:47.819599 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 19:19:47.824735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:19:47.824807 kernel: GPT:17805311 != 80003071
Apr 13 19:19:47.824829 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:19:47.824839 kernel: GPT:17805311 != 80003071
Apr 13 19:19:47.824848 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:19:47.824858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:47.825546 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 13 19:19:47.851393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:47.874137 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (520)
Apr 13 19:19:47.881807 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 19:19:47.884051 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (503)
Apr 13 19:19:47.902204 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 19:19:47.909871 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 19:19:47.915397 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 19:19:47.916858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 19:19:47.928454 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:19:47.937746 disk-uuid[577]: Primary Header is updated.
Apr 13 19:19:47.937746 disk-uuid[577]: Secondary Entries is updated.
Apr 13 19:19:47.937746 disk-uuid[577]: Secondary Header is updated.
Apr 13 19:19:47.947638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:47.953243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:47.957152 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:48.040151 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 13 19:19:48.175165 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 13 19:19:48.175249 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 13 19:19:48.176439 kernel: usbcore: registered new interface driver usbhid
Apr 13 19:19:48.177133 kernel: usbhid: USB HID core driver
Apr 13 19:19:48.284280 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 13 19:19:48.415167 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 13 19:19:48.470161 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 13 19:19:48.962073 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 19:19:48.962198 disk-uuid[578]: The operation has completed successfully.
Apr 13 19:19:49.014433 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:19:49.014557 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:19:49.037495 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:19:49.043077 sh[595]: Success
Apr 13 19:19:49.059445 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:19:49.124003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:19:49.126082 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:19:49.128839 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:19:49.156399 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:19:49.156468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:49.156487 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:19:49.156516 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:19:49.156533 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:19:49.164182 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:19:49.166690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:19:49.168418 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:19:49.179540 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:19:49.184376 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:19:49.203634 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:49.203701 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:49.204383 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:49.211143 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:49.211205 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:49.220949 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:19:49.222301 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:49.228840 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:19:49.237369 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:19:49.322537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:19:49.331339 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:19:49.336690 ignition[691]: Ignition 2.19.0
Apr 13 19:19:49.336702 ignition[691]: Stage: fetch-offline
Apr 13 19:19:49.340148 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:19:49.336759 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:49.336768 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:49.336939 ignition[691]: parsed url from cmdline: ""
Apr 13 19:19:49.336943 ignition[691]: no config URL provided
Apr 13 19:19:49.336947 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:49.336956 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:49.336961 ignition[691]: failed to fetch config: resource requires networking
Apr 13 19:19:49.337232 ignition[691]: Ignition finished successfully
Apr 13 19:19:49.357756 systemd-networkd[782]: lo: Link UP
Apr 13 19:19:49.357771 systemd-networkd[782]: lo: Gained carrier
Apr 13 19:19:49.359926 systemd-networkd[782]: Enumeration completed
Apr 13 19:19:49.360826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:19:49.361273 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:49.361277 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:49.362810 systemd[1]: Reached target network.target - Network.
Apr 13 19:19:49.363301 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:49.363304 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:19:49.363946 systemd-networkd[782]: eth0: Link UP
Apr 13 19:19:49.363949 systemd-networkd[782]: eth0: Gained carrier
Apr 13 19:19:49.363958 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:49.370429 systemd-networkd[782]: eth1: Link UP
Apr 13 19:19:49.370432 systemd-networkd[782]: eth1: Gained carrier
Apr 13 19:19:49.370442 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:19:49.372294 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:19:49.385834 ignition[786]: Ignition 2.19.0
Apr 13 19:19:49.385844 ignition[786]: Stage: fetch
Apr 13 19:19:49.386026 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:49.386037 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:49.386151 ignition[786]: parsed url from cmdline: ""
Apr 13 19:19:49.386155 ignition[786]: no config URL provided
Apr 13 19:19:49.386159 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:19:49.386168 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:19:49.386187 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 19:19:49.386874 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 19:19:49.410226 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:19:49.429231 systemd-networkd[782]: eth0: DHCPv4 address 178.105.7.160/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:19:49.587305 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 19:19:49.598575 ignition[786]: GET result: OK
Apr 13 19:19:49.598806 ignition[786]: parsing config with SHA512: 91cd3b9bb090590cad4ed9302feed8a6412a5033d3340abea940038e2d9c6ae6b246f863a5e200e6e708ff0d66fb7c3b8e58d70a7fb0dd24090271b56b44b208
Apr 13 19:19:49.605623 unknown[786]: fetched base config from "system"
Apr 13 19:19:49.605667 unknown[786]: fetched base config from "system"
Apr 13 19:19:49.606312 ignition[786]: fetch: fetch complete
Apr 13 19:19:49.605677 unknown[786]: fetched user config from "hetzner"
Apr 13 19:19:49.606318 ignition[786]: fetch: fetch passed
Apr 13 19:19:49.606376 ignition[786]: Ignition finished successfully
Apr 13 19:19:49.609426 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:19:49.614383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:19:49.630487 ignition[794]: Ignition 2.19.0
Apr 13 19:19:49.630498 ignition[794]: Stage: kargs
Apr 13 19:19:49.630739 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:49.630752 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:49.631896 ignition[794]: kargs: kargs passed
Apr 13 19:19:49.631954 ignition[794]: Ignition finished successfully
Apr 13 19:19:49.634952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:19:49.639339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:19:49.654529 ignition[800]: Ignition 2.19.0
Apr 13 19:19:49.655226 ignition[800]: Stage: disks
Apr 13 19:19:49.655433 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:49.655443 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:49.658135 ignition[800]: disks: disks passed
Apr 13 19:19:49.658609 ignition[800]: Ignition finished successfully
Apr 13 19:19:49.661239 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:19:49.663048 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:19:49.663843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:19:49.665246 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:19:49.666415 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:19:49.667481 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:19:49.673410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:19:49.695713 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 19:19:49.699494 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:19:49.708332 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:19:49.751158 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:19:49.752155 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:19:49.754134 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:19:49.764562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:19:49.769248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:19:49.776155 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Apr 13 19:19:49.778763 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:49.778811 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:49.779235 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:49.780398 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 19:19:49.781055 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:19:49.781089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:19:49.792212 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:49.792248 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:49.786143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:19:49.796947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:19:49.802773 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:19:49.851746 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:19:49.853349 coreos-metadata[819]: Apr 13 19:19:49.852 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 19:19:49.856159 coreos-metadata[819]: Apr 13 19:19:49.856 INFO Fetch successful
Apr 13 19:19:49.856159 coreos-metadata[819]: Apr 13 19:19:49.856 INFO wrote hostname ci-4081-3-7-c-b986c49433 to /sysroot/etc/hostname
Apr 13 19:19:49.859245 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:19:49.861409 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:19:49.868049 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:19:49.872963 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:19:49.968863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:19:49.973227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:19:49.975979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:19:49.985154 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:50.006026 ignition[933]: INFO : Ignition 2.19.0
Apr 13 19:19:50.007665 ignition[933]: INFO : Stage: mount
Apr 13 19:19:50.008087 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:50.008087 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:50.010906 ignition[933]: INFO : mount: mount passed
Apr 13 19:19:50.010906 ignition[933]: INFO : Ignition finished successfully
Apr 13 19:19:50.013580 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:19:50.021344 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:19:50.025151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:19:50.156620 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:19:50.169456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:19:50.178170 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (947)
Apr 13 19:19:50.180249 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:19:50.180292 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:19:50.180304 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:19:50.184259 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:19:50.184322 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:19:50.189034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:19:50.212478 ignition[964]: INFO : Ignition 2.19.0
Apr 13 19:19:50.212478 ignition[964]: INFO : Stage: files
Apr 13 19:19:50.214807 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:50.214807 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:50.214807 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:19:50.220672 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:19:50.220672 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:19:50.220672 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:19:50.220672 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:19:50.225858 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:19:50.225858 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:19:50.225858 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:19:50.221848 unknown[964]: wrote ssh authorized keys file for user: core
Apr 13 19:19:50.995385 systemd-networkd[782]: eth1: Gained IPv6LL
Apr 13 19:19:51.123398 systemd-networkd[782]: eth0: Gained IPv6LL
Apr 13 19:19:51.375418 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 19:19:51.721088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:19:51.721088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:19:51.721088 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:19:52.134446 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:19:52.467990 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:19:52.467990 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:19:52.471230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 13 19:19:52.916536 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:19:54.892604 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 13 19:19:54.892604 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:19:54.895199 ignition[964]: INFO : files: files passed
Apr 13 19:19:54.895199 ignition[964]: INFO : Ignition finished successfully
Apr 13 19:19:54.898499 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:19:54.906396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:19:54.912103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:19:54.915673 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:19:54.916692 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:19:54.927156 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:54.927156 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:54.930349 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:19:54.932100 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:19:54.934337 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:19:54.940303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:19:54.972407 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:19:54.975192 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:19:54.976512 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:19:54.979000 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:19:54.980383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:19:54.986422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:19:55.004269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:19:55.014505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:19:55.026917 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:19:55.028499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:55.029309 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:19:55.030547 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:19:55.030729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:19:55.032521 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:19:55.033191 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:19:55.034528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:19:55.035915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:19:55.037163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:19:55.038470 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:19:55.039768 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:19:55.041018 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:19:55.042220 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:19:55.043641 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:19:55.044673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:19:55.044814 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:19:55.046264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:19:55.046967 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:55.048085 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:19:55.048174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:55.049430 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:19:55.049548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:19:55.051306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:19:55.051426 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:19:55.052820 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:19:55.052914 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:19:55.054242 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:19:55.054344 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:19:55.066613 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:19:55.072380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:19:55.073103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:19:55.073333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:19:55.075314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:19:55.075654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:19:55.084341 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:19:55.084930 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:19:55.095144 ignition[1016]: INFO : Ignition 2.19.0
Apr 13 19:19:55.095144 ignition[1016]: INFO : Stage: umount
Apr 13 19:19:55.095144 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:19:55.095144 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:19:55.098919 ignition[1016]: INFO : umount: umount passed
Apr 13 19:19:55.098919 ignition[1016]: INFO : Ignition finished successfully
Apr 13 19:19:55.097900 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:19:55.102501 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:19:55.103204 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:19:55.104698 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:19:55.104815 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:19:55.105999 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:19:55.106053 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:19:55.107259 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:19:55.107305 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:19:55.108194 systemd[1]: Stopped target network.target - Network.
Apr 13 19:19:55.109089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:19:55.109170 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:19:55.110370 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:19:55.111254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:19:55.115177 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:55.116955 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:19:55.117993 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:19:55.119454 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:19:55.119529 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:19:55.120892 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:19:55.120954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:19:55.122224 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:19:55.122277 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:19:55.123228 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:19:55.123269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:19:55.124321 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:19:55.127062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:19:55.128467 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:19:55.128608 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:19:55.130625 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:19:55.130731 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:19:55.134226 systemd-networkd[782]: eth0: DHCPv6 lease lost
Apr 13 19:19:55.140072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:19:55.140361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:19:55.143232 systemd-networkd[782]: eth1: DHCPv6 lease lost
Apr 13 19:19:55.146076 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:19:55.148295 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:19:55.149675 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:19:55.149740 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:55.154420 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:19:55.155088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:19:55.155185 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:19:55.158990 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:19:55.159045 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:19:55.161052 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:19:55.161100 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:19:55.161807 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:19:55.161849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:19:55.162751 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:19:55.172528 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:19:55.172736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:19:55.174917 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:19:55.174996 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:55.176306 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:19:55.176342 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:55.177415 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:19:55.177469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:19:55.178199 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:19:55.178242 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:19:55.179410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:19:55.179459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:19:55.184345 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:19:55.187417 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:19:55.187566 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:19:55.191519 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 19:19:55.191598 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:19:55.192405 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:19:55.192458 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:19:55.193776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:19:55.193834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:19:55.198447 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:19:55.198597 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:19:55.205611 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:19:55.205754 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:19:55.206700 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:19:55.212473 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:19:55.224147 systemd[1]: Switching root.
Apr 13 19:19:55.263695 systemd-journald[238]: Journal stopped
Apr 13 19:19:56.205526 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:19:56.205612 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:19:56.205629 kernel: SELinux: policy capability open_perms=1
Apr 13 19:19:56.205640 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:19:56.205654 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:19:56.205665 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:19:56.205675 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:19:56.205685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:19:56.205695 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:19:56.205706 kernel: audit: type=1403 audit(1776107995.400:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:19:56.205718 systemd[1]: Successfully loaded SELinux policy in 34.724ms.
Apr 13 19:19:56.205744 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.314ms.
Apr 13 19:19:56.205759 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:19:56.205771 systemd[1]: Detected virtualization kvm.
Apr 13 19:19:56.205783 systemd[1]: Detected architecture arm64.
Apr 13 19:19:56.205795 systemd[1]: Detected first boot.
Apr 13 19:19:56.205806 systemd[1]: Hostname set to .
Apr 13 19:19:56.205817 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:19:56.205829 zram_generator::config[1058]: No configuration found.
Apr 13 19:19:56.205845 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:19:56.205859 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:19:56.205876 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:19:56.205888 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:19:56.205900 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:19:56.205912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:19:56.205924 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:19:56.205935 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:19:56.205946 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:19:56.205958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:19:56.205972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:19:56.205983 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:19:56.205995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:19:56.206007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:19:56.206019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:19:56.206030 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:19:56.206042 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:19:56.206058 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:19:56.206071 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:19:56.206084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:19:56.206096 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:19:56.206109 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:19:56.206383 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:19:56.206404 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:19:56.206417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:19:56.206435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:19:56.206448 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:19:56.206459 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:19:56.206471 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:19:56.206483 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:19:56.206495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:19:56.206507 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:19:56.206518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:19:56.206530 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:19:56.206544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:19:56.206555 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:19:56.206567 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:19:56.206594 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:19:56.206608 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:19:56.206620 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:19:56.206638 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:19:56.206653 systemd[1]: Reached target machines.target - Containers. Apr 13 19:19:56.206693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:19:56.206712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:56.206724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:19:56.206736 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:19:56.206748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:19:56.206760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:19:56.206775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:19:56.206787 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:19:56.206799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:19:56.206811 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:19:56.206823 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 19:19:56.206835 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 19:19:56.206847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 19:19:56.206859 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 19:19:56.206871 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:19:56.206887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 13 19:19:56.206899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:19:56.206911 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 19:19:56.206922 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:19:56.206934 kernel: fuse: init (API version 7.39) Apr 13 19:19:56.206946 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 19:19:56.206957 systemd[1]: Stopped verity-setup.service. Apr 13 19:19:56.206968 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 19:19:56.206980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:19:56.206994 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:19:56.207006 kernel: loop: module loaded Apr 13 19:19:56.207017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:19:56.207028 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:19:56.207042 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:19:56.207054 kernel: ACPI: bus type drm_connector registered Apr 13 19:19:56.207065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:19:56.207077 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:19:56.207089 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:19:56.207101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:56.207133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:56.207147 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:19:56.207159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:19:56.207173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 13 19:19:56.207185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:56.207198 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:19:56.207209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:19:56.207222 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:19:56.207233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:19:56.207247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:19:56.207259 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:19:56.207303 systemd-journald[1121]: Collecting audit messages is disabled. Apr 13 19:19:56.207330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:19:56.207342 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:19:56.207355 systemd-journald[1121]: Journal started Apr 13 19:19:56.207381 systemd-journald[1121]: Runtime Journal (/run/log/journal/d0c2d53268b9439d9bafd9a334fcf234) is 8.0M, max 76.6M, 68.6M free. Apr 13 19:19:55.914564 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:19:55.942450 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 19:19:55.942891 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 19:19:56.216141 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:19:56.226204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:19:56.231136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:19:56.231213 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 13 19:19:56.234492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:19:56.241152 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:19:56.253340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:19:56.256152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:56.262093 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:19:56.262296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:19:56.273595 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:19:56.273666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:19:56.281291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:19:56.285139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:19:56.294679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:19:56.294765 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:19:56.300714 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:19:56.301729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:19:56.302794 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:19:56.305321 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:19:56.330913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 13 19:19:56.343809 kernel: loop0: detected capacity change from 0 to 8 Apr 13 19:19:56.344929 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:19:56.351333 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Apr 13 19:19:56.351355 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Apr 13 19:19:56.356426 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:19:56.359712 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:19:56.362195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:19:56.374618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:19:56.376836 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:19:56.388508 systemd-journald[1121]: Time spent on flushing to /var/log/journal/d0c2d53268b9439d9bafd9a334fcf234 is 87.162ms for 1139 entries. Apr 13 19:19:56.388508 systemd-journald[1121]: System Journal (/var/log/journal/d0c2d53268b9439d9bafd9a334fcf234) is 8.0M, max 584.8M, 576.8M free. Apr 13 19:19:56.490065 systemd-journald[1121]: Received client request to flush runtime journal. Apr 13 19:19:56.490333 kernel: loop1: detected capacity change from 0 to 114432 Apr 13 19:19:56.490365 kernel: loop2: detected capacity change from 0 to 114328 Apr 13 19:19:56.404518 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:19:56.410223 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 19:19:56.424197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:19:56.466629 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:19:56.476361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 13 19:19:56.482160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:19:56.492418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:19:56.499913 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:19:56.517353 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 13 19:19:56.517374 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 13 19:19:56.522042 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 13 19:19:56.527548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:19:56.531242 kernel: loop3: detected capacity change from 0 to 197488 Apr 13 19:19:56.569134 kernel: loop4: detected capacity change from 0 to 8 Apr 13 19:19:56.573326 kernel: loop5: detected capacity change from 0 to 114432 Apr 13 19:19:56.590137 kernel: loop6: detected capacity change from 0 to 114328 Apr 13 19:19:56.606701 kernel: loop7: detected capacity change from 0 to 197488 Apr 13 19:19:56.628542 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 19:19:56.629171 (sd-merge)[1200]: Merged extensions into '/usr'. Apr 13 19:19:56.638110 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:19:56.638162 systemd[1]: Reloading... Apr 13 19:19:56.759147 zram_generator::config[1228]: No configuration found. Apr 13 19:19:56.829533 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:19:56.872988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 13 19:19:56.920271 systemd[1]: Reloading finished in 281 ms. Apr 13 19:19:56.955259 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:19:56.958651 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:19:56.968337 systemd[1]: Starting ensure-sysext.service... Apr 13 19:19:56.970636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:19:56.978733 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:19:56.978849 systemd[1]: Reloading... Apr 13 19:19:57.012819 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:19:57.013168 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:19:57.015884 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:19:57.016160 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 13 19:19:57.016212 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 13 19:19:57.025126 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:19:57.025145 systemd-tmpfiles[1265]: Skipping /boot Apr 13 19:19:57.044873 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:19:57.048275 systemd-tmpfiles[1265]: Skipping /boot Apr 13 19:19:57.068145 zram_generator::config[1294]: No configuration found. Apr 13 19:19:57.179183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:19:57.226668 systemd[1]: Reloading finished in 247 ms. 
Apr 13 19:19:57.246583 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:19:57.257403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:19:57.276781 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:19:57.283266 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:19:57.292537 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:19:57.297372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:19:57.304925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:19:57.308825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:19:57.313084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:57.318401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:19:57.325179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:19:57.335453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:19:57.336474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:57.341134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:57.341298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:57.349295 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Apr 13 19:19:57.354502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:57.358403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:19:57.361291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:57.366244 systemd[1]: Finished ensure-sysext.service. Apr 13 19:19:57.367332 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:19:57.367455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:19:57.386420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 19:19:57.387741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:19:57.388862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:57.390896 augenrules[1359]: No rules Apr 13 19:19:57.391054 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:19:57.393617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:57.396245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:57.397904 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:19:57.398956 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:19:57.399103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:19:57.402723 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Apr 13 19:19:57.407037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 19:19:57.413486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 13 19:19:57.413684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:19:57.420393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:19:57.420999 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:19:57.423355 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:19:57.437439 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:19:57.446329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:19:57.447019 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:19:57.453950 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:19:57.543647 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 19:19:57.544549 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:19:57.570334 systemd-networkd[1373]: lo: Link UP Apr 13 19:19:57.570343 systemd-networkd[1373]: lo: Gained carrier Apr 13 19:19:57.570984 systemd-networkd[1373]: Enumeration completed Apr 13 19:19:57.571085 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:19:57.582647 systemd-resolved[1340]: Positive Trust Anchors: Apr 13 19:19:57.582850 systemd-resolved[1340]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:19:57.582887 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:19:57.585345 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:19:57.592514 systemd-resolved[1340]: Using system hostname 'ci-4081-3-7-c-b986c49433'. Apr 13 19:19:57.594819 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:19:57.595859 systemd[1]: Reached target network.target - Network. Apr 13 19:19:57.596977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:19:57.615183 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 13 19:19:57.664243 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:57.664256 systemd-networkd[1373]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:19:57.666074 systemd-networkd[1373]: eth1: Link UP Apr 13 19:19:57.666091 systemd-networkd[1373]: eth1: Gained carrier Apr 13 19:19:57.666122 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 19:19:57.689143 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:57.689156 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:19:57.691038 systemd-networkd[1373]: eth0: Link UP Apr 13 19:19:57.691050 systemd-networkd[1373]: eth0: Gained carrier Apr 13 19:19:57.691071 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:19:57.709371 systemd-networkd[1373]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 19:19:57.710796 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Apr 13 19:19:57.743146 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1389) Apr 13 19:19:57.748266 systemd-networkd[1373]: eth0: DHCPv4 address 178.105.7.160/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 19:19:57.748667 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Apr 13 19:19:57.748889 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Apr 13 19:19:57.768290 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 19:19:57.768404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:19:57.773144 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 19:19:57.779725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:19:57.788629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:19:57.791929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 13 19:19:57.792928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:19:57.792960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:19:57.793345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:19:57.795412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:19:57.807972 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:19:57.809242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:19:57.815347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:19:57.821959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:19:57.825969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:19:57.827607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:19:57.850940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:19:57.857152 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 13 19:19:57.857218 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 19:19:57.857233 kernel: [drm] features: -context_init Apr 13 19:19:57.859998 kernel: [drm] number of scanouts: 1 Apr 13 19:19:57.860091 kernel: [drm] number of cap sets: 0 Apr 13 19:19:57.858464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 13 19:19:57.865148 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 19:19:57.866604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:57.874164 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 19:19:57.880811 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 19:19:57.887188 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:19:57.895653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:19:57.895843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:57.902323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:19:57.970243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:19:58.012110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:19:58.023488 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:19:58.039203 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:19:58.062517 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:19:58.064977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:19:58.066895 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:19:58.067818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:19:58.068961 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:19:58.070135 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 13 19:19:58.070898 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:19:58.071752 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:19:58.072594 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:19:58.072630 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:19:58.073148 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:19:58.077173 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:19:58.079444 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:19:58.086192 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:19:58.089188 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:19:58.090567 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:19:58.091519 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:19:58.092165 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:19:58.092973 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:19:58.093010 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:19:58.099336 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:19:58.103327 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:19:58.107047 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:19:58.108367 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:19:58.112314 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 13 19:19:58.119391 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:19:58.120093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:19:58.124338 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:19:58.126366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:19:58.131352 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 13 19:19:58.134317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:19:58.138290 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:19:58.142129 jq[1451]: false Apr 13 19:19:58.148247 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:19:58.151083 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:19:58.151606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:19:58.156370 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:19:58.161658 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:19:58.164065 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 19:19:58.170507 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:19:58.171355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 13 19:19:58.186684 coreos-metadata[1449]: Apr 13 19:19:58.186 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 19:19:58.188611 coreos-metadata[1449]: Apr 13 19:19:58.188 INFO Fetch successful Apr 13 19:19:58.188967 coreos-metadata[1449]: Apr 13 19:19:58.188 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 19:19:58.189463 coreos-metadata[1449]: Apr 13 19:19:58.189 INFO Fetch successful Apr 13 19:19:58.196021 dbus-daemon[1450]: [system] SELinux support is enabled Apr 13 19:19:58.197569 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:19:58.222930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:19:58.222992 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:19:58.226644 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:19:58.226748 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:19:58.230599 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:19:58.233174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:19:58.235818 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 13 19:19:58.239335 extend-filesystems[1452]: Found loop4 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found loop5 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found loop6 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found loop7 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda1 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda2 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda3 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found usr Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda4 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda6 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda7 Apr 13 19:19:58.239335 extend-filesystems[1452]: Found sda9 Apr 13 19:19:58.239335 extend-filesystems[1452]: Checking size of /dev/sda9 Apr 13 19:19:58.240506 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:19:58.291428 tar[1469]: linux-arm64/LICENSE Apr 13 19:19:58.291428 tar[1469]: linux-arm64/helm Apr 13 19:19:58.247836 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:19:58.291968 jq[1463]: true Apr 13 19:19:58.303621 jq[1488]: true Apr 13 19:19:58.304095 extend-filesystems[1452]: Resized partition /dev/sda9 Apr 13 19:19:58.306318 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:19:58.318925 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 13 19:19:58.322737 update_engine[1461]: I20260413 19:19:58.320230 1461 main.cc:92] Flatcar Update Engine starting Apr 13 19:19:58.324193 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:19:58.324513 update_engine[1461]: I20260413 19:19:58.324468 1461 update_check_scheduler.cc:74] Next update check in 7m39s Apr 13 19:19:58.327664 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 13 19:19:58.403915 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:19:58.404974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:19:58.435411 systemd-logind[1460]: New seat seat0. Apr 13 19:19:58.443578 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:19:58.443606 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 13 19:19:58.445391 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:19:58.499182 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:19:58.499728 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:19:58.519148 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 13 19:19:58.528158 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1379) Apr 13 19:19:58.533001 systemd[1]: Starting sshkeys.service... Apr 13 19:19:58.553486 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:19:58.561487 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:19:58.569933 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 19:19:58.569933 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 13 19:19:58.569933 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Apr 13 19:19:58.575375 extend-filesystems[1452]: Resized filesystem in /dev/sda9 Apr 13 19:19:58.575375 extend-filesystems[1452]: Found sr0 Apr 13 19:19:58.578704 containerd[1471]: time="2026-04-13T19:19:58.574814680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:19:58.576092 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 19:19:58.576333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:19:58.622444 coreos-metadata[1528]: Apr 13 19:19:58.622 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 13 19:19:58.623946 coreos-metadata[1528]: Apr 13 19:19:58.623 INFO Fetch successful Apr 13 19:19:58.628360 unknown[1528]: wrote ssh authorized keys file for user: core Apr 13 19:19:58.643090 containerd[1471]: time="2026-04-13T19:19:58.643034840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.645188 containerd[1471]: time="2026-04-13T19:19:58.645136600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:58.645301 containerd[1471]: time="2026-04-13T19:19:58.645285960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:19:58.645365 containerd[1471]: time="2026-04-13T19:19:58.645351840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:19:58.645608 containerd[1471]: time="2026-04-13T19:19:58.645586640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
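The extend-filesystems / resize2fs exchange above grows the root filesystem online from 1617920 to 9393147 blocks. With the 4 KiB block size resize2fs reports ("(4k)"), that block count converts to bytes as follows (a sketch for reading such logs, not part of the boot image):

```python
# resize2fs reported "(4k) blocks", i.e. 4096-byte filesystem blocks.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Size in GiB of an ext4 filesystem with `blocks` 4 KiB blocks."""
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 1617920, 9393147  # values from the log above
print(f"{blocks_to_gib(old_blocks):.1f} GiB -> {blocks_to_gib(new_blocks):.1f} GiB")
# → 6.2 GiB -> 35.8 GiB
```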
type=io.containerd.warning.v1 Apr 13 19:19:58.645702 containerd[1471]: time="2026-04-13T19:19:58.645687680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.645824 containerd[1471]: time="2026-04-13T19:19:58.645807040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:58.645878 containerd[1471]: time="2026-04-13T19:19:58.645865120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646105 containerd[1471]: time="2026-04-13T19:19:58.646085880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646189 containerd[1471]: time="2026-04-13T19:19:58.646174920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646246 containerd[1471]: time="2026-04-13T19:19:58.646231880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646307 containerd[1471]: time="2026-04-13T19:19:58.646295200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646438 containerd[1471]: time="2026-04-13T19:19:58.646422320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:19:58.646795 containerd[1471]: time="2026-04-13T19:19:58.646748520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 13 19:19:58.647132 containerd[1471]: time="2026-04-13T19:19:58.646949360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:19:58.647132 containerd[1471]: time="2026-04-13T19:19:58.646982880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:19:58.647132 containerd[1471]: time="2026-04-13T19:19:58.647076120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 19:19:58.647244 containerd[1471]: time="2026-04-13T19:19:58.647228760Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:19:58.650284 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:19:58.652456 containerd[1471]: time="2026-04-13T19:19:58.652420640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:19:58.652594 containerd[1471]: time="2026-04-13T19:19:58.652579360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.652696840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.652720240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.652737120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.652908400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653245000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653379800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653398000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653414280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653428880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653443560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653457480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653474280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653490320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 13 19:19:58.654212 containerd[1471]: time="2026-04-13T19:19:58.653511480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653526160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653542280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653578000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653595880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653608920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653623320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653635720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653649520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653663560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653678040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653692280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653708480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653721000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654532 containerd[1471]: time="2026-04-13T19:19:58.653733600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653747360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653769560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653793040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653805840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653817560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653936720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653956560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653968240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653981640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.653991880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.654007880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.654018520Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:19:58.654853 containerd[1471]: time="2026-04-13T19:19:58.654029840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 19:19:58.655959 containerd[1471]: time="2026-04-13T19:19:58.655886560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:19:58.656749 containerd[1471]: time="2026-04-13T19:19:58.656100040Z" level=info msg="Connect containerd service" Apr 13 19:19:58.656749 containerd[1471]: time="2026-04-13T19:19:58.656154440Z" level=info msg="using legacy CRI server" Apr 13 19:19:58.656749 containerd[1471]: time="2026-04-13T19:19:58.656165000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:19:58.656749 containerd[1471]: time="2026-04-13T19:19:58.656256680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:19:58.657262 containerd[1471]: time="2026-04-13T19:19:58.657234160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:19:58.657742 containerd[1471]: time="2026-04-13T19:19:58.657692880Z" level=info msg="Start subscribing containerd event" Apr 13 19:19:58.657895 containerd[1471]: time="2026-04-13T19:19:58.657878280Z" level=info msg="Start recovering state" Apr 13 19:19:58.658078 containerd[1471]: time="2026-04-13T19:19:58.658054160Z" level=info msg="Start event monitor" Apr 13 19:19:58.658156 containerd[1471]: time="2026-04-13T19:19:58.658142560Z" level=info msg="Start 
snapshots syncer" Apr 13 19:19:58.658217 containerd[1471]: time="2026-04-13T19:19:58.658205240Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:19:58.658265 containerd[1471]: time="2026-04-13T19:19:58.658254280Z" level=info msg="Start streaming server" Apr 13 19:19:58.659025 containerd[1471]: time="2026-04-13T19:19:58.659002200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:19:58.659226 containerd[1471]: time="2026-04-13T19:19:58.659205760Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:19:58.659455 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:19:58.660896 containerd[1471]: time="2026-04-13T19:19:58.660877120Z" level=info msg="containerd successfully booted in 0.137398s" Apr 13 19:19:58.668237 update-ssh-keys[1538]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:19:58.671158 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:19:58.673647 systemd[1]: Finished sshkeys.service. Apr 13 19:19:58.994504 tar[1469]: linux-arm64/README.md Apr 13 19:19:59.016164 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:19:59.123310 systemd-networkd[1373]: eth1: Gained IPv6LL Apr 13 19:19:59.126241 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Apr 13 19:19:59.131485 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:19:59.133356 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:19:59.143077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:19:59.150189 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:19:59.187334 systemd-networkd[1373]: eth0: Gained IPv6LL Apr 13 19:19:59.189517 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. 
Apr 13 19:19:59.197166 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:19:59.215339 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:19:59.238071 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:19:59.246436 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:19:59.268484 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:19:59.269801 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:19:59.286076 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:19:59.298601 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:19:59.307597 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:19:59.312150 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 13 19:19:59.313135 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:19:59.911214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:19:59.912712 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:19:59.918619 systemd[1]: Startup finished in 809ms (kernel) + 8.712s (initrd) + 4.553s (userspace) = 14.075s. 
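The "Startup finished" line above sums per-phase times (809ms kernel + 8.712s initrd + 4.553s userspace = 14.075s). Note the printed total can differ from the sum of the rounded phase values by about a millisecond, since systemd totals the unrounded microsecond counters:

```python
# Phase times as printed in the log above, converted to seconds.
phases = {"kernel": 0.809, "initrd": 8.712, "userspace": 4.553}
total_reported = 14.075  # systemd's own total

# Summing the rounded phase values gives 14.074 s, 1 ms off the total.
print(round(sum(phases.values()), 3))
```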
Apr 13 19:19:59.923504 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:00.372231 kubelet[1579]: E0413 19:20:00.372039 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:00.376188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:00.376456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:10.627204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:20:10.634545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:10.765822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:10.777812 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:10.823598 kubelet[1599]: E0413 19:20:10.823510 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:10.828059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:10.828521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:20.831870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:20:20.847771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:20:20.968421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:20.974218 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:21.022440 kubelet[1614]: E0413 19:20:21.022360 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:21.025561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:21.025775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:29.343001 systemd-timesyncd[1358]: Contacted time server 85.121.52.237:123 (2.flatcar.pool.ntp.org). Apr 13 19:20:29.343091 systemd-timesyncd[1358]: Initial clock synchronization to Mon 2026-04-13 19:20:29.427678 UTC. Apr 13 19:20:31.082175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:20:31.091573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:31.232505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:20:31.238393 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:20:31.280507 kubelet[1630]: E0413 19:20:31.280444 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:20:31.284444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:20:31.284729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:20:35.329731 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:20:35.335494 systemd[1]: Started sshd@0-178.105.7.160:22-50.85.169.122:52064.service - OpenSSH per-connection server daemon (50.85.169.122:52064). Apr 13 19:20:35.464158 sshd[1638]: Accepted publickey for core from 50.85.169.122 port 52064 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:35.466347 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:35.480133 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:20:35.480343 systemd-logind[1460]: New session 1 of user core. Apr 13 19:20:35.487606 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:20:35.503547 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:20:35.517698 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:20:35.522629 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:20:35.639664 systemd[1642]: Queued start job for default target default.target. 
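The kubelet attempts above (19:19:59.9, 19:20:10.8, 19:20:21.0, 19:20:31.2) fail identically because /var/lib/kubelet/config.yaml does not exist yet, and systemd reschedules each restart roughly 10 s later, consistent with a `Restart=` policy (the exact `RestartSec=` value is an assumption; it is not shown in the log). A sketch extracting that cadence from the timestamps:

```python
from datetime import datetime

# "Started kubelet.service" timestamps from the log above.
attempts = ["19:19:59.911214", "19:20:10.765822",
            "19:20:20.968421", "19:20:31.232505"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]

# Whole-second gaps between consecutive start attempts.
gaps = [round((b - a).total_seconds()) for a, b in zip(times, times[1:])]
print(gaps)
# → [11, 10, 10]
```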
Apr 13 19:20:35.649590 systemd[1642]: Created slice app.slice - User Application Slice. Apr 13 19:20:35.649950 systemd[1642]: Reached target paths.target - Paths. Apr 13 19:20:35.650186 systemd[1642]: Reached target timers.target - Timers. Apr 13 19:20:35.653195 systemd[1642]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:20:35.682074 systemd[1642]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:20:35.682289 systemd[1642]: Reached target sockets.target - Sockets. Apr 13 19:20:35.682307 systemd[1642]: Reached target basic.target - Basic System. Apr 13 19:20:35.682357 systemd[1642]: Reached target default.target - Main User Target. Apr 13 19:20:35.682389 systemd[1642]: Startup finished in 150ms. Apr 13 19:20:35.682719 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:20:35.694512 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:20:35.825678 systemd[1]: Started sshd@1-178.105.7.160:22-50.85.169.122:52068.service - OpenSSH per-connection server daemon (50.85.169.122:52068). Apr 13 19:20:35.942408 sshd[1653]: Accepted publickey for core from 50.85.169.122 port 52068 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:35.944698 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:35.951630 systemd-logind[1460]: New session 2 of user core. Apr 13 19:20:35.957420 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:20:36.057451 sshd[1653]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.062257 systemd[1]: sshd@1-178.105.7.160:22-50.85.169.122:52068.service: Deactivated successfully. Apr 13 19:20:36.064291 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:20:36.066976 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:20:36.068410 systemd-logind[1460]: Removed session 2. 
Apr 13 19:20:36.091899 systemd[1]: Started sshd@2-178.105.7.160:22-50.85.169.122:52072.service - OpenSSH per-connection server daemon (50.85.169.122:52072). Apr 13 19:20:36.220944 sshd[1660]: Accepted publickey for core from 50.85.169.122 port 52072 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:36.224050 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:36.230188 systemd-logind[1460]: New session 3 of user core. Apr 13 19:20:36.240519 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:20:36.337965 sshd[1660]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.344027 systemd[1]: sshd@2-178.105.7.160:22-50.85.169.122:52072.service: Deactivated successfully. Apr 13 19:20:36.345920 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:20:36.348336 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:20:36.349834 systemd-logind[1460]: Removed session 3. Apr 13 19:20:36.369500 systemd[1]: Started sshd@3-178.105.7.160:22-50.85.169.122:52080.service - OpenSSH per-connection server daemon (50.85.169.122:52080). Apr 13 19:20:36.497707 sshd[1667]: Accepted publickey for core from 50.85.169.122 port 52080 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:20:36.499083 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:20:36.505726 systemd-logind[1460]: New session 4 of user core. Apr 13 19:20:36.511511 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:20:36.613682 sshd[1667]: pam_unix(sshd:session): session closed for user core Apr 13 19:20:36.619807 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:20:36.620318 systemd[1]: sshd@3-178.105.7.160:22-50.85.169.122:52080.service: Deactivated successfully. Apr 13 19:20:36.623684 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 13 19:20:36.627328 systemd-logind[1460]: Removed session 4.
Apr 13 19:20:36.644728 systemd[1]: Started sshd@4-178.105.7.160:22-50.85.169.122:52096.service - OpenSSH per-connection server daemon (50.85.169.122:52096).
Apr 13 19:20:36.761338 sshd[1674]: Accepted publickey for core from 50.85.169.122 port 52096 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:20:36.764081 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:20:36.770442 systemd-logind[1460]: New session 5 of user core.
Apr 13 19:20:36.776492 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:20:36.868580 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:20:36.868901 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:36.886944 sudo[1677]: pam_unix(sudo:session): session closed for user root
Apr 13 19:20:36.904611 sshd[1674]: pam_unix(sshd:session): session closed for user core
Apr 13 19:20:36.910875 systemd[1]: sshd@4-178.105.7.160:22-50.85.169.122:52096.service: Deactivated successfully.
Apr 13 19:20:36.913294 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:20:36.914232 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:20:36.915499 systemd-logind[1460]: Removed session 5.
Apr 13 19:20:36.952684 systemd[1]: Started sshd@5-178.105.7.160:22-50.85.169.122:52104.service - OpenSSH per-connection server daemon (50.85.169.122:52104).
Apr 13 19:20:37.079272 sshd[1682]: Accepted publickey for core from 50.85.169.122 port 52104 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:20:37.082339 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:20:37.089551 systemd-logind[1460]: New session 6 of user core.
Apr 13 19:20:37.098511 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:20:37.189799 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:20:37.190166 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:37.194690 sudo[1686]: pam_unix(sudo:session): session closed for user root
Apr 13 19:20:37.200869 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:20:37.201530 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:37.225739 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:20:37.228382 auditctl[1689]: No rules
Apr 13 19:20:37.228847 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:20:37.229035 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:20:37.232616 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:20:37.281732 augenrules[1707]: No rules
Apr 13 19:20:37.283557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:20:37.285426 sudo[1685]: pam_unix(sudo:session): session closed for user root
Apr 13 19:20:37.303641 sshd[1682]: pam_unix(sshd:session): session closed for user core
Apr 13 19:20:37.308402 systemd[1]: sshd@5-178.105.7.160:22-50.85.169.122:52104.service: Deactivated successfully.
Apr 13 19:20:37.311393 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:20:37.313612 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:20:37.314978 systemd-logind[1460]: Removed session 6.
Apr 13 19:20:37.334539 systemd[1]: Started sshd@6-178.105.7.160:22-50.85.169.122:52120.service - OpenSSH per-connection server daemon (50.85.169.122:52120).
Apr 13 19:20:37.459251 sshd[1715]: Accepted publickey for core from 50.85.169.122 port 52120 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:20:37.461375 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:20:37.468318 systemd-logind[1460]: New session 7 of user core.
Apr 13 19:20:37.473502 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:20:37.561633 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:20:37.561931 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:20:37.877748 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:20:37.880087 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:20:38.130087 dockerd[1734]: time="2026-04-13T19:20:38.129940787Z" level=info msg="Starting up"
Apr 13 19:20:38.206180 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport33666351-merged.mount: Deactivated successfully.
Apr 13 19:20:38.225036 systemd[1]: var-lib-docker-metacopy\x2dcheck2974064169-merged.mount: Deactivated successfully.
Apr 13 19:20:38.233184 dockerd[1734]: time="2026-04-13T19:20:38.233143502Z" level=info msg="Loading containers: start."
Apr 13 19:20:38.352702 kernel: Initializing XFRM netlink socket
Apr 13 19:20:38.439316 systemd-networkd[1373]: docker0: Link UP
Apr 13 19:20:38.458947 dockerd[1734]: time="2026-04-13T19:20:38.458889585Z" level=info msg="Loading containers: done."
Apr 13 19:20:38.478012 dockerd[1734]: time="2026-04-13T19:20:38.477532544Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:20:38.478012 dockerd[1734]: time="2026-04-13T19:20:38.477665145Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:20:38.478012 dockerd[1734]: time="2026-04-13T19:20:38.477797346Z" level=info msg="Daemon has completed initialization"
Apr 13 19:20:38.512138 dockerd[1734]: time="2026-04-13T19:20:38.511993962Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:20:38.512483 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:20:38.982660 containerd[1471]: time="2026-04-13T19:20:38.982603208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\""
Apr 13 19:20:39.517904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597223912.mount: Deactivated successfully.
Apr 13 19:20:40.578880 containerd[1471]: time="2026-04-13T19:20:40.578804996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:40.580626 containerd[1471]: time="2026-04-13T19:20:40.580286038Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=24595509"
Apr 13 19:20:40.581258 containerd[1471]: time="2026-04-13T19:20:40.581224089Z" level=info msg="ImageCreate event name:\"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:40.585462 containerd[1471]: time="2026-04-13T19:20:40.585405528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:40.587403 containerd[1471]: time="2026-04-13T19:20:40.586767238Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"24592010\" in 1.604116266s"
Apr 13 19:20:40.587403 containerd[1471]: time="2026-04-13T19:20:40.586820226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:01372c327c8cb0defbcdf3c4127424368b365ba0f2629d3142a37bb2ea8b93e3\""
Apr 13 19:20:40.587703 containerd[1471]: time="2026-04-13T19:20:40.587679859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\""
Apr 13 19:20:41.331563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 19:20:41.337503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:41.467091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:41.481845 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:20:41.530355 kubelet[1938]: E0413 19:20:41.530255 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:20:41.533767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:20:41.533963 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:20:41.859825 containerd[1471]: time="2026-04-13T19:20:41.859759290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.861616 containerd[1471]: time="2026-04-13T19:20:41.861568458Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=19064115"
Apr 13 19:20:41.862180 containerd[1471]: time="2026-04-13T19:20:41.861984351Z" level=info msg="ImageCreate event name:\"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.866600 containerd[1471]: time="2026-04-13T19:20:41.866539850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:41.869033 containerd[1471]: time="2026-04-13T19:20:41.868957901Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"20569814\" in 1.281187514s"
Apr 13 19:20:41.869033 containerd[1471]: time="2026-04-13T19:20:41.869004094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:f2119fbf97330b133e2bc6c7d48bd6bee01864df1dd4356e678bfd17e0811be4\""
Apr 13 19:20:41.869538 containerd[1471]: time="2026-04-13T19:20:41.869469989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\""
Apr 13 19:20:43.136184 containerd[1471]: time="2026-04-13T19:20:43.135624337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.138568 containerd[1471]: time="2026-04-13T19:20:43.138277408Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=13797917"
Apr 13 19:20:43.141027 containerd[1471]: time="2026-04-13T19:20:43.140008760Z" level=info msg="ImageCreate event name:\"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.143720 containerd[1471]: time="2026-04-13T19:20:43.143415518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:43.144945 containerd[1471]: time="2026-04-13T19:20:43.144779704Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"15303634\" in 1.275270466s"
Apr 13 19:20:43.144945 containerd[1471]: time="2026-04-13T19:20:43.144817895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:21d8a9777f9253ddda9144e58b529a621d0819d77dfd08a67a157fe0379efd15\""
Apr 13 19:20:43.145680 containerd[1471]: time="2026-04-13T19:20:43.145400583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\""
Apr 13 19:20:43.845374 update_engine[1461]: I20260413 19:20:43.844720 1461 update_attempter.cc:509] Updating boot flags...
Apr 13 19:20:43.895171 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1962)
Apr 13 19:20:43.967327 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1966)
Apr 13 19:20:44.445331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405957679.mount: Deactivated successfully.
Apr 13 19:20:44.676153 containerd[1471]: time="2026-04-13T19:20:44.674549611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.676153 containerd[1471]: time="2026-04-13T19:20:44.675771006Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=22329611"
Apr 13 19:20:44.678988 containerd[1471]: time="2026-04-13T19:20:44.678921633Z" level=info msg="ImageCreate event name:\"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.682594 containerd[1471]: time="2026-04-13T19:20:44.682528645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:44.683639 containerd[1471]: time="2026-04-13T19:20:44.683594145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"22328604\" in 1.538160744s"
Apr 13 19:20:44.683728 containerd[1471]: time="2026-04-13T19:20:44.683632408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:e21b1b28c776646ec72252e21482c2e273889e76006df8b76d97d9dd1ed544f6\""
Apr 13 19:20:44.684161 containerd[1471]: time="2026-04-13T19:20:44.684127256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 13 19:20:45.170621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385105185.mount: Deactivated successfully.
Apr 13 19:20:46.195159 containerd[1471]: time="2026-04-13T19:20:46.193506700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.195159 containerd[1471]: time="2026-04-13T19:20:46.195126927Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309"
Apr 13 19:20:46.197256 containerd[1471]: time="2026-04-13T19:20:46.197200802Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.202427 containerd[1471]: time="2026-04-13T19:20:46.202392297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.204319 containerd[1471]: time="2026-04-13T19:20:46.204264960Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.520096242s"
Apr 13 19:20:46.204411 containerd[1471]: time="2026-04-13T19:20:46.204321311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 13 19:20:46.205966 containerd[1471]: time="2026-04-13T19:20:46.205916346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 19:20:46.653246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968772721.mount: Deactivated successfully.
Apr 13 19:20:46.660699 containerd[1471]: time="2026-04-13T19:20:46.660633769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.662319 containerd[1471]: time="2026-04-13T19:20:46.661848008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 13 19:20:46.664482 containerd[1471]: time="2026-04-13T19:20:46.663663039Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.667787 containerd[1471]: time="2026-04-13T19:20:46.667748590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:46.668710 containerd[1471]: time="2026-04-13T19:20:46.668678033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 462.713667ms"
Apr 13 19:20:46.668843 containerd[1471]: time="2026-04-13T19:20:46.668825658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 13 19:20:46.669446 containerd[1471]: time="2026-04-13T19:20:46.669418039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 13 19:20:47.225915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45064983.mount: Deactivated successfully.
Apr 13 19:20:47.928478 containerd[1471]: time="2026-04-13T19:20:47.928412205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:47.930219 containerd[1471]: time="2026-04-13T19:20:47.930173814Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21751802"
Apr 13 19:20:47.932034 containerd[1471]: time="2026-04-13T19:20:47.931152526Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:47.934590 containerd[1471]: time="2026-04-13T19:20:47.934520133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:20:47.936052 containerd[1471]: time="2026-04-13T19:20:47.935916542Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.266369543s"
Apr 13 19:20:47.936052 containerd[1471]: time="2026-04-13T19:20:47.935959469Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 13 19:20:50.384887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:50.398962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:50.436779 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-7.scope)...
Apr 13 19:20:50.436813 systemd[1]: Reloading...
Apr 13 19:20:50.569148 zram_generator::config[2172]: No configuration found.
Apr 13 19:20:50.663502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:20:50.735605 systemd[1]: Reloading finished in 298 ms.
Apr 13 19:20:50.791556 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 19:20:50.791831 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 19:20:50.792292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:50.798639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:20:50.923989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:20:50.935759 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:20:50.988260 kubelet[2215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:20:51.363206 kubelet[2215]: I0413 19:20:51.363032 2215 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 19:20:51.363206 kubelet[2215]: I0413 19:20:51.363126 2215 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:20:51.363206 kubelet[2215]: I0413 19:20:51.363159 2215 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 19:20:51.363206 kubelet[2215]: I0413 19:20:51.363166 2215 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 19:20:51.363597 kubelet[2215]: I0413 19:20:51.363543 2215 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 19:20:51.375980 kubelet[2215]: I0413 19:20:51.375517 2215 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:20:51.375980 kubelet[2215]: E0413 19:20:51.375611 2215 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.7.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.7.160:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 19:20:51.380041 kubelet[2215]: E0413 19:20:51.379954 2215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:20:51.380186 kubelet[2215]: I0413 19:20:51.380056 2215 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:20:51.383230 kubelet[2215]: I0413 19:20:51.383182 2215 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 19:20:51.383505 kubelet[2215]: I0413 19:20:51.383448 2215 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:20:51.383674 kubelet[2215]: I0413 19:20:51.383486 2215 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-c-b986c49433","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 19:20:51.383674 kubelet[2215]: I0413 19:20:51.383675 2215 topology_manager.go:143] "Creating topology manager with none policy"
Apr 13 19:20:51.383890 kubelet[2215]: I0413 19:20:51.383683 2215 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 19:20:51.383890 kubelet[2215]: I0413 19:20:51.383799 2215 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 19:20:51.387503 kubelet[2215]: I0413 19:20:51.387442 2215 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 19:20:51.387748 kubelet[2215]: I0413 19:20:51.387728 2215 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 19:20:51.387748 kubelet[2215]: I0413 19:20:51.387750 2215 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:20:51.387817 kubelet[2215]: I0413 19:20:51.387769 2215 kubelet.go:394] "Adding apiserver pod source"
Apr 13 19:20:51.387817 kubelet[2215]: I0413 19:20:51.387781 2215 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:20:51.392229 kubelet[2215]: I0413 19:20:51.392145 2215 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:20:51.393474 kubelet[2215]: I0413 19:20:51.393425 2215 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:20:51.393553 kubelet[2215]: I0413 19:20:51.393506 2215 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 19:20:51.393577 kubelet[2215]: W0413 19:20:51.393561 2215 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 19:20:51.397166 kubelet[2215]: I0413 19:20:51.396098 2215 server.go:1257] "Started kubelet"
Apr 13 19:20:51.398102 kubelet[2215]: I0413 19:20:51.398051 2215 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:20:51.399047 kubelet[2215]: I0413 19:20:51.399015 2215 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 19:20:51.402159 kubelet[2215]: I0413 19:20:51.401326 2215 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:20:51.402159 kubelet[2215]: I0413 19:20:51.401422 2215 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 19:20:51.402159 kubelet[2215]: I0413 19:20:51.401735 2215 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:20:51.403217 kubelet[2215]: E0413 19:20:51.401893 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.7.160:6443/api/v1/namespaces/default/events\": dial tcp 178.105.7.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-c-b986c49433.18a600dcce2c78dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-c-b986c49433,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-c-b986c49433,},FirstTimestamp:2026-04-13 19:20:51.396065501 +0000 UTC m=+0.452482853,LastTimestamp:2026-04-13 19:20:51.396065501 +0000 UTC m=+0.452482853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-c-b986c49433,}"
Apr 13 19:20:51.406152 kubelet[2215]: I0413 19:20:51.405747 2215 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 13 19:20:51.407811 kubelet[2215]: I0413 19:20:51.406562 2215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:20:51.408489 kubelet[2215]: I0413 19:20:51.408447 2215 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 13 19:20:51.409061 kubelet[2215]: E0413 19:20:51.409031 2215 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b986c49433\" not found"
Apr 13 19:20:51.409779 kubelet[2215]: I0413 19:20:51.409753 2215 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 19:20:51.409934 kubelet[2215]: I0413 19:20:51.409923 2215 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 19:20:51.418324 kubelet[2215]: I0413 19:20:51.418281 2215 factory.go:223] Registration of the containerd container factory successfully
Apr 13 19:20:51.418324 kubelet[2215]: I0413 19:20:51.418308 2215 factory.go:223] Registration of the systemd container factory successfully
Apr 13 19:20:51.418506 kubelet[2215]: I0413 19:20:51.418423 2215 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 19:20:51.422586 kubelet[2215]: E0413 19:20:51.422526 2215 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b986c49433?timeout=10s\": dial tcp 178.105.7.160:6443: connect: connection refused" interval="200ms"
Apr 13 19:20:51.445548 kubelet[2215]: I0413 19:20:51.445463 2215 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 19:20:51.449016 kubelet[2215]: I0413 19:20:51.448637 2215 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 19:20:51.449016 kubelet[2215]: I0413 19:20:51.448666 2215 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 13 19:20:51.449016 kubelet[2215]: I0413 19:20:51.448688 2215 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 13 19:20:51.449016 kubelet[2215]: E0413 19:20:51.448726 2215 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 19:20:51.455041 kubelet[2215]: I0413 19:20:51.455011 2215 cpu_manager.go:225] "Starting" policy="none"
Apr 13 19:20:51.455041 kubelet[2215]: I0413 19:20:51.455029 2215 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 13 19:20:51.455041 kubelet[2215]: I0413 19:20:51.455047 2215 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 13 19:20:51.460063 kubelet[2215]: I0413 19:20:51.460029 2215 policy_none.go:50] "Start"
Apr 13 19:20:51.460063 kubelet[2215]: I0413 19:20:51.460075 2215 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 19:20:51.460271 kubelet[2215]: I0413 19:20:51.460089 2215 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 19:20:51.462195 kubelet[2215]: I0413 19:20:51.462169 2215 policy_none.go:44] "Start"
Apr 13 19:20:51.469955 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 19:20:51.489777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 19:20:51.493217 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 13 19:20:51.503191 kubelet[2215]: E0413 19:20:51.502457 2215 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:20:51.503191 kubelet[2215]: I0413 19:20:51.502833 2215 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:20:51.503191 kubelet[2215]: I0413 19:20:51.502857 2215 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:20:51.504242 kubelet[2215]: I0413 19:20:51.504104 2215 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:20:51.507649 kubelet[2215]: E0413 19:20:51.507520 2215 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:20:51.507801 kubelet[2215]: E0413 19:20:51.507786 2215 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-c-b986c49433\" not found" Apr 13 19:20:51.565791 systemd[1]: Created slice kubepods-burstable-podcfb84cb46fbb678a116a7d19bf8402bc.slice - libcontainer container kubepods-burstable-podcfb84cb46fbb678a116a7d19bf8402bc.slice. Apr 13 19:20:51.575976 kubelet[2215]: E0413 19:20:51.575864 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.583181 systemd[1]: Created slice kubepods-burstable-podfdcd35f1e25a86a0d7849faf2b4ee5fc.slice - libcontainer container kubepods-burstable-podfdcd35f1e25a86a0d7849faf2b4ee5fc.slice. 
Apr 13 19:20:51.594694 kubelet[2215]: E0413 19:20:51.594294 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.598796 systemd[1]: Created slice kubepods-burstable-podb3814b6606efd10a2fef7f55926c0b52.slice - libcontainer container kubepods-burstable-podb3814b6606efd10a2fef7f55926c0b52.slice. Apr 13 19:20:51.601186 kubelet[2215]: E0413 19:20:51.601045 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.609831 kubelet[2215]: I0413 19:20:51.609413 2215 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.610457 kubelet[2215]: E0413 19:20:51.610396 2215 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.7.160:6443/api/v1/nodes\": dial tcp 178.105.7.160:6443: connect: connection refused" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.610457 kubelet[2215]: I0413 19:20:51.610403 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.610709 kubelet[2215]: I0413 19:20:51.610685 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 
19:20:51.610812 kubelet[2215]: I0413 19:20:51.610795 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.610909 kubelet[2215]: I0413 19:20:51.610894 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fdcd35f1e25a86a0d7849faf2b4ee5fc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-c-b986c49433\" (UID: \"fdcd35f1e25a86a0d7849faf2b4ee5fc\") " pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.611005 kubelet[2215]: I0413 19:20:51.610985 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.611074 kubelet[2215]: I0413 19:20:51.611031 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.611231 kubelet[2215]: I0413 19:20:51.611078 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.611231 kubelet[2215]: I0413 19:20:51.611139 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.611231 kubelet[2215]: I0413 19:20:51.611178 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.625182 kubelet[2215]: E0413 19:20:51.623196 2215 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b986c49433?timeout=10s\": dial tcp 178.105.7.160:6443: connect: connection refused" interval="400ms" Apr 13 19:20:51.813164 kubelet[2215]: I0413 19:20:51.813102 2215 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.813627 kubelet[2215]: E0413 19:20:51.813573 2215 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.7.160:6443/api/v1/nodes\": dial tcp 178.105.7.160:6443: connect: connection refused" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:51.881870 containerd[1471]: time="2026-04-13T19:20:51.881264806Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-c-b986c49433,Uid:cfb84cb46fbb678a116a7d19bf8402bc,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:51.898503 containerd[1471]: time="2026-04-13T19:20:51.898431149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-c-b986c49433,Uid:fdcd35f1e25a86a0d7849faf2b4ee5fc,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:51.905062 containerd[1471]: time="2026-04-13T19:20:51.905009172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-c-b986c49433,Uid:b3814b6606efd10a2fef7f55926c0b52,Namespace:kube-system,Attempt:0,}" Apr 13 19:20:52.024803 kubelet[2215]: E0413 19:20:52.024715 2215 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b986c49433?timeout=10s\": dial tcp 178.105.7.160:6443: connect: connection refused" interval="800ms" Apr 13 19:20:52.216733 kubelet[2215]: I0413 19:20:52.216493 2215 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:52.217451 kubelet[2215]: E0413 19:20:52.217416 2215 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.7.160:6443/api/v1/nodes\": dial tcp 178.105.7.160:6443: connect: connection refused" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:52.325619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232421088.mount: Deactivated successfully. 
Apr 13 19:20:52.333385 containerd[1471]: time="2026-04-13T19:20:52.333301372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:52.335737 containerd[1471]: time="2026-04-13T19:20:52.335665741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:20:52.339975 containerd[1471]: time="2026-04-13T19:20:52.339839366Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:52.341256 containerd[1471]: time="2026-04-13T19:20:52.341073739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:52.342712 containerd[1471]: time="2026-04-13T19:20:52.342674679Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:52.343392 containerd[1471]: time="2026-04-13T19:20:52.343303752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:20:52.343485 containerd[1471]: time="2026-04-13T19:20:52.343470486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:52.347525 containerd[1471]: time="2026-04-13T19:20:52.347408539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:20:52.348614 
containerd[1471]: time="2026-04-13T19:20:52.348565068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 450.027252ms" Apr 13 19:20:52.352305 containerd[1471]: time="2026-04-13T19:20:52.352223284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.815107ms" Apr 13 19:20:52.353514 containerd[1471]: time="2026-04-13T19:20:52.353465622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 448.374398ms" Apr 13 19:20:52.495246 containerd[1471]: time="2026-04-13T19:20:52.494269977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:52.495246 containerd[1471]: time="2026-04-13T19:20:52.494357867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:52.495246 containerd[1471]: time="2026-04-13T19:20:52.494375116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.495246 containerd[1471]: time="2026-04-13T19:20:52.494513474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.501238 containerd[1471]: time="2026-04-13T19:20:52.500739532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:52.501238 containerd[1471]: time="2026-04-13T19:20:52.500804329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:52.501238 containerd[1471]: time="2026-04-13T19:20:52.500820938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.501238 containerd[1471]: time="2026-04-13T19:20:52.500906466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.501807 containerd[1471]: time="2026-04-13T19:20:52.501521212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:20:52.501807 containerd[1471]: time="2026-04-13T19:20:52.501589650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:20:52.501807 containerd[1471]: time="2026-04-13T19:20:52.501603378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.501807 containerd[1471]: time="2026-04-13T19:20:52.501721004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:20:52.530373 systemd[1]: Started cri-containerd-296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a.scope - libcontainer container 296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a. 
Apr 13 19:20:52.536632 systemd[1]: Started cri-containerd-7ec099a537317f3edb5627b6818ac044ed11d42f0b56a962ff847a3c0ace60d8.scope - libcontainer container 7ec099a537317f3edb5627b6818ac044ed11d42f0b56a962ff847a3c0ace60d8. Apr 13 19:20:52.539101 systemd[1]: Started cri-containerd-9bbf2f2153a80c4455ab5d8bac8d65f9d5189a33868728419532e0f59116dbc6.scope - libcontainer container 9bbf2f2153a80c4455ab5d8bac8d65f9d5189a33868728419532e0f59116dbc6. Apr 13 19:20:52.588986 containerd[1471]: time="2026-04-13T19:20:52.588919119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-c-b986c49433,Uid:cfb84cb46fbb678a116a7d19bf8402bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a\"" Apr 13 19:20:52.605467 containerd[1471]: time="2026-04-13T19:20:52.605239929Z" level=info msg="CreateContainer within sandbox \"296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:20:52.607385 containerd[1471]: time="2026-04-13T19:20:52.607320618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-c-b986c49433,Uid:b3814b6606efd10a2fef7f55926c0b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ec099a537317f3edb5627b6818ac044ed11d42f0b56a962ff847a3c0ace60d8\"" Apr 13 19:20:52.613457 containerd[1471]: time="2026-04-13T19:20:52.613399794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-c-b986c49433,Uid:fdcd35f1e25a86a0d7849faf2b4ee5fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bbf2f2153a80c4455ab5d8bac8d65f9d5189a33868728419532e0f59116dbc6\"" Apr 13 19:20:52.617083 containerd[1471]: time="2026-04-13T19:20:52.617018867Z" level=info msg="CreateContainer within sandbox \"7ec099a537317f3edb5627b6818ac044ed11d42f0b56a962ff847a3c0ace60d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 
13 19:20:52.620674 containerd[1471]: time="2026-04-13T19:20:52.620621852Z" level=info msg="CreateContainer within sandbox \"9bbf2f2153a80c4455ab5d8bac8d65f9d5189a33868728419532e0f59116dbc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:20:52.632240 containerd[1471]: time="2026-04-13T19:20:52.632146327Z" level=info msg="CreateContainer within sandbox \"296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443\"" Apr 13 19:20:52.633355 containerd[1471]: time="2026-04-13T19:20:52.633303217Z" level=info msg="StartContainer for \"6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443\"" Apr 13 19:20:52.648928 containerd[1471]: time="2026-04-13T19:20:52.648878729Z" level=info msg="CreateContainer within sandbox \"7ec099a537317f3edb5627b6818ac044ed11d42f0b56a962ff847a3c0ace60d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c1234c46f5843ca25269c28b896b9f834811090b6e38282e16d3e8c243a2e48\"" Apr 13 19:20:52.650095 containerd[1471]: time="2026-04-13T19:20:52.650049026Z" level=info msg="StartContainer for \"5c1234c46f5843ca25269c28b896b9f834811090b6e38282e16d3e8c243a2e48\"" Apr 13 19:20:52.651781 containerd[1471]: time="2026-04-13T19:20:52.651684705Z" level=info msg="CreateContainer within sandbox \"9bbf2f2153a80c4455ab5d8bac8d65f9d5189a33868728419532e0f59116dbc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"28de5cc73c3a90df6b634614bbf61118854237eee0c8071f5c9b4d872fdcf24b\"" Apr 13 19:20:52.653601 containerd[1471]: time="2026-04-13T19:20:52.652366248Z" level=info msg="StartContainer for \"28de5cc73c3a90df6b634614bbf61118854237eee0c8071f5c9b4d872fdcf24b\"" Apr 13 19:20:52.669660 systemd[1]: Started cri-containerd-6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443.scope - libcontainer container 
6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443. Apr 13 19:20:52.694836 systemd[1]: Started cri-containerd-5c1234c46f5843ca25269c28b896b9f834811090b6e38282e16d3e8c243a2e48.scope - libcontainer container 5c1234c46f5843ca25269c28b896b9f834811090b6e38282e16d3e8c243a2e48. Apr 13 19:20:52.706382 systemd[1]: Started cri-containerd-28de5cc73c3a90df6b634614bbf61118854237eee0c8071f5c9b4d872fdcf24b.scope - libcontainer container 28de5cc73c3a90df6b634614bbf61118854237eee0c8071f5c9b4d872fdcf24b. Apr 13 19:20:52.747581 containerd[1471]: time="2026-04-13T19:20:52.747444071Z" level=info msg="StartContainer for \"6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443\" returns successfully" Apr 13 19:20:52.752955 containerd[1471]: time="2026-04-13T19:20:52.752905139Z" level=info msg="StartContainer for \"5c1234c46f5843ca25269c28b896b9f834811090b6e38282e16d3e8c243a2e48\" returns successfully" Apr 13 19:20:52.807135 containerd[1471]: time="2026-04-13T19:20:52.807073776Z" level=info msg="StartContainer for \"28de5cc73c3a90df6b634614bbf61118854237eee0c8071f5c9b4d872fdcf24b\" returns successfully" Apr 13 19:20:52.825792 kubelet[2215]: E0413 19:20:52.825749 2215 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.7.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b986c49433?timeout=10s\": dial tcp 178.105.7.160:6443: connect: connection refused" interval="1.6s" Apr 13 19:20:53.021527 kubelet[2215]: I0413 19:20:53.020707 2215 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:53.466686 kubelet[2215]: E0413 19:20:53.466655 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:53.468277 kubelet[2215]: E0413 19:20:53.466731 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:53.472348 kubelet[2215]: E0413 19:20:53.471279 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.409762 kubelet[2215]: I0413 19:20:54.409698 2215 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.409762 kubelet[2215]: E0413 19:20:54.409748 2215 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-c-b986c49433\": node \"ci-4081-3-7-c-b986c49433\" not found" Apr 13 19:20:54.433593 kubelet[2215]: E0413 19:20:54.433536 2215 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b986c49433\" not found" Apr 13 19:20:54.476798 kubelet[2215]: E0413 19:20:54.476284 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.479630 kubelet[2215]: E0413 19:20:54.479440 2215 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b986c49433\" not found" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.534330 kubelet[2215]: E0413 19:20:54.534282 2215 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b986c49433\" not found" Apr 13 19:20:54.611082 kubelet[2215]: I0413 19:20:54.610373 2215 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.618504 kubelet[2215]: E0413 19:20:54.618464 2215 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-c-b986c49433\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.618504 kubelet[2215]: I0413 19:20:54.618498 2215 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.620869 kubelet[2215]: E0413 19:20:54.620835 2215 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-c-b986c49433\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.620869 kubelet[2215]: I0413 19:20:54.620868 2215 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.623111 kubelet[2215]: E0413 19:20:54.623074 2215 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.841317 kubelet[2215]: I0413 19:20:54.841221 2215 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:54.846110 kubelet[2215]: E0413 19:20:54.845974 2215 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:55.394481 kubelet[2215]: I0413 19:20:55.394414 2215 apiserver.go:52] "Watching apiserver" Apr 13 19:20:55.410040 kubelet[2215]: I0413 19:20:55.409981 2215 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:20:55.478148 kubelet[2215]: I0413 19:20:55.475353 2215 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:56.894821 systemd[1]: Reloading requested from client PID 2497 ('systemctl') (unit session-7.scope)... Apr 13 19:20:56.895414 systemd[1]: Reloading... Apr 13 19:20:57.006169 zram_generator::config[2540]: No configuration found. Apr 13 19:20:57.110517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:20:57.199466 systemd[1]: Reloading finished in 303 ms. Apr 13 19:20:57.245186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:57.264188 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:20:57.264688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:57.279842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:20:57.416941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:20:57.434631 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:20:57.492399 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:20:57.499557 kubelet[2582]: I0413 19:20:57.499499 2582 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 19:20:57.499705 kubelet[2582]: I0413 19:20:57.499696 2582 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:20:57.499769 kubelet[2582]: I0413 19:20:57.499761 2582 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:20:57.499831 kubelet[2582]: I0413 19:20:57.499818 2582 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:20:57.500288 kubelet[2582]: I0413 19:20:57.500270 2582 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 19:20:57.502017 kubelet[2582]: I0413 19:20:57.501843 2582 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:20:57.504471 kubelet[2582]: I0413 19:20:57.504445 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:20:57.509165 kubelet[2582]: E0413 19:20:57.508585 2582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:20:57.509165 kubelet[2582]: I0413 19:20:57.508646 2582 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:20:57.515657 kubelet[2582]: I0413 19:20:57.515625 2582 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:20:57.516803 kubelet[2582]: I0413 19:20:57.516759 2582 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:20:57.516975 kubelet[2582]: I0413 19:20:57.516800 2582 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-c-b986c49433","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:20:57.517061 kubelet[2582]: I0413 19:20:57.516982 2582 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 
19:20:57.517061 kubelet[2582]: I0413 19:20:57.516992 2582 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 19:20:57.517061 kubelet[2582]: I0413 19:20:57.517017 2582 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:20:57.517355 kubelet[2582]: I0413 19:20:57.517341 2582 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 19:20:57.517530 kubelet[2582]: I0413 19:20:57.517520 2582 kubelet.go:482] "Attempting to sync node with API server" Apr 13 19:20:57.517566 kubelet[2582]: I0413 19:20:57.517540 2582 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:20:57.517566 kubelet[2582]: I0413 19:20:57.517558 2582 kubelet.go:394] "Adding apiserver pod source" Apr 13 19:20:57.517566 kubelet[2582]: I0413 19:20:57.517568 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:20:57.519591 kubelet[2582]: I0413 19:20:57.519546 2582 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:20:57.520487 kubelet[2582]: I0413 19:20:57.520450 2582 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:20:57.520548 kubelet[2582]: I0413 19:20:57.520491 2582 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:20:57.526160 kubelet[2582]: I0413 19:20:57.524739 2582 server.go:1257] "Started kubelet" Apr 13 19:20:57.528797 kubelet[2582]: I0413 19:20:57.528386 2582 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:20:57.529013 kubelet[2582]: I0413 19:20:57.528987 2582 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:20:57.529509 kubelet[2582]: I0413 19:20:57.529485 
2582 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:20:57.529756 kubelet[2582]: I0413 19:20:57.529703 2582 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:20:57.530591 kubelet[2582]: I0413 19:20:57.530544 2582 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 19:20:57.531910 kubelet[2582]: I0413 19:20:57.531876 2582 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:20:57.542516 kubelet[2582]: I0413 19:20:57.542486 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:20:57.543780 kubelet[2582]: I0413 19:20:57.543757 2582 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 19:20:57.544162 kubelet[2582]: E0413 19:20:57.544129 2582 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b986c49433\" not found" Apr 13 19:20:57.545184 kubelet[2582]: I0413 19:20:57.544861 2582 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:20:57.558372 kubelet[2582]: I0413 19:20:57.548035 2582 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:20:57.562284 kubelet[2582]: I0413 19:20:57.562221 2582 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:20:57.565059 kubelet[2582]: I0413 19:20:57.565017 2582 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:20:57.565059 kubelet[2582]: I0413 19:20:57.565056 2582 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 19:20:57.565253 kubelet[2582]: I0413 19:20:57.565082 2582 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 19:20:57.565253 kubelet[2582]: E0413 19:20:57.565153 2582 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:20:57.581629 kubelet[2582]: I0413 19:20:57.581495 2582 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:20:57.581629 kubelet[2582]: I0413 19:20:57.581610 2582 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:20:57.582809 kubelet[2582]: I0413 19:20:57.581806 2582 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:20:57.634443 kubelet[2582]: I0413 19:20:57.634416 2582 cpu_manager.go:225] "Starting" policy="none" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634606 2582 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634633 2582 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634786 2582 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634800 2582 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634821 2582 policy_none.go:50] "Start" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634830 2582 memory_manager.go:187] "Starting memorymanager" 
policy="None" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634840 2582 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634953 2582 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:20:57.635053 kubelet[2582]: I0413 19:20:57.634969 2582 policy_none.go:44] "Start" Apr 13 19:20:57.641697 kubelet[2582]: E0413 19:20:57.641647 2582 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:20:57.643595 kubelet[2582]: I0413 19:20:57.643549 2582 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 19:20:57.643595 kubelet[2582]: I0413 19:20:57.643566 2582 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:20:57.644316 kubelet[2582]: I0413 19:20:57.643818 2582 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 19:20:57.649225 kubelet[2582]: E0413 19:20:57.648934 2582 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:20:57.666789 kubelet[2582]: I0413 19:20:57.666654 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.667349 kubelet[2582]: I0413 19:20:57.667153 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.667483 kubelet[2582]: I0413 19:20:57.667460 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.677238 kubelet[2582]: E0413 19:20:57.677194 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-c-b986c49433\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.749241 kubelet[2582]: I0413 19:20:57.748944 2582 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.765847 kubelet[2582]: I0413 19:20:57.765810 2582 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.766082 kubelet[2582]: I0413 19:20:57.766040 2582 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.859489 kubelet[2582]: I0413 19:20:57.859395 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.859489 kubelet[2582]: I0413 19:20:57.859472 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862049 kubelet[2582]: I0413 19:20:57.859521 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fdcd35f1e25a86a0d7849faf2b4ee5fc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-c-b986c49433\" (UID: \"fdcd35f1e25a86a0d7849faf2b4ee5fc\") " pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862049 kubelet[2582]: I0413 19:20:57.859554 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862049 kubelet[2582]: I0413 19:20:57.859626 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862049 kubelet[2582]: I0413 19:20:57.859665 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862049 kubelet[2582]: I0413 19:20:57.859699 
2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfb84cb46fbb678a116a7d19bf8402bc-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-c-b986c49433\" (UID: \"cfb84cb46fbb678a116a7d19bf8402bc\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862284 kubelet[2582]: I0413 19:20:57.859728 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.862284 kubelet[2582]: I0413 19:20:57.859767 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3814b6606efd10a2fef7f55926c0b52-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-c-b986c49433\" (UID: \"b3814b6606efd10a2fef7f55926c0b52\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" Apr 13 19:20:57.892525 sudo[2617]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 19:20:57.892887 sudo[2617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 19:20:58.378216 sudo[2617]: pam_unix(sudo:session): session closed for user root Apr 13 19:20:58.518904 kubelet[2582]: I0413 19:20:58.518869 2582 apiserver.go:52] "Watching apiserver" Apr 13 19:20:58.559359 kubelet[2582]: I0413 19:20:58.559319 2582 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:20:58.626143 kubelet[2582]: I0413 19:20:58.625509 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 
19:20:58.645225 kubelet[2582]: E0413 19:20:58.643373 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-c-b986c49433\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" Apr 13 19:20:58.685486 kubelet[2582]: I0413 19:20:58.685317 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b986c49433" podStartSLOduration=3.6853019160000002 podStartE2EDuration="3.685301916s" podCreationTimestamp="2026-04-13 19:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:58.663922203 +0000 UTC m=+1.223370147" watchObservedRunningTime="2026-04-13 19:20:58.685301916 +0000 UTC m=+1.244749820" Apr 13 19:20:58.703379 kubelet[2582]: I0413 19:20:58.703170 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b986c49433" podStartSLOduration=1.703150819 podStartE2EDuration="1.703150819s" podCreationTimestamp="2026-04-13 19:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:58.686555513 +0000 UTC m=+1.246003457" watchObservedRunningTime="2026-04-13 19:20:58.703150819 +0000 UTC m=+1.262598763" Apr 13 19:20:58.719841 kubelet[2582]: I0413 19:20:58.719060 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b986c49433" podStartSLOduration=1.719034065 podStartE2EDuration="1.719034065s" podCreationTimestamp="2026-04-13 19:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:20:58.704054367 +0000 UTC m=+1.263502351" watchObservedRunningTime="2026-04-13 19:20:58.719034065 +0000 UTC m=+1.278482049" Apr 13 19:21:00.913987 
sudo[1718]: pam_unix(sudo:session): session closed for user root Apr 13 19:21:00.933269 sshd[1715]: pam_unix(sshd:session): session closed for user core Apr 13 19:21:00.938279 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:21:00.940944 systemd[1]: sshd@6-178.105.7.160:22-50.85.169.122:52120.service: Deactivated successfully. Apr 13 19:21:00.943084 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:21:00.944188 systemd[1]: session-7.scope: Consumed 5.690s CPU time, 153.1M memory peak, 0B memory swap peak. Apr 13 19:21:00.944941 systemd-logind[1460]: Removed session 7. Apr 13 19:21:03.211011 kubelet[2582]: I0413 19:21:03.210847 2582 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:21:03.211781 containerd[1471]: time="2026-04-13T19:21:03.211622220Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:21:03.212656 kubelet[2582]: I0413 19:21:03.212068 2582 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:21:04.226466 kubelet[2582]: E0413 19:21:04.226030 2582 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-b2whq\" is forbidden: User \"system:node:ci-4081-3-7-c-b986c49433\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-7-c-b986c49433' and this object" podUID="1691d010-bf45-4289-85b7-ceddcb38bb30" pod="kube-system/kube-proxy-b2whq" Apr 13 19:21:04.231899 systemd[1]: Created slice kubepods-besteffort-pod1691d010_bf45_4289_85b7_ceddcb38bb30.slice - libcontainer container kubepods-besteffort-pod1691d010_bf45_4289_85b7_ceddcb38bb30.slice. Apr 13 19:21:04.249637 systemd[1]: Created slice kubepods-burstable-pod6c879480_5545_40a8_91f6_edb0b44fd338.slice - libcontainer container kubepods-burstable-pod6c879480_5545_40a8_91f6_edb0b44fd338.slice. 
Apr 13 19:21:04.401179 kubelet[2582]: I0413 19:21:04.399844 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-xtables-lock\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401179 kubelet[2582]: I0413 19:21:04.399891 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-net\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401179 kubelet[2582]: I0413 19:21:04.399914 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-config-path\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401179 kubelet[2582]: I0413 19:21:04.399931 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-kernel\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401179 kubelet[2582]: I0413 19:21:04.399947 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-hubble-tls\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.399963 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknhx\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-kube-api-access-bknhx\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.399980 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-etc-cni-netd\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.399993 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-lib-modules\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.400007 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c879480-5545-40a8-91f6-edb0b44fd338-clustermesh-secrets\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.400023 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1691d010-bf45-4289-85b7-ceddcb38bb30-kube-proxy\") pod \"kube-proxy-b2whq\" (UID: \"1691d010-bf45-4289-85b7-ceddcb38bb30\") " pod="kube-system/kube-proxy-b2whq" Apr 13 19:21:04.401412 kubelet[2582]: I0413 19:21:04.400039 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1691d010-bf45-4289-85b7-ceddcb38bb30-xtables-lock\") pod \"kube-proxy-b2whq\" (UID: \"1691d010-bf45-4289-85b7-ceddcb38bb30\") " pod="kube-system/kube-proxy-b2whq" Apr 13 19:21:04.401552 kubelet[2582]: I0413 19:21:04.400056 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1691d010-bf45-4289-85b7-ceddcb38bb30-lib-modules\") pod \"kube-proxy-b2whq\" (UID: \"1691d010-bf45-4289-85b7-ceddcb38bb30\") " pod="kube-system/kube-proxy-b2whq" Apr 13 19:21:04.401552 kubelet[2582]: I0413 19:21:04.400071 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-run\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401552 kubelet[2582]: I0413 19:21:04.400084 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-hostproc\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401552 kubelet[2582]: I0413 19:21:04.400097 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-cgroup\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.401552 kubelet[2582]: I0413 19:21:04.400126 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjx85\" (UniqueName: \"kubernetes.io/projected/1691d010-bf45-4289-85b7-ceddcb38bb30-kube-api-access-qjx85\") pod \"kube-proxy-b2whq\" (UID: 
\"1691d010-bf45-4289-85b7-ceddcb38bb30\") " pod="kube-system/kube-proxy-b2whq" Apr 13 19:21:04.402036 kubelet[2582]: I0413 19:21:04.401988 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-bpf-maps\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.402105 kubelet[2582]: I0413 19:21:04.402078 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cni-path\") pod \"cilium-z7ghq\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") " pod="kube-system/cilium-z7ghq" Apr 13 19:21:04.411543 systemd[1]: Created slice kubepods-besteffort-pod8529443b_668f_437a_ad19_29580eb6b962.slice - libcontainer container kubepods-besteffort-pod8529443b_668f_437a_ad19_29580eb6b962.slice. Apr 13 19:21:04.545264 containerd[1471]: time="2026-04-13T19:21:04.545071264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2whq,Uid:1691d010-bf45-4289-85b7-ceddcb38bb30,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:04.561538 containerd[1471]: time="2026-04-13T19:21:04.560786593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7ghq,Uid:6c879480-5545-40a8-91f6-edb0b44fd338,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:04.573947 containerd[1471]: time="2026-04-13T19:21:04.572859702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:04.573947 containerd[1471]: time="2026-04-13T19:21:04.572918714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:04.573947 containerd[1471]: time="2026-04-13T19:21:04.572933998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:04.573947 containerd[1471]: time="2026-04-13T19:21:04.573038420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:04.597353 systemd[1]: Started cri-containerd-611f43a742edc00c91adf4db38b39c3957f8e83d33667a1718eeb3d740d52837.scope - libcontainer container 611f43a742edc00c91adf4db38b39c3957f8e83d33667a1718eeb3d740d52837. Apr 13 19:21:04.601137 containerd[1471]: time="2026-04-13T19:21:04.600545477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:04.601137 containerd[1471]: time="2026-04-13T19:21:04.600897713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:04.601137 containerd[1471]: time="2026-04-13T19:21:04.600918517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:04.601137 containerd[1471]: time="2026-04-13T19:21:04.601018819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:04.604317 kubelet[2582]: I0413 19:21:04.603503 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8529443b-668f-437a-ad19-29580eb6b962-cilium-config-path\") pod \"cilium-operator-78cf5644cb-4vdfr\" (UID: \"8529443b-668f-437a-ad19-29580eb6b962\") " pod="kube-system/cilium-operator-78cf5644cb-4vdfr" Apr 13 19:21:04.604317 kubelet[2582]: I0413 19:21:04.603549 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8t4m\" (UniqueName: \"kubernetes.io/projected/8529443b-668f-437a-ad19-29580eb6b962-kube-api-access-g8t4m\") pod \"cilium-operator-78cf5644cb-4vdfr\" (UID: \"8529443b-668f-437a-ad19-29580eb6b962\") " pod="kube-system/cilium-operator-78cf5644cb-4vdfr" Apr 13 19:21:04.626320 systemd[1]: Started cri-containerd-021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a.scope - libcontainer container 021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a. 
Apr 13 19:21:04.631536 containerd[1471]: time="2026-04-13T19:21:04.631295630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2whq,Uid:1691d010-bf45-4289-85b7-ceddcb38bb30,Namespace:kube-system,Attempt:0,} returns sandbox id \"611f43a742edc00c91adf4db38b39c3957f8e83d33667a1718eeb3d740d52837\"" Apr 13 19:21:04.640651 containerd[1471]: time="2026-04-13T19:21:04.640413905Z" level=info msg="CreateContainer within sandbox \"611f43a742edc00c91adf4db38b39c3957f8e83d33667a1718eeb3d740d52837\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:21:04.663633 containerd[1471]: time="2026-04-13T19:21:04.663430280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7ghq,Uid:6c879480-5545-40a8-91f6-edb0b44fd338,Namespace:kube-system,Attempt:0,} returns sandbox id \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\"" Apr 13 19:21:04.665184 containerd[1471]: time="2026-04-13T19:21:04.664941804Z" level=info msg="CreateContainer within sandbox \"611f43a742edc00c91adf4db38b39c3957f8e83d33667a1718eeb3d740d52837\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be8224aeb6b149159be28a53f0dcc74fa9824249dc6c3b512e4877327962532f\"" Apr 13 19:21:04.666444 containerd[1471]: time="2026-04-13T19:21:04.666036398Z" level=info msg="StartContainer for \"be8224aeb6b149159be28a53f0dcc74fa9824249dc6c3b512e4877327962532f\"" Apr 13 19:21:04.668357 containerd[1471]: time="2026-04-13T19:21:04.668069074Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:21:04.697369 systemd[1]: Started cri-containerd-be8224aeb6b149159be28a53f0dcc74fa9824249dc6c3b512e4877327962532f.scope - libcontainer container be8224aeb6b149159be28a53f0dcc74fa9824249dc6c3b512e4877327962532f. 
Apr 13 19:21:04.736051 containerd[1471]: time="2026-04-13T19:21:04.735997398Z" level=info msg="StartContainer for \"be8224aeb6b149159be28a53f0dcc74fa9824249dc6c3b512e4877327962532f\" returns successfully" Apr 13 19:21:05.021077 containerd[1471]: time="2026-04-13T19:21:05.020366980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-4vdfr,Uid:8529443b-668f-437a-ad19-29580eb6b962,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:05.046514 containerd[1471]: time="2026-04-13T19:21:05.046398656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:05.046514 containerd[1471]: time="2026-04-13T19:21:05.046477032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:05.046753 containerd[1471]: time="2026-04-13T19:21:05.046494235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:05.046753 containerd[1471]: time="2026-04-13T19:21:05.046579052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:05.074412 systemd[1]: Started cri-containerd-99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc.scope - libcontainer container 99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc. 
Apr 13 19:21:05.117217 containerd[1471]: time="2026-04-13T19:21:05.117120989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-4vdfr,Uid:8529443b-668f-437a-ad19-29580eb6b962,Namespace:kube-system,Attempt:0,} returns sandbox id \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\"" Apr 13 19:21:07.088399 kubelet[2582]: I0413 19:21:07.087674 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-b2whq" podStartSLOduration=3.087655016 podStartE2EDuration="3.087655016s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:05.659409333 +0000 UTC m=+8.218857557" watchObservedRunningTime="2026-04-13 19:21:07.087655016 +0000 UTC m=+9.647102960" Apr 13 19:21:10.734756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330895708.mount: Deactivated successfully. 
Apr 13 19:21:12.135176 containerd[1471]: time="2026-04-13T19:21:12.134311724Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:12.136584 containerd[1471]: time="2026-04-13T19:21:12.136537032Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 13 19:21:12.137074 containerd[1471]: time="2026-04-13T19:21:12.137043382Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:12.138747 containerd[1471]: time="2026-04-13T19:21:12.138705013Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.47058921s" Apr 13 19:21:12.138951 containerd[1471]: time="2026-04-13T19:21:12.138879157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 13 19:21:12.140977 containerd[1471]: time="2026-04-13T19:21:12.140885995Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 19:21:12.147814 containerd[1471]: time="2026-04-13T19:21:12.147767350Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:21:12.164022 containerd[1471]: time="2026-04-13T19:21:12.163347911Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\"" Apr 13 19:21:12.165073 containerd[1471]: time="2026-04-13T19:21:12.165018703Z" level=info msg="StartContainer for \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\"" Apr 13 19:21:12.196669 systemd[1]: run-containerd-runc-k8s.io-bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370-runc.mZM0Qq.mount: Deactivated successfully. Apr 13 19:21:12.206410 systemd[1]: Started cri-containerd-bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370.scope - libcontainer container bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370. Apr 13 19:21:12.239724 containerd[1471]: time="2026-04-13T19:21:12.239454868Z" level=info msg="StartContainer for \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\" returns successfully" Apr 13 19:21:12.258565 systemd[1]: cri-containerd-bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370.scope: Deactivated successfully. 
Apr 13 19:21:12.450949 containerd[1471]: time="2026-04-13T19:21:12.450664805Z" level=info msg="shim disconnected" id=bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370 namespace=k8s.io Apr 13 19:21:12.450949 containerd[1471]: time="2026-04-13T19:21:12.450729454Z" level=warning msg="cleaning up after shim disconnected" id=bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370 namespace=k8s.io Apr 13 19:21:12.450949 containerd[1471]: time="2026-04-13T19:21:12.450741576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:21:12.686158 containerd[1471]: time="2026-04-13T19:21:12.683420811Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:21:12.698817 containerd[1471]: time="2026-04-13T19:21:12.698762859Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\"" Apr 13 19:21:12.701227 containerd[1471]: time="2026-04-13T19:21:12.699797603Z" level=info msg="StartContainer for \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\"" Apr 13 19:21:12.735370 systemd[1]: Started cri-containerd-4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e.scope - libcontainer container 4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e. Apr 13 19:21:12.776469 containerd[1471]: time="2026-04-13T19:21:12.776407429Z" level=info msg="StartContainer for \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\" returns successfully" Apr 13 19:21:12.793407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:21:12.794076 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 19:21:12.796315 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:21:12.802692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:21:12.802908 systemd[1]: cri-containerd-4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e.scope: Deactivated successfully. Apr 13 19:21:12.828197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:21:12.839652 containerd[1471]: time="2026-04-13T19:21:12.839397607Z" level=info msg="shim disconnected" id=4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e namespace=k8s.io Apr 13 19:21:12.839652 containerd[1471]: time="2026-04-13T19:21:12.839467336Z" level=warning msg="cleaning up after shim disconnected" id=4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e namespace=k8s.io Apr 13 19:21:12.839652 containerd[1471]: time="2026-04-13T19:21:12.839478538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:21:13.163287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370-rootfs.mount: Deactivated successfully. 
Apr 13 19:21:13.684913 containerd[1471]: time="2026-04-13T19:21:13.684825125Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:21:13.721372 containerd[1471]: time="2026-04-13T19:21:13.720813585Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\"" Apr 13 19:21:13.723539 containerd[1471]: time="2026-04-13T19:21:13.723468015Z" level=info msg="StartContainer for \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\"" Apr 13 19:21:13.764367 systemd[1]: Started cri-containerd-7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e.scope - libcontainer container 7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e. Apr 13 19:21:13.810578 containerd[1471]: time="2026-04-13T19:21:13.810473795Z" level=info msg="StartContainer for \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\" returns successfully" Apr 13 19:21:13.817253 systemd[1]: cri-containerd-7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e.scope: Deactivated successfully. 
Apr 13 19:21:13.859996 containerd[1471]: time="2026-04-13T19:21:13.859698038Z" level=info msg="shim disconnected" id=7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e namespace=k8s.io Apr 13 19:21:13.859996 containerd[1471]: time="2026-04-13T19:21:13.859762687Z" level=warning msg="cleaning up after shim disconnected" id=7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e namespace=k8s.io Apr 13 19:21:13.859996 containerd[1471]: time="2026-04-13T19:21:13.859804892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:21:14.162357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e-rootfs.mount: Deactivated successfully. Apr 13 19:21:14.699145 containerd[1471]: time="2026-04-13T19:21:14.697031157Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:21:14.720760 containerd[1471]: time="2026-04-13T19:21:14.720626470Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\"" Apr 13 19:21:14.723175 containerd[1471]: time="2026-04-13T19:21:14.721841702Z" level=info msg="StartContainer for \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\"" Apr 13 19:21:14.781404 systemd[1]: Started cri-containerd-c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4.scope - libcontainer container c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4. Apr 13 19:21:14.819685 systemd[1]: cri-containerd-c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4.scope: Deactivated successfully. 
Apr 13 19:21:14.826155 containerd[1471]: time="2026-04-13T19:21:14.825027736Z" level=info msg="StartContainer for \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\" returns successfully" Apr 13 19:21:14.854676 containerd[1471]: time="2026-04-13T19:21:14.854423334Z" level=info msg="shim disconnected" id=c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4 namespace=k8s.io Apr 13 19:21:14.854676 containerd[1471]: time="2026-04-13T19:21:14.854486902Z" level=warning msg="cleaning up after shim disconnected" id=c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4 namespace=k8s.io Apr 13 19:21:14.854676 containerd[1471]: time="2026-04-13T19:21:14.854499304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:21:15.161289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4-rootfs.mount: Deactivated successfully. Apr 13 19:21:15.703257 containerd[1471]: time="2026-04-13T19:21:15.703208156Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:21:15.725565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526101755.mount: Deactivated successfully. 
Apr 13 19:21:15.732063 containerd[1471]: time="2026-04-13T19:21:15.731882128Z" level=info msg="CreateContainer within sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\"" Apr 13 19:21:15.734223 containerd[1471]: time="2026-04-13T19:21:15.733367065Z" level=info msg="StartContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\"" Apr 13 19:21:15.768544 systemd[1]: Started cri-containerd-6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c.scope - libcontainer container 6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c. Apr 13 19:21:15.809849 containerd[1471]: time="2026-04-13T19:21:15.809611978Z" level=info msg="StartContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" returns successfully" Apr 13 19:21:15.885685 kubelet[2582]: I0413 19:21:15.884855 2582 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 19:21:15.942048 systemd[1]: Created slice kubepods-burstable-pod39c681ff_fe5d_43e9_99a7_3955fa5e87d9.slice - libcontainer container kubepods-burstable-pod39c681ff_fe5d_43e9_99a7_3955fa5e87d9.slice. Apr 13 19:21:15.949855 systemd[1]: Created slice kubepods-burstable-podcb969109_6449_4e69_b6e4_8cffc9e18e9a.slice - libcontainer container kubepods-burstable-podcb969109_6449_4e69_b6e4_8cffc9e18e9a.slice. 
Apr 13 19:21:15.985832 kubelet[2582]: I0413 19:21:15.985460 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslkh\" (UniqueName: \"kubernetes.io/projected/cb969109-6449-4e69-b6e4-8cffc9e18e9a-kube-api-access-kslkh\") pod \"coredns-7d764666f9-g5j4b\" (UID: \"cb969109-6449-4e69-b6e4-8cffc9e18e9a\") " pod="kube-system/coredns-7d764666f9-g5j4b" Apr 13 19:21:15.985832 kubelet[2582]: I0413 19:21:15.985506 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39c681ff-fe5d-43e9-99a7-3955fa5e87d9-config-volume\") pod \"coredns-7d764666f9-lsf7b\" (UID: \"39c681ff-fe5d-43e9-99a7-3955fa5e87d9\") " pod="kube-system/coredns-7d764666f9-lsf7b" Apr 13 19:21:15.985832 kubelet[2582]: I0413 19:21:15.985572 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgj5\" (UniqueName: \"kubernetes.io/projected/39c681ff-fe5d-43e9-99a7-3955fa5e87d9-kube-api-access-ttgj5\") pod \"coredns-7d764666f9-lsf7b\" (UID: \"39c681ff-fe5d-43e9-99a7-3955fa5e87d9\") " pod="kube-system/coredns-7d764666f9-lsf7b" Apr 13 19:21:15.985832 kubelet[2582]: I0413 19:21:15.985648 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb969109-6449-4e69-b6e4-8cffc9e18e9a-config-volume\") pod \"coredns-7d764666f9-g5j4b\" (UID: \"cb969109-6449-4e69-b6e4-8cffc9e18e9a\") " pod="kube-system/coredns-7d764666f9-g5j4b" Apr 13 19:21:16.252496 containerd[1471]: time="2026-04-13T19:21:16.252251881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lsf7b,Uid:39c681ff-fe5d-43e9-99a7-3955fa5e87d9,Namespace:kube-system,Attempt:0,}" Apr 13 19:21:16.255657 containerd[1471]: time="2026-04-13T19:21:16.255613102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-g5j4b,Uid:cb969109-6449-4e69-b6e4-8cffc9e18e9a,Namespace:kube-system,Attempt:0,}"
Apr 13 19:21:16.606444 containerd[1471]: time="2026-04-13T19:21:16.606360536Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:16.608215 containerd[1471]: time="2026-04-13T19:21:16.608168221Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 13 19:21:16.609660 containerd[1471]: time="2026-04-13T19:21:16.609634667Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:21:16.612724 containerd[1471]: time="2026-04-13T19:21:16.612655009Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.471695844s" Apr 13 19:21:16.612815 containerd[1471]: time="2026-04-13T19:21:16.612742939Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 13 19:21:16.619801 containerd[1471]: time="2026-04-13T19:21:16.619750853Z" level=info msg="CreateContainer within sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 13 19:21:16.635586 containerd[1471]: time="2026-04-13T19:21:16.635538280Z" level=info msg="CreateContainer within sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\"" Apr 13 19:21:16.637259 containerd[1471]: time="2026-04-13T19:21:16.636297046Z" level=info msg="StartContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\"" Apr 13 19:21:16.671711 systemd[1]: Started cri-containerd-4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120.scope - libcontainer container 4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120. Apr 13 19:21:16.713545 containerd[1471]: time="2026-04-13T19:21:16.713456263Z" level=info msg="StartContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" returns successfully" Apr 13 19:21:17.115344 kubelet[2582]: I0413 19:21:17.115275 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-z7ghq" podStartSLOduration=2.089885436 podStartE2EDuration="13.115258138s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="2026-04-13 19:21:04.666156784 +0000 UTC m=+7.225604728" lastFinishedPulling="2026-04-13 19:21:15.691529486 +0000 UTC m=+18.250977430" observedRunningTime="2026-04-13 19:21:16.726341562 +0000 UTC m=+19.285789506" watchObservedRunningTime="2026-04-13 19:21:17.115258138 +0000 UTC m=+19.674706082" Apr 13 19:21:17.164745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056143737.mount: Deactivated successfully.
Apr 13 19:21:17.728151 kubelet[2582]: I0413 19:21:17.726810 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-4vdfr" podStartSLOduration=2.233120527 podStartE2EDuration="13.726792914s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="2026-04-13 19:21:05.120063105 +0000 UTC m=+7.679511049" lastFinishedPulling="2026-04-13 19:21:16.613735492 +0000 UTC m=+19.173183436" observedRunningTime="2026-04-13 19:21:17.72656973 +0000 UTC m=+20.286017714" watchObservedRunningTime="2026-04-13 19:21:17.726792914 +0000 UTC m=+20.286240858" Apr 13 19:21:20.159566 systemd-networkd[1373]: cilium_host: Link UP Apr 13 19:21:20.160044 systemd-networkd[1373]: cilium_net: Link UP Apr 13 19:21:20.161236 systemd-networkd[1373]: cilium_net: Gained carrier Apr 13 19:21:20.161942 systemd-networkd[1373]: cilium_host: Gained carrier Apr 13 19:21:20.162358 systemd-networkd[1373]: cilium_net: Gained IPv6LL Apr 13 19:21:20.165215 systemd-networkd[1373]: cilium_host: Gained IPv6LL Apr 13 19:21:20.287158 systemd-networkd[1373]: cilium_vxlan: Link UP Apr 13 19:21:20.287172 systemd-networkd[1373]: cilium_vxlan: Gained carrier Apr 13 19:21:20.584142 kernel: NET: Registered PF_ALG protocol family Apr 13 19:21:21.374613 systemd-networkd[1373]: lxc_health: Link UP Apr 13 19:21:21.389400 systemd-networkd[1373]: lxc_health: Gained carrier Apr 13 19:21:21.492776 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Apr 13 19:21:21.875964 systemd-networkd[1373]: lxcb78db63bf9ca: Link UP Apr 13 19:21:21.889196 systemd-networkd[1373]: lxcaa68a7bea7ee: Link UP Apr 13 19:21:21.893337 kernel: eth0: renamed from tmpdba46 Apr 13 19:21:21.897264 kernel: eth0: renamed from tmp1a0c9 Apr 13 19:21:21.903879 systemd-networkd[1373]: lxcb78db63bf9ca: Gained carrier Apr 13 19:21:21.904178 systemd-networkd[1373]: lxcaa68a7bea7ee: Gained carrier Apr 13 19:21:23.283331 systemd-networkd[1373]: lxc_health: Gained IPv6LL Apr 13 19:21:23.411727 systemd-networkd[1373]: lxcb78db63bf9ca: Gained IPv6LL
Apr 13 19:21:23.795516 systemd-networkd[1373]: lxcaa68a7bea7ee: Gained IPv6LL Apr 13 19:21:26.059015 containerd[1471]: time="2026-04-13T19:21:26.058346457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:26.059015 containerd[1471]: time="2026-04-13T19:21:26.058417542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:26.059015 containerd[1471]: time="2026-04-13T19:21:26.058435583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:26.059015 containerd[1471]: time="2026-04-13T19:21:26.058560953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:26.077767 containerd[1471]: time="2026-04-13T19:21:26.075373884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:21:26.077767 containerd[1471]: time="2026-04-13T19:21:26.077246019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:21:26.077767 containerd[1471]: time="2026-04-13T19:21:26.077261260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:21:26.077767 containerd[1471]: time="2026-04-13T19:21:26.077394550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:21:26.108884 systemd[1]: Started cri-containerd-1a0c9562fab72e166faa7a7440d830d831d95fd4e1a5f3d0f05d68d336c0fbcd.scope - libcontainer container 1a0c9562fab72e166faa7a7440d830d831d95fd4e1a5f3d0f05d68d336c0fbcd. Apr 13 19:21:26.134322 systemd[1]: Started cri-containerd-dba462fdc5657e12aa8a2f5f7fc99ae017ba73336c40028c2fc88d6c99d62045.scope - libcontainer container dba462fdc5657e12aa8a2f5f7fc99ae017ba73336c40028c2fc88d6c99d62045. Apr 13 19:21:26.179573 containerd[1471]: time="2026-04-13T19:21:26.179452027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-g5j4b,Uid:cb969109-6449-4e69-b6e4-8cffc9e18e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a0c9562fab72e166faa7a7440d830d831d95fd4e1a5f3d0f05d68d336c0fbcd\"" Apr 13 19:21:26.191796 containerd[1471]: time="2026-04-13T19:21:26.191558899Z" level=info msg="CreateContainer within sandbox \"1a0c9562fab72e166faa7a7440d830d831d95fd4e1a5f3d0f05d68d336c0fbcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:21:26.215836 containerd[1471]: time="2026-04-13T19:21:26.213382832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lsf7b,Uid:39c681ff-fe5d-43e9-99a7-3955fa5e87d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dba462fdc5657e12aa8a2f5f7fc99ae017ba73336c40028c2fc88d6c99d62045\"" Apr 13 19:21:26.222679 containerd[1471]: time="2026-04-13T19:21:26.222632819Z" level=info msg="CreateContainer within sandbox \"1a0c9562fab72e166faa7a7440d830d831d95fd4e1a5f3d0f05d68d336c0fbcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15231c76fe65c08a07fb26acc09e1eadadacb1ecee1a4a28b30ad5d790160d8c\"" Apr 13 19:21:26.223173 containerd[1471]: time="2026-04-13T19:21:26.223014447Z" level=info msg="CreateContainer within sandbox \"dba462fdc5657e12aa8a2f5f7fc99ae017ba73336c40028c2fc88d6c99d62045\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:21:26.224545 containerd[1471]: time="2026-04-13T19:21:26.223929833Z" level=info msg="StartContainer for \"15231c76fe65c08a07fb26acc09e1eadadacb1ecee1a4a28b30ad5d790160d8c\"" Apr 13 19:21:26.246710 containerd[1471]: time="2026-04-13T19:21:26.246650670Z" level=info msg="CreateContainer within sandbox \"dba462fdc5657e12aa8a2f5f7fc99ae017ba73336c40028c2fc88d6c99d62045\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"787b26597ef2c911f20d957d6d4a6af90b347848086d9d01818dd079bee50f5c\"" Apr 13 19:21:26.248244 containerd[1471]: time="2026-04-13T19:21:26.247464689Z" level=info msg="StartContainer for \"787b26597ef2c911f20d957d6d4a6af90b347848086d9d01818dd079bee50f5c\"" Apr 13 19:21:26.267361 systemd[1]: Started cri-containerd-15231c76fe65c08a07fb26acc09e1eadadacb1ecee1a4a28b30ad5d790160d8c.scope - libcontainer container 15231c76fe65c08a07fb26acc09e1eadadacb1ecee1a4a28b30ad5d790160d8c. Apr 13 19:21:26.290417 systemd[1]: Started cri-containerd-787b26597ef2c911f20d957d6d4a6af90b347848086d9d01818dd079bee50f5c.scope - libcontainer container 787b26597ef2c911f20d957d6d4a6af90b347848086d9d01818dd079bee50f5c.
Apr 13 19:21:26.324090 containerd[1471]: time="2026-04-13T19:21:26.323557574Z" level=info msg="StartContainer for \"15231c76fe65c08a07fb26acc09e1eadadacb1ecee1a4a28b30ad5d790160d8c\" returns successfully" Apr 13 19:21:26.344618 containerd[1471]: time="2026-04-13T19:21:26.344157499Z" level=info msg="StartContainer for \"787b26597ef2c911f20d957d6d4a6af90b347848086d9d01818dd079bee50f5c\" returns successfully" Apr 13 19:21:26.758332 kubelet[2582]: I0413 19:21:26.757971 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-g5j4b" podStartSLOduration=22.757956407 podStartE2EDuration="22.757956407s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:26.75495615 +0000 UTC m=+29.314404094" watchObservedRunningTime="2026-04-13 19:21:26.757956407 +0000 UTC m=+29.317404351" Apr 13 19:21:26.776226 kubelet[2582]: I0413 19:21:26.775475 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lsf7b" podStartSLOduration=22.775459708 podStartE2EDuration="22.775459708s" podCreationTimestamp="2026-04-13 19:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:21:26.774563084 +0000 UTC m=+29.334011108" watchObservedRunningTime="2026-04-13 19:21:26.775459708 +0000 UTC m=+29.334907652" Apr 13 19:21:30.309165 kubelet[2582]: I0413 19:21:30.308183 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:23:09.424487 systemd[1]: Started sshd@7-178.105.7.160:22-50.85.169.122:50608.service - OpenSSH per-connection server daemon (50.85.169.122:50608). 
Apr 13 19:23:09.548170 sshd[3995]: Accepted publickey for core from 50.85.169.122 port 50608 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:09.550767 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:09.557439 systemd-logind[1460]: New session 8 of user core. Apr 13 19:23:09.569604 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:23:09.765434 sshd[3995]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:09.771146 systemd[1]: sshd@7-178.105.7.160:22-50.85.169.122:50608.service: Deactivated successfully. Apr 13 19:23:09.774418 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:23:09.775554 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:23:09.777177 systemd-logind[1460]: Removed session 8. Apr 13 19:23:14.806193 systemd[1]: Started sshd@8-178.105.7.160:22-50.85.169.122:50620.service - OpenSSH per-connection server daemon (50.85.169.122:50620). Apr 13 19:23:14.933420 sshd[4009]: Accepted publickey for core from 50.85.169.122 port 50620 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:14.936744 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:14.941710 systemd-logind[1460]: New session 9 of user core. Apr 13 19:23:14.948455 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:23:15.127322 sshd[4009]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:15.133029 systemd[1]: sshd@8-178.105.7.160:22-50.85.169.122:50620.service: Deactivated successfully. Apr 13 19:23:15.137230 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:23:15.138588 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:23:15.139776 systemd-logind[1460]: Removed session 9. 
Apr 13 19:23:20.162475 systemd[1]: Started sshd@9-178.105.7.160:22-50.85.169.122:41086.service - OpenSSH per-connection server daemon (50.85.169.122:41086). Apr 13 19:23:20.294262 sshd[4023]: Accepted publickey for core from 50.85.169.122 port 41086 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:20.296794 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:20.302237 systemd-logind[1460]: New session 10 of user core. Apr 13 19:23:20.307451 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:23:20.488594 sshd[4023]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:20.494697 systemd[1]: sshd@9-178.105.7.160:22-50.85.169.122:41086.service: Deactivated successfully. Apr 13 19:23:20.498223 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:23:20.499936 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:23:20.501432 systemd-logind[1460]: Removed session 10. Apr 13 19:23:25.524405 systemd[1]: Started sshd@10-178.105.7.160:22-50.85.169.122:41090.service - OpenSSH per-connection server daemon (50.85.169.122:41090). Apr 13 19:23:25.652510 sshd[4039]: Accepted publickey for core from 50.85.169.122 port 41090 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:25.655875 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:25.664292 systemd-logind[1460]: New session 11 of user core. Apr 13 19:23:25.671387 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:23:25.852803 sshd[4039]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:25.857703 systemd[1]: sshd@10-178.105.7.160:22-50.85.169.122:41090.service: Deactivated successfully. Apr 13 19:23:25.861236 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:23:25.862582 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. 
Apr 13 19:23:25.872311 systemd-logind[1460]: Removed session 11. Apr 13 19:23:25.879898 systemd[1]: Started sshd@11-178.105.7.160:22-50.85.169.122:41096.service - OpenSSH per-connection server daemon (50.85.169.122:41096). Apr 13 19:23:26.000066 sshd[4053]: Accepted publickey for core from 50.85.169.122 port 41096 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:26.002432 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:26.008029 systemd-logind[1460]: New session 12 of user core. Apr 13 19:23:26.016452 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:23:26.244078 sshd[4053]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:26.252388 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:23:26.253505 systemd[1]: sshd@11-178.105.7.160:22-50.85.169.122:41096.service: Deactivated successfully. Apr 13 19:23:26.260016 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:23:26.269537 systemd-logind[1460]: Removed session 12. Apr 13 19:23:26.275828 systemd[1]: Started sshd@12-178.105.7.160:22-50.85.169.122:41104.service - OpenSSH per-connection server daemon (50.85.169.122:41104). Apr 13 19:23:26.407194 sshd[4064]: Accepted publickey for core from 50.85.169.122 port 41104 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:26.409413 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:26.417879 systemd-logind[1460]: New session 13 of user core. Apr 13 19:23:26.421396 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:23:26.600644 sshd[4064]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:26.609376 systemd[1]: sshd@12-178.105.7.160:22-50.85.169.122:41104.service: Deactivated successfully. Apr 13 19:23:26.612967 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 13 19:23:26.614482 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Apr 13 19:23:26.615895 systemd-logind[1460]: Removed session 13.
Apr 13 19:23:31.637706 systemd[1]: Started sshd@13-178.105.7.160:22-50.85.169.122:40488.service - OpenSSH per-connection server daemon (50.85.169.122:40488).
Apr 13 19:23:31.761990 sshd[4076]: Accepted publickey for core from 50.85.169.122 port 40488 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:31.763627 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:31.769822 systemd-logind[1460]: New session 14 of user core.
Apr 13 19:23:31.775446 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 19:23:31.957015 sshd[4076]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:31.964414 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Apr 13 19:23:31.965291 systemd[1]: sshd@13-178.105.7.160:22-50.85.169.122:40488.service: Deactivated successfully.
Apr 13 19:23:31.967527 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 19:23:31.970588 systemd-logind[1460]: Removed session 14.
Apr 13 19:23:36.992669 systemd[1]: Started sshd@14-178.105.7.160:22-50.85.169.122:40494.service - OpenSSH per-connection server daemon (50.85.169.122:40494).
Apr 13 19:23:37.120653 sshd[4091]: Accepted publickey for core from 50.85.169.122 port 40494 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:37.122886 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:37.129223 systemd-logind[1460]: New session 15 of user core.
Apr 13 19:23:37.134338 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 19:23:37.306791 sshd[4091]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:37.312650 systemd[1]: sshd@14-178.105.7.160:22-50.85.169.122:40494.service: Deactivated successfully.
Apr 13 19:23:37.315319 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 19:23:37.316919 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Apr 13 19:23:37.333702 systemd[1]: Started sshd@15-178.105.7.160:22-50.85.169.122:40496.service - OpenSSH per-connection server daemon (50.85.169.122:40496).
Apr 13 19:23:37.335622 systemd-logind[1460]: Removed session 15.
Apr 13 19:23:37.451133 sshd[4103]: Accepted publickey for core from 50.85.169.122 port 40496 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:37.454495 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:37.460937 systemd-logind[1460]: New session 16 of user core.
Apr 13 19:23:37.470468 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 19:23:37.739491 sshd[4103]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:37.745443 systemd[1]: sshd@15-178.105.7.160:22-50.85.169.122:40496.service: Deactivated successfully.
Apr 13 19:23:37.749302 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 19:23:37.750217 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Apr 13 19:23:37.751202 systemd-logind[1460]: Removed session 16.
Apr 13 19:23:37.769781 systemd[1]: Started sshd@16-178.105.7.160:22-50.85.169.122:40510.service - OpenSSH per-connection server daemon (50.85.169.122:40510).
Apr 13 19:23:37.889197 sshd[4114]: Accepted publickey for core from 50.85.169.122 port 40510 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:37.890815 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:37.896159 systemd-logind[1460]: New session 17 of user core.
Apr 13 19:23:37.900454 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 19:23:38.541813 sshd[4114]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:38.548832 systemd[1]: sshd@16-178.105.7.160:22-50.85.169.122:40510.service: Deactivated successfully.
Apr 13 19:23:38.554250 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 19:23:38.557427 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Apr 13 19:23:38.575313 systemd[1]: Started sshd@17-178.105.7.160:22-50.85.169.122:40518.service - OpenSSH per-connection server daemon (50.85.169.122:40518).
Apr 13 19:23:38.578224 systemd-logind[1460]: Removed session 17.
Apr 13 19:23:38.702842 sshd[4130]: Accepted publickey for core from 50.85.169.122 port 40518 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:38.705539 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:38.712265 systemd-logind[1460]: New session 18 of user core.
Apr 13 19:23:38.720563 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 19:23:39.048709 sshd[4130]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:39.053034 systemd[1]: sshd@17-178.105.7.160:22-50.85.169.122:40518.service: Deactivated successfully.
Apr 13 19:23:39.060590 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 19:23:39.063079 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Apr 13 19:23:39.080554 systemd[1]: Started sshd@18-178.105.7.160:22-50.85.169.122:40524.service - OpenSSH per-connection server daemon (50.85.169.122:40524).
Apr 13 19:23:39.081849 systemd-logind[1460]: Removed session 18.
Apr 13 19:23:39.201109 sshd[4141]: Accepted publickey for core from 50.85.169.122 port 40524 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:39.203961 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:39.211106 systemd-logind[1460]: New session 19 of user core.
Apr 13 19:23:39.226534 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 19:23:39.399945 sshd[4141]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:39.407191 systemd[1]: sshd@18-178.105.7.160:22-50.85.169.122:40524.service: Deactivated successfully.
Apr 13 19:23:39.410512 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 19:23:39.411820 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Apr 13 19:23:39.413030 systemd-logind[1460]: Removed session 19.
Apr 13 19:23:44.434647 systemd[1]: Started sshd@19-178.105.7.160:22-50.85.169.122:36534.service - OpenSSH per-connection server daemon (50.85.169.122:36534).
Apr 13 19:23:44.558791 sshd[4158]: Accepted publickey for core from 50.85.169.122 port 36534 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:44.561291 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:44.566956 systemd-logind[1460]: New session 20 of user core.
Apr 13 19:23:44.575488 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 19:23:44.752227 sshd[4158]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:44.758529 systemd[1]: sshd@19-178.105.7.160:22-50.85.169.122:36534.service: Deactivated successfully.
Apr 13 19:23:44.762484 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 19:23:44.763337 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
Apr 13 19:23:44.764449 systemd-logind[1460]: Removed session 20.
Apr 13 19:23:49.793665 systemd[1]: Started sshd@20-178.105.7.160:22-50.85.169.122:52576.service - OpenSSH per-connection server daemon (50.85.169.122:52576).
Apr 13 19:23:49.919163 sshd[4171]: Accepted publickey for core from 50.85.169.122 port 52576 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:49.921415 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:49.928290 systemd-logind[1460]: New session 21 of user core.
Apr 13 19:23:49.932107 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 19:23:50.115687 sshd[4171]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:50.122204 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
Apr 13 19:23:50.123295 systemd[1]: sshd@20-178.105.7.160:22-50.85.169.122:52576.service: Deactivated successfully.
Apr 13 19:23:50.126310 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 19:23:50.127729 systemd-logind[1460]: Removed session 21.
Apr 13 19:23:55.149767 systemd[1]: Started sshd@21-178.105.7.160:22-50.85.169.122:52582.service - OpenSSH per-connection server daemon (50.85.169.122:52582).
Apr 13 19:23:55.274243 sshd[4184]: Accepted publickey for core from 50.85.169.122 port 52582 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:55.276749 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:55.284457 systemd-logind[1460]: New session 22 of user core.
Apr 13 19:23:55.292473 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 19:23:55.467580 sshd[4184]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:55.472766 systemd[1]: sshd@21-178.105.7.160:22-50.85.169.122:52582.service: Deactivated successfully.
Apr 13 19:23:55.475567 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 19:23:55.476937 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
Apr 13 19:23:55.478551 systemd-logind[1460]: Removed session 22.
Apr 13 19:23:55.493531 systemd[1]: Started sshd@22-178.105.7.160:22-50.85.169.122:52586.service - OpenSSH per-connection server daemon (50.85.169.122:52586).
Apr 13 19:23:55.625813 sshd[4197]: Accepted publickey for core from 50.85.169.122 port 52586 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:55.628638 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:55.634831 systemd-logind[1460]: New session 23 of user core.
Apr 13 19:23:55.642712 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 13 19:23:57.395679 containerd[1471]: time="2026-04-13T19:23:57.395637257Z" level=info msg="StopContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" with timeout 30 (s)"
Apr 13 19:23:57.398462 containerd[1471]: time="2026-04-13T19:23:57.397978948Z" level=info msg="Stop container \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" with signal terminated"
Apr 13 19:23:57.424207 systemd[1]: cri-containerd-4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120.scope: Deactivated successfully.
Apr 13 19:23:57.430816 containerd[1471]: time="2026-04-13T19:23:57.428508284Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:23:57.446060 containerd[1471]: time="2026-04-13T19:23:57.445876538Z" level=info msg="StopContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" with timeout 2 (s)"
Apr 13 19:23:57.446588 containerd[1471]: time="2026-04-13T19:23:57.446547193Z" level=info msg="Stop container \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" with signal terminated"
Apr 13 19:23:57.455612 systemd-networkd[1373]: lxc_health: Link DOWN
Apr 13 19:23:57.455619 systemd-networkd[1373]: lxc_health: Lost carrier
Apr 13 19:23:57.464590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120-rootfs.mount: Deactivated successfully.
Apr 13 19:23:57.475880 systemd[1]: cri-containerd-6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c.scope: Deactivated successfully.
Apr 13 19:23:57.476537 systemd[1]: cri-containerd-6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c.scope: Consumed 7.703s CPU time.
Apr 13 19:23:57.482333 containerd[1471]: time="2026-04-13T19:23:57.482266361Z" level=info msg="shim disconnected" id=4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120 namespace=k8s.io
Apr 13 19:23:57.482579 containerd[1471]: time="2026-04-13T19:23:57.482560847Z" level=warning msg="cleaning up after shim disconnected" id=4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120 namespace=k8s.io
Apr 13 19:23:57.482637 containerd[1471]: time="2026-04-13T19:23:57.482625049Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:57.505870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c-rootfs.mount: Deactivated successfully.
Apr 13 19:23:57.507229 containerd[1471]: time="2026-04-13T19:23:57.507180177Z" level=info msg="StopContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" returns successfully"
Apr 13 19:23:57.509035 containerd[1471]: time="2026-04-13T19:23:57.508819612Z" level=info msg="StopPodSandbox for \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\""
Apr 13 19:23:57.509035 containerd[1471]: time="2026-04-13T19:23:57.508869854Z" level=info msg="Container to stop \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.515713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc-shm.mount: Deactivated successfully.
Apr 13 19:23:57.521305 containerd[1471]: time="2026-04-13T19:23:57.521092597Z" level=info msg="shim disconnected" id=6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c namespace=k8s.io
Apr 13 19:23:57.521305 containerd[1471]: time="2026-04-13T19:23:57.521160998Z" level=warning msg="cleaning up after shim disconnected" id=6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c namespace=k8s.io
Apr 13 19:23:57.521305 containerd[1471]: time="2026-04-13T19:23:57.521169678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:57.530872 systemd[1]: cri-containerd-99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc.scope: Deactivated successfully.
Apr 13 19:23:57.543275 containerd[1471]: time="2026-04-13T19:23:57.543219033Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:23:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:23:57.546888 containerd[1471]: time="2026-04-13T19:23:57.546831310Z" level=info msg="StopContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" returns successfully"
Apr 13 19:23:57.547547 containerd[1471]: time="2026-04-13T19:23:57.547517525Z" level=info msg="StopPodSandbox for \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\""
Apr 13 19:23:57.547622 containerd[1471]: time="2026-04-13T19:23:57.547555446Z" level=info msg="Container to stop \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.547622 containerd[1471]: time="2026-04-13T19:23:57.547568886Z" level=info msg="Container to stop \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.547622 containerd[1471]: time="2026-04-13T19:23:57.547578326Z" level=info msg="Container to stop \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.547622 containerd[1471]: time="2026-04-13T19:23:57.547588407Z" level=info msg="Container to stop \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.547622 containerd[1471]: time="2026-04-13T19:23:57.547597967Z" level=info msg="Container to stop \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:23:57.558540 systemd[1]: cri-containerd-021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a.scope: Deactivated successfully.
Apr 13 19:23:57.570347 containerd[1471]: time="2026-04-13T19:23:57.570108371Z" level=info msg="shim disconnected" id=99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc namespace=k8s.io
Apr 13 19:23:57.570347 containerd[1471]: time="2026-04-13T19:23:57.570187133Z" level=warning msg="cleaning up after shim disconnected" id=99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc namespace=k8s.io
Apr 13 19:23:57.570347 containerd[1471]: time="2026-04-13T19:23:57.570297935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:57.595606 containerd[1471]: time="2026-04-13T19:23:57.595544718Z" level=info msg="TearDown network for sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" successfully"
Apr 13 19:23:57.595606 containerd[1471]: time="2026-04-13T19:23:57.595585279Z" level=info msg="StopPodSandbox for \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" returns successfully"
Apr 13 19:23:57.604901 containerd[1471]: time="2026-04-13T19:23:57.604623634Z" level=info msg="shim disconnected" id=021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a namespace=k8s.io
Apr 13 19:23:57.604901 containerd[1471]: time="2026-04-13T19:23:57.604683515Z" level=warning msg="cleaning up after shim disconnected" id=021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a namespace=k8s.io
Apr 13 19:23:57.604901 containerd[1471]: time="2026-04-13T19:23:57.604691715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:23:57.618762 kubelet[2582]: I0413 19:23:57.618727    2582 scope.go:122] "RemoveContainer" containerID="4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120"
Apr 13 19:23:57.624498 containerd[1471]: time="2026-04-13T19:23:57.624371979Z" level=info msg="RemoveContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\""
Apr 13 19:23:57.625799 containerd[1471]: time="2026-04-13T19:23:57.625748048Z" level=info msg="TearDown network for sandbox \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" successfully"
Apr 13 19:23:57.625799 containerd[1471]: time="2026-04-13T19:23:57.625788609Z" level=info msg="StopPodSandbox for \"021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a\" returns successfully"
Apr 13 19:23:57.633439 containerd[1471]: time="2026-04-13T19:23:57.633226409Z" level=info msg="RemoveContainer for \"4be365cb85b1b46dc846cd3c7eeb392fbf61c12b4c72ac18c4c800e8f69a5120\" returns successfully"
Apr 13 19:23:57.637936 containerd[1471]: time="2026-04-13T19:23:57.637711546Z" level=info msg="StopPodSandbox for \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\""
Apr 13 19:23:57.637936 containerd[1471]: time="2026-04-13T19:23:57.637820148Z" level=info msg="TearDown network for sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" successfully"
Apr 13 19:23:57.637936 containerd[1471]: time="2026-04-13T19:23:57.637833188Z" level=info msg="StopPodSandbox for \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" returns successfully"
Apr 13 19:23:57.640079 containerd[1471]: time="2026-04-13T19:23:57.639927113Z" level=info msg="RemovePodSandbox for \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\""
Apr 13 19:23:57.640443 containerd[1471]: time="2026-04-13T19:23:57.639963634Z" level=info msg="Forcibly stopping sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\""
Apr 13 19:23:57.640784 containerd[1471]: time="2026-04-13T19:23:57.640481125Z" level=info msg="TearDown network for sandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" successfully"
Apr 13 19:23:57.647379 containerd[1471]: time="2026-04-13T19:23:57.646460934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 19:23:57.648616 containerd[1471]: time="2026-04-13T19:23:57.648561499Z" level=info msg="RemovePodSandbox \"99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc\" returns successfully"
Apr 13 19:23:57.700092 kubelet[2582]: E0413 19:23:57.700034    2582 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 19:23:57.741202 kubelet[2582]: I0413 19:23:57.740238    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-bpf-maps\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741202 kubelet[2582]: I0413 19:23:57.740310    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cni-path\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cni-path\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741202 kubelet[2582]: I0413 19:23:57.740363    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/8529443b-668f-437a-ad19-29580eb6b962-kube-api-access-g8t4m\" (UniqueName: \"kubernetes.io/projected/8529443b-668f-437a-ad19-29580eb6b962-kube-api-access-g8t4m\") pod \"8529443b-668f-437a-ad19-29580eb6b962\" (UID: \"8529443b-668f-437a-ad19-29580eb6b962\") "
Apr 13 19:23:57.741202 kubelet[2582]: I0413 19:23:57.740409    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/6c879480-5545-40a8-91f6-edb0b44fd338-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c879480-5545-40a8-91f6-edb0b44fd338-clustermesh-secrets\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741202 kubelet[2582]: I0413 19:23:57.740455    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-bpf-maps" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.741635 kubelet[2582]: I0413 19:23:57.740486    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-kube-api-access-bknhx\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-kube-api-access-bknhx\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741635 kubelet[2582]: I0413 19:23:57.740526    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-etc-cni-netd\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741635 kubelet[2582]: I0413 19:23:57.740561    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-lib-modules\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.741635 kubelet[2582]: I0413 19:23:57.740621    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-lib-modules" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.744512 kubelet[2582]: I0413 19:23:57.744478    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-kernel\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745289 kubelet[2582]: I0413 19:23:57.744703    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-hubble-tls\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745289 kubelet[2582]: I0413 19:23:57.744734    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8529443b-668f-437a-ad19-29580eb6b962-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8529443b-668f-437a-ad19-29580eb6b962-cilium-config-path\") pod \"8529443b-668f-437a-ad19-29580eb6b962\" (UID: \"8529443b-668f-437a-ad19-29580eb6b962\") "
Apr 13 19:23:57.745289 kubelet[2582]: I0413 19:23:57.744759    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-hostproc\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-hostproc\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745289 kubelet[2582]: I0413 19:23:57.744778    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-config-path\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745289 kubelet[2582]: I0413 19:23:57.744794    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-run\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744809    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-cgroup\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744826    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-xtables-lock\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744841    2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-net\") pod \"6c879480-5545-40a8-91f6-edb0b44fd338\" (UID: \"6c879480-5545-40a8-91f6-edb0b44fd338\") "
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744892    2582 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-lib-modules\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744904    2582 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-bpf-maps\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.745488 kubelet[2582]: I0413 19:23:57.744937    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-net" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.745638 kubelet[2582]: I0413 19:23:57.744963    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-kernel" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.748823 kubelet[2582]: I0413 19:23:57.748771    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-etc-cni-netd" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.748939 kubelet[2582]: I0413 19:23:57.748901    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c879480-5545-40a8-91f6-edb0b44fd338-clustermesh-secrets" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 13 19:23:57.749013 kubelet[2582]: I0413 19:23:57.748928    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cni-path" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.749049 kubelet[2582]: I0413 19:23:57.749028    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-run" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.749085 kubelet[2582]: I0413 19:23:57.749051    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-hostproc" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.750589 kubelet[2582]: I0413 19:23:57.750556    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-cgroup" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.750751 kubelet[2582]: I0413 19:23:57.750736    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-xtables-lock" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 19:23:57.750955 kubelet[2582]: I0413 19:23:57.750916    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-hubble-tls" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 19:23:57.752245 kubelet[2582]: I0413 19:23:57.751777    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-kube-api-access-bknhx" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "kube-api-access-bknhx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 19:23:57.755186 kubelet[2582]: I0413 19:23:57.754208    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-config-path" pod "6c879480-5545-40a8-91f6-edb0b44fd338" (UID: "6c879480-5545-40a8-91f6-edb0b44fd338"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 19:23:57.755934 kubelet[2582]: I0413 19:23:57.755893    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8529443b-668f-437a-ad19-29580eb6b962-cilium-config-path" pod "8529443b-668f-437a-ad19-29580eb6b962" (UID: "8529443b-668f-437a-ad19-29580eb6b962"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 19:23:57.757292 kubelet[2582]: I0413 19:23:57.757250    2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8529443b-668f-437a-ad19-29580eb6b962-kube-api-access-g8t4m" pod "8529443b-668f-437a-ad19-29580eb6b962" (UID: "8529443b-668f-437a-ad19-29580eb6b962"). InnerVolumeSpecName "kube-api-access-g8t4m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 19:23:57.846163 kubelet[2582]: I0413 19:23:57.846046    2582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bknhx\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-kube-api-access-bknhx\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.846483 kubelet[2582]: I0413 19:23:57.846449    2582 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-etc-cni-netd\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.846668 kubelet[2582]: I0413 19:23:57.846639    2582 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-kernel\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.846837 kubelet[2582]: I0413 19:23:57.846809    2582 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c879480-5545-40a8-91f6-edb0b44fd338-hubble-tls\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847027 kubelet[2582]: I0413 19:23:57.846997    2582 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8529443b-668f-437a-ad19-29580eb6b962-cilium-config-path\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847215 kubelet[2582]: I0413 19:23:57.847185    2582 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-hostproc\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847386 kubelet[2582]: I0413 19:23:57.847354    2582 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-config-path\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847570    2582 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-run\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847607    2582 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cilium-cgroup\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847630    2582 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-xtables-lock\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847651    2582 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-host-proc-sys-net\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847672    2582 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c879480-5545-40a8-91f6-edb0b44fd338-cni-path\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847815    2582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8t4m\" (UniqueName: \"kubernetes.io/projected/8529443b-668f-437a-ad19-29580eb6b962-kube-api-access-g8t4m\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:57.847895 kubelet[2582]: I0413 19:23:57.847841    2582 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c879480-5545-40a8-91f6-edb0b44fd338-clustermesh-secrets\") on node \"ci-4081-3-7-c-b986c49433\" DevicePath \"\""
Apr 13 19:23:58.197395 kubelet[2582]: I0413 19:23:58.197035    2582 scope.go:122] "RemoveContainer" containerID="6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c"
Apr 13 19:23:58.203826 containerd[1471]: time="2026-04-13T19:23:58.203770852Z" level=info msg="RemoveContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\""
Apr 13 19:23:58.204770 systemd[1]: Removed slice kubepods-burstable-pod6c879480_5545_40a8_91f6_edb0b44fd338.slice - libcontainer container kubepods-burstable-pod6c879480_5545_40a8_91f6_edb0b44fd338.slice.
Apr 13 19:23:58.204890 systemd[1]: kubepods-burstable-pod6c879480_5545_40a8_91f6_edb0b44fd338.slice: Consumed 7.804s CPU time.
Apr 13 19:23:58.206905 systemd[1]: Removed slice kubepods-besteffort-pod8529443b_668f_437a_ad19_29580eb6b962.slice - libcontainer container kubepods-besteffort-pod8529443b_668f_437a_ad19_29580eb6b962.slice.
Apr 13 19:23:58.211375 containerd[1471]: time="2026-04-13T19:23:58.208588716Z" level=info msg="RemoveContainer for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" returns successfully" Apr 13 19:23:58.211505 kubelet[2582]: I0413 19:23:58.209523 2582 scope.go:122] "RemoveContainer" containerID="c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4" Apr 13 19:23:58.213907 containerd[1471]: time="2026-04-13T19:23:58.213862349Z" level=info msg="RemoveContainer for \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\"" Apr 13 19:23:58.218955 containerd[1471]: time="2026-04-13T19:23:58.218907658Z" level=info msg="RemoveContainer for \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\" returns successfully" Apr 13 19:23:58.220038 kubelet[2582]: I0413 19:23:58.219984 2582 scope.go:122] "RemoveContainer" containerID="7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e" Apr 13 19:23:58.221310 containerd[1471]: time="2026-04-13T19:23:58.221263349Z" level=info msg="RemoveContainer for \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\"" Apr 13 19:23:58.233728 containerd[1471]: time="2026-04-13T19:23:58.231929418Z" level=info msg="RemoveContainer for \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\" returns successfully" Apr 13 19:23:58.234618 kubelet[2582]: I0413 19:23:58.234571 2582 scope.go:122] "RemoveContainer" containerID="4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e" Apr 13 19:23:58.243517 containerd[1471]: time="2026-04-13T19:23:58.243075699Z" level=info msg="RemoveContainer for \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\"" Apr 13 19:23:58.248180 containerd[1471]: time="2026-04-13T19:23:58.247995485Z" level=info msg="RemoveContainer for \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\" returns successfully" Apr 13 19:23:58.250696 kubelet[2582]: I0413 19:23:58.249126 2582 scope.go:122] 
"RemoveContainer" containerID="bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370" Apr 13 19:23:58.255536 containerd[1471]: time="2026-04-13T19:23:58.255493206Z" level=info msg="RemoveContainer for \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\"" Apr 13 19:23:58.261742 containerd[1471]: time="2026-04-13T19:23:58.261624098Z" level=info msg="RemoveContainer for \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\" returns successfully" Apr 13 19:23:58.262575 kubelet[2582]: I0413 19:23:58.262553 2582 scope.go:122] "RemoveContainer" containerID="6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c" Apr 13 19:23:58.264141 containerd[1471]: time="2026-04-13T19:23:58.264062991Z" level=error msg="ContainerStatus for \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\": not found" Apr 13 19:23:58.264439 kubelet[2582]: E0413 19:23:58.264411 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\": not found" containerID="6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c" Apr 13 19:23:58.264625 kubelet[2582]: I0413 19:23:58.264568 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c"} err="failed to get container status \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c6570dfb7ab4e95c18928ee9019499f5b5db17175570ed0f9aa6ebd7d736d8c\": not found" Apr 13 19:23:58.264717 kubelet[2582]: I0413 19:23:58.264704 2582 scope.go:122] "RemoveContainer" 
containerID="c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4" Apr 13 19:23:58.265093 containerd[1471]: time="2026-04-13T19:23:58.265058372Z" level=error msg="ContainerStatus for \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\": not found" Apr 13 19:23:58.265379 kubelet[2582]: E0413 19:23:58.265330 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\": not found" containerID="c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4" Apr 13 19:23:58.265436 kubelet[2582]: I0413 19:23:58.265383 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4"} err="failed to get container status \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c60a1118f91f9e61f9a6a05b82c580aeb83d14ee185c8d4c8dc6a731edb24df4\": not found" Apr 13 19:23:58.265436 kubelet[2582]: I0413 19:23:58.265401 2582 scope.go:122] "RemoveContainer" containerID="7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e" Apr 13 19:23:58.265697 containerd[1471]: time="2026-04-13T19:23:58.265667425Z" level=error msg="ContainerStatus for \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\": not found" Apr 13 19:23:58.265905 kubelet[2582]: E0413 19:23:58.265877 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\": not found" containerID="7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e" Apr 13 19:23:58.265955 kubelet[2582]: I0413 19:23:58.265907 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e"} err="failed to get container status \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a8f6cb086624c31f2d2de632ccdfedd1df6ca1d6d4fba9ded1c750a2550495e\": not found" Apr 13 19:23:58.265955 kubelet[2582]: I0413 19:23:58.265923 2582 scope.go:122] "RemoveContainer" containerID="4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e" Apr 13 19:23:58.266289 containerd[1471]: time="2026-04-13T19:23:58.266201917Z" level=error msg="ContainerStatus for \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\": not found" Apr 13 19:23:58.266406 kubelet[2582]: E0413 19:23:58.266313 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\": not found" containerID="4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e" Apr 13 19:23:58.266444 kubelet[2582]: I0413 19:23:58.266404 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e"} err="failed to get container status \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"4e5c9ed9c5047fc08b69dd595cff7623ebf33c75edb9db7bc1cc2a2292643c4e\": not found" Apr 13 19:23:58.266444 kubelet[2582]: I0413 19:23:58.266423 2582 scope.go:122] "RemoveContainer" containerID="bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370" Apr 13 19:23:58.266765 containerd[1471]: time="2026-04-13T19:23:58.266668367Z" level=error msg="ContainerStatus for \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\": not found" Apr 13 19:23:58.266866 kubelet[2582]: E0413 19:23:58.266785 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\": not found" containerID="bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370" Apr 13 19:23:58.266866 kubelet[2582]: I0413 19:23:58.266808 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370"} err="failed to get container status \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd195e46cadc7454f00fd793ed115269be1a0f38e84c387e6f7a048f60659370\": not found" Apr 13 19:23:58.388246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99837a0e22dbc136541e57194f8e6193352f796e42b871669ad4ce995fe87fdc-rootfs.mount: Deactivated successfully. Apr 13 19:23:58.388365 systemd[1]: var-lib-kubelet-pods-8529443b\x2d668f\x2d437a\x2dad19\x2d29580eb6b962-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8t4m.mount: Deactivated successfully. 
Apr 13 19:23:58.388425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a-rootfs.mount: Deactivated successfully. Apr 13 19:23:58.388477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-021a5150376846a47a2d7da3c54f68e284be84a344eb0088c4f145abc48a183a-shm.mount: Deactivated successfully. Apr 13 19:23:58.388532 systemd[1]: var-lib-kubelet-pods-6c879480\x2d5545\x2d40a8\x2d91f6\x2dedb0b44fd338-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbknhx.mount: Deactivated successfully. Apr 13 19:23:58.388585 systemd[1]: var-lib-kubelet-pods-6c879480\x2d5545\x2d40a8\x2d91f6\x2dedb0b44fd338-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 13 19:23:58.388638 systemd[1]: var-lib-kubelet-pods-6c879480\x2d5545\x2d40a8\x2d91f6\x2dedb0b44fd338-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 13 19:23:58.568727 kubelet[2582]: E0413 19:23:58.566782 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-g5j4b" podUID="cb969109-6449-4e69-b6e4-8cffc9e18e9a" Apr 13 19:23:59.328536 sshd[4197]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:59.334387 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:23:59.335603 systemd[1]: sshd@22-178.105.7.160:22-50.85.169.122:52586.service: Deactivated successfully. Apr 13 19:23:59.337853 systemd[1]: session-23.scope: Deactivated successfully. Apr 13 19:23:59.338219 systemd[1]: session-23.scope: Consumed 1.004s CPU time. Apr 13 19:23:59.339364 systemd-logind[1460]: Removed session 23. 
Apr 13 19:23:59.358584 systemd[1]: Started sshd@23-178.105.7.160:22-50.85.169.122:52596.service - OpenSSH per-connection server daemon (50.85.169.122:52596). Apr 13 19:23:59.482030 sshd[4360]: Accepted publickey for core from 50.85.169.122 port 52596 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:23:59.483387 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:59.491237 systemd-logind[1460]: New session 24 of user core. Apr 13 19:23:59.498386 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:23:59.572013 kubelet[2582]: I0413 19:23:59.571897 2582 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6c879480-5545-40a8-91f6-edb0b44fd338" path="/var/lib/kubelet/pods/6c879480-5545-40a8-91f6-edb0b44fd338/volumes" Apr 13 19:23:59.573696 kubelet[2582]: I0413 19:23:59.573639 2582 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8529443b-668f-437a-ad19-29580eb6b962" path="/var/lib/kubelet/pods/8529443b-668f-437a-ad19-29580eb6b962/volumes" Apr 13 19:24:00.567421 kubelet[2582]: E0413 19:24:00.566237 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-g5j4b" podUID="cb969109-6449-4e69-b6e4-8cffc9e18e9a" Apr 13 19:24:00.606976 sshd[4360]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:00.613603 systemd[1]: sshd@23-178.105.7.160:22-50.85.169.122:52596.service: Deactivated successfully. Apr 13 19:24:00.620211 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:24:00.624197 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. 
Apr 13 19:24:00.653080 systemd[1]: Started sshd@24-178.105.7.160:22-50.85.169.122:35818.service - OpenSSH per-connection server daemon (50.85.169.122:35818). Apr 13 19:24:00.657186 systemd-logind[1460]: Removed session 24. Apr 13 19:24:00.669507 systemd[1]: Created slice kubepods-burstable-pod629ca121_1563_412d_ace4_afc679bd1a86.slice - libcontainer container kubepods-burstable-pod629ca121_1563_412d_ace4_afc679bd1a86.slice. Apr 13 19:24:00.770543 kubelet[2582]: I0413 19:24:00.770287 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/629ca121-1563-412d-ace4-afc679bd1a86-cilium-config-path\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.770543 kubelet[2582]: I0413 19:24:00.770369 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-cilium-run\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.770543 kubelet[2582]: I0413 19:24:00.770481 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-hostproc\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770602 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-cilium-cgroup\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770657 2582 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/629ca121-1563-412d-ace4-afc679bd1a86-clustermesh-secrets\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770727 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/629ca121-1563-412d-ace4-afc679bd1a86-cilium-ipsec-secrets\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770765 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-host-proc-sys-net\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770798 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/629ca121-1563-412d-ace4-afc679bd1a86-hubble-tls\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771391 kubelet[2582]: I0413 19:24:00.770831 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-bpf-maps\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.770862 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-xtables-lock\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.770895 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7pf\" (UniqueName: \"kubernetes.io/projected/629ca121-1563-412d-ace4-afc679bd1a86-kube-api-access-2z7pf\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.770934 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-cni-path\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.770968 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-lib-modules\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.771025 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-host-proc-sys-kernel\") pod \"cilium-nm5wd\" (UID: \"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.771751 kubelet[2582]: I0413 19:24:00.771069 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/629ca121-1563-412d-ace4-afc679bd1a86-etc-cni-netd\") pod \"cilium-nm5wd\" (UID: 
\"629ca121-1563-412d-ace4-afc679bd1a86\") " pod="kube-system/cilium-nm5wd" Apr 13 19:24:00.800000 sshd[4372]: Accepted publickey for core from 50.85.169.122 port 35818 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:24:00.801634 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:00.808431 kubelet[2582]: I0413 19:24:00.808317 2582 setters.go:546] "Node became not ready" node="ci-4081-3-7-c-b986c49433" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T19:24:00Z","lastTransitionTime":"2026-04-13T19:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 13 19:24:00.810619 systemd-logind[1460]: New session 25 of user core. Apr 13 19:24:00.815500 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 13 19:24:00.923584 sshd[4372]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:00.929893 systemd[1]: sshd@24-178.105.7.160:22-50.85.169.122:35818.service: Deactivated successfully. Apr 13 19:24:00.932831 systemd[1]: session-25.scope: Deactivated successfully. Apr 13 19:24:00.934940 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. Apr 13 19:24:00.935985 systemd-logind[1460]: Removed session 25. Apr 13 19:24:00.946605 systemd[1]: Started sshd@25-178.105.7.160:22-50.85.169.122:35828.service - OpenSSH per-connection server daemon (50.85.169.122:35828). Apr 13 19:24:00.979348 containerd[1471]: time="2026-04-13T19:24:00.978865296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nm5wd,Uid:629ca121-1563-412d-ace4-afc679bd1a86,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:01.005936 containerd[1471]: time="2026-04-13T19:24:01.005641875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:01.005936 containerd[1471]: time="2026-04-13T19:24:01.005715436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:01.005936 containerd[1471]: time="2026-04-13T19:24:01.005740437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:01.005936 containerd[1471]: time="2026-04-13T19:24:01.005841559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:01.026531 systemd[1]: Started cri-containerd-d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5.scope - libcontainer container d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5. Apr 13 19:24:01.053303 containerd[1471]: time="2026-04-13T19:24:01.052951658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nm5wd,Uid:629ca121-1563-412d-ace4-afc679bd1a86,Namespace:kube-system,Attempt:0,} returns sandbox id \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\"" Apr 13 19:24:01.061568 containerd[1471]: time="2026-04-13T19:24:01.061425521Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:24:01.075297 containerd[1471]: time="2026-04-13T19:24:01.075246700Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d\"" Apr 13 19:24:01.076737 containerd[1471]: time="2026-04-13T19:24:01.076680851Z" level=info msg="StartContainer for \"b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d\"" 
Apr 13 19:24:01.078377 sshd[4384]: Accepted publickey for core from 50.85.169.122 port 35828 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:24:01.082793 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:01.091822 systemd-logind[1460]: New session 26 of user core. Apr 13 19:24:01.097867 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 13 19:24:01.117485 systemd[1]: Started cri-containerd-b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d.scope - libcontainer container b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d. Apr 13 19:24:01.149681 containerd[1471]: time="2026-04-13T19:24:01.148958735Z" level=info msg="StartContainer for \"b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d\" returns successfully" Apr 13 19:24:01.159746 systemd[1]: cri-containerd-b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d.scope: Deactivated successfully. Apr 13 19:24:01.200475 containerd[1471]: time="2026-04-13T19:24:01.200085241Z" level=info msg="shim disconnected" id=b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d namespace=k8s.io Apr 13 19:24:01.200475 containerd[1471]: time="2026-04-13T19:24:01.200174123Z" level=warning msg="cleaning up after shim disconnected" id=b1708149655745be8c77b14dce6be63f8eaa72d7c2933e1090b9b3933638c91d namespace=k8s.io Apr 13 19:24:01.200475 containerd[1471]: time="2026-04-13T19:24:01.200183843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:02.226602 containerd[1471]: time="2026-04-13T19:24:02.226510814Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:24:02.257546 containerd[1471]: time="2026-04-13T19:24:02.257478685Z" level=info msg="CreateContainer within sandbox 
\"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084\"" Apr 13 19:24:02.260577 containerd[1471]: time="2026-04-13T19:24:02.258826354Z" level=info msg="StartContainer for \"9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084\"" Apr 13 19:24:02.295390 systemd[1]: Started cri-containerd-9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084.scope - libcontainer container 9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084. Apr 13 19:24:02.331328 containerd[1471]: time="2026-04-13T19:24:02.331276963Z" level=info msg="StartContainer for \"9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084\" returns successfully" Apr 13 19:24:02.343938 systemd[1]: cri-containerd-9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084.scope: Deactivated successfully. Apr 13 19:24:02.386216 containerd[1471]: time="2026-04-13T19:24:02.386149672Z" level=info msg="shim disconnected" id=9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084 namespace=k8s.io Apr 13 19:24:02.386508 containerd[1471]: time="2026-04-13T19:24:02.386487439Z" level=warning msg="cleaning up after shim disconnected" id=9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084 namespace=k8s.io Apr 13 19:24:02.386576 containerd[1471]: time="2026-04-13T19:24:02.386563041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:02.567736 kubelet[2582]: E0413 19:24:02.565970 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-g5j4b" podUID="cb969109-6449-4e69-b6e4-8cffc9e18e9a" Apr 13 19:24:02.706407 kubelet[2582]: E0413 19:24:02.706321 2582 kubelet.go:3130] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:24:02.879162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c999f84eae7ec31e28a0bd311cdebef13fd8b40fab2d8bd88e8cb7b1d4b3084-rootfs.mount: Deactivated successfully. Apr 13 19:24:03.228559 containerd[1471]: time="2026-04-13T19:24:03.228418683Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:24:03.247671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58019329.mount: Deactivated successfully. Apr 13 19:24:03.255711 containerd[1471]: time="2026-04-13T19:24:03.255658634Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78\"" Apr 13 19:24:03.257162 containerd[1471]: time="2026-04-13T19:24:03.256686576Z" level=info msg="StartContainer for \"472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78\"" Apr 13 19:24:03.297887 systemd[1]: Started cri-containerd-472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78.scope - libcontainer container 472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78. Apr 13 19:24:03.340277 containerd[1471]: time="2026-04-13T19:24:03.339602294Z" level=info msg="StartContainer for \"472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78\" returns successfully" Apr 13 19:24:03.343286 systemd[1]: cri-containerd-472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78.scope: Deactivated successfully. 
Apr 13 19:24:03.386020 containerd[1471]: time="2026-04-13T19:24:03.385934939Z" level=info msg="shim disconnected" id=472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78 namespace=k8s.io
Apr 13 19:24:03.386020 containerd[1471]: time="2026-04-13T19:24:03.386014101Z" level=warning msg="cleaning up after shim disconnected" id=472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78 namespace=k8s.io
Apr 13 19:24:03.387341 containerd[1471]: time="2026-04-13T19:24:03.386032341Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:03.880101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-472f4fc7d66f537bbc293df9d887f907fc6b4847d48d822ef6de70ec4a0ccd78-rootfs.mount: Deactivated successfully.
Apr 13 19:24:04.231626 containerd[1471]: time="2026-04-13T19:24:04.231412922Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 19:24:04.255861 containerd[1471]: time="2026-04-13T19:24:04.255773571Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2\""
Apr 13 19:24:04.257332 containerd[1471]: time="2026-04-13T19:24:04.256968117Z" level=info msg="StartContainer for \"575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2\""
Apr 13 19:24:04.292325 systemd[1]: Started cri-containerd-575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2.scope - libcontainer container 575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2.
Apr 13 19:24:04.319425 systemd[1]: cri-containerd-575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2.scope: Deactivated successfully.
Apr 13 19:24:04.321502 containerd[1471]: time="2026-04-13T19:24:04.319713519Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629ca121_1563_412d_ace4_afc679bd1a86.slice/cri-containerd-575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2.scope/memory.events\": no such file or directory"
Apr 13 19:24:04.325237 containerd[1471]: time="2026-04-13T19:24:04.325173998Z" level=info msg="StartContainer for \"575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2\" returns successfully"
Apr 13 19:24:04.347047 containerd[1471]: time="2026-04-13T19:24:04.346957591Z" level=info msg="shim disconnected" id=575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2 namespace=k8s.io
Apr 13 19:24:04.347047 containerd[1471]: time="2026-04-13T19:24:04.347080834Z" level=warning msg="cleaning up after shim disconnected" id=575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2 namespace=k8s.io
Apr 13 19:24:04.347356 containerd[1471]: time="2026-04-13T19:24:04.347110474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:04.567243 kubelet[2582]: E0413 19:24:04.566478 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-g5j4b" podUID="cb969109-6449-4e69-b6e4-8cffc9e18e9a"
Apr 13 19:24:04.879405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575ef1f0e6b891788665e159790396088ae886fbf68adf896b75996749573cb2-rootfs.mount: Deactivated successfully.
Apr 13 19:24:05.244227 containerd[1471]: time="2026-04-13T19:24:05.243330661Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 19:24:05.265882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008929063.mount: Deactivated successfully.
Apr 13 19:24:05.268887 containerd[1471]: time="2026-04-13T19:24:05.268018797Z" level=info msg="CreateContainer within sandbox \"d24053cf43444a850c5eb8b44fa24c58d4d93fddea3895bab23fedafbdd0c0b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"894cbb60a9b362907ebd59a2a54c662c318aefdc62fc0c707b6ae525c612d77e\""
Apr 13 19:24:05.270574 containerd[1471]: time="2026-04-13T19:24:05.270345608Z" level=info msg="StartContainer for \"894cbb60a9b362907ebd59a2a54c662c318aefdc62fc0c707b6ae525c612d77e\""
Apr 13 19:24:05.302315 systemd[1]: Started cri-containerd-894cbb60a9b362907ebd59a2a54c662c318aefdc62fc0c707b6ae525c612d77e.scope - libcontainer container 894cbb60a9b362907ebd59a2a54c662c318aefdc62fc0c707b6ae525c612d77e.
Apr 13 19:24:05.338077 containerd[1471]: time="2026-04-13T19:24:05.336096477Z" level=info msg="StartContainer for \"894cbb60a9b362907ebd59a2a54c662c318aefdc62fc0c707b6ae525c612d77e\" returns successfully"
Apr 13 19:24:05.769161 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 13 19:24:06.565518 kubelet[2582]: E0413 19:24:06.565433 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-g5j4b" podUID="cb969109-6449-4e69-b6e4-8cffc9e18e9a"
Apr 13 19:24:08.805752 systemd-networkd[1373]: lxc_health: Link UP
Apr 13 19:24:08.812553 systemd-networkd[1373]: lxc_health: Gained carrier
Apr 13 19:24:09.003354 kubelet[2582]: I0413 19:24:09.003242 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-nm5wd" podStartSLOduration=9.003220376 podStartE2EDuration="9.003220376s" podCreationTimestamp="2026-04-13 19:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:06.268216826 +0000 UTC m=+188.827664770" watchObservedRunningTime="2026-04-13 19:24:09.003220376 +0000 UTC m=+191.562668400"
Apr 13 19:24:10.771525 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Apr 13 19:24:14.123545 sshd[4384]: pam_unix(sshd:session): session closed for user core
Apr 13 19:24:14.129821 systemd[1]: sshd@25-178.105.7.160:22-50.85.169.122:35828.service: Deactivated successfully.
Apr 13 19:24:14.133684 systemd[1]: session-26.scope: Deactivated successfully.
Apr 13 19:24:14.136923 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit.
Apr 13 19:24:14.140224 systemd-logind[1460]: Removed session 26.
Apr 13 19:24:28.971967 systemd[1]: cri-containerd-6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443.scope: Deactivated successfully.
Apr 13 19:24:28.973159 systemd[1]: cri-containerd-6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443.scope: Consumed 3.445s CPU time, 16.1M memory peak, 0B memory swap peak.
Apr 13 19:24:28.998655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443-rootfs.mount: Deactivated successfully.
Apr 13 19:24:29.006525 containerd[1471]: time="2026-04-13T19:24:29.006455575Z" level=info msg="shim disconnected" id=6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443 namespace=k8s.io
Apr 13 19:24:29.006525 containerd[1471]: time="2026-04-13T19:24:29.006530213Z" level=warning msg="cleaning up after shim disconnected" id=6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443 namespace=k8s.io
Apr 13 19:24:29.007188 containerd[1471]: time="2026-04-13T19:24:29.006541853Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:24:29.015928 kubelet[2582]: E0413 19:24:29.015612 2582 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38352->10.0.0.2:2379: read: connection timed out"
Apr 13 19:24:29.314828 kubelet[2582]: I0413 19:24:29.313921 2582 scope.go:122] "RemoveContainer" containerID="6b8761461c9e7b334742cebd0b204be95eea4214f8433b17f1a8b03730224443"
Apr 13 19:24:29.320683 containerd[1471]: time="2026-04-13T19:24:29.318945325Z" level=info msg="CreateContainer within sandbox \"296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 19:24:29.336854 containerd[1471]: time="2026-04-13T19:24:29.336583429Z" level=info msg="CreateContainer within sandbox \"296cec04bb90d430016c7c13ce927c655f482fd065298d9b334551c8a161c47a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf\""
Apr 13 19:24:29.339151 containerd[1471]: time="2026-04-13T19:24:29.337341971Z" level=info msg="StartContainer for \"8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf\""
Apr 13 19:24:29.371354 systemd[1]: Started cri-containerd-8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf.scope - libcontainer container 8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf.
Apr 13 19:24:29.417447 containerd[1471]: time="2026-04-13T19:24:29.417403042Z" level=info msg="StartContainer for \"8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf\" returns successfully"
Apr 13 19:24:30.003005 systemd[1]: run-containerd-runc-k8s.io-8afed64747cd3e0e69f140348c0f46e828c0b50e31b4aabe1ccaab8f442c2eaf-runc.ZrKkNF.mount: Deactivated successfully.
Apr 13 19:24:33.108818 kubelet[2582]: E0413 19:24:33.108614 2582 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37984->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-c-b986c49433.18a6010dffed5911 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-c-b986c49433,UID:b3814b6606efd10a2fef7f55926c0b52,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-c-b986c49433,},FirstTimestamp:2026-04-13 19:24:22.684186897 +0000 UTC m=+205.243634921,LastTimestamp:2026-04-13 19:24:22.684186897 +0000 UTC m=+205.243634921,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-c-b986c49433,}"