Apr 16 00:18:12.897714 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 16 00:18:12.897739 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Apr 15 22:32:48 -00 2026
Apr 16 00:18:12.897749 kernel: KASLR enabled
Apr 16 00:18:12.897755 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 00:18:12.897761 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 16 00:18:12.897768 kernel: random: crng init done
Apr 16 00:18:12.897775 kernel: ACPI: Early table checksum verification disabled
Apr 16 00:18:12.897781 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 16 00:18:12.897787 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 16 00:18:12.897795 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897801 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897808 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897814 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897821 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897828 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897836 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897843 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897849 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:18:12.897856 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 00:18:12.897862 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 16 00:18:12.897869 kernel: NUMA: Failed to initialise from firmware
Apr 16 00:18:12.897875 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 00:18:12.897881 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 16 00:18:12.897888 kernel: Zone ranges:
Apr 16 00:18:12.897894 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 00:18:12.897902 kernel: DMA32 empty
Apr 16 00:18:12.897909 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 16 00:18:12.897915 kernel: Movable zone start for each node
Apr 16 00:18:12.897921 kernel: Early memory node ranges
Apr 16 00:18:12.897970 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 16 00:18:12.897977 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 16 00:18:12.897983 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 16 00:18:12.897990 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 16 00:18:12.897996 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 16 00:18:12.898002 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 16 00:18:12.898009 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 16 00:18:12.898015 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 00:18:12.898025 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 00:18:12.899607 kernel: psci: probing for conduit method from ACPI.
Apr 16 00:18:12.899634 kernel: psci: PSCIv1.1 detected in firmware.
Apr 16 00:18:12.899649 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 00:18:12.899656 kernel: psci: Trusted OS migration not required
Apr 16 00:18:12.899664 kernel: psci: SMC Calling Convention v1.1
Apr 16 00:18:12.899673 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 16 00:18:12.899680 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 16 00:18:12.899687 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 16 00:18:12.899695 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 00:18:12.899702 kernel: Detected PIPT I-cache on CPU0
Apr 16 00:18:12.899709 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 00:18:12.899716 kernel: CPU features: detected: Hardware dirty bit management
Apr 16 00:18:12.899724 kernel: CPU features: detected: Spectre-v4
Apr 16 00:18:12.899730 kernel: CPU features: detected: Spectre-BHB
Apr 16 00:18:12.899737 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 16 00:18:12.899746 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 16 00:18:12.899753 kernel: CPU features: detected: ARM erratum 1418040
Apr 16 00:18:12.899760 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 16 00:18:12.899767 kernel: alternatives: applying boot alternatives
Apr 16 00:18:12.899775 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=0adf63447ce845e6a0056fdc0e76e619192ad10bb115f878c5a0d78c1b8c220d
Apr 16 00:18:12.899783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 00:18:12.899790 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 00:18:12.899796 kernel: Fallback order for Node 0: 0
Apr 16 00:18:12.899803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 16 00:18:12.899810 kernel: Policy zone: Normal
Apr 16 00:18:12.899817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 00:18:12.899825 kernel: software IO TLB: area num 2.
Apr 16 00:18:12.899832 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 16 00:18:12.899840 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 16 00:18:12.899847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 00:18:12.899854 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 00:18:12.899862 kernel: rcu: RCU event tracing is enabled.
Apr 16 00:18:12.899870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 00:18:12.899876 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 00:18:12.899883 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 00:18:12.899890 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 00:18:12.899898 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 00:18:12.899905 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 00:18:12.899914 kernel: GICv3: 256 SPIs implemented
Apr 16 00:18:12.899921 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 00:18:12.899973 kernel: Root IRQ handler: gic_handle_irq
Apr 16 00:18:12.899981 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 16 00:18:12.899988 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 16 00:18:12.899995 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 16 00:18:12.900002 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 00:18:12.900009 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 16 00:18:12.900016 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 16 00:18:12.900024 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 16 00:18:12.900030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 00:18:12.901104 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 00:18:12.901113 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 16 00:18:12.901121 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 16 00:18:12.901129 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 16 00:18:12.901136 kernel: Console: colour dummy device 80x25
Apr 16 00:18:12.901144 kernel: ACPI: Core revision 20230628
Apr 16 00:18:12.901151 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 16 00:18:12.901159 kernel: pid_max: default: 32768 minimum: 301
Apr 16 00:18:12.901166 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 00:18:12.901173 kernel: landlock: Up and running.
Apr 16 00:18:12.901183 kernel: SELinux: Initializing.
Apr 16 00:18:12.901190 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:18:12.901199 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:18:12.901206 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 00:18:12.901214 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 00:18:12.901221 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 00:18:12.901229 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 00:18:12.901237 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 16 00:18:12.901244 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 16 00:18:12.901253 kernel: Remapping and enabling EFI services.
Apr 16 00:18:12.901261 kernel: smp: Bringing up secondary CPUs ...
Apr 16 00:18:12.901268 kernel: Detected PIPT I-cache on CPU1
Apr 16 00:18:12.901275 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 16 00:18:12.901284 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 16 00:18:12.901291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 00:18:12.901298 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 16 00:18:12.901306 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 00:18:12.901313 kernel: SMP: Total of 2 processors activated.
Apr 16 00:18:12.901323 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 00:18:12.901331 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 16 00:18:12.901339 kernel: CPU features: detected: Common not Private translations
Apr 16 00:18:12.901352 kernel: CPU features: detected: CRC32 instructions
Apr 16 00:18:12.901361 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 16 00:18:12.901370 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 16 00:18:12.901378 kernel: CPU features: detected: LSE atomic instructions
Apr 16 00:18:12.901385 kernel: CPU features: detected: Privileged Access Never
Apr 16 00:18:12.901393 kernel: CPU features: detected: RAS Extension Support
Apr 16 00:18:12.901413 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 16 00:18:12.901421 kernel: CPU: All CPU(s) started at EL1
Apr 16 00:18:12.901429 kernel: alternatives: applying system-wide alternatives
Apr 16 00:18:12.901436 kernel: devtmpfs: initialized
Apr 16 00:18:12.901444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 00:18:12.901452 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 00:18:12.901460 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 00:18:12.901467 kernel: SMBIOS 3.0.0 present.
Apr 16 00:18:12.901478 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 16 00:18:12.901485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 00:18:12.901494 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 00:18:12.901501 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 00:18:12.901509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 00:18:12.901517 kernel: audit: initializing netlink subsys (disabled)
Apr 16 00:18:12.901525 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Apr 16 00:18:12.901533 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 00:18:12.901540 kernel: cpuidle: using governor menu
Apr 16 00:18:12.901551 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 00:18:12.901558 kernel: ASID allocator initialised with 32768 entries
Apr 16 00:18:12.901567 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 00:18:12.901575 kernel: Serial: AMBA PL011 UART driver
Apr 16 00:18:12.901582 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 16 00:18:12.901590 kernel: Modules: 0 pages in range for non-PLT usage
Apr 16 00:18:12.901598 kernel: Modules: 509008 pages in range for PLT usage
Apr 16 00:18:12.901606 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 00:18:12.901614 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 00:18:12.901623 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 00:18:12.901630 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 00:18:12.901639 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 00:18:12.901646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 00:18:12.901657 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 00:18:12.901665 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 00:18:12.901673 kernel: ACPI: Added _OSI(Module Device)
Apr 16 00:18:12.901680 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 00:18:12.901689 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 00:18:12.901698 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 00:18:12.901706 kernel: ACPI: Interpreter enabled
Apr 16 00:18:12.901714 kernel: ACPI: Using GIC for interrupt routing
Apr 16 00:18:12.901721 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 00:18:12.901729 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 16 00:18:12.901736 kernel: printk: console [ttyAMA0] enabled
Apr 16 00:18:12.901789 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 00:18:12.901979 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 00:18:12.904217 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 00:18:12.904347 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 00:18:12.904421 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 16 00:18:12.904491 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 16 00:18:12.904502 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 16 00:18:12.904510 kernel: PCI host bridge to bus 0000:00
Apr 16 00:18:12.904592 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 16 00:18:12.904665 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 00:18:12.904726 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 16 00:18:12.904787 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 00:18:12.904874 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 16 00:18:12.904972 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 16 00:18:12.905065 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 16 00:18:12.905140 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 16 00:18:12.905225 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.905294 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 16 00:18:12.905369 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.905436 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 16 00:18:12.905511 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.905578 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 16 00:18:12.905660 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.905727 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 16 00:18:12.905803 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.905873 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 16 00:18:12.905997 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.908231 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 16 00:18:12.908355 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.908426 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 16 00:18:12.908502 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.908569 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 16 00:18:12.908644 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 16 00:18:12.908711 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 16 00:18:12.908796 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 16 00:18:12.908863 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 16 00:18:12.908969 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 16 00:18:12.910244 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 16 00:18:12.910416 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 16 00:18:12.910493 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 16 00:18:12.910578 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 16 00:18:12.910659 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 16 00:18:12.910739 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 16 00:18:12.910809 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 16 00:18:12.910883 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 16 00:18:12.910979 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 16 00:18:12.912159 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 16 00:18:12.912277 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 16 00:18:12.912347 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 16 00:18:12.912415 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 16 00:18:12.912493 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 16 00:18:12.912560 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 16 00:18:12.912628 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 16 00:18:12.912709 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 16 00:18:12.912778 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 16 00:18:12.912845 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 16 00:18:12.912918 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 16 00:18:12.913751 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 16 00:18:12.913891 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 16 00:18:12.914018 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 16 00:18:12.915223 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 16 00:18:12.915362 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 16 00:18:12.915431 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 16 00:18:12.915503 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 16 00:18:12.915571 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 16 00:18:12.915637 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 16 00:18:12.915707 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 16 00:18:12.915773 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 16 00:18:12.915847 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 16 00:18:12.915920 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 16 00:18:12.916010 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 16 00:18:12.917165 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 16 00:18:12.917255 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 16 00:18:12.917325 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 16 00:18:12.917395 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 16 00:18:12.917477 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 00:18:12.917546 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 16 00:18:12.917611 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 16 00:18:12.917683 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 00:18:12.917750 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 16 00:18:12.917816 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 16 00:18:12.917889 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 00:18:12.917984 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 16 00:18:12.918093 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 16 00:18:12.918168 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 16 00:18:12.918237 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 16 00:18:12.918306 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 16 00:18:12.918375 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 16 00:18:12.918445 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 16 00:18:12.918518 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 16 00:18:12.918588 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 16 00:18:12.918655 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 16 00:18:12.918722 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 16 00:18:12.918789 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 16 00:18:12.918876 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 16 00:18:12.918997 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 16 00:18:12.920743 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 16 00:18:12.920833 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 16 00:18:12.920904 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 16 00:18:12.921023 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 16 00:18:12.921448 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 16 00:18:12.921522 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 16 00:18:12.921594 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 16 00:18:12.921669 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 16 00:18:12.921739 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 16 00:18:12.922251 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 16 00:18:12.922331 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 16 00:18:12.922399 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 16 00:18:12.922475 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 16 00:18:12.922541 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 16 00:18:12.923212 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 16 00:18:12.923314 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 16 00:18:12.923386 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 16 00:18:12.923453 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 16 00:18:12.923522 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 16 00:18:12.923588 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 16 00:18:12.923661 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 16 00:18:12.923728 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 16 00:18:12.923798 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 16 00:18:12.923868 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 16 00:18:12.923954 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 16 00:18:12.924027 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 16 00:18:12.924134 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 16 00:18:12.924211 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 16 00:18:12.924280 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 16 00:18:12.924349 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 16 00:18:12.924417 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 00:18:12.924490 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 16 00:18:12.924556 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 00:18:12.924622 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 16 00:18:12.924697 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 16 00:18:12.924768 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 00:18:12.924836 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 16 00:18:12.924902 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 16 00:18:12.924981 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 16 00:18:12.925131 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 16 00:18:12.925205 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 16 00:18:12.925274 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 00:18:12.925340 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 16 00:18:12.925411 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 16 00:18:12.925499 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 16 00:18:12.925573 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 16 00:18:12.925641 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 00:18:12.925709 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 16 00:18:12.925775 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 16 00:18:12.925841 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 16 00:18:12.925918 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 16 00:18:12.926120 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 16 00:18:12.926286 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 00:18:12.926358 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 16 00:18:12.926424 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 00:18:12.926489 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 16 00:18:12.926564 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 16 00:18:12.926637 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 16 00:18:12.926725 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 00:18:12.926844 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 16 00:18:12.926915 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 00:18:12.927001 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 16 00:18:12.927102 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 16 00:18:12.927197 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 16 00:18:12.927268 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 16 00:18:12.927336 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 00:18:12.927402 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 16 00:18:12.927474 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 00:18:12.927539 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 16 00:18:12.927609 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 00:18:12.927673 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 16 00:18:12.927738 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 00:18:12.927803 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 16 00:18:12.927873 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 00:18:12.927981 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 16 00:18:12.928098 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 16 00:18:12.928170 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 16 00:18:12.928242 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 16 00:18:12.928301 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 16 00:18:12.928361 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 16 00:18:12.928444 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 16 00:18:12.928507 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 16 00:18:12.928574 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 16 00:18:12.928645 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 16 00:18:12.928706 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 16 00:18:12.928766 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 16 00:18:12.928835 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 16 00:18:12.928897 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 16 00:18:12.928979 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 16 00:18:12.929113 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 16 00:18:12.929179 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 16 00:18:12.929258 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 16 00:18:12.929327 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 16 00:18:12.929388 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 16 00:18:12.929448 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 16 00:18:12.929519 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 16 00:18:12.929581 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 16 00:18:12.929644 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 16 00:18:12.929711 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 16 00:18:12.929777 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 16 00:18:12.929838 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 16 00:18:12.929907 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 16 00:18:12.930010 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 16 00:18:12.930113 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 16 00:18:12.930187 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 16 00:18:12.930251 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 16 00:18:12.930319 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 16 00:18:12.930329 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 16 00:18:12.930337 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 16 00:18:12.930346 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 16 00:18:12.930354 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 16 00:18:12.930362 kernel: iommu: Default domain type: Translated
Apr 16 00:18:12.930370 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 16 00:18:12.930378 kernel: efivars: Registered efivars operations
Apr 16 00:18:12.930387 kernel: vgaarb: loaded
Apr 16 00:18:12.930396 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 16 00:18:12.930403 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 00:18:12.930411 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 00:18:12.930419 kernel: pnp: PnP ACPI init
Apr 16 00:18:12.930500 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 16 00:18:12.930512 kernel: pnp: PnP ACPI: found 1 devices
Apr 16 00:18:12.930520 kernel: NET: Registered PF_INET protocol family
Apr 16 00:18:12.930534 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 00:18:12.930546 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 00:18:12.930554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 00:18:12.930563 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 00:18:12.930571
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 00:18:12.930579 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 00:18:12.930587 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 00:18:12.930594 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 00:18:12.930603 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 00:18:12.930684 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 16 00:18:12.930698 kernel: PCI: CLS 0 bytes, default 64 Apr 16 00:18:12.930706 kernel: kvm [1]: HYP mode not available Apr 16 00:18:12.930714 kernel: Initialise system trusted keyrings Apr 16 00:18:12.930722 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 00:18:12.930730 kernel: Key type asymmetric registered Apr 16 00:18:12.930740 kernel: Asymmetric key parser 'x509' registered Apr 16 00:18:12.930749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 16 00:18:12.930758 kernel: io scheduler mq-deadline registered Apr 16 00:18:12.930767 kernel: io scheduler kyber registered Apr 16 00:18:12.930777 kernel: io scheduler bfq registered Apr 16 00:18:12.930786 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 16 00:18:12.930859 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 16 00:18:12.930938 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 16 00:18:12.931008 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.931157 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 16 00:18:12.931228 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 16 00:18:12.931301 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.931374 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 16 00:18:12.931441 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 16 00:18:12.931507 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.931576 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 16 00:18:12.931646 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 16 00:18:12.931712 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.931782 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 16 00:18:12.931847 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 16 00:18:12.931990 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.932169 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 16 00:18:12.932248 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 16 00:18:12.932313 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.932381 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 16 00:18:12.932446 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 16 00:18:12.932510 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.932578 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 16 00:18:12.932647 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 16 00:18:12.932714 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.932725 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 16 00:18:12.932791 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 16 00:18:12.932856 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 16 00:18:12.932957 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:18:12.932972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 16 00:18:12.932986 kernel: ACPI: button: Power Button [PWRB] Apr 16 00:18:12.932994 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 16 00:18:12.934402 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 16 00:18:12.934497 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 16 00:18:12.934509 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 00:18:12.934518 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 16 00:18:12.934589 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 16 00:18:12.934600 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 16 00:18:12.934608 kernel: thunder_xcv, ver 1.0 Apr 16 00:18:12.934623 kernel: thunder_bgx, ver 1.0 Apr 16 00:18:12.934631 kernel: nicpf, ver 1.0 Apr 16 00:18:12.934638 kernel: nicvf, ver 1.0 Apr 16 00:18:12.934722 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 16 00:18:12.934789 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T00:18:12 UTC (1776298692) Apr 16 00:18:12.934799 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 16 00:18:12.934808 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 16 00:18:12.934816 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 16 00:18:12.934827 kernel: watchdog: Hard watchdog permanently disabled Apr 16 00:18:12.934835 kernel: NET: Registered PF_INET6 protocol family Apr 16 00:18:12.934842 kernel: Segment Routing with IPv6 Apr 16 00:18:12.934850 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 16 00:18:12.934858 kernel: NET: Registered PF_PACKET protocol family Apr 16 00:18:12.934866 kernel: Key type dns_resolver registered Apr 16 00:18:12.934874 kernel: registered taskstats version 1 Apr 16 00:18:12.934882 kernel: Loading compiled-in X.509 certificates Apr 16 00:18:12.934890 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 42c6438655eac241afd498b973a7e22ad5b14a7d' Apr 16 00:18:12.934900 kernel: Key type .fscrypt registered Apr 16 00:18:12.934907 kernel: Key type fscrypt-provisioning registered Apr 16 00:18:12.934915 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 00:18:12.934968 kernel: ima: Allocated hash algorithm: sha1 Apr 16 00:18:12.934978 kernel: ima: No architecture policies found Apr 16 00:18:12.934986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 16 00:18:12.934994 kernel: clk: Disabling unused clocks Apr 16 00:18:12.935002 kernel: Freeing unused kernel memory: 39424K Apr 16 00:18:12.935009 kernel: Run /init as init process Apr 16 00:18:12.935021 kernel: with arguments: Apr 16 00:18:12.935030 kernel: /init Apr 16 00:18:12.935907 kernel: with environment: Apr 16 00:18:12.935917 kernel: HOME=/ Apr 16 00:18:12.935941 kernel: TERM=linux Apr 16 00:18:12.935952 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 00:18:12.935963 systemd[1]: Detected virtualization kvm. Apr 16 00:18:12.935972 systemd[1]: Detected architecture arm64. Apr 16 00:18:12.935985 systemd[1]: Running in initrd. Apr 16 00:18:12.935993 systemd[1]: No hostname configured, using default hostname. Apr 16 00:18:12.936001 systemd[1]: Hostname set to . 
Apr 16 00:18:12.936010 systemd[1]: Initializing machine ID from VM UUID. Apr 16 00:18:12.936018 systemd[1]: Queued start job for default target initrd.target. Apr 16 00:18:12.936026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 00:18:12.936293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 00:18:12.936307 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 00:18:12.936320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 00:18:12.936329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 00:18:12.936339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 00:18:12.936350 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 00:18:12.936359 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 00:18:12.936367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 00:18:12.936376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 00:18:12.936386 systemd[1]: Reached target paths.target - Path Units. Apr 16 00:18:12.936394 systemd[1]: Reached target slices.target - Slice Units. Apr 16 00:18:12.936402 systemd[1]: Reached target swap.target - Swaps. Apr 16 00:18:12.936411 systemd[1]: Reached target timers.target - Timer Units. Apr 16 00:18:12.936419 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 00:18:12.936427 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 00:18:12.936436 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 16 00:18:12.936444 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 16 00:18:12.936455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 00:18:12.936463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 00:18:12.936471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 00:18:12.936480 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 00:18:12.936488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 00:18:12.936497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 00:18:12.936506 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 00:18:12.936514 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 00:18:12.936522 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 00:18:12.936532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 00:18:12.936541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:18:12.936550 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 00:18:12.936597 systemd-journald[237]: Collecting audit messages is disabled. Apr 16 00:18:12.936621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 00:18:12.936629 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 00:18:12.936639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 00:18:12.936648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 00:18:12.936659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 16 00:18:12.936669 systemd-journald[237]: Journal started Apr 16 00:18:12.936695 systemd-journald[237]: Runtime Journal (/run/log/journal/726801011b34416286e5ba78bc87cb52) is 8.0M, max 76.6M, 68.6M free. Apr 16 00:18:12.915088 systemd-modules-load[238]: Inserted module 'overlay' Apr 16 00:18:12.940156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 00:18:12.940180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:18:12.940200 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 00:18:12.943072 kernel: Bridge firewalling registered Apr 16 00:18:12.945094 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 16 00:18:12.946360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 00:18:12.948103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 00:18:12.953279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:18:12.956760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:18:12.966058 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 00:18:12.980085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:18:12.981736 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 00:18:12.992364 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 00:18:12.997129 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:18:13.004307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 16 00:18:13.019450 dracut-cmdline[274]: dracut-dracut-053 Apr 16 00:18:13.026642 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=0adf63447ce845e6a0056fdc0e76e619192ad10bb115f878c5a0d78c1b8c220d Apr 16 00:18:13.031603 systemd-resolved[272]: Positive Trust Anchors: Apr 16 00:18:13.031615 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 00:18:13.031649 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 00:18:13.039447 systemd-resolved[272]: Defaulting to hostname 'linux'. Apr 16 00:18:13.041325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 00:18:13.046559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 00:18:13.123104 kernel: SCSI subsystem initialized Apr 16 00:18:13.128090 kernel: Loading iSCSI transport class v2.0-870. Apr 16 00:18:13.136106 kernel: iscsi: registered transport (tcp) Apr 16 00:18:13.149121 kernel: iscsi: registered transport (qla4xxx) Apr 16 00:18:13.149218 kernel: QLogic iSCSI HBA Driver Apr 16 00:18:13.202516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 16 00:18:13.208273 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 00:18:13.232679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 00:18:13.232758 kernel: device-mapper: uevent: version 1.0.3 Apr 16 00:18:13.232770 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 16 00:18:13.284113 kernel: raid6: neonx8 gen() 15666 MB/s Apr 16 00:18:13.301112 kernel: raid6: neonx4 gen() 15476 MB/s Apr 16 00:18:13.318106 kernel: raid6: neonx2 gen() 13126 MB/s Apr 16 00:18:13.335102 kernel: raid6: neonx1 gen() 10325 MB/s Apr 16 00:18:13.352072 kernel: raid6: int64x8 gen() 6870 MB/s Apr 16 00:18:13.369107 kernel: raid6: int64x4 gen() 7250 MB/s Apr 16 00:18:13.386201 kernel: raid6: int64x2 gen() 6060 MB/s Apr 16 00:18:13.403114 kernel: raid6: int64x1 gen() 4989 MB/s Apr 16 00:18:13.403205 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s Apr 16 00:18:13.420126 kernel: raid6: .... xor() 11865 MB/s, rmw enabled Apr 16 00:18:13.420216 kernel: raid6: using neon recovery algorithm Apr 16 00:18:13.425085 kernel: xor: measuring software checksum speed Apr 16 00:18:13.425161 kernel: 8regs : 19778 MB/sec Apr 16 00:18:13.425190 kernel: 32regs : 17556 MB/sec Apr 16 00:18:13.426360 kernel: arm64_neon : 26883 MB/sec Apr 16 00:18:13.426403 kernel: xor: using function: arm64_neon (26883 MB/sec) Apr 16 00:18:13.478121 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 00:18:13.495109 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 00:18:13.500272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 00:18:13.524282 systemd-udevd[455]: Using default interface naming scheme 'v255'. Apr 16 00:18:13.527901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 16 00:18:13.538236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 00:18:13.554683 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Apr 16 00:18:13.593985 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 00:18:13.601373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 00:18:13.658717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 00:18:13.667275 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 00:18:13.695749 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 00:18:13.697465 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 00:18:13.699337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 00:18:13.701984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 00:18:13.709338 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 00:18:13.729089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 00:18:13.788068 kernel: ACPI: bus type USB registered Apr 16 00:18:13.792601 kernel: scsi host0: Virtio SCSI HBA Apr 16 00:18:13.792852 kernel: usbcore: registered new interface driver usbfs Apr 16 00:18:13.792866 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 00:18:13.795560 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 16 00:18:13.796774 kernel: usbcore: registered new interface driver hub Apr 16 00:18:13.804470 kernel: usbcore: registered new device driver usb Apr 16 00:18:13.807501 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 00:18:13.808969 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 16 00:18:13.812877 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:18:13.813580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 00:18:13.813651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:18:13.815149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:18:13.822272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:18:13.840938 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 16 00:18:13.841194 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 16 00:18:13.841285 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 00:18:13.843119 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 16 00:18:13.846675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:18:13.859348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:18:13.861138 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 16 00:18:13.861363 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 16 00:18:13.861461 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 16 00:18:13.866316 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 16 00:18:13.866516 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 16 00:18:13.869707 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 00:18:13.869761 kernel: GPT:17805311 != 80003071 Apr 16 00:18:13.869772 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 00:18:13.872058 kernel: GPT:17805311 != 80003071 Apr 16 00:18:13.872110 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 16 00:18:13.872121 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:18:13.875588 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 16 00:18:13.881075 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 00:18:13.881321 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 16 00:18:13.883502 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 16 00:18:13.883678 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 00:18:13.883768 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 16 00:18:13.882372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:18:13.885820 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 16 00:18:13.887087 kernel: hub 1-0:1.0: USB hub found Apr 16 00:18:13.887261 kernel: hub 1-0:1.0: 4 ports detected Apr 16 00:18:13.887348 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 16 00:18:13.889162 kernel: hub 2-0:1.0: USB hub found Apr 16 00:18:13.889351 kernel: hub 2-0:1.0: 4 ports detected Apr 16 00:18:13.923062 kernel: BTRFS: device fsid a6240e59-bdb5-4432-bae9-6f06a7303c55 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (515) Apr 16 00:18:13.932089 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (516) Apr 16 00:18:13.937382 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 16 00:18:13.949658 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 16 00:18:13.950544 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 16 00:18:13.960578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 16 00:18:13.965824 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 16 00:18:13.975021 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 00:18:13.980218 disk-uuid[574]: Primary Header is updated. Apr 16 00:18:13.980218 disk-uuid[574]: Secondary Entries is updated. Apr 16 00:18:13.980218 disk-uuid[574]: Secondary Header is updated. Apr 16 00:18:13.987072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:18:14.130180 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 16 00:18:14.263933 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 16 00:18:14.264052 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 16 00:18:14.264381 kernel: usbcore: registered new interface driver usbhid Apr 16 00:18:14.264406 kernel: usbhid: USB HID core driver Apr 16 00:18:14.372210 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 16 00:18:14.501104 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 16 00:18:14.555089 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 16 00:18:15.007156 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:18:15.009165 disk-uuid[575]: The operation has completed successfully. Apr 16 00:18:15.056887 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 00:18:15.058116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 00:18:15.077245 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 16 00:18:15.085018 sh[592]: Success Apr 16 00:18:15.099109 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 16 00:18:15.145854 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 00:18:15.163298 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 00:18:15.166381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 00:18:15.185048 kernel: BTRFS info (device dm-0): first mount of filesystem a6240e59-bdb5-4432-bae9-6f06a7303c55 Apr 16 00:18:15.185115 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 16 00:18:15.185137 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 16 00:18:15.185157 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 16 00:18:15.185176 kernel: BTRFS info (device dm-0): using free space tree Apr 16 00:18:15.194095 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 16 00:18:15.195612 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 00:18:15.197087 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 00:18:15.208391 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 00:18:15.214267 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 16 00:18:15.223423 kernel: BTRFS info (device sda6): first mount of filesystem d00c5e58-4065-42ad-81de-759701ad0aab Apr 16 00:18:15.223496 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 16 00:18:15.223509 kernel: BTRFS info (device sda6): using free space tree Apr 16 00:18:15.229064 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 16 00:18:15.229135 kernel: BTRFS info (device sda6): auto enabling async discard Apr 16 00:18:15.240619 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 16 00:18:15.243068 kernel: BTRFS info (device sda6): last unmount of filesystem d00c5e58-4065-42ad-81de-759701ad0aab Apr 16 00:18:15.249976 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 00:18:15.257274 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 16 00:18:15.329836 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 00:18:15.337263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 00:18:15.360625 systemd-networkd[782]: lo: Link UP Apr 16 00:18:15.360633 systemd-networkd[782]: lo: Gained carrier Apr 16 00:18:15.362424 systemd-networkd[782]: Enumeration completed Apr 16 00:18:15.363086 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:18:15.363089 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 00:18:15.363859 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:18:15.363862 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 16 00:18:15.371008 ignition[682]: Ignition 2.19.0 Apr 16 00:18:15.364432 systemd-networkd[782]: eth0: Link UP Apr 16 00:18:15.371014 ignition[682]: Stage: fetch-offline Apr 16 00:18:15.364435 systemd-networkd[782]: eth0: Gained carrier Apr 16 00:18:15.371062 ignition[682]: no configs at "/usr/lib/ignition/base.d" Apr 16 00:18:15.364443 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:18:15.371070 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 00:18:15.364641 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 00:18:15.371615 ignition[682]: parsed url from cmdline: "" Apr 16 00:18:15.367297 systemd[1]: Reached target network.target - Network. Apr 16 00:18:15.371619 ignition[682]: no config URL provided Apr 16 00:18:15.371079 systemd-networkd[782]: eth1: Link UP Apr 16 00:18:15.371625 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 00:18:15.371082 systemd-networkd[782]: eth1: Gained carrier Apr 16 00:18:15.371635 ignition[682]: no config at "/usr/lib/ignition/user.ign" Apr 16 00:18:15.371093 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:18:15.371641 ignition[682]: failed to fetch config: resource requires networking Apr 16 00:18:15.375073 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 00:18:15.371834 ignition[682]: Ignition finished successfully Apr 16 00:18:15.382215 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 16 00:18:15.395626 ignition[785]: Ignition 2.19.0
Apr 16 00:18:15.396411 ignition[785]: Stage: fetch
Apr 16 00:18:15.396619 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:15.396631 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:15.396739 ignition[785]: parsed url from cmdline: ""
Apr 16 00:18:15.396743 ignition[785]: no config URL provided
Apr 16 00:18:15.396747 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 00:18:15.396755 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Apr 16 00:18:15.396776 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 16 00:18:15.397408 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 00:18:15.414148 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 00:18:15.423166 systemd-networkd[782]: eth0: DHCPv4 address 188.245.164.135/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 00:18:15.597649 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 16 00:18:15.606243 ignition[785]: GET result: OK
Apr 16 00:18:15.606434 ignition[785]: parsing config with SHA512: 5e928364b1181760ba428eb42c31399a727071ed28845f2b85bd0d58b4035dfa21e7e6fd696367c91c50207b2bb9c154edf5dcadee77b08e21c0360bb07c7660
Apr 16 00:18:15.613297 unknown[785]: fetched base config from "system"
Apr 16 00:18:15.613988 ignition[785]: fetch: fetch complete
Apr 16 00:18:15.613306 unknown[785]: fetched base config from "system"
Apr 16 00:18:15.613994 ignition[785]: fetch: fetch passed
Apr 16 00:18:15.613322 unknown[785]: fetched user config from "hetzner"
Apr 16 00:18:15.614100 ignition[785]: Ignition finished successfully
Apr 16 00:18:15.616428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 00:18:15.632278 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
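The "parsing config with SHA512: …" entry above records the digest of the fetched userdata. A minimal sketch of that kind of digest check, using a hypothetical payload (the real Hetzner userdata behind the logged digest is not available here):

```python
import hashlib

def verify_config(payload: bytes, expected_sha512: str) -> bool:
    """Compare the SHA512 of a fetched config blob against an expected
    hex digest, the check Ignition logs before parsing the config."""
    return hashlib.sha512(payload).hexdigest() == expected_sha512

# Hypothetical payload standing in for the real userdata.
payload = b'{"ignition": {"version": "3.3.0"}}'
digest = hashlib.sha512(payload).hexdigest()

assert verify_config(payload, digest)
assert not verify_config(payload + b"tampered", digest)
```

Logging only the digest (as the journal does here) lets a later reader confirm which config was applied without exposing its contents.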
Apr 16 00:18:15.648650 ignition[792]: Ignition 2.19.0
Apr 16 00:18:15.648664 ignition[792]: Stage: kargs
Apr 16 00:18:15.648849 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:15.648857 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:15.649911 ignition[792]: kargs: kargs passed
Apr 16 00:18:15.649966 ignition[792]: Ignition finished successfully
Apr 16 00:18:15.655987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 00:18:15.661305 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 00:18:15.676091 ignition[798]: Ignition 2.19.0
Apr 16 00:18:15.676104 ignition[798]: Stage: disks
Apr 16 00:18:15.676371 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:15.676381 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:15.680321 ignition[798]: disks: disks passed
Apr 16 00:18:15.680397 ignition[798]: Ignition finished successfully
Apr 16 00:18:15.685095 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 00:18:15.686759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 00:18:15.688192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 00:18:15.688849 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 00:18:15.689546 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 00:18:15.690135 systemd[1]: Reached target basic.target - Basic System.
Apr 16 00:18:15.697333 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 00:18:15.717493 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 16 00:18:15.723008 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 00:18:15.727296 systemd[1]: Mounting sysroot.mount - /sysroot...
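The systemd-fsck summary above ("ROOT: clean, 14/1628000 files, 120691/1617920 blocks") encodes used/total inodes and blocks. A small sketch that turns that summary line into usage percentages (the field layout follows the e2fsck "clean" summary format as seen in this log):

```python
import re

def parse_fsck_summary(line: str) -> dict:
    """Extract inode and block usage percentages from an e2fsck
    'clean' summary line of the form 'X/Y files, A/B blocks'."""
    m = re.search(r"clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    used_inodes, total_inodes, used_blocks, total_blocks = map(int, m.groups())
    return {
        "inode_use_pct": 100 * used_inodes / total_inodes,
        "block_use_pct": 100 * used_blocks / total_blocks,
    }

stats = parse_fsck_summary("ROOT: clean, 14/1628000 files, 120691/1617920 blocks")
```

For this boot that works out to roughly 7.5% of blocks in use and a near-empty inode table, consistent with a freshly provisioned ROOT partition.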
Apr 16 00:18:15.782094 kernel: EXT4-fs (sda9): mounted filesystem a7d1b52a-2d60-4e63-87fc-077f5b665cf4 r/w with ordered data mode. Quota mode: none.
Apr 16 00:18:15.782792 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 00:18:15.783963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 00:18:15.791304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:18:15.795195 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 00:18:15.800330 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 00:18:15.802054 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 00:18:15.802109 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:18:15.806655 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 00:18:15.811014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 00:18:15.816251 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Apr 16 00:18:15.816300 kernel: BTRFS info (device sda6): first mount of filesystem d00c5e58-4065-42ad-81de-759701ad0aab
Apr 16 00:18:15.816312 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 00:18:15.816322 kernel: BTRFS info (device sda6): using free space tree
Apr 16 00:18:15.825122 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 00:18:15.825193 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 16 00:18:15.828458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:18:15.871745 coreos-metadata[817]: Apr 16 00:18:15.871 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 16 00:18:15.874922 coreos-metadata[817]: Apr 16 00:18:15.874 INFO Fetch successful
Apr 16 00:18:15.874922 coreos-metadata[817]: Apr 16 00:18:15.874 INFO wrote hostname ci-4081-3-6-n-510861948e to /sysroot/etc/hostname
Apr 16 00:18:15.878128 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 00:18:15.877393 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 00:18:15.881174 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Apr 16 00:18:15.885849 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 00:18:15.890978 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 00:18:15.995551 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 00:18:16.000230 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 00:18:16.003429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 00:18:16.013121 kernel: BTRFS info (device sda6): last unmount of filesystem d00c5e58-4065-42ad-81de-759701ad0aab
Apr 16 00:18:16.032187 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 00:18:16.037077 ignition[932]: INFO : Ignition 2.19.0
Apr 16 00:18:16.037077 ignition[932]: INFO : Stage: mount
Apr 16 00:18:16.038465 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:16.038465 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:16.038465 ignition[932]: INFO : mount: mount passed
Apr 16 00:18:16.038465 ignition[932]: INFO : Ignition finished successfully
Apr 16 00:18:16.040594 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 00:18:16.052180 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 00:18:16.184700 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 00:18:16.197362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:18:16.209473 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Apr 16 00:18:16.209546 kernel: BTRFS info (device sda6): first mount of filesystem d00c5e58-4065-42ad-81de-759701ad0aab
Apr 16 00:18:16.209572 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 00:18:16.209593 kernel: BTRFS info (device sda6): using free space tree
Apr 16 00:18:16.213084 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 00:18:16.213153 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 16 00:18:16.215947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:18:16.238018 ignition[961]: INFO : Ignition 2.19.0
Apr 16 00:18:16.238018 ignition[961]: INFO : Stage: files
Apr 16 00:18:16.239344 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:16.239344 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:16.241627 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 00:18:16.242932 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 00:18:16.242932 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 00:18:16.247800 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 00:18:16.249263 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 00:18:16.250645 unknown[961]: wrote ssh authorized keys file for user: core
Apr 16 00:18:16.251536 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 00:18:16.254317 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 00:18:16.255628 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 00:18:16.322527 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 00:18:16.405160 systemd-networkd[782]: eth1: Gained IPv6LL
Apr 16 00:18:16.437338 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 00:18:16.437338 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 00:18:16.439762 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 16 00:18:16.696411 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 16 00:18:16.790263 systemd-networkd[782]: eth0: Gained IPv6LL
Apr 16 00:18:16.964061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 00:18:16.964061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 00:18:16.964061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 00:18:16.964061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:18:16.971291 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 16 00:18:17.217437 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 16 00:18:18.068302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:18:18.068302 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:18:18.071746 ignition[961]: INFO : files: files passed
Apr 16 00:18:18.071746 ignition[961]: INFO : Ignition finished successfully
Apr 16 00:18:18.072909 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 00:18:18.078317 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 00:18:18.090308 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 00:18:18.095518 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 00:18:18.095622 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 00:18:18.115825 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:18:18.115825 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:18:18.119885 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:18:18.123243 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:18:18.124197 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 00:18:18.129270 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 00:18:18.164256 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 00:18:18.165690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 00:18:18.169703 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 00:18:18.171198 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 00:18:18.172953 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 00:18:18.180348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 00:18:18.197188 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:18:18.208303 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 00:18:18.221182 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:18:18.222030 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:18:18.223793 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 00:18:18.225644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 00:18:18.225783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:18:18.227339 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 00:18:18.228030 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 00:18:18.229579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 00:18:18.230827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:18:18.231921 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 00:18:18.233112 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 00:18:18.234426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 00:18:18.235763 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 00:18:18.236809 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 00:18:18.238009 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 00:18:18.239005 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 00:18:18.239157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 00:18:18.240553 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:18:18.241265 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:18:18.242438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 00:18:18.244301 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:18:18.245070 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 00:18:18.245195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 00:18:18.246958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 00:18:18.247097 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:18:18.248897 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 00:18:18.249077 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 00:18:18.250015 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 00:18:18.250129 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 00:18:18.259505 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 00:18:18.265380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 00:18:18.266686 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 00:18:18.267332 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:18:18.271753 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 00:18:18.271940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 00:18:18.274723 ignition[1013]: INFO : Ignition 2.19.0
Apr 16 00:18:18.274723 ignition[1013]: INFO : Stage: umount
Apr 16 00:18:18.277193 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:18:18.277193 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:18:18.277193 ignition[1013]: INFO : umount: umount passed
Apr 16 00:18:18.277193 ignition[1013]: INFO : Ignition finished successfully
Apr 16 00:18:18.285486 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 00:18:18.287405 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 00:18:18.290543 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 00:18:18.293635 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 00:18:18.295382 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 00:18:18.295442 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 00:18:18.298924 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 00:18:18.298992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 00:18:18.301360 systemd[1]: Stopped target network.target - Network.
Apr 16 00:18:18.301895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 00:18:18.301960 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 00:18:18.302736 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 00:18:18.303296 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 00:18:18.307117 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:18:18.308713 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 00:18:18.309298 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 00:18:18.310772 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 00:18:18.310864 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 00:18:18.312058 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 00:18:18.312099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 00:18:18.313069 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 00:18:18.313123 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 00:18:18.314106 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 00:18:18.314146 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 00:18:18.315390 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 00:18:18.316333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 00:18:18.318641 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 00:18:18.319281 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 00:18:18.319286 systemd-networkd[782]: eth0: DHCPv6 lease lost
Apr 16 00:18:18.319381 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 00:18:18.320941 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 00:18:18.321051 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 00:18:18.323495 systemd-networkd[782]: eth1: DHCPv6 lease lost
Apr 16 00:18:18.323558 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 00:18:18.323665 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 00:18:18.328405 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 00:18:18.328581 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 00:18:18.330418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 00:18:18.331535 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 00:18:18.337569 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 00:18:18.337624 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:18:18.345199 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 00:18:18.345792 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 00:18:18.345879 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 00:18:18.348900 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 00:18:18.348961 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:18:18.349996 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 00:18:18.350097 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:18:18.351509 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 00:18:18.351552 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:18:18.352910 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:18:18.366330 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 00:18:18.366456 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 00:18:18.373305 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 00:18:18.373584 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:18:18.376191 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 00:18:18.376239 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:18:18.377183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 00:18:18.377219 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:18:18.378642 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 00:18:18.378696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 00:18:18.380727 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 00:18:18.380783 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 00:18:18.382567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 00:18:18.382626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 00:18:18.388275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 00:18:18.388914 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 00:18:18.388984 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:18:18.392480 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 00:18:18.392536 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 00:18:18.394146 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 00:18:18.394191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:18:18.395689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 00:18:18.395729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:18:18.403679 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 00:18:18.403802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 00:18:18.405532 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 00:18:18.418908 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 00:18:18.429981 systemd[1]: Switching root.
Apr 16 00:18:18.464842 systemd-journald[237]: Journal stopped
Apr 16 00:18:19.505080 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 16 00:18:19.505164 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 00:18:19.505176 kernel: SELinux: policy capability open_perms=1
Apr 16 00:18:19.505186 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 00:18:19.505200 kernel: SELinux: policy capability always_check_network=0
Apr 16 00:18:19.505209 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 00:18:19.505219 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 00:18:19.505228 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 00:18:19.505244 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 00:18:19.505253 kernel: audit: type=1403 audit(1776298698.747:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 00:18:19.505264 systemd[1]: Successfully loaded SELinux policy in 35.805ms.
Apr 16 00:18:19.505285 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.555ms.
Apr 16 00:18:19.505296 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 00:18:19.505307 systemd[1]: Detected virtualization kvm.
Apr 16 00:18:19.505317 systemd[1]: Detected architecture arm64.
Apr 16 00:18:19.505327 systemd[1]: Detected first boot.
Apr 16 00:18:19.505339 systemd[1]: Hostname set to .
Apr 16 00:18:19.505350 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 00:18:19.505360 zram_generator::config[1055]: No configuration found.
Apr 16 00:18:19.505374 systemd[1]: Populated /etc with preset unit settings.
Apr 16 00:18:19.505384 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 00:18:19.505394 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 00:18:19.505405 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 00:18:19.505415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 00:18:19.505427 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 00:18:19.505437 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 00:18:19.505448 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 00:18:19.505458 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 00:18:19.505468 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 00:18:19.505479 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 00:18:19.505489 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 00:18:19.505503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:18:19.505513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:18:19.505528 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 00:18:19.505539 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 00:18:19.505549 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 00:18:19.505559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 00:18:19.505570 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 16 00:18:19.505580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:18:19.505591 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
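The slice names above contain `\x2d` sequences (e.g. `system-addon\x2dconfig.slice`) because systemd escapes `-` inside a path component so it cannot be confused with the `-` that separates slice hierarchy levels. A simplified sketch of that escaping rule (it covers the cases visible in this log, not every corner of the full systemd algorithm):

```python
def escape_component(name: str) -> str:
    """Escape one component the way systemd unit names do: characters
    outside [a-zA-Z0-9:_], plus a leading '.', become \\xNN sequences,
    so '-' inside a component turns into '\\x2d'."""
    out = []
    for i, ch in enumerate(name):
        if ch.isascii() and (ch.isalnum() or ch in ":_" or (ch == "." and i > 0)):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

def slice_name(parent: str, component: str) -> str:
    """Build a child slice name; the unescaped '-' is the hierarchy separator."""
    return f"{parent}-{escape_component(component)}.slice"

name = slice_name("system", "addon-config")
```

With this rule, `system-addon\x2dconfig.slice` unambiguously means the child "addon-config" of the "system" slice, matching the "Slice /system/addon-config" description in the log.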
Apr 16 00:18:19.505604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 00:18:19.505625 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 00:18:19.505637 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 00:18:19.505652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:18:19.505664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 00:18:19.505675 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 00:18:19.505685 systemd[1]: Reached target swap.target - Swaps.
Apr 16 00:18:19.505696 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 00:18:19.505708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 00:18:19.505719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:18:19.505729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:18:19.505740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:18:19.505751 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 00:18:19.505762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 00:18:19.505772 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 00:18:19.505782 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 00:18:19.505793 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 00:18:19.505805 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 00:18:19.505815 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 00:18:19.505836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 00:18:19.505848 systemd[1]: Reached target machines.target - Containers.
Apr 16 00:18:19.505864 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 00:18:19.505878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:18:19.505889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 00:18:19.505899 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 00:18:19.505910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 00:18:19.505920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 00:18:19.505934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 00:18:19.505944 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 00:18:19.505954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 00:18:19.505965 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 00:18:19.505977 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 00:18:19.505988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 00:18:19.505998 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 00:18:19.506008 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 00:18:19.506019 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 00:18:19.506029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 00:18:19.507090 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 00:18:19.507114 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 00:18:19.507125 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 00:18:19.507141 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 00:18:19.507151 systemd[1]: Stopped verity-setup.service.
Apr 16 00:18:19.507165 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 00:18:19.507176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 00:18:19.507186 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 00:18:19.507209 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 00:18:19.507261 systemd-journald[1125]: Collecting audit messages is disabled.
Apr 16 00:18:19.507288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 00:18:19.507299 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 00:18:19.507310 systemd-journald[1125]: Journal started
Apr 16 00:18:19.507334 systemd-journald[1125]: Runtime Journal (/run/log/journal/726801011b34416286e5ba78bc87cb52) is 8.0M, max 76.6M, 68.6M free.
Apr 16 00:18:19.254687 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 00:18:19.278768 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 16 00:18:19.279503 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 00:18:19.515567 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 00:18:19.512108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:18:19.514618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 00:18:19.515373 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 00:18:19.521139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 00:18:19.524844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 00:18:19.526496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 00:18:19.528107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 00:18:19.529264 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:18:19.558210 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 00:18:19.563319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 00:18:19.568219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 00:18:19.573195 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 00:18:19.576311 kernel: ACPI: bus type drm_connector registered
Apr 16 00:18:19.577078 kernel: fuse: init (API version 7.39)
Apr 16 00:18:19.578332 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 00:18:19.579463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 00:18:19.583609 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 00:18:19.585527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 00:18:19.585559 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 00:18:19.587051 kernel: loop: module loaded
Apr 16 00:18:19.587742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 00:18:19.590181 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 00:18:19.598368 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 00:18:19.599119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:18:19.601074 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 00:18:19.605665 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 00:18:19.606452 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 00:18:19.614468 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 00:18:19.619554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 00:18:19.621760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 00:18:19.624399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 00:18:19.624542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 00:18:19.625560 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 00:18:19.625694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 00:18:19.626639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 00:18:19.628077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 00:18:19.636204 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 00:18:19.636923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 00:18:19.654019 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 00:18:19.655757 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 00:18:19.660953 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 00:18:19.665059 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 00:18:19.665972 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 00:18:19.668482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:18:19.677618 systemd-journald[1125]: Time spent on flushing to /var/log/journal/726801011b34416286e5ba78bc87cb52 is 42.422ms for 1132 entries.
Apr 16 00:18:19.677618 systemd-journald[1125]: System Journal (/var/log/journal/726801011b34416286e5ba78bc87cb52) is 8.0M, max 584.8M, 576.8M free.
Apr 16 00:18:19.742005 systemd-journald[1125]: Received client request to flush runtime journal.
Apr 16 00:18:19.742119 kernel: loop0: detected capacity change from 0 to 200864
Apr 16 00:18:19.742196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 00:18:19.708324 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Apr 16 00:18:19.708335 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Apr 16 00:18:19.730230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 00:18:19.746204 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 00:18:19.749122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 00:18:19.754773 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 00:18:19.759288 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 00:18:19.763091 kernel: loop1: detected capacity change from 0 to 8
Apr 16 00:18:19.767519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:18:19.780261 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 00:18:19.803115 kernel: loop2: detected capacity change from 0 to 114328
Apr 16 00:18:19.809770 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 16 00:18:19.834310 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 00:18:19.844894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 00:18:19.854056 kernel: loop3: detected capacity change from 0 to 114432
Apr 16 00:18:19.881832 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Apr 16 00:18:19.881852 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Apr 16 00:18:19.892733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:18:19.905083 kernel: loop4: detected capacity change from 0 to 200864
Apr 16 00:18:19.930091 kernel: loop5: detected capacity change from 0 to 8
Apr 16 00:18:19.933079 kernel: loop6: detected capacity change from 0 to 114328
Apr 16 00:18:19.947061 kernel: loop7: detected capacity change from 0 to 114432
Apr 16 00:18:19.961515 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 16 00:18:19.962026 (sd-merge)[1198]: Merged extensions into '/usr'.
Apr 16 00:18:19.969236 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 00:18:19.969426 systemd[1]: Reloading...
Apr 16 00:18:20.059075 zram_generator::config[1220]: No configuration found.
Apr 16 00:18:20.226108 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 00:18:20.254961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:18:20.304697 systemd[1]: Reloading finished in 334 ms.
Apr 16 00:18:20.328599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 00:18:20.329885 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 00:18:20.341344 systemd[1]: Starting ensure-sysext.service...
Apr 16 00:18:20.344219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 00:18:20.356401 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 00:18:20.367511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:18:20.368429 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Apr 16 00:18:20.368440 systemd[1]: Reloading...
Apr 16 00:18:20.376602 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 00:18:20.377440 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 00:18:20.378442 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 00:18:20.378660 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 16 00:18:20.378704 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 16 00:18:20.385124 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 00:18:20.385136 systemd-tmpfiles[1262]: Skipping /boot
Apr 16 00:18:20.397885 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 00:18:20.398013 systemd-tmpfiles[1262]: Skipping /boot
Apr 16 00:18:20.419819 systemd-udevd[1265]: Using default interface naming scheme 'v255'.
Apr 16 00:18:20.474476 zram_generator::config[1293]: No configuration found.
Apr 16 00:18:20.636934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:18:20.701072 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 00:18:20.708395 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 16 00:18:20.708984 systemd[1]: Reloading finished in 340 ms.
Apr 16 00:18:20.729883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:18:20.732183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:18:20.780143 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 16 00:18:20.793516 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 00:18:20.797692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 00:18:20.799239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:18:20.803144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 00:18:20.817328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 00:18:20.822393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 00:18:20.823217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:18:20.828361 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 00:18:20.838416 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 00:18:20.846588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1318)
Apr 16 00:18:20.846676 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 16 00:18:20.846691 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 00:18:20.846706 kernel: [drm] features: -context_init
Apr 16 00:18:20.846165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 00:18:20.849052 kernel: [drm] number of scanouts: 1
Apr 16 00:18:20.849124 kernel: [drm] number of cap sets: 0
Apr 16 00:18:20.851159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 00:18:20.856056 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 16 00:18:20.870055 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 00:18:20.879586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:18:20.882954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 00:18:20.909088 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 00:18:20.910235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:18:20.926540 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 00:18:20.950409 systemd[1]: Finished ensure-sysext.service.
Apr 16 00:18:20.951794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 00:18:20.951953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 00:18:20.954945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 00:18:20.955410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 00:18:20.957720 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 00:18:20.957900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 00:18:20.959750 augenrules[1396]: No rules
Apr 16 00:18:20.961092 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 00:18:20.962693 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 00:18:20.962874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 00:18:20.966064 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 00:18:20.973631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 00:18:20.987895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 00:18:20.989379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 00:18:20.989784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 00:18:20.993373 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 00:18:20.997540 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 00:18:21.001340 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 00:18:21.011332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:18:21.013027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 00:18:21.023004 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 00:18:21.024745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 00:18:21.037589 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 00:18:21.048261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 00:18:21.049501 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 00:18:21.077588 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 00:18:21.077470 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 00:18:21.111097 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 00:18:21.111932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:18:21.123295 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 00:18:21.125434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:18:21.136094 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 00:18:21.173928 systemd-networkd[1375]: lo: Link UP
Apr 16 00:18:21.174117 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 00:18:21.174496 systemd-networkd[1375]: lo: Gained carrier
Apr 16 00:18:21.176370 systemd-networkd[1375]: Enumeration completed
Apr 16 00:18:21.177198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 00:18:21.177637 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:18:21.177641 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 00:18:21.181317 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:18:21.181325 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 00:18:21.181883 systemd-networkd[1375]: eth0: Link UP
Apr 16 00:18:21.181887 systemd-networkd[1375]: eth0: Gained carrier
Apr 16 00:18:21.181901 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:18:21.185332 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 00:18:21.186482 systemd-networkd[1375]: eth1: Link UP
Apr 16 00:18:21.186490 systemd-networkd[1375]: eth1: Gained carrier
Apr 16 00:18:21.186514 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:18:21.187968 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 00:18:21.189463 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 00:18:21.200027 systemd-resolved[1376]: Positive Trust Anchors:
Apr 16 00:18:21.200088 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 00:18:21.200120 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 00:18:21.207030 systemd-resolved[1376]: Using system hostname 'ci-4081-3-6-n-510861948e'.
Apr 16 00:18:21.209670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 00:18:21.210560 systemd[1]: Reached target network.target - Network.
Apr 16 00:18:21.211114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:18:21.211761 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 00:18:21.212598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 00:18:21.213545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 00:18:21.214768 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 00:18:21.215563 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 00:18:21.216292 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 00:18:21.216975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 00:18:21.217015 systemd[1]: Reached target paths.target - Path Units.
Apr 16 00:18:21.217514 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 00:18:21.218645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 00:18:21.220845 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 00:18:21.226154 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 00:18:21.228507 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Apr 16 00:18:21.232897 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 00:18:21.234566 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 00:18:21.235775 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 00:18:21.236175 systemd-networkd[1375]: eth0: DHCPv4 address 188.245.164.135/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 00:18:21.236487 systemd[1]: Reached target basic.target - Basic System.
Apr 16 00:18:21.237745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 00:18:21.237767 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 00:18:21.242200 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 00:18:21.244486 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 00:18:21.247247 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 00:18:21.254335 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 00:18:21.260465 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 00:18:21.262154 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 00:18:21.263607 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 00:18:21.267351 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 00:18:21.273320 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 16 00:18:21.277299 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 00:18:21.282617 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 00:18:21.287090 jq[1442]: false
Apr 16 00:18:21.295321 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 00:18:21.296848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 00:18:21.297956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 00:18:21.301120 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 00:18:21.304143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 00:18:21.306710 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 00:18:21.309106 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 00:18:21.331594 dbus-daemon[1441]: [system] SELinux support is enabled
Apr 16 00:18:21.331809 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 00:18:21.337447 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 00:18:21.337489 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 00:18:21.338341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 00:18:21.338356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 00:18:21.346175 extend-filesystems[1445]: Found loop4
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found loop5
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found loop6
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found loop7
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda1
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda2
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda3
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found usr
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda4
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda6
Apr 16 00:18:21.349053 extend-filesystems[1445]: Found sda7
Apr 16 00:18:21.347604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 00:18:21.369231 coreos-metadata[1440]: Apr 16 00:18:21.368 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 16 00:18:21.369464 extend-filesystems[1445]: Found sda9
Apr 16 00:18:21.369464 extend-filesystems[1445]: Checking size of /dev/sda9
Apr 16 00:18:21.347909 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 00:18:21.376473 coreos-metadata[1440]: Apr 16 00:18:21.373 INFO Fetch successful
Apr 16 00:18:21.376473 coreos-metadata[1440]: Apr 16 00:18:21.373 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 16 00:18:21.377273 jq[1454]: true
Apr 16 00:18:21.378226 coreos-metadata[1440]: Apr 16 00:18:21.377 INFO Fetch successful
Apr 16 00:18:21.395202 systemd-timesyncd[1409]: Contacted time server 85.215.166.214:123 (3.flatcar.pool.ntp.org).
Apr 16 00:18:21.395260 systemd-timesyncd[1409]: Initial clock synchronization to Thu 2026-04-16 00:18:20.995795 UTC.
Apr 16 00:18:21.408278 update_engine[1453]: I20260416 00:18:21.406973 1453 main.cc:92] Flatcar Update Engine starting
Apr 16 00:18:21.412065 tar[1460]: linux-arm64/LICENSE
Apr 16 00:18:21.412065 tar[1460]: linux-arm64/helm
Apr 16 00:18:21.416333 extend-filesystems[1445]: Resized partition /dev/sda9
Apr 16 00:18:21.414220 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 00:18:21.420357 update_engine[1453]: I20260416 00:18:21.417212 1453 update_check_scheduler.cc:74] Next update check in 10m8s
Apr 16 00:18:21.426153 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024)
Apr 16 00:18:21.422264 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 00:18:21.424278 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 00:18:21.434354 jq[1476]: true
Apr 16 00:18:21.438481 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 00:18:21.438711 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 00:18:21.443923 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 16 00:18:21.488893 systemd-logind[1451]: New seat seat0.
Apr 16 00:18:21.497575 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button) Apr 16 00:18:21.497735 systemd-logind[1451]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 16 00:18:21.498099 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 00:18:21.563270 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1302) Apr 16 00:18:21.593585 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 00:18:21.596761 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 00:18:21.617878 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Apr 16 00:18:21.622596 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 00:18:21.644421 systemd[1]: Starting sshkeys.service... Apr 16 00:18:21.670796 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 16 00:18:21.673087 extend-filesystems[1484]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 16 00:18:21.673087 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 16 00:18:21.673087 extend-filesystems[1484]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 16 00:18:21.688147 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Apr 16 00:18:21.688147 extend-filesystems[1445]: Found sr0 Apr 16 00:18:21.674207 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 00:18:21.676087 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 00:18:21.697563 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 16 00:18:21.709330 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 16 00:18:21.764047 coreos-metadata[1520]: Apr 16 00:18:21.763 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 16 00:18:21.768823 coreos-metadata[1520]: Apr 16 00:18:21.767 INFO Fetch successful
Apr 16 00:18:21.772579 unknown[1520]: wrote ssh authorized keys file for user: core
Apr 16 00:18:21.797280 containerd[1473]: time="2026-04-16T00:18:21.797190440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 00:18:21.817708 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 00:18:21.819096 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 16 00:18:21.831024 containerd[1473]: time="2026-04-16T00:18:21.828625240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.828953 systemd[1]: Finished sshkeys.service.
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835384920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835436080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835458560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835639320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835660280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835731120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835743880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835931880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835952520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835971640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 00:18:21.836697 containerd[1473]: time="2026-04-16T00:18:21.835982400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836074840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836286160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836399440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836417440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836492440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 00:18:21.837059 containerd[1473]: time="2026-04-16T00:18:21.836533960Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 00:18:21.849758 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.861506240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.861620880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.861653520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.861683960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.861712160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862024480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862512800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862697800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862733880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862756760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862780920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862845600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862870560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864047 containerd[1473]: time="2026-04-16T00:18:21.862894720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.862920480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.862942880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.862964280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.862986120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863019880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863323920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863355840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863402200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863439720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863484160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863510440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863547040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863571960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864406 containerd[1473]: time="2026-04-16T00:18:21.863604280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863638200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863666440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863700280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863738040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863814800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863845560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.863884640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864066400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864099600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864348200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864370680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864381280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864463120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 00:18:21.864657 containerd[1473]: time="2026-04-16T00:18:21.864483360Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 00:18:21.864931 containerd[1473]: time="2026-04-16T00:18:21.864495840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 00:18:21.864954 containerd[1473]: time="2026-04-16T00:18:21.864896200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 16 00:18:21.865076 containerd[1473]: time="2026-04-16T00:18:21.864969640Z" level=info msg="Connect containerd service"
Apr 16 00:18:21.865076 containerd[1473]: time="2026-04-16T00:18:21.865006520Z" level=info msg="using legacy CRI server"
Apr 16 00:18:21.865076 containerd[1473]: time="2026-04-16T00:18:21.865014280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 00:18:21.866057 containerd[1473]: time="2026-04-16T00:18:21.865136480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 16 00:18:21.866370 containerd[1473]: time="2026-04-16T00:18:21.866290160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 00:18:21.866619 containerd[1473]: time="2026-04-16T00:18:21.866579680Z" level=info msg="Start subscribing containerd event"
Apr 16 00:18:21.866642 containerd[1473]: time="2026-04-16T00:18:21.866633480Z" level=info msg="Start recovering state"
Apr 16 00:18:21.866948 containerd[1473]: time="2026-04-16T00:18:21.866927720Z" level=info msg="Start event monitor"
Apr 16 00:18:21.866993 containerd[1473]: time="2026-04-16T00:18:21.866949040Z" level=info msg="Start snapshots syncer"
Apr 16 00:18:21.866993 containerd[1473]: time="2026-04-16T00:18:21.866959360Z" level=info msg="Start cni network conf syncer for default"
Apr 16 00:18:21.867137 containerd[1473]: time="2026-04-16T00:18:21.866966480Z" level=info msg="Start streaming server"
Apr 16 00:18:21.867502 containerd[1473]: time="2026-04-16T00:18:21.867480840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 00:18:21.867687 containerd[1473]: time="2026-04-16T00:18:21.867664560Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 00:18:21.870299 containerd[1473]: time="2026-04-16T00:18:21.869518680Z" level=info msg="containerd successfully booted in 0.073136s"
Apr 16 00:18:21.869634 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 00:18:22.133422 tar[1460]: linux-arm64/README.md
Apr 16 00:18:22.146119 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 00:18:22.293310 systemd-networkd[1375]: eth0: Gained IPv6LL
Apr 16 00:18:22.300080 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 00:18:22.301479 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 00:18:22.311253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:18:22.315346 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 00:18:22.367211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 00:18:22.627507 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 00:18:22.649863 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 00:18:22.660193 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 00:18:22.670476 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 00:18:22.672127 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 00:18:22.684176 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 00:18:22.696161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 00:18:22.704334 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 00:18:22.707098 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 16 00:18:22.708049 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 00:18:23.161723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:18:23.163731 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 00:18:23.168164 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:18:23.169872 systemd[1]: Startup finished in 826ms (kernel) + 6.055s (initrd) + 4.457s (userspace) = 11.339s.
Apr 16 00:18:23.189290 systemd-networkd[1375]: eth1: Gained IPv6LL
Apr 16 00:18:23.672014 kubelet[1569]: E0416 00:18:23.671951 1569 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:18:23.676793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:18:23.676935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:18:33.928254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 00:18:33.940519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:18:34.062393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:18:34.067503 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:18:34.114769 kubelet[1588]: E0416 00:18:34.114716 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:18:34.118952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:18:34.119225 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:18:44.224581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 00:18:44.233677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:18:44.360261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:18:44.378775 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:18:44.425887 kubelet[1603]: E0416 00:18:44.425759 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:18:44.428728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:18:44.428880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:18:54.475441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 16 00:18:54.483433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:18:54.641367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:18:54.652672 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:18:54.706567 kubelet[1618]: E0416 00:18:54.706485 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:18:54.709954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:18:54.710192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:19:00.080884 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 00:19:00.094118 systemd[1]: Started sshd@0-188.245.164.135:22-4.175.71.9:39362.service - OpenSSH per-connection server daemon (4.175.71.9:39362).
Apr 16 00:19:00.215216 sshd[1626]: Accepted publickey for core from 4.175.71.9 port 39362 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:00.217759 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:00.231089 systemd-logind[1451]: New session 1 of user core.
Apr 16 00:19:00.233581 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 00:19:00.248601 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 00:19:00.267130 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 00:19:00.274534 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 00:19:00.295669 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 00:19:00.417982 systemd[1630]: Queued start job for default target default.target.
Apr 16 00:19:00.426485 systemd[1630]: Created slice app.slice - User Application Slice.
Apr 16 00:19:00.426558 systemd[1630]: Reached target paths.target - Paths.
Apr 16 00:19:00.426587 systemd[1630]: Reached target timers.target - Timers.
Apr 16 00:19:00.428662 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 00:19:00.454776 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 00:19:00.454927 systemd[1630]: Reached target sockets.target - Sockets.
Apr 16 00:19:00.454942 systemd[1630]: Reached target basic.target - Basic System.
Apr 16 00:19:00.455006 systemd[1630]: Reached target default.target - Main User Target.
Apr 16 00:19:00.455055 systemd[1630]: Startup finished in 152ms.
Apr 16 00:19:00.455677 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 00:19:00.470260 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 00:19:00.593786 systemd[1]: Started sshd@1-188.245.164.135:22-4.175.71.9:39374.service - OpenSSH per-connection server daemon (4.175.71.9:39374).
Apr 16 00:19:00.722905 sshd[1641]: Accepted publickey for core from 4.175.71.9 port 39374 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:00.726298 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:00.736586 systemd-logind[1451]: New session 2 of user core.
Apr 16 00:19:00.747470 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 00:19:00.849351 sshd[1641]: pam_unix(sshd:session): session closed for user core
Apr 16 00:19:00.857212 systemd[1]: sshd@1-188.245.164.135:22-4.175.71.9:39374.service: Deactivated successfully.
Apr 16 00:19:00.859415 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 00:19:00.860465 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Apr 16 00:19:00.861838 systemd-logind[1451]: Removed session 2.
Apr 16 00:19:00.883555 systemd[1]: Started sshd@2-188.245.164.135:22-4.175.71.9:39380.service - OpenSSH per-connection server daemon (4.175.71.9:39380).
Apr 16 00:19:01.001083 sshd[1648]: Accepted publickey for core from 4.175.71.9 port 39380 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:01.003018 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:01.009782 systemd-logind[1451]: New session 3 of user core.
Apr 16 00:19:01.020467 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 00:19:01.115676 sshd[1648]: pam_unix(sshd:session): session closed for user core
Apr 16 00:19:01.121246 systemd[1]: sshd@2-188.245.164.135:22-4.175.71.9:39380.service: Deactivated successfully.
Apr 16 00:19:01.123994 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 00:19:01.125433 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Apr 16 00:19:01.126756 systemd-logind[1451]: Removed session 3.
Apr 16 00:19:01.150571 systemd[1]: Started sshd@3-188.245.164.135:22-4.175.71.9:39388.service - OpenSSH per-connection server daemon (4.175.71.9:39388).
Apr 16 00:19:01.290807 sshd[1655]: Accepted publickey for core from 4.175.71.9 port 39388 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:01.293596 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:01.300023 systemd-logind[1451]: New session 4 of user core.
Apr 16 00:19:01.306734 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 00:19:01.410753 sshd[1655]: pam_unix(sshd:session): session closed for user core
Apr 16 00:19:01.416152 systemd[1]: sshd@3-188.245.164.135:22-4.175.71.9:39388.service: Deactivated successfully.
Apr 16 00:19:01.418702 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 00:19:01.422578 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Apr 16 00:19:01.424236 systemd-logind[1451]: Removed session 4.
Apr 16 00:19:01.446690 systemd[1]: Started sshd@4-188.245.164.135:22-4.175.71.9:39404.service - OpenSSH per-connection server daemon (4.175.71.9:39404).
Apr 16 00:19:01.571241 sshd[1662]: Accepted publickey for core from 4.175.71.9 port 39404 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:01.572902 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:01.579825 systemd-logind[1451]: New session 5 of user core.
Apr 16 00:19:01.586497 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 00:19:01.685074 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 00:19:01.685406 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:19:01.702671 sudo[1665]: pam_unix(sudo:session): session closed for user root
Apr 16 00:19:01.720905 sshd[1662]: pam_unix(sshd:session): session closed for user core
Apr 16 00:19:01.727845 systemd[1]: sshd@4-188.245.164.135:22-4.175.71.9:39404.service: Deactivated successfully.
Apr 16 00:19:01.730806 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 00:19:01.733439 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Apr 16 00:19:01.735004 systemd-logind[1451]: Removed session 5.
Apr 16 00:19:01.755621 systemd[1]: Started sshd@5-188.245.164.135:22-4.175.71.9:39420.service - OpenSSH per-connection server daemon (4.175.71.9:39420).
Apr 16 00:19:01.886182 sshd[1670]: Accepted publickey for core from 4.175.71.9 port 39420 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:01.888704 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:01.894727 systemd-logind[1451]: New session 6 of user core.
Apr 16 00:19:01.900414 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 00:19:01.989020 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 00:19:01.989356 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:19:01.997666 sudo[1674]: pam_unix(sudo:session): session closed for user root
Apr 16 00:19:02.006659 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 16 00:19:02.007152 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:19:02.028665 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 16 00:19:02.033095 auditctl[1677]: No rules
Apr 16 00:19:02.033894 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 00:19:02.036117 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 16 00:19:02.042671 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 00:19:02.085908 augenrules[1695]: No rules
Apr 16 00:19:02.087999 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 00:19:02.089703 sudo[1673]: pam_unix(sudo:session): session closed for user root
Apr 16 00:19:02.108322 sshd[1670]: pam_unix(sshd:session): session closed for user core
Apr 16 00:19:02.113082 systemd[1]: sshd@5-188.245.164.135:22-4.175.71.9:39420.service: Deactivated successfully.
Apr 16 00:19:02.115467 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 00:19:02.118948 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Apr 16 00:19:02.120523 systemd-logind[1451]: Removed session 6.
Apr 16 00:19:02.138507 systemd[1]: Started sshd@6-188.245.164.135:22-4.175.71.9:39434.service - OpenSSH per-connection server daemon (4.175.71.9:39434).
Apr 16 00:19:02.271803 sshd[1703]: Accepted publickey for core from 4.175.71.9 port 39434 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:19:02.274246 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:19:02.280114 systemd-logind[1451]: New session 7 of user core.
Apr 16 00:19:02.290426 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 00:19:02.377440 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 00:19:02.377800 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:19:02.709573 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 00:19:02.711772 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 00:19:02.987139 dockerd[1721]: time="2026-04-16T00:19:02.987065328Z" level=info msg="Starting up"
Apr 16 00:19:03.110827 dockerd[1721]: time="2026-04-16T00:19:03.110760774Z" level=info msg="Loading containers: start."
Apr 16 00:19:03.239383 kernel: Initializing XFRM netlink socket
Apr 16 00:19:03.343194 systemd-networkd[1375]: docker0: Link UP
Apr 16 00:19:03.366108 dockerd[1721]: time="2026-04-16T00:19:03.365764007Z" level=info msg="Loading containers: done."
Apr 16 00:19:03.382114 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3971761903-merged.mount: Deactivated successfully.
Apr 16 00:19:03.386279 dockerd[1721]: time="2026-04-16T00:19:03.386206957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 00:19:03.386446 dockerd[1721]: time="2026-04-16T00:19:03.386366798Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 16 00:19:03.386533 dockerd[1721]: time="2026-04-16T00:19:03.386498272Z" level=info msg="Daemon has completed initialization"
Apr 16 00:19:03.438139 dockerd[1721]: time="2026-04-16T00:19:03.436976223Z" level=info msg="API listen on /run/docker.sock"
Apr 16 00:19:03.437293 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 00:19:04.024880 containerd[1473]: time="2026-04-16T00:19:04.024802507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 16 00:19:04.665151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159176560.mount: Deactivated successfully.
Apr 16 00:19:04.725003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 00:19:04.734262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:19:04.940927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:19:04.941434 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 00:19:05.017780 kubelet[1881]: E0416 00:19:05.017445 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 00:19:05.020606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 00:19:05.021271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 00:19:05.532913 containerd[1473]: time="2026-04-16T00:19:05.532794168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:05.535267 containerd[1473]: time="2026-04-16T00:19:05.535212774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193866" Apr 16 00:19:05.537202 containerd[1473]: time="2026-04-16T00:19:05.537150587Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:05.542220 containerd[1473]: time="2026-04-16T00:19:05.542165361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:05.545055 containerd[1473]: time="2026-04-16T00:19:05.544688752Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 1.519818588s" Apr 16 00:19:05.545055 containerd[1473]: time="2026-04-16T00:19:05.544784734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\"" Apr 16 00:19:05.546004 containerd[1473]: time="2026-04-16T00:19:05.545692186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 00:19:06.457886 containerd[1473]: time="2026-04-16T00:19:06.456468067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:06.459255 containerd[1473]: time="2026-04-16T00:19:06.459206916Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901464" Apr 16 00:19:06.463176 containerd[1473]: time="2026-04-16T00:19:06.463129508Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:06.466089 containerd[1473]: time="2026-04-16T00:19:06.466013110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:06.469327 containerd[1473]: time="2026-04-16T00:19:06.469261832Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" 
in 923.526236ms" Apr 16 00:19:06.469520 containerd[1473]: time="2026-04-16T00:19:06.469499005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\"" Apr 16 00:19:06.470206 containerd[1473]: time="2026-04-16T00:19:06.470167313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 00:19:06.834088 update_engine[1453]: I20260416 00:19:06.833936 1453 update_attempter.cc:509] Updating boot flags... Apr 16 00:19:06.892903 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1945) Apr 16 00:19:06.987081 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1949) Apr 16 00:19:07.296535 containerd[1473]: time="2026-04-16T00:19:07.296481834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:07.305717 containerd[1473]: time="2026-04-16T00:19:07.305650773Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047965" Apr 16 00:19:07.307302 containerd[1473]: time="2026-04-16T00:19:07.307225986Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:07.316085 containerd[1473]: time="2026-04-16T00:19:07.314494443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:07.316733 containerd[1473]: time="2026-04-16T00:19:07.316681225Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", 
repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 846.183639ms" Apr 16 00:19:07.316955 containerd[1473]: time="2026-04-16T00:19:07.316929157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\"" Apr 16 00:19:07.317568 containerd[1473]: time="2026-04-16T00:19:07.317543087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 00:19:08.203584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963845606.mount: Deactivated successfully. Apr 16 00:19:08.447182 containerd[1473]: time="2026-04-16T00:19:08.447068986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:08.449512 containerd[1473]: time="2026-04-16T00:19:08.449441143Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606312" Apr 16 00:19:08.451472 containerd[1473]: time="2026-04-16T00:19:08.451403938Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:08.454918 containerd[1473]: time="2026-04-16T00:19:08.454356052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:08.456104 containerd[1473]: time="2026-04-16T00:19:08.455962815Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.138277458s" Apr 16 00:19:08.456104 containerd[1473]: time="2026-04-16T00:19:08.456061555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\"" Apr 16 00:19:08.456783 containerd[1473]: time="2026-04-16T00:19:08.456733090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 00:19:08.985322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487001127.mount: Deactivated successfully. Apr 16 00:19:09.746916 containerd[1473]: time="2026-04-16T00:19:09.746835525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:09.750111 containerd[1473]: time="2026-04-16T00:19:09.749708556Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Apr 16 00:19:09.751511 containerd[1473]: time="2026-04-16T00:19:09.751393199Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:09.756684 containerd[1473]: time="2026-04-16T00:19:09.756592875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:09.759694 containerd[1473]: time="2026-04-16T00:19:09.758692197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.301912538s" Apr 16 00:19:09.759694 containerd[1473]: time="2026-04-16T00:19:09.758754969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 16 00:19:09.760309 containerd[1473]: time="2026-04-16T00:19:09.760273260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 00:19:10.223216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761011255.mount: Deactivated successfully. Apr 16 00:19:10.232934 containerd[1473]: time="2026-04-16T00:19:10.232834068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:10.235001 containerd[1473]: time="2026-04-16T00:19:10.234911887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 16 00:19:10.236463 containerd[1473]: time="2026-04-16T00:19:10.236374835Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:10.240408 containerd[1473]: time="2026-04-16T00:19:10.240299911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:10.241485 containerd[1473]: time="2026-04-16T00:19:10.241296293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 
480.867203ms" Apr 16 00:19:10.241485 containerd[1473]: time="2026-04-16T00:19:10.241370427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 16 00:19:10.242610 containerd[1473]: time="2026-04-16T00:19:10.242356207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 00:19:10.778783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048527325.mount: Deactivated successfully. Apr 16 00:19:11.543801 containerd[1473]: time="2026-04-16T00:19:11.543029985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:11.544694 containerd[1473]: time="2026-04-16T00:19:11.544632264Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139756" Apr 16 00:19:11.546651 containerd[1473]: time="2026-04-16T00:19:11.546589565Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:11.552122 containerd[1473]: time="2026-04-16T00:19:11.552066398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:11.555085 containerd[1473]: time="2026-04-16T00:19:11.553319057Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.310642152s" Apr 16 00:19:11.555085 containerd[1473]: time="2026-04-16T00:19:11.553370345Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 16 00:19:15.225000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 00:19:15.235983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:19:15.381268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:15.381490 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 00:19:15.431218 kubelet[2112]: E0416 00:19:15.431159 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 00:19:15.434005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 00:19:15.434192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 00:19:18.231215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:18.242903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:19:18.278611 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-7.scope)... Apr 16 00:19:18.278635 systemd[1]: Reloading... Apr 16 00:19:18.438081 zram_generator::config[2175]: No configuration found. Apr 16 00:19:18.536647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 00:19:18.612542 systemd[1]: Reloading finished in 333 ms. 
Apr 16 00:19:18.671299 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 00:19:18.671412 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 00:19:18.671707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:18.680552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:19:18.823369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:18.826341 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 00:19:18.873443 kubelet[2214]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 00:19:18.873443 kubelet[2214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 00:19:18.884730 kubelet[2214]: I0416 00:19:18.884592 2214 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 00:19:19.415155 kubelet[2214]: I0416 00:19:19.415093 2214 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 00:19:19.415155 kubelet[2214]: I0416 00:19:19.415144 2214 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 00:19:19.415365 kubelet[2214]: I0416 00:19:19.415190 2214 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 00:19:19.415365 kubelet[2214]: I0416 00:19:19.415202 2214 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 00:19:19.416080 kubelet[2214]: I0416 00:19:19.415674 2214 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 00:19:19.430323 kubelet[2214]: I0416 00:19:19.428147 2214 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 00:19:19.430875 kubelet[2214]: E0416 00:19:19.430813 2214 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://188.245.164.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 00:19:19.438581 kubelet[2214]: E0416 00:19:19.438264 2214 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 00:19:19.438581 kubelet[2214]: I0416 00:19:19.438364 2214 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 00:19:19.441357 kubelet[2214]: I0416 00:19:19.441305 2214 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 00:19:19.441879 kubelet[2214]: I0416 00:19:19.441830 2214 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 00:19:19.442600 kubelet[2214]: I0416 00:19:19.441984 2214 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-510861948e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 00:19:19.442600 kubelet[2214]: I0416 00:19:19.442207 2214 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
00:19:19.442600 kubelet[2214]: I0416 00:19:19.442220 2214 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 00:19:19.442600 kubelet[2214]: I0416 00:19:19.442369 2214 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 00:19:19.471735 kubelet[2214]: I0416 00:19:19.471614 2214 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:19:19.473456 kubelet[2214]: I0416 00:19:19.473416 2214 kubelet.go:475] "Attempting to sync node with API server" Apr 16 00:19:19.473456 kubelet[2214]: I0416 00:19:19.473452 2214 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 00:19:19.473653 kubelet[2214]: I0416 00:19:19.473486 2214 kubelet.go:387] "Adding apiserver pod source" Apr 16 00:19:19.473653 kubelet[2214]: I0416 00:19:19.473516 2214 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 00:19:19.475820 kubelet[2214]: E0416 00:19:19.475154 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.164.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 00:19:19.475820 kubelet[2214]: E0416 00:19:19.475493 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.164.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-510861948e&limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 00:19:19.476352 kubelet[2214]: I0416 00:19:19.475933 2214 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 00:19:19.476785 kubelet[2214]: I0416 00:19:19.476653 2214 
kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 00:19:19.476785 kubelet[2214]: I0416 00:19:19.476693 2214 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 00:19:19.476785 kubelet[2214]: W0416 00:19:19.476748 2214 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 00:19:19.480365 kubelet[2214]: I0416 00:19:19.480315 2214 server.go:1262] "Started kubelet" Apr 16 00:19:19.487177 kubelet[2214]: I0416 00:19:19.486972 2214 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 00:19:19.488234 kubelet[2214]: I0416 00:19:19.488209 2214 server.go:310] "Adding debug handlers to kubelet server" Apr 16 00:19:19.490091 kubelet[2214]: E0416 00:19:19.487891 2214 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.164.135:6443/api/v1/namespaces/default/events\": dial tcp 188.245.164.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-510861948e.18a6ae4f7dc9d76d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-510861948e,UID:ci-4081-3-6-n-510861948e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-510861948e,},FirstTimestamp:2026-04-16 00:19:19.480268653 +0000 UTC m=+0.649574788,LastTimestamp:2026-04-16 00:19:19.480268653 +0000 UTC m=+0.649574788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-510861948e,}" Apr 16 00:19:19.490091 kubelet[2214]: I0416 00:19:19.489466 2214 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 00:19:19.490091 kubelet[2214]: I0416 00:19:19.489537 2214 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 00:19:19.490091 kubelet[2214]: I0416 00:19:19.489955 2214 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 00:19:19.493125 kubelet[2214]: I0416 00:19:19.492787 2214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 00:19:19.494982 kubelet[2214]: I0416 00:19:19.494894 2214 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 00:19:19.499086 kubelet[2214]: E0416 00:19:19.498687 2214 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-510861948e\" not found" Apr 16 00:19:19.499086 kubelet[2214]: I0416 00:19:19.498740 2214 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 00:19:19.499086 kubelet[2214]: I0416 00:19:19.498950 2214 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 00:19:19.499086 kubelet[2214]: I0416 00:19:19.499005 2214 reconciler.go:29] "Reconciler: start to sync state" Apr 16 00:19:19.500715 kubelet[2214]: E0416 00:19:19.500675 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.164.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 00:19:19.502084 kubelet[2214]: E0416 00:19:19.501726 2214 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 00:19:19.502084 kubelet[2214]: I0416 00:19:19.501973 2214 factory.go:223] Registration of the systemd container factory successfully Apr 16 00:19:19.502337 kubelet[2214]: I0416 00:19:19.502310 2214 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 00:19:19.504371 kubelet[2214]: E0416 00:19:19.504318 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.164.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-510861948e?timeout=10s\": dial tcp 188.245.164.135:6443: connect: connection refused" interval="200ms" Apr 16 00:19:19.504728 kubelet[2214]: I0416 00:19:19.504706 2214 factory.go:223] Registration of the containerd container factory successfully Apr 16 00:19:19.532087 kubelet[2214]: I0416 00:19:19.531602 2214 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 00:19:19.533288 kubelet[2214]: I0416 00:19:19.533245 2214 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 00:19:19.533288 kubelet[2214]: I0416 00:19:19.533286 2214 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 00:19:19.533415 kubelet[2214]: I0416 00:19:19.533323 2214 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 00:19:19.533415 kubelet[2214]: E0416 00:19:19.533381 2214 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 00:19:19.536529 kubelet[2214]: E0416 00:19:19.536163 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.164.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 00:19:19.545835 kubelet[2214]: I0416 00:19:19.545802 2214 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 00:19:19.546382 kubelet[2214]: I0416 00:19:19.546102 2214 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 00:19:19.546382 kubelet[2214]: I0416 00:19:19.546130 2214 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:19:19.549080 kubelet[2214]: I0416 00:19:19.548948 2214 policy_none.go:49] "None policy: Start" Apr 16 00:19:19.549080 kubelet[2214]: I0416 00:19:19.548979 2214 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 00:19:19.549080 kubelet[2214]: I0416 00:19:19.548991 2214 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 00:19:19.551010 kubelet[2214]: I0416 00:19:19.550978 2214 policy_none.go:47] "Start" Apr 16 00:19:19.556395 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 00:19:19.566125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 16 00:19:19.571690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 00:19:19.585124 kubelet[2214]: E0416 00:19:19.583923 2214 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 00:19:19.585124 kubelet[2214]: I0416 00:19:19.584200 2214 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 00:19:19.585124 kubelet[2214]: I0416 00:19:19.584215 2214 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 00:19:19.586093 kubelet[2214]: I0416 00:19:19.585384 2214 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 00:19:19.589632 kubelet[2214]: E0416 00:19:19.588979 2214 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 00:19:19.589632 kubelet[2214]: E0416 00:19:19.589096 2214 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-510861948e\" not found" Apr 16 00:19:19.651364 systemd[1]: Created slice kubepods-burstable-podba8a64b3229e73512051a750beb9692f.slice - libcontainer container kubepods-burstable-podba8a64b3229e73512051a750beb9692f.slice. Apr 16 00:19:19.661132 kubelet[2214]: E0416 00:19:19.661059 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.667330 systemd[1]: Created slice kubepods-burstable-pod77acbaa1a84a9b15dfdf36986b42f970.slice - libcontainer container kubepods-burstable-pod77acbaa1a84a9b15dfdf36986b42f970.slice. 
Apr 16 00:19:19.671680 kubelet[2214]: E0416 00:19:19.671314 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.676754 systemd[1]: Created slice kubepods-burstable-pod7e51adf6e0d1ce23a78091207e061303.slice - libcontainer container kubepods-burstable-pod7e51adf6e0d1ce23a78091207e061303.slice. Apr 16 00:19:19.679026 kubelet[2214]: E0416 00:19:19.678975 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.688087 kubelet[2214]: I0416 00:19:19.688004 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.688882 kubelet[2214]: E0416 00:19:19.688823 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.164.135:6443/api/v1/nodes\": dial tcp 188.245.164.135:6443: connect: connection refused" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.700844 kubelet[2214]: I0416 00:19:19.700328 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.700844 kubelet[2214]: I0416 00:19:19.700385 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 
16 00:19:19.700844 kubelet[2214]: I0416 00:19:19.700417 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.700844 kubelet[2214]: I0416 00:19:19.700453 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.700844 kubelet[2214]: I0416 00:19:19.700480 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.701281 kubelet[2214]: I0416 00:19:19.700512 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.701281 kubelet[2214]: I0416 00:19:19.700538 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ba8a64b3229e73512051a750beb9692f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-510861948e\" (UID: \"ba8a64b3229e73512051a750beb9692f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.701281 kubelet[2214]: I0416 00:19:19.700570 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.701281 kubelet[2214]: I0416 00:19:19.700617 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:19.705473 kubelet[2214]: E0416 00:19:19.705379 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.164.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-510861948e?timeout=10s\": dial tcp 188.245.164.135:6443: connect: connection refused" interval="400ms" Apr 16 00:19:19.894702 kubelet[2214]: I0416 00:19:19.894159 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.895441 kubelet[2214]: E0416 00:19:19.895380 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.164.135:6443/api/v1/nodes\": dial tcp 188.245.164.135:6443: connect: connection refused" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:19.966291 containerd[1473]: time="2026-04-16T00:19:19.966090312Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-510861948e,Uid:ba8a64b3229e73512051a750beb9692f,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:19.976012 containerd[1473]: time="2026-04-16T00:19:19.975957287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-510861948e,Uid:77acbaa1a84a9b15dfdf36986b42f970,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:19.981754 kubelet[2214]: E0416 00:19:19.981405 2214 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.164.135:6443/api/v1/namespaces/default/events\": dial tcp 188.245.164.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-510861948e.18a6ae4f7dc9d76d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-510861948e,UID:ci-4081-3-6-n-510861948e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-510861948e,},FirstTimestamp:2026-04-16 00:19:19.480268653 +0000 UTC m=+0.649574788,LastTimestamp:2026-04-16 00:19:19.480268653 +0000 UTC m=+0.649574788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-510861948e,}" Apr 16 00:19:19.982534 containerd[1473]: time="2026-04-16T00:19:19.982186533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-510861948e,Uid:7e51adf6e0d1ce23a78091207e061303,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:20.106575 kubelet[2214]: E0416 00:19:20.106514 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.164.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-510861948e?timeout=10s\": dial tcp 188.245.164.135:6443: connect: connection refused" interval="800ms" Apr 16 00:19:20.299513 kubelet[2214]: 
I0416 00:19:20.299474 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:20.300287 kubelet[2214]: E0416 00:19:20.300224 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.164.135:6443/api/v1/nodes\": dial tcp 188.245.164.135:6443: connect: connection refused" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:20.405066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060862105.mount: Deactivated successfully. Apr 16 00:19:20.421285 containerd[1473]: time="2026-04-16T00:19:20.420852806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:19:20.425071 containerd[1473]: time="2026-04-16T00:19:20.424994776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 16 00:19:20.427409 containerd[1473]: time="2026-04-16T00:19:20.426415144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:19:20.428605 containerd[1473]: time="2026-04-16T00:19:20.428515433Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:19:20.431620 containerd[1473]: time="2026-04-16T00:19:20.430715813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:19:20.432160 containerd[1473]: time="2026-04-16T00:19:20.432105938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 
16 00:19:20.434388 containerd[1473]: time="2026-04-16T00:19:20.434306878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 00:19:20.438604 containerd[1473]: time="2026-04-16T00:19:20.437573625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:19:20.438798 containerd[1473]: time="2026-04-16T00:19:20.438616428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 462.535447ms" Apr 16 00:19:20.443795 containerd[1473]: time="2026-04-16T00:19:20.443525209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 461.247984ms" Apr 16 00:19:20.451796 containerd[1473]: time="2026-04-16T00:19:20.451725619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.528414ms" Apr 16 00:19:20.603856 containerd[1473]: time="2026-04-16T00:19:20.603561625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:20.603856 containerd[1473]: time="2026-04-16T00:19:20.603651075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:20.604998 containerd[1473]: time="2026-04-16T00:19:20.604297952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:20.604998 containerd[1473]: time="2026-04-16T00:19:20.604367680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:20.604998 containerd[1473]: time="2026-04-16T00:19:20.604385642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.604998 containerd[1473]: time="2026-04-16T00:19:20.604485454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.605412 containerd[1473]: time="2026-04-16T00:19:20.603667997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.605412 containerd[1473]: time="2026-04-16T00:19:20.605227782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:20.605412 containerd[1473]: time="2026-04-16T00:19:20.605289429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:20.606915 containerd[1473]: time="2026-04-16T00:19:20.606165613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.607700 containerd[1473]: time="2026-04-16T00:19:20.607115125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.607700 containerd[1473]: time="2026-04-16T00:19:20.607401959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:20.640334 systemd[1]: Started cri-containerd-bb751df8193579a7825d8ed46ad5d6e2403f20242fb42acafce6b52ec58f2d4d.scope - libcontainer container bb751df8193579a7825d8ed46ad5d6e2403f20242fb42acafce6b52ec58f2d4d. Apr 16 00:19:20.653615 systemd[1]: Started cri-containerd-2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48.scope - libcontainer container 2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48. Apr 16 00:19:20.655913 systemd[1]: Started cri-containerd-ef29e3fa2a4d9828af4bc1b0ac9e3fd81cb1510d75d2aacf7e7e79d6a201dfae.scope - libcontainer container ef29e3fa2a4d9828af4bc1b0ac9e3fd81cb1510d75d2aacf7e7e79d6a201dfae. 
Apr 16 00:19:20.716570 containerd[1473]: time="2026-04-16T00:19:20.716436140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-510861948e,Uid:7e51adf6e0d1ce23a78091207e061303,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48\"" Apr 16 00:19:20.727260 containerd[1473]: time="2026-04-16T00:19:20.726735959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-510861948e,Uid:77acbaa1a84a9b15dfdf36986b42f970,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef29e3fa2a4d9828af4bc1b0ac9e3fd81cb1510d75d2aacf7e7e79d6a201dfae\"" Apr 16 00:19:20.731189 containerd[1473]: time="2026-04-16T00:19:20.731067552Z" level=info msg="CreateContainer within sandbox \"2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 00:19:20.731405 containerd[1473]: time="2026-04-16T00:19:20.731273016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-510861948e,Uid:ba8a64b3229e73512051a750beb9692f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb751df8193579a7825d8ed46ad5d6e2403f20242fb42acafce6b52ec58f2d4d\"" Apr 16 00:19:20.737100 containerd[1473]: time="2026-04-16T00:19:20.737009215Z" level=info msg="CreateContainer within sandbox \"ef29e3fa2a4d9828af4bc1b0ac9e3fd81cb1510d75d2aacf7e7e79d6a201dfae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 00:19:20.740184 containerd[1473]: time="2026-04-16T00:19:20.740139585Z" level=info msg="CreateContainer within sandbox \"bb751df8193579a7825d8ed46ad5d6e2403f20242fb42acafce6b52ec58f2d4d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 00:19:20.757617 containerd[1473]: time="2026-04-16T00:19:20.757561206Z" level=info msg="CreateContainer within sandbox 
\"2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97\"" Apr 16 00:19:20.760582 containerd[1473]: time="2026-04-16T00:19:20.758997296Z" level=info msg="StartContainer for \"9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97\"" Apr 16 00:19:20.770370 containerd[1473]: time="2026-04-16T00:19:20.770073407Z" level=info msg="CreateContainer within sandbox \"bb751df8193579a7825d8ed46ad5d6e2403f20242fb42acafce6b52ec58f2d4d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c\"" Apr 16 00:19:20.771422 containerd[1473]: time="2026-04-16T00:19:20.771379601Z" level=info msg="StartContainer for \"376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c\"" Apr 16 00:19:20.773736 containerd[1473]: time="2026-04-16T00:19:20.773664112Z" level=info msg="CreateContainer within sandbox \"ef29e3fa2a4d9828af4bc1b0ac9e3fd81cb1510d75d2aacf7e7e79d6a201dfae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aad35c39173fb7c2109adbc266f453867432e82bc2a48e260e6de25bbb578e29\"" Apr 16 00:19:20.774983 containerd[1473]: time="2026-04-16T00:19:20.774937262Z" level=info msg="StartContainer for \"aad35c39173fb7c2109adbc266f453867432e82bc2a48e260e6de25bbb578e29\"" Apr 16 00:19:20.818977 systemd[1]: Started cri-containerd-9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97.scope - libcontainer container 9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97. Apr 16 00:19:20.833289 systemd[1]: Started cri-containerd-376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c.scope - libcontainer container 376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c. 
Apr 16 00:19:20.845806 systemd[1]: Started cri-containerd-aad35c39173fb7c2109adbc266f453867432e82bc2a48e260e6de25bbb578e29.scope - libcontainer container aad35c39173fb7c2109adbc266f453867432e82bc2a48e260e6de25bbb578e29. Apr 16 00:19:20.903410 containerd[1473]: time="2026-04-16T00:19:20.899337422Z" level=info msg="StartContainer for \"9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97\" returns successfully" Apr 16 00:19:20.912409 kubelet[2214]: E0416 00:19:20.912366 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.164.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-510861948e?timeout=10s\": dial tcp 188.245.164.135:6443: connect: connection refused" interval="1.6s" Apr 16 00:19:20.933154 containerd[1473]: time="2026-04-16T00:19:20.932393053Z" level=info msg="StartContainer for \"376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c\" returns successfully" Apr 16 00:19:20.939134 containerd[1473]: time="2026-04-16T00:19:20.938807412Z" level=info msg="StartContainer for \"aad35c39173fb7c2109adbc266f453867432e82bc2a48e260e6de25bbb578e29\" returns successfully" Apr 16 00:19:20.948929 kubelet[2214]: E0416 00:19:20.948857 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.164.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 00:19:20.962171 kubelet[2214]: E0416 00:19:20.961432 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.164.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 
00:19:21.009334 kubelet[2214]: E0416 00:19:21.009275 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.164.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.164.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 00:19:21.102602 kubelet[2214]: I0416 00:19:21.102561 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:21.561057 kubelet[2214]: E0416 00:19:21.559848 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:21.563842 kubelet[2214]: E0416 00:19:21.563764 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:21.567621 kubelet[2214]: E0416 00:19:21.567585 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:22.567297 kubelet[2214]: E0416 00:19:22.567241 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:22.567734 kubelet[2214]: E0416 00:19:22.567701 2214 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-510861948e\" not found" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:22.905866 kubelet[2214]: E0416 00:19:22.904430 2214 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-510861948e\" not found" 
node="ci-4081-3-6-n-510861948e" Apr 16 00:19:22.925152 kubelet[2214]: I0416 00:19:22.925102 2214 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:23.005317 kubelet[2214]: I0416 00:19:23.004697 2214 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.014187 kubelet[2214]: E0416 00:19:23.014149 2214 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-510861948e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.014377 kubelet[2214]: I0416 00:19:23.014363 2214 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.020314 kubelet[2214]: E0416 00:19:23.020276 2214 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-510861948e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.020640 kubelet[2214]: I0416 00:19:23.020584 2214 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.023604 kubelet[2214]: E0416 00:19:23.023557 2214 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-510861948e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:23.476342 kubelet[2214]: I0416 00:19:23.476252 2214 apiserver.go:52] "Watching apiserver" Apr 16 00:19:23.500020 kubelet[2214]: I0416 00:19:23.499897 2214 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 00:19:25.311446 systemd[1]: Reloading requested from client PID 2494 ('systemctl') 
(unit session-7.scope)... Apr 16 00:19:25.311467 systemd[1]: Reloading... Apr 16 00:19:25.444167 zram_generator::config[2537]: No configuration found. Apr 16 00:19:25.557504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 00:19:25.645670 systemd[1]: Reloading finished in 333 ms. Apr 16 00:19:25.687637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:19:25.703545 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 00:19:25.704011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:25.704165 systemd[1]: kubelet.service: Consumed 1.085s CPU time, 120.8M memory peak, 0B memory swap peak. Apr 16 00:19:25.712494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:19:25.847329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:19:25.849817 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 00:19:25.902326 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 00:19:25.902326 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 00:19:25.902326 kubelet[2579]: I0416 00:19:25.901460 2579 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 00:19:25.916107 kubelet[2579]: I0416 00:19:25.914607 2579 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 00:19:25.916107 kubelet[2579]: I0416 00:19:25.914652 2579 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 00:19:25.916107 kubelet[2579]: I0416 00:19:25.914684 2579 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 00:19:25.916107 kubelet[2579]: I0416 00:19:25.914690 2579 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 00:19:25.916107 kubelet[2579]: I0416 00:19:25.914950 2579 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 00:19:25.916981 kubelet[2579]: I0416 00:19:25.916956 2579 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 00:19:25.920287 kubelet[2579]: I0416 00:19:25.920261 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 00:19:25.923388 kubelet[2579]: E0416 00:19:25.923347 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 00:19:25.923610 kubelet[2579]: I0416 00:19:25.923600 2579 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 00:19:25.929247 kubelet[2579]: I0416 00:19:25.929208 2579 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 00:19:25.929766 kubelet[2579]: I0416 00:19:25.929717 2579 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 00:19:25.930500 kubelet[2579]: I0416 00:19:25.929858 2579 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-510861948e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 00:19:25.930671 kubelet[2579]: I0416 00:19:25.930655 2579 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
00:19:25.930729 kubelet[2579]: I0416 00:19:25.930721 2579 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 00:19:25.931074 kubelet[2579]: I0416 00:19:25.930815 2579 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 00:19:25.931223 kubelet[2579]: I0416 00:19:25.931207 2579 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:19:25.931520 kubelet[2579]: I0416 00:19:25.931504 2579 kubelet.go:475] "Attempting to sync node with API server" Apr 16 00:19:25.931599 kubelet[2579]: I0416 00:19:25.931589 2579 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 00:19:25.931675 kubelet[2579]: I0416 00:19:25.931666 2579 kubelet.go:387] "Adding apiserver pod source" Apr 16 00:19:25.933064 kubelet[2579]: I0416 00:19:25.931722 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 00:19:25.937638 kubelet[2579]: I0416 00:19:25.937613 2579 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 00:19:25.940535 kubelet[2579]: I0416 00:19:25.940496 2579 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 00:19:25.940707 kubelet[2579]: I0416 00:19:25.940696 2579 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 00:19:25.943236 kubelet[2579]: I0416 00:19:25.943210 2579 server.go:1262] "Started kubelet" Apr 16 00:19:25.947097 kubelet[2579]: I0416 00:19:25.947070 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 00:19:25.962793 kubelet[2579]: I0416 00:19:25.962732 2579 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 00:19:25.966039 kubelet[2579]: I0416 00:19:25.964008 2579 server.go:310] "Adding debug handlers to 
kubelet server" Apr 16 00:19:25.967704 kubelet[2579]: I0416 00:19:25.967637 2579 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 00:19:25.967769 kubelet[2579]: I0416 00:19:25.967739 2579 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 00:19:25.967939 kubelet[2579]: I0416 00:19:25.967922 2579 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 00:19:25.968963 kubelet[2579]: I0416 00:19:25.968905 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 00:19:25.971834 kubelet[2579]: I0416 00:19:25.971791 2579 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 00:19:25.974214 kubelet[2579]: I0416 00:19:25.974152 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 00:19:25.974345 kubelet[2579]: I0416 00:19:25.974329 2579 reconciler.go:29] "Reconciler: start to sync state" Apr 16 00:19:25.984222 kubelet[2579]: I0416 00:19:25.984183 2579 factory.go:223] Registration of the systemd container factory successfully Apr 16 00:19:25.984388 kubelet[2579]: I0416 00:19:25.984319 2579 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 00:19:25.986885 kubelet[2579]: I0416 00:19:25.986653 2579 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 00:19:25.988430 kubelet[2579]: I0416 00:19:25.988402 2579 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 00:19:25.989027 kubelet[2579]: I0416 00:19:25.988658 2579 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 00:19:25.989027 kubelet[2579]: I0416 00:19:25.988691 2579 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 00:19:25.989027 kubelet[2579]: E0416 00:19:25.988748 2579 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 00:19:25.990513 kubelet[2579]: I0416 00:19:25.990473 2579 factory.go:223] Registration of the containerd container factory successfully Apr 16 00:19:26.053069 kubelet[2579]: I0416 00:19:26.052884 2579 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 00:19:26.053069 kubelet[2579]: I0416 00:19:26.052903 2579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 00:19:26.053069 kubelet[2579]: I0416 00:19:26.052928 2579 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053084 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053094 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053112 2579 policy_none.go:49] "None policy: Start" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053120 2579 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053128 2579 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053301 2579 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 00:19:26.053382 kubelet[2579]: I0416 00:19:26.053315 2579 policy_none.go:47] "Start" Apr 16 00:19:26.060642 kubelet[2579]: E0416 00:19:26.060079 2579 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 00:19:26.060642 kubelet[2579]: I0416 00:19:26.060302 2579 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 00:19:26.060642 kubelet[2579]: I0416 00:19:26.060314 2579 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 00:19:26.060884 kubelet[2579]: I0416 00:19:26.060672 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 00:19:26.066582 kubelet[2579]: E0416 00:19:26.065882 2579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 00:19:26.089481 kubelet[2579]: I0416 00:19:26.089429 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.090161 kubelet[2579]: I0416 00:19:26.090125 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.090543 kubelet[2579]: I0416 00:19:26.090514 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.175987 kubelet[2579]: I0416 00:19:26.175567 2579 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:26.185069 kubelet[2579]: I0416 00:19:26.184845 2579 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:26.185069 kubelet[2579]: I0416 00:19:26.184942 2579 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-510861948e" Apr 16 00:19:26.275918 kubelet[2579]: I0416 00:19:26.275555 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-ca-certs\") 
pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.275918 kubelet[2579]: I0416 00:19:26.275676 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.275918 kubelet[2579]: I0416 00:19:26.275714 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.275918 kubelet[2579]: I0416 00:19:26.275763 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba8a64b3229e73512051a750beb9692f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-510861948e\" (UID: \"ba8a64b3229e73512051a750beb9692f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.275918 kubelet[2579]: I0416 00:19:26.275786 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77acbaa1a84a9b15dfdf36986b42f970-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-510861948e\" (UID: \"77acbaa1a84a9b15dfdf36986b42f970\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.276273 kubelet[2579]: I0416 00:19:26.275808 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.276273 kubelet[2579]: I0416 00:19:26.275826 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.276273 kubelet[2579]: I0416 00:19:26.275850 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.276273 kubelet[2579]: I0416 00:19:26.275879 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e51adf6e0d1ce23a78091207e061303-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-510861948e\" (UID: \"7e51adf6e0d1ce23a78091207e061303\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" Apr 16 00:19:26.310895 sudo[2619]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 16 00:19:26.311756 sudo[2619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 16 00:19:26.823266 sudo[2619]: pam_unix(sudo:session): session closed for user root Apr 16 00:19:26.936724 kubelet[2579]: I0416 00:19:26.936635 
2579 apiserver.go:52] "Watching apiserver" Apr 16 00:19:26.975351 kubelet[2579]: I0416 00:19:26.975234 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 00:19:27.068529 kubelet[2579]: I0416 00:19:27.068166 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-510861948e" podStartSLOduration=1.068148534 podStartE2EDuration="1.068148534s" podCreationTimestamp="2026-04-16 00:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:27.067418386 +0000 UTC m=+1.213893364" watchObservedRunningTime="2026-04-16 00:19:27.068148534 +0000 UTC m=+1.214623592" Apr 16 00:19:27.109499 kubelet[2579]: I0416 00:19:27.109311 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-510861948e" podStartSLOduration=1.109292182 podStartE2EDuration="1.109292182s" podCreationTimestamp="2026-04-16 00:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:27.092374136 +0000 UTC m=+1.238849154" watchObservedRunningTime="2026-04-16 00:19:27.109292182 +0000 UTC m=+1.255767200" Apr 16 00:19:27.109499 kubelet[2579]: I0416 00:19:27.109407 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-510861948e" podStartSLOduration=1.109402272 podStartE2EDuration="1.109402272s" podCreationTimestamp="2026-04-16 00:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:27.107502736 +0000 UTC m=+1.253977834" watchObservedRunningTime="2026-04-16 00:19:27.109402272 +0000 UTC m=+1.255877290" Apr 16 00:19:28.938868 sudo[1706]: pam_unix(sudo:session): session 
closed for user root Apr 16 00:19:28.955831 sshd[1703]: pam_unix(sshd:session): session closed for user core Apr 16 00:19:28.961253 systemd[1]: sshd@6-188.245.164.135:22-4.175.71.9:39434.service: Deactivated successfully. Apr 16 00:19:28.964658 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 00:19:28.965325 systemd[1]: session-7.scope: Consumed 9.491s CPU time, 151.0M memory peak, 0B memory swap peak. Apr 16 00:19:28.966412 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Apr 16 00:19:28.968172 systemd-logind[1451]: Removed session 7. Apr 16 00:19:31.531212 kubelet[2579]: I0416 00:19:31.531139 2579 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 00:19:31.531888 containerd[1473]: time="2026-04-16T00:19:31.531684091Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 00:19:31.532304 kubelet[2579]: I0416 00:19:31.531918 2579 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 00:19:32.574824 systemd[1]: Created slice kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice - libcontainer container kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice. Apr 16 00:19:32.586930 systemd[1]: Created slice kubepods-besteffort-podaf229a1c_bb6d_42b2_ad4b_b14ea9095d16.slice - libcontainer container kubepods-besteffort-podaf229a1c_bb6d_42b2_ad4b_b14ea9095d16.slice. 
Apr 16 00:19:32.618855 kubelet[2579]: I0416 00:19:32.618811 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cni-path\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619424 kubelet[2579]: I0416 00:19:32.619397 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-etc-cni-netd\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619485 kubelet[2579]: I0416 00:19:32.619436 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f96311cf-7956-4b22-b9eb-4b556920cc59-clustermesh-secrets\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619485 kubelet[2579]: I0416 00:19:32.619468 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-net\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619557 kubelet[2579]: I0416 00:19:32.619488 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-kernel\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619557 kubelet[2579]: I0416 00:19:32.619503 2579 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-hubble-tls\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619557 kubelet[2579]: I0416 00:19:32.619520 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-run\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619557 kubelet[2579]: I0416 00:19:32.619534 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-cgroup\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619557 kubelet[2579]: I0416 00:19:32.619548 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-config-path\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619562 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbhd\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-kube-api-access-8lbhd\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619580 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-hostproc\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619596 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-xtables-lock\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619610 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af229a1c-bb6d-42b2-ad4b-b14ea9095d16-xtables-lock\") pod \"kube-proxy-lbtzt\" (UID: \"af229a1c-bb6d-42b2-ad4b-b14ea9095d16\") " pod="kube-system/kube-proxy-lbtzt" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619626 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af229a1c-bb6d-42b2-ad4b-b14ea9095d16-lib-modules\") pod \"kube-proxy-lbtzt\" (UID: \"af229a1c-bb6d-42b2-ad4b-b14ea9095d16\") " pod="kube-system/kube-proxy-lbtzt" Apr 16 00:19:32.619776 kubelet[2579]: I0416 00:19:32.619641 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-bpf-maps\") pod \"cilium-2bksj\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619936 kubelet[2579]: I0416 00:19:32.619699 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-lib-modules\") pod \"cilium-2bksj\" (UID: 
\"f96311cf-7956-4b22-b9eb-4b556920cc59\") " pod="kube-system/cilium-2bksj" Apr 16 00:19:32.619936 kubelet[2579]: I0416 00:19:32.619714 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af229a1c-bb6d-42b2-ad4b-b14ea9095d16-kube-proxy\") pod \"kube-proxy-lbtzt\" (UID: \"af229a1c-bb6d-42b2-ad4b-b14ea9095d16\") " pod="kube-system/kube-proxy-lbtzt" Apr 16 00:19:32.619936 kubelet[2579]: I0416 00:19:32.619732 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqtb\" (UniqueName: \"kubernetes.io/projected/af229a1c-bb6d-42b2-ad4b-b14ea9095d16-kube-api-access-7cqtb\") pod \"kube-proxy-lbtzt\" (UID: \"af229a1c-bb6d-42b2-ad4b-b14ea9095d16\") " pod="kube-system/kube-proxy-lbtzt" Apr 16 00:19:32.655147 systemd[1]: Created slice kubepods-besteffort-pod1c75bf38_dcf1_4fa3_b560_7e7b947e1c20.slice - libcontainer container kubepods-besteffort-pod1c75bf38_dcf1_4fa3_b560_7e7b947e1c20.slice. 
Apr 16 00:19:32.725547 kubelet[2579]: I0416 00:19:32.722617 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9q82\" (UniqueName: \"kubernetes.io/projected/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-kube-api-access-z9q82\") pod \"cilium-operator-6f9c7c5859-gndjw\" (UID: \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\") " pod="kube-system/cilium-operator-6f9c7c5859-gndjw" Apr 16 00:19:32.725547 kubelet[2579]: I0416 00:19:32.722846 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-gndjw\" (UID: \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\") " pod="kube-system/cilium-operator-6f9c7c5859-gndjw" Apr 16 00:19:32.888151 containerd[1473]: time="2026-04-16T00:19:32.886898891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bksj,Uid:f96311cf-7956-4b22-b9eb-4b556920cc59,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:32.899465 containerd[1473]: time="2026-04-16T00:19:32.899197556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbtzt,Uid:af229a1c-bb6d-42b2-ad4b-b14ea9095d16,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:32.919292 containerd[1473]: time="2026-04-16T00:19:32.919078109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:32.919292 containerd[1473]: time="2026-04-16T00:19:32.919142714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:32.919292 containerd[1473]: time="2026-04-16T00:19:32.919168636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:32.919651 containerd[1473]: time="2026-04-16T00:19:32.919264524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:32.929917 containerd[1473]: time="2026-04-16T00:19:32.929780326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:32.929917 containerd[1473]: time="2026-04-16T00:19:32.929853972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:32.930296 containerd[1473]: time="2026-04-16T00:19:32.930056869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:32.930296 containerd[1473]: time="2026-04-16T00:19:32.930187999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:32.947269 systemd[1]: Started cri-containerd-348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277.scope - libcontainer container 348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277. Apr 16 00:19:32.960288 systemd[1]: Started cri-containerd-01de447b48d601fbb4bb3dfc936acacb40cdf59fced94e531f248cc109093ef5.scope - libcontainer container 01de447b48d601fbb4bb3dfc936acacb40cdf59fced94e531f248cc109093ef5. 
Apr 16 00:19:32.965174 containerd[1473]: time="2026-04-16T00:19:32.965125998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-gndjw,Uid:1c75bf38-dcf1-4fa3-b560-7e7b947e1c20,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:33.005925 containerd[1473]: time="2026-04-16T00:19:33.005874971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bksj,Uid:f96311cf-7956-4b22-b9eb-4b556920cc59,Namespace:kube-system,Attempt:0,} returns sandbox id \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\"" Apr 16 00:19:33.009063 containerd[1473]: time="2026-04-16T00:19:33.008998855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 00:19:33.020302 containerd[1473]: time="2026-04-16T00:19:33.020254334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbtzt,Uid:af229a1c-bb6d-42b2-ad4b-b14ea9095d16,Namespace:kube-system,Attempt:0,} returns sandbox id \"01de447b48d601fbb4bb3dfc936acacb40cdf59fced94e531f248cc109093ef5\"" Apr 16 00:19:33.028154 containerd[1473]: time="2026-04-16T00:19:33.028022260Z" level=info msg="CreateContainer within sandbox \"01de447b48d601fbb4bb3dfc936acacb40cdf59fced94e531f248cc109093ef5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 00:19:33.030260 containerd[1473]: time="2026-04-16T00:19:33.030166908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:33.030260 containerd[1473]: time="2026-04-16T00:19:33.030225952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:33.030627 containerd[1473]: time="2026-04-16T00:19:33.030237353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:33.030837 containerd[1473]: time="2026-04-16T00:19:33.030744393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:33.047070 containerd[1473]: time="2026-04-16T00:19:33.046758443Z" level=info msg="CreateContainer within sandbox \"01de447b48d601fbb4bb3dfc936acacb40cdf59fced94e531f248cc109093ef5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64b9209551ea2017b73cc043e89dace9f6d77a73964837fa22905309eebf98ea\"" Apr 16 00:19:33.047476 containerd[1473]: time="2026-04-16T00:19:33.047430496Z" level=info msg="StartContainer for \"64b9209551ea2017b73cc043e89dace9f6d77a73964837fa22905309eebf98ea\"" Apr 16 00:19:33.052760 systemd[1]: Started cri-containerd-87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451.scope - libcontainer container 87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451. Apr 16 00:19:33.102839 systemd[1]: Started cri-containerd-64b9209551ea2017b73cc043e89dace9f6d77a73964837fa22905309eebf98ea.scope - libcontainer container 64b9209551ea2017b73cc043e89dace9f6d77a73964837fa22905309eebf98ea. 
Apr 16 00:19:33.119948 containerd[1473]: time="2026-04-16T00:19:33.119904594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-gndjw,Uid:1c75bf38-dcf1-4fa3-b560-7e7b947e1c20,Namespace:kube-system,Attempt:0,} returns sandbox id \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\"" Apr 16 00:19:33.148893 containerd[1473]: time="2026-04-16T00:19:33.148628957Z" level=info msg="StartContainer for \"64b9209551ea2017b73cc043e89dace9f6d77a73964837fa22905309eebf98ea\" returns successfully" Apr 16 00:19:35.468812 kubelet[2579]: I0416 00:19:35.468601 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbtzt" podStartSLOduration=3.46858184 podStartE2EDuration="3.46858184s" podCreationTimestamp="2026-04-16 00:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:34.104883457 +0000 UTC m=+8.251358475" watchObservedRunningTime="2026-04-16 00:19:35.46858184 +0000 UTC m=+9.615056858" Apr 16 00:19:36.668200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226210310.mount: Deactivated successfully. 
Apr 16 00:19:38.194125 containerd[1473]: time="2026-04-16T00:19:38.193559595Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:38.200143 containerd[1473]: time="2026-04-16T00:19:38.199355199Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 16 00:19:38.201569 containerd[1473]: time="2026-04-16T00:19:38.201492748Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:38.205785 containerd[1473]: time="2026-04-16T00:19:38.205494106Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.196274994s" Apr 16 00:19:38.205785 containerd[1473]: time="2026-04-16T00:19:38.205579872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 16 00:19:38.211075 containerd[1473]: time="2026-04-16T00:19:38.210295201Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 00:19:38.214361 containerd[1473]: time="2026-04-16T00:19:38.214311560Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 00:19:38.272093 containerd[1473]: time="2026-04-16T00:19:38.271910491Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\"" Apr 16 00:19:38.273999 containerd[1473]: time="2026-04-16T00:19:38.273804463Z" level=info msg="StartContainer for \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\"" Apr 16 00:19:38.314403 systemd[1]: Started cri-containerd-d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b.scope - libcontainer container d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b. Apr 16 00:19:38.367408 systemd[1]: cri-containerd-d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b.scope: Deactivated successfully. Apr 16 00:19:38.384219 containerd[1473]: time="2026-04-16T00:19:38.383808002Z" level=info msg="StartContainer for \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\" returns successfully" Apr 16 00:19:38.630656 containerd[1473]: time="2026-04-16T00:19:38.630385371Z" level=info msg="shim disconnected" id=d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b namespace=k8s.io Apr 16 00:19:38.630656 containerd[1473]: time="2026-04-16T00:19:38.630459336Z" level=warning msg="cleaning up after shim disconnected" id=d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b namespace=k8s.io Apr 16 00:19:38.630656 containerd[1473]: time="2026-04-16T00:19:38.630469137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:19:39.110141 containerd[1473]: time="2026-04-16T00:19:39.109870406Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 00:19:39.128164 
containerd[1473]: time="2026-04-16T00:19:39.127384841Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\"" Apr 16 00:19:39.128920 containerd[1473]: time="2026-04-16T00:19:39.128620606Z" level=info msg="StartContainer for \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\"" Apr 16 00:19:39.169479 systemd[1]: Started cri-containerd-65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5.scope - libcontainer container 65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5. Apr 16 00:19:39.204773 containerd[1473]: time="2026-04-16T00:19:39.204717479Z" level=info msg="StartContainer for \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\" returns successfully" Apr 16 00:19:39.222651 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 00:19:39.223009 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:19:39.223329 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:19:39.230304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:19:39.234609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b-rootfs.mount: Deactivated successfully. Apr 16 00:19:39.235799 systemd[1]: cri-containerd-65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5.scope: Deactivated successfully. Apr 16 00:19:39.257460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5-rootfs.mount: Deactivated successfully. 
Apr 16 00:19:39.264410 containerd[1473]: time="2026-04-16T00:19:39.264352748Z" level=info msg="shim disconnected" id=65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5 namespace=k8s.io Apr 16 00:19:39.264410 containerd[1473]: time="2026-04-16T00:19:39.264404792Z" level=warning msg="cleaning up after shim disconnected" id=65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5 namespace=k8s.io Apr 16 00:19:39.264410 containerd[1473]: time="2026-04-16T00:19:39.264413873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:19:39.269150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:19:39.756270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804045376.mount: Deactivated successfully. Apr 16 00:19:40.120617 containerd[1473]: time="2026-04-16T00:19:40.120451176Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 00:19:40.154050 containerd[1473]: time="2026-04-16T00:19:40.153990621Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\"" Apr 16 00:19:40.154712 containerd[1473]: time="2026-04-16T00:19:40.154587261Z" level=info msg="StartContainer for \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\"" Apr 16 00:19:40.195380 systemd[1]: Started cri-containerd-e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e.scope - libcontainer container e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e. Apr 16 00:19:40.234200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408163905.mount: Deactivated successfully. 
Apr 16 00:19:40.245760 systemd[1]: cri-containerd-e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e.scope: Deactivated successfully. Apr 16 00:19:40.248962 containerd[1473]: time="2026-04-16T00:19:40.248829930Z" level=info msg="StartContainer for \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\" returns successfully" Apr 16 00:19:40.287345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e-rootfs.mount: Deactivated successfully. Apr 16 00:19:40.332007 containerd[1473]: time="2026-04-16T00:19:40.331947334Z" level=info msg="shim disconnected" id=e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e namespace=k8s.io Apr 16 00:19:40.332722 containerd[1473]: time="2026-04-16T00:19:40.332525652Z" level=warning msg="cleaning up after shim disconnected" id=e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e namespace=k8s.io Apr 16 00:19:40.332722 containerd[1473]: time="2026-04-16T00:19:40.332554174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:19:40.384659 containerd[1473]: time="2026-04-16T00:19:40.384487011Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:40.386398 containerd[1473]: time="2026-04-16T00:19:40.386330614Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 16 00:19:40.387328 containerd[1473]: time="2026-04-16T00:19:40.387242235Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:19:40.389172 containerd[1473]: time="2026-04-16T00:19:40.388977311Z" level=info msg="Pulled 
image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.178628067s" Apr 16 00:19:40.389172 containerd[1473]: time="2026-04-16T00:19:40.389024835Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 16 00:19:40.395353 containerd[1473]: time="2026-04-16T00:19:40.395169526Z" level=info msg="CreateContainer within sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 16 00:19:40.417432 containerd[1473]: time="2026-04-16T00:19:40.417376773Z" level=info msg="CreateContainer within sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\"" Apr 16 00:19:40.418468 containerd[1473]: time="2026-04-16T00:19:40.418183587Z" level=info msg="StartContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\"" Apr 16 00:19:40.449308 systemd[1]: Started cri-containerd-6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f.scope - libcontainer container 6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f. 
Apr 16 00:19:40.478987 containerd[1473]: time="2026-04-16T00:19:40.478904571Z" level=info msg="StartContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" returns successfully" Apr 16 00:19:41.125860 containerd[1473]: time="2026-04-16T00:19:41.124996952Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 00:19:41.148565 containerd[1473]: time="2026-04-16T00:19:41.146730380Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\"" Apr 16 00:19:41.148716 containerd[1473]: time="2026-04-16T00:19:41.148567661Z" level=info msg="StartContainer for \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\"" Apr 16 00:19:41.199272 systemd[1]: Started cri-containerd-78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d.scope - libcontainer container 78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d. Apr 16 00:19:41.241850 systemd[1]: cri-containerd-78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d.scope: Deactivated successfully. 
Apr 16 00:19:41.246653 containerd[1473]: time="2026-04-16T00:19:41.246549941Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice/cri-containerd-78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d.scope/memory.events\": no such file or directory" Apr 16 00:19:41.247419 containerd[1473]: time="2026-04-16T00:19:41.247289349Z" level=info msg="StartContainer for \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\" returns successfully" Apr 16 00:19:41.279542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d-rootfs.mount: Deactivated successfully. Apr 16 00:19:41.293751 containerd[1473]: time="2026-04-16T00:19:41.293450783Z" level=info msg="shim disconnected" id=78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d namespace=k8s.io Apr 16 00:19:41.293751 containerd[1473]: time="2026-04-16T00:19:41.293576071Z" level=warning msg="cleaning up after shim disconnected" id=78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d namespace=k8s.io Apr 16 00:19:41.293751 containerd[1473]: time="2026-04-16T00:19:41.293585592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:19:42.136583 containerd[1473]: time="2026-04-16T00:19:42.136520999Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 00:19:42.157583 kubelet[2579]: I0416 00:19:42.157524 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-gndjw" podStartSLOduration=2.8891058210000002 podStartE2EDuration="10.157506754s" podCreationTimestamp="2026-04-16 00:19:32 +0000 UTC" firstStartedPulling="2026-04-16 
00:19:33.121614808 +0000 UTC m=+7.268089786" lastFinishedPulling="2026-04-16 00:19:40.390015701 +0000 UTC m=+14.536490719" observedRunningTime="2026-04-16 00:19:41.202325914 +0000 UTC m=+15.348800932" watchObservedRunningTime="2026-04-16 00:19:42.157506754 +0000 UTC m=+16.303981732" Apr 16 00:19:42.158481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056241456.mount: Deactivated successfully. Apr 16 00:19:42.163672 containerd[1473]: time="2026-04-16T00:19:42.163603108Z" level=info msg="CreateContainer within sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\"" Apr 16 00:19:42.164902 containerd[1473]: time="2026-04-16T00:19:42.164857829Z" level=info msg="StartContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\"" Apr 16 00:19:42.196247 systemd[1]: Started cri-containerd-b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296.scope - libcontainer container b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296. Apr 16 00:19:42.240304 containerd[1473]: time="2026-04-16T00:19:42.240239137Z" level=info msg="StartContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" returns successfully" Apr 16 00:19:42.339069 kubelet[2579]: I0416 00:19:42.335908 2579 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 00:19:42.385463 systemd[1]: Created slice kubepods-burstable-pod71b0e236_bbca_4585_bcb0_b183113f7e1b.slice - libcontainer container kubepods-burstable-pod71b0e236_bbca_4585_bcb0_b183113f7e1b.slice. 
Apr 16 00:19:42.393797 kubelet[2579]: I0416 00:19:42.393677 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71b0e236-bbca-4585-bcb0-b183113f7e1b-config-volume\") pod \"coredns-66bc5c9577-rsclf\" (UID: \"71b0e236-bbca-4585-bcb0-b183113f7e1b\") " pod="kube-system/coredns-66bc5c9577-rsclf" Apr 16 00:19:42.393797 kubelet[2579]: I0416 00:19:42.393721 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvq2x\" (UniqueName: \"kubernetes.io/projected/71b0e236-bbca-4585-bcb0-b183113f7e1b-kube-api-access-xvq2x\") pod \"coredns-66bc5c9577-rsclf\" (UID: \"71b0e236-bbca-4585-bcb0-b183113f7e1b\") " pod="kube-system/coredns-66bc5c9577-rsclf" Apr 16 00:19:42.397453 systemd[1]: Created slice kubepods-burstable-pod7e1d2fa5_83a9_4b78_9610_f63cf339de80.slice - libcontainer container kubepods-burstable-pod7e1d2fa5_83a9_4b78_9610_f63cf339de80.slice. 
Apr 16 00:19:42.494555 kubelet[2579]: I0416 00:19:42.494385 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjdc\" (UniqueName: \"kubernetes.io/projected/7e1d2fa5-83a9-4b78-9610-f63cf339de80-kube-api-access-cxjdc\") pod \"coredns-66bc5c9577-6xd7g\" (UID: \"7e1d2fa5-83a9-4b78-9610-f63cf339de80\") " pod="kube-system/coredns-66bc5c9577-6xd7g" Apr 16 00:19:42.494555 kubelet[2579]: I0416 00:19:42.494485 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e1d2fa5-83a9-4b78-9610-f63cf339de80-config-volume\") pod \"coredns-66bc5c9577-6xd7g\" (UID: \"7e1d2fa5-83a9-4b78-9610-f63cf339de80\") " pod="kube-system/coredns-66bc5c9577-6xd7g" Apr 16 00:19:42.695829 containerd[1473]: time="2026-04-16T00:19:42.694995306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rsclf,Uid:71b0e236-bbca-4585-bcb0-b183113f7e1b,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:42.704873 containerd[1473]: time="2026-04-16T00:19:42.704400473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6xd7g,Uid:7e1d2fa5-83a9-4b78-9610-f63cf339de80,Namespace:kube-system,Attempt:0,}" Apr 16 00:19:44.462409 systemd-networkd[1375]: cilium_host: Link UP Apr 16 00:19:44.464830 systemd-networkd[1375]: cilium_net: Link UP Apr 16 00:19:44.466897 systemd-networkd[1375]: cilium_net: Gained carrier Apr 16 00:19:44.467153 systemd-networkd[1375]: cilium_host: Gained carrier Apr 16 00:19:44.585377 systemd-networkd[1375]: cilium_vxlan: Link UP Apr 16 00:19:44.585577 systemd-networkd[1375]: cilium_vxlan: Gained carrier Apr 16 00:19:44.861759 systemd-networkd[1375]: cilium_host: Gained IPv6LL Apr 16 00:19:44.887278 kernel: NET: Registered PF_ALG protocol family Apr 16 00:19:45.301649 systemd-networkd[1375]: cilium_net: Gained IPv6LL Apr 16 00:19:45.635223 systemd-networkd[1375]: lxc_health: Link 
UP Apr 16 00:19:45.646696 systemd-networkd[1375]: lxc_health: Gained carrier Apr 16 00:19:45.685514 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Apr 16 00:19:45.792593 systemd-networkd[1375]: lxc471a66511887: Link UP Apr 16 00:19:45.797190 kernel: eth0: renamed from tmp2cc08 Apr 16 00:19:45.802690 systemd-networkd[1375]: lxc471a66511887: Gained carrier Apr 16 00:19:46.257339 systemd-networkd[1375]: lxcde4b8f66e9d3: Link UP Apr 16 00:19:46.268111 kernel: eth0: renamed from tmp44268 Apr 16 00:19:46.274392 systemd-networkd[1375]: lxcde4b8f66e9d3: Gained carrier Apr 16 00:19:46.837323 systemd-networkd[1375]: lxc471a66511887: Gained IPv6LL Apr 16 00:19:46.837679 systemd-networkd[1375]: lxc_health: Gained IPv6LL Apr 16 00:19:46.912465 kubelet[2579]: I0416 00:19:46.912383 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2bksj" podStartSLOduration=9.712862728 podStartE2EDuration="14.912365637s" podCreationTimestamp="2026-04-16 00:19:32 +0000 UTC" firstStartedPulling="2026-04-16 00:19:33.008007538 +0000 UTC m=+7.154482556" lastFinishedPulling="2026-04-16 00:19:38.207510447 +0000 UTC m=+12.353985465" observedRunningTime="2026-04-16 00:19:43.165488875 +0000 UTC m=+17.311963933" watchObservedRunningTime="2026-04-16 00:19:46.912365637 +0000 UTC m=+21.058840655" Apr 16 00:19:47.477383 systemd-networkd[1375]: lxcde4b8f66e9d3: Gained IPv6LL Apr 16 00:19:50.064127 containerd[1473]: time="2026-04-16T00:19:50.063813968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:50.064127 containerd[1473]: time="2026-04-16T00:19:50.063872691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:50.064127 containerd[1473]: time="2026-04-16T00:19:50.063883612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:50.067083 containerd[1473]: time="2026-04-16T00:19:50.063968736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:50.096507 systemd[1]: run-containerd-runc-k8s.io-2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e-runc.yGeZgm.mount: Deactivated successfully. Apr 16 00:19:50.106277 systemd[1]: Started cri-containerd-2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e.scope - libcontainer container 2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e. Apr 16 00:19:50.120859 containerd[1473]: time="2026-04-16T00:19:50.120343868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:19:50.120859 containerd[1473]: time="2026-04-16T00:19:50.120501197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:19:50.121411 containerd[1473]: time="2026-04-16T00:19:50.120567161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:50.121411 containerd[1473]: time="2026-04-16T00:19:50.120715809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:19:50.153261 systemd[1]: Started cri-containerd-44268d44ed02d265b430e64f0dab4c749d27146bf76350812b1b0010260ad9db.scope - libcontainer container 44268d44ed02d265b430e64f0dab4c749d27146bf76350812b1b0010260ad9db. 
Apr 16 00:19:50.195237 containerd[1473]: time="2026-04-16T00:19:50.195188144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6xd7g,Uid:7e1d2fa5-83a9-4b78-9610-f63cf339de80,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e\"" Apr 16 00:19:50.204388 containerd[1473]: time="2026-04-16T00:19:50.204348592Z" level=info msg="CreateContainer within sandbox \"2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:19:50.221702 containerd[1473]: time="2026-04-16T00:19:50.221662831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rsclf,Uid:71b0e236-bbca-4585-bcb0-b183113f7e1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"44268d44ed02d265b430e64f0dab4c749d27146bf76350812b1b0010260ad9db\"" Apr 16 00:19:50.229184 containerd[1473]: time="2026-04-16T00:19:50.229123941Z" level=info msg="CreateContainer within sandbox \"44268d44ed02d265b430e64f0dab4c749d27146bf76350812b1b0010260ad9db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:19:50.234680 containerd[1473]: time="2026-04-16T00:19:50.234585336Z" level=info msg="CreateContainer within sandbox \"2cc08fc9df94251c54f16408dad15d210a5eca864e1825c06604808cde16ef4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f54682b5398904a1f853d368b99bf385195e690e7067f638d805042ff309a841\"" Apr 16 00:19:50.250284 containerd[1473]: time="2026-04-16T00:19:50.250208477Z" level=info msg="StartContainer for \"f54682b5398904a1f853d368b99bf385195e690e7067f638d805042ff309a841\"" Apr 16 00:19:50.254944 containerd[1473]: time="2026-04-16T00:19:50.254853985Z" level=info msg="CreateContainer within sandbox \"44268d44ed02d265b430e64f0dab4c749d27146bf76350812b1b0010260ad9db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"457b3a9911d3070ed75cd743b83c22e28856355a9e82651da40726ef70684de7\"" Apr 16 00:19:50.258797 containerd[1473]: time="2026-04-16T00:19:50.258754330Z" level=info msg="StartContainer for \"457b3a9911d3070ed75cd743b83c22e28856355a9e82651da40726ef70684de7\"" Apr 16 00:19:50.292459 systemd[1]: Started cri-containerd-457b3a9911d3070ed75cd743b83c22e28856355a9e82651da40726ef70684de7.scope - libcontainer container 457b3a9911d3070ed75cd743b83c22e28856355a9e82651da40726ef70684de7. Apr 16 00:19:50.301276 systemd[1]: Started cri-containerd-f54682b5398904a1f853d368b99bf385195e690e7067f638d805042ff309a841.scope - libcontainer container f54682b5398904a1f853d368b99bf385195e690e7067f638d805042ff309a841. Apr 16 00:19:50.336109 containerd[1473]: time="2026-04-16T00:19:50.335096693Z" level=info msg="StartContainer for \"457b3a9911d3070ed75cd743b83c22e28856355a9e82651da40726ef70684de7\" returns successfully" Apr 16 00:19:50.339362 containerd[1473]: time="2026-04-16T00:19:50.339300055Z" level=info msg="StartContainer for \"f54682b5398904a1f853d368b99bf385195e690e7067f638d805042ff309a841\" returns successfully" Apr 16 00:19:51.032164 kubelet[2579]: I0416 00:19:51.031804 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 00:19:51.214666 kubelet[2579]: I0416 00:19:51.214152 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6xd7g" podStartSLOduration=19.214135852 podStartE2EDuration="19.214135852s" podCreationTimestamp="2026-04-16 00:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:51.186814534 +0000 UTC m=+25.333289552" watchObservedRunningTime="2026-04-16 00:19:51.214135852 +0000 UTC m=+25.360610870" Apr 16 00:19:51.241415 kubelet[2579]: I0416 00:19:51.240798 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rsclf" 
podStartSLOduration=19.240780612000002 podStartE2EDuration="19.240780612s" podCreationTimestamp="2026-04-16 00:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:19:51.214500633 +0000 UTC m=+25.360975651" watchObservedRunningTime="2026-04-16 00:19:51.240780612 +0000 UTC m=+25.387255630" Apr 16 00:20:37.440672 systemd[1]: Started sshd@7-188.245.164.135:22-45.148.10.192:36170.service - OpenSSH per-connection server daemon (45.148.10.192:36170). Apr 16 00:20:37.473262 sshd[3981]: Connection closed by 45.148.10.192 port 36170 Apr 16 00:20:37.469692 systemd[1]: sshd@7-188.245.164.135:22-45.148.10.192:36170.service: Deactivated successfully. Apr 16 00:21:39.579292 systemd[1]: Started sshd@8-188.245.164.135:22-4.175.71.9:40854.service - OpenSSH per-connection server daemon (4.175.71.9:40854). Apr 16 00:21:39.697147 sshd[3992]: Accepted publickey for core from 4.175.71.9 port 40854 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:39.700280 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:39.707324 systemd-logind[1451]: New session 8 of user core. Apr 16 00:21:39.714080 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 00:21:39.904344 sshd[3992]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:39.910634 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Apr 16 00:21:39.911626 systemd[1]: sshd@8-188.245.164.135:22-4.175.71.9:40854.service: Deactivated successfully. Apr 16 00:21:39.915162 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 00:21:39.918338 systemd-logind[1451]: Removed session 8. Apr 16 00:21:44.942409 systemd[1]: Started sshd@9-188.245.164.135:22-4.175.71.9:40858.service - OpenSSH per-connection server daemon (4.175.71.9:40858). 
Apr 16 00:21:45.060628 sshd[4006]: Accepted publickey for core from 4.175.71.9 port 40858 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:45.063346 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:45.073662 systemd-logind[1451]: New session 9 of user core. Apr 16 00:21:45.078469 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 00:21:45.246440 sshd[4006]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:45.251798 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Apr 16 00:21:45.251880 systemd[1]: sshd@9-188.245.164.135:22-4.175.71.9:40858.service: Deactivated successfully. Apr 16 00:21:45.254755 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 00:21:45.256140 systemd-logind[1451]: Removed session 9. Apr 16 00:21:50.286649 systemd[1]: Started sshd@10-188.245.164.135:22-4.175.71.9:44778.service - OpenSSH per-connection server daemon (4.175.71.9:44778). Apr 16 00:21:50.409073 sshd[4020]: Accepted publickey for core from 4.175.71.9 port 44778 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:50.410759 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:50.415818 systemd-logind[1451]: New session 10 of user core. Apr 16 00:21:50.424347 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 00:21:50.611689 sshd[4020]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:50.617193 systemd[1]: sshd@10-188.245.164.135:22-4.175.71.9:44778.service: Deactivated successfully. Apr 16 00:21:50.620010 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 00:21:50.621438 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Apr 16 00:21:50.622962 systemd-logind[1451]: Removed session 10. 
Apr 16 00:21:50.642507 systemd[1]: Started sshd@11-188.245.164.135:22-4.175.71.9:44792.service - OpenSSH per-connection server daemon (4.175.71.9:44792). Apr 16 00:21:50.772011 sshd[4034]: Accepted publickey for core from 4.175.71.9 port 44792 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:50.774196 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:50.782139 systemd-logind[1451]: New session 11 of user core. Apr 16 00:21:50.792492 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 00:21:51.009342 sshd[4034]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:51.019501 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Apr 16 00:21:51.019728 systemd[1]: sshd@11-188.245.164.135:22-4.175.71.9:44792.service: Deactivated successfully. Apr 16 00:21:51.025281 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 00:21:51.041758 systemd-logind[1451]: Removed session 11. Apr 16 00:21:51.051874 systemd[1]: Started sshd@12-188.245.164.135:22-4.175.71.9:44808.service - OpenSSH per-connection server daemon (4.175.71.9:44808). Apr 16 00:21:51.174092 sshd[4045]: Accepted publickey for core from 4.175.71.9 port 44808 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:51.177917 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:51.184504 systemd-logind[1451]: New session 12 of user core. Apr 16 00:21:51.191428 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 00:21:51.382510 sshd[4045]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:51.389480 systemd[1]: sshd@12-188.245.164.135:22-4.175.71.9:44808.service: Deactivated successfully. Apr 16 00:21:51.392863 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 00:21:51.395916 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. 
Apr 16 00:21:51.397082 systemd-logind[1451]: Removed session 12. Apr 16 00:21:56.428716 systemd[1]: Started sshd@13-188.245.164.135:22-4.175.71.9:36674.service - OpenSSH per-connection server daemon (4.175.71.9:36674). Apr 16 00:21:56.551120 sshd[4058]: Accepted publickey for core from 4.175.71.9 port 36674 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:21:56.553555 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:21:56.558655 systemd-logind[1451]: New session 13 of user core. Apr 16 00:21:56.567304 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 00:21:56.737096 sshd[4058]: pam_unix(sshd:session): session closed for user core Apr 16 00:21:56.743697 systemd[1]: sshd@13-188.245.164.135:22-4.175.71.9:36674.service: Deactivated successfully. Apr 16 00:21:56.747567 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 00:21:56.748832 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Apr 16 00:21:56.750208 systemd-logind[1451]: Removed session 13. Apr 16 00:22:01.767380 systemd[1]: Started sshd@14-188.245.164.135:22-4.175.71.9:36690.service - OpenSSH per-connection server daemon (4.175.71.9:36690). Apr 16 00:22:01.894832 sshd[4070]: Accepted publickey for core from 4.175.71.9 port 36690 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:01.897217 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:01.903083 systemd-logind[1451]: New session 14 of user core. Apr 16 00:22:01.912504 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 00:22:02.092528 sshd[4070]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:02.102096 systemd[1]: sshd@14-188.245.164.135:22-4.175.71.9:36690.service: Deactivated successfully. Apr 16 00:22:02.104931 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 16 00:22:02.107190 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Apr 16 00:22:02.120648 systemd-logind[1451]: Removed session 14. Apr 16 00:22:02.124397 systemd[1]: Started sshd@15-188.245.164.135:22-4.175.71.9:36704.service - OpenSSH per-connection server daemon (4.175.71.9:36704). Apr 16 00:22:02.250852 sshd[4083]: Accepted publickey for core from 4.175.71.9 port 36704 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:02.252391 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:02.257575 systemd-logind[1451]: New session 15 of user core. Apr 16 00:22:02.264293 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 00:22:02.520380 sshd[4083]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:02.525278 systemd[1]: sshd@15-188.245.164.135:22-4.175.71.9:36704.service: Deactivated successfully. Apr 16 00:22:02.529459 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 00:22:02.532543 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Apr 16 00:22:02.534402 systemd-logind[1451]: Removed session 15. Apr 16 00:22:02.551443 systemd[1]: Started sshd@16-188.245.164.135:22-4.175.71.9:36708.service - OpenSSH per-connection server daemon (4.175.71.9:36708). Apr 16 00:22:02.682808 sshd[4093]: Accepted publickey for core from 4.175.71.9 port 36708 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:02.684016 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:02.689243 systemd-logind[1451]: New session 16 of user core. Apr 16 00:22:02.698489 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 00:22:03.371724 sshd[4093]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:03.378370 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. 
Apr 16 00:22:03.378555 systemd[1]: sshd@16-188.245.164.135:22-4.175.71.9:36708.service: Deactivated successfully. Apr 16 00:22:03.383017 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 00:22:03.384632 systemd-logind[1451]: Removed session 16. Apr 16 00:22:03.415540 systemd[1]: Started sshd@17-188.245.164.135:22-4.175.71.9:36712.service - OpenSSH per-connection server daemon (4.175.71.9:36712). Apr 16 00:22:03.542477 sshd[4109]: Accepted publickey for core from 4.175.71.9 port 36712 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:03.546004 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:03.551852 systemd-logind[1451]: New session 17 of user core. Apr 16 00:22:03.556313 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 00:22:03.883143 sshd[4109]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:03.889436 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Apr 16 00:22:03.889851 systemd[1]: sshd@17-188.245.164.135:22-4.175.71.9:36712.service: Deactivated successfully. Apr 16 00:22:03.895634 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 00:22:03.900537 systemd-logind[1451]: Removed session 17. Apr 16 00:22:03.917469 systemd[1]: Started sshd@18-188.245.164.135:22-4.175.71.9:36722.service - OpenSSH per-connection server daemon (4.175.71.9:36722). Apr 16 00:22:04.050194 sshd[4121]: Accepted publickey for core from 4.175.71.9 port 36722 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:04.052929 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:04.059052 systemd-logind[1451]: New session 18 of user core. Apr 16 00:22:04.070486 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 16 00:22:04.248395 sshd[4121]: pam_unix(sshd:session): session closed for user core
Apr 16 00:22:04.255338 systemd[1]: sshd@18-188.245.164.135:22-4.175.71.9:36722.service: Deactivated successfully.
Apr 16 00:22:04.261326 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 00:22:04.268738 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Apr 16 00:22:04.271979 systemd-logind[1451]: Removed session 18.
Apr 16 00:22:09.286789 systemd[1]: Started sshd@19-188.245.164.135:22-4.175.71.9:56640.service - OpenSSH per-connection server daemon (4.175.71.9:56640).
Apr 16 00:22:09.412063 sshd[4138]: Accepted publickey for core from 4.175.71.9 port 56640 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:22:09.413800 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:22:09.421142 systemd-logind[1451]: New session 19 of user core.
Apr 16 00:22:09.432830 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 00:22:09.613195 sshd[4138]: pam_unix(sshd:session): session closed for user core
Apr 16 00:22:09.619642 systemd[1]: sshd@19-188.245.164.135:22-4.175.71.9:56640.service: Deactivated successfully.
Apr 16 00:22:09.625073 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 00:22:09.627075 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Apr 16 00:22:09.629328 systemd-logind[1451]: Removed session 19.
Apr 16 00:22:14.647440 systemd[1]: Started sshd@20-188.245.164.135:22-4.175.71.9:56652.service - OpenSSH per-connection server daemon (4.175.71.9:56652).
Apr 16 00:22:14.780093 sshd[4150]: Accepted publickey for core from 4.175.71.9 port 56652 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:22:14.781524 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:22:14.787967 systemd-logind[1451]: New session 20 of user core.
Apr 16 00:22:14.800434 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 00:22:14.972377 sshd[4150]: pam_unix(sshd:session): session closed for user core
Apr 16 00:22:14.977946 systemd[1]: sshd@20-188.245.164.135:22-4.175.71.9:56652.service: Deactivated successfully.
Apr 16 00:22:14.984185 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 00:22:14.985799 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Apr 16 00:22:15.005446 systemd[1]: Started sshd@21-188.245.164.135:22-4.175.71.9:56654.service - OpenSSH per-connection server daemon (4.175.71.9:56654).
Apr 16 00:22:15.007244 systemd-logind[1451]: Removed session 20.
Apr 16 00:22:15.123904 sshd[4163]: Accepted publickey for core from 4.175.71.9 port 56654 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M
Apr 16 00:22:15.125874 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:22:15.131599 systemd-logind[1451]: New session 21 of user core.
Apr 16 00:22:15.145671 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 00:22:17.524761 containerd[1473]: time="2026-04-16T00:22:17.524510700Z" level=info msg="StopContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" with timeout 30 (s)"
Apr 16 00:22:17.527969 containerd[1473]: time="2026-04-16T00:22:17.527536283Z" level=info msg="Stop container \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" with signal terminated"
Apr 16 00:22:17.539905 containerd[1473]: time="2026-04-16T00:22:17.539561499Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 00:22:17.546615 containerd[1473]: time="2026-04-16T00:22:17.546271694Z" level=info msg="StopContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" with timeout 2 (s)"
Apr 16 00:22:17.547433 containerd[1473]: time="2026-04-16T00:22:17.546983441Z" level=info msg="Stop container \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" with signal terminated"
Apr 16 00:22:17.557640 systemd-networkd[1375]: lxc_health: Link DOWN
Apr 16 00:22:17.557649 systemd-networkd[1375]: lxc_health: Lost carrier
Apr 16 00:22:17.605216 systemd[1]: cri-containerd-6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f.scope: Deactivated successfully.
Apr 16 00:22:17.622701 systemd[1]: cri-containerd-b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296.scope: Deactivated successfully.
Apr 16 00:22:17.624524 systemd[1]: cri-containerd-b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296.scope: Consumed 7.623s CPU time.
Apr 16 00:22:17.650835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f-rootfs.mount: Deactivated successfully.
Apr 16 00:22:17.660849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296-rootfs.mount: Deactivated successfully.
Apr 16 00:22:17.665782 containerd[1473]: time="2026-04-16T00:22:17.665452230Z" level=info msg="shim disconnected" id=6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f namespace=k8s.io
Apr 16 00:22:17.665782 containerd[1473]: time="2026-04-16T00:22:17.665514509Z" level=warning msg="cleaning up after shim disconnected" id=6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f namespace=k8s.io
Apr 16 00:22:17.665782 containerd[1473]: time="2026-04-16T00:22:17.665522709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:17.666504 containerd[1473]: time="2026-04-16T00:22:17.666335694Z" level=info msg="shim disconnected" id=b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296 namespace=k8s.io
Apr 16 00:22:17.666504 containerd[1473]: time="2026-04-16T00:22:17.666379693Z" level=warning msg="cleaning up after shim disconnected" id=b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296 namespace=k8s.io
Apr 16 00:22:17.666504 containerd[1473]: time="2026-04-16T00:22:17.666387293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:17.685268 containerd[1473]: time="2026-04-16T00:22:17.685223141Z" level=info msg="StopContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" returns successfully"
Apr 16 00:22:17.686370 containerd[1473]: time="2026-04-16T00:22:17.686323721Z" level=info msg="StopPodSandbox for \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\""
Apr 16 00:22:17.686636 containerd[1473]: time="2026-04-16T00:22:17.686615675Z" level=info msg="Container to stop \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.686794 containerd[1473]: time="2026-04-16T00:22:17.686699914Z" level=info msg="Container to stop \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.686794 containerd[1473]: time="2026-04-16T00:22:17.686717033Z" level=info msg="Container to stop \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.686794 containerd[1473]: time="2026-04-16T00:22:17.686728753Z" level=info msg="Container to stop \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.686794 containerd[1473]: time="2026-04-16T00:22:17.686737993Z" level=info msg="Container to stop \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.688758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277-shm.mount: Deactivated successfully.
Apr 16 00:22:17.694187 containerd[1473]: time="2026-04-16T00:22:17.694145975Z" level=info msg="StopContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" returns successfully"
Apr 16 00:22:17.694849 containerd[1473]: time="2026-04-16T00:22:17.694702124Z" level=info msg="StopPodSandbox for \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\""
Apr 16 00:22:17.694849 containerd[1473]: time="2026-04-16T00:22:17.694743404Z" level=info msg="Container to stop \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 00:22:17.698286 systemd[1]: cri-containerd-348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277.scope: Deactivated successfully.
Apr 16 00:22:17.708772 systemd[1]: cri-containerd-87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451.scope: Deactivated successfully.
Apr 16 00:22:17.734369 containerd[1473]: time="2026-04-16T00:22:17.734132509Z" level=info msg="shim disconnected" id=348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277 namespace=k8s.io
Apr 16 00:22:17.734369 containerd[1473]: time="2026-04-16T00:22:17.734200828Z" level=warning msg="cleaning up after shim disconnected" id=348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277 namespace=k8s.io
Apr 16 00:22:17.734369 containerd[1473]: time="2026-04-16T00:22:17.734210187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:17.746835 containerd[1473]: time="2026-04-16T00:22:17.746767113Z" level=info msg="shim disconnected" id=87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451 namespace=k8s.io
Apr 16 00:22:17.747144 containerd[1473]: time="2026-04-16T00:22:17.747121626Z" level=warning msg="cleaning up after shim disconnected" id=87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451 namespace=k8s.io
Apr 16 00:22:17.747227 containerd[1473]: time="2026-04-16T00:22:17.747213745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:17.754279 containerd[1473]: time="2026-04-16T00:22:17.754234094Z" level=info msg="TearDown network for sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" successfully"
Apr 16 00:22:17.754456 containerd[1473]: time="2026-04-16T00:22:17.754439010Z" level=info msg="StopPodSandbox for \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" returns successfully"
Apr 16 00:22:17.773331 containerd[1473]: time="2026-04-16T00:22:17.773258579Z" level=info msg="TearDown network for sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" successfully"
Apr 16 00:22:17.773627 containerd[1473]: time="2026-04-16T00:22:17.773581573Z" level=info msg="StopPodSandbox for \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" returns successfully"
Apr 16 00:22:17.934058 kubelet[2579]: I0416 00:22:17.932407 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f96311cf-7956-4b22-b9eb-4b556920cc59-clustermesh-secrets\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934556 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-net\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934616 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-hubble-tls\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934636 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-cgroup\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934659 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-cilium-config-path\") pod \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\" (UID: \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934681 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-hostproc\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.935696 kubelet[2579]: I0416 00:22:17.934702 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-etc-cni-netd\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934722 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-config-path\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934745 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lbhd\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-kube-api-access-8lbhd\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934764 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cni-path\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934783 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-kernel\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934803 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-bpf-maps\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.936792 kubelet[2579]: I0416 00:22:17.934821 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-lib-modules\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.938242 kubelet[2579]: I0416 00:22:17.934839 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-run\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.938242 kubelet[2579]: I0416 00:22:17.934855 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-xtables-lock\") pod \"f96311cf-7956-4b22-b9eb-4b556920cc59\" (UID: \"f96311cf-7956-4b22-b9eb-4b556920cc59\") "
Apr 16 00:22:17.938242 kubelet[2579]: I0416 00:22:17.934876 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9q82\" (UniqueName: \"kubernetes.io/projected/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-kube-api-access-z9q82\") pod \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\" (UID: \"1c75bf38-dcf1-4fa3-b560-7e7b947e1c20\") "
Apr 16 00:22:17.938242 kubelet[2579]: I0416 00:22:17.937349 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96311cf-7956-4b22-b9eb-4b556920cc59-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 00:22:17.941520 kubelet[2579]: I0416 00:22:17.941460 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cni-path" (OuterVolumeSpecName: "cni-path") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.941706 kubelet[2579]: I0416 00:22:17.941537 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.941706 kubelet[2579]: I0416 00:22:17.941560 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.941706 kubelet[2579]: I0416 00:22:17.941576 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.941706 kubelet[2579]: I0416 00:22:17.941593 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.941706 kubelet[2579]: I0416 00:22:17.941607 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.942025 kubelet[2579]: I0416 00:22:17.941631 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.942025 kubelet[2579]: I0416 00:22:17.941646 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.944058 kubelet[2579]: I0416 00:22:17.943154 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 00:22:17.944058 kubelet[2579]: I0416 00:22:17.943276 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-kube-api-access-z9q82" (OuterVolumeSpecName: "kube-api-access-z9q82") pod "1c75bf38-dcf1-4fa3-b560-7e7b947e1c20" (UID: "1c75bf38-dcf1-4fa3-b560-7e7b947e1c20"). InnerVolumeSpecName "kube-api-access-z9q82". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 00:22:17.944058 kubelet[2579]: I0416 00:22:17.943308 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-hostproc" (OuterVolumeSpecName: "hostproc") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.944376 kubelet[2579]: I0416 00:22:17.944351 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 00:22:17.945003 kubelet[2579]: I0416 00:22:17.944942 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 00:22:17.945536 kubelet[2579]: I0416 00:22:17.945488 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-kube-api-access-8lbhd" (OuterVolumeSpecName: "kube-api-access-8lbhd") pod "f96311cf-7956-4b22-b9eb-4b556920cc59" (UID: "f96311cf-7956-4b22-b9eb-4b556920cc59"). InnerVolumeSpecName "kube-api-access-8lbhd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 00:22:17.946359 kubelet[2579]: I0416 00:22:17.946319 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c75bf38-dcf1-4fa3-b560-7e7b947e1c20" (UID: "1c75bf38-dcf1-4fa3-b560-7e7b947e1c20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 00:22:18.000475 systemd[1]: Removed slice kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice - libcontainer container kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice.
Apr 16 00:22:18.000792 systemd[1]: kubepods-burstable-podf96311cf_7956_4b22_b9eb_4b556920cc59.slice: Consumed 7.718s CPU time.
Apr 16 00:22:18.003543 systemd[1]: Removed slice kubepods-besteffort-pod1c75bf38_dcf1_4fa3_b560_7e7b947e1c20.slice - libcontainer container kubepods-besteffort-pod1c75bf38_dcf1_4fa3_b560_7e7b947e1c20.slice.
Apr 16 00:22:18.035806 kubelet[2579]: I0416 00:22:18.035722 2579 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f96311cf-7956-4b22-b9eb-4b556920cc59-clustermesh-secrets\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.035806 kubelet[2579]: I0416 00:22:18.035785 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-net\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.035806 kubelet[2579]: I0416 00:22:18.035808 2579 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-hubble-tls\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.035806 kubelet[2579]: I0416 00:22:18.035826 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-cgroup\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035844 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-cilium-config-path\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035861 2579 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-hostproc\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035878 2579 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-etc-cni-netd\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035896 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-config-path\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035912 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8lbhd\" (UniqueName: \"kubernetes.io/projected/f96311cf-7956-4b22-b9eb-4b556920cc59-kube-api-access-8lbhd\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035932 2579 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cni-path\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035948 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036158 kubelet[2579]: I0416 00:22:18.035965 2579 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-bpf-maps\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036409 kubelet[2579]: I0416 00:22:18.035982 2579 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-lib-modules\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036409 kubelet[2579]: I0416 00:22:18.035997 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-cilium-run\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036409 kubelet[2579]: I0416 00:22:18.036013 2579 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f96311cf-7956-4b22-b9eb-4b556920cc59-xtables-lock\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.036409 kubelet[2579]: I0416 00:22:18.036066 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9q82\" (UniqueName: \"kubernetes.io/projected/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20-kube-api-access-z9q82\") on node \"ci-4081-3-6-n-510861948e\" DevicePath \"\""
Apr 16 00:22:18.513363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451-rootfs.mount: Deactivated successfully.
Apr 16 00:22:18.513616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451-shm.mount: Deactivated successfully.
Apr 16 00:22:18.513779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277-rootfs.mount: Deactivated successfully.
Apr 16 00:22:18.513916 systemd[1]: var-lib-kubelet-pods-1c75bf38\x2ddcf1\x2d4fa3\x2db560\x2d7e7b947e1c20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9q82.mount: Deactivated successfully.
Apr 16 00:22:18.514794 systemd[1]: var-lib-kubelet-pods-f96311cf\x2d7956\x2d4b22\x2db9eb\x2d4b556920cc59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lbhd.mount: Deactivated successfully.
Apr 16 00:22:18.515244 systemd[1]: var-lib-kubelet-pods-f96311cf\x2d7956\x2d4b22\x2db9eb\x2d4b556920cc59-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 16 00:22:18.515642 systemd[1]: var-lib-kubelet-pods-f96311cf\x2d7956\x2d4b22\x2db9eb\x2d4b556920cc59-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 16 00:22:18.614621 kubelet[2579]: I0416 00:22:18.614414 2579 scope.go:117] "RemoveContainer" containerID="b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296"
Apr 16 00:22:18.618369 containerd[1473]: time="2026-04-16T00:22:18.617910006Z" level=info msg="RemoveContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\""
Apr 16 00:22:18.631091 containerd[1473]: time="2026-04-16T00:22:18.630443064Z" level=info msg="RemoveContainer for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" returns successfully"
Apr 16 00:22:18.633983 kubelet[2579]: I0416 00:22:18.633802 2579 scope.go:117] "RemoveContainer" containerID="78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d"
Apr 16 00:22:18.638001 containerd[1473]: time="2026-04-16T00:22:18.637924931Z" level=info msg="RemoveContainer for \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\""
Apr 16 00:22:18.647457 containerd[1473]: time="2026-04-16T00:22:18.647227165Z" level=info msg="RemoveContainer for \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\" returns successfully"
Apr 16 00:22:18.648467 kubelet[2579]: I0416 00:22:18.648418 2579 scope.go:117] "RemoveContainer" containerID="e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e"
Apr 16 00:22:18.653897 containerd[1473]: time="2026-04-16T00:22:18.653543973Z" level=info msg="RemoveContainer for \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\""
Apr 16 00:22:18.657789 containerd[1473]: time="2026-04-16T00:22:18.657658540Z" level=info msg="RemoveContainer for \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\" returns successfully"
Apr 16 00:22:18.657954 kubelet[2579]: I0416 00:22:18.657933 2579 scope.go:117] "RemoveContainer" containerID="65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5"
Apr 16 00:22:18.660841 containerd[1473]: time="2026-04-16T00:22:18.660450890Z" level=info msg="RemoveContainer for \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\""
Apr 16 00:22:18.665291 containerd[1473]: time="2026-04-16T00:22:18.665245365Z" level=info msg="RemoveContainer for \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\" returns successfully"
Apr 16 00:22:18.666173 kubelet[2579]: I0416 00:22:18.666018 2579 scope.go:117] "RemoveContainer" containerID="d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b"
Apr 16 00:22:18.670446 containerd[1473]: time="2026-04-16T00:22:18.670391514Z" level=info msg="RemoveContainer for \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\""
Apr 16 00:22:18.675245 containerd[1473]: time="2026-04-16T00:22:18.675169669Z" level=info msg="RemoveContainer for \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\" returns successfully"
Apr 16 00:22:18.676083 kubelet[2579]: I0416 00:22:18.675595 2579 scope.go:117] "RemoveContainer" containerID="b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296"
Apr 16 00:22:18.677104 containerd[1473]: time="2026-04-16T00:22:18.676116092Z" level=error msg="ContainerStatus for \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\": not found"
Apr 16 00:22:18.678457 kubelet[2579]: E0416 00:22:18.678104 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\": not found" containerID="b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296"
Apr 16 00:22:18.678457 kubelet[2579]: I0416 00:22:18.678160 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296"} err="failed to get container status \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\": rpc error: code = NotFound desc = an error occurred when try to find container \"b61b9ed713fe93ca3bdd9705850f05234a5461622f2908279376e006bbd69296\": not found"
Apr 16 00:22:18.678457 kubelet[2579]: I0416 00:22:18.678203 2579 scope.go:117] "RemoveContainer" containerID="78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d"
Apr 16 00:22:18.679264 kubelet[2579]: E0416 00:22:18.678985 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\": not found" containerID="78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d"
Apr 16 00:22:18.679452 containerd[1473]: time="2026-04-16T00:22:18.678692846Z" level=error msg="ContainerStatus for \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\": not found"
Apr 16 00:22:18.679617 kubelet[2579]: I0416 00:22:18.679194 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d"} err="failed to get container status \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\": rpc error: code = NotFound desc = an error occurred when try to find container \"78561e3b3522176eab32eb1a38a61566e84d3b2c00fb0b08f2e8c5f0fe9cb30d\": not found"
Apr 16 00:22:18.679617 kubelet[2579]: I0416 00:22:18.679332 2579 scope.go:117] "RemoveContainer" containerID="e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e"
Apr 16 00:22:18.680214 containerd[1473]: time="2026-04-16T00:22:18.680138300Z" level=error msg="ContainerStatus for
\"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\": not found" Apr 16 00:22:18.680743 kubelet[2579]: E0416 00:22:18.680427 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\": not found" containerID="e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e" Apr 16 00:22:18.680743 kubelet[2579]: I0416 00:22:18.680454 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e"} err="failed to get container status \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9189a1805bfde633a086ba562af0b061c647df5c5fbfb5acd622be3a0b30d1e\": not found" Apr 16 00:22:18.680743 kubelet[2579]: I0416 00:22:18.680476 2579 scope.go:117] "RemoveContainer" containerID="65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5" Apr 16 00:22:18.681550 containerd[1473]: time="2026-04-16T00:22:18.681344079Z" level=error msg="ContainerStatus for \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\": not found" Apr 16 00:22:18.681869 kubelet[2579]: E0416 00:22:18.681615 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\": not found" 
containerID="65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5" Apr 16 00:22:18.681869 kubelet[2579]: I0416 00:22:18.681645 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5"} err="failed to get container status \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"65b9478c6de4aa8133ad2337fdcfeedb9dd633d502e7dd3daa68e6615bc26cb5\": not found" Apr 16 00:22:18.681869 kubelet[2579]: I0416 00:22:18.681664 2579 scope.go:117] "RemoveContainer" containerID="d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b" Apr 16 00:22:18.682546 containerd[1473]: time="2026-04-16T00:22:18.682419420Z" level=error msg="ContainerStatus for \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\": not found" Apr 16 00:22:18.682887 kubelet[2579]: E0416 00:22:18.682784 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\": not found" containerID="d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b" Apr 16 00:22:18.683281 kubelet[2579]: I0416 00:22:18.682851 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b"} err="failed to get container status \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5d835df17e9f2b252557fcbf24c60ffc692b05e28ec1891cd843fd9c7e6ed7b\": not found" Apr 16 
00:22:18.683281 kubelet[2579]: I0416 00:22:18.683080 2579 scope.go:117] "RemoveContainer" containerID="6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f" Apr 16 00:22:18.686584 containerd[1473]: time="2026-04-16T00:22:18.686497107Z" level=info msg="RemoveContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\"" Apr 16 00:22:18.690624 containerd[1473]: time="2026-04-16T00:22:18.690452797Z" level=info msg="RemoveContainer for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" returns successfully" Apr 16 00:22:18.691141 kubelet[2579]: I0416 00:22:18.690788 2579 scope.go:117] "RemoveContainer" containerID="6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f" Apr 16 00:22:18.691487 containerd[1473]: time="2026-04-16T00:22:18.691381901Z" level=error msg="ContainerStatus for \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\": not found" Apr 16 00:22:18.691636 kubelet[2579]: E0416 00:22:18.691600 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\": not found" containerID="6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f" Apr 16 00:22:18.691691 kubelet[2579]: I0416 00:22:18.691659 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f"} err="failed to get container status \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b2ade3396dcef18c14455f2b8f5885faf6ddde312b1201521d5544ffdf95c5f\": not found" Apr 16 00:22:19.436451 sshd[4163]: 
pam_unix(sshd:session): session closed for user core Apr 16 00:22:19.442320 systemd[1]: sshd@21-188.245.164.135:22-4.175.71.9:56654.service: Deactivated successfully. Apr 16 00:22:19.446159 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 00:22:19.446402 systemd[1]: session-21.scope: Consumed 1.593s CPU time. Apr 16 00:22:19.447963 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Apr 16 00:22:19.450289 systemd-logind[1451]: Removed session 21. Apr 16 00:22:19.467555 systemd[1]: Started sshd@22-188.245.164.135:22-4.175.71.9:55652.service - OpenSSH per-connection server daemon (4.175.71.9:55652). Apr 16 00:22:19.593360 sshd[4322]: Accepted publickey for core from 4.175.71.9 port 55652 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:19.595751 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:19.603158 systemd-logind[1451]: New session 22 of user core. Apr 16 00:22:19.608391 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 00:22:19.993729 kubelet[2579]: I0416 00:22:19.993679 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c75bf38-dcf1-4fa3-b560-7e7b947e1c20" path="/var/lib/kubelet/pods/1c75bf38-dcf1-4fa3-b560-7e7b947e1c20/volumes" Apr 16 00:22:19.995083 kubelet[2579]: I0416 00:22:19.994460 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f96311cf-7956-4b22-b9eb-4b556920cc59" path="/var/lib/kubelet/pods/f96311cf-7956-4b22-b9eb-4b556920cc59/volumes" Apr 16 00:22:21.131924 kubelet[2579]: E0416 00:22:21.131838 2579 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 00:22:21.246351 sshd[4322]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:21.252929 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. 
Apr 16 00:22:21.254430 systemd[1]: sshd@22-188.245.164.135:22-4.175.71.9:55652.service: Deactivated successfully. Apr 16 00:22:21.261526 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 00:22:21.262514 systemd[1]: session-22.scope: Consumed 1.403s CPU time. Apr 16 00:22:21.280406 systemd-logind[1451]: Removed session 22. Apr 16 00:22:21.291447 systemd[1]: Started sshd@23-188.245.164.135:22-4.175.71.9:55658.service - OpenSSH per-connection server daemon (4.175.71.9:55658). Apr 16 00:22:21.331951 systemd[1]: Created slice kubepods-burstable-pode430a925_c270_4a34_9c86_07312455bcba.slice - libcontainer container kubepods-burstable-pode430a925_c270_4a34_9c86_07312455bcba.slice. Apr 16 00:22:21.335529 kubelet[2579]: E0416 00:22:21.335467 2579 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-6-n-510861948e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-510861948e' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Apr 16 00:22:21.338155 kubelet[2579]: E0416 00:22:21.337152 2579 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-gpgff\" is forbidden: User \"system:node:ci-4081-3-6-n-510861948e\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-510861948e' and this object" podUID="e430a925-c270-4a34-9c86-07312455bcba" pod="kube-system/cilium-gpgff" Apr 16 00:22:21.338155 kubelet[2579]: E0416 00:22:21.337269 2579 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-510861948e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-510861948e' 
and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Apr 16 00:22:21.338155 kubelet[2579]: E0416 00:22:21.337440 2579 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-6-n-510861948e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-510861948e' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Apr 16 00:22:21.339184 kubelet[2579]: E0416 00:22:21.339129 2579 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4081-3-6-n-510861948e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-510861948e' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret" Apr 16 00:22:21.430875 sshd[4333]: Accepted publickey for core from 4.175.71.9 port 55658 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:21.434310 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:21.439808 systemd-logind[1451]: New session 23 of user core. Apr 16 00:22:21.446459 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 16 00:22:21.461445 kubelet[2579]: I0416 00:22:21.461344 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-etc-cni-netd\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461445 kubelet[2579]: I0416 00:22:21.461440 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-lib-modules\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461476 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-host-proc-sys-kernel\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461510 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e430a925-c270-4a34-9c86-07312455bcba-clustermesh-secrets\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461542 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-cilium-cgroup\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461572 2579 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-host-proc-sys-net\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461612 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e430a925-c270-4a34-9c86-07312455bcba-hubble-tls\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461719 kubelet[2579]: I0416 00:22:21.461643 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-cilium-run\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461672 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-hostproc\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461729 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e430a925-c270-4a34-9c86-07312455bcba-cilium-ipsec-secrets\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461769 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-xtables-lock\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461801 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e430a925-c270-4a34-9c86-07312455bcba-cilium-config-path\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461833 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpsp\" (UniqueName: \"kubernetes.io/projected/e430a925-c270-4a34-9c86-07312455bcba-kube-api-access-7xpsp\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.461998 kubelet[2579]: I0416 00:22:21.461869 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-bpf-maps\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.462314 kubelet[2579]: I0416 00:22:21.461902 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e430a925-c270-4a34-9c86-07312455bcba-cni-path\") pod \"cilium-gpgff\" (UID: \"e430a925-c270-4a34-9c86-07312455bcba\") " pod="kube-system/cilium-gpgff" Apr 16 00:22:21.550591 sshd[4333]: pam_unix(sshd:session): session closed for user core Apr 16 00:22:21.557453 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Apr 16 00:22:21.557735 systemd[1]: sshd@23-188.245.164.135:22-4.175.71.9:55658.service: Deactivated successfully. 
Apr 16 00:22:21.565608 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 00:22:21.584225 systemd[1]: Started sshd@24-188.245.164.135:22-4.175.71.9:55674.service - OpenSSH per-connection server daemon (4.175.71.9:55674). Apr 16 00:22:21.590867 systemd-logind[1451]: Removed session 23. Apr 16 00:22:21.709186 sshd[4341]: Accepted publickey for core from 4.175.71.9 port 55674 ssh2: RSA SHA256:es51nA5SMoytRkY/yLSoOOH2KLr0mt1MIHk0lTLGO0M Apr 16 00:22:21.711147 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:22:21.718532 systemd-logind[1451]: New session 24 of user core. Apr 16 00:22:21.722370 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 00:22:22.565258 kubelet[2579]: E0416 00:22:22.564568 2579 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 16 00:22:22.565258 kubelet[2579]: E0416 00:22:22.564724 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e430a925-c270-4a34-9c86-07312455bcba-clustermesh-secrets podName:e430a925-c270-4a34-9c86-07312455bcba nodeName:}" failed. No retries permitted until 2026-04-16 00:22:23.064690564 +0000 UTC m=+177.211165582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/e430a925-c270-4a34-9c86-07312455bcba-clustermesh-secrets") pod "cilium-gpgff" (UID: "e430a925-c270-4a34-9c86-07312455bcba") : failed to sync secret cache: timed out waiting for the condition Apr 16 00:22:22.565922 kubelet[2579]: E0416 00:22:22.565307 2579 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 16 00:22:22.565922 kubelet[2579]: E0416 00:22:22.565429 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e430a925-c270-4a34-9c86-07312455bcba-cilium-config-path podName:e430a925-c270-4a34-9c86-07312455bcba nodeName:}" failed. No retries permitted until 2026-04-16 00:22:23.065402514 +0000 UTC m=+177.211877572 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e430a925-c270-4a34-9c86-07312455bcba-cilium-config-path") pod "cilium-gpgff" (UID: "e430a925-c270-4a34-9c86-07312455bcba") : failed to sync configmap cache: timed out waiting for the condition Apr 16 00:22:23.146571 containerd[1473]: time="2026-04-16T00:22:23.146489686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpgff,Uid:e430a925-c270-4a34-9c86-07312455bcba,Namespace:kube-system,Attempt:0,}" Apr 16 00:22:23.176088 containerd[1473]: time="2026-04-16T00:22:23.175677370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:22:23.176088 containerd[1473]: time="2026-04-16T00:22:23.175763049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:22:23.176088 containerd[1473]: time="2026-04-16T00:22:23.175775609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:22:23.176088 containerd[1473]: time="2026-04-16T00:22:23.175885768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:22:23.204102 systemd[1]: Started cri-containerd-b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4.scope - libcontainer container b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4. Apr 16 00:22:23.233783 containerd[1473]: time="2026-04-16T00:22:23.233525027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpgff,Uid:e430a925-c270-4a34-9c86-07312455bcba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\"" Apr 16 00:22:23.244402 containerd[1473]: time="2026-04-16T00:22:23.244194722Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 00:22:23.259261 containerd[1473]: time="2026-04-16T00:22:23.259084001Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90\"" Apr 16 00:22:23.260750 containerd[1473]: time="2026-04-16T00:22:23.260684619Z" level=info msg="StartContainer for \"3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90\"" Apr 16 00:22:23.293373 systemd[1]: Started cri-containerd-3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90.scope - libcontainer container 3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90. 
Apr 16 00:22:23.326442 containerd[1473]: time="2026-04-16T00:22:23.326378849Z" level=info msg="StartContainer for \"3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90\" returns successfully" Apr 16 00:22:23.340621 systemd[1]: cri-containerd-3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90.scope: Deactivated successfully. Apr 16 00:22:23.386726 containerd[1473]: time="2026-04-16T00:22:23.386215079Z" level=info msg="shim disconnected" id=3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90 namespace=k8s.io Apr 16 00:22:23.386726 containerd[1473]: time="2026-04-16T00:22:23.386322997Z" level=warning msg="cleaning up after shim disconnected" id=3a285100c922524584a576df299f90b1f625b0b9a2d180872000eee7235b1b90 namespace=k8s.io Apr 16 00:22:23.386726 containerd[1473]: time="2026-04-16T00:22:23.386342117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:22:23.663433 containerd[1473]: time="2026-04-16T00:22:23.663381604Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 00:22:23.705513 containerd[1473]: time="2026-04-16T00:22:23.705416675Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f\"" Apr 16 00:22:23.709010 containerd[1473]: time="2026-04-16T00:22:23.708942907Z" level=info msg="StartContainer for \"adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f\"" Apr 16 00:22:23.750341 systemd[1]: Started cri-containerd-adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f.scope - libcontainer container adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f. 
Apr 16 00:22:23.792324 containerd[1473]: time="2026-04-16T00:22:23.792232619Z" level=info msg="StartContainer for \"adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f\" returns successfully" Apr 16 00:22:23.803735 systemd[1]: cri-containerd-adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f.scope: Deactivated successfully. Apr 16 00:22:23.839232 containerd[1473]: time="2026-04-16T00:22:23.839000625Z" level=info msg="shim disconnected" id=adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f namespace=k8s.io Apr 16 00:22:23.839794 containerd[1473]: time="2026-04-16T00:22:23.839271742Z" level=warning msg="cleaning up after shim disconnected" id=adbf731812c32ba47bdffa34aea0787b91ccd140f68ba9e57ad721a136b1025f namespace=k8s.io Apr 16 00:22:23.839794 containerd[1473]: time="2026-04-16T00:22:23.839299741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:22:24.655694 containerd[1473]: time="2026-04-16T00:22:24.655597091Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 00:22:24.674969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593973963.mount: Deactivated successfully. 
Apr 16 00:22:24.679499 containerd[1473]: time="2026-04-16T00:22:24.679420308Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0\"" Apr 16 00:22:24.680346 containerd[1473]: time="2026-04-16T00:22:24.680218658Z" level=info msg="StartContainer for \"5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0\"" Apr 16 00:22:24.739315 systemd[1]: Started cri-containerd-5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0.scope - libcontainer container 5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0. Apr 16 00:22:24.776244 containerd[1473]: time="2026-04-16T00:22:24.776191355Z" level=info msg="StartContainer for \"5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0\" returns successfully" Apr 16 00:22:24.782327 systemd[1]: cri-containerd-5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0.scope: Deactivated successfully. Apr 16 00:22:24.820058 containerd[1473]: time="2026-04-16T00:22:24.819942558Z" level=info msg="shim disconnected" id=5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0 namespace=k8s.io Apr 16 00:22:24.820058 containerd[1473]: time="2026-04-16T00:22:24.820047876Z" level=warning msg="cleaning up after shim disconnected" id=5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0 namespace=k8s.io Apr 16 00:22:24.820302 containerd[1473]: time="2026-04-16T00:22:24.820071556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:22:25.086180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5710765785cbf933793e146107bc4fbef8f7346d645b1db9db97b01b1842d9b0-rootfs.mount: Deactivated successfully. 
Apr 16 00:22:25.662410 containerd[1473]: time="2026-04-16T00:22:25.662357670Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 00:22:25.693566 containerd[1473]: time="2026-04-16T00:22:25.692794066Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718\"" Apr 16 00:22:25.694601 containerd[1473]: time="2026-04-16T00:22:25.694563205Z" level=info msg="StartContainer for \"aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718\"" Apr 16 00:22:25.730285 systemd[1]: Started cri-containerd-aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718.scope - libcontainer container aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718. Apr 16 00:22:25.763322 systemd[1]: cri-containerd-aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718.scope: Deactivated successfully. 
Apr 16 00:22:25.771357 containerd[1473]: time="2026-04-16T00:22:25.771303608Z" level=info msg="StartContainer for \"aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718\" returns successfully"
Apr 16 00:22:25.801125 containerd[1473]: time="2026-04-16T00:22:25.800897055Z" level=info msg="shim disconnected" id=aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718 namespace=k8s.io
Apr 16 00:22:25.801125 containerd[1473]: time="2026-04-16T00:22:25.800987534Z" level=warning msg="cleaning up after shim disconnected" id=aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718 namespace=k8s.io
Apr 16 00:22:25.801125 containerd[1473]: time="2026-04-16T00:22:25.801006214Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:26.042339 containerd[1473]: time="2026-04-16T00:22:26.042262764Z" level=info msg="StopPodSandbox for \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\""
Apr 16 00:22:26.044279 containerd[1473]: time="2026-04-16T00:22:26.044234662Z" level=info msg="TearDown network for sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" successfully"
Apr 16 00:22:26.044279 containerd[1473]: time="2026-04-16T00:22:26.044268181Z" level=info msg="StopPodSandbox for \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" returns successfully"
Apr 16 00:22:26.044994 containerd[1473]: time="2026-04-16T00:22:26.044903774Z" level=info msg="RemovePodSandbox for \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\""
Apr 16 00:22:26.045192 containerd[1473]: time="2026-04-16T00:22:26.045174411Z" level=info msg="Forcibly stopping sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\""
Apr 16 00:22:26.045330 containerd[1473]: time="2026-04-16T00:22:26.045305730Z" level=info msg="TearDown network for sandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" successfully"
Apr 16 00:22:26.051822 containerd[1473]: time="2026-04-16T00:22:26.051773217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 16 00:22:26.052145 containerd[1473]: time="2026-04-16T00:22:26.052020495Z" level=info msg="RemovePodSandbox \"348b3376cf361654a3f96b0eb53122bbf0ee03c37286bdbfc6dd5dffb0933277\" returns successfully"
Apr 16 00:22:26.053276 containerd[1473]: time="2026-04-16T00:22:26.053241201Z" level=info msg="StopPodSandbox for \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\""
Apr 16 00:22:26.053386 containerd[1473]: time="2026-04-16T00:22:26.053335080Z" level=info msg="TearDown network for sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" successfully"
Apr 16 00:22:26.053386 containerd[1473]: time="2026-04-16T00:22:26.053346160Z" level=info msg="StopPodSandbox for \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" returns successfully"
Apr 16 00:22:26.054546 containerd[1473]: time="2026-04-16T00:22:26.054518547Z" level=info msg="RemovePodSandbox for \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\""
Apr 16 00:22:26.054625 containerd[1473]: time="2026-04-16T00:22:26.054555146Z" level=info msg="Forcibly stopping sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\""
Apr 16 00:22:26.054625 containerd[1473]: time="2026-04-16T00:22:26.054617306Z" level=info msg="TearDown network for sandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" successfully"
Apr 16 00:22:26.063638 containerd[1473]: time="2026-04-16T00:22:26.063391928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 16 00:22:26.063638 containerd[1473]: time="2026-04-16T00:22:26.063555566Z" level=info msg="RemovePodSandbox \"87ed84b20f5c441b7ff525db0f2243ed7d40d150c7de989ecf93bd6753aad451\" returns successfully"
Apr 16 00:22:26.084361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa09dfffe19aebd0e438ef872d89f33fb8316db07be81aa0f9cb389d34aea718-rootfs.mount: Deactivated successfully.
Apr 16 00:22:26.133547 kubelet[2579]: E0416 00:22:26.133444 2579 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 00:22:26.673393 containerd[1473]: time="2026-04-16T00:22:26.673350477Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 16 00:22:26.700679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852146969.mount: Deactivated successfully.
Apr 16 00:22:26.703244 containerd[1473]: time="2026-04-16T00:22:26.702951707Z" level=info msg="CreateContainer within sandbox \"b842b428dc16407d9f3b0a624bb04402440862d7360e10b724b2fbab2462a2c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa\""
Apr 16 00:22:26.704886 containerd[1473]: time="2026-04-16T00:22:26.704683247Z" level=info msg="StartContainer for \"6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa\""
Apr 16 00:22:26.741384 systemd[1]: Started cri-containerd-6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa.scope - libcontainer container 6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa.
Apr 16 00:22:26.772151 containerd[1473]: time="2026-04-16T00:22:26.771890777Z" level=info msg="StartContainer for \"6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa\" returns successfully"
Apr 16 00:22:27.086604 systemd[1]: run-containerd-runc-k8s.io-6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa-runc.IvwSem.mount: Deactivated successfully.
Apr 16 00:22:27.118095 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 16 00:22:27.700996 kubelet[2579]: I0416 00:22:27.700429 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gpgff" podStartSLOduration=6.700411467 podStartE2EDuration="6.700411467s" podCreationTimestamp="2026-04-16 00:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:22:27.695485518 +0000 UTC m=+181.841960576" watchObservedRunningTime="2026-04-16 00:22:27.700411467 +0000 UTC m=+181.846886485"
Apr 16 00:22:29.811080 kubelet[2579]: I0416 00:22:29.810924 2579 setters.go:543] "Node became not ready" node="ci-4081-3-6-n-510861948e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T00:22:29Z","lastTransitionTime":"2026-04-16T00:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 16 00:22:30.056243 systemd-networkd[1375]: lxc_health: Link UP
Apr 16 00:22:30.090285 systemd-networkd[1375]: lxc_health: Gained carrier
Apr 16 00:22:30.323666 systemd[1]: run-containerd-runc-k8s.io-6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa-runc.PulUBs.mount: Deactivated successfully.
Apr 16 00:22:31.445218 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Apr 16 00:22:34.700902 systemd[1]: run-containerd-runc-k8s.io-6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa-runc.JGwpVI.mount: Deactivated successfully.
Apr 16 00:22:36.887115 systemd[1]: run-containerd-runc-k8s.io-6c9ce7d73d2f05983b5921133453229be634362e944479bb0493e6ed7a7625aa-runc.K7h90g.mount: Deactivated successfully.
Apr 16 00:22:36.972390 sshd[4341]: pam_unix(sshd:session): session closed for user core
Apr 16 00:22:36.978122 systemd[1]: sshd@24-188.245.164.135:22-4.175.71.9:55674.service: Deactivated successfully.
Apr 16 00:22:36.981359 systemd[1]: session-24.scope: Deactivated successfully.
Apr 16 00:22:36.983981 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Apr 16 00:22:36.985920 systemd-logind[1451]: Removed session 24.
Apr 16 00:22:52.314537 kubelet[2579]: E0416 00:22:52.314090 2579 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55922->10.0.0.2:2379: read: connection timed out"
Apr 16 00:22:53.073206 systemd[1]: cri-containerd-9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97.scope: Deactivated successfully.
Apr 16 00:22:53.073758 systemd[1]: cri-containerd-9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97.scope: Consumed 3.558s CPU time, 15.6M memory peak, 0B memory swap peak.
Apr 16 00:22:53.103046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97-rootfs.mount: Deactivated successfully.
Apr 16 00:22:53.117743 containerd[1473]: time="2026-04-16T00:22:53.117670963Z" level=info msg="shim disconnected" id=9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97 namespace=k8s.io
Apr 16 00:22:53.117743 containerd[1473]: time="2026-04-16T00:22:53.117728883Z" level=warning msg="cleaning up after shim disconnected" id=9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97 namespace=k8s.io
Apr 16 00:22:53.117743 containerd[1473]: time="2026-04-16T00:22:53.117739203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:53.754983 kubelet[2579]: I0416 00:22:53.753429 2579 scope.go:117] "RemoveContainer" containerID="9657d8f39130726508ed1b1b61347a32510c76c4b19ada66c9eea2dcd76b8a97"
Apr 16 00:22:53.757979 containerd[1473]: time="2026-04-16T00:22:53.757785084Z" level=info msg="CreateContainer within sandbox \"2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 00:22:53.773549 containerd[1473]: time="2026-04-16T00:22:53.773490136Z" level=info msg="CreateContainer within sandbox \"2ad4eeaa165fba6e9c0748d483ae41d7ba76b8c51f54e08726edc2a2f83c5d48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af8ad0c8c69c8684b295a925a9e712510adb028ebdd9596d58b88d6f7674bd8a\""
Apr 16 00:22:53.775779 containerd[1473]: time="2026-04-16T00:22:53.774518982Z" level=info msg="StartContainer for \"af8ad0c8c69c8684b295a925a9e712510adb028ebdd9596d58b88d6f7674bd8a\""
Apr 16 00:22:53.812467 systemd[1]: Started cri-containerd-af8ad0c8c69c8684b295a925a9e712510adb028ebdd9596d58b88d6f7674bd8a.scope - libcontainer container af8ad0c8c69c8684b295a925a9e712510adb028ebdd9596d58b88d6f7674bd8a.
Apr 16 00:22:53.863340 containerd[1473]: time="2026-04-16T00:22:53.863273864Z" level=info msg="StartContainer for \"af8ad0c8c69c8684b295a925a9e712510adb028ebdd9596d58b88d6f7674bd8a\" returns successfully"
Apr 16 00:22:56.537087 kubelet[2579]: E0416 00:22:56.535165 2579 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55580->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-510861948e.18a6ae7f980fe1e1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-510861948e,UID:77acbaa1a84a9b15dfdf36986b42f970,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-510861948e,},FirstTimestamp:2026-04-16 00:22:46.079496673 +0000 UTC m=+200.225971731,LastTimestamp:2026-04-16 00:22:46.079496673 +0000 UTC m=+200.225971731,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-510861948e,}"
Apr 16 00:22:56.853504 systemd[1]: cri-containerd-376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c.scope: Deactivated successfully.
Apr 16 00:22:56.855163 systemd[1]: cri-containerd-376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c.scope: Consumed 3.142s CPU time, 14.8M memory peak, 0B memory swap peak.
Apr 16 00:22:56.876910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c-rootfs.mount: Deactivated successfully.
Apr 16 00:22:56.882949 containerd[1473]: time="2026-04-16T00:22:56.882809772Z" level=info msg="shim disconnected" id=376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c namespace=k8s.io
Apr 16 00:22:56.882949 containerd[1473]: time="2026-04-16T00:22:56.882940493Z" level=warning msg="cleaning up after shim disconnected" id=376ccafcb42dbc3811446ca364b2330b578f05c27acc5ae789ba3f2334be785c namespace=k8s.io
Apr 16 00:22:56.882949 containerd[1473]: time="2026-04-16T00:22:56.882952173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 00:22:56.896084 containerd[1473]: time="2026-04-16T00:22:56.895912429Z" level=warning msg="cleanup warnings time=\"2026-04-16T00:22:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io