Sep 12 23:56:36.907187 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 23:56:36.907216 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 12 23:56:36.907227 kernel: KASLR enabled Sep 12 23:56:36.907233 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 12 23:56:36.907239 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 12 23:56:36.907245 kernel: random: crng init done Sep 12 23:56:36.907253 kernel: ACPI: Early table checksum verification disabled Sep 12 23:56:36.907259 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 12 23:56:36.907265 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 12 23:56:36.907273 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907280 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907286 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907292 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907299 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907306 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907315 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907321 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907328 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 23:56:36.907335 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 23:56:36.907342 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 12 23:56:36.907348 kernel: NUMA: Failed to initialise from firmware Sep 12 23:56:36.907355 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 12 23:56:36.907362 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Sep 12 23:56:36.907368 kernel: Zone ranges: Sep 12 23:56:36.907375 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 12 23:56:36.907383 kernel: DMA32 empty Sep 12 23:56:36.907389 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 12 23:56:36.907396 kernel: Movable zone start for each node Sep 12 23:56:36.907402 kernel: Early memory node ranges Sep 12 23:56:36.907409 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 12 23:56:36.907416 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 12 23:56:36.907650 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 12 23:56:36.907661 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 12 23:56:36.907669 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 12 23:56:36.907678 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 12 23:56:36.907687 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 12 23:56:36.907696 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 12 23:56:36.907709 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 12 23:56:36.907718 kernel: psci: probing for conduit method from ACPI. 
Sep 12 23:56:36.907727 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 23:56:36.907740 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 23:56:36.907750 kernel: psci: Trusted OS migration not required Sep 12 23:56:36.907759 kernel: psci: SMC Calling Convention v1.1 Sep 12 23:56:36.907770 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 23:56:36.907780 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 23:56:36.907789 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 23:56:36.907799 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 23:56:36.907808 kernel: Detected PIPT I-cache on CPU0 Sep 12 23:56:36.907818 kernel: CPU features: detected: GIC system register CPU interface Sep 12 23:56:36.907827 kernel: CPU features: detected: Hardware dirty bit management Sep 12 23:56:36.907837 kernel: CPU features: detected: Spectre-v4 Sep 12 23:56:36.907846 kernel: CPU features: detected: Spectre-BHB Sep 12 23:56:36.907855 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 23:56:36.907867 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 23:56:36.909946 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 23:56:36.909959 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 23:56:36.909966 kernel: alternatives: applying boot alternatives Sep 12 23:56:36.909975 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:56:36.909983 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 23:56:36.909990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 23:56:36.909997 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 23:56:36.910004 kernel: Fallback order for Node 0: 0 Sep 12 23:56:36.910014 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 12 23:56:36.910022 kernel: Policy zone: Normal Sep 12 23:56:36.910038 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 23:56:36.910046 kernel: software IO TLB: area num 2. Sep 12 23:56:36.910053 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 12 23:56:36.910061 kernel: Memory: 3882744K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213256K reserved, 0K cma-reserved) Sep 12 23:56:36.910068 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 23:56:36.910076 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 23:56:36.910084 kernel: rcu: RCU event tracing is enabled. Sep 12 23:56:36.910091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 23:56:36.910098 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 23:56:36.910105 kernel: Tracing variant of Tasks RCU enabled. Sep 12 23:56:36.910113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 23:56:36.910121 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 23:56:36.910129 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 23:56:36.910135 kernel: GICv3: 256 SPIs implemented Sep 12 23:56:36.910142 kernel: GICv3: 0 Extended SPIs implemented Sep 12 23:56:36.910149 kernel: Root IRQ handler: gic_handle_irq Sep 12 23:56:36.910156 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 23:56:36.910163 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 23:56:36.910170 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 23:56:36.910177 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 23:56:36.910184 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 12 23:56:36.910191 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 12 23:56:36.910198 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 12 23:56:36.910207 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 23:56:36.910214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:56:36.910221 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 23:56:36.910228 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 23:56:36.910235 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 23:56:36.910242 kernel: Console: colour dummy device 80x25 Sep 12 23:56:36.910249 kernel: ACPI: Core revision 20230628 Sep 12 23:56:36.910256 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 23:56:36.910263 kernel: pid_max: default: 32768 minimum: 301 Sep 12 23:56:36.910270 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 23:56:36.910279 kernel: landlock: Up and running. Sep 12 23:56:36.910286 kernel: SELinux: Initializing. Sep 12 23:56:36.910293 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:56:36.910301 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:56:36.910308 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:56:36.910316 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:56:36.910323 kernel: rcu: Hierarchical SRCU implementation. Sep 12 23:56:36.910330 kernel: rcu: Max phase no-delay instances is 400. Sep 12 23:56:36.910337 kernel: Platform MSI: ITS@0x8080000 domain created Sep 12 23:56:36.910345 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 12 23:56:36.910352 kernel: Remapping and enabling EFI services. Sep 12 23:56:36.910359 kernel: smp: Bringing up secondary CPUs ... Sep 12 23:56:36.910366 kernel: Detected PIPT I-cache on CPU1 Sep 12 23:56:36.910373 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 23:56:36.910380 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 12 23:56:36.910387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:56:36.910394 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 23:56:36.910401 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 23:56:36.910408 kernel: SMP: Total of 2 processors activated. 
Sep 12 23:56:36.910418 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 23:56:36.910436 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 23:56:36.910450 kernel: CPU features: detected: Common not Private translations Sep 12 23:56:36.910459 kernel: CPU features: detected: CRC32 instructions Sep 12 23:56:36.910466 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 23:56:36.910474 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 23:56:36.910481 kernel: CPU features: detected: LSE atomic instructions Sep 12 23:56:36.910489 kernel: CPU features: detected: Privileged Access Never Sep 12 23:56:36.910496 kernel: CPU features: detected: RAS Extension Support Sep 12 23:56:36.910506 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 23:56:36.910513 kernel: CPU: All CPU(s) started at EL1 Sep 12 23:56:36.910520 kernel: alternatives: applying system-wide alternatives Sep 12 23:56:36.910528 kernel: devtmpfs: initialized Sep 12 23:56:36.910535 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 23:56:36.910543 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 23:56:36.910550 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 23:56:36.910560 kernel: SMBIOS 3.0.0 present. Sep 12 23:56:36.910568 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 12 23:56:36.910575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 23:56:36.910583 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 23:56:36.910590 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 23:56:36.910598 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 23:56:36.910605 kernel: audit: initializing netlink subsys (disabled) Sep 12 23:56:36.910612 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1 Sep 12 23:56:36.910620 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 23:56:36.910629 kernel: cpuidle: using governor menu Sep 12 23:56:36.910636 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 23:56:36.910644 kernel: ASID allocator initialised with 32768 entries Sep 12 23:56:36.910651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 23:56:36.910659 kernel: Serial: AMBA PL011 UART driver Sep 12 23:56:36.910666 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 23:56:36.910673 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 23:56:36.910681 kernel: Modules: 508992 pages in range for PLT usage Sep 12 23:56:36.910688 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 23:56:36.910697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 23:56:36.910705 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 23:56:36.910712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 23:56:36.910720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 23:56:36.910727 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 23:56:36.910735 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 23:56:36.910742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 23:56:36.910749 kernel: ACPI: Added _OSI(Module Device) Sep 12 23:56:36.910757 kernel: ACPI: Added _OSI(Processor Device) Sep 12 23:56:36.910764 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 23:56:36.910773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 23:56:36.910781 kernel: ACPI: Interpreter enabled Sep 12 23:56:36.910788 kernel: ACPI: Using GIC for interrupt routing Sep 12 23:56:36.910795 kernel: ACPI: MCFG table detected, 1 entries Sep 12 23:56:36.910803 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 23:56:36.910810 kernel: printk: console [ttyAMA0] enabled Sep 12 23:56:36.910818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 23:56:36.911025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 23:56:36.911107 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 23:56:36.911174 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 23:56:36.911238 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 23:56:36.911304 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 23:56:36.911314 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 23:56:36.911321 kernel: PCI host bridge to bus 0000:00 Sep 12 23:56:36.911393 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 23:56:36.911476 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 23:56:36.911540 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 23:56:36.911597 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 23:56:36.911681 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 12 23:56:36.911757 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 12 23:56:36.911825 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 12 23:56:36.914002 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 12 23:56:36.914118 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.914190 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 12 
23:56:36.914265 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.914331 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 12 23:56:36.914405 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.914494 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 12 23:56:36.914582 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.914651 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 12 23:56:36.914724 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.914792 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 12 23:56:36.915005 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.915089 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 12 23:56:36.915168 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.915233 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 12 23:56:36.915305 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.915369 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 12 23:56:36.915454 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 12 23:56:36.915521 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 12 23:56:36.915602 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 12 23:56:36.915668 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 12 23:56:36.915745 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 23:56:36.915814 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 12 23:56:36.917015 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 23:56:36.917135 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 12 23:56:36.917225 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 12 23:56:36.917295 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 12 23:56:36.917373 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 12 23:56:36.917463 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 12 23:56:36.917538 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 12 23:56:36.917615 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 12 23:56:36.917684 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 12 23:56:36.917766 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 12 23:56:36.917834 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 12 23:56:36.917924 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 12 23:56:36.917997 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 12 23:56:36.918065 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 12 23:56:36.918142 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 23:56:36.918216 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 12 23:56:36.918284 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 12 23:56:36.918350 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 12 23:56:36.918432 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 12 23:56:36.918503 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit 
pref] to [bus 01] add_size 100000 add_align 100000 Sep 12 23:56:36.918571 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 12 23:56:36.918641 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 12 23:56:36.918712 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 12 23:56:36.918778 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 12 23:56:36.921010 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 12 23:56:36.921113 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 12 23:56:36.921179 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 12 23:56:36.921274 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 12 23:56:36.921347 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 12 23:56:36.921458 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 12 23:56:36.921549 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 12 23:56:36.921618 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 12 23:56:36.921685 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 12 23:56:36.921756 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 12 23:56:36.921824 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 12 23:56:36.921908 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 12 23:56:36.921994 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 12 23:56:36.922063 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 12 23:56:36.922129 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 12 23:56:36.922200 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 12 23:56:36.922266 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 12 23:56:36.922331 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 12 23:56:36.922401 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 12 23:56:36.922487 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 12 23:56:36.922570 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 12 23:56:36.922640 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 12 23:56:36.922774 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 23:56:36.922855 kernel: pci 
0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 12 23:56:36.926125 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 23:56:36.926222 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 12 23:56:36.926313 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 23:56:36.926397 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 12 23:56:36.926510 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 23:56:36.926585 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 12 23:56:36.926652 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 23:56:36.926722 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 12 23:56:36.926789 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 23:56:36.926858 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 12 23:56:36.926970 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 23:56:36.927040 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 12 23:56:36.927107 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 23:56:36.927176 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 12 23:56:36.927246 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 23:56:36.927319 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 12 23:56:36.927391 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 12 23:56:36.927474 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 12 23:56:36.927543 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 12 23:56:36.927610 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 12 23:56:36.927675 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 12 23:56:36.927743 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 12 23:56:36.927809 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 12 23:56:36.929026 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 12 23:56:36.929151 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 12 23:56:36.929224 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 12 23:56:36.929291 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 12 23:56:36.929362 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 12 23:56:36.929467 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 12 23:56:36.929552 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 12 23:56:36.929619 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 12 23:56:36.930813 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 12 23:56:36.930958 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 12 23:56:36.931032 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 12 23:56:36.931099 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 12 23:56:36.931170 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 12 23:56:36.931247 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 
0x10000000-0x1007ffff pref] Sep 12 23:56:36.931316 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 23:56:36.931385 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 12 23:56:36.931479 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 12 23:56:36.931562 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 12 23:56:36.931629 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 12 23:56:36.931695 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 23:56:36.931770 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 12 23:56:36.931840 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 12 23:56:36.931957 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 12 23:56:36.932028 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 12 23:56:36.932093 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 23:56:36.932166 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 12 23:56:36.932235 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 12 23:56:36.932303 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 12 23:56:36.932393 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 12 23:56:36.932486 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 12 23:56:36.932556 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 23:56:36.932631 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 12 23:56:36.932704 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 12 23:56:36.932770 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 12 23:56:36.932837 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 12 23:56:36.932929 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 23:56:36.933007 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 12 23:56:36.933083 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 12 23:56:36.933149 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 12 23:56:36.933215 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 12 23:56:36.933279 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 23:56:36.933362 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 12 23:56:36.933468 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 12 23:56:36.933546 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 12 23:56:36.933615 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 12 23:56:36.933687 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 12 23:56:36.933753 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 23:56:36.933830 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 12 23:56:36.933926 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 12 23:56:36.934000 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 12 23:56:36.934070 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 12 23:56:36.934136 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 12 23:56:36.934201 kernel: pci 0000:00:02.6: 
bridge window [mem 0x10c00000-0x10dfffff] Sep 12 23:56:36.934270 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 23:56:36.934338 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 12 23:56:36.934404 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 12 23:56:36.934480 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 12 23:56:36.934546 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 23:56:36.934614 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 12 23:56:36.934681 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 12 23:56:36.934749 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 12 23:56:36.934819 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 23:56:36.935042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 23:56:36.935129 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 23:56:36.935188 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 23:56:36.935283 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 12 23:56:36.935351 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 12 23:56:36.935411 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 23:56:36.935532 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 12 23:56:36.935614 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 12 23:56:36.935677 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 23:56:36.935750 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 12 23:56:36.935811 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 12 23:56:36.935920 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 23:56:36.935998 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 12 23:56:36.936066 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 12 23:56:36.936130 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 23:56:36.936208 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 12 23:56:36.936276 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 12 23:56:36.936337 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 23:56:36.936403 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 12 23:56:36.936480 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 12 23:56:36.936542 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 23:56:36.936610 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 12 23:56:36.936670 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 12 23:56:36.936733 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 23:56:36.936800 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 12 23:56:36.936861 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 12 23:56:36.936933 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 23:56:36.937006 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 12 23:56:36.937068 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 12 23:56:36.937127 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 23:56:36.937141 
kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 23:56:36.937149 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 23:56:36.937157 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 23:56:36.937165 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 23:56:36.937173 kernel: iommu: Default domain type: Translated Sep 12 23:56:36.937181 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 23:56:36.937189 kernel: efivars: Registered efivars operations Sep 12 23:56:36.937197 kernel: vgaarb: loaded Sep 12 23:56:36.937205 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 23:56:36.937214 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 23:56:36.937224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 23:56:36.937232 kernel: pnp: PnP ACPI init Sep 12 23:56:36.937305 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 23:56:36.937316 kernel: pnp: PnP ACPI: found 1 devices Sep 12 23:56:36.937325 kernel: NET: Registered PF_INET protocol family Sep 12 23:56:36.937333 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 23:56:36.937341 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 23:56:36.937351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 23:56:36.937359 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 23:56:36.937367 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 23:56:36.937375 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 23:56:36.937383 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:56:36.937391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:56:36.937399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 23:56:36.937487 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 12 23:56:36.937500 kernel: PCI: CLS 0 bytes, default 64 Sep 12 23:56:36.937510 kernel: kvm [1]: HYP mode not available Sep 12 23:56:36.937518 kernel: Initialise system trusted keyrings Sep 12 23:56:36.937526 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 23:56:36.937534 kernel: Key type asymmetric registered Sep 12 23:56:36.937542 kernel: Asymmetric key parser 'x509' registered Sep 12 23:56:36.937550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 23:56:36.937557 kernel: io scheduler mq-deadline registered Sep 12 23:56:36.937565 kernel: io scheduler kyber registered Sep 12 23:56:36.937573 kernel: io scheduler bfq registered Sep 12 23:56:36.937583 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 12 23:56:36.937656 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 12 23:56:36.937725 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 12 23:56:36.937792 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.937862 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 12 23:56:36.938069 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 12 23:56:36.938139 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.938214 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 52 Sep 12 23:56:36.938284 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 12 23:56:36.938351 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.938459 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 12 23:56:36.938543 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 12 23:56:36.938613 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.938688 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 12 23:56:36.938755 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 12 23:56:36.938823 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.938948 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 12 23:56:36.939019 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 12 23:56:36.939083 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.939155 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 12 23:56:36.939222 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 12 23:56:36.939286 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.939354 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 12 23:56:36.939431 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 12 23:56:36.939505 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.939520 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 12 23:56:36.939590 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 12 23:56:36.939656 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 12 23:56:36.939723 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 23:56:36.939733 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 23:56:36.939741 kernel: ACPI: button: Power Button [PWRB] Sep 12 23:56:36.939749 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 23:56:36.939825 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 12 23:56:36.940011 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 12 23:56:36.940027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 23:56:36.940035 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 12 23:56:36.940110 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 12 23:56:36.940122 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 12 23:56:36.940130 kernel: thunder_xcv, ver 1.0 Sep 12 23:56:36.940137 kernel: thunder_bgx, ver 1.0 Sep 12 23:56:36.940145 kernel: nicpf, ver 1.0 Sep 12 23:56:36.940158 kernel: nicvf, ver 1.0 Sep 12 23:56:36.940241 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 23:56:36.940304 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:56:36 UTC (1757721396) Sep 12 23:56:36.940315 kernel: hid: raw HID events driver (C) 
Jiri Kosina Sep 12 23:56:36.940323 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 12 23:56:36.940331 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 23:56:36.940339 kernel: watchdog: Hard watchdog permanently disabled Sep 12 23:56:36.940349 kernel: NET: Registered PF_INET6 protocol family Sep 12 23:56:36.940357 kernel: Segment Routing with IPv6 Sep 12 23:56:36.940365 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 23:56:36.940372 kernel: NET: Registered PF_PACKET protocol family Sep 12 23:56:36.940380 kernel: Key type dns_resolver registered Sep 12 23:56:36.940388 kernel: registered taskstats version 1 Sep 12 23:56:36.940396 kernel: Loading compiled-in X.509 certificates Sep 12 23:56:36.940404 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 12 23:56:36.940412 kernel: Key type .fscrypt registered Sep 12 23:56:36.940428 kernel: Key type fscrypt-provisioning registered Sep 12 23:56:36.940439 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 23:56:36.940447 kernel: ima: Allocated hash algorithm: sha1 Sep 12 23:56:36.940455 kernel: ima: No architecture policies found Sep 12 23:56:36.940463 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 23:56:36.940471 kernel: clk: Disabling unused clocks Sep 12 23:56:36.940479 kernel: Freeing unused kernel memory: 39488K Sep 12 23:56:36.940486 kernel: Run /init as init process Sep 12 23:56:36.940494 kernel: with arguments: Sep 12 23:56:36.940504 kernel: /init Sep 12 23:56:36.940511 kernel: with environment: Sep 12 23:56:36.940519 kernel: HOME=/ Sep 12 23:56:36.940526 kernel: TERM=linux Sep 12 23:56:36.940534 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 23:56:36.940543 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:56:36.940554 systemd[1]: Detected virtualization kvm. Sep 12 23:56:36.940562 systemd[1]: Detected architecture arm64. Sep 12 23:56:36.940572 systemd[1]: Running in initrd. Sep 12 23:56:36.940580 systemd[1]: No hostname configured, using default hostname. Sep 12 23:56:36.940588 systemd[1]: Hostname set to . Sep 12 23:56:36.940596 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:56:36.940605 systemd[1]: Queued start job for default target initrd.target. Sep 12 23:56:36.940613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:56:36.940621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:56:36.940630 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 23:56:36.940640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:56:36.940649 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 23:56:36.940658 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 23:56:36.940668 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... 
Sep 12 23:56:36.940676 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 23:56:36.940685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:56:36.940693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:56:36.940703 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:56:36.940711 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:56:36.940719 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:56:36.940728 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:56:36.940736 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:56:36.940745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:56:36.940753 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:56:36.940761 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 23:56:36.940769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:56:36.940779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:56:36.940788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:56:36.940796 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:56:36.940807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:56:36.940815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:56:36.940823 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:56:36.940831 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:56:36.940839 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:56:36.940850 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:56:36.940858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:36.940866 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:56:36.940885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:56:36.940920 systemd-journald[236]: Collecting audit messages is disabled. Sep 12 23:56:36.940944 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:56:36.940954 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:56:36.940962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:56:36.940972 kernel: Bridge firewalling registered Sep 12 23:56:36.940980 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:56:36.940989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:36.940998 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:56:36.941006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:56:36.941015 systemd-journald[236]: Journal started Sep 12 23:56:36.941035 systemd-journald[236]: Runtime Journal (/run/log/journal/2fd1214baf5146be90a75ed420835141) is 8.0M, max 76.6M, 68.6M free. Sep 12 23:56:36.899401 systemd-modules-load[237]: Inserted module 'overlay' Sep 12 23:56:36.943391 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 12 23:56:36.922495 systemd-modules-load[237]: Inserted module 'br_netfilter' Sep 12 23:56:36.946518 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:56:36.962150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:56:36.972762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:56:36.977729 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:56:36.979231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:56:36.990054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:56:36.991590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:56:36.994326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:56:37.001796 dracut-cmdline[268]: dracut-dracut-053 Sep 12 23:56:37.004505 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:56:37.007194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:56:37.035041 systemd-resolved[276]: Positive Trust Anchors: Sep 12 23:56:37.035807 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:56:37.036775 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:56:37.046232 systemd-resolved[276]: Defaulting to hostname 'linux'. Sep 12 23:56:37.047290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:56:37.049321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:56:37.102921 kernel: SCSI subsystem initialized Sep 12 23:56:37.106935 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:56:37.114917 kernel: iscsi: registered transport (tcp) Sep 12 23:56:37.128912 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:56:37.128982 kernel: QLogic iSCSI HBA Driver Sep 12 23:56:37.184913 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:56:37.194348 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:56:37.216448 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 23:56:37.216572 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:56:37.216620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 23:56:37.266949 kernel: raid6: neonx8 gen() 15593 MB/s Sep 12 23:56:37.283936 kernel: raid6: neonx4 gen() 15534 MB/s Sep 12 23:56:37.300927 kernel: raid6: neonx2 gen() 13044 MB/s Sep 12 23:56:37.317914 kernel: raid6: neonx1 gen() 10274 MB/s Sep 12 23:56:37.334934 kernel: raid6: int64x8 gen() 6881 MB/s Sep 12 23:56:37.351924 kernel: raid6: int64x4 gen() 7281 MB/s Sep 12 23:56:37.368933 kernel: raid6: int64x2 gen() 6086 MB/s Sep 12 23:56:37.385926 kernel: raid6: int64x1 gen() 5025 MB/s Sep 12 23:56:37.386035 kernel: raid6: using algorithm neonx8 gen() 15593 MB/s Sep 12 23:56:37.402928 kernel: raid6: .... xor() 11867 MB/s, rmw enabled Sep 12 23:56:37.403007 kernel: raid6: using neon recovery algorithm Sep 12 23:56:37.408124 kernel: xor: measuring software checksum speed Sep 12 23:56:37.408208 kernel: 8regs : 19783 MB/sec Sep 12 23:56:37.408229 kernel: 32regs : 19679 MB/sec Sep 12 23:56:37.408247 kernel: arm64_neon : 26963 MB/sec Sep 12 23:56:37.408929 kernel: xor: using function: arm64_neon (26963 MB/sec) Sep 12 23:56:37.459924 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:56:37.476520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:56:37.490249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:56:37.506018 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 12 23:56:37.509528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:56:37.518146 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:56:37.537738 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Sep 12 23:56:37.575002 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:56:37.582147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:56:37.633900 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:56:37.642062 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:56:37.656314 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:56:37.658047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:56:37.658650 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:56:37.661920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:56:37.673123 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:56:37.686380 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:56:37.771520 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:56:37.771656 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 23:56:37.777185 kernel: scsi host0: Virtio SCSI HBA Sep 12 23:56:37.777383 kernel: ACPI: bus type USB registered Sep 12 23:56:37.777398 kernel: usbcore: registered new interface driver usbfs Sep 12 23:56:37.777410 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 23:56:37.777452 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 12 23:56:37.777473 kernel: usbcore: registered new interface driver hub Sep 12 23:56:37.777485 kernel: usbcore: registered new device driver usb Sep 12 23:56:37.774150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:56:37.778111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:56:37.778276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:37.782536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:37.788364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:37.815324 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 12 23:56:37.815594 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 12 23:56:37.815855 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 23:56:37.816296 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 12 23:56:37.818147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:37.826330 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 23:56:37.826585 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 12 23:56:37.826161 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:56:37.828616 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 12 23:56:37.834045 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 23:56:37.834246 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 12 23:56:37.835011 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 12 23:56:37.839246 kernel: hub 1-0:1.0: USB hub found Sep 12 23:56:37.839900 kernel: hub 1-0:1.0: 4 ports detected Sep 12 23:56:37.840922 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 12 23:56:37.842169 kernel: hub 2-0:1.0: USB hub found Sep 12 23:56:37.842344 kernel: hub 2-0:1.0: 4 ports detected Sep 12 23:56:37.843914 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 12 23:56:37.845129 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 12 23:56:37.845290 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 12 23:56:37.845384 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 12 23:56:37.846380 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 23:56:37.850244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:56:37.856374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 23:56:37.856470 kernel: GPT:17805311 != 80003071 Sep 12 23:56:37.856488 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 23:56:37.856502 kernel: GPT:17805311 != 80003071 Sep 12 23:56:37.856516 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 12 23:56:37.856529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:37.858897 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 12 23:56:37.899925 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (517) Sep 12 23:56:37.910028 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (521) Sep 12 23:56:37.910953 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 12 23:56:37.917439 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 12 23:56:37.929582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 23:56:37.935501 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 12 23:56:37.936193 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 12 23:56:37.941093 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:56:37.949524 disk-uuid[572]: Primary Header is updated. Sep 12 23:56:37.949524 disk-uuid[572]: Secondary Entries is updated. Sep 12 23:56:37.949524 disk-uuid[572]: Secondary Header is updated. Sep 12 23:56:37.955927 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:37.963467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:37.969944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:38.078921 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 12 23:56:38.215490 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 12 23:56:38.215555 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 12 23:56:38.216192 kernel: usbcore: registered new interface driver usbhid Sep 12 23:56:38.216906 kernel: usbhid: USB HID core driver Sep 12 23:56:38.320960 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 12 23:56:38.447966 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 12 23:56:38.501929 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 12 23:56:38.972924 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:56:38.974070 disk-uuid[573]: The operation has completed successfully. Sep 12 23:56:39.029644 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:56:39.029756 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:56:39.046214 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:56:39.052291 sh[591]: Success Sep 12 23:56:39.065912 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 23:56:39.142586 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:56:39.144567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 23:56:39.147794 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 12 23:56:39.168072 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 12 23:56:39.168139 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:39.168160 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 23:56:39.169111 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:56:39.169153 kernel: BTRFS info (device dm-0): using free space tree Sep 12 23:56:39.175924 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 23:56:39.178085 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 23:56:39.180667 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:56:39.187141 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:56:39.191130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:56:39.201916 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:39.201978 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:39.201991 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:39.206175 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:39.206231 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:39.217624 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:39.217248 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 23:56:39.224914 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:56:39.232071 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 23:56:39.322036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:56:39.331136 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:56:39.334711 ignition[687]: Ignition 2.19.0 Sep 12 23:56:39.335238 ignition[687]: Stage: fetch-offline Sep 12 23:56:39.335290 ignition[687]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:39.335298 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:39.335470 ignition[687]: parsed url from cmdline: "" Sep 12 23:56:39.335473 ignition[687]: no config URL provided Sep 12 23:56:39.335478 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:56:39.335485 ignition[687]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:56:39.335490 ignition[687]: failed to fetch config: resource requires networking Sep 12 23:56:39.335660 ignition[687]: Ignition finished successfully Sep 12 23:56:39.340674 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:56:39.356379 systemd-networkd[781]: lo: Link UP Sep 12 23:56:39.356390 systemd-networkd[781]: lo: Gained carrier Sep 12 23:56:39.357950 systemd-networkd[781]: Enumeration completed Sep 12 23:56:39.358383 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:56:39.358495 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 23:56:39.358498 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:39.359356 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:39.359359 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:39.359505 systemd[1]: Reached target network.target - Network. Sep 12 23:56:39.359928 systemd-networkd[781]: eth0: Link UP Sep 12 23:56:39.359931 systemd-networkd[781]: eth0: Gained carrier Sep 12 23:56:39.359939 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:39.366245 systemd-networkd[781]: eth1: Link UP Sep 12 23:56:39.366249 systemd-networkd[781]: eth1: Gained carrier Sep 12 23:56:39.366259 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:39.368101 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 23:56:39.381443 ignition[785]: Ignition 2.19.0 Sep 12 23:56:39.381454 ignition[785]: Stage: fetch Sep 12 23:56:39.381634 ignition[785]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:39.381644 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:39.381729 ignition[785]: parsed url from cmdline: "" Sep 12 23:56:39.381733 ignition[785]: no config URL provided Sep 12 23:56:39.381737 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:56:39.381744 ignition[785]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:56:39.381763 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 12 23:56:39.382612 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 12 23:56:39.409994 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 23:56:39.423008 systemd-networkd[781]: eth0: DHCPv4 address 91.99.3.235/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 23:56:39.582824 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 12 23:56:39.589436 ignition[785]: GET result: OK Sep 12 23:56:39.589623 ignition[785]: parsing config with SHA512: 201e7597156ef3a2ff61e99d048d8ba06173370ede45f339417a77abfd3702aa47e880dc6db6df9f8cb7a65991dbdc0d2071b198c839ff8dc7a76b8ba955a161 Sep 12 23:56:39.598009 unknown[785]: fetched base config from "system" Sep 12 23:56:39.598018 unknown[785]: fetched base config from "system" Sep 12 23:56:39.598543 ignition[785]: fetch: fetch complete Sep 12 23:56:39.598023 unknown[785]: fetched user config from "hetzner" Sep 12 23:56:39.598549 ignition[785]: fetch: fetch passed Sep 12 23:56:39.598595 ignition[785]: Ignition finished successfully Sep 12 23:56:39.600815 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 23:56:39.606147 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 23:56:39.619483 ignition[792]: Ignition 2.19.0 Sep 12 23:56:39.619493 ignition[792]: Stage: kargs Sep 12 23:56:39.619655 ignition[792]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:39.619664 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:39.620729 ignition[792]: kargs: kargs passed Sep 12 23:56:39.620779 ignition[792]: Ignition finished successfully Sep 12 23:56:39.625751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:56:39.634228 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 23:56:39.645548 ignition[798]: Ignition 2.19.0 Sep 12 23:56:39.645558 ignition[798]: Stage: disks Sep 12 23:56:39.645761 ignition[798]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:39.645779 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:39.648031 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:56:39.646779 ignition[798]: disks: disks passed Sep 12 23:56:39.649043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:56:39.646831 ignition[798]: Ignition finished successfully Sep 12 23:56:39.650864 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:56:39.651751 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:56:39.652835 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:56:39.653797 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:56:39.659102 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:56:39.675642 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 23:56:39.681340 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:56:39.689463 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:56:39.741920 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 12 23:56:39.742925 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:56:39.744111 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:56:39.757219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:56:39.762024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:56:39.764085 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 23:56:39.764690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:56:39.764717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:56:39.774731 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:56:39.777090 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 23:56:39.781043 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Sep 12 23:56:39.784748 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:39.784822 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:39.784845 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:39.791346 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:39.791392 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:39.801252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:56:39.835807 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:56:39.837350 coreos-metadata[816]: Sep 12 23:56:39.836 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 12 23:56:39.838935 coreos-metadata[816]: Sep 12 23:56:39.838 INFO Fetch successful Sep 12 23:56:39.840864 coreos-metadata[816]: Sep 12 23:56:39.839 INFO wrote hostname ci-4081-3-5-n-44c5618783 to /sysroot/etc/hostname Sep 12 23:56:39.844448 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 23:56:39.847629 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:56:39.852744 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:56:39.856767 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:56:39.956607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:56:39.963011 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:56:39.966065 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 23:56:39.975891 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:39.995995 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:56:39.999927 ignition[930]: INFO : Ignition 2.19.0 Sep 12 23:56:39.999927 ignition[930]: INFO : Stage: mount Sep 12 23:56:39.999927 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:39.999927 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:40.002446 ignition[930]: INFO : mount: mount passed Sep 12 23:56:40.003029 ignition[930]: INFO : Ignition finished successfully Sep 12 23:56:40.005621 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:56:40.016052 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:56:40.169764 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:56:40.177226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:56:40.191530 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Sep 12 23:56:40.193111 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:56:40.193164 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:56:40.193177 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:56:40.196918 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 23:56:40.196986 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:56:40.200301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:56:40.223067 ignition[959]: INFO : Ignition 2.19.0 Sep 12 23:56:40.223067 ignition[959]: INFO : Stage: files Sep 12 23:56:40.224087 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:40.224087 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:40.225529 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:56:40.225529 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:56:40.226777 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:56:40.229320 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:56:40.230211 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:56:40.231329 unknown[959]: wrote ssh authorized keys file for user: core Sep 12 23:56:40.232508 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:56:40.234328 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 23:56:40.235452 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 23:56:40.352763 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 23:56:40.907173 systemd-networkd[781]: eth1: Gained IPv6LL Sep 12 23:56:40.971259 systemd-networkd[781]: eth0: Gained IPv6LL Sep 12 23:56:41.099904 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 23:56:41.099904 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 23:56:41.102119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 23:56:41.591370 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:56:41.848013 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:56:41.855645 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 23:56:41.855645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 23:56:42.072638 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 23:56:42.253630 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 23:56:42.253630 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:56:42.256166 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:56:42.256166 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:56:42.256166 ignition[959]: INFO : files: files passed Sep 12 23:56:42.256166 ignition[959]: INFO : Ignition finished successfully Sep 12 23:56:42.261090 systemd[1]: Finished ignition-files.service - Ignition (files). 
Sep 12 23:56:42.267356 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:56:42.272009 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:56:42.276767 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:56:42.277667 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:56:42.285669 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:42.285669 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:42.288479 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:56:42.289600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:56:42.290811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:56:42.297098 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:56:42.340940 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:56:42.341054 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:56:42.343237 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:56:42.344580 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:56:42.345372 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:56:42.350169 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:56:42.367125 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:56:42.378277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:56:42.402802 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:56:42.403639 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:56:42.405110 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:56:42.406015 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:56:42.406194 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:56:42.407609 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:56:42.408869 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:56:42.409928 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:56:42.411041 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:56:42.412300 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:56:42.413313 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:56:42.414426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:56:42.415467 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:56:42.416497 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:56:42.417345 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:56:42.418098 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:56:42.418271 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 23:56:42.419376 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:56:42.420443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:56:42.421419 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:56:42.421588 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:56:42.422626 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:56:42.422806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:56:42.424144 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:56:42.424303 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:56:42.425222 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:56:42.425377 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:56:42.426145 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 23:56:42.426293 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 23:56:42.445969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:56:42.448147 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:56:42.449454 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:56:42.453060 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:56:42.453848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:56:42.454125 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:56:42.465339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:56:42.465500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:56:42.469982 ignition[1011]: INFO : Ignition 2.19.0 Sep 12 23:56:42.469982 ignition[1011]: INFO : Stage: umount Sep 12 23:56:42.469982 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:56:42.469982 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 23:56:42.473564 ignition[1011]: INFO : umount: umount passed Sep 12 23:56:42.473564 ignition[1011]: INFO : Ignition finished successfully Sep 12 23:56:42.475697 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:56:42.475855 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:56:42.478546 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:56:42.478593 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:56:42.479195 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:56:42.479232 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:56:42.482475 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:56:42.483423 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:56:42.484466 systemd[1]: Stopped target network.target - Network. Sep 12 23:56:42.486003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:56:42.486082 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:56:42.489113 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:56:42.490438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 12 23:56:42.494967 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:56:42.496991 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:56:42.498359 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:56:42.500118 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:56:42.500164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:56:42.502395 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:56:42.502458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:56:42.503966 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:56:42.504022 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:56:42.505113 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:56:42.505157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:56:42.506600 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:56:42.509057 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:56:42.512123 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 12 23:56:42.513088 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:56:42.516299 systemd-networkd[781]: eth1: DHCPv6 lease lost Sep 12 23:56:42.517776 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:56:42.517925 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:56:42.522263 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:56:42.522638 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:56:42.528375 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:56:42.528495 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:56:42.536038 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:56:42.537742 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:56:42.537815 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:56:42.541455 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:56:42.541514 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:56:42.542470 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:56:42.542509 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:56:42.543482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:56:42.543520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:56:42.544762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:56:42.547112 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:56:42.547888 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:56:42.553781 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:56:42.553898 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:56:42.559066 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:56:42.559184 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:56:42.566852 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 12 23:56:42.567052 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:56:42.569309 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:56:42.569365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:56:42.570651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:56:42.570698 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:56:42.571616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:56:42.571662 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:56:42.573076 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:56:42.573124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:56:42.574423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:56:42.574467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:56:42.586790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:56:42.588194 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:56:42.588300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:56:42.589788 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 23:56:42.589886 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:56:42.593979 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:56:42.594037 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:56:42.595943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:56:42.595999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:42.601502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:56:42.601603 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:56:42.602772 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:56:42.611173 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:56:42.619725 systemd[1]: Switching root. Sep 12 23:56:42.651492 systemd-journald[236]: Journal stopped Sep 12 23:56:43.503900 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Sep 12 23:56:43.503968 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:56:43.503985 kernel: SELinux: policy capability open_perms=1 Sep 12 23:56:43.503994 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:56:43.504006 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:56:43.504017 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:56:43.504029 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:56:43.504044 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:56:43.504056 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:56:43.504067 kernel: audit: type=1403 audit(1757721402.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:56:43.504079 systemd[1]: Successfully loaded SELinux policy in 35.337ms. Sep 12 23:56:43.504106 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.529ms. 
Sep 12 23:56:43.504120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:56:43.504133 systemd[1]: Detected virtualization kvm. Sep 12 23:56:43.504145 systemd[1]: Detected architecture arm64. Sep 12 23:56:43.504158 systemd[1]: Detected first boot. Sep 12 23:56:43.504168 systemd[1]: Hostname set to <ci-4081-3-5-n-44c5618783>. Sep 12 23:56:43.504180 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:56:43.504193 zram_generator::config[1054]: No configuration found. Sep 12 23:56:43.504206 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:56:43.504218 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:56:43.504230 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:56:43.504244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:56:43.504258 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:56:43.504271 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:56:43.504281 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:56:43.504291 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:56:43.504302 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:56:43.504316 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:56:43.504329 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:56:43.504342 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:56:43.504354 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:56:43.504410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:56:43.504428 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:56:43.504441 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:56:43.504454 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:56:43.504467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:56:43.504479 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 23:56:43.504491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:56:43.504504 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:56:43.504519 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:56:43.504532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:56:43.504543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:56:43.504558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:56:43.504574 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:56:43.504588 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:56:43.504600 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:56:43.504613 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:56:43.504627 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:56:43.504638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:56:43.504652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:56:43.504665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:56:43.504677 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:56:43.504690 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:56:43.504702 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:56:43.504714 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:56:43.504729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:56:43.504741 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:56:43.504754 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:56:43.504770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:56:43.504784 systemd[1]: Reached target machines.target - Containers. Sep 12 23:56:43.504795 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:56:43.504808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:43.504818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:56:43.504830 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:56:43.504843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:43.504856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:56:43.504868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:43.522473 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:56:43.522501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:43.522523 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:56:43.522537 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:56:43.522549 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:56:43.522559 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:56:43.522570 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:56:43.522580 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:56:43.522594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:56:43.522607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:56:43.522619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:56:43.522669 systemd-journald[1117]: Collecting audit messages is disabled. 
Sep 12 23:56:43.522700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:56:43.522716 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:56:43.522729 systemd[1]: Stopped verity-setup.service. Sep 12 23:56:43.522742 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:56:43.522754 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:56:43.522769 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:56:43.522781 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:56:43.522791 kernel: loop: module loaded Sep 12 23:56:43.522803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:56:43.522814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:56:43.522869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:56:43.527039 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:56:43.527054 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:56:43.527066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:43.527085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:43.527096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:43.527107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:43.527123 systemd-journald[1117]: Journal started Sep 12 23:56:43.527154 systemd-journald[1117]: Runtime Journal (/run/log/journal/2fd1214baf5146be90a75ed420835141) is 8.0M, max 76.6M, 68.6M free. Sep 12 23:56:43.253259 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:56:43.278300 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 23:56:43.279207 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:56:43.535267 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:56:43.530118 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:43.530923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:43.531841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:56:43.533925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:56:43.542543 kernel: fuse: init (API version 7.39) Sep 12 23:56:43.546445 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:56:43.546650 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:56:43.548090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:56:43.553938 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:56:43.563103 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:56:43.569107 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:56:43.569727 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:56:43.569761 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:56:43.571514 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Sep 12 23:56:43.574906 kernel: ACPI: bus type drm_connector registered Sep 12 23:56:43.580446 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:56:43.584207 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:56:43.586092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:43.596093 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:56:43.598593 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:56:43.600076 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:43.604083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:56:43.605058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:43.610084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:56:43.615284 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:56:43.622169 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:56:43.626941 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:56:43.630738 systemd-journald[1117]: Time spent on flushing to /var/log/journal/2fd1214baf5146be90a75ed420835141 is 87.720ms for 1125 entries. Sep 12 23:56:43.630738 systemd-journald[1117]: System Journal (/var/log/journal/2fd1214baf5146be90a75ed420835141) is 8.0M, max 584.8M, 576.8M free. Sep 12 23:56:43.738669 systemd-journald[1117]: Received client request to flush runtime journal. Sep 12 23:56:43.738713 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 23:56:43.738749 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:56:43.627915 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:56:43.628048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:56:43.629048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:56:43.629793 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:56:43.632060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:56:43.673750 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:56:43.678768 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:56:43.702462 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:56:43.703361 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:56:43.721255 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:56:43.725969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:56:43.737116 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 12 23:56:43.737128 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 12 23:56:43.746557 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 12 23:56:43.750491 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:56:43.758451 kernel: loop1: detected capacity change from 0 to 207008 Sep 12 23:56:43.772009 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:56:43.773107 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 23:56:43.784271 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:56:43.787458 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:56:43.810814 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:56:43.816424 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 23:56:43.821210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:56:43.840237 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 12 23:56:43.840253 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 12 23:56:43.845471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:56:43.860015 kernel: loop3: detected capacity change from 0 to 8 Sep 12 23:56:43.876907 kernel: loop4: detected capacity change from 0 to 114432 Sep 12 23:56:43.900918 kernel: loop5: detected capacity change from 0 to 207008 Sep 12 23:56:43.929914 kernel: loop6: detected capacity change from 0 to 114328 Sep 12 23:56:43.949926 kernel: loop7: detected capacity change from 0 to 8 Sep 12 23:56:43.952295 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 12 23:56:43.952754 (sd-merge)[1196]: Merged extensions into '/usr'. Sep 12 23:56:43.960196 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:56:43.960215 systemd[1]: Reloading... Sep 12 23:56:44.077917 zram_generator::config[1222]: No configuration found. Sep 12 23:56:44.132970 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:56:44.254539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:56:44.304102 systemd[1]: Reloading finished in 342 ms. Sep 12 23:56:44.345573 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:56:44.348567 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:56:44.359090 systemd[1]: Starting ensure-sysext.service... Sep 12 23:56:44.363387 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:56:44.378012 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:56:44.378029 systemd[1]: Reloading... Sep 12 23:56:44.423235 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:56:44.423565 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:56:44.424265 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 12 23:56:44.424522 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Sep 12 23:56:44.424575 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Sep 12 23:56:44.431821 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:56:44.431837 systemd-tmpfiles[1261]: Skipping /boot Sep 12 23:56:44.443545 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:56:44.443561 systemd-tmpfiles[1261]: Skipping /boot Sep 12 23:56:44.456897 zram_generator::config[1290]: No configuration found. Sep 12 23:56:44.590976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:56:44.639764 systemd[1]: Reloading finished in 261 ms. Sep 12 23:56:44.658135 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:56:44.659184 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:56:44.689012 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:56:44.694126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:56:44.697097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:56:44.700281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:56:44.707261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:56:44.719195 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:56:44.727252 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:56:44.729895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:44.733954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:44.746999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:44.757413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:44.758163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:44.765267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:44.765489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:44.769465 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:56:44.773449 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:44.773605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:44.776439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:44.780733 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Sep 12 23:56:44.785658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:56:44.786437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 23:56:44.790265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:44.791175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:44.793311 systemd[1]: Finished ensure-sysext.service. Sep 12 23:56:44.797262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:44.797511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:56:44.800639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:44.800723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:44.810893 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:56:44.819127 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:56:44.819707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:56:44.825430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:56:44.829551 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:56:44.831835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:56:44.842036 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:56:44.843448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:56:44.859085 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:56:44.859676 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:56:44.886219 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:56:44.895756 augenrules[1377]: No rules Sep 12 23:56:44.897197 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:56:44.957072 systemd-resolved[1331]: Positive Trust Anchors: Sep 12 23:56:44.957465 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:56:44.957559 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:56:44.963970 systemd-resolved[1331]: Using system hostname 'ci-4081-3-5-n-44c5618783'. Sep 12 23:56:44.965932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:56:44.966644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:56:44.989124 systemd-networkd[1367]: lo: Link UP Sep 12 23:56:44.989135 systemd-networkd[1367]: lo: Gained carrier Sep 12 23:56:44.989790 systemd-networkd[1367]: Enumeration completed Sep 12 23:56:44.989980 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 23:56:44.990639 systemd[1]: Reached target network.target - Network. Sep 12 23:56:45.020595 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:56:45.022016 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:56:45.022749 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 23:56:45.023298 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:56:45.088939 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:56:45.099760 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:45.099772 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:45.101619 systemd-networkd[1367]: eth0: Link UP Sep 12 23:56:45.102034 systemd-networkd[1367]: eth0: Gained carrier Sep 12 23:56:45.102057 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:45.108405 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:45.108416 systemd-networkd[1367]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:56:45.110768 systemd-networkd[1367]: eth1: Link UP Sep 12 23:56:45.110775 systemd-networkd[1367]: eth1: Gained carrier Sep 12 23:56:45.110796 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:56:45.139687 systemd-networkd[1367]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 23:56:45.141569 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Sep 12 23:56:45.158677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1380) Sep 12 23:56:45.166211 systemd-networkd[1367]: eth0: DHCPv4 address 91.99.3.235/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 23:56:45.167092 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Sep 12 23:56:45.193588 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 12 23:56:45.193711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:56:45.200170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:56:45.203715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:56:45.209072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:56:45.209916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:56:45.209949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:56:45.214186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:56:45.214350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
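The networkd messages above show both NICs matched by the catch-all /usr/lib/systemd/network/zz-default.network and acquiring /32 DHCP leases (91.99.3.235 on eth0 behind gateway 172.31.1.1, 10.0.0.3 on eth1), the usual Hetzner Cloud layout. To inspect the same state on a live node, the stock networkctl tool is enough; this is a generic sketch, not output taken from this log:

    # Summarise link state as seen by systemd-networkd.
    networkctl list
    # Show addresses, gateway, DNS and the .network file that matched eth0.
    networkctl status eth0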
Sep 12 23:56:45.222009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:56:45.222580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:56:45.223973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:56:45.225194 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 12 23:56:45.225238 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 23:56:45.225262 kernel: [drm] features: -context_init Sep 12 23:56:45.231285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 23:56:45.240087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:56:45.244432 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:56:45.245047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:56:45.248090 kernel: [drm] number of scanouts: 1 Sep 12 23:56:45.248182 kernel: [drm] number of cap sets: 0 Sep 12 23:56:45.248174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:56:45.263982 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 12 23:56:45.272981 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 23:56:45.273177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:45.284071 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 23:56:45.300303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:56:45.301565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:56:45.301749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:45.315257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:56:45.376008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:56:45.464019 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:56:45.472160 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:56:45.485711 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:56:45.514636 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:56:45.516704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:56:45.518159 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:56:45.519506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:56:45.521035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:56:45.522982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:56:45.523704 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:56:45.524422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 23:56:45.525043 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:56:45.525076 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:56:45.525527 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:56:45.527191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:56:45.529196 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:56:45.534909 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:56:45.537916 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:56:45.539178 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:56:45.539883 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:56:45.540439 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:56:45.541024 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:56:45.541058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:56:45.544041 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:56:45.548820 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:56:45.556991 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:56:45.560099 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:56:45.564274 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:56:45.566079 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:56:45.568960 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:56:45.572085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:56:45.575553 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:56:45.581102 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 12 23:56:45.586140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:56:45.591612 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:56:45.597229 jq[1448]: false Sep 12 23:56:45.600065 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:56:45.603680 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:56:45.604161 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:56:45.606126 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:56:45.610104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:56:45.614936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:56:45.620253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:56:45.621915 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 23:56:45.654604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:56:45.654819 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:56:45.663970 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:56:45.667086 jq[1460]: true Sep 12 23:56:45.678509 dbus-daemon[1447]: [system] SELinux support is enabled Sep 12 23:56:45.681655 tar[1463]: linux-arm64/LICENSE Sep 12 23:56:45.681655 tar[1463]: linux-arm64/helm Sep 12 23:56:45.678685 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:56:45.691414 extend-filesystems[1451]: Found loop4 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found loop5 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found loop6 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found loop7 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda1 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda2 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda3 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found usr Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda4 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda6 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda7 Sep 12 23:56:45.691414 extend-filesystems[1451]: Found sda9 Sep 12 23:56:45.691414 extend-filesystems[1451]: Checking size of /dev/sda9 Sep 12 23:56:45.688771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:56:45.733158 coreos-metadata[1446]: Sep 12 23:56:45.723 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 12 23:56:45.688805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:56:45.733490 jq[1482]: true Sep 12 23:56:45.691071 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:56:45.691090 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:56:45.698267 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:56:45.698505 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:56:45.739333 coreos-metadata[1446]: Sep 12 23:56:45.738 INFO Fetch successful Sep 12 23:56:45.739333 coreos-metadata[1446]: Sep 12 23:56:45.739 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 12 23:56:45.743944 coreos-metadata[1446]: Sep 12 23:56:45.739 INFO Fetch successful Sep 12 23:56:45.744049 extend-filesystems[1451]: Resized partition /dev/sda9 Sep 12 23:56:45.748559 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:56:45.760061 update_engine[1458]: I20250912 23:56:45.754031 1458 main.cc:92] Flatcar Update Engine starting Sep 12 23:56:45.765197 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 12 23:56:45.765503 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:56:45.770077 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
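For scale, the EXT4 resize announced above grows the root filesystem from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 35.8 GiB. A one-line shell check reproduces the arithmetic:

    # 4096-byte blocks converted to whole GiB.
    echo $(( 1617920 * 4096 / 1024 / 1024 / 1024 )) $(( 9393147 * 4096 / 1024 / 1024 / 1024 ))
    # prints: 6 35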
Sep 12 23:56:45.772472 update_engine[1458]: I20250912 23:56:45.771541 1458 update_check_scheduler.cc:74] Next update check in 9m24s Sep 12 23:56:45.796486 systemd-logind[1457]: New seat seat0. Sep 12 23:56:45.804208 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:56:45.804224 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 12 23:56:45.804565 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:56:45.868289 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:56:45.873520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:56:45.888301 systemd[1]: Starting sshkeys.service... Sep 12 23:56:45.891083 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:56:45.892174 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:56:45.917911 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1369) Sep 12 23:56:45.917973 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 12 23:56:45.918219 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 23:56:45.926799 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:56:45.952321 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 23:56:45.952321 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 12 23:56:45.952321 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 12 23:56:45.964318 extend-filesystems[1451]: Resized filesystem in /dev/sda9 Sep 12 23:56:45.964318 extend-filesystems[1451]: Found sr0 Sep 12 23:56:45.953888 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:56:45.954091 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:56:46.007279 coreos-metadata[1524]: Sep 12 23:56:46.003 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 12 23:56:46.007279 coreos-metadata[1524]: Sep 12 23:56:46.007 INFO Fetch successful Sep 12 23:56:46.021390 unknown[1524]: wrote ssh authorized keys file for user: core Sep 12 23:56:46.041250 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:56:46.064906 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:56:46.066501 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:56:46.071381 systemd[1]: Finished sshkeys.service. Sep 12 23:56:46.133820 containerd[1475]: time="2025-09-12T23:56:46.133711800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:56:46.205947 containerd[1475]: time="2025-09-12T23:56:46.204251200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.209964 containerd[1475]: time="2025-09-12T23:56:46.209907040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:46.210078 containerd[1475]: time="2025-09-12T23:56:46.210063880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:56:46.210154 containerd[1475]: time="2025-09-12T23:56:46.210141200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:56:46.210441 containerd[1475]: time="2025-09-12T23:56:46.210394560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:56:46.210939 containerd[1475]: time="2025-09-12T23:56:46.210922920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.212994240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213017440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213238080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213255240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213278720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213290960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213423520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.213793 containerd[1475]: time="2025-09-12T23:56:46.213675040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:56:46.214070 containerd[1475]: time="2025-09-12T23:56:46.214050040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:56:46.214134 containerd[1475]: time="2025-09-12T23:56:46.214121560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:56:46.214287 containerd[1475]: time="2025-09-12T23:56:46.214260160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 23:56:46.214417 containerd[1475]: time="2025-09-12T23:56:46.214400840Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:56:46.218601 containerd[1475]: time="2025-09-12T23:56:46.218571600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:56:46.218743 containerd[1475]: time="2025-09-12T23:56:46.218728240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:56:46.219921 containerd[1475]: time="2025-09-12T23:56:46.219903080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.219975240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.219996480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220154760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220415840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220520960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220537440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220549800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:56:46.220593 containerd[1475]: time="2025-09-12T23:56:46.220564360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.220800 containerd[1475]: time="2025-09-12T23:56:46.220784000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.220853 containerd[1475]: time="2025-09-12T23:56:46.220841920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.220943 containerd[1475]: time="2025-09-12T23:56:46.220929800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.221001 containerd[1475]: time="2025-09-12T23:56:46.220986200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222125960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222150560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222164120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222185080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222200040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222212480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222225720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222237520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222250960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222263120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222276480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222289280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222313040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.222895 containerd[1475]: time="2025-09-12T23:56:46.222325360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222338400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222358960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222386320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222408920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222421000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222433600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222562720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222581640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222592400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222604440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222613720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222629360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222639520Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:56:46.223160 containerd[1475]: time="2025-09-12T23:56:46.222654080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:56:46.224023 containerd[1475]: time="2025-09-12T23:56:46.223959360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:56:46.224179 containerd[1475]: time="2025-09-12T23:56:46.224163320Z" level=info msg="Connect containerd service" Sep 12 23:56:46.224251 containerd[1475]: time="2025-09-12T23:56:46.224239640Z" level=info msg="using legacy CRI server" Sep 12 23:56:46.224754 containerd[1475]: time="2025-09-12T23:56:46.224740320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.224898520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.225774120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226173280Z" level=info msg="Start subscribing containerd event" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226226880Z" level=info msg="Start recovering state" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226314320Z" level=info msg="Start event monitor" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226326800Z" level=info msg="Start snapshots syncer" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226337440Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226387440Z" level=info msg="Start streaming server" Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226804600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:56:46.227888 containerd[1475]: time="2025-09-12T23:56:46.226857200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:56:46.233808 containerd[1475]: time="2025-09-12T23:56:46.233778280Z" level=info msg="containerd successfully booted in 0.103617s" Sep 12 23:56:46.234104 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:56:46.347025 systemd-networkd[1367]: eth1: Gained IPv6LL Sep 12 23:56:46.347618 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Sep 12 23:56:46.353753 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:56:46.355472 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:56:46.363140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:56:46.367208 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:56:46.422917 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:56:46.502867 tar[1463]: linux-arm64/README.md Sep 12 23:56:46.527650 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:56:46.795280 systemd-networkd[1367]: eth0: Gained IPv6LL Sep 12 23:56:46.796118 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Sep 12 23:56:47.211508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
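The CRI plugin settings dumped in the containerd startup above (runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, CNI binaries in /opt/cni/bin and config in /etc/cni/net.d) correspond roughly to the config.toml stanza below. This is a sketch of the equivalent TOML for containerd 1.7 written to a scratch path; it is not the file actually shipped on this image:

    # Illustrative only: equivalent CRI settings expressed as containerd 1.7 TOML.
    cat <<'EOF' > /tmp/cri-config-example.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
    EOF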
Sep 12 23:56:47.217480 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:56:47.547663 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:56:47.575736 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:56:47.583415 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:56:47.593405 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:56:47.593595 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:56:47.602202 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:56:47.613773 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:56:47.627290 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:56:47.631230 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:56:47.632747 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:56:47.635192 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:56:47.638265 systemd[1]: Startup finished in 770ms (kernel) + 6.084s (initrd) + 4.899s (userspace) = 11.754s. Sep 12 23:56:47.745745 kubelet[1560]: E0912 23:56:47.745688 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:56:47.749085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:56:47.749302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:56:58.000136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:56:58.006176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:56:58.137109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:56:58.139123 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:56:58.185406 kubelet[1596]: E0912 23:56:58.185326 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:56:58.188741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:56:58.189070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:08.439703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:57:08.453348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:08.585167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
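The kubelet exit above, and the identical ones that follow as the unit keeps restarting, are the expected crash loop on a freshly provisioned node: the service starts before /var/lib/kubelet/config.yaml exists, and that file is normally written later by kubeadm when the node joins a cluster. Purely for illustration, a minimal hand-written KubeletConfiguration that the loader would accept looks like the sketch below; it is an assumption, not the configuration this node eventually receives:

    # Illustration only: kubeadm normally generates this file during `kubeadm join`.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF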
Sep 12 23:57:08.585397 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:08.635827 kubelet[1611]: E0912 23:57:08.635720 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:08.638731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:08.638858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:17.118619 systemd-timesyncd[1352]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org). Sep 12 23:57:17.118694 systemd-timesyncd[1352]: Initial clock synchronization to Fri 2025-09-12 23:57:17.191576 UTC. Sep 12 23:57:18.890088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:57:18.907320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:19.030189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:19.035897 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:19.083575 kubelet[1626]: E0912 23:57:19.083507 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:19.086452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:19.086646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:27.514563 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:57:27.521677 systemd[1]: Started sshd@0-91.99.3.235:22-147.75.109.163:46910.service - OpenSSH per-connection server daemon (147.75.109.163:46910). Sep 12 23:57:28.526087 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 46910 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:28.528385 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:28.539601 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:57:28.546355 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:57:28.549706 systemd-logind[1457]: New session 1 of user core. Sep 12 23:57:28.561142 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:57:28.568376 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:57:28.582974 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:57:28.697518 systemd[1638]: Queued start job for default target default.target. Sep 12 23:57:28.709997 systemd[1638]: Created slice app.slice - User Application Slice. Sep 12 23:57:28.710057 systemd[1638]: Reached target paths.target - Paths. Sep 12 23:57:28.710087 systemd[1638]: Reached target timers.target - Timers. Sep 12 23:57:28.712377 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 12 23:57:28.725858 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:57:28.726086 systemd[1638]: Reached target sockets.target - Sockets. Sep 12 23:57:28.726113 systemd[1638]: Reached target basic.target - Basic System. Sep 12 23:57:28.726178 systemd[1638]: Reached target default.target - Main User Target. Sep 12 23:57:28.726223 systemd[1638]: Startup finished in 134ms. Sep 12 23:57:28.726434 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:57:28.737284 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:57:29.337597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 23:57:29.357295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:29.437154 systemd[1]: Started sshd@1-91.99.3.235:22-147.75.109.163:46924.service - OpenSSH per-connection server daemon (147.75.109.163:46924). Sep 12 23:57:29.472143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:29.483835 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:29.534903 kubelet[1659]: E0912 23:57:29.534791 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:29.537407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:29.537556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:30.427193 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 46924 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:30.430377 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:30.437131 systemd-logind[1457]: New session 2 of user core. Sep 12 23:57:30.453561 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:57:30.885292 update_engine[1458]: I20250912 23:57:30.885033 1458 update_attempter.cc:509] Updating boot flags... Sep 12 23:57:30.929958 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1676) Sep 12 23:57:31.007117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1671) Sep 12 23:57:31.128204 sshd[1652]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:31.133938 systemd[1]: sshd@1-91.99.3.235:22-147.75.109.163:46924.service: Deactivated successfully. Sep 12 23:57:31.135557 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:57:31.136415 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:57:31.137713 systemd-logind[1457]: Removed session 2. Sep 12 23:57:31.307353 systemd[1]: Started sshd@2-91.99.3.235:22-147.75.109.163:50170.service - OpenSSH per-connection server daemon (147.75.109.163:50170). Sep 12 23:57:32.293889 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 50170 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:32.295292 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:32.299915 systemd-logind[1457]: New session 3 of user core. 
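The update_engine entries above (the boot-flag update at 23:57:30, and the "Next update check in 9m24s" scheduled when the service first started) are the Flatcar update engine marking the current boot as good and arming its next check. The usual way to inspect this state is the update_engine_client tool; the invocation below is a generic sketch, assuming the tool is present as on stock Flatcar images:

    # Query the update engine for its current operation, last check time and
    # any pending new version.
    update_engine_client -status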
Sep 12 23:57:32.308201 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:57:32.977505 sshd[1689]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:32.983963 systemd[1]: sshd@2-91.99.3.235:22-147.75.109.163:50170.service: Deactivated successfully. Sep 12 23:57:32.986642 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:57:32.988661 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:57:32.990159 systemd-logind[1457]: Removed session 3. Sep 12 23:57:33.157432 systemd[1]: Started sshd@3-91.99.3.235:22-147.75.109.163:50176.service - OpenSSH per-connection server daemon (147.75.109.163:50176). Sep 12 23:57:34.143357 sshd[1696]: Accepted publickey for core from 147.75.109.163 port 50176 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:34.145469 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:34.150778 systemd-logind[1457]: New session 4 of user core. Sep 12 23:57:34.158223 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:57:34.831679 sshd[1696]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:34.841853 systemd[1]: sshd@3-91.99.3.235:22-147.75.109.163:50176.service: Deactivated successfully. Sep 12 23:57:34.847518 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:57:34.849857 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:57:34.851674 systemd-logind[1457]: Removed session 4. Sep 12 23:57:35.011467 systemd[1]: Started sshd@4-91.99.3.235:22-147.75.109.163:50182.service - OpenSSH per-connection server daemon (147.75.109.163:50182). Sep 12 23:57:35.994398 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 50182 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:35.996643 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:36.003269 systemd-logind[1457]: New session 5 of user core. Sep 12 23:57:36.009186 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:57:36.528924 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:57:36.529507 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:36.545410 sudo[1706]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:36.706568 sshd[1703]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:36.712613 systemd[1]: sshd@4-91.99.3.235:22-147.75.109.163:50182.service: Deactivated successfully. Sep 12 23:57:36.715625 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:57:36.718115 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:57:36.719421 systemd-logind[1457]: Removed session 5. Sep 12 23:57:36.879537 systemd[1]: Started sshd@5-91.99.3.235:22-147.75.109.163:50190.service - OpenSSH per-connection server daemon (147.75.109.163:50190). Sep 12 23:57:37.877044 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 50190 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:37.879403 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:37.885082 systemd-logind[1457]: New session 6 of user core. Sep 12 23:57:37.893225 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 23:57:38.409649 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:57:38.411052 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:38.421714 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:38.439034 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:57:38.439737 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:38.462423 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:57:38.464345 auditctl[1718]: No rules Sep 12 23:57:38.465470 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:57:38.465796 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:57:38.474496 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:57:38.502422 augenrules[1736]: No rules Sep 12 23:57:38.504936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:57:38.506839 sudo[1714]: pam_unix(sudo:session): session closed for user root Sep 12 23:57:38.668625 sshd[1711]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:38.674062 systemd[1]: sshd@5-91.99.3.235:22-147.75.109.163:50190.service: Deactivated successfully. Sep 12 23:57:38.675788 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:57:38.676630 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:57:38.677993 systemd-logind[1457]: Removed session 6. Sep 12 23:57:38.842179 systemd[1]: Started sshd@6-91.99.3.235:22-147.75.109.163:50200.service - OpenSSH per-connection server daemon (147.75.109.163:50200). Sep 12 23:57:39.610058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 23:57:39.630085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:39.763281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:39.767948 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:39.815931 kubelet[1754]: E0912 23:57:39.815809 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:39.820052 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 50200 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 12 23:57:39.820986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:39.821153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:57:39.823051 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:39.833668 systemd-logind[1457]: New session 7 of user core. Sep 12 23:57:39.841220 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 23:57:40.338223 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:57:40.338522 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:57:40.644345 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:57:40.652659 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:57:40.901803 dockerd[1777]: time="2025-09-12T23:57:40.901156186Z" level=info msg="Starting up" Sep 12 23:57:40.988528 dockerd[1777]: time="2025-09-12T23:57:40.988077680Z" level=info msg="Loading containers: start." Sep 12 23:57:41.091905 kernel: Initializing XFRM netlink socket Sep 12 23:57:41.180223 systemd-networkd[1367]: docker0: Link UP Sep 12 23:57:41.204572 dockerd[1777]: time="2025-09-12T23:57:41.204486843Z" level=info msg="Loading containers: done." Sep 12 23:57:41.217797 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2945917023-merged.mount: Deactivated successfully. Sep 12 23:57:41.221194 dockerd[1777]: time="2025-09-12T23:57:41.221135338Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:57:41.221391 dockerd[1777]: time="2025-09-12T23:57:41.221244944Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:57:41.221435 dockerd[1777]: time="2025-09-12T23:57:41.221401571Z" level=info msg="Daemon has completed initialization" Sep 12 23:57:41.263368 dockerd[1777]: time="2025-09-12T23:57:41.263176434Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:57:41.263729 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:57:42.328228 containerd[1475]: time="2025-09-12T23:57:42.326998277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 23:57:43.017412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42355701.mount: Deactivated successfully. 
Sep 12 23:57:43.842810 containerd[1475]: time="2025-09-12T23:57:43.841039502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:43.844139 containerd[1475]: time="2025-09-12T23:57:43.844073327Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363783" Sep 12 23:57:43.845389 containerd[1475]: time="2025-09-12T23:57:43.845318451Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:43.848713 containerd[1475]: time="2025-09-12T23:57:43.848659375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:43.849976 containerd[1475]: time="2025-09-12T23:57:43.849940670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.522092599s" Sep 12 23:57:43.850091 containerd[1475]: time="2025-09-12T23:57:43.850075114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 23:57:43.851161 containerd[1475]: time="2025-09-12T23:57:43.851132817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 23:57:44.599531 systemd[1]: Started sshd@7-91.99.3.235:22-193.46.255.244:55076.service - OpenSSH per-connection server daemon (193.46.255.244:55076). Sep 12 23:57:44.820420 sshd[1975]: Received disconnect from 193.46.255.244 port 55076:11: [preauth] Sep 12 23:57:44.820420 sshd[1975]: Disconnected from 193.46.255.244 port 55076 [preauth] Sep 12 23:57:44.821160 systemd[1]: sshd@7-91.99.3.235:22-193.46.255.244:55076.service: Deactivated successfully. 
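Note: the kube-apiserver image pull above is served by containerd's CRI plugin. Once the PullImage call returns, the image can be listed over the CRI socket — an assumed manual check, with the socket path assumed to be containerd's default:

    # list the freshly pulled control-plane image via the CRI image service
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep kube-apiserver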
Sep 12 23:57:44.965973 containerd[1475]: time="2025-09-12T23:57:44.965394441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.967575 containerd[1475]: time="2025-09-12T23:57:44.967125213Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531220" Sep 12 23:57:44.970907 containerd[1475]: time="2025-09-12T23:57:44.968697819Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.972077 containerd[1475]: time="2025-09-12T23:57:44.972044169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:44.973435 containerd[1475]: time="2025-09-12T23:57:44.973395873Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.122227004s" Sep 12 23:57:44.973507 containerd[1475]: time="2025-09-12T23:57:44.973436725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 23:57:44.974009 containerd[1475]: time="2025-09-12T23:57:44.973968276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 23:57:45.994919 containerd[1475]: time="2025-09-12T23:57:45.993155364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:45.994919 containerd[1475]: time="2025-09-12T23:57:45.994792530Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484344" Sep 12 23:57:45.996695 containerd[1475]: time="2025-09-12T23:57:45.996642150Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:46.000827 containerd[1475]: time="2025-09-12T23:57:46.000774057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:46.004372 containerd[1475]: time="2025-09-12T23:57:46.004306428Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.030295941s" Sep 12 23:57:46.004372 containerd[1475]: time="2025-09-12T23:57:46.004363280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 23:57:46.006762 
containerd[1475]: time="2025-09-12T23:57:46.006696107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 23:57:47.030026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456636538.mount: Deactivated successfully. Sep 12 23:57:47.338167 containerd[1475]: time="2025-09-12T23:57:47.337042313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:47.339410 containerd[1475]: time="2025-09-12T23:57:47.339360153Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417843" Sep 12 23:57:47.341101 containerd[1475]: time="2025-09-12T23:57:47.341044394Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:47.344117 containerd[1475]: time="2025-09-12T23:57:47.343831764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:47.344654 containerd[1475]: time="2025-09-12T23:57:47.344616713Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.337861794s" Sep 12 23:57:47.344654 containerd[1475]: time="2025-09-12T23:57:47.344651440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 23:57:47.345245 containerd[1475]: time="2025-09-12T23:57:47.345115488Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:57:47.924191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178285141.mount: Deactivated successfully. 
Sep 12 23:57:48.613045 containerd[1475]: time="2025-09-12T23:57:48.612956719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.617917 containerd[1475]: time="2025-09-12T23:57:48.616713984Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.617917 containerd[1475]: time="2025-09-12T23:57:48.616809160Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 12 23:57:48.624417 containerd[1475]: time="2025-09-12T23:57:48.624357537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:48.627020 containerd[1475]: time="2025-09-12T23:57:48.626955809Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.281804915s" Sep 12 23:57:48.627020 containerd[1475]: time="2025-09-12T23:57:48.627001217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:57:48.627630 containerd[1475]: time="2025-09-12T23:57:48.627507501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:57:49.206789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155311545.mount: Deactivated successfully. 
Sep 12 23:57:49.215154 containerd[1475]: time="2025-09-12T23:57:49.215032725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.216000 containerd[1475]: time="2025-09-12T23:57:49.215957060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 12 23:57:49.217914 containerd[1475]: time="2025-09-12T23:57:49.216936403Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.219792 containerd[1475]: time="2025-09-12T23:57:49.219746532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:49.220849 containerd[1475]: time="2025-09-12T23:57:49.220809047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 593.256139ms" Sep 12 23:57:49.220995 containerd[1475]: time="2025-09-12T23:57:49.220976991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:57:49.221534 containerd[1475]: time="2025-09-12T23:57:49.221492626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 23:57:49.830352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240732364.mount: Deactivated successfully. Sep 12 23:57:49.832544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 12 23:57:49.840214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:50.004522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:50.006508 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:57:50.074783 kubelet[2071]: E0912 23:57:50.074729 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:57:50.078181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:57:50.078384 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
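Note: this is the sixth scheduled restart of kubelet.service, and it fails for the same reason as the earlier attempts: /var/lib/kubelet/config.yaml does not exist yet. kubeadm normally writes that file during init/join; purely as an illustrative sketch (field values are assumptions, not recovered from this host), a minimal file of that shape could be created by hand:

    # assumed minimal KubeletConfiguration; on a kubeadm-managed node this file
    # is generated by `kubeadm init` / `kubeadm join`, not written by hand
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF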
Sep 12 23:57:51.275553 containerd[1475]: time="2025-09-12T23:57:51.275458942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.277987 containerd[1475]: time="2025-09-12T23:57:51.277932269Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Sep 12 23:57:51.279307 containerd[1475]: time="2025-09-12T23:57:51.279216777Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.284157 containerd[1475]: time="2025-09-12T23:57:51.283247273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:51.285043 containerd[1475]: time="2025-09-12T23:57:51.284999780Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.063374414s" Sep 12 23:57:51.285043 containerd[1475]: time="2025-09-12T23:57:51.285038623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 23:57:57.227659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:57.236445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:57.267535 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-7.scope)... Sep 12 23:57:57.267554 systemd[1]: Reloading... Sep 12 23:57:57.399899 zram_generator::config[2188]: No configuration found. Sep 12 23:57:57.508835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:57:57.579253 systemd[1]: Reloading finished in 311 ms. Sep 12 23:57:57.629383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:57.635442 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:57:57.637089 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:57.637887 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:57:57.638184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:57.642448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:57:57.782177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:57:57.782318 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:57:57.831896 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
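Note: the deprecation warnings above mean the kubelet was started with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on its command line. On kubeadm-managed nodes such flags are conventionally injected through environment files referenced by a systemd drop-in; the paths and content below follow that convention as an assumption and were not read from this host:

    # assumed kubeadm convention for supplying the deprecated flags
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    cat /var/lib/kubelet/kubeadm-flags.env
    # typical (illustrative) content of the flags file:
    # KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"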
Sep 12 23:57:57.831896 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:57:57.831896 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:57:57.831896 kubelet[2239]: I0912 23:57:57.830734 2239 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:57:59.068172 kubelet[2239]: I0912 23:57:59.068102 2239 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:57:59.068172 kubelet[2239]: I0912 23:57:59.068146 2239 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:57:59.068605 kubelet[2239]: I0912 23:57:59.068505 2239 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:57:59.104086 kubelet[2239]: E0912 23:57:59.104019 2239 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.3.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:57:59.109106 kubelet[2239]: I0912 23:57:59.108322 2239 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:57:59.113733 kubelet[2239]: E0912 23:57:59.113684 2239 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:57:59.113946 kubelet[2239]: I0912 23:57:59.113929 2239 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:57:59.117541 kubelet[2239]: I0912 23:57:59.117503 2239 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:57:59.118934 kubelet[2239]: I0912 23:57:59.118842 2239 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:57:59.119776 kubelet[2239]: I0912 23:57:59.119022 2239 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-44c5618783","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:57:59.119776 kubelet[2239]: I0912 23:57:59.119317 2239 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:57:59.119776 kubelet[2239]: I0912 23:57:59.119327 2239 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:57:59.119776 kubelet[2239]: I0912 23:57:59.119572 2239 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:57:59.123205 kubelet[2239]: I0912 23:57:59.123172 2239 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:57:59.123345 kubelet[2239]: I0912 23:57:59.123334 2239 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:57:59.123424 kubelet[2239]: I0912 23:57:59.123415 2239 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:57:59.123526 kubelet[2239]: I0912 23:57:59.123515 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:57:59.125825 kubelet[2239]: W0912 23:57:59.125756 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.3.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-44c5618783&limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:57:59.125940 kubelet[2239]: E0912 23:57:59.125841 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.3.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-44c5618783&limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:57:59.126819 
kubelet[2239]: W0912 23:57:59.126726 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.3.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:57:59.126819 kubelet[2239]: E0912 23:57:59.126784 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.3.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:57:59.127289 kubelet[2239]: I0912 23:57:59.127229 2239 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:57:59.129099 kubelet[2239]: I0912 23:57:59.129030 2239 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:57:59.130904 kubelet[2239]: W0912 23:57:59.129304 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:57:59.132574 kubelet[2239]: I0912 23:57:59.132540 2239 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:57:59.132650 kubelet[2239]: I0912 23:57:59.132586 2239 server.go:1287] "Started kubelet" Sep 12 23:57:59.139617 kubelet[2239]: I0912 23:57:59.139567 2239 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:57:59.140539 kubelet[2239]: I0912 23:57:59.140509 2239 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:57:59.141572 kubelet[2239]: E0912 23:57:59.141261 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.3.235:6443/api/v1/namespaces/default/events\": dial tcp 91.99.3.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-44c5618783.1864ae5f59c5437d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-44c5618783,UID:ci-4081-3-5-n-44c5618783,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-44c5618783,},FirstTimestamp:2025-09-12 23:57:59.132562301 +0000 UTC m=+1.344984651,LastTimestamp:2025-09-12 23:57:59.132562301 +0000 UTC m=+1.344984651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-44c5618783,}" Sep 12 23:57:59.142246 kubelet[2239]: I0912 23:57:59.142173 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:57:59.142559 kubelet[2239]: I0912 23:57:59.142532 2239 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:57:59.144620 kubelet[2239]: I0912 23:57:59.144564 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:57:59.145142 kubelet[2239]: I0912 23:57:59.145059 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:57:59.145586 kubelet[2239]: I0912 23:57:59.145572 2239 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:57:59.148894 kubelet[2239]: I0912 23:57:59.148178 2239 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:57:59.148894 kubelet[2239]: I0912 23:57:59.148263 2239 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:57:59.148894 kubelet[2239]: W0912 23:57:59.148663 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.3.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:57:59.148894 kubelet[2239]: E0912 23:57:59.148708 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.3.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:57:59.150526 kubelet[2239]: E0912 23:57:59.150426 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-44c5618783\" not found" Sep 12 23:57:59.151305 kubelet[2239]: E0912 23:57:59.150747 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.3.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-44c5618783?timeout=10s\": dial tcp 91.99.3.235:6443: connect: connection refused" interval="200ms" Sep 12 23:57:59.153290 kubelet[2239]: I0912 23:57:59.152383 2239 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:57:59.153290 kubelet[2239]: I0912 23:57:59.152494 2239 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:57:59.153725 kubelet[2239]: E0912 23:57:59.153573 2239 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:57:59.154925 kubelet[2239]: I0912 23:57:59.154040 2239 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:57:59.168400 kubelet[2239]: I0912 23:57:59.168145 2239 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:57:59.168400 kubelet[2239]: I0912 23:57:59.168165 2239 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:57:59.168400 kubelet[2239]: I0912 23:57:59.168183 2239 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:57:59.170629 kubelet[2239]: I0912 23:57:59.170601 2239 policy_none.go:49] "None policy: Start" Sep 12 23:57:59.170736 kubelet[2239]: I0912 23:57:59.170726 2239 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:57:59.170802 kubelet[2239]: I0912 23:57:59.170794 2239 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:57:59.181159 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:57:59.182305 kubelet[2239]: I0912 23:57:59.182131 2239 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:57:59.185616 kubelet[2239]: I0912 23:57:59.185277 2239 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:57:59.185616 kubelet[2239]: I0912 23:57:59.185313 2239 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:57:59.185616 kubelet[2239]: I0912 23:57:59.185336 2239 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 23:57:59.185616 kubelet[2239]: I0912 23:57:59.185343 2239 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:57:59.185616 kubelet[2239]: E0912 23:57:59.185407 2239 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:57:59.187402 kubelet[2239]: W0912 23:57:59.187341 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.3.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:57:59.187686 kubelet[2239]: E0912 23:57:59.187591 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.3.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:57:59.193679 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:57:59.198099 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:57:59.208903 kubelet[2239]: I0912 23:57:59.208646 2239 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:57:59.209894 kubelet[2239]: I0912 23:57:59.209244 2239 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:57:59.209894 kubelet[2239]: I0912 23:57:59.209268 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:57:59.209894 kubelet[2239]: I0912 23:57:59.209704 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:57:59.212000 kubelet[2239]: E0912 23:57:59.211978 2239 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 23:57:59.212181 kubelet[2239]: E0912 23:57:59.212166 2239 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-44c5618783\" not found" Sep 12 23:57:59.300698 systemd[1]: Created slice kubepods-burstable-pod601a9a7eb396cf54ebdb34ef526c443e.slice - libcontainer container kubepods-burstable-pod601a9a7eb396cf54ebdb34ef526c443e.slice. Sep 12 23:57:59.305465 systemd[1]: Created slice kubepods-burstable-pod0d7eac9c782c4660c4a310b3fff83cff.slice - libcontainer container kubepods-burstable-pod0d7eac9c782c4660c4a310b3fff83cff.slice. 
Sep 12 23:57:59.315053 kubelet[2239]: E0912 23:57:59.313776 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.315053 kubelet[2239]: I0912 23:57:59.314218 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.315053 kubelet[2239]: E0912 23:57:59.314781 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.3.235:6443/api/v1/nodes\": dial tcp 91.99.3.235:6443: connect: connection refused" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.322777 kubelet[2239]: E0912 23:57:59.322541 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.326721 systemd[1]: Created slice kubepods-burstable-podf5e71d9745b53c7fcecfb775175135ed.slice - libcontainer container kubepods-burstable-podf5e71d9745b53c7fcecfb775175135ed.slice. Sep 12 23:57:59.330643 kubelet[2239]: E0912 23:57:59.330421 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.352293 kubelet[2239]: E0912 23:57:59.352230 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.3.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-44c5618783?timeout=10s\": dial tcp 91.99.3.235:6443: connect: connection refused" interval="400ms" Sep 12 23:57:59.448972 kubelet[2239]: I0912 23:57:59.448855 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.448972 kubelet[2239]: I0912 23:57:59.448976 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449207 kubelet[2239]: I0912 23:57:59.449014 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449207 kubelet[2239]: I0912 23:57:59.449057 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449207 kubelet[2239]: I0912 23:57:59.449093 2239 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5e71d9745b53c7fcecfb775175135ed-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-44c5618783\" (UID: \"f5e71d9745b53c7fcecfb775175135ed\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449207 kubelet[2239]: I0912 23:57:59.449146 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449207 kubelet[2239]: I0912 23:57:59.449185 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449465 kubelet[2239]: I0912 23:57:59.449217 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.449465 kubelet[2239]: I0912 23:57:59.449251 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.518611 kubelet[2239]: I0912 23:57:59.518129 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.518611 kubelet[2239]: E0912 23:57:59.518559 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.3.235:6443/api/v1/nodes\": dial tcp 91.99.3.235:6443: connect: connection refused" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.615922 containerd[1475]: time="2025-09-12T23:57:59.615823002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-44c5618783,Uid:601a9a7eb396cf54ebdb34ef526c443e,Namespace:kube-system,Attempt:0,}" Sep 12 23:57:59.625276 containerd[1475]: time="2025-09-12T23:57:59.625226029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-44c5618783,Uid:0d7eac9c782c4660c4a310b3fff83cff,Namespace:kube-system,Attempt:0,}" Sep 12 23:57:59.632681 containerd[1475]: time="2025-09-12T23:57:59.632338532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-44c5618783,Uid:f5e71d9745b53c7fcecfb775175135ed,Namespace:kube-system,Attempt:0,}" Sep 12 23:57:59.753538 kubelet[2239]: E0912 23:57:59.753480 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.3.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-44c5618783?timeout=10s\": dial tcp 91.99.3.235:6443: connect: connection refused" interval="800ms" Sep 12 
23:57:59.922047 kubelet[2239]: E0912 23:57:59.921805 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.3.235:6443/api/v1/namespaces/default/events\": dial tcp 91.99.3.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-44c5618783.1864ae5f59c5437d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-44c5618783,UID:ci-4081-3-5-n-44c5618783,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-44c5618783,},FirstTimestamp:2025-09-12 23:57:59.132562301 +0000 UTC m=+1.344984651,LastTimestamp:2025-09-12 23:57:59.132562301 +0000 UTC m=+1.344984651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-44c5618783,}" Sep 12 23:57:59.922739 kubelet[2239]: I0912 23:57:59.922716 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:57:59.923171 kubelet[2239]: E0912 23:57:59.923094 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.3.235:6443/api/v1/nodes\": dial tcp 91.99.3.235:6443: connect: connection refused" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:00.195557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189749160.mount: Deactivated successfully. Sep 12 23:58:00.203590 containerd[1475]: time="2025-09-12T23:58:00.203463253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:00.205039 containerd[1475]: time="2025-09-12T23:58:00.204995371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 12 23:58:00.206306 containerd[1475]: time="2025-09-12T23:58:00.206245035Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:00.208007 containerd[1475]: time="2025-09-12T23:58:00.207834877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:58:00.209067 containerd[1475]: time="2025-09-12T23:58:00.208917252Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:58:00.209067 containerd[1475]: time="2025-09-12T23:58:00.209025577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:00.216004 containerd[1475]: time="2025-09-12T23:58:00.215946731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.9676ms" Sep 12 23:58:00.219031 containerd[1475]: time="2025-09-12T23:58:00.218767675Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Sep 12 23:58:00.223408 containerd[1475]: time="2025-09-12T23:58:00.223122098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 590.66984ms" Sep 12 23:58:00.224198 containerd[1475]: time="2025-09-12T23:58:00.224152110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:58:00.244799 containerd[1475]: time="2025-09-12T23:58:00.244597435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 619.271ms" Sep 12 23:58:00.251247 kubelet[2239]: W0912 23:58:00.250493 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.3.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:58:00.251247 kubelet[2239]: E0912 23:58:00.250550 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.3.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:58:00.327791 containerd[1475]: time="2025-09-12T23:58:00.327096170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:00.327791 containerd[1475]: time="2025-09-12T23:58:00.327220617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:00.327791 containerd[1475]: time="2025-09-12T23:58:00.327253098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.327791 containerd[1475]: time="2025-09-12T23:58:00.327390585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.328194 containerd[1475]: time="2025-09-12T23:58:00.327993136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:00.328255 containerd[1475]: time="2025-09-12T23:58:00.328200947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:00.328572 containerd[1475]: time="2025-09-12T23:58:00.328223588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.329497 containerd[1475]: time="2025-09-12T23:58:00.329432970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:00.330798 containerd[1475]: time="2025-09-12T23:58:00.330103844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.331122 containerd[1475]: time="2025-09-12T23:58:00.330932366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:00.331122 containerd[1475]: time="2025-09-12T23:58:00.330958288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.331122 containerd[1475]: time="2025-09-12T23:58:00.331069453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:00.353051 systemd[1]: Started cri-containerd-b1c11a6d7a55c604d4fa23d89910ba9211620f81ac58ca286e6b81d448b409f4.scope - libcontainer container b1c11a6d7a55c604d4fa23d89910ba9211620f81ac58ca286e6b81d448b409f4. Sep 12 23:58:00.358185 systemd[1]: Started cri-containerd-27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97.scope - libcontainer container 27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97. Sep 12 23:58:00.368345 systemd[1]: Started cri-containerd-044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e.scope - libcontainer container 044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e. Sep 12 23:58:00.426444 containerd[1475]: time="2025-09-12T23:58:00.426030425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-44c5618783,Uid:0d7eac9c782c4660c4a310b3fff83cff,Namespace:kube-system,Attempt:0,} returns sandbox id \"044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e\"" Sep 12 23:58:00.433793 containerd[1475]: time="2025-09-12T23:58:00.433656135Z" level=info msg="CreateContainer within sandbox \"044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:58:00.436086 containerd[1475]: time="2025-09-12T23:58:00.436052937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-44c5618783,Uid:601a9a7eb396cf54ebdb34ef526c443e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c11a6d7a55c604d4fa23d89910ba9211620f81ac58ca286e6b81d448b409f4\"" Sep 12 23:58:00.442830 containerd[1475]: time="2025-09-12T23:58:00.442782281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-44c5618783,Uid:f5e71d9745b53c7fcecfb775175135ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97\"" Sep 12 23:58:00.443641 containerd[1475]: time="2025-09-12T23:58:00.443590443Z" level=info msg="CreateContainer within sandbox \"b1c11a6d7a55c604d4fa23d89910ba9211620f81ac58ca286e6b81d448b409f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:58:00.448755 containerd[1475]: time="2025-09-12T23:58:00.448394928Z" level=info msg="CreateContainer within sandbox \"27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:58:00.463157 containerd[1475]: time="2025-09-12T23:58:00.462990634Z" level=info msg="CreateContainer within sandbox 
\"044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf\"" Sep 12 23:58:00.464901 containerd[1475]: time="2025-09-12T23:58:00.463737112Z" level=info msg="StartContainer for \"db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf\"" Sep 12 23:58:00.469065 containerd[1475]: time="2025-09-12T23:58:00.469002141Z" level=info msg="CreateContainer within sandbox \"b1c11a6d7a55c604d4fa23d89910ba9211620f81ac58ca286e6b81d448b409f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c35b1b03d45bbbf49b92eb146387e1a7f525c6dbe916ca4baec1d0e6dc170f3\"" Sep 12 23:58:00.470976 containerd[1475]: time="2025-09-12T23:58:00.470917439Z" level=info msg="StartContainer for \"9c35b1b03d45bbbf49b92eb146387e1a7f525c6dbe916ca4baec1d0e6dc170f3\"" Sep 12 23:58:00.480659 containerd[1475]: time="2025-09-12T23:58:00.480587613Z" level=info msg="CreateContainer within sandbox \"27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1\"" Sep 12 23:58:00.481284 containerd[1475]: time="2025-09-12T23:58:00.481256807Z" level=info msg="StartContainer for \"294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1\"" Sep 12 23:58:00.503075 systemd[1]: Started cri-containerd-db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf.scope - libcontainer container db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf. Sep 12 23:58:00.521840 systemd[1]: Started cri-containerd-9c35b1b03d45bbbf49b92eb146387e1a7f525c6dbe916ca4baec1d0e6dc170f3.scope - libcontainer container 9c35b1b03d45bbbf49b92eb146387e1a7f525c6dbe916ca4baec1d0e6dc170f3. Sep 12 23:58:00.537243 systemd[1]: Started cri-containerd-294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1.scope - libcontainer container 294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1. 
Sep 12 23:58:00.555056 kubelet[2239]: E0912 23:58:00.553934 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.3.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-44c5618783?timeout=10s\": dial tcp 91.99.3.235:6443: connect: connection refused" interval="1.6s" Sep 12 23:58:00.573198 containerd[1475]: time="2025-09-12T23:58:00.573080299Z" level=info msg="StartContainer for \"db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf\" returns successfully" Sep 12 23:58:00.588196 containerd[1475]: time="2025-09-12T23:58:00.588134788Z" level=info msg="StartContainer for \"9c35b1b03d45bbbf49b92eb146387e1a7f525c6dbe916ca4baec1d0e6dc170f3\" returns successfully" Sep 12 23:58:00.623759 containerd[1475]: time="2025-09-12T23:58:00.623675604Z" level=info msg="StartContainer for \"294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1\" returns successfully" Sep 12 23:58:00.639921 kubelet[2239]: W0912 23:58:00.639359 2239 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.3.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-44c5618783&limit=500&resourceVersion=0": dial tcp 91.99.3.235:6443: connect: connection refused Sep 12 23:58:00.639921 kubelet[2239]: E0912 23:58:00.639431 2239 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.3.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-44c5618783&limit=500&resourceVersion=0\": dial tcp 91.99.3.235:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:58:00.726266 kubelet[2239]: I0912 23:58:00.725410 2239 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:01.199168 kubelet[2239]: E0912 23:58:01.198581 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:01.202058 kubelet[2239]: E0912 23:58:01.202023 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:01.204618 kubelet[2239]: E0912 23:58:01.204472 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.205988 kubelet[2239]: E0912 23:58:02.205952 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.206583 kubelet[2239]: E0912 23:58:02.206428 2239 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.584813 kubelet[2239]: E0912 23:58:02.584441 2239 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-44c5618783\" not found" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.719163 kubelet[2239]: I0912 23:58:02.719105 2239 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.754312 kubelet[2239]: I0912 23:58:02.753451 2239 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.767959 kubelet[2239]: E0912 23:58:02.767924 2239 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-44c5618783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.768347 kubelet[2239]: I0912 23:58:02.768160 2239 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.770771 kubelet[2239]: E0912 23:58:02.770485 2239 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.770771 kubelet[2239]: I0912 23:58:02.770594 2239 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:02.774372 kubelet[2239]: E0912 23:58:02.774320 2239 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-44c5618783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:03.128676 kubelet[2239]: I0912 23:58:03.128628 2239 apiserver.go:52] "Watching apiserver" Sep 12 23:58:03.148731 kubelet[2239]: I0912 23:58:03.148692 2239 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:58:03.206561 kubelet[2239]: I0912 23:58:03.206517 2239 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:03.210022 kubelet[2239]: E0912 23:58:03.209754 2239 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-44c5618783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:04.744119 systemd[1]: Reloading requested from client PID 2515 ('systemctl') (unit session-7.scope)... Sep 12 23:58:04.744136 systemd[1]: Reloading... Sep 12 23:58:04.857669 zram_generator::config[2561]: No configuration found. Sep 12 23:58:04.953379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:05.038974 systemd[1]: Reloading finished in 294 ms. Sep 12 23:58:05.080716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:05.093301 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:58:05.093534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:05.093590 systemd[1]: kubelet.service: Consumed 1.750s CPU time, 130.1M memory peak, 0B memory swap peak. Sep 12 23:58:05.101338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:05.228157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:58:05.243280 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:58:05.304969 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:58:05.304969 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:58:05.304969 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:58:05.305904 kubelet[2600]: I0912 23:58:05.305600 2600 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:58:05.313164 kubelet[2600]: I0912 23:58:05.313106 2600 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:58:05.313164 kubelet[2600]: I0912 23:58:05.313141 2600 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:58:05.313520 kubelet[2600]: I0912 23:58:05.313486 2600 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:58:05.315152 kubelet[2600]: I0912 23:58:05.315078 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:58:05.320561 kubelet[2600]: I0912 23:58:05.320004 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:58:05.324063 kubelet[2600]: E0912 23:58:05.324029 2600 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:58:05.324360 kubelet[2600]: I0912 23:58:05.324347 2600 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:58:05.328172 kubelet[2600]: I0912 23:58:05.328113 2600 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:58:05.328448 kubelet[2600]: I0912 23:58:05.328397 2600 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:58:05.328779 kubelet[2600]: I0912 23:58:05.328449 2600 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-44c5618783","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:58:05.328779 kubelet[2600]: I0912 23:58:05.328742 2600 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:58:05.328779 kubelet[2600]: I0912 23:58:05.328751 2600 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:58:05.328936 kubelet[2600]: I0912 23:58:05.328804 2600 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:05.331350 kubelet[2600]: I0912 23:58:05.331275 2600 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:58:05.331350 kubelet[2600]: I0912 23:58:05.331333 2600 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:58:05.331350 kubelet[2600]: I0912 23:58:05.331362 2600 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:58:05.331565 kubelet[2600]: I0912 23:58:05.331374 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:58:05.334751 kubelet[2600]: I0912 23:58:05.334694 2600 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:58:05.338014 kubelet[2600]: I0912 23:58:05.335327 2600 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:58:05.338014 kubelet[2600]: I0912 23:58:05.336690 2600 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:58:05.338014 kubelet[2600]: I0912 23:58:05.336723 2600 server.go:1287] "Started kubelet" Sep 12 23:58:05.341891 kubelet[2600]: I0912 23:58:05.340450 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:58:05.345052 kubelet[2600]: I0912 23:58:05.345007 2600 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:58:05.347236 kubelet[2600]: I0912 23:58:05.347181 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:58:05.347856 kubelet[2600]: I0912 23:58:05.347644 2600 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:58:05.348404 kubelet[2600]: I0912 23:58:05.348381 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:58:05.353484 kubelet[2600]: I0912 23:58:05.351128 2600 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:58:05.353484 kubelet[2600]: E0912 23:58:05.351375 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-44c5618783\" not found" Sep 12 23:58:05.355506 kubelet[2600]: I0912 23:58:05.355481 2600 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:58:05.355781 kubelet[2600]: I0912 23:58:05.355767 2600 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:58:05.357498 kubelet[2600]: I0912 23:58:05.357474 2600 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:58:05.358882 kubelet[2600]: I0912 23:58:05.357730 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:58:05.367496 kubelet[2600]: I0912 23:58:05.367469 2600 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:58:05.373691 kubelet[2600]: I0912 23:58:05.373606 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:58:05.377600 kubelet[2600]: I0912 23:58:05.377554 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:58:05.377600 kubelet[2600]: I0912 23:58:05.377592 2600 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:58:05.377882 kubelet[2600]: I0912 23:58:05.377693 2600 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 23:58:05.377882 kubelet[2600]: I0912 23:58:05.377716 2600 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:58:05.377882 kubelet[2600]: E0912 23:58:05.377782 2600 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:58:05.386928 kubelet[2600]: I0912 23:58:05.367847 2600 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:58:05.456409 kubelet[2600]: I0912 23:58:05.456379 2600 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:58:05.456409 kubelet[2600]: I0912 23:58:05.456396 2600 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:58:05.456409 kubelet[2600]: I0912 23:58:05.456418 2600 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:58:05.456597 kubelet[2600]: I0912 23:58:05.456575 2600 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:58:05.456668 kubelet[2600]: I0912 23:58:05.456592 2600 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:58:05.456668 kubelet[2600]: I0912 23:58:05.456611 2600 policy_none.go:49] "None policy: Start" Sep 12 23:58:05.456668 kubelet[2600]: I0912 23:58:05.456658 2600 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:58:05.456668 kubelet[2600]: I0912 23:58:05.456667 2600 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:58:05.456786 kubelet[2600]: I0912 23:58:05.456770 2600 state_mem.go:75] "Updated machine memory state" Sep 12 23:58:05.461289 kubelet[2600]: I0912 23:58:05.461260 2600 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:58:05.461493 kubelet[2600]: I0912 23:58:05.461467 2600 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:58:05.461540 kubelet[2600]: I0912 23:58:05.461487 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:58:05.462801 kubelet[2600]: I0912 23:58:05.462178 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:58:05.464181 kubelet[2600]: E0912 23:58:05.464151 2600 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:58:05.478579 kubelet[2600]: I0912 23:58:05.478535 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.479813 kubelet[2600]: I0912 23:58:05.479070 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.479813 kubelet[2600]: I0912 23:58:05.479315 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.573770 kubelet[2600]: I0912 23:58:05.573672 2600 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.584728 kubelet[2600]: I0912 23:58:05.584683 2600 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.585569 kubelet[2600]: I0912 23:58:05.584786 2600 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.657895 kubelet[2600]: I0912 23:58:05.657820 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658047 kubelet[2600]: I0912 23:58:05.657923 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658047 kubelet[2600]: I0912 23:58:05.657952 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658047 kubelet[2600]: I0912 23:58:05.658021 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658228 kubelet[2600]: I0912 23:58:05.658042 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658228 kubelet[2600]: I0912 23:58:05.658080 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" 
(UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658228 kubelet[2600]: I0912 23:58:05.658100 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d7eac9c782c4660c4a310b3fff83cff-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-44c5618783\" (UID: \"0d7eac9c782c4660c4a310b3fff83cff\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658356 kubelet[2600]: I0912 23:58:05.658334 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5e71d9745b53c7fcecfb775175135ed-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-44c5618783\" (UID: \"f5e71d9745b53c7fcecfb775175135ed\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.658615 kubelet[2600]: I0912 23:58:05.658561 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/601a9a7eb396cf54ebdb34ef526c443e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-44c5618783\" (UID: \"601a9a7eb396cf54ebdb34ef526c443e\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" Sep 12 23:58:05.743919 sudo[2636]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 23:58:05.744222 sudo[2636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 23:58:06.201091 sudo[2636]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:06.348526 kubelet[2600]: I0912 23:58:06.348141 2600 apiserver.go:52] "Watching apiserver" Sep 12 23:58:06.356767 kubelet[2600]: I0912 23:58:06.356705 2600 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:58:06.439715 kubelet[2600]: I0912 23:58:06.439477 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:06.449121 kubelet[2600]: E0912 23:58:06.448909 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-44c5618783\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" Sep 12 23:58:06.457846 kubelet[2600]: I0912 23:58:06.457017 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-44c5618783" podStartSLOduration=1.456987896 podStartE2EDuration="1.456987896s" podCreationTimestamp="2025-09-12 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:06.455256151 +0000 UTC m=+1.206422389" watchObservedRunningTime="2025-09-12 23:58:06.456987896 +0000 UTC m=+1.208154134" Sep 12 23:58:06.483552 kubelet[2600]: I0912 23:58:06.482916 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-44c5618783" podStartSLOduration=1.482898077 podStartE2EDuration="1.482898077s" podCreationTimestamp="2025-09-12 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:06.470271919 +0000 UTC m=+1.221438197" watchObservedRunningTime="2025-09-12 23:58:06.482898077 +0000 UTC m=+1.234064315" Sep 12 
23:58:06.501280 kubelet[2600]: I0912 23:58:06.501216 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-44c5618783" podStartSLOduration=1.50119805 podStartE2EDuration="1.50119805s" podCreationTimestamp="2025-09-12 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:06.484088002 +0000 UTC m=+1.235254240" watchObservedRunningTime="2025-09-12 23:58:06.50119805 +0000 UTC m=+1.252364248" Sep 12 23:58:07.850646 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:08.009321 sshd[1744]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:08.017312 systemd[1]: sshd@6-91.99.3.235:22-147.75.109.163:50200.service: Deactivated successfully. Sep 12 23:58:08.019569 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:58:08.019790 systemd[1]: session-7.scope: Consumed 7.722s CPU time, 153.4M memory peak, 0B memory swap peak. Sep 12 23:58:08.022130 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:58:08.023764 systemd-logind[1457]: Removed session 7. Sep 12 23:58:11.135922 kubelet[2600]: I0912 23:58:11.135828 2600 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:58:11.137732 containerd[1475]: time="2025-09-12T23:58:11.137277269Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:58:11.138102 kubelet[2600]: I0912 23:58:11.137486 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:58:12.101775 systemd[1]: Created slice kubepods-besteffort-pod9d77de17_7329_4c9d_b0a4_2506e15178c6.slice - libcontainer container kubepods-besteffort-pod9d77de17_7329_4c9d_b0a4_2506e15178c6.slice. 
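The "Updating runtime config through cri with podcidr" and "Updating Pod CIDR" entries above hand the node's freshly assigned 192.168.0.0/24 range to the runtime; every pod IP on this node must fall inside it. A quick standard-library sketch of that containment check (the pod IPs are made-up examples):

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // PodCIDR reported by the kubelet above.
    podCIDR := netip.MustParsePrefix("192.168.0.0/24")

    // Hypothetical pod IPs; only addresses inside the prefix are valid on this node.
    for _, ip := range []string{"192.168.0.17", "192.168.1.5"} {
        addr := netip.MustParseAddr(ip)
        fmt.Printf("%-13s in %s: %v\n", ip, podCIDR, podCIDR.Contains(addr))
    }
}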
Sep 12 23:58:12.103856 kubelet[2600]: I0912 23:58:12.103064 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d77de17-7329-4c9d-b0a4-2506e15178c6-lib-modules\") pod \"kube-proxy-lggbm\" (UID: \"9d77de17-7329-4c9d-b0a4-2506e15178c6\") " pod="kube-system/kube-proxy-lggbm" Sep 12 23:58:12.103856 kubelet[2600]: I0912 23:58:12.103097 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d77de17-7329-4c9d-b0a4-2506e15178c6-xtables-lock\") pod \"kube-proxy-lggbm\" (UID: \"9d77de17-7329-4c9d-b0a4-2506e15178c6\") " pod="kube-system/kube-proxy-lggbm" Sep 12 23:58:12.103856 kubelet[2600]: I0912 23:58:12.103116 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdsb\" (UniqueName: \"kubernetes.io/projected/9d77de17-7329-4c9d-b0a4-2506e15178c6-kube-api-access-9kdsb\") pod \"kube-proxy-lggbm\" (UID: \"9d77de17-7329-4c9d-b0a4-2506e15178c6\") " pod="kube-system/kube-proxy-lggbm" Sep 12 23:58:12.103856 kubelet[2600]: I0912 23:58:12.103726 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d77de17-7329-4c9d-b0a4-2506e15178c6-kube-proxy\") pod \"kube-proxy-lggbm\" (UID: \"9d77de17-7329-4c9d-b0a4-2506e15178c6\") " pod="kube-system/kube-proxy-lggbm" Sep 12 23:58:12.119612 systemd[1]: Created slice kubepods-burstable-podc12370c1_e49b_422c_b1a2_c03ba3fa0ad7.slice - libcontainer container kubepods-burstable-podc12370c1_e49b_422c_b1a2_c03ba3fa0ad7.slice. Sep 12 23:58:12.204285 kubelet[2600]: I0912 23:58:12.204179 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-lib-modules\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204316 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-xtables-lock\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204394 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-config-path\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204430 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-net\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204512 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-etc-cni-netd\") pod \"cilium-kvl4z\" (UID: 
\"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204578 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-run\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205066 kubelet[2600]: I0912 23:58:12.204609 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cni-path\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205406 kubelet[2600]: I0912 23:58:12.204903 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-cgroup\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.205406 kubelet[2600]: I0912 23:58:12.204974 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkj4\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-kube-api-access-2tkj4\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.206276 kubelet[2600]: I0912 23:58:12.206220 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hostproc\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.206371 kubelet[2600]: I0912 23:58:12.206286 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-clustermesh-secrets\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.206371 kubelet[2600]: I0912 23:58:12.206334 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-kernel\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.206424 kubelet[2600]: I0912 23:58:12.206368 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-bpf-maps\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.206424 kubelet[2600]: I0912 23:58:12.206401 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hubble-tls\") pod \"cilium-kvl4z\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " pod="kube-system/cilium-kvl4z" Sep 12 23:58:12.267307 systemd[1]: Created slice kubepods-besteffort-pod5682df3d_1839_4d78_99fa_818280ce56bc.slice - 
libcontainer container kubepods-besteffort-pod5682df3d_1839_4d78_99fa_818280ce56bc.slice. Sep 12 23:58:12.307388 kubelet[2600]: I0912 23:58:12.307041 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffgk\" (UniqueName: \"kubernetes.io/projected/5682df3d-1839-4d78-99fa-818280ce56bc-kube-api-access-6ffgk\") pod \"cilium-operator-6c4d7847fc-ptk5j\" (UID: \"5682df3d-1839-4d78-99fa-818280ce56bc\") " pod="kube-system/cilium-operator-6c4d7847fc-ptk5j" Sep 12 23:58:12.308606 kubelet[2600]: I0912 23:58:12.308148 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5682df3d-1839-4d78-99fa-818280ce56bc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ptk5j\" (UID: \"5682df3d-1839-4d78-99fa-818280ce56bc\") " pod="kube-system/cilium-operator-6c4d7847fc-ptk5j" Sep 12 23:58:12.412610 containerd[1475]: time="2025-09-12T23:58:12.411537852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lggbm,Uid:9d77de17-7329-4c9d-b0a4-2506e15178c6,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:12.423143 containerd[1475]: time="2025-09-12T23:58:12.423085065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvl4z,Uid:c12370c1-e49b-422c-b1a2-c03ba3fa0ad7,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:12.450076 containerd[1475]: time="2025-09-12T23:58:12.449982161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:12.450076 containerd[1475]: time="2025-09-12T23:58:12.450038762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:12.450076 containerd[1475]: time="2025-09-12T23:58:12.450050603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.450455 containerd[1475]: time="2025-09-12T23:58:12.450391733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.463273 containerd[1475]: time="2025-09-12T23:58:12.463162781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:12.463273 containerd[1475]: time="2025-09-12T23:58:12.463230783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:12.463273 containerd[1475]: time="2025-09-12T23:58:12.463246544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.464136 containerd[1475]: time="2025-09-12T23:58:12.463911163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.477098 systemd[1]: Started cri-containerd-1a8bf7cfebc606b023b5cf24726377cb952346c750f2f303010f6f1309d572f3.scope - libcontainer container 1a8bf7cfebc606b023b5cf24726377cb952346c750f2f303010f6f1309d572f3. Sep 12 23:58:12.496166 systemd[1]: Started cri-containerd-3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96.scope - libcontainer container 3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96. 
Sep 12 23:58:12.531528 containerd[1475]: time="2025-09-12T23:58:12.531124062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lggbm,Uid:9d77de17-7329-4c9d-b0a4-2506e15178c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a8bf7cfebc606b023b5cf24726377cb952346c750f2f303010f6f1309d572f3\"" Sep 12 23:58:12.536751 containerd[1475]: time="2025-09-12T23:58:12.536653702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvl4z,Uid:c12370c1-e49b-422c-b1a2-c03ba3fa0ad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\"" Sep 12 23:58:12.539846 containerd[1475]: time="2025-09-12T23:58:12.539775952Z" level=info msg="CreateContainer within sandbox \"1a8bf7cfebc606b023b5cf24726377cb952346c750f2f303010f6f1309d572f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:58:12.541341 containerd[1475]: time="2025-09-12T23:58:12.540157883Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 23:58:12.559210 containerd[1475]: time="2025-09-12T23:58:12.559160351Z" level=info msg="CreateContainer within sandbox \"1a8bf7cfebc606b023b5cf24726377cb952346c750f2f303010f6f1309d572f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bec57af0221a9e20169dff6fa66b24ac54df4bef7b4bc902d149d979646a2e6d\"" Sep 12 23:58:12.562356 containerd[1475]: time="2025-09-12T23:58:12.562277801Z" level=info msg="StartContainer for \"bec57af0221a9e20169dff6fa66b24ac54df4bef7b4bc902d149d979646a2e6d\"" Sep 12 23:58:12.572293 containerd[1475]: time="2025-09-12T23:58:12.572163846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ptk5j,Uid:5682df3d-1839-4d78-99fa-818280ce56bc,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:12.593760 systemd[1]: Started cri-containerd-bec57af0221a9e20169dff6fa66b24ac54df4bef7b4bc902d149d979646a2e6d.scope - libcontainer container bec57af0221a9e20169dff6fa66b24ac54df4bef7b4bc902d149d979646a2e6d. Sep 12 23:58:12.604812 containerd[1475]: time="2025-09-12T23:58:12.604471299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:12.604812 containerd[1475]: time="2025-09-12T23:58:12.604525220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:12.604812 containerd[1475]: time="2025-09-12T23:58:12.604548301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.604812 containerd[1475]: time="2025-09-12T23:58:12.604630383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:12.627638 systemd[1]: Started cri-containerd-8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656.scope - libcontainer container 8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656. 
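The RunPodSandbox and StartContainer entries above are CRI calls issued by the kubelet to containerd over its gRPC socket. A rough sketch of listing those sandboxes the way a node-side debugging tool might; the socket path /run/containerd/containerd.sock is an assumption (the log only shows that the crio socket is absent and that the containerd factory registered successfully):

// cri_list_sketch.go - rough sketch of a CRI ListPodSandbox call.
package main

import (
    "context"
    "fmt"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed containerd CRI socket path; adjust for the actual runtime endpoint.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("dial CRI socket: %v", err)
    }
    defer conn.Close()

    client := runtimeapi.NewRuntimeServiceClient(conn)
    resp, err := client.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
    if err != nil {
        log.Fatalf("ListPodSandbox: %v", err)
    }
    for _, sb := range resp.Items {
        fmt.Printf("%s  %s/%s  %s\n", sb.Id[:13], sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
    }
}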
Sep 12 23:58:12.656393 containerd[1475]: time="2025-09-12T23:58:12.656334275Z" level=info msg="StartContainer for \"bec57af0221a9e20169dff6fa66b24ac54df4bef7b4bc902d149d979646a2e6d\" returns successfully" Sep 12 23:58:12.686317 containerd[1475]: time="2025-09-12T23:58:12.686207337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ptk5j,Uid:5682df3d-1839-4d78-99fa-818280ce56bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\"" Sep 12 23:58:13.471788 kubelet[2600]: I0912 23:58:13.471650 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lggbm" podStartSLOduration=1.47163152 podStartE2EDuration="1.47163152s" podCreationTimestamp="2025-09-12 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:13.471179028 +0000 UTC m=+8.222345266" watchObservedRunningTime="2025-09-12 23:58:13.47163152 +0000 UTC m=+8.222797758" Sep 12 23:58:16.224271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041936858.mount: Deactivated successfully. Sep 12 23:58:17.619313 containerd[1475]: time="2025-09-12T23:58:17.619258272Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:17.621116 containerd[1475]: time="2025-09-12T23:58:17.620968392Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 23:58:17.621778 containerd[1475]: time="2025-09-12T23:58:17.621417643Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:17.624334 containerd[1475]: time="2025-09-12T23:58:17.624279790Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.082936913s" Sep 12 23:58:17.624334 containerd[1475]: time="2025-09-12T23:58:17.624326751Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 23:58:17.625716 containerd[1475]: time="2025-09-12T23:58:17.625404457Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 23:58:17.628681 containerd[1475]: time="2025-09-12T23:58:17.628429688Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:58:17.643706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168561492.mount: Deactivated successfully. 
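For the cilium image pull reported above, the logged figures (157,646,710 bytes read in 5.082936913s) work out to roughly 30 MiB/s; a trivial check of that arithmetic:

package main

import "fmt"

func main() {
    const bytesRead = 157646710.0 // "bytes read" when the pull stopped, per the log
    const seconds = 5.082936913   // duration from the Pulled message
    fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // prints ~29.6 MiB/s
}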
Sep 12 23:58:17.645740 containerd[1475]: time="2025-09-12T23:58:17.645673855Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\"" Sep 12 23:58:17.647022 containerd[1475]: time="2025-09-12T23:58:17.646308590Z" level=info msg="StartContainer for \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\"" Sep 12 23:58:17.683106 systemd[1]: Started cri-containerd-ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479.scope - libcontainer container ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479. Sep 12 23:58:17.711511 containerd[1475]: time="2025-09-12T23:58:17.711371446Z" level=info msg="StartContainer for \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\" returns successfully" Sep 12 23:58:17.727711 systemd[1]: cri-containerd-ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479.scope: Deactivated successfully. Sep 12 23:58:17.898058 containerd[1475]: time="2025-09-12T23:58:17.897108071Z" level=info msg="shim disconnected" id=ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479 namespace=k8s.io Sep 12 23:58:17.898058 containerd[1475]: time="2025-09-12T23:58:17.897189553Z" level=warning msg="cleaning up after shim disconnected" id=ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479 namespace=k8s.io Sep 12 23:58:17.898058 containerd[1475]: time="2025-09-12T23:58:17.897208914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:18.479223 containerd[1475]: time="2025-09-12T23:58:18.479065125Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:58:18.505898 containerd[1475]: time="2025-09-12T23:58:18.505688811Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\"" Sep 12 23:58:18.508828 containerd[1475]: time="2025-09-12T23:58:18.508473354Z" level=info msg="StartContainer for \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\"" Sep 12 23:58:18.537241 systemd[1]: Started cri-containerd-f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549.scope - libcontainer container f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549. Sep 12 23:58:18.574849 containerd[1475]: time="2025-09-12T23:58:18.574780223Z" level=info msg="StartContainer for \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\" returns successfully" Sep 12 23:58:18.589392 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:58:18.589786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:18.589862 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:18.594731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:18.594950 systemd[1]: cri-containerd-f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549.scope: Deactivated successfully. Sep 12 23:58:18.621701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 23:58:18.625305 containerd[1475]: time="2025-09-12T23:58:18.625249171Z" level=info msg="shim disconnected" id=f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549 namespace=k8s.io Sep 12 23:58:18.625886 containerd[1475]: time="2025-09-12T23:58:18.625660540Z" level=warning msg="cleaning up after shim disconnected" id=f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549 namespace=k8s.io Sep 12 23:58:18.625886 containerd[1475]: time="2025-09-12T23:58:18.625681980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:18.640203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479-rootfs.mount: Deactivated successfully. Sep 12 23:58:19.180473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647193909.mount: Deactivated successfully. Sep 12 23:58:19.486098 containerd[1475]: time="2025-09-12T23:58:19.485974079Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:58:19.508654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733476829.mount: Deactivated successfully. Sep 12 23:58:19.513469 containerd[1475]: time="2025-09-12T23:58:19.513057953Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\"" Sep 12 23:58:19.514518 containerd[1475]: time="2025-09-12T23:58:19.514442223Z" level=info msg="StartContainer for \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\"" Sep 12 23:58:19.553236 systemd[1]: Started cri-containerd-fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae.scope - libcontainer container fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae. Sep 12 23:58:19.602777 containerd[1475]: time="2025-09-12T23:58:19.602397833Z" level=info msg="StartContainer for \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\" returns successfully" Sep 12 23:58:19.611643 systemd[1]: cri-containerd-fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae.scope: Deactivated successfully. 
Sep 12 23:58:19.669864 containerd[1475]: time="2025-09-12T23:58:19.669631508Z" level=info msg="shim disconnected" id=fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae namespace=k8s.io Sep 12 23:58:19.669864 containerd[1475]: time="2025-09-12T23:58:19.669708830Z" level=warning msg="cleaning up after shim disconnected" id=fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae namespace=k8s.io Sep 12 23:58:19.669864 containerd[1475]: time="2025-09-12T23:58:19.669720910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:19.744599 containerd[1475]: time="2025-09-12T23:58:19.744212305Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:19.744716 containerd[1475]: time="2025-09-12T23:58:19.744692155Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 23:58:19.745666 containerd[1475]: time="2025-09-12T23:58:19.745627736Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:58:19.747517 containerd[1475]: time="2025-09-12T23:58:19.746969725Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.121527788s" Sep 12 23:58:19.747517 containerd[1475]: time="2025-09-12T23:58:19.747015006Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 23:58:19.749036 containerd[1475]: time="2025-09-12T23:58:19.749008130Z" level=info msg="CreateContainer within sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 23:58:19.765513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947400469.mount: Deactivated successfully. Sep 12 23:58:19.769108 containerd[1475]: time="2025-09-12T23:58:19.769051650Z" level=info msg="CreateContainer within sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\"" Sep 12 23:58:19.771837 containerd[1475]: time="2025-09-12T23:58:19.770625044Z" level=info msg="StartContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\"" Sep 12 23:58:19.802144 systemd[1]: Started cri-containerd-5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435.scope - libcontainer container 5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435. 
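The short-lived containers above (ba380942… for mount-cgroup, f40b79… for apply-sysctl-overwrites, fef082… for mount-bpf-fs) are Cilium's init containers: each runs once and exits, which is why every StartContainer is followed by a scope deactivation and a "shim disconnected" cleanup. Roughly what the mount-bpf-fs step amounts to, as a hedged standalone sketch (requires root/CAP_SYS_ADMIN; not Cilium's actual implementation):

// bpffs_sketch.go - approximates the mount-bpf-fs init step; run as root.
package main

import (
    "log"

    "golang.org/x/sys/unix"
)

func main() {
    target := "/sys/fs/bpf"
    // A real implementation would first check /proc/mounts to stay idempotent.
    if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil && err != unix.EBUSY {
        log.Fatalf("mount bpffs on %s: %v", target, err)
    }
    log.Printf("bpffs mounted at %s", target)
}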
Sep 12 23:58:19.843129 containerd[1475]: time="2025-09-12T23:58:19.843078914Z" level=info msg="StartContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" returns successfully" Sep 12 23:58:20.497023 containerd[1475]: time="2025-09-12T23:58:20.496975206Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:58:20.513219 containerd[1475]: time="2025-09-12T23:58:20.512809141Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\"" Sep 12 23:58:20.513919 containerd[1475]: time="2025-09-12T23:58:20.513803602Z" level=info msg="StartContainer for \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\"" Sep 12 23:58:20.556138 systemd[1]: Started cri-containerd-55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2.scope - libcontainer container 55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2. Sep 12 23:58:20.615080 systemd[1]: cri-containerd-55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2.scope: Deactivated successfully. Sep 12 23:58:20.617980 containerd[1475]: time="2025-09-12T23:58:20.616424616Z" level=info msg="StartContainer for \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\" returns successfully" Sep 12 23:58:20.660319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2-rootfs.mount: Deactivated successfully. Sep 12 23:58:20.661175 kubelet[2600]: I0912 23:58:20.660424 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ptk5j" podStartSLOduration=1.603781885 podStartE2EDuration="8.660405828s" podCreationTimestamp="2025-09-12 23:58:12 +0000 UTC" firstStartedPulling="2025-09-12 23:58:12.69115544 +0000 UTC m=+7.442321678" lastFinishedPulling="2025-09-12 23:58:19.747779383 +0000 UTC m=+14.498945621" observedRunningTime="2025-09-12 23:58:20.580364372 +0000 UTC m=+15.331530690" watchObservedRunningTime="2025-09-12 23:58:20.660405828 +0000 UTC m=+15.411572066" Sep 12 23:58:20.664678 containerd[1475]: time="2025-09-12T23:58:20.664607157Z" level=info msg="shim disconnected" id=55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2 namespace=k8s.io Sep 12 23:58:20.664678 containerd[1475]: time="2025-09-12T23:58:20.664673598Z" level=warning msg="cleaning up after shim disconnected" id=55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2 namespace=k8s.io Sep 12 23:58:20.664678 containerd[1475]: time="2025-09-12T23:58:20.664681918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:58:21.503899 containerd[1475]: time="2025-09-12T23:58:21.503623618Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:58:21.527491 containerd[1475]: time="2025-09-12T23:58:21.527171140Z" level=info msg="CreateContainer within sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\"" Sep 12 
23:58:21.529096 containerd[1475]: time="2025-09-12T23:58:21.529014978Z" level=info msg="StartContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\"" Sep 12 23:58:21.567100 systemd[1]: Started cri-containerd-589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd.scope - libcontainer container 589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd. Sep 12 23:58:21.606351 containerd[1475]: time="2025-09-12T23:58:21.606308961Z" level=info msg="StartContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" returns successfully" Sep 12 23:58:21.695445 kubelet[2600]: I0912 23:58:21.694401 2600 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:58:21.740582 systemd[1]: Created slice kubepods-burstable-poda4dd7ca4_ff71_4989_8370_259b2982bd56.slice - libcontainer container kubepods-burstable-poda4dd7ca4_ff71_4989_8370_259b2982bd56.slice. Sep 12 23:58:21.752855 systemd[1]: Created slice kubepods-burstable-pod24dccbb9_1fcb_4080_b183_0308bcbc0e71.slice - libcontainer container kubepods-burstable-pod24dccbb9_1fcb_4080_b183_0308bcbc0e71.slice. Sep 12 23:58:21.778779 kubelet[2600]: I0912 23:58:21.778491 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcg2n\" (UniqueName: \"kubernetes.io/projected/a4dd7ca4-ff71-4989-8370-259b2982bd56-kube-api-access-tcg2n\") pod \"coredns-668d6bf9bc-h8gsp\" (UID: \"a4dd7ca4-ff71-4989-8370-259b2982bd56\") " pod="kube-system/coredns-668d6bf9bc-h8gsp" Sep 12 23:58:21.778779 kubelet[2600]: I0912 23:58:21.778565 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24dccbb9-1fcb-4080-b183-0308bcbc0e71-config-volume\") pod \"coredns-668d6bf9bc-9qcnr\" (UID: \"24dccbb9-1fcb-4080-b183-0308bcbc0e71\") " pod="kube-system/coredns-668d6bf9bc-9qcnr" Sep 12 23:58:21.778779 kubelet[2600]: I0912 23:58:21.778586 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rplb\" (UniqueName: \"kubernetes.io/projected/24dccbb9-1fcb-4080-b183-0308bcbc0e71-kube-api-access-6rplb\") pod \"coredns-668d6bf9bc-9qcnr\" (UID: \"24dccbb9-1fcb-4080-b183-0308bcbc0e71\") " pod="kube-system/coredns-668d6bf9bc-9qcnr" Sep 12 23:58:21.778779 kubelet[2600]: I0912 23:58:21.778604 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4dd7ca4-ff71-4989-8370-259b2982bd56-config-volume\") pod \"coredns-668d6bf9bc-h8gsp\" (UID: \"a4dd7ca4-ff71-4989-8370-259b2982bd56\") " pod="kube-system/coredns-668d6bf9bc-h8gsp" Sep 12 23:58:22.052687 containerd[1475]: time="2025-09-12T23:58:22.051858489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8gsp,Uid:a4dd7ca4-ff71-4989-8370-259b2982bd56,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:22.056409 containerd[1475]: time="2025-09-12T23:58:22.056359458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qcnr,Uid:24dccbb9-1fcb-4080-b183-0308bcbc0e71,Namespace:kube-system,Attempt:0,}" Sep 12 23:58:23.828782 systemd-networkd[1367]: cilium_host: Link UP Sep 12 23:58:23.829879 systemd-networkd[1367]: cilium_net: Link UP Sep 12 23:58:23.830487 systemd-networkd[1367]: cilium_net: Gained carrier Sep 12 23:58:23.830657 systemd-networkd[1367]: cilium_host: Gained carrier Sep 12 
23:58:23.960403 systemd-networkd[1367]: cilium_vxlan: Link UP Sep 12 23:58:23.960410 systemd-networkd[1367]: cilium_vxlan: Gained carrier Sep 12 23:58:24.253962 kernel: NET: Registered PF_ALG protocol family Sep 12 23:58:24.459078 systemd-networkd[1367]: cilium_host: Gained IPv6LL Sep 12 23:58:24.587086 systemd-networkd[1367]: cilium_net: Gained IPv6LL Sep 12 23:58:24.977968 systemd-networkd[1367]: lxc_health: Link UP Sep 12 23:58:24.990695 systemd-networkd[1367]: lxc_health: Gained carrier Sep 12 23:58:25.133501 systemd-networkd[1367]: lxcc0633035b9c7: Link UP Sep 12 23:58:25.139348 systemd-networkd[1367]: lxca603ea862c1c: Link UP Sep 12 23:58:25.146231 kernel: eth0: renamed from tmpba55c Sep 12 23:58:25.151149 kernel: eth0: renamed from tmpc80bd Sep 12 23:58:25.159817 systemd-networkd[1367]: lxca603ea862c1c: Gained carrier Sep 12 23:58:25.166061 systemd-networkd[1367]: lxcc0633035b9c7: Gained carrier Sep 12 23:58:25.868080 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Sep 12 23:58:26.251579 systemd-networkd[1367]: lxcc0633035b9c7: Gained IPv6LL Sep 12 23:58:26.447700 kubelet[2600]: I0912 23:58:26.446296 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvl4z" podStartSLOduration=9.36038583 podStartE2EDuration="14.446276064s" podCreationTimestamp="2025-09-12 23:58:12 +0000 UTC" firstStartedPulling="2025-09-12 23:58:12.539302498 +0000 UTC m=+7.290468736" lastFinishedPulling="2025-09-12 23:58:17.625192732 +0000 UTC m=+12.376358970" observedRunningTime="2025-09-12 23:58:22.529086463 +0000 UTC m=+17.280252741" watchObservedRunningTime="2025-09-12 23:58:26.446276064 +0000 UTC m=+21.197442262" Sep 12 23:58:26.699057 systemd-networkd[1367]: lxc_health: Gained IPv6LL Sep 12 23:58:27.019108 systemd-networkd[1367]: lxca603ea862c1c: Gained IPv6LL Sep 12 23:58:29.205086 containerd[1475]: time="2025-09-12T23:58:29.204621995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:29.205086 containerd[1475]: time="2025-09-12T23:58:29.204686836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:29.205086 containerd[1475]: time="2025-09-12T23:58:29.204740517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:29.205086 containerd[1475]: time="2025-09-12T23:58:29.204840599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:29.224174 containerd[1475]: time="2025-09-12T23:58:29.223323098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:58:29.224174 containerd[1475]: time="2025-09-12T23:58:29.223390939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:58:29.224174 containerd[1475]: time="2025-09-12T23:58:29.223413979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:29.224174 containerd[1475]: time="2025-09-12T23:58:29.223515101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:58:29.260370 systemd[1]: Started cri-containerd-c80bd6c53b5c30ae30fbfbac992292c0d6f69f2fc8448aac674244ea22816058.scope - libcontainer container c80bd6c53b5c30ae30fbfbac992292c0d6f69f2fc8448aac674244ea22816058. Sep 12 23:58:29.266940 systemd[1]: Started cri-containerd-ba55cbedc88ca0638338387af42eeb760b7488c541c526e7d00f85bd605fbe38.scope - libcontainer container ba55cbedc88ca0638338387af42eeb760b7488c541c526e7d00f85bd605fbe38. Sep 12 23:58:29.323620 containerd[1475]: time="2025-09-12T23:58:29.323549560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qcnr,Uid:24dccbb9-1fcb-4080-b183-0308bcbc0e71,Namespace:kube-system,Attempt:0,} returns sandbox id \"c80bd6c53b5c30ae30fbfbac992292c0d6f69f2fc8448aac674244ea22816058\"" Sep 12 23:58:29.334643 containerd[1475]: time="2025-09-12T23:58:29.334117171Z" level=info msg="CreateContainer within sandbox \"c80bd6c53b5c30ae30fbfbac992292c0d6f69f2fc8448aac674244ea22816058\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:58:29.355100 containerd[1475]: time="2025-09-12T23:58:29.355024429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8gsp,Uid:a4dd7ca4-ff71-4989-8370-259b2982bd56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba55cbedc88ca0638338387af42eeb760b7488c541c526e7d00f85bd605fbe38\"" Sep 12 23:58:29.369623 containerd[1475]: time="2025-09-12T23:58:29.369472863Z" level=info msg="CreateContainer within sandbox \"c80bd6c53b5c30ae30fbfbac992292c0d6f69f2fc8448aac674244ea22816058\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8689c0b6eddf71a1b90c884b66286c953ee97a481c7e77c5ad5a43ab4a4203a4\"" Sep 12 23:58:29.369950 containerd[1475]: time="2025-09-12T23:58:29.369753868Z" level=info msg="CreateContainer within sandbox \"ba55cbedc88ca0638338387af42eeb760b7488c541c526e7d00f85bd605fbe38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:58:29.370339 containerd[1475]: time="2025-09-12T23:58:29.370229275Z" level=info msg="StartContainer for \"8689c0b6eddf71a1b90c884b66286c953ee97a481c7e77c5ad5a43ab4a4203a4\"" Sep 12 23:58:29.399527 containerd[1475]: time="2025-09-12T23:58:29.399435428Z" level=info msg="CreateContainer within sandbox \"ba55cbedc88ca0638338387af42eeb760b7488c541c526e7d00f85bd605fbe38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6817f479971278fd2f46a38856aed8cf17799791e7b8fe84d1767367ee2810fb\"" Sep 12 23:58:29.404925 containerd[1475]: time="2025-09-12T23:58:29.403957981Z" level=info msg="StartContainer for \"6817f479971278fd2f46a38856aed8cf17799791e7b8fe84d1767367ee2810fb\"" Sep 12 23:58:29.428149 systemd[1]: Started cri-containerd-8689c0b6eddf71a1b90c884b66286c953ee97a481c7e77c5ad5a43ab4a4203a4.scope - libcontainer container 8689c0b6eddf71a1b90c884b66286c953ee97a481c7e77c5ad5a43ab4a4203a4. Sep 12 23:58:29.454511 systemd[1]: Started cri-containerd-6817f479971278fd2f46a38856aed8cf17799791e7b8fe84d1767367ee2810fb.scope - libcontainer container 6817f479971278fd2f46a38856aed8cf17799791e7b8fe84d1767367ee2810fb. 
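The systemd-networkd entries a little earlier (cilium_host, cilium_net, cilium_vxlan, lxc_health, lxcc0633035b9c7, lxca603ea862c1c) are the virtual devices Cilium creates for its datapath and per-endpoint veths; the two lxc* devices appear to correspond to the CoreDNS pods whose sandboxes are set up here (note the "eth0: renamed from tmpba55c/tmpc80bd" kernel messages matching the sandbox IDs ba55cbedc…/c80bd6c5…). A small sketch that lists those links from the node, assuming the github.com/vishvananda/netlink module:

// links_sketch.go - lists the Cilium-related network devices reported above; Linux only.
package main

import (
    "fmt"
    "log"
    "strings"

    "github.com/vishvananda/netlink"
)

func main() {
    links, err := netlink.LinkList()
    if err != nil {
        log.Fatalf("list links: %v", err)
    }
    for _, l := range links {
        name := l.Attrs().Name
        if strings.HasPrefix(name, "cilium_") || strings.HasPrefix(name, "lxc") {
            fmt.Printf("%-20s type=%-6s state=%s\n", name, l.Type(), l.Attrs().OperState)
        }
    }
}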
Sep 12 23:58:29.476408 containerd[1475]: time="2025-09-12T23:58:29.476268912Z" level=info msg="StartContainer for \"8689c0b6eddf71a1b90c884b66286c953ee97a481c7e77c5ad5a43ab4a4203a4\" returns successfully" Sep 12 23:58:29.497794 containerd[1475]: time="2025-09-12T23:58:29.497655338Z" level=info msg="StartContainer for \"6817f479971278fd2f46a38856aed8cf17799791e7b8fe84d1767367ee2810fb\" returns successfully" Sep 12 23:58:29.569091 kubelet[2600]: I0912 23:58:29.568984 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h8gsp" podStartSLOduration=17.568967252 podStartE2EDuration="17.568967252s" podCreationTimestamp="2025-09-12 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:29.568016436 +0000 UTC m=+24.319182674" watchObservedRunningTime="2025-09-12 23:58:29.568967252 +0000 UTC m=+24.320133490" Sep 12 23:58:29.569091 kubelet[2600]: I0912 23:58:29.569093 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9qcnr" podStartSLOduration=17.569089054 podStartE2EDuration="17.569089054s" podCreationTimestamp="2025-09-12 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:58:29.545762716 +0000 UTC m=+24.296928994" watchObservedRunningTime="2025-09-12 23:58:29.569089054 +0000 UTC m=+24.320255292" Sep 12 23:58:30.218913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712906406.mount: Deactivated successfully. Sep 13 00:00:01.867411 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 13 00:00:01.878247 systemd[1]: logrotate.service: Deactivated successfully. Sep 13 00:00:30.958442 systemd[1]: Started sshd@8-91.99.3.235:22-147.75.109.163:43444.service - OpenSSH per-connection server daemon (147.75.109.163:43444). Sep 13 00:00:31.953532 sshd[3993]: Accepted publickey for core from 147.75.109.163 port 43444 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:31.956046 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:31.961053 systemd-logind[1457]: New session 8 of user core. Sep 13 00:00:31.975199 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:00:32.745405 sshd[3993]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:32.750115 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:00:32.750604 systemd[1]: sshd@8-91.99.3.235:22-147.75.109.163:43444.service: Deactivated successfully. Sep 13 00:00:32.753661 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:00:32.756644 systemd-logind[1457]: Removed session 8. Sep 13 00:00:37.924411 systemd[1]: Started sshd@9-91.99.3.235:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452). Sep 13 00:00:38.909376 sshd[4007]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:38.914482 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:38.920717 systemd-logind[1457]: New session 9 of user core. Sep 13 00:00:38.927182 systemd[1]: Started session-9.scope - Session 9 of User core. 
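The pod_startup_latency_tracker records above (cilium-kvl4z earlier, plus the two coredns pods here) carry enough timestamps to reconstruct the durations they report. The sketch below is an inference from those fields rather than the kubelet's own accounting in pod_startup_latency_tracker.go: podStartE2EDuration looks like watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration looks like that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For the coredns pods the pull timestamps are the zero value, so the two durations coincide.

```python
from datetime import datetime

# Timestamps copied from the cilium-kvl4z pod_startup_latency_tracker entry above;
# the SLO formula is an assumption inferred from these fields, not kubelet source.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"

created       = datetime.strptime("2025-09-12 23:58:12.000000 +0000", FMT)
pull_start    = datetime.strptime("2025-09-12 23:58:12.539302 +0000", FMT)
pull_end      = datetime.strptime("2025-09-12 23:58:17.625192 +0000", FMT)
watch_running = datetime.strptime("2025-09-12 23:58:26.446276 +0000", FMT)

e2e  = (watch_running - created).total_seconds()      # ~14.446s -> podStartE2EDuration
pull = (pull_end - pull_start).total_seconds()        # ~5.086s image-pull window
slo  = e2e - pull                                     # ~9.360s -> podStartSLOduration
print(f"E2E={e2e:.3f}s pull={pull:.3f}s SLO={slo:.3f}s")
```

Running this reproduces podStartE2EDuration=14.446s and podStartSLOduration=9.360s as reported for cilium-kvl4z.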
Sep 13 00:00:39.668994 sshd[4007]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:39.676505 systemd[1]: sshd@9-91.99.3.235:22-147.75.109.163:43452.service: Deactivated successfully. Sep 13 00:00:39.679103 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:00:39.680227 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:00:39.681985 systemd-logind[1457]: Removed session 9. Sep 13 00:00:44.849363 systemd[1]: Started sshd@10-91.99.3.235:22-147.75.109.163:57410.service - OpenSSH per-connection server daemon (147.75.109.163:57410). Sep 13 00:00:45.832105 sshd[4023]: Accepted publickey for core from 147.75.109.163 port 57410 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:45.834057 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:45.839670 systemd-logind[1457]: New session 10 of user core. Sep 13 00:00:45.847773 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:00:46.590135 sshd[4023]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:46.596469 systemd[1]: sshd@10-91.99.3.235:22-147.75.109.163:57410.service: Deactivated successfully. Sep 13 00:00:46.599133 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:00:46.599955 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:00:46.601730 systemd-logind[1457]: Removed session 10. Sep 13 00:00:46.765393 systemd[1]: Started sshd@11-91.99.3.235:22-147.75.109.163:57420.service - OpenSSH per-connection server daemon (147.75.109.163:57420). Sep 13 00:00:47.765429 sshd[4036]: Accepted publickey for core from 147.75.109.163 port 57420 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:47.771761 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:47.785393 systemd-logind[1457]: New session 11 of user core. Sep 13 00:00:47.793163 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:00:48.579938 sshd[4036]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:48.585685 systemd[1]: sshd@11-91.99.3.235:22-147.75.109.163:57420.service: Deactivated successfully. Sep 13 00:00:48.587813 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:00:48.589002 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:00:48.590494 systemd-logind[1457]: Removed session 11. Sep 13 00:00:48.751326 systemd[1]: Started sshd@12-91.99.3.235:22-147.75.109.163:57434.service - OpenSSH per-connection server daemon (147.75.109.163:57434). Sep 13 00:00:49.733322 sshd[4047]: Accepted publickey for core from 147.75.109.163 port 57434 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:49.735375 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:49.741109 systemd-logind[1457]: New session 12 of user core. Sep 13 00:00:49.750534 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:00:50.475696 sshd[4047]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:50.480349 systemd[1]: sshd@12-91.99.3.235:22-147.75.109.163:57434.service: Deactivated successfully. Sep 13 00:00:50.482926 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:00:50.484242 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:00:50.485836 systemd-logind[1457]: Removed session 12. 
Sep 13 00:00:55.658463 systemd[1]: Started sshd@13-91.99.3.235:22-147.75.109.163:33776.service - OpenSSH per-connection server daemon (147.75.109.163:33776). Sep 13 00:00:56.644104 sshd[4060]: Accepted publickey for core from 147.75.109.163 port 33776 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:00:56.645778 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:56.653280 systemd-logind[1457]: New session 13 of user core. Sep 13 00:00:56.667218 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:00:57.399110 sshd[4060]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:57.404051 systemd[1]: sshd@13-91.99.3.235:22-147.75.109.163:33776.service: Deactivated successfully. Sep 13 00:00:57.406722 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:00:57.410146 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:00:57.411722 systemd-logind[1457]: Removed session 13. Sep 13 00:01:02.565129 systemd[1]: Started sshd@14-91.99.3.235:22-147.75.109.163:58314.service - OpenSSH per-connection server daemon (147.75.109.163:58314). Sep 13 00:01:03.549265 sshd[4073]: Accepted publickey for core from 147.75.109.163 port 58314 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:03.552487 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:03.558697 systemd-logind[1457]: New session 14 of user core. Sep 13 00:01:03.564094 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:01:04.297352 sshd[4073]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:04.302403 systemd[1]: sshd@14-91.99.3.235:22-147.75.109.163:58314.service: Deactivated successfully. Sep 13 00:01:04.304578 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:01:04.306927 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:01:04.307840 systemd-logind[1457]: Removed session 14. Sep 13 00:01:09.475303 systemd[1]: Started sshd@15-91.99.3.235:22-147.75.109.163:58320.service - OpenSSH per-connection server daemon (147.75.109.163:58320). Sep 13 00:01:10.460516 sshd[4088]: Accepted publickey for core from 147.75.109.163 port 58320 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:10.462991 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:10.471220 systemd-logind[1457]: New session 15 of user core. Sep 13 00:01:10.476195 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:01:11.219280 sshd[4088]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:11.225429 systemd[1]: sshd@15-91.99.3.235:22-147.75.109.163:58320.service: Deactivated successfully. Sep 13 00:01:11.225868 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:01:11.228404 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:01:11.230896 systemd-logind[1457]: Removed session 15. Sep 13 00:01:11.395241 systemd[1]: Started sshd@16-91.99.3.235:22-147.75.109.163:54648.service - OpenSSH per-connection server daemon (147.75.109.163:54648). 
Sep 13 00:01:12.370890 sshd[4101]: Accepted publickey for core from 147.75.109.163 port 54648 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:12.373307 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:12.380193 systemd-logind[1457]: New session 16 of user core. Sep 13 00:01:12.388213 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:01:13.166278 sshd[4101]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:13.172431 systemd[1]: sshd@16-91.99.3.235:22-147.75.109.163:54648.service: Deactivated successfully. Sep 13 00:01:13.172785 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:01:13.175117 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:01:13.176561 systemd-logind[1457]: Removed session 16. Sep 13 00:01:13.340331 systemd[1]: Started sshd@17-91.99.3.235:22-147.75.109.163:54660.service - OpenSSH per-connection server daemon (147.75.109.163:54660). Sep 13 00:01:14.332000 sshd[4113]: Accepted publickey for core from 147.75.109.163 port 54660 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:14.334550 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:14.340370 systemd-logind[1457]: New session 17 of user core. Sep 13 00:01:14.346213 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:01:15.665453 sshd[4113]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:15.669239 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:01:15.670134 systemd[1]: sshd@17-91.99.3.235:22-147.75.109.163:54660.service: Deactivated successfully. Sep 13 00:01:15.674282 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:01:15.676856 systemd-logind[1457]: Removed session 17. Sep 13 00:01:15.849325 systemd[1]: Started sshd@18-91.99.3.235:22-147.75.109.163:54670.service - OpenSSH per-connection server daemon (147.75.109.163:54670). Sep 13 00:01:16.844370 sshd[4131]: Accepted publickey for core from 147.75.109.163 port 54670 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:16.846639 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:16.851940 systemd-logind[1457]: New session 18 of user core. Sep 13 00:01:16.858265 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:01:17.723459 sshd[4131]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:17.730001 systemd[1]: sshd@18-91.99.3.235:22-147.75.109.163:54670.service: Deactivated successfully. Sep 13 00:01:17.735402 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:01:17.736795 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:01:17.738527 systemd-logind[1457]: Removed session 18. Sep 13 00:01:17.901515 systemd[1]: Started sshd@19-91.99.3.235:22-147.75.109.163:54682.service - OpenSSH per-connection server daemon (147.75.109.163:54682). Sep 13 00:01:18.880982 sshd[4142]: Accepted publickey for core from 147.75.109.163 port 54682 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:18.883864 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:18.889421 systemd-logind[1457]: New session 19 of user core. Sep 13 00:01:18.895083 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 13 00:01:19.634135 sshd[4142]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:19.639825 systemd[1]: sshd@19-91.99.3.235:22-147.75.109.163:54682.service: Deactivated successfully. Sep 13 00:01:19.643435 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:01:19.645565 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:01:19.647465 systemd-logind[1457]: Removed session 19. Sep 13 00:01:24.812534 systemd[1]: Started sshd@20-91.99.3.235:22-147.75.109.163:60538.service - OpenSSH per-connection server daemon (147.75.109.163:60538). Sep 13 00:01:25.803027 sshd[4157]: Accepted publickey for core from 147.75.109.163 port 60538 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:25.805007 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:25.813357 systemd-logind[1457]: New session 20 of user core. Sep 13 00:01:25.819184 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:01:26.551721 sshd[4157]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:26.555603 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:01:26.556285 systemd[1]: sshd@20-91.99.3.235:22-147.75.109.163:60538.service: Deactivated successfully. Sep 13 00:01:26.559587 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:01:26.563648 systemd-logind[1457]: Removed session 20. Sep 13 00:01:31.733483 systemd[1]: Started sshd@21-91.99.3.235:22-147.75.109.163:54464.service - OpenSSH per-connection server daemon (147.75.109.163:54464). Sep 13 00:01:32.704388 sshd[4170]: Accepted publickey for core from 147.75.109.163 port 54464 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:32.706078 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:32.712127 systemd-logind[1457]: New session 21 of user core. Sep 13 00:01:32.722266 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:01:33.453448 sshd[4170]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:33.459325 systemd[1]: sshd@21-91.99.3.235:22-147.75.109.163:54464.service: Deactivated successfully. Sep 13 00:01:33.463126 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:01:33.467083 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:01:33.468423 systemd-logind[1457]: Removed session 21. Sep 13 00:01:38.637268 systemd[1]: Started sshd@22-91.99.3.235:22-147.75.109.163:54474.service - OpenSSH per-connection server daemon (147.75.109.163:54474). Sep 13 00:01:39.617540 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 54474 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:39.620421 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:39.629413 systemd-logind[1457]: New session 22 of user core. Sep 13 00:01:39.639256 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:01:40.365725 sshd[4182]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:40.370973 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:01:40.371745 systemd[1]: sshd@22-91.99.3.235:22-147.75.109.163:54474.service: Deactivated successfully. Sep 13 00:01:40.375154 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:01:40.376969 systemd-logind[1457]: Removed session 22. 
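Sessions 8 through 22 above all follow the same shape: a per-connection sshd@N-<local>:22-<peer> unit is started, pam_unix opens a session for core, and systemd-logind later removes it. A small sketch of pairing those records and computing session lifetimes from journal lines shaped like the ones above; the year, the fixed-width timestamp slice, and the regexes are assumptions about this particular log format, not a supported interface.

```python
import re
from datetime import datetime

# Two records in the shape seen above (year 2025 assumed, since journal lines omit it).
lines = [
    "Sep 13 00:00:45.839670 systemd-logind[1457]: New session 10 of user core.",
    "Sep 13 00:00:46.601730 systemd-logind[1457]: Removed session 10.",
]

opened, durations = {}, {}
for line in lines:
    ts = datetime.strptime("2025 " + line[:22], "%Y %b %d %H:%M:%S.%f")
    rest = line[23:]
    if m := re.search(r"New session (\d+) of user", rest):
        opened[m.group(1)] = ts
    elif m := re.search(r"Removed session (\d+)\.", rest):
        durations[m.group(1)] = (ts - opened.pop(m.group(1))).total_seconds()

print(durations)  # {'10': 0.76206} -- session lifetime in seconds
```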
Sep 13 00:01:40.544261 systemd[1]: Started sshd@23-91.99.3.235:22-147.75.109.163:60104.service - OpenSSH per-connection server daemon (147.75.109.163:60104). Sep 13 00:01:41.532388 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 60104 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:41.534419 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:41.540306 systemd-logind[1457]: New session 23 of user core. Sep 13 00:01:41.547225 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:01:44.368553 containerd[1475]: time="2025-09-13T00:01:44.368506829Z" level=info msg="StopContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" with timeout 30 (s)" Sep 13 00:01:44.373531 containerd[1475]: time="2025-09-13T00:01:44.371502768Z" level=info msg="Stop container \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" with signal terminated" Sep 13 00:01:44.388381 systemd[1]: cri-containerd-5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435.scope: Deactivated successfully. Sep 13 00:01:44.400240 containerd[1475]: time="2025-09-13T00:01:44.400139824Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:01:44.409456 containerd[1475]: time="2025-09-13T00:01:44.409246920Z" level=info msg="StopContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" with timeout 2 (s)" Sep 13 00:01:44.412563 containerd[1475]: time="2025-09-13T00:01:44.410082805Z" level=info msg="Stop container \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" with signal terminated" Sep 13 00:01:44.418525 systemd-networkd[1367]: lxc_health: Link DOWN Sep 13 00:01:44.419863 systemd-networkd[1367]: lxc_health: Lost carrier Sep 13 00:01:44.441718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435-rootfs.mount: Deactivated successfully. Sep 13 00:01:44.443727 systemd[1]: cri-containerd-589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd.scope: Deactivated successfully. Sep 13 00:01:44.444008 systemd[1]: cri-containerd-589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd.scope: Consumed 7.568s CPU time. Sep 13 00:01:44.456582 containerd[1475]: time="2025-09-13T00:01:44.456498330Z" level=info msg="shim disconnected" id=5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435 namespace=k8s.io Sep 13 00:01:44.456582 containerd[1475]: time="2025-09-13T00:01:44.456556570Z" level=warning msg="cleaning up after shim disconnected" id=5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435 namespace=k8s.io Sep 13 00:01:44.456582 containerd[1475]: time="2025-09-13T00:01:44.456566050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:44.468300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd-rootfs.mount: Deactivated successfully. 
Sep 13 00:01:44.473232 containerd[1475]: time="2025-09-13T00:01:44.473117512Z" level=info msg="shim disconnected" id=589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd namespace=k8s.io Sep 13 00:01:44.473232 containerd[1475]: time="2025-09-13T00:01:44.473200192Z" level=warning msg="cleaning up after shim disconnected" id=589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd namespace=k8s.io Sep 13 00:01:44.473232 containerd[1475]: time="2025-09-13T00:01:44.473209272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:44.485731 containerd[1475]: time="2025-09-13T00:01:44.485684949Z" level=info msg="StopContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" returns successfully" Sep 13 00:01:44.486451 containerd[1475]: time="2025-09-13T00:01:44.486425393Z" level=info msg="StopPodSandbox for \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\"" Sep 13 00:01:44.486539 containerd[1475]: time="2025-09-13T00:01:44.486465394Z" level=info msg="Container to stop \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.488587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656-shm.mount: Deactivated successfully. Sep 13 00:01:44.508657 containerd[1475]: time="2025-09-13T00:01:44.508206407Z" level=info msg="StopContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" returns successfully" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509462975Z" level=info msg="StopPodSandbox for \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\"" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509520295Z" level=info msg="Container to stop \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509533695Z" level=info msg="Container to stop \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509547255Z" level=info msg="Container to stop \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509560015Z" level=info msg="Container to stop \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.511760 containerd[1475]: time="2025-09-13T00:01:44.509582976Z" level=info msg="Container to stop \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:01:44.513648 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96-shm.mount: Deactivated successfully. Sep 13 00:01:44.518135 systemd[1]: cri-containerd-8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656.scope: Deactivated successfully. Sep 13 00:01:44.529076 systemd[1]: cri-containerd-3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96.scope: Deactivated successfully. 
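The StopPodSandbox requests above enumerate every container that belonged to each sandbox through the repeated "Container to stop ... must be in running or unknown state" messages. A sketch of recovering that sandbox-to-container map from messages of this shape; the container IDs are shortened to their first 12 hex digits, and the parsing is an assumption about the msg layout shown above rather than a containerd API.

```python
import re
from collections import defaultdict

# msg= payloads in the shape of the StopPodSandbox sequence above.
msgs = [
    'StopPodSandbox for "3cac56b2c889"',
    'Container to stop "589dd90a0151" must be in running or unknown state, current state "CONTAINER_EXITED"',
    'Container to stop "f40b79c00500" must be in running or unknown state, current state "CONTAINER_EXITED"',
]

sandbox_containers = defaultdict(list)
sandbox = None
for msg in msgs:
    if m := re.match(r'StopPodSandbox for "([0-9a-f]+)"', msg):
        sandbox = m.group(1)                     # remember the sandbox being stopped
    elif sandbox and (m := re.match(r'Container to stop "([0-9a-f]+)"', msg)):
        sandbox_containers[sandbox].append(m.group(1))

print(dict(sandbox_containers))
# {'3cac56b2c889': ['589dd90a0151', 'f40b79c00500']}
```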
Sep 13 00:01:44.562931 containerd[1475]: time="2025-09-13T00:01:44.562826023Z" level=info msg="shim disconnected" id=8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656 namespace=k8s.io Sep 13 00:01:44.562931 containerd[1475]: time="2025-09-13T00:01:44.562907063Z" level=warning msg="cleaning up after shim disconnected" id=8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656 namespace=k8s.io Sep 13 00:01:44.562931 containerd[1475]: time="2025-09-13T00:01:44.562918023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:44.566961 containerd[1475]: time="2025-09-13T00:01:44.566681646Z" level=info msg="shim disconnected" id=3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96 namespace=k8s.io Sep 13 00:01:44.566961 containerd[1475]: time="2025-09-13T00:01:44.566735527Z" level=warning msg="cleaning up after shim disconnected" id=3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96 namespace=k8s.io Sep 13 00:01:44.566961 containerd[1475]: time="2025-09-13T00:01:44.566744087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:44.580328 containerd[1475]: time="2025-09-13T00:01:44.580140529Z" level=info msg="TearDown network for sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" successfully" Sep 13 00:01:44.580328 containerd[1475]: time="2025-09-13T00:01:44.580174689Z" level=info msg="StopPodSandbox for \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" returns successfully" Sep 13 00:01:44.591396 containerd[1475]: time="2025-09-13T00:01:44.591298677Z" level=info msg="TearDown network for sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" successfully" Sep 13 00:01:44.591396 containerd[1475]: time="2025-09-13T00:01:44.591340358Z" level=info msg="StopPodSandbox for \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" returns successfully" Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725103 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-config-path\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725148 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-net\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725174 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hostproc\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725204 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-clustermesh-secrets\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725224 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-run\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725249 kubelet[2600]: I0913 00:01:44.725241 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-kernel\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725258 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5682df3d-1839-4d78-99fa-818280ce56bc-cilium-config-path\") pod \"5682df3d-1839-4d78-99fa-818280ce56bc\" (UID: \"5682df3d-1839-4d78-99fa-818280ce56bc\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725274 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-bpf-maps\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725291 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-xtables-lock\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725305 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-lib-modules\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725320 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-cgroup\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725715 kubelet[2600]: I0913 00:01:44.725339 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tkj4\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-kube-api-access-2tkj4\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725848 kubelet[2600]: I0913 00:01:44.725357 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cni-path\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725848 kubelet[2600]: I0913 00:01:44.725374 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hubble-tls\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725848 kubelet[2600]: I0913 00:01:44.725391 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ffgk\" (UniqueName: 
\"kubernetes.io/projected/5682df3d-1839-4d78-99fa-818280ce56bc-kube-api-access-6ffgk\") pod \"5682df3d-1839-4d78-99fa-818280ce56bc\" (UID: \"5682df3d-1839-4d78-99fa-818280ce56bc\") " Sep 13 00:01:44.725848 kubelet[2600]: I0913 00:01:44.725407 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-etc-cni-netd\") pod \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\" (UID: \"c12370c1-e49b-422c-b1a2-c03ba3fa0ad7\") " Sep 13 00:01:44.725848 kubelet[2600]: I0913 00:01:44.725479 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.730907 kubelet[2600]: I0913 00:01:44.728071 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.730907 kubelet[2600]: I0913 00:01:44.728124 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.730907 kubelet[2600]: I0913 00:01:44.728140 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.730907 kubelet[2600]: I0913 00:01:44.728579 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:01:44.730907 kubelet[2600]: I0913 00:01:44.728637 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.731250 kubelet[2600]: I0913 00:01:44.728653 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.731250 kubelet[2600]: I0913 00:01:44.728682 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.731250 kubelet[2600]: I0913 00:01:44.731145 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.731326 kubelet[2600]: I0913 00:01:44.731254 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.731672 kubelet[2600]: I0913 00:01:44.731399 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:01:44.739372 kubelet[2600]: I0913 00:01:44.737636 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:01:44.742115 kubelet[2600]: I0913 00:01:44.742057 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5682df3d-1839-4d78-99fa-818280ce56bc-kube-api-access-6ffgk" (OuterVolumeSpecName: "kube-api-access-6ffgk") pod "5682df3d-1839-4d78-99fa-818280ce56bc" (UID: "5682df3d-1839-4d78-99fa-818280ce56bc"). InnerVolumeSpecName "kube-api-access-6ffgk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:01:44.743430 kubelet[2600]: I0913 00:01:44.743379 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5682df3d-1839-4d78-99fa-818280ce56bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5682df3d-1839-4d78-99fa-818280ce56bc" (UID: "5682df3d-1839-4d78-99fa-818280ce56bc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:01:44.744415 kubelet[2600]: I0913 00:01:44.744385 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:01:44.744703 kubelet[2600]: I0913 00:01:44.744664 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-kube-api-access-2tkj4" (OuterVolumeSpecName: "kube-api-access-2tkj4") pod "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" (UID: "c12370c1-e49b-422c-b1a2-c03ba3fa0ad7"). InnerVolumeSpecName "kube-api-access-2tkj4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826274 2600 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-lib-modules\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826335 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-cgroup\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826361 2600 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tkj4\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-kube-api-access-2tkj4\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826382 2600 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cni-path\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826416 2600 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hubble-tls\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826436 2600 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ffgk\" (UniqueName: \"kubernetes.io/projected/5682df3d-1839-4d78-99fa-818280ce56bc-kube-api-access-6ffgk\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826455 2600 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-etc-cni-netd\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.826484 kubelet[2600]: I0913 00:01:44.826476 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-config-path\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826496 2600 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-net\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826517 2600 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-hostproc\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826537 2600 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-clustermesh-secrets\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826555 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-cilium-run\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826573 2600 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826592 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5682df3d-1839-4d78-99fa-818280ce56bc-cilium-config-path\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826610 2600 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-bpf-maps\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:44.827129 kubelet[2600]: I0913 00:01:44.826629 2600 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7-xtables-lock\") on node \"ci-4081-3-5-n-44c5618783\" DevicePath \"\"" Sep 13 00:01:45.023300 kubelet[2600]: I0913 00:01:45.023133 2600 scope.go:117] "RemoveContainer" containerID="5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435" Sep 13 00:01:45.029796 containerd[1475]: time="2025-09-13T00:01:45.029701571Z" level=info msg="RemoveContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\"" Sep 13 00:01:45.037675 containerd[1475]: time="2025-09-13T00:01:45.037631220Z" level=info msg="RemoveContainer for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" returns successfully" Sep 13 00:01:45.038571 kubelet[2600]: I0913 00:01:45.038543 2600 scope.go:117] "RemoveContainer" containerID="5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435" Sep 13 00:01:45.040421 systemd[1]: Removed slice kubepods-besteffort-pod5682df3d_1839_4d78_99fa_818280ce56bc.slice - libcontainer container kubepods-besteffort-pod5682df3d_1839_4d78_99fa_818280ce56bc.slice. 
Sep 13 00:01:45.042803 containerd[1475]: time="2025-09-13T00:01:45.041078961Z" level=error msg="ContainerStatus for \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\": not found" Sep 13 00:01:45.042934 kubelet[2600]: E0913 00:01:45.042005 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\": not found" containerID="5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435" Sep 13 00:01:45.042934 kubelet[2600]: I0913 00:01:45.042037 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435"} err="failed to get container status \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bb54d37380ba0ba875f6bb0f89f4538437be1dac7bcc96c3b8ecf815c760435\": not found" Sep 13 00:01:45.042934 kubelet[2600]: I0913 00:01:45.042150 2600 scope.go:117] "RemoveContainer" containerID="589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd" Sep 13 00:01:45.046235 containerd[1475]: time="2025-09-13T00:01:45.046166472Z" level=info msg="RemoveContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\"" Sep 13 00:01:45.055002 systemd[1]: Removed slice kubepods-burstable-podc12370c1_e49b_422c_b1a2_c03ba3fa0ad7.slice - libcontainer container kubepods-burstable-podc12370c1_e49b_422c_b1a2_c03ba3fa0ad7.slice. Sep 13 00:01:45.055199 systemd[1]: kubepods-burstable-podc12370c1_e49b_422c_b1a2_c03ba3fa0ad7.slice: Consumed 7.654s CPU time. 
Sep 13 00:01:45.060242 containerd[1475]: time="2025-09-13T00:01:45.060192399Z" level=info msg="RemoveContainer for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" returns successfully" Sep 13 00:01:45.060925 kubelet[2600]: I0913 00:01:45.060888 2600 scope.go:117] "RemoveContainer" containerID="55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2" Sep 13 00:01:45.062482 containerd[1475]: time="2025-09-13T00:01:45.062450053Z" level=info msg="RemoveContainer for \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\"" Sep 13 00:01:45.067329 containerd[1475]: time="2025-09-13T00:01:45.066708439Z" level=info msg="RemoveContainer for \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\" returns successfully" Sep 13 00:01:45.067719 kubelet[2600]: I0913 00:01:45.067682 2600 scope.go:117] "RemoveContainer" containerID="fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae" Sep 13 00:01:45.074145 containerd[1475]: time="2025-09-13T00:01:45.073515802Z" level=info msg="RemoveContainer for \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\"" Sep 13 00:01:45.082784 containerd[1475]: time="2025-09-13T00:01:45.082495537Z" level=info msg="RemoveContainer for \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\" returns successfully" Sep 13 00:01:45.084313 kubelet[2600]: I0913 00:01:45.084111 2600 scope.go:117] "RemoveContainer" containerID="f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549" Sep 13 00:01:45.088724 containerd[1475]: time="2025-09-13T00:01:45.088688335Z" level=info msg="RemoveContainer for \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\"" Sep 13 00:01:45.094493 containerd[1475]: time="2025-09-13T00:01:45.094422171Z" level=info msg="RemoveContainer for \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\" returns successfully" Sep 13 00:01:45.094927 kubelet[2600]: I0913 00:01:45.094902 2600 scope.go:117] "RemoveContainer" containerID="ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479" Sep 13 00:01:45.099590 containerd[1475]: time="2025-09-13T00:01:45.099548443Z" level=info msg="RemoveContainer for \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\"" Sep 13 00:01:45.104556 containerd[1475]: time="2025-09-13T00:01:45.104503113Z" level=info msg="RemoveContainer for \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\" returns successfully" Sep 13 00:01:45.104918 kubelet[2600]: I0913 00:01:45.104894 2600 scope.go:117] "RemoveContainer" containerID="589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd" Sep 13 00:01:45.105553 containerd[1475]: time="2025-09-13T00:01:45.105519799Z" level=error msg="ContainerStatus for \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\": not found" Sep 13 00:01:45.105818 kubelet[2600]: E0913 00:01:45.105761 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\": not found" containerID="589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd" Sep 13 00:01:45.105953 kubelet[2600]: I0913 00:01:45.105926 2600 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd"} err="failed to get container status \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"589dd90a0151472c9a43841c2b70a06d4f11da3527d4391ce3e55f0c05fd9cfd\": not found" Sep 13 00:01:45.106035 kubelet[2600]: I0913 00:01:45.106022 2600 scope.go:117] "RemoveContainer" containerID="55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2" Sep 13 00:01:45.106347 containerd[1475]: time="2025-09-13T00:01:45.106315484Z" level=error msg="ContainerStatus for \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\": not found" Sep 13 00:01:45.106695 kubelet[2600]: E0913 00:01:45.106597 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\": not found" containerID="55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2" Sep 13 00:01:45.106695 kubelet[2600]: I0913 00:01:45.106620 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2"} err="failed to get container status \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"55545c7736575e910a57c7b9297a95973b6e321a8f0b9031af9f051a54ff90a2\": not found" Sep 13 00:01:45.106695 kubelet[2600]: I0913 00:01:45.106636 2600 scope.go:117] "RemoveContainer" containerID="fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae" Sep 13 00:01:45.107243 containerd[1475]: time="2025-09-13T00:01:45.106993489Z" level=error msg="ContainerStatus for \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\": not found" Sep 13 00:01:45.107327 kubelet[2600]: E0913 00:01:45.107116 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\": not found" containerID="fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae" Sep 13 00:01:45.107327 kubelet[2600]: I0913 00:01:45.107137 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae"} err="failed to get container status \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"fef082f80a33c8c0b2e40e9ad6ba0891dc03fabd0c77aa05f332f42e07a334ae\": not found" Sep 13 00:01:45.107327 kubelet[2600]: I0913 00:01:45.107154 2600 scope.go:117] "RemoveContainer" containerID="f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549" Sep 13 00:01:45.107442 containerd[1475]: time="2025-09-13T00:01:45.107357931Z" level=error msg="ContainerStatus for \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\": not found" Sep 13 00:01:45.107718 kubelet[2600]: E0913 00:01:45.107552 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\": not found" containerID="f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549" Sep 13 00:01:45.107718 kubelet[2600]: I0913 00:01:45.107573 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549"} err="failed to get container status \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\": rpc error: code = NotFound desc = an error occurred when try to find container \"f40b79c005006120343dfd227989d3fb5b096bc9b93015743be7f82eff527549\": not found" Sep 13 00:01:45.107718 kubelet[2600]: I0913 00:01:45.107586 2600 scope.go:117] "RemoveContainer" containerID="ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479" Sep 13 00:01:45.108009 kubelet[2600]: E0913 00:01:45.107851 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\": not found" containerID="ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479" Sep 13 00:01:45.108048 containerd[1475]: time="2025-09-13T00:01:45.107720653Z" level=error msg="ContainerStatus for \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\": not found" Sep 13 00:01:45.108115 kubelet[2600]: I0913 00:01:45.107869 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479"} err="failed to get container status \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba380942eab0f880a3bb5f48fe3a804d907842575727324ac003b0afefd84479\": not found" Sep 13 00:01:45.379551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656-rootfs.mount: Deactivated successfully. Sep 13 00:01:45.379692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96-rootfs.mount: Deactivated successfully. Sep 13 00:01:45.379778 systemd[1]: var-lib-kubelet-pods-5682df3d\x2d1839\x2d4d78\x2d99fa\x2d818280ce56bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ffgk.mount: Deactivated successfully. Sep 13 00:01:45.379897 systemd[1]: var-lib-kubelet-pods-c12370c1\x2de49b\x2d422c\x2db1a2\x2dc03ba3fa0ad7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2tkj4.mount: Deactivated successfully. Sep 13 00:01:45.379995 systemd[1]: var-lib-kubelet-pods-c12370c1\x2de49b\x2d422c\x2db1a2\x2dc03ba3fa0ad7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:01:45.380083 systemd[1]: var-lib-kubelet-pods-c12370c1\x2de49b\x2d422c\x2db1a2\x2dc03ba3fa0ad7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:01:45.386610 kubelet[2600]: I0913 00:01:45.385678 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5682df3d-1839-4d78-99fa-818280ce56bc" path="/var/lib/kubelet/pods/5682df3d-1839-4d78-99fa-818280ce56bc/volumes" Sep 13 00:01:45.386610 kubelet[2600]: I0913 00:01:45.386107 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" path="/var/lib/kubelet/pods/c12370c1-e49b-422c-b1a2-c03ba3fa0ad7/volumes" Sep 13 00:01:45.524369 kubelet[2600]: E0913 00:01:45.524284 2600 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:01:46.473166 sshd[4195]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:46.478551 systemd[1]: sshd@23-91.99.3.235:22-147.75.109.163:60104.service: Deactivated successfully. Sep 13 00:01:46.483574 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:01:46.483728 systemd[1]: session-23.scope: Consumed 1.671s CPU time. Sep 13 00:01:46.486801 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:01:46.490010 systemd-logind[1457]: Removed session 23. Sep 13 00:01:46.646496 systemd[1]: Started sshd@24-91.99.3.235:22-147.75.109.163:60110.service - OpenSSH per-connection server daemon (147.75.109.163:60110). Sep 13 00:01:47.653088 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 60110 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:47.656028 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:47.664731 systemd-logind[1457]: New session 24 of user core. Sep 13 00:01:47.666317 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:01:49.015756 kubelet[2600]: I0913 00:01:49.015617 2600 setters.go:602] "Node became not ready" node="ci-4081-3-5-n-44c5618783" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:01:49Z","lastTransitionTime":"2025-09-13T00:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:01:49.368471 kubelet[2600]: I0913 00:01:49.368426 2600 memory_manager.go:355] "RemoveStaleState removing state" podUID="5682df3d-1839-4d78-99fa-818280ce56bc" containerName="cilium-operator" Sep 13 00:01:49.368471 kubelet[2600]: I0913 00:01:49.368458 2600 memory_manager.go:355] "RemoveStaleState removing state" podUID="c12370c1-e49b-422c-b1a2-c03ba3fa0ad7" containerName="cilium-agent" Sep 13 00:01:49.377864 systemd[1]: Created slice kubepods-burstable-pod5a4ef2dc_70ae_4238_8f71_33837b31931d.slice - libcontainer container kubepods-burstable-pod5a4ef2dc_70ae_4238_8f71_33837b31931d.slice. Sep 13 00:01:49.553317 sshd[4365]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:49.558524 systemd[1]: sshd@24-91.99.3.235:22-147.75.109.163:60110.service: Deactivated successfully. 
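The setters.go record above embeds the new node condition as a literal JSON object, so it can be lifted straight out of the line and inspected; the snippet below simply re-parses the payload copied from that record.

```python
import json

# Condition payload copied verbatim from the "Node became not ready" record above.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2025-09-13T00:01:49Z",'
    '"lastTransitionTime":"2025-09-13T00:01:49Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)

print(condition["reason"], "-", condition["message"].split(":")[-1].strip())
# KubeletNotReady - cni plugin not initialized
```

The reason matches the "cni plugin not initialized" errors elsewhere in this log, consistent with the old cilium agent pod having just been torn down while its replacement (cilium-xb77k, created in the following records) has not started yet.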
Sep 13 00:01:49.561527 kubelet[2600]: I0913 00:01:49.561473 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-lib-modules\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561632 kubelet[2600]: I0913 00:01:49.561534 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a4ef2dc-70ae-4238-8f71-33837b31931d-cilium-ipsec-secrets\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561632 kubelet[2600]: I0913 00:01:49.561558 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-bpf-maps\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561632 kubelet[2600]: I0913 00:01:49.561581 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-cni-path\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561632 kubelet[2600]: I0913 00:01:49.561611 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcxg\" (UniqueName: \"kubernetes.io/projected/5a4ef2dc-70ae-4238-8f71-33837b31931d-kube-api-access-wvcxg\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561748 kubelet[2600]: I0913 00:01:49.561633 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a4ef2dc-70ae-4238-8f71-33837b31931d-clustermesh-secrets\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561748 kubelet[2600]: I0913 00:01:49.561655 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-host-proc-sys-kernel\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561748 kubelet[2600]: I0913 00:01:49.561679 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a4ef2dc-70ae-4238-8f71-33837b31931d-cilium-config-path\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561748 kubelet[2600]: I0913 00:01:49.561701 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-host-proc-sys-net\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561748 kubelet[2600]: I0913 00:01:49.561721 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-hostproc\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561853 kubelet[2600]: I0913 00:01:49.561740 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-cilium-cgroup\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561853 kubelet[2600]: I0913 00:01:49.561762 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-cilium-run\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561853 kubelet[2600]: I0913 00:01:49.561783 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-etc-cni-netd\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.561853 kubelet[2600]: I0913 00:01:49.561803 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a4ef2dc-70ae-4238-8f71-33837b31931d-xtables-lock\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.562421 kubelet[2600]: I0913 00:01:49.561822 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a4ef2dc-70ae-4238-8f71-33837b31931d-hubble-tls\") pod \"cilium-xb77k\" (UID: \"5a4ef2dc-70ae-4238-8f71-33837b31931d\") " pod="kube-system/cilium-xb77k" Sep 13 00:01:49.562833 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:01:49.563517 systemd[1]: session-24.scope: Consumed 1.084s CPU time. Sep 13 00:01:49.564262 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:01:49.566032 systemd-logind[1457]: Removed session 24. Sep 13 00:01:49.728458 systemd[1]: Started sshd@25-91.99.3.235:22-147.75.109.163:60122.service - OpenSSH per-connection server daemon (147.75.109.163:60122). Sep 13 00:01:49.985193 containerd[1475]: time="2025-09-13T00:01:49.985009848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xb77k,Uid:5a4ef2dc-70ae-4238-8f71-33837b31931d,Namespace:kube-system,Attempt:0,}" Sep 13 00:01:50.012374 containerd[1475]: time="2025-09-13T00:01:50.012213461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:50.012374 containerd[1475]: time="2025-09-13T00:01:50.012328821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:50.012619 containerd[1475]: time="2025-09-13T00:01:50.012344021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:50.012619 containerd[1475]: time="2025-09-13T00:01:50.012465182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:50.032241 systemd[1]: Started cri-containerd-f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3.scope - libcontainer container f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3. Sep 13 00:01:50.059469 containerd[1475]: time="2025-09-13T00:01:50.059427122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xb77k,Uid:5a4ef2dc-70ae-4238-8f71-33837b31931d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\"" Sep 13 00:01:50.066416 containerd[1475]: time="2025-09-13T00:01:50.066267566Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:01:50.078909 containerd[1475]: time="2025-09-13T00:01:50.078829446Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338\"" Sep 13 00:01:50.079684 containerd[1475]: time="2025-09-13T00:01:50.079654891Z" level=info msg="StartContainer for \"4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338\"" Sep 13 00:01:50.106069 systemd[1]: Started cri-containerd-4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338.scope - libcontainer container 4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338. Sep 13 00:01:50.132954 containerd[1475]: time="2025-09-13T00:01:50.132775950Z" level=info msg="StartContainer for \"4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338\" returns successfully" Sep 13 00:01:50.145404 systemd[1]: cri-containerd-4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338.scope: Deactivated successfully. Sep 13 00:01:50.176225 containerd[1475]: time="2025-09-13T00:01:50.176017306Z" level=info msg="shim disconnected" id=4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338 namespace=k8s.io Sep 13 00:01:50.176739 containerd[1475]: time="2025-09-13T00:01:50.176158267Z" level=warning msg="cleaning up after shim disconnected" id=4f7ada97b2550530bf92d729faa28635927a487b27bebd3affe529f040667338 namespace=k8s.io Sep 13 00:01:50.176739 containerd[1475]: time="2025-09-13T00:01:50.176571430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:50.525949 kubelet[2600]: E0913 00:01:50.525821 2600 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:01:50.713786 sshd[4384]: Accepted publickey for core from 147.75.109.163 port 60122 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:50.716826 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:50.722969 systemd-logind[1457]: New session 25 of user core. Sep 13 00:01:50.726076 systemd[1]: Started session-25.scope - Session 25 of User core. 
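The cilium-xb77k startup above follows the standard CRI call order: RunPodSandbox returns the sandbox ID f32f2b14..., then the first init container (mount-cgroup) is created inside that sandbox and started. The sketch below mirrors that three-call sequence; the pod metadata is taken from the log, but the image name and command are placeholders, and the client is assumed to be the same kind of RuntimeServiceClient as in the earlier sketch.

// cri_run_sequence.go -- sketch of the RunPodSandbox / CreateContainer /
// StartContainer sequence seen above for cilium-xb77k (illustrative only).
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer mirrors the call order in the log: sandbox first, then the
// mount-cgroup init container inside it. Image and command are placeholders.
func startInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-xb77k",
			Uid:       "5a4ef2dc-70ae-4238-8f71-33837b31931d",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"},
			Command:  []string{"/bin/true"}, // placeholder command
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}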
Sep 13 00:01:51.063441 containerd[1475]: time="2025-09-13T00:01:51.063336091Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:01:51.076929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864054069.mount: Deactivated successfully. Sep 13 00:01:51.079767 containerd[1475]: time="2025-09-13T00:01:51.079693156Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc\"" Sep 13 00:01:51.081075 containerd[1475]: time="2025-09-13T00:01:51.081013404Z" level=info msg="StartContainer for \"31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc\"" Sep 13 00:01:51.117409 systemd[1]: Started cri-containerd-31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc.scope - libcontainer container 31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc. Sep 13 00:01:51.142489 containerd[1475]: time="2025-09-13T00:01:51.142432638Z" level=info msg="StartContainer for \"31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc\" returns successfully" Sep 13 00:01:51.150796 systemd[1]: cri-containerd-31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc.scope: Deactivated successfully. Sep 13 00:01:51.179923 containerd[1475]: time="2025-09-13T00:01:51.179851278Z" level=info msg="shim disconnected" id=31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc namespace=k8s.io Sep 13 00:01:51.179923 containerd[1475]: time="2025-09-13T00:01:51.179916719Z" level=warning msg="cleaning up after shim disconnected" id=31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc namespace=k8s.io Sep 13 00:01:51.179923 containerd[1475]: time="2025-09-13T00:01:51.179924519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:51.392839 sshd[4384]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:51.398599 systemd[1]: sshd@25-91.99.3.235:22-147.75.109.163:60122.service: Deactivated successfully. Sep 13 00:01:51.400843 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:01:51.401781 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:01:51.403432 systemd-logind[1457]: Removed session 25. Sep 13 00:01:51.571962 systemd[1]: Started sshd@26-91.99.3.235:22-147.75.109.163:35202.service - OpenSSH per-connection server daemon (147.75.109.163:35202). Sep 13 00:01:51.670675 systemd[1]: run-containerd-runc-k8s.io-31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc-runc.urqeOO.mount: Deactivated successfully. Sep 13 00:01:51.670842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31a32443d672f0f7bdbecc702f61210411f04c1a4ca2e963a7a1ecaf467fc9bc-rootfs.mount: Deactivated successfully. 
Sep 13 00:01:52.070273 containerd[1475]: time="2025-09-13T00:01:52.069959035Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:01:52.101944 containerd[1475]: time="2025-09-13T00:01:52.099535746Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca\"" Sep 13 00:01:52.101944 containerd[1475]: time="2025-09-13T00:01:52.101421478Z" level=info msg="StartContainer for \"6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca\"" Sep 13 00:01:52.134137 systemd[1]: Started cri-containerd-6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca.scope - libcontainer container 6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca. Sep 13 00:01:52.165627 systemd[1]: cri-containerd-6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca.scope: Deactivated successfully. Sep 13 00:01:52.167946 containerd[1475]: time="2025-09-13T00:01:52.167193023Z" level=info msg="StartContainer for \"6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca\" returns successfully" Sep 13 00:01:52.195267 containerd[1475]: time="2025-09-13T00:01:52.195107283Z" level=info msg="shim disconnected" id=6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca namespace=k8s.io Sep 13 00:01:52.195506 containerd[1475]: time="2025-09-13T00:01:52.195487926Z" level=warning msg="cleaning up after shim disconnected" id=6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca namespace=k8s.io Sep 13 00:01:52.195574 containerd[1475]: time="2025-09-13T00:01:52.195561326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:52.564257 sshd[4556]: Accepted publickey for core from 147.75.109.163 port 35202 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:01:52.565825 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:52.570536 systemd-logind[1457]: New session 26 of user core. Sep 13 00:01:52.576109 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:01:52.671411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cbbc2db23086e8ac13b22cdf18fda8f734431550a938636e6233e3d765d23ca-rootfs.mount: Deactivated successfully. Sep 13 00:01:53.074943 containerd[1475]: time="2025-09-13T00:01:53.074894727Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:01:53.099162 containerd[1475]: time="2025-09-13T00:01:53.097771755Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5\"" Sep 13 00:01:53.100382 containerd[1475]: time="2025-09-13T00:01:53.099991449Z" level=info msg="StartContainer for \"d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5\"" Sep 13 00:01:53.142429 systemd[1]: Started cri-containerd-d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5.scope - libcontainer container d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5. 
Sep 13 00:01:53.171253 containerd[1475]: time="2025-09-13T00:01:53.171186672Z" level=info msg="StartContainer for \"d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5\" returns successfully" Sep 13 00:01:53.173240 systemd[1]: cri-containerd-d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5.scope: Deactivated successfully. Sep 13 00:01:53.214759 containerd[1475]: time="2025-09-13T00:01:53.214606074Z" level=info msg="shim disconnected" id=d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5 namespace=k8s.io Sep 13 00:01:53.214759 containerd[1475]: time="2025-09-13T00:01:53.214750715Z" level=warning msg="cleaning up after shim disconnected" id=d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5 namespace=k8s.io Sep 13 00:01:53.214759 containerd[1475]: time="2025-09-13T00:01:53.214760035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:01:53.669989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2c292aa1aece9b15f6b58cff77376d1c714e7d3a0e81d67f0790d520ff969e5-rootfs.mount: Deactivated successfully. Sep 13 00:01:54.081136 containerd[1475]: time="2025-09-13T00:01:54.080766741Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:01:54.107910 containerd[1475]: time="2025-09-13T00:01:54.106795031Z" level=info msg="CreateContainer within sandbox \"f32f2b144b7afee6b5e8066e5febdffbdee0112507375da33d4df42c737b6ff3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9\"" Sep 13 00:01:54.109990 containerd[1475]: time="2025-09-13T00:01:54.108707324Z" level=info msg="StartContainer for \"15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9\"" Sep 13 00:01:54.169101 systemd[1]: Started cri-containerd-15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9.scope - libcontainer container 15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9. Sep 13 00:01:54.214621 containerd[1475]: time="2025-09-13T00:01:54.214566855Z" level=info msg="StartContainer for \"15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9\" returns successfully" Sep 13 00:01:54.534939 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 13 00:01:55.106752 kubelet[2600]: I0913 00:01:55.105953 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xb77k" podStartSLOduration=6.105934879 podStartE2EDuration="6.105934879s" podCreationTimestamp="2025-09-13 00:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:55.102743498 +0000 UTC m=+229.853909776" watchObservedRunningTime="2025-09-13 00:01:55.105934879 +0000 UTC m=+229.857101077" Sep 13 00:01:55.322475 systemd[1]: run-containerd-runc-k8s.io-15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9-runc.Jwl3lb.mount: Deactivated successfully. Sep 13 00:01:57.501623 systemd-networkd[1367]: lxc_health: Link UP Sep 13 00:01:57.509926 systemd[1]: run-containerd-runc-k8s.io-15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9-runc.XwiKfr.mount: Deactivated successfully. 
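The pod_startup_latency_tracker entry above can be sanity-checked from the values it prints: firstStartedPulling and lastFinishedPulling are the zero time (the image was already on the node), so the reported podStartSLOduration is simply the gap between the pod's creation timestamp and the time the kubelet observed it running:

    podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
                        = 00:01:55.105934879 - 00:01:49.000000000
                        = 6.105934879 s

That is consistent with the container entries above, where mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and cilium-agent each start roughly one second apart between 00:01:50 and 00:01:54.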
Sep 13 00:01:57.537974 systemd-networkd[1367]: lxc_health: Gained carrier Sep 13 00:01:59.243207 systemd-networkd[1367]: lxc_health: Gained IPv6LL Sep 13 00:01:59.700266 systemd[1]: run-containerd-runc-k8s.io-15ef43615d881e4ddd7e9f578b63fc515e238f2a48da95f09c221e91b16b5aa9-runc.DmVEJQ.mount: Deactivated successfully. Sep 13 00:02:01.372695 systemd[1]: Started sshd@27-91.99.3.235:22-193.46.255.7:36420.service - OpenSSH per-connection server daemon (193.46.255.7:36420). Sep 13 00:02:01.599192 sshd[5283]: Received disconnect from 193.46.255.7 port 36420:11: [preauth] Sep 13 00:02:01.599592 sshd[5283]: Disconnected from 193.46.255.7 port 36420 [preauth] Sep 13 00:02:01.601698 systemd[1]: sshd@27-91.99.3.235:22-193.46.255.7:36420.service: Deactivated successfully. Sep 13 00:02:04.255116 sshd[4556]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:04.260974 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:02:04.261728 systemd[1]: sshd@26-91.99.3.235:22-147.75.109.163:35202.service: Deactivated successfully. Sep 13 00:02:04.265446 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:02:04.269773 systemd-logind[1457]: Removed session 26. Sep 13 00:02:05.441739 containerd[1475]: time="2025-09-13T00:02:05.441631200Z" level=info msg="StopPodSandbox for \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\"" Sep 13 00:02:05.442690 containerd[1475]: time="2025-09-13T00:02:05.442327005Z" level=info msg="TearDown network for sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" successfully" Sep 13 00:02:05.442690 containerd[1475]: time="2025-09-13T00:02:05.442361325Z" level=info msg="StopPodSandbox for \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" returns successfully" Sep 13 00:02:05.443648 containerd[1475]: time="2025-09-13T00:02:05.443612174Z" level=info msg="RemovePodSandbox for \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\"" Sep 13 00:02:05.443753 containerd[1475]: time="2025-09-13T00:02:05.443653014Z" level=info msg="Forcibly stopping sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\"" Sep 13 00:02:05.443753 containerd[1475]: time="2025-09-13T00:02:05.443711334Z" level=info msg="TearDown network for sandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" successfully" Sep 13 00:02:05.448128 containerd[1475]: time="2025-09-13T00:02:05.448083365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:02:05.448346 containerd[1475]: time="2025-09-13T00:02:05.448201005Z" level=info msg="RemovePodSandbox \"3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96\" returns successfully" Sep 13 00:02:05.449472 containerd[1475]: time="2025-09-13T00:02:05.448853730Z" level=info msg="StopPodSandbox for \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\"" Sep 13 00:02:05.449472 containerd[1475]: time="2025-09-13T00:02:05.448959451Z" level=info msg="TearDown network for sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" successfully" Sep 13 00:02:05.449472 containerd[1475]: time="2025-09-13T00:02:05.448972051Z" level=info msg="StopPodSandbox for \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" returns successfully" Sep 13 00:02:05.449472 containerd[1475]: time="2025-09-13T00:02:05.449439894Z" level=info msg="RemovePodSandbox for \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\"" Sep 13 00:02:05.449851 containerd[1475]: time="2025-09-13T00:02:05.449717936Z" level=info msg="Forcibly stopping sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\"" Sep 13 00:02:05.449851 containerd[1475]: time="2025-09-13T00:02:05.449787736Z" level=info msg="TearDown network for sandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" successfully" Sep 13 00:02:05.453060 containerd[1475]: time="2025-09-13T00:02:05.452931918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:05.453060 containerd[1475]: time="2025-09-13T00:02:05.453005959Z" level=info msg="RemovePodSandbox \"8311843404af5e144f3d02466b40a93798d63a5f9755df66914b7d884a189656\" returns successfully" Sep 13 00:02:19.363459 systemd[1]: cri-containerd-db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf.scope: Deactivated successfully. Sep 13 00:02:19.364279 systemd[1]: cri-containerd-db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf.scope: Consumed 4.267s CPU time, 18.2M memory peak, 0B memory swap peak. Sep 13 00:02:19.388712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf-rootfs.mount: Deactivated successfully. Sep 13 00:02:19.407179 containerd[1475]: time="2025-09-13T00:02:19.406868664Z" level=info msg="shim disconnected" id=db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf namespace=k8s.io Sep 13 00:02:19.407179 containerd[1475]: time="2025-09-13T00:02:19.406984425Z" level=warning msg="cleaning up after shim disconnected" id=db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf namespace=k8s.io Sep 13 00:02:19.407179 containerd[1475]: time="2025-09-13T00:02:19.406995705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:19.791014 kubelet[2600]: E0913 00:02:19.789974 2600 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48790->10.0.0.2:2379: read: connection timed out" Sep 13 00:02:19.802106 systemd[1]: cri-containerd-294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1.scope: Deactivated successfully. 
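The StopPodSandbox / TearDown / RemovePodSandbox entries above are most likely the kubelet's sandbox garbage collection clearing out the two sandboxes (3cac56b2..., 8311...) left behind by the pods deleted at 00:01:45; the "Failed to get podSandbox status ... not found" warning is containerd noting the sandbox is already gone before it acknowledges removal. A minimal sketch of the two CRI calls involved is below, reusing the same kind of client as in the earlier sketches; the sandbox ID comes from the log, everything else is illustrative.

// cri_remove_sandbox.go -- sketch of the StopPodSandbox + RemovePodSandbox calls
// behind the "Forcibly stopping sandbox" entries above (illustrative only).
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	// Per the CRI contract, stopping an already-stopped or missing sandbox is not an error,
	// which is why the removal above still "returns successfully" after the not-found warning.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	return err
}

// Usage (sandbox ID from the log above):
//   removeSandbox(ctx, rt, "3cac56b2c88983198c4d3ec661ec65e46adbea24ef7754fa14f64a1e37b12e96")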
Sep 13 00:02:19.803264 systemd[1]: cri-containerd-294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1.scope: Consumed 4.178s CPU time, 16.2M memory peak, 0B memory swap peak. Sep 13 00:02:19.846217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1-rootfs.mount: Deactivated successfully. Sep 13 00:02:19.848659 containerd[1475]: time="2025-09-13T00:02:19.848328635Z" level=info msg="shim disconnected" id=294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1 namespace=k8s.io Sep 13 00:02:19.848659 containerd[1475]: time="2025-09-13T00:02:19.848526996Z" level=warning msg="cleaning up after shim disconnected" id=294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1 namespace=k8s.io Sep 13 00:02:19.848659 containerd[1475]: time="2025-09-13T00:02:19.848538556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:20.155563 kubelet[2600]: I0913 00:02:20.154026 2600 scope.go:117] "RemoveContainer" containerID="db1c25c8808030c19d85c7730e7642f7b79277cbab12e424cdcb1cb897bfb6cf" Sep 13 00:02:20.157265 containerd[1475]: time="2025-09-13T00:02:20.157140685Z" level=info msg="CreateContainer within sandbox \"044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:02:20.158278 kubelet[2600]: I0913 00:02:20.158251 2600 scope.go:117] "RemoveContainer" containerID="294c404bfecbe715f28530164c9ea859784f47f7380a8bc58f5cfec33cf7c5f1" Sep 13 00:02:20.160669 containerd[1475]: time="2025-09-13T00:02:20.160529429Z" level=info msg="CreateContainer within sandbox \"27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:02:20.178449 containerd[1475]: time="2025-09-13T00:02:20.178392120Z" level=info msg="CreateContainer within sandbox \"27a3c48f8f86da8e5dd372a027f94b6003e40edf5dc4e83ae0c7e414aff93a97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b1722b2f13df3670392230737ba1bbf7aef12c0a956727973fbdbc78d72eacd9\"" Sep 13 00:02:20.178937 containerd[1475]: time="2025-09-13T00:02:20.178911803Z" level=info msg="StartContainer for \"b1722b2f13df3670392230737ba1bbf7aef12c0a956727973fbdbc78d72eacd9\"" Sep 13 00:02:20.182629 containerd[1475]: time="2025-09-13T00:02:20.182575030Z" level=info msg="CreateContainer within sandbox \"044e3e69fad832e9290eda221d39f3a1945cf59fa16712148af7bd5d7227034e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f036bd8e6e2051648e41187f0169a3b7114f3535fcad760af69306e89d581f3c\"" Sep 13 00:02:20.183125 containerd[1475]: time="2025-09-13T00:02:20.183061714Z" level=info msg="StartContainer for \"f036bd8e6e2051648e41187f0169a3b7114f3535fcad760af69306e89d581f3c\"" Sep 13 00:02:20.213409 systemd[1]: Started cri-containerd-b1722b2f13df3670392230737ba1bbf7aef12c0a956727973fbdbc78d72eacd9.scope - libcontainer container b1722b2f13df3670392230737ba1bbf7aef12c0a956727973fbdbc78d72eacd9. Sep 13 00:02:20.227065 systemd[1]: Started cri-containerd-f036bd8e6e2051648e41187f0169a3b7114f3535fcad760af69306e89d581f3c.scope - libcontainer container f036bd8e6e2051648e41187f0169a3b7114f3535fcad760af69306e89d581f3c. 
Sep 13 00:02:20.278339 containerd[1475]: time="2025-09-13T00:02:20.278280369Z" level=info msg="StartContainer for \"b1722b2f13df3670392230737ba1bbf7aef12c0a956727973fbdbc78d72eacd9\" returns successfully" Sep 13 00:02:20.284248 containerd[1475]: time="2025-09-13T00:02:20.284198172Z" level=info msg="StartContainer for \"f036bd8e6e2051648e41187f0169a3b7114f3535fcad760af69306e89d581f3c\" returns successfully" Sep 13 00:02:24.845539 kubelet[2600]: E0913 00:02:24.845335 2600 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48624->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-44c5618783.1864ae9aca0e8ad9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-44c5618783,UID:601a9a7eb396cf54ebdb34ef526c443e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-44c5618783,},FirstTimestamp:2025-09-13 00:02:14.419483353 +0000 UTC m=+249.170649631,LastTimestamp:2025-09-13 00:02:14.419483353 +0000 UTC m=+249.170649631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-44c5618783,}"
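The rejected event at the end records a kubelet readiness probe against kube-apiserver returning HTTP 500, unsurprising while etcd reads are timing out (the lease-update and event-write failures above hit the same 10.0.0.2:2379 timeouts); the event itself could not be persisted for the same reason. The probe is an HTTPS GET against the apiserver's health endpoint. The sketch below performs the same kind of check; the port 6443 and the /readyz path are assumptions about this cluster rather than values taken from the log, and certificate verification is skipped only to keep the example self-contained.

// readyz_probe.go -- illustrative check of the kube-apiserver readiness endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the sketch self-contained; a real probe
			// should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// ?verbose lists the individual readiness checks (etcd among them) and which one failed.
	resp, err := client.Get("https://127.0.0.1:6443/readyz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", resp.StatusCode) // 500 corresponds to the "Unhealthy" event above
	fmt.Printf("%s", body)
}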