Sep 9 21:33:27.731173 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 21:33:27.731194 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 19:54:20 -00 2025 Sep 9 21:33:27.731204 kernel: KASLR enabled Sep 9 21:33:27.731209 kernel: efi: EFI v2.7 by EDK II Sep 9 21:33:27.731215 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 9 21:33:27.731220 kernel: random: crng init done Sep 9 21:33:27.731227 kernel: secureboot: Secure boot disabled Sep 9 21:33:27.731232 kernel: ACPI: Early table checksum verification disabled Sep 9 21:33:27.731238 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 9 21:33:27.731245 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 21:33:27.731251 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731256 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731262 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731268 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731275 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731282 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731288 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731294 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731300 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:33:27.731306 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 21:33:27.731312 kernel: 
ACPI: Use ACPI SPCR as default console: No Sep 9 21:33:27.731318 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 21:33:27.731328 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 9 21:33:27.731335 kernel: Zone ranges: Sep 9 21:33:27.731341 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 21:33:27.731350 kernel: DMA32 empty Sep 9 21:33:27.731358 kernel: Normal empty Sep 9 21:33:27.731366 kernel: Device empty Sep 9 21:33:27.731372 kernel: Movable zone start for each node Sep 9 21:33:27.731378 kernel: Early memory node ranges Sep 9 21:33:27.731384 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 9 21:33:27.731390 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 9 21:33:27.731396 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 9 21:33:27.731402 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 9 21:33:27.731407 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 9 21:33:27.731413 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 9 21:33:27.731420 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 9 21:33:27.731427 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 9 21:33:27.731433 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 9 21:33:27.731439 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 9 21:33:27.731448 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 9 21:33:27.731455 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 9 21:33:27.731462 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 9 21:33:27.731469 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 21:33:27.731476 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 21:33:27.731483 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 9 21:33:27.731489 kernel: psci: probing for conduit method from ACPI. 
Sep 9 21:33:27.731495 kernel: psci: PSCIv1.1 detected in firmware. Sep 9 21:33:27.731502 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 21:33:27.731508 kernel: psci: Trusted OS migration not required Sep 9 21:33:27.731515 kernel: psci: SMC Calling Convention v1.1 Sep 9 21:33:27.731521 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 21:33:27.731528 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 9 21:33:27.731543 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 9 21:33:27.731599 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 21:33:27.731606 kernel: Detected PIPT I-cache on CPU0 Sep 9 21:33:27.731613 kernel: CPU features: detected: GIC system register CPU interface Sep 9 21:33:27.731619 kernel: CPU features: detected: Spectre-v4 Sep 9 21:33:27.731626 kernel: CPU features: detected: Spectre-BHB Sep 9 21:33:27.731633 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 21:33:27.731639 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 21:33:27.731645 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 21:33:27.731652 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 21:33:27.731658 kernel: alternatives: applying boot alternatives Sep 9 21:33:27.731666 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30 Sep 9 21:33:27.731674 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 9 21:33:27.731681 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 21:33:27.731687 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 21:33:27.731693 kernel: Fallback order for Node 0: 0 Sep 9 21:33:27.731700 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 9 21:33:27.731706 kernel: Policy zone: DMA Sep 9 21:33:27.731712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 21:33:27.731719 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 9 21:33:27.731725 kernel: software IO TLB: area num 4. Sep 9 21:33:27.731732 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 9 21:33:27.731738 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 9 21:33:27.731745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 21:33:27.731752 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 21:33:27.731759 kernel: rcu: RCU event tracing is enabled. Sep 9 21:33:27.731765 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 21:33:27.731772 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 21:33:27.731778 kernel: Tracing variant of Tasks RCU enabled. Sep 9 21:33:27.731785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 21:33:27.731791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 21:33:27.731798 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 21:33:27.731804 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 9 21:33:27.731810 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 21:33:27.731818 kernel: GICv3: 256 SPIs implemented Sep 9 21:33:27.731824 kernel: GICv3: 0 Extended SPIs implemented Sep 9 21:33:27.731830 kernel: Root IRQ handler: gic_handle_irq Sep 9 21:33:27.731837 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 9 21:33:27.731843 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 9 21:33:27.731849 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 21:33:27.731855 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 21:33:27.731862 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 9 21:33:27.731868 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 9 21:33:27.731875 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 9 21:33:27.731881 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 9 21:33:27.731887 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 21:33:27.731895 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 21:33:27.731901 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 21:33:27.731908 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 21:33:27.731914 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 21:33:27.731920 kernel: arm-pv: using stolen time PV Sep 9 21:33:27.731927 kernel: Console: colour dummy device 80x25 Sep 9 21:33:27.731933 kernel: ACPI: Core revision 20240827 Sep 9 21:33:27.731940 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 9 21:33:27.731947 kernel: pid_max: default: 32768 minimum: 301 Sep 9 21:33:27.731953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 21:33:27.731961 kernel: landlock: Up and running. Sep 9 21:33:27.731967 kernel: SELinux: Initializing. Sep 9 21:33:27.731974 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 21:33:27.731981 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 21:33:27.731987 kernel: rcu: Hierarchical SRCU implementation. Sep 9 21:33:27.731994 kernel: rcu: Max phase no-delay instances is 400. Sep 9 21:33:27.732009 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 21:33:27.732022 kernel: Remapping and enabling EFI services. Sep 9 21:33:27.732028 kernel: smp: Bringing up secondary CPUs ... Sep 9 21:33:27.732040 kernel: Detected PIPT I-cache on CPU1 Sep 9 21:33:27.732047 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 21:33:27.732054 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 9 21:33:27.732062 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 21:33:27.732069 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 21:33:27.732076 kernel: Detected PIPT I-cache on CPU2 Sep 9 21:33:27.732083 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 21:33:27.732090 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 9 21:33:27.732098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 21:33:27.732104 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 21:33:27.732111 kernel: Detected PIPT I-cache on CPU3 Sep 9 21:33:27.732118 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 21:33:27.732125 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 9 
21:33:27.732132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 21:33:27.732138 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 21:33:27.732145 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 21:33:27.732152 kernel: SMP: Total of 4 processors activated. Sep 9 21:33:27.732160 kernel: CPU: All CPU(s) started at EL1 Sep 9 21:33:27.732167 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 21:33:27.732173 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 21:33:27.732180 kernel: CPU features: detected: Common not Private translations Sep 9 21:33:27.732187 kernel: CPU features: detected: CRC32 instructions Sep 9 21:33:27.732194 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 9 21:33:27.732201 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 21:33:27.732208 kernel: CPU features: detected: LSE atomic instructions Sep 9 21:33:27.732214 kernel: CPU features: detected: Privileged Access Never Sep 9 21:33:27.732223 kernel: CPU features: detected: RAS Extension Support Sep 9 21:33:27.732229 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 21:33:27.732236 kernel: alternatives: applying system-wide alternatives Sep 9 21:33:27.732243 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 9 21:33:27.732250 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved) Sep 9 21:33:27.732257 kernel: devtmpfs: initialized Sep 9 21:33:27.732264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 21:33:27.732271 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 21:33:27.732278 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 9 21:33:27.732286 kernel: 0 pages in range for non-PLT usage Sep 9 21:33:27.732293 
kernel: 508560 pages in range for PLT usage Sep 9 21:33:27.732299 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 21:33:27.732306 kernel: SMBIOS 3.0.0 present. Sep 9 21:33:27.732313 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 9 21:33:27.732320 kernel: DMI: Memory slots populated: 1/1 Sep 9 21:33:27.732327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 21:33:27.732334 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 21:33:27.732341 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 21:33:27.732349 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 21:33:27.732356 kernel: audit: initializing netlink subsys (disabled) Sep 9 21:33:27.732363 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Sep 9 21:33:27.732370 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 21:33:27.732376 kernel: cpuidle: using governor menu Sep 9 21:33:27.732383 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 9 21:33:27.732390 kernel: ASID allocator initialised with 32768 entries Sep 9 21:33:27.732397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 21:33:27.732404 kernel: Serial: AMBA PL011 UART driver Sep 9 21:33:27.732412 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 21:33:27.732419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 21:33:27.732425 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 21:33:27.732432 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 9 21:33:27.732439 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 21:33:27.732446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 21:33:27.732452 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 21:33:27.732459 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 9 21:33:27.732466 kernel: ACPI: Added _OSI(Module Device) Sep 9 21:33:27.732474 kernel: ACPI: Added _OSI(Processor Device) Sep 9 21:33:27.732481 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 21:33:27.732488 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 21:33:27.732494 kernel: ACPI: Interpreter enabled Sep 9 21:33:27.732501 kernel: ACPI: Using GIC for interrupt routing Sep 9 21:33:27.732508 kernel: ACPI: MCFG table detected, 1 entries Sep 9 21:33:27.732515 kernel: ACPI: CPU0 has been hot-added Sep 9 21:33:27.732522 kernel: ACPI: CPU1 has been hot-added Sep 9 21:33:27.732528 kernel: ACPI: CPU2 has been hot-added Sep 9 21:33:27.732541 kernel: ACPI: CPU3 has been hot-added Sep 9 21:33:27.732556 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 21:33:27.732563 kernel: printk: legacy console [ttyAMA0] enabled Sep 9 21:33:27.732570 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 21:33:27.732694 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Sep 9 21:33:27.732757 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 21:33:27.732815 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 21:33:27.732871 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 21:33:27.732930 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 21:33:27.732939 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 21:33:27.732947 kernel: PCI host bridge to bus 0000:00 Sep 9 21:33:27.733009 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 21:33:27.733062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 21:33:27.733114 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 9 21:33:27.733168 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 21:33:27.733249 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 9 21:33:27.733318 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 21:33:27.733377 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 9 21:33:27.733437 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 9 21:33:27.733494 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 21:33:27.733596 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 9 21:33:27.733662 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 9 21:33:27.733724 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 9 21:33:27.733778 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 21:33:27.733830 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 21:33:27.733883 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Sep 9 21:33:27.733892 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 21:33:27.733899 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 21:33:27.733905 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 21:33:27.733914 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 21:33:27.733921 kernel: iommu: Default domain type: Translated Sep 9 21:33:27.733927 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 21:33:27.733934 kernel: efivars: Registered efivars operations Sep 9 21:33:27.733941 kernel: vgaarb: loaded Sep 9 21:33:27.733948 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 21:33:27.733955 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 21:33:27.733962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 21:33:27.733969 kernel: pnp: PnP ACPI init Sep 9 21:33:27.734033 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 21:33:27.734042 kernel: pnp: PnP ACPI: found 1 devices Sep 9 21:33:27.734050 kernel: NET: Registered PF_INET protocol family Sep 9 21:33:27.734056 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 21:33:27.734064 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 21:33:27.734071 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 21:33:27.734077 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 21:33:27.734084 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 21:33:27.734093 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 21:33:27.734100 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 21:33:27.734107 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 21:33:27.734114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 
21:33:27.734121 kernel: PCI: CLS 0 bytes, default 64 Sep 9 21:33:27.734127 kernel: kvm [1]: HYP mode not available Sep 9 21:33:27.734134 kernel: Initialise system trusted keyrings Sep 9 21:33:27.734141 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 21:33:27.734148 kernel: Key type asymmetric registered Sep 9 21:33:27.734156 kernel: Asymmetric key parser 'x509' registered Sep 9 21:33:27.734163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 21:33:27.734170 kernel: io scheduler mq-deadline registered Sep 9 21:33:27.734177 kernel: io scheduler kyber registered Sep 9 21:33:27.734184 kernel: io scheduler bfq registered Sep 9 21:33:27.734191 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 21:33:27.734198 kernel: ACPI: button: Power Button [PWRB] Sep 9 21:33:27.734205 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 21:33:27.734262 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 21:33:27.734272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 21:33:27.734279 kernel: thunder_xcv, ver 1.0 Sep 9 21:33:27.734286 kernel: thunder_bgx, ver 1.0 Sep 9 21:33:27.734293 kernel: nicpf, ver 1.0 Sep 9 21:33:27.734299 kernel: nicvf, ver 1.0 Sep 9 21:33:27.734363 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 21:33:27.734418 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T21:33:27 UTC (1757453607) Sep 9 21:33:27.734427 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 21:33:27.734434 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 9 21:33:27.734442 kernel: watchdog: NMI not fully supported Sep 9 21:33:27.734449 kernel: watchdog: Hard watchdog permanently disabled Sep 9 21:33:27.734456 kernel: NET: Registered PF_INET6 protocol family Sep 9 21:33:27.734463 kernel: Segment Routing with IPv6 Sep 9 21:33:27.734470 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 
21:33:27.734477 kernel: NET: Registered PF_PACKET protocol family Sep 9 21:33:27.734484 kernel: Key type dns_resolver registered Sep 9 21:33:27.734491 kernel: registered taskstats version 1 Sep 9 21:33:27.734497 kernel: Loading compiled-in X.509 certificates Sep 9 21:33:27.734505 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f5007e8dd2a6cc57a1fe19052a0aaf9985861c4d' Sep 9 21:33:27.734512 kernel: Demotion targets for Node 0: null Sep 9 21:33:27.734519 kernel: Key type .fscrypt registered Sep 9 21:33:27.734526 kernel: Key type fscrypt-provisioning registered Sep 9 21:33:27.734543 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 21:33:27.734570 kernel: ima: Allocated hash algorithm: sha1 Sep 9 21:33:27.734578 kernel: ima: No architecture policies found Sep 9 21:33:27.734585 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 21:33:27.734594 kernel: clk: Disabling unused clocks Sep 9 21:33:27.734601 kernel: PM: genpd: Disabling unused power domains Sep 9 21:33:27.734607 kernel: Warning: unable to open an initial console. Sep 9 21:33:27.734614 kernel: Freeing unused kernel memory: 38976K Sep 9 21:33:27.734621 kernel: Run /init as init process Sep 9 21:33:27.734628 kernel: with arguments: Sep 9 21:33:27.734635 kernel: /init Sep 9 21:33:27.734641 kernel: with environment: Sep 9 21:33:27.734648 kernel: HOME=/ Sep 9 21:33:27.734655 kernel: TERM=linux Sep 9 21:33:27.734662 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 21:33:27.734670 systemd[1]: Successfully made /usr/ read-only. Sep 9 21:33:27.734680 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:33:27.734688 systemd[1]: Detected virtualization kvm. 
Sep 9 21:33:27.734696 systemd[1]: Detected architecture arm64. Sep 9 21:33:27.734702 systemd[1]: Running in initrd. Sep 9 21:33:27.734710 systemd[1]: No hostname configured, using default hostname. Sep 9 21:33:27.734719 systemd[1]: Hostname set to . Sep 9 21:33:27.734726 systemd[1]: Initializing machine ID from VM UUID. Sep 9 21:33:27.734733 systemd[1]: Queued start job for default target initrd.target. Sep 9 21:33:27.734741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:33:27.734748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:33:27.734756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 21:33:27.734763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:33:27.734771 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 21:33:27.734780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 21:33:27.734789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 21:33:27.734797 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 21:33:27.734805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:33:27.734813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:33:27.734820 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:33:27.734828 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:33:27.734837 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:33:27.734844 systemd[1]: Reached target timers.target - Timer Units. 
Sep 9 21:33:27.734852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:33:27.734859 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:33:27.734867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 21:33:27.734875 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 21:33:27.734882 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:33:27.734889 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:33:27.734898 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:33:27.734905 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:33:27.734913 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 21:33:27.734920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:33:27.734927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 21:33:27.734935 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 21:33:27.734943 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 21:33:27.734950 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:33:27.734957 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:33:27.734966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:33:27.734973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 21:33:27.734981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:33:27.734988 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 9 21:33:27.734997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 21:33:27.735021 systemd-journald[243]: Collecting audit messages is disabled. Sep 9 21:33:27.735040 systemd-journald[243]: Journal started Sep 9 21:33:27.735059 systemd-journald[243]: Runtime Journal (/run/log/journal/e7f751816e8b44969ebe219434e99c17) is 6M, max 48.5M, 42.4M free. Sep 9 21:33:27.743630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 21:33:27.743654 kernel: Bridge firewalling registered Sep 9 21:33:27.729135 systemd-modules-load[245]: Inserted module 'overlay' Sep 9 21:33:27.742913 systemd-modules-load[245]: Inserted module 'br_netfilter' Sep 9 21:33:27.746702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:33:27.749210 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:33:27.749575 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:33:27.751582 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:33:27.754671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 21:33:27.756044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:33:27.758261 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:33:27.769092 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:33:27.776322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:33:27.778221 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 21:33:27.780792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 21:33:27.781768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:33:27.785221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:33:27.787651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:33:27.789231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 21:33:27.806190 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30 Sep 9 21:33:27.819331 systemd-resolved[285]: Positive Trust Anchors: Sep 9 21:33:27.819352 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:33:27.819385 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:33:27.824102 systemd-resolved[285]: Defaulting to hostname 'linux'. Sep 9 21:33:27.825052 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:33:27.827711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 9 21:33:27.879578 kernel: SCSI subsystem initialized
Sep 9 21:33:27.884594 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 21:33:27.891573 kernel: iscsi: registered transport (tcp)
Sep 9 21:33:27.904639 kernel: iscsi: registered transport (qla4xxx)
Sep 9 21:33:27.904677 kernel: QLogic iSCSI HBA Driver
Sep 9 21:33:27.919975 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 21:33:27.940578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:33:27.941766 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 21:33:27.986012 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 21:33:27.987958 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 21:33:28.044586 kernel: raid6: neonx8 gen() 15747 MB/s
Sep 9 21:33:28.061565 kernel: raid6: neonx4 gen() 15799 MB/s
Sep 9 21:33:28.078575 kernel: raid6: neonx2 gen() 13201 MB/s
Sep 9 21:33:28.095580 kernel: raid6: neonx1 gen() 10439 MB/s
Sep 9 21:33:28.112953 kernel: raid6: int64x8 gen() 6895 MB/s
Sep 9 21:33:28.129599 kernel: raid6: int64x4 gen() 7328 MB/s
Sep 9 21:33:28.146608 kernel: raid6: int64x2 gen() 6093 MB/s
Sep 9 21:33:28.163585 kernel: raid6: int64x1 gen() 5052 MB/s
Sep 9 21:33:28.163627 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s
Sep 9 21:33:28.180602 kernel: raid6: .... xor() 12346 MB/s, rmw enabled
Sep 9 21:33:28.180652 kernel: raid6: using neon recovery algorithm
Sep 9 21:33:28.185581 kernel: xor: measuring software checksum speed
Sep 9 21:33:28.185609 kernel: 8regs : 21618 MB/sec
Sep 9 21:33:28.186685 kernel: 32regs : 21681 MB/sec
Sep 9 21:33:28.186698 kernel: arm64_neon : 28109 MB/sec
Sep 9 21:33:28.186707 kernel: xor: using function: arm64_neon (28109 MB/sec)
Sep 9 21:33:28.239618 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 21:33:28.245547 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 21:33:28.249677 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:33:28.279877 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 9 21:33:28.283905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:33:28.285646 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 21:33:28.305789 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Sep 9 21:33:28.327004 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 21:33:28.328974 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 21:33:28.383663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:33:28.385736 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 21:33:28.435135 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 21:33:28.435295 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 21:33:28.442335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:33:28.445752 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 21:33:28.445772 kernel: GPT:9289727 != 19775487
Sep 9 21:33:28.445781 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 21:33:28.445790 kernel: GPT:9289727 != 19775487
Sep 9 21:33:28.445798 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 21:33:28.445813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:33:28.442453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:33:28.447303 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:33:28.449473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:33:28.477086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:33:28.488813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 21:33:28.489910 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 21:33:28.497411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 21:33:28.505135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 21:33:28.511062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 21:33:28.512002 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 21:33:28.514313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 21:33:28.515988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:33:28.517562 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 21:33:28.519828 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 21:33:28.521259 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 21:33:28.536697 disk-uuid[589]: Primary Header is updated.
Sep 9 21:33:28.536697 disk-uuid[589]: Secondary Entries is updated.
Sep 9 21:33:28.536697 disk-uuid[589]: Secondary Header is updated.
Sep 9 21:33:28.540862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:33:28.540474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 21:33:29.547426 disk-uuid[592]: The operation has completed successfully.
Sep 9 21:33:29.548332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:33:29.572315 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 21:33:29.572426 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 21:33:29.596039 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 21:33:29.620295 sh[609]: Success
Sep 9 21:33:29.631586 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 21:33:29.631624 kernel: device-mapper: uevent: version 1.0.3
Sep 9 21:33:29.632572 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 21:33:29.639847 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 21:33:29.662698 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 21:33:29.664161 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 21:33:29.679683 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 21:33:29.685358 kernel: BTRFS: device fsid 0420e954-c3c6-4e24-9a07-863b2151b564 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (621)
Sep 9 21:33:29.685408 kernel: BTRFS info (device dm-0): first mount of filesystem 0420e954-c3c6-4e24-9a07-863b2151b564
Sep 9 21:33:29.685430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:33:29.689670 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 21:33:29.689709 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 21:33:29.690627 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 21:33:29.691637 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 21:33:29.692702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 21:33:29.693362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 21:33:29.696025 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 21:33:29.710593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 9 21:33:29.712253 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:33:29.712301 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:33:29.714708 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:33:29.714771 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:33:29.718578 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:33:29.719262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 21:33:29.720901 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 21:33:29.797795 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 21:33:29.801145 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 21:33:29.818925 ignition[693]: Ignition 2.22.0
Sep 9 21:33:29.818937 ignition[693]: Stage: fetch-offline
Sep 9 21:33:29.818967 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:29.818974 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:29.819054 ignition[693]: parsed url from cmdline: ""
Sep 9 21:33:29.819057 ignition[693]: no config URL provided
Sep 9 21:33:29.819062 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 21:33:29.819069 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Sep 9 21:33:29.819087 ignition[693]: op(1): [started] loading QEMU firmware config module
Sep 9 21:33:29.819091 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 21:33:29.823926 ignition[693]: op(1): [finished] loading QEMU firmware config module
Sep 9 21:33:29.843781 systemd-networkd[801]: lo: Link UP
Sep 9 21:33:29.843794 systemd-networkd[801]: lo: Gained carrier
Sep 9 21:33:29.844456 systemd-networkd[801]: Enumeration completed
Sep 9 21:33:29.844637 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 21:33:29.846004 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:33:29.846008 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 21:33:29.846253 systemd[1]: Reached target network.target - Network.
Sep 9 21:33:29.846799 systemd-networkd[801]: eth0: Link UP
Sep 9 21:33:29.847182 systemd-networkd[801]: eth0: Gained carrier
Sep 9 21:33:29.847192 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:33:29.863609 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 21:33:29.871433 ignition[693]: parsing config with SHA512: 6108573c17cd86fa34c354b8add44e2a90d3596ffa860d0190b533e31def8236d62fbefefc919aeb8fd5ab3ef0a39962f6099302f03cdf09bb7585b2cdfc4e6c
Sep 9 21:33:29.875196 unknown[693]: fetched base config from "system"
Sep 9 21:33:29.875208 unknown[693]: fetched user config from "qemu"
Sep 9 21:33:29.875587 ignition[693]: fetch-offline: fetch-offline passed
Sep 9 21:33:29.875645 ignition[693]: Ignition finished successfully
Sep 9 21:33:29.878528 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 21:33:29.880448 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 21:33:29.882766 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 21:33:29.916733 ignition[810]: Ignition 2.22.0
Sep 9 21:33:29.916751 ignition[810]: Stage: kargs
Sep 9 21:33:29.916903 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:29.916912 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:29.917672 ignition[810]: kargs: kargs passed
Sep 9 21:33:29.917720 ignition[810]: Ignition finished successfully
Sep 9 21:33:29.919806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 21:33:29.922451 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 21:33:29.950328 ignition[818]: Ignition 2.22.0
Sep 9 21:33:29.950351 ignition[818]: Stage: disks
Sep 9 21:33:29.950480 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:29.950489 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:29.951274 ignition[818]: disks: disks passed
Sep 9 21:33:29.953297 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 21:33:29.951319 ignition[818]: Ignition finished successfully
Sep 9 21:33:29.954961 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 21:33:29.955975 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 21:33:29.957471 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 21:33:29.958659 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 21:33:29.960045 systemd[1]: Reached target basic.target - Basic System.
Sep 9 21:33:29.962329 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 21:33:29.997707 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 21:33:30.012456 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 21:33:30.014873 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 21:33:30.076584 kernel: EXT4-fs (vda9): mounted filesystem 09d5f77d-9531-4ec2-9062-5fa777d03891 r/w with ordered data mode. Quota mode: none.
Sep 9 21:33:30.077278 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 21:33:30.078407 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 21:33:30.080454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 21:33:30.082056 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 21:33:30.082883 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 21:33:30.082922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 21:33:30.082946 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 21:33:30.101401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 21:33:30.103643 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 21:33:30.106568 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 9 21:33:30.110596 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:33:30.110661 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:33:30.112634 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:33:30.112673 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:33:30.113895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 21:33:30.136054 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 21:33:30.139594 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 9 21:33:30.143703 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 21:33:30.147270 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 21:33:30.214065 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 21:33:30.216214 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 21:33:30.217662 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 21:33:30.244583 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:33:30.253213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 21:33:30.274483 ignition[950]: INFO : Ignition 2.22.0
Sep 9 21:33:30.274483 ignition[950]: INFO : Stage: mount
Sep 9 21:33:30.275743 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:30.275743 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:30.275743 ignition[950]: INFO : mount: mount passed
Sep 9 21:33:30.275743 ignition[950]: INFO : Ignition finished successfully
Sep 9 21:33:30.277411 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 21:33:30.281767 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 21:33:30.818635 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 21:33:30.820079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 21:33:30.838563 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 9 21:33:30.840244 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:33:30.840271 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:33:30.842566 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:33:30.842595 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:33:30.843711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 21:33:30.880652 ignition[980]: INFO : Ignition 2.22.0
Sep 9 21:33:30.880652 ignition[980]: INFO : Stage: files
Sep 9 21:33:30.881997 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:30.881997 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:30.881997 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 21:33:30.884422 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 21:33:30.884422 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 21:33:30.884422 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 21:33:30.884422 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 21:33:30.888454 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 21:33:30.888454 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 21:33:30.888454 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 21:33:30.884580 unknown[980]: wrote ssh authorized keys file for user: core
Sep 9 21:33:31.557126 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 21:33:31.708785 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 9 21:33:33.661735 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 21:33:33.661735 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 21:33:33.664866 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 21:33:34.004219 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 21:33:34.106254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:33:34.107707 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 21:33:34.118543 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 21:33:34.529621 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 21:33:34.853171 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 21:33:34.853171 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 21:33:34.856441 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:33:34.869942 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:33:34.873189 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:33:34.874407 ignition[980]: INFO : files: files passed
Sep 9 21:33:34.874407 ignition[980]: INFO : Ignition finished successfully
Sep 9 21:33:34.878005 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 21:33:34.880693 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 21:33:34.882251 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 21:33:34.890489 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 21:33:34.893450 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 21:33:34.891627 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 21:33:34.895306 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:33:34.895306 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:33:34.897975 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:33:34.897868 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:33:34.900315 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 21:33:34.901879 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 21:33:34.957230 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 21:33:34.957350 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 21:33:34.958990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 21:33:34.960422 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 21:33:34.961851 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 21:33:34.962511 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 21:33:34.975363 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:33:34.977367 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 21:33:34.995811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:33:34.996722 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:33:34.998356 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 21:33:34.999787 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 21:33:34.999887 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:33:35.001791 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 21:33:35.003302 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 21:33:35.004538 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 21:33:35.005861 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 21:33:35.007338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 21:33:35.008949 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 21:33:35.010481 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 21:33:35.011918 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 21:33:35.013408 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 21:33:35.015128 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 21:33:35.016461 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 21:33:35.017644 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 21:33:35.017747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 21:33:35.019458 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:33:35.020945 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:33:35.022424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 21:33:35.025623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:33:35.026527 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 21:33:35.026648 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 21:33:35.029036 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 21:33:35.029211 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 21:33:35.030560 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 21:33:35.031745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 21:33:35.032625 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:33:35.033974 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 21:33:35.035367 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 21:33:35.037024 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 21:33:35.037098 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 21:33:35.038272 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 21:33:35.038343 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 21:33:35.039491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 21:33:35.039615 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:33:35.040882 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 21:33:35.040972 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 21:33:35.042873 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 21:33:35.044586 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 21:33:35.045272 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 21:33:35.045375 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:33:35.047048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 21:33:35.047132 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 21:33:35.051449 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 21:33:35.058683 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 21:33:35.066543 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 21:33:35.073047 ignition[1035]: INFO : Ignition 2.22.0
Sep 9 21:33:35.073047 ignition[1035]: INFO : Stage: umount
Sep 9 21:33:35.074736 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:33:35.074736 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:33:35.076685 ignition[1035]: INFO : umount: umount passed
Sep 9 21:33:35.076685 ignition[1035]: INFO : Ignition finished successfully
Sep 9 21:33:35.078250 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 21:33:35.078355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 21:33:35.080005 systemd[1]: Stopped target network.target - Network.
Sep 9 21:33:35.081179 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 21:33:35.081236 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 21:33:35.082469 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 21:33:35.082514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 21:33:35.083818 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 21:33:35.083859 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 21:33:35.085204 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 21:33:35.085243 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 21:33:35.086645 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 21:33:35.088082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 21:33:35.098199 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 21:33:35.098337 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 21:33:35.100645 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 21:33:35.100823 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 21:33:35.100904 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 21:33:35.104311 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 21:33:35.104931 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 21:33:35.105842 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 21:33:35.105892 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:33:35.108193 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 21:33:35.108969 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 21:33:35.109021 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:33:35.110422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 21:33:35.110458 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:33:35.112582 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 21:33:35.112620 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 21:33:35.114012 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 21:33:35.114046 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:33:35.116273 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:33:35.118439 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 21:33:35.118490 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:33:35.122109 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 9 21:33:35.122243 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:33:35.124504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 21:33:35.124593 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 21:33:35.125520 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 21:33:35.125561 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:33:35.127421 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 21:33:35.127461 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:33:35.129789 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 21:33:35.129839 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 21:33:35.132199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 21:33:35.132252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:33:35.135298 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 21:33:35.136756 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 21:33:35.136823 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:33:35.139046 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 21:33:35.139093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:33:35.141506 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 21:33:35.141564 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:33:35.146440 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 21:33:35.146483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 9 21:33:35.148476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:33:35.148527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:33:35.151801 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 21:33:35.151847 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 21:33:35.151873 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 21:33:35.151902 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:33:35.152200 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 21:33:35.152277 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 21:33:35.153510 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 21:33:35.153629 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 21:33:35.155126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 21:33:35.155204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 21:33:35.157506 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 21:33:35.159005 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 21:33:35.159061 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 21:33:35.161359 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 21:33:35.175834 systemd[1]: Switching root. Sep 9 21:33:35.202420 systemd-journald[243]: Journal stopped Sep 9 21:33:35.929176 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). 
Sep 9 21:33:35.929223 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 21:33:35.929239 kernel: SELinux: policy capability open_perms=1 Sep 9 21:33:35.929248 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 21:33:35.929257 kernel: SELinux: policy capability always_check_network=0 Sep 9 21:33:35.929266 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 21:33:35.929276 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 21:33:35.929285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 21:33:35.929296 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 21:33:35.929305 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 21:33:35.929363 kernel: audit: type=1403 audit(1757453615.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 21:33:35.929378 systemd[1]: Successfully loaded SELinux policy in 61.070ms. Sep 9 21:33:35.929401 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.099ms. Sep 9 21:33:35.929412 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:33:35.929426 systemd[1]: Detected virtualization kvm. Sep 9 21:33:35.929438 systemd[1]: Detected architecture arm64. Sep 9 21:33:35.929448 systemd[1]: Detected first boot. Sep 9 21:33:35.929457 systemd[1]: Initializing machine ID from VM UUID. Sep 9 21:33:35.929468 zram_generator::config[1081]: No configuration found. Sep 9 21:33:35.929478 kernel: NET: Registered PF_VSOCK protocol family Sep 9 21:33:35.929487 systemd[1]: Populated /etc with preset unit settings. Sep 9 21:33:35.929506 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 9 21:33:35.929569 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 21:33:35.929590 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 21:33:35.929600 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 21:33:35.929616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 21:33:35.929627 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 21:33:35.929637 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 21:33:35.929648 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 21:33:35.929658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 21:33:35.929669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 21:33:35.929680 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 21:33:35.929690 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 21:33:35.929700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:33:35.929710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:33:35.929720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 21:33:35.929729 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 21:33:35.929739 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 21:33:35.929749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:33:35.929759 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 9 21:33:35.929770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:33:35.929780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:33:35.929790 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 21:33:35.929805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 21:33:35.929815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 21:33:35.929824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 21:33:35.929834 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:33:35.929845 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:33:35.929856 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:33:35.929866 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:33:35.929876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 21:33:35.929886 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 21:33:35.929896 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 21:33:35.929908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:33:35.929918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:33:35.929927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:33:35.929937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 21:33:35.929948 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 21:33:35.929958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 21:33:35.929967 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 21:33:35.929976 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 9 21:33:35.929986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 21:33:35.929995 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 21:33:35.930006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 21:33:35.930016 systemd[1]: Reached target machines.target - Containers. Sep 9 21:33:35.930025 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 21:33:35.930037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 21:33:35.930046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:33:35.930056 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 21:33:35.930066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:33:35.930075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:33:35.930084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:33:35.930094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 21:33:35.930103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:33:35.930114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 21:33:35.930124 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 21:33:35.930134 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 21:33:35.930143 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 21:33:35.930152 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 21:33:35.930162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:33:35.930172 kernel: fuse: init (API version 7.41) Sep 9 21:33:35.930182 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:33:35.930192 kernel: loop: module loaded Sep 9 21:33:35.930202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:33:35.930212 kernel: ACPI: bus type drm_connector registered Sep 9 21:33:35.930221 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:33:35.930231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 21:33:35.930241 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 21:33:35.930251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:33:35.930262 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 21:33:35.930271 systemd[1]: Stopped verity-setup.service. Sep 9 21:33:35.930304 systemd-journald[1155]: Collecting audit messages is disabled. Sep 9 21:33:35.930324 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 21:33:35.930334 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 21:33:35.930345 systemd-journald[1155]: Journal started Sep 9 21:33:35.930366 systemd-journald[1155]: Runtime Journal (/run/log/journal/e7f751816e8b44969ebe219434e99c17) is 6M, max 48.5M, 42.4M free. Sep 9 21:33:35.760204 systemd[1]: Queued start job for default target multi-user.target. Sep 9 21:33:35.771455 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 21:33:35.771841 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 9 21:33:35.932867 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:33:35.933459 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 21:33:35.934362 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 21:33:35.935310 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 21:33:35.936264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 21:33:35.938586 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 21:33:35.939674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:33:35.940775 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 21:33:35.940925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 21:33:35.942028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:33:35.942176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:33:35.943264 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:33:35.943407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:33:35.944481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:33:35.944652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:33:35.945744 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 21:33:35.945886 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 21:33:35.946919 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:33:35.947074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:33:35.948164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:33:35.949335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 21:33:35.950848 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 21:33:35.952044 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 21:33:35.963685 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:33:35.965544 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 21:33:35.967231 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 21:33:35.968156 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 21:33:35.968182 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:33:35.969792 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 21:33:35.981649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 21:33:35.982674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:33:35.983743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 21:33:35.985346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 21:33:35.986463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:33:35.987240 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 21:33:35.988248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:33:35.991679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:33:35.994742 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 9 21:33:35.994877 systemd-journald[1155]: Time spent on flushing to /var/log/journal/e7f751816e8b44969ebe219434e99c17 is 23.256ms for 894 entries. Sep 9 21:33:35.994877 systemd-journald[1155]: System Journal (/var/log/journal/e7f751816e8b44969ebe219434e99c17) is 8M, max 195.6M, 187.6M free. Sep 9 21:33:36.033712 systemd-journald[1155]: Received client request to flush runtime journal. Sep 9 21:33:36.033759 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 21:33:35.997483 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 21:33:36.000013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:33:36.002796 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 21:33:36.003762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 21:33:36.012581 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 21:33:36.014466 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 21:33:36.025455 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 9 21:33:36.025465 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 9 21:33:36.027766 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 21:33:36.030577 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:33:36.037725 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:33:36.041234 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 21:33:36.043576 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 21:33:36.050961 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 9 21:33:36.052716 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 21:33:36.075601 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 21:33:36.078049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:33:36.080223 kernel: loop1: detected capacity change from 0 to 119368 Sep 9 21:33:36.093997 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 21:33:36.094016 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 21:33:36.096961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:33:36.115574 kernel: loop2: detected capacity change from 0 to 100632 Sep 9 21:33:36.148703 kernel: loop3: detected capacity change from 0 to 211168 Sep 9 21:33:36.156739 kernel: loop4: detected capacity change from 0 to 119368 Sep 9 21:33:36.160594 kernel: loop5: detected capacity change from 0 to 100632 Sep 9 21:33:36.164009 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 21:33:36.164349 (sd-merge)[1224]: Merged extensions into '/usr'. Sep 9 21:33:36.168914 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 21:33:36.168929 systemd[1]: Reloading... Sep 9 21:33:36.216588 zram_generator::config[1247]: No configuration found. Sep 9 21:33:36.249389 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 21:33:36.362167 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 21:33:36.362237 systemd[1]: Reloading finished in 192 ms. Sep 9 21:33:36.394590 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 21:33:36.395698 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 21:33:36.415840 systemd[1]: Starting ensure-sysext.service... 
Sep 9 21:33:36.417373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:33:36.422145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 21:33:36.425036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:33:36.427706 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Sep 9 21:33:36.427719 systemd[1]: Reloading... Sep 9 21:33:36.430503 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 21:33:36.430821 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 21:33:36.431107 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 21:33:36.431373 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 21:33:36.432101 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 21:33:36.432393 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 9 21:33:36.432511 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 9 21:33:36.435031 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:33:36.435128 systemd-tmpfiles[1285]: Skipping /boot Sep 9 21:33:36.441131 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:33:36.441227 systemd-tmpfiles[1285]: Skipping /boot Sep 9 21:33:36.456778 systemd-udevd[1288]: Using default interface naming scheme 'v255'. Sep 9 21:33:36.468573 zram_generator::config[1313]: No configuration found. Sep 9 21:33:36.635106 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Sep 9 21:33:36.635283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 21:33:36.636711 systemd[1]: Reloading finished in 208 ms. Sep 9 21:33:36.645022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:33:36.652812 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:33:36.669119 systemd[1]: Finished ensure-sysext.service. Sep 9 21:33:36.684181 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:33:36.686145 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 21:33:36.687134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 21:33:36.704294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:33:36.708141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:33:36.710044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:33:36.711955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:33:36.712931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:33:36.715666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 21:33:36.716481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:33:36.717351 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 21:33:36.720858 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 21:33:36.723068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:33:36.726980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 21:33:36.729113 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 21:33:36.734418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:33:36.738326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:33:36.738483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:33:36.739958 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:33:36.744594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:33:36.745855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:33:36.746010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:33:36.747256 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:33:36.747403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:33:36.748664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 21:33:36.750069 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 21:33:36.751287 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 21:33:36.754930 augenrules[1433]: No rules Sep 9 21:33:36.756212 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:33:36.756428 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:33:36.762283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 9 21:33:36.764283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:33:36.764355 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:33:36.765482 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 21:33:36.767658 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 21:33:36.768468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 21:33:36.777857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:33:36.780049 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 21:33:36.802630 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 21:33:36.853769 systemd-networkd[1411]: lo: Link UP Sep 9 21:33:36.853776 systemd-networkd[1411]: lo: Gained carrier Sep 9 21:33:36.854432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 21:33:36.854522 systemd-networkd[1411]: Enumeration completed Sep 9 21:33:36.854933 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:33:36.854942 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:33:36.855341 systemd-networkd[1411]: eth0: Link UP Sep 9 21:33:36.855506 systemd-networkd[1411]: eth0: Gained carrier Sep 9 21:33:36.855506 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 9 21:33:36.855522 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:33:36.856458 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 21:33:36.857661 systemd-resolved[1413]: Positive Trust Anchors: Sep 9 21:33:36.857679 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:33:36.857711 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:33:36.858392 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 21:33:36.860234 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 21:33:36.863386 systemd-resolved[1413]: Defaulting to hostname 'linux'. Sep 9 21:33:36.864633 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:33:36.865455 systemd[1]: Reached target network.target - Network. Sep 9 21:33:36.866198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:33:36.867097 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:33:36.867943 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 21:33:36.869031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 9 21:33:36.870101 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 21:33:36.871033 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 21:33:36.871987 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 21:33:36.872970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 21:33:36.872999 systemd[1]: Reached target paths.target - Path Units.
Sep 9 21:33:36.873670 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 21:33:36.875015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 21:33:36.876887 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 21:33:36.877616 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 21:33:36.879076 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 21:33:36.880185 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 21:33:36.881153 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Sep 9 21:33:36.881160 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 21:33:36.881698 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 21:33:36.881757 systemd-timesyncd[1415]: Initial clock synchronization to Tue 2025-09-09 21:33:37.275497 UTC.
Sep 9 21:33:36.883880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 21:33:36.884910 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 21:33:36.887618 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 21:33:36.888661 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 21:33:36.889885 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 21:33:36.890634 systemd[1]: Reached target basic.target - Basic System.
Sep 9 21:33:36.891337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 21:33:36.891365 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 21:33:36.892260 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 21:33:36.893953 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 21:33:36.896765 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 21:33:36.898430 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 21:33:36.900262 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 21:33:36.901111 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 21:33:36.902689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 21:33:36.904269 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 21:33:36.906656 jq[1470]: false
Sep 9 21:33:36.905920 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 21:33:36.907822 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 21:33:36.912685 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 21:33:36.914209 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 21:33:36.914647 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 21:33:36.915435 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 21:33:36.915908 extend-filesystems[1471]: Found /dev/vda6
Sep 9 21:33:36.917699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 21:33:36.920900 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 21:33:36.922591 extend-filesystems[1471]: Found /dev/vda9
Sep 9 21:33:36.923941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 21:33:36.924102 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 21:33:36.926091 extend-filesystems[1471]: Checking size of /dev/vda9
Sep 9 21:33:36.925868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 21:33:36.926024 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 21:33:36.929944 jq[1484]: true
Sep 9 21:33:36.944290 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 21:33:36.944710 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 21:33:36.946878 update_engine[1481]: I20250909 21:33:36.946679 1481 main.cc:92] Flatcar Update Engine starting
Sep 9 21:33:36.948653 tar[1490]: linux-arm64/LICENSE
Sep 9 21:33:36.948839 extend-filesystems[1471]: Resized partition /dev/vda9
Sep 9 21:33:36.949517 tar[1490]: linux-arm64/helm
Sep 9 21:33:36.954837 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 21:33:36.954872 extend-filesystems[1509]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 21:33:36.956328 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 21:33:36.961475 jq[1500]: true
Sep 9 21:33:36.974241 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 21:33:36.977960 dbus-daemon[1468]: [system] SELinux support is enabled
Sep 9 21:33:36.982241 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 21:33:36.986094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 21:33:36.986117 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 21:33:36.987637 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 21:33:36.988917 update_engine[1481]: I20250909 21:33:36.988855 1481 update_check_scheduler.cc:74] Next update check in 3m16s
Sep 9 21:33:36.987654 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 21:33:36.991809 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 21:33:36.991809 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 21:33:36.991809 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 21:33:36.990939 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 21:33:36.997220 extend-filesystems[1471]: Resized filesystem in /dev/vda9
Sep 9 21:33:36.991152 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 21:33:37.002394 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 21:33:37.009852 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 21:33:37.014783 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 21:33:37.015745 systemd-logind[1479]: New seat seat0.
Sep 9 21:33:37.017226 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 21:33:37.035925 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 21:33:37.037203 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 21:33:37.039246 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 21:33:37.101033 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 21:33:37.154731 containerd[1501]: time="2025-09-09T21:33:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 21:33:37.155292 containerd[1501]: time="2025-09-09T21:33:37.155256218Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 21:33:37.167176 containerd[1501]: time="2025-09-09T21:33:37.167134327Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.905µs"
Sep 9 21:33:37.167176 containerd[1501]: time="2025-09-09T21:33:37.167167022Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 21:33:37.167273 containerd[1501]: time="2025-09-09T21:33:37.167185698Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 21:33:37.167422 containerd[1501]: time="2025-09-09T21:33:37.167393656Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 21:33:37.167451 containerd[1501]: time="2025-09-09T21:33:37.167423370Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 21:33:37.167470 containerd[1501]: time="2025-09-09T21:33:37.167451909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167531 containerd[1501]: time="2025-09-09T21:33:37.167511506Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167531 containerd[1501]: time="2025-09-09T21:33:37.167528797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167866 containerd[1501]: time="2025-09-09T21:33:37.167789972Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167866 containerd[1501]: time="2025-09-09T21:33:37.167864132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167911 containerd[1501]: time="2025-09-09T21:33:37.167878653Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 21:33:37.167911 containerd[1501]: time="2025-09-09T21:33:37.167887089Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 21:33:37.168619 containerd[1501]: time="2025-09-09T21:33:37.167968635Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 21:33:37.168619 containerd[1501]: time="2025-09-09T21:33:37.168280971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 21:33:37.168619 containerd[1501]: time="2025-09-09T21:33:37.168313119Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 21:33:37.168619 containerd[1501]: time="2025-09-09T21:33:37.168324325Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 21:33:37.168619 containerd[1501]: time="2025-09-09T21:33:37.168371289Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 21:33:37.168824 containerd[1501]: time="2025-09-09T21:33:37.168800047Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 21:33:37.168895 containerd[1501]: time="2025-09-09T21:33:37.168878236Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 21:33:37.172358 containerd[1501]: time="2025-09-09T21:33:37.172319635Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 21:33:37.172412 containerd[1501]: time="2025-09-09T21:33:37.172387919Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 21:33:37.172412 containerd[1501]: time="2025-09-09T21:33:37.172402230Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 21:33:37.172447 containerd[1501]: time="2025-09-09T21:33:37.172413268Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 21:33:37.172447 containerd[1501]: time="2025-09-09T21:33:37.172424642Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 21:33:37.172498 containerd[1501]: time="2025-09-09T21:33:37.172480335Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 21:33:37.172517 containerd[1501]: time="2025-09-09T21:33:37.172497207Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 21:33:37.172517 containerd[1501]: time="2025-09-09T21:33:37.172509504Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 21:33:37.172550 containerd[1501]: time="2025-09-09T21:33:37.172523018Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 21:33:37.172550 containerd[1501]: time="2025-09-09T21:33:37.172534182Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 21:33:37.172550 containerd[1501]: time="2025-09-09T21:33:37.172542911Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 21:33:37.172626 containerd[1501]: time="2025-09-09T21:33:37.172573717Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 21:33:37.172713 containerd[1501]: time="2025-09-09T21:33:37.172690056Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 21:33:37.172745 containerd[1501]: time="2025-09-09T21:33:37.172718763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 21:33:37.172745 containerd[1501]: time="2025-09-09T21:33:37.172735089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 21:33:37.172780 containerd[1501]: time="2025-09-09T21:33:37.172751457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 21:33:37.172780 containerd[1501]: time="2025-09-09T21:33:37.172762747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 21:33:37.172780 containerd[1501]: time="2025-09-09T21:33:37.172773113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 21:33:37.172834 containerd[1501]: time="2025-09-09T21:33:37.172784277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 21:33:37.172834 containerd[1501]: time="2025-09-09T21:33:37.172795399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 21:33:37.172834 containerd[1501]: time="2025-09-09T21:33:37.172809962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 21:33:37.172834 containerd[1501]: time="2025-09-09T21:33:37.172821504Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 21:33:37.172834 containerd[1501]: time="2025-09-09T21:33:37.172831702Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 21:33:37.173044 containerd[1501]: time="2025-09-09T21:33:37.173027573Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 21:33:37.173072 containerd[1501]: time="2025-09-09T21:33:37.173047802Z" level=info msg="Start snapshots syncer"
Sep 9 21:33:37.173156 containerd[1501]: time="2025-09-09T21:33:37.173134175Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 21:33:37.174167 containerd[1501]: time="2025-09-09T21:33:37.174108576Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 21:33:37.174311 containerd[1501]: time="2025-09-09T21:33:37.174187646Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 21:33:37.174311 containerd[1501]: time="2025-09-09T21:33:37.174294206Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 21:33:37.174453 containerd[1501]: time="2025-09-09T21:33:37.174408615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 21:33:37.174480 containerd[1501]: time="2025-09-09T21:33:37.174453060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 21:33:37.174480 containerd[1501]: time="2025-09-09T21:33:37.174467959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 21:33:37.174514 containerd[1501]: time="2025-09-09T21:33:37.174483194Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 21:33:37.174514 containerd[1501]: time="2025-09-09T21:33:37.174500821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 21:33:37.174675 containerd[1501]: time="2025-09-09T21:33:37.174514923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 21:33:37.174675 containerd[1501]: time="2025-09-09T21:33:37.174544050Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 21:33:37.174675 containerd[1501]: time="2025-09-09T21:33:37.174579262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 21:33:37.174675 containerd[1501]: time="2025-09-09T21:33:37.174620937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 21:33:37.174675 containerd[1501]: time="2025-09-09T21:33:37.174641418Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174694090Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174716124Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174725735Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174741137Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174753728Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174769257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 21:33:37.174867 containerd[1501]: time="2025-09-09T21:33:37.174781134Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 21:33:37.175110 containerd[1501]: time="2025-09-09T21:33:37.175003697Z" level=info msg="runtime interface created"
Sep 9 21:33:37.175110 containerd[1501]: time="2025-09-09T21:33:37.175009699Z" level=info msg="created NRI interface"
Sep 9 21:33:37.175110 containerd[1501]: time="2025-09-09T21:33:37.175023129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 21:33:37.175110 containerd[1501]: time="2025-09-09T21:33:37.175036056Z" level=info msg="Connect containerd service"
Sep 9 21:33:37.175110 containerd[1501]: time="2025-09-09T21:33:37.175073157Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 21:33:37.175845 containerd[1501]: time="2025-09-09T21:33:37.175811187Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 21:33:37.241296 containerd[1501]: time="2025-09-09T21:33:37.241190042Z" level=info msg="Start subscribing containerd event"
Sep 9 21:33:37.241296 containerd[1501]: time="2025-09-09T21:33:37.241298910Z" level=info msg="Start recovering state"
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241384695Z" level=info msg="Start event monitor"
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241399133Z" level=info msg="Start cni network conf syncer for default"
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241443746Z" level=info msg="Start streaming server"
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241456337Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241463472Z" level=info msg="runtime interface starting up..."
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241470187Z" level=info msg="starting plugins..."
Sep 9 21:33:37.241543 containerd[1501]: time="2025-09-09T21:33:37.241486471Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 21:33:37.241797 containerd[1501]: time="2025-09-09T21:33:37.241656069Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 21:33:37.241797 containerd[1501]: time="2025-09-09T21:33:37.241723429Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 21:33:37.246073 containerd[1501]: time="2025-09-09T21:33:37.244721674Z" level=info msg="containerd successfully booted in 0.090527s"
Sep 9 21:33:37.244831 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 21:33:37.284454 tar[1490]: linux-arm64/README.md
Sep 9 21:33:37.301724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 21:33:38.044738 systemd-networkd[1411]: eth0: Gained IPv6LL
Sep 9 21:33:38.048638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 21:33:38.049996 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 21:33:38.052281 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 21:33:38.054553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:33:38.062000 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 21:33:38.085452 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 21:33:38.085988 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 21:33:38.087534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 21:33:38.091448 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 21:33:38.633290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:33:38.649985 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 21:33:39.002339 kubelet[1583]: E0909 21:33:39.002279 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 21:33:39.004803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 21:33:39.004951 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 21:33:39.005386 systemd[1]: kubelet.service: Consumed 735ms CPU time, 258.1M memory peak.
Sep 9 21:33:39.228556 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 21:33:39.249616 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 21:33:39.252189 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 21:33:39.273835 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 21:33:39.274052 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 21:33:39.276222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 21:33:39.294745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 21:33:39.298015 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 21:33:39.299842 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 21:33:39.300891 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 21:33:39.301675 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 21:33:39.303162 systemd[1]: Startup finished in 2.015s (kernel) + 7.785s (initrd) + 3.969s (userspace) = 13.769s.
Sep 9 21:33:40.340222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 21:33:40.341664 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:45776.service - OpenSSH per-connection server daemon (10.0.0.1:45776).
Sep 9 21:33:40.402250 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 45776 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:33:40.404234 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:33:40.410429 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 21:33:40.411345 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 21:33:40.416924 systemd-logind[1479]: New session 1 of user core.
Sep 9 21:33:40.432612 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 21:33:40.434858 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 21:33:40.452717 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 21:33:40.455153 systemd-logind[1479]: New session c1 of user core.
Sep 9 21:33:40.565001 systemd[1617]: Queued start job for default target default.target.
Sep 9 21:33:40.574430 systemd[1617]: Created slice app.slice - User Application Slice.
Sep 9 21:33:40.574576 systemd[1617]: Reached target paths.target - Paths.
Sep 9 21:33:40.574711 systemd[1617]: Reached target timers.target - Timers.
Sep 9 21:33:40.575857 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 21:33:40.584436 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 21:33:40.584492 systemd[1617]: Reached target sockets.target - Sockets.
Sep 9 21:33:40.584526 systemd[1617]: Reached target basic.target - Basic System.
Sep 9 21:33:40.584553 systemd[1617]: Reached target default.target - Main User Target.
Sep 9 21:33:40.584602 systemd[1617]: Startup finished in 123ms.
Sep 9 21:33:40.584715 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 21:33:40.585863 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 21:33:40.648111 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780).
Sep 9 21:33:40.702308 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:33:40.703392 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:33:40.707642 systemd-logind[1479]: New session 2 of user core.
Sep 9 21:33:40.722713 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 21:33:40.775461 sshd[1631]: Connection closed by 10.0.0.1 port 45780
Sep 9 21:33:40.775751 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
Sep 9 21:33:40.785433 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:45780.service: Deactivated successfully.
Sep 9 21:33:40.787824 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 21:33:40.788516 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Sep 9 21:33:40.790389 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:45794.service - OpenSSH per-connection server daemon (10.0.0.1:45794).
Sep 9 21:33:40.791458 systemd-logind[1479]: Removed session 2.
Sep 9 21:33:40.844315 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 45794 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:33:40.845346 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:33:40.849476 systemd-logind[1479]: New session 3 of user core.
Sep 9 21:33:40.860789 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 21:33:40.909656 sshd[1640]: Connection closed by 10.0.0.1 port 45794
Sep 9 21:33:40.909686 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
Sep 9 21:33:40.925386 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:45794.service: Deactivated successfully.
Sep 9 21:33:40.926739 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 21:33:40.929618 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Sep 9 21:33:40.930605 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:45802.service - OpenSSH per-connection server daemon (10.0.0.1:45802).
Sep 9 21:33:40.931383 systemd-logind[1479]: Removed session 3.
Sep 9 21:33:40.980993 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 45802 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:33:40.982012 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:33:40.986325 systemd-logind[1479]: New session 4 of user core. Sep 9 21:33:40.996728 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 21:33:41.047555 sshd[1649]: Connection closed by 10.0.0.1 port 45802 Sep 9 21:33:41.047835 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Sep 9 21:33:41.058355 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:45802.service: Deactivated successfully. Sep 9 21:33:41.060901 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 21:33:41.061497 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Sep 9 21:33:41.063567 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:45812.service - OpenSSH per-connection server daemon (10.0.0.1:45812). Sep 9 21:33:41.064135 systemd-logind[1479]: Removed session 4. Sep 9 21:33:41.119349 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 45812 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:33:41.120551 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:33:41.125237 systemd-logind[1479]: New session 5 of user core. Sep 9 21:33:41.134753 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 21:33:41.192536 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 21:33:41.192842 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:33:41.206494 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 9 21:33:41.207855 sshd[1658]: Connection closed by 10.0.0.1 port 45812 Sep 9 21:33:41.208207 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 9 21:33:41.228804 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:45812.service: Deactivated successfully. Sep 9 21:33:41.230245 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 21:33:41.230974 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Sep 9 21:33:41.233253 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:45824.service - OpenSSH per-connection server daemon (10.0.0.1:45824). Sep 9 21:33:41.233997 systemd-logind[1479]: Removed session 5. Sep 9 21:33:41.288348 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 45824 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:33:41.289716 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:33:41.293543 systemd-logind[1479]: New session 6 of user core. Sep 9 21:33:41.308745 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 21:33:41.360641 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 21:33:41.360907 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:33:41.365615 sudo[1670]: pam_unix(sudo:session): session closed for user root Sep 9 21:33:41.370058 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 21:33:41.370312 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:33:41.378245 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:33:41.407421 augenrules[1692]: No rules Sep 9 21:33:41.408765 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:33:41.409036 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:33:41.411440 sudo[1669]: pam_unix(sudo:session): session closed for user root Sep 9 21:33:41.412542 sshd[1668]: Connection closed by 10.0.0.1 port 45824 Sep 9 21:33:41.412945 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Sep 9 21:33:41.425443 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:45824.service: Deactivated successfully. Sep 9 21:33:41.426717 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 21:33:41.427289 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Sep 9 21:33:41.429279 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:45840.service - OpenSSH per-connection server daemon (10.0.0.1:45840). Sep 9 21:33:41.430125 systemd-logind[1479]: Removed session 6. Sep 9 21:33:41.484005 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:33:41.485072 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:33:41.489489 systemd-logind[1479]: New session 7 of user core. 
Sep 9 21:33:41.498722 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 21:33:41.549125 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 21:33:41.549379 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:33:41.816935 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 21:33:41.835843 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 21:33:42.031528 dockerd[1726]: time="2025-09-09T21:33:42.031460285Z" level=info msg="Starting up" Sep 9 21:33:42.032255 dockerd[1726]: time="2025-09-09T21:33:42.032237180Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 21:33:42.042486 dockerd[1726]: time="2025-09-09T21:33:42.042449057Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 21:33:42.055742 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2519408396-merged.mount: Deactivated successfully. Sep 9 21:33:42.260095 dockerd[1726]: time="2025-09-09T21:33:42.260022993Z" level=info msg="Loading containers: start." Sep 9 21:33:42.267595 kernel: Initializing XFRM netlink socket Sep 9 21:33:42.454231 systemd-networkd[1411]: docker0: Link UP Sep 9 21:33:42.458221 dockerd[1726]: time="2025-09-09T21:33:42.458181941Z" level=info msg="Loading containers: done." 
Sep 9 21:33:42.470294 dockerd[1726]: time="2025-09-09T21:33:42.470244643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 21:33:42.470405 dockerd[1726]: time="2025-09-09T21:33:42.470323588Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 21:33:42.470405 dockerd[1726]: time="2025-09-09T21:33:42.470400523Z" level=info msg="Initializing buildkit" Sep 9 21:33:42.491444 dockerd[1726]: time="2025-09-09T21:33:42.491395506Z" level=info msg="Completed buildkit initialization" Sep 9 21:33:42.496138 dockerd[1726]: time="2025-09-09T21:33:42.496108794Z" level=info msg="Daemon has completed initialization" Sep 9 21:33:42.496357 dockerd[1726]: time="2025-09-09T21:33:42.496179331Z" level=info msg="API listen on /run/docker.sock" Sep 9 21:33:42.496420 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 21:33:43.040079 containerd[1501]: time="2025-09-09T21:33:43.040038816Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 21:33:43.053909 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2115901164-merged.mount: Deactivated successfully. Sep 9 21:33:43.648215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556665798.mount: Deactivated successfully. 
Sep 9 21:33:44.533653 containerd[1501]: time="2025-09-09T21:33:44.533578731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:44.534805 containerd[1501]: time="2025-09-09T21:33:44.534561776Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 9 21:33:44.535648 containerd[1501]: time="2025-09-09T21:33:44.535606675Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:44.538719 containerd[1501]: time="2025-09-09T21:33:44.538685469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:44.539740 containerd[1501]: time="2025-09-09T21:33:44.539686822Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.499601305s"
Sep 9 21:33:44.539791 containerd[1501]: time="2025-09-09T21:33:44.539742029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 9 21:33:44.541029 containerd[1501]: time="2025-09-09T21:33:44.541005472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 21:33:45.520348 containerd[1501]: time="2025-09-09T21:33:45.520293281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:45.522016 containerd[1501]: time="2025-09-09T21:33:45.521982461Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 9 21:33:45.523039 containerd[1501]: time="2025-09-09T21:33:45.522985833Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:45.526600 containerd[1501]: time="2025-09-09T21:33:45.526427567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:45.527765 containerd[1501]: time="2025-09-09T21:33:45.527736908Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 986.69737ms"
Sep 9 21:33:45.527847 containerd[1501]: time="2025-09-09T21:33:45.527832661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 9 21:33:45.528314 containerd[1501]: time="2025-09-09T21:33:45.528279454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 21:33:46.630586 containerd[1501]: time="2025-09-09T21:33:46.630276373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:46.631579 containerd[1501]: time="2025-09-09T21:33:46.631514919Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 9 21:33:46.633586 containerd[1501]: time="2025-09-09T21:33:46.633126262Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:46.635656 containerd[1501]: time="2025-09-09T21:33:46.635619631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:46.636690 containerd[1501]: time="2025-09-09T21:33:46.636656637Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.108185453s"
Sep 9 21:33:46.636778 containerd[1501]: time="2025-09-09T21:33:46.636742773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 9 21:33:46.637259 containerd[1501]: time="2025-09-09T21:33:46.637238281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 21:33:47.577212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252295546.mount: Deactivated successfully.
Sep 9 21:33:47.821671 containerd[1501]: time="2025-09-09T21:33:47.821626702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:47.822151 containerd[1501]: time="2025-09-09T21:33:47.822117862Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 21:33:47.823162 containerd[1501]: time="2025-09-09T21:33:47.823133083Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:47.825133 containerd[1501]: time="2025-09-09T21:33:47.825092780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:47.825801 containerd[1501]: time="2025-09-09T21:33:47.825592166Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.188310709s" Sep 9 21:33:47.825801 containerd[1501]: time="2025-09-09T21:33:47.825625026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 21:33:47.826116 containerd[1501]: time="2025-09-09T21:33:47.826081097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 21:33:48.313948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008601722.mount: Deactivated successfully. 
Sep 9 21:33:49.131102 containerd[1501]: time="2025-09-09T21:33:49.131027941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:49.131662 containerd[1501]: time="2025-09-09T21:33:49.131629084Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 9 21:33:49.132609 containerd[1501]: time="2025-09-09T21:33:49.132584990Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:49.135946 containerd[1501]: time="2025-09-09T21:33:49.135906806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:33:49.137088 containerd[1501]: time="2025-09-09T21:33:49.136953080Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.310834051s" Sep 9 21:33:49.137088 containerd[1501]: time="2025-09-09T21:33:49.136990608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 21:33:49.137521 containerd[1501]: time="2025-09-09T21:33:49.137495405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 21:33:49.193869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 21:33:49.195265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 21:33:49.310821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:33:49.314389 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:33:49.368027 kubelet[2077]: E0909 21:33:49.367981 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:33:49.371309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:33:49.371460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:33:49.371873 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.6M memory peak. Sep 9 21:33:49.651175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939588878.mount: Deactivated successfully. 
Sep 9 21:33:49.655602 containerd[1501]: time="2025-09-09T21:33:49.655566865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:33:49.656551 containerd[1501]: time="2025-09-09T21:33:49.656512873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 21:33:49.657433 containerd[1501]: time="2025-09-09T21:33:49.657398125Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:33:49.659085 containerd[1501]: time="2025-09-09T21:33:49.659051722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:33:49.659828 containerd[1501]: time="2025-09-09T21:33:49.659795020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 522.264713ms"
Sep 9 21:33:49.659859 containerd[1501]: time="2025-09-09T21:33:49.659828590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 21:33:49.660300 containerd[1501]: time="2025-09-09T21:33:49.660275377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 21:33:50.131960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185352764.mount: Deactivated successfully.
Sep 9 21:33:51.536494 containerd[1501]: time="2025-09-09T21:33:51.536437286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:51.536848 containerd[1501]: time="2025-09-09T21:33:51.536796996Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 9 21:33:51.537887 containerd[1501]: time="2025-09-09T21:33:51.537846585Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:51.540897 containerd[1501]: time="2025-09-09T21:33:51.540854489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:33:51.542425 containerd[1501]: time="2025-09-09T21:33:51.542376598Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.882071868s"
Sep 9 21:33:51.542425 containerd[1501]: time="2025-09-09T21:33:51.542415894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 9 21:33:57.376403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:33:57.376928 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.6M memory peak.
Sep 9 21:33:57.378680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:33:57.396911 systemd[1]: Reload requested from client PID 2172 ('systemctl') (unit session-7.scope)...
Sep 9 21:33:57.396927 systemd[1]: Reloading... Sep 9 21:33:57.461891 zram_generator::config[2219]: No configuration found. Sep 9 21:33:57.639520 systemd[1]: Reloading finished in 242 ms. Sep 9 21:33:57.696221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:33:57.699276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:33:57.699887 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:33:57.700108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:33:57.700147 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Sep 9 21:33:57.701412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:33:57.815320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:33:57.825829 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:33:57.856489 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:33:57.856489 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 21:33:57.856489 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 21:33:57.856796 kubelet[2263]: I0909 21:33:57.856529 2263 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 21:33:58.426474 kubelet[2263]: I0909 21:33:58.426428 2263 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 21:33:58.426474 kubelet[2263]: I0909 21:33:58.426461 2263 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 21:33:58.426722 kubelet[2263]: I0909 21:33:58.426694 2263 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 21:33:58.448265 kubelet[2263]: E0909 21:33:58.448230 2263 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 9 21:33:58.448721 kubelet[2263]: I0909 21:33:58.448709 2263 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 21:33:58.457005 kubelet[2263]: I0909 21:33:58.456981 2263 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 21:33:58.459955 kubelet[2263]: I0909 21:33:58.459592 2263 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 21:33:58.460641 kubelet[2263]: I0909 21:33:58.460609 2263 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 21:33:58.460878 kubelet[2263]: I0909 21:33:58.460726 2263 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 21:33:58.461073 kubelet[2263]: I0909 21:33:58.461060 2263 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 21:33:58.461140 kubelet[2263]: I0909 21:33:58.461118 2263 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 21:33:58.461895 kubelet[2263]: I0909 21:33:58.461877 2263 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:33:58.464374 kubelet[2263]: I0909 21:33:58.464354 2263 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 21:33:58.464467 kubelet[2263]: I0909 21:33:58.464456 2263 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 21:33:58.464543 kubelet[2263]: I0909 21:33:58.464534 2263 kubelet.go:386] "Adding apiserver pod source"
Sep 9 21:33:58.465855 kubelet[2263]: I0909 21:33:58.465842 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 21:33:58.467152 kubelet[2263]: I0909 21:33:58.467119 2263 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 21:33:58.467807 kubelet[2263]: I0909 21:33:58.467789 2263 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 21:33:58.467923 kubelet[2263]: W0909 21:33:58.467912 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 21:33:58.469294 kubelet[2263]: E0909 21:33:58.469261 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 21:33:58.469883 kubelet[2263]: E0909 21:33:58.469766 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 21:33:58.470191 kubelet[2263]: I0909 21:33:58.470173 2263 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 21:33:58.470233 kubelet[2263]: I0909 21:33:58.470213 2263 server.go:1289] "Started kubelet"
Sep 9 21:33:58.470323 kubelet[2263]: I0909 21:33:58.470293 2263 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 21:33:58.471350 kubelet[2263]: I0909 21:33:58.471328 2263 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 21:33:58.472664 kubelet[2263]: I0909 21:33:58.472626 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 21:33:58.473818 kubelet[2263]: I0909 21:33:58.473520 2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 21:33:58.473818 kubelet[2263]: I0909 21:33:58.473797 2263 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 21:33:58.474974 kubelet[2263]: E0909 21:33:58.473061 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863bac5d63bbe70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:33:58.470188656 +0000 UTC m=+0.641271045,LastTimestamp:2025-09-09 21:33:58.470188656 +0000 UTC m=+0.641271045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 21:33:58.474974 kubelet[2263]: I0909 21:33:58.474385 2263 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 21:33:58.474974 kubelet[2263]: E0909 21:33:58.474716 2263 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 21:33:58.474974 kubelet[2263]: I0909 21:33:58.474740 2263 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 21:33:58.476105 kubelet[2263]: E0909 21:33:58.476072 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms"
Sep 9 21:33:58.476356 kubelet[2263]: I0909 21:33:58.476334 2263 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 21:33:58.476695 kubelet[2263]: E0909 21:33:58.476664 2263 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 21:33:58.476906 kubelet[2263]: I0909 21:33:58.476890 2263 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 21:33:58.476985 kubelet[2263]: E0909 21:33:58.476939 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 21:33:58.477279 kubelet[2263]: I0909 21:33:58.477259 2263 factory.go:223] Registration of the systemd container factory successfully
Sep 9 21:33:58.477440 kubelet[2263]: I0909 21:33:58.477421 2263 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 21:33:58.478349 kubelet[2263]: I0909 21:33:58.478323 2263 factory.go:223] Registration of the containerd container factory successfully
Sep 9 21:33:58.481545 kubelet[2263]: I0909 21:33:58.481507 2263 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 21:33:58.489469 kubelet[2263]: I0909 21:33:58.489448 2263 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 21:33:58.489469 kubelet[2263]: I0909 21:33:58.489465 2263 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 21:33:58.489600 kubelet[2263]: I0909 21:33:58.489503 2263 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:33:58.493608 kubelet[2263]: I0909 21:33:58.493582 2263 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 21:33:58.493608 kubelet[2263]: I0909 21:33:58.493612 2263 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 21:33:58.493703 kubelet[2263]: I0909 21:33:58.493632 2263 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 21:33:58.493703 kubelet[2263]: I0909 21:33:58.493640 2263 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 21:33:58.493703 kubelet[2263]: E0909 21:33:58.493682 2263 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 21:33:58.494301 kubelet[2263]: E0909 21:33:58.494271 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 9 21:33:58.560663 kubelet[2263]: I0909 21:33:58.560617 2263 policy_none.go:49] "None policy: Start"
Sep 9 21:33:58.560663 kubelet[2263]: I0909 21:33:58.560658 2263 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 21:33:58.560777 kubelet[2263]: I0909 21:33:58.560684 2263 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 21:33:58.566058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 21:33:58.574960 kubelet[2263]: E0909 21:33:58.574925 2263 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 21:33:58.579220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 21:33:58.581861 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 21:33:58.594728 kubelet[2263]: E0909 21:33:58.594692 2263 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 21:33:58.602277 kubelet[2263]: E0909 21:33:58.602244 2263 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 21:33:58.602489 kubelet[2263]: I0909 21:33:58.602431 2263 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 21:33:58.602489 kubelet[2263]: I0909 21:33:58.602443 2263 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 21:33:58.603028 kubelet[2263]: I0909 21:33:58.602975 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 21:33:58.603978 kubelet[2263]: E0909 21:33:58.603951 2263 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 21:33:58.604031 kubelet[2263]: E0909 21:33:58.603994 2263 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 21:33:58.676762 kubelet[2263]: E0909 21:33:58.676631 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms"
Sep 9 21:33:58.703901 kubelet[2263]: I0909 21:33:58.703866 2263 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 21:33:58.704290 kubelet[2263]: E0909 21:33:58.704265 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep 9 21:33:58.804087 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 9 21:33:58.829969 kubelet[2263]: E0909 21:33:58.829935 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:58.833264 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 9 21:33:58.835114 kubelet[2263]: E0909 21:33:58.835089 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:58.836806 systemd[1]: Created slice kubepods-burstable-podb046c8fbd1cdba8dadafa67d90df3942.slice - libcontainer container kubepods-burstable-podb046c8fbd1cdba8dadafa67d90df3942.slice.
Sep 9 21:33:58.838087 kubelet[2263]: E0909 21:33:58.838059 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:58.878874 kubelet[2263]: I0909 21:33:58.878820 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:33:58.878874 kubelet[2263]: I0909 21:33:58.878869 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:33:58.879231 kubelet[2263]: I0909 21:33:58.878906 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:33:58.879231 kubelet[2263]: I0909 21:33:58.878934 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:33:58.879231 kubelet[2263]: I0909 21:33:58.878961 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:33:58.879231 kubelet[2263]: I0909 21:33:58.878997 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:33:58.879231 kubelet[2263]: I0909 21:33:58.879011 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:33:58.879330 kubelet[2263]: I0909 21:33:58.879024 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:33:58.879330 kubelet[2263]: I0909 21:33:58.879043 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 21:33:58.905886 kubelet[2263]: I0909 21:33:58.905851 2263 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 21:33:58.906155 kubelet[2263]: E0909 21:33:58.906120 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep 9 21:33:59.078075 kubelet[2263]: E0909 21:33:59.078026 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms"
Sep 9 21:33:59.130641 kubelet[2263]: E0909 21:33:59.130612 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.131250 containerd[1501]: time="2025-09-09T21:33:59.131209382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 9 21:33:59.135463 kubelet[2263]: E0909 21:33:59.135433 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.135898 containerd[1501]: time="2025-09-09T21:33:59.135863572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 9 21:33:59.138531 kubelet[2263]: E0909 21:33:59.138325 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.138674 containerd[1501]: time="2025-09-09T21:33:59.138640116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b046c8fbd1cdba8dadafa67d90df3942,Namespace:kube-system,Attempt:0,}"
Sep 9 21:33:59.162329 containerd[1501]: time="2025-09-09T21:33:59.161686722Z" level=info msg="connecting to shim e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5" address="unix:///run/containerd/s/fba999c97c20c65677b15c8a467c36ac9e301fbee834f6d4d9e6939b9a51555a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:33:59.171728 containerd[1501]: time="2025-09-09T21:33:59.171675232Z" level=info msg="connecting to shim 3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43" address="unix:///run/containerd/s/b2fb1a4bb58e0a3b55b6738ddf67cc63dad30c6dec6fcfda553333b58be149d0" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:33:59.182163 containerd[1501]: time="2025-09-09T21:33:59.182121612Z" level=info msg="connecting to shim 5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0" address="unix:///run/containerd/s/cb66276e87bc5fc4d2b5612e3a5e90f2b4fbb7901549d638930f8c8adc77bdc3" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:33:59.185744 systemd[1]: Started cri-containerd-e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5.scope - libcontainer container e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5.
Sep 9 21:33:59.199724 systemd[1]: Started cri-containerd-3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43.scope - libcontainer container 3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43.
Sep 9 21:33:59.207993 systemd[1]: Started cri-containerd-5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0.scope - libcontainer container 5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0.
Sep 9 21:33:59.219097 containerd[1501]: time="2025-09-09T21:33:59.218974782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5\""
Sep 9 21:33:59.220694 kubelet[2263]: E0909 21:33:59.220670 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.226592 containerd[1501]: time="2025-09-09T21:33:59.225759917Z" level=info msg="CreateContainer within sandbox \"e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 21:33:59.233763 containerd[1501]: time="2025-09-09T21:33:59.233666329Z" level=info msg="Container d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:33:59.242411 containerd[1501]: time="2025-09-09T21:33:59.242363920Z" level=info msg="CreateContainer within sandbox \"e642e90c4466e653bcf2161301ef5266ff66ed99d744b9b0f170a1e21aad89d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032\""
Sep 9 21:33:59.243788 containerd[1501]: time="2025-09-09T21:33:59.243755701Z" level=info msg="StartContainer for \"d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032\""
Sep 9 21:33:59.245538 containerd[1501]: time="2025-09-09T21:33:59.245509063Z" level=info msg="connecting to shim d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032" address="unix:///run/containerd/s/fba999c97c20c65677b15c8a467c36ac9e301fbee834f6d4d9e6939b9a51555a" protocol=ttrpc version=3
Sep 9 21:33:59.246717 containerd[1501]: time="2025-09-09T21:33:59.246640607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43\""
Sep 9 21:33:59.247499 kubelet[2263]: E0909 21:33:59.247438 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.250457 containerd[1501]: time="2025-09-09T21:33:59.250424773Z" level=info msg="CreateContainer within sandbox \"3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 21:33:59.251519 containerd[1501]: time="2025-09-09T21:33:59.251487138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b046c8fbd1cdba8dadafa67d90df3942,Namespace:kube-system,Attempt:0,} returns sandbox id \"5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0\""
Sep 9 21:33:59.252384 kubelet[2263]: E0909 21:33:59.252300 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.257862 containerd[1501]: time="2025-09-09T21:33:59.257833009Z" level=info msg="CreateContainer within sandbox \"5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 21:33:59.259852 containerd[1501]: time="2025-09-09T21:33:59.259825915Z" level=info msg="Container 44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:33:59.266623 containerd[1501]: time="2025-09-09T21:33:59.266590556Z" level=info msg="Container 1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:33:59.269468 containerd[1501]: time="2025-09-09T21:33:59.269408969Z" level=info msg="CreateContainer within sandbox \"3c4005c886d04ee69857e0a8fa9057bc9820742c3107aa6089dfbf10c7dc6a43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552\""
Sep 9 21:33:59.270042 containerd[1501]: time="2025-09-09T21:33:59.270001270Z" level=info msg="StartContainer for \"44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552\""
Sep 9 21:33:59.271610 containerd[1501]: time="2025-09-09T21:33:59.271459665Z" level=info msg="connecting to shim 44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552" address="unix:///run/containerd/s/b2fb1a4bb58e0a3b55b6738ddf67cc63dad30c6dec6fcfda553333b58be149d0" protocol=ttrpc version=3
Sep 9 21:33:59.271791 systemd[1]: Started cri-containerd-d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032.scope - libcontainer container d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032.
Sep 9 21:33:59.275731 containerd[1501]: time="2025-09-09T21:33:59.275661638Z" level=info msg="CreateContainer within sandbox \"5089b72f839fe4615f5985e0cb36f8588ac2c8ef92bf700a94e7e320c33edfc0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2\""
Sep 9 21:33:59.276143 containerd[1501]: time="2025-09-09T21:33:59.276115459Z" level=info msg="StartContainer for \"1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2\""
Sep 9 21:33:59.277247 containerd[1501]: time="2025-09-09T21:33:59.277214599Z" level=info msg="connecting to shim 1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2" address="unix:///run/containerd/s/cb66276e87bc5fc4d2b5612e3a5e90f2b4fbb7901549d638930f8c8adc77bdc3" protocol=ttrpc version=3
Sep 9 21:33:59.291960 kubelet[2263]: E0909 21:33:59.291928 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 21:33:59.293693 systemd[1]: Started cri-containerd-44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552.scope - libcontainer container 44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552.
Sep 9 21:33:59.296144 systemd[1]: Started cri-containerd-1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2.scope - libcontainer container 1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2.
Sep 9 21:33:59.309971 kubelet[2263]: I0909 21:33:59.309932 2263 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 21:33:59.310285 kubelet[2263]: E0909 21:33:59.310255 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Sep 9 21:33:59.315392 containerd[1501]: time="2025-09-09T21:33:59.315358327Z" level=info msg="StartContainer for \"d71841a5607841f0aa53edb773f7a8a50ee89d339b4d73a395b70587b8000032\" returns successfully"
Sep 9 21:33:59.333623 kubelet[2263]: E0909 21:33:59.333494 2263 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 9 21:33:59.345817 containerd[1501]: time="2025-09-09T21:33:59.345513308Z" level=info msg="StartContainer for \"1e96778c4338ab8e8acb6996ce4f9a057c64674020e425d042135fdb8dc42ee2\" returns successfully"
Sep 9 21:33:59.348921 containerd[1501]: time="2025-09-09T21:33:59.348710828Z" level=info msg="StartContainer for \"44d2347dd1c27c90ea81eda281655a683bbc681e1e6e74d6b44ddc6c50b42552\" returns successfully"
Sep 9 21:33:59.499174 kubelet[2263]: E0909 21:33:59.499142 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:59.499296 kubelet[2263]: E0909 21:33:59.499268 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.504784 kubelet[2263]: E0909 21:33:59.504758 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:59.504872 kubelet[2263]: E0909 21:33:59.504855 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:33:59.505324 kubelet[2263]: E0909 21:33:59.505306 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:33:59.505424 kubelet[2263]: E0909 21:33:59.505409 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:00.114116 kubelet[2263]: I0909 21:34:00.113771 2263 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 21:34:00.505563 kubelet[2263]: E0909 21:34:00.505521 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:34:00.505761 kubelet[2263]: E0909 21:34:00.505661 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:00.506887 kubelet[2263]: E0909 21:34:00.506788 2263 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 21:34:00.507051 kubelet[2263]: E0909 21:34:00.507024 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:00.545119 kubelet[2263]: E0909 21:34:00.545074 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 21:34:00.616565 kubelet[2263]: I0909 21:34:00.615944 2263 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 21:34:00.675753 kubelet[2263]: I0909 21:34:00.675683 2263 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:34:00.680169 kubelet[2263]: E0909 21:34:00.680140 2263 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:34:00.680169 kubelet[2263]: I0909 21:34:00.680164 2263 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:34:00.681936 kubelet[2263]: E0909 21:34:00.681907 2263 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:34:00.681936 kubelet[2263]: I0909 21:34:00.681929 2263 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:34:00.683281 kubelet[2263]: E0909 21:34:00.683251 2263 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:34:01.476414 kubelet[2263]: I0909 21:34:01.476135 2263 apiserver.go:52] "Watching apiserver"
Sep 9 21:34:01.577076 kubelet[2263]: I0909 21:34:01.577011 2263 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 21:34:02.756845 systemd[1]: Reload requested from client PID 2547 ('systemctl') (unit session-7.scope)...
Sep 9 21:34:02.756859 systemd[1]: Reloading...
Sep 9 21:34:02.808574 zram_generator::config[2590]: No configuration found.
Sep 9 21:34:03.051471 systemd[1]: Reloading finished in 293 ms.
Sep 9 21:34:03.085889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:34:03.096472 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 21:34:03.096735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:34:03.096781 systemd[1]: kubelet.service: Consumed 987ms CPU time, 128.9M memory peak.
Sep 9 21:34:03.098250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:34:03.229174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:34:03.234067 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 21:34:03.269739 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:34:03.269739 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 21:34:03.269739 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:34:03.270062 kubelet[2632]: I0909 21:34:03.269780 2632 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 21:34:03.274903 kubelet[2632]: I0909 21:34:03.274870 2632 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 21:34:03.274903 kubelet[2632]: I0909 21:34:03.274895 2632 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 21:34:03.275091 kubelet[2632]: I0909 21:34:03.275075 2632 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 21:34:03.276820 kubelet[2632]: I0909 21:34:03.276610 2632 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 9 21:34:03.279197 kubelet[2632]: I0909 21:34:03.279160 2632 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 21:34:03.282743 kubelet[2632]: I0909 21:34:03.282723 2632 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 21:34:03.285585 kubelet[2632]: I0909 21:34:03.285347 2632 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 21:34:03.285585 kubelet[2632]: I0909 21:34:03.285546 2632 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 21:34:03.285755 kubelet[2632]: I0909 21:34:03.285593 2632 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 21:34:03.285834 kubelet[2632]: I0909 21:34:03.285760 2632 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 21:34:03.285834 kubelet[2632]: I0909 21:34:03.285768 2632 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 21:34:03.285834 kubelet[2632]: I0909 21:34:03.285818 2632 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:34:03.285968 kubelet[2632]: I0909 21:34:03.285956 2632 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 21:34:03.286000 kubelet[2632]: I0909 21:34:03.285971 2632 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 21:34:03.286000 kubelet[2632]: I0909 21:34:03.285990 2632 kubelet.go:386] "Adding apiserver pod source"
Sep 9 21:34:03.286048 kubelet[2632]: I0909 21:34:03.286003 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 21:34:03.288622 kubelet[2632]: I0909 21:34:03.286640 2632 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 21:34:03.288622 kubelet[2632]: I0909 21:34:03.287161 2632 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 21:34:03.289135 kubelet[2632]: I0909 21:34:03.289109 2632 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 21:34:03.289192 kubelet[2632]: I0909 21:34:03.289143 2632 server.go:1289] "Started kubelet"
Sep 9 21:34:03.293679 kubelet[2632]: I0909 21:34:03.291444 2632 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 21:34:03.293679 kubelet[2632]: I0909 21:34:03.292862 2632 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 21:34:03.296565 kubelet[2632]: I0909 21:34:03.295162 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 21:34:03.296565 kubelet[2632]: I0909 21:34:03.295629 2632 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 21:34:03.303279 kubelet[2632]: I0909 21:34:03.302957 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 21:34:03.305869 kubelet[2632]: I0909 21:34:03.305819 2632 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 21:34:03.307178 kubelet[2632]: I0909 21:34:03.306229 2632 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 21:34:03.307178 kubelet[2632]: I0909 21:34:03.306391 2632 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 21:34:03.307178 kubelet[2632]: I0909 21:34:03.306486 2632 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 21:34:03.309861 kubelet[2632]: I0909 21:34:03.309793 2632 factory.go:223] Registration of the systemd container factory successfully
Sep 9 21:34:03.310932 kubelet[2632]: I0909 21:34:03.310136 2632 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 21:34:03.310932 kubelet[2632]: E0909 21:34:03.310879 2632 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 21:34:03.311407 kubelet[2632]: I0909 21:34:03.311332 2632 factory.go:223] Registration of the containerd container factory successfully
Sep 9 21:34:03.321014 kubelet[2632]: I0909 21:34:03.320980 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 21:34:03.322610 kubelet[2632]: I0909 21:34:03.322006 2632 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 21:34:03.322610 kubelet[2632]: I0909 21:34:03.322026 2632 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 21:34:03.322610 kubelet[2632]: I0909 21:34:03.322040 2632 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 21:34:03.322610 kubelet[2632]: I0909 21:34:03.322045 2632 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 21:34:03.322610 kubelet[2632]: E0909 21:34:03.322083 2632 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 21:34:03.345862 kubelet[2632]: I0909 21:34:03.345839 2632 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 21:34:03.345862 kubelet[2632]: I0909 21:34:03.345857 2632 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 21:34:03.345994 kubelet[2632]: I0909 21:34:03.345890 2632 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:34:03.346018 kubelet[2632]: I0909 21:34:03.346004 2632 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 21:34:03.346038 kubelet[2632]: I0909 21:34:03.346013 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 21:34:03.346038 kubelet[2632]: I0909 21:34:03.346028 2632 policy_none.go:49] "None policy: Start"
Sep 9 21:34:03.346038 kubelet[2632]: I0909 21:34:03.346038 2632 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 21:34:03.346091 kubelet[2632]: I0909 21:34:03.346046 2632 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 21:34:03.346148 kubelet[2632]: I0909 21:34:03.346131 2632 state_mem.go:75] "Updated machine memory state"
Sep 9 21:34:03.349942 kubelet[2632]: E0909 21:34:03.349918 2632 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 21:34:03.350332 kubelet[2632]: I0909 21:34:03.350315
2632 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:34:03.350535 kubelet[2632]: I0909 21:34:03.350503 2632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:34:03.351492 kubelet[2632]: I0909 21:34:03.351470 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:34:03.351568 kubelet[2632]: E0909 21:34:03.351514 2632 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 21:34:03.423444 kubelet[2632]: I0909 21:34:03.423412 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 21:34:03.423853 kubelet[2632]: I0909 21:34:03.423814 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.423920 kubelet[2632]: I0909 21:34:03.423878 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:34:03.458269 kubelet[2632]: I0909 21:34:03.458228 2632 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:34:03.466090 kubelet[2632]: I0909 21:34:03.466065 2632 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 21:34:03.466189 kubelet[2632]: I0909 21:34:03.466149 2632 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 21:34:03.608048 kubelet[2632]: I0909 21:34:03.607962 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.608675 kubelet[2632]: I0909 21:34:03.608027 2632 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.608820 kubelet[2632]: I0909 21:34:03.608680 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.608820 kubelet[2632]: I0909 21:34:03.608718 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:34:03.608820 kubelet[2632]: I0909 21:34:03.608750 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:34:03.608820 kubelet[2632]: I0909 21:34:03.608788 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b046c8fbd1cdba8dadafa67d90df3942-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b046c8fbd1cdba8dadafa67d90df3942\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:34:03.608820 kubelet[2632]: I0909 21:34:03.608808 2632 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.608924 kubelet[2632]: I0909 21:34:03.608824 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:34:03.608924 kubelet[2632]: I0909 21:34:03.608840 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:34:03.730458 kubelet[2632]: E0909 21:34:03.730402 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:03.730701 kubelet[2632]: E0909 21:34:03.730586 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:03.732604 kubelet[2632]: E0909 21:34:03.732583 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:03.756756 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 21:34:03.757021 sudo[2673]: pam_unix(sudo:session): session opened 
for user root(uid=0) by core(uid=0) Sep 9 21:34:04.066724 sudo[2673]: pam_unix(sudo:session): session closed for user root Sep 9 21:34:04.286742 kubelet[2632]: I0909 21:34:04.286690 2632 apiserver.go:52] "Watching apiserver" Sep 9 21:34:04.307118 kubelet[2632]: I0909 21:34:04.307084 2632 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 21:34:04.333769 kubelet[2632]: I0909 21:34:04.333681 2632 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 21:34:04.336874 kubelet[2632]: E0909 21:34:04.333897 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:04.336874 kubelet[2632]: E0909 21:34:04.335927 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:04.337532 kubelet[2632]: E0909 21:34:04.337507 2632 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 21:34:04.337682 kubelet[2632]: E0909 21:34:04.337667 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:04.353424 kubelet[2632]: I0909 21:34:04.353237 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.353220195 podStartE2EDuration="1.353220195s" podCreationTimestamp="2025-09-09 21:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:04.3519821 +0000 UTC m=+1.114918657" watchObservedRunningTime="2025-09-09 21:34:04.353220195 +0000 
UTC m=+1.116156752" Sep 9 21:34:04.425119 kubelet[2632]: I0909 21:34:04.425053 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.425017574 podStartE2EDuration="1.425017574s" podCreationTimestamp="2025-09-09 21:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:04.424187825 +0000 UTC m=+1.187124382" watchObservedRunningTime="2025-09-09 21:34:04.425017574 +0000 UTC m=+1.187954131" Sep 9 21:34:04.425248 kubelet[2632]: I0909 21:34:04.425171 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.425166533 podStartE2EDuration="1.425166533s" podCreationTimestamp="2025-09-09 21:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:04.400075005 +0000 UTC m=+1.163011602" watchObservedRunningTime="2025-09-09 21:34:04.425166533 +0000 UTC m=+1.188103090" Sep 9 21:34:05.335325 kubelet[2632]: E0909 21:34:05.335018 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:05.335325 kubelet[2632]: E0909 21:34:05.335183 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:05.427109 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 9 21:34:05.428249 sshd[1704]: Connection closed by 10.0.0.1 port 45840 Sep 9 21:34:05.428679 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 9 21:34:05.431835 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 21:34:05.432206 systemd[1]: session-7.scope: Consumed 7.509s CPU time, 257.8M memory peak. Sep 9 21:34:05.433244 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:45840.service: Deactivated successfully. Sep 9 21:34:05.436301 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:34:05.437936 systemd-logind[1479]: Removed session 7. Sep 9 21:34:09.185899 kubelet[2632]: I0909 21:34:09.185863 2632 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 21:34:09.186683 kubelet[2632]: I0909 21:34:09.186298 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 21:34:09.186733 containerd[1501]: time="2025-09-09T21:34:09.186148718Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 21:34:10.186691 systemd[1]: Created slice kubepods-besteffort-pod792d34a2_cb73_4339_b8b1_3957ca1b43c7.slice - libcontainer container kubepods-besteffort-pod792d34a2_cb73_4339_b8b1_3957ca1b43c7.slice. Sep 9 21:34:10.200446 systemd[1]: Created slice kubepods-burstable-podb6a5f7b5_b37f_40c2_a43c_05492bf3b3ce.slice - libcontainer container kubepods-burstable-podb6a5f7b5_b37f_40c2_a43c_05492bf3b3ce.slice. 
Sep 9 21:34:10.257565 kubelet[2632]: I0909 21:34:10.257519 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpbf\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-kube-api-access-5tpbf\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257607 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/792d34a2-cb73-4339-b8b1-3957ca1b43c7-lib-modules\") pod \"kube-proxy-bvp8s\" (UID: \"792d34a2-cb73-4339-b8b1-3957ca1b43c7\") " pod="kube-system/kube-proxy-bvp8s" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257633 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/792d34a2-cb73-4339-b8b1-3957ca1b43c7-xtables-lock\") pod \"kube-proxy-bvp8s\" (UID: \"792d34a2-cb73-4339-b8b1-3957ca1b43c7\") " pod="kube-system/kube-proxy-bvp8s" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257648 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-clustermesh-secrets\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257701 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-config-path\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257718 2632 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-run\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.257916 kubelet[2632]: I0909 21:34:10.257760 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-cgroup\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258061 kubelet[2632]: I0909 21:34:10.257778 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-xtables-lock\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258061 kubelet[2632]: I0909 21:34:10.257793 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-kernel\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258061 kubelet[2632]: I0909 21:34:10.257830 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwtf6\" (UniqueName: \"kubernetes.io/projected/792d34a2-cb73-4339-b8b1-3957ca1b43c7-kube-api-access-nwtf6\") pod \"kube-proxy-bvp8s\" (UID: \"792d34a2-cb73-4339-b8b1-3957ca1b43c7\") " pod="kube-system/kube-proxy-bvp8s" Sep 9 21:34:10.258061 kubelet[2632]: I0909 21:34:10.257849 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/792d34a2-cb73-4339-b8b1-3957ca1b43c7-kube-proxy\") pod \"kube-proxy-bvp8s\" (UID: \"792d34a2-cb73-4339-b8b1-3957ca1b43c7\") " pod="kube-system/kube-proxy-bvp8s" Sep 9 21:34:10.258061 kubelet[2632]: I0909 21:34:10.257867 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-etc-cni-netd\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.257895 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hubble-tls\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.257913 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-bpf-maps\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.257928 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hostproc\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.257944 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cni-path\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " 
pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.257961 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-lib-modules\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.258160 kubelet[2632]: I0909 21:34:10.258064 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-net\") pod \"cilium-fkjsk\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") " pod="kube-system/cilium-fkjsk" Sep 9 21:34:10.280342 kubelet[2632]: E0909 21:34:10.280317 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.341677 kubelet[2632]: E0909 21:34:10.341644 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.401430 systemd[1]: Created slice kubepods-besteffort-podeb6c5d13_ceac_46f5_9893_7cf9409819f2.slice - libcontainer container kubepods-besteffort-podeb6c5d13_ceac_46f5_9893_7cf9409819f2.slice. 
Sep 9 21:34:10.460610 kubelet[2632]: I0909 21:34:10.460574 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6c5d13-ceac-46f5-9893-7cf9409819f2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k2clw\" (UID: \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\") " pod="kube-system/cilium-operator-6c4d7847fc-k2clw" Sep 9 21:34:10.460811 kubelet[2632]: I0909 21:34:10.460759 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dlh\" (UniqueName: \"kubernetes.io/projected/eb6c5d13-ceac-46f5-9893-7cf9409819f2-kube-api-access-69dlh\") pod \"cilium-operator-6c4d7847fc-k2clw\" (UID: \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\") " pod="kube-system/cilium-operator-6c4d7847fc-k2clw" Sep 9 21:34:10.496777 kubelet[2632]: E0909 21:34:10.496731 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.497393 containerd[1501]: time="2025-09-09T21:34:10.497343517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvp8s,Uid:792d34a2-cb73-4339-b8b1-3957ca1b43c7,Namespace:kube-system,Attempt:0,}" Sep 9 21:34:10.506221 kubelet[2632]: E0909 21:34:10.506084 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.506785 containerd[1501]: time="2025-09-09T21:34:10.506753403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkjsk,Uid:b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce,Namespace:kube-system,Attempt:0,}" Sep 9 21:34:10.519681 containerd[1501]: time="2025-09-09T21:34:10.519636459Z" level=info msg="connecting to shim 81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183" 
address="unix:///run/containerd/s/7547982ebee925af3a2572f1353580b137e1d7fc8f1c023e8b1f6ceff42817db" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:34:10.525364 containerd[1501]: time="2025-09-09T21:34:10.525330687Z" level=info msg="connecting to shim f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:34:10.538714 systemd[1]: Started cri-containerd-81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183.scope - libcontainer container 81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183. Sep 9 21:34:10.541966 systemd[1]: Started cri-containerd-f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae.scope - libcontainer container f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae. Sep 9 21:34:10.567441 containerd[1501]: time="2025-09-09T21:34:10.567391002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkjsk,Uid:b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\"" Sep 9 21:34:10.569175 kubelet[2632]: E0909 21:34:10.569152 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.571049 containerd[1501]: time="2025-09-09T21:34:10.571012924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvp8s,Uid:792d34a2-cb73-4339-b8b1-3957ca1b43c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183\"" Sep 9 21:34:10.572006 containerd[1501]: time="2025-09-09T21:34:10.571752035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 
21:34:10.572078 kubelet[2632]: E0909 21:34:10.571866 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.575900 containerd[1501]: time="2025-09-09T21:34:10.575839951Z" level=info msg="CreateContainer within sandbox \"81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 21:34:10.586437 containerd[1501]: time="2025-09-09T21:34:10.586408694Z" level=info msg="Container 7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:34:10.592888 containerd[1501]: time="2025-09-09T21:34:10.592845877Z" level=info msg="CreateContainer within sandbox \"81b683f046f74880287f5f5009f555b378e9cecae493642641cadc29fde8a183\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093\"" Sep 9 21:34:10.593468 containerd[1501]: time="2025-09-09T21:34:10.593441163Z" level=info msg="StartContainer for \"7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093\"" Sep 9 21:34:10.595850 containerd[1501]: time="2025-09-09T21:34:10.595825426Z" level=info msg="connecting to shim 7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093" address="unix:///run/containerd/s/7547982ebee925af3a2572f1353580b137e1d7fc8f1c023e8b1f6ceff42817db" protocol=ttrpc version=3 Sep 9 21:34:10.614686 systemd[1]: Started cri-containerd-7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093.scope - libcontainer container 7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093. 
Sep 9 21:34:10.647593 containerd[1501]: time="2025-09-09T21:34:10.647546041Z" level=info msg="StartContainer for \"7bf63d3dda0d974ea5351cc6c737a0368610dc4eeabe882953d68d0675154093\" returns successfully" Sep 9 21:34:10.705576 kubelet[2632]: E0909 21:34:10.705529 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:34:10.707275 containerd[1501]: time="2025-09-09T21:34:10.707207488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k2clw,Uid:eb6c5d13-ceac-46f5-9893-7cf9409819f2,Namespace:kube-system,Attempt:0,}" Sep 9 21:34:10.723660 containerd[1501]: time="2025-09-09T21:34:10.723338566Z" level=info msg="connecting to shim 2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c" address="unix:///run/containerd/s/cc2fcf2178649b75b57fd174b37a0835af03d46a4ff229ef2bb4088a31f89593" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:34:10.744688 systemd[1]: Started cri-containerd-2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c.scope - libcontainer container 2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c. 
Sep 9 21:34:10.797580 containerd[1501]: time="2025-09-09T21:34:10.797524617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k2clw,Uid:eb6c5d13-ceac-46f5-9893-7cf9409819f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\""
Sep 9 21:34:10.798242 kubelet[2632]: E0909 21:34:10.798211 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:11.352748 kubelet[2632]: E0909 21:34:11.352668 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:11.361799 kubelet[2632]: I0909 21:34:11.361747 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bvp8s" podStartSLOduration=1.361735217 podStartE2EDuration="1.361735217s" podCreationTimestamp="2025-09-09 21:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:11.360693456 +0000 UTC m=+8.123630013" watchObservedRunningTime="2025-09-09 21:34:11.361735217 +0000 UTC m=+8.124671774"
Sep 9 21:34:12.578841 kubelet[2632]: E0909 21:34:12.578806 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:13.355034 kubelet[2632]: E0909 21:34:13.354974 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:14.003474 kubelet[2632]: E0909 21:34:14.003443 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:14.359629 kubelet[2632]: E0909 21:34:14.356433 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:14.359629 kubelet[2632]: E0909 21:34:14.356527 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:19.967267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727574687.mount: Deactivated successfully.
Sep 9 21:34:21.264444 containerd[1501]: time="2025-09-09T21:34:21.264142792Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:34:21.264805 containerd[1501]: time="2025-09-09T21:34:21.264565989Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 21:34:21.265565 containerd[1501]: time="2025-09-09T21:34:21.265484504Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:34:21.267459 containerd[1501]: time="2025-09-09T21:34:21.266992670Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.695208286s"
Sep 9 21:34:21.267459 containerd[1501]: time="2025-09-09T21:34:21.267022767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 21:34:21.272708 containerd[1501]: time="2025-09-09T21:34:21.272666652Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 21:34:21.281584 containerd[1501]: time="2025-09-09T21:34:21.281537548Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 21:34:21.288316 containerd[1501]: time="2025-09-09T21:34:21.288288054Z" level=info msg="Container 9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:21.292705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500420532.mount: Deactivated successfully.
Sep 9 21:34:21.295968 containerd[1501]: time="2025-09-09T21:34:21.295866784Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\""
Sep 9 21:34:21.297690 containerd[1501]: time="2025-09-09T21:34:21.297665833Z" level=info msg="StartContainer for \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\""
Sep 9 21:34:21.298526 containerd[1501]: time="2025-09-09T21:34:21.298493737Z" level=info msg="connecting to shim 9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" protocol=ttrpc version=3
Sep 9 21:34:21.339708 systemd[1]: Started cri-containerd-9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa.scope - libcontainer container 9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa.
Sep 9 21:34:21.376485 containerd[1501]: time="2025-09-09T21:34:21.376452341Z" level=info msg="StartContainer for \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" returns successfully"
Sep 9 21:34:21.389258 systemd[1]: cri-containerd-9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa.scope: Deactivated successfully.
Sep 9 21:34:21.401701 containerd[1501]: time="2025-09-09T21:34:21.401665922Z" level=info msg="received exit event container_id:\"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" id:\"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" pid:3064 exited_at:{seconds:1757453661 nanos:396433427}"
Sep 9 21:34:21.401807 containerd[1501]: time="2025-09-09T21:34:21.401769700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" id:\"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" pid:3064 exited_at:{seconds:1757453661 nanos:396433427}"
Sep 9 21:34:21.429852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa-rootfs.mount: Deactivated successfully.
Sep 9 21:34:22.378602 kubelet[2632]: E0909 21:34:22.378351 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:22.384156 containerd[1501]: time="2025-09-09T21:34:22.384109461Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 21:34:22.425237 containerd[1501]: time="2025-09-09T21:34:22.425192490Z" level=info msg="Container f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:22.427228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727212375.mount: Deactivated successfully.
Sep 9 21:34:22.429653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771452330.mount: Deactivated successfully.
Sep 9 21:34:22.431871 containerd[1501]: time="2025-09-09T21:34:22.431837394Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\""
Sep 9 21:34:22.432622 containerd[1501]: time="2025-09-09T21:34:22.432598840Z" level=info msg="StartContainer for \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\""
Sep 9 21:34:22.433567 containerd[1501]: time="2025-09-09T21:34:22.433525574Z" level=info msg="connecting to shim f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" protocol=ttrpc version=3
Sep 9 21:34:22.455710 systemd[1]: Started cri-containerd-f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d.scope - libcontainer container f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d.
Sep 9 21:34:22.507027 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 21:34:22.507796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:34:22.508310 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:34:22.509361 containerd[1501]: time="2025-09-09T21:34:22.509291940Z" level=info msg="StartContainer for \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" returns successfully"
Sep 9 21:34:22.509680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:34:22.512851 systemd[1]: cri-containerd-f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d.scope: Deactivated successfully.
Sep 9 21:34:22.513386 systemd[1]: cri-containerd-f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d.scope: Consumed 31ms CPU time, 4.5M memory peak, 1.3M read from disk, 4K written to disk.
Sep 9 21:34:22.520303 containerd[1501]: time="2025-09-09T21:34:22.520239978Z" level=info msg="received exit event container_id:\"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" id:\"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" pid:3115 exited_at:{seconds:1757453662 nanos:519998329}"
Sep 9 21:34:22.520400 containerd[1501]: time="2025-09-09T21:34:22.520341953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" id:\"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" pid:3115 exited_at:{seconds:1757453662 nanos:519998329}"
Sep 9 21:34:22.534007 update_engine[1481]: I20250909 21:34:22.533589 1481 update_attempter.cc:509] Updating boot flags...
Sep 9 21:34:22.544627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:34:23.380391 kubelet[2632]: E0909 21:34:23.380355 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:23.384378 containerd[1501]: time="2025-09-09T21:34:23.384343999Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 21:34:23.407520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d-rootfs.mount: Deactivated successfully.
Sep 9 21:34:23.417912 containerd[1501]: time="2025-09-09T21:34:23.416655635Z" level=info msg="Container ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:23.423949 containerd[1501]: time="2025-09-09T21:34:23.423906475Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\""
Sep 9 21:34:23.424449 containerd[1501]: time="2025-09-09T21:34:23.424393322Z" level=info msg="StartContainer for \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\""
Sep 9 21:34:23.425748 containerd[1501]: time="2025-09-09T21:34:23.425723797Z" level=info msg="connecting to shim ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" protocol=ttrpc version=3
Sep 9 21:34:23.447722 systemd[1]: Started cri-containerd-ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed.scope - libcontainer container ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed.
Sep 9 21:34:23.480236 containerd[1501]: time="2025-09-09T21:34:23.480204923Z" level=info msg="StartContainer for \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" returns successfully"
Sep 9 21:34:23.482740 systemd[1]: cri-containerd-ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed.scope: Deactivated successfully.
Sep 9 21:34:23.484004 containerd[1501]: time="2025-09-09T21:34:23.483882469Z" level=info msg="received exit event container_id:\"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" id:\"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" pid:3177 exited_at:{seconds:1757453663 nanos:483740477}"
Sep 9 21:34:23.484867 containerd[1501]: time="2025-09-09T21:34:23.484836994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" id:\"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" pid:3177 exited_at:{seconds:1757453663 nanos:483740477}"
Sep 9 21:34:23.501100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed-rootfs.mount: Deactivated successfully.
Sep 9 21:34:24.397561 kubelet[2632]: E0909 21:34:24.397509 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:24.403898 containerd[1501]: time="2025-09-09T21:34:24.403863743Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 21:34:24.413821 containerd[1501]: time="2025-09-09T21:34:24.413242875Z" level=info msg="Container 7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:24.422298 containerd[1501]: time="2025-09-09T21:34:24.422267756Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\""
Sep 9 21:34:24.422849 containerd[1501]: time="2025-09-09T21:34:24.422816021Z" level=info msg="StartContainer for \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\""
Sep 9 21:34:24.424516 containerd[1501]: time="2025-09-09T21:34:24.424488149Z" level=info msg="connecting to shim 7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" protocol=ttrpc version=3
Sep 9 21:34:24.452709 systemd[1]: Started cri-containerd-7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86.scope - libcontainer container 7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86.
Sep 9 21:34:24.488226 systemd[1]: cri-containerd-7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86.scope: Deactivated successfully.
Sep 9 21:34:24.488748 containerd[1501]: time="2025-09-09T21:34:24.488704780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" id:\"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" pid:3221 exited_at:{seconds:1757453664 nanos:488341445}"
Sep 9 21:34:24.489796 containerd[1501]: time="2025-09-09T21:34:24.489687255Z" level=info msg="received exit event container_id:\"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" id:\"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" pid:3221 exited_at:{seconds:1757453664 nanos:488341445}"
Sep 9 21:34:24.508708 containerd[1501]: time="2025-09-09T21:34:24.508672989Z" level=info msg="StartContainer for \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" returns successfully"
Sep 9 21:34:24.519378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86-rootfs.mount: Deactivated successfully.
Sep 9 21:34:24.838669 containerd[1501]: time="2025-09-09T21:34:24.838624910Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:34:24.839219 containerd[1501]: time="2025-09-09T21:34:24.839185101Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 21:34:24.840520 containerd[1501]: time="2025-09-09T21:34:24.840474284Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:34:24.841855 containerd[1501]: time="2025-09-09T21:34:24.841826097Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.569130108s"
Sep 9 21:34:24.841907 containerd[1501]: time="2025-09-09T21:34:24.841860634Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 21:34:24.847350 containerd[1501]: time="2025-09-09T21:34:24.847011002Z" level=info msg="CreateContainer within sandbox \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 21:34:24.855511 containerd[1501]: time="2025-09-09T21:34:24.854986817Z" level=info msg="Container db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:24.860047 containerd[1501]: time="2025-09-09T21:34:24.860012165Z" level=info msg="CreateContainer within sandbox \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\""
Sep 9 21:34:24.861659 containerd[1501]: time="2025-09-09T21:34:24.860594926Z" level=info msg="StartContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\""
Sep 9 21:34:24.861659 containerd[1501]: time="2025-09-09T21:34:24.861310993Z" level=info msg="connecting to shim db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6" address="unix:///run/containerd/s/cc2fcf2178649b75b57fd174b37a0835af03d46a4ff229ef2bb4088a31f89593" protocol=ttrpc version=3
Sep 9 21:34:24.881727 systemd[1]: Started cri-containerd-db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6.scope - libcontainer container db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6.
Sep 9 21:34:24.905301 containerd[1501]: time="2025-09-09T21:34:24.905265592Z" level=info msg="StartContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" returns successfully"
Sep 9 21:34:25.404071 kubelet[2632]: E0909 21:34:25.403928 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:25.410350 kubelet[2632]: E0909 21:34:25.410317 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:25.449903 containerd[1501]: time="2025-09-09T21:34:25.449839899Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 21:34:25.464926 containerd[1501]: time="2025-09-09T21:34:25.464169658Z" level=info msg="Container 75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:25.468688 kubelet[2632]: I0909 21:34:25.468615 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k2clw" podStartSLOduration=1.425025955 podStartE2EDuration="15.468598458s" podCreationTimestamp="2025-09-09 21:34:10 +0000 UTC" firstStartedPulling="2025-09-09 21:34:10.799071069 +0000 UTC m=+7.562007626" lastFinishedPulling="2025-09-09 21:34:24.842643572 +0000 UTC m=+21.605580129" observedRunningTime="2025-09-09 21:34:25.440592521 +0000 UTC m=+22.203529078" watchObservedRunningTime="2025-09-09 21:34:25.468598458 +0000 UTC m=+22.231535015"
Sep 9 21:34:25.475578 containerd[1501]: time="2025-09-09T21:34:25.475520245Z" level=info msg="CreateContainer within sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\""
Sep 9 21:34:25.477122 containerd[1501]: time="2025-09-09T21:34:25.477085646Z" level=info msg="StartContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\""
Sep 9 21:34:25.479274 containerd[1501]: time="2025-09-09T21:34:25.479235116Z" level=info msg="connecting to shim 75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56" address="unix:///run/containerd/s/a070876fa6af8a4004b8aabb8fdc006d33ef1bf192f2c7315b1aa93bee0202d0" protocol=ttrpc version=3
Sep 9 21:34:25.503731 systemd[1]: Started cri-containerd-75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56.scope - libcontainer container 75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56.
Sep 9 21:34:25.536566 containerd[1501]: time="2025-09-09T21:34:25.536465311Z" level=info msg="StartContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" returns successfully"
Sep 9 21:34:25.656040 containerd[1501]: time="2025-09-09T21:34:25.655931727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" id:\"0f3983e2b12c2d5a4bcaf9ca57f0400aafcde89384c24a1417530e83903d0ff5\" pid:3335 exited_at:{seconds:1757453665 nanos:655676770}"
Sep 9 21:34:25.673977 kubelet[2632]: I0909 21:34:25.673909 2632 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 21:34:25.735163 systemd[1]: Created slice kubepods-burstable-podf7d28186_74f2_4357_a41c_e822355e90f8.slice - libcontainer container kubepods-burstable-podf7d28186_74f2_4357_a41c_e822355e90f8.slice.
Sep 9 21:34:25.745612 systemd[1]: Created slice kubepods-burstable-podfa97c485_d7d0_48bf_9a21_7e318c07624a.slice - libcontainer container kubepods-burstable-podfa97c485_d7d0_48bf_9a21_7e318c07624a.slice.
Sep 9 21:34:25.766905 kubelet[2632]: I0909 21:34:25.766871 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbb9\" (UniqueName: \"kubernetes.io/projected/fa97c485-d7d0-48bf-9a21-7e318c07624a-kube-api-access-qmbb9\") pod \"coredns-674b8bbfcf-z2ccg\" (UID: \"fa97c485-d7d0-48bf-9a21-7e318c07624a\") " pod="kube-system/coredns-674b8bbfcf-z2ccg"
Sep 9 21:34:25.766905 kubelet[2632]: I0909 21:34:25.766913 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkb42\" (UniqueName: \"kubernetes.io/projected/f7d28186-74f2-4357-a41c-e822355e90f8-kube-api-access-xkb42\") pod \"coredns-674b8bbfcf-vhj9w\" (UID: \"f7d28186-74f2-4357-a41c-e822355e90f8\") " pod="kube-system/coredns-674b8bbfcf-vhj9w"
Sep 9 21:34:25.767046 kubelet[2632]: I0909 21:34:25.766936 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d28186-74f2-4357-a41c-e822355e90f8-config-volume\") pod \"coredns-674b8bbfcf-vhj9w\" (UID: \"f7d28186-74f2-4357-a41c-e822355e90f8\") " pod="kube-system/coredns-674b8bbfcf-vhj9w"
Sep 9 21:34:25.767046 kubelet[2632]: I0909 21:34:25.766959 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa97c485-d7d0-48bf-9a21-7e318c07624a-config-volume\") pod \"coredns-674b8bbfcf-z2ccg\" (UID: \"fa97c485-d7d0-48bf-9a21-7e318c07624a\") " pod="kube-system/coredns-674b8bbfcf-z2ccg"
Sep 9 21:34:26.043433 kubelet[2632]: E0909 21:34:26.043258 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:26.044157 containerd[1501]: time="2025-09-09T21:34:26.044112222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vhj9w,Uid:f7d28186-74f2-4357-a41c-e822355e90f8,Namespace:kube-system,Attempt:0,}"
Sep 9 21:34:26.049337 kubelet[2632]: E0909 21:34:26.048745 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:26.049418 containerd[1501]: time="2025-09-09T21:34:26.049117541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z2ccg,Uid:fa97c485-d7d0-48bf-9a21-7e318c07624a,Namespace:kube-system,Attempt:0,}"
Sep 9 21:34:26.422600 kubelet[2632]: E0909 21:34:26.422487 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:26.423100 kubelet[2632]: E0909 21:34:26.422962 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:26.438177 kubelet[2632]: I0909 21:34:26.438103 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fkjsk" podStartSLOduration=5.736025862 podStartE2EDuration="16.438089264s" podCreationTimestamp="2025-09-09 21:34:10 +0000 UTC" firstStartedPulling="2025-09-09 21:34:10.5704779 +0000 UTC m=+7.333414417" lastFinishedPulling="2025-09-09 21:34:21.272541262 +0000 UTC m=+18.035477819" observedRunningTime="2025-09-09 21:34:26.437892337 +0000 UTC m=+23.200828894" watchObservedRunningTime="2025-09-09 21:34:26.438089264 +0000 UTC m=+23.201025821"
Sep 9 21:34:27.423782 kubelet[2632]: E0909 21:34:27.423745 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:27.596671 systemd-networkd[1411]: cilium_host: Link UP
Sep 9 21:34:27.596772 systemd-networkd[1411]: cilium_net: Link UP
Sep 9 21:34:27.596877 systemd-networkd[1411]: cilium_net: Gained carrier
Sep 9 21:34:27.597006 systemd-networkd[1411]: cilium_host: Gained carrier
Sep 9 21:34:27.669900 systemd-networkd[1411]: cilium_vxlan: Link UP
Sep 9 21:34:27.670040 systemd-networkd[1411]: cilium_vxlan: Gained carrier
Sep 9 21:34:27.924577 kernel: NET: Registered PF_ALG protocol family
Sep 9 21:34:28.348670 systemd-networkd[1411]: cilium_host: Gained IPv6LL
Sep 9 21:34:28.431320 kubelet[2632]: E0909 21:34:28.431276 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:28.476696 systemd-networkd[1411]: cilium_net: Gained IPv6LL
Sep 9 21:34:28.497296 systemd-networkd[1411]: lxc_health: Link UP
Sep 9 21:34:28.506094 systemd-networkd[1411]: lxc_health: Gained carrier
Sep 9 21:34:29.096216 systemd-networkd[1411]: lxc3f5d0ebcff89: Link UP
Sep 9 21:34:29.098573 kernel: eth0: renamed from tmp5ce12
Sep 9 21:34:29.099211 systemd-networkd[1411]: lxc3f5d0ebcff89: Gained carrier
Sep 9 21:34:29.108660 systemd-networkd[1411]: lxc7d66f73a23a3: Link UP
Sep 9 21:34:29.110660 kernel: eth0: renamed from tmp4a9ca
Sep 9 21:34:29.113360 systemd-networkd[1411]: lxc7d66f73a23a3: Gained carrier
Sep 9 21:34:29.429850 kubelet[2632]: E0909 21:34:29.429738 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:29.438330 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL
Sep 9 21:34:29.821094 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Sep 9 21:34:30.333219 systemd-networkd[1411]: lxc7d66f73a23a3: Gained IPv6LL
Sep 9 21:34:30.431759 kubelet[2632]: E0909 21:34:30.431702 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:30.464796 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:41700.service - OpenSSH per-connection server daemon (10.0.0.1:41700).
Sep 9 21:34:30.526217 sshd[3814]: Accepted publickey for core from 10.0.0.1 port 41700 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:30.527458 sshd-session[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:30.533603 systemd-logind[1479]: New session 8 of user core.
Sep 9 21:34:30.543701 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 21:34:30.666632 sshd[3817]: Connection closed by 10.0.0.1 port 41700
Sep 9 21:34:30.666890 sshd-session[3814]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:30.670744 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit.
Sep 9 21:34:30.670903 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:41700.service: Deactivated successfully.
Sep 9 21:34:30.674103 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 21:34:30.675370 systemd-logind[1479]: Removed session 8.
Sep 9 21:34:30.782667 systemd-networkd[1411]: lxc3f5d0ebcff89: Gained IPv6LL
Sep 9 21:34:31.433265 kubelet[2632]: E0909 21:34:31.433215 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:32.600046 containerd[1501]: time="2025-09-09T21:34:32.599986549Z" level=info msg="connecting to shim 4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2" address="unix:///run/containerd/s/b1af23c44d1a2de8b78a6956f208b104b36e6fea7797f4d1b0d94a5195a8146b" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:34:32.608510 containerd[1501]: time="2025-09-09T21:34:32.608473406Z" level=info msg="connecting to shim 5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225" address="unix:///run/containerd/s/e7c79b7b27fdd2af0d90f88c026a21bd4a62cd00f22446b38c1ce7800e8c20e4" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:34:32.622717 systemd[1]: Started cri-containerd-4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2.scope - libcontainer container 4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2.
Sep 9 21:34:32.626248 systemd[1]: Started cri-containerd-5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225.scope - libcontainer container 5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225.
Sep 9 21:34:32.636833 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 21:34:32.640872 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 21:34:32.662096 containerd[1501]: time="2025-09-09T21:34:32.662028235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vhj9w,Uid:f7d28186-74f2-4357-a41c-e822355e90f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2\""
Sep 9 21:34:32.662855 kubelet[2632]: E0909 21:34:32.662733 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:32.666972 containerd[1501]: time="2025-09-09T21:34:32.666933086Z" level=info msg="CreateContainer within sandbox \"4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 21:34:32.667562 containerd[1501]: time="2025-09-09T21:34:32.667491874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z2ccg,Uid:fa97c485-d7d0-48bf-9a21-7e318c07624a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225\""
Sep 9 21:34:32.668750 kubelet[2632]: E0909 21:34:32.668727 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:32.672425 containerd[1501]: time="2025-09-09T21:34:32.672385882Z" level=info msg="CreateContainer within sandbox \"5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 21:34:32.680074 containerd[1501]: time="2025-09-09T21:34:32.679506319Z" level=info msg="Container 9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:32.682167 containerd[1501]: time="2025-09-09T21:34:32.682138725Z" level=info msg="Container b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:34:32.685273 containerd[1501]: time="2025-09-09T21:34:32.685227085Z" level=info msg="CreateContainer within sandbox \"4a9cae4ab3f6f0c4aa2591c1a1ac69cb4f17d49faebdc5aa5d9cd11e440ad8e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126\""
Sep 9 21:34:32.687131 containerd[1501]: time="2025-09-09T21:34:32.687084510Z" level=info msg="StartContainer for \"9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126\""
Sep 9 21:34:32.688731 containerd[1501]: time="2025-09-09T21:34:32.688696173Z" level=info msg="connecting to shim 9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126" address="unix:///run/containerd/s/b1af23c44d1a2de8b78a6956f208b104b36e6fea7797f4d1b0d94a5195a8146b" protocol=ttrpc version=3
Sep 9 21:34:32.690743 containerd[1501]: time="2025-09-09T21:34:32.690706170Z" level=info msg="CreateContainer within sandbox \"5ce124e9e126eca96584c434922a743d2eb70f945f97c4abd46ae353ece80225\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415\""
Sep 9 21:34:32.691315 containerd[1501]: time="2025-09-09T21:34:32.691114707Z" level=info msg="StartContainer for \"b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415\""
Sep 9 21:34:32.692476 containerd[1501]: time="2025-09-09T21:34:32.692399940Z" level=info msg="connecting to shim b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415" address="unix:///run/containerd/s/e7c79b7b27fdd2af0d90f88c026a21bd4a62cd00f22446b38c1ce7800e8c20e4" protocol=ttrpc version=3
Sep 9 21:34:32.708709 systemd[1]: Started cri-containerd-9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126.scope - libcontainer container 9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126.
Sep 9 21:34:32.711578 systemd[1]: Started cri-containerd-b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415.scope - libcontainer container b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415.
Sep 9 21:34:32.737955 containerd[1501]: time="2025-09-09T21:34:32.737871528Z" level=info msg="StartContainer for \"9bab2723c0698c856f4d0d8d952adf131688d58b0ac24577f555755418457126\" returns successfully"
Sep 9 21:34:32.743451 containerd[1501]: time="2025-09-09T21:34:32.743417595Z" level=info msg="StartContainer for \"b6ea15eff76c35db30e24c265c7ea588339307a67a0b80aec742c83101f04415\" returns successfully"
Sep 9 21:34:33.448800 kubelet[2632]: E0909 21:34:33.448741 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:33.451585 kubelet[2632]: E0909 21:34:33.451407 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:33.471435 kubelet[2632]: I0909 21:34:33.471279 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vhj9w" podStartSLOduration=23.471262433 podStartE2EDuration="23.471262433s" podCreationTimestamp="2025-09-09 21:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:33.470089534 +0000 UTC m=+30.233026091" watchObservedRunningTime="2025-09-09 21:34:33.471262433 +0000 UTC m=+30.234199030"
Sep 9 21:34:33.472362 kubelet[2632]: I0909 21:34:33.472040 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z2ccg" podStartSLOduration=23.471895198 podStartE2EDuration="23.471895198s" podCreationTimestamp="2025-09-09 21:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:34:33.460085382 +0000 UTC m=+30.223021939" watchObservedRunningTime="2025-09-09 21:34:33.471895198 +0000 UTC m=+30.234831795"
Sep 9 21:34:33.583343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725424294.mount: Deactivated successfully.
Sep 9 21:34:34.453815 kubelet[2632]: E0909 21:34:34.453732 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:34.454186 kubelet[2632]: E0909 21:34:34.453840 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:35.455278 kubelet[2632]: E0909 21:34:35.455240 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:35.455643 kubelet[2632]: E0909 21:34:35.455340 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:34:35.680625 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:41704.service - OpenSSH per-connection server daemon (10.0.0.1:41704).
Sep 9 21:34:35.741322 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 41704 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:35.742301 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:35.746611 systemd-logind[1479]: New session 9 of user core.
Sep 9 21:34:35.752692 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 21:34:35.871869 sshd[4011]: Connection closed by 10.0.0.1 port 41704
Sep 9 21:34:35.872351 sshd-session[4008]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:35.875651 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:41704.service: Deactivated successfully.
Sep 9 21:34:35.878919 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 21:34:35.879518 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit.
Sep 9 21:34:35.881121 systemd-logind[1479]: Removed session 9.
Sep 9 21:34:40.884572 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:47148.service - OpenSSH per-connection server daemon (10.0.0.1:47148).
Sep 9 21:34:40.943311 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 47148 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:40.944767 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:40.948615 systemd-logind[1479]: New session 10 of user core.
Sep 9 21:34:40.959696 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 21:34:41.071974 sshd[4032]: Connection closed by 10.0.0.1 port 47148
Sep 9 21:34:41.072430 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:41.075683 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:47148.service: Deactivated successfully.
Sep 9 21:34:41.077481 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 21:34:41.078210 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit.
Sep 9 21:34:41.079354 systemd-logind[1479]: Removed session 10.
Sep 9 21:34:46.091506 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:47160.service - OpenSSH per-connection server daemon (10.0.0.1:47160).
Sep 9 21:34:46.146681 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 47160 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:46.147778 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:46.151510 systemd-logind[1479]: New session 11 of user core.
Sep 9 21:34:46.161687 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 21:34:46.267404 sshd[4050]: Connection closed by 10.0.0.1 port 47160
Sep 9 21:34:46.267764 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:46.282688 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:47160.service: Deactivated successfully.
Sep 9 21:34:46.286008 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 21:34:46.286817 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit.
Sep 9 21:34:46.288975 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:47174.service - OpenSSH per-connection server daemon (10.0.0.1:47174).
Sep 9 21:34:46.289504 systemd-logind[1479]: Removed session 11.
Sep 9 21:34:46.348854 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 47174 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:46.349926 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:46.353912 systemd-logind[1479]: New session 12 of user core.
Sep 9 21:34:46.362723 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 21:34:46.500701 sshd[4067]: Connection closed by 10.0.0.1 port 47174
Sep 9 21:34:46.501070 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:46.509908 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:47174.service: Deactivated successfully.
Sep 9 21:34:46.512493 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 21:34:46.514844 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit.
Sep 9 21:34:46.519854 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:47178.service - OpenSSH per-connection server daemon (10.0.0.1:47178).
Sep 9 21:34:46.521863 systemd-logind[1479]: Removed session 12.
Sep 9 21:34:46.578665 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 47178 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:46.579755 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:46.583825 systemd-logind[1479]: New session 13 of user core.
Sep 9 21:34:46.598707 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 21:34:46.703820 sshd[4081]: Connection closed by 10.0.0.1 port 47178
Sep 9 21:34:46.704318 sshd-session[4078]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:46.707863 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:47178.service: Deactivated successfully.
Sep 9 21:34:46.710444 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 21:34:46.711635 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit.
Sep 9 21:34:46.713056 systemd-logind[1479]: Removed session 13.
Sep 9 21:34:51.719632 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552).
Sep 9 21:34:51.781843 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:51.782901 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:51.786273 systemd-logind[1479]: New session 14 of user core.
Sep 9 21:34:51.801712 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 21:34:51.908587 sshd[4098]: Connection closed by 10.0.0.1 port 60552
Sep 9 21:34:51.908888 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:51.912058 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:60552.service: Deactivated successfully.
Sep 9 21:34:51.913937 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 21:34:51.914704 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit.
Sep 9 21:34:51.915921 systemd-logind[1479]: Removed session 14.
Sep 9 21:34:56.925464 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:60564.service - OpenSSH per-connection server daemon (10.0.0.1:60564).
Sep 9 21:34:56.988710 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 60564 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:56.989777 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:56.993273 systemd-logind[1479]: New session 15 of user core.
Sep 9 21:34:57.003685 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 21:34:57.111364 sshd[4114]: Connection closed by 10.0.0.1 port 60564
Sep 9 21:34:57.110283 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:57.122585 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:60564.service: Deactivated successfully.
Sep 9 21:34:57.124773 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 21:34:57.125511 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit.
Sep 9 21:34:57.127586 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:60568.service - OpenSSH per-connection server daemon (10.0.0.1:60568).
Sep 9 21:34:57.128348 systemd-logind[1479]: Removed session 15.
Sep 9 21:34:57.180924 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 60568 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:57.181818 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:57.185882 systemd-logind[1479]: New session 16 of user core.
Sep 9 21:34:57.200680 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 21:34:57.367340 sshd[4130]: Connection closed by 10.0.0.1 port 60568
Sep 9 21:34:57.367756 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:57.379495 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:60568.service: Deactivated successfully.
Sep 9 21:34:57.382971 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 21:34:57.383648 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit.
Sep 9 21:34:57.385779 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572).
Sep 9 21:34:57.386638 systemd-logind[1479]: Removed session 16.
Sep 9 21:34:57.441322 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:57.442356 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:57.445978 systemd-logind[1479]: New session 17 of user core.
Sep 9 21:34:57.460675 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 21:34:58.015811 sshd[4145]: Connection closed by 10.0.0.1 port 60572
Sep 9 21:34:58.016452 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:58.029375 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:60572.service: Deactivated successfully.
Sep 9 21:34:58.031392 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 21:34:58.034640 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Sep 9 21:34:58.037980 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:60582.service - OpenSSH per-connection server daemon (10.0.0.1:60582).
Sep 9 21:34:58.041843 systemd-logind[1479]: Removed session 17.
Sep 9 21:34:58.091724 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 60582 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:58.092827 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:58.096573 systemd-logind[1479]: New session 18 of user core.
Sep 9 21:34:58.102736 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 21:34:58.324511 sshd[4168]: Connection closed by 10.0.0.1 port 60582
Sep 9 21:34:58.325602 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:58.334993 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:60582.service: Deactivated successfully.
Sep 9 21:34:58.336806 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 21:34:58.338055 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Sep 9 21:34:58.340830 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:60592.service - OpenSSH per-connection server daemon (10.0.0.1:60592).
Sep 9 21:34:58.341698 systemd-logind[1479]: Removed session 18.
Sep 9 21:34:58.391453 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 60592 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:34:58.392711 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:34:58.396585 systemd-logind[1479]: New session 19 of user core.
Sep 9 21:34:58.408748 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 21:34:58.520674 sshd[4182]: Connection closed by 10.0.0.1 port 60592
Sep 9 21:34:58.521167 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Sep 9 21:34:58.525015 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:60592.service: Deactivated successfully.
Sep 9 21:34:58.527929 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 21:34:58.528588 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Sep 9 21:34:58.529555 systemd-logind[1479]: Removed session 19.
Sep 9 21:35:03.532404 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116).
Sep 9 21:35:03.590865 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:35:03.591890 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:35:03.595252 systemd-logind[1479]: New session 20 of user core.
Sep 9 21:35:03.604693 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 21:35:03.711668 sshd[4204]: Connection closed by 10.0.0.1 port 36116
Sep 9 21:35:03.711532 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Sep 9 21:35:03.714873 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:36116.service: Deactivated successfully.
Sep 9 21:35:03.716432 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 21:35:03.718960 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit.
Sep 9 21:35:03.719957 systemd-logind[1479]: Removed session 20.
Sep 9 21:35:08.728507 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:36120.service - OpenSSH per-connection server daemon (10.0.0.1:36120).
Sep 9 21:35:08.782511 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:35:08.783511 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:35:08.786979 systemd-logind[1479]: New session 21 of user core.
Sep 9 21:35:08.799674 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 21:35:08.903982 sshd[4220]: Connection closed by 10.0.0.1 port 36120
Sep 9 21:35:08.904861 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Sep 9 21:35:08.911476 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:36120.service: Deactivated successfully.
Sep 9 21:35:08.912916 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 21:35:08.913546 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Sep 9 21:35:08.915325 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:36136.service - OpenSSH per-connection server daemon (10.0.0.1:36136).
Sep 9 21:35:08.917201 systemd-logind[1479]: Removed session 21.
Sep 9 21:35:08.969401 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 36136 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:35:08.970466 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:35:08.974446 systemd-logind[1479]: New session 22 of user core.
Sep 9 21:35:08.984690 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 21:35:12.069922 containerd[1501]: time="2025-09-09T21:35:12.069393815Z" level=info msg="StopContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" with timeout 30 (s)"
Sep 9 21:35:12.071711 containerd[1501]: time="2025-09-09T21:35:12.070869416Z" level=info msg="Stop container \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" with signal terminated"
Sep 9 21:35:12.084143 systemd[1]: cri-containerd-db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6.scope: Deactivated successfully.
Sep 9 21:35:12.087079 containerd[1501]: time="2025-09-09T21:35:12.087042258Z" level=info msg="received exit event container_id:\"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" id:\"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" pid:3266 exited_at:{seconds:1757453712 nanos:86788423}"
Sep 9 21:35:12.087424 containerd[1501]: time="2025-09-09T21:35:12.087394025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" id:\"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" pid:3266 exited_at:{seconds:1757453712 nanos:86788423}"
Sep 9 21:35:12.103676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6-rootfs.mount: Deactivated successfully.
Sep 9 21:35:12.127039 containerd[1501]: time="2025-09-09T21:35:12.126990015Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 21:35:12.127167 containerd[1501]: time="2025-09-09T21:35:12.127076747Z" level=info msg="StopContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" returns successfully"
Sep 9 21:35:12.127617 containerd[1501]: time="2025-09-09T21:35:12.127590337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" id:\"86b2b2255e491fa633f63b42011f03cd33f054faa409abdc54aef4dd389221a9\" pid:4274 exited_at:{seconds:1757453712 nanos:115973315}"
Sep 9 21:35:12.141343 containerd[1501]: time="2025-09-09T21:35:12.141264958Z" level=info msg="StopPodSandbox for \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\""
Sep 9 21:35:12.145050 containerd[1501]: time="2025-09-09T21:35:12.145006227Z" level=info msg="StopContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" with timeout 2 (s)"
Sep 9 21:35:12.145601 containerd[1501]: time="2025-09-09T21:35:12.145474851Z" level=info msg="Stop container \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" with signal terminated"
Sep 9 21:35:12.151764 systemd-networkd[1411]: lxc_health: Link DOWN
Sep 9 21:35:12.151773 systemd-networkd[1411]: lxc_health: Lost carrier
Sep 9 21:35:12.167094 containerd[1501]: time="2025-09-09T21:35:12.167026145Z" level=info msg="Container to stop \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.168300 systemd[1]: cri-containerd-75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56.scope: Deactivated successfully.
Sep 9 21:35:12.168666 systemd[1]: cri-containerd-75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56.scope: Consumed 6.079s CPU time, 123M memory peak, 148K read from disk, 12.9M written to disk.
Sep 9 21:35:12.169298 containerd[1501]: time="2025-09-09T21:35:12.169263449Z" level=info msg="received exit event container_id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" pid:3304 exited_at:{seconds:1757453712 nanos:169046300}"
Sep 9 21:35:12.169574 containerd[1501]: time="2025-09-09T21:35:12.169531166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" id:\"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" pid:3304 exited_at:{seconds:1757453712 nanos:169046300}"
Sep 9 21:35:12.175556 systemd[1]: cri-containerd-2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c.scope: Deactivated successfully.
Sep 9 21:35:12.178984 containerd[1501]: time="2025-09-09T21:35:12.178949608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" id:\"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" pid:2881 exit_status:137 exited_at:{seconds:1757453712 nanos:178421976}"
Sep 9 21:35:12.188852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56-rootfs.mount: Deactivated successfully.
Sep 9 21:35:12.196790 containerd[1501]: time="2025-09-09T21:35:12.196754391Z" level=info msg="StopContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" returns successfully"
Sep 9 21:35:12.197174 containerd[1501]: time="2025-09-09T21:35:12.197149805Z" level=info msg="StopPodSandbox for \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\""
Sep 9 21:35:12.197218 containerd[1501]: time="2025-09-09T21:35:12.197206293Z" level=info msg="Container to stop \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.197242 containerd[1501]: time="2025-09-09T21:35:12.197217214Z" level=info msg="Container to stop \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.197242 containerd[1501]: time="2025-09-09T21:35:12.197225495Z" level=info msg="Container to stop \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.197242 containerd[1501]: time="2025-09-09T21:35:12.197233776Z" level=info msg="Container to stop \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.197242 containerd[1501]: time="2025-09-09T21:35:12.197241417Z" level=info msg="Container to stop \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:35:12.206040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c-rootfs.mount: Deactivated successfully.
Sep 9 21:35:12.207326 systemd[1]: cri-containerd-f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae.scope: Deactivated successfully.
Sep 9 21:35:12.208811 containerd[1501]: time="2025-09-09T21:35:12.208777268Z" level=info msg="shim disconnected" id=2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c namespace=k8s.io
Sep 9 21:35:12.208899 containerd[1501]: time="2025-09-09T21:35:12.208810392Z" level=warning msg="cleaning up after shim disconnected" id=2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c namespace=k8s.io
Sep 9 21:35:12.208899 containerd[1501]: time="2025-09-09T21:35:12.208841396Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 21:35:12.232831 containerd[1501]: time="2025-09-09T21:35:12.229677352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" id:\"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" pid:2792 exit_status:137 exited_at:{seconds:1757453712 nanos:207222776}"
Sep 9 21:35:12.231027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae-rootfs.mount: Deactivated successfully.
Sep 9 21:35:12.233064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c-shm.mount: Deactivated successfully.
Sep 9 21:35:12.234578 containerd[1501]: time="2025-09-09T21:35:12.234495888Z" level=info msg="TearDown network for sandbox \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" successfully"
Sep 9 21:35:12.234578 containerd[1501]: time="2025-09-09T21:35:12.234527573Z" level=info msg="StopPodSandbox for \"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" returns successfully"
Sep 9 21:35:12.242575 containerd[1501]: time="2025-09-09T21:35:12.242041635Z" level=info msg="received exit event sandbox_id:\"2f87beaf1defb2dec827561b1b3ceb55e4a7c82147338a4bbd903358a4648c4c\" exit_status:137 exited_at:{seconds:1757453712 nanos:178421976}"
Sep 9 21:35:12.264556 containerd[1501]: time="2025-09-09T21:35:12.264522655Z" level=info msg="shim disconnected" id=f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae namespace=k8s.io
Sep 9 21:35:12.264737 containerd[1501]: time="2025-09-09T21:35:12.264697359Z" level=info msg="received exit event sandbox_id:\"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" exit_status:137 exited_at:{seconds:1757453712 nanos:207222776}"
Sep 9 21:35:12.265564 containerd[1501]: time="2025-09-09T21:35:12.265515390Z" level=info msg="TearDown network for sandbox \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" successfully"
Sep 9 21:35:12.265602 containerd[1501]: time="2025-09-09T21:35:12.265543994Z" level=info msg="StopPodSandbox for \"f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae\" returns successfully"
Sep 9 21:35:12.265718 containerd[1501]: time="2025-09-09T21:35:12.264719362Z" level=warning msg="cleaning up after shim disconnected" id=f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae namespace=k8s.io
Sep 9 21:35:12.265769 containerd[1501]: time="2025-09-09T21:35:12.265717258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 21:35:12.332180 kubelet[2632]: I0909 21:35:12.332058 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cni-path\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.332180 kubelet[2632]: I0909 21:35:12.332117 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpbf\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-kube-api-access-5tpbf\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.332180 kubelet[2632]: I0909 21:35:12.332141 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-clustermesh-secrets\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.332600 kubelet[2632]: I0909 21:35:12.332193 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-config-path\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.332600 kubelet[2632]: I0909 21:35:12.332210 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-run\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.332600 kubelet[2632]: I0909 21:35:12.332226 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-net\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.333793 kubelet[2632]: I0909 21:35:12.333756 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 21:35:12.333834 kubelet[2632]: I0909 21:35:12.333814 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 21:35:12.334050 kubelet[2632]: I0909 21:35:12.333997 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 21:35:12.334145 kubelet[2632]: I0909 21:35:12.334135 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69dlh\" (UniqueName: \"kubernetes.io/projected/eb6c5d13-ceac-46f5-9893-7cf9409819f2-kube-api-access-69dlh\") pod \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\" (UID: \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\") "
Sep 9 21:35:12.334170 kubelet[2632]: I0909 21:35:12.334163 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6c5d13-ceac-46f5-9893-7cf9409819f2-cilium-config-path\") pod \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\" (UID: \"eb6c5d13-ceac-46f5-9893-7cf9409819f2\") "
Sep 9 21:35:12.334192 kubelet[2632]: I0909 21:35:12.334179 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-xtables-lock\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334212 kubelet[2632]: I0909 21:35:12.334193 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-lib-modules\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334212 kubelet[2632]: I0909 21:35:12.334208 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-cgroup\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334254 kubelet[2632]: I0909 21:35:12.334220 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-etc-cni-netd\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334254 kubelet[2632]: I0909 21:35:12.334236 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hubble-tls\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334254 kubelet[2632]: I0909 21:35:12.334250 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hostproc\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334314 kubelet[2632]: I0909 21:35:12.334265 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-kernel\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334314 kubelet[2632]: I0909 21:35:12.334280 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-bpf-maps\") pod \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\" (UID: \"b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce\") "
Sep 9 21:35:12.334356 kubelet[2632]: I0909 21:35:12.334304 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 21:35:12.334356 kubelet[2632]: I0909 21:35:12.334317 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 9 21:35:12.334356 kubelet[2632]: I0909 21:35:12.334335 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 21:35:12.334356 kubelet[2632]: I0909 21:35:12.334353 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 21:35:12.334459 kubelet[2632]: I0909 21:35:12.334440 2632 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.334484 kubelet[2632]: I0909 21:35:12.334466 2632 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.345316 kubelet[2632]: I0909 21:35:12.344237 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 21:35:12.345316 kubelet[2632]: I0909 21:35:12.344296 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 21:35:12.345316 kubelet[2632]: I0909 21:35:12.344312 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 21:35:12.345612 kubelet[2632]: I0909 21:35:12.345576 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 21:35:12.346119 kubelet[2632]: I0909 21:35:12.346088 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 21:35:12.346288 kubelet[2632]: I0909 21:35:12.346111 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 21:35:12.347607 kubelet[2632]: I0909 21:35:12.347577 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-kube-api-access-5tpbf" (OuterVolumeSpecName: "kube-api-access-5tpbf") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "kube-api-access-5tpbf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 21:35:12.347684 kubelet[2632]: I0909 21:35:12.347647 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6c5d13-ceac-46f5-9893-7cf9409819f2-kube-api-access-69dlh" (OuterVolumeSpecName: "kube-api-access-69dlh") pod "eb6c5d13-ceac-46f5-9893-7cf9409819f2" (UID: "eb6c5d13-ceac-46f5-9893-7cf9409819f2"). InnerVolumeSpecName "kube-api-access-69dlh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 21:35:12.348131 kubelet[2632]: I0909 21:35:12.348105 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6c5d13-ceac-46f5-9893-7cf9409819f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb6c5d13-ceac-46f5-9893-7cf9409819f2" (UID: "eb6c5d13-ceac-46f5-9893-7cf9409819f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 21:35:12.349182 kubelet[2632]: I0909 21:35:12.349158 2632 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" (UID: "b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 21:35:12.435510 kubelet[2632]: I0909 21:35:12.435480 2632 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435510 kubelet[2632]: I0909 21:35:12.435510 2632 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435520 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435528 2632 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435536 2632 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435544 2632 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435570 2632 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435580 2632 
reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435587 2632 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tpbf\" (UniqueName: \"kubernetes.io/projected/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-kube-api-access-5tpbf\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435627 kubelet[2632]: I0909 21:35:12.435595 2632 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435789 kubelet[2632]: I0909 21:35:12.435603 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435789 kubelet[2632]: I0909 21:35:12.435611 2632 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69dlh\" (UniqueName: \"kubernetes.io/projected/eb6c5d13-ceac-46f5-9893-7cf9409819f2-kube-api-access-69dlh\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.435789 kubelet[2632]: I0909 21:35:12.435619 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6c5d13-ceac-46f5-9893-7cf9409819f2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:35:12.528923 kubelet[2632]: I0909 21:35:12.528886 2632 scope.go:117] "RemoveContainer" containerID="db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6" Sep 9 21:35:12.530479 containerd[1501]: time="2025-09-09T21:35:12.530437610Z" level=info msg="RemoveContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\"" Sep 9 21:35:12.530602 systemd[1]: Removed 
slice kubepods-besteffort-podeb6c5d13_ceac_46f5_9893_7cf9409819f2.slice - libcontainer container kubepods-besteffort-podeb6c5d13_ceac_46f5_9893_7cf9409819f2.slice. Sep 9 21:35:12.532767 systemd[1]: Removed slice kubepods-burstable-podb6a5f7b5_b37f_40c2_a43c_05492bf3b3ce.slice - libcontainer container kubepods-burstable-podb6a5f7b5_b37f_40c2_a43c_05492bf3b3ce.slice. Sep 9 21:35:12.532845 systemd[1]: kubepods-burstable-podb6a5f7b5_b37f_40c2_a43c_05492bf3b3ce.slice: Consumed 6.171s CPU time, 123.3M memory peak, 1.5M read from disk, 12.9M written to disk. Sep 9 21:35:12.544483 containerd[1501]: time="2025-09-09T21:35:12.544453838Z" level=info msg="RemoveContainer for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" returns successfully" Sep 9 21:35:12.544717 kubelet[2632]: I0909 21:35:12.544691 2632 scope.go:117] "RemoveContainer" containerID="db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6" Sep 9 21:35:12.544946 containerd[1501]: time="2025-09-09T21:35:12.544909260Z" level=error msg="ContainerStatus for \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\": not found" Sep 9 21:35:12.551316 kubelet[2632]: E0909 21:35:12.551277 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\": not found" containerID="db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6" Sep 9 21:35:12.551400 kubelet[2632]: I0909 21:35:12.551325 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6"} err="failed to get container status \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"db66413ee74b30c43f7a1e96516c30187b51e2e452699923f86ea943492129a6\": not found" Sep 9 21:35:12.551400 kubelet[2632]: I0909 21:35:12.551372 2632 scope.go:117] "RemoveContainer" containerID="75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56" Sep 9 21:35:12.554517 containerd[1501]: time="2025-09-09T21:35:12.554490804Z" level=info msg="RemoveContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\"" Sep 9 21:35:12.558765 containerd[1501]: time="2025-09-09T21:35:12.558725781Z" level=info msg="RemoveContainer for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" returns successfully" Sep 9 21:35:12.558933 kubelet[2632]: I0909 21:35:12.558911 2632 scope.go:117] "RemoveContainer" containerID="7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86" Sep 9 21:35:12.560101 containerd[1501]: time="2025-09-09T21:35:12.560074004Z" level=info msg="RemoveContainer for \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\"" Sep 9 21:35:12.563761 containerd[1501]: time="2025-09-09T21:35:12.563683496Z" level=info msg="RemoveContainer for \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" returns successfully" Sep 9 21:35:12.563825 kubelet[2632]: I0909 21:35:12.563809 2632 scope.go:117] "RemoveContainer" containerID="ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed" Sep 9 21:35:12.565836 containerd[1501]: time="2025-09-09T21:35:12.565815906Z" level=info msg="RemoveContainer for \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\"" Sep 9 21:35:12.569121 containerd[1501]: time="2025-09-09T21:35:12.569054267Z" level=info msg="RemoveContainer for \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" returns successfully" Sep 9 21:35:12.569253 kubelet[2632]: I0909 21:35:12.569203 2632 scope.go:117] "RemoveContainer" 
containerID="f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d" Sep 9 21:35:12.570547 containerd[1501]: time="2025-09-09T21:35:12.570525707Z" level=info msg="RemoveContainer for \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\"" Sep 9 21:35:12.573415 containerd[1501]: time="2025-09-09T21:35:12.573384216Z" level=info msg="RemoveContainer for \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" returns successfully" Sep 9 21:35:12.573547 kubelet[2632]: I0909 21:35:12.573527 2632 scope.go:117] "RemoveContainer" containerID="9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa" Sep 9 21:35:12.574853 containerd[1501]: time="2025-09-09T21:35:12.574801609Z" level=info msg="RemoveContainer for \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\"" Sep 9 21:35:12.577267 containerd[1501]: time="2025-09-09T21:35:12.577236781Z" level=info msg="RemoveContainer for \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" returns successfully" Sep 9 21:35:12.577396 kubelet[2632]: I0909 21:35:12.577376 2632 scope.go:117] "RemoveContainer" containerID="75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56" Sep 9 21:35:12.577673 containerd[1501]: time="2025-09-09T21:35:12.577643716Z" level=error msg="ContainerStatus for \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\": not found" Sep 9 21:35:12.577770 kubelet[2632]: E0909 21:35:12.577752 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\": not found" containerID="75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56" Sep 9 21:35:12.577823 kubelet[2632]: I0909 21:35:12.577775 2632 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56"} err="failed to get container status \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\": rpc error: code = NotFound desc = an error occurred when try to find container \"75abbae56a5281121e407b5ad84356a6cff1b920c0460f69c461e87f085dce56\": not found" Sep 9 21:35:12.577823 kubelet[2632]: I0909 21:35:12.577809 2632 scope.go:117] "RemoveContainer" containerID="7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86" Sep 9 21:35:12.577969 containerd[1501]: time="2025-09-09T21:35:12.577940956Z" level=error msg="ContainerStatus for \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\": not found" Sep 9 21:35:12.578065 kubelet[2632]: E0909 21:35:12.578051 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\": not found" containerID="7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86" Sep 9 21:35:12.578098 kubelet[2632]: I0909 21:35:12.578067 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86"} err="failed to get container status \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\": rpc error: code = NotFound desc = an error occurred when try to find container \"7753109bd9621efd9a036b7f44a5a264f86567e75b5b3437f49affeb8e6f6c86\": not found" Sep 9 21:35:12.578098 kubelet[2632]: I0909 21:35:12.578078 2632 scope.go:117] "RemoveContainer" 
containerID="ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed" Sep 9 21:35:12.578231 containerd[1501]: time="2025-09-09T21:35:12.578201792Z" level=error msg="ContainerStatus for \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\": not found" Sep 9 21:35:12.578345 kubelet[2632]: E0909 21:35:12.578324 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\": not found" containerID="ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed" Sep 9 21:35:12.578384 kubelet[2632]: I0909 21:35:12.578352 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed"} err="failed to get container status \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed809f413245fdb738ff3a6dd311aa7dfc856a8fc1f679a2a11bdc16405369ed\": not found" Sep 9 21:35:12.578384 kubelet[2632]: I0909 21:35:12.578371 2632 scope.go:117] "RemoveContainer" containerID="f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d" Sep 9 21:35:12.578528 containerd[1501]: time="2025-09-09T21:35:12.578505793Z" level=error msg="ContainerStatus for \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\": not found" Sep 9 21:35:12.578626 kubelet[2632]: E0909 21:35:12.578609 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\": not found" containerID="f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d" Sep 9 21:35:12.578662 kubelet[2632]: I0909 21:35:12.578628 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d"} err="failed to get container status \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8d8550afd0183d67f609163e4e564417aa401c0bdc710b138161b38c7dd2f0d\": not found" Sep 9 21:35:12.578662 kubelet[2632]: I0909 21:35:12.578640 2632 scope.go:117] "RemoveContainer" containerID="9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa" Sep 9 21:35:12.578833 containerd[1501]: time="2025-09-09T21:35:12.578798033Z" level=error msg="ContainerStatus for \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\": not found" Sep 9 21:35:12.578900 kubelet[2632]: E0909 21:35:12.578885 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\": not found" containerID="9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa" Sep 9 21:35:12.578900 kubelet[2632]: I0909 21:35:12.578901 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa"} err="failed to get container status \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"9850f437c54975270ec9a068e2907905e3776360125e9551bcc469de168d93aa\": not found" Sep 9 21:35:13.103701 systemd[1]: var-lib-kubelet-pods-eb6c5d13\x2dceac\x2d46f5\x2d9893\x2d7cf9409819f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69dlh.mount: Deactivated successfully. Sep 9 21:35:13.103805 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2df8b108c1589c6598975a654dd50bb813e97a90c7cd911870432a274f4efae-shm.mount: Deactivated successfully. Sep 9 21:35:13.103859 systemd[1]: var-lib-kubelet-pods-b6a5f7b5\x2db37f\x2d40c2\x2da43c\x2d05492bf3b3ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tpbf.mount: Deactivated successfully. Sep 9 21:35:13.103916 systemd[1]: var-lib-kubelet-pods-b6a5f7b5\x2db37f\x2d40c2\x2da43c\x2d05492bf3b3ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 21:35:13.103961 systemd[1]: var-lib-kubelet-pods-b6a5f7b5\x2db37f\x2d40c2\x2da43c\x2d05492bf3b3ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 9 21:35:13.324976 kubelet[2632]: I0909 21:35:13.324948 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce" path="/var/lib/kubelet/pods/b6a5f7b5-b37f-40c2-a43c-05492bf3b3ce/volumes" Sep 9 21:35:13.325611 kubelet[2632]: I0909 21:35:13.325591 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6c5d13-ceac-46f5-9893-7cf9409819f2" path="/var/lib/kubelet/pods/eb6c5d13-ceac-46f5-9893-7cf9409819f2/volumes" Sep 9 21:35:13.375171 kubelet[2632]: E0909 21:35:13.375077 2632 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 21:35:14.031465 sshd[4237]: Connection closed by 10.0.0.1 port 36136 Sep 9 21:35:14.033042 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Sep 9 21:35:14.050567 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:36136.service: Deactivated successfully. Sep 9 21:35:14.052149 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 21:35:14.052425 systemd[1]: session-22.scope: Consumed 2.416s CPU time, 26.6M memory peak. Sep 9 21:35:14.052958 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Sep 9 21:35:14.055198 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:58318.service - OpenSSH per-connection server daemon (10.0.0.1:58318). Sep 9 21:35:14.055989 systemd-logind[1479]: Removed session 22. Sep 9 21:35:14.108328 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 58318 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:35:14.109326 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:35:14.112662 systemd-logind[1479]: New session 23 of user core. Sep 9 21:35:14.122743 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 9 21:35:15.240244 sshd[4394]: Connection closed by 10.0.0.1 port 58318 Sep 9 21:35:15.240805 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 9 21:35:15.249365 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:58318.service: Deactivated successfully. Sep 9 21:35:15.253060 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 21:35:15.255641 systemd[1]: session-23.scope: Consumed 1.040s CPU time, 23.9M memory peak. Sep 9 21:35:15.257965 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. Sep 9 21:35:15.262143 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:58330.service - OpenSSH per-connection server daemon (10.0.0.1:58330). Sep 9 21:35:15.269234 systemd-logind[1479]: Removed session 23. Sep 9 21:35:15.283204 systemd[1]: Created slice kubepods-burstable-pod26181e67_28f4_4a2c_834b_a15161ffbfd4.slice - libcontainer container kubepods-burstable-pod26181e67_28f4_4a2c_834b_a15161ffbfd4.slice. Sep 9 21:35:15.325380 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 58330 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:35:15.326392 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:35:15.329767 systemd-logind[1479]: New session 24 of user core. Sep 9 21:35:15.343765 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 9 21:35:15.349563 kubelet[2632]: I0909 21:35:15.349509 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-cilium-cgroup\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349575 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-etc-cni-netd\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349605 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvz5\" (UniqueName: \"kubernetes.io/projected/26181e67-28f4-4a2c-834b-a15161ffbfd4-kube-api-access-swvz5\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349621 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-cilium-run\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349636 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-cni-path\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349668 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/26181e67-28f4-4a2c-834b-a15161ffbfd4-cilium-ipsec-secrets\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349800 kubelet[2632]: I0909 21:35:15.349686 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-host-proc-sys-net\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349701 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-xtables-lock\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349715 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-host-proc-sys-kernel\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349728 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-hostproc\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349743 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26181e67-28f4-4a2c-834b-a15161ffbfd4-clustermesh-secrets\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349757 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26181e67-28f4-4a2c-834b-a15161ffbfd4-hubble-tls\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.349972 kubelet[2632]: I0909 21:35:15.349771 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-lib-modules\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.350083 kubelet[2632]: I0909 21:35:15.349786 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26181e67-28f4-4a2c-834b-a15161ffbfd4-cilium-config-path\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.350083 kubelet[2632]: I0909 21:35:15.349807 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26181e67-28f4-4a2c-834b-a15161ffbfd4-bpf-maps\") pod \"cilium-89pv4\" (UID: \"26181e67-28f4-4a2c-834b-a15161ffbfd4\") " pod="kube-system/cilium-89pv4"
Sep 9 21:35:15.393636 sshd[4409]: Connection closed by 10.0.0.1 port 58330
Sep 9 21:35:15.393515 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Sep 9 21:35:15.402489 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:58330.service: Deactivated successfully.
Sep 9 21:35:15.403987 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 21:35:15.406621 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Sep 9 21:35:15.407747 systemd[1]: Started sshd@24-10.0.0.147:22-10.0.0.1:58334.service - OpenSSH per-connection server daemon (10.0.0.1:58334).
Sep 9 21:35:15.408525 systemd-logind[1479]: Removed session 24.
Sep 9 21:35:15.458614 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 58334 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:35:15.460058 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:35:15.467220 systemd-logind[1479]: New session 25 of user core.
Sep 9 21:35:15.473717 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 21:35:15.562966 kubelet[2632]: I0909 21:35:15.562869 2632 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T21:35:15Z","lastTransitionTime":"2025-09-09T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 21:35:15.587452 kubelet[2632]: E0909 21:35:15.587419 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:15.588252 containerd[1501]: time="2025-09-09T21:35:15.588219691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89pv4,Uid:26181e67-28f4-4a2c-834b-a15161ffbfd4,Namespace:kube-system,Attempt:0,}"
Sep 9 21:35:15.606739 containerd[1501]: time="2025-09-09T21:35:15.606700538Z" level=info msg="connecting to shim fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:35:15.634795 systemd[1]: Started cri-containerd-fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061.scope - libcontainer container fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061.
Sep 9 21:35:15.654197 containerd[1501]: time="2025-09-09T21:35:15.654136081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89pv4,Uid:26181e67-28f4-4a2c-834b-a15161ffbfd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\""
Sep 9 21:35:15.654765 kubelet[2632]: E0909 21:35:15.654742 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:15.659340 containerd[1501]: time="2025-09-09T21:35:15.659306474Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 21:35:15.665696 containerd[1501]: time="2025-09-09T21:35:15.665648561Z" level=info msg="Container ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:35:15.670372 containerd[1501]: time="2025-09-09T21:35:15.670331694Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\""
Sep 9 21:35:15.670956 containerd[1501]: time="2025-09-09T21:35:15.670827472Z" level=info msg="StartContainer for \"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\""
Sep 9 21:35:15.673429 containerd[1501]: time="2025-09-09T21:35:15.673394551Z" level=info msg="connecting to shim ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" protocol=ttrpc version=3
Sep 9 21:35:15.693711 systemd[1]: Started cri-containerd-ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634.scope - libcontainer container ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634.
Sep 9 21:35:15.715793 containerd[1501]: time="2025-09-09T21:35:15.715752170Z" level=info msg="StartContainer for \"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\" returns successfully"
Sep 9 21:35:15.725279 systemd[1]: cri-containerd-ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634.scope: Deactivated successfully.
Sep 9 21:35:15.727956 containerd[1501]: time="2025-09-09T21:35:15.727925167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\" id:\"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\" pid:4490 exited_at:{seconds:1757453715 nanos:727477143}"
Sep 9 21:35:15.728037 containerd[1501]: time="2025-09-09T21:35:15.727964642Z" level=info msg="received exit event container_id:\"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\" id:\"ae19733e360fa4ef74f40946536892d10351b39725066dd36b26f3c606a20634\" pid:4490 exited_at:{seconds:1757453715 nanos:727477143}"
Sep 9 21:35:16.536517 kubelet[2632]: E0909 21:35:16.536483 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:16.541854 containerd[1501]: time="2025-09-09T21:35:16.541823616Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 21:35:16.551663 containerd[1501]: time="2025-09-09T21:35:16.551527667Z" level=info msg="Container 66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:35:16.557847 containerd[1501]: time="2025-09-09T21:35:16.557817443Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\""
Sep 9 21:35:16.558495 containerd[1501]: time="2025-09-09T21:35:16.558474645Z" level=info msg="StartContainer for \"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\""
Sep 9 21:35:16.559474 containerd[1501]: time="2025-09-09T21:35:16.559449250Z" level=info msg="connecting to shim 66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" protocol=ttrpc version=3
Sep 9 21:35:16.580736 systemd[1]: Started cri-containerd-66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1.scope - libcontainer container 66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1.
Sep 9 21:35:16.602816 containerd[1501]: time="2025-09-09T21:35:16.602724087Z" level=info msg="StartContainer for \"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\" returns successfully"
Sep 9 21:35:16.607880 systemd[1]: cri-containerd-66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1.scope: Deactivated successfully.
Sep 9 21:35:16.609398 containerd[1501]: time="2025-09-09T21:35:16.609364981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\" id:\"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\" pid:4539 exited_at:{seconds:1757453716 nanos:608763732}"
Sep 9 21:35:16.609470 containerd[1501]: time="2025-09-09T21:35:16.609408536Z" level=info msg="received exit event container_id:\"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\" id:\"66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1\" pid:4539 exited_at:{seconds:1757453716 nanos:608763732}"
Sep 9 21:35:17.454888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66f1360159db573fd2252fc28036f8b252f15855477d4104a8031637e3dd09b1-rootfs.mount: Deactivated successfully.
Sep 9 21:35:17.542576 kubelet[2632]: E0909 21:35:17.542507 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:17.545748 containerd[1501]: time="2025-09-09T21:35:17.545654730Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 21:35:17.553438 containerd[1501]: time="2025-09-09T21:35:17.553352269Z" level=info msg="Container 28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:35:17.565037 containerd[1501]: time="2025-09-09T21:35:17.564990648Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\""
Sep 9 21:35:17.565629 containerd[1501]: time="2025-09-09T21:35:17.565433319Z" level=info msg="StartContainer for \"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\""
Sep 9 21:35:17.566938 containerd[1501]: time="2025-09-09T21:35:17.566905794Z" level=info msg="connecting to shim 28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" protocol=ttrpc version=3
Sep 9 21:35:17.590700 systemd[1]: Started cri-containerd-28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd.scope - libcontainer container 28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd.
Sep 9 21:35:17.623169 systemd[1]: cri-containerd-28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd.scope: Deactivated successfully.
Sep 9 21:35:17.624713 containerd[1501]: time="2025-09-09T21:35:17.624623261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\" id:\"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\" pid:4582 exited_at:{seconds:1757453717 nanos:624359131}"
Sep 9 21:35:17.624968 containerd[1501]: time="2025-09-09T21:35:17.624745207Z" level=info msg="received exit event container_id:\"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\" id:\"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\" pid:4582 exited_at:{seconds:1757453717 nanos:624359131}"
Sep 9 21:35:17.624968 containerd[1501]: time="2025-09-09T21:35:17.624777044Z" level=info msg="StartContainer for \"28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd\" returns successfully"
Sep 9 21:35:17.644504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28ae5c4b28db45f0e8983c2cba3dd735821be9070a9fae755d9cecfab35639bd-rootfs.mount: Deactivated successfully.
Sep 9 21:35:18.375895 kubelet[2632]: E0909 21:35:18.375859 2632 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 21:35:18.547019 kubelet[2632]: E0909 21:35:18.546988 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:18.551376 containerd[1501]: time="2025-09-09T21:35:18.551326874Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 21:35:18.568197 containerd[1501]: time="2025-09-09T21:35:18.568110745Z" level=info msg="Container c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:35:18.574316 containerd[1501]: time="2025-09-09T21:35:18.574279734Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\""
Sep 9 21:35:18.574912 containerd[1501]: time="2025-09-09T21:35:18.574739766Z" level=info msg="StartContainer for \"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\""
Sep 9 21:35:18.576095 containerd[1501]: time="2025-09-09T21:35:18.575944399Z" level=info msg="connecting to shim c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" protocol=ttrpc version=3
Sep 9 21:35:18.591700 systemd[1]: Started cri-containerd-c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2.scope - libcontainer container c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2.
Sep 9 21:35:18.610695 systemd[1]: cri-containerd-c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2.scope: Deactivated successfully.
Sep 9 21:35:18.611687 containerd[1501]: time="2025-09-09T21:35:18.611648594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\" id:\"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\" pid:4620 exited_at:{seconds:1757453718 nanos:611359304}"
Sep 9 21:35:18.612043 containerd[1501]: time="2025-09-09T21:35:18.612005996Z" level=info msg="received exit event container_id:\"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\" id:\"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\" pid:4620 exited_at:{seconds:1757453718 nanos:611359304}"
Sep 9 21:35:18.618571 containerd[1501]: time="2025-09-09T21:35:18.618523069Z" level=info msg="StartContainer for \"c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2\" returns successfully"
Sep 9 21:35:18.628360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5d7bbf15e33e3e556f7bff76240b004ef67e8f933ad63d7e9324313c9b342d2-rootfs.mount: Deactivated successfully.
Sep 9 21:35:19.551956 kubelet[2632]: E0909 21:35:19.551923 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:19.557989 containerd[1501]: time="2025-09-09T21:35:19.557945996Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 21:35:19.566229 containerd[1501]: time="2025-09-09T21:35:19.566185418Z" level=info msg="Container 41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:35:19.574826 containerd[1501]: time="2025-09-09T21:35:19.574792044Z" level=info msg="CreateContainer within sandbox \"fdf08cb8c2b1e3bb80632a07e1501833b05db00f8b0efe479b7cf2ff23714061\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\""
Sep 9 21:35:19.575513 containerd[1501]: time="2025-09-09T21:35:19.575481575Z" level=info msg="StartContainer for \"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\""
Sep 9 21:35:19.576377 containerd[1501]: time="2025-09-09T21:35:19.576345129Z" level=info msg="connecting to shim 41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349" address="unix:///run/containerd/s/743a842c01963e9fa526c2e7110527c198a6148a55c6ab0192258897239182ca" protocol=ttrpc version=3
Sep 9 21:35:19.596724 systemd[1]: Started cri-containerd-41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349.scope - libcontainer container 41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349.
Sep 9 21:35:19.626969 containerd[1501]: time="2025-09-09T21:35:19.626935827Z" level=info msg="StartContainer for \"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" returns successfully"
Sep 9 21:35:19.688831 containerd[1501]: time="2025-09-09T21:35:19.688782327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" id:\"09fda9507909c53f0e3f6bbc4d7062f3cb6269c9ca0282ab1214c392ca830b3d\" pid:4688 exited_at:{seconds:1757453719 nanos:688295215}"
Sep 9 21:35:19.885588 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 21:35:20.557989 kubelet[2632]: E0909 21:35:20.557929 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:20.573035 kubelet[2632]: I0909 21:35:20.572915 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-89pv4" podStartSLOduration=5.572901561 podStartE2EDuration="5.572901561s" podCreationTimestamp="2025-09-09 21:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:35:20.572829888 +0000 UTC m=+77.335766445" watchObservedRunningTime="2025-09-09 21:35:20.572901561 +0000 UTC m=+77.335838078"
Sep 9 21:35:21.325760 kubelet[2632]: E0909 21:35:21.325718 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:21.588876 kubelet[2632]: E0909 21:35:21.588762 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:21.899059 containerd[1501]: time="2025-09-09T21:35:21.898958495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" id:\"e0fa21992adfe11df1461579cffc0d08f303b43a8d569d08e378ac92abab758e\" pid:4963 exit_status:1 exited_at:{seconds:1757453721 nanos:898462939}"
Sep 9 21:35:22.689794 systemd-networkd[1411]: lxc_health: Link UP
Sep 9 21:35:22.697608 systemd-networkd[1411]: lxc_health: Gained carrier
Sep 9 21:35:23.589740 kubelet[2632]: E0909 21:35:23.589409 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:24.018651 containerd[1501]: time="2025-09-09T21:35:24.018603523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" id:\"e152037f2cefdb949b232d63fd59299aea6f79345f13d43dedbfdef2423b549f\" pid:5224 exited_at:{seconds:1757453724 nanos:18089119}"
Sep 9 21:35:24.029700 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Sep 9 21:35:24.565892 kubelet[2632]: E0909 21:35:24.565848 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:35:26.154543 containerd[1501]: time="2025-09-09T21:35:26.154506158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" id:\"91a566dc17a872a71e5336b4db411d187c405f67a901c994881f7f3dbdf58e27\" pid:5258 exited_at:{seconds:1757453726 nanos:154177778}"
Sep 9 21:35:28.252590 containerd[1501]: time="2025-09-09T21:35:28.252366330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41c8f19eca7fb6b2631ccaa099e63b497724968f223829fb30a3856549cf5349\" id:\"1749feddbf5519a89ba9f7cd99febaec878deca1df4773f5e99ad67c5b5b0abf\" pid:5283 exited_at:{seconds:1757453728 nanos:251880795}"
Sep 9 21:35:28.256961 sshd[4424]: Connection closed by 10.0.0.1 port 58334
Sep 9 21:35:28.257436 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Sep 9 21:35:28.260951 systemd[1]: sshd@24-10.0.0.147:22-10.0.0.1:58334.service: Deactivated successfully.
Sep 9 21:35:28.262860 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 21:35:28.263575 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit.
Sep 9 21:35:28.264757 systemd-logind[1479]: Removed session 25.
Sep 9 21:35:29.323048 kubelet[2632]: E0909 21:35:29.323020 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"