Sep 9 23:32:09.827032 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:32:09.827053 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:08:34 -00 2025
Sep 9 23:32:09.827063 kernel: KASLR enabled
Sep 9 23:32:09.827068 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:32:09.827074 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 23:32:09.827079 kernel: random: crng init done
Sep 9 23:32:09.827086 kernel: secureboot: Secure boot disabled
Sep 9 23:32:09.827092 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:32:09.827098 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 23:32:09.827123 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:32:09.827129 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827135 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827140 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827146 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827153 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827161 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827168 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827174 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827180 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:32:09.827186 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:32:09.827192 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:32:09.827198 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:32:09.827204 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 23:32:09.827209 kernel: Zone ranges:
Sep 9 23:32:09.827215 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:32:09.827223 kernel: DMA32 empty
Sep 9 23:32:09.827229 kernel: Normal empty
Sep 9 23:32:09.827235 kernel: Device empty
Sep 9 23:32:09.827240 kernel: Movable zone start for each node
Sep 9 23:32:09.827246 kernel: Early memory node ranges
Sep 9 23:32:09.827252 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 23:32:09.827258 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 23:32:09.827264 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 23:32:09.827270 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 23:32:09.827276 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 23:32:09.827282 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 23:32:09.827287 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 23:32:09.827295 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 23:32:09.827301 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 23:32:09.827307 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:32:09.827316 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:32:09.827322 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:32:09.827329 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:32:09.827336 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:32:09.827343 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:32:09.827349 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 23:32:09.827355 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:32:09.827362 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:32:09.827368 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:32:09.827374 kernel: psci: Trusted OS migration not required
Sep 9 23:32:09.827381 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:32:09.827387 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:32:09.827393 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:32:09.827401 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:32:09.827408 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:32:09.827414 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:32:09.827420 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:32:09.827427 kernel: CPU features: detected: Spectre-v4
Sep 9 23:32:09.827433 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:32:09.827440 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:32:09.827446 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:32:09.827452 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:32:09.827459 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:32:09.827465 kernel: alternatives: applying boot alternatives
Sep 9 23:32:09.827472 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a0303a4c67016bd8cbb391a5d1bb2355d0bb259dfb78ea746a1288c781f86ca
Sep 9 23:32:09.827480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:32:09.827487 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:32:09.827517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:32:09.827527 kernel: Fallback order for Node 0: 0
Sep 9 23:32:09.827534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 23:32:09.827540 kernel: Policy zone: DMA
Sep 9 23:32:09.827546 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:32:09.827553 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 23:32:09.827559 kernel: software IO TLB: area num 4.
Sep 9 23:32:09.827566 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 23:32:09.827572 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 23:32:09.827597 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:32:09.827604 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:32:09.827620 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:32:09.827627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:32:09.827634 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:32:09.827640 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:32:09.827647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:32:09.827653 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:32:09.827660 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:32:09.827666 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:32:09.827680 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:32:09.827689 kernel: GICv3: 256 SPIs implemented
Sep 9 23:32:09.827696 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:32:09.827702 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:32:09.827708 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:32:09.827715 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:32:09.827721 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:32:09.827727 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:32:09.827734 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:32:09.827791 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:32:09.827799 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 23:32:09.827805 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 23:32:09.827824 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:32:09.827837 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:32:09.827844 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:32:09.827851 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:32:09.827858 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:32:09.827865 kernel: arm-pv: using stolen time PV
Sep 9 23:32:09.827872 kernel: Console: colour dummy device 80x25
Sep 9 23:32:09.827879 kernel: ACPI: Core revision 20240827
Sep 9 23:32:09.827885 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:32:09.827892 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:32:09.827899 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:32:09.827907 kernel: landlock: Up and running.
Sep 9 23:32:09.827914 kernel: SELinux: Initializing.
Sep 9 23:32:09.827921 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:32:09.827928 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:32:09.827935 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:32:09.827942 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:32:09.827949 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:32:09.827956 kernel: Remapping and enabling EFI services.
Sep 9 23:32:09.827962 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:32:09.827975 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:32:09.827982 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:32:09.827989 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 23:32:09.827998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:32:09.828005 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:32:09.828012 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:32:09.828019 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:32:09.828027 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 23:32:09.828035 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:32:09.828042 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:32:09.828049 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:32:09.828056 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:32:09.828064 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 23:32:09.828071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:32:09.828078 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:32:09.828085 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:32:09.828092 kernel: SMP: Total of 4 processors activated.
Sep 9 23:32:09.828100 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:32:09.828178 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:32:09.828186 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:32:09.828193 kernel: CPU features: detected: Common not Private translations
Sep 9 23:32:09.828200 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:32:09.828207 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:32:09.828214 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:32:09.828221 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:32:09.828228 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:32:09.828235 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:32:09.828244 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:32:09.828251 kernel: alternatives: applying system-wide alternatives
Sep 9 23:32:09.828258 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 23:32:09.828265 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 23:32:09.828272 kernel: devtmpfs: initialized
Sep 9 23:32:09.828279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:32:09.828286 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:32:09.828304 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:32:09.828312 kernel: 0 pages in range for non-PLT usage
Sep 9 23:32:09.828319 kernel: 508560 pages in range for PLT usage
Sep 9 23:32:09.828326 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:32:09.828333 kernel: SMBIOS 3.0.0 present.
Sep 9 23:32:09.828340 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:32:09.828347 kernel: DMI: Memory slots populated: 1/1
Sep 9 23:32:09.828354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:32:09.828361 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:32:09.828368 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:32:09.828377 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:32:09.828384 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:32:09.828391 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 9 23:32:09.828398 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:32:09.828405 kernel: cpuidle: using governor menu
Sep 9 23:32:09.828412 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:32:09.828419 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:32:09.828426 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:32:09.828433 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:32:09.828441 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:32:09.828448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:32:09.828455 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:32:09.828462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:32:09.828469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:32:09.828476 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:32:09.828483 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:32:09.828490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:32:09.828497 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:32:09.828504 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:32:09.828512 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:32:09.828519 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:32:09.828525 kernel: ACPI: Interpreter enabled
Sep 9 23:32:09.828532 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:32:09.828539 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:32:09.828546 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:32:09.828553 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:32:09.828559 kernel: ACPI: CPU2 has been hot-added
Sep 9 23:32:09.828566 kernel: ACPI: CPU3 has been hot-added
Sep 9 23:32:09.828574 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:32:09.828581 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:32:09.828588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:32:09.828777 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:32:09.828847 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:32:09.828907 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:32:09.828965 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:32:09.829025 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:32:09.829034 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:32:09.829041 kernel: PCI host bridge to bus 0000:00
Sep 9 23:32:09.829119 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:32:09.829179 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:32:09.829233 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:32:09.829286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:32:09.829366 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:32:09.829442 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 23:32:09.829506 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 23:32:09.829567 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 23:32:09.829679 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:32:09.829757 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 23:32:09.829818 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 23:32:09.829883 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 23:32:09.829938 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:32:09.829991 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:32:09.830043 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:32:09.830052 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:32:09.830059 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:32:09.830066 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:32:09.830075 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:32:09.830081 kernel: iommu: Default domain type: Translated
Sep 9 23:32:09.830089 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:32:09.830095 kernel: efivars: Registered efivars operations
Sep 9 23:32:09.830114 kernel: vgaarb: loaded
Sep 9 23:32:09.830122 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:32:09.830129 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:32:09.830136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:32:09.830143 kernel: pnp: PnP ACPI init
Sep 9 23:32:09.830216 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:32:09.830226 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:32:09.830234 kernel: NET: Registered PF_INET protocol family
Sep 9 23:32:09.830241 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:32:09.830248 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:32:09.830279 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:32:09.830287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:32:09.830294 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:32:09.830304 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:32:09.830311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:32:09.830318 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:32:09.830325 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:32:09.830332 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:32:09.830355 kernel: kvm [1]: HYP mode not available
Sep 9 23:32:09.830362 kernel: Initialise system trusted keyrings
Sep 9 23:32:09.830369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:32:09.830376 kernel: Key type asymmetric registered
Sep 9 23:32:09.830386 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:32:09.830393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:32:09.830400 kernel: io scheduler mq-deadline registered
Sep 9 23:32:09.830407 kernel: io scheduler kyber registered
Sep 9 23:32:09.830427 kernel: io scheduler bfq registered
Sep 9 23:32:09.830434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:32:09.830441 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:32:09.830449 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:32:09.830546 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:32:09.830558 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:32:09.830584 kernel: thunder_xcv, ver 1.0
Sep 9 23:32:09.830592 kernel: thunder_bgx, ver 1.0
Sep 9 23:32:09.830599 kernel: nicpf, ver 1.0
Sep 9 23:32:09.830605 kernel: nicvf, ver 1.0
Sep 9 23:32:09.830783 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:32:09.830871 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:32:09 UTC (1757460729)
Sep 9 23:32:09.830882 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:32:09.830890 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:32:09.830901 kernel: watchdog: NMI not fully supported
Sep 9 23:32:09.830925 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:32:09.830933 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:32:09.830940 kernel: Segment Routing with IPv6
Sep 9 23:32:09.830947 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:32:09.830953 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:32:09.830960 kernel: Key type dns_resolver registered
Sep 9 23:32:09.830967 kernel: registered taskstats version 1
Sep 9 23:32:09.830974 kernel: Loading compiled-in X.509 certificates
Sep 9 23:32:09.830997 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 820dabbdbfae37dcb388874c78ed83c436750814'
Sep 9 23:32:09.831005 kernel: Demotion targets for Node 0: null
Sep 9 23:32:09.831012 kernel: Key type .fscrypt registered
Sep 9 23:32:09.831019 kernel: Key type fscrypt-provisioning registered
Sep 9 23:32:09.831025 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:32:09.831032 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:32:09.831039 kernel: ima: No architecture policies found
Sep 9 23:32:09.831046 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:32:09.831055 kernel: clk: Disabling unused clocks
Sep 9 23:32:09.831076 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:32:09.831083 kernel: Warning: unable to open an initial console.
Sep 9 23:32:09.831091 kernel: Freeing unused kernel memory: 38976K
Sep 9 23:32:09.831098 kernel: Run /init as init process
Sep 9 23:32:09.831129 kernel: with arguments:
Sep 9 23:32:09.831155 kernel: /init
Sep 9 23:32:09.831163 kernel: with environment:
Sep 9 23:32:09.831169 kernel: HOME=/
Sep 9 23:32:09.831176 kernel: TERM=linux
Sep 9 23:32:09.831188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:32:09.831196 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:32:09.831206 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:32:09.831229 systemd[1]: Detected virtualization kvm.
Sep 9 23:32:09.831237 systemd[1]: Detected architecture arm64.
Sep 9 23:32:09.831244 systemd[1]: Running in initrd.
Sep 9 23:32:09.831252 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:32:09.831263 systemd[1]: Hostname set to .
Sep 9 23:32:09.831270 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:32:09.831278 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:32:09.831299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:32:09.831310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:32:09.831319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:32:09.831327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:32:09.831334 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:32:09.831345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:32:09.831354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:32:09.831375 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:32:09.831384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:32:09.831392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:32:09.831400 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:32:09.831407 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:32:09.831417 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:32:09.831424 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:32:09.831432 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:32:09.831454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:32:09.831462 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:32:09.831470 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:32:09.831478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:32:09.831485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:32:09.831495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:32:09.831503 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:32:09.831511 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:32:09.831533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:32:09.831541 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:32:09.831549 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:32:09.831557 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:32:09.831565 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:32:09.831573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:32:09.831584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:32:09.831607 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:32:09.831699 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:32:09.831712 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:32:09.831724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:32:09.831732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:32:09.831740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:32:09.831748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:32:09.831804 systemd-journald[245]: Collecting audit messages is disabled.
Sep 9 23:32:09.831827 kernel: Bridge firewalling registered
Sep 9 23:32:09.831852 systemd-journald[245]: Journal started
Sep 9 23:32:09.831872 systemd-journald[245]: Runtime Journal (/run/log/journal/ed9f1bdb3f344655a093f97bdf34168a) is 6M, max 48.5M, 42.4M free.
Sep 9 23:32:09.804777 systemd-modules-load[246]: Inserted module 'overlay'
Sep 9 23:32:09.826948 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 9 23:32:09.841868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:32:09.841891 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:32:09.843274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:32:09.845487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:32:09.850735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:32:09.852699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:32:09.857234 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:32:09.866986 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:32:09.870030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:32:09.871498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:32:09.875979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:32:09.878544 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:32:09.890784 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:32:09.907561 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a0303a4c67016bd8cbb391a5d1bb2355d0bb259dfb78ea746a1288c781f86ca
Sep 9 23:32:09.917826 systemd-resolved[286]: Positive Trust Anchors:
Sep 9 23:32:09.917843 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:32:09.917876 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:32:09.923090 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 9 23:32:09.924133 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:32:09.929453 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:32:09.990141 kernel: SCSI subsystem initialized
Sep 9 23:32:09.994134 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:32:10.002136 kernel: iscsi: registered transport (tcp)
Sep 9 23:32:10.016138 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:32:10.016196 kernel: QLogic iSCSI HBA Driver
Sep 9 23:32:10.036555 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:32:10.054950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:32:10.057501 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:32:10.116137 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:32:10.118597 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:32:10.176163 kernel: raid6: neonx8 gen() 15558 MB/s
Sep 9 23:32:10.193133 kernel: raid6: neonx4 gen() 14093 MB/s
Sep 9 23:32:10.211124 kernel: raid6: neonx2 gen() 8157 MB/s
Sep 9 23:32:10.230150 kernel: raid6: neonx1 gen() 11282 MB/s
Sep 9 23:32:10.247151 kernel: raid6: int64x8 gen() 6684 MB/s
Sep 9 23:32:10.264189 kernel: raid6: int64x4 gen() 4599 MB/s
Sep 9 23:32:10.281137 kernel: raid6: int64x2 gen() 5739 MB/s
Sep 9 23:32:10.298138 kernel: raid6: int64x1 gen() 4946 MB/s
Sep 9 23:32:10.298176 kernel: raid6: using algorithm neonx8 gen() 15558 MB/s
Sep 9 23:32:10.315150 kernel: raid6: .... xor() 11842 MB/s, rmw enabled
Sep 9 23:32:10.315192 kernel: raid6: using neon recovery algorithm
Sep 9 23:32:10.320226 kernel: xor: measuring software checksum speed
Sep 9 23:32:10.320249 kernel: 8regs : 21636 MB/sec
Sep 9 23:32:10.321278 kernel: 32regs : 21693 MB/sec
Sep 9 23:32:10.321292 kernel: arm64_neon : 28099 MB/sec
Sep 9 23:32:10.321309 kernel: xor: using function: arm64_neon (28099 MB/sec)
Sep 9 23:32:10.375141 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:32:10.381708 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:32:10.384406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:32:10.421184 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 9 23:32:10.425264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:32:10.428243 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:32:10.452208 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 9 23:32:10.475897 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:32:10.478471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:32:10.538246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:32:10.542237 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:32:10.596306 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:32:10.598324 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:32:10.602386 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:32:10.602421 kernel: GPT:9289727 != 19775487
Sep 9 23:32:10.602431 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:32:10.602449 kernel: GPT:9289727 != 19775487
Sep 9 23:32:10.602458 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:32:10.603350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:32:10.607745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:32:10.607947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:32:10.613062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:32:10.615187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:32:10.646349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:32:10.647928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:32:10.650236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:32:10.659698 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:32:10.671737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:32:10.678146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:32:10.679448 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:32:10.682607 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:32:10.684872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:32:10.687262 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:32:10.690075 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:32:10.691997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:32:10.719398 disk-uuid[591]: Primary Header is updated.
Sep 9 23:32:10.719398 disk-uuid[591]: Secondary Entries is updated.
Sep 9 23:32:10.719398 disk-uuid[591]: Secondary Header is updated.
Sep 9 23:32:10.720369 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:32:10.725628 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:32:11.733138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:32:11.733195 disk-uuid[597]: The operation has completed successfully.
Sep 9 23:32:11.766320 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:32:11.766426 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:32:11.793093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:32:11.818486 sh[610]: Success
Sep 9 23:32:11.832913 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:32:11.832967 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:32:11.832979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:32:11.841127 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:32:11.872355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:32:11.874282 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:32:11.881281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:32:11.891030 kernel: BTRFS: device fsid 61baaba1-cd1f-4e69-9af9-cc1b703c9653 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622)
Sep 9 23:32:11.891067 kernel: BTRFS info (device dm-0): first mount of filesystem 61baaba1-cd1f-4e69-9af9-cc1b703c9653
Sep 9 23:32:11.891078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:32:11.897125 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:32:11.897172 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:32:11.898224 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:32:11.899676 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:32:11.901295 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:32:11.902190 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:32:11.904159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:32:11.935122 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (651)
Sep 9 23:32:11.937538 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:32:11.937579 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:32:11.940761 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:32:11.940800 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:32:11.946138 kernel: BTRFS info (device vda6): last unmount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:32:11.947536 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:32:11.949756 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:32:12.028081 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:32:12.031887 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:32:12.071685 ignition[698]: Ignition 2.21.0
Sep 9 23:32:12.071701 ignition[698]: Stage: fetch-offline
Sep 9 23:32:12.072686 systemd-networkd[801]: lo: Link UP
Sep 9 23:32:12.071740 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:12.072690 systemd-networkd[801]: lo: Gained carrier
Sep 9 23:32:12.071748 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:12.073383 systemd-networkd[801]: Enumeration completed
Sep 9 23:32:12.071910 ignition[698]: parsed url from cmdline: ""
Sep 9 23:32:12.073470 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:32:12.071913 ignition[698]: no config URL provided
Sep 9 23:32:12.074783 systemd[1]: Reached target network.target - Network.
Sep 9 23:32:12.071918 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:32:12.075800 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:32:12.071925 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:32:12.075803 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:32:12.071943 ignition[698]: op(1): [started] loading QEMU firmware config module
Sep 9 23:32:12.076452 systemd-networkd[801]: eth0: Link UP
Sep 9 23:32:12.071947 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:32:12.077459 systemd-networkd[801]: eth0: Gained carrier
Sep 9 23:32:12.081602 ignition[698]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:32:12.077469 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:32:12.100179 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:32:12.132997 ignition[698]: parsing config with SHA512: 408c08d43f2b78424e895495c2f0cf50ff4f5a7d3390d60a98dbb3e16c0aa1f24f8025a63704367b84e2d8f444adb9e86cf9fcdd9f47403ae484f47ace54bfc7
Sep 9 23:32:12.137531 unknown[698]: fetched base config from "system"
Sep 9 23:32:12.137542 unknown[698]: fetched user config from "qemu"
Sep 9 23:32:12.137955 ignition[698]: fetch-offline: fetch-offline passed
Sep 9 23:32:12.140438 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:32:12.138008 ignition[698]: Ignition finished successfully
Sep 9 23:32:12.141745 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:32:12.142500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:32:12.174342 ignition[808]: Ignition 2.21.0
Sep 9 23:32:12.174360 ignition[808]: Stage: kargs
Sep 9 23:32:12.174505 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:12.174514 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:12.176015 ignition[808]: kargs: kargs passed
Sep 9 23:32:12.176086 ignition[808]: Ignition finished successfully
Sep 9 23:32:12.181605 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:32:12.185353 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:32:12.218343 ignition[816]: Ignition 2.21.0
Sep 9 23:32:12.218361 ignition[816]: Stage: disks
Sep 9 23:32:12.218545 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:12.218553 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:12.219638 ignition[816]: disks: disks passed
Sep 9 23:32:12.221892 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:32:12.219695 ignition[816]: Ignition finished successfully
Sep 9 23:32:12.224155 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:32:12.225670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:32:12.227749 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:32:12.229582 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:32:12.231540 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:32:12.234433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:32:12.264964 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:32:12.269620 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:32:12.271954 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:32:12.341131 kernel: EXT4-fs (vda9): mounted filesystem b3fb930d-58c7-4aff-a89a-67d23b38af56 r/w with ordered data mode. Quota mode: none.
Sep 9 23:32:12.341406 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:32:12.342641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:32:12.347045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:32:12.350576 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:32:12.351543 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:32:12.351581 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:32:12.351605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:32:12.362621 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:32:12.366130 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834)
Sep 9 23:32:12.365312 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:32:12.371385 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:32:12.371408 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:32:12.373255 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:32:12.373437 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:32:12.375030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:32:12.419681 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:32:12.424296 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:32:12.428228 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:32:12.432209 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:32:12.510173 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:32:12.512156 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:32:12.513879 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:32:12.532194 kernel: BTRFS info (device vda6): last unmount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:32:12.551172 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:32:12.561068 ignition[947]: INFO : Ignition 2.21.0
Sep 9 23:32:12.561068 ignition[947]: INFO : Stage: mount
Sep 9 23:32:12.563947 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:12.563947 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:12.563947 ignition[947]: INFO : mount: mount passed
Sep 9 23:32:12.563947 ignition[947]: INFO : Ignition finished successfully
Sep 9 23:32:12.565210 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:32:12.568674 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:32:12.889836 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:32:12.892550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:32:12.918574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (960)
Sep 9 23:32:12.918619 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:32:12.918630 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:32:12.923119 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:32:12.923146 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:32:12.924466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:32:12.958525 ignition[977]: INFO : Ignition 2.21.0
Sep 9 23:32:12.958525 ignition[977]: INFO : Stage: files
Sep 9 23:32:12.960209 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:12.960209 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:12.960209 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:32:12.963509 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:32:12.963509 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:32:12.966468 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:32:12.966468 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:32:12.966468 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:32:12.965227 unknown[977]: wrote ssh authorized keys file for user: core
Sep 9 23:32:12.971568 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:32:12.971568 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:32:13.012720 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:32:13.518304 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 9 23:32:15.091697 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:32:15.091697 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:32:15.091697 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:32:15.295569 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:32:15.459534 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:32:15.459534 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:32:15.463442 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:32:15.481338 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:32:15.481338 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:32:15.481338 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:32:15.888271 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:32:16.384433 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:32:16.384433 ignition[977]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 23:32:16.388311 ignition[977]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:32:16.405755 ignition[977]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:32:16.409831 ignition[977]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:32:16.411654 ignition[977]: INFO : files: files passed
Sep 9 23:32:16.411654 ignition[977]: INFO : Ignition finished successfully
Sep 9 23:32:16.414145 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:32:16.418255 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:32:16.428811 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:32:16.431227 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:32:16.431338 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:32:16.436472 initrd-setup-root-after-ignition[1006]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:32:16.440154 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:32:16.440154 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:32:16.443666 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:32:16.444446 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:32:16.446579 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:32:16.449494 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:32:16.488230 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:32:16.488333 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:32:16.490757 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:32:16.493134 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:32:16.495209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:32:16.496234 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:32:16.519496 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:32:16.522360 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:32:16.542608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:32:16.545124 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:32:16.546476 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:32:16.548480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:32:16.548601 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:32:16.551260 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:32:16.553507 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:32:16.555296 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:32:16.557134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:32:16.559270 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:32:16.561463 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:32:16.563464 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:32:16.565502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:32:16.567863 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:32:16.570252 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:32:16.572291 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:32:16.574039 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:32:16.574249 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:32:16.576853 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:32:16.579003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:32:16.581369 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:32:16.581511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:32:16.583794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:32:16.583924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:32:16.587134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:32:16.587300 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:32:16.589356 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:32:16.591052 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:32:16.595188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:32:16.596639 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:32:16.599088 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:32:16.601167 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:32:16.601257 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:32:16.603795 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:32:16.603876 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:32:16.605547 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:32:16.605744 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:32:16.607493 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:32:16.607593 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:32:16.610570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:32:16.612340 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:32:16.614443 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:32:16.624836 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:32:16.625807 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:32:16.625947 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:32:16.628168 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:32:16.628272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:32:16.634455 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:32:16.634547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:32:16.644704 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:32:16.647455 ignition[1032]: INFO : Ignition 2.21.0
Sep 9 23:32:16.647455 ignition[1032]: INFO : Stage: umount
Sep 9 23:32:16.650820 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:32:16.650820 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:32:16.650820 ignition[1032]: INFO : umount: umount passed
Sep 9 23:32:16.650820 ignition[1032]: INFO : Ignition finished successfully
Sep 9 23:32:16.650437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:32:16.650564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:32:16.651999 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:32:16.652132 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:32:16.656354 systemd[1]: Stopped target network.target - Network.
Sep 9 23:32:16.658016 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:32:16.658081 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:32:16.659975 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:32:16.660018 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:32:16.661942 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:32:16.661992 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:32:16.663806 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:32:16.663845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:32:16.665747 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:32:16.665798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:32:16.667795 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 23:32:16.669669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:32:16.680359 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:32:16.681746 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:32:16.686041 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:32:16.686505 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:32:16.686564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:32:16.690628 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:32:16.690844 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:32:16.690953 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:32:16.695153 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:32:16.695579 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:32:16.697290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:32:16.697330 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:32:16.700586 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:32:16.701671 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:32:16.701737 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:32:16.703920 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 9 23:32:16.703962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:32:16.707051 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:32:16.707091 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:32:16.709322 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:32:16.714009 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:32:16.729746 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:32:16.734268 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:32:16.735923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:32:16.735962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:32:16.738332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:32:16.738362 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:32:16.740304 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:32:16.740355 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:32:16.743514 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:32:16.743562 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:32:16.746355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:32:16.746400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:32:16.750057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:32:16.751262 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:32:16.751316 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 23:32:16.754649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:32:16.754695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:32:16.758425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:32:16.758465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:32:16.763338 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:32:16.764263 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:32:16.770003 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:32:16.770118 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:32:16.772420 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:32:16.774778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:32:16.805496 systemd[1]: Switching root. Sep 9 23:32:16.845304 systemd-journald[245]: Journal stopped Sep 9 23:32:17.725810 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Sep 9 23:32:17.725858 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:32:17.725875 kernel: SELinux: policy capability open_perms=1 Sep 9 23:32:17.725885 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:32:17.725896 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:32:17.725904 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:32:17.725914 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:32:17.725923 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:32:17.725938 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:32:17.725947 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:32:17.725957 kernel: audit: type=1403 audit(1757460737.043:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:32:17.725969 systemd[1]: Successfully loaded SELinux policy in 56.226ms. Sep 9 23:32:17.725985 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.534ms. Sep 9 23:32:17.725999 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:32:17.726024 systemd[1]: Detected virtualization kvm. Sep 9 23:32:17.726034 systemd[1]: Detected architecture arm64. Sep 9 23:32:17.726046 systemd[1]: Detected first boot. Sep 9 23:32:17.726056 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:32:17.726066 zram_generator::config[1077]: No configuration found. Sep 9 23:32:17.726076 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:32:17.726085 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:32:17.726097 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 9 23:32:17.726122 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:32:17.726134 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:32:17.726144 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:32:17.726155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:32:17.726165 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:32:17.726176 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:32:17.726186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:32:17.726200 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:32:17.726213 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:32:17.726223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:32:17.726234 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:32:17.726247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:32:17.726257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:32:17.726267 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:32:17.726277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:32:17.726288 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:32:17.726298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:32:17.726310 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 9 23:32:17.726320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:32:17.726330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:32:17.726340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:32:17.726350 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:32:17.726359 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:32:17.726369 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:32:17.726381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:32:17.726392 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:32:17.726403 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:32:17.726413 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:32:17.726423 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:32:17.726433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:32:17.726443 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:32:17.726454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:32:17.726464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:32:17.726475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:32:17.726486 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:32:17.726498 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:32:17.726508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:32:17.726519 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:32:17.726529 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 9 23:32:17.726538 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:32:17.726549 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:32:17.726559 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:32:17.726571 systemd[1]: Reached target machines.target - Containers. Sep 9 23:32:17.726581 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:32:17.726591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:32:17.726602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:32:17.726612 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:32:17.726622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:32:17.726631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:32:17.726647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:32:17.726659 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:32:17.726672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:32:17.726684 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:32:17.726695 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:32:17.726705 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:32:17.726715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:32:17.726725 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 23:32:17.726736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:32:17.726746 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:32:17.726757 kernel: ACPI: bus type drm_connector registered Sep 9 23:32:17.726767 kernel: loop: module loaded Sep 9 23:32:17.726776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:32:17.726786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:32:17.726797 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:32:17.726806 kernel: fuse: init (API version 7.41) Sep 9 23:32:17.726817 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:32:17.726827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:32:17.726837 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:32:17.726848 systemd[1]: Stopped verity-setup.service. Sep 9 23:32:17.726860 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:32:17.726876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:32:17.726887 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:32:17.726897 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:32:17.726909 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:32:17.726941 systemd-journald[1152]: Collecting audit messages is disabled. Sep 9 23:32:17.726963 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 9 23:32:17.726975 systemd-journald[1152]: Journal started Sep 9 23:32:17.726995 systemd-journald[1152]: Runtime Journal (/run/log/journal/ed9f1bdb3f344655a093f97bdf34168a) is 6M, max 48.5M, 42.4M free. Sep 9 23:32:17.499164 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:32:17.521087 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:32:17.521505 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:32:17.730017 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:32:17.732145 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:32:17.733828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:32:17.735593 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:32:17.735799 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:32:17.737413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:32:17.737570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:32:17.739057 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:32:17.739253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:32:17.742505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:32:17.742699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:32:17.744289 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:32:17.744448 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:32:17.745781 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:32:17.745935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:32:17.747471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 9 23:32:17.749061 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:32:17.750745 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:32:17.752372 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:32:17.765214 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:32:17.767807 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:32:17.770027 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:32:17.771246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:32:17.771275 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:32:17.773256 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:32:17.788952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:32:17.790254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:32:17.792001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:32:17.794306 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:32:17.795684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:32:17.799268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:32:17.800556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:32:17.801526 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 23:32:17.803754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:32:17.804417 systemd-journald[1152]: Time spent on flushing to /var/log/journal/ed9f1bdb3f344655a093f97bdf34168a is 14.622ms for 888 entries. Sep 9 23:32:17.804417 systemd-journald[1152]: System Journal (/var/log/journal/ed9f1bdb3f344655a093f97bdf34168a) is 8M, max 195.6M, 187.6M free. Sep 9 23:32:17.830145 systemd-journald[1152]: Received client request to flush runtime journal. Sep 9 23:32:17.830247 kernel: loop0: detected capacity change from 0 to 107312 Sep 9 23:32:17.808333 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:32:17.812811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:32:17.814499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:32:17.816271 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:32:17.829666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:32:17.834916 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:32:17.838417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:32:17.843431 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:32:17.846126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:32:17.847162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:32:17.852880 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:32:17.858127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:32:17.866123 kernel: loop1: detected capacity change from 0 to 138376 Sep 9 23:32:17.880167 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 9 23:32:17.882214 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 9 23:32:17.882233 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 9 23:32:17.889191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:32:17.900137 kernel: loop2: detected capacity change from 0 to 207008 Sep 9 23:32:17.934145 kernel: loop3: detected capacity change from 0 to 107312 Sep 9 23:32:17.941128 kernel: loop4: detected capacity change from 0 to 138376 Sep 9 23:32:17.948131 kernel: loop5: detected capacity change from 0 to 207008 Sep 9 23:32:17.958303 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:32:17.958685 (sd-merge)[1216]: Merged extensions into '/usr'. Sep 9 23:32:17.962252 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:32:17.962270 systemd[1]: Reloading... Sep 9 23:32:18.019138 zram_generator::config[1238]: No configuration found. Sep 9 23:32:18.104502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:32:18.132877 ldconfig[1188]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:32:18.168161 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:32:18.168353 systemd[1]: Reloading finished in 205 ms. Sep 9 23:32:18.194466 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:32:18.196138 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:32:18.214294 systemd[1]: Starting ensure-sysext.service... Sep 9 23:32:18.216081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 9 23:32:18.225227 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:32:18.225248 systemd[1]: Reloading... Sep 9 23:32:18.235500 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:32:18.235542 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:32:18.235803 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:32:18.235983 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:32:18.236630 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:32:18.236847 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Sep 9 23:32:18.236894 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Sep 9 23:32:18.239797 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:32:18.239808 systemd-tmpfiles[1278]: Skipping /boot Sep 9 23:32:18.248885 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:32:18.248902 systemd-tmpfiles[1278]: Skipping /boot Sep 9 23:32:18.272187 zram_generator::config[1308]: No configuration found. Sep 9 23:32:18.331970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:32:18.396215 systemd[1]: Reloading finished in 170 ms. Sep 9 23:32:18.416193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:32:18.422023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 23:32:18.431151 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:32:18.433796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:32:18.445931 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:32:18.449217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:32:18.453274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:32:18.457302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:32:18.468715 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:32:18.471713 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:32:18.477500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:32:18.478802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:32:18.482384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:32:18.486954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:32:18.488178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:32:18.488346 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:32:18.489545 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:32:18.493293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:32:18.493490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 9 23:32:18.498230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:32:18.500420 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:32:18.504078 systemd-udevd[1346]: Using default interface naming scheme 'v255'. Sep 9 23:32:18.506727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:32:18.513396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:32:18.515165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:32:18.515280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:32:18.515366 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:32:18.516048 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:32:18.519794 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:32:18.519948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:32:18.521089 augenrules[1379]: No rules Sep 9 23:32:18.521812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:32:18.521971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:32:18.525020 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:32:18.525257 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:32:18.529983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 9 23:32:18.534155 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:32:18.535850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:32:18.537147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:32:18.549857 systemd[1]: Finished ensure-sysext.service. Sep 9 23:32:18.556901 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:32:18.559425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:32:18.561611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:32:18.566203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:32:18.581927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:32:18.587308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:32:18.589718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:32:18.589768 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:32:18.592944 systemd-resolved[1344]: Positive Trust Anchors: Sep 9 23:32:18.593406 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:32:18.593599 systemd-resolved[1344]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:32:18.593699 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:32:18.600528 systemd-resolved[1344]: Defaulting to hostname 'linux'. Sep 9 23:32:18.601407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:32:18.602531 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:32:18.603401 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:32:18.605585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:32:18.605774 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:32:18.609833 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:32:18.610291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:32:18.612569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:32:18.612754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:32:18.615570 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:32:18.615852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 9 23:32:18.625037 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:32:18.626050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:32:18.627447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:32:18.627510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:32:18.631924 augenrules[1419]: /sbin/augenrules: No change Sep 9 23:32:18.645709 augenrules[1452]: No rules Sep 9 23:32:18.648053 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:32:18.649170 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:32:18.684656 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 23:32:18.690200 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:32:18.691492 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:32:18.693555 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:32:18.694990 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:32:18.698708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:32:18.698746 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:32:18.699834 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 23:32:18.701296 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:32:18.702689 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:32:18.704291 systemd[1]: Reached target timers.target - Timer Units. 
Sep 9 23:32:18.705393 systemd-networkd[1425]: lo: Link UP
Sep 9 23:32:18.705653 systemd-networkd[1425]: lo: Gained carrier
Sep 9 23:32:18.706532 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:32:18.708476 systemd-networkd[1425]: Enumeration completed
Sep 9 23:32:18.709831 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:32:18.713719 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:32:18.715365 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:32:18.716078 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:32:18.716087 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:32:18.717739 systemd-networkd[1425]: eth0: Link UP
Sep 9 23:32:18.717864 systemd-networkd[1425]: eth0: Gained carrier
Sep 9 23:32:18.717879 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:32:18.718179 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:32:18.722096 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:32:18.723656 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:32:18.725254 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:32:18.726753 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:32:18.733503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:32:18.734869 systemd[1]: Reached target network.target - Network.
Sep 9 23:32:18.735914 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:32:18.737354 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:32:18.738590 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:32:18.738620 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:32:18.741228 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:32:18.742161 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:32:18.742746 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection.
Sep 9 23:32:18.743602 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 23:32:18.743665 systemd-timesyncd[1431]: Initial clock synchronization to Tue 2025-09-09 23:32:18.966246 UTC.
Sep 9 23:32:18.745350 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:32:18.750289 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:32:18.756177 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:32:18.762682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:32:18.765185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:32:18.766486 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:32:18.767760 jq[1480]: false
Sep 9 23:32:18.770537 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:32:18.773612 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:32:18.778262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:32:18.782275 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:32:18.785758 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:32:18.788383 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:32:18.793077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:32:18.795188 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:32:18.795690 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:32:18.797331 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:32:18.804317 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:32:18.807340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:32:18.808708 extend-filesystems[1481]: Found /dev/vda6
Sep 9 23:32:18.809173 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:32:18.809418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:32:18.809720 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:32:18.809934 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:32:18.811575 extend-filesystems[1481]: Found /dev/vda9
Sep 9 23:32:18.812563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:32:18.812770 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:32:18.817119 jq[1501]: true
Sep 9 23:32:18.815618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:32:18.825128 extend-filesystems[1481]: Checking size of /dev/vda9
Sep 9 23:32:18.834133 tar[1507]: linux-arm64/LICENSE
Sep 9 23:32:18.834133 tar[1507]: linux-arm64/helm
Sep 9 23:32:18.840557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:32:18.841723 update_engine[1499]: I20250909 23:32:18.841548 1499 main.cc:92] Flatcar Update Engine starting
Sep 9 23:32:18.843406 jq[1509]: true
Sep 9 23:32:18.847583 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:32:18.857539 dbus-daemon[1478]: [system] SELinux support is enabled
Sep 9 23:32:18.857724 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:32:18.860616 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:32:18.860944 update_engine[1499]: I20250909 23:32:18.860894 1499 update_check_scheduler.cc:74] Next update check in 7m20s
Sep 9 23:32:18.863405 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:32:18.863434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:32:18.866585 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:32:18.866611 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:32:18.867999 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:32:18.871508 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:32:18.874360 extend-filesystems[1481]: Resized partition /dev/vda9
Sep 9 23:32:18.876559 extend-filesystems[1534]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:32:18.881170 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:32:18.882406 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:32:18.882574 systemd-logind[1490]: New seat seat0.
Sep 9 23:32:18.883257 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:32:18.900714 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:32:18.912190 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:32:18.912190 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:32:18.912190 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:32:18.923373 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Sep 9 23:32:18.924755 bash[1547]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:32:18.917605 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:32:18.917899 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:32:18.952247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:32:18.954703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:32:18.975948 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:32:18.992761 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:32:19.044554 containerd[1515]: time="2025-09-09T23:32:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:32:19.045284 containerd[1515]: time="2025-09-09T23:32:19.045224392Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 23:32:19.057760 containerd[1515]: time="2025-09-09T23:32:19.057715320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.142µs"
Sep 9 23:32:19.058277 containerd[1515]: time="2025-09-09T23:32:19.058128553Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:32:19.058277 containerd[1515]: time="2025-09-09T23:32:19.058163376Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:32:19.058614 containerd[1515]: time="2025-09-09T23:32:19.058590259Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:32:19.058756 containerd[1515]: time="2025-09-09T23:32:19.058738802Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:32:19.059204 containerd[1515]: time="2025-09-09T23:32:19.058860663Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:32:19.059204 containerd[1515]: time="2025-09-09T23:32:19.058946220Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:32:19.059204 containerd[1515]: time="2025-09-09T23:32:19.058960816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059385396Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059413970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059440447Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059449081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059587058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059908649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059947707Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.059959013Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.060062948Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.060561533Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:32:19.062146 containerd[1515]: time="2025-09-09T23:32:19.060692603Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:32:19.065214 containerd[1515]: time="2025-09-09T23:32:19.065174972Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:32:19.065269 containerd[1515]: time="2025-09-09T23:32:19.065229530Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:32:19.065269 containerd[1515]: time="2025-09-09T23:32:19.065256048Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:32:19.065303 containerd[1515]: time="2025-09-09T23:32:19.065268464Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:32:19.065303 containerd[1515]: time="2025-09-09T23:32:19.065299012Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:32:19.065372 containerd[1515]: time="2025-09-09T23:32:19.065341729Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:32:19.065398 containerd[1515]: time="2025-09-09T23:32:19.065386090Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:32:19.065426 containerd[1515]: time="2025-09-09T23:32:19.065407511Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:32:19.065426 containerd[1515]: time="2025-09-09T23:32:19.065422476Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:32:19.065461 containerd[1515]: time="2025-09-09T23:32:19.065433741Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:32:19.065461 containerd[1515]: time="2025-09-09T23:32:19.065444390Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:32:19.065461 containerd[1515]: time="2025-09-09T23:32:19.065457094Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:32:19.065592 containerd[1515]: time="2025-09-09T23:32:19.065573733Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:32:19.065622 containerd[1515]: time="2025-09-09T23:32:19.065598072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:32:19.065622 containerd[1515]: time="2025-09-09T23:32:19.065615134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:32:19.065655 containerd[1515]: time="2025-09-09T23:32:19.065627222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:32:19.065655 containerd[1515]: time="2025-09-09T23:32:19.065638117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:32:19.065655 containerd[1515]: time="2025-09-09T23:32:19.065648231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:32:19.065709 containerd[1515]: time="2025-09-09T23:32:19.065658591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:32:19.065709 containerd[1515]: time="2025-09-09T23:32:19.065669034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:32:19.065709 containerd[1515]: time="2025-09-09T23:32:19.065683876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:32:19.065709 containerd[1515]: time="2025-09-09T23:32:19.065695141Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:32:19.065709 containerd[1515]: time="2025-09-09T23:32:19.065705708Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:32:19.065904 containerd[1515]: time="2025-09-09T23:32:19.065887389Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:32:19.065934 containerd[1515]: time="2025-09-09T23:32:19.065906219Z" level=info msg="Start snapshots syncer"
Sep 9 23:32:19.065953 containerd[1515]: time="2025-09-09T23:32:19.065933518Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:32:19.066201 containerd[1515]: time="2025-09-09T23:32:19.066165604Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:32:19.066302 containerd[1515]: time="2025-09-09T23:32:19.066228385Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:32:19.066302 containerd[1515]: time="2025-09-09T23:32:19.066293632Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:32:19.066419 containerd[1515]: time="2025-09-09T23:32:19.066397485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:32:19.066457 containerd[1515]: time="2025-09-09T23:32:19.066437571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:32:19.066457 containerd[1515]: time="2025-09-09T23:32:19.066451508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:32:19.066493 containerd[1515]: time="2025-09-09T23:32:19.066471901Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:32:19.066493 containerd[1515]: time="2025-09-09T23:32:19.066486455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:32:19.066534 containerd[1515]: time="2025-09-09T23:32:19.066498008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:32:19.066534 containerd[1515]: time="2025-09-09T23:32:19.066509684Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:32:19.066567 containerd[1515]: time="2025-09-09T23:32:19.066538299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:32:19.066567 containerd[1515]: time="2025-09-09T23:32:19.066549729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:32:19.066599 containerd[1515]: time="2025-09-09T23:32:19.066560377Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:32:19.066617 containerd[1515]: time="2025-09-09T23:32:19.066610700Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:32:19.066639 containerd[1515]: time="2025-09-09T23:32:19.066624925Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:32:19.066639 containerd[1515]: time="2025-09-09T23:32:19.066634505Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:32:19.066673 containerd[1515]: time="2025-09-09T23:32:19.066644331Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:32:19.066673 containerd[1515]: time="2025-09-09T23:32:19.066652677Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:32:19.066673 containerd[1515]: time="2025-09-09T23:32:19.066662421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:32:19.066725 containerd[1515]: time="2025-09-09T23:32:19.066672535Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:32:19.066764 containerd[1515]: time="2025-09-09T23:32:19.066749376Z" level=info msg="runtime interface created"
Sep 9 23:32:19.066764 containerd[1515]: time="2025-09-09T23:32:19.066759449Z" level=info msg="created NRI interface"
Sep 9 23:32:19.066807 containerd[1515]: time="2025-09-09T23:32:19.066768618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:32:19.066807 containerd[1515]: time="2025-09-09T23:32:19.066780088Z" level=info msg="Connect containerd service"
Sep 9 23:32:19.066849 containerd[1515]: time="2025-09-09T23:32:19.066805743Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:32:19.067476 containerd[1515]: time="2025-09-09T23:32:19.067441647Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:32:19.149366 containerd[1515]: time="2025-09-09T23:32:19.149281106Z" level=info msg="Start subscribing containerd event"
Sep 9 23:32:19.149497 containerd[1515]: time="2025-09-09T23:32:19.149385617Z" level=info msg="Start recovering state"
Sep 9 23:32:19.149497 containerd[1515]: time="2025-09-09T23:32:19.149474587Z" level=info msg="Start event monitor"
Sep 9 23:32:19.149497 containerd[1515]: time="2025-09-09T23:32:19.149489717Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:32:19.149558 containerd[1515]: time="2025-09-09T23:32:19.149497693Z" level=info msg="Start streaming server"
Sep 9 23:32:19.149558 containerd[1515]: time="2025-09-09T23:32:19.149507108Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:32:19.149558 containerd[1515]: time="2025-09-09T23:32:19.149514426Z" level=info msg="runtime interface starting up..."
Sep 9 23:32:19.149558 containerd[1515]: time="2025-09-09T23:32:19.149520223Z" level=info msg="starting plugins..."
Sep 9 23:32:19.149558 containerd[1515]: time="2025-09-09T23:32:19.149534613Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:32:19.149817 containerd[1515]: time="2025-09-09T23:32:19.149777471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:32:19.149876 containerd[1515]: time="2025-09-09T23:32:19.149860027Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:32:19.151531 containerd[1515]: time="2025-09-09T23:32:19.149929427Z" level=info msg="containerd successfully booted in 0.105831s"
Sep 9 23:32:19.150034 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:32:19.288208 tar[1507]: linux-arm64/README.md
Sep 9 23:32:19.312963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:32:19.971830 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:32:19.991024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:32:19.994259 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:32:20.017151 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:32:20.017398 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:32:20.020395 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:32:20.055350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:32:20.058475 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:32:20.060799 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:32:20.062244 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:32:20.622802 systemd-networkd[1425]: eth0: Gained IPv6LL
Sep 9 23:32:20.625184 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:32:20.627588 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:32:20.630232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:32:20.632853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:32:20.645758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:32:20.661263 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:32:20.661523 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:32:20.663641 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:32:20.666200 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:32:21.236727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:21.238456 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:32:21.240434 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:32:21.243240 systemd[1]: Startup finished in 2.082s (kernel) + 7.432s (initrd) + 4.257s (userspace) = 13.772s.
Sep 9 23:32:21.610345 kubelet[1623]: E0909 23:32:21.610237 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:32:21.612899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:32:21.613036 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:32:21.613402 systemd[1]: kubelet.service: Consumed 745ms CPU time, 257M memory peak.
Sep 9 23:32:22.837163 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:32:22.838350 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:48238.service - OpenSSH per-connection server daemon (10.0.0.1:48238).
Sep 9 23:32:22.912149 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 48238 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:22.913449 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:22.923209 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:32:22.927666 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:32:22.939280 systemd-logind[1490]: New session 1 of user core.
Sep 9 23:32:22.957883 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:32:22.961070 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:32:22.976440 (systemd)[1641]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:32:22.979195 systemd-logind[1490]: New session c1 of user core.
Sep 9 23:32:23.119701 systemd[1641]: Queued start job for default target default.target.
Sep 9 23:32:23.137240 systemd[1641]: Created slice app.slice - User Application Slice.
Sep 9 23:32:23.137273 systemd[1641]: Reached target paths.target - Paths.
Sep 9 23:32:23.137311 systemd[1641]: Reached target timers.target - Timers.
Sep 9 23:32:23.139140 systemd[1641]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:32:23.148611 systemd[1641]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:32:23.148689 systemd[1641]: Reached target sockets.target - Sockets.
Sep 9 23:32:23.148732 systemd[1641]: Reached target basic.target - Basic System.
Sep 9 23:32:23.148760 systemd[1641]: Reached target default.target - Main User Target.
Sep 9 23:32:23.148787 systemd[1641]: Startup finished in 161ms.
Sep 9 23:32:23.148992 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:32:23.150675 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:32:23.223554 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:48248.service - OpenSSH per-connection server daemon (10.0.0.1:48248).
Sep 9 23:32:23.298881 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 48248 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:23.301272 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:23.307384 systemd-logind[1490]: New session 2 of user core.
Sep 9 23:32:23.317307 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:32:23.370613 sshd[1654]: Connection closed by 10.0.0.1 port 48248
Sep 9 23:32:23.371082 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Sep 9 23:32:23.390399 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:48248.service: Deactivated successfully.
Sep 9 23:32:23.391823 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:32:23.394321 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:32:23.396040 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:48252.service - OpenSSH per-connection server daemon (10.0.0.1:48252).
Sep 9 23:32:23.396868 systemd-logind[1490]: Removed session 2.
Sep 9 23:32:23.453590 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 48252 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:23.454832 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:23.458721 systemd-logind[1490]: New session 3 of user core.
Sep 9 23:32:23.466319 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:32:23.515526 sshd[1663]: Connection closed by 10.0.0.1 port 48252
Sep 9 23:32:23.516030 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Sep 9 23:32:23.529511 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:48252.service: Deactivated successfully.
Sep 9 23:32:23.532440 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 23:32:23.533172 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Sep 9 23:32:23.535742 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:48260.service - OpenSSH per-connection server daemon (10.0.0.1:48260).
Sep 9 23:32:23.537294 systemd-logind[1490]: Removed session 3.
Sep 9 23:32:23.606029 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 48260 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:23.607376 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:23.613989 systemd-logind[1490]: New session 4 of user core.
Sep 9 23:32:23.624909 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:32:23.679018 sshd[1671]: Connection closed by 10.0.0.1 port 48260
Sep 9 23:32:23.679365 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Sep 9 23:32:23.695938 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:48260.service: Deactivated successfully.
Sep 9 23:32:23.698061 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:32:23.699904 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:32:23.703866 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:48266.service - OpenSSH per-connection server daemon (10.0.0.1:48266).
Sep 9 23:32:23.705250 systemd-logind[1490]: Removed session 4.
Sep 9 23:32:23.777670 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 48266 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:23.779794 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:23.787147 systemd-logind[1490]: New session 5 of user core.
Sep 9 23:32:23.797380 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:32:23.858834 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:32:23.859106 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:32:23.879930 sudo[1680]: pam_unix(sudo:session): session closed for user root
Sep 9 23:32:23.883170 sshd[1679]: Connection closed by 10.0.0.1 port 48266
Sep 9 23:32:23.883912 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Sep 9 23:32:23.897075 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:48266.service: Deactivated successfully.
Sep 9 23:32:23.899427 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:32:23.900642 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:32:23.905532 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:48276.service - OpenSSH per-connection server daemon (10.0.0.1:48276).
Sep 9 23:32:23.905984 systemd-logind[1490]: Removed session 5.
Sep 9 23:32:23.961070 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 48276 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:23.962498 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:23.969201 systemd-logind[1490]: New session 6 of user core.
Sep 9 23:32:23.980679 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:32:24.038635 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 23:32:24.040631 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:32:24.047408 sudo[1690]: pam_unix(sudo:session): session closed for user root
Sep 9 23:32:24.052872 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 23:32:24.053182 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:32:24.068200 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:32:24.128898 augenrules[1712]: No rules
Sep 9 23:32:24.129862 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:32:24.130081 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:32:24.133284 sudo[1689]: pam_unix(sudo:session): session closed for user root
Sep 9 23:32:24.137747 sshd[1688]: Connection closed by 10.0.0.1 port 48276
Sep 9 23:32:24.138144 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Sep 9 23:32:24.146216 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:48276.service: Deactivated successfully.
Sep 9 23:32:24.152805 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:32:24.155019 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:32:24.157394 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:48286.service - OpenSSH per-connection server daemon (10.0.0.1:48286).
Sep 9 23:32:24.160498 systemd-logind[1490]: Removed session 6.
Sep 9 23:32:24.215777 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 48286 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:32:24.217050 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:32:24.221206 systemd-logind[1490]: New session 7 of user core.
Sep 9 23:32:24.231317 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:32:24.284470 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 23:32:24.284738 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:32:24.609566 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 23:32:24.621455 (dockerd)[1745]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 23:32:24.851930 dockerd[1745]: time="2025-09-09T23:32:24.851614839Z" level=info msg="Starting up"
Sep 9 23:32:24.854269 dockerd[1745]: time="2025-09-09T23:32:24.852995842Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 23:32:25.062002 dockerd[1745]: time="2025-09-09T23:32:25.061846294Z" level=info msg="Loading containers: start."
Sep 9 23:32:25.071149 kernel: Initializing XFRM netlink socket
Sep 9 23:32:25.286420 systemd-networkd[1425]: docker0: Link UP
Sep 9 23:32:25.289668 dockerd[1745]: time="2025-09-09T23:32:25.289623975Z" level=info msg="Loading containers: done."
Sep 9 23:32:25.303440 dockerd[1745]: time="2025-09-09T23:32:25.303376525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 23:32:25.303719 dockerd[1745]: time="2025-09-09T23:32:25.303468662Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 9 23:32:25.303719 dockerd[1745]: time="2025-09-09T23:32:25.303568292Z" level=info msg="Initializing buildkit"
Sep 9 23:32:25.332893 dockerd[1745]: time="2025-09-09T23:32:25.332749476Z" level=info msg="Completed buildkit initialization"
Sep 9 23:32:25.337789 dockerd[1745]: time="2025-09-09T23:32:25.337746582Z" level=info msg="Daemon has completed initialization"
Sep 9 23:32:25.338418 dockerd[1745]: time="2025-09-09T23:32:25.337807291Z" level=info msg="API listen on /run/docker.sock"
Sep 9 23:32:25.338016 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 23:32:26.170848 containerd[1515]: time="2025-09-09T23:32:26.170803557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 23:32:26.801017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345260676.mount: Deactivated successfully.
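The containerd entries that follow record each image pull with its byte count and wall-clock duration ("Pulled image ... in 1.866949865s" and so on). A small, hypothetical helper for extracting the trailing duration from such a line; the sample line is abbreviated from the log:

```shell
# Pull the trailing duration out of a containerd "Pulled image" entry.
line='Pulled image "registry.k8s.io/kube-apiserver:v1.32.8" ... in 1.866949865s'
echo "$line" | grep -oE '[0-9.]+s$'   # → 1.866949865s
```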
Sep 9 23:32:28.030500 containerd[1515]: time="2025-09-09T23:32:28.030444787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:28.031040 containerd[1515]: time="2025-09-09T23:32:28.031008466Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 23:32:28.031907 containerd[1515]: time="2025-09-09T23:32:28.031851666Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:28.035879 containerd[1515]: time="2025-09-09T23:32:28.035823999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:28.037920 containerd[1515]: time="2025-09-09T23:32:28.037790725Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.866949865s"
Sep 9 23:32:28.037920 containerd[1515]: time="2025-09-09T23:32:28.037847314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 23:32:28.038635 containerd[1515]: time="2025-09-09T23:32:28.038607222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 23:32:29.414974 containerd[1515]: time="2025-09-09T23:32:29.414913725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:29.416020 containerd[1515]: time="2025-09-09T23:32:29.415994503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 23:32:29.417142 containerd[1515]: time="2025-09-09T23:32:29.417068552Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:29.421117 containerd[1515]: time="2025-09-09T23:32:29.421047482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:29.422505 containerd[1515]: time="2025-09-09T23:32:29.421967612Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.383239032s"
Sep 9 23:32:29.422505 containerd[1515]: time="2025-09-09T23:32:29.422004118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 23:32:29.422602 containerd[1515]: time="2025-09-09T23:32:29.422537919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 23:32:30.827219 containerd[1515]: time="2025-09-09T23:32:30.827140912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:30.828350 containerd[1515]: time="2025-09-09T23:32:30.828301261Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 23:32:30.830254 containerd[1515]: time="2025-09-09T23:32:30.830209653Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:30.834657 containerd[1515]: time="2025-09-09T23:32:30.834600251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:30.836269 containerd[1515]: time="2025-09-09T23:32:30.836222582Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.413629296s"
Sep 9 23:32:30.836269 containerd[1515]: time="2025-09-09T23:32:30.836264891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 23:32:30.837197 containerd[1515]: time="2025-09-09T23:32:30.837153993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 23:32:31.618447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:32:31.619842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:32:31.783682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:31.804471 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:32:31.852673 kubelet[2033]: E0909 23:32:31.852626 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:32:31.855991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:32:31.856160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:32:31.856491 systemd[1]: kubelet.service: Consumed 160ms CPU time, 106.4M memory peak.
Sep 9 23:32:31.930679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920176106.mount: Deactivated successfully.
Sep 9 23:32:32.345959 containerd[1515]: time="2025-09-09T23:32:32.345839802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:32.346536 containerd[1515]: time="2025-09-09T23:32:32.346497895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 23:32:32.347412 containerd[1515]: time="2025-09-09T23:32:32.347384222Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:32.349503 containerd[1515]: time="2025-09-09T23:32:32.349460319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:32.350241 containerd[1515]: time="2025-09-09T23:32:32.350211145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.51302178s"
Sep 9 23:32:32.350276 containerd[1515]: time="2025-09-09T23:32:32.350247643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 23:32:32.350880 containerd[1515]: time="2025-09-09T23:32:32.350856375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:32:32.972396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545650427.mount: Deactivated successfully.
Sep 9 23:32:33.923878 containerd[1515]: time="2025-09-09T23:32:33.923803384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:33.925344 containerd[1515]: time="2025-09-09T23:32:33.925268166Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 23:32:33.926164 containerd[1515]: time="2025-09-09T23:32:33.926096708Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:33.929056 containerd[1515]: time="2025-09-09T23:32:33.929006065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:33.930077 containerd[1515]: time="2025-09-09T23:32:33.930033055Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.579140953s"
Sep 9 23:32:33.930077 containerd[1515]: time="2025-09-09T23:32:33.930075878Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:32:33.930615 containerd[1515]: time="2025-09-09T23:32:33.930580756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:32:34.469912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957448670.mount: Deactivated successfully.
Sep 9 23:32:34.477129 containerd[1515]: time="2025-09-09T23:32:34.477047776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:32:34.477961 containerd[1515]: time="2025-09-09T23:32:34.477899645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 23:32:34.478923 containerd[1515]: time="2025-09-09T23:32:34.478879552Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:32:34.480773 containerd[1515]: time="2025-09-09T23:32:34.480711608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:32:34.481699 containerd[1515]: time="2025-09-09T23:32:34.481310409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.700373ms"
Sep 9 23:32:34.481699 containerd[1515]: time="2025-09-09T23:32:34.481342730Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 23:32:34.482215 containerd[1515]: time="2025-09-09T23:32:34.482004929Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 23:32:35.024499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649085114.mount: Deactivated successfully.
Sep 9 23:32:37.434233 containerd[1515]: time="2025-09-09T23:32:37.434178863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:37.434717 containerd[1515]: time="2025-09-09T23:32:37.434675149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 23:32:37.436959 containerd[1515]: time="2025-09-09T23:32:37.436912285Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:37.441265 containerd[1515]: time="2025-09-09T23:32:37.441217131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:32:37.442412 containerd[1515]: time="2025-09-09T23:32:37.442246675Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.960186959s"
Sep 9 23:32:37.442412 containerd[1515]: time="2025-09-09T23:32:37.442286215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 23:32:41.862720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 23:32:41.864258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:32:42.037289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:42.041471 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:32:42.083581 kubelet[2187]: E0909 23:32:42.083524 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:32:42.086237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:32:42.086504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:32:42.088197 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.6M memory peak.
Sep 9 23:32:42.741039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:42.741200 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.6M memory peak.
Sep 9 23:32:42.745913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:32:42.788648 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)...
Sep 9 23:32:42.788664 systemd[1]: Reloading...
Sep 9 23:32:42.856194 zram_generator::config[2246]: No configuration found.
Sep 9 23:32:43.042027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 23:32:43.132801 systemd[1]: Reloading finished in 343 ms.
Sep 9 23:32:43.191666 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 23:32:43.191983 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 23:32:43.192466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:43.193207 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.1M memory peak.
Sep 9 23:32:43.195815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:32:43.321390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:32:43.325245 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:32:43.367751 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:32:43.367751 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:32:43.367751 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
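The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` belong in the file passed via `--config`. A hedged sketch of the corresponding `KubeletConfiguration` fragment: the endpoint socket path is an assumption (a common containerd default), and the plugin directory matches the `/opt/libexec/...` Flexvolume path this kubelet logs further down. Field names follow the `kubelet.config.k8s.io/v1beta1` schema.

```shell
# Write a minimal KubeletConfiguration covering the two deprecated flags.
# containerRuntimeEndpoint value is an assumption; volumePluginDir is the
# Flexvolume path the kubelet reports in this log.
cat > kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
grep -c ':' kubelet-config.yaml   # → 4
```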
Sep 9 23:32:43.368085 kubelet[2292]: I0909 23:32:43.367768 2292 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:32:44.018133 kubelet[2292]: I0909 23:32:44.016496 2292 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:32:44.018133 kubelet[2292]: I0909 23:32:44.016533 2292 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:32:44.018133 kubelet[2292]: I0909 23:32:44.016805 2292 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:32:44.046743 kubelet[2292]: E0909 23:32:44.046691 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:32:44.047641 kubelet[2292]: I0909 23:32:44.047611 2292 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:32:44.054851 kubelet[2292]: I0909 23:32:44.054819 2292 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:32:44.058365 kubelet[2292]: I0909 23:32:44.058332 2292 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:32:44.059081 kubelet[2292]: I0909 23:32:44.059009 2292 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:32:44.059305 kubelet[2292]: I0909 23:32:44.059078 2292 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:32:44.059394 kubelet[2292]: I0909 23:32:44.059375 2292 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:32:44.059394 kubelet[2292]: I0909 23:32:44.059385 2292 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:32:44.059637 kubelet[2292]: I0909 23:32:44.059609 2292 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:32:44.062358 kubelet[2292]: I0909 23:32:44.062326 2292 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:32:44.062358 kubelet[2292]: I0909 23:32:44.062360 2292 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:32:44.062430 kubelet[2292]: I0909 23:32:44.062390 2292 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:32:44.062430 kubelet[2292]: I0909 23:32:44.062400 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:32:44.066220 kubelet[2292]: W0909 23:32:44.066154 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 9 23:32:44.066283 kubelet[2292]: E0909 23:32:44.066230 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:32:44.066791 kubelet[2292]: I0909 23:32:44.066759 2292 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 9 23:32:44.066867 kubelet[2292]: W0909 23:32:44.066830 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 9 23:32:44.066900 kubelet[2292]: E0909 23:32:44.066877 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:32:44.067566 kubelet[2292]: I0909 23:32:44.067536 2292 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:32:44.067709 kubelet[2292]: W0909 23:32:44.067691 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:32:44.069287 kubelet[2292]: I0909 23:32:44.068701 2292 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:32:44.069287 kubelet[2292]: I0909 23:32:44.068769 2292 server.go:1287] "Started kubelet"
Sep 9 23:32:44.069922 kubelet[2292]: I0909 23:32:44.069878 2292 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:32:44.072484 kubelet[2292]: I0909 23:32:44.072388 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:32:44.072768 kubelet[2292]: I0909 23:32:44.072739 2292 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:32:44.075340 kubelet[2292]: I0909 23:32:44.072738 2292 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:32:44.075340 kubelet[2292]: I0909 23:32:44.074318 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:32:44.075340 kubelet[2292]: I0909 23:32:44.074440 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:32:44.075340 kubelet[2292]: E0909 23:32:44.074424 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c140e500f7d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:32:44.068739024 +0000 UTC m=+0.740413541,LastTimestamp:2025-09-09 23:32:44.068739024 +0000 UTC m=+0.740413541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 23:32:44.075660 kubelet[2292]: E0909 23:32:44.075628 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 23:32:44.075697 kubelet[2292]: I0909 23:32:44.075672 2292 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 23:32:44.075866 kubelet[2292]: I0909 23:32:44.075845 2292 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 23:32:44.075912 kubelet[2292]: E0909 23:32:44.075851 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms"
Sep 9 23:32:44.075912 kubelet[2292]: I0909 23:32:44.075909 2292 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:32:44.076306 kubelet[2292]: W0909 23:32:44.076246 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 9 23:32:44.076376 kubelet[2292]: E0909 23:32:44.076318 2292
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:32:44.076712 kubelet[2292]: I0909 23:32:44.076668 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:32:44.077297 kubelet[2292]: E0909 23:32:44.077178 2292 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:32:44.077684 kubelet[2292]: I0909 23:32:44.077664 2292 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:32:44.077684 kubelet[2292]: I0909 23:32:44.077682 2292 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:32:44.090029 kubelet[2292]: I0909 23:32:44.089959 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:32:44.091197 kubelet[2292]: I0909 23:32:44.091148 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:32:44.091197 kubelet[2292]: I0909 23:32:44.091181 2292 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:32:44.091293 kubelet[2292]: I0909 23:32:44.091205 2292 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:32:44.091293 kubelet[2292]: I0909 23:32:44.091212 2292 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:32:44.091293 kubelet[2292]: E0909 23:32:44.091262 2292 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:32:44.092134 kubelet[2292]: W0909 23:32:44.091861 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Sep 9 23:32:44.092134 kubelet[2292]: E0909 23:32:44.091911 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:32:44.093539 kubelet[2292]: I0909 23:32:44.093501 2292 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:32:44.093684 kubelet[2292]: I0909 23:32:44.093671 2292 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:32:44.093783 kubelet[2292]: I0909 23:32:44.093773 2292 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:32:44.174148 kubelet[2292]: I0909 23:32:44.174078 2292 policy_none.go:49] "None policy: Start" Sep 9 23:32:44.174326 kubelet[2292]: I0909 23:32:44.174313 2292 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:32:44.174681 kubelet[2292]: I0909 23:32:44.174396 2292 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:32:44.176147 kubelet[2292]: E0909 23:32:44.176093 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:32:44.180965 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Sep 9 23:32:44.191599 kubelet[2292]: E0909 23:32:44.191551 2292 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:32:44.195926 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:32:44.199266 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:32:44.209992 kubelet[2292]: I0909 23:32:44.209952 2292 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:32:44.210207 kubelet[2292]: I0909 23:32:44.210187 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:32:44.210262 kubelet[2292]: I0909 23:32:44.210206 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:32:44.210533 kubelet[2292]: I0909 23:32:44.210508 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:32:44.211417 kubelet[2292]: E0909 23:32:44.211395 2292 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 23:32:44.211564 kubelet[2292]: E0909 23:32:44.211550 2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:32:44.276820 kubelet[2292]: E0909 23:32:44.276700 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Sep 9 23:32:44.312073 kubelet[2292]: I0909 23:32:44.312024 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:32:44.312554 kubelet[2292]: E0909 23:32:44.312516 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 9 23:32:44.402939 systemd[1]: Created slice kubepods-burstable-poddd3e1ff237bcee35aaf92749c6603557.slice - libcontainer container kubepods-burstable-poddd3e1ff237bcee35aaf92749c6603557.slice. Sep 9 23:32:44.414949 kubelet[2292]: E0909 23:32:44.414911 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:44.417131 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 23:32:44.419133 kubelet[2292]: E0909 23:32:44.419092 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:44.421176 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 9 23:32:44.423064 kubelet[2292]: E0909 23:32:44.423035 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:44.477427 kubelet[2292]: I0909 23:32:44.477384 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:44.477528 kubelet[2292]: I0909 23:32:44.477468 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:44.477563 kubelet[2292]: I0909 23:32:44.477521 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:44.477563 kubelet[2292]: I0909 23:32:44.477552 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:44.477606 kubelet[2292]: I0909 23:32:44.477577 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:44.477606 kubelet[2292]: I0909 23:32:44.477593 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:44.477647 kubelet[2292]: I0909 23:32:44.477611 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:44.477647 kubelet[2292]: I0909 23:32:44.477640 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:44.477698 kubelet[2292]: I0909 23:32:44.477657 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:44.514878 kubelet[2292]: I0909 23:32:44.514577 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:32:44.515012 
kubelet[2292]: E0909 23:32:44.514979 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 9 23:32:44.677659 kubelet[2292]: E0909 23:32:44.677622 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Sep 9 23:32:44.716049 kubelet[2292]: E0909 23:32:44.716007 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.716736 containerd[1515]: time="2025-09-09T23:32:44.716685571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd3e1ff237bcee35aaf92749c6603557,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:44.720255 kubelet[2292]: E0909 23:32:44.720231 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.720747 containerd[1515]: time="2025-09-09T23:32:44.720707502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:44.723513 kubelet[2292]: E0909 23:32:44.723476 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.724070 containerd[1515]: time="2025-09-09T23:32:44.723962956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:44.748130 
containerd[1515]: time="2025-09-09T23:32:44.747203501Z" level=info msg="connecting to shim 7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781" address="unix:///run/containerd/s/c77ad1a0c9a3aecc4e61d575101edfc3e07105e0240af5b236281ef56cff4053" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:44.749880 containerd[1515]: time="2025-09-09T23:32:44.749837581Z" level=info msg="connecting to shim b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056" address="unix:///run/containerd/s/84345f0061af73e6d101ea32ca1feb725820c2b4fabac131f7d7dea75304372c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:44.773374 containerd[1515]: time="2025-09-09T23:32:44.773320486Z" level=info msg="connecting to shim 8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840" address="unix:///run/containerd/s/2ca8cd54020ddea9b3da6ec6105a4d7e142ffb692043bd5f53fac2bc8180046e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:44.778344 systemd[1]: Started cri-containerd-7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781.scope - libcontainer container 7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781. Sep 9 23:32:44.779580 systemd[1]: Started cri-containerd-b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056.scope - libcontainer container b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056. Sep 9 23:32:44.801332 systemd[1]: Started cri-containerd-8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840.scope - libcontainer container 8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840. 
Sep 9 23:32:44.837998 containerd[1515]: time="2025-09-09T23:32:44.837893918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056\"" Sep 9 23:32:44.840059 kubelet[2292]: E0909 23:32:44.839969 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.844143 containerd[1515]: time="2025-09-09T23:32:44.843088807Z" level=info msg="CreateContainer within sandbox \"b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:32:44.852287 containerd[1515]: time="2025-09-09T23:32:44.852237960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dd3e1ff237bcee35aaf92749c6603557,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781\"" Sep 9 23:32:44.853260 kubelet[2292]: E0909 23:32:44.853033 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.854788 containerd[1515]: time="2025-09-09T23:32:44.854739149Z" level=info msg="CreateContainer within sandbox \"7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:32:44.888509 containerd[1515]: time="2025-09-09T23:32:44.888456598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840\"" Sep 9 23:32:44.889377 
kubelet[2292]: E0909 23:32:44.889354 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:44.890953 containerd[1515]: time="2025-09-09T23:32:44.890912102Z" level=info msg="CreateContainer within sandbox \"8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:32:44.917555 kubelet[2292]: I0909 23:32:44.917502 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:32:44.917921 kubelet[2292]: E0909 23:32:44.917889 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 9 23:32:44.921151 containerd[1515]: time="2025-09-09T23:32:44.920745196Z" level=info msg="Container 62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:32:44.923295 containerd[1515]: time="2025-09-09T23:32:44.923253352Z" level=info msg="Container 99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:32:44.925153 containerd[1515]: time="2025-09-09T23:32:44.924950788Z" level=info msg="Container 44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:32:44.929607 containerd[1515]: time="2025-09-09T23:32:44.929498398Z" level=info msg="CreateContainer within sandbox \"b83262b1187be5c0c07611362aa7f3d327d760ad12e119916dcf88885b6d0056\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6\"" Sep 9 23:32:44.931470 containerd[1515]: time="2025-09-09T23:32:44.931436711Z" level=info msg="StartContainer for 
\"62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6\"" Sep 9 23:32:44.932594 containerd[1515]: time="2025-09-09T23:32:44.932568188Z" level=info msg="connecting to shim 62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6" address="unix:///run/containerd/s/84345f0061af73e6d101ea32ca1feb725820c2b4fabac131f7d7dea75304372c" protocol=ttrpc version=3 Sep 9 23:32:44.934210 containerd[1515]: time="2025-09-09T23:32:44.934173293Z" level=info msg="CreateContainer within sandbox \"8d3a1719329ca9e015046abbc8fffcccefb59be10ceb851adb83fb57c51a4840\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2\"" Sep 9 23:32:44.934837 containerd[1515]: time="2025-09-09T23:32:44.934811243Z" level=info msg="StartContainer for \"44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2\"" Sep 9 23:32:44.935606 containerd[1515]: time="2025-09-09T23:32:44.935572114Z" level=info msg="CreateContainer within sandbox \"7cbc88e2405e3f60a5bf05bbbb47b061f2ad54c2e7cdea41dbf1cebc3b96f781\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666\"" Sep 9 23:32:44.935919 containerd[1515]: time="2025-09-09T23:32:44.935888987Z" level=info msg="connecting to shim 44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2" address="unix:///run/containerd/s/2ca8cd54020ddea9b3da6ec6105a4d7e142ffb692043bd5f53fac2bc8180046e" protocol=ttrpc version=3 Sep 9 23:32:44.937127 containerd[1515]: time="2025-09-09T23:32:44.936034050Z" level=info msg="StartContainer for \"99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666\"" Sep 9 23:32:44.937240 containerd[1515]: time="2025-09-09T23:32:44.937098501Z" level=info msg="connecting to shim 99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666" 
address="unix:///run/containerd/s/c77ad1a0c9a3aecc4e61d575101edfc3e07105e0240af5b236281ef56cff4053" protocol=ttrpc version=3 Sep 9 23:32:44.943652 kubelet[2292]: W0909 23:32:44.943590 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Sep 9 23:32:44.943844 kubelet[2292]: E0909 23:32:44.943820 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:32:44.956311 systemd[1]: Started cri-containerd-62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6.scope - libcontainer container 62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6. Sep 9 23:32:44.961189 systemd[1]: Started cri-containerd-44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2.scope - libcontainer container 44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2. Sep 9 23:32:44.962370 systemd[1]: Started cri-containerd-99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666.scope - libcontainer container 99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666. 
Sep 9 23:32:45.010324 kubelet[2292]: W0909 23:32:45.010220 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Sep 9 23:32:45.010324 kubelet[2292]: E0909 23:32:45.010320 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:32:45.014302 containerd[1515]: time="2025-09-09T23:32:45.014251252Z" level=info msg="StartContainer for \"62dffedfb17fb6b391f8d64e163996b866293ea5e0e3fa26f41616f37f0a34e6\" returns successfully" Sep 9 23:32:45.018153 containerd[1515]: time="2025-09-09T23:32:45.018118913Z" level=info msg="StartContainer for \"44b86ab948f90d28d4aaafbb367df1d573a2ae6c54f9c4020f84672607022cc2\" returns successfully" Sep 9 23:32:45.019042 containerd[1515]: time="2025-09-09T23:32:45.019002797Z" level=info msg="StartContainer for \"99bcb6a9db330d2802c7d62e61e5d28a5d01c23c0316103f10e3026e26bfb666\" returns successfully" Sep 9 23:32:45.100592 kubelet[2292]: E0909 23:32:45.100541 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:45.100792 kubelet[2292]: E0909 23:32:45.100693 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:45.103070 kubelet[2292]: E0909 23:32:45.103040 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:45.103270 kubelet[2292]: E0909 
23:32:45.103243 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:45.106881 kubelet[2292]: E0909 23:32:45.106845 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:45.107026 kubelet[2292]: E0909 23:32:45.106981 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:45.719788 kubelet[2292]: I0909 23:32:45.719754 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:32:46.107757 kubelet[2292]: E0909 23:32:46.107666 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:46.107842 kubelet[2292]: E0909 23:32:46.107790 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:46.109021 kubelet[2292]: E0909 23:32:46.108999 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:46.109172 kubelet[2292]: E0909 23:32:46.109153 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:46.448718 kubelet[2292]: E0909 23:32:46.448652 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:32:46.449214 kubelet[2292]: E0909 23:32:46.449182 2292 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:47.139934 kubelet[2292]: E0909 23:32:47.139752 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 23:32:47.227701 kubelet[2292]: I0909 23:32:47.227665 2292 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:32:47.276575 kubelet[2292]: I0909 23:32:47.276537 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:47.282448 kubelet[2292]: E0909 23:32:47.282406 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:47.282448 kubelet[2292]: I0909 23:32:47.282441 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:47.284769 kubelet[2292]: E0909 23:32:47.284465 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:47.284769 kubelet[2292]: I0909 23:32:47.284527 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:47.286590 kubelet[2292]: E0909 23:32:47.286563 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:48.067828 kubelet[2292]: I0909 23:32:48.067512 2292 apiserver.go:52] "Watching apiserver" Sep 9 23:32:48.076012 kubelet[2292]: I0909 23:32:48.075973 2292 desired_state_of_world_populator.go:158] 
"Finished populating initial desired state of world" Sep 9 23:32:48.790361 kubelet[2292]: I0909 23:32:48.790324 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:48.795760 kubelet[2292]: E0909 23:32:48.795733 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:49.112905 kubelet[2292]: E0909 23:32:49.112860 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:49.268201 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-7.scope)... Sep 9 23:32:49.268217 systemd[1]: Reloading... Sep 9 23:32:49.340349 zram_generator::config[2615]: No configuration found. Sep 9 23:32:49.405885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:32:49.503087 systemd[1]: Reloading finished in 234 ms. Sep 9 23:32:49.531193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:32:49.548472 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:32:49.548720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:32:49.548775 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 128M memory peak. Sep 9 23:32:49.551376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:32:49.705586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:32:49.709348 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:32:49.751332 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:32:49.752429 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:32:49.752429 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:32:49.752429 kubelet[2654]: I0909 23:32:49.751471 2654 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:32:49.759553 kubelet[2654]: I0909 23:32:49.759504 2654 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:32:49.759553 kubelet[2654]: I0909 23:32:49.759532 2654 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:32:49.759785 kubelet[2654]: I0909 23:32:49.759768 2654 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:32:49.761151 kubelet[2654]: I0909 23:32:49.761133 2654 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 23:32:49.763346 kubelet[2654]: I0909 23:32:49.763224 2654 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:32:49.766687 kubelet[2654]: I0909 23:32:49.766659 2654 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:32:49.769312 kubelet[2654]: I0909 23:32:49.769288 2654 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:32:49.769554 kubelet[2654]: I0909 23:32:49.769529 2654 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:32:49.769709 kubelet[2654]: I0909 23:32:49.769553 2654 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:32:49.769788 kubelet[2654]: I0909 23:32:49.769719 2654 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:32:49.769788 kubelet[2654]: I0909 23:32:49.769728 2654 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 23:32:49.769788 kubelet[2654]: I0909 23:32:49.769770 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:32:49.770101 kubelet[2654]: I0909 23:32:49.769887 2654 kubelet.go:446] "Attempting to sync node with API server" Sep 9 23:32:49.770101 kubelet[2654]: I0909 23:32:49.769903 2654 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:32:49.770101 kubelet[2654]: I0909 23:32:49.769922 2654 kubelet.go:352] "Adding apiserver pod source" Sep 9 23:32:49.770101 kubelet[2654]: I0909 23:32:49.769930 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:32:49.771385 kubelet[2654]: I0909 23:32:49.771363 2654 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 23:32:49.771885 kubelet[2654]: I0909 23:32:49.771858 2654 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:32:49.772327 kubelet[2654]: I0909 23:32:49.772304 2654 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:32:49.772367 kubelet[2654]: I0909 23:32:49.772336 2654 server.go:1287] "Started kubelet" Sep 9 23:32:49.772698 kubelet[2654]: I0909 23:32:49.772662 2654 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:32:49.772834 kubelet[2654]: I0909 23:32:49.772793 2654 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:32:49.773075 kubelet[2654]: I0909 23:32:49.773050 2654 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:32:49.777592 kubelet[2654]: I0909 23:32:49.777380 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:32:49.779064 kubelet[2654]: I0909 23:32:49.779022 2654 server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:32:49.780077 kubelet[2654]: I0909 23:32:49.780038 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:32:49.780244 kubelet[2654]: E0909 23:32:49.780204 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:32:49.780244 kubelet[2654]: I0909 23:32:49.780211 2654 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:32:49.780363 kubelet[2654]: I0909 23:32:49.780334 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:32:49.781459 kubelet[2654]: I0909 23:32:49.781417 2654 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:32:49.781872 kubelet[2654]: I0909 23:32:49.781851 2654 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:32:49.782052 kubelet[2654]: I0909 23:32:49.782040 2654 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:32:49.782279 kubelet[2654]: I0909 23:32:49.782259 2654 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:32:49.787936 kubelet[2654]: I0909 23:32:49.787895 2654 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 23:32:49.792807 kubelet[2654]: I0909 23:32:49.792780 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:32:49.792919 kubelet[2654]: I0909 23:32:49.792908 2654 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:32:49.792985 kubelet[2654]: I0909 23:32:49.792974 2654 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:32:49.793327 kubelet[2654]: I0909 23:32:49.793033 2654 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:32:49.793327 kubelet[2654]: E0909 23:32:49.793079 2654 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:32:49.830910 kubelet[2654]: I0909 23:32:49.830881 2654 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:32:49.831141 kubelet[2654]: I0909 23:32:49.831124 2654 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:32:49.831213 kubelet[2654]: I0909 23:32:49.831204 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:32:49.831425 kubelet[2654]: I0909 23:32:49.831408 2654 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:32:49.831525 kubelet[2654]: I0909 23:32:49.831499 2654 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:32:49.831578 kubelet[2654]: I0909 23:32:49.831570 2654 policy_none.go:49] "None policy: Start" Sep 9 23:32:49.831626 kubelet[2654]: I0909 23:32:49.831618 2654 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:32:49.831679 kubelet[2654]: I0909 23:32:49.831670 2654 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:32:49.831832 kubelet[2654]: I0909 23:32:49.831820 2654 state_mem.go:75] "Updated machine memory state" Sep 9 23:32:49.835863 kubelet[2654]: I0909 23:32:49.835392 2654 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:32:49.835863 kubelet[2654]: I0909 23:32:49.835588 2654 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:32:49.835863 kubelet[2654]: I0909 23:32:49.835601 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:32:49.835863 kubelet[2654]: I0909 23:32:49.835793 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:32:49.836706 kubelet[2654]: E0909 23:32:49.836686 2654 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:32:49.894617 kubelet[2654]: I0909 23:32:49.894568 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:49.894741 kubelet[2654]: I0909 23:32:49.894671 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:49.894796 kubelet[2654]: I0909 23:32:49.894572 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:49.901406 kubelet[2654]: E0909 23:32:49.901369 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:49.939129 kubelet[2654]: I0909 23:32:49.939092 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:32:49.949123 kubelet[2654]: I0909 23:32:49.949076 2654 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 23:32:49.949300 kubelet[2654]: I0909 23:32:49.949288 2654 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:32:49.983551 kubelet[2654]: I0909 23:32:49.983430 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:49.983551 kubelet[2654]: I0909 23:32:49.983470 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:49.983551 kubelet[2654]: I0909 23:32:49.983491 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:49.983551 kubelet[2654]: I0909 23:32:49.983509 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:49.983551 kubelet[2654]: I0909 23:32:49.983525 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:49.984156 kubelet[2654]: I0909 23:32:49.984134 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:49.984279 kubelet[2654]: I0909 23:32:49.984266 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:49.984610 kubelet[2654]: I0909 23:32:49.984567 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3e1ff237bcee35aaf92749c6603557-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dd3e1ff237bcee35aaf92749c6603557\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:49.984842 kubelet[2654]: I0909 23:32:49.984734 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:32:50.200892 kubelet[2654]: E0909 23:32:50.200791 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.200892 kubelet[2654]: E0909 23:32:50.200860 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.201890 kubelet[2654]: E0909 23:32:50.201836 2654 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.268226 sudo[2692]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:32:50.268476 sudo[2692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:32:50.698446 sudo[2692]: pam_unix(sudo:session): session closed for user root Sep 9 23:32:50.770913 kubelet[2654]: I0909 23:32:50.770869 2654 apiserver.go:52] "Watching apiserver" Sep 9 23:32:50.782862 kubelet[2654]: I0909 23:32:50.782829 2654 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:32:50.811647 kubelet[2654]: I0909 23:32:50.811608 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:50.812569 kubelet[2654]: I0909 23:32:50.812168 2654 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:50.812778 kubelet[2654]: E0909 23:32:50.812328 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.818730 kubelet[2654]: E0909 23:32:50.818681 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 23:32:50.819787 kubelet[2654]: E0909 23:32:50.819156 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.820333 kubelet[2654]: E0909 23:32:50.819770 2654 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:32:50.820333 
kubelet[2654]: E0909 23:32:50.820280 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:50.840634 kubelet[2654]: I0909 23:32:50.840549 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.8403631860000003 podStartE2EDuration="2.840363186s" podCreationTimestamp="2025-09-09 23:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:32:50.840363946 +0000 UTC m=+1.124147783" watchObservedRunningTime="2025-09-09 23:32:50.840363186 +0000 UTC m=+1.124147023" Sep 9 23:32:50.848229 kubelet[2654]: I0909 23:32:50.848169 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.848153159 podStartE2EDuration="1.848153159s" podCreationTimestamp="2025-09-09 23:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:32:50.84813179 +0000 UTC m=+1.131915627" watchObservedRunningTime="2025-09-09 23:32:50.848153159 +0000 UTC m=+1.131936996" Sep 9 23:32:50.868726 kubelet[2654]: I0909 23:32:50.868569 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.868552285 podStartE2EDuration="1.868552285s" podCreationTimestamp="2025-09-09 23:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:32:50.856776622 +0000 UTC m=+1.140560459" watchObservedRunningTime="2025-09-09 23:32:50.868552285 +0000 UTC m=+1.152336122" Sep 9 23:32:51.813420 kubelet[2654]: E0909 23:32:51.813079 2654 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:51.813712 kubelet[2654]: E0909 23:32:51.813553 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:52.540754 sudo[1724]: pam_unix(sudo:session): session closed for user root Sep 9 23:32:52.542157 sshd[1723]: Connection closed by 10.0.0.1 port 48286 Sep 9 23:32:52.542844 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 9 23:32:52.546493 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:48286.service: Deactivated successfully. Sep 9 23:32:52.548789 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:32:52.549066 systemd[1]: session-7.scope: Consumed 7.608s CPU time, 262.6M memory peak. Sep 9 23:32:52.550020 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:32:52.551051 systemd-logind[1490]: Removed session 7. Sep 9 23:32:54.503700 kubelet[2654]: E0909 23:32:54.503626 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:56.514251 kubelet[2654]: I0909 23:32:56.514204 2654 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:32:56.514834 containerd[1515]: time="2025-09-09T23:32:56.514491687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 23:32:56.515581 kubelet[2654]: I0909 23:32:56.515078 2654 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:32:57.420431 systemd[1]: Created slice kubepods-besteffort-podb108c140_56c6_49de_b217_349f15dc1a61.slice - libcontainer container kubepods-besteffort-podb108c140_56c6_49de_b217_349f15dc1a61.slice. Sep 9 23:32:57.439373 kubelet[2654]: I0909 23:32:57.439329 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b108c140-56c6-49de-b217-349f15dc1a61-lib-modules\") pod \"kube-proxy-5xvr8\" (UID: \"b108c140-56c6-49de-b217-349f15dc1a61\") " pod="kube-system/kube-proxy-5xvr8" Sep 9 23:32:57.439523 kubelet[2654]: I0909 23:32:57.439383 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-etc-cni-netd\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439523 kubelet[2654]: I0909 23:32:57.439410 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbfn\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-kube-api-access-drbfn\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439523 kubelet[2654]: I0909 23:32:57.439432 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxkw\" (UniqueName: \"kubernetes.io/projected/b108c140-56c6-49de-b217-349f15dc1a61-kube-api-access-rxxkw\") pod \"kube-proxy-5xvr8\" (UID: \"b108c140-56c6-49de-b217-349f15dc1a61\") " pod="kube-system/kube-proxy-5xvr8" Sep 9 23:32:57.439523 kubelet[2654]: I0909 23:32:57.439471 2654 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b108c140-56c6-49de-b217-349f15dc1a61-xtables-lock\") pod \"kube-proxy-5xvr8\" (UID: \"b108c140-56c6-49de-b217-349f15dc1a61\") " pod="kube-system/kube-proxy-5xvr8" Sep 9 23:32:57.439523 kubelet[2654]: I0909 23:32:57.439490 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-run\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439634 kubelet[2654]: I0909 23:32:57.439516 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33f3c0a1-6150-41be-b80e-4460d6094132-clustermesh-secrets\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439634 kubelet[2654]: I0909 23:32:57.439537 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-hostproc\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439634 kubelet[2654]: I0909 23:32:57.439556 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-xtables-lock\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439634 kubelet[2654]: I0909 23:32:57.439597 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-kernel\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439712 kubelet[2654]: I0909 23:32:57.439635 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-bpf-maps\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439712 kubelet[2654]: I0909 23:32:57.439669 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b108c140-56c6-49de-b217-349f15dc1a61-kube-proxy\") pod \"kube-proxy-5xvr8\" (UID: \"b108c140-56c6-49de-b217-349f15dc1a61\") " pod="kube-system/kube-proxy-5xvr8" Sep 9 23:32:57.439712 kubelet[2654]: I0909 23:32:57.439689 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-cgroup\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439776 kubelet[2654]: I0909 23:32:57.439715 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-net\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439776 kubelet[2654]: I0909 23:32:57.439735 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cni-path\") pod \"cilium-drv9r\" (UID: 
\"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439776 kubelet[2654]: I0909 23:32:57.439759 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-lib-modules\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439837 kubelet[2654]: I0909 23:32:57.439778 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-config-path\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.439837 kubelet[2654]: I0909 23:32:57.439796 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-hubble-tls\") pod \"cilium-drv9r\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " pod="kube-system/cilium-drv9r" Sep 9 23:32:57.447049 systemd[1]: Created slice kubepods-burstable-pod33f3c0a1_6150_41be_b80e_4460d6094132.slice - libcontainer container kubepods-burstable-pod33f3c0a1_6150_41be_b80e_4460d6094132.slice. 
Sep 9 23:32:57.746364 kubelet[2654]: E0909 23:32:57.746214 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:57.749563 containerd[1515]: time="2025-09-09T23:32:57.749510940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xvr8,Uid:b108c140-56c6-49de-b217-349f15dc1a61,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:57.750257 kubelet[2654]: E0909 23:32:57.750138 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:57.750819 containerd[1515]: time="2025-09-09T23:32:57.750788432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drv9r,Uid:33f3c0a1-6150-41be-b80e-4460d6094132,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:57.788237 systemd[1]: Created slice kubepods-besteffort-pod8d0decdc_5117_4a97_9a5f_eab81ca386a6.slice - libcontainer container kubepods-besteffort-pod8d0decdc_5117_4a97_9a5f_eab81ca386a6.slice. 
Sep 9 23:32:57.842974 kubelet[2654]: I0909 23:32:57.842928 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0decdc-5117-4a97-9a5f-eab81ca386a6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pf867\" (UID: \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\") " pod="kube-system/cilium-operator-6c4d7847fc-pf867" Sep 9 23:32:57.842974 kubelet[2654]: I0909 23:32:57.842972 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ltl\" (UniqueName: \"kubernetes.io/projected/8d0decdc-5117-4a97-9a5f-eab81ca386a6-kube-api-access-s5ltl\") pod \"cilium-operator-6c4d7847fc-pf867\" (UID: \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\") " pod="kube-system/cilium-operator-6c4d7847fc-pf867" Sep 9 23:32:57.981304 kubelet[2654]: E0909 23:32:57.981276 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:57.999174 containerd[1515]: time="2025-09-09T23:32:57.999009607Z" level=info msg="connecting to shim ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:58.001618 containerd[1515]: time="2025-09-09T23:32:58.000708954Z" level=info msg="connecting to shim 21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9" address="unix:///run/containerd/s/27b7e0cb6874e3f724a2f80dc44d62df29b5acbaebc0fb5e52d8df251f112452" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:58.027339 systemd[1]: Started cri-containerd-21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9.scope - libcontainer container 21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9. 
Sep 9 23:32:58.028969 systemd[1]: Started cri-containerd-ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e.scope - libcontainer container ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e. Sep 9 23:32:58.056579 containerd[1515]: time="2025-09-09T23:32:58.056544504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xvr8,Uid:b108c140-56c6-49de-b217-349f15dc1a61,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9\"" Sep 9 23:32:58.057732 kubelet[2654]: E0909 23:32:58.057674 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:58.062738 containerd[1515]: time="2025-09-09T23:32:58.062704902Z" level=info msg="CreateContainer within sandbox \"21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:32:58.063899 containerd[1515]: time="2025-09-09T23:32:58.063869897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drv9r,Uid:33f3c0a1-6150-41be-b80e-4460d6094132,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\"" Sep 9 23:32:58.065353 kubelet[2654]: E0909 23:32:58.065326 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:58.066169 containerd[1515]: time="2025-09-09T23:32:58.066094976Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 23:32:58.077634 containerd[1515]: time="2025-09-09T23:32:58.077598883Z" level=info msg="Container f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e: CDI devices from CRI Config.CDIDevices: 
[]" Sep 9 23:32:58.084287 containerd[1515]: time="2025-09-09T23:32:58.084249991Z" level=info msg="CreateContainer within sandbox \"21f30a47affd95b2dda3e55065d9c58453f79d01c9f7cecefbb6e8c6fa2c73a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e\"" Sep 9 23:32:58.090432 containerd[1515]: time="2025-09-09T23:32:58.090385382Z" level=info msg="StartContainer for \"f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e\"" Sep 9 23:32:58.091078 kubelet[2654]: E0909 23:32:58.091055 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:58.092047 containerd[1515]: time="2025-09-09T23:32:58.092019840Z" level=info msg="connecting to shim f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e" address="unix:///run/containerd/s/27b7e0cb6874e3f724a2f80dc44d62df29b5acbaebc0fb5e52d8df251f112452" protocol=ttrpc version=3 Sep 9 23:32:58.092272 containerd[1515]: time="2025-09-09T23:32:58.092241828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pf867,Uid:8d0decdc-5117-4a97-9a5f-eab81ca386a6,Namespace:kube-system,Attempt:0,}" Sep 9 23:32:58.110666 containerd[1515]: time="2025-09-09T23:32:58.110624592Z" level=info msg="connecting to shim 1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51" address="unix:///run/containerd/s/0e7b373072343690a8d22cbaf43caba3259b1e63e6bf626ebc39a7680ad6ad06" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:32:58.113284 systemd[1]: Started cri-containerd-f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e.scope - libcontainer container f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e. 
Sep 9 23:32:58.146364 systemd[1]: Started cri-containerd-1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51.scope - libcontainer container 1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51. Sep 9 23:32:58.188887 containerd[1515]: time="2025-09-09T23:32:58.188850162Z" level=info msg="StartContainer for \"f45c0d46bfa728d1c3375f313c08b06ab18f3647ae0b8b6993ba1a0225e6b66e\" returns successfully" Sep 9 23:32:58.190470 containerd[1515]: time="2025-09-09T23:32:58.190437486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pf867,Uid:8d0decdc-5117-4a97-9a5f-eab81ca386a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\"" Sep 9 23:32:58.192031 kubelet[2654]: E0909 23:32:58.191940 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:58.828971 kubelet[2654]: E0909 23:32:58.828925 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:58.828971 kubelet[2654]: E0909 23:32:58.828976 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:59.125997 kubelet[2654]: E0909 23:32:59.125899 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:32:59.154912 kubelet[2654]: I0909 23:32:59.154801 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xvr8" podStartSLOduration=2.154780246 podStartE2EDuration="2.154780246s" podCreationTimestamp="2025-09-09 23:32:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:32:58.850551426 +0000 UTC m=+9.134335303" watchObservedRunningTime="2025-09-09 23:32:59.154780246 +0000 UTC m=+9.438564084" Sep 9 23:32:59.831502 kubelet[2654]: E0909 23:32:59.831085 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:04.474918 update_engine[1499]: I20250909 23:33:04.474864 1499 update_attempter.cc:509] Updating boot flags... Sep 9 23:33:04.526981 kubelet[2654]: E0909 23:33:04.526938 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:04.839699 kubelet[2654]: E0909 23:33:04.839549 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:05.031278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919684943.mount: Deactivated successfully. 
Sep 9 23:33:06.457125 containerd[1515]: time="2025-09-09T23:33:06.456625042Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:33:06.457125 containerd[1515]: time="2025-09-09T23:33:06.457119102Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 23:33:06.458008 containerd[1515]: time="2025-09-09T23:33:06.457978476Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:33:06.465577 containerd[1515]: time="2025-09-09T23:33:06.465543005Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.399387372s" Sep 9 23:33:06.465577 containerd[1515]: time="2025-09-09T23:33:06.465580692Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 23:33:06.477893 containerd[1515]: time="2025-09-09T23:33:06.477599522Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 23:33:06.482921 containerd[1515]: time="2025-09-09T23:33:06.482871667Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:33:06.495053 containerd[1515]: time="2025-09-09T23:33:06.495010921Z" level=info msg="Container 6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:06.496890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686649352.mount: Deactivated successfully. Sep 9 23:33:06.502871 containerd[1515]: time="2025-09-09T23:33:06.502825541Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\"" Sep 9 23:33:06.504714 containerd[1515]: time="2025-09-09T23:33:06.504690878Z" level=info msg="StartContainer for \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\"" Sep 9 23:33:06.506548 containerd[1515]: time="2025-09-09T23:33:06.506523208Z" level=info msg="connecting to shim 6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" protocol=ttrpc version=3 Sep 9 23:33:06.557326 systemd[1]: Started cri-containerd-6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd.scope - libcontainer container 6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd. Sep 9 23:33:06.592279 containerd[1515]: time="2025-09-09T23:33:06.592235214Z" level=info msg="StartContainer for \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" returns successfully" Sep 9 23:33:06.604044 systemd[1]: cri-containerd-6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd.scope: Deactivated successfully. 
Sep 9 23:33:06.642674 containerd[1515]: time="2025-09-09T23:33:06.642619318Z" level=info msg="received exit event container_id:\"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" id:\"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" pid:3086 exited_at:{seconds:1757460786 nanos:629364039}" Sep 9 23:33:06.647262 containerd[1515]: time="2025-09-09T23:33:06.647216287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" id:\"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" pid:3086 exited_at:{seconds:1757460786 nanos:629364039}" Sep 9 23:33:06.674136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd-rootfs.mount: Deactivated successfully. Sep 9 23:33:06.848545 kubelet[2654]: E0909 23:33:06.848443 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:07.859405 kubelet[2654]: E0909 23:33:07.859331 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:07.863037 containerd[1515]: time="2025-09-09T23:33:07.862980607Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:33:07.879049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433498861.mount: Deactivated successfully. 
Sep 9 23:33:07.898039 containerd[1515]: time="2025-09-09T23:33:07.897990351Z" level=info msg="Container 86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:07.917163 containerd[1515]: time="2025-09-09T23:33:07.917007774Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\"" Sep 9 23:33:07.918100 containerd[1515]: time="2025-09-09T23:33:07.918069819Z" level=info msg="StartContainer for \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\"" Sep 9 23:33:07.919053 containerd[1515]: time="2025-09-09T23:33:07.919017921Z" level=info msg="connecting to shim 86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" protocol=ttrpc version=3 Sep 9 23:33:07.945333 systemd[1]: Started cri-containerd-86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5.scope - libcontainer container 86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5. Sep 9 23:33:07.974322 containerd[1515]: time="2025-09-09T23:33:07.974279887Z" level=info msg="StartContainer for \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" returns successfully" Sep 9 23:33:07.990810 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:33:07.991024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:33:07.991690 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:33:07.995459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:33:07.998240 systemd[1]: cri-containerd-86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5.scope: Deactivated successfully. 
Sep 9 23:33:08.019215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:33:08.027136 containerd[1515]: time="2025-09-09T23:33:08.027076066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" id:\"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" pid:3140 exited_at:{seconds:1757460788 nanos:26718160}" Sep 9 23:33:08.032665 containerd[1515]: time="2025-09-09T23:33:08.032615763Z" level=info msg="received exit event container_id:\"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" id:\"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" pid:3140 exited_at:{seconds:1757460788 nanos:26718160}" Sep 9 23:33:08.253762 containerd[1515]: time="2025-09-09T23:33:08.253709463Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:33:08.254160 containerd[1515]: time="2025-09-09T23:33:08.254090613Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 23:33:08.255046 containerd[1515]: time="2025-09-09T23:33:08.254973896Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:33:08.256550 containerd[1515]: time="2025-09-09T23:33:08.256517259Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.778858845s" Sep 9 23:33:08.256716 containerd[1515]: time="2025-09-09T23:33:08.256561667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 23:33:08.260126 containerd[1515]: time="2025-09-09T23:33:08.258730426Z" level=info msg="CreateContainer within sandbox \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 23:33:08.267381 containerd[1515]: time="2025-09-09T23:33:08.267342808Z" level=info msg="Container b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:08.272791 containerd[1515]: time="2025-09-09T23:33:08.272753962Z" level=info msg="CreateContainer within sandbox \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\"" Sep 9 23:33:08.273282 containerd[1515]: time="2025-09-09T23:33:08.273252894Z" level=info msg="StartContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\"" Sep 9 23:33:08.274329 containerd[1515]: time="2025-09-09T23:33:08.274303687Z" level=info msg="connecting to shim b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a" address="unix:///run/containerd/s/0e7b373072343690a8d22cbaf43caba3259b1e63e6bf626ebc39a7680ad6ad06" protocol=ttrpc version=3 Sep 9 23:33:08.304313 systemd[1]: Started cri-containerd-b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a.scope - libcontainer container b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a. 
Sep 9 23:33:08.333814 containerd[1515]: time="2025-09-09T23:33:08.333766852Z" level=info msg="StartContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" returns successfully" Sep 9 23:33:08.868295 kubelet[2654]: E0909 23:33:08.868164 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:08.869712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5-rootfs.mount: Deactivated successfully. Sep 9 23:33:08.875786 kubelet[2654]: E0909 23:33:08.875678 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:08.877943 containerd[1515]: time="2025-09-09T23:33:08.877819166Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:33:08.930409 containerd[1515]: time="2025-09-09T23:33:08.930363820Z" level=info msg="Container 21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:08.936968 kubelet[2654]: I0909 23:33:08.888745 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pf867" podStartSLOduration=1.82418005 podStartE2EDuration="11.888729131s" podCreationTimestamp="2025-09-09 23:32:57 +0000 UTC" firstStartedPulling="2025-09-09 23:32:58.192502436 +0000 UTC m=+8.476286273" lastFinishedPulling="2025-09-09 23:33:08.257051517 +0000 UTC m=+18.540835354" observedRunningTime="2025-09-09 23:33:08.887389685 +0000 UTC m=+19.171173522" watchObservedRunningTime="2025-09-09 23:33:08.888729131 +0000 UTC m=+19.172513008" Sep 9 23:33:08.943546 containerd[1515]: 
time="2025-09-09T23:33:08.943495073Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\"" Sep 9 23:33:08.945006 containerd[1515]: time="2025-09-09T23:33:08.944344509Z" level=info msg="StartContainer for \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\"" Sep 9 23:33:08.945860 containerd[1515]: time="2025-09-09T23:33:08.945823180Z" level=info msg="connecting to shim 21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" protocol=ttrpc version=3 Sep 9 23:33:08.974300 systemd[1]: Started cri-containerd-21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0.scope - libcontainer container 21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0. 
Sep 9 23:33:09.037251 containerd[1515]: time="2025-09-09T23:33:09.037199387Z" level=info msg="StartContainer for \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" returns successfully" Sep 9 23:33:09.043543 containerd[1515]: time="2025-09-09T23:33:09.043366188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" id:\"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" pid:3234 exited_at:{seconds:1757460789 nanos:43034010}" Sep 9 23:33:09.043543 containerd[1515]: time="2025-09-09T23:33:09.043441001Z" level=info msg="received exit event container_id:\"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" id:\"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" pid:3234 exited_at:{seconds:1757460789 nanos:43034010}" Sep 9 23:33:09.043490 systemd[1]: cri-containerd-21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0.scope: Deactivated successfully. Sep 9 23:33:09.092774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0-rootfs.mount: Deactivated successfully. 
Sep 9 23:33:09.882152 kubelet[2654]: E0909 23:33:09.881524 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:09.882152 kubelet[2654]: E0909 23:33:09.882152 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:09.886580 containerd[1515]: time="2025-09-09T23:33:09.886530094Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:33:09.924573 containerd[1515]: time="2025-09-09T23:33:09.924508474Z" level=info msg="Container ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:09.925400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344137001.mount: Deactivated successfully. 
Sep 9 23:33:09.934357 containerd[1515]: time="2025-09-09T23:33:09.934291230Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\"" Sep 9 23:33:09.934887 containerd[1515]: time="2025-09-09T23:33:09.934861410Z" level=info msg="StartContainer for \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\"" Sep 9 23:33:09.936527 containerd[1515]: time="2025-09-09T23:33:09.936259015Z" level=info msg="connecting to shim ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" protocol=ttrpc version=3 Sep 9 23:33:09.965330 systemd[1]: Started cri-containerd-ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba.scope - libcontainer container ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba. Sep 9 23:33:09.988848 systemd[1]: cri-containerd-ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba.scope: Deactivated successfully. 
Sep 9 23:33:09.989934 containerd[1515]: time="2025-09-09T23:33:09.989887940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" id:\"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" pid:3272 exited_at:{seconds:1757460789 nanos:989650018}" Sep 9 23:33:09.989934 containerd[1515]: time="2025-09-09T23:33:09.989899542Z" level=info msg="received exit event container_id:\"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" id:\"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" pid:3272 exited_at:{seconds:1757460789 nanos:989650018}" Sep 9 23:33:09.997485 containerd[1515]: time="2025-09-09T23:33:09.997369292Z" level=info msg="StartContainer for \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" returns successfully" Sep 9 23:33:10.011633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba-rootfs.mount: Deactivated successfully. 
Sep 9 23:33:10.886924 kubelet[2654]: E0909 23:33:10.886832 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:10.889024 containerd[1515]: time="2025-09-09T23:33:10.888968338Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:33:10.900289 containerd[1515]: time="2025-09-09T23:33:10.900240547Z" level=info msg="Container 6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:10.907733 containerd[1515]: time="2025-09-09T23:33:10.907634346Z" level=info msg="CreateContainer within sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\"" Sep 9 23:33:10.908309 containerd[1515]: time="2025-09-09T23:33:10.908285375Z" level=info msg="StartContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\"" Sep 9 23:33:10.909589 containerd[1515]: time="2025-09-09T23:33:10.909515381Z" level=info msg="connecting to shim 6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7" address="unix:///run/containerd/s/a556e9209934d38fa6bb4237b213766d87e800ab6a0a6bd4456fd88d039359e7" protocol=ttrpc version=3 Sep 9 23:33:10.938298 systemd[1]: Started cri-containerd-6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7.scope - libcontainer container 6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7. 
Sep 9 23:33:10.970516 containerd[1515]: time="2025-09-09T23:33:10.970471513Z" level=info msg="StartContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" returns successfully" Sep 9 23:33:11.070369 containerd[1515]: time="2025-09-09T23:33:11.070326215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" id:\"ea4c1f25f8579f0cbd4378a4dc4b5d5960ca34624767a5c6f69281676657c830\" pid:3342 exited_at:{seconds:1757460791 nanos:69137624}" Sep 9 23:33:11.141916 kubelet[2654]: I0909 23:33:11.141636 2654 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:33:11.180522 systemd[1]: Created slice kubepods-burstable-pod5c47d105_ed9e_4565_98eb_443236bdd887.slice - libcontainer container kubepods-burstable-pod5c47d105_ed9e_4565_98eb_443236bdd887.slice. Sep 9 23:33:11.187776 systemd[1]: Created slice kubepods-burstable-pod93421861_ceee_42a8_b1d1_a4c112ed782e.slice - libcontainer container kubepods-burstable-pod93421861_ceee_42a8_b1d1_a4c112ed782e.slice. 
Sep 9 23:33:11.254355 kubelet[2654]: I0909 23:33:11.254303 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93421861-ceee-42a8-b1d1-a4c112ed782e-config-volume\") pod \"coredns-668d6bf9bc-fj684\" (UID: \"93421861-ceee-42a8-b1d1-a4c112ed782e\") " pod="kube-system/coredns-668d6bf9bc-fj684" Sep 9 23:33:11.254481 kubelet[2654]: I0909 23:33:11.254369 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864lw\" (UniqueName: \"kubernetes.io/projected/5c47d105-ed9e-4565-98eb-443236bdd887-kube-api-access-864lw\") pod \"coredns-668d6bf9bc-pfw78\" (UID: \"5c47d105-ed9e-4565-98eb-443236bdd887\") " pod="kube-system/coredns-668d6bf9bc-pfw78" Sep 9 23:33:11.254481 kubelet[2654]: I0909 23:33:11.254390 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgjz\" (UniqueName: \"kubernetes.io/projected/93421861-ceee-42a8-b1d1-a4c112ed782e-kube-api-access-ljgjz\") pod \"coredns-668d6bf9bc-fj684\" (UID: \"93421861-ceee-42a8-b1d1-a4c112ed782e\") " pod="kube-system/coredns-668d6bf9bc-fj684" Sep 9 23:33:11.254481 kubelet[2654]: I0909 23:33:11.254447 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c47d105-ed9e-4565-98eb-443236bdd887-config-volume\") pod \"coredns-668d6bf9bc-pfw78\" (UID: \"5c47d105-ed9e-4565-98eb-443236bdd887\") " pod="kube-system/coredns-668d6bf9bc-pfw78" Sep 9 23:33:11.487484 kubelet[2654]: E0909 23:33:11.487362 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:11.489129 containerd[1515]: time="2025-09-09T23:33:11.488333938Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pfw78,Uid:5c47d105-ed9e-4565-98eb-443236bdd887,Namespace:kube-system,Attempt:0,}" Sep 9 23:33:11.490787 kubelet[2654]: E0909 23:33:11.490720 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:11.492400 containerd[1515]: time="2025-09-09T23:33:11.492360183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj684,Uid:93421861-ceee-42a8-b1d1-a4c112ed782e,Namespace:kube-system,Attempt:0,}" Sep 9 23:33:11.893167 kubelet[2654]: E0909 23:33:11.892804 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:11.909829 kubelet[2654]: I0909 23:33:11.909770 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-drv9r" podStartSLOduration=6.5027511780000005 podStartE2EDuration="14.909752648s" podCreationTimestamp="2025-09-09 23:32:57 +0000 UTC" firstStartedPulling="2025-09-09 23:32:58.065757993 +0000 UTC m=+8.349541830" lastFinishedPulling="2025-09-09 23:33:06.472759463 +0000 UTC m=+16.756543300" observedRunningTime="2025-09-09 23:33:11.907563778 +0000 UTC m=+22.191347615" watchObservedRunningTime="2025-09-09 23:33:11.909752648 +0000 UTC m=+22.193536445" Sep 9 23:33:12.895142 kubelet[2654]: E0909 23:33:12.895058 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:13.033877 systemd-networkd[1425]: cilium_host: Link UP Sep 9 23:33:13.034025 systemd-networkd[1425]: cilium_net: Link UP Sep 9 23:33:13.034173 systemd-networkd[1425]: cilium_net: Gained carrier Sep 9 23:33:13.034299 systemd-networkd[1425]: cilium_host: Gained carrier Sep 9 23:33:13.140729 
systemd-networkd[1425]: cilium_vxlan: Link UP Sep 9 23:33:13.140741 systemd-networkd[1425]: cilium_vxlan: Gained carrier Sep 9 23:33:13.429144 kernel: NET: Registered PF_ALG protocol family Sep 9 23:33:13.742271 systemd-networkd[1425]: cilium_net: Gained IPv6LL Sep 9 23:33:13.898284 kubelet[2654]: E0909 23:33:13.898248 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:14.047632 systemd-networkd[1425]: lxc_health: Link UP Sep 9 23:33:14.059210 systemd-networkd[1425]: lxc_health: Gained carrier Sep 9 23:33:14.062296 systemd-networkd[1425]: cilium_host: Gained IPv6LL Sep 9 23:33:14.533140 kernel: eth0: renamed from tmp3c916 Sep 9 23:33:14.542662 systemd-networkd[1425]: lxc2c0b96f53b45: Link UP Sep 9 23:33:14.544141 kernel: eth0: renamed from tmp99dbe Sep 9 23:33:14.549952 systemd-networkd[1425]: lxce0b61dad9914: Link UP Sep 9 23:33:14.550223 systemd-networkd[1425]: lxc2c0b96f53b45: Gained carrier Sep 9 23:33:14.550351 systemd-networkd[1425]: lxce0b61dad9914: Gained carrier Sep 9 23:33:14.767255 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Sep 9 23:33:15.598467 systemd-networkd[1425]: lxc2c0b96f53b45: Gained IPv6LL Sep 9 23:33:15.727397 systemd-networkd[1425]: lxc_health: Gained IPv6LL Sep 9 23:33:15.751939 kubelet[2654]: E0909 23:33:15.751892 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:15.854432 systemd-networkd[1425]: lxce0b61dad9914: Gained IPv6LL Sep 9 23:33:16.375292 kubelet[2654]: I0909 23:33:16.375239 2654 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:33:16.375643 kubelet[2654]: E0909 23:33:16.375612 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:16.905958 kubelet[2654]: E0909 23:33:16.905926 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:18.146785 containerd[1515]: time="2025-09-09T23:33:18.146723107Z" level=info msg="connecting to shim 3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602" address="unix:///run/containerd/s/5356db7bfd8d86b51c728c8de4a9d0faceb2a141e95c2e8e6942fc3a10df6b5b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:33:18.147709 containerd[1515]: time="2025-09-09T23:33:18.147676382Z" level=info msg="connecting to shim 99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881" address="unix:///run/containerd/s/7eb7ac004a528c2332f6b6619e28602dbd573c21bdad2d5b0c8fc77ad130c484" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:33:18.178405 systemd[1]: Started cri-containerd-99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881.scope - libcontainer container 99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881. Sep 9 23:33:18.184382 systemd[1]: Started cri-containerd-3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602.scope - libcontainer container 3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602. 
Sep 9 23:33:18.191867 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:33:18.197811 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:33:18.214587 containerd[1515]: time="2025-09-09T23:33:18.214526335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfw78,Uid:5c47d105-ed9e-4565-98eb-443236bdd887,Namespace:kube-system,Attempt:0,} returns sandbox id \"99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881\"" Sep 9 23:33:18.216213 kubelet[2654]: E0909 23:33:18.216177 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:18.221753 containerd[1515]: time="2025-09-09T23:33:18.221709798Z" level=info msg="CreateContainer within sandbox \"99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:33:18.231376 containerd[1515]: time="2025-09-09T23:33:18.231339315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj684,Uid:93421861-ceee-42a8-b1d1-a4c112ed782e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602\"" Sep 9 23:33:18.231761 containerd[1515]: time="2025-09-09T23:33:18.231724321Z" level=info msg="Container 85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:18.235462 kubelet[2654]: E0909 23:33:18.235245 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:18.238182 containerd[1515]: time="2025-09-09T23:33:18.237639152Z" level=info msg="CreateContainer within 
sandbox \"3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:33:18.242097 containerd[1515]: time="2025-09-09T23:33:18.242062964Z" level=info msg="CreateContainer within sandbox \"99dbe9b0d1c68dd1a097a825331157b454f1f4c897c2e7c1a47f2c7c0ea58881\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0\"" Sep 9 23:33:18.242649 containerd[1515]: time="2025-09-09T23:33:18.242559944Z" level=info msg="StartContainer for \"85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0\"" Sep 9 23:33:18.244037 containerd[1515]: time="2025-09-09T23:33:18.244001957Z" level=info msg="connecting to shim 85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0" address="unix:///run/containerd/s/7eb7ac004a528c2332f6b6619e28602dbd573c21bdad2d5b0c8fc77ad130c484" protocol=ttrpc version=3 Sep 9 23:33:18.247060 containerd[1515]: time="2025-09-09T23:33:18.247027360Z" level=info msg="Container 45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:33:18.260647 containerd[1515]: time="2025-09-09T23:33:18.260588190Z" level=info msg="CreateContainer within sandbox \"3c916ad600dda27429f89df7e40d6c7910e29fc57b0c34fd8f78d23e81c85602\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4\"" Sep 9 23:33:18.260989 containerd[1515]: time="2025-09-09T23:33:18.260968036Z" level=info msg="StartContainer for \"45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4\"" Sep 9 23:33:18.262030 containerd[1515]: time="2025-09-09T23:33:18.261764371Z" level=info msg="connecting to shim 45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4" address="unix:///run/containerd/s/5356db7bfd8d86b51c728c8de4a9d0faceb2a141e95c2e8e6942fc3a10df6b5b" protocol=ttrpc version=3 Sep 9 
23:33:18.267272 systemd[1]: Started cri-containerd-85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0.scope - libcontainer container 85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0. Sep 9 23:33:18.286279 systemd[1]: Started cri-containerd-45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4.scope - libcontainer container 45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4. Sep 9 23:33:18.312619 containerd[1515]: time="2025-09-09T23:33:18.312578077Z" level=info msg="StartContainer for \"85e6d9aba879298bf677ceb23c70a200be2d38df19df259181a2a573dc77e6d0\" returns successfully" Sep 9 23:33:18.322207 containerd[1515]: time="2025-09-09T23:33:18.322031533Z" level=info msg="StartContainer for \"45643486dc1291b96696c78e73383d7f8c8fa911539e2517eaed0170814f7bb4\" returns successfully" Sep 9 23:33:18.914576 kubelet[2654]: E0909 23:33:18.914230 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:18.917706 kubelet[2654]: E0909 23:33:18.917655 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:18.939475 kubelet[2654]: I0909 23:33:18.939421 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fj684" podStartSLOduration=21.93940072 podStartE2EDuration="21.93940072s" podCreationTimestamp="2025-09-09 23:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:33:18.926413599 +0000 UTC m=+29.210197476" watchObservedRunningTime="2025-09-09 23:33:18.93940072 +0000 UTC m=+29.223184557" Sep 9 23:33:18.952701 kubelet[2654]: I0909 23:33:18.952618 2654 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-668d6bf9bc-pfw78" podStartSLOduration=21.952599386 podStartE2EDuration="21.952599386s" podCreationTimestamp="2025-09-09 23:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:33:18.941211097 +0000 UTC m=+29.224994974" watchObservedRunningTime="2025-09-09 23:33:18.952599386 +0000 UTC m=+29.236383223" Sep 9 23:33:19.136220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602798069.mount: Deactivated successfully. Sep 9 23:33:19.332584 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Sep 9 23:33:19.391661 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:19.393029 sshd-session[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:19.396919 systemd-logind[1490]: New session 8 of user core. Sep 9 23:33:19.407280 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:33:19.536036 sshd[3995]: Connection closed by 10.0.0.1 port 50030 Sep 9 23:33:19.536414 sshd-session[3993]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:19.539992 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:50030.service: Deactivated successfully. Sep 9 23:33:19.541932 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:33:19.543338 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:33:19.544440 systemd-logind[1490]: Removed session 8. 
Sep 9 23:33:19.919885 kubelet[2654]: E0909 23:33:19.919830 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:19.920533 kubelet[2654]: E0909 23:33:19.920126 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:20.921646 kubelet[2654]: E0909 23:33:20.921551 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:20.921646 kubelet[2654]: E0909 23:33:20.921602 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:33:24.549294 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:38238.service - OpenSSH per-connection server daemon (10.0.0.1:38238). Sep 9 23:33:24.620648 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:24.622222 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:24.628025 systemd-logind[1490]: New session 9 of user core. Sep 9 23:33:24.638320 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:33:24.768975 sshd[4029]: Connection closed by 10.0.0.1 port 38238 Sep 9 23:33:24.769340 sshd-session[4027]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:24.772940 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:38238.service: Deactivated successfully. Sep 9 23:33:24.774583 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:33:24.777662 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. 
Sep 9 23:33:24.778978 systemd-logind[1490]: Removed session 9. Sep 9 23:33:29.781385 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:38240.service - OpenSSH per-connection server daemon (10.0.0.1:38240). Sep 9 23:33:29.842603 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 38240 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:29.844510 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:29.855785 systemd-logind[1490]: New session 10 of user core. Sep 9 23:33:29.867278 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:33:30.004351 sshd[4048]: Connection closed by 10.0.0.1 port 38240 Sep 9 23:33:30.005781 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:30.014433 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:38240.service: Deactivated successfully. Sep 9 23:33:30.016156 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:33:30.018941 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:33:30.021833 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:47128.service - OpenSSH per-connection server daemon (10.0.0.1:47128). Sep 9 23:33:30.023478 systemd-logind[1490]: Removed session 10. Sep 9 23:33:30.081393 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:30.082704 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:30.086738 systemd-logind[1490]: New session 11 of user core. Sep 9 23:33:30.108343 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:33:30.263379 sshd[4065]: Connection closed by 10.0.0.1 port 47128 Sep 9 23:33:30.263984 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:30.275520 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:47128.service: Deactivated successfully. 
Sep 9 23:33:30.277989 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:33:30.279332 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:33:30.283199 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:47134.service - OpenSSH per-connection server daemon (10.0.0.1:47134). Sep 9 23:33:30.284099 systemd-logind[1490]: Removed session 11. Sep 9 23:33:30.339942 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 47134 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:30.341466 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:30.346326 systemd-logind[1490]: New session 12 of user core. Sep 9 23:33:30.353279 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:33:30.472789 sshd[4079]: Connection closed by 10.0.0.1 port 47134 Sep 9 23:33:30.472627 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:30.476276 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:47134.service: Deactivated successfully. Sep 9 23:33:30.477970 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:33:30.478769 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:33:30.480086 systemd-logind[1490]: Removed session 12. Sep 9 23:33:35.495222 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:47142.service - OpenSSH per-connection server daemon (10.0.0.1:47142). Sep 9 23:33:35.538400 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 47142 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:35.540962 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:35.545914 systemd-logind[1490]: New session 13 of user core. Sep 9 23:33:35.554316 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 9 23:33:35.678031 sshd[4095]: Connection closed by 10.0.0.1 port 47142 Sep 9 23:33:35.678584 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:35.686203 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:47142.service: Deactivated successfully. Sep 9 23:33:35.689336 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:33:35.690284 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:33:35.692060 systemd-logind[1490]: Removed session 13. Sep 9 23:33:40.694826 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:54294.service - OpenSSH per-connection server daemon (10.0.0.1:54294). Sep 9 23:33:40.754734 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 54294 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:40.756190 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:40.760806 systemd-logind[1490]: New session 14 of user core. Sep 9 23:33:40.777371 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 23:33:40.903352 sshd[4110]: Connection closed by 10.0.0.1 port 54294 Sep 9 23:33:40.903210 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:40.911232 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:54294.service: Deactivated successfully. Sep 9 23:33:40.914128 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:33:40.916331 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Sep 9 23:33:40.918832 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:54302.service - OpenSSH per-connection server daemon (10.0.0.1:54302). Sep 9 23:33:40.921943 systemd-logind[1490]: Removed session 14. 
Sep 9 23:33:40.976894 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 54302 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:40.981301 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:40.985925 systemd-logind[1490]: New session 15 of user core. Sep 9 23:33:40.998308 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:33:41.224725 sshd[4125]: Connection closed by 10.0.0.1 port 54302 Sep 9 23:33:41.225261 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:41.239586 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:54302.service: Deactivated successfully. Sep 9 23:33:41.242619 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 23:33:41.243545 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:33:41.246532 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:54314.service - OpenSSH per-connection server daemon (10.0.0.1:54314). Sep 9 23:33:41.247394 systemd-logind[1490]: Removed session 15. Sep 9 23:33:41.308233 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 54314 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:41.310734 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:41.315541 systemd-logind[1490]: New session 16 of user core. Sep 9 23:33:41.330321 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 23:33:41.914148 sshd[4138]: Connection closed by 10.0.0.1 port 54314 Sep 9 23:33:41.913295 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:41.923669 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:54314.service: Deactivated successfully. Sep 9 23:33:41.926187 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:33:41.928238 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. 
Sep 9 23:33:41.935575 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:54324.service - OpenSSH per-connection server daemon (10.0.0.1:54324). Sep 9 23:33:41.937292 systemd-logind[1490]: Removed session 16. Sep 9 23:33:41.986719 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 54324 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:41.988001 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:41.992357 systemd-logind[1490]: New session 17 of user core. Sep 9 23:33:42.002537 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:33:42.238415 sshd[4160]: Connection closed by 10.0.0.1 port 54324 Sep 9 23:33:42.238567 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:42.253859 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:54324.service: Deactivated successfully. Sep 9 23:33:42.255835 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:33:42.257839 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. Sep 9 23:33:42.259768 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:54330.service - OpenSSH per-connection server daemon (10.0.0.1:54330). Sep 9 23:33:42.260977 systemd-logind[1490]: Removed session 17. Sep 9 23:33:42.308252 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 54330 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:42.311365 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:42.316012 systemd-logind[1490]: New session 18 of user core. Sep 9 23:33:42.326438 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 23:33:42.440288 sshd[4173]: Connection closed by 10.0.0.1 port 54330 Sep 9 23:33:42.440636 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:42.443950 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:54330.service: Deactivated successfully. 
Sep 9 23:33:42.445594 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 23:33:42.446563 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. Sep 9 23:33:42.447715 systemd-logind[1490]: Removed session 18. Sep 9 23:33:47.452769 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:54332.service - OpenSSH per-connection server daemon (10.0.0.1:54332). Sep 9 23:33:47.517262 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 54332 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:47.518880 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:47.524131 systemd-logind[1490]: New session 19 of user core. Sep 9 23:33:47.531298 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 23:33:47.648846 sshd[4192]: Connection closed by 10.0.0.1 port 54332 Sep 9 23:33:47.649372 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:47.653765 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:54332.service: Deactivated successfully. Sep 9 23:33:47.655607 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 23:33:47.656617 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. Sep 9 23:33:47.658157 systemd-logind[1490]: Removed session 19. Sep 9 23:33:52.663600 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:37790.service - OpenSSH per-connection server daemon (10.0.0.1:37790). Sep 9 23:33:52.732489 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 37790 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:52.734317 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:52.740428 systemd-logind[1490]: New session 20 of user core. Sep 9 23:33:52.758327 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 9 23:33:52.896665 sshd[4211]: Connection closed by 10.0.0.1 port 37790 Sep 9 23:33:52.897180 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:52.901745 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:37790.service: Deactivated successfully. Sep 9 23:33:52.903482 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 23:33:52.904560 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. Sep 9 23:33:52.906452 systemd-logind[1490]: Removed session 20. Sep 9 23:33:57.909586 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:37798.service - OpenSSH per-connection server daemon (10.0.0.1:37798). Sep 9 23:33:57.961548 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 37798 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:57.962936 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:57.968212 systemd-logind[1490]: New session 21 of user core. Sep 9 23:33:57.980340 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 23:33:58.107996 sshd[4226]: Connection closed by 10.0.0.1 port 37798 Sep 9 23:33:58.108532 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Sep 9 23:33:58.118259 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:37798.service: Deactivated successfully. Sep 9 23:33:58.120506 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 23:33:58.124146 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. Sep 9 23:33:58.126358 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). Sep 9 23:33:58.129540 systemd-logind[1490]: Removed session 21. 
Sep 9 23:33:58.183609 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:33:58.185442 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:33:58.193186 systemd-logind[1490]: New session 22 of user core. Sep 9 23:33:58.202327 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 23:33:59.846977 containerd[1515]: time="2025-09-09T23:33:59.846931079Z" level=info msg="StopContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" with timeout 30 (s)" Sep 9 23:33:59.848186 containerd[1515]: time="2025-09-09T23:33:59.848097909Z" level=info msg="Stop container \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" with signal terminated" Sep 9 23:33:59.862204 systemd[1]: cri-containerd-b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a.scope: Deactivated successfully. Sep 9 23:33:59.865115 containerd[1515]: time="2025-09-09T23:33:59.864683517Z" level=info msg="received exit event container_id:\"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" id:\"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" pid:3198 exited_at:{seconds:1757460839 nanos:864401689}" Sep 9 23:33:59.865115 containerd[1515]: time="2025-09-09T23:33:59.864856390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" id:\"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" pid:3198 exited_at:{seconds:1757460839 nanos:864401689}" Sep 9 23:33:59.877514 containerd[1515]: time="2025-09-09T23:33:59.877457849Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:33:59.885146 
containerd[1515]: time="2025-09-09T23:33:59.884608941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" id:\"b18a368c276e121d713838a8841af6718f27a6272ac262542c655e20594a3dc1\" pid:4272 exited_at:{seconds:1757460839 nanos:883903492}" Sep 9 23:33:59.886564 containerd[1515]: time="2025-09-09T23:33:59.886524739Z" level=info msg="StopContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" with timeout 2 (s)" Sep 9 23:33:59.886847 containerd[1515]: time="2025-09-09T23:33:59.886816727Z" level=info msg="Stop container \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" with signal terminated" Sep 9 23:33:59.896235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a-rootfs.mount: Deactivated successfully. Sep 9 23:33:59.897720 systemd-networkd[1425]: lxc_health: Link DOWN Sep 9 23:33:59.897726 systemd-networkd[1425]: lxc_health: Lost carrier Sep 9 23:33:59.908332 containerd[1515]: time="2025-09-09T23:33:59.908291725Z" level=info msg="StopContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" returns successfully" Sep 9 23:33:59.911158 containerd[1515]: time="2025-09-09T23:33:59.911117403Z" level=info msg="StopPodSandbox for \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\"" Sep 9 23:33:59.911761 systemd[1]: cri-containerd-6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7.scope: Deactivated successfully. Sep 9 23:33:59.912062 systemd[1]: cri-containerd-6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7.scope: Consumed 6.354s CPU time, 121.6M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 9 23:33:59.913553 containerd[1515]: time="2025-09-09T23:33:59.913270671Z" level=info msg="received exit event container_id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" pid:3309 exited_at:{seconds:1757460839 nanos:912932806}" Sep 9 23:33:59.913553 containerd[1515]: time="2025-09-09T23:33:59.913478982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" id:\"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" pid:3309 exited_at:{seconds:1757460839 nanos:912932806}" Sep 9 23:33:59.923129 containerd[1515]: time="2025-09-09T23:33:59.921932459Z" level=info msg="Container to stop \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:33:59.932714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7-rootfs.mount: Deactivated successfully. Sep 9 23:33:59.935792 systemd[1]: cri-containerd-1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51.scope: Deactivated successfully. 
Sep 9 23:33:59.937668 containerd[1515]: time="2025-09-09T23:33:59.937625225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" id:\"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" pid:2876 exit_status:137 exited_at:{seconds:1757460839 nanos:937259441}" Sep 9 23:33:59.947006 containerd[1515]: time="2025-09-09T23:33:59.946958745Z" level=info msg="StopContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" returns successfully" Sep 9 23:33:59.947524 containerd[1515]: time="2025-09-09T23:33:59.947472403Z" level=info msg="StopPodSandbox for \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\"" Sep 9 23:33:59.947579 containerd[1515]: time="2025-09-09T23:33:59.947552639Z" level=info msg="Container to stop \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:33:59.947579 containerd[1515]: time="2025-09-09T23:33:59.947566679Z" level=info msg="Container to stop \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:33:59.947579 containerd[1515]: time="2025-09-09T23:33:59.947575238Z" level=info msg="Container to stop \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:33:59.947641 containerd[1515]: time="2025-09-09T23:33:59.947583838Z" level=info msg="Container to stop \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:33:59.947641 containerd[1515]: time="2025-09-09T23:33:59.947592318Z" level=info msg="Container to stop \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Sep 9 23:33:59.952938 systemd[1]: cri-containerd-ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e.scope: Deactivated successfully. Sep 9 23:33:59.978159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51-rootfs.mount: Deactivated successfully. Sep 9 23:33:59.987520 containerd[1515]: time="2025-09-09T23:33:59.986248938Z" level=info msg="shim disconnected" id=ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e namespace=k8s.io Sep 9 23:33:59.986803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e-rootfs.mount: Deactivated successfully. Sep 9 23:34:00.013283 containerd[1515]: time="2025-09-09T23:33:59.986282777Z" level=warning msg="cleaning up after shim disconnected" id=ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e namespace=k8s.io Sep 9 23:34:00.013283 containerd[1515]: time="2025-09-09T23:34:00.013276369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:34:00.013437 containerd[1515]: time="2025-09-09T23:33:59.989427042Z" level=info msg="shim disconnected" id=1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51 namespace=k8s.io Sep 9 23:34:00.013437 containerd[1515]: time="2025-09-09T23:34:00.013377885Z" level=warning msg="cleaning up after shim disconnected" id=1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51 namespace=k8s.io Sep 9 23:34:00.013437 containerd[1515]: time="2025-09-09T23:34:00.013403244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:34:00.036119 containerd[1515]: time="2025-09-09T23:34:00.035775580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" id:\"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" pid:2806 exit_status:137 exited_at:{seconds:1757460839 
nanos:959924108}" Sep 9 23:34:00.036119 containerd[1515]: time="2025-09-09T23:34:00.035859217Z" level=info msg="received exit event sandbox_id:\"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" exit_status:137 exited_at:{seconds:1757460839 nanos:937259441}" Sep 9 23:34:00.036119 containerd[1515]: time="2025-09-09T23:34:00.035897895Z" level=info msg="received exit event sandbox_id:\"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" exit_status:137 exited_at:{seconds:1757460839 nanos:959924108}" Sep 9 23:34:00.036789 containerd[1515]: time="2025-09-09T23:34:00.036757181Z" level=info msg="TearDown network for sandbox \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" successfully" Sep 9 23:34:00.036789 containerd[1515]: time="2025-09-09T23:34:00.036788699Z" level=info msg="StopPodSandbox for \"1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51\" returns successfully" Sep 9 23:34:00.037278 containerd[1515]: time="2025-09-09T23:34:00.037210762Z" level=info msg="TearDown network for sandbox \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" successfully" Sep 9 23:34:00.037278 containerd[1515]: time="2025-09-09T23:34:00.037236321Z" level=info msg="StopPodSandbox for \"ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e\" returns successfully" Sep 9 23:34:00.039481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f2fcb24e7ef42ead041d3fbcbd921c27c831bfffd8b8129c049858c22fdee51-shm.mount: Deactivated successfully. 
Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168216 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-run\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168262 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-kernel\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168286 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-hubble-tls\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168305 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drbfn\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-kube-api-access-drbfn\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168323 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5ltl\" (UniqueName: \"kubernetes.io/projected/8d0decdc-5117-4a97-9a5f-eab81ca386a6-kube-api-access-s5ltl\") pod \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\" (UID: \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\") " Sep 9 23:34:00.168417 kubelet[2654]: I0909 23:34:00.168341 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-bpf-maps\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168888 kubelet[2654]: I0909 23:34:00.168354 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-net\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168888 kubelet[2654]: I0909 23:34:00.168370 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cni-path\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.168888 kubelet[2654]: I0909 23:34:00.168385 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-lib-modules\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.168965 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-config-path\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.169005 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0decdc-5117-4a97-9a5f-eab81ca386a6-cilium-config-path\") pod \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\" (UID: \"8d0decdc-5117-4a97-9a5f-eab81ca386a6\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.169024 2654 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-xtables-lock\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.169041 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33f3c0a1-6150-41be-b80e-4460d6094132-clustermesh-secrets\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.169057 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-hostproc\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169182 kubelet[2654]: I0909 23:34:00.169086 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-cgroup\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.169337 kubelet[2654]: I0909 23:34:00.169101 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-etc-cni-netd\") pod \"33f3c0a1-6150-41be-b80e-4460d6094132\" (UID: \"33f3c0a1-6150-41be-b80e-4460d6094132\") " Sep 9 23:34:00.171266 kubelet[2654]: I0909 23:34:00.171219 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: 
"33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.172121 kubelet[2654]: I0909 23:34:00.171351 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.172121 kubelet[2654]: I0909 23:34:00.171372 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cni-path" (OuterVolumeSpecName: "cni-path") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.172121 kubelet[2654]: I0909 23:34:00.171384 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.172121 kubelet[2654]: I0909 23:34:00.171514 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.172247 kubelet[2654]: I0909 23:34:00.172143 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.173329 kubelet[2654]: I0909 23:34:00.173296 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:34:00.173986 kubelet[2654]: I0909 23:34:00.173957 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:34:00.174046 kubelet[2654]: I0909 23:34:00.173984 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0decdc-5117-4a97-9a5f-eab81ca386a6-kube-api-access-s5ltl" (OuterVolumeSpecName: "kube-api-access-s5ltl") pod "8d0decdc-5117-4a97-9a5f-eab81ca386a6" (UID: "8d0decdc-5117-4a97-9a5f-eab81ca386a6"). InnerVolumeSpecName "kube-api-access-s5ltl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:34:00.174046 kubelet[2654]: I0909 23:34:00.174010 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.174046 kubelet[2654]: I0909 23:34:00.174033 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-hostproc" (OuterVolumeSpecName: "hostproc") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.174147 kubelet[2654]: I0909 23:34:00.174051 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.174147 kubelet[2654]: I0909 23:34:00.174083 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:34:00.174514 kubelet[2654]: I0909 23:34:00.174479 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-kube-api-access-drbfn" (OuterVolumeSpecName: "kube-api-access-drbfn") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "kube-api-access-drbfn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:34:00.175506 kubelet[2654]: I0909 23:34:00.175478 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0decdc-5117-4a97-9a5f-eab81ca386a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d0decdc-5117-4a97-9a5f-eab81ca386a6" (UID: "8d0decdc-5117-4a97-9a5f-eab81ca386a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:34:00.176011 kubelet[2654]: I0909 23:34:00.175968 2654 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f3c0a1-6150-41be-b80e-4460d6094132-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "33f3c0a1-6150-41be-b80e-4460d6094132" (UID: "33f3c0a1-6150-41be-b80e-4460d6094132"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:34:00.269585 kubelet[2654]: I0909 23:34:00.269519 2654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drbfn\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-kube-api-access-drbfn\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269585 kubelet[2654]: I0909 23:34:00.269567 2654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5ltl\" (UniqueName: \"kubernetes.io/projected/8d0decdc-5117-4a97-9a5f-eab81ca386a6-kube-api-access-s5ltl\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269585 kubelet[2654]: I0909 23:34:00.269587 2654 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269603 2654 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269619 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0decdc-5117-4a97-9a5f-eab81ca386a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269633 2654 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269646 2654 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 
23:34:00.269660 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269674 2654 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269689 2654 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33f3c0a1-6150-41be-b80e-4460d6094132-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269777 kubelet[2654]: I0909 23:34:00.269697 2654 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269932 kubelet[2654]: I0909 23:34:00.269705 2654 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269932 kubelet[2654]: I0909 23:34:00.269712 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269932 kubelet[2654]: I0909 23:34:00.269719 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269932 kubelet[2654]: I0909 23:34:00.269728 2654 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/33f3c0a1-6150-41be-b80e-4460d6094132-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.269932 kubelet[2654]: I0909 23:34:00.269735 2654 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f3c0a1-6150-41be-b80e-4460d6094132-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 23:34:00.894214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab06693edec5dde4055654c5a73ec25dcc528c131bc09b7af28589add232a19e-shm.mount: Deactivated successfully. Sep 9 23:34:00.894315 systemd[1]: var-lib-kubelet-pods-8d0decdc\x2d5117\x2d4a97\x2d9a5f\x2deab81ca386a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5ltl.mount: Deactivated successfully. Sep 9 23:34:00.894370 systemd[1]: var-lib-kubelet-pods-33f3c0a1\x2d6150\x2d41be\x2db80e\x2d4460d6094132-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrbfn.mount: Deactivated successfully. Sep 9 23:34:00.894418 systemd[1]: var-lib-kubelet-pods-33f3c0a1\x2d6150\x2d41be\x2db80e\x2d4460d6094132-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:34:00.894476 systemd[1]: var-lib-kubelet-pods-33f3c0a1\x2d6150\x2d41be\x2db80e\x2d4460d6094132-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:34:01.017668 kubelet[2654]: I0909 23:34:01.017614 2654 scope.go:117] "RemoveContainer" containerID="b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a" Sep 9 23:34:01.020807 containerd[1515]: time="2025-09-09T23:34:01.020755653Z" level=info msg="RemoveContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\"" Sep 9 23:34:01.024582 systemd[1]: Removed slice kubepods-besteffort-pod8d0decdc_5117_4a97_9a5f_eab81ca386a6.slice - libcontainer container kubepods-besteffort-pod8d0decdc_5117_4a97_9a5f_eab81ca386a6.slice. 
Sep 9 23:34:01.034018 systemd[1]: Removed slice kubepods-burstable-pod33f3c0a1_6150_41be_b80e_4460d6094132.slice - libcontainer container kubepods-burstable-pod33f3c0a1_6150_41be_b80e_4460d6094132.slice. Sep 9 23:34:01.034433 containerd[1515]: time="2025-09-09T23:34:01.034402215Z" level=info msg="RemoveContainer for \"b857e8f61a59d95cd539515268089aef347699076fc182550fc7a0c3d743962a\" returns successfully" Sep 9 23:34:01.034588 systemd[1]: kubepods-burstable-pod33f3c0a1_6150_41be_b80e_4460d6094132.slice: Consumed 6.441s CPU time, 121.9M memory peak, 136K read from disk, 12.9M written to disk. Sep 9 23:34:01.034951 kubelet[2654]: I0909 23:34:01.034726 2654 scope.go:117] "RemoveContainer" containerID="6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7" Sep 9 23:34:01.039018 containerd[1515]: time="2025-09-09T23:34:01.038982842Z" level=info msg="RemoveContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\"" Sep 9 23:34:01.051839 containerd[1515]: time="2025-09-09T23:34:01.051788277Z" level=info msg="RemoveContainer for \"6205629c5787a74d7d234a9457e37ed1695c5a31a656978971447f230a49afe7\" returns successfully" Sep 9 23:34:01.052464 kubelet[2654]: I0909 23:34:01.052323 2654 scope.go:117] "RemoveContainer" containerID="ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba" Sep 9 23:34:01.054267 containerd[1515]: time="2025-09-09T23:34:01.054216544Z" level=info msg="RemoveContainer for \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\"" Sep 9 23:34:01.058714 containerd[1515]: time="2025-09-09T23:34:01.058673936Z" level=info msg="RemoveContainer for \"ac1115c0d5b3658740189cb682ef98b95fb4cac52974a0b1c3e22ad5c20bcaba\" returns successfully" Sep 9 23:34:01.059116 kubelet[2654]: I0909 23:34:01.058989 2654 scope.go:117] "RemoveContainer" containerID="21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0" Sep 9 23:34:01.064303 containerd[1515]: time="2025-09-09T23:34:01.064256964Z" level=info 
msg="RemoveContainer for \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\"" Sep 9 23:34:01.069936 containerd[1515]: time="2025-09-09T23:34:01.069893070Z" level=info msg="RemoveContainer for \"21128d4dfc87733385fa567deb42a42479ca207ddb7ae433689016bd622435b0\" returns successfully" Sep 9 23:34:01.070254 kubelet[2654]: I0909 23:34:01.070223 2654 scope.go:117] "RemoveContainer" containerID="86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5" Sep 9 23:34:01.071745 containerd[1515]: time="2025-09-09T23:34:01.071719641Z" level=info msg="RemoveContainer for \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\"" Sep 9 23:34:01.077238 containerd[1515]: time="2025-09-09T23:34:01.077152715Z" level=info msg="RemoveContainer for \"86c2b21e1c68bd439704f71a8795a36427ec552a16fdee376623d143d871b2e5\" returns successfully" Sep 9 23:34:01.077474 kubelet[2654]: I0909 23:34:01.077453 2654 scope.go:117] "RemoveContainer" containerID="6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd" Sep 9 23:34:01.078996 containerd[1515]: time="2025-09-09T23:34:01.078964846Z" level=info msg="RemoveContainer for \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\"" Sep 9 23:34:01.082283 containerd[1515]: time="2025-09-09T23:34:01.082228683Z" level=info msg="RemoveContainer for \"6d88f0d758854565f305b1706afd6f19ed19fbc9265b1fa4d522fb19efe527fd\" returns successfully" Sep 9 23:34:01.794120 sshd[4242]: Connection closed by 10.0.0.1 port 37812 Sep 9 23:34:01.795659 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 9 23:34:01.798669 kubelet[2654]: I0909 23:34:01.798310 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f3c0a1-6150-41be-b80e-4460d6094132" path="/var/lib/kubelet/pods/33f3c0a1-6150-41be-b80e-4460d6094132/volumes" Sep 9 23:34:01.801100 kubelet[2654]: I0909 23:34:01.798924 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8d0decdc-5117-4a97-9a5f-eab81ca386a6" path="/var/lib/kubelet/pods/8d0decdc-5117-4a97-9a5f-eab81ca386a6/volumes" Sep 9 23:34:01.805282 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:37812.service: Deactivated successfully. Sep 9 23:34:01.807313 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 23:34:01.809472 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. Sep 9 23:34:01.811885 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:41088.service - OpenSSH per-connection server daemon (10.0.0.1:41088). Sep 9 23:34:01.812696 systemd-logind[1490]: Removed session 22. Sep 9 23:34:01.860355 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 41088 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:34:01.861683 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:34:01.866533 systemd-logind[1490]: New session 23 of user core. Sep 9 23:34:01.872280 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 23:34:02.911172 sshd[4399]: Connection closed by 10.0.0.1 port 41088 Sep 9 23:34:02.912211 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Sep 9 23:34:02.926836 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:41088.service: Deactivated successfully. Sep 9 23:34:02.930738 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 23:34:02.933227 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. Sep 9 23:34:02.942486 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:41098.service - OpenSSH per-connection server daemon (10.0.0.1:41098). Sep 9 23:34:02.943068 systemd-logind[1490]: Removed session 23. 
Sep 9 23:34:02.958081 kubelet[2654]: I0909 23:34:02.957911 2654 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d0decdc-5117-4a97-9a5f-eab81ca386a6" containerName="cilium-operator" Sep 9 23:34:02.958081 kubelet[2654]: I0909 23:34:02.957980 2654 memory_manager.go:355] "RemoveStaleState removing state" podUID="33f3c0a1-6150-41be-b80e-4460d6094132" containerName="cilium-agent" Sep 9 23:34:02.973834 systemd[1]: Created slice kubepods-burstable-pod4b3b41d1_c914_4f85_852d_000b5f26d383.slice - libcontainer container kubepods-burstable-pod4b3b41d1_c914_4f85_852d_000b5f26d383.slice. Sep 9 23:34:03.009520 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 41098 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:34:03.010816 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:34:03.014899 systemd-logind[1490]: New session 24 of user core. Sep 9 23:34:03.026286 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 9 23:34:03.076224 sshd[4413]: Connection closed by 10.0.0.1 port 41098 Sep 9 23:34:03.076718 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Sep 9 23:34:03.086958 kubelet[2654]: I0909 23:34:03.086912 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-bpf-maps\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.086962 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-xtables-lock\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.086985 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzp2\" (UniqueName: \"kubernetes.io/projected/4b3b41d1-c914-4f85-852d-000b5f26d383-kube-api-access-hnzp2\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.087006 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-cilium-run\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.087021 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-host-proc-sys-kernel\") pod \"cilium-sswhx\" (UID: 
\"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.087036 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b3b41d1-c914-4f85-852d-000b5f26d383-hubble-tls\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087088 kubelet[2654]: I0909 23:34:03.087051 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-cilium-cgroup\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 23:34:03.087067 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b3b41d1-c914-4f85-852d-000b5f26d383-clustermesh-secrets\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 23:34:03.087082 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-lib-modules\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 23:34:03.087097 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b3b41d1-c914-4f85-852d-000b5f26d383-cilium-ipsec-secrets\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 
23:34:03.087133 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-hostproc\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 23:34:03.087149 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-cni-path\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087236 kubelet[2654]: I0909 23:34:03.087165 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-etc-cni-netd\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087351 kubelet[2654]: I0909 23:34:03.087180 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b3b41d1-c914-4f85-852d-000b5f26d383-cilium-config-path\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.087351 kubelet[2654]: I0909 23:34:03.087197 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b3b41d1-c914-4f85-852d-000b5f26d383-host-proc-sys-net\") pod \"cilium-sswhx\" (UID: \"4b3b41d1-c914-4f85-852d-000b5f26d383\") " pod="kube-system/cilium-sswhx" Sep 9 23:34:03.092563 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:41098.service: Deactivated successfully. Sep 9 23:34:03.094337 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 9 23:34:03.095878 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:34:03.100961 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:41108.service - OpenSSH per-connection server daemon (10.0.0.1:41108).
Sep 9 23:34:03.102314 systemd-logind[1490]: Removed session 24.
Sep 9 23:34:03.156446 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 41108 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:34:03.157919 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:34:03.162683 systemd-logind[1490]: New session 25 of user core.
Sep 9 23:34:03.175292 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 23:34:03.279786 kubelet[2654]: E0909 23:34:03.279636 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:03.280182 containerd[1515]: time="2025-09-09T23:34:03.280127375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sswhx,Uid:4b3b41d1-c914-4f85-852d-000b5f26d383,Namespace:kube-system,Attempt:0,}"
Sep 9 23:34:03.300309 containerd[1515]: time="2025-09-09T23:34:03.300257107Z" level=info msg="connecting to shim 6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:34:03.334307 systemd[1]: Started cri-containerd-6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee.scope - libcontainer container 6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee.
Sep 9 23:34:03.360984 containerd[1515]: time="2025-09-09T23:34:03.360945253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sswhx,Uid:4b3b41d1-c914-4f85-852d-000b5f26d383,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\""
Sep 9 23:34:03.361782 kubelet[2654]: E0909 23:34:03.361760 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:03.364070 containerd[1515]: time="2025-09-09T23:34:03.364005951Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:34:03.378788 containerd[1515]: time="2025-09-09T23:34:03.378726903Z" level=info msg="Container 2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:34:03.384757 containerd[1515]: time="2025-09-09T23:34:03.384718984Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\""
Sep 9 23:34:03.385566 containerd[1515]: time="2025-09-09T23:34:03.385472919Z" level=info msg="StartContainer for \"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\""
Sep 9 23:34:03.386387 containerd[1515]: time="2025-09-09T23:34:03.386360330Z" level=info msg="connecting to shim 2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" protocol=ttrpc version=3
Sep 9 23:34:03.406508 systemd[1]: Started cri-containerd-2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46.scope - libcontainer container 2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46.
Sep 9 23:34:03.433907 containerd[1515]: time="2025-09-09T23:34:03.433797716Z" level=info msg="StartContainer for \"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\" returns successfully"
Sep 9 23:34:03.442117 systemd[1]: cri-containerd-2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46.scope: Deactivated successfully.
Sep 9 23:34:03.448289 containerd[1515]: time="2025-09-09T23:34:03.447553099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\" id:\"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\" pid:4491 exited_at:{seconds:1757460843 nanos:447182592}"
Sep 9 23:34:03.448289 containerd[1515]: time="2025-09-09T23:34:03.447628457Z" level=info msg="received exit event container_id:\"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\" id:\"2ed3a998dfb2bd207523d049404d08ed069d070eba19c326806624122dc85d46\" pid:4491 exited_at:{seconds:1757460843 nanos:447182592}"
Sep 9 23:34:04.036663 kubelet[2654]: E0909 23:34:04.036622 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:04.044658 containerd[1515]: time="2025-09-09T23:34:04.044597386Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:34:04.183663 containerd[1515]: time="2025-09-09T23:34:04.183594327Z" level=info msg="Container 4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:34:04.212985 containerd[1515]: time="2025-09-09T23:34:04.212918100Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\""
Sep 9 23:34:04.213704 containerd[1515]: time="2025-09-09T23:34:04.213624758Z" level=info msg="StartContainer for \"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\""
Sep 9 23:34:04.214674 containerd[1515]: time="2025-09-09T23:34:04.214648086Z" level=info msg="connecting to shim 4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" protocol=ttrpc version=3
Sep 9 23:34:04.234308 systemd[1]: Started cri-containerd-4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad.scope - libcontainer container 4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad.
Sep 9 23:34:04.260734 containerd[1515]: time="2025-09-09T23:34:04.260688862Z" level=info msg="StartContainer for \"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\" returns successfully"
Sep 9 23:34:04.266272 systemd[1]: cri-containerd-4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad.scope: Deactivated successfully.
Sep 9 23:34:04.266794 containerd[1515]: time="2025-09-09T23:34:04.266763074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\" id:\"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\" pid:4536 exited_at:{seconds:1757460844 nanos:266464484}"
Sep 9 23:34:04.267264 containerd[1515]: time="2025-09-09T23:34:04.267239380Z" level=info msg="received exit event container_id:\"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\" id:\"4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad\" pid:4536 exited_at:{seconds:1757460844 nanos:266464484}"
Sep 9 23:34:04.284291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4511bf1260298a34602ead1424017b0cd9e2bb2fa495b1113c5f2cb8df3b84ad-rootfs.mount: Deactivated successfully.
Sep 9 23:34:04.860922 kubelet[2654]: E0909 23:34:04.860808 2654 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:34:05.041826 kubelet[2654]: E0909 23:34:05.041749 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:05.047866 containerd[1515]: time="2025-09-09T23:34:05.047820018Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:34:05.077151 containerd[1515]: time="2025-09-09T23:34:05.076025287Z" level=info msg="Container ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:34:05.086565 containerd[1515]: time="2025-09-09T23:34:05.086511305Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\""
Sep 9 23:34:05.087316 containerd[1515]: time="2025-09-09T23:34:05.087234565Z" level=info msg="StartContainer for \"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\""
Sep 9 23:34:05.090128 containerd[1515]: time="2025-09-09T23:34:05.089553658Z" level=info msg="connecting to shim ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" protocol=ttrpc version=3
Sep 9 23:34:05.113315 systemd[1]: Started cri-containerd-ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf.scope - libcontainer container ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf.
Sep 9 23:34:05.154262 containerd[1515]: time="2025-09-09T23:34:05.154217879Z" level=info msg="StartContainer for \"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\" returns successfully"
Sep 9 23:34:05.154544 systemd[1]: cri-containerd-ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf.scope: Deactivated successfully.
Sep 9 23:34:05.157838 containerd[1515]: time="2025-09-09T23:34:05.157794096Z" level=info msg="received exit event container_id:\"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\" id:\"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\" pid:4582 exited_at:{seconds:1757460845 nanos:157619181}"
Sep 9 23:34:05.158153 containerd[1515]: time="2025-09-09T23:34:05.158087087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\" id:\"ce12753526114314bf3096b5fe00bfd4d34c1fefd98df9fa095a0c127eb28fcf\" pid:4582 exited_at:{seconds:1757460845 nanos:157619181}"
Sep 9 23:34:05.794191 kubelet[2654]: E0909 23:34:05.794155 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:06.047530 kubelet[2654]: E0909 23:34:06.047413 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:06.052247 containerd[1515]: time="2025-09-09T23:34:06.052127282Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:34:06.076441 containerd[1515]: time="2025-09-09T23:34:06.076381076Z" level=info msg="Container db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:34:06.088753 containerd[1515]: time="2025-09-09T23:34:06.088710428Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\""
Sep 9 23:34:06.090185 containerd[1515]: time="2025-09-09T23:34:06.089460408Z" level=info msg="StartContainer for \"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\""
Sep 9 23:34:06.091584 containerd[1515]: time="2025-09-09T23:34:06.091554632Z" level=info msg="connecting to shim db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" protocol=ttrpc version=3
Sep 9 23:34:06.117340 systemd[1]: Started cri-containerd-db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57.scope - libcontainer container db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57.
Sep 9 23:34:06.148054 systemd[1]: cri-containerd-db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57.scope: Deactivated successfully.
Sep 9 23:34:06.148816 containerd[1515]: time="2025-09-09T23:34:06.148764948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\" id:\"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\" pid:4622 exited_at:{seconds:1757460846 nanos:148467956}"
Sep 9 23:34:06.151785 containerd[1515]: time="2025-09-09T23:34:06.151643871Z" level=info msg="received exit event container_id:\"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\" id:\"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\" pid:4622 exited_at:{seconds:1757460846 nanos:148467956}"
Sep 9 23:34:06.153500 containerd[1515]: time="2025-09-09T23:34:06.153472702Z" level=info msg="StartContainer for \"db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57\" returns successfully"
Sep 9 23:34:06.193645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db911dba8eb9e10fb942f4b0476b2f8e364e831dee3425ea8cd98b2df76f5d57-rootfs.mount: Deactivated successfully.
Sep 9 23:34:07.053790 kubelet[2654]: E0909 23:34:07.052860 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:07.056152 containerd[1515]: time="2025-09-09T23:34:07.056095047Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:34:07.070655 containerd[1515]: time="2025-09-09T23:34:07.069881268Z" level=info msg="Container 8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:34:07.084543 containerd[1515]: time="2025-09-09T23:34:07.084491428Z" level=info msg="CreateContainer within sandbox \"6fa6f0646dd705864216dab5185a3f4f1fe66396d70c88cec1ee2c8ed61ec9ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\""
Sep 9 23:34:07.085288 containerd[1515]: time="2025-09-09T23:34:07.085254210Z" level=info msg="StartContainer for \"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\""
Sep 9 23:34:07.086443 containerd[1515]: time="2025-09-09T23:34:07.086342943Z" level=info msg="connecting to shim 8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348" address="unix:///run/containerd/s/7d2daf416c914bfcdead20d80af587744767b765fb2f5c04b89f7b41f30fa853" protocol=ttrpc version=3
Sep 9 23:34:07.112369 systemd[1]: Started cri-containerd-8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348.scope - libcontainer container 8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348.
Sep 9 23:34:07.149658 containerd[1515]: time="2025-09-09T23:34:07.149605827Z" level=info msg="StartContainer for \"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" returns successfully"
Sep 9 23:34:07.210767 containerd[1515]: time="2025-09-09T23:34:07.210710524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" id:\"afad9fbd4e43f16a316e3b7d0a972c6e1634394afcfd01b952468a633c7bac0c\" pid:4689 exited_at:{seconds:1757460847 nanos:210423411}"
Sep 9 23:34:07.440213 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:34:07.794654 kubelet[2654]: E0909 23:34:07.794510 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:08.059567 kubelet[2654]: E0909 23:34:08.059406 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:08.076127 kubelet[2654]: I0909 23:34:08.075986 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sswhx" podStartSLOduration=6.075967423 podStartE2EDuration="6.075967423s" podCreationTimestamp="2025-09-09 23:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:34:08.075794787 +0000 UTC m=+78.359578624" watchObservedRunningTime="2025-09-09 23:34:08.075967423 +0000 UTC m=+78.359751260"
Sep 9 23:34:09.281548 kubelet[2654]: E0909 23:34:09.281477 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:09.564054 containerd[1515]: time="2025-09-09T23:34:09.563708344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" id:\"2945037ac872c049abb02d5653c4c4afe3745c8f19be341c8131171823f97ca8\" pid:4975 exit_status:1 exited_at:{seconds:1757460849 nanos:563133476}"
Sep 9 23:34:10.308137 systemd-networkd[1425]: lxc_health: Link UP
Sep 9 23:34:10.314164 systemd-networkd[1425]: lxc_health: Gained carrier
Sep 9 23:34:11.281923 kubelet[2654]: E0909 23:34:11.281883 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:11.713122 containerd[1515]: time="2025-09-09T23:34:11.712997298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" id:\"37784e7e5bda7923e7f57ca82350c1b37735475adf276072960de4c3c97d070a\" pid:5234 exited_at:{seconds:1757460851 nanos:712442748}"
Sep 9 23:34:12.067534 kubelet[2654]: E0909 23:34:12.067376 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:12.238647 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Sep 9 23:34:12.793698 kubelet[2654]: E0909 23:34:12.793585 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:13.069676 kubelet[2654]: E0909 23:34:13.069261 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:34:13.832973 containerd[1515]: time="2025-09-09T23:34:13.832885240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" id:\"ab72eee1f1015757a11cae123ca69d7bff09d70da23a24b34f8318fc04ccc5e8\" pid:5261 exited_at:{seconds:1757460853 nanos:832543485}"
Sep 9 23:34:16.028130 containerd[1515]: time="2025-09-09T23:34:16.028039742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aa66eae4b50487f6697bc7acc78bfc43d4363290a29c44b8b17e41839665348\" id:\"6c01ceda8386b81944f95c28be7876965149b3dc0c4fed79c30622b82b798419\" pid:5292 exited_at:{seconds:1757460856 nanos:27561346}"
Sep 9 23:34:16.033711 sshd[4422]: Connection closed by 10.0.0.1 port 41108
Sep 9 23:34:16.034175 sshd-session[4420]: pam_unix(sshd:session): session closed for user core
Sep 9 23:34:16.037854 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:41108.service: Deactivated successfully.
Sep 9 23:34:16.040708 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 23:34:16.043005 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit.
Sep 9 23:34:16.044447 systemd-logind[1490]: Removed session 25.