Sep 6 09:18:48.752094 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 09:18:48.752115 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sat Sep 6 08:11:45 -00 2025
Sep 6 09:18:48.752125 kernel: KASLR enabled
Sep 6 09:18:48.752131 kernel: efi: EFI v2.7 by EDK II
Sep 6 09:18:48.752136 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 6 09:18:48.752142 kernel: random: crng init done
Sep 6 09:18:48.752148 kernel: secureboot: Secure boot disabled
Sep 6 09:18:48.752154 kernel: ACPI: Early table checksum verification disabled
Sep 6 09:18:48.752160 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 6 09:18:48.752167 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 09:18:48.752173 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752179 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752185 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752191 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752197 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752205 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752211 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752217 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752223 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:18:48.752229 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 6 09:18:48.752235 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 6 09:18:48.752241 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 09:18:48.752247 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 6 09:18:48.752253 kernel: Zone ranges:
Sep 6 09:18:48.752259 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 09:18:48.752267 kernel: DMA32 empty
Sep 6 09:18:48.752273 kernel: Normal empty
Sep 6 09:18:48.752279 kernel: Device empty
Sep 6 09:18:48.752285 kernel: Movable zone start for each node
Sep 6 09:18:48.752291 kernel: Early memory node ranges
Sep 6 09:18:48.752297 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 6 09:18:48.752304 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 6 09:18:48.752310 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 6 09:18:48.752316 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 6 09:18:48.752323 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 6 09:18:48.752328 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 6 09:18:48.752334 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 6 09:18:48.752342 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 6 09:18:48.752348 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 6 09:18:48.752354 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 6 09:18:48.752363 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 6 09:18:48.752369 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 6 09:18:48.752376 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 6 09:18:48.752384 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 09:18:48.752390 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 6 09:18:48.752397 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 6 09:18:48.752403 kernel: psci: probing for conduit method from ACPI.
Sep 6 09:18:48.752410 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 09:18:48.752417 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 09:18:48.752423 kernel: psci: Trusted OS migration not required
Sep 6 09:18:48.752430 kernel: psci: SMC Calling Convention v1.1
Sep 6 09:18:48.752436 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 6 09:18:48.752443 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 6 09:18:48.752450 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 6 09:18:48.752457 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 6 09:18:48.752463 kernel: Detected PIPT I-cache on CPU0
Sep 6 09:18:48.752469 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 09:18:48.752476 kernel: CPU features: detected: Spectre-v4
Sep 6 09:18:48.752482 kernel: CPU features: detected: Spectre-BHB
Sep 6 09:18:48.752488 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 09:18:48.752495 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 09:18:48.752501 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 09:18:48.752508 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 09:18:48.752514 kernel: alternatives: applying boot alternatives
Sep 6 09:18:48.752521 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6163bff8094500f0c843d90ad54b6289c22d80e37c1e6e3ca3f70e7b65171850
Sep 6 09:18:48.752529 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 09:18:48.752536 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 09:18:48.752543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 09:18:48.752549 kernel: Fallback order for Node 0: 0
Sep 6 09:18:48.752555 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 6 09:18:48.752562 kernel: Policy zone: DMA
Sep 6 09:18:48.752568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 09:18:48.752575 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 6 09:18:48.752581 kernel: software IO TLB: area num 4.
Sep 6 09:18:48.752588 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 6 09:18:48.752594 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 6 09:18:48.752602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 09:18:48.752609 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 09:18:48.752615 kernel: rcu: RCU event tracing is enabled.
Sep 6 09:18:48.752622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 09:18:48.752629 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 09:18:48.752635 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 09:18:48.752642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 09:18:48.752648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 09:18:48.752655 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 09:18:48.752661 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 09:18:48.752668 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 09:18:48.752675 kernel: GICv3: 256 SPIs implemented
Sep 6 09:18:48.752682 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 09:18:48.752688 kernel: Root IRQ handler: gic_handle_irq
Sep 6 09:18:48.752702 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 6 09:18:48.752708 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 6 09:18:48.752714 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 6 09:18:48.752721 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 6 09:18:48.752728 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 09:18:48.752735 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 6 09:18:48.752749 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 6 09:18:48.752755 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 6 09:18:48.752762 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 6 09:18:48.752770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 09:18:48.752776 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 09:18:48.752783 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 09:18:48.752790 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 09:18:48.752796 kernel: arm-pv: using stolen time PV
Sep 6 09:18:48.752803 kernel: Console: colour dummy device 80x25
Sep 6 09:18:48.752809 kernel: ACPI: Core revision 20240827
Sep 6 09:18:48.752816 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 09:18:48.752822 kernel: pid_max: default: 32768 minimum: 301
Sep 6 09:18:48.752829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 6 09:18:48.752837 kernel: landlock: Up and running.
Sep 6 09:18:48.752843 kernel: SELinux: Initializing.
Sep 6 09:18:48.752850 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 09:18:48.752857 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 09:18:48.752863 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 09:18:48.752870 kernel: rcu: Max phase no-delay instances is 400.
Sep 6 09:18:48.752877 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 6 09:18:48.752883 kernel: Remapping and enabling EFI services.
Sep 6 09:18:48.752890 kernel: smp: Bringing up secondary CPUs ...
Sep 6 09:18:48.752902 kernel: Detected PIPT I-cache on CPU1
Sep 6 09:18:48.752909 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 6 09:18:48.752916 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 6 09:18:48.752924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 09:18:48.752931 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 09:18:48.752938 kernel: Detected PIPT I-cache on CPU2
Sep 6 09:18:48.752964 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 6 09:18:48.752972 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 6 09:18:48.752981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 09:18:48.752988 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 6 09:18:48.752995 kernel: Detected PIPT I-cache on CPU3
Sep 6 09:18:48.753002 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 6 09:18:48.753009 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 6 09:18:48.753016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 09:18:48.753023 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 6 09:18:48.753031 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 09:18:48.753038 kernel: SMP: Total of 4 processors activated.
Sep 6 09:18:48.753046 kernel: CPU: All CPU(s) started at EL1
Sep 6 09:18:48.753054 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 09:18:48.753061 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 09:18:48.753069 kernel: CPU features: detected: Common not Private translations
Sep 6 09:18:48.753076 kernel: CPU features: detected: CRC32 instructions
Sep 6 09:18:48.753083 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 6 09:18:48.753090 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 09:18:48.753097 kernel: CPU features: detected: LSE atomic instructions
Sep 6 09:18:48.753104 kernel: CPU features: detected: Privileged Access Never
Sep 6 09:18:48.753113 kernel: CPU features: detected: RAS Extension Support
Sep 6 09:18:48.753121 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 6 09:18:48.753128 kernel: alternatives: applying system-wide alternatives
Sep 6 09:18:48.753136 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 6 09:18:48.753143 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 6 09:18:48.753151 kernel: devtmpfs: initialized
Sep 6 09:18:48.753158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 09:18:48.753165 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 09:18:48.753173 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 6 09:18:48.753181 kernel: 0 pages in range for non-PLT usage
Sep 6 09:18:48.753188 kernel: 508560 pages in range for PLT usage
Sep 6 09:18:48.753195 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 09:18:48.753202 kernel: SMBIOS 3.0.0 present.
Sep 6 09:18:48.753209 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 6 09:18:48.753216 kernel: DMI: Memory slots populated: 1/1
Sep 6 09:18:48.753223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 09:18:48.753231 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 09:18:48.753238 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 09:18:48.753246 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 09:18:48.753254 kernel: audit: initializing netlink subsys (disabled)
Sep 6 09:18:48.753261 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 6 09:18:48.753268 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 09:18:48.753289 kernel: cpuidle: using governor menu
Sep 6 09:18:48.753296 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 09:18:48.753304 kernel: ASID allocator initialised with 32768 entries
Sep 6 09:18:48.753311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 09:18:48.753319 kernel: Serial: AMBA PL011 UART driver
Sep 6 09:18:48.753328 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 09:18:48.753335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 6 09:18:48.753342 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 09:18:48.753349 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 6 09:18:48.753356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 09:18:48.753363 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 6 09:18:48.753370 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 09:18:48.753377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 6 09:18:48.753384 kernel: ACPI: Added _OSI(Module Device)
Sep 6 09:18:48.753391 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 09:18:48.753400 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 09:18:48.753407 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 09:18:48.753415 kernel: ACPI: Interpreter enabled
Sep 6 09:18:48.753422 kernel: ACPI: Using GIC for interrupt routing
Sep 6 09:18:48.753430 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 09:18:48.753437 kernel: ACPI: CPU0 has been hot-added
Sep 6 09:18:48.753444 kernel: ACPI: CPU1 has been hot-added
Sep 6 09:18:48.753451 kernel: ACPI: CPU2 has been hot-added
Sep 6 09:18:48.753458 kernel: ACPI: CPU3 has been hot-added
Sep 6 09:18:48.753466 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 09:18:48.753473 kernel: printk: legacy console [ttyAMA0] enabled
Sep 6 09:18:48.753480 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 09:18:48.753612 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 09:18:48.753676 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 09:18:48.753748 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 09:18:48.753807 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 6 09:18:48.753881 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 6 09:18:48.753890 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 6 09:18:48.753898 kernel: PCI host bridge to bus 0000:00
Sep 6 09:18:48.753996 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 6 09:18:48.754053 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 09:18:48.754105 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 6 09:18:48.754164 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 09:18:48.754247 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 6 09:18:48.754318 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 6 09:18:48.754378 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 6 09:18:48.754436 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 6 09:18:48.754495 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 09:18:48.754553 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 6 09:18:48.754611 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 6 09:18:48.754672 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 6 09:18:48.754750 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 6 09:18:48.754805 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 09:18:48.754859 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 6 09:18:48.754868 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 09:18:48.754876 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 09:18:48.754883 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 09:18:48.754892 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 09:18:48.754899 kernel: iommu: Default domain type: Translated
Sep 6 09:18:48.754906 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 09:18:48.754914 kernel: efivars: Registered efivars operations
Sep 6 09:18:48.754920 kernel: vgaarb: loaded
Sep 6 09:18:48.754927 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 09:18:48.754935 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 09:18:48.754942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 09:18:48.754957 kernel: pnp: PnP ACPI init
Sep 6 09:18:48.755040 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 6 09:18:48.755050 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 09:18:48.755057 kernel: NET: Registered PF_INET protocol family
Sep 6 09:18:48.755065 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 09:18:48.755072 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 09:18:48.755079 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 09:18:48.755086 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 09:18:48.755093 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 6 09:18:48.755100 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 09:18:48.755109 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 09:18:48.755116 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 09:18:48.755123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 09:18:48.755129 kernel: PCI: CLS 0 bytes, default 64
Sep 6 09:18:48.755137 kernel: kvm [1]: HYP mode not available
Sep 6 09:18:48.755143 kernel: Initialise system trusted keyrings
Sep 6 09:18:48.755150 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 09:18:48.755157 kernel: Key type asymmetric registered
Sep 6 09:18:48.755164 kernel: Asymmetric key parser 'x509' registered
Sep 6 09:18:48.755172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 09:18:48.755179 kernel: io scheduler mq-deadline registered
Sep 6 09:18:48.755187 kernel: io scheduler kyber registered
Sep 6 09:18:48.755194 kernel: io scheduler bfq registered
Sep 6 09:18:48.755201 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 09:18:48.755208 kernel: ACPI: button: Power Button [PWRB]
Sep 6 09:18:48.755215 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 6 09:18:48.755275 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 6 09:18:48.755285 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 09:18:48.755295 kernel: thunder_xcv, ver 1.0
Sep 6 09:18:48.755302 kernel: thunder_bgx, ver 1.0
Sep 6 09:18:48.755309 kernel: nicpf, ver 1.0
Sep 6 09:18:48.755316 kernel: nicvf, ver 1.0
Sep 6 09:18:48.755385 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 09:18:48.755444 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T09:18:48 UTC (1757150328)
Sep 6 09:18:48.755453 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 09:18:48.755461 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 6 09:18:48.755470 kernel: watchdog: NMI not fully supported
Sep 6 09:18:48.755477 kernel: watchdog: Hard watchdog permanently disabled
Sep 6 09:18:48.755484 kernel: NET: Registered PF_INET6 protocol family
Sep 6 09:18:48.755491 kernel: Segment Routing with IPv6
Sep 6 09:18:48.755497 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 09:18:48.755504 kernel: NET: Registered PF_PACKET protocol family
Sep 6 09:18:48.755511 kernel: Key type dns_resolver registered
Sep 6 09:18:48.755518 kernel: registered taskstats version 1
Sep 6 09:18:48.755525 kernel: Loading compiled-in X.509 certificates
Sep 6 09:18:48.755533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: bc71141abefe5117f42b11d2b521de2eb9144b0e'
Sep 6 09:18:48.755540 kernel: Demotion targets for Node 0: null
Sep 6 09:18:48.755547 kernel: Key type .fscrypt registered
Sep 6 09:18:48.755554 kernel: Key type fscrypt-provisioning registered
Sep 6 09:18:48.755561 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 09:18:48.755568 kernel: ima: Allocated hash algorithm: sha1
Sep 6 09:18:48.755575 kernel: ima: No architecture policies found
Sep 6 09:18:48.755582 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 09:18:48.755589 kernel: clk: Disabling unused clocks
Sep 6 09:18:48.755597 kernel: PM: genpd: Disabling unused power domains
Sep 6 09:18:48.755604 kernel: Warning: unable to open an initial console.
Sep 6 09:18:48.755612 kernel: Freeing unused kernel memory: 38976K
Sep 6 09:18:48.755619 kernel: Run /init as init process
Sep 6 09:18:48.755626 kernel: with arguments:
Sep 6 09:18:48.755633 kernel: /init
Sep 6 09:18:48.755639 kernel: with environment:
Sep 6 09:18:48.755646 kernel: HOME=/
Sep 6 09:18:48.755653 kernel: TERM=linux
Sep 6 09:18:48.755661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 09:18:48.755669 systemd[1]: Successfully made /usr/ read-only.
Sep 6 09:18:48.755679 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 6 09:18:48.755687 systemd[1]: Detected virtualization kvm.
Sep 6 09:18:48.755701 systemd[1]: Detected architecture arm64.
Sep 6 09:18:48.755709 systemd[1]: Running in initrd.
Sep 6 09:18:48.755716 systemd[1]: No hostname configured, using default hostname.
Sep 6 09:18:48.755725 systemd[1]: Hostname set to .
Sep 6 09:18:48.755733 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 09:18:48.755740 systemd[1]: Queued start job for default target initrd.target.
Sep 6 09:18:48.755748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:18:48.755755 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:18:48.755763 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 6 09:18:48.755771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 09:18:48.755778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 6 09:18:48.755788 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 6 09:18:48.755797 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 6 09:18:48.755804 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 6 09:18:48.755812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:18:48.755820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:18:48.755828 systemd[1]: Reached target paths.target - Path Units.
Sep 6 09:18:48.755835 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 09:18:48.755844 systemd[1]: Reached target swap.target - Swaps.
Sep 6 09:18:48.755851 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 09:18:48.755859 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 09:18:48.755867 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 09:18:48.755874 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 6 09:18:48.755882 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 6 09:18:48.755890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:18:48.755898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:18:48.755905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:18:48.755914 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 09:18:48.755922 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 6 09:18:48.755929 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 09:18:48.755937 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 6 09:18:48.755961 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 6 09:18:48.755973 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 09:18:48.755981 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 09:18:48.755988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 09:18:48.755999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:18:48.756006 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 6 09:18:48.756014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:18:48.756022 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 09:18:48.756047 systemd-journald[244]: Collecting audit messages is disabled.
Sep 6 09:18:48.756067 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 09:18:48.756075 systemd-journald[244]: Journal started
Sep 6 09:18:48.756096 systemd-journald[244]: Runtime Journal (/run/log/journal/0a23a579ba834721ad9cc94732cd1ebe) is 6M, max 48.5M, 42.4M free.
Sep 6 09:18:48.764060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 09:18:48.764105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:18:48.749361 systemd-modules-load[246]: Inserted module 'overlay'
Sep 6 09:18:48.767348 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 6 09:18:48.768884 kernel: Bridge firewalling registered
Sep 6 09:18:48.768903 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 09:18:48.771009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:18:48.772104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 09:18:48.778734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 09:18:48.780667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:18:48.782872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 09:18:48.790796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 09:18:48.798053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:18:48.799735 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 6 09:18:48.800843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:18:48.803061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 09:18:48.806349 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 09:18:48.809512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 09:18:48.820549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 6 09:18:48.835683 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6163bff8094500f0c843d90ad54b6289c22d80e37c1e6e3ca3f70e7b65171850
Sep 6 09:18:48.850682 systemd-resolved[286]: Positive Trust Anchors:
Sep 6 09:18:48.850712 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 09:18:48.850744 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 09:18:48.855671 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 6 09:18:48.856842 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 09:18:48.861222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 09:18:48.907980 kernel: SCSI subsystem initialized
Sep 6 09:18:48.912967 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 09:18:48.920998 kernel: iscsi: registered transport (tcp)
Sep 6 09:18:48.933996 kernel: iscsi: registered transport (qla4xxx)
Sep 6 09:18:48.934044 kernel: QLogic iSCSI HBA Driver
Sep 6 09:18:48.951150 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 6 09:18:48.965607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:18:48.967971 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 6 09:18:49.011557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 6 09:18:49.014051 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 6 09:18:49.075994 kernel: raid6: neonx8 gen() 15205 MB/s
Sep 6 09:18:49.092971 kernel: raid6: neonx4 gen() 15540 MB/s
Sep 6 09:18:49.109968 kernel: raid6: neonx2 gen() 12965 MB/s
Sep 6 09:18:49.126987 kernel: raid6: neonx1 gen() 9971 MB/s
Sep 6 09:18:49.143969 kernel: raid6: int64x8 gen() 6884 MB/s
Sep 6 09:18:49.160987 kernel: raid6: int64x4 gen() 7349 MB/s
Sep 6 09:18:49.177966 kernel: raid6: int64x2 gen() 6065 MB/s
Sep 6 09:18:49.194985 kernel: raid6: int64x1 gen() 5031 MB/s
Sep 6 09:18:49.195014 kernel: raid6: using algorithm neonx4 gen() 15540 MB/s
Sep 6 09:18:49.211978 kernel: raid6: .... xor() 12257 MB/s, rmw enabled
Sep 6 09:18:49.212015 kernel: raid6: using neon recovery algorithm
Sep 6 09:18:49.217408 kernel: xor: measuring software checksum speed
Sep 6 09:18:49.217435 kernel: 8regs : 21624 MB/sec
Sep 6 09:18:49.217981 kernel: 32regs : 21670 MB/sec
Sep 6 09:18:49.218977 kernel: arm64_neon : 28118 MB/sec
Sep 6 09:18:49.218991 kernel: xor: using function: arm64_neon (28118 MB/sec)
Sep 6 09:18:49.270978 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 6 09:18:49.277794 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 09:18:49.280512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 09:18:49.314313 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 6 09:18:49.318385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 09:18:49.320455 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 6 09:18:49.345448 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 6 09:18:49.369386 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 09:18:49.371741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 09:18:49.429463 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:18:49.432753 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 6 09:18:49.482965 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 6 09:18:49.487101 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 09:18:49.490973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 09:18:49.491009 kernel: GPT:9289727 != 19775487
Sep 6 09:18:49.491019 kernel: GPT:Alternate GPT header not at the end of the disk.
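The raid6 `gen()` lines above are the kernel benchmarking each available parity-generation implementation at boot and keeping the fastest one; the `xor:` lines do the same for checksum functions. A toy sketch of that pick-the-winner step, using the throughput figures from this very boot:

```python
# gen() throughputs (MB/s) copied from the raid6 benchmark above; the
# kernel times each implementation briefly and keeps the fastest.
raid6_gen = {
    "neonx8": 15205, "neonx4": 15540, "neonx2": 12965, "neonx1": 9971,
    "int64x8": 6884, "int64x4": 7349, "int64x2": 6065, "int64x1": 5031,
}
best = max(raid6_gen, key=raid6_gen.get)  # matches "using algorithm neonx4"
```

The same logic explains `xor: using function: arm64_neon (28118 MB/sec)` — it simply had the highest measured throughput of the three candidates.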
Sep 6 09:18:49.491028 kernel: GPT:9289727 != 19775487
Sep 6 09:18:49.491041 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 09:18:49.491977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:18:49.492472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 09:18:49.492545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:18:49.495590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:18:49.497379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:18:49.523470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 6 09:18:49.525029 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 6 09:18:49.534000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:18:49.543655 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 6 09:18:49.549929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 6 09:18:49.551136 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 6 09:18:49.559517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 09:18:49.560792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 09:18:49.563068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:18:49.565208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 09:18:49.567864 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 6 09:18:49.569787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 6 09:18:49.586538 disk-uuid[589]: Primary Header is updated.
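The GPT warnings above mean the backup (alternate) GPT header is not at the disk's last sector, where the spec requires it. A common cause — assumed here, not stated in the log — is a raw image built for a smaller disk being written onto a larger virtual disk, so the backup header still sits at the old end. A small sketch of the arithmetic behind the kernel's complaint:

```python
# Figures from the virtio_blk and GPT lines above.
logical_blocks = 19775488        # [vda] 19775488 512-byte logical blocks
last_lba = logical_blocks - 1    # where the backup GPT header belongs
alt_header_lba = 9289727         # where this image actually put it

# The kernel prints "GPT:9289727 != 19775487" because these disagree.
needs_repair = alt_header_lba != last_lba
```

Tools such as GNU Parted (which the kernel message itself suggests) can relocate the backup structures to the true end of the disk.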
Sep 6 09:18:49.586538 disk-uuid[589]: Secondary Entries is updated.
Sep 6 09:18:49.586538 disk-uuid[589]: Secondary Header is updated.
Sep 6 09:18:49.590214 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 09:18:49.593051 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:18:49.595960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:18:50.597880 disk-uuid[593]: The operation has completed successfully.
Sep 6 09:18:50.598985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:18:50.623043 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 09:18:50.623143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 6 09:18:50.647248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 6 09:18:50.660863 sh[610]: Success
Sep 6 09:18:50.673737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 09:18:50.673806 kernel: device-mapper: uevent: version 1.0.3
Sep 6 09:18:50.673823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 6 09:18:50.679998 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 6 09:18:50.702860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 6 09:18:50.705921 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 6 09:18:50.717438 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
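verity-setup above assembles /dev/mapper/usr as a dm-verity device: block reads from /usr are hashed (with the hardware-accelerated "sha256-ce" shash noted in the log) and checked against a precomputed Merkle tree whose root must equal the `verity.usrhash` value pinned on the kernel command line. A toy, leaf-level flavor of that per-read check — not the real tree walk — looks like:

```python
import hashlib

# Toy sketch: hash one 4 KiB block and compare against a precomputed
# digest. Real dm-verity hashes interior nodes too, all the way up to
# the root hash given as verity.usrhash on the kernel command line.
BLOCK = b"\x00" * 4096
expected = hashlib.sha256(BLOCK).hexdigest()

def verify(data: bytes, want: str) -> bool:
    """Reject the read if the block's digest doesn't match."""
    return hashlib.sha256(data).hexdigest() == want

ok = verify(BLOCK, expected)
```

Because only the small root hash needs to be trusted (it rides on the signed/measured kernel command line), any offline tampering with /usr shows up as a hash mismatch at read time.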
Sep 6 09:18:50.722029 kernel: BTRFS: device fsid ca3931e9-1a72-47f8-8d4d-1c421b859b01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622)
Sep 6 09:18:50.722056 kernel: BTRFS info (device dm-0): first mount of filesystem ca3931e9-1a72-47f8-8d4d-1c421b859b01
Sep 6 09:18:50.723551 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 6 09:18:50.727139 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 6 09:18:50.727161 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 6 09:18:50.728258 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 6 09:18:50.729580 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 6 09:18:50.731116 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 6 09:18:50.731920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 6 09:18:50.733642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 6 09:18:50.755965 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 6 09:18:50.757976 kernel: BTRFS info (device vda6): first mount of filesystem bf040226-1451-4dec-bd48-6652b943e27f
Sep 6 09:18:50.758012 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 09:18:50.761196 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:18:50.761227 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:18:50.765963 kernel: BTRFS info (device vda6): last unmount of filesystem bf040226-1451-4dec-bd48-6652b943e27f
Sep 6 09:18:50.766414 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 6 09:18:50.768496 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 6 09:18:50.834541 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 09:18:50.839899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 09:18:50.868180 ignition[696]: Ignition 2.22.0
Sep 6 09:18:50.868199 ignition[696]: Stage: fetch-offline
Sep 6 09:18:50.868235 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:50.868243 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:50.868326 ignition[696]: parsed url from cmdline: ""
Sep 6 09:18:50.868330 ignition[696]: no config URL provided
Sep 6 09:18:50.868334 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 09:18:50.868341 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Sep 6 09:18:50.868362 ignition[696]: op(1): [started] loading QEMU firmware config module
Sep 6 09:18:50.868366 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 09:18:50.873586 ignition[696]: op(1): [finished] loading QEMU firmware config module
Sep 6 09:18:50.884830 systemd-networkd[801]: lo: Link UP
Sep 6 09:18:50.884843 systemd-networkd[801]: lo: Gained carrier
Sep 6 09:18:50.885585 systemd-networkd[801]: Enumeration completed
Sep 6 09:18:50.886080 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 09:18:50.886084 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 09:18:50.886527 systemd-networkd[801]: eth0: Link UP
Sep 6 09:18:50.886817 systemd-networkd[801]: eth0: Gained carrier
Sep 6 09:18:50.886827 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 09:18:50.887626 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 09:18:50.889033 systemd[1]: Reached target network.target - Network.
Sep 6 09:18:50.903998 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 09:18:50.931768 ignition[696]: parsing config with SHA512: f1d641e79466a379a08aca071200cc10a3b9e5e12fb53f390079213cb80a6821661f85838fa3342a5d81661d2615ca226db6e08769c425c7d09383c82198fe76
Sep 6 09:18:50.937273 unknown[696]: fetched base config from "system"
Sep 6 09:18:50.937283 unknown[696]: fetched user config from "qemu"
Sep 6 09:18:50.937740 ignition[696]: fetch-offline: fetch-offline passed
Sep 6 09:18:50.939458 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 09:18:50.937804 ignition[696]: Ignition finished successfully
Sep 6 09:18:50.941139 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 09:18:50.944980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 6 09:18:50.975696 ignition[809]: Ignition 2.22.0
Sep 6 09:18:50.975713 ignition[809]: Stage: kargs
Sep 6 09:18:50.975847 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:50.975858 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:50.976597 ignition[809]: kargs: kargs passed
Sep 6 09:18:50.980201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 6 09:18:50.976643 ignition[809]: Ignition finished successfully
Sep 6 09:18:50.983379 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
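The DHCPv4 entry above hands eth0 the address 10.0.0.10/16 with gateway 10.0.0.1, both from the DHCP server at 10.0.0.1. The default route is only installable because the gateway falls inside the interface's own subnet, so it is directly reachable on-link; a small sketch of that relationship:

```python
import ipaddress

# Lease values from the DHCPv4 entry above.
iface = ipaddress.ip_interface("10.0.0.10/16")
gateway = ipaddress.ip_address("10.0.0.1")

# The default route via 10.0.0.1 works because the gateway lies inside
# the interface's own /16 and is therefore directly reachable on-link.
gateway_on_link = gateway in iface.network
```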
Sep 6 09:18:51.014557 ignition[818]: Ignition 2.22.0
Sep 6 09:18:51.015509 ignition[818]: Stage: disks
Sep 6 09:18:51.016292 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:51.017224 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:51.018059 ignition[818]: disks: disks passed
Sep 6 09:18:51.018105 ignition[818]: Ignition finished successfully
Sep 6 09:18:51.020846 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 6 09:18:51.023255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 6 09:18:51.024227 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 6 09:18:51.025229 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 09:18:51.026972 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 09:18:51.028920 systemd[1]: Reached target basic.target - Basic System.
Sep 6 09:18:51.031555 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 6 09:18:51.058193 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 6 09:18:51.061994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 6 09:18:51.064314 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 6 09:18:51.122982 kernel: EXT4-fs (vda9): mounted filesystem f1c79682-29a0-4e69-abb7-e74d836aa96b r/w with ordered data mode. Quota mode: none.
Sep 6 09:18:51.123097 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 6 09:18:51.124368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 6 09:18:51.126825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 09:18:51.128489 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 6 09:18:51.129509 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 6 09:18:51.129551 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 09:18:51.129575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 09:18:51.148998 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 6 09:18:51.153636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 6 09:18:51.152392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 6 09:18:51.156650 kernel: BTRFS info (device vda6): first mount of filesystem bf040226-1451-4dec-bd48-6652b943e27f
Sep 6 09:18:51.156670 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 09:18:51.159473 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:18:51.159615 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:18:51.161214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 09:18:51.189824 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 09:18:51.194577 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Sep 6 09:18:51.198867 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 09:18:51.201665 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 09:18:51.270204 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 6 09:18:51.272625 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 6 09:18:51.274323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 6 09:18:51.290968 kernel: BTRFS info (device vda6): last unmount of filesystem bf040226-1451-4dec-bd48-6652b943e27f
Sep 6 09:18:51.301009 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 6 09:18:51.329887 ignition[953]: INFO : Ignition 2.22.0
Sep 6 09:18:51.329887 ignition[953]: INFO : Stage: mount
Sep 6 09:18:51.331231 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:51.331231 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:51.331231 ignition[953]: INFO : mount: mount passed
Sep 6 09:18:51.331231 ignition[953]: INFO : Ignition finished successfully
Sep 6 09:18:51.333575 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 6 09:18:51.335870 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 6 09:18:51.861904 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 6 09:18:51.863392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 09:18:51.891960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Sep 6 09:18:51.893669 kernel: BTRFS info (device vda6): first mount of filesystem bf040226-1451-4dec-bd48-6652b943e27f
Sep 6 09:18:51.893699 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 09:18:51.895960 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:18:51.895976 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:18:51.897202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 09:18:51.936220 ignition[983]: INFO : Ignition 2.22.0
Sep 6 09:18:51.936220 ignition[983]: INFO : Stage: files
Sep 6 09:18:51.938032 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:51.938032 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:51.938032 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 09:18:51.941589 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 09:18:51.941589 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 09:18:51.941589 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 09:18:51.941589 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 09:18:51.941589 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 09:18:51.940665 unknown[983]: wrote ssh authorized keys file for user: core
Sep 6 09:18:51.949709 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 09:18:51.949709 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 6 09:18:51.999473 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 09:18:52.227073 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 6 09:18:53.558163 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 09:18:53.558163 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 09:18:53.562477 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 09:18:53.785634 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 09:18:53.920091 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 09:18:53.920091 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 09:18:53.924058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 09:18:53.946722 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 6 09:18:54.340311 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 6 09:18:55.166174 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 09:18:55.166174 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 6 09:18:55.170235 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 6 09:18:55.212405 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 09:18:55.227035 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 09:18:55.230285 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 09:18:55.233087 ignition[983]: INFO : files: files passed
Sep 6 09:18:55.233087 ignition[983]: INFO : Ignition finished successfully
Sep 6 09:18:55.236002 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 6 09:18:55.238910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 6 09:18:55.240934 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 6 09:18:55.249823 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 09:18:55.250870 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
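Everything the files stage logged above — written files, a symlink, and systemd unit presets — is driven by the Ignition config whose SHA512 digest was logged earlier during fetch-offline. The actual config this VM booted with is not recoverable from the journal; as a hedged illustration only, a minimal config of roughly that shape (field names per Ignition's v3 config format, all values made up) and the kind of digest Ignition logs:

```python
import hashlib
import json

# Illustrative only: the real config behind the SHA512 logged during
# fetch-offline cannot be reconstructed from its digest. Field names
# follow Ignition's v3 config format; version and contents are made up.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [{
            "path": "/home/core/install.sh",
            "mode": 0o755,
            "contents": {"source": "data:,echo%20hello"},
        }],
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True}],
    },
}

blob = json.dumps(config, sort_keys=True)
parsed = json.loads(blob)
# Ignition logs a fingerprint like "parsing config with SHA512: ..." above:
digest = hashlib.sha512(blob.encode()).hexdigest()
```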
Sep 6 09:18:55.253271 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 6 09:18:55.255182 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:18:55.255182 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:18:55.257629 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:18:55.257350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 09:18:55.259005 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 6 09:18:55.261632 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 6 09:18:55.290714 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 09:18:55.290808 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 6 09:18:55.292862 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 6 09:18:55.294702 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 6 09:18:55.296390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 6 09:18:55.297067 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 6 09:18:55.333007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 09:18:55.334998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 6 09:18:55.355565 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 6 09:18:55.356841 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:18:55.359063 systemd[1]: Stopped target timers.target - Timer Units.
Sep 6 09:18:55.360930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 09:18:55.361060 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 09:18:55.363738 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 6 09:18:55.365831 systemd[1]: Stopped target basic.target - Basic System.
Sep 6 09:18:55.367557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 6 09:18:55.369284 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 09:18:55.371238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 6 09:18:55.373190 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 6 09:18:55.375083 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 6 09:18:55.377057 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 09:18:55.379138 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 6 09:18:55.381081 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 6 09:18:55.382940 systemd[1]: Stopped target swap.target - Swaps.
Sep 6 09:18:55.384545 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 09:18:55.384659 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 09:18:55.386978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:18:55.389086 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:18:55.391070 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 6 09:18:55.392049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:18:55.393886 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 09:18:55.393999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 6 09:18:55.396720 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 09:18:55.396884 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 09:18:55.398730 systemd[1]: Stopped target paths.target - Path Units.
Sep 6 09:18:55.400101 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 09:18:55.400234 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:18:55.402079 systemd[1]: Stopped target slices.target - Slice Units.
Sep 6 09:18:55.403495 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 6 09:18:55.405048 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 09:18:55.405160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 09:18:55.407109 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 09:18:55.407221 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 09:18:55.408701 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 09:18:55.408859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 09:18:55.410304 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 09:18:55.410438 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 6 09:18:55.412460 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 6 09:18:55.413902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 6 09:18:55.414653 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 09:18:55.414837 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:18:55.416541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 09:18:55.416685 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 09:18:55.423341 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 09:18:55.425971 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 6 09:18:55.431649 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 09:18:55.434216 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 09:18:55.435013 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 6 09:18:55.440731 ignition[1037]: INFO : Ignition 2.22.0
Sep 6 09:18:55.440731 ignition[1037]: INFO : Stage: umount
Sep 6 09:18:55.443018 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:18:55.443018 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:18:55.443018 ignition[1037]: INFO : umount: umount passed
Sep 6 09:18:55.443018 ignition[1037]: INFO : Ignition finished successfully
Sep 6 09:18:55.443736 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 09:18:55.443825 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 6 09:18:55.445104 systemd[1]: Stopped target network.target - Network.
Sep 6 09:18:55.446799 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 09:18:55.446851 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 6 09:18:55.448480 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 09:18:55.448522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 6 09:18:55.450180 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 09:18:55.450226 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 6 09:18:55.451904 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 6 09:18:55.451942 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 6 09:18:55.453822 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 09:18:55.453868 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 6 09:18:55.455679 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 6 09:18:55.457415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 6 09:18:55.461398 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 09:18:55.461498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 6 09:18:55.465474 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 6 09:18:55.465744 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 6 09:18:55.465780 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 09:18:55.468463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 6 09:18:55.468645 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 09:18:55.468761 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 6 09:18:55.471332 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 6 09:18:55.471706 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 6 09:18:55.472761 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 09:18:55.472798 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:18:55.475101 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 6 09:18:55.476117 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 09:18:55.476184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 09:18:55.478158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 09:18:55.478196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:18:55.480904 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 09:18:55.480942 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:18:55.483835 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 09:18:55.485402 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 09:18:55.496281 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 09:18:55.496378 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 6 09:18:55.501134 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 09:18:55.501249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 09:18:55.502906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 09:18:55.502981 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:18:55.504373 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 09:18:55.504403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:18:55.506141 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 09:18:55.506187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 09:18:55.509096 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 09:18:55.509140 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 6 09:18:55.512041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 09:18:55.512090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 09:18:55.515609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 6 09:18:55.516934 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 6 09:18:55.517000 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:18:55.519867 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 6 09:18:55.519910 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:18:55.522928 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 6 09:18:55.522989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 09:18:55.526308 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 09:18:55.526347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:18:55.528486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 09:18:55.528528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:18:55.545637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 09:18:55.545755 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 6 09:18:55.548048 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 6 09:18:55.550598 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 6 09:18:55.558752 systemd[1]: Switching root.
Sep 6 09:18:55.596816 systemd-journald[244]: Journal stopped
Sep 6 09:18:56.276427 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 6 09:18:56.276472 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 09:18:56.276490 kernel: SELinux: policy capability open_perms=1
Sep 6 09:18:56.276499 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 09:18:56.276511 kernel: SELinux: policy capability always_check_network=0
Sep 6 09:18:56.276520 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 09:18:56.276532 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 09:18:56.276542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 09:18:56.276551 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 09:18:56.276560 kernel: SELinux: policy capability userspace_initial_context=0
Sep 6 09:18:56.276570 systemd[1]: Successfully loaded SELinux policy in 43.014ms.
Sep 6 09:18:56.276584 kernel: audit: type=1403 audit(1757150335.738:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 09:18:56.276598 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.200ms.
Sep 6 09:18:56.276609 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 6 09:18:56.276620 systemd[1]: Detected virtualization kvm.
Sep 6 09:18:56.276631 systemd[1]: Detected architecture arm64.
Sep 6 09:18:56.276641 systemd[1]: Detected first boot.
Sep 6 09:18:56.276651 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 09:18:56.276670 kernel: NET: Registered PF_VSOCK protocol family
Sep 6 09:18:56.276682 zram_generator::config[1083]: No configuration found.
Sep 6 09:18:56.276693 systemd[1]: Populated /etc with preset unit settings.
Sep 6 09:18:56.276704 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 6 09:18:56.276714 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 09:18:56.276725 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 6 09:18:56.276736 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 09:18:56.276746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 6 09:18:56.276756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 6 09:18:56.276766 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 6 09:18:56.276776 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 6 09:18:56.276786 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 6 09:18:56.276796 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 6 09:18:56.276806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 6 09:18:56.276817 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 6 09:18:56.276827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:18:56.276837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:18:56.276847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 6 09:18:56.276857 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 6 09:18:56.276868 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 6 09:18:56.276878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 09:18:56.276888 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 6 09:18:56.276898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:18:56.276909 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:18:56.276919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 6 09:18:56.276929 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 6 09:18:56.276939 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 6 09:18:56.276967 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 6 09:18:56.276978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:18:56.276988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 09:18:56.276998 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 09:18:56.277010 systemd[1]: Reached target swap.target - Swaps.
Sep 6 09:18:56.277021 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 6 09:18:56.277031 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 6 09:18:56.277041 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 6 09:18:56.277051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:18:56.277061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:18:56.277075 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:18:56.277084 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 6 09:18:56.277094 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 6 09:18:56.277106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 6 09:18:56.277116 systemd[1]: Mounting media.mount - External Media Directory...
Sep 6 09:18:56.277127 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 6 09:18:56.277137 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 6 09:18:56.277147 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 6 09:18:56.277157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 09:18:56.277168 systemd[1]: Reached target machines.target - Containers.
Sep 6 09:18:56.277177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 6 09:18:56.277189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 09:18:56.277202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 09:18:56.277212 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 6 09:18:56.277223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 09:18:56.277233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 6 09:18:56.277243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 09:18:56.277253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 6 09:18:56.277263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 09:18:56.277273 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 09:18:56.277285 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 09:18:56.277295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 6 09:18:56.277305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 09:18:56.277315 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 09:18:56.277326 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 6 09:18:56.277336 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 09:18:56.277346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 09:18:56.277356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 6 09:18:56.277367 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 6 09:18:56.277376 kernel: ACPI: bus type drm_connector registered
Sep 6 09:18:56.277386 kernel: loop: module loaded
Sep 6 09:18:56.277395 kernel: fuse: init (API version 7.41)
Sep 6 09:18:56.277405 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 6 09:18:56.277414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 09:18:56.277426 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 09:18:56.277435 systemd[1]: Stopped verity-setup.service.
Sep 6 09:18:56.277446 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 6 09:18:56.277456 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 6 09:18:56.277466 systemd[1]: Mounted media.mount - External Media Directory.
Sep 6 09:18:56.277496 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 6 09:18:56.277522 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 6 09:18:56.277534 systemd-journald[1158]: Journal started
Sep 6 09:18:56.277554 systemd-journald[1158]: Runtime Journal (/run/log/journal/0a23a579ba834721ad9cc94732cd1ebe) is 6M, max 48.5M, 42.4M free.
Sep 6 09:18:56.080272 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 09:18:56.102793 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 6 09:18:56.103169 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 09:18:56.279470 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 6 09:18:56.281204 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 09:18:56.281841 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 6 09:18:56.283209 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 6 09:18:56.285994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:18:56.287484 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 09:18:56.287642 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 6 09:18:56.289146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 09:18:56.289330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 09:18:56.290724 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 09:18:56.290896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 6 09:18:56.292414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 09:18:56.292566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 09:18:56.294090 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 09:18:56.294248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 6 09:18:56.295714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 09:18:56.295865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 09:18:56.297286 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:18:56.299984 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:18:56.301519 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 6 09:18:56.303285 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 6 09:18:56.314764 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 6 09:18:56.317044 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 6 09:18:56.318693 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 6 09:18:56.319651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 09:18:56.319695 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 09:18:56.321415 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 6 09:18:56.332752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 6 09:18:56.333822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 09:18:56.334820 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 6 09:18:56.336594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 6 09:18:56.337751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 09:18:56.340737 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 6 09:18:56.342213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 6 09:18:56.343307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:18:56.345768 systemd-journald[1158]: Time spent on flushing to /var/log/journal/0a23a579ba834721ad9cc94732cd1ebe is 29.269ms for 890 entries.
Sep 6 09:18:56.345768 systemd-journald[1158]: System Journal (/var/log/journal/0a23a579ba834721ad9cc94732cd1ebe) is 8M, max 195.6M, 187.6M free.
Sep 6 09:18:56.387690 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 6 09:18:56.387776 kernel: loop0: detected capacity change from 0 to 100632
Sep 6 09:18:56.347094 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 6 09:18:56.350186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 09:18:56.353370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:18:56.356193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 6 09:18:56.357162 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 6 09:18:56.358427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 6 09:18:56.361681 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 6 09:18:56.365281 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 6 09:18:56.377877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:18:56.381330 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 6 09:18:56.381343 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 6 09:18:56.386389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 09:18:56.390976 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 6 09:18:56.393992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 09:18:56.395822 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 6 09:18:56.405141 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 6 09:18:56.409983 kernel: loop1: detected capacity change from 0 to 211168
Sep 6 09:18:56.425006 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 6 09:18:56.427514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 09:18:56.441155 kernel: loop2: detected capacity change from 0 to 119368
Sep 6 09:18:56.449841 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 6 09:18:56.450044 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 6 09:18:56.453524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:18:56.491985 kernel: loop3: detected capacity change from 0 to 100632
Sep 6 09:18:56.497973 kernel: loop4: detected capacity change from 0 to 211168
Sep 6 09:18:56.503968 kernel: loop5: detected capacity change from 0 to 119368
Sep 6 09:18:56.508452 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 6 09:18:56.508841 (sd-merge)[1225]: Merged extensions into '/usr'.
Sep 6 09:18:56.512130 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 6 09:18:56.512149 systemd[1]: Reloading...
Sep 6 09:18:56.552970 zram_generator::config[1250]: No configuration found.
Sep 6 09:18:56.601668 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 09:18:56.697917 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 09:18:56.698105 systemd[1]: Reloading finished in 185 ms.
Sep 6 09:18:56.729465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 6 09:18:56.730991 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 6 09:18:56.744062 systemd[1]: Starting ensure-sysext.service...
Sep 6 09:18:56.745832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 09:18:56.754581 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)...
Sep 6 09:18:56.754600 systemd[1]: Reloading...
Sep 6 09:18:56.758112 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 6 09:18:56.758285 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 6 09:18:56.758516 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 09:18:56.758730 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 6 09:18:56.759342 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 09:18:56.759540 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Sep 6 09:18:56.759587 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Sep 6 09:18:56.762063 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 09:18:56.762076 systemd-tmpfiles[1288]: Skipping /boot
Sep 6 09:18:56.767673 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 09:18:56.767687 systemd-tmpfiles[1288]: Skipping /boot
Sep 6 09:18:56.792993 zram_generator::config[1315]: No configuration found.
Sep 6 09:18:56.918724 systemd[1]: Reloading finished in 163 ms.
Sep 6 09:18:56.939663 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 6 09:18:56.946432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 09:18:56.955878 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 6 09:18:56.958003 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 6 09:18:56.960107 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 6 09:18:56.964091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 09:18:56.966040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 09:18:56.968135 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 6 09:18:56.973400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 09:18:56.978838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 09:18:56.982087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 09:18:56.984667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 09:18:56.987198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 09:18:56.987320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 6 09:18:56.988318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 6 09:18:56.998065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 6 09:18:57.000028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 09:18:57.001980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 09:18:57.003727 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 6 09:18:57.006663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 09:18:57.006819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 09:18:57.007129 augenrules[1380]: No rules
Sep 6 09:18:57.008732 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 6 09:18:57.008929 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 6 09:18:57.010336 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 09:18:57.010516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 09:18:57.013586 systemd-udevd[1361]: Using default interface naming scheme 'v255'.
Sep 6 09:18:57.019776 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 6 09:18:57.021113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 09:18:57.022172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 09:18:57.031339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 6 09:18:57.033082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 09:18:57.035499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 09:18:57.036957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 09:18:57.037080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 6 09:18:57.038478 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 6 09:18:57.045203 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 6 09:18:57.046009 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 09:18:57.047278 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 09:18:57.050515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 09:18:57.050708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 09:18:57.052454 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 09:18:57.054016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 6 09:18:57.056164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 09:18:57.056337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 09:18:57.059556 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 09:18:57.065394 augenrules[1391]: /sbin/augenrules: No change
Sep 6 09:18:57.068463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 09:18:57.076254 systemd[1]: Finished ensure-sysext.service.
Sep 6 09:18:57.080636 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 6 09:18:57.094121 augenrules[1447]: No rules
Sep 6 09:18:57.095388 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 09:18:57.096337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 09:18:57.096404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 6 09:18:57.098520 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 6 09:18:57.100104 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 09:18:57.101985 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 6 09:18:57.103347 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 6 09:18:57.119272 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 6 09:18:57.200372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 6 09:18:57.206960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 6 09:18:57.210468 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 6 09:18:57.210942 systemd-networkd[1452]: lo: Link UP Sep 6 09:18:57.210975 systemd-networkd[1452]: lo: Gained carrier Sep 6 09:18:57.211632 systemd[1]: Reached target time-set.target - System Time Set. Sep 6 09:18:57.211823 systemd-networkd[1452]: Enumeration completed Sep 6 09:18:57.212245 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 09:18:57.212256 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 09:18:57.212479 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 09:18:57.213287 systemd-networkd[1452]: eth0: Link UP Sep 6 09:18:57.213415 systemd-networkd[1452]: eth0: Gained carrier Sep 6 09:18:57.213435 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 09:18:57.218037 systemd-resolved[1355]: Positive Trust Anchors: Sep 6 09:18:57.218055 systemd-resolved[1355]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 09:18:57.218087 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 09:18:57.223767 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 6 09:18:57.226307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 6 09:18:57.226997 systemd-networkd[1452]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 09:18:57.227593 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection. Sep 6 09:18:57.227783 systemd-resolved[1355]: Defaulting to hostname 'linux'. Sep 6 09:18:57.229199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 09:18:57.229902 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 09:18:57.229962 systemd-timesyncd[1453]: Initial clock synchronization to Sat 2025-09-06 09:18:57.586574 UTC. Sep 6 09:18:57.230396 systemd[1]: Reached target network.target - Network. Sep 6 09:18:57.231290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 6 09:18:57.234164 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 09:18:57.235309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
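systemd-networkd logged a DHCPv4 lease of 10.0.0.10/16 with gateway 10.0.0.1 for eth0. A quick sanity check of that lease with the standard-library `ipaddress` module, using the values as logged:

```python
import ipaddress

# Values as logged by systemd-networkd for eth0.
iface = ipaddress.ip_interface("10.0.0.10/16")
gateway = ipaddress.ip_address("10.0.0.1")

assert gateway in iface.network            # gateway must be reachable on-link
print(iface.network)                       # 10.0.0.0/16
print(iface.network.broadcast_address)     # 10.0.255.255
```

Note that 10.in-addr.arpa is among the negative trust anchors listed above, so reverse lookups for this RFC 1918 range stay local.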
Sep 6 09:18:57.236699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 6 09:18:57.238146 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 6 09:18:57.241187 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 6 09:18:57.242496 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 6 09:18:57.243805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 09:18:57.243839 systemd[1]: Reached target paths.target - Path Units. Sep 6 09:18:57.244834 systemd[1]: Reached target timers.target - Timer Units. Sep 6 09:18:57.246337 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 6 09:18:57.248541 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 6 09:18:57.250887 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 6 09:18:57.253225 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 6 09:18:57.254189 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 6 09:18:57.258514 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 6 09:18:57.260313 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 6 09:18:57.264980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 6 09:18:57.266519 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 6 09:18:57.268501 systemd[1]: Reached target sockets.target - Socket Units. Sep 6 09:18:57.269973 systemd[1]: Reached target basic.target - Basic System. 
Sep 6 09:18:57.271009 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 6 09:18:57.271035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 6 09:18:57.272773 systemd[1]: Starting containerd.service - containerd container runtime... Sep 6 09:18:57.274829 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 6 09:18:57.277675 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 6 09:18:57.280130 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 6 09:18:57.281788 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 6 09:18:57.284107 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 6 09:18:57.286464 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 6 09:18:57.295190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 6 09:18:57.298292 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 6 09:18:57.301111 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 6 09:18:57.304113 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 6 09:18:57.306014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 09:18:57.306411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 09:18:57.308090 systemd[1]: Starting update-engine.service - Update Engine... Sep 6 09:18:57.310110 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 6 09:18:57.312163 jq[1495]: false Sep 6 09:18:57.313022 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 6 09:18:57.320052 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 6 09:18:57.320397 extend-filesystems[1496]: Found /dev/vda6 Sep 6 09:18:57.322103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 09:18:57.322404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 6 09:18:57.323188 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 09:18:57.323368 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 6 09:18:57.326172 extend-filesystems[1496]: Found /dev/vda9 Sep 6 09:18:57.328256 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 09:18:57.328447 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 6 09:18:57.330146 extend-filesystems[1496]: Checking size of /dev/vda9 Sep 6 09:18:57.340522 jq[1508]: true Sep 6 09:18:57.342166 extend-filesystems[1496]: Resized partition /dev/vda9 Sep 6 09:18:57.349892 extend-filesystems[1535]: resize2fs 1.47.3 (8-Jul-2025) Sep 6 09:18:57.354570 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 6 09:18:57.355230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 09:18:57.356978 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 09:18:57.357104 jq[1533]: true Sep 6 09:18:57.367961 tar[1518]: linux-arm64/LICENSE Sep 6 09:18:57.367961 tar[1518]: linux-arm64/helm Sep 6 09:18:57.380275 dbus-daemon[1493]: [system] SELinux support is enabled Sep 6 09:18:57.382968 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 6 09:18:57.386711 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 09:18:57.387246 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 09:18:57.387273 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 6 09:18:57.389120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 09:18:57.389144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 6 09:18:57.390337 systemd-logind[1503]: New seat seat0. Sep 6 09:18:57.392204 systemd[1]: Started systemd-logind.service - User Login Management. Sep 6 09:18:57.402673 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 09:18:57.418263 update_engine[1505]: I20250906 09:18:57.407027 1505 main.cc:92] Flatcar Update Engine starting Sep 6 09:18:57.418263 update_engine[1505]: I20250906 09:18:57.414003 1505 update_check_scheduler.cc:74] Next update check in 3m45s Sep 6 09:18:57.414013 systemd[1]: Started update-engine.service - Update Engine. Sep 6 09:18:57.422462 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 6 09:18:57.423692 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 09:18:57.423692 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 09:18:57.423692 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 09:18:57.435515 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Sep 6 09:18:57.432821 systemd[1]: extend-filesystems.service: Deactivated successfully. 
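The resize2fs output above grows /dev/vda9 online from 553472 to 1864699 4k blocks. The block counts translate to byte sizes as follows:

```python
BLOCK = 4096  # "(4k) blocks" per the resize2fs message above
old_blocks, new_blocks = 553472, 1864699

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
print(old_bytes, new_bytes)  # 2267021312 7637807104
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```

That is roughly a 2.1 GiB root filesystem expanded to about 7.1 GiB, consistent with "on-line resizing required" since / is mounted.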
Sep 6 09:18:57.433138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 6 09:18:57.439459 bash[1556]: Updated "/home/core/.ssh/authorized_keys" Sep 6 09:18:57.469618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 6 09:18:57.473599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 09:18:57.484837 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 6 09:18:57.501056 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 09:18:57.547966 containerd[1529]: time="2025-09-06T09:18:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 6 09:18:57.548523 containerd[1529]: time="2025-09-06T09:18:57.548485320Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 6 09:18:57.558507 containerd[1529]: time="2025-09-06T09:18:57.558459240Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.72µs" Sep 6 09:18:57.558507 containerd[1529]: time="2025-09-06T09:18:57.558495800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 6 09:18:57.558507 containerd[1529]: time="2025-09-06T09:18:57.558514520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 6 09:18:57.558702 containerd[1529]: time="2025-09-06T09:18:57.558679480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 6 09:18:57.558734 containerd[1529]: time="2025-09-06T09:18:57.558701880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 6 09:18:57.558734 
containerd[1529]: time="2025-09-06T09:18:57.558728320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 6 09:18:57.558809 containerd[1529]: time="2025-09-06T09:18:57.558789760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 6 09:18:57.558809 containerd[1529]: time="2025-09-06T09:18:57.558805560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559064 containerd[1529]: time="2025-09-06T09:18:57.559037680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559064 containerd[1529]: time="2025-09-06T09:18:57.559059760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559110 containerd[1529]: time="2025-09-06T09:18:57.559071760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559110 containerd[1529]: time="2025-09-06T09:18:57.559079640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559176 containerd[1529]: time="2025-09-06T09:18:57.559159240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559371 containerd[1529]: time="2025-09-06T09:18:57.559344360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559435 containerd[1529]: time="2025-09-06T09:18:57.559377280Z" level=info msg="skip 
loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 6 09:18:57.559435 containerd[1529]: time="2025-09-06T09:18:57.559387080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 6 09:18:57.559435 containerd[1529]: time="2025-09-06T09:18:57.559423960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 6 09:18:57.560047 containerd[1529]: time="2025-09-06T09:18:57.560018760Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 6 09:18:57.560136 containerd[1529]: time="2025-09-06T09:18:57.560117320Z" level=info msg="metadata content store policy set" policy=shared Sep 6 09:18:57.564570 containerd[1529]: time="2025-09-06T09:18:57.564526760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 6 09:18:57.564645 containerd[1529]: time="2025-09-06T09:18:57.564591200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 6 09:18:57.564645 containerd[1529]: time="2025-09-06T09:18:57.564605760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 6 09:18:57.564645 containerd[1529]: time="2025-09-06T09:18:57.564617920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 6 09:18:57.564645 containerd[1529]: time="2025-09-06T09:18:57.564629560Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 6 09:18:57.564645 containerd[1529]: time="2025-09-06T09:18:57.564640000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 6 09:18:57.564754 containerd[1529]: 
time="2025-09-06T09:18:57.564650560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 6 09:18:57.564754 containerd[1529]: time="2025-09-06T09:18:57.564672240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 6 09:18:57.564754 containerd[1529]: time="2025-09-06T09:18:57.564685600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 6 09:18:57.564754 containerd[1529]: time="2025-09-06T09:18:57.564695920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 6 09:18:57.564754 containerd[1529]: time="2025-09-06T09:18:57.564705000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 6 09:18:57.564754 containerd[1529]: time="2025-09-06T09:18:57.564717560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 6 09:18:57.564851 containerd[1529]: time="2025-09-06T09:18:57.564833400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 6 09:18:57.564870 containerd[1529]: time="2025-09-06T09:18:57.564852680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 6 09:18:57.564870 containerd[1529]: time="2025-09-06T09:18:57.564867040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 6 09:18:57.564901 containerd[1529]: time="2025-09-06T09:18:57.564878320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 6 09:18:57.564901 containerd[1529]: time="2025-09-06T09:18:57.564888760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 6 09:18:57.564901 containerd[1529]: time="2025-09-06T09:18:57.564898960Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 6 09:18:57.564968 containerd[1529]: time="2025-09-06T09:18:57.564910120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 6 09:18:57.564968 containerd[1529]: time="2025-09-06T09:18:57.564920200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 6 09:18:57.564968 containerd[1529]: time="2025-09-06T09:18:57.564938480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 6 09:18:57.565025 containerd[1529]: time="2025-09-06T09:18:57.564986920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 6 09:18:57.565025 containerd[1529]: time="2025-09-06T09:18:57.565000880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 6 09:18:57.565202 containerd[1529]: time="2025-09-06T09:18:57.565181760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 6 09:18:57.565237 containerd[1529]: time="2025-09-06T09:18:57.565203680Z" level=info msg="Start snapshots syncer" Sep 6 09:18:57.565237 containerd[1529]: time="2025-09-06T09:18:57.565230360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 6 09:18:57.565465 containerd[1529]: time="2025-09-06T09:18:57.565429560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 6 09:18:57.565579 containerd[1529]: time="2025-09-06T09:18:57.565479640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 6 09:18:57.565579 containerd[1529]: time="2025-09-06T09:18:57.565550960Z" level=info 
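The "starting cri plugin" entry above dumps the plugin's effective configuration as JSON with escaped quotes. Once unescaped it parses as ordinary JSON; a sketch using a small excerpt whose keys and values appear verbatim in that dump:

```python
import json

# Excerpt of the cri plugin config dump above, with the log's \" escapes undone.
excerpt = ('{"enableSelinux":true,"selinuxCategoryRange":1024,'
           '"maxContainerLogSize":16384,"enableCDI":true,'
           '"cdiSpecDirs":["/etc/cdi","/var/run/cdi"]}')

cfg = json.loads(excerpt)
print(cfg["maxContainerLogSize"])  # 16384
print(cfg["cdiSpecDirs"])          # ['/etc/cdi', '/var/run/cdi']
```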
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 6 09:18:57.565682 containerd[1529]: time="2025-09-06T09:18:57.565647240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 6 09:18:57.565715 containerd[1529]: time="2025-09-06T09:18:57.565697520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 6 09:18:57.565715 containerd[1529]: time="2025-09-06T09:18:57.565710480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 6 09:18:57.565749 containerd[1529]: time="2025-09-06T09:18:57.565720880Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 6 09:18:57.565749 containerd[1529]: time="2025-09-06T09:18:57.565732840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 6 09:18:57.565749 containerd[1529]: time="2025-09-06T09:18:57.565743600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565754720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565779560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565791000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565801200Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565841960Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565856400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565865840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565874720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565882960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565897120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 6 09:18:57.565972 containerd[1529]: time="2025-09-06T09:18:57.565907280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 6 09:18:57.566152 containerd[1529]: time="2025-09-06T09:18:57.566003800Z" level=info msg="runtime interface created" Sep 6 09:18:57.566152 containerd[1529]: time="2025-09-06T09:18:57.566010640Z" level=info msg="created NRI interface" Sep 6 09:18:57.566152 containerd[1529]: time="2025-09-06T09:18:57.566021400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 6 09:18:57.566152 containerd[1529]: time="2025-09-06T09:18:57.566033400Z" level=info msg="Connect containerd service" Sep 6 09:18:57.566152 containerd[1529]: time="2025-09-06T09:18:57.566062680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 6 09:18:57.566764 containerd[1529]: 
time="2025-09-06T09:18:57.566731360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 09:18:57.631287 containerd[1529]: time="2025-09-06T09:18:57.631210840Z" level=info msg="Start subscribing containerd event" Sep 6 09:18:57.631412 containerd[1529]: time="2025-09-06T09:18:57.631301040Z" level=info msg="Start recovering state" Sep 6 09:18:57.631481 containerd[1529]: time="2025-09-06T09:18:57.631451080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 09:18:57.631514 containerd[1529]: time="2025-09-06T09:18:57.631502480Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 09:18:57.631797 containerd[1529]: time="2025-09-06T09:18:57.631766640Z" level=info msg="Start event monitor" Sep 6 09:18:57.631830 containerd[1529]: time="2025-09-06T09:18:57.631799840Z" level=info msg="Start cni network conf syncer for default" Sep 6 09:18:57.631830 containerd[1529]: time="2025-09-06T09:18:57.631810040Z" level=info msg="Start streaming server" Sep 6 09:18:57.631830 containerd[1529]: time="2025-09-06T09:18:57.631819280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 6 09:18:57.631830 containerd[1529]: time="2025-09-06T09:18:57.631825760Z" level=info msg="runtime interface starting up..." Sep 6 09:18:57.632013 containerd[1529]: time="2025-09-06T09:18:57.631997680Z" level=info msg="starting plugins..." Sep 6 09:18:57.632044 containerd[1529]: time="2025-09-06T09:18:57.632021120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 6 09:18:57.632173 containerd[1529]: time="2025-09-06T09:18:57.632155920Z" level=info msg="containerd successfully booted in 0.084560s" Sep 6 09:18:57.632277 systemd[1]: Started containerd.service - containerd container runtime. 
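The error above ("no network config found in /etc/cni/net.d") is expected at this point: the CRI plugin retries once a CNI conflist appears. As a hypothetical illustration only, a common bridge-style conflist shape that would satisfy that check; the network name, bridge name, and subnet below are invented for the sketch and are not taken from this system:

```python
import json

# Hypothetical /etc/cni/net.d/10-mynet.conflist contents (names/subnet invented).
conflist = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))
```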
Sep 6 09:18:57.710271 tar[1518]: linux-arm64/README.md Sep 6 09:18:57.726238 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 6 09:18:58.371178 systemd-networkd[1452]: eth0: Gained IPv6LL Sep 6 09:18:58.373586 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 6 09:18:58.375620 systemd[1]: Reached target network-online.target - Network is Online. Sep 6 09:18:58.378320 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 6 09:18:58.380804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:18:58.383221 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 6 09:18:58.406458 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 6 09:18:58.408769 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 6 09:18:58.408973 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 6 09:18:58.411675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 6 09:18:58.578428 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 09:18:58.601051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 6 09:18:58.604457 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 6 09:18:58.622843 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 09:18:58.623093 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 6 09:18:58.627008 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 6 09:18:58.647044 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 6 09:18:58.650558 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 6 09:18:58.653818 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 6 09:18:58.655428 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 6 09:18:58.966447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:18:58.968180 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 6 09:18:58.969537 systemd[1]: Startup finished in 1.986s (kernel) + 7.146s (initrd) + 3.274s (userspace) = 12.406s. Sep 6 09:18:58.970456 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 09:18:59.329926 kubelet[1634]: E0906 09:18:59.329803 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 09:18:59.332262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 09:18:59.332398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 09:18:59.332696 systemd[1]: kubelet.service: Consumed 754ms CPU time, 258.8M memory peak. Sep 6 09:19:01.125321 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 6 09:19:01.126348 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:57730.service - OpenSSH per-connection server daemon (10.0.0.1:57730). Sep 6 09:19:01.203299 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 57730 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:01.205068 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:01.210999 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 6 09:19:01.211884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 6 09:19:01.218281 systemd-logind[1503]: New session 1 of user core. 
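The "Startup finished" entry above breaks boot time into kernel, initrd, and userspace stages, which should sum to the stated total. A sketch parsing that line (the string is taken from the log):

```python
import re

line = ("Startup finished in 1.986s (kernel) + 7.146s (initrd) "
        "+ 3.274s (userspace) = 12.406s.")

parts = [float(s) for s in re.findall(r"([\d.]+)s", line)]
*stages, total = parts
print(stages, total)                   # [1.986, 7.146, 3.274] 12.406
assert round(sum(stages), 3) == total  # stages account for the whole boot
```

The kubelet failure that follows is also self-explanatory: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits 1 until the node is bootstrapped.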
Sep 6 09:19:01.234449 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 6 09:19:01.237031 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 6 09:19:01.257054 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 09:19:01.259366 systemd-logind[1503]: New session c1 of user core. Sep 6 09:19:01.369804 systemd[1652]: Queued start job for default target default.target. Sep 6 09:19:01.383895 systemd[1652]: Created slice app.slice - User Application Slice. Sep 6 09:19:01.383926 systemd[1652]: Reached target paths.target - Paths. Sep 6 09:19:01.383965 systemd[1652]: Reached target timers.target - Timers. Sep 6 09:19:01.385217 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 6 09:19:01.394913 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 6 09:19:01.394992 systemd[1652]: Reached target sockets.target - Sockets. Sep 6 09:19:01.395035 systemd[1652]: Reached target basic.target - Basic System. Sep 6 09:19:01.395069 systemd[1652]: Reached target default.target - Main User Target. Sep 6 09:19:01.395095 systemd[1652]: Startup finished in 130ms. Sep 6 09:19:01.395208 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 6 09:19:01.396754 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 6 09:19:01.468204 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:57740.service - OpenSSH per-connection server daemon (10.0.0.1:57740). Sep 6 09:19:01.520824 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 57740 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:01.522241 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:01.526463 systemd-logind[1503]: New session 2 of user core. Sep 6 09:19:01.536139 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 6 09:19:01.587665 sshd[1666]: Connection closed by 10.0.0.1 port 57740 Sep 6 09:19:01.588175 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:01.597055 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:57740.service: Deactivated successfully. Sep 6 09:19:01.600246 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 09:19:01.600864 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Sep 6 09:19:01.603099 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:57744.service - OpenSSH per-connection server daemon (10.0.0.1:57744). Sep 6 09:19:01.604185 systemd-logind[1503]: Removed session 2. Sep 6 09:19:01.661721 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 57744 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:01.662888 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:01.667358 systemd-logind[1503]: New session 3 of user core. Sep 6 09:19:01.676145 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 6 09:19:01.725448 sshd[1675]: Connection closed by 10.0.0.1 port 57744 Sep 6 09:19:01.725842 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:01.740940 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:57744.service: Deactivated successfully. Sep 6 09:19:01.742314 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 09:19:01.742959 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. Sep 6 09:19:01.745123 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:57758.service - OpenSSH per-connection server daemon (10.0.0.1:57758). Sep 6 09:19:01.746301 systemd-logind[1503]: Removed session 3. 
Sep 6 09:19:01.811420 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 57758 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:01.813197 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:01.818003 systemd-logind[1503]: New session 4 of user core. Sep 6 09:19:01.826131 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 6 09:19:01.880368 sshd[1684]: Connection closed by 10.0.0.1 port 57758 Sep 6 09:19:01.881668 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:01.894328 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:57758.service: Deactivated successfully. Sep 6 09:19:01.895968 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 09:19:01.896818 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. Sep 6 09:19:01.898871 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:57762.service - OpenSSH per-connection server daemon (10.0.0.1:57762). Sep 6 09:19:01.899834 systemd-logind[1503]: Removed session 4. Sep 6 09:19:01.955620 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 57762 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:01.956917 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:01.961141 systemd-logind[1503]: New session 5 of user core. Sep 6 09:19:01.972201 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 6 09:19:02.030072 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 09:19:02.030332 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 09:19:02.042799 sudo[1694]: pam_unix(sudo:session): session closed for user root Sep 6 09:19:02.045023 sshd[1693]: Connection closed by 10.0.0.1 port 57762 Sep 6 09:19:02.045106 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:02.060259 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:57762.service: Deactivated successfully. Sep 6 09:19:02.063363 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 09:19:02.064198 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. Sep 6 09:19:02.066624 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:57776.service - OpenSSH per-connection server daemon (10.0.0.1:57776). Sep 6 09:19:02.067105 systemd-logind[1503]: Removed session 5. Sep 6 09:19:02.140121 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 57776 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:02.141345 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:02.145008 systemd-logind[1503]: New session 6 of user core. Sep 6 09:19:02.162163 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 6 09:19:02.213884 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 09:19:02.214754 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 09:19:02.324854 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 6 09:19:02.330213 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 6 09:19:02.330488 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 09:19:02.339981 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 6 09:19:02.379149 augenrules[1727]: No rules Sep 6 09:19:02.380400 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 09:19:02.380600 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 6 09:19:02.381448 sudo[1704]: pam_unix(sudo:session): session closed for user root Sep 6 09:19:02.382563 sshd[1703]: Connection closed by 10.0.0.1 port 57776 Sep 6 09:19:02.382889 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:02.392798 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:57776.service: Deactivated successfully. Sep 6 09:19:02.395333 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 09:19:02.396043 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Sep 6 09:19:02.398087 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). Sep 6 09:19:02.399065 systemd-logind[1503]: Removed session 6. Sep 6 09:19:02.451201 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:02.452507 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:02.457054 systemd-logind[1503]: New session 7 of user core. 
Sep 6 09:19:02.469133 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 6 09:19:02.521202 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 09:19:02.521478 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 09:19:02.815657 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 6 09:19:02.832310 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 6 09:19:03.036271 dockerd[1761]: time="2025-09-06T09:19:03.036204315Z" level=info msg="Starting up" Sep 6 09:19:03.037116 dockerd[1761]: time="2025-09-06T09:19:03.037094213Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 6 09:19:03.048782 dockerd[1761]: time="2025-09-06T09:19:03.048735432Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 6 09:19:03.247911 dockerd[1761]: time="2025-09-06T09:19:03.247796450Z" level=info msg="Loading containers: start." Sep 6 09:19:03.256184 kernel: Initializing XFRM netlink socket Sep 6 09:19:03.469520 systemd-networkd[1452]: docker0: Link UP Sep 6 09:19:03.473716 dockerd[1761]: time="2025-09-06T09:19:03.473672126Z" level=info msg="Loading containers: done." Sep 6 09:19:03.484974 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1229172373-merged.mount: Deactivated successfully. 
Sep 6 09:19:03.489606 dockerd[1761]: time="2025-09-06T09:19:03.489288632Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 09:19:03.489606 dockerd[1761]: time="2025-09-06T09:19:03.489366125Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 6 09:19:03.489606 dockerd[1761]: time="2025-09-06T09:19:03.489456138Z" level=info msg="Initializing buildkit" Sep 6 09:19:03.512207 dockerd[1761]: time="2025-09-06T09:19:03.512114664Z" level=info msg="Completed buildkit initialization" Sep 6 09:19:03.517121 dockerd[1761]: time="2025-09-06T09:19:03.517090361Z" level=info msg="Daemon has completed initialization" Sep 6 09:19:03.517375 dockerd[1761]: time="2025-09-06T09:19:03.517265231Z" level=info msg="API listen on /run/docker.sock" Sep 6 09:19:03.517569 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 6 09:19:04.480810 containerd[1529]: time="2025-09-06T09:19:04.480755053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 6 09:19:05.142602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067582489.mount: Deactivated successfully. 
Sep 6 09:19:06.209990 containerd[1529]: time="2025-09-06T09:19:06.209207188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:06.210637 containerd[1529]: time="2025-09-06T09:19:06.210615481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 6 09:19:06.211476 containerd[1529]: time="2025-09-06T09:19:06.211446623Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:06.214890 containerd[1529]: time="2025-09-06T09:19:06.214861598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:06.215790 containerd[1529]: time="2025-09-06T09:19:06.215767875Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.734972264s"
Sep 6 09:19:06.215842 containerd[1529]: time="2025-09-06T09:19:06.215798253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 6 09:19:06.217091 containerd[1529]: time="2025-09-06T09:19:06.217035809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 6 09:19:07.521074 containerd[1529]: time="2025-09-06T09:19:07.521030372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:07.521743 containerd[1529]: time="2025-09-06T09:19:07.521429492Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 6 09:19:07.525806 containerd[1529]: time="2025-09-06T09:19:07.525772426Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:07.529292 containerd[1529]: time="2025-09-06T09:19:07.529250804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:07.530296 containerd[1529]: time="2025-09-06T09:19:07.530268021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.313174801s"
Sep 6 09:19:07.530369 containerd[1529]: time="2025-09-06T09:19:07.530300085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 6 09:19:07.530867 containerd[1529]: time="2025-09-06T09:19:07.530825597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 6 09:19:09.094186 containerd[1529]: time="2025-09-06T09:19:09.094111637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:09.111628 containerd[1529]: time="2025-09-06T09:19:09.111552158Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 6 09:19:09.113006 containerd[1529]: time="2025-09-06T09:19:09.112737481Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:09.116298 containerd[1529]: time="2025-09-06T09:19:09.116231178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:09.117176 containerd[1529]: time="2025-09-06T09:19:09.117150802Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.586291588s"
Sep 6 09:19:09.117230 containerd[1529]: time="2025-09-06T09:19:09.117181433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 6 09:19:09.117997 containerd[1529]: time="2025-09-06T09:19:09.117968914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 6 09:19:09.584138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 09:19:09.585580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:19:09.727798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:19:09.731613 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 09:19:09.772008 kubelet[2051]: E0906 09:19:09.771940 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 09:19:09.776155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 09:19:09.776284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 09:19:09.778080 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.4M memory peak.
Sep 6 09:19:10.199034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798030259.mount: Deactivated successfully.
Sep 6 09:19:10.612421 containerd[1529]: time="2025-09-06T09:19:10.612307531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:10.613316 containerd[1529]: time="2025-09-06T09:19:10.613088725Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 6 09:19:10.614094 containerd[1529]: time="2025-09-06T09:19:10.614062514Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:10.616314 containerd[1529]: time="2025-09-06T09:19:10.616284856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:10.616784 containerd[1529]: time="2025-09-06T09:19:10.616763719Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.498764507s"
Sep 6 09:19:10.616945 containerd[1529]: time="2025-09-06T09:19:10.616852025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 6 09:19:10.617320 containerd[1529]: time="2025-09-06T09:19:10.617288752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 6 09:19:11.157921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967383808.mount: Deactivated successfully.
Sep 6 09:19:11.867972 containerd[1529]: time="2025-09-06T09:19:11.867276597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:11.868304 containerd[1529]: time="2025-09-06T09:19:11.867988670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 6 09:19:11.870311 containerd[1529]: time="2025-09-06T09:19:11.868827611Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:11.872010 containerd[1529]: time="2025-09-06T09:19:11.871981282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:11.873400 containerd[1529]: time="2025-09-06T09:19:11.873357050Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.256009088s"
Sep 6 09:19:11.873400 containerd[1529]: time="2025-09-06T09:19:11.873395873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 6 09:19:11.874213 containerd[1529]: time="2025-09-06T09:19:11.874195024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 09:19:12.367002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989852617.mount: Deactivated successfully.
Sep 6 09:19:12.371134 containerd[1529]: time="2025-09-06T09:19:12.371099237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:19:12.371691 containerd[1529]: time="2025-09-06T09:19:12.371667236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 6 09:19:12.373002 containerd[1529]: time="2025-09-06T09:19:12.372480752Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:19:12.374464 containerd[1529]: time="2025-09-06T09:19:12.374422090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:19:12.375268 containerd[1529]: time="2025-09-06T09:19:12.374948123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.726378ms"
Sep 6 09:19:12.375268 containerd[1529]: time="2025-09-06T09:19:12.374991821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 09:19:12.375426 containerd[1529]: time="2025-09-06T09:19:12.375394692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 6 09:19:12.791665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935115254.mount: Deactivated successfully.
Sep 6 09:19:14.535981 containerd[1529]: time="2025-09-06T09:19:14.535515417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:14.536361 containerd[1529]: time="2025-09-06T09:19:14.536069237Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 6 09:19:14.537841 containerd[1529]: time="2025-09-06T09:19:14.537798293Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:14.540205 containerd[1529]: time="2025-09-06T09:19:14.540170758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:14.541240 containerd[1529]: time="2025-09-06T09:19:14.541210524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.165789866s"
Sep 6 09:19:14.541295 containerd[1529]: time="2025-09-06T09:19:14.541254112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 6 09:19:19.825267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 09:19:19.826750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:19:19.837500 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 09:19:19.837570 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 09:19:19.837787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:19:19.841560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:19:19.868041 systemd[1]: Reload requested from client PID 2211 ('systemctl') (unit session-7.scope)...
Sep 6 09:19:19.868060 systemd[1]: Reloading...
Sep 6 09:19:19.930989 zram_generator::config[2254]: No configuration found.
Sep 6 09:19:20.122197 systemd[1]: Reloading finished in 253 ms.
Sep 6 09:19:20.194378 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 09:19:20.194452 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 09:19:20.194686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:19:20.194736 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak.
Sep 6 09:19:20.196145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:19:20.318484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:19:20.322589 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 6 09:19:20.353255 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 09:19:20.353255 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 09:19:20.353255 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 09:19:20.353587 kubelet[2299]: I0906 09:19:20.353292 2299 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 09:19:22.080245 kubelet[2299]: I0906 09:19:22.080036 2299 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 6 09:19:22.080245 kubelet[2299]: I0906 09:19:22.080066 2299 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 09:19:22.080579 kubelet[2299]: I0906 09:19:22.080291 2299 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 6 09:19:22.102376 kubelet[2299]: E0906 09:19:22.102333 2299 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 6 09:19:22.103433 kubelet[2299]: I0906 09:19:22.103403 2299 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 09:19:22.109212 kubelet[2299]: I0906 09:19:22.109194 2299 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 6 09:19:22.111918 kubelet[2299]: I0906 09:19:22.111899 2299 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 09:19:22.112223 kubelet[2299]: I0906 09:19:22.112203 2299 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 09:19:22.112357 kubelet[2299]: I0906 09:19:22.112225 2299 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 09:19:22.112441 kubelet[2299]: I0906 09:19:22.112420 2299 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 09:19:22.112441 kubelet[2299]: I0906 09:19:22.112428 2299 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 09:19:22.113173 kubelet[2299]: I0906 09:19:22.113154 2299 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 09:19:22.115503 kubelet[2299]: I0906 09:19:22.115475 2299 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 09:19:22.115503 kubelet[2299]: I0906 09:19:22.115498 2299 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 09:19:22.115568 kubelet[2299]: I0906 09:19:22.115521 2299 kubelet.go:386] "Adding apiserver pod source"
Sep 6 09:19:22.116595 kubelet[2299]: I0906 09:19:22.116572 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 09:19:22.118491 kubelet[2299]: I0906 09:19:22.118467 2299 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 6 09:19:22.119265 kubelet[2299]: I0906 09:19:22.119244 2299 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 09:19:22.119446 kubelet[2299]: W0906 09:19:22.119365 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 09:19:22.121633 kubelet[2299]: E0906 09:19:22.121592 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 09:19:22.121633 kubelet[2299]: I0906 09:19:22.121626 2299 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 09:19:22.121700 kubelet[2299]: I0906 09:19:22.121666 2299 server.go:1289] "Started kubelet"
Sep 6 09:19:22.122039 kubelet[2299]: I0906 09:19:22.121994 2299 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 09:19:22.124272 kubelet[2299]: I0906 09:19:22.124116 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 09:19:22.124564 kubelet[2299]: I0906 09:19:22.124536 2299 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 09:19:22.126438 kubelet[2299]: I0906 09:19:22.125823 2299 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 09:19:22.127407 kubelet[2299]: I0906 09:19:22.127381 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 09:19:22.128432 kubelet[2299]: E0906 09:19:22.128395 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 09:19:22.129452 kubelet[2299]: I0906 09:19:22.129428 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 09:19:22.131646 kubelet[2299]: E0906 09:19:22.128813 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862a6f1cf358bbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 09:19:22.121636799 +0000 UTC m=+1.795924803,LastTimestamp:2025-09-06 09:19:22.121636799 +0000 UTC m=+1.795924803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 6 09:19:22.131646 kubelet[2299]: E0906 09:19:22.131005 2299 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 09:19:22.131646 kubelet[2299]: E0906 09:19:22.131039 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 09:19:22.131646 kubelet[2299]: I0906 09:19:22.131056 2299 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 09:19:22.131646 kubelet[2299]: I0906 09:19:22.131202 2299 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 09:19:22.131646 kubelet[2299]: I0906 09:19:22.131251 2299 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 09:19:22.131646 kubelet[2299]: E0906 09:19:22.131596 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 09:19:22.132078 kubelet[2299]: I0906 09:19:22.132058 2299 factory.go:223] Registration of the systemd container factory successfully
Sep 6 09:19:22.132215 kubelet[2299]: I0906 09:19:22.132196 2299 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 09:19:22.132480 kubelet[2299]: E0906 09:19:22.132435 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms"
Sep 6 09:19:22.133457 kubelet[2299]: I0906 09:19:22.133429 2299 factory.go:223] Registration of the containerd container factory successfully
Sep 6 09:19:22.142781 kubelet[2299]: I0906 09:19:22.142755 2299 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 09:19:22.144433 kubelet[2299]: I0906 09:19:22.144153 2299 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 09:19:22.144433 kubelet[2299]: I0906 09:19:22.144174 2299 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 09:19:22.144433 kubelet[2299]: I0906 09:19:22.144190 2299 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 09:19:22.144433 kubelet[2299]: I0906 09:19:22.144196 2299 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 09:19:22.144433 kubelet[2299]: E0906 09:19:22.144234 2299 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 09:19:22.147274 kubelet[2299]: I0906 09:19:22.147241 2299 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 09:19:22.147274 kubelet[2299]: I0906 09:19:22.147264 2299 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 09:19:22.147274 kubelet[2299]: I0906 09:19:22.147282 2299 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:19:22.232088 kubelet[2299]: E0906 09:19:22.232021 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:22.245287 kubelet[2299]: E0906 09:19:22.245232 2299 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 09:19:22.271726 kubelet[2299]: E0906 09:19:22.271690 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 09:19:22.332322 kubelet[2299]: E0906 09:19:22.332175 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:22.334061 kubelet[2299]: E0906 09:19:22.334024 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Sep 6 09:19:22.380131 kubelet[2299]: I0906 09:19:22.380097 2299 
policy_none.go:49] "None policy: Start" Sep 6 09:19:22.380131 kubelet[2299]: I0906 09:19:22.380135 2299 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 09:19:22.380225 kubelet[2299]: I0906 09:19:22.380147 2299 state_mem.go:35] "Initializing new in-memory state store" Sep 6 09:19:22.406599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 6 09:19:22.420306 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 6 09:19:22.422843 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 6 09:19:22.432564 kubelet[2299]: E0906 09:19:22.432510 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:22.442727 kubelet[2299]: E0906 09:19:22.442699 2299 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 09:19:22.442900 kubelet[2299]: I0906 09:19:22.442876 2299 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 09:19:22.442973 kubelet[2299]: I0906 09:19:22.442894 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 09:19:22.443170 kubelet[2299]: I0906 09:19:22.443153 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 09:19:22.444145 kubelet[2299]: E0906 09:19:22.444119 2299 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 6 09:19:22.444212 kubelet[2299]: E0906 09:19:22.444153 2299 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 09:19:22.487408 systemd[1]: Created slice kubepods-burstable-pod050fe8359e4f7053a4c4a6e39934358e.slice - libcontainer container kubepods-burstable-pod050fe8359e4f7053a4c4a6e39934358e.slice. Sep 6 09:19:22.498848 kubelet[2299]: E0906 09:19:22.498797 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:22.503998 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 6 09:19:22.506196 kubelet[2299]: E0906 09:19:22.506160 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:22.507417 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 6 09:19:22.509130 kubelet[2299]: E0906 09:19:22.509111 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:22.532607 kubelet[2299]: I0906 09:19:22.532576 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:22.532662 kubelet[2299]: I0906 09:19:22.532626 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:22.532685 kubelet[2299]: I0906 09:19:22.532670 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:22.532710 kubelet[2299]: I0906 09:19:22.532693 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:22.532748 kubelet[2299]: I0906 09:19:22.532711 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:22.532748 kubelet[2299]: I0906 09:19:22.532737 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:22.532792 kubelet[2299]: I0906 09:19:22.532758 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:22.532792 kubelet[2299]: I0906 09:19:22.532773 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:22.532792 kubelet[2299]: I0906 09:19:22.532787 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:22.544602 kubelet[2299]: I0906 09:19:22.544557 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:19:22.545081 
kubelet[2299]: E0906 09:19:22.545034 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 6 09:19:22.734664 kubelet[2299]: E0906 09:19:22.734522 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Sep 6 09:19:22.746989 kubelet[2299]: I0906 09:19:22.746938 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:19:22.747304 kubelet[2299]: E0906 09:19:22.747245 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 6 09:19:22.799617 kubelet[2299]: E0906 09:19:22.799561 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.800202 containerd[1529]: time="2025-09-06T09:19:22.800162420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:050fe8359e4f7053a4c4a6e39934358e,Namespace:kube-system,Attempt:0,}" Sep 6 09:19:22.807593 kubelet[2299]: E0906 09:19:22.807392 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.807967 containerd[1529]: time="2025-09-06T09:19:22.807804900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 6 09:19:22.810273 kubelet[2299]: E0906 09:19:22.810239 2299 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.810580 containerd[1529]: time="2025-09-06T09:19:22.810552622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 6 09:19:22.870684 containerd[1529]: time="2025-09-06T09:19:22.870563107Z" level=info msg="connecting to shim 13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa" address="unix:///run/containerd/s/11e711219c4ff3f0ec1dcea922d91a08ffe42f09bfb66df3684d331414b63eb8" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:19:22.871095 containerd[1529]: time="2025-09-06T09:19:22.871055636Z" level=info msg="connecting to shim a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63" address="unix:///run/containerd/s/90c423fb6029da64a4341e83368c6829cdaa83d267aaa6eb3b556a07ad6a85ee" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:19:22.882835 containerd[1529]: time="2025-09-06T09:19:22.882801647Z" level=info msg="connecting to shim e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7" address="unix:///run/containerd/s/0f5eb20d457dc645c7219b485921f10b2e7351ac13f699db50574a1a31b51b13" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:19:22.898185 systemd[1]: Started cri-containerd-13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa.scope - libcontainer container 13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa. Sep 6 09:19:22.899186 systemd[1]: Started cri-containerd-a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63.scope - libcontainer container a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63. Sep 6 09:19:22.902189 systemd[1]: Started cri-containerd-e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7.scope - libcontainer container e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7. 
Sep 6 09:19:22.939292 containerd[1529]: time="2025-09-06T09:19:22.939211190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:050fe8359e4f7053a4c4a6e39934358e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63\"" Sep 6 09:19:22.939468 containerd[1529]: time="2025-09-06T09:19:22.939386105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa\"" Sep 6 09:19:22.940860 kubelet[2299]: E0906 09:19:22.940662 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.940860 kubelet[2299]: E0906 09:19:22.940662 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.941635 containerd[1529]: time="2025-09-06T09:19:22.941581590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7\"" Sep 6 09:19:22.942325 kubelet[2299]: E0906 09:19:22.942308 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:22.944340 containerd[1529]: time="2025-09-06T09:19:22.944316168Z" level=info msg="CreateContainer within sandbox \"a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 09:19:22.945595 containerd[1529]: 
time="2025-09-06T09:19:22.945550076Z" level=info msg="CreateContainer within sandbox \"13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 09:19:22.947657 containerd[1529]: time="2025-09-06T09:19:22.947631555Z" level=info msg="CreateContainer within sandbox \"e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 09:19:22.953609 containerd[1529]: time="2025-09-06T09:19:22.953581860Z" level=info msg="Container f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:19:22.955813 containerd[1529]: time="2025-09-06T09:19:22.955787643Z" level=info msg="Container 883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:19:22.957701 containerd[1529]: time="2025-09-06T09:19:22.957673288Z" level=info msg="Container f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:19:22.967007 containerd[1529]: time="2025-09-06T09:19:22.966920667Z" level=info msg="CreateContainer within sandbox \"a13307a89b01c7a9095c15a55720fd6e837def05128938ea90fca10d08029f63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd\"" Sep 6 09:19:22.968160 containerd[1529]: time="2025-09-06T09:19:22.968134458Z" level=info msg="StartContainer for \"883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd\"" Sep 6 09:19:22.969228 containerd[1529]: time="2025-09-06T09:19:22.969198700Z" level=info msg="connecting to shim 883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd" address="unix:///run/containerd/s/90c423fb6029da64a4341e83368c6829cdaa83d267aaa6eb3b556a07ad6a85ee" protocol=ttrpc version=3 Sep 6 09:19:22.969594 containerd[1529]: 
time="2025-09-06T09:19:22.969558390Z" level=info msg="CreateContainer within sandbox \"e995d3f8883ac259a5c9159a347cf792a4351334d5d9081bd876991ae3fe90d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c\"" Sep 6 09:19:22.970707 containerd[1529]: time="2025-09-06T09:19:22.970668835Z" level=info msg="StartContainer for \"f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c\"" Sep 6 09:19:22.971646 containerd[1529]: time="2025-09-06T09:19:22.971371624Z" level=info msg="CreateContainer within sandbox \"13364ada8f94a698ea2474646ec74736f7a684d5fed310c078913478d30b0bfa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13\"" Sep 6 09:19:22.971751 containerd[1529]: time="2025-09-06T09:19:22.971721736Z" level=info msg="StartContainer for \"f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13\"" Sep 6 09:19:22.971781 containerd[1529]: time="2025-09-06T09:19:22.971764694Z" level=info msg="connecting to shim f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c" address="unix:///run/containerd/s/0f5eb20d457dc645c7219b485921f10b2e7351ac13f699db50574a1a31b51b13" protocol=ttrpc version=3 Sep 6 09:19:22.972846 containerd[1529]: time="2025-09-06T09:19:22.972815992Z" level=info msg="connecting to shim f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13" address="unix:///run/containerd/s/11e711219c4ff3f0ec1dcea922d91a08ffe42f09bfb66df3684d331414b63eb8" protocol=ttrpc version=3 Sep 6 09:19:22.994110 systemd[1]: Started cri-containerd-883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd.scope - libcontainer container 883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd. 
Sep 6 09:19:22.998506 systemd[1]: Started cri-containerd-f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c.scope - libcontainer container f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c. Sep 6 09:19:22.999867 systemd[1]: Started cri-containerd-f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13.scope - libcontainer container f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13. Sep 6 09:19:23.023350 kubelet[2299]: E0906 09:19:23.023291 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 09:19:23.045074 containerd[1529]: time="2025-09-06T09:19:23.045037112Z" level=info msg="StartContainer for \"f1f19c58632ccfa883013d99fe5e35cdeb502c62832510f48ff4e32c8fa75d13\" returns successfully" Sep 6 09:19:23.046207 containerd[1529]: time="2025-09-06T09:19:23.046174469Z" level=info msg="StartContainer for \"f01bc3a8e632b23cd2b4d21fbc2f6094fff0214f2dd21c0efee61a591e934b7c\" returns successfully" Sep 6 09:19:23.046718 containerd[1529]: time="2025-09-06T09:19:23.046686599Z" level=info msg="StartContainer for \"883e23382f521c57e28661cc7f5971ef143dbab2d15318bc2307e3a9a9e341cd\" returns successfully" Sep 6 09:19:23.150692 kubelet[2299]: I0906 09:19:23.150659 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:19:23.153298 kubelet[2299]: E0906 09:19:23.153276 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:23.153566 kubelet[2299]: E0906 09:19:23.153504 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:23.158006 kubelet[2299]: E0906 09:19:23.157990 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:23.158187 kubelet[2299]: E0906 09:19:23.158168 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:23.160023 kubelet[2299]: E0906 09:19:23.160005 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:23.160232 kubelet[2299]: E0906 09:19:23.160186 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:24.162406 kubelet[2299]: E0906 09:19:24.162358 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:24.163258 kubelet[2299]: E0906 09:19:24.162498 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:24.163982 kubelet[2299]: E0906 09:19:24.163688 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:24.166962 kubelet[2299]: E0906 09:19:24.164285 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:24.726885 kubelet[2299]: E0906 09:19:24.726855 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Sep 6 09:19:24.794036 kubelet[2299]: I0906 09:19:24.794002 2299 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 09:19:24.794204 kubelet[2299]: E0906 09:19:24.794190 2299 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 09:19:24.810381 kubelet[2299]: E0906 09:19:24.810341 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:24.911224 kubelet[2299]: E0906 09:19:24.911182 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.012147 kubelet[2299]: E0906 09:19:25.011752 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.112453 kubelet[2299]: E0906 09:19:25.112406 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.212760 kubelet[2299]: E0906 09:19:25.212712 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.313719 kubelet[2299]: E0906 09:19:25.313365 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.357646 kubelet[2299]: E0906 09:19:25.357622 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:19:25.357764 kubelet[2299]: E0906 09:19:25.357754 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:25.413866 kubelet[2299]: E0906 09:19:25.413834 2299 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 6 09:19:25.514673 kubelet[2299]: E0906 09:19:25.514630 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.616027 kubelet[2299]: E0906 09:19:25.615632 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.715810 kubelet[2299]: E0906 09:19:25.715763 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.816445 kubelet[2299]: E0906 09:19:25.816403 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:25.917528 kubelet[2299]: E0906 09:19:25.917208 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:26.018038 kubelet[2299]: E0906 09:19:26.017960 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:26.118710 kubelet[2299]: E0906 09:19:26.118661 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:26.232131 kubelet[2299]: I0906 09:19:26.231416 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:26.239069 kubelet[2299]: I0906 09:19:26.239004 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:26.244330 kubelet[2299]: I0906 09:19:26.244295 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:26.759847 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-7.scope)... Sep 6 09:19:26.759861 systemd[1]: Reloading... Sep 6 09:19:26.814977 zram_generator::config[2627]: No configuration found. 
Sep 6 09:19:26.978420 systemd[1]: Reloading finished in 218 ms. Sep 6 09:19:27.009863 kubelet[2299]: I0906 09:19:27.009812 2299 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 09:19:27.010137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:19:27.024881 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 09:19:27.025259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:19:27.025432 systemd[1]: kubelet.service: Consumed 2.128s CPU time, 126.4M memory peak. Sep 6 09:19:27.027578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:19:27.163781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:19:27.168435 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 09:19:27.211719 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 09:19:27.211719 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 09:19:27.211719 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 09:19:27.212074 kubelet[2669]: I0906 09:19:27.211808 2669 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 09:19:27.219093 kubelet[2669]: I0906 09:19:27.219061 2669 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 09:19:27.219093 kubelet[2669]: I0906 09:19:27.219086 2669 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 09:19:27.219274 kubelet[2669]: I0906 09:19:27.219257 2669 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 09:19:27.220428 kubelet[2669]: I0906 09:19:27.220412 2669 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 09:19:27.222432 kubelet[2669]: I0906 09:19:27.222408 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 09:19:27.226973 kubelet[2669]: I0906 09:19:27.226301 2669 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 6 09:19:27.228759 kubelet[2669]: I0906 09:19:27.228739 2669 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 09:19:27.229076 kubelet[2669]: I0906 09:19:27.229049 2669 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 09:19:27.229273 kubelet[2669]: I0906 09:19:27.229141 2669 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 09:19:27.229395 kubelet[2669]: I0906 09:19:27.229383 2669 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 09:19:27.229448 
kubelet[2669]: I0906 09:19:27.229440 2669 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 09:19:27.229532 kubelet[2669]: I0906 09:19:27.229523 2669 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:19:27.229744 kubelet[2669]: I0906 09:19:27.229721 2669 kubelet.go:480] "Attempting to sync node with API server" Sep 6 09:19:27.229833 kubelet[2669]: I0906 09:19:27.229823 2669 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 09:19:27.229923 kubelet[2669]: I0906 09:19:27.229914 2669 kubelet.go:386] "Adding apiserver pod source" Sep 6 09:19:27.230079 kubelet[2669]: I0906 09:19:27.230068 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 09:19:27.230813 kubelet[2669]: I0906 09:19:27.230751 2669 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 6 09:19:27.232972 kubelet[2669]: I0906 09:19:27.231849 2669 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 09:19:27.234098 kubelet[2669]: I0906 09:19:27.234082 2669 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 09:19:27.234206 kubelet[2669]: I0906 09:19:27.234197 2669 server.go:1289] "Started kubelet" Sep 6 09:19:27.236201 kubelet[2669]: I0906 09:19:27.236166 2669 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 09:19:27.236536 kubelet[2669]: I0906 09:19:27.236481 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 09:19:27.237030 kubelet[2669]: I0906 09:19:27.236720 2669 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 09:19:27.237030 kubelet[2669]: I0906 09:19:27.235245 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 09:19:27.237828 kubelet[2669]: E0906 
09:19:27.237812 2669 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:19:27.238063 kubelet[2669]: I0906 09:19:27.238050 2669 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 09:19:27.238549 kubelet[2669]: I0906 09:19:27.238533 2669 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 09:19:27.238882 kubelet[2669]: I0906 09:19:27.238870 2669 reconciler.go:26] "Reconciler: start to sync state" Sep 6 09:19:27.239341 kubelet[2669]: I0906 09:19:27.239324 2669 server.go:317] "Adding debug handlers to kubelet server" Sep 6 09:19:27.243967 kubelet[2669]: I0906 09:19:27.235331 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 09:19:27.247239 kubelet[2669]: I0906 09:19:27.247212 2669 factory.go:223] Registration of the systemd container factory successfully Sep 6 09:19:27.247346 kubelet[2669]: I0906 09:19:27.247324 2669 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 09:19:27.248791 kubelet[2669]: E0906 09:19:27.248766 2669 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 09:19:27.253300 kubelet[2669]: I0906 09:19:27.253247 2669 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 09:19:27.254367 kubelet[2669]: I0906 09:19:27.254345 2669 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 6 09:19:27.254409 kubelet[2669]: I0906 09:19:27.254370 2669 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 09:19:27.254409 kubelet[2669]: I0906 09:19:27.254389 2669 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 09:19:27.254409 kubelet[2669]: I0906 09:19:27.254398 2669 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 09:19:27.254468 kubelet[2669]: E0906 09:19:27.254443 2669 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 09:19:27.258607 kubelet[2669]: I0906 09:19:27.258570 2669 factory.go:223] Registration of the containerd container factory successfully Sep 6 09:19:27.288355 kubelet[2669]: I0906 09:19:27.288255 2669 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 09:19:27.288355 kubelet[2669]: I0906 09:19:27.288275 2669 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 09:19:27.288355 kubelet[2669]: I0906 09:19:27.288296 2669 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:19:27.288480 kubelet[2669]: I0906 09:19:27.288403 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 09:19:27.288480 kubelet[2669]: I0906 09:19:27.288411 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 09:19:27.288480 kubelet[2669]: I0906 09:19:27.288426 2669 policy_none.go:49] "None policy: Start" Sep 6 09:19:27.288480 kubelet[2669]: I0906 09:19:27.288433 2669 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 09:19:27.288480 kubelet[2669]: I0906 09:19:27.288441 2669 state_mem.go:35] "Initializing new in-memory state store" Sep 6 09:19:27.288587 kubelet[2669]: I0906 09:19:27.288514 2669 state_mem.go:75] "Updated machine memory state" Sep 6 09:19:27.292125 kubelet[2669]: E0906 09:19:27.291995 2669 manager.go:517] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 09:19:27.292403 kubelet[2669]: I0906 09:19:27.292176 2669 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 09:19:27.292403 kubelet[2669]: I0906 09:19:27.292191 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 09:19:27.292403 kubelet[2669]: I0906 09:19:27.292381 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 09:19:27.294657 kubelet[2669]: E0906 09:19:27.294633 2669 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 09:19:27.355877 kubelet[2669]: I0906 09:19:27.355833 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.355987 kubelet[2669]: I0906 09:19:27.355843 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:27.356016 kubelet[2669]: I0906 09:19:27.355996 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.357653 kubelet[2669]: I0906 09:19:27.357022 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.362104 kubelet[2669]: E0906 09:19:27.362001 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.362244 kubelet[2669]: E0906 09:19:27.362104 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:27.362244 kubelet[2669]: E0906 09:19:27.362153 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.362244 kubelet[2669]: I0906 09:19:27.362172 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.362244 kubelet[2669]: E0906 09:19:27.362193 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.365982 kubelet[2669]: E0906 09:19:27.365962 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.366139 kubelet[2669]: I0906 09:19:27.366064 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:27.369473 kubelet[2669]: E0906 09:19:27.369447 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:27.395713 kubelet[2669]: I0906 09:19:27.395689 2669 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:19:27.402767 kubelet[2669]: I0906 09:19:27.402728 2669 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 6 09:19:27.402834 kubelet[2669]: I0906 09:19:27.402794 2669 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 09:19:27.440968 kubelet[2669]: I0906 09:19:27.440875 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.440968 kubelet[2669]: I0906 09:19:27.440909 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.440968 kubelet[2669]: I0906 09:19:27.440929 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.441182 kubelet[2669]: I0906 09:19:27.441130 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.441182 kubelet[2669]: I0906 09:19:27.441155 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.441336 kubelet[2669]: I0906 09:19:27.441170 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/050fe8359e4f7053a4c4a6e39934358e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"050fe8359e4f7053a4c4a6e39934358e\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:27.441336 kubelet[2669]: I0906 09:19:27.441289 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.441336 kubelet[2669]: I0906 09:19:27.441308 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:19:27.441482 kubelet[2669]: I0906 09:19:27.441323 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:27.662985 kubelet[2669]: E0906 09:19:27.662878 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:27.663261 kubelet[2669]: E0906 09:19:27.663235 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:27.663413 kubelet[2669]: E0906 09:19:27.663372 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:27.759914 sudo[2708]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 09:19:27.760535 sudo[2708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) 
Sep 6 09:19:28.068147 sudo[2708]: pam_unix(sudo:session): session closed for user root Sep 6 09:19:28.230590 kubelet[2669]: I0906 09:19:28.230566 2669 apiserver.go:52] "Watching apiserver" Sep 6 09:19:28.239603 kubelet[2669]: I0906 09:19:28.239567 2669 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 09:19:28.273465 kubelet[2669]: I0906 09:19:28.273345 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:28.273773 kubelet[2669]: I0906 09:19:28.273751 2669 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:28.274636 kubelet[2669]: E0906 09:19:28.274282 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:28.279511 kubelet[2669]: E0906 09:19:28.279484 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 09:19:28.279939 kubelet[2669]: E0906 09:19:28.279864 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:28.280580 kubelet[2669]: E0906 09:19:28.280525 2669 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 09:19:28.280786 kubelet[2669]: E0906 09:19:28.280774 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:28.294791 kubelet[2669]: I0906 09:19:28.294547 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=2.294533219 podStartE2EDuration="2.294533219s" podCreationTimestamp="2025-09-06 09:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:28.294279173 +0000 UTC m=+1.121791231" watchObservedRunningTime="2025-09-06 09:19:28.294533219 +0000 UTC m=+1.122045277" Sep 6 09:19:28.301282 kubelet[2669]: I0906 09:19:28.301234 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.3012237669999998 podStartE2EDuration="2.301223767s" podCreationTimestamp="2025-09-06 09:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:28.301070242 +0000 UTC m=+1.128582300" watchObservedRunningTime="2025-09-06 09:19:28.301223767 +0000 UTC m=+1.128735825" Sep 6 09:19:28.316054 kubelet[2669]: I0906 09:19:28.315601 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.31558806 podStartE2EDuration="2.31558806s" podCreationTimestamp="2025-09-06 09:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:28.308683538 +0000 UTC m=+1.136195596" watchObservedRunningTime="2025-09-06 09:19:28.31558806 +0000 UTC m=+1.143100078" Sep 6 09:19:29.274435 kubelet[2669]: E0906 09:19:29.274347 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:29.274769 kubelet[2669]: E0906 09:19:29.274509 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 
09:19:30.275602 kubelet[2669]: E0906 09:19:30.275263 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:30.276042 kubelet[2669]: E0906 09:19:30.275621 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:30.306744 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 6 09:19:30.308015 sshd[1739]: Connection closed by 10.0.0.1 port 57786 Sep 6 09:19:30.308510 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:30.312705 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Sep 6 09:19:30.313787 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:57786.service: Deactivated successfully. Sep 6 09:19:30.317104 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 09:19:30.317465 systemd[1]: session-7.scope: Consumed 7.475s CPU time, 259.9M memory peak. Sep 6 09:19:30.321315 systemd-logind[1503]: Removed session 7. Sep 6 09:19:31.277382 kubelet[2669]: E0906 09:19:31.277346 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:32.704821 kubelet[2669]: E0906 09:19:32.704728 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:34.063569 kubelet[2669]: I0906 09:19:34.063528 2669 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 09:19:34.064205 containerd[1529]: time="2025-09-06T09:19:34.063870453Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 09:19:34.064982 kubelet[2669]: I0906 09:19:34.064508 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 09:19:34.933450 systemd[1]: Created slice kubepods-besteffort-pod290ed1da_228a_4870_99d6_19d80cfc86bc.slice - libcontainer container kubepods-besteffort-pod290ed1da_228a_4870_99d6_19d80cfc86bc.slice. Sep 6 09:19:34.959613 systemd[1]: Created slice kubepods-burstable-podf0cfec9c_81a3_46b8_aa38_9dac657802e9.slice - libcontainer container kubepods-burstable-podf0cfec9c_81a3_46b8_aa38_9dac657802e9.slice. Sep 6 09:19:34.990427 kubelet[2669]: I0906 09:19:34.990389 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hubble-tls\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990427 kubelet[2669]: I0906 09:19:34.990430 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/290ed1da-228a-4870-99d6-19d80cfc86bc-xtables-lock\") pod \"kube-proxy-79vt2\" (UID: \"290ed1da-228a-4870-99d6-19d80cfc86bc\") " pod="kube-system/kube-proxy-79vt2" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990451 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/290ed1da-228a-4870-99d6-19d80cfc86bc-lib-modules\") pod \"kube-proxy-79vt2\" (UID: \"290ed1da-228a-4870-99d6-19d80cfc86bc\") " pod="kube-system/kube-proxy-79vt2" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990466 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-xtables-lock\") pod \"cilium-67vhw\" (UID: 
\"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990490 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-net\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990509 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzpr\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-kube-api-access-skzpr\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990524 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-etc-cni-netd\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990603 kubelet[2669]: I0906 09:19:34.990538 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-lib-modules\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990723 kubelet[2669]: I0906 09:19:34.990551 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0cfec9c-81a3-46b8-aa38-9dac657802e9-clustermesh-secrets\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990723 kubelet[2669]: 
I0906 09:19:34.990570 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/290ed1da-228a-4870-99d6-19d80cfc86bc-kube-proxy\") pod \"kube-proxy-79vt2\" (UID: \"290ed1da-228a-4870-99d6-19d80cfc86bc\") " pod="kube-system/kube-proxy-79vt2" Sep 6 09:19:34.990723 kubelet[2669]: I0906 09:19:34.990584 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-bpf-maps\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990723 kubelet[2669]: I0906 09:19:34.990597 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hostproc\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990723 kubelet[2669]: I0906 09:19:34.990615 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-cgroup\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990723 kubelet[2669]: I0906 09:19:34.990631 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cni-path\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990847 kubelet[2669]: I0906 09:19:34.990654 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjdr\" (UniqueName: 
\"kubernetes.io/projected/290ed1da-228a-4870-99d6-19d80cfc86bc-kube-api-access-6mjdr\") pod \"kube-proxy-79vt2\" (UID: \"290ed1da-228a-4870-99d6-19d80cfc86bc\") " pod="kube-system/kube-proxy-79vt2" Sep 6 09:19:34.990847 kubelet[2669]: I0906 09:19:34.990671 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-run\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990847 kubelet[2669]: I0906 09:19:34.990685 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-config-path\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:34.990847 kubelet[2669]: I0906 09:19:34.990708 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-kernel\") pod \"cilium-67vhw\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " pod="kube-system/cilium-67vhw" Sep 6 09:19:35.068936 systemd[1]: Created slice kubepods-besteffort-pod3d9be6f7_164c_4c97_b5a0_be48ed86ad4e.slice - libcontainer container kubepods-besteffort-pod3d9be6f7_164c_4c97_b5a0_be48ed86ad4e.slice. 
Sep 6 09:19:35.091131 kubelet[2669]: I0906 09:19:35.091052 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4wt\" (UniqueName: \"kubernetes.io/projected/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-kube-api-access-6k4wt\") pod \"cilium-operator-6c4d7847fc-spkb6\" (UID: \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\") " pod="kube-system/cilium-operator-6c4d7847fc-spkb6"
Sep 6 09:19:35.091757 kubelet[2669]: I0906 09:19:35.091164 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-spkb6\" (UID: \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\") " pod="kube-system/cilium-operator-6c4d7847fc-spkb6"
Sep 6 09:19:35.259165 kubelet[2669]: E0906 09:19:35.259061 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:35.260363 containerd[1529]: time="2025-09-06T09:19:35.259852092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79vt2,Uid:290ed1da-228a-4870-99d6-19d80cfc86bc,Namespace:kube-system,Attempt:0,}"
Sep 6 09:19:35.265554 kubelet[2669]: E0906 09:19:35.265526 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:35.266913 containerd[1529]: time="2025-09-06T09:19:35.266881880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67vhw,Uid:f0cfec9c-81a3-46b8-aa38-9dac657802e9,Namespace:kube-system,Attempt:0,}"
Sep 6 09:19:35.281568 containerd[1529]: time="2025-09-06T09:19:35.281502416Z" level=info msg="connecting to shim 09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109" address="unix:///run/containerd/s/cbd01b27a379d6fc34ddfddf76de6255cf40bae6291d8c220fa9d2cf065f2d73" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:19:35.288958 containerd[1529]: time="2025-09-06T09:19:35.288914037Z" level=info msg="connecting to shim db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:19:35.317111 systemd[1]: Started cri-containerd-09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109.scope - libcontainer container 09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109.
Sep 6 09:19:35.320392 systemd[1]: Started cri-containerd-db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6.scope - libcontainer container db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6.
Sep 6 09:19:35.346900 containerd[1529]: time="2025-09-06T09:19:35.346811125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79vt2,Uid:290ed1da-228a-4870-99d6-19d80cfc86bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109\""
Sep 6 09:19:35.351283 kubelet[2669]: E0906 09:19:35.351254 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:35.351514 containerd[1529]: time="2025-09-06T09:19:35.351437314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-67vhw,Uid:f0cfec9c-81a3-46b8-aa38-9dac657802e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\""
Sep 6 09:19:35.352888 kubelet[2669]: E0906 09:19:35.352862 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:35.353926 containerd[1529]: time="2025-09-06T09:19:35.353884584Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 09:19:35.358161 containerd[1529]: time="2025-09-06T09:19:35.358088311Z" level=info msg="CreateContainer within sandbox \"09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 09:19:35.368701 containerd[1529]: time="2025-09-06T09:19:35.368672160Z" level=info msg="Container df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:35.372817 kubelet[2669]: E0906 09:19:35.372788 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:35.373348 containerd[1529]: time="2025-09-06T09:19:35.373311998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-spkb6,Uid:3d9be6f7-164c-4c97-b5a0-be48ed86ad4e,Namespace:kube-system,Attempt:0,}"
Sep 6 09:19:35.375388 containerd[1529]: time="2025-09-06T09:19:35.375220604Z" level=info msg="CreateContainer within sandbox \"09792bc1b34c451116901edc25ae5a28b507b2933167e0d008e35727fff67109\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80\""
Sep 6 09:19:35.375614 containerd[1529]: time="2025-09-06T09:19:35.375574457Z" level=info msg="StartContainer for \"df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80\""
Sep 6 09:19:35.376838 containerd[1529]: time="2025-09-06T09:19:35.376794449Z" level=info msg="connecting to shim df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80" address="unix:///run/containerd/s/cbd01b27a379d6fc34ddfddf76de6255cf40bae6291d8c220fa9d2cf065f2d73" protocol=ttrpc version=3
Sep 6 09:19:35.399118 systemd[1]: Started cri-containerd-df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80.scope - libcontainer container df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80.
Sep 6 09:19:35.404772 containerd[1529]: time="2025-09-06T09:19:35.403429258Z" level=info msg="connecting to shim b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c" address="unix:///run/containerd/s/4d8d07deaa7f4c183e7552318caa2ddb82756bf4e2ef96c53eb3077915ff0a5a" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:19:35.423165 systemd[1]: Started cri-containerd-b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c.scope - libcontainer container b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c.
Sep 6 09:19:35.438263 containerd[1529]: time="2025-09-06T09:19:35.438184275Z" level=info msg="StartContainer for \"df7e9afb73f6f02e56462e89eb7611bee5e5482768a774544ca4e4535f5edc80\" returns successfully"
Sep 6 09:19:35.462731 containerd[1529]: time="2025-09-06T09:19:35.462681395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-spkb6,Uid:3d9be6f7-164c-4c97-b5a0-be48ed86ad4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\""
Sep 6 09:19:35.463651 kubelet[2669]: E0906 09:19:35.463594 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:36.296669 kubelet[2669]: E0906 09:19:36.296035 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:36.305564 kubelet[2669]: I0906 09:19:36.305496 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-79vt2" podStartSLOduration=2.305480017 podStartE2EDuration="2.305480017s" podCreationTimestamp="2025-09-06 09:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:36.305360776 +0000 UTC m=+9.132872834" watchObservedRunningTime="2025-09-06 09:19:36.305480017 +0000 UTC m=+9.132992075"
Sep 6 09:19:38.303805 kubelet[2669]: E0906 09:19:38.303770 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:39.364733 kubelet[2669]: E0906 09:19:39.364585 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:40.302111 kubelet[2669]: E0906 09:19:40.301784 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:41.318435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636848897.mount: Deactivated successfully.
Sep 6 09:19:42.570275 update_engine[1505]: I20250906 09:19:42.570094 1505 update_attempter.cc:509] Updating boot flags...
Sep 6 09:19:42.665877 containerd[1529]: time="2025-09-06T09:19:42.665823892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:42.667559 containerd[1529]: time="2025-09-06T09:19:42.667500960Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 6 09:19:42.668978 containerd[1529]: time="2025-09-06T09:19:42.668533390Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:42.671983 containerd[1529]: time="2025-09-06T09:19:42.671909616Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.317989289s"
Sep 6 09:19:42.672054 containerd[1529]: time="2025-09-06T09:19:42.671984653Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 6 09:19:42.695518 containerd[1529]: time="2025-09-06T09:19:42.695471887Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 09:19:42.703770 containerd[1529]: time="2025-09-06T09:19:42.703468194Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 09:19:42.712760 containerd[1529]: time="2025-09-06T09:19:42.712684063Z" level=info msg="Container 16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:42.720463 kubelet[2669]: E0906 09:19:42.719686 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:42.729273 containerd[1529]: time="2025-09-06T09:19:42.729143667Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\""
Sep 6 09:19:42.737295 containerd[1529]: time="2025-09-06T09:19:42.737257512Z" level=info msg="StartContainer for \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\""
Sep 6 09:19:42.738515 containerd[1529]: time="2025-09-06T09:19:42.738445739Z" level=info msg="connecting to shim 16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" protocol=ttrpc version=3
Sep 6 09:19:42.848144 systemd[1]: Started cri-containerd-16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75.scope - libcontainer container 16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75.
Sep 6 09:19:42.873800 containerd[1529]: time="2025-09-06T09:19:42.873743642Z" level=info msg="StartContainer for \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" returns successfully"
Sep 6 09:19:42.888075 systemd[1]: cri-containerd-16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75.scope: Deactivated successfully.
Sep 6 09:19:42.919710 containerd[1529]: time="2025-09-06T09:19:42.919656104Z" level=info msg="received exit event container_id:\"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" id:\"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" pid:3117 exited_at:{seconds:1757150382 nanos:914468864}"
Sep 6 09:19:42.920009 containerd[1529]: time="2025-09-06T09:19:42.919852241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" id:\"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" pid:3117 exited_at:{seconds:1757150382 nanos:914468864}"
Sep 6 09:19:42.949206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75-rootfs.mount: Deactivated successfully.
Sep 6 09:19:43.308577 kubelet[2669]: E0906 09:19:43.308478 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:43.314150 containerd[1529]: time="2025-09-06T09:19:43.314112173Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 09:19:43.323761 containerd[1529]: time="2025-09-06T09:19:43.323120681Z" level=info msg="Container 5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:43.332964 containerd[1529]: time="2025-09-06T09:19:43.332901551Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\""
Sep 6 09:19:43.333616 containerd[1529]: time="2025-09-06T09:19:43.333442365Z" level=info msg="StartContainer for \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\""
Sep 6 09:19:43.334433 containerd[1529]: time="2025-09-06T09:19:43.334382527Z" level=info msg="connecting to shim 5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" protocol=ttrpc version=3
Sep 6 09:19:43.352116 systemd[1]: Started cri-containerd-5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda.scope - libcontainer container 5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda.
Sep 6 09:19:43.377939 containerd[1529]: time="2025-09-06T09:19:43.377823235Z" level=info msg="StartContainer for \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" returns successfully"
Sep 6 09:19:43.389063 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 09:19:43.389337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:19:43.389543 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:19:43.391287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:19:43.393557 systemd[1]: cri-containerd-5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda.scope: Deactivated successfully.
Sep 6 09:19:43.394684 containerd[1529]: time="2025-09-06T09:19:43.394650853Z" level=info msg="received exit event container_id:\"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" id:\"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" pid:3163 exited_at:{seconds:1757150383 nanos:394495660}"
Sep 6 09:19:43.394888 containerd[1529]: time="2025-09-06T09:19:43.394870236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" id:\"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" pid:3163 exited_at:{seconds:1757150383 nanos:394495660}"
Sep 6 09:19:43.411920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:19:43.889914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069638813.mount: Deactivated successfully.
Sep 6 09:19:44.304413 containerd[1529]: time="2025-09-06T09:19:44.303509940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:44.305464 containerd[1529]: time="2025-09-06T09:19:44.305418873Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 6 09:19:44.306134 containerd[1529]: time="2025-09-06T09:19:44.306109701Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:19:44.307998 containerd[1529]: time="2025-09-06T09:19:44.307972333Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.612460307s"
Sep 6 09:19:44.308114 containerd[1529]: time="2025-09-06T09:19:44.308097429Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 6 09:19:44.312318 containerd[1529]: time="2025-09-06T09:19:44.312288541Z" level=info msg="CreateContainer within sandbox \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 09:19:44.313118 kubelet[2669]: E0906 09:19:44.313082 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:44.323483 containerd[1529]: time="2025-09-06T09:19:44.323453527Z" level=info msg="Container d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:44.329323 containerd[1529]: time="2025-09-06T09:19:44.329274406Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 09:19:44.332965 containerd[1529]: time="2025-09-06T09:19:44.332737873Z" level=info msg="CreateContainer within sandbox \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\""
Sep 6 09:19:44.333275 containerd[1529]: time="2025-09-06T09:19:44.333247701Z" level=info msg="StartContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\""
Sep 6 09:19:44.336298 containerd[1529]: time="2025-09-06T09:19:44.336261447Z" level=info msg="connecting to shim d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192" address="unix:///run/containerd/s/4d8d07deaa7f4c183e7552318caa2ddb82756bf4e2ef96c53eb3077915ff0a5a" protocol=ttrpc version=3
Sep 6 09:19:44.353140 containerd[1529]: time="2025-09-06T09:19:44.353094685Z" level=info msg="Container 77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:44.359114 systemd[1]: Started cri-containerd-d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192.scope - libcontainer container d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192.
Sep 6 09:19:44.361665 containerd[1529]: time="2025-09-06T09:19:44.361557184Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\""
Sep 6 09:19:44.362553 containerd[1529]: time="2025-09-06T09:19:44.362512571Z" level=info msg="StartContainer for \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\""
Sep 6 09:19:44.365929 containerd[1529]: time="2025-09-06T09:19:44.365896962Z" level=info msg="connecting to shim 77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" protocol=ttrpc version=3
Sep 6 09:19:44.390124 systemd[1]: Started cri-containerd-77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1.scope - libcontainer container 77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1.
Sep 6 09:19:44.399031 containerd[1529]: time="2025-09-06T09:19:44.398991382Z" level=info msg="StartContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" returns successfully"
Sep 6 09:19:44.426444 containerd[1529]: time="2025-09-06T09:19:44.426359565Z" level=info msg="StartContainer for \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" returns successfully"
Sep 6 09:19:44.429408 systemd[1]: cri-containerd-77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1.scope: Deactivated successfully.
Sep 6 09:19:44.432471 containerd[1529]: time="2025-09-06T09:19:44.432439680Z" level=info msg="received exit event container_id:\"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" id:\"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" pid:3254 exited_at:{seconds:1757150384 nanos:432252757}"
Sep 6 09:19:44.432632 containerd[1529]: time="2025-09-06T09:19:44.432555652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" id:\"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" pid:3254 exited_at:{seconds:1757150384 nanos:432252757}"
Sep 6 09:19:45.316351 kubelet[2669]: E0906 09:19:45.316313 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:45.326746 kubelet[2669]: E0906 09:19:45.326537 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:45.328878 kubelet[2669]: I0906 09:19:45.328819 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-spkb6" podStartSLOduration=1.4841194309999999 podStartE2EDuration="10.328804728s" podCreationTimestamp="2025-09-06 09:19:35 +0000 UTC" firstStartedPulling="2025-09-06 09:19:35.464357154 +0000 UTC m=+8.291869212" lastFinishedPulling="2025-09-06 09:19:44.309042451 +0000 UTC m=+17.136554509" observedRunningTime="2025-09-06 09:19:45.326476978 +0000 UTC m=+18.153989036" watchObservedRunningTime="2025-09-06 09:19:45.328804728 +0000 UTC m=+18.156316786"
Sep 6 09:19:45.333548 containerd[1529]: time="2025-09-06T09:19:45.333505207Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 09:19:45.358847 containerd[1529]: time="2025-09-06T09:19:45.358266578Z" level=info msg="Container 837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:45.363285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110747721.mount: Deactivated successfully.
Sep 6 09:19:45.368589 containerd[1529]: time="2025-09-06T09:19:45.368525621Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\""
Sep 6 09:19:45.376987 containerd[1529]: time="2025-09-06T09:19:45.376144421Z" level=info msg="StartContainer for \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\""
Sep 6 09:19:45.377191 containerd[1529]: time="2025-09-06T09:19:45.377166135Z" level=info msg="connecting to shim 837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" protocol=ttrpc version=3
Sep 6 09:19:45.397111 systemd[1]: Started cri-containerd-837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831.scope - libcontainer container 837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831.
Sep 6 09:19:45.421860 systemd[1]: cri-containerd-837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831.scope: Deactivated successfully.
Sep 6 09:19:45.422294 containerd[1529]: time="2025-09-06T09:19:45.422253790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" id:\"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" pid:3301 exited_at:{seconds:1757150385 nanos:422024733}"
Sep 6 09:19:45.423288 containerd[1529]: time="2025-09-06T09:19:45.423256256Z" level=info msg="received exit event container_id:\"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" id:\"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" pid:3301 exited_at:{seconds:1757150385 nanos:422024733}"
Sep 6 09:19:45.430071 containerd[1529]: time="2025-09-06T09:19:45.430029097Z" level=info msg="StartContainer for \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" returns successfully"
Sep 6 09:19:45.441526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831-rootfs.mount: Deactivated successfully.
Sep 6 09:19:46.328394 kubelet[2669]: E0906 09:19:46.328242 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:46.328394 kubelet[2669]: E0906 09:19:46.328326 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:46.332730 containerd[1529]: time="2025-09-06T09:19:46.332674503Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 09:19:46.345324 containerd[1529]: time="2025-09-06T09:19:46.345173809Z" level=info msg="Container 8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:19:46.350888 containerd[1529]: time="2025-09-06T09:19:46.350856792Z" level=info msg="CreateContainer within sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\""
Sep 6 09:19:46.351324 containerd[1529]: time="2025-09-06T09:19:46.351288567Z" level=info msg="StartContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\""
Sep 6 09:19:46.352977 containerd[1529]: time="2025-09-06T09:19:46.352563604Z" level=info msg="connecting to shim 8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444" address="unix:///run/containerd/s/08db355db05b410b176f3c86bd78a8e9507e799fa7dc3f2e7678c7fffab30fba" protocol=ttrpc version=3
Sep 6 09:19:46.383134 systemd[1]: Started cri-containerd-8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444.scope - libcontainer container 8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444.
Sep 6 09:19:46.409787 containerd[1529]: time="2025-09-06T09:19:46.409744618Z" level=info msg="StartContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" returns successfully"
Sep 6 09:19:46.494328 containerd[1529]: time="2025-09-06T09:19:46.494274317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" id:\"6a2c2d0ffbeb7eaf6cedc674780a540d2e414b58c679255fb6006f3faed2df79\" pid:3370 exited_at:{seconds:1757150386 nanos:493892883}"
Sep 6 09:19:46.574216 kubelet[2669]: I0906 09:19:46.574184 2669 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 6 09:19:46.611457 systemd[1]: Created slice kubepods-burstable-pod0aa58d8d_2a5b_4481_b697_bffce025c52a.slice - libcontainer container kubepods-burstable-pod0aa58d8d_2a5b_4481_b697_bffce025c52a.slice.
Sep 6 09:19:46.617068 systemd[1]: Created slice kubepods-burstable-pod778d49e0_3f33_41aa_bbc9_4cbd1eafc7e5.slice - libcontainer container kubepods-burstable-pod778d49e0_3f33_41aa_bbc9_4cbd1eafc7e5.slice.
Sep 6 09:19:46.677124 kubelet[2669]: I0906 09:19:46.677089 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aa58d8d-2a5b-4481-b697-bffce025c52a-config-volume\") pod \"coredns-674b8bbfcf-d5pqn\" (UID: \"0aa58d8d-2a5b-4481-b697-bffce025c52a\") " pod="kube-system/coredns-674b8bbfcf-d5pqn"
Sep 6 09:19:46.677124 kubelet[2669]: I0906 09:19:46.677129 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hblp4\" (UniqueName: \"kubernetes.io/projected/0aa58d8d-2a5b-4481-b697-bffce025c52a-kube-api-access-hblp4\") pod \"coredns-674b8bbfcf-d5pqn\" (UID: \"0aa58d8d-2a5b-4481-b697-bffce025c52a\") " pod="kube-system/coredns-674b8bbfcf-d5pqn"
Sep 6 09:19:46.677124 kubelet[2669]: I0906 09:19:46.677149 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5-config-volume\") pod \"coredns-674b8bbfcf-jrvxj\" (UID: \"778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5\") " pod="kube-system/coredns-674b8bbfcf-jrvxj"
Sep 6 09:19:46.677124 kubelet[2669]: I0906 09:19:46.677167 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7dvc\" (UniqueName: \"kubernetes.io/projected/778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5-kube-api-access-x7dvc\") pod \"coredns-674b8bbfcf-jrvxj\" (UID: \"778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5\") " pod="kube-system/coredns-674b8bbfcf-jrvxj"
Sep 6 09:19:46.914606 kubelet[2669]: E0906 09:19:46.914498 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:46.915260 containerd[1529]: time="2025-09-06T09:19:46.915222443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5pqn,Uid:0aa58d8d-2a5b-4481-b697-bffce025c52a,Namespace:kube-system,Attempt:0,}"
Sep 6 09:19:46.920568 kubelet[2669]: E0906 09:19:46.920529 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:46.921313 containerd[1529]: time="2025-09-06T09:19:46.921219834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jrvxj,Uid:778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5,Namespace:kube-system,Attempt:0,}"
Sep 6 09:19:47.336268 kubelet[2669]: E0906 09:19:47.336139 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:47.357664 kubelet[2669]: I0906 09:19:47.356392 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-67vhw" podStartSLOduration=6.03188652 podStartE2EDuration="13.356375322s" podCreationTimestamp="2025-09-06 09:19:34 +0000 UTC" firstStartedPulling="2025-09-06 09:19:35.353615872 +0000 UTC m=+8.181127930" lastFinishedPulling="2025-09-06 09:19:42.678104674 +0000 UTC m=+15.505616732" observedRunningTime="2025-09-06 09:19:47.354934365 +0000 UTC m=+20.182446383" watchObservedRunningTime="2025-09-06 09:19:47.356375322 +0000 UTC m=+20.183887340"
Sep 6 09:19:48.337966 kubelet[2669]: E0906 09:19:48.337929 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:48.438753 systemd-networkd[1452]: cilium_host: Link UP
Sep 6 09:19:48.438865 systemd-networkd[1452]: cilium_net: Link UP
Sep 6 09:19:48.439026 systemd-networkd[1452]: cilium_host: Gained carrier
Sep 6 09:19:48.439152 systemd-networkd[1452]: cilium_net: Gained carrier
Sep 6 09:19:48.465087 systemd-networkd[1452]: cilium_host: Gained IPv6LL
Sep 6 09:19:48.516719 systemd-networkd[1452]: cilium_vxlan: Link UP
Sep 6 09:19:48.516725 systemd-networkd[1452]: cilium_vxlan: Gained carrier
Sep 6 09:19:48.769980 kernel: NET: Registered PF_ALG protocol family
Sep 6 09:19:49.321998 systemd-networkd[1452]: lxc_health: Link UP
Sep 6 09:19:49.322286 systemd-networkd[1452]: lxc_health: Gained carrier
Sep 6 09:19:49.339715 kubelet[2669]: E0906 09:19:49.339674 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:49.443215 systemd-networkd[1452]: cilium_net: Gained IPv6LL
Sep 6 09:19:49.485973 kernel: eth0: renamed from tmp09b86
Sep 6 09:19:49.493977 kernel: eth0: renamed from tmp9cd35
Sep 6 09:19:49.495174 systemd-networkd[1452]: lxc1efe97eab323: Link UP
Sep 6 09:19:49.497266 systemd-networkd[1452]: lxc129bb6b3f9f7: Link UP
Sep 6 09:19:49.497488 systemd-networkd[1452]: lxc129bb6b3f9f7: Gained carrier
Sep 6 09:19:49.497595 systemd-networkd[1452]: lxc1efe97eab323: Gained carrier
Sep 6 09:19:50.341252 kubelet[2669]: E0906 09:19:50.341207 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:50.403180 systemd-networkd[1452]: cilium_vxlan: Gained IPv6LL
Sep 6 09:19:50.851167 systemd-networkd[1452]: lxc129bb6b3f9f7: Gained IPv6LL
Sep 6 09:19:50.915137 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Sep 6 09:19:51.235138 systemd-networkd[1452]: lxc1efe97eab323: Gained IPv6LL
Sep 6 09:19:51.342761 kubelet[2669]: E0906 09:19:51.342722 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:52.345416 kubelet[2669]: E0906 09:19:52.345373 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:19:53.041985 containerd[1529]: time="2025-09-06T09:19:53.041702417Z" level=info msg="connecting to shim 09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324" address="unix:///run/containerd/s/04c39303636c8915294e52b0cb2e122682d164262cbdb7bfc761efb7c67d0459" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:19:53.041985 containerd[1529]: time="2025-09-06T09:19:53.041900716Z" level=info msg="connecting to shim 9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180" address="unix:///run/containerd/s/d4197e3ce69dcc8ffa650ca9e260f96423cbb8f5a68987d41488cdeab18151bd" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:19:53.076167 systemd[1]: Started cri-containerd-9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180.scope - libcontainer container 9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180.
Sep 6 09:19:53.087421 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 09:19:53.114822 containerd[1529]: time="2025-09-06T09:19:53.114765104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jrvxj,Uid:778d49e0-3f33-41aa-bbc9-4cbd1eafc7e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180\""
Sep 6 09:19:53.116119 systemd[1]: Started cri-containerd-09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324.scope - libcontainer container 09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324.
Sep 6 09:19:53.119382 kubelet[2669]: E0906 09:19:53.119329 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:53.129256 containerd[1529]: time="2025-09-06T09:19:53.129215146Z" level=info msg="CreateContainer within sandbox \"9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 09:19:53.134071 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:19:53.142964 containerd[1529]: time="2025-09-06T09:19:53.142770482Z" level=info msg="Container 290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:19:53.150327 containerd[1529]: time="2025-09-06T09:19:53.150267063Z" level=info msg="CreateContainer within sandbox \"9cd35dc79eae041d6e4e93b10415a6b552496d2d3cdac878fa8a57198c899180\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010\"" Sep 6 09:19:53.151260 containerd[1529]: time="2025-09-06T09:19:53.151222506Z" level=info msg="StartContainer for \"290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010\"" Sep 6 09:19:53.152684 containerd[1529]: time="2025-09-06T09:19:53.152613998Z" level=info msg="connecting to shim 290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010" address="unix:///run/containerd/s/d4197e3ce69dcc8ffa650ca9e260f96423cbb8f5a68987d41488cdeab18151bd" protocol=ttrpc version=3 Sep 6 09:19:53.160906 containerd[1529]: time="2025-09-06T09:19:53.160870764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d5pqn,Uid:0aa58d8d-2a5b-4481-b697-bffce025c52a,Namespace:kube-system,Attempt:0,} returns sandbox id \"09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324\"" Sep 6 
09:19:53.161823 kubelet[2669]: E0906 09:19:53.161783 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:53.168243 containerd[1529]: time="2025-09-06T09:19:53.168189693Z" level=info msg="CreateContainer within sandbox \"09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 09:19:53.183139 systemd[1]: Started cri-containerd-290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010.scope - libcontainer container 290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010. Sep 6 09:19:53.190126 containerd[1529]: time="2025-09-06T09:19:53.189517532Z" level=info msg="Container 644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:19:53.196003 containerd[1529]: time="2025-09-06T09:19:53.195970044Z" level=info msg="CreateContainer within sandbox \"09b86a9a6859cf92b0577f78a0d9e1b7748820a3362c19f55eb8e135be53e324\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db\"" Sep 6 09:19:53.196581 containerd[1529]: time="2025-09-06T09:19:53.196561139Z" level=info msg="StartContainer for \"644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db\"" Sep 6 09:19:53.197690 containerd[1529]: time="2025-09-06T09:19:53.197666306Z" level=info msg="connecting to shim 644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db" address="unix:///run/containerd/s/04c39303636c8915294e52b0cb2e122682d164262cbdb7bfc761efb7c67d0459" protocol=ttrpc version=3 Sep 6 09:19:53.219258 systemd[1]: Started cri-containerd-644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db.scope - libcontainer container 644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db. 
Sep 6 09:19:53.225352 containerd[1529]: time="2025-09-06T09:19:53.225128763Z" level=info msg="StartContainer for \"290b9c06f1a8a10f31208ac6f128ef71ddb6a0e3f8f2c77f8de2183004051010\" returns successfully" Sep 6 09:19:53.255014 containerd[1529]: time="2025-09-06T09:19:53.254840566Z" level=info msg="StartContainer for \"644219454f193da7526fa44ee54af7b0c3ed776089899c920c2f4e42f287a8db\" returns successfully" Sep 6 09:19:53.349701 kubelet[2669]: E0906 09:19:53.349653 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:53.351046 kubelet[2669]: E0906 09:19:53.351022 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:53.361629 kubelet[2669]: I0906 09:19:53.361304 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d5pqn" podStartSLOduration=18.361287264 podStartE2EDuration="18.361287264s" podCreationTimestamp="2025-09-06 09:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:53.361048313 +0000 UTC m=+26.188560371" watchObservedRunningTime="2025-09-06 09:19:53.361287264 +0000 UTC m=+26.188799322" Sep 6 09:19:53.386218 kubelet[2669]: I0906 09:19:53.386162 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jrvxj" podStartSLOduration=18.386144148 podStartE2EDuration="18.386144148s" podCreationTimestamp="2025-09-06 09:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:19:53.38382118 +0000 UTC m=+26.211333238" watchObservedRunningTime="2025-09-06 09:19:53.386144148 +0000 UTC 
m=+26.213656206" Sep 6 09:19:54.023749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871229370.mount: Deactivated successfully. Sep 6 09:19:54.353022 kubelet[2669]: E0906 09:19:54.352912 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:54.353473 kubelet[2669]: E0906 09:19:54.353069 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:55.354544 kubelet[2669]: E0906 09:19:55.354500 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:55.354985 kubelet[2669]: E0906 09:19:55.354792 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:19:57.233558 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:47146.service - OpenSSH per-connection server daemon (10.0.0.1:47146). Sep 6 09:19:57.283682 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 47146 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY Sep 6 09:19:57.284784 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:19:57.289005 systemd-logind[1503]: New session 8 of user core. Sep 6 09:19:57.299211 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 6 09:19:57.438815 sshd[4034]: Connection closed by 10.0.0.1 port 47146 Sep 6 09:19:57.439135 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Sep 6 09:19:57.442497 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:47146.service: Deactivated successfully. 
Sep 6 09:19:57.444602 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 09:19:57.446290 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit.
Sep 6 09:19:57.447624 systemd-logind[1503]: Removed session 8.
Sep 6 09:20:02.455051 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778).
Sep 6 09:20:02.514752 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:02.515848 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:02.519470 systemd-logind[1503]: New session 9 of user core.
Sep 6 09:20:02.528095 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 6 09:20:02.634985 sshd[4055]: Connection closed by 10.0.0.1 port 54778
Sep 6 09:20:02.635467 sshd-session[4052]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:02.638757 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:54778.service: Deactivated successfully.
Sep 6 09:20:02.640391 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 09:20:02.641420 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit.
Sep 6 09:20:02.642312 systemd-logind[1503]: Removed session 9.
Sep 6 09:20:07.649147 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:54788.service - OpenSSH per-connection server daemon (10.0.0.1:54788).
Sep 6 09:20:07.706325 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 54788 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:07.707578 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:07.711305 systemd-logind[1503]: New session 10 of user core.
Sep 6 09:20:07.720113 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 6 09:20:07.830887 sshd[4074]: Connection closed by 10.0.0.1 port 54788
Sep 6 09:20:07.831670 sshd-session[4071]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:07.847330 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:54788.service: Deactivated successfully.
Sep 6 09:20:07.848917 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 09:20:07.850011 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit.
Sep 6 09:20:07.851836 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:54790.service - OpenSSH per-connection server daemon (10.0.0.1:54790).
Sep 6 09:20:07.852930 systemd-logind[1503]: Removed session 10.
Sep 6 09:20:07.906532 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 54790 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:07.906934 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:07.910786 systemd-logind[1503]: New session 11 of user core.
Sep 6 09:20:07.924100 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 6 09:20:08.066354 sshd[4091]: Connection closed by 10.0.0.1 port 54790
Sep 6 09:20:08.067319 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:08.080317 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:54790.service: Deactivated successfully.
Sep 6 09:20:08.083457 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 09:20:08.086175 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Sep 6 09:20:08.089491 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:54798.service - OpenSSH per-connection server daemon (10.0.0.1:54798).
Sep 6 09:20:08.091172 systemd-logind[1503]: Removed session 11.
Sep 6 09:20:08.158518 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 54798 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:08.159599 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:08.163405 systemd-logind[1503]: New session 12 of user core.
Sep 6 09:20:08.174122 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 6 09:20:08.287979 sshd[4105]: Connection closed by 10.0.0.1 port 54798
Sep 6 09:20:08.288304 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:08.291778 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:54798.service: Deactivated successfully.
Sep 6 09:20:08.295788 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 09:20:08.296854 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
Sep 6 09:20:08.298410 systemd-logind[1503]: Removed session 12.
Sep 6 09:20:13.303602 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:38726.service - OpenSSH per-connection server daemon (10.0.0.1:38726).
Sep 6 09:20:13.359534 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 38726 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:13.361060 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:13.364616 systemd-logind[1503]: New session 13 of user core.
Sep 6 09:20:13.372111 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 6 09:20:13.492405 sshd[4122]: Connection closed by 10.0.0.1 port 38726
Sep 6 09:20:13.492672 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:13.496033 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:38726.service: Deactivated successfully.
Sep 6 09:20:13.499386 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 09:20:13.500056 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
Sep 6 09:20:13.501167 systemd-logind[1503]: Removed session 13.
Sep 6 09:20:18.508097 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:38730.service - OpenSSH per-connection server daemon (10.0.0.1:38730).
Sep 6 09:20:18.574320 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 38730 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:18.575673 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:18.580025 systemd-logind[1503]: New session 14 of user core.
Sep 6 09:20:18.586104 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 6 09:20:18.701553 sshd[4140]: Connection closed by 10.0.0.1 port 38730
Sep 6 09:20:18.702451 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:18.713006 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:38730.service: Deactivated successfully.
Sep 6 09:20:18.715510 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 09:20:18.716718 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
Sep 6 09:20:18.719862 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:38738.service - OpenSSH per-connection server daemon (10.0.0.1:38738).
Sep 6 09:20:18.720529 systemd-logind[1503]: Removed session 14.
Sep 6 09:20:18.777333 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 38738 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:18.778516 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:18.782956 systemd-logind[1503]: New session 15 of user core.
Sep 6 09:20:18.794125 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 09:20:19.011574 sshd[4157]: Connection closed by 10.0.0.1 port 38738
Sep 6 09:20:19.012094 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:19.025982 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:38738.service: Deactivated successfully.
Sep 6 09:20:19.027609 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 09:20:19.028519 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
Sep 6 09:20:19.031006 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740).
Sep 6 09:20:19.031627 systemd-logind[1503]: Removed session 15.
Sep 6 09:20:19.092742 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:19.094046 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:19.097758 systemd-logind[1503]: New session 16 of user core.
Sep 6 09:20:19.105126 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 09:20:19.756722 sshd[4172]: Connection closed by 10.0.0.1 port 38740
Sep 6 09:20:19.756990 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:19.771927 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:38740.service: Deactivated successfully.
Sep 6 09:20:19.774910 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 09:20:19.779142 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
Sep 6 09:20:19.781795 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:38752.service - OpenSSH per-connection server daemon (10.0.0.1:38752).
Sep 6 09:20:19.784642 systemd-logind[1503]: Removed session 16.
Sep 6 09:20:19.833819 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 38752 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:19.835251 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:19.839287 systemd-logind[1503]: New session 17 of user core.
Sep 6 09:20:19.851113 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 09:20:20.080073 sshd[4198]: Connection closed by 10.0.0.1 port 38752
Sep 6 09:20:20.081104 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:20.090743 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:38752.service: Deactivated successfully.
Sep 6 09:20:20.094227 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 09:20:20.097183 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
Sep 6 09:20:20.103259 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:35980.service - OpenSSH per-connection server daemon (10.0.0.1:35980).
Sep 6 09:20:20.104393 systemd-logind[1503]: Removed session 17.
Sep 6 09:20:20.156060 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 35980 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:20.157486 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:20.161875 systemd-logind[1503]: New session 18 of user core.
Sep 6 09:20:20.171168 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 09:20:20.284709 sshd[4214]: Connection closed by 10.0.0.1 port 35980
Sep 6 09:20:20.285049 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:20.288688 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:35980.service: Deactivated successfully.
Sep 6 09:20:20.290379 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 09:20:20.291124 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
Sep 6 09:20:20.292262 systemd-logind[1503]: Removed session 18.
Sep 6 09:20:25.300091 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:35986.service - OpenSSH per-connection server daemon (10.0.0.1:35986).
Sep 6 09:20:25.345270 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 35986 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:25.346625 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:25.352295 systemd-logind[1503]: New session 19 of user core.
Sep 6 09:20:25.362127 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 09:20:25.474633 sshd[4235]: Connection closed by 10.0.0.1 port 35986
Sep 6 09:20:25.474979 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:25.478859 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:35986.service: Deactivated successfully.
Sep 6 09:20:25.480560 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 09:20:25.481344 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
Sep 6 09:20:25.482517 systemd-logind[1503]: Removed session 19.
Sep 6 09:20:30.487111 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:51896.service - OpenSSH per-connection server daemon (10.0.0.1:51896).
Sep 6 09:20:30.539278 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 51896 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:30.540524 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:30.544760 systemd-logind[1503]: New session 20 of user core.
Sep 6 09:20:30.552109 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 09:20:30.663367 sshd[4254]: Connection closed by 10.0.0.1 port 51896
Sep 6 09:20:30.663885 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:30.667354 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:51896.service: Deactivated successfully.
Sep 6 09:20:30.669194 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 09:20:30.671406 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
Sep 6 09:20:30.672349 systemd-logind[1503]: Removed session 20.
Sep 6 09:20:35.682182 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:51902.service - OpenSSH per-connection server daemon (10.0.0.1:51902).
Sep 6 09:20:35.734965 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 51902 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:35.736334 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:35.742056 systemd-logind[1503]: New session 21 of user core.
Sep 6 09:20:35.751098 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 09:20:35.874155 sshd[4272]: Connection closed by 10.0.0.1 port 51902
Sep 6 09:20:35.874488 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:35.885179 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:51902.service: Deactivated successfully.
Sep 6 09:20:35.886779 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 09:20:35.888978 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit.
Sep 6 09:20:35.890425 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:51912.service - OpenSSH per-connection server daemon (10.0.0.1:51912).
Sep 6 09:20:35.891818 systemd-logind[1503]: Removed session 21.
Sep 6 09:20:35.958943 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 51912 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:35.960213 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:35.967175 systemd-logind[1503]: New session 22 of user core.
Sep 6 09:20:35.979125 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 09:20:38.585227 containerd[1529]: time="2025-09-06T09:20:38.583003198Z" level=info msg="StopContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" with timeout 30 (s)"
Sep 6 09:20:38.585608 containerd[1529]: time="2025-09-06T09:20:38.585454790Z" level=info msg="Stop container \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" with signal terminated"
Sep 6 09:20:38.618450 systemd[1]: cri-containerd-d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192.scope: Deactivated successfully.
Sep 6 09:20:38.620554 containerd[1529]: time="2025-09-06T09:20:38.620457367Z" level=info msg="received exit event container_id:\"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" id:\"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" pid:3228 exited_at:{seconds:1757150438 nanos:620119921}"
Sep 6 09:20:38.621060 containerd[1529]: time="2025-09-06T09:20:38.621017990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" id:\"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" pid:3228 exited_at:{seconds:1757150438 nanos:620119921}"
Sep 6 09:20:38.640829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192-rootfs.mount: Deactivated successfully.
Sep 6 09:20:38.642825 containerd[1529]: time="2025-09-06T09:20:38.642410745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" id:\"dd08b36cb604ee7cab4cae59ee941e3f8f06b6d91634d65084abd25975bbc590\" pid:4315 exited_at:{seconds:1757150438 nanos:641753612}"
Sep 6 09:20:38.644853 containerd[1529]: time="2025-09-06T09:20:38.644827341Z" level=info msg="StopContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" with timeout 2 (s)"
Sep 6 09:20:38.645112 containerd[1529]: time="2025-09-06T09:20:38.645094754Z" level=info msg="Stop container \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" with signal terminated"
Sep 6 09:20:38.646006 containerd[1529]: time="2025-09-06T09:20:38.645964505Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 09:20:38.652027 systemd-networkd[1452]: lxc_health: Link DOWN
Sep 6 09:20:38.652033 systemd-networkd[1452]: lxc_health: Lost carrier
Sep 6 09:20:38.670315 containerd[1529]: time="2025-09-06T09:20:38.670188854Z" level=info msg="StopContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" returns successfully"
Sep 6 09:20:38.670669 systemd[1]: cri-containerd-8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444.scope: Deactivated successfully.
Sep 6 09:20:38.671208 systemd[1]: cri-containerd-8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444.scope: Consumed 6.109s CPU time, 125.2M memory peak, 914K read from disk, 12.9M written to disk.
Sep 6 09:20:38.674693 containerd[1529]: time="2025-09-06T09:20:38.673967751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" pid:3338 exited_at:{seconds:1757150438 nanos:673680940}"
Sep 6 09:20:38.674693 containerd[1529]: time="2025-09-06T09:20:38.673977590Z" level=info msg="received exit event container_id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" id:\"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" pid:3338 exited_at:{seconds:1757150438 nanos:673680940}"
Sep 6 09:20:38.675872 containerd[1529]: time="2025-09-06T09:20:38.675844321Z" level=info msg="StopPodSandbox for \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\""
Sep 6 09:20:38.695862 containerd[1529]: time="2025-09-06T09:20:38.694284415Z" level=info msg="Container to stop \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.695716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444-rootfs.mount: Deactivated successfully.
Sep 6 09:20:38.704070 systemd[1]: cri-containerd-b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c.scope: Deactivated successfully.
Sep 6 09:20:38.706241 containerd[1529]: time="2025-09-06T09:20:38.706205008Z" level=info msg="StopContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" returns successfully"
Sep 6 09:20:38.706729 containerd[1529]: time="2025-09-06T09:20:38.706705038Z" level=info msg="StopPodSandbox for \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\""
Sep 6 09:20:38.706793 containerd[1529]: time="2025-09-06T09:20:38.706767911Z" level=info msg="Container to stop \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.706793 containerd[1529]: time="2025-09-06T09:20:38.706781510Z" level=info msg="Container to stop \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.706793 containerd[1529]: time="2025-09-06T09:20:38.706790069Z" level=info msg="Container to stop \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.706856 containerd[1529]: time="2025-09-06T09:20:38.706798868Z" level=info msg="Container to stop \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.706856 containerd[1529]: time="2025-09-06T09:20:38.706806747Z" level=info msg="Container to stop \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 09:20:38.709456 containerd[1529]: time="2025-09-06T09:20:38.709387446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" id:\"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" pid:2899 exit_status:137 exited_at:{seconds:1757150438 nanos:709160949}"
Sep 6 09:20:38.714454 systemd[1]: cri-containerd-db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6.scope: Deactivated successfully.
Sep 6 09:20:38.738708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6-rootfs.mount: Deactivated successfully.
Sep 6 09:20:38.745616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c-rootfs.mount: Deactivated successfully.
Sep 6 09:20:38.747664 containerd[1529]: time="2025-09-06T09:20:38.747624496Z" level=info msg="TearDown network for sandbox \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" successfully"
Sep 6 09:20:38.747664 containerd[1529]: time="2025-09-06T09:20:38.747658893Z" level=info msg="StopPodSandbox for \"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" returns successfully"
Sep 6 09:20:38.748433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c-shm.mount: Deactivated successfully.
Sep 6 09:20:38.749704 containerd[1529]: time="2025-09-06T09:20:38.749553981Z" level=info msg="shim disconnected" id=b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c namespace=k8s.io
Sep 6 09:20:38.749757 containerd[1529]: time="2025-09-06T09:20:38.749709245Z" level=warning msg="cleaning up after shim disconnected" id=b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c namespace=k8s.io
Sep 6 09:20:38.749757 containerd[1529]: time="2025-09-06T09:20:38.749744322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 09:20:38.750708 containerd[1529]: time="2025-09-06T09:20:38.750146201Z" level=info msg="received exit event sandbox_id:\"b5674958ce2fe2cd383358f1f6632c17d546f49ffeb39121c686102676c0df7c\" exit_status:137 exited_at:{seconds:1757150438 nanos:709160949}"
Sep 6 09:20:38.752083 containerd[1529]: time="2025-09-06T09:20:38.752055808Z" level=info msg="shim disconnected" id=db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6 namespace=k8s.io
Sep 6 09:20:38.752158 containerd[1529]: time="2025-09-06T09:20:38.752079445Z" level=warning msg="cleaning up after shim disconnected" id=db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6 namespace=k8s.io
Sep 6 09:20:38.752158 containerd[1529]: time="2025-09-06T09:20:38.752154138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 09:20:38.776925 containerd[1529]: time="2025-09-06T09:20:38.776839859Z" level=info msg="received exit event sandbox_id:\"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" exit_status:137 exited_at:{seconds:1757150438 nanos:714493130}"
Sep 6 09:20:38.777056 containerd[1529]: time="2025-09-06T09:20:38.777021641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" id:\"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" pid:2827 exit_status:137 exited_at:{seconds:1757150438 nanos:714493130}"
Sep 6 09:20:38.777130 containerd[1529]: time="2025-09-06T09:20:38.777098513Z" level=info msg="TearDown network for sandbox \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" successfully"
Sep 6 09:20:38.777158 containerd[1529]: time="2025-09-06T09:20:38.777130550Z" level=info msg="StopPodSandbox for \"db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6\" returns successfully"
Sep 6 09:20:38.904770 kubelet[2669]: I0906 09:20:38.904716 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skzpr\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-kube-api-access-skzpr\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") "
Sep 6 09:20:38.904770 kubelet[2669]: I0906 09:20:38.904767 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-etc-cni-netd\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") "
Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904788 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-bpf-maps\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") "
Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904801 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-run\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") "
Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904827 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-config-path\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904843 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-xtables-lock\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904859 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k4wt\" (UniqueName: \"kubernetes.io/projected/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-kube-api-access-6k4wt\") pod \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\" (UID: \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\") " Sep 6 09:20:38.905150 kubelet[2669]: I0906 09:20:38.904876 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hostproc\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 09:20:38.904904 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-kernel\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 09:20:38.904920 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-net\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 
09:20:38.904937 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-cgroup\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 09:20:38.904979 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-cilium-config-path\") pod \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\" (UID: \"3d9be6f7-164c-4c97-b5a0-be48ed86ad4e\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 09:20:38.905019 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-lib-modules\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905291 kubelet[2669]: I0906 09:20:38.905045 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0cfec9c-81a3-46b8-aa38-9dac657802e9-clustermesh-secrets\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905416 kubelet[2669]: I0906 09:20:38.905060 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cni-path\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.905416 kubelet[2669]: I0906 09:20:38.905077 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hubble-tls\") pod \"f0cfec9c-81a3-46b8-aa38-9dac657802e9\" (UID: 
\"f0cfec9c-81a3-46b8-aa38-9dac657802e9\") " Sep 6 09:20:38.906116 kubelet[2669]: I0906 09:20:38.906076 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.906156 kubelet[2669]: I0906 09:20:38.906094 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908382 kubelet[2669]: I0906 09:20:38.908149 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d9be6f7-164c-4c97-b5a0-be48ed86ad4e" (UID: "3d9be6f7-164c-4c97-b5a0-be48ed86ad4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 09:20:38.908382 kubelet[2669]: I0906 09:20:38.908172 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 09:20:38.908382 kubelet[2669]: I0906 09:20:38.908231 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908382 kubelet[2669]: I0906 09:20:38.908233 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908532 kubelet[2669]: I0906 09:20:38.908254 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908532 kubelet[2669]: I0906 09:20:38.908271 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908532 kubelet[2669]: I0906 09:20:38.908289 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908532 kubelet[2669]: I0906 09:20:38.908320 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908874 kubelet[2669]: I0906 09:20:38.908847 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.908984 kubelet[2669]: I0906 09:20:38.908969 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 09:20:38.909491 kubelet[2669]: I0906 09:20:38.909453 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-kube-api-access-skzpr" (OuterVolumeSpecName: "kube-api-access-skzpr") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "kube-api-access-skzpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 09:20:38.910022 kubelet[2669]: I0906 09:20:38.910000 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 09:20:38.910387 kubelet[2669]: I0906 09:20:38.910362 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-kube-api-access-6k4wt" (OuterVolumeSpecName: "kube-api-access-6k4wt") pod "3d9be6f7-164c-4c97-b5a0-be48ed86ad4e" (UID: "3d9be6f7-164c-4c97-b5a0-be48ed86ad4e"). InnerVolumeSpecName "kube-api-access-6k4wt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 09:20:38.910694 kubelet[2669]: I0906 09:20:38.910660 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cfec9c-81a3-46b8-aa38-9dac657802e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f0cfec9c-81a3-46b8-aa38-9dac657802e9" (UID: "f0cfec9c-81a3-46b8-aa38-9dac657802e9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006043 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006080 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006089 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006098 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006107 2669 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006116 2669 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0cfec9c-81a3-46b8-aa38-9dac657802e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 09:20:39.006123 2669 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006221 kubelet[2669]: I0906 
09:20:39.006132 2669 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006139 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skzpr\" (UniqueName: \"kubernetes.io/projected/f0cfec9c-81a3-46b8-aa38-9dac657802e9-kube-api-access-skzpr\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006147 2669 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006154 2669 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006162 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006170 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0cfec9c-81a3-46b8-aa38-9dac657802e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006177 2669 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006186 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6k4wt\" (UniqueName: 
\"kubernetes.io/projected/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e-kube-api-access-6k4wt\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.006508 kubelet[2669]: I0906 09:20:39.006193 2669 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0cfec9c-81a3-46b8-aa38-9dac657802e9-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 09:20:39.261267 systemd[1]: Removed slice kubepods-besteffort-pod3d9be6f7_164c_4c97_b5a0_be48ed86ad4e.slice - libcontainer container kubepods-besteffort-pod3d9be6f7_164c_4c97_b5a0_be48ed86ad4e.slice. Sep 6 09:20:39.270623 systemd[1]: Removed slice kubepods-burstable-podf0cfec9c_81a3_46b8_aa38_9dac657802e9.slice - libcontainer container kubepods-burstable-podf0cfec9c_81a3_46b8_aa38_9dac657802e9.slice. Sep 6 09:20:39.270712 systemd[1]: kubepods-burstable-podf0cfec9c_81a3_46b8_aa38_9dac657802e9.slice: Consumed 6.194s CPU time, 125.5M memory peak, 922K read from disk, 12.9M written to disk. Sep 6 09:20:39.457404 kubelet[2669]: I0906 09:20:39.457255 2669 scope.go:117] "RemoveContainer" containerID="d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192" Sep 6 09:20:39.462759 containerd[1529]: time="2025-09-06T09:20:39.462567413Z" level=info msg="RemoveContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\"" Sep 6 09:20:39.472693 containerd[1529]: time="2025-09-06T09:20:39.472561338Z" level=info msg="RemoveContainer for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" returns successfully" Sep 6 09:20:39.474530 kubelet[2669]: I0906 09:20:39.473915 2669 scope.go:117] "RemoveContainer" containerID="d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192" Sep 6 09:20:39.474672 containerd[1529]: time="2025-09-06T09:20:39.474269335Z" level=error msg="ContainerStatus for \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\": not found" Sep 6 09:20:39.482472 kubelet[2669]: E0906 09:20:39.482248 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\": not found" containerID="d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192" Sep 6 09:20:39.482472 kubelet[2669]: I0906 09:20:39.482311 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192"} err="failed to get container status \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\": rpc error: code = NotFound desc = an error occurred when try to find container \"d649bbcf35127117a319d307929aa9bee9c95a65e5554b631fa71c434dc63192\": not found" Sep 6 09:20:39.482472 kubelet[2669]: I0906 09:20:39.482406 2669 scope.go:117] "RemoveContainer" containerID="8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444" Sep 6 09:20:39.486025 containerd[1529]: time="2025-09-06T09:20:39.485093621Z" level=info msg="RemoveContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\"" Sep 6 09:20:39.489398 containerd[1529]: time="2025-09-06T09:20:39.489360853Z" level=info msg="RemoveContainer for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" returns successfully" Sep 6 09:20:39.489540 kubelet[2669]: I0906 09:20:39.489517 2669 scope.go:117] "RemoveContainer" containerID="837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831" Sep 6 09:20:39.491647 containerd[1529]: time="2025-09-06T09:20:39.491621357Z" level=info msg="RemoveContainer for \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\"" Sep 6 09:20:39.497209 containerd[1529]: time="2025-09-06T09:20:39.497156908Z" level=info msg="RemoveContainer for 
\"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" returns successfully" Sep 6 09:20:39.497675 kubelet[2669]: I0906 09:20:39.497544 2669 scope.go:117] "RemoveContainer" containerID="77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1" Sep 6 09:20:39.499660 containerd[1529]: time="2025-09-06T09:20:39.499626552Z" level=info msg="RemoveContainer for \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\"" Sep 6 09:20:39.503039 containerd[1529]: time="2025-09-06T09:20:39.503003509Z" level=info msg="RemoveContainer for \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" returns successfully" Sep 6 09:20:39.503438 kubelet[2669]: I0906 09:20:39.503295 2669 scope.go:117] "RemoveContainer" containerID="5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda" Sep 6 09:20:39.504702 containerd[1529]: time="2025-09-06T09:20:39.504650592Z" level=info msg="RemoveContainer for \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\"" Sep 6 09:20:39.507443 containerd[1529]: time="2025-09-06T09:20:39.507399289Z" level=info msg="RemoveContainer for \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" returns successfully" Sep 6 09:20:39.507755 kubelet[2669]: I0906 09:20:39.507730 2669 scope.go:117] "RemoveContainer" containerID="16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75" Sep 6 09:20:39.509097 containerd[1529]: time="2025-09-06T09:20:39.509061330Z" level=info msg="RemoveContainer for \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\"" Sep 6 09:20:39.511732 containerd[1529]: time="2025-09-06T09:20:39.511651883Z" level=info msg="RemoveContainer for \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" returns successfully" Sep 6 09:20:39.511840 kubelet[2669]: I0906 09:20:39.511794 2669 scope.go:117] "RemoveContainer" containerID="8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444" Sep 6 09:20:39.512545 containerd[1529]: 
time="2025-09-06T09:20:39.511992090Z" level=error msg="ContainerStatus for \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\": not found" Sep 6 09:20:39.512818 kubelet[2669]: E0906 09:20:39.512786 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\": not found" containerID="8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444" Sep 6 09:20:39.512920 kubelet[2669]: I0906 09:20:39.512894 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444"} err="failed to get container status \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e7488ace8453caae3227b6e137c121d385c2b79b0813987712269a1147c3444\": not found" Sep 6 09:20:39.513003 kubelet[2669]: I0906 09:20:39.512990 2669 scope.go:117] "RemoveContainer" containerID="837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831" Sep 6 09:20:39.513255 containerd[1529]: time="2025-09-06T09:20:39.513215173Z" level=error msg="ContainerStatus for \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\": not found" Sep 6 09:20:39.514059 kubelet[2669]: E0906 09:20:39.514031 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\": not 
found" containerID="837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831" Sep 6 09:20:39.514147 kubelet[2669]: I0906 09:20:39.514064 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831"} err="failed to get container status \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\": rpc error: code = NotFound desc = an error occurred when try to find container \"837e0430d7ecb36e297ae974c93bc52bd74eaf3a9898e4ad1208611f556c5831\": not found" Sep 6 09:20:39.514147 kubelet[2669]: I0906 09:20:39.514085 2669 scope.go:117] "RemoveContainer" containerID="77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1" Sep 6 09:20:39.514272 containerd[1529]: time="2025-09-06T09:20:39.514242635Z" level=error msg="ContainerStatus for \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\": not found" Sep 6 09:20:39.514364 kubelet[2669]: E0906 09:20:39.514332 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\": not found" containerID="77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1" Sep 6 09:20:39.514392 kubelet[2669]: I0906 09:20:39.514367 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1"} err="failed to get container status \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\": rpc error: code = NotFound desc = an error occurred when try to find container \"77af0641650efc46c43dfbb2a1c44167a9b8e0a91b265b85c96d5eeb03997db1\": not found" Sep 6 
09:20:39.514392 kubelet[2669]: I0906 09:20:39.514379 2669 scope.go:117] "RemoveContainer" containerID="5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda" Sep 6 09:20:39.514530 containerd[1529]: time="2025-09-06T09:20:39.514485772Z" level=error msg="ContainerStatus for \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\": not found" Sep 6 09:20:39.514628 kubelet[2669]: E0906 09:20:39.514611 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\": not found" containerID="5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda" Sep 6 09:20:39.514666 kubelet[2669]: I0906 09:20:39.514649 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda"} err="failed to get container status \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fda08a840bb1500654c1c0d275355861a7baada345821ee0d5b746076594fda\": not found" Sep 6 09:20:39.514691 kubelet[2669]: I0906 09:20:39.514669 2669 scope.go:117] "RemoveContainer" containerID="16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75" Sep 6 09:20:39.514828 containerd[1529]: time="2025-09-06T09:20:39.514808741Z" level=error msg="ContainerStatus for \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\": not found" Sep 6 09:20:39.514918 kubelet[2669]: E0906 09:20:39.514902 2669 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\": not found" containerID="16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75" Sep 6 09:20:39.514964 kubelet[2669]: I0906 09:20:39.514923 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75"} err="failed to get container status \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\": rpc error: code = NotFound desc = an error occurred when try to find container \"16e629d64c072ee8b1f662b3589464c4bdc82be9c40db03a9a6bb7955d30dd75\": not found" Sep 6 09:20:39.640137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db3fca9316aa505b995d81ce18fdb970195480371231e7681e6d6ef7952e39e6-shm.mount: Deactivated successfully. Sep 6 09:20:39.640229 systemd[1]: var-lib-kubelet-pods-3d9be6f7\x2d164c\x2d4c97\x2db5a0\x2dbe48ed86ad4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6k4wt.mount: Deactivated successfully. Sep 6 09:20:39.640280 systemd[1]: var-lib-kubelet-pods-f0cfec9c\x2d81a3\x2d46b8\x2daa38\x2d9dac657802e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskzpr.mount: Deactivated successfully. Sep 6 09:20:39.640339 systemd[1]: var-lib-kubelet-pods-f0cfec9c\x2d81a3\x2d46b8\x2daa38\x2d9dac657802e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 09:20:39.640395 systemd[1]: var-lib-kubelet-pods-f0cfec9c\x2d81a3\x2d46b8\x2daa38\x2d9dac657802e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 09:20:40.527224 sshd[4289]: Connection closed by 10.0.0.1 port 51912
Sep 6 09:20:40.527463 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:40.540095 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:51912.service: Deactivated successfully.
Sep 6 09:20:40.541545 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 09:20:40.541758 systemd[1]: session-22.scope: Consumed 1.914s CPU time, 25.2M memory peak.
Sep 6 09:20:40.542210 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
Sep 6 09:20:40.544030 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:46594.service - OpenSSH per-connection server daemon (10.0.0.1:46594).
Sep 6 09:20:40.544777 systemd-logind[1503]: Removed session 22.
Sep 6 09:20:40.596564 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 46594 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:40.597917 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:40.601414 systemd-logind[1503]: New session 23 of user core.
Sep 6 09:20:40.612088 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 6 09:20:41.255479 kubelet[2669]: E0906 09:20:41.255430 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:41.258299 kubelet[2669]: I0906 09:20:41.258254 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9be6f7-164c-4c97-b5a0-be48ed86ad4e" path="/var/lib/kubelet/pods/3d9be6f7-164c-4c97-b5a0-be48ed86ad4e/volumes"
Sep 6 09:20:41.258898 kubelet[2669]: I0906 09:20:41.258876 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cfec9c-81a3-46b8-aa38-9dac657802e9" path="/var/lib/kubelet/pods/f0cfec9c-81a3-46b8-aa38-9dac657802e9/volumes"
Sep 6 09:20:41.781666 sshd[4440]: Connection closed by 10.0.0.1 port 46594
Sep 6 09:20:41.782079 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:41.795109 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:46594.service: Deactivated successfully.
Sep 6 09:20:41.797153 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 09:20:41.799834 systemd[1]: session-23.scope: Consumed 1.092s CPU time, 26.2M memory peak.
Sep 6 09:20:41.802234 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
Sep 6 09:20:41.808235 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:46596.service - OpenSSH per-connection server daemon (10.0.0.1:46596).
Sep 6 09:20:41.808788 systemd-logind[1503]: Removed session 23.
Sep 6 09:20:41.827373 systemd[1]: Created slice kubepods-burstable-podd52fa226_9543_4f68_8dbb_e5dd64d2d0e4.slice - libcontainer container kubepods-burstable-podd52fa226_9543_4f68_8dbb_e5dd64d2d0e4.slice.
Sep 6 09:20:41.888914 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 46596 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:41.890317 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:41.894154 systemd-logind[1503]: New session 24 of user core.
Sep 6 09:20:41.906170 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 6 09:20:41.920111 kubelet[2669]: I0906 09:20:41.920057 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwbqw\" (UniqueName: \"kubernetes.io/projected/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-kube-api-access-xwbqw\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920111 kubelet[2669]: I0906 09:20:41.920109 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-cilium-cgroup\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920135 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-cni-path\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920151 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-hubble-tls\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920168 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-clustermesh-secrets\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920184 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-host-proc-sys-kernel\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920198 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-lib-modules\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920228 kubelet[2669]: I0906 09:20:41.920212 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-host-proc-sys-net\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920226 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-cilium-run\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920241 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-hostproc\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920257 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-etc-cni-netd\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920273 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-xtables-lock\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920287 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-cilium-config-path\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920368 kubelet[2669]: I0906 09:20:41.920303 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-bpf-maps\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.920481 kubelet[2669]: I0906 09:20:41.920320 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d52fa226-9543-4f68-8dbb-e5dd64d2d0e4-cilium-ipsec-secrets\") pod \"cilium-747p2\" (UID: \"d52fa226-9543-4f68-8dbb-e5dd64d2d0e4\") " pod="kube-system/cilium-747p2"
Sep 6 09:20:41.955013 sshd[4455]: Connection closed by 10.0.0.1 port 46596
Sep 6 09:20:41.956192 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:41.969356 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:46596.service: Deactivated successfully.
Sep 6 09:20:41.971877 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 09:20:41.972787 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit.
Sep 6 09:20:41.975606 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:46604.service - OpenSSH per-connection server daemon (10.0.0.1:46604).
Sep 6 09:20:41.977098 systemd-logind[1503]: Removed session 24.
Sep 6 09:20:42.034009 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:X47YspvX07HWoTlAIflLPnuZCDJotA6YWjawbej7zpY
Sep 6 09:20:42.037835 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:20:42.044593 systemd-logind[1503]: New session 25 of user core.
Sep 6 09:20:42.056143 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 6 09:20:42.131199 kubelet[2669]: E0906 09:20:42.131131 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:42.133084 containerd[1529]: time="2025-09-06T09:20:42.133040828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-747p2,Uid:d52fa226-9543-4f68-8dbb-e5dd64d2d0e4,Namespace:kube-system,Attempt:0,}"
Sep 6 09:20:42.247736 containerd[1529]: time="2025-09-06T09:20:42.247672579Z" level=info msg="connecting to shim 17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" namespace=k8s.io protocol=ttrpc version=3
Sep 6 09:20:42.270108 systemd[1]: Started cri-containerd-17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047.scope - libcontainer container 17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047.
Sep 6 09:20:42.303608 containerd[1529]: time="2025-09-06T09:20:42.303507652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-747p2,Uid:d52fa226-9543-4f68-8dbb-e5dd64d2d0e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\""
Sep 6 09:20:42.304970 kubelet[2669]: E0906 09:20:42.304490 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:42.311330 kubelet[2669]: E0906 09:20:42.311296 2669 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 09:20:42.329703 containerd[1529]: time="2025-09-06T09:20:42.329654130Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 09:20:42.385047 containerd[1529]: time="2025-09-06T09:20:42.384997443Z" level=info msg="Container 37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:20:42.434606 containerd[1529]: time="2025-09-06T09:20:42.434562776Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\""
Sep 6 09:20:42.435149 containerd[1529]: time="2025-09-06T09:20:42.435128411Z" level=info msg="StartContainer for \"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\""
Sep 6 09:20:42.436694 containerd[1529]: time="2025-09-06T09:20:42.436669608Z" level=info msg="connecting to shim 37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" protocol=ttrpc version=3
Sep 6 09:20:42.470159 systemd[1]: Started cri-containerd-37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1.scope - libcontainer container 37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1.
Sep 6 09:20:42.496318 containerd[1529]: time="2025-09-06T09:20:42.496087036Z" level=info msg="StartContainer for \"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\" returns successfully"
Sep 6 09:20:42.503966 systemd[1]: cri-containerd-37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1.scope: Deactivated successfully.
Sep 6 09:20:42.505355 containerd[1529]: time="2025-09-06T09:20:42.505322301Z" level=info msg="received exit event container_id:\"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\" id:\"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\" pid:4533 exited_at:{seconds:1757150442 nanos:505072960}"
Sep 6 09:20:42.505745 containerd[1529]: time="2025-09-06T09:20:42.505717269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\" id:\"37661f58fc31ae00c1585b7b063b78e70b79d8ed89dff742ba0a6612af3144f1\" pid:4533 exited_at:{seconds:1757150442 nanos:505072960}"
Sep 6 09:20:43.464958 kubelet[2669]: E0906 09:20:43.464918 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:43.471452 containerd[1529]: time="2025-09-06T09:20:43.471418332Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 09:20:43.481790 containerd[1529]: time="2025-09-06T09:20:43.481664327Z" level=info msg="Container 45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:20:43.490796 containerd[1529]: time="2025-09-06T09:20:43.490746089Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\""
Sep 6 09:20:43.491305 containerd[1529]: time="2025-09-06T09:20:43.491282849Z" level=info msg="StartContainer for \"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\""
Sep 6 09:20:43.492633 containerd[1529]: time="2025-09-06T09:20:43.492499038Z" level=info msg="connecting to shim 45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" protocol=ttrpc version=3
Sep 6 09:20:43.518143 systemd[1]: Started cri-containerd-45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71.scope - libcontainer container 45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71.
Sep 6 09:20:43.541536 containerd[1529]: time="2025-09-06T09:20:43.541500139Z" level=info msg="StartContainer for \"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\" returns successfully"
Sep 6 09:20:43.547625 systemd[1]: cri-containerd-45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71.scope: Deactivated successfully.
Sep 6 09:20:43.548487 containerd[1529]: time="2025-09-06T09:20:43.548445941Z" level=info msg="received exit event container_id:\"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\" id:\"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\" pid:4579 exited_at:{seconds:1757150443 nanos:548116046}"
Sep 6 09:20:43.548667 containerd[1529]: time="2025-09-06T09:20:43.548495537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\" id:\"45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71\" pid:4579 exited_at:{seconds:1757150443 nanos:548116046}"
Sep 6 09:20:43.566039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45368872e43934fc1fed69a614076f43eb0b97ad3e07c400bfd762e23733bb71-rootfs.mount: Deactivated successfully.
Sep 6 09:20:44.468193 kubelet[2669]: E0906 09:20:44.468037 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:44.476009 containerd[1529]: time="2025-09-06T09:20:44.474426404Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 09:20:44.487184 containerd[1529]: time="2025-09-06T09:20:44.486106388Z" level=info msg="Container 6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:20:44.487734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913917711.mount: Deactivated successfully.
Sep 6 09:20:44.496557 containerd[1529]: time="2025-09-06T09:20:44.496505862Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\""
Sep 6 09:20:44.497976 containerd[1529]: time="2025-09-06T09:20:44.497039545Z" level=info msg="StartContainer for \"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\""
Sep 6 09:20:44.498357 containerd[1529]: time="2025-09-06T09:20:44.498330134Z" level=info msg="connecting to shim 6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" protocol=ttrpc version=3
Sep 6 09:20:44.513128 systemd[1]: Started cri-containerd-6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330.scope - libcontainer container 6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330.
Sep 6 09:20:44.550480 systemd[1]: cri-containerd-6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330.scope: Deactivated successfully.
Sep 6 09:20:44.553329 containerd[1529]: time="2025-09-06T09:20:44.553231500Z" level=info msg="received exit event container_id:\"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\" id:\"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\" pid:4623 exited_at:{seconds:1757150444 nanos:553080751}"
Sep 6 09:20:44.553329 containerd[1529]: time="2025-09-06T09:20:44.553295376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\" id:\"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\" pid:4623 exited_at:{seconds:1757150444 nanos:553080751}"
Sep 6 09:20:44.553537 containerd[1529]: time="2025-09-06T09:20:44.553514961Z" level=info msg="StartContainer for \"6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330\" returns successfully"
Sep 6 09:20:44.571663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a00812b97c3d11045d8872c91af05c9d28feff22de421eaae748461853dd330-rootfs.mount: Deactivated successfully.
Sep 6 09:20:45.473094 kubelet[2669]: E0906 09:20:45.473023 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:45.477305 containerd[1529]: time="2025-09-06T09:20:45.477264908Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 09:20:45.488490 containerd[1529]: time="2025-09-06T09:20:45.488451739Z" level=info msg="Container 856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:20:45.489483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626713625.mount: Deactivated successfully.
Sep 6 09:20:45.496016 containerd[1529]: time="2025-09-06T09:20:45.495979049Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\""
Sep 6 09:20:45.496554 containerd[1529]: time="2025-09-06T09:20:45.496529733Z" level=info msg="StartContainer for \"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\""
Sep 6 09:20:45.497299 containerd[1529]: time="2025-09-06T09:20:45.497261285Z" level=info msg="connecting to shim 856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" protocol=ttrpc version=3
Sep 6 09:20:45.515091 systemd[1]: Started cri-containerd-856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884.scope - libcontainer container 856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884.
Sep 6 09:20:45.536990 systemd[1]: cri-containerd-856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884.scope: Deactivated successfully.
Sep 6 09:20:45.541646 containerd[1529]: time="2025-09-06T09:20:45.541613835Z" level=info msg="received exit event container_id:\"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\" id:\"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\" pid:4662 exited_at:{seconds:1757150445 nanos:541470244}"
Sep 6 09:20:45.541827 containerd[1529]: time="2025-09-06T09:20:45.541735027Z" level=info msg="StartContainer for \"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\" returns successfully"
Sep 6 09:20:45.542020 containerd[1529]: time="2025-09-06T09:20:45.541980531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\" id:\"856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884\" pid:4662 exited_at:{seconds:1757150445 nanos:541470244}"
Sep 6 09:20:45.558706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-856f4419d4ec1374d52e02ebc136171f32bd50e0d5e7accf69375002226f1884-rootfs.mount: Deactivated successfully.
Sep 6 09:20:46.477084 kubelet[2669]: E0906 09:20:46.476976 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:46.484126 containerd[1529]: time="2025-09-06T09:20:46.484066364Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 09:20:46.496133 containerd[1529]: time="2025-09-06T09:20:46.496086236Z" level=info msg="Container d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:20:46.505061 containerd[1529]: time="2025-09-06T09:20:46.505023494Z" level=info msg="CreateContainer within sandbox \"17a3c65baa740e0c8bd6afef5b876b8f802fd997c4532918f430f0d4d8131047\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\""
Sep 6 09:20:46.505672 containerd[1529]: time="2025-09-06T09:20:46.505567821Z" level=info msg="StartContainer for \"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\""
Sep 6 09:20:46.506567 containerd[1529]: time="2025-09-06T09:20:46.506544362Z" level=info msg="connecting to shim d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9" address="unix:///run/containerd/s/3030975b29c31b3bdd1bdadc92156d021cc3cfd52e470af6bca54d90c9d4d1dd" protocol=ttrpc version=3
Sep 6 09:20:46.528095 systemd[1]: Started cri-containerd-d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9.scope - libcontainer container d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9.
Sep 6 09:20:46.558025 containerd[1529]: time="2025-09-06T09:20:46.557984123Z" level=info msg="StartContainer for \"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" returns successfully"
Sep 6 09:20:46.607477 containerd[1529]: time="2025-09-06T09:20:46.607440924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" id:\"2d53d8315bf6cdf6c9d477674c14406e6841420d5f1e7d5c221cc607779cec33\" pid:4730 exited_at:{seconds:1757150446 nanos:606270995}"
Sep 6 09:20:46.806972 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 6 09:20:47.484280 kubelet[2669]: E0906 09:20:47.484249 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:47.501404 kubelet[2669]: I0906 09:20:47.501141 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-747p2" podStartSLOduration=6.501126529 podStartE2EDuration="6.501126529s" podCreationTimestamp="2025-09-06 09:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:20:47.500522643 +0000 UTC m=+80.328034701" watchObservedRunningTime="2025-09-06 09:20:47.501126529 +0000 UTC m=+80.328638587"
Sep 6 09:20:48.430351 containerd[1529]: time="2025-09-06T09:20:48.430281883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" id:\"03d25a808b543167165ec9f0f9c08ce4d9cb145beac02cc02853d27733ecade7\" pid:4874 exit_status:1 exited_at:{seconds:1757150448 nanos:429775749}"
Sep 6 09:20:48.486765 kubelet[2669]: E0906 09:20:48.486719 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:49.487972 kubelet[2669]: E0906 09:20:49.487924 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:49.605749 systemd-networkd[1452]: lxc_health: Link UP
Sep 6 09:20:49.613972 systemd-networkd[1452]: lxc_health: Gained carrier
Sep 6 09:20:50.494198 kubelet[2669]: E0906 09:20:50.494160 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:50.572975 containerd[1529]: time="2025-09-06T09:20:50.572801898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" id:\"7e629e9cf5ca99736e811f1dd3053d716fef51b01cfaaa9a1710dce99c2c3345\" pid:5261 exited_at:{seconds:1757150450 nanos:572488192}"
Sep 6 09:20:51.494247 kubelet[2669]: E0906 09:20:51.494215 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:51.587822 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Sep 6 09:20:52.678700 containerd[1529]: time="2025-09-06T09:20:52.678640228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" id:\"d8fd9c31ccec591bba29bc7fa1bc00b4678bdea9e59f3977ef60a5743f404e5f\" pid:5295 exited_at:{seconds:1757150452 nanos:678258602}"
Sep 6 09:20:53.256031 kubelet[2669]: E0906 09:20:53.255995 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:54.255494 kubelet[2669]: E0906 09:20:54.255453 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:20:54.831271 containerd[1529]: time="2025-09-06T09:20:54.831231768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6c0194d3c3a2d5f4bcf67d148a29d2c52e742ea17e81d44c9514a195d7a82c9\" id:\"05cb577218131cb218d891ed1e1d9a2b8600278fb09409667bc82bbfb732a89c\" pid:5328 exited_at:{seconds:1757150454 nanos:830658824}"
Sep 6 09:20:54.835512 sshd[4469]: Connection closed by 10.0.0.1 port 46604
Sep 6 09:20:54.836117 sshd-session[4462]: pam_unix(sshd:session): session closed for user core
Sep 6 09:20:54.840147 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:46604.service: Deactivated successfully.
Sep 6 09:20:54.841927 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 09:20:54.842704 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit.
Sep 6 09:20:54.843889 systemd-logind[1503]: Removed session 25.